Job Description: The IT Desktop/Infrastructure Support Specialist is responsible for delivering high-quality technical support for desktops, laptops, peripherals, and critical infrastructure systems across the organization. This role ensures smooth and secure operations by diagnosing and resolving hardware, software, and network-related issues while also contributing to infrastructure projects and system improvements. The ideal candidate will have hands-on experience in SuiteQL, scripting, DevOps practices, Azure cloud services, and network administration.
Hi,

Position: Sr. GCP Data Engineer
Location: Work From Home

About the Company:
Epik Solutions is a global technology company that embraces creativity and diversity, using technology to inspire and implement solutions that meet our customers' needs. Capabilities include:
- Digital Transformation: We help clients reimagine and capture possibilities while strengthening the performance of their existing digital assets. Our team brings ideas to life and builds organizations' capabilities to deliver their best outcomes on an ongoing basis.
- Workforce Transformation: We partner with companies that are seeking to hone their business strategies and amplify performance. Epik Solutions helps optimize workforce accessibility and capacity to produce real business results.

Job Description:
As a GCP Data Engineer, you will design, develop, and maintain data solutions on the Google Cloud Platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Databricks, Python, SQL, PySpark/Scala, and Informatica will be essential for the following key responsibilities.

Key Responsibilities:
- Designing and developing data pipelines: Design and implement scalable, efficient data pipelines using GCP-native services (e.g., Cloud Composer, Dataflow, BigQuery) and tools such as Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading (ETL/ELT) processes (see the pipeline sketch after the qualifications list).
- Data modeling and database design: Develop data models and schema designs to support efficient data storage and analytics using BigQuery, Cloud Storage, or other GCP-compatible storage solutions.
- Data integration and orchestration: Orchestrate and schedule complex data workflows using Cloud Composer (Apache Airflow) or similar orchestration tools, and manage end-to-end data integration across cloud and on-premises systems (see the DAG sketch after the qualifications list).
- Data quality and governance: Implement data quality checks, validation rules, and governance processes to ensure data accuracy, integrity, and compliance with organizational standards and external regulations.
- Performance optimization: Optimize pipelines and queries to improve performance and reduce processing time, including tuning Spark jobs and SQL queries and leveraging caching or parallel processing in GCP.
- Monitoring and troubleshooting: Monitor data pipeline performance using Google Cloud's operations suite (formerly Stackdriver) or other monitoring tools; identify bottlenecks and troubleshoot ingestion, transformation, and loading issues.
- Documentation and collaboration: Maintain clear, comprehensive documentation for data flows, ETL logic, and pipeline configurations, and collaborate closely with data scientists, business analysts, and product owners to understand requirements and deliver data engineering solutions.

Skills and Qualifications:
- 5+ years of experience in a Data Engineer role with exposure to large-scale data processing.
- Strong hands-on experience with Google Cloud Platform (GCP), particularly BigQuery, Cloud Storage, Dataflow, and Cloud Composer.
- Proficiency in Python and/or Scala, with a strong grasp of PySpark.
- Experience working with Databricks in a cloud environment.
- Solid experience building and maintaining big data pipelines, architectures, and data sets.
- Strong knowledge of Informatica for ETL/ELT processes.
- Proven track record of manipulating, processing, and extracting value from large-scale, unstructured datasets.
- Working knowledge of stream processing and scalable data stores (e.g., Kafka, Pub/Sub, BigQuery).
- Solid understanding of data modeling concepts and best practices in both OLTP and OLAP systems.
- Familiarity with data quality frameworks, governance policies, and compliance standards.
- Skill in performance tuning, job optimization, and cost-efficient cloud architecture design.
- Excellent communication and collaboration skills for cross-functional and client-facing work.
- Bachelor's degree in Computer Science, Information Systems, or a related field (Mathematics, Engineering, etc.).
- Bonus: Experience with distributed computing frameworks such as Hadoop and Spark.
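To illustrate the pipeline responsibility above, here is a minimal PySpark sketch of one ETL step: read raw files from Cloud Storage, apply a simple transformation, and load the result into BigQuery. The bucket, dataset, and table names are hypothetical placeholders, and the spark-bigquery connector is assumed to be available on the cluster; this is a sketch, not a prescribed implementation.

# Minimal PySpark ETL sketch: GCS -> transform -> BigQuery.
# All resource names below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Ingest: raw order events landed in a Cloud Storage bucket (placeholder path)
raw = spark.read.option("header", True).csv("gs://example-raw-bucket/orders/*.csv")

# Transform: cast types, drop malformed rows, add a load timestamp
clean = (
    raw.withColumn("order_total", F.col("order_total").cast("double"))
       .dropna(subset=["order_id", "order_total"])
       .withColumn("loaded_at", F.current_timestamp())
)

# Load: write to a BigQuery table, staging through a temporary GCS bucket
# (assumes the spark-bigquery connector is on the classpath)
(clean.write.format("bigquery")
      .option("table", "example_dataset.orders_clean")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .mode("append")
      .save())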
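The orchestration responsibility could look like the following Cloud Composer (Apache Airflow) DAG sketch, which submits the PySpark job above to an existing Dataproc cluster and then builds a reporting table in BigQuery. Project, cluster, and table names are illustrative placeholders, not a production configuration.

# Minimal Cloud Composer (Airflow) DAG sketch: run the Spark ETL, then aggregate in BigQuery.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

with DAG(
    dag_id="orders_daily_etl_sketch",
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Submit the PySpark ETL script to a Dataproc cluster (placeholder names)
    run_spark_etl = DataprocSubmitJobOperator(
        task_id="run_spark_etl",
        region="us-central1",
        project_id="example-project",
        job={
            "placement": {"cluster_name": "example-cluster"},
            "pyspark_job": {"main_python_file_uri": "gs://example-code-bucket/orders_etl.py"},
        },
    )

    # Aggregate the cleaned table into a daily reporting table inside BigQuery
    build_report = BigQueryInsertJobOperator(
        task_id="build_report",
        configuration={
            "query": {
                "query": (
                    "SELECT DATE(loaded_at) AS day, SUM(order_total) AS revenue "
                    "FROM example_dataset.orders_clean GROUP BY day"
                ),
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "example_dataset",
                    "tableId": "orders_daily_revenue",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )

    run_spark_etl >> build_report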