Job Title: Data Engineer
Location: Remote / Flexible (India only)
Type: C2H (Contract-to-Hire) – 6-month contract with possible extension
Experience: 4–6 years

About the Role:
We are seeking a skilled Data Engineer with hands-on experience in DBT, Airflow, and Snowflake to design and maintain scalable data pipelines and dimensional data models. You will transform and model data, orchestrate workflows, and manage CI/CD pipelines to support enterprise analytics and reporting. The ideal candidate is comfortable working with multiple data sources, including APIs, MySQL, and SQL Server, and delivering production-ready data solutions.

Key Responsibilities:
- Design and implement dimensional data models for enterprise reporting and analytics.
- Develop and manage DBT models for data transformation and business logic.
- Build and maintain Airflow DAGs for end-to-end data pipeline orchestration (see the sketch after this posting).
- Manage and optimize data workflows in Snowflake Data Warehouse, including ETL processes.
- Write complex SQL queries for data processing and performance optimization.
- Integrate and process data from multiple sources, including APIs, MySQL, and SQL Server.
- Implement and maintain CI/CD pipelines for data projects using Git and DevOps tools.
- Utilize Python for automation and pipeline management within Airflow.
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.

Required Skills & Experience:
- 4–6 years of experience as a Data Engineer or in a similar role.
- Hands-on experience in DBT development and maintenance for data transformation.
- Practical expertise in Apache Airflow, including DAG design and management.
- Strong working knowledge of Snowflake or similar cloud-based data warehouses.
- Advanced SQL skills for data extraction, transformation, and modeling.
- Experience with multiple data sources, including APIs, MySQL, and SQL Server.
- Familiarity with CI/CD pipelines, Git, and DevOps best practices.
- Proficiency in Python, especially for automation and workflow orchestration.
- Strong analytical mindset and the ability to translate business requirements into technical solutions.

Preferred Qualities:
- Excellent problem-solving and analytical skills.
- Effective communication and collaboration skills.
- Experience in fast-paced, agile environments.

Key Skills to Highlight: DBT, Airflow, Snowflake Data Warehouse, Git, CI/CD, Python, SQL, API Integration
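For context on the DBT-plus-Airflow workflow this role centers on, here is a minimal sketch of an Airflow DAG that runs dbt transformations against Snowflake. The DAG name, schedule, and project path are hypothetical placeholders, not details from the posting; the actual pipelines would depend on the team's setup.

```python
# Illustrative sketch only: an Airflow DAG that orchestrates dbt runs and tests.
# The DAG id, schedule, and project directory below are assumed placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the dbt project; dbt's Snowflake profile handles the warehouse connection.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",  # assumed path
    )

    # Validate the transformed models with dbt tests after the run succeeds.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    dbt_run >> dbt_test
```

In practice the same pattern extends to extraction tasks for the API, MySQL, and SQL Server sources mentioned above, each as an upstream task feeding the dbt run.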
Senior Data Engineer Azure, Airflow & PySpark We're hiring a Senior Data Engineer (69 years of experience) to lead the modernization of our data ecosystem from legacy platforms (Azure Synapse, SQL Server) to a next-gen Microsoft Fabric lakehouse. If you bring deep Azure expertise, strong Airflow orchestration skills, and at least 4 years of hands-on PySpark experience, this role is for you. Key Responsibilities Drive the migration of data pipelines and stored procedures into Microsoft Fabric using PySpark, Delta Lake, and OneLake. Build and orchestrate scalable workflows with Apache Airflow and Azure Data Factory (ADF). Redesign and optimize legacy ADF pipelines for seamless Fabric integration. Develop star schema (dimensional) models to improve reporting and analytics performance. Enforce data governance, security, version control (Git), and CI/CD best practices. Partner with business stakeholders, architects, and analysts to deliver high-quality, trusted data solutions. Support Power BI integration, semantic modeling, and performance tuning in a Fabric-first environment. Requirements 4+ years hands-on experience with PySpark for data engineering and transformations. Proven expertise in Azure Synapse, SQL Server, ADF, and Microsoft Fabric. Strong background in Apache Airflow, Delta Lake, OneLake, T-SQL, and KQL. Solid experience with dimensional modeling, governance frameworks, and Azure data security. Proficiency in DevOps pipelines, Git, and automated deployment practices. Nice to Have Experience modernizing ETL/ELT workloads into Fabric and lakehouse architectures. Microsoft Azure certification (e.g., DP-203). Advanced Power BI skills (DAX, MDX, DirectLake, Tabular Editor, ALM Toolkit, DAX Studio, etc.).
🚀 We’re Hiring: Analytics Engineer (Part-Time | Remote | India-Based)

Shiv Kanti Infosystems is looking for a highly skilled Analytics Engineer to design and implement executive-level dashboards that deliver real-time operational insights for our C-suite leadership team. This opportunity is open only to candidates based in India.

🔑 Key Responsibilities
✔ Collaborate with leadership to translate business needs into technical solutions
✔ Build real-time pipelines from Kafka to TimescaleDB (a minimal sketch follows this posting)
✔ Design transformations for event streams into structured datasets
✔ Develop Grafana dashboards for executive insights
✔ Optimize TimescaleDB for performance & scalability
✔ Ensure data accuracy, monitoring, and reliability

🎯 Desired Skill Set
Must-Have:
- Strong expertise in Apache Kafka (ingestion, stream processing, connectors)
- SQL + TimescaleDB/PostgreSQL (time-series data modeling)
- Hands-on with Grafana dashboards
- ETL/ELT pipeline development
- Programming (Python/Java/Scala)

Nice-to-Have:
- Apache Flink / Kafka Streams / Spark Streaming
- Docker/Kubernetes & CI/CD
- Monitoring & logging (Prometheus, ELK stack)
- Cloud platforms (AWS/GCP/Azure)

📌 Job Details
Location: India (Remote)
Seniority: Mid-Senior Level
Type: Part-time
Experience: 5–8 Years
Qualification: B.Tech/M.Tech/BCA/MCA
Shift: Afternoon IST (3:30 PM – 12:30 AM, subject to project needs)

If you are passionate about real-time analytics and executive reporting, we’d love to connect.
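As a rough illustration of the Kafka-to-TimescaleDB pipeline this role builds, here is a minimal Python sketch using kafka-python and psycopg2. The topic name, table schema, and connection settings are hypothetical placeholders; a production pipeline would batch inserts, handle bad records, and expose monitoring metrics.

```python
# Illustrative sketch only: consume JSON events from Kafka and insert them into
# a TimescaleDB hypertable. Topic, table, and credentials are assumed placeholders.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ops-events",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect("dbname=metrics user=analytics password=<secret> host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # Assumes an existing hypertable: events(ts TIMESTAMPTZ, source TEXT, value DOUBLE PRECISION)
    for message in consumer:
        event = message.value
        cur.execute(
            "INSERT INTO events (ts, source, value) VALUES (%s, %s, %s)",
            (event["ts"], event["source"], event["value"]),
        )
```

Grafana would then query the `events` hypertable directly (for example with time_bucket aggregations) to power the executive dashboards mentioned above.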