Job Title: Data Engineer
Location: Remote / Flexible (India only)
Type: C2H (Contract-to-Hire) – 6-month contract with possible extension
Experience: 4–6 years

About the Role:
We are seeking a skilled Data Engineer with hands-on experience in DBT, Airflow, and Snowflake to design and maintain scalable data pipelines and dimensional data models. You will transform and model data, orchestrate workflows, and manage CI/CD pipelines to support enterprise analytics and reporting. The ideal candidate is comfortable working with multiple data sources, including APIs, MySQL, and SQL Server, and delivering production-ready data solutions.

Key Responsibilities:
- Design and implement dimensional data models for enterprise reporting and analytics.
- Develop and manage DBT models for data transformation and business logic.
- Build and maintain Airflow DAGs for end-to-end data pipeline orchestration (see the sketch after this posting).
- Manage and optimize data workflows in Snowflake Data Warehouse, including ETL processes.
- Write complex SQL queries for data processing and performance optimization.
- Integrate and process data from multiple sources, including APIs, MySQL, and SQL Server.
- Implement and maintain CI/CD pipelines for data projects using Git and DevOps tools.
- Utilize Python for automation and pipeline management within Airflow.
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.

Required Skills & Experience:
- 4–6 years of experience as a Data Engineer or similar role.
- Hands-on experience in DBT development and maintenance for data transformation.
- Practical expertise in Apache Airflow, including DAG design and management.
- Strong working knowledge of Snowflake or similar cloud-based data warehouses.
- Advanced SQL skills for data extraction, transformation, and modeling.
- Experience with multiple data sources, including APIs, MySQL, and SQL Server.
- Familiarity with CI/CD pipelines, Git, and DevOps best practices.
- Proficiency in Python, especially for automation and workflow orchestration.
- Strong analytical mindset and ability to translate business requirements into technical solutions.

Preferred Qualities:
- Excellent problem-solving and analytical skills.
- Effective communication and collaboration skills.
- Experience in fast-paced, agile environments.

Key Skills to Highlight: DBT, Airflow, Snowflake Data Warehouse, Git, CI/CD, Python, SQL, API Integration
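For context on the DBT-plus-Airflow stack this role centers on, here is a minimal, illustrative Airflow DAG that triggers dbt transformations against a Snowflake target. The dag_id, schedule, and project path are assumptions made for the sketch, not details from the posting.

```python
# Minimal Airflow DAG orchestrating dbt run/test steps.
# dag_id, schedule, and the dbt project directory are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run dbt against the Snowflake target defined in profiles.yml;
    # the project directory is an assumed layout.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    # Validate the transformed models with dbt's built-in tests.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    dbt_run >> dbt_test
```

In practice the same pattern extends to extraction tasks (APIs, MySQL, SQL Server) upstream of the dbt step; this sketch only shows the orchestration shape.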
Senior Data Engineer – Azure, Airflow & PySpark

We're hiring a Senior Data Engineer (6–9 years of experience) to lead the modernization of our data ecosystem from legacy platforms (Azure Synapse, SQL Server) to a next-gen Microsoft Fabric lakehouse. If you bring deep Azure expertise, strong Airflow orchestration skills, and at least 4 years of hands-on PySpark experience, this role is for you.

Key Responsibilities
- Drive the migration of data pipelines and stored procedures into Microsoft Fabric using PySpark, Delta Lake, and OneLake (see the sketch after this posting).
- Build and orchestrate scalable workflows with Apache Airflow and Azure Data Factory (ADF).
- Redesign and optimize legacy ADF pipelines for seamless Fabric integration.
- Develop star schema (dimensional) models to improve reporting and analytics performance.
- Enforce data governance, security, version control (Git), and CI/CD best practices.
- Partner with business stakeholders, architects, and analysts to deliver high-quality, trusted data solutions.
- Support Power BI integration, semantic modeling, and performance tuning in a Fabric-first environment.

Requirements
- 4+ years of hands-on experience with PySpark for data engineering and transformations.
- Proven expertise in Azure Synapse, SQL Server, ADF, and Microsoft Fabric.
- Strong background in Apache Airflow, Delta Lake, OneLake, T-SQL, and KQL.
- Solid experience with dimensional modeling, governance frameworks, and Azure data security.
- Proficiency in DevOps pipelines, Git, and automated deployment practices.

Nice to Have
- Experience modernizing ETL/ELT workloads into Fabric and lakehouse architectures.
- Microsoft Azure certification (e.g., DP-203).
- Advanced Power BI skills (DAX, MDX, DirectLake, Tabular Editor, ALM Toolkit, DAX Studio, etc.).
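As an illustration of the migration work described above, here is a hedged PySpark sketch that copies a legacy SQL Server table into a partitioned Delta table. The JDBC connection, table name, and lakehouse path are invented placeholders, not details from the posting.

```python
# Illustrative sketch only: land one SQL Server table as a Delta table,
# the kind of incremental step involved in moving Synapse/SQL Server
# workloads toward a Fabric/OneLake lakehouse. All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sqlserver_to_delta").getOrCreate()

# Read the legacy table over JDBC (hypothetical server and credentials).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-sql.example.com;databaseName=sales")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Light conformance before landing: typed date column used for partitioning.
orders_clean = orders.withColumn("order_date", F.to_date("order_date"))

# Write as a partitioned Delta table (assumed lakehouse-relative path).
(
    orders_clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("Tables/orders")
)
```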
🚀 We’re Hiring: Analytics Engineer (Part-Time | Remote | India-Based)

Shiv Kanti Infosystems is looking for a highly skilled Analytics Engineer to design and implement executive-level dashboards that deliver real-time operational insights for our C-suite leadership team. This opportunity is open only to candidates based in India.

🔑 Key Responsibilities
✔ Collaborate with leadership to translate business needs into technical solutions
✔ Build real-time pipelines from Kafka to TimescaleDB (a minimal sketch follows this posting)
✔ Design transformations for event streams into structured datasets
✔ Develop Grafana dashboards for executive insights
✔ Optimize TimescaleDB for performance & scalability
✔ Ensure data accuracy, monitoring, and reliability

🎯 Desired Skill Set
Must-Have:
- Strong expertise in Apache Kafka (ingestion, stream processing, connectors)
- SQL + TimescaleDB/PostgreSQL (time-series data modeling)
- Hands-on with Grafana dashboards
- ETL/ELT pipeline development
- Programming (Python/Java/Scala)

Nice-to-Have:
- Apache Flink / Kafka Streams / Spark Streaming
- Docker/Kubernetes & CI/CD
- Monitoring & logging (Prometheus, ELK stack)
- Cloud platforms (AWS/GCP/Azure)

📌 Job Details
Location: India (Remote)
Seniority: Mid-Senior Level
Type: Part-time
Experience: 5–8 Years
Qualification: B.Tech/M.Tech/BCA/MCA
Shift: Afternoon IST (3:30 PM – 12:30 AM, subject to project needs)

If you are passionate about real-time analytics and executive reporting, we’d love to connect.
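To make the Kafka-to-TimescaleDB responsibility concrete, here is a minimal ingestion sketch using the kafka-python and psycopg2 libraries. The topic, table schema, and connection details are assumptions for illustration, not project specifics.

```python
# Minimal sketch of a Kafka -> TimescaleDB ingestion loop.
# Topic name, table schema, and connection settings are invented placeholders.
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "ops_events",                                   # hypothetical topic
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

conn = psycopg2.connect("dbname=metrics user=ingest password=<secret> host=tsdb")
conn.autocommit = True

# ops_events is assumed to already exist as a TimescaleDB hypertable, e.g.:
#   CREATE TABLE ops_events (ts TIMESTAMPTZ NOT NULL, source TEXT, value DOUBLE PRECISION);
#   SELECT create_hypertable('ops_events', 'ts');
with conn.cursor() as cur:
    for message in consumer:
        event = message.value
        cur.execute(
            "INSERT INTO ops_events (ts, source, value) VALUES (%s, %s, %s)",
            (event["timestamp"], event["source"], event["value"]),
        )
```

A production pipeline would batch inserts and add error handling and monitoring; the sketch only shows the data flow Grafana dashboards would sit on top of.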
We’re looking for a talented Python–Flask Developer to design and develop ML-driven web applications that transform machine learning outputs and enterprise data into intuitive, actionable interfaces. The ideal candidate will have strong expertise in Python (Flask) for backend web development, experience with RESTful APIs and data integration, and familiarity with how ML models are consumed within web tools.

🔍 Position Overview
As a Python–Flask Developer, you’ll build interactive, data-powered web applications that connect machine learning models, analytics, and enterprise data systems. You’ll collaborate closely with data science and BI teams to turn predictive insights into powerful user-facing tools.

🧩 Key Responsibilities
- Design, develop, and deploy Flask-based web applications integrating ML outputs and enterprise data (a minimal sketch follows this posting).
- Build and consume RESTful APIs for analytics, insights, and prediction delivery.
- Connect apps to Azure, Delta Lake, or distributed data sources for real-time data access.
- Develop responsive interfaces using HTML, CSS, and Jinja2 templating.
- Collaborate with Data Science and BI teams to expose ML model outputs.
- Build interactive workflows supporting forecasting, tracking, and data exploration.
- Ensure secure coding and compliance with enterprise deployment standards.

🧠 Desired Skill Set
Must-Have:
- 3+ years of hands-on experience in Python and Flask web application development.
- Strong understanding of REST API design, integration, and testing.
- Experience handling data from Azure Data Lake, Delta Lake, or SQL-based systems.
- Proficiency with HTML, CSS, and templating frameworks (Jinja2).
- Exposure to embedding or consuming ML models in web applications.
- Familiarity with Git and deployment best practices.

Good to Have:
- Experience with Microsoft Fabric, PySpark, or dataflow pipelines.
- Exposure to LLM-powered apps, vector search, or RAG frameworks.
- Understanding of Azure ML, AI Foundry, or similar ML ecosystems.
- Knowledge of RBAC, OAuth, or managed identity for secure API access.
- Basic front-end skills: Bootstrap, JavaScript.

💼 Job Specifications
Job Title: Python–Flask Developer
Department: Software Development & Engineering
Seniority Level: Mid-Level
Employment Type: Full-time
Experience Required: Minimum 3 years (Python–Flask)
Qualification: B.Tech / M.Tech / BCA / MCA
Shift Timing: Afternoon (1:30 PM – 10:30 PM IST)*
*Shift may vary based on project requirements.
Location: Haryana, India (Hybrid)
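Purely as an illustration of the kind of application this role describes, the sketch below shows a small Flask endpoint that serves predictions from a pre-trained model over a REST API. The model artifact, feature names, and route are hypothetical.

```python
# Minimal Flask app exposing an ML model's predictions as a REST endpoint.
# The model file, input fields, and route are placeholders for illustration.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical pre-trained artifact, e.g. a scikit-learn pipeline shipped
# by the data science team.
model = joblib.load("models/forecast_model.pkl")


@app.route("/api/predict", methods=["POST"])
def predict():
    # Expect a JSON payload with the assumed feature fields.
    payload = request.get_json(force=True)
    features = [[payload["units_sold"], payload["price"], payload["promo_flag"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})


if __name__ == "__main__":
    app.run(debug=True)
```

In the enterprise setting described above, the same endpoint would typically sit behind authenticated access (OAuth, managed identity) and read supporting data from Azure or Delta Lake rather than from the request alone.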
We’re looking for a skilled Data Engineer to join our growing offshore team! If you’re passionate about building reliable, scalable, and efficient data systems, this is a great opportunity to work on cutting-edge cloud projects with Azure and Big Data technologies.

🔹 Key Responsibilities
- Design and maintain scalable data pipelines and ETL workflows.
- Collaborate with data scientists and analysts to develop optimized data models.
- Implement data quality checks and validation processes (see the sketch after this posting).
- Integrate data from diverse sources (APIs, flat files, databases).
- Optimize performance and ensure data compliance and security.
- Maintain documentation for data processes and pipelines.

🔹 Technical Skills
- 3+ years of experience in data engineering.
- Hands-on with Azure Data Factory, Azure Databricks, Azure SQL, Synapse, and Azure Storage.
- Proficient in Python (PySpark) and SQL (Spark SQL / T-SQL).
- Experience with Big Data tools (Spark, Hive) and data formats (CSV, JSON, Parquet, Delta).
- Knowledge of Git / Azure DevOps and distributed systems concepts.

🔹 Soft Skills
- Strong analytical and problem-solving mindset.
- Excellent communication and teamwork abilities.
- Comfortable working in a fast-paced, agile environment.
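As a small example of the data-quality checks mentioned in the responsibilities, here is an illustrative PySpark validation step. The storage path, key column, and thresholds are assumptions made for the sketch.

```python
# Illustrative data-quality check: read a Parquet dataset, count null keys
# and duplicates, and fail fast if thresholds are exceeded.
# Path, column names, and thresholds are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Simple validation rules; a real pipeline would also log results to a
# monitoring table or alerting system.
if null_keys > 0 or duplicates > total * 0.01:
    raise ValueError(
        f"Data quality check failed: {null_keys} null keys, {duplicates} duplicates"
    )

print(f"Validated {total} rows: {null_keys} null keys, {duplicates} duplicates")
```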