Job Title: Senior Data Engineer (Databricks & Airflow)
Location: Remote
Experience Level: 6–7 Years
Employment Type: Full-time

Required Qualifications:
- 6–7 years of experience in data engineering, with at least 3 years working with Apache Airflow and Databricks in production environments.
- Strong proficiency in Python and SQL.
- Experience with Spark (PySpark/Scala), preferably in a Databricks environment.
- Experience building and managing data pipelines on AWS, Azure, or GCP.
- Solid understanding of data lake, data warehouse, and data mesh architectures.
- Familiarity with modern data formats such as Parquet, Avro, and Delta Lake.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Strong understanding of data quality, observability, and governance best practices.
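As a minimal sketch of the Airflow-plus-Databricks orchestration this role centers on: a DAG that submits a Databricks notebook run through the Databricks provider. It assumes Airflow 2.4+, the `apache-airflow-providers-databricks` package, and a configured `databricks_default` connection; the DAG name, cluster spec, and notebook path are hypothetical placeholders, not anything prescribed by the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    # Submit a one-off run of a Databricks notebook on an ephemeral job cluster.
    transform = DatabricksSubmitRunOperator(
        task_id="transform_to_delta",
        databricks_conn_id="databricks_default",  # assumed Airflow connection
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/pipelines/transform_sales"},  # hypothetical path
    )
```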
Job Title: Azure Databricks Engineer
Experience: 4+ Years

Required Skills:
- 4+ years of experience in Data Engineering.
- Strong hands-on experience with Azure Databricks and PySpark.
- Good understanding of Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure Synapse.
- Strong SQL skills and experience with large-scale data processing.
- Experience with version control systems (Git), CI/CD pipelines, and Agile methodology.
- Knowledge of Delta Lake, Lakehouse architecture, and distributed computing concepts.

Preferred Skills:
- Experience with Airflow, Power BI, or machine learning pipelines.
- Familiarity with DevOps tools for automation and deployment in Azure.
- Azure certifications (e.g., DP-203) are a plus.
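A minimal sketch of the PySpark-on-ADLS work this posting describes: read raw CSVs from an ADLS Gen2 container, clean them, and land a curated Delta table. The storage account, container names, and column names are all assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the session already exists; getOrCreate() simply returns it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 paths: a raw zone of ingested CSVs and a curated Delta zone.
raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/orders/"

orders = spark.read.option("header", True).csv(raw_path)

cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date"))  # assumed column name
    .dropDuplicates(["order_id"])                        # assumed business key
)

# Land the curated output as a Delta table, partitioned for downstream reads.
cleaned.write.format("delta").mode("overwrite").partitionBy("order_date").save(curated_path)
```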
Job Title: Machine Learning Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role:
We are seeking a highly skilled and motivated Machine Learning Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively.

Key Responsibilities:
- Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks.
- Collaborate with data scientists to transform prototypes into scalable, production-ready models.
- Deploy, monitor, and maintain ML pipelines in production environments.
- Perform data preprocessing, feature engineering, and feature selection on structured and unstructured data.
- Implement model performance evaluation metrics and improve accuracy through iterative tuning.
- Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage the model lifecycle.
- Maintain clear documentation and collaborate cross-functionally across teams.
- Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 2–5 years of experience in ML model development and deployment.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, and NumPy.
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis.
- Experience building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models.
- Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
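As a minimal sketch of the MLflow pipeline experience this posting asks for: train a scikit-learn classifier and log parameters, metrics, and the fitted model to MLflow's default local tracking store (`./mlruns`). The dataset and hyperparameters are illustrative only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    # Record the run so experiments stay comparable across iterations.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # stores the fitted model as an artifact
```

Runs logged this way can be browsed with `mlflow ui`, which is typically the first step before wiring the same calls into a Kubeflow or Airflow pipeline.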
Job Title: AI/ML Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role:
We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively.

Key Responsibilities:
- Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks.
- Collaborate with data scientists to transform prototypes into scalable, production-ready models.
- Deploy, monitor, and maintain ML pipelines in production environments.
- Perform data preprocessing, feature engineering, and feature selection on structured and unstructured data.
- Implement model performance evaluation metrics and improve accuracy through iterative tuning.
- Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage the model lifecycle.
- Maintain clear documentation and collaborate cross-functionally across teams.
- Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 2–5 years of experience in ML model development and deployment.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, and NumPy.
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis.
- Experience building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models.
- Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
- Design, deploy, and manage Azure infrastructure, including virtual machines, storage accounts, virtual networks, and other resources.
- Assist teams by deploying applications to AKS clusters using containerization technologies such as Docker, Kubernetes, and Azure Container Registry.
- Familiarity with the Azure CLI, and the ability to use PowerShell to scan Azure resources, apply modifications, and generate reports or exports.
- Setting up a two- or three-tier application on Azure: VMs, web apps, load balancers, proxies, etc.
- Well versed in security: AD, Managed Identities (MI), Service Principals (SPN), and firewalls.
- Networking: NSGs, VNets, private endpoints, ExpressRoute, Bastion, etc.
- Familiarity with a scripting language such as Python for automation.
- Leveraging Terraform (or Bicep) to automate infrastructure deployment.
- Cost tracking, analysis, reporting, and management at the resource-group level.
- Experience with Azure DevOps.
- Experience with Azure Monitor.
- Strong hands-on experience with ADF, Linked Services/Integration Runtimes (self-hosted/managed), Logic Apps, Service Bus, Databricks, and SQL Server.
- Strong understanding of Python, Spark, and SQL (nice to have).
- Ability to work in a fast-paced environment, as we have tight SLAs for tickets.
- Self-driven, with an exploratory mindset, as the work requires a good amount of research (within and outside the application).
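The resource-scanning and reporting duties above have a direct Python analogue via the Azure SDK. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-resource` packages and credentials already available (e.g., from `az login` or a managed identity); the subscription ID is a placeholder.

```python
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Count resources per resource group and per type, then print a simple report.
by_group: Counter = Counter()
by_type: Counter = Counter()
for res in client.resources.list():
    # Resource IDs look like /subscriptions/<id>/resourceGroups/<rg>/providers/...
    rg = res.id.split("/")[4]
    by_group[rg] += 1
    by_type[res.type] += 1

print("Resources per resource group:")
for rg, n in by_group.most_common():
    print(f"  {rg}: {n}")

print("Top resource types:")
for rtype, n in by_type.most_common(10):
    print(f"  {rtype}: {n}")
```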
Job Title: AI/ML Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role:
We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively.

Key Responsibilities:
- Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks.
- Collaborate with data scientists to transform prototypes into scalable, production-ready models.
- Deploy, monitor, and maintain ML pipelines in production environments.
- Perform data preprocessing, feature engineering, and feature selection on structured and unstructured data.
- Implement model performance evaluation metrics and improve accuracy through iterative tuning.
- Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage the model lifecycle.
- Maintain clear documentation and collaborate cross-functionally across teams.
- Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in ML model development and deployment.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, and NumPy.
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis.
- Experience building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models.
- Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
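The preprocessing and feature-engineering responsibility in this posting typically maps to a scikit-learn Pipeline. A minimal sketch with an entirely hypothetical dataset and column names, combining imputation, scaling, and one-hot encoding in front of a classifier so the same transformations apply at training and inference time:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical structured dataset with numeric and categorical features.
df = pd.DataFrame({
    "age": [34, 45, 29, 52, None, 41],
    "income": [52_000, 81_000, 40_000, 99_000, 61_000, 70_000],
    "segment": ["a", "b", "a", "c", "b", "a"],
    "churned": [0, 1, 0, 1, 0, 1],
})

numeric = ["age", "income"]
categorical = ["segment"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[numeric + categorical], df["churned"])
print("training accuracy:", model.score(df[numeric + categorical], df["churned"]))
```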
Job Title: Databricks Engineer
Location: Remote
Experience Level: 4–5 Years
Employment Type: Full-time

Required Qualifications:
- 6–7 years of experience in data engineering, with at least 3 years working with Databricks in production environments.
- Strong proficiency in Python and SQL.
- Experience with Spark (PySpark/Scala), preferably in a Databricks environment.
- Experience building and managing data pipelines on AWS, Azure, or GCP.
- Solid understanding of data lake, data warehouse, and data mesh architectures.
- Familiarity with modern data formats such as Parquet, Avro, and Delta Lake.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Strong understanding of data quality, observability, and governance best practices.
Job Title: Engineering Manager – Big Data, Azure & Databricks
Location: Remote
Employment Type: Full-time
Department: Data Engineering / Technology
Experience: 8+ years (3+ years in a managerial role)

About the Role:
We are seeking a highly experienced and strategic Engineering Manager with deep expertise in Big Data platforms, Microsoft Azure, and Databricks. The ideal candidate will lead a team of talented data engineers to design and deliver scalable data pipelines and analytics platforms, ensuring high performance, reliability, and security in a cloud-native environment.

Key Responsibilities:
- Lead, mentor, and grow a team of data engineers working on large-scale distributed data systems.
- Architect and oversee the development of end-to-end data solutions using Azure Data Services and Databricks.
- Collaborate with cross-functional teams including data science, analytics, product, and business stakeholders to understand requirements and deliver impactful data products.
- Drive best practices in data engineering, coding standards, version control, CI/CD, and monitoring.
- Ensure high data quality, governance, and compliance with internal and external policies.
- Optimize performance and cost efficiency of data infrastructure in the cloud.
- Stay current with industry trends and emerging technologies, and apply them to improve system architecture and team capabilities.

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software/data engineering, with at least 3 years in a leadership or managerial role.
- Proven hands-on experience with:
  - Big Data ecosystems (Spark, Hive, Hadoop)
  - Azure Cloud Services (Data Lake, Data Factory, Synapse Analytics, Blob Storage)
  - Databricks (including Delta Lake, MLflow, Unity Catalog)
- Strong programming experience in Python, Scala, or Java.
- Deep understanding of data modeling, ETL/ELT processes, and performance tuning.
- Experience managing Agile teams and delivering complex projects on time.
- Excellent problem-solving, leadership, and communication skills.

Preferred Qualifications:
- Certification as an Azure Data Engineer or Azure Solutions Architect.
- Experience with data governance, security, and compliance (e.g., GDPR, HIPAA).
- Familiarity with ML/AI workflows and collaboration with data science teams.
Job Title: Senior Databricks Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role:
We are looking for an experienced Senior Databricks SME with a strong background in data engineering to help build and optimize scalable, high-performance data solutions. The ideal candidate has hands-on experience in Databricks production environments and a deep understanding of modern data architecture. You'll work with cross-functional teams to create robust data pipelines and ensure the reliability, quality, and observability of data across platforms.

Key Responsibilities:
- Design, build, and manage large-scale data pipelines and ETL/ELT workflows in Databricks using PySpark/Scala.
- Develop scalable data solutions using Python and SQL across diverse cloud environments (AWS, Azure, or GCP).
- Implement and optimize data lakes, data warehouses, and data mesh architectures using modern storage formats like Parquet, Avro, and Delta Lake.
- Ensure data governance, observability, and quality across all stages of the data lifecycle.
- Collaborate with data architects, analysts, and DevOps teams to drive architecture design, pipeline performance, and deployment practices.
- Optionally, utilize Docker and Kubernetes for containerized data engineering workflows.

Required Skills:
- 7+ years of experience in data engineering, including work in Databricks production environments.
- Strong hands-on expertise in Python, SQL, and Apache Spark (PySpark/Scala).
- Experience working with cloud platforms (AWS, Azure, or GCP) and building cloud-native data pipelines.
- Deep understanding of data lake, data warehouse, and data mesh principles.
- Proficiency with file formats like Parquet, Avro, and Delta Lake.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes) is a plus.
- Solid grasp of data quality, observability, and governance best practices.
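A minimal sketch of the data-quality gating this role emphasizes: a few PySpark assertions run against a curated Delta table before downstream consumers see it. The table path, key, and column names are assumptions; in production these checks would usually live in a framework such as Great Expectations or Delta Live Tables expectations.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("/mnt/curated/orders")  # assumed mount path

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,  # assumed key
    "amounts_positive": df.filter(F.col("amount") <= 0).count() == 0,    # assumed column
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Fail the pipeline loudly so the orchestrator (e.g., Airflow) can alert.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
```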
Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes on Databricks.
- Lead the architecture and implementation of scalable, secure, and high-performance data solutions.
- Collaborate with business stakeholders, data scientists, and analysts to gather requirements and deliver actionable data solutions.
- Optimize data workflows for performance, cost-efficiency, and reliability.
- Integrate data from multiple structured and unstructured sources into Delta Lake and cloud data warehouses.
- Implement best practices in data governance, security, and quality management.
- Mentor junior engineers and provide technical leadership in data engineering best practices.
- Troubleshoot and resolve issues in data pipelines, ensuring high data availability and integrity.

Required Skills & Experience:
- 7+ years of experience in Data Engineering.
- Proven subject matter expertise in Databricks (including Databricks SQL, Delta Lake, and MLflow).
- Strong experience with PySpark, Spark SQL, and Scala.
- Hands-on experience with cloud platforms (Azure preferred; AWS/GCP a plus).
- Proficiency in data modeling, ETL development, and data warehousing concepts.
- Solid understanding of big data ecosystems (Hadoop, Kafka, etc.).
- Strong SQL development skills with a focus on performance tuning.
- Experience implementing CI/CD pipelines for data solutions.
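The "integrate data from multiple sources into Delta Lake" responsibility usually means an upsert. A minimal sketch using the Delta Lake `MERGE` API, with hypothetical paths and an assumed `customer_id` key:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths: daily change records merged into a curated Delta table.
updates = spark.read.format("parquet").load("/mnt/landing/customers_changes")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")  # assumed key
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert rows that are new
    .execute()
)
```

Because the merge is a single atomic Delta transaction, readers never observe a half-applied batch, which is what makes this pattern the default for incremental loads.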
Full Stack Engineer
Employment Type: Full-Time
Shift Timing: 3 pm – 11 pm IST

Job Description (5+ years of experience required):

Frontend Development (React.js)
- Strong understanding of React.js and its core principles.
- Proficiency in state management tools like Redux and React's built-in mechanisms.
- Hands-on experience with frontend tools like Webpack, Babel, NPM, and Yarn.
- Expertise in JavaScript, TypeScript, and modern ECMAScript standards.
- Familiarity with frontend design patterns (e.g., monorepo, feature-driven architecture).
- Proficient in handling RESTful API calls using JavaScript and TypeScript.
- Knowledge of packaging React applications and version control tools (e.g., Git).
- Familiarity with third-party React libraries and creating build/deployment pipelines (CI/CD or containerization is a plus).

Backend Development (Python API)
- Strong experience in Python and API development using FastAPI.
- Proficiency with relational databases like PostgreSQL or MySQL.
- Experience with authentication and authorization protocols (e.g., OAuth, OIDC).
- Knowledge of software design patterns and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Experience with CI/CD pipelines and deployment on cloud platforms (Azure preferred).
- Added advantage: experience with Django, Flask, or building scalable distributed systems.
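A minimal sketch of the FastAPI backend work named above: two endpoints with a Pydantic model for validation. FastAPI 0.100+ with Pydantic v2 is assumed (hence `model_dump()`), and the in-memory dict is a placeholder standing in for PostgreSQL/MySQL.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical in-memory store standing in for a relational database.
ITEMS: dict[int, dict] = {}

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> dict:
    ITEMS[item_id] = item.model_dump()  # Pydantic v2; use .dict() on v1
    return {"id": item_id, **ITEMS[item_id]}

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="Item not found")
    return {"id": item_id, **ITEMS[item_id]}
```

Saved as `app.py`, this runs with `uvicorn app:app --reload`, and the React frontend would call these routes as the RESTful API layer the posting describes.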
Technical Manager – Azure
Location: 100% Remote
Employment Type: Full-time
Shift Timing: 3 pm – 11 pm IST

Job Summary:
We are seeking an experienced Technical Manager to lead and manage a team of 20+ engineers, delivering high-quality solutions on the Microsoft Azure platform. The ideal candidate will have a strong technical background in Azure cloud services, excellent leadership skills, and a proven track record of managing large technical teams to deliver complex projects on time and within budget.

Key Responsibilities:
- Lead, mentor, and manage a team of 20+ engineers, including developers, architects, and support staff.
- Drive the design, development, deployment, and maintenance of Azure-based solutions in alignment with business objectives.
- Ensure best practices for cloud architecture, security, scalability, and performance.
- Collaborate with cross-functional teams including Product Management, QA, and DevOps to ensure smooth project execution.
- Oversee resource allocation, workload distribution, and project timelines to achieve business goals.
- Conduct regular performance reviews, provide technical guidance, and develop training plans for team members.
- Establish coding standards and architecture guidelines, and ensure compliance with industry best practices.
- Manage escalations and troubleshoot high-level technical issues effectively.
- Stay updated with the latest Azure capabilities, tools, and technologies, and integrate them into project solutions.
- Drive Agile/Scrum practices, ensuring timely delivery of sprints and releases.

Required Skills & Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
- 12+ years of overall IT experience, with at least 5 years managing technical teams of 20+ members.
- Proven hands-on experience with Microsoft Azure services such as Azure App Services, Azure Functions, Azure DevOps, Azure Kubernetes Service (AKS), Azure SQL, and Azure Storage.
- Strong expertise in cloud architecture, Infrastructure as Code (IaC) using ARM/Bicep/Terraform, and CI/CD pipelines.
- Proficiency in programming languages such as .NET, C#, or Python (preferred).
- Knowledge of DevOps practices, containerization (Docker, Kubernetes), and microservices architecture.
- Excellent leadership, communication, and interpersonal skills.
- Ability to manage multiple projects, prioritize tasks, and work under tight deadlines.
- Azure certifications (e.g., Azure Solutions Architect Expert, Azure DevOps Engineer Expert) are highly desirable.
As an Azure Data Engineer specializing in Microsoft Fabric (Data Lake) based in Mumbai, you should have a minimum of 4 years of experience in the field, with at least 2 years dedicated to working with Microsoft Fabric technologies. Your expertise in Azure services is key, specifically in Data Lake, Synapse Analytics, Data Factory, Azure Storage, and Azure SQL. Your responsibilities will involve data modeling, ETL/ELT processes, and data integration patterns. It is essential to have experience with Power BI integration for effective data visualization. Proficiency in SQL, Python, or PySpark for data transformations is required for this role. A solid understanding of data governance, security, and compliance in cloud environments is also necessary. Previous experience working in Agile/Scrum environments is a plus. Strong problem-solving skills and the ability to work both independently and collaboratively within a team are crucial for success in this position.
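As an illustrative sketch of a PySpark transformation feeding Power BI in this kind of role: aggregate a curated sales table into a monthly summary a semantic model can query directly. The `Tables/...` relative paths follow the Fabric lakehouse convention for notebooks, and every table and column name here is an assumption.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical lakehouse table: curated sales records in the Delta format.
sales = spark.read.format("delta").load("Tables/sales")

summary = (
    sales.groupBy("region", F.date_trunc("month", "order_date").alias("month"))  # assumed columns
         .agg(
             F.sum("amount").alias("revenue"),
             F.countDistinct("order_id").alias("orders"),
         )
)

# A small pre-aggregated table keeps the Power BI model fast and cheap to refresh.
summary.write.format("delta").mode("overwrite").save("Tables/sales_monthly_summary")
```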
Job Title: Databricks Engineer (Remote)
Location: 100% Remote
Job Type: Full-Time
Shift Timing: 3 pm – 11 pm IST

About the Role:
We are looking for an experienced Databricks Engineer with a strong background in data engineering to help build and optimize scalable, high-performance data solutions. The ideal candidate has hands-on experience in Databricks production environments and a deep understanding of modern data architecture. You'll work with cross-functional teams to create robust data pipelines and ensure the reliability, quality, and observability of data across platforms.

Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes on Databricks.
- Lead the architecture and implementation of scalable, secure, and high-performance data solutions.
- Collaborate with business stakeholders, data scientists, and analysts to gather requirements and deliver actionable data solutions.
- Optimize data workflows for performance, cost-efficiency, and reliability.
- Integrate data from multiple structured and unstructured sources into Delta Lake and cloud data warehouses.
- Implement best practices in data governance, security, and quality management.
- Mentor junior engineers and provide technical leadership in data engineering best practices.
- Troubleshoot and resolve issues in data pipelines, ensuring high data availability and integrity.

Required Skills & Experience:
- 6–7 years of experience in Data Engineering.
- Proven subject matter expertise in Databricks (including Databricks SQL, Delta Lake, and MLflow).
- Strong experience with PySpark, Spark SQL, and Scala.
- Hands-on experience with cloud platforms (Azure preferred; AWS/GCP a plus).
- Proficiency in data modeling, ETL development, and data warehousing concepts.
- Solid understanding of big data ecosystems (Hadoop, Kafka, etc.).
- Strong SQL development skills with a focus on performance tuning.
- Experience implementing CI/CD pipelines for data solutions.
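The SQL performance-tuning requirement often shows up as routine Delta table maintenance. A minimal sketch, assuming a Databricks runtime and a hypothetical `curated.orders` table with an `order_date` filter column:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a commonly filtered column.
spark.sql("OPTIMIZE curated.orders ZORDER BY (order_date)")

# Remove data files no longer referenced, keeping the default 7-day window.
spark.sql("VACUUM curated.orders RETAIN 168 HOURS")

# Inspect the table history to confirm the maintenance operations ran.
spark.sql("DESCRIBE HISTORY curated.orders") \
     .select("version", "operation") \
     .show(5, truncate=False)
```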