About the Role
We are looking for an experienced Azure Cloud Engineer to join our Managed Services / Cloud Reliability team. The ideal candidate should have strong hands-on experience in Azure infrastructure, AKS, networking, identity, monitoring, and CI/CD automation. You will be responsible for ensuring the high availability, performance, security, and operational stability of production cloud environments.

Key Responsibilities
- Manage, operate, and maintain Azure cloud infrastructure across multiple environments (Dev / QA / Prod).
- Deploy, configure, and troubleshoot Azure services (VMs, App Services, Storage, Key Vault, Azure SQL, networking).
- Administer and maintain Azure Kubernetes Service (AKS) and containerized workloads (Docker).
- Implement and maintain Infrastructure as Code using Terraform / ARM / Bicep.
- Manage CI/CD pipelines using Azure DevOps (build and release automation).
- Monitor and analyze resource utilization, performance, and incidents using Azure Monitor / Log Analytics (see the Log Analytics sketch below).
- Ensure adherence to security, compliance, and access controls using Azure AD and RBAC.
- Troubleshoot production issues and participate in on-call / incident response as required.
- Optimize cloud cost, storage tiers, and compute scaling strategies.

Must-Have Skills
- Strong hands-on experience with Azure cloud services
- AKS / Kubernetes administration and Docker
- Networking essentials: VNet, subnets, NSG, Load Balancer, VPN gateways, firewalls
- Terraform / ARM templates / Bicep (Infrastructure as Code)
- Azure DevOps Pipelines (CI/CD)
- Azure Monitor, Log Analytics, alerts, and diagnostics
- Azure AD / IAM / RBAC / Key Vault (identity and security)
- Good troubleshooting and incident management skills
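For illustration, a minimal sketch of the kind of Log Analytics query work referenced in the monitoring responsibilities above, using the azure-identity and azure-monitor-query SDKs for Python. The workspace ID and the KQL query are placeholders, not values from any specific environment.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Authenticate with whatever identity is available (CLI login, managed identity, etc.).
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

# Placeholder workspace ID and an example KQL query over the last hour.
WORKSPACE_ID = "<log-analytics-workspace-id>"
QUERY = """
AzureDiagnostics
| where TimeGenerated > ago(1h)
| summarize events = count() by ResourceProvider
"""

response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(list(row))
```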
Job Description: Data Engineer (Azure Databricks)

About the Role
We are looking for a passionate Data Engineer experienced in building scalable, high-performance data platforms on Azure Databricks. The ideal candidate will have strong hands-on experience in ETL/ELT pipeline development, data modeling, and performance optimization, with a focus on delivering clean, reliable, and business-ready data.

Key Responsibilities
- Design, develop, and optimize end-to-end ETL/ELT pipelines using Azure Databricks, Synapse, and Data Factory (ADF)
- Implement Medallion / Delta Lake architectures, ensuring data quality and consistency
- Build and maintain data lake and warehouse solutions supporting analytical and reporting needs
- Optimize Spark jobs and pipeline performance through tuning and best practices
- Develop data models (star/snowflake) and implement SCD Type 1 & 2 for dimensional data (see the sketch below)
- Automate data ingestion, transformation, and validation to improve efficiency and reduce manual work
- Integrate data from multiple sources (APIs, databases, structured/unstructured files)
- Collaborate with cross-functional teams across data, BI, and business units to ensure data readiness
- Manage and deploy code using Azure DevOps and version control systems
- Maintain documentation and follow SDLC best practices in agile environments

Required Skills
- 3-5 years of experience in data engineering
- Proficient in Python, PySpark, Spark SQL, and advanced SQL
- Strong hands-on experience with Azure Databricks, ADF, ADLS, and Synapse
- Excellent understanding of data modeling and warehouse design
- Experience in performance tuning and pipeline optimization
- Working knowledge of CI/CD, Git, and Azure DevOps
- Good understanding of data governance, metadata, and automation frameworks

Good to Have
- Exposure to AWS or multi-cloud environments
- Knowledge of Power BI / data visualization integration
- Experience working with API-based data ingestion
- Certifications such as Databricks Data Engineer Associate or Azure Data Engineer Associate

Certifications
- Databricks Data Engineer Associate
- Microsoft Certified: Azure Data Engineer Associate
- Azure Data Fundamentals
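As a rough illustration of the SCD Type 2 pattern mentioned in the responsibilities, here is a minimal PySpark / Delta Lake merge sketch. The table paths and columns (customer_id, customer_hash, effective_date, end_date, is_current) are hypothetical stand-ins for a real dimensional model, and customer_id is assumed to be a string key.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths: an existing dimension table and an incoming daily snapshot.
dim = DeltaTable.forPath(spark, "/mnt/gold/dim_customer")
updates = spark.read.format("delta").load("/mnt/silver/customer_updates")

# Rows whose tracked attributes changed get a NULL merge key so the merge
# inserts them as new versions, while the non-NULL key expires the old row.
changed = (
    updates.alias("s")
    .join(dim.toDF().alias("t"), F.expr("s.customer_id = t.customer_id AND t.is_current = true"))
    .where("t.customer_hash <> s.customer_hash")
    .selectExpr("CAST(NULL AS STRING) AS merge_key", "s.*")
)
staged = updates.selectExpr("customer_id AS merge_key", "*").unionByName(changed)

(
    dim.alias("t")
    .merge(staged.alias("s"), "t.customer_id = s.merge_key AND t.is_current = true")
    .whenMatchedUpdate(
        # Expire the current row when its attributes changed.
        condition="t.customer_hash <> s.customer_hash",
        set={"is_current": "false", "end_date": "s.effective_date"},
    )
    .whenNotMatchedInsert(
        # Insert brand-new customers and the new versions of changed ones.
        values={
            "customer_id": "s.customer_id",
            "customer_hash": "s.customer_hash",
            "effective_date": "s.effective_date",
            "end_date": "CAST(NULL AS DATE)",
            "is_current": "true",
        }
    )
    .execute()
)
```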
Position: Azure Data Engineer
Location: Bangalore

About Tenjumps
At Tenjumps, we specialise in consulting, building, deploying, and managing complex enterprise applications and infrastructure. Our tailored solutions help technology-driven businesses build robust, high-performing, and scalable systems that evolve with their growth. We've partnered with a leading logistics domain client to modernise their application platforms and unlock new opportunities through AI/ML-powered solutions.

We are looking for a Senior Azure Data Engineer with strong expertise in building scalable, cloud-native data pipelines on Azure Databricks, ADF, and PySpark. The ideal candidate will lead end-to-end data engineering initiatives, drive architecture decisions, optimize big data workloads, and ensure enterprise-grade data quality and governance.

Key Responsibilities
- Design, build, and optimize end-to-end ETL/ELT pipelines using Azure Databricks, ADF, and ADLS
- Architect and maintain Lakehouse / Medallion (Bronze → Silver → Gold) data models (see the sketch below)
- Develop scalable PySpark / Spark SQL transformations for large datasets
- Own data ingestion frameworks (batch and real-time) from diverse sources: APIs, databases, cloud, structured/unstructured files
- Implement Delta Lake, schema evolution, SCD, partitioning, and data quality checks
- Collaborate with data architects and analysts to design data models and warehouse solutions
- Optimize Spark clusters, jobs, and pipelines for performance and cost efficiency
- Build metadata-driven frameworks for reusable components and automated workflows
- Deploy, monitor, and manage pipelines using Azure DevOps CI/CD
- Ensure data governance, security, version control, and documentation best practices
- Lead junior engineers, conduct code reviews, and drive continuous improvements

Must-Have Skills
- 5+ years of experience in data engineering
- Strong hands-on expertise in:
  - Azure Databricks (notebooks, jobs, Delta Lake)
  - PySpark / Spark SQL
  - Azure Data Factory (ADF)
  - Azure Data Lake Storage Gen2 (ADLS)
- Deep knowledge of ETL/ELT, distributed data processing, and big data ecosystems
- Strong experience in data modeling (star schema, SCD Type 1 & 2, dimensional design)
- Experience with CI/CD, Git, and Azure DevOps
- Strong SQL skills (performance tuning, complex transformations)
- Understanding of data governance, security, and best practices
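A minimal sketch of the Bronze → Silver refinement step described above, assuming Delta tables in ADLS Gen2. The storage account, paths, and columns (shipment_id, event_ts, ingest_ts) are placeholders only, not details of the client's platform.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations for the Bronze and Silver layers.
bronze_path = "abfss://lake@<storage-account>.dfs.core.windows.net/bronze/shipments"
silver_path = "abfss://lake@<storage-account>.dfs.core.windows.net/silver/shipments"

bronze = spark.read.format("delta").load(bronze_path)

silver = (
    bronze
    .dropDuplicates(["shipment_id"])                      # de-duplicate on the business key
    .filter(F.col("shipment_id").isNotNull())             # basic data quality rule
    .withColumn("event_ts", F.to_timestamp("event_ts"))   # enforce types
    .withColumn("ingest_date", F.to_date("ingest_ts"))    # derive a partition column
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .option("overwriteSchema", "true")   # allow controlled schema evolution
    .save(silver_path)
)
```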
We are seeking a Machine Learning Engineer with strong hands-on experience in LLMs, NLP, GenAI, and MLOps to design, fine-tune, deploy, and optimize machine learning and generative AI solutions. The role involves working with large-scale datasets, modern ML frameworks, and production-grade ML pipelines to deliver intelligent, data-driven applications.

Key Responsibilities
- Design, train, fine-tune, and evaluate machine learning and LLM-based models using PyTorch, TensorFlow, Hugging Face, and vLLM.
- Develop Generative AI solutions including summarization, classification, RAG pipelines, and prompt engineering using LLaMA, Qwen, Phi, and similar models.
- Implement MLOps best practices using MLflow for experiment tracking, model versioning, and reproducibility (see the sketch below).
- Build and deploy ML-powered applications using Flask, REST APIs, and Docker.
- Perform data preprocessing, analysis, and feature engineering using Python and NumPy.
- Collaborate with research and product teams to improve model explainability and performance, including integration with knowledge graphs.
- Optimize models for GPU-based training and inference.
- Develop and maintain CI/CD pipelines for ML workflows.
- Work closely with frontend/backend teams to integrate ML models into web applications (React, Node.js).

Required Skills & Qualifications
- Master's degree in computer science or a related field.
- Strong proficiency in Python and ML frameworks (PyTorch, TensorFlow).
- Hands-on experience with LLMs, NLP, Transformers, Hugging Face, PEFT, LoRA, LangChain, and RAG.
- Experience with MLOps tools such as MLflow and CI/CD pipelines.
- Knowledge of Docker, Git, Linux, and cloud platforms (AWS preferred).
- Experience building APIs and ML services using Flask / REST.
- Strong understanding of ML model training, fine-tuning, benchmarking, and evaluation.
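A minimal sketch of the LoRA fine-tuning and MLflow tracking workflow referenced above: it attaches PEFT adapters to a small stand-in model and logs the run to MLflow. The base model choice, hyperparameters, and the omitted training loop are illustrative assumptions, not a prescribed setup.

```python
import mlflow
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = "gpt2"  # small stand-in; a LLaMA / Qwen / Phi checkpoint would be used in practice
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of the weights is trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())

# Track parameters and metrics in MLflow for reproducibility and comparison.
with mlflow.start_run(run_name="lora-finetune-sketch"):
    mlflow.log_params({"base_model": base_model, "lora_r": 8, "lora_alpha": 16})
    mlflow.log_metric("trainable_param_fraction", trainable / total)
    # A training loop (e.g. transformers.Trainer) and eval metrics would be logged here.
```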
We are seeking a Databricks Data Engineer to design, build, and optimize scalable data pipelines and analytics solutions on the Databricks Lakehouse Platform. You will be responsible for transforming raw data into high-value datasets that power analytics, AI/ML models, and business insights. We're looking for a dynamic, self-motivated, and proactive leader who thrives in a fast-paced environment and can foster a high-performing culture.

Key Responsibilities
- Develop scalable ETL/ELT pipelines using PySpark, SQL, and Delta Lake.
- Build and maintain batch and streaming data pipelines using Databricks Workflows.
- Design and implement Lakehouse architecture and optimize Delta Lake tables.
- Integrate data from multiple sources including APIs, databases, cloud storage, and third-party platforms.
- Ensure data quality and reliability using Delta Live Tables, expectations, and monitoring tools (see the sketch below).
- Work with CI/CD pipelines using Git, Databricks Repos, and DevOps tools.
- Implement governance, security, and compliance standards (Unity Catalog, lineage, RBAC).

Must-Have Skills
- Strong knowledge of data modeling, data warehousing concepts, and Lakehouse fundamentals.
- Strong experience with Databricks, PySpark, Apache Spark, and SQL.
- Hands-on with Delta Lake, Databricks Workflows, and Unity Catalog.
- Experience building scalable pipelines on Azure, AWS, or GCP.
- Solid understanding of distributed systems and performance optimization.
- Experience with streaming frameworks (Structured Streaming, Kafka, Auto Loader).
- Passion for problem-solving, both technical and business-oriented.
- User-centric mindset focused on building delightful software experiences.
- Lead by example: implement best practices and maintain high-quality code standards.

Nice-to-Have Skills
- Databricks certification (Data Engineer Associate / Professional).
- Experience with ML workflows and feature engineering.
- Hands-on with Databricks Photon, serverless SQL, and optimization techniques.
- Familiarity with BI tools (Power BI, Tableau, Looker).
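For illustration, a minimal Delta Live Tables sketch with Auto Loader ingestion and expectations, of the kind described in the responsibilities. It assumes execution inside a DLT pipeline (where the `dlt` module and `spark` session are provided by the runtime); the landing path, table names, and columns (order_id, amount, order_ts) are placeholders.

```python
import dlt
from pyspark.sql import functions as F

# Placeholder cloud storage landing zone for raw JSON files.
RAW_PATH = "abfss://lake@<storage-account>.dfs.core.windows.net/landing/orders"

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader incremental ingestion
        .option("cloudFiles.format", "json")
        .load(RAW_PATH)
    )

@dlt.table(comment="Validated, typed orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")   # expectation: drop bad rows
@dlt.expect_or_drop("positive_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )
```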