14.0 - 18.0 years
0 Lacs
Karnataka
On-site
We are hiring for the role of AVP - Databricks, requiring a minimum of 14 years of experience. The job location can be Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune. As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure that all solutions meet client requirements, best practices, and industry standards. You will serve as a subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads will also be part of your role. Additionally, you will act as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery. We are looking for a candidate with a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred) and relevant years of experience in IT services, specifically in Databricks and cloud-based data engineering. Proven experience in leading end-to-end delivery and solution architecting of data engineering or analytics solutions on Databricks is a plus. Strong expertise in cloud technologies such as AWS, Azure, and GCP, data pipelines, and big data tools is desired. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is required. An in-depth understanding of data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing, will be beneficial for this role.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As an AI/ML Specialist, you will be responsible for building intelligent systems utilizing OT sensor data and Azure ML tools. Your primary focus will be collaborating with data scientists, engineers, and operations teams to develop scalable AI solutions addressing critical manufacturing issues such as predictive maintenance, process optimization, and anomaly detection. This role involves bridging the edge and cloud environments by deploying AI solutions to run effectively on either cloud platforms or industrial edge devices. Your key functions will include designing and developing ML models using time-series sensor data from OT systems, working closely with engineering and data science teams to translate manufacturing challenges into AI use cases, implementing MLOps pipelines on Azure ML, and integrating with Databricks/Delta Lake. Additionally, you will be responsible for deploying and monitoring models at the edge using Azure IoT Edge, conducting model validation, retraining, and performance monitoring, as well as collaborating with plant operations to contextualize insights and integrate them into workflows. To qualify for this role, you should have a minimum of 5 years of experience in machine learning and AI. Hands-on experience with Azure ML, MLflow, Databricks, and PyTorch/TensorFlow is essential. You should also possess a proven ability to work with OT sensor data such as temperature, vibration, and flow. A strong background in time-series modeling, edge inferencing, and MLOps is required, along with familiarity with manufacturing KPIs and predictive modeling use cases.
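A hedged sketch of the kind of workflow this role describes: training a simple anomaly-detection baseline on OT sensor readings and tracking it with MLflow. The IsolationForest model, the file path, and the column names are illustrative assumptions standing in for whatever model and data the team actually uses.

```python
# Minimal sketch: anomaly detection on OT sensor readings with MLflow tracking.
# IsolationForest is a stand-in model; the path and column names are hypothetical.
import mlflow
import pandas as pd
from sklearn.ensemble import IsolationForest

sensor_df = pd.read_parquet("sensor_readings.parquet")  # hypothetical extract of OT data
features = sensor_df[["temperature", "vibration", "flow"]]

with mlflow.start_run(run_name="iforest-anomaly-baseline"):
    model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
    model.fit(features)
    scores = model.decision_function(features)          # higher = more normal
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mean_anomaly_score", float(scores.mean()))
    mlflow.sklearn.log_model(model, "model")             # versioned for later retraining
```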
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of our team at Proximity, you will be taking on the role of both a hands-on tech lead and product manager. Your primary responsibility will be to deliver data/ML platforms and pipelines within a Databricks-Azure environment. In this capacity, you will be leading a small delivery team and collaborating with enabling teams to drive product, architecture, and data science initiatives. Your ability to translate business requirements into product strategy and technical delivery with a platform-first mindset will be crucial to our success. To excel in this role, you should possess technical proficiency in Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, and Azure. Additionally, expertise in Agile delivery, discovery cycles, outcome-focused planning, and trunk-based development will be advantageous. You should also be adept at collaborating with engineers, working across cross-functional teams, and fostering self-service platforms. Clear communication skills will be key in articulating decisions, roadmap, and priorities effectively. Joining our team comes with a host of benefits. You will have the opportunity to engage in Proximity Talks, where you can interact with fellow designers, engineers, and product enthusiasts, and gain insights from industry experts. Working alongside our world-class team will provide you with continuous learning opportunities, allowing you to challenge yourself and acquire new knowledge on a daily basis. Proximity is a leading technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. With headquarters in San Francisco and additional offices in Palo Alto, Dubai, Mumbai, and Bangalore, we have a track record of creating high-impact, scalable products used by 370 million daily users. Since our inception in 2019, the collective net worth of our client companies has reached $45.7 billion. At Proximity, we are a diverse team of coders, designers, product managers, and experts dedicated to solving complex problems and developing cutting-edge technology at scale. As our team of Proxonauts continues to expand rapidly, your contributions will play a significant role in the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams. To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and design wing at Studio Proximity, and gain behind-the-scenes access through our Instagram accounts @ProxWrks and @H.Jagda.
Posted 2 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
As a Senior Azure Data Engineer, your responsibilities will include: building scalable data pipelines using Databricks and PySpark; transforming raw data into usable business insights; integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics; deploying and maintaining machine learning models using MLlib or TensorFlow; executing large-scale Spark jobs with performance tuning on Spark Pools; and leveraging Databricks Notebooks and managing workflows with MLflow. Qualifications: Bachelor's/Master's in Computer Science, Data Science, or equivalent; 7+ years in Data Engineering, with 3+ years in Azure Databricks; strong hands-on skills in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake; Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics. Contract position. Location: Remote - Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
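As a hedged illustration of the pipeline work listed above, the sketch below reads raw CSV files from Azure Data Lake Storage with PySpark, applies a couple of transformations, and writes a Delta table. The storage account, container, and column names are placeholders, not details from the posting.

```python
# Minimal PySpark sketch: raw CSV in ADLS Gen2 -> curated Delta table.
# Storage account, container, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/"))

curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))    # normalize timestamps
           .withColumn("amount", F.col("amount").cast("double"))  # enforce numeric type
           .dropDuplicates(["order_id"]))                         # basic de-duplication

(curated.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@examplestorage.dfs.core.windows.net/orders/"))
```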
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Remote
We are seeking a skilled Azure Data Engineer with strong Power BI capabilities to design, build, and maintain enterprise data lakes on Azure, ingest data from diverse sources, and develop insightful reports and dashboards. This role requires hands-on experience in Azure data services, ETL processes, and BI visualization to support data-driven decision-making. Key Responsibilities Design and implement end-to-end data pipelines using Azure Data Factory (ADF) for batch ingestion from various enterprise sources. Build and maintain a multi-zone Medallion Architecture data lake in Azure Data Lake Storage Gen2 (ADLS Gen2), including raw staging with metadata tracking, silver layer transformations (cleansing, enrichment, schema standardization), and gold layer curation (joins, aggregations). Perform data processing and transformations using Azure Databricks (PySpark/SQL) and ADF, ensuring data lineage, traceability, and compliance. Integrate data governance and security using Databricks Unity Catalog, Azure Active Directory (Azure AD), Role-Based Access Control (RBAC), and Access Control Lists (ACLs) for fine-grained access. Develop and optimize analytical reports and dashboards in Power BI, including KPI identification, custom visuals, responsive designs, and export functionalities to Excel/Word. Conduct data modeling, mapping, and extraction during discovery phases, aligning with functional requirements for enterprise analytics. Collaborate with cross-functional teams to define schemas, handle API-based ingestion (REST/OData), and implement audit trails, logging, and compliance with data protection policies. Participate in testing (unit, integration, performance), UAT support, and production deployment, ensuring high availability and scalability. Create training content and provide knowledge transfer on data lake implementation and Power BI usage. Monitor and troubleshoot pipelines, optimizing for batch processing efficiency and data quality. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 5+ years of experience in data engineering, with at least 3 years focused on Azure cloud services. Proven expertise in Azure Data Factory (ADF) for ETL/orchestration, Azure Data Lake Storage Gen2 (ADLS Gen2) for data lake management, and Azure Databricks for Spark-based transformations. Strong proficiency in Power BI for report and dashboard development, including DAX, custom visuals, data modeling, and integration with Azure data sources (e.g., DirectQuery or Import modes). Hands-on experience with Medallion Architecture (raw/silver/gold layers), data wrangling, and multi-source joins. Familiarity with API ingestion (REST, OData) from enterprise systems. Solid understanding of data governance tools like Databricks Unity Catalog, Azure AD for authentication, and RBAC/ACLs for security. Proficiency in SQL, PySpark, and data modeling techniques for dimensional and analytical schemas. Experience in agile methodologies, with the ability to deliver phased outcomes. Preferred Skills Certifications such as Microsoft Certified: Azure Data Engineer Associate (DP-203) or Power BI Data Analyst Associate (PL-300). Knowledge of Azure Synapse Analytics, Azure Monitor for logging, and integration with hybrid/on-premises sources. Experience in domains like energy, mobility, or enterprise analytics, with exposure to moderate data volumes. Strong problem-solving skills, with the ability to handle rate limits, pagination, and dynamic data in APIs. 
Familiarity with tools like Azure DevOps for CI/CD and version control of pipelines/notebooks. What We Offer Opportunity to work on cutting-edge data transformation projects. Competitive salary and benefits package. Collaborative environment with access to advanced Azure tools and training. Flexible work arrangements and professional growth opportunities. If you are a proactive engineer passionate about building scalable data solutions and delivering actionable insights, apply now.
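To ground the medallion (raw/silver/gold) layering this posting describes, here is a hedged PySpark sketch of the silver and gold steps on Databricks; the mount paths, column names, and aggregation logic are assumptions, not the actual design.

```python
# Hedged medallion-architecture sketch: bronze -> silver -> gold.
# Paths, columns, and the aggregation are placeholders; `spark` is the
# session provided in a Databricks notebook.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load("/mnt/datalake/bronze/meter_readings")

silver = (bronze
          .filter(F.col("reading_value").isNotNull())            # cleansing
          .withColumn("reading_date", F.to_date("reading_ts"))   # schema standardization
          .dropDuplicates(["meter_id", "reading_ts"]))
silver.write.format("delta").mode("overwrite").save("/mnt/datalake/silver/meter_readings")

gold = (silver.groupBy("meter_id", "reading_date")               # curation: aggregate KPIs
        .agg(F.avg("reading_value").alias("avg_reading"),
             F.max("reading_value").alias("peak_reading")))
gold.write.format("delta").mode("overwrite").save("/mnt/datalake/gold/daily_meter_kpis")
```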
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Telangana
On-site
As the Vice President of Engineering at Teradata in India, you will be responsible for leading the software development organization for the AI Platform Group. This includes overseeing the execution of the product roadmap for key technologies such as Vector Store, Agent platform, Apps, user experience, and AI/ML-driven use-cases. Your success in this role will be measured by your ability to build a world-class engineering culture, attract and retain technical talent, accelerate product delivery, and drive innovation that brings tangible value to customers. In this role, you will lead a team of over 150 engineers with a focus on helping customers achieve outcomes with Data and AI. Collaboration with key functions such as Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be essential to your success. You will also lead a regional team of up to 500 individuals, including software development, cloud engineering, DevOps, engineering operations, and architecture teams. Collaboration with various stakeholders at regional and global levels will be a key aspect of your role. To be considered a qualified candidate for this position, you should have at least 10 years of senior leadership experience in product development or engineering within enterprise software product companies. Additionally, you should have a minimum of 3 years of experience in a VP Product or equivalent role managing large-scale technical teams in a growth market. You must have a proven track record of leading agentic AI development and scaling AI in a hybrid cloud environment, as well as experience with Agile and DevSecOps methodologies. Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience in delivering SaaS-based data and analytics platforms, modern data stack technologies, AI/ML infrastructure, enterprise security, and performance engineering is also crucial. A passion for open-source collaboration, building high-performing engineering cultures, and inclusive leadership is highly valued. Ideally, you should hold a Master's degree in engineering, Computer Science, or an MBA. At Teradata, we prioritize a people-first culture, offer a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our mission to empower our customers and drive innovation in the world of AI and data analytics.,
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
The ideal candidate for this position in Ahmedabad should be a graduate with at least 3 years of experience. At Bytes Technolab, we strive to create a cutting-edge workplace infrastructure that empowers our employees and clients. Our focus on utilizing the latest technologies enables our development team to deliver high-quality software solutions for a variety of businesses. You will be responsible for leveraging your 3+ years of experience in Machine Learning and Artificial Intelligence to contribute to our projects. Proficiency in Python programming and relevant libraries such as NumPy, Pandas, and scikit-learn is essential. Hands-on experience with frameworks like PyTorch, TensorFlow, Keras, Facenet, and OpenCV will be key in your role. Your role will involve working with GPU acceleration for deep learning model development using CUDA and cuDNN. A strong understanding of neural networks, computer vision, and other AI technologies will be crucial. Experience with Large Language Models (LLMs) like GPT, BERT, LLaMA, and familiarity with frameworks such as LangChain, AutoGPT, and BabyAGI are preferred. You should be able to translate business requirements into ML/AI solutions and deploy models on cloud platforms like AWS SageMaker, Azure ML, and Google AI Platform. Proficiency in ETL pipelines, data preprocessing, and feature engineering is required, along with experience in MLOps tools like MLflow, Kubeflow, or TensorFlow Extended (TFX). Expertise in optimizing ML/AI models for performance and scalability across different hardware architectures is necessary. Knowledge of Natural Language Processing (NLP), Reinforcement Learning, and data versioning tools like DVC or Delta Lake is a plus. Skills in containerization tools like Docker and orchestration tools like Kubernetes will be beneficial for scalable deployments. You should have experience in model evaluation, A/B testing, and establishing continuous training pipelines. Working in Agile/Scrum environments with cross-functional teams, understanding ethical AI principles, model fairness, and bias mitigation techniques are important. Familiarity with CI/CD pipelines for machine learning workflows and the ability to communicate complex concepts to technical and non-technical stakeholders will be valuable.
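For context on the deep learning toolchain named above, here is a minimal, hedged PyTorch sketch of a training loop; the tiny network and the random stand-in data are illustrative assumptions, not a project requirement.

```python
# Minimal PyTorch training-loop sketch; the architecture and the random
# stand-in data are placeholders for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)          # stand-in feature batch
y = torch.randint(0, 2, (256,))   # stand-in binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```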
Posted 2 weeks ago
6.0 - 10.0 years
0 - 2 Lacs
Hyderabad
Work from Office
Job Title: Senior Data Engineer Azure Databricks & Azure Stack Location: [Onsite - Hyderabad] Experience: 6-8 years Employment Type: Full-Time Job Summary: TechNavitas is seeking a highly skilled Senior Data Engineer with 6-8 years of experience in designing and implementing modern data engineering solutions on Azure Cloud. The ideal candidate will have deep expertise in Azure Databricks, Azure Stack, and building data dashboards using Databricks. You will play a critical role in developing scalable, secure, and high-performance data pipelines that power advanced analytics and machine learning workloads. Key Responsibilities: Design and Develop Data Pipelines: Build and optimize robust ETL/ELT workflows using Azure Databricks to process large-scale datasets from diverse sources. Azure Stack Integration: Implement and manage data workflows within Azure Stack environments for hybrid cloud scenarios. Dashboards & Visualization: Develop interactive dashboards and visualizations in Databricks for business and technical stakeholders. Performance Optimization: Tune Spark jobs for performance and cost efficiency, leveraging Delta Lake, Parquet, and advanced caching strategies. Data Modeling: Design and maintain logical and physical data models that support structured and unstructured data needs. Collaboration: Work closely with data scientists, analysts, and business teams to understand requirements and deliver data solutions that enable insights. Security & Compliance: Ensure adherence to enterprise data security, privacy, and governance standards, especially in hybrid Azure environments. Automation & CI/CD: Implement CI/CD pipelines for Databricks workflows using Azure DevOps or similar tools. Required Skills and Experience: Technical Skills: Min 6-8 years of data engineering experience with strong focus on the Azure ecosystem. Deep expertise in Azure Databricks (PySpark/Scala/SparkSQL) for big data processing. Solid understanding of Azure Stack Hub/Edge for hybrid cloud architecture. Hands-on experience with Delta Lake, data lakes, and data lakehouse architectures. Proficiency in developing dashboards within Databricks SQL and integrating with BI tools like Power BI or Tableau. Strong knowledge of data modeling, data warehousing (e.g., Synapse Analytics), and ELT/ETL best practices. Experience with event-driven architectures and streaming data pipelines using Azure Event Hubs, Kafka, or Databricks Structured Streaming. Familiarity with Git, Azure DevOps, and CI/CD automation for data workflows. Soft Skills: Strong problem-solving and analytical thinking. Ability to communicate technical concepts effectively to non-technical stakeholders. Proven track record of working in Agile/Scrum teams. Preferred Qualifications: Experience working with hybrid or multi-cloud environments (Azure Stack + Azure Public Cloud). Knowledge of ML lifecycle and MLOps practices for data pipelines feeding ML models. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Why Join Us? Work on cutting-edge data engineering projects across hybrid cloud environments. Be part of a dynamic team driving innovation in big data and advanced analytics. Competitive compensation and professional growth opportunities.
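The streaming requirement above (Event Hubs/Kafka into Delta with Structured Streaming) could look roughly like the hedged sketch below; the broker address, topic, and storage paths are assumptions, not the employer's configuration.

```python
# Hedged Structured Streaming sketch: Kafka topic -> Delta bronze table.
# Broker, topic name, and mount paths are placeholders; `spark` is the
# session provided in a Databricks notebook.
from pyspark.sql import functions as F

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")
          .option("subscribe", "plant-telemetry")
          .load()
          .select(F.col("key").cast("string").alias("device_id"),
                  F.col("value").cast("string").alias("payload"),
                  "timestamp"))

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/plant-telemetry")  # exactly-once bookkeeping
 .outputMode("append")
 .start("/mnt/delta/bronze/plant_telemetry"))
```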
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad/Secunderabad
Hybrid
Job Objective We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you! Required Qualifications Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD Tech Stack, MSBI, etc.) Preferred Qualifications & Skills: Must-Have Skills: Expertise in relational & multi-dimensional database architectures Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint Strong experience in Power BI MDX, SSAS, SSIS, SSRS, Tabular & DAX Queries Deep understanding of SQL Server Tabular Model & multidimensional database design Excellent SQL-based data analysis skills Strong hands-on experience with Azure Data Factory, Databricks, PySpark/Python Nice-to-Have Skills: Exposure to AWS or GCP Experience with Lakehouse Architecture, Real-time Streaming (Kafka/Event Hubs), Infrastructure as Code (Terraform/ARM) Familiarity with Cognos, Qlik, Tableau, MDM, DQ, Data Migration MS BI, Power BI, or Azure Certifications
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
telangana
On-site
As the Vice President of Engineering at Teradata, you will be responsible for leading the India-based software development organization within the AI Platform Group. Your main focus will be on executing the product roadmap for key technologies such as Vector Store, Agent platform, Apps, user experience, and AI/ML-driven use-cases at scale. Success in this role will involve building a world-class engineering culture, attracting and retaining top technical talent, accelerating hybrid cloud-first product delivery, and driving innovation that brings measurable value to customers. You will be leading a team of over 150 engineers with the goal of helping customers achieve outcomes with Data and AI. Collaboration with Product Management, Product Operations, Security, Customer Success, and Executive Leadership will be key aspects of your role. Additionally, you will work closely with a high-impact regional team of up to 500 people, including software development, cloud engineering, DevOps, engineering operations, and architecture teams. To qualify for this position, you should have over 10 years of senior leadership experience in product development, engineering, or technology leadership within enterprise software product companies. You should also have at least 3 years of experience in a VP Product or equivalent role managing large-scale technical teams in a growth market. Experience in leading the development of agentic AI and scaling AI in a hybrid cloud environment is essential. Success in implementing and scaling Agile and DevSecOps methodologies, as well as modernizing legacy architectures into service-based systems, will be key qualifications. Your background should include expertise in cloud platforms, data harmonization, data analytics for AI, Kubernetes, containerization, and microservices-based architectures. Experience in delivering SaaS-based data and analytics platforms, familiarity with modern data stack technologies, AI/ML infrastructure, enterprise security, data governance, and API-first design will be beneficial. Additionally, a track record of building high-performing engineering cultures, inclusive leadership teams, and a passion for open-source collaboration are desired qualities. A Master's degree in engineering, Computer Science, or an MBA is preferred for this role. At Teradata, we prioritize a people-first culture, embrace a flexible work model, focus on well-being, and are committed to Diversity, Equity, and Inclusion. Join us in our dedication to fostering an equitable environment that celebrates individuals for all aspects of who they are.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements. In this position, you will be required to lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as ensuring resource allocation, budgeting, and adherence to timelines and quality standards. Collaboration with data engineers, architects, business analysts, and platform teams is essential to align on project goals. You will act as the primary liaison between business units, technology teams, and vendors, facilitating regular updates, steering committee meetings, and issue/risk escalations. Your technical oversight responsibilities include managing solution delivery on Databricks for data processing, ML pipelines, and analytics, as well as overseeing real-time data streaming pipelines via Confluent Kafka. Ensuring alignment with data governance, security, and regulatory frameworks such as GDPR, CBUAE, and BCBS 239 is crucial. Risk and compliance management are key aspects of your role, involving ensuring regulatory reporting data flows comply with local and international financial standards and managing controls and audit requirements in collaboration with Compliance and Risk teams. The required skills and experience for this role include 7+ years of Project Management experience within the banking or financial services sector, proven experience in leading data platform projects, a strong understanding of data architecture, pipelines, and streaming technologies, experience in managing cross-functional teams, and proficiency in Agile/Scrum and Waterfall methodologies. Technical exposure to Databricks (Delta Lake, MLflow, Spark), Confluent Kafka (Kafka Connect, kSQL, Schema Registry), Azure or AWS Cloud Platforms, integration tools, CI/CD pipelines, and Oracle ERP Implementation is expected. Preferred qualifications include PMP/Prince2/Scrum Master certification, familiarity with regulatory frameworks, and a strong understanding of data governance principles. The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. Key performance indicators for this role include on-time, on-budget delivery of data initiatives, uptime and SLAs of data pipelines, user satisfaction, and compliance with regulatory milestones.
Posted 2 weeks ago
8.0 - 13.0 years
11 - 16 Lacs
Noida, Greater Noida, Delhi / NCR
Work from Office
Core Responsibilities Seasoned Databricks developer having 8+ years of experience is expected to design, build, and maintain scalable data pipelines and workflows using Azure Databricks. This includes: Developing ETL pipelines using Python, SQL, and Delta Live Tables Managing job execution through Databricks Jobs, including scheduling and parameterisation Implementing data transformation using built-in Databricks features Collaborating with data architects and analysts to ensure data models align with business needs Implementing security and governance using features like encryption, access control and data lineage tracking Data performance management using built-in Databricks features (i.e. concurrency for better query execution) Must be an expert in writing complex SQL queries. Technical Skills The following technical proficiencies are commonly required: Languages: Python, SQL Databricks Features: Delta Lake, Delta Live Tables, MLflow, Databricks Jobs, Secrets Management, encryption, access control, etc. Azure Integration: Azure Key Vault, Azure Functions, Azure Data Lake, Azure DevOps Deployment Pipelines: Experience with CI/CD for Databricks and integration with Snowflake schemas (DEV, PRE-PROD, PROD) Monitoring & Optimisation: Cost analysis of clusters, performance tuning, and job concurrency management Desirable Experience Working knowledge of Control-M for orchestrating Databricks jobs Familiarity with Power BI integration and semantic modelling Exposure to geospatial or time-series data processing using Databricks Basic knowledge of data modelling Share your resume at Aarushi.Shukla@coforge.com if you are an early or immediate joiner. Good to have: exposure to Agile methodology of software development Soft Skills & Qualifications Strong problem-solving, communication, and interpersonal skills. Ability to work collaboratively across teams (e.g. HR, Architecture, Data Engineering). A degree in Computer Science, Data Engineering, or a related field is typically preferred
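Since the role centres on Delta Live Tables, a hedged sketch of a two-table DLT pipeline is shown below; the source path, table names, and expectation rule are assumptions for illustration, and the code runs inside a DLT pipeline rather than a plain notebook.

```python
# Illustrative Delta Live Tables sketch: a raw ingestion table feeding a
# cleansed table with a data-quality expectation. Paths and names are
# placeholders; `spark` is supplied by the DLT runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw policy events ingested as-is")
def policy_events_raw():
    return spark.read.format("json").load("/mnt/raw/policy_events/")

@dlt.table(comment="Cleansed policy events")
@dlt.expect_or_drop("valid_policy_id", "policy_id IS NOT NULL")  # drop bad records
def policy_events_clean():
    return (dlt.read("policy_events_raw")
            .withColumn("event_date", F.to_date("event_ts")))
```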
Posted 2 weeks ago
3.0 - 5.0 years
25 - 40 Lacs
Bengaluru
Hybrid
The Modern Data Engineer is responsible for designing, implementing, and maintaining scalable data architectures using cloud technologies, primarily on AWS, to support the next evolutionary stage of the Investment Process. They build robust data pipelines, optimize data storage and access patterns, and ensure data quality while collaborating across engineering teams to deliver high-value data products. Key Responsibilities • Implement and maintain data pipelines for ingestion, transformation, and delivery • Ensure data quality through validation and monitoring processes • Collaborate with senior engineers to design scalable data solutions • Work with business analysts to understand and implement data requirements • Optimize data models and queries for performance and efficiency • Follow engineering best practices and contribute to team standards • Participate in code reviews and knowledge sharing activities • Implement data security controls and access policies • Troubleshoot and resolve data pipeline issues Core Technical Skills Cloud Platforms: Proficient with cloud-based data platforms (Snowflake, data lakehouse architecture) AWS Ecosystem: Strong knowledge of AWS services including Lambda, Glue, and S3 Streaming Architecture: Understanding of event-based or streaming data concepts using Kafka Programming: Strong proficiency in Python and SQL DevOps: Experience with CI/CD pipelines and infrastructure as code (Terraform) Data Security: Knowledge of implementing basic data access controls Database Systems: Experience with RDBMS (Oracle, Postgres, MSSQL) and exposure to NoSQL databases Data Integration: Understanding of data integration patterns and techniques Orchestration: Experience using workflow tools (Airflow, Control-M, etc.) Engineering Practices: Experience with GitHub, code verification, and validation Domain Knowledge: Basic knowledge of investment management industry concepts
Posted 2 weeks ago
2.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Tiger Analytics is a global AI and analytics consulting firm that is at the forefront of solving complex problems using data and technology. With a team of over 2800 experts spread across the globe, we are dedicated to making a positive impact on the lives of millions worldwide. Our culture is built on expertise, respect, and collaboration, with a focus on teamwork. While our headquarters are in Silicon Valley, we have delivery centers and offices in various cities in India, the US, UK, Canada, and Singapore, as well as a significant remote workforce. As an Azure Big Data Engineer at Tiger Analytics, you will be part of a dynamic team that is driving an AI revolution. Your typical day will involve working on a variety of analytics solutions and platforms, including data lakes, modern data platforms, and data fabric solutions using Open Source, Big Data, and Cloud technologies on Microsoft Azure. Your responsibilities may include designing and building scalable data ingestion pipelines, executing high-performance data processing, orchestrating pipelines, designing exception handling mechanisms, and collaborating with cross-functional teams to bring analytical solutions to life. To excel in this role, we expect you to have 4 to 9 years of total IT experience with at least 2 years in big data engineering and Microsoft Azure. You should be well-versed in technologies such as Azure Data Factory, PySpark, Databricks, Azure SQL Database, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Your passion for writing high-quality, scalable code and your ability to collaborate effectively with stakeholders are essential for success in this role. Experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, and Neo4J, as well as knowledge of different file formats and REST API design, will be advantageous. At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with varying skills and backgrounds to apply. We are committed to providing equal opportunities for all our employees and fostering a culture of trust, respect, and growth. Your compensation package will be competitive and aligned with your expertise and experience. If you are looking to be part of a forward-thinking team that is pushing the boundaries of what is possible in AI and analytics, we invite you to join us at Tiger Analytics and be a part of our exciting journey towards building innovative solutions that inspire and energize.,
Posted 3 weeks ago
6.0 - 11.0 years
0 - 0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Databricks Engineer _ Pan India Role & responsibilities Develop and maintain a metadata-driven generic ETL framework for automating ETL code Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS. Insure MO rating engine experience required. Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming). Develop and maintain robust data pipelines for batch and streaming data using Delta Lake and Spark Structured Streaming. Implement data quality checks, validations, and logging mechanisms. Optimize pipeline performance, cost, and reliability. Collaborate with data analysts, BI, and business teams to deliver fit-for-purpose datasets. Support data modelling efforts (star, snowflake schemas) and de-normalized table approaches, and assist with data warehousing initiatives. Work with orchestration tools such as Databricks Workflows to schedule and monitor pipelines. Follow best practices for version control, CI/CD, and collaborative development Skills Hands-on experience in ETL/Data Engineering roles. Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer Certification preferred Experience with Spark optimization, partitioning, caching, and handling large-scale datasets. Proficiency in SQL and scripting in Python or Scala. Solid understanding of data lakehouse/medallion architectures and modern data platforms. Experience working with cloud storage systems like AWS S3. Familiarity with DevOps practices: Git, CI/CD, Terraform, etc. Strong debugging, troubleshooting, and performance-tuning skills.
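A hedged sketch of what a metadata-driven ingest loop like the one described above could look like: each config entry drives a generic read, de-duplication, and Delta write. The bucket paths, table names, and keys are illustrative assumptions.

```python
# Config-driven ingestion sketch for Databricks on AWS; every value in the
# `sources` list is a placeholder, and `spark` is the notebook session.
sources = [
    {"name": "customers", "fmt": "csv",  "path": "s3://raw-bucket/customers/", "keys": ["customer_id"]},
    {"name": "policies",  "fmt": "json", "path": "s3://raw-bucket/policies/",  "keys": ["policy_id"]},
]

for src in sources:
    df = (spark.read.format(src["fmt"])
          .option("header", "true")
          .load(src["path"])
          .dropDuplicates(src["keys"]))          # basic data-quality step
    (df.write.format("delta")
       .mode("overwrite")
       .saveAsTable(f"bronze.{src['name']}"))    # one bronze table per source
```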
Posted 3 weeks ago
15.0 - 20.0 years
50 - 55 Lacs
Bengaluru
Work from Office
Mode: Contract As an Azure Data Architect, you will: Lead architectural design and migration strategies, especially from Oracle to Azure Data Lake Architect and build end-to-end data pipelines leveraging Databricks, Spark, and Delta Lake Design secure, scalable data solutions integrating ADF, SQL Data Warehouse, and on-prem/cloud systems Optimize cloud resource usage and pipeline performance Set up CI/CD pipelines with Azure DevOps Mentor team members and align architecture with business needs Qualifications: 10-15 years in Data Engineering/Architecture roles Extensive hands-on with: Databricks, Azure Data Factory, Azure SQL Data Warehouse Data integration, migration, cluster configuration, and performance tuning Azure DevOps and cloud monitoring tools Excellent interpersonal and stakeholder management skills
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
As a Senior Azure Data Engineer, your responsibilities will include: building scalable data pipelines using Databricks and PySpark; transforming raw data into usable business insights; integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics; deploying and maintaining machine learning models using MLlib or TensorFlow; executing large-scale Spark jobs with performance tuning on Spark Pools; and leveraging Databricks Notebooks and managing workflows with MLflow. Qualifications: Bachelor's/Master's in Computer Science, Data Science, or equivalent; 7+ years in Data Engineering, with 3+ years in Azure Databricks; strong hands-on skills in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake; Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 3 weeks ago
5.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
Sr. Data Analytics Engineer Power mission-critical decisions with governed insights Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can’t afford to fail. Our 120-engineer team specializes in highly regulated domains—HIPAA, FDA, SOC 2—and delivers production-grade systems that turn data into strategic advantage. Why You’ll Love It End-to-end impact — Build full-stack analytics from lakehouse pipelines to real-time dashboards. Fail-safe engineering — TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning. Modern stack — Databricks, PySpark, Delta Lake, Power BI, Airflow. Mentorship culture — Lead code reviews, share best practices, grow as a domain expert. Mission-critical context — Help enterprises migrate legacy analytics into cloud-native, governed platforms. Compliance-first mindset — Work in HIPAA-aligned environments where precision matters. Key Responsibilities Build scalable pipelines using SQL, PySpark, Delta Live Tables on Databricks. Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting. Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation. Deliver robust Power BI solutions —dashboards, semantic layers, paginated reports, DAX. Migrate legacy SSRS reports to Power BI with zero loss of logic or governance. Optimize compute and cost through cache tuning, partitioning, and capacity monitoring. Document everything —from pipeline logic to RLS rules—in Git-controlled formats. Collaborate cross-functionally to convert product analytics needs into resilient BI assets. Champion mentorship by reviewing notebooks, dashboards, and sharing platform standards. Must-Have Skills 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts. Advanced SQL (incl. windowing), expert PySpark , Delta Lake , Unity Catalog . Power BI mastery —DAX optimization, security rules, paginated reports. SSRS-to-Power BI migration experience (RDL logic replication). Strong Git, CI/CD familiarity, and cloud platform know-how (Azure/AWS). Communication skills to bridge technical and business audiences. Nice-to-Have Skills Databricks Data Engineer Associate cert. Streaming pipeline experience (Kafka, Structured Streaming). dbt , Great Expectations , or similar data quality frameworks. BI diversity—experience with Tableau, Looker, or similar platforms. Cost governance familiarity (Power BI Premium capacity, Databricks chargeback). Benefits & Call-to-Action Ajmera offers competitive compensation, flexible schedules, and a deeply technical culture where engineers lead the narrative. If you’re driven by reliable, audit-ready data products and want to own systems from raw ingestion to KPI dashboards— apply now and engineer insights that matter.
Posted 3 weeks ago
4.0 - 6.0 years
12 - 18 Lacs
Chennai, Bengaluru
Work from Office
Key Skills : Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Integration, Airflow, Delta Lake, Redshift, S3, Data Security, Cloud Platforms, Life Sciences. Roles & Responsibilities : Develop and maintain robust, scalable data pipelines for ingesting, transforming, and optimizing large datasets from diverse sources. Integrate multi-source data into performant, query-optimized formats such as Delta Lake, Redshift, and S3. Tune data processing jobs and storage layers to ensure cost efficiency and high throughput. Automate data workflows using orchestration tools like Airflow and Databricks APIs for ingestion, transformation, and reporting. Implement data validation and quality checks to ensure reliable and accurate data. Manage and optimize AWS and Databricks infrastructure to support scalable data operations. Lead cloud platform migrations and upgrades, transitioning legacy systems to modern, cloud-native solutions. Enforce security best practices, ensuring compliance with regulatory standards such as IAM and data encryption. Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders to deliver data solutions. Experience Requirement : 4-6 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS. Strong background in designing and building data pipelines, and optimizing data storage and processing. Proficiency in using cloud services such as AWS (S3, Redshift, Lambda) for building scalable data solutions. Hands-on experience with containerized environments and orchestration tools like Airflow for automating data workflows. Expertise in data migration strategies and transitioning legacy data systems to modern cloud platforms. Experience with performance tuning, cost optimization, and lifecycle management of cloud data solutions. Familiarity with regulatory compliance (GDPR, HIPAA) and security practices (IAM, encryption). Experience in the Life Sciences or Pharma domain is highly preferred, with an understanding of industry-specific data requirements. Strong problem-solving abilities with a focus on delivering high-quality data solutions that meet business needs. Education : Any Graduation.
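To make the Airflow-plus-Databricks orchestration mentioned above concrete, here is a hedged sketch of a daily DAG that submits a Databricks notebook run; the connection id, cluster spec, and notebook path are assumptions rather than project details.

```python
# Hedged Airflow sketch: a daily DAG that submits a one-off Databricks run.
# Connection id, cluster spec, and notebook path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="nightly_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_ingest = DatabricksSubmitRunOperator(
        task_id="run_ingest_notebook",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data/ingest_pipeline"},
    )
```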
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the position of Lead Consultant-Databricks Developer - AWS. As a Databricks Developer in this role, you will be responsible for solving cutting-edge real-world problems to meet both functional and non-functional requirements. Responsibilities: - Stay updated on new and emerging technologies and explore their potential applications for service offerings and products. - Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements. - Demonstrate knowledge of relevant industry trends and standards. - Showcase strong analytical and technical problem-solving skills. - Possess excellent coding skills, particularly in Python or Scala, with a preference for Python. Qualifications: Minimum qualifications: - Bachelor's Degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience. - Stay informed about new technologies and their potential applications. - Collaborate with architects and lead engineers to develop solutions. - Demonstrate knowledge of industry trends and standards. - Exhibit strong analytical and technical problem-solving skills. - Proficient in Python or Scala coding. - Experience in the Data Engineering domain. - Completed at least 2 end-to-end projects in Databricks. Additional qualifications: - Familiarity with Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. - Understanding of Databricks Lakehouse concept and its implementation in enterprise environments. - Ability to create complex data pipelines. - Strong knowledge of Data structures & algorithms. - Proficiency in SQL and Spark-SQL. - Experience in performance optimization to enhance efficiency and reduce costs. - Worked on both Batch and streaming data pipelines. - Extensive knowledge of Spark and Hive data processing framework. - Experience with cloud platforms (Azure, AWS, GCP) and common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, Cloud databases. - Skilled in writing unit and integration test cases. - Excellent communication skills and experience working in teams of 5 or more. - Positive attitude towards learning new skills and upskilling. - Knowledge of Unity catalog and basic governance. - Understanding of Databricks SQL Endpoint. - Experience in CI/CD to build pipelines for Databricks jobs. - Exposure to migration projects for building Unified data platforms. - Familiarity with DBT, Docker, and Kubernetes. This is a full-time position based in India-Gurugram. The job posting was on August 5, 2024, and the unposting date is set for October 4, 2024.,
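As a hedged illustration of the Delta Lake pipeline work this role calls for, the sketch below performs an incremental upsert with the DeltaTable merge API; the table names and join key are placeholders, not details from the posting.

```python
# Incremental upsert sketch using the Delta Lake merge API; table names and
# the join key are placeholders, and `spark` is the Databricks session.
from delta.tables import DeltaTable

updates = spark.read.format("delta").load("/mnt/staging/customer_updates")

target = DeltaTable.forName(spark, "silver.customers")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # refresh existing rows
 .whenNotMatchedInsertAll()   # insert new rows
 .execute())
```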
Posted 3 weeks ago
0.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant/Data Engineer. In this role, you will collaborate closely with cross-functional teams, including developers, business analysts, and stakeholders, to deliver high-quality software solutions that enhance operational efficiency and support strategic business objectives. Responsibilities: Provide technical leadership and architectural guidance on data engineering projects. Design and implement data pipelines, data lakes, and data warehouse solutions using the data engineering platform. Optimize Spark-based data workflows for performance, scalability, and cost-efficiency. Ensure robust data governance and security, including the implementation of Unity Catalog. Collaborate with data scientists, business users, and engineering teams to align solutions with business goals. Stay updated with evolving data engineering features, best practices, and industry trends. Proven expertise in data engineering, including Spark, Delta Lake, and Unity Catalog. Strong background in data engineering, with hands-on experience in building production-grade data pipelines and lakes. Proficient in Python (preferred) or Scala for data transformation and automation. Strong command of SQL and Spark SQL for data querying and processing. Experience with cloud platforms such as Azure, AWS, or GCP. Familiarity with DevOps/DataOps practices in data pipeline development. Knowledge of Profisee or other Master Data Management (MDM) tools is a plus. Certifications in Data Engineering or Spark. Experience with Delta Live Tables, structured streaming, or metadata-driven frameworks. Development of new reports and updating of existing reports as requested by customers. Automation of the respective reports through the creation of config files. Validation of the premium in the reports against the IMS application, via config files, to ensure there are no discrepancies. Validation of all the reports that run on a monthly basis, analyzing the respective reports if there is any discrepancy. Qualifications we seek in you! Minimum Qualifications: BE/B Tech/MCA. Preferred Qualifications/Skills: Excellent analytical, problem-solving, communication, and interpersonal skills. Able to work effectively in a fast-paced, sometimes stressful environment, and deliver production-quality software within tight schedules. Must be results-oriented, self-motivated, and have the ability to thrive in a fast-paced environment. Strong Specialty Insurance domain & IT knowledge. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws.
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
5.0 - 10.0 years
3 - 5 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Job Title: Azure Databricks Developer Experience: 5+ Years Location: PAN India (Remote/Hybrid as per project requirement) Employment Type: Full-time Job Summary: We are hiring an experienced Azure Databricks Developer to join our dynamic data engineering team. The ideal candidate will have strong expertise in building and optimizing big data solutions using Azure Databricks, Spark, and other Azure data services. Key Responsibilities: Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark. Integrate and manage large datasets using Azure Data Lake, Azure Data Factory, and other Azure services. Implement Delta Lake for efficient data versioning and performance optimization. Collaborate with cross-functional teams including data scientists and BI developers. Ensure best practices for data security, governance, and compliance. Monitor performance and troubleshoot Spark clusters and data pipelines. Skills & Requirements: Minimum 5 years of experience in data engineering with at least 2+ years in Azure Databricks. Proficiency in Apache Spark (PySpark/Scala). Strong hands-on experience with Azure services: ADF, ADLS, Synapse Analytics. Expertise in building and managing ETL/ELT pipelines. Strong SQL skills and experience with performance tuning. Experience with CI/CD pipelines and Azure DevOps is a plus. Good understanding of data modeling, partitioning, and data lake architecture.
Posted 3 weeks ago
10.0 - 16.0 years
40 - 50 Lacs
Noida, Hyderabad
Work from Office
Data engineering architect to lead the design and development of modern Delta Lakehouse architecture on Azure. The role will focus on building scalable, modular data platforms that support complex use cases such as customer 360 (integrating data from multiple domains to create a unified view of the customer), real-time insights, etc. Mandatory technical skills - Databricks, Azure, PySpark/Scala/SQL, Kafka, Python, ADF, SQL Good to have - Knowledge of open-source frameworks - Apache Kafka, NiFi, Airflow, Apache Flink, dbt, Iceberg Experience with data cataloging, metadata management, data quality, lineage, versioning, monitoring, and DevOps practices Data governance, privacy, and regulatory compliance - GDPR, data privacy, PII, PHI, etc.
Posted 3 weeks ago
7.0 - 10.0 years
5 - 10 Lacs
Bengaluru, Karnataka, India
On-site
Hands-on experience with programming languages such as Python is mandatory. Thorough understanding of AWS from a data engineering and tools standpoint. Experience in another cloud is also beneficial. Experience in AWS Glue, Spark, and Python with Airflow for designing and developing data pipelines. Expertise in Informatica Cloud is advantageous. Data Modeling: Advanced/Intermediate data modeling skills (Master/Ref/ODS/DW/DM) to enable analytics on the platform. Traditional data warehousing and ETL skillset, including strong SQL and PL/SQL skills. Experience with inbound and outbound integrations on the cloud platform. Design and development of Data APIs (Python, Flask/FastAPI) to expose data on the platform. Partner with SA to identify data inputs and related data sources, review sample data, identify gaps, and perform quality checks. Experience loading and querying cloud-hosted databases like Redshift, Snowflake, and BigQuery. Preferred - Knowledge of system-to-system integration, messaging/queuing, and managed file transfer. Preferred - Building and maintaining REST APIs, ensuring security and scalability. Preferred - DevOps/DataOps: Experience with Infrastructure as Code, setting up CI/CD pipelines. Preferred - Building real-time streaming data ingestion
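A hedged sketch of the "Data API" idea referenced above, exposing a single read endpoint with FastAPI; the in-memory store, route, and field names are illustrative assumptions standing in for a real warehouse-backed query layer.

```python
# Minimal FastAPI sketch of a data-serving endpoint; the in-memory store is a
# placeholder for whatever lake or warehouse query actually backs the API.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Data API")

CUSTOMER_STORE = {
    "ACME": {"customer": "ACME", "region": "EMEA", "open_orders": 12},
}

@app.get("/customers/{customer_id}")
def get_customer(customer_id: str):
    record = CUSTOMER_STORE.get(customer_id)
    if record is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return record

# Local run (assumption): uvicorn main:app --reload
```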
Posted 3 weeks ago
7.0 - 10.0 years
5 - 10 Lacs
Chennai, Tamil Nadu, India
On-site
Hands-on experience with programming languages such as Python is mandatory. Thorough understanding of AWS from a data engineering and tools standpoint. Experience in another cloud is also beneficial. Experience in AWS Glue, Spark, and Python with Airflow for designing and developing data pipelines. Expertise in Informatica Cloud is advantageous. Data Modeling: Advanced/Intermediate data modeling skills (Master/Ref/ODS/DW/DM) to enable analytics on the platform. Traditional data warehousing and ETL skillset, including strong SQL and PL/SQL skills. Experience with inbound and outbound integrations on the cloud platform. Design and development of Data APIs (Python, Flask/FastAPI) to expose data on the platform. Partner with SA to identify data inputs and related data sources, review sample data, identify gaps, and perform quality checks. Experience loading and querying cloud-hosted databases like Redshift, Snowflake, and BigQuery. Preferred - Knowledge of system-to-system integration, messaging/queuing, and managed file transfer. Preferred - Building and maintaining REST APIs, ensuring security and scalability. Preferred - DevOps/DataOps: Experience with Infrastructure as Code, setting up CI/CD pipelines. Preferred - Building real-time streaming data ingestion
Posted 3 weeks ago
7132 Jobs | Southborough