4.0 - 7.0 years
6 - 9 Lacs
Hyderabad, Bengaluru
Hybrid
Job Summary We are seeking a skilled Azure Data Engineer with 4+ years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms. Key Responsibilities Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained. Must-Have Skills 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts. Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
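By way of illustration, the incremental ETL/ELT work this role describes often boils down to key-based upserts — the pattern Delta Lake's MERGE INTO implements. A minimal pure-Python sketch, with invented record shapes standing in for Spark DataFrames:

```python
# Hypothetical illustration of an incremental upsert (the pattern behind
# Delta Lake's MERGE INTO): source rows overwrite matching target rows by
# key, and unmatched source rows are appended.
def merge_upsert(target, source, key="id"):
    merged = {row[key]: row for row in target}  # index target rows by key
    for row in source:                          # source rows win on conflict
        merged[row[key]] = row
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
source = [{"id": 2, "amt": 25}, {"id": 3, "amt": 30}]
print(merge_upsert(target, source))
# → [{'id': 1, 'amt': 10}, {'id': 2, 'amt': 25}, {'id': 3, 'amt': 30}]
```

On Databricks the same operation would typically be written as `MERGE INTO target USING source ON target.id = source.id …` against Delta tables rather than in Python.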
Posted 6 days ago
4.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Hybrid
Role: Azure Data Engineer Job Type: Full Time Job Location: Hyderabad Level of Experience: 5-8 Years Job Description: Experience with Azure Databricks, Azure Synapse, Azure SQL and Azure Data Lake is required. • Experience in creating, designing and developing data models for scalable, multi-terabyte data marts. • Experience in designing and hands-on development of cloud-based analytics solutions. • Should be able to analyze and understand complex data. • Thorough understanding of Azure Cloud Infrastructure. • Designing and building of data pipelines and streaming ingestion methods. • Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential. • Strong experience in common data warehouse modelling principles. • Knowledge of Power BI is desirable. • Knowledge of PowerShell and work experience in Python or an equivalent programming language is desirable. • Exposure to or knowledge of Kusto (KQL) is an added advantage. • Exposure to or knowledge of LLM models is an added advantage. Technical Soft Skills: Strong customer engagement skills to fully understand customer needs for analytics solutions. • Experience working in a fast-paced agile environment. • Ability to grasp new technologies fast and start delivering projects quickly. • Strong problem solving and troubleshooting skills.
Posted 1 week ago
4.0 - 7.0 years
7 - 14 Lacs
Pune, Mumbai (All Areas)
Work from Office
Job Profile Description Create and maintain highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory, Databricks and Apache Spark/Scala. Responsible for managing a growing cloud-based data ecosystem and the reliability of our corporate data lake and analytics data mart. Contribute to the continued evolution of the Corporate Analytics Platform and integrated data model. Be part of the Data Engineering team in all phases of work including analysis, design and architecture to develop and implement cutting-edge solutions. Negotiate and influence changes outside of the team that continuously shape and improve the data strategy. 4+ years of experience implementing analytics data solutions leveraging Azure Data Factory, Databricks, Logic Apps, ML Studio, Data Lake and Synapse. Working experience with Scala, Python or R. Bachelor's degree or equivalent experience in Computer Science, Information Systems, or related disciplines.
Posted 1 week ago
3.0 - 5.0 years
9 - 17 Lacs
Bengaluru
Remote
Role Overview: We are looking for a highly skilled Azure Data Engineer / Power BI Analyst with 3 to 5 years of experience in building end-to-end data solutions on the Microsoft Azure platform. The ideal candidate should be proficient in data ingestion, transformation, modeling, and visualization using tools such as Azure Data Factory, Azure Databricks, SQL, Power BI, and Fabric. Role & responsibilities Design, develop, and maintain robust ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks. Perform data ingestion from various on-prem/cloud sources to Azure Data Lake / Synapse / SQL. Implement transformation logic using PySpark, SQL, and DataFrames. Create Power BI dashboards and reports using DAX and advanced visualization techniques. Develop and manage tabular models, semantic layers, and data schemas (star/snowflake). Optimize Power BI datasets and performance tuning (e.g., dataset refresh time, PLT). Collaborate with stakeholders to gather reporting requirements and deliver insights. Ensure data accuracy, security, and compliance across all stages. Leverage Azure DevOps for version control and CI/CD pipelines. Participate in Agile ceremonies (scrum, sprint reviews, demos). Preferred candidate profile 3+ years of experience with Azure Data Factory, Databricks, Data Lake. Proficient in Power BI, DAX, SQL, and Python. Experience in building and optimizing tabular models and semantic layers. Hands-on with Azure Synapse, Fabric, and DevOps. Solid understanding of data modeling, ETL, data pipelines, and business logic implementation. Strong communication skills and ability to work in Agile teams.
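As a rough sketch of the star/snowflake modelling this role calls for: a dimension table gets surrogate keys, and fact rows store those keys instead of raw attribute values. All names and data below are invented for illustration:

```python
# Hypothetical star-schema load: assign surrogate keys to a dimension,
# then resolve fact rows against it. In a warehouse these would be
# dim_country and fact_sales tables rather than Python dicts/lists.
def build_dimension(values):
    # deterministic surrogate keys: 1..N over the distinct, sorted values
    return {v: i + 1 for i, v in enumerate(sorted(set(values)))}

sales = [("IN", 100), ("US", 250), ("IN", 75)]        # (country, amount)
dim_country = build_dimension(c for c, _ in sales)
fact_sales = [{"country_sk": dim_country[c], "amount": a} for c, a in sales]
print(dim_country)   # → {'IN': 1, 'US': 2}
print(fact_sales)
```

Real surrogate-key pipelines would also handle slowly changing dimensions (SCDs), which this sketch deliberately omits.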
Posted 1 week ago
5.0 - 8.0 years
20 - 25 Lacs
Hyderabad
Remote
Required Skills: Azure Synapse, Microsoft Fabric, Azure Data Factory (ADF), Azure Storage, PySpark, SQL, Azure Key Vault. Excellent communication skills, as this role is client facing and L2 will be a client round of interview. Responsibilities: Design and implement scalable data pipelines using Microsoft Fabric, including Dataflows Gen2, Lakehouse, Notebooks and SQL endpoints. Develop ETL/ELT solutions using PySpark, T-SQL and Spark Notebooks within Fabric and Azure Synapse. Manage and optimize data storage and compute in OneLake supporting Lakehouse and Warehouse use cases. Implement and manage Azure Key Vault for secure handling of secrets, credentials and connection strings. Configure and manage CI/CD pipelines for data engineering projects using Azure DevOps, including automated deployment of Fabric assets. Integrate data from diverse sources including SQL Server, Azure Blob, REST APIs and on-prem systems. Collaborate closely with business teams and Power BI developers to ensure data models support reporting and self-service needs. Monitor and troubleshoot data pipeline performance, data quality and failure recovery. Contribute to architecture design, governance processes and performance tuning.
Posted 1 week ago
6.0 - 11.0 years
12 - 17 Lacs
Gurugram
Work from Office
Job Responsibilities Skill set needed from the resource: Data Architecture and Management: Understanding of Azure SQL technology, including SQL databases, operational data stores, and data transformation processes. Azure Data Factory: Expertise in using Azure Data Factory for ETL processes, including creating and managing pipelines. Python Programming: Proficiency in writing Python scripts, particularly using the pandas library, for data cleaning and transformation tasks. Azure Functions: Experience with Azure Functions for handling and processing Excel files, making them suitable for database import. API Integration: Skills in integrating various data sources, including APIs, into the data warehouse. BPO experience mandatory.
Posted 1 week ago
5.0 - 9.0 years
1 - 1 Lacs
Visakhapatnam, Hyderabad, Vizianagaram
Work from Office
Role & responsibilities 5+ years of experience in data engineering or a related field. Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF). Proven experience with Databricks, including development in PySpark or Scala. Proficiency in DBT for data modeling and transformation. Expertise in analytics and reporting: a Power BI expert who can develop Power BI models, build interactive BI reports, and set up row-level security (RLS) in Power BI reports. Expertise in SQL and performance tuning techniques. Strong understanding of data warehousing concepts and ETL/ELT design patterns. Experience working in Agile environments and familiarity with Git-based version control. Strong communication and collaboration skills. Preferred candidate profile Experience with CI/CD tools and DevOps for data engineering. Familiarity with Delta Lake and Lakehouse architecture. Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps. Experience with data quality frameworks or tools.
Posted 1 week ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Secunderabad
Work from Office
Proficiency in SQL, Python, and data pipeline frameworks such as Apache Spark, Databricks, or Airflow. Hands-on experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery). Strong understanding of data modeling, ETL/ELT, and data lake/warehouse/data mart architectures. Knowledge of Azure Data Factory or AWS Glue. Experience in developing reports and dashboards using tools like Power BI, Tableau, or Looker.
Posted 1 week ago
6.0 - 11.0 years
20 - 27 Lacs
Pune
Hybrid
6+ years of experience as a Data Engineer, with expertise in the Azure platform (Azure SQL DB, ADF and Azure Synapse); 5+ years of experience in database development using SQL; knowledge of data modeling, ETL processes and data warehouse design principles.
Posted 1 week ago
4.0 - 9.0 years
10 - 20 Lacs
Pune
Work from Office
As an Azure/SQL Data Analytics Consultant expect to be: • Working on projects that utilize products within the Microsoft Azure and SQL Data Analytics stack • Satisfying the expectations and requirements of customers, both internal and external. Required Candidate profile Core: Azure Data Platform, SQL Server (T-SQL), Data Analytics (SSIS, SSAS, SSRS), Power BI, Synapse. Supporting: Azure ML, Azure infrastructure, Python, Data Factory. Principles: Data Modelling, Data Warehouse Theory.
Posted 1 week ago
6.0 - 10.0 years
15 - 22 Lacs
Chennai
Work from Office
Job Title: Data Engineering Lead Exp: 6-8 yrs Location: Chennai Work Mode: WFO All 5 Days Shift Timing: General Shift Budget: Max 24 LPA Immediate Joiners Required Mail me at -> triveni2@elabsinfotech.com Mandatory Skills: Data Engineer with strong ETL experience. Azure Data Factory, Azure Synapse & Databricks (all are mandatory). Power BI (1 yr exp). Azure Cloud. Must have managed a team of minimum 5. Good communication.
Posted 1 week ago
5.0 - 8.0 years
15 - 22 Lacs
Noida, Bengaluru, Delhi / NCR
Hybrid
Hi candidates, we have an opportunity with one of the leading IT consulting groups for the Data Engineer role. Interested candidates can mail their CVs to Abhishek.saxena@mounttalent.com Job Description - What we're looking for: Data Engineer III with: 5+ years of experience with ETL processes and data warehouse architecture. 5+ years of experience with Azure Data services, i.e. ADF, ADLS Gen 2, Azure SQL DB, Synapse, Azure Databricks, Microsoft Fabric. 5+ years of experience designing business intelligence solutions. Strong proficiency in SQL and Python/PySpark. Implementation experience of Medallion architecture and Delta Lake (or Lakehouse). Experience with cloud-based data platforms, preferably Azure. Familiarity with big data technologies and data warehousing concepts. Working knowledge of Azure DevOps and CI/CD (build and release).
Posted 1 week ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Senior Azure Data Engineer – L1 Support
Posted 1 week ago
8.0 - 13.0 years
10 - 15 Lacs
Pune
Work from Office
What You'll Do The Global Analytics and Insights (GAI) team is seeking an experienced Data Visualization Manager to lead our data-driven decision-making initiatives. The ideal candidate will have a background in Power BI and expert-level SQL proficiency to drive actionable insights, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. You will become an expert in Avalara's financial, marketing, sales, and operations data. This position will report to a Senior Manager. What Your Responsibilities Will Be You will define and execute the organization's BI strategy, ensuring alignment with business goals. You will lead, mentor, and manage a team of BI developers and analysts, fostering continuous learning. You will develop and implement robust data visualization and reporting solutions using Power BI. You will optimize data models, dashboards, and reports to provide meaningful insights and support decision-making. You will collaborate with business leaders, analysts, and cross-functional teams to gather and translate requirements into actionable BI solutions. You will be a trusted advisor to business teams, identifying opportunities where BI can drive efficiencies and improvements. You will ensure data accuracy, consistency, and integrity across multiple data sources. You will stay updated with the latest advancements in BI tools, SQL performance tuning, and data visualization best practices. You will define and enforce BI development standards, governance, and documentation best practices. You will work closely with Data Engineering teams to define and maintain scalable data pipelines. You will drive automation and optimization of reporting processes to improve efficiency. What You'll Need to be Successful 8+ years of experience in Business Intelligence, Data Analytics, or related fields. 5+ years of expert-level proficiency in Power BI, including DAX, Power Query, data modeling, and dashboard creation.
5+ years of strong SQL skills, with experience in writing complex queries, performance tuning, and working with large datasets. Familiarity with cloud-based BI solutions (e.g., Azure Synapse, AWS Redshift, Snowflake) is a plus. Strong understanding of ETL processes and data warehousing concepts. Strong problem-solving, analytical thinking, and decision-making skills.
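For a flavour of the "complex queries" this role mentions, here is a small self-contained window-function example run against in-memory SQLite; the `sales` schema is made up, and the same SQL pattern carries over to warehouse engines like Synapse or Snowflake:

```python
import sqlite3

# Illustrative only: per-partition totals and ranking via window functions,
# using an invented sales table in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('N', 10), ('N', 30), ('S', 20);
""")
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)
# → [('N', 30.0, 40.0, 1), ('N', 10.0, 40.0, 2), ('S', 20.0, 20.0, 1)]
```

Window functions avoid the self-joins that a per-group total would otherwise require, which is often where "performance tuning" of analytics SQL starts.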
Posted 1 week ago
7.0 - 12.0 years
8 - 18 Lacs
Kolkata
Remote
Position: Sr Azure Data Engineer Location: Remote Time: CET Time Role & responsibilities We are seeking a highly skilled Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in Microsoft Azure, Microsoft Fabric, Azure SQL, Azure Synapse, Python, and Power BI. Knowledge of Oracle DB and data replication tools is preferred. This role involves designing, developing, and maintaining robust data pipelines and ensuring efficient data processing and integration across various platforms. The candidate understands the stated needs and requirements of stakeholders, produces high-quality deliverables, monitors their own work to ensure delivery within the desired performance standards, and understands the importance of delivery within expected time, budget and quality standards, raising concern in case of deviation. Good communication skills and a team player. Design and Development: Architect, develop, and maintain scalable data pipelines using Microsoft Fabric and Azure services, including Azure SQL and Azure Synapse. Data Integration: Integrate data from multiple sources, ensuring data consistency, quality, and availability using data replication tools. Data Management: Manage and optimize databases, ensuring high performance and reliability. ETL Processes: Develop and maintain ETL processes to transform data into actionable insights. Data Analysis: Use Python and other tools to analyze data, create reports, and provide insights to support business decisions. Visualization: Develop and maintain dashboards and reports in Power BI to visualize complex data sets. Performance Tuning: Optimize database performance and troubleshoot any issues related to data processing and integration. Preferred candidate profile Minimum 7 years of experience in data engineering or a related field. Proven experience with Microsoft Azure services, including Microsoft Fabric, Azure SQL and Azure Synapse. Strong proficiency in Python for data analysis and scripting.
Extensive experience with Power BI for data visualization. Knowledge of Oracle DB and experience with data replication tools. Proficient in SQL and database management. Experience with ETL tools and processes. Strong understanding of data warehousing concepts and architectures. Familiarity with cloud-based data platforms and services. Analytical Skills: Ability to analyze complex data sets and provide actionable insights. Problem-Solving: Strong problem-solving skills and the ability to troubleshoot data-related issues.
Posted 1 week ago
4.0 - 9.0 years
5 - 15 Lacs
Chandigarh, Pune, Bengaluru
Work from Office
Responsibilities A day in the life of an Infoscion • As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. • You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. • You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. • You will lead and guide your teams towards developing optimized high quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. • You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Technical and Professional Requirements: Primary skills: Technology->Cloud Platform->Azure Analytics Services->Azure Data Lake Preferred Skills: Technology->Cloud Platform->Azure Analytics Services->Azure Data Lake
Posted 1 week ago
3.0 - 5.0 years
7 - 10 Lacs
Pune
Work from Office
Job Title: Data Engineer Location: Pune, India (On-site) Experience: 3-5 years Employment Type: Full-time Job Summary We are looking for a hands-on Data Engineer who can design and build modern Lakehouse solutions on Microsoft Azure. You will own data ingestion from source-system APIs through Azure Data Factory into OneLake, curate bronze/silver/gold layers on Delta Lake, and deliver dimensional models that power analytics at scale. Key Responsibilities Build secure, scalable Azure Data Factory pipelines that ingest data from APIs, files, and databases into OneLake. Curate raw data into Delta Lake tables on ADLS Gen 2 using the Medallion (bronze/silver/gold) architecture, ensuring ACID compliance and optimal performance. Develop and optimize SQL/Spark SQL transformations in Microsoft Fabric Warehouse / Lakehouse environments. Apply dimensional-modelling best practices (star/snowflake, surrogate keys, SCDs) to create analytics-ready datasets. Implement monitoring, alerting, lineage, and CI/CD (Git/Azure DevOps) for all pipelines and artifacts. Document data flows, data dictionaries, and operational runbooks. Must-Have Technical Skills Microsoft Fabric & Lakehouse experience. Fabric Warehouse / Azure Synapse experience. Data Factory: building, parameterizing, and orchestrating API-driven ingestion pipelines. ADLS Gen 2 + Delta Lake. Strong SQL: advanced querying, tuning, and procedural extensions (T-SQL / Spark SQL). Data-warehousing & dimensional-modelling concepts. Good-to-Have Skills Python (PySpark, automation, data-quality checks). Unix/Linux shell scripting. DevOps (Git, Azure DevOps). Education & Certifications BE / B.Tech in Computer Science, Information Systems, or a related field. Preferred: Microsoft DP-203 Azure Data Engineer Associate. Soft Skills Analytical, detail-oriented, and proactive problem solver. Clear written and verbal communication; ability to simplify complex topics. Collaborative and adaptable within agile, cross-functional teams.
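The bronze → silver → gold curation described above can be sketched, very loosely, in plain Python: bronze holds raw records as ingested, silver deduplicates and applies types, gold aggregates for analytics. In practice these layers would be Delta Lake tables transformed with Spark; all record shapes here are hypothetical:

```python
# Hypothetical Medallion-style flow over invented order records.
bronze = [{"order_id": "1", "qty": "2", "price": "9.5"},
          {"order_id": "1", "qty": "2", "price": "9.5"},  # ingest duplicate
          {"order_id": "2", "qty": "1", "price": "20"}]

def to_silver(rows):
    """Silver: deduplicate by business key and cast raw strings to types."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] not in seen:
            seen.add(r["order_id"])
            out.append({"order_id": int(r["order_id"]),
                        "qty": int(r["qty"]),
                        "price": float(r["price"])})
    return out

def to_gold(rows):
    """Gold: a single analytics-ready aggregate (total revenue)."""
    return sum(r["qty"] * r["price"] for r in rows)

silver = to_silver(bronze)
print(silver)
print(to_gold(silver))  # → 39.0
```

The point of the layering is that each stage is reproducible from the one before it, so bad transformations can be replayed without re-ingesting from source systems.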
Posted 1 week ago
15.0 - 20.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Microsoft Azure Data Services Good to have skills : NA Minimum 2 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs. - Monitor and optimize data pipelines for performance and reliability. Professional & Technical Skills: - Must To Have Skills: Proficiency in Microsoft Azure Data Services. - Good To Have Skills: Experience with Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics. - Strong understanding of data modeling and database design principles. - Experience with data integration and ETL tools. - Familiarity with data governance and data quality best practices. Additional Information: - The candidate should have minimum 2 years of experience in Microsoft Azure Data Services. - This position is based at our Hyderabad office. - A 15 years full time education is required. Qualification 15 years full time education
Posted 1 week ago
12.0 - 14.0 years
20 - 30 Lacs
Indore, Hyderabad
Work from Office
Microsoft Fabric Data Engineer Experience Range: 12-14 years Location: Hyderabad/Indore Notice Period: Immediate Primary Skill: Microsoft Fabric Secondary Skill: Azure Data Factory (ADF) 12+ years of experience in Microsoft Azure Data Engineering for analytical projects. Proven expertise in designing, developing, and deploying high-volume, end-to-end ETL pipelines for complex models, including batch and real-time data integration frameworks using Azure, Microsoft Fabric and Databricks. Extensive hands-on experience with Azure Data Factory, Databricks (with Unity Catalog), Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database for managing and processing large-scale data integrations. Experience in Databricks cluster optimization and workflow management to ensure cost-effective and high-performance processing. Sound knowledge of data modelling, data governance, data quality management, and data modernization processes. Develop architecture blueprints and technical design documentation for Azure-based data solutions. Provide technical leadership and guidance on cloud architecture best practices, ensuring scalable and secure solutions. Keep abreast of emerging Azure technologies and recommend enhancements to existing systems. Lead proof of concepts (PoCs) and adopt agile delivery methodologies for solution development and delivery.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Microsoft Azure Data Services Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of the projects you are involved in, ensuring that client requirements are met effectively and efficiently. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. - Conduct code reviews to ensure adherence to best practices and coding standards. Professional & Technical Skills: - Must have experience with Azure Synapse and PySpark. - Must To Have Skills: Proficiency in Microsoft Azure Data Services. - Strong understanding of cloud computing concepts and architecture. - Experience with data integration and ETL processes. - Familiarity with database management systems and data modeling. - Ability to troubleshoot and resolve technical issues efficiently.
Additional Information: - The candidate should have minimum 3 years of experience in Microsoft Azure Data Services. - This position is based at our Bengaluru office. - A 15 years full time education is required. Qualification 15 years full time education
Posted 1 week ago
2.0 - 7.0 years
5 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job description Hiring for Azure developer with experience range 2 to 9 years Mandatory Skills: Azure, ADF, ADB, Azure synapse Education: BE/B.Tech/BCA/B.SC/MCA/M.Tech/MSc./MS Location: Pan India Responsibilities A day in the life of an Infoscion As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project including problem definition, effort estimation, diagnosis, solution generation and design and deployment You will explore the alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc. and build POCs You will create requirement specifications from the business needs, define the to-be-processes and detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand if any issues, diagnose the root-cause of such issues, seek clarifications, and then identify and shortlist solution alternatives You will also contribute to unit-level and organizational initiatives with an objective of providing high quality value adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Data Engineer – Azure Synapse/ADF, Workiva. To manage and maintain the associated Connector, Chains, Tables and Queries, making updates as needed as new metrics or requirements are identified. Develop functional and technical requirements for any changes impacting wData (Workiva Data). Configure and unit test any changes impacting wData (connector, chains, tables, queries). Promote wData changes.
Posted 1 week ago
4.0 - 9.0 years
15 - 30 Lacs
Gurugram, Chennai
Work from Office
Role & responsibilities • Assume ownership of Data Engineering projects from inception to completion. Implement fully operational Unified Data Platform solutions in production environments using technologies like Databricks, Snowflake, Azure Synapse etc. Showcase proficiency in Data Modelling and Data Architecture Utilize modern data transformation tools such as DBT (Data Build Tool) to streamline and automate data pipelines (nice to have). Implement DevOps practices for continuous integration and deployment (CI/CD) to ensure robust and scalable data solutions (nice to have). Maintain code versioning and collaborate effectively within a version-controlled environment. Familiarity with Data Ingestion & Orchestration tools such as Azure Data Factory, Azure Synapse, AWS Glue etc. Set up processes for data management, templatized analytical modules/deliverables. Continuously improve processes with focus on automation and partner with different teams to develop system capability. Proactively seek opportunities to help and mentor team members by sharing knowledge and expanding skills. Ability to communicate effectively with internal and external stakeholders. Coordinating with cross-functional team members to make sure high quality in deliverables with no impact on timelines Preferred candidate profile • Expertise in computer programming languages such as: Python and Advance SQL • Should have working knowledge of Data Warehousing, Data Marts and Business Intelligence with hands-on experience implementing fully operational data warehouse solutions in production environments. • 3+ years of Working Knowledge of Big data tools (Hive, Spark) along with ETL tools and cloud platforms. • 3+ years of relevant experience in either Snowflake or Databricks. Certification in Snowflake or Databricks would be highly recommended. • Proficient in Data Modelling and ELT techniques. 
• Experienced with any of the ETL/Data Pipeline Orchestration tools such as Azure Data Factory, AWS Glue, Azure Synapse, Airflow etc. • Experience working with ingesting data from different data sources such as RDBMS, ERP systems, APIs etc. • Knowledge of modern data transformation tools, particularly DBT (Data Build Tool), for streamlined and automated data pipelines (nice to have). • Experience in implementing DevOps practices for CI/CD to ensure robust and scalable data solutions (nice to have). • Proficient in maintaining code versioning and effective collaboration within a version-controlled environment. • Ability to work effectively as an individual contributor and in small teams. Should have experience mentoring junior team members. • Excellent problem-solving and troubleshooting ability, with experience of supporting and working with cross-functional teams in a dynamic environment. • Strong verbal and written communication skills, with the ability to communicate effectively and articulate results and issues to internal and client teams.
Posted 1 week ago
3.0 - 8.0 years
6 - 18 Lacs
Kochi
Work from Office
Looking for a Data Engineer with 3+ yrs exp in Azure Data Factory, Synapse, Data Lake, Databricks, SQL, Python, Spark, CI/CD. Preferred: DP-203 cert, real-time data tools (Kafka, Stream Analytics), data governance (Purview), Power BI.
Posted 1 week ago
2.0 - 4.0 years
10 - 15 Lacs
Pune
Work from Office
Role & responsibilities Develop and Maintain Data Pipelines: Design, develop, and manage scalable ETL pipelines to process large datasets using PySpark, Databricks, and other big data technologies. Data Integration and Transformation: Work with various structured and unstructured data sources to build efficient data workflows and integrate them into a central data warehouse. Collaborate with Data Scientists & Analysts: Work closely with the data science and business intelligence teams to ensure the right data is available for advanced analytics, machine learning, and reporting. Optimize Performance: Optimize and tune data pipelines and ETL processes to improve data throughput and reduce latency, ensuring timely delivery of high-quality data. Automation and Monitoring: Implement automated workflows and monitoring tools to ensure data pipelines are running smoothly, and issues are proactively addressed. Ensure Data Quality: Build and maintain validation mechanisms to ensure the accuracy and consistency of the data. Data Storage and Access: Work with data storage solutions (e.g., Azure, AWS, Google Cloud) to ensure effective data storage and fast access for downstream users. Documentation and Reporting: Maintain proper documentation for all data processes and architectures to facilitate easier understanding and onboarding of new team members. Skills and Qualifications: Experience: 5+ years of experience as a Data Engineer or similar role, with hands-on experience in designing, building, and maintaining ETL pipelines. Technologies: Proficient in PySpark for large-scale data processing. Strong programming experience in Python , particularly for data engineering tasks. Experience working with Databricks for big data processing and collaboration. Hands-on experience with data storage solutions (e.g., AWS S3, Azure Data Lake, or Google Cloud Storage). Solid understanding of ETL concepts, tools, and best practices. 
Familiarity with SQL for querying and manipulating data in relational databases. Experience working with data orchestration tools such as Apache Airflow or Luigi is a plus. Data Modeling & Warehousing: Experience with data warehousing concepts and technologies (e.g., Redshift, Snowflake, or BigQuery). Knowledge of data modeling, data transformations, and dimensional modeling. Soft Skills: Strong analytical and problem-solving skills. Excellent communication skills, capable of explaining complex data processes to non-technical stakeholders. Ability to work in a fast-paced, collaborative environment and manage multiple priorities. Preferred Qualifications: Bachelor's or master's degree in Computer Science, Engineering, or a related field. Certification or experience with cloud platforms like AWS, Azure, or Google Cloud. Experience in Apache Kafka or other stream-processing technologies.
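The "Ensure Data Quality" responsibility above usually translates into explicit validation rules run before a dataset is published. A minimal, hypothetical sketch of such a gate — the rule set, column names, and sample rows are all invented:

```python
# Hypothetical data-quality gate: key uniqueness, required (non-null)
# columns, and simple range rules, returning a list of human-readable errors.
def validate(rows, key, required, bounds):
    errors = []
    keys = [r.get(key) for r in rows]
    if len(keys) != len(set(keys)):
        errors.append(f"duplicate values in '{key}'")
    for i, r in enumerate(rows):
        for col in required:
            if r.get(col) is None:
                errors.append(f"row {i}: '{col}' is null")
        for col, (lo, hi) in bounds.items():
            v = r.get(col)
            if v is not None and not (lo <= v <= hi):
                errors.append(f"row {i}: '{col}'={v} out of range")
    return errors

rows = [{"id": 1, "age": 34}, {"id": 1, "age": None}, {"id": 2, "age": 150}]
print(validate(rows, "id", ["age"], {"age": (0, 120)}))
```

In a real pipeline these checks would typically come from a framework (e.g. Great Expectations or Delta Live Tables expectations) and would fail or quarantine the load rather than just print.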
Posted 1 week ago