6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Title: Sr. Data Engineer
Location: Gurgaon / Pune / Hyderabad / Bengaluru / Chennai
Work Mode: Hybrid (2-3 days in office per week)
Job Description:
3+ years of hands-on experience in data processing focused projects
Proficiency with Java, Python, or Scala, and SQL
Knowledge of Apache Spark
Experience with one of the major cloud providers: AWS, Azure, or GCP
Hands-on experience with selected data processing technologies, such as Hadoop, MongoDB, Cassandra, Kafka, and Elasticsearch, as well as Python libraries (Pandas, NumPy, etc.) and data processing tools from cloud providers (EMR, Glue, Data Factory, Bigtable, etc.)
Relevant experience with version control systems and code review processes
Knowledge of Agile methodologies
Basic knowledge of Linux and Bash scripting
Nice to have:
Hands-on experience with Databricks and Delta Lake
Ability to build Apache Airflow pipelines
Experience with the Snowflake platform
Posted 1 month ago
8.0 - 12.0 years
8 - 18 Lacs
Pune
Work from Office
Critical Skills to Possess:
Minimum 8 years of solid experience with:
Understanding of the Azure environment/infrastructure
Azure DevOps configuration
Azure Data Factory
Azure Functions
Azure Logic Apps
Power Apps
SQL
.NET
Power Automate (Microsoft Flow)
Gateway configurations
Manual code deployment in Azure
Familiarity with monitoring Azure environments and resolving issues
Familiarity with monitoring Azure costs and making adjustments accordingly
Preferred Qualifications:
BS degree in Computer Science or Engineering, or equivalent experience
Roles and Responsibilities:
Minimum 5 years of solid experience with:
Understanding of the Azure environment/infrastructure
Azure DevOps configuration
Azure Data Factory
Azure Functions
Azure Logic Apps
Power Apps
SQL
.NET
Power Automate (Microsoft Flow)
Gateway configurations
Manual code deployment in Azure
Familiarity with monitoring Azure environments and resolving issues
Familiarity with monitoring Azure costs and making adjustments accordingly
Posted 1 month ago
5.0 - 10.0 years
0 - 0 Lacs
Noida, Gurugram, Chennai
Work from Office
Hi, this is Vinita from Silverlink Technologies. We have an excellent job opportunity with TCS for the post of "Azure Data Engineer" at Noida / Chennai / Delhi locations. If interested, kindly forward your updated, Word-formatted resume ASAP to Vinita@silverlinktechnologies.com and fill in the details below.
Full Name:
Contact No:
Email ID:
DOB:
Experience:
Relevant Exp:
Current Company:
Notice Period:
Current CTC:
Expected CTC:
Offer in hand:
If yes, then offered CTC:
Date of joining:
Company name:
Grades -- 10th:
12th:
Graduation:
Full time/Part time?
University Name:
Current Location:
Preferred Location:
Gap in education:
Gap in employment:
**Mandatory** PAN Card Number:
Have you ever worked with TCS?
Do you have an active PF account?
Role: Azure Data Engineer
Exp: 5-12 yrs
Mode: Permanent
Notice Period: up to 1-2 months only
Interview Mode: Virtual
For any queries, you can revert on the details mentioned below.
Regards,
Vinita Shetty | IT Recruiter
Silverlink Technologies
Email: Vinita@silverlinktechnologies.com
Website: www.silverlinktechnologies.com
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Hiring a Full Stack Developer for a 6-month remote contractual role. The ideal candidate will have 4-6 years of experience in full-stack development using React.js, Node.js or Python (Flask), and strong backend experience in SQL and ETL. Familiarity with the Azure Cloud Platform, including Azure Databricks, ADF, and other Azure services, is required. You will be responsible for building scalable web applications, developing RESTful APIs, integrating with cloud-based data workflows, and ensuring the performance and security of deployed solutions. Strong communication skills and hands-on expertise in version control tools like Git are expected. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 1 month ago
5.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Diverse Lynx is looking for a PySpark / Azure Databricks engineer to join our dynamic team and embark on a rewarding career journey. Develop and maintain big data pipelines using PySpark. Integrate Azure Databricks for scalable data processing. Perform data transformation and optimization tasks. Collaborate with analysts and data scientists.
Posted 1 month ago
7.0 - 12.0 years
8 - 18 Lacs
Navi Mumbai, Pune
Work from Office
Interested candidates, kindly submit the form below: https://forms.gle/ex1M5oa3qagMUcrn6
Job Description: We are looking for an experienced Team Lead - Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI, and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark.
Key Responsibilities:
Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms.
Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting.
Develop and schedule robust data pipelines using Azure Data Factory and Databricks.
Write efficient and maintainable PySpark code for batch and real-time data processing (see the illustrative sketch after this description).
Design and develop dashboards and reports using Power BI to support business insights.
Ensure data accuracy, security, and consistency throughout the project lifecycle.
Collaborate with stakeholders to understand data and reporting requirements.
Mentor and lead a team of data engineers and BI developers.
Manage project timelines, deliverables, and team performance effectively.
Must-Have Skills:
Data Migration: hands-on experience with large-scale data migration, reconciliation, and transformation.
Snowflake: data modeling, performance tuning, ELT/ETL development, role-based access control.
Azure Data Factory: pipeline development, integration services, linked services.
Databricks: Spark SQL, notebooks, cluster management, orchestration.
PySpark: advanced transformations, error handling, and optimization techniques.
Power BI: data visualization, DAX, Power Query, dashboard/report publishing and maintenance.
Preferred Skills:
Familiarity with Agile methodologies and sprint-based development.
Experience working with CI/CD for data workflows.
Ability to lead client discussions and manage stakeholder expectations.
Strong analytical and problem-solving abilities.
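For illustration only (not part of the original posting): a minimal, hedged PySpark sketch of the kind of batch transformation and Delta load described in the responsibilities above. The source path, table layout, and column names are hypothetical, and Delta Lake support is assumed (e.g., a Databricks workspace).

```python
# Hypothetical example: clean a staging extract and land it as a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dw-migration-sketch").getOrCreate()

# Read a legacy staging extract (path and schema are placeholders).
orders = spark.read.parquet("/mnt/staging/orders")

# Typical migration-time cleanup: deduplicate, normalise types, add an audit column.
cleaned = (
    orders.dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_date"))
          .withColumn("load_ts", F.current_timestamp())
)

# Write to the curated zone as a Delta table, partitioned by date.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders"))
```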
Posted 1 month ago
3.0 - 8.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Title: DD&IT Delivery Lead | Department: Digital, Data & IT | Novo Nordisk India Pvt Ltd
Are you passionate about delivering cutting-edge digital and data-driven technology solutions? Do you thrive at the intersection of technology and business, and have a knack for leading complex IT projects? If so, we have an exciting opportunity for you! Join Novo Nordisk as a Delivery Lead in our Digital, Data & IT (DD&IT) team in Bangalore, India, and help us shape the future of healthcare. Read on and apply today for a life-changing career.
The position
As a Delivery Lead - Digital, Data & IT, you will:
Lead the full lifecycle of IT projects, from initiation and planning to execution, deployment, and post-go-live support.
Define and manage project scope, timelines, budgets, and resources using Agile or hybrid methodologies, with Agile preferred.
Drive sprint planning, backlog grooming, and release management in collaboration with product owners and scrum teams.
Conduct architecture and solution design reviews to ensure scalability and alignment with enterprise standards.
Provide hands-on guidance on solution design, data modelling, API integration, and system interoperability.
Ensure compliance with IT security policies and data privacy regulations, including GDPR and local requirements.
Act as the primary point of contact for business stakeholders, translating business needs into technical deliverables.
Facilitate workshops and design sessions with cross-functional teams, including marketing, sales, medical, and analytics.
Manage vendor relationships, ensuring contract compliance, SLA adherence, and performance reviews.
Qualifications
We are looking for an experienced professional who meets the following criteria:
Bachelor's degree in Computer Science, Information Technology, or a related field, or an MBA/postgraduate degree, with a minimum of 3 years of relevant experience.
6-8 years of experience in IT project delivery, with at least 3 years in a technical leadership or delivery management role.
Proven experience with CRM platforms (e.g., Veeva, Salesforce), omnichannel orchestration tools, and patient engagement platforms.
Proven experience in the commercial space of the business is required.
Experience with data lakes and analytics platforms (e.g., Azure Synapse, Power BI) and mobile/web applications for field force enablement.
Certifications in project management (PMP, PRINCE2) or Agile (Scrum Master, SAFe) are good to have; relevant experience in managing projects can also be considered.
Experience with IT governance models and technical documentation best practices.
Exposure to data privacy tools and frameworks.
Familiarity with data and IT security best practices.
About the department
The DD&IT department is located at our headquarters, where we manage projects and programs related to business requirements and specialized technical areas. Our team is dedicated to planning, organizing, and controlling resources to achieve project objectives. We foster a dynamic and innovative atmosphere, driving the adoption of Agile processes and best practices across the organization.
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Key Responsibilities:
Optimize and manage data storage solutions in Data Lake and Snowflake.
Develop and maintain ETL processes using Azure Data Factory and Databricks.
Write efficient and maintainable code in Python for data processing and analysis.
Ensure data quality and integrity across various data sources and platforms.
Ensure data accuracy, integrity, and availability across various trading systems.
Collaborate with traders, analysts, and IT teams to understand data requirements and deliver robust solutions.
Optimize and enhance the data architecture for performance and scalability.
Mandatory Skills:
Python / PySpark - 5+ years
FastAPI - 3+ years (see the illustrative sketch after this list)
Pydantic - 3+ years
SQLAlchemy - 3+ years
Snowflake or SQL - 3+ years
Data Lake - 1+ years
Azure Data Factory (ADF) - 1+ years
CI/CD, Azure fundamentals, Git - 1+ years
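For illustration only (not part of the posting): a minimal sketch of the FastAPI + Pydantic service style implied by the mandatory skills above. The endpoint, model, and fields are hypothetical; a real service would persist the payload via SQLAlchemy to Snowflake or SQL rather than echoing it back.

```python
# Hypothetical example of a small FastAPI service with Pydantic validation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Trade(BaseModel):
    trade_id: int
    symbol: str
    quantity: float

@app.post("/trades")
def create_trade(trade: Trade) -> dict:
    # Placeholder: in practice this would write to Snowflake/SQL via SQLAlchemy.
    return {"status": "accepted", "trade_id": trade.trade_id}
```

Run locally with, for example, `uvicorn app:app --reload` (module name assumed).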
Posted 1 month ago
4.0 - 9.0 years
10 - 15 Lacs
Pune
Work from Office
MS Azure Infra (must); PaaS will be a plus. Ensure solutions meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5+ years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements.
Primary Skill: AWS Infra along with PaaS will be an added advantage. Certification in Terraform is an added advantage. Certification in Azure and AWS is an added advantage. Can handle large audiences to present HLD, LLD, and ERC. Able to drive solutions/projects independently and lead projects with a focus on risk management and regulatory compliance.
Secondary Skills: Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
The Job
Design and implementation of data pipelines for processing large volumes of data.
Ingesting batch and streaming data from various data sources.
Writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
Data modeling: proficiency in creating both normalized and denormalized database schemas.
Developing applications in Python.
Developing ETL, OLAP-based, and analytical applications.
Working experience with Azure / AWS services.
Working in Databricks, Snowflake, or other cloud data platforms.
Working in Agile / Scrum methodologies.
Good knowledge of cloud security concepts and implementation of different types of authentication methods.
Working on Azure DevOps; create and manage Git and code versioning, build CI/CD pipelines and test plans.
Your Profile
Experience with a strong focus on Data Engineering.
Design, develop, and maintain data pipelines.
Implement ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for seamless data integration.
Collaborate with cross-functional teams to design and implement large-scale distributed systems for data processing and analytics.
Optimize and maintain CI/CD pipelines to ensure smooth deployment and integration of new data solutions.
Exposure to Python libraries such as NumPy, pandas, Beautiful Soup, etc.
Experience with Databricks, Snowflake, or other cloud data platforms.
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking self-starters who can work in a diverse and fast-paced environment and join our Enterprise Data team. We are looking for a candidate who is responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.
Responsibilities
Design, architect, and develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements.
Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities.
Develop POCs to influence platform architects, product managers, and software engineers to validate solution proposals and migrate.
Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to the modern technology platform.
Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org.
Develop systems that ingest, cleanse, and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data.
Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling.
Build and optimize ETL workflows using Azure Databricks and PySpark, including developing efficient data processing pipelines, data validation, error handling, and performance tuning.
Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing.
Articulate business requirements in a technical solution that can be designed and engineered.
Consult with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data.
Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations. Ensure data privacy, confidentiality, and integrity throughout the data engineering processes.
Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
Experience & Education
Minimum of 4 years of experience as a data developer using Python, PySpark, and Spark SQL, with ETL knowledge, SQL Server, and ETL concepts.
Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
Experience with the Azure cloud platform, Databricks, and Azure storage.
Effective written and verbal communication skills, including technical writing.
Excellent technical, analytical, and organizational skills.
Technical Skills & Competencies
Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues.
Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code (see the illustrative streaming sketch after this posting).
Hands-on experience in PySpark, Databricks, and Spark SQL.
Knowledge of JSON, Parquet, and other file formats, and the ability to work effectively with them.
Knowledge of NoSQL databases such as HBase, MongoDB, Cosmos DB, etc.
Preferred: cloud experience on Azure or AWS; Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
Team player; reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment while working with cross-functional teams.
You'll join an entrepreneurial, inclusive culture. One where we succeed together, across the desk and around the globe. Where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits, and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you.
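For illustration only (not part of the posting): a hedged Spark Structured Streaming sketch of the near-real-time, JSON-to-Delta ingestion pattern referred to above. Paths, schema, and checkpoint locations are hypothetical, and Delta Lake support is assumed (e.g., on Databricks).

```python
# Hypothetical example: stream JSON files into a bronze Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("value", DoubleType())
          .add("event_time", TimestampType()))

events = (spark.readStream
               .schema(schema)
               .json("/mnt/landing/events/"))   # incoming JSON files (placeholder path)

query = (events.writeStream
               .format("delta")
               .option("checkpointLocation", "/mnt/checkpoints/events")
               .outputMode("append")
               .start("/mnt/bronze/events"))
```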
Posted 1 month ago
5.0 - 10.0 years
15 - 22 Lacs
Ahmedabad
Work from Office
• Design, develop, and maintain data pipelines and ETL processes using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. Experience with SQL, Python, or other scripting languages.
Required Candidate profile: ETL design; big data tools such as Hadoop or Spark. • 3+ years of experience in data engineering with a focus on Azure cloud services, working with Azure cloud services, and designing data solutions.
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Gurugram
Work from Office
We Are Hiring! Sr. Azure Data Engineer at GSPANN Technologies - 5+ years of experience.
Location: Hyderabad, Gurgaon
Key Skills & Experience: Azure Synapse Analytics, Azure Data Factory (ADF), PySpark, Databricks; expertise in developing and maintaining stored procedures; proven experience in designing and implementing scalable data solutions in Azure.
Preferred Qualifications: Minimum 6 years of hands-on experience working with Azure Data Services; strong analytical and problem-solving skills; excellent communication skills, both verbal and written; ability to collaborate effectively in a fast-paced, cross-functional environment.
Immediate Joiners Only: We are looking for professionals who can join immediately and contribute to dynamic projects.
Application Process: If you are ready to take the next step in your career and be part of a leading IT services company, please send your updated CV to heena.ruchwani@gspann.com. Join GSPANN Technologies and accelerate your career with exciting opportunities in data engineering!
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Noida
Work from Office
Must be: Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline. 3-5+ years of experience in SQL Development and Data Engineering. Strong hands-on skills in T-SQL, including complex joins, indexing strategies, and query optimization. Proven experience in Power BI development, including building dashboards, writing DAX expressions, and using Power Query.
Should be: At least 1+ year of hands-on experience with one or more components of the Azure Data Platform: Azure Data Factory (ADF), Azure Databricks, Azure SQL Database, Azure Synapse Analytics. Solid understanding of data warehouse architecture, including star and snowflake schemas, and data lake design principles.
Familiarity with: Data Lake and Delta Lake concepts; Lakehouse architecture; data governance, data lineage, and security controls within Azure.
Posted 1 month ago
5.0 - 10.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Neudesic, an IBM Company, is home to some very smart, talented, and motivated people. People who want to work for an innovative company that values their skills and keeps their passions alive with new challenges and opportunities. We have created a culture of innovation that makes Neudesic not only an industry leader, but also a career destination for today's brightest technologists. You can see it in our year-over-year growth, made possible by satisfied employees dedicated to delivering the right solutions to our clients.
Must Have Skills:
Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON & Parquet.
Experience in creating ADF pipelines to source and process data sets.
Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets.
Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and various data storage options on the cloud.
Development experience in orchestration of pipelines.
Experience in deployment and monitoring techniques.
Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
Experience in handling operations/integration with the source repository.
Must have good knowledge of data warehouse concepts and data warehouse modelling.
Good to Have Skills:
Familiarity with DevOps, Agile Scrum methodologies, and CI/CD.
Domain-driven development exposure.
Analytical / problem-solving skills.
Strong communication skills.
Good experience with unit, integration, and UAT support.
Able to design and code reusable components and functions.
Should be able to review design and code, and provide review comments with justification.
Zeal to learn new tools/technologies and adopt them.
Power BI and Data Catalog experience.
Posted 1 month ago
5.0 - 10.0 years
0 - 1 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Contractual (project-based). Notice Period: Immediate to 15 days. Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u Resume: shweta.soni@panthsoftech.com
Posted 1 month ago
6.0 - 10.0 years
16 - 30 Lacs
Pune
Hybrid
As an Analytics Engineer at Vanderlande, you'll be at the forefront of driving data-driven solutions and shaping the future of our organization by working on the data platform. Your key responsibilities will involve translating business needs into functional requirements, designing and developing data products, pipelines, and reports, and analyzing data to solve use cases, resulting in optimized business processes and fact-based decision-making. As part of a cross-functional full-stack team, together with your colleagues you are responsible for the creation and delivery of an end-to-end solution to our business stakeholders. In this role, you are a tech-savvy, action-oriented, and collaborative colleague who can wear multiple hats - part Data Engineer, part Data Analyst - and even though you have an area of expertise, you can fulfil each of those roles up to a certain point. On a day-to-day basis, you will focus on creating and maintaining data products and data pipelines using Python, PySpark, and SQL, and dashboards using tools like Qlik or Power BI. Working in an Agile environment, you proactively contribute to scrum events, ensuring seamless coordination with the team and swift adaptability to changing priorities. You thrive in a fast-paced, iterative development environment where constant feedback and continuous improvement are key.
In this role, you:
Translate business needs into functional requirements, providing essential information on business use cases.
Translate functional requirements into thorough and feasible data products, analytics solutions, and dynamic dashboards.
Utilize Python, PySpark, SQL, or R for data retrieval and manipulation.
Develop, test, and maintain data products and pipelines using the Azure stack and Databricks, ensuring data reliability and quality.
Design and implement architectures for efficient data extraction and transformation.
Work on creating and maintaining landing zones in the data platform.
Actively participate in and contribute to Continuous Integration and Continuous Deployment (CI/CD) practices, ensuring smooth and efficient development and deployment processes within the data platform.
Integrate data pipelines and reports into testing frameworks, allowing for rigorous performance testing and validation to ensure seamless performance.
Monitor and maintain data pipeline stability, offering support when required.
Analyze, interpret, and visualize data to drive business process optimization and fact-based decision-making.
Create, deploy, and maintain interactive dashboards and visualizations using Qlik or Power BI.
Perform comprehensive analysis and proactively implement solutions to assess and enhance data quality and reliability.
Are eager to improve yourself and strive for continuous enhancement of processes and development of our data products.
Stay updated with the latest developments in the analytics field and share knowledge with the team.
Your Qualifications and Skills:
If you're an experienced, enthusiastic, and versatile Analytics Engineer, you will bring:
A minimum bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
6 to 10 years of total experience, with a minimum of 3 years of prior experience working as an Analytics Engineer/Data Engineer or in a similar role.
Ability to work effectively as part of a cross-functional, international full-stack team, collaborating with other developers and stakeholders.
Experience writing code in Python, PySpark, SQL, or R for data retrieval and manipulation (a strong preference).
Demonstrated experience and proficiency in working with the Azure stack and Databricks is required for this role.
Proficiency or interest in using DevOps practices and tools for continuous integration, continuous delivery (CI/CD), and automated deployment of data products and data pipelines.
Strong communication skills to effectively convey complex technical concepts in a clear and understandable manner to both technical and non-technical stakeholders.
Enthusiasm, proactivity, and drive, actively seeking opportunities for personal development and growth.
Demonstrated curiosity and commitment to staying updated with the latest trends, tools, and best practices in the data & analytics field.
Familiarity with agile methodologies and the ability to thrive in their dynamic and collaborative environment.
Knowledge of visualization tools like Power BI or Qlik is a plus.
Posted 1 month ago
2.0 - 7.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: Azure DevOps. Experience: 2+ years. Location: Pan India.
Posted 1 month ago
9.0 - 14.0 years
27 - 40 Lacs
Hyderabad
Remote
Experience Required: 8+ years
Mode of work: Remote
Skills Required: Azure Databricks, Event Hub, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners / permanent / contract role (can join by 14th July 2025)
Responsibilities
Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks (see the illustrative Kafka ingestion sketch after this posting).
Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
Provide technical guidance and expertise to junior data engineers and developers.
Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
Contribute to the continuous improvement of data engineering processes, tools, and best practices.
Requirements:
Bachelor's or master's degree in computer science, engineering, or a related field.
10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
Mandatory skills: Azure Databricks, Event Hub, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark.
Strong experience with cloud platforms such as Azure or AWS.
Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
Experience in designing and implementing data processing pipelines using Spark and Databricks.
Strong knowledge of SQL and experience with relational and NoSQL databases.
Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
Good understanding of data modelling and schema design principles.
Experience with data governance and compliance frameworks.
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills to work effectively in a cross-functional team.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
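For illustration only (not part of the posting): a hedged sketch of reading a Kafka topic (or an Event Hubs Kafka-compatible endpoint) with Spark Structured Streaming and landing it in Delta, as the responsibilities above describe. The broker, topic, and paths are placeholders, and the Spark Kafka connector package is assumed to be available on the cluster.

```python
# Hypothetical example: Kafka -> Delta bronze layer with Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
            .option("subscribe", "trades")                       # placeholder topic
            .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
decoded = raw.select(F.col("value").cast("string").alias("payload"),
                     F.col("timestamp"))

(decoded.writeStream
        .format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/trades")
        .start("/mnt/bronze/trades"))
```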
Posted 1 month ago
6.0 - 11.0 years
18 - 33 Lacs
Kolkata, Bengaluru, Delhi / NCR
Work from Office
Minimum 6 years of experience in building ETL pipelines using Azure Data Factory and Azure Synapse. Minimum 6 years of ETL development using PL/SQL. Exposure to Azure Databricks is an added advantage. Above-average communication skills. Individual contributor (IC) role. Can join in 2-3 weeks.
Required Candidate profile: Expert in SSIS and Azure Synapse ETL development. Significant experience in developing Python or PySpark. Significant experience with database and data storage platforms, including Azure Data Lake.
Posted 1 month ago
3.0 - 8.0 years
6 - 16 Lacs
Noida, Kolkata, Bengaluru
Work from Office
Job Description
Your Role and Responsibilities
Responsible for implementing a robust data estate using the Microsoft Azure stack.
Responsible for creating reusable and scalable data pipelines.
Responsible for developing and deploying Big Data BI solutions across industries and functions.
Responsible for the development and deployment of new data platforms.
Responsible for creating reusable components for rapid development of the data platform.
Responsible for high-level architecture and data modelling.
Responsible for guiding and mentoring the development teams.
Responsible for getting UAT and Go Live completed for data platform projects.
Play an active role in team meetings and workshops with clients.
Required Technical and Professional Expertise
Minimum of 4+ years of experience in Data Warehousing with Big Data or Cloud.
Graduate degree in computer science or a relevant subject.
Good software engineering principles.
Strong in SQL queries and data models.
Good understanding of OLAP & OLTP concepts and implementations.
Experience working on Azure services like Azure Data Factory, Azure Functions, Azure SQL, Azure Databricks, Azure Data Lake, Synapse Analytics, etc.
Knowledge of PowerShell is good to have.
Experience working in an Agile delivery model.
Knowledge of Big Data technologies, such as Spark and Hadoop/MapReduce, is good to have.
Knowledge and practical experience of cloud-based platforms and their ML/DL offerings (such as Google GCP, AWS, and Azure) would be advantageous.
Understanding of infrastructure (including hosting, container-based deployments, and storage architectures) would be an advantage.
Preferred Technical and Professional Experience
6+ years of experience in ELT/data integration for data warehousing, data lakes, and business intelligence.
Expertise in data storage, ETL/ELT, and data analytics tools and technologies.
Proven hands-on experience in designing, modelling, testing, and optimizing data warehousing and data lake solutions.
Proficiency in SQL and Spark-based data processing.
Experience with Azure cloud big data technologies like Azure Data Factory, Azure Databricks, Fabric, Synapse, ADLS Gen2, etc.
Strong understanding of data and analytics architecture.
Experience working with Agile methodologies (Scrum, Kanban) and in large transformational programs.
Ability to document architecture and present solutions effectively.
Strong data modelling skills (conceptual, logical, and physical).
Experience with Python.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Data Software Engineer with 5 to 12 years of experience in Big Data and related technologies. The ideal candidate will have expertise in distributed computing principles, Apache Spark, and hands-on programming with Python.
Roles and Responsibility
Design and implement Big Data solutions using Apache Spark and other relevant technologies.
Develop and maintain large-scale data processing systems, including stream-processing systems.
Collaborate with cross-functional teams to integrate data from multiple sources, such as RDBMS, ERP, and files.
Optimize performance of Spark jobs and troubleshoot issues.
Lead a team efficiently and contribute to the development of Big Data solutions.
Experience with native cloud data services, such as AWS or Azure Databricks.
Job
Expert-level understanding of distributed computing principles and Apache Spark.
Hands-on programming experience with Python and proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop.
Experience with building stream-processing systems using technologies like Apache Storm or Spark Streaming.
Good understanding of Big Data querying tools, such as Hive and Impala.
Knowledge of ETL techniques and frameworks, along with experience with NoSQL databases like HBase, Cassandra, and MongoDB.
Ability to work in an Agile environment and lead a team efficiently.
Strong understanding of SQL queries, joins, stored procedures, and relational schemas.
Experience with integrating data from multiple sources, including RDBMS (SQL Server, Oracle), ERP, and files.
Posted 1 month ago
3.0 - 8.0 years
3 - 6 Lacs
Bengaluru
Work from Office
We are looking for a skilled SQL / PySpark professional with 3 to 8 years of experience to join our team. The ideal candidate will have expertise in developing data pipelines and transforming data using Databricks, Synapse notebooks, and Azure Data Factory.
Roles and Responsibility
Collaborate with technical architects and cloud solutions teams to design data pipelines, marts, and reporting solutions.
Code, test, and optimize Databricks jobs for efficient data processing and report generation.
Set up scalable data pipelines integrating with various data sources and cloud platforms using Databricks.
Ensure best practices are followed in terms of code quality, data security, and scalability.
Participate in code and design reviews to maintain high development standards.
Optimize data querying layers to enhance performance and support analytical requirements.
Leverage Databricks to set up scalable data pipelines that integrate with a variety of data sources and cloud platforms.
Collaborate with data scientists and analysts to support machine learning workflows and analytic needs.
Stay updated with the latest developments in Databricks and associated technologies to drive innovation.
Job
Proficiency in PySpark or Scala and SQL for data processing tasks.
Hands-on experience with Azure Databricks, Delta Lake, Delta Live Tables, Auto Loader, and Databricks SQL (see the illustrative Auto Loader sketch after this posting).
Expertise with Azure Data Lake Storage (ADLS) Gen2 for optimized data storage and retrieval.
Strong knowledge of data modeling, ETL processes, and data warehousing concepts.
Experience with Power BI for dashboarding and reporting is a plus.
Familiarity with Azure Synapse for analytics and integration tasks is desirable.
Knowledge of Spark Streaming for real-time data stream processing is an advantage.
MLOps knowledge for integrating machine learning into production workflows is beneficial.
Familiarity with Azure Resource Manager (ARM) templates for infrastructure-as-code (IaC) practices is preferred.
Demonstrated expertise of 4-5 years in developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory.
Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.
Proficiency in building and optimizing query layers using Databricks SQL.
Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
Prior experience in developing, optimizing, and deploying Power BI reports.
Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.
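For illustration only (not part of the posting): a hedged Databricks Auto Loader sketch of the incremental ingestion pattern mentioned above. It is Databricks-specific, and the paths and file format are hypothetical.

```python
# Hypothetical example: incrementally ingest landing-zone JSON with Auto Loader.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (spark.readStream
           .format("cloudFiles")                                    # Auto Loader source
           .option("cloudFiles.format", "json")
           .option("cloudFiles.schemaLocation", "/mnt/schemas/orders")
           .load("/mnt/landing/orders/"))

(df.writeStream
   .format("delta")
   .option("checkpointLocation", "/mnt/checkpoints/orders")
   .trigger(availableNow=True)                                      # incremental batch run
   .start("/mnt/bronze/orders"))
```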
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Senior Azure Data Engineer with 5 to 10 years of experience to design and implement scalable data pipelines using Azure technologies, driving data transformation, analytics, and machine learning. The ideal candidate will have a strong background in data engineering and proficiency in Python, PySpark, and Spark Pools.
Roles and Responsibility
Design and implement scalable Databricks data pipelines using PySpark.
Transform raw data into actionable insights through data analysis and machine learning.
Build, deploy, and maintain machine learning models using MLlib or TensorFlow (see the illustrative MLlib sketch after this posting).
Optimize cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources.
Execute large-scale data processing using Spark Pools and fine-tune configurations for efficiency.
Collaborate with cross-functional teams to identify business requirements and develop solutions.
Job
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Minimum 5 years of experience in data engineering, with at least 3 years specializing in Azure Databricks, PySpark, and Spark Pools.
Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow.
Hands-on experience with Azure Data Lake, Blob Storage, Synapse Analytics, and other relevant technologies.
Strong understanding of data modeling, data warehousing, and ETL processes.
Experience with agile development methodologies and version control systems.
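For illustration only (not part of the posting): a hedged Spark MLlib sketch of the model-building work mentioned in the responsibilities above. The feature table, columns, and output path are hypothetical, and a prepared training DataFrame with a `label` column is assumed.

```python
# Hypothetical example: train and save a simple MLlib logistic regression pipeline.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()

# Placeholder feature table with columns: tenure, monthly_spend, label.
train = spark.read.parquet("/mnt/features/churn")

assembler = VectorAssembler(inputCols=["tenure", "monthly_spend"],
                            outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(train)
model.write().overwrite().save("/mnt/models/churn_lr")
```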
Posted 1 month ago