
1265 Azure Databricks Jobs - Page 50

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

6 - 11 years

8 - 13 Lacs

Gurugram

Work from Office

What this job involves: JLL Technologies' Enterprise Data team is a newly established central organization that oversees JLL's data strategy. The Senior Data Engineer will work with our colleagues at JLL around the globe to provide solutions, develop new products, and build enterprise reporting and analytics capabilities that reshape the business of commercial real estate using the power of data, and we are just getting started on that journey!

The Senior Data Engineer is a self-starter who can work in a diverse and fast-paced environment as part of our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. It is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, drawing on in-depth knowledge of data, infrastructure, technologies, and data engineering experience.

As a Senior Data Engineer focused on AI and emerging technologies at JLL Technologies, you will:
- Design and implement advanced data infrastructure incorporating AI-driven technologies, with a focus on retrieval-augmented generation (RAG) and other cutting-edge approaches
- Architect and implement modern data approaches, including RAG systems, to meet key business objectives and provide end-to-end data solutions that enhance AI capabilities and decision-making processes
- Conduct independent research into emerging data technologies, AI advancements, and industry trends, identifying potential innovations that could benefit the organization
- Initiate and execute proof-of-concept (POC) projects to evaluate new features, technologies, and methodologies in data engineering, AI, and machine learning
- Gain and apply a comprehensive understanding of data flow and storage across multiple applications (e.g., CRM, Broker & Sales tools, Finance, HR), while considering the integration of AI-powered systems and RAG technologies
- Design and develop data management and persistence solutions for various application use cases, leveraging both relational and non-relational databases, while enhancing data processing capabilities and implementing AI-driven retrieval and generation systems
- Collaborate with cross-functional teams to identify and implement opportunities for RAG and other AI technologies to improve data retrieval, analysis, and generation across the organization
- Stay current with emerging trends in AI, machine learning, and data engineering, applying this knowledge to keep the organization at the forefront of AI-driven data management
- Develop implementation strategies for successfully integrating proven new technologies and features into existing data infrastructure and processes
- Continuously enhance your skills and knowledge, staying ahead of the curve in data engineering and AI integration, and share insights with team members to foster a culture of innovation

Sounds like you? What we are looking for:
- 6+ years' overall work experience and a Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science
- A minimum of 6 years of experience in MDM design, development, support, and operations, with proficiency in Python and Node.js
- Excellent technical, analytical, and organizational skills
- A hands-on engineering lead who is curious about technology, able to quickly adapt to change, and understands technologies supporting areas such as cloud computing (with a focus on Azure), microservices, streaming technologies, networking, and security
- Proficiency in Azure services, including Azure Functions, Cosmos DB, and Azure Databricks
- Experience with vector databases like Pinecone and in-memory data stores like Redis
- Familiarity with container technologies and orchestration platforms
- Ability to design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing data processing capabilities
- Experience handling unstructured data, working in a data lake environment, leveraging data streaming, and developing event/queue-driven data pipelines
- A team player: reliable, self-motivated, and self-disciplined, capable of executing multiple projects simultaneously in a fast-paced environment with cross-functional teams
- Genuine excitement about data infrastructure and operations, with familiarity working in cloud environments, particularly Azure
- Ability to optimize and tune database performance, particularly for Cosmos DB and Redis
- Understanding of vector embeddings and their applications in AI and machine learning contexts

What you can expect from us: You'll join an entrepreneurial, inclusive culture, one where we succeed together, across the desk and around the globe, and where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits, and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you...
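For candidates unfamiliar with the retrieval-augmented generation (RAG) pattern this listing emphasizes, here is a minimal, hypothetical Python sketch of the retrieval step: embed documents, rank them against a query, and assemble a prompt context. The embedding function, document set, and similarity logic are illustrative assumptions only; a production system at the scale described would use a real embedding model and a vector store such as Pinecone or Redis rather than an in-memory array.

```python
# Minimal, hypothetical sketch of the retrieval step in a RAG pipeline:
# embed documents, rank them against a query, and build a prompt context.
# A real system would call an embedding model and a vector store
# (e.g., Pinecone or Redis) instead of this toy in-memory setup.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model (assumption)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

docs = [
    "lease abstract for building A",
    "tenant revenue summary Q3",
    "HR onboarding policy",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    scores = doc_vecs @ q  # cosine similarity, since all vectors are unit-norm
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("Q3 tenant revenue"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```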

Posted 2 months ago

Apply

5 - 10 years

15 - 30 Lacs

Noida, Bengaluru

Hybrid

Minimum experience working on projects as a Senior Azure Data Engineer. B.Tech/B.E. degree in Computer Science or Information Technology. Experience with enterprise integration and ETL (extract, transform, load) tools such as Databricks, Azure Data Factory, and Talend/Informatica. Experience analyzing data using Python, Spark Streaming, and SSIS/Informatica batch ETL, and database tools like SQL and MongoDB, to process data from different sources. Experience with platform automation tools (DB management, Azure, Jenkins, GitHub) is an added advantage. Design, operate, and integrate different systems to enable efficiencies in key areas of the business. Understand business requirements by interacting with business users and/or reverse-engineering existing data products. Good understanding and working knowledge of distributed databases and pipelines. Ability to analyze and identify the root cause of technical issues. Proven ability to use relevant data analytics approaches and tools to problem-solve and troubleshoot. Excellent documentation and communication skills.

Posted 2 months ago

Apply

10 - 20 years

0 Lacs

Kochi, Pune, Bengaluru

Work from Office

Data Architecture Design and Implementation, Data Integration, Optimization and Performance Tuning, Collaboration with Data Teams, Security and Governance, Cloud Infrastructure and Automation, Continuous Improvement and Innovation

Posted 2 months ago

Apply

1 - 6 years

6 - 16 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Job Title: Data Architect
Location: Noida (Sec-132)
Job Description:
1. Strong experience in Azure: Azure Data Factory, Azure Data Lake, Azure Databricks
2. Good at Cosmos DB and Azure SQL Data Warehouse/Synapse
3. Excellent in data ingestion (batch and real-time processing)
4. Good understanding of Synapse workspace and Synapse Analytics
5. Good hands-on experience with PySpark or Scala Spark
6. Good hands-on experience with Delta Lake and Spark Streaming
7. Good understanding of Azure DevOps and Azure infrastructure concepts
8. At least one end-to-end, hands-on project implementation as an architect
9. Expert and persuasive communication skills (verbal and written)
10. Expert in presentations and skilled at managing multiple clients
11. Good at Python/shell scripting
12. Designing the data catalog, governance architecture, and data security
13. Developing and maintaining a centralized data catalog that documents metadata, data lineage, and data definitions across various data sources and systems
14. Developing metadata to ensure data discoverability, accessibility, and understanding for data users
15. Implementing data governance workflows and processes to manage the data lifecycle
16. All other duties and responsibilities as assigned
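As a hedged illustration of item 6 above (Delta Lake and Spark Streaming), the following PySpark sketch reads a Kafka stream and appends it to a Delta table. The broker, topic, and storage paths are assumptions, and the snippet presumes a Databricks/Delta runtime with the Spark-Kafka connector available on the cluster.

```python
# Hypothetical sketch: stream ingestion into a Delta table with PySpark
# Structured Streaming. Broker, topic, and paths are assumptions; the
# Kafka source requires the spark-sql-kafka connector on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("delta-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed endpoint
    .option("subscribe", "ingest-topic")               # assumed topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/ingest")  # assumed path
    .outputMode("append")
    .start("/mnt/delta/bronze/events")                        # assumed path
)
query.awaitTermination()
```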

Posted 2 months ago

Apply

5 - 10 years

15 - 20 Lacs

Gurugram

Hybrid

Data Engineer (Immediate Joiners)

Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Databricks, ETL processes, and SQL.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time.
- Develop complex data models and algorithms to solve business problems using PySpark and Spark SQL.
- Ensure scalability, reliability, and performance of the system by monitoring its behavior under various loads.
- Participate in code reviews to ensure adherence to coding standards and best practices.

Desired Candidate Profile:
- 5-10 years of experience as a Data Engineer with expertise in Azure Databricks, ETL processes, PySpark, and SQL.
- Bachelor's degree (B.Tech/B.E.) from a reputed institution, in any specialization.
- Strong understanding of big data technologies such as Hadoop ecosystem components, including HDFS, MapReduce, Hive, etc.
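To make the PySpark and Spark SQL pipeline work concrete, here is a minimal sketch of a batch transformation of the kind such a role involves; the table and column names are hypothetical assumptions, not from the posting.

```python
# Minimal, hypothetical PySpark + Spark SQL batch transformation:
# read a source table, filter, aggregate, and write a curated table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.table("bronze.orders")  # assumed source table

daily = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

daily.createOrReplaceTempView("daily_revenue")
top = spark.sql("SELECT * FROM daily_revenue ORDER BY revenue DESC LIMIT 10")
top.write.mode("overwrite").saveAsTable("gold.daily_revenue_top10")  # assumed target
```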

Posted 2 months ago

Apply

6 - 9 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Sharing the JD for your reference. Experience: 6-10+ years. Primary skill set: Azure Databricks, ADF, SQL, Unity Catalog, PySpark/Python. Kindly share the following details: updated CV, relevant skills, total experience, current CTC, expected CTC, notice period, current location, and preferred location.

Posted 2 months ago

Apply

5 - 10 years

20 - 35 Lacs

Noida, Bengaluru, Mumbai (All Areas)

Hybrid

Job Description / Skill set:
- 5+ years of related experience with a Bachelor's degree; consulting experience preferred.
- 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure and functions.
- 3+ years of experience in Power BI and data warehousing, performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience with AWS (e.g., S3, Athena, Glue, Lambda) preferred.
- Deep understanding of data warehousing concepts (dimensional/star schema, SCD2, Data Vault, denormalized, OBT) and of implementing highly performant data ingestion pipelines from multiple sources.
- Strong proficiency in Python and SQL.
- Deep understanding of Databricks platform features (Delta Lake, Databricks SQL, MLflow).
- Experience with CI/CD on Databricks using tools such as Bitbucket, GitHub Actions, and the Databricks CLI.
- Integrating the end-to-end Databricks pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are always maintained.
- Working within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
- Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and MLflow.
- Basic working knowledge of API- or stream-based data extraction processes such as the Salesforce API and Bulk API.
- Understanding of data management principles (quality, governance, security, privacy, lifecycle management, cataloguing).
- Excellent problem-solving and analytical skills.
- Able to work independently.
- Excellent oral and written communication skills.
- Nice to have: Databricks certifications and AWS Solutions Architect certification.
- Nice to have: experience building data pipelines from business applications such as Salesforce, Marketo, NetSuite, and Workday.
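Since this listing calls out SCD2 on Databricks, here is a hedged sketch of one common SCD2 load pattern on Delta Lake: expire changed current rows, then insert fresh versions. The tables and columns (dim.customer, stage.customer, address) are illustrative assumptions, not part of the posting.

```python
# Hedged sketch of an SCD2-style load on Delta Lake in two Spark SQL steps:
# (1) expire current rows whose tracked attribute changed, then
# (2) insert a fresh current row for every staged record without an open match.
# Tables dim.customer / stage.customer and their columns are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

spark.sql("""
    MERGE INTO dim.customer AS d
    USING stage.customer AS s
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND d.address <> s.address THEN
      UPDATE SET is_current = false, end_date = current_date()
""")

spark.sql("""
    INSERT INTO dim.customer
    SELECT s.customer_id, s.address, current_date() AS start_date,
           CAST(NULL AS DATE) AS end_date, true AS is_current
    FROM stage.customer AS s
    LEFT JOIN dim.customer AS d
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHERE d.customer_id IS NULL
""")
```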

Posted 2 months ago

Apply

3 - 7 years

10 - 20 Lacs

Bengaluru

Work from Office

For this job, you will require: • A B.Tech/Master's degree or equivalent in computer science, computer engineering, IT, technology, or a related field. • 8 or more years of good hands-on experience in Azure data engineering. • Good hands-on experience with Azure Data Lake, Data Factory, Databricks, and Logic Apps. • Strong SQL skills and hands-on experience. To join us, you should: • Have excellent relationship-building and organizational skills. • Enjoy working in a fast-changing field. • Be able to demonstrate the solution/product capabilities to clients. • Be able to design new solutions and products for multiple domains. You must have: • Designed or worked on solutions on the Azure platform using Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Synapse Analytics, Azure SQL, PowerShell, Azure Logic Apps, Azure Automation, Blob Storage, and T-SQL. • Completed at least one greenfield Data Lake, Delta Lake, or data warehousing project in the Azure landscape. • Designed, built, and maintained Azure data analytics solutions. • Familiarity with CI/CD on the Azure DevOps platform. • Built solutions per customer needs and the defined architecture. • The ability to analyze, design, implement, and test medium to complex mappings independently. • Strong experience transforming large datasets per business needs. • The ability to write scripts in Python or another scripting language. • An understanding of self-service BI/analytics, user security, mobility, and other BI concepts. • Participated in design/discovery sessions and workshops with end users. • A flexible approach to dealing with ad-hoc queries. • The willingness to guide and help junior team members achieve their goals and objectives.

Posted 2 months ago

Apply

3 - 6 years

20 - 25 Lacs

Hyderabad

Work from Office

Overview: As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, and you will drive a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of platform engineers who build platform products for platform and cost optimization, build tools for platform ops and DataOps on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help manage the platform governance team, which builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
- Contribute actively to cost optimization of platforms and services.
- Manage and scale Azure data platforms to support new product launches, and drive platform stability and observability across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platforms, covering cost and performance.
- Implement best practices for systems integration, security, performance, and platform management.
- Empower the business by creating value through increased adoption of data, data science, and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for platforms and processes running in production.
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create and audit reusable packages or libraries.

Qualifications:
- 2+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and data engineering.
- 1+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 1+ years of experience in Databricks optimization and performance tuning.
- Experience managing multiple teams and coordinating with different stakeholders to implement the team's vision.
- Fluency with Azure cloud services; Azure certification is a plus.
- Experience integrating multi-cloud services with on-premises technologies.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
- Experience with version control systems like GitHub and deployment and CI tools.
- Experience with Azure Data Factory and Azure Databricks.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools such as Power BI.

Posted 2 months ago

Apply

4 - 9 years

7 - 15 Lacs

Bengaluru

Work from Office

NOTE: Experience in Azure data engineering with ADF, ADB, ADLS, and Azure Synapse is mandatory.

Hiring for: MNC client
Location: Bangalore (work from office)
Role: Senior Analyst, Azure Data Management
Shift: General

Purpose: Understand client requirements and build ETL solutions using Azure Data Factory, Azure Databricks, and PySpark. Build solutions that can absorb client change requests easily. Find innovative ways to accomplish tasks and handle multiple projects simultaneously and independently. Work with data and appropriate teams to effectively source required data. Identify data gaps and work with client teams to effectively communicate findings to stakeholders/clients.

Main accountabilities:
- Develop ETL solutions to populate a centralized repository by integrating data from various data sources.
- Create data pipelines, data flows, and data models according to business requirements.
- Implement all transformations according to business needs.
- Identify data gaps in the data lake and work with relevant data/client teams to obtain the data required for dashboarding/reporting.
- Strong experience working on the Azure data platform, Azure Data Factory, and Azure Databricks.
- Strong experience working on ETL components and scripting languages like PySpark and Python.
- Experience creating pipelines, alerts, email notifications, and scheduled jobs.

What are we looking for?
- Bachelor's degree in Engineering or Science (or equivalent) with at least 4-7 years of overall experience in data management, including data integration, modeling, and optimization.
- Minimum 3 years of experience working on Azure cloud, Azure Data Factory, and Azure Databricks.
- Minimum 2-3 years of experience in PySpark, Python, etc. for data ETL.
- In-depth understanding of data warehouse and ETL concepts and modeling principles.
- Strong ability to design, build, and manage data.
- Strong understanding of data integration.

Posted 2 months ago

Apply

4 - 6 years

12 - 16 Lacs

Navi Mumbai

Work from Office

Position: Data Engineer
Experience: 4+ years
Availability: Immediate joiners preferred
Work Mode: Work from Office (WFO)
Location: MIDC, Ghansoli, Navi Mumbai
Working Days: 5 days a week (client location)

About the Role: We are seeking a skilled and detail-oriented Data Engineer to join our growing team. The ideal candidate will have strong proficiency in Python, SQL, and Databricks, plus experience building and maintaining scalable data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineers to ensure data availability, reliability, and quality across our systems.

Required Skills & Qualifications:
- Experience with Databricks (mandatory)
- Proficiency with open-source technologies like HDFS, Hive, Kafka, and Spark (mandatory)
- Ability to resolve ongoing issues with operating the cluster (mandatory)
- Expert in SQL
- Experience in Python
- Experience building stream-processing systems (optional)

Posted 2 months ago

Apply

8 - 13 years

25 - 30 Lacs

Bengaluru

Hybrid

- Overall 8+ years of solid experience in data projects.
- Excellent ability to design, develop, and maintain robust ETL/ELT pipelines for data ingestion, transformation, and storage.
- Proficient in SQL; must have worked on complex joins, subqueries, functions, and procedures.
- Able to perform SQL tuning and query optimization without support.
- Design, develop, and maintain ETL pipelines using Databricks and PySpark to extract, transform, and load data from various sources.
- Good working experience with Delta tables, deduplication, and merging with terabyte-scale data sets.
- Optimize and fine-tune existing ETL workflows for performance and scalability.
- Excellent knowledge of dimensional modelling and data warehouses.
- Experience working with large data sets.
- Experience with batch and real-time data processing (good to have).
- Implement data validation and quality checks, and ensure adherence to security and compliance standards.
- Ability to develop reliable, secure, compliant data processing systems.
- Work closely with cross-functional teams to support data analytics, reporting, and business intelligence initiatives.
- Self-driven; able to work independently without support.
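As an illustration of the Delta table deduplication and merge experience requested above, a minimal PySpark sketch follows; it assumes the delta-spark package (or a Databricks runtime) and made-up table and column names.

```python
# Minimal, hypothetical sketch: deduplicate a staging set, then merge it
# into a Delta table. Assumes delta-spark (or a Databricks runtime) and
# illustrative table/column names.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

staged = spark.table("stage.transactions")  # assumed staging table

# Keep only the latest record per business key before merging.
w = Window.partitionBy("txn_id").orderBy(F.col("updated_at").desc())
deduped = (
    staged.withColumn("rn", F.row_number().over(w))
    .where("rn = 1")
    .drop("rn")
)

target = DeltaTable.forName(spark, "prod.transactions")  # assumed target
(
    target.alias("t")
    .merge(deduped.alias("s"), "t.txn_id = s.txn_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```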

Posted 2 months ago

Apply

5 - 7 years

30 - 32 Lacs

Bengaluru

Work from Office

The candidate will be responsible for developing and managing business intelligence solutions, transforming raw data into meaningful insights, and collaborating with various stakeholders to meet business needs. Optional skills in data engineering, such as experience with Databricks and Azure Data Factory, are highly desirable.

Roles and Responsibilities:
- Develop, design, and maintain Power BI reports and dashboards.
- Optimize data models and queries to ensure efficient performance.
- Integrate data from various sources to create comprehensive reports.
- Ensure data accuracy and consistency across all reports and dashboards.
- Provide training and support to end users on Power BI tools and functionality.

Key Characteristics:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Attention to detail and a commitment to delivering high-quality work.

Mandatory Skills:
- Proficiency in Power BI, including Power Query (M) and DAX.
- Experience with data visualization and report design.
- Strong SQL skills for data extraction and manipulation.
- Understanding of data warehousing concepts and data modeling.
- Ability to work with large datasets and optimize performance.

Optional Skills:
- Experience with Databricks, Azure Data Factory, and Azure Data Lake.
- Knowledge of other BI tools and technologies.
- Familiarity with data integration and ETL processes.

Posted 2 months ago

Apply

5 - 9 years

18 - 22 Lacs

Mohali

Remote

In this Role, Your Responsibilities Will Be:
- Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data requirements and deliver effective solutions.
- Leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives.
- Apply expertise in data modeling, with a strong focus on data warehouse and lakehouse design.
- Design and implement data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.
- Develop ETL (extract, transform, load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar tools to prepare data for analysis and reporting.
- Implement data quality checks and governance practices to ensure the accuracy, consistency, and security of data assets.
- Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads.

You will also bring strong proficiency in business intelligence (BI) tools such as Power BI and Tableau; experience with data integration and ETL tools like Azure Data Factory; proven expertise in Microsoft Fabric or similar data platforms; in-depth knowledge of the Azure cloud platform, particularly data warehousing and storage solutions; strong problem-solving skills with a track record of resolving complex technical challenges; excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders; and the ability to work independently and collaboratively within a team. Microsoft certifications in data-related fields are preferred; DP-700 (Microsoft Certified: Fabric Data Engineer Associate) is a plus.

Who You Are: You show a tremendous amount of initiative in tough situations and are exceptional at spotting and seizing opportunities. You observe situational and group dynamics and select the best-fit approach. You make implementation plans that allocate resources precisely. You pursue everything with energy, drive, and the need to finish.

For This Role, You Will Need:
- Experience: 5+ years in data warehousing with on-premises or cloud technologies.
- Analytical and problem-solving skills: strong analytical abilities with a proven track record of resolving complex data challenges.
- Communication skills: ability to effectively engage with internal customers across various functional areas.
- Database and SQL expertise: proficiency in database management, SQL query optimization, and data mapping.
- Excel proficiency: strong knowledge of Excel, including formulas, filters, macros, pivots, and related operations.
- MS Fabric expertise: extensive experience with Fabric components, including Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and semantic models.
- Programming skills: proficiency in Python and SQL/advanced SQL for data transformations and debugging.
- Flexibility: willingness to work flexible hours based on project requirements.
- Technical documentation: strong documentation skills for maintaining clear and structured records.
- Language proficiency: fluent in English.
- SQL and data modeling: advanced SQL skills, including experience with complex queries, data modeling, and performance tuning.
- Medallion Architecture: hands-on experience implementing the Medallion Architecture for data processing.
- Database experience: working knowledge of Oracle, SAP, or other relational databases.
- Manufacturing industry experience: prior experience in a manufacturing environment is strongly preferred.
- Learning agility: ability to quickly learn new business areas, software, and emerging technologies.
- Leadership and time management: strong leadership and organizational skills, with the ability to prioritize, multitask, and meet deadlines.
- Confidentiality: ability to handle sensitive and confidential information with discretion.
- Project management: capable of managing both short- and long-term projects effectively.
- Cross-functional collaboration: ability to work across various organizational levels and relationships.
- Strategic and tactical thinking: ability to balance strategic insights with hands-on execution.
- ERP systems: experience with Oracle, SAP, or other ERP systems is a plus.
- Travel requirements: willing to travel up to 20% as needed.

Preferred Qualifications that Set You Apart:
- Education: BA/BS/B.E./B.Tech in Business, Information Systems, Technology, or a related field, or a Bachelor's degree or equivalent in Science with a focus on MIS, Computer Science, Engineering, or a related discipline.
- Communication skills: strong interpersonal skills in English (spoken and written) to collaborate effectively with overseas teams.
- Database and SQL expertise: proficiency in Oracle PL/SQL.
- Azure experience: hands-on experience with Azure services, including Azure Synapse Analytics and Azure Data Lake.
- DevOps and Agile: practical experience with Azure DevOps and knowledge of Agile and Scrum methodologies.
- Certifications: Agile certification is preferred.
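For readers unfamiliar with the Medallion Architecture this listing asks for, here is a minimal, hypothetical PySpark sketch of one refinement step, promoting raw bronze records to a cleaned silver Delta table; the paths and columns are assumptions for illustration.

```python
# Hypothetical Medallion-style refinement step: promote raw bronze records
# to a cleaned, partitioned silver Delta table. Paths and columns are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/sensor_readings")  # assumed path

silver = (
    bronze
    .dropDuplicates(["device_id", "reading_ts"])
    .where(F.col("reading_value").isNotNull())
    .withColumn("reading_date", F.to_date("reading_ts"))
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("reading_date")
    .save("/lake/silver/sensor_readings")  # assumed path
)
```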

Posted 2 months ago

Apply

5 - 10 years

0 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid

Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 10th May (Saturday) 2025 - Azure Databricks / Data Factory / SQL & PySpark

Dear Candidate,

I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer. We are hosting an open walk-in drive in Chennai, Tamil Nadu on 10th May (Saturday) 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark align perfectly with what we are seeking.

Details of the walk-in drive:
Date: 10th May (Saturday) 2025
Experience: 5 to 12 years
Time: 9.00 AM to 5 PM
Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Point of contact: Azhagu Kumaran Mohan / +91-9789518386

Key skills and experience: As an Azure Data Engineer, we are looking for candidates with expertise in Databricks, Data Factory, SQL, and PySpark/Spark.

Roles and responsibilities: As part of our dynamic team, you will be responsible for designing, implementing, and maintaining data pipelines; collaborating with cross-functional teams to understand data requirements; optimizing and troubleshooting data processes; and leveraging Azure data services to build scalable solutions.

What to bring: updated resume, photo ID, and a passport-size photo.

How to register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com or +91-9789518386.

We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Note: Candidates with less than 4 years of total experience will not be shortlisted to attend the interview.

Posted 2 months ago

Apply

6 - 11 years

30 - 35 Lacs

Indore, Hyderabad, Delhi / NCR

Work from Office

Support enhancements to the MDM platform, track system performance, troubleshoot issues, and resolve production issues. Required candidate profile: 5+ years in Python and advanced SQL, including profiling and refactoring; experience with REST APIs; hands-on Azure Databricks and ADF; experience with Markit EDM or Semarchy.

Posted 2 months ago

Apply

3 - 5 years

6 - 10 Lacs

Chennai

Work from Office

Basic Qualifications:
- Minimum experience: 2-4 years designing, implementing, and supporting data warehousing and business intelligence solutions.
- Educational qualification: a bachelor's degree or equivalent in Computer Science, Engineering, Information Systems, or a related field.
- Certifications: relevant certifications such as Microsoft Azure Data Engineer, Azure Fundamentals, or equivalent are a plus.

Technical Skills:
- Data engineering and warehousing: designing, implementing, and supporting data warehousing solutions; experience with hybrid cloud deployments and integration between on-premises and cloud environments.
- Tools and technologies: Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, Azure SQL, Databricks.
- ETL and data pipelines: experience creating and maintaining data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python.
- Data transformation and integration: implementing ETL processes to extract, transform, and load data from diverse sources into data warehousing solutions.
- Spark: knowledge of Spark Core internals, Spark SQL, Structured Streaming, and Delta Lake.
- Data security and compliance: familiarity with data privacy regulations, ensuring security in cloud-based data operations.
- Data analytics: conceptual understanding of dimensional modeling, ETL processes, and reporting tools; experience with structured and unstructured data types.

Roles and Responsibilities:
- Data pipeline design and implementation: design and implement scalable, efficient data pipelines for data ingestion, transformation, and loading.
- ETL process management: build and maintain ETL processes to ensure smooth data extraction, transformation, and loading.
- Troubleshooting and issue resolution: provide deep code-level analysis of Spark and related technologies to resolve complex customer issues, particularly with Spark internals, Spark SQL, Structured Streaming, and Delta Lake.
- Performance monitoring and optimization: continuously monitor and fine-tune data pipelines and workflows to improve efficiency and performance, especially for large-scale data sets.
- Cloud integration: manage hybrid cloud deployments, integrating on-premises systems with cloud environments.
- Security and compliance: ensure data security and comply with data privacy regulations during all data engineering activities.
- Collaboration: work closely with business stakeholders to understand requirements and ensure solutions align with business needs and objectives.
- Best practices and documentation: follow data engineering best practices such as code modularity and version control, and maintain clear documentation for developed solutions.

Posted 2 months ago

Apply

5 - 7 years

8 - 15 Lacs

Bengaluru

Work from Office

Dear Candidate,

I'm actively hiring for an Azure Data Engineer role with 5+ years of relevant experience in Databricks, Python, and PySpark for EY India, Bengaluru location. ONLY immediate joiners, candidates on a 15-day notice period, or candidates currently serving notice will be considered. If you are interested in this opportunity, kindly share your updated resume with me at Krithika.L@in.ey.com with the following details: name; skill; notice period (if serving, mention LWD); contact number; email; current location; preferred location; total experience; relevant experience; current company; education (mention year of completion); current CTC (LPA); expected CTC (LPA); offer in hand (mention date of joining).

Regards,
Krithika
Talent team, EY India

Posted 2 months ago

Apply

3 - 7 years

10 - 20 Lacs

Hyderabad

Hybrid

Skills: Azure Data Factory, Azure Databricks, PySpark, Azure Functions
SQL Scripting: Python, PySpark, Unix, SQL
Orchestration: Azure Data Factory, Databricks
CI/CD: GitHub, Azure DevOps
Experience with batch and streaming pipelines (Azure Event Hubs / Stream Analytics)

About Accion Labs: We help companies with faster, leaner, and smarter digital engineering. Headquartered in Pittsburgh, Pennsylvania, USA, Accion Labs is a product engineering company committed to helping organizations transform their business using emerging technologies. Accion Labs makes the digital engineering journey a lot easier with 25+ IP products, a talent pool of 1900+ employees, and global offices in the US, UK, Singapore, Malaysia, and India (Bangalore, Mumbai, Pune, Goa).

With best regards,
Sejal Patel | Lead Talent Acquisition | sejal.patel@accionlabs.com
www.accionlabs.com
US | India | Singapore | Malaysia | UAE | Australia | UK
Driving Outcomes Through Actions

Posted 2 months ago

Apply

3 - 8 years

5 - 15 Lacs

Noida, Gautam Buddha Nagar, Greater Noida

Work from Office

We are looking for a highly skilled Azure Databricks Data Engineer to join our team. The ideal candidate should have experience primarily in Databricks and Python.

Company website: https://www.ecomstreet.com/
Experience: 3+ years
Location: Greater Noida West, near Gaur City Mall

We are looking for immediate joiners. If interested, please revert with your updated resume to malti@ecommstreet.com.

- Candidate must have a minimum of 3+ years of hands-on experience on the Databricks platform.
- Analyze business requirements and translate them into technical specifications for data pipelines, data lakes, and analytical processes on the Databricks platform.
- Work as part of a team to develop cloud data and analytics solutions.
- Participate in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions.
- Good understanding of the Azure Databricks platform and its clusters; able to build data analytics solutions that support the required performance and scale.
- Design, develop, and maintain data pipelines and data streams.
- Extract and transform data, especially unstructured data, across various data processing layers using Databricks and Python.
- Familiarity with tools such as Jira, Slack, and GitHub.
- Ability to write complex queries for data processing.

Regards,
Malti Rawat
HRBP - Recruitment
m. +91 9811767269
e. malti@ecommstreet.com
w. https://www.ecomstreet.com/

Posted 2 months ago

Apply

5 - 10 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune, and Bangalore).

Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days in office per week)

Job Description:
- 5-14 years of experience in Big Data and related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with native cloud data services on AWS/Azure
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology

WE OFFER:
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, global knowledge-sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored tech talks and hackathons
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health and medical benefits, retirement benefits, paid time off, and flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 3 months ago

Apply

0 - 3 years

3 - 8 Lacs

Pune

Work from Office

Work Experience: 0-3 years of relevant experience in an Azure data engineering role.

Role Focus: As a Data Engineer, you will contribute to cutting-edge global projects and innovative product initiatives, delivering impactful solutions for our Fortune clients. In this role, you will take ownership of the entire data pipeline and infrastructure development lifecycle, from ideation and design to implementation and ongoing optimization. Your efforts will ensure the delivery of high-performance, scalable, and reliable data solutions. Join us to become a driving force in shaping the future of data infrastructure and innovation, paving the way for transformative advancements in the data ecosystem.

Key Responsibilities:
- Data pipeline development: build, maintain, and optimize ETL/ELT pipelines for seamless data flow.
- Data integration: consolidate data from various sources into unified systems.
- Database management: design and optimize scalable data storage solutions.
- Data quality assurance: ensure data accuracy, consistency, and completeness.
- Collaboration: work with analysts, scientists, and stakeholders to meet data needs.
- Performance optimization: enhance pipeline efficiency and database performance.
- Data security: implement and maintain robust data security and governance policies.
- Innovation: adopt new tools and design scalable solutions for future growth.
- Monitoring: continuously monitor and maintain data systems for reliability.

Data Engineers ensure reliable, high-quality data infrastructure for analytics and decision-making.

Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Certifications in a related field are an added advantage.

Key Competencies:
- Core skills: proficiency in SQL, Python, or Scala for data processing and manipulation; must have experience with SQL, Python, and Hadoop.
- Data platforms: experience with cloud platforms such as AWS and Azure; experience with cloud computing platforms (AWS, Azure, GCP, etc.), DevOps practices, and Agile development methodologies is good to have, and ETL or other similar technologies are an advantage.
- Tools: Azure Data Factory and Azure Databricks.
- Soft skills: strong problem-solving abilities, plus collaboration and communication skills to work effectively with technical and non-technical teams.

Apply now: To apply, please submit your resume and a cover letter as a statement of purpose. Interested candidates should send a detailed profile to hire@inteliment.com.

Posted 3 months ago

Apply

8 - 12 years

15 - 30 Lacs

Chennai

Hybrid

Warm greetings from SP Staffing!

Role: Azure Data Architect
Experience Required: 8 to 12 years
Work Location: Chennai/Delhi/Hyderabad
Required Skills: Azure Databricks, ADF, PySpark/Spark, Python/Scala

Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 3 months ago

Apply

8 - 12 years

15 - 30 Lacs

Hyderabad

Hybrid

Warm greetings from SP Staffing!

Role: Azure Data Architect
Experience Required: 8 to 12 years
Work Location: Chennai/Delhi/Hyderabad
Required Skills: Azure Databricks, ADF, PySpark/Spark, Python/Scala

Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 3 months ago

Apply

3 - 5 years

6 - 8 Lacs

Pune

Work from Office

Job Title: Senior Data Engineer
Experience Required: 3 to 5 years
Location: Baner, Pune
Job Type: Full-Time (WFO)

Job Summary: We are seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience building and managing scalable data pipelines, working with cloud platforms like Microsoft Azure and AWS, and utilizing advanced tools such as data lakes, PySpark, and Azure Data Factory. The role involves collaborating with cross-functional teams to design and implement robust data solutions that support business intelligence, analytics, and decision-making processes.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines to ingest, transform, and process large datasets from various sources.
- Build and optimize data pipelines and architectures for efficient and secure data processing.
- Work extensively with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics for cloud data integration and management.
- Utilize Databricks and PySpark for advanced big data processing and analytics.
- Implement data modelling and design data warehouses to support business intelligence tools like Power BI.
- Ensure data quality, governance, and security using Azure DevOps and Azure Functions.
- Develop and maintain SQL Server databases and write optimized SQL queries for analytics and reporting.
- Collaborate with stakeholders to gather requirements and translate them into effective data engineering solutions.
- Implement data architecture best practices to support big data initiatives and analytics use cases.
- Monitor, troubleshoot, and improve data workflows and processes to ensure seamless data flow.

Required Skills and Qualifications:
- Educational background: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Technical skills: strong expertise in ETL development, data engineering, and data pipeline development; proficiency in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics; advanced knowledge of Databricks, PySpark, and Python for data processing; hands-on experience with Azure SQL, SQL Server, and data warehousing solutions; knowledge of Power BI for reporting and dashboard creation; familiarity with Azure Functions, Azure DevOps, and cloud computing in Microsoft Azure; understanding of data architecture and data modelling principles; experience with big data tools and frameworks.
- Experience: proven experience designing and implementing large-scale data processing systems; hands-on experience with DWH and big data workloads; ability to work with both structured and unstructured datasets.
- Soft skills: strong problem-solving and analytical skills; excellent communication and collaboration abilities for working effectively in a team environment; a proactive mindset with a passion for learning and adopting new technologies.

Preferred Skills:
- Experience with Azure data warehouse technologies.
- Knowledge of Azure Machine Learning or similar AI/ML frameworks.
- Familiarity with data governance and data compliance practices.

Posted 3 months ago

Apply