
23 Delta Tables Jobs

Set Up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Role Overview: You will be responsible for designing, implementing, and maintaining ETL processes using ADF and ADB. Your role will involve creating and managing views in ADB and SQL to ensure efficient data access, optimizing SQL queries for large datasets and high performance, and conducting end-to-end testing and impact analysis on data pipelines.

Key Responsibilities:
- Identify and resolve bottlenecks in data processing to keep data pipelines running smoothly.
- Optimize SQL queries and Delta Tables for fast data processing (a brief optimization sketch follows this listing).
- Implement data-sharing methods such as Delta Share and SQL Endpoints, and use Delta Tables for efficient data sharing and processing.
- Integrate external systems through Databricks Notebooks and build scalable solutions. Experience in building APIs is a plus.
- Collaborate with teams to understand requirements and design solutions effectively.
- Document data processes and architectures to ensure clarity and transparency.

Qualifications Required:
- Proficiency in ETL processes using ADF and ADB.
- Strong SQL skills with the ability to optimize queries for performance.
- Experience in data pipeline optimization and performance tuning.
- Knowledge of data-sharing methods like Delta Share and SQL Endpoints.
- Ability to integrate external systems and build APIs using Databricks Notebooks.
- Excellent collaboration skills and the ability to document data processes effectively.
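Delta table maintenance of the kind this listing describes is often expressed as plain SQL from a notebook. A minimal sketch, assuming a Databricks (or Delta Lake-enabled) Spark session; the table and column names are illustrative, not from the posting:

```python
# Minimal sketch of routine Delta table maintenance from a notebook.
# The table and column names are illustrative, not from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a commonly filtered column,
# so downstream queries scan fewer files.
spark.sql("OPTIMIZE sales.orders ZORDER BY (order_date)")

# Clean up files no longer referenced by the table (default retention applies).
spark.sql("VACUUM sales.orders")
```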

Posted 2 days ago

Apply

4.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Senior Associate working with Microsoft Fabric, Azure (Databricks & ADF), and PySpark, your role will involve:
- Requirement gathering and analysis.
- Designing and implementing data pipelines using Microsoft Fabric and Databricks.
- Extracting, transforming, and loading (ETL) data from various sources into Azure Data Lake Storage (a minimal PySpark sketch follows this listing).
- Implementing data security and governance measures.
- Monitoring and optimizing data pipelines for performance and efficiency.
- Troubleshooting and resolving data engineering issues.
- Providing optimized solutions for any data engineering problem.
- Working with a variety of sources such as relational databases, APIs, file systems, real-time streams, and CDC.
- Demonstrating strong knowledge of Databricks and Delta tables.

Qualifications Required:
- 4-10 years of experience in Data Engineering or related roles.
- Hands-on experience with Microsoft Fabric and Azure Databricks.
- Proficiency in PySpark for data processing and scripting.
- Strong command of Python and SQL for writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Hands-on experience in performance tuning and optimization on Databricks and MS Fabric.
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.
- Exposure to BI tools like Power BI, Tableau, or Looker.

Additional Details:
- Experience with Azure DevOps is a plus.
- Familiarity with data security and compliance in the cloud.
- Experience with databases such as Synapse, SQL DB, and Snowflake.
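For flavor, the source-to-lake hop described above might look like the following PySpark sketch; the storage account, container names, and columns are placeholders, not details from the posting:

```python
# Illustrative PySpark ETL: read raw CSV from ADLS, apply a light transform,
# and write a Delta table back to the lake. Paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@mylake.dfs.core.windows.net/orders/"))

clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .filter(F.col("amount").isNotNull()))

(clean.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@mylake.dfs.core.windows.net/orders/"))
```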

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Role Overview: At PwC, you will be part of the managed services team focusing on outsourced solutions and providing support to clients across various functions. Your role will involve helping organizations streamline operations, reduce costs, and enhance efficiency by managing key processes and functions. You will leverage your skills in project management, technology, and process optimization to deliver high-quality services to clients. Specifically, as a Data Engineer Offshore, you will play a crucial role in designing, implementing, and maintaining scalable data pipelines and systems to support data-driven initiatives.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines using DataStage and other ETL tools.
- Utilize AWS cloud services for data storage, processing, and analytics.
- Implement and optimize Delta Live Tables and Delta Tables for efficient data storage and querying.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
- Ensure data quality, integrity, and security across all data systems and pipelines.
- Monitor and troubleshoot data workflows to ensure smooth operations.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with relational databases such as Teradata.
- Hands-on experience with AWS services such as S3, EMR, Redshift, and Lambda.
- Experience with Delta Live Tables and Delta Tables in a data engineering context.
- Solid understanding of Apache Spark, Kafka, and Spark Streaming.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Additional Details: The company is seeking an experienced Data Engineer with a strong background in data engineering and proficiency in data technologies such as Teradata, DataStage, AWS, Databricks, and SQL. The role involves continuous improvement and optimization of managed services processes, tools, and services to deliver high-quality services to clients.

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus of individuals in managed services is on a variety of outsourced solutions and supporting clients across various functions. They play a crucial role in helping organizations streamline operations, reduce costs, and enhance efficiency by managing key processes and functions on behalf of clients. Skilled in project management, technology, and process optimization, they deliver high-quality services to clients. Those in managed service management and strategy at PwC concentrate on transitioning and running services; managing delivery teams, programs, commercials, performance, and delivery risk; and continuously improving and optimizing managed services through tools and services.

We are currently looking for a Data Engineer Offshore with expertise in Teradata, DataStage, AWS, Databricks, SQL, Delta Live Tables, Delta Tables, Spark, Kafka, Spark Streaming, MQ, and ETL. As a Data Engineer, you will be a valuable addition to our dynamic team. The ideal candidate has a solid background in data engineering and proficiency in various data technologies, including Teradata, DataStage, AWS, Databricks, and SQL. Your responsibilities will include designing, implementing, and maintaining scalable data pipelines and systems to support our data-driven initiatives.

Minimum Qualifications:
- Bachelor's degree in computer science/IT or a relevant field.
- 3-5 years of experience.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines using DataStage and other ETL tools.
- Utilize AWS cloud services for data storage, processing, and analytics.
- Leverage Databricks for data analysis, processing, and transformation, ensuring high performance and reliability.
- Implement and optimize Delta Live Tables and Delta Tables for efficient data storage and querying (a minimal DLT sketch follows this listing).
- Work with Apache Spark for processing large datasets, ensuring optimal performance and scalability.
- Integrate Kafka and Spark Streaming for building real-time data processing applications.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions meeting business needs.
- Ensure data quality, integrity, and security across all data systems and pipelines.
- Monitor and troubleshoot data workflows to ensure smooth operations.
- Document data processes, architecture designs, and technical specifications.

Preferred Qualifications:
- Master's degree in computer science/IT or a relevant field.
- Certification in AWS or Databricks.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with relational databases like Teradata.
- Hands-on experience with AWS services such as S3, EMR, Redshift, and Lambda.
- Proficiency in using Databricks for data engineering tasks.
- Experience with Delta Live Tables and Delta Tables in a data engineering context.
- Solid understanding of Apache Spark, Kafka, and Spark Streaming.
- Experience with messaging systems like MQ is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Skills:
- Experience with data warehousing and big data technologies.
- Familiarity with data governance and data security best practices.
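Delta Live Tables pipelines like the ones this listing mentions are usually declared as decorated Python functions. A hypothetical two-table sketch, which would run only as a DLT pipeline on Databricks; the source path, JSON layout, and ingest_ts column are placeholders:

```python
# Hypothetical Delta Live Tables pipeline: a bronze ingest plus a silver
# table guarded by a quality expectation. Runs only inside a DLT pipeline
# on Databricks; paths and columns are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage")
def bronze_events():
    return (spark.readStream                    # `spark` is provided by DLT
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("s3://bucket/landing/events/"))

@dlt.table(comment="Validated events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")
def silver_events():
    return dlt.read_stream("bronze_events").withColumn(
        "ingest_date", F.to_date("ingest_ts"))
```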

Posted 6 days ago

Apply

4.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Senior Associate working with Microsoft Fabric, Azure (Databricks & ADF), and PySpark, you will be responsible for designing and implementing scalable data solutions within the Microsoft Azure ecosystem. With 4-10 years of experience in the field, you will leverage your expertise in Microsoft Fabric, Azure Databricks, PySpark, Python, and SQL to build end-to-end data pipelines, ensuring efficient data processing and extraction.

Your primary responsibilities will include gathering requirements; designing and implementing data pipelines using Microsoft Fabric and Databricks; performing ETL operations to extract data into Azure Data Lake Storage; implementing data security measures; monitoring pipeline performance; and troubleshooting data engineering issues. Additionally, you will work with a variety of data sources such as relational databases, APIs, file systems, real-time streams, and CDC.

The ideal candidate has hands-on experience with Microsoft Fabric and Azure Databricks, and proficiency in PySpark for data processing. A strong command of Python and SQL is essential for writing complex queries, optimizing performance, and ensuring data integrity. Experience with Azure Data Lake Storage and data warehouse concepts, along with familiarity with Databricks and Delta tables, will be crucial for success in this role.

Furthermore, you should have a good understanding of CI/CD practices in a data engineering context, along with excellent problem-solving and communication skills. Exposure to BI tools like Power BI, Tableau, or Looker will be beneficial. Experience with Azure DevOps, data security and compliance in the cloud, and databases such as Synapse, SQL DB, and Snowflake will be considered advantageous.

If you are a data engineering professional looking to work on challenging projects within a dynamic environment, this role offers an exciting opportunity to showcase your skills and contribute to the development of cutting-edge data solutions.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Senior Specialist (Databricks) at Nasdaq Bangalore, you will have the opportunity to be part of a team that is at the forefront of revolutionizing markets and developing innovative solutions. Working from defined business requirements, you will provide analysis, modeling, dimensional modeling, and testing to design optimal solutions, and you will play a key role in translating business information needs into adaptable and sustainable data structures.

Your primary responsibilities will include designing, building, and maintaining data pipelines within the Databricks Lakehouse Platform. Your expertise in utilizing the platform, implementing ETL tasks using Apache Spark SQL and Python, and developing ETL pipelines using the Medallion Architecture will be crucial to the success of data-driven initiatives (a bronze-to-silver sketch follows this listing). Additionally, you will be responsible for identifying, analyzing, and resolving technical problems with Nasdaq Data Platforms and related ecosystems.

To excel in this role, you are expected to have 8-10 years of overall experience, with at least 5-6 years of Data Engineering experience specifically on Databricks. Proficiency in SQL and Python for data manipulation and transformation; knowledge of modern data technologies such as Spark, Informatica, Parquet, and Delta Tables; and familiarity with cloud computing platforms like AWS are essential. An understanding of data modeling, architecture, best practices, and AI/ML Ops in Databricks is also required. A Bachelor's/Master's degree in a relevant field or equivalent qualification is preferred.

In addition to technical skills, you should possess strong communication, problem-solving, and leadership abilities. You will be expected to lead administrative tasks, ensure timely project delivery within budget constraints, and maintain accurate and detailed administration according to regulations.

At Nasdaq, we foster a vibrant and entrepreneurial culture where taking initiative, challenging the status quo, and embracing work-life balance are encouraged. We offer a range of benefits, including an annual monetary bonus, opportunities to become a Nasdaq shareholder, health insurance, flexible working schedules, and various employee development programs. If you are a passionate and experienced Data Engineer with a drive for innovation and effectiveness, we encourage you to apply in English for this exciting opportunity at Nasdaq Bangalore. We look forward to connecting with you and exploring how your skills and expertise can contribute to our dynamic team.
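A bronze-to-silver hop in the Medallion Architecture the posting references could look roughly like this Spark SQL sketch; the lakehouse table names, columns, and cleanup rules are illustrative assumptions, not Nasdaq's:

```python
# Rough sketch of a Medallion-style bronze-to-silver hop using Spark SQL.
# Table names, columns, and cleanup rules are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.read.table("lakehouse.bronze_trades").createOrReplaceTempView("bronze_trades")

silver = spark.sql("""
    SELECT trade_id,
           CAST(trade_ts AS TIMESTAMP) AS trade_ts,
           UPPER(symbol)               AS symbol,
           price
    FROM bronze_trades
    WHERE trade_id IS NOT NULL
""")

# Append validated rows to the silver Delta table.
silver.write.format("delta").mode("append").saveAsTable("lakehouse.silver_trades")
```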

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement, and cloud teams to gain deep insights into cost optimization, compliance, and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier, and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint in hybrid environments, from on-premises to SaaS to containers to cloud.

We're transforming the software industry. We're Flexera. With more than 50,000 customers across the world, we're achieving that goal. But we know we can't do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we're consistently recognized by Gartner, Forrester, and IDC as a category leader in the marketplace. Learn more at flexera.com.

Senior Manager, Development

We are seeking a dynamic and technically proficient Senior Manager to lead our data engineering initiatives within Flexera's Cloud Cost Optimization space. This role combines hands-on expertise in Databricks with proven leadership in managing high-performing engineering teams. The ideal candidate will be passionate about building scalable data solutions and mentoring teams to deliver impactful business outcomes.

Key Responsibilities

Technical Leadership:
- Architect, design, and implement scalable data pipelines using PySpark on the Databricks platform.
- Leverage Delta Lake, Delta Tables, and Databricks SQL to build robust data solutions.
- Develop and maintain batch processing and Spark streaming workflows (an incremental ingestion sketch follows this listing).
- Implement orchestration workflows using Databricks Workflows and Azure Data Factory, ensuring automation, monitoring, and alerting.
- Optimize cluster configurations, autoscaling strategies, and cost management within Databricks environments.
- Stay current with emerging technologies and bring innovation to the team.

Team Management:
- Manage teams responsible for building microservices-based applications using Golang, React, and Databricks.
- Lead, mentor, and grow a team of data engineers, fostering a culture of collaboration, ownership, and continuous improvement.
- Conduct performance evaluations, provide feedback, and support career development.
- Manage team dynamics and resolve challenges to maintain productivity and engagement.

Cross-Functional Collaboration:
- Partner with product managers, architects, and operations teams to align technical deliverables with business goals.
- Identify dependencies, manage risks, and ensure seamless coordination across teams.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience in software/data engineering, with 3+ years in a managerial role.
- Hands-on experience with Databricks, including pipeline development and orchestration.
- Strong programming skills in Python and PySpark.
- Proven experience in cloud-native development, preferably on AWS.
- Deep understanding of data modeling, ETL best practices, and DevOps for data pipelines.
- Experience deploying Databricks resources using Terraform is a plus.
- Excellent problem-solving, decision-making, and communication skills.

Flexera is proud to be an equal opportunity employer.
Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies, and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing .
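As an illustration of the batch and streaming workflows named above, here is a minimal Structured Streaming sketch using Databricks Auto Loader; the mount paths, file format, and target table name are assumptions, not Flexera's setup:

```python
# Minimal Structured Streaming sketch: incrementally ingest files into a
# Delta table with checkpointing, via Databricks Auto Loader.
# Paths and the target table name are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (spark.readStream
          .format("cloudFiles")                  # Databricks Auto Loader
          .option("cloudFiles.format", "parquet")
          .option("cloudFiles.schemaLocation", "/mnt/chk/usage/schema/")
          .load("/mnt/landing/usage/"))

(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/chk/usage/")
 .trigger(availableNow=True)                     # drain available files, then stop
 .toTable("finops.usage_bronze"))
```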

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

At PwC, individuals in managed services focus on providing outsourced solutions and supporting clients across various functions. By managing key processes and functions on behalf of organisations, they help streamline operations, reduce costs, and enhance efficiency. Skilled in project management, technology, and process optimization, they deliver high-quality services to clients. Those specializing in managed service management and strategy at PwC concentrate on transitioning and running services; managing delivery teams, programmes, commercials, performance, and delivery risk; and continuously improving and optimizing managed services processes, tools, and services.

Your focus lies in building meaningful client connections, managing and inspiring others, and deepening technical expertise while navigating complex situations. Embracing ambiguity, you anticipate the needs of teams and clients to deliver quality service. You are encouraged to ask questions and view unclear paths as opportunities for growth. Upholding professional and technical standards, including PwC tax and audit guidance, the firm's code of conduct, and independence requirements, is essential.

As a Data Engineer Offshore Associate (Teradata, DataStage, AWS, Databricks, SQL, Delta Live Tables, Delta Tables, Spark, Kafka, Spark Streaming, MQ, ETL), you will be responsible for designing, implementing, and maintaining scalable data pipelines and systems to support data-driven initiatives. The ideal candidate will have a Bachelor's degree in computer science/IT or a relevant field, 2-5 years of experience, and proficiency in data technologies such as Teradata, DataStage, AWS, Databricks, and SQL.

Key responsibilities include designing, developing, and maintaining scalable ETL pipelines; leveraging AWS cloud services; utilizing Databricks for data processing; implementing Delta Live Tables and Delta Tables; working with Apache Spark and Kafka; integrating Spark Streaming (a Kafka ingestion sketch follows this listing); ensuring data quality, integrity, and security; and documenting data processes and technical specifications.

Qualifications for this role include a Bachelor's degree in Computer Science or a related field, proven experience as a Data Engineer, proficiency in SQL and relational databases, hands-on experience with AWS services, and familiarity with Apache Spark, Kafka, and Spark Streaming. Preferred skills include experience in data warehousing, big data technologies, data governance, and data security best practices. Certification in AWS or Databricks is a plus.
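The Kafka plus Spark Streaming integration listed above commonly takes this shape. A hedged sketch, assuming the Spark Kafka connector is on the classpath; the broker address, topic, message schema, and paths are placeholders:

```python
# Hedged sketch of Kafka ingestion with Spark Structured Streaming into a
# Delta table. Broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = (StructType()
          .add("id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "payments")
          .load()
          # Kafka delivers bytes; decode and parse the JSON payload.
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/tmp/chk/payments/")
 .start("/data/delta/payments/"))
```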

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

The role of Senior Data Engineer at GSPANN involves designing, developing, and optimizing scalable data solutions, drawing on expertise in Azure Data Factory, Azure Databricks, PySpark, Delta Tables, and advanced data modeling. The position also demands proficiency in performance optimization, API integrations, DevOps practices, and data governance.

You will be responsible for designing, developing, and orchestrating scalable data pipelines using Azure Data Factory (ADF); building and managing Apache Spark clusters, creating notebooks, and executing jobs in Azure Databricks; and ingesting, organizing, and transforming data within the Microsoft Fabric ecosystem using OneLake.

Your tasks will include authoring complex transformations and SQL queries for large-scale data processing using PySpark and Spark SQL; creating, optimizing, and maintaining Delta Lake tables; parsing, validating, and transforming semi-structured JSON datasets (a short parsing sketch follows this listing); building and consuming REST/OData services for custom data ingestion through API integration; and implementing bronze, silver, and gold layers in data lakes using the Medallion Architecture.

To ensure efficient processing of high data volumes, you will apply partitioning, caching, and resource tuning. Designing star and snowflake schemas, along with fact and dimension tables for multidimensional modeling in reporting use cases, will be a crucial aspect of your responsibilities, as will working with tabular and OLAP cube structures in Azure Analysis Services to facilitate downstream business intelligence, and collaborating with the DevOps team to define infrastructure, manage access and security, and automate deployments.

In terms of skills and experience, you are expected to ingest and harmonize data from SAP ECC and S/4HANA systems using Data Sphere; use Git, Azure DevOps Pipelines, Terraform, or Azure Resource Manager templates for CI/CD and DevOps tooling; leverage Azure Monitor, Log Analytics, and data pipeline metrics for data observability and monitoring; and conduct query diagnostics, identify bottlenecks, and determine root causes for performance troubleshooting. Applying metadata management, tracking data lineage, and enforcing compliance best practices for data governance and cataloging are also part of the role. Lastly, documenting processes, designs, and solutions effectively in Confluence is essential for this position.
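Parsing semi-structured JSON of the kind described above is typically done with from_json and explode. An illustrative sketch; the source table, payload column, and schema are assumptions:

```python
# Illustrative parsing of a semi-structured JSON column with PySpark.
# The source table, payload column, and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, ArrayType

spark = SparkSession.builder.getOrCreate()

schema = (StructType()
          .add("device", StringType())
          .add("tags", ArrayType(StringType())))

raw = spark.read.table("bronze.raw_payloads")

parsed = (raw
          .withColumn("payload", F.from_json("payload_json", schema))
          .withColumn("tag", F.explode("payload.tags"))   # one row per tag
          .select("payload.device", "tag"))
```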

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Senior Specialist (Databricks) at Nasdaq Bangalore, you will be joining the Bangalore technology center in India, where innovation and effectiveness are the driving forces. Nasdaq is at the forefront of revolutionizing markets and constantly evolving by adopting new technologies to create innovative solutions, aiming to shape the future.

In this role, your primary responsibility will be to analyze defined business requirements, providing analytical insights, modeling, dimensional modeling, and testing to design solutions that meet customer needs effectively. You will focus on understanding business data needs and translating them into adaptable, extensible, and sustainable data structures.

As a Databricks Data Engineer, your role will involve designing, building, and maintaining data pipelines within the Databricks Lakehouse Platform. Your expertise will be crucial in enabling efficient data processing, analysis, and reporting for data-driven initiatives. You will utilize the Databricks Lakehouse Platform for data engineering tasks, implement ETL tasks using Apache Spark SQL and Python, and develop ETL pipelines following the Medallion Architecture. Moreover, you will be responsible for adding new sources to the Lakehouse platform, reviewing technology platforms on AWS cloud, supervising data extraction methods, resolving technical issues, and ensuring project delivery within the assigned timeline and budget. You will also lead administrative tasks, ensuring completeness and accuracy in administration processes.

To excel in this role, you are expected to have 8-10 years of overall experience, with at least 5-6 years of specific Data Engineering experience on Databricks. Proficiency in SQL and Python for data manipulation, knowledge of modern data technologies, cloud computing platforms like AWS, data modeling, architecture, and best practices, and familiarity with AI/ML Ops in Databricks are essential. A Bachelor's/Master's degree in a relevant field or equivalent qualification is required. Knowledge of Terraform and certifications in relevant fields would be advantageous.

Nasdaq offers a vibrant and entrepreneurial work environment where taking initiative, challenging the status quo, and embracing intelligent risks are encouraged. The company values diversity, inclusivity, and work-life balance in a hybrid-first environment. As an employee, you can benefit from various perks such as an annual monetary bonus, becoming a Nasdaq shareholder, health insurance, flexible working schedules, internal mentorship programs, and a wide selection of online learning resources.

If you believe you possess the required skills and experience for this role, we encourage you to submit your application in English as soon as possible. The selection process is ongoing, and we aim to get back to you within 2-3 weeks.

At Nasdaq, we are committed to providing reasonable accommodations to individuals with disabilities throughout the job application and interview process, ensuring equal access to employment opportunities. If you require any accommodations, please reach out to us to discuss your needs.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Senior ETL Developer in the Data Services Team, you will play a lead role in ETL design, data modeling, and ETL development. Your responsibilities will include facilitating best-practice guidelines, providing technical leadership, working with stakeholders to translate requirements into solutions, gaining approval for designs and effort estimates, and documenting work via functional and technical specs. You will also analyze processes for gaps and weaknesses, prepare roadmaps and migration plans, and communicate progress using the Agile methodology.

To excel in this role, you should have at least 5 years of experience with Oracle, data warehousing, and data modeling; 4 years of experience with ODI or Informatica IDMC; 3 years of experience with the Databricks Lakehouse and/or Delta tables; and 2 years of experience designing, implementing, and supporting a Kimball-method data warehouse on SQL Server or Oracle (a small fact-load sketch follows this listing). Strong SQL skills; a background in data integration, data security, and enterprise data warehouse development; and experience in change management, release management, and source code control practices are also required.

The ideal candidate will have a high school diploma or equivalent, with a preference for a Bachelor of Arts or Bachelor of Science degree in computer science, systems analysis, or a related area. If you are enthusiastic about leveraging your ETL expertise to drive digital modernization and enhance data services, we encourage you to apply for this role and be part of our dynamic team.
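A Kimball-style fact load of the sort this listing asks about often reduces to resolving surrogate keys against a conformed dimension. A small sketch on Delta, under assumed table and column names:

```python
# Small sketch of a Kimball-style fact load on Delta: resolve surrogate
# keys by joining staged rows to a conformed dimension. All table and
# column names are assumed for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stg = spark.read.table("stg.sales")
dim_customer = spark.read.table("dw.dim_customer")

# Left join keeps staged rows even when the dimension lookup misses.
fact = (stg.join(dim_customer,
                 stg.customer_code == dim_customer.customer_code, "left")
        .select(dim_customer.customer_sk, stg.order_date, stg.amount))

fact.write.format("delta").mode("append").saveAsTable("dw.fact_sales")
```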

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This job is with Kyndryl, an inclusive employer and a member of myGwork, the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Who We Are

At Kyndryl, we design, build, manage, and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers, and our communities.

The Role

As a Data Engineer, you will leverage your expertise in Databricks, big data platforms, and modern data engineering practices to develop scalable data solutions for our clients. Candidates with healthcare experience, particularly with EPIC systems, are strongly encouraged to apply. The work includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Responsibilities
- Develop data ingestion, data processing, and analytical pipelines for big data, relational databases, and data warehouse solutions.
- Design and implement data pipelines and ETL/ELT processes using Databricks, Apache Spark, and related tools.
- Collaborate with business stakeholders, analysts, and data scientists to deliver accessible, high-quality data solutions.
- Provide guidance on cloud migration strategies and data architecture patterns such as Lakehouse and Data Mesh.
- Provide pros/cons and migration considerations for private and public cloud architectures.
- Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues.
- Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.
- Work with data governance, data security, and data privacy tooling (Unity Catalog or Purview).

Your Future at Kyndryl

Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are

You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Experience
- 3+ years of consulting or client service delivery experience on Azure.
- Graduate/postgraduate degree in computer science, computer engineering, or equivalent, with a minimum of 8 years of experience in the IT industry.
- 3+ years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Azure Synapse.
- Extensive hands-on experience implementing data ingestion, ETL, and data processing.
- Hands-on experience with big data technologies such as Java, Python, SQL, ADLS/Blob, PySpark and Spark SQL, Databricks, and HDInsight, and live streaming technologies such as Event Hubs.
- Experience with cloud-based database technologies (Azure PaaS DB, AWS RDS, and NoSQL).
- Cloud migration methodologies and processes, including tools like Azure Data Factory, Data Migration Service, etc.
- Experience with monitoring and diagnostic tools (SQL Profiler, Extended Events, etc.).
- Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes.
- Experience with relational databases and expertise in writing and optimizing T-SQL queries and stored procedures.
- Experience using big data file formats and compression techniques.
- Experience with developer tools such as Azure DevOps, Visual Studio Team Server, Git, Jenkins, etc.
- Experience with private and public cloud architectures, pros/cons, and migration considerations.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
- Communication skills: must be able to communicate with both technical and non-technical stakeholders and derive technical requirements with them.

Preferred Technical and Professional Experience
- Cloud platform certification, e.g., Microsoft Certified: (DP-700) Azure Data Engineer Associate, AWS Certified Data Analytics - Specialty, Elastic Certified Engineer, or Google Cloud Professional Data Engineer.
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
- Experience working with EPIC healthcare systems (e.g., Clarity, Caboodle).
- Databricks certifications (e.g., Databricks Certified Data Engineer Associate or Professional).
- Knowledge of GenAI tools, Microsoft Fabric, or Microsoft Copilot.
- Familiarity with healthcare data standards and compliance (e.g., HIPAA, GDPR).
- Experience with DevSecOps and CI/CD deployments.
- Experience in NoSQL database design.
- Knowledge of Gen AI fundamentals and supporting industry use cases.
- Hands-on experience with Delta Lake and Delta Tables within the Databricks environment for building scalable and reliable data pipelines.

Being You

Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect

With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!

If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 1 month ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Mumbai, Maharashtra, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proof of Concepts (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.

Posted 1 month ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Gurgaon, Haryana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proof of Concepts (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.

Posted 1 month ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Hyderabad, Telangana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proof of Concepts (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proof of Concepts (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.

Posted 1 month ago

Apply

1.0 - 6.0 years

4 - 10 Lacs

Gurgaon, Haryana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
- At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role.
- Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog.
- Hands-on in Python, PySpark, or Spark SQL.
- Hands-on in Azure Analytics and DevOps.
- Taking part in Proof of Concepts (POCs) and pilot solution preparation.
- Ability to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows.
- Experience in business process mapping of data and analytics solutions.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You have extensive experience in analytics and large-scale data processing across diverse data platforms and tools. Your responsibilities will include managing data storage and transformation across AWS S3, DynamoDB, Postgres, and Delta Tables with efficient schema design and partitioning (a partitioned-write sketch follows this listing). You will develop scalable analytics solutions using Athena and automate workflows with proper monitoring and error handling. Ensuring data quality, access control, and compliance through robust validation, logging, and governance practices will be a crucial part of your role. Additionally, you will design and maintain data pipelines using Python, Spark, the Delta Lake framework, AWS Step Functions, EventBridge, AppFlow, and OAuth. The tech stack includes S3, Postgres, DynamoDB, Tableau, Python, and Spark.
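The schema design and partitioning work described above often comes down to choosing a partition column Athena can prune on. A sketch, with the bucket, paths, and partition column as assumptions:

```python
# Sketch of an Athena-friendly layout: write partitioned Parquet to S3
# so queries can prune on the partition column. Names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("s3://analytics-lake/silver/events/")

(events.write
 .mode("overwrite")
 .partitionBy("event_date")   # Athena prunes partitions on this column
 .parquet("s3://analytics-lake/gold/events/"))
```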

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a talented Big Data Engineer, you will be responsible for developing and managing our company's Big Data solutions. Your role will involve designing and implementing Big Data tools and frameworks, implementing ELT processes, collaborating with development teams, building cloud platforms, and maintaining the production system. To excel in this position, you should possess in-depth knowledge of Hadoop technologies, exceptional project management skills, and advanced problem-solving abilities. A successful Big Data Engineer comprehends the company's needs and establishes scalable data solutions to meet current and future requirements effectively.

Your responsibilities will include meeting with managers to assess the company's Big Data requirements; developing solutions on AWS utilizing tools like Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop; loading disparate data sets and conducting pre-processing services using tools such as Athena, Glue, and Spark; collaborating with software research and development teams; building cloud platforms for application development; and ensuring the maintenance of production systems.

The requirements for this role include a minimum of 5 years of experience as a Big Data Engineer; proficiency in Python and PySpark; expertise in Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services; extensive experience with Delta Tables and the JSON and Parquet file formats; familiarity with AWS data analytics services like Athena, Glue, Redshift, and EMR; and knowledge of data warehousing, NoSQL, and RDBMS databases. Good communication skills and the ability to solve complex data processing and transformation-related problems are essential for success in this role.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer in Pune, your responsibilities will include designing, implementing, and optimizing end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. You will develop data pipelines to extract and transform data in near real-time using cloud-native technologies, and implement data validation and quality checks to ensure accuracy and consistency. Monitoring system performance, troubleshooting issues, and implementing optimizations to enhance reliability and efficiency will be crucial tasks.

Collaboration with business users, analysts, and other stakeholders to understand data requirements and deliver tailored solutions is an essential aspect of this position, as is documenting technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation. You will also provide technical guidance and support to team members and stakeholders as needed.

Desirable competencies:
- 8+ years of work experience.
- Proficiency in writing complex SQL queries on MPP systems such as Snowflake or Redshift.
- Experience with Databricks and Delta tables.
- Data engineering experience with Spark, Scala, or Python.
- Familiarity with the Microsoft Azure stack, including Azure Storage Accounts, Data Factory, and Databricks.
- Experience with Azure DevOps and CI/CD pipelines.
- Working knowledge of Python.
- Comfort participating in 2-week sprint development cycles.

Posted 2 months ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Pune

Work from Office

Designation: Big Data Lead/Architect
Location: Pune
Experience: 8-10 years
Notice Period: Immediate joiner / 15-30 days notice
Reports To: Product Engineering Head

Job Overview

We are looking to hire a talented Big Data Engineer to develop and manage our company's Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system. To ensure success as a Big Data Engineer, you should have in-depth knowledge of Hadoop technologies, excellent project management skills, and high-level problem-solving skills. A top-notch Big Data Engineer understands the needs of the company and institutes scalable data solutions for its current and future needs.

Responsibilities:
- Meeting with managers to determine the company's Big Data needs.
- Developing Big Data solutions on AWS using Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, Hadoop, etc. (a small Delta-ingest sketch follows this listing).
- Loading disparate data sets and conducting pre-processing services using Athena, Glue, Spark, etc.
- Collaborating with the software research and development teams.
- Building cloud platforms for the development of company applications.
- Maintaining production systems.

Requirements:
- 8-10 years of experience as a Big Data Engineer.
- Proficiency in Python and PySpark.
- In-depth knowledge of Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services.
- Extensive experience with Delta Tables and the JSON and Parquet file formats.
- Experience with AWS data analytics services like Athena, Glue, Redshift, and EMR is good to have.
- Familiarity with data warehousing is a plus.
- Knowledge of NoSQL and RDBMS databases.
- Good communication skills.
- Ability to solve complex data processing and transformation-related problems.
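The JSON/Parquet/Delta handling in the requirements might look like the following sketch: land raw JSON from S3, deduplicate, and append to a Delta table. It assumes a Spark session with Delta Lake support already configured (as on Databricks or EMR); the paths and key column are placeholders:

```python
# Hedged sketch of JSON-to-Delta ingestion on AWS. Assumes Delta Lake
# support is configured on the cluster; paths and keys are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("s3://raw-zone/clickstream/2024/")

# Drop exact replays of the same event before appending.
deduped = raw.dropDuplicates(["event_id"])

(deduped.write
 .format("delta")
 .mode("append")
 .save("s3://curated-zone/clickstream/"))
```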

Posted 2 months ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Coimbatore

Hybrid

Role & responsibilities:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools (a simple validation pattern is sketched after this listing).
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
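One simple way to implement the data validation and operational alerting mentioned above is to split rows by a rule and alert on the failure rate. A sketch; the table path, rule, and threshold are assumptions:

```python
# Simple validation pattern: split rows by a rule and alert when the
# failure rate crosses a threshold. Path, rule, and threshold are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("/lake/silver/orders/")

rule = F.col("amount").isNotNull() & (F.col("amount") >= 0)
good, bad = df.filter(rule), df.filter(~rule)

fail_rate = bad.count() / max(df.count(), 1)
if fail_rate > 0.01:   # alerting threshold is an assumption
    print(f"ALERT: {fail_rate:.2%} of rows failed validation")
```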

Posted 3 months ago

Apply

8 - 13 years

25 - 30 Lacs

Bengaluru

Hybrid

- Overall 8+ years of solid experience in data projects.
- Design, develop, and maintain robust ETL/ELT pipelines for data ingestion, transformation, and storage.
- Proficient in SQL; must have worked with complex joins, subqueries, functions, and procedures.
- Able to perform SQL tuning and query optimization without support.
- Design, develop, and maintain ETL pipelines using Databricks and PySpark to extract, transform, and load data from various sources.
- Must have good working experience with Delta tables, deduplication, and merging on terabyte-scale datasets (a merge-with-dedup sketch follows this listing).
- Optimize and fine-tune existing ETL workflows for performance and scalability.
- Excellent knowledge of dimensional modeling and data warehousing.
- Must have experience working with large datasets.
- Experience working with batch and real-time data processing (good to have).
- Implement data validation and quality checks, and ensure adherence to security and compliance standards.
- Ability to develop reliable, secure, compliant data processing systems.
- Work closely with cross-functional teams to support data analytics, reporting, and business intelligence initiatives.
- Should be self-driven and able to work independently without support.
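The merge-with-deduplication requirement above usually pairs a row_number() dedup with Delta's MERGE INTO. A sketch using the delta-spark API, with table names, keys, and the ordering column as assumptions:

```python
# Sketch of merge-with-deduplication on Delta: keep the latest row per
# business key, then MERGE INTO the target. Names are assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/lake/staging/customers/")

# Deduplicate the incoming batch: latest record per key wins.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (updates.withColumn("rn", F.row_number().over(w))
          .filter("rn = 1").drop("rn"))

target = DeltaTable.forName(spark, "dw.customers")
(target.alias("t")
 .merge(latest.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```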

Posted 4 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
