2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Senior ETL Developer in the Data Services Team, you will play a lead role in ETL design, data modeling, and ETL development. Your responsibilities will include facilitating best-practice guidelines, providing technical leadership, working with stakeholders to translate requirements into solutions, gaining approval for designs and effort estimates, and documenting work via functional and technical specifications. You will also analyze processes for gaps and weaknesses, prepare roadmaps and migration plans, and communicate progress using the Agile methodology.

To excel in this role, you should have at least 5 years of experience with Oracle, data warehousing, and data modeling; 4 years of experience with ODI or Informatica IDMC; 3 years of experience with the Databricks Lakehouse and/or Delta tables; and 2 years of experience designing, implementing, and supporting a Kimball-method data warehouse on SQL Server or Oracle. Strong SQL skills, a background in data integration, data security, and enterprise data warehouse development, and experience with change management, release management, and source code control practices are also required.

The ideal candidate will have a high school diploma or equivalent, with a preference for a Bachelor of Arts or Bachelor of Science degree in computer science, systems analysis, or a related area. If you are enthusiastic about leveraging your ETL expertise to drive digital modernization and enhance data services, we encourage you to apply for this role and be part of our dynamic team.
Posted 4 days ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This job is with Kyndryl, an inclusive employer and a member of myGwork, the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Who We Are

At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role

As a Data Engineer, you will leverage your expertise in Databricks, big data platforms, and modern data engineering practices to develop scalable data solutions for our clients. Candidates with healthcare experience, particularly with EPIC systems, are strongly encouraged to apply. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Responsibilities

Develop data ingestion, data processing and analytical pipelines for big data, relational databases and data warehouse solutions. Design and implement data pipelines and ETL/ELT processes using Databricks, Apache Spark, and related tools. Collaborate with business stakeholders, analysts, and data scientists to deliver accessible, high-quality data solutions.
Provide guidance on cloud migration strategies and data architecture patterns such as Lakehouse and Data Mesh. Provide pros/cons and migration considerations for private and public cloud architectures. Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides. Experience working with data governance, data security and data privacy (Unity Catalog or Purview).

Your Future at Kyndryl

Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are

You're good at what you do and possess the required experience to prove it. However, equally as important - you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.

Required Technical And Professional Experience

3+ years of consulting or client service delivery experience on Azure. Graduate/postgraduate in computer science, computer engineering, or equivalent, with a minimum of 8 years of experience in the IT industry. 3+ years of experience developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Azure Synapse. Extensive hands-on experience implementing data ingestion, ETL and data processing.
Hands-on experience with Big Data technologies such as Java, Python, SQL, ADLS/Blob, PySpark and Spark SQL, Databricks, HDInsight, and live-streaming technologies such as Event Hubs. Experience with cloud-based database technologies (Azure PaaS databases, AWS RDS and NoSQL). Cloud migration methodologies and processes, including tools like Azure Data Factory, Data Migration Service, etc. Experience with monitoring and diagnostic tools (SQL Profiler, Extended Events, etc.). Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes. Experience with relational databases and expertise in writing and optimizing T-SQL queries and stored procedures. Experience using Big Data file formats and compression techniques. Experience with developer tools such as Azure DevOps, Visual Studio Team Services, Git, Jenkins, etc. Experience with private and public cloud architectures, pros/cons, and migration considerations. Excellent problem-solving, analytical, and critical-thinking skills. Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail. Communication skills: must be able to communicate with both technical and nontechnical stakeholders and derive technical requirements with stakeholders.

Preferred Technical And Professional Experience

Cloud platform certification, e.g., Microsoft Certified: (DP-700) Azure Data Engineer Associate, AWS Certified Data Analytics - Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer. Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization. Experience working with EPIC healthcare systems (e.g., Clarity, Caboodle). Databricks certifications (e.g., Databricks Certified Data Engineer Associate or Professional). Knowledge of GenAI tools, Microsoft Fabric, or Microsoft Copilot. Familiarity with healthcare data standards and compliance (e.g., HIPAA, GDPR).
Experience with DevSecOps and CI/CD deployments. Experience in NoSQL database design. Knowledge of GenAI fundamentals and industry-supporting use cases. Hands-on experience with Delta Lake and Delta tables within the Databricks environment for building scalable and reliable data pipelines.

Being You

Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect

With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us?' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 4 days ago
1.0 - 7.0 years
12 - 14 Lacs
Mumbai, Maharashtra, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in proof of concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.
Posted 1 week ago
1.0 - 7.0 years
12 - 14 Lacs
Gurgaon, Haryana, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in proof of concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.
Posted 1 week ago
1.0 - 7.0 years
12 - 14 Lacs
Hyderabad, Telangana, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in proof of concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.
Posted 1 week ago
8.0 - 12.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in proof of concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.
Posted 1 week ago
1.0 - 6.0 years
4 - 10 Lacs
Gurgaon, Haryana, India
On-site
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5 years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (including the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Taking part in proof of concepts (POCs) and pilot solution preparation. Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may ask recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You have extensive experience in analytics and large-scale data processing across diverse data platforms and tools. Your responsibilities will include managing data storage and transformation across AWS S3, DynamoDB, Postgres, and Delta tables with efficient schema design and partitioning. You will develop scalable analytics solutions using Athena and automate workflows with proper monitoring and error handling. Ensuring data quality, access control, and compliance through robust validation, logging, and governance practices will be a crucial part of your role. Additionally, you will design and maintain data pipelines using Python, Spark, the Delta Lake framework, AWS Step Functions, EventBridge, AppFlow, and OAuth. The tech stack you will be working with includes S3, Postgres, DynamoDB, Tableau, Python, and Spark.
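A minimal sketch of the partitioning this listing refers to: laying out an S3-backed table with Hive-style date partitions so engines like Athena or Spark can prune partitions on date filters. The bucket and table names are hypothetical, not from the posting.

```python
from datetime import date

def partition_prefix(bucket: str, table: str, d: date) -> str:
    """Build a Hive-style S3 partition prefix (year=/month=/day=),
    the layout Athena and Spark use for partition pruning."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={d.year}/month={d.month:02d}/day={d.day:02d}/"
    )

# Example: each daily batch lands under its own prefix.
prefix = partition_prefix("analytics-lake", "events", date(2024, 1, 5))
print(prefix)  # s3://analytics-lake/events/year=2024/month=01/day=05/
```

With this layout, a query filtering on `year`, `month`, and `day` reads only the matching prefixes instead of scanning the whole table.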
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a talented Big Data Engineer, you will be responsible for developing and managing our company's Big Data solutions. Your role will involve designing and implementing Big Data tools and frameworks, implementing ELT processes, collaborating with development teams, building cloud platforms, and maintaining the production system. To excel in this position, you should possess in-depth knowledge of Hadoop technologies, exceptional project management skills, and advanced problem-solving abilities. A successful Big Data Engineer understands the company's needs and establishes scalable data solutions to meet current and future requirements effectively.

Your responsibilities will include meeting with managers to assess the company's Big Data requirements and developing solutions on AWS using tools like Apache Spark, Databricks, Delta tables, EMR, Athena, Glue, and Hadoop. You will also load disparate data sets, conduct pre-processing services using tools such as Athena, Glue, and Spark, collaborate with software research and development teams, build cloud platforms for application development, and maintain production systems.

The requirements for this role include a minimum of 5 years of experience as a Big Data Engineer; proficiency in Python and PySpark; expertise in Hadoop, Apache Spark, Databricks, Delta tables, and AWS data analytics services; extensive experience with Delta tables and the JSON and Parquet file formats; familiarity with AWS data analytics services like Athena, Glue, Redshift, and EMR; and knowledge of data warehousing, NoSQL, and RDBMS databases. Good communication skills and the ability to solve complex data processing and transformation problems are essential for success in this role.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Engineer in Pune, your responsibilities will include designing, implementing, and optimizing end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data. You will develop data pipelines that extract and transform data in near real time using cloud-native technologies. Implementing data validation and quality checks to ensure accuracy and consistency will also be part of your role. Monitoring system performance, troubleshooting issues, and implementing optimizations to enhance reliability and efficiency will be crucial tasks. Collaboration with business users, analysts, and other stakeholders to understand data requirements and deliver tailored solutions is an essential aspect of this position. You will be expected to document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation, and to provide technical guidance and support to team members and stakeholders as needed.

Desirable competencies for this role include 8+ years of work experience; proficiency in writing complex SQL queries on MPP systems such as Snowflake or Redshift; experience with Databricks and Delta tables; data engineering experience with Spark, Scala, or Python; familiarity with the Microsoft Azure stack, including Azure Storage Accounts, Data Factory, and Databricks; experience with Azure DevOps and CI/CD pipelines; working knowledge of Python; and comfort participating in 2-week sprint development cycles.
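The row-level validation and quality checks this kind of role involves can be sketched in plain Python; the field names and rules here are illustrative assumptions, not part of the listing.

```python
def validate_records(records):
    """Run simple row-level quality checks, separating valid rows
    from rejects and recording the reason each reject failed."""
    valid, rejects = [], []
    for row in records:
        if row.get("id") is None:
            rejects.append((row, "missing id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejects.append((row, "invalid amount"))
        else:
            valid.append(row)
    return valid, rejects

rows = [
    {"id": 1, "amount": 10.5},
    {"id": None, "amount": 3.0},
    {"id": 2, "amount": -1},
]
good, bad = validate_records(rows)
print(len(good), len(bad))  # 1 2
```

In a real pipeline the same pattern runs inside the ingestion job, with rejects routed to a quarantine table and surfaced in monitoring rather than silently dropped.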
Posted 2 weeks ago
8.0 - 12.0 years
20 - 25 Lacs
Pune
Work from Office
Designation: Big Data Lead/Architect
Location: Pune
Experience: 8-10 years
Notice Period: immediate joiner / 15-30 days notice
Reports To: Product Engineering Head

Job Overview

We are looking to hire a talented big data engineer to develop and manage our company's Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system. To ensure success as a big data engineer, you should have in-depth knowledge of Hadoop technologies, excellent project management skills, and high-level problem-solving skills. A top-notch Big Data Engineer understands the needs of the company and institutes scalable data solutions for its current and future needs.

Responsibilities:

Meeting with managers to determine the company's Big Data needs. Developing big data solutions on AWS, using Apache Spark, Databricks, Delta tables, EMR, Athena, Glue, Hadoop, etc. Loading disparate data sets and conducting pre-processing services using Athena, Glue, Spark, etc. Collaborating with the software research and development teams. Building cloud platforms for the development of company applications. Maintaining production systems.

Requirements:

8-10 years of experience as a big data engineer. Must be proficient with Python and PySpark. In-depth knowledge of Hadoop, Apache Spark, Databricks, Delta tables, and AWS data analytics services. Must have extensive experience with Delta tables and the JSON and Parquet file formats. Good to have: experience with AWS data analytics services like Athena, Glue, Redshift, and EMR; familiarity with data warehousing. Must have knowledge of NoSQL and RDBMS databases. Good communication skills. Ability to solve complex data processing and transformation problems.
Posted 1 month ago
5.0 - 8.0 years
15 - 18 Lacs
Coimbatore
Hybrid
Role & responsibilities

Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.
Posted 2 months ago
8 - 13 years
25 - 30 Lacs
Bengaluru
Hybrid
Overall 8+ years of solid experience in data projects. Design, develop, and maintain robust ETL/ELT pipelines for data ingestion, transformation, and storage. Proficient in SQL, with hands-on experience in complex joins, subqueries, functions, and stored procedures. Able to perform SQL tuning and query optimization without support. Design, develop, and maintain ETL pipelines using Databricks and PySpark to extract, transform, and load data from various sources. Must have good working experience with Delta tables, deduplication, and merging on terabyte-scale data sets. Optimize and fine-tune existing ETL workflows for performance and scalability. Excellent knowledge of dimensional modelling and data warehousing. Must have experience working with large data sets. Experience working with batch and real-time data processing (good to have). Implement data validation and quality checks, and ensure adherence to security and compliance standards. Ability to develop reliable, secure, compliant data processing systems. Work closely with cross-functional teams to support data analytics, reporting, and business intelligence initiatives. Should be self-driven and able to work independently without support.
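The deduplication-and-merge requirement above is the semantics a Delta Lake MERGE INTO (WHEN MATCHED UPDATE / WHEN NOT MATCHED INSERT) applies at scale; here is a plain-Python sketch of that upsert logic, with illustrative field names (`id`, `updated_at`) that are assumptions, not from the posting.

```python
def merge_upsert(target, updates, key="id", version="updated_at"):
    """Upsert `updates` into `target` keyed by `key`, keeping the
    newest record per key - the logic a Delta MERGE INTO performs
    with WHEN MATCHED UPDATE / WHEN NOT MATCHED INSERT clauses."""
    merged = {row[key]: row for row in target}
    for row in updates:
        current = merged.get(row[key])
        # Keep the incoming row only if it is at least as new.
        if current is None or row[version] >= current[version]:
            merged[row[key]] = row
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "val": "a", "updated_at": 1},
          {"id": 2, "val": "b", "updated_at": 1}]
updates = [{"id": 2, "val": "b2", "updated_at": 2},
           {"id": 3, "val": "c", "updated_at": 2}]
result = merge_upsert(target, updates)
print([r["val"] for r in result])  # ['a', 'b2', 'c']
```

On terabyte-scale Delta tables the same outcome comes from a single MERGE statement, with the engine handling partition pruning and file rewrites; deduplicating the source batch on the merge key first avoids multiple-match errors.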
Posted 2 months ago