8.0 - 13.0 years
16 - 27 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Kanerika Inc. is a premier global software products and services firm that specializes in innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI/ML, GenAI/LLM, and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Designation: Lead Data Engineer
Location: Hyderabad, Indore, Ahmedabad
Experience: 8+ years

Role & responsibilities:
• Analyze business requirements.
• Analyze the data model, perform gap analysis against the business requirements and Power BI, and design and model the Power BI schema.
• Transform data in Power BI, SQL, or an ETL tool.
• Create DAX formulas, reports, and dashboards.
• Write SQL queries and stored procedures.
• Design effective Power BI solutions based on business requirements.
• Manage a team of Power BI developers and guide their work.
• Integrate data from various sources into Power BI for analysis.
• Optimize report and dashboard performance for smooth usage.
• Collaborate with stakeholders to align Power BI projects with business goals.
• Knowledge of data warehousing (must); data engineering experience is a plus.

What we need:
• B.Tech in Computer Science or equivalent.
• Minimum 5+ years of relevant experience.
Posted 2 months ago
5.0 - 10.0 years
13 - 23 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Hi, we are excited to announce that #LTIMindtree is currently recruiting Data Engineers!

Roles available:
• Specialist - Data Engineering: 5 to 8 years of experience
• Senior Specialist - Data Engineering: 8 to 12 years of experience

Location: Bangalore, Pune, Mumbai, Kolkata, Hyderabad, Chennai, and Delhi NCR
Work mode: Hybrid
Notice period: up to 60 days
Link to share your details: https://lnkd.in/daty4F25

Job Summary:
We are seeking an experienced and strategic Data Engineer to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers), as sketched after this listing.
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.
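For illustration, here is a minimal PySpark sketch of the Medallion (Bronze/Silver/Gold) flow this posting names. The storage paths, column names, and table layout are hypothetical assumptions, and a Delta-enabled Spark environment (such as Databricks) is assumed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw data as-is, adding ingestion metadata (hypothetical path).
bronze = (spark.read.format("json").load("/mnt/raw/orders")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleanse and conform, e.g. deduplicate and enforce types.
silver = (spark.read.format("delta").load("/mnt/bronze/orders")
          .dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_date")))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: aggregate into a business-ready table for reporting.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_revenue")
```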
Posted 2 months ago
10.0 - 15.0 years
12 - 22 Lacs
New Delhi, Gurugram
Hybrid
Team Leadership & Management:
• Lead, mentor, and develop a team of data engineers.
• Foster a collaborative and innovative team environment.
• Conduct performance evaluations and support professional growth.

Data Engineering & Architecture:
• Architect and implement scalable data solutions using Azure Databricks and Snowflake (see the sketch after this listing).
• Design, build, and maintain robust data pipelines with a solid understanding of ETL/ELT processes.
• Optimize data workflows for performance, reliability, and scalability.

Solution Architecture:
• Architect comprehensive data solutions tailored to business needs.
• Lead the design and implementation of data warehouses, ensuring alignment with organizational objectives.
• Collaborate with stakeholders to define and refine data requirements and solutions.

AI Integration:
• Work alongside data scientists and AI specialists to integrate machine learning models into data pipelines.
• Implement AI-driven solutions to enhance data processing and analytics capabilities.

Engineering Project Management:
• Manage data engineering projects from inception to completion, ensuring timely delivery and adherence to project goals.
• Utilize project management methodologies to track progress, allocate resources, and mitigate risks.
• Coordinate with stakeholders to define project requirements and objectives.

Infrastructure as Code & Automation:
• Implement and manage infrastructure using Terraform.
• Develop and maintain CI/CD pipelines to automate deployments and ensure continuous integration and delivery of data solutions.

Quality Assurance & Best Practices:
• Establish and enforce data engineering best practices and standards.
• Ensure data quality, security, and compliance across all data initiatives.
• Conduct code reviews and ensure adherence to coding standards.

Collaboration & Communication:
• Work closely with data analysts, business intelligence teams, and other stakeholders to understand data needs and deliver solutions.
• Communicate technical concepts and project statuses effectively to non-technical stakeholders.

Education: Undergraduate degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent experience.

Experience:
• 8+ years of overall experience in data engineering.
• 2+ years of experience managing data engineering teams.
• Proven experience with Azure Databricks and Snowflake.
• Solid experience in designing data solutions for data warehouses.
• Hands-on experience with Terraform for infrastructure as code.
• Strong knowledge of CI/CD tools and practices.
• Experience integrating AI and machine learning models into data pipelines.

Technical Skills:
• Proficiency in Spark, Scala, Python, SQL, and Databricks.
• Proven Unix scripting and SQL skills.
• Strong understanding of SQL and database management.
• Familiarity with data warehousing, ETL/ELT processes, and big data technologies.
• Experience with cloud platforms, preferably Microsoft Azure.

Project Management:
• Proven ability to manage multiple projects simultaneously.
• Familiarity with project management tools (e.g., Jira, Trello, Asana, Rally).
• Strong organizational and time-management skills.

Soft Skills:
• Excellent leadership and team management abilities.
• Ability to work collaboratively in a fast-paced environment.
• Proven ability to perform with minimal supervision.
• Solid work prioritization, planning, and organizational skills.
• Leadership qualities including being proactive, thoughtful, thorough, decisive, and flexible.
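As an illustrative aside on the Snowflake integration this role involves, here is a minimal Python sketch using the official Snowflake connector; the account, credentials, warehouse, and table are placeholder assumptions, not details from the posting.

```python
import snowflake.connector

# Connect with placeholder credentials (assumed values for illustration only).
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="********",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Hypothetical aggregate over a table loaded by the pipeline.
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```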
Posted 2 months ago
10.0 - 15.0 years
12 - 22 Lacs
Hyderabad
Hybrid
Team Leadership & Management:
• Lead, mentor, and develop a team of data engineers.
• Foster a collaborative and innovative team environment.
• Conduct performance evaluations and support professional growth.

Data Engineering & Architecture:
• Architect and implement scalable data solutions using Azure Databricks and Snowflake.
• Design, build, and maintain robust data pipelines with a solid understanding of ETL/ELT processes.
• Optimize data workflows for performance, reliability, and scalability.

Solution Architecture:
• Architect comprehensive data solutions tailored to business needs.
• Lead the design and implementation of data warehouses, ensuring alignment with organizational objectives.
• Collaborate with stakeholders to define and refine data requirements and solutions.

AI Integration:
• Work alongside data scientists and AI specialists to integrate machine learning models into data pipelines.
• Implement AI-driven solutions to enhance data processing and analytics capabilities.

Engineering Project Management:
• Manage data engineering projects from inception to completion, ensuring timely delivery and adherence to project goals.
• Utilize project management methodologies to track progress, allocate resources, and mitigate risks.
• Coordinate with stakeholders to define project requirements and objectives.

Infrastructure as Code & Automation:
• Implement and manage infrastructure using Terraform.
• Develop and maintain CI/CD pipelines to automate deployments and ensure continuous integration and delivery of data solutions.

Quality Assurance & Best Practices:
• Establish and enforce data engineering best practices and standards.
• Ensure data quality, security, and compliance across all data initiatives.
• Conduct code reviews and ensure adherence to coding standards.

Collaboration & Communication:
• Work closely with data analysts, business intelligence teams, and other stakeholders to understand data needs and deliver solutions.
• Communicate technical concepts and project statuses effectively to non-technical stakeholders.

Education: Undergraduate degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent experience.

Experience:
• 8+ years of overall experience in data engineering.
• 2+ years of experience managing data engineering teams.
• Proven experience with Azure Databricks and Snowflake.
• Solid experience in designing data solutions for data warehouses.
• Hands-on experience with Terraform for infrastructure as code.
• Strong knowledge of CI/CD tools and practices.
• Experience integrating AI and machine learning models into data pipelines.

Technical Skills:
• Proficiency in Spark, Scala, Python, SQL, and Databricks.
• Proven Unix scripting and SQL skills.
• Strong understanding of SQL and database management.
• Familiarity with data warehousing, ETL/ELT processes, and big data technologies.
• Experience with cloud platforms, preferably Microsoft Azure.

Project Management:
• Proven ability to manage multiple projects simultaneously.
• Familiarity with project management tools (e.g., Jira, Trello, Asana, Rally).
• Strong organizational and time-management skills.

Soft Skills:
• Excellent leadership and team management abilities.
• Ability to work collaboratively in a fast-paced environment.
• Proven ability to perform with minimal supervision.
• Solid work prioritization, planning, and organizational skills.
• Leadership qualities including being proactive, thoughtful, thorough, decisive, and flexible.
Posted 2 months ago
7.0 - 12.0 years
14 - 20 Lacs
Noida, New Delhi, Gurugram
Hybrid
Primary Responsibilities:
• Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security.
• Build and maintain data pipelines using Azure Databricks and other Azure data integration tools.
• Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets.
• Write extensive queries in SQL and Snowflake.
• Implement security and access control measures, and regularly audit the Azure platform and infrastructure to ensure compliance.
• Create, understand, and validate designs and estimated effort for a given module/task, and be able to justify them.
• Possess solid troubleshooting skills and troubleshoot issues across different technologies and environments.
• Implement and adhere to best engineering practices: design, unit testing, functional test automation, continuous integration, and delivery.
• Maintain code quality by writing clean, maintainable, and testable code.
• Monitor performance and optimize resources to ensure cost-effectiveness and high availability.
• Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
• Provide technical support and consultation for infrastructure questions.
• Help develop, manage, and monitor continuous integration and delivery systems.
• Take accountability and ownership of features and teamwork.
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives.

Required Qualifications:
• B.Tech/MCA (minimum 16 years of formal education).
• Overall 7+ years of experience.
• Minimum of 3 years of experience in Azure (ADF), Databricks, and DevOps.
• 5 years of experience in writing advanced-level SQL.
• 2-3 years of experience in writing, reading, and debugging Spark, Scala, and Python code.
• 3 or more years of experience in architecting, designing, developing, and implementing cloud solutions on Azure.
• Proficiency in programming languages and scripting tools.
• Understanding of cloud data storage and database technologies such as SQL and NoSQL.
• Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts.
• Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform.
• Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks.
• Proven excellent communication, writing, and presentation skills.
• Experience interacting with international customers to gather requirements and convert them into solutions using relevant skills.

Preferred Qualifications:
• Knowledge of AI/ML or LLMs (GenAI).
• Knowledge of the US healthcare domain and experience with healthcare data.
• Experience and skills with Snowflake.
Posted 2 months ago
2.0 - 7.0 years
7 - 17 Lacs
Mumbai
Work from Office
Greetings! We have an opening with a reputed finance industry client for the role of Data Management Business Analyst.

Experience: 2+ years

Role & responsibilities:
• Extract and analyze data from the MES system to identify trends, performance metrics, and areas for improvement.
• Elicit, analyze, specify, and verify business requirements.
• Create documents such as Functional Specification Documents (FSD) with table-column mapping.
• Engage with various stakeholders (Business, Data Science, Power BI team) for cross-functional data validation and support.
• Identify opportunities to optimize manufacturing processes based on data analysis and user feedback.

Preferred candidate profile:
• 2+ years of relevant experience in data modelling and management.

Interested candidates can share their resume at josy@topgearconsultants.com
Posted 2 months ago
6.0 - 11.0 years
0 - 0 Lacs
Hyderabad
Hybrid
Azure Databricks Lead (Sr. Data Engineer) - Hyderabad

Who we are:
Tiger Analytics is a global analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modeled around expertise and mutual respect with a team-first mindset. Working at Tiger, you'll be at the heart of this AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire. We are headquartered in Silicon Valley and have our delivery centers across the globe.

Role Overview:
We are seeking street-smart and technically strong Senior Data Engineers / Leads who can take ownership of designing and developing cutting-edge data and AI platforms using Azure-native technologies and Databricks. You will play a critical role in building scalable data pipelines, modern data architectures, and intelligent analytics solutions.

Key Responsibilities:
• Design and implement scalable, metadata-driven frameworks for data ingestion, quality, and transformation across both batch and streaming datasets.
• Develop and optimize end-to-end data pipelines to process structured and unstructured data, enabling the creation of analytical data products.
• Build robust exception handling, logging, and monitoring mechanisms for better observability and operational support.
• Take ownership of complex modules and lead the development of critical data workflows and components.
• Provide guidance to data engineers and peers on best practices.
• Collaborate with cross-functional teams, including business consultants, data architects and scientists, and application developers, to deliver impactful analytics solutions.

Required Qualifications:
• 5+ years of overall technical experience, with a minimum of 2 years of hands-on experience with Microsoft Azure and Databricks.
• Proven experience delivering at least one end-to-end Data Lakehouse solution on Azure Databricks using the Medallion Architecture.
• Strong working knowledge of the Databricks ecosystem, including PySpark, Notebooks, Structured Streaming, Unity Catalog, Delta Live Tables, Workflows, and SQL Warehouse (see the sketch after this listing).
• Advanced programming, unit testing, and debugging skills in Python and SQL.
• Hands-on experience with Azure-native services such as Azure Data Factory, ADLS Gen2, Azure SQL Database, and Event Hub.
• Solid understanding of data modeling techniques, including both Dimensional and Third Normal Form (3NF) models.
• Exposure to developing LLM/Generative AI-powered applications.
• Must have an excellent understanding of CI/CD workflows using Azure DevOps.
• Bonus: knowledge of Azure infrastructure, including provisioning, networking, security, and governance.

Educational Background:
Bachelor's degree (B.E/B.Tech) in Computer Science, Information Technology, or a related field from a reputed institute (preferred).

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.
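For illustration, here is a minimal sketch of the Delta Live Tables (DLT) workflow named above; the table names, landing path, and expectation rule are hypothetical assumptions, and in DLT pipelines the `spark` session is provided by the Databricks runtime.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage (bronze).")
def events_bronze():
    # Auto Loader incrementally ingests new JSON files from a hypothetical path.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/events"))

@dlt.table(comment="Cleansed, typed events (silver).")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def events_silver():
    # Rows failing the expectation above are dropped before this table is built.
    return (dlt.read_stream("events_bronze")
            .withColumn("event_ts", F.to_timestamp("event_ts")))
```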
Additional Benefits: Health insurance (self & family), virtual wellness platform, Car Lease Program and knowledge communities.
Posted 2 months ago
8.0 - 13.0 years
14 - 18 Lacs
Bengaluru
Work from Office
The Solution Architect - Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.

Key Responsibilities:
• Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms.
• Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance (see the DB2 integration sketch after this listing).
• Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations.
• Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization.
• Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms).
• SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting purposes, with a focus on performance and scalability.
• Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations).
• Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights.
• Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth.
• Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs).
• Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, ETL process flows, and data dictionaries.

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions.
• Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management).
• Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics.
• Extensive experience in ETL development, data integration, and data transformation processes.
• Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting).
• Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud).
• Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17).
• Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance.
• Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Preferred Qualifications:
• Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift).
• Knowledge of machine learning workflows, leveraging Databricks for model training and deployment.
• Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations.
• Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies.
• Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.

Key Competencies:
• Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering.
• Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability.
• Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders.
• Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.
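For illustration, a minimal PySpark sketch of pulling a DB2 table into Databricks over JDBC, of the kind the DB2/Databricks integration above implies; the host, database, table, and credentials are placeholder assumptions, and the IBM DB2 JDBC driver jar is assumed to be attached to the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("db2-ingest").getOrCreate()

# Read a hypothetical claims table from DB2 over JDBC.
claims = (spark.read.format("jdbc")
          .option("url", "jdbc:db2://db2-host:50000/INSDB")
          .option("driver", "com.ibm.db2.jcc.DB2Driver")
          .option("dbtable", "CLAIMS.CLAIM_HEADER")
          .option("user", "etl_user")
          .option("password", "********")
          .load())

# Persist to Delta for downstream Cognos/Databricks reporting and analytics.
claims.write.format("delta").mode("overwrite").save("/mnt/silver/claims")
```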
Posted 2 months ago
5.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at

Job Function: Data Analytics & Computational Sciences
Job Sub Function: Data Engineering
Job Category: Scientific/Technology
All Job Posting Locations: Bangalore, Karnataka, India

Job Description:

Position Summary
Johnson & Johnson MedTech is seeking a Sr Eng Data Engineering for the Digital Surgery Platform (DSP) in Bangalore, India. Johnson & Johnson (J&J) stands as the world's leading manufacturer of healthcare products and a service provider in the pharmaceutical and medical device sectors. At Johnson & Johnson MedTech's Digital Surgery Platform, we are pioneering the future of healthcare by harnessing the power of people and technology, transitioning to a digital-first MedTech enterprise. With a focus on innovation and an ambitious strategic vision, we are integrating robotic-assisted surgery platforms, connected medical devices, surgical instruments, medical imaging, surgical efficiency solutions, and OR workflow into the next-generation MedTech platform. This initiative will also foster new surgical insights, improve supply chain innovation, use cloud infrastructure, incorporate cybersecurity, collaborate with hospital EMRs, and elevate our digital solutions. We are a diverse and growing team that nurtures creativity, deep understanding of data processing techniques, and the use of sophisticated analytics technologies to deliver results.

Overview
As a Sr Eng Data Engineering for the J&J MedTech Digital Surgery Platform (DSP), you will play a pivotal role in building the modern cloud data platform by demonstrating your in-depth technical expertise and interpersonal skills. In this role, you will focus on accelerating digital product development as part of the multifunctional and fast-paced DSP data platform team and will contribute to the digital transformation through innovative data solutions. One of the key success criteria for this role is to ensure the quality of DSP software solutions and demonstrate the ability to collaborate effectively with the core infrastructure and other engineering teams, working closely with the DSP security and technical quality partners.

Responsibilities
• Work with platform data engineering, core platform, security, and technical quality to design, implement, and deploy data engineering solutions.
• Develop pipelines for ingestion, transformation, orchestration, and consumption of various types of data.
• Design and deploy data layering pipelines that use modern Spark-based data processing technologies such as Databricks and Delta Live Tables (DLT).
• Integrate data engineering solutions with Azure data governance components, including but not limited to Purview and Databricks Unity Catalog.
• Implement and support security monitoring solutions within the Azure Databricks ecosystem.
• Design, implement, and support data monitoring solutions in data analytical workspaces.
• Configure and deploy Databricks analytical workspaces in Azure with IaC (Terraform, Databricks API) using J&J DevOps automation tools within the JPM/Xena framework.
• Implement automated CI/CD processes for data processing pipelines.
• Support DataOps for the distributed DSP data architecture.
• Function as a data engineering SME within the data platform.
• Manage authoring and execution of automated test scripts.
• Build effective partnerships with DSP architecture, core infrastructure, and other domains to design and deploy data engineering solutions.
• Work closely with the DSP Product Managers to understand business needs, translate them into system requirements, and demonstrate in-depth understanding of use cases for building prototypes and solutions for data processing pipelines.
• Operate by SAFe Agile DevOps principles and methodology in building quality DSP technical solutions.
• Author and implement automated test scripts as mandated by DSP quality requirements.

Qualifications

Required:
• Bachelor's degree or equivalent experience in software, computer science, or data engineering.
• 8+ years of overall IT experience.
• 5-7 years of experience in cloud computing and data systems.
• Advanced Python programming skills.
• Expert level in Azure Databricks Spark technology and data engineering (Python), including Delta Live Tables (DLT).
• Experience in design and implementation of secure Azure data solutions.
• In-depth knowledge of the data architecture: infrastructure, network components, and data processing.
• Proficiency in building data pipelines in Azure Databricks.
• Proficiency in configuration and administration of Azure Databricks workspaces and Databricks Unity Catalog.
• Deep understanding of the principles of the modern data Lakehouse.
• Deep understanding of Azure system capabilities and data services, and the ability to implement security controls.
• Proficiency with enterprise DevOps tools, including Bitbucket, Jenkins, and Artifactory.
• Experience with DataOps.
• Experience with quality software systems.
• Deep understanding of and experience in SAFe Agile.
• Understanding of the SDLC.

Preferred:
• Master's degree or equivalent.
• Proven healthcare experience.
• Azure Databricks certification.
• Ability to analyze use cases, translate them into system requirements, and make data-driven decisions.
• DevOps automation tools with the JPM/Xena framework.
• Expertise in automated testing.
• Experience in AI and ML.
• Excellent verbal and written communication skills.
• Ability to travel domestically up to 10%.

Johnson & Johnson is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status, and will not be discriminated against on the basis of disability.
Posted 2 months ago
4.0 - 5.0 years
3 - 7 Lacs
Mumbai, Pune, Chennai
Work from Office
Job Category: IT
Job Type: Full Time
Job Location: Bangalore, Chennai, Mumbai, Pune
Experience: 4 to 5 years

JD: Azure Data Engineer with QA
• Must have: Azure Databricks, Azure Data Factory, Spark SQL.
• 4-5 years of development experience in Azure Databricks.
• Strong experience in SQL, along with performing Azure Databricks quality assurance.
• Understand complex data systems by working closely with engineering and product teams.
• Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake, or other data storage locations.

Kind note: please apply or share your resume only if it matches the above criteria.
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Hybrid
Sr Azure Data Engineer

About Cloudaeon:
Cloudaeon is a global technology consulting and services company. We support companies in managing cloud infrastructure and solutions with the help of big data, DevOps, and analytics. We offer first-class solutions and services that use big data and always exceed customer expectations. Our deep vertical knowledge, combined with expertise in several enterprise-class big data platforms, helps develop targeted solutions to meet our customers' business needs. Our global team consists of experienced professionals with experience in various tech stacks. Every member of our team is very active and committed to helping our customers achieve their goals.

Job Role:
We are looking for a Senior Azure Data Engineer with overall 5+ years of experience to join our team. The ideal candidate should have expertise in Azure Data Factory (ADF), Databricks, SQL, and Python, and experience working with SAP IS-Auto as a data source. This role involves data modeling, semantic layer modeling, and ETL/ELT pipeline development to enable efficient data processing and analytics. You will use various methods to transform raw data into useful data systems. Overall, you will strive for efficiency by aligning data systems with business goals.

Responsibilities:
• Develop & optimize ETL pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
• Data modeling: Design logical, physical, and semantic-layer data models for structured and unstructured data.
• Integrate SAP IS-Auto: Extract, transform, and load data from SAP IS-Auto into Azure-based data platforms (see the upsert sketch after this listing).
• Database management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
• Big data processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
• Data quality & governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
• Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.

Requirements:
• Azure cloud expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
• Programming: Proficiency in Python for data processing, automation, and scripting.
• SQL & database skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
• SAP IS-Auto data handling: Experience integrating SAP IS-Auto as a data source into data pipelines.
• Data modeling: Hands-on experience in dimensional modeling, semantic-layer modeling, and entity-relationship modeling.
• Big data frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
• Performance optimization: Expertise in query optimization, indexing, and performance tuning.
• Data governance & security: Knowledge of RBAC, encryption, and data privacy standards.
• Strong problem-solving skills coupled with good communication skills.
• Open-minded, inquisitive, life-long learner.
• Good conversion of high-level business and technical requirements into technical specs.
• Comfortable using Azure cloud technologies.
• Customer-centric, passionate about delivering great digital products and services.

Preferred Qualifications:
• Experience with CI/CD for data pipelines using Azure DevOps.
• Knowledge of Kafka/Event Hub for real-time data processing.
• Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).
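For illustration, a minimal sketch of the Delta Lake upsert step such an ingestion pipeline typically ends with; the paths, key column, and staging table are hypothetical assumptions rather than details from the posting.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging data extracted from the source system (e.g., via ADF).
updates = spark.read.format("delta").load("/mnt/staging/vehicles")

# Merge into the curated Delta table: update existing rows, insert new ones.
target = DeltaTable.forPath(spark, "/mnt/silver/vehicles")
(target.alias("t")
 .merge(updates.alias("s"), "t.vehicle_id = s.vehicle_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```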
Posted 2 months ago
2.0 - 4.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Overview
As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company.

Responsibilities:
• Act as a subject matter expert across different digital projects.
• Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
• Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
• Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance (a minimal data-quality sketch follows this listing).
• Be responsible for implementing best practices around systems integration, security, performance, and data management.
• Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape.
• Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
• Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
• Develop and optimize procedures to productionize data science models.
• Define and manage SLAs for data products and processes running in production.
• Support large-scale experimentation done by data scientists.
• Prototype new approaches and build solutions at scale.
• Research state-of-the-art methodologies.
• Create documentation for learnings and knowledge transfer.
• Create and audit reusable packages or libraries.

Qualifications:
• 7+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture.
• 4+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools.
• 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc.
• 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure certification is a plus.
• Experience in Azure Log Analytics.
• Experience with integration of multi-cloud services with on-premises technologies.
• Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
• Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
• Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
• Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
• Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
• Experience with version control systems like GitHub and deployment & CI tools.
• Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
• Experience with statistical/ML techniques is a plus.
• Experience building solutions in the retail or supply chain space is a plus.
• Understanding of metadata management, data lineage, and data glossaries is a plus.
• Working knowledge of agile development, including DevOps and DataOps concepts.
• Familiarity with business intelligence tools (such as Power BI).
• B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.
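A minimal, editorial sketch of the kind of pipeline-quality check referenced above, using plain PySpark rather than a dedicated tool like Deequ or Great Expectations; the table path, columns, and thresholds are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table produced by the pipeline.
df = spark.read.format("delta").load("/mnt/silver/sales")

# Profile a few operational KPIs for this run.
row = df.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.sum(F.col("amount").isNull().cast("int")).alias("null_amounts"),
    F.countDistinct("order_id").alias("distinct_orders"),
).first()

# Fail the run loudly if basic completeness/uniqueness checks are violated.
assert row["row_count"] > 0, "table is empty"
assert row["null_amounts"] == 0, "amount column contains nulls"
assert row["distinct_orders"] == row["row_count"], "duplicate order_id values"
```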
Posted 2 months ago
10.0 - 15.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Title: Senior SQL Developer
Experience: 10-15 years
Location: Bangalore

• Experience: Minimum of 10+ years in database development and management roles.
• SQL Mastery: Advanced expertise in crafting and optimizing complex SQL queries and scripts.
• AWS Redshift: Proven experience in managing, tuning, and optimizing large-scale Redshift clusters.
• PostgreSQL: Deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques.
• Data Pipelines: Extensive experience in ETL development and integrating data from multiple sources into cloud environments.
• Cloud Proficiency: Strong experience with AWS services like ECS, S3, KMS, Lambda, Glue, and IAM.
• Data Modeling: Comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems.
• Scripting: Proficiency in Python, C#, or other scripting languages for automation and data manipulation.

Preferred Qualifications:
• Leadership: Prior experience in leading database or data engineering teams.
• Data Visualization: Familiarity with reporting and visualization tools like Tableau, Power BI, or Looker.
• DevOps: Knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git).
• Certifications: Any relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) will be a plus.
• Azure Databricks: Familiarity with Azure Databricks for data engineering and analytics workflows will be a significant advantage.

Soft Skills:
• Strong problem-solving and analytical capabilities.
• Exceptional communication skills for collaboration with technical and non-technical stakeholders.
• A results-driven mindset with the ability to work independently or lead within a team.

Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent; 10+ years of experience.
Posted 2 months ago
7.0 - 11.0 years
15 - 20 Lacs
Mumbai
Work from Office
This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.

Key Responsibilities:
• Data Architecture: Develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
• Business Intelligence: Design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
• Data Engineering: Work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
• Data Governance: Establish and enforce data governance policies and standards.

Primary Skills & Experience:
• 15+ years of relevant experience in data warehousing, BI, and data governance.
• Proven track record of delivering successful data solutions on the Microsoft stack.
• Experience working with diverse teams and stakeholders.

Required Technical Skills:
• Strong proficiency in data warehousing concepts and methodologies.
• Expertise in Microsoft Power BI.
• Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
• Knowledge of SQL and scripting languages (Python, PowerShell).
• Strong understanding of data modeling and ETL/ELT processes.

Secondary (Soft) Skills:
• Excellent communication and interpersonal skills.
• Strong analytical and problem-solving abilities.
• Ability to work independently and as part of a team.
• Strong attention to detail and organizational skills.
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Greetings from Tech Mahindra!

With reference to your profile on the Naukri portal, we are contacting you to share a job opportunity for the role of SQL Developer with our own organization, Tech Mahindra.

COMPANY PROFILE:
Tech Mahindra is an Indian multinational information technology services and consulting company. Website: www.techmahindra.com

We are looking for an SQL Developer for our organization.

Job Details:
• Experience: 5+ years
• Education: Any
• Work timings: Normal shift
• Mode: Hybrid
• Location: open to all locations
• Working days: 5 days

Required Skills:
• 5+ years of experience with SQL DB/Server, including building SQL databases, database design, data modelling, and data warehousing.
• Strong experience creating complex stored procedures, functions, and dynamic SQL.
• Strong experience in performance tuning activities.
• Must have experience with Azure Data Factory V2, Azure Synapse, Azure Databricks, and SSIS.
• Strong Azure SQL Database and Azure SQL Data Warehouse concepts.
• Strong verbal and written communication skills.

Interested candidates, please forward your updated resume with the details below to ps00874998@techmahindra.com:
• Total years of experience:
• Relevant experience as an SQL developer:
• Relevant experience in Azure Data Factory:
• Relevant experience in Azure Databricks:
• Offer amount (if holding any offer):
• Location of offer:
• Reason for looking for another offer:
• Notice period (LWD if serving):
• Current location:
• Preferred location:
• CTC:
• Expected CTC:
• When are you available for the interview? (Time/Date):
• How soon can you join?

Best Regards,
Prerna Sharma
Business Associate | RMG
Tech Mahindra | PS00874998@TechMahindra.com
Posted 2 months ago
8.0 - 13.0 years
35 - 40 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Qualification: B.Tech / MCA / BCA / B.Sc (Computer Science or related field)
Notice Period: Immediate joiners preferred

We have an immediate requirement for Azure Data Engineers at senior and lead levels to work on cutting-edge data projects in the Azure ecosystem.

Primary Skills:
• Azure Data Factory (ADF)
• Azure Databricks
• SQL development & optimization
• Power BI (reports, dashboards, DAX)
• Data integration & ETL pipelines
• Data modeling and transformation

Secondary Skills:
• Azure Synapse Analytics
• Python / PySpark in Databricks
• DevOps exposure for data pipelines
• Version control using Git
• Exposure to Agile & CI/CD environments
• Strong communication and client-handling skills

Key Responsibilities:
• Design and implement data pipelines using ADF and Databricks.
• Transform and process large-scale datasets efficiently.
• Develop interactive reports and dashboards in Power BI.
• Write and optimize SQL queries and stored procedures.
• Work closely with architects and business teams to deliver high-quality data solutions.
• Lead and mentor junior data engineers in best practices.
Posted 2 months ago
8.0 - 10.0 years
15 - 20 Lacs
Pune
Work from Office
Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.

Experience (8-10 years):
• 8+ years of experience in data engineering or a related field.
• Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing.
• Experience working with multiple file formats, including Parquet, Delta, and Iceberg.
• Knowledge of Kafka or similar streaming technologies for real-time data ingestion (see the sketch after this listing).
• Experience with data governance and data security in Azure.
• Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure.
• Deep understanding of Azure data services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.).
• Familiarity with data lakes, data warehouses, and modern data architectures.
• Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
• Understanding of cloud infrastructure and architecture principles (especially within Azure).

Technical Skills:
• Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting of Spark jobs.
• Solid knowledge of Azure Databricks for scalable, distributed data processing.
• Strong coding skills in Python and Scala for data processing.
• Experience working with SQL, especially on large datasets.
• Knowledge of data formats like Iceberg, Parquet, ORC, and Delta Lake.

Leadership Skills:
• Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices.
• Excellent communication skills, capable of interacting with both technical and non-technical stakeholders.
• Strong problem-solving, analytical, and troubleshooting abilities.
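As an editorial illustration of the streaming-ingestion requirement above, a minimal Spark Structured Streaming sketch that reads from Kafka and lands records in Delta; the broker address, topic, and paths are hypothetical assumptions, and the Kafka connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a hypothetical topic on a hypothetical broker.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "telemetry")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers key/value as binary; cast the payload to string for parsing.
events = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("kafka_ts"),
)

# Append to a Delta table with checkpointing for fault tolerance.
(events.writeStream.format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/telemetry")
 .outputMode("append")
 .start("/mnt/bronze/telemetry"))
```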
Posted 2 months ago
14.0 - 20.0 years
20 - 35 Lacs
Pune, Chennai, Coimbatore
Hybrid
Key Skill / Skill Specialization: Azure Architect
Mandatory Skills: Azure Data Factory, Databricks, SQL, ETL, Power BI or any other analytics tool
Desired Skills: PySpark, Python, SQL
Joiners: immediate to 25 days only
Location: Pune or Chennai
Experience range: 14-20 years

Job Description:
• Overall 16+ years of experience in development, with 8+ years of experience with Azure Data Factory and Databricks and Big Data ecosystem tools (e.g., Hadoop, Hive).
• Knowledge of Python, PySpark, and SQL is a MUST.
• Data processing within ETL for big data pipelines, architectures, and data sets.
• Good understanding of data warehousing concepts.
• Load transformed data into storage and reporting structures in destinations including the data warehouse, high-speed indexes, real-time reporting systems, and analytics applications.
• Understanding of architecting scalable and reliable cloud infrastructure and cost optimization.
• Ability to understand the client's data ecosystem and map it to the final data model.
• Ability to understand, design, develop, and deliver any custom requirement to fit into the solution.
• Ensure that service delivery meets best-in-class standards, e.g., related to quality of architecture, code, and testing.
• Ensure that solution delivery expectations are met or exceeded.
• Manage client communication from a technical perspective.
• Ability to adapt to cutting-edge technologies.
• Manage and groom team members.
• Identify areas that can be automated to improve team efficiency and provide suggestions to the product team.
• Work closely with the product team to understand requirements and lead the development of solutions end to end; resolve issues and suggest new features.
• Support organizational initiatives related to competency development, capability building, etc.
• Proactively learn healthcare standards, workflows, and technologies.
• Be a hands-on technology professional to guide and mentor team members.

Educational background: Computer Science or Engineering.

Desired Skills:
• Develop and maintain innovative Azure solutions, with the ability to automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.).
• Interest in developing knowledge in the healthcare domain, along with deepening knowledge of software development processes and cloud technologies.
Posted 2 months ago
9.0 - 14.0 years
9 - 19 Lacs
Bengaluru
Remote
Job Description

Must-have skills:
• Microsoft Azure BI stack (ADF, Azure Databricks, PySpark, ADLS)
• SQL
• Python
• Understanding of big data and ETL concepts
• Experience in Azure analytics and DevOps
• Experience in building data pipelines using Python, PySpark & Spark SQL

Responsibilities:
• Play the role of senior developer for assigned projects.
• Provide technology consulting to customers and internal project teams.
• Create and maintain scalable ETL pipelines with Azure Data Factory.
• Ingest data from various sources.
• Create complex ETL workflows using ADF, ADB, ADL, and the data warehouse.
• Create Databricks notebooks using PySpark for data cleansing, transformation, and processing.
• Deploy code using CI/CD pipelines.

General Skills:
• At least 3-5 years of relevant experience as a Data Engineer on the Azure stack.
• Strong communication skills, with the ability to work directly with clients and lead junior team members.
• Problem-solving and analytical skills.
• Ability to explore emerging technologies.
• Eagerness to learn and ability to adapt to changes.
• Experience with Azure ARM templates, PowerShell, and CI/CD using Azure DevOps.

Nice-to-have skills:
• Azure DP-203 certified
• Power BI
• Apache Airflow
• Automation scripts
• Log Analytics, KQL
• Understanding of DWH concepts
• Good knowledge of database modelling and database performance tuning
Posted 2 months ago
8.0 - 13.0 years
16 - 22 Lacs
Noida, Chennai, Bengaluru
Work from Office
Location: Bangalore, Chennai, Delhi, Pune

Primary Roles and Responsibilities:
• Develop modern data warehouse solutions using Databricks and the AWS/Azure stack.
• Provide forward-thinking solutions in the data engineering and analytics space.
• Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
• Triage issues to find gaps in existing pipelines and fix them.
• Work with the business to understand reporting-layer needs and develop data models to fulfill them.
• Help junior team members resolve issues and technical challenges.
• Drive technical discussions with client architects and team members.
• Orchestrate the data pipelines in a scheduler via Airflow (see the sketch after this listing).

Skills and Qualifications:
• Bachelor's and/or Master's degree in Computer Science or equivalent experience.
• Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
• Deep understanding of Star and Snowflake dimensional modelling.
• Strong knowledge of data management principles.
• Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
• Hands-on experience in SQL, Python, and Spark (PySpark).
• Must have experience with the AWS/Azure stack.
• Desirable: ETL with batch and streaming (Kinesis).
• Experience building ETL / data warehouse transformation processes.
• Experience with Apache Kafka for streaming / event-based data.
• Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
• Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
• Experience working with structured and unstructured data, including imaging and geospatial data.
• Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
• Databricks Certified Data Engineer Associate/Professional certification (desirable).
• Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
• Experience working in an Agile methodology.
• Strong verbal and written communication skills.
• Strong analytical and problem-solving skills with high attention to detail.
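As an editorial illustration of the Airflow orchestration mentioned above, a minimal DAG sketch (Airflow 2.x assumed); the DAG id, schedule, and task commands are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A hypothetical nightly extract-transform-load chain.
with DAG(
    dag_id="warehouse_nightly",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    transform = BashOperator(task_id="transform", bash_command="echo transforming")
    load = BashOperator(task_id="load", bash_command="echo loading")

    # Run strictly in sequence: extract, then transform, then load.
    extract >> transform >> load
```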
Posted 2 months ago
8.0 - 12.0 years
15 - 27 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Role & responsibilities:

Job Description: We are primarily looking for a Data Engineer (AWS) with expertise in building data pipelines using Databricks and PySpark/Spark SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks, and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with S3 Data Lake as the storage tier (see the sketch after this listing).
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action.
• Optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: only immediate joiners / candidates serving notice period. Interested candidates can apply.

Regards,
HR Manager
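As an editorial illustration of the S3 Data Lake storage tier mentioned above, a minimal PySpark sketch; the bucket, prefixes, and schema are hypothetical assumptions, and the cluster is assumed to have S3 credentials and the hadoop-aws connector configured.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-lake-demo").getOrCreate()

# Read raw Parquet files from a hypothetical S3 data-lake prefix.
events = spark.read.parquet("s3a://example-datalake/raw/events/")

# A typical Spark SQL-style transformation over the lake data.
daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date")
         .count())

# Write the curated aggregate back to a separate lake zone.
daily.write.mode("overwrite").parquet("s3a://example-datalake/curated/daily_counts/")
```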
Posted 2 months ago
6.0 - 11.0 years
10 - 20 Lacs
Chennai, Bengaluru
Hybrid
Hiring for Big Data Lead

Experience: 6-12+ years
Work location: Chennai and Bangalore
Shift timings: 12:30 pm - 9:30 pm
Work mode: 5 days WFO
Primary skills: Azure, Databricks, ADF, PySpark, SQL

Sharing the JD for your reference:

Must have:
• 6+ years of IT experience in data warehousing and ETL.
• Hands-on data experience with cloud technologies: Azure, ADF, Synapse, PySpark/Python.
• Ability to understand design and source-to-target mapping (STTM) and create specification documents.
• Flexibility to operate from client office locations.
• Able to mentor and guide junior resources as needed.

Nice to have:
• Any relevant certifications.
• Banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail.

Kindly share the following details:
• Updated CV
• Relevant skills
• Total experience
• Current company
• Current CTC
• Expected CTC
• Notice period
• Current location
• Preferred location
Posted 2 months ago
4.0 - 6.0 years
8 - 10 Lacs
Bengaluru
Hybrid
Job opportunity from Hexaware Technologies!

Hiring an Azure Databricks consultant for the Bangalore location only (immediate joiner required).

If interested, please share the details below to manojkumark2@hexaware.com:
• Total experience:
• Experience in Databricks:
• Experience in PySpark & SQL:
• CCTC & ECTC:
• Notice period / LWD:
• Currently in Bangalore: Yes/No
Posted 2 months ago
5.0 - 10.0 years
7 - 12 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Qualification: B.Tech / MCA / BCA / B.Sc (Computer Science or related field)
Notice Period: Immediate joiners preferred

We have an immediate requirement for Azure Data Engineers at senior and lead levels to work on cutting-edge data projects in the Azure ecosystem.

Primary Skills:
• Azure Data Factory (ADF)
• Azure Databricks
• SQL development & optimization
• Power BI (reports, dashboards, DAX)
• Data integration & ETL pipelines
• Data modeling and transformation

Secondary Skills:
• Azure Synapse Analytics
• Python / PySpark in Databricks
• DevOps exposure for data pipelines
• Version control using Git
• Exposure to Agile & CI/CD environments
• Strong communication and client-handling skills

Key Responsibilities:
• Design and implement data pipelines using ADF and Databricks.
• Transform and process large-scale datasets efficiently.
• Develop interactive reports and dashboards in Power BI.
• Write and optimize SQL queries and stored procedures.
• Work closely with architects and business teams to deliver high-quality data solutions.
• Lead and mentor junior data engineers in best practices.
Posted 2 months ago
3.0 - 7.0 years
12 - 20 Lacs
Mumbai Suburban
Work from Office
Job Role:
• Minimum 3 years of previous industry work experience preferred.
• In-depth understanding of database structure principles.
• Knowledge of data mining and segmentation techniques; expertise in SQL and Oracle.
• Familiarity with data visualization and a data-oriented mindset.
• Ability to document complex business processes and handle all types of customer requests.
• Good communication skills in English; math and statistical analysis skills, with the ability to interpret and collate relevant data.
• Working experience with on-premises and cloud-based data infrastructure, handling large and diverse datasets.

Experience in one or more of the below technologies is preferred:
• AWS/GCP/Azure
• Kubernetes/Docker Swarm
• Apache Hadoop & Apache Spark
• Elastic Stack (ELK)
• Airflow / Prefect
• MongoDB, Cassandra, Redis, Memcached, and DynamoDB
• MySQL, Cassandra, and Oracle SQL
• Power BI/Tableau/QlikView
Posted 2 months ago