7.0 - 12.0 years
18 - 33 Lacs
Navi Mumbai
Work from Office
About Us: Celebal Technologies is a leading solution services company in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.
Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance-optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.
Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance-tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.
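The first two responsibilities (Kafka ingestion into a Bronze Delta layer via Structured Streaming) can be sketched as below. This is a minimal illustration, not the employer's actual pipeline: it assumes a Databricks/PySpark runtime with Kafka connectivity, and the broker address, topic name, and storage paths are placeholders.

```python
# Hypothetical sketch: Kafka -> Bronze layer via Spark Structured Streaming.
# Assumes a Databricks/PySpark runtime; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-bronze-ingest").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast before landing in the Bronze layer,
# keeping source metadata (topic/partition/offset) for lineage.
bronze = raw.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("topic"), col("partition"), col("offset"), col("timestamp"),
)

(
    bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/events")  # checkpointing for recovery
    .outputMode("append")   # one of the output modes named in the skills list
    .start("/mnt/bronze/events")
)
```

Silver and Gold layers would then read from the Bronze Delta path, apply validation and enrichment, and write downstream with their own checkpoints.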
Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming: processing modes, output modes (append, update, complete), checkpointing, and state management
- Experience with Kafka integration for real-time data pipelines
- Deep understanding of Medallion Architecture
- Proficiency with Databricks Autoloader and schema evolution
- Deep understanding of Unity Catalog and foreign catalogs
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
- Data management strategies (must have)
- Excellent governance and access management
- Strong data modelling and data warehousing concepts; Databricks as a platform
- Solid understanding of window functions
- Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios
- Industry expertise in at least one of Retail, Telecom, or Energy
- Real-time use case execution and data modelling
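The merge/upsert and SCD Type 2 items above are, on Databricks, usually a Delta Lake `MERGE INTO`; the underlying logic can be illustrated engine-independently in plain Python. Table, key, and column names below are invented for illustration:

```python
# Illustrative SCD Type 2 upsert in plain Python (no Spark required).
# On Databricks this would typically be a Delta Lake MERGE INTO statement.

def scd2_upsert(dim_rows, incoming, key="customer_id", tracked="city"):
    """Expire changed rows (is_current=False) and append new current versions."""
    result = [dict(r) for r in dim_rows]
    current = {r[key]: r for r in result if r["is_current"]}
    for row in incoming:
        existing = current.get(row[key])
        if existing is None:
            # brand-new key: insert as the current version
            result.append({**row, "is_current": True})
        elif existing[tracked] != row[tracked]:
            # changed attribute: close out the old version, add a new current one
            existing["is_current"] = False
            result.append({**row, "is_current": True})
        # unchanged rows are left alone (SCD Type 1 would overwrite in place)
    return result

dim = [{"customer_id": 1, "city": "Mumbai", "is_current": True}]
updates = [{"customer_id": 1, "city": "Pune"},
           {"customer_id": 2, "city": "Delhi"}]
dim = scd2_upsert(dim, updates)
# dim now holds three rows: the expired Mumbai version plus two current rows
```

A production version would also stamp effective-from/effective-to dates on each version, which is the usual way CDC feeds are folded into a Type 2 dimension.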
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Mumbai
Work from Office
Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance-optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.
Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance-tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.
Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming: processing modes, output modes (append, update, complete), checkpointing, and state management
- Experience with Kafka integration for real-time data pipelines
- Deep understanding of Medallion Architecture
- Proficiency with Databricks Autoloader and schema evolution
- Deep understanding of Unity Catalog and foreign catalogs
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
- Data management strategies (must have)
- Excellent governance and access management
- Strong data modelling and data warehousing concepts; Databricks as a platform
- Solid understanding of window functions
- Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios
- Industry expertise in at least one of Retail, Telecom, or Energy
- Real-time use case execution and data modelling
Location: Mumbai
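The window-function skill called out above can be exercised in any SQL engine; a small self-contained illustration using Python's bundled SQLite is below (the table and values are invented). It shows `ROW_NUMBER()` used to keep only the latest record per key, a common deduplication step when promoting data to a Silver layer:

```python
# Window-function illustration: ROW_NUMBER() to keep the latest record per key.
# Uses Python's bundled SQLite; table and values are invented for demonstration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (device_id TEXT, ts INTEGER, value REAL);
    INSERT INTO readings VALUES
        ('a', 1, 10.0), ('a', 2, 12.5), ('b', 1, 7.0);
""")

# Rank rows per device by recency, then keep only rank 1.
latest = conn.execute("""
    SELECT device_id, ts, value FROM (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY device_id ORDER BY ts DESC) AS rn
        FROM readings
    ) WHERE rn = 1
    ORDER BY device_id
""").fetchall()

print(latest)  # -> [('a', 2, 12.5), ('b', 1, 7.0)]
```

The same pattern in Spark SQL (`ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)`) is a stock interview exercise for roles like this one.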
Posted 2 weeks ago
16.0 - 21.0 years
16 - 21 Lacs
Delhi NCR, India
On-site
Role & responsibilities:
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modeling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop a next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.
Preferred candidate profile:
- Minimum 16 years of experience
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies
- Experience in data strategies and in developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases
- Solid experience in one or more RDBMS (such as Oracle, DB2, SQL Server)
- Good understanding of relational, dimensional, and Data Vault modelling
- Experience implementing two or more data models in a database with data security and access controls
- Good experience with OLTP and OLAP systems
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse)
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP)
- Understanding of DevOps processes
- Hands-on experience with one or more data modelling tools
- Good understanding of one or more ETL tools and data ingestion frameworks
- Understanding of data quality and data governance
- Good understanding of NoSQL databases and modeling techniques
- Good understanding of one or more business domains
- Understanding of the Big Data ecosystem
- Understanding of industry data models
- Hands-on experience in Python
- Experience leading large and complex teams
- Good understanding of agile methodology
Posted 3 weeks ago
16.0 - 21.0 years
16 - 21 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Role & responsibilities and preferred candidate profile: identical to the Delhi NCR data architect listing above (minimum 16 years of experience).
Posted 3 weeks ago
16.0 - 21.0 years
16 - 21 Lacs
Pune, Maharashtra, India
On-site
Role & responsibilities and preferred candidate profile: identical to the Delhi NCR data architect listing above (minimum 16 years of experience).
Posted 3 weeks ago
14 - 20 years
20 - 35 Lacs
Pune, Chennai, Mumbai (All Areas)
Work from Office
Role: Data & Analytics Architect
Required Skill Set: Data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics
Preferred Specializations or Prior Experience: Manufacturing, Hi-Tech, CPG; use cases where Analytics and AI have been applied
Location: PAN India
Desired Competencies (Managerial/Behavioural):
Must-Have:
- 14+ years of IT industry experience
- At least 3 years of IoT / Industry 4.0 / Industrial AI experience
- In-depth knowledge of data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics
- Strong written and verbal communication with good presentation skills
- Excellent knowledge of data governance, Medallion architecture, UNS, data lake architectures, AI/ML, and data science
- Experience with cloud platforms (e.g., GCP, AWS, Azure) and cloud-based Analytics and AI/ML services
- Proven experience working with clients in the Manufacturing / CPG / High-Tech / Oil & Gas / Pharma industries
- Good understanding of technology trends, market forces, and industry imperatives
- Excellent communication, presentation, and interpersonal skills
- Ability to work independently and collaboratively in a team environment
Good-to-Have:
- Degree in Data Science or Statistics
- Led consulting and advisory programs at CxO level, managing business outcomes
- Point-of-view articulation for the CxO level
- Manufacturing (discrete or process) industry background for applying AI technology to business impact
- Entrepreneurial and comfortable working in a complex and fast-paced environment
Responsibilities / Expected Deliverables: We are seeking a highly skilled and experienced Data and Analytics Consultant to provide expert guidance and support to our clients in the Manufacturing, Consumer Packaged Goods (CPG), and High-Tech industries.
This role requires deep architecture, design, and implementation experience with cloud data platforms, including:
- Experience handling multiple types of data (structured, streaming, semi-structured, etc.)
- Strategic experience in Data & Analytics (cloud data architecture, lakehouse architecture, data fabric, data mesh concepts)
- Experience deploying DevOps / CI/CD techniques; automating and deploying data pipelines and ETLs in a DevOps environment
- Experience strategizing data governance activities
The ideal candidate will possess exceptional communication, consulting, and problem-solving skills, along with a strong technical foundation in data architecture. The role involves leading Data Architecture Tech Advisory engagements, bringing thought leadership to engage CxOs actively. Key roles and responsibilities include:
- Business-oriented: Engage with customer CxOs to evangelise adoption of AI and GenAI; author proposals for solving business problems and achieving business objectives, leveraging Data Analytics & AI technologies.
- Advisory: Experience managing the entire lifecycle of Data Analytics is an added advantage. This includes developing a roadmap for introducing and scaling data architecture in the customer organization, defining the best-suited AI operating model for customers, guiding teams on solution approaches and roadmaps, and building and leveraging frameworks for RoI from AI.
- Effectively communicate complex technical information to both technical and non-technical audiences, presenting findings and recommendations in a clear, concise, and compelling manner.
- Demonstrate thought leadership to identify use cases to build and showcase to prospective customers.
Posted 1 month ago
7 - 12 years
20 - 35 Lacs
Hyderabad
Work from Office
Job Description
Job Title: Senior Data Engineer (Data Lake & Medallion Architecture)
Location: US-based remote (U.S. working hours) / hybrid (if near office locations)
Employment Type: Contract (6+ months, extendable) / full-time; multiple opportunities
Level: Senior (8-10+ years of experience)
Send resume to priya.clara@iconma.com
About the Role: CMS Energy is seeking a highly skilled Data Engineer to join their team in modernizing their data infrastructure through the Data Lake 2.0 initiative. The candidate will play a pivotal role in designing and implementing a cloud-based Medallion architecture (using Databricks) to centralize fragmented data systems, enabling advanced analytics and actionable insights for customer operations and experience teams. This role requires deep expertise in ETL, data warehousing, and pipeline automation, with a focus on cost efficiency and scalability.
Key Responsibilities:
- Data pipeline development: Build and optimize ETL/ELT pipelines to ingest data from SAP, CRMs, and legacy systems into Databricks. Implement the Medallion architecture layers (bronze: raw; silver: cleaned and validated; gold: enriched and analytics-ready) with strict data quality checks. Leverage DBT for transformation workflows and ensure alignment with business logic.
- Cloud data infrastructure: Design and deploy scalable data models in Databricks (AWS/Azure). Collaborate with platform teams to retire legacy systems (e.g., SAP BusinessObjects, Crystal Reports).
- Collaboration & governance: Work with analytics teams to support Power BI/Tableau dashboards and predictive models (e.g., customer propensity, outage forecasting). Document data standards, lineage, and metadata for governance.
- Performance optimization: Troubleshoot pipeline bottlenecks and optimize SQL/Python scripts for efficiency. Ensure data security and compliance with access controls.
Required Skills & Qualifications:
Must-have:
- 8+ years in data engineering, with proven experience in cloud-based data lakes/warehouses
- Advanced proficiency in SQL, Python, and DBT
- Hands-on experience with Databricks (Delta Lake, Spark SQL) and the Medallion architecture
- Strong knowledge of ETL frameworks and data modeling (star schema, fact/dimension tables)
Nice-to-have:
- Familiarity with SAP HANA, CRM systems, or utility-industry data
- Exposure to AI/ML pipelines (e.g., LLMs for customer analytics)
Soft skills:
- Ability to collaborate with cross-functional teams (operations, marketing, IT)
- Strong problem-solving skills to address legacy-system challenges
- Comfortable in an Agile environment (sprint planning, retrospectives)
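The bronze (raw), silver (cleaned and validated), and gold (analytics-ready) layers described in this posting can be sketched engine-agnostically in plain Python. Field names and validation rules below are invented for illustration; in the actual stack each step would be a Databricks/DBT transformation writing Delta tables:

```python
# Engine-agnostic sketch of the Medallion flow (bronze -> silver -> gold).
# Field names and cleaning rules are invented for illustration.

bronze = [  # raw, as-landed records; may contain malformed rows
    {"meter_id": "m1", "kwh": "3.2"},
    {"meter_id": "m1", "kwh": "4.1"},
    {"meter_id": "m2", "kwh": "bad"},   # fails validation below
    {"meter_id": "m2", "kwh": "2.0"},
]

def to_silver(rows):
    """Silver: cast types and drop rows that fail validation."""
    out = []
    for r in rows:
        try:
            out.append({"meter_id": r["meter_id"], "kwh": float(r["kwh"])})
        except (KeyError, ValueError):
            pass  # a real pipeline would route these to a quarantine table
    return out

def to_gold(rows):
    """Gold: analytics-ready aggregate (total consumption per meter)."""
    totals = {}
    for r in rows:
        totals[r["meter_id"]] = totals.get(r["meter_id"], 0.0) + r["kwh"]
    return totals

silver = to_silver(bronze)   # 3 valid rows survive
gold = to_gold(silver)       # per-meter totals
```

The point of the layering is that each stage has one job: bronze preserves everything as received, silver enforces schema and quality, and gold serves the dashboards and models named in the posting.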
Posted 2 months ago
16 - 21 years
40 - 60 Lacs
Pune, Delhi NCR, Gurgaon
Hybrid
Role & responsibilities and preferred candidate profile: identical to the Delhi NCR data architect listing above (minimum 16 years of experience).
You are important to us; let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Posted 2 months ago
3 - 8 years
10 - 13 Lacs
Mumbai
Work from Office
Manage and optimize data pipelines for a Medallion architecture (Landing, Bronze, Silver, Gold) using AWS S3. Interested candidates can share their CV at urmi.veera@cygnusad.co.in or call 85910 61941.
Posted 3 months ago