4.0 - 10.0 years
0 Lacs
Karnataka
On-site
You are a developer of digital futures at Tietoevry, a leading technology company with a strong Nordic heritage and global capabilities. With core values of openness, trust, and diversity, you collaborate with customers to create digital futures where businesses, societies, and humanity thrive. The company's 24,000 experts specialize in cloud, data, and software, serving enterprise and public-sector customers in around 90 countries. Tietoevry's annual turnover is approximately EUR 3 billion, and its shares are listed on the NASDAQ exchanges in Helsinki and Stockholm, as well as on Oslo Børs.

In the USA, EVRY USA delivers IT services through global delivery centers and offices in India (EVRY India). The company offers a comprehensive IT services portfolio, driving digital transformation across sectors such as Banking & Financial Services, Insurance, Healthcare, Retail & Logistics, and Energy, Utilities & Manufacturing. EVRY India's process and project maturity are high: its offshore development centers in India are appraised at CMMI DEV Maturity Level 5 and CMMI SVC Maturity Level 5 and certified under ISO 9001:2015 and ISO/IEC 27001:2013.

As a Senior Data Modeler, you will lead the design and development of enterprise-grade data models for a modern cloud data platform built on Snowflake and Azure. With a strong foundation in data modeling best practices and hands-on experience with the Medallion Architecture, you will ensure data structures are scalable, reusable, and aligned with business and regulatory requirements. You will work on data models that meet processing, analytics, and reporting needs, focusing on Snowflake data warehousing and the Medallion Architecture's Bronze, Silver, and Gold layers. Collaborating with various stakeholders, you will translate business needs into scalable data models, drive data model governance, and ensure compliance with data governance, quality, and security requirements.

**Pre-requisites:**
- 10 years of experience in data modeling, data architecture, or data engineering roles.
- 4 years of experience modeling data in Snowflake or other cloud data warehouses.
- Strong understanding and hands-on experience with Medallion Architecture and modern data platform design.
- Experience using data modeling tools (e.g., Erwin).
- Proficiency in data modeling techniques: 3NF, dimensional modeling, data vault, and star/snowflake schemas.
- Expert-level SQL and experience working with semi-structured data (JSON, XML).
- Familiarity with Azure data services (ADF, ADLS, Synapse, Purview).

**Key Responsibilities:**
- Design, develop, and maintain data models for Snowflake data warehousing.
- Lead the design and implementation of logical, physical, and canonical data models.
- Architect data models for the Bronze, Silver, and Gold layers following the Medallion Architecture.
- Collaborate with stakeholders to translate business needs into scalable data models.
- Drive data model governance and compliance with data requirements.
- Conduct data profiling, gap analysis, and data integration efforts.
- Support time-travel (point-in-time) reporting and build models for operational and analytical reports.

Recruiter Information:
- Recruiter Name: Harish Gotur
- Recruiter Email Id: harish.gotur@tietoevry.com
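As an illustration of the kind of Gold-layer dimensional modeling this role describes, here is a minimal sketch of star-schema DDL executed through the Snowflake Connector for Python. The connection parameters, table, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: Gold-layer star schema DDL for a Snowflake platform,
# executed via the Snowflake Connector for Python. Connection, table, and
# column names are illustrative placeholders.
import snowflake.connector

GOLD_DDL = [
    # Dimension table: one conformed row per customer for reporting.
    """
    CREATE TABLE IF NOT EXISTS GOLD.DIM_CUSTOMER (
        CUSTOMER_SK   NUMBER AUTOINCREMENT PRIMARY KEY,
        CUSTOMER_ID   VARCHAR NOT NULL,
        CUSTOMER_NAME VARCHAR,
        COUNTRY       VARCHAR,
        VALID_FROM    TIMESTAMP_NTZ,
        VALID_TO      TIMESTAMP_NTZ
    )
    """,
    # Fact table referencing the dimension through a surrogate key.
    """
    CREATE TABLE IF NOT EXISTS GOLD.FCT_ORDERS (
        ORDER_ID      VARCHAR NOT NULL,
        CUSTOMER_SK   NUMBER REFERENCES GOLD.DIM_CUSTOMER(CUSTOMER_SK),
        ORDER_DATE    DATE,
        ORDER_AMOUNT  NUMBER(18, 2)
    )
    """,
]

def build_gold_layer() -> None:
    """Create the illustrative Gold-layer tables."""
    conn = snowflake.connector.connect(
        account="my_account",      # placeholder credentials
        user="my_user",
        password="my_password",
        warehouse="MODELING_WH",
        database="ANALYTICS",
    )
    try:
        cur = conn.cursor()
        for ddl in GOLD_DDL:
            cur.execute(ddl)
    finally:
        conn.close()

if __name__ == "__main__":
    build_gold_layer()
```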
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are developers of digital futures! Tietoevry creates purposeful technology that reinvents the world for good. We are a leading technology company with a strong Nordic heritage and global capabilities. Based on our core values of openness, trust, and diversity, we work with our customers to develop digital futures where businesses, societies, and humanity thrive. Our 24,000 experts globally specialize in cloud, data, and software, serving thousands of enterprise and public-sector customers in approximately 90 countries. Tietoevry's annual turnover is approximately EUR 3 billion, and the company's shares are listed on the NASDAQ exchanges in Helsinki and Stockholm, as well as on Oslo Børs.

EVRY USA delivers IT services to a wide range of customers in the USA through its global delivery centers and India offices (EVRY India) in Bangalore and Chandigarh, India. We offer a comprehensive IT services portfolio and drive digital transformation across various sectors including Banking & Financial Services, Insurance, Healthcare, Retail & Logistics, and Energy, Utilities & Manufacturing. EVRY India's process and project maturity is very high, with the two offshore development centers in India appraised at CMMI DEV Maturity Level 5 and CMMI SVC Maturity Level 5 and certified under ISO 9001:2015 and ISO/IEC 27001:2013.

We are seeking a highly experienced Snowflake Architect with deep expertise in building scalable data platforms on Azure, applying Medallion Architecture principles. The ideal candidate should have strong experience working in the Banking domain and will play a key role in architecting secure, performant, and compliant data solutions to support business intelligence, risk, compliance, and analytics initiatives.

**Pre-requisites:**
- 5 years of hands-on experience in Snowflake, including schema design, security setup, and performance tuning.
- Implementation experience using Snowpark.
- Must have a data architecture background.
- Has deployed a fully operational data solution into production on Snowflake and Azure.
- Snowflake certification preferred.
- Familiarity with data modeling practices such as dimensional modeling and data vault.
- Understanding of the dbt tool.

**Key Responsibilities:**
- Design and implement scalable and performant data platforms using Snowflake on Azure, tailored for banking industry use cases.
- Architect ingestion, transformation, and consumption layers using the Medallion Architecture for a performant, scalable data platform.
- Work with data engineers to build modular and reusable Bronze, Silver, and Gold layer models that support diverse workloads.
- Provide architectural oversight and best practices to ensure scalability, performance, and maintainability.
- Collaborate with stakeholders from risk, compliance, and analytics teams to translate requirements into data-driven solutions.
- Build architecture to support time-travel (point-in-time) reporting.
- Support CI/CD automation and environment management using tools like Azure DevOps and Git.
- Build architecture to support operational and analytical reports.

Recruiter Information:
- Recruiter Name: Harish Gotur
- Recruiter Email Id: harish.gotur@tietoevry.com
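Since the posting calls out Snowpark and time-travel style reporting, here is a minimal sketch of how a point-in-time query might look from Snowpark for Python using Snowflake Time Travel. The connection parameters, database objects, and the 24-hour offset are illustrative assumptions.

```python
# Minimal sketch: using Snowflake Time Travel from Snowpark for Python to
# support point-in-time ("time travel") reporting. Connection parameters,
# database objects, and the offset are illustrative assumptions.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "my_account",       # placeholder credentials
    "user": "my_user",
    "password": "my_password",
    "warehouse": "REPORTING_WH",
    "database": "ANALYTICS",
    "schema": "GOLD",
}

def orders_as_of_yesterday(session: Session):
    """Return the FCT_ORDERS table as it looked 24 hours ago.

    Time Travel (AT ... OFFSET) reads historical table versions within the
    configured retention window, which is what makes point-in-time
    regulatory and reconciliation reports possible.
    """
    return session.sql(
        """
        SELECT ORDER_DATE, SUM(ORDER_AMOUNT) AS TOTAL_AMOUNT
        FROM FCT_ORDERS AT (OFFSET => -60 * 60 * 24)
        GROUP BY ORDER_DATE
        ORDER BY ORDER_DATE
        """
    )

if __name__ == "__main__":
    session = Session.builder.configs(connection_parameters).create()
    try:
        for row in orders_as_of_yesterday(session).collect():
            print(row)
    finally:
        session.close()
```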
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

As an ETL Technical Lead at Orion Innovation, located in Chennai, you are required to have at least 5 years of ETL experience and 3 years of experience specifically in Azure Synapse. Your role will involve designing, developing, and managing ETL processes within the Azure ecosystem. You must possess proficiency with Azure Synapse Pipelines, Azure Dedicated SQL Pool, Azure Data Lake Storage (ADLS), and other related Azure services. Additionally, experience with audit logging, data governance, and implementing data integrity and data lineage best practices is essential.

Your responsibilities will include leading and managing the ETL team, providing mentorship and technical guidance, and driving the delivery of key data initiatives. You will design, develop, and maintain ETL pipelines using Azure Synapse Pipelines for ingesting data from various file formats and securely storing it in Azure Data Lake Storage (ADLS). Furthermore, you will architect, implement, and manage data solutions following the Medallion architecture for effective data processing and transformation.

It is crucial to leverage Azure Data Lake Storage (ADLS) to build scalable and high-performance data storage solutions, ensuring optimal data lake management. You will also be responsible for managing the Azure Dedicated SQL Pool to optimize query performance and scalability. Automation of data workflows and processes using Logic Apps, as well as ensuring secure and compliant data handling through audit logging and access controls, will be part of your duties. Collaborating with data scientists to integrate ETL pipelines with Machine Learning models for predictive analytics and advanced data science use cases is key. Troubleshooting and resolving complex data pipeline issues, monitoring and optimizing performance, and acting as the primary technical point of contact for the ETL team are also essential aspects of this role.

Orion Systems Integrators, LLC and its affiliates are committed to protecting your privacy. For more information on the Candidate Privacy Policy, please refer to the official documentation on the Orion website.
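A minimal sketch of the kind of Bronze-to-Silver transformation described above, as it might run on an Azure Synapse Spark pool. The storage account, container, and column names are illustrative assumptions rather than details from the posting.

```python
# Minimal sketch: a Bronze-to-Silver transformation reading raw files from
# ADLS and writing a cleansed Delta table. Storage account, container, and
# column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

BRONZE_PATH = "abfss://bronze@mydatalake.dfs.core.windows.net/orders/"
SILVER_PATH = "abfss://silver@mydatalake.dfs.core.windows.net/orders/"

# Bronze: land the raw files as-is (schema inferred here for brevity).
bronze_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(BRONZE_PATH)
)

# Silver: standardize types, drop duplicates, and stamp lineage metadata so
# downstream audit logging and data-lineage checks have something to key on.
silver_df = (
    bronze_df
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("order_amount", F.col("order_amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_system", F.lit("sales_feed"))
)

silver_df.write.format("delta").mode("overwrite").save(SILVER_PATH)
```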
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Chandigarh
On-site
As a Senior Data Engineer, you will play a crucial role in supporting the Global BI team for Isolation Valves as they transition to Microsoft Fabric. Your primary responsibilities will involve data gathering, modeling, integration, and database design to facilitate efficient data management. You will develop and optimize scalable data models to serve analytical and reporting needs, utilizing Microsoft Fabric and Azure technologies for high-performance data processing.

Your duties will include collaborating with cross-functional teams such as data analysts, data scientists, and business collaborators to understand their data requirements and deliver effective solutions. You will leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Additionally, your expertise in data modeling, particularly in data warehouse and lakehouse design, will be essential in designing and implementing data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.

Furthermore, you will be responsible for developing ETL processes using tools like SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar platforms to prepare data for analysis and reporting. Implementing data quality checks and governance practices to ensure data accuracy, consistency, and security will also fall under your purview. You will supervise and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads.

Your role will require strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms, along with experience in data integration and ETL tools like Azure Data Factory. A deep understanding of Microsoft Fabric or similar data platforms, as well as comprehensive knowledge of the Azure cloud platform, particularly in data warehousing and storage solutions, will be necessary. Effective communication skills to convey technical concepts to both technical and non-technical stakeholders, the ability to work both independently and within a team environment, and the willingness to stay abreast of new technologies and business areas are also vital for success in this role.

To excel in this position, you should have 5-7 years of experience in data warehousing with on-premises or cloud technologies, strong analytical abilities to tackle complex data challenges, and proficiency in database management, SQL query optimization, and data mapping. A solid grasp of Excel, including formulas, filters, macros, pivots, and related operations, is essential, as is proficiency in Python and SQL/advanced SQL for data transformations and debugging, along with a willingness to work flexible hours based on project requirements.

Furthermore, hands-on experience with Fabric components such as Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and semantic models, as well as advanced SQL skills and experience with complex queries, data modeling, and performance tuning, are highly desired. Prior exposure to implementing Medallion Architecture for data processing, experience in a manufacturing environment, and familiarity with Oracle, SAP, or other ERP systems will be advantageous.
A Bachelor's degree or equivalent experience in a science-related field, good interpersonal skills in English (spoken and written), and Agile certification will set you apart as a strong candidate for this role.

At Emerson, we are committed to fostering a workplace where every employee is valued, respected, and empowered to grow. Our culture encourages innovation, collaboration, and diverse perspectives, recognizing that great ideas stem from great teams. We invest in your ongoing career development, offering mentorship, training, and leadership opportunities to ensure your success and lasting impact. Employee wellbeing is a priority for us, and we provide competitive benefits plans, medical insurance options, an Employee Assistance Program, flexible time off, and other supportive resources to help you thrive.

Emerson is a global leader in automation technology and software, dedicated to helping customers in critical industries operate more sustainably and efficiently. Our commitment to our people, communities, and the planet drives us to create positive impact through innovation, collaboration, and diversity. If you seek an environment where you can contribute to meaningful work, develop your skills, and make a difference, join us at Emerson. Let's go together towards a brighter future.
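As an illustration of the data quality checks mentioned in this posting, here is a minimal PySpark sketch of the kind of validation that could run inside a Fabric or Synapse pipeline before a table is published. The table name, required columns, and thresholds are hypothetical.

```python
# Minimal sketch: simple data quality checks (schema, nulls, duplicates) of
# the kind a Fabric/Synapse pipeline might run before publishing a table.
# The table name, required columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

REQUIRED_COLUMNS = {"order_id", "customer_id", "order_date", "order_amount"}

def run_quality_checks(table_name: str) -> None:
    df = spark.read.table(table_name)

    # 1. Schema check: every required column must be present.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"{table_name}: missing columns {sorted(missing)}")

    # 2. Completeness check: key columns must not contain nulls.
    null_keys = df.filter(
        F.col("order_id").isNull() | F.col("customer_id").isNull()
    ).count()
    if null_keys > 0:
        raise ValueError(f"{table_name}: {null_keys} rows with null keys")

    # 3. Uniqueness check: order_id should be unique.
    duplicates = df.count() - df.select("order_id").distinct().count()
    if duplicates > 0:
        raise ValueError(f"{table_name}: {duplicates} duplicate order_id rows")

    print(f"{table_name}: all quality checks passed")

if __name__ == "__main__":
    run_quality_checks("silver.orders")  # hypothetical lakehouse table
```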
Posted 4 days ago
6.0 - 11.0 years
8 - 12 Lacs
Chennai
Work from Office
Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing. Experience: 6+ years. Locations: Chennai, Bangalore, Pune, Coimbatore. Work from Office.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
You should have a minimum of 7 years of experience in data warehouse / lakehouse programming and should have successfully implemented at least 2 end-to-end data warehouse / data lake projects. Additionally, you should have experience implementing at least 1 Azure data warehouse / lakehouse project end-to-end, converting business requirements into concept / technical specifications, and collaborating with source system experts to finalize ETL and analytics design. You will also be responsible for supporting data modeler developers in the design and development of ETLs and creating activity plans based on agreed concepts with timelines.

Your technical expertise should include a strong background in Microsoft Azure components such as Azure Data Factory, Azure Synapse, Azure SQL Database, Azure Key Vault, MS Fabric, Azure DevOps (ADO), and Virtual Networks (VNets). You should also have expertise in Medallion Architecture for lakehouses and data modeling in the Gold layer, along with a solid understanding of data warehouse design principles such as star schema, snowflake schema, and data partitioning. Proficiency in MS SQL database packages, stored procedures, functions, triggers, and data transformation activities using SQL is required, as well as knowledge of SQL*Loader, Data Pump, and Import/Export utilities.

Experience with data visualization or BI tools like Tableau and Power BI, capacity planning, environment management, performance tuning, and familiarity with cloud cloning/copying processes within Azure will be essential for this role. Knowledge of green computing principles and optimizing cloud resources for cost and environmental efficiency is also desired.

You should possess excellent interpersonal and communication skills to collaborate effectively with technical and non-technical teams, communicate complex concepts, and influence key stakeholders. Analyzing demands and contributing to cost/benefit analysis and estimation are also part of the responsibilities. Preferred qualifications include certifications such as Azure Solutions Architect Expert or Azure Data Engineer Associate.

Skills required for this role include database management, Tableau, Power BI, ETL processes, Azure SQL Database, Medallion Architecture, Azure services, data visualization, data warehouse design, and Microsoft Azure technologies.
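To make the Gold-layer modeling and partitioning points concrete, here is a minimal PySpark sketch of building a partitioned Gold-layer fact table from Silver tables with a star-schema join. The database, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: building a partitioned Gold-layer fact table from Silver
# tables with a star-schema join. Database, table, and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("build_gold_fact").getOrCreate()

orders = spark.read.table("silver.orders")        # cleansed transactional data
customers = spark.read.table("gold.dim_customer") # conformed dimension

# Star-schema join: resolve the natural key to the dimension's surrogate key,
# then keep only measures and foreign keys in the fact table.
fact_orders = (
    orders.join(customers, on="customer_id", how="left")
    .select(
        "order_id",
        "customer_sk",
        "order_date",
        "order_amount",
    )
    .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
)

# Partitioning by month keeps large scans pruned to the periods a report needs.
(
    fact_orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_month")
    .saveAsTable("gold.fct_orders")
)
```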
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of our team at Proximity, you will take on the role of both hands-on tech lead and product manager. Your primary responsibility will be to deliver data/ML platforms and pipelines within a Databricks-Azure environment. In this capacity, you will lead a small delivery team and collaborate with enabling teams to drive product, architecture, and data science initiatives. Your ability to translate business requirements into product strategy and technical delivery with a platform-first mindset will be crucial to our success.

To excel in this role, you should possess technical proficiency in Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, and Azure. Additionally, expertise in Agile delivery, discovery cycles, outcome-focused planning, and trunk-based development will be advantageous. You should also be adept at collaborating with engineers, working across cross-functional teams, and fostering self-service platforms. Clear communication skills will be key in articulating decisions, roadmap, and priorities effectively.

Joining our team comes with a host of benefits. You will have the opportunity to engage in Proximity Talks, where you can interact with fellow designers, engineers, and product enthusiasts and gain insights from industry experts. Working alongside our world-class team will provide you with continuous learning opportunities, allowing you to challenge yourself and acquire new knowledge on a daily basis.

Proximity is a leading technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. With headquarters in San Francisco and additional offices in Palo Alto, Dubai, Mumbai, and Bangalore, we have a track record of creating high-impact, scalable products used by 370 million daily users, and the collective net worth of our client companies stands at $45.7 billion since our inception in 2019.

At Proximity, we are a diverse team of coders, designers, product managers, and experts dedicated to solving complex problems and developing cutting-edge technology at scale. As our team of Proxonauts continues to expand rapidly, your contributions will play a significant role in the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams. To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and design wing at Studio Proximity, and gain behind-the-scenes access through our Instagram accounts @ProxWrks and @H.Jagda.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Data Warehouse Engineer will be responsible for managing and optimizing data processes in an Azure environment using Snowflake. The ideal candidate should have solid SQL skills and a basic understanding of data modeling. Experience with CI/CD processes and Azure ADF is preferred, and expertise in ETL/ELT frameworks and ER/Studio would be a plus.

As a Senior Data Warehouse Engineer, in addition to the core requirements, you will oversee other engineers while also being actively involved in data modeling and Snowflake SQL optimization. You will be responsible for conducting design reviews, code reviews, and deployment reviews with the engineering team. Familiarity with medallion architecture and experience in the healthcare or life sciences industry will be highly advantageous.

At Myridius, we are committed to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we aim to guide organizations through the rapidly evolving landscapes of technology and business. Our integration of cutting-edge technology with deep domain knowledge enables businesses to seize new opportunities, drive significant growth, and maintain a competitive edge in the global market. We go beyond typical service delivery to craft transformative outcomes that help businesses not just adapt, but thrive in a world of continuous change. Discover how Myridius can elevate your business to new heights of innovation by visiting us at www.myridius.com and start leading the change.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Warehouse Engineer at Myridius, you will need solid SQL language skills and basic knowledge of data modeling. Your role will involve working with Snowflake in Azure and with CI/CD processes using any tooling. Familiarity with Azure ADF and ETL/ELT frameworks would be beneficial for this position, and experience with ER/Studio and a good understanding of the healthcare/life sciences industry would be advantageous. Knowledge of GxP processes will be a plus in this role.

For a Senior Data Warehouse Engineer position, you will oversee engineers while actively engaging in the same tasks. Your responsibilities will include conducting design reviews, code reviews, and deployment reviews with engineers. You should have solid data modeling expertise, preferably using ER/Studio or an equivalent tool. Optimizing Snowflake SQL queries to enhance performance and familiarity with medallion architecture will be key aspects of this role.

At Myridius, we are dedicated to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we drive a new vision to propel organizations through rapidly evolving technology and business landscapes. Our commitment to exceeding expectations ensures measurable impact and fosters sustainable innovation. Together with our clients, we co-create solutions that anticipate future trends and help businesses thrive in a world of continuous change. If you are passionate about driving significant growth and maintaining a competitive edge in the global market, join Myridius in crafting transformative outcomes and elevating businesses to new heights of innovation. Visit www.myridius.com to learn more about how we lead the change.
Posted 2 weeks ago
10.0 - 14.0 years
25 - 37 Lacs
Noida
Hybrid
Description:
- Accountable for the data engineering lifecycle, including research, proofs of concept, architecture, design, development, test, deployment, and maintenance.
- Design, develop, implement, and run cross-domain, modular, flexible, scalable, secure, reliable, and quality data solutions that transform data for meaningful analyses and analytics while ensuring operability.
- Layer instrumentation into the development process so that data pipelines can be monitored to detect internal problems before they result in user-visible outages or data quality issues.
- Build processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues.
- Embrace continuous learning of engineering practices to ensure industry best practices and technology adoption, including DevOps, Cloud, and Agile thinking.
- Drive tech debt reduction and tech transformation, including open source adoption, cloud adoption, and HCP assessment and adoption.
- Maintain high-quality documentation of data definitions, transformations, and processes to ensure data governance and security.

Functions may include database architecture, engineering, design, optimization, security, and administration, as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, data provisioning, and other similar roles. Responsibilities may include Platform-as-a-Service and Cloud solutions with a focus on data stores and associated ecosystems. Duties may include management of design services, providing sizing and configuration assistance, ensuring strict data quality, and performing needs assessments. The role analyzes current business practices, processes, and procedures, and identifies future business opportunities for leveraging data storage and retrieval system capabilities; manages relationships with software and hardware vendors to understand the potential architectural impact of different vendor strategies and data acquisition; and may design schemas, write SQL or other data markup scripting, and help support development of analytics and applications that build on top of data. Selects, develops, and evaluates personnel to ensure the efficient operation of the function.

- Generally, work is self-directed and not prescribed.
- Works with less structured, more complex issues.
- Serves as a resource to others.

Qualifications:
- Undergraduate degree or equivalent experience.
- Proficient in design and documentation of data exchanges across various channels, including APIs, streams, and batch feeds.
- Proficient in source-to-target mapping and gap analysis; applies data transformation rules based on an understanding of business rules and data structures.
- Develops and implements scripts to maintain and monitor performance tuning.
- Designs scalable job scheduler solutions and advises on appropriate tools/technologies to use.
- Works across multiple domains to define and build data models.
- Understands all the connected technology services and their impacts; assesses designs and proposes options to ensure the solution meets business needs in terms of security, scalability, reliability, and feasibility.
- Understanding of healthcare data, including Electronic Health Records (EHR), claims data, and regulatory compliance such as HIPAA.
- Familiarity with healthcare regulations and data exchange standards (e.g., HL7, FHIR).
- Experience with data analytics tools like Tableau, Power BI, or similar.
- Familiarity with automation tools and scripting languages (e.g., Bash, PowerShell) to automate repetitive tasks.
- Experience in optimizing data processing workflows for performance and cost-efficiency.
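The description above asks for instrumentation so that pipelines can be monitored before problems become user-visible. Here is a minimal, tool-agnostic Python sketch of that idea; the step names, metrics, and thresholds are hypothetical, and a real pipeline would ship these metrics to a monitoring store rather than the log.

```python
# Minimal sketch: lightweight instrumentation for a pipeline step, so each run
# emits structured metrics (duration, row counts) that a monitor can alert on.
# Step names and the placeholder step are hypothetical.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline.metrics")

def instrumented(step_name: str):
    """Decorator that logs duration and output row count for a pipeline step."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            started = time.time()
            status, rows = "success", None
            try:
                result = func(*args, **kwargs)
                rows = len(result) if hasattr(result, "__len__") else None
                return result
            except Exception:
                status = "failure"
                raise
            finally:
                log.info(json.dumps({
                    "step": step_name,
                    "status": status,
                    "rows_out": rows,
                    "duration_s": round(time.time() - started, 3),
                }))
        return wrapper
    return decorator

@instrumented("load_claims")
def load_claims():
    # Placeholder extract step; a real step would read from a source system.
    return [{"claim_id": i} for i in range(1000)]

if __name__ == "__main__":
    load_claims()
```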
Posted 1 month ago
16.0 - 21.0 years
40 - 60 Lacs
Pune, Gurugram, Delhi / NCR
Hybrid
Role & responsibilities
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop the next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Decent experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of Relational, Dimensional, and Data Vault modelling.
- Experience in implementing 2 or more data models in a database with data security and access controls.
- Good experience in OLTP and OLAP systems.
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of Data Quality and Data Governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the Big Data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience in leading large and complex teams.
- Good understanding of agile methodology.

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Posted 1 month ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid, 2 days per week in office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview: We are looking for a highly skilled Senior Software Engineer (SSE) with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated on the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
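A minimal sketch of the real-time pipeline this posting describes: reading healthcare events from Kafka with Spark Structured Streaming on Databricks and landing them in a Bronze Delta table. The broker addresses, topic, schema, and paths are hypothetical placeholders.

```python
# Minimal sketch: Kafka -> Structured Streaming -> Bronze Delta table.
# Broker addresses, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("claims_stream_bronze").getOrCreate()

event_schema = StructType([
    StructField("claim_id", StringType()),
    StructField("patient_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

raw_stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "claims-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
parsed = (
    raw_stream.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("_ingested_at", F.current_timestamp())
)

query = (
    parsed.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/claims_events")
    .start("/mnt/bronze/claims_events")
)
query.awaitTermination()
```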
Posted 1 month ago
3.0 - 7.0 years
22 - 25 Lacs
Bengaluru
Hybrid
Role & responsibilities
- 3-6 years of experience in data engineering pipeline ownership and quality assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
- Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing of large-scale datasets.
- Strong experience in writing SQL queries and performing data validation, data profiling, and schema checks.
- Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
- Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
- Deep understanding of the Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
- Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows.
- Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
- Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
- Solid data governance and data security practices within the Azure ecosystem.

Additional Requirements:
- Ownership of data pipelines, ensuring end-to-end execution, monitoring, and proactive troubleshooting of failures.
- Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
- Time flexibility to align with global teams for efficient communication and collaboration.
- Excellent problem-solving skills with the ability to simulate and test edge cases in data processing environments.
- Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
- Experience with Unity Catalog, or willingness to learn.

Preferred candidate profile: Immediate joiners.
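Since the role combines Airflow scheduling with automated data quality testing, here is a minimal Apache Airflow DAG sketch that schedules a daily validation task and fails the run when a check does not pass. The DAG id, table name, and the check itself are hypothetical placeholders.

```python
# Minimal sketch: an Airflow DAG scheduling a daily pipeline validation task.
# DAG id, table names, and the check are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_silver_orders() -> None:
    """Placeholder validation: a real task would query the lakehouse (for
    example via a Databricks job or ADF activity output) and compare row
    counts, schema, and freshness against expectations."""
    row_count = 12345  # stand-in for a real count query
    if row_count == 0:
        raise ValueError("silver.orders is empty; failing the DAG run")

with DAG(
    dag_id="daily_pipeline_validation",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["data-quality"],
) as dag:
    validate = PythonOperator(
        task_id="validate_silver_orders",
        python_callable=validate_silver_orders,
    )
```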
Posted 1 month ago
7.0 - 12.0 years
18 - 33 Lacs
Navi Mumbai
Work from Office
About Us: Celebal Technologies is a leading solution services company providing services in the fields of Data Science, Big Data, Enterprise Cloud, and Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain the Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming, including processing modes (append, update, complete), output modes (append, complete, update), and checkpointing and state management.
- Experience with Kafka integration for real-time data pipelines.
- Deep understanding of the Medallion Architecture.
- Proficiency with Databricks Autoloader and schema evolution.
- Deep understanding of Unity Catalog and foreign catalogs.
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames.
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies).

Must have:
- Data management strategies.
- Excellent grasp of governance and access management.
- Strong data modelling and data warehousing concepts, and Databricks as a platform.
- Solid understanding of window functions.
- Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios.
- Industry expertise in at least one of Retail, Telecom, or Energy.
- Real-time use case execution.
- Data modelling.
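The Autoloader ingestion and merge/upsert requirements above can be sketched as follows: Autoloader (cloudFiles) streams files into micro-batches that are merged into a Silver Delta table with SCD Type 1 semantics (latest record wins). Paths, table names, and keys are hypothetical placeholders.

```python
# Minimal sketch: Databricks Autoloader ingesting files and upserting each
# micro-batch into a Silver Delta table via MERGE (SCD Type 1). Paths,
# table names, and keys are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.appName("autoloader_upsert").getOrCreate()

SOURCE_PATH = "abfss://landing@mydatalake.dfs.core.windows.net/customers/"
CHECKPOINT = "abfss://bronze@mydatalake.dfs.core.windows.net/_chk/customers/"
SILVER_TABLE = "silver.customers"

def upsert_to_silver(batch_df: DataFrame, batch_id: int) -> None:
    """MERGE each micro-batch into the Silver table on the business key."""
    target = DeltaTable.forName(spark, SILVER_TABLE)
    (
        target.alias("t")
        .merge(batch_df.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", CHECKPOINT + "schema/")
    .load(SOURCE_PATH)
)

query = (
    stream.writeStream
    .foreachBatch(upsert_to_silver)
    .option("checkpointLocation", CHECKPOINT)
    .trigger(availableNow=True)  # process all available files, then stop
    .start()
)
query.awaitTermination()
```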
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Mumbai
Work from Office
Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming.
- Architect and maintain the Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
- Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
- Work with large volumes of structured and unstructured data, ensuring high availability and performance.
- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
- Establish best practices for code versioning, deployment automation, and data governance.

Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming, including processing modes (append, update, complete), output modes (append, complete, update), and checkpointing and state management.
- Experience with Kafka integration for real-time data pipelines.
- Deep understanding of the Medallion Architecture.
- Proficiency with Databricks Autoloader and schema evolution.
- Deep understanding of Unity Catalog and foreign catalogs.
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames.
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies).

Must have:
- Data management strategies.
- Excellent grasp of governance and access management.
- Strong data modelling and data warehousing concepts, and Databricks as a platform.
- Solid understanding of window functions.
- Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios.
- Industry expertise in at least one of Retail, Telecom, or Energy.
- Real-time use case execution.
- Data modelling.

Location: Mumbai
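To illustrate the checkpointing, output-mode, and state-management skills listed above, here is a minimal sketch of a stateful Structured Streaming aggregation with a watermark running in "update" output mode. The Kafka topic, schema, and sink are hypothetical placeholders.

```python
# Minimal sketch: a stateful windowed aggregation showing watermarking,
# checkpointing, and the "update" output mode. Topic, schema, and sink are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders_per_minute").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Watermarking bounds the state the engine keeps; windowed counts are emitted
# incrementally because the query runs in "update" output mode.
per_minute = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "1 minute"))
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

query = (
    per_minute.writeStream
    .outputMode("update")                 # only changed windows are written
    .format("console")                    # placeholder sink for the sketch
    .option("checkpointLocation", "/tmp/_chk/orders_per_minute")
    .start()
)
query.awaitTermination()
```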
Posted 2 months ago
16.0 - 21.0 years
16 - 21 Lacs
Delhi NCR, India
On-site
Role & responsibilities
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop the next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Decent experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of Relational, Dimensional, and Data Vault modelling.
- Experience in implementing 2 or more data models in a database with data security and access controls.
- Good experience in OLTP and OLAP systems.
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of Data Quality and Data Governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the Big Data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience in leading large and complex teams.
- Good understanding of agile methodology.
Posted 2 months ago
16.0 - 21.0 years
16 - 21 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Role & responsibilities
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop the next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Decent experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of Relational, Dimensional, and Data Vault modelling.
- Experience in implementing 2 or more data models in a database with data security and access controls.
- Good experience in OLTP and OLAP systems.
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of Data Quality and Data Governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the Big Data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience in leading large and complex teams.
- Good understanding of agile methodology.
Posted 2 months ago
16.0 - 21.0 years
16 - 21 Lacs
Pune, Maharashtra, India
On-site
Role & responsibilities
- Understand the business requirements and translate them into conceptual, logical, and physical data models.
- Work as a principal advisor on data architecture across various data requirements: aggregation, data lake data models, data warehouse, etc.
- Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
- Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
- Suggest the best modelling approach to the client based on their requirements and target architecture.
- Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
- Profile the datasets to generate relevant insights.
- Optimize the data models and work with the data engineers to define the ingestion logic, ingestion frequency, and data consumption patterns.
- Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
- Drive automation in modelling activities.
- Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop the next-generation data platform.
- Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
- Guide and mentor team members, and review artifacts.
- Contribute to the overall data strategy and roadmaps.
- Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

Preferred candidate profile
- Minimum 16 years of experience.
- Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
- Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
- Decent experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
- Good understanding of Relational, Dimensional, and Data Vault modelling.
- Experience in implementing 2 or more data models in a database with data security and access controls.
- Good experience in OLTP and OLAP systems.
- Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
- Good experience with one or more cloud data warehouses (e.g., Snowflake, Redshift, Synapse).
- Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
- Understanding of DevOps processes.
- Hands-on experience with one or more data modelling tools.
- Good understanding of one or more ETL tools and data ingestion frameworks.
- Understanding of Data Quality and Data Governance.
- Good understanding of NoSQL databases and modelling techniques.
- Good understanding of one or more business domains.
- Understanding of the Big Data ecosystem.
- Understanding of industry data models.
- Hands-on experience in Python.
- Experience in leading large and complex teams.
- Good understanding of agile methodology.
Posted 2 months ago
14 - 20 years
20 - 35 Lacs
Pune, Chennai, Mumbai (All Areas)
Work from Office
Role: Data & Analytics Architect
Required Skill Set: Data Integration, Data Modelling, IoT Data Management, and information delivery layers of Data & Analytics
Preferred Specializations or Prior Experience: Manufacturing, Hi-Tech, and CPG use cases where Analytics and AI have been applied
Location: PAN India

Desired Competencies (Managerial/Behavioural):

Must-Have:
- 14+ years of IT industry experience.
- IoT / Industry 4.0 / Industrial AI experience for at least 3+ years.
- In-depth knowledge of data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics.
- Strong written and verbal communication with good presentation skills.
- Excellent knowledge of data governance, Medallion architecture, UNS, data lake architectures, AI/ML, and data science.
- Experience with cloud platforms (e.g., GCP, AWS, Azure) and cloud-based Analytics and AI/ML services.
- Proven experience working with clients in the Manufacturing, CPG, High-Tech, Oil & Gas, or Pharma industries.
- Good understanding of technology trends, market forces, and industry imperatives.
- Excellent communication, presentation, and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.

Good-to-Have:
- Degree in Data Science or Statistics.
- Led consulting and advisory programs at CxO level, managing business outcomes.
- Point-of-view articulation at CxO level.
- Manufacturing (discrete or process) industry background for applying AI technology for business impact.
- Entrepreneurial and comfortable working in a complex and fast-paced environment.

Responsibilities / Expected Deliverables:
We are seeking a highly skilled and experienced Data and Analytics Architect / Consultant to provide expert guidance and support to our clients in the Manufacturing, Consumer Packaged Goods (CPG), and High-Tech industries. This role requires deep architecture, design, and implementation experience in cloud data platforms; experience handling multiple types of data (structured, streaming, semi-structured, etc.); strategic experience in Data & Analytics (cloud data architecture, lakehouse architecture, data fabric, and data mesh concepts); experience deploying DevOps/CI-CD techniques and automating and deploying data pipelines and ETLs in a DevOps environment; and experience in strategizing data governance activities. The ideal candidate will possess exceptional communication, consulting, and problem-solving skills, along with a strong technical foundation in data architecture. The role involves leading the Data Architecture Tech Advisory engagement, bringing thought leadership to engage CxOs actively.

The key roles and responsibilities include:
- Business-oriented: engage with customer CxOs to evangelise adoption of AI and GenAI, and author proposals for solving business problems and achieving business objectives, leveraging Data Analytics and AI technologies.
- Advisory: experience in managing the entire lifecycle of Data Analytics is an added advantage. This includes developing the roadmap for introducing and scaling data architecture in the customer organization, defining the best-suited AI operating model for customers, guiding teams on solution approaches and roadmaps, and building and leveraging frameworks for RoI from AI.
- Effectively communicate complex technical information to both technical and non-technical audiences, presenting findings and recommendations in a clear, concise, and compelling manner.
- Demonstrate thought leadership to identify various use cases that need to be built for showcase to prospective customers.
Posted 2 months ago