
144 Dimensional Modeling Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a skilled and analytical Data Analyst with expertise in data modeling, data analysis, and Python programming. As a Data Analyst, you will be responsible for designing data models, conducting in-depth analysis, and creating automated solutions to facilitate business decision-making and reporting.

Your key responsibilities will include designing and implementing conceptual, logical, and physical data models to support analytics and reporting. You will analyze large datasets to uncover trends, patterns, and insights that drive business decisions, and develop and maintain Python scripts for data extraction, transformation, and analysis. Collaboration with data engineers, business analysts, and stakeholders to understand data requirements is essential, as is creating dashboards, reports, and visualizations to communicate findings effectively. Ensuring data quality, consistency, and integrity across systems, and documenting data definitions, models, and analysis processes, are also key responsibilities.

The ideal candidate has strong experience in data modeling, including ER diagrams, normalization, and dimensional modeling. Proficiency in Python for data analysis (Pandas, NumPy, Matplotlib, etc.) is required, along with a solid understanding of SQL and relational databases and experience with data visualization tools such as Power BI, Tableau, or Matplotlib/Seaborn. You should be able to translate business requirements into technical solutions and possess excellent analytical, problem-solving, and communication skills.

Virtusa values teamwork, quality of life, and professional and personal development. Joining Virtusa means becoming part of a global team of 27,000 people dedicated to your growth. You will have the opportunity to work on exciting projects and leverage state-of-the-art technologies throughout your career with us. At Virtusa, collaboration and a team-oriented environment are paramount, providing great minds with a dynamic space to cultivate new ideas and pursue excellence.
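To illustrate the kind of Pandas-based trend analysis this role describes, here is a minimal sketch; the file, column, and region names are hypothetical placeholders, not part of the posting.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract: one row per order, with date, region, and revenue.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Aggregate revenue by month and region to surface trends.
monthly = (
    orders
    .assign(month=orders["order_date"].dt.to_period("M").dt.to_timestamp())
    .groupby(["month", "region"], as_index=False)["revenue"].sum()
)

# Pivot for a quick visual comparison across regions.
monthly.pivot(index="month", columns="region", values="revenue").plot()
plt.title("Monthly revenue by region")
plt.tight_layout()
plt.show()
```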

Posted 1 week ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Gurugram

Work from Office

Overview
We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.

About the Role
As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities
- Design and implement logical and physical data models for Databricks Lakehouse implementations
- Translate business requirements into efficient, scalable data models
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation
- Develop dimensional models, data vault models, and other modeling approaches as appropriate
- Support the migration of data models from legacy systems to the Databricks platform
- Collaborate with data architects to ensure alignment with the overall data architecture
- Work with data engineers to implement and optimize data models
- Ensure data models comply with healthcare industry regulations and standards
- Implement data modeling best practices and standards
- Provide guidance on data modeling approaches and techniques
- Participate in data governance initiatives and data quality assessments
- Stay current with evolving data modeling techniques and industry trends

Qualifications
- Extensive experience in data modeling for analytics and reporting systems
- Strong knowledge of dimensional modeling, data vault, and other modeling methodologies
- Experience with the Databricks platform and Delta Lake architecture
- Expertise in healthcare data modeling and industry standards
- Experience migrating data models from legacy systems to modern platforms
- Strong SQL skills and experience with data definition languages
- Understanding of data governance principles and practices
- Experience with data modeling tools and technologies
- Knowledge of performance optimization techniques for data models
- Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred
- Professional certifications in data modeling or related areas

Technical Skills
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
- Healthcare regulatory requirements (HIPAA, HITECH, etc.) (optional)
- Clinical and operational data modeling challenges
- Population health and value-based care data needs

Personal Attributes
- Strong analytical and problem-solving skills
- Excellent attention to detail and data quality focus
- Ability to translate complex business requirements into technical solutions
- Effective communication skills with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated with the ability to work independently
- Continuous learner who stays current with industry trends

What We Offer
- Opportunity to design data models for cutting-edge healthcare analytics
- Collaborative and innovative work environment
- Competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
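As an illustration of the dimensional modeling pattern this role calls for, here is a minimal sketch of a dimension and fact table defined as Delta tables through PySpark SQL. It assumes a Delta-enabled Spark session (as on Databricks); all table and column names are hypothetical, not healthcare-standard schemas.

```python
from pyspark.sql import SparkSession

# On Databricks the `spark` session is provided; this builder is for a local sketch.
spark = SparkSession.builder.appName("dim_model_sketch").getOrCreate()

# Hypothetical patient dimension keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_patient (
        patient_key BIGINT,
        patient_id  STRING,
        birth_date  DATE,
        gender      STRING
    ) USING DELTA
""")

# Hypothetical encounter fact, referencing dimension keys and partitioned by date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_encounter (
        encounter_key  BIGINT,
        patient_key    BIGINT,
        provider_key   BIGINT,
        encounter_date DATE,
        charge_amount  DECIMAL(18, 2)
    ) USING DELTA
    PARTITIONED BY (encounter_date)
""")
```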

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

Sr. Data Analytics Engineer (Databricks) - Power mission-critical decisions with governed insights

Company: Ajmera Infotech Private Limited (AIPL)
Location: Ahmedabad, Bengaluru, Hyderabad (On-site)
Experience: 5-9 years
Position Type: Full-time, Permanent

Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage.

Why You'll Love It
- End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: work in HIPAA-aligned environments where precision matters.

Key Responsibilities
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything from pipeline logic to RLS rules in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (including windowing); expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, plus cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.
- Cost governance familiarity (Power BI Premium capacity, Databricks chargeback).

Benefits
- Competitive salary package with performance-based bonuses.
- Comprehensive health insurance for you and your family.
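A minimal sketch of one pipeline step of the kind listed above: deduplicating raw records with a window function and writing a partitioned Delta table. It assumes a Delta-enabled Spark session (as on Databricks); the landing path, columns, and target table are hypothetical.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("pipeline_sketch").getOrCreate()

# Hypothetical raw landing data: one row per event, possibly duplicated.
raw = spark.read.json("/mnt/landing/events/")

# Keep only the latest record per business key using a window function.
latest = (
    raw.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("event_id").orderBy(F.col("ingested_at").desc())
        ),
    )
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Write a partitioned Delta table for downstream BI consumption.
(
    latest.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.silver_events")
)
```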

Posted 1 week ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Chennai

Remote

About Company: Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

Job Description: We are looking for an experienced Senior Data Modeler to join our agile team and support enterprise-level data initiatives. The ideal candidate will have a strong background in cloud-based data modeling, preferably within the Azure ecosystem, and be able to design and implement robust data models that support scalable and efficient data pipelines.

Responsibilities:
- Design and implement conceptual, logical, and physical data models based on business needs.
- Work on Azure cloud technologies including Azure Data Lake, Azure Data Factory, and Dremio for data virtualization.
- Create and maintain Low-Level Design (LLD) documents, Unit Test Plans, and related documentation.
- Collaborate with data engineers, developers, and analysts to ensure accurate and scalable data modeling.
- Optimize data-related processes and adhere to coding and modeling standards.
- Conduct integration testing and support bug fixing throughout the SDLC.
- Participate in SCRUM ceremonies and work closely with onshore and offshore teams.
- Manage timelines and deliverables, and communicate blockers or tradeoffs proactively.
- Assist with documentation required for OIS clearance and compliance audits.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 8+ years of professional experience in data modeling (logical and physical).
- Strong expertise in SQL and experience with relational databases and data warehouses.
- Hands-on experience with data modeling tools like Erwin or equivalent.
- 5+ years working with Azure Data Lake, Azure Data Factory, and Dremio.
- Solid understanding of data structures, indexing, and optimization techniques.
- Performance tuning skills for models and queries in large datasets.
- Strong communication skills, both verbal and written.
- Highly organized, collaborative, and proactive team player.

Benefits & Perks:
- Opportunity to work with leading global clients
- Flexible work arrangements with remote options
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities

Posted 1 week ago

Apply

2.0 - 7.0 years

5 - 8 Lacs

Jharkhand

Remote

Job Title: Snowflake + Power BI
Job Location: Remote

Job Summary: We are seeking a skilled and detail-oriented BI Developer / Data Analyst with strong expertise in Snowflake and Power BI, specifically in dataset creation, data modeling, and dashboard development. The ideal candidate will work closely with stakeholders to transform raw data into insightful, actionable business intelligence.

Key Responsibilities:
- Design and build robust data models and datasets in Power BI to support business reporting and analytics.
- Develop and maintain complex SQL queries and data pipelines within Snowflake.
- Create reusable, scalable, and efficient data views and schemas to support self-service BI.
- Collaborate with business stakeholders to gather requirements, define KPIs, and design dashboards.
- Optimize data models for performance and scalability in both Snowflake and Power BI.
- Ensure data quality, governance, and security best practices are implemented.
- Troubleshoot data issues and resolve inconsistencies across systems.
- Maintain documentation of data definitions, metrics, and processes.

Required Skills and Qualifications:
- 2-5+ years of experience in data analytics, BI development, or data engineering.
- Hands-on experience with Snowflake (SQL development, data warehousing concepts).
- Proven expertise in Power BI (Power Query, DAX, data modeling, report/dashboard development).
- Strong SQL skills: ability to write, optimize, and troubleshoot complex queries.
- Experience working with large datasets and designing efficient data pipelines.
- Solid understanding of dimensional modeling, star/snowflake schemas, and data normalization.
- Familiarity with data governance, security, and compliance frameworks.
- Strong analytical and communication skills, with the ability to present complex data clearly.
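A minimal sketch of the "reusable data views to support self-service BI" idea above: creating a curated star-schema view in Snowflake from Python that Power BI could then import. Connection details, database objects, and column names are all hypothetical; credentials would normally come from a secrets manager.

```python
import snowflake.connector

# Hypothetical connection parameters; never hard-code real credentials.
conn = snowflake.connector.connect(
    account="xy12345",
    user="BI_DEVELOPER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)

# A curated view joining the fact table to its dimensions for reporting.
conn.cursor().execute("""
    CREATE OR REPLACE VIEW vw_sales_summary AS
    SELECT
        d.calendar_month,
        p.product_category,
        SUM(f.net_amount) AS net_revenue
    FROM fact_sales f
    JOIN dim_date    d ON f.date_key    = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.calendar_month, p.product_category
""")
conn.close()
```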

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You are an experienced Senior dbt Engineer with a strong background in Snowflake and Azure cloud platforms. Your primary responsibility will be to lead the design and development of scalable, governed, and efficient data transformation pipelines using dbt, collaborating across functions to deliver business-ready data solutions.

With at least 8 years of experience in data engineering, analytics engineering, or similar roles, you have proven expertise in dbt (Data Build Tool) and modern data transformation practices. Your advanced proficiency in SQL and deep understanding of dimensional modeling, medallion architecture, and ELT principles will be crucial for success in this role. You must have strong hands-on experience with Snowflake, including query optimization, and be proficient with Azure cloud services such as Azure Data Factory and Blob Storage. Strong communication and collaboration skills are expected, along with familiarity with data governance, metadata management, and data quality frameworks.

As a Senior dbt Engineer, your key responsibilities will include leading the design, development, and maintenance of dbt models and transformation layers. You will define and enforce data modeling standards, best practices, and development guidelines while driving the end-to-end ELT process to ensure reliability and data quality across all layers. Collaboration with data product owners, analysts, and stakeholders to translate complex business needs into clean, reusable data assets is essential. You will apply Snowflake best practices to build scalable and robust dbt models, integrate dbt workflows with orchestration tools like Azure Data Factory, Apache Airflow, or dbt Cloud for robust monitoring and alerting, and support CI/CD implementation for dbt deployments using tools like GitHub Actions, Azure DevOps, or similar.

If you are looking for a challenging opportunity to leverage your expertise in dbt, Snowflake, and Azure cloud platforms to drive digital transformation and deliver impactful data solutions, this role is perfect for you.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The PEX Report Developer position entails collaborating with fund accounting professionals and technology teams to develop, maintain, and enhance customized reporting statements. As a PEX Report Developer, your primary responsibility will involve utilizing QlikView version 11 or higher to create and manage reporting solutions.

You should possess a minimum of 2 years of experience with a focus on QlikView dashboard design and development. A strong understanding of SQL, relational databases, and dimensional modeling is essential for this role. Proficiency in working with large datasets and experience in handling complex data models involving more than 10 tables is required. You will be tasked with integrating data from various sources into a QlikView data model, including social media content and API extensions.

The ideal candidate will have a Bachelor's degree in Computer Science and extensive expertise in all aspects of the QlikView lifecycle. You should be well-versed in complex QlikView functions, such as set analysis, alternate states, and advanced scripting. Experience with section access and implementing data-level security is crucial for this role. Additionally, familiarity with QlikView distributed architecture, SDLC, and Agile software development concepts is preferred.

Responsibilities of the role include creating new reporting and dashboard applications using technologies like QlikView and NPrinting to facilitate better decision-making within the business areas. You will collaborate with stakeholders to identify use cases, gather requirements, and translate them into system and functional specifications. Additionally, you will be responsible for installing, configuring, and maintaining the QlikView environment, developing complex QlikView applications, and defining data extraction processes from multiple sources. As part of the team, you will have the opportunity to mentor and train other team members on QlikView best practices. Furthermore, you will contribute to designing support procedures, training IT support, and providing end-user support for QlikView-related issues. Following the SDLC methodology is an integral part of this role.

At GlobalLogic, we offer a culture that prioritizes caring, continuous learning and development opportunities, meaningful work on impactful projects, balance, flexibility, and a high-trust environment. As a trusted digital engineering partner, we collaborate with leading companies worldwide, driving digital transformation and creating intelligent products and services. Join us at GlobalLogic, a Hitachi Group Company, and be part of a team that is shaping the digital revolution and redefining industries through innovation and collaboration.

Posted 1 week ago

Apply

3.0 - 6.0 years

7 - 17 Lacs

Bengaluru

Work from Office

Job Requirements

Responsibilities
- Manage and optimize Snowflake databases, ensuring data integrity and efficient data retrieval.
- Extract, transform, and load data from various sources into data warehouses and data lakes for analysis and reporting purposes.
- Ensure data quality and integrity by implementing data validation and testing procedures.
- Ensure the security of the backend systems, implementing encryption, authentication, and authorization measures.
- Optimize server-side performance for fast response times and efficient resource utilization.
- Conduct code reviews and mentor junior developers to maintain high-quality coding standards.
- Implement and maintain version control using Git for efficient code management.
- Troubleshoot and resolve issues related to backend functionality and performance.
- Design and develop data integration and data transformation using ETL tools such as dbt with Snowflake.

Competencies
- Proven experience as a Backend Developer with a focus on Snowflake; expert-level Snowflake skills.
- Strong knowledge of database design, optimization, and administration.
- Good to have: knowledge of data warehousing concepts such as dimensional modeling and star and snowflake schemas.
- Familiarity with cloud platforms and services, such as AWS and Azure, and their data-related offerings, such as S3.
- Proficient in version control systems, especially Git.
- Experience collaborating with business users, gathering requirements, data analysis, data mapping, and documentation.
- Understanding of data modeling concepts and familiarity with Snowflake's data modeling tools and techniques, plus at least one ETL/ELT tool.
- Experience with Agile solution development.

Candidate Profile
- Bachelor's degree in computer science, information technology, or a related field.
- 3-5 years of overall experience, with a minimum of 4 years of experience in backend development.

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Nagpur, Maharashtra

On-site

As a Data Integration Architect at our organization, you will play a vital role in collaborating with business stakeholders to understand their data integration needs and requirements. Your primary responsibility will involve translating these business requirements into technical specifications and designs. Additionally, you will be tasked with developing architectural solutions that align with the bank's overarching IT strategy.

With over 12 years of experience, you must possess in-depth knowledge of ETL products and stay abreast of the latest features, updates, and best practices within the ETL ecosystem. Your expertise will be critical in implementing real-time data streaming architecture using ETL tools and addressing data quality and consistency issues during the integration process. Furthermore, you will be required to identify and rectify performance bottlenecks in ETL workflows and mappings, optimize data integration processes for efficiency and speed, and implement security measures to safeguard sensitive data. Compliance with relevant data protection and privacy regulations will be paramount in your role.

Collaboration with various IT teams, including database administrators, developers, and infrastructure teams, is essential to ensure a cohesive and well-integrated solution. Your ability to create and maintain comprehensive documentation for implemented data integration solutions, manage data integration projects, and work closely with project managers to define project scope and goals will be crucial for success in this role.

In terms of required skills, strong communication, time management, and organizational skills are essential. Proficiency in SQL, MS Office, data warehousing concepts, dimensional modeling, and data integration patterns is a must. The role demands the ability to work under pressure, good analytical and decision-making skills, and the capacity to thrive in a competitive environment.

If you are seeking a challenging opportunity to utilize your expertise in data integration and architecture, we invite you to apply for this position. Join our dynamic team and contribute to delivering data integration solutions that drive seamless data flow across systems within the bank while ensuring compliance and security.

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a great opportunity to join our team as a Data Architect with 9+ years of experience. In this role, you will be responsible for designing, implementing, and managing cloud-based solutions on AWS and Snowflake. Your main tasks will include working with stakeholders to gather requirements, designing solutions, developing and executing test plans, and overseeing the information architecture for the data warehouse.

To excel in this role, you must have strong skills in Snowflake, dbt, and data architecture design for data warehouses. Informatica or other ETL knowledge or hands-on experience is beneficial, as is an understanding of Databricks. You should have 9-11 years of IT experience, with 3+ years of data architecture experience in data warehousing and 4+ years in Snowflake.

As a Data Architect, you will optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency. You should have a deep understanding of data warehousing, enterprise architectures, dimensional modeling, star and snowflake schema design, reference DW architectures, ETL architecture, ETL (extract, transform, load), data analysis, data conversion and transformation, database design, data warehouse optimization, data mart development, and enterprise data warehouse maintenance and support.

In addition to your technical responsibilities, you will maintain detailed documentation for data solutions and processes, provide training and leadership to share expertise and best practices with the team, and collaborate with the data engineering team to ensure that data solutions are developed according to best practices.

If you have 10+ years of overall experience in architecting and building large-scale, distributed big data products, expertise in designing and implementing highly scalable, highly available cloud services and solutions, experience with AWS and Snowflake, and a strong understanding of data warehousing and data engineering principles, then this role is perfect for you.

This is a full-time position based in Hyderabad, Telangana, with a Monday-to-Friday work schedule, so you must be able to reliably commute or plan to relocate before starting work. As part of the application process, we would like to know your notice period, years of experience in Snowflake, data architecture experience in data warehousing, current location, willingness to work from the office in Hyderabad, current CTC, and expected CTC. If you meet the requirements and are excited about this opportunity, we look forward to receiving your application. (Note: a total of 9 years of work experience is required for this position.)

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Data Engineer specializing in Databricks, you will play a crucial role in designing, developing, and optimizing our next-generation data platform. Your responsibilities will include leading a team of data engineers, offering technical guidance and mentorship, and ensuring the scalability and high performance of data solutions.

You will be expected to lead the design, development, and implementation of scalable and reliable data pipelines using Databricks, Spark, and other relevant technologies. It will also be part of your role to define and enforce data engineering best practices, coding standards, and architectural patterns. Additionally, providing technical guidance and mentorship to junior and mid-level data engineers, conducting code reviews, and ensuring the quality, performance, and maintainability of data solutions will be key aspects of your job.

Your expertise in Databricks will be essential as you architect and implement data solutions on the Databricks platform, including Databricks Lakehouse, Delta Lake, and Unity Catalog. Optimizing Spark workloads for performance and cost efficiency on Databricks, developing and managing Databricks notebooks, jobs, and workflows, and proficiently using Databricks features such as Delta Live Tables (DLT), Photon, and SQL Analytics will be part of your daily tasks.

In terms of pipeline development and operations, you will develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from various sources such as relational databases, APIs, and streaming data. Implementing monitoring, alerting, and logging for data pipelines to ensure operational excellence, as well as troubleshooting and resolving complex data-related issues, will also fall under your responsibilities.

Collaboration and communication are crucial aspects of this role, as you will work closely with cross-functional teams, including product managers, data scientists, and software engineers. Clear communication of complex technical concepts to both technical and non-technical stakeholders is vital, and staying updated with industry trends and emerging technologies in data engineering and Databricks will also be expected.

Key skills required for this role include extensive hands-on experience with the Databricks platform (Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog), strong proficiency in optimizing Spark jobs and a solid understanding of Spark architecture, experience with Databricks features like Delta Live Tables (DLT), Photon, and Databricks SQL Analytics, and a deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures.
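A minimal sketch of a Delta Live Tables definition of the kind this role works with. The `dlt` module and the `spark` session are provided by the Databricks DLT pipeline runtime; the source path, table names, and the expectation rule are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

# Bronze: ingest raw files as-is from a hypothetical landing path.
@dlt.table(comment="Raw orders ingested from the landing zone")
def bronze_orders():
    return spark.read.json("/mnt/landing/orders/")

# Silver: drop rows failing a basic quality expectation and standardize types.
@dlt.table(comment="Cleansed orders with valid identifiers")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def silver_orders():
    return (
        dlt.read("bronze_orders")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
    )
```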

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai

Work from Office

Requirements
- 5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala).
- 3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain.
- 4+ years of professional experience with more than one SQL/relational database, including expertise in Presto, Spark, and MySQL.
- Professional experience designing and implementing real-time pipelines (Apache Kafka or similar technologies).
- 5+ years of professional experience in custom ETL design, implementation, and maintenance.
- 3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling.
- 5+ years of professional experience working with cloud or on-premises Big Data/MPP analytics platforms (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar).
- Experience with data quality and validation (using Apache Airflow).
- Experience with anomaly/outlier detection.
- Experience with data science workflows (Jupyter Notebooks, Bento, or similar tools).
- Experience with Airflow or similar workflow management systems.
- Experience querying massive datasets using Spark, Presto, Hive, or similar.
- Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.).
- Experience in data visualization using Power BI and Tableau.
- Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications.
- Professional fluency in English required.

Mandatory Skills: Data Analysis. Experience: 3-5 years.
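A minimal sketch of the real-time pipeline pattern listed above: reading a Kafka topic with Spark Structured Streaming and appending parsed events to a Delta table. Broker address, topic, schema, and paths are hypothetical, and the job assumes the Kafka and Delta connector packages are on the Spark classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

# Hypothetical event schema carried as JSON in the Kafka message value.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the topic as a stream and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Delta table with checkpointing for exactly-once writes.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start("/tmp/delta/orders")
)
```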

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Long Description
- Bachelor's degree preferred, or equivalent combination of education, training, and experience.
- 5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala).
- 3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain.
- 6+ years of professional experience with more than one SQL/relational database, including expertise in Presto, Spark, and MySQL.
- Professional experience designing and implementing real-time pipelines (Apache Kafka or similar technologies).
- 5+ years of professional experience in custom ETL design, implementation, and maintenance.
- 3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling.
- 5+ years of professional experience working with cloud or on-premises Big Data/MPP analytics platforms (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar).
- Experience with data quality and validation (using Apache Airflow).
- Experience with anomaly/outlier detection.
- Experience with data science workflows (Jupyter Notebooks, Bento, or similar tools).
- Experience with Airflow or similar workflow management systems.
- Experience querying massive datasets using Spark, Presto, Hive, or similar.
- Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.).
- Experience in data visualization using Power BI and Tableau.
- Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications.
- Professional fluency in English required.

Mandatory Skills: Data Analysis. Experience: 5-8 years.

Posted 2 weeks ago

Apply

8.0 - 11.0 years

16 - 20 Lacs

Hyderabad

Remote

US Shift (Night Shift). 8+ years in data modeling, with 3+ years in ER Studio (ERwin not preferred); strong in relational and dimensional modeling and normalization. HR and EPM experience is a plus. Skilled in metadata, data dictionaries, documentation, and communication.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Remote

The client requires 5+ years of experience in data modeling and a minimum of 3 years of proficiency using ER Studio. The ideal candidate will possess a deep understanding of data architecture, database design, and conceptual, logical, and physical data models.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Cloud Data Modeler with 6 to 9 years of experience in GCP environments, you will play a crucial role in designing schema architecture, creating performance-efficient data models, and guiding teams on cloud-based data integration best practices. Your expertise will be focused on GCP data platforms such as BigQuery, CloudSQL, and AlloyDB.

Your responsibilities will include architecting and implementing scalable data models for cloud data warehouses and databases, optimizing OLTP/OLAP systems for reporting and analytics, supporting cloud data lake and warehouse architecture, and reviewing and optimizing existing schemas for cost and performance on GCP. You will also be responsible for defining documentation standards, ensuring model version tracking, and collaborating with DevOps and DataOps teams for deployment consistency.

Key Requirements:
- Deep knowledge of GCP data platforms including BigQuery, CloudSQL, and AlloyDB
- Expertise in data modeling, normalization, and dimensional modeling
- Understanding of distributed query engines, table partitioning, and clustering
- Familiarity with DBSchema or similar tools

Preferred Skills:
- Prior experience in BFSI or asset management industries
- Working experience with Data Catalogs, lineage, and governance tools

Soft Skills:
- Collaborative and consultative mindset
- Strong communication and requirements-gathering skills
- Organized and methodical approach to data architecture challenges

By joining our team, you will have the opportunity to contribute to modern data architecture in a cloud-first enterprise, influence critical decisions around GCP-based data infrastructure, and be part of a future-ready data strategy implementation team.
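A minimal sketch of the table partitioning and clustering pattern referenced above, issued through the BigQuery Python client; the project, dataset, and column names are hypothetical. Partitioning by date and clustering on common filter columns keeps scans, and therefore cost, small for typical reporting queries.

```python
from google.cloud import bigquery

# Hypothetical project; in practice the client also picks up credentials from the environment.
client = bigquery.Client(project="my-analytics-project")

# Fact table partitioned by event date and clustered on frequent filter columns.
client.query("""
    CREATE TABLE IF NOT EXISTS analytics.fact_usage (
        usage_id     STRING,
        customer_id  STRING,
        product_code STRING,
        usage_date   DATE,
        amount       NUMERIC
    )
    PARTITION BY usage_date
    CLUSTER BY customer_id, product_code
""").result()
```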

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At PwC, the infrastructure team focuses on designing and implementing secure IT systems that support business operations. The primary goal is to ensure the smooth functioning of networks, servers, and data centers to enhance performance and minimize downtime. In the infrastructure engineering role at PwC, you will be tasked with creating robust and scalable technology infrastructure solutions for clients, working on network architecture, server management, and cloud computing.

As a Data Modeler, we are seeking candidates with a solid background in data modeling, metadata management, and data system optimization. Your responsibilities will include analyzing business requirements, developing long-term data models, and ensuring the efficiency and consistency of our data systems.

Key areas of expertise for this role include:
- Analyzing and translating business needs into long-term data model solutions.
- Evaluating existing data systems and suggesting enhancements.
- Defining rules for translating and transforming data across various models.
- Collaborating with the development team to create conceptual data models and data flows.
- Establishing best practices for data coding to maintain consistency within the system.
- Reviewing modifications of existing systems for cross-compatibility.
- Implementing data strategies and developing physical data models.
- Updating and optimizing local and metadata models.
- Utilizing canonical data modeling techniques to improve data system efficiency.
- Evaluating implemented data systems for variances, discrepancies, and efficiency.
- Troubleshooting and optimizing data systems for optimal performance.
- Strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Effective use of data modeling tools such as Erwin, ER/Studio, Visio, and PowerDesigner.
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Understanding of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications for this position include:
- A Bachelor's degree in Engineering or a related field.
- 5 to 9 years of experience in data modeling or a related field.
- 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills.
- Strong interpersonal skills to collaborate effectively with various teams.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

JD for Informatica Developer

Key Responsibilities:
- Design, develop, test, and deploy ETL mappings, workflows, and sessions using Informatica PowerCenter.
- Analyze source systems, define transformation logic, and create data mappings and process flows.
- Optimize ETL performance and troubleshoot data quality issues.
- Collaborate with database administrators, data architects, and QA teams to ensure data integrity and optimal system performance.
- Develop and maintain technical documentation, including data flow diagrams, mapping documents, and deployment instructions.
- Support data migration, data integration, and business intelligence initiatives.
- Participate in code reviews, testing, and deployment activities.
- Ensure adherence to data governance and security standards.

Required Qualifications:
- 7+ years of hands-on experience with Informatica PowerCenter in a development role.
- Strong proficiency in SQL and experience with at least one RDBMS (e.g., Oracle, SQL Server, DB2).
- Solid understanding of data warehousing concepts, dimensional modeling, and ETL best practices.
- Experience with performance tuning of Informatica mappings and sessions.
- Knowledge of job scheduling tools (e.g., UC4, Autosys, Control-M) is a plus.
- Familiarity with version control tools (e.g., Git, SVN).

Preferred Skills:
- Experience with cloud data platforms (AWS, Snowflake, etc.).
- Knowledge of other Informatica products such as IDQ, MDM, or Cloud Data Integration.
- Exposure to Agile/Scrum methodologies.
- Good communication and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

7 - 14 Lacs

Halol

Work from Office

Role & responsibilities
- Single Point of Contact for vehicle dimensional quality from program initiation until Integration Vehicle Build (IVB).
- Team Leader for IVB/matching/body shop validation/launch/vehicle dimensional quality issue identification, resolution, and execution.
- Possesses strong leadership and communication skills and is able to interface across several organizations to solve complex cross-functional vehicle integration issues.
- Possesses a high level of analytical ability and interpersonal skills to work effectively with others and motivate employees.
- Must have a strong background in dimensional engineering, tooling and manufacturing processes, and leading problem-solving teams.
- Should have a high level of cross-functional knowledge in die/stamping, body, dimensional, and product engineering.
- Driver for metal matching and vehicle dimensional quality management; drives the "Drive to Nominal" strategy across all functions for sheet metal and plastics.
- Key in improving craftsmanship among line operators to attain vehicle fit and finish.
- Reviews product designs and GD&T to ensure manufacturing requirements for meeting vehicle specifications during assembly are met.
- Development/execution of the iterative matching/body shop validation plan.
- Decision-making, resolution, and execution for all major vehicle dimensional issues from the start of IVB until plant transition (including dimensional issues related to wind noise, water leaks, closing effort, and sealing).
- Interfacing with all organizations outside Body ME on dimensional issues, including presentations for upper management.
- Assigning issue owners and driving issue resolution to improve dimensional quality for major issues (including perceived quality/jewel effect).
- Driving the major cross-functional issues to resolution by getting all affected parties involved.
- High-level reporting on the dimensional quality/status of the vehicle from IVB through launch, including the following metrics: BIW A, B, C and Finished Vehicle (FVS).
- TAC fixture reviews and problem-resolution follow-up related to key issues.
- Completion of the dimensional exit criteria.
- DTS gate reviews (at PQRR, MVS, EOA) to determine acceptable ship targets and DTS reconciliation.
- Exhibits interior and exterior fit-and-finish requirements as per DTS; evaluates components and assembly output; analyzes CF and matching at the vehicle.
- Expertise in sheet metal components and assembly metrics requirements through stage-wise analysis.
- Identifies root causes and resolves BIW dimensional issues from matching until plant transition; the Dimensional Validation Engineers, DPM, Zone MEs, etc. all support the BIW QL in this effort.
- Develops/maintains the Program Dimensional Quality Report and posts it to the BIW execution website, including updating the ABC metric reports.
- Responsible for development/execution of the iterative matching/body shop validation plan, and leads the iterative match process at the suppliers, working with the SQE and PE along with Stamping and other suppliers to correct quality issues affecting the build.
- Reports on the dimensional quality/status of the vehicle from IVB through launch, including the BIW A, B, C and Finished Vehicle (FVS) metrics.
- Conducts weekly zone/area-focused dimensional performance meetings with appropriate BIW zone MEs, plant personnel, etc. to review dimensional metrics (ABC, FVS, PQA, Fit Gate) and provide prioritized top issues; the BIW QL assigns issue owners and drives issue resolution to improve dimensional quality.
- Holds daily dimensional meetings to coordinate tool tune-in activities and engineering resources as required.
- Ensures tooling 101/drill panel activities are scheduled and completed at the assembly plant if not performed prior to buy-off.
- Implements the common shim log/tool change process with assembly plant resources; once implemented, all shim requests are to be approved by the BIW QL before implementation.
- Prioritizes, communicates, and drives issue resolution for the issues identified from the M3 and M4 builds.
- DTS gate reviews (at PQRR MVS - Finished Vehicle Ship Targets, EOA - DTS Reconciliation) to determine acceptable ship targets.

Preferred candidate profile
- GD&T, problem solving, Six Sigma (preferable).
- BIW, exterior and interior, vehicle dimensions.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

7 - 11 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Job Title: Erwin Data Modeler, Insurance Domain
Location: Any
Job Type: Full-Time | 2-11 pm shift

Job Summary
We are seeking a skilled and experienced Data Modeler with hands-on expertise in Erwin Data Modeler to join our team. The ideal candidate will have a strong background in data architecture and modeling, with a minimum of 4 years of relevant experience. Knowledge of the insurance domain is a significant plus.

Key Responsibilities
- Design, develop, and maintain conceptual, logical, and physical data models using Erwin Data Modeler.
- Collaborate with business analysts, data architects, and developers to understand data requirements and translate them into data models.
- Ensure data models align with enterprise standards and best practices.
- Perform data analysis and profiling to support modeling efforts.
- Maintain metadata and documentation for data models.
- Support data governance and data quality initiatives.
- Participate in reviews and provide feedback on data models and database designs.

Required Skills & Qualifications
- Strong understanding of data modeling concepts, including normalization, denormalization, and dimensional modeling.
- Knowledge of any relational database is an advantage.
- Familiarity with data warehousing and ETL processes.
- Excellent analytical and problem-solving skills.
- Strong communication and collaboration abilities.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

1 - 4 Lacs

Noida

Hybrid

Position Overview
This role will help architect, build, and maintain a robust, scalable, and sustainable business intelligence platform. Assisted by the Data Team, this role will work with highly scalable systems, complex data models, and a large amount of transactional data.

Company Overview
BOLD is an established and fast-growing product company that transforms work lives. Since 2005, we've helped more than 10,000,000 folks from all over America (and beyond!) reach higher and do better. A career at BOLD promises great challenges, opportunities, culture, and environment. With our headquarters in Puerto Rico and offices in San Francisco and India, we're a global organization on a path to change the career industry.

Key Responsibilities
- Architect, develop, and maintain a highly scalable data warehouse and build/maintain ETL processes.
- Utilize Python and Airflow to integrate data from across the business into the data warehouse.
- Integrate third-party data into the data warehouse, such as Google Analytics, Google Ads, and Iterable.

Required Skills
- Experience working as an ETL developer in a Data Engineering, Data Warehousing, or Business Intelligence team.
- Understanding of data integration/data engineering architecture and awareness of ETL standards, methodologies, guidelines, and techniques.
- Hands-on experience with the Python programming language and its packages such as Pandas and NumPy.
- Strong understanding of SQL queries, aggregate functions, complex joins, and performance tuning.
- Good exposure to databases such as Snowflake, SQL Server, Oracle, or PostgreSQL (any one of these).
- Broad understanding of data warehousing and dimensional modeling concepts.
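A minimal sketch of an Airflow DAG of the kind described above: a daily extract of a third-party source followed by a warehouse load. It assumes a recent Airflow 2.x install; the DAG id, callables, and connection handling are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_ga_sessions(**context):
    # Placeholder: call the analytics API and stage the results to object storage.
    ...


def load_to_warehouse(**context):
    # Placeholder: copy the staged file into the warehouse (e.g., Snowflake).
    ...


with DAG(
    dag_id="ga_sessions_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_ga_sessions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Load only after the extract succeeds.
    extract >> load
```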

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At PwC, our team in infrastructure is dedicated to designing and implementing secure and robust IT systems that facilitate business operations. We focus on ensuring the smooth functioning of networks, servers, and data centers to enhance performance and reduce downtime. As part of the infrastructure engineering team at PwC, your role will involve creating and implementing scalable technology infrastructure solutions for our clients, encompassing tasks such as network architecture, server management, and cloud computing.

We are currently seeking a Data Modeler with a solid background in data modeling, metadata management, and optimizing data systems. In this role, you will be responsible for analyzing business requirements, developing long-term data models, and maintaining the efficiency and consistency of our data systems.

Key Responsibilities:
- Analyze business needs and translate them into long-term data model solutions.
- Evaluate existing data systems and suggest enhancements.
- Define rules for data translation and transformation across different models.
- Collaborate with the development team to design conceptual data models and data flows.
- Establish best practices for data coding to ensure system consistency.
- Review modifications to existing systems to ensure cross-compatibility.
- Implement data strategies and create physical data models.
- Update and optimize local and metadata models.
- Utilize canonical data modeling techniques to improve system efficiency.
- Evaluate implemented data systems for discrepancies, variances, and efficiency.
- Troubleshoot and optimize data systems to achieve optimal performance.

Key Requirements:
- Proficiency in relational and dimensional modeling (OLTP, OLAP).
- Experience with data modeling tools such as Erwin, ER/Studio, Visio, and PowerDesigner.
- Strong skills in SQL and database management systems like Oracle, SQL Server, MySQL, and PostgreSQL.
- Familiarity with NoSQL databases such as MongoDB and Cassandra, including their data structures.
- Hands-on experience with data warehouses and BI tools like Snowflake, Redshift, BigQuery, Tableau, and Power BI.
- Knowledge of ETL processes, data integration, and data governance frameworks.
- Excellent analytical, problem-solving, and communication skills.

Qualifications:
- Bachelor's degree in Engineering or a related field.
- 5 to 9 years of experience in data modeling or related areas.
- 4+ years of practical experience in dimensional and relational data modeling.
- Expertise in metadata management and relevant tools.
- Proficiency in data modeling tools like Erwin, PowerDesigner, or Lucid.
- Understanding of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions, such as AWS, Azure, and GCP.
- Knowledge of big data technologies like Hadoop, Spark, and Kafka.
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Strong communication and presentation skills.
- Excellent interpersonal skills to collaborate effectively with diverse teams.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Model Designer, you will be responsible for structuring and organizing data to support scalable applications and reporting systems. Your role will play a crucial part in maintaining data consistency and reusability throughout the organization.

Your key responsibilities will include developing and maintaining conceptual, logical, and physical data models and ensuring that these models align with reporting, analytics, and application requirements. You will collaborate with stakeholders to gather data requirements, normalize data, optimize relationships, and work closely with developers and DBAs to implement these models in production.

To be successful in this role, you should have 6-9 years of experience in data modeling or business intelligence, a strong understanding of relational databases and data warehousing, and proficiency in data modeling tools such as PowerDesigner, dbt, and ER/Studio. You should be able to interpret business needs and translate them into effective data structures, with keen attention to detail. Experience with dimensional modeling (star/snowflake schemas) and familiarity with master data management concepts are advantageous.

Key Skills: ER/Studio, PowerDesigner, dbt, data modeling, dimensional modeling, data modeling tools, data warehousing, relational databases, business intelligence, master data management.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Noida, India

Work from Office

Key Responsibilities:
1. Architect and design end-to-end data pipelines from source systems to the data warehouse.
2. Lead the development of scalable Python/Spark-based data processing workflows.
3. Define and implement data modeling standards for the DWH, including fact/dimension schemas and historical handling.
4. Oversee performance tuning of Python, Spark, and ETL loads.
5. Ensure robust data integration with Tableau reporting by designing data structures optimized for BI consumption.
6. Mentor junior engineers and drive engineering best practices.
7. Work closely with business stakeholders, developers, and product teams to align data initiatives with business goals.
8. Define SLAs, error handling, logging, monitoring, and alerting mechanisms across pipelines.

Must Have:
1. Strong Oracle SQL expertise and deep Oracle DWH experience.
2. Proficiency in Python and Spark, with experience handling large-scale data transformations.
3. Experience in building batch data pipelines and managing dependencies.
4. Solid understanding of data warehousing principles and dimensional modeling.
5. Experience working with reporting tools like Tableau.
6. Good to have: experience in cloud-based DWHs (such as Snowflake) for future-readiness.

Mandatory Competencies:
- ETL: DataStage, Ab Initio
- Behavioral: Communication and collaboration
- BI and Reporting Tools: Tableau
- QA/QE - QA Analytics: Data Analysis
- Database Programming: SQL
- Big Data: Spark
- Programming Language: Python, Python Shell

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Noida, Hyderabad

Work from Office

Senior Data Modeller

About the Role:
We are seeking an experienced Senior Data Modeller to join our team. In this role, you will be responsible for the design and standardization of enterprise-wide data models across multiple domains such as Customer, Product, Billing, and Network. The ideal candidate will work closely with cross-functional teams to translate business needs into scalable and governed data structures. You will work closely with customers and technology partners to deliver data solutions that address complex business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.

Responsibilities:
- Design logical and physical data models aligned with enterprise and industry standards
- Create and maintain data models for Customer, Product, Usage, and Service domains
- Translate business requirements into normalized and analytical schemas (star/snowflake)
- Define and maintain entity relationships, hierarchy levels (Customer - Account - MSISDN), and attribute lineage
- Standardize attribute definitions across systems and simplify legacy structures
- Collaborate with engineering teams to implement models in cloud data platforms (e.g., Databricks)
- Collaborate with domain stewards to simplify and standardize legacy data structures
- Work with governance teams to tag attributes for privacy, compliance, and data quality
- Document metadata and lineage, and maintain version control of data models
- Support analytics, reporting, and machine learning teams by enabling standardized data access
- Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics

Qualifications:
- Bachelor's or master's degree in computer science, data science, or a related technical field
- 7+ years of experience in data modelling roles
- Hands-on experience building data models and platforms
- Strong experience with data modeling tools (Erwin, Azure Analysis Services, SSAS, dbt, Informatica)
- Hands-on experience with modern cloud data platforms (Databricks, Azure Synapse, Snowflake)
- Deep understanding of data warehousing concepts and normalized/denormalized models
- Expertise in SQL, data profiling, schema design, and metadata documentation
- Familiarity with domain-driven design, data mesh, and modular architecture
- Experience in large-scale transformation or modernization programs
- Knowledge of regulatory frameworks such as GDPR or data privacy-by-design
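A minimal sketch of flattening the Customer - Account - MSISDN hierarchy mentioned above into a single conformed dimension on a Databricks-style platform, so facts keyed on MSISDN can roll up to account and customer level. Source table names and columns are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("telecom_dim_sketch").getOrCreate()

# Hypothetical source tables at the three hierarchy levels.
customers = spark.table("raw.customer")     # customer_id, customer_name, segment
accounts = spark.table("raw.account")       # account_id, customer_id, billing_cycle
subscribers = spark.table("raw.msisdn")     # msisdn, account_id, activation_date

# Flatten the hierarchy into one subscriber dimension with standardized attributes.
dim_subscriber = (
    subscribers
    .join(accounts, "account_id")
    .join(customers, "customer_id")
    .select(
        "msisdn", "activation_date",
        "account_id", "billing_cycle",
        "customer_id", "customer_name", "segment",
    )
)

dim_subscriber.write.format("delta").mode("overwrite").saveAsTable("gold.dim_subscriber")
```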

Posted 2 weeks ago

Apply