10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About the Company: Creospan is a growing tech collective of makers, shakers, and problem solvers, offering solutions today that will propel businesses into a better tomorrow. "Tomorrow's ideas, built today!" In addition to working alongside equally brilliant and motivated developers, our consultants appreciate the opportunity to learn and apply new skills and methodologies across different clients and industries.

Job Title: Data Modeler
Location: Pune (pan-India relocation can be considered; strong preference for Pune)
Hybrid: 3 days WFO, 2 days WFH
Shift timings: UK working hours (9 AM - 5 PM GMT)
Notice period: Immediate
Career gap: Up to 3 months (strictly not more than that)

Project Overview: Creation and management of business data models in all their forms, including conceptual models, logical data models, and physical data models (relational database designs, message models, and others). The role requires an expert-level understanding of relational database concepts, dimensional database concepts, database architecture and design, and ontology and taxonomy design; a background working with key data domains such as account, holding, and transaction within the securities servicing or asset management space; expertise in designing data-driven solutions on Snowflake for complex business needs; and knowledge of the entire application lifecycle, including design, development, deployment, operation, and maintenance in an Agile and DevOps culture.

Role: This person strengthens the impact of, and provides recommendations on, data models and architecture that must be available and shared consistently across the TA organization, through the identification, definition, and analysis of how data-related assets aid business outcomes. The Data Modeler/Architect is responsible for making data trusted, understood, and easy to use. They own the entire lifecycle of the data architectural assets, from design and development to deployment, operation, and maintenance, with a focus on automation and quality.

Must-Have Skills:
- 10+ years of experience in enterprise-level data architecture, data modelling, and database engineering
- Expertise in OLAP and OLTP design, data warehouse solutions, and ELT/ETL processes
- Proficiency in data modelling concepts and practices such as normalization, denormalization, and dimensional modelling (star schema, snowflake schema, Data Vault, medallion data lake)
- Experience with Snowflake-specific features, including clustering, partitioning, and schema design best practices (see the sketch below)
- Proficiency in enterprise modelling tools: Erwin, PowerDesigner, IBM InfoSphere, etc.
- Strong experience with Microsoft Azure data pipelines (Data Factory, Synapse, SQL DB, Cosmos DB, Databricks)
- Familiarity with Snowflake's native tools and services, including Snowflake Data Sharing, Streams & Tasks, and Secure Data Sharing
- Strong knowledge of SQL performance tuning, query optimization, and indexing strategies
- Strong verbal and written communication skills for collaborating with both technical teams and business stakeholders
- Working knowledge of BIAN, ACORD, and ESG risk data integration

Nice to Haves:
- At least 3 years of securities servicing or asset management/investment experience is highly desired
- Understanding of the software development life cycle, including planning, development, quality assurance, change management, and release management
- Strong problem-solving skills and ability to troubleshoot complex issues
- Excellent communication and collaboration skills to work effectively in a team environment
- Self-motivated and able to work independently with minimal supervision
- Excellent communication skills, with experience communicating with technical and non-technical teams
- Deep understanding of data and information architecture, especially in the asset management space
- Familiarity with MDM, data vault, and data warehouse design and implementation techniques
- Business domain, data/content, and process understanding (which are more important than technical skills); being techno-functional is a plus
- Good presentation skills for creating data architecture diagrams
- Data modelling and information classification expertise at the project and enterprise level
- Understanding of common information architecture frameworks and information models
- Experience with distributed data and analytics platforms in cloud and hybrid environments, plus an understanding of a variety of data access and analytic approaches (for example, microservices and event-based architectures)
- Knowledge of problem analysis, structured analysis and design, and programming techniques
- Python, R
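For candidates wondering how the dimensional-modelling and clustering requirements above translate into Snowflake objects, here is a minimal, hedged sketch. It assumes the snowflake-connector-python package, and the table, column, and connection names (dim_account, fact_transaction, wh_modelling) are invented purely for illustration; they are not part of the job description.

```python
# Hypothetical star-schema sketch for a securities-servicing domain
# (account dimension + transaction fact). Names are illustrative only.
import snowflake.connector  # assumes snowflake-connector-python is installed

DDL = """
CREATE TABLE IF NOT EXISTS dim_account (
    account_sk   NUMBER AUTOINCREMENT PRIMARY KEY,
    account_id   STRING NOT NULL,
    account_name STRING,
    valid_from   TIMESTAMP_NTZ,
    valid_to     TIMESTAMP_NTZ
);
CREATE TABLE IF NOT EXISTS fact_transaction (
    transaction_id STRING NOT NULL,
    account_sk     NUMBER REFERENCES dim_account(account_sk),
    trade_date     DATE,
    quantity       NUMBER(18, 4),
    amount         NUMBER(18, 2)
)
-- Clustering key chosen to prune scans on the most common filters.
CLUSTER BY (trade_date, account_sk);
"""

# Placeholder credentials; a real pipeline would read these from a secret store.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***", warehouse="wh_modelling"
)
try:
    for statement in DDL.split(";"):
        if statement.strip():
            conn.cursor().execute(statement)
finally:
    conn.close()
```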
Posted 2 days ago
10.0 years
0 Lacs
Telangana
On-site
About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Reporting to the VP, COG ECM Enterprise Forms Portfolio Delivery Manager, this role is responsible for managing and supporting implementation of a new document solution for identified applications within the CCM landscape in APAC. OpenText xPression and Duckcreek have been the corporate document generation tools of choice within Chubb, but xPression is going end of life and will be unsupported from 2025. A new Customer Communications Management (CCM) platform, Quadient Inspire, has been selected by a global working group to replace xPression; this role covers implementation of the new tool, including migration of existing forms/templates from xPression where applicable. Apart from the migration from xPression, multiple existing applications are to be replaced with Quadient Inspire. The role is based in Hyderabad, India, with some travel to other Chubb offices. Although there are no direct line management responsibilities within this role, the successful applicant will be responsible for task management of business analysts and an onshore/offshore development team. The role requires the ability to manage multiple project/enhancement streams with varying levels of technical/functional scope and across a number of different technologies.

In this role, you will:
- Lead the design and development of comprehensive data engineering frameworks and patterns.
- Establish engineering design standards and guidelines for the creation, usage, and maintenance of data across COG (Chubb Overseas General).
- Drive innovation and build highly scalable real-time data pipelines and data platforms to support the business needs.
- Act as mentor and lead for a data engineering organization that is business-focused, proactive, and resilient.
- Promote data governance and master/reference data management as a strategic discipline.
- Implement strategies to monitor the effectiveness of data management.
- Be an engineering leader, coach data engineers, and be an active member of the data leadership team.
- Evaluate emerging data technologies and determine their business benefits and impact on the future-state data platform.
- Develop and promote a strong data management framework, emphasizing data quality, governance, and compliance with regulatory requirements.
- Collaborate with data modelers to create data models (conceptual, logical, and physical).
- Architect metadata management processes to ensure data lineage, data definitions, and ownership are well documented and understood.
- Collaborate closely with business leaders, IT teams, and external partners to understand data requirements and ensure alignment with strategic goals.
- Act as a primary point of contact for data engineering discussions and inquiries from various stakeholders.
- Lead the implementation of data architectures on cloud platforms (AWS, Azure, Google Cloud) to improve efficiency and scalability.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field; Master's degree preferred.
- Minimum of 10 years' experience in data architecture or data engineering roles, with a significant focus in P&C insurance domains preferred.
- Proven track record of successful implementation of data architecture within large-scale transformation programs or projects.
- Comprehensive knowledge of data modelling techniques and methodologies, including data normalization and denormalization practices.
- Hands-on expertise across a wide variety of database (Azure SQL, MongoDB, Cosmos DB), data transformation (Informatica IICS, Databricks), change data capture, and data streaming (Apache Kafka, Apache Flink) technologies.
- Proven expertise with data warehousing concepts, ETL processes, and data integration tools (e.g., Informatica, Databricks, Talend, Apache NiFi).
- Experience with cloud-based data architectures and platforms (e.g., AWS Redshift, Google BigQuery, Snowflake, Azure SQL Database).
- Expertise in data security patterns (e.g., tokenization, encryption, obfuscation).
- Knowledge of insurance policy operations, regulations, and compliance frameworks specific to Consumer lines.
- Familiarity with Agile methodologies and experience working in Agile project environments.
- Understanding of advanced analytics, AI, and machine learning concepts as they pertain to data architecture.

Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits: Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: Specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits and car lease that help employees optimally plan their finances.
- Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, such as education reimbursement programs, certification programs and access to global learning programs.
- Health and welfare benefits: We care about our employees' well-being in and out of work, with benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits.

Application Process: Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments and technical/functional interviews (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us: With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey. Apply Now: Chubb External Careers
Posted 2 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description & Summary: We are looking for a skilled Azure Cloud Data Engineer with strong expertise in Python programming, Databricks, and advanced SQL to join our team in Noida. The candidate will be responsible for designing, developing, and optimizing scalable data solutions on the Azure cloud platform. You will play a critical role in building data pipelines and transforming complex data into actionable insights by leveraging cloud-native tools and technologies.

Level: Senior Consultant / Manager
Location: Noida
LOS:
Competency: Data & Analytics
Skill: Azure Data Engineering
Job Position Title: Azure Cloud Data Engineer with Python Programming – Senior Consultant/Manager (6+ Years)

Responsibilities (a pipeline sketch follows below):
· Design, develop, and manage scalable and secure data pipelines using Azure Databricks and Azure Data Factory.
· Write clean, efficient, and reusable code, primarily in Python, for cloud automation, data processing, and orchestration.
· Architect and implement cloud-based data solutions, integrating structured and unstructured data sources.
· Build and optimize ETL workflows and ensure seamless data integration across platforms.
· Develop data models using normalization and denormalization techniques to support OLTP and OLAP systems.
· Manage Azure-based storage solutions including Azure Data Lake and Blob Storage.
· Troubleshoot performance bottlenecks in data flows and ETL processes.
· Integrate advanced analytics and support BI use cases within the Azure ecosystem.
· Lead code reviews and ensure adherence to version control practices (e.g., Git).
· Contribute to the design and deployment of enterprise-level data warehousing solutions.
· Stay current with Azure cloud technologies and Python ecosystem updates to adopt best practices and emerging tools.

Mandatory skill sets:
· Strong Python programming skills (Must-Have) – advanced scripting, automation, and cloud SDK experience
· Strong SQL skills (Must-Have)
· Azure Databricks (Must-Have)
· Azure Data Factory
· Azure Blob Storage / Azure Data Lake Storage
· Apache Spark (hands-on experience)
· Data modeling (normalization & denormalization)
· Data warehousing and BI tools integration
· Git (version control)
· Building scalable ETL pipelines

Preferred skill sets (Good to Have):
· Understanding of OLTP and OLAP environments
· Experience with Kafka and Hadoop
· Azure Synapse Analytics
· Azure DevOps for CI/CD integration
· Agile delivery methodologies

Years of experience required:
· 6+ years of overall experience in cloud engineering or data engineering roles, with at least 2-3 years of hands-on experience with Azure cloud services.
· Proven track record of strong Python development, with at least 2-3 years of hands-on experience.

Education qualification: BE/B.Tech/MBA/MCA
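As a rough illustration of the Databricks pipeline work described above, the sketch below reads raw CSVs from an ADLS landing container, applies light cleansing, and writes a Delta table. The storage account, container, column names, and target schema ("curated") are hypothetical assumptions, not project specifics.

```python
# Minimal PySpark sketch of an ELT step as it might run in a Databricks notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically inside Databricks

# Read raw landing-zone files (illustrative ADLS path).
raw = (
    spark.read.option("header", "true")
    .csv("abfss://landing@examplestorage.dfs.core.windows.net/sales/")
)

# Light cleansing: de-duplicate, cast the timestamp, drop rows missing an amount.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").isNotNull())
)

# Publish as a Delta table; assumes a 'curated' schema already exists.
cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.sales_orders")
```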
Posted 3 days ago
0.0 - 4.0 years
0 Lacs
Surat, Gujarat
On-site
You have just completed your master's and are looking to kickstart your career as a web developer. We have excellent opportunities available for you to begin your journey with us, and we are committed to making this experience valuable for you. As a fresher in website development, your role will involve coding, innovative design, and layout of websites. At DecodeUp, you will start as a Trainee, focusing on a specific technology that will be assigned to you initially. You will be mentored by seasoned team leaders and seniors to enhance your skills in that particular technology. Upon receiving comprehensive documentation outlining the theoretical and practical concepts, you will have the chance to showcase your abilities by working on live projects. Your responsibilities will include building websites from scratch, encompassing the design and functionality of every page from the home page to inner pages.

Key responsibilities:
- Proficiency in PHP and OOP concepts
- Basic knowledge of HTML, CSS, and JavaScript
- Strong research, problem-solving, and analytical skills
- Understanding of database queries, indexing, optimization, normalization, and denormalization
- Experience in application performance and scalability tuning
- Ability to thrive in a collaborative and fast-paced office environment
- Effective communication skills to provide guidance and establish best practices
- Familiarity with front-end technologies, Bootstrap, and Ajax
- Strong understanding of UI, cross-browser compatibility, and web standards
- Collaboration with senior developers to learn processes
- Attention to detail

Professional benefits:
- Respect and value for every team member
- Friendly competition and support for professional growth
- Exposure to the latest technologies and continuous skill development
- Emphasis on work-life balance and mental well-being
- Opportunities to work with global clients

Personal benefits:
- Flexible work schedule
- 5-day work week
- Personalized mentorship and networking opportunities
- Fully paid time off and leave options
- Access to free conferences, courses, workshops, and global education resources
- Learning opportunities from partners in over 70 countries worldwide

In summary, joining our team at DecodeUp as a web developer will provide you with a supportive environment to grow both professionally and personally, with access to cutting-edge technologies and a global network of learning resources.
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
Gurgaon
On-site
Requisition 202505104 – Gurugram, Haryana, India

Description

Job Responsibility:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies (see the sketch below).
- Experience with self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB (both self-hosted and cloud-managed) deployments for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's in computer science, engineering, or a related quantitative discipline.
- Experience: 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB; demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience in building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.
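To illustrate the aggregation-pipeline and index-management skills this posting lists, here is a small, hedged pymongo sketch. The database, collection, and field names (reporting, orders, customer_id) are invented for the example and are not taken from the role.

```python
# Hypothetical reporting query: top customers by completed-order value.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # a self-hosted deployment is assumed
db = client["reporting"]

# Compound index to support the match/sort pattern used in the pipeline below.
db.orders.create_index([("status", ASCENDING), ("created_at", ASCENDING)])

pipeline = [
    {"$match": {"status": "COMPLETED"}},
    {"$group": {
        "_id": "$customer_id",
        "total": {"$sum": "$amount"},
        "orders": {"$sum": 1},
    }},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]

top_customers = list(db.orders.aggregate(pipeline))
```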
Posted 6 days ago
0 years
0 Lacs
Telangana, India
On-site
Job Description:
- Proficiency in data modeling tools such as ER/Studio, ERwin, or similar.
- Deep understanding of relational database design, normalization/denormalization, and data warehousing principles.
- Experience with SQL and working knowledge of database platforms like Oracle, SQL Server, PostgreSQL, or Snowflake.
- Strong knowledge of metadata management, data lineage, and data governance practices.
- Understanding of data integration, ETL processes, and data quality frameworks.
- Ability to interpret and translate complex business requirements into scalable data models.
- Excellent communication and documentation skills to collaborate with cross-functional teams.
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As a MongoDB Data Engineer, you will be a key contributor in architecting, modelling, and developing data solutions using MongoDB to support our document and metadata workflows. You will collaborate closely with cross-functional teams to deliver scalable, performant, and secure data platforms, with exposure to Azure cloud infrastructure. You will play a central role in modelling document and transactional data, building aggregation and reporting pipelines, and ensuring best practices in database performance and reliability, including deploying, configuring, and tuning self-hosted MongoDB environments. You will work in a start-up-like environment, but with the scale and mission of a global business behind you.

The Role:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies.
- Experience with self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB (both self-hosted and cloud-managed) deployments for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's in computer science, engineering, or a related quantitative discipline.
- Experience: 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB; demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience in building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Job Title: Data Modeler
Location: Remote
Experience: 6+ years
Mode: 6-month contract + extension

Key Responsibilities:
- Design and maintain conceptual, logical, and physical data models aligned with business requirements and technical specifications.
- Build efficient star and snowflake schemas for analytical and reporting use cases.
- Apply normalization and denormalization techniques to optimize data structures for various workloads.
- Design and manage Snowflake schemas and objects, including: secure and materialized views; Streams and Tasks; Time Travel and cloning; and performance tuning strategies (see the sketch below).
- Write and optimize complex SQL queries using window functions, common table expressions (CTEs), and other advanced features.
- Automate and optimize data transformation pipelines and data model deployments using scripting or orchestration tools.
- Collaborate with data engineers, BI developers, and business stakeholders to understand data needs and translate them into scalable models.

Required Skills and Qualifications:
- 6+ years of experience in data modeling across large data environments.
- Proven expertise in conceptual, logical, and physical modeling techniques.
- Strong knowledge of Snowflake architecture and features.
- Deep understanding of SQL and query performance optimization.
- Experience with automation and scripting for data workflows.
- Strong analytical and communication skills.
- Familiarity with data governance, security, and compliance best practices is a plus.

Preferred Qualifications:
- Experience with data modeling tools (e.g., ER/Studio, Erwin, dbt, SQL DBM).
- Exposure to cloud data platforms like AWS, Azure, or GCP.
- Knowledge of CI/CD for data pipeline deployments.
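The Streams and Tasks responsibility above roughly corresponds to the pattern sketched below: a stream captures changes on a staging table, and a scheduled task merges them into a reporting table. The object names, the 15-minute schedule, and the connection details are assumptions for illustration, executed here via snowflake-connector-python.

```python
# Illustrative Snowflake Streams & Tasks pattern for incremental propagation.
import snowflake.connector

STATEMENTS = [
    # Capture row-level changes on the staging table.
    "CREATE STREAM IF NOT EXISTS stg_orders_stream ON TABLE stg_orders",
    # Scheduled task that merges captured changes into the reporting table.
    """
    CREATE TASK IF NOT EXISTS merge_orders_task
      WAREHOUSE = wh_transform
      SCHEDULE  = '15 MINUTE'
    AS
      MERGE INTO rpt_orders t
      USING stg_orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,
    # Tasks are created suspended, so resume it explicitly.
    "ALTER TASK merge_orders_task RESUME",
]

conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
try:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```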
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Gurugram, Haryana
On-site
202505104 Gurugram, Haryana, India Bevorzugt Description Job Responsibility: Design, develop, and optimize MongoDB data models for various business and analytics use cases. Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies. Experience with self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring. Build and maintain reporting and analytics pipelines using MongoDB Reporting suite. Develop, monitor, and tune MongoDB (both self-hosted and cloud-managed) deployments for scalability, reliability, and security. Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions. Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage). Maintain documentation and contribute to database standards and best practices. (Nice to have) Support data ingestion and automation tasks using Python. Qualifications Qualifications: Bachelor’s or master’s in computer science, Engineering, or related quantitative discipline. Experience: 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB. Demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting. Technical Competencies: Deep hands-on experience with MongoDB data modelling , schema design, and normalization/denormalization strategies. Strong proficiency in MongoDB development : aggregation pipelines, CRUD, performance tuning, and index management. Experience in building reporting and analytics using MongoDB Reporting suite. Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration). Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments). (Nice to have) Experience in Python for backend integration, data processing, or scripting
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
Remote
Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary: We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (star/snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities:

1. Data Modeling:
- Design conceptual, logical, and physical data models.
- Create and maintain star and snowflake schemas for analytical reporting.
- Perform normalization and denormalization based on performance and reporting requirements.
- Work closely with business stakeholders to translate requirements into optimized data structures.
- Maintain data model documentation and the data dictionary.

2. Snowflake Expertise:
- Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
- Perform performance tuning for complex queries and storage optimization.
- Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
- Manage and secure data using secure views and materialized views.
- Optimize usage of virtual warehouses and storage costs.

3. SQL & Scripting:
- Write and maintain advanced SQL queries, including common table expressions (CTEs), window functions, and recursive queries (an example follows below).
- Build automation scripts for data loading, transformation, and validation.
- Troubleshoot and optimize SQL queries for performance and accuracy.
- Support data migration and integration projects.

Required Skills & Qualifications:
- 6+ years of experience in data modeling and data warehouse design.
- Proven experience with the Snowflake platform (minimum 2 years).
- Strong hands-on experience in dimensional modeling (star/snowflake schemas).
- Expert in SQL and scripting for automation and performance optimization.
- Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
- Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Familiarity with Data Governance and Data Quality frameworks.
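As an example of the "advanced SQL" combination of CTEs and window functions this posting mentions, the hedged query below computes a rolling 7-day amount per account. The fact_transaction table and its columns are invented for illustration; the string could be executed through any Snowflake client.

```python
# Rolling 7-day totals per account: a CTE feeding a window function.
QUERY = """
WITH daily_totals AS (
    SELECT trade_date,
           account_id,
           SUM(amount) AS day_amount
    FROM   fact_transaction
    GROUP  BY trade_date, account_id
)
SELECT trade_date,
       account_id,
       day_amount,
       SUM(day_amount) OVER (
           PARTITION BY account_id
           ORDER BY trade_date
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS rolling_7d_amount
FROM daily_totals
ORDER BY account_id, trade_date
"""

# The string could be run via snowflake-connector-python, e.g.
# conn.cursor().execute(QUERY).fetch_pandas_all(), assuming an open connection `conn`.
```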
Posted 1 week ago
6.0 years
12 - 18 Lacs
Delhi, India
Remote
Skills: Data Modeling, Snowflake, Schemas, Star Schema Design, SQL, Data Integration

Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary: We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (star/snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities:

Data Modeling:
- Design conceptual, logical, and physical data models.
- Create and maintain star and snowflake schemas for analytical reporting.
- Perform normalization and denormalization based on performance and reporting requirements.
- Work closely with business stakeholders to translate requirements into optimized data structures.
- Maintain data model documentation and the data dictionary.

Snowflake Expertise:
- Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
- Perform performance tuning for complex queries and storage optimization.
- Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
- Manage and secure data using secure views and materialized views.
- Optimize usage of virtual warehouses and storage costs.

SQL & Scripting:
- Write and maintain advanced SQL queries, including common table expressions (CTEs), window functions, and recursive queries.
- Build automation scripts for data loading, transformation, and validation.
- Troubleshoot and optimize SQL queries for performance and accuracy.
- Support data migration and integration projects.

Required Skills & Qualifications:
- 6+ years of experience in data modeling and data warehouse design.
- Proven experience with the Snowflake platform (minimum 2 years).
- Strong hands-on experience in dimensional modeling (star/snowflake schemas).
- Expert in SQL and scripting for automation and performance optimization.
- Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
- Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Familiarity with Data Governance and Data Quality frameworks.
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
ETL Developer

Overall experience: 5 to 6 years
Relevant experience: 2 to 3 years
WFO: Mandatory, Monday to Friday
Location: Sector 98, Noida

Project: Migration from Oracle Cloud to MySQL (on-premises); no AWS or Azure is required. SQL-script and Python-based migration. On-premises (OS: Linux) solution - a standalone database connecting to multiple data sources.

Python (a migration sketch follows below):
- Strong knowledge of Python 3.x
- Experience with ETL libraries: pandas, SQLAlchemy, cx_Oracle, pymysql, pyodbc, or mysql-connector-python
- Exception handling, logging frameworks, scheduling via cron, Airflow, or custom scripts

Databases:
- Strong SQL skills in Oracle (PL/SQL) and MySQL
- Understanding of data types, normalization/denormalization, indexing, and relational integrity
- Comfortable reading and analysing stored procedures, triggers, and constraints

Data Transformation:
- Experience with ID mapping, data cleansing, type casting, and lookup table joins
- Comfortable with large data files, incremental updates, and historical data loads

Documentation:
- Maintain clear documentation of ETL logic, transformation rules, and exception cases
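A minimal sketch of the kind of Oracle-to-MySQL migration step described above, using pandas with SQLAlchemy, chunked loads, and basic logging. The connection URLs, drivers (python-oracledb, mysql-connector-python), table name, and chunk size are illustrative assumptions, not the project's actual configuration.

```python
# Chunked copy of one table from Oracle to on-premises MySQL.
import logging

import pandas as pd
from sqlalchemy import create_engine

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oracle_to_mysql")

# Assumes the python-oracledb and mysql-connector-python drivers are installed.
oracle_engine = create_engine(
    "oracle+oracledb://etl_user:secret@oracle-host:1521/?service_name=ORCLPDB1"
)
mysql_engine = create_engine("mysql+mysqlconnector://etl_user:secret@mysql-host:3306/staging")

def migrate_table(table: str, chunksize: int = 50_000) -> None:
    """Copy one table in chunks, logging progress for each batch."""
    query = f"SELECT * FROM {table}"
    for i, chunk in enumerate(pd.read_sql(query, oracle_engine, chunksize=chunksize)):
        chunk.to_sql(table.lower(), mysql_engine, if_exists="append", index=False)
        log.info("table=%s chunk=%d rows=%d loaded", table, i, len(chunk))

if __name__ == "__main__":
    migrate_table("CUSTOMER_ACCOUNTS")  # hypothetical table name
```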
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us: VE3 is at the forefront of delivering cloud-native data solutions to premier clients across finance, retail and healthcare. As a rapidly growing UK-based consultancy, we pride ourselves on fostering a collaborative, inclusive environment where every voice is heard and every idea can become tomorrow's breakthrough.

Role: Database Designer / Senior Data Engineer

What You'll Do:

Architect & Design:
- Lead the design of modern, scalable data platforms on AWS and/or Azure, using best practices for security, cost-optimisation and performance.
- Develop detailed data models (conceptual, logical, physical) and document data dictionaries and lineage.

Build & Optimize:
- Implement robust ETL/ELT pipelines using Python, SQL, Scala (as appropriate), leveraging services such as AWS Glue, Azure Data Factory, and open-source frameworks (Spark, Airflow). A minimal orchestration sketch follows below.
- Tune data stores (RDS, SQL Data Warehouse, NoSQL like Redis) for throughput, concurrency and cost.
- Establish real-time data streaming solutions via AWS Kinesis, Azure Event Hubs or Kafka.

Collaborate & Deliver:
- Work closely with data analysts, BI teams and stakeholders to translate business requirements into data solutions and dashboards.
- Partner with DevOps/Cloud Ops to automate CI/CD for data code and infrastructure (Terraform, CloudFormation).

Governance & Quality:
- Define and enforce data governance, security and compliance standards (GDPR, ISO27001).
- Implement monitoring, alerting and data quality frameworks (Great Expectations, AWS CloudWatch).

Mentor & Innovate:
- Act as a technical mentor for junior engineers; run brown-bag sessions on new cloud services or data-engineering patterns.
- Proactively research emerging big-data and streaming technologies to keep our toolset cutting-edge.

Who You Are:
- Academic Background: Bachelor's (or higher) in Computer Science, Engineering, IT or similar.
- Experience: ≥3 years in a hands-on Database Designer / Data Engineer role, ideally within a cloud environment.
- Technical Skills:
  - Languages: Expert in SQL; strong Python or Scala proficiency.
  - Cloud Services: At least one of AWS (Glue, S3, Kinesis, RDS) or Azure (Data Factory, Data Lake Storage, SQL Database).
  - Data Modelling: Solid understanding of OLTP vs OLAP, star/snowflake schemas, and normalization and denormalization trade-offs.
  - Pipeline Tools: Familiarity with Apache Spark, Kafka, Airflow or equivalent.
- Soft Skills:
  - Excellent communicator, able to present complex technical designs in clear, non-technical terms.
  - Strong analytical mindset; thrives on solving performance bottlenecks and scaling challenges.
  - Team player with a collaborative attitude in agile/scrum settings.

Nice to Have:
- Certifications: AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate/Expert.
- Exposure to data-science workflows (Jupyter, ML pipelines).
- Experience with containerized workloads (Docker, Kubernetes) for data processing.
- Familiarity with DataOps practices and tools (dbt, Great Expectations, Terraform).

Our Commitment to Diversity: We're an equal-opportunity employer committed to inclusive hiring. All qualified applicants, regardless of ethnicity, gender identity, sexual orientation, neurodiversity, disability status or veteran status, are encouraged to apply.
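For the orchestration portion of the pipeline work above, here is a hedged Airflow sketch: three placeholder tasks chained into a daily DAG. The DAG id, schedule, and task bodies are assumptions for illustration only; it uses the Airflow 2.x PythonOperator style (the `schedule` argument name is the 2.4+ spelling, older versions use `schedule_interval`).

```python
# Skeleton of a daily extract -> transform -> validate DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from the landing zone")

def transform(**_):
    print("apply cleansing and build curated tables")

def validate(**_):
    print("run data-quality checks before publishing")

with DAG(
    dag_id="curated_sales_daily",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    t_extract >> t_transform >> t_validate
```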
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Engineer
Experience: 4-9 Years
Location: Noida, Chennai & Pune
Skills: Python, PySpark, Snowflake & Redshift

Key Responsibilities:
• Migration & Modernization
• Lead the migration of data pipelines, models, and workloads from Redshift to Snowflake/Yellowbrick.
• Design and implement landing, staging, and curated data zones to support scalable ingestion and consumption patterns.
• Evaluate and recommend tools and frameworks for migration, including file formats, ingestion tools, and orchestration.
• Design and build robust ETL/ELT pipelines using Python, PySpark, SQL, and orchestration tools (e.g., Airflow, dbt).
• Support both batch and streaming pipelines, with real-time processing via Kafka, Snowpipe, or Spark Structured Streaming.
• Build modular, reusable, and testable pipeline components that handle high volume and ensure data integrity.
• Define and implement data modeling strategies (star, snowflake, normalization/denormalization) for analytics and BI layers.
• Implement strategies for data versioning, late-arriving data, and slowly changing dimensions.
• Implement automated data validation and anomaly detection (using tools like dbt tests, Great Expectations, or custom checks; a custom-check sketch follows below).
• Build logging and alerting into pipelines to monitor SLA adherence, data freshness, and pipeline health.
• Contribute to data governance initiatives including metadata tracking, data lineage, and access control.

Required Skills & Experience:
• 10+ years in data engineering roles with increasing responsibility.
• Proven experience leading data migration or re-platforming projects.
• Strong command of Python, SQL, and PySpark for data pipeline development.
• Hands-on experience with modern data platforms like Snowflake, Redshift, Yellowbrick, or BigQuery.
• Proficient in building streaming pipelines with tools like Kafka, Flink, or Snowpipe.
• Deep understanding of data modeling, partitioning, indexing, and query optimization.
• Expertise with ETL orchestration tools (e.g., Apache Airflow, Prefect, Dagster, or dbt).
• Comfortable working with large datasets and solving performance bottlenecks.
• Experience in designing data validation frameworks and implementing DQ rules.
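The "custom checks" option for automated data validation mentioned above might look like the PySpark sketch below: a handful of assertions over a curated table that raise (and could alert) on failure. The table name, columns, and thresholds are hypothetical.

```python
# Simple custom data-quality checks over a curated table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.sales_orders")  # hypothetical table

checks = {
    "row_count_nonzero": df.count() > 0,
    "no_null_order_id": df.filter(F.col("order_id").isNull()).count() == 0,
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would also page/alert so SLA and freshness dashboards notice.
    raise ValueError(f"Data quality checks failed: {failed}")
```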
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be working as a Data Schema Designer in Chennai, focusing on designing clean, extensible, and high-performance schemas for GCP data platforms. The role is crucial in standardizing data design, enabling scalability, and ensuring cross-system consistency. Your responsibilities will include creating and maintaining unified data schema standards across BigQuery, CloudSQL, and AlloyDB; collaborating with engineering and analytics teams to identify modeling best practices; ensuring schema alignment with ingestion pipelines, transformations, and business rules; developing entity relationship diagrams and schema documentation templates; and assisting in the automation of schema deployments and version control.

To excel in this role, you must possess expert knowledge of schema design principles for GCP platforms, proficiency with schema documentation tools such as DBSchema and dbt docs, a deep understanding of data normalization, denormalization, and indexing strategies, and hands-on experience with OLTP and OLAP schemas. Preferred skills include exposure to CI/CD workflows and Git-based schema management, and experience in metadata governance and data cataloging. Soft skills such as precision and clarity in technical documentation and a collaboration mindset with attention to performance and quality are also valued.

By joining this role, you will be the backbone of reliable and scalable data systems, influence architectural decisions through thoughtful schema design, and work with modern cloud data stacks and enterprise data teams.

Skills required for this position include GCP, denormalization, metadata governance, OLAP schemas, Git-based schema management, CI/CD workflows, data cataloging, schema documentation tools (e.g., DBSchema, dbt docs), indexing strategies, OLTP schemas, collaboration, analytics, technical documentation, schema design principles for GCP platforms, and data normalization.
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. You should also have a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools such as ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, as well as a track record of successfully selling data and analytics software to enterprise customers, are key requirements.

Your technical expertise should cover big data and streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Modeller specializing in GCP and cloud databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential in collaborating with engineering and analytics teams to ensure efficient operational systems and real-time reporting pipelines.

You will be responsible for designing conceptual, logical, and physical data models tailored for OLTP and OLAP systems. Your focus will be on developing and refining models that support performance-optimized cloud data pipelines, implementing models in BigQuery, CloudSQL, and AlloyDB, and designing schemas with indexing, partitioning, and data sharding strategies (a BigQuery sketch follows below). Translating business requirements into scalable data architecture and schemas will be a key aspect of your role, along with optimizing for near real-time ingestion, transformation, and query performance. You will use tools like DBSchema for collaborative modeling and documentation, while creating and maintaining metadata and documentation around models.

Required skills include hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning. Familiarity with modeling tools such as DBSchema or ERwin, as well as proficiency in SQL, schema definition, and normalization/denormalization techniques, is also expected. Preferred skills include functional knowledge of the mutual fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context.

In addition to technical skills, soft skills such as strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued. Joining this role will provide you with the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.
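To make the BigQuery partitioning and clustering decisions above concrete, here is a hedged sketch using the google-cloud-bigquery client. The dataset, table, and column names (analytics.fund_transactions, folio_id) are invented to fit the BFSI flavour of the role and are not a specification.

```python
# Partitioned + clustered BigQuery table created via a DDL query.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials and an existing 'analytics' dataset

DDL = """
CREATE TABLE IF NOT EXISTS analytics.fund_transactions (
    txn_id    STRING NOT NULL,
    folio_id  STRING,
    txn_date  DATE,
    amount    NUMERIC
)
PARTITION BY txn_date          -- prunes scans for date-bounded reporting queries
CLUSTER BY folio_id, txn_id    -- co-locates rows for the most common filters
"""

client.query(DDL).result()  # blocks until the DDL job completes
```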
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Schema Designer – GCP Platforms
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview:
We are hiring a Data Schema Designer who will focus on designing clean, extensible, and high-performance schemas for GCP data platforms. This role is crucial in standardizing data design, enabling scalability, and ensuring cross-system consistency.

Key Responsibilities:
Create and maintain unified data schema standards across BigQuery, CloudSQL, and AlloyDB
Collaborate with engineering and analytics teams to identify modeling best practices
Ensure schema alignment with ingestion pipelines, transformations, and business rules
Develop entity relationship diagrams and schema documentation templates
Assist in automation of schema deployments and version control

Must-Have Skills:
Expert knowledge of schema design principles for GCP platforms
Proficiency with schema documentation tools (e.g., DBSchema, dbt docs)
Deep understanding of data normalization, denormalization, and indexing strategies
Hands-on experience with OLTP and OLAP schemas

Preferred Skills:
Exposure to CI/CD workflows and Git-based schema management
Experience in metadata governance and data cataloging

Soft Skills:
Precision and clarity in technical documentation
Collaboration mindset with attention to performance and quality

Why Join:
Be the backbone of reliable and scalable data systems
Influence architectural decisions through thoughtful schema design
Work with modern cloud data stacks and enterprise data teams

Skills: GCP, schema design principles for GCP platforms, schema documentation tools (e.g., DBSchema, dbt docs), data normalization, denormalization, indexing strategies, OLTP schemas, OLAP schemas, Git-based schema management, CI/CD workflows, metadata governance, data cataloging, technical documentation, collaboration, analytics
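One possible way to support the schema-documentation responsibilities above, shown as a minimal sketch that renders a simple data dictionary from BigQuery's INFORMATION_SCHEMA; the dataset name is hypothetical:

```python
# Minimal sketch: generate a plain data-dictionary table from BigQuery metadata,
# e.g. as a starting point for version-controlled schema documentation.
# The "reporting" dataset is a hypothetical placeholder.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT table_name, column_name, data_type, is_nullable
FROM reporting.INFORMATION_SCHEMA.COLUMNS
ORDER BY table_name, ordinal_position
"""
rows = client.query(query).result()

print("| table | column | type | nullable |")
print("|-------|--------|------|----------|")
for row in rows:
    print(f"| {row.table_name} | {row.column_name} | {row.data_type} | {row.is_nullable} |")
```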
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Modeler with Expertise in DBSchema & GCP
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview:
We are hiring a Data Modeler with proven hands-on experience using DBSchema in GCP environments. This role will focus on designing highly maintainable and performance-tuned data models for OLTP and OLAP systems using modern modeling tools and practices.

Key Responsibilities:
Develop conceptual, logical, and physical models with DBSchema for cloud environments
Align schema design with application requirements and analytics consumption
Ensure proper indexing, normalization/denormalization, and partitioning for performance
Support schema documentation, reverse engineering, and visualization in DBSchema
Review and optimize models in BigQuery, CloudSQL, and AlloyDB

Must-Have Skills:
Expertise in the DBSchema modeling tool and collaborative schema documentation
Strong experience with GCP databases: BigQuery, CloudSQL, AlloyDB
Knowledge of OLTP and OLAP system structures and performance tuning
Proficiency in SQL and schema evolution/versioning best practices

Preferred Skills:
Experience integrating DBSchema with CI/CD pipelines
Knowledge of real-time ingestion pipelines and federated schema design

Soft Skills:
Detail-oriented, organized, and communicative
Comfortable presenting schema designs to cross-functional teams

Why Join:
Leverage industry-leading tools in modern GCP environments
Improve modeling workflows and documentation quality
Contribute to enterprise data architecture with visibility and impact

Skills: GCP, DBSchema, BigQuery, CloudSQL, AlloyDB, SQL, OLTP, OLAP, data modeling, pipelines, schema design
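A minimal sketch of one way to keep schema changes versioned and repeatable alongside a modeling tool, assuming numbered migration files applied with the BigQuery client; the file layout and names are hypothetical:

```python
# Minimal sketch: apply version-numbered DDL files (e.g. 001_init.sql, 002_add_column.sql)
# to BigQuery in order, so schema evolution stays under Git alongside the data models.
# Directory layout and file names are hypothetical.
from pathlib import Path
from google.cloud import bigquery

client = bigquery.Client()

for migration in sorted(Path("migrations").glob("*.sql")):
    ddl = migration.read_text()
    print(f"Applying {migration.name} ...")
    client.query(ddl).result()  # run each DDL script and wait for completion
```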
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Modeller – GCP & Cloud Databases
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview:
We are looking for a hands-on Data Modeller with strong expertise in cloud-based databases, data architecture, and modeling for OLTP and OLAP systems. You will work closely with engineering and analytics teams to design and optimize conceptual, logical, and physical data models, supporting both operational systems and near real-time reporting pipelines.

Key Responsibilities:
Design conceptual, logical, and physical data models for OLTP and OLAP systems
Develop and refine models that support performance-optimized cloud data pipelines
Collaborate with data engineers to implement models in BigQuery, CloudSQL, and AlloyDB
Design schemas and apply indexing, partitioning, and data sharding strategies
Translate business requirements into scalable data architecture and schemas
Optimize for near real-time ingestion, transformation, and query performance
Use tools such as DBSchema or similar for collaborative modeling and documentation
Create and maintain metadata and documentation around models

Must-Have Skills:
Hands-on experience with GCP databases: BigQuery, CloudSQL, AlloyDB
Strong understanding of OLTP vs OLAP systems and their respective design principles
Experience in database performance tuning: indexing, sharding, and partitioning
Skilled in modeling tools such as DBSchema, ERwin, or similar
Understanding of the variables that impact performance in real-time/near real-time systems
Proficiency in SQL, schema definition, and normalization/denormalization techniques

Preferred Skills:
Functional knowledge of the Mutual Fund or BFSI domain
Experience integrating with cloud-native ETL and data orchestration pipelines
Familiarity with schema version control and CI/CD in a data context

Soft Skills:
Strong analytical and communication skills
Detail-oriented and documentation-focused
Ability to collaborate across engineering, product, and analytics teams

Why Join:
Work on enterprise-scale cloud data architectures
Drive performance-first data modeling for advanced analytics
Collaborate with high-performing cloud-native data teams

Skills: GCP databases, BigQuery, CloudSQL, AlloyDB, OLTP systems, OLAP systems, database performance tuning, indexing, sharding, partitioning, normalization, denormalization, schema definition, SQL, modeling tools, DBSchema, ERwin
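To complement the BigQuery example earlier, a minimal sketch of a normalized OLTP schema on a CloudSQL or AlloyDB (PostgreSQL-compatible) instance, created through psycopg2; the connection details, table, and column names are hypothetical placeholders:

```python
# Minimal sketch: normalized OLTP tables with a foreign key and a supporting index
# for a common lookup path. Connection details and names are hypothetical.
import psycopg2

conn = psycopg2.connect(host="10.0.0.5", dbname="trades", user="modeler", password="...")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS accounts (
        account_id   BIGINT PRIMARY KEY,
        account_name TEXT NOT NULL
    )
""")

cur.execute("""
    CREATE TABLE IF NOT EXISTS transactions (
        txn_id     BIGINT PRIMARY KEY,
        account_id BIGINT NOT NULL REFERENCES accounts (account_id),
        txn_ts     TIMESTAMPTZ NOT NULL,
        amount     NUMERIC(18, 2) NOT NULL
    )
""")

# Composite index for the frequent "recent transactions for an account" access path
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_txn_account_ts
    ON transactions (account_id, txn_ts)
""")

conn.commit()
cur.close()
conn.close()
```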
Posted 2 weeks ago
10.0 years
0 Lacs
India
Remote
Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Job Type: Full-Time
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role:
We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran
Translate complex business and data requirements into scalable, efficient data engineering solutions
Architect multi-cluster Snowflake setups with an eye on performance and cost
Design and implement robust CI/CD pipelines for data workflows (Git-based)
Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals
Mentor and review the work of onshore/offshore data engineers
Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices
Handle real-time data processing scenarios where applicable
Own end-to-end delivery and documentation for data engineering projects

Must-Have Skills:
Fivetran: proven experience integrating and managing Fivetran connectors and sync strategies
Snowflake expertise: warehouse management, cost optimization, and query tuning; internal vs. external stages and loading/unloading strategies; schema design, security model, and user access
Python (advanced): modular, production-ready code for ETL/ELT, APIs, and orchestration
DBT: strong command of DBT for transformation workflows and modular pipelines
Azure: Azure Data Factory (ADF) and Databricks; integration with Snowflake and other services
SQL: expert-level SQL for transformations, validations, and optimizations
Version control: Git, branching, pull requests, and peer code reviews
CI/CD: DevOps/DataOps workflows for data pipelines
Data modeling: star schema, Data Vault, and normalization/denormalization techniques
Strong documentation skills using Confluence, Word, Excel, etc.
Excellent communication skills, verbal and written

Good to Have:
Experience with real-time data streaming tools (Event Hub, Kafka)
Exposure to monitoring/data observability tools
Experience with cost management strategies for cloud data platforms
Exposure to Agile/Scrum-based environments
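A minimal sketch of the Snowflake warehouse-management and stage-loading skills listed above, using the snowflake-connector-python client; the account, warehouse, database, stage, and table names are hypothetical:

```python
# Minimal sketch: cost-aware warehouse setup plus a stage-based load in Snowflake.
# Account locator, credentials, and all object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",
    user="data_engineer",
    password="...",
    role="SYSADMIN",
)
cur = conn.cursor()

# Auto-suspend/auto-resume keeps compute cost proportional to actual usage
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS ELT_WH
      WAREHOUSE_SIZE = 'XSMALL'
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
""")

cur.execute("USE WAREHOUSE ELT_WH")
cur.execute("USE SCHEMA ANALYTICS.RAW")

# Load files from a named stage (e.g. landed by an ingestion tool) into a raw table
cur.execute("""
    COPY INTO RAW_ORDERS
    FROM @orders_stage
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")

cur.close()
conn.close()
```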
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

DBA Developer
Perform DBA tasks such as SQL Server installation, backups, and configuring HADR, clustering, and log shipping
Performance tuning
Audit and compliance
Database design: design database solutions using tables, stored procedures, functions, views, and indexes
Data transfer from the Dev environment to Production and other related environments
Schema comparison
Bulk operations
Server-side coding
Understanding of normalization, denormalization, primary keys, foreign keys and constraints, transactions, ACID properties, indexes as an optimization tool, and views
Working with the Database Manager to create physical tables from logical models
ETL and data migration (using CSV, Excel, and TXT files), and ad hoc reporting
Migration of databases from older versions of SQL Server to newer versions
Distributed databases, remote servers, and configuring linked servers
Integrating SQL Server with Oracle using OPENQUERY

⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
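For illustration, a minimal sketch of two routine DBA tasks from the list above (a full backup and a supporting index), issued through pyodbc; the server, database, file path, and object names are hypothetical:

```python
# Minimal sketch: full database backup and a nonclustered index, run via pyodbc.
# Server, database, backup path, and table names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlprod01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP cannot run inside a user transaction
)
cur = conn.cursor()

# Full database backup to disk
cur.execute(r"BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak' WITH COMPRESSION")
while cur.nextset():  # drain the informational result sets emitted by BACKUP
    pass

# Covering index for a frequent lookup on the orders table
cur.execute("""
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON SalesDB.dbo.Orders (CustomerId, OrderDate)
""")

cur.close()
conn.close()
```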
Posted 2 weeks ago
4.0 years
18 - 22 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 18,00,000 – Rs 22,00,000 (i.e., INR 18–22 LPA)
Min Experience: 4 years
Location: Bangalore (Bengaluru)
Job Type: Full-time

We are seeking a skilled and detail-oriented Data Modeller with 4–6 years of experience to join our growing data engineering team. In this role, you will play a critical part in designing, implementing, and optimizing robust data models that support business intelligence, analytics, and operational data needs. You will collaborate with cross-functional teams to understand business requirements and convert them into scalable and efficient data solutions, primarily leveraging Amazon Redshift and Erwin Data Modeller.

Key Responsibilities:
Design and implement conceptual, logical, and physical data models that support business processes and reporting needs
Develop data models optimized for Amazon Redshift, ensuring performance, scalability, and data integrity
Work closely with business analysts, data engineers, and stakeholders to translate business requirements into data structures
Use Erwin Data Modeller to create and maintain data models and metadata repositories
Collaborate with ETL developers to ensure efficient data ingestion and transformation pipelines that align with the data model
Apply normalization, denormalization, and indexing strategies to optimize data performance and access
Perform data profiling and source-system analysis to validate assumptions and model accuracy
Create and maintain detailed documentation, including data dictionaries, entity relationship diagrams (ERDs), and data lineage information
Drive consistency and standardization across all data models, ensuring alignment with enterprise data architecture and governance policies
Identify opportunities to improve data quality, model efficiency, and pipeline performance

Required Skills and Qualifications:
4–6 years of hands-on experience in data modeling, including conceptual, logical, and physical modeling
Strong expertise in Amazon Redshift and Redshift-specific modeling best practices
Proficiency with Erwin Data Modeller or similar modeling tools
Strong knowledge of SQL, with experience writing complex queries and performance tuning
Solid understanding of ETL processes and experience working alongside ETL engineers to integrate data from multiple sources
Familiarity with dimensional modeling, data warehousing principles, and star/snowflake schemas
Experience with metadata management, data governance, and maintaining modeling standards
Ability to work independently and collaboratively in a fast-paced, data-driven environment
Strong analytical and communication skills, with the ability to present technical concepts to non-technical stakeholders

Preferred Qualifications:
Experience working in a cloud-native data environment (AWS preferred)
Exposure to other data modeling tools and cloud data warehouses is a plus
Familiarity with data catalog tools, data lineage tracing, and data quality frameworks
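As an illustration of Redshift-specific modeling choices such as distribution and sort keys, a minimal sketch using the redshift_connector client; the cluster endpoint, credentials, and table names are hypothetical:

```python
# Minimal sketch: a Redshift fact table designed around distribution and sort keys.
# Cluster endpoint, credentials, and object names are hypothetical placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="analytics-cluster.example.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="modeler",
    password="...",
)
cur = conn.cursor()

# DISTKEY co-locates fact rows with the customer dimension for join locality;
# SORTKEY speeds up the date-range scans typical of BI queries.
cur.execute("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_id      BIGINT,
        customer_key BIGINT,
        date_key     INTEGER,
        amount       DECIMAL(18, 2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_key)
    SORTKEY (date_key)
""")

conn.commit()
cur.close()
conn.close()
```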
Posted 3 weeks ago