6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be working as a Data Schema Designer focusing on designing clean, extensible, and high-performance schemas for GCP data platforms in Chennai. The role is crucial in standardizing data design, enabling scalability, and ensuring cross-system consistency.

Your responsibilities will include creating and maintaining unified data schema standards across BigQuery, CloudSQL, and AlloyDB; collaborating with engineering and analytics teams to identify modeling best practices; ensuring schema alignment with ingestion pipelines, transformations, and business rules; developing entity relationship diagrams and schema documentation templates; and assisting in the automation of schema deployments and version control.

To excel in this role, you must possess expert knowledge of schema design principles for GCP platforms, proficiency with schema documentation tools such as DBSchema and dbt docs, a deep understanding of data normalization, denormalization, and indexing strategies, as well as hands-on experience with OLTP and OLAP schemas. Preferred skills include exposure to CI/CD workflows and Git-based schema management, and experience in metadata governance and data cataloging. Soft skills such as precision and clarity in technical documentation and a collaborative mindset with attention to performance and quality are also valued.

By joining this role, you will be the backbone of reliable and scalable data systems, influence architectural decisions through thoughtful schema design, and work with modern cloud data stacks and enterprise data teams.

Skills required for this position include GCP, denormalization, metadata governance, data, OLAP schemas, Git-based schema management, CI/CD workflows, data cataloging, schema documentation tools (e.g., DBSchema, dbt docs), indexing strategies, OLTP schemas, collaboration, analytics, technical documentation, schema design principles for GCP platforms, and data normalization.
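As an illustrative aside on the schema-design skills this posting lists (partitioning, clustering, and deliberate denormalization on BigQuery), the sketch below shows roughly what such a choice looks like in DDL. It is a minimal, hypothetical example: the project, dataset, table, and column names are invented, and it assumes the google-cloud-bigquery Python client with default credentials.

```python
# Hypothetical sketch: project, dataset, table, and column names are invented.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumes default credentials

# A denormalized, OLAP-style fact table: partitioned by day, clustered on the
# most common filter key so queries scan less data.
ddl = """
CREATE TABLE IF NOT EXISTS `example-project.analytics.fact_orders` (
  order_id      STRING NOT NULL,
  customer_id   STRING,
  customer_name STRING,      -- denormalized from the customer dimension
  order_ts      TIMESTAMP,
  amount        NUMERIC
)
PARTITION BY DATE(order_ts)  -- prunes partitions for date-bounded queries
CLUSTER BY customer_id       -- co-locates rows that are filtered/joined together
"""
client.query(ddl).result()   # run the DDL and wait for completion
```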
Posted 2 months ago
12.0 - 16.0 years
0 Lacs
karnataka
On-site
As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. Additionally, you should possess a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, as well as a track record of successfully selling data and analytics software to enterprise customers are key requirements.

Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
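Since this role centres on dimensional modeling in Databricks, here is a minimal, hypothetical star-schema sketch. Catalog, schema, table, and column names are invented, and a Unity Catalog namespace such as main.sales is assumed.

```python
# Hypothetical star-schema sketch for a Databricks/Unity Catalog environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Dimension table: one row per customer, keyed by a surrogate key.
spark.sql("""
CREATE TABLE IF NOT EXISTS main.sales.dim_customer (
  customer_key BIGINT,
  customer_id  STRING,
  region       STRING
) USING DELTA
""")

# Fact table: references the dimension through the surrogate key only.
spark.sql("""
CREATE TABLE IF NOT EXISTS main.sales.fact_orders (
  order_id     STRING,
  customer_key BIGINT,
  order_date   DATE,
  amount       DECIMAL(18,2)
) USING DELTA
PARTITIONED BY (order_date)
""")
```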
Posted 2 months ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Modeller specializing in GCP and Cloud Databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential in collaborating with engineering and analytics teams to ensure efficient operational systems and real-time reporting pipelines.

You will be responsible for designing conceptual, logical, and physical data models tailored for OLTP and OLAP systems. Your focus will be on developing and refining models that support performance-optimized cloud data pipelines, implementing models in BigQuery, CloudSQL, and AlloyDB, as well as designing schemas with indexing, partitioning, and data sharding strategies. Translating business requirements into scalable data architecture and schemas will be a key aspect of your role, along with optimizing for near real-time ingestion, transformation, and query performance. You will utilize tools like DBSchema for collaborative modeling and documentation while creating and maintaining metadata and documentation around models.

In terms of required skills, hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning are essential. Additionally, familiarity with modeling tools such as DBSchema or ERWin, as well as proficiency in SQL, schema definition, and normalization/denormalization techniques, will be beneficial. Preferred skills include functional knowledge of the Mutual Fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context.

In addition to technical skills, soft skills such as strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued. Joining this role will provide you with the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.
Posted 2 months ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Schema Designer – GCP Platforms
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview
We are hiring a Data Schema Designer who will focus on designing clean, extensible, and high-performance schemas for GCP data platforms. This role is crucial in standardizing data design, enabling scalability, and ensuring cross-system consistency.

Key Responsibilities
- Create and maintain unified data schema standards across BigQuery, CloudSQL, and AlloyDB
- Collaborate with engineering and analytics teams to identify modeling best practices
- Ensure schema alignment with ingestion pipelines, transformations, and business rules
- Develop entity relationship diagrams and schema documentation templates
- Assist in automation of schema deployments and version control

Must-Have Skills
- Expert knowledge in schema design principles for GCP platforms
- Proficiency with schema documentation tools (e.g., DBSchema, dbt docs)
- Deep understanding of data normalization, denormalization, and indexing strategies
- Hands-on experience with OLTP and OLAP schemas

Preferred Skills
- Exposure to CI/CD workflows and Git-based schema management
- Experience in metadata governance and data cataloging

Soft Skills
- Precision and clarity in technical documentation
- Collaboration mindset with attention to performance and quality

Why Join
- Be the backbone of reliable and scalable data systems
- Influence architectural decisions through thoughtful schema design
- Work with modern cloud data stacks and enterprise data teams

Skills: gcp, denormalization, metadata governance, data, olap schemas, git-based schema management, ci/cd workflows, data cataloging, schema documentation tools (e.g., dbschema, dbt docs), indexing strategies, oltp schemas, collaboration, analytics, technical documentation, schema design principles for gcp platforms, data normalization, schema
Posted 2 months ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Modeler with Expertise in DBSchema & GCP
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview
We are hiring a Data Modeler with proven hands-on experience using DBSchema in GCP environments. This role will focus on designing highly maintainable and performance-tuned data models for OLTP and OLAP systems using modern modeling tools and practices.

Key Responsibilities
- Develop conceptual, logical, and physical models with DBSchema for cloud environments
- Align schema design with application requirements and analytics consumption
- Ensure proper indexing, normalization/denormalization, and partitioning for performance
- Support schema documentation, reverse engineering, and visualization in DBSchema
- Review and optimize models in BigQuery, CloudSQL, and AlloyDB

Must-Have Skills
- Expertise in DBSchema modeling tool and collaborative schema documentation
- Strong experience with GCP databases: BigQuery, CloudSQL, AlloyDB
- Knowledge of OLTP and OLAP system structures and performance tuning
- Proficiency in SQL and schema evolution/versioning best practices

Preferred Skills
- Experience integrating DBSchema with CI/CD pipelines
- Knowledge of real-time ingestion pipelines and federated schema design

Soft Skills
- Detail-oriented, organized, and communicative
- Comfortable presenting schema design to cross-functional teams

Why Join
- Leverage industry-leading tools in modern GCP environments
- Improve modeling workflows and documentation quality
- Contribute to enterprise data architecture with visibility and impact

Skills: gcp, dbschema, olap, modeling, data, cloudsql, pipelines, alloydb, sql, oltp, bigquery, schema
Posted 2 months ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Modeller – GCP & Cloud Databases
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview
We are looking for a hands-on Data Modeller with strong expertise in cloud-based databases, data architecture, and modeling for OLTP and OLAP systems. You will work closely with engineering and analytics teams to design and optimize conceptual, logical, and physical data models, supporting both operational systems and near real-time reporting pipelines.

Key Responsibilities
- Design conceptual, logical, and physical data models for OLTP and OLAP systems
- Develop and refine models that support performance-optimized cloud data pipelines
- Collaborate with data engineers to implement models in BigQuery, CloudSQL, and AlloyDB
- Design schemas and apply indexing, partitioning, and data sharding strategies
- Translate business requirements into scalable data architecture and schemas
- Optimize for near real-time ingestion, transformation, and query performance
- Use tools such as DBSchema or similar for collaborative modeling and documentation
- Create and maintain metadata and documentation around models

Must-Have Skills
- Hands-on experience with GCP databases: BigQuery, CloudSQL, AlloyDB
- Strong understanding of OLTP vs OLAP systems and respective design principles
- Experience in database performance tuning: indexing, sharding, and partitioning
- Skilled in modeling tools such as DBSchema, ERWin, or similar
- Understanding of variables that impact performance in real-time/near real-time systems
- Proficient in SQL, schema definition, and normalization/denormalization techniques

Preferred Skills
- Functional knowledge of the Mutual Fund or BFSI domain
- Experience integrating with cloud-native ETL and data orchestration pipelines
- Familiarity with schema version control and CI/CD in a data context

Soft Skills
- Strong analytical and communication skills
- Detail-oriented and documentation-focused
- Ability to collaborate across engineering, product, and analytics teams

Why Join
- Work on enterprise-scale cloud data architectures
- Drive performance-first data modeling for advanced analytics
- Collaborate with high-performing cloud-native data teams

Skills: olap, normalization, indexing, gcp databases, sharding, olap systems, modeling, schema definition, sql, data, oltp systems, alloydb, erwin, modeling tools, bigquery, database performance tuning, databases, partitioning, denormalization, dbschema, cloudsql
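CloudSQL and AlloyDB are PostgreSQL-compatible, so the indexing and partitioning strategies named in this posting can be sketched with plain PostgreSQL DDL. The example below is hypothetical (connection details, table, and column names are invented) and uses the psycopg2 driver.

```python
# Hypothetical sketch: connection details, table, and column names are invented.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="app", password="secret")
cur = conn.cursor()

# Range-partitioned OLTP table; the partition key must be part of the primary key.
cur.execute("""
CREATE TABLE IF NOT EXISTS orders (
  order_id    BIGINT      NOT NULL,
  customer_id BIGINT      NOT NULL,
  order_ts    TIMESTAMPTZ NOT NULL,
  amount      NUMERIC(12,2),
  PRIMARY KEY (order_id, order_ts)
) PARTITION BY RANGE (order_ts)
""")
cur.execute("""
CREATE TABLE IF NOT EXISTS orders_2024 PARTITION OF orders
  FOR VALUES FROM ('2024-01-01') TO ('2025-01-01')
""")

# Secondary index on the most common lookup key.
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)")

conn.commit()
cur.close()
conn.close()
```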
Posted 2 months ago
10.0 years
0 Lacs
India
Remote
Job Title: Lead Data Engineer
Experience: 8–10 Years
Location: Remote
Job Type: Full-Time
Mandatory: Prior hands-on experience with Fivetran integrations

About the Role:
We are seeking a highly skilled Lead Data Engineer with 8–10 years of deep expertise in cloud-native data platforms, including Snowflake, Azure, DBT, and Fivetran. This role will drive the design, development, and optimization of scalable data pipelines, leading a cross-functional team and ensuring data engineering best practices are implemented and maintained.

Key Responsibilities:
- Lead the design and development of data pipelines (batch and real-time) using Azure, Snowflake, DBT, Python, and Fivetran.
- Translate complex business and data requirements into scalable, efficient data engineering solutions.
- Architect multi-cluster Snowflake setups with an eye on performance and cost.
- Design and implement robust CI/CD pipelines for data workflows (Git-based).
- Collaborate closely with analysts, architects, and business teams to ensure data architecture aligns with organizational goals.
- Mentor and review work of onshore/offshore data engineers.
- Define and enforce coding standards, testing frameworks, monitoring strategies, and data quality best practices.
- Handle real-time data processing scenarios where applicable.
- Own end-to-end delivery and documentation for data engineering projects.

Must-Have Skills:
- Fivetran: Proven experience integrating and managing Fivetran connectors and sync strategies.
- Snowflake Expertise: Warehouse management, cost optimization, query tuning; internal vs. external stages, loading/unloading strategies; schema design, security model, and user access.
- Python (advanced): Modular, production-ready code for ETL/ELT, APIs, and orchestration.
- DBT: Strong command of DBT for transformation workflows and modular pipelines.
- Azure: Azure Data Factory (ADF), Databricks; integration with Snowflake and other services.
- SQL: Expert-level SQL for transformations, validations, and optimizations.
- Version Control: Git, branching, pull requests, and peer code reviews.
- CI/CD: DevOps/DataOps workflows for data pipelines.
- Data Modeling: Star schema, Data Vault, normalization/denormalization techniques.
- Strong documentation using Confluence, Word, Excel, etc.
- Excellent communication skills – verbal and written.

Good to Have:
- Experience with real-time data streaming tools (Event Hub, Kafka)
- Exposure to monitoring/data observability tools
- Experience with cost management strategies for cloud data platforms
- Exposure to Agile/Scrum-based environments
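As a brief illustration of the "internal vs. external stages, loading/unloading strategies" item above, the sketch below loads a CSV through an internal named stage with the Snowflake Python connector. It is hypothetical: account, credentials, and object names are placeholders, and it assumes an ORDERS table with a matching layout already exists.

```python
# Hypothetical sketch: account, credentials, and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="loader", password="***",
    warehouse="LOAD_WH", database="RAW", schema="SALES",
)
cur = conn.cursor()

# Internal named stage: files are PUT into it, then bulk-loaded with COPY INTO.
cur.execute("CREATE STAGE IF NOT EXISTS sales_stage FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
cur.execute("PUT file:///tmp/orders.csv @sales_stage AUTO_COMPRESS=TRUE")
cur.execute("""
COPY INTO orders                      -- assumes the target table already exists
FROM @sales_stage/orders.csv.gz
ON_ERROR = 'ABORT_STATEMENT'
""")

cur.close()
conn.close()
```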
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

DBA Developer
- Perform DBA tasks such as SQL Server installation, backups, configuring HADR, clustering, and log shipping
- Performance tuning
- Audit and compliance
- Knowledge of databases
- Design database solutions using tables, stored procedures, functions, views, and indexes
- Data transfer from the Dev environment to Production and other related environments
- Schema comparison, bulk operations, server-side coding
- Understanding of normalization, denormalization, primary keys, foreign keys and constraints, transactions, ACID, indexes as an optimization tool, and views
- Working with the Database Manager in creating physical tables from logical models
- ETL, data migration (using CSV, Excel, TXT files), ad hoc reporting
- Migration of databases from older versions of SQL Server to new versions
- Distributed DBs, remote servers, and configuring linked servers
- Integrating SQL Server with Oracle using OPENQUERY

⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
Posted 2 months ago
4.0 years
18 - 22 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 1800000 - Rs 2200000 (i.e., INR 18-22 LPA)
Min Experience: 4 years
Location: Bangalore, Bengaluru
Job Type: Full-time

We are seeking a skilled and detail-oriented Data Modeller with 4-6 years of experience to join our growing data engineering team. In this role, you will play a critical part in designing, implementing, and optimizing robust data models that support business intelligence, analytics, and operational data needs. You will collaborate with cross-functional teams to understand business requirements and convert them into scalable and efficient data solutions, primarily leveraging Amazon Redshift and Erwin Data Modeller.

Requirements

Key Responsibilities:
- Design and implement conceptual, logical, and physical data models that support business processes and reporting needs.
- Develop data models optimized for Amazon Redshift, ensuring performance, scalability, and integrity of data.
- Work closely with business analysts, data engineers, and stakeholders to translate business requirements into data structures.
- Use Erwin Data Modeller (Erwin ERP) to create and maintain data models and maintain metadata repositories.
- Collaborate with ETL developers to ensure efficient data ingestion and transformation pipelines that align with the data model.
- Apply normalization, denormalization, and indexing strategies to optimize data performance and access.
- Perform data profiling and source system analysis to validate assumptions and model accuracy.
- Create and maintain detailed documentation, including data dictionaries, entity relationship diagrams (ERDs), and data lineage information.
- Drive consistency and standardization across all data models, ensuring alignment with enterprise data architecture and governance policies.
- Identify opportunities to improve data quality, model efficiency, and pipeline performance.

Required Skills and Qualifications:
- 4-6 years of hands-on experience in data modeling, including conceptual, logical, and physical modeling.
- Strong expertise in Amazon Redshift and Redshift-specific modeling best practices.
- Proficiency with Erwin Data Modeller (Erwin ERP) or similar modeling tools.
- Strong knowledge of SQL with experience writing complex queries and performance tuning.
- Solid understanding of ETL processes and experience working alongside ETL engineers to integrate data from multiple sources.
- Familiarity with dimensional modeling, data warehousing principles, and star/snowflake schemas.
- Experience with metadata management, data governance, and maintaining modeling standards.
- Ability to work independently and collaboratively in a fast-paced, data-driven environment.
- Strong analytical and communication skills with the ability to present technical concepts to non-technical stakeholders.

Preferred Qualifications:
- Experience working in a cloud-native data environment (AWS preferred).
- Exposure to other data modeling tools and cloud data warehouses is a plus.
- Familiarity with data catalog tools, data lineage tracing, and data quality frameworks.
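Redshift-specific modeling largely comes down to choosing distribution and sort keys. The hypothetical sketch below (cluster endpoint, credentials, and table/column names are placeholders) shows the idea, connecting with psycopg2 since Redshift speaks the PostgreSQL wire protocol.

```python
# Hypothetical sketch: endpoint, credentials, and table/column names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="modeler", password="***",
)
cur = conn.cursor()

# Fact table: distribute on the join key, sort on the common filter column.
cur.execute("""
CREATE TABLE IF NOT EXISTS fact_sales (
  sale_id     BIGINT,
  customer_id BIGINT,
  sale_date   DATE,
  amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
COMPOUND SORTKEY (sale_date)
""")

conn.commit()
cur.close()
conn.close()
```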
Posted 2 months ago
0 years
0 Lacs
Telangana
On-site
Proficiency in data modeling tools such as ER/Studio, ERwin or similar. Deep understanding of relational database design, normalization/denormalization, and data warehousing principles. Experience with SQL and working knowledge of database platforms like Oracle, SQL Server, PostgreSQL, or Snowflake. Strong knowledge of metadata management, data lineage, and data governance practices. Understanding of data integration, ETL processes, and data quality frameworks. Ability to interpret and translate complex business requirements into scalable data models. Excellent communication and documentation skills to collaborate with cross-functional teams.
Posted 2 months ago
0 years
0 Lacs
Hyderābād
Remote
Job Summary:
We are seeking a highly analytical and experienced Data Modeler and Analyst to play a pivotal role in designing, developing, and maintaining our enterprise data models and providing insightful data analysis. The ideal candidate will bridge the gap between business needs and technical solutions, ensuring our data architecture is robust, scalable, and accurately reflects our organizational data requirements. This role requires a strong understanding of data modeling principles, excellent SQL skills, and the ability to translate complex data into actionable insights for various stakeholders.

Key Responsibilities:

Data Modeling:
- Design, develop, and maintain conceptual, logical, and physical data models for various data initiatives (e.g., data warehouses, data marts, operational data stores, transactional systems).
- Work closely with business stakeholders, subject matter experts, and technical teams to gather and understand data requirements, translating them into accurate and efficient data models.
- Implement best practices for data modeling, including normalization, denormalization, dimensional modeling (star schema, snowflake schema), and data vault methodologies.
- Create and maintain data dictionaries, metadata repositories, and data lineage documentation.
- Ensure data model integrity, consistency, and compliance with data governance standards.
- Perform data profiling and data quality assessments to understand source system data and identify modeling opportunities and challenges.

Data Analysis:
- Perform in-depth data analysis to identify trends, patterns, anomalies, and insights that can drive business decisions.
- Write complex SQL queries to extract, transform, and analyze data from various relational and non-relational databases.
- Develop and present clear, concise, and compelling reports, dashboards, and visualizations to communicate findings to technical and non-technical audiences.
- Collaborate with business units to define KPIs, metrics, and reporting requirements.
- Support ad-hoc data analysis requests and provide data-driven recommendations.

Collaboration & Communication:
- Act as a liaison between business users and technical development teams (ETL developers, BI developers, DBAs).
- Participate in data governance initiatives, ensuring data quality, security, and privacy.
- Contribute to the continuous improvement of data architecture and data management practices.
- Mentor junior data professionals and share knowledge within the team.

Tools: Informatica, Teradata, Axiom, SQL, Databricks

Job Types: Full-time, Permanent, Fresher, Freelance
Contract length: 18 months
Pay: ₹300,000.00 - ₹28,000,000.00 per year
Benefits: Paid time off, Work from home
Schedule: Day shift
Work Location: In person
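The profiling and data-quality assessment duties above can be illustrated with a small, hypothetical first-pass profile in pandas (the input file and its columns are invented).

```python
# Hypothetical sketch: the input file and its columns are invented.
import pandas as pd

df = pd.read_csv("source_extract.csv")

profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(2),  # completeness per column
    "distinct": df.nunique(),                       # cardinality per column
})
duplicate_rows = int(df.duplicated().sum())         # exact-duplicate records

print(profile)
print(f"duplicate rows: {duplicate_rows}")
```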
Posted 2 months ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Description
- Perform DBA tasks like SQL Server installation, backups, configuring HADR, clustering, AG, and log shipping
- Install, configure, and maintain PostgreSQL database systems
- Install, configure, and maintain MongoDB (NoSQL) database systems
- AWS and Azure knowledge; database server performance analysis
- Audit and compliance
- Knowledge of databases
- Design database solutions using tables, stored procedures, functions, views, and indexes
- Data transfer from the Dev environment to Production and other related environments
- Schema comparison, bulk operations, server-side coding
- Understanding of normalization, denormalization, primary keys, foreign keys and constraints, transactions, ACID, indexes as an optimization tool, and views
- Working with the Database Manager in creating physical tables from logical models
- ETL, data migration (using CSV, Excel, TXT files), ad hoc reporting
- Migration of databases from older versions of SQL Server to new versions
- Distributed DBs, remote servers, and configuring linked servers
- Integrating SQL Server with Oracle using OPENQUERY

Job Requirements/Qualifications
- Good knowledge of MongoDB and PostgreSQL, both on-prem and cloud-native servers
- Good knowledge of SQL Server, all versions, including SaaS and PaaS models
- SSIS, SSRS, and bulk copy tools like BCP, DTS, etc.
- Good knowledge of DML, DDL, ETL, table-level backups, and system objects
- Backend data upload techniques
- Performance tuning
- Good in Excel (pivoting and analysis)
- Familiarity with ISO 20000, ISO 27001, PCI/DSS, and other related standards
- Excellent communication skills (interpersonal, verbal & written)
- Ability to multi-task and manage multiple priorities
- Should have high energy and a passion for helping people
- Customer focused
- Should be able to work 24/7, including night shifts
- Demonstrate ownership, commitment, and accountability
- Ability to build and maintain efficient working relationships and strong people management skills
- Ability to interact at all levels
- Candidate should be a Computer Science graduate or equivalent
- The candidate should have completed 5 years of database administration on MSSQL or in a similar role
- The candidate should have completed 2 years of database administration (PostgreSQL & MongoDB)
(ref:hirist.tech)
Posted 2 months ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Collaborating with stakeholders to understand their business process requirements and objectives, translating requirements into SAP Analytics Cloud (SAC) solutions. Extracting, transforming, and loading data necessary for data modelling purposes. Validating and assuring data quality and accuracy, performing data cleansing and enrichment, and building data models and stories. Creating comprehensive reports and dashboards to help business stakeholders track their key performance indicators (KPIs) and drive insights from their data. Augmenting solution experiences and visualisations using low/no-code development.

Essential Requirements
- A degree in Computer Science, Business Informatics or a comparable degree.
- At least 2 years' experience working on SAP Analytics Cloud (SAC) solutions as a Data Analyst and/or Data Engineer.
- Experience in building data pipelines, preparing and integrating data sources for comprehensive analysis and reporting.
- Experience in building efficient data models, understanding of relational data modelling and denormalization techniques.
- Fluency in SQL for building database transformations and extractions.
- Fluency with the SAC ecosystem and visualization capabilities.
- Past experience with SAP data around Finance and/or Operations processes will be appreciated.
- Certification in one or more of the following will be appreciated: SAC Data Analyst, Data Engineer, Low-Code/No-Code Developer.
- Desirable but not required: experience in building advanced visualisations or experiences using low/no-code, JavaScript and/or R.
- Excellent communication skills to be able to work independently with stakeholders.
- Energetic, organised and self-motivated.
- Fluent in business English.
Posted 2 months ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Azure Data Modeler
Overall Exp: 5+ years
Location: Mumbai
Notice period: quick joiners preferred (immediate to 30 days)

Key Responsibilities:
• Design, implement, and optimize data models in Azure Databricks to support large-scale data migration projects from Salesforce, Oracle, and SQL Server-based applications.
• Collaborate with business stakeholders, data architects, and data engineers to understand data requirements and translate them into scalable, high-performance data models.
• Develop conceptual, logical, and physical data models to ensure data consistency, integrity, and optimal performance in the Azure cloud environment.
• Work closely with the data engineering team to ensure smooth integration of data from Salesforce, Oracle, and SQL Server into Azure Databricks using ETL pipelines.
• Optimize and fine-tune data models for large datasets, focusing on performance, scalability, and cost efficiency within the Azure platform.
• Ensure data governance, security, and compliance standards are met when designing and implementing data models and pipelines.
• Develop and maintain documentation for data models, including relationships, entities, transformations, and business rules.
• Implement best practices for data modeling, including naming conventions, data types, and normalization/denormalization strategies.
• Collaborate with data scientists and analysts to ensure the data models support business intelligence, reporting, and machine learning initiatives.
• Conduct data quality assessments and validate the accuracy of migrated data.
• Troubleshoot, resolve, and optimize data model-related performance and data quality issues across the migration process.

Qualifications:
• 5+ years of experience in data modeling, with a strong focus on cloud-based environments (preferably Azure).
• Strong proficiency in SQL and experience working with Spark, Python, and other data processing languages.
• Proven experience with Azure Databricks, Azure Data Lake, and other Azure data services.
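One migration step named above, moving a SQL Server table into Azure Databricks, can be sketched as a JDBC read followed by a Delta write. The example is hypothetical (URL, credentials, and table names are placeholders); a Salesforce source would need a dedicated connector rather than plain JDBC.

```python
# Hypothetical sketch: JDBC URL, credentials, and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Pull the source table from SQL Server over JDBC.
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-db:1433;databaseName=sales")
    .option("dbtable", "dbo.customers")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land it as a Delta table for downstream modeling and validation.
src.write.format("delta").mode("overwrite").saveAsTable("bronze.customers_raw")
```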
Posted 2 months ago
5.0 years
0 Lacs
Delhi
On-site
The Role Context: This is an exciting opportunity to join a dynamic and growing organization, working at the forefront of technology trends and developments in social impact sector. Wadhwani Center for Government Digital Transformation (WGDT) works with the government ministries and state departments in India with a mission of “ Enabling digital transformation to enhance the impact of government policy, initiatives and programs ”. We are seeking a highly motivated and detail-oriented individual to join our team as a Data Engineer with experience in the designing, constructing, and maintaining the architecture and infrastructure necessary for data generation, storage and processing and contribute to the successful implementation of digital government policies and programs. You will play a key role in developing, robust, scalable, and efficient systems to manage large volumes of data, make it accessible for analysis and decision-making and driving innovation & optimizing operations across various government ministries and state departments in India. Key Responsibilities: a. Data Architecture Design : Design, develop, and maintain scalable data pipelines and infrastructure for ingesting, processing, storing, and analyzing large volumes of data efficiently. This involves understanding business requirements and translating them into technical solutions. b. Data Integration: Integrate data from various sources such as databases, APIs, streaming platforms, and third-party systems. Should ensure the data is collected reliably and efficiently, maintaining data quality and integrity throughout the process as per the Ministries/government data standards. c. Data Modeling: Design and implement data models to organize and structure data for efficient storage and retrieval. They use techniques such as dimensional modeling, normalization, and denormalization depending on the specific requirements of the project. d. Data Pipeline Development/ ETL (Extract, Transform, Load): Develop data pipeline/ETL processes to extract data from source systems, transform it into the desired format, and load it into the target data systems. This involves writing scripts or using ETL tools or building data pipelines to automate the process and ensure data accuracy and consistency. e. Data Quality and Governance: Implement data quality checks and data governance policies to ensure data accuracy, consistency, and compliance with regulations. Should be able to design and track data lineage, data stewardship, metadata management, building business glossary etc. f. Data lakes or Warehousing: Design and maintain data lakes and data warehouse to store and manage structured data from relational databases, semi-structured data like JSON or XML, and unstructured data such as text documents, images, and videos at any scale. Should be able to integrate with big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data visualization tools. g. Data Security : Implement security practices, technologies, and policies designed to protect data from unauthorized access, alteration, or destruction throughout its lifecycle. It should include data access, encryption, data masking and anonymization, data loss prevention, compliance, and regulatory requirements such as DPDP, GDPR, etc. h. Database Management: Administer and optimize databases, both relational and NoSQL, to manage large volumes of data effectively. i. 
Data Migration: Plan and execute data migration projects to transfer data between systems while ensuring data consistency and minimal downtime. a. Performance Optimization : Optimize data pipelines and queries for performance and scalability. Identify and resolve bottlenecks, tune database configurations, and implement caching and indexing strategies to improve data processing speed and efficiency. b. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with access to the necessary data resources. They also work closely with IT operations teams to deploy and maintain data infrastructure in production environments. c. Documentation and Reporting: Document their work including data models, data pipelines/ETL processes, and system configurations. Create documentation and provide training to other team members to ensure the sustainability and maintainability of data systems. d. Continuous Learning: Stay updated with the latest technologies and trends in data engineering and related fields. Should participate in training programs, attend conferences, and engage with the data engineering community to enhance their skills and knowledge. Desired Skills/ Competencies Education: A Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or equivalent with at least 5 years of experience. Database Management: Strong expertise in working with databases, such as SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra). Big Data Technologies: Familiarity with big data technologies, such as Apache Hadoop, Spark, and related ecosystem components, for processing and analyzing large-scale datasets. ETL Tools: Experience with ETL tools (e.g., Apache NiFi, Talend, Apache Airflow, Talend Open Studio, Pentaho, Infosphere) for designing and orchestrating data workflows. Data Modeling and Warehousing: Knowledge of data modeling techniques and experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake). Data Governance and Security: Understanding of data governance principles and best practices for ensuring data quality and security. Cloud Computing: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services for scalable and cost-effective data storage and processing. Streaming Data Processing: Familiarity with real-time data processing frameworks (e.g., Apache Kafka, Apache Flink) for handling streaming data. KPIs: Data Pipeline Efficiency: Measure the efficiency of data pipelines in terms of data processing time, throughput, and resource utilization. KPIs could include average time to process data, data ingestion rates, and pipeline latency. Data Quality Metrics: Track data quality metrics such as completeness, accuracy, consistency, and timeliness of data. KPIs could include data error rates, missing values, data duplication rates, and data validation failures. System Uptime and Availability: Monitor the uptime and availability of data infrastructure, including databases, data warehouses, and data processing systems. KPIs could include system uptime percentage, mean time between failures (MTBF), and mean time to repair (MTTR). Data Storage Efficiency: Measure the efficiency of data storage systems in terms of storage utilization, data compression rates, and data retention policies. KPIs could include storage utilization rates, data compression ratios, and data storage costs per unit. 
- Data Security and Compliance: Track adherence to data security policies and regulatory compliance requirements such as DPDP, GDPR, HIPAA, or PCI DSS. KPIs could include security incident rates, data access permissions, and compliance audit findings.
- Data Processing Performance: Monitor the performance of data processing tasks such as ETL (Extract, Transform, Load) processes, data transformations, and data aggregations. KPIs could include data processing time, CPU usage, and memory consumption.
- Scalability and Performance Tuning: Measure the scalability and performance of data systems under varying workloads and data volumes. KPIs could include scalability benchmarks, system response times under load, and performance improvements achieved through tuning.
- Resource Utilization and Cost Optimization: Track resource utilization and costs associated with data infrastructure, including compute resources, storage, and network bandwidth. KPIs could include cost per data unit processed, cost per query, and cost savings achieved through optimization.
- Incident Response and Resolution: Monitor the response time and resolution time for data-related incidents and issues. KPIs could include incident response time, time to diagnose and resolve issues, and customer satisfaction ratings for support services.
- Documentation and Knowledge Sharing: Measure the quality and completeness of documentation for data infrastructure, data pipelines, and data processes. KPIs could include documentation coverage, documentation update frequency, and knowledge sharing activities such as internal training sessions or knowledge base contributions.

Years of experience of the current role holder: New Position
Ideal years of experience: 3 – 5 years
Career progression for this role: CTO WGDT (Head of Incubation Centre)

Our Culture: WF is a global not-for-profit, and works like a start-up, in a fast-moving, dynamic pace where change is the only constant and flexibility is the key to success. Three mantras that we practice across job roles, levels, functions, programs and initiatives, are Quality, Speed, Scale, in that order. We are an ambitious and inclusive organization, where everyone is encouraged to contribute and ideate. We are intensely and insanely focused on driving excellence in everything we do. We want individuals with the drive for excellence, and passion to do whatever it takes to deliver world class outcomes to our beneficiaries. We set our own standards often more rigorous than what our beneficiaries demand, and we want individuals who love it this way. We have a creative and highly energetic environment – one in which we look to each other to innovate new solutions not only for our beneficiaries but for ourselves too. Open to collaborate with a borderless mentality, often going beyond the hierarchy and siloed definitions of functional KRAs, are the individuals who will thrive in our environment. This is a workplace where expertise is shared with colleagues around the globe. Individuals uncomfortable with change, constant innovation, and short learning cycles and those looking for stability and orderly working days may not find WF to be the right place for them. Finally, we want individuals who want to do greater good for the society leveraging their area of expertise, skills and experience.
The foundation is an equal opportunity firm with no bias towards gender, race, colour, ethnicity, country, language, age and any other dimension that comes in the way of progress. Join us and be a part of us! Bachelors in Technology / Masters in Technology
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Key Responsibilities
- Develop, optimize, and maintain complex SQL queries, stored procedures, functions, and views.
- Analyze slow-performing queries and optimize execution plans to improve database performance.
- Design and implement indexing strategies to enhance query efficiency.
- Work with developers to optimize database interactions in applications.
- Develop and implement Teradata best practices for large-scale data processing and ETL workflows.
- Monitor and troubleshoot Teradata performance issues using tools like DBQL (Database Query Log), Viewpoint, and Explain Plan Analysis.
- Perform data modeling, normalization, and schema design improvements.
- Collaborate with teams to implement best practices for database tuning and performance enhancement.
- Automate repetitive database tasks using scripts and scheduled jobs.
- Document database architecture, queries, and optimization techniques.

Required Skills & Qualifications
- Strong proficiency in Teradata SQL, including query optimization techniques.
- Strong proficiency in SQL (T-SQL, PL/SQL, or equivalent).
- Experience with indexing strategies, partitioning, and caching techniques.
- Knowledge of database normalization, denormalization, and best practices.
- Familiarity with ETL processes, data warehousing, and large datasets.
- Experience in writing and optimizing stored procedures, triggers, and functions.
- Hands-on experience in Teradata performance tuning, indexing, partitioning, and statistics collection.
- Experience with EXPLAIN plans, DBQL analysis, and Teradata Viewpoint monitoring.
- Power BI / Tableau integration experience – good to have.

About Us
Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer
Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
- Take part in information security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
- Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
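Two of the tuning habits listed above, reading EXPLAIN plans and keeping statistics fresh, can be sketched with the teradatasql Python driver. Everything here is hypothetical: host, credentials, and the sales table are placeholders, and the exact COLLECT STATISTICS syntax may vary by Teradata release.

```python
# Hypothetical sketch: host, credentials, and the queried table are placeholders.
import teradatasql

conn = teradatasql.connect(host="tdprod", user="dba_user", password="***")
cur = conn.cursor()

# Inspect the optimizer plan before changing indexes or statistics.
cur.execute("EXPLAIN SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id")
for row in cur.fetchall():
    print(row[0])  # each row is one line of the plan text

# Refresh the statistics the optimizer relies on for join planning.
cur.execute("COLLECT STATISTICS COLUMN (customer_id) ON sales")

cur.close()
conn.close()
```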
Posted 2 months ago
5.0 - 31.0 years
9 - 15 Lacs
Bengaluru/Bangalore
On-site
Job Title: NoSQL Database Administrator (DBA)
Department: IT / Data Management

Job Purpose:
The NoSQL Database Administrator will be responsible for designing, deploying, securing, and optimizing NoSQL databases to ensure high availability, reliability, and scalability of mission-critical applications. The role involves close collaboration with developers, architects, and security teams, especially in compliance-driven environments such as UIDAI.

Key Responsibilities:
- Collaborate with developers and solution architects to design and implement efficient and scalable NoSQL database schemas.
- Ensure database normalization, denormalization where appropriate, and implement indexing strategies to optimize performance.
- Evaluate and deploy replication architectures to support high availability and fault tolerance.
- Monitor and analyze database performance using tools like NoSQL Enterprise Monitor and custom monitoring scripts.
- Troubleshoot performance bottlenecks and optimize queries using query analysis, index tuning, and rewriting techniques.
- Fine-tune NoSQL server parameters, buffer pools, caches, and system configurations to improve throughput and minimize latency.
- Implement and manage Role-Based Access Control (RBAC), authentication, authorization, and auditing to maintain data integrity, confidentiality, and compliance.
- Act as a liaison with UIDAI-appointed GRCP and security audit agencies, ensuring all security audits are conducted timely, and provide the necessary documentation and artifacts to address risks and non-conformities.
- Participate in disaster recovery planning, backup management, and failover testing.

Key Skills & Qualifications:

Educational Qualifications:
- Bachelor’s or Master’s Degree in Computer Science, Information Technology, or a related field.

Technical Skills:
- Proficiency in NoSQL databases such as MongoDB, Cassandra, Couchbase, DynamoDB, or similar.
- Strong knowledge of database schema design, data modeling, and performance optimization.
- Experience in setting up replication, sharding, clustering, and backup strategies.
- Familiarity with performance monitoring tools and writing custom scripts for health checks.
- Hands-on experience with database security, RBAC, encryption, and auditing mechanisms.
- Strong troubleshooting skills related to query optimization and server configurations.

Compliance & Security:
- Experience with data privacy regulations and security standards, particularly in compliance-driven sectors like UIDAI.
- Ability to coordinate with government and regulatory security audit teams.

Behavioral Skills:
- Excellent communication and stakeholder management.
- Strong analytical, problem-solving, and documentation skills.
- Proactive and detail-oriented with a focus on system reliability and security.

Key Interfaces:
- Internal: Developers, Solution Architects, DevOps, Security Teams, Project Managers.
- External: UIDAI-appointed GRCP, third-party auditors, security audit agencies.

Key Challenges:
- Maintaining optimal performance and uptime in a high-demand, compliance-driven environment.
- Ensuring security, scalability, and availability of large-scale NoSQL deployments.
- Keeping up with evolving data security standards and audit requirements.
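For the MongoDB flavour of the indexing and RBAC duties above, a minimal, hypothetical sketch with pymongo might look like this (the connection string, database, collection, role, and user names are all invented).

```python
# Hypothetical sketch: connection string, database, role, and user names are invented.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://admin:***@localhost:27017/?authSource=admin")
db = client["registrations"]

# Unique index to support the most common lookup pattern.
db["enrolments"].create_index([("enrolment_id", ASCENDING)], unique=True)

# RBAC: a read-only role scoped to one collection, granted to a reporting user.
db.command(
    "createRole", "enrolmentReader",
    privileges=[{
        "resource": {"db": "registrations", "collection": "enrolments"},
        "actions": ["find"],
    }],
    roles=[],
)
db.command(
    "createUser", "report_svc",
    pwd="***",
    roles=[{"role": "enrolmentReader", "db": "registrations"}],
)
```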
Posted 2 months ago
9.0 - 14.0 years
0 - 0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & Responsibilities
- Design and develop conceptual, logical, and physical data models for enterprise and application-level databases.
- Translate business requirements into well-structured data models that support analytics, reporting, and operational systems.
- Define and maintain data standards, naming conventions, and metadata for consistency across systems.
- Collaborate with data architects, engineers, and analysts to implement models into databases and data warehouses/lakes.
- Analyze existing data systems and provide recommendations for optimization, refactoring, and improvements.
- Create entity relationship diagrams (ERDs) and data flow diagrams to document data structures and relationships.
- Support data governance initiatives including data lineage, quality, and cataloging.
- Review and validate data models with business and technical stakeholders.
- Provide guidance on normalization, denormalization, and performance tuning of database designs.
- Ensure models comply with organizational data policies, security, and regulatory requirements.

Looking for a Data Modeler Architect to design conceptual, logical, and physical data models. Must translate business needs into scalable models for analytics and operational systems. Strong in normalization, denormalization, ERDs, and data governance practices. Experience in star/snowflake schemas and medallion architecture preferred. Role requires close collaboration with architects, engineers, and analysts.

Data modelling, Normalization, Denormalization, Star and snowflake schemas, Medallion architecture, ERD, Logical data model, Physical data model & Conceptual data model
Posted 2 months ago
5.0 - 10.0 years
9 - 14 Lacs
Vijayawada, Hyderabad
Work from Office
We are actively seeking experienced Power BI Administrators who can take full ownership of Power BI environments from data modeling and report development to security management and system integration. This role is ideal for professionals with a solid technical foundation and hands-on expertise across the Power BI ecosystem, including enterprise BI environments.

Key Responsibilities:
- Data Model Management: Maintain and optimize Power BI data models to meet evolving analytical and reporting needs.
- Data Import & Transformation: Import and transform data from various sources using Power Query (M) and implement business logic using DAX.
- Advanced Measure Creation: Design complex DAX measures, KPIs, and calculated columns tailored to dynamic reporting requirements.
- Access & Permission Management: Administer and manage user access, roles, and workspace security settings in Power BI Service.
- Interactive Reporting: Develop insightful and interactive dashboards and reports aligned with business goals and user needs.
- Error Handling & Data Validation: Identify, investigate, and resolve data inconsistencies and refresh issues, ensuring data accuracy and report reliability.
- BCS & BIRT Integration: Develop and manage data extraction reports using Business Connectivity Services (BCS) and BIRT (Business Intelligence and Reporting Tool).

Preferred Skills:
- Proven experience as a Power BI Administrator or similar BI role.
- Strong expertise in Power BI Desktop, Power BI Service, Power Query, DAX, and security configuration.
- Familiar with report lifecycle management, data governance, and large-scale enterprise BI environments.
- Experience with BCS and BIRT tools is highly preferred.
- Capable of independently troubleshooting data and configuration issues.
- Excellent communication and documentation skills.
Posted 2 months ago
9.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description
- 9+ years of working experience in data engineering and data analytics projects, implementing data warehouse, data lake, and lakehouse architectures and the associated ETL/ELT patterns.
- Worked as a Data Modeller in one or two implementations, creating and implementing data models and database designs using dimensional and ER models.
- Good knowledge and experience in modelling complex scenarios like many-to-many relationships, SCD types, late-arriving facts and dimensions, etc.
- Hands-on experience in any one of the data modelling tools like Erwin, ER/Studio, Enterprise Architect, or SQLDBM.
- Experience in working closely with business stakeholders/business analysts to understand functional requirements and translate them into data models and database designs.
- Experience in creating conceptual and logical models and translating them into physical models to address both functional and non-functional requirements.
- Strong knowledge of SQL; able to write complex queries and profile the data to understand relationships and DQ issues.
- Very strong understanding of database modelling and design principles like normalization, denormalization, and isolation levels.
- Experience in performance optimization through database design (physical modelling).
- Good communication skills.
(ref:hirist.tech)
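Since the posting calls out SCD types, here is a simplified, hypothetical Type 2 sketch on Spark/Delta. Table and column names are invented, the staging table is assumed to hold one row per customer, and dim_customer is assumed to have the columns customer_id, address, start_date, end_date, is_current in that order: changed rows are expired first, then re-inserted as new current versions.

```python
# Simplified SCD Type 2 sketch; tables, columns, and the change rule are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Step 1: expire the current row of changed customers; insert brand-new customers.
spark.sql("""
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = true
WHEN MATCHED AND d.address <> s.address THEN
  UPDATE SET is_current = false, end_date = current_date()
WHEN NOT MATCHED THEN
  INSERT (customer_id, address, start_date, end_date, is_current)
  VALUES (s.customer_id, s.address, current_date(), CAST(NULL AS DATE), true)
""")

# Step 2: changed customers now have no current row, so add their new version.
spark.sql("""
INSERT INTO dim_customer
SELECT s.customer_id, s.address, current_date(), CAST(NULL AS DATE), true
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = true
WHERE d.customer_id IS NULL
""")
```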
Posted 2 months ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Summary:
You will be creating, maintaining, and supporting data pipelines with information coming from our vessels, projects, campaigns, third-party data services, and so on. You will play a key role in organizing data, developing and maintaining data models, and designing modern data solutions and products on our cloud data platform. You will work closely with the business to define and fine-tune requirements. You will support our data scientists and report developers across the organization and enable them to find the required data and information.

Your responsibilities
- You have a result-driven and hands-on mindset and prefer to work in an agile environment.
- You are a team player and a good communicator.
- You have experience with SQL or other data-oriented development languages (Python, Scala, Spark, etc.).
- You have proven experience in developing data models and database structures.
- You have proven experience with UML modelling and ER modelling for documenting and designing data structures.
- You have proven experience in the development of data pipelines and orchestrations.
- You have a master's or bachelor's degree in the field of engineering or computer science.
- You like to iterate quickly and try out new things.
- Ideally, you have experience with a wide variety of data tools and data types, such as geospatial, time series, structured and unstructured, etc.

Your profile
- Experience with the Microsoft Azure data stack (Synapse/Data Factory, Power BI, Databricks, Data Lake, Microsoft SQL, Microsoft AAS) is mandatory.
- Experience with machine learning and AI is a plus.
- Knowledge of fundamental data modeling concepts such as entities, relationships, normalization, and denormalization.
- Knowledge of different data modeling techniques (e.g., ER diagrams, star schema, snowflake schema).
- Experience with reporting tools is a plus (Grafana, Power BI, Tableau).
- A healthy appetite and an open mind for new technologies is a plus.
- Holds a bachelor's or master's degree in computer science, information technology, or a related field.
- Relevant experience of 6-10 years is mandatory.
- Job location is Chennai.

Our offer
- An extensive mobility program for a healthy work-life balance.
- A permanent training track which allows you to develop yourself personally and professionally.
- A stimulating, innovative workplace with numerous growth opportunities.
- A people-oriented environment with an interactive health program and a focus on employee wellbeing.
Posted 2 months ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
Be able to align data models with business goals and enterprise architecture
Collaborate with Data Architects, Engineers, Business Analysts, and Leadership teams
Lead data modelling and governance discussions and decision-making across cross-functional teams
Proactively identify data inconsistencies, integrity issues, and optimization opportunities
Design scalable and future-proof data models
Define and enforce enterprise data modelling standards and best practices
Work in Agile environments (Scrum, Kanban)
Identify impacted applications, size capabilities, and create new capabilities
Lead complex initiatives with multiple cross-application impacts, ensuring seamless integration
Drive innovation, optimize processes, and deliver high-quality architecture solutions
Understand business objectives, review business scenarios, and plan acceptance criteria for the proposed solution architecture
Discuss capabilities with individual applications, resolve dependencies and conflicts, and reach agreement on proposed high-level approaches and solutions
Participate in Architecture Review, present solutions, and review other solutions
Work with Enterprise Architects to learn and adopt standards and best practices
Design solutions adhering to applicable rules and compliance requirements
Stay updated with the latest technology trends to solve business problems with minimal change or impact
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
Undergraduate degree or equivalent experience
8+ years of proven experience in a similar role, leading and mentoring a team of architects and technical leads
Extensive experience with relational, dimensional, and NoSQL data modelling
Experience in driving innovation, optimizing processes, and delivering high-quality solutions
Experience in large-scale OLAP, OLTP, and hybrid data processing systems
Experience in complex initiatives with multiple cross-application impacts
Expert in Erwin for conceptual, logical, and physical data modelling
Expertise in relational databases, SQL, indexing, and partitioning for databases such as Teradata, Snowflake, Azure Synapse, or traditional RDBMS
Expertise in ETL/ELT architecture, data pipelines, and integration strategies
Expertise in data normalization, denormalization, and performance optimization
Exposure to cloud platforms, tools, and AI-based solutions
Solid knowledge of 3NF, star schema, snowflake schema, and Data Vault
Exposure to Java, Python, Spring, the Spring Boot framework, SQL, MongoDB, Kafka, React JS, Dynatrace, and Power BI
Knowledge of Azure Platform as a Service (PaaS) offerings (Azure Functions, App Service, Event Grid)
Good knowledge of the latest developments in the technology world
Advanced SQL skills for complex queries, stored procedures, indexing, partitioning, macros, recursive queries, query tuning, and OLAP functions (a small worked example follows this posting)
Understanding of data privacy regulations, Master Data Management, and data quality
Proven excellent communication and leadership skills
Proven ability to think from a long-term perspective and arrive at intentional and strategic architecture
Proven ability to provide consistent solutions across Lines of Business (LOB)
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
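To make the advanced SQL items above (window/OLAP functions, indexing) concrete, here is a small, self-contained Python sketch using the standard-library sqlite3 module. The claims table, its columns, and the data are hypothetical stand-ins; SQLite 3.25 or later is assumed for window-function support.

```python
import sqlite3

# Toy schema and data, invented for illustration only
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (member_id INTEGER, claim_date TEXT, amount REAL);
    CREATE INDEX idx_claims_member ON claims (member_id, claim_date);
    INSERT INTO claims VALUES
        (1, '2024-01-05', 250.0),
        (1, '2024-02-10', 400.0),
        (2, '2024-01-20', 125.0);
""")

# Running total per member via an OLAP-style window function;
# the composite index supports the per-member, date-ordered access pattern.
rows = conn.execute("""
    SELECT member_id,
           claim_date,
           amount,
           SUM(amount) OVER (PARTITION BY member_id
                             ORDER BY claim_date) AS running_total
    FROM claims
    ORDER BY member_id, claim_date
""").fetchall()

for row in rows:
    print(row)
```

The same windowing pattern carries over to warehouse engines such as Teradata, Snowflake, or Azure Synapse.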
Posted 3 months ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer
Location: Bangalore
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to the ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO's platform by infusing it with next-generation AI capabilities. You'll work with a dedicated team, leveraging your data engineering skills to build solutions and drive innovation forward.
What You'll Contribute
Perform hands-on analysis, technical design, solution architecture, prototyping, proofs-of-concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies.
Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools like Apache Spark, Kafka, Airflow, or similar.
Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse.
Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake.
Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases.
Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data (see the sketch after this posting).
Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency.
Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed.
Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools.
Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability.
Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices.
Design scalable architectures that support FICO's analytics and decisioning solutions.
Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs.
What We're Seeking
7+ years of hands-on experience as a Data Engineer working on production-grade systems.
Proficiency in programming languages such as Python or Scala for data processing.
Strong SQL skills, including complex joins, window functions, and query optimization techniques.
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake).
Familiarity with data orchestration tools like Airflow, Dagster, or Prefect.
Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake.
Understanding of stream processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus.
Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization).
Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools.
Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues.
Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams.
Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed.
Bachelor's degree in computer science or a related field.
Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory.
Engineers with certifications will be preferred.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
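As a toy illustration of the ETL and data-quality responsibilities described above, the following Python/pandas sketch wires together extract, transform, validate, and load steps with simple null and uniqueness checks. The function names, columns, and data are hypothetical stand-ins; a production pipeline would read from and write to real sources (e.g., S3, Kafka, Snowflake) and run under an orchestrator such as Airflow.

```python
import pandas as pd

def extract() -> pd.DataFrame:
    """Stand-in for reading raw data from object storage or a stream."""
    return pd.DataFrame({
        "account_id": [1, 2, 2, None],
        "score": [712, 655, 655, 690],
    })

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Example cleaning step: drop rows without a key and de-duplicate."""
    return df.dropna(subset=["account_id"]).drop_duplicates("account_id")

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Minimal data-quality gates: required key present and unique."""
    if df["account_id"].isna().any():
        raise ValueError("data quality check failed: null account_id")
    if df["account_id"].duplicated().any():
        raise ValueError("data quality check failed: duplicate account_id")
    return df

def load(df: pd.DataFrame) -> None:
    """Stand-in for a warehouse write (e.g., a Snowflake or Redshift COPY)."""
    print(f"loaded {len(df)} rows")

if __name__ == "__main__":
    raw = extract()
    clean = transform(raw)
    load(validate(clean))
```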
Posted 3 months ago