
789 Data Modelling Jobs - Page 5

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

As an SAP Native HANA Consultant with 6-10 years of experience, you will be responsible for working without supervision and managing projects independently from start to finish, including requirements gathering, design, development, testing, project management, maintenance, and support. You will be based in Noida, Hyderabad, or Pune and work EST hours (5:30 PM to 2:30 AM) in Work From Office (WFO) mode. Immediate availability to start within 15 days is required for this role.

Your role will involve hands-on development and implementation of end-to-end SAP BW solutions, including data extraction, transformation, modeling, and reporting from both standard and custom DataSources. You should be proficient in designing and developing data models in SAP BW, SAP HANA, or hybrid models that combine BW and HANA features. Expertise in building reports, dashboards, and analytics using SAP BW tools such as BW Query Designer, Analysis for Office (AFO, Excel), WebI, Lumira Designer, and Power BI is essential. Experience with SAP BW/4HANA objects such as ADSOs, Composite Providers, ODP BW extractors, Open ODS Views, BW Queries, and WebI is preferred. Additionally, you should have experience in HANA modelling, including calculation views, stored procedures, and scalar and table functions. Your responsibilities will involve creating and enhancing new and existing Native HANA models, views, and BOBJ reports and dashboards to align with business needs and IT standards.

If you are interested in this opportunity, please share your profile with Anupma, the Creative Recruiter at Stefanini NA/APAC, by emailing anupma.sinha@stefanini.com for a detailed discussion and possible next steps. Looking forward to your response.

Posted 1 week ago

Apply

13.0 - 17.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Senior Architect - Data Modelling at Tiger Analytics, you will be instrumental in solving complex data management challenges to help organizations become more data-driven. Your role will involve transitioning between Individual Contributor, team member, and Architect as each project requires, with a focus on defining, designing, and delivering actionable insights.

Your day-to-day responsibilities will include:
- Leading the translation of business requirements into conceptual, logical, and physical data models.
- Designing data architectures that encompass the entire data lifecycle, including data organization, acquisition, storage, security, and visualization.
- Advising clients on the best modelling approach based on their needs and target architecture.
- Analyzing datasets, guiding the team in creating Source-to-Target Mappings and data dictionaries, and generating insightful data profiles.
- Defining data modelling best practices and ensuring their implementation across projects.
- Optimizing data models and defining ingestion logic, ingestion frequency, and data consumption patterns in collaboration with Data Engineers.
- Ensuring adherence to data quality standards and data procedures, and driving automation in modelling activities.
- Collaborating with various stakeholders to design and develop next-generation data platforms.
- Providing technical expertise and support to troubleshoot and resolve data-related issues, mentoring team members, and collaborating on pre-sales Proof-of-Concepts.
- Enabling the data catalog and business glossary per the process defined by the data governance team.

Job Requirements:
- Minimum 13 years of experience in the data space with hands-on experience in designing data architecture.
- Proficiency in RDBMS systems (e.g., Oracle, DB2, SQL Server) and data modelling concepts such as Relational, Dimensional, and Data Vault modelling.
- Experience developing data models for multiple business domains, OLTP and OLAP systems, cloud data warehouses, and cloud platforms (e.g., AWS, Azure, GCP).
- Hands-on experience with data modelling tools, ETL tools, data ingestion frameworks, NoSQL databases, and Big Data ecosystems.
- Familiarity with Data Quality, Data Governance, industry data models, and agile methodology, plus good communication skills.
- Exposure to Python and contributions to proposals and RFPs are an added advantage.

At Tiger Analytics, we value diversity, inclusivity, and individual growth. We encourage you to apply even if you do not meet all the requirements today, as we believe in finding unique roles for every individual. We are an equal-opportunity employer committed to fostering a culture of listening, trust, respect, and personal development. Please note that the job designation and compensation will be based on your expertise and experience; our compensation packages are highly competitive within the industry.

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 22 Lacs

indore, pune, ahmedabad

Work from Office

We are seeking a highly experienced and motivated Senior SAP BW Developer with a minimum of 8 years of hands-on experience. The ideal candidate will possess an in-depth, expert-level understanding of all SAP BW artifacts and demonstrate strong, proven programming proficiency in SAP ABAP and SQL. This role requires a candidate who can independently design, develop, implement, and optimize complex data warehousing solutions within the SAP BW environment to meet critical business requirements. You will be a key technical expert, providing guidance and mentorship to junior team members and contributing to the overall data strategy.

Responsibilities:
- Work with customer technical leads, client executives, and partners to manage and deliver successful implementations of cloud solutions, becoming a trusted advisor to decision makers throughout the engagement.
- Work with internal specialists and product and engineering teams to package best practices and lessons learned into thought leadership, methodologies, and published assets.
- Interact with sales, partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks/issues, and timelines for successful client outcomes.
- Advocate for customer needs in order to overcome adoption blockers and drive new feature development based on your field experience.
- Propose architectures for SAP products and manage the deployment of cloud-based SAP solutions according to complex customer requirements and implementation best practices.

Requirements:
- Minimum of 8 years of dedicated experience in SAP BW implementation, development, and support.
- Design, develop, and implement robust and scalable SAP BW solutions (7.5 and below): end-to-end BW solution development including data modelling, extraction, transformation, loading, and reporting.
- Expertise and comprehensive hands-on knowledge of SAP BW artifacts covering data modelling, ETL processes, process orchestration, reporting, and authorizations.
- Strong, demonstrable proficiency in SAP ABAP programming, specifically within the BW context (transformations, extractors, function modules).
- Solid experience with SQL scripting, including complex queries and performance optimization.
- System optimization and performance tuning: identify and resolve performance bottlenecks in BW data loads, queries, and overall system performance.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

pune

Work from Office

About the Position:
Develop, test, and maintain high-quality software using the Python programming language. Participate in the entire software development lifecycle, building, testing, and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems. Write clean, reusable code that can be easily maintained and scaled.

Technical and Professional Requirements:
- Sound knowledge of Python and its framework and library ecosystem
- Familiarity with database technologies such as SQL and NoSQL
- Experience with popular Python frameworks such as Django, Flask, or Pyramid
- Knowledge of data science and machine learning concepts and tools
- Expertise in combining several data sources into one system (see the sketch below)
- Ability to implement security and data protection solutions
- Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3

Preferred Skills:
- Knowledge of cloud technologies like GCP/AWS/Azure
- Ability to document data

Job Responsibilities:
- Data management: tracing usage of legacy data assets and proposing the best migration options
- Data modelling
- Building and optimizing ETL jobs
- Communicating effectively with team members: as we will be moving data around and porting existing data jobs, interaction with the team will be vital
- Documenting data relationships and context

Educational Requirements: Any graduate with 60% or above
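For context only (illustrative, not part of the posting): a minimal sketch of combining two data sources into one system with pandas and Python's built-in sqlite3. All table, column, and variable names are invented.

```python
# Minimal sketch: merge two hypothetical sources into one table.
import sqlite3

import pandas as pd

# Source 1: stand-in for a CSV/API feed
orders = pd.DataFrame(
    {"order_id": [10, 11], "customer_id": [1, 2], "amount": [250.0, 99.5]}
)

# Source 2: a relational database (in-memory SQLite stands in for a real DB)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
customers = pd.read_sql("SELECT * FROM customers", conn)

# Combine into one dataset and persist it for downstream jobs
combined = orders.merge(customers, on="customer_id", how="left")
combined.to_sql("orders_enriched", conn, index=False, if_exists="replace")
```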

Posted 1 week ago

Apply

6.0 - 11.0 years

1 - 6 Lacs

bengaluru

Remote

ETL/ELT, Data Modelling, Synapse, ADF, Microsoft Fabric, Databricks, SQL

Posted 1 week ago

Apply

7.0 - 12.0 years

17 - 32 Lacs

hyderabad, chennai, bengaluru

Work from Office

Job Title: Senior Data Engineer
Location: Pan India
Experience: 7+ Years
Joining: Immediate/Short Notice Preferred

Job Summary: We are looking for an experienced Senior Data Engineer to design, develop, and optimize scalable data solutions across Enterprise Data Lake (EDL) and hybrid cloud platforms. The role involves data architecture, pipeline orchestration, metadata governance, and building reusable data products aligned with business goals.

Key Responsibilities:
- Design and implement scalable data pipelines (Spark, Hive, Kafka, Bronze-Silver-Gold architecture; see the sketch below).
- Work on data architecture, modelling, and orchestration for large-scale systems.
- Implement metadata governance, lineage, and a business glossary using Apache Atlas.
- Support DataOps/MLOps best practices and mentor teams.
- Integrate data across structured and unstructured sources (ODS, CRM, NoSQL).

Required Skills:
- Strong hands-on experience with Apache Hive, HBase, Kafka, Spark, and Elasticsearch.
- Expertise in data architecture, modelling, orchestration, and DataOps.
- Familiarity with Data Mesh, data product development, and hybrid cloud (AWS/Azure/GCP).
- Knowledge of metadata governance, ETL/ELT, and NoSQL data models.
- Strong problem-solving and communication skills.
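For context only, and not the employer's code: a minimal PySpark sketch of the Bronze-Silver-Gold (medallion) layering the posting names. The paths, column names, and app name are assumptions.

```python
# Illustrative medallion pipeline: raw (bronze) -> cleansed (silver) -> aggregate (gold).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw events as-is (JSON files standing in for a Kafka feed)
bronze = spark.read.json("s3://example-bucket/raw/events/")

# Silver: cleanse and conform - drop duplicates, cast types, filter bad rows
silver = (
    bronze.dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_ts").isNotNull())
)

# Gold: business-level aggregate ready for reporting
gold = silver.groupBy("customer_id").agg(F.count("*").alias("event_count"))
gold.write.mode("overwrite").parquet("s3://example-bucket/gold/customer_events/")
```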

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 17 Lacs

bengaluru

Work from Office

Role & responsibilities

Technical Solution Development:
1. Build and Unit Test: Develop and unit test solutions based on business requirements using SAP Analytics Cloud.
2. Functional and Technical Specifications: Translate business requirements into functional and technical specifications for application objects.
3. Design Intuitive Applications: Create intuitive and easy-to-understand application stories while adhering to organizational standards.
4. Prototype Solutions: Prototype and validate solutions using SAC features.
5. Integration and User Acceptance Testing: Support execution of integration and user acceptance testing, and perform break/fix activities.

Story Development and Visualization:
1. Advanced Story Design: Develop SAC stories with various layouts (canvas, responsive, grid) by blending data, formulas, cross calculations, input controls, and linked analysis.
2. Visualizations: Create charts, tables, maps, dropdown menus, and interactive objects to visualize data.
3. Planning Models: Work with analytic and planning models within SAC.
4. Embedded Analytics: Prior work with embedded analytics (fair knowledge is fine).

Data Integration and Modelling:
1. Data Modelling: Understand data modelling concepts and apply them within SAC.
2. Cloud Connectivity: Familiarity with connecting SAC to both SAP and non-SAP data sources.
3. Experience with SAP BW, HANA, and Cloud Databases: Develop applications integrating with these systems.
4. Must have knowledge of data model creation and schemas.
5. Must have knowledge of JavaScript.

Security Configuration:
1. Configure security settings at the model, dimension, story, and file-structure levels.
2. Ensure data access controls align with business requirements (fair knowledge is fine).

Collaboration and Communication (fair knowledge is fine):
1. Business Requirements: Act as a key point of contact for communicating business requirements related to reporting needs.
2. Stakeholder Engagement: Collaborate with business and technical stakeholders to design models, dimensions, members, and hierarchies.

Training and Support:
1. End-User Training: Provide training to business users on using SAC features effectively (fair knowledge is fine).

Administration:
Must have basic knowledge of story migration, cataloguing, subscription management, and user management.

Preferred candidate profile / Required Skills:
- Bachelor's degree in Engineering, Finance, Accounting, Business, or a related field; a Master's degree is a plus.
- 5+ years of total experience, with 3+ years of implementation experience.
- Preferably SAC certified.
- Basic Qlik Sense knowledge is a plus.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

This role is for a GCP Data Engineer who can build cloud analytics platforms to meet expanding business requirements with speed and quality using lean Agile practices. You will analyse and manipulate large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics on GCP, and you will be responsible for designing the transformation and modernization on GCP. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates with a broad set of technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Responsibilities:
- Develop technical solutions for Data Engineering and work between 1 PM and 10 PM IST to enable more overlap time with European and North American counterparts. This role works closely with teams in the US as well as Europe to ensure robust, integrated migration aligned with Global Data Engineering patterns and standards.
- Design and deploy data pipelines with automated data lineage.
- Develop reusable Data Engineering patterns.
- Design and build production data engineering solutions that deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. (A minimal pipeline sketch follows this listing.)
- Ensure timely migration of the Ford Credit Europe (FCE) Teradata warehouse to GCP, enabling Teradata platform decommissioning by end of 2025 with a strong focus on continued, robust, and accurate regulatory reporting capability.

Position Opportunities: The Data Engineer role within FC Data Engineering offers successful individuals the opportunity to:
- Be a key player in a high-priority program to unlock the potential of Data Engineering products and services and secure operational resilience for Ford Credit Europe.
- Explore and implement leading-edge technologies, tooling, and software development best practices.
- Gain experience managing data warehousing and product delivery within a financially regulated environment.
- Gain experience of collaborative development practices within an open-plan, team-designed environment.
- Gain experience working with third-party suppliers and supplier management.
- Continue personal and professional development, with support and encouragement for further certification.

Qualifications - Essential:
- 5+ years of experience in data engineering, with a focus on data warehousing and ETL development (including data modelling, ETL processes, and data warehousing principles).
- 5+ years of SQL development experience.
- 3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale.
- Strong understanding of key GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Build, Airflow, and Pub/Sub, alongside storage services including Cloud Storage, Bigtable, and Cloud Spanner.
- Excellent problem-solving skills, with the ability to design and optimize complex data pipelines.
- Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large, global, diverse team.
- Experience developing with microservice architecture on a container orchestration framework.
- Experience designing pipelines and architectures for data processing.
- Strong evidence of self-motivation to continuously develop your own engineering skills and those of the team.
- A proven record of working autonomously in areas of high ambiguity, without day-to-day supervisory support.
- Evidence of a proactive mindset to problem solving and a willingness to take the initiative.
- Strong prioritization, coordination, organizational, and communication skills, and a proven ability to balance workload and competing demands to meet deadlines.

Qualifications - Desired:
- Professional certification in GCP (e.g., Professional Data Engineer).
- Data engineering or development experience gained in a regulated, financial environment.
- Experience with Teradata-to-GCP migrations.
- Strong expertise in SQL and experience with programming languages such as Python, Java, and/or Apache Beam.
- Experience coaching and mentoring Data Engineers.
- Experience with data security, governance, and compliance best practices in the cloud.
- An understanding of current architecture standards and digital platform services strategy.
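For context only: a minimal Apache Beam (Python) sketch of the kind of streaming pipeline this posting describes, reading from Pub/Sub and writing to BigQuery. The project, topic, table, and schema names are hypothetical placeholders, not Ford Credit systems.

```python
# Illustrative streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/demo/topics/events")
        | "ParseJson" >> beam.Map(json.loads)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "demo-project:analytics.events",  # hypothetical table
            schema="event_id:STRING,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```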

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be working as a Data Scientist for Group Internal Audit's Audit Analytics function at a world-class leading bank. Your primary responsibility will be to drive innovation and gain insights across the bank's audit risk areas using data science and advanced analytics techniques. You will be involved in exploratory data analysis, feature creation, modeling, and collaborating with key stakeholders to design data analytical solutions.

Your role will also include contributing to the design and implementation of data analytics processes, providing predictive outputs and insights, interpreting analytics outputs for audit teams, and continuously seeking feedback for improvement. You will develop AI/GenAI models and RPA solutions using Python and Dataiku, collaborate with cross-functional teams, and lead junior data scientist specialists in the team. Additionally, you will be involved in testing and quality assurance of tools and solutions, ensuring compliance with regulatory and business conduct standards, embracing the agile philosophy, and embedding the bank's values in your team. You will also perform other responsibilities assigned under Group, Country, Business, or Functional policies and procedures.

To be successful in this role, you should have a minimum of 5 years of relevant experience in data science and analytics, with expertise in data analysis techniques, data modeling, project management, and analytical problem solving. You should possess strong communication skills, project management capabilities, and the ability to work within a multi-disciplinary team. A Bachelor's degree in Computer Science, Data Science, or Artificial Intelligence is preferred.

Joining Standard Chartered Bank means being part of an international bank that values diversity, inclusion, and continuous growth. You will have access to benefits such as retirement savings, medical and life insurance, flexible working options, wellbeing support, continuous learning opportunities, and an inclusive work culture that celebrates unique talents. If you are passionate about making a positive difference and driving commerce and prosperity, Standard Chartered Bank looks forward to hearing from you.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Power BI Developer with 8 to 10 years of experience, you will be responsible for developing tabular and multidimensional data models and reports for complex ERP data. Your role will involve creating Power Query data models and pre-defined reports for a diverse group of business users. You will apply your expertise in building enterprise data models using Power BI Desktop and work with the Microsoft Azure platform, Power BI Desktop/SSAS, and Embedded Power BI.

In this position, you will develop, publish, and schedule reports and dashboards to meet business requirements. Your experience in requirement analysis for financial and acquisitions data will be crucial in ensuring the accuracy and relevance of the reports. You are expected to have a strong understanding of the Power BI application security layer model and knowledge of the Microsoft Power Platform and connectors to external data sources. Your responsibilities will also include data modeling based on SQL Server and Power BI, visualizing data in dashboards, and implementing row-level security features within BI reports. Ensuring consistent data quality over the lifecycle of a report, cleaning up data, setting up automated data feed processes, and adapting reports as the product evolves will be part of your daily tasks.

Furthermore, you will be involved in end-to-end application development, maintenance, and application management. Regular interaction with clients and daily estimation, design, development, and deployment activities will be integral to your role. To be successful in this position, you are required to have a degree in B.E/B.Tech/MCA.

At CGI, we are a team of builders committed to helping our clients succeed. As a member of CGI, you will have the opportunity to grow and develop professionally within a dynamic environment. We offer a competitive compensation package, opportunities for growth, and benefits that start on the first day of employment. If you have strong SQL and data modeling skills, we invite you to join our team at CGI, where ownership, teamwork, respect, and belonging are at the core of our culture. Together, let's turn insights into action and shape the future of IT and business consulting services.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

Qualcomm India Private Limited is currently looking for a dedicated Staff IT Data Engineer to join the Enterprise Architecture, Data and Services (EADS) Team. In this role, you will design, develop, and support advanced data pipelines within a multi-cloud environment, with a primary focus on Databricks. Your main duties will include facilitating data ingestion from various sources, processing data using a standardized data lake approach, and provisioning data for analytical purposes. The position requires strong implementation of DevSecOps practices, including Continuous Integration/Continuous Deployment (CI/CD), Infrastructure as Code (IaC), and automated testing, to improve data operations, monitor and optimize data loads, and establish data retention policies.

As a qualified candidate, you should have experience in designing, architecting, and presenting data systems for customers, with a minimum of 5 years of experience in data engineering, architecture, or analytics roles. Proficiency in the Databricks platform, including cluster management, Delta Lake, MLflow, and Unity Catalog with Collibra integration, is essential. Additionally, expertise in Spark, Python, SQL, and ETL frameworks, as well as experience with cloud platforms like AWS and their integration with Databricks, is required. A deep understanding of data warehousing concepts, big data processing, and real-time analytics is also crucial for this role.

To be considered for this position, you must hold a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field, with a minimum of 5 years of IT-related work experience, or have 7+ years of IT-related work experience without a Bachelor's degree. Strong programming experience, preferably in Python or Java, along with significant experience in Databricks and SQL or NoSQL databases, is mandatory. Familiarity with data modeling, CI/CD, DevSecOps, and related technologies is an advantage.

Preferred qualifications include Databricks Certified Data Engineer or Databricks-AWS platform architect certification, experience with cloud platforms like Azure and GCP, and knowledge of big data handling with PySpark. A Bachelor of Science in Computer Science, Information Technology, Engineering, or a related field, or equivalent professional experience, is required.

Qualcomm India Private Limited is an equal opportunity employer and is committed to providing accessible processes for individuals with disabilities. If you require accommodations during the application/hiring process, please contact Qualcomm via email at disability-accommodations@qualcomm.com or through their toll-free number. Please note that only individuals seeking a position directly at Qualcomm are permitted to use their Careers Site; staffing and recruiting agencies should refrain from submitting profiles, applications, or resumes.
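For context only, a hedged sketch of a typical Databricks ingestion step of the kind this posting describes: landing raw files into a Delta table registered in Unity Catalog. The bucket path and the three-level table name are assumptions, not Qualcomm systems.

```python
# Illustrative Databricks ingestion step: raw CSV -> governed Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Read a raw batch from cloud storage (hypothetical S3 path)
raw = spark.read.option("header", True).csv("s3://example-bucket/landing/sales/")

# Write to a Delta table; Unity Catalog uses catalog.schema.table names
(
    raw.write.format("delta")
    .mode("append")
    .saveAsTable("main.analytics.sales_bronze")
)
```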

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Data Engineer, you will be responsible for designing and implementing data pipelines using various AWS services, including S3, Glue, PySpark, and EMR. Your role will involve optimizing data storage and retrieval through AWS database services such as RDS, Redshift, and DynamoDB. Building different data warehousing layers to cater to specific use cases will be a key aspect of your responsibilities.

Your expertise in SQL and strong understanding of ETL processes and data modeling will be essential in ensuring the accuracy and availability of data to customers. You will also need to understand how technical decisions can impact business analytics and reporting for our clients. Experience with AWS cloud and services like S3 buckets, Glue Studio, Redshift, Athena, Lambda, and SQS queues will be beneficial in this role.

The ideal candidate should possess a Bachelor's or Master's degree in Computer Science or Information Technology. Prior experience with batch job scheduling and identifying data or job dependencies will be advantageous, and proficiency in data warehousing, ETL processes, and big data processing is a valuable skill set. Additionally, familiarity with the pharma domain would be a plus for this position.

Join our team to contribute to the development of robust data solutions and drive impactful insights for our clients in a dynamic and collaborative environment.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

You will be responsible for database design and implementation, as well as data modeling. This includes creating indexes, views, complex triggers, efficient functions, and appropriate stored procedures to facilitate effective data manipulation and data consistency. You should have a good understanding of keys, constraints, indexes, joins, CTEs, partitioning, ROW_NUMBER() and other window functions, temporary tables, UDTs, the different types of UNION, and materialized views (see the illustrative query below).

Utilizing strong analytical and problem-solving skills, you will work on database normalization and de-normalization and write complex logic. Troubleshooting, optimizing, and tuning SQL processes and queries will be part of your responsibilities, as well as writing unit test cases. Understanding database transactions and states, and designing, implementing, and monitoring queries and stored procedures for performance optimization, are key tasks.

Additionally, you should be able to debug programs, integrate applications with third-party web services, and identify opportunities for improved performance in SQL operations. Knowledge of database backup, restore, and maintenance procedures, along with experience working with development, testing, UAT, and production environments and their databases, is essential. Skills in SSRS and SSIS are required, and experience with SSAS will be considered a plus. Good communication skills are also important for this role.
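Purely illustrative, not part of the posting: the CTE-plus-ROW_NUMBER() pattern the description lists, run here through Python's built-in sqlite3 module (window functions require SQLite 3.25 or newer). The table and columns are hypothetical.

```python
# Latest order per customer via a CTE and ROW_NUMBER() over a partition.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INTEGER, order_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 10, 50.0), (1, 11, 75.0), (2, 12, 20.0);
""")

query = """
WITH ranked AS (
    SELECT customer_id, order_id, amount,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id ORDER BY order_id DESC
           ) AS rn
    FROM orders
)
SELECT customer_id, order_id, amount FROM ranked WHERE rn = 1;
"""
for row in conn.execute(query):
    print(row)  # -> (1, 11, 75.0) then (2, 12, 20.0)
```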

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You should have at least 5 years of hands-on experience in SFCC, with a focus on working with SiteGenesis and/or SFRA based storefronts. Your knowledge should extend to Demandware or equivalent JavaScript-based scripting languages. A thorough understanding of the retail domain and the various downstream and upstream systems involved is essential, and experience with third-party integrations from SFCC is a plus.

A good grasp of headless implementation concepts in SFCC is required, including exposing APIs using OCAPI and SCAPI. Familiarity with API concepts and technologies such as REST, JSON, XML, YAML, GraphQL, and Swagger is important. Test-Driven Development experience with the use of mocking frameworks is desirable. Experience with data modeling and a strong understanding of agile methodology and scrum ceremonies are necessary. Knowledge of branching/merging strategies, code repository frameworks like Git/Bitbucket, and code reviews is expected, and hands-on experience integrating with code-quality tools is a bonus.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The ideal candidate for this role should have a strong background in SQL, CDP (Treasure Data), Python/Digdag, and Presto/SQL for data engineering. Knowledge and hands-on experience with cloud technologies such as Microsoft Azure and AWS, ETL processes, and API integration tools are essential. Proficiency in Python and SQL is a must, along with exposure to Big Data technologies like Presto, Hadoop, Cassandra, and MongoDB. Previous experience with CDP implementation using Treasure Data or similar platforms such as ActionIQ would be a significant advantage.

Familiarity with data modelling and architecture is preferred, as are excellent SQL and advanced SQL skills. Knowledge of or experience with data visualization tools like Power BI and an understanding of AI/ML concepts would be beneficial. The candidate should hold a BE/BTech degree and be actively involved in requirements gathering, demonstrating the ability to create technical documentation and possessing strong analytical and problem-solving skills.

The role entails working on end-to-end implementation of CDP projects, participating in CDP BAU activities and go-live cut-overs, and providing day-to-day CDP application support. Automation of existing tasks and flexibility with working hours are expected, along with the ability to thrive in a process-oriented environment.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

bengaluru

Hybrid

Position: Data Engineer, Sr - I

OVERVIEW
The Data Engineer, Sr - I will work closely with clients and provide high-level technical consulting services, configuration of the elluminate platform, and development and oversight for specific projects that include trial configuration, quality control, project management, assessing new technologies, process improvements, system validation, SOP development, clinical software implementations and integrations, platform configuration, and ETL and custom analytics development. As a Data Engineer, Sr - I, you will serve as the primary technical lead, spearheading consulting efforts related to clinical systems software. You will also design and develop reporting programs as needed.

KEY TASKS & RESPONSIBILITIES
- Leading consulting efforts and providing high-level technical consulting services to clients, including configuring the elluminate platform and overseeing specific projects such as trial configuration, quality control, and project management.
- Designing, developing, testing, and deploying efficient SQL code to support SDTM, custom reports, and visualizations using tools like MS SQL, elluminate Mapper, and Qlik.
- Providing technical guidance, training, and support to team members and users on processes, technology, and products.
- Managing multiple timelines and deliverables for single or multiple clients, and handling client communications as assigned.
- Possessing high-level debugging skills for ETL or analytical issues by referring to the required source data.
- Demonstrating in-depth knowledge of at least one elluminate module, with hands-on experience in all other modules.
- Delivering proactive technical support for all client-reported support tickets.
- Facilitating client onboarding workshops and conducting training sessions for end users on the elluminate platform.
- Configuring, migrating, and supporting the elluminate platform for assigned clients.
- Creating and maintaining all required specifications and quality control documents as per SOPs and processes.
- Ensuring compliance with eClinical Solutions/industry quality standards, regulations, guidelines, and procedures.

CANDIDATE PROFILE

Education/Language:
- 5+ years of professional experience in a services or consulting role preferred
- Bachelor's or Master's degree in a science, technical, or business discipline, or equivalent experience, preferred
- 5+ years of database design and development experience preferred
- Understanding of cloud/hybrid data architecture concepts is a plus
- Prior experience in customer-facing roles is a plus
- Knowledge of clinical trial data is a plus (CDISC ODM, SDTM, or ADaM standards)
- Experience in the Pharmaceutical/Biotechnology/Life Science industry is a plus
- Experience with data capture tools (Rave, Veeva, InForm, IVRS)

Professional Skills & Experience:
- Critical observation and communication skills to identify any gaps in processes or products
- Ability to work with various technical and non-technical teams, both internal to eCS and at clients
- Team oriented, with strong collaboration, prioritization, and adaptability skills
- Excellent knowledge of English; verbal and written communication skills, with the ability to interact with users and clients and provide solutions
- Experience in the Life Sciences industry or a CRO/clinical trial regulated environment preferred
- Experience in regulatory computer systems validation a strong plus
- Strong analytical and problem-solving skills to identify issues and develop creative solutions that drive results
- Conveying information clearly and concisely to diverse audiences, facilitating understanding and collaboration
- Working effectively in a team environment, contributing to group objectives, and supporting colleagues
- Adapting to changing circumstances and accepting new challenges with a positive attitude
- Understanding clinical trial data and applying CDISC SDTM/ADaM standards
- Performing other duties as assigned

Technical Skills & Experience:
- Proficient in SQL, T-SQL, or PL/SQL programming, or in R, Python, or SAS
- Proficiency in Microsoft Office applications, specifically MS Project and MS Excel
- Experience with multiple database platforms: Oracle, SQL Server, Teradata, DB2
- Experience with data reporting tools: QlikSense, QlikView, Spotfire, Tableau, JReview, Business Objects, Cognos, MicroStrategy, IBM DataStage, Informatica, Spark, or related
- Familiarity with other languages and concepts: .NET, C#, Python, R, Java, HTML, SSRS, AWS, Azure, Spark, REST APIs, Big Data, ETL, data pipelines, data modelling, data analytics, BI, data warehouse, data lake, or related

Posted 1 week ago

Apply

6.0 - 10.0 years

18 - 20 Lacs

chennai

Remote

We are seeking a highly experienced and motivated Senior SAP BW Developer with a minimum of 8 years of hands-on experience. The ideal candidate will possess an in-depth, expert-level understanding of all SAP BW artifacts and demonstrate strong, proven programming proficiency in SAP ABAP and SQL. This role requires a candidate who can independently design, develop, implement, and optimize complex data warehousing solutions within the SAP BW environment to meet critical business requirements. You will be a key technical expert, providing guidance and mentorship to junior team members and contributing to the overall data strategy.

Responsibilities:
- Work with customer technical leads, client executives, and partners to manage and deliver successful implementations of cloud solutions, becoming a trusted advisor to decision makers throughout the engagement.
- Work with internal specialists and product and engineering teams to package best practices and lessons learned into thought leadership, methodologies, and published assets.
- Interact with sales, partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks/issues, and timelines for successful client outcomes.
- Advocate for customer needs in order to overcome adoption blockers and drive new feature development based on your field experience.
- Propose architectures for SAP products and manage the deployment of cloud-based SAP solutions according to complex customer requirements and implementation best practices.

Requirements:
- Minimum of 8 years of dedicated experience in SAP BW implementation, development, and support.
- Design, develop, and implement robust and scalable SAP BW solutions (7.5 and below): end-to-end BW solution development including data modelling, extraction, transformation, loading, and reporting.
- Expertise and comprehensive hands-on knowledge of SAP BW artifacts covering data modelling, ETL processes, process orchestration, reporting, and authorizations.
- Strong, demonstrable proficiency in SAP ABAP programming, specifically within the BW context (transformations, extractors, function modules).
- Solid experience with SQL scripting, including complex queries and performance optimization.
- System optimization and performance tuning: identify and resolve performance bottlenecks in BW data loads, queries, and overall system performance.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As an Enterprise Data Architect at Ramboll Tech, you will play a vital role in transforming data into a strategic asset, ensuring it is well structured, governed, and effectively leveraged for business growth. Your responsibilities will include identifying, analyzing, and recommending how information assets drive business outcomes, as well as sharing consistent data throughout Ramboll. By joining our Technology & Data Architecture team, you will collaborate with Domain Enterprise Architects and the Data Strategy and Data Platform teams to shape the enterprise data layer, and partner with Innovation and Digital Transformation Directors to drive digitalization, innovation, and the scaling of digital solutions across various business domains.

Your focus will be on delivering value by developing data strategies, roadmaps, and solutions that directly address the challenges and opportunities within our business areas. You will design and implement modern data architectures using cutting-edge technologies, ensure alignment with business objectives, and integrate disparate business systems and data sources to facilitate seamless data flow across the organization. You will also play a crucial part in designing and developing data models that support business processes, analytics, and reporting requirements, collaborating with cross-functional teams, including business stakeholders, data scientists, and data engineers, to understand data requirements and deliver solutions that meet business needs. Your expertise in data architecture, cloud platforms, data integration, and data modeling will be essential in driving our digital transformation journey.

We are looking for a candidate with a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 5 years of professional experience in data architecture. Experience with cloud platforms such as Microsoft Azure, GCP, or AWS, as well as a deep understanding of modern data stack components, is required. Strong skills in data modeling, ETL processes, and data integration are essential, along with experience in data governance practices. Your exceptional analytical and problem-solving skills, combined with your ability to design innovative solutions to complex data challenges, will be key to your success in this role. Effective communication and interpersonal skills will enable you to convey technical concepts to non-technical stakeholders and exert influence within a matrixed organization. By continuously evaluating and recommending new tools and technologies, you will help improve the efficiency and effectiveness of data engineering processes within Ramboll Tech.

Join us in shaping a more sustainable future through data-centric principles and innovative solutions. Apply now to be part of our dynamic team at Ramboll Tech and make a meaningful impact on our digital transformation journey.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

You will be responsible for interacting with clients (internal or external) to understand their problems and develop solutions that meet their needs. You will lead projects and collaborate with a team to ensure that requirements are identified, user stories are created, and work is planned effectively to deliver solutions that align with evolving business needs. Managing team activities, strategizing task approaches, setting timelines, and delegating tasks among team members will be part of your role. You will conduct meetings, document findings, and communicate effectively with clients, management, and cross-functional teams. Additionally, you will create ad-hoc reports for internal requests and automate processes using data transformation tools.

To excel in this role, you should possess strong analytical, problem-solving, interpersonal, and communication skills. Proficiency in DBMS, data modelling, and SQL querying is essential. Experience with cloud technologies (GCP/AWS/Azure/Snowflake), ETL/ELT pipelines, CI/CD, orchestration tools (e.g., Apache Airflow, GCP Workflows; see the sketch below), and Python for ETL/ELT processes and data modeling is required. You should also be adept at creating reports and dashboards using Power BI/Tableau and have knowledge of ML models and GenAI for modern architectures. Experience with version control platforms like GitHub, IaC tools (e.g., Terraform, Ansible), stakeholder management, client communication, and the Financial Services domain will be advantageous, as will familiarity with Machine Learning tools and techniques.

Ideally, you should have 3-7 years of experience and hold a BTech/MTech/BE/ME/MBA in Analytics. Compensation will be in line with industry standards.
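For context only: a minimal Apache Airflow DAG sketch (TaskFlow API, Airflow 2.x assumed) of the orchestrated ELT work this posting describes. The DAG id, schedule, and task bodies are hypothetical.

```python
# Illustrative extract -> transform -> load DAG.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def elt_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling rows from a source system
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        return [{**r, "value_doubled": r["value"] * 2} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a warehouse write

    load(transform(extract()))


elt_pipeline()
```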

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

maharashtra

On-site

As a Data Engineer with 7-10 years of experience, you will be responsible for architecting, creating, and maintaining data pipelines and ETL processes in AWS. Your role will involve supporting and optimizing the current desktop data tool set and Excel analysis pipeline toward a transformative, cloud-based, highly scalable architecture. You will work in an agile environment within a collaborative, cross-functional product team using Scrum and Kanban methodologies.

Collaboration is key in this role, as you will work closely with data science teams and business analysts to refine data requirements for various initiatives and data consumption needs. Additionally, you will educate and train colleagues such as data scientists, analysts, and stakeholders in data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their use cases.

Your expertise in programming languages like Python, Spark, and SQL will be essential, along with prior experience with AWS services such as Lambda, Glue, Step Functions, CloudFormation, and the CDK. Knowledge of building bespoke ETL solutions, data modeling, and T-SQL for managing business data and reporting is also crucial for this role. You should be capable of conducting technical deep-dives into code and architecture, and able to design, build, and manage data pipelines encompassing data transformation, data models, schemas, metadata, and workload management.

Furthermore, your role will involve working with data science teams to refine and optimize data science and machine learning models and algorithms. Effective communication skills are essential to collaborate across departments and to ensure compliance and governance during data use. You will be expected to work within, and promote, a DevOps culture and Continuous Delivery process to enhance efficiency and productivity. This position offers the opportunity to be part of a dynamic team that aims to drive positive change through technology and innovation.

Please note that this role is based in Mumbai, with the flexibility to work remotely from anywhere in India.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Product Manager, your main responsibility will be to drive the development of new products and product changes in collaboration with cross-functional team members. You will be tasked with converting legacy data assets into strategic canonical models and integrating data feeds, and it will be crucial to develop expertise in financing data products to maximize their value creation. You will co-create a clear product vision, strategy, and roadmap that covers the entire product development lifecycle, including creation, version upgrades, infrastructure upgrades, and eventual decommissioning.

Operating within an agile team, you will actively participate in agile ceremonies to ensure that product requirements are well understood by designers and engineers. Furthermore, you will oversee testing of the product with clients to gather insights and ensure readiness for market entry, including UX testing, end-user testing, and research to refine the product. Your role will also involve generating insights for the financing business by extracting additional value from data and analytics. Ensuring the quality and compliance of the product is also part of your responsibilities, including maintaining technical and non-functional requirements such as production stability, performance, and maintainability.

You will be part of the IB Financing Data and Analytics team in Pune, working on Cloud data mesh adoption and democratizing MI and insights for business users. The team is focused on modernizing tooling and rebuilding Azure cloud-native solutions to deliver valuable insights.

To excel in this role, you should have experience working in agile environments and possess a deep understanding of agile delivery frameworks and product management. Previous experience delivering complex products to clients, especially in the IB financing domain, will be beneficial. A team player with a proactive and enthusiastic personality, you should demonstrate a true agile mindset and strong analytical and problem-solving skills. Effective communication and active listening skills are essential for building networks and partnerships at all levels. Additionally, expertise in data modeling and a minimum of 8 years of experience managing world-class analytics products will be highly valuable. Certifications such as CFA, Microsoft Azure Fundamentals, or Databricks Fundamentals would be a bonus.

Posted 1 week ago

Apply

12.0 - 17.0 years

0 Lacs

karnataka

On-site

You will be joining HCLTech as a Sr Trino Lead with 12 to 17 years of experience. The ideal candidate should have a notice period of immediate to 30 days, or a maximum of 60 days.

In this role, your responsibilities will include:
- Demonstrating strong proficiency in SQL for querying and managing data effectively.
- Integrating and querying data from various sources such as databases, file systems, and data lakes.
- Optimizing query performance by leveraging parallel processing and query optimization techniques.
- Applying data modeling techniques to ensure efficient data structuring.
- Applying ETL skills to facilitate data extraction, transformation, and loading processes.
- Using experience with Linux operating systems to configure system limits and manage file descriptors effectively.

If you meet the above requirements and are ready to take on this challenging role, we encourage you to apply and be a part of our dynamic team at HCLTech.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

Greetings from Coders Brain Technology Pvt. Ltd. Coders Brain is a global leader in services, digital, and business solutions, partnering with clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers.

We are looking for a candidate in Bangalore with 3-5 years of experience, proficient in data warehousing, data modelling, and data engineering, including building a master data lake on the AWS Cloud. If you possess the required skills and experience and are interested in this opportunity, please click the apply button. Alternatively, you can send your resume to prerna.jain@codersbrain.com / pooja.gupta@codersbrain.com.

Join us at Coders Brain and be a part of our dynamic team driving innovation and excellence in the industry.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

You should have at least 10+ years of experience in data architecture and leading data-driven projects. You will need a strong understanding of the various data modelling paradigms (Kimball, Inmon, data marts, Data Vault, Medallion, etc.), along with solid expertise in cloud-based data strategies (AWS preferred) and big data technologies. Designing data pipelines for ETL will be a key part of your role, so expert knowledge of ingestion, transformation, and data quality is a must.

Hands-on experience in SQL is necessary, along with a deep understanding of PostgreSQL development, query optimization, and index design. You should be able to understand and manipulate intermediate-to-complex SQL and have thorough knowledge of Postgres PL/SQL for working with complex warehouse workflows. You will use advanced SQL concepts such as RANK and DENSE_RANK and apply advanced statistical concepts through SQL (see the sketch below). Working experience with PostgreSQL extensions like PostGIS is desired.

Expertise in writing ETL pipelines that combine Python and SQL is a requirement, and an understanding of data manipulation libraries in Python such as Pandas, Polars, and DuckDB is desired. Additionally, you should have experience designing data visualizations using tools such as Tableau and Power BI.
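Purely illustrative, not part of the posting: one way to combine Python and SQL in a single ETL step using DuckDB (named in the description) with the RANK and DENSE_RANK window functions it mentions. The data and column names are invented.

```python
# DuckDB can query a pandas DataFrame in the local scope by its variable name.
import duckdb
import pandas as pd

sales = pd.DataFrame(
    {"region": ["N", "N", "S", "S"], "rep": ["a", "b", "c", "d"],
     "revenue": [100, 100, 80, 120]}
)

ranked = duckdb.sql("""
    SELECT region, rep, revenue,
           RANK()       OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk,
           DENSE_RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS drnk
    FROM sales
""").df()

print(ranked)  # ties in region N share rank 1 under both functions
```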

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As an ideal candidate for this role, you should possess a minimum of 5 years of experience spanning ETL and data integration, data modelling, system integration, and testing and validation, together with data engineering expertise, SAP systems knowledge, strong SQL skills, effective communication and collaboration skills, and a solid understanding of financial markets, instruments, and banking operations. Your expertise in these areas will be crucial in successfully fulfilling the responsibilities of this position.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
