
479 Data Lake Jobs - Page 19

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4 - 9 years

6 - 11 Lacs

Mumbai

Work from Office

Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must have skills: GCP, SQL, Data Engineering, Python
Good to have skills: Managing ETL pipelines

Job Summary:
We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win and grow. We provide everything they need to grow their client portfolios, optimize their deals and enable their sales talent, all driven by sales intelligence. The team is aligned to Client Success, a new function that supports Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future, beyond their expectations.

You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.

The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools, providing insights into underlying trends and key drivers of the business.

Roles & Responsibilities:
- Collaborate with Client Success, the Analytics COE, the CIO Engineering/DevOps team, and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for dashboards and reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support business operations and data transfers.
- Document and build efficient data models for reporting and analytics use cases.
- Assure the accuracy, consistency, and timeliness of data lake data while ensuring user acceptance and satisfaction.
- Work with Client Success, Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake.

Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.

Additional Information (nice to have):
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.

About Our Company | Accenture

Qualifications
Experience: Minimum 5+ years of experience is required
Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field
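For candidates unfamiliar with the day-to-day, the SQL-transformation-for-dashboards work described above might look like the minimal sketch below. It assumes a BigQuery-backed data lake (the posting names GCP, SQL, and Python but no specific services); the project, dataset, and table names are invented placeholders.

```python
# Hypothetical sketch: transform raw deal data into a reporting table for a
# dashboard, then run a basic completeness check. All names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes default credentials

TRANSFORM_SQL = """
CREATE OR REPLACE TABLE reporting.client_success_summary AS
SELECT
  client_id,
  DATE_TRUNC(close_date, MONTH) AS close_month,
  SUM(deal_value) AS total_value,
  COUNT(*) AS deal_count
FROM lake.raw_deals
WHERE close_date IS NOT NULL
GROUP BY client_id, close_month
"""

def refresh_summary() -> None:
    client.query(TRANSFORM_SQL).result()  # blocks until the job finishes
    # Completeness check: the refreshed summary should not be empty.
    rows = client.query(
        "SELECT COUNT(*) AS n FROM reporting.client_success_summary"
    ).result()
    if next(iter(rows)).n == 0:
        raise RuntimeError("Summary table is empty; validation failed")

if __name__ == "__main__":
    refresh_summary()
```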

Posted 2 months ago

Apply

7 - 12 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Cloud Data Architecture
Good to have skills: NA
Minimum 7.5 years of experience is required
Educational Qualification: 15 years full-time education

Data Architect
Kemper is seeking a Data Architect to join our team. You will work as part of a distributed team and with Infrastructure, Enterprise Data Services and Application Development teams to coordinate the creation, enrichment, and movement of data throughout the enterprise. Your central responsibility as an architect will be improving the consistency, timeliness, quality, security, and delivery of data as part of Kemper's Data Governance framework. In addition, the Architect must streamline data flows and optimize cost management in a hybrid cloud environment. Your duties may include assessing architectural models and supervising data migrations across IaaS, PaaS, SaaS and on-premises systems, as well as data platform selection and onboarding of data management solutions that meet the technical and operational needs of the company. To succeed in this role, you should know how to examine new and legacy requirements and define cost-effective patterns to be implemented by other teams. You must then be able to represent required patterns during implementation projects. The ideal candidate will have proven experience in cloud (Snowflake, AWS and Azure) architectural analysis and management.

Responsibilities
- Define architectural standards and guidelines for data products and processes. Assess and document when and how to use existing and newly architected producers and consumers, the technologies to be used for various purposes, and models of selected entities and processes. The guidelines should encourage reuse of existing data products, as well as address issues of security, timeliness, and quality.
- Work with Information & Insights, Data Governance, Business Data Stewards, and Implementation teams to define standard and ad-hoc data products and data product sets.
- Work with Enterprise Architecture, Security, and Implementation teams to define the transformation of data products throughout hybrid cloud environments, assuring that both functional and non-functional requirements are addressed. This includes ownership, frequency of movement, the source and destination of each step, how the data is transformed as it moves, and any aggregation or calculations.
- Work with Data Governance and project teams to model and map data sources, including descriptions of the business meaning of the data, its uses, its quality, the applications that maintain it, and the technologies in which it is stored. Documentation of a data source must describe the semantics of the data so that occasional subtle differences in meaning are understood.
- Define integrative views of data to draw together data from across the enterprise. Some views will use data stores of extracted data and others will bring together data in near real time. Solutions must consider data currency, availability, response times, data volumes, etc.
- Work with modeling and storage teams to define Conceptual, Logical and Physical data views, limiting technical debt as data flows through transformations.
- Investigate and lead participation in POCs of emerging technologies and practices.
- Leverage and evolve existing [core] data products and patterns.
- Communicate and lead understanding of data architectural services across the enterprise.
- Ensure a focus on data quality by working effectively with data and system stewards.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience.
- A minimum of 3 years' experience in a similar role.
- Demonstrable knowledge of Secure DevOps and SDLC processes.
- Must have AWS or Azure experience.
- Experience with Data Vault 2 required; Snowflake a plus.
- Familiarity with system concepts and tools within an enterprise architecture framework, including Cataloging, MDM, RDM, Data Lakes, Storage Patterns, etc.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Good written and verbal communication skills.

Posted 2 months ago

Apply

9 - 11 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Data Modeling Techniques and Methodologies
Good to have skills: Data Engineering, Cloud Data Migration
Minimum 9 years of experience is required
Educational Qualification: BE or BTech

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Key Responsibilities:
1. Drive discussions with client deal teams to understand business requirements and how the Industry Data Model fits into implementation and solutioning.
2. Develop the solution blueprint and the scoping, estimation and staffing for the delivery project and solutioning.
3. Drive discovery activities and design workshops with the client, and lead strategic road-mapping and operating-model design discussions.
4. Good to have: Data Vault, cloud DB design, graph data modeling, ontology, data engineering, data lake design.

Technical Experience:
1. 9+ years overall experience, including 4+ years in data modeling: cloud DB models, 3NF, dimensional modeling, and conversion of RDBMS data models to graph data models; instrumental in DB design through all stages of the data model.
2. Experience on at least one cloud DB design engagement; must be familiar with data architecture principles.

Professional Attributes:
1. Strong requirement analysis and technical solutioning skills in Data and Analytics.
2. Excellent writing, communication and presentation skills.
3. Eagerness to learn and develop on an ongoing basis.
4. Excellent client-facing and interpersonal skills.

Educational Qualification: BE or BTech
Additional Information: Experience in estimation, PoVs, and solution-approach creation; experience in data transformation and analytics projects, and DWH.
Qualification: BE or BTech

Posted 2 months ago

Apply

7 - 12 years

3 - 7 Lacs

Bengaluru

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Cloud Data Architecture
Good to have skills: Cloud Infrastructure
Minimum 7.5 years of experience is required
Educational Qualification: 15 years full-time education

Data Architect
Kemper is seeking a Data Architect to join our team. You will work as part of a distributed team and with Infrastructure, Enterprise Data Services and Application Development teams to coordinate the creation, enrichment, and movement of data throughout the enterprise. Your central responsibility as an architect will be improving the consistency, timeliness, quality, security, and delivery of data as part of Kemper's Data Governance framework. In addition, the Architect must streamline data flows and optimize cost management in a hybrid cloud environment. Your duties may include assessing architectural models and supervising data migrations across IaaS, PaaS, SaaS and on-premises systems, as well as data platform selection and onboarding of data management solutions that meet the technical and operational needs of the company. To succeed in this role, you should know how to examine new and legacy requirements and define cost-effective patterns to be implemented by other teams. You must then be able to represent required patterns during implementation projects. The ideal candidate will have proven experience in cloud (Snowflake, AWS and Azure) architectural analysis and management.

Responsibilities
- Define architectural standards and guidelines for data products and processes. Assess and document when and how to use existing and newly architected producers and consumers, the technologies to be used for various purposes, and models of selected entities and processes. The guidelines should encourage reuse of existing data products, as well as address issues of security, timeliness, and quality.
- Work with Information & Insights, Data Governance, Business Data Stewards, and Implementation teams to define standard and ad-hoc data products and data product sets.
- Work with Enterprise Architecture, Security, and Implementation teams to define the transformation of data products throughout hybrid cloud environments, assuring that both functional and non-functional requirements are addressed. This includes ownership, frequency of movement, the source and destination of each step, how the data is transformed as it moves, and any aggregation or calculations.
- Work with Data Governance and project teams to model and map data sources, including descriptions of the business meaning of the data, its uses, its quality, the applications that maintain it, and the technologies in which it is stored. Documentation of a data source must describe the semantics of the data so that occasional subtle differences in meaning are understood.
- Define integrative views of data to draw together data from across the enterprise. Some views will use data stores of extracted data and others will bring together data in near real time. Solutions must consider data currency, availability, response times, data volumes, etc.
- Work with modeling and storage teams to define Conceptual, Logical and Physical data views, limiting technical debt as data flows through transformations.
- Investigate and lead participation in POCs of emerging technologies and practices.
- Leverage and evolve existing [core] data products and patterns.
- Communicate and lead understanding of data architectural services across the enterprise.
- Ensure a focus on data quality by working effectively with data and system stewards.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience.
- A minimum of 3 years' experience in a similar role.
- Demonstrable knowledge of Secure DevOps and SDLC processes.
- Must have AWS or Azure experience.
- Experience with Data Vault 2 required; Snowflake a plus.
- Familiarity with system concepts and tools within an enterprise architecture framework, including Cataloging, MDM, RDM, Data Lakes, Storage Patterns, etc.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Good written and verbal communication skills.

Posted 2 months ago

Apply

3 - 8 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: GCP Dataflow
Good to have skills: Google BigQuery
Minimum 3 years of experience is required
Educational Qualification: Any Graduate

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems using GCP Dataflow.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing using GCP Dataflow.
- Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems.
- Collaborate with cross-functional teams to identify and resolve data-related issues.
- Develop and maintain documentation related to data solutions and processes.
- Stay updated with the latest advancements in data engineering technologies and integrate innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must Have: Experience in GCP Dataflow.
- Good To Have: Experience in Google BigQuery.
- Strong understanding of ETL processes and data migration.
- Experience in data modeling and database design.
- Experience in data warehousing and data lake concepts.
- Experience in programming languages such as Python, Java, or Scala.

Additional Information:
- The candidate should have a minimum of 3 years of experience in GCP Dataflow.
- The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Indore office.

Qualification: Any Graduate
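To give a concrete flavour of the GCP Dataflow work this role centres on, here is a minimal Apache Beam pipeline sketch in Python; the bucket paths and the transform are invented placeholders, and it runs locally on the DirectRunner or on Dataflow with the usual --runner/--project flags.

```python
# Minimal Apache Beam pipeline sketch; runnable locally, or on Dataflow by
# passing --runner=DataflowRunner plus project/region/temp_location flags.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line: str) -> dict:
    # Assumes one JSON event per line; a real pipeline needs error handling.
    return json.loads(line)

def run() -> None:
    options = PipelineOptions()  # picks up --runner, --project, etc.
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
            | "Parse" >> beam.Map(parse_event)
            | "KeepValid" >> beam.Filter(lambda e: "user_id" in e)
            | "Format" >> beam.Map(json.dumps)
            | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/events")
        )

if __name__ == "__main__":
    run()
```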

Posted 2 months ago

Apply

7 - 12 years

11 - 21 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Dear Candidates,
Looking for Databricks / Data Analytics / Data Engineering / Data Lake / AWS / PySpark professionals for an MNC client.
Notice period: Immediate / max 15 days.
Location: Hyderabad

Please find the job description below:
- Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Python (programming language).
- Strong understanding of data analytics and data processing.
- Experience in building and configuring applications.
- Knowledge of the software development lifecycle.
- Ability to troubleshoot and debug applications.
Additional Information: The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions.
- Build and operate very large data warehouses or data lakes.
- ETL optimization, designing, coding, and tuning big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.

Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake.
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1+ years of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Professional Attributes:
- Ready to work in B shift (12 PM - 10 PM).
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Healthcare knowledge.
- Good communication skills.

Educational Qualification: Bachelor of Engineering / Bachelor of Technology. A 15-year full-time education is required.
Additional Information: Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS are a plus. The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office.

If you are interested in the above position, kindly mention the above details and share your profile with akdevi@crownsolution.com.
Regards,
Devi
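As a rough illustration of the PySpark/Delta Lake engineering this posting asks for, consider the sketch below; the S3 paths and column names are invented, and it assumes a Databricks or Delta-enabled Spark environment rather than anything from the client's actual stack.

```python
# Illustrative PySpark batch job: read raw JSON from S3, clean it, and write
# a Delta table partitioned by date. All paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")

clean = (
    raw.filter(F.col("event_id").isNotNull())      # drop incomplete records
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])               # basic data-quality step
)

(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("s3://example-bucket/curated/events_delta"))
```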

Posted 2 months ago

Apply

8 - 13 years

6 - 11 Lacs

Gurugram

Work from Office

AHEAD is looking for a Senior Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Senior Data Engineer will be responsible for strategic planning and hands-on engineering of big data and cloud environments that support our clients' advanced analytics, data science, and other data platform initiatives. This consultant will design, build, and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. They will be expected to be hands-on technically, but also present to leadership and lead projects.

The Senior Data Engineer will work on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools and architectures, as well as design and integration of existing transactional processing systems. As a Senior Data Engineer, you will design and implement data pipelines to enable analytics and machine learning on rich datasets.

Roles and Responsibilities
- Design, build, operationalize, secure, and monitor data processing systems.
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets.
- Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases.
- Engineer and support data structures, including but not limited to SQL and NoSQL databases.
- Engineer and maintain ELT processes for loading data lakes (Snowflake, cloud storage, Hadoop).
- Engineer APIs for returning data from these structures to enterprise applications.
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions.
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges.
- Work with other scrum team members to estimate and deliver work inside of a sprint.
- Research data questions, identify root causes, and interact closely with business users and technical resources.

Qualifications
- 8+ years of professional technical experience.
- 4+ years of hands-on data architecture and data modelling.
- 4+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, or Snowflake.
- 4+ years with programming languages such as Python.
- 2+ years of experience working in cloud environments (AWS and/or Azure).
- Strong client-facing communication and facilitation skills.

Key Skills: Python, Cloud, Linux, Windows, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Snowflake, SQL/RDBMS, OLAP, Data Engineering
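The Kinesis/Lambda streaming use case mentioned in the responsibilities could be served by a handler along these lines; a hedged sketch only, with the payload shape and downstream sink assumed for illustration.

```python
# Sketch of an AWS Lambda handler consuming a Kinesis stream. The payload
# shape and the downstream sink are assumptions made for illustration.
import base64
import json

def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        # Kinesis delivers the record body base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        # A real pipeline would land this in a lake or warehouse (e.g. via
        # Firehose or an S3/Snowflake writer); here we just log a summary.
        print(json.dumps({"event_id": doc.get("id"), "type": doc.get("type")}))
        processed += 1
    return {"processed": processed}
```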

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Mumbai

Work from Office

Who you are
A seasoned Data Engineer with a passion for building and managing data pipelines in large-scale environments, with good experience in big data technologies, data integration frameworks, and cloud-based data platforms, and a strong foundation in Apache Spark, PySpark, Kafka, and SQL.

What you'll do
As a Data Engineer - Data Platform Services, your responsibilities include:

Data Ingestion & Processing
- Assisting in building and optimizing data pipelines for structured and unstructured data.
- Working with Kafka and Apache Spark to manage real-time and batch data ingestion.
- Supporting data integration using IBM CDC and Universal Data Mover (UDM).

Big Data & Data Lakehouse Management
- Managing and processing large datasets using PySpark and Iceberg tables.
- Assisting in migrating data workloads from IIAS to Cloudera Data Lake.
- Supporting data lineage tracking and metadata management for compliance.

Optimization & Performance Tuning
- Helping to optimize PySpark jobs for efficiency and scalability.
- Supporting data partitioning, indexing, and caching strategies.
- Monitoring and troubleshooting pipeline issues and performance bottlenecks.

Security & Compliance
- Implementing role-based access controls (RBAC) and encryption policies.
- Supporting data security and compliance efforts using Thales CipherTrust.
- Ensuring data governance best practices are followed.

Collaboration & Automation
- Working with data scientists, analysts, and DevOps teams to enable seamless data access.
- Assisting in the automation of data workflows using Apache Airflow.
- Supporting Denodo-based data virtualization for efficient data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
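To make the Kafka-plus-Spark ingestion into Iceberg tables concrete, here is a minimal Structured Streaming sketch; the broker address, topic, schema, and table name are placeholders, and it assumes the Iceberg Spark runtime is configured on the cluster (as it typically is on CDP).

```python
# Minimal PySpark Structured Streaming sketch: Kafka topic -> Iceberg table.
# Broker, topic, schema, and table name are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-iceberg").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account", StringType()),
    StructField("ts", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "transactions")
         .load()
         # Kafka values arrive as bytes; parse the JSON body.
         .select(F.from_json(F.col("value").cast("string"), schema).alias("v"))
         .select("v.*")
)

query = (
    stream.writeStream
          .format("iceberg")
          .outputMode("append")
          .option("checkpointLocation", "/tmp/checkpoints/txn")
          .toTable("lake.transactions")  # placeholder database.table
)
query.awaitTermination()
```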

Posted 2 months ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office

What you'll do
As a Data Engineer - Data Modeling, you will be responsible for:

Data Modeling & Schema Design
- Developing conceptual, logical, and physical data models to support enterprise data requirements.
- Designing schema structures for Apache Iceberg tables on Cloudera Data Platform.
- Collaborating with ETL developers and data engineers to optimize data models for efficient ingestion and retrieval.

Data Governance & Quality Assurance
- Ensuring data accuracy, consistency, and integrity across data models.
- Supporting data lineage and metadata management to enhance data traceability.
- Implementing naming conventions, data definitions, and standardization in collaboration with governance teams.

ETL & Data Pipeline Support
- Assisting in the migration of data from IIAS to Cloudera Data Lake by designing efficient data structures.
- Working with Denodo for data virtualization, ensuring optimized data access across multiple sources.
- Collaborating with teams using Talend Data Quality (DQ) tools to ensure high-quality data in the models.

Collaboration & Documentation
- Working closely with business analysts, architects, and reporting teams to understand data requirements.
- Maintaining data dictionaries, entity relationships, and technical documentation for data models.
- Supporting data visualization and analytics teams by designing reporting-friendly data models.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in data modeling, database design, and data engineering.
- Hands-on experience with ERwin Data Modeler for creating and managing data models.
- Strong knowledge of relational databases (PostgreSQL) and big data platforms (Cloudera, Apache Iceberg).
- Proficiency in SQL and NoSQL database concepts.
- Understanding of data governance, metadata management, and data security principles.
- Familiarity with ETL processes and data pipeline optimization.
- Strong analytical, problem-solving, and documentation skills.

Preferred technical and professional experience
- Experience working on Cloudera migration projects.
- Exposure to Denodo for data virtualization and Talend DQ for data quality management.
- Knowledge of Kafka, Airflow, and PySpark for data processing.
- Familiarity with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance.
- Certifications in Data Modeling, Cloudera Data Engineering, or IBM Data Solutions.
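A small sketch of the Iceberg schema-design work described above, assuming Spark SQL on an Iceberg-enabled cluster; the table and columns are invented for illustration.

```python
# Sketch: create a partitioned Iceberg table and evolve its schema in place.
# Assumes an Iceberg-enabled Spark session; all names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-modeling").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.customer_accounts (
        account_id  BIGINT,
        customer_id BIGINT,
        opened_on   DATE,
        status      STRING
    )
    USING iceberg
    PARTITIONED BY (months(opened_on))   -- Iceberg hidden partitioning
""")

# Iceberg supports in-place schema evolution, which matters during migrations
# like IIAS -> Cloudera where source schemas shift over time.
spark.sql("ALTER TABLE lake.customer_accounts ADD COLUMN branch_code STRING")
```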

Posted 2 months ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office

Who you are
A highly skilled Data Engineer specializing in data modeling, with experience designing, implementing, and optimizing data structures that support the storage, retrieval and processing of data in large-scale enterprise environments. You have expertise in conceptual, logical, and physical data modeling, along with a deep understanding of ETL processes, data lake architectures, and modern data platforms. You are proficient in ERwin, PostgreSQL, Apache Iceberg, Cloudera Data Platform, and Denodo, and your ability to work with cross-functional teams, data architects, and business stakeholders ensures that data models align with enterprise data strategies and support analytical use cases effectively.

What you'll do
As a Data Engineer - Data Modeling, you will be responsible for:

Data Modeling & Architecture
- Designing and developing conceptual, logical, and physical data models to support data migration from IIAS to Cloudera Data Lake.
- Creating and optimizing data models for structured, semi-structured, and unstructured data stored in Apache Iceberg tables on Cloudera.
- Establishing data lineage and metadata management for the new data platform.
- Implementing Denodo-based data virtualization models to ensure seamless data access across multiple sources.

Data Governance & Quality
- Ensuring data integrity, consistency, and compliance with regulatory standards, including banking/regulatory guidelines.
- Implementing Talend Data Quality (DQ) solutions to maintain high data accuracy.
- Defining and enforcing naming conventions, data definitions, and business rules for structured and semi-structured data.

ETL & Data Pipeline Optimization
- Supporting the migration of ETL workflows from IBM DataStage to PySpark, ensuring models align with the new ingestion framework.
- Collaborating with data engineers to define schema evolution strategies for Iceberg tables.
- Ensuring performance optimization for large-scale data processing on Cloudera.

Collaboration & Documentation
- Working closely with business analysts, architects, and developers to translate business requirements into scalable data models.
- Documenting the data dictionary, entity relationships, and mapping specifications for data migration.
- Supporting reporting and analytics teams (Qlik Sense/Tableau) by providing well-structured data models.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in data modeling, database design, and data engineering.
- Hands-on experience with ERwin Data Modeler for creating and managing data models.
- Strong knowledge of relational databases (PostgreSQL) and big data platforms (Cloudera, Apache Iceberg).
- Proficiency in SQL and NoSQL database concepts.
- Understanding of data governance, metadata management, and data security principles.
- Familiarity with ETL processes and data pipeline optimization.
- Strong analytical, problem-solving, and documentation skills.

Preferred technical and professional experience
- Experience working on Cloudera migration projects.
- Exposure to Denodo for data virtualization and Talend DQ for data quality management.
- Knowledge of Kafka, Airflow, and PySpark for data processing.
- Familiarity with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance.
- Certifications in Data Modeling, Cloudera Data Engineering, or IBM Data Solutions.

Posted 2 months ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office

Who you are
A highly skilled Data Engineer specializing in data modeling, with experience designing, implementing, and optimizing data structures that support the storage, retrieval and processing of data in large-scale enterprise environments. You have expertise in conceptual, logical, and physical data modeling, along with a deep understanding of ETL processes, data lake architectures, and modern data platforms. You are proficient in ERwin, PostgreSQL, Apache Iceberg, Cloudera Data Platform, and Denodo, and your ability to work with cross-functional teams, data architects, and business stakeholders ensures that data models align with enterprise data strategies and support analytical use cases effectively.

What you'll do
As a Data Engineer - Data Modeling, you will be responsible for:

Data Modeling & Architecture
- Designing and developing conceptual, logical, and physical data models to support data migration from IIAS to Cloudera Data Lake.
- Creating and optimizing data models for structured, semi-structured, and unstructured data stored in Apache Iceberg tables on Cloudera.
- Establishing data lineage and metadata management for the new data platform.
- Implementing Denodo-based data virtualization models to ensure seamless data access across multiple sources.

Data Governance & Quality
- Ensuring data integrity, consistency, and compliance with regulatory standards, including banking/regulatory guidelines.
- Implementing Talend Data Quality (DQ) solutions to maintain high data accuracy.
- Defining and enforcing naming conventions, data definitions, and business rules for structured and semi-structured data.

ETL & Data Pipeline Optimization
- Supporting the migration of ETL workflows from IBM DataStage to PySpark, ensuring models align with the new ingestion framework.
- Collaborating with data engineers to define schema evolution strategies for Iceberg tables.
- Ensuring performance optimization for large-scale data processing on Cloudera.

Collaboration & Documentation
- Working closely with business analysts, architects, and developers to translate business requirements into scalable data models.
- Documenting the data dictionary, entity relationships, and mapping specifications for data migration.
- Supporting reporting and analytics teams (Qlik Sense/Tableau) by providing well-structured data models.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Experience in Cloudera migration projects in the banking or financial sector.
- Knowledge of PySpark, Kafka, Airflow, and cloud-native data processing.
- Experience with Talend DQ for data quality monitoring.

Preferred technical and professional experience
- Familiarity with graph databases (DGraph Enterprise) for data relationships.
- Experience with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance.
- IBM, Cloudera, or AWS/GCP certifications in Data Engineering or Data Modeling.

Posted 2 months ago

Apply

9 - 12 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

Data Engineer

Responsibilities/Tasks:
- Work with Technical Leads and Architects to analyse solutions.
- Translate complex business requirements into tangible data requirements through collaborative work with both business and technical subject matter experts.
- Develop/modify data models with an eye towards high performance, scalability, flexibility, and usability.
- Ensure data models are in alignment with the overall architecture standards.
- Create source-to-target mapping documentation.
- Provide subject matter guidance in source system analysis and ETL build.
- Oversee data integrity in the Smart Conductor system.
- Serve as data flow and enrichment "owner" with deep expertise in data dynamics, capable of recognizing and elevating improvement opportunities early in the process.
- Work with product owners to understand business reporting requirements and deliver appropriate insights on a regular basis.
- Responsible for system configuration to deliver reports, data visualizations, and other solution components.

Skills Required:
- More than 5 years of software development experience.
- Proficient in Power BI/Tableau, Google Data Studio, R, SQL, Python.
- Strong knowledge of cloud computing and experience in Microsoft Azure: Azure ML Studio, Azure Machine Learning.
- Strong knowledge of SSIS.
- Proficient in Azure services: Azure Data Factory, Synapse, Data Lake.
- Experience querying, analysing, or managing data required.
- Experience within the healthcare insurance industry with payer data strongly preferred.
- Experience in data cleansing, data engineering, data enrichment, and data warehousing/business intelligence preferred.
- Strong analytical, problem-solving and planning skills.
- Strong organizational and presentation skills.
- Excellent interpersonal and communication skills.
- Ability to multi-task in a fast-paced environment.
- Flexibility to adapt readily to changing business needs in a fast-paced environment.
- Team player who is delivery-oriented and takes responsibility for the team's success.
- Enthusiastic, can-do attitude with the drive to continually learn and improve.
- Knowledge of Agile, SCRUM and/or Agile methodologies.

Required Skills: ETL, SQL, Azure

Posted 2 months ago

Apply

8 - 13 years

12 - 22 Lacs

Gurugram

Work from Office

Data & Information Architecture Lead
8 to 15 years - Gurgaon

Summary
An excellent opportunity for Data Architect professionals with expertise in data engineering, analytics, AWS and databases.

Location: Gurgaon

Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.

Responsibilities
- Design and optimize the architecture of an end-to-end data fabric, inclusive of data lake, data stores and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories.
- Undertake detailed analysis of the information management requirements across all systems, platforms and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with various business partners/consumers, the MIS team, the AI/ML team and other departments to design, deliver and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for (a) large and complex data transformation programs, (b) small and nimble data initiatives to realize quick gains, and (c) engagements with OEMs and partners to bring the best tools and delivery methods.
- Drive data domain modeling, data engineering and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility and scale.

Requirements
- Deep understanding of the data and information architecture discipline, processes, concepts and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modelling skills; strong analytics and reporting experience.
- Strong data design, management and maintenance experience.
- Strong experience with data modelling tools.
- Extensive experience with cloud-native lake technologies, e.g. AWS native lake solutions.
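One concrete slice of the AWS-native lake work referenced above is querying cataloged lake data through Athena; the sketch below is illustrative only, with the database, table, and results bucket assumed rather than taken from the posting.

```python
# Sketch: run an Athena query against a cataloged data-lake table and poll
# for completion. Database, table, and S3 output location are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

def run_query(sql: str) -> str:
    resp = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "lakehouse"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    qid = resp["QueryExecutionId"]
    while True:  # poll until the query reaches a terminal state
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(1)

print(run_query("SELECT portfolio, SUM(exposure) FROM positions GROUP BY 1"))
```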

Posted 2 months ago

Apply

3 - 8 years

4 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & Responsibilities
- Design, build, and maintain scalable and efficient data pipelines and ETL/ELT processes.
- Develop and optimize data models for analytics and operational purposes in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable datasets.
- Implement data quality checks, monitoring, and alerting for pipelines.
- Work with structured and unstructured data across various sources (APIs, databases, streaming).
- Ensure data security, compliance, and governance practices are followed.
- Write clean, efficient, and testable code using Python, SQL, or Scala.
- Support the development of data catalogs and documentation.
- Participate in code reviews and contribute to best practices in data engineering (a minimal orchestration sketch follows this listing).

Preferred Candidate Profile
- 3-9 years of hands-on experience in data engineering or a similar role.
- Strong proficiency in SQL, Python, and PySpark.
- Experience with a data pipeline orchestration tool such as Apache Airflow, Prefect, or Luigi (any one).
- Familiarity with a cloud platform such as AWS, Azure, or GCP, e.g. S3, Lambda, Glue, BigQuery, Dataflow (any one).
- Experience with a big data tool such as Spark, Kafka, Hive, or Hadoop (any one).
- Strong understanding of relational and non-relational databases.
- Exposure to CI/CD practices and tools (e.g., Git, Jenkins, Docker).
- Excellent problem-solving and communication skills.
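Since the profile asks for orchestration experience with Apache Airflow (or Prefect/Luigi), here is a minimal Airflow DAG sketch; it assumes a recent Airflow 2.x install, and the task bodies are stand-ins for real extract/transform/load logic.

```python
# Minimal Airflow DAG sketch: extract -> transform -> load, scheduled daily.
# Task logic is a stand-in; real tasks would call warehouse/lake APIs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source API")

def transform():
    print("clean and model the data")

def load():
    print("load into the warehouse")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency chain
```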

Posted 2 months ago

Apply

9 - 14 years

20 - 35 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Hybrid

Lead and manage data-driven projects from initiation to completion, ensuring timely delivery and alignment with business objectives. Experienced in at least one of the following engagements: BI/DW, DWH, Data Warehousing, Data Analytics, Business Intelligence, Data Projects, Data Management, Data Governance.

Responsibilities
- Collaborate with cross-functional teams, including data scientists, analysts, and engineers, to drive data-related initiatives.
- Develop and maintain project plans, timelines, and resource allocation for data & analytics projects.
- Oversee data collection, analysis, and visualization to support decision-making.
- Ensure data quality, integrity, and security compliance throughout project execution.
- Use data insights to optimize project performance, identify risks, and implement corrective actions.
- Communicate project progress, challenges, and key findings to stakeholders and senior management.
- Implement Agile or Scrum methodologies for efficient project execution in a data-focused environment.
- Stay updated with industry trends and advancements in data analytics and project management best practices.

Requirements
- Proven experience in project management, preferably in data analytics, business intelligence, or related domains.
- Strong analytical skills with experience in working with large datasets and deriving actionable insights.
- Excellent problem-solving skills and ability to work with complex datasets.
- Strong communication and stakeholder management skills.
- Knowledge of cloud platforms (AWS, Google Cloud, Azure) and database management is a plus.
- Certification in PMP, PRINCE2, or Agile methodologies is an advantage.

Locations: Pune/Bangalore/Hyderabad/Chennai/Mumbai

Posted 2 months ago

Apply

5 - 10 years

15 - 25 Lacs

Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: Snowflake Developer
Qualification: Any Graduate or above
Relevant Experience: 5 to 10 years
Required Technical Skill Set: Snowflake, Azure managed services platforms

Must-Have:
- Proficient in SQL programming (stored procedures, user-defined functions, CTEs, window functions).
- Design and implement Snowflake data warehousing solutions, including data modelling and schema design.
- Able to source data from APIs, data lakes, and on-premise systems into Snowflake.
- Process semi-structured data using Snowflake-specific features like VARIANT and LATERAL FLATTEN.
- Experience using Snowpipe to load micro-batch data.
- Good knowledge of caching layers, micro-partitions, clustering keys, clustering depth, materialized views, and scale in/out vs scale up/down of warehouses.
- Ability to implement data pipelines to handle data retention and data redaction use cases.
- Proficient in designing and implementing complex data models, ETL processes, and data governance frameworks.
- Strong hands-on experience in migration projects to Snowflake.
- Deep understanding of cloud-based data platforms and data integration techniques.
- Skilled in writing efficient SQL queries and optimizing database performance.
- Ability to develop and implement a real-time data streaming solution using Snowflake.

Location: PAN India
CTC Range: 25 LPA - 40 LPA
Notice Period: Any
Shift Timing: N/A
Mode of Interview: Virtual
Mode of Work: WFO (Work From Office)

Pooja Singh KS
IT Staffing Analyst
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
pooja.singh@blackwhite.in | www.blackwhite.in
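To illustrate the semi-structured-data skills listed above (VARIANT columns, LATERAL FLATTEN), here is a hedged sketch using the snowflake-connector-python package; the account, credentials, and object names are placeholders, not details from the client.

```python
# Sketch: query a VARIANT column with LATERAL FLATTEN in Snowflake.
# Connection parameters and object names are placeholders for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="RAW", schema="EVENTS",
)

FLATTEN_SQL = """
SELECT
    e.payload:order_id::STRING AS order_id,
    item.value:sku::STRING     AS sku,
    item.value:qty::NUMBER     AS qty
FROM raw_events e,
LATERAL FLATTEN(input => e.payload:items) item
"""

with conn.cursor() as cur:
    # Each array element in payload:items becomes one output row.
    for order_id, sku, qty in cur.execute(FLATTEN_SQL):
        print(order_id, sku, qty)
conn.close()
```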

Posted 2 months ago

Apply

3 - 6 years

20 - 25 Lacs

Hyderabad

Work from Office

Overview
As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, and will drive a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of platform engineers who build platform products for platform and cost optimization, build tools for Platform Ops and Data Ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help manage the platform governance team, which builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
- Actively contribute to cost optimization of platforms and services.
- Manage and scale Azure data platforms to support new product launches and drive platform stability and observability across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platform cost and performance.
- Implement best practices around systems integration, security, performance and platform management.
- Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for platforms and processes running in production.
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create and audit reusable packages or libraries.

Qualifications
- 2+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and data engineering.
- 1+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 1+ years of experience in Databricks optimization and performance tuning.
- Experience managing multiple teams and coordinating with different stakeholders to implement the team's vision.
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory and Azure Databricks.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).

Posted 2 months ago

Apply

14 - 20 years

20 - 35 Lacs

Pune, Chennai, Mumbai (All Areas)

Work from Office

Role: Data & Analytics Architect
Required Skill Set: Data integration, data modelling, IoT data management, and the information delivery layers of Data & Analytics
Preferred Specializations or Prior Experience: Manufacturing, Hi-Tech, and CPG use cases where Analytics and AI have been applied
Location: PAN India

Desired Competencies (Managerial/Behavioural):
Must-Have:
- 14+ years of IT industry experience.
- IoT / Industry 4.0 / Industrial AI experience for at least 3+ years.
- In-depth knowledge of data integration, data modelling, IoT data management and the information delivery layers of Data & Analytics.
- Strong written and verbal communication with good presentation skills.
- Excellent knowledge of data governance, medallion architecture, UNS, data lake architectures, AI/ML and data science.
- Experience with cloud platforms (e.g., GCP, AWS, Azure) and cloud-based Analytics and AI/ML services.
- Proven experience working with clients in the Manufacturing / CPG / High-Tech / Oil & Gas / Pharma industries.
- Good understanding of technology trends, market forces and industry imperatives.
- Excellent communication, presentation, and interpersonal skills.
- Ability to work independently and collaboratively in a team environment.

Good-to-Have:
- Degree in Data Science or Statistics.
- Led consulting and advisory programs at CxO level, managing business outcomes.
- Point-of-view articulation for CxO level.
- Manufacturing (discrete or process) industry background for application of AI technology for business impact.
- Entrepreneurial and comfortable working in a complex and fast-paced environment.

Responsibilities / Expected Deliverables:
We are seeking a highly skilled and experienced Data & Analytics Architect/Consultant to provide expert guidance and support to our clients in the Manufacturing, Consumer Packaged Goods (CPG), and High-Tech industries. The role requires:
- Architecture, design and implementation experience in cloud data platforms.
- Experience in handling multiple types of data (structured, streaming, semi-structured, etc.).
- Strategic experience in Data & Analytics (cloud data architecture, lakehouse architecture, data fabric, data mesh concepts).
- Experience in deploying DevOps/CI/CD techniques; automating and deploying data pipelines/ETLs in a DevOps environment.
- Experience in strategizing data governance activities.
The ideal candidate will possess exceptional communication, consulting, and problem-solving skills, along with a strong technical foundation in data architecture. The role involves leading the Data Architecture Tech Advisory engagement, bringing thought leadership to actively engage CxOs. Key roles and responsibilities include:
- Business-oriented: Engage with customer CxOs to evangelise adoption of AI and GenAI; author proposals for solving business problems and achieving business objectives, leveraging Data, Analytics and AI technologies.
- Advisory: Experience in managing the entire lifecycle of Data & Analytics is an added advantage. This includes developing a roadmap for the introduction and scaling of data architecture in the customer organization, defining the best-suited AI operating model for customers, guiding teams on solution approaches and roadmaps, and building and leveraging frameworks for RoI from AI.
- Effectively communicate complex technical information to both technical and non-technical audiences, presenting findings and recommendations in a clear, concise, and compelling manner.
- Demonstrate thought leadership by identifying use cases to build and showcase to prospective customers.

Posted 2 months ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Work from Office

About The Role
Job Title: Transformation Principal Change Analyst
Corporate Title: AVP
Location: Bangalore, India

Role Description
We are looking for an experienced Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a get-things-done attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy.
- Gender-neutral parental leaves.
- 100% reimbursement under childcare assistance benefit (gender neutral).
- Sponsorship for industry-relevant certifications and education.
- Employee Assistance Program for you and your family members.
- Comprehensive hospitalization insurance for you and your dependents.
- Accident and term life insurance.
- Complimentary health screening for those aged 35 and above.

Your key responsibilities
- Responsible for change management planning, execution and reporting, adhering to governance standards and ensuring transparency around progress status; using data to tell the story, maintain risk management controls, and monitor and communicate initiative risks; collaborate with other departments as required to execute on timelines and meet the strategic goals.
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement of adoption success measures, and continuous improvement.
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision making and progress transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to success and carry learnings forward to future projects.
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures and required controls; gather and document business requirements (user stories), including liaising with end users and performing analysis of gathered data.
- Heavily involved in the product development journey.

Your skills and experience
- Overall experience of at least 7-10 years leading complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
- Banking/finance/regulated-industry experience, of which at least 2 years in the change/transformation space or associated with change/transformation initiatives, is a plus.
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
- Experience working with management reporting is preferred.
- Bachelor's degree.

How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 months ago

Apply

7 - 12 years

20 - 25 Lacs

Noida, Hyderabad, Gurugram

Work from Office

Hands-on experience in Dataiku, including building and managing data pipelines, workflows, and automation. Should be highly proficient in SQL and capable of extracting data from RDBMS, data lakes, and Excel. Must have solid knowledge of cloud platforms like AWS.

Required Candidate Profile: 7+ years of relevant hands-on experience as a Dataiku developer, including strong hands-on experience building and managing data pipelines in Dataiku, plus solid knowledge of cloud platforms.
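For orientation, Dataiku Python recipes commonly follow the pattern sketched below; this assumes the code runs inside Dataiku DSS (where the dataiku package is available), and the dataset names are placeholders rather than anything from this role.

```python
# Sketch of a Dataiku Python recipe: read a dataset, derive a column, write
# the result. Runs inside DSS; dataset names are placeholders.
import dataiku
import pandas as pd

src = dataiku.Dataset("orders_raw")
df = src.get_dataframe()

# Simple enrichment step; a real recipe would hold the pipeline's logic.
df["order_month"] = (
    pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str)
)

out = dataiku.Dataset("orders_enriched")
out.write_with_schema(df)
```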

Posted 2 months ago

Apply

12 - 22 years

35 - 65 Lacs

Chennai

Hybrid

Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 8 - 24 years
Location: PAN India
Job Description: Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks architect.
If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in
With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 2 months ago

Apply

10 - 18 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 8 Yrs - 18 Yrs. Location: Pan India.
Job Description:
Experience in Synapse with PySpark
Knowledge of Big Data pipelines / data engineering
Working knowledge of the MSBI stack on Azure
Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
Hands-on experience in visualization tools like Power BI
Implement end-to-end data pipelines using Cosmos / Azure Data Factory (a minimal PySpark illustration of these pipeline and validation duties follows this posting)
Good analytical thinking and problem-solving skills
Good communication and coordination skills; able to work as an individual contributor
Requirement analysis; create, maintain, and enhance Big Data pipelines
Daily status reporting and interaction with leads
Version control (ADO, Git) and CI/CD
Marketing campaign experience; data platform and product telemetry
Data validation and data-quality checks of new streams
Monitoring of data pipelines created in Azure Data Factory; updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis
If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in. With Regards, Sankar G, Sr. Executive - IT Recruitment
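To make the pipeline and validation duties above concrete, here is a minimal PySpark sketch. It assumes a Databricks or Synapse Spark runtime; the ADLS Gen2 paths and column names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical paths; a real pipeline would read these from config.
    source_path = "abfss://raw@storageacct.dfs.core.windows.net/campaigns/"
    target_path = "abfss://curated@storageacct.dfs.core.windows.net/campaigns/"

    df = spark.read.parquet(source_path)

    # Data-quality check of a new stream: fail the load if the key column
    # has nulls, rather than silently propagating bad rows downstream.
    null_keys = df.filter(F.col("campaign_id").isNull()).count()
    if null_keys > 0:
        raise ValueError(f"{null_keys} rows missing campaign_id; aborting load")

    (df.withColumn("ingest_date", F.current_date())
       .write.mode("overwrite")
       .partitionBy("ingest_date")
       .parquet(target_path))

In Azure Data Factory this would typically run as a notebook or Spark activity inside a scheduled pipeline, with the monitoring and ADO updates the posting mentions layered on top.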

Posted 2 months ago

Apply

10 - 20 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 8 Yrs - 18 Yrs. Location: Pan India.
Job Description:
Mandatory Skill: Azure ADB with Azure Data Lake
Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions
Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices
Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools
Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premises systems
Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement
Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards
Ensure best practices are followed in terms of code quality, data security, and scalability
Stay updated with the latest developments in Databricks and associated technologies to drive innovation
Essential Skills:
Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake
Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark)
Deep understanding of Azure services like Azure Data Lake, Azure Synapse, and Azure Data Factory
Experience with ETL/ELT processes, data warehousing, and building data lakes
Strong SQL skills and familiarity with NoSQL databases
Experience with CI/CD pipelines and version control systems like Git
Knowledge of cloud security best practices
Soft Skills:
Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
Strong problem-solving skills and a proactive approach to identifying and resolving issues
Leadership skills, with the ability to manage and mentor a team of data engineers
Experience:
Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory
Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2
Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation (see the sketch after this posting)
Proficiency in building and optimizing query layers using Databricks SQL
Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions
Prior experience in developing, optimizing, and deploying Power BI reports
Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions
If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in. With Regards, Sankar G, Sr. Executive - IT Recruitment
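For the Auto Loader and Delta Lake items above, a minimal sketch of incremental ingestion into a Delta table. This is Databricks-specific (the cloudFiles source and the implicit spark session exist only in a Databricks runtime), and the paths and table name are hypothetical:

    # Auto Loader incrementally discovers new files in the landing zone and
    # streams them into a Delta table with schema inference and evolution.
    raw_path = "abfss://landing@storageacct.dfs.core.windows.net/events/"
    checkpoint = "abfss://meta@storageacct.dfs.core.windows.net/chk/events/"

    stream = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", checkpoint)
        .load(raw_path)
    )

    (stream.writeStream
        .format("delta")
        .option("checkpointLocation", checkpoint)
        .option("mergeSchema", "true")       # tolerate added columns
        .trigger(availableNow=True)          # process the backlog, then stop
        .toTable("curated.events"))

Delta Live Tables would express the same flow declaratively; the snippet above is the plain structured-streaming form.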

Posted 2 months ago

Apply

2 - 7 years

11 - 15 Lacs

Hyderabad

Work from Office

About the role
We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 12 years of experience in software & data engineering and analytics, and a proven track record of designing and implementing complex data solutions. You will be expected to design, create, deploy, and manage Blackbaud's data architecture. This role has considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and assists with the technical direction, specifically with data, of other projects.
What you'll be doing
Develop and direct the strategy for all aspects of Blackbaud's Data and Analytics platforms, products, and services
Set, communicate, and facilitate technical direction more broadly for the AI Center of Excellence, and collaboratively beyond the Center of Excellence
Design and develop breakthrough products, services, or technological advancements in the Data Intelligence space that expand our business
Work alongside product management to craft technical solutions to solve customer business problems
Own the technical data governance practices and ensure data sovereignty, privacy, security, and regulatory compliance
Continuously challenge the status quo of how things have been done in the past
Build a data access strategy to securely democratize data and enable research, modeling, machine learning, and artificial intelligence work
Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice
Work in a cross-functional team to translate business needs into data architecture solutions
Ensure data solutions are built for performance, scalability, and reliability
Mentor junior data architects and team members
Keep current on technology: distributed computing, big data concepts, and architecture
Promote internally how data within Blackbaud can help change the world
What we want you to have:
10+ years of experience in data and advanced analytics
At least 8 years of experience working on data technologies in Azure/AWS
Experience building modern products and infrastructure
Experience working with .Net/Java and microservice architecture
Expertise in SQL and Python
Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies
Expertise in Databricks and Microsoft Fabric
Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products
Experience with machine learning
Excellent communication and leadership skills
Ability to work flexible hours as required by business priorities
Ability to deliver software that meets consistent standards of quality, security, and operability
Stay up to date on everything Blackbaud.
Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.

Posted 2 months ago

Apply

6 - 11 years

20 - 25 Lacs

Hyderabad

Work from Office

Principal IS Architect – Clinical Computation Platform
What you will do
Amgen's Clinical Computation Platform Product Team manages a core set of clinical computation solutions that support global clinical development. This team is responsible for building and maintaining systems for clinical data storage, data auditing and security management, and analysis and reporting capabilities. These capabilities are pivotal in Amgen's goal to serve patients. The Principal IS Architect will define the architecture vision, create roadmaps, and support the design and implementation of advanced computational platforms to support clinical development, ensuring that IT strategies align with business goals. The Principal IS Architect will work closely with partners across departments, including CfDA, GDO, CfOR, CfTI, CPMS, and IT teams, to design and implement scalable, reliable, and high-performing solutions.
Develop and maintain the enterprise architecture vision and strategy, ensuring alignment with business objectives
Create and maintain architectural roadmaps that guide the evolution of IT systems and capabilities
Establish and enforce architectural standards, policies, and governance frameworks
Evaluate emerging technologies and assess their potential impact on the enterprise/domain/solution architecture
Identify and mitigate architectural risks, ensuring that IT systems are scalable, secure, and resilient
Maintain comprehensive documentation of the architecture, including principles, standards, and models
Drive continuous improvement in the architecture by finding opportunities for innovation and efficiency
Work with partners to gather and analyze requirements, ensuring that solutions meet both business and technical needs
Evaluate and recommend technologies and tools that best fit the solution requirements
Ensure seamless integration between systems and platforms, both within the organization and with external partners
Design systems that can scale to meet growing business needs and performance demands
Develop and maintain logical, physical, and conceptual data models to support business needs (a toy illustration follows this posting)
Establish and enforce data standards, governance policies, and best practices
Design and manage metadata structures to enhance information retrieval and usability
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree with 8 to 10 years of experience in Computer Science, IT, or a related field; OR Bachelor's degree with 10 to 14 years of experience in Computer Science, IT, or a related field; OR Diploma with 14 to 18 years of experience in Computer Science, IT, or a related field
Proficiency in designing scalable, secure, and cost-effective solutions
Expertise in cloud platforms (AWS, Azure, GCP), data lakes, and data warehouses
Experience in evaluating and selecting technology vendors
Ability to create and demonstrate proof-of-concept solutions to validate technical feasibility
Strong knowledge of the Clinical Research and Development domain
Experience working in agile methodology, including Product Teams and Product Development models
Preferred Qualifications:
Strong solution design and problem-solving skills
Experience in developing differentiated and deliverable solutions
Ability to analyze client requirements and translate them into solutions
Experience with machine learning and artificial intelligence applications in clinical research
Strong programming skills in languages such as Python, R, or Java
Experience with DevOps, Continuous Integration, and Continuous Delivery methodologies
Soft Skills:
Excellent critical-thinking and problem-solving skills
Good communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated awareness of presentation skills
Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
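As a toy illustration of the data-modeling responsibility in the posting above (not Amgen's actual model; production clinical data models typically follow standards such as CDISC SDTM), here is a conceptual study/subject/observation hierarchy sketched as Python dataclasses:

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical conceptual model: one study has many subjects, each
    # subject has many observations. A logical/physical model would add
    # keys, indexes, and storage details for the target platform.
    @dataclass
    class Observation:
        test_code: str          # e.g. a lab test identifier
        value: float
        collected_on: date

    @dataclass
    class Subject:
        subject_id: str
        site_id: str
        observations: list[Observation] = field(default_factory=list)

    @dataclass
    class Study:
        study_id: str
        phase: str
        subjects: list[Subject] = field(default_factory=list)

    s = Study("ST-001", "III")
    s.subjects.append(Subject("SUBJ-042", "SITE-07"))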

Posted 2 months ago

Apply