
310 Data Lake Jobs - Page 12

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

1 - 4 years

6 - 10 Lacs

Hyderabad

Work from Office


Associate - Next Gen Forecasting

What you will do
Let's do this. Let's change the world. In this vital role you will support an ambitious program to evolve how Amgen does forecasting, moving from batch processes (e.g., sales forecasting to COGS forecasting, clinical study forecasting) to a more continuous process. The hardworking professional we seek is curious by nature, organizationally and data savvy, with a strong record of Finance transformation, partner management and accomplishments in Finance, Accounting, or Procurement. This role will help redesign existing processes to incorporate Artificial Intelligence and Machine Learning capabilities to significantly reduce the time and resources needed to build forecasts. As the Next Gen Forecasting Associate at Amgen India, you will support innovation and continuous improvement in Finance's planning, reporting and data processes, with a focus on enhancing current technologies and adapting new technologies where relevant. This individual will collaborate with cross-functional teams and support business objectives. This role reports directly to the Next Gen Forecasting Manager in Hyderabad, India.
Roles & Responsibilities:
Priorities can often change in a fast-paced technology environment like Amgen's, so this role includes, but is not limited to, the following:
- Support implementation of real-time / continuous forecasting capabilities
- Establish baseline analyses; define current and future state using traditional approaches and emerging digital technologies
- Identify which areas would benefit most from automation / AI / ML
- Identify additional process / governance changes needed to move from batch to continuous forecasting
- Partner closely with Business, Accounting, FP&A, Technology and other impacted functions to define and implement proposed changes
- Partner with the Amgen Technology function to support both existing and new finance platforms
- Partner with local and global teams on use cases for Artificial Intelligence (AI), Machine Learning (ML) and Robotic Process Automation (RPA)
- Collaborate with cross-functional teams and Centers of Excellence globally to drive operational efficiency
- Contribute to a learning environment and enhance learning methodologies of technical tools where applicable
- Serve as the local subject matter expert for financial systems and financial data, supporting the local team with questions
- Support global finance teams and business partners with centrally delivered financial reporting via Tableau and other tools
- Support local adoption of Anaplan for operating expense planning / tracking

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 0 to 3 years of Finance experience, OR Diploma and 4 to 7 years of Finance experience
- Track record of supporting new finance capabilities
- Proficiency in data analytics and business intelligence tools
- Experience with finance reporting and planning system technologies
- Experience with technical support of financial platforms
- Knowledge of financial management and accounting principles
- Experience with ERP systems
- Resourceful individual who can "connect the dots" across a matrixed organization

Preferred Qualifications:
- Experience in the pharmaceutical and/or biotechnology industry
- Experience in financial planning, analysis, and reporting
- Experience with global finance operations
- Knowledge of advanced financial modeling techniques
- Business performance management
- Finance transformation experience involving recent technology advancements
- Prior multinational capability center experience
- Experience with Oracle Hyperion/EPM, S4/SAP, Anaplan, Tableau/PowerBI, Databricks, Alteryx, data lakes, data structures

Soft Skills:
- Excellent project management abilities
- Strong communication and interpersonal skills
- High level of integrity and ethical standards
- Problem-solving and critical thinking capabilities
- Ability to influence and drive change
- Adaptability to a dynamic and fast-paced environment
- Strong organizational and time management skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 month ago

Apply

3 - 5 years

4 - 8 Lacs

Gurugram

Work from Office


AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Data Engineer (internally known as a Sr. Associate Technical Consultant)
AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in public cloud or multi-cloud enterprise architectures. The Data Engineer will work on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets.
Responsibilities:
- Build, operationalize and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms that leverage the cloud-native toolset
- Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases
- Engineer and support data structures, including but not limited to SQL and NoSQL databases
- Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop)
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources

Qualifications:
- 3+ years of professional technical experience
- 3+ years of hands-on data warehousing
- 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, Snowflake
- 2+ years of programming languages such as Python
- 3+ years of experience working in cloud environments (Azure)
- 2 years of experience in Redshift
- Strong client-facing communication and facilitation skills

Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA Employment Benefits include:
- Medical, Dental, and Vision Insurance
- 401(k)
- Paid company holidays
- Paid time off
- Paid parental and caregiver leave
- Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
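The batch ingestion responsibilities described above can be sketched in a few lines of Python. This is a generic, illustrative example, not AHEAD's actual stack: the `Record` type, the cleaning rules, and the in-memory list standing in for a data-lake table (e.g., Snowflake) are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class Record:
    user_id: str
    amount: float


def clean(raw: dict) -> Optional[Record]:
    # Drop malformed rows and normalize types (a stand-in for real validation).
    try:
        return Record(user_id=str(raw["user_id"]).strip(),
                      amount=float(raw["amount"]))
    except (KeyError, ValueError, TypeError):
        return None


def batch_ingest(raw_rows: Iterable[dict], table: List[Record]) -> int:
    # Minimal batch pipeline: extract -> transform -> load; returns rows loaded.
    loaded = 0
    for raw in raw_rows:
        rec = clean(raw)
        if rec is not None:
            table.append(rec)
            loaded += 1
    return loaded


target: List[Record] = []  # stands in for an analytical-platform table
source = [{"user_id": " u1 ", "amount": "9.5"},
          {"user_id": "u2"},                      # malformed: dropped
          {"user_id": "u3", "amount": 12}]
print(batch_ingest(source, target))  # → 2
```

A streaming variant of the same logic would wrap `clean` in a per-event handler (e.g., a Lambda consuming from Kinesis) rather than iterating over a batch.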

Posted 1 month ago

Apply

3 - 5 years

4 - 9 Lacs

Gurugram

Work from Office


AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Data Engineer (internally known as a Technical Consultant)
AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in public cloud or multi-cloud enterprise architectures. The Data Engineer will work on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets.
Responsibilities:
- Build, operationalize and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms that leverage the cloud-native toolset
- Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases
- Engineer and support data structures, including but not limited to SQL and NoSQL databases
- Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop)
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources

Qualifications:
- 3+ years of professional technical experience
- 3+ years of hands-on data warehousing
- 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, Snowflake
- 2+ years of programming languages such as Python
- 3+ years of experience working in cloud environments (Azure)
- 2 years of experience in Redshift
- Strong client-facing communication and facilitation skills

Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA Employment Benefits include:
- Medical, Dental, and Vision Insurance
- 401(k)
- Paid company holidays
- Paid time off
- Paid parental and caregiver leave
- Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.

Posted 1 month ago

Apply

12 - 15 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office


EMPLOYMENT QUALIFICATIONS:

EDUCATION:
- Bachelor's degree in a software/computer engineering field.
- Continuous learning, as defined by the Company's learning philosophy, is required.
- Certification or progress toward certification is highly preferred and encouraged.

SKILLS/KNOWLEDGE/ABILITIES (SKA) REQUIRED:
- Minimum 10 years of relevant IT experience in managing multiple information systems projects.
- Knowledge of the healthcare domain.
- Knowledge of ETL processes, SQL, and databases (Oracle/MS SQL), etc.
- Proficient in Power BI/Tableau, Google Data Studio, R, SQL, Python.
- Strong knowledge of cloud computing and experience in Microsoft Azure - Azure ML Studio, Azure Machine Learning.
- Strong knowledge of SSIS.
- Proficient in Azure services - Azure Data Factory, Synapse, Data Lake.
- Experience querying, analysing, or managing data required.
- Experience in data cleansing, data engineering, data enrichment, and data warehousing / Business Intelligence preferred.
- Strong analytical, problem-solving and planning skills.
- Strong organizational and presentation skills.
- Excellent interpersonal and communication skills.
- Ability to multi-task in a fast-paced environment.
- Flexibility to adapt readily to changing business needs in a fast-paced environment.
- Team player who is delivery-oriented and takes responsibility for the team's success.
- Enthusiastic, can-do attitude with the drive to continually learn and improve.
- Knowledge of Agile, SCRUM and/or Agile methodologies.
- Knowledge of and experience using PPM tools, MS Project, Jira, Confluence, MS Excel, PowerPoint, SharePoint, and Visio.
- Exceptional interpersonal and communication skills, both oral and written.
- Ability to mentor junior team members with less experience.
- Ability to build relationships and work collaboratively with diverse leaders.
- Ability to communicate and influence across business and technical domains, and across all levels of the organization.
- Dynamic public speaking capabilities in front of large groups.
- Ability to communicate ideas to both technical and non-technical audiences.
- Comfortable with ambiguity; can handle the unexpected with flexibility.
- Ability to work with a collaborative approach and build trust with others.

Required Skills: Healthcare, ETL, SQL, Power BI

Posted 1 month ago

Apply

5 - 7 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office


Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes:
- Interpret the application/feature/component design and develop it in accordance with specifications.
- Code, debug, test, document and communicate product/component/feature development stages.
- Validate results with user representatives; integrate and commission the overall solution.
- Select appropriate technical options for development, such as reusing, improving or reconfiguring existing components, or creating own solutions.
- Optimize efficiency, cost and quality.
- Influence and improve customer satisfaction.
- Set FAST goals for self/team; provide feedback on FAST goals of team members.

Measures of Outcomes:
- Adherence to engineering process and standards (coding standards)
- Adherence to project schedule / timelines
- Number of technical issues uncovered during the execution of the project
- Number of defects in the code
- Number of defects post delivery
- Number of non-compliance issues
- On-time completion of mandatory compliance trainings

Outputs Expected:
Code: Code as per design. Follow coding standards, templates and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents, design documentation, requirements, and test cases/results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review and create unit test cases, scenarios and execution. Review the test plan created by the testing team. Provide clarifications to the testing team.
Domain relevance: Advise software developers on design and development of features and components, with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions for customers. Complete relevant domain certifications.
Manage Project: Manage delivery of modules and/or manage user stories.
Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.
Estimate: Create and provide input for effort estimation for projects.
Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review the reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.
Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos.
Manage Team: Set FAST goals and provide feedback. Understand aspirations of team members and provide guidance, opportunities, etc. Ensure the team is engaged in the project.
Certifications: Take relevant domain/technology certifications.

Skill Examples:
- Explain and communicate the design/development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces and business software components
- Use data models
- Estimate time and effort required for developing/debugging features/components
- Perform and evaluate tests in the customer or target environment
- Make quick decisions on technical/project-related challenges
- Manage a team; mentor and handle people-related issues in the team
- Maintain high motivation levels and positive dynamics in the team
- Interface with other teams, designers and other parallel practices
- Set goals for self and team
- Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers, addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure; determine dependencies and risks; facilitate planning; handle multiple tasks
- Build confidence with customers by meeting deliverables on time and with quality
- Estimate the time, effort and resources required for developing/debugging features/components
- Make appropriate utilization of software and hardware
- Strong analytical and problem-solving abilities

Knowledge Examples:
- Appropriate software programs/modules
- Functional and technical design
- Programming languages - proficient in multiple skill clusters
- DBMS
- Operating systems and software platforms
- Software Development Life Cycle
- Agile - Scrum or Kanban methods
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved

Additional Comments:
Responsibilities:
- Understand business requirements and existing system designs, security applications and guidelines, etc.
- Work with various SMEs to understand business process flows, functional requirements specifications of the existing system, their current challenges and constraints, and future expectations.
- Streamline the process of sourcing and organizing data (from a wide variety of data sources using Python, PySpark, SQL, Spark) and accelerating data for analysis.
- Support the data curation process by feeding the data catalog and knowledge bases.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing data products for consumption.
- Work with data and analytics experts to strive for greater functionality in the data systems.
- Clearly articulate data stories using data science, advanced statistical analysis, visualization tools, PowerPoint presentations, and written and oral communication.
- Manage technical, analytical, and business documentation on all data efforts.
- Engage in hands-on development and work with both onsite and offsite leads and engineers.

Competencies:
- 5+ years of experience in building data engineering pipelines on both on-premise and cloud platforms (Snowflake)
- 5+ years of experience in developing Python-based data applications to support data ingestion, transformation, and data visualizations (Plotly, Streamlit, Flask, Dask)
- Strong experience coding in Python, PySpark, and SQL and building automations
- Knowledge of cybersecurity, IT infrastructure and software concepts
- 3+ years of experience using data warehousing / data lake techniques in cloud environments
- 3+ years of developing data visualizations using Tableau, Plotly, Streamlit
- Experience with ELT/ETL tools like DBT, Cribl, etc.
- Experience capturing incremental data changes, streaming data ingestion and stream processing
- Experience in processes supporting data governance, data structures, and metadata management
- Solid grasp of data and analytics concepts and methodologies, including data science, data engineering, and data storytelling

Required Skills: Python, SQL, Cloud Platform
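The "capturing incremental data changes" competency above can be illustrated with a small, framework-free Python sketch: comparing two keyed snapshots of a table and emitting the delta. The snapshot shape and record values here are invented for illustration; a real pipeline would do this with CDC tooling or Spark rather than in-memory dicts.

```python
def capture_changes(previous: dict, current: dict) -> dict:
    # Compare two keyed snapshots and emit the incremental changes
    # (a simplified, in-memory stand-in for change data capture).
    inserts = {k: v for k, v in current.items() if k not in previous}
    updates = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    deletes = [k for k in previous if k not in current]
    return {"inserts": inserts, "updates": updates, "deletes": deletes}


prev = {"r1": ("alice", 100), "r2": ("bob", 250)}
curr = {"r1": ("alice", 120), "r3": ("carol", 75)}
delta = capture_changes(prev, curr)
print(delta["inserts"])  # {'r3': ('carol', 75)}
print(delta["updates"])  # {'r1': ('alice', 120)}
print(delta["deletes"])  # ['r2']
```

Downstream, only the delta (rather than the full snapshot) would be merged into the warehouse or data lake table.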

Posted 1 month ago

Apply

3 - 7 years

8 - 11 Lacs

Gurugram

Work from Office


KDataScience (USA & INDIA) is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Designing and implementing scalable and reliable data pipelines, data models, and data infrastructure for processing large and complex datasets.
- Developing and maintaining databases, data warehouses, and data lakes that store and manage the organization's data.
- Developing and implementing data integration and ETL (Extract, Transform, Load) processes to ensure that data flows smoothly and accurately between different systems and data sources.
- Ensuring data quality, consistency, and accuracy through data profiling, cleansing, and validation.
- Building and maintaining data processing and analytics systems that support business intelligence, machine learning, and other data-driven applications.
- Optimizing the performance and scalability of data systems and infrastructure to ensure that they can handle the organization's growing data needs.
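The data-profiling step mentioned above (one input to data quality checks) can be sketched minimally in Python. The column name and values are made up for illustration; real profiling tools compute many more statistics per column.

```python
def profile_column(values):
    # Basic data profiling for one column: row count, null count,
    # distinct non-null values, and min/max.
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }


ages = [34, None, 29, 34, 51, None]  # hypothetical "age" column
stats = profile_column(ages)
print(stats)  # {'count': 6, 'nulls': 2, 'distinct': 3, 'min': 29, 'max': 51}
```

Profiles like this one feed validation rules (e.g., "nulls must be below 5%") and cleansing decisions before data reaches a warehouse or lake.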

Posted 1 month ago

Apply

4 - 9 years

6 - 11 Lacs

Mumbai

Work from Office


Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must have skills: GCP, SQL, Data Engineering, Python
Good to have skills: Managing ETL pipelines

Job Summary:
We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win and grow. We provide everything they need to grow their client portfolios, optimize their deals and enable their sales talent, all driven by sales intelligence. The team will be aligned to Client Success, which is a new function to support Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future, beyond their expectations.

You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.

The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools, providing insights on underlying trends and key drivers of the business.

Roles & Responsibilities:
- Collaborate with the Client Success, Analytics COE, CIO Engineering/DevOps team, and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for the creation of dashboards or reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support any business operation or data transfer.
- Document and build efficient data models for reporting and analytics use cases.
- Assure the data lake's accuracy, consistency, and timeliness, while ensuring user acceptance and satisfaction.
- Work with the Client Success and Sales Excellence COE members, the CIO Engineering/DevOps team and Analytics Leads to standardize data in the data lake.

Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience in developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience in managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting languages.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.

Additional Information:
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.

About Our Company | Accenture

Qualifications
Experience: Minimum 5+ years of experience is required
Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field
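The "write complex SQL to transform data for dashboards, then validate accuracy and completeness" responsibility above can be sketched with Python's built-in sqlite3 module. The table, columns, and validation rule are illustrative assumptions, not Accenture's actual schema; on GCP the same pattern would typically run against BigQuery.

```python
import sqlite3

# Build a tiny in-memory table, run an aggregation transform of the kind
# that feeds a dashboard, then validate completeness of the result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deals (client TEXT, region TEXT, value REAL)")
conn.executemany(
    "INSERT INTO deals VALUES (?, ?, ?)",
    [("acme", "EMEA", 10.0), ("acme", "EMEA", 5.0), ("globex", "APAC", 7.5)],
)

rows = conn.execute("""
    SELECT client, SUM(value) AS total_value, COUNT(*) AS n_deals
    FROM deals
    GROUP BY client
    ORDER BY client
""").fetchall()

# Validation: aggregated totals must reconcile with the raw table
# (i.e., the transform dropped no rows).
raw_total = conn.execute("SELECT SUM(value) FROM deals").fetchone()[0]
assert abs(sum(r[1] for r in rows) - raw_total) < 1e-9
print(rows)  # [('acme', 15.0, 2), ('globex', 7.5, 1)]
```

The reconciliation check at the end is the "completeness" half of the responsibility: dashboard aggregates are compared back to source totals before publication.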

Posted 1 month ago

Apply

7 - 12 years

3 - 7 Lacs

Bengaluru

Work from Office


Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Cloud Data Architecture
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Data Architect
Kemper is seeking a Data Architect to join our team. You will work as part of a distributed team and with Infrastructure, Enterprise Data Services and Application Development teams to coordinate the creation, enrichment, and movement of data throughout the enterprise. Your central responsibility as an architect will be improving the consistency, timeliness, quality, security, and delivery of data as part of Kemper's Data Governance framework. In addition, the architect must streamline data flows and optimize cost management in a hybrid cloud environment. Your duties may include assessing architectural models and supervising data migrations across IaaS, PaaS, SaaS and on-premises systems, as well as data platform selection and on-boarding of data management solutions that meet the technical and operational needs of the company. To succeed in this role, you should know how to examine new and legacy requirements and define cost-effective patterns to be implemented by other teams. You must then be able to represent the required patterns during implementation projects. The ideal candidate will have proven experience in cloud (Snowflake, AWS and Azure) architectural analysis and management.

Responsibilities
- Define architectural standards and guidelines for data products and processes.
- Assess and document when and how to use existing and newly architected producers and consumers, the technologies to be used for various purposes, and models of selected entities and processes.
- The guidelines should encourage reuse of existing data products, as well as address issues of security, timeliness, and quality.
- Work with Information & Insights, Data Governance, Business Data Stewards, and Implementation teams to define standard and ad-hoc data products and data product sets.
- Work with Enterprise Architecture, Security, and Implementation teams to define the transformation of data products throughout hybrid cloud environments, assuring that both functional and non-functional requirements are addressed. This includes the ownership, frequency of movement, the source and destination of each step, how the data is transformed as it moves, and any aggregation or calculations.
- Work with Data Governance and project teams to model and map data sources, including descriptions of the business meaning of the data, its uses, its quality, the applications that maintain it and the technologies in which it is stored. Documentation of a data source must describe the semantics of the data so that the occasional subtle differences in meaning are understood.
- Define integrative views of data to draw together data from across the enterprise. Some views will use data stores of extracted data and others will bring together data in near real time. Solutions must consider data currency, availability, response times, data volumes, etc.
- Work with modeling and storage teams to define Conceptual, Logical and Physical data views, limiting technical debt as data flows through transformations.
- Investigate and lead participation in POCs of emerging technologies and practices.
- Leverage and evolve existing [core] data products and patterns.
- Communicate and lead understanding of data architectural services across the enterprise.
- Ensure a focus on data quality by working effectively with data and system stewards.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience.
- A minimum of 3 years' experience in a similar role.
Demonstrable knowledge of Secure DevOps and SDLC processes. Must have AWS or Azure experience. Experience with Data Vault 2 required; Snowflake a plus. Familiarity with system concepts and tools within an enterprise architecture framework, including Cataloging, MDM, RDM, Data Lakes, Storage Patterns, etc. Excellent organizational and analytical abilities. Outstanding problem solver. Good written and verbal communication skills.

Posted 1 month ago

Apply

9 - 11 years

4 - 8 Lacs

Bengaluru

Work from Office


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : Data Engineering, Cloud Data Migration Minimum 9 year(s) of experience is required Educational Qualification : BE or BTech must

Project Role: Data Platform Engineer. Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills: Data Modeling Techniques and Methodologies. Good to have skills: Data Engineering, Cloud Data Migration.

Key Responsibilities: 1. Drive discussions with client deal teams to understand business requirements, how the Industry Data Model fits into the implementation, and solutioning. 2. Develop the solution blueprint, along with scoping, estimation and staffing for the delivery project. 3. Drive discovery activities and design workshops with the client, and lead strategic road-mapping and operating model design discussions. 4. Good to have: Data Vault, Cloud DB design, graph data modeling, ontology, data engineering, data lake design.

Technical Experience: 1. 9+ years overall experience, 4+ years in data modeling: Cloud DB models, 3NF, dimensional, and conversion of RDBMS data models to graph data models; instrumental in DB design through all stages of the data model. 2. Experience on at least one Cloud DB design engagement; must be familiar with data architecture principles.

Professional Attributes: 1. Strong requirement analysis and technical solutioning skills in Data and Analytics. 2. Excellent writing, communication and presentation skills. 3. Eagerness to learn and develop oneself on an ongoing basis.
4. Excellent client-facing and interpersonal skills. Educational Qualification: BE or BTech must. Additional Info: Experience in estimation, PoVs and solution approach creation; experience in data transformation and analytics projects, and DWH. Qualification: BE or BTech must

Posted 1 month ago

Apply

7 - 12 years

3 - 7 Lacs

Bengaluru

Work from Office


Project Role : Application Support Engineer Project Role Description : Act as software detectives, provide a dynamic service identifying and solving issues within multiple components of critical business systems. Must have skills : Cloud Data Architecture Good to have skills : Cloud Infrastructure Minimum 7.5 year(s) of experience is required Educational Qualification : 15 years full time education

Data Architect: Kemper is seeking a Data Architect to join our team. You will work as part of a distributed team and with Infrastructure, Enterprise Data Services and Application Development teams to coordinate the creation, enrichment, and movement of data throughout the enterprise. Your central responsibility as an architect will be improving the consistency, timeliness, quality, security, and delivery of data as part of Kemper's Data Governance framework. In addition, the Architect must streamline data flows and optimize cost management in a hybrid cloud environment. Your duties may include assessing architectural models and supervising data migrations across IaaS, PaaS, SaaS and on-premises systems, as well as data platform selection and on-boarding of data management solutions that meet the technical and operational needs of the company. To succeed in this role, you should know how to examine new and legacy requirements and define cost-effective patterns to be implemented by other teams. You must then be able to represent required patterns during implementation projects. The ideal candidate will have proven experience in cloud (Snowflake, AWS and Azure) architectural analysis and management.

Responsibilities: Define architectural standards and guidelines for data products and processes. Assess and document when and how to use existing and newly architected producers and consumers, the technologies to be used for various purposes, and models of selected entities and processes.
The guidelines should encourage reuse of existing data products, as well as address issues of security, timeliness, and quality. Work with Information & Insights, Data Governance, Business Data Stewards, and Implementation teams to define standard and ad-hoc data products and data product sets. Work with Enterprise Architecture, Security, and Implementation teams to define the transformation of data products throughout hybrid cloud environments, ensuring that both functional and non-functional requirements are addressed; this includes the ownership, frequency of movement, the source and destination of each step, how the data is transformed as it moves, and any aggregation or calculations. Work with Data Governance and project teams to model and map data sources, including descriptions of the business meaning of the data, its uses, its quality, the applications that maintain it and the technologies in which it is stored. Documentation of a data source must describe the semantics of the data so that occasional subtle differences in meaning are understood. Define integrative views of data to draw together data from across the enterprise; some views will use data stores of extracted data and others will bring together data in near real time, so solutions must consider data currency, availability, response times and data volumes. Work with modeling and storage teams to define Conceptual, Logical and Physical data views, limiting technical debt as data flows through transformations. Investigate and lead participation in POCs of emerging technologies and practices. Leverage and evolve existing core data products and patterns. Communicate and lead understanding of data architectural services across the enterprise. Ensure a focus on data quality by working effectively with data and system stewards.

Qualifications: Bachelor's degree in Computer Science, Computer Engineering, or equivalent experience. A minimum of 3 years' experience in a similar role.
Demonstrable knowledge of Secure DevOps and SDLC processes. Must have AWS or Azure experience. Experience with Data Vault 2 required; Snowflake a plus. Familiarity with system concepts and tools within an enterprise architecture framework, including Cataloging, MDM, RDM, Data Lakes, Storage Patterns, etc. Excellent organizational and analytical abilities. Outstanding problem solver. Good written and verbal communication skills.

Posted 1 month ago

Apply

3 - 8 years

4 - 8 Lacs

Bengaluru

Work from Office


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : GCP Dataflow Good to have skills : Google BigQuery Minimum 3 year(s) of experience is required Educational Qualification : Any Graduate

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems using GCP Dataflow.

Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing using GCP Dataflow. Create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Collaborate with cross-functional teams to identify and resolve data-related issues. Develop and maintain documentation related to data solutions and processes. Stay updated with the latest advancements in data engineering technologies and integrate innovative approaches for sustained competitive advantage.

Professional & Technical Skills: Must Have Skills: Experience in GCP Dataflow. Good To Have Skills: Experience in Google BigQuery. Strong understanding of ETL processes and data migration. Experience in data modeling and database design. Experience in data warehousing and data lake concepts. Experience in programming languages such as Python, Java, or Scala. Additional Information: The candidate should have a minimum of 3 years of experience in GCP Dataflow.
The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Indore office. Qualification Any Graduate
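The ETL duties this role describes — ingest, validate, transform, and load — have the same shape regardless of engine. Below is a minimal stdlib-Python sketch of those stages; the record fields and per-region aggregation are hypothetical, and the Beam/Dataflow runner specifics are deliberately omitted.

```python
# Minimal ETL sketch: extract -> transform -> load, mirroring the stages a
# Dataflow/Beam pipeline would run at scale. Field names are hypothetical.

def extract(rows):
    """Parse raw CSV-like rows into dicts, skipping malformed input."""
    for row in rows:
        parts = row.split(",")
        if len(parts) == 3:
            yield {"id": parts[0], "amount": parts[1], "region": parts[2]}

def transform(records):
    """Cast types and drop records that fail basic quality checks."""
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            continue  # drop unparseable amounts instead of failing the batch
        if rec["amount"] >= 0:
            yield rec

def load(records):
    """Aggregate per region -- stands in for a warehouse write."""
    totals = {}
    for rec in records:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals

raw = ["1,10.5,EU", "2,abc,EU", "3,4.5,US", "bad row"]
result = load(transform(extract(raw)))
print(result)  # {'EU': 10.5, 'US': 4.5}
```

On Dataflow the same stage boundaries would become pipeline transforms; the sketch only shows where validation and aggregation sit in the flow.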

Posted 1 month ago

Apply

7 - 12 years

11 - 21 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Dear Candidates, we are looking for Databricks / Data Analytics / Data Engineer profiles (analytics, data lake, AWS, PySpark) with an MNC client. Notice period: immediate / max 15 days. Location: Hyderabad.

Please find the job description below: Proficiency in Databricks Unified Data Analytics Platform. Good To Have Skills: Experience with Python (programming language). Strong understanding of data analytics and data processing. Experience in building and configuring applications. Knowledge of the software development lifecycle. Ability to troubleshoot and debug applications. Additional Information: The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.

Key Responsibilities: Work on client projects to deliver AWS, PySpark and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. Optimize ETL by designing, coding and tuning big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.

Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop and data warehouse architecture and delivery. Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis. Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala. Experience using Airflow for data pipelines in at least 1 project. 1+ years of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, Terraform.

Professional Attributes: Ready to work in B shift (12 PM – 10 PM). Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders. Good critical thinking and problem-solving abilities. Healthcare knowledge. Good communication skills.

Educational Qualification: Bachelor of Engineering / Bachelor of Technology. Additional Information: Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python or AWS are a plus. The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office. A 15 years full-time education is required.

Kindly mention the above details if you are interested in this position, and share your profile with akdevi@crownsolution.com. Regards, devii
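The "stream and process datasets at low latencies" requirement above usually means windowed aggregation. Here is a stdlib-only sketch of a tumbling-window count — the core logic a Spark Structured Streaming or Kinesis consumer would run with distributed state; the event schema and window size are hypothetical.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed-size windows and count per
    key. Streaming engines add distributed execution, watermarks, and a
    fault-tolerant state store on top of exactly this computation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(c) for w, c in windows.items()}

# Hypothetical clickstream events as (epoch_seconds, event_type) pairs.
events = [(0, "click"), (3, "click"), (5, "view"), (12, "click")]
counts = tumbling_window_counts(events, 10)
print(counts)  # {0: {'click': 2, 'view': 1}, 10: {'click': 1}}
```

The window assignment (`ts // window_secs * window_secs`) is the same whether events arrive in a batch or one at a time off a Kafka topic.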

Posted 1 month ago

Apply

8 - 13 years

6 - 11 Lacs

Gurugram

Work from Office


AHEAD is looking for a Senior Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Senior Data Engineer will be responsible for strategic planning and hands-on engineering of Big Data and cloud environments that support our clients' advanced analytics, data science, and other data platform initiatives. This consultant will design, build, and support modern data environments that reside in public cloud or multi-cloud enterprise architectures. They will be expected to be hands-on technically, but also present to leadership and lead projects. The Senior Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools and architectures, as well as the design and integration of existing transactional processing systems. As a Senior Data Engineer, you will design and implement data pipelines to enable analytics and machine learning on rich datasets.

Roles and Responsibilities: A Data Engineer should be able to design, build, operationalize, secure, and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets. Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases. Engineer and support data structures, including but not limited to SQL and NoSQL databases. Engineer and maintain ELT processes for loading data lakes (Snowflake, Cloud Storage, Hadoop). Engineer APIs for returning data from these structures to enterprise applications. Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources.

Qualifications: 8+ years of professional technical experience. 4+ years of hands-on data architecture and data modelling. 4+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, Snowflake. 4+ years of experience with programming languages such as Python. 2+ years of experience working in cloud environments (AWS and/or Azure). Strong client-facing communication and facilitation skills.

Key Skills: Python, Cloud, Linux, Windows, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Snowflake, SQL/RDBMS, OLAP, Data Engineering

Posted 1 month ago

Apply

2 - 5 years

14 - 17 Lacs

Mumbai

Work from Office


Who you are: A seasoned Data Engineer with a passion for building and managing data pipelines in large-scale environments. You have good experience working with big data technologies, data integration frameworks, and cloud-based data platforms, and a strong foundation in Apache Spark, PySpark, Kafka, and SQL.

What you’ll do: As a Data Engineer – Data Platform Services, your responsibilities include: Data Ingestion & Processing: Assisting in building and optimizing data pipelines for structured and unstructured data. Working with Kafka and Apache Spark to manage real-time and batch data ingestion. Supporting data integration using IBM CDC and Universal Data Mover (UDM). Big Data & Data Lakehouse Management: Managing and processing large datasets using PySpark and Iceberg tables. Assisting in migrating data workloads from IIAS to Cloudera Data Lake. Supporting data lineage tracking and metadata management for compliance. Optimization & Performance Tuning: Helping to optimize PySpark jobs for efficiency and scalability. Supporting data partitioning, indexing, and caching strategies. Monitoring and troubleshooting pipeline issues and performance bottlenecks. Security & Compliance: Implementing role-based access controls (RBAC) and encryption policies. Supporting data security and compliance efforts using Thales CipherTrust. Ensuring data governance best practices are followed. Collaboration & Automation: Working with Data Scientists, Analysts, and DevOps teams to enable seamless data access. Assisting in automation of data workflows using Apache Airflow. Supporting Denodo-based data virtualization for efficient data access.

Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 4-7 years of experience in big data engineering, data integration, and distributed computing. Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP). Proficiency in Python or Scala for data processing.
Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM). Understanding of data security, encryption, and compliance frameworks. Preferred technical and professional experience Experience in banking or financial services data platforms. Exposure to Denodo for data virtualization and DGraph for graph-based insights. Familiarity with cloud data platforms (AWS, Azure, GCP). Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
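The workflow automation with Apache Airflow mentioned above amounts to declaring task dependencies as a DAG and executing tasks in dependency order. A small sketch of that ordering using only the standard library; the task names are hypothetical, and Airflow's scheduler resolves the same ordering from `task_a >> task_b` style declarations.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical pipeline: ingest from Kafka, then two parallel transform
# steps, then a load into the lakehouse. Each key maps a task to the set
# of tasks that must finish before it can run.
deps = {
    "transform_orders": {"ingest_kafka"},
    "transform_users": {"ingest_kafka"},
    "load_iceberg": {"transform_orders", "transform_users"},
}

# static_order() yields every task after all of its predecessors.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The two transform tasks have no dependency on each other, so a real scheduler would run them in parallel; a topological order only guarantees ingest comes first and the load comes last.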

Posted 1 month ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office


What you’ll do As a Data Engineer – Data Modeling, you will be responsible for: Data Modeling & Schema Design Developing conceptual, logical, and physical data models to support enterprise data requirements. Designing schema structures for Apache Iceberg tables on Cloudera Data Platform. Collaborating with ETL developers and data engineers to optimize data models for efficient ingestion and retrieval. Data Governance & Quality Assurance Ensuring data accuracy, consistency, and integrity across data models. Supporting data lineage and metadata management to enhance data traceability. Implementing naming conventions, data definitions, and standardization in collaboration with governance teams. ETL & Data Pipeline Support Assisting in the migration of data from IIAS to Cloudera Data Lake by designing efficient data structures. Working with Denodo for data virtualization, ensuring optimized data access across multiple sources. Collaborating with teams using Talend Data Quality (DQ) tools to ensure high-quality data in the models. Collaboration & Documentation Working closely with business analysts, architects, and reporting teams to understand data requirements. Maintaining data dictionaries, entity relationships, and technical documentation for data models. Supporting data visualization and analytics teams by designing reporting-friendly data models. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 4-7 years of experience in data modeling, database design, and data engineering. Hands-on experience with ERwin Data Modeler for creating and managing data models. Strong knowledge of relational databases (PostgreSQL) and big data platforms (Cloudera, Apache Iceberg). Proficiency in SQL and NoSQL database concepts. Understanding of data governance, metadata management, and data security principles. Familiarity with ETL processes and data pipeline optimization. 
Strong analytical, problem-solving, and documentation skills. Preferred technical and professional experience Experience working on Cloudera migration projects. Exposure to Denodo for data virtualization and Talend DQ for data quality management. Knowledge of Kafka, Airflow, and PySpark for data processing. Familiarity with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance. Certifications in Data Modeling, Cloudera Data Engineering, or IBM Data Solutions.
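Since the role centers on enforcing naming conventions and data definitions across models, here is a sketch of the kind of automated model check a modeling team might run. The model structure, the snake_case rule, and the primary-key requirement are assumptions for illustration, not a specific tool's API.

```python
import re

def check_model(model):
    """Flag columns that violate a snake_case naming convention and tables
    missing a primary key -- the sort of standardization check that backs
    a governance review. The model dict layout is hypothetical."""
    issues = []
    snake = re.compile(r"^[a-z][a-z0-9_]*$")
    for table, spec in model.items():
        if not spec.get("primary_key"):
            issues.append(f"{table}: missing primary key")
        for col in spec["columns"]:
            if not snake.match(col):
                issues.append(f"{table}.{col}: not snake_case")
    return issues

model = {
    "customer": {"primary_key": "customer_id",
                 "columns": ["customer_id", "FullName", "created_at"]},
    "order_item": {"primary_key": None,
                   "columns": ["order_id", "sku"]},
}
issues = check_model(model)
for issue in issues:
    print(issue)
```

Run against every physical model before release, a check like this turns a naming-convention document into something enforceable.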

Posted 1 month ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office


Who you are: A highly skilled Data Engineer specializing in Data Modeling, with experience in designing, implementing, and optimizing data structures that support the storage, retrieval and processing of data in large-scale enterprise environments. You have expertise in conceptual, logical, and physical data modeling, along with a deep understanding of ETL processes, data lake architectures, and modern data platforms, and are proficient in ERwin, PostgreSQL, Apache Iceberg, Cloudera Data Platform, and Denodo. Your ability to work with cross-functional teams, data architects, and business stakeholders ensures that data models align with enterprise data strategies and support analytical use cases effectively.

What you’ll do: As a Data Engineer – Data Modeling, you will be responsible for: Data Modeling & Architecture: Designing and developing conceptual, logical, and physical data models to support data migration from IIAS to Cloudera Data Lake. Creating and optimizing data models for structured, semi-structured, and unstructured data stored in Apache Iceberg tables on Cloudera. Establishing data lineage and metadata management for the new data platform. Implementing Denodo-based data virtualization models to ensure seamless data access across multiple sources. Data Governance & Quality: Ensuring data integrity, consistency, and compliance with regulatory standards, including banking/regulatory guidelines. Implementing Talend Data Quality (DQ) solutions to maintain high data accuracy. Defining and enforcing naming conventions, data definitions, and business rules for structured and semi-structured data. ETL & Data Pipeline Optimization: Supporting the migration of ETL workflows from IBM DataStage to PySpark, ensuring models align with the new ingestion framework. Collaborating with data engineers to define schema evolution strategies for Iceberg tables. Ensuring performance optimization for large-scale data processing on Cloudera.
Collaboration & Documentation Working closely with business analysts, architects, and developers to translate business requirements into scalable data models. Documenting data dictionary, entity relationships, and mapping specifications for data migration. Supporting reporting and analytics teams (Qlik Sense/Tableau) by providing well-structured data models. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 4-7 years of experience in data modeling, database design, and data engineering. Hands-on experience with ERwin Data Modeler for creating and managing data models. Strong knowledge of relational databases (PostgreSQL) and big data platforms (Cloudera, Apache Iceberg). Proficiency in SQL and NoSQL database concepts. Understanding of data governance, metadata management, and data security principles. Familiarity with ETL processes and data pipeline optimization. Strong analytical, problem-solving, and documentation skills. Preferred technical and professional experience Experience working on Cloudera migration projects. Exposure to Denodo for data virtualization and Talend DQ for data quality management. Knowledge of Kafka, Airflow, and PySpark for data processing. Familiarity with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance. Certifications in Data Modeling, Cloudera Data Engineering, or IBM Data Solutions.

Posted 1 month ago

Apply

2 - 5 years

7 - 11 Lacs

Mumbai

Work from Office


Who you are: A highly skilled Data Engineer specializing in Data Modeling, with experience in designing, implementing, and optimizing data structures that support the storage, retrieval and processing of data in large-scale enterprise environments. You have expertise in conceptual, logical, and physical data modeling, along with a deep understanding of ETL processes, data lake architectures, and modern data platforms, and are proficient in ERwin, PostgreSQL, Apache Iceberg, Cloudera Data Platform, and Denodo. Your ability to work with cross-functional teams, data architects, and business stakeholders ensures that data models align with enterprise data strategies and support analytical use cases effectively.

What you’ll do: As a Data Engineer – Data Modeling, you will be responsible for: Data Modeling & Architecture: Designing and developing conceptual, logical, and physical data models to support data migration from IIAS to Cloudera Data Lake. Creating and optimizing data models for structured, semi-structured, and unstructured data stored in Apache Iceberg tables on Cloudera. Establishing data lineage and metadata management for the new data platform. Implementing Denodo-based data virtualization models to ensure seamless data access across multiple sources. Data Governance & Quality: Ensuring data integrity, consistency, and compliance with regulatory standards, including banking/regulatory guidelines. Implementing Talend Data Quality (DQ) solutions to maintain high data accuracy. Defining and enforcing naming conventions, data definitions, and business rules for structured and semi-structured data. ETL & Data Pipeline Optimization: Supporting the migration of ETL workflows from IBM DataStage to PySpark, ensuring models align with the new ingestion framework. Collaborating with data engineers to define schema evolution strategies for Iceberg tables. Ensuring performance optimization for large-scale data processing on Cloudera.
Collaboration & Documentation: Working closely with business analysts, architects, and developers to translate business requirements into scalable data models. Documenting the data dictionary, entity relationships, and mapping specifications for data migration. Supporting reporting and analytics teams (Qlik Sense/Tableau) by providing well-structured data models. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 4-7 years of experience in data modeling, database design, and data engineering. Hands-on experience with ERwin Data Modeler for creating and managing data models. Strong knowledge of relational databases (PostgreSQL) and big data platforms (Cloudera, Apache Iceberg). Proficiency in SQL and NoSQL database concepts. Understanding of data governance, metadata management, and data security principles. Preferred technical and professional experience: Experience in Cloudera migration projects in the banking or financial sector. Knowledge of PySpark, Kafka, Airflow, and cloud-native data processing. Experience with Talend DQ for data quality monitoring. Familiarity with graph databases (DGraph Enterprise) for data relationships. Experience with GitLab, Sonatype Nexus, and CheckMarx for CI/CD and security compliance. IBM, Cloudera, or AWS/GCP certifications in Data Engineering or Data Modeling.

Posted 1 month ago

Apply

9 - 12 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office


Data Engineer RESPONSIBILITIES/TASKS: - Work with Technical Leads and Architects to analyse solutions. - Translate complex business requirements into tangible data requirements through collaborative work with both business and technical subject matter experts. - Develop / modify data models with an eye towards high performance, scalability, flexibility, and usability. - Ensure data models are in alignment with the overall architecture standards. - Create source-to-target mapping documentation. - Provide subject matter guidance in source system analysis and ETL builds. - Responsible for overseeing data integrity in the Smart Conductor system. - Serve as the data flow and enrichment “owner”, with deep expertise in data dynamics, capable of recognizing and elevating improvement opportunities early in the process. - Work with product owners to understand business reporting requirements and deliver appropriate insights on a regular basis. - Responsible for system configuration to deliver reports, data visualizations, and other solution components. SKILLS REQUIRED: - More than 5 years of software development experience. - Proficient in Power BI/Tableau, Google Data Studio, R, SQL, Python. - Strong knowledge of cloud computing and experience in Microsoft Azure (Azure ML Studio, Azure Machine Learning). - Strong knowledge of SSIS. - Proficient in Azure services: Azure Data Factory, Synapse, Data Lake. - Experience querying, analysing, or managing data required. - Experience within the healthcare insurance industry with payer data strongly preferred. - Experience in data cleansing, data engineering, data enrichment, and data warehousing / Business Intelligence preferred. - Strong analytical, problem solving and planning skills. - Strong organizational and presentation skills. - Excellent interpersonal and communication skills. - Ability to multi-task in a fast-paced environment. - Flexibility to adapt readily to changing business needs in a fast-paced environment.
- Team player who is delivery-oriented and takes responsibility for the team's success. - Enthusiastic, can-do attitude with the drive to continually learn and improve. - Knowledge of Agile/SCRUM methodologies. Required Skills: ETL, SQL, Azure

Posted 1 month ago

Apply

8 - 13 years

12 - 22 Lacs

Gurugram

Work from Office


Data & Information Architecture Lead, 8 to 15 years - Gurgaon. Summary: An excellent opportunity for Data Architect professionals with expertise in Data Engineering, Analytics, AWS and Database. Location: Gurgaon. Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of its clients, offering a wide range of services including investment management, risk analysis, and financial consulting.

Responsibilities: Design and optimize the architecture of an end-to-end data fabric, inclusive of the data lake, data stores and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories. Undertake detailed analysis of the information management requirements across all systems, platforms and applications to guide the development of information management standards. Lead the design of the information architecture across multiple data types, working closely with various business partners/consumers, the MIS team, the AI/ML team and other departments to design, deliver and govern future-proof data assets and solutions. Design and ensure delivery excellence for a) large and complex data transformation programs, b) small and nimble data initiatives to realize quick gains, and c) work with OEMs and partners to bring the best tools and delivery methods. Drive data domain modeling, data engineering and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility and scale.

Requirements: Deep understanding of the data and information architecture discipline, processes, concepts and best practices. Hands-on expertise in building and implementing data architecture for large enterprises. Proven architecture modelling skills, strong analytics and reporting experience. Strong data design, management and maintenance experience. Strong experience with data modelling tools. Extensive experience in areas of cloud-native lake technologies, e.g. AWS native lake solutions.

Posted 1 month ago

Apply

3 - 8 years

4 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


Role & responsibilities: Design, build, and maintain scalable and efficient data pipelines and ETL/ELT processes. Develop and optimize data models for analytics and operational purposes in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery). Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable datasets. Implement data quality checks, monitoring, and alerting for pipelines. Work with structured and unstructured data across various sources (APIs, databases, streaming). Ensure data security, compliance, and governance practices are followed. Write clean, efficient, and testable code using Python, SQL, or Scala. Support the development of data catalogs and documentation. Participate in code reviews and contribute to best practices in data engineering.

Preferred candidate profile: 3-9 years of hands-on experience in data engineering or a similar role. Strong proficiency in SQL, Python and PySpark. Experience with data pipeline orchestration tools such as Apache Airflow, Prefect, or Luigi (any one). Familiarity with cloud platforms such as AWS, Azure, or GCP, e.g. S3, Lambda, Glue, BigQuery, Dataflow (any one). Experience with big data tools such as Spark, Kafka, Hive, or Hadoop (any one). Strong understanding of relational and non-relational databases. Exposure to CI/CD practices and tools (e.g., Git, Jenkins, Docker). Excellent problem-solving and communication skills.
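"Implement data quality checks, monitoring, and alerting for pipelines" typically starts with a handful of assertions such as key uniqueness and null-rate thresholds. A stdlib sketch of that idea; the columns and the 10% threshold are illustrative assumptions, not a specific framework's API.

```python
def quality_report(rows, unique_col, required_cols, max_null_rate=0.1):
    """Return a list of failed checks: duplicates on a key column and
    null-rate limits on required columns. A pipeline step would alert
    (or halt the load) whenever the list is non-empty."""
    failures = []
    seen = set()
    for r in rows:
        if r[unique_col] in seen:
            failures.append(f"duplicate {unique_col}={r[unique_col]}")
        seen.add(r[unique_col])
    n = len(rows)
    for col in required_cols:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if n and nulls / n > max_null_rate:
            failures.append(f"{col}: null rate {nulls}/{n} over threshold")
    return failures

# Hypothetical batch with one duplicate key and one missing email.
rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "b@x.com"},
]
failures = quality_report(rows, "id", ["email"])
for f in failures:
    print(f)
```

Wiring this into orchestration is then just a task that raises when `failures` is non-empty, which is how alerting hooks in.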

Posted 1 month ago

Apply

9 - 14 years

20 - 35 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Lead and manage data-driven projects from initiation to completion, ensuring timely delivery and alignment with business objectives.
Experienced in at least one of the following engagements: BI/DW, DWH, Data Warehousing, Data Analytics, Business Intelligence, Data Projects, Data Management, Data Governance.
Proven experience in project management, preferably in data analytics, business intelligence, or related domains.
Strong analytical skills with experience in working with large datasets and deriving actionable insights.
Collaborate with cross-functional teams, including data scientists, analysts, and engineers, to drive data-related initiatives.
Develop and maintain project plans, timelines, and resource allocation for data & analytics projects.
Oversee data collection, analysis, and visualization to support decision-making.
Ensure data quality, integrity, and security compliance throughout project execution.
Use data insights to optimize project performance, identify risks, and implement corrective actions.
Communicate project progress, challenges, and key findings to stakeholders and senior management.
Implement Agile or Scrum methodologies for efficient project execution in a data-focused environment.
Stay updated with industry trends and advancements in data analytics and project management best practices.
Excellent problem-solving skills and ability to work with complex datasets.
Strong communication and stakeholder management skills.
Knowledge of cloud platforms (AWS, Google Cloud, Azure) and database management is a plus.
Certification in PMP, PRINCE2, or Agile methodologies is an advantage.
Locations: Pune/Bangalore/Hyderabad/Chennai/Mumbai

Posted 1 month ago

Apply

5 - 10 years

15 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: Snowflake Developer
Qualification: Any Graduate or above
Relevant Experience: 5 to 10 years
Required Technical Skill Set: Snowflake, Azure managed services platforms

Must-Have:
Proficient in SQL programming (stored procedures, user-defined functions, CTEs, window functions).
Design and implement Snowflake data warehousing solutions, including data modeling and schema design.
Able to source data from APIs, data lakes and on-premise systems into Snowflake.
Process semi-structured data using Snowflake-specific features like VARIANT and LATERAL FLATTEN.
Experience using Snowpipe to load micro-batch data.
Good knowledge of caching layers, micro-partitions, clustering keys, clustering depth, materialized views, and scale in/out vs scale up/down of warehouses.
Ability to implement data pipelines to handle data retention and data redaction use cases.
Proficient in designing and implementing complex data models, ETL processes, and data governance frameworks.
Strong hands-on experience in migration projects to Snowflake.
Deep understanding of cloud-based data platforms and data integration techniques.
Skilled in writing efficient SQL queries and optimizing database performance.
Ability to develop and implement a real-time data streaming solution using Snowflake.

Location: PAN India
CTC Range: 25 LPA-40 LPA
Notice period: Any
Shift Timing: N/A
Mode of Interview: Virtual
Mode of Work: WFO (Work From Office)

Pooja Singh KS
IT Staffing Analyst
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
pooja.singh@blackwhite.in | www.blackwhite.in
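The VARIANT / LATERAL FLATTEN features named in this listing unnest semi-structured documents into one row per array element. The effect can be illustrated in plain Python (a conceptual sketch of what the SQL does, not Snowflake code; the sample payload is invented):

```python
import json

# A semi-structured payload of the kind a Snowflake VARIANT column would hold.
payload = json.loads(
    '{"order_id": 101, "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}'
)

# LATERAL FLATTEN yields one row per array element, joined back to the parent row.
flattened = [
    {"order_id": payload["order_id"], "sku": item["sku"], "qty": item["qty"]}
    for item in payload["items"]
]
for row in flattened:
    print(row)
```

In Snowflake itself the equivalent is a `SELECT` with `LATERAL FLATTEN(input => col:items)` over the VARIANT column, producing the same parent-plus-element rows.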

Posted 1 month ago

Apply

3 - 6 years

20 - 25 Lacs

Hyderabad

Work from Office

Naukri logo

Overview:
As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, driving a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of platform engineers who build platform products for platform and cost optimization, build tools for platform ops and data ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help manage the platform governance team that builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
Be an active contributor to cost optimization of platforms and services.
Manage and scale Azure data platforms to support new product launches and drive platform stability and observability across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platforms for cost and performance.
Implement best practices around systems integration, security, performance and platform management.
Empower the business by creating value through increased adoption of data, data science and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionize data science models.
Define and manage SLAs for platforms and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.

Qualifications:
2+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and data engineering.
1+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
1+ years of experience in Databricks optimization and performance tuning.
Experience managing multiple teams and coordinating with different stakeholders to implement the team's vision.
Fluent with Azure cloud services; Azure certification is a plus.
Experience with integration of multi-cloud services with on-premises technologies.
Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory and Azure Databricks.
Experience with statistical/ML techniques is a plus.
Experience building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
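The "monitoring frameworks that capture metrics and operational KPIs ... for cost and performance" responsibility boils down to an aggregate-and-alert pattern. This is a hypothetical Python sketch of that idea; the platform names, cost figures and budget thresholds are all invented for illustration:

```python
from collections import defaultdict

def summarize_costs(usage_events, budget_per_platform):
    """Aggregate spend per platform and flag budget overruns."""
    spend = defaultdict(float)
    for event in usage_events:
        spend[event["platform"]] += event["cost_usd"]
    return {
        platform: {
            "spend": total,
            "over_budget": total > budget_per_platform.get(platform, float("inf")),
        }
        for platform, total in spend.items()
    }

# Example usage events, as a cost-collection job might emit them.
events = [
    {"platform": "databricks", "cost_usd": 120.0},
    {"platform": "databricks", "cost_usd": 95.0},
    {"platform": "synapse", "cost_usd": 40.0},
]
report = summarize_costs(events, {"databricks": 200.0, "synapse": 100.0})
print(report["databricks"]["over_budget"])  # True: 215.0 > 200.0
```

A production framework would pull the usage events from cloud billing APIs and push the `over_budget` flags into an alerting channel, but the aggregation core is the same.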

Posted 1 month ago

Apply

14 - 20 years

20 - 35 Lacs

Pune, Chennai, Mumbai (All Areas)

Work from Office

Naukri logo

Role: Data & Analytics Architect
Required Skill Set: Data integration, data modeling, IoT data management and the information delivery layers of Data & Analytics
Preferred Specializations or Prior Experience: Manufacturing, Hi-Tech, CPG use cases where Analytics and AI have been applied
Location: PAN India

Must-Have:
14+ years of IT industry experience.
IoT / Industry 4.0 / Industrial AI experience for at least 3+ years.
In-depth knowledge of data integration, data modeling, IoT data management and the information delivery layers of Data & Analytics.
Strong written and verbal communication with good presentation skills.
Excellent knowledge of data governance, medallion architecture, UNS, data lake architectures, AI/ML and data science.
Experience with cloud platforms (e.g., GCP, AWS, Azure) and cloud-based Analytics & AI/ML services.
Proven experience working with clients in the Manufacturing / CPG / High-Tech / Oil & Gas / Pharma industries.
Good understanding of technology trends, market forces and industry imperatives.
Excellent communication, presentation, and interpersonal skills.
Ability to work independently and collaboratively in a team environment.

Good-to-Have:
Degree in Data Science or Statistics.
Led consulting & advisory programs at CxO level, managing business outcomes.
Point-of-view articulation for CxO level.
Manufacturing (discrete or process) industry background for application of AI technology for business impact.
Entrepreneurial and comfortable working in a complex and fast-paced environment.

Responsibilities / Expected Deliverables:
We are seeking a highly skilled and experienced Data and Analytics Architect / Consultant to provide expert guidance and support to our clients in the Manufacturing, Consumer Packaged Goods (CPG), and High-Tech industries. This role requires:
Architecture, design and implementation experience in cloud data platforms.
Experience handling multiple types of data (structured, streaming, semi-structured, etc.).
Strategic experience in Data & Analytics (cloud data architecture, lakehouse architecture, data fabric, data mesh concepts).
Experience in deploying DevOps / CI/CD techniques; automating and deploying data pipelines / ETLs in a DevOps environment.
Experience in strategizing data governance activities.
The ideal candidate will possess exceptional communication, consulting, and problem-solving skills, along with a strong technical foundation in data architecture. The role involves leading the data architecture tech advisory engagement, bringing thought leadership to engage CxOs actively. Key roles and responsibilities include:
Engage with customer CxOs to evangelise adoption of AI & GenAI.
Author proposals for solving business problems and achieving business objectives, leveraging Data Analytics & AI technologies.
Advisory experience in managing the entire lifecycle of Data Analytics is an added advantage. This includes:
Develop a roadmap for the introduction and scaling of data architecture in the customer organization.
Define the best-suited AI operating model for customers.
Guide teams on solution approaches and roadmaps.
Build and leverage frameworks for RoI from AI.
Effectively communicate complex technical information to both technical and non-technical audiences, presenting findings and recommendations in a clear, concise, and compelling manner.
Demonstrate thought leadership to identify use cases to build and showcase to prospective customers.
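The medallion architecture named in this listing's requirements stages data through bronze (raw as ingested), silver (cleaned and typed) and gold (business-level aggregates) layers. A toy Python sketch of the idea, with invented IoT-style readings standing in for the layer contents:

```python
# Bronze: raw records exactly as ingested, malformed readings included.
bronze = [
    {"machine": "M1", "temp": "71.5"},
    {"machine": "M1", "temp": "bad"},   # malformed sensor reading
    {"machine": "M2", "temp": "65.0"},
]

# Silver: cleaned and typed; invalid rows are dropped (or quarantined
# to a dead-letter table in a real pipeline).
silver = []
for r in bronze:
    try:
        silver.append({"machine": r["machine"], "temp": float(r["temp"])})
    except ValueError:
        pass

# Gold: business-level aggregate, here the average temperature per machine.
readings = {}
for r in silver:
    readings.setdefault(r["machine"], []).append(r["temp"])
gold = {m: sum(v) / len(v) for m, v in readings.items()}
print(gold)
```

In a lakehouse setting each layer would be a set of Delta or Iceberg tables rather than in-memory lists, but the bronze-to-silver-to-gold refinement flow is the same.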

Posted 1 month ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

About the Role:
Job Title: Transformation Principal Change Analyst
Corporate Title: AVP
Location: Bangalore, India

Role Description:
We are looking for an experienced Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a get-things-done attitude and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy.
Gender-neutral parental leaves.
100% reimbursement under the childcare assistance benefit (gender neutral).
Sponsorship for industry-relevant certifications and education.
Employee Assistance Program for you and your family members.
Comprehensive hospitalization insurance for you and your dependents.
Accident and term life insurance.
Complimentary health screening for those 35 yrs. and above.

Your key responsibilities:
Responsible for change management planning, execution and reporting, adhering to governance standards and ensuring transparency around progress status.
Using data to tell the story, maintain risk management controls, and monitor and communicate initiative risks.
Collaborate with other departments as required to execute on timelines to meet the strategic goals.
As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement.
As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to success and carry learnings forward to future projects.
As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures and required controls; gather and document business requirements (user stories), including liaising with end-users and performing analysis of gathered data.
Heavily involved in the product development journey.

Your skills and experience:
Overall experience of at least 7-10 years leading complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment.
Banking / finance / regulated-industry experience, of which at least 2 years should be in the change/transformation space; association with change/transformation initiatives is a plus.
Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred.
Experience working with management reporting is preferred.
Bachelor's degree.

How we'll support you:
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.

About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies