
219 ETL Processes Jobs - Page 9

JobPe aggregates listings for easy access; you apply directly on the original job portal.

2.0 - 3.0 years

2 - 5 Lacs

Chennai, Tamil Nadu, India

On-site


Description
We are looking for a detail-oriented Data Analyst to join our team in India. The ideal candidate will have 2-3 years of experience in data analysis and will be responsible for transforming data into insights that drive business decisions.

Responsibilities
- Analyze and interpret complex data sets to identify trends, patterns, and insights.
- Develop and maintain dashboards and reports to facilitate data-driven decision-making.
- Collaborate with cross-functional teams to gather data requirements and provide analytical support.
- Conduct data validation and ensure data integrity for reporting purposes.
- Utilize statistical methods to analyze data and generate useful business insights.

Skills and Qualifications
- Bachelor's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.
- Proficiency in SQL for data querying and manipulation.
- Experience with data visualization tools such as Tableau, Power BI, or similar.
- Strong analytical skills with the ability to interpret data and provide actionable insights.
- Familiarity with programming languages such as Python or R for data analysis.
- Knowledge of statistical methods and experience applying them to real-world problems.
- Excellent problem-solving skills and attention to detail.
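The SQL-plus-Python pairing asked for above is the everyday workflow of this kind of role: SQL does the aggregation, Python interprets the result. As a minimal sketch (table, columns, and data are hypothetical, using Python's built-in sqlite3):

```python
import sqlite3

# Hypothetical raw order rows: (month, amount).
rows = [
    ("2024-01", 120.0), ("2024-01", 80.0),
    ("2024-02", 150.0), ("2024-02", 90.0),
    ("2024-03", 300.0),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (month TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# SQL aggregates the raw rows into a monthly trend.
trend = conn.execute(
    "SELECT month, SUM(amount) FROM orders GROUP BY month ORDER BY month"
).fetchall()

# Python turns the trend into a simple "insight":
# is revenue growing month over month?
growing = all(b[1] > a[1] for a, b in zip(trend, trend[1:]))
print(trend)
print("growing:", growing)
```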

Posted 1 month ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office


The Sustainability Data and Technology Program is a bank-wide program to deliver a strategic solution for Environmental, Social and Governance (ESG) data across Deutsche Bank. The Program is part of the Sustainability Strategy Key Deliverable. As a Business Analyst, you will be part of the Data Team. You will be responsible for reviewing business use cases from stakeholders, gathering and documenting requirements, defining high-level implementation steps, and creating business user stories. You will work closely with the Product Owner and development teams, bringing business and functional analysis skills into the development team to ensure that the implementation of requirements aligns with our business needs and technical quality standards.

Your key responsibilities
- Work with business and technology stakeholders to define, agree, and socialise requirements for ESG data sourcing and transformation needed by the consumer base within the bank.
- Work with architects and engineers to ensure that both functional and non-functional requirements can be realised in the design and delivery in a way that respects the architecture strategy.
- Analyse complex datasets to derive insights that support requirement definition, including data profiling of vendor data.
- Define and document business requirements for review by senior stakeholders, in JIRA and other documentation tools such as Confluence and Draw.io.
- Define acceptance criteria with stakeholders and support user acceptance testing to ensure quality product delivery, including defect management.
- Review user stories and associated test cases based on appropriate interpretation of business requirements.
- Liaise with business and development teams in Agile ceremonies such as Product Backlog Refinement to review user stories and prioritise the Product Backlog, supporting requirements on their path to release in the production environment.
- Act as a point of contact for the development teams for any business requirement clarifications.
- Support the Functional Analysts within the development teams in producing analysis artifacts.
- Design and specify data mappings to transform source-system data into a format that can be consumed by other business areas within the bank.
- Support the design and conceptualisation of new business solution options, articulating identified impacts and risks.
- Monitor and track issues, risks, and dependencies on analysis and requirements work.

Your skills and experience

Mandatory skills
- 4+ years of business analyst experience in the banking industry across the full project life cycle, with broad domain knowledge and understanding of core business processes, systems, and data flows.
- Experience specifying ETL processes within data projects.
- Experience of a large system implementation project spanning multiple business units and geographies, with awareness of the issues that can arise from a central implementation across different locations.
- Strong knowledge of business analysis methods (e.g. best practices in requirements management and UAT).
- Maturity and persuasiveness to engage in business dialogue and support stakeholders.
- Excellent analysis and problem-solving skills.
- Ability to communicate and interpret stakeholders' needs and requirements.
- An understanding of systems delivery life cycles and Agile delivery methodologies.
- A good appreciation of systems and data architectures.
- Strong discipline in data reconciliation, data integrity, controls, and documentation.
- Understanding of controls around software development to manage business requirements.
- Ability to work in virtual teams and matrixed organisations.
- Good team player, facilitator, negotiator, and networker.
- Able to lead senior managers towards common goals and build consensus across a diverse group.
- Ability to share information and transfer knowledge and expertise to team members.
- Ability to commit to and prioritise work duties and tasks.
- Ability to work in a fast-paced environment with competing and ever-changing priorities while maintaining a constant focus on delivery.
- Willingness to chip in and cover multiple roles when required, such as covering for Project Managers, assisting architecture, performing testing, and writing up meeting minutes.
- Expertise in Microsoft Office applications (Word, Excel, Visio, PowerPoint).
- Ability to query large datasets (e.g. SQL, Hue, Impala, Python) to test and analyse content and perform data profiling.

Desirable skills
- In-depth understanding of ESG reporting.
- Knowledge of ESG data vendors.
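Data profiling of vendor data, as described in this listing, usually starts with per-column completeness and cardinality checks. A minimal sketch of that first pass (table name, column, and data are hypothetical), using Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical vendor feed with gaps in the 'rating' column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendor_feed (entity_id TEXT, rating TEXT)")
conn.executemany(
    "INSERT INTO vendor_feed VALUES (?, ?)",
    [("A", "AA"), ("B", None), ("C", "BBB"), ("D", None), ("E", "AA")],
)

def profile(conn, table, column):
    """Return (row_count, null_count, distinct_count) for one column."""
    return conn.execute(
        f"SELECT COUNT(*), "
        f"       SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END), "
        f"       COUNT(DISTINCT {column}) "
        f"FROM {table}"
    ).fetchone()

stats = profile(conn, "vendor_feed", "rating")
print(stats)  # (5, 2, 2): 5 rows, 2 nulls, 2 distinct non-null ratings
```

Note that `COUNT(DISTINCT col)` ignores NULLs, which is why completeness needs its own `SUM(CASE ...)` term.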

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote


Job Description

We're AtkinsRealis, a world-class Engineering Services and Nuclear organization. We connect people, data and technology to transform the world's infrastructure and energy systems. Together, with our industry partners and clients, and our global team of consultants, designers, engineers and project managers, we can change the world. Created by the integration of long-standing organizations dating back to 1911, we are a world-leading professional services company dedicated to engineering a better future for our planet and its people.

We deploy global capabilities locally to our clients and deliver unique end-to-end services across the whole life cycle of an asset, including consulting, advisory & environmental services, intelligent networks & cybersecurity, design & engineering, procurement, project & construction management, operations & maintenance, decommissioning and capital. The breadth and depth of our capabilities are delivered to clients in key strategic sectors. News and information are available at or follow us on LinkedIn.

Our teams take great pride in delivering some of the world's most prestigious projects. This success is driven by our talented people, whose diverse perspectives, expertise, and knowledge set us apart. Join us and you'll be part of our genuinely collaborative environment, where everyone is supported to make the most of their talents and expertise. When it comes to work-life balance, AtkinsRealis is a great place to be. So, let's discuss how our flexible and remote working policies can support your priorities. We're passionate about our work while valuing each other equally. So, ask us about some of our recent pledges for Women's Equality and being a Disability Confident and Inclusive Employer.

PIMS Engineers are responsible for integrating advanced technologies and streamlining processes to manage project deliverables efficiently. This mid-level role demands a strategic thinker with strong automation expertise, project management acumen, and the ability to translate complex business needs into scalable digital solutions. The engineer will also support project mobilization, ensure foundational systems are in place, and mentor junior team members.

Key Responsibilities:
- Common Data Environment (CDE) Management: Ensure centralized, secure, and structured data management across all project phases.
- Project Mobilization & System Implementation: Support the setup and mobilization of new projects, ensuring foundational systems and processes are in place. Deploy new tools and systems to enhance operational efficiency and reduce manual effort.
- Advanced Reporting & Analytics: Design and maintain interactive dashboards and visual reports using Power BI. Define and monitor KPIs, build data models, and perform DAX queries. Deliver customized ad hoc reports and support data-driven decision-making.
- Standardization & Automation: Drive consistency across projects by implementing standardized processes and automation solutions. Identify automation opportunities and apply lean techniques to improve project controls.
- Project Controls & Governance: Develop Cost Breakdown Structures (CBS) aligned with Work Breakdown Structures (WBS). Apply strong understanding of CPM, earned value management, and risk/schedule management. Collaborate with stakeholders to ensure governance frameworks are effectively implemented.
- Mentorship & Knowledge Sharing: Guide junior engineers in automation, reporting, and project control practices. Contribute to internal knowledge bases and promote continuous improvement.

Expertise Areas:
- Advanced proficiency in Microsoft Power Platform (Power BI, Power Apps, Power Automate), Excel (including VBA), and SQL.
- Working knowledge of Python and Azure Services.
- Experience with ETL processes, SSAS, SSIS, and SSRS.
- Ability to analyze large datasets and translate insights into actionable strategies.
- Ability to build tabular and multidimensional data models, perform DAX queries, and create dynamic, interactive dashboards.
- Strong understanding of stakeholder management, communication, risk, and schedule management.
- Familiarity with Primavera P6 and earned value concepts.
- Ability to adapt quickly to changing project needs and environments.

Qualifications:
- Bachelor's degree in Construction Management, Civil Engineering, or a related field.
- 7-8 years of relevant experience on global projects (UK, Middle East, US), preferably in consultancy roles.
- Minimum 3 years of hands-on experience in automation and Power Platform tools.
- Strong communication, presentation, and report writing skills.

Preferred Skills & Certifications:
- PMP, PRINCE2, or equivalent Business Analysis certifications.
- Advanced Power BI and O365 proficiency.
- Automation skills using Visual Basic, Python, or similar tools.
- Leadership and mentoring capabilities.

What We Can Offer You:
- Varied, interesting and meaningful work.
- A hybrid working environment with flexibility and great opportunities.
- Opportunities for training and, as the team grows, career progression or sideways moves.
- An opportunity to work within a large global multi-disciplinary consultancy on a mission to change the ways we approach business as usual.

Why work for AtkinsRealis?
We at AtkinsRealis are committed to developing our people both personally and professionally. Our colleagues have access to a wide-ranging training portfolio and development activities designed to help them make the best of their abilities and talents. We also actively support staff in achieving corporate membership of relevant institutions.

Meeting Your Needs:
To help you get the most out of life in and outside of work, we offer employees Total Reward. Making sure you're supported is important to us. So, if you identify as having a disability, tell us ahead of your interview, and we'll discuss any adjustments you might need.

Additional Information:
We are an equal opportunity, drug-free employer committed to promoting a diverse and inclusive community - a place where we can all be ourselves, thrive and develop. To help embed inclusion for all, from day one, we offer a range of family-friendly, inclusive employment policies, flexible working arrangements and employee networks to support staff from different backgrounds. As an Equal Opportunities Employer, we value applications from all backgrounds, cultures and abilities. We care about your privacy and are committed to protecting it. Please consult our Careers site to know more about how we collect, use and transfer your Personal Data. Link:
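This role leans on earned value management. The two standard indices are the Cost Performance Index, CPI = EV/AC, and the Schedule Performance Index, SPI = EV/PV (EV earned value, AC actual cost, PV planned value). A tiny sketch with hypothetical task data:

```python
# Hypothetical tasks: planned value (pv), earned value (ev), actual cost (ac).
tasks = [
    {"name": "design",  "pv": 100.0, "ev": 90.0,  "ac": 80.0},
    {"name": "procure", "pv": 200.0, "ev": 180.0, "ac": 220.0},
]

pv = sum(t["pv"] for t in tasks)
ev = sum(t["ev"] for t in tasks)
ac = sum(t["ac"] for t in tasks)

cpi = ev / ac  # > 1 means under budget for the work performed
spi = ev / pv  # > 1 means ahead of schedule

print(f"CPI={cpi:.2f} SPI={spi:.2f}")  # CPI=0.90 SPI=0.90
```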

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Job Title: Senior SQL Architect
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Job Type: Contract (Immediate Joiner Preferred)
Email to Apply:

Job Summary: We are looking for a Senior SQL Architect to lead the architecture, design, and implementation of robust database solutions using Microsoft SQL Server. This role demands a strategic thinker with strong technical skills and the ability to collaborate across departments.

Key Responsibilities:
- Design and architect scalable SQL Server database systems
- Lead data modeling, database design, and ETL processes
- Collaborate with development teams for seamless DB integration
- Maintain database standards, best practices, and technical documentation
- Troubleshoot complex DB issues and optimize performance
- Implement robust security and compliance frameworks
- Mentor junior DB professionals

Mandatory Technical Skills:
- 10+ years of SQL Server experience
- Advanced T-SQL, data modeling, and performance tuning
- Experience with high availability (Clustering, Mirroring, Always On)
- Hands-on with SSIS, Azure Data Factory, or similar ETL tools
- Azure SQL / AWS RDS experience
- Strong knowledge of security, BI, and DW concepts

Soft Skills:
- Excellent analytical and problem-solving abilities
- Strong leadership and mentoring experience
- Effective communication with cross-functional teams
- Detail-oriented and quality-focused

Good to Have:
- Oracle/MySQL experience
- DevOps & CI/CD exposure
- Data governance, data science, or ML familiarity

Location: Remote
Start Date: Immediate

Interested candidates, please share your resume with the following details: Current CTC, Expected CTC, Notice Period/Availability, Preferred Location (Remote).
Send to: hr@sridatta.co.in
Contact: | 9032956160
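Performance tuning of the kind this role calls for usually starts with reading query plans and adding covering indexes. SQL Server's own tooling isn't available here; as a stand-in under that assumption, a sqlite3 sketch (hypothetical table) showing how creating an index changes the plan from a full scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"

# Before indexing: the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")

# After indexing: the planner can seek directly via the index.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][-1])  # e.g. 'SCAN orders'
print(plan_after[0][-1])   # e.g. 'SEARCH orders USING INDEX ix_orders_customer ...'
```

The same workflow applies in T-SQL with `SET SHOWPLAN_XML` or graphical execution plans; only the plan syntax differs.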

Posted 1 month ago

Apply

10 - 15 years

15 - 25 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid


Experience: 10+ Years

Role Overview: We are seeking an experienced AWS Data & Analytics Architect with a strong delivery background and excellent communication skills. The ideal candidate will have over 10 years of experience and a proven track record in managing teams and client relationships. You will be responsible for leading data modernization and transformation projects using AWS services.

Key Responsibilities:
- Lead and architect data modernization/transformation projects using AWS services.
- Manage and mentor a team of data engineers and analysts.
- Build and maintain strong client relationships, ensuring successful project delivery.
- Design and implement scalable data architectures and solutions.
- Oversee the migration of large datasets to AWS, ensuring data integrity and security.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Ensure best practices in data management and governance are followed.

Required Skills and Experience:
- 10+ years of experience in data architecture and analytics.
- Hands-on experience with AWS services such as Redshift, S3, Glue, Lambda, RDS, and others.
- Proven experience delivering 1-2 large data migration/modernization projects using AWS.
- Strong leadership and team management skills.
- Excellent communication and interpersonal skills.
- Deep understanding of data modeling, ETL processes, and data warehousing.
- Experience with data governance and security best practices.
- Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications:
- AWS Certified Solutions Architect - Professional or AWS Certified Big Data - Specialty.
- Experience with other cloud platforms (e.g., Azure, GCP) is a plus.
- Familiarity with machine learning and AI technologies.

Posted 1 month ago

Apply

12 - 16 years

14 - 18 Lacs

Bengaluru

Work from Office


Key Responsibilities
- Negotiate with customers to architect and design business BI and data management application and platform solutions that address complex cross-functional and technical needs, and convert designs into functional and technical specifications.
- Identify and recommend industry BKMs (best known methods) for integrated solution design and standards.
- Manage BI, Big Data and AI/ML software application and platform configuration and/or coding, including query and report building, dashboard creation, and master data and integration object development. Introduce and manage BKMs for delivery of self-service models.
- Own the Data Science lifecycle, including data curation and model build, test and deployment.
- Lead overall testing (unit, integration, performance, acceptance) and data conversions using standard tools and following established processes and guidelines. Oversee contingent workers performing configuration, coding and object development work. Perform quality assurance.
- Manage the execution of BI application and platform evaluations and POCs. Assess vendor strategies, roadmaps and next-generation technologies, and recommend changes to architectural strategy and the technology roadmap.
- Manage personnel providing Business Intelligence, Big Data or AI/ML application and platform support services to meet performance, availability, customer service level agreement, and customer satisfaction targets.
- Monitor specific IT systems or sets of systems for availability, performance and capabilities, and report anomalies through a predefined process.
- Manage completion of root cause analysis and resolution of complex outages or incident trends, working with infrastructure and technical teams, support providers and application vendors. Recommend and execute required corrective actions.
- Participate in evaluation and recommendation of patches, point releases, major upgrades and new systems.
- Deliver, and manage personnel responsible for the delivery of, BI, Big Data and AI/ML project and support services within area of responsibility and allocated budget. Develop project budgets. Develop understanding of the cost model and cost drivers for services, and recommend service area budgets and cost optimization activities. May be responsible for timely renewal of maintenance and subscription contracts.
- Adhere to, and manage junior staff to adhere to, GIS project management, software application development, testing, service management, change management, RCA and other relevant processes, standards, governance and controls. May manage execution of SOX controls and testing, and support internal and external audits.
- Plan and manage medium to large scale Business Intelligence, Big Data or AI/ML application or platform projects to ensure effective and efficient execution within the guardrails of scope, timeline, budget and quality. May serve as a Business Intelligence, Big Data, or AI/ML team lead on a large, complex, cross-functional project. May manage junior project managers.
- Oversee/manage contingent workers performing Business Intelligence, Big Data and AI/ML project and/or support services. Responsible for the selection, onboarding and offboarding of contingent workers in a timely manner. Manage contingent worker project/task assignments and ensure work product quality. Approve contingent worker timesheets/cost.
- Architect and design solutions utilizing associated Big Data technologies (Databricks), with a focus on delivering solutions using Azure/AWS.
- Design and develop data models, schemas, and databases. Define data governance policies and standards. Collaborate with business stakeholders to understand data requirements. Optimize data storage, retrieval, and processing. Ensure data security and compliance. Evaluate and recommend data technologies and tools.
- Develop Spark applications for data processing and analysis. Optimize Spark jobs for performance and scalability. Troubleshoot and debug Spark-related issues. Collaborate with data scientists and data engineers. Monitor Spark clusters and manage resources.
- Build production-ready, highly scalable, available, fault-tolerant data processing systems using Azure technologies, Spark, Elasticsearch and other big data technologies.
- Willingness to drive people on all sides of an issue to a common understanding and then drive them toward resolution. Able to clearly communicate ideas in technical or business terms with peers across IT.
- Define Big Data technology strategies and roadmaps.

Functional Knowledge: Demonstrates in-depth understanding of concepts, theories and principles in own job family and basic knowledge of other related job families.
Business Expertise: Applies understanding of the industry and how own area contributes to the achievement of objectives.
Leadership: Manages a generally homogeneous team; adapts plans and priorities to meet service and/or operational challenges.
Impact: Impacts the level of service and the team's ability to meet quality, volume, and timeliness objectives. Guided by policies and resource requirements within business unit, department or sub-function.
Interpersonal Skills: Guides, influences and persuades others internally in related areas or externally.

Qualifications, Experience & Mandatory Skills
- Experience with ETL processes and data integration.
- Expertise on at least one cloud platform (Azure, AWS).
- Strong proficiency in Python.
- Expertise with data streaming frameworks (e.g., Kafka).
- Proficiency in working with Databricks.
- Bachelor's degree in Computer Science or a relevant field.
- 12+ years' experience in data/architect related positions and responsibilities.

As a Data Architect - Engineer, you will play a pivotal role in designing, implementing, and optimizing data solutions within our organization. You'll collaborate with cross-functional teams to define data architecture, ensure data quality, and drive efficient data pipelines. Additionally, you'll specialize in building and optimizing Apache Spark applications for big data processing, streaming, and analytics.

Posted 1 month ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office


We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
- Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
- Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
- Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt.
- Develop complex SQL queries, stored procedures, and materialized views for data transformations.
- Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
- Implement data partitioning, clustering, and query tuning strategies for optimal performance.
- Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
- Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
- Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
- Automate data ingestion, transformations, and warehouse maintenance tasks.

Required Skills & Qualifications:
- 6+ years of experience in data warehousing, ETL, and data engineering.
- Strong hands-on experience with Amazon Redshift and AWS data services.
- Expertise in SQL performance tuning, indexing, and query optimization.
- Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
- Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
- Familiarity with data lake architectures and the modern data stack.
- Proficiency in Python, Shell scripting, or PySpark for automation.
- Experience working in Agile/DevOps environments with CI/CD pipelines.
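ELT pipelines of the kind described here typically land each batch in a staging table and then merge it into the warehouse table inside one transaction. A Redshift cluster isn't available here, so the following sqlite3 sketch (hypothetical table names) shows only the portable delete-then-insert form of the staging upsert pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stg_customer (id INTEGER, name TEXT);
    INSERT INTO dim_customer VALUES (1, 'Asha'), (2, 'Ravi');
""")

# A new batch lands in staging: one update (id=2) and one new row (id=3).
conn.executemany("INSERT INTO stg_customer VALUES (?, ?)",
                 [(2, 'Ravindra'), (3, 'Meena')])

with conn:  # one transaction for the whole merge step
    # Replace target rows that staging supersedes, then copy staging in.
    conn.execute(
        "DELETE FROM dim_customer WHERE id IN (SELECT id FROM stg_customer)"
    )
    conn.execute("INSERT INTO dim_customer SELECT id, name FROM stg_customer")
    conn.execute("DELETE FROM stg_customer")

result = conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall()
print(result)  # [(1, 'Asha'), (2, 'Ravindra'), (3, 'Meena')]
```

Keeping the merge in a single transaction means readers never observe the half-deleted intermediate state, which is the point of the pattern.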

Posted 1 month ago

Apply

8 - 12 years

25 - 30 Lacs

Hyderabad, Bengaluru

Work from Office


Key Responsibilities:
- ETL Development: Design, develop, and implement Ab Initio components and graphs for ETL processes. Develop complex data pipelines for large-scale data processing. Create and maintain data integration solutions.
- Data Analysis and Requirements: Analyze data requirements and collaborate with stakeholders to understand business needs. Understand and translate business requirements into technical solutions.
- Performance Tuning and Optimization: Optimize Ab Initio processes for performance and efficiency. Troubleshoot and debug issues related to application performance and deployment.
- Code and Documentation:

Posted 1 month ago

Apply

8 - 12 years

10 - 14 Lacs

Gurugram

Work from Office


We are currently looking for a Team Lead - MDM to join us at our facility in Gurugram, Haryana.

Technical Competencies:
- Hands-on experience with master data creation and maintenance (Material/Vendor/Pricing/Customer/PIRs/Source List/BOM data, etc.)
- Hands-on experience with SAP toolsets in the data space, including data extraction programs from SAP, SQVIs, ETL processes, load programs, LSMW, data quality maintenance and cleansing, etc.
- Knowledge of request management tools, e.g. SNOW, Remedy
- Knowledge of key database concepts, data models, and relationships between different types of data
- An understanding of end-to-end set-up and the business impact of master data key fields
- Knowledge of SAP, S/4 HANA, SAP-MDG, Ariba, SFDC, middleware (Informatica etc.) or additional ERP platforms, IT tools and technologies is desirable
- Experience in data management processes (data profiling & cleansing, workflows, data quality, governance process, relationships & dependencies with IT teams, etc.), or functional knowledge of SAP MM/PP or OTC modules, is an added advantage
- Prior experience of handling a team

Primary Responsibilities / Role Expectations:
As an MDM Team Lead, the role involves:
- Getting adept with the MDM process and gaining knowledge by initially working on daily business requests for master data objects - creation/update/obsolete/reactivate.
- Holding key business stakeholder interactions for feedback and business requirements, and maintaining data governance and data quality.
- Testing master data creations/updates across tools and interfaces.
- Getting the team and the allocated region ready for future additional tasks/requirements/projects as and when needed.
- Maintaining data governance and data quality, and carrying out data cleansing activities.
- Mentoring team members on topics of expertise.
- Strong ownership focus and drive to excel and deliver.
- Flexibility to work in shifts.

Other Skills & Experience:
- Professional experience of 8+ years.
- Good communication skills, stakeholder alignment, and experience interacting with end clients / international colleagues across geographies.
- Ability to resolve conflicts, share, collaborate and work as a leader for the allocated team.

Posted 1 month ago

Apply

8 - 12 years

16 - 20 Lacs

Mumbai

Work from Office


We are looking for a skilled PL/SQL Developer with expertise in Oracle 19c to join our team. In this role, you will be responsible for developing, enhancing, and maintaining database applications using Oracle PL/SQL, ensuring data integrity, and optimizing performance. You will collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements.

Key Responsibilities:

PL/SQL Development & Maintenance:
- Design, develop, and maintain PL/SQL packages, procedures, functions, triggers, and scripts for Oracle 19c database systems.
- Implement business logic in PL/SQL, leveraging Oracle's latest features such as JSON handling, result caching, native compilation, and parallel query execution.

Performance Optimization:
- Analyze and optimize existing PL/SQL code and queries to improve performance.
- Use execution plans and database tuning techniques to ensure optimal resource usage.
- Implement bulk processing techniques like BULK COLLECT, FORALL, and parallel processing for large datasets.

Database Design & Architecture:
- Assist in database design, normalization, and the creation of data models.
- Develop and maintain data pipelines, ETL processes, and stored procedures in Oracle 19c for data migration and integration tasks.

Troubleshooting & Issue Resolution:
- Proactively identify, troubleshoot, and resolve database performance issues and data inconsistencies.
- Perform root cause analysis of production issues related to database performance and PL/SQL code.

Database Security & Compliance:
- Ensure security best practices in database development, including data encryption, access controls, and adherence to internal policies.
- Implement data validation and error handling to ensure data integrity and reliability.

Collaboration & Documentation:
- Work closely with Application Developers, System Administrators, and Business Analysts to understand functional requirements and translate them into technical solutions.
- Document database designs, processes, and key functionality of PL/SQL procedures for future reference and maintenance.

Continuous Improvement:
- Stay updated on new Oracle features, patches, and enhancements (especially in Oracle 19c) to apply best practices and integrate the latest capabilities into the development process.
- Participate in peer reviews and knowledge-sharing sessions, and contribute to ongoing process improvements.
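The bulk-processing techniques named in this listing (BULK COLLECT, FORALL) exist to replace row-by-row round trips with set-at-a-time operations. Oracle isn't available here; as a rough analogue under that assumption, the DB-API `executemany` call plays the same role as FORALL by binding many rows to a single statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, val REAL)")
conn.execute("CREATE TABLE target (id INTEGER, val REAL)")
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(5)])

# Row-by-row (the anti-pattern bulk processing avoids) would be:
#   for id_, val in conn.execute("SELECT id, val FROM staging"):
#       conn.execute("INSERT INTO target VALUES (?, ?)", (id_, val))

# Set-at-a-time: fetch the batch once, bind all rows to one statement.
batch = conn.execute("SELECT id, val FROM staging").fetchall()  # ~ BULK COLLECT
conn.executemany("INSERT INTO target VALUES (?, ?)", batch)     # ~ FORALL

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 5
```

In real PL/SQL the same shape is `SELECT ... BULK COLLECT INTO l_rows` followed by `FORALL i IN l_rows.FIRST..l_rows.LAST INSERT ...`, usually with a `LIMIT` clause to cap memory per batch.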

Posted 1 month ago

Apply

6 - 11 years

25 - 30 Lacs

Noida, Hyderabad, Bengaluru

Work from Office


1. Experience in IP and management protocols, e.g. SNMP, NETCONF.
2. Should have worked on IP networks and have experience in using SNMP/MIBs.
3. Good knowledge of RAN networks for 2G, 3G, 4G and 5G.
4. Knowledge of fixed-line/wireless network architecture.
5. Experience/knowledge in telecom technologies such as LTE, 5G, SDN and telco clouds, IoT, NE virtualization.
6. Should have worked on ETL processes.
7. Knowledge of performance management on EMS/NMS.
8. Knowledge and experience in the OSS domain.
9. Good to have knowledge of any COTS network performance monitoring system such as SolarWinds, SevOne, Nagios Core, etc.
10. Good in technical documentation, e.g. DLD, LLD, HLD.
11. Good at data processing tools (such as MS Excel/scripts).
12. Familiar with database systems (MS SQL/MySQL/Oracle/Druid).
13. Basic knowledge and experience of Java.
14. Experience of working in an Agile team.

Location - Bangalore, Hyderabad, Noida, Pune

Posted 1 month ago

Apply

12 - 16 years

10 - 14 Lacs

Pune

Work from Office


IT Manager, Data Engineering and Analytics will lead a team of data engineers and analysts responsible for designing, developing, and maintaining robust data systems and integrations. This role is critical for ensuring the smooth collection, transformation, integration, and visualization of data, making it easily accessible for analytics and decision-making across the organization. The Manager will collaborate closely with analysts, developers, business leaders, and other stakeholders to ensure that the data infrastructure meets business needs and is scalable, reliable, and efficient.

What You'll Do:

Team Leadership: Manage, mentor, and guide a team of data engineers and analysts, ensuring their professional development and optimizing team performance. Foster a culture of collaboration, accountability, and continuous learning within the team. Lead performance reviews, provide career guidance, and handle resource planning.

Data Engineering & Analytics: Design and implement data pipelines, data models, and architectures that are robust, scalable, and efficient. Develop and enforce data quality frameworks to ensure accuracy, consistency, and reliability of data assets. Establish and maintain data lineage processes to track the flow and transformation of data across systems. Ensure the design and maintenance of robust data warehousing solutions to support analytics and reporting needs.

Collaboration and Stakeholder Management: Collaborate with stakeholders, including functional owners, analysts, and business leaders, to understand business needs and translate them into technical requirements. Work closely with these stakeholders to ensure the data infrastructure supports organizational goals and provides reliable data for business decisions. Build and foster relationships with major stakeholders to keep the data strategy aligned with business objectives.

Project Management: Drive end-to-end delivery of analytics projects, ensuring quality and timeliness. Manage project roadmaps, prioritize tasks, and allocate resources effectively. Manage project timelines and mitigate risks to ensure timely delivery of high-quality data engineering projects.

Technology and Infrastructure: Evaluate and implement new tools, technologies, and best practices to improve the efficiency of data engineering processes. Oversee the design, development, and maintenance of data pipelines, ensuring that data is collected, cleaned, and stored efficiently. Ensure there are no data pipeline leaks and monitor production pipelines to maintain their integrity. Familiarity with reporting tools such as Superset and Tableau is beneficial for creating intuitive data visualizations and reports.

Machine Learning and GenAI Integration: Knowledge of machine learning concepts and their integration with data pipelines is a plus, including how models can be used to enhance data quality, predict data trends, and automate decision-making processes. Familiarity with Generative AI (GenAI) is advantageous, particularly in enabling GenAI features on new datasets and leveraging GenAI with data pipelines to automate tasks, streamline workflows, and uncover deeper insights.

What You'll Bring:

12+ years of experience in data engineering, with at least 3 years in a managerial role.

Technical Expertise: Strong knowledge of data engineering concepts, including data warehousing, ETL processes, and data pipeline design. Proficiency in Azure Synapse or Data Factory, SQL, Python, and other data engineering tools.

Data Modeling: Expertise in data modeling is essential, with the ability to design and implement robust, scalable data models that support complex analytics and reporting needs. Experience with data modeling frameworks and tools is highly valued.

Leadership Skills: Proven ability to lead and motivate a team of engineers while managing cross-functional collaborations.

Problem-Solving: Strong analytical and troubleshooting skills to address complex data-related challenges.

Communication: Excellent verbal and written communication skills to interact effectively with technical and non-technical stakeholders, including the ability to motivate team members, provide regular constructive feedback, and facilitate open communication channels to ensure team alignment and success.

Data Architecture: Experience designing scalable, high-performance data systems and understanding of cloud platforms such as Azure and Databricks.

Machine Learning and GenAI: Knowledge of machine learning concepts and integration with data pipelines, as well as familiarity with GenAI, is a plus.

Data Governance: Experience with data governance best practices is desirable.

Open Mindset: An open mindset with a willingness to learn new technologies, processes, and methodologies is essential. The ability to adapt quickly to evolving data engineering landscapes and embrace innovative solutions is highly valued.
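The data quality frameworks this role calls for usually reduce to rule-based checks run over pipeline output. A minimal stdlib-only sketch (rule names, fields, and thresholds are illustrative assumptions, not part of the posting):

```python
# Minimal sketch of a rule-based data quality check, assuming records
# arrive as dicts (e.g. rows emitted by an ETL stage). Rules are illustrative.

def check_not_null(record, field):
    """Pass only if a required field is present and not None."""
    return record.get(field) is not None

def check_in_range(record, field, lo, hi):
    """Pass only if a numeric field falls inside [lo, hi]."""
    value = record.get(field)
    return value is not None and lo <= value <= hi

def run_quality_checks(records):
    """Apply every rule to every record; return failing (index, rule) pairs."""
    rules = [
        ("order_id_not_null", lambda r: check_not_null(r, "order_id")),
        ("amount_in_range", lambda r: check_in_range(r, "amount", 0, 1_000_000)),
    ]
    failures = []
    for i, record in enumerate(records):
        for name, rule in rules:
            if not rule(record):
                failures.append((i, name))
    return failures

records = [
    {"order_id": 1, "amount": 250.0},
    {"order_id": None, "amount": 99.0},   # fails the not-null rule
    {"order_id": 3, "amount": -5.0},      # fails the range rule
]
print(run_quality_checks(records))  # → [(1, 'order_id_not_null'), (2, 'amount_in_range')]
```

In practice the same pattern scales up via a framework (e.g. a rules table driving checks inside the pipeline) rather than hard-coded lambdas.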

Posted 1 month ago

Apply

3 - 4 years

5 - 6 Lacs

Noida, Gurugram, Bengaluru

Work from Office


Senior Engineer: The TCA practice has experienced significant growth in demand for engineering & architecture roles from CST, driven by client needs that extend beyond traditional data & analytics architecture skills. There is an increasing emphasis on deep technical skills, such as strong expertise in Azure, Snowflake, Azure OpenAI, and Snowflake Cortex, along with a solid understanding of their respective functionalities. The individual will work on a robust pipeline of TCA-driven projects with pharma clients. This role offers significant opportunities for progression within the practice.

What You'll Do
Opportunity to work on high-impact projects with leading clients.
Exposure to complex technological initiatives.
Learning support through organization-sponsored trainings & certifications.
Collaborative and growth-oriented team culture.
Clear progression path within the practice.
Opportunity to work on the latest technologies.
Successful delivery of client projects and a continuous learning mindset, including certifications in newer areas.
Partner with project leads and AEEC leads to deliver complex projects and grow the TCA practice.
Development of expert tech solutions for client needs, with positive feedback from clients and team members.

What You'll Bring
3-4 years of experience in RDF ontologies, RDF-based knowledge graphs (AnzoGraph DB preferred), data modelling, Azure cloud, and data engineering.
Understanding of ETL processes, data pulls using Azure services via a polling mechanism, and API/middleware development using Azure services.
Strong ability to identify data anomalies, design data validation rules, and perform data cleanup to ensure high-quality data.
Experience in pharma or life sciences data: familiarity with pharmaceutical datasets, including product, patient, or healthcare provider data, is a plus.
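The RDF knowledge graphs mentioned above are at bottom sets of (subject, predicate, object) triples queried by pattern. A toy stdlib-only sketch of that model (entity names are illustrative; real projects would use a library such as rdflib or a graph database like AnzoGraph):

```python
# Toy triple store illustrating the RDF (subject, predicate, object) model.
# All entity and predicate names below are made up for illustration.

triples = {
    ("drug:Aspirin", "rdf:type", "schema:Drug"),
    ("drug:Aspirin", "schema:treats", "condition:Pain"),
    ("drug:Ibuprofen", "rdf:type", "schema:Drug"),
    ("drug:Ibuprofen", "schema:treats", "condition:Inflammation"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    much like a single SPARQL triple pattern."""
    return sorted(
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    )

# All subjects typed as schema:Drug:
drugs = [s for s, _, _ in match(p="rdf:type", o="schema:Drug")]
print(drugs)  # → ['drug:Aspirin', 'drug:Ibuprofen']
```

Ontologies then layer class and property definitions on top of this same triple structure, which is why SPARQL queries compose as conjunctions of such patterns.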

Posted 1 month ago

Apply

8 - 10 years

25 - 27 Lacs

Noida

Work from Office


Experience in designing and implementing business process workflows using Collibra Workflow Designer.
Understanding of Collibra Data Governance Center (DGC) and its modules, including Data Catalog, Business Glossary, and Policy Manager.
Experience in metadata harvesting, lineage tracking, and governance to improve data visibility.
Proficiency in using Collibra REST APIs for workflow automation, data exchange, and custom integrations with other tools.
Familiarity with Collibra Data Quality & Observability, setting up data quality rules and configuring DQ workflows.
Familiarity with Groovy & Java for developing custom workflows and scripts within Collibra.
Ability to write Python & SQL for data validation, integration scripts, and automation.
Understanding of ETL processes and integrating Collibra with cloud/on-prem databases.
Familiarity with data governance frameworks (e.g., GDPR, CCPA, HIPAA) and best practices.
Experience in managing technical and business metadata effectively.
Ability to track data lineage and assess downstream/upstream data impacts.
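The downstream/upstream impact analysis listed above is essentially graph traversal over lineage edges. A minimal stdlib sketch (dataset names are illustrative; in a Collibra deployment the lineage graph would come from the catalog via its REST APIs rather than a hard-coded dict):

```python
from collections import deque

# Lineage edges: source dataset -> datasets derived from it. Names are illustrative.
lineage = {
    "raw.orders": ["staging.orders"],
    "staging.orders": ["mart.daily_sales", "mart.customer_ltv"],
    "mart.daily_sales": ["dashboard.revenue"],
}

def downstream_impact(dataset):
    """Breadth-first traversal returning every asset affected if `dataset` changes."""
    impacted, queue = set(), deque([dataset])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return sorted(impacted)

print(downstream_impact("raw.orders"))
# → ['dashboard.revenue', 'mart.customer_ltv', 'mart.daily_sales', 'staging.orders']
```

Upstream impact is the same traversal over reversed edges, which is how "what feeds this dashboard?" questions get answered.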

Posted 1 month ago

Apply

6 - 8 years

4 - 8 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Overview
The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Key Responsibilities
Design and implement scalable data pipelines using Azure Databricks.
Develop ETL processes to efficiently extract, transform, and load data.
Collaborate with data scientists and analysts to define and refine data requirements.
Optimize Spark jobs for performance and efficiency.
Monitor and troubleshoot production workflows and jobs.
Implement data quality checks and validation processes.
Create and maintain technical documentation related to data architecture.
Conduct code reviews to ensure best practices are followed.
Work on integrating data from various sources including databases, APIs, and third-party services.
Utilize SQL and Python for data manipulation and analysis.
Collaborate with DevOps teams to deploy and maintain data solutions.
Stay updated with the latest trends and updates in Azure Databricks and related technologies.
Facilitate data visualization initiatives for better data-driven insights.
Provide training and support to team members on data tools and practices.
Participate in cross-functional projects to enhance data sharing and access.

Required Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Minimum of 6 years of experience in data engineering or a related domain.
Strong expertise in Azure Databricks and data lake concepts.
Proficiency with SQL, Python, and Spark.
Solid understanding of data warehousing concepts.
Experience with ETL tools and frameworks.
Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
Excellent problem-solving and analytical skills.
Ability to work collaboratively in a diverse team environment.
Experience with data visualization tools such as Power BI or Tableau.
Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
Knowledge of data governance and data quality best practices.
Hands-on experience with big data technologies and frameworks.
A relevant certification in Azure is a plus.
Ability to adapt to changing technologies and evolving business requirements.

Location: Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
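The extract-transform-load pattern named in the responsibilities, reduced to a stdlib-only sketch (in a Databricks job the same three stages would typically be Spark reads, DataFrame transformations, and Delta table writes; the column names and tax rate are illustrative):

```python
import csv
import io

# Extract: parse rows from a source. An in-memory CSV stands in here for a
# real source such as a database table or an API export.
raw = io.StringIO("order_id,amount\n1,250.00\n2,99.50\n3,\n")
rows = list(csv.DictReader(raw))

# Transform: drop rows with missing amounts, cast types, derive a tax column.
transformed = [
    {"order_id": int(r["order_id"]),
     "amount": float(r["amount"]),
     "tax": round(float(r["amount"]) * 0.18, 2)}
    for r in rows if r["amount"]
]

# Load: append into the target (a list standing in for a Delta table / warehouse).
target_table = []
target_table.extend(transformed)
print(target_table)
# → [{'order_id': 1, 'amount': 250.0, 'tax': 45.0},
#    {'order_id': 2, 'amount': 99.5, 'tax': 17.91}]
```

The data quality checks the posting also asks for usually sit between the transform and load stages, rejecting or quarantining rows like the one with the empty amount.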

Posted 1 month ago

Apply

4 - 6 years

4 - 6 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Job Title: Power BI Developer (Contract, 6 Months)
Location: Remote (Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad)
Duration: 6 Months (Contract)

Job Summary
We are seeking a skilled Power BI Developer for a 6-month contract engagement to support our data visualization and business intelligence initiatives. The ideal candidate will have strong experience in designing and developing insightful Power BI dashboards and reports, transforming raw data into actionable insights.

Key Responsibilities
Design, develop, and publish interactive dashboards and reports using Power BI.
Collaborate with stakeholders to gather requirements and translate business needs into technical solutions.
Develop and optimize DAX queries and data models.
Connect Power BI to various data sources including SQL Server, Excel, SharePoint, Azure, etc.
Create and maintain proper documentation of dashboards, data models, and business logic.
Ensure data accuracy, consistency, and performance of reports.
Provide support for data refresh and troubleshooting of the Power BI service.
Work with cross-functional teams including business analysts, data engineers, and domain experts.

Required Skills & Qualifications
4+ years of experience in Power BI development.
Strong expertise in DAX, Power Query, and data modeling.
Proficiency in SQL and experience with relational databases (e.g., SQL Server, Oracle).
Understanding of ETL processes and data warehousing concepts.
Experience connecting Power BI to multiple data sources.
Familiarity with Power BI Service (publishing, scheduling, security roles).
Excellent analytical and problem-solving skills.
Good communication skills and ability to work independently.

Posted 1 month ago

Apply

1.0 years

3 - 3 Lacs

Chandigarh, Chandigarh, IN

On-site


About the job:
Key responsibilities:
1. Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
2. Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
3. Participate in performance tuning and optimization of existing data workflows.
4. Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
5. Document code, processes, and architecture for reproducibility and future reference.
6. Debug issues in data pipelines and contribute to their resolution.

Who can apply: Only candidates who have at least 1 year of experience and are Computer Science Engineering students.

Salary: ₹ 3,20,000 - 3,50,000 /year
Experience: 1 year(s)
Deadline: 2025-06-06 23:59:59
Skills required: Python, SQL, Big Data Analytics, Google Cloud Platform (GCP), and ETL processes

Other Requirements:
1. Strong foundational knowledge in SQL, Python, and data structures.
2. Familiarity with GCP or any other cloud platform (AWS/Azure).
3. Basic understanding of data warehousing, ETL/ELT processes, and version control systems (e.g., Git).
4. Ability to work independently in a fast-paced environment.
5. Excellent communication and problem-solving skills.
6. Prior internship experience in a data engineering or related role.
7. Experience with GCP-native tools like BigQuery, Cloud Functions, Dataflow, or Apache Beam.
8. Exposure to workflow orchestration tools like Airflow or Cloud Composer.
9. Interest in exploring DevOps/data infrastructure practices.

About Company: Fitelo is a pre-series A-funded startup operating in the space of weight loss and disease management. We aim for our users to achieve long-term wellness through simple, powerful, personalized, and holistic changes in nutrition and eating habits. We have served 25k+ users across 55+ countries and have a 99.3% success rate through our 200+ expert nutritionists. We use a combination of nutrition expertise and our proprietary habit-building technology to deliver outcomes.
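Pipelines of the shape described in the responsibilities (ingest, transform, aggregate, load) can be prototyped with plain Python generators before moving to Dataflow or Apache Beam. A stdlib-only sketch with illustrative message payloads:

```python
# Stdlib-only sketch of an ingest -> parse -> aggregate pipeline, mimicking the
# stages a Dataflow/Beam job would run. Message payloads are illustrative.

def ingest(messages):
    """Stand-in for a Pub/Sub subscription yielding raw messages."""
    yield from messages

def parse(stream):
    """Split 'user:action' payloads into structured records, dropping bad ones."""
    for msg in stream:
        parts = msg.split(":")
        if len(parts) == 2:
            yield {"user": parts[0], "action": parts[1]}

def count_actions(stream):
    """Aggregate action counts, like a GroupBy step before a BigQuery load."""
    counts = {}
    for record in stream:
        counts[record["action"]] = counts.get(record["action"], 0) + 1
    return counts

messages = ["alice:click", "bob:view", "carol:click", "malformed"]
result = count_actions(parse(ingest(messages)))
print(result)  # → {'click': 2, 'view': 1}
```

Because each stage is a generator, records flow through one at a time, which mirrors how streaming runners process unbounded inputs.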

Posted 1 month ago

Apply

3 - 4 years

4 - 6 Lacs

Gurugram

Work from Office


We are seeking a skilled Database Developer with 3-4 years of experience to design, develop, and optimize databases. The ideal candidate will have expertise in SQL development, database performance tuning, and data modeling. They should also be proficient in working with relational databases like SQL Server, MS SQL, or PostgreSQL and have experience in ETL processes and stored procedures.

Key Responsibilities:
Database Design & Development: Design and develop database schemas, tables, views, and indexes. Write optimized SQL queries, stored procedures, functions, and triggers. Implement database security best practices.
Performance Optimization: Analyze and improve database query performance. Apply indexing, partitioning, and query optimization techniques. Monitor and troubleshoot database performance issues.
ETL & Data Integration: Develop and manage ETL processes to migrate and transform data. Work with SSIS, Talend, or other ETL tools for data integration.
Database Maintenance & Administration: Ensure data integrity, backups, and disaster recovery strategies. Manage database deployments and migrations. Collaborate with DevOps teams to implement CI/CD for databases.
Collaboration & Documentation: Work closely with developers, data analysts, and business teams. Document database architecture, queries, and procedures.

Required Skills & Experience:
3-4 years of experience as a Database Developer.
Strong proficiency in SQL Server / MySQL / PostgreSQL.
Experience in query optimization and performance tuning.
Hands-on experience with stored procedures, triggers, and functions.
Knowledge of ETL processes and data warehousing.
Experience with database version control and CI/CD for databases.
Understanding of NoSQL databases (MongoDB, Redis) is a plus.
Familiarity with cloud-based databases (Azure SQL, AWS RDS, GCP BigQuery) is a plus.
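The indexing and query-tuning work described above can be demonstrated end to end with Python's built-in sqlite3 module (the table and column names are illustrative; production work would target SQL Server, MySQL, or PostgreSQL, whose query planners expose similar EXPLAIN output):

```python
import sqlite3

# Illustrative schema; sqlite3 stands in for a production RDBMS here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 250.0), ("bob", 99.5), ("alice", 10.0)],
)

# Before indexing, a filter on `customer` scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchone()
print(plan)  # the detail column mentions a full table SCAN

# Add an index, and the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchone()
print(plan)  # the detail column now mentions SEARCH ... USING INDEX

total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = ?", ("alice",)
).fetchone()[0]
print(total)  # → 260.0
```

Parameterized queries (the `?` placeholders) also illustrate the security best practice the posting mentions, since they rule out SQL injection by construction.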

Posted 1 month ago

Apply

6 - 8 years

8 - 10 Lacs

Hyderabad

Work from Office


What you will do
Let's do this. Let's change the world. In this vital role you will create and develop data lake solutions for scientific data that drive business decisions for Research. You will build scalable and high-performance data engineering solutions for large scientific datasets and collaborate with Research collaborators. You will also provide technical leadership to junior team members. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, is proficient with big data technologies, and has a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Lead, manage, and mentor a high-performing team of data engineers.
Design, develop, and implement data pipelines, ETL processes, and data integration solutions.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency.
Optimize large datasets for query performance.
Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs.
Implement data security and privacy measures to protect sensitive data.
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
Identify and resolve data-related challenges.
Adhere to best practices for coding, testing, and designing reusable code/components.
Explore new tools and technologies that will help improve ETL platform performance.
Participate in sprint planning meetings and provide estimates on technical implementation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek is a candidate with these qualifications.

Basic Qualifications:
Doctorate degree, OR Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR Bachelor's degree with 6-8 years of experience in one of those fields, OR Diploma with 10-12 years of experience in one of those fields.

Preferred Qualifications:
3+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms).

Functional Skills:
Must-Have Skills:
Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks.
Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning of big data processing.
Excellent problem-solving skills and the ability to work with large, complex datasets.
Able to engage with business collaborators and mentor the team in developing data pipelines and data models.

Good-to-Have Skills:
A passion for tackling complex challenges in drug discovery with technology and data.
Good understanding of data modeling, data warehousing, and data integration concepts.
Good experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL).
Knowledge of cloud data platforms (AWS preferred).
Experience with data visualization tools (e.g., Dash, Plotly, Spotfire).
Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming.
Experience writing and maintaining technical documentation in Confluence.
Understanding of data governance frameworks, tools, and best practices.

Professional Certifications: Databricks Certified Data Engineer Professional preferred.

Soft Skills:
Excellent critical-thinking and problem-solving skills.
Good communication and collaboration skills.
Demonstrated awareness of how to function in a team setting.
Demonstrated presentation skills.
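Test automation of the sort this role mentions (pytest over pipeline code) typically starts as plain assertion-based checks on transform functions. A minimal sketch with illustrative record fields (pytest would collect functions like these automatically; the direct calls at the end let the sketch run standalone):

```python
# Minimal pytest-style checks for a pipeline transform. Field names are
# illustrative; pytest discovers test_* functions like these automatically.

def normalize_assay(record):
    """Example transform under test: uppercase the compound ID, clamp
    negative activity readings to zero."""
    return {
        "compound_id": record["compound_id"].upper(),
        "activity": max(record["activity"], 0.0),
    }

def test_compound_id_is_uppercased():
    out = normalize_assay({"compound_id": "cmpd-001", "activity": 1.5})
    assert out["compound_id"] == "CMPD-001"

def test_negative_activity_is_clamped():
    out = normalize_assay({"compound_id": "CMPD-002", "activity": -0.2})
    assert out["activity"] == 0.0

# Run the checks directly when pytest is not available:
test_compound_id_is_uppercased()
test_negative_activity_is_clamped()
print("all checks passed")
```

Keeping transforms as small pure functions like this is what makes pipeline code testable in the first place, independent of the Spark or orchestration layer around it.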

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies