Jobs
Interviews

25268 ETL Jobs - Page 50

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Tech Lead - Data Bricks
Job Date: Aug 2, 2025
Job Requisition Id: 59586
Location: Hyderabad, TG, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking to hire Data Bricks professionals in the following areas:

Experience: 8+ years.

Job Description: Overall 8+ years of experience, with a minimum of 3 years in Azure and at least 3 years as a lead. Should come from a DWH background and have strong ETL experience. Strong hands-on experience in Azure Databricks/PySpark. Strong hands-on experience in Azure Data Factory and DevOps. Strong knowledge of the big data stack. Strong knowledge of Azure Event Hubs, the pub-sub model, and security. Strong communication and analytical skills. Highly proficient in SQL development. Experience working in an Agile environment. Work as a team lead to develop cloud data and analytics solutions. Mentor junior developers and testers. Able to build strong relationships with the client technical team. Participate in the development of cloud data warehouses, data as a service, and business intelligence solutions. Data wrangling of heterogeneous data. Coding complex Spark (Scala or Python).

Required Behavioral Competencies: Accountability: takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team. Collaboration: shares information within the team, participates in team activities, asks questions to understand other points of view. Agility: demonstrates readiness for change, asking questions and determining how changes could impact own work. Customer Focus: identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations. Communication: targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision. Drives Results: sets realistic stretch goals for self and others to achieve and exceed defined goals/targets. Resolves Conflict: displays sensitivity in interactions and strives to understand others’ views and concerns.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
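
The hands-on Azure Databricks/PySpark work this role calls for typically resembles the sketch below: a small batch job that reads raw files from a data lake, cleanses them, and publishes a curated Delta table. This is an illustrative example only; the storage paths, table names, and columns are hypothetical and not taken from the posting.

```python
# Illustrative PySpark batch job for Azure Databricks; every path, table name,
# and column below is a hypothetical placeholder, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Read raw CSV files landed in an ADLS Gen2 container (placeholder path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/2025-08-01/"))

# Basic cleansing and enrichment: de-duplicate, cast types, derive columns.
clean = (raw.dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
         .filter(F.col("order_id").isNotNull()))

# Publish the curated result as a partitioned Delta table for downstream use.
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("curated.orders_daily"))
```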

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow - people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. This position involves developing batch and real-time data pipelines utilizing various data analytics processing frameworks in support of Data Science and Machine Learning practices. You will assist in integrating data from various sources, both internal and external, performing extract, transform, load (ETL) data conversions, and facilitating data cleansing and enrichment. Additionally, you will be involved in full systems life cycle management activities, including analysis, technical requirements, design, coding, testing, and implementation of systems and applications software. The role also entails synthesizing disparate data sources to create reusable and reproducible data assets, as well as assisting the Data Science community in analytical model feature tuning. Responsibilities include contributing to data engineering projects and building solutions by leveraging foundational knowledge in software/application development, programming languages for statistical modeling and analysis, data warehousing, and Cloud solutions. You will collaborate effectively, produce data engineering documentation, gather requirements, organize data, and define project scopes. Data analysis and presentation of findings to stakeholders to support business needs will be part of your tasks. Additionally, you will participate in the integration of data for data engineering projects, understand and utilize analytic reporting tools and technologies, and assist with data engineering maintenance and support. Defining data interconnections between operational and business functions, backup and recovery, and utilizing technology solutions for POC analysis are also key responsibilities. Requirements for this role include understanding of database systems and data warehousing solutions, data life cycle stages, data environment scalability, data security, regulations, and compliance. You should be familiar with analytics reporting technologies, algorithms, data structures, Cloud services platforms, ETL tools capabilities, Machine learning algorithms, building data APIs, and coding using programming languages for statistical analysis and modeling. Basic knowledge of distributed systems and a Bachelor's degree in MIS, mathematics, statistics, computer science, or equivalent job experience are necessary qualifications. This is a permanent position at UPS, committed to providing a workplace free of discrimination, harassment, and retaliation.,
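
As a rough illustration of the extract, transform, load (ETL) and data cleansing work described above, here is a minimal batch sketch in Python with pandas; the file names, columns, and lookup table are hypothetical and not part of the posting.

```python
# Illustrative extract-transform-load step with cleansing and enrichment;
# the file names, columns, and lookup table are hypothetical examples.
import pandas as pd

# Extract: a raw operational extract plus a small reference table.
shipments = pd.read_csv("raw_shipments.csv", parse_dates=["ship_date"])
regions = pd.read_csv("region_lookup.csv")  # columns: country_code, region

# Transform: remove duplicates, coerce bad numerics, drop incomplete rows,
# then enrich with the region lookup.
shipments = shipments.drop_duplicates(subset=["tracking_id"])
shipments["weight_kg"] = pd.to_numeric(shipments["weight_kg"], errors="coerce")
shipments = shipments.dropna(subset=["tracking_id", "ship_date"])
enriched = shipments.merge(regions, on="country_code", how="left")

# Load: write a curated, reusable dataset for analytics and model features.
enriched.to_parquet("shipments_curated.parquet", index=False)
```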

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Tech Lead - Azure Databricks / Azure Data Factory
Job Date: Aug 2, 2025
Job Requisition Id: 61535
Location: Gurgaon, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking to hire Microsoft Fabric professionals in the following areas:

Position: Data Analytics Lead. Experience: 8+ years.

Responsibilities: Build, manage, and foster a high-functioning team of data engineers and data analysts. Collaborate with business and technical teams to capture and prioritize platform ingestion requirements. Experience of working with the manufacturing industry in building a centralized data platform for self-service reporting. Lead the data analytics team members, providing guidance, mentorship, and support to ensure their professional growth and success. Responsible for managing customer, partner, and internal data on the cloud and on-premises. Evaluate and understand current data technologies and trends and promote a culture of learning. Build an end-to-end data strategy, from collecting requirements from the business to modelling the data and building reports and dashboards.

Required Skills: Experience in data engineering and architecture, with a focus on developing scalable cloud solutions in Azure Synapse, Microsoft Fabric, or Azure Databricks. Accountable for the data group’s activities, including architecting, developing, and maintaining a centralized data platform covering operational data, the data warehouse, the data lake, Data Factory pipelines, and data-related services. Experience in designing and building operationally efficient pipelines utilising core Azure components such as Azure Data Factory, Azure Databricks, and PySpark. Strong understanding of data architecture, data modelling, and ETL processes. Proficiency in SQL and PySpark. Strong knowledge of building PowerBI reports and dashboards. Excellent communication skills. Strong problem-solving and analytical skills.

Required Technical/Functional Competencies: Domain/Industry Knowledge: basic knowledge of the customer's business processes and relevant technology platform or product; able to prepare process maps, workflows, business cases, and simple business models in line with customer requirements with assistance from SMEs, and apply industry standards/practices in implementation with guidance from experienced team members. Requirement Gathering and Analysis: working knowledge of requirement management and requirement analysis processes, tools, and methodologies; able to analyse the impact of a change request, enhancement, or defect fix, identify dependencies or interrelationships among requirements, and transition requirements for the engagement. Product/Technology Knowledge: working knowledge of technology product/platform standards and specifications; able to implement code or configure/customize products, provide inputs on design and architecture adhering to industry standards/practices, analyse various frameworks/tools, review code, and provide feedback on improvement opportunities. Architecture Tools and Frameworks: working knowledge of industry architecture tools and frameworks; able to identify the pros and cons of available tools and frameworks, use them as per customer requirements, and explore new tools/frameworks for implementation. Architecture Concepts and Principles: working knowledge of architectural elements, SDLC, and methodologies; able to provide architectural design/documentation at an application or functional capability level, implement architectural patterns in solutions and engagements, and communicate architecture direction to the business. Analytics Solution Design: knowledge of statistical and machine learning techniques such as classification, linear regression modelling, clustering, and decision trees; able to identify the cause of errors and their potential solutions. Tools & Platform Knowledge: familiar with a wide range of mainstream commercial and open-source data science/analytics software tools, their constraints, advantages, disadvantages, and areas of application.

Required Behavioral Competencies: Accountability: takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team. Collaboration: shares information within the team, participates in team activities, asks questions to understand other points of view. Agility: demonstrates readiness for change, asking questions and determining how changes could impact own work. Customer Focus: identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations. Communication: targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision. Drives Results: sets realistic stretch goals for self and others to achieve and exceed defined goals/targets. Resolves Conflict: displays sensitivity in interactions and strives to understand others’ views and concerns.

Certifications: Mandatory.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Embark on a transformative journey as a Data Test Lead at Barclays, where the vision is clear to redefine the future of banking and craft innovative solutions. In this role, you will be responsible for creating and enhancing the data that drives the bank's financial transactions, placing data quality at the forefront of all operations. This presents a unique opportunity to shape the organization's data usage and be a part of an exciting transformation in the banking sector. To excel as a Data Test Lead, you should have experience with a diverse range of solutions including Fraud Detection, Fraud Servicing & IDV, Application Fraud, and Consumption BI patterns. Strong Test Automation skills are essential, along with the ability to create frameworks for regression packs. Providing technical guidance and driving the Test Automation team is crucial, emphasizing proactive automation to ensure alignment with the development lifecycle. Collaborating on the DevOps agenda, configuring Jenkins/GitLab pipelines, and maturing automation capabilities through proper documentation are key responsibilities. Additional valued skills for this role include collaborating with development teams to ensure testability and quality throughout the SDLC, identifying opportunities for test optimization, and mentoring junior QA engineers on automation best practices. Effective communication skills, SQL proficiency, working knowledge of Oracle, Hadoop, Pyspark, Ab-initio, and other ETL tools, as well as experience with metadata, domain maintenance, and JIRA, are also highly advantageous. The purpose of this role is to design, develop, and execute testing strategies to validate functionality, performance, and user experience, while working closely with cross-functional teams to identify and resolve defects. The Accountabilities include developing and implementing comprehensive test plans, executing automated test scripts, analysing requirements, conducting root cause analysis, and staying informed of industry technology trends. As an Assistant Vice President, you are expected to advise and influence decision-making, contribute to policy development, and lead a team performing complex tasks with professionalism and expertise. People Leaders are also expected to demonstrate leadership behaviours that create an environment for colleagues to excel. Colleagues at Barclays are expected to uphold the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as demonstrate the Barclays Mindset of Empower, Challenge, and Drive in their daily interactions and work.,
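
The proactive test automation this role emphasises often takes the form of codified regression checks that run in a CI pipeline. Below is a minimal, hypothetical sketch in pytest; an in-memory SQLite table stands in for a warehouse table, and the table name and rules are invented purely for illustration.

```python
# Illustrative data-quality regression checks in pytest. An in-memory SQLite
# table stands in for a warehouse table; the table and rules are invented.
import sqlite3
import pytest

@pytest.fixture()
def conn():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE fraud_cases (case_id TEXT, account_id TEXT, amount REAL)")
    con.executemany(
        "INSERT INTO fraud_cases VALUES (?, ?, ?)",
        [("C1", "A1", 120.0), ("C2", "A2", 75.5), ("C3", "A1", 300.0)],
    )
    yield con
    con.close()

def test_case_id_is_unique(conn):
    dupes = conn.execute(
        "SELECT case_id FROM fraud_cases GROUP BY case_id HAVING COUNT(*) > 1"
    ).fetchall()
    assert dupes == []

def test_mandatory_fields_not_null(conn):
    nulls = conn.execute(
        "SELECT COUNT(*) FROM fraud_cases WHERE case_id IS NULL OR account_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0
```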

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You have over 8 years of experience and are located in Balewadi, Pune. You possess a strong understanding of Data Architecture and have led data-driven projects. Your expertise includes knowledge of Data Modelling paradigms like Kimball, Inmon, Data Marts, Data Vault, Medallion, etc. Experience with Cloud Based data strategies, particularly AWS, is preferred. Designing data pipelines for ETL with expert knowledge on ingestion, transformation, and data quality is a must, along with hands-on experience in SQL. In-depth understanding of PostGreSQL development, query optimization, and designing indexes is a key requirement. Proficiency in Postgres PL/SQL for complex warehouse workflows is necessary. You should be able to manipulate intermediate to complex SQL and use advanced SQL concepts like RANK, DENSE_RANK, and apply advanced statistical concepts through SQL. Working experience with PostGres SQL extensions like PostGIS is desired. Expertise in writing ETL pipelines combining Python + SQL is required, as well as understanding of data manipulation libraries in Python like Pandas, Polars, DuckDB. Experience in designing Data visualization with tools such as Tableau and PowerBI is desirable. Your responsibilities include participation in designing and developing features in the existing Data Warehouse, providing leadership in establishing connections between Engineering, product, and analytics/data scientists team. Designing, implementing, and updating existing/new batch ETL pipelines, defining and implementing data architecture, and working with various data orchestration tools like Apache Airflow, Dagster, Prefect, and others. Collaboration with engineers and data analysts to build reliable datasets that can be trusted and used by the company is essential. You should be comfortable in a fast-paced start-up environment, passionate about your job, and enjoy a dynamic international working environment. Background or experience in the telecom industry is a plus, though not mandatory. You should have a penchant for automating tasks and enjoy monitoring processes.,
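
As a small illustration of the advanced SQL concepts mentioned above (RANK, DENSE_RANK) combined with one of the listed Python data libraries (DuckDB), here is a self-contained, hypothetical example; the table and columns are invented.

```python
# Illustrative RANK / DENSE_RANK window functions run through DuckDB from
# Python; the table and columns are hypothetical.
import duckdb

duckdb.sql("""
    CREATE TABLE daily_usage AS
    SELECT * FROM (VALUES
        ('site_a', DATE '2025-07-01', 120),
        ('site_a', DATE '2025-07-02', 180),
        ('site_b', DATE '2025-07-01', 180),
        ('site_b', DATE '2025-07-02',  90)
    ) AS t(site_id, usage_date, gb_transferred)
""")

ranked = duckdb.sql("""
    SELECT
        site_id,
        usage_date,
        gb_transferred,
        RANK()       OVER (ORDER BY gb_transferred DESC) AS rnk,
        DENSE_RANK() OVER (ORDER BY gb_transferred DESC) AS dense_rnk
    FROM daily_usage
""")
print(ranked)
```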

Posted 1 week ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Description Want to participate in building the next generation of online payment system that supports multiple countries and payment methods? Amazon Payment Services (APS) is a leading payment service provider in MENA region with operations spanning across 8 countries and offers online payment services to thousands of merchants. APS team is building robust payment solution for driving the best payment experience on & off Amazon. Over 100 million customers send tens of billions of dollars moving at light-speed through our systems annually. We build systems that process payments at an unprecedented scale with accuracy, speed and mission-critical availability. We innovate to improve customer experience, with support for currency of choice, in-store payments, pay on delivery, credit and debit card payments, seller disbursements and gift cards. Many new exciting & challenging ideas are in the works. Key job responsibilities Data Engineers focus on managing data requests, maintaining operational excellence, and enhancing core infrastructure. You will be collaborating closely with both technical and non-technical teams to design and execute roadmaps Basic Qualifications 1+ years of data engineering experience Experience with SQL Experience with data modeling, warehousing and building ETL pipelines Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) Experience with one or more scripting language (e.g., Python, KornShell) Preferred Qualifications Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience with any ETL tool like, Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity: We're looking for candidates with Syniti and other programming skills to join the EY GDS SAP BI & Data team. This is a fantastic opportunity to be part of a leading firm while being instrumental in its growth.

Your key responsibilities include:
- Providing expert-level business analysis on SAP modules FI, CO, MM, SD, PM, PP, PS
- Implementing and developing customer deliverables that meet or exceed customer requirements
- Developing and demonstrating a good understanding of business processes for the assigned functional area/data objects
- Demonstrating a strong knowledge of underlying technical data structures and definitions for the assigned functional process area/data objects
- Contributing to an integrated data solution through data analysis, reporting, and collaboration with on-site colleagues and clients
- Expertise in SAP BW 7.5 and SAP BW on HANA/BW/4HANA
- Working closely with other consultants on customer sites as part of small to large project teams
- Conducting requirements analysis, data analysis, and creating reports
- Maintaining responsibility for completion and accuracy of the deliverables
- Actively expanding consulting skills and professional development through training courses, mentoring, and daily interaction with clients

Skills and attributes for success:
- Hands-on experience of SAP BW 7.5 and HANA implementation and support
- Building an understanding of standard and custom SAP BW extractor functionality, with ABAP debugging skills
- Prior experience in supporting ETL and incident management/bug fixes
- Hands-on experience in understanding and applying transformations using ABAP and AMDP, advanced DSOs, and Composite Providers using LSA++, along with performance optimization concepts
- Prior experience with traditional non-HANA BW data modeling: MultiCubes, ODS objects, InfoCubes, transfer rules, start routines, end routines, InfoSet queries, InfoObjects, and user exits
- Hands-on experience with SAP HANA data modeling views (Attribute, Analytics, and Calculation views)
- Proficient in development and understanding of SAP Analysis for Microsoft Office to perform custom calculations, filtering, and sorts to support complex business planning and reporting scenarios
- Hands-on experience in the collection of transport requests through the landscape
- Experience in performance tuning, troubleshooting, and monthly release activities as necessary
- Knowledge of SAP ECC business processes and functional aspects in Sales, Billing, Finance, Controlling, and Project Systems

To qualify for the role, you must have:
- Minimum 7+ years of SAP Analytics/Business Intelligence/Business Warehouse (BI/BW/HANA) related experience with a professional services advisory firm or publicly traded company, and experience leading and delivering full lifecycle implementations
- Minimum of one end-to-end implementation with SAP HANA 1.0 and 2.0, including at least one full lifecycle project implementation with SAP HANA SQL and/or SAP S/4HANA

Ideally, you should also have:
- A Bachelor's degree from an accredited college/university
- Hands-on experience of SAP HANA modeling: table creation (row store, column store), ABAP procedures, data modeling, modeling views (Calculation and Attribute views), decision tables, and analytical privileges will be an added advantage
- Knowledge of roles and authorizations

What we look for:
- A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment, with consulting skills
- An opportunity to be a part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
- Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH, and Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Title: PySpark Data Engineer. Experience: 5–8 years. Location: Hyderabad. Employment Type: Full-Time.

Job Summary: We are looking for a skilled and experienced PySpark Data Engineer to join our growing data engineering team. The ideal candidate will have 5–8 years of experience in designing and implementing data pipelines using PySpark, AWS Glue, and Apache Airflow, with strong proficiency in SQL. You will be responsible for building scalable data processing solutions, optimizing data workflows, and collaborating with cross-functional teams to deliver high-quality data assets.

Key Responsibilities: Design, develop, and maintain large-scale ETL pipelines using PySpark and AWS Glue. Orchestrate and schedule data workflows using Apache Airflow. Optimize data processing jobs for performance and cost-efficiency. Work with large datasets from various sources, ensuring data quality and consistency. Collaborate with Data Scientists, Analysts, and other Engineers to understand data requirements and deliver solutions. Write efficient, reusable, and well-documented code following best practices. Monitor data pipeline health and performance and resolve data-related issues proactively. Participate in code reviews, architecture discussions, and performance tuning.

Requirements: 5–8 years of experience in data engineering roles. Strong expertise in PySpark for distributed data processing. Hands-on experience with AWS Glue and other AWS data services (S3, Athena, Lambda, etc.). Experience with Apache Airflow for workflow orchestration. Strong proficiency in SQL for data extraction, transformation, and analysis. Familiarity with data modeling concepts and data lake/data warehouse architectures. Experience with version control systems (e.g., Git) and CI/CD processes. Ability to write clean, scalable, and production-grade code.

Benefits: Company standard benefits.
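
For orientation, here is a minimal, hypothetical sketch of how Apache Airflow can orchestrate an AWS Glue job of the kind this role describes; the DAG id, Glue job name, region, and schedule are placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Illustrative Airflow DAG that triggers an AWS Glue job via boto3 and waits
# for it to finish. The DAG id, job name, region, and schedule are placeholders.
import time
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_glue_job(**_):
    glue = boto3.client("glue", region_name="ap-south-1")
    run_id = glue.start_job_run(JobName="orders-curation-job")["JobRunId"]
    while True:
        run = glue.get_job_run(JobName="orders-curation-job", RunId=run_id)
        state = run["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            break
        time.sleep(30)  # poll until the Glue run reaches a terminal state
    if state != "SUCCEEDED":
        raise RuntimeError(f"Glue job ended in state {state}")

with DAG(
    dag_id="orders_curation_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_job", python_callable=run_glue_job)
```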

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

As an Assistant Vice President, Data Engineering Expert at Analytics & Information Management (AIM) in Gurugram, you will play a crucial role in leading the Data/Information Management Team. Your responsibilities will include driving the development and implementation of data analytics solutions to support key business objectives for Legal Operations as part of the COO (Chief Operating Office). You will be expected to build and manage high-performing teams, deliver impactful insights, and foster a data-driven culture within the organization. In this role, you will be responsible for supporting Business Execution, Legal Data & Reporting activities for the Chief Operating Office by implementing data engineering solutions to manage banking operations. This will involve establishing monitoring routines, scorecards, and escalation workflows, as well as overseeing Data Strategy, Smart Automation, Insight Generation, Data Quality, and Reporting activities using proven analytical techniques. Additionally, you will be required to enable proactive issue detection, implement a governance framework, and interface between business and technology partners for digitizing data collection. You will also need to communicate findings and recommendations to senior management, stay updated with the latest trends in analytics, ensure compliance with data governance policies, and set up a governance operating framework to enable operationalization of data domains. To excel in this role, you should have at least 8 years of experience in Business Transformation Solution Design roles with proficiency in tools/technologies like Python, PySpark, Tableau, MicroStrategy, and SQL. Strong understanding of Data Transformation, Data Strategy, Data Architecture, Data Tracing & Lineage, and Database Management & Optimization will be essential. Additionally, experience in AI solutions, banking operations, and regulatory requirements related to data privacy and security will be beneficial. A Bachelor's/University degree in STEM is required for this position, with a Master's degree being preferred. Your ability to work as a senior member in a team of data engineering professionals and effectively manage end-to-end conceptualization & implementation of data strategies will be critical for success in this role. If you are excited about the opportunity to lead a dynamic Data/Information Management Team and drive impactful insights through data analytics solutions, we encourage you to apply for this position and be a part of our talented team at AIM, Gurugram.,

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

This is a data engineer position where you will be responsible for designing, developing, implementing, and maintaining data flow channels and data processing systems to support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner in coordination with the Data & Analytics team. Your main objective will be to define optimal solutions for data collection, processing, and warehousing, particularly within the banking & finance domain. You must have expertise in Spark Java development for big data processing, Python, and Apache Spark. You will be involved in designing, coding, and testing data systems and integrating them into the internal infrastructure. Your responsibilities will include ensuring high-quality software development with complete documentation, developing and optimizing scalable Spark Java-based data pipelines, designing and implementing distributed computing solutions for risk modeling, pricing, and regulatory compliance, ensuring efficient data storage and retrieval using Big Data, implementing best practices for Spark performance tuning, maintaining high code quality through testing, CI/CD pipelines, and version control, working on batch processing frameworks for Market risk analytics, and promoting unit/functional testing and code inspection processes. You will also collaborate with business stakeholders, Business Analysts, and other data scientists to understand and interpret complex datasets. Qualifications: - 5-8 years of experience in working in data ecosystems - 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting, and other Big data frameworks - 3+ years of experience with relational SQL and NoSQL databases such as Oracle, MongoDB, HBase - Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, Dataframes, Spark Streaming, etc.), Scala, and SQL - Data integration, migration, and large-scale ETL experience - Data modeling experience - Experience building and optimizing big data pipelines, architectures, and datasets - Strong analytic skills and experience working with unstructured datasets - Experience with various technologies like Confluent Kafka, Redhat JBPM, CI/CD build pipelines, Git, BitBucket, Jira, external cloud platforms, container technologies, and supporting frameworks - Highly effective interpersonal and communication skills - Experience with software development life cycle Education: - Bachelors/University degree or equivalent experience in computer science, engineering, or a similar domain This is a full-time position in the Data Architecture job family group within the Technology sector.,

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Iron Mountain, we believe that work, when done well, can have a positive impact on our customers, employees, and the planet. That's why we are looking for smart and committed individuals to join our team. Whether you are starting your career or seeking a change, we invite you to explore how you can enhance the impact of your work at Iron Mountain. We offer expert and sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. Collaborating with over 225,000 customers worldwide, we aim to preserve valuable artifacts, optimize inventory, and safeguard data privacy through innovative and socially responsible practices. If you are interested in being part of our growth journey and expanding your skills in a culture that values diverse contributions, let's have a conversation. As Iron Mountain progresses with its digital transformation, we are expanding our Enterprise Data Platform Team, which plays a crucial role in supporting data integration solutions, reporting, and analytics. The team focuses on maintaining and enhancing data platform components essential for delivering our data solutions. As a Data Platform Engineer at Iron Mountain, you will leverage your advanced knowledge of cloud big data technologies, software development expertise, and strong SQL skills. The ideal candidate will have a background in software development and big data engineering, with experience working in a remote environment and supporting both on-shore and off-shore engineering teams. Key Responsibilities: - Building and operationalizing cloud-based platform components - Developing production-quality ingestion pipelines with automated quality checks to centralize access to all data sets - Assessing current system architecture and recommending solutions for improvement - Building automation using Python modules to support product development and data analytics initiatives - Ensuring maximum uptime of the platform by utilizing cloud technologies such as Kubernetes, Terraform, Docker, etc. - Resolving technical issues promptly and providing guidance to development teams - Researching current and emerging technologies and proposing necessary changes - Assessing the business impact of technical decisions and participating in collaborative environments to foster new ideas - Maintaining comprehensive documentation on processes and decision-making Your Qualifications: - Experience with DevOps/Automation tools to minimize operational overhead - Ability to contribute to self-organizing teams within the Agile/Scrum project methodology - Bachelor's Degree in Computer Science or related field - 3+ years of related IT experience - 1+ years of experience building complex ETL pipelines with dependency management - 2+ years of experience in Big Data technologies such as Spark, Hive, Hadoop, etc. 
- Industry-recognized certifications - Strong familiarity with PaaS services, containers, and orchestrations - Excellent verbal and written communication skills What's in it for you - Be part of a global organization focused on transformation and innovation - A supportive environment where you can voice your opinions and be your authentic self - Global connectivity to learn from teammates across 52 countries - Embrace diversity, inclusion, and differences within a winning team - Competitive Total Reward offerings to support your career, family, wellness, and retirement Iron Mountain is a global leader in storage and information management services, trusted by organizations worldwide. We safeguard critical business information, sensitive data, and cultural artifacts. Our services help lower costs, mitigate risks, comply with regulations, and enable digital solutions. If you require accommodations due to a disability, please reach out to us. Category: Information Technology,
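
A production-quality ingestion pipeline with automated quality checks, as described above, often starts as a small Python module like the hypothetical sketch below; the expected columns, thresholds, and file paths are illustrative assumptions only.

```python
# Illustrative ingestion step that applies automated quality checks before a
# batch is loaded; expected columns, thresholds, and paths are assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"record_id", "customer_id", "created_at", "status"}
MIN_ROWS = 1  # reject obviously empty extracts

def quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
    if len(df) < MIN_ROWS:
        failures.append(f"row count {len(df)} below minimum {MIN_ROWS}")
    if "record_id" in df.columns and df["record_id"].duplicated().any():
        failures.append("duplicate record_id values found")
    return failures

def ingest(path: str) -> None:
    df = pd.read_csv(path)
    failures = quality_checks(df)
    if failures:
        raise ValueError("batch rejected: " + "; ".join(failures))
    df.to_parquet("records_landed.parquet", index=False)  # centralized landing output

if __name__ == "__main__":
    ingest("records_2025-08-01.csv")
```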

Posted 1 week ago

Apply

6.0 - 10.0 years

0 - 0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Big Data Engineer at KGIS, you will be an integral part of the team dedicated to building cutting-edge digital and analytics solutions for global enterprises. With a focus on designing, developing, and optimizing large-scale data processing systems, you will lead the way in creating scalable data pipelines, driving performance tuning, and spearheading cloud-native big data initiatives. Your responsibilities will include designing and developing robust Big Data solutions using Apache Spark, building both batch and real-time data pipelines utilizing technologies like Spark, Spark Streaming, Kafka, and RabbitMQ, implementing ETL processes for data ingestion and transformation, and optimizing Spark jobs for enhanced performance and scalability. You will also work with NoSQL technologies such as HBase, Cassandra, or MongoDB, query large datasets using tools like Hive and Impala, ensure seamless integration of data from various sources, and lead a team of data engineers while following Agile methodologies. To excel in this role, you must possess deep expertise in Apache Spark and distributed computing, strong programming skills in Python, solid experience with Hadoop v2, MapReduce, HDFS, and Sqoop, proficiency in real-time stream processing using Apache Storm or Spark Streaming, and familiarity with messaging systems like Kafka or RabbitMQ. Additionally, you should have SQL mastery, hands-on experience with NoSQL databases, knowledge of cloud-native services in AWS or Azure, a strong understanding of ETL tools and performance tuning, an Agile mindset, and excellent problem-solving skills. While not mandatory, exposure to data lake and lakehouse architectures, familiarity with DevOps tools for CI/CD and data pipeline monitoring, and certifications in cloud or big data technologies are considered advantageous. Joining KGIS will provide you with the opportunity to work on innovative projects with Fortune 500 clients, be part of a fast-paced and meritocratic culture that values ownership, gain access to cutting-edge tools and technologies, and thrive in a collaborative and growth-focused environment. If you are ready to elevate your Big Data career and contribute to our digital transformation journey, apply now and embark on this exciting opportunity at KGIS.,

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

You will be working in a hybrid mode at multiple locations, including Bangalore, Chennai, Gurgaon, Pune, and Kolkata. With at least 6 years of experience in IT, you must possess a Bachelor's and/or Master's degree in computer science or an equivalent field. Your expertise should lie in Snowflake security, Snowflake SQL, and the design and implementation of various Snowflake objects. Practical experience with Snowflake utilities such as SnowSQL, Snowpipe, Snowsight, and Snowflake connectors is essential. You should have a deep understanding of Star and Snowflake dimensional modeling and a strong knowledge of data management principles. Additionally, familiarity with the Databricks Data & AI platform and Databricks Delta Lake architecture is required. Hands-on experience in SQL and Spark (PySpark), as well as building ETL/data warehouse transformation processes, will be a significant part of your role. Strong verbal and written communication skills are essential, along with analytical and problem-solving abilities. Attention to detail is paramount in your work. The mandatory skills for this position are (Snowflake + ADF + SQL) or (Snowflake + SQL).
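
As a brief illustration of working with Snowflake SQL from Python, the sketch below uses the Snowflake Python connector to load staged files and build a reporting table; the account, credentials, stage, and table names are placeholders only, not details from the posting.

```python
# Illustrative use of the Snowflake Python connector to run warehouse SQL;
# the account, credentials, stage, and table names are placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="ETL_SERVICE_USER",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load files from an internal stage into a raw table (objects are hypothetical).
    cur.execute("""
        COPY INTO STAGING.ORDERS_RAW
        FROM @STAGING.ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    # Build a simple reporting table from the raw data.
    cur.execute("""
        CREATE OR REPLACE TABLE STAGING.ORDERS_DAILY AS
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM STAGING.ORDERS_RAW
        GROUP BY order_date
    """)
finally:
    conn.close()
```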

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

You are seeking a Principal Machine Learning Engineer to join our team in Pune on a hybrid schedule. Reporting to the Director of Machine Learning, you will collaborate with Product and Engineering teams to address challenges and discover new business opportunities. Your role will involve applying quantitative analysis, modeling, and data mining to support informed product decisions for PubMatic. In this position, your responsibilities will include conducting in-depth analysis to optimize product KPIs, utilizing statistics, modeling, and machine learning to enhance system efficiency and relevance algorithms, analyzing data to make product recommendations and conduct A/B experiments, working closely with cross-functional teams to identify trends and solve problems, and collaborating with stakeholders throughout the end-to-end analysis process. The ideal candidate should have at least seven years of hands-on experience in designing Machine Learning models using statistical packages like R, MATLAB, Python (NumPy, Scikit-learn + Pandas) or MLlib. You should have the ability to mentor team members effectively, articulate product questions, and use statistics to derive solutions. Proficiency in SQL for data extraction and ETL flow design, along with a background in an interdisciplinary/cross-functional field, is preferred. An interest in data, metrics, analysis, trends, and practical knowledge of measurement, statistics, and program evaluation is essential. Strong problem-solving skills, sound business judgment, and the ability to translate analysis results into actionable business recommendations are also required. Additionally, candidates should hold a bachelor's degree in engineering (CS / IT) or an equivalent degree from reputable Institutes / Universities. PubMatic operates on a hybrid work schedule (3 days in office and 2 days remote) to foster collaboration, innovation, and productivity. Benefits include paternity/maternity leave, healthcare insurance, broadband reimbursement, a well-stocked kitchen, catered lunches, and more. PubMatic is a leading digital advertising platform that offers transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Founded in 2006, PubMatic enables content creators to run a profitable advertising business, reinvesting in the diverse content consumers demand.,
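
A typical building block for the A/B experiments mentioned above is a two-proportion significance test. The sketch below uses statsmodels with made-up counts purely for illustration.

```python
# Illustrative significance check for an A/B experiment on conversion rate
# using a two-proportion z-test; the counts below are made-up example numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1320, 1415]    # conversions in control and variant
exposures = [25000, 25100]    # users exposed in control and variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift: {lift:.4%}")
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
print("significant at 5%" if p_value < 0.05 else "no significant difference detected")
```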

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

This position requires a seasoned professional as a Senior Manager with specialized knowledge of credit risk management. You will be overseeing the development, enhancement, and validation of credit risk models, ensuring compliance with regulatory standards, and driving innovation in risk management practices. The ideal candidate should have hands-on experience in Credit Risk Model Validation or Development with SAS and Python. Moreover, you should possess good hands-on experience in Regulatory Models such as AIRB, CECL, CCAR, Basel, IFRS9. You will primarily work as a consultant for the centralized advanced analytics team of a banking or financial firm as a Credit Risk Model Development/Validation and Researcher Specialist. Your responsibilities will include interacting with various business units including their risk, finance, controllership stakeholders. Furthermore, you will be responsible for coordinating with auditors and model development or validation teams to ensure the Enterprise Modeling Governance standards are followed. Your activities will include, but not be limited to: - Providing thought leadership and executing comprehensive modeling strategies aligned with business objectives and industry best practices. - Designing, developing, and validating predictive models to ensure accuracy, reliability, and compliance with regulatory standards. - Conducting rigorous testing and validation methodologies to ensure model robustness and reliability. - Providing analytical support for recommending actions to mitigate risk and using judgment-based decision-making regarding policies and procedures. - Assessing the quality of the data for model development as well as inputs to the model, providing recommendations to improve the data quality at the source. - Leading, training, and mentoring junior members in the team to foster a collaborative and innovative team culture. - Proposing recommendations to improve monitoring systems and capabilities based on identified risk and control gaps. - Conducting in-depth research on existing and emerging policies related to credit risk modeling and contributing to the creation of white papers. - Researching and contributing to artifacts creation as required in a consulting role. To qualify for this role, you should have experience in developing, validating models, and risk management of credit risk models. Additionally, you should possess knowledge of various statistical techniques and proven skills in regulatory and non-regulatory credit risk modeling. Understanding and experience on the regulatory risk model development/validation guidelines including SR 11-7, Basel IRB, CCAR, CECL, IFRS9, etc., will be crucial. You should have hands-on expertise in SQL, ETL, SAS, Python, R, working with large datasets, and a Master's degree in a quantitative discipline (Statistics/Economics/Finance/Data Science, etc.). The preferred qualifications for this role include strong networking, negotiation, and influencing skills, knowledge of credit risk management for retail and wholesale lending products, hands-on experience in Machine Learning modeling techniques, and prior Project Management and People Management expertise. The required skills and certifications for this role include Model Validation, SAS, Python, Regulatory Model, Model Development, and Credit Risk.,
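
Discrimination checks such as AUC/Gini and the Kolmogorov-Smirnov statistic are common in the model validation work described above. The following sketch uses synthetic scores and labels purely for illustration; it is not a description of any particular bank's methodology.

```python
# Illustrative discrimination checks used when validating a PD model:
# AUC/Gini and the Kolmogorov-Smirnov statistic on synthetic scores and labels.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
y_true = rng.binomial(1, 0.08, size=n)                       # 1 = default
# Synthetic scores: defaulters tend to receive higher predicted PDs.
y_score = np.clip(rng.normal(0.10 + 0.15 * y_true, 0.08), 0, 1)

auc = roc_auc_score(y_true, y_score)
gini = 2 * auc - 1
ks = ks_2samp(y_score[y_true == 1], y_score[y_true == 0])

print(f"AUC:  {auc:.3f}")
print(f"Gini: {gini:.3f}")
print(f"KS:   {ks.statistic:.3f}")
```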

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer with over 6 years of experience, you will play a crucial role in the migration management from Informatica MDM to Ataccama MDM. Your responsibilities will include developing migration strategies, plans, and timelines while ensuring data accuracy, consistency, and completeness throughout the migration process. You will be tasked with managing ETL processes to extract, transform, and load data into Ataccama. Additionally, implementing and maintaining data quality rules and processes in Ataccama, as well as overseeing API integrations for seamless data flow will be part of your daily tasks. Collaboration and coordination are vital aspects of this role, where you will work closely with cross-functional teams to gather requirements, provide training and support on Ataccama MDM, and troubleshoot migration issues in collaboration with IT and business units. Your role will also involve documentation and reporting tasks such as documenting migration processes, generating reports on migration progress and data quality metrics, and providing recommendations for continuous improvement in data management practices. To excel in this position, you should possess proven experience in migrating from Informatica MDM to Ataccama MDM, hands-on experience with ETL, data quality, and MDM processes, as well as proficiency in Ataccama MDM and related tools. Strong analytical and problem-solving skills, attention to detail, proficiency in data modeling and database management, along with excellent communication and interpersonal skills are essential for success. A Bachelor's degree in Information Management, Computer Science, Data Science, or a related field is required. Possession of Ataccama MDM certification is preferred. Additionally, holding an Australian Visa, knowledge of industry standards and regulations related to data management, proficiency in SQL and data querying languages, and a willingness to learn Ataccama are considered advantageous for this role.,

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an SAP Material Master Coordinator at Weir Minerals, you will play a crucial role in coordinating with key stakeholders to process requests for new items to be set up in the SAP ERP system. Your responsibilities will include maintaining the Material Master for existing items, troubleshooting system issues, and continuously improving processes to reduce cycle time for new item creation and enhance speed to market. This position supports the Canada Business Unit and requires working in the 2nd shift from 4PM to 01AM IST to overlap with their business hours. Why choose Weir: At Weir, you will be part of a global organization dedicated to building a better future. We are committed to reinventing, adapting quickly, and finding sustainable ways to access necessary resources. You will have the opportunity to grow in a dynamic environment where challenges lead to personal and professional development. Weir promotes inclusivity, innovation, collaboration, and personal wellbeing. Key Responsibilities: - Process requests for new items in SAP promptly to increase speed to market - Collaborate with Sales, Procurement, Engineering, and Manufacturing departments to ensure accurate information - Update and maintain MRP parameters in SAP for optimized sourcing and distribution - Create and maintain info records and source lists - Conduct material costing - Improve processes for item creation and Material Master maintenance in SAP - Coordinate with global SAP counterparts for system improvements - Ensure data accuracy and troubleshoot system issues - Prepare and maintain process documents and training materials - Demonstrate commitment to safety culture Job Knowledge/Education and Qualifications: - Proficiency in data analysis and MS Office suite - Bachelor's Degree or College Diploma in technical or commercial discipline - 1-2 years of experience in industrial, engineering, or manufacturing environment - Prior experience with Material Master in SAP preferred - Strong computer skills and typing proficiency Nice to have: - Experience in maintaining/updating training and process documents - Knowledge of data migration and ETL processes Experience: - 2-4 years in SAP Material Master Behavioral Skills: - Excellent communication and analytical skills - Pro-active, innovative, and quality-conscious - Demonstrates personal accountability and ownership - Open-minded to supporting urgent requests Founded in 1871, Weir is a global engineering business focused on making mining operations smarter, efficient, and sustainable. With a workforce of 11,000 talented individuals across 60 countries, Weir is dedicated to supporting a low carbon future by providing essential metals and minerals. Join us to do the best work of your life. Compensation: - Division: esco or minerals - Working option: LI-remote - Recruiter personal #: LI-AB1,

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a skilled Snowflake Engineer with 6 to 10 years of experience, you will be a key member of the team at UST. Your primary responsibility will be to develop, implement, and optimize data solutions on the Snowflake cloud platform. Your expertise in Snowflake, coupled with a strong understanding of ETL processes, data engineering, and data processing technologies, will be crucial for the success of this role. You will be tasked with designing and maintaining high-performance data pipelines and data warehouses on the Snowflake platform, focusing on scalability and efficient data storage. Your role will involve working closely with cross-functional teams to design data solutions that align with business requirements. Engaging with stakeholders to understand business needs and translating them into technical solutions will also be a key part of your responsibilities. Key Responsibilities: - Design, implement, and optimize data warehouses on the Snowflake cloud platform. - Develop end-to-end data pipelines and maintain ETL workflows for seamless data processing. - Utilize PySpark for data transformations within the Snowflake environment, ensuring high data quality. - Collaborate with cross-functional teams to design data solutions aligned with business requirements. - Continuously monitor and optimize data storage, processing, and retrieval performance in Snowflake. Required Qualifications: - 5 to 7 years of experience as a Data Engineer, with a strong emphasis on Snowflake. - Proven experience in designing, implementing, and optimizing data warehouses on the Snowflake platform. - Strong knowledge of Snowflake architecture, features, and best practices for data storage and performance optimization. - Proficiency in Python, SQL, or Scala for data processing and transformations. - Experience with data modeling techniques and designing efficient data schemas for optimal performance in Snowflake. UST is a global digital transformation solutions provider that partners with the world's best companies to drive real impact through transformation. With over 30,000 employees in 30 countries, UST is committed to embedding innovation and agility into their clients" organizations, touching billions of lives in the process. Join us on this exciting journey to make a boundless impact through technology and purpose.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

We are currently seeking an Oracle Data Integrator (ODI) Specialist for a contract opportunity based in Pune/Mohali (Work from Office). The ideal candidate should have 5 to 7 years of experience in designing and deploying robust data integration and ETL solutions. As an ODI Specialist, you will be responsible for working on high-impact projects, utilizing your expertise in Oracle Data Integrator, strong ETL knowledge, data warehousing concepts, and proficiency in Oracle Database (SQL/PLSQL). Immediate joiners with a passion for collaborative team environments are encouraged to apply. This role offers an immediate engagement, competitive hourly rate, and the opportunity to work on challenging projects that will make a real impact. If you are ready to showcase your ODI expertise and take on a rewarding contract role, we look forward to connecting with you.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Your focus will be on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing. Your responsibilities will include developing Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. You will also need to develop and maintain Kafka-based data pipelines, which involves designing Kafka Streams, setting up Kafka Clusters, and ensuring efficient data flow. Additionally, you will create and optimize Spark applications using Scala and PySpark to process large datasets and implement data transformations and aggregations. Another important aspect of your role will be integrating Kafka with Spark for real-time processing. You will be building systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming. Collaboration with data teams including data engineers, data scientists, and DevOps is essential to design and implement data solutions effectively. You will also need to tune and optimize Spark and Kafka clusters to ensure high performance, scalability, and efficiency of data processing workflows. Writing clean, functional, and optimized code while adhering to coding standards and best practices will be a key part of your daily tasks. Troubleshooting and resolving issues related to Kafka and Spark applications, as well as maintaining documentation for Kafka configurations, Spark jobs, and other processes are also important aspects of the role. Continuous learning and applying new advancements in functional programming, big data, and related technologies is crucial. Proficiency in the Hadoop ecosystem and big data tech stack (HDFS, YARN, MapReduce, Hive, Impala), Spark (Scala, Python), Kafka, ETL processes, and data ingestion tools is required. Deep hands-on expertise in PySpark, Scala, and Kafka is necessary, along with proficiency in programming languages such as Scala, Python, or Java for developing Spark applications and SQL for data querying and analysis. Additionally, familiarity with data warehousing concepts, Linux/Unix operating systems, problem-solving, analytical skills, and version control systems will be beneficial in performing your duties effectively. This is a full-time position in the Technology job family group, specifically in Applications Development. If you require a reasonable accommodation to use search tools or apply for a career opportunity due to a disability, please review Accessibility at Citi. You can also refer to Citi's EEO Policy Statement and the Know Your Rights poster for more information.
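For illustration only (not part of the posting), a minimal sketch of the Kafka-to-Spark Structured Streaming integration described above; the broker addresses, topic name, and event schema are assumptions.

```python
# Minimal sketch: ingest JSON events from Kafka and aggregate them with
# Spark Structured Streaming. Brokers, topic, and schema are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka_spark_sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream; the value column arrives as bytes.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
       .option("subscribe", "transactions")
       .option("startingOffsets", "latest")
       .load())

# Parse the JSON payload and aggregate amounts per 5-minute event-time window.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*"))

windowed = (events
            .withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes"))
            .agg(F.sum("amount").alias("total_amount")))

# Console sink shown only for illustration; a real job would write to a table or topic.
query = (windowed.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination()
```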

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description: We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills, experience in managing mid-size teams, handling client conversations, and presenting PoV and thought leadership. Responsibilities Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS. Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics. Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools. Ensure seamless data flow from various sources to destinations including ADLS Gen 2. Implement data quality checks and validation frameworks. Establish and enforce data governance principles ensuring data security and compliance with industry standards and regulations. Manage version control and deployment pipelines using Git and DevOps best practices. Provide accurate effort estimation and manage project timelines effectively. Collaborate with cross-functional teams to ensure aligned project goals and objectives. Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions. 
Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies. Facilitate discussions between technical and non-technical stakeholders to drive project success. Document technical solutions and design specifications clearly and concisely. Qualifications: Bachelor's degree in computer science, Engineering, or a related field. Master’s Degree preferred. 8+ years of experience in big data architecture and engineering. Extensive experience with PySpark, SparkSQL, and Databricks on Azure. Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, Synapse. Strong experience in data modeling, metadata frameworks, and effort estimation. Experience with DevSecOps practices and proficiency in Git. Demonstrated experience in implementing data quality, data security, and data governance measures. Industry experience in banking, insurance, or pharma is a significant plus. Excellent communication skills, capable of articulating complex technical concepts to diverse audiences. Certification in Azure, Databricks or related Cloud technologies is a must. Familiarity with machine learning frameworks and data science methodologies would be preferred. Mandatory Skill Sets Data Architect/Data Engineer/AWS Preferred Skill Sets Data Architect/Data Engineer/AWS Years Of Experience Required 8-12 years Education Qualification B.E.(B.Tech)/M.E/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor Degree, Master Degree Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Data Architecture Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
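For illustration only (not part of the posting), a minimal sketch of a metadata-driven data quality check of the kind the responsibilities above call for; the rule format and table names are assumptions.

```python
# Minimal sketch: run configurable data quality rules with PySpark.
# Each rule is a small dict (table, column, check); failures are collected.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()

# In practice this configuration might live in a Delta table or a YAML file.
dq_rules = [
    {"table": "curated.customers", "column": "customer_id", "check": "not_null"},
    {"table": "curated.orders",    "column": "order_total", "check": "non_negative"},
]

failures = []
for rule in dq_rules:
    df = spark.table(rule["table"])
    if rule["check"] == "not_null":
        bad = df.filter(F.col(rule["column"]).isNull()).count()
    elif rule["check"] == "non_negative":
        bad = df.filter(F.col(rule["column"]) < 0).count()
    else:
        continue  # unknown rule types are skipped in this sketch
    if bad > 0:
        failures.append({**rule, "failed_rows": bad})

# A real framework would persist the results and raise alerts; here we just print.
for failed in failures:
    print(f"DQ failure: {failed}")
```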

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description: We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills, experience in managing mid-size teams, handling client conversations, and presenting PoV and thought leadership. Responsibilities Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS. Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics. Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools. Ensure seamless data flow from various sources to destinations including ADLS Gen 2. Implement data quality checks and validation frameworks. Establish and enforce data governance principles ensuring data security and compliance with industry standards and regulations. Manage version control and deployment pipelines using Git and DevOps best practices. Provide accurate effort estimation and manage project timelines effectively. Collaborate with cross-functional teams to ensure aligned project goals and objectives. Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions. 
Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies. Facilitate discussions between technical and non-technical stakeholders to drive project success. Document technical solutions and design specifications clearly and concisely. Qualifications: Bachelor's degree in computer science, Engineering, or a related field. Master’s Degree preferred. 8+ years of experience in big data architecture and engineering. Extensive experience with PySpark, SparkSQL, and Databricks on Azure. Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, Synapse. Strong experience in data modeling, metadata frameworks, and effort estimation. Experience with DevSecOps practices and proficiency in Git. Demonstrated experience in implementing data quality, data security, and data governance measures. Industry experience in banking, insurance, or pharma is a significant plus. Excellent communication skills, capable of articulating complex technical concepts to diverse audiences. Certification in Azure, Databricks or related Cloud technologies is a must. Familiarity with machine learning frameworks and data science methodologies would be preferred. Mandatory Skill Sets Data Architect/Data Engineer/AWS Preferred Skill Sets Data Architect/Data Engineer/AWS Years Of Experience Required 8-12 years Education Qualification B.E.(B.Tech)/M.E/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master Degree, Bachelor Degree Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Data Engineering Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
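For illustration only (not part of the posting), a minimal sketch of landing curated data in ADLS Gen 2 as a Delta table from Databricks, one of the destinations named above; the storage account, container, table names, and workspace-level authentication are assumptions.

```python
# Minimal sketch: write a curated DataFrame to ADLS Gen 2 in Delta format.
# Storage account, container, and table names are placeholders; auth is
# assumed to be configured at the Databricks workspace level.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls_delta_sketch").getOrCreate()

curated = (spark.table("raw.policies")
           .filter(F.col("status") == "active")
           .withColumn("load_date", F.current_date()))

adls_path = "abfss://curated@<storage_account>.dfs.core.windows.net/policies"

(curated.write.format("delta")
 .mode("overwrite")
 .option("overwriteSchema", "true")
 .partitionBy("load_date")
 .save(adls_path))
```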

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Purpose We are seeking an experienced Oracle ETL and PL/SQL Developer to design, develop, test, and deploy data integration solutions using ETL tools and PL/SQL. The successful candidate will work closely with cross-functional teams to ensure data quality, accuracy, and integrity. Key Accountabilities ETL Development: Design, develop, test, and deploy ETL processes using ETL tools (ODI) to extract, transform, and load data into various data warehouses and systems. PL/SQL Development: Write complex PL/SQL queries to extract, manipulate, and analyze data from relational databases (e.g., Oracle, SQL Server). Data Modeling: Develop and maintain data models, data dictionaries, and data mappings to ensure data consistency and integrity. Data Quality: Identify and resolve data quality issues, ensuring data accuracy, completeness, and consistency. Collaboration: Work closely with business stakeholders, data analysts, and data scientists to gather requirements, design solutions, and implement data integration projects. Testing and Deployment: Test ETL processes and PL/SQL queries to ensure data integrity and deploy changes to production environments. Performance Optimization: Optimize ETL processes and PL/SQL queries to improve performance, scalability, and reliability. Documentation: Maintain accurate and up-to-date documentation of ETL processes, PL/SQL queries, and data models. QUALIFICATIONS & EXPERIENCE: Education: Bachelor's degree or higher. Experience: 8-10 years of experience in ETL and PL/SQL development. Skills: PL/SQL (Oracle, SQL Server) Data modeling and data warehousing concepts Data quality and data integrity principles Collaboration and communication skills Strong problem-solving skills
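For illustration only (not part of the posting), and keeping the examples on this page in Python, a small sketch of invoking a hypothetical PL/SQL load procedure and reconciling staging against target rows using the python-oracledb driver; the credentials, DSN, package, and table names are placeholders.

```python
# Illustrative sketch: call a (hypothetical) PL/SQL load procedure, then
# reconcile staging vs. target row counts. All names and credentials are
# placeholders; uses the python-oracledb driver.
import oracledb

conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Invoke a hypothetical packaged procedure that performs the transform/load.
cur.callproc("etl_pkg.load_orders", [20250801])  # batch id is illustrative
conn.commit()

# Row-count reconciliation between staging and target for today's load.
cur.execute("""
    SELECT (SELECT COUNT(*) FROM stg_orders WHERE load_date = TRUNC(SYSDATE)) AS stg_rows,
           (SELECT COUNT(*) FROM dw_orders  WHERE load_date = TRUNC(SYSDATE)) AS tgt_rows
      FROM dual
""")
stg_rows, tgt_rows = cur.fetchone()
print(f"staging={stg_rows}, target={tgt_rows}, delta={stg_rows - tgt_rows}")

cur.close()
conn.close()
```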

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Reports To AVP, Integration & Data, Analytics & AI Group Data Analytics and Artificial Intelligence Function: Data Management & International Enablement Department: DAAI Job Family Work Level: Prepared / Revision 30-09-2024 Job Purpose This role will be responsible for the design, development, and implementation of BI reporting and dashboard projects in PowerBI (Cloud & On Prem platforms) for international regions across FAB. The purpose of this role is to implement multiple BI & Data Analytics projects by collaborating with key stakeholders across different international regions. The job holder will manage all future major/minor development enhancements and define architectural requirements in BI and Analytics. Additionally, this role also involves data modeling from the Data Lake / Warehouse platforms to facilitate data set creation for self-assist dashboard creation by business users. The main responsibility of the job holder is to design, develop, and maintain PowerBI dashboards in cloud/on-prem environments, deliver other major/minor BI enhancement projects, and contribute to analytics projects as and when required. Lead planning and implementation of BI projects. The job holder will actively participate in requirement gathering and analysis, design and understanding of end-to-end BI solution architectures, and testing and implementation phases, enabling performance of the entire platform. Work with the IT project manager to determine project missions, goals, tasks, timelines, and resource requirements; manage the overall Project Plan; resolve or assist in the resolution of conflicts within and between BI projects or functional areas; develop methods to monitor the project’s overall progress; and provide corrective supervision if necessary. The job holder is responsible for conceptualizing complex institutional problems and discovering business insights through visualization techniques. Should be adept at developing and executing database queries and conducting analysis. Well-versed knowledge of ETL/ELT tools, SSAS tabular model design, and platform-enabling configuration is essential. The role requires the job holder to be a creative thinker, able to propose innovative ways to look at business problems through proper analysis of data, identifying correlations and defining proper KPIs and metrics to benefit business teams across the entire workflow. Analytical skill sets are key to this role. The BI Specialist will need to present findings back to the business and conduct sessions or user walkthroughs of the platform. One of the key responsibilities of this role is to work on analytics opportunities to support digital transformation initiatives. Need to perform business analysis of the BRD to make sure the business and IT teams clearly understand the scope and requirements. Review functional solution documents prepared by the IT team to make sure they meet the business requirements. Banking functional knowledge will be an added advantage. In addition to the normal roles of project management, the job holder is also expected to work closely with the business teams, interpret their BI needs in technical terms, and define the same to ensure adherence to all BI standards and best practices. Should have a very good understanding of data and the ability to answer complex analytical questions to help the organization shape its products and services. Embark on exploratory data analysis projects to achieve a better understanding of phenomena as well as to discover untapped areas of growth and optimization. 
Work on projects that promote data-based decisions across business strategy, product design, campaign management, and customer service; work out data-driven solutions and create significant value addition. Work with the business team to document business requirements, anticipated benefits, and success criteria for analytics initiatives. Data modelling from the consolidation data platforms to facilitate data set creation for self-assist dashboard creation by business users. Data Modeler experience will be an added advantage.
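For illustration only (not part of the posting), a minimal sketch of the data-set preparation described above: joining a fact table to its dimensions and publishing a flat, dashboard-ready extract that a Power BI dataset could point at. All table and column names are assumptions.

```python
# Minimal sketch: build a curated, dashboard-ready aggregate with PySpark.
# Table and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bi_dataset_sketch").getOrCreate()

fact_txn   = spark.table("dwh.fact_transactions")
dim_branch = spark.table("dwh.dim_branch")
dim_prod   = spark.table("dwh.dim_product")

dashboard_ds = (fact_txn
                .join(dim_branch, "branch_key")
                .join(dim_prod, "product_key")
                .groupBy("region", "product_category",
                         F.to_date("txn_ts").alias("txn_date"))
                .agg(F.sum("amount").alias("total_amount"),
                     F.countDistinct("customer_key").alias("active_customers")))

# Persist as a curated table that a Power BI dataset or report can consume.
dashboard_ds.write.mode("overwrite").saveAsTable("curated.dashboard_daily_kpis")
```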

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site

Area(s) of responsibility ETL TESTING JD The Skills that are Key to this role Technical Develop and execute test plans, test cases, and test scripts for ETL processes. Expertise in validating data extraction, transformation, and loading workflows. Writing PL/SQL queries/procedures and managing databases, validating data transformations, and ensuring data integrity. Identify and report data quality issues and inconsistencies. Collaborate with ETL developers and data engineers to resolve data quality issues. Analyze test results and provide detailed reports to stakeholders. Automate repetitive testing tasks to improve efficiency. Ensure compliance with industry standards and best practices in data quality assurance. Experience with tools like Informatica, Control-M, and DataStage for automating data extraction and transformation processes. Understanding of data warehousing architectures and schemas to ensure effective data integration. Experience building, maintaining, and optimizing automated test cases. Good to have experience with Selenium, Cucumber, Java, Shell, and Groovy scripting. Experience with automated application build, deployment, and support using Maven and Ant. Experience performing version control and continuous integration of build, deploy, and test using Jenkins and Stash. Designing innovative technical solutions using Automation practices. Experience in framework development and maintenance. Experience working with AWS is a big plus! Experience as a developer (e.g., Java, Spring) is a plus. Communicate effectively within the team as well as with partners.
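For illustration only (not part of the posting), a minimal sketch of an automated ETL reconciliation test of the kind described above, written with pytest and SQLAlchemy; the connection strings and table names are placeholders.

```python
# Minimal sketch: compare source vs. target row counts and a column checksum.
# Connection URLs and table names are placeholders, not real systems.
import pytest
from sqlalchemy import create_engine, text

SOURCE_URL = "oracle+oracledb://etl_user:***@src-host/ORCLPDB1"       # placeholder
TARGET_URL = "postgresql+psycopg2://etl_user:***@dwh-host/analytics"  # placeholder

@pytest.fixture(scope="module")
def engines():
    return create_engine(SOURCE_URL), create_engine(TARGET_URL)

def scalar(engine, sql):
    with engine.connect() as conn:
        return conn.execute(text(sql)).scalar()

def test_row_counts_match(engines):
    src, tgt = engines
    src_rows = scalar(src, "SELECT COUNT(*) FROM stg_orders")
    tgt_rows = scalar(tgt, "SELECT COUNT(*) FROM dw_orders")
    assert src_rows == tgt_rows, f"row count mismatch: {src_rows} vs {tgt_rows}"

def test_amount_checksum_matches(engines):
    src, tgt = engines
    src_sum = scalar(src, "SELECT SUM(order_total) FROM stg_orders")
    tgt_sum = scalar(tgt, "SELECT SUM(order_total) FROM dw_orders")
    assert src_sum == tgt_sum, f"checksum mismatch: {src_sum} vs {tgt_sum}"
```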

Posted 1 week ago

Apply