0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Reporting Data Engineer

Join EY as a MARS Data Engineer and be at the forefront of providing and implementing innovative data insights, data products, and data services. MARS is a data platform providing custom data insights, DaaS, and DaaP for a variety of EY departments and staff. We leverage software development practices to develop intricate data insights and data products.

Your Key Responsibilities

As a member of the MARS team, you will play a critical role in our mission of providing innovative data insights and in the operations and support of the MARS data platform. This includes supporting customers, internal team members, and management. Operations and support include estimating, designing, developing, and delivering data products and services. You will contribute your creative solutions and knowledge to our data platform, which ingests 2TB of mobile device data daily from more than 300,000 devices. Our platform empowers our product managers and helps enable our teams to build a better working world.

As a reporting engineer on the MARS team, you are expected to:
- Collaborate closely with the product manager to align activities to timelines and deadlines
- Proactively suggest new ideas and solutions, driving them to implementation with minimal guidance on technical delivery
- Provide input to the MARS roadmap and actively participate in bringing it to life
- Collaborate with the Intune engineering team to gain a clear understanding of the mobile device lifecycle and its relationship to Intune data and reporting
- Serve as the last level of support for all MARS data reporting questions and issues

Participate and contribute in the following activities:
- Customer discussions and requirement-gathering sessions
- Application reports (daily, weekly, monthly, quarterly, annual)
- Custom reporting for manual reports, dashboards, exports, APIs, and semantic models
- Customer service engagements
- Daily team meetings
- Work estimates and daily status
- Data and dashboard monitoring and troubleshooting
- Automation
- Data management and classification
- Maintaining design documentation for data schemas, data models, the data catalogue, and related products/services
- Monitoring and integrating a variety of data sources
- Maintaining and developing custom data quality tools

Skills and attributes for success
- Analytical ability: strong analytical skills in supporting core technologies, particularly in managing large user bases, to effectively troubleshoot and optimize data solutions.
- Communication skills: excellent written and verbal communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical stakeholders. Proficiency in English is required; additional languages are a plus.
- Interpersonal skills: strong interpersonal skills, sound judgment, and tact to foster collaboration with colleagues and customers across diverse cultural backgrounds.
- Creative problem-solving: the ability to conceptualize innovative solutions that add value to end users, particularly in the context of mobile applications and services.
- Self-starter mentality: a proactive and self-motivated approach to work, with the ability to take initiative and drive projects forward independently.
- Documentation skills: clear and concise documentation skills, ensuring that all processes, solutions, and communications are well documented for future reference.
- Organizational skills: the ability to define project plans, execute them, and manage ongoing risks and communications throughout the project lifecycle.
- Cross-cultural awareness: awareness of and sensitivity to cross-cultural dynamics, enabling effective collaboration with global teams and clients.
- User experience focus: passionate about improving user experience, with an understanding of how to measure, monitor, and enhance user satisfaction through feedback and analytics.

To qualify for the role, you must have:
- At least three years of experience in the technologies and methodologies below
- Hands-on experience with Microsoft Intune data and mobile device and application management data (Microsoft APIs, Graph, and IDW)
- Proven experience in mobile platform engineering or a related field
- A strong understanding of mobile technologies and security protocols, particularly within an Intune-based environment
- Experience with Microsoft Intune, including mobile device and application management
- Proficiency in supporting Modern Workplace tools and resources
- Experience with iOS and Android operating systems
- Proficiency in PowerShell scripting for automation and management tasks
- The ability to operate proactively and independently in a fast-paced environment
- A solution-oriented mindset, with the ability to design, suggest, and implement creative mobile solutions that meet EY’s requirements
- Availability to work UK working hours

Specific technology skills include the following:

Technical Skills
- Power BI: semantic models, advanced dashboards, Power BI templates
- Intune reporting and Intune data: compliance, devices, policy management, metrics, monitoring
- Splunk data and reporting
- Sentinel data and reporting
- HR data and reporting
- Mobile Defender data and reporting
- AAD (Azure Active Directory)
- Data quality and data assurance
- Databricks
- Web analytics and mobile analytics
- Azure Data Factory, Azure Pipelines, Azure Synapse
- Azure SQL DB/Server
- ADF automation
- Azure Kubernetes Service (AKS)
- Key Vault management
- Azure Monitoring
- App Proxy and Azure Front Door data exports
- API development
- Python, SQL, KQL, Power Apps
- Microsoft Intune APIs (Export, App Install)
- Virtual machines
- SharePoint: general operations
- Data modeling
- ETL and related technologies

Ideally, you’ll also have:
- Strong communication skills to effectively liaise with various stakeholders
- A proactive approach to suggesting and implementing new ideas
- Familiarity with the latest trends in mobile technology
- The ability to explain very technical topics to non-technical stakeholders
- Experience in managing and supporting large mobile environments
- Testing and quality assurance: ensuring our mobile platform meets quality, performance, and security standards
- Implementation of new products and/or service offerings
- Experience working in a large global environment
- XML data formats
- Agile delivery
- Object-oriented design and programming
- Mobile software development

What we look for: a person who demonstrates a commitment to integrity, initiative, collaboration, and efficiency, with three or more years in the field of data analytics and Intune data reporting.

What We Offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: you’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: we’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: we’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: you’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
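The role pairs Python with the Microsoft Graph/Intune APIs for custom reporting. As a minimal sketch of the kind of reporting pull involved — assuming an OAuth access token has already been acquired, and treating the selected properties as illustrative — device compliance data can be paged out of Graph like this:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_managed_devices(token: str) -> list:
    """Page through all Intune managed devices for a compliance report."""
    url = (f"{GRAPH}/deviceManagement/managedDevices"
           "?$select=deviceName,operatingSystem,complianceState")
    headers = {"Authorization": f"Bearer {token}"}
    devices = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        devices.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # None on the last page
    return devices
```

The paging loop matters at this scale: with 300K+ devices, results arrive in pages linked by @odata.nextLink rather than a single response.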
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- B.E / MCA / B.Tech / M.Tech / MS graduation (minimum 16 years of formal education; correspondence courses are not relevant)
- 2+ years of experience with Azure database offerings such as SQL DB and Postgres DB, constructing data pipelines using Azure Data Factory, and designing and developing analytics using Azure Databricks and Snowpark
- 2+ years of experience with cloud-based data warehouses: Snowflake, Azure SQL DW
- 2+ years of experience in data engineering on large data warehouses, including design and development of ETL/ELT
- 3+ years of experience constructing large and complex SQL queries on terabyte-scale warehouse database systems
- Good knowledge of Agile practices: Scrum, Kanban
- Knowledge of Kubernetes, Jenkins, CI/CD pipelines, SonarQube, Artifactory, Git, and unit testing
- Main tech experience: Docker, Kubernetes, and Kafka
- Database: Azure SQL databases
- Knowledge of Apache Kafka and data streaming
- Main tech experience: Terraform and Azure
- Ability to identify system changes and verify that technical system specifications meet the business requirements
- Solid problem-solving and analytical skills
- Proven good communication and presentation skills
- A good attitude and self-motivation

Preferred Qualifications
- 2+ years of experience working with cloud-native monitoring and logging tools such as Log Analytics
- 2+ years of experience with scheduling tools on cloud, using Apache Airflow, Logic Apps, or any native/third-party cloud scheduler
- Exposure to ATDD, Fortify, SonarQube
- Unix scripting, DW concepts, and ETL frameworks: Scala/Spark, DataStage

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
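Among the named technologies, Kafka and data streaming lend themselves to a short illustration. A minimal consumer sketch using the confluent-kafka client — the broker address, consumer group, and topic name are hypothetical stand-ins:

```python
from confluent_kafka import Consumer

# Hypothetical broker, group, and topic names, for illustration only.
conf = {
    "bootstrap.servers": "broker:9092",
    "group.id": "claims-etl",
    "auto.offset.reset": "earliest",
}
consumer = Consumer(conf)
consumer.subscribe(["member-events"])
try:
    while True:
        msg = consumer.poll(1.0)  # block up to 1s for the next message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```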
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Description

Job Description: You will be part of an eight-person team in charge of developing a robust system that provides real-time tire details to a warehouse management system. You will be responsible for transforming and developing the integration of two existing applications. You will own the analysis and development of solutions in full autonomy; as a developer you should be able to design, build, deploy, and maintain applications or adapt existing ones. To ensure application stability, scalability, performance, security, and consistency, you will verify the functional and/or technical quality of developments by implementing and/or executing tests on a wide variety of devices and setups. Every team member works in full autonomy on their project, from discussion with the business through delivery of a usable solution.

Technical Primary Skills (Mandatory)
- Core and advanced Java (version 8 and above)
- Spring Boot 2.0 and above, and REST APIs
- Basic knowledge of MongoDB
- Apache Kafka

Technical Secondary Skills (Good to Have)
- MQ, Quarkus, Azure, ADF
- Basics of Docker, GitLab, and Kubernetes (K8s)

Also required:
- Excellent English communication (mandatory)
- Strong problem solving and analytical thinking (mandatory)

Your future duties and responsibilities

Required Qualifications To Be Successful In This Role

Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
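The stack here is Java/Spring Boot, but the MongoDB lookup at the heart of the integration is easy to illustrate language-neutrally. A sketch in Python with pymongo — the connection string, database, collection, and field names are hypothetical stand-ins for whatever the real tire-data schema uses:

```python
from pymongo import MongoClient

# Hypothetical connection string and collection names, for illustration only.
client = MongoClient("mongodb://localhost:27017")
tires = client["warehouse"]["tires"]

def tire_details(tire_id: str):
    """Look up the real-time details of a tire for the warehouse system."""
    # Project away the internal _id so the result maps cleanly to an API payload.
    return tires.find_one({"tireId": tire_id}, {"_id": 0})

print(tire_details("T-12345"))
```

In the actual role the equivalent lookup would live behind a Spring Boot REST endpoint, with Kafka carrying the real-time updates into the store.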
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within….

Responsibilities
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).
Requirements
- Proven experience working as a data engineer
- Highly proficient in the Spark framework (Python and/or Scala)
- Extensive knowledge of data warehousing concepts, strategies, and methodologies
- Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
- Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
- Experience in designing and hands-on development of cloud-based analytics solutions
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
- Design and building of data pipelines using API ingestion and streaming ingestion methods
- Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential
- Thorough understanding of Azure cloud infrastructure offerings
- Strong experience in common data warehouse modeling principles, including Kimball
- Working knowledge of Python is desirable
- Experience developing security models
- Databricks and Azure Big Data Architecture certifications would be a plus

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 4-8 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Science, Bachelor of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: ADF Business Components, ADL Assistance, Android Debug Bridge (ADB)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
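Since the core of this role is ADF/Databricks pipeline work, here is a minimal batch-pipeline sketch in PySpark — the storage account, containers, and column names are hypothetical, and the Delta write assumes a Databricks-style environment where the Delta format is available:

```python
from pyspark.sql import SparkSession, functions as F

# In Databricks the SparkSession already exists as `spark`; this is for standalone runs.
spark = SparkSession.builder.appName("sales-ingest").getOrCreate()

# Read raw CSVs landed in the data lake (hypothetical paths).
raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@account.dfs.core.windows.net/sales/*.csv")
)

# Basic cleansing: deduplicate on the business key, type the date, drop bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount") > 0)
)

# Persist to the curated zone in Delta format for downstream consumers.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@account.dfs.core.windows.net/sales")
)
```

In practice ADF would orchestrate this as a pipeline activity, with the notebook or job handling the transform step.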
Posted 1 month ago
10 - 15 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Company: With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards, and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions, bringing together the power of a next-gen digital product stack, customer excellence and the trust of an established bank.

Job Purpose: To lead and own the data platform for a business vertical.

Experience: Overall experience between 10 and 15 years; the applicant must have a minimum of 7 years of core professional experience in data modeling for large data warehouses with multiple sources.

Technical Skills
- Expertise in core data modeling principles/methods, including conceptual, logical and physical data models
- Ability to use BI tools like Power BI, Tableau, etc. to represent insights
- Expert knowledge of metadata management and data modeling tools like ER/Studio, Erwin or others
- Hands-on experience in relational, dimensional and/or analytical modelling (using RDBMS, dimensional, NoSQL, ETL and data ingestion protocols)
- Good to have knowledge of:
  - Azure PowerShell scripting or Python scripting for data transformation in ADF
  - SSIS, SSAS, and BI tools like Power BI
  - Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
  - API integration

Responsibility
- Front-face with business representatives to understand the data office requirements
- Understand the business requirements, translate them into data requirements, articulate the requirements to the data engineers, and build the data models for business consumption
- Work with the development team to implement the proposed data model into a physical data model and build data flows
- Work with the development team to optimize the database structure with best practices, applying optimization methods
- Analyze, document and implement re-use of data models for new initiatives
- Interact with stakeholders, users and other IT teams to understand the ecosystem and analyze for solutions

Qualifications
- Bachelor's in Computer Science or equivalent
- Certification in data modeling and data analysis
- Good to have Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201)

Behavioral Competencies
- Excellent problem-solving and time management skills
- Strong analytical thinking skills
- Excellent communication skills; process-oriented with a flexible execution mindset
- Strategic thinking with a research and development mindset
- Clear and demonstrative communication
- Efficiently identifies and solves issues
- Identifies, tracks and escalates risks in a timely manner

Selection Process: Interested candidates are mandatorily required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may need to appear in an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening rounds will be processed further by IndusInd Bank.
Posted 1 month ago
10 - 15 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Company: With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards, and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions, bringing together the power of a next-gen digital product stack, customer excellence and the trust of an established bank.

Job Purpose:
- To work on implementing data modeling solutions
- To design data flows and structures that reduce data redundancy and improve data movement among systems, defining data lineage
- To work in the Azure data warehouse
- To work with large volumes of data integration

Experience: With overall experience between 10 and 15 years, the applicant must have a minimum of 8 to 11 years of core professional experience in data modeling for large data warehouses with multiple sources.

Technical Skills
- Expertise in core data modeling principles/methods, including conceptual, logical and physical data models
- Ability to use BI tools like Power BI, Tableau, etc. to represent insights
- Experience in translating/mapping relational data models into XML and schemas
- Expert knowledge of metadata management and relational and data modeling tools like ER/Studio, Erwin or others
- Hands-on experience in relational, dimensional and/or analytical modelling (using RDBMS, dimensional, NoSQL, ETL and data ingestion protocols)
- Very strong SQL query skills
- Expertise in performance tuning of SQL queries
- Ability to analyse source systems and create source-to-target mappings
- Ability to understand the business use case and create data models or joined data in the data warehouse
- Preferred: experience in the banking domain and in building data models/marts for various banking functions
- Good to have knowledge of:
  - Azure PowerShell scripting or Python scripting for data transformation in ADF
  - SSIS, SSAS, and BI tools like Power BI
  - Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
  - API integration

Responsibility
- Understand the existing data model, existing data warehouse design and functional domain subject areas of data, documenting both the as-is architecture and the proposed one
- Understand the existing ETL processes and various sources, analysing and documenting the best approach to design the logical data model where required
- Work with the development team to implement the proposed data model into a physical data model and build data flows
- Work with the development team to optimize the database structure with best practices, applying optimization methods
- Analyze, document and implement re-use of data models for new initiatives
- Interact with stakeholders, users and other IT teams to understand the ecosystem and analyze for solutions
- Work on user requirements and create queries for consumption views for users from the existing DW data
- Train and lead a small team of data engineers

Qualifications
- Bachelor's in Computer Science or equivalent
- Certification in data modeling and data analysis
- Good to have Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201)

Behavioral Competencies
- Excellent problem-solving and time management skills
- Strong analytical thinking skills
- Excellent communication skills; process-oriented with a flexible execution mindset
- Strategic thinking with a research and development mindset
- Clear and demonstrative communication
- Efficiently identifies and solves issues
- Identifies, tracks and escalates risks in a timely manner

Selection Process: Interested candidates are mandatorily required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may need to appear in an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening rounds will be processed further by IndusInd Bank.
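One recurring duty in this role is creating consumption views over existing warehouse data. A minimal sketch of what that can look like from Python via pyodbc — the DSN, credentials, schema and table names are hypothetical banking-flavoured examples, not the bank's actual model:

```python
import pyodbc

# Hypothetical DSN and credentials, for illustration only.
conn = pyodbc.connect("DSN=AzureSynapseDW;UID=modeler;PWD=********")
cur = conn.cursor()

# A consumption view joining a fact to its dimensions for business users.
cur.execute("""
CREATE VIEW rpt.v_loan_disbursals AS
SELECT d.branch_name,
       c.customer_segment,
       f.disbursal_date,
       SUM(f.loan_amount) AS total_disbursed
FROM   dw.fact_loans f
JOIN   dw.dim_branch d   ON f.branch_key = d.branch_key
JOIN   dw.dim_customer c ON f.customer_key = c.customer_key
GROUP BY d.branch_name, c.customer_segment, f.disbursal_date
""")
conn.commit()
```

Pre-aggregated views like this keep the star schema hidden from report authors while giving the modeler a single place to tune joins and grain.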
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Associate

Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing the Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design and develop technical solutions for Oracle Fusion applications, including customizations, extensions, and integrations.
- Collaborate with business analysts, functional consultants, and other stakeholders to understand business requirements and translate them into technical specifications.
- Develop and maintain Oracle Fusion reports, interfaces, conversions, and extensions (RICEW).
- Ensure the performance, scalability, and reliability of Oracle Fusion applications.
- Troubleshoot and resolve technical issues related to Oracle Fusion applications.
- Participate in code reviews, testing, and quality assurance processes to ensure high-quality deliverables.
- Provide technical support and guidance to end users and other team members.
- Stay updated with the latest Oracle Fusion updates, patches, and best practices.
- Document technical designs, configurations, and procedures.

Mandatory skill sets
- Experience with Oracle BI Publisher and OTBI.
- Strong knowledge of Oracle Fusion architecture, data models, and development tools.
- Proficiency in Oracle Fusion middleware technologies such as Oracle SOA, Oracle ADF, and Oracle BPEL.
- Experience with SQL, PL/SQL, Java, and XML.
- Familiarity with Oracle Fusion applications modules (e.g., Financials, HCM, SCM).
- Understanding of integration technologies and methodologies (e.g., REST/SOAP web services, APIs).
- Strong problem-solving skills and attention to detail.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.

Preferred skill sets
- Understanding of integration technologies and methodologies (e.g., REST/SOAP web services, APIs).

Years of Experience Required: 2-4 years
Educational Qualification: BE/BTech/MBA/MCA/CA; Oracle Fusion Technical Certification
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Chartered Accountant Diploma; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Oracle Fusion Applications
Optional Skills: Accepting Feedback, Active Listening, Business Transformation, Communication, Design Automation, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Strategic Technology Planning, Teamwork, Well Being
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
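The integration work above centres on Oracle Fusion's REST/SOAP interfaces. As a hedged sketch of calling a Fusion REST resource from Python — the pod URL, resource version, filter expression, and field names are illustrative assumptions, not a specific client's configuration:

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical pod URL; Fusion exposes REST resources under /fscmRestApi/resources/.
BASE = "https://example-pod.oraclecloud.com/fscmRestApi/resources/11.13.18.05"

resp = requests.get(
    f"{BASE}/invoices",
    params={"limit": 25, "q": "InvoiceStatus='Pending'"},  # illustrative filter
    auth=HTTPBasicAuth("integration.user", "********"),    # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

# Fusion REST collections return their rows under an "items" array.
for inv in resp.json().get("items", []):
    print(inv.get("InvoiceNumber"), inv.get("InvoiceAmount"))
```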
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description: Azure Data Engineer (4-7 years' experience)
Position: Azure Data Engineer
Experience: 4-7 years
Location: Gurugram
Type: Full-time
Preferred Certifications: Azure Data Engineer Associate, Databricks (not mandatory)

About the Role: We are looking for a skilled Azure Data Engineer with 4-7 years of experience in Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks. The candidate will play a key role in developing and maintaining data solutions on Azure.

Key Responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from various data sources to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Required Skills
- Strong experience with Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks.
- Proficiency in SQL, data transformation, and ETL processes.
- Hands-on experience with Azure Data Lake migrations and Python/PySpark.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Preferred Qualifications
- Azure Data Engineer Associate certification.
- Databricks certification.
Mandatory Skill Sets: Azure, ADF, SQL
Preferred Skill Sets: Azure, ADF, SQL
Years of Experience Required: 4-7 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Master of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure, Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 19 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
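ADF pipelines are usually authored in the portal, but runs are commonly triggered and monitored from Python. A hedged sketch using the azure-identity and azure-mgmt-datafactory SDKs — the subscription, resource group, factory, pipeline name, and parameter are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Hypothetical subscription and resource names, for illustration only.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-ingest"

# DefaultAzureCredential picks up a managed identity, CLI login, or env vars.
client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a pipeline run; progress can later be polled via client.pipeline_runs.get().
run = client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    "pl_copy_to_datalake",                    # hypothetical pipeline name
    parameters={"load_date": "2024-01-01"},   # hypothetical pipeline parameter
)
print(f"Started ADF run: {run.run_id}")
```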
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Associate

Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. In Oracle human capital at PwC, you will specialise in providing consulting services for Oracle human capital management (HCM) applications. You will analyse client requirements, implement HCM software solutions, and provide training and support for seamless integration and utilisation of Oracle HCM applications. Working in this area, you will enable clients to optimise their human resources processes, enhance talent management, and achieve their strategic objectives.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within PwC's Oracle Services Practice will provide you with the opportunity to help organizations use enterprise technology to achieve their digital technology goals and capitalize on business opportunities. We help our clients implement and effectively use Oracle offerings to solve their business problems and fuel success in the areas of finance operations, human capital management, supply chain management, reporting and analytics, and governance, risk and compliance.

Responsibilities
- Participate in the implementation of Oracle HCM Cloud modules such as Core HR, Payroll, Benefits, Talent Management, Compensation, and others.
- Configure Oracle HCM Cloud applications to meet client requirements.
- Develop and customize reports using Oracle BI Publisher, OTBI, and other reporting tools.
- Create and modify HCM extracts, HDL (HCM Data Loader) scripts, and other data integration processes.
- Design and develop integrations using Oracle Integration Cloud (OIC) or other middleware solutions.

Mandatory skill sets
- Design and develop integrations using Oracle Integration Cloud (OIC) or other middleware solutions.
- Modules: Absence, Time and Labor, Payroll, workforce planning, HR helpdesk, Oracle Digital Assistant, Oracle Guided Learning

Preferred skill sets
- Provide technical support and troubleshooting for Oracle HCM Cloud applications.
- Perform routine maintenance and upgrades to ensure optimal performance of the HCM system.

Years of Experience Required: 2-4 years
Educational Qualification: BE/BTech/MBA/MCA/CA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Chartered Accountant Diploma, Master of Business Administration, Bachelor of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Oracle HCM Cloud
Optional Skills: Absence Management, Accepting Feedback, Active Listening, Benefits Administration, Business Analysis, Business Process Improvement, Change Management, Communication, Emotional Regulation, Empathy, Employee Engagement Strategies, Employee Engagement Surveys, Employee Relations Investigations, Human Capital Management, Human Resources (HR) Consulting, Human Resources (HR) Metrics, Human Resources (HR) Policies, Human Resources (HR) Project Management, Human Resources (HR) Transformation, Human Resources Management (HRM), Inclusion, Intellectual Curiosity, Optimism, Oracle Application Development Framework (ADF) {+ 21 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
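The responsibilities mention HDL (HCM Data Loader) scripts. HDL business objects are loaded from pipe-delimited .dat files that pair a METADATA header line with MERGE data lines. As a heavily abbreviated illustration only — a real Worker load needs the full attribute set and child components such as PersonName and WorkRelationship — a file of this general shape could be generated like so:

```python
# Generate a minimal, abbreviated HCM Data Loader (HDL) Worker.dat file.
# The attribute list below is illustrative, NOT a complete Worker definition.
rows = [
    ("EBS", "300000001", "2024-01-01"),  # hypothetical source-system values
]

with open("Worker.dat", "w", encoding="utf-8") as f:
    # METADATA line declares the object and the order of attributes that follow.
    f.write("METADATA|Worker|SourceSystemOwner|SourceSystemId|EffectiveStartDate\n")
    # Each MERGE line supplies values in the same order as the METADATA line.
    for owner, sys_id, start in rows:
        f.write(f"MERGE|Worker|{owner}|{sys_id}|{start}\n")
```

The generated .dat file would then be zipped and submitted through the HCM Data Loader import process.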
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from various data sources to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle (see the sketch after this posting's details).
- Collaborate with data architects and other team members to design and implement data solutions.

Mandatory Skill Sets
- Strong experience with Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks.
- Proficiency in SQL, data transformation, and ETL processes.
- Hands-on experience with Azure Data Lake migrations and Python/PySpark.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Preferred Skill Sets
- Azure Data Engineer Associate certification.
- Databricks certification.
Years of Experience Required: 4-7 years
Education Qualification: BTech/MBA/MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure, Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Artificial Intelligence, Big Data, C++ Programming Language, Communication, Complex Data Analysis, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Machine Learning {+ 12 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
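One responsibility above is ensuring data quality and integrity throughout the data lifecycle. A minimal PySpark sketch of fail-fast quality gates — the Delta path and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated dataset produced by an upstream pipeline.
df = spark.read.format("delta").load(
    "abfss://curated@account.dfs.core.windows.net/sales"
)

# Fail fast if key data-quality rules are violated before downstream use.
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = df.groupBy("order_id").count().filter("count > 1").count()

assert null_keys == 0, f"{null_keys} rows with null order_id"
assert dupes == 0, f"{dupes} duplicate order_id values"
```

In a production pipeline these assertions would typically feed a monitoring alert or quarantine step rather than a hard failure, but the gating logic is the same.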
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description

About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be a part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization’s data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and the Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.

How you will contribute:
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies to support continuing increases in data sources, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems that provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

Minimum Requirements/Qualifications:
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding of and experience using:
  - Data engineering programming languages (e.g., Python, SQL)
  - Distributed data frameworks (e.g., Spark)
  - Cloud platform services (AWS/Azure preferred)
  - Relational databases
  - DevOps and continuous integration
  - AWS services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  - Azure services like ADF, ADLS, etc.
  - Data lakes and data warehouses
  - Databricks/Delta Lakehouse architecture
  - Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Uses the principles of continuous integration and delivery to automate the deployment of code changes through environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures and pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member of globally distributed teams delivering high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Preferred requirements:
- Master's degree in Computer Science, Data Science, or a related engineering field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS/Azure Certified Data Engineer

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
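The requirements call out Spark Structured Streaming for real-time ETL. A minimal sketch of a streaming read-transform-write loop — the broker, topic, schema, and paths are hypothetical, and the Kafka source assumes the spark-sql-kafka connector is on the classpath (it is bundled in Databricks):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-etl").getOrCreate()

# Continuously read events from a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers binary key/value columns; parse the JSON payload with a DDL schema.
parsed = events.select(
    F.col("key").cast("string"),
    F.from_json(
        F.col("value").cast("string"),
        "order_id STRING, amount DOUBLE",
    ).alias("o"),
).select("key", "o.*")

# Append to a Delta sink; the checkpoint makes the stream restartable exactly-once.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start("/tmp/delta/orders")
)
query.awaitTermination()
```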
Posted 1 month ago
3 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Responsibilities:
- Lead and manage multiple projects related to regulatory reports automation.
- Coordinate with multiple stakeholders across Finance, Compliance, Legal and Operations for periodic reviews, tracking progress, and ensuring defined actionables are tracked to closure in areas related to functional and technical specifications across multiple regulatory reports automation projects.
- Prepare, review, and publish status updates across projects at defined frequencies to different stakeholders.
- Coordinate with vendor teams to track project implementations and get them completed as planned.
- Ensure that all teams involved in a project document testing scenarios and test cases and perform testing.
- Ensure that the data dictionary, data lineage, and the checks and controls required for quality, consistency and integrity of reporting are well defined, documented and kept up to date.
- Liaise with internal teams to get responses to audit queries.

Must-have skills/experience:
- Business analysis / data analysis / report design / mapping
- Project management
- Change management
- Minimum 3 years in reporting in the banking domain
- Ability to work and deliver in a high-pressure environment
- Experience working with cross-functional teams

Good-to-have skills/experience:
- Understanding of regulatory/compliance/risk in the banking domain
- Minimum work experience of 3 years, preferably in a business analyst or techno-functional consulting role in data warehousing / reporting automation projects
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).
Proven experience working as a data engineer. Highly proficient in using the Spark framework (Python and/or Scala). Extensive knowledge of data warehousing concepts, strategies, and methodologies. Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks). Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics. Experience in designing and hands-on development of cloud-based analytics solutions. Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. Designing and building of data pipelines using API ingestion and streaming ingestion methods. Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential. Thorough understanding of Azure cloud infrastructure offerings. Strong experience in common data warehouse modeling principles, including Kimball. Working knowledge of Python is desirable. Experience developing security models. Databricks & Azure Big Data Architecture Certification would be a plus. Mandatory Skill Sets ADE, ADB, ADF Preferred Skill Sets ADE, ADB, ADF Years Of Experience Required 2-10 years Education Qualification BE, B.Tech, MCA, M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master Degree - Computer Applications, Bachelor of Engineering, Master of Engineering, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Data Engineering, Microsoft Azure Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
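The core of this role is building Spark-based pipelines on Azure Databricks. As a minimal sketch of what that looks like in practice (not taken from the listing; the paths and column names are illustrative assumptions):

```python
# Minimal PySpark batch pipeline: ingest raw CSV, cleanse, write curated Parquet.
# Paths and column names are placeholders, not from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-ingest").getOrCreate()

# Ingest raw data (in Databricks this path would typically point at ADLS Gen2)
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/sales/"))

# Basic cleansing and enrichment
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_date", F.to_date("order_date"))
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Persist to the curated zone as date-partitioned Parquet
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/mnt/curated/sales/"))
```

In an ADF-orchestrated setup, a pipeline activity would typically trigger a notebook or job containing logic like this on a schedule or event.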
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business. In this role, you will be a part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization’s data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead. How you will contribute: Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS/Azure native technologies, to support continuing increases in data sources, volume, and complexity. Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data Environment. Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity. Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes. Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives. Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling. Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions. Minimum Requirements/Qualifications: Bachelor's degree in Engineering, Computer Science, Data Science, or a related field. 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development. Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines. Experience in designing and developing ETL pipelines using ETL tools such as IICS, DataStage, Ab Initio, Talend, etc.
Proven track record of designing and implementing complex data solutions. Demonstrated understanding and experience using: Data Engineering Programming Languages (i.e., Python, SQL) Distributed Data Framework (e.g., Spark) Cloud platform services (AWS/Azure preferred) Relational Databases DevOps and continuous integration AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS or related AWS ETL services Azure knowledge of services like ADF, ADLS, etc. Knowledge of data lakes and data warehouses Databricks/Delta Lakehouse architecture Code management platforms like GitHub/GitLab, etc. Understanding of database architecture, data modelling concepts, and administration. Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines. Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases. Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architecture/pipelines that fit business goals. Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions. Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners. Strong problem-solving and troubleshooting skills. Ability to work in a fast-paced environment and adapt to changing business priorities. Preferred requirements: Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field. Demonstrated understanding and experience using: Knowledge of CDK Experience with the IICS data integration tool Job orchestration tools like Tidal/Airflow or similar Knowledge of NoSQL Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous. Databricks Certified Data Engineer Associate AWS/Azure Certified Data Engineer EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
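The listing specifically asks for hands-on Spark Structured Streaming for real-time ETL. A minimal sketch of that pattern, assuming a folder of newline-delimited JSON events as the source; all paths and field names are illustrative:

```python
# Minimal Spark Structured Streaming job: windowed aggregation with a
# watermark, written to Parquet with checkpointing. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Source: JSON files landing in a raw folder (a Kafka/Event Hubs source
# would instead use .format("kafka") with broker and topic options)
events = (spark.readStream
          .schema("device_id STRING, ts TIMESTAMP, reading DOUBLE")
          .json("/mnt/raw/events/"))

# Windowed aggregation; the watermark bounds streaming state
per_device = (events
              .withWatermark("ts", "10 minutes")
              .groupBy(F.window("ts", "5 minutes"), "device_id")
              .agg(F.avg("reading").alias("avg_reading")))

# Sink: append mode plus a checkpoint directory for fault-tolerant output
query = (per_device.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/mnt/curated/device_metrics/")
         .option("checkpointLocation", "/mnt/chk/device_metrics/")
         .start())
query.awaitTermination()
```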
Posted 1 month ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business. In this role, you will be a part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization’s data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. This role will align to the Data & Analytics chapter of the ICC. This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead. How you will contribute: Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS/Azure native technologies, to support continuing increases in data sources, volume, and complexity. Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data Environment. Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity. Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes. Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives. Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling. Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions. Minimum Requirements/Qualifications: Bachelor's degree in Engineering, Computer Science, Data Science, or a related field. 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development. Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines. Experience in designing and developing ETL pipelines using ETL tools such as IICS, DataStage, Ab Initio, Talend, etc.
Proven track record of designing and implementing complex data solutions. Demonstrated understanding and experience using: Data Engineering Programming Languages (i.e., Python, SQL) Distributed Data Framework (e.g., Spark) Cloud platform services (AWS/Azure preferred) Relational Databases DevOps and continuous integration AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS or related AWS ETL services Azure knowledge of services like ADF, ADLS, etc. Knowledge of data lakes and data warehouses Databricks/Delta Lakehouse architecture Code management platforms like GitHub/GitLab, etc. Understanding of database architecture, data modelling concepts, and administration. Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines. Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases. Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architecture/pipelines that fit business goals. Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions. Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners. Strong problem-solving and troubleshooting skills. Ability to work in a fast-paced environment and adapt to changing business priorities. Preferred requirements: Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field. Demonstrated understanding and experience using: Knowledge of CDK Experience with the IICS data integration tool Job orchestration tools like Tidal/Airflow or similar Knowledge of NoSQL Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous. Databricks Certified Data Engineer Associate AWS/Azure Certified Data Engineer EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
5 - 10 years
10 - 12 Lacs
Chennai
Work from Office
Role Overview: We are looking for an experienced Data Engineer to join our Azure Data Platform team. The ideal candidate will have a deep understanding of Azure's data engineering and cloud technology stack. This role is pivotal in driving data-driven decision-making, operational analytics, and advanced manufacturing intelligence initiatives. Job Title: Data Engineer – Azure Data Platform Location: Padi, Chennai Job Type: Full-Time Key Responsibilities: Lead the design and implementation of data architectures that support operational analytics and advanced manufacturing intelligence, ensuring scalability and flexibility to handle increasing data volumes. Design, implement, and maintain scalable data and analytics platforms using Microsoft Azure services, such as Azure Data Factory (ADF), Azure Data Lake Storage Gen2, and Azure Synapse Analytics. Develop and manage ETL processes, data pipelines, and batch jobs to ensure efficient data flow and transformation, optimizing pipeline runs and monitoring compute and storage usage. Implement metadata management solutions to ensure data quality and governance, leading to consistent data quality and integrity. Integrate data from key sources such as SAP, SQL Server, cloud databases, IoT, and other live streaming sources into centralized data structures to support analytics and decision-making. Provide expertise on data ingestion (SAP, SQL), data transformation, and the automation of data pipelines in a manufacturing context. Ensure the data platform supports dashboarding and advanced analytics, enabling business users to independently create and evolve dashboards. Implement manufacturing-specific analytics solutions, including leadership and operational dashboards, and other analytics solutions across our value chain, leveraging Azure's comprehensive toolset. Define and monitor KPIs, ensuring data quality and the accuracy of insights delivered to business stakeholders. Identify and manage project risks related to data security, system integration, and scalability. Independently maintain the data platform, ensuring its reliability and performance, and implementing best practices for data security and compliance. Advise the Data Platform project manager and leadership team on best practices for data management and scaling needs, providing guidance on integrating data from IoT and other SaaS platforms, as well as newer systems as they come into the digital landscape. Work closely with data scientists to ensure data is available in the required format for their analyses, and collaborate with Power BI developers to support dashboarding and reporting needs. Create data marts for business users to facilitate self-service analytics. Mentor and train junior engineers, fostering their professional growth and development, and providing guidance and support on best practices and technical challenges. Qualifications & Experience: Bachelor's degree in Engineering, Computer Science, or a related field. Experience: 8-10 years of experience, with a minimum of 5 years working on core data engineering responsibilities on a cloud platform. Project management experience is a big plus. Proven track record of implementing data-driven solutions in areas such as plant automation, operational analytics, quality control, and supply chain optimization. Technical Proficiency: Expertise in cloud-based data platforms, particularly within the Azure ecosystem (Azure Data Factory, Synapse Analytics, Databricks). Familiarity with SAP as a data source.
Proficiency in programming languages such as SQL, Python, and R for analytics and reporting. Soft Skills: Strong analytical mindset with the ability to translate manufacturing challenges into data-driven insights and solutions. Excellent communication and organizational skills. What We Offer: The opportunity to work on transformative data analytics projects that drive innovation and operational excellence in manufacturing. A collaborative and dynamic work environment focused on professional growth and career development.
Posted 1 month ago
5 - 8 years
9 - 10 Lacs
Pune
Work from Office
Required skills: Azure Data Factory (min 4.5 yrs), Data Warehousing (3 yrs), SQL. Tech stack: Azure Data Factory, Data Warehousing, SQL. Candidate profile: WhatsApp or call 9599062625.
Posted 1 month ago
0.0 - 5.0 years
0 Lacs
Gurugram, Haryana
On-site
Reporting to: AGM - Compliance. Location: Central Support Office, Gurgaon. Education: Technically qualified, with a good background in BFSI IT systems. Experience: Min 5-7 years in the BFSI sector in a technical domain. Objective: The person will be responsible for rendering professional assistance on a day-to-day basis, with good knowledge and experience of BFSI IT systems. Responsibility:
· Monitoring and testing of core IT systems used for extracting data for NHB/RBI reporting
· Pulling MIS data from IT systems and synthesizing it as per NHB/RBI requirements
· Supporting regulatory interfaces such as Automated Data Flow (ADF portal)
· Assisting in end-to-end automation of regulatory returns
· Creation of automated dashboards and MIS
· Analysis and monitoring of data on different modules of loans, deposits, treasury, etc.
· Analysing the data and functioning of SQL, SAP, etc.
· Monitoring of AMLOCK software for anti-money laundering
· Regularly monitoring audit trails and system logs to detect any unauthorised activity
· Assisting with and handling the compliance tool
· Any other matter as may be assigned from time to time by the Chief Compliance Officer
Competencies: Min 5-7 years' experience in a similar role within an NBFC/bank
· SQL and SAP knowledge
· Expert in Excel and formulas
· Should have done monitoring and testing of IT systems in the BFSI sector
· Cross-functional team synergy
· Technical monitoring and judgement, along with diligence in meeting commitments
· Ability to function independently yet communicate laterally and upwardly with ease
· Ability to work under pressure with limited resources and tight timelines
· Excellent communication and stakeholder management
Functional:
· Strong regulatory, business and technical sense
· Detailed knowledge of NBFC/HFC products, policies and IT systems
· Strong understanding of business processes across all functions
· Ability to organize and manage multiple priorities
Job Type: Full-time. Pay: Up to ₹1,800,000.00 per year. Schedule: Day shift. Application Question(s): What is your notice period (in days)? What is your current annual compensation in INR? What is your expected annual compensation in INR? Experience: BFSI sector work: 5 years (Required); SQL and SAP: 5 years (Required); IT systems in BFSI sector: 5 years (Required); AMLOCK software for anti-money laundering: 5 years (Required); NBFC/HFC products, policies and IT systems: 5 years (Required). Location: Gurgaon, Haryana (Required). Work Location: In person
Posted 1 month ago
3 - 6 years
12 - 15 Lacs
Hyderabad
Remote
Job Title: Data Engineer Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating data, optimizing performance, and ensuring data quality & governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you! Key Responsibilities: 1. Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading. 2. Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data. 3. Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance. 4. Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage. 5. Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data. 6. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability. 7. Testing and Debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free. Conduct performance optimization and security audits. Required Skills and Qualifications: Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse. Programming: Proficiency in Python for data processing, automation, and scripting. SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation. Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling. Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing. Performance Optimization: Expertise in query optimization, indexing, and performance tuning. Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards. Preferred Qualifications: Experience with CI/CD for data pipelines using Azure DevOps. Knowledge of Kafka/Event Hub for real-time data processing. Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
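This posting pairs Databricks with Delta Lake for optimized storage. A hedged sketch of the standard Delta upsert (MERGE) pattern used for incremental loads; the table paths and the customer_id key are hypothetical, and the delta package is assumed to be available (it ships with Databricks runtimes):

```python
# Delta Lake upsert (MERGE) sketch: apply an incremental batch to a curated
# table, updating matches and inserting new rows. Paths/keys are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incremental batch staged by an upstream ADF pipeline (assumption)
updates = spark.read.parquet("/mnt/staging/customers/")

target = DeltaTable.forPath(spark, "/mnt/curated/customers/")

# Upsert: update matching rows, insert new ones
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

MERGE keeps the curated table idempotent under re-runs, which is why it is the usual choice over blind appends for incremental loads.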
Posted 1 month ago
5 - 8 years
22 - 30 Lacs
Pune, Chennai
Work from Office
Experience: Minimum of 5 years of experience in data engineering, with a strong focus on data pipeline development. At least 2 years of experience leading teams or projects in the healthcare, life sciences, or related domains. Proficiency in Python, with experience in data manipulation libraries. Hands-on experience with AWS Glue, AWS Lambda, S3, Redshift, and other relevant AWS data services. Familiarity with data integration tools, ETL (Extract, Transform, Load) frameworks, and data warehousing solutions. Proven experience working in an onsite-offshore model, managing distributed teams, and coordinating development across multiple time zones.
Posted 1 month ago
3 - 8 years
13 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Summary: We are looking for a skilled Azure Data Engineer to join our Data & Analytics team. You will be responsible for building and optimizing our data pipelines, designing and implementing data solutions on Microsoft Azure, and enabling data-driven decision-making across the organization. Key Responsibilities: Design, develop, and maintain scalable data pipelines and data processing systems using Azure Data Factory, Azure Synapse Analytics, and Azure Databricks. Integrate data from various structured and unstructured data sources into a centralized data lake or data warehouse. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions. Optimize data flows for performance, reliability, and scalability. Implement and manage data security, privacy, and compliance policies. Monitor and troubleshoot data pipeline issues and ensure system reliability. Leverage DevOps practices for CI/CD pipelines using tools like Azure DevOps or GitHub Actions. Document data flows, architecture, and data models. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 4+ years of experience in data engineering or a similar role. Hands-on experience with Azure services such as: Azure Data Factory Azure Synapse Analytics Azure Data Lake Storage Gen2 Azure SQL Database or Azure SQL Managed Instance Azure Databricks (preferred) Proficiency in SQL, Python, and/or PySpark for data transformation. Experience with data modeling, ETL/ELT processes, and data integration. Strong understanding of data governance, security, and compliance in the cloud. Familiarity with version control systems (e.g., Git) and CI/CD practices. Preferred Qualifications: Microsoft Certified: Azure Data Engineer Associate or equivalent certification. Experience with real-time data processing using Azure Stream Analytics or Apache Kafka. Knowledge of Power BI and data visualization best practices. Experience working in Agile or Scrum development environments.
Posted 1 month ago
6 - 8 years
12 - 16 Lacs
Hyderabad
Remote
Job Title: Data Engineer Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating data, optimizing performance, and ensuring data quality & governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you! Key Responsibilities: 1. Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading. 2. Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data. 3. Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance. 4. Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage. 5. Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data. 6. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability. 7. Testing and Debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free. Conduct performance optimization and security audits. Required Skills and Qualifications: Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse. Programming: Proficiency in Python for data processing, automation, and scripting. SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation. Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling. Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing. Performance Optimization: Expertise in query optimization, indexing, and performance tuning. Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards. Preferred Qualifications: Experience with CI/CD for data pipelines using Azure DevOps. Knowledge of Kafka/Event Hub for real-time data processing. Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
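Responsibility 7 above calls for unit tests around pipeline code. A small self-contained sketch of that practice using a local PySpark session; the add_total function and its columns are hypothetical, not from the listing:

```python
# Pytest-style unit test for a PySpark transformation, run against a local
# Spark session. The transformation and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total(df):
    # Transformation under test: derive a line total from qty * unit_price
    return df.withColumn("total", F.col("qty") * F.col("unit_price"))

def test_add_total():
    spark = (SparkSession.builder
             .master("local[1]")
             .appName("unit-test")
             .getOrCreate())
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["qty", "unit_price"])
    result = add_total(df).collect()
    assert [row["total"] for row in result] == [10.0, 4.5]

if __name__ == "__main__":
    test_add_total()
    print("ok")
```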
Posted 1 month ago
5 - 10 years
10 - 20 Lacs
Pune
Hybrid
Design, develop, and maintain scalable data pipelines using Python and PySpark. Implement and optimize SQL queries for data extraction, transformation, and loading (ETL) processes. Develop and manage data integration solutions using Azure Synapse and Azure Data Factory (ADF). Collaborate with DevOps teams to automate deployment and manage infrastructure. Create and maintain comprehensive documentation for data processes and solutions. Work closely with business stakeholders to understand requirements and deliver actionable insights. Develop and maintain reporting solutions using Power BI, Business Objects, and Tableau. Primary Skills: Proficiency in Python and PySpark for data processing and analysis. Strong SQL skills for database management and ETL processes. Experience with Azure Synapse and Azure Data Factory (ADF) for data integration. Knowledge of DevOps practices for continuous integration and deployment. Additional Skills: Experience with reporting tools such as Power BI, Business Objects, and Tableau. Ability to translate business requirements into technical solutions. Strong analytical and problem-solving skills. Excellent communication and collaboration skills.
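This role mixes SQL and PySpark in the same pipelines. A minimal sketch of that combination, registering a DataFrame as a temporary view and aggregating it with Spark SQL; the table and path names are assumptions:

```python
# Combining Spark SQL with PySpark: read curated data, expose it as a view,
# aggregate with SQL, and persist the result. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-transform").getOrCreate()

orders = spark.read.parquet("/mnt/curated/orders/")
orders.createOrReplaceTempView("orders")

# Spark SQL handles the aggregation; the surrounding pipeline stays in Python
daily = spark.sql("""
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

daily.write.mode("overwrite").parquet("/mnt/reporting/daily_sales/")
```

A table like this would typically feed the Power BI or Tableau reporting layer the posting mentions.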
Posted 1 month ago
4 - 8 years
10 - 18 Lacs
Kochi, Chennai, Bengaluru
Hybrid
Data Warehouse Developer. Experience: 3-8 years. Location: Chennai/Kochi/Bangalore. Responsibilities: Design, build, and maintain scalable and robust data engineering pipelines using Microsoft Azure technologies such as SQL Azure, Azure Data Factory, and Azure Databricks. Develop and optimize data solutions using Azure SQL, PySpark, and PySQL to handle complex data transformation and processing tasks. Implement and manage data storage solutions in OneLake and Azure SQL, ensuring data integrity and accessibility. Work closely with stakeholders to design and build effective reporting and analytics solutions using Power BI and other analytical tools. Collaborate with IT and security teams to integrate solutions within Azure AD and ensure compliance with data security and privacy standards. Contribute to the architectural design of database and lakehouse structures, optimizing for performance and scalability. Utilize .NET frameworks where applicable to enhance data processing and integration capabilities. Design and implement OLAP and data warehousing solutions, adhering to best practices in data warehouse design concepts. Perform database and query performance tuning and optimization to ensure high performance and reliability. Stay updated with the latest technologies and trends in big data, proposing and implementing new tools and technologies to improve data systems and processes. Implement unit testing and automation strategies to ensure the reliability and performance of the full-stack application. Conduct thorough code reviews, providing constructive feedback to team members and ensuring adherence to coding standards and best practices. Collaborate with QA engineers to implement and maintain automated testing procedures, including API testing. Work in an Agile environment, participating in sprint planning, daily stand-ups, and retrospective meetings to ensure timely and iterative project delivery. Stay abreast of industry trends and emerging technologies to continuously improve skills and contribute innovative ideas. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 3-8 years of professional experience in data engineering or a related field. Profound expertise in SQL, T-SQL, database design, and data warehousing principles. Strong experience with Microsoft Azure tools including MS Fabric, SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake. Proficient in Python, PySpark, and PySQL for data processing and analytics tasks. Experience with Power BI and other reporting and analytics tools. Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimization in database and query processing. Knowledge of .NET frameworks is highly preferred. Excellent problem-solving, analytical, and communication skills. Interested candidates can share their resumes at megha.chattopadhyay@aspiresys.com
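Query performance tuning, which this role emphasizes, usually starts with inspecting the physical plan. A small sketch of that workflow in PySpark, with a placeholder table path; the column pruning and early filtering shown are general practice, not specifics from the listing:

```python
# Query-tuning sketch: narrow the scan with column pruning and early filters,
# then inspect the physical plan before running at scale. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning").getOrCreate()

orders = spark.read.parquet("/mnt/curated/orders/")

# Selecting only needed columns and filtering early lets Spark push the
# predicate down to the Parquet scan
q = (orders.select("order_id", "order_date", "amount")
           .filter(F.col("order_date") >= "2024-01-01")
           .groupBy("order_date")
           .agg(F.sum("amount").alias("revenue")))

# The formatted plan shows whether filters were pushed down and where
# shuffles occur - the usual first checks in performance tuning
q.explain(mode="formatted")
```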
Posted 1 month ago
6 - 8 years
8 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Skill Set - Azure Data Engineer (ADF, ADB), Python Work Type - Contract on third party payroll, Hybrid Location - Bangalore, Chennai, Hyderabad Notice Period - Immediate Joiner
Posted 1 month ago
The job market for ADF (Application Development Framework) professionals in India is witnessing significant growth, with numerous opportunities available for job seekers in this field. ADF is a popular framework used for building enterprise applications, and companies across various industries are actively looking for skilled professionals to join their teams.
Here are 5 major cities in India where there is a high demand for ADF professionals:
- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai
The estimated salary range for ADF professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.
In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.
Here are sample interview questions for ADF roles, categorized by difficulty level:
- Basic:
  - What is ADF and what are its key features?
  - What is the difference between ADF Faces and ADF Task Flows?
- Medium:
  - Explain the lifecycle of an ADF application.
  - How do you handle exceptions in ADF applications?
- Advanced:
  - Discuss the advantages of using ADF Business Components.
  - How would you optimize performance in an ADF application?
As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!