
795 ADF Jobs - Page 17

JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

16 - 18 Lacs

Noida

On-site


Software Engineer 2 / Senior Software Engineer

Job Responsibilities:

Requirements Gathering & Data Analysis (~15%)
- Uncover Customer Needs: Actively gather customer requirements and analyze user needs to ensure software development aligns with real-world problems.
- Transform Needs into Action: Translate these requirements into clear and actionable software development tasks.
- Deep Collaboration: Collaborate daily with stakeholders across the project, including internal and external teams, to gain a comprehensive understanding of business objectives.

Building the Foundation: System Architecture (~10%)
- Prototype & Analyze: Develop iterative prototypes while analyzing upstream data sources to ensure the solution aligns with business needs.
- Evaluate & Validate: Assess design alternatives and technical feasibility, and build proofs of concept to gather early user feedback and choose the most effective approach.
- Design for Scale: Craft a robust, scalable, and efficient database schema, documenting all architectural dependencies for future reference.
- Optimize Implementation: Translate functional specifications into software design by developing algorithms for optimal performance.

Write Clean, Well-Documented, and Efficient Code (~55%)
Technologies: Microsoft Fabric, Azure Synapse, Azure Data Explorer and other Azure services, Power BI, Machine Learning, Power Apps, Dynamics 365, HTML5, and React.
- Azure Data Platform Specialist: Develop, maintain, and enhance data pipelines using Azure Data Factory (ADF) to streamline data flow. Analyze data models in Azure Analysis Services for deeper insights. Leverage the processing muscle of Azure Databricks for complex data transformations.
- Data Visualization Wizard: Craft compelling reports, dashboards, and analytical models using BI tools like Power BI to transform raw data into actionable insights.
- AI & Machine Learning Powerhouse: Craft and maintain cutting-edge machine learning models using Python to uncover hidden insights in data, predict future trends, and integrate with powerful Large Language Models (LLMs) to unlock new possibilities.
- Full-Stack Rockstar: Build beautiful and interactive user interfaces (UIs) with the latest front-end frameworks like React, and craft powerful back-end code based on system specifications.
- Level Up Your Coding with Cutting-Edge AI: Write code faster and smarter with AI-powered copilots that suggest code completions and help you learn the latest technologies.
- Quality Champion: Implement unit testing to ensure code quality and functionality. Utilize the latest frameworks and libraries to develop and maintain web applications that are efficient and reliable.
- Data-Driven Decisions: Analyze reports generated from various tools to identify trends and incorporate those findings into ongoing development for continuous improvement.
- Collaborative Code Craftsmanship: Foster a culture of code excellence through peer and external code reviews facilitated by Git and Azure DevOps.
- Automation Advocate: Automate daily builds for efficient verification and customer feedback, ensuring a smooth development process.
- Ensuring Seamless User Experience: Bridge the gap between defined requirements, business logic implemented in the database, and user experience to ensure users can easily interact with the data.
- Proactive Problem Solver: Proactively debug, monitor, and troubleshoot solutions to maintain optimal performance and a positive user experience.
Quality Control and Assurance (~10%)
- Code Excellence: Ensure code quality aligns with industry standards, best practices, and automated quality tools for maintainable and efficient development.
- Proactive Debugging: Continuously monitor, debug, and troubleshoot solutions to maintain optimal performance and reliability.
- End-to-End & Automated Testing: Implement automated testing frameworks to streamline testing, enhance coverage, and improve efficiency. Conduct comprehensive manual and automated tests across all stages of development to validate functionality, security, and user experience.
- AI-Powered Testing: Leverage AI-driven testing tools for intelligent test case generation.
- Collaborative Code Reviews: Foster a culture of excellence by conducting peer and external code reviews to enhance code quality and maintainability.
- Seamless Deployment: Oversee the deployment process, ensuring successful implementation and validation of live solutions.

Continuous Learning & Skill Development (~10%)
- Community & Training: Sharpen your skills by actively participating in technical learning communities and internal training programs.
- Industry Certifications: Earn industry-recognized certifications to stay ahead of the curve in in-demand technologies like data analysis, Azure development, data engineering, AI engineering, and data science (as applicable).
- Online Learning Platforms: Expand your skill set through online courses offered by platforms like Microsoft Learn, Coursera, edX, Udemy, and Pluralsight.

Candidate Profile
- Eligible branches: B.Tech./B.E. (CSE/IT), M.Tech./M.E. (CSE/IT)
- Eligibility criteria: 60% or above (or equivalent) in Computer Science/Information Technology; 2 to 6 years of software development experience

Why Consider MAQ Software?
- Make an Impact: Work on complex projects for industry leaders like Microsoft and other Fortune 500 companies, using the latest software platforms such as Microsoft Fabric, Azure Synapse, Power BI, and a range of Microsoft Azure services.
- Rapid Project Delivery: Gain experience across the entire software development lifecycle by delivering 4-6 projects per year, ensuring a fast-paced and rewarding experience.
- Agile & Efficient: Adopt the latest software engineering techniques, including Agile and Lean methodologies, to contribute effectively and reach your full potential.
- Continuous Learning for Long-Term Growth: Our comprehensive training program, hands-on exposure to cutting-edge technologies, access to industry-leading resources, and structured upskilling programs support your long-term career growth and help you adapt to the ever-evolving technology landscape.

Location: Hyderabad or Noida
How to Apply: Send your resume to ojaswiy
Job Types: Full-time, Permanent
Pay: ₹1,600,000.00 - ₹1,800,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
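The responsibilities above lean heavily on ADF pipelines feeding Azure Databricks transformations that Power BI reports sit on. As a rough, illustrative sketch of such a transformation step (not MAQ Software's actual code), here is a minimal PySpark example; the mount paths, table layout, and column names are invented, and the Delta write assumes a Databricks/Delta-enabled cluster.

```python
# Minimal PySpark sketch of a Databricks-style transformation step.
# Paths and column names (sales_raw, region, amount, order_id) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales_aggregation_sketch").getOrCreate()

# Read raw data that an upstream ADF copy activity might have landed as Parquet.
raw = spark.read.parquet("/mnt/landing/sales_raw")

# Clean and aggregate: drop malformed rows, then total sales per region per day.
daily_by_region = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("region", "order_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"))
)

# Write a curated Delta table that a Power BI dataset could sit on top of.
daily_by_region.write.mode("overwrite").format("delta").save(
    "/mnt/curated/daily_sales_by_region")
```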

Posted 2 weeks ago

Apply

0 years

1 - 3 Lacs

Ahmedabad

On-site


Experience: 1-2 years
Role: Fault diagnosis and parts replacement for all types of printers
Salary: 18,000-20,000
Qualification: Graduate/Diploma (N/A)

Screening questions:
- How many types of printers are there?
- What is the difference between a DMP and a laser printer?
- What is the difference between a laser printer and an MFP printer?
- What is the difference between 32-column and 80-column printers?
- What is the difference between a laser scanner and an ADF scanner?
- What is the diagnosis if paper jams in the printer?
- Which part is at fault if printing is blurred?
- Which part is at fault if the printer produces blank prints?
- Which part is at fault if the printer is not picking up paper?
- If the printer does not print even after a print command is sent from the system, what are the possible problems?

Posted 2 weeks ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description

Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune code performance and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected:
- Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
- Configure: Define and govern the configuration management plan. Ensure compliance from the team.
- Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
- Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
- Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
- Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
- Estimate: Create and provide input for effort and size estimation and plan resources for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
- Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
- Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
- Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Skills: Scala, Python, PySpark
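Since the skill list above calls out SQL analytics and windowing functions alongside PySpark, here is a minimal, illustrative PySpark sketch of the same idea (ranking and running totals per key). The sample data and column names are invented.

```python
# Sketch of windowing-function style analytics with the PySpark DataFrame API.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("windowing_sketch").getOrCreate()

events = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-01-05", 80.0), ("c2", "2024-01-02", 50.0)],
    ["customer_id", "event_date", "amount"],
)

# Rank each customer's events by date and keep a running total per customer.
w = Window.partitionBy("customer_id").orderBy("event_date")
ranked = (events
          .withColumn("event_rank", F.row_number().over(w))
          .withColumn("running_amount", F.sum("amount").over(w)))

ranked.show()
```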

Posted 2 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


Responsible for data modelling, design, and development of batch and real-time extract, load, transform (ELT) processes, and for setting up the data integration framework, ensuring best practices are followed during integration development.
Education: Bachelor's degree in CS/IT or a related field (minimum)
Skills: Azure Data Engineer (ADF, ADLS, MS Fabric), Databricks, Azure DevOps, Confluence

Posted 2 weeks ago

Apply

10.0 - 17.0 years

12 - 19 Lacs

Chennai, Bengaluru

Work from Office


Job Purpose:
We are seeking an experienced ADF Technical Architect with 10 to 17 years of proven expertise in data lakes, lakehouse architectures, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and data warehousing. Prior experience as a technical architect, technical lead, senior data engineer, or a similar role is required, along with strong communication skills.

Key Responsibilities:
- Participate in data strategy and roadmap exercises, data architecture definition, business intelligence/data warehouse solution and platform selection, design, blueprinting, and implementation.
- Lead other team members and provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery.
- Work on RFPs and RFQs.
- Work through all stages of a data solution life cycle: analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics solutions.
- Lead source-to-target mapping, define interface processes and standards, and implement the standards.
- Perform root cause analysis and develop data remediation solutions.
- Develop and implement proactive monitoring and alert mechanisms for data issues.
- Collaborate with other workstream leads to ensure overall development stays in sync.
- Identify risks and opportunities related to potential logic and data issues within the data environment.
- Guide, influence, and mentor junior members of the team.
- Collaborate effectively with the onsite-offshore team and ensure day-to-day deliverables are met.

Qualifications & Key Skills Required:
- Bachelor's degree and 10+ years of experience in related data and analytics areas.
- Demonstrated knowledge of modern data solutions such as Azure Data Fabric, Synapse Analytics, lakehouses, and data lakes.
- Strong source-to-target mapping experience and ETL principles/knowledge.
- Prior experience as a technical architect, technical lead, senior data engineer, or similar role.
- Excellent verbal and written communication skills.
- Strong quantitative and analytical skills with accuracy and attention to detail.
- Ability to work well independently with minimal supervision and to manage multiple priorities.
- Proven experience with Azure, AWS, GCP, OCI, and other modern technology platforms.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Analyst (Snowflake)
Job ID: POS-9943
Primary Skill: SQL
Secondary Skills: Snowflake and ADF
Location: Hyderabad
Mode of Work: Work from Office
Experience: 5-7 years

About The Job:
Are you someone with an in-depth understanding of ETL and a strong background in developing Snowflake and ADF ETL-based solutions, who can develop, document, unit test, and maintain ETL applications and deliver successful code meeting customer expectations? If yes, this opportunity can be the next step in your career. Read on.
We are looking for a Snowflake and ADF developer to join our Data Leverage team, a team of high-energy individuals who thrive in a rapid-pace and agile product development environment. As a developer, you will provide accountability in the ETL and data integration space, from the development phase through delivery. You will work closely with the Project Manager, Technical Lead, and client teams. Your prime responsibility will be to develop bug-free code with proper unit testing and documentation. You will provide inputs to planning, estimation, scheduling, and coordination of technical activities related to ETL-based applications. You will be responsible for meeting development schedules and delivering high-quality ETL-based solutions that meet technical specifications and design requirements, ensuring customer satisfaction. You are expected to possess good knowledge of Snowflake and ADF.

Know Your Team:
At ValueMomentum's Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development, leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects.

Responsibilities:
- Develop modern data warehouse solutions using Snowflake and ADF.
- Provide solutions that are forward-thinking in the data engineering and analytics space.
- Good understanding of star and snowflake dimensional modeling.
- Good knowledge of Snowflake security, Snowflake SQL, and designing other Snowflake objects.
- Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, cloning, the optimizer, data sharing, stored procedures, and UDFs.
- Good understanding of Databricks Data and Databricks Delta Lake architecture.
- Experience with Azure Data Factory (ADF) to design, implement, and manage complex data integration and transformation workflows.
- Good understanding of SDLC and Agile methodologies.
- Strong problem-solving and analytical skills, with proven strength in applying root-cause analysis.
- Ability to communicate verbally and in technical writing to all levels of the organization in a proactive, contextually appropriate manner.
- Strong teamwork and interpersonal skills at all levels.
- Dedicated to excellence in one's work; strong organizational skills; detail-oriented and thorough.
- Hands-on experience in support activities; able to create and resolve tickets in Jira, ServiceNow, and Azure DevOps.

Requirements:
- Strong experience in Snowflake and ADF.
- Experience working in an onsite/offshore model.
- 5+ years of experience in Snowflake and ADF development.

About The Company:
Headquartered in New Jersey, US, ValueMomentum is the largest standalone provider of IT services and solutions to insurers. Our industry focus, expertise in technology backed by R&D, and our customer-first approach uniquely position us to deliver the value we promise and drive momentum to our customers' initiatives. ValueMomentum is among the top 10 insurance-focused IT services firms in North America by number of customers. Leading insurance firms trust ValueMomentum with their Digital, Data, Core, and IT Transformation initiatives.

Benefits:
We at ValueMomentum offer you a congenial environment to work and grow in the company of experienced professionals. Some benefits available to you are:
- Competitive compensation package.
- Career advancement: individual career development, coaching, and mentoring programs for professional and leadership skill development.
- Comprehensive training and certification programs.
- Performance management: goal setting, continuous feedback, and year-end appraisal; reward and recognition for extraordinary performers.
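The responsibilities above mention Snowflake Streams and Tasks for incremental loading. The snippet below is a minimal, hedged sketch of that pattern using the Snowflake Python connector; the account, credentials, table names, and MERGE columns are all placeholders, not part of the actual role.

```python
# Sketch: capture changes on a raw table with a Stream and drain it with a Task.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="ETL_USER", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# A stream records inserts/updates on the raw table since the last consumption.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# A task drains the stream on a schedule and merges changes into the curated table.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    AS
      MERGE INTO curated_orders t
      USING orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.status = s.status
      WHEN NOT MATCHED THEN INSERT (order_id, amount, status)
                            VALUES (s.order_id, s.amount, s.status)
""")
cur.execute("ALTER TASK merge_orders_task RESUME")
conn.close()
```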

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one domain related to data, with a solid understanding of SCD concepts and data warehousing principles.

Outcomes:
- Collaborate closely with data analysts, data scientists, and other stakeholders to ensure data accessibility, quality, and security across various data sources.
- Design, develop, and maintain data pipelines that collect, process, and transform large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to facilitate efficient data movement and transformation.
- Integrate data from multiple sources, including databases, APIs, cloud services, and third-party data providers.
- Establish data quality checks and validation procedures to ensure data accuracy, completeness, and consistency.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues

Outputs Expected:
- Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements.
- Documentation: Create documentation for personal work and review deliverable documents, including source-target mappings, test cases, and results.
- Configuration: Follow configuration processes diligently.
- Testing: Create and conduct unit tests for data pipelines and transformations to ensure data quality and correctness. Validate the accuracy and performance of data processes.
- Domain Relevance: Develop features and components with a solid understanding of the business problems being addressed for the client. Understand data schemas in relation to domain-specific contexts, such as EDI formats.
- Defect Management: Raise, fix, and retest defects in accordance with project standards.
- Estimation: Estimate time, effort, and resource dependencies for personal work.
- Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
- Design Understanding: Understand design and low-level design (LLD) and link it to requirements and user stories.
- Certifications: Obtain relevant technology certifications to enhance skills and knowledge.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning of data processes.
- Proficiency in querying data warehouses.

Knowledge Examples:
- Knowledge of various ETL services provided by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF/ADLF.
- Understanding of data warehousing principles and practices.
- Proficiency in SQL for analytics, including windowing functions.
- Familiarity with data schemas and models.
- Understanding of domain-related data and its implications.

Additional Comments

Responsibilities:
- Design, develop, and maintain data pipelines and architectures using Azure services.
- Collaborate with data scientists and analysts to meet data needs.
- Optimize data systems for performance and reliability.
- Monitor and troubleshoot data storage and processing issues.
- Ensure data security and compliance with company policies.
- Document data solutions and architecture for future reference.
- Stay updated with Azure data engineering best practices and tools.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in Azure Data Factory, Azure SQL Database, and Azure Databricks.
- Experience with data modeling and ETL processes.
- Strong understanding of database management and data warehousing concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Skills: Azure Data Factory, Azure SQL Database, Azure Databricks, ETL, Data Modeling, SQL, Python, Big Data Technologies, Data Warehousing, Azure DevOps, Azure, AWS, AWS Cloud, Azure Cloud
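The outcomes above include establishing data quality checks and validation procedures. The following is a small, illustrative PySpark sketch of such checks; the table path, column names, thresholds, and failure handling are assumptions for demonstration only.

```python
# Rough sketch of lightweight data-quality checks on a curated table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()
df = spark.read.parquet("/mnt/curated/customers")   # hypothetical path

total = df.count()
checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_customer_id": total - df.dropDuplicates(["customer_id"]).count(),
    "negative_balance": df.filter(F.col("balance") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # In a real pipeline this might write to a quality table or page on-call.
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All {len(checks)} checks passed on {total} rows.")
```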

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary:
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: Senior Associate - Azure Data Engineer
PwC India is seeking a talented Azure Data Engineer to join our team in Mumbai or Gurgaon. The ideal candidate will have 2-5 years of experience with a strong focus on Azure services, data engineering, and analytics. This role offers an exciting opportunity to work on cutting-edge projects for global clients while leveraging your expertise in cloud technologies and data management.

Responsibilities:
- Azure Service Implementation: Design, develop, and maintain data solutions using Azure services, with a particular focus on ADF, Azure Databricks, ADLS Gen 2, Azure Functions, Azure Repos, Azure Monitor, and Synapse. Implement and optimize data lakes and data warehouses on Azure platforms.
- Data Pipeline Development: Create and maintain efficient ETL processes using PySpark and other relevant tools. Develop scalable and performant data pipelines to process large volumes of data. Implement data quality checks and monitoring systems to ensure data integrity.
- Database Management: Work proficiently with SQL and NoSQL databases, optimizing queries and database structures for performance. Design and implement database schemas that align with business requirements and data models.
- Performance Optimization: Continuously monitor and optimize the performance of data processing jobs and queries. Implement best practices for cost optimization in AWS environments. Troubleshoot and resolve performance bottlenecks in data pipelines and analytics processes.
- Collaboration and Documentation: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Develop and maintain comprehensive documentation for data architectures, processes, and best practices. Participate in code reviews and contribute to the team's knowledge base.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 2-5 years of experience in data engineering, with a focus on Azure technologies.
- Strong hands-on experience with Azure services, particularly ADF, Azure Databricks, ADLS Gen 2, Azure Functions, Azure Repos, Azure Monitor, and Synapse.
- Proficiency in Python and PySpark for data processing and analysis.
- Very strong SQL/PL SQL skills.
- Demonstrated ability to optimize data pipelines and queries for performance.
- Strong problem-solving skills and attention to detail.

Preferred Skills:
- Azure certifications.
- Familiarity with data visualization tools (e.g., Tableau, Power BI).
- Experience with data modeling and data warehouse concepts.
- Innovative thinking and creativity in solution delivery.

Mandatory Skill Sets: Azure Services/Python/SQL
Preferred Skill Sets: Azure Services/Python/SQL
Years of Experience Required: 2-5 years
Education Qualification: BE/BTech/MBA/MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred: (not specified)
Certifications: (not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: (not specified)
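Given the stack named above (PySpark, ADLS Gen 2, Databricks), here is a minimal sketch of a PySpark step reading from and writing to ADLS Gen 2 over abfss://. The storage account, containers, and columns are placeholders, and cluster authentication (for example a service principal or managed identity) is assumed to be configured separately.

```python
# Sketch: simple ETL step over ADLS Gen 2 paths with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls_etl_sketch").getOrCreate()

source_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/transactions/2024/"
txns = spark.read.option("header", True).csv(source_path)

# Basic cleansing before loading into a curated zone.
cleaned = (txns.dropna(subset=["transaction_id"])
               .withColumn("amount", F.col("amount").cast("double")))

cleaned.write.mode("append").parquet(
    "abfss://curated@examplestorageacct.dfs.core.windows.net/transactions/")
```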

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Tamil Nadu, India

On-site


Senior Data Engineer - DBT and Snowflake
Years of Experience: 5
Job Location: Chennai

Role Description:
This data engineering role involves creating and managing the technological infrastructure of a data platform: architecting, building, and managing data flows/pipelines, constructing data storage (NoSQL, SQL), working with big data tools (Hadoop, Kafka), and using integration tools to connect sources and other databases. Should hold a minimum of 5 years of experience in DBT and Snowflake.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications.
- Translate business requirement documents, functional specifications, and technical specifications into related coding.
- Develop efficient code with unit testing and code documentation.

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.).
- Knowledgeable in Shell/PowerShell scripting.
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores.
- Knowledgeable in performance tuning and optimization.
- Experience in data profiling and data validation.
- Experience in requirements gathering and documentation processes and in performing unit testing.
- Understanding and implementing QA and various testing processes in the project.

Additional Requirement:
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake, ensuring effective transformation and loading of data from diverse sources into the data warehouse or data lake.
- Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs.
- Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
- Establish DBT best practices to improve performance, scalability, and reliability.
- Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures.
- Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP).
- Migrate legacy transformation code into modular DBT data models.

#SeniorDataEngineer #DBTDeveloper #SnowflakeDeveloper #DBTJobs #SnowflakeJobs #ModernDataStack #SrDataEngineering #ETLDeveloper #DataTransformation #SQL #Python #Airflow #Azure #AWS #GCP #Fivetran #Databricks #ADF #Glue #CloudData
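The requirements above highlight slowly changing dimensions and DBT/Snowflake transformations. As a hedged illustration only, the sketch below shows a simple SCD Type 2 load expressed as plain SQL run through the Snowflake Python connector; in a DBT project this would more likely live in a snapshot or model. All connection details, tables, and columns are placeholders.

```python
# Sketch: SCD Type 2 load in two steps (expire changed rows, insert new versions).
import snowflake.connector

conn = snowflake.connector.connect(account="xy12345", user="ETL_USER",
                                    password="***", warehouse="ETL_WH",
                                    database="ANALYTICS", schema="CORE")
cur = conn.cursor()

# 1) Expire current dimension rows whose attributes changed in the staging data.
cur.execute("""
    UPDATE dim_customer d
       SET d.valid_to = CURRENT_TIMESTAMP(), d.is_current = FALSE
      FROM stg_customer s
     WHERE d.customer_id = s.customer_id
       AND d.is_current = TRUE
       AND (d.email <> s.email OR d.segment <> s.segment)
""")

# 2) Insert a new current version for new or just-expired customers.
cur.execute("""
    INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
      FROM stg_customer s
      LEFT JOIN dim_customer d
        ON d.customer_id = s.customer_id AND d.is_current = TRUE
     WHERE d.customer_id IS NULL
""")
conn.close()
```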

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company
Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The role involves working with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities:
- Develop and maintain robust Python-based backend services and RESTful APIs to support machine learning models in production.
- Deploy and manage containerized applications using Docker and orchestrate them using Azure Kubernetes Service (AKS).
- Implement and manage ML pipelines using MLflow for model tracking, reproducibility, and deployment.
- Design, schedule, and maintain automated workflows using Apache Airflow to orchestrate data and ML pipelines.
- Collaborate with data scientists to productize NLP models, with a focus on language models, embeddings, and text preprocessing techniques (e.g., tokenization, lemmatization, vectorization).
- Ensure high code quality and version control using Git; manage CI/CD pipelines for reliable deployment.
- Handle unstructured text data and build scalable backend infrastructure for inference and retraining workflows.
- Participate in system design and architecture reviews for scalable and maintainable machine learning services.
- Proactively monitor, debug, and optimize ML applications in production environments.
- Communicate technical solutions and project status clearly to team leads and product stakeholders.

Qualifications:
- Minimum relevant experience: 5 years; maximum relevant experience: 9 years.
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; a Master's degree from an accredited college or university is preferred. Or equivalent work experience.

Required Skills:
- Proficiency in Python and frameworks like FastAPI or Flask for building APIs.
- Solid hands-on experience with Docker, Kubernetes (AKS), and deploying production-grade applications.
- Familiarity with MLflow, including model packaging, logging, and deployment.
- Experience with Apache Airflow for orchestrating ETL and ML workflows.
- Understanding of NLP pipelines, language models (e.g., BERT, GPT variants), and associated libraries (e.g., spaCy, Hugging Face Transformers).
- Exposure to cloud environments, preferably Azure.
- Strong debugging, testing, and optimization skills for scalable systems.
- Experience working with large datasets and unstructured data, especially text.

Preferred Skills:
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable Azure-based data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Understanding of Node.js is a plus, but not required.
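Since the role above centres on MLflow-based model tracking, here is a minimal, generic sketch of that workflow with scikit-learn; the experiment name and hyperparameters are arbitrary and not specific to Marcel.

```python
# Sketch: train a small model and log params, metrics, and the artifact to MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```

Runs logged this way can then be compared in the MLflow UI, and the logged model artifact can be served or registered for deployment.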

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi, India

On-site


Location: Delhi/Chennai
Required Technical Skill Set: Azure Data Factory, Databricks, SQL, Snowflake

Desired Competencies (Technical/Behavioral Competency)

Must-Have:
- Hands-on experience developing and operating ETL/ELT pipelines with Azure Data Factory, Azure Databricks (Spark), Azure Data Lake, Synapse, and Azure SQL Database.
- Good understanding of SQL and data warehousing concepts.

Good-to-Have:
- Comfortable working with Git and PowerShell/Bash.
- Experience with Power BI and Microsoft Azure operational and monitoring tools, including Azure Monitor, App Insights, and Log Analytics, is an added advantage.

Responsibility of / Expectations from the Role:
1. Create data pipelines using ADF/ADB for ETL.
2. Hands-on experience in Python, Scala, or PySpark (mandatory skill) to design pipelines and data processing.
3. Analyze data issues and pipeline failures.
4. Provide data/code fixes for data issues and pipeline failures observed in the system.
5. Ensure various data pipelines run smoothly in production; monitor pipelines running in the production environment and raise alarms.
6. Good understanding of data reliability; hands-on experience driving data quality initiatives.
7. Drive continuous improvement initiatives that enable TCS to deliver services better, cheaper, and faster.
8. Strong analytical skills to troubleshoot and optimize data workflows.
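Item 5 above asks for monitoring production pipelines and raising alarms. A rough sketch of one way to do that with the Azure SDK for Python is shown below; the subscription, resource group, and factory names are placeholders, and a real alert would go to Azure Monitor or a paging tool rather than stdout.

```python
# Sketch: list failed ADF pipeline runs from the last 24 hours and flag them.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
filter_params = RunFilterParameters(last_updated_after=now - timedelta(hours=24),
                                    last_updated_before=now)

runs = client.pipeline_runs.query_by_factory("my-resource-group", "my-data-factory",
                                             filter_params)

failed = [r for r in runs.value if r.status == "Failed"]
for run in failed:
    print(f"ALERT: pipeline {run.pipeline_name} failed (run id {run.run_id})")
```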

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company
Re:Sources is the backbone of Publicis Groupe, the world's third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities:
- Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
- Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
- Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
- Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
- Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
- Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
- Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
- Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
- Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
- Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications:
- Minimum relevant experience: 3 years; maximum relevant experience: 5 years.

Required Skills:
- Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
- Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
- Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
- Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
- Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
- Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.).
- Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL).
- Experience building data science models for use in front-end, user-facing applications, such as recommendation models.
- Experience with REST APIs, JSON, and streaming datasets.
- Understanding of graph data; Neo4j is a plus.
- Strong understanding of RDBMS data structures, Azure Tables, Blob storage, and other data sources.
- Understanding of Jenkins and CI/CD processes using Git for cloud configs and standard code repositories such as ADF configs and Databricks.

Preferred Skills:
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; a Master's degree from an accredited college or university is preferred. Or equivalent work experience.
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases, preferably graph databases.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
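The responsibilities above include semantic similarity with SBERT-style embeddings. Below is a small, illustrative sketch using the sentence-transformers library; the model choice and example texts are arbitrary and unrelated to the actual Marcel data.

```python
# Sketch: embed a query and candidate documents, then rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a commonly used small SBERT model

queries = ["find a creative director with automotive experience"]
documents = [
    "Creative director, 10 years across automotive and retail campaigns",
    "Backend engineer focused on payment systems",
]

q_emb = model.encode(queries, convert_to_tensor=True)
d_emb = model.encode(documents, convert_to_tensor=True)

# Cosine similarity between the query and every document.
scores = util.cos_sim(q_emb, d_emb)
best = scores.argmax().item()
print(f"Best match ({scores[0][best].item():.2f}): {documents[best]}")
```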

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Forsys
Forsys Inc. is a leader in Lead-to-Revenue transformation, combining strategy, technology, and business transformation to drive growth. With a team of over 600 professionals spread across the US, India, UK, Colombia, and Brazil, and headquartered in the Bay Area, Forsys epitomizes innovation and excellence. Our role as an implementation partner for major vendors like Conga, Salesforce, and Oracle, and as an incubator for pioneering ideas and solutions, positions us uniquely in the consulting industry. We are dedicated to unlocking new revenue streams for our clients and fostering a culture of innovation. Discover our vision and the impact we're making at forsysinc.com.

Required Skills/Experience:
- Oracle Fusion Financials functional experience with Oracle modules: Accounts Payable, Accounts Receivable, Fixed Assets, Cash Management, Oracle Treasury, General Ledger, and FCC.
- At least 5+ years' experience in an Oracle EBS Financials role.
- At least 2+ years' experience in an Oracle Fusion Financials role.
- At least 5+ years of experience in an Oracle Finance functional role.
- Strong knowledge of related application configurations and processes.
- Knowledge of project and software development life cycle methodologies.
- Exposure to or certification in a Cloud module preferred.
- Good exposure to iExpenses and iProcurement.
- At least 2 full lifecycle implementations/upgrades on Oracle Cloud ERP, including the following phases: requirements gathering, fit/gap analysis, functional design documentation, user acceptance testing, training, and deployment activities.
- Extensive experience in end-to-end implementation on Oracle Financials Cloud. Modules: Accounts Payable, Accounts Receivable, Fixed Assets, General Ledger, Cash Management, Purchasing, FCC.
- Good exposure to reporting tools (BI Publisher, OTBI, Financial Reporting Center, Smart View).
- Knowledge of SLA (Subledger Accounting) rules to replace traditional transaction codes for generating multiple accounting representations for one business transaction.
- Exposure to integration through FBDI, Web Services, and ADF Desktop Integration.
- Good exposure to accounting knowledge.

Posted 2 weeks ago

Apply

12.0 - 15.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security, and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in ECI. At ECI, we believe success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

The Opportunity:
ECI has an exciting opportunity for an experienced Data Architect who will work with our clients in building robust data-centric applications. Client satisfaction is our primary objective; all available positions are customer-facing, requiring excellent communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role with work timings of 1 PM IST to 10 PM IST / 2 PM IST to 11 PM IST.

What you will do:
- Design and develop data architecture for large enterprise applications.
- Build and demonstrate quick POCs.
- Review the customer environment for master data processes and help with the overall data solution and governance model.
- Work closely with business and IT stakeholders to understand master data requirements and current constraints.
- Mentor junior resources technically.
- Set industry standards with your own work.

Who you are:
- 12 to 15 years of experience as a Data Architect.
- Hands-on experience in full life cycle Master Data Management.
- Hands-on experience with ADF, Azure Purview, Databricks, and Azure Fabric services.
- Have led data architecture roadmaps, defined business cases, and delivered implementations for clients.
- Experience in leading, evaluating, and designing data architecture based on the overall enterprise data strategy/architecture.
- Hands-on experience building cloud-based enterprise data warehouses.
- Experience implementing best practices for data governance, data modeling, and data migrations.
- A good team player.

Bonus points if you have:
- Deep knowledge of Master Data Management (MDM) principles, processes, architectures, protocols, patterns, and technologies.
- Strong knowledge of ETL and data modeling.

ECI's culture is all about connection: connection with our clients, our technology, and most importantly with each other. In addition to working with an amazing team around the world, ECI also offers a competitive compensation package and so much more! If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you!
Love Your Job, Share Your Technology Passion, Create Your Future Here!
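Master data management comes up repeatedly above. Purely as an illustration of the "golden record" idea (not ECI's methodology), here is a tiny pandas sketch that consolidates duplicate customer records; the matching rule (normalized email) and survivorship rule (most recently updated wins) are deliberately simplistic placeholders.

```python
# Sketch: consolidate duplicate customer records into a single golden record per key.
import pandas as pd

customers = pd.DataFrame({
    "source":     ["crm", "billing", "crm"],
    "name":       ["Priya Sharma", "Priya  Sharma", "R. Gupta"],
    "email":      ["priya@example.com", "PRIYA@EXAMPLE.COM", "rgupta@example.com"],
    "updated_at": pd.to_datetime(["2024-03-01", "2024-05-10", "2024-02-11"]),
})

# Normalize the match key, then keep the most recently updated record per key.
customers["match_key"] = customers["email"].str.strip().str.lower()
golden = (customers.sort_values("updated_at")
                   .groupby("match_key", as_index=False)
                   .last())

print(golden[["match_key", "name", "source", "updated_at"]])
```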

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mysore, Karnataka, India

On-site


Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role And Responsibilities
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your Primary Responsibilities Include:
Proficient Software Development with Microsoft Technologies: Demonstrate expertise in software development using Microsoft technologies, ensuring high-quality code and efficient application performance.
Collaborative Problem-Solving and Stakeholder Engagement: Collaborate effectively with stakeholders to understand product requirements and challenges, proactively addressing issues through analytical problem-solving and strategic software solutions.
Agile Learning and Technology Integration: Stay updated with the latest Microsoft technologies, eagerly embracing continuous learning and integrating newfound knowledge to enhance software development processes and product features.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
SQL
ADF
Azure Databricks

Preferred Technical And Professional Experience
PostgreSQL, MSSQL
Eureka, Hystrix, Zuul/API gateway
In-memory storage

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Rajkot, Gujarat, India

On-site


Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming and perform testing and debugging of applications. As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise. May have a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to the functional area. 4 years of software engineering or related experience.

Career Level - IC3

Responsibilities
The Fusion development team works on the design, development and maintenance of the Fusion Global HR, Talent, Configuration Workbench and Compensation product areas. As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, needing independent judgment. Fully competent in own area of expertise. May have a project lead role and/or supervise lower-level personnel.

Bachelor's or Master's degree (B.E./B.Tech./MCA/M.Tech./M.S.) from reputed universities. 1-8 years of experience in applications or product development.

Mandatory Skills
Strong knowledge of object-oriented programming concepts.
Product design and development experience in [Java / J2EE technologies (JSP/Servlet)] OR [Database fundamentals, SQL, PL/SQL].

Optional Skills
Development experience on the Fusion Middleware platform.
Familiarity with ADF and exposure to development in the cloud.
Development experience in Oracle Applications / HCM functionality.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job title: Senior Manager

About The Role:
As a Senior Manager, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
Architecting and designing complex data systems and pipelines.
Leading and mentoring junior data engineers and team members.
Collaborating with cross-functional teams to define data requirements.
Implementing advanced data quality checks and ensuring data integrity.
Optimizing data processes for efficiency and scalability.
Overseeing data security and compliance measures.
Evaluating and recommending new technologies to enhance data infrastructure.
Providing technical expertise and guidance for critical data projects.

Required Skills & Experience:
Proficiency in designing and building complex data pipelines and data processing systems.
Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
Strong expertise in data modeling and database design for optimal performance.
Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
Knowledge of data governance principles, ensuring data quality, security, and compliance.
Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
Expertise in implementing robust data security measures and access controls.
Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills:
Cloud: Azure/GCP/AWS
DE Technologies: ADF, BigQuery, AWS Glue, etc.
Data Lake: Snowflake, Databricks, etc.

Mandatory Skill Sets: Cloud (Azure/GCP/AWS); DE Technologies (ADF, BigQuery, AWS Glue, etc.); Data Lake (Snowflake, Databricks, etc.)
Preferred Skill Sets: Cloud (Azure/GCP/AWS); DE Technologies (ADF, BigQuery, AWS Glue, etc.); Data Lake (Snowflake, Databricks, etc.)
Years Of Experience Required: 10-13 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)

Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation {+ 28 more}

Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
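
For illustration, a minimal PySpark sketch of the kind of data quality check this role calls for, assuming a hypothetical table and key column: count null and duplicate business keys before a curated table is published.

    # Hypothetical data quality gate before publishing a curated table
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()
    df = spark.read.table("curated.orders")  # hypothetical table

    null_keys = df.filter(F.col("order_id").isNull()).count()
    dupes = df.groupBy("order_id").count().filter("count > 1").count()

    # Fail the pipeline run if either check is violated
    if null_keys or dupes:
        raise ValueError(f"DQ failed: {null_keys} null keys, {dupes} duplicate keys")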

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Hyderabad

Hybrid


Key Responsibilities:
Collaborate closely with stakeholders and cross-functional teams to understand business requirements and translate them into technical specifications for data warehouse development.
Design, develop, and maintain scalable, sustainable ETL and SQL Server data warehouse solutions for healthcare payor data management needs.
Develop and optimize ETL processes using strong SQL skills and ADF pipelines. Port existing SSIS integrations to ADF.
Ensure data quality and integrity through the implementation of data governance techniques addressing controls, monitoring, alerting, validation checks, and error handling procedures.
Participate in triaging production issues as well as rotational production support.
Lead developer teams when needed in cross-functional projects.
Conduct daily scrums, code reviews and production turnover verifications when needed, including bridging the gap between offshore developers and onshore leadership.
Mentor and guide offshore team members, fostering a culture of collaboration and continuous learning.

Qualifications:
Bachelor's degree in computer science, information systems, or a related field.
6+ years' experience designing ETL solutions.
Minimum of 5 years of professional experience in healthcare-related areas with functional knowledge of healthcare business capabilities and data for the Enrollments, Members, Authorizations, Claims and Provider functional areas.
Prior experience with Azure Data Factory (ADF) development, SSIS development and porting ETL implementations from SSIS to ADF.
Strong proficiency in SQL (SQL Server, T-SQL) with demonstrated experience in writing complex queries and optimizing database performance.
Excellent communication and collaboration skills to work effectively with business stakeholders and cross-functional teams.
Experience facing business teams in eliciting, developing and refining business requirements.
Prior experience in data modeling and architecture/design, including leading design and code reviews for ETL teams.
Experienced in Azure DevOps, CI/CD and release management practices.
Awareness of best practices in data management, data governance and ensuring defect-free production deployments.

Job Benefits:
Salary: Competitive and among the best in the industry
Health Insurance: Comprehensive coverage for you and your family
Flexible Timings: Work-life balance with adaptable schedules
Team Lunches & Outings: Regular team bonding activities and celebrations
Growth Opportunities: A supportive environment for learning and career advancement
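
For illustration, a minimal sketch of a post-load reconciliation check in the SQL Server / ADF setup described above, using pyodbc; the connection string, schema and table names are hypothetical.

    # Hypothetical row-count reconciliation between a staging table loaded by ADF
    # and the warehouse target, run as a post-load validation step.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=edw;"
        "UID=etl_user;PWD=***"  # hypothetical server and credentials
    )
    cur = conn.cursor()

    cur.execute("SELECT COUNT(*) FROM stg.Claims")
    staged = cur.fetchone()[0]

    cur.execute("SELECT COUNT(*) FROM dw.FactClaims WHERE LoadDate = CAST(GETDATE() AS date)")
    loaded = cur.fetchone()[0]

    if staged != loaded:
        raise RuntimeError(f"Load mismatch: staged={staged}, loaded={loaded}")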

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs
Location: Chennai

What you’ll do
Tame data → pull, clean, and shape structured & unstructured data.
Orchestrate pipelines → Airflow / Step Functions / ADF… your call.
Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
Scale → Spark / Databricks for the heavy lifting.
Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow.
Pair up → work with engineers, architects, and business folks to solve real problems, fast.

What you bring
3+ yrs hands-on MLOps (4-5 yrs total software experience).
Proven chops on one hyperscaler (AWS, Azure, or GCP).
Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
You prototype with open-source first, choose the right tool, then make it scale.
Sharp mind, low ego, bias for action.

Nice-to-haves
SageMaker, Azure ML, or Vertex AI in production.
Love for clean code, clear docs, and crisp PRs.

Why Datadivr?
Domain focus: we live and breathe F&B — your work ships to plants, not just slides.
Small team, big autonomy: no endless layers; you own what you build.

📬 How to apply
Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant.
Know someone perfect? Please share — good people know good people.
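
For illustration, a minimal Apache Airflow sketch of the orchestration pattern mentioned above (assuming Airflow 2.x); the DAG name and task bodies are hypothetical placeholders.

    # Hypothetical daily DAG: extract data, then train and register a model
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # hypothetical: pull raw data to object storage
        ...

    def train():
        # hypothetical: fit a model and log it (e.g. via MLflow)
        ...

    with DAG(
        dag_id="daily_training",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_train = PythonOperator(task_id="train", python_callable=train)
        t_extract >> t_train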

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Thiruvananthapuram

On-site


3 - 5 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers and assists Lead 1 – Software Engineering.

Outcomes:
Understand and provide input to the application/feature/component designs, developing the same in accordance with user stories/requirements.
Code, debug, test, document and communicate product/component/features at development stages.
Select appropriate technical options for development, such as reusing, improving or reconfiguring existing components.
Optimise efficiency, cost and quality by identifying opportunities for automation/process improvements and agile delivery models.
Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to effectively perform in their roles.
Identify problem patterns and improve the technical design of the application/system.
Proactively identify issues/defects/flaws in module/requirement implementation.
Assist Lead 1 – Software Engineering on technical design; review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards)
Adherence to schedule / timelines
Adherence to SLAs where applicable
Number of defects post delivery
Number of non-compliance issues
Reduction of reoccurrence of known defects
Quick turnaround of production bugs
Meeting the defined productivity standards for the project
Number of reusable components created
Completion of applicable technical/domain certifications
Completion of all mandatory training requirements

Outputs Expected:
Code: Develop code independently for the above.
Configure: Implement and monitor the configuration process.
Test: Create and review unit test cases, scenarios and execution.
Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client.
Manage Project: Manage module-level activities.
Manage Defects: Perform defect RCA and mitigation.
Estimate: Estimate time, effort and resource dependence for one's own work and others' work, including modules.
Document: Create documentation for own work as well as perform peer review of documentation of others' work.
Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities.
Status Reporting: Report status of tasks assigned; comply with project-related reporting standards/process.
Release: Execute the release process.
Design: LLD for multiple components.
Mentoring: Mentor juniors on the team; set FAST goals and provide feedback to FAST goals of mentees.

Skill Examples:
Explain and communicate the design / development to the customer.
Perform and evaluate test results against product specifications.
Develop user interfaces, business software components and embedded software components.
Manage and guarantee high levels of cohesion and quality.
Use data models.
Estimate effort and resources required for developing / debugging features / components.
Perform and evaluate tests in the customer or target environment.
Team player.
Good written and verbal communication abilities.
Proactively ask for help and offer help.

Knowledge Examples:
Appropriate software programs / modules
Technical designing
Programming languages
DBMS
Operating systems and software platforms
Integrated development environment (IDE)
Agile methods
Knowledge of customer domain and sub-domain where the problem is solved

Additional Comments:
The resource needs to have sound technical know-how of Azure Databricks, knowledge of SQL querying, and experience managing Databricks notebooks. Hands-on experience with SQL, including SQL constraints and operators, and modifying and querying data from tables.

Skills: Azure Databricks, ADF, SQL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
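
For illustration, a minimal Databricks-notebook-style sketch of the SQL querying and data modification described above; the table and column names are hypothetical, and the UPDATE assumes a Delta table.

    # Hypothetical Spark SQL query and in-place correction on Databricks
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Query: active members loaded in the last 7 days (hypothetical schema)
    recent = spark.sql("""
        SELECT member_id, plan_code, load_date
        FROM curated.members
        WHERE status = 'ACTIVE' AND load_date >= date_sub(current_date(), 7)
    """)
    recent.show(10)

    # Modify: correct a miscoded plan value (works on Delta tables)
    spark.sql("UPDATE curated.members SET plan_code = 'P100' WHERE plan_code = 'P1OO'")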

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site


Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director.

In this role, you will:
Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy.
Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
Develop and optimize complex SQL queries and Python-based data transformation logic.
Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
Ensure data quality, security, and compliance with best practices.
Monitor and troubleshoot performance issues in data pipelines.
Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
12+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
Hands-on experience with Prophecy for data pipeline development.
Proficiency in Python for data processing and transformation.
Experience with Apache Airflow for workflow orchestration.
Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
Solid understanding of data modelling, warehousing, and performance optimization.
Ability to work in an agile environment and manage multiple priorities effectively.
Excellent problem-solving skills and attention to detail.
Experience with Delta Lake and Lakehouse architecture.
Hands-on experience with Terraform or Infrastructure as Code (IaC).
Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
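
For illustration, a minimal PySpark/Delta Lake sketch of the Lakehouse upsert pattern relevant to this role; the paths and join key are hypothetical.

    # Hypothetical Delta Lake MERGE: upsert a daily increment into a target table
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    updates = spark.read.parquet("/mnt/landing/trades_increment")  # hypothetical path
    target = DeltaTable.forPath(spark, "/mnt/lake/trades")         # hypothetical path

    (target.alias("t")
           .merge(updates.alias("u"), "t.trade_id = u.trade_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

Using MERGE rather than blind appends keeps reloads of the same increment idempotent, which simplifies retries in orchestrated pipelines.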

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site


Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy.
Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
Develop and optimize complex SQL queries and Python-based data transformation logic.
Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
Ensure data quality, security, and compliance with best practices.
Monitor and troubleshoot performance issues in data pipelines.
Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
Hands-on experience with Prophecy for data pipeline development.
Proficiency in Python for data processing and transformation.
Experience with Apache Airflow for workflow orchestration.
Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
Solid understanding of data modelling, warehousing, and performance optimization.
Ability to work in an agile environment and manage multiple priorities effectively.
Excellent problem-solving skills and attention to detail.
Experience with Delta Lake and Lakehouse architecture.
Hands-on experience with Terraform or Infrastructure as Code (IaC).
Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
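
For illustration, a minimal sketch of triggering an ADF pipeline run from Python with the azure-mgmt-datafactory SDK, for example as a step in an Azure DevOps release; the subscription, resource group, factory and pipeline names are hypothetical.

    # Hypothetical ADF pipeline trigger and status check
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    run = client.pipelines.create_run(
        resource_group_name="rg-data",        # hypothetical
        factory_name="adf-core",              # hypothetical
        pipeline_name="pl_daily_load",        # hypothetical
        parameters={"run_date": "2024-01-01"},
    )

    status = client.pipeline_runs.get("rg-data", "adf-core", run.run_id).status
    print(run.run_id, status)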

Posted 2 weeks ago

Apply

3.0 years

6 - 8 Lacs

Chennai

Remote


Job Title: Azure Data Engineer
Experience: 3+ years
Location: Remote

Job Description:
3+ years of experience as a Data Engineer with strong Azure expertise
Proficiency in Azure Data Factory (ADF) and Azure Blob Storage
Working knowledge of SQL and data modeling principles
Experience working with REST APIs for data integration
Hands-on experience with Snowflake data warehouse
Exposure to GitHub and Azure DevOps for CI/CD and version control
Understanding of DevOps concepts as applied to data workflows
Azure certification (e.g., DP-203) is highly desirable
Strong problem-solving and communication skills

Speak with Employer:
Mobile Number: 7418488223
Mail Id: ahalya.b@findq.in

Job Types: Full-time, Permanent
Benefits: Health insurance
Schedule: Day shift
Work Location: In person
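
For illustration, a minimal sketch of the REST-API-to-Blob-Storage ingestion this role touches on, using requests and azure-storage-blob; the API URL, container and blob names are hypothetical.

    # Hypothetical: pull JSON from a REST API and land it in Blob Storage for ADF to pick up
    import json
    import requests
    from azure.storage.blob import BlobServiceClient

    resp = requests.get("https://api.example.com/v1/orders", timeout=30)  # hypothetical API
    resp.raise_for_status()

    blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    blob = blob_service.get_blob_client(container="landing", blob="orders/2024-01-01.json")
    blob.upload_blob(json.dumps(resp.json()), overwrite=True)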

Posted 2 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Remote


Lead Data Engineer with Health Care Domain

Role & responsibilities
Position: Lead Data Engineer
Experience: 7+ Years
Location: Hyderabad | Chennai | Remote

Summary:
The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.

Duties and Responsibilities
Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
Monitor active ETL jobs in production.
Build out data lineage artifacts to ensure all current and future systems are properly documented.
Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.

Required Skills
This job has no supervisory responsibilities.
Strong experience with Snowflake and Azure Data Factory (ADF).
Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling or data engineering work.
5+ years of experience with strong SQL query/development skills.
Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
Hands-on experience with ETL tools (e.g. Informatica, Talend, dbt, Azure Data Factory).
Experience working in the healthcare industry with PHI/PII.
Creative, lateral, and critical thinker.
Excellent communicator.
Well-developed interpersonal skills.
Good at prioritizing tasks and time management.
Ability to describe, create and implement new solutions.
Experience with related or complementary open source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef).
Knowledge / hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau).
Big Data stack (e.g. Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
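
For illustration, a minimal sketch of a post-load check against Snowflake using the snowflake-connector-python package; the account, credentials and table are hypothetical.

    # Hypothetical Snowflake quality check after an ETL load
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # hypothetical account locator
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="EDW",
        schema="CURATED",
    )
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM CLAIMS WHERE LOAD_DATE = CURRENT_DATE()")
    print("rows loaded today:", cur.fetchone()[0])
    conn.close()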

Posted 2 weeks ago

Apply

4.0 years

1 - 10 Lacs

Noida

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About UHG
UnitedHealth Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms – UnitedHealthcare (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information and technology-enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.

Primary Responsibilities:
Gather, analyze and document business requirements, while leveraging knowledge of claims, clinical and other healthcare systems.
Develop ETL jobs using Talend, Python, a cloud-based data warehouse, Jenkins, Kafka and an orchestration tool.
Write advanced SQL queries.
Create and interpret functional and technical specifications and design documents.
Understand the business and how various data elements and subject areas are utilized in order to develop and deliver the reports to business.
Be an SME on either the Claims, Member or Provider module.
Provide regular status updates to higher management.
Design, develop, and implement scalable and high-performing data models and solutions using Snowflake and Oracle.
Manage and optimize data replication and ingestion processes using Oracle and Snowflake.
Develop and maintain ETL pipelines using Azure Data Factory (ADF) and Databricks.
Optimize query performance and reduce latency by leveraging pre-aggregated tables and efficient data processing techniques.
Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.
Implement data security measures and ensure compliance with industry standards.
Automate data governance and security controls to maintain data integrity and compliance.
Develop and maintain comprehensive documentation for data architecture, data flows, ETL processes, and configurations.
Continuously optimize the performance of data pipelines and queries to improve efficiency and reduce costs.
Take a basic, structured, standard approach to work.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Bachelor's degree or 4-year university degree.
5+ years of experience.
Experience in developing ETL jobs using Snowflake, ADF, Databricks and Python.
Experience in writing efficient and advanced SQL queries.
Experience in both producing and consuming data utilizing Kafka.
Experience working on large-scale cloud-based data warehouses – Snowflake, Databricks.
Good experience in building data pipelines using ADF.
Knowledge of Agile methodologies, roles, responsibilities and deliverables.
Proficiency in Python for data processing and automation.
Demonstrated ability to learn and adapt to new data technologies.

Preferred Qualifications:
Certified in Azure Data Engineering (AZ-205).
Extensive experience with Azure cloud services (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.).
Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD).
Knowledge of SQL and NoSQL databases.
Proven excellent time management, communication, decision making, and presentation skills.
Proven good problem-solving skills.
Proven good communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
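
For illustration, a minimal sketch of producing and consuming JSON events with the kafka-python package, as in the Kafka requirement above; the broker address, topic and payload are hypothetical.

    # Hypothetical Kafka produce/consume round trip for claim events
    import json
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(
        bootstrap_servers="broker:9092",                    # hypothetical broker
        value_serializer=lambda v: json.dumps(v).encode(),
    )
    producer.send("claims-events", {"claim_id": "C123", "status": "ADJUDICATED"})
    producer.flush()

    consumer = KafkaConsumer(
        "claims-events",
        bootstrap_servers="broker:9092",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode()),
    )
    for message in consumer:
        print(message.value)
        break  # show a single record in this sketch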

Posted 2 weeks ago

Apply

Exploring ADF Jobs in India

The job market for ADF professionals in India is witnessing significant growth, with numerous opportunities available for job seekers in this field. ADF most commonly refers to Oracle's Application Development Framework, a framework for building enterprise applications, although many of the listings above use the same abbreviation for Azure Data Factory; in both cases, companies across various industries are actively looking for skilled professionals to join their teams.

Top Hiring Locations in India

Here are 5 major cities in India where there is a high demand for ADF professionals:
- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai

Average Salary Range

The estimated salary range for ADF professionals in India varies based on experience levels:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Career Path

In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.

Related Skills

In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.

Interview Questions

Here are sample interview questions for ADF roles, categorized by difficulty level:

Basic:
- What is ADF and what are its key features?
- What is the difference between ADF Faces and ADF Task Flows?

Medium:
- Explain the lifecycle of an ADF application.
- How do you handle exceptions in ADF applications?

Advanced:
- Discuss the advantages of using ADF Business Components.
- How would you optimize performance in an ADF application?

Closing Remark

As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!
