3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
We are looking for a technical resource for an Oracle Apps R12 financial-modules-based application. The main responsibilities will be:
- Development activity on the Oracle R12.2 release
- Interact with business users and BAs/SAs to understand requirements
- Prepare technical specification documents
- Develop new interfaces, conversions and reports
- Develop/customize/personalize new and existing Oracle Forms and OAF pages
- Perform impact analysis on possible code changes
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's degree in Computer Science / Engineering
- 3+ years of Oracle EBS (technical) experience with the R12 release
- Development experience in the EBS environment in Reports, Interfaces, Conversions, Extensions, Workflow (RICEW) and Forms deliverables
- Experience in P2P, Oracle General Ledger (GL), Accounts Payable (AP), Receivables (AR), Cash Management (CM), Sub-ledger Accounting (SLA), and System Administrator modules
- Experience in end-user interaction for requirements gathering, understanding customer needs, and working with multiple groups to coordinate and carry out technical activities, including new development, maintenance and production support
- Good knowledge of the R12 financial table structure
- Good knowledge of Agile methodologies
- Good hands-on knowledge of SQL, PL/SQL, Oracle Reports, Oracle Forms, OAF/ADF, BI Publisher reports, shell scripting and Web Services (Integrated SOA Gateway)
- Knowledge of Oracle APEX
- Proven analytical, performance-tuning and debugging skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
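To make the SQL/PL-SQL and R12 financial-table requirements concrete, here is a minimal sketch of querying Payables data from Python. It assumes the python-oracledb driver and the standard R12 tables AP_INVOICES_ALL and AP_SUPPLIERS; the connection details and the 30-day filter are placeholders, not part of the posting.

```python
# Minimal sketch: querying R12 Payables tables with python-oracledb.
# User, password, DSN, and the 30-day window are illustrative assumptions.
import oracledb

conn = oracledb.connect(user="apps", password="<password>", dsn="ebsdb:1521/EBSPROD")

sql = """
    SELECT inv.invoice_num,
           inv.invoice_amount,
           inv.invoice_currency_code,
           sup.vendor_name
    FROM   ap_invoices_all inv
    JOIN   ap_suppliers sup ON sup.vendor_id = inv.vendor_id
    WHERE  inv.creation_date >= SYSDATE - 30
    ORDER  BY inv.creation_date DESC
"""

with conn.cursor() as cur:
    # For SELECT statements, execute() returns the cursor, which is iterable.
    for invoice_num, amount, currency, vendor in cur.execute(sql):
        print(f"{invoice_num}: {amount} {currency} ({vendor})")
conn.close()
```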
Posted 3 weeks ago
8.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Engineer – Azure Data Platform
Location: Padi, Chennai
Job Type: Full-Time

Role Overview:
We are looking for an experienced Data Engineer to join our Azure Data Platform team. The ideal candidate will have a deep understanding of Azure's data engineering and cloud technology stack. This role is pivotal in driving data-driven decision-making, operational analytics, and advanced manufacturing intelligence initiatives.

Key Responsibilities:
- Lead the design and implementation of data architectures that support operational analytics and advanced manufacturing intelligence, ensuring scalability and flexibility to handle increasing data volumes.
- Design, implement, and maintain scalable data and analytics platforms using Microsoft Azure services such as Azure Data Factory (ADF), Azure Data Lake Storage Gen2, and Azure Synapse Analytics.
- Develop and manage ETL processes, data pipelines, and batch jobs to ensure efficient data flow and transformation, optimizing pipeline runs and monitoring compute and storage usage (see the sketch after this listing).
- Implement metadata management solutions that ensure consistent data quality, integrity, and governance.
- Integrate data from key sources such as SAP, SQL Server, cloud databases, IoT devices, and other live streaming feeds into centralized data structures to support analytics and decision-making.
- Provide expertise on data ingestion (SAP, SQL), data transformation, and the automation of data pipelines in a manufacturing context.
- Ensure the data platform supports dashboarding and advanced analytics, enabling business users to independently create and evolve dashboards.
- Implement manufacturing-specific analytics solutions, including leadership and operational dashboards and other analytics solutions across our value chain, leveraging Azure's comprehensive toolset.
- Define and monitor KPIs, ensuring data quality and the accuracy of insights delivered to business stakeholders.
- Identify and manage project risks related to data security, system integration, and scalability.
- Independently maintain the data platform, ensuring its reliability and performance and implementing best practices for data security and compliance.
- Advise the Data Platform project manager and leadership team on best practices for data management and scaling needs, providing guidance on integrating data from IoT and other SaaS platforms, as well as newer systems as they enter the digital landscape.
- Work closely with data scientists to ensure data is available in the required format for their analyses, and collaborate with Power BI developers to support dashboarding and reporting needs.
- Create data marts for business users to facilitate self-service analytics.
- Mentor and train junior engineers, fostering their professional growth and providing guidance on best practices and technical challenges.

Qualifications & Experience:
- Education: Bachelor's degree in Engineering, Computer Science, or a related field.
- Experience: 8-10 years of experience, with a minimum of 5 years on core data engineering responsibilities on a cloud platform. Project management experience is a big plus.
- Proven track record of implementing data-driven solutions in areas such as plant automation, operational analytics, quality control, and supply chain optimization.
- Technical proficiency: expertise in cloud-based data platforms, particularly within the Azure ecosystem (Azure Data Factory, Synapse Analytics, Databricks); familiarity with SAP as a data source.
- Proficiency in programming languages such as SQL, Python, and R for analytics and reporting.
- Soft skills: strong analytical mindset with the ability to translate manufacturing challenges into data-driven insights and solutions; excellent communication and organizational skills.

What We Offer:
- The opportunity to work on transformative data analytics projects that drive innovation and operational excellence in manufacturing.
- A collaborative and dynamic work environment focused on professional growth and career development.
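As an illustration of the pipeline-run orchestration and monitoring this role describes, here is a minimal sketch using the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and pipeline names are hypothetical.

```python
# Minimal sketch: triggering and polling an ADF pipeline run.
# All resource names below are illustrative assumptions.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # assumption: taken from your environment
RESOURCE_GROUP = "rg-data-platform"     # hypothetical
FACTORY_NAME = "adf-manufacturing"      # hypothetical
PIPELINE_NAME = "pl_ingest_sap_orders"  # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Start the pipeline, optionally passing runtime parameters.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-31"},
)

# Poll until the run finishes so failures can be surfaced to monitoring.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status.status}")
```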
Posted 3 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are looking for a skilled DevOps Engineer with solid experience in cloud infrastructure management, CI/CD pipelines, and deployment on Kubernetes. The ideal candidate will have a strong background in Azure cloud platforms, with proficiency in configuring, automating, and optimizing cloud deployments to ensure the scalability, reliability, and security of our systems.

Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps, GitHub, and Jenkins for automated deployments of applications and infrastructure changes.
- Architect and deploy solutions on Kubernetes clusters (EKS and AKS) to support containerized applications and microservices architecture.
- Collaborate with development teams to streamline code deployments, releases, and continuous integration processes across multiple environments.
- Configure and manage Azure services including Azure Synapse Analytics, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and other data services for efficient data processing and analytics workflows.
- Utilize AWS services such as Amazon EMR, Amazon Redshift, Amazon S3, Amazon Aurora, IAM policies, and Azure Monitor for data management, warehousing, and governance.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and management of cloud resources.
- Ensure high availability, performance monitoring, and disaster recovery strategies for cloud-based applications and services.
- Develop and enforce security best practices and compliance policies, including IAM policies, encryption, and access controls across Azure environments.
- Collaborate with cross-functional teams to troubleshoot production issues, conduct root cause analysis, and implement solutions to prevent recurrence.
- Stay current with industry trends, best practices, and evolving technologies in cloud computing, DevOps, and container orchestration.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 5+ years of experience as a DevOps Engineer or in a similar role, with hands-on expertise in AWS and Azure cloud environments.
- Strong proficiency in Azure DevOps, Git, GitHub, Jenkins, and CI/CD pipeline automation.
- Experience deploying and managing Kubernetes clusters (EKS, AKS) and container orchestration platforms.
- Deep understanding of cloud-native architectures, microservices, and serverless computing.
- Familiarity with Azure Synapse, ADF, ADLS, and AWS data services (EMR, Redshift, Glue) for data integration and analytics.
- Solid grasp of infrastructure-as-code (IaC) tools like Terraform, CloudFormation, or ARM templates.
- Experience with monitoring tools (e.g., Prometheus, Grafana) and logging solutions for cloud-based applications.
- Excellent troubleshooting skills and the ability to resolve complex technical issues in production environments.

(ref:hirist.tech)
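As one example of the kind of cluster automation this role involves, the sketch below checks deployment health on a Kubernetes cluster (AKS or EKS) with the official Python client. It assumes a kubeconfig is already in place and is not tied to any particular environment.

```python
# Minimal sketch: flag Kubernetes deployments whose ready replica count
# lags the desired count. Assumes the `kubernetes` package and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready")
```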
Posted 3 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer you will:
- Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights and build comprehensive Risk & Controls monitoring mechanisms.
- Create executive reporting, proposals, and marketing material, ensuring the highest quality deliverables.
- Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data led Risk technology platform.

Skills and Summary of Accountabilities:
- Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
- Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
- Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Good understanding of machine learning, NLP, LLM, deep learning, and generative AI techniques.
- Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems; see the sketch after this listing.
- Ensure adherence to ethical AI guidelines and data governance policies.
- Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
- Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organization remains at the forefront of innovation and maintains a competitive edge in the industry.
- Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
- Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered.
- Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
- Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
- 4+ years of working experience in large-scale AI/ML models and data science.
- Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
- Proficiency in AI/ML programming languages like Python, R, and SQL.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms).
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
- Strong programming skills in Python, including machine learning libraries such as Scikit-learn, Pandas, and NumPy.
- Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML/AI services, and ADF.
- Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
- Strong understanding of large enterprise applications like SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
- A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline; relevant certifications are considered a plus.
- A self-driven, creative problem-solver's mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
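For a concrete picture of the modeling techniques this posting lists, here is a minimal scikit-learn sketch applying gradient boosting to synthetic data. All hyperparameters and the dataset are illustrative, not prescribed by the role.

```python
# Minimal sketch: gradient-boosting classification on synthetic data.
# Hyperparameters below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3
)
model.fit(X_train, y_train)

# Evaluate on the holdout split using AUC.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```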
Posted 3 weeks ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer you will:
- Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights and build comprehensive Risk & Controls monitoring mechanisms.
- Create executive reporting, proposals, and marketing material, ensuring the highest quality deliverables.
- Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data led Risk technology platform.

Skills and Summary of Accountabilities:
- Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
- Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
- Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Good understanding of machine learning, NLP, LLM, deep learning, and generative AI techniques.
- Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems.
- Ensure adherence to ethical AI guidelines and data governance policies.
- Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
- Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organization remains at the forefront of innovation and maintains a competitive edge in the industry.
- Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
- Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered.
- Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
- Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
- 4+ years of working experience in large-scale AI/ML models and data science.
- Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
- Proficiency in AI/ML programming languages like Python, R, and SQL.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms).
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
- Strong programming skills in Python, including machine learning libraries such as Scikit-learn, Pandas, and NumPy.
- Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML/AI services, and ADF.
- Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
- Strong understanding of large enterprise applications like SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
- A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline; relevant certifications are considered a plus.
- A self-driven, creative problem-solver's mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
4.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer you will:
- Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights and build comprehensive Risk & Controls monitoring mechanisms.
- Create executive reporting, proposals, and marketing material, ensuring the highest quality deliverables.
- Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data led Risk technology platform.

Skills and Summary of Accountabilities:
- Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
- Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
- Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Good understanding of machine learning, NLP, LLM, deep learning, and generative AI techniques.
- Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems.
- Ensure adherence to ethical AI guidelines and data governance policies.
- Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
- Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organization remains at the forefront of innovation and maintains a competitive edge in the industry.
- Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
- Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered.
- Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
- Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
- 4+ years of working experience in large-scale AI/ML models and data science.
- Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
- Proficiency in AI/ML programming languages like Python, R, and SQL.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms).
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
- Strong programming skills in Python, including machine learning libraries such as Scikit-learn, Pandas, and NumPy.
- Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML/AI services, and ADF.
- Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
- Strong understanding of large enterprise applications like SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
- A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline; relevant certifications are considered a plus.
- A self-driven, creative problem-solver's mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer you will:
- Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights and build comprehensive Risk & Controls monitoring mechanisms.
- Create executive reporting, proposals, and marketing material, ensuring the highest quality deliverables.
- Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data led Risk technology platform.

Skills and Summary of Accountabilities:
- Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
- Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
- Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Good understanding of machine learning, NLP, LLM, deep learning, and generative AI techniques.
- Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems.
- Ensure adherence to ethical AI guidelines and data governance policies.
- Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
- Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organization remains at the forefront of innovation and maintains a competitive edge in the industry.
- Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
- Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered.
- Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
- Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
- 4+ years of working experience in large-scale AI/ML models and data science.
- Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
- Proficiency in AI/ML programming languages like Python, R, and SQL.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms).
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
- Strong programming skills in Python, including machine learning libraries such as Scikit-learn, Pandas, and NumPy.
- Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML/AI services, and ADF.
- Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
- Strong understanding of large enterprise applications like SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
- A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline; relevant certifications are considered a plus.
- A self-driven, creative problem-solver's mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI & Big Data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer you will:
- Build and leverage cutting-edge Gen AI & Big Data platforms to deliver insights and build comprehensive Risk & Controls monitoring mechanisms.
- Create executive reporting, proposals, and marketing material, ensuring the highest quality deliverables.
- Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI & Data led Risk technology platform.

Skills and Summary of Accountabilities:
- Develop robust AI/ML models to drive valuable insights and enhance decision-making across multiple business lines.
- Strong technical knowledge of Big Data platforms, AI technologies around Large Language Models and Vector Databases, comprehensive DevOps services, and full-stack application development.
- Lead the design and development of bespoke machine learning algorithms to achieve business objectives.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Good understanding of machine learning, NLP, LLM, deep learning, and generative AI techniques.
- Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems.
- Ensure adherence to ethical AI guidelines and data governance policies.
- Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services.
- Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the organization remains at the forefront of innovation and maintains a competitive edge in the industry.
- Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results.
- Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered.
- Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives.
- Willingness to learn and apply new, cutting-edge AI technologies to deliver insight-driven business solutions.

To qualify for the role you must have
- 4+ years of working experience in large-scale AI/ML models and data science.
- Deep understanding of statistics, AI/ML algorithms, and predictive modeling.
- Proficiency in AI/ML programming languages like Python, R, and SQL.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms).
- Experience implementing monitoring and logging tools to ensure AI model performance and reliability.
- Strong programming skills in Python, including machine learning libraries such as Scikit-learn, Pandas, and NumPy.
- Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML/AI services, and ADF.
- Good understanding of Git, JIRA, change/release management, build/deploy, CI/CD, Azure DevOps, and SharePoint.
- Strong understanding of large enterprise applications like SAP, Oracle ERP, and Microsoft Dynamics.

Ideally, you'll also have
- A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline; relevant certifications are considered a plus.
- A self-driven, creative problem-solver's mindset: someone who enjoys the fast-paced world of software development and performs well in a team.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About ProcDNA
ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge technology to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle: you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged; it's ingrained in our DNA. Ready to join our epic growth journey?

What We Are Looking For
You'll be driving the adoption of the latest technologies in our solutions, bringing thought leadership to guide clients on complex data management problems, and driving business performance. You will work with the leadership team to bring subject matter expertise in areas such as Big Data, ETL, Reporting, CRM, Data Warehousing, MDM, DevOps, Software Development, etc. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You'll Do
- Lead end-to-end data management solution projects for multiple clients across data engineering and BI technologies.
- Create project management plans and ensure adherence to project timelines.
- Integrate multiple data sources into one visualization to tell a story.
- Interact with customers to understand their business problems and provide best-in-class analytics solutions.
- Interact with data platform leaders and understand the data flows that feed Tableau and other analytics tools.
- Understand data governance, quality, and security, and integrate analytics with these enterprise platforms.
- Interact with global UX/UI functions and design best-in-class visualizations for customers, harnessing all product capabilities.

Must Have
- 7-10 years of data warehousing and data engineering experience.
- Experience interacting directly with Life Science clients, discussing requirements, and managing stakeholders.
- Experience in requirement gathering and designing enterprise warehouse solutions from scratch.
- Hands-on experience with ETL tools like ADF, Databricks, and Informatica; experience with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow (see the sketch after this listing); experience with data warehouses: SQL/NoSQL, Amazon Redshift, Snowflake, Apache Hive, HDFS, etc.
- BI tools knowledge and experience leading dashboard implementations.
- Deep understanding of data governance and data quality management frameworks.
- Strong communication and presentation skills with a strong problem-solving attitude.
- Excellent analytical, problem-solving, and debugging skills, with a strong ability to quickly learn and comprehend business processes and problems in order to develop effective technical solutions.

Skills: mdm, sql, hdfs, data warehousing, big data, devops, cloud, amazon redshift, snowflake, pharmaceutical consulting, data management, apache hive, azure, reporting, problem-solving, luigi, informatica, analytical skills, presentation skills, data governance, adf, data engineering, crm, databricks, bi technologies, airflow, team management, business technology, aws, azkaban, software development, etl, client management, data quality management, life science
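Since the posting calls out workflow managers such as Airflow, here is a minimal sketch of an extract-transform-load DAG. The DAG id, schedule, and task bodies are hypothetical; the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal sketch: an Airflow DAG wiring extract -> transform -> load.
# DAG id, schedule, and task bodies are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...   # placeholder: pull from source systems
def transform(): ... # placeholder: cleanse and conform
def load(): ...      # placeholder: write to the warehouse

with DAG(
    dag_id="warehouse_daily_load",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule_interval` on older Airflow
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```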
Posted 3 weeks ago
4.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Description
About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Azure Data Engineer
Experience: 4+ years
Skill Set: Azure Synapse, PySpark, ADF, and SQL
Location: Pune, Hyderabad, Gurgaon

- 5+ years of experience in software development, technical operations, and running large-scale applications.
- 4+ years of experience developing or supporting Azure Data Factory (API/APIM), Azure Databricks, Azure DevOps, Azure Data Lake Storage (ADLS), SQL and Synapse data warehouse, and Azure Cosmos DB.
- 2+ years of experience working in data engineering.
- Experience with data virtualization products like Denodo is desirable.
- Azure Data Engineer or Solutions Architect certification is desirable.
- Good understanding of container platforms like Docker and Kubernetes.
- Ability to assess the application/platform from time to time for architectural improvements and provide inputs to the relevant teams.
- Very good troubleshooting skills: quick identification of application issues and quick resolutions with no or minimal user/business impact.
- Hands-on experience working with high-volume, mission-critical applications.
- Deep appreciation of IT tools, techniques, systems, and solutions.
- Excellent communication skills, along with experience driving triage calls involving different technical stakeholders.
- Creative problem-solving skills for cross-functional issues amidst changing priorities.
- Flexible and resourceful in swiftly managing changing operational goals and demands.
- Good experience handling escalations, taking complete responsibility and ownership of all critical issues through to technical/logical closure.
- Good understanding of the IT Infrastructure Library (ITIL) framework and the various IT Service Management (ITSM) tools available in the marketplace.
Posted 3 weeks ago
4.0 - 7.0 years
8 - 15 Lacs
Hyderabad
Hybrid
We are seeking a highly motivated Senior Data Engineer or Data Engineer within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration and data warehousing, including ETL jobs, reports, dashboards and data pipelines. The person in this role will work closely with the Data Architect, the BI & Analytics team, and Engineering teams to deliver data assets for Data Security, DW and Analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
- Design, build, test and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform, and integrate data.
- Design and develop ETL processes for the Data Warehouse lifecycle (staging of data, ODS data integration, EDW and data marts) and Data Security (data archival, data obfuscation, etc.).
- Build complex SQL queries on large datasets and performance-tune them as needed (see the sketch after this listing).
- Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory.
- Maintain ETL packages and supporting data objects for our growing BI infrastructure.
- Carry out monitoring, tuning, and database performance analysis.
- Facilitate integration of our application with other systems by developing data pipelines.
- Prepare key documentation to support the technical design in technical specifications.
- Collaborate and work alongside other technical professionals (BI report developers, data analysts, architects).
- Communicate clearly and effectively with stakeholders.

To apply for this role, you should possess the following skills, experience and qualifications:
- Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing.
- Data asset delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics.
- ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines.
- 3+ years of experience as an SSIS ETL developer, Data Engineer, or in a related role.
- 2+ years of experience using Azure Data Factory.
- Knowledge of data modelling and data warehouse concepts.
- Experience working with the Azure stack.
- Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data.
- Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations.
- Ability to work in an Agile environment.

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.
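To illustrate the "complex SQL queries on large datasets" requirement, here is a minimal sketch running a windowed T-SQL aggregation from Python via pyodbc. The server, database, and dbo.fact_orders table are hypothetical.

```python
# Minimal sketch: a running-total T-SQL query executed via pyodbc.
# Connection string and table/column names are illustrative assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-dw.example.com;DATABASE=edw;"   # hypothetical server/database
    "Trusted_Connection=yes;"
)

sql = """
    SELECT customer_id,
           order_month,
           SUM(order_amount) AS monthly_total,
           SUM(SUM(order_amount)) OVER (
               PARTITION BY customer_id ORDER BY order_month
           ) AS running_total
    FROM   dbo.fact_orders
    GROUP  BY customer_id, order_month
"""

cur = conn.cursor()
for row in cur.execute(sql):
    # pyodbc rows support attribute access by column alias.
    print(row.customer_id, row.order_month, row.running_total)
cur.close()
conn.close()
```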
Posted 3 weeks ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
We are seeking a developer to design, develop, and maintain data ingestion processes to a data platform built on Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills. Key responsibilities include developing, testing, and optimizing ETL workflows and maintaining documentation. A B.Tech degree and 5+ years of ETL development experience in the Microsoft data track are required, along with demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.

Mandatory Skill Sets: ETL Development
Preferred Skill Sets: Microsoft Stack
Years of Experience Required: 4+
Education Qualification: B.Tech/B.E./MCA (Bachelor of Engineering or equivalent)
Posted 3 weeks ago
3.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients run their business more effectively and understand which business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements and use cases and translate them into detailed technical specifications.
- Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements
- Proven experience working as a data engineer.
- Highly proficient in the Spark framework (Python and/or Scala).
- Extensive knowledge of data warehousing concepts, strategies, and methodologies.
- Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
- Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
- Experience designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
- Experience designing and building data pipelines using API ingestion and streaming ingestion methods (see the sketch after this listing).
- Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential.
- Thorough understanding of Azure cloud infrastructure offerings.
- Strong experience in common data warehouse modeling principles, including Kimball.
- Working knowledge of Python is desirable.
- Experience developing security models.
- Databricks & Azure Big Data Architecture certification would be a plus.
- Must be team-oriented, with strong collaboration, prioritization, and adaptability skills.

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 3-10 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Required Skills: Azure Data Factory, Data Engineering, Microsoft Azure Databricks
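As an example of the streaming-ingestion methods the posting mentions, here is a minimal PySpark Structured Streaming sketch that lands JSON events into a Delta table. The ADLS path, schema, and checkpoint location are assumptions, and the Delta format presupposes a Databricks runtime or the delta-lake package.

```python
# Minimal sketch: streaming JSON events into a Delta table with PySpark.
# Paths and the event schema below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

events = (
    spark.readStream.schema(schema)
    .json("abfss://landing@account.dfs.core.windows.net/events/")  # hypothetical
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/checkpoints/events")  # hypothetical
    .outputMode("append")
    .start("/delta/bronze/events")
)
query.awaitTermination()
```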
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Key Responsibilities:

Data Lake and Lakehouse Implementation:
- Design, implement, and manage Data Lake and Lakehouse architectures. (Must have)
- Develop and maintain scalable data pipelines and workflows. (Must have)
- Utilize Azure Data Lake Services (ADLS) for data storage and management. (Must have)
- Knowledge of the Medallion architecture and the Delta format. (Must have)

Data Processing and Transformation:
- Use PySpark for data processing and transformations. (Must have)
- Implement Delta Live Tables for real-time data processing and analytics. (Good to have)
- Ensure data quality and consistency across all stages of the data lifecycle. (Must have)

Data Management and Governance:
- Employ Unity Catalog for data governance and metadata management. (Good to have)
- Ensure robust data security and compliance with industry standards. (Must have)

Data Integration:
- Extract, transform, and load (ETL) data from multiple sources (Must have), including SAP (Good to have), Dynamics 365 (Good to have), and other systems.
- Utilize Azure Data Factory (ADF) and Synapse Analytics for data integration and orchestration. (Must have)
- Performance optimization of the jobs. (Must have)

Data Storage and Access:
- Implement and manage Azure Data Lake Storage (ADLS) for large-scale data storage. (Must have)
- Optimize data storage and retrieval processes for performance and cost-efficiency. (Must have)

Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements. (Must have)
- Provide technical guidance and mentorship to junior team members. (Good to have)

Continuous Improvement:
- Stay updated with the latest industry trends and technologies in data engineering and cloud computing. (Good to have)
- Continuously improve data processes and infrastructure for efficiency and scalability. (Must have)

Required Skills and Qualifications:

Technical skills:
- Proficient in PySpark and Python for data processing and analysis.
- Strong experience with Azure Data Lake Services (ADLS) and Data Lake architecture.
- Hands-on experience with Databricks for data engineering and analytics.
- Knowledge of Unity Catalog for data governance.
- Expertise in Delta Live Tables for real-time data processing.
- Familiarity with Azure Fabric for data integration and orchestration.
- Proficient in Azure Data Factory (ADF) and Synapse Analytics for ETL and data warehousing.
- Experience pulling data from multiple sources such as SAP, Dynamics 365, and others.

Soft skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Attention to detail and commitment to data accuracy and quality.

Certifications required:
- Certification in Azure Data Engineering or relevant Azure certifications: DP-203. (Must have)
- Databricks Certified Data Engineer Associate. (Must have)
- Databricks Certified Data Engineer Professional. (Good to have)

Mandatory skill sets: Azure DE, PySpark, Databricks
Preferred skill sets: Azure DE, PySpark, Databricks
Years of experience required: 5-10 years
Educational Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study Required: Master of Engineering, Bachelor of Engineering
Required Skills: PySpark
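For illustration, a minimal sketch of the bronze-to-silver hop of the Medallion architecture using Delta Live Tables, assuming it runs inside a Databricks DLT pipeline (where the `dlt` module and the `spark` session are provided by the runtime); the table names, paths, and columns are hypothetical:

```python
# Sketch of a bronze -> silver hop in Delta Live Tables. Runs only inside a
# Databricks DLT pipeline, where `dlt` and `spark` are available.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw orders landed as-is from the source extract.")
def orders_bronze():
    # Hypothetical landing path; in practice this is often an Auto Loader stream.
    return spark.read.format("json").load(
        "abfss://landing@examplelake.dfs.core.windows.net/orders/")

@dlt.table(comment="Silver: typed, de-duplicated orders.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # data quality rule
def orders_silver():
    return (dlt.read("orders_bronze")
            .withColumn("order_date", F.to_date("order_date"))
            .dropDuplicates(["order_id"]))
```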
Posted 3 weeks ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Role: Databricks Developer
Experience: 6-8 years
Location: KOL/HYD/PUNE
Looking for immediate joiners or candidates with a notice period of up to 25 days

Job Description:
- 5+ years of relevant and progressive data engineering experience
- Deep technical knowledge of and experience in Databricks, Python, Scala, and the Microsoft Azure architecture and platform, including Synapse, ADF (Azure Data Factory) pipelines, and Synapse stored procedures
- Hands-on experience working with data pipelines using a variety of source and target locations (e.g., Databricks, Synapse, SQL Server, Data Lake, file-based, SQL and NoSQL databases)
- Experience in engineering practices such as development, code refactoring, leveraging design patterns, CI/CD, and building highly scalable data applications and processes
- Experience developing batch ETL pipelines; real-time pipelines are a plus
- Knowledge of advanced data engineering concepts such as dimensional modeling, ETL, data governance, and data warehousing involving structured and unstructured data
- Thorough knowledge of Synapse and SQL Server, including T-SQL and stored procedures
- Experience working with and supporting cross-functional teams in a dynamic environment
- A successful history of manipulating, processing, and extracting value from large disconnected datasets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases
- Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytics solutions
- Develop and deliver documentation on data engineering capabilities, standards, and processes; participate in coaching, mentoring, design reviews, and code reviews
- Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives
- Solve complex data problems to deliver insights that help the organization achieve its goals
- Knowledge and understanding of Boomi is a plus
- As this is a 24x7 production support project, the resource should be willing to work in shifts, including 7 night shifts per month
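For illustration of the batch ETL and dimensional-modeling work listed above, a hedged sketch of a Delta Lake MERGE upsert into a dimension table (an SCD Type 1 pattern, where changed attributes are overwritten); the schema, table, and column names are hypothetical:

```python
# Sketch: batch upsert into a dimension table with Delta Lake MERGE on
# Databricks. Table and column names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim-customer-upsert").getOrCreate()

# Hypothetical staging location holding today's changed customer records.
updates = spark.read.format("delta").load("/mnt/staging/customer_updates")
updates.createOrReplaceTempView("customer_updates")

spark.sql("""
    MERGE INTO gold.dim_customer AS tgt
    USING customer_updates AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET
      tgt.customer_name = src.customer_name,
      tgt.segment       = src.segment
    WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, segment)
      VALUES (src.customer_id, src.customer_name, src.segment)
""")
```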
Posted 3 weeks ago
3.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

Responsibilities:
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements:
- Proven experience working as a data engineer.
- Highly proficient in the Spark framework (Python and/or Scala).
- Extensive knowledge of data warehousing concepts, strategies, and methodologies.

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 3-10 Years
Education Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering
Required Skills: Databricks Platform, Extract Transform Load (ETL), PySpark, Python (Programming Language), Structured Query Language (SQL)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 3 weeks ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills:
- Experience: 6+ years in data engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills:
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
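For illustration of the Key Vault and ADLS Gen2 work mentioned above, a sketch of the documented service-principal pattern on Databricks: a secret is read from a Key Vault-backed secret scope and used to configure OAuth access to the lake. The scope, key, account, and tenant values are placeholders, and `dbutils`/`spark` are Databricks notebook globals:

```python
# Sketch: Key Vault-backed secret scope + OAuth access to ADLS Gen2 from
# Databricks. All names in angle brackets and the scope/account are placeholders.
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

account = "examplelake"  # hypothetical storage account
spark.conf.set(f"fs.azure.account.auth.type.{account}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}.dfs.core.windows.net",
               "<application-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}.dfs.core.windows.net",
               client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# With the session configured, lake paths can be read directly.
df = spark.read.format("delta").load(
    f"abfss://curated@{account}.dfs.core.windows.net/sales/")
```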
Posted 3 weeks ago
5.0 - 7.0 years
8 - 10 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
We are seeking an experienced Power BI Developer to join our team. The role involves creating insightful, interactive reports and dashboards in Power BI, optimizing SQL queries, and troubleshooting data-related issues. The ideal candidate will have hands-on experience with complex DAX queries, a variety of data sources, and the different reporting modes (Import, DirectQuery). Responsibilities include working with PostgreSQL and MS SQL, developing robust SQL code, writing stored procedures, and ensuring high-quality data modeling. Candidates should have expertise in SQL optimization and performance tuning, and in working with Common Table Expressions (CTEs) and complex joins. The role requires proficiency in designing engaging visual reports, applying themes/templates, and keeping up with the latest Power BI features and best practices.
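For illustration of the CTE and complex-join work this role calls for, a sketch that runs a CTE-based aggregation against SQL Server from Python via pyodbc; the connection details and table/column names are placeholders:

```python
# Sketch: executing a CTE-based aggregation against SQL Server via pyodbc.
# Connection string and schema are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example.database.windows.net;"
    "DATABASE=salesdb;UID=report_user;PWD=<password>"
)

sql = """
WITH monthly_sales AS (
    SELECT customer_id,
           YEAR(order_date)  AS order_year,
           MONTH(order_date) AS order_month,
           SUM(amount)       AS total_amount
    FROM dbo.orders
    GROUP BY customer_id, YEAR(order_date), MONTH(order_date)
)
SELECT c.customer_name, m.order_year, m.order_month, m.total_amount
FROM monthly_sales AS m
JOIN dbo.customers AS c ON c.customer_id = m.customer_id
ORDER BY m.total_amount DESC;
"""

for row in conn.cursor().execute(sql):
    print(row.customer_name, row.order_year, row.order_month, row.total_amount)
```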
Posted 3 weeks ago
7.0 years
0 Lacs
India
On-site
Job Title: Senior Data Engineer
Experience: 7+ years
Notice: Immediate joiner

We are looking for a highly experienced Azure Data Engineer with strong technical expertise in data engineering tools and platforms within the Azure ecosystem. The ideal candidate should have 7+ years of relevant experience and a deep understanding of data warehousing, data pipelines, and cloud-based data processing frameworks.

Key Responsibilities:
- Design and implement scalable data pipelines and ETL processes using Azure Data Factory (ADF), PySpark, and Databricks.
- Manage and optimize Azure Data Lake and integrate with Azure Synapse Analytics for large-scale data storage and analytics.
- Collaborate with cross-functional teams to gather requirements, design data solutions, and deliver actionable insights.
- Develop and optimize SQL queries for data extraction and transformation.
- Apply data modeling techniques and implement best practices for data governance and quality.
- Work closely with BI developers and stakeholders to support reporting and dashboarding solutions.
- Implement and manage CI/CD pipelines for data engineering solutions.

Required Skills:
- 7+ years of experience in Azure data engineering.
- Strong proficiency in SQL and at least one programming language (preferably Python).
- Deep experience with Azure Data Factory (ADF), Azure Databricks, Azure Data Lake, Azure Synapse Analytics, and PySpark.
- Knowledge of data warehousing concepts and implementation.
- Experience in Apache Spark or similar ETL tools.

Preferred/Good to Have:
- Experience with Microsoft Fabric (MS Fabric).
- Familiarity with Power BI for data visualization.
- Domain knowledge in Finance, Procurement, or Human Capital.
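For illustration of ADF orchestration from code, a sketch that triggers a pipeline run with the Azure SDK (azure-identity and azure-mgmt-datafactory); all resource names and parameters are placeholders:

```python
# Sketch: triggering an ADF pipeline run from Python with the Azure SDK.
# Subscription, resource group, factory, pipeline, and parameters are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-data-platform",   # hypothetical
    factory_name="adf-enterprise",            # hypothetical
    pipeline_name="pl_ingest_sales",          # hypothetical
    parameters={"load_date": "2024-01-31"},
)
print("Started pipeline run:", run.run_id)
```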
Posted 3 weeks ago
5.0 - 9.0 years
7 - 15 Lacs
Kochi, Chennai, Bengaluru
Hybrid
Greetings from Aspire Systems!!
Currently hiring for ETL Testing with SSIS, SQL, and ADF.
Role: ETL Testing
Experience: 5+ years only
Location: Chennai / Bangalore / Kochi
Notice: Immediate to 20 days only.
Share your CV with safoora.imthiyas@aspiresys.com / call 9384788107 - immediate joiners only.

Job Summary: We are seeking a highly skilled ETL Testing - Software Engineer with 5 to 6 years of experience to join our dynamic team. The ideal candidate will have proficient knowledge of SSIS, SSRS, SQL, and MS SQL Server, plus knowledge of Azure Databricks. This role involves ensuring the quality and reliability of ETL processes and data integration solutions.

Key Responsibilities:
- ETL Testing: design, develop, and execute ETL test plans and test cases to ensure data accuracy, completeness, and integrity.
- SSIS/SSRS Reporting: validate BI reports using complex SQL queries.
- SQL Proficiency: write complex SQL queries for data validation, testing, and troubleshooting.
- Azure Databricks: apply basic knowledge of Azure Databricks for data processing and analytics.
- Data Quality Assurance: perform data validation and verification to ensure data quality and consistency across various systems.
- Defect Management: identify, document, and track defects using appropriate tools (such as JIRA) and methodologies.
- Collaboration: work closely with developers, business analysts, and other stakeholders to understand requirements and ensure comprehensive testing coverage.
- Documentation: create and maintain detailed documentation of test cases, test results, and testing processes.

Required Skills and Qualifications:
- Proficient SSIS/SSRS/SQL skills for writing and optimizing queries.
- Sound knowledge of Azure Databricks.
- Strong analytical and problem-solving skills to identify issues and ensure data accuracy.
- Attention to detail: high attention to detail and commitment to delivering high-quality work.
- Communication: excellent verbal and written communication skills.
- Team player: ability to work effectively in a team environment and collaborate with cross-functional teams.
- Experience: previous experience in a similar role within the insurance sector.
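For illustration of the data validation work described above, a sketch of a simple source-versus-target reconciliation test — comparing row counts and a column checksum between a staging table and its warehouse target; the DSNs and table names are placeholders:

```python
# Sketch: source-vs-target reconciliation check for ETL testing. Compares a
# row count and an amount checksum between staging and the warehouse target.
# DSNs and table/column names are hypothetical.
import pyodbc

def fetch_one(conn, sql):
    """Run a single-row query and return its tuple of values."""
    return conn.cursor().execute(sql).fetchone()

src = pyodbc.connect("DSN=staging_db")    # hypothetical DSN
tgt = pyodbc.connect("DSN=warehouse_db")  # hypothetical DSN

src_count, src_sum = fetch_one(src, "SELECT COUNT(*), SUM(amount) FROM stg.orders")
tgt_count, tgt_sum = fetch_one(tgt, "SELECT COUNT(*), SUM(amount) FROM dw.fact_orders")

assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"
assert src_sum == tgt_sum, f"Amount checksum mismatch: {src_sum} vs {tgt_sum}"
print("Reconciliation passed:", src_count, "rows")
```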
Posted 3 weeks ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: Sr Data Engineer (Azure)
Experience: 5+ years

What you should have:
- A technical background in SAP (SAP BW, SAP HANA, and SAP BDC) is preferred.
- Proficiency in Microsoft Fabric, Azure Data Lake, Azure Synapse, and Azure Databricks.
- Solid understanding of data modeling, ETL/ELT processes, and SQL.
- Experience with PySpark and Python for data engineering tasks.
- Familiarity with CI/CD practices using Azure DevOps or GitHub.
- Ability to work in Agile environments and collaborate with diverse teams.
- Good analytical skills with excellent knowledge of SQL.
- Well versed with Azure services; must have experience and knowledge of ADF, ADLS, and Blob Storage.
- Must have experience in building data pipelines.
- Hands-on development experience with PySpark and Databricks.
- Experience using software version control tools (Git).
- Work in Agile methodologies; may be required to perform QA for work done by other team members in the sprint.
- Work with the team and assist the Product Owner and technology lead in identifying and estimating data platform engineering work.
- Knowledge of, and the ability to set up, DevOps and test frameworks.
- Familiarity with API integration processes.
- Exposure to Power BI, streaming data, and other Azure services.

Responsibilities:
- Develop data pipelines to load data using Azure services.
- Perform data model design and ETL/ELT development optimized for efficient storage, access, and computation to serve various business intelligence use cases.
- Contribute fully or partially to API integration, end-to-end DevOps automation, test automation, data visualisation (Power BI), and business intelligence reporting solutions.
- Apply knowledge of programming languages such as Spark or Python.
- Create technical design documentation that covers current and future functionality, database objects affected, specifications, and flows/diagrams detailing the proposed database and/or data integration implementation.
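For illustration of the pipeline-building skills listed above, a sketch of an incremental (watermark-based) load in PySpark — only rows modified since the last successful run are pulled and appended; the paths and the watermark store are hypothetical:

```python
# Sketch: incremental (watermark-based) load in PySpark. Paths, table names,
# and the watermark store location are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Last high-water mark, e.g. persisted by the previous run (hypothetical store).
last_watermark = (spark.read.format("delta")
                  .load("/mnt/meta/watermarks")
                  .filter(F.col("table") == "orders")
                  .agg(F.max("value"))
                  .collect()[0][0])

# Pull only rows changed since the last successful run.
source = spark.read.format("delta").load("/mnt/raw/orders")
increment = source.filter(F.col("modified_at") > F.lit(last_watermark))

increment.write.format("delta").mode("append").save("/mnt/curated/orders")
```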
Posted 3 weeks ago
10.0 - 16.0 years
25 - 27 Lacs
Chennai
Work from Office
We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai with a shift from 2.00pm to 11.00pm IST.

Key qualifications we seek in candidates:
- Solid understanding of SQL and data modeling
- Proficiency in DWH architecture, including EDW/DM concepts and Star/Snowflake schemas
- Experience in designing and building data pipelines on the Azure cloud stack
- Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Azure Fabric, Azure Analysis Services, and Azure SQL Data Warehouse
- Knowledge of Azure DevOps and CI/CD pipelines
- Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects
- Exposure to on-premise transactional database environments such as Oracle, SQL Server, Snowflake, MySQL, and/or Postgres
- Ability to lead enterprise data strategies, including data lake delivery
- Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python
- Strong problem-solving skills with a track record of deriving business insights from large datasets
- Excellent communication skills and the ability to provide strategic direction to technical and business teams
- Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory
- Capability to explain complex data solutions clearly to senior management
- Experience in implementing, managing, and supporting data warehouse projects or applications
- Track record of leading full-cycle implementation projects related to business intelligence
- Strong team and stakeholder management skills
- Attention to detail, accuracy, and ability to meet tight deadlines
- Knowledge of application development, APIs, microservices, and integration components

Tools & Technology Experience Required:
- Strong hands-on experience in SQL or PL/SQL
- Proficiency in Python
- SSIS or Informatica (one of the two is mandatory)
- BI: Power BI or Tableau (one of the two is mandatory)
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview:
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
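For illustration of the data observability and monitoring work this role supports, a sketch that polls recent ADF pipeline runs for failures using the Azure SDK (azure-mgmt-datafactory); the resource names are placeholders and the alerting hook is left open:

```python
# Sketch: polling the last 24 hours of ADF pipeline runs for failures, the kind
# of observability check a DataOps team might automate. Names are placeholders.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
runs = adf.pipeline_runs.query_by_factory(
    "rg-data-platform", "adf-enterprise",  # hypothetical resource group / factory
    RunFilterParameters(last_updated_after=now - timedelta(hours=24),
                        last_updated_before=now),
)

for run in runs.value:
    if run.status == "Failed":
        # Hook point for alerting or an automated remediation/re-run.
        print(f"FAILED: {run.pipeline_name} (run_id={run.run_id}) - {run.message}")
```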
Posted 3 weeks ago
0.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join the Zafin team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making.

Key Responsibilities:
- Azure Data Factory: design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems.
- SQL & Data Management: develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy.
- Data Transformation & Modelling: build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability.
- Power BI Development: create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements.
- DAX Expertise: write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports.
- Collaboration: work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization.
- Attention to Detail: ensure data consistency and accuracy through rigorous validation and testing processes.
- Presentation & Reporting: effectively communicate insights and updates to stakeholders, delivering clear and concise documentation.

Skills and Qualifications:
- Technical expertise: proficient in Azure Data Factory for building ETL pipelines and managing data flows; strong experience with SQL, including query optimization and data transformation; knowledge of Azure Analysis Services for data modelling; advanced Power BI skills, including DAX, report development, and data modelling; familiarity with Microsoft Fabric and Azure Analytics (a plus).
- Analytical thinking: ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively.
- Communication skills: excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders.
- Educational qualification: minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline.

Job Type: Full-time
Pay: Up to ₹1,000,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Have you worked with Azure Data Factory (ADF)?
Work Location: In person
Posted 3 weeks ago
The job market for ADF (Application Development Framework) professionals in India is witnessing significant growth, with numerous opportunities available for job seekers in this field. ADF is a popular framework used for building enterprise applications, and companies across various industries are actively looking for skilled professionals to join their teams.
Here are five major cities in India with high demand for ADF professionals:
- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai
The estimated salary range for ADF professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.
In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.
Here is a sample of interview questions for ADF roles, categorized by difficulty level:

Basic:
- What is ADF and what are its key features?
- What is the difference between ADF Faces and ADF Task Flows?

Medium:
- Explain the lifecycle of an ADF application.
- How do you handle exceptions in ADF applications?

Advanced:
- Discuss the advantages of using ADF Business Components.
- How would you optimize performance in an ADF application?
As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!