0 years
0 Lacs
India
On-site
Job Description: We are seeking a highly skilled Azure Data Engineer with 4+ years of experience to design, develop, and optimize data pipelines and data integration solutions in a cloud-based environment. The ideal candidate will have strong technical expertise in Azure, data engineering tools, and advanced ETL design, along with excellent communication and problem-solving skills.
Key Responsibilities:
- Design and develop advanced ETL pipelines for data ingestion and egress for batch data.
- Build scalable data solutions using Azure Data Factory (ADF), Databricks, Spark (PySpark & Scala Spark), and other Azure services.
- Troubleshoot data jobs, identify issues, and implement effective root-cause solutions.
- Collaborate with stakeholders to gather requirements and propose efficient solution designs.
- Ensure data quality, reliability, and adherence to best practices in data engineering.
- Maintain detailed documentation of problem definitions, solutions, and architecture.
- Work independently with minimal supervision while ensuring project deadlines are met.
Required Skills & Qualifications:
- Microsoft Certified: Azure Fundamentals (preferred).
- Microsoft Certified: Azure Data Engineer Associate (preferred).
- Proficiency in SQL, Python, and Scala.
- Strong knowledge of Azure cloud services, ADF, and Databricks.
- Hands-on experience with Apache Spark (PySpark & Scala Spark).
- Expertise in designing and implementing complex ETL pipelines for batch data.
- Strong troubleshooting skills with the ability to perform root cause analysis.
Soft Skills:
- Excellent verbal and written communication skills.
- Strong documentation skills for drafting problem definitions and solutions.
- Ability to effectively gather requirements and propose solution designs.
- Self-driven, with the ability to work independently with minimal supervision.
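The batch ingestion-and-egress work described above typically follows a read-transform-write pattern in PySpark. A minimal sketch, assuming hypothetical ADLS paths, column names, and business logic (none of which come from the posting):

```python
# Minimal batch ETL sketch for Databricks/Spark. Paths, columns and the
# transformation logic are hypothetical placeholders, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

# Ingest raw batch data from a hypothetical ADLS location.
raw = spark.read.format("parquet").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
)

# Transform: basic cleansing and stamping, standing in for real logic.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingest_date", F.current_date())
)

# Egress: write a curated Delta table for downstream consumers.
(clean.write.format("delta")
      .mode("overwrite")
      .save("abfss://curated@examplestorage.dfs.core.windows.net/sales/"))
```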
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are seeking a skilled and experienced Azure Databricks Engineer to join our growing data engineering team. The ideal candidate will have deep hands-on expertise in building scalable data pipelines and streaming architectures using Azure-native technologies. Prior experience in the banking or financial services domain is highly desirable, as you will be working with critical data assets and supporting regulatory, risk, and operational reporting use cases.
Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks (PySpark) for batch and real-time data processing.
- Implement CDC (Change Data Capture) and Delta Live Tables/Autoloader to support near-real-time ingestion.
- Integrate various structured and semi-structured data sources using ADF, ADLS, and Kafka (Confluent).
- Develop CI/CD pipelines for data engineering workflows using GitHub Actions or Azure DevOps.
- Write efficient and reusable SQL and Python code for data transformations and validations.
- Ensure data quality, lineage, governance, and security across all ingestion and transformation layers.
- Collaborate closely with business analysts, data scientists, and data stewards to support use cases in risk, finance, compliance, and operations.
- Participate in code reviews, architectural discussions, and documentation efforts.
Required Skills & Qualifications:
- Strong proficiency in SQL, Python, and PySpark.
- Proven experience with Azure Databricks, including notebooks, jobs, clusters, and Delta Lake.
- Experience with Azure Data Lake Storage (ADLS Gen2) and Azure Data Factory (ADF).
- Hands-on experience with Confluent Kafka for streaming data integration.
- Strong understanding of Autoloader, CDC mechanisms, and Delta Lake-based architecture.
- Experience implementing CI/CD pipelines using GitHub and/or Azure DevOps.
- Knowledge of data modeling, data warehousing, and data security best practices.
- Exposure to regulatory and risk data use cases in the banking/financial sector is a strong plus.
Preferred Qualifications:
- Azure certifications (e.g., Azure Data Engineer Associate).
- Experience with tools such as Delta Live Tables, Unity Catalog, and Lakehouse architecture.
- Familiarity with business glossaries, data lineage tools, and data governance frameworks.
- Understanding of financial data including GL, loan, customer, transaction, or market risk domains.
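Near-real-time ingestion with Databricks Auto Loader, as mentioned above, is usually a `cloudFiles` streaming read landing in a Delta table. A minimal notebook sketch; the mount paths, checkpoint location, and target table are hypothetical placeholders:

```python
# Databricks notebook sketch: Auto Loader ingestion into Delta.
# `spark` is predefined in Databricks notebooks; all paths are placeholders.
from pyspark.sql import functions as F

stream = (
    spark.readStream.format("cloudFiles")          # Auto Loader source
         .option("cloudFiles.format", "json")      # incoming file format
         .option("cloudFiles.schemaLocation",
                 "/mnt/checkpoints/txn_schema")    # schema inference tracking
         .load("/mnt/raw/transactions/")
)

(stream.withColumn("ingested_at", F.current_timestamp())
       .writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/txn")
       .trigger(availableNow=True)                 # incremental batch-style run
       .toTable("bronze.transactions"))
```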
Posted 2 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer - Senior
Location: Noida
Employment Type: Permanent
Experience Required: Minimum 5 years
Primary Skills: Cloud - AWS (AWS Lambda, AWS EventBridge, AWS Fargate)
---
Job Description
We are seeking a highly skilled Senior Data Engineer to design, implement, and maintain scalable data pipelines that support machine learning model training and inference.
Responsibilities:
- Build and maintain large-scale data pipelines ensuring scalability, reliability, and efficiency.
- Collaborate with data scientists to streamline the deployment and management of machine learning models.
- Design and optimize ETL (Extract, Transform, Load) processes and integrate data from multiple sources into structured storage systems.
- Automate ML workflows using MLOps tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended - TFX).
- Monitor model performance, data lineage, and system health in production environments.
- Work cross-functionally to improve data architecture and enable seamless ML model integration.
- Manage and optimize cloud platforms and data storage solutions (AWS, GCP, Azure).
- Ensure data security, integrity, and compliance with governance policies.
- Troubleshoot and optimize pipelines to improve reliability and performance.
---
Required Skills
Languages: Python, SQL, PySpark
Cloud: AWS services (Lambda, EventBridge, Fargate); cloud platforms (AWS, GCP, Azure)
DevOps: Docker, Kubernetes, containerization
ETL Tools: AWS Glue, SQL Server (SSIS, SQL packages)
Nice to Have: Redshift, SAS dataset knowledge
---
Mandatory Competencies
- DevOps/Configuration Management: Docker
- DevOps/Configuration Management: Cloud Platforms - AWS
- DevOps/Configuration Management: Containerization (Docker, Kubernetes)
- ETL: AWS Glue
- Database: SQL Server - SQL Packages
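A common pattern for the Lambda/EventBridge stack listed here is a scheduled EventBridge rule invoking a Lambda handler that kicks off a downstream ETL job. A minimal sketch using boto3; the Glue job name is a hypothetical placeholder, not from the posting:

```python
# Lambda handler sketch: triggered by an EventBridge schedule, starts a
# Glue ETL job. "nightly-sales-etl" is a hypothetical job name.
import json
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # EventBridge delivers the event as a dict; log it for traceability.
    print("Triggered by:", json.dumps(event))

    # Scheduled EventBridge events carry an ISO-8601 "time" field.
    run = glue.start_job_run(
        JobName="nightly-sales-etl",
        Arguments={"--run_date": event.get("time", "")},
    )
    return {"JobRunId": run["JobRunId"]}
```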
Posted 2 days ago
3.0 - 6.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Operations
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Role: Senior Associate – Azure Data Engineer
Experience: 3-6 years
Location: Kolkata
Technical Skills:
- Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python.
- Solid understanding of Azure Functions and their application in data processing workflows.
- Understanding of DevOps practices and CI/CD pipelines for data solutions.
- Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus.
- Strong problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
- Excellent communication skills to effectively convey technical concepts to non-technical stakeholders.
Key Responsibilities:
- Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark.
- Collaborate with data architects and business stakeholders to translate requirements into technical solutions.
- Implement and manage data integration processes using SQL Server and Python.
- Design and deploy Azure Functions to support data processing workflows.
- Monitor and troubleshoot data pipeline performance and reliability issues.
- Ensure data quality, security, and compliance with industry standards and best practices.
- Document technical specifications and maintain clear and concise project documentation.
Mandatory Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Preferred Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Years of Experience Required: 3-6 years
Education Qualification: B.E. (B.Tech) / M.E. / M.Tech
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: ETL Tools, Microsoft Azure, PySpark
Optional Skills: Python (Programming Language)
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
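Since this role calls for Azure Functions in data processing workflows, here is a minimal sketch of a blob-triggered function using the Python v2 programming model; the container path and processing step are hypothetical illustrations, not from the posting:

```python
# function_app.py - Azure Functions Python v2 programming model sketch.
# The blob path and processing logic are hypothetical placeholders.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob",
                  path="raw-files/{name}",
                  connection="AzureWebJobsStorage")
def process_file(blob: func.InputStream):
    # Runs whenever a new file lands in the raw-files container.
    data = blob.read()
    logging.info("Processing %s (%d bytes)", blob.name, len(data))
    # ... validate / transform / hand off to downstream storage ...
```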
Posted 2 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
- 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience mentoring and managing large teams (20 to 30 people) for complex engineering programs, and experience hiring and nurturing talent in Palantir Foundry.
- Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services: data engineering with Contour and Fusion, dashboarding and report development using Quiver (or Reports), and application development using Workshop.
- Exposure to Map and Vertex is a plus; Palantir AIP experience is a plus.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure), using managed services or containerized deployments for data pipelines, is necessary.
- Hands-on experience working with and building on Ontology (especially demonstrable experience building semantic relationships).
- Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well.
- Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary; experience in MLOps is a plus.
- Experience developing and managing scalable architectures and working experience managing large data sets.
- Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
- Experience with graph data and graph analysis libraries (such as Spark GraphX or Python NetworkX) is a plus.
- A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
- Experience developing GenAI applications is a plus.
Mandatory Skill Sets:
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services.
Preferred Skill Sets: Palantir Foundry
Years of Experience Required: 4 to 7 years (3+ years relevant)
Education Qualification: Bachelor's degree in computer science, data science, or any other engineering discipline. Master's degree is a plus.
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Science
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
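Data transformation pipelines in Foundry code repositories are typically written as Python transforms against the `transforms.api` decorator. A minimal sketch; the dataset paths, key column, and cleansing logic are hypothetical placeholders, not from the posting:

```python
# Palantir Foundry transform sketch. Dataset paths and columns hypothetical.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Company/refined/clean_trades"),   # hypothetical output dataset
    raw=Input("/Company/raw/trades"),          # hypothetical input dataset
)
def clean_trades(raw):
    # Basic refinement: drop duplicates and null keys, stamp processing time.
    return (
        raw.dropDuplicates(["trade_id"])
           .filter(F.col("trade_id").isNotNull())
           .withColumn("processed_at", F.current_timestamp())
    )
```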
Posted 2 days ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities: We are seeking a highly skilled and experienced Python developer with 6-7 years of hands-on experience in software development.
Key Responsibilities:
- Design, develop, test, and maintain robust and scalable backend applications, using FastAPI to deliver high-performance APIs.
- Write reusable, efficient code following best practices.
- Collaborate with cross-functional teams and integrate user-facing elements with server-side logic.
- Architect and implement distributed, scalable microservices, leveraging Temporal workflows for orchestrating complex processes.
- Participate in code reviews and mentor junior developers.
- Debug and resolve technical issues and production incidents.
- Follow agile methodologies and contribute to sprint planning and estimations.
- Strong communication and collaboration skills; relevant certifications are a plus.
Required Skills:
- Strong proficiency in Python 3.x.
- Collaborate closely with DevOps to implement CI/CD pipelines for Python projects, ensuring smooth deployment to production environments.
- Integrate with various databases (e.g., Cosmos DB) and message queues (e.g., Kafka, Event Hubs) for seamless backend operations.
- Experience in one or more Python frameworks (Django, Flask, FastAPI).
- Develop and maintain unit and integration tests using frameworks like pytest and unittest to ensure code quality and reliability.
- Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
- Familiarity with asynchronous programming (e.g., asyncio, aiohttp) and event-driven architectures.
- Strong skills in PySpark for large-scale data processing.
- Solid understanding of object-oriented programming and design principles.
- Proficient in using version control systems like Git.
Mandatory Skill Sets: Python Developer
Preferred Skill Sets: Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
Years of Experience Required: 4-7 years
Education Qualification: B.Tech/B.E./MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
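For the FastAPI work above, a minimal async endpoint sketch; the resource model, routes, and in-memory store are hypothetical illustrations (a real service would back this with a database such as Cosmos DB):

```python
# Minimal FastAPI sketch with async endpoints. All names are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    order_id: int
    amount: float

# In-memory store standing in for a real database.
ORDERS: dict[int, Order] = {}

@app.post("/orders")
async def create_order(order: Order) -> Order:
    if order.order_id in ORDERS:
        raise HTTPException(status_code=409, detail="order already exists")
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
async def get_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Run locally with `uvicorn main:app --reload`; the same app object is what pytest exercises via FastAPI's TestClient in unit tests.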
Posted 2 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate
Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes for a data platform built on Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills.
Responsibilities:
- Develop, test, and optimize ETL workflows and maintain documentation. ETL development experience in the Microsoft data track is required.
- Work with business teams to translate business requirements into technical requirements.
- Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.
Mandatory Skill Sets:
- Strong proficiency in Azure Databricks, including Spark and Delta Lake.
- Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
- Proficiency in data integration, ETL processes, and T-SQL.
- Experience working in Python for data engineering.
- Experience working with Postgres databases.
- Experience working with graph databases.
- Experience in architecture design and data modelling.
Good-to-Have Skill Sets:
- Unity Catalog / Purview
- Familiarity with Fabric/Snowflake service offerings
- Visualization tool: Power BI
Preferred Skill Sets: Hands-on knowledge of Python and PySpark, and strong SQL knowledge. ETL and data warehousing experience is a must.
Relevant certifications (any one) are mandatory: e.g., Databricks Certified Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, or Azure Solutions Architect.
Years of Experience Required: 5+ years
Education Qualification: Bachelor's degree in Computer Science, IT, or a related field.
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Engineering
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Debugging, Emotional Regulation {+ 41 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
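Ingestion into Delta Lake, as this posting's Spark/Delta requirement implies, often uses MERGE for idempotent upserts. A minimal PySpark sketch for Databricks; the landing path, table, and key column are hypothetical placeholders:

```python
# Delta Lake upsert sketch for Databricks, where `spark` is predefined.
# Table, path and column names are hypothetical placeholders.
from delta.tables import DeltaTable

# New or changed rows landed by an upstream ADF copy activity (hypothetical).
updates = spark.read.format("parquet").load("/mnt/landing/customers/")

target = DeltaTable.forName(spark, "silver.customers")

(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()      # refresh changed rows
       .whenNotMatchedInsertAll()   # add new rows
       .execute())
```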
Posted 2 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Data, Analytics & AI
Management Level: Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Role: Senior Associate
Experience: 3-6 years
Location: Kolkata
Technical Skills:
- Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python.
- Solid understanding of Azure Functions and their application in data processing workflows.
- Understanding of DevOps practices and CI/CD pipelines for data solutions.
- Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus.
- Strong problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
- Excellent communication skills to effectively convey technical concepts to non-technical stakeholders.
Key Responsibilities:
- Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark.
- Collaborate with data architects and business stakeholders to translate requirements into technical solutions.
- Implement and manage data integration processes using SQL Server and Python.
- Design and deploy Azure Functions to support data processing workflows.
- Monitor and troubleshoot data pipeline performance and reliability issues.
- Ensure data quality, security, and compliance with industry standards and best practices.
- Document technical specifications and maintain clear and concise project documentation.
Mandatory Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Preferred Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Years of Experience Required: 3-6 years
Education Qualification: B.E. (B.Tech) / M.E. / M.Tech
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
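The SQL Server integration work in this role is often scripted from Python. A minimal hedged sketch using pyodbc; the connection string, table, and watermark column are placeholders, not from the posting:

```python
# SQL Server access sketch using pyodbc. Connection details, table and
# column names are hypothetical placeholders; store secrets in a vault.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=exampledb;UID=etl_user;PWD=<secret>"
)

conn = pyodbc.connect(conn_str)
try:
    cur = conn.cursor()
    # Pull rows changed since the last watermark (hypothetical schema).
    cur.execute(
        "SELECT customer_id, updated_at FROM dbo.customers "
        "WHERE updated_at > ?",
        "2025-01-01",
    )
    for row in cur.fetchall():
        print(row.customer_id, row.updated_at)
finally:
    conn.close()
```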
Posted 2 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager
Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You’ll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
Mandatory Skill Sets: Databricks
Preferred Skill Sets: Databricks
Years of Experience Required: 7-14 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Bachelor of Technology, Master of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: August 11, 2025
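Unity Catalog permissions like those described above are managed with SQL GRANT statements over the three-level catalog.schema.table namespace. A minimal notebook sketch; the catalog, schema, table, and group names are hypothetical:

```python
# Unity Catalog governance sketch (Databricks notebook, `spark` predefined).
# Catalog/schema/table/group names are hypothetical placeholders.
spark.sql("CREATE CATALOG IF NOT EXISTS finance")
spark.sql("CREATE SCHEMA IF NOT EXISTS finance.risk")
spark.sql("""CREATE TABLE IF NOT EXISTS finance.risk.exposures
             (trade_id STRING, exposure DOUBLE)""")

# Grant read access to an account-level group on a specific table.
spark.sql("GRANT SELECT ON TABLE finance.risk.exposures TO `risk-analysts`")

# Audit current grants on the table.
spark.sql("SHOW GRANTS ON TABLE finance.risk.exposures").show(truncate=False)
```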
Posted 2 days ago
8.0 years
0 Lacs
India
Remote
Quant Engineer
Location: Bangalore (Remote)
Employment Type: Full-time
Job Description:
- Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus.
- Good understanding of maths and finance, as the role interacts with quant devs, analysts, and traders. Familiarity with, e.g., PnL, greeks, volatility, partial derivatives, the normal distribution, etc. Financial and/or trading exposure is nice to have, particularly in energy commodities.
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring, and back-testing are in place.
- Translate trader or quant analyst needs into software product requirements.
- Prototype and implement data pipelines.
- Coordinate closely with analysts and quants during development of models, acting as a technical support and coach.
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards.
- Transform proofs of concept into a larger deployable product, in Shell and outside.
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities.
- Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both technical and non-technical audiences.
- Mentor and coach other teammates who are upskilling in quant engineering.
Professional Qualifications & Skills
Educational Qualification:
- Graduation/postgraduation/PhD with 8+ years' work experience as a software developer/data scientist.
- Degree in STEM: computer science, engineering, mathematics, or a relevant field of applied mathematics.
- Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment.
Required Skills:
- Expert in core Python with the Python scientific stack/ecosystem (incl. pandas, numpy, scipy, stats), and a second strongly typed language (e.g., C#, C++, Rust, or Java).
- Expert in application design, security, release, testing, and packaging.
- Mastery of SQL/NoSQL databases and data pipeline orchestration tools.
- Mastery of concurrent/distributed programming and performance optimisation methods.
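The finance concepts named above (greeks, volatility, the normal distribution, partial derivatives) come together in the Black-Scholes model, where, for example, a call's delta is N(d1). A short illustrative sketch using the scipy stack; the inputs are made-up numbers, not anything from the posting:

```python
# Black-Scholes call price and greeks sketch. Inputs are illustrative only.
import math
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call price, delta and vega under Black-Scholes."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    price = S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)
    delta = norm.cdf(d1)                      # dPrice/dSpot
    vega = S * norm.pdf(d1) * math.sqrt(T)    # dPrice/dVol
    return price, delta, vega

price, delta, vega = bs_call(S=100, K=105, T=0.5, r=0.03, sigma=0.25)
print(f"price={price:.2f} delta={delta:.3f} vega={vega:.2f}")
```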
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚀 We're Hiring: Data Scientist (AI/ML | Industrial IoT | Time Series)
📍 Location: Hyderabad
🧠 Experience: 5+ years
Join our AI/ML initiative to predict industrial alarms from complex sensor data in refinery environments. You'll lead the development of predictive models using time series data and maintenance logs, and work in an Expert-in-the-Loop (EITL) setup with domain experts.
🔍 Key Responsibilities:
- Develop ML models for anomaly detection and alarm prediction from sensor/IoT time series data.
- Collaborate with domain experts to validate model outputs.
- Implement data preprocessing, feature engineering, and scalable pipelines.
- Monitor model performance, drift, and explainability (SHAP, confidence), and manage retraining.
- Contribute to production-grade MLOps workflows.
✅ What You Bring:
- 5+ years' experience in Data Science/ML, especially with time series models (LSTM, ARIMA, autoencoders).
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch).
- Hands-on experience with IoT/sensor data in manufacturing/industrial domains.
- Experience with MLOps tools (MLflow, SageMaker, Kubeflow).
- Strong grasp of model interpretability, ETL (Pandas, PySpark, SQL), and cloud deployment.
✨ Bonus Points:
- Background in oil & gas, SCADA systems, maintenance logs, or industrial control systems.
- Experience with cloud platforms (AWS/GCP/Azure) and alarm classification standards.
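Anomaly detection on sensor streams like this is often prototyped with rolling-window features plus an unsupervised detector before moving to LSTMs or autoencoders. A minimal sketch on synthetic data, assuming nothing about the actual refinery setup; window size and contamination rate are illustrative:

```python
# Sensor anomaly detection sketch: rolling features + IsolationForest.
# The data is synthetic; window size and contamination are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
ts = pd.Series(rng.normal(50.0, 1.0, 1000))   # synthetic sensor readings
ts.iloc[700:710] += 8.0                        # injected anomaly burst

# Rolling-window features capture local level and variability.
feats = pd.DataFrame({
    "mean": ts.rolling(20).mean(),
    "std": ts.rolling(20).std(),
    "value": ts,
}).dropna()

model = IsolationForest(contamination=0.01, random_state=0).fit(feats)
feats["anomaly"] = model.predict(feats) == -1  # -1 marks outliers

print(feats.index[feats["anomaly"]].tolist()[:10])  # flagged timestamps
```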
Posted 2 days ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Role: Data Scientist
Experience: 4-7 years
Location: Mumbai
Key Responsibilities:
- Lead the development and deployment of AI/ML and generative AI models, managing full project lifecycles from conception to delivery.
- Collaborate with senior stakeholders to identify strategic opportunities for AI applications, aligning them with business objectives.
- Collaborate with and oversee teams of data scientists and engineers, providing guidance and mentorship and ensuring high-quality deliverables.
- Drive research and innovation in AI techniques and tools, fostering an environment of continuous improvement and learning.
- Ensure compliance with data governance standards and ethical AI practices in all implementations.
- Present AI insights and project progress to clients and internal leadership, adapting technical language to suit audience expertise levels.
Qualifications:
- Advanced degree (Master's or Ph.D.) in Computer Science, Artificial Intelligence, Data Science, or a related discipline.
- Strong background in AI/ML, NLP, and generative AI models, including SLMs and LLMs such as GPT and BERT.
- Extensive experience managing AI projects and leading teams in developing AI-based solutions.
- Deep understanding and hands-on experience with generative algorithms, particularly models such as GPT, VAEs, GANs, and LLMs, and libraries like TensorFlow, PyTorch, and Keras.
- Cloud computing: experience with platforms like Azure, Google Cloud, or AWS; familiarity with tools for model deployment and monitoring.
- Proven track record of delivering high-impact AI projects in a consultancy environment.
- Strong business acumen, with the ability to translate complex algorithms into actionable business strategies.
- Outstanding leadership and interpersonal skills, adept at fostering collaboration across diverse teams.
Preferred:
- Programming: Python, PySpark, SQL, R, and other relevant languages.
- Minimum experience: 7-8+ years.
Mandatory Skill Sets: Data Science/AI/ML
Preferred Skill Sets: Data Science/AI/ML
Years of Experience Required: 4-7 years
Education Qualification: B.E. (B.Tech) / M.E. / M.Tech
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Master of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: ETL Tools
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
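As a small illustration of the generative-model tooling named above, the Hugging Face transformers library exposes text generation behind a one-line pipeline. A minimal sketch; gpt2 is used here purely as a small public example checkpoint (the posting names no specific model):

```python
# Text-generation sketch with Hugging Face transformers.
# gpt2 is an arbitrary small public checkpoint, chosen only for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

out = generator(
    "Key risks in a consumer loan portfolio include",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```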
Posted 2 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Risk
Management Level: Associate
Job Description & Summary: At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in internal audit at PwC help build, optimise and deliver end-to-end internal audit services to clients in all industries. This includes IA function setup and transformation, co-sourcing, outsourcing and managed services, using AI and other risk technology and delivery models. IA capabilities are combined with other industry and technical expertise, in areas like cyber, forensics and compliance, to address the full spectrum of risks. This helps organisations to harness the power of IA to help the organisation protect value and navigate disruption, and obtain confidence to take risks to power growth.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: We are looking for a skilled Azure Data Engineer to join our Data Analytics (DA) team. The ideal candidate will have a strong understanding of Azure technologies and components, along with the ability to architect web applications on the Azure framework. As part of the team, you will be responsible for end-to-end implementation projects utilizing GenAI-based models and frameworks, contributing to our innovative data-driven solutions.
Responsibilities:
- Architecture & Design: Design and architect web applications on the Azure platform, ensuring scalability, reliability, and performance.
- End-to-End Implementation: Lead the implementation of data solutions from ingestion to visualization, leveraging GenAI-based models and frameworks to drive analytics initiatives.
- Development & Deployment: Write clean, maintainable code in Python and PySpark, and deploy applications and services on Azure using best practices.
- Data Engineering: Build robust data pipelines and workflows to automate data processing and ensure seamless integration across various data sources.
- Collaboration: Work closely with cross-functional teams, including data scientists, product managers, and business analysts, to understand data requirements and develop effective solutions.
- Optimization: Optimize data processes and pipelines to improve performance and reduce costs, utilizing services within the Azure ecosystem.
- Documentation & Reporting: Document architecture, development processes, and technical specifications; provide regular updates to stakeholders.
Technical Skills and Requirements:
- Azure Expertise: Strong knowledge of Azure components such as Azure Data Lake, Azure Databricks, Azure SQL Database, Azure Storage, and Azure Functions, among others.
- Programming Languages: Proficient in Python and PySpark for data processing, scripting, and integration tasks.
- Big Data Technologies: Familiarity with big data tools and frameworks, especially Hadoop, and experience with data engineering concepts.
- Databricks: Experience using Azure Databricks for building scalable and efficient data pipelines.
- Database Management: Strong SQL skills for data querying, manipulation, and management.
- Data Visualization (if necessary): Basic knowledge of Power BI or similar tools for creating interactive reports and dashboards.
- Cloud Understanding: Familiarity with AWS is a plus, enabling cross-platform integration or migration tasks.
Mandatory Skill Sets: As above
Preferred Skill Sets: As above
Years of Experience: 3 to 8 years of professional experience in data engineering, with a focus on Azure-based solutions and web application architecture.
Education Qualification: Bachelor's degree (B.Tech) or Master's degree (M.Tech, MCA) in Economics, Computer Science, Information Technology, Mathematics, or Statistics. A background in the Finance domain is preferred.
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Bachelor Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Generative AI
Optional Skills: Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
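A typical ingestion-to-visualization step from this stack reads raw files from Azure Data Lake into Databricks and publishes an aggregated Delta table for Power BI. A minimal sketch; the storage account, container, and column names are hypothetical placeholders:

```python
# ADLS-to-aggregate sketch for Azure Databricks (`spark` predefined).
# Storage paths, table and column names are hypothetical placeholders.
from pyspark.sql import functions as F

raw = spark.read.format("csv").option("header", "true").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/invoices/"
)

daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .groupBy("invoice_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("customer_id").alias("customers"))
)

# Persist as a table that Power BI can query through the SQL endpoint.
daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_invoices")
```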
Posted 2 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Deploy and configure components to production environments.
Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory Skill Sets: AWS Data Engineer
Preferred Skill Sets: AWS Data Engineer
Years of Experience Required: 4-8
Education Qualification: B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified) - Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: AWS Development, Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
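An AWS Glue job from this stack typically reads a catalogued source, transforms it with PySpark, and writes to S3 (or Redshift). A minimal job-script sketch; the database, table, and bucket names are hypothetical placeholders:

```python
# AWS Glue job script sketch. Catalog/table/bucket names are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table as a Spark DataFrame.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Simple transformation before landing in the curated zone on S3.
curated = orders.filter(F.col("status") == "COMPLETE")
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```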
Posted 2 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Job Accountabilities - Hands on Experience in Azure Data Components like ADF / Databricks / Azure SQL - Good Programming Logic Sense in SQL - Good PySpark knowledge for Azure Data Bricks - Data Lake and Data Warehouse Concept Understanding - Unit and Integration testing understanding - Good communication skill to express thoghts and interact with business users - Understanding of Data Security and Data Compliance - Agile Model Understanding - Project Documentation Understanding - Certification (Good to have) - Domain Knowledge Mandatory Skill Sets Azure DE, ADB, ADF, ADL Preferred Skill Sets Azure DE, ADB, ADF, ADL Years Of Experience Required 3 to 9 years Education Qualification Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 2 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Job Accountabilities - Hands on Experience in Azure Data Components like ADF / Databricks / Azure SQL - Good Programming Logic Sense in SQL - Good PySpark knowledge for Azure Data Bricks - Data Lake and Data Warehouse Concept Understanding - Unit and Integration testing understanding - Good communication skill to express thoghts and interact with business users - Understanding of Data Security and Data Compliance - Agile Model Understanding - Project Documentation Understanding - Certification (Good to have) - Domain Knowledge Mandatory Skill Sets Azure DE, ADB, ADF, ADL Preferred Skill Sets Azure DE, ADB, ADF, ADL Years Of Experience Required 3 to 9 years Education Qualification Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Goregaon, Maharashtra, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: Job Description: Key Responsibilities: Designs, implements and maintains reliable and scalable data infrastructure Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data Designs, develops, and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud Mentors and shares knowledge with the team to provide design reviews, discussions and prototypes Works with customers to deploy, manage, and audit standard processes for cloud products Adheres to and advocates for software & data engineering standard processes (e.g. technical design and review, unit testing, monitoring, alerting, source control, code review & documentation) Deploys secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves CI/CD pipeline Service reliability and following site-reliability engineering standard processes: on-call rotations for services they maintain, responsible for defining and maintaining SLAs. Designs, builds, deploys and maintains infrastructure as code. Containerizes server deployments. Part of a cross-disciplinary team working closely with other data engineers, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup Job Requirements: Education: Bachelor or higher degree in computer science, Engineering, Information Systems or other quantitative fields Experience: Years of experience: 8 to 12 years relevant experience Deep and hands-on experience designing, planning, productionizing, maintaining and documenting reliable and scalable data infrastructure and data products in complex environments Hands-on experience with: Spark for data processing (batch and/or real-time) Configuring Delta Lake on Azure Databricks Languages: SQL, PySpark, Python Cloud platforms: Azure Azure Data Factory (must), Azure Data Lake (must), Azure SQL DB (must), Synapse (must), SQL Pools (must), Databricks (good to have) Designing data solutions in Azure incl. data distributions and partitions, scalability, cost-management, disaster recovery and high availability Azure DevOps (or similar tools) for source control & building CI/CD pipelines Experience designing and implementing large-scale distributed systems Customer management and front-ending, and the ability to lead large organizations through influence Desirable Criteria: Strong customer management - own the delivery for Data track with customer stakeholders Continuous learning and improvement attitude Key Behaviors: Empathetic: Cares about our people, our community and our planet Curious: Seeks to explore and excel Creative: Imagines the extraordinary Inclusive: Brings out the best in each other Mandatory Skill Sets: ‘Must have’ knowledge, skills and experiences Synapse, ADF, Spark, SQL, PySpark, Spark SQL Preferred Skill Sets: ‘Good to have’ knowledge, skills and experiences Cosmos DB, Data modeling, Databricks, Power BI, experience of having built analytics solutions with SAP as a data source for ingestion pipelines. Depth: Candidate should have in-depth hands-on experience w.r.t. end-to-end solution designing in Azure Data Lake, ADF pipeline development and debugging, various file formats, Synapse and Databricks, with excellent coding skills in PySpark and SQL and logic-building capabilities. He/she should have sound knowledge of optimizing workloads. Years Of Experience Required: 8 to 12 years relevant experience Education Qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% above) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration, Master of Engineering Degrees/Field Of Study Preferred: Certifications (if blank, certifications not specified) Required Skills Apache Synapse Optional Skills Microsoft Power Business Intelligence (BI) Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
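As a purely illustrative aside on the "Configuring Delta Lake on Azure Databricks" requirement above, here is a minimal PySpark sketch of an ACID upsert into a Delta table; the paths and column names are hypothetical, and a Databricks runtime with Delta available is assumed.

```python
# Minimal sketch (not from the posting) of a Delta Lake ACID upsert on
# Azure Databricks; paths and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental batch landed by an ADF pipeline (hypothetical path).
updates = spark.read.parquet("/mnt/raw/customers_daily/")

# Hypothetical curated Delta table maintained by the pipeline.
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# MERGE provides the transactional upsert semantics Delta Lake is used for here.
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```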
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Accellor is looking for a Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required. Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions Migrate and integrate data from legacy SQL Server environments into modern data platforms Optimize data pipelines and workflows for scalability, efficiency, and reliability Provide technical leadership and mentorship to junior developers and other team members Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability Develop, maintain, and enforce data engineering best practices, coding standards, and documentation Conduct code reviews and provide constructive feedback to improve team productivity and code quality Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms Requirements Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must Minimum 3 years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools Proficiency in SQL with extensive experience in complex queries, performance tuning, and data modeling Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage Experience working with both structured and unstructured data sources Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues Proven ability to work independently, as part of a team, and in leadership roles Strong communication skills with the ability to translate complex technical concepts into business terms Mandatory Skills Experience with Data Lake, Data Warehouse, Delta Lake Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus Benefits Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: you can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers. Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes Communication skills training, Stress Management program, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Personal Accident Insurance, Periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses.
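To make the "migrate legacy SQL Server into a modern platform" work above concrete, here is an illustrative-only PySpark-notebook sketch; the JDBC URL, credentials, table name, and output path are all hypothetical and would differ in a real Fabric or Databricks workspace.

```python
# Illustrative sketch of moving a legacy SQL Server table into a lakehouse
# table from a PySpark notebook; connection details and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")      # in practice, fetch from a secret store
    .option("password", "...")
    .load())

# Light cleansing before landing in the lake: typed dates, no negative totals.
cleaned = (orders
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_total") >= 0))

# Hypothetical lakehouse destination written in Delta format.
cleaned.write.mode("overwrite").format("delta").save("Tables/orders")
```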
Posted 3 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
As an expectation a fitting candidate must have/be: Ability to analyze business problems and cut through the data challenges. Ability to churn the raw corpus and develop a data/ML model to provide business analytics (not just EDA), machine-learning-based document processing and information retrieval Quick to develop POCs and transform them into high-scale, production-ready code. Experience in extracting data from complex unstructured documents using NLP-based technologies. Good to have: Document analysis using image processing/computer vision and geometric deep learning Technology Stack: Python as a primary programming language. Conceptual understanding of classic ML/DL algorithms like Regression, Support Vectors, Decision tree, Clustering, Random Forest, CART, Ensemble, Neural Networks, CNN, RNN, LSTM etc. Programming: Must Have: Must be hands-on with data structures using List, tuple, dictionary, collections, iterators, Pandas, NumPy and object-oriented programming Good to have: Design patterns/System design, Cython ML libraries: Must Have: Scikit-learn, XGBoost, imblearn, SciPy, Gensim Good to have: Matplotlib/Plotly, LIME/SHAP Data extraction and handling: Must Have: Dask/Modin, BeautifulSoup/Scrapy, Multiprocessing Good to have: Data Augmentation, PySpark, Accelerate NLP/Text analytics: Must Have: Bag of words, text ranking algorithm, Word2vec, language model, entity recognition, CRF/HMM, topic modelling, Sequence to Sequence Good to have: Machine comprehension, translation, elastic search Deep learning: Must Have: TensorFlow/PyTorch, Neural nets, Sequential models, CNN, LSTM/GRU/RNN, Attention, Transformers, Residual Networks Good to have: Knowledge of optimization, Distributed training/computing, Language models Software peripherals: Must Have: REST services, SQL/NoSQL, UNIX, Code versioning Good to have: Docker containers, data versioning Research: Must Have: Well versed with the latest trends in the ML and DL area. Zeal to research and implement cutting-edge areas in the AI segment to solve complex problems Good to have: Contributed to research papers/patents in ML and DL published on the internet Morningstar is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues. Morningstar India Private Ltd. (Delhi) Legal Entity
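As a toy illustration of the bag-of-words text-analytics skills this posting lists, here is a minimal scikit-learn sketch; the documents and labels are fabricated purely for demonstration.

```python
# Toy bag-of-words text classification using the scikit-learn stack the
# posting names; the sample documents and labels below are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "quarterly revenue rose sharply on strong demand",
    "the fund posted heavy losses this quarter",
    "earnings beat analyst estimates again",
    "shares fell after the surprise downgrade",
]
labels = [1, 0, 1, 0]  # 1 = positive tone, 0 = negative (toy labels)

# TF-IDF over unigrams and bigrams feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["profits rose on strong earnings"]))  # expected: [1]
```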
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Do you have in-depth experience in Nat Cat models and tools? Do you enjoy being part of a distributed team of Cat Model specialists with diverse backgrounds, educations, and skills? Are you passionate about researching, debugging issues, and developing tools from scratch? We are seeking a curious individual to join our NatCat infrastructure development team. As a Cat Model Specialist, you will collaborate with the Cat Perils Cat & Geo Modelling team to maintain models, tools, and applications used in the NatCat costing process. Your responsibilities will include supporting model developers in validating their models, building concepts and tools for exposure reporting, and assisting in model maintenance and validation. You will be part of the Cat & Geo Modelling team based in Zurich and Bangalore, which specializes in natural science, engineering, and statistics. The team is responsible for Swiss Re's global natural catastrophe risk assessment and focuses on advancing innovative probabilistic and proprietary modelling technology for earthquake, windstorm, and flood hazards. Main Tasks/Activities/Responsibilities: - Conceptualize and build NatCat applications using sophisticated analytical technologies - Collaborate with model developers to implement and test models in the internal framework - Develop and implement concepts to enhance the internal modelling framework - Coordinate with various teams for successful model and tool releases - Provide user support on model and tool related issues - Install and maintain the Oasis setup and contribute to the development of new functionality - Coordinate platform setup and maintenance with third-party vendors About You: - Graduate or Post-Graduate degree in mathematics, engineering, computer science, or equivalent quantitative training - Minimum 5 years of experience in the Cat Modelling domain - Reliable, committed, hands-on, with experience in Nat Cat modelling - Previous experience with catastrophe models or exposure reporting tools is a plus - Strong programming skills in MATLAB, MS SQL, Python, PySpark, R - Experience in consuming WCF/RESTful services - Knowledge of Business Intelligence, reporting, and data analysis solutions - Experience in an agile development environment is beneficial - Familiarity with Azure services like Storage, Data Factory, Synapse, and Databricks - Good interpersonal skills, self-driven, and ability to work in a global team - Strong analytical and problem-solving skills About Swiss Re: Swiss Re is a leading provider of reinsurance, insurance, and insurance-based risk transfer solutions. With over 14,000 employees worldwide, we anticipate and manage various risks to make the world more resilient. We cover a wide range of risks from natural catastrophes to cybercrime, offering solutions in both Property & Casualty and Life & Health sectors. If you are an experienced professional returning to the workforce after a career break, we welcome you to apply for positions that match your skills and experience.
Posted 3 days ago
5.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Senior Data Engineer (Azure) at Fractal, you will be an integral part of large-scale client business development and delivery engagements. You will have the opportunity to develop the software and systems needed for end-to-end execution on large projects, working across all phases of the SDLC and utilizing Software Engineering principles to build scaled solutions. Your role will involve building the knowledge base required to deliver increasingly complex technology projects. To be successful in this role, you should hold a bachelor's degree in Computer Science or a related field with 5-10 years of technology experience. You should have strong experience in System Integration, Application Development, or Data-Warehouse projects, across technologies used in the enterprise space. Your software development experience should include working with object-oriented languages such as Python, PySpark, and frameworks. You should also have expertise in relational and dimensional modeling, including big data technologies. Expertise in Microsoft Azure is mandatory for this role, including components like Azure Databricks, Azure Data Factory, Azure Data Lake Storage, Azure SQL, HDInsight, and ML Service. Proficiency in Python and Spark is required, along with a good understanding of enabling analytics using cloud technology and ML Ops. Experience in Azure Infrastructure and Azure DevOps will be a strong plus. You should have a proven track record of keeping existing technical skills up-to-date and developing new ones to contribute effectively to deep architecture discussions around systems and applications in the cloud (Azure). If you are an extraordinary developer who loves to push the boundaries to solve complex business problems using creative solutions, and if you possess the characteristics of a forward thinker and self-starter, then this role at Fractal is the perfect opportunity for you. Join us in working with happy, enthusiastic over-achievers and experience wild growth in your career. If this opportunity is not the right fit for you currently, you can express your interest in future opportunities by clicking on "Introduce Yourself" in the top-right corner of the page or creating an account to set up email alerts for new job postings that align with your interests.
Posted 3 days ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for the Pune / Noida / Gurgaon locations. Position: Senior AWS Data Engineer Location: Pune / Noida / Gurgaon Hybrid: 3 days office, 2 days work from home Preferred: Immediate joiners or 0-30 days notice period Job Description: 6 to 10 years of overall experience. Good experience in Data Engineering is required. Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills are required. About Iris Software Inc. With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
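For a rough sense of the AWS Glue + Airflow combination this role lists, here is a hedged sketch of an Airflow DAG triggering a Glue job; the DAG id, job name, and region are hypothetical, the Glue job is assumed to already exist, and a recent Airflow with the Amazon provider package installed is assumed.

```python
# Minimal sketch of orchestrating an existing AWS Glue job from Airflow;
# the DAG id, job name, and region below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_sales_to_redshift",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ style scheduling
    catchup=False,
) as dag:
    run_etl = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="sales_to_redshift",   # hypothetical, pre-created Glue job
        region_name="ap-south-1",
    )
```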
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
🚀 We’re Hiring: ML Ops / Data Engineer (Minimum 5 Years Experience) 🚀 Are you a data engineering professional with at least 5 years of experience in building scalable data pipelines and deploying machine learning models? We’re looking for a talented ML Ops / Data Engineer to join our team in Noida! 🔍 What You’ll Do: Design and maintain robust data pipelines for large-scale datasets Collaborate with data scientists to deploy and monitor ML models in production Develop and optimize ETL processes using AWS Glue, PySpark, and SQL Automate ML workflows using tools like Kubeflow, MLflow, or TFX Ensure model versioning, logging, and performance tracking Work with cloud platforms (AWS preferred) and modern data storage solutions Ensure data security, integrity, and compliance 🛠️ Must-Have Skills: Minimum 5 years of experience in data engineering or ML Ops Proficiency in AWS Services (Lambda, EventBridge, Fargate) Strong knowledge of SQL, Docker, and Kubernetes Experience with AWS Glue, PySpark, and containerized environments 📍 Location: Noida 💼 Employment Type: Permanent 🔑 Primary Skill: AWS Cloud, ML Ops, Data Engineering If you're ready to take your career to the next level and work on cutting-edge ML infrastructure, we’d love to connect! #Hiring #MLOps #DataEngineering #AWSJobs #ETL #MachineLearning #NoidaJobs #TechCareers #DataPipelines #5YearsExperience
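As an illustration of the "model versioning, logging, and performance tracking" duty above, here is a minimal sketch using MLflow, one of the tools the posting names; the experiment name, dataset, and hyperparameters are hypothetical.

```python
# Hedged sketch of logging a trained model run with MLflow; experiment name
# and parameters are hypothetical, and the data below is synthetic.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

mlflow.set_experiment("churn-model")  # hypothetical experiment name
with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", clf.score(X, y))
    mlflow.sklearn.log_model(clf, "model")  # logs a versioned model artifact
```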
Posted 3 days ago
0.0 years
0 Lacs
Varthur, Bengaluru, Karnataka
On-site
Application Developer Bangalore, Karnataka, India AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable – enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage. What you’ll be DOING What will your essential responsibilities include? We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud. Key Responsibilities: Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy. Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements. Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment. Configure and manage Databricks clusters for performance optimization and cost efficiency. Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations. Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines. Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns. Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output. Ensure data quality, consistency, and governance across legacy and cloud-based pipelines. Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics. Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness. Develop maintainable and efficient ETL logic and scripts following best practices in security and performance. Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability. Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing. Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation. Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards. Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk.
What you will BRING We’re looking for someone who has these abilities and skills: Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization Expertise in Azure Databricks + PySpark, including: Notebook development Cluster configuration and tuning Delta Lake (ACID, versioning, time travel) Job orchestration via Databricks Jobs or ADF Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs Strong hands-on experience with Azure Data Factory: Building and managing pipelines Parameterization and dynamic datasets Notebook integration and pipeline monitoring Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell Strong understanding of data warehousing, dimensional modeling, and data profiling Familiarity with Git, CI/CD pipelines, and modern DevOps practices Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures Awareness of Azure Functions, App Services, API Management, and Application Insights Understanding of Azure Key Vault for secrets and credential management Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.
Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars: Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action : We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving. For more information, please see axaxl.com/sustainability.
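As a short, illustrative-only sketch of two techniques this posting names — reading ADLS Gen2 with Spark APIs and Delta Lake time travel — here is a minimal PySpark example; the storage account, container, paths, and version number are hypothetical, and cluster-level authentication to the lake is assumed.

```python
# Illustrative sketch: ADLS Gen2 access via abfss:// and Delta time travel.
# Storage account, container, paths, and version number are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Direct read from ADLS Gen2 using the abfss:// scheme (hypothetical path).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/claims/")

# Delta Lake time travel: read the curated table as of an earlier version,
# e.g. for audit comparisons or rollback investigations.
previous = (spark.read.format("delta")
    .option("versionAsOf", 5)
    .load("abfss://curated@examplelake.dfs.core.windows.net/claims/"))

print(raw.count(), previous.count())
```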
Posted 3 days ago