
8417 PySpark Jobs - Page 13

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

India

Remote

Quant Engineer | Location: Bangalore (Remote) | Full-time

Job Description: Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus. Good understanding of maths and finance, as the role interacts with quant devs, analysts and traders; familiarity with e.g. PnL, greeks, volatility, partial derivatives, the normal distribution etc. Financial and/or trading exposure is nice to have, particularly energy commodities.

Responsibilities:
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring and back-testing are in place
- Translate trader or quant analysts' needs into software product requirements
- Prototype and implement data pipelines
- Coordinate closely with analysts and quants during development of models, acting as technical support and coach
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards
- Transform proofs of concept into a larger deployable product in Shell and outside
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities
- Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both a technical and non-technical audience
- Mentor and coach other teammates who are upskilling in quants engineering

Professional Qualifications & Skills:
- Educational qualification: graduation/postgraduation/PhD with 8+ years' work experience as a software developer or data scientist; degree in STEM, computer science, engineering, mathematics, or a relevant field of applied mathematics
- Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment

Required Skills:
- Expert in core Python with the Python scientific stack/ecosystem (incl. pandas, NumPy, SciPy, stats), and a second strongly typed language (e.g. C#, C++, Rust or Java)
- Expert in application design, security, release, testing and packaging
- Mastery of SQL/NoSQL databases and data pipeline orchestration tools
- Mastery of concurrent/distributed programming and performance optimisation methods
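For readers unfamiliar with the greeks this listing mentions, here is a minimal, purely illustrative sketch (not from the employer) of computing Black-Scholes delta and vega with the SciPy stack the posting names:

```python
# Illustrative only: Black-Scholes call delta and vega using scipy.stats.norm.
from math import log, sqrt
from scipy.stats import norm

def bs_delta_vega(spot, strike, rate, vol, t):
    """Return (call delta, vega) for a European option under Black-Scholes."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    delta = norm.cdf(d1)                  # sensitivity of price to the underlying
    vega = spot * norm.pdf(d1) * sqrt(t)  # sensitivity of price to volatility
    return delta, vega

print(bs_delta_vega(spot=100.0, strike=95.0, rate=0.03, vol=0.25, t=0.5))
```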

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🚀 We're Hiring: Data Scientist (AI/ML | Industrial IoT | Time Series) 📍 Location: Hyderabad 🧠 Experience: 5+ Years Join our AI/ML initiative to predict industrial alarms from complex sensor data in refinery environments. You'll lead the development of predictive models using time series data, maintenance logs, and work in an Expert-in-the-Loop (EITL) setup with domain experts. 🔍 Key Responsibilities: Develop ML models for anomaly detection & alarm prediction from sensor/IoT time series data. Collaborate with domain experts to validate model outputs. Implement data preprocessing, feature engineering & scalable pipelines. Monitor model performance, drift, explainability (SHAP, confidence), and retraining. Contribute to production-grade MLOps workflows. ✅ What You Bring: 5+ yrs experience in Data Science/ML, especially with time series models (LSTM, ARIMA, Autoencoders). Proficiency in Python, ML libraries (scikit-learn, TensorFlow, PyTorch). Hands-on with IoT/sensor data in manufacturing/industrial domains. Experience with MLOps tools (MLflow, SageMaker, Kubeflow). Strong grasp of model interpretability, ETL (Pandas, PySpark, SQL), and cloud deployment. ✨ Bonus Points: Background in oil & gas, SCADA systems, maintenance logs, or industrial control systems. Experience with cloud platforms (AWS/GCP/Azure) and alarm classification standards.
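As a purely illustrative sketch of the sensor-anomaly work described above (not code from this employer; the columns and synthetic data are invented), an IsolationForest pass over toy readings:

```python
# Hypothetical example: flag anomalous sensor readings with scikit-learn's IsolationForest.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "temperature": rng.normal(80, 5, 1000),   # stand-in for refinery sensor feeds
    "pressure": rng.normal(30, 2, 1000),
})
model = IsolationForest(contamination=0.01, random_state=42).fit(df)
df["anomaly"] = model.predict(df)             # -1 marks a potential alarm precursor
print(df[df["anomaly"] == -1].head())
```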

Posted 3 days ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Responsibilities Data Scientist Exp : 4—7 Years Location: Mumbai Key Responsibilities: Lead the development and deployment of AIML/generative AI models, managing full project lifecycles from conception to delivery. Collaborate with senior stakeholders to identify strategic opportunities for AI applications, aligning them with business objectives. Collaborate/Oversee teams of data scientists and engineers, providing guidance, mentorship, and ensuring high-quality deliverables. Drive research and innovation in AI techniques and tools, fostering an environment of continuous improvement and learning. Ensure compliance with data governance standards and ethical AI practices in all implementations. Present AI insights and project progress to clients and internal leadership, adapting technical language to suit audience expertise levels. Qualifications: Advanced degree (Master’s or Ph.D.) in Computer Science, Artificial Intelligence, Data Science, or related discipline. Strong background in AI/ML, NLP, and Generative AI models, including SLMs,LLMs like GPT and BERT. Extensive experience managing AI projects and leading teams in developing AI-based solutions. Deep understanding and hands-on experience with generative algorithms, particularly models (e.g., GPT, VAE, GANs, LLMs), and libraries like TensorFlow, PyTorch, and Keras. Cloud Computing: Experience with platforms like Azure, Google Cloud, or AWS. Familiarity with tools for model deployment and monitoring. Proven track record of delivering high-impact AI projects in a consultancy environment. Strong business acumen, with the ability to translate complex algorithms into actionable business strategies. 
Outstanding leadership and interpersonal skills, adept at fostering collaboration across diverse teams. Preferred: Programming: Python, pyspark, SQL, R, and other relevant languages. Min Exp - 7-8+ years Mandatory Skill Sets Data Science/AI/ML Preferred Skill Sets Data Science/AI/ML Years Of Experience Required 4—7 years Education Qualification B.E.(B.Tech)/M.E/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Engineering, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills ETL Tools Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
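For illustration only, a tiny example of the generative-model tooling this role references (Hugging Face Transformers with a small GPT-2 checkpoint); it is not part of the job description:

```python
# Hedged sketch: text generation with a small open checkpoint, not a production setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Generative AI can help consultants", max_new_tokens=30, num_return_sequences=1)
print(out[0]["generated_text"])
```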

Posted 3 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Risk Management Level Associate Job Description & Summary At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations including assessing governance and risk management processes and related controls. Those in internal audit at PwC help build, optimise and deliver end-to-end internal audit services to clients in all industries. This includes IA function setup and transformation, co-sourcing, outsourcing and managed services, using AI and other risk technology and delivery models. IA capabilities are combined with other industry and technical expertise, in areas like cyber, forensics and compliance, to address the full spectrum of risks. This helps organisations to harness the power of IA to help the organisation protect value and navigate disruption, and obtain confidence to take risks to power growth. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Job Description & Summary We are looking for a skilled Azure Data Engineer to join our Data Analytics (DA) team. The ideal candidate will have a strong understanding of Azure technologies and components, along with the ability to architect web applications on the Azure framework. As part of the team, you will be responsible for end-to-end implementation projects utilizing GenAI-based models and frameworks, contributing to our innovative data-driven solutions. Responsibilities: Architecture & Design: Design and architect web applications on the Azure platform, ensuring scalability, reliability, and performance. End-to-End Implementation: Lead the implementation of data solutions from ingestion to visualization, leveraging GenAI-based models and frameworks to drive analytics initiatives. Development & Deployment: Write clean, maintainable code in Python, Pyspark and deploy applications and services on Azure using best practices. Data Engineering: Build robust data pipelines and workflows to automate data processing and ensure seamless integration across various data sources. 
Collaboration: Work closely with cross-functional teams, including data scientists, product managers, and business analysts, to understand data requirements and develop effective solutions. Optimization: Optimize data processes and pipelines to improve performance and reduce costs, utilizing services within the Azure ecosystem. Documentation & Reporting: Document architecture, development processes, and technical specifications; provide regular updates to stakeholders. Technical Skills And Requirements: Azure Expertise: Strong knowledge of Azure components such as Azure Data Lake, Azure Databricks, Azure SQL Database, Azure Storage, and Azure Functions, among others. Programming Languages: Proficient in Python and Pyspark for data processing, scripting, and integration tasks. Big Data Technologies: Familiarity with big data tools and frameworks, especially Hadoop, and experience with data engineering concepts. Databricks: Experience using Azure Databricks for building scalable and efficient data pipelines. Database Management: Strong SQL skills for data querying, manipulation, and management. Data Visualization (if necessary): Basic knowledge of Power BI or similar tools for creating interactive reports and dashboards. Cloud Understanding: Familiarity with AWS is a plus, enabling cross-platform integration or migration tasks. Mandatory Skill Sets: As above Preferred Skill Sets: As above Years Of Experience: 3 to 8 years of professional experience in data engineering, with a focus on Azure-based solutions and web application architecture Education Qualification: Bachelor’s degree (B.Tech) or Master’s degree (M.Tech, MCA) in Economics, Computer Science, Information Technology, Mathematics, or Statistics. A background in the Finance domain is preferred. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor Degree Degrees/Field Of Study Preferred: Certifications (if blank, certifications not specified) Required Skills Generative AI Optional Skills Accepting Feedback, Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
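A minimal, hedged PySpark sketch of the ingestion-to-Delta pattern the role describes; the abfss:// paths and column names are placeholders, not PwC resources, and a Delta-enabled session (e.g. Databricks) is assumed:

```python
# Sketch: read raw CSV from ADLS, apply light cleansing, write a curated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

raw = spark.read.option("header", True).csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .filter(F.col("amount") > 0))
clean.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/")
```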

Posted 3 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities Design and build data pipelines & Data lakes to automate ingestion of structured and unstructured data that provide fast, optimized, and robust end-to-end solutions Knowledge about the concepts of data lake and data warehouse Experience working with AWS big data technologies Improve the data quality and reliability of data pipelines through monitoring, validation and failure detection.
Deploy and configure components to production environments Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark, SQL Mandatory Skill Sets AWS Data Engineer Preferred Skill Sets AWS Data Engineer Years Of Experience Required 4-8 Education Qualification B.tech/MBA/MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills AWS Development, Data Engineering Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
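Purely as illustration of the S3-to-S3 PySpark step such a Glue-style job might contain (bucket names and columns are hypothetical, not from the listing):

```python
# Sketch: aggregate raw sales data from S3 and write a partitioned curated dataset back to S3.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_sales_rollup").getOrCreate()

sales = spark.read.parquet("s3://example-raw-bucket/sales/")
daily = (sales.groupBy("sale_date", "region")
              .agg(F.sum("amount").alias("total_amount")))
daily.write.mode("overwrite").partitionBy("sale_date").parquet(
    "s3://example-curated-bucket/sales_daily/")
```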

Posted 3 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
" Responsibilities Job Accountabilities - Hands on Experience in Azure Data Components like ADF / Databricks / Azure SQL - Good Programming Logic Sense in SQL - Good PySpark knowledge for Azure Data Bricks - Data Lake and Data Warehouse Concept Understanding - Unit and Integration testing understanding - Good communication skill to express thoghts and interact with business users - Understanding of Data Security and Data Compliance - Agile Model Understanding - Project Documentation Understanding - Certification (Good to have) - Domain Knowledge Mandatory Skill Sets Azure DE, ADB, ADF, ADL Preferred Skill Sets Azure DE, ADB, ADF, ADL Years Of Experience Required 3 to 9 years Education Qualification Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 3 days ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
" Responsibilities Job Accountabilities - Hands on Experience in Azure Data Components like ADF / Databricks / Azure SQL - Good Programming Logic Sense in SQL - Good PySpark knowledge for Azure Data Bricks - Data Lake and Data Warehouse Concept Understanding - Unit and Integration testing understanding - Good communication skill to express thoghts and interact with business users - Understanding of Data Security and Data Compliance - Agile Model Understanding - Project Documentation Understanding - Certification (Good to have) - Domain Knowledge Mandatory Skill Sets Azure DE, ADB, ADF, ADL Preferred Skill Sets Azure DE, ADB, ADF, ADL Years Of Experience Required 3 to 9 years Education Qualification Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: Job Description: Key Responsibilities: Designs, implements and maintains reliable and scalable data infrastructure Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data Designs, develops, and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud Mentors and shares knowledge with the team to provide design reviews, discussions and prototypes Works with customers to deploy, manage, and audit standard processes for cloud products Adheres to and advocates for software & data engineering standard processes (e.g. 
technical design and review, unit testing, monitoring, alerting, source control, code review & documentation) Deploys secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves CI / CD pipeline Service reliability and following site-reliability engineering standard processes: on-call rotations for services they maintain, responsible for defining and maintaining SLAs. Designs, builds, deploys and maintains infrastructure as code. Containerizes server deployments. Part of a cross-disciplinary team working closely with other data engineers, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup Job Requirements: Education : Bachelor or higher degree in computer science, Engineering, Information Systems or other quantitative fields Experience : Years of experience: 8 to 12 years relevant experience Deep and hands-on experience designing, planning, productionizing, maintaining and documenting reliable and scalable data infrastructure and data products in complex environments Hands on experience with: Spark for data processing (batch and/or real-time) Configuring Delta Lake on Azure Databricks Languages: SQL, pyspark, python Cloud platforms: Azure Azure Data Factory (must) , Azure Data Lake (must), Azure SQL DB (must), Synapse (must), SQL Pools (must), Databricks (good to have) Designing data solutions in Azure incl. data distributions and partitions, scalability, cost-management, disaster recovery and high availability Azure Devops (or similar tools) for source control & building CI/CD pipelines Experience designing and implementing large-scale distributed systems Customer management and front-ending and ability to lead large organizations through influence Desirable Criteria : Strong customer management- own the delivery for Data track with customer stakeholders Continuous learning and improvement attitude Key Behaviors : Empathetic: Cares about our people, our community and our planet Curious: Seeks to explore and excel Creative: Imagines the extraordinary Inclusive: Brings out the best in each other Mandatory Skill Sets: ‘Must have’ knowledge, skills and experiences Synapse, ADF, spark, SQL, pyspark, spark-SQL Preferred Skill Sets: ‘Good to have’ knowledge, skills and experiences Cosmos DB, Data modeling, Databricks, PowerBI, experience of having built analytics solution with SAP as data source for ingestion pipelines. Depth: Candidate should have in-depth hands-on experience w.r.t end to end solution designing in Azure data lake, ADF pipeline development and debugging, various file formats, Synapse and Databricks with excellent coding skills in PySpark and SQL with logic building capabilities. He/she should have sound knowledge of optimizing workloads. Years Of Experience Required: 8 to 12 years relevant experience Education Qualification: BE, B.Tech, ME, M,Tech, MBA, MCA (60% above) Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration, Master of Engineering Degrees/Field Of Study Preferred: Certifications (if blank, certifications not specified) Required Skills Apache Synapse Optional Skills Microsoft Power Business Intelligence (BI) Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
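For context on the Delta Lake capabilities this role lists (versioned tables and time travel on Azure Databricks), a minimal, assumption-laden sketch; the path is a placeholder and a Delta-enabled Spark session with existing table history is presumed:

```python
# Sketch: read the current state of a Delta table plus earlier versions via time travel.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # on Databricks this returns the runtime session
path = "abfss://curated@examplelake.dfs.core.windows.net/customers/"

current = spark.read.format("delta").load(path)
as_of_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)       # by version
as_of_date = (spark.read.format("delta")
                   .option("timestampAsOf", "2024-01-01")                       # by timestamp
                   .load(path))
print(current.count(), as_of_v0.count(), as_of_date.count())
```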

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Accellor is looking for a Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required. Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions Migrate and integrate data from legacy SQL Server environments into modern data platforms Optimize data pipelines and workflows for scalability, efficiency, and reliability Provide technical leadership and mentorship to junior developers and other team members Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability Develop, maintain, and enforce data engineering best practices, coding standards, and documentation Conduct code reviews and provide constructive feedback to improve team productivity and code quality Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms
Requirements: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must Min 3 years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools Proficiency in SQL with extensive experience in complex queries, performance tuning, and data modeling Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage Experience working with both structured and unstructured data sources Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues Proven ability to work independently, as part of a team, and in leadership roles Strong communication skills with the ability to translate complex technical concepts into business terms
Mandatory Skills: Experience with Data lake, Data warehouse, Delta lake Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus
Benefits: Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers. Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes Communication skills training, Stress Management program, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Personal Accident Insurance, Periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses.
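As a hedged illustration of the legacy-SQL-Server-to-lakehouse migration described above (host, database, credentials, and table names are placeholders; the SQL Server JDBC driver must be available on the cluster):

```python
# Sketch: pull a legacy SQL Server table into Spark and land it as a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver_extract").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://legacy-host:1433;databaseName=SalesDB")
          .option("dbtable", "dbo.Orders")
          .option("user", "etl_user")
          .option("password", "<from-secret-store>")   # never hard-code real credentials
          .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
          .load())

# Assumes a "bronze" schema already exists in the target metastore.
orders.write.mode("overwrite").format("delta").saveAsTable("bronze.orders")
```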

Posted 3 days ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

As an expectation, a fitting candidate must have/be: Ability to analyze business problems and cut through the data challenges. Ability to churn the raw corpus and develop a data/ML model to provide business analytics (not just EDA), machine learning based document processing and information retrieval. Quick to develop POCs and transform them into high-scale, production-ready code. Experience in extracting data through complex unstructured documents using NLP based technologies. Good to have: Document analysis using image processing/computer vision and geometric deep learning.
Technology Stack: Python as a primary programming language. Conceptual understanding of classic ML/DL algorithms like Regression, Support Vectors, Decision tree, Clustering, Random Forest, CART, Ensemble, Neural Networks, CNN, RNN, LSTM etc.
Programming: Must Have: Must be hands-on with data structures using List, tuple, dictionary, collections, iterators, Pandas, NumPy and Object-oriented programming. Good to have: Design patterns/System design, Cython.
ML libraries: Must Have: Scikit-learn, XGBoost, imblearn, SciPy, Gensim. Good to have: matplotlib/plotly, LIME/SHAP.
Data extraction and handling: Must Have: DASK/Modin, BeautifulSoup/Scrapy, Multiprocessing. Good to have: Data Augmentation, PySpark, Accelerate.
NLP/Text analytics: Must Have: Bag of words, text ranking algorithm, Word2Vec, language model, entity recognition, CRF/HMM, topic modelling, Sequence to Sequence. Good to have: Machine comprehension, translation, Elasticsearch.
Deep learning: Must Have: TensorFlow/PyTorch, Neural nets, Sequential models, CNN, LSTM/GRU/RNN, Attention, Transformers, Residual Networks. Good to have: Knowledge of optimization, Distributed training/computing, Language models.
Software peripherals: Must Have: REST services, SQL/NoSQL, UNIX, Code versioning. Good to have: Docker containers, data versioning.
Research: Must Have: Well versed with the latest trends in the ML and DL area. Zeal to research and implement cutting-edge areas in the AI segment to solve complex problems. Good to have: Contributed to research papers/patents published on the internet in ML and DL.
Morningstar is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues. I10_MstarIndiaPvtLtd Morningstar India Private Ltd. (Delhi) Legal Entity
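A toy illustration of the Word2Vec/Gensim skill listed above (the corpus is invented, not Morningstar data):

```python
# Sketch: train a tiny Word2Vec model on a toy corpus and query nearest neighbours.
from gensim.models import Word2Vec

corpus = [
    ["revenue", "grew", "ten", "percent", "year", "over", "year"],
    ["operating", "margin", "declined", "on", "higher", "costs"],
    ["the", "fund", "tracks", "a", "broad", "equity", "index"],
]
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=20)
print(model.wv.most_similar("revenue", topn=3))   # results are meaningless at this scale
```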

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Do you have in-depth experience in Nat Cat models and tools? Do you enjoy being part of a distributed team of Cat Model specialists with diverse backgrounds, educations, and skills? Are you passionate about researching, debugging issues, and developing tools from scratch? We are seeking a curious individual to join our NatCat infrastructure development team. As a Cat Model Specialist, you will collaborate with the Cat Perils Cat & Geo Modelling team to maintain models, tools, and applications used in the NatCat costing process. Your responsibilities will include supporting model developers in validating their models, building concepts and tools for exposure reporting, and assisting in model maintenance and validation. You will be part of the Cat & Geo Modelling team based in Zurich and Bangalore, which specializes in natural science, engineering, and statistics. The team is responsible for Swiss Re's global natural catastrophe risk assessment and focuses on advancing innovative probabilistic and proprietary modelling technology for earthquakes, windstorm, and flood hazards.
Main Tasks/Activities/Responsibilities:
- Conceptualize and build NatCat applications using sophisticated analytical technologies
- Collaborate with model developers to implement and test models in the internal framework
- Develop and implement concepts to enhance the internal modelling framework
- Coordinate with various teams for successful model and tool releases
- Provide user support on model- and tool-related issues
- Install and maintain the Oasis setup and contribute to the development of new functionality
- Coordinate platform setup and maintenance with 3rd party vendors
About You:
- Graduate or Post-Graduate degree in mathematics, engineering, computer science, or equivalent quantitative training
- Minimum 5 years of experience in the Cat Modelling domain
- Reliable, committed, hands-on, with experience in Nat Cat modelling
- Previous experience with catastrophe models or exposure reporting tools is a plus
- Strong programming skills in MATLAB, MS SQL, Python, PySpark, R
- Experience in consuming WCF/RESTful services
- Knowledge of Business Intelligence, reporting, and data analysis solutions
- Experience in an agile development environment is beneficial
- Familiarity with Azure services like Storage, Data Factory, Synapse, and Databricks
- Good interpersonal skills, self-driven, and ability to work in a global team
- Strong analytical and problem-solving skills
About Swiss Re: Swiss Re is a leading provider of reinsurance, insurance, and insurance-based risk transfer solutions. With over 14,000 employees worldwide, we anticipate and manage various risks to make the world more resilient. We cover a wide range of risks from natural catastrophes to cybercrime, offering solutions in both Property & Casualty and Life & Health sectors. If you are an experienced professional returning to the workforce after a career break, we welcome you to apply for positions that match your skills and experience.
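As a generic sketch of the "consuming RESTful services" requirement (the endpoint and fields are hypothetical, not a Swiss Re API):

```python
# Sketch: call a REST endpoint and iterate over the returned records.
import requests

resp = requests.get(
    "https://api.example.com/v1/exposures",
    params={"peril": "windstorm", "region": "EU"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("items", []):
    print(record["portfolio_id"], record["total_insured_value"])
```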

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Senior Data Engineer (Azure) at Fractal, you will be an integral part of large-scale client business development and delivery engagements. You will have the opportunity to develop the software and systems needed for end-to-end execution on large projects, working across all phases of SDLC and utilizing Software Engineering principles to build scaled solutions. Your role will involve building the knowledge base required to deliver increasingly complex technology projects. To be successful in this role, you should hold a bachelor's degree in Computer Science or a related field with 5-10 years of technology experience. You should have strong experience in System Integration, Application Development, or Data-Warehouse projects, across technologies used in the enterprise space. Your software development experience should include working with object-oriented languages such as Python, PySpark, and frameworks. You should also have expertise in relational and dimensional modeling, including big data technologies. Expertise in Microsoft Azure is mandatory for this role, including components like Azure DataBricks, Azure Data Factory, Azure Data Lake Storage, Azure SQL, HD Insights, and ML Service. Proficiency in Python and Spark is required, along with a good understanding of enabling analytics using cloud technology and ML Ops. Experience in Azure Infrastructure and Azure Dev Ops will be a strong plus. You should have a proven track record of keeping existing technical skills up-to-date and developing new ones to contribute effectively to deep architecture discussions around systems and applications in the cloud (Azure). If you are an extraordinary developer who loves to push the boundaries to solve complex business problems using creative solutions, and if you possess the characteristics of a forward thinker and self-starter, then this role at Fractal is the perfect opportunity for you. Join us in working with happy, enthusiastic over-achievers and experience wild growth in your career. If this opportunity is not the right fit for you currently, you can express your interest in future opportunities by clicking on "Introduce Yourself" in the top-right corner of the page or creating an account to set up email alerts for new job postings that align with your interests.,

Posted 3 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for its Pune / Noida / Gurgaon locations. Position: Senior AWS Data Engineer. Location: Pune / Noida / Gurgaon. Hybrid: 3 days office, 2 days work from home. Preferred: Immediate joiners or 0-30 days notice period. Job Description: 6 to 10 years of overall experience. Good experience in data engineering is required. Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills are required. About Iris Software Inc. With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
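For illustration of the Airflow orchestration skill listed (task names and logic are placeholders, not the client's pipeline; written against the Airflow 2.x API):

```python
# Sketch: a minimal daily DAG with one Python task that would kick off a Glue/PySpark step.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_extract():
    print("extract step would trigger the Glue / PySpark job here")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # Airflow 2.x argument name
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=run_daily_extract)
```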

Posted 3 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

🚀 We’re Hiring: ML Ops / Data Engineer (Minimum 5 Years Experience) 🚀 Are you a data engineering professional with at least 5 years of experience in building scalable data pipelines and deploying machine learning models? We’re looking for a talented ML Ops / Data Engineer to join our team in Noida! 🔍 What You’ll Do: Design and maintain robust data pipelines for large-scale datasets Collaborate with data scientists to deploy and monitor ML models in production Develop and optimize ETL processes using AWS Glue, PySpark, and SQL Automate ML workflows using tools like Kubeflow, MLflow, or TFX Ensure model versioning, logging, and performance tracking Work with cloud platforms (AWS preferred) and modern data storage solutions Ensure data security, integrity, and compliance 🛠️ Must-Have Skills: Minimum 5 years of experience in data engineering or ML Ops Proficiency in AWS Services (Lambda, EventBridge, Fargate) Strong knowledge of SQL, Docker, and Kubernetes Experience with AWS Glue, PySpark, and containerized environments 📍 Location: Noida 💼 Employment Type: Permanent 🔑 Primary Skill: AWS Cloud, ML Ops, Data Engineering If you're ready to take your career to the next level and work on cutting-edge ML infrastructure, we’d love to connect! #Hiring #MLOps #DataEngineering #AWSJobs #ETL #MachineLearning #NoidaJobs #TechCareers #DataPipelines #5YearsExperience
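A minimal, hedged example of the MLflow-style experiment tracking this role mentions; the model and metric are toy stand-ins, not the team's actual workflow:

```python
# Sketch: log parameters, a metric, and a versioned model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")   # stored as a versioned run artifact
```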

Posted 3 days ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Iris's Fortune 100 direct client is looking for a Senior AWS Data Engineer for its Pune / Noida / Gurgaon locations. Position: Senior AWS Data Engineer. Location: Pune / Noida / Gurgaon. Hybrid: 3 days office, 2 days work from home. Job Description: 6 to 10 years of overall experience. Good experience in data engineering is required. Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills are required. About Iris Software Inc. With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.

Posted 3 days ago

Apply

0.0 years

0 Lacs

Varthur, Bengaluru, Karnataka

On-site

Outer Ring Road, Devarabisanahalli Vlg, Varthur Hobli, Bldg 2A, Twr 3, Phs 1, Bangalore, IN, 560103

Job Description: Application Developer, Bangalore, Karnataka, India

AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable, enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage.

What you’ll be DOING
What will your essential responsibilities include? We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud.

Key Responsibilities: Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy. Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements. Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment. Configure and manage Databricks clusters for performance optimization and cost efficiency. Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations. Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines. Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns. Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output. Ensure data quality, consistency, and governance across legacy and cloud-based pipelines. Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics. Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness. Develop maintainable and efficient ETL logic and scripts following best practices in security and performance. Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability. Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing. Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation. Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards. Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk.
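Purely as an illustration of the PySpark "transformation, cleansing, and enrichment" notebook work in the responsibilities above (table and column names are invented, not AXA XL's; a Databricks/Delta-enabled session is assumed):

```python
# Sketch: cleanse a claims table and enrich it with policy attributes before writing to Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # provided by the Databricks runtime

claims = spark.table("bronze.claims")
policies = spark.table("bronze.policies")

cleansed = (claims
    .dropDuplicates(["claim_id"])
    .withColumn("loss_amount", F.coalesce(F.col("loss_amount"), F.lit(0.0)))
    .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd")))

enriched = cleansed.join(policies.select("policy_id", "line_of_business"), "policy_id", "left")
enriched.write.format("delta").mode("overwrite").saveAsTable("silver.claims_enriched")
```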
What you will BRING We’re looking for someone who has these abilities and skills: Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization Expertise in Azure Databricks + PySpark, including: Notebook development Cluster configuration and tuning Delta Lake (ACID, versioning, time travel) Job orchestration via Databricks Jobs or ADF Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs Strong hands-on experience with Azure Data Factory: Building and managing pipelines Parameterization and dynamic datasets Notebook integration and pipeline monitoring Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell Strong understanding of data warehousing, dimensional modeling, and data profiling Familiarity with Git, CI/CD pipelines, and modern DevOps practices Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures Awareness of Azure Functions, App Services, API Management, and Application Insights Understanding of Azure Key Vault for secrets and credential management Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. 
Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars:
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems – the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving.

For more information, please see axaxl.com/sustainability.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist at Setu, you will have the opportunity to be part of a team that is revolutionizing the fintech landscape. Setu believes in empowering every company to become a fintech company by providing them with cutting-edge APIs. The Data Science team at Setu is dedicated to understanding the vast population of India and creating solutions for various fintech sectors such as personal lending, collections, PFM, and BBPS.

In this role, you will have the unique opportunity to delve deep into the business objectives and technical architecture of multiple companies, leading to a customer-centric approach that fosters innovation and delights customers. The learning potential in this role is immense, with the chance to explore, experiment, and build critical, scalable, and high-value use cases.

At Setu, innovation is not just a goal; it's a way of life. The team is constantly pushing boundaries and introducing groundbreaking methods to drive business growth, enhance customer experiences, and streamline operational processes. From computer vision to natural language processing and Generative AI, each day presents new challenges and opportunities for breakthroughs.

To excel in this role, you will need a minimum of 2 years of experience in Data Science and Machine Learning. Strong knowledge of statistics, tree-based techniques, machine learning, inference, hypothesis testing, and optimization is essential. Proficiency in Python programming, building data pipelines, feature engineering, pandas, scikit-learn, and SQL, along with familiarity with TensorFlow/PyTorch, is also required. Experience with deep learning techniques and an understanding of DevOps/MLOps will be a bonus.

Setu offers a dynamic and inclusive work environment where you will work closely with the founding team who built and scaled public infrastructure such as UPI, GST, and Aadhaar. The company is dedicated to your growth and provides benefits such as access to a fully stocked library, tickets to conferences, learning sessions, and a development allowance. Additionally, Setu offers comprehensive health insurance, access to mental health counselors, and a beautiful office space designed to foster creativity and collaboration.

If you are passionate about making a tangible difference in the fintech landscape, Setu offers the perfect platform to contribute to financial inclusion and improve millions of lives. Join us in our audacious mission and our obsession with craftsmanship in code as we work together to build infrastructure that directly impacts the lives of individuals across India.
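As a rough illustration of the tree-based modelling and evaluation work the role describes, here is a minimal scikit-learn sketch; the loans.csv file, the defaulted label, and the lending framing are hypothetical examples rather than details from the posting.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical repayment dataset for a personal-lending use case
df = pd.read_csv("loans.csv")
X = df.drop(columns=["defaulted"])
y = df["defaulted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Gradient-boosted trees: a common tree-based technique for tabular fintech data
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))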

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

India

Remote

Location: Remote, preferably Bangalore, with occasional travel for collaboration and client meetings
Engagement Type: Contract (initial 3 months with potential for extension based on project needs and fitment)

About Optron:
At Optron (a venture of Blue Boy Consulting LLP), we are at the forefront of leveraging cutting-edge AI to transform how enterprises interact with and derive insights from their data. We believe in building intelligent, autonomous systems that drive unprecedented efficiency and innovation for our clients. Our culture is one of continuous learning, fearless exploration, and solving complex, real-world challenges with elegant, intelligent solutions. We are a lean, agile team passionate about pushing the boundaries of what's possible with AI. Our leadership team has extensive global top-tier strategy consulting experience, coupled with deep technical acumen. This unique blend means we don't just build technology; we build solutions that truly impact global businesses, and you'll have the freedom to shape the future direction of the company and its offerings.

The Opportunity: Accelerate Enterprise Transformation with Data & Process Mining
Are you a bright, driven data engineer with a passion for crafting robust data solutions and a knack for quickly mastering new technologies? Do you thrive in environments where your direct impact is tangible, and your innovative ideas can genuinely shape the future of enterprise data strategy? If so, we're looking for you!

We're not just seeking a data engineer; we're seeking a highly intelligent, exceptionally quick-learning problem-solver eager to delve into the intricate world of enterprise processes. This role is pivotal in building accelerators and tools that will empower our consultants to deliver best-in-class process mining and intelligent process execution solutions for our global enterprise clients. You'll bridge the gap between raw process data and actionable insights by building robust data models that automate the discovery, analysis, and optimization of complex business processes. This is not about maintaining legacy systems; it's about pioneering the next generation of data interaction and automation through intelligent data models. We are looking for a smart, foundational developer who thrives on intellectual challenge, possesses an insatiable curiosity, and is eager to dive deep into sophisticated data environments. We are looking for raw talent, a sharp mind, and the ability to rapidly acquire and apply new knowledge. If you're a problem-solver at heart, passionate about data, and want to build solutions that redefine industry standards, this is your chance to make a significant impact.

What You'll Be Doing (Key Responsibilities & Goals)
As a Data Engineer, you'll drive the data backbone of our process intelligence initiatives, specifically:
Architecting Process Mining Data Models: Designing, developing, and optimizing highly efficient data models to capture and prepare event data for process mining analysis. This involves deep engagement with complex datasets from critical enterprise IT systems like SAP ERP, SAP S/4HANA, Salesforce, and other bespoke client applications.
Databricks & PySpark Development: Leveraging your experience (2-5 years preferred) with Databricks and PySpark (with occasional Spark SQL) to create scalable, robust, and efficient data ingestion and transformation pipelines. This includes working with core Databricks features such as Delta Lake, and optimizing data processing through techniques like Z-ordering and partitioning (a minimal sketch of these techniques appears after this posting).
End-to-End Data Pipeline Ownership: Implementing core data engineering concepts such as Change Data Capture (CDC) to build real-time data ingestion and transformation pipelines from various sources.
Storage Management: Working with various data storage solutions like Azure Data Lake, Unity Catalog, and Delta Lake for efficient data storage.
Cloud & DevOps Setup: Taking ownership of setting up cloud environments, establishing robust CI/CD pipelines, and managing code repositories to ensure seamless, modular, and version-controlled development. This includes leveraging Git / Databricks Repos and Databricks Workflows for streamlined development and orchestration.
Data Governance & Security: Implementing and maintaining data governance, privacy and security best practices within the Databricks environment to handle sensitive enterprise data.
Synthetic Data Generation: Developing sophisticated synthetic training datasets that accurately emulate the complex data structures, event logs, and behaviours found within diverse enterprise IT systems, crucial for our analytical models.
Staying Updated: Keeping up to date with the latest Databricks features, best practices, and industry trends to continuously enhance our solutions.

What We're Looking For (Required & Preferred Qualifications)
We prioritize a sharp mind and a strong foundation. While specific experience is valuable, your ability to learn and adapt quickly is paramount.
Educational Background: A Bachelor of Engineering (B.E.) / Bachelor of Technology (B.Tech) in Computer Science, Information Technology, or a closely related engineering discipline is preferred.
Core Data Engineering Acumen: Demonstrated understanding of fundamental data engineering principles, including data warehousing, ETL/ELT methodologies, data quality, and data governance.
Databricks & Spark Exposure: 2-5 years of practical experience with Databricks, with a focus on building pipelines and data solutions using PySpark.
Conceptual Depth: A clear grasp of concepts like CDC, data pipeline creation, efficient data ingestion, optimization strategies, efficient cloud cost management, and modular code development.
Problem-Solving & Adaptability: A proven track record of tackling complex technical challenges with innovative solutions and a genuine eagerness to quickly master new tools and paradigms.
Enterprise Data Context (Preferred): While not mandatory, prior exposure to or understanding of data structures and IT workloads within large enterprise environments (e.g., SAP, Salesforce) would be advantageous.

Why Join Us?
Join a team where your contributions are celebrated, and your growth is prioritized:
Groundbreaking Work: Be at the forefront of data innovation, building solutions that don't just optimize, but fundamentally transform how enterprises operate.
Intellectual Challenge: Work on complex, unsolved problems that will stretch your abilities and foster rapid personal and professional growth.
Learning-Centric Environment & 20% Time: We deeply value continuous learning. You'll receive 20% dedicated time to explore new technologies, learn new skills, or pursue personal pet projects that spark your interest and contribute to your growth.
Global Exposure: Gain invaluable experience working with diverse global clients and collaborating with colleagues from various backgrounds, expanding your professional network and worldview.
High Impact & Shaping the Future: Your contributions will directly influence our clients' success and, critically, you'll have the freedom to shape the future direction of the company, contributing directly to product strategy, technical roadmap, and innovative service offerings, working closely with our visionary IIM alumni leadership.
Autonomy & Trust: We trust our team members to take ownership, innovate, and deliver high-quality results.
Collaborative & Supportive Team: Work alongside other bright, passionate individuals who are eager to learn and build together.
Competitive Compensation: We offer attractive contractor rates commensurate with your skills and potential.

Ready to Redefine Enterprise Intelligence with Data?
If you're a brilliant problem-solver with a strong technical foundation and a burning desire to master the art of data engineering for enterprise transformation, we encourage you to apply. This is more than a contract; it's an opportunity to build something truly revolutionary.
To Apply: Click on Easy Apply, and submit your latest resume. Ensure you have at least one key relevant project mentioned in detail on the resume.
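Below is a minimal, illustrative PySpark sketch of the Delta Lake change-data-capture and Z-ordering techniques named in the responsibilities above. The raw.sap_order_events table, its columns, and the assumption that Change Data Feed is enabled on it are hypothetical, not part of the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

# Read changed rows from a Delta table that has Change Data Feed enabled (hypothetical table)
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 1)
           .table("raw.sap_order_events"))

# Keep inserts and post-update images and shape them as a process-mining event log
event_log = (changes
             .filter(F.col("_change_type").isin("insert", "update_postimage"))
             .select("case_id", "activity", "event_ts", "user_id")
             .withColumn("event_date", F.to_date("event_ts")))

# Partition by event date for pruning, then Z-order by case_id for fast per-case reads
(event_log.write.format("delta").mode("append")
          .partitionBy("event_date")
          .saveAsTable("curated.process_event_log"))

spark.sql("OPTIMIZE curated.process_event_log ZORDER BY (case_id)")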

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are ready to gain the skills and experience required to progress within your role and advance your career, and there is an excellent software engineering opportunity waiting for you. As a Software Engineer II at JPMorgan Chase in the Corporate Technology organization, you play a crucial role in the Data Services Team dedicated to enhancing, building, and delivering trusted, market-leading Generative AI products securely, stably, and at scale.

As part of the software engineering team, you will implement software solutions by designing, developing, and troubleshooting multiple components within technical products, applications, or systems while continuously enhancing your skills and experience. Your responsibilities include executing standard software solutions; writing secure and high-quality code in at least one programming language; designing and troubleshooting with consideration of upstream and downstream systems; applying tools within the Software Development Life Cycle for automation; and employing technical troubleshooting to solve technical problems of basic complexity. Additionally, you will analyze large datasets to identify issues and contribute to decision-making for secure and stable application development, learn and apply system processes for developing secure code and systems, and contribute to a team culture of diversity, equity, inclusion, and respect.

The qualifications, capabilities, and skills required for this role include formal training or certification in software engineering concepts with a minimum of 2 years of applied experience; experience with large datasets and predictive models; experience developing and maintaining code in a corporate environment using modern programming languages and database querying languages; proficiency in Python and libraries such as TensorFlow, PyTorch, PySpark, NumPy, and pandas, along with SQL; and familiarity with cloud services such as AWS or Azure. You should have a strong ability to analyze and derive insights from data, experience across the Software Development Life Cycle, exposure to agile methodologies, and emerging knowledge of software applications and technical processes within a technical discipline. Preferred qualifications include an understanding of SDLC cycles for data platforms, major upgrade releases, patches, bug/hot fixes, and the associated documentation.

Posted 3 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

You will be joining our dynamic team as a highly skilled Senior Data Engineer/DE Architect with 7-10 years of experience. Your expertise in data engineering technologies, particularly SQL, Databricks, and Azure services, along with client interaction, will be crucial for this role.

Your responsibilities will include:
Hands-on experience with SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
Designing, developing, and optimizing PySpark workloads.
Writing scalable, modular, and reusable code in SQL, Python, and PySpark.
Collaborating with client stakeholders and cross-functional teams.
Gathering and analyzing requirements, translating business needs into technical solutions.
Providing regular project updates and reports on progress.
Ensuring alignment of data solutions with business requirements.
Working in US shift hours to coordinate with global teams.

We expect you to have:
8-10 years of experience in data engineering or related fields.
Proficiency in SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
Strong written and verbal communication skills.
Proven ability to collaborate effectively with global stakeholders.
Strong problem-solving skills and attention to detail.

Apply now and be part of our innovative team!

Posted 3 days ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

You have deep experience in developing data processing tasks using PySpark/Spark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Your responsibilities will include developing, programming, and maintaining applications using the open-source Apache Spark and Python frameworks. You will work with different aspects of the Spark ecosystem, including Spark SQL, DataFrames, Datasets, and streaming.

As a Spark Developer, you must have strong programming skills in Python, Java, or Scala. It is essential that you are familiar with big data processing tools and techniques, have a good understanding of distributed systems, and possess proven experience as a Spark Developer or in a related role. Your problem-solving and analytical thinking skills should be excellent. Experience with building APIs for provisioning data to downstream systems is required. Working experience with any cloud technology such as AWS, Azure, or Google Cloud is an added advantage. Hands-on experience with AWS S3 filesystem operations will be beneficial for this role.
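A minimal PySpark sketch of the read, merge, enrich, and load workflow described above; the S3 bucket, paths, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read from external sources (hypothetical S3 locations)
orders = spark.read.option("header", True).csv("s3a://example-bucket/landing/orders/")
customers = spark.read.parquet("s3a://example-bucket/reference/customers/")

# Merge and enrich: join reference data and derive a few attributes
enriched = (orders.join(customers, on="customer_id", how="left")
            .withColumn("order_date", F.to_date("order_ts"))
            .withColumn("net_amount", F.col("gross_amount") - F.col("discount")))

# Load into the target destination, partitioned for downstream consumers
(enriched.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3a://example-bucket/curated/orders_enriched/"))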

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer, you will be responsible for designing, developing, and delivering ADF pipelines for the Accounting & Reporting Stream. Your role will involve creating and maintaining scalable data pipelines using PySpark and ETL workflows in Azure Databricks and Azure Data Factory. You will also work on data modeling and architecture to optimize data structures for analytics and business requirements.

Your responsibilities will include monitoring, tuning, and troubleshooting pipeline performance for efficiency and reliability. Collaboration with business analysts and stakeholders is key to understanding data needs and delivering actionable insights. Implementing data governance practices to ensure data quality, security, and compliance with regulations is essential. You will also be required to develop and maintain documentation for data pipelines and architecture. Experience in testing and test automation is necessary for this role.

Collaboration with cross-functional teams to understand data requirements and provide technical advice is crucial. A strong background in data engineering is required, with proficiency in SQL, Azure Databricks, Blob Storage, and Azure Data Factory, and in programming languages such as Python or Scala. Knowledge of Logic Apps and Key Vault is also necessary. Strong problem-solving skills and the ability to communicate complex technical concepts to non-technical stakeholders are essential for effective communication within the team.
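As a rough sketch of how the Key Vault and storage pieces mentioned above typically fit together inside an Azure Databricks notebook (where spark and dbutils are predefined); the secret scope, key names, storage account, and table names are hypothetical.

# The secret scope is assumed to be backed by Azure Key Vault (hypothetical names throughout)
storage_key = dbutils.secrets.get(scope="kv-accounting", key="storage-account-key")

# Authenticate Spark against the storage account using the retrieved key
spark.conf.set(
    "fs.azure.account.key.examplestorage.dfs.core.windows.net",
    storage_key,
)

# Read raw files from ADLS/Blob and land them as a Delta table for the reporting stream
ledger = (spark.read.format("csv")
          .option("header", True)
          .load("abfss://reporting@examplestorage.dfs.core.windows.net/ledger/"))

ledger.write.format("delta").mode("overwrite").saveAsTable("accounting.ledger_bronze")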

Posted 3 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Manager

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
Experience: 4-6 years
Good knowledge of Data Warehousing, Data Lakehouse, and Data Modelling concepts
Hands-on experience in Azure Databricks and PySpark
Design and develop robust and scalable data pipelines using PySpark and Databricks
Implement ETL processes and metadata-driven frameworks to optimize data flow and quality (see the sketch after this posting)
Experience in understanding source-to-target mapping documents and building optimized, unit-tested ETL pipelines
Experience in data profiling, ensuring data quality and integrity throughout the data lifecycle
Hands-on experience in handling large data volumes and performance tuning
Experience working in an onsite/offshore model
Good communication and documentation skills

Mandatory Skill Sets: Azure/ETL
Preferred Skill Sets: Azure/ETL
Years of Experience Required: 4-6 years
Education Qualification: BE/BTech/MBA/MCA
Degrees/Field of Study Required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study Preferred: (not specified)
Certifications: (not specified)
Required Skills: Extract Transform Load (ETL), Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Coaching and Feedback, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion {+ 24 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
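A minimal sketch of the metadata-driven framework idea referenced in the responsibilities: a source-to-target mapping drives generic PySpark transformations. The mapping entries, table names, and columns are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source-to-target mapping; in practice this could live in a config table or file
mappings = [
    {"source": "raw.customers", "target": "curated.dim_customer",
     "columns": {"cust_id": "customer_id", "cust_nm": "customer_name"}},
    {"source": "raw.orders", "target": "curated.fact_order",
     "columns": {"ord_id": "order_id", "ord_amt": "order_amount"}},
]

for m in mappings:
    df = spark.table(m["source"])
    # Rename and select columns according to the mapping document
    shaped = df.select([F.col(src).alias(tgt) for src, tgt in m["columns"].items()])
    shaped.write.format("delta").mode("overwrite").saveAsTable(m["target"])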

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Azure Databricks Professional at YASH Technologies, you will leverage your 6-8 years of experience to work with cutting-edge Azure services and Databricks. Your role will require a strong understanding of the medallion architecture, as well as proficiency in Python and PySpark.

YASH Technologies is a leading technology integrator that focuses on helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. Our team is comprised of bright individuals who are dedicated to making real positive changes in an increasingly virtual world.

Working at YASH, you will have the opportunity to shape your career in an inclusive team environment. We believe in continuous learning and development, leveraging career-oriented skilling models and technology to empower our employees to grow and adapt at a rapid pace. Our Hyperlearning workplace is based on principles such as flexible work arrangements, free spirit, emotional positivity, agile self-determination, trust, transparency, open collaboration, support for realizing business goals, and a stable employment environment with a great atmosphere and ethical corporate culture.

Posted 3 days ago

Apply