
8521 PySpark Jobs - Page 39

Set up a job alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 years

0 Lacs

Kolkata, West Bengal, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities. Your Key Responsibilities Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads. Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues. Collaborate with the team to support Databricks notebooks and manage small transformation tasks. Perform ETL operations and ensure timely and accurate data movement between systems. Write and debug intermediate-level SQL queries for data validation and issue analysis. Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed. Support deployment activities using Azure DevOps pipelines. Maintain and update SOPs, assist in documenting known issues and recurring tasks. Participate in incident management and contribute to resolution and knowledge sharing. Skills And Attributes For Success Strong understanding of cloud-based data workflows, especially in Azure environments. Analytical mindset with the ability to troubleshoot data pipeline and transformation issues. Comfortable working with large datasets and navigating both structured and semi-structured data. Ability to follow runbooks, SOPs, and collaborate effectively with other technical teams. Willingness to learn new technologies and adapt in a dynamic environment. Good communication skills to interact with stakeholders, document findings, and share updates. Discipline to work independently, manage priorities, and escalate issues responsibly. To qualify for the role, you must have 2–3 years of experience in DataOps or Data Engineering roles Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem Experience working with Informatica CDI or similar data integration tools Scripting and automation experience in Python/PySpark Ability to support data pipelines in a rotational on-call or production support environment Comfortable working in a remote/hybrid and cross-functional team setup Technologies and Tools Must haves Working knowledge of Azure Data Factory, Data Lake, and Synapse Exposure to Azure Databricks – ability to understand and run existing notebooks Understanding of ETL processes and data flow concepts Good to have Experience with Power BI or Tableau for basic reporting and data visualization Exposure to Informatica CDI or any other data integration platform Basic scripting knowledge in Python or PySpark for data processing or automation tasks Proficiency in writing SQL for querying and analyzing structured data Familiarity with Azure Monitor and Log Analytics for pipeline monitoring Experience supporting DevOps deployments or familiarity with Azure DevOps concepts. What We Look For Enthusiastic learners with a passion for data op’s and practices. 
Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
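For illustration only (not part of the posting): a minimal PySpark sketch of the kind of data-validation check the responsibilities above describe, assuming a hypothetical table landed by an ADF pipeline; the table and column names are invented.

# Minimal PySpark data-validation sketch (illustrative only; the table
# "raw_sales" and column "order_id" are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-validation").getOrCreate()

df = spark.table("raw_sales")  # e.g. a table landed by an ADF pipeline

# Basic checks: row count, duplicate keys, and nulls in a required column.
total_rows = df.count()
duplicate_keys = (
    df.groupBy("order_id").count().filter(F.col("count") > 1).count()
)
null_order_ids = df.filter(F.col("order_id").isNull()).count()

print(f"rows={total_rows}, duplicates={duplicate_keys}, null_ids={null_order_ids}")
if duplicate_keys or null_order_ids:
    raise ValueError("Validation failed: escalate per runbook/SOP")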

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Role & responsibilities: We are looking for a skilled Data Engineer with expertise in Python and Azure Databricks to build scalable data pipelines. Strong SQL skills for designing, querying, and optimizing relational databases are a must. You will be responsible for data ingestion, transformation, and orchestration across cloud platforms. Experience with coding best practices, performance tuning, and CI/CD in the Azure ecosystem is essential. Streamlit experience is also required.
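As a rough illustration of the pipeline work described above, a minimal PySpark ingestion-and-transformation sketch for Azure Databricks; the storage paths, schema, and column names are assumptions, not details from the posting.

# Illustrative ingestion/transformation step for Azure Databricks
# (ADLS paths and column names below are assumptions).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Partitioned Delta output keeps downstream SQL queries fast.
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .format("delta")
        .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))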

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site

Job Title: MS Fabric Solution Engineer Architect
Experience: 7-10 Years
Shift: IST
JD for MS Fabric Solution Engineer
Key Responsibilities:
● Lead the technical design, architecture, and hands-on implementation of Microsoft Fabric PoCs. This includes translating business needs into effective data solutions, often applying Medallion Architecture principles within the Lakehouse.
● Develop and optimize ELT/ETL pipelines for diverse data sources:
o Static data (e.g., CIM XML, equipment models, Velocity Suite data).
o Streaming data (e.g., measurements from grid devices, Event Hub and IoT Hub).
● Seamlessly integrate Fabric with internal systems (e.g., CRM, ERP) using RESTful APIs, data mirroring, Azure Integration Services, and CDC (Change Data Capture) mechanisms.
● Hands-on configuration and management of core Fabric components: OneLake, Lakehouse, Notebooks (PySpark/KQL), and Real-Time Analytics databases.
● Facilitate data access via GraphQL interfaces, Power BI Embedded, and Direct Lake connections, ensuring optimal performance for self-service BI and adhering to RLS/OLS.
● Work closely with Microsoft experts, SMEs, and stakeholders.
● Document architecture and PoC results, and provide recommendations for production readiness and data governance (e.g., Purview integration).
Required Skills & Experience:
● 7-10 years of experience in Data Engineering / BI / Cloud Analytics, with at least 1–2 projects using Microsoft Fabric (or a strong Power BI + Synapse background transitioning to Fabric).
● Proficient in:
o OneLake, Data Factory, Lakehouse, Real-Time Intelligence, Dataflow Gen2
o Ingestion using CIM XML, CSV, APIs, SDKs
o Power BI Embedded, GraphQL interfaces
o Azure Notebooks / PySpark / Fabric SDK
● Experience with data modeling (asset registry, nomenclature alignment, schema mapping).
● Familiarity with real-time streaming (Kafka/Kinesis/IoT Hub) and data governance concepts.
● Strong problem-solving and debugging skills.
● Prior experience with PoC/prototype-style projects with tight timelines.
Good to Have:
● Knowledge of grid operations / energy asset management systems.
● Experience working on Microsoft-Azure joint engagements.
● Understanding of AI/ML workflow integration via Azure AI Foundry or similar.
● Relevant certifications: DP-600/700 or DP-203.
If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832. Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
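To make the Lakehouse/Medallion responsibilities above concrete, a hedged bronze-to-silver PySpark sketch; the table names, columns, and notebook session setup are assumptions typical of Fabric or Databricks notebooks, not details from the posting.

# Illustrative Medallion-style (bronze -> silver) step in a PySpark notebook;
# "bronze_measurements" and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # notebooks usually predefine this session

bronze = spark.read.table("bronze_measurements")  # raw streaming/static landings

silver = (
    bronze.withColumn("event_time", F.to_timestamp("event_time"))
          .dropDuplicates(["device_id", "event_time"])
          .filter(F.col("value").isNotNull())
)

# Cleansed data saved as a Delta table in the silver layer of the lakehouse.
(silver.write.mode("overwrite")
       .format("delta")
       .saveAsTable("silver_measurements"))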

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors, and we are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT JOB SUMMARY: Position: Sr Consultant. Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram). Band: M3/M4 (8 to 14 years). Role Description: Job Title: Senior Consultant - Data Engineer. Responsibilities: Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability. Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity. Work within the client's best practice guidelines as set out by the Data Engineering Lead. Work with data modellers and testers to ensure pipelines are implemented correctly. Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions. Role Requirements: Strong Data Engineer with experience in Financial Services. Knowledge of and experience building data pipelines in Azure Databricks. Demonstrate a continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions. Work within an Agile team. Experience/Skillset: 8+ years' experience in data engineering. Good skills in SQL, Python and PySpark. Good knowledge of Azure Databricks (understanding of delta tables, Apache Spark, Unity Catalog). Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions. Good knowledge of the SDLC. Familiar with Agile/Scrum ways of working. Strong verbal and written communication skills. Ability to manage multiple priorities and deliver to tight deadlines. WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer: A work culture focused on innovation and creating lasting value for our clients and employees. Ongoing learning opportunities to help you acquire new skills or deepen existing expertise. A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients. A diverse, inclusive, meritocratic culture. #LI-Hybrid

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors, and we are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT JOB SUMMARY: Position: Sr Consultant. Location: Pune / Bangalore. Band: M3/M4 (7 to 14 years). Role Description: Must Have Skills: Experience in PySpark and Scala + Spark for 4+ years (minimum). Proficient in debugging and data analysis. Spark experience of 4+ years. Understanding of the SDLC and the Big Data application life cycle. Experience with GitHub and Git commands. Good to have experience in CI/CD tools such as Jenkins and Ansible. Fast problem solver and self-starter. Experience using Control-M and ServiceNow (for incident management). Positive attitude and good communication skills (both written and verbal), with no mother tongue influence. WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry. We offer: A work culture focused on innovation and creating lasting value for our clients and employees. Ongoing learning opportunities to help you acquire new skills or deepen existing expertise. A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients. A diverse, inclusive, meritocratic culture. #LI-Hybrid

Posted 1 week ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors, and we are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT Big Data Tester. Location: Pune (for Mastercard). Experience Level: 5-9 years. Minimum Skill Set Required / Must Have: Python; PySpark; testing skills and best practices for data validation; SQL (hands-on experience, especially with complex queries) and ETL. Good to Have: Unix; Big Data: Hadoop, Spark, Kafka, NoSQL databases (MongoDB, Cassandra), Hive, etc.; Data Warehouse: traditional (Oracle, Teradata, SQL Server) and modern cloud (Amazon Redshift, Google BigQuery, Snowflake); AWS development experience (not mandatory, but beneficial). Best Fit: Python + PySpark + Testing + SQL (hands-on) and ETL, plus the Good to Have skills.
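A minimal sketch of PySpark testing in the pytest style, matching the "Python + PySpark + Testing + SQL" profile above; the transformation under test (add_net_amount) and its columns are hypothetical, not part of the posting.

# Sketch of a PySpark data-validation test in the pytest style (illustrative).
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_net_amount(df):
    """Hypothetical transformation under test: net = gross - tax."""
    return df.withColumn("net", F.col("gross") - F.col("tax"))


@pytest.fixture(scope="session")
def spark():
    # Small local session so the test suite runs without a cluster.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 18.0), (50.0, 9.0)], ["gross", "tax"])
    result = {row["net"] for row in add_net_amount(df).collect()}
    assert result == {82.0, 41.0}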

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description Position Overview: We are seeking a highly skilled Senior Data Engineer – Python, PySpark & Azure Databricks to join our dynamic data engineering team. This role focuses on building scalable, high-performance data pipelines using Python and PySpark within the Azure Databricks environment. While familiarity with broader Azure services is valuable, the emphasis is on distributed data processing and automation using modern big data frameworks. Prior experience in the Property & Casualty (P&C) insurance industry is a strong plus. Key Responsibilities: Data Pipeline Development & Optimization: Design, develop, and maintain scalable ETL/ELT data pipelines using Python and PySpark. Leverage Azure Databricks to process large volumes of structured and semi-structured data efficiently. Implement data quality checks, error handling, and performance tuning across all stages of data processing. Data Architecture & Modeling: Contribute to the design of cloud-based data architectures that support analytics and reporting use cases. Develop and maintain data models that adhere to industry best practices and support business requirements. Work with Delta Lake, Bronze/Silver/Gold data architecture patterns, and metadata management strategies. Cloud Integration (Azure): Integrate and orchestrate data workflows using Azure Data Factory, Azure Blob Storage, and Event Hub where applicable. Optimize cloud compute resources and manage cost-effective data processing at scale. Collaboration & Stakeholder Engagement: Partner with data analysts, data scientists, and business users to understand evolving data needs. Collaborate with DevOps and platform teams to ensure reliable, secure, and automated data operations. Participate in Agile ceremonies and contribute to sprint planning, demos, and retrospectives. Documentation & Best Practices: Maintain clear and comprehensive documentation of code, pipelines, and architectural decisions. Contribute to internal data engineering standards and promote best practices for code quality, testing, and CI/CD.
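For illustration, a small PySpark sketch of the data-quality-check and error-handling pattern mentioned in the responsibilities above; the table names, validation rule, and quarantine table are assumptions, not details from the posting.

# Illustrative data-quality/error-handling pattern for a PySpark pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
policies = spark.read.table("bronze_policies")  # hypothetical bronze-layer table

# Hypothetical rule: a policy row is valid only with a key and a non-negative premium.
is_valid = (
    F.col("policy_id").isNotNull()
    & F.col("premium").isNotNull()
    & (F.col("premium") >= 0)
)

valid_rows = policies.filter(is_valid)
bad_rows = policies.filter(~is_valid)

# Good rows continue to the silver layer; bad rows are quarantined for review.
valid_rows.write.mode("append").format("delta").saveAsTable("silver_policies")
bad_rows.write.mode("append").format("delta").saveAsTable("quarantine_policies")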

Posted 1 week ago

Apply

5.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates which suit the customer's budgetary requirements and are in line with the organization's financial guidelines. Actively lead small projects and contribute to unit-level and organizational initiatives with an objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability. Good knowledge of software configuration management systems. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Understanding of the financial processes for various types of projects and the various pricing models available. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains. Client interfacing skills. Project and team management. Technical and Professional Requirements: Python, PySpark, ETL, Data Pipeline, Big Data, AWS, GCP, Azure, Data Warehousing, Spark, Hadoop. Preferred Skills: Technology-Big Data-Big Data - ALL

Posted 1 week ago

Apply

3.0 - 8.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BSc, Bachelor of Business Administration and Bachelor of Legislative Law (BBA LLB). Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Technical and Professional Requirements: Primary skills: Technology-Cloud Platform-Azure Analytics Services-Azure Data Lake. Preferred Skills: Technology-Cloud Platform-Azure Development & Solution Architecting

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Educational Requirements MCA,MSc,Bachelor of Engineering,BBA,BCom Service Line Data & Analytics Unit Responsibilities Roles & Responsibilities: Understand the requirements from the business and translate it into an appropriate technical requirement. Responsible for successful delivery of MLOps solutions and services in client consulting environments; Define key business problems to be solved; formulate high level solution approaches and identify data to solve those problems, develop, analyze/draw conclusions and present to client. Assist clients with operationalization metrics to track performance of ML Models Help team with ML Pipelines from creation to execution Guide team to debug on issues with pipeline failures Understand and take requirements on Operationalization of ML Models from Data Scientist Engage with Business / Stakeholders with status update on progress of development and issue fix Setup Standards related to Coding, Pipelines and Documentation Research on new topics, services and enhancements in Cloud Technologies Additional Responsibilities: EEO/ :Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability. Technical and Professional Requirements: Preferred Qualifications: Experienced in Agile way of working, manage team effort and track through JIRA High Impact client communication Domain experience in Retail, CPG and Logistics Experience in Test Driven Development and experience in using Pytest frameworks, git version control, Rest APIsThe job may entail extensive travel. The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to effectively communicate by telephone, email, and face to face. Preferred Skills: Technology-Machine learning-data science

Posted 1 week ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BSc. Service Line: Data & Analytics Unit. Responsibilities: Consulting Skills: hypothesis-driven problem solving; go-to-market pricing and revenue growth execution; advisory, presentation and data storytelling; project leadership and execution. Additional Responsibilities: Typical Work Environment: Collaborative work with cross-functional teams across sales, marketing, and product development. Stakeholder management and team handling. Fast-paced environment with a focus on delivering timely insights to support business decisions. Excellent problem-solving skills and ability to address complex technical challenges. Effective communication skills to collaborate with cross-functional teams and stakeholders. Potential to work on multiple projects simultaneously, prioritizing tasks based on business impact. Qualification: Degree in Data Science, or Computer Science with a data science specialization; Master's in Business Administration and Analytics preferred. Technical and Professional Requirements: Technical Skills: proficiency in programming languages like Python and R for data manipulation and analysis; expertise in machine learning algorithms and statistical modeling techniques; familiarity with data warehousing and data pipelines; experience with data visualization tools like Tableau or Power BI; experience with cloud platforms (e.g., ADF, Databricks, Azure) and their AI services. Preferred Skills: Technology-Big Data-Text Analytics

Posted 1 week ago

Apply

3.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities. Your Key Responsibilities Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads. Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues. Collaborate with the team to support Databricks notebooks and manage small transformation tasks. Perform ETL operations and ensure timely and accurate data movement between systems. Write and debug intermediate-level SQL queries for data validation and issue analysis. Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed. Support deployment activities using Azure DevOps pipelines. Maintain and update SOPs, assist in documenting known issues and recurring tasks. Participate in incident management and contribute to resolution and knowledge sharing. Skills And Attributes For Success Strong understanding of cloud-based data workflows, especially in Azure environments. Analytical mindset with the ability to troubleshoot data pipeline and transformation issues. Comfortable working with large datasets and navigating both structured and semi-structured data. Ability to follow runbooks, SOPs, and collaborate effectively with other technical teams. Willingness to learn new technologies and adapt in a dynamic environment. Good communication skills to interact with stakeholders, document findings, and share updates. Discipline to work independently, manage priorities, and escalate issues responsibly. To qualify for the role, you must have 2–3 years of experience in DataOps or Data Engineering roles Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem Experience working with Informatica CDI or similar data integration tools Scripting and automation experience in Python/PySpark Ability to support data pipelines in a rotational on-call or production support environment Comfortable working in a remote/hybrid and cross-functional team setup Technologies and Tools Must haves Working knowledge of Azure Data Factory, Data Lake, and Synapse Exposure to Azure Databricks – ability to understand and run existing notebooks Understanding of ETL processes and data flow concepts Good to have Experience with Power BI or Tableau for basic reporting and data visualization Exposure to Informatica CDI or any other data integration platform Basic scripting knowledge in Python or PySpark for data processing or automation tasks Proficiency in writing SQL for querying and analyzing structured data Familiarity with Azure Monitor and Log Analytics for pipeline monitoring Experience supporting DevOps deployments or familiarity with Azure DevOps concepts. What We Look For Enthusiastic learners with a passion for data op’s and practices. 
Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities. Your Key Responsibilities Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads. Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues. Collaborate with the team to support Databricks notebooks and manage small transformation tasks. Perform ETL operations and ensure timely and accurate data movement between systems. Write and debug intermediate-level SQL queries for data validation and issue analysis. Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed. Support deployment activities using Azure DevOps pipelines. Maintain and update SOPs, assist in documenting known issues and recurring tasks. Participate in incident management and contribute to resolution and knowledge sharing. Skills And Attributes For Success Strong understanding of cloud-based data workflows, especially in Azure environments. Analytical mindset with the ability to troubleshoot data pipeline and transformation issues. Comfortable working with large datasets and navigating both structured and semi-structured data. Ability to follow runbooks, SOPs, and collaborate effectively with other technical teams. Willingness to learn new technologies and adapt in a dynamic environment. Good communication skills to interact with stakeholders, document findings, and share updates. Discipline to work independently, manage priorities, and escalate issues responsibly. To qualify for the role, you must have 2–3 years of experience in DataOps or Data Engineering roles Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem Experience working with Informatica CDI or similar data integration tools Scripting and automation experience in Python/PySpark Ability to support data pipelines in a rotational on-call or production support environment Comfortable working in a remote/hybrid and cross-functional team setup Technologies and Tools Must haves Working knowledge of Azure Data Factory, Data Lake, and Synapse Exposure to Azure Databricks – ability to understand and run existing notebooks Understanding of ETL processes and data flow concepts Good to have Experience with Power BI or Tableau for basic reporting and data visualization Exposure to Informatica CDI or any other data integration platform Basic scripting knowledge in Python or PySpark for data processing or automation tasks Proficiency in writing SQL for querying and analyzing structured data Familiarity with Azure Monitor and Log Analytics for pipeline monitoring Experience supporting DevOps deployments or familiarity with Azure DevOps concepts. What We Look For Enthusiastic learners with a passion for data op’s and practices. 
Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Kochi

Work from Office

As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp-6-7 Yrs (Relevant-4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, Github, - Azure Devops- Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL) tools and/or data integration, or federation tools to prepare and transform data as needed. Ability to use leading edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Mumbai

Work from Office

The ability to be a team player. The ability and skill to train other people in procedural and technical topics. Strong communication and collaboration skills. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Able to write complex SQL queries; experience with Azure Databricks. Preferred technical and professional experience: Excellent communication and stakeholder management skills.

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Kochi

Work from Office

Create Solution Outline and Macro Design to describe end to end product implementation in Data Platforms including, System integration, Data ingestion, Data processing, Serving layer, Design Patterns, Platform Architecture Principles for Data platform Contribute to pre-sales, sales support through RfP responses, Solution Architecture, Planning and Estimation Contribute to reusable components / asset / accelerator development to support capability development Participate in Customer presentations as Platform Architects / Subject Matter Experts on Big Data, Azure Cloud and related technologies Participate in customer PoCs to deliver the outcomes Participate in delivery reviews / product reviews, quality assurance and work as design authority Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in designing of data products providing descriptive, prescriptive, and predictive analytics to end users or other systems Experience in data engineering and architecting data platforms Experience in architecting and implementing Data Platforms Azure Cloud Platform Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event hub, Snowflake), Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow Experience in Big Data stack (Hadoop ecosystem Hive, HBase, Kafka, Spark, Scala PySpark, Python etc.) with Cloudera or Hortonworks Preferred technical and professional experience Experience in architecting complex data platforms on Azure Cloud Platform and On-Prem Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric or Starburst or Denodo or IBM Data Virtualisation or Talend or Tibco Data Fabric Exposure to Data Cataloging and Governance solutions like Collibra, Alation, Watson Knowledge Catalog, dataBricks unity Catalog, Apache Atlas, Snowflake Data Glossary etc

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine). Developed Hadoop streaming jobs using Python for integrating Python API-supported applications. Developed Python code to gather data from HBase and designed solutions implemented using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations. Rewrote Hive queries in Spark SQL to reduce the overall batch time. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
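As a hedged example of the "rewrite Hive queries in Spark SQL" experience mentioned above, the same aggregation expressed through spark.sql and the DataFrame API; the transactions table, columns, and output path are hypothetical.

# Illustrative rewrite of a Hive-style aggregation in Spark SQL and the DataFrame API.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# The original Hive query, now run through the Spark SQL engine:
sql_result = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM transactions
    WHERE txn_date >= '2024-01-01'
    GROUP BY customer_id
""")

# Equivalent DataFrame-API version, often easier to test and compose:
df_result = (
    spark.table("transactions")
         .filter(F.col("txn_date") >= "2024-01-01")
         .groupBy("customer_id")
         .agg(F.sum("amount").alias("total_amount"))
)

df_result.write.mode("overwrite").parquet("/tmp/customer_totals")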

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine). Developed Hadoop streaming jobs using Python for integrating Python API-supported applications. Developed Python code to gather data from HBase and designed solutions implemented using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations. Rewrote Hive queries in Spark SQL to reduce the overall batch time. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp 3-6 Yrs (Relevant-4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, Github, - Azure Devops - Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL) tools and/or data integration, or federation tools to prepare and transform data as needed. Ability to use leading edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Create Solution Outline and Macro Design to describe end to end product implementation in Data Platforms including, System integration, Data ingestion, Data processing, Serving layer, Design Patterns, Platform Architecture Principles for Data platform Contribute to pre-sales, sales support through RfP responses, Solution Architecture, Planning and Estimation Contribute to reusable components / asset / accelerator development to support capability development Participate in Customer presentations as Platform Architects / Subject Matter Experts on Big Data, Azure Cloud and related technologies Participate in customer PoCs to deliver the outcomes Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Candidates must have experience in designing of data products providing descriptive, prescriptive, and predictive analytics to end users or other systems 10 - 15 years of experience in data engineering and architecting data platforms 5 – 8 years’ experience in architecting and implementing Data Platforms Azure Cloud Platform. 5 – 8 years’ experience in architecting and implementing Data Platforms on-prem (Hadoop or DW appliance) Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event hub, Snowflake), Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow. Experience in Big Data stack (Hadoop ecosystem Hive, HBase, Kafka, Spark, Scala PySpark, Python etc.) with Cloudera or Hortonworks Preferred technical and professional experience Exposure to Data Cataloging and Governance solutions like Collibra, Alation, Watson Knowledge Catalog, dataBricks unity Catalog, Apache Atlas, Snowflake Data Glossary etc Candidates should have experience in delivering both business decision support systems (reporting, analytics) and data science domains / use cases

Posted 1 week ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Hyderabad

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Python: strong proficiency in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: detection and prevention tools for company products, platform, and customer-facing systems.
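A brief sketch of the SQL-plus-Pandas workflow implied by the skills above: a large-scale aggregation in Spark SQL whose small result is pulled into pandas for local analysis; the telemetry table and columns are illustrative assumptions.

# Sketch: aggregate at scale in Spark SQL, analyze the small result in pandas.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

summary = spark.sql("""
    SELECT region, COUNT(*) AS events, AVG(latency_ms) AS avg_latency
    FROM telemetry
    GROUP BY region
""")

# Only the aggregated (small) result is collected to the driver.
pdf: pd.DataFrame = summary.toPandas()
print(pdf.sort_values("avg_latency", ascending=False).head())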

Posted 1 week ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mysuru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS - S3, Athena, DynamoDB, Lambda, Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations. Preferred technical and professional experience Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
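As a loose illustration of the Spark-with-Hive pattern referenced above (business transformations on DataFrames with reads and writes against Hive tables), here is a minimal sketch. The database and table names are hypothetical, and it assumes a Spark session built with Hive support, which is the modern replacement for the older HiveContext.

```python
# Minimal sketch of Spark DataFrame transformations with Hive-backed
# read/write. Database and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders_enrichment")
         .enableHiveSupport()   # successor to the older HiveContext
         .getOrCreate())

# Read a source table from the Hive metastore
orders = spark.table("staging_db.orders")

# Apply a business transformation: flag high-value orders
enriched = orders.withColumn("is_high_value",
                             F.col("order_amount") > 1000)

# Write the result back as a managed Hive table
(enriched.write
 .mode("overwrite")
 .saveAsTable("curated_db.orders_enriched"))
```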

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities. Your Key Responsibilities Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads. Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues. Collaborate with the team to support Databricks notebooks and manage small transformation tasks. Perform ETL operations and ensure timely and accurate data movement between systems. Write and debug intermediate-level SQL queries for data validation and issue analysis. Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed. Support deployment activities using Azure DevOps pipelines. Maintain and update SOPs, assist in documenting known issues and recurring tasks. Participate in incident management and contribute to resolution and knowledge sharing. Skills And Attributes For Success Strong understanding of cloud-based data workflows, especially in Azure environments. Analytical mindset with the ability to troubleshoot data pipeline and transformation issues. Comfortable working with large datasets and navigating both structured and semi-structured data. Ability to follow runbooks, SOPs, and collaborate effectively with other technical teams. Willingness to learn new technologies and adapt in a dynamic environment. Good communication skills to interact with stakeholders, document findings, and share updates. Discipline to work independently, manage priorities, and escalate issues responsibly. To qualify for the role, you must have 2–3 years of experience in DataOps or Data Engineering roles Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem Experience working with Informatica CDI or similar data integration tools Scripting and automation experience in Python/PySpark Ability to support data pipelines in a rotational on-call or production support environment Comfortable working in a remote/hybrid and cross-functional team setup Technologies and Tools Must haves Working knowledge of Azure Data Factory, Data Lake, and Synapse Exposure to Azure Databricks – ability to understand and run existing notebooks Understanding of ETL processes and data flow concepts Good to have Experience with Power BI or Tableau for basic reporting and data visualization Exposure to Informatica CDI or any other data integration platform Basic scripting knowledge in Python or PySpark for data processing or automation tasks Proficiency in writing SQL for querying and analyzing structured data Familiarity with Azure Monitor and Log Analytics for pipeline monitoring Experience supporting DevOps deployments or familiarity with Azure DevOps concepts. What We Look For Enthusiastic learners with a passion for data op’s and practices. 
Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

9.0 - 12.0 years

15 - 30 Lacs

Pune, Bengaluru

Hybrid

Role & responsibilities Azure Data Engineer with Databricks (9+ Years) Experience: 9+ Years Location: Pune, Hyderabad (Preferred) Job Description: Experience performing design, development & deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL). Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity. Experience creating the technical specification design and application interface design. File processing across XML, CSV, Excel, ORC and Parquet formats. Develop batch processing, streaming and integration solutions and process structured and non-structured data. Good to have experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred). Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB. Collaborate and engage with the BI & analytics and business teams. Deep understanding of the operational dependencies of applications, networks, systems, security and policy (both on-premise and in the cloud): VMs, networking, VPN (ExpressRoute), Active Directory, Storage (Blob, etc.). If interested, kindly share your updated CV at Himanshu.mehra@thehrsolutions.in
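For a sense of the multi-format file processing this role mentions (CSV in, Parquet/ORC out on ADLS Gen2), here is a hedged PySpark sketch; the storage account, container and paths are hypothetical.

```python
# Hedged sketch of multi-format file processing on ADLS Gen2.
# The storage account, container and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format_conversion").getOrCreate()

base = "abfss://landing@examplelake.dfs.core.windows.net"  # hypothetical account

# Read a CSV drop with an explicit header row
sales_csv = (spark.read
             .option("header", "true")
             .option("inferSchema", "true")
             .csv(f"{base}/sales/csv/"))

# Persist in columnar formats for downstream batch and streaming jobs
sales_csv.write.mode("overwrite").parquet(f"{base}/sales/parquet/")
sales_csv.write.mode("overwrite").orc(f"{base}/sales/orc/")
```

XML and Excel sources would typically require additional libraries (for example, spark-xml or pandas with openpyxl), which are not shown in this sketch.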

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities. Your Key Responsibilities Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads. Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues. Collaborate with the team to support Databricks notebooks and manage small transformation tasks. Perform ETL operations and ensure timely and accurate data movement between systems. Write and debug intermediate-level SQL queries for data validation and issue analysis. Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed. Support deployment activities using Azure DevOps pipelines. Maintain and update SOPs, assist in documenting known issues and recurring tasks. Participate in incident management and contribute to resolution and knowledge sharing. Skills And Attributes For Success Strong understanding of cloud-based data workflows, especially in Azure environments. Analytical mindset with the ability to troubleshoot data pipeline and transformation issues. Comfortable working with large datasets and navigating both structured and semi-structured data. Ability to follow runbooks, SOPs, and collaborate effectively with other technical teams. Willingness to learn new technologies and adapt in a dynamic environment. Good communication skills to interact with stakeholders, document findings, and share updates. Discipline to work independently, manage priorities, and escalate issues responsibly. To qualify for the role, you must have 2–3 years of experience in DataOps or Data Engineering roles Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem Experience working with Informatica CDI or similar data integration tools Scripting and automation experience in Python/PySpark Ability to support data pipelines in a rotational on-call or production support environment Comfortable working in a remote/hybrid and cross-functional team setup Technologies and Tools Must haves Working knowledge of Azure Data Factory, Data Lake, and Synapse Exposure to Azure Databricks – ability to understand and run existing notebooks Understanding of ETL processes and data flow concepts Good to have Experience with Power BI or Tableau for basic reporting and data visualization Exposure to Informatica CDI or any other data integration platform Basic scripting knowledge in Python or PySpark for data processing or automation tasks Proficiency in writing SQL for querying and analyzing structured data Familiarity with Azure Monitor and Log Analytics for pipeline monitoring Experience supporting DevOps deployments or familiarity with Azure DevOps concepts. What We Look For Enthusiastic learners with a passion for data op’s and practices. 
Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply