5.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Key Responsibilities:
Data Lake and Lakehouse Implementation:
- Design, implement, and manage Data Lake and Lakehouse architectures. (Must have)
- Develop and maintain scalable data pipelines and workflows. (Must have)
- Utilize Azure Data Lake Storage (ADLS) for data storage and management. (Must have)
- Knowledge of the Medallion Architecture and the Delta format. (Must have)
Data Processing and Transformation:
- Use PySpark for data processing and transformations. (Must have)
- Implement Delta Live Tables for real-time data processing and analytics. (Good to have)
- Ensure data quality and consistency across all stages of the data lifecycle. (Must have)
Data Management and Governance:
- Employ Unity Catalog for data governance and metadata management. (Good to have)
- Ensure robust data security and compliance with industry standards. (Must have)
Data Integration:
- Extract, transform, and load (ETL) data from multiple sources (Must have), including SAP (Good to have), Dynamics 365 (Good to have), and other systems.
- Utilize Azure Data Factory (ADF) and Synapse Analytics for data integration and orchestration. (Must have)
- Performance optimization of jobs. (Must have)
Data Storage and Access:
- Implement and manage Azure Data Lake Storage (ADLS) for large-scale data storage. (Must have)
- Optimize data storage and retrieval processes for performance and cost-efficiency. (Must have)
Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements. (Must have)
- Provide technical guidance and mentorship to junior team members. (Good to have)
Continuous Improvement:
- Stay updated with the latest industry trends and technologies in data engineering and cloud computing. (Good to have)
- Continuously improve data processes and infrastructure for efficiency and scalability. (Must have)

Required Skills and Qualifications:
Technical Skills:
- Proficient in PySpark and Python for data processing and analysis.
- Strong experience with Azure Data Lake Storage (ADLS) and Data Lake architecture.
- Hands-on experience with Databricks for data engineering and analytics.
- Knowledge of Unity Catalog for data governance.
- Expertise in Delta Live Tables for real-time data processing.
- Familiarity with Microsoft Fabric for data integration and orchestration.
- Proficient in Azure Data Factory (ADF) and Synapse Analytics for ETL and data warehousing.
- Experience pulling data from multiple sources such as SAP, Dynamics 365, and others.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Attention to detail and commitment to data accuracy and quality.

Certifications Required:
- Certification in Azure Data Engineering or relevant Azure certifications: DP-203. (Must have)
- Databricks Certified Data Engineer Associate. (Must have)
- Databricks Certified Data Engineer Professional. (Good to have)

Mandatory skill sets: Azure DE, PySpark, Databricks
Preferred skill sets: Azure DE, PySpark, Databricks
Years of experience required: 5-10 years
Educational qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: PySpark
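To make the Medallion/Delta expectations above concrete, here is a minimal PySpark sketch of a bronze-to-silver hop on Databricks; the ADLS paths, table, and column names are invented for illustration, not taken from the posting.

```python
# Minimal medallion-style hop (bronze -> silver) in PySpark with Delta.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze_path = "abfss://lake@myadls.dfs.core.windows.net/bronze/orders"  # assumed ADLS path
silver_path = "abfss://lake@myadls.dfs.core.windows.net/silver/orders"

# Bronze: raw ingested records, stored as-is in Delta format.
bronze_df = spark.read.format("delta").load(bronze_path)

# Silver: cleansed and conformed; deduplicate, fix types, drop bad rows.
silver_df = (
    bronze_df
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

# Write back in Delta format; mergeSchema tolerates additive schema drift.
(silver_df.write.format("delta")
    .mode("overwrite")
    .option("mergeSchema", "true")
    .save(silver_path))
```

In a full medallion layout, a similar gold hop would aggregate the silver tables into consumption-ready marts.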
Posted 1 month ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Role: Databricks Developer
Experience: 6-8 years
Location: Kolkata/Hyderabad/Pune
Looking for immediate joiners or a notice period of up to 25 days.

Job Description:
- 5+ years of relevant and progressive data engineering experience.
- Deep technical knowledge and experience in Databricks, Python, Scala, and the Microsoft Azure architecture and platform, including Synapse, Azure Data Factory (ADF) pipelines, and Synapse stored procedures.
- Hands-on experience working with data pipelines using a variety of source and target locations (e.g., Databricks, Synapse, SQL Server, Data Lake, file-based, SQL and NoSQL databases).
- Experience in engineering practices such as development, code refactoring, leveraging design patterns, CI/CD, and building highly scalable data applications and processes.
- Experience developing batch ETL pipelines; real-time pipelines are a plus.
- Knowledge of advanced data engineering concepts such as dimensional modeling, ETL, data governance, and data warehousing involving structured and unstructured data.
- Thorough knowledge of Synapse and SQL Server, including T-SQL and stored procedures.
- Experience working with and supporting cross-functional teams in a dynamic environment.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices.
- Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytic solutions.
- Develop and deliver documentation on data engineering capabilities, standards, and processes; participate in coaching, mentoring, design reviews, and code reviews.
- Partner with business analysts and solutions architects to develop technical architecture for strategic enterprise projects and initiatives.
- Solve complex data problems to deliver insights that help the organization achieve its goals.
- Knowledge and understanding of Boomi is a plus.
- As this is a 24x7 production support project, the resource should be willing to work in shifts, including 7 night shifts per month.
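As a rough illustration of the batch ETL work this role describes, the sketch below pulls a SQL Server table over JDBC and lands it as Delta on Databricks; the server, secret scope, and table names are placeholders, not values from the posting.

```python
# Hedged sketch of a batch hop: SQL Server -> Delta on Databricks.
# Assumes a Databricks notebook (spark and dbutils provided by the runtime)
# and a hypothetical Key Vault-backed secret scope "kv-scope".
jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=sales"

df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.transactions")                        # source table (assumed)
      .option("user", "etl_user")                                   # placeholder
      .option("password", dbutils.secrets.get("kv-scope", "sql-pw"))  # secret, not inline
      .load())

# Land the batch as a Delta table (assumes the "raw" schema exists) for
# downstream Synapse/Databricks consumers.
df.write.format("delta").mode("append").saveAsTable("raw.transactions")
```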
Posted 1 month ago
3.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

Responsibilities:
- Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Develop best practices, including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
- Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
- Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
- Work with other members of the project team to support delivery of additional project components (API interfaces).
- Evaluate the performance and applicability of multiple tools against customer requirements.
- Work within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
- Integrate Databricks with other technologies (ingestion tools, visualization tools).
Requirements:
- Proven experience working as a data engineer.
- Highly proficient in using the Spark framework (Python and/or Scala).
- Extensive knowledge of data warehousing concepts, strategies, and methodologies.

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 3-10 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering
Required Skills: Databricks Platform, Extract Transform Load (ETL), PySpark, Python (Programming Language), Structured Query Language (SQL)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 1 month ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills:
- Experience: 6+ years in Data Engineering
- Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, Containerization (Docker), Clean coding practices

Good-to-Have Skills:
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and competitive programming background

Location: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
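One pattern this posting leans on is keeping credentials in Key Vault rather than in notebooks. A minimal sketch, assuming a Databricks secret scope backed by Azure Key Vault (the scope, key, and storage account names are hypothetical):

```python
# Hedged sketch: read an ADLS Gen2 account key from a Key Vault-backed
# Databricks secret scope instead of hard-coding it. Runs in a Databricks
# notebook, where spark and dbutils are provided by the runtime.
storage_key = dbutils.secrets.get(scope="kv-backed-scope", key="adls-account-key")

spark.conf.set(
    "fs.azure.account.key.myadls.dfs.core.windows.net",  # assumed storage account
    storage_key,
)

# With the credential set, the lake can be read directly.
df = spark.read.parquet("abfss://data@myadls.dfs.core.windows.net/landing/")
df.printSchema()
```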
Posted 1 month ago
5.0 - 7.0 years
8 - 10 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
We are seeking an experienced Power BI Developer to join our team. The role involves creating insightful, interactive reports and dashboards in Power BI, optimizing SQL queries, and troubleshooting data-related issues. The ideal candidate will have hands-on experience with complex DAX queries, a variety of data sources, and the different reporting modes (Import, DirectQuery). Responsibilities include working with PostgreSQL and MS SQL, developing robust SQL code, writing stored procedures, and ensuring high-quality data modeling. Candidates should have expertise in SQL optimization, performance tuning, and working with Common Table Expressions (CTEs) and complex joins. The role also requires proficiency in designing engaging visual reports, applying themes/templates, and keeping up with the latest Power BI features and best practices.
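The CTE-and-complex-join skill called out here carries across engines. Below is a hedged sketch of the shape, written as Spark SQL from Python to keep the examples in one language; the same CTE structure applies in T-SQL or PostgreSQL, and the sales and products tables are assumed to be registered views, not anything from the posting.

```python
# Illustrative CTE + window + join: top-10 products by revenue per month.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumes "sales" and "products" are registered tables or temp views.
monthly_top_products = spark.sql("""
    WITH monthly_sales AS (              -- CTE 1: aggregate first
        SELECT product_id,
               date_trunc('month', sold_at) AS month,
               SUM(amount)                  AS revenue
        FROM   sales
        GROUP  BY product_id, date_trunc('month', sold_at)
    ),
    ranked AS (                          -- CTE 2: window over the aggregate
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY month ORDER BY revenue DESC) AS rn
        FROM   monthly_sales
    )
    SELECT r.month, p.product_name, r.revenue
    FROM   ranked r
    JOIN   products p ON p.product_id = r.product_id  -- join back for labels
    WHERE  r.rn <= 10
""")

monthly_top_products.show()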
Posted 1 month ago
7.0 years
0 Lacs
India
On-site
Job Title: Senior Data Engineer
Experience: 7+ years
Notice: Immediate joiners

We are looking for a highly experienced Azure Data Engineer with strong technical expertise in data engineering tools and platforms within the Azure ecosystem. The ideal candidate should have 7+ years of relevant experience and a deep understanding of data warehousing, data pipelines, and cloud-based data processing frameworks.

Key Responsibilities:
- Design and implement scalable data pipelines and ETL processes using Azure Data Factory (ADF), PySpark, and Databricks.
- Manage and optimize Azure Data Lake and integrate with Azure Synapse Analytics for large-scale data storage and analytics.
- Collaborate with cross-functional teams to gather requirements, design data solutions, and deliver actionable insights.
- Develop and optimize SQL queries for data extraction and transformation.
- Apply data modeling techniques and implement best practices for data governance and quality.
- Work closely with BI developers and stakeholders to support reporting and dashboarding solutions.
- Implement and manage CI/CD pipelines for data engineering solutions.

Required Skills:
- 7+ years of experience in Azure data engineering.
- Strong proficiency in SQL and at least one programming language (preferably Python).
- Deep experience with Azure Data Factory (ADF), Azure Databricks, Azure Data Lake, Azure Synapse Analytics, and PySpark.
- Knowledge of data warehousing concepts and implementation.
- Experience with Apache Spark or similar ETL tools.

Preferred/Good to Have:
- Experience with Microsoft Fabric.
- Familiarity with Power BI for data visualization.
- Domain knowledge in Finance, Procurement, or Human Capital.
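For a taste of the Databricks-to-Synapse hop this role covers, here is a hedged sketch using the Azure Synapse connector from Databricks; the JDBC URL, staging container, and table names are placeholders, and the options shown are the commonly documented ones rather than values from the posting.

```python
# Hedged sketch: push a curated Databricks DataFrame into a Synapse dedicated
# SQL pool via the Azure Synapse connector. Runs on Databricks, where spark
# is provided by the runtime; all names below are hypothetical.
df = spark.read.format("delta").load("/mnt/curated/finance_facts")

(df.write
   .format("com.databricks.spark.sqldw")                     # Synapse connector
   .option("url", "jdbc:sqlserver://mysynapse.sql.azuresynapse.net:1433;database=dw")
   .option("tempDir", "abfss://staging@myadls.dfs.core.windows.net/synapse-tmp")
   .option("forwardSparkAzureStorageCredentials", "true")    # reuse cluster creds for staging
   .option("dbTable", "dbo.finance_facts")
   .mode("overwrite")
   .save())
```

The connector stages data in ADLS and bulk-loads it, which is why a tempDir is required; in an ADF-orchestrated design, this write would typically be one activity in a larger pipeline.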
Posted 1 month ago
5.0 - 9.0 years
7 - 15 Lacs
Kochi, Chennai, Bengaluru
Hybrid
Greetings from Aspire Systems!

Currently hiring for ETL Testing with SSIS, SQL, and ADF.
Role: ETL Testing
Experience: 5+ years only
Location: Chennai/Bangalore/Kochi
Notice: Immediate to 20 days only.
Share your CV to safoora.imthiyas@aspiresys.com / call 9384788107 - immediate joiners only.

Job Summary: We are seeking a highly skilled ETL Testing - Software Engineer with 5 to 6 years of experience to join our dynamic team. The ideal candidate will have proficient knowledge of SSIS, SSRS, SQL, and MS SQL Server, plus knowledge of Azure Databricks. This role involves ensuring the quality and reliability of ETL processes and data integration solutions.

Key Responsibilities:
- ETL Testing: Design, develop, and execute ETL test plans and test cases to ensure data accuracy, completeness, and integrity.
- SSIS/SSRS Reporting: Validate BI reports using complex SQL queries.
- SQL Proficiency: Write complex SQL queries for data validation, testing, and troubleshooting.
- Azure Databricks: Apply basic knowledge of Azure Databricks for data processing and analytics.
- Data Quality Assurance: Perform data validation and verification to ensure data quality and consistency across various systems.
- Defect Management: Identify, document, and track defects using appropriate tools (such as JIRA) and methodologies.
- Collaboration: Work closely with developers, business analysts, and other stakeholders to understand requirements and ensure comprehensive testing coverage.
- Documentation: Create and maintain detailed documentation of test cases, test results, and testing processes.

Required Skills and Qualifications:
- Proficient SSIS/SSRS/SQL skills for writing and optimizing queries.
- Sound knowledge of Azure Databricks.
- Strong analytical and problem-solving skills to identify issues and ensure data accuracy.
- Attention to detail: High attention to detail and commitment to delivering high-quality work.
- Communication: Excellent verbal and written communication skills.
- Team player: Ability to work effectively in a team environment and collaborate with cross-functional teams.
- Experience: Previous experience in a similar role within the insurance sector.
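Much of ETL testing boils down to source-to-target reconciliation. A minimal PySpark sketch of the idea, with hypothetical table and column names:

```python
# Hedged sketch of ETL test checks: row counts, an aggregate checksum, and
# missing-key detection between a staging source and the warehouse target.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

src = spark.table("staging.policies")   # hypothetical source
tgt = spark.table("dw.policies")        # hypothetical target

# Check 1: row counts must match.
assert src.count() == tgt.count(), "Row count mismatch between source and target"

# Check 2: an aggregate checksum on a key numeric column.
src_sum = src.agg(F.sum("premium_amount")).first()[0]
tgt_sum = tgt.agg(F.sum("premium_amount")).first()[0]
assert src_sum == tgt_sum, f"Checksum mismatch: {src_sum} vs {tgt_sum}"

# Check 3: keys present in source but missing in target.
missing = src.join(tgt, "policy_id", "left_anti")
assert missing.count() == 0, "Policies missing from target"
```

In practice each failed assertion would be logged as a defect (e.g., in JIRA) with the offending keys attached as evidence.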
Posted 1 month ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: Sr Data Engineer (Azure)
Experience: 5+ years

What you should have:
- A technical background in SAP (SAP BW, SAP HANA, and SAP BDC) is preferred.
- Proficiency in Microsoft Fabric, Azure Data Lake, Azure Synapse, and Azure Databricks.
- Solid understanding of data modeling, ETL/ELT processes, and SQL.
- Experience with PySpark and Python for data engineering tasks.
- Familiarity with CI/CD practices using Azure DevOps or GitHub.
- Ability to work in Agile environments and collaborate with diverse teams.
- Good analytical skills with excellent knowledge of SQL.
- Well versed in Azure services.
- Experience and knowledge of ADF, ADLS, and Blob Storage.
- Experience in building data pipelines.
- Hands-on development experience with PySpark and Databricks.
- Experience using software version control tools (Git).
- Work in Agile methodologies; may be required to perform QA for work done by other team members in the sprint.
- Work with the team and assist the Product Owner and technology lead in identifying and estimating data platform engineering work.
- Knowledge of and ability to set up DevOps and test frameworks.
- Familiarity with API integration processes.
- Exposure to Power BI, streaming data, and other Azure services.

Responsibilities:
- Develop data pipelines to load data using Azure services.
- Perform data model design and ETL/ELT development optimized for efficient storage, access, and computation to serve various Business Intelligence use cases.
- Contribute fully or partially to API integration, end-to-end DevOps automation, test automation, data visualisation (Power BI), and Business Intelligence reporting solutions.
- Knowledge of programming languages such as Spark or Python.
- Create technical design documentation that includes current and future functionality, database objects affected, specifications, and flows/diagrams to detail the proposed database and/or data integration implementation.
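A common building block for the pipelines described above is an incremental (watermark) load. The sketch below assumes a small Delta control table, etl.watermarks, plus invented source and target table names; none of these come from the posting.

```python
# Hedged sketch of an incremental load driven by a watermark timestamp.
from datetime import datetime
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Last successfully loaded timestamp, tracked in a small Delta control table.
last_ts = (spark.table("etl.watermarks")
                .filter(F.col("table_name") == "orders")
                .agg(F.max("loaded_until"))
                .first()[0]) or datetime(1900, 1, 1)   # first run: load everything

# Pull only rows newer than the watermark from the bronze layer.
delta_rows = (spark.table("bronze.orders")
                   .filter(F.col("modified_at") > F.lit(last_ts)))

delta_rows.write.format("delta").mode("append").saveAsTable("silver.orders")

# Advance the watermark only after a successful write (Delta supports UPDATE).
new_ts = delta_rows.agg(F.max("modified_at")).first()[0]
if new_ts is not None:
    spark.sql(f"""
        UPDATE etl.watermarks
        SET loaded_until = '{new_ts}'
        WHERE table_name = 'orders'
    """)
```

Updating the watermark after the write, not before, keeps the load idempotent: a failed run simply reprocesses the same window.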
Posted 1 month ago
10.0 - 16.0 years
25 - 27 Lacs
Chennai
Work from Office
We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai with a shift from 2.00pm to 11.00pm IST.

Key qualifications we seek in candidates include:
- Solid understanding of SQL and data modeling
- Proficiency in DWH architecture, including EDW/DM concepts and Star/Snowflake schemas
- Experience in designing and building data pipelines on the Azure cloud stack
- Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Microsoft Fabric, Azure Analysis Services, and Azure SQL Data Warehouse
- Knowledge of Azure DevOps and CI/CD pipelines
- Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects
- Exposure to on-premise transactional database environments like Oracle, SQL Server, Snowflake, MySQL, and/or Postgres
- Ability to lead enterprise data strategies, including data lake delivery
- Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python
- Strong problem-solving skills with a track record of deriving business insights from large datasets
- Excellent communication skills and the ability to provide strategic direction to technical and business teams
- Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory
- Capability to explain complex data solutions clearly to senior management
- Experience in implementing, managing, and supporting data warehouse projects or applications
- Track record of leading full-cycle implementation projects related to Business Intelligence
- Strong team and stakeholder management skills
- Attention to detail, accuracy, and ability to meet tight deadlines
- Knowledge of application development, APIs, microservices, and integration components

Tools & Technology Experience Required:
- Strong hands-on experience in SQL or PL/SQL
- Proficiency in Python
- SSIS or Informatica (one of these tools is mandatory)
- BI: Power BI or Tableau (one of these tools is mandatory)
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview:
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
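To ground the observability and self-healing themes above, here is a minimal sketch of an automated freshness-and-volume check; the table name, thresholds, and the assumption that timestamps are stored in UTC are all illustrative.

```python
# Hedged sketch of a DataOps health check on a Delta table.
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

tbl = spark.table("silver.sales_events")   # hypothetical monitored table

# Freshness: newest ingestion timestamp must be recent (timestamps assumed UTC).
latest = tbl.agg(F.max("ingested_at")).first()[0]
fresh_ok = latest is not None and latest > datetime.utcnow() - timedelta(hours=2)

# Volume: today's row count must clear an illustrative floor.
rows_today = tbl.filter(F.col("ingested_at") >= F.current_date()).count()
volume_ok = rows_today > 1000

# In practice these results would feed Azure Monitor / alerting; failing fast
# here lets the orchestrator (e.g. an ADF pipeline) mark the run as failed.
failed = [name for name, ok in [("freshness", fresh_ok), ("volume", volume_ok)] if not ok]
if failed:
    raise RuntimeError(f"DataOps checks failed: {failed}")
```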
Posted 1 month ago
0.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join the Zafin team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making.

Key Responsibilities:
- Azure Data Factory: Design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems.
- SQL & Data Management: Develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy.
- Data Transformation & Modelling: Build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability.
- Power BI Development: Create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements.
- DAX Expertise: Write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports.
- Collaboration: Work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization.
- Attention to Detail: Ensure data consistency and accuracy through rigorous validation and testing processes.
- Presentation & Reporting: Effectively communicate insights and updates to stakeholders, delivering clear and concise documentation.

Skills and Qualifications:
- Technical expertise: Proficient in Azure Data Factory for building ETL pipelines and managing data flows; strong experience with SQL, including query optimization and data transformation; knowledge of Azure Analysis Services for data modelling; advanced Power BI skills, including DAX, report development, and data modelling; familiarity with Microsoft Fabric and Azure analytics (a plus).
- Analytical thinking: Ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively.
- Communication skills: Excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders.
- Educational qualification: Minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline.

Job Type: Full-time
Pay: Up to ₹1,000,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Have you worked with Azure Data Factory (ADF)?
Work Location: In person
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Senior Engineer, Data Modeling
Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained industrious advantage.

Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing
What will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling complexity and cost of the data platform, while enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture (at a granular level), and partner with Architects to ensure optimal design of data layers.
- Apply best practices in data architecture: for example, the balance between materialization and virtualization, optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
- Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability, and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced iterative solution delivery model.
- Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks using Delta tables (a minimal sketch of the Delta upsert pattern follows this posting). Use Harness for the deployment pipeline.
- Monitor performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to make sure code is not vulnerable.
You will report to the Application Manager.

What You Will Bring
We're looking for someone who has these abilities and skills:

Required Skills and Abilities:
- Effective communication skills.
- Bachelor's degree in Computer Science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing, and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break complex data requirements into achievable targets and architect solutions.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.

Desired Skills and Abilities:
- Worked on big data migration projects.
- Worked on performance tuning at both the database and big data platform levels.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Excellent grasp of the basics of Parquet files and Delta files.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software - Power BI is a plus.
- Familiarity with dbt is a plus.
- Passion for data and experience working within a data-driven organization. You care about what you do, and what we do.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business - property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com.

What We Offer
Inclusion: AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another, and our business, to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
- Robust support for Flexible Working Arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards: AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability: At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities.
We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems, the foundation of a sustainable planet and society, are essential to our future. We're committed to protecting and restoring nature, from mangrove forests to the bees in our backyard, by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day, the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
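For readers unfamiliar with the Databricks workflow this role describes, here is a minimal sketch of a PySpark ETL step that lands cleansed data in a Delta table. The paths, column names, and cleansing rules are illustrative assumptions, not AXA XL's actual pipeline; on Databricks the Delta format is available out of the box, while standalone Spark needs the delta-spark package.

```python
# Minimal sketch of a PySpark ETL step writing to a Delta table.
# Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

# Ingest raw source files from a hypothetical landing zone.
raw = spark.read.option("header", "true").csv("/mnt/landing/claims/*.csv")

# Light cleansing: normalize types and drop rows missing the key.
cleaned = (
    raw.withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd"))
       .filter(F.col("claim_id").isNotNull())
)

# Persist as Delta; overwrite keeps the example idempotent on reruns.
(cleaned.write.format("delta")
        .mode("overwrite")
        .save("/mnt/curated/claims"))
```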
Posted 1 month ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Manage Azure Infrastructure: Optimize the Azure cloud environment for high availability, scalability, and performance.
Administer Azure Services: Manage Azure Portal, Entra ID, and authentication mechanisms.
Configure Azure PaaS Services: Work with services such as App Service, Function Apps, Azure SQL, Cosmos DB, and Azure Service Bus (ASB).
Azure Storage and Key Vault: Implement and manage Azure Storage solutions and Key Vault for secure data handling.
API Management: Oversee Azure API Management (APIM), including managing the Developer Portal, Postman/Bruno, and policy configuration.
Monitoring and Performance: Set up App Insights, Log Analytics, and monitoring tools for performance optimization and troubleshooting.
Governance & Compliance: Enforce Azure Policies to ensure governance and maintain compliance.
Secure Application Onboarding: Assist with secure and compliant application onboarding to Azure.
IaC Development: Develop and maintain Terraform, ARM templates, and PowerShell scripts for automating infrastructure provisioning.
Cloud Deployment Automation: Use Azure CLI, PowerShell, and templates for automating cloud infrastructure deployments.
Compliance & Governance Automation: Ensure adherence to Azure policies and governance frameworks via automated solutions.
Configure Network Components: Set up and maintain VNets, Subnets, Private Endpoints, and ExpressRoute circuits.
Security Implementations: Implement firewalls, NSGs, UDRs, and DNS configurations to secure the Azure environment.
Route Optimization & Secure Connectivity: Optimize global route management and secure connectivity.
Authentication & Certificate Management: Manage authentication, authorization, and the lifecycle of certificates to ensure secure cloud operations.
Use Kusto Query Language (KQL): Leverage KQL for log analysis and troubleshooting.
Optimize PaaS Resources: Optimize Azure PaaS services for cost, scalability, and performance.
Certificate & Key Management: Handle platform-level management of certificates and keys (e.g., CMK, development certificates).
Collaborate with Teams: Work closely with DevOps and application teams to ensure smooth deployments and operations.

Experience: 8–10 years in Azure Cloud Engineering or similar roles, with deep expertise in cloud infrastructure and Azure services.
Cloud Fundamentals: Strong foundation in cloud networking, security, storage, and authentication.
Azure PaaS Expertise: Proficient in working with Azure PaaS services like App Services, ASE, ADF, APIM, ASB, Cosmos DB, and Azure SQL.
Automation Skills: Experience with Terraform, ARM templates, and PowerShell scripting for IaC.
Azure CLI & Policy Enforcement: Hands-on experience with Azure CLI, policy enforcement, and cloud automation.
Networking Expertise: Skilled in Azure networking components like VNets, NSGs, Private Endpoints, and ExpressRoute.
Troubleshooting Expertise: Proficient in troubleshooting with tools like App Insights, Log Analytics, and KQL.
Cloud Security & Certificate Management: Familiarity with certificate management and best practices for cloud security.

Skills: Azure PaaS and IaC
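As a taste of the KQL-for-troubleshooting work this role mentions, here is a hedged sketch of running a Log Analytics query from Python with the azure-monitor-query and azure-identity packages. The workspace ID is a placeholder and the query itself is illustrative (AppRequests is the workspace-based Application Insights request table), so treat this as an API-level example rather than a prescribed runbook.

```python
# Hedged sketch: run a KQL query against a Log Analytics workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Illustrative KQL: failed requests per app role over the last hour.
query = """
AppRequests
| where Success == false
| summarize failures = count() by AppRoleName
| order by failures desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(hours=1),
)

# Print each result row as a column-name -> value mapping.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```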
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from TATA Consultancy Services

Job Openings at TCS
Skill: Azure Data Engineer
Experience range: 4 to 10 years
Role: Permanent
Job location: Pune
Current location: Pune
Mode of interview: Face-to-face (walk-in) interview on Saturday, 31st May 2025, at Hinjewadi Phase 3, Pune

Please find the job description below.
Keywords for search: Azure Databricks, Python, ADF, Data Engineer, Synapse

Job Description: Role and Responsibilities, Senior Data Engineer
5+ years of total IT experience.
Minimum 5+ years of development experience in Azure.
Must have data warehouse / data lake development experience.
Must have Azure Data Factory (ADF) and Azure SQL DB experience.
Must have Azure Databricks experience using Python, Spark, or Scala.
Nice to have data modelling and Azure Synapse experience.
Nice to have Azure ML experience.
Nice to have Power BI experience.
Nice to have Azure Data Engineer certifications.
Passion for data quality, with an ability to integrate these capabilities into the deliverables.
Prior use of big data components and the ability to rationalize and align their fit for a business case.
Experience in working with different data sources: flat files, XML, JSON, Avro files, and databases.
Knowledge of Jenkins for continuous integration and end-to-end automation of application builds and deployments.
Ability to integrate into a project team environment and contribute to project planning activities.
Experience in developing implementation plans and schedules, and preparing documentation for jobs according to business requirements.
Lead ambiguous and complex situations to clear, measurable plans.
Proven experience and ability to work with people across the organization, skilled at managing cross-functional relationships and communicating with leadership across multiple organizations.
Proven capability for strong written and oral communication, with the ability to synthesize, simplify, and explain complex problems to different audiences.

Thanks & Regards
Priyanka
Talent Acquisition Group
Tata Consultancy Services
Posted 1 month ago
6.0 - 9.0 years
40 - 45 Lacs
Pune
Work from Office
6+ years of experience in data engineering with a focus on Azure cloud technologies.
Strong expertise in Azure Data Factory, Databricks, ADLS, and Power BI.
Proficiency in SQL, Python, and Spark for data processing and transformation.
Experience with IoT data ingestion and processing, handling high-volume, real-time data streams.
Strong understanding of data modeling, Lakehouse architecture, and medallion frameworks.
Experience in building and optimizing scalable ETL/ELT processes.
Knowledge of data governance, security, and compliance frameworks.
Experience with monitoring, logging, and performance tuning of data workflows.
Strong problem-solving and analytical skills with a platform-first mindset.
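To make the "medallion framework" requirement above concrete, here is a minimal bronze-to-silver PySpark sketch for IoT telemetry on ADLS. The abfss:// container paths, the storage account name, and the telemetry schema are all assumptions for illustration, not a prescribed standard.

```python
# Illustrative bronze -> silver step in a medallion layout on ADLS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot_medallion").getOrCreate()

# Bronze: raw IoT telemetry appended as-is (hypothetical path).
bronze = spark.read.format("delta").load(
    "abfss://bronze@lakeacct.dfs.core.windows.net/iot/telemetry"
)

# Silver: deduplicate by device and event time, standardize units,
# and drop records with no device identifier.
silver = (
    bronze.dropDuplicates(["device_id", "event_ts"])
          .withColumn("temperature_c", (F.col("temperature_f") - 32) * 5.0 / 9.0)
          .filter(F.col("device_id").isNotNull())
)

(silver.write.format("delta")
       .mode("overwrite")
       .save("abfss://silver@lakeacct.dfs.core.windows.net/iot/telemetry"))
```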
Posted 1 month ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
TCS Hiring for Azure Admin + Azure Platform Engineer
Experience: 5 to 8 years only
Job Location: New Delhi, Kolkata, Mumbai, Pune, Bangalore

Required Technical Skill Set: Deployment through Terraform, Azure Administration, Data Factory, Databricks, Active Directory, Identity, Unity Catalog, Terraform, Machine Learning, AI, and Access Management

3+ years of prior product/technical support customer-facing experience.
Must have good knowledge of working in Azure cloud technical support.
Good to have technical skills and hands-on experience in the following areas:
Deployment through Terraform, PowerShell/CLI, Identity Management, Azure Resource Group management, and Azure PaaS services (e.g., ADF, Databricks, Storage Account).
Understanding of machine learning and AI concepts as they relate to infrastructure.
Unity Catalog end-to-end process to migrate from Hive to UC.
Excellent team player with good interpersonal and communication skills.
Experience in the Life Sciences and Healthcare domain preferred.

Roles & Responsibilities:
Resource group creation along with deployment of various components using Terraform templates.
Management of user access in Azure PaaS products such as Azure SQL, WebApp, App Service, Storage Account, Databricks, and Data Factory.
Creation of Service Principals/AD groups and using them to manage access to various applications.
Troubleshoot issues regarding access, data visualizations, and permissions.

Kind Regards,
Priyankha M
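As an aside on the resource-group provisioning duty in this posting: the role's workflow is Terraform-based, but the same operation can be shown compactly with the Azure SDK for Python (azure-mgmt-resource plus azure-identity). The subscription ID, group name, location, and tags below are placeholders; treat this as an API-level illustration only, not the Terraform template the role actually uses.

```python
# Hedged sketch: create (or update) a resource group via the Azure SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id="<subscription-id>")

# Tags help the governance/compliance checks the posting describes.
rg = client.resource_groups.create_or_update(
    "rg-dataplatform-dev",  # placeholder name
    {"location": "centralindia", "tags": {"owner": "platform-team"}},
)
print(rg.name, rg.location)
```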
Posted 1 month ago
4.0 - 9.0 years
6 - 14 Lacs
Hyderabad
Remote
Job description
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai. Preferred: Hyderabad
At least 4+ years of relevant hands-on development experience in an Azure Data Engineering role.
Proficient in Azure technologies such as ADB, ADF, SQL (with the ability to write complex SQL queries), PySpark, Python, Synapse, Delta tables, and Unity Catalog.
Hands-on in Python, PySpark, or Spark SQL.
Hands-on in Azure Analytics and DevOps.
Taking part in Proof of Concepts (POCs) and pilot solution preparation.
Ability to conduct data profiling, cataloguing, and mapping for technical design and construction of technical data flows.
Experience in business process mapping for data and analytics solutions.
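Since this posting calls out Delta tables specifically, here is a minimal Delta Lake upsert (MERGE) sketch in PySpark. The table paths and the join key are hypothetical; on Databricks the delta module is available by default, while elsewhere it requires the delta-spark package.

```python
# Minimal Delta Lake upsert (MERGE) sketch with placeholder paths.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

# Incoming changes staged as a Delta dataset (hypothetical path).
updates = spark.read.format("delta").load("/mnt/staging/orders_updates")

# Target curated table, addressed by path.
target = DeltaTable.forPath(spark, "/mnt/curated/orders")

# Upsert: update matching orders, insert new ones.
(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```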
Posted 1 month ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Description:
Customize and configure Oracle Fusion modules as per business requirements.
Develop and modify reports (BIP, OTBI, FRS, Hyperion Smart View), interfaces, extensions (Page Composer, Application Composer with Groovy scripting, Process Composer), workflows (Oracle BPM, AMX), forms (Oracle ADF, Java-based), VBCS, and page customizations to enhance functionality.
Integrate Oracle Fusion applications with other business systems and third-party applications (Oracle Integration Cloud).
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What is Blend
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

What is the Role
As a Senior Data Engineer, your role is to spearhead the data engineering teams and elevate the team to the next level! You will be responsible for laying out the architecture of the new project as well as selecting the tech stack associated with it. You will plan out the development cycles, adopting Agile where possible, and create the foundations for good data stewardship with our new data products! You will also set up a solid code framework that needs to be built to purpose yet flexible enough to adapt to new business use cases: a tough but rewarding challenge!

What you'll be doing
Collaborate with several stakeholders to deeply understand the needs of data practitioners and deliver at scale.
Lead Data Engineers to define, build, and maintain the Data Platform.
Work on building a data lake in Azure Fabric, processing data from multiple sources.
Migrate the existing data store from Azure Synapse to Azure Fabric.
Implement data governance and access control.
Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards.
Present technical solutions, capabilities, considerations, and features in business terms.
Effectively communicate status, issues, and risks in a precise and timely manner.
Further develop critical initiatives, such as Data Discovery, Data Lineage, and Data Quality.
Lead the team and mentor junior resources; help your team members grow in their roles and achieve their career aspirations.
Build data systems, pipelines, analytical tools, and programs.
Conduct complex data analysis and report on results.

What do we need from you?
5+ years of experience as a data engineer or in a similar role in Azure Synapse or ADF, or relevant experience in Azure Fabric.
Degree in Computer Science, Data Science, Mathematics, IT, or a similar field.
Must have experience executing projects end to end; at least one data engineering project should have used Azure Synapse, ADF, or Azure Fabric.
Should be experienced in handling multiple data sources.
Technical expertise with data models, data mining, and segmentation techniques.
Deep understanding, both conceptually and in practice, of at least one object-oriented library (Python, PySpark).
Strong SQL skills and a good understanding of existing SQL warehouses and relational databases.
Strong Spark, PySpark, and Spark SQL skills, and a good understanding of distributed processing frameworks.
Build large-scale batch and real-time data pipelines.
Ability to work independently and mentor junior resources.
Desire to lead and develop a team of Data Engineers across multiple levels.
Experience or knowledge in Data Governance.
Azure Cloud experience with data modeling, CI/CD, Agile methodologies, and Docker/Kubernetes.

What do you get in return?
Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
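In the spirit of the Data Quality initiative this role mentions, here is a hedged sketch of a simple data-quality gate in PySpark. The table path, key column, and thresholds are illustrative assumptions; a production setup would more likely use a dedicated framework, but the pattern of counting violations and failing fast is the same.

```python
# Hedged sketch: a simple data-quality gate for a silver-layer table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_gate").getOrCreate()

df = spark.read.format("delta").load("/lakehouse/silver/customers")  # placeholder

total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the pipeline run if integrity rules are violated.
assert null_keys == 0, f"{null_keys} rows missing customer_id"
assert dupes / max(total, 1) < 0.01, f"duplicate key ratio too high: {dupes}/{total}"
print(f"DQ gate passed: {total} rows checked")
```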
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Position: Enterprise Architect / Solution Architect / Cloud Pre-Sales Solution Architect
Job Type: Permanent
Location: Pune
Experience: 10+ years

Roles and Responsibilities
Design, develop, and implement solutions to complex business problems, collaborating with stakeholders to understand their needs and requirements; design and implement solutions that meet those needs, balancing technology risks against business delivery and driving consistency.
A seasoned IT professional with + years of IT experience, currently working as a Solution Architect in the Technologies COE/Practice team, understanding business requirements and designing solutions based on multiple clouds (Azure, AWS, Google).
AWS Certified Associate/Professional Architect.
As AWS Architect for Belron-Safelite: produced target designs for 80 applications based on the as-is designs, signed off after presenting the target designs to stakeholders. Won appreciation from the customer, particularly for the networking design using different AWS networking services.
As AWS Architect, led a team of developers to configure the infrastructure using CFT, including using AWS CFT to spin up the EKS cluster for deploying the critical application.
Azure Cloud T&T: TDDs for 21 different Azure Data Analytics services (e.g., ADF, Event Grid, Event Hubs, Synapse) approved by the customer and implemented.
Provisioned different Azure Analytics services for customers using IaC tools.
Transitioned 8-9 Fortune 500 customers on Azure, AWS, and Google Cloud.
Implemented Cloudera Big Data Hadoop and Anaconda software for Malaysia's biggest financial customer.
Transition lead for a 220-node Big Data Hadoop cluster for one of the USA's biggest financial customers.
Managed an Azure Landing Zone implementation for an EU customer.
Engaged with the GTM/Sales/Pre-Sales team for technical expertise.
Azure cloud strategy consultant.
Led T&T (Transitions & Transformations) for Azure PaaS services for customers across the globe.
Responsible for building the Big Data Hadoop practice team.
Expert in Azure PaaS Data & Analytics services.
Involved in propositions, pre-sales, TDD, HLD, LLD, solutioning and design, architecture, effort estimation, RFP/RFI, and T&T (Transition & Transformation); core member of the interview panel for Big Data Analytics and cloud technologies.
Led the team implementing different Azure PaaS Data & Analytics services.
Rich experience in preparing deployment plans for different Azure PaaS services and obtaining customer approval to provision them; worked closely with the IaC team to execute the deployment plans.
Rich technical experience in architecting, designing, and implementing cloud-based data platform and analytics services; currently spearheading the delivery of Azure Data Lake and modern data warehouse solutions.
Developing solutions; planning, creating, and delivering compelling proof-of-concept demonstrations.
Possesses professional IT delivery management experience with a strong work ethic, approachability, and consistent commitment to team leadership and innovation.
Responsible for driving teamwork, communication, collaboration, and commitment within IT and across teams.
Providing and implementing suggestions on cost optimization of client infrastructure.
Working on various Microsoft Azure services such as Azure Virtual Machines, Azure Networking, Azure Storage, Azure Migrate, Azure DevOps, Azure Data Lake, Azure Synapse Analytics, Azure Stream Analytics, Azure Databricks, Azure Backup, and Azure Active Directory.
Configuring Azure Firewall, Application Gateway with WAF, load balancers, and Traffic Manager to manage the security of the workload virtual network.
Managing and implementing roles, users, groups, RBAC, MFA, and Conditional Access policies in Azure AD.
Working with various DevOps tools such as Repos, Dashboards, and GitHub for version control, plus containers (Docker) and Kubernetes.
Managing pods, ReplicaSets, deployments, and services in a Kubernetes cluster.
Building POC environments in Google and IBM Cloud.
Provisioning different resources/resource groups via Terraform.
Worked as a Mainframe Consultant with Tech Mahindra (Satyam Computers Ltd.) for EU clients to implement ChangeMan/VisionPLUS.
Expertise in troubleshooting production issues, log analysis, and performance monitoring.
Excellent knowledge of ITIL processes like Incident, Problem, Change, Release, and Availability Management.
Worked on various service management tools like Remedy/ServiceNow for problem and incident management.
Responsible for transition and transformation of Hadoop projects.
Responsible for various Big Data Analytics and cloud propositions.
Big Data Hadoop with the world's biggest financial customers: Hadoop | HBase | Hive | Looker | Neo4j | OpenShift | Kubernetes | Docker | Rundeck | Prometheus | AWS | Azure | Shell | Python | Architect | Implementation | Troubleshooting | Solution
Posted 1 month ago