5.0 - 10.0 years
10 - 20 Lacs
chennai, bengaluru, mumbai (all areas)
Hybrid
Azure Data Engineer: Data Engineers will be responsible for developing and maintaining the data pipelines that populate and process data within the existing Azure Data Lake. They will need to ensure that data is properly ingested, transformed, and made accessible for analytics, reporting, and real-time insights.
Key Skills:
- Azure Data Factory (ADF): Proficiency in building data pipelines using Azure Data Factory, particularly for ETL/ELT processes from D365 F&O and other systems to the data lake.
  - Linked Services: Connecting ADF with D365 F&O, other databases, and external sources.
  - Data Flows: Implementing complex data transformations using ADF's data flow capabilities.
- Real-Time Data Integration: Experience with real-time data ingestion and processing using tools like Azure Stream Analytics, especially for pushing data into the data lake in near-real-time.
- Databricks or Synapse Analytics: Knowledge of using Azure Databricks or Synapse for data processing, transformation, and analytics within the data lake environment, including large-scale distributed data processing.
- Data Storage Optimization: Skills in optimizing data storage within the data lake, including choosing the right storage tiers (e.g., hot, cool, archive) and compressing large datasets.
- Data Lake / Delta Lake File Formats: Familiarity with optimized file formats such as Parquet, Avro, or Delta Lake for efficient querying and data storage.
- Power BI service for scheduled refreshes; Azure Data Factory for orchestrating data refreshes.
- Monitoring and Performance Tuning: Experience with monitoring data pipelines, troubleshooting performance bottlenecks, and optimizing for cost-effective usage of Azure resources.
Kindly share your resume to Jyoti.dhabadi@ltimindtree.com / Dhanasurya.Velusamy@ltimindtree.com
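The ETL/ELT responsibilities above revolve around incremental loads from source systems into the lake. A minimal pure-Python sketch of the high-watermark pattern that ADF copy activities typically implement (the record shape and field names here are hypothetical, not an ADF API):

```python
from datetime import datetime

def incremental_load(records, watermark):
    """Return only records modified after the last watermark,
    plus the new watermark to persist for the next run."""
    new_rows = [r for r in records if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": datetime(2024, 1, 1)},
    {"id": 2, "modified": datetime(2024, 3, 1)},
]
# Only rows changed since the last run (2024-02-01) are copied.
batch, wm = incremental_load(rows, datetime(2024, 2, 1))
```

In ADF the watermark would live in a control table or pipeline variable; the filtering itself is pushed down to the source query.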
Posted 1 day ago
5.0 - 10.0 years
0 Lacs
karnataka
On-site
As an MLOps Engineer, you will play a crucial role in designing, building, and maintaining end-to-end MLOps pipelines for ML model training, testing, and deployment. Your responsibilities will include collaborating with Data Scientists to productionize ML models in Azure ML and Azure Databricks. Additionally, you will implement CI/CD pipelines for ML workflows using Azure DevOps, GitHub Actions, or Jenkins and automate infrastructure provisioning using IaC tools such as Terraform, ARM templates, or Bicep. Monitoring and managing deployed models using Azure Monitor, Application Insights, and MLflow will also be part of your daily tasks. Implementing best practices in model versioning, model registry, experiment tracking, and artifact management is essential to ensure the success of ML solutions deployed on Azure. Your role will also involve working closely with cross-functional teams, including Data Engineers, DevOps Engineers, and Data Scientists, to streamline ML delivery. Developing monitoring and alerting for ML model drift, data drift, and performance degradation will be critical for maintaining the efficiency and effectiveness of ML models.

**Key Responsibilities:**
- Design, build, and maintain end-to-end MLOps pipelines for ML model training, testing, and deployment.
- Collaborate with Data Scientists to productionize ML models in Azure ML and Azure Databricks.
- Implement CI/CD pipelines for ML workflows using Azure DevOps, GitHub Actions, or Jenkins.
- Automate infrastructure provisioning using IaC tools (Terraform, ARM templates, or Bicep).
- Monitor and manage deployed models using Azure Monitor, Application Insights, and MLflow.
- Implement best practices in model versioning, model registry, experiment tracking, and artifact management.
- Ensure security, compliance, and cost optimization of ML solutions deployed on Azure.
- Work with cross-functional teams (Data Engineers, DevOps Engineers, Data Scientists) to streamline ML delivery.
- Develop monitoring/alerting for ML model drift, data drift, and performance degradation.

**Qualifications Required:**
- 5-10 years of experience in programming: Python, SQL.
- Experience with MLOps/DevOps tools: MLflow, Azure DevOps, GitHub Actions, Docker, Kubernetes (AKS).
- Proficiency in Azure services: Azure ML, Azure Databricks, Azure Data Factory, Azure Storage, Azure Functions, Azure Event Hubs.
- Experience with CI/CD pipelines for ML workflows and IaC using Terraform, ARM templates, or Bicep.
- Familiarity with data handling tools such as Azure Data Lake, Blob Storage, and Synapse Analytics.
- Strong knowledge of monitoring & logging tools like Azure Monitor, Prometheus/Grafana, and Application Insights.
- Understanding of the ML lifecycle including data preprocessing, model training, deployment, and monitoring.

Preferred qualifications include experience with Azure Kubernetes Service (AKS) for scalable model deployment, knowledge of feature stores and distributed training frameworks, familiarity with RAG (Retrieval Augmented Generation) pipelines and LLMOps, and Azure certifications such as Azure AI Engineer Associate, Azure Data Scientist Associate, or Azure DevOps Engineer Expert.
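Monitoring for data drift, one of the responsibilities above, is often done with a statistic such as the Population Stability Index (PSI) computed between training and serving data. A minimal stdlib-only sketch (the binning strategy and the 0.2 alert threshold are simplified assumptions; a production setup would wire this into MLflow or Azure Monitor as the listing notes):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.
    Bin edges are derived from the expected (training) sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Floor probabilities so the log term stays defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [1, 2, 3, 4, 5, 6, 7, 8]
stable = psi(train, train)                  # identical data: no drift
drift = psi(train, [x + 4 for x in train])  # shifted data: high PSI
```

A common rule of thumb is to alert when PSI exceeds roughly 0.2, indicating a significant shift in the serving distribution.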
Posted 4 days ago
7.0 - 8.0 years
3 - 6 Lacs
pune
Work from Office
Job Purpose
This position is open with Bajaj Finance Ltd.

Duties and Responsibilities
Looking for a Senior SQL Developer with 7-8 years of deep experience in advanced database architecture, SQL and PL/SQL programming, complex query optimization, and hands-on Azure cloud data solutions. This position will involve leadership in Azure-based data pipelines, integration services, and modern data warehousing.

Required Skills
- 7-8 years of progressively responsible experience in SQL and PL/SQL development and database administration.
- Advanced expertise with PostgreSQL and MS SQL Server environments.
- Extensive experience in query optimization, indexing, partitioning, and troubleshooting performance bottlenecks.
- Strong proficiency in Azure Data Factory with demonstrable project experience.
- Hands-on experience with other Azure services: Azure SQL, Synapse Analytics, Data Lake, and related cloud data architectures.
- Experience in data warehousing, modeling, and enterprise ETL pipelines.
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Data Engineer with 4+ years of experience in building and managing scalable data solutions, your role will involve designing, implementing, and optimizing data pipelines using Azure Data Factory. You will be responsible for implementing ETL/ELT processes for structured and unstructured data, building and optimizing data warehouses for analytics and reporting needs, and integrating data from multiple sources to ensure data quality and reliability. Working with Azure services like Data Lake, Synapse, and Databricks, you will deliver end-to-end data solutions and maintain RBAC policies for secure data access. Collaboration with data analysts, data scientists, and business stakeholders is essential to provide reliable datasets and ensure performance, scalability, and cost optimization of data pipelines.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory.
- Implement ETL/ELT processes for structured and unstructured data.
- Build and optimize data warehouses for analytics and reporting needs.
- Integrate and manage data from multiple sources to ensure data quality, consistency, and reliability.
- Work with Azure services (Data Lake, Synapse, Databricks, etc.) to deliver end-to-end data solutions.
- Implement and maintain RBAC policies to ensure secure and compliant data access.
- Collaborate with data analysts, data scientists, and business stakeholders to provide reliable datasets.
- Monitor and troubleshoot pipelines, ensuring performance, scalability, and cost optimization.

Required Skills & Qualifications:
- 4+ years of experience in Data Engineering, with at least 2 end-to-end data warehouse or data lake projects.
- Strong SQL expertise, including complex queries, window functions, and performance tuning.
- Hands-on experience with Azure Data Services such as Azure Data Factory, Azure Databricks, Azure Data Lake, Delta Lake, Synapse Analytics, Azure Key Vault, RBAC, and security features.
- Good understanding of ETL/ELT tools like Pentaho, dbt, or equivalent.
- Knowledge of Big Data and distributed processing frameworks like Spark and optimization techniques.
- Familiarity with version control and CI/CD for data pipelines using GitHub or Azure DevOps.
- 1-2 years of Power BI hands-on experience.
- Excellent communication, problem-solving, and leadership abilities.
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
- Experience with Azure Databricks (ADB) and Data Factory.
- Experience with Azure Data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics.
- Experience in Python/PySpark/Scala/Hive programming.
- Experience with building CI/CD pipelines in data environments.

Your Profile
- 4+ years of experience in Azure Databricks with strong PySpark experience.
- Strong customer orientation, decision making, problem solving, communication and presentation skills.
- Very good judgement skills and ability to shape compelling solutions.
- Very good collaboration skills and ability to interact with distributed multi-cultural and multi-functional teams.

What you'll love about working here
Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means when the future doesn't look as bright as you'd like, you have the opportunity to make change: to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
You are an experienced Data Architect who will be responsible for leading the transformation of enterprise data solutions, particularly focused on migrating Alteryx workflows into Azure Databricks. Your expertise in the Microsoft Azure ecosystem, including Azure Data Factory, Databricks, Synapse Analytics, Microsoft Fabric, and strong background in data architecture, governance, and distributed computing will be crucial for this role. Your strategic thinking and hands-on architectural leadership will ensure the development of scalable, secure, and high-performance data solutions. Your key responsibilities will include defining the migration strategy for transforming Alteryx workflows into scalable, cloud-native data solutions on Azure Databricks. You will architect end-to-end data frameworks leveraging Databricks, Delta Lake, Azure Data Lake, and Synapse, while establishing best practices, standards, and governance frameworks for pipeline design, orchestration, and data lifecycle management. Collaborating with business stakeholders, guiding engineering teams, overseeing data quality, lineage, and security compliance, and driving CI/CD adoption for Azure Databricks will also be part of your role. Furthermore, you will provide architectural leadership, design reviews, and mentorship to engineering and analytics teams. Optimizing solutions for performance, scalability, and cost-efficiency within Azure, participating in enterprise architecture forums, and influencing data strategy across the organization are also expected from you. To be successful in this role, you should have at least 10 years of experience in data architecture, engineering, or solution design. Proven expertise in Alteryx workflows and their modernization into Azure Databricks, deep knowledge of the Microsoft Azure data ecosystem, strong background in data governance, lineage, security, and compliance frameworks, and proficiency in Python, SQL, and Apache Spark are essential. 
Excellent leadership, communication, and stakeholder management skills are also required. Preferred qualifications include Microsoft Azure certifications, experience in leading large-scale migration programs or modernization initiatives, familiarity with enterprise architecture frameworks, exposure to machine learning enablement on Azure Databricks, and understanding of Agile delivery and working in multi-disciplinary teams.
Posted 1 week ago
6.0 - 10.0 years
25 - 30 Lacs
bengaluru
Remote
Responsibilities:
- Design and implement scalable data pipelines using Microsoft Fabric, including Dataflows Gen2, Lakehouse, Notebooks, and SQL endpoints.
- Develop ETL/ELT solutions using PySpark, T-SQL, and Spark Notebooks within Fabric and Azure Synapse.
- Manage and optimize data storage and compute in OneLake, supporting Lakehouse and Warehouse use cases.
- Implement and manage Azure Key Vault for secure handling of secrets, credentials, and connection strings.
- Configure and manage CI/CD pipelines for data engineering projects using Azure DevOps, including automated deployment of Fabric assets.
- Integrate data from diverse sources including SQL Server, Azure Blob, REST APIs, and on-prem systems.
- Collaborate closely with business teams and Power BI developers to ensure data models support reporting and self-service needs.
- Monitor and troubleshoot data pipeline performance, data quality, and failure recovery.
- Contribute to architecture design, governance processes, and performance tuning.
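Failure recovery for pipeline steps, mentioned in the monitoring responsibility above, is commonly handled with retries and exponential backoff. A minimal generic sketch (step name and delays are hypothetical and shortened for illustration; orchestrators like ADF/Fabric expose the same knobs as activity retry settings):

```python
import time

def run_with_retry(step, attempts=3, base_delay=0.01):
    """Re-run a flaky pipeline step with exponential backoff;
    re-raise once the retry budget is exhausted."""
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 0.01s, 0.02s, ...

calls = {"n": 0}
def flaky_copy_activity():
    """Hypothetical step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retry(flaky_copy_activity)
```

Only transient errors (throttling, timeouts) should be retried this way; deterministic failures such as schema mismatches should fail fast and alert.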
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will be working on the latest Microsoft BI technologies and will collaborate with other teams within Consulting services.

The opportunity
We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory, and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of our service offering.

Your key responsibilities
- Responsible for managing multiple client engagements.
- Understand and analyze business requirements by working with various stakeholders and create the appropriate information architecture, taxonomy, and solution approach.
- Work independently to gather requirements and handle data cleansing, extraction, and loading.
- Translate business and analyst requirements into technical code.
- Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations.
- Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, Staging, and Data Warehousing.
- Design and develop solutions in Databricks, Scala, Spark, and SQL to process and analyze large datasets, perform data transformations, and build data models.
- Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills and attributes for success
- Collaborating with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments, and other documents/templates.
- Able to manage senior stakeholders.
- Experience in leading teams to execute high-quality deliverables within stipulated timelines.
- Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modeling, DAX, Power Query, and Microsoft Fabric.
- Strong proficiency in Power BI, including data modeling, DAX, and creating interactive visualizations.
- Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing.
- Good understanding of various Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, and Azure SQL Database/Synapse Analytics.
- Strong SQL skills and experience with one of the following: Oracle, SQL Server, Azure SQL.
- Good to have experience in SSAS or Azure SSAS and Agile project management.
- Basic knowledge of Azure Machine Learning services.
- Excellent written and communication skills and ability to deliver technical demonstrations.
- Quick learner with a can-do attitude.
- Demonstrating and applying strong project management skills, inspiring teamwork and responsibility with engagement team members.

To qualify for the role, you must have
- A bachelor's or master's degree.
- A minimum of 4-7 years of experience, preferably with a background in a professional services firm.
- Excellent communication skills; consulting experience preferred.

Ideally, you'll also have
- Analytical ability to manage multiple projects and prioritize tasks into manageable work products.
- The ability to operate independently or with minimum supervision.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You have 4 to 7 years of experience and possess primary skills in Azure Databricks, Pyspark, ADF, Synapse Analytics & SQL. The job location can be in Chennai, Bangalore, Hyderabad, or NCR, and the work mode is Hybrid. Your notice period should be immediate to a maximum of 30 days. As a Senior Data Engineer, you will be responsible for leading the design, development, and enhancement of in-house data platforms. Your role involves constructing datasets as per stakeholder requests, building and maintaining data pipelines, and collaborating closely with data analysts, data scientists, and business application development teams to deliver effective data solutions. Your key responsibilities include providing subject matter expertise in data engineering, implementing secure and high-performing data solutions and pipelines, offering guidance on evolving solutions within the platform, and driving data engineering best practices to enhance the organization's reputation as a reliable data intelligence innovation partner. In terms of collaboration, you will actively contribute to the development and review of data standards and solution documentation, provide guidance to junior team members through peer reviews, and establish strong relationships with business stakeholders and team members across different chapters. As a leader in data engineering, you are expected to stay updated on the latest practices, technologies, and vendor packages, optimize legacy data services, and mentor junior colleagues to enhance their skills and knowledge. Compliance with all relevant obligations, trainings, and certifications is essential, and you must guide team members on compliance, security, and operational responsibilities within your area of work. The ideal candidate for this role should have a master's degree with a minimum of 7 years of experience or equivalent in a similar role. 
Proficiency in Azure Data Factory, Synapse Analytics, Databricks, ADLS, SQL, PySpark, and Python programming is required. A successful candidate should have led at least one large-scale and three mid-scale projects or managed complex data pipelines/environments in the past, with related certifications or accreditations preferred.
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
maharashtra
On-site
As an Azure Data Engineer specializing in Microsoft Fabric (Data Lake) based in Mumbai, you should have a minimum of 4 years of experience in the field, with at least 2 years dedicated to working with Microsoft Fabric technologies. Your expertise in Azure services is key, specifically in Data Lake, Synapse Analytics, Data Factory, Azure Storage, and Azure SQL. Your responsibilities will involve data modeling, ETL/ELT processes, and data integration patterns. It is essential to have experience in Power BI integration for effective data visualization. Proficiency in SQL, Python, or PySpark for data transformations is required for this role. A solid understanding of data governance, security, and compliance in cloud environments is also necessary. Previous experience working in Agile/Scrum environments is a plus. Strong problem-solving skills and the ability to work both independently and collaboratively within a team are crucial for success in this position.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Engineer specializing in SAP BW and Azure Data Factory (ADF), you will be responsible for leading the migration of SAP BW data to Azure. Your expertise in data integration, ETL, and cloud data platforms will be crucial in designing, implementing, and optimizing SAP BW-to-Azure migration projects. Your key responsibilities will include ensuring data integrity, scalability, and efficiency during the migration process. You will design and implement ETL/ELT pipelines using Azure Data Factory (ADF), Synapse, and other Azure services. Additionally, you will develop and optimize data ingestion, transformation, and orchestration workflows between SAP BW and Azure. Collaborating with business and technical stakeholders, you will analyze data models, define migration strategies, and ensure compliance with data governance policies. Troubleshooting and optimizing data movement, processing, and storage across SAP BW, Azure Data Lake, and Synapse Analytics will be part of your daily tasks. You will implement best practices for performance tuning, security, and cost optimization in Azure-based data solutions. Your role will also involve providing technical leadership in modernizing legacy SAP BW reporting and analytics by leveraging cloud-native Azure solutions. Working closely with cross-functional teams, including SAP functional teams, data architects, and DevOps engineers, you will ensure seamless integration of data solutions. Your expertise in SAP BW data modeling, ETL, reporting, Azure Data Factory (ADF), Azure Synapse Analytics, and other Azure data services will be essential in this role. Proficiency in SQL, Python, or Spark for data processing and transformation is required. Experience in Azure Data Lake, Azure Blob Storage, and Synapse Analytics for enterprise-scale data warehousing is a must. 
Preferred qualifications include experience with SAP BW/4HANA and its integration with Azure, knowledge of Databricks, Power BI, or other Azure analytics tools, and certification in Azure Data Engineer Associate (DP-203) or SAP BW. Experience in metadata management, data governance, and compliance in cloud environments is a plus. Your strong analytical and problem-solving skills, along with the ability to work in an agile environment, will contribute to the success of the migration projects.
Posted 1 week ago
7.0 - 11.0 years
16 - 27 Lacs
hyderabad
Hybrid
Description - Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Follow the development practices, policies, and reporting expectations in the Optum team.
- Be able to implement manual and automation testing frameworks.
- Participate in the daily standups, project meetings, and retrospectives.
- Clearly articulate questions and requirements for the work being assigned.
- Produce solutions to problems independently, but ask for help when needed.
- Work in collaboration with the Optum team leads to ensure development and testing milestones are agreed upon and achieved.
- Raise blockers early to ensure proper communication so these issues can be addressed by the Optum team.
- Follow all Optum regulatory/compliance requirements.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- BA/BS in Computer Science or a related field.
- 5+ years of experience in Data Engineering and Java technologies in the candidate's most recent position.
- Experience working in an Agile development environment.
- Experience building ETL pipelines.
- Experience in Databricks, SQL, Synapse, Scala, Python, Java.
- Experience using a source control repository, preferably Git.
- High-level knowledge of development fundamentals and core language concepts with Java/Scala.
- Proven problem-solving skills; able to analyze logs and troubleshoot issues.
- Proven willingness to learn new technologies and eagerness to think outside the box.
- Good understanding of OOP principles.
- Good understanding of secure coding best practices and general knowledge of security vulnerability remediation.

Preferred Qualifications:
- Azure App Services, Azure Functions experience.
- REST API development experience.
- CI/CD experience.
Posted 2 weeks ago
6.0 - 11.0 years
12 - 20 Lacs
chennai
Hybrid
Position Overview We are seeking a highly skilled Senior Data Engineer to join our data platform team, responsible for designing and implementing scalable, cloud-native data solutions. This role focuses on building modern data infrastructure using AWS and Azure services to support our growing analytics and machine learning initiatives. Key Responsibilities Business Translation : Collaborate with business stakeholders to understand requirements and translate them into scalable data models, architectures, and robust end-to-end data pipelines ETL/ELT Implementation : Design, develop, and maintain high-performance batch and real-time data pipelines using modern frameworks like Apache Spark, Delta Lake, and streaming technologies Data Platform Management : Architect and implement data lakes, data warehouses, and Lakehouse architectures following medallion architecture principles (Bronze, Silver, Gold layers) Data Operations & Quality Pipeline Orchestration : Implement complex workflow orchestration using tools like AWS Step Functions or Azure Data Factory Data Quality Assurance : Establish comprehensive data validation, monitoring, and quality frameworks using tools like Great Expectations or custom Python solutions Performance Optimization : Monitor, troubleshoot, and optimize data pipeline performance, implementing partitioning strategies and query optimization techniques Data Governance : Implement data lineage tracking, metadata management, and ensure compliance with data privacy regulations (GDPR) Cloud Infrastructure & DevOps Experience & Background 6+ years of hands-on experience in Data Engineering roles with demonstrated expertise in cloud-native data solutions Cloud Expertise : Strong preference for AWS/Azure data services Production Experience : Proven track record of building and maintaining production-grade data pipelines processing terabytes of data Technical Skills Programming & Development Python : Advanced proficiency in Python for data engineering 
(pandas, numpy, boto3, azure-sdk) SQL : Expert-level SQL skills including complex queries, window functions, CTEs, and performance tuning PySpark : Hands-on experience with PySpark for large-scale data processing and optimization Version Control : Proficient with Git workflows, branching strategies, and collaborative development AWS Data Services (Primary Focus) Compute : Experience working on services like AWS Glue, Lambda Azure Data Factory, Synapse Analytics, Data Lake Storage Gen2 , S3 Redshift, RDS, DynamoDB Analytics : Power BI, Athena, Kinesis (Data Streams, Data Firehose, Analytics) Security : IAM, KMS, VPC, Security Groups for secure data access patterns Soft Skills & Collaboration Problem-Solving : Strong analytical and troubleshooting skills with attention to detail Communication : Excellent verbal and written communication skills for cross-functional collaboration Project Management : Ability to manage multiple priorities and deliver projects on time Mentorship : Experience mentoring junior engineers and promoting best practices Preferred Qualifications (Nice to Have) CI/CD Pipelines: Experience with GitHub Actions, Jenkins, or Azure DevOps for automated testing and deployment Infrastructure as Code: Terraform or AWS CDK for infrastructure automation Containerization: Docker and Kubernetes for containerized data applications Testing Frameworks: Unit testing, integration testing, and data quality testing practices Cloud Certifications: AWS Certified Data Analytics, Azure Data Engineer Associate, or similar certifications Agile Methodologies: Experience working in Scrum/Kanban environments What We Offer Opportunity to work with cutting-edge data technologies and modern cloud platforms Collaborative environment with opportunities for professional growth and learning Flexible work arrangements and comprehensive benefits package Conference attendance and certification reimbursement programs This role requires the ability to work independently while 
collaborating effectively with cross-functional teams including Data Scientists, Analytics Engineers, Software Engineers, and Business Stakeholders.
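Where the posting above mentions data validation frameworks built with Great Expectations or custom Python solutions, a minimal custom-Python sketch of such a check might look like the following; the field names and records are invented purely for illustration:

```python
# Minimal custom data-quality check of the kind the posting mentions
# as an alternative to Great Expectations. Field names are hypothetical.
def validate_rows(rows, required_fields, non_negative_fields=()):
    """Return a list of (row_index, error_message) tuples."""
    errors = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                errors.append((i, f"missing required field '{field}'"))
        for field in non_negative_fields:
            value = row.get(field)
            if isinstance(value, (int, float)) and value < 0:
                errors.append((i, f"negative value in '{field}': {value}"))
    return errors

records = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": "", "amount": -5.0},
]
problems = validate_rows(records, required_fields=["order_id"],
                         non_negative_fields=["amount"])
print(problems)  # row 1 fails both checks
```

In a real pipeline such a function would run after the Bronze-to-Silver step, with failures routed to monitoring rather than printed.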
Posted 2 weeks ago
0.0 - 2.0 years
2 - 3 Lacs
hyderabad, chennai, bengaluru
Work from Office
Job Title: Power BI Architect
Department: Data & Analytics / IT
Industry: [Banking / Healthcare / Retail / Manufacturing / Consulting]
Job Summary: We are seeking a seasoned Power BI Architect to lead the design, development, and deployment of enterprise-level business intelligence solutions. The ideal candidate will have deep expertise in Power BI, data architecture, and analytics strategy, with a strong ability to translate business needs into scalable BI solutions.
Key Responsibilities:
Architect and implement end-to-end Power BI solutions across the organization.
Design and optimize data models using DAX and Power Query.
Define BI strategy, governance, and best practices.
Integrate Power BI with various data sources including Azure, SQL Server, APIs, and cloud platforms.
Lead data visualization initiatives and ensure alignment with business goals.
Collaborate with stakeholders to gather requirements and deliver actionable insights.
Mentor and guide Power BI developers and analysts.
Ensure data security, compliance, and performance optimization.
Required Skills:
Expert-level proficiency in Power BI, DAX, Power Query, and Power BI Service.
Strong experience in data modelling, ETL processes, and data warehousing.
Hands-on experience with Azure Data Services (Data Factory, Synapse, SQL DB).
Deep understanding of enterprise BI architecture and governance.
Excellent communication and stakeholder management skills.
Preferred Qualifications:
Experience with other BI tools (Tableau, Qlik, etc.) is a plus.
Familiarity with Python, R, or other analytics languages.
Microsoft certifications (e.g., DA-100, PL-300, or Azure Data Engineer).
Bachelor's or Master's degree in Computer Science, Data Science, or related field.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
You are applying for the position of Assistant Team Lead at ArcelorMittal's Global Business and Technologies hub in India. As a Reporting Application Maintenance Specialist, your primary responsibility will be to support and maintain enterprise reporting solutions for FIPM business functions. You will play a crucial role in ensuring data availability, report accuracy, and system performance across various platforms like Azure Data Lake, Databricks, Synapse Analytics, and SAP BW. Your efforts will directly contribute to operational visibility, manufacturing efficiency, and supply chain decision-making. Your key responsibilities will include monitoring and maintaining reporting applications and dashboards, ensuring timely data refreshes, system availability, and issue resolution. You will collaborate with data and IT teams to address incidents, perform root cause analysis, and implement permanent fixes. Additionally, you will maintain integrations and data flows between different platforms, support performance tuning, and optimize queries to meet reporting SLAs. Your role will also involve working closely with stakeholders from Finance, Purchasing, Maintenance & Operations to understand evolving business needs and translate them into technical solutions. To excel in this role, you should have a Bachelor's degree in Information Systems, Computer Science, or a related field, along with 6-8 years of experience in BI/reporting application support, preferably in manufacturing or supply chain contexts. Strong hands-on knowledge of Azure Data Lake, Databricks, Azure Synapse Analytics, and SAP BW on HANA is essential. You should also have experience with BI tools like Power BI, Tableau, or similar platforms, and a good understanding of data pipelines, data modeling, and ETL/ELT processes. Problem-solving skills and the ability to collaborate effectively with technical and business teams are crucial for success. 
Preferred qualifications include certifications in Azure, SAP, or ITIL, experience in Agile or DevOps-driven environments, and familiarity with scripting in SQL, Python (Databricks), or Spark. In return, ArcelorMittal offers you a critical role in maintaining business-critical reporting tools, collaboration opportunities with IT and frontline FIPM teams, access to modern data platforms and advanced analytics tools, as well as competitive compensation, benefits, and career development. Join us to be a part of a global team committed to building a better world with smarter low-carbon steel.
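A core duty in the posting above is ensuring timely data refreshes against reporting SLAs. A minimal sketch of such a freshness check follows; the dataset names and SLA thresholds are entirely hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a report-freshness check like the monitoring
# duty described above: flag datasets whose last successful refresh
# is older than their SLA window.
def stale_datasets(last_refresh, sla_hours, now):
    """Return names of datasets whose refresh is older than the SLA."""
    return sorted(
        name for name, ts in last_refresh.items()
        if now - ts > timedelta(hours=sla_hours[name])
    )

now = datetime(2024, 1, 2, 12, 0)
refreshes = {
    "finance_dashboard": datetime(2024, 1, 2, 10, 0),   # 2h old
    "maintenance_kpis": datetime(2024, 1, 1, 6, 0),     # 30h old
}
slas = {"finance_dashboard": 4, "maintenance_kpis": 24}
print(stale_datasets(refreshes, slas, now))  # ['maintenance_kpis']
```

In practice the refresh timestamps would come from Power BI service or pipeline run metadata rather than a hard-coded dictionary.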
Posted 2 weeks ago
4.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Lead Azure Data Factory Engineer (ADF), you will play a crucial role in managing and developing data engineering solutions. With a focus on Azure Data Factory (ADF), your responsibilities will include designing, implementing, and optimizing data pipelines to ensure efficient data processing. You should have a minimum of 10 years of experience in Data Engineering, with at least 4 years dedicated to Azure Data Factory development. Your expertise in SQL and Synapse Analytics will be essential in creating robust and scalable data solutions. Strong communication skills are a must-have for this role as you will be collaborating with cross-functional teams and stakeholders. Additionally, your ability to articulate technical concepts clearly will be vital in ensuring successful project outcomes. This position is based in Hyderabad, Chennai, or Bangalore and offers both full-time and contract options. The ideal candidate should be able to start immediately and be prepared for a virtual interview followed by an in-person meeting. If you are passionate about data engineering, have a solid background in Azure Data Factory, and enjoy working in a collaborative environment, we encourage you to apply for this exciting opportunity.,
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
pune, maharashtra, india
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate
Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities: Cloud Data Engineer (AWS/Azure/Databricks/GCP)
Experience: 2-4 years in Data Engineering
Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.
- Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
- AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
- Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
- GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.
Qualifications:
- 2-4 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud Certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus.
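Much of the Spark job tuning the posting asks for comes down to reducing data as early as possible. The filter-before-join principle is sketched below in plain Python for portability (in PySpark the same idea is `df.filter(...)` before `df.join(...)`); the data and names are invented for illustration:

```python
# Illustration of a common Spark optimization principle: filter and
# project early so the expensive join/aggregate touches fewer rows.
orders = [
    {"order_id": 1, "customer_id": "c1", "status": "shipped", "amount": 100},
    {"order_id": 2, "customer_id": "c2", "status": "cancelled", "amount": 50},
    {"order_id": 3, "customer_id": "c1", "status": "shipped", "amount": 70},
]
customers = {"c1": "Asha", "c2": "Ravi"}

# Filter first (cheap), then join only the surviving rows.
shipped = [o for o in orders if o["status"] == "shipped"]
enriched = [{**o, "customer_name": customers[o["customer_id"]]} for o in shipped]

total_by_customer = {}
for row in enriched:
    name = row["customer_name"]
    total_by_customer[name] = total_by_customer.get(name, 0) + row["amount"]
print(total_by_customer)  # {'Asha': 170}
```

Spark applies the same idea automatically via predicate pushdown where it can, but writing filters before joins keeps shuffles small even when it cannot.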
Mandatory skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Preferred skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Years of experience required: 2-4 years
Education qualification: BE/BTECH, ME/MTECH, MBA, MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: PySpark, Python (Programming Language), Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Artificial Intelligence, Big Data, C++ Programming Language, Communication, Complex Data Analysis, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Machine Learning + 12 more
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer based in Pune, India, you will be responsible for defining and developing data sets, models, and cubes. Your role will involve building simple to complex pipelines and data flows in Azure Data Factory (ADF). This includes extracting data from source systems into the data warehouse staging area, ensuring data validation, accuracy, type conversion, and applying business rules. You should possess strong visualization skills in Power BI and have expert knowledge in writing advanced DAX formulas. Additionally, proficiency in Power Query, Power Automate, M Language, and R Programming is essential for this role. Advanced knowledge of Azure SQL Database, Synapse Analytics, Azure Databricks, and Power BI is required. You should be able to analyze and understand complex data sets effectively. Familiarity with Azure Data Lake Storage and services like Azure Analysis Services and SQL Databases will be beneficial in performing your duties efficiently.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You will be responsible for leading end-to-end architecture design for data and analytics solutions on Azure, defining and implementing Azure-based modern data platforms, and overseeing solution delivery to ensure timelines and milestones are met. Collaborating with Pre-sales teams, you will support RFP responses, solutioning, and client presentations. Additionally, you will create and maintain comprehensive architectural documentation, review the work of implementation teams, and ensure alignment with security, compliance, and governance standards. Your role will also involve overseeing data integration pipelines, supporting application migration and modernization efforts on Azure, and providing technical leadership and mentoring to development and data engineering teams. Troubleshooting architectural issues and optimizing deployed solutions will be part of your regular tasks. You should have at least 4 years of hands-on experience with Microsoft Azure services such as Data Factory, Databricks, Synapse Analytics, and Azure Data Lake Storage. Proven expertise in designing and implementing enterprise-grade data and analytics solutions, application migration and modernization on cloud platforms, and a strong understanding of cloud security, identity management, and compliance practices are essential. Proficiency in modern application architectures, a Bachelor's degree in Engineering with a solid foundation in software engineering and architectural design, as well as strong documentation, stakeholder communication, and project leadership skills are required. Preferred qualifications include Microsoft Azure Certifications, experience with Azure Machine Learning, and familiarity with microservices, containers, and event-driven systems. Join Polestar Solutions, a data analytics and enterprise planning powerhouse, to help customers derive sophisticated insights from their data in a value-oriented manner.
The company offers a comprehensive range of services and opportunities for growth and learning in a dynamic environment.
Posted 2 weeks ago
6.0 - 10.0 years
20 - 25 Lacs
gurugram, delhi / ncr
Work from Office
Experience in data engineering, AI solution development, and ML model deployment. Strong skills in Python, ML frameworks (e.g., Scikit-learn, PyTorch, TensorFlow). Experience with Azure AI services, Azure OpenAI, Azure AI Foundry plus Copilot Studio
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organizational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organizations to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
About the Role: We are seeking a hands-on Data Engineer with 5-8 years of experience specializing in on-premises database migration to cloud. The ideal candidate will have a strong background in cloud migration, whether on-premises to Azure cloud or cloud to Azure cloud.
Key Responsibilities:
Should have experience of database migration strategy and related technologies.
Experience of Microsoft SQL DB migration from on-prem to cloud platform.
Experience of any other cloud (e.g. AWS or GCP) to Azure cloud platform.
Review database migration plans, provide recommendations, and liaise with Customer and Migration Factory for successful migrations.
Ability to identify and resolve performance issues post-migration.
Required Skills:
Knowledge in using tools like Azure Data Studio, Azure DMA, Azure DMS, Azure SQL/MI.
Familiarity with migrating SSIS packages using ADF; SSMA migration tool; database migration using SSIS packages.
In-depth knowledge of SQL, T-SQL and programming experience with stored procedures, triggers, functions, etc.
Strong performance tuning (partitioning, exec plans, indexing etc.), data modeling experience, dacpac deployment experience, and stored procedure optimization.
Extensive experience with Azure Data Services, including Azure Data Factory, Azure SQL Data Warehouse, and Synapse Analytics.
Strong understanding of database architecture principles and best practices.
Preferred Qualifications:
Azure Solutions Architect Expert certification.
Knowledge of SQL Clustering/Always On Availability Groups, Azure SQL, SQL MI.
Experience of CTS/Migration technologies (Rehost/Clean Deployments).
Should have experience on Azure projects and GitHub as code repo.
Knowledge of setting up High Availability & Disaster Recovery configuration.
Experience in setting up Always On Availability Groups & Zone Redundant Storage.
Familiarity with Azure DevOps (ADO) for CI/CD pipelines, and Infrastructure as Code (IaC) tools like Terraform for deploying container apps and associated services.
Knowledge of security best practices and compliance requirements for cloud-based applications.
Backup/recovery, security-related activities, audit tracking and reporting.
Maintaining, supporting, and monitoring production and test environments.
Implementing High Availability such as Clustering, Database Mirroring, Log Shipping, and Replication.
Mandatory skill sets:
Knowledge in using tools like Azure Data Studio, Azure DMA, Azure DMS, Azure SQL/MI.
Familiarity with migrating SSIS packages using ADF; SSMA migration tool; database migration using SSIS packages.
In-depth knowledge of SQL, T-SQL and programming experience with stored procedures, triggers, functions, etc.
Preferred skill sets:
Azure Solutions Architect Expert certification.
Knowledge of SQL Clustering/Always On Availability Groups, Azure SQL, SQL MI.
Experience of CTS/Migration technologies.
(Rehost/Clean Deployments)
Should have experience on Azure projects and GitHub as code repo.
Years of experience required: 5-8 years
Education qualification: B.Tech / M.Tech / MCA / MBA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: MBA (Master of Business Administration), Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Database Migrations
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline + 38 more
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship
Government Clearance Required
Job Posting End Date
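The "exec plans, indexing" tuning skill the posting asks for can be illustrated with a small self-contained SQLite session; SQLite stands in for SQL Server here, and the table, index name, and data are invented for illustration:

```python
import sqlite3

# Tiny illustration of index-driven query tuning: compare the query
# plan before and after adding an index on the filtered column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(f"c{i % 100}", float(i)) for i in range(1000)],
)

def plan_for(query):
    # EXPLAIN QUERY PLAN reports whether the engine scans or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 'c7'"
before = plan_for(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan_for(query)    # index search
print(before)
print(after)
```

On SQL Server the equivalent workflow uses the actual execution plan and index recommendations, but the reasoning (confirm a scan, add a selective index, re-check the plan) is the same.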
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth.
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organizational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organizations to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Position: Azure Cloud Migration Engineer / Senior Associate
Experience: 5-8 years
Location: Hyderabad
Type: Full Time
Certifications: Azure Solutions Architect Expert
About the Role: We are seeking a hands-on Data Engineer with 5-8 years of experience specializing in on-premises database migration to cloud. The ideal candidate will have a strong background in cloud migration, whether on-premises to Azure cloud or cloud to Azure cloud.
Key Responsibilities:
Should have experience of database migration strategy and related technologies.
Experience of Microsoft SQL DB migration from on-prem to cloud platform.
Experience of any other cloud (e.g. AWS or GCP) to Azure cloud platform.
Review database migration plans, provide recommendations, and liaise with Customer and Migration Factory for successful migrations.
Ability to identify and resolve performance issues post-migration.
Required Skills:
Knowledge in using tools like Azure Data Studio, Azure DMA, Azure DMS, Azure SQL/MI.
Familiarity with migrating SSIS packages using ADF; SSMA migration tool; database migration using SSIS packages.
In-depth knowledge of SQL, T-SQL and programming experience with stored procedures, triggers, functions, etc.
Strong performance tuning (partitioning, exec plans, indexing etc.), data modeling experience, dacpac deployment experience, and stored procedure optimization.
Extensive experience with Azure Data Services, including Azure Data Factory, Azure SQL Data Warehouse, and Synapse Analytics.
Strong understanding of database architecture principles and best practices.
Preferred Qualifications:
Azure Solutions Architect Expert certification.
Knowledge of SQL Clustering/Always On Availability Groups, Azure SQL, SQL MI.
Experience of CTS/Migration technologies (Rehost/Clean Deployments).
Should have experience on Azure projects and GitHub as code repo.
Knowledge of setting up High Availability & Disaster Recovery configuration.
Experience in setting up Always On Availability Groups & Zone Redundant Storage.
Familiarity with Azure DevOps (ADO) for CI/CD pipelines, and Infrastructure as Code (IaC) tools like Terraform for deploying container apps and associated services.
Knowledge of security best practices and compliance requirements for cloud-based applications.
Backup/recovery, security-related activities, audit tracking and reporting.
Maintaining, supporting, and monitoring production and test environments.
Implementing High Availability such as Clustering, Database Mirroring, Log Shipping, and Replication.
Mandatory skill sets:
Knowledge in using tools like Azure Data Studio, Azure DMA, Azure DMS, Azure SQL/MI.
Familiarity with migrating SSIS packages using ADF; SSMA migration tool; database migration using SSIS packages.
In-depth knowledge of SQL, T-SQL and programming experience with stored procedures, triggers, functions, etc.
Additional skill sets preferred:
Azure Solutions Architect Expert certification.
Knowledge of SQL Clustering/Always On Availability Groups, Azure SQL, SQL MI.
Experience of CTS/Migration technologies.
(Rehost/Clean Deployments) Should have exp on Azure project and GitHib as Code Repo Years of experience required: Experience: 5-8 Years Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline + 38 more Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship No Government Clearance Required No Job Posting End Date Show more Show less
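The "identify and resolve performance issues post-migration" responsibility above usually starts with basic data reconciliation. A minimal sketch, assuming per-table row counts have already been collected from both sides (the table names and counts here are hypothetical; in practice each dict would be populated by querying `COUNT(*)` or `sys.partitions` on the source and target databases):

```python
# Illustrative post-migration validation: reconcile per-table row counts
# between a source and a target database. The tables and counts below are
# made up for illustration only.

def reconcile_row_counts(source: dict, target: dict) -> dict:
    """Return tables whose row counts differ, or that exist on one side only."""
    mismatches = {}
    for table in sorted(set(source) | set(target)):
        src = source.get(table)  # None if the table is missing on this side
        tgt = target.get(table)
        if src != tgt:
            mismatches[table] = {"source": src, "target": tgt}
    return mismatches

source_counts = {"dbo.Orders": 120_000, "dbo.Customers": 4_500, "dbo.Audit": 90}
target_counts = {"dbo.Orders": 120_000, "dbo.Customers": 4_499}

diff = reconcile_row_counts(source_counts, target_counts)
# dbo.Customers is short one row; dbo.Audit was not migrated at all.
```

A check like this is deliberately coarse: it flags which tables need a closer look (checksums, per-partition counts) rather than proving equality.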
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
hyderabad, bengaluru, delhi / ncr
Work from Office
As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark Pools
- Leveraging Databricks Notebooks and managing workflows with MLflow

Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on experience in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake
- Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
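The Delta Lake skill listed above typically centers on upserts. As a minimal plain-Python sketch of MERGE semantics, using made-up rows (in Databricks this would be expressed with `DeltaTable.merge` or `MERGE INTO` against a real Delta table, not Python dicts):

```python
# Plain-Python illustration of a Delta Lake MERGE (upsert) keyed on a
# primary key: matched rows are updated, unmatched rows are inserted.
# The rows below are hypothetical.

def merge_upsert(target: dict, updates: list, key: str) -> dict:
    """Apply whenMatchedUpdate / whenNotMatchedInsert semantics to `target`."""
    merged = dict(target)  # leave the original "table" untouched
    for row in updates:
        merged[row[key]] = row  # update if the key exists, insert otherwise
    return merged

target_table = {
    1: {"id": 1, "city": "Chennai", "amount": 100},
    2: {"id": 2, "city": "Pune", "amount": 250},
}
updates = [
    {"id": 2, "city": "Pune", "amount": 300},   # matched  -> update
    {"id": 3, "city": "Mumbai", "amount": 75},  # no match -> insert
]

merged = merge_upsert(target_table, updates, key="id")
```

The point of the sketch is the semantics: a single pass decides update-vs-insert per key, which is exactly what Delta's transactional MERGE does at scale without rewriting the whole table.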
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
You are a skilled Data Engineer with expertise in the Azure Data Platform, specifically Azure Databricks, PySpark, Azure Data Factory, and Synapse Analytics. Your primary responsibility will be to design, develop, and optimize large-scale data pipelines and analytics solutions.

Your key responsibilities will include developing and optimizing data pipelines using Azure Data Factory for ingestion into Azure Databricks, implementing PySpark transformations in Databricks with a focus on performance tuning, building and maintaining data layers, and ensuring efficient export strategies for reporting and analytics. You will also work with Azure Synapse for advanced querying, transformations, and reporting, configure and manage Azure Monitoring & Log Analytics, and implement automation using Logic Apps and Azure Functions for workflow orchestration and integration. Your focus will be on ensuring a high-performance and scalable architecture across Databricks, Data Factory, and Synapse, as well as collaborating with stakeholders to define best practices for data access, connection options, and reporting performance.

Required skills for this role include strong hands-on experience with Azure Databricks, proficiency in Azure Data Factory pipeline development, a solid understanding of Azure Synapse Analytics, experience with Azure Monitoring, Log Analytics, and error-handling frameworks, knowledge of Logic Apps and Azure Functions for automation and orchestration, and a strong understanding of data architecture patterns. You should also possess excellent troubleshooting, performance optimization, and analytical skills.

It would be good to have experience with CI/CD for data pipelines using Azure DevOps, knowledge of Delta Lake and Lakehouse architecture, and exposure to BFSI/Insurance domain data solutions.
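The error-handling frameworks mentioned above usually come down to retrying transient pipeline activities with backoff and recording each attempt. A minimal sketch, where the activity, attempt limit, and delays are hypothetical (in Azure Data Factory, retries are configured per activity, and failures would surface in Azure Monitor / Log Analytics rather than a local list):

```python
import time

# Minimal retry-with-exponential-backoff wrapper of the kind an
# error-handling framework around pipeline activities might use.
# All names and settings here are illustrative only.

def run_with_retries(activity, max_attempts=3, base_delay=0.01):
    """Run `activity`, retrying on exception; return (result, attempt log)."""
    attempts = []
    for attempt in range(1, max_attempts + 1):
        try:
            result = activity()
            attempts.append(("success", attempt))
            return result, attempts
        except Exception:
            attempts.append(("failure", attempt))
            if attempt == max_attempts:
                raise  # exhausted: let the orchestrator mark the run failed
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

# A flaky "activity" that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_copy_activity():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient throttling error")
    return "copied 42 rows"

result, history = run_with_retries(flaky_copy_activity)
```

Keeping the per-attempt log separate from the result mirrors how a pipeline run's activity history is queried afterwards for monitoring and alerting.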
Posted 3 weeks ago