6.0 - 8.0 years
8 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.
#KeyResponsibilities
* Build scalable ETL pipelines and implement robust data solutions in Azure.
* Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
* Design and maintain secure and efficient data lake architecture.
* Work with stakeholders to gather data requirements and translate them into technical specs.
* Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
* Monitor data quality, performance bottlenecks, and scalability issues.
* Write clean, organized, reusable PySpark code in an Agile environment (see the sketch below).
* Document pipelines, architectures, and best practices for reuse.
#MustHaveSkills
* Experience: 6+ years in Data Engineering
* Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
* Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
* Agile, SDLC, Containerization (Docker), Clean coding practices
#GoodToHaveSkills
* Event Hubs, Logic Apps
* Power BI
* Strong logic building and competitive programming background
#ContractDetails
* Role: Senior Data Engineer
* Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
* Duration: 6 Months
* Email to Apply: navaneeta@suzva.com
* Contact: 9032956160
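As an illustration of the clean, reusable PySpark code the role calls for, here is a minimal sketch of a deduplication transform landing data in ADLS Gen2. The storage account, container, and column names are hypothetical, not from the posting:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def deduplicate_latest(df: DataFrame, key_cols: list, ts_col: str) -> DataFrame:
    """Keep only the most recent record per business key."""
    from pyspark.sql.window import Window
    w = Window.partitionBy(*key_cols).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter("_rn = 1")
              .drop("_rn"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("orders_etl").getOrCreate()
    # Hypothetical ADLS Gen2 paths -- substitute your storage account/containers.
    raw = spark.read.parquet(
        "abfss://raw@mystorageacct.dfs.core.windows.net/orders/")
    clean = deduplicate_latest(raw, ["order_id"], "updated_at")
    (clean.write.mode("overwrite")
          .format("delta")
          .save("abfss://curated@mystorageacct.dfs.core.windows.net/orders/"))
```

Keeping each transform a pure function over DataFrames, as above, is what makes the "reusable, testable" requirement practical in an Agile/CI setting.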
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
#JobOpening Senior Data Engineer (Remote, Contract 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.
#KeyResponsibilities
* Build scalable ETL pipelines and implement robust data solutions in Azure.
* Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults (see the sketch below).
* Design and maintain secure and efficient data lake architecture.
* Work with stakeholders to gather data requirements and translate them into technical specs.
* Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
* Monitor data quality, performance bottlenecks, and scalability issues.
* Write clean, organized, reusable PySpark code in an Agile environment.
* Document pipelines, architectures, and best practices for reuse.
#MustHaveSkills
* Experience: 6+ years in Data Engineering
* Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
* Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
* Agile, SDLC, Containerization (Docker), Clean coding practices
#GoodToHaveSkills
* Event Hubs, Logic Apps
* Power BI
* Strong logic building and competitive programming background
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
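The stack above pairs Databricks with Key Vaults and ADLS Gen2; one common wiring is a Key Vault-backed Databricks secret scope feeding Spark's OAuth configuration for the lake. A minimal sketch, assuming a hypothetical secret scope named kv-scope and service-principal entries (the names are ours, not the posting's):

```python
# Runs inside a Databricks notebook/job; `dbutils` and `spark` are
# provided by the Databricks runtime.
storage = "mystorageacct"  # hypothetical ADLS Gen2 storage account
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")
tenant_id = dbutils.secrets.get(scope="kv-scope", key="tenant-id")

# Standard OAuth (service principal) configuration for abfss:// access.
spark.conf.set(f"fs.azure.account.auth.type.{storage}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage}.dfs.core.windows.net", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage}.dfs.core.windows.net", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage}.dfs.core.windows.net",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")

df = spark.read.parquet(f"abfss://raw@{storage}.dfs.core.windows.net/events/")
```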
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune & Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days office in a week)
Job Description:
* 5-14 years of experience in Big Data & data-related technologies
* Expert-level understanding of distributed computing principles
* Expert-level knowledge of and experience in Apache Spark
* Hands-on programming with Python
* Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
* Experience with building stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the sketch below)
* Good understanding of Big Data querying tools, such as Hive and Impala
* Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
* Good understanding of SQL queries, joins, stored procedures, relational schemas
* Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
* Knowledge of ETL techniques and frameworks
* Performance tuning of Spark jobs
* Experience with native cloud data services on AWS/Azure/GCP
* Ability to lead a team efficiently
* Experience with designing and implementing Big Data solutions
* Practitioner of Agile methodology
WE OFFER
* Opportunity to work on technical challenges that may impact across geographies
* Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
* Opportunity to share your ideas on international platforms
* Sponsored Tech Talks & Hackathons
* Possibility to relocate to any EPAM office for short- and long-term projects
* Focused individual development
* Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
* Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
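For the stream-processing requirement, here is a minimal Spark Structured Streaming sketch (the current successor to DStream-based Spark Streaming) of a windowed count over Kafka. The broker and topic names are hypothetical, and the spark-sql-kafka package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream").getOrCreate()

# Read a stream from Kafka (hypothetical broker/topic).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clicks")
          .load())

# Count events per 1-minute window, tolerating 5 minutes of late data.
counts = (events
          .withWatermark("timestamp", "5 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")   # swap for a Delta/Kafka sink in production
         .start())
query.awaitTermination()
```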
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Hybrid
The Operations Engineer will work in collaboration with, and under the direction of, the Manager of Data Engineering, Advanced Analytics, to provide operational services, governance, and incident management solutions for the Analytics team. This includes modifying existing data ingestion workflows, releases to QA and Prod, working closely with cross-functional teams, and providing production support for daily issues.
Essential Job Functions:
* Takes ownership of reported customer issues and sees problems through to resolution
* Researches, diagnoses, troubleshoots, and identifies solutions to resolve customer issues
* Follows standard procedures for proper escalation of unresolved issues to the appropriate internal teams
* Provides prompt and accurate feedback to customers
* Ensures proper recording and closure of all issues
* Prepares accurate and timely reports
* Documents knowledge in the form of knowledge base tech notes and articles
Other Responsibilities:
* Be part of the on-call rotation
* Support QA and production releases, off-hours if needed
* Work with developers to troubleshoot issues
* Attend daily standups
* Create and maintain support documentation (Jira/Confluence)
Minimum Qualifications and Job Requirements:
* Proven working experience in enterprise technical support
* Basic knowledge of systems, utilities, and scripting
* Strong problem-solving skills
* Excellent client-facing skills
* Excellent written and verbal communication skills
* Experience with Microsoft Azure, including Azure Data Factory (ADF), Databricks, ADLS (Gen2)
* Experience with system administration and SFTP
* Experience leveraging analytics team tools such as Alteryx or other ETL tools
* Experience with data visualization software (e.g., Domo, Datorama)
* Experience with SQL programming
* Experience automating routine data tasks using various software tools (e.g., Jenkins, Nexus, SonarQube, Rundeck, Task Scheduler); see the sketch below
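Much of the support work above (triggering releases, chasing failed runs) is scriptable. A hedged sketch using the azure-mgmt-datafactory Python SDK to kick off and poll an ADF pipeline run; the subscription, resource group, factory, and pipeline names are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Hypothetical identifiers -- substitute your own.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "analytics-rg"
FACTORY = "analytics-adf"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Trigger an ingestion pipeline, then check its status.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY, "daily_ingest",
                                  parameters={})
status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id).status
print(f"daily_ingest run {run.run_id}: {status}")  # e.g. InProgress / Succeeded / Failed
```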
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
#JobOpening Senior Data Engineer (Remote, Contract 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.
#KeyResponsibilities
* Build scalable ETL pipelines and implement robust data solutions in Azure.
* Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
* Design and maintain secure and efficient data lake architecture.
* Work with stakeholders to gather data requirements and translate them into technical specs.
* Implement CI/CD pipelines for seamless data deployment using Azure DevOps (see the test sketch below).
* Monitor data quality, performance bottlenecks, and scalability issues.
* Write clean, organized, reusable PySpark code in an Agile environment.
* Document pipelines, architectures, and best practices for reuse.
#MustHaveSkills
* Experience: 6+ years in Data Engineering
* Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
* Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
* Agile, SDLC, Containerization (Docker), Clean coding practices
#GoodToHaveSkills
* Event Hubs, Logic Apps
* Power BI
* Strong logic building and competitive programming background
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
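On the CI/CD point, Azure DevOps builds for PySpark repositories typically gate on unit tests. A minimal pytest sketch of a testable transform; the function and column names are illustrative, not from the posting:

```python
# test_transforms.py -- runnable with `pytest`; assumes pyspark is installed.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run on any build agent.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def add_full_name(df):
    """Example transform under test."""
    return df.withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))

def test_add_full_name(spark):
    df = spark.createDataFrame([("Ada", "Lovelace")], ["first_name", "last_name"])
    out = add_full_name(df).first()
    assert out["full_name"] == "Ada Lovelace"
```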
Posted 2 weeks ago
3.0 - 7.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
* Lead the migration of the ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake
* Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
* Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL (see the sketch below)
* Implement DevOps practices and CI/CD pipelines using GitActions
* Collaborate with cross-functional teams to ensure seamless integration and data flow
* Optimize and troubleshoot data pipelines and workflows
* Ensure data security and compliance with industry standards
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
* 6+ years of experience as a Cloud Data Engineer
* Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
* Solid experience in ETL development using on-premises databases and ETL technologies
* Experience with Python or other scripting languages for data processing
* Experience with Agile methodologies
* Proficiency in DevOps and CI/CD practices using GitActions
* Proven excellent problem-solving skills and ability to work independently
* Proven solid communication and collaboration skills
* Proven solid analytical skills and attention to detail
* Proven ability to adapt to new technologies and learn quickly
Preferred Qualifications
* Certification in Azure or Databricks
* Experience with data modeling and database design
* Experience with development in Snowflake for data engineering and analytics workloads
* Knowledge of data governance and data quality best practices
* Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
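To ground the SSIS/T-SQL review-and-migrate work, here is a hedged sketch of a simple T-SQL aggregation re-expressed in PySpark; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_migration").getOrCreate()

# Original T-SQL (illustrative):
#   SELECT member_id, SUM(paid_amount) AS total_paid
#   FROM dbo.claims
#   WHERE claim_status = 'PAID'
#   GROUP BY member_id;

claims = spark.read.table("bronze.claims")  # hypothetical lakehouse table
total_paid = (claims
              .filter(F.col("claim_status") == "PAID")
              .groupBy("member_id")
              .agg(F.sum("paid_amount").alias("total_paid")))
total_paid.write.mode("overwrite").saveAsTable("silver.member_paid_totals")
```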
Posted 3 weeks ago
5.0 - 9.0 years
13 - 18 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop Reporting, Analytics, and Data Science solutions.
Primary Responsibilities
* Manage and lead the migration of the on-premises SQL Server Enterprise Data Warehouse to Azure Cloud and Snowflake
* Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake (see the connector sketch below)
* Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
* Implement and oversee DevOps practices and CI/CD pipelines using GitActions
* Collaborate with cross-functional teams to ensure seamless integration and data flow
* Optimize and troubleshoot data pipelines and workflows
* Ensure data security and compliance with industry standards
* Provide technical leadership and mentorship to the engineering team
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
* 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role
* Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
* Experience with Python or other scripting languages for data processing
* Experience with Agile methodologies and project management tools
* Solid experience in developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
* Proficiency in DevOps and CI/CD practices using GitActions
* Proven excellent problem-solving skills and ability to work independently
* Proven solid communication and collaboration skills
* Solid analytical skills and attention to detail
* Proven track record of successful project delivery in a cloud environment
Preferred Qualifications
* Certification in Azure or Snowflake
* Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.)
* Experience with data modeling and database design
* Knowledge of data governance and data quality best practices
* Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
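Since this platform lands curated data in Snowflake from Databricks, a common step is writing a Spark table out through the Spark-Snowflake connector. A hedged sketch with hypothetical account, warehouse, and table names; it assumes the connector is available on the cluster (it ships with the Databricks runtime) and that secrets live in a Key Vault-backed scope:

```python
# Runs on Databricks, where `spark` and `dbutils` are provided.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # hypothetical account
    "sfUser": "etl_user",
    "sfPassword": dbutils.secrets.get("kv-scope", "sf-password"),
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "LOAD_WH",
}

curated = spark.read.table("silver.member_paid_totals")  # hypothetical table
(curated.write.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "MEMBER_PAID_TOTALS")
        .mode("overwrite")
        .save())
```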
Posted 3 weeks ago
5.0 - 10.0 years
8 - 13 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a highly skilled and experienced Senior Cloud Data Engineer to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for migrating our on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools, Delta Lake, and Snowflake (a Delta MERGE sketch follows this posting).
Primary Responsibilities
* Lead the migration of the ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake
* Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
* Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
* Implement DevOps practices and CI/CD pipelines using GitActions
* Collaborate with cross-functional teams to ensure seamless integration and data flow
* Optimize and troubleshoot data pipelines and workflows
* Ensure data security and compliance with industry standards
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
* 6+ years of experience as a Cloud Data Engineer
* Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
* Solid experience in ETL development using on-premises databases and ETL technologies
* Experience with Python or other scripting languages for data processing
* Experience with Agile methodologies
* Proficiency in DevOps and CI/CD practices using GitActions
* Proven excellent problem-solving skills and ability to work independently
* Solid communication and collaboration skills
* Solid analytical skills and attention to detail
* Ability to adapt to new technologies and learn quickly
Preferred Qualifications
* Certification in Azure or Databricks
* Experience with data modeling and database design
* Experience with development in Snowflake for data engineering and analytics workloads
* Knowledge of data governance and data quality best practices
* Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
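Because the target platform includes Delta Lake, incremental loads are usually expressed as a Delta MERGE (upsert). A minimal sketch using the delta-spark API, with hypothetical paths and keys:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("upsert").getOrCreate()

# Hypothetical ADLS Gen2 paths -- substitute your own.
updates = spark.read.parquet(
    "abfss://staging@mystorageacct.dfs.core.windows.net/members/")
target = DeltaTable.forPath(
    spark, "abfss://curated@mystorageacct.dfs.core.windows.net/members/")

# Upsert: update rows whose key matches, insert the rest.
(target.alias("t")
 .merge(updates.alias("s"), "t.member_id = s.member_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```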
Posted 3 weeks ago
3.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Test Planning & Automation Lead - Cloud Data Modernization
Position Overview: We are seeking a highly skilled and experienced Test Planning & Automation Lead to join our team for a Cloud Data Modernization project. This role involves leading the data validation testing efforts for the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a target cloud tech stack comprising Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage, etc.) and Snowflake. The primary goal is to ensure data consistency between the on-premises and cloud environments.
Primary Responsibilities
* Lead Data Validation Testing: Oversee and manage the data validation testing process to ensure data consistency between the on-premises SQL Server and the target cloud environment
* Tool Identification and Automation: Identify and implement appropriate tools to automate the testing process, reducing reliance on manual methods such as Excel or manual file comparisons
* Testing Plan Development: Define and develop a comprehensive testing plan that addresses validations for all data within the data warehouse
* Collaboration: Work closely with data engineers, cloud architects, and other stakeholders to ensure seamless integration and validation of data
* Quality Assurance: Establish and maintain quality assurance standards and best practices for data validation and testing
* Reporting: Generate detailed reports on testing outcomes, data inconsistencies, and corrective actions
* Continuous Improvement: Continuously evaluate and improve testing processes and tools to enhance efficiency and effectiveness
* Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
* Bachelor's degree or above
* Leadership Experience: 6+ years as a testing lead in Data Warehousing or Cloud Data Migration projects
* Automation Tools: Experience with data validation through custom-built Python frameworks and testing automation tools (see the sketch below)
* Testing Methodologies: Proficiency in defining and implementing testing methodologies and frameworks for data validation
* Technical Expertise: Solid knowledge of Python, SQL Server, Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
* Analytical Skills: Proven excellent analytical and problem-solving skills to identify and resolve data inconsistencies
* Communication: Proven solid communication skills to collaborate effectively with cross-functional teams
* Project Management: Demonstrated ability to manage multiple tasks and projects simultaneously, ensuring timely delivery of testing outcomes
Preferred Qualifications
* Experience in leading data validation testing efforts in cloud migration projects
* Familiarity with Agile methodologies and project management tools
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
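The custom-built Python validation frameworks mentioned above usually begin with row-count parity checks between source and target. A hedged sketch using pyodbc against SQL Server and the Snowflake Python connector; the DSN, credentials, and table names are hypothetical:

```python
import pyodbc
import snowflake.connector

def sqlserver_count(table: str) -> int:
    conn = pyodbc.connect("DSN=onprem_dw")  # hypothetical ODBC DSN
    return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def snowflake_count(table: str) -> int:
    conn = snowflake.connector.connect(
        account="myaccount", user="validator", password="...",  # hypothetical
        database="ANALYTICS", schema="CURATED", warehouse="TEST_WH")
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

for table in ["CLAIMS", "MEMBERS", "PROVIDERS"]:  # hypothetical tables
    src, tgt = sqlserver_count(table), snowflake_count(table)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} [{status}]")
```

Real frameworks layer aggregate checksums and column-level comparisons on top of this, but count parity is the usual first gate.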
Posted 3 weeks ago
6.0 - 8.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
#JobOpening Senior Data Engineer (Remote, Contract 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years
We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.
#KeyResponsibilities
* Build scalable ETL pipelines and implement robust data solutions in Azure.
* Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
* Design and maintain secure and efficient data lake architecture.
* Work with stakeholders to gather data requirements and translate them into technical specs.
* Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
* Monitor data quality, performance bottlenecks, and scalability issues (see the sketch below).
* Write clean, organized, reusable PySpark code in an Agile environment.
* Document pipelines, architectures, and best practices for reuse.
#MustHaveSkills
* Experience: 6+ years in Data Engineering
* Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
* Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
* Agile, SDLC, Containerization (Docker), Clean coding practices
#GoodToHaveSkills
* Event Hubs, Logic Apps
* Power BI
* Strong logic building and competitive programming background
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
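For the data-quality monitoring responsibility, a lightweight pattern is a null-rate gate on critical columns that fails the run past a threshold. A hedged PySpark sketch with hypothetical table, columns, and tolerance:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_check").getOrCreate()
df = spark.read.table("silver.orders")  # hypothetical table

total = df.count()
if total == 0:
    raise ValueError("Data quality check failed: table is empty")

# Null rate per critical column (columns are illustrative).
null_rates = {
    c: df.filter(F.col(c).isNull()).count() / total
    for c in ["order_id", "customer_id", "order_date"]
}

MAX_NULL_RATE = 0.01  # hypothetical tolerance
failures = {c: r for c, r in null_rates.items() if r > MAX_NULL_RATE}
if failures:
    raise ValueError(f"Data quality check failed: {failures}")
print("Data quality check passed:", null_rates)
```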
Posted 3 weeks ago
8 - 13 years
13 - 23 Lacs
Hyderabad, Noida, Jaipur
Hybrid
* 5 to 8 years of solutions design & development experience
* Experience in building data ingestion/transformation pipelines on Azure Cloud
* Experience in Big Data tools like Spark, Delta Lake, ADLS, Azure Synapse/Databricks
* Proficient understanding of distributed computing principles
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Bengaluru
Work from Office
Description
We are seeking a highly skilled and experienced Azure Platform Engineer to lead the design, implementation, and maintenance of our Azure data platform. The ideal candidate will have a deep understanding of Azure data services such as Azure Databricks, ADF, ADLS Gen2, Azure SQL, Azure Synapse, Azure VMs, Azure App Services, and Azure Functions. You'll collaborate with cross-functional teams to ensure the scalability, reliability, and performance of our analytics and data solutions. Must have solid experience in Terraform-based provisioning and in building automations and reports using PySpark, PowerShell, and Python.
Roles & Responsibilities
* Experience with schemas, delta/parquet tables on UC, Spark configurations, etc.
* Experience identifying cluster failure issues and fixing them.
* Experience reading Spark UI logs/driver logs/metrics and optimizing cluster performance.
* Experience with cluster upgrades/wheel file upgrades.
* Understanding new features, performing POCs, and demoing them.
* Experience building automations and reports to avoid manual and repeated work (see the sketch below).
Should have experience with the below Azure skills:
* ADLS Gen2: provisioning, governance, and management through the lifecycle.
* ADF: provisioning, configuring, monitoring, and troubleshooting issues.
* SQL: provisioning, configuring, monitoring, and maintenance.
* Log Analytics/Diagnostic Settings, along with proficiency in KQL.
* Azure DevOps and GitHub management and governance experience.
* Terraform provisioning of cloud services.
* Troubleshooting cloud platform-related issues for business users.
* Provisioning, configuring, and maintaining other Azure services like Key Vault, App Services, Azure AD, VMs, etc.
* Key or SVP rotations using Azure automations.
Additional Details
* Named Job Posting? (if Yes - needs to be approved by SCSC): No
* Global Grade: C
* Remote work possibility: No
* Local Skills: Microsoft Azure; Azure Databricks; Microsoft Azure Data Factory; Microsoft Azure DevOps
* Languages Required: ENGLISH
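For the build-automations-and-reports responsibility, one small example is polling cluster health through the Databricks REST API (the clusters/list endpoint). The workspace URL and token below are placeholders; in practice the token would come from Key Vault rather than source code:

```python
import requests

# Hypothetical workspace URL and token -- store the token in Key Vault in practice.
WORKSPACE = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi..."  # placeholder

resp = requests.get(
    f"{WORKSPACE}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Flag clusters in unhealthy states for the daily report.
for cluster in resp.json().get("clusters", []):
    state = cluster.get("state")  # e.g. RUNNING, TERMINATED, ERROR
    if state in ("ERROR", "UNKNOWN"):
        print(f"ALERT: {cluster.get('cluster_name')} is in state {state}")
    else:
        print(f"{cluster.get('cluster_name')}: {state}")
```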
Posted 2 months ago
8 - 12 years
15 - 19 Lacs
Trivandrum
Hybrid
Role & responsibilities:
The Lead Data Engineer will be responsible for designing, implementing, and maintaining data solutions on the Microsoft Azure data platform (Databricks, Azure Data Factory, Python); leading a team of data engineers, collaborating with various stakeholders, and ensuring the efficient processing, storage, and retrieval of large volumes of data; and maintaining and enhancing the data pipelines necessary to support business operations, analytics workloads, reporting needs, and the insights that shape strategic and tactical implementation for the organization. Should be able to consume business and functional requirements and translate them into technical implementations. This role will also be responsible for providing technical expertise and direction to the team to ensure quality, standards, processes, and methodologies are being followed, and for working through technical challenges the team may face.
Technical Expertise and Responsibilities:
* Design, build, and maintain scalable and reliable data pipelines.
* Design and build solutions in Azure Data Factory and Databricks to extract, transform, and load data into different source and target systems.
* Analyze and understand the existing data landscape and provide recommendations/innovative ideas for rearchitecting/optimizing/streamlining to bring efficiency and scalability.
* Collaborate and effectively communicate with onshore counterparts to address technical gaps, requirement challenges, and other complex scenarios.
* Monitor and troubleshoot data systems to ensure high performance and reliability.
* Be highly analytical and detail-oriented, with extensive familiarity with database management principles.
* Optimize data processes for speed and efficiency (see the tuning sketch below).
* Ensure the data architecture supports business requirements and data governance policies.
* Define and execute the data engineering strategy in alignment with the company's goals.
* Integrate data from various sources, ensuring data quality and consistency.
* Stay updated with emerging technologies and industry trends.
* Understand the big-picture business process, utilizing deep knowledge of the banking industry, and translate it into data requirements.
* Enable and run data migrations across different databases and different servers.
* Perform thorough testing and validation to support the accuracy of data transformations and the data verification used in machine learning models.
* Analyze data and different systems to define data requirements.
* Be well versed in data structures & algorithms.
* Define data mappings, working with the business, digital, and data teams.
* Maintain and test data pipelines and validate their performance.
* Assemble large, complex data sets that meet functional/non-functional business requirements.
* Analyze and identify gaps in data needs and work with business and IT to bring alignment on data needs.
* Understand the impact of data conversions as they pertain to servicing operations.
* Manage higher-volume and more complex cases with accuracy and efficiency.
Leadership Responsibilities:
* Provide leadership and guidance to the data engineering team, including mentoring, coaching, and fostering a collaborative work environment.
* Support team members with troubleshooting and resolving complex technical issues and challenges.
* Provide technical expertise and direction in data engineering, guiding the team in selecting appropriate tools, technologies, and methodologies.
* Stay updated with the latest advancements in data engineering and ensure the team follows best practices and industry standards.
* Align coding standards and conduct code reviews to ensure the proper level of code quality.
* Identify and introduce quality assurance processes for data pipelines and workflows.
* Stay up to date on the latest process and IT advancements to automate and modernize data systems for performance, efficiency, and cost savings, with a cloud-centric focus.
* Act as the main point of contact for other teams/contributors.
* Work with product and business teams to understand business requirements and translate them into technical solutions.
* Implement proper security measures and procedures to prevent unauthorized entry and/or data loss.
* Promote honest and open communication throughout the credit union.
* Demonstrate behaviors that are consistent with the credit union's values, philosophies, and leadership characteristics.
* Perform other duties as assigned.
* Foster a culture of collaboration, innovation, and continuous improvement within the team.
* Optimize data flow and collection for cross-functional teams.
* Work closely with onshore data counterparts, product owners, and business stakeholders to understand data needs and strategies.
* Collaborate with IT and DevOps teams to ensure data infrastructure aligns with the overall IT architecture.
* Drive continuous improvement initiatives within the data engineering function.
Required Skills:
* 5+ years of experience supporting big data platforms (cloud).
* Exceptional analytical and conceptual thinking skills, coupled with strong written and verbal communication skills and the ability to solve problems quickly.
* Strong technical skills in data engineering, including proficiency in programming languages and technologies such as Databricks, Azure Data Factory, and Python.
* Proficient in SQL and query optimization.
* Technical expertise in database design and development, data models, and techniques for data mining.
* Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions.
* Cloud computing experience, with a preference for Microsoft Azure cloud technologies.
* Proven experience in leading a team of data engineers, providing guidance, mentorship, and technical support.
* In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration.
* Excellent interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management.
* Drive automation efforts across the data analytics team utilizing Infrastructure as Code (IaC) with Terraform, configuration management, and Continuous Integration (CI)/Continuous Delivery (CD) tools such as Jenkins.
* Experience designing data engineering models.
* Able to communicate effectively, both written and orally, with internal and external customers, vendors, and partners.
* Thrives in ambiguous technology situations.
* Ability to solve problems efficiently and to think analytically, creatively, and logically.
* Ability to interact effectively with co-workers to accomplish daily tasks and to resolve interpersonal conflicts and miscommunications.
* Must be able to be bonded.
* Ability to maintain a high level of confidentiality.
* Experience as a technical lead mentoring data engineers and driving data-oriented projects.
* Experience working in an onshore-offshore model, managing challenging scenarios.
* Lead data architecture and implementation efforts to structure, integrate, govern, store, describe, model, and maintain the data warehouse for optimal accuracy and usage.
* Ability to work with ambiguity and vague requirements and transform them into deliverables.
* Help build and define architecture frameworks, best practices & processes.
* Collaborate on data warehouse architecture and technical design discussions.
* Expertise in the data lifecycle: data ingestion, transformation, loading, validation, and performance tuning.
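On the performance-tuning point referenced above, one of the most common Spark optimizations is broadcasting a small dimension table so the join avoids a shuffle. A minimal hedged sketch; the table names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning").getOrCreate()

transactions = spark.read.table("silver.transactions")  # large fact table (hypothetical)
branches = spark.read.table("silver.branches")          # small dimension (hypothetical)

# Broadcasting the small side ships it to every executor,
# replacing a shuffle join with a local hash join.
joined = transactions.join(broadcast(branches), "branch_id")
joined.explain()  # the plan should show BroadcastHashJoin
```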
Posted 2 months ago
4 - 9 years
6 - 11 Lacs
Gurgaon
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your Role
* Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, Azure Databricks.
* Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, Semantic Models.
* Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams (see the sketch below).
* Strong understanding of Delta Lake, Parquet, and distributed data systems.
* Strong programming skills in Python, PySpark, Scala, or SparkSQL/T-SQL for data transformations.
* Strong experience in implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
* Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
* Understanding of machine learning algorithms, AI/ML frameworks (e.g., TensorFlow, PyTorch), and Power BI is an added advantage.
Your Profile
* Overall 5-9 years of relevant experience in the Azure data platform and AI/ML technologies.
* 4+ years in designing and solutioning in Fabric, Databricks, PySpark, and Azure Synapse; MS Fabric and PySpark are a must.
What you'll love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
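As a flavor of the Fabric + PySpark work described, here is a hedged sketch of a notebook cell aggregating a staging table into a Lakehouse table. The table and column names are ours, not the posting's; in a Fabric notebook the spark session is pre-provided and Delta is the default table format:

```python
# Runs in a Microsoft Fabric notebook, where `spark` is pre-provisioned
# and saveAsTable writes a Delta table into the attached Lakehouse.
from pyspark.sql import functions as F

sales = spark.read.table("raw_sales")  # hypothetical staging table

daily = (sales
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue")))

daily.write.mode("overwrite").saveAsTable("gold_daily_revenue")
```

The resulting Delta table is then directly queryable from Power BI via the Lakehouse's semantic model, which is the collaboration pattern the bullet above describes.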
Posted 3 months ago
5 - 10 years
20 - 35 Lacs
Bengaluru, Hyderabad, Gurgaon
Hybrid
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune & Bangalore).
Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai
Work Mode: Hybrid (2-3 days office in a week)
Job Description:
* 5-14 years of experience in Big Data & data-related technologies
* Expert-level understanding of distributed computing principles
* Expert-level knowledge of and experience in Apache Spark
* Hands-on programming with Python/Scala
* Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
* Experience with building stream-processing systems using technologies such as Apache Storm or Spark Streaming
* Good understanding of Big Data querying tools, such as Hive and Impala
* Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files (see the sketch below)
* Good understanding of SQL queries, joins, stored procedures, relational schemas
* Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
* Knowledge of ETL techniques and frameworks
* Performance tuning of Spark jobs
* Experience with native cloud data services on AWS/Azure/GCP
* Ability to lead a team efficiently
* Experience with designing and implementing Big Data solutions
* Practitioner of Agile methodology
WE OFFER
* Opportunity to work on technical challenges that may impact across geographies
* Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
* Opportunity to share your ideas on international platforms
* Sponsored Tech Talks & Hackathons
* Possibility to relocate to any EPAM office for short- and long-term projects
* Focused individual development
* Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits
* Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
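For the RDBMS-integration requirement, here is a minimal hedged sketch of a partitioned JDBC read from SQL Server into Spark. The host, credentials, and partition bounds are hypothetical, and the SQL Server JDBC driver must be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms_ingest").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://dbhost:1433;databaseName=erp")  # hypothetical
          .option("dbtable", "dbo.orders")
          .option("user", "ingest_user")
          .option("password", "...")  # fetch from a secret store in practice
          # Partitioned read: parallelize by a numeric key instead of one giant fetch.
          .option("partitionColumn", "order_id")
          .option("lowerBound", "1")
          .option("upperBound", "10000000")
          .option("numPartitions", "8")
          .load())

orders.write.mode("append").parquet("/data/landing/orders/")
```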
Posted 3 months ago