8.0 - 13.0 years
25 - 40 Lacs
Mumbai, Hyderabad
Work from Office
Essential Services: Role & Location Fungibility

At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team. To achieve this, employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role description gives you an overview of the responsibilities; it is directional and guiding in nature.

About the Role:
As a Data Warehouse Architect, you will be responsible for managing and enhancing the data warehouse, which handles large volumes of customer-lifecycle data flowing in from various applications within the guardrails of risk and compliance. You will manage the day-to-day operations of the data warehouse (Vertica). In this role, you will lead a team of data warehouse engineers covering data modelling, ETL pipeline design, issue management, upgrades, performance tuning, migration, and the governance and security framework of the data warehouse. This role enables the Bank to maintain huge data sets in a structured manner amenable to data intelligence. The data warehouse supports numerous information systems used by various business groups to derive insights. As a natural progression, the data warehouse will gradually be migrated to a data lake, enabling better analytical advantage. The role holder will also be responsible for guiding the team through this migration.

Key Responsibilities:
- Data Pipeline Design: Design and develop ETL data pipelines that help organise large volumes of data, using data warehousing technologies to ensure the warehouse is efficient, scalable, and secure (a minimal load-step sketch follows below).
- Issue Management: Ensure the data warehouse runs smoothly; monitor system performance, diagnose and troubleshoot issues, and make the changes needed to optimize performance.
- Collaboration: Collaborate with cross-functional teams to implement upgrades, migrations, and continuous improvements.
- Data Integration and Processing: Process, clean, and integrate large data sets from various sources to ensure the data is accurate, complete, and consistent.
- Data Modelling: Design and implement data modelling solutions so the organization's data is properly structured and organized for analysis.

Key Qualifications & Skills:
- Education: B.E./B.Tech. in Computer Science, Information Technology, or an equivalent domain, with 10 to 12 years of experience and at least 5 years of relevant work experience in data warehousing/mining/BI/MIS.
- Experience in Data Warehousing: Knowledge of ETL and data technologies and the ability to outline a future vision in OLTP and OLAP (Oracle / MS SQL). Data modelling, data analysis, and visualization experience (analytical tools such as Power BI, SAS, QlikView, Tableau).
- Good to have: exposure to Azure cloud data platform services such as Cosmos DB, Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Synergize with the Team: Regular interaction with business/product/functional teams to create mobility solutions.
- Certification: Azure certifications such as DP-900, PL-300, DP-203, or other data platform/data analyst certifications.
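For illustration, a minimal sketch of the kind of bulk ETL load step this role describes, using the open-source vertica-python client; the connection details, table, and file names are all hypothetical, not taken from the posting:

```python
# Hypothetical bulk load into a Vertica staging table via vertica-python.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",  # placeholder host
    "port": 5433,
    "user": "etl_user",
    "password": "***",
    "database": "edw",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Stream a daily customer-lifecycle extract into a staging table with COPY,
    # Vertica's bulk-load path, rather than row-by-row INSERTs.
    with open("customer_events.csv", "rb") as fs:
        cur.copy(
            "COPY staging.customer_events (customer_id, event_type, event_ts) "
            "FROM STDIN DELIMITER ',' ABORT ON ERROR",
            fs,
        )
    conn.commit()
```

A downstream job would then merge the staging table into governed warehouse tables, which is where the modelling and security-framework responsibilities above come in.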
About the Business Group: The Technology Group at ICICI Bank is at the forefront of our operations and offerings, focused on leveraging state-of-the-art technology to provide customer-centric solutions. This group plays a pivotal role in our vision of the transition from Bank to Bank Tech, and it offers round-the-clock support to our entire banking ecosystem. In our persistent effort to provide products and solutions that genuinely touch customers, we unlock the potential of technology in every engagement to create customer delight, while tirelessly ensuring that all our processes, systems, and infrastructure remain well within the guardrails of data security, privacy, and relevant regulations.
Posted 1 week ago
6.0 - 11.0 years
12 - 17 Lacs
Pune
Work from Office
Roles and Responsibility:
The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities:
- Lead the design and implementation of Databricks-based data solutions.
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities:
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations (see the sketch below).
- Experience with orchestration tools such as Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implementation of Azure Key Vault and scoped credentials.
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, plus cost optimization and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and mentor junior engineers is a plus.
- Experience working with streaming data sources is a must; Kafka is preferred.

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL.
- Databricks certification (e.g., Certified Data Engineer Professional).
- Experience with machine learning and AI integration in Databricks.
- Strong understanding of cloud platforms (AWS, Azure, or GCP).
- Proven leadership experience in managing technical teams.
- Excellent problem-solving and communication skills.

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance: integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
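As an aside on the medallion architecture the posting mentions: a minimal PySpark sketch of a bronze-to-silver hop on Delta Lake. Paths, schema, and column names are hypothetical:

```python
# Hypothetical bronze -> silver refinement step in a Delta Lake medallion design.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

bronze_path = "abfss://lake@account.dfs.core.windows.net/bronze/orders"
silver_path = "abfss://lake@account.dfs.core.windows.net/silver/orders"

# Bronze holds raw ingested records, kept as-is in Delta format.
bronze = spark.read.format("delta").load(bronze_path)

# Silver is deduplicated, typed, and cleansed for downstream consumers.
silver = (
    bronze.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("amount") > 0)
)

silver.write.format("delta").mode("overwrite").save(silver_path)
```

A gold layer would typically follow, aggregating silver tables into business-level marts.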
Posted 1 week ago
8.0 - 10.0 years
20 - 35 Lacs
Ahmedabad
Remote
We are seeking a talented and experienced Senior Data Engineer to join our team and contribute to building a robust data platform on Azure Cloud. The ideal candidate will have hands-on experience designing and managing data pipelines, ensuring data quality, and leveraging cloud technologies for scalable and efficient data processing. The Data Engineer will design, develop, and maintain scalable data pipelines and systems to support the ingestion, transformation, and analysis of large datasets. The role requires a deep understanding of data workflows, cloud platforms (Azure), and strong problem-solving skills to ensure efficient and reliable data delivery.

Key Responsibilities:
- Data Ingestion and Integration: Develop and maintain data ingestion pipelines using tools like Azure Data Factory, Databricks, and Azure Event Hubs. Integrate data from various sources, including APIs, databases, file systems, and streaming data.
- ETL/ELT Development: Design and implement ETL/ELT workflows to transform and prepare data for analysis and storage in the data lake or data warehouse. Automate and optimize data processing workflows for performance and scalability.
- Data Modeling and Storage: Design data models for efficient storage and retrieval in Azure Data Lake Storage and Azure Synapse Analytics. Implement best practices for partitioning, indexing, and versioning in data lakes and warehouses.
- Quality Assurance: Implement data validation, monitoring, and reconciliation processes to ensure data accuracy and consistency (a minimal reconciliation sketch follows below). Troubleshoot and resolve issues in data pipelines to ensure seamless operation.
- Collaboration and Documentation: Work closely with data architects, analysts, and other stakeholders to understand requirements and translate them into technical solutions. Document processes, workflows, and system configurations for maintenance and onboarding purposes.
- Cloud Services and Infrastructure: Leverage Azure services like Azure Data Factory, Databricks, Azure Functions, and Logic Apps to create scalable and cost-effective solutions. Monitor and optimize Azure resources for performance and cost management.
- Security and Governance: Ensure data pipelines comply with organizational security and governance policies. Implement security protocols using Azure IAM, encryption, and Azure Key Vault.
- Continuous Improvement: Monitor existing pipelines and suggest improvements for better efficiency, reliability, and scalability. Stay updated on emerging technologies and recommend enhancements to the data platform.

Skills:
- Strong experience with Azure Data Factory, Databricks, and Azure Synapse Analytics.
- Proficiency in Python, SQL, and Spark.
- Hands-on experience with ETL/ELT processes and frameworks.
- Knowledge of data modeling, data warehousing, and data lake architectures.
- Familiarity with REST APIs, streaming data (Kafka, Event Hubs), and batch processing.

Good to Have:
- Experience with tools like Azure Purview, Delta Lake, or similar governance frameworks.
- Understanding of CI/CD pipelines and DevOps tools like Azure DevOps or Terraform.
- Familiarity with data visualization tools like Power BI.

Competencies: analytical thinking, clear and effective communication, time management, team collaboration, technical proficiency, supervising others, problem solving, risk management, organizing and task management, creativity/innovation, honesty/integrity.

Education and Experience:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 8+ years of experience in a data engineering or similar role.
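One of the duties above, reconciliation, is easy to make concrete. A minimal sketch of a source-versus-target row-count check in PySpark; the JDBC URL, credentials, and paths are hypothetical, and a production version would log to monitoring rather than raise:

```python
# Hypothetical source-vs-target reconciliation after a pipeline run.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Row count at the source (assumes a SQL Server JDBC driver on the classpath).
source_count = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://src.example.com;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "reader").option("password", "***")
    .load()
    .count()
)

# Row count at the target Delta table in the lake.
target_count = (
    spark.read.format("delta")
    .load("abfss://lake@account.dfs.core.windows.net/silver/orders")
    .count()
)

if source_count != target_count:
    raise ValueError(
        f"Reconciliation failed: source={source_count}, target={target_count}"
    )
```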
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 individuals across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging platforms including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines to efficiently process and analyze large-scale datasets.
- Implement Agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases.
- Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary.
- Stay updated with the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones.
- Mentor and guide junior team members to aid in their professional growth and skill development.
- Ensure model explainability, fairness, and compliance with responsible AI principles.
- Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases (e.g., FAISS, Pinecone) and knowledge-retrieval techniques (a minimal retrieval sketch follows below).
- Knowledge of Reinforcement Learning from Human Feedback (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous.

Preferred Qualifications / Skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

Job Title: Principal Consultant
Location: India-Noida
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Apr 11, 2025, 9:36:00 AM
Unposting Date: May 11, 2025, 1:29:00 PM
Master Skills List: Digital
Job Category: Full Time
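To make the vector-database requirement concrete, a minimal sketch of embedding-based retrieval with FAISS, the kind of lookup that feeds a RAG pipeline. The embeddings here are random placeholders; in practice they would come from an embedding model such as one hosted on Azure OpenAI:

```python
# Hypothetical nearest-neighbour retrieval over document embeddings with FAISS.
import numpy as np
import faiss

dim = 384  # embedding dimensionality (assumed)
doc_vectors = np.random.rand(1000, dim).astype("float32")  # placeholder embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 index; fine at this scale
index.add(doc_vectors)

query_vector = np.random.rand(1, dim).astype("float32")  # placeholder query embedding
distances, ids = index.search(query_vector, 5)  # top-5 most similar chunks
print(ids[0])  # indices of the chunks to stuff into the LLM prompt
```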
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
You will play a crucial role in meeting the requirements of key business functions by developing SQL code, Azure data pipelines, ETL processes, and data models. Your responsibilities will include crafting MS-SQL queries and procedures, generating customized reports, and aggregating data to the level clients consume (a minimal extract sketch follows below). Additionally, you will be tasked with database design, data extraction from diverse sources, data integration, and ensuring data stability, reliability, and performance.

Your typical day will involve:
- Demonstrating 2-3 years of experience as a SQL Developer or in a similar capacity
- A strong grasp of SQL Server and SQL programming, with at least 2 years of hands-on SQL programming experience
- Familiarity with SQL Server Integration Services (SSIS)
- Preferred experience implementing Data Factory pipelines for on-cloud ETL processing
- Proficiency in Azure Data Factory, Azure Synapse, and ADLS, with the capability to configure and manage all aspects of SQL Server at a consultant level
- Showing a sense of ownership and pride in your work, understanding its impact on the company's success
- Excellent interpersonal and communication skills (both verbal and written), enabling clear and precise communication at various organizational levels
- Critical thinking and problem-solving abilities
- Being a team player with good time-management skills
- Experience in analytics projects within the pharma sector, focusing on deriving actionable insights and their implementation
- Expertise in longitudinal data, retail/CPG, customer-level datasets, pharma data, patient data, forecasting, and performance reporting
- Intermediate to strong proficiency in MS Excel and PowerPoint
- Previous exposure to SQL Server and SSIS
- Ability to efficiently handle large datasets (multi-million-record complex relational databases)
- A self-directed approach to supporting the data requirements of multiple teams, systems, and products
- Effective communication in challenging situations with structured thinking and a solution-focused mindset, leading interactions with internal and external stakeholders with minimal supervision
- Proactive identification of potential risks and implementation of mitigation strategies to prevent downstream issues
- Familiarity with project management principles, including breaking down approaches into smaller tasks and planning resource allocation accordingly
- Quick learning ability in a dynamic environment
- Prior success working in a global environment is advantageous
- Prior experience in healthcare analytics is a bonus

IQVIA is a prominent global provider of clinical research services, commercial insights, and healthcare intelligence to the life sciences and healthcare sectors. The company facilitates intelligent connections to expedite the development and commercialization of innovative medical treatments, ultimately enhancing patient outcomes and global population health. For further insights, visit https://jobs.iqvia.com.
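A minimal sketch of the extract-and-aggregate pattern this role centres on, using pyodbc against SQL Server; the connection string, table, and columns are hypothetical:

```python
# Hypothetical monthly aggregation pulled from SQL Server for a client report.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql.example.com;DATABASE=pharma;UID=report_user;PWD=***"
)

sql = """
SELECT brand,
       region,
       DATEFROMPARTS(YEAR(rx_date), MONTH(rx_date), 1) AS rx_month,
       SUM(units) AS total_units
FROM dbo.prescriptions
GROUP BY brand, region, DATEFROMPARTS(YEAR(rx_date), MONTH(rx_date), 1)
ORDER BY rx_month;
"""

for row in conn.cursor().execute(sql):
    print(row.brand, row.region, row.rx_month, row.total_units)
```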
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer (Power BI) at Acronotics Limited, you will play a crucial role in designing and managing data pipelines that integrate Power BI, OLAP cubes, documents such as PDFs and presentations, and external data sources with Azure AI. Your primary responsibility will be to ensure that both structured and unstructured financial data is properly indexed and made accessible for semantic search and LLM applications.

Your key responsibilities in this full-time, on-site role based in Bengaluru will include:
- Extracting data from Power BI datasets, semantic models, and OLAP cubes.
- Connecting and transforming data using Azure Synapse, Data Factory, and Lakehouse architecture.
- Preprocessing PDFs, PPTs, and Excel files utilizing tools like Azure Form Recognizer or Python-based solutions (a minimal extraction sketch follows below).
- Designing data ingestion pipelines for external web sources, such as commodity prices.
- Collaborating with AI engineers to provide cleaned, contextual data for vector indexes.

To be successful in this role, you should have a strong background in using the Power BI REST/XMLA APIs and expertise in OLAP systems (such as SSAS and SAP BW), data modeling, and ETL design. Hands-on experience with Azure Data Factory, Synapse, or Data Lake is essential, along with familiarity with JSON, DAX, and M queries.

Join Acronotics Limited in revolutionizing businesses with cutting-edge robotic automation and artificial intelligence solutions. Let your expertise in data engineering contribute to the advancement of automated solutions that redefine how products are manufactured, marketed, and consumed. Discover Radium AI, our innovative product automating bot monitoring and support activities, on our website today.
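A minimal sketch of the document-preprocessing step mentioned above, using the azure-ai-formrecognizer SDK's prebuilt read model; the endpoint, key, and file name are hypothetical:

```python
# Hypothetical PDF text extraction with Azure Form Recognizer (prebuilt-read).
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://example.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("***"),
)

with open("annual_report.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-read", document=f)

result = poller.result()
for page in result.pages:
    for line in page.lines:
        print(line.content)  # text to chunk and index for semantic search
```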
Posted 1 week ago
13.0 - 17.0 years
0 Lacs
maharashtra
On-site
Birlasoft is a powerhouse that brings together domain expertise, enterprise solutions, and digital technologies to redefine business processes. With a consultative and design-thinking approach, we drive societal progress by enabling our customers to run businesses with efficiency and innovation. As part of the CK Birla Group, a multibillion-dollar enterprise, we have a team of 12,500+ professionals dedicated to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our commitment to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

As an Azure Tech PM at Birlasoft, you will be responsible for leading and delivering complex data analytics projects. With 13-15 years of experience, you will play a critical role in overseeing the planning, execution, and successful delivery of data analytics initiatives while managing a team of 15+ skilled resources. You should have exceptional communication skills, a deep understanding of Agile methodologies, and a strong background in managing cross-functional teams on data analytics projects.

Key Responsibilities:
- Lead end-to-end planning, coordination, and execution of data analytics projects, ensuring adherence to project scope, timelines, and quality standards.
- Guide the team in defining project requirements, objectives, and success criteria using your extensive experience in data analytics.
- Apply Agile methodologies to create and maintain detailed project plans, sprint schedules, and resource allocation for efficient project delivery.
- Manage a team of 15+ technical resources, fostering collaboration and a culture of continuous improvement.
- Collaborate closely with cross-functional stakeholders to align project goals with business objectives.
- Monitor project progress; identify risks, issues, and bottlenecks; and implement mitigation strategies.
- Provide regular project updates to executive leadership, stakeholders, and project teams.
- Facilitate daily stand-ups, sprint planning, backlog grooming, and retrospective meetings to promote transparency and efficiency.
- Drive the implementation of best practices for data analytics, ensuring data quality, accuracy, and compliance with industry standards.
- Act as a point of escalation for project-related challenges and work with the team to resolve issues promptly.
- Collaborate with cross-functional teams to ensure successful project delivery, including testing, deployment, and documentation.
- Provide input to project estimation, resource planning, and risk management activities.

Mandatory Experience:
- Minimum 5+ years of technical project management experience in data lake and data warehousing (DW) initiatives.
- Strong understanding of DW process execution, from data acquisition to visualization.
- Minimum 3+ years of exposure to Azure skills such as Azure ADF, Azure Databricks, Synapse, SQL, and Power BI, or experience managing at least 2 end-to-end Azure cloud projects.

Other Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 13-15 years of progressive experience in technical project management focusing on data analytics and data-driven initiatives.
- In-depth knowledge of data analytics concepts, tools, and technologies.
- Exceptional leadership, team management, interpersonal, and communication skills.
- Demonstrated success in delivering data analytics projects on time, within scope, and meeting quality expectations.
- Strong problem-solving skills and a proactive attitude toward identifying challenges.
- Project management certifications such as PMP, PMI-ACP, or CSM would be an added advantage.
- Ability to thrive in a dynamic, fast-paced environment, managing multiple projects simultaneously.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
As a Data Architect at Beinex, located in Kochi, Kerala, you will collaborate with the Sales team on RFPs and pre-sales activities, as well as project delivery and support. Your role will involve delivering on-site technical engagements with customers, participating in pre-sales visits, understanding customer requirements, defining project timelines, and implementing solutions. Additionally, you will work on both on-site and off-site projects to help customers migrate from their existing data warehouses to Snowflake and other databases.

Requirements:
- At least 8 years of experience in IT platform implementation, development, DBA work, and data migration in relational database management systems (RDBMS).
- 5+ years of hands-on experience implementing and performance-tuning MPP databases.
- Proficiency in Snowflake, Redshift, Databricks, or Azure Synapse, along with the ability to prioritize projects effectively.
- Experience analyzing data warehouses such as Teradata, Netezza, Oracle, and SAP.

Your responsibilities will also include designing database environments, analyzing production deployments, optimizing performance, writing SQL and stored procedures, conducting data validation and data quality tests, and planning migrations to Snowflake. You should possess strong communication and problem-solving skills and the capacity to work effectively both independently and as part of a team.

At Beinex, you will have access to perks including comprehensive health plans, learning and development opportunities, workation and outdoor training, a hybrid working environment, and on-site travel opportunities. Join us to be part of a dynamic team and advance your career in a supportive and engaging work environment.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
We are searching for a Senior Data Engineer with significant experience developing ETL processes using PySpark Notebooks and Microsoft Fabric, and with supporting existing legacy SQL Server environments. The ideal candidate will have a solid foundation in Spark-based development, advanced SQL skills, and excellent communication abilities, and will be comfortable working autonomously, collaboratively within a team, or guiding other developers when necessary. Expertise with Azure Data Services (such as Azure Data Factory or Azure Synapse), experience creating DAGs and running Apache Airflow (see the sketch below), and knowledge of DevOps practices, CI/CD pipelines, and Azure DevOps are also expected.

Key Responsibilities:
- Design, develop, and manage ETL notebook orchestration pipelines using PySpark and Microsoft Fabric.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver effective data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Develop, maintain, and uphold data engineering best practices, coding standards, and documentation.
- Conduct code reviews and offer constructive feedback to improve team productivity and code quality.
- Support data-driven decision-making by ensuring data integrity, availability, and consistency across platforms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 10+ years of experience in data engineering, focusing on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is advantageous.
- Strong understanding of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is a plus.
- Experience with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Experience creating DAGs, implementing activities, and running Apache Airflow.
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.

Aspire Systems is a global technology services firm that acts as a trusted technology partner for over 275 clients worldwide. Aspire collaborates with leading enterprises in Banking, Insurance, Retail, and ISVs to help them leverage technology for business transformation in the current digital era. The company's dedication to "Attention. Always." reflects its commitment to providing care and attention to both its customers and employees. With over 4900 employees globally and a CMMI Level 3 certification, Aspire Systems operates in North America, LATAM, Europe, the Middle East, and Asia Pacific. Aspire Systems has been recognized as one of the Top 100 Best Companies to Work For by the Great Place to Work Institute for the 12th consecutive time. For more information about Aspire Systems, please visit https://www.aspiresys.com/.
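Since the posting above asks for DAG-building experience, here is a minimal Apache Airflow sketch; the DAG id, schedule, and callables are illustrative stand-ins, and a real deployment would trigger the Fabric/Spark job through a provider operator or REST call:

```python
# Hypothetical two-step ETL DAG (Airflow 2.x style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_step(step_name: str) -> None:
    # Placeholder: in practice this would invoke the PySpark notebook/job.
    print(f"{step_name} executed")


with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # daily at 02:00
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract",
        python_callable=run_step,
        op_args=["extract"],
    )
    transform = PythonOperator(
        task_id="transform",
        python_callable=run_step,
        op_args=["transform"],
    )
    extract >> transform  # simple linear dependency
```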
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will work as part of a team developing Data and Analytics solutions. Your responsibilities will include participating in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure the delivery of a quality product. Experience developing Modern Data Warehouse solutions on the Azure or AWS stack is required.

To be successful in this role, you should have a Bachelor's degree in computer science and engineering or equivalent demonstrable experience. Cloud certifications in the Data, Analytics, or Ops/Architect space are desirable.

Primary skills:
- 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions
- Programming experience in Scala or Python, and SQL
- Minimum of 1 year of experience in MDM/PIM solution implementation with tools like Ataccama, Syndigo, or Informatica
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Snowflake
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Databricks
- Working knowledge of AWS and Azure services such as S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, and Azure Synapse
- Demonstrated analytical and problem-solving skills
- Excellent written and verbal communication skills in English

Secondary skills: familiarity with Agile practices and version control platforms (Git, CodeCommit), problem-solving skills, an ownership mentality, and a proactive rather than reactive approach.

This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in the field of Data Engineering, we encourage you to apply before the close date on 11-10-2024.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Sr. Data Engineer to join their team in Bangalore, Karnataka, India. As a Sr. Data Engineer, your primary responsibility will be to build and implement PySpark-based data pipelines in Azure Synapse to transform and load data into ADLS in Delta format. You will also design and implement dimensional (star/snowflake) and 3NF data models optimized for access through Power BI.

Your tasks will include:
- Unit testing data pipelines and transformations.
- Designing and building metadata-driven data pipelines using PySpark in Azure Synapse (a minimal metadata-driven loader sketch follows below).
- Analyzing and optimizing Spark SQL queries.
- Optimizing the integration of the data lake with the Power BI semantic model.
- Collaborating with cross-functional teams to ensure data models align with business needs.
- Performing Source-to-Target Mapping (STM) from source to multiple layers in the data lake.
- Maintaining version control and CI/CD pipelines in Git and Azure DevOps.
- Integrating Azure Purview to enable access controls and implementing row-level security.

The ideal candidate has at least 7 years of experience in SQL and PySpark. Hands-on experience with Azure Synapse, ADLS, Delta format, and metadata-driven data pipelines is required, as is experience implementing dimensional (star/snowflake) and 3NF data models and expertise in PySpark and Spark SQL, including query optimization and performance tuning. Strong problem-solving and analytical skills for debugging and optimizing data pipelines in Azure Synapse, familiarity with CI/CD practices in Git and Azure DevOps, and working experience in an Azure DevOps-based development environment are also necessary.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. They are committed to helping clients innovate, optimize, and transform for long-term success. With diverse experts in more than 50 countries and a robust partner ecosystem, NTT DATA offers business and technology consulting, data and artificial intelligence solutions, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and is part of the NTT Group, which invests over $3.6 billion each year in R&D to support organizations and society in confidently moving into the digital future.
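A minimal sketch of the metadata-driven pattern the posting describes: a small control list (in practice a control table) drives which sources land where in the lake. All names are hypothetical:

```python
# Hypothetical metadata-driven loader: config drives source -> Delta target.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# In a real pipeline this metadata would come from a control table, not a literal.
tables = [
    {"source": "staging.customers", "target": "silver/customers", "keys": ["customer_id"]},
    {"source": "staging.orders",    "target": "silver/orders",    "keys": ["order_id"]},
]

lake_root = "abfss://lake@account.dfs.core.windows.net/"

for t in tables:
    df = spark.table(t["source"])  # source registered in the metastore
    (
        df.dropDuplicates(t["keys"])
          .write.format("delta")
          .mode("overwrite")
          .save(lake_root + t["target"])
    )
```

Adding a source then becomes a metadata change rather than a code change, which is the point of the pattern.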
Posted 1 week ago
10.0 - 17.0 years
12 - 17 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
POSITION OVERVIEW:
We are seeking an experienced and highly skilled Data Engineer with deep expertise in Microsoft Fabric, MS-SQL, data warehouse architecture design, and SAP data integration. The ideal candidate will be responsible for designing, building, and optimizing data pipelines and architectures to support our enterprise data strategy, working closely with cross-functional teams to ingest, transform, and make data (from SAP and other systems) available in our Microsoft Azure environment, enabling robust analytics and business intelligence.

KEY ROLES & RESPONSIBILITIES:
- Spearhead the design, development, deployment, testing, and management of strategic data architecture, leveraging cutting-edge technology stacks in cloud, on-prem, and hybrid environments.
- Design and implement an end-to-end data architecture within Microsoft Fabric / SQL, including Azure Synapse Analytics (incl. data warehousing); this also encompasses a data mesh architecture.
- Develop and manage robust data pipelines to extract, load, and transform data from SAP systems (e.g., ECC, S/4HANA, BW).
- Perform data modeling and schema design for enterprise data warehouses in Microsoft Fabric.
- Ensure data quality, security, and compliance standards are met throughout the data lifecycle.
- Enforce data security measures, strategies, protocols, and technologies, ensuring adherence to security and compliance requirements.
- Collaborate with BI, analytics, and business teams to understand data requirements and deliver trusted datasets.
- Monitor and optimize the performance of data processes and infrastructure.
- Document technical solutions and develop reusable frameworks and tools for data ingestion and transformation.
- Establish and maintain robust knowledge management structures encompassing data architecture, data policies, platform usage policies, development rules, and more, ensuring adherence to best practices, regulatory compliance, and optimization across all data processes.
- Implement microservices, APIs, and event-driven architecture to enable agility and scalability.
- Create and maintain architectural documentation, diagrams, policies, standards, conventions, rules, and frameworks for effective knowledge sharing and handover.
- Monitor and optimize the performance, scalability, and reliability of the data architecture and pipelines.
- Track data consumption and usage patterns through automated, alert-driven tracking to ensure that infrastructure investment is effectively leveraged.

KEY COMPETENCIES:
- Microsoft Certified: Fabric Analytics Engineer Associate, or an equivalent certification for MS SQL.
- Prior experience working in cloud environments (Azure preferred).
- Understanding of SAP data structures and SAP integration tools such as SAP Data Services, SAP Landscape Transformation (SLT), or RFC/BAPI connectors.
- Experience with DevOps practices and version control (e.g., Git).
- Deep understanding of SAP architecture, data models, security principles, and platform best practices.
- Strong analytical skills, with the ability to translate business needs into technical solutions.
- Experience with project coordination, vendor management, and Agile or hybrid project delivery methodologies.
- Excellent communication, stakeholder management, and documentation skills.
- Strong understanding of data warehouse architecture and dimensional modeling.
- Excellent problem-solving and communication skills.

QUALIFICATIONS / EXPERIENCE / SKILLS:
Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Certifications such as SQL, Administrator, and Advanced Administrator are preferred.
- Expertise in data transformation using SQL, PySpark, and/or other ETL tools.
- Strong knowledge of data governance, security, and lineage in enterprise environments.
- Advanced knowledge of SQL, database procedures/packages, and dimensional modeling.
- Proficiency in Python and/or Data Analysis Expressions (DAX) (preferred, not mandatory).
- Familiarity with Power BI for downstream reporting (preferred, not mandatory).

Experience:
- 10 years of experience as a Data Engineer or in a similar role.

Skills:
- Hands-on experience with Microsoft SQL (MS-SQL) and Microsoft Fabric, including Synapse (data warehousing, notebooks, Spark).
- Experience integrating and extracting data from SAP systems, such as SAP ECC or S/4HANA, SAP BW, and SAP Core Data Services (CDS) Views or OData Services.
- Knowledge of data protection laws across countries (preferred, not mandatory).
Posted 2 weeks ago
1.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience required: 2 years
Educational Qualification: 15 years of full-time education

Summary:
As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of the projects you are involved in, ensuring that all components function seamlessly and meet client requirements.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to best practices and coding standards.

Professional & Technical Skills:
- Must-have skills: proficiency in PySpark and Azure Synapse.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize performance issues in application code.

Additional Information:
- The candidate should have a minimum of 2 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 weeks ago
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: NA
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary:
As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Experience:
- Overall IT experience: 7+ years
- Data modeling experience: 3+ years
- Data Vault modeling experience: 2+ years

Key Responsibilities:
- Drive discussions with client deal teams to understand business requirements and how the data model fits into implementation and solutioning.
- Develop the solution blueprint, scoping, and estimation in delivery projects and solutioning.
- Drive discovery activities and design workshops with clients, and lead strategic road-mapping and operating model design discussions.
- Design and develop Data Vault 2.0-compliant models, including Hubs, Links, and Satellites (see the sketch below).
- Design and develop the Raw Data Vault and Business Data Vault.
- Translate business requirements into conceptual, logical, and physical data models.
- Work with source system analysts to understand data structures and lineage.
- Ensure conformance to data modeling standards and best practices.
- Collaborate with ETL/ELT developers to implement data models in a modern data warehouse environment (e.g., Snowflake, Databricks, Redshift, BigQuery).
- Document models, data definitions, and metadata.

Technical Experience:
- 7+ years of overall IT experience, with 3+ years in data modeling and 2+ years in Data Vault modeling.
- Design and development of the Raw Data Vault and Business Data Vault.
- Strong understanding of the Data Vault 2.0 methodology, including business keys, record tracking, and historical tracking.
- Data modeling experience in dimensional modeling and 3NF modeling.
- Hands-on experience with data modeling tools (e.g., ER/Studio, ERwin, or similar).
- Solid understanding of ETL/ELT processes, data integration, and warehousing concepts.
- Experience with a modern cloud data platform (e.g., Snowflake, Databricks, Azure Synapse, AWS Redshift, or Google BigQuery).
- Excellent SQL skills.

Good to Have:
- Any of these add-on skills: graph database modeling, RDF, document DB modeling, ontology, semantic data modeling.
- Hands-on experience with a Data Vault automation tool (e.g., VaultSpeed, WhereScape, biGENIUS-X, dbt, or similar).
- Understanding of the cloud data analytics landscape and data lake design knowledge.
- Cloud data engineering and cloud data integration.

Professional Experience:
- Strong requirement analysis and technical solutioning skills in Data and Analytics.
- Excellent writing, communication, and presentation skills.
- Eagerness to learn new skills and develop yourself on an ongoing basis.
- Good client-facing and interpersonal skills.

Educational Qualification:
- B.E. or B.Tech is a must.
Qualification: 15 years of full-time education
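To ground the Data Vault 2.0 requirement, a minimal PySpark sketch of a Hub load with a hashed business key and standard load metadata; table and column names are hypothetical:

```python
# Hypothetical Data Vault 2.0 Hub load: hash key + load metadata columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

src = spark.table("staging.customers")  # assumed staging table

hub_customer = (
    src.select("customer_bk")                      # business key
       .dropDuplicates(["customer_bk"])
       .withColumn(
           "hub_customer_hk",
           F.sha2(F.col("customer_bk").cast("string"), 256),  # hash key
       )
       .withColumn("load_dts", F.current_timestamp())         # load timestamp
       .withColumn("record_source", F.lit("CRM"))             # record source
)

hub_customer.write.format("delta").mode("append").saveAsTable("raw_vault.hub_customer")
```

Links and Satellites follow the same recipe: Links hash the combined business keys of the Hubs they relate, and Satellites carry the descriptive attributes plus a hash diff for change detection.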
Posted 2 weeks ago
7.0 - 11.0 years
15 - 25 Lacs
Hyderabad
Hybrid
Role Purpose:
The Senior Data Engineer will support and enable the Data Architecture and the Data Strategy, supporting solution architecture and engineering for data ingestion and modelling challenges. The role will support the deduplication of enterprise data tools, working with the Lonza Data Governance Board, Digital Council, and IT to drive towards a single Data and Information Architecture. This is a hands-on engineering role with a focus on business and digital transformation. The role is responsible for managing and maintaining the Data Architecture and the solutions that deliver the platform, along with operational support and troubleshooting. The Senior Data Engineer will also manage and coordinate the Data Engineering team members (internal and external) working on the various project implementations, from a day-to-day delivery perspective (no reporting-line changes).

Experience:
- 7-10 years of experience with digital transformation and data projects.
- Experience in designing, delivering, and managing data infrastructures.
- Proficiency in using cloud services (Azure) for data engineering, storage, and analytics.
- Strong SQL and NoSQL experience.
- Data modelling.
- Hands-on experience developing pipelines and setting up architectures in Azure Fabric.
- Team management experience (internal and external resources).
- Good understanding of data warehousing, data virtualization, and analytics.
- Experience working with data analysts, data scientists, and BI teams to deliver on data requirements.
- Data catalogue experience is a plus.
- ETL pipeline design is a plus.
- Python development skills are a plus.
- Real-time data ingestion (e.g., Kafka).

Licenses or Certifications:
- Beneficial: ITIL, PM, CSM, Six Sigma, Lean.

Knowledge:
- Good understanding of integration, ETL, API, and data sharing concepts.
- Understanding or awareness of visualization tools is a plus.
- Knowledge and understanding of relevant legal and regulatory requirements, such as CFR 21 Part 11, the EU General Data Protection Regulation, the Health Insurance Portability and Accountability Act (HIPAA), and the GxP validation process, would be a plus.

Skills:
- The position requires a pragmatic leader with sound knowledge of data, integration, and analytics.
- Excellent written and verbal communication skills, interpersonal and collaborative skills, and the ability to communicate technical concepts to nontechnical audiences.
- Excellent analytical skills, the ability to manage and contribute to multiple projects under strict timelines, and the ability to work well in a demanding, dynamic environment and meet overall objectives.
- Project management skills: scheduling and resource management are a plus.
- Ability to motivate cross-functional, interdisciplinary teams to achieve tactical and strategic goals.
- Project and team management skills are a plus.
- Strong SAP skills are a plus.
Posted 2 weeks ago
6.0 - 10.0 years
15 - 18 Lacs
Chennai
Work from Office
Role & responsibilities:
- 8-10 years of experience, with a minimum of 5 years working on core data engineering responsibilities on a cloud platform. Project management experience is a big plus.
- Proven track record of implementing data-driven solutions in areas such as plant automation, operational analytics, quality control, and supply chain optimization.
- Expertise in cloud-based data platforms, particularly within the Azure ecosystem (Azure Data Factory, Synapse Analytics, Databricks).
- Familiarity with SAP as a data source.
- Proficiency in programming languages such as SQL, Python, and R for analytics and reporting.
Posted 2 weeks ago
6.0 - 10.0 years
16 - 30 Lacs
Amritsar
Remote
Job Title: Senior Azure Data Engineer
Location: Remote
Experience Required: 5+ years

About the Role:
We are seeking a highly skilled Senior Azure Data Engineer to design and develop robust, scalable, and high-performance data pipelines using Azure technologies. The ideal candidate will have strong experience with modern data platforms and tools, including Azure Data Factory, Synapse, Databricks, and Data Lake, as well as expertise in SQL, Python, and CI/CD workflows.

Key Responsibilities:
- Design and implement end-to-end data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage Gen2.
- Ingest and integrate data from various sources such as SQL Server, APIs, blob storage, and on-premise systems, ensuring security and performance.
- Develop and manage ETL/ELT workflows and orchestrations in a scalable, optimized manner.
- Build and maintain data models, data marts, and data warehouse structures for analytics and reporting.
- Write and optimize complex SQL queries, stored procedures, and Python scripts.
- Ensure data quality, consistency, and integrity through validation frameworks and best practices.
- Support and enhance CI/CD pipelines using Azure DevOps, Git, and ARM/Bicep templates.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver impactful solutions.
- Enforce data governance, security, and compliance policies, including use of Azure Key Vault and access controls.
- Mentor junior data engineers, lead design discussions, and conduct code reviews.
- Monitor and troubleshoot issues related to performance, cost, and scalability across data systems.

Required Skills & Experience:
- 6+ years of experience in data engineering or related fields.
- 3+ years of hands-on experience with Azure cloud services, specifically: Azure Data Factory (ADF), Azure Synapse Analytics (dedicated and serverless SQL pools), Azure Databricks (Spark preferred), Azure Data Lake Storage Gen2 (ADLS), and Azure SQL / Managed Instance / Cosmos DB.
- Strong proficiency in SQL, PySpark, and Python.
- Solid experience with CI/CD tools: Azure DevOps, Git, ARM/Bicep templates.
- Experience with data warehousing, dimensional modeling, and medallion/lakehouse architecture.
- In-depth knowledge of data security best practices, including encryption, identity management, and network configurations in Azure.
- Expertise in performance tuning, data partitioning, and cost optimization.
- Excellent communication, problem-solving, and stakeholder management skills.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining YASH Technologies, a leading technology integrator focused on helping clients enhance competitiveness, optimize costs, and drive business transformation in an increasingly virtual world. As a Microsoft Fabric Professional, you will work with cutting-edge technologies across Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL processes.

Your key responsibilities will include creating pipelines, datasets, dataflows, and integration runtimes, and monitoring pipeline runs in Azure. You will extract, transform, and load data from source systems using Azure Databricks and create SQL scripts for complex queries. Additionally, you will develop Synapse pipelines to migrate data from Gen2 to Azure SQL and work on data migration pipelines to the Azure cloud (Azure SQL). Experience using Azure Data Catalog and with big data batch processing, interactive processing, and real-time processing solutions will be beneficial for this role.

While certifications are considered good to have, YASH Technologies provides an inclusive team environment where you are empowered to create a career path aligned with your aspirations. The workplace culture is grounded in principles like flexible work arrangements, emotional positivity, trust, transparency, open collaboration, and all necessary support for realizing business goals. Join us at YASH Technologies for stable employment and a great atmosphere with an ethical corporate culture.
Posted 2 weeks ago
4.0 - 7.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Seeking a skilled and experienced hands-on technical Production Support Lead with a focus on Azure data engineering and production support. The ideal candidate will work with a technical team building and supporting data solutions on the Azure platform. This is an exciting opportunity to work on cutting-edge projects and be part of a dynamic team that thrives on innovation and collaboration.

Required Skills & Experience:
- 5+ years of experience in data engineering with a strong focus on Azure data services.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Azure Synapse.
- Strong proficiency in Python, SQL, and PySpark.
- Proven experience in production support environments, with the ability to resolve critical issues efficiently.
- Solid understanding of ETL processes, data pipelines, and data management best practices.
- Ability to troubleshoot and optimize existing data pipelines and queries.
- Excellent communication skills, both written and verbal, with the ability to convey complex technical concepts to non-technical stakeholders.
Posted 2 weeks ago
10.0 - 20.0 years
20 - 35 Lacs
Noida
Remote
Position Overview:
The primary focus of this position is to design, develop, and maintain robust data pipelines using Azure Data Factory, and to implement and manage ETL processes that ensure efficient data flow and transformation.

What you'll do as a BI Developer Lead:
- Design, develop, and maintain robust data pipelines using Azure Data Factory.
- Implement and manage ETL processes to ensure efficient data flow and transformation.
- Develop and maintain data models and data warehouses using Azure SQL Database and Azure Synapse Analytics.
- Create and manage Power BI reports and dashboards to provide actionable insights to stakeholders.
- Ensure data quality, integrity, and security across all data systems.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Optimize data storage and retrieval processes for performance and cost efficiency.
- Monitor and troubleshoot data pipelines and workflows to ensure smooth operations.
- Create and maintain tabular models for efficient data analysis and reporting.
- Stay updated with the latest Azure services and best practices to continuously improve data infrastructure.

What will you bring to the team:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Certification as an Azure Data Engineer or related Azure certifications will be an added advantage.
- Experience with machine learning and AI services on Azure will be an added advantage.
- Proven experience designing and maintaining data pipelines using Azure Data Factory.
- Strong proficiency in SQL and experience with Azure SQL Database.
- Hands-on experience with Azure Synapse Analytics and Azure Data Lake Storage.
- Proficiency in creating and managing Power BI reports and dashboards.
- Knowledge of Azure DevOps for CI/CD pipeline implementation.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Knowledge of data governance and compliance standards.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You have an exciting opportunity to join YASH Technologies as a Microsoft Fabric Professional. As part of our team, you will work with cutting-edge technologies to drive business transformation and create real positive changes in an increasingly virtual world.

Your main responsibilities will cover Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL processes. You will create pipelines, datasets, dataflows, and integration runtimes, and monitor pipeline trigger runs. You will also extract, transform, and load data from source systems using Azure Databricks and create SQL scripts for complex queries. Moreover, you will build Synapse pipelines to migrate data from Gen2 to Azure SQL, data migration pipelines to the Azure cloud (Azure SQL), and database migrations from on-prem SQL Server to the Azure dev environment using Azure DMS and the Data Migration Assistant. Experience using Azure Data Catalog and with big data batch processing, interactive processing, and real-time processing solutions is a plus.

As a Microsoft Fabric Professional, you are encouraged to pursue relevant certifications to enhance your skills. At YASH Technologies, we provide a supportive and inclusive team environment where you can create a career that aligns with your goals. Our Hyperlearning workplace is built on flexibility, emotional positivity, trust, transparency, and open collaboration to help you achieve your business goals while maintaining stable employment in a great atmosphere with an ethical corporate culture.
Posted 2 weeks ago
6.0 - 8.0 years
22 - 27 Lacs
Mumbai
Remote
This is a full-time remote role for an Azure Lead Data Engineer with 6 to 8 years of total work experience. The Azure Lead Data Engineer will be responsible for:
• Developing and implementing data solutions using Azure services and Azure Functions (a hedged sketch of one such function follows this listing).
• Using data modeling best practices to develop usable data models.
• Collaborating with various stakeholders, troubleshooting data-related issues, and staying updated on the latest Azure technologies and best practices.
Location
• Mumbai/Pune, India, serving US clients with some reasonable time overlap.
Qualifications
• Experience with Azure services and solutions in the data ecosystem, such as MS Fabric, Azure SQL Database, Azure Data Lake, Azure Data Factory, and Azure Purview
• Skills in data architecture, data modeling, and data integration
• Proficiency in optimizing data management processes and ensuring data security
• Familiarity with troubleshooting and resolving data-related issues
• Strong collaboration and communication skills to work with various stakeholders
• Ability to mentor and work with junior team members
Experience
• 6 to 8 years, progressing from Data Engineer to Senior Data Engineer.
What you get:
• Opportunity to contribute to building data solutions and frameworks.
• Develop and enhance skills in AI while building end-to-end data solutions.
• Organizational support to acquire relevant certifications and training.
• Medical benefits for you and your family (for full-time roles only, NOT contractors).
• Eligibility for incentives over and above your annual compensation and benefits.
Please read this before hitting the "Apply" button:
• If you are a highly experienced candidate with total work experience beyond the specified range, there may be a mismatch in expectations; please take this into account and apply accordingly.
• Candidates who have been affected by global layoffs and are currently unemployed or on sabbatical will be given preference. Please note that these are merely preferences and have no bearing whatsoever on the applications of other qualified candidates.
• Likewise, if you are serving a notice period and proudly displaying your LWD (Last Working Date), please do NOT apply: you have probably committed to another offer and are fishing for more, and we have zero tolerance for opportunists.
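As a small illustration of the "data solutions using Azure Functions" bullet above, here is a hedged sketch of an HTTP-triggered function in the Azure Functions Python v2 programming model that runs a basic data-quality check on a posted batch. The route, payload shape, and the check itself are assumptions made for the example.

```python
# Hypothetical HTTP-triggered Azure Function (Python v2 model) performing a
# simple data-quality check. In a real solution this would query Azure SQL
# or the lake; here the payload is validated directly to stay self-contained.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="dq-check", auth_level=func.AuthLevel.FUNCTION)
def dq_check(req: func.HttpRequest) -> func.HttpResponse:
    rows = req.get_json()  # expects a JSON array of row objects (assumption)
    nulls = sum(1 for r in rows if r.get("customer_id") is None)
    body = {"rows": len(rows), "null_customer_ids": nulls, "passed": nulls == 0}
    return func.HttpResponse(json.dumps(body), mimetype="application/json")
```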
Posted 2 weeks ago
10.0 - 12.0 years
0 - 1 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Experience Required: 10 to 12 years (3–4 years in Cloudera + cloud migration)
Work location: Hyderabad, Bangalore, Chennai, Noida, Pune
Work type: Hybrid model
Work time: overlap with Canada EST hours is a must
Job Summary: We are seeking a skilled Cloudera Migration Specialist to lead the migration of our on-premises Cloudera cluster to Microsoft Azure. The ideal candidate will have 3–4 years of hands-on experience with Cloudera platform administration, optimization, and migration, along with a strong understanding of Azure cloud services and data engineering best practices.
Key Responsibilities:
• Lead and execute the migration of Cloudera workloads (HDFS, Hive, Spark, Impala, HBase, etc.) from on-premises infrastructure to Azure (a minimal sketch of one table's move follows this listing).
• Assess the existing Cloudera cluster, identify dependencies, and prepare a detailed migration roadmap.
• Develop and implement data migration scripts, workflows, and cloud-native configurations.
• Design and deploy equivalent services on Azure using Azure HDInsight, Azure Data Lake, Azure Synapse, or other relevant services.
• Ensure data integrity, performance tuning, and post-migration validation.
• Collaborate with infrastructure, security, and DevOps teams to ensure compliance and automation.
• Prepare and maintain documentation of the migration plan, architecture, and troubleshooting playbooks.
• Provide knowledge transfer and training to internal teams post-migration.
Required Skills & Experience:
• 3–4 years of hands-on experience with Cloudera (CDH/CDP) ecosystem components (e.g., HDFS, YARN, Hive, Spark, Impala, HBase).
• Proven experience in Cloudera cluster migrations, preferably to cloud platforms like Azure.
• Solid understanding of cloud-native equivalents and data architectures on Azure.
• Experience with Azure services such as HDInsight, Data Lake Storage, Synapse Analytics, and Blob Storage.
• Proficiency in Linux system administration, shell scripting, and automation tools.
• Strong problem-solving and troubleshooting abilities in distributed data environments.
• Familiarity with security controls, Kerberos, Ranger, LDAP integration, and data governance.
Preferred Qualifications:
• Cloudera Certified Administrator / Developer.
• Experience with Azure DevOps, Terraform, or Ansible for infrastructure provisioning.
• Knowledge of disaster recovery planning and HA architectures on Azure.
• Familiarity with performance tuning in cloud vs. on-prem Hadoop environments.
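To make the workload-migration bullet concrete, here is a minimal PySpark sketch of moving a single Hive table from an on-prem CDH/CDP cluster into ADLS Gen2 with a row-count validation. The database, table, and storage names are hypothetical; a real migration would add schema checks, checksums, and an incremental cutover plan.

```python
# Hedged sketch: read one table from the on-prem Hive metastore, land it as
# partitioned parquet in ADLS Gen2, and validate row counts before cutover.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cloudera-to-adls")
         .enableHiveSupport()  # use the existing Hive metastore on CDH/CDP
         .getOrCreate())

source = spark.read.table("sales_db.transactions")
target_path = "abfss://landing@examplestorage.dfs.core.windows.net/sales_db/transactions/"

# Partitioned parquet lets downstream HDInsight/Synapse readers prune scans.
source.write.mode("overwrite").partitionBy("txn_date").parquet(target_path)

# Post-migration validation: row counts must match before cutover.
migrated = spark.read.parquet(target_path)
assert source.count() == migrated.count(), "row-count mismatch, do not cut over"
```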
Posted 2 weeks ago
10.0 - 15.0 years
20 - 35 Lacs
Noida, Pune, Gurugram
Hybrid
Role: Power BI Architect
Experience: 10+ years
Must-have Skills & Experience:
• 5+ years of experience in Power BI, with proven expertise as a Power BI Architect.
• Expert in Power BI Desktop, Power Query, DAX, and Power BI Service.
• Strong understanding of data warehousing, ETL processes, and relational databases (SQL Server, Azure SQL, etc.).
• Experience with cloud platforms like Azure Synapse, Azure Data Factory, or similar.
• Solid knowledge of data governance, security, and compliance best practices.
• Excellent problem-solving, communication, and leadership skills.
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• Exposure to the Finance domain.
Posted 2 weeks ago
3.0 - 8.0 years
6 - 15 Lacs
Ahmedabad
Work from Office
Job Description: As an ETL Developer, you will be responsible for designing, building, and maintaining ETL pipelines using the MSBI stack, Azure Data Factory (ADF), and Fabric. You will work closely with data engineers, analysts, and other stakeholders to ensure data is accessible, reliable, and processed efficiently.
Key Responsibilities:
• Design, develop, and deploy ETL pipelines using ADF and Fabric.
• Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes.
• Optimize data pipelines for performance, scalability, and robustness.
• Integrate data from various sources, including S3, relational databases, and APIs.
• Implement data validation and error handling mechanisms to ensure data quality (a minimal sketch of such a step follows this listing).
• Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability.
• Maintain and update existing data pipelines as data sources and requirements evolve.
• Document ETL processes, data models, and pipeline configurations.
Qualifications:
• Experience: 3+ years of experience in ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric.
• Technical Skills: Strong expertise in ADF, the MSBI stack, SQL, and Power BI. Proficiency in programming languages such as Python or Scala. Hands-on experience with ADF, Fabric, Power BI, and MSBI. Solid understanding of data warehousing concepts, data modeling, and ETL best practices. Familiarity with orchestration tools like Apache Airflow is a plus.
• Data Integration: Experience with integrating data from diverse sources, including relational databases, APIs, and flat files.
• Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex ETL issues.
• Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams.
• Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
Nice to Have:
• Experience with data lakes and big data processing.
• Knowledge of data governance and security practices in a cloud environment.
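The data-validation bullet above usually boils down to fail-fast structural checks plus row-level quarantining. Below is a minimal pandas sketch of that mechanism; the required columns, validation rules, and quarantine path are assumptions for illustration.

```python
# Minimal sketch of a validation step inside a pipeline activity: stop on
# structural problems, quarantine bad rows, and pass clean rows onward.
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl.validate")

REQUIRED_COLUMNS = {"order_id", "customer_id", "amount"}  # assumed contract

def validate(df: pd.DataFrame) -> pd.DataFrame:
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        # Structural failure: fail the pipeline rather than load partial data.
        raise ValueError(f"source is missing required columns: {missing}")

    bad = df[df["order_id"].isna() | (df["amount"] < 0)]
    if not bad.empty:
        log.warning("quarantining %d invalid rows", len(bad))
        bad.to_csv("quarantine/orders_rejected.csv", index=False)  # assumed landing spot

    return df.drop(bad.index)

# Usage: clean = validate(pd.read_parquet("staging/orders.parquet"))
```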
Posted 2 weeks ago