5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will design, develop, and manage end-to-end data pipelines on Azure Cloud, with a primary focus on performance, scalability, and delivering business value through efficient data solutions. You will collaborate with various teams to define data requirements and implement ingestion, transformation, and modeling pipelines that support structured and unstructured data. You will also work with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting.

Your role will involve optimizing data performance and cost through efficient architecture and coding practices, and ensuring data security, privacy, and compliance with organizational policies. Monitoring, troubleshooting, and improving data workflows for reliability and performance will also be part of your responsibilities.

To excel in this role, you should have 5 to 7 years of experience as a Data Engineer, with at least 2 years on the Azure Data Stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is crucial. Strong skills in data modeling, ETL/ELT design, and performance tuning are required, along with proficiency in SQL and Python/PySpark scripting. Experience with CI/CD pipelines and DevOps practices for data solutions, an understanding of data governance, security, and compliance frameworks, and excellent communication, problem-solving, and stakeholder management skills are essential. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

Microsoft Azure Data Engineer Certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hubs), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous.

Join us to work with the latest in Microsoft's modern data stack, Microsoft Fabric; collaborate with a team of passionate data professionals; work on enterprise-grade, large-scale data projects; experience a fast-paced, learning-focused work environment; and have immediate visibility and impact in key business decisions.
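For context on the kind of ingestion work this posting describes, here is a minimal PySpark sketch of landing raw files into a Lakehouse bronze Delta table; the paths and table name are hypothetical, and the same pattern applies whether the notebook runs in Microsoft Fabric or Databricks:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

# Read raw CSV drops from the lake; schema inference is acceptable for bronze,
# where the goal is to capture the data as-is with audit columns.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("Files/landing/sales/*.csv")  # hypothetical landing path
)

# Add lineage/audit columns before persisting.
bronze = (
    raw
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
)

# Append to a bronze Delta table; downstream silver jobs handle cleansing.
bronze.write.format("delta").mode("append").saveAsTable("bronze_sales")
```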
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
delhi
On-site
You will lead and mentor a team of data engineers to ensure high-quality delivery across projects. Your role will involve designing, building, and optimizing large-scale data pipelines and integration workflows using Azure Data Factory (ADF) and Synapse Analytics, and architecting scalable data solutions on Azure, leveraging tools such as Databricks and Microsoft Fabric. Writing efficient, maintainable PySpark and SQL code for data transformations will be a key part of your responsibilities.

Collaboration with data architects, analysts, and business stakeholders to define data strategies and requirements is crucial. You will also implement and promote Data Mesh principles within the organization, and provide architectural guidance and solutions for new and existing data projects on Azure. Ensuring that data quality, governance, and security best practices are followed, and staying current with evolving Azure services and data technologies, are essential aspects of the role.

In terms of required skills and experience, you should have at least 6 years of professional experience in data engineering and solution architecture. Expertise in Azure Data Factory (ADF) and Azure Synapse Analytics is necessary, along with strong hands-on experience in Databricks, PySpark, and advanced SQL. A good understanding of Microsoft Fabric and its use cases, and deep knowledge of Azure cloud services for data storage, processing, and integration, will be beneficial. Familiarity with Data Mesh architecture and distributed data product ownership is desirable, as are strong problem-solving, debugging, communication, and stakeholder management skills.

It would be advantageous to have experience with CI/CD pipelines for data solutions, knowledge of data security and compliance practices on Azure, and a certification in Azure Data Engineering or Solution Architecture.
Posted 1 day ago
15.0 - 19.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for a highly skilled and experienced Data Architect to join our team. With at least 15 years of experience in data engineering and analytics, the ideal candidate will have a proven track record of designing and implementing complex data solutions. As a senior principal data architect, you will play a key role in designing, creating, deploying, and managing Blackbaud's data architecture. This position holds significant technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. You will act as an evangelist for proper data strategy across teams within Blackbaud and provide technical guidance, particularly in the area of data, for other projects.

Responsibilities:
- Develop and direct the strategy for all aspects of Blackbaud's Data and Analytics platforms, products, and services.
- Set, communicate, and facilitate technical direction for the AI Center of Excellence and beyond, collaboratively.
- Design and develop innovative products, services, or technological advancements in the Data Intelligence space to drive business expansion.
- Collaborate with product management to create technical solutions that address customer business challenges.
- Take ownership of technical data governance practices to ensure data sovereignty, privacy, security, and regulatory compliance.
- Challenge existing practices and drive innovation in the data space.
- Create a data access strategy to securely democratize data and support research, modeling, machine learning, and artificial intelligence initiatives.
- Contribute to defining tools and pipeline patterns used by engineers and data engineers for data transformation and analytics support.
- Work within a cross-functional team to translate business requirements into data architecture solutions.
- Ensure that data solutions prioritize performance, scalability, and reliability.
- Mentor junior data architects and team members.
- Stay updated on technology trends such as distributed computing, big data concepts, and architecture.
- Advocate internally for the transformative power of data at Blackbaud.

Required Qualifications:
- 15+ years of experience in data and advanced analytics.
- Minimum of 8 years of experience with data technologies in Azure/AWS.
- Proficiency in SQL and Python.
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
- Familiarity with Databricks and Microsoft Fabric.
- Strong grasp of data modeling, data warehousing, data lakes, data mesh, and data products.
- Experience with machine learning.
- Excellent communication and leadership abilities.

Preferred Qualifications:
- Experience with .Net/Java and Microservice Architecture.
Posted 1 day ago
4.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Power BI + Microsoft Fabric Lead with over 10 years of experience, you will lead the strategy and architecture for BI initiatives. Your responsibilities will include designing and delivering end-to-end Power BI and Microsoft Fabric solutions, collaborating with stakeholders to define data and reporting goals, and driving the adoption of best practices and performance optimization. Your expertise in Power BI, including DAX, Power Query, and advanced visualizations, will be essential to the success of high-impact BI initiatives.

As a Power BI + Microsoft Fabric Developer with 4+ years of experience, you will develop dashboards and interactive reports using Power BI, build robust data models, and implement Microsoft Fabric components such as Lakehouse, OneLake, and Pipelines. Working closely with cross-functional teams, you will gather and refine requirements to ensure high performance and data accuracy across reporting solutions. Your hands-on experience with Microsoft Fabric tools such as Data Factory, OneLake, Lakehouse, and Pipelines will be crucial for delivering effective data solutions.

Key Skills Required:
- Strong expertise in Power BI (DAX, Power Query, advanced visualizations)
- Hands-on experience with Microsoft Fabric (Data Factory, OneLake, Lakehouse, Pipelines)
- Solid understanding of data modeling, ETL, and performance tuning
- Ability to collaborate effectively with business and technical teams

Joining our team will give you the opportunity to work with cutting-edge Microsoft technologies, lead high-impact BI initiatives, and thrive in a collaborative, innovation-driven environment. We offer a competitive salary and benefits package to reward your expertise and contributions. If you are passionate about leveraging Power BI and Microsoft Fabric to drive data-driven insights and solutions, we invite you to apply for this full-time position.

Application Questions:
- What are your current and expected CTC?
- What is your notice period? If you are serving your notice period, what is your Last Working Day (LWD)?

Experience Required:
- Power BI: 4 years (required)
- Microsoft Fabric: 4 years (required)

Work Location: In person
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
The ideal candidate for this position should have 8-12 years of experience and a strong understanding of, and hands-on experience with, Microsoft Fabric. You will design and implement end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes, and develop scalable, efficient data architectures that support large-scale data processing and analytics workloads. Ensuring high performance, security, and compliance within Azure data solutions will be a key aspect of this role.

You should know techniques such as lakehouse and warehouse architectures, with experience implementing them, and be able to evaluate and select appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Unity Catalog, and Azure Data Factory. Deep knowledge of and hands-on experience with these Azure Data Services are essential.

You will collaborate closely with business and technical teams to understand and translate data needs into robust, scalable data architecture solutions, and should have experience with data governance, data privacy, and compliance requirements. Excellent communication and interpersonal skills are necessary for effective collaboration with cross-functional teams.

In this role, you will provide expertise and leadership to the development team implementing data engineering solutions, and work with data scientists, analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements. Optimizing cloud-based data infrastructure for performance, cost-effectiveness, and scalability is another key responsibility.

Experience in programming languages such as SQL, Python, and Scala is required. Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms is preferred, and familiarity with Azure DevOps and CI/CD pipeline development is beneficial. An in-depth understanding of database structure principles and distributed processing of big data batch or streaming pipelines is essential, along with knowledge of data visualization tools such as Power BI and Tableau, data modeling, and strong analytics skills. The candidate should be able to convert OLTP data structures into a Star Schema (see the sketch below), and DBT and data modeling experience are ideal. A problem-solving attitude, self-motivation, attention to detail, and effective task prioritization are essential qualities for this role.

At Hitachi, attitude and aptitude are highly valued, as collaboration is key. While not all skills are required, experience with Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala, data modeling, Power BI, and database migration is desirable. Designing conceptual, logical, and physical data models using tools like ER Studio and Erwin is a plus.
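Since the role calls for converting OLTP structures into a Star Schema, here is an illustrative PySpark sketch of that pattern; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("star_schema").getOrCreate()

orders = spark.table("oltp.orders")        # normalized OLTP tables (hypothetical)
customers = spark.table("oltp.customers")

# Dimension: deduplicate the natural key and assign a surrogate key.
dim_customer = (
    customers
    .select("customer_id", "name", "segment", "country")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.row_number().over(Window.orderBy("customer_id")))
)

# Fact: grain = one row per order line, keeping measures and dimension keys only.
fact_orders = (
    orders.join(dim_customer, "customer_id")
    .select(
        "customer_sk",
        F.to_date("order_ts").alias("order_date"),
        "quantity",
        (F.col("quantity") * F.col("unit_price")).alias("gross_amount"),
    )
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dw.dim_customer")
fact_orders.write.format("delta").mode("overwrite").saveAsTable("dw.fact_orders")
```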
Posted 1 day ago
10.0 - 15.0 years
0 Lacs
noida, uttar pradesh
On-site
As an experienced Power BI Architect with extensive knowledge of Microsoft Fabric, you will lead the design, development, and implementation of innovative Business Intelligence (BI) solutions. Your expertise in enterprise data architecture, analytics platforms, and data integration strategies will be crucial in optimizing data pipelines and driving performance and scalability through the effective use of Power BI and Microsoft Fabric.

Your key responsibilities will include developing comprehensive Power BI solutions, such as dashboards, reports, and data models, to meet business needs. You will lead the entire lifecycle of BI projects, from requirement gathering to deployment, ensuring optimal performance. Utilizing Microsoft Fabric, you will streamline data pipelines by integrating data engineering, data storage, and data processing capabilities; integrating Power BI with Microsoft Fabric will be essential for improved performance, scalability, and efficiency.

Your role will also involve working with Azure Data Services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) to support the BI architecture, and establishing best practices in Power BI development, including DAX functions, data transformations, and data modeling. Additionally, you will lead and mentor a team of Power BI developers, ensuring high-quality output and adherence to best practices, and oversee task prioritization, resource allocation, and project timelines to ensure timely, successful delivery of BI solutions. Collaboration with data engineers and stakeholders to translate business requirements into functional, scalable BI solutions, and driving BI initiatives aligned with business goals and objectives, will also be key aspects of your role.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field, 10-15 years of experience in BI development with at least 3 years in a leadership role, and proven experience with Power BI, Microsoft Fabric, and Azure Data Services.
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: Cloud Data Architect

Job Overview
We are seeking a highly skilled and motivated Data Architect with 5-7 years of experience in data architecture, modeling, and enterprise data solutions. The ideal candidate will be familiar with Microsoft Fabric or equivalent modern data platforms and possess hands-on expertise in designing scalable data models and systems. Prior consulting experience and a strong understanding of data integration, governance, and analytics design are highly desirable.

Key Responsibilities
- Design, implement, and maintain enterprise data architectures for analytics and operational use.
- Develop and optimize conceptual, logical, and physical data models that support business reporting and analytics needs.
- Lead end-to-end data solution designs using Microsoft Fabric, Azure Synapse, or similar platforms.
- Translate business and functional requirements into technical data architecture specifications.
- Collaborate with stakeholders, data engineers, and business teams to design scalable, secure, and high-performance data environments.
- Provide data integration strategies for structured data across multiple systems.
- Ensure adherence to data governance, data quality, and security standards in all architecture designs.
- Participate in architecture reviews, technical design sessions, and consulting engagements with internal or external clients.
- Support modernization efforts by guiding migration to cloud-native data platforms.

Required Qualifications
- 5-7 years in data architecture, data engineering, or analytics solution delivery.
- Hands-on experience with Microsoft Fabric, Azure Data Services (Data Lake, Synapse, Data Factory), or equivalent platforms such as Snowflake or Databricks.
- Strong data modeling (dimensional and normalized), metadata management, and data lineage understanding.
- Proficiency in Python or R, SQL, DAX, Power BI, and data pipeline tools.
- Solid understanding of modern data warehouse/lakehouse architectures and enterprise data integration patterns.
- Ability to manage stakeholder expectations, translate business needs into technical requirements, and deliver value through data solutions.
- Strong communication and documentation skills.
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.

Preferred Skills
- Experience with tools such as Microsoft Fabric, Azure Data Factory, Databricks, or similar platforms.
- Knowledge of Python or R for data manipulation or ML integration.
- Familiarity with DevOps or CI/CD for BI deployments.
- Exposure to data governance tools (e.g., Purview) and practices.
- Experience working with Agile/Scrum methodologies.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
As a Senior Data Engineer, you will play a crucial role in supporting the Global BI team for Isolation Valves as it transitions to Microsoft Fabric. Your primary responsibilities will involve data gathering, modeling, integration, and database design to facilitate efficient data management. You will develop and optimize scalable data models to serve analytical and reporting needs, utilizing Microsoft Fabric and Azure technologies for high-performance data processing.

You will collaborate with cross-functional teams, including data analysts, data scientists, and business collaborators, to understand their data requirements and deliver effective solutions, and leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Your expertise in data modeling, particularly in data warehouse and lakehouse design, will be essential in designing and implementing data models, warehouses, and databases using Microsoft Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.

Furthermore, you will develop ETL processes using tools such as SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar platforms to prepare data for analysis and reporting, and implement data quality checks and governance practices to ensure data accuracy, consistency, and security. You will supervise and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads.

Your role will require strong proficiency in Business Intelligence (BI) tools such as Power BI and Tableau, along with experience in data integration and ETL tools like Azure Data Factory. A deep understanding of Microsoft Fabric or similar data platforms, as well as comprehensive knowledge of the Azure cloud platform, particularly data warehousing and storage solutions, is necessary. Effective communication skills to convey technical concepts to both technical and non-technical stakeholders, the ability to work both independently and within a team, and the willingness to stay abreast of new technologies and business areas are also vital for success in this role.

To excel in this position, you should have 5-7 years of experience in data warehousing with on-premises or cloud technologies, strong analytical abilities to tackle complex data challenges, and proficiency in database management, SQL query optimization, and data mapping. A solid grasp of Excel, including formulas, filters, macros, pivots, and related operations, is essential, as is proficiency in Python and SQL/advanced SQL for data transformations and debugging, along with a willingness to work flexible hours based on project requirements.

Hands-on experience with Fabric components such as Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and semantic models is highly desired, as are advanced SQL skills and experience with complex queries, data modeling, and performance tuning. Prior exposure to implementing Medallion Architecture for data processing (see the sketch below), experience in a manufacturing environment, and familiarity with Oracle, SAP, or other ERP systems will be advantageous.
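As a point of reference for the Medallion Architecture exposure mentioned above, below is a minimal PySpark sketch of a bronze-to-silver refinement step; the table names and quality rules are illustrative only:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("silver_refine").getOrCreate()

bronze = spark.table("bronze_sales")  # hypothetical bronze table

# Silver layer: enforce types, drop records failing basic quality rules,
# and keep only one row per business key.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("quantity", F.col("quantity").cast("int"))
    .filter(F.col("order_id").isNotNull() & (F.col("quantity") > 0))
    .dropDuplicates(["order_id"])
)

# Overwrite keeps the silver table idempotent for full-refresh runs;
# incremental loads would use a MERGE instead.
silver.write.format("delta").mode("overwrite").saveAsTable("silver_sales")
```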
A Bachelor's degree or equivalent experience in a science-related field, good interpersonal skills in English (spoken and written), and Agile certification will set you apart as a strong candidate for this role.

At Emerson, we are committed to fostering a workplace where every employee is valued, respected, and empowered to grow. Our culture encourages innovation, collaboration, and diverse perspectives, recognizing that great ideas come from great teams. We invest in your ongoing career development, offering mentorship, training, and leadership opportunities to ensure your success and lasting impact. Employee wellbeing is a priority for us, and we provide competitive benefits plans, medical insurance options, an Employee Assistance Program, flexible time off, and other supportive resources to help you thrive.

Emerson is a global leader in automation technology and software, dedicated to helping customers in critical industries operate more sustainably and efficiently. Our commitment to our people, communities, and the planet drives us to create positive impacts through innovation, collaboration, and diversity. If you seek an environment where you can contribute to meaningful work, develop your skills, and make a difference, join us at Emerson. Let's go together towards a brighter future.
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Engineer - MS Fabric at our Chennai (Excelencia office) location, you will leverage your 4+ years of experience to design, build, and optimize data pipelines using Microsoft Fabric, Azure Data Factory, and Synapse Analytics. Your primary responsibilities will include developing and maintaining Lakehouses, Notebooks, and data flows within the Microsoft Fabric ecosystem; ensuring efficient data integration, quality, and governance across OneLake and other Fabric components; and implementing real-time analytics pipelines for high-throughput data processing (see the streaming sketch below).

To excel in this role, you must be proficient in Microsoft Fabric, Azure Data Factory (ADF), Azure Synapse Analytics, Delta Lake, OneLake, Lakehouses, Python, PySpark, Spark SQL, T-SQL, and ETL/ELT development. Your work will involve collaborating with cross-functional teams to define and deliver end-to-end data engineering solutions, participating in Agile ceremonies, and using tools like JIRA for project tracking and delivery. Additionally, you will perform complex data transformations across various data formats and handle large-scale data warehousing and analytics workloads.

Preferred skills for this position include a strong understanding of distributed computing and cloud-native data architecture, experience with DataOps practices and data quality frameworks, familiarity with CI/CD for data pipelines, and proficiency with monitoring tools and job-scheduling frameworks to ensure the reliability and performance of data systems. Strong problem-solving and analytical thinking, excellent communication and collaboration skills, and a self-motivated, proactive approach with a continuous-learning mindset are essential soft skills for success in this role.
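For the real-time analytics pipelines this posting mentions, here is a minimal Spark Structured Streaming sketch; the built-in rate source stands in for a real event feed (e.g., Event Hubs or Kafka), and the sink table and checkpoint path are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("realtime_demo").getOrCreate()

# The rate source emits (timestamp, value) rows; in production this would
# be an Event Hubs or Kafka stream with a defined schema.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Tumbling one-minute aggregation, with a watermark to bound late data.
counts = (
    events
    .withWatermark("timestamp", "2 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Stream the aggregates into a Delta table for downstream reporting.
query = (
    counts.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "Files/checkpoints/event_counts")  # hypothetical
    .toTable("rt_event_counts")
)
query.awaitTermination()
```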
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
We are searching for a Senior Data Engineer with significant experience developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will have a solid foundation in Spark-based development, advanced SQL skills, and the ability to work autonomously, collaborate within a team, or guide other developers when necessary, along with excellent communication abilities. Expertise with Azure Data Services (such as Azure Data Factory and Azure Synapse), familiarity with creating DAGs, implementing activities, and running Apache Airflow (see the sketch after this posting), and knowledge of DevOps practices, CI/CD pipelines, and Azure DevOps are also expected.

Key Responsibilities:
- Design, develop, and manage ETL notebook orchestration pipelines using PySpark and Microsoft Fabric.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and provide effective data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Develop, maintain, and uphold data engineering best practices, coding standards, and documentation.
- Conduct code reviews and offer constructive feedback to enhance team productivity and code quality.
- Support data-driven decision-making by ensuring data integrity, availability, and consistency across platforms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 10+ years of experience in data engineering, focusing on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is advantageous.
- Strong understanding of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is a plus.
- Experience with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Experience creating DAGs, implementing activities, and running Apache Airflow.
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.

Aspire Systems is a global technology services firm that acts as a trusted technology partner for over 275 clients worldwide. Aspire collaborates with leading enterprises in Banking, Insurance, Retail, and ISVs to help them leverage technology for business transformation in the current digital era. The company's dedication to "Attention. Always." reflects its commitment to providing care and attention to both its customers and employees. With over 4,900 employees globally and a CMMI Level 3 certification, Aspire Systems operates in North America, LATAM, Europe, the Middle East, and Asia Pacific.
Aspire Systems has been consistently recognized as one of the Top 100 Best Companies to Work For by the Great Place to Work Institute, for the 12th consecutive time. For more information about Aspire Systems, please visit https://www.aspiresys.com/.
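As referenced in the posting above, a minimal Apache Airflow sketch of defining a DAG and its activities might look like the following; the task logic, IDs, and schedule are purely illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_sqlserver():
    # Placeholder: pull incremental rows from the legacy SQL Server source.
    print("extracting...")


def load_to_lakehouse():
    # Placeholder: write the extracted batch into the lakehouse.
    print("loading...")


with DAG(
    dag_id="legacy_sqlserver_to_fabric",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_sqlserver)
    load = PythonOperator(task_id="load", python_callable=load_to_lakehouse)

    extract >> load  # simple linear dependency between the two activities
```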
Posted 3 days ago
10.0 - 17.0 years
12 - 17 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
POSITION OVERVIEW: We are seeking an experienced and highly skilled Data Engineer with deep expertise in Microsoft Fabric, MS-SQL, data warehouse architecture design, and SAP data integration. The ideal candidate will be responsible for designing, building, and optimizing data pipelines and architectures to support our enterprise data strategy, working closely with cross-functional teams to ingest, transform, and make data (from SAP and other systems) available in our Microsoft Azure environment, enabling robust analytics and business intelligence.

KEY ROLES & RESPONSIBILITIES:
- Spearhead the design, development, deployment, testing, and management of strategic data architecture, leveraging cutting-edge technology stacks in cloud, on-prem, and hybrid environments.
- Design and implement an end-to-end data architecture within Microsoft Fabric / SQL, including Azure Synapse Analytics (incl. data warehousing); this would also encompass a Data Mesh architecture.
- Develop and manage robust data pipelines to extract, load, and transform data from SAP systems (e.g., ECC, S/4HANA, BW).
- Perform data modeling and schema design for enterprise data warehouses in Microsoft Fabric.
- Ensure data quality, security, and compliance standards are met throughout the data lifecycle.
- Enforce data security measures, strategies, protocols, and technologies, ensuring adherence to security and compliance requirements.
- Collaborate with BI, analytics, and business teams to understand data requirements and deliver trusted datasets.
- Monitor and optimize the performance of data processes and infrastructure.
- Document technical solutions and develop reusable frameworks and tools for data ingestion and transformation.
- Establish and maintain robust knowledge management structures, encompassing data architecture, data policies, platform usage policies, development rules, and more, ensuring adherence to best practices, regulatory compliance, and optimization across all data processes.
- Implement microservices, APIs, and event-driven architecture to enable agility and scalability.
- Create and maintain architectural documentation, diagrams, policies, standards, conventions, rules, and frameworks for effective knowledge sharing and handover.
- Monitor and optimize the performance, scalability, and reliability of the data architecture and pipelines.
- Track data consumption and usage patterns, through automated alert-driven tracking, to ensure that infrastructure investment is effectively leveraged.

KEY COMPETENCIES:
- Microsoft Certified: Fabric Analytics Engineer Associate, or an equivalent certification for MS SQL.
- Prior experience working in cloud environments (Azure preferred).
- Understanding of SAP data structures and SAP integration tools such as SAP Data Services, SAP Landscape Transformation (SLT), or RFC/BAPI connectors.
- Experience with DevOps practices and version control (e.g., Git).
- Deep understanding of SAP architecture, data models, security principles, and platform best practices.
- Strong analytical skills with the ability to translate business needs into technical solutions.
- Experience with project coordination, vendor management, and Agile or hybrid project delivery methodologies.
- Excellent communication, stakeholder management, and documentation skills.
- Strong understanding of data warehouse architecture and dimensional modeling.
- Excellent problem-solving and communication skills.

QUALIFICATIONS / EXPERIENCE / SKILLS

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Certifications such as SQL, Administrator, or Advanced Administrator are preferred.
- Expertise in data transformation using SQL, PySpark, and/or other ETL tools.
- Strong knowledge of data governance, security, and lineage in enterprise environments.
- Advanced knowledge of SQL, database procedures/packages, and dimensional modeling.
- Proficiency in Python and/or Data Analysis Expressions (DAX) (preferred, not mandatory).
- Familiarity with Power BI for downstream reporting (preferred, not mandatory).

Experience:
- 10 years of experience as a Data Engineer or in a similar role.

Skills:
- Hands-on experience with Microsoft SQL (MS-SQL) and Microsoft Fabric, including Synapse (data warehousing, notebooks, Spark).
- Experience integrating and extracting data from SAP systems, such as:
  - SAP ECC or S/4HANA
  - SAP BW
  - SAP Core Data Services (CDS) Views or OData Services (see the OData sketch after this posting)
- Knowledge of data protection laws across countries (preferred, not mandatory).
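For the SAP OData extraction mentioned in the skills list, a minimal sketch might look like this; the Gateway service URL, entity set, and credentials are hypothetical, and the d/results payload shape assumes an OData v2 service as commonly exposed by SAP Gateway:

```python
import requests

BASE = "https://sap-gateway.example.com/sap/opu/odata/sap/ZSALES_SRV"  # hypothetical
ENTITY = "OrderSet"

def fetch_orders(user: str, password: str, page_size: int = 1000):
    """Page through an SAP OData v2 entity set and yield raw records."""
    skip = 0
    while True:
        resp = requests.get(
            f"{BASE}/{ENTITY}",
            params={"$format": "json", "$top": page_size, "$skip": skip},
            auth=(user, password),
            timeout=60,
        )
        resp.raise_for_status()
        batch = resp.json()["d"]["results"]  # OData v2 payload shape
        if not batch:
            return
        yield from batch
        skip += page_size

# Usage: rows = list(fetch_orders("svc_user", "secret"))
```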
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Data Engineer with 5+ years of experience, you will design and develop scalable, reusable, and efficient data pipelines using modern data engineering platforms such as Microsoft Fabric, PySpark, and Data Lakehouse architectures. Your role will involve integrating data from diverse sources, transforming it into actionable insights, and ensuring high standards of data governance and quality. You will play a key role in establishing and enforcing data governance policies, monitoring pipeline performance, and optimizing for efficiency.

Key Responsibilities:
- Design and build robust data pipelines using Microsoft Fabric components, including Pipelines, Notebooks (PySpark), Dataflows, and Lakehouse architecture.
- Ingest and transform data from cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (e.g., Salesforce, Workday), and REST/OpenAPI-based APIs.
- Develop and maintain semantic models and define standardized KPIs for reporting and analytics in Power BI or equivalent BI tools.
- Implement and manage Delta Tables across bronze/silver/gold layers using the Lakehouse medallion architecture within OneLake or equivalent environments.
- Apply metadata-driven design principles to ensure pipeline parameterization, reusability, and scalability.
- Monitor, debug, and optimize pipeline performance; implement logging, alerting, and observability mechanisms.
- Establish and enforce data governance policies, including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trail mechanisms.
- Perform data quality checks, including null detection, duplicate handling, schema drift management, outlier identification, and Slowly Changing Dimension (SCD) type management (see the sketch after this posting).

Required Skills & Qualifications:
- 5+ years of hands-on experience in data engineering or related fields.
- Solid understanding of data lake/lakehouse architectures, preferably with Microsoft Fabric or equivalent tools (e.g., Databricks, Snowflake, Azure Synapse).
- Strong experience with PySpark and SQL, and with dataflows and notebooks.
- Exposure to BI tools such as Power BI, Tableau, or equivalent for data consumption layers.
- Experience with Delta Lake or similar transactional storage layers.
- Familiarity with data ingestion from SaaS applications, APIs, and enterprise databases.
- Understanding of data governance, lineage, and RBAC principles.
- Strong analytical, problem-solving, and communication skills.

Nice to Have:
- Prior experience with Microsoft Fabric and the OneLake platform.
- Knowledge of CI/CD practices in data engineering.
- Experience implementing monitoring/alerting tools for data pipelines.

Join us for the opportunity to work on cutting-edge data engineering solutions in a fast-paced, collaborative environment focused on innovation and learning, with exposure to end-to-end data product development and deployment cycles.
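Since this posting calls out Slowly Changing Dimension management on Delta Tables, here is a condensed SCD Type 2 sketch using the Delta Lake merge API; the table and column names are illustrative, and the source snapshot is assumed to share the dimension's column layout:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2_demo").getOrCreate()

updates = spark.table("silver_customers")             # latest source snapshot
dim = DeltaTable.forName(spark, "gold.dim_customer")  # hypothetical SCD2 dimension

# Step 1: close out current rows whose tracked attributes changed.
(
    dim.alias("d")
    .merge(
        updates.alias("u"),
        "d.customer_id = u.customer_id AND d.is_current = true",
    )
    .whenMatchedUpdate(
        condition="d.segment <> u.segment OR d.country <> u.country",
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .execute()
)

# Step 2: insert new versions (and brand-new keys) as current rows.
# After step 1, changed keys no longer have a current row, so the
# anti-join picks up both new and changed records.
new_rows = (
    updates.alias("u")
    .join(
        spark.table("gold.dim_customer").where("is_current = true").alias("d"),
        "customer_id",
        "left_anti",
    )
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
)
new_rows.write.format("delta").mode("append").saveAsTable("gold.dim_customer")
```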
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a WebFOCUS Development Engineer, you will design, develop, test, and deploy WebFOCUS reports, dashboards, and visualizations, and create and manage WebFOCUS procedures (.fex), metadata layers, and reporting cubes. You will play a crucial role in building and maintaining ReportCaster schedules for automated report distribution. Collaborating with business users and product managers to gather requirements and deliver insights is a key aspect of this position, as are data analysis, data modeling, and report optimization. You will also write clear, effective functional and technical documentation, participate in solution architecture discussions, offer technical recommendations to improve processes, and ensure report performance, access security, and high-quality user experiences.

To excel in this position, you must have 3-7 years of experience in WebFOCUS development and a strong command of App Studio, InfoAssist, and Developer Studio. Excellent verbal and written communication skills are crucial for effective interaction with team members and stakeholders. Expertise in WebFOCUS metadata modeling, report/cube development, ReportCaster configuration and scheduling, and WebFOCUS security design and sign-on integration is required, as is proficiency in SQL and experience working with relational databases. Knowledge of Power BI and Microsoft Fabric, and familiarity with Azure services, are highly desirable, along with a strong understanding of JavaScript, HTML5, and CSS, and strong analytical and problem-solving abilities. You should be able to engage directly with end users for dashboard design and troubleshooting. Familiarity with Azure services such as Azure Data Factory, Azure Synapse Analytics, and Azure Databricks is an advantage.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Systems, or a related field.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
You will work as a Senior Microsoft Fabric Developer on an immediate project to develop and implement an automated reporting solution using Microsoft Fabric. Your primary responsibilities will include using Microsoft Fabric as the primary platform, Azure Logic Apps for API integrations, Power BI for report creation within Fabric, Power Automate for report distribution, and OneLake for data storage.

The role requires deep expertise in Microsoft Fabric, focusing on data integration, processing, and report development. You should have a strong background in Power BI, specifically within the Fabric environment, and proficiency in Azure Logic Apps for API integrations. Familiarity with Power Automate for workflow automation, an understanding of data modeling and ETL processes, and experience with SQL and data analysis are also essential skills for this position.

Desired skills include knowledge of MSP operations and common tools, experience with Microsoft 365 security features and reporting, familiarity with PDF generation from Power BI reports, an understanding of data privacy and security best practices, and previous experience creating reporting solutions for service providers.

Beyond technical skills, you are expected to have excellent written and verbal communication skills in English, the ability to work independently and take initiative, and a proactive approach to problem-solving. Business acumen, cost-awareness, and commitment to seeing the project through to successful completion are also key criteria for this role. You must be available to overlap with Irish time zones for at least 4 hours per day.

If you meet these requirements and are ready to contribute to the successful completion of the project, we look forward to receiving your application.
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As an experienced Azure Data Engineer, you will be a valuable member of our data engineering team, contributing to the design, development, and maintenance of data pipelines and platforms on Azure. Leveraging tools and services such as Microsoft Fabric, Azure Data Factory, Python notebooks, Azure Functions, and data warehouses, you will build scalable, reliable, and secure data workflows that drive analytics and business intelligence initiatives.

Your responsibilities will include designing and implementing scalable ETL/ELT pipelines using Azure Data Factory (ADF) and Microsoft Fabric, integrating data from various sources into centralized data warehouses or lakehouses, and creating Python notebooks for data transformation and machine learning preparation. You will also develop Azure Function Apps for tasks such as lightweight compute, API integration, and orchestration (see the sketch below), collaborate with analysts and stakeholders to deliver high-quality data models, and ensure best practices in data governance, security, version control, and CI/CD for data pipelines.

To excel in this role, you should have a minimum of 3-5 years of hands-on experience in data engineering with a strong focus on Azure technologies. Proficiency in Microsoft Fabric, Azure Data Factory (ADF), Python notebooks, Azure Functions, and data warehousing concepts is essential, as are strong SQL skills for data extraction, transformation, and modeling, and a good understanding of modern data architectures such as medallion architecture, lakehouse, and data mesh principles.

If you are passionate about data engineering, enjoy working with cutting-edge technologies, and thrive in a collaborative environment, we encourage you to apply and be part of our team dedicated to continuous improvement and innovation in data platform architecture.
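To illustrate the lightweight-compute role Azure Functions plays in such pipelines, here is a minimal sketch using the Python v2 programming model; the schedule and the work done inside the trigger are hypothetical:

```python
import logging

import azure.functions as func

app = func.FunctionApp()

# Runs daily at 02:00 UTC (NCRONTAB format: sec min hour day month day-of-week).
@app.timer_trigger(schedule="0 0 2 * * *", arg_name="timer", run_on_startup=False)
def nightly_refresh(timer: func.TimerRequest) -> None:
    """Placeholder orchestration step, e.g., kicking off an ADF pipeline run."""
    if timer.past_due:
        logging.warning("Timer is running late.")
    logging.info("Starting nightly data refresh.")
    # ... call the Data Factory / Fabric REST API here (omitted in this sketch) ...
```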
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Supply Chain Data Analyst, you will be an integral part of our partner's high-performance analytics team, enabling real-time, cross-functional decisions across the supply chain, finance, and operations domains. The role is pivotal to an initiative to streamline and modernize reporting within a complex ERP landscape through hands-on analytics delivery. Working closely with stakeholders in planning, logistics, procurement, and inventory management, you will turn operational complexity into actionable insights, using your expertise in SQL and Power BI to model and reshape ERP data (specifically SAP ECC/S4) for informed decision-making.

The ideal candidate has a Bachelor's degree in Computer Science, Engineering, Business, or a related field, and a minimum of 5 years of experience in business intelligence, analytics, or reporting. Previous exposure to ERP-driven data is essential, with a strong preference for experience with SAP ECC or S/4HANA. Prior work in manufacturing, distribution, or supply-chain-oriented environments, along with hands-on experience supporting both recurring and ad hoc reporting needs, is advantageous, and familiarity with modern data platforms like Databricks and Microsoft Fabric is a plus.

Your proficiency in Power BI, DAX, SQL, and Excel will be instrumental in your day-to-day responsibilities. You will design and develop Power BI dashboards that track key metrics for inventory, procurement, and logistics (see the KPI sketch at the end of this posting), collaborate with supply chain and planning stakeholders to gather requirements, and use SQL queries to extract and model ERP data for reporting. You will also respond to urgent ad hoc analysis requests, automate manual reporting workflows, and contribute to the standardization of KPIs and reporting practices.

Furthermore, your role will involve data cleansing, validation, and governance activities to ensure accurate, reliable reporting outputs. You will help define and enhance data models across SAP and other source systems while continuously improving existing reporting assets based on feedback, and participate in sprint planning to prioritize and align deliverables.

In summary, you will transform raw data into actionable insights, support stakeholders in making informed decisions, and contribute to the continuous enhancement of our partner's analytics platform.
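To make the kind of ERP-derived KPI work above concrete, here is a small pandas sketch computing an inventory days-of-supply metric; the frame layout and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical extract of SAP inventory and demand data.
inv = pd.DataFrame({
    "material": ["M-100", "M-200", "M-300"],
    "on_hand_qty": [1200, 300, 90],
    "avg_daily_demand": [40, 25, 10],
})

# Days of supply = on-hand stock divided by average daily demand.
inv["days_of_supply"] = (inv["on_hand_qty"] / inv["avg_daily_demand"]).round(1)

# Flag materials at risk of stock-out within a one-week horizon.
inv["stockout_risk"] = inv["days_of_supply"] < 7

print(inv.sort_values("days_of_supply"))
```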
Posted 6 days ago
8.0 - 13.0 years
20 - 35 Lacs
Pune
Hybrid
Job Description:

JOB SUMMARY
We are seeking an experienced Microsoft Fabric architect who brings technical expertise and architectural instincts to lead the design, development, and scaling of our secured, enterprise-grade data ecosystem. This is not a traditional BI/data engineering position: we are looking for deep hands-on expertise in Fabric administration, CI/CD integration, and security/governance configuration in production environments.

ESSENTIAL DUTIES
- Provide technical leadership on design and architectural decisions, data platform evolution, and vendor/tool selection.
- Leverage expertise in data lakehouse on Microsoft Fabric, including optimal use of OneLake, Dataflows Gen2, Pipelines, and Synapse Data Engineering.
- Build and maintain scalable data pipelines to ingest, transform, and curate data from a variety of structured and semi-structured sources.
- Implement and enforce data modeling standards, including medallion architecture, Delta Lake, and dimensional modeling best practices.
- Collaborate with analysts and business users to deliver well-structured, trusted datasets for self-service reporting and analysis in Power BI.
- Establish data engineering practices that ensure reliability, performance, governance, and security.
- Monitor and tune workloads within the Microsoft Fabric platform to ensure cost-effective and efficient operations.

EDUCATION / CERTIFICATION REQUIREMENTS
- Bachelor's degree in Computer Science, Data Science, or a related field is required.
- A minimum of 3 years of experience in data engineering, with at least 2 years in a cloud-native or modern data platform environment, is required.
- Prior experience in a public accounting, financial, or other professional services environment is preferred.

SUCCESSFUL CHARACTERISTICS / SKILLS
- Extensive hands-on expertise with Microsoft Fabric, including Dataflows Gen2, Pipelines, Synapse Data Engineering, Notebooks, and OneLake.
- Proven experience designing lakehouse or data warehouse architecture, including data ingestion frameworks, staging layers, and semantic models.
- Strong SQL and T-SQL skills, and familiarity with Power Query (M) and Delta Lake formats.
- Understanding of data governance, data security, lineage, and metadata management practices.
- Ability to lead technical decisions and set standards in the absence of a dedicated Data Architect.
- Strong communication skills, with the ability to collaborate across technical and non-technical teams.
- Results driven; high integrity; able to influence, negotiate, and build relationships; superior communication skills; comfortable making complex decisions and leading teams through complex challenges.
- Self-disciplined, able to work in a virtual, agile, globally sourced team.
- Strategic, out-of-the-box thinker with problem-solving experience to assess, analyze, troubleshoot, and resolve issues.
- Excellent analytical skills, extraordinary attention to detail, and the ability to present recommendations to business teams based on trends, patterns, and modern best practices.
- Experience with Power BI datasets and semantic modeling is an asset.
- Familiarity with Microsoft Purview or similar governance tools is an asset.
- Working knowledge of Python, PySpark, or KQL is an asset.
- Experience with, and passion for, technology and providing an exceptional experience both internally for our employees and externally for clients and prospects.
- Strong ownership, bias to action, and the know-how to succeed in ambiguity.
- Ability to deliver value consistently by motivating teams toward achieving goals.

To apply, please share your resume with: sachin.patil@newvision-software.com

Please also share the following details for the hiring process:
- Total Experience:
- Relevant Experience:
- Current CTC:
- Expected CTC:
- Notice / Serving (LWD):
- Any Offer in hand (LPA):
- Current Location:
- Preferred Location:
- Education:
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
noida, uttar pradesh
On-site
Your journey at Crowe starts here, with the opportunity to build a meaningful and rewarding career. At Crowe, you are trusted to deliver results and make an impact while having the flexibility to balance work with life moments. Your well-being is cared for, and your career is nurtured in an inclusive environment where everyone has equitable access to opportunities for growth and leadership. With over 80 years of history, Crowe has excelled in delivering excellent service through innovation across its audit, tax, and consulting groups.

As a Data Engineer at Crowe, you will provide critical integration infrastructure for analytical support and solution development for the broader enterprise, using market-leading tools and methodologies. Your expertise in API integration, pipelines and notebooks, programming languages (Python, Spark, T-SQL), dimensional modeling, and advanced data engineering techniques will be key to creating and delivering robust solutions and data products (see the sketch at the end of this posting). You will design, develop, and maintain the Enterprise Analytics Platform to support data-driven decision-making across the organization.

Success in this role requires a strong interest and passion in data analytics, ETL/ELT best practices, critical thinking, and problem-solving, as well as excellent interpersonal, communication, listening, and presentation skills. The Data team strives for an unparalleled client experience and will look to you to promote success and enhance the firm's image firmwide.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Analytics, Data/Information Science, Information Systems, Mathematics, or a related field, along with experience in SQL, data warehousing concepts, programming languages, managing projects, and tools such as Microsoft Power BI, Delta Lake, or Apache Spark. Hands-on experience or certification with Microsoft Fabric is preferred. Upholding Crowe's values of Care, Trust, Courage, and Stewardship is essential, as we expect all team members to act ethically and with integrity at all times.

Crowe offers a comprehensive benefits package and an inclusive culture that values diversity. You will have the opportunity to work with a Career Coach who will guide you in your career goals and aspirations. Crowe, a subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting, and technology firm, is part of Crowe Global, one of the largest global accounting networks in the world. Crowe does not accept unsolicited candidates, referrals, or resumes from any staffing agency or third-party paid service; referrals, resumes, or candidates submitted without a pre-existing agreement will be considered the property of Crowe.
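Given the API-integration and Spark skills this role highlights, here is a minimal sketch of pulling a page-numbered REST API into a Spark DataFrame; the endpoint, pagination scheme, and field names are hypothetical:

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api_ingest").getOrCreate()

API_URL = "https://api.example.com/v1/invoices"  # hypothetical endpoint

def fetch_all(page_size: int = 500) -> list[dict]:
    """Collect every page of results from a page-numbered REST API."""
    records, page = [], 1
    while True:
        resp = requests.get(
            API_URL,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("data", [])  # assumed envelope field
        if not batch:
            return records
        records.extend(batch)
        page += 1

# Land the payload as a staging table for downstream modeling.
df = spark.createDataFrame(fetch_all())
df.write.format("delta").mode("overwrite").saveAsTable("staging_invoices")
```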
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion, you will be a valuable asset to our team. Your primary responsibility will be to support MS Fabric and lead the migration to Snowflake and Matillion. Your expertise and attention to detail will play a crucial role in the success of these projects.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies: no laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device, or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards, and flat panel displays. The innovative ideas and devices that are advancing humanity all begin with inspiration, research, and development. KLA focuses more than average on innovation, investing 15% of sales back into R&D. Our expert teams of physicists, engineers, data scientists, and problem-solvers work together with the world's leading technology providers to accelerate the delivery of tomorrow's electronic devices. Life here is exciting, and our teams thrive on tackling really hard problems. There is never a dull moment with us.

The Information Technology (IT) group at KLA is involved in every aspect of the global business. IT's mission is to enable business growth and productivity by connecting people, process, and technology. It focuses not only on enhancing the technology that enables our business to thrive but also on how employees use and are empowered by technology. This integrated approach to customer service, creativity, and technological excellence enables employee productivity, business analytics, and process excellence.

As a Sr. Data Engineer in the Data Sciences and Analytics team, you will play a key role in KLA's data strategy principles and techniques. As part of the centralized analytics team, you will help analyze and surface key data insights into various business unit processes across the company, providing key performance indicators and dashboards that help business users and partners make business-critical decisions. You will craft and develop analytical solutions by capturing business requirements and translating them into technical specifications, and by building data models and data visualizations.

Responsibilities:
- Design, develop, and deploy Microsoft Fabric solutions and Power BI reports and dashboards.
- Collaborate with business stakeholders to gather requirements and translate them into technical specifications.
- Develop data models and establish data connections to various data sources, using expert knowledge of Microsoft Fabric architecture, deployment, and management.
- Optimize Power BI solutions for performance and scalability.
- Implement best practices for data visualization and user experience.
- Conduct code reviews and provide mentorship to junior developers.
- Manage permissions and workspaces in Power BI, ensuring a secure and efficient analytics platform.
- Conduct assessments and audits of existing Microsoft Fabric environments to identify areas for improvement.
- Stay current with the latest Fabric and Power BI features and updates.
- Troubleshoot and resolve issues related to Fabric objects, Power BI reports, and data sources.
- Create detailed documentation, including design specifications, implementation plans, and user guides.

Minimum Qualifications:
- Doctorate (Academic) Degree and 0 years of related work experience; or Master's Level Degree and 3 years of related work experience; or Bachelor's Level Degree and 5 years of related work experience.
- Proven experience as a Power BI Developer, with a strong portfolio of Power BI projects.
- In-depth knowledge of Power BI, including DAX, Power Query, and data modeling.
- Experience with SQL and other data manipulation languages.
- In-depth knowledge of Microsoft Fabric, including its components and capabilities.
- Strong understanding of Azure cloud computing, data integration, and data management.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Excellent technical problem-solving and performance optimization skills.
- Specialist in SQL and stored procedures, with data warehouse concepts.
- Experience performing ETL processes (Extract, Transform, Load).
- Exceptional communication and interpersonal skills.
- Expert knowledge of cloud and big data concepts and tools: Azure, AWS, Data Lake, Snowflake, etc.

Nice to have:
- Extremely strong SQL skills.
- Foundational knowledge of metadata management, master data management, data governance, and data analytics.
- Technical knowledge of Databricks, Data Lake, Spark, and SQL.
- Experience configuring SSO (Single Sign-On), RBAC, and security roles on an analytics platform.
- SAP functional knowledge is a plus.
- Microsoft certifications related to Microsoft Fabric/Power BI or Azure/analytics are a plus.
- Good understanding of requirements and the ability to convert them into data warehouse solutions.

We offer a competitive, family-friendly total rewards package. We design our programs to reflect our commitment to an inclusive environment, while ensuring we provide benefits that meet the diverse needs of our employees. KLA is proud to be an equal opportunity employer.

Be aware of potentially fraudulent job postings or suspicious recruiting activity by persons posing as KLA employees. KLA never asks for any financial compensation to be considered for an interview, to become an employee, or for equipment, and KLA does not work with any recruiters or third parties who charge such fees, either directly or on behalf of KLA. Please ensure that you have searched KLA's Careers website for legitimate job postings. KLA follows a recruiting process that involves multiple interviews, in person or by video conference, with our hiring managers. If you are concerned that a communication, an interview, an offer of employment, or an employee is not legitimate, please send an email to talent.acquisition@kla.com to confirm that the person you are communicating with is an employee. We take your privacy very seriously and handle your information confidentially.
Posted 1 week ago
3.0 - 5.0 years
15 Lacs
Mumbai
Work from Office
Candidate Specifications: The candidate should have 3+ years of experience in data analytics and reporting with Databricks, Power BI, and Snowflake; strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R; experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue; hands-on experience building and deploying machine learning models; the ability to translate complex data into actionable insights; and excellent problem-solving and communication skills.
Job Description:
- Design and build interactive dashboards and reports using Power BI and Microsoft Fabric.
- Perform advanced data analysis and visualisation to support business decision-making.
- Develop and maintain data pipelines and queries using SQL and Python.
- Apply data science techniques such as predictive modelling, classification, clustering, and regression to solve business problems and uncover actionable insights.
- Perform feature engineering and data preprocessing to prepare datasets for machine learning workflows.
- Build, validate, and tune machine learning models using tools such as scikit-learn, TensorFlow, or similar frameworks (see the sketch below).
- Deploy models into production environments and monitor their performance over time, ensuring they deliver consistent value.
- Collaborate with stakeholders to translate business questions into data science problems and communicate findings in a clear, actionable manner.
- Use statistical techniques and hypothesis testing to validate assumptions and support decision-making.
- Document data science workflows and maintain reproducibility of experiments and models.
- Support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts.
Professional Certifications (preferred or in progress):
- Microsoft Certified: Power BI Data Analyst Associate (PL-300)
- SnowPro Core Certification (Snowflake)
- Microsoft Certified: Azure Data Engineer Associate
- AWS Certified: Data Analytics Specialty
Contact Person - Hemalatha
Email id - hemalatha@gojobs.biz
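As an illustration of the build-validate workflow referenced above, the following is a minimal scikit-learn sketch. The dataset, the all-numeric feature columns, and the churned label are hypothetical assumptions for illustration only:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature table exported from the warehouse (numeric features only).
df = pd.read_csv("customer_features.csv")
X = df.drop(columns=["churned"])
y = df["churned"]

# Hold out a stratified test set to validate the model honestly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# A preprocessing + model pipeline keeps the workflow reproducible.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Wrapping preprocessing and the estimator in one Pipeline object also makes the fitted model straightforward to serialize and deploy as a single artifact.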
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
tiruchirappalli, tamil nadu
On-site
INFOC is seeking a motivated and curious Associate Data & AI Engineer to assist in the development of AI-enabled analytics solutions using Microsoft Fabric, Power BI, and Azure Data Services. In this role, you will work closely with senior architects and consultants on real-world transformation projects, enabling customers to derive actionable insights through modern data platforms. This position is ideal for early-career professionals who are enthusiastic about working with cutting-edge Microsoft technologies, honing their skills, and driving tangible business outcomes. Your primary responsibilities will include supporting the development of data pipelines, dataflows, and transformations utilizing Microsoft Fabric and Azure tools. You will also play a key role in constructing Power BI dashboards and semantic models tailored to meet customer reporting requirements. Collaboration with solution architects will be essential as you prepare, cleanse, and model data sourced from a variety of systems including ERP, CRM, and external sources. Additionally, you will engage in Proof-of-Concepts (PoCs) involving AI integrations such as Azure OpenAI and Azure ML Studio, conduct data validation, testing, and performance tuning, and document technical processes, solution architecture, and deployment steps. The ideal candidate will possess at least 2-5 years of experience in data analytics, engineering, or AI solution development. Hands-on proficiency with Power BI (datasets, visuals, DAX) and basic Azure Data tools (Data Factory, Data Lake) is required. Exposure to, or a willingness to learn, Microsoft Fabric, Lakehouses, and AI workloads is highly valued. A strong grasp of SQL, data modeling, and visualization principles is essential, and familiarity with Python or Power Query (M) is advantageous. Strong analytical and communication skills, along with a problem-solving mindset, are crucial. Possessing a Microsoft certification (PL-300, DP-203, or similar) would be a bonus. At INFOC, you will gain valuable real-world project experience in AI, Data Engineering, and Business Intelligence. You will receive mentorship from senior Microsoft-certified consultants, have the opportunity to progress into roles such as AI Solution Architect or Power BI Specialist, access certifications and structured learning paths, and be part of a collaborative and innovation-driven culture. Begin your journey in the field of AI & analytics with INFOC today. Apply now by sending your resume to: careers@infoc.com To learn more about INFOC, visit: www.infoc.com,
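By way of illustration, preparing and conforming data from ERP and CRM sources, as this role describes, often starts with a step like the pandas sketch below. The file names, join key, and columns are hypothetical, not INFOC's actual systems:

```python
import pandas as pd

# Hypothetical extracts from an ERP and a CRM system.
erp = pd.read_csv("erp_customers.csv")
crm = pd.read_csv("crm_accounts.csv")

# Normalise the join key before matching records across systems.
for frame in (erp, crm):
    frame["customer_id"] = frame["customer_id"].astype(str).str.strip().str.upper()

# Enrich ERP records with CRM attributes; keep all ERP rows.
merged = erp.merge(crm[["customer_id", "account_owner", "segment"]],
                   on="customer_id", how="left")

# Basic cleansing before the data feeds a Power BI dataset.
merged = merged.drop_duplicates("customer_id")
merged["segment"] = merged["segment"].fillna("Unassigned")
merged.to_parquet("customers_conformed.parquet", index=False)
```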
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric. You will be expected to monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery. Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will need to document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions. To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads. Experience in optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric is also crucial. Preferred skills for this role include experience in the Microsoft Fabric ecosystem, familiarity with OneLake, Delta Lake, and Lakehouse principles, expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks, and understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be considered a plus. If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role as a Senior Data Engineer at Srijan Technologies PVT LTD could be the perfect fit for you. Apply now to be a part of a dynamic team driving innovation in data architecture and analytics.,
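To illustrate the cost-conscious pipeline tuning this posting calls for, here is a minimal sketch of a watermark-driven incremental load in PySpark. The table name, landing path, and modified_at column are hypothetical assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

TARGET = "lakehouse.orders"  # hypothetical Delta target

# Watermark: the latest modification timestamp already loaded.
row = spark.table(TARGET).agg(F.max("modified_at").alias("wm")).first()
watermark = row["wm"] or "1900-01-01 00:00:00"  # fall back for the first run

# Read only rows newer than the watermark from the landing zone.
incoming = (spark.read.format("delta")
            .load("Files/landing/orders")
            .filter(F.col("modified_at") > F.lit(watermark)))

# Append just the delta, so compute scales with the change volume
# rather than with the full table size.
incoming.write.format("delta").mode("append").saveAsTable(TARGET)
```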
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
We are looking for a Data Engineer with over 5 years of experience to join our team in Ahmedabad. As a Data Engineer, you will play a key role in transforming raw data into valuable insights and creating scalable data infrastructure. Your responsibilities will include designing data pipelines, optimizing data systems, and supporting data-driven decision-making.
Key responsibilities of the role include:
- Architecting, building, and maintaining scalable data pipelines from various sources.
- Designing effective data storage and retrieval mechanisms and data models for analytics.
- Implementing data validation, transformation, and quality monitoring processes (see the sketch below).
- Collaborating with cross-functional teams to deliver data-driven solutions.
- Identifying bottlenecks, optimizing workflows, and providing mentorship to junior engineers.
We are looking for a candidate with:
- 4+ years of hands-on experience in data engineering.
- Proficiency in Python and data pipeline design.
- Experience with big data tools like Hadoop, Spark, and Hive.
- Strong skills in SQL, NoSQL databases, and data warehousing solutions.
- Knowledge of cloud platforms, especially Azure.
- Familiarity with distributed computing, data modeling, and performance tuning.
- Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
- Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.
Qualifications required for this position include a Bachelor's degree in Computer Science, Data Science, or a related field. If you are passionate about data engineering and have the necessary expertise, we encourage you to apply and be a part of our innovative team in Ahmedabad.
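A data-quality monitoring step like the one mentioned above can start as simply as a function that asserts row-level rules before a load proceeds. This is a minimal sketch; the dataset, columns, and 1% null tolerance are hypothetical:

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if df["amount"].lt(0).any():
        failures.append("negative amounts found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # hypothetical 1% tolerance
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds threshold")
    return failures

orders = pd.read_parquet("orders.parquet")  # hypothetical dataset
problems = run_quality_checks(orders)
if problems:
    # Fail fast so bad data never reaches downstream consumers.
    raise ValueError("; ".join(problems))
```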
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Our Client was established in India in September 1993 and has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2,700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.
Job Requirements
Mandatory Skills:
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 7+ years of work experience).
- At least 6+ years of consulting or client service delivery experience in Azure Microsoft data engineering.
- At least 4+ years of experience developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse/Azure Databricks and Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL and data processing using Azure services: Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, services in Microsoft Fabric, etc.
- Minimum of 5+ years of hands-on experience with Azure and big data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, and PySpark/Spark SQL.
- Minimum of 3+ years of RDBMS experience.
- Experience using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.
Preferred Skills:
Technical Leadership & Demo Delivery:
- Provide technical leadership to the data engineering team, guiding the design and implementation of data solutions.
- Deliver compelling and clear demonstrations of data engineering solutions to stakeholders and clients, showcasing functionality and business value.
- Communicate fluently in English with clients, translating complex technical concepts into business-friendly language during presentations, meetings, and consultations.
ETL Development & Deployment on Azure Cloud:
- Design, develop, and deploy robust ETL (Extract, Transform, Load) pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Notebooks, Azure Functions, and other Azure services.
- Ensure scalable, efficient, and secure data integration workflows that meet business requirements.
- Design and develop data quality frameworks to validate, cleanse, and monitor data integrity.
- Perform advanced data transformations, including Slowly Changing Dimensions (SCD Type 1 and Type 2), using Fabric Notebooks or Databricks (see the sketch after this posting).
- Preferably, experience with Azure Document Intelligence, custom apps, and Blob Storage.
Microsoft Certifications:
- Hold relevant role-based Microsoft certifications, such as DP-203: Data Engineering on Microsoft Azure and AI-900: Microsoft Azure AI Fundamentals.
- Additional certifications in related areas (e.g., PL-300 for Power BI) are a plus.
Azure Security & Access Management:
- Strong knowledge of Azure Role-Based Access Control (RBAC) and Identity and Access Management (IAM).
- Implement and manage access controls, ensuring data security and compliance with organizational and regulatory standards on Azure Cloud.
Additional Responsibilities & Skills:
- Team Collaboration: Mentor junior engineers, fostering a culture of continuous learning and knowledge sharing within the team.
- Project Management: Oversee data engineering projects, ensuring timely delivery within scope and budget, while coordinating with cross-functional teams.
- Data Governance: Implement data governance practices, including data lineage, cataloging, and compliance with standards like GDPR or CCPA.
- Performance Optimization: Optimize ETL pipelines and data workflows for performance, cost-efficiency, and scalability on Azure platforms.
- Cross-Platform Knowledge: Familiarity with integrating Azure services with other cloud platforms (e.g., AWS, GCP) or hybrid environments is an added advantage.
Soft Skills & Client Engagement:
- Exceptional problem-solving skills with a proactive approach to addressing technical challenges.
- Strong interpersonal skills to build trusted relationships with clients and stakeholders.
- Ability to manage multiple priorities in a fast-paced environment, ensuring high-quality deliverables.
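The SCD Type 2 transformation named in this posting is commonly implemented as a two-step Delta Lake merge. Below is a minimal sketch for a Databricks or Fabric notebook; the dim_customer table, staging path, tracked attributes (city, segment), and flag columns are hypothetical assumptions, not the client's actual schema:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical staged source rows and existing dimension table.
updates = spark.read.parquet("/lake/staging/customers")
dim = DeltaTable.forName(spark, "dim_customer")

# Step 1: expire the current row of any customer whose tracked
# attributes changed, closing it out with an end date.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.city <> s.city OR t.segment <> s.segment",
        set={"is_current": F.lit(False), "end_date": F.current_date()})
    .execute())

# Step 2: append new versions for changed customers (their old row was
# just expired) and brand-new customers, stamped as current.
still_current = (spark.table("dim_customer")
                 .where("is_current = true")
                 .select("customer_id"))
new_rows = (updates.join(still_current, "customer_id", "left_anti")
            .withColumn("is_current", F.lit(True))
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date")))
new_rows.write.format("delta").mode("append").saveAsTable("dim_customer")
```

The two-pass shape keeps full history queryable by date range, while an SCD Type 1 variant would simply overwrite the matched attributes in place.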
Posted 1 week ago