5.0 - 9.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You will join Beyond Key, a Microsoft Gold Partner and a Great Place to Work-certified company that values the happiness of both team members and clients. Established in 2005, the company comprises over 350 skilled software professionals who serve clients across the United States, Canada, Europe, Australia, the Middle East, and India. Beyond Key is dedicated to offering cutting-edge IT consulting and software services to meet global client requirements. For more information, visit https://www.beyondkey.com/about.

As a Microsoft Fabric Expert at Beyond Key, you will lead the design, implementation, and optimization of enterprise-scale data solutions using Microsoft Fabric. Your primary focus will be developing a modern, scalable data ecosystem that integrates seamlessly with Power BI, SQL databases, and upcoming Enterprise Data Warehouse (EDW) projects. Collaborating with various teams, you will ensure the secure, efficient, and compliant management of data workflows across healthcare, finance, and other sectors.

Your responsibilities will include architecting end-to-end data integration pipelines using Microsoft Fabric, Azure Data Factory, and Synapse. You will enable bulk and incremental data ingestion from sources such as SQL Server and REST APIs into Microsoft Fabric, and optimize data workflows for improved performance, scalability, and cost-efficiency. Integrating Microsoft Fabric with Power BI for real-time analytics and reporting will also be part of your role, as will implementing role-based access controls, encryption, and compliance frameworks for sensitive data. Troubleshooting and resolving issues related to data access, latency, and pipeline failures will also fall under your purview.
Furthermore, you will collaborate with stakeholders to translate business requirements into technical specifications and mentor teams on best practices for Microsoft Fabric, data modeling, and hybrid cloud architectures.

To qualify for this role, you should have proven experience with Microsoft Fabric, Azure Data Factory, Synapse, and Power BI. Proficiency in SQL, Python, Spark, ADLS, and ETL/ELT frameworks is required, as is hands-on experience with REST API integration and hybrid cloud environments (Azure/AWS/GCP). Certifications such as Microsoft Certified: Azure Data Engineer Associate, TOGAF, Databricks, or Snowflake will be advantageous. Strong communication skills are necessary to bridge technical and non-technical teams, and a problem-solving mindset focused on delivering business-aligned solutions is crucial for success in this position.
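The incremental ingestion mentioned above typically relies on a high-watermark pattern: each run loads only rows modified after the timestamp recorded by the previous run. The sketch below is a minimal, hypothetical illustration of that pattern in plain Python; `load_incremental` and the row shape are assumptions for illustration, not a real Fabric or Azure Data Factory API.

```python
from datetime import datetime, timezone

def load_incremental(source_rows, last_watermark):
    """Return rows modified after the stored watermark, plus the new watermark.

    In a real pipeline the watermark would be persisted (e.g. in a control
    table) and the filter pushed down to the source query; here both are
    simulated in memory for clarity.
    """
    new_rows = [r for r in source_rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

# Simulated source table with a modification timestamp per row.
rows = [
    {"id": 1, "modified_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2024, 2, 1, tzinfo=timezone.utc)},
    {"id": 3, "modified_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},
]
watermark = datetime(2024, 1, 15, tzinfo=timezone.utc)
delta, watermark = load_incremental(rows, watermark)
```

Only the rows with ids 2 and 3 are picked up, and the stored watermark advances so the next run skips them.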
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Kochi, Kerala
On-site
You should have 8-12 years of experience in a Data Engineer role, with at least 3 years as an Azure data engineer. A bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is required. You must be proficient in Python and SQL and have a deep understanding of PySpark, along with expertise in Databricks or similar big data solutions. Strong knowledge of ETL/ELT frameworks, data structures, and software architecture is expected, as is proven experience designing and deploying high-performance data processing systems and extensive experience with Azure cloud data platforms.

As a Data Engineer, your responsibilities will include designing, constructing, installing, testing, and maintaining highly scalable and robust data management systems. You will apply data warehousing concepts to design and implement data warehouse tables in line with business requirements, and build complex ETL/ELT processes for large-scale data migration and transformation across platforms and enterprise systems such as Oracle ERP, ERP Fusion, and Salesforce. You must be able to extract data from various sources, including APIs, JSON files, and databases. You will use PySpark and Databricks within the Azure ecosystem to manipulate large datasets, improve performance, and enhance the scalability of data operations.

You will also develop and implement Azure-based data architectures that are consistent across multiple projects while adhering to best practices and standards, and lead initiatives for data integrity and normalization within Azure data storage and processing environments. You will evaluate and optimize Azure-based database systems for performance, efficiency, reusability, reliability, and scalability, and troubleshoot complex data-related issues within Azure while providing expert guidance and support to the team.
Ensuring all data processes adhere to governance, data security, and privacy regulations is also a critical part of the role.
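Extracting data from APIs, as the posting above describes, usually means walking paginated JSON responses and collecting the records from each page. The following is a hedged, self-contained sketch using only the standard library; the page payload shape ("items", "next") and the in-memory `pages` store are hypothetical stand-ins for real HTTP responses.

```python
import json

def iter_records(pages):
    """Follow 'next' links across paginated JSON payloads, yielding every record.

    `pages` maps a page key to its raw JSON body; a real extractor would
    issue HTTP requests instead of dictionary lookups.
    """
    page = pages.get("page1")
    while page is not None:
        payload = json.loads(page)
        yield from payload["items"]
        # A missing or null "next" key ends the walk.
        page = pages.get(payload.get("next"))

# Two simulated API pages linked by a "next" token.
pages = {
    "page1": json.dumps({"items": [{"id": 1}, {"id": 2}], "next": "page2"}),
    "page2": json.dumps({"items": [{"id": 3}], "next": None}),
}
records = list(iter_records(pages))
```

The generator keeps memory flat regardless of page count, which matters when the same loop feeds a PySpark or Databricks ingestion job.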
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Kolkata, West Bengal
On-site
You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse, and provide technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle, including training, deployment, and monitoring, will be part of your day-to-day tasks.

Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. The role requires a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts; proficiency in SQL, Python, and PySpark; and a solid understanding of AI/ML workflows and tools. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements.

As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.
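Integrating an ML model into a data pipeline, as described above, often amounts to a scoring stage that enriches each batch of rows with a prediction. This minimal sketch shows that shape in plain Python; `predict` is a placeholder stub (a real stage would load a registered model, e.g. from Azure ML), and the 0.5 threshold is an assumption for illustration.

```python
def predict(features):
    """Placeholder scoring function: averages the feature vector.

    A real pipeline would load a trained, registered model here rather
    than computing this linear stub.
    """
    return sum(features) / len(features)

def score_batch(rows, threshold=0.5):
    """Append a model score and a boolean flag to each row in the batch."""
    out = []
    for row in rows:
        score = predict(row["features"])
        out.append({**row, "score": score, "flagged": score > threshold})
    return out

scored = score_batch([
    {"id": 1, "features": [0.2, 0.9]},
    {"id": 2, "features": [0.1, 0.2]},
])
```

Keeping the scoring step a pure function of its input rows makes it easy to rerun, monitor, and swap the model without touching the surrounding pipeline.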
Posted 1 month ago