4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
We are looking for a Data Engineer with over 5 years of experience to join our team in Ahmedabad. As a Data Engineer, you will play a key role in transforming raw data into valuable insights and building scalable data infrastructure. Your responsibilities will include designing data pipelines, optimizing data systems, and supporting data-driven decision-making.

Key responsibilities:
- Architect, build, and maintain scalable data pipelines from various sources.
- Design effective data storage and retrieval mechanisms and data models for analytics.
- Implement data validation, transformation, and quality monitoring processes.
- Collaborate with cross-functional teams to deliver data-driven solutions.
- Identify bottlenecks, optimize workflows, and mentor junior engineers.

We are looking for a candidate with:
- 4+ years of hands-on experience in Data Engineering.
- Proficiency in Python and data pipeline design.
- Experience with Big Data tools such as Hadoop, Spark, and Hive.
- Strong skills in SQL, NoSQL databases, and data warehousing solutions.
- Knowledge of cloud platforms, especially Azure.
- Familiarity with distributed computing, data modeling, and performance tuning.
- Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
- Strong analytical thinking, collaboration and communication skills, and the ability to work independently or as part of a team.

Qualifications required for this position include a Bachelor's degree in Computer Science, Data Science, or a related field. If you are passionate about data engineering and have the necessary expertise, we encourage you to apply and be a part of our innovative team in Ahmedabad.
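As a flavour of the "data validation and quality monitoring" work this posting mentions, here is a minimal pandas sketch; the table shape and column names (order_id, amount) are illustrative assumptions, not taken from the posting.

```python
# Hypothetical data-quality check of the kind the posting describes.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> dict:
    """Run simple quality checks and return a small report dict."""
    report = {
        "row_count": len(df),
        "null_order_ids": int(df["order_id"].isna().sum()),
        "duplicate_order_ids": int(df["order_id"].duplicated().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }
    # Pass only when every defect counter is zero.
    report["passed"] = all(v == 0 for k, v in report.items() if k != "row_count")
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2, None],
        "amount": [100.0, -5.0, 30.0, 12.5],
    })
    print(validate_orders(sample))
```

In a real pipeline a report like this would typically gate the load step or feed a monitoring dashboard.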
Posted 1 month ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Established in India in September 1993, our Client has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2,700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.

Job Requirements

Mandatory Skills
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 7+ years of work experience).
- At least 6+ years of consulting or client service delivery experience in Azure/Microsoft data engineering.
- At least 4+ years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse, Azure Databricks, and Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL, and data processing using Azure services: Microsoft Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, etc.
- Minimum of 5+ years of hands-on experience in Azure and Big Data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, and PySpark/Spark SQL.
- Minimum of 3+ years of RDBMS experience.
- Experience using Big Data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.

Preferred Skills

Technical Leadership & Demo Delivery:
- Provide technical leadership to the data engineering team, guiding the design and implementation of data solutions.
- Deliver compelling and clear demonstrations of data engineering solutions to stakeholders and clients, showcasing functionality and business value.
- Communicate fluently in English with clients, translating complex technical concepts into business-friendly language during presentations, meetings, and consultations.

ETL Development & Deployment on Azure Cloud:
- Design, develop, and deploy robust ETL (Extract, Transform, Load) pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Notebooks, Azure Functions, and other Azure services.
- Ensure scalable, efficient, and secure data integration workflows that meet business requirements.
- Design and develop data quality frameworks to validate, cleanse, and monitor data integrity.
- Perform advanced data transformations, including Slowly Changing Dimensions (SCD Type 1 and Type 2), using Fabric Notebooks or Databricks; a sketch of an SCD Type 2 load appears after this posting.
- Preferred: Azure Document Intelligence, custom apps, Blob Storage.

Microsoft Certifications:
- Hold relevant role-based Microsoft certifications, such as DP-203: Data Engineering on Microsoft Azure and AI-900: Microsoft Azure AI Fundamentals.
- Additional certifications in related areas (e.g., PL-300 for Power BI) are a plus.
Azure Security & Access Management:
- Strong knowledge of Azure Role-Based Access Control (RBAC) and Identity and Access Management (IAM).
- Implement and manage access controls, ensuring data security and compliance with organizational and regulatory standards on Azure Cloud.

Additional Responsibilities & Skills:
- Team Collaboration: Mentor junior engineers, fostering a culture of continuous learning and knowledge sharing within the team.
- Project Management: Oversee data engineering projects, ensuring timely delivery within scope and budget, while coordinating with cross-functional teams.
- Data Governance: Implement data governance practices, including data lineage, cataloging, and compliance with standards like GDPR or CCPA.
- Performance Optimization: Optimize ETL pipelines and data workflows for performance, cost-efficiency, and scalability on Azure platforms.
- Cross-Platform Knowledge: Familiarity with integrating Azure services with other cloud platforms (e.g., AWS, GCP) or hybrid environments is an added advantage.

Soft Skills & Client Engagement:
- Exceptional problem-solving skills with a proactive approach to addressing technical challenges.
- Strong interpersonal skills to build trusted relationships with clients and stakeholders.
- Ability to manage multiple priorities in a fast-paced environment, ensuring high-quality deliverables.
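The SCD Type 2 transformation mentioned above is the kind of thing this role would build in a Fabric or Databricks notebook. Below is a minimal PySpark/Delta sketch under stated assumptions: a Delta dimension table dim_customer with columns customer_id, city, is_current, start_date, end_date, and city as the only tracked attribute. All names are hypothetical.

```python
# Minimal SCD Type 2 sketch on a Delta dimension table (Fabric/Databricks).
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()
updates = spark.read.parquet("Files/staging/customers")  # incoming snapshot

# Rows that are brand new or have a changed tracked attribute (here: city).
current = spark.table("dim_customer").where("is_current = true")
changed_or_new = (updates.alias("s")
    .join(current.alias("t"),
          F.col("s.customer_id") == F.col("t.customer_id"), "left")
    .where(F.col("t.customer_id").isNull() |
           (F.col("t.city") != F.col("s.city")))
    .select("s.*")
    .cache())
changed_or_new.count()  # materialise before mutating the table below

# Step 1: close out the current rows that are being superseded.
(DeltaTable.forName(spark, "dim_customer").alias("t")
    .merge(changed_or_new.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false",
                            "end_date": "current_date()"})
    .execute())

# Step 2: append the new versions as current rows.
(changed_or_new
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```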
Posted 1 month ago
4.0 - 9.0 years
15 - 25 Lacs
Vadodara
Work from Office
Alembic Pharmaceuticals is looking for a Senior Data Engineer with 5+ years of hands-on experience to join our dynamic data team. The ideal candidate will have strong expertise in Microsoft Fabric, demonstrate readiness to adopt cutting-edge tools like SAP Datasphere, and possess foundational AI knowledge to guide our data engineering initiatives. This role combines technical excellence with leadership responsibilities, including team guidance and vendor collaboration.

Experience & Technical Skills
- 5+ years of professional data engineering experience with demonstrated expertise in Microsoft Fabric components.
- Strong proficiency in PySpark for large-scale data processing and distributed computing (MANDATORY).
- Extensive experience with Azure Data Factory (ADF) for orchestrating complex data workflows (MANDATORY).
- Proficiency in SQL and Python for data processing and pipeline development.
- Strong understanding of cloud data platforms, preferably the Azure ecosystem.
- Experience with data modelling, data warehousing concepts, and modern data architecture patterns.

Interested candidates can share their CV at creyesha.macwan@alembic.co.in
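To illustrate the mandatory PySpark plus ADF combination, here is a small batch transformation of the kind an ADF pipeline would orchestrate as an activity. The storage paths and column names are placeholders, not details from the posting.

```python
# Illustrative PySpark batch job, orchestrated by an ADF pipeline activity.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

# Read raw sales events from the landing zone in ADLS.
raw = spark.read.parquet("abfss://raw@yourlake.dfs.core.windows.net/sales/")

# Aggregate to one row per date and region.
daily = (raw
    .withColumn("sale_date", F.to_date("sale_ts"))
    .groupBy("sale_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.countDistinct("order_id").alias("orders")))

# Write partitioned output for efficient downstream reads.
(daily.write.mode("overwrite")
      .partitionBy("sale_date")
      .parquet("abfss://curated@yourlake.dfs.core.windows.net/daily_sales/"))
```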
Posted 1 month ago
5.0 - 10.0 years
7 - 11 Lacs
Hyderabad
Hybrid
Database Administrator (DBA) - T-SQL / Microsoft Fabric / Azure Data Services

Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related discipline
- 5+ years of hands-on experience as a DBA, with strong exposure to T-SQL, SSMS, and SQL Server 2019 or later
- Solid knowledge of Microsoft Fabric components and their interoperability with the Power Platform ecosystem
- Experience with Azure SQL Database, Azure Managed Instance, and Data Lake (Gen2 / OneLake)
- Strong understanding of RDBMS design, data normalization, and performance tuning techniques
- Hands-on with HA/DR mechanisms such as Always On Availability Groups, Log Shipping, and Azure Failover Groups
- Proficient in monitoring and diagnostic tools: SQL Profiler, Extended Events, Azure Log Analytics, Query Performance Insight
- Experience implementing data privacy, encryption (e.g., TDE, Always Encrypted), firewall rules, and security auditing

Preferred Skills & Tools
- Proficiency in Azure Data Factory (ADF), Azure Synapse, and Power BI Dataflows
- Familiarity with Microsoft Purview for data lineage and governance
- Hands-on with CI/CD pipelines for SQL using Azure DevOps YAML
- Understanding of Fabric workspace administration, capacity planning, and security roles
- Knowledge of NoSQL / Azure Cosmos DB is a plus
- Experience with monitoring tools like Grafana or Prometheus (especially in hybrid setups)
- Scripting experience in Python and/or PowerShell for automation
- Experience with ERP integrations and third-party data replication tools like Fivetran, BryteFlow, and Qlik Replicate

Qualification: Bachelor's degree in Computer Science, Information Technology, Business Administration, or a related field

Skills: T-SQL, SSMS, SQL Server 2019, Microsoft Fabric, Power Platform, Azure SQL Database, Azure Managed Instance, Data Lake Gen2, OneLake, Always On Availability Groups, Log Shipping, Azure Failover Groups, SQL Profiler, Extended Events, Azure Log Analytics, Query Performance Insight, TDE, Always Encrypted, Azure Data Factory (ADF), Azure Synapse, Power BI Dataflows, Microsoft Purview, Azure DevOps, YAML, Grafana, Prometheus, Fivetran, BryteFlow, Qlik Replicate
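The "Python scripting for automation" item above might look like this in practice: a small pyodbc script that pulls the top CPU-consuming queries from the standard SQL Server DMVs. The connection string is a placeholder; the DMV query itself is ordinary T-SQL.

```python
# Hedged sketch: DBA monitoring automation via pyodbc and SQL Server DMVs.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdb;Authentication=ActiveDirectoryInteractive;"
)

QUERY = """
SELECT TOP 5
       qs.total_worker_time / qs.execution_count AS avg_cpu_us,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        print(f"{row.avg_cpu_us:>12} us avg | "
              f"{row.execution_count:>6} runs | {row.query_text}")
```

A script like this is typically scheduled (e.g., via an Azure Function or cron) and its output shipped to Log Analytics or Grafana.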
Posted 1 month ago
3.0 - 8.0 years
10 - 15 Lacs
Mumbai
Hybrid
1. Job Purpose
The Senior Data Analytics Analyst will play a key role in delivering hands-on analytics solutions across the UK and Ireland. This role focuses on bespoke report building, dashboard development, and advanced data analysis using technologies such as Power BI, Microsoft Fabric, Snowflake, Azure, AWS, Python, SQL, and R. Experience in data science is highly advantageous, including the application of machine learning and predictive modelling to solve business problems.

2. Accountabilities & Activities
• Design and build interactive dashboards and reports using Power BI and Microsoft Fabric.
• Perform advanced data analysis and visualisation to support business decision-making.
• Develop and maintain data pipelines and queries using SQL and Python.
• Apply data science techniques such as predictive modelling, classification, clustering, and regression to solve business problems and uncover actionable insights.
• Perform feature engineering and data preprocessing to prepare datasets for machine learning workflows.
• Build, validate, and tune machine learning models using tools such as scikit-learn, TensorFlow, or similar frameworks (see the sketch after this posting).
• Deploy models into production environments and monitor their performance over time, ensuring they deliver consistent value.
• Collaborate with stakeholders to translate business questions into data science problems and communicate findings in a clear, actionable manner.
• Use statistical techniques and hypothesis testing to validate assumptions and support decision-making.
• Document data science workflows and maintain reproducibility of experiments and models.
• Support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts.

3. Qualifications, Knowledge & Experience
• Professional Certifications (preferred or in progress):
- Microsoft Certified: Power BI Data Analyst Associate (PL-300)
- SnowPro Core Certification (Snowflake)
- Microsoft Certified: Azure Data Engineer Associate
- AWS Certified: Data Analytics Specialty
• Strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R.
• Experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue.
• Hands-on experience in building and deploying machine learning models.
• Ability to translate complex data into actionable insights.
• Excellent problem-solving and communication skills.

4. Judgement Skills
Balances team development with delivery priorities and business needs. Makes informed decisions on technical design, resource allocation, and delivery timelines. Evaluates project outcomes and identifies opportunities for team and process improvement. Encourages experimentation while maintaining delivery discipline and governance.

5. Freedom Of Action
Acts as a key liaison between technical teams and business stakeholders.

6. Dimensions
Financial: Supports budgeting and cost optimisation for analytics projects. Contributes to revenue growth through data-driven client solutions.
Non-Financial: Drives adoption of modern analytics tools and practices. Builds strong relationships across business units and with external clients.

7. Environment
Builds strong relationships and influences key decision makers. Able to work under pressure and adapt to change.
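As a flavour of the "build, validate, and tune machine learning models" accountability, here is a minimal scikit-learn sketch on a synthetic dataset; every name in it is illustrative, not from the posting.

```python
# Minimal model build/validate loop with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report

# Synthetic stand-in for a prepared, feature-engineered dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# Cross-validate on the training split before committing to a final fit.
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())

# Fit and report held-out performance.
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```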
Posted 1 month ago
4.0 - 8.0 years
15 - 27 Lacs
Indore, Hyderabad
Hybrid
Data Engineer - D365 OneLake Integration Specialist

Position Overview: We are seeking an experienced Data Engineer with expertise in Microsoft D365 ERP and OneLake integration to support a critical acquisition integration project. The successful candidate will assess existing data integrations, collaborate with our data team to migrate pipelines to Snowflake using Matillion, and ensure seamless data flow for go-live critical reports by November 2025.

Role & responsibilities:
- Assessment & Documentation: Analyze and document existing D365 to OneLake/Fabric integrations and data flows
- Data Pipeline Migration: Collaborate with the current data team to redesign and migrate data integrations from D365 to Snowflake using Matillion
- Integration Architecture: Understand and map current Power BI reporting dependencies and data sources
- Go-Live Support: Identify critical reports for go-live and recommend optimal data integration strategies
- Technical Collaboration: Work closely with the existing data engineering team to leverage current Snowflake and Matillion expertise
- Knowledge Transfer: Document findings and provide recommendations on existing vs. new integration approaches
- ERP Implementation Support: Support the acquired company's ERP go-live timeline and requirements

Required Qualifications:

Technical Skills
- 3+ years experience with Microsoft Dynamics 365 ERP data integrations
- 2+ years hands-on experience with Microsoft OneLake and the Fabric ecosystem
- Strong experience with the Snowflake data warehouse platform
- Proficiency in the Matillion ETL tool for data pipeline development
- Experience with Power BI data modeling and reporting architecture
- Strong SQL skills and data modeling expertise
- Knowledge of Azure Data Factory or similar cloud ETL tools
- Experience with REST APIs and data connector frameworks

Business & Soft Skills
- Experience supporting ERP implementation projects and go-live activities
- Strong analytical and problem-solving skills for complex data integration challenges
- Excellent documentation and communication skills
- Ability to work in fast-paced, deadline-driven environments
- Experience in M&A integration projects (preferred)
- Project management skills and ability to prioritize go-live critical deliverables

Preferred candidate profile
- Microsoft Azure certifications (DP-203, DP-900)
- Snowflake SnowPro certification
- Previous experience with acquisition integration projects
- Knowledge of financial and operational reporting requirements
- Familiarity with data governance and compliance frameworks
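For the Snowflake target side of the migration, a load step could look like the sketch below using the official Snowflake Python connector. In this role Matillion would normally own this step; the account, credentials, file, and table names are placeholders.

```python
# Hedged sketch: loading a staged D365 extract into Snowflake.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Staged extract from D365 (path is a placeholder).
df = pd.read_parquet("d365_general_ledger.parquet")

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="LOAD_WH", database="FINANCE", schema="STAGING",
)
try:
    # write_pandas bulk-loads a DataFrame via internal staging.
    success, n_chunks, n_rows, _ = write_pandas(
        conn, df, table_name="D365_GL_STAGING", auto_create_table=True)
    print(f"loaded={success} rows={n_rows}")
finally:
    conn.close()
```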
Posted 1 month ago
5.0 - 10.0 years
6 - 10 Lacs
Mumbai
Remote
Travel Requirement: Willingness to travel to the UK as needed is a plus.

Job Description: We are seeking a highly experienced Senior Data Engineer with a strong background in Microsoft Fabric and completed projects using it. This is a remote position based in India, ideal for professionals who are open to occasional travel to the UK; a valid passport is required.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric
- Lead complex data integration, transformation, and migration projects
- Collaborate with global teams to deliver end-to-end data pipelines and architecture
- Optimize performance of data systems and troubleshoot issues proactively
- Ensure data governance, security, and compliance with industry best practices

Required Skills and Experience:
- 5+ years of experience in data engineering, including architecture and development
- Expertise in Microsoft Fabric, Data Lake, Azure Data Services, and related technologies
- Experience in SQL, data modeling, and data pipeline development
- Knowledge of modern data platforms and big data technologies
- Excellent communication and leadership skills

Preferred Qualifications:
- Good communication skills
- Understanding of data governance and security best practices

Perks & Benefits:
- Work-from-home flexibility
- Competitive salary and perks
- Opportunities for international exposure
- Collaborative and inclusive work culture
Posted 1 month ago
6.0 - 11.0 years
2 - 6 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Location: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai

- Minimum 6-7 years of experience in designing, implementing, and supporting Data Warehousing and Business Intelligence solutions on Microsoft Fabric data pipelines.
- Design and implement scalable and efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python, covering data ingestion, data transformation, and data loading processes (a short notebook sketch follows below).
- Implement ETL processes to extract data from diverse sources, transform it into suitable formats, and load it into the data warehouse or analytical systems.
- Hands-on experience in the design, development, and implementation of Microsoft Fabric and Azure Data Analytics services (Azure Data Factory (ADF), Data Lake, Azure Synapse, Azure SQL, and Databricks).
- Experience writing optimized SQL queries on MS Azure Synapse Analytics (dedicated and serverless resources, etc.).
- Troubleshoot, resolve, and provide deep code-level analysis of Spark to address complex customer issues related to Spark core internals, Spark SQL, Structured Streaming, and Delta.
- Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, considering large-scale data sets.
- Experience with hybrid cloud deployments and integration between on-premises and cloud environments.
- Ensure data security and compliance with data privacy regulations throughout the data engineering process.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Conceptual knowledge of data and analytics, such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data.
- Understanding of data engineering best practices like code modularity, documentation, and version control.
- Collaborate with business stakeholders to gather requirements and create comprehensive technical solutions and documentation.
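The PySpark-plus-Spark-SQL notebook pattern referenced above could look like this bronze-to-silver promotion; the lakehouse paths, table names, and columns are assumptions, and the CREATE OR REPLACE TABLE syntax assumes a Delta-backed catalog as in Fabric or Databricks.

```python
# Illustrative Fabric/Synapse notebook cell mixing PySpark and Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Ingest raw CSV files into a bronze Delta table.
(spark.read.option("header", True).csv("Files/landing/transactions/")
      .write.format("delta").mode("append").saveAsTable("bronze_transactions"))

# Promote cleansed, typed rows to silver with Spark SQL.
spark.sql("""
    CREATE OR REPLACE TABLE silver_transactions AS
    SELECT CAST(txn_id AS BIGINT)          AS txn_id,
           CAST(amount AS DECIMAL(18, 2))  AS amount,
           TO_DATE(txn_date)               AS txn_date
    FROM bronze_transactions
    WHERE txn_id IS NOT NULL
""")
```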
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture, for both batch and real-time ingestion and transformation.

You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards. Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains; a sketch of this kind of tuning follows below. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives. Additionally, documenting pipeline logic, lakehouse architecture, and semantic layers clearly will be essential.

Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python, and experience optimising pipelines for cost-effective data processing in Azure/Fabric.

Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark; strong SQL and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be advantageous.
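One common tuning move in a Fabric Lakehouse is partitioning a large Delta table by its dominant filter column and compacting small files. A minimal sketch, assuming a raw_events table with an event_date column (OPTIMIZE availability depends on the runtime's Delta feature set):

```python
# Sketch: partitioning and small-file compaction in a Fabric Lakehouse.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("Tables/raw_events")

# Repartition by the common filter column before writing, so downstream
# Power BI / SQL queries benefit from partition pruning.
(events.repartition("event_date")
       .write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("events_by_date"))

# Compact the small files produced by frequent incremental loads.
spark.sql("OPTIMIZE events_by_date")
```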
Posted 1 month ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
We are looking for a Senior Data Engineer with 5-8 years of experience.
Posted 1 month ago
10.0 - 14.0 years
18 - 22 Lacs
Noida
Work from Office
We are looking for a visionary UX Design Architect to lead the creation and evolution of user experience strategies across complex, high-impact digital ecosystems. In this strategic leadership role, you will operate beyond day-to-day design execution, focusing instead on systemic thinking, enterprise design governance, and experience orchestration. This role is ideal for an experienced design leader who not only excels in user-centered design methodologies but also has a proven track record of shaping enterprise-level design systems, setting UX standards, and influencing product direction across diverse domains. Prior experience in the Oil & Gas industry will be an advantage.

Key Responsibilities

Strategic Design Leadership
- Define the UX architecture vision and roadmap in alignment with organizational strategy, product roadmaps, and brand objectives.
- Drive enterprise-wide UX design frameworks, ensuring design consistency, scalability, accessibility, and performance across all user-facing products.
- Serve as the UX thought leader, helping shape product strategy through research-backed insights, trend analysis, and human-centered design approaches.

Design Systems & Standards
- Architect and maintain design systems and component libraries that support cross-platform consistency and development efficiency.
- Lead design governance, standardization, and auditing mechanisms to ensure adherence to UX best practices across teams and geographies.

Enterprise Collaboration
- Collaborate closely with CXOs, Product Owners, Engineering Architects, and Business Analysts to bridge business needs with user expectations.
- Act as the design authority in cross-functional teams, ensuring that UX considerations are embedded in technical and business decision-making.

UX Maturity & Mentorship
- Champion UX maturity across the organization, influencing processes, KPIs, and design-thinking culture at scale.
- Mentor, guide, and review the work of senior and principal UX designers, fostering an environment of learning, excellence, and innovation.

Research and Innovation
- Oversee or commission advanced UX research initiatives, market studies, and user journey analytics to drive product direction.
- Stay ahead of emerging technologies (AI/ML, voice UX, immersive tech) and identify opportunities for innovation within design practice.

Required Skills & Experience
- 10+ years of experience in UX design, with at least 4+ years in a strategic or architect-level role.
- Prior experience in the Oil & Gas industry will be an advantage.
- Proven expertise in crafting and implementing UX strategies and design systems across multiple business domains and product portfolios.
- Mastery of tools like Figma, Adobe XD, and Sketch, with hands-on knowledge of design systems management and collaboration platforms.
- Deep understanding of:
  - Strategic UX architecture & scalable design systems
  - Enterprise UX frameworks & governance
  - Human-centred design methodologies
  - Cross-platform design consistency (mobile, desktop, tablet, responsive web)
  - Data-informed UX (research synthesis, usability analytics, accessibility compliance)
- Strong leadership presence with the ability to influence C-suite stakeholders and product leadership.
- Exceptional communication, facilitation, and change management capabilities.
Posted 1 month ago
7.0 - 12.0 years
25 - 35 Lacs
Hyderabad
Work from Office
Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch.

What we are looking for: We are looking for a Data Engineer to design and build ETL pipelines, with a focus on Azure Data Factory and Microsoft Fabric services.

Role: Data Engineer
Location: Hyderabad
Work Mode: WFO
Years of experience: 5+

Mandatory Skills:
- 5+ years of hands-on experience designing and building ETL pipelines, with a focus on Azure Data Factory and Microsoft Fabric services.
- Expertise in Power BI and DAX reporting.
- Expertise in implementing CI/CD pipelines and automating deployment processes using Azure DevOps.
- Hands-on experience migrating data from on-premises databases to Azure Cloud environments, including Managed Instances.
- Proven track record of implementing version control systems and managing data pipeline architecture.
- Strong SQL skills for developing and optimizing queries, stored procedures, and database performance.
- Familiarity with Delta Lake, Synapse Analytics, or other MS Fabric-specific technologies.
- Experience working in large, agile teams to deliver data solutions in an iterative, collaborative environment.

Preferred Skills:
- Knowledge of Microsoft Fabric components such as Dataflows, Data Pipelines, and integration with Power BI for seamless analytics delivery.
- Understanding of data security practices, including data encryption and role-based access control in Azure.
- Experience with event-driven architectures using Azure Event Hubs or similar tools.
- Familiarity with DataOps principles to streamline pipeline monitoring and management.
- Excellent problem-solving skills and ability to quickly adapt to evolving project requirements.

Certifications: Any certification related to Fabric or Azure Cloud is good to have, but not mandatory.

We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients into the digital age, but our top priority is our positive impact on the human experience. We ease anxiety and fear around digital transformation and replace it with opportunity. Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The Executive - SAP Support position at House of Shipping involves designing, implementing, and managing data solutions using Azure Data services to facilitate data-driven decision-making processes. Your primary responsibility will be to develop robust data pipelines and architectures for efficient Extraction, Transformation, and Loading (ETL) operations from various sources into the Data Warehouse or Data Lakehouse.

Your key duties will include designing and developing data solutions on Azure, managing data integration workflows using Azure Data Factory, implementing data storage solutions using Azure Synapse SQL Pool and other Azure services, and monitoring and optimizing the performance of data pipelines and storage solutions. You will also collaborate with data analysts, developers, and business stakeholders to understand data requirements, and troubleshoot data-related issues to ensure data accuracy, reliability, and availability.

To excel in this role, you should have proficiency in Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake Storage, as well as experience with SQL, RDBMS systems, data modeling, ELT batch design, data integration techniques, Python programming, and serverless architecture using Azure Functions (see the sketch below). Experience with Spark, streaming services, Azure AI services, Power BI, and Data Lakehouse architecture will be advantageous.

Preferred certifications for this role include Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Fabric Analytics Engineer Associate. If you are passionate about data solutions and enjoy working in a collaborative environment to deliver effective solutions, we invite you to join our team at House of Shipping.
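For the serverless piece, here is a hedged sketch of a timer-triggered Azure Function using the Python v2 programming model; the schedule, function name, and the downstream pipeline call are assumptions.

```python
# Sketch: a timer-triggered Azure Function (Python v2 model) that kicks
# off a nightly load. Deployed with an azure-functions app, not run directly.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 1 * * *",  # 01:00 daily (NCRONTAB format)
                   arg_name="timer",
                   run_on_startup=False)
def nightly_load(timer: func.TimerRequest) -> None:
    if timer.past_due:
        logging.warning("Timer is past due; running catch-up load.")
    logging.info("Starting nightly warehouse load...")
    # Placeholder: trigger an ADF pipeline run here, e.g. via the
    # azure-mgmt-datafactory SDK or a pipeline REST call.
```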
Posted 1 month ago
9.0 - 13.0 years
0 Lacs
chennai, tamil nadu
On-site
You are an experienced Data Engineering Manager responsible for leading a team of 10+ engineers in Chennai, Tamil Nadu, India. Your primary role is to build scalable data marts and Power BI dashboards to measure marketing campaign performance. Your deep expertise in Azure, Microsoft Fabric, and Power BI, combined with strong leadership skills, enables you to drive data initiatives that facilitate data-driven decision-making for the marketing team.

Your key responsibilities include managing and mentoring the data engineering and BI developer team, overseeing the design and implementation of scalable data marts and pipelines, and leading the development of insightful Power BI dashboards. You collaborate closely with marketing and business stakeholders to gather requirements, align on metrics, and deliver actionable insights. You lead project planning, prioritize analytics projects, and ensure timely and high-impact outcomes using Agile methodologies. You are accountable for ensuring data accuracy, lineage, and compliance through robust validation, monitoring, and governance practices. You promote the adoption of modern Azure/Microsoft Fabric capabilities and industry best practices in data engineering and BI. Cost and resource management are also part of your responsibilities: you optimize infrastructure and licensing costs and manage external vendors or contractors if needed.

Your expertise in Microsoft Fabric, Power BI, Azure (Data Lake, Synapse, Data Factory, Azure Functions), data modeling, data pipeline development, SQL, and marketing analytics is crucial for success in this role. Proficiency in Agile project management, data governance, data quality monitoring, Git, stakeholder management, and performance optimization is also required.

This permanent position requires 9 to 12 years of experience in the Data Engineering domain. If you are passionate about driving data initiatives, leading a team of engineers, and collaborating with stakeholders to deliver impactful analytics solutions, this role offers an exciting opportunity to make a significant impact in the marketing analytics space at the Chennai location.
Posted 1 month ago
4.0 - 6.0 years
12 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Data Engineer (Microsoft Fabric & Lakehouse)
Skills: PySpark, SQL, Data Lakehouse architectures, cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (Salesforce, Workday), REST/OpenAPI-based APIs, data governance, lineage, and RBAC principles
Posted 1 month ago
5.0 - 15.0 years
0 Lacs
maharashtra
On-site
At Derevo we empower companies and people, unlocking the value of data in organizations. With more than 15 years of experience, we design end-to-end data and AI solutions, from integration into modern architectures to the implementation of intelligent models in key business processes. We are looking for your talent as a Data Engineer (MS Fabric)! It is important that you live in Mexico or Colombia.

As a Data Engineer at Derevo, your mission will be key to creating and implementing modern, high-quality data architectures, driving analytical solutions based on Big Data technologies. You will design, maintain, and optimize parallel multiprocessing systems, applying best practices for storage and management in data warehouses, data lakes, and lakehouses. You will be the enthusiast who collects, processes, cleans, and orchestrates large volumes of data, understanding structured and semi-structured models, to integrate and transform multiple sources effectively. You will define the optimal strategy according to business objectives and technical requirements, turning complex problems into achievable solutions that help our clients make data-driven decisions.

You will join the project and its sprints and execute development activities, always applying data best practices and the technologies we implement. You will identify requirements and define scope, participating in sprint planning and engineering sessions with a consultant's vision that adds extra value. You will collaborate proactively in workshops and meetings with the internal team and with the client. You will classify and estimate activities under agile methodologies (epics, features, technical/user stories) and follow up daily to keep the sprint on track. You will meet committed delivery dates and manage risks by communicating deviations in time.

To join as a Data Engineer at Derevo, you need advanced English (technical and business conversations, B2+ or C1) and technical skills in:
- Query and programming languages: T-SQL / Spark SQL, Python (PySpark), JSON / REST APIs, Microsoft Fabric.

It is also important that you identify with soft and business skills such as close communication, working in squads, proactivity and collaboration, constant learning, responsibility and organization, data consulting, requirements management, client-aligned strategy, and client presentations.

Among the benefits you will have at Derevo are support for your holistic well-being, the opportunity to specialize in different areas and technologies, freedom to create, participation in cutting-edge technology projects, and a flexible, structured remote work scheme. If you meet most of the requirements and the profile interests you, do not hesitate to apply to become a derevian and develop your superpower. Our Talent team will contact you!
Posted 1 month ago
0.0 - 1.0 years
1 - 2 Lacs
Lucknow
Work from Office
Key Responsibilities:
- Develop and maintain robust ETL (Extract, Transform, Load) pipelines
- Ensure data quality, integrity, and security across systems
- Integrate data from various sources including APIs, databases, and cloud platforms
- Familiarity with cloud platforms

Required Candidate Profile:
- Proficiency in SQL and Python
- Knowledge of data modeling, warehousing, and pipeline orchestration tools
- Strong understanding of database systems (relational and NoSQL)
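At the entry level this posting targets, an ETL pipeline can be as small as the sketch below: extract from a REST API, transform with pandas, load into SQLite. The API URL and field names are illustrative assumptions.

```python
# Minimal end-to-end ETL sketch: REST API -> pandas -> SQLite.
import pandas as pd
import requests
import sqlite3

# Extract: pull records from a (hypothetical) REST endpoint.
resp = requests.get("https://api.example.com/v1/users", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())

# Transform: basic cleansing and typing.
df = df.dropna(subset=["id"]).drop_duplicates("id")
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

# Load: replace the target table in a local warehouse file.
with sqlite3.connect("warehouse.db") as conn:
    df.to_sql("users", conn, if_exists="replace", index=False)
```

The same extract-transform-load shape scales up by swapping pandas for PySpark and SQLite for a cloud warehouse.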
Posted 1 month ago
4.0 - 7.0 years
7 - 17 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Key Responsibilities:
- Requirement gathering and analysis
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources like relational DBs, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables

Required Skills:
- 4-8 years of experience in Data Engineering or related roles.
- Hands-on experience in Azure Databricks, ADF, or Synapse Analytics
- Proficiency in Python for data processing and scripting.
- Strong command over SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Good to Have:
- Hands-on experience in Microsoft Fabric, Logic Apps, and Azure OpenAI basics
- Experience in Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience in leading a development team.
Posted 1 month ago
3.0 - 8.0 years
7 - 17 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
We are looking for a skilled and experienced Data Engineer with 5-8 years of experience in building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Microsoft Fabric and Azure Databricks, along with strong PySpark, Python, and SQL expertise. Familiarity with Data Lake and Data Warehouse concepts and end-to-end data pipelines is essential.

Key Responsibilities:
- Requirement gathering and analysis
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Microsoft Fabric & Databricks
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources like relational DBs, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables

Required Skills:
- 5-8 years of experience in Data Engineering or related roles.
- Hands-on experience in Microsoft Fabric
- Hands-on experience in Azure Databricks
- Proficiency in PySpark for data processing and scripting.
- Strong command over Python & SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Hands-on experience in performance tuning & optimization on Databricks & MS Fabric.
- Ensure alignment with overall system architecture and data flow.
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.
- Exposure to BI tools like Power BI, Tableau, or Looker.

Good to Have:
- Experience in Azure DevOps.
- Knowledge of Scala or other distributed processing frameworks.
- Familiarity with data security and compliance in the cloud.
- Experience in leading a development team.
Posted 1 month ago
4.0 - 8.0 years
15 - 25 Lacs
Kolkata, Indore, Pune
Hybrid
Position: Data Engineer
Skills: AWS, PySpark, Python; real-time analytics with Azure streaming and Kafka
Years of experience: 4 to 8
Location: Pune / Kolkata / Indore

Skills:
- Proficiency in programming languages such as Python and PySpark, and associated libraries and frameworks.
- Hands-on and proficient in databases (on-prem and cloud), messaging apps, API development and libraries.
- Exposure and experience in Microsoft Fabric, Azure ADLS, Synapse, ADF, Databricks, Azure Stream Analytics, BLOB, and other associated cloud services.
- Knowledge of real-time analytics: Kafka, Azure Streaming, etc. (a streaming sketch follows this posting).
- Knowledge of Collibra, Atlan, OpenMetadata, Flink, or Airflow would be considered a plus.

Role Description:
1. Play a role in end-to-end Data Engineering and Analytics solutions, including data, technology infrastructure, and models.
2. Design and develop solutions according to the business objective.
3. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the solutions in the real world.
4. Verify data quality, and/or ensure it via data cleaning.
5. Supervise the data acquisition process if more data is needed.
6. Define data augmentation pipelines.
7. Work with the technology team to plan, execute, and deliver Data Engineering product-based projects. Contribute towards best practices for handling the projects.
8. Provide technical design, implementation, and support services around the creation of APIs, models, and integration pipelines.
9. Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
10. Understanding of the auxiliary practical concerns in production systems and environments.
11. Strong communication, presentation, and consulting skills, including technical writing skills and the ability to listen to and understand client issues.
12. Needs to be fully hands-on.

Interested candidates may share their updated CV.
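For the real-time analytics skills listed above, here is a hedged Kafka-to-Delta sketch with Spark Structured Streaming; the broker, topic, schema, and paths are assumptions, and the cluster needs the spark-sql-kafka connector on its classpath.

```python
# Sketch: Kafka -> Spark Structured Streaming -> Delta sink.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

# Assumed JSON payload schema for the telemetry topic.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "telemetry")
          .load()
          # Kafka delivers bytes; decode and parse the JSON value column.
          .select(F.from_json(F.col("value").cast("string"), schema).alias("j"))
          .select("j.*"))

# Append continuously to a Delta table with exactly-once checkpointing.
(stream.writeStream.format("delta")
       .option("checkpointLocation", "/chk/telemetry")
       .outputMode("append")
       .start("/delta/telemetry"))
```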
Posted 1 month ago
4.0 - 7.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Key Responsibilities:
- Design, develop, and maintain interactive dashboards and reports in Power BI.
- Utilize Microsoft Fabric (including OneLake, Lakehouse, Dataflows Gen2, and Pipelines) to build scalable data solutions.
- Integrate data from multiple sources using Fabric Data Factory Pipelines, Synapse Real-Time Analytics, and Power Query.
- Implement and optimize data models, measures (DAX), and ETL processes.
- Collaborate with data engineers, analysts, and stakeholders to understand data needs and deliver actionable insights.
- Ensure data governance, security, and compliance using Microsoft Purview and Fabric's built-in governance tools.
- Perform performance tuning, dataset optimization, and report deployment across workspaces.
- Document technical solutions and provide user training/support when necessary.

Good to Have:
- Microsoft Certified: Fabric Analytics Engineer or Power BI Data Analyst Associate.
- Knowledge of Azure Data Services (Data Factory, Synapse, Azure SQL).
- Experience with Row-Level Security (RLS) and large dataset optimization in Power BI.
- Familiarity with GitHub or Azure DevOps for version control.
- Exposure to real-time streaming data and KQL queries (Kusto).

Job Requirements:
- Strong experience with Power BI, including DAX, Power Query, and Fabric.
- Proficiency in SQL and data modeling techniques.
- Experience with Azure services (e.g., Synapse, Data Factory).
- Ability to optimize Power BI reports for performance.
- Excellent communication and problem-solving skills.
Posted 1 month ago
6.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python, covering data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW, etc., will be a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 1 month ago
7.0 - 12.0 years
15 - 20 Lacs
Visakhapatnam
Work from Office
Dear Aspirant,

Greetings from Miracle Software Systems Inc. (https://miraclesoft.com/). Please review the job description below and share your updated resume along with the requested details.

Requirement Details:
Position: Data Engineer
Location: Visakhapatnam
Experience: 6+ years
Mode of opportunity: Permanent

Job Description:
We are looking for a highly skilled Data Engineer with extensive experience in Microsoft Azure, particularly with ADF and Fabric pipeline development, and a strong understanding of the Medallion Architecture (Bronze, Silver, Gold layers). The ideal candidate will be responsible for designing and optimizing end-to-end data pipelines across Lakehouses and Warehouses in Microsoft Fabric, and will work closely with business and engineering teams to define scalable, governed data models.

Responsibilities:
- Develop and manage complex data pipelines using Azure Data Factory (ADF) and Microsoft Fabric.
- Implement and maintain Medallion Architecture layers (Bronze, Silver, Gold); a silver-to-gold sketch follows this posting.
- Design governed, scalable data models tailored to business requirements.
- Develop and optimize PySpark-based data processing for large-scale data transformations.
- Integrate with reporting tools such as Power BI for seamless data visualization.
- Ensure robust data governance, security, and performance in large-scale Fabric deployments.

Requirements:
- Strong expertise in Azure Data Factory (ADF) and Microsoft Fabric
- Hands-on experience with OneLake, Lakehouse Explorer, and Power BI integration
- Solid understanding of data governance, security, and performance tuning
- SAP knowledge is required
- Proficiency in PySpark is mandatory
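The gold layer in a medallion design is typically a small set of business-level aggregates. A minimal PySpark sketch, assuming a silver table silver_sales with txn_date, region, and amount columns (all names illustrative):

```python
# Illustrative silver-to-gold promotion in a Fabric notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

silver = spark.table("silver_sales")

# Aggregate cleansed transactions into monthly revenue per region.
gold = (silver.groupBy(F.date_trunc("month", "txn_date").alias("month"),
                       "region")
              .agg(F.sum("amount").alias("revenue"),
                   F.count("*").alias("txn_count")))

# Gold tables like this typically back a governed Power BI semantic model.
gold.write.format("delta").mode("overwrite").saveAsTable("gold_monthly_revenue")
```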
Posted 1 month ago
3.0 - 7.0 years
12 - 15 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
We are looking for an experienced Data Engineer/BI Developer with strong hands-on expertise in Microsoft Fabric technologies, including OneLake, Lakehouse, Data Lake, Warehouse, and Real-Time Analytics, along with proven skills in Power BI, Azure Synapse Analytics, and Azure Data Factory (ADF). The ideal candidate should also possess working knowledge of DevOps practices for data engineering and deployment automation.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric components: OneLake, Data Lake, Lakehouse, Warehouse, and Real-Time Analytics.
- Build and manage end-to-end data pipelines integrating structured and unstructured data from multiple sources.
- Integrate Microsoft Fabric with Power BI, Synapse Analytics, and Azure Data Factory to enable modern data analytics solutions.
- Develop and maintain Power BI datasets, dashboards, and reports using data from Fabric Lakehouses or Warehouses.
- Implement data governance, security, and compliance policies within the Microsoft Fabric ecosystem.
- Collaborate with stakeholders for requirements gathering, data modeling, and performance tuning.
- Leverage Azure DevOps / Git for version control, CI/CD pipelines, and deployment automation of data artifacts.
- Monitor, troubleshoot, and optimize data flows and transformations for performance and reliability.

Required Skills:
- 3-8 years of experience in data engineering, BI development, or similar roles.
- Strong hands-on experience with the Microsoft Fabric ecosystem: OneLake, Data Lake, Lakehouse, Warehouse, Real-Time Analytics.
- Proficient in Power BI for interactive reporting and visualization.
- Experience with Azure Synapse Analytics, Azure Data Factory (ADF), and related Azure services.
- Good understanding of data modeling, SQL, T-SQL, and Spark/Delta Lake concepts.
- Working knowledge of DevOps tools and CI/CD processes for data deployment (Azure DevOps preferred).
- Familiarity with DataOps and version control practices for data solutions.

Preferred Qualifications:
- Microsoft certifications (e.g., DP-203, PL-300, or Microsoft Fabric certifications) are a plus.
- Experience with Python, notebooks, or KQL for Real-Time Analytics is advantageous.
- Knowledge of data governance tools (e.g., Microsoft Purview) is a plus.

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
We are seeking a highly skilled Microsoft Fabric + Power BI Developer to support a part-time project engagement. Your expertise in Microsoft Fabric and proficiency in Power BI report development and data modeling will be essential for this remote opportunity, which is suitable for offshore candidates.

In this role, you will be responsible for designing and developing robust data solutions using Microsoft Fabric components such as Lakehouse, Data Warehouse, and Real-Time Analytics. Your tasks will include creating insightful and interactive Power BI reports and dashboards, performing data modeling, and writing efficient DAX queries and Power Query (M Language) scripts. Additionally, you will be building and maintaining data pipelines using Fabric Data Factory or Azure Data Factory, collaborating with stakeholders to understand reporting requirements, and delivering actionable insights. It will also be your responsibility to ensure performance optimization and best practices in report development and data architecture.

To excel in this position, you must have proven experience with Microsoft Fabric technologies and strong hands-on experience in Power BI development, encompassing dashboards, reports, DAX, and Power Query. In-depth knowledge of data modeling, ETL processes, and data visualization best practices, along with experience in Fabric Data Factory or Azure Data Factory, is crucial. Excellent analytical and problem-solving skills, as well as strong communication and collaboration abilities, are also required for this role.
Posted 2 months ago