Jobs
Interviews

57 Data Factory Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will design, develop, and manage end-to-end data pipelines on Azure Cloud, with a primary focus on performance, scalability, and delivering business value through efficient data solutions.

Responsibilities:
- Collaborate with various teams to define data requirements.
- Implement data ingestion, transformation, and modeling pipelines supporting structured and unstructured data.
- Work with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting.
- Optimize data performance and cost through efficient architecture and coding practices.
- Ensure data security, privacy, and compliance with organizational policies.
- Monitor, troubleshoot, and improve data workflows for reliability and performance.

Requirements:
- 5 to 7 years of experience as a Data Engineer, with at least 2 years on the Azure Data Stack.
- Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration.
- Strong skills in data modeling, ETL/ELT design, and performance tuning.
- Proficiency in SQL and Python/PySpark scripting.
- Experience with CI/CD pipelines and DevOps practices for data solutions.
- Understanding of data governance, security, and compliance frameworks.
- Excellent communication, problem-solving, and stakeholder management skills.
- A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.
Having the Microsoft Azure Data Engineer certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hubs), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous. Join us to work with the latest in Microsoft's modern data stack, collaborate with a team of passionate data professionals, work on enterprise-grade, large-scale data projects, experience a fast-paced, learning-focused work environment, and have immediate visibility and impact on key business decisions.
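The ETL/ELT design skills this listing asks for come down to a repeatable extract-transform-load pattern. Below is a minimal, stdlib-only sketch of that pattern; the field names and data-quality rule are illustrative assumptions, and a production pipeline would target Data Factory, PySpark, or a Synapse table rather than in-memory lists.

```python
import csv
import io

# Minimal ETL sketch: extract raw rows, transform (clean + cast), load into a target.
# Field names here are hypothetical; a real pipeline would land in a warehouse table.

RAW_CSV = """order_id,amount,region
1001,250.50,south
1002,,north
1003,99.99,SOUTH
"""

def extract(source: str) -> list[dict]:
    """Read raw records from a CSV source (stand-in for a Data Lake file)."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop incomplete rows, normalise region casing, and cast amounts."""
    out = []
    for r in rows:
        if not r["amount"]:          # data-quality rule: amount is required
            continue
        out.append({"order_id": int(r["order_id"]),
                    "amount": float(r["amount"]),
                    "region": r["region"].strip().lower()})
    return out

def load(rows: list[dict], target: list) -> int:
    """Append to the target store (stand-in for a warehouse table); return row count."""
    target.extend(rows)
    return len(rows)

warehouse: list[dict] = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
```

Separating the three stages this way is what makes pipelines testable and tunable stage by stage, which is the core of the performance-tuning skill the listing mentions.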

Posted 2 days ago

Apply

4.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Power BI + Microsoft Fabric Lead with over 10 years of experience, you will lead the strategy and architecture for BI initiatives. Your responsibilities will include designing and delivering end-to-end Power BI and Microsoft Fabric solutions, collaborating with stakeholders to define data and reporting goals, and driving the adoption of best practices and performance optimization. Your expertise in Power BI, including DAX, Power Query, and advanced visualizations, will be essential to the success of high-impact BI initiatives.

As a Power BI + Microsoft Fabric Developer with 4+ years of experience, you will develop dashboards and interactive reports in Power BI, build robust data models, and implement Microsoft Fabric components such as Lakehouse, OneLake, and Pipelines. Working closely with cross-functional teams, you will gather and refine requirements to ensure high performance and data accuracy across reporting solutions. Your hands-on experience with Microsoft Fabric tools such as Data Factory, OneLake, Lakehouse, and Pipelines will be crucial for delivering effective data solutions.

Key Skills Required:
- Strong expertise in Power BI (DAX, Power Query, advanced visualizations)
- Hands-on experience with Microsoft Fabric (Data Factory, OneLake, Lakehouse, Pipelines)
- Solid understanding of data modeling, ETL, and performance tuning
- Ability to collaborate effectively with business and technical teams

Joining our team will give you the opportunity to work with cutting-edge Microsoft technologies, lead high-impact BI initiatives, and thrive in a collaborative, innovation-driven environment. We offer a competitive salary and benefits package to reward your expertise and contributions. If you are passionate about leveraging Power BI and Microsoft Fabric to drive data-driven insights and solutions, we invite you to apply for this full-time position.
Application Questions:
- What is your current and expected CTC?
- What is your notice period? If you are serving your notice period, what is your Last Working Day (LWD)?

Experience Required:
- Power BI: 4 years (required)
- Microsoft Fabric: 4 years (required)

Work Location: In person

Posted 2 days ago

Apply

2.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking Data Architects, Senior Data Architects, and Principal Data Architects to join our team. The role combines hands-on contribution, customer engagement, and technical team management.

Responsibilities:
- Design, architect, deploy, and maintain solutions on the Microsoft Azure platform using various cloud and big data technologies.
- Manage the full life cycle of Data Lake / Big Data solutions, from requirement gathering and analysis through platform selection, architecture design, and deployment.
- Implement scalable solutions on the cloud and collaborate with business domain experts, data scientists, and application developers to develop Big Data solutions.
- Explore and learn new technologies for creative problem solving, and mentor a team of Data Engineers.

The ideal candidate should have:
- Strong hands-on experience implementing Data Lakes with Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Proficiency in programming and debugging in Python and Scala/Java; experience building REST services is beneficial.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- A good understanding of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions, hands-on exposure to NoSQL databases, and data modelling in Hive.
Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer (Power BI) at Acronotics Limited, you will design and manage data pipelines that integrate Power BI, OLAP cubes, documents such as PDFs and presentations, and external data sources with Azure AI. Your primary responsibility will be to ensure that both structured and unstructured financial data is properly indexed and made accessible for semantic search and LLM applications.

In this full-time, on-site role based in Bengaluru, you will:
- Extract data from Power BI datasets, semantic models, and OLAP cubes.
- Connect and transform data using Azure Synapse, Data Factory, and Lakehouse architecture.
- Preprocess PDFs, PPTs, and Excel files using tools like Azure Form Recognizer or Python-based solutions.
- Design data ingestion pipelines for external web sources, such as commodity prices.
- Collaborate with AI engineers to provide cleaned, contextual data for vector indexes.

To succeed in this role, you should have strong experience with the Power BI REST/XMLA APIs; expertise in OLAP systems such as SSAS and SAP BW, data modeling, and ETL design; hands-on experience with Azure Data Factory, Synapse, or Data Lake; and familiarity with JSON, DAX, and M queries.

Join Acronotics Limited in revolutionizing businesses with cutting-edge robotic automation and artificial intelligence solutions, and let your data engineering expertise advance automated solutions that redefine how products are manufactured, marketed, and consumed. Discover Radium AI, our product that automates bot monitoring and support, on our website today.
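Preparing documents for semantic search, as this listing describes, typically means splitting extracted text into overlapping chunks before embedding and indexing. A minimal stdlib sketch of that step (chunk size and overlap are illustrative assumptions, not this employer's actual pipeline):

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split extracted document text into overlapping character chunks.

    Overlap preserves context across chunk boundaries, so a sentence cut
    in half is still fully present in at least one chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, step = [], size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
        if start + size >= len(text):
            break
    return chunks

# Hypothetical extracted text, e.g. from Azure Form Recognizer output.
doc = "Commodity prices rose sharply in Q3; copper led the gains." * 2
pieces = chunk_text(doc, size=40, overlap=10)
```

Real pipelines usually chunk by tokens or sentences rather than characters, but the overlap idea is the same.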

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

It is exciting to be a part of a company where the employees truly believe in the mission and values of the organization. At Fractal Analytics, we are dedicated to bringing passion and customer focus to our business operations. Our vision is to empower every human decision in the enterprise, creating a world where individual choices, freedom, and diversity are celebrated. We believe in fostering an ecosystem where human imagination plays a vital role in every decision-making process, constantly challenging ourselves to innovate and improve. We value individuals who empower imagination with intelligence, and we call them true Fractalites. We are currently seeking a Data Engineer with 2-5 years of experience to join our team in Bangalore, Gurgaon, Chennai, Coimbatore, Pune, or Mumbai. The ideal candidate will be responsible for ensuring that production-related activities are delivered within the agreed Service Level Agreements (SLAs). This role involves working on issues, bug fixes, minor changes, and collaborating with the development team when necessary to address any challenges and implement enhancements. 
Key Technical Skills:
- Strong proficiency in Azure Data Engineering services, specifically Azure Data Factory, Azure Databricks, and Storage (ADLS Gen2)
- Experience in web app / App Service development
- Proficiency in Python, PySpark, and SQL
- Hands-on experience with Log Analytics and Application Insights
- Strong expertise in Azure SQL

The following non-technical skills are also mandatory:
- Drive incident and problem resolution to support key operational activities
- Collaborate on change-ticket review, approvals, and planning with internal teams
- Support the transition of projects from project teams to support teams
- Serve as an escalation point for operations-related issues
- Experience with ServiceNow is preferred
- Strong attention to detail with a focus on quality and accuracy
- Ability to manage multiple tasks with appropriate priority and time-management skills
- Flexibility in work content and eagerness to learn
- Knowledge of service support, operation, and design processes (ITIL)
- Strong relationship-building skills to collaborate with stakeholders at all levels and across organizational boundaries

If you thrive in a dynamic environment and enjoy working with motivated individuals who are passionate about growth, a career with us at Fractal Analytics may be the perfect fit. If this role does not align with your experience, you can express interest in future opportunities through the Introduce Yourself feature on our website, or create an account to receive email alerts for new postings that match your interests.

Posted 6 days ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

Karnataka

On-site

As a Power BI Developer, you will be an integral part of our dynamic team, designing, developing, and implementing advanced Power BI solutions that support data-driven decision-making. You will work closely with business stakeholders to understand their requirements and translate them into visually appealing, high-performance Power BI reports.

Your responsibilities will span several key areas:
- Data modeling and analysis: create robust data models, use advanced DAX for complex calculations, and transform and clean data with Power Query.
- Report development: build interactive Power BI reports with diverse visualizations, optimize report performance, and enforce data access control through Row-Level Security (RLS).
- Administration and integration: manage Power BI Service capacities, licenses, and deployment strategies, and integrate Power BI with other Microsoft tools for automation and data processing. Your expertise in cloud platforms such as Microsoft Fabric, Data Factory, and Data Lake will be crucial for optimizing data pipelines and scalability.
- Collaboration and leadership: work with stakeholders to deliver actionable insights, mentor junior team members on best practices, ensure adherence to coding and architectural standards, and deploy reports to production environments.

To qualify, you should have 2 to 6 years of hands-on experience with Power BI and related technologies, with proficiency in data modeling, DAX, Power Query, visualization techniques, and SQL. Experience with ETL processes and cloud platforms, strong problem-solving abilities, excellent communication skills, and the ability to work both independently and collaboratively are essential.
Preferred qualifications include experience with R or Python for custom visual development and certification in Power BI or related technologies. Please note that this position requires working at our South Bangalore office at least 4 out of 5 days; no remote option is available. Local candidates are preferred, and relocation assistance will not be provided. This is a full-time position based in Bangalore (South), offering a competitive salary of 500,000 to 1,200,000 INR per year. If you meet the qualifications and are eager to contribute to our team, we encourage you to apply before the deadline of April 15, 2025.
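Row-Level Security, mentioned in the listing above, restricts each user to the rows they are entitled to see. In Power BI it is defined as DAX filters attached to roles; the underlying idea can be sketched in plain Python (the user-to-region security table and field names here are hypothetical):

```python
# Sketch of Row-Level Security: each user sees only rows for their assigned regions.
# In Power BI this would be a DAX role filter; here we simulate the filtering logic.

SALES = [
    {"region": "north", "amount": 120},
    {"region": "south", "amount": 340},
    {"region": "north", "amount": 75},
]

USER_REGIONS = {            # security table: user -> regions they may read
    "asha@example.com": {"north"},
    "ravi@example.com": {"north", "south"},
}

def rows_for(user: str, rows: list[dict]) -> list[dict]:
    """Apply the row-level filter for a user; unknown users see nothing."""
    allowed = USER_REGIONS.get(user, set())
    return [r for r in rows if r["region"] in allowed]
```

Defaulting unknown users to an empty set (deny by default) mirrors how RLS roles behave when a user matches no role filter.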

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Hybrid

Design and implement scalable data architectures on Azure and Databricks. Lead end-to-end data pipelines: ingestion, transformation, modeling, orchestration, and consumption. Work on data management, data science enablement, and data visualization layers.

Required Candidate profile: Prior experience in proposal writing and effort estimation is mandatory. Strong expertise in Azure Data Services: Data Factory, Data Lake, Synapse, SQL DB. Hands-on experience with Databricks.

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 33 Lacs

Noida

Work from Office

Senior Data Engineer
Experience: 5+ years | Location: Noida, 5 days work from office | Shift: 1 pm to 10 pm

Job Summary:
We are seeking a highly skilled Senior Data Engineer / BI Developer with deep expertise in SQL Server database development and performance tuning, along with experience in ETL pipelines (SSIS), cloud-based data engineering (Azure Databricks), and data visualization (Power BI / Sigma). This role is critical in designing, optimizing, and maintaining enterprise-grade data solutions that power analytics and business intelligence across the organization.

Key Responsibilities:
- Design, develop, and optimize SQL Server databases in Azure, including schema design, indexing strategies, and stored procedures.
- Perform advanced SQL performance tuning, query optimization, and troubleshooting of slow-running queries.
- Develop and maintain SSIS packages for complex ETL workflows, including error handling and logging.
- Build scalable data pipelines in Azure Databricks.
- Create and maintain Power BI and Sigma dashboards, ensuring data accuracy, usability, and performance.
- Implement and enforce data governance, security, and compliance best practices.
- Collaborate with cross-functional teams including data analysts, data scientists, and business stakeholders.
- Participate in code reviews, data modeling, and architecture planning for new and existing systems.
- Apply backup and recovery strategies, high availability, and disaster recovery.

Required Skills & Experience:
- 5 to 8 years of hands-on experience with Microsoft SQL Server (2016/2022 or later).
- Strong expertise in T-SQL, stored procedures, functions, views, indexing, and query optimization.
- Proven experience with SSIS for ETL development and deployment.
- Experience with Azure Databricks, Spark, and Delta Lake for big data processing.
- Proficiency in Power BI and/or Sigma for data visualization and reporting.
- Solid understanding of data warehousing, star/snowflake schemas, and dimensional modeling.
- Familiarity with CI/CD pipelines, Git, and DevOps for data.
- Strong communication and documentation skills.

Preferred Qualifications:
- Experience with Azure Data Factory, Synapse Analytics, or Azure SQL Database.
- Knowledge of NoSQL databases (e.g., MongoDB, Cosmos DB) is a plus.
- Familiarity with data lake architecture and cloud storage (e.g., ADLS Gen2).
- Experience in agile environments and working with JIRA or Azure DevOps.
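Dimensional modeling, listed in the requirements above, often involves slowly changing dimensions: a Type 2 update keeps history by closing the current dimension row and inserting a new "current" one instead of overwriting. A minimal sketch in plain Python (column names are illustrative; in practice this would be a T-SQL MERGE or an SSIS data flow):

```python
# Sketch of a Type 2 slowly changing dimension: on change, close out the
# current row (is_current=False, set valid_to) and append a new current row.

def scd2_upsert(dim: list[dict], key: str, new_city: str, as_of: str) -> None:
    for row in dim:
        if row["customer_id"] == key and row["is_current"]:
            if row["city"] == new_city:
                return                      # no change: nothing to do
            row["is_current"] = False       # close out the old version
            row["valid_to"] = as_of
            break
    dim.append({"customer_id": key, "city": new_city,
                "valid_from": as_of, "valid_to": None, "is_current": True})

dim_customer = [{"customer_id": "C1", "city": "Chennai",
                 "valid_from": "2023-01-01", "valid_to": None, "is_current": True}]
scd2_upsert(dim_customer, "C1", "Noida", "2025-06-01")
```

After the update the dimension holds both versions, which is what lets fact tables join to the customer attributes that were valid at transaction time.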

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are an experienced Data Engineering Manager responsible for leading a team of 10+ engineers in Chennai, Tamil Nadu, India. Your primary role is to build scalable data marts and Power BI dashboards to measure marketing campaign performance. Your deep expertise in Azure, Microsoft Fabric, and Power BI, combined with strong leadership skills, enables you to drive data initiatives that facilitate data-driven decision-making for the marketing team.

Your key responsibilities include managing and mentoring the data engineering and BI developer team, overseeing the design and implementation of scalable data marts and pipelines, and leading the development of insightful Power BI dashboards. You collaborate closely with marketing and business stakeholders to gather requirements, align on metrics, and deliver actionable insights. You lead project planning, prioritize analytics projects, and ensure timely, high-impact outcomes using Agile methodologies. You are accountable for data accuracy, lineage, and compliance through robust validation, monitoring, and governance practices, and you promote the adoption of modern Azure/Microsoft Fabric capabilities and industry best practices in data engineering and BI. Cost and resource management are also part of the role: you optimize infrastructure and licensing costs and manage external vendors or contractors as needed.

Expertise in Microsoft Fabric, Power BI, Azure (Data Lake, Synapse, Data Factory, Azure Functions), data modeling, data pipeline development, SQL, and marketing analytics is crucial for success in this role, along with proficiency in Agile project management, data governance, data quality monitoring, Git, stakeholder management, and performance optimization.
This permanent position requires 9 to 12 years of experience in the Data Engineering domain. If you are passionate about driving data initiatives, leading a team of engineers, and collaborating with stakeholders to deliver impactful analytics solutions, this role offers an exciting opportunity to make a significant impact in marketing analytics at the Chennai location.

Posted 1 week ago

Apply

5.0 - 6.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Responsibilities:
- Analyze and implement user requirements/business needs as new and/or enhanced product functionality.
- Design, code, test, and document software.
- Engage in the full software development lifecycle, from design and implementation to testing and deployment.
- Assist in the packaging and delivery of finished software products to clients.
- Communicate with technical and business leaders on business requirements, system capabilities, programming progress, and enhancement status.
- Assist in supporting clients by providing technical product support.
- Assist in the maintenance of hosted products and environments.
- Learn new technologies and help the team implement them in our products.
- Gain a deep understanding of distributed architecture and contribute to the scalability and efficiency of our blockchain projects.
- Mentor junior developers to enhance their skills and contribute to the success of the team.
- Use general developer tools such as GitHub, VS Code, and other industry-standard platforms.

Skills required: .NET, Azure, Angular, unit testing, blockchain (intermediate to advanced knowledge), Docker, Linux, DevOps.
- Comfortable with Linux and command-line interfaces.
- Passion for technology, identifying issues, and problem solving.
- Process-oriented mindset, with the ability to create, document, perform, and continually improve how team goals are achieved.
- Excellent verbal and written communication skills in English.
- Ability to work cross-functionally with team members of varied backgrounds (e.g., business, product, development, testing).
- Ability to build professional relationships, demonstrate a spirit of collaboration, and take a flexible approach to work.

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 19 Lacs

Bengaluru

Remote

Role: Azure Specialist - DBT
Location: Bangalore | Mode: Remote

Education and Work Experience Requirements:
- Overall, 5 to 9 years of experience in the IT industry, with a minimum of 6 years working in data engineering.
- Translate complex business requirements into analytical SQL views using DBT.
- Support data ingestion pipelines using Airflow and Data Factory.
- Develop DBT macros to enable scalable and reusable code automation.

Mandatory Skills:
- Strong experience with DBT (Data Build Tool), or strong SQL / relational DWH knowledge - must have.
- Proficiency in SQL and a strong understanding of relational data warehouse concepts.
- Hands-on experience with Databricks (primarily Databricks SQL) - good to have.
- Familiarity with Apache Airflow and Azure Data Factory - nice to have.
- Experience working with Snowflake - nice to have.

Additional Information:
- Qualifications: BE, MS, M.Tech, or MCA.
- Certifications: Azure Big Data, Databricks Certified Associate.

Posted 1 week ago

Apply

4.0 - 7.0 years

15 - 27 Lacs

Bengaluru

Hybrid

Key Responsibilities:
- Design, develop, and maintain interactive dashboards and reports in Power BI.
- Utilize Microsoft Fabric (including OneLake, Lakehouse, Dataflows Gen2, and Pipelines) to build scalable data solutions.
- Integrate data from multiple sources using Fabric Data Factory pipelines, Synapse Real-Time Analytics, and Power Query.
- Implement and optimize data models, measures (DAX), and ETL processes.
- Collaborate with data engineers, analysts, and stakeholders to understand data needs and deliver actionable insights.
- Ensure data governance, security, and compliance using Microsoft Purview and Fabric's built-in governance tools.
- Perform performance tuning, dataset optimization, and report deployment across workspaces.
- Document technical solutions and provide user training/support when necessary.

Good to Have:
- Microsoft Certified: Fabric Analytics Engineer or Power BI Data Analyst Associate.
- Knowledge of Azure Data Services (Data Factory, Synapse, Azure SQL).
- Experience with Row-Level Security (RLS) and large-dataset optimization in Power BI.
- Familiarity with GitHub or Azure DevOps for version control.
- Exposure to real-time streaming data and KQL (Kusto) queries.

Job Requirements:
- Strong experience with Power BI, including DAX, Power Query, and Fabric.
- Proficiency in SQL and data modeling techniques.
- Experience with Azure services (e.g., Synapse, Data Factory).
- Ability to optimize Power BI reports for performance.
- Excellent communication and problem-solving skills.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 12 Lacs

Chennai

Work from Office

Minimum 3 years as a Data Engineer (GenAI platform). ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory. Experience in Azure Databricks, Snowflake, Airflow, Python, SQL, Spark, Spark Streaming, AWS EKS, CI/CD (Jenkins), Elasticsearch, SOLR, OpenSearch, and Vespa.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As the Lead Data Engineer at Mastercard, you will design and build scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. You will mentor and guide other engineers, foster a culture of curiosity and continuous improvement, and create robust ETL/ELT pipelines that serve business-critical use cases. You will lead by example: writing high-quality, testable code, participating in architecture and design discussions, and decomposing complex problems into scalable components aligned with platform and product goals. Championing best practices in data engineering, you will drive collaboration across teams, support data governance and quality efforts, and continuously learn and apply new technologies to improve team productivity and platform reliability.

To succeed in this role, you should have:
- At least 5 years of hands-on experience in data engineering, with strong PySpark and Python skills.
- Solid experience designing and implementing data models, pipelines, and batch/stream processing systems.
- Comfort working with cloud platforms such as AWS, Azure, or GCP, and a strong foundation in data modeling, database design, and performance optimization.
- A bachelor's degree in computer science, engineering, or a related field, and experience in Agile/Scrum development environments.
- Experience with CI/CD practices, version control, and automated testing, plus the ability to mentor and uplift junior engineers.
- Familiarity with cloud services such as S3, Glue, Data Factory, and Databricks (highly desirable).

Exposure to data governance tools and practices, orchestration tools, containerization, and infrastructure automation will be advantageous. A master's degree, relevant certifications, or contributions to open-source/data engineering communities are considered a bonus.
Exposure to machine learning data pipelines or MLOps is also a plus. If you are a curious, adaptable, and driven individual who enjoys problem solving and continuous improvement, and you have a passion for building clean data pipelines and cloud-native designs, this role is for you. Join us at Mastercard and be part of a team dedicated to unlocking the potential of data assets and shaping the future of data engineering.
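The batch/stream processing systems this listing mentions share one core idea: aggregating events into time windows. A minimal pure-Python sketch of tumbling-window (fixed, non-overlapping) aggregation; the event shape and window size are illustrative, and a production system would express this in Spark Structured Streaming rather than a loop:

```python
from collections import defaultdict

def tumbling_window_counts(events: list[tuple[int, str]], window_s: int = 60) -> dict:
    """Group (timestamp_seconds, key) events into fixed, non-overlapping windows
    and count events per (window_start, key), as a streaming engine would."""
    counts: dict = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s   # bucket the event's window
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical click/view events with second-resolution timestamps.
events = [(5, "click"), (30, "click"), (61, "click"), (62, "view"), (130, "click")]
result = tumbling_window_counts(events, window_s=60)
```

The same bucketing arithmetic works identically for a bounded batch or an unbounded stream, which is why modern engines unify the two models.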

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ETL Developer at our client, an EU subsidiary of a global financial bank, you will design and build interfaces and integrate data from various internal and external sources into the new Enterprise Data Warehouse environment. Your primary responsibility will be developing ETL solutions using Microsoft and Azure technologies while adhering to industry ETL standards, architecture, and best practices.

You will act as a technical expert throughout the software development lifecycle: designing, coding, unit testing, supporting, and debugging data warehouse software components. Your cloud and ETL engineering expertise will be instrumental in solving problems and designing effective approaches. You will troubleshoot and debug ETL pipelines, optimize query performance, and create unit tests. Collaborating with the Development Lead, DWH Architect, QA Engineers, and business analysts, you will help plan, implement, and deliver efficient ETL strategies aligned with end-user requirements. You will also create technical documentation, reports, and dashboards in the BI portal and support internal audit processes.

Key Mandatory Skills:
- Proven work experience as an ETL Developer.
- Advanced knowledge of relational databases and dimensional Data Warehouse modeling.
- Expertise in the Microsoft data stack, with experience in Azure and Synapse Analytics.
- Experience designing and implementing data transformation and ETL layers using tools like Data Factory and notebooks.
- Experience with Power BI for report and dashboard creation.
- Strong SQL knowledge for developing complex queries and working with stored procedures, views, indexes, etc.
- Familiarity with CI/CD tools and principles, preferably Azure DevOps or Bamboo.
- Proficiency in at least one scripting language; Python is an advantage.
- Experience with Git repositories and version control tools such as GitHub, Azure DevOps, or Bitbucket.
- Experience working in Agile projects, preferably using JIRA.
- Excellent problem-solving skills, communication abilities, and understanding of data governance concepts.

Nice-to-Have Skills:
- Microsoft Fabric
- Snowflake
- Background in SSIS / SSAS / SSRS
- Azure DevTest Labs, ARM templates
- Azure Purview
- Banking or finance industry experience

Your ability to work independently, collaborate effectively in a team, and communicate complex information clearly will be essential to success in this role. If you have a passion for data engineering, a keen eye for detail, and a proactive approach to problem solving, we encourage you to apply.
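Troubleshooting and debugging ETL pipelines, as this listing describes, usually starts with defensive extraction: transient source failures are retried and every attempt is logged so operators can see what happened. A small stdlib sketch of that pattern (the flaky source is simulated; the retry count and error type are illustrative assumptions):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def with_retries(fn, attempts: int = 3, delay_s: float = 0.0):
    """Run an extract step, retrying transient failures and logging each attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except ConnectionError as exc:        # transient error: retry
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise                          # retries exhausted: surface the failure
            time.sleep(delay_s)

calls = {"n": 0}
def flaky_extract():
    """Simulated source that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("source unavailable")
    return [{"id": 1}, {"id": 2}]

rows = with_retries(flaky_extract)
```

Catching only the transient error type matters: a schema or logic error should fail fast rather than be retried.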

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You are a Senior Data Engineer with expertise in building scalable data pipelines using Microsoft Fabric. Your primary responsibilities will be developing and managing data pipelines with Microsoft Fabric Data Factory and OneLake, and designing ingestion and transformation pipelines for both structured and unstructured data. You will establish frameworks for metadata tagging, version control, and batch tracking, ensuring the security, quality, and compliance of data pipelines. Additionally, you will contribute to CI/CD integration, observability, and documentation, and collaborate with data architects and analysts to meet business requirements effectively.

To qualify for this role, you should have at least 6 years of experience in data engineering, including a minimum of 2 years of hands-on work with Microsoft Fabric or Azure Data services. Proficiency in tools such as Azure Data Factory, Fabric, Databricks, or Synapse is required, along with strong SQL and data processing skills (PySpark and Python). Previous experience with data cataloging, lineage, and governance frameworks will be beneficial.
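The batch-tracking framework mentioned above typically records, for every pipeline run, a batch id, source, row count, and final status, so failed loads can be identified and replayed. A minimal stdlib sketch of such a control table (field names and the OneLake-style path are hypothetical):

```python
import uuid
from datetime import datetime, timezone

BATCH_LOG: list[dict] = []   # stand-in for a control table in the warehouse

def start_batch(source: str) -> str:
    """Register a new pipeline run and return its batch id."""
    batch_id = str(uuid.uuid4())
    BATCH_LOG.append({"batch_id": batch_id, "source": source,
                      "started_at": datetime.now(timezone.utc).isoformat(),
                      "rows": 0, "status": "running"})
    return batch_id

def finish_batch(batch_id: str, rows: int, ok: bool) -> None:
    """Close out a run with its row count and final status."""
    for entry in BATCH_LOG:
        if entry["batch_id"] == batch_id:
            entry.update(rows=rows, status="succeeded" if ok else "failed")
            return
    raise KeyError(batch_id)

bid = start_batch("onelake://sales/raw")       # hypothetical source path
finish_batch(bid, rows=1250, ok=True)
```

Stamping every loaded row with its batch id (not shown) is what links this control table to the data itself and makes selective reprocessing possible.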

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You are an experienced BI Architect with a strong background in Power BI and the Microsoft Azure ecosystem. You will design, implement, and enhance business intelligence solutions that support strategic decision-making within the organization, leading the BI strategy, architecture, and governance processes while guiding a team of BI developers and data analysts.

Key responsibilities:
- Design and implement scalable BI solutions using Power BI and Azure services.
- Define BI architecture, data models, security models, and best practices for enterprise reporting.
- Collaborate with business stakeholders to gather requirements and transform them into data-driven insights.
- Oversee data governance, metadata management, and Power BI workspace design; optimize Power BI datasets, reports, and dashboards for performance and usability.
- Establish standards for data visualization, the development lifecycle, version control, and deployment.
- Mentor BI developers and ensure adherence to coding and architectural standards.
- Integrate Power BI with other applications using APIs, Power Automate, or embedded analytics.
- Monitor and troubleshoot production BI systems to maintain high availability and data accuracy.

To qualify, you should have a minimum of 12 years of overall experience, including at least 7 years of hands-on experience with Power BI covering data modeling, DAX, M/Power Query, custom visuals, and performance tuning. Strong familiarity with Azure services such as Azure SQL Database, Azure Data Lake, Azure Functions, and Azure DevOps is essential, along with a solid understanding of data warehousing, ETL, and dimensional modeling concepts, and proficiency in SQL, data transformation, and data governance principles.
Experience managing enterprise-level Power BI implementations with large user bases and complex security requirements, excellent communication and stakeholder management skills, and the ability to lead cross-functional teams and influence BI strategy across departments are also prerequisites for this role. Knowledge of Microsoft Fabric architecture and its components, a track record of managing BI teams of six or more, and the capability to provide technical leadership and team development are highly desirable. In addition, holding the Microsoft Fabric certification DP-600 and the PL-300 would be considered a bonus for this position.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Job Title: Azure Data Engineer

Job Summary: We are looking for a Data Engineer with hands-on experience in the Azure ecosystem. You will be responsible for designing, building, and maintaining both batch and real-time data pipelines using Azure cloud services.

Key Responsibilities:
- Develop and maintain data pipelines using Azure Synapse Analytics, Data Factory, and Databricks
- Work with real-time streaming tools like Azure Event Hubs, Azure Stream Analytics, and Apache Kafka
- Design and manage data storage using ADLS Gen2, Blob Storage, Cosmos DB, and SQL Data Warehouse
- Use Spark (Python/Scala) for data processing in Databricks
- Implement data workflows with tools like Apache Airflow and dbt
- Automate processes using Azure Functions and Python
- Ensure data quality, performance, and security

Required Skills:
- Strong knowledge of the Azure Data Platform (Synapse, ADLS Gen2, Data Factory, Event Hubs, Cosmos DB)
- Experience with Spark (in Databricks), Python or Scala
- Familiarity with tools like Azure Purview, dbt, and Airflow
- Good understanding of real-time and batch processing architectures
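As a rough illustration of the kind of batch transformation such pipelines perform, the sketch below cleans and aggregates order records in plain Python. The record shape and field names are hypothetical; in a real pipeline the same logic would typically run as PySpark in Databricks over files landed in ADLS Gen2.

```python
from collections import defaultdict

def transform(orders):
    """Drop malformed records, then total order amounts per customer.

    Stands in for a Spark job that would read raw files from the lake,
    apply a data-quality gate, and write aggregates back out.
    """
    totals = defaultdict(float)
    for order in orders:
        # Data-quality gate: skip rows missing required fields
        if not order.get("customer_id") or order.get("amount") is None:
            continue
        totals[order["customer_id"]] += float(order["amount"])
    return dict(totals)

raw = [
    {"customer_id": "c1", "amount": 120.0},
    {"customer_id": "c1", "amount": 30.0},
    {"customer_id": None, "amount": 99.0},   # rejected by the quality gate
    {"customer_id": "c2", "amount": 55.5},
]
print(transform(raw))  # {'c1': 150.0, 'c2': 55.5}
```

The same filter-then-aggregate shape maps directly onto a Spark `filter` followed by `groupBy().sum()`.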

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Scientist specializing in Generative AI and ML Engineering, your primary responsibility will be to research and develop AI algorithms and models. You will analyze data, construct predictive models, and employ machine learning techniques to address intricate problems. Your proficiency should span languages and frameworks such as FastAPI and the Azure UI Search API (React), as well as databases and ETL tools like Cosmos DB, Data Factory, and Databricks. You should also have a strong command of Python and R, familiarity with Azure cloud basics and GitLab pipelines, and experience deploying AI solutions end to end.

Beyond these core skills, you are expected to possess expert-level knowledge of Azure OpenAI, the OpenAI GPT family of models, and Azure Storage Accounts. Your expertise should extend to machine learning algorithms, deep learning frameworks like TensorFlow and PyTorch, and a solid foundation in mathematics, including linear algebra, calculus, probability, and statistics.

Furthermore, the role requires proficiency in data analysis tools such as Pandas, NumPy, and SQL, as well as strong statistical and probabilistic modeling skills. Experience with data visualization tools like Matplotlib, Seaborn, and Tableau, along with knowledge of big data technologies like Spark and Hive, will be essential for success in this position. Overall, your experience in AI-driven analytics and decision-making systems, coupled with your ability to develop and deploy AI frameworks and models, will be critical in delivering effective solutions to complex challenges.
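To give a concrete flavor of the statistical grounding the role calls for, here is a minimal sketch of standardizing a feature to z-scores, written with only the Python standard library for illustration (with Pandas or NumPy this would be a one-liner):

```python
import statistics

def z_scores(values):
    """Standardize values to zero mean and unit population standard deviation."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

scores = z_scores([2.0, 4.0, 6.0, 8.0])
print([round(z, 3) for z in scores])  # [-1.342, -0.447, 0.447, 1.342]
```

Standardization like this is a routine preprocessing step before feeding features to many of the ML algorithms the listing mentions.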

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Data Engineer at Mastercard, you will be a key player in the Mastercard Services Technology team, driving the mission to unlock the potential of data assets by innovating, managing big data assets, ensuring the accessibility of data, and enforcing standards and principles in the big data space. Your role will involve designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. You will mentor and guide other engineers, foster a culture of curiosity and continuous improvement, and create robust ETL/ELT pipelines that integrate with various systems.

Your responsibilities will include decomposing complex problems into scalable components aligned with platform goals, championing best practices in data engineering, collaborating across teams, supporting data governance and quality efforts, and optimizing cloud infrastructure components related to data engineering workflows. You will actively participate in architectural discussions, iteration planning, and feature sizing meetings while adhering to Agile processes.

To excel in this role, you should have at least 5 years of hands-on experience in data engineering with strong PySpark and Python skills. You must possess solid experience in designing and implementing data models, pipelines, and batch/stream processing systems, along with a strong foundation in data modeling, database design, and performance optimization. Experience working with cloud platforms like AWS, Azure, or GCP and knowledge of modern data architectures and data lifecycle management are essential, as is familiarity with CI/CD practices, version control, and automated testing. You should be able to mentor junior engineers effectively, possess excellent communication and collaboration skills, and hold a Bachelor's degree in Computer Science, Engineering, or a related field.

Comfort with Agile/Scrum development environments, curiosity, adaptability, problem-solving skills, and a drive for continuous improvement are key traits for success in this role. Experience with integrating heterogeneous systems, building resilient data pipelines across cloud environments, orchestration tools, data governance practices, containerization, infrastructure automation, and exposure to machine learning data pipelines or MLOps will be advantageous. A Master's degree, relevant certifications, or contributions to open-source or data engineering communities will be a bonus.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Junior Azure Data Engineer at dotSolved, you will be responsible for designing, implementing, and managing scalable data solutions on Azure. Your primary focus will be on building and maintaining data pipelines, integrating data from various sources, and ensuring data quality and security. Proficiency in Azure services such as Data Factory, Databricks, and Synapse Analytics is essential as you optimize data workflows for analytics and reporting purposes. Collaboration with stakeholders is a key aspect of this role to ensure alignment with business goals and performance standards.

Your responsibilities will include designing, developing, and maintaining data pipelines and workflows using Azure services; implementing data integration, transformation, and storage solutions to support analytics and reporting; ensuring data quality, security, and compliance with organizational and regulatory standards; optimizing data solutions for performance, scalability, and cost efficiency; and collaborating with cross-functional teams to gather requirements and deliver data-driven insights.

This position is based in Chennai and Bangalore, offering you the opportunity to work in a dynamic and innovative environment where you can contribute to the digital transformation journey of enterprises across various industries.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Job Title: Infrastructure Engineer
Reporting Line: Delivery & Technology
Role Type: Permanent
Experience: 5+ years

Summary
As an Infrastructure Engineer, you will work on complex development and migration of application projects to Azure cloud. You will work with business analysts, application developers, architects, and data stakeholders to understand customer business cases and build a secure and scalable Azure environment.

Key Accountabilities
- Understand requirements, produce highly scalable Azure solution designs, and deliver end-to-end implementations working with Azure architects
- Build and implement environment setups in Azure cloud for application, data, and DevOps teams
- Understand the use of Docker, containers, and Kubernetes in development processes
- Coordinate with architects, app developers, data engineers, and the POTs team to resolve infrastructure-related issues
- Implement Azure cloud resources with security features enabled
- Own documentation, track and prioritise tasks, and highlight foreseeable blockers upfront
- Ensure the Azure cloud environment is implemented using best practices and in line with architecture governance and compliance rules

Key Skills and Technical Competencies
- Degree-level education in a Mathematics, Computing, or Engineering discipline, or equivalent experience
- 5+ years of experience in Azure Cloud (IaaS, PaaS, SaaS)
- Experience in designing solutions and implementing/migrating applications to Azure Cloud
- Strong expertise in ARM templates and PowerShell scripting
- Experience building and managing VMs and Azure services (Web App, SQL Server, PostgreSQL, Azure monitoring, AD, Databricks, Data Factory, Azure Data Lake, Log Analytics, OMS) using ARM templates and PowerShell scripting
- Experience creating VNETs, subnets, VMs, storage accounts, peering, backups and restores, encryption of VM disks (OS and data), load balancers, Azure AD, and VM Scale Sets; disk allocation to both Windows and RHEL/Ubuntu VMs; setting up NSGs
- Knowledge of Docker, containers, and AKS
- Creating, managing, and troubleshooting WAFs (Web Application Firewalls)
- Implementation of HA and DR
- Hands-on experience deploying infra resources via the Terraform IaC tool
- Experience implementing and troubleshooting Azure Container Apps-based hosting services
- Managing and maintaining the Azure Defender security score posture to meet organizational security standards
- Strong troubleshooting skills for VM and other Azure service connectivity and firewall issues
- Knowledge of setting up reverse proxies, Traffic Manager, and gateway servers
- Hands-on experience implementing security controls for Azure services
- Experience with RHEL and Ubuntu Linux
- Experience working in an agile environment, within a self-organising team

Behavioural Competencies
We are adopting a winning mindset, aligning around our strategy, and being guided by a clear set of behaviours:
- PUT SAFETY FIRST: Prioritising the safety of our people and products and supporting each other to speak up.
- DO THE RIGHT THING: Supporting a culture of caring and belonging where we listen first, embrace feedback, and act with integrity.
- KEEP IT SIMPLE: Working together to share and execute ideas and staying adaptable to new ideas and solutions.
- MAKE A DIFFERENCE: Thinking about the business impact of our choices and the business outcomes of our decisions, and challenging ourselves to deliver excellence and efficiency every day on the things that matter.

Posted 4 weeks ago

Apply

8.0 - 12.0 years

22 - 27 Lacs

Bengaluru

Work from Office

KEY RESPONSIBILITIES

We are looking for a highly experienced and hands-on Senior Data Architect to lead the design and implementation of scalable, secure, and high-performance data solutions across Azure and multi-cloud environments. The ideal candidate will have deep expertise in Azure Databricks, Azure Data Factory, Unity Catalog, and streaming/time-series data architectures. Experience in the life sciences industry is a strong plus. More specifically, you will lead the following responsibilities:
- Implement enterprise-grade data platforms using Azure and other cloud technologies
- Leverage Azure Databricks, Data Factory, and Unity Catalog for data engineering, orchestration, and governance
- Design and optimize streaming and time-series data pipelines for real-time analytics and operational intelligence
- Build and manage functional, scalable data models to support diverse business use cases
- Define and enforce data architecture standards, best practices, and security policies
- Collaborate with data engineers, scientists, and business stakeholders to translate requirements into robust data solutions
- Create reusable components for rapid development of the data platform
- Drive initiatives around data quality, metadata management, and data lifecycle governance
- Assist data scientists and visualization developers in developing and deploying algorithms for all analytics needs
- Stay current with emerging technologies and trends, especially in regulated industries like life sciences

YEAR ONE CRITICAL SUCCESS FACTORS
- Successfully complete project implementations within time, cost, and quality standards, in line with business and IT expectations
- Successfully complete implementation of various data integration initiatives
- Successfully develop capabilities that ensure data accuracy, integrity, and accessibility

PROFESSIONAL EXPERIENCE / QUALIFICATIONS
- Bachelor's or Master's degree in Computer Science or Engineering, or equivalent years of work experience
- 10+ years of proven experience architecting and implementing cloud data lake, data warehouse, master data management, data integration, and OLTP database solutions
- 10+ years of experience in data architecture, data engineering, and cloud-based data platforms
- 5+ years of hands-on experience with Azure cloud services
- Expertise in Azure Databricks, Data Factory, and Unity Catalog; streaming technologies (e.g., Spark Structured Streaming, Azure Event Hubs, Kafka); time-series data modelling and analytics; and functional data modelling (e.g., dimensional, normalized, data vault)
- A comprehensive understanding of data lakehouse processes and supporting technologies such as Azure Data Factory, Azure Databricks, ADLS Gen2, IoT, and other cloud technologies
- Strong knowledge of industry best practices around data architecture in both cloud-based and on-premises big data solutions
- Exceptional understanding of building solution architectures, design patterns, network topology, and data security frameworks
- Experience architecting data solutions across hybrid (cloud and on-premises) data platforms
- A comprehensive understanding of data engineering principles and best practices, and supporting technologies such as RDBMS, NoSQL, and cache/in-memory stores
- Excellent problem-solving and data modelling skills (logical, physical, semantic, and integration models), including normalisation, OLAP/OLTP principles, and entity relationship analysis
- Prior experience working with middleware platforms like MuleSoft
- Comprehensive experience working with SQL Server, Azure SQL, RDS, Cosmos DB, MongoDB, and PostgreSQL
- Experience with stream processing tools (Spark Structured Streaming, Azure Stream Analytics, Storm, etc.)
- Familiarity with Agile/Scrum methodologies
- Excellent verbal and written English skills
- Proven experience in life sciences or other regulated industries

Equal Opportunity Employer
Biocon Biologics is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, colour, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability, or veteran status. Biocon Biologics also complies with all applicable national, state, and local laws governing non-discrimination in employment, as well as the work authorisation and employment eligibility verification requirements of the Immigration and Nationality Act.
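For a flavor of the time-series work this role describes, the sketch below computes a tumbling-window average in plain Python. The event shape is hypothetical; a production pipeline would express the same idea with Spark Structured Streaming's event-time `window()` aggregation.

```python
from collections import defaultdict

def tumbling_window_avg(events, window_seconds):
    """Average sensor readings per fixed, non-overlapping time window.

    Each (timestamp, value) event is assigned to the window containing
    its timestamp, then the values in each window are averaged.
    """
    sums = defaultdict(lambda: [0.0, 0])  # window_start -> [sum, count]
    for ts, value in events:
        window_start = (ts // window_seconds) * window_seconds
        sums[window_start][0] += value
        sums[window_start][1] += 1
    return {w: s / n for w, (s, n) in sorted(sums.items())}

readings = [(0, 10.0), (5, 20.0), (12, 30.0), (19, 50.0), (21, 7.0)]
print(tumbling_window_avg(readings, 10))  # {0: 15.0, 10: 40.0, 20: 7.0}
```

Tumbling (as opposed to sliding) windows never overlap, which keeps every event counted exactly once, a common requirement for operational metrics.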

Posted 4 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad

Work from Office

We are seeking highly skilled .NET Developers with 5+ years of experience and expertise in Azure API Management to join our team. The ideal candidates will have a strong background in .NET Core and C#, and hands-on experience with cloud-based solutions leveraging Microsoft Azure services. Candidates with a focus on Azure APIM will work on designing, developing, and managing API solutions, while those with an emphasis on Azure DevOps will be responsible for CI/CD pipeline management, automation, and cloud infrastructure deployment.

Key Responsibilities:
- Design, develop, and maintain scalable applications using .NET Core and C#
- Develop and deploy serverless applications using Azure Functions
- Build and optimize Azure APIM pipelines for CI/CD automation
- Implement GitHub Actions and version control strategies
- Work with cloud-based SQL and NoSQL databases
- Develop and manage Azure Logic Apps, Azure Data Factory, and Azure Service Bus queues
- Ensure secure application development using Azure Key Vault and Azure Application Insights
- Manage containerized applications using Azure Container Registry
- Implement Azure Event Grid for event-driven workflows
- Utilize Azure Storage Accounts for scalable data storage solutions
- Work on batch processing using Azure Batch accounts
- Troubleshoot, optimize, and monitor applications for performance and security

Required Skills & Experience:
- Hands-on experience in .NET development with C# and .NET Core, exclusively focused on API development
- Experience with SQL databases and NoSQL solutions
- Hands-on experience with Azure Functions, Logic Apps, and Data Factory
- Proficiency in Azure API Management and GitHub Actions for CI/CD
- Expertise in cloud security best practices using Azure Key Vault
- Experience working with event-driven architecture using Azure Service Bus and Event Grid
- Ability to work with containerized applications and Azure Container Registry
- Strong problem-solving and debugging skills in cloud-based applications
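The event-driven workflows mentioned above follow a topic-based fan-out pattern: a publisher emits an event, and every subscriber registered on that topic reacts independently. The toy in-process router below illustrates the shape of that pattern (shown in Python for brevity; the role itself targets C#/.NET, and services like Azure Event Grid provide this routing at cloud scale):

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event router, illustrating topic -> handler fan-out."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Every subscriber on the topic receives each published event
        return [handler(payload) for handler in self._handlers[topic]]

bus = EventBus()
bus.subscribe("order.created", lambda e: f"invoice for {e['id']}")
bus.subscribe("order.created", lambda e: f"email for {e['id']}")

print(bus.publish("order.created", {"id": "A42"}))
# ['invoice for A42', 'email for A42']
```

The key design property is decoupling: the publisher never knows which handlers exist, so new consumers can be added without touching producer code.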

Posted 1 month ago

Apply