
378 Azure Synapse Jobs - Page 15

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

9 - 12 years

0 - 0 Lacs

Bengaluru

Work from Office

Senior Data Engineer

Job Summary: We are seeking an experienced and highly motivated Senior Azure Data Engineer to join a Data & Analytics team. The ideal candidate will be a hands-on technical leader responsible for designing, developing, implementing, and managing scalable, robust, and secure data solutions on the Microsoft Azure platform. This role involves leading a team of data engineers, setting technical direction, ensuring the quality and efficiency of data pipelines, and collaborating closely with data scientists, analysts, and business stakeholders to meet data requirements.

Key Responsibilities:
- Lead, mentor, and provide technical guidance to a team of Azure Data Engineers.
- Design, architect, and implement end-to-end data solutions on Azure, including data ingestion, transformation, storage (lakes/warehouses), and serving layers.
- Oversee and actively participate in the development, testing, and deployment of robust ETL/ELT pipelines using key Azure services.
- Establish and enforce data engineering best practices, coding standards, data quality checks, and monitoring frameworks.
- Ensure data solutions are optimized for performance, cost, scalability, security, and reliability.
- Collaborate effectively with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions.
- Manage, monitor, and troubleshoot Azure data platform components and pipelines.
- Contribute to the strategic technical roadmap for the data platform.

Qualifications & Experience:
- Experience: Minimum 6-8+ years of overall experience in data engineering roles, including 3-4+ years of hands-on experience designing, implementing, and managing data solutions specifically on the Microsoft Azure cloud platform, and 1-2+ years in a lead or senior engineering role, demonstrating mentorship and technical guidance capabilities.
- Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field (or equivalent practical experience).

Technical Skills:
- Core Azure Data Services: Deep expertise in Azure Data Factory (ADF), Azure Synapse Analytics (SQL pools, Spark pools), Azure Databricks, and Azure Data Lake Storage (ADLS Gen2).
- Data Processing & Programming: Strong proficiency with Spark (PySpark or Scala) and expert-level SQL skills. Proficiency in Python is highly desired.
- Data Architecture & Modelling: Solid understanding of data warehousing principles (e.g., Kimball), dimensional modelling, ETL/ELT patterns, and data lake design.
- Databases: Experience with relational databases (e.g., Azure SQL Database); familiarity with NoSQL concepts/databases is beneficial.
- Version Control: Proficiency with Git for code management.
- Leadership & Soft Skills: Excellent leadership, mentoring, problem-solving, and communication skills, with the ability to collaborate effectively across various teams.

Required Skills (Azure component - proficiency):
1. Azure Synapse Analytics - High
2. Azure Data Factory - High
3. Azure SQL - High
4. ADLS Storage - High
5. Azure DevOps (CI/CD) - High
6. Azure Databricks - Medium to High
7. Azure Logic Apps - Medium to High
8. Microsoft Fabric - Good to have, not mandatory
9. Azure Functions - Good to have, not mandatory
10. Azure Purview - Good to have, not mandatory

Additional requirements:
- Good experience with data extraction patterns via ADF (APIs, files, databases).
- Data masking in Synapse; RBAC.
- Experience in data warehousing (Kimball modelling).
- Good communication and collaboration skills.
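The "data masking in Synapse" requirement above refers to dynamic data masking, which is configured on the database itself rather than in application code. Purely as an illustration of the concept (not the Synapse feature), the same idea can be sketched in plain Python: per-column masking rules applied before rows are served to non-privileged roles. Column names and masking rules here are invented for the example.

```python
# Illustrative sketch of column-level data masking, similar in spirit to
# dynamic data masking in Azure Synapse (which is configured on the database,
# not in application code). Column names and masking rules are hypothetical.

def mask_email(value: str) -> str:
    """Keep the first character and the domain, mask the rest: aXXX@domain."""
    local, _, domain = value.partition("@")
    return f"{local[:1]}XXX@{domain}" if domain else "XXX"

def mask_number(value: str, visible: int = 4) -> str:
    """Expose only the last `visible` digits, e.g. of a phone number."""
    return "X" * max(len(value) - visible, 0) + value[-visible:]

MASKING_RULES = {"email": mask_email, "phone": mask_number}

def apply_masking(row: dict, privileged: bool) -> dict:
    """Return the row unchanged for privileged roles, masked otherwise."""
    if privileged:
        return dict(row)
    return {col: MASKING_RULES.get(col, lambda v: v)(val) for col, val in row.items()}

row = {"name": "Asha", "email": "asha@example.com", "phone": "9876543210"}
print(apply_masking(row, privileged=False))
# {'name': 'Asha', 'email': 'aXXX@example.com', 'phone': 'XXXXXX3210'}
```

In Synapse itself the equivalent is declared per column with `ALTER TABLE ... ALTER COLUMN ... ADD MASKED WITH (...)`, with RBAC deciding which principals see unmasked data.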

Posted 2 months ago

Apply

5 - 7 years

15 - 20 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a skilled and detail-oriented Azure Data Engineer to join our data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and solutions on the Microsoft Azure cloud platform. You will collaborate with data analysts, the reporting team, and business stakeholders to ensure efficient data availability, quality, and governance.

Experience Level: Mid-Level/Senior

Must-have skills: Strong hands-on experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL.

Good-to-have skills: Working knowledge of Databricks, Azure Synapse Analytics, Azure Functions, Logic Apps workflows, Log Analytics, and Azure DevOps.

Roles and Responsibilities:
- Design and implement scalable data pipelines using Azure Data Factory, Azure SQL, Databricks, and other Azure services.
- Develop and maintain data lakes and data warehouses on Azure.
- Integrate data from various on-premises and cloud-based sources.
- Create and manage ETL/ELT processes, ensuring data accuracy and performance.
- Optimize and troubleshoot data pipelines and workflows.
- Ensure data security, compliance, and governance.
- Collaborate with business stakeholders to define data requirements and deliver actionable insights.
- Monitor and maintain Azure data services for performance and cost-efficiency.
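Integrating on-premises and cloud sources, as described above, is typically driven by the watermark (high-water-mark) pattern that ADF copy pipelines implement: each run extracts only rows modified since the last recorded timestamp, then advances the stored watermark. A minimal, library-free Python sketch of the pattern (table and column names are invented):

```python
# Minimal sketch of the watermark (incremental load) pattern commonly built
# with ADF copy activities: extract only rows modified since the last
# successful run, then advance the stored watermark. Names are hypothetical.

source_rows = [
    {"id": 1, "modified": "2024-01-05", "amount": 100},
    {"id": 2, "modified": "2024-02-10", "amount": 250},
    {"id": 3, "modified": "2024-03-01", "amount": 75},
]

def incremental_extract(rows, watermark: str):
    """Return rows newer than the watermark, plus the new watermark value."""
    changed = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in changed), default=watermark)
    return changed, new_watermark

changed, wm = incremental_extract(source_rows, watermark="2024-01-31")
print(len(changed), wm)  # 2 2024-03-01
```

In ADF the watermark would live in a control table (or pipeline variable) and the filter would be pushed into the source query, but the control flow is the same.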

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Mohali

Remote

As a Senior Data Engineer, you will support the Global BI team for Isolation Valves in migrating to Microsoft Fabric. Your role focuses on data gathering, modeling, integration, and database design to enable efficient data management. You will develop and optimize scalable data models to support analytics and reporting needs, leveraging Microsoft Fabric and Azure technologies for high-performance data processing.

In this Role, Your Responsibilities Will Be:
- Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data requirements and deliver effective solutions.
- Leverage the Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives.
- Apply expertise in data modeling, with a strong focus on data warehouse and lakehouse design.
- Design and implement data models, warehouses, and databases using Microsoft Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.
- Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar tools to prepare data for analysis and reporting.
- Implement data quality checks and governance practices to ensure the accuracy, consistency, and security of data assets.
- Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads.

You will also bring:
- Strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms.
- Experience with data integration and ETL tools like Azure Data Factory.
- Proven expertise in Microsoft Fabric or similar data platforms.
- In-depth knowledge of the Azure cloud platform, particularly data warehousing and storage solutions.
- Strong problem-solving skills with a track record of resolving complex technical challenges.
- Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
- The ability to work independently and collaboratively within a team environment.
- Microsoft certifications in data-related fields are preferred; DP-700 (Microsoft Certified: Fabric Data Engineer Associate) is a plus.

Who You Are: You show a tremendous amount of initiative in tough situations and are exceptional at spotting and seizing opportunities. You observe situational and group dynamics and select the best-fit approach. You make implementation plans that allocate resources precisely. You pursue everything with energy, drive, and the need to finish.

For This Role, You Will Need:
- Experience: 5+ years in data warehousing with on-premises or cloud technologies.
- Analytical & Problem-Solving Skills: Strong analytical abilities with a proven track record of resolving complex data challenges.
- Communication Skills: Ability to effectively engage with internal customers across various functional areas.
- Database & SQL Expertise: Proficiency in database management, SQL query optimization, and data mapping.
- Excel Proficiency: Strong knowledge of Excel, including formulas, filters, macros, pivots, and related operations.
- MS Fabric Expertise: Extensive experience with Fabric components, including Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and semantic models.
- Programming Skills: Proficiency in Python and SQL/advanced SQL for data transformations and debugging.
- Flexibility: Willingness to work flexible hours based on project requirements.
- Technical Documentation: Strong documentation skills for maintaining clear and structured records.
- Language Proficiency: Fluent in English.
- SQL & Data Modeling: Advanced SQL skills, including experience with complex queries, data modeling, and performance tuning.
- Medallion Architecture: Hands-on experience implementing the Medallion Architecture for data processing.
- Database Experience: Working knowledge of Oracle, SAP, or other relational databases.
- Manufacturing Industry Experience: Prior experience in a manufacturing environment is strongly preferred.
- Learning Agility: Ability to quickly learn new business areas, software, and emerging technologies.
- Leadership & Time Management: Strong leadership and organizational skills, with the ability to prioritize, multitask, and meet deadlines.
- Confidentiality: Ability to handle sensitive and confidential information with discretion.
- Project Management: Capable of managing both short- and long-term projects effectively.
- Cross-Functional Collaboration: Ability to work across various organizational levels and relationships.
- Strategic & Tactical Thinking: Ability to balance strategic insight with hands-on execution.
- ERP Systems: Experience with Oracle, SAP, or other ERP systems is a plus.
- Travel Requirements: Willing to travel up to 20% as needed.

Preferred Qualifications that Set You Apart:
- Education: BA/BS/B.E./B.Tech in Business, Information Systems, Technology, or a related field, or a Bachelor's degree or equivalent in Science with a focus on MIS, Computer Science, Engineering, or a related discipline.
- Communication Skills: Strong interpersonal skills in English (spoken and written) to collaborate effectively with overseas teams.
- Database & SQL Expertise: Proficiency in Oracle PL/SQL.
- Azure Experience: Hands-on experience with Azure services, including Azure Synapse Analytics and Azure Data Lake.
- DevOps & Agile: Practical experience with Azure DevOps, along with knowledge of Agile and Scrum methodologies.
- Certifications: Agile certification is preferred.
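The Medallion Architecture asked for above layers data as Bronze (raw), Silver (cleaned and conformed), and Gold (aggregated for reporting). In Fabric or Databricks each layer would be a Delta table in the lakehouse; stripped of those services, the layering itself can be sketched in plain Python, with lists of dicts standing in for tables and invented field names:

```python
# Plain-Python sketch of Medallion layering (Bronze -> Silver -> Gold).
# In Fabric or Databricks each layer would be a Delta table in the lakehouse;
# here lists of dicts stand in, and field names are illustrative only.

bronze = [  # raw ingested records, duplicates and bad rows included
    {"order_id": "A1", "qty": "2", "price": "10.0"},
    {"order_id": "A1", "qty": "2", "price": "10.0"},   # duplicate
    {"order_id": "A2", "qty": "-1", "price": "5.0"},   # invalid quantity
    {"order_id": "A3", "qty": "3", "price": "4.0"},
]

def to_silver(rows):
    """Deduplicate, enforce types, and drop rows failing basic quality rules."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        qty, price = int(r["qty"]), float(r["price"])
        if qty <= 0 or price < 0:
            continue
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "qty": qty, "price": price})
    return silver

def to_gold(rows):
    """Aggregate the cleaned rows into a reporting-ready metric."""
    return {"orders": len(rows), "revenue": sum(r["qty"] * r["price"] for r in rows)}

print(to_gold(to_silver(bronze)))  # {'orders': 2, 'revenue': 32.0}
```

The value of the pattern is that each promotion step is a pure, testable transformation, which is exactly what makes the real lakehouse version auditable.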

Posted 2 months ago

Apply

2 - 6 years

0 - 3 Lacs

Chennai

Work from Office

Position Details
Total Years of Experience: 2-5 years
Primary Technologies: SQL, Power BI, Excel, Python
Additional: Azure Synapse, Databricks, Spark, Warehouse Architecture & Development

Summary: The Business Intelligence (BI) Engineer is responsible for assisting the specified Human Resources team in the continuous management of all relevant analytics. This position will collect and analyze data to measure the impact of initiatives and support strategic business decision-making. It is responsible for working with developers to provide the business at all levels with relevant, intuitive, insight-driven information that is directly actionable. The BI Engineer will become closely integrated with the business and build strong relationships with business leaders, working with multinational teams in an Agile framework to design and implement actionable reports and dashboards and to help shape the broader information landscape available to the business.

Primary Job Functions:
- Collaborate directly with business teams to understand performance drivers and trends in their area; provide insights, make recommendations, and interpret new data and results.
- Design reports and dashboards for consumption by the business; oversee their development for production.
- Perform pro forma modeling and ad hoc analyses.
- Keep up to date on best visualization practices and dashboard designs.
- Maintain standardized templates for reports and dashboards; ensure standardization and consistency of reporting.
- Perform deep-dive analyses into specific issues as needed.
- Define data needs and sources; evaluate data quality and work with the data services team to extract, transform, and load data for analytic discovery projects.
- Ensure BI tools are fully leveraged to provide the insights needed to drive performance.
- Interface closely with technology partners to manage the analytical environment and acquire data sets.
- Utilize statistical and data visualization packages to develop innovative approaches to complex business problems.
- Analyze and communicate the effectiveness of new initiatives; draw insights and make performance-improvement recommendations based on the data sources.
- Use quantitative and qualitative methodologies to draw insights and support the continuous improvement of the business.
- Analyze initiatives and events using transaction-level data.
- Ensure that appropriate data-driven reports and customer-behavior insights continuously flow to management to help improve quality, reduce cost, enhance the guest experience, and deliver continued growth.

Required Qualifications:
- Proficiency with Microsoft Azure services and/or other cloud computing environments.
- Experience with database management systems (DBMS), specifically SQL and NoSQL.
- Knowledge of an enterprise data visualization platform, such as Power BI or BigQuery.
- Advanced analytical and problem-solving skills.
- Strong applied algebraic skills.
- Working knowledge of business statistical applications and econometrics.
- Project management skills.
- Ability to digest business problems and translate needs into a data-centric context.
- Ability to synthesize and analyze large sets of data to yield actionable findings.
- Strong attention to detail.
- Excellent verbal and written communication skills.
- Ability to handle multiple projects simultaneously within established time constraints.
- Ability to perform under strong demands in a fast-paced environment.
- Ability to work professionally with customers and co-workers to efficiently serve our customers, treating both with enthusiasm and respect.

If you feel you have the necessary skill set and are passionate about the job, please send your profile to vthulasiram@ashleyfurnitureindia.com
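Much of the reporting work above reduces to rolling transaction-level data up into KPIs before it ever reaches a Power BI visual. A small stdlib-only sketch of that aggregation step (column names and the grouping key are invented for illustration):

```python
# Stdlib-only sketch of rolling transaction-level data up into KPI rows,
# the shape of work that precedes a Power BI dashboard. Column names are
# illustrative assumptions, not a specific schema from this posting.
from collections import defaultdict

transactions = [
    {"region": "South", "amount": 120.0},
    {"region": "South", "amount": 80.0},
    {"region": "North", "amount": 200.0},
]

def kpis_by(rows, key):
    """Group rows by `key` and compute count, total, and average per group."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r["amount"])
    return {
        k: {"count": len(v), "total": sum(v), "avg": sum(v) / len(v)}
        for k, v in groups.items()
    }

print(kpis_by(transactions, "region"))
# {'South': {'count': 2, 'total': 200.0, 'avg': 100.0},
#  'North': {'count': 1, 'total': 200.0, 'avg': 200.0}}
```

In practice this grouping would be pushed into SQL or a DAX measure; the sketch only shows the shape of the computation.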

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Hyderabad

Work from Office

What is Blend
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

What is the Role
As a Senior Data Engineer, your role is to spearhead the data engineering teams and elevate the team to the next level. You will be responsible for laying out the architecture of the new project as well as selecting the tech stack associated with it. You will plan out the development cycles, deploying Agile where possible, and create the foundations for good data stewardship with our new data products. You will also set up a solid code framework that is built to purpose yet flexible enough to adapt to new business use cases: a tough but rewarding challenge!

What you'll be doing:
- Collaborate with stakeholders to deeply understand the needs of data practitioners and deliver at scale.
- Lead data engineers to define, build, and maintain the data platform.
- Build a data lake in Microsoft Fabric, processing data from multiple sources.
- Migrate the existing data store from Azure Synapse to Microsoft Fabric.
- Implement data governance and access control.
- Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements and the architectural vision and comply with all applicable standards.
- Present technical solutions, capabilities, considerations, and features in business terms.
- Effectively communicate status, issues, and risks in a precise and timely manner.
- Further develop critical initiatives such as data discovery, data lineage, and data quality.
- Lead the team and mentor junior resources; help your team members grow in their roles and achieve their career aspirations.
- Build data systems, pipelines, analytical tools, and programs.
- Conduct complex data analysis and report on results.

What do we need from you?
- 6+ years of experience as a data engineer or in a similar role with Azure Synapse or ADF, or relevant experience in Microsoft Fabric.
- Degree in Computer Science, Data Science, Mathematics, IT, or a similar field.
- Experience executing projects end to end; at least one data engineering project should have used Azure Synapse, ADF, or Microsoft Fabric.
- Experience handling multiple data sources.
- Technical expertise with data models, data mining, and segmentation techniques.
- Deep understanding, both conceptual and practical, of at least one object-oriented language or library (Python, PySpark).
- Strong SQL skills and a good understanding of existing SQL warehouses and relational databases.
- Strong Spark, PySpark, and Spark SQL skills and a good understanding of distributed processing frameworks.
- Ability to build large-scale batch and real-time data pipelines.
- Ability to work independently and mentor junior resources; desire to lead and develop a team of data engineers across multiple levels.
- Experience or knowledge in data governance.
- Azure cloud experience with data modeling, CI/CD, Agile methodologies, and Docker/Kubernetes.

What do you get in return?
- Competitive salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
- Dynamic career growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
- Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
- Growth Chats: Dive into our casual "Growth Chats", where you can learn from the best, whether over lunch or during a laid-back session with peers: the perfect space to grow your skills.
- Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
- Recognition & rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program.
- Fuel your growth journey with certifications: We're all about your growth! Level up your skills with our support as we cover the cost of your certifications.
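A recurring task in the Synapse-to-Fabric migration described above is reconciling the old and new stores after each table is copied. As a hedged, stdlib-only illustration (table contents and the checksum scheme are invented, not a product feature), a row-count plus order-independent checksum comparison looks like this:

```python
# Illustrative reconciliation check for a data-store migration (e.g. an
# Azure Synapse -> Microsoft Fabric move): compare row counts and a simple
# order-independent content checksum per table. Rows here are hypothetical.
import hashlib

def table_signature(rows):
    """Order-independent signature: row count plus XOR of per-row hashes."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        digest ^= int(h, 16)
    return len(rows), digest

def reconcile(source_rows, target_rows):
    """True when both stores hold the same rows, in any order."""
    return table_signature(source_rows) == table_signature(target_rows)

old = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
new = [{"id": 2, "v": "b"}, {"id": 1, "v": "a"}]  # same rows, different order
print(reconcile(old, new))  # True
```

XOR-ing per-row hashes makes the signature insensitive to row order, which matters because the two engines rarely return rows in the same order.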

Posted 3 months ago

Apply

3 - 5 years

6 - 8 Lacs

Mohali, Pune

Work from Office

Role Responsibilities:
- Develop and design Power BI dashboards, reports, and visualizations to support business decision-making.
- Extract, transform, and load (ETL) data from multiple sources into Power BI using Power Query, Python, and SQL.
- Optimize performance of Power BI reports, ensuring efficiency in data processing and visualization.
- Create and maintain data models with relationships, calculated columns, and measures using DAX.
- Utilize Python for data pre-processing, automation, and integration with Power BI.
- Work closely with business stakeholders to understand reporting requirements and deliver customized solutions.
- Ensure data accuracy and consistency across reports and dashboards.
- Implement row-level security (RLS) and other security measures within Power BI reports.
- Integrate Power BI with other tools like Azure, Power Automate, Power Apps, and Python-based scripts when needed.
- Stay updated with the latest Power BI and Python features to enhance reporting capabilities.

Requirements:
- Strong proficiency in Power BI Desktop, Power Query, and Power BI Service.
- Hands-on experience with DAX (Data Analysis Expressions) and data modeling.
- Proficiency in SQL for querying and transforming data.
- Experience working with various data sources (SQL Server, Excel, APIs, cloud databases, etc.).
- Experience in Python for data analysis, automation, and integration with Power BI, including hands-on experience with Pandas, NumPy, and Matplotlib.
- Familiarity with ETL processes and data warehousing concepts.
- Experience implementing Power BI security features like RLS.
- Strong analytical and problem-solving skills.
- Excellent communication skills to interact with stakeholders and translate business needs into reports.
- Experience integrating Python scripts within Power BI via Power Query or R/Python visuals.
- Experience with Azure Synapse, Power Automate, or Power Apps, and with integrating Power BI with cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.

Qualifications:
- Knowledge of machine learning concepts and predictive analytics using Python is a plus.
- Graduation is a must.
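In Power BI, row-level security is defined with DAX filter expressions attached to roles, not with application code. Purely to illustrate the concept behind the requirement above, here is the same idea in Python: each role carries a predicate that filters rows before they reach a report, with deny-by-default for unknown roles. Roles and columns are invented for the example.

```python
# Conceptual sketch of row-level security (RLS): each role carries a filter
# predicate applied before rows reach a report. In Power BI this is expressed
# as a DAX role filter; roles and columns here are hypothetical.

sales = [
    {"region": "APAC", "amount": 500},
    {"region": "EMEA", "amount": 300},
    {"region": "APAC", "amount": 150},
]

ROLE_FILTERS = {
    "apac_viewer": lambda row: row["region"] == "APAC",
    "global_admin": lambda row: True,
}

def rows_for_role(rows, role):
    """Apply the role's RLS predicate; unknown roles see nothing (deny by default)."""
    predicate = ROLE_FILTERS.get(role, lambda row: False)
    return [r for r in rows if predicate(r)]

print(len(rows_for_role(sales, "apac_viewer")))   # 2
print(len(rows_for_role(sales, "unknown_role")))  # 0
```

The Power BI equivalent of the `apac_viewer` predicate would be a role with the DAX filter `[region] = "APAC"` on the sales table.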

Posted 3 months ago

Apply

5 - 10 years

9 - 19 Lacs

Hyderabad, Pune, Delhi / NCR

Hybrid

Hello, greetings from LTIMindtree! We are hiring Azure Data Engineers.

Job Description
Notice Period: 0 to 60 days only
Experience: 3 to 12 years
Interview Mode: 2 rounds (one round is face-to-face)
Work Mode: Hybrid (2-3 days work from office)
Please apply via the link below: https://forms.office.com/r/YsAYqdcRS0

Brief Description of Role
Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.
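The governance and data-quality responsibilities above usually materialize as automated checks gating promotion between Medallion layers. A minimal stdlib sketch of such a gate (the threshold, column names, and report shape are illustrative assumptions, not a Unity Catalog or Databricks feature):

```python
# Minimal sketch of a data-quality gate of the kind that guards promotion
# between pipeline layers (e.g. before loading a Silver table). The threshold
# and column names are illustrative assumptions, not a product feature.

def quality_report(rows, required_cols, max_null_ratio=0.1):
    """Check that nulls in each required column stay under a threshold."""
    failures = []
    for col in required_cols:
        missing = sum(1 for r in rows if r.get(col) is None)
        if rows and missing / len(rows) > max_null_ratio:
            failures.append(f"{col}: {missing}/{len(rows)} nulls")
    return {"passed": not failures, "failures": failures}

batch = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": None},
]
print(quality_report(batch, ["id", "email"]))
# {'passed': False, 'failures': ['email: 2/3 nulls']}
```

In a real pipeline the report would be logged and a failing batch quarantined rather than promoted; frameworks such as Delta Live Tables expectations express the same idea declaratively.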

Posted 3 months ago

Apply

5 - 10 years

7 - 17 Lacs

Kolkata, Chennai, Bengaluru

Hybrid

Hello, greetings from LTIMindtree! We are hiring Azure Data Engineers.

Job Description
Notice Period: 0 to 60 days only
Experience: 3 to 12 years
Interview Mode: 2 rounds (one round is face-to-face)
Work Mode: Hybrid (2-3 days work from office)
Please also apply via the link below: https://forms.office.com/r/YsAYqdcRS0

Brief Description of Role
Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.

Posted 3 months ago

Apply

10 - 14 years

9 - 15 Lacs

Chennai

Work from Office

Use automated tools to extract data from databases and data systems in a readable format; perform quality analysis; filter data; track performance indicators; identify and correct code problems; generate reports stating trends, patterns, and predictions; drive process improvement.

Required Candidate Profile: Mathematical skills to analyse data; programming languages including SQL, MySQL, and Python; database design and development; data models and techniques for data mining and segmentation; Business Objects; JSON/ETL frameworks; databases; Excel, SPSS, SAS.

Posted 3 months ago

Apply

12 - 16 years

10 - 14 Lacs

Pune

Work from Office

The IT MANAGER, DATA ENGINEERING AND ANALYTICS will lead a team of data engineers and analysts responsible for designing, developing, and maintaining robust data systems and integrations. This role is critical for ensuring the smooth collection, transformation, integration, and visualization of data, making it easily accessible for analytics and decision-making across the organization. The Manager will collaborate closely with analysts, developers, business leaders, and other stakeholders to ensure that the data infrastructure meets business needs and is scalable, reliable, and efficient.

What You'll Do:
- Team Leadership: Manage, mentor, and guide a team of data engineers and analysts, ensuring their professional development and optimizing team performance. Foster a culture of collaboration, accountability, and continuous learning within the team. Lead performance reviews, provide career guidance, and handle resource planning.
- Data Engineering & Analytics: Design and implement data pipelines, data models, and architectures that are robust, scalable, and efficient. Develop and enforce data quality frameworks to ensure accuracy, consistency, and reliability of data assets. Establish and maintain data lineage processes to track the flow and transformation of data across systems. Ensure the design and maintenance of robust data warehousing solutions to support analytics and reporting needs.
- Collaboration and Stakeholder Management: Collaborate with stakeholders, including functional owners, analysts, and business leaders, to understand business needs and translate them into technical requirements. Work closely with these stakeholders to ensure the data infrastructure supports organizational goals and provides reliable data for business decisions. Build and foster relationships with major stakeholders to align management perspectives on data strategy with business objectives.
- Project Management: Drive end-to-end delivery of analytics projects, ensuring quality and timeliness. Manage project roadmaps, prioritize tasks, and allocate resources effectively. Manage project timelines and mitigate risks to ensure timely delivery of high-quality data engineering projects.
- Technology and Infrastructure: Evaluate and implement new tools, technologies, and best practices to improve the efficiency of data engineering processes. Oversee the design, development, and maintenance of data pipelines, ensuring that data is collected, cleaned, and stored efficiently. Ensure there are no data pipeline leaks and monitor production pipelines to maintain their integrity. Familiarity with reporting tools such as Superset and Tableau is beneficial for creating intuitive data visualizations and reports.
- Machine Learning and GenAI Integration: Knowledge of machine learning concepts and their integration with data pipelines is a plus, including how models can be used to enhance data quality, predict data trends, and automate decision-making. Familiarity with Generative AI (GenAI) is advantageous, particularly in enabling GenAI features on new datasets and leveraging GenAI with data pipelines to automate tasks, streamline workflows, and uncover deeper insights.

What You'll Bring:
- 12+ years of experience in data engineering, with at least 3 years in a managerial role.
- Technical Expertise: Strong knowledge of data engineering concepts, including data warehousing, ETL processes, and data pipeline design. Proficiency in Azure Synapse or Azure Data Factory, SQL, Python, and other data engineering tools.
- Data Modeling: Expertise in data modeling is essential, with the ability to design and implement robust, scalable data models that support complex analytics and reporting needs. Experience with data modeling frameworks and tools is highly valued.
- Leadership Skills: Proven ability to lead and motivate a team of engineers while managing cross-functional collaborations.
- Problem-Solving: Strong analytical and troubleshooting skills to address complex data-related challenges.
- Communication: Excellent verbal and written communication skills to effectively interact with technical and non-technical stakeholders, motivate team members, provide regular constructive feedback, and facilitate open communication channels to ensure team alignment and success.
- Data Architecture: Experience designing scalable, high-performance data systems and an understanding of cloud platforms such as Azure and Databricks.
- Machine Learning and GenAI: Knowledge of machine learning concepts and integration with data pipelines, as well as familiarity with GenAI, is a plus.
- Data Governance: Experience with data governance best practices is desirable.
- Open Mindset: A willingness to learn new technologies, processes, and methodologies is essential, along with the ability to adapt quickly to the evolving data engineering landscape and embrace innovative solutions.

Posted 3 months ago

Apply

6 - 11 years

10 - 15 Lacs

Gurugram, Chennai, Mumbai (All Areas)

Hybrid

Position: Senior Data Engineer with Azure & Java Location: Chennai, Mumbai & Gurugram Position Type: Permanent Work Mode: Hybrid Notice Period: Immediate to 30 days. Job Description: Bachelor's or Master's degree in computer science, engineering, mathematics, statistics, or an equivalent technical discipline. 7+ years of experience working with data mapping, data analysis, and numerous large data sets/data warehouses. Strong application development experience using Java, C++. Strong experience with Azure Databricks, Azure Data Explorer, ADLS Gen2, and Event Hubs technologies. Experience with application containerization and deployment processes (Docker, GitHub, CI/CD pipelines). Experience working with Cosmos DB is preferred. Ability to assemble, analyze, and evaluate big data and to make appropriate, well-reasoned recommendations to stakeholders. Good analytical and problem-solving skills; good understanding of different data structures, algorithms, and their usage in solving business problems. Strong communication (verbal and written) and customer service skills. Strong interpersonal, communication, and presentation skills applicable to a wide audience, including senior and executive management, customers, etc. Strong skills in setting, communicating, implementing, and achieving business objectives and goals. Strong organization/project planning, time management, and change management skills across multiple functional groups and departments, and strong delegation skills involving prioritizing and reprioritizing projects and managing projects of various sizes and complexity. Accountabilities: Work in iterative processes to map data into common formats, perform advanced data analysis, validate findings or test hypotheses, and communicate results and methodology. Provide recommendations on how to utilize our data to optimize search, increase data accuracy, and help us better understand our existing data.
Communicate technical information successfully with technical and non-technical audiences such as third-party vendors, external customer technical departments, various levels of management and other relevant parties. Collaborate effectively with all team members as well as attend regular team meetings.

Posted 3 months ago

Apply

5 - 7 years

11 - 21 Lacs

Noida, Mumbai (All Areas)

Work from Office

You will be responsible for assessing complex new data sources and quickly turning them into business insights. You will also support the implementation and integration of these new data sources into our Azure Data platform.

Posted 3 months ago

Apply

5 - 8 years

10 - 20 Lacs

Noida, Mumbai (All Areas)

Work from Office

5+ years of experience in the Azure domain, with a minimum of 4 years of relevant experience.

Posted 3 months ago

Apply

10 - 12 years

10 - 20 Lacs

Noida, Mumbai (All Areas)

Work from Office

Advanced working knowledge and experience with relational and non-relational databases.

Posted 3 months ago

Apply

4 - 8 years

12 - 22 Lacs

Kochi, Gurugram, Bengaluru

Hybrid

Project Role: Azure Data Engineer Work Experience: 4 to 8 Years Work Location: Bangalore / Gurugram / Kochi Work Mode: Hybrid Must-Have Skills: Azure Data Engineering, SQL, Spark/PySpark Job Overview: Responsible for the on-time completion of projects or components of large, complex projects for clients in the life sciences field. Identifies and elevates potential new business opportunities and assists in the sales process. Skills required: Experience in developing Azure components like Azure Data Factory, Azure Databricks, Logic Apps, and Functions. Develops efficient, smart data pipelines for migrating various sources onto Azure Data Lake. Proficient in working with Delta Lake and Parquet file formats. Designs, implements, and maintains CI/CD pipelines; deploys and merges code. Expert in programming in SQL, PySpark, and Python. Creation of databases on Azure Data Lake following data warehousing best practices. Builds smart metadata databases and solutions, parameterization, and configurations. Develops Azure frameworks and automated systems for deployment and monitoring. Hands-on experience with continuous delivery and continuous integration (CI/CD) pipelines, CI/CD infrastructure, and process troubleshooting. Extensive experience with version control systems like Git and their use in release management, branching, merging, and integration strategies. Essential Functions: Participates in or leads teams in the design, development, and delivery of consulting projects or components of larger, complex projects. Reviews and analyzes client requirements or problems and assists in the development of proposals for cost-effective solutions that ensure profitability and high client satisfaction. Provides direction and guidance to Analysts, Consultants, and, where relevant, to Statistical Services staff assigned to the engagement. Develops detailed documentation and specifications.
Performs qualitative and/or quantitative analyses to assist in the identification of client issues and the development of client-specific solutions. Designs, structures, and delivers client reports and presentations appropriate to the characteristics or needs of the audience. May deliver some findings to clients. Qualifications: Bachelor's degree required; Master's degree in Business Administration preferred; 4-8 years of related experience in consulting and/or the life sciences industry required.

Posted 3 months ago

Apply

5 - 10 years

7 - 17 Lacs

Pune

Hybrid

6+ years of experience as a Data Engineer, ideally in a large corporate environment. In-depth knowledge of SQL and data modelling/data processing. Strong experience working with Microsoft Azure and Synapse. Experience working with Git/GitLab and the SDLC life cycle. Fluent in English, both written and spoken. Nice to have: Experience working in the financial industry. Experience in complex metrics design and reporting. Experience with visualisation tools like Power BI (or Tableau, QlikView, or similar). Interested candidates can share their CV with saikiran_p@trigent.com

Posted 3 months ago

Apply

5 - 9 years

10 - 15 Lacs

Hyderabad, Bengaluru

Hybrid

Job Title: System Analyst - C#.NET Job Type: Contract-to-Hire (C2H) Location: Hyderabad, Bangalore Notice Period: Immediate joiners only Experience: 5+ years. Job Description: We are seeking a System Analyst (C#.NET) for a Contract-to-Hire opportunity with a strong foundation in Microsoft technologies and Azure cloud services. The ideal candidate will bring deep expertise in C# .NET development, API integration, and cloud-based application architecture, with a focus on data performance and engineering using Azure tools. Interested candidates can apply with an updated resume to shridatt@silverlinktechnologies.com.

Posted 3 months ago

Apply

3 - 7 years

10 - 20 Lacs

Bengaluru

Work from Office

For this job, you will require: • BTech/Master's degree or equivalent in computer science/computer engineering/IT, technology, or a related field. • 8 or more years of good hands-on experience in Azure Data Engineering. • Good hands-on experience with Azure Data Lake, Data Factory, Databricks, and Logic Apps. • Strong SQL skills and hands-on experience. To join us, you should: • Have excellent relationship-building and organizational skills. • Enjoy working in a fast-changing field. • Be able to demonstrate solution/product capabilities to clients. • Be able to design new solutions and products for multiple domains. You must have: • Designed or worked on solutions on the Azure platform using Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Synapse Analytics, Azure SQL, PowerShell, Azure Logic Apps, Azure Automation, Blob Storage, and T-SQL. • Done at least 1 greenfield Data Lake, Delta Lake, or data warehousing project in the Azure landscape. • Designed, built, and maintained Azure data analytics solutions. • Familiarity with CI/CD on the Azure DevOps platform. • Built solutions per customer needs and a defined architecture. • The ability to analyse, design, implement, and test medium to complex mappings independently. • Strong experience in transforming large datasets per business needs. • The ability to write scripts in Python or another scripting language. • An understanding of self-service BI/analytics, user security, mobility, and other BI concepts. • Participated in design/discovery sessions and workshops with end users. • A flexible approach to dealing with ad-hoc queries. • Guided and helped junior team members in achieving their goals and objectives.

Posted 3 months ago

Apply

4 - 9 years

7 - 15 Lacs

Bengaluru

Work from Office

NOTE: Mandatory to have experience in Azure Data Engineering with ADF, ADB, ADLS, and Azure Synapse. Hiring for: MNC client. Location: Bangalore (work from office). Role: Senior Analyst - Azure Data Management. Shift: General. Purpose: Understand client requirements and build ETL solutions using Azure Data Factory, Azure Databricks, and PySpark. Build solutions in such a way that they can absorb client change requests easily. Find innovative ways to accomplish tasks and handle multiple projects simultaneously and independently. Work with data and appropriate teams to effectively source required data. Identify data gaps and effectively communicate findings to stakeholders/clients. Main accountabilities: Develop ETL solutions to populate a centralized repository by integrating data from various data sources. Create data pipelines, data flows, and data models according to business requirements. Proficiency in implementing all transformations according to business needs. Identify data gaps in the data lake and work with relevant data/client teams to get the data required for dashboarding/reporting. Strong experience working on the Azure data platform, Azure Data Factory, and Azure Databricks. Strong experience working on ETL components and scripting languages like PySpark and Python. Experience in creating pipelines, alerts, email notifications, and scheduling jobs. What are we looking for? Bachelor's degree in Engineering or Science, or equivalent, with at least 4-7 years of overall experience in data management, including data integration, modeling, and optimization. Minimum 3 years of experience working on Azure cloud, Azure Data Factory, and Azure Databricks. Minimum 2-3 years of experience in PySpark, Python, etc., for data ETL. In-depth understanding of data warehouse and ETL concepts and modeling principles. Strong ability to design, build, and manage data. Strong understanding of data integration.
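The accountabilities above (integrating data from multiple sources, applying transformations, and populating a centralized repository) can be illustrated with a minimal sketch in plain Python with pandas. This is a hedged example, not any specific client pipeline; the column names are invented:

```python
import pandas as pd

def transform_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Illustrative transform step: clean, dedupe, and normalize before loading.

    Column names (order_id, amount, region) are hypothetical examples.
    """
    df = raw.copy()
    df = df.dropna(subset=["order_id"])           # drop rows missing the key
    df = df.drop_duplicates(subset=["order_id"])  # dedupe on the key
    df["amount"] = df["amount"].astype(float)     # enforce a numeric type
    df["region"] = df["region"].str.strip().str.upper()  # normalize labels
    return df

raw = pd.DataFrame({
    "order_id": [1, 1, 2, None],
    "amount": ["10.5", "10.5", "7", "3"],
    "region": [" east ", " east ", "west", "north"],
})
clean = transform_orders(raw)
print(len(clean))                # 2 rows survive cleaning
print(clean["region"].tolist())  # ['EAST', 'WEST']
```

In a real Azure pipeline the same transformation logic would typically run as a PySpark notebook in Databricks, orchestrated by an Azure Data Factory pipeline, but the clean/dedupe/normalize shape is the same.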

Posted 3 months ago

Apply

5 - 9 years

18 - 22 Lacs

Mohali

Remote

In this Role, Your Responsibilities Will Be: Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data requirements and deliver effective solutions. Leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Expertise in data modeling, with a strong focus on data warehouse and lakehouse design. Design and implement data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services. Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar tools to prepare data for analysis and reporting. Implement data quality checks and governance practices to ensure accuracy, consistency, and security of data assets. Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads. Strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms. Experience with data integration and ETL tools like Azure Data Factory. Proven expertise in Microsoft Fabric or similar data platforms. In-depth knowledge of the Azure Cloud Platform, particularly in data warehousing and storage solutions. Strong problem-solving skills with a track record of resolving complex technical challenges. Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders. Ability to work independently and collaboratively within a team environment. Microsoft certifications in data-related fields are preferred; DP-700 (Microsoft Certified: Fabric Data Engineer Associate) is a plus. Who You Are: You show a tremendous amount of initiative in tough situations and are exceptional at spotting and seizing opportunities.
You observe situational and group dynamics and select the best-fit approach. You make implementation plans that allocate resources precisely. You pursue everything with energy, drive, and the need to finish. For This Role, You Will Need: Experience: 5+ years in Data Warehousing with on-premises or cloud technologies. Analytical & Problem-Solving Skills: Strong analytical abilities with a proven track record of resolving complex data challenges. Communication Skills: Ability to effectively engage with internal customers across various functional areas. Database & SQL Expertise: Proficient in database management, SQL query optimization, and data mapping. Excel Proficiency: Strong knowledge of Excel, including formulas, filters, macros, pivots, and related operations. MS Fabric Expertise: Extensive experience with Fabric components, including Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, and Semantic Models. Programming Skills: Proficiency in Python and SQL/Advanced SQL for data transformations/Debugging. Flexibility: Willingness to work flexible hours based on project requirements. Technical Documentation: Strong documentation skills for maintaining clear and structured records. Language Proficiency: Fluent in English. SQL & Data Modeling: Advanced SQL skills, including experience with complex queries, data modeling, and performance tuning. Medallion Architecture: Hands-on experience implementing Medallion Architecture for data processing. Database Experience: Working knowledge of Oracle, SAP, or other relational databases. Manufacturing Industry Experience: Prior experience in a manufacturing environment is strongly preferred. Learning Agility: Ability to quickly learn new business areas, software, and emerging technologies. Leadership & Time Management: Strong leadership and organizational skills, with the ability to prioritize, multitask, and meet deadlines. 
Confidentiality: Ability to handle sensitive and confidential information with discretion. Project Management: Capable of managing both short- and long-term projects effectively. Cross-Functional Collaboration: Ability to work across various organizational levels and relationships. Strategic & Tactical Thinking: Ability to balance strategic insights with hands-on execution. ERP Systems: Experience with Oracle, SAP, or other ERP systems is a plus. Travel Requirements: Willing to travel up to 20% as needed. Preferred Qualifications that Set You Apart: Education: BA/BS/B.E./B.Tech in Business, Information Systems, Technology, or a related field. Technical Background: Bachelor's degree or equivalent in Science, focusing on MIS, Computer Science, Engineering, or a related discipline. Communication Skills: Strong interpersonal skills in English (spoken and written) to collaborate effectively with overseas teams. Database & SQL Expertise: Proficiency in Oracle PL/SQL. Azure Experience: Hands-on experience with Azure services, including Azure Synapse Analytics and Azure Data Lake. DevOps & Agile: Practical experience with Azure DevOps and knowledge of Agile and Scrum methodologies. Certifications: Agile certification is preferred.
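The Medallion Architecture requirement above refers to organizing a lakehouse into bronze (raw), silver (cleaned), and gold (aggregated) layers. Below is a toy sketch of the layering idea in Python with pandas; the data and column names are invented for illustration, and a real implementation would use Fabric Lakehouse or Delta Lake tables rather than in-memory frames:

```python
import pandas as pd

# Bronze layer: raw ingested data, kept as-is (duplicates and nulls included).
bronze = pd.DataFrame({"sensor": ["a", "a", "b", None],
                       "reading": [1.0, 1.0, 2.5, 9.9]})

# Silver layer: cleaned and deduplicated records.
silver = bronze.dropna(subset=["sensor"]).drop_duplicates()

# Gold layer: business-level aggregate ready for BI tools such as Power BI.
gold = silver.groupby("sensor", as_index=False)["reading"].mean()

print(gold.to_dict("records"))  # [{'sensor': 'a', 'reading': 1.0}, {'sensor': 'b', 'reading': 2.5}]
```

The point of the pattern is that each layer is materialized separately, so downstream reports read the gold tables while raw history stays queryable in bronze.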

Posted 3 months ago

Apply

5 - 10 years

0 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid

Open & Direct Walk-in Drive event | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 10th May [Saturday] 2025 - Azure Databricks / Data Factory / SQL & PySpark. Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer. We are hosting an Open Walk-in Drive in Chennai, Tamil Nadu on 10th May [Saturday] 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark align perfectly with what we are seeking. Details of the Walk-in Drive: Date: 10th May [Saturday] 2025. Experience: 5 to 12 years. Time: 9.00 AM to 5 PM. Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103. Point of Contact: Azhagu Kumaran Mohan / +91-9789518386. Key Skills and Experience: As an Azure Data Engineer, we are looking for candidates who possess expertise in the following: Databricks, Data Factory, SQL, PySpark/Spark. Roles and Responsibilities: As part of our dynamic team, you will be responsible for: designing, implementing, and maintaining data pipelines; collaborating with cross-functional teams to understand data requirements; optimizing and troubleshooting data processes; and leveraging Azure data services to build scalable solutions. What to Bring: Updated resume, photo ID, passport-size photo. How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.
Note: Candidates with less than 4 years of total experience will not be screen-selected for the interview.

Posted 3 months ago

Apply

3 - 8 years

3 - 8 Lacs

Hyderabad

Work from Office

Name of Organization: Jarus Technologies (India) Pvt. Ltd. Organization Website: www.jarustech.com Position: Senior Software Engineer - Data Warehouse Domain Knowledge: Insurance (Mandatory) Job Type: Permanent Location: Hyderabad - IDA Cherlapally, ECIL and Divyasree Trinity, Hi-Tech City. Experience: 3+ years Education: B. E. / B. Tech. / M. C. A. Resource Availability: Immediately or within a maximum period of 30 days. Technical Skills: • Strong knowledge of data warehousing concepts and technologies. • Proficiency in SQL and other database languages. • Experience with ETL tools (e.g., Informatica, Talend, SSIS). • Familiarity with data modelling techniques. • Experience in building dimensional data modelling objects, dimensions, and facts. • Experience with cloud-based data warehouse platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery). • Familiarity with optimizing SQL queries and improving ETL processes for better performance. • Knowledge of data transformation, cleansing, and validation techniques. • Experience with incremental loads, change data capture (CDC), and data scheduling. • Comfortable with version control systems like Git. • Familiarity with BI tools like Power BI for visualization and reporting. Responsibilities: • Design, develop, and maintain data warehouse systems and ETL (Extract, Transform, Load) processes. • Develop and optimize data models and schemas to support business needs. • Design and implement data warehouse architectures, including physical and logical designs. • Design and develop dimensions, facts, and bridges. • Ensure data quality and integrity throughout the ETL process. • Design and implement relational and multidimensional database structures. • Understand data structures and fundamental design principles of data warehouses. • Analyze and modify data structures to adapt them to business needs. • Identify and resolve data quality issues and data warehouse problems. • Debug ETL processes and data warehouse queries.
Communication skills: • Good communication skills to interact with customers. • Ability to understand requirements for implementing an insurance warehouse system.
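The incremental-load and change data capture (CDC) skills listed in this posting boil down to merging a batch of changed rows into an existing table: update rows whose keys match, insert the rest. Here is a minimal sketch of that upsert pattern in Python with pandas (table and column names are hypothetical; a production warehouse would typically use an ETL tool or a SQL MERGE statement):

```python
import pandas as pd

def upsert(target: pd.DataFrame, changes: pd.DataFrame, key: str) -> pd.DataFrame:
    """Apply a CDC batch to a target table: update matching keys, insert new ones."""
    # Keep target rows whose key is NOT in the change batch...
    untouched = target[~target[key].isin(changes[key])]
    # ...then append the changed and new rows, which carry the latest values.
    return pd.concat([untouched, changes], ignore_index=True)

dim_customer = pd.DataFrame({"customer_id": [1, 2], "city": ["Pune", "Delhi"]})
cdc_batch = pd.DataFrame({"customer_id": [2, 3], "city": ["Mumbai", "Kochi"]})

result = upsert(dim_customer, cdc_batch, "customer_id")
print(sorted(result["customer_id"].tolist()))  # [1, 2, 3]
```

Customer 2's city is updated to Mumbai and customer 3 is inserted, while untouched customer 1 is preserved, which is exactly what an incremental load must guarantee.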

Posted 3 months ago

Apply

3 - 5 years

6 - 8 Lacs

Pune

Work from Office

Job Title: Senior Data Engineer Experience Required: 3 to 5 Years Location: Baner, Pune Job Type: Full-Time (WFO) Job Summary: We are seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing scalable data pipelines, working with cloud platforms like Microsoft Azure and AWS, and utilizing advanced tools such as data lakes, PySpark, and Azure Data Factory. The role involves collaborating with cross-functional teams to design and implement robust data solutions that support business intelligence, analytics, and decision-making processes. Key Responsibilities: Design, develop, and maintain scalable ETL pipelines to ingest, transform, and process large datasets from various sources. Build and optimize data pipelines and architectures for efficient and secure data processing. Work extensively with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics for cloud data integration and management. Utilize Databricks and PySpark for advanced big data processing and analytics. Implement data modelling and design data warehouses to support business intelligence tools like Power BI. Ensure data quality, governance, and security using Azure DevOps and Azure Functions. Develop and maintain SQL Server databases and write optimized SQL queries for analytics and reporting. Collaborate with stakeholders to gather requirements and translate them into effective data engineering solutions. Implement data architecture best practices to support big data initiatives and analytics use cases. Monitor, troubleshoot, and improve data workflows and processes to ensure seamless data flow. Required Skills and Qualifications: Educational Background: Bachelor's or Master's degree in computer science, information systems, or a related field. Technical Skills: Strong expertise in ETL development, data engineering, and data pipeline development.
Proficiency in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics. Advanced knowledge of Databricks, PySpark, and Python for data processing. Hands-on experience with Azure SQL, SQL Server, and data warehousing solutions. Knowledge of Power BI for reporting and dashboard creation. Familiarity with Azure Functions, Azure DevOps, and cloud computing in Microsoft Azure. Understanding of data architecture and data modelling principles. Experience with big data tools and frameworks. Experience: Proven experience in designing and implementing large-scale data processing systems. Hands-on experience with DWH and handling big data workloads. Ability to work with both structured and unstructured datasets. Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities to work effectively in a team environment. A proactive mindset with a passion for learning and adopting new technologies. Preferred Skills: Experience with Azure data warehouse technologies. Knowledge of Azure Machine Learning or similar AI/ML frameworks. Familiarity with data governance and data compliance practices.

Posted 3 months ago

Apply

6 - 11 years

12 - 17 Lacs

Gurgaon

Work from Office

Job Responsibilities: Skill set needed from the resource: Data Architecture and Management: Understanding of Azure SQL technology, including SQL databases, operational data stores, and data transformation processes. Azure Data Factory: Expertise in using Azure Data Factory for ETL processes, including creating and managing pipelines. Python Programming: Proficiency in writing Python scripts, particularly using the pandas library, for data cleaning and transformation tasks. Azure Functions: Experience with Azure Functions for handling and processing Excel files, making them suitable for database import. API Integration: Skills in integrating various data sources, including APIs, into the data warehouse.
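The pandas-based cleaning described in this posting (preparing Excel files for database import) typically means normalizing headers and dropping unusable rows. A hypothetical sketch follows, assuming the Excel sheet has already been read into a DataFrame; the layout and column names are illustrative, not from any actual workload:

```python
import pandas as pd

def clean_for_import(df: pd.DataFrame) -> pd.DataFrame:
    """Make an Excel-sourced frame safe for a SQL import: tidy headers, drop empties."""
    out = df.copy()
    # Normalize headers: ' Order Date ' -> 'order_date'
    out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
    out = out.dropna(how="all")           # drop fully blank spreadsheet rows
    out = out.where(pd.notna(out), None)  # NaN -> None, friendlier for DB drivers
    return out

# In practice the frame would come from pd.read_excel(...) inside an Azure Function.
frame = pd.DataFrame({"Order ID": [1, None], " Order Date ": ["2024-01-05", None]})
cleaned = clean_for_import(frame)
print(list(cleaned.columns))  # ['order_id', 'order_date']
```

Snake-case headers matter here because they map directly onto SQL column names without quoting, which keeps the downstream insert statements simple.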

Posted 3 months ago

Apply

8 - 13 years

15 - 30 Lacs

Bengaluru

Work from Office

Design, develop, and maintain scalable ETL pipelines, data lakes, and hosting solutions using Azure tools. Ensure data quality, performance optimization, and compliance across hybrid and cloud environments. Required Candidate Profile: Data engineer with experience in Azure data services, ETL workflows, scripting, and data modeling. Strong collaboration with analytics teams and hands-on pipeline deployment using best practices.

Posted 3 months ago

Apply