0.0 years
0 Lacs
hyderabad, telangana, india
On-site
Lilly's Purpose

Come bring to life technologies to lead in Pharma-tech! The Enterprise Data Platforms team has developed an integrated and intuitive database, integration, and analytics platform. This platform enables Lilly business functions to quickly utilize database services, integration services, and the analytics platform.

What you will be doing:

The Associate Director - Analytic Platform Services & SRE will lead the operations, reliability, and performance engineering of enterprise analytics platforms, enabling self-service analytics, AI/ML model deployment, and data-driven insights across the organization. This role is pivotal in managing hybrid analytics ecosystems including Power BI, Azure Fabric, SAS, Business Objects, JMP, AI/ML, Databricks, Snowflake, and AWS SageMaker Studio, ensuring scalability, reliability, security, and compliance in a highly regulated pharmaceutical environment. The ideal candidate blends technical expertise in analytics and AI platforms with Site Reliability Engineering (SRE) principles, enabling innovation at scale for research, clinical, manufacturing, and commercial functions.

How you will succeed:

Strategic Leadership: Define and execute the analytics platform operations and SRE strategy, aligning with enterprise data and digital transformation roadmaps. Partner with enterprise architects, data scientists, and business stakeholders to scale analytics, AI/ML, and visualization capabilities for Finance, LRL, clinical trials, manufacturing, and commercial operations. Evaluate emerging cloud-native analytics and AI tools to ensure platforms remain future-ready and cost-efficient.

Operations & Platform Management: Oversee the day-to-day operations of analytics platforms including Power BI, Azure Fabric, Databricks, Snowflake, and AWS SageMaker Studio. Drive automation and Infrastructure-as-Code (IaC) for platform provisioning, scaling, and configuration management. Ensure high availability, low latency, and optimized query performance across all platforms to support advanced analytics workloads. Build and maintain data pipelines and integrations to unify data from Finance, clinical, lab, manufacturing, and commercial systems into analytics-ready formats.

SRE & Reliability Engineering: Establish observability frameworks for analytics platforms with metrics, logging, and tracing for proactive performance monitoring. Define and track SLOs, SLIs, and SLAs for platform reliability and performance. Implement incident management, root cause analysis, and blameless post-mortems to improve Mean Time to Detect (MTTD) and Mean Time to Recover (MTTR). Develop self-healing mechanisms and automate capacity planning for scalable platform operations.

Security, Compliance & Governance: Ensure GxP and SOX compliance across analytics environments, particularly for sensitive Finance, clinical, and manufacturing datasets. Implement strong identity and access management, role-based permissions, and data masking controls for governed analytics environments. Collaborate with InfoSec and Data Governance teams to enforce data lineage, data cataloging, and audit-readiness.

Team Leadership & Collaboration: Lead, mentor, and grow a global team of analytics engineers, SRE specialists, and platform administrators. Foster a data-driven culture by enabling self-service analytics and democratized insights across business units. Collaborate closely with data science, AI/ML engineering, and business analytics teams to enable seamless deployment of predictive and prescriptive models. Provide operational playbooks and governance frameworks to standardize platform management.

Lilly is dedicated to helping individuals with disabilities actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form () for further assistance. Please note this form is for individuals requesting an accommodation as part of the application process; any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability, or any other legally protected status. #WeAreLilly
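The SLO/SLI tracking this role owns comes down to simple arithmetic over request metrics. A minimal sketch in Python (the metric values and the 99.9% target are hypothetical, not Lilly's actual objectives):

```python
# Minimal SLI/SLO sketch: compute an availability SLI from request counts
# and track the remaining error budget. All values are illustrative.

def availability_sli(successful_requests: int, total_requests: int) -> float:
    """SLI = fraction of requests served successfully."""
    if total_requests == 0:
        return 1.0
    return successful_requests / total_requests

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    allowed_failure = 1.0 - slo_target   # e.g. 0.001 for a 99.9% SLO
    actual_failure = 1.0 - sli
    return 1.0 - (actual_failure / allowed_failure) if allowed_failure else 0.0

sli = availability_sli(successful_requests=998_700, total_requests=1_000_000)
budget = error_budget_remaining(sli, slo_target=0.999)
print(f"SLI: {sli:.4%}, error budget remaining: {budget:.1%}")
```

Here the SLI of 99.87% against a 99.9% SLO leaves a negative error budget, which is exactly the signal that would trigger the incident-management and post-mortem processes described above.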
Posted 1 day ago
6.0 - 10.0 years
8 - 12 Lacs
pune, gurugram, bengaluru
Work from Office
Contractual. Hiring manager profile: linkedin.com/in/yashsharma1608. Payroll: https://www.nyxtech.in/

1. AZURE DATA ENGINEER WITH FABRIC

The Role: Lead Data Engineer (payroll client - Brillio)

About the Role: Experience: 6 to 8 yrs. Location: Bangalore, Hyderabad, Pune, Chennai, Gurgaon (Hyderabad preferred). Notice: 15 days / 30 days. Budget: 15 LPA. Azure Fabric experience is mandatory.

Skills: Azure OneLake, data pipelines, Apache Spark, ETL, Data Factory, Azure Fabric, SQL, Python/Scala.

Key Responsibilities: Data Pipeline Development: Lead the design, development, and deployment of data pipelines using Azure OneLake, Azure Data Factory, and Apache Spark, ensuring efficient, scalable, and secure data movement across systems. ETL Architecture: Architect and implement ETL (Extract, Transform, Load) workflows, optimizing the process for data ingestion, transformation, and storage in the cloud. Data Integration: Build and manage data integration solutions that connect multiple data sources (structured and unstructured) into a cohesive data ecosystem. Use SQL, Python, Scala, and R to manipulate and process large datasets. Azure OneLake Expertise: Leverage Azure OneLake and Azure Synapse Analytics to design and implement scalable data storage and analytics solutions that support big data processing and analysis. Collaboration with Teams: Work closely with Data Scientists, Data Analysts, and BI Engineers to ensure that the data infrastructure supports analytical needs and is optimized for performance and accuracy. Performance Optimization: Monitor, troubleshoot, and optimize data pipeline performance to ensure high availability, fast processing, and minimal downtime. Data Governance & Security: Implement best practices for data governance, data security, and compliance within the Azure ecosystem, ensuring data privacy and protection. Leadership & Mentorship: Lead and mentor a team of data engineers, promoting a collaborative and high-performance team culture. Oversee code reviews, design decisions, and the implementation of new technologies. Automation & Monitoring: Automate data engineering workflows, job scheduling, and monitoring to ensure smooth operations. Use tools like Azure DevOps, Airflow, and other relevant platforms for automation and orchestration. Documentation & Best Practices: Document data pipeline architecture, data models, and ETL processes, and contribute to the establishment of engineering best practices, standards, and guidelines. Innovation: Stay current with industry trends and emerging technologies in data engineering, cloud computing, and big data analytics, driving innovation within the team.
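The ingest-transform-load work described above looks roughly like this in PySpark. A minimal sketch with assumed paths and column names (in Fabric, an abfss:// OneLake URI would replace the local paths):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Ingest: read raw CSV files (a OneLake abfss:// path would go here in Fabric)
raw = spark.read.option("header", True).csv("/data/raw/orders/")

# Transform: type casting, deduplication, and a derived partition column
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: land the result as a partitioned Delta table for analytics
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("/data/curated/orders"))
```

In practice this step would be wrapped in an Azure Data Factory or Fabric pipeline activity so that scheduling, retries, and monitoring sit with the orchestrator rather than the Spark job.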
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
We are seeking an experienced and highly motivated Senior Azure Data Engineer to join a Data & Analytics team. As a Senior Data Engineer, your primary responsibility will be to design, develop, implement, and manage scalable, robust, and secure data solutions on the Microsoft Azure platform. You will lead a team of data engineers, set technical direction, ensure the quality and efficiency of data pipelines, and collaborate closely with data scientists, analysts, and business stakeholders to meet data requirements.

Your key responsibilities will include leading, mentoring, and providing technical guidance to a team of Azure Data Engineers. You will design, architect, and implement end-to-end data solutions on Azure, encompassing data ingestion, transformation, storage (lakes/warehouses), and serving layers. It will also be crucial to oversee and actively participate in the development, testing, and deployment of robust ETL/ELT pipelines using key Azure services. You will establish and enforce data engineering best practices, coding standards, data quality checks, and monitoring frameworks. Additionally, you will ensure that data solutions are optimized for performance, cost, scalability, security, and reliability. Effective collaboration with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions will be essential. You will manage, monitor, and troubleshoot Azure data platform components and pipelines while also contributing to the strategic technical roadmap for the data platform.

In terms of qualifications and experience, you should have a minimum of 6-8+ years of overall experience in data engineering roles, with at least 3-4+ years of hands-on experience designing, implementing, and managing data solutions specifically on the Microsoft Azure cloud platform. Proven experience in a lead or senior engineering role, demonstrating mentorship and technical guidance capabilities, is also required. A Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field is preferred.

Your technical skills should include deep expertise in core Azure Data Services such as Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage. Strong proficiency in Spark (using PySpark or Scala) and expert-level SQL skills are essential, and proficiency in Python is highly desired. You should have a solid understanding of data warehousing principles, dimensional modeling, ETL/ELT patterns, and data lake design. Experience with relational databases and familiarity with NoSQL concepts/databases is beneficial. Proficiency with Git for code management is also required.

Leadership and soft skills are crucial for this role, including excellent leadership, mentoring, problem-solving, and communication skills, along with the ability to collaborate effectively across teams. Additionally, experience with data extraction patterns via ADF (API, files, databases), data masking in Synapse, RBAC, and data warehousing using Kimball modeling will be advantageous.

About UST: UST is a global digital transformation solutions provider that has been working with the world's best companies for over 20 years to drive real impact through transformation. With deep domain expertise and a future-proof philosophy, UST partners with clients from design to operation, embedding innovation and agility into their organizations. UST's team of over 30,000 employees in 30 countries builds for boundless impact, touching billions of lives in the process.
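The "data quality checks and monitoring frameworks" this role establishes can be as simple as a declarative validation pass before data is published. A PySpark sketch with hypothetical rules and a toy dataset (not UST's actual framework):

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.appName("dq_sketch").getOrCreate()

# Toy stand-in for a pipeline output
orders = spark.createDataFrame(
    [(1, "c1", 10.0), (2, "c2", None)],
    ["order_id", "customer_id", "amount"],
)

def run_quality_checks(df: DataFrame, key_col: str, not_null_cols: list) -> dict:
    """Run simple declarative checks and return a pass/fail summary."""
    total = df.count()
    results = {
        "row_count_nonzero": total > 0,
        "unique_key": df.select(key_col).distinct().count() == total,
    }
    for col in not_null_cols:
        null_count = df.filter(F.col(col).isNull()).count()
        results[f"{col}_not_null"] = null_count == 0
    return results

# Fail the pipeline run early if any check fails
checks = run_quality_checks(orders, key_col="order_id",
                            not_null_cols=["customer_id", "amount"])
failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```

Raising on failure lets the orchestrator (ADF, Synapse pipelines) mark the run as failed and alert, rather than silently publishing bad data downstream.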
Posted 1 week ago
7.0 - 12.0 years
9 - 19 Lacs
hyderabad
Work from Office
Role: Lead Azure Data Engineer. Exp: 6+ years. Location: Hyderabad (remote candidates can be considered but must report to the client's Hyderabad office for the first 10 days). Mandatory Skills: Python, SQL, Spark, Azure Synapse, ADF, or relevant experience in Azure Fabric.
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
Remote
Role: Lead Azure Data Engineer. Exp: 6+ years. Location: Hyderabad (remote candidates can be considered but must report to the client's Hyderabad office for the first 10 days). Mandatory Skills: Python, SQL, Spark, Azure Synapse, ADF, or relevant experience in Azure Fabric.
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
gurgaon, haryana, india
On-site
Job Description - Data Scientist

We are seeking a highly motivated and skilled Data Scientist with 2-3 years of hands-on experience in data science, statistical modeling, and cloud-based analytics. The ideal candidate will have strong technical expertise in SAS, Azure Fabric, Python, and SQL, along with excellent communication skills and a working knowledge of insurance concepts.

Required Qualifications: Bachelor's/Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field. 2-3 years of experience in data science, analytics, or a related role. Proficiency in Python, SAS, SQL, Azure Fabric, and ML. Good to have: experience with data visualization tools (e.g., Power BI, Tableau). Strong problem-solving skills and analytical thinking. Excellent verbal and written communication skills.

Key Responsibilities: Design, develop, and deploy predictive models and analytical tools using Python, SAS, and SQL. Leverage Azure Fabric for scalable data processing and model deployment. Collaborate with data engineers, business analysts, and stakeholders to understand business requirements and deliver data-driven solutions. Communicate complex analytical findings in a clear and compelling manner to technical and non-technical audiences.
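As a toy illustration of the predictive-modeling responsibility above, a scikit-learn workflow on synthetic data (the feature names and claim-indicator target are invented; a real insurance use case would substitute actual policy features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an insurance dataset: two numeric features
# (e.g. scaled age and premium) and a binary claim indicator.
rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```

The same fit/score loop is what would later be packaged for deployment on the cloud platform; the modeling library and validation metric would vary with the business problem.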
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As a Data Orchestration Platform Product Owner at Lloyd's Register, you will oversee the continuous development and management of the data orchestration platform, focusing primarily on Azure technologies, notably Azure Fabric, as part of LR's technology roadmap. Reporting directly to the Director of Data Systems, your role will involve collaborating with stakeholders such as business stakeholders, project managers, architects, and offshore teams to ensure the successful delivery of data solutions. Your expertise in platform engineering, ITIL/SIAM service management, and Agile Scrum methodologies will be integral to your responsibilities. You will have the opportunity to work with a skilled and cooperative team at Lloyd's Register. The role also offers flexible working within core UK/EU working hours and the chance to work for an organization that is purpose-driven, values-based, and supports professional and personal development through a range of people development programs. Your key responsibilities will include collaborating with LR's Infrastructure leadership to develop and manage the data orchestration platform using Azure technologies, particularly Azure Fabric. You will work closely with the Data Architect, Information Security team, and platform engineers to define and execute the data orchestration technology roadmap to facilitate advanced analytics, AI, and system integrations. Furthermore, you will collaborate with the Data Orchestration Platform's Data Analysts and Engineers to deliver outcomes such as integrations, data modeling, and PBI reporting. As the Product Owner, you will be responsible for overseeing platform service management, including incidents, service requests, platform maintenance, and security posture. You will develop and implement a continuous improvement plan for the platform, focusing on enhancing service management processes and rules in alignment with the technology roadmap. Engaging with offshore scrum masters to drive an Agile delivery process and associated Scrum ceremonies across all data services will be a crucial aspect of your role. Furthermore, you will coordinate with offshore teams to ensure effective collaboration and deliverable execution, monitor delivery progress, identify potential risks, and implement mitigation strategies. It will also be your responsibility to ensure that data solutions meet quality standards and exceed client expectations. To excel in this role, you should have proven experience as an Enterprise Platform Engineering Lead on data orchestration projects/services, excellent knowledge of enterprise Azure technologies (Synapse, ADF, APIM, etc.), strong business stakeholder engagement and communication skills, solid project management experience with a focus on Agile/Scrum methodologies, experience working with offshore teams and managing remote collaboration, strong analytical and problem-solving skills, and the ability to work independently and manage multiple priorities effectively.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
indore, madhya pradesh
On-site
We are the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. With our unmatched range of services, we provide stability, security, and improved business performance, freeing our clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide with over $3 trillion of assets under management put their trust in us. We believe that success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

We have an exciting opportunity for a Cloud Data Engineer. This full-time position is open for an experienced Senior Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer-facing, requiring excellent communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role.

As a senior cloud data engineer with 7+ years of experience, you will have strong knowledge of and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake data warehouse, and Azure Fabric. It is good to have experience with Azure Databricks, Azure Cosmos DB, Azure AI, and developing cloud-based applications. You should be able to analyze problems and provide solutions; design, implement, and manage data warehouse solutions using Azure Synapse Analytics or similar technologies; migrate data from on-premises to cloud; and be proficient in data modeling techniques.

Your responsibilities include designing and developing ETL/ELT processes to move data between systems and transform data for analytics; developing and maintaining data pipelines; managing and optimizing databases; understanding business requirements and converting them into technical designs for implementation; performing analysis, developing and testing code; designing and developing cloud-based applications using Python on a serverless framework; troubleshooting; creating, maintaining, and enhancing applications; working effectively in a team environment as well as independently as an individual contributor; communicating complex technical concepts to non-technical stakeholders; and following Agile methodology (Scrum). Strong programming skills in languages such as SQL, Python, or Scala and experience in at least one reporting tool such as Power BI or Tableau are required.

You should have experience developing cloud-based data applications; hands-on experience in Azure data services, data warehousing, and ETL; an understanding of cloud architecture principles and best practices; experience developing pipelines using ADF and Synapse and migrating data from on-premises to cloud; the ability to write complex SQL scripts and transformations; and knowledge of CI/CD pipelines, Python, and API Gateway. Product management/BA experience is a nice-to-have.

Our culture is all about connection - connection with our clients, our technology, and most importantly with each other. In addition to working with an amazing team around the world, we also offer a competitive compensation package. If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you. Love Your Job, Share Your Technology Passion, Create Your Future Here!
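A recurring building block behind the on-premises-to-cloud migration work mentioned above is watermark-based incremental extraction. A sketch using pyodbc, with a placeholder connection string and an assumed dbo.orders table carrying a modified_at column:

```python
import pyodbc

# Placeholder connection details; real values would come from Key Vault.
SOURCE_CONN = ("DRIVER={ODBC Driver 18 for SQL Server};"
               "SERVER=onprem-sql;DATABASE=sales;Trusted_Connection=yes")

WATERMARK_QUERY = """
    SELECT order_id, customer_id, amount, modified_at
    FROM dbo.orders
    WHERE modified_at > ?
    ORDER BY modified_at
"""

def extract_incremental(last_watermark):
    """Pull only rows modified since the previous successful run."""
    conn = pyodbc.connect(SOURCE_CONN)
    try:
        rows = conn.cursor().execute(WATERMARK_QUERY, last_watermark).fetchall()
    finally:
        conn.close()
    # The new watermark is the last modified_at seen (rows come back ordered);
    # persist it only after the batch lands successfully in the cloud sink.
    new_watermark = rows[-1].modified_at if rows else last_watermark
    return rows, new_watermark
```

ADF implements the same pattern natively with its incremental-copy templates; the sketch just makes the watermark bookkeeping explicit.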
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As an Integration Developer at our organization, you will be an integral part of the data engineering and integration team. Your primary responsibility will be to design, develop, and maintain ETL/ELT pipelines using tools such as SSIS and Azure Data Factory (ADF). You will play a crucial role in building reliable data integration pipelines that support various business functions including business intelligence, analytics, and operational workloads. Your key responsibilities will include implementing robust data workflows to ingest, transform, and store structured and unstructured data in Azure Data Lakehouse, integrating cloud and on-premise systems for high performance and data quality, and actively participating in architectural discussions to contribute to the design of data integration frameworks on the Azure platform. Collaboration with data analysts, data scientists, and business users will be essential to understand data requirements and deliver scalable solutions. Monitoring and optimizing data pipelines for performance, reliability, and cost-efficiency will also be part of your responsibilities. To excel in this role, you are required to have a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 5 years of experience in data integration or ETL development roles. Strong expertise in SSIS, Azure Data Factory, Azure Data Lake, Delta Lake, and Azure Storage (Blob/ADLS Gen2) is necessary. Proficiency in SQL, familiarity with Azure Fabric concepts, and problem-solving skills are also essential. Preferred qualifications include experience with DataOps or CI/CD pipelines, exposure to Databricks, Spark, or Python for data transformation, and knowledge of data governance tools. Possessing Microsoft Azure certifications such as DP-203 or AZ-204 would be considered a plus. This is a full-time position with benefits such as food provided and Provident Fund. The work location is in-person. Join us as an Integration Developer and be part of a dynamic team that is dedicated to creating scalable and resilient data integration solutions on the Azure platform.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
As an Azure Data Engineer, you will be responsible for deploying and maintaining fully automated data transformation pipelines using Azure Data Factory, PySpark, Azure DevOps, and Azure Delta Lake. Your role will involve designing data products and pipelines that are resilient, modular, flexible, scalable, reusable, and cost-effective in support of data architecture design. Your key responsibilities will include designing, developing, and maintaining data pipelines and ETL processes using Microsoft Azure services such as Azure Data Factory, Azure Synapse, Azure Databricks, and Azure Fabric. You will also utilize Azure data storage accounts such as Azure Data Lake Storage Gen2 and Azure Blob Storage for organizing and managing data pipeline outputs. Collaboration with data scientists, data analysts, data architects, and other stakeholders will be essential to understand data requirements and deliver high-quality data solutions. You will need to optimize data pipelines in the Azure environment for performance, scalability, and reliability while ensuring data quality and integrity through validation techniques and frameworks. Documentation is a crucial aspect of your role, as you will be required to develop and maintain documentation for data processes, configurations, and best practices. Monitoring and troubleshooting data pipeline issues to ensure timely resolution is also part of your responsibilities. Staying current with industry trends and emerging technologies will be necessary to ensure that the data solutions you deliver remain cutting-edge. Furthermore, managing the CI/CD process for deploying and maintaining data solutions will be an integral part of your role as an Azure Data Engineer.
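The resilient, reusable pipeline design this role calls for often rests on idempotent upserts into Delta Lake, so that a retried run converges to the same state. A sketch using the Delta Lake Python API available on Databricks/Synapse Spark runtimes (table paths and the customer_id key are illustrative):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("upsert_sketch").getOrCreate()

# Incoming batch of changed rows (in practice read from ADLS Gen2 / Blob)
updates = spark.read.format("delta").load("/data/staging/customers_changes")

target = DeltaTable.forPath(spark, "/data/curated/customers")

# Idempotent MERGE: update matched keys, insert new ones. Re-running the
# same batch produces the same end state, which makes retries safe.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```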
Posted 1 month ago
10.0 - 18.0 years
0 Lacs
pune, maharashtra
On-site
We are looking for a seasoned Senior Data Architect with extensive knowledge in Databricks and Microsoft Fabric to join our team. In this role, you will be responsible for leading the design and implementation of scalable data solutions for BFSI and HLS clients. As a Senior Data Architect specializing in Databricks and Microsoft Fabric, you will play a crucial role in architecting and implementing secure, high-performance data solutions on the Databricks and Azure Fabric platforms. Your responsibilities will include leading discovery workshops, designing end-to-end data pipelines, optimizing workloads for performance and cost efficiency, and ensuring compliance with data governance, security, and privacy policies. You will collaborate with client stakeholders and internal teams to deliver technical engagements and provide guidance on best practices for Databricks and Microsoft Azure. Additionally, you will stay updated on the latest industry developments and recommend new data architectures, technologies, and standards to enhance our solutions. As a subject matter expert in Databricks and Azure Fabric, you will be responsible for delivering workshops, webinars, and technical presentations, as well as developing white papers and reusable artifacts to showcase our company's value proposition. You will also work closely with Databricks partnership teams to contribute to co-marketing and joint go-to-market strategies. In terms of business development support, you will collaborate with sales and pre-sales teams to provide technical guidance during RFP responses and identify upsell and cross-sell opportunities within existing accounts. To be successful in this role, you should have a minimum of 10+ years of experience in data architecture, engineering, or analytics roles, with specific expertise in Databricks and Azure Fabric. You should also possess strong communication and presentation skills, as well as the ability to collaborate effectively with diverse teams. Additionally, certifications in cloud platforms such as AWS and Microsoft Azure will be advantageous. In return, we offer a competitive salary and benefits package, a culture focused on talent development, and opportunities to work with cutting-edge technologies. At Persistent, we are committed to fostering diversity and inclusion in the workplace and invite applications from all qualified individuals. We provide a supportive and inclusive environment where all employees can thrive and unleash their full potential. Join us at Persistent and accelerate your growth professionally and personally while making a positive impact on the world with the latest technologies.
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The BI Data Engineer is a key role within the Enterprise Data team. We are looking for an expert Azure data engineer with deep data engineering, ADF integration, and database development experience. This is a unique opportunity to be involved in delivering leading-edge business analytics using the latest cutting-edge BI tools, such as cloud-based databases, self-service analytics, and leading visualisation tools, enabling the company's aim to become a fully digital organisation.

Job Description:

Key Responsibilities: Build enterprise data engineering and integration solutions using the latest Azure platform: Azure Data Factory, Azure SQL Database, Azure Synapse, and Azure Fabric. Develop enterprise ETL and integration routines using ADF. Evaluate emerging data engineering technologies, standards, and capabilities. Partner with business stakeholders, product managers, and data scientists to understand business objectives and translate them into technical solutions. Work with DevOps, engineering, and operations teams to implement CI/CD pipelines and ensure smooth deployment of data engineering solutions.

Required Skills and Experience. Technical Expertise: Expertise in the Azure platform, including Azure Data Factory, Azure SQL Database, Azure Synapse, and Azure Fabric. Exposure to Databricks and lakehouse architecture & technologies. Extensive knowledge of data modeling, ETL processes, and data warehouse design principles. Experience with machine learning and AI services in Azure.

Professional Experience: 5+ years of experience in database development using SQL. 5+ years of integration and data engineering experience. 5+ years of experience using Azure SQL DB, ADF, and Azure Synapse. 2+ years of experience using Power BI. Comprehensive understanding of data modelling. Relevant certifications in data engineering, machine learning, or AI.

Key Competencies: Expertise in data engineering and database development. Familiarity with the Microsoft Fabric technologies, including OneLake, Lakehouse, and Data Factory. Strong understanding of data governance, compliance, and security frameworks. Proven ability to drive innovation in data strategy and cloud solutions. A deep understanding of business intelligence workflows and the ability to align technical solutions with them. Strong database design skills, including an understanding of both normalised-form and dimensional-form databases. In-depth knowledge and experience of data-warehousing strategies and techniques, e.g., Kimball data warehousing. Experience with cloud-based data integration tools like Azure Data Factory. Experience with Azure DevOps or JIRA is a plus. Experience working with finance data is highly desirable. Familiarity with agile development techniques and objectives.

Location: Pune Brand: Dentsu Time Type: Full time Contract Type: Permanent
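The Kimball-style warehousing named above typically involves maintaining slowly changing dimensions. A sketch of an SCD Type 2 refresh as two Spark SQL statements over assumed dim_customer and stg_customer Delta tables (column names are invented, and the two statements are not atomic, so production code would add orchestration-level retry handling):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Step 1: close the current version of any customer whose tracked
# attribute changed (MERGE is supported on Delta tables).
spark.sql("""
    MERGE INTO dim_customer d
    USING stg_customer s
    ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND s.segment <> d.segment THEN
      UPDATE SET d.is_current = false, d.valid_to = current_date()
""")

# Step 2: insert a fresh current row for changed customers (whose old
# row was just closed) and for brand-new customers. Column order is
# assumed to match the dim_customer definition.
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.segment,
           current_date() AS valid_from,
           CAST(NULL AS DATE) AS valid_to,
           true AS is_current
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHERE d.customer_id IS NULL
""")
```

The valid_from/valid_to window lets fact tables join to the dimension version that was current at transaction time, which is the core of the Kimball approach.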
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
We are seeking a skilled and seasoned Senior Data Engineer to become a valued member of our innovative team. The ideal candidate should possess a solid foundation in data engineering and demonstrate proficiency in Azure, particularly Azure Data Factory (ADF), Azure Fabric, Databricks, and Snowflake. In this role, you will be responsible for the design, construction, and upkeep of data pipelines, ensuring data quality and accessibility, as well as collaborating with various teams to support our data-centric initiatives. Your responsibilities will include crafting, enhancing, and sustaining robust data pipelines utilizing tools such as Azure Data Factory, Azure Fabric, Databricks, and Snowflake. Moreover, you will work closely with data scientists, analysts, and stakeholders to comprehend data requirements, guarantee data availability, and maintain data quality. Implementing and refining ETL processes to efficiently ingest, transform, and load data from diverse sources into data warehouses, data lakes, and Snowflake will also be part of your role. Furthermore, you will play a crucial role in ensuring data integrity and security by adhering to best practices and data governance policies. Monitoring and rectifying data pipelines for timely and accurate data delivery, as well as optimizing data storage and retrieval processes to enhance performance and scalability, will be among your key responsibilities. Staying abreast of industry trends and best practices in data engineering and cloud technologies is essential, along with mentoring and providing guidance to junior data engineers. To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Additionally, you must have over 5 years of experience in data engineering, with a strong emphasis on Azure, ADF, Azure Fabric, Databricks, and Snowflake. Proficiency in SQL, experience in data modeling and database design, and solid programming skills in Python, Scala, or Java are prerequisites. Familiarity with big data technologies like Apache Spark, Hadoop, and Kafka, as well as a sound grasp of data warehousing concepts and solutions, including Azure Synapse Analytics and Snowflake, are highly desirable. Knowledge of data governance, data quality, and data security best practices, exceptional problem-solving skills, and effective communication and collaboration abilities within a team setting are essential. Preferred qualifications include experience with other Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB, familiarity with DevOps practices and tools for CI/CD in data engineering, and certifications in Azure Data Engineering, Snowflake, or related areas.,
Posted 1 month ago
7.0 - 11.0 years
15 - 25 Lacs
Hyderabad
Hybrid
Role Purpose: The Senior Data Engineer will support and enable the Data Architecture and the Data Strategy, supporting solution architecture and engineering for data ingestion and modelling challenges. The role will support the deduplication of enterprise data tools, working with the Lonza Data Governance Board, Digital Council, and IT to drive towards a single Data and Information Architecture. This will be a hands-on engineering role with a focus on business and digital transformation. The role will be responsible for managing and maintaining the Data Architecture and the solutions that deliver the platform, with operational support and troubleshooting. The Senior Data Engineer will also manage (with no reporting-line changes, but from a day-to-day delivery perspective) and coordinate the Data Engineering team members (internal and external) working on the various project implementations.

Experience: 7-10 years' experience with digital transformation and data projects. Experience in designing, delivering, and managing data infrastructures. Proficiency in using cloud services (Azure) for data engineering, storage, and analytics. Strong SQL and NoSQL experience. Data modelling. Hands-on experience developing pipelines and setting up architectures in Azure Fabric. Team management experience (internal and external resources). Good understanding of data warehousing, data virtualization, and analytics. Experience in working with data analysts, data scientists, and BI teams to deliver on data requirements. Data Catalogue experience is a plus. ETL pipeline design is a plus. Python development skills are a plus. Real-time data ingestion (e.g., Kafka).

Licenses or Certifications: Beneficial: ITIL, PM, CSM, Six Sigma, Lean.

Knowledge: Good understanding of integration, ETL, API, and data sharing concepts. Understanding/awareness of visualization tools is a plus. Knowledge and understanding of relevant legal and regulatory requirements, such as CFR 21 Part 11, the EU General Data Protection Regulation, the Health Insurance Portability and Accountability Act (HIPAA), and the GxP validation process, would be a plus.

Skills: The position requires a pragmatic leader with sound knowledge of data, integration, and analytics. Excellent written and verbal communication skills, interpersonal and collaborative skills, and the ability to communicate technical concepts to non-technical audiences. Excellent analytical skills, the ability to manage and contribute to multiple projects under strict timelines, and the ability to work well in a demanding, dynamic environment and meet overall objectives. Project management skills: scheduling and resource management are a plus. Ability to motivate cross-functional, interdisciplinary teams to achieve tactical and strategic goals. Data Catalogue, project, and team management skills are a plus. Strong SAP skills are a plus.
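The real-time ingestion requirement (e.g., Kafka) maps naturally onto Spark Structured Streaming. A minimal sketch with placeholder brokers, topic name, and event schema:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("kafka_ingest_sketch").getOrCreate()

event_schema = (StructType()
                .add("sensor_id", StringType())
                .add("reading", DoubleType()))

# Read a Kafka topic as an unbounded stream
raw = (spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092")
            .option("subscribe", "plant-telemetry")
            .load())

# Kafka delivers bytes; decode and parse the JSON payload
events = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", event_schema).alias("e"))
             .select("e.*"))

# Land micro-batches into a Delta table; the checkpoint makes the
# stream recoverable after failures or restarts.
query = (events.writeStream.format("delta")
               .option("checkpointLocation", "/chk/plant-telemetry")
               .start("/data/bronze/plant_telemetry"))
# query.awaitTermination()  # block until the stream is stopped
```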
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining YASH Technologies, a leading technology integrator focused on helping clients enhance competitiveness, optimize costs, and drive business transformation in an increasingly virtual world. As a Microsoft Fabric Professional, you will work with cutting-edge technologies in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL processes. Your key responsibilities will include creating pipelines, datasets, dataflows, and integration runtimes, and monitoring pipelines in Azure. You will extract, transform, and load data from source systems using Azure Databricks and create SQL scripts for complex queries. Additionally, you will develop Synapse pipelines to migrate data from Gen2 to Azure SQL and work on data migration pipelines to the Azure cloud (Azure SQL). Experience in using Azure Data Catalog and with Big Data batch processing, interactive processing, and real-time processing solutions will be beneficial for this role. While certifications are considered good to have, YASH Technologies provides an inclusive team environment where you are empowered to create a career path aligned with your aspirations. The workplace culture is grounded in principles like flexible work arrangements, emotional positivity, trust, transparency, open collaboration, and all necessary support for realizing business goals. Join us at YASH Technologies for stable employment and a great atmosphere with an ethical corporate culture.
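The Gen2-to-Azure-SQL migration pipelines described above commonly end in a JDBC write from Spark. A sketch with placeholder storage paths and connection details (secrets would come from Key Vault, not literals):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_load_sketch").getOrCreate()

# Source: curated data sitting in ADLS Gen2
df = spark.read.parquet("abfss://curated@mydatalake.dfs.core.windows.net/sales/")

jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=analytics;encrypt=true"
)

# Sink: Azure SQL Database via the SQL Server JDBC driver
(df.write.format("jdbc")
   .option("url", jdbc_url)
   .option("dbtable", "dbo.sales")
   .option("user", "etl_user")              # placeholder credentials
   .option("password", "<from-key-vault>")
   .mode("append")
   .save())
```

A Synapse or ADF copy activity achieves the same movement declaratively; the Spark route is useful when transformation and load need to happen in one job.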
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be joining Lexitas, a high-growth company that values strong personal relationships with clients and delivers reliable, accurate, and professional services across various offerings such as local and national court reporting, medical record retrieval, process service, registered agent services, and legal talent outsourcing. As part of a multinational corporation, Lexitas has established a subsidiary in Chennai, India, known as Lexitas India Pvt. Ltd., aimed at becoming the Lexitas Global Capability Center. This center will focus on building a world-class IT development team and evolving into a Shared Services hub for several corporate functions. To learn more about Lexitas, visit https://www.lexitaslegal.com. This is a full-time position based in Chennai, India. In this role, you will lead the design and development of advanced Power BI reports and dashboards, offer guidance on data modeling and DAX calculations, collaborate with stakeholders to define data requirements, ensure data security and compliance, and troubleshoot and optimize Power BI solutions. The ideal candidate should have 6 to 8+ years of experience working with reporting tools, 3 to 5+ years of hands-on development experience with Power BI, proficiency in SQL and data warehouse concepts, expertise in developing and optimizing complex Power BI solutions, experience in developing, debugging, and writing complex MS SQL queries, familiarity with data pipeline orchestration and automation, skills in performance tuning and optimization of Power BI reports and SQL queries, the ability to architect end-to-end BI solutions, strong communication skills to lead cross-functional teams, project management capabilities to deliver results, and an understanding of Cloud and Azure Fabric; certifications in Power BI are highly desirable. Qualifications for this position include a Bachelor's degree in Computer Science (Master's preferred), along with 8+ years of proven experience.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You have an exciting opportunity to join YASH Technologies as a Microsoft Fabric Professional. As part of our team, you will work with cutting-edge technologies to drive business transformation and create real positive changes in an increasingly virtual world. Your main responsibilities will include working with Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL processes. You will create pipelines, datasets, dataflows, and integration runtimes, and monitor pipeline trigger runs. Additionally, you will be involved in extracting, transforming, and loading data from source systems using Azure Databricks, as well as creating SQL scripts for complex queries. Moreover, you will work on creating Synapse pipelines to migrate data from Gen2 to Azure SQL, data migration pipelines to the Azure cloud (Azure SQL), and database migration from on-prem SQL Server to the Azure Dev environment using Azure DMS and Data Migration Assistant. Experience in using Azure Data Catalog and with Big Data batch processing, interactive processing, and real-time processing solutions is a plus. As a Microsoft Fabric Professional, you are encouraged to pursue relevant certifications to enhance your skills. At YASH Technologies, we provide a supportive and inclusive team environment where you can create a career that aligns with your goals. Our Hyperlearning workplace is built on flexibility, emotional positivity, trust, transparency, and open collaboration to help you achieve your business goals while maintaining stable employment in a great atmosphere with an ethical corporate culture.
Posted 1 month ago
3.0 - 8.0 years
6 - 15 Lacs
Ahmedabad
Work from Office
Job Description: As an ETL Developer, you will be responsible for designing, building, and maintaining ETL pipelines using the MSBI stack, Azure Data Factory (ADF), and Fabric. You will work closely with data engineers, analysts, and other stakeholders to ensure data is accessible, reliable, and processed efficiently. Key Responsibilities: Design, develop, and deploy ETL pipelines using ADF and Fabric. Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes. Optimize data pipelines for performance, scalability, and robustness. Integrate data from various sources, including S3, relational databases, and APIs. Implement data validation and error handling mechanisms to ensure data quality. Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability. Maintain and update existing data pipelines as data sources and requirements evolve. Document ETL processes, data models, and pipeline configurations. Qualifications: Experience: 3+ years of experience in ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric. Technical Skills: Strong expertise in ADF, the MSBI stack, SQL, and Power BI. Proficiency in programming languages such as Python or Scala. Hands-on experience with ADF, Fabric, Power BI, and MSBI. Solid understanding of data warehousing concepts, data modeling, and ETL best practices. Familiarity with orchestration tools like Apache Airflow is a plus. Data Integration: Experience with integrating data from diverse sources, including relational databases, APIs, and flat files. Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex ETL issues. Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams. Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience. Nice to Have: Experience with data lakes and big data processing. Knowledge of data governance and security practices in a cloud environment.
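For the API-integration and error-handling responsibilities above, a common primitive is an extractor that retries transient failures with exponential backoff. A sketch against a hypothetical endpoint:

```python
import time

import requests

def fetch_with_retry(url: str, retries: int = 3, backoff_s: float = 2.0) -> dict:
    """GET a JSON payload, retrying transient failures with exponential backoff."""
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()      # treat HTTP 4xx/5xx as errors
            return resp.json()
        except requests.RequestException as exc:
            if attempt == retries:
                raise                    # surface the failure to the orchestrator
            sleep_for = backoff_s * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {sleep_for}s")
            time.sleep(sleep_for)

# Hypothetical endpoint; a real pipeline would parameterize the since filter
payload = fetch_with_retry("https://api.example.com/v1/orders?since=2024-01-01")
```

Letting the final failure propagate (rather than swallowing it) is deliberate: the ADF or Airflow job then records a failed run and alerts, which is the monitoring behavior the posting asks for.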
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Microsoft Fabric Professional at YASH Technologies, you will be responsible for working with cutting-edge technologies to bring about real positive changes in an increasingly virtual world. You will have the opportunity to contribute to business transformation by leveraging your experience in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure Storage Services, Azure SQL, ETL, Azure Cosmos DB, Event Hub, Azure Data Catalog, Azure Functions, and Azure Purview. With 5-8 years of experience in Microsoft Cloud solutions, you will be involved in creating pipelines, datasets, dataflows, and integration runtimes, and monitoring pipelines. Your role will also entail extracting, transforming, and loading data from source systems using Azure Databricks, as well as preparing DB design documents based on client requirements. Collaborating with the development team, you will create database structures, queries, and triggers while working on SQL scripts and Synapse pipelines for data migration to Azure SQL. Your responsibilities will include data migration pipelines to the Azure cloud, database migration from on-prem SQL Server to the Azure Dev environment, and implementing data governance in Azure. Additionally, you will work on data migration pipelines for on-prem SQL Server data to the Azure cloud, along with utilizing Azure Data Catalog, and experience with Big Data batch processing, interactive processing, and real-time processing solutions. To excel in this role, mandatory certifications are required. At YASH Technologies, you will have the opportunity to create a career path tailored to your aspirations within an inclusive team environment. Our Hyperlearning workplace is built on principles of flexible work arrangements, free spirit, emotional positivity, agile self-determination, trust, transparency, open collaboration, support for business goals realization, stable employment, and an ethical corporate culture. Join us to embark on a journey of continuous learning, unlearning, and relearning in a dynamic and evolving technology landscape.
Posted 2 months ago
5.0 - 10.0 years
0 Lacs
bhubaneswar
On-site
As a Senior Azure Data Engineer, you will be an integral part of our dynamic data team, contributing your expertise and skills to design, build, optimize, and maintain scalable data solutions on the Azure platform. Your primary responsibilities will include developing robust data pipelines using Azure Data Factory, Azure Data Lake, and Azure SQL, as well as working with Azure Fabric, Cosmos DB, and SQL Server to create end-to-end data solutions. You will also be involved in Database Design, Data Modeling, Performance Tuning, and writing complex SQL queries to support data processing and reporting requirements. Proactive optimization strategies and leading data migration efforts will be key aspects of your role, along with collaborating with cross-functional teams to translate business requirements into technical solutions. Maintaining documentation and adhering to industry best practices for security, compliance, and scalability will also be essential. The ideal candidate for this position should possess proven experience in Azure Fabric, SQL Server, Azure Data Factory, Azure Data Lake, and Cosmos DB. Strong hands-on expertise in complex SQL queries, query efficiency and optimization, database design, data modeling, data migration techniques, and performance tuning is also required. A solid understanding of cloud infrastructure and data integration patterns in Azure is essential for success in this role. Nice-to-have qualifications include Microsoft Azure certifications related to Data Engineering or Azure Solutions Architecture, as well as experience working in agile environments with CI/CD practices. The required qualifications for this position include a minimum of 5+ years of experience in the software industry, a B.Tech/M.Tech in CS/IT, or a related field, and excellent verbal and written communication skills. If you are a motivated and skilled Senior Azure Data Engineer looking to join a growing team and make a significant impact in the field of data engineering, we encourage you to apply for this exciting opportunity.
Posted 2 months ago
6.0 - 10.0 years
10 - 20 Lacs
Chennai, Bengaluru
Hybrid
We are hiring for a Big Data Lead.

Years of Experience: 6-10 yrs
Primary Skill Set: MS Fabric/Azure Fabric, Python/PySpark, SQL
Work Location: Bangalore/Chennai
Work Mode: Hybrid
Notice Period: Immediate to 30 days

Kindly share the following details: updated CV, relevant skills, total experience, current company, current CTC, expected CTC, notice period, current location, and preferred location.
Posted 2 months ago
10.0 - 17.0 years
25 - 40 Lacs
Chennai
Work from Office
Extensive experience in big data architecture, with a focus on cloud-native and/or cloud-based services and solutions, and on data processing technologies such as Hadoop, Spark, and Kafka in the cloud ecosystem (AWS, Azure, and GCP).
Posted 2 months ago
6.0 - 11.0 years
3 - 12 Lacs
Hyderabad, Telangana, India
On-site
Collaborate with several stakeholders to deeply understand the needs of data practitioners and deliver at scale. Lead Data Engineers to define, build, and maintain the Data Platform. Work on building a Data Lake in Azure Fabric, processing data from multiple sources. Migrate the existing data store from Azure Synapse to Azure Fabric. Implement data governance and access control. Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Present technical solutions, capabilities, considerations, and features in business terms. Effectively communicate status, issues, and risks in a precise and timely manner. Further develop critical initiatives, such as Data Discovery, Data Lineage, and Data Quality. Lead the team and mentor junior resources; help your team members grow in their roles and achieve their career aspirations. Build data systems, pipelines, analytical tools, and programs. Conduct complex data analysis and report on results.

Requirements: 5+ years of experience as a data engineer or in a similar role with Azure Synapse or ADF, or relevant experience in Azure Fabric. Degree in Computer Science, Data Science, Mathematics, IT, or a similar field. Must have experience executing projects end to end; at least one data engineering project should have been delivered in Azure Synapse, ADF, or Azure Fabric. Should be experienced in handling multiple data sources. Technical expertise with data models, data mining, and segmentation techniques. Deep understanding, both conceptually and in practice, of at least one object-oriented library (Python, PySpark). Strong SQL skills and a good understanding of existing SQL warehouses and relational databases. Strong Spark, PySpark, and Spark SQL skills and a good understanding of distributed processing frameworks. Ability to build large-scale batch and real-time data pipelines. Ability to work independently and mentor junior resources. Desire to lead and develop a team of Data Engineers across multiple levels. Experience or knowledge in Data Governance. Azure Cloud experience with data modeling, CI/CD, Agile methodologies, and Docker/Kubernetes.
Posted 2 months ago