8.0 - 12.0 years
6 - 14 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Job Description: 5+ years in data engineering with at least 2 years on Azure Synapse. Strong SQL, Spark, and Data Lake integration experience. Familiarity with Azure Data Factory, Power BI, and DevOps pipelines. Experience in AMS or managed services environments is a plus.
Detailed JD:
- Design, develop, and maintain data pipelines using Azure Synapse Analytics.
- Collaborate with customers to ensure SLA adherence and incident resolution.
- Optimize Synapse SQL pools for performance and cost (see the sketch below).
- Implement data security, access control, and compliance measures.
- Participate in calibration and transition phases with client stakeholders.
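A minimal sketch of one common Synapse dedicated SQL pool optimization of the kind this role covers: rebuilding a table as hash-distributed via CTAS to cut data movement in large joins. Server, database, credentials, and table/column names are hypothetical placeholders, not anything from the posting.

```python
import pyodbc

# Connect to a dedicated SQL pool; autocommit because CTAS cannot run
# inside an explicit transaction. All identifiers below are illustrative.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"
    "DATABASE=sqlpool01;UID=admin_user;PWD=<secret>",
    autocommit=True,
)
cursor = conn.cursor()

# Recreate a round-robin fact table hash-distributed on the join key,
# with a clustered columnstore index for scan-heavy analytics.
cursor.execute("""
    CREATE TABLE dbo.FactSales_hash
    WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
    AS SELECT * FROM dbo.FactSales
""")

# Create statistics on the new key so the optimizer sees the distribution.
cursor.execute("CREATE STATISTICS st_customerkey ON dbo.FactSales_hash (CustomerKey)")
```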
Posted 1 week ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job description: We are looking for Azure Data Engineer resources with a minimum of 5 to 9 years of experience.
Role & responsibilities:
- Blend technical expertise (5 to 9 years of experience), analytical problem-solving, and collaboration with cross-functional teams.
- Design and implement Azure data engineering solutions (ingestion & curation).
- Create and maintain Azure data solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets (see the sketch below).
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data quality, security, and compliance.
- Optimize Azure SQL databases for efficient query performance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.
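A minimal PySpark sketch of the ingestion-plus-cleansing step this posting describes. The storage account, container paths, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

# Ingest raw CSV landed in the data lake (path is illustrative).
raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@mydatalake.dfs.core.windows.net/orders/"))

clean = (raw
         .filter(F.col("order_id").isNotNull())    # validation: reject rows missing the key
         .dropDuplicates(["order_id"])             # cleansing: de-duplicate on the key
         .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Land the curated output back in the lake for downstream consumers.
clean.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/orders/")
```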
Posted 1 week ago
7.0 - 12.0 years
18 - 30 Lacs
Bengaluru
Work from Office
Urgently hiring for a Senior Azure Data Engineer.
Job location: Bangalore. Minimum experience: 7+ years total, with a minimum of 4 years relevant.
Keywords: Databricks, PySpark, Scala, SQL, live/streaming data, batch processing data. (A streaming sketch follows below.)
Share CV: siddhi.pandey@adecco.com OR call 6366783349.
Roles and Responsibilities: The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.
Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE, CoP.
Mandatory skills: Azure (master level); ELT; data modeling; data integration & ingestion; data manipulation and processing; GitHub Actions, Azure DevOps; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest.
Optional skills: Experience in project management and running a scrum team. Experience working with BPC, Planning. Exposure to working with an external technical ecosystem. MkDocs documentation.
Share CV: siddhi.pandey@adecco.com OR call 6366783349.
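A minimal sketch of the streaming side of this role, assuming Databricks Auto Loader (the `cloudFiles` source); the lake paths, schema location, and checkpoint location are hypothetical.

```python
from pyspark.sql import functions as F

# `spark` is the session Databricks provides in notebooks.
# Incrementally pick up new JSON files as they land (Auto Loader).
events = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation",
                  "abfss://meta@mydatalake.dfs.core.windows.net/schemas/events/")
          .load("abfss://landing@mydatalake.dfs.core.windows.net/events/"))

# The same DataFrame API serves batch and streaming; only read/write differ.
(events
 .withColumn("ingest_ts", F.current_timestamp())
 .writeStream
 .format("delta")
 .option("checkpointLocation",
         "abfss://meta@mydatalake.dfs.core.windows.net/checkpoints/events/")
 .start("abfss://bronze@mydatalake.dfs.core.windows.net/events/"))
```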
Posted 1 week ago
8.0 - 13.0 years
7 - 12 Lacs
Noida
Work from Office
Job Overview: We are looking for a Data Engineer who will be part of our Analytics Practice and is expected to work actively in a multi-disciplinary, fast-paced environment. This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project; its primary responsibility is the acquisition, transformation, loading, and processing of data from a multitude of disparate data sources, including structured and unstructured data, for advanced analytics and machine learning in a big data environment.
Responsibilities:
- Engineer a modern data pipeline to collect, organize, and process data from disparate sources.
- Perform data management tasks, such as conducting data profiling, assessing data quality, and writing SQL queries to extract and integrate data.
- Develop efficient data collection systems and sound strategies for getting quality data from different sources.
- Consume and analyze data from the data pool to support inference, prediction, and recommendation of actionable insights to support business growth.
- Design and develop ETL processes using tools and scripting; troubleshoot and debug ETL processes; tune and optimize ETL performance.
- Provide support to new or existing applications while recommending best practices and leading projects to implement new functionality.
- Collaborate in design reviews and code reviews to ensure standards are met; recommend new standards for visualizations.
- Learn and develop new ETL techniques as required to keep up with contemporary technologies.
- Review solution requirements and architecture to ensure selection of appropriate technology, efficient use of resources, and integration of multiple systems and technologies.
- Support presentations to customers and partners; advise on new technology trends and possible adoption to maintain competitive advantage.
Experience Needed:
- 8+ years of related experience; a BS or Master's degree in Computer Science or a related technical discipline is required.
- ETL experience with data integration to support data marts, extracts, and reporting; experience connecting to varied data sources.
- Excellent SQL coding experience with performance optimization for data queries; understands different data models such as normalized, de-normalized, star, and snowflake.
- Has worked with transactional, temporal, time series, and structured and unstructured data.
- Experience with Azure Data Factory and Azure Synapse Analytics; has worked in big data environments, cloud data stores, and different RDBMS and OLAP solutions.
- Experience in cloud-based ETL development, and in deployment and maintenance of ETL jobs.
- Familiar with the principles and practices involved in development and maintenance of software solutions and architectures and in service delivery; strong technical background, staying evergreen with technology and industry developments.
- At least 3 years of demonstrated success in software engineering, release engineering, and/or configuration management; highly skilled in scripting languages like PowerShell; substantial experience in the implementation and execution of CI/CD processes.
Additional:
- Demonstrated ability to successfully complete multiple complex technical projects.
- Prior experience with application delivery using an onshore/offshore model.
- Experience with business processes across multiple master data domains in a services-based company.
- Demonstrates a rational and organized approach to the tasks undertaken and an awareness of the need to achieve quality; demonstrates high standards of professional behavior in dealings with clients, colleagues, and staff.
- Able to make sound and far-reaching decisions alone on major issues and to take full responsibility for them on a technical basis.
- Strong written communication skills; effective and persuasive in both written and oral communication.
- Experience gathering end-user requirements and writing technical documentation.
- Time management and multitasking skills to effectively meet deadlines under time-to-market pressure.
Posted 1 week ago
8.0 - 12.0 years
24 - 28 Lacs
Bengaluru
Work from Office
* Design, develop & maintain ADF pipelines using API integration & cloud technologies.
* Collaborate with the DevOps team on CI/CD pipeline management & Power BI reporting.
Call (for details): Seven Two Zero Four Zero, Eight Nine Two One Three
Benefits: assistive technologies, food allowance, provident fund, health insurance.
Posted 1 week ago
6.0 - 10.0 years
19 - 22 Lacs
Bengaluru
Hybrid
Hi all, we are hiring for a Data Architecture role.
Experience: 6 - 9 years | Location: Bangalore | Notice period: Immediate - 15 days
Skills: Data Architecture, Azure Data Factory, Azure Databricks, Azure Cloud Architecture.
If you are interested, drop your resume at mojesh.p@acesoftlabs.com or call 9701971793.
Posted 2 weeks ago
5.0 - 10.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Req ID: 326727. We are currently seeking a Microsoft Fabric Specialist to join our team in Hyderabad, Telangana (IN-TG), India (IN). We are seeking a Mid-Level Microsoft Fabric Support Specialist to join our IT team. The ideal candidate will be responsible for providing technical support, troubleshooting, and ensuring the smooth operation of Microsoft Fabric services. This role requires a deep understanding of Microsoft Fabric, data integration, and analytics solutions, along with strong problem-solving skills.
Key Responsibilities:
- Provide technical support and troubleshooting for Microsoft Fabric services.
- Assist in the implementation, configuration, and maintenance of Microsoft Fabric environments.
- Monitor system performance and resolve issues proactively.
- Collaborate with cross-functional teams to optimize data workflows and analytics solutions.
- Document support procedures, best practices, and troubleshooting steps.
- Assist in user training and onboarding for Microsoft Fabric-related tools and applications.
- Stay up to date with the latest Microsoft Fabric updates and best practices.
Required Qualifications:
- 5+ years of experience in IT support, with a focus on Microsoft Fabric or related technologies.
- Strong knowledge of Microsoft Fabric, Power BI, Azure Synapse, and data integration tools.
- Experience with troubleshooting and resolving issues in a cloud-based environment.
- Familiarity with SQL, data pipelines, and ETL processes.
- Excellent problem-solving and communication skills.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Microsoft certifications related to Fabric, Azure, or Power BI.
- Experience with automation and scripting (PowerShell, Python, etc.).
- Understanding of security and compliance considerations in cloud-based data platforms.
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Req ID: 319341. We are currently seeking an MS Fabric Architect - Support to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Key Responsibilities:
- Provide technical support and troubleshooting for Microsoft Fabric services.
- Assist in the implementation, configuration, and maintenance of Microsoft Fabric environments.
- Monitor system performance and resolve issues proactively.
- Collaborate with cross-functional teams to optimize data workflows and analytics solutions.
- Document support procedures, best practices, and troubleshooting steps.
- Assist in user training and onboarding for Microsoft Fabric-related tools and applications.
- Stay up to date with the latest Microsoft Fabric updates and best practices.
Required Qualifications:
- 6+ years of experience in IT support, with a focus on Microsoft Fabric or related technologies.
- Strong knowledge of Microsoft Fabric, Power BI, Azure Synapse, and data integration tools.
- Experience with troubleshooting and resolving issues in a cloud-based environment.
- Familiarity with SQL, data pipelines, and ETL processes.
- Excellent problem-solving and communication skills.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Microsoft certifications related to Fabric, Azure, or Power BI.
- Experience with automation and scripting (PowerShell, Python, etc.).
- Understanding of security and compliance considerations in cloud-based data platforms.
Posted 2 weeks ago
6.0 - 10.0 years
25 - 30 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Minimum of 6 years of data engineering experience. Must be an expert in SQL, Data Lake, Azure Data Factory, Azure Synapse, ETL, and Databricks; an expert in data modeling and writing complex queries in SQL; and able to convert SQL code to PySpark (see the sketch below).
Required candidate profile: Experience with SQL, Python, data modeling, data warehousing & dimensional modeling concepts. Familiarity with data governance, data security, and production deployments using Azure DevOps CI/CD pipelines.
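A minimal sketch of the SQL-to-PySpark conversion this profile asks for; the table and column names are hypothetical, and `spark` is assumed to be a live SparkSession (predefined in Databricks/Synapse notebooks).

```python
from pyspark.sql import functions as F

# SQL version being converted:
#   SELECT customer_id, SUM(amount) AS total_paid
#   FROM sales
#   WHERE status = 'PAID'
#   GROUP BY customer_id
#   HAVING SUM(amount) > 1000
sales = spark.table("sales")

total_paid = (sales
              .filter(F.col("status") == "PAID")              # WHERE
              .groupBy("customer_id")                          # GROUP BY
              .agg(F.sum("amount").alias("total_paid"))        # SUM(...) AS ...
              .filter(F.col("total_paid") > 1000))             # HAVING
```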
Posted 2 weeks ago
2.0 - 6.0 years
0 - 3 Lacs
Bengaluru
Work from Office
Company: Infiniti Research/Quantzig Analytics | Location: Bangalore | Experience: 2-6 years
Role Summary: We're seeking a BI Engineer to design, develop, and optimize enterprise-grade Power BI dashboards that support key business decisions. You'll work with data teams and business users to transform data into impactful insights with strong visual storytelling.
Key Responsibilities:
- Build and optimize Power BI dashboards using DAX, Power Query, and effective data models.
- Collaborate with teams to understand KPIs and reporting needs.
- Implement Row-Level Security (RLS) and manage access in Power BI Service.
- Handle report publishing, incremental refresh, and workspace management.
- Use Git/version control for report lifecycle management.
- Deliver interactive visuals, especially for retail KPIs.
Must-Have Skills: Power BI (Desktop & Service), DAX, Power Query, data modeling, performance tuning, RLS, SQL, Git, Power BI Service features.
Preferred Tools/Tech: Azure Synapse, Excel, SharePoint.
Posted 2 weeks ago
4.0 - 6.0 years
9 - 19 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
JOB DESCRIPTION:
• Strong experience in Azure Data Factory, Databricks, Event Hubs, Python, PySpark, Azure Synapse, and SQL.
• Azure DevOps experience to deploy ADF pipelines.
• Knowledge of/experience with the Azure cloud stack.
Posted 2 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Hybrid
Your day at NTT DATA: The Data Scientist is a seasoned subject matter expert, tasked with participating in the adoption of data science and analytics within the organization. The primary responsibility of this role is to participate in the creation and delivery of data-driven solutions that add business value using statistical models, machine learning algorithms, data mining, and visualization techniques.
What you'll be doing
Key Responsibilities:
- Designs, develops, and programs methods, processes, and systems to consolidate and analyze unstructured, diverse big data sources to generate actionable insights and solutions for client services and product enhancement.
- Designs and enhances data collection procedures to include information that is relevant for building analytic systems.
- Accountable for ensuring that data used for analysis is processed, cleaned, and integrity-verified, and for building the algorithms necessary to find meaningful answers.
- Designs and codes software programs, algorithms, and automated processes to cleanse, integrate, and evaluate large datasets from multiple disparate sources.
- Accountable for providing meaningful insights from large data and metadata sources; interprets and communicates insights and findings from analysis and experiments to product, service, and business managers.
- Accountable for performing analysis using programming languages or statistical packages such as Python and pandas.
- Designs scalable and highly available applications leveraging the latest tools and technologies.
- Accountable for creatively visualizing and effectively communicating the results of data analysis, insights, and ideas in a variety of formats to key decision-makers within the business.
- Creates SQL queries for the analysis of data and visualizes the output of the models.
- Creates documentation around processes and procedures and manages code reviews.
- Accountable for ensuring that industry-standard best practices are applied to development activities.
Knowledge and Attributes:
- Seasoned in data modelling, statistical methods, and machine learning techniques.
- Ability to thrive in a dynamic, fast-paced environment; quantitative and qualitative analysis skills.
- Desire to acquire more knowledge to keep up to speed with the ever-evolving field of data science; curiosity to sift through data to find answers and more insights.
- Good understanding of the information technology industry within a matrixed organization and the typical business problems such organizations face.
- Ability to translate technical findings clearly and fluently to non-technical business stakeholders to enable informed decision-making, and to create a storyline around the data to make it easy to interpret and understand.
- Self-driven and able to work independently, yet acts as a team player; able to apply data science principles through a business lens.
- Desire to create strategies and solutions that challenge and expand the thinking of peers and business stakeholders.
Academic Qualifications and Certifications:
- Bachelor's degree or equivalent in Data Science, Business Analytics, Mathematics, Economics, Engineering, Computer Science, or a related field.
- Relevant programming (Python) certification preferred; Agile certification preferred.
Required Experience:
- Seasoned experience in a data science position in a corporate environment and/or related industry.
- Seasoned experience in statistical modelling, data modelling, machine learning, data mining, unstructured data analytics, and natural language processing.
- Seasoned experience in programming languages (Python, etc.), as illustrated in the sketch below.
- Seasoned experience working in databases (MySQL, Microsoft SQL Server, Azure Synapse, MongoDB).
- Seasoned experience working with and creating data architectures.
- Seasoned experience extracting, cleaning, and transforming data, and working with data owners to understand the data.
- Seasoned experience visualizing and/or presenting data for stakeholder use and reuse across the business.
- Seasoned experience working with APIs (creating and consuming them).
- Automation experience using Python scripting, UiPath, Selenium, and Power Automate.
- Seasoned experience working on the Linux operating system (Ubuntu).
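A minimal sketch of the SQL-plus-pandas analysis loop this role describes, assuming a SQLAlchemy-reachable SQL Server database; the connection string, table, and column names are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Illustrative connection string; credentials and host are placeholders.
engine = create_engine(
    "mssql+pyodbc://analyst:pwd@dbserver/analytics"
    "?driver=ODBC+Driver+18+for+SQL+Server")

# Pull query output into pandas for analysis.
df = pd.read_sql("SELECT region, revenue, churned FROM customer_metrics", engine)

# Summarize per region: average revenue and churn rate (mean of a 0/1 flag).
summary = (df.groupby("region")
             .agg(avg_revenue=("revenue", "mean"),
                  churn_rate=("churned", "mean"))
             .sort_values("churn_rate", ascending=False))
print(summary)
```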
Posted 2 weeks ago
5.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We are looking for Azure Data Engineer resources with a minimum of 5 to 9 years of experience.
To apply, use the link below:
https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0
Role & responsibilities:
- Blend technical expertise (5 to 9 years of experience), analytical problem-solving, and collaboration with cross-functional teams.
- Design and implement Azure data engineering solutions (ingestion & curation).
- Create and maintain Azure data solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data quality, security, and compliance.
- Optimize Azure SQL databases for efficient query performance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.
Posted 2 weeks ago
2.0 - 7.0 years
6 - 16 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Exciting Azure Developer job opportunity at Infosys! We are looking for skilled Azure Developers to join our dynamic team PAN India. If you have a passion for technology and a minimum of 2 to 9 years of hands-on experience in Azure development, this is your chance to make an impact. At Infosys, we value innovation, collaboration, and diversity. We believe that a diverse workforce drives creativity and fosters a richer company culture; therefore, we strongly encourage applications from all genders and backgrounds. Ready to take your career to the next level? Join us in shaping the future of technology. Visit our careers page for more details on how to apply.
Posted 2 weeks ago
10.0 - 15.0 years
11 - 15 Lacs
Hyderabad, Coimbatore
Work from Office
Azure + SQL + ADF + Databricks + design + architecture (mandatory).
- Total experience of 10+ years in the data management area, with Azure cloud data platform experience.
- Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB & Azure Synapse); mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB.
- Must have worked on a large Azure data platform and dealt with high-volume Azure streaming analytics.
- Experience in designing cloud data platform architecture and designing large-scale environments.
- 5+ years of experience architecting and building cloud data lakes (specifically with Azure data analytics technologies and architecture) and enterprise analytics solutions, and optimizing real-time big data pipelines, architectures, and data sets.
Posted 2 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.
Your Role:
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing, and Data Lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication, and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.
Your Profile:
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud.
- Must understand networking, security, design principles, and best practices in the cloud.
What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 weeks ago
5.0 - 10.0 years
15 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark.
Required candidate profile: 5+ years of experience in data engineering and big data technologies. Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.). Databricks Certification (mandatory).
Posted 2 weeks ago
6.0 - 10.0 years
20 - 30 Lacs
Pune, Ahmedabad, Mumbai (All Areas)
Hybrid
Must be an expert in SQL, Data Lake, Azure Data Factory, Azure Synapse, ETL, and Databricks. Must be an expert in data modeling and writing complex queries in SQL, with stored procedures / PySpark notebooks for handling complex data transformation (see the sketch below).
Required candidate profile:
- Ability to convert SQL code to PySpark.
- Strong programming skills in languages such as SQL and Python.
- Experience with data modeling, data warehousing & dimensional modeling concepts.
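A minimal sketch of a notebook transformation of the kind a stored procedure might otherwise handle: keeping only the latest record per business key with a window function. Table and column names are hypothetical, and `spark` is assumed to be a live SparkSession.

```python
from pyspark.sql import Window, functions as F

# Rank records per customer by recency, newest first.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

latest = (spark.table("staging_customers")
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)   # keep only the most recent row per key
          .drop("rn"))

# Publish the de-duplicated snapshot for downstream consumers.
latest.write.mode("overwrite").saveAsTable("curated_customers")
```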
Posted 2 weeks ago
5.0 - 7.0 years
15 - 15 Lacs
Hyderabad
Work from Office
Overview
Role: Database Administrator - Senior Analyst (SQL and MySQL)
Work location: Hyderabad | Shift timing: 4:00 am - 1:00 pm (IST) | Hybrid mode: 3 days work from office
About us: We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom's branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world's creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in the areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together.
About the role: We are looking for an experienced Senior Analyst - Data Platform Administrator. This position involves overseeing several data platform technologies, including Microsoft SQL Server, MySQL, PostgreSQL, Oracle, Microsoft Synapse, and others that support our environment. The position covers a wide range of responsibilities, including implementing strategies for high availability using Always On availability groups, monitoring performance, maintaining stability, scripting, and troubleshooting. Additionally, the candidate should have practical experience working in a multi-cloud environment, with core skills on AWS and Azure.
Responsibilities: This is an exciting role and would entail you to:
- Work closely with key members of the Omnicom Corporate and IT teams to comprehend specific business challenges, incorporating these into the technology solutions supporting the organization.
- Work on security patching and failover activities required during pre-defined and emergency maintenance windows.
- Comprehend and troubleshoot complex SQL queries, database views, tables, partitions, indexes, stored procedures, etc., to support operations and maintenance processes and tasks.
- Implement and maintain data solutions on the AWS/Azure cloud platform, leveraging technologies such as AWS/Azure Data Factory and AWS/Azure Data Lake.
- Develop and deploy data pipelines, ETL processes, and data integration solutions to enable efficient and reliable data ingestion, transformation, and storage.
Qualifications: This may be the right role for you if you have:
- 5-7 years of experience with a deep understanding of current and emerging technologies in your field of expertise.
- Prior experience with technology infrastructures in a public cloud environment such as AWS/Azure, including supporting applications built in the cloud and migrating applications into the public cloud.
- Prior experience interconnecting on-premises infrastructure with public cloud environments to create a hybrid cloud.
- Application of appropriate information security and regulatory or statutory compliance, including SOC (must have), GDPR, ISO 27001, HIPAA (nice to have).
- 5+ years of hands-on experience with Microsoft SQL Server, MySQL, PostgreSQL, Reporting Services, Azure Synapse (2+ years), and Integration Services. AWS cloud experience is mandatory; Azure cloud experience is nice to have.
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
JOB DESCRIPTION
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe tag management and pixel management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.
Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256); a hedged sketch follows below.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe tag management, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.
EXPERTISE AND QUALIFICATIONS
Required Skills:
- Strong hands-on experience with Fabric and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe tag management, specifically pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.
Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
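A minimal sketch of the hashing step: Meta CAPI and similar server-side APIs expect user identifiers such as email to be normalized (trimmed, lowercased) before SHA-256 hashing. The role itself is C#-centric; this illustration uses Python, and the field name is just the convention Meta uses for hashed email.

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    """Trim, lowercase, and SHA-256 hash a user identifier."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Attach the hashed identifier to a server-side conversion event payload.
event_user_data = {"em": normalize_and_hash("  Jane.Doe@Example.com ")}
```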
Posted 2 weeks ago
5.0 - 9.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Key Result Areas and Activities:
- Design, develop, and deploy ETL/ELT solutions on-premise or in the cloud.
- Transformation of data with stored procedures.
- Report development (MicroStrategy/Power BI).
- Create and maintain comprehensive documentation for data pipelines, configurations, and processes.
- Ensure data quality and integrity through effective data management practices (see the sketch below).
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues.
Technical Experience:
Must have:
- Good experience in Azure Synapse, ADF, and Snowflake & stored procedures.
- Experience with ETL/ELT processes, data warehousing, and data modelling.
- Experience with data quality frameworks, monitoring tools, and job scheduling.
- Knowledge of data formats like JSON, XML, CSV, and Parquet.
- Fluent English (strong written, verbal, and presentation skills).
- Agile methodology & tools like JIRA.
- Good communication and formal skills.
Good to have:
- Good experience in MicroStrategy and Power BI.
- Experience in scripting languages such as Python, Java, or shell scripting.
- Familiarity with Azure cloud platforms and cloud data services.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Azure Synapse.
Qualities:
- Experience with or knowledge of Agile software development methodologies.
- Can influence and implement change; demonstrates confidence, strength of conviction, and sound decisions.
- Believes in dealing with a problem head-on; approaches it in a logical and systematic manner; is persistent and patient; can independently tackle the problem; is not over-critical of the factors that led to a problem and is practical about it; follows up with developers on related issues.
- Able to consult, write, and present persuasively.
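A minimal sketch of a post-load data quality gate of the kind this role describes; the table name, key column, and 1% null threshold are hypothetical, and `spark` is assumed to be a live SparkSession.

```python
from pyspark.sql import functions as F

df = spark.table("curated.orders")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()

# Fail the pipeline run loudly rather than propagating bad data downstream.
if total == 0:
    raise ValueError("Quality gate failed: curated.orders is empty")
if null_keys / total > 0.01:
    raise ValueError(
        f"Quality gate failed: {null_keys}/{total} rows have null order_id")
```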
Posted 2 weeks ago
4.0 - 8.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Hiring for Azure Data Engineer; the client is looking for immediate joiners who can join within 30 days.
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Contractual hiring. Hiring manager profile: linkedin.com/in/yashsharma1608 | Payroll of: https://www.nyxtech.in/
Role: Lead Data Engineer (Azure Data Engineer with Fabric) | Client: Brillio
Experience: 6 to 8 years | Location: Bangalore, Hyderabad, Pune, Chennai, Gurgaon (Hyderabad preferred) | Notice: 15/30 days | Budget: 15 LPA | Azure Fabric experience mandatory.
Skills: Azure OneLake, data pipelines, Apache Spark, ETL, Data Factory, Azure Fabric, SQL, Python/Scala.
Key Responsibilities:
- Data pipeline development: Lead the design, development, and deployment of data pipelines using Azure OneLake, Azure Data Factory, and Apache Spark, ensuring efficient, scalable, and secure data movement across systems.
- ETL architecture: Architect and implement ETL (Extract, Transform, Load) workflows, optimizing the process for data ingestion, transformation, and storage in the cloud.
- Data integration: Build and manage data integration solutions that connect multiple data sources (structured and unstructured) into a cohesive data ecosystem. Use SQL, Python, Scala, and R to manipulate and process large datasets.
- Azure OneLake expertise: Leverage Azure OneLake and Azure Synapse Analytics to design and implement scalable data storage and analytics solutions that support big data processing and analysis.
- Collaboration with teams: Work closely with data scientists, data analysts, and BI engineers to ensure that the data infrastructure supports analytical needs and is optimized for performance and accuracy.
- Performance optimization: Monitor, troubleshoot, and optimize data pipeline performance to ensure high availability, fast processing, and minimal downtime.
- Data governance & security: Implement best practices for data governance, data security, and compliance within the Azure ecosystem, ensuring data privacy and protection.
- Leadership & mentorship: Lead and mentor a team of data engineers, promoting a collaborative and high-performance team culture. Oversee code reviews, design decisions, and the implementation of new technologies.
- Automation & monitoring: Automate data engineering workflows, job scheduling, and monitoring to ensure smooth operations. Use tools like Azure DevOps, Airflow, and other relevant platforms for automation and orchestration.
- Documentation & best practices: Document data pipeline architecture, data models, and ETL processes, and contribute to the establishment of engineering best practices, standards, and guidelines.
- Innovation: Stay current with industry trends and emerging technologies in data engineering, cloud computing, and big data analytics, driving innovation within the team.
Posted 2 weeks ago
12.0 - 16.0 years
40 - 45 Lacs
Gurugram
Work from Office
Overview
Enterprise Data Operations Assoc Manager
Job Overview: As Data Modelling Assoc Manager, you will be the key technical expert overseeing data modeling and will drive a strong vision for how data modelling can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data modelers who create data models for deployment in the Data Foundation layer, ingest data from various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data modelling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will independently analyze project data needs, identify data storage and integration needs/issues, and drive opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be a key technical expert performing all aspects of data modelling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. You will provide technical guidance to junior members of the team as and when needed. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities:
- Independently complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construct database objects for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Advocate existing Enterprise Data Design standards; assist in establishing and documenting new standards.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the data science team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.
Qualifications:
- 12+ years of overall technology experience, including at least 6+ years of data modelling and systems architecture.
- 6+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 6+ years of experience developing enterprise data models.
- 6+ years of cloud data engineering experience in at least one cloud (Azure, AWS, GCP).
- 6+ years of experience building solutions in the retail or supply chain space.
- Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models).
- Fluent with Azure cloud services; Azure certification is a plus.
- Experience scaling and managing a team of 5+ data modelers.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading, mentoring, hiring, and scaling data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Foster a team culture of accountability, communication, and self-management.
- Proactively drives impact and engagement while bringing others along.
- Consistently attain/exceed individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
Differentiating Competencies Required:
- Ability to work with virtual teams (remote work locations); lead a team of technical resources (employees and contractors) based in multiple locations across geographies.
- Lead technical discussions, driving clarity of complex issues/requirements to build robust solutions.
- Strong communication skills to meet with the business, understand sometimes ambiguous needs, and translate them into clear, aligned requirements.
- Able to work independently with business partners to understand requirements quickly, perform analysis, and lead design review sessions.
- Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business.
- Places the user at the center of decision making.
- Teams up and collaborates for speed, agility, and innovation.
- Experience with, and embraces, agile methodologies.
- Strong negotiation and decision-making skills.
- Experience managing and working with globally distributed teams.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency (see the freshness-check sketch below).
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.
Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.
Qualifications:
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
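A minimal sketch of a data-observability freshness probe of the kind mentioned above; the table name, timestamp column, and 2-hour SLA are hypothetical, and `spark` is assumed to be a live SparkSession. It assumes `event_time` is stored in UTC, since Spark returns naive datetimes.

```python
from datetime import datetime, timedelta
from pyspark.sql import functions as F

# Find the most recent event timestamp in the monitored table.
latest = (spark.table("bronze.events")
          .agg(F.max("event_time").alias("latest"))
          .collect()[0]["latest"])

# Compare naive UTC datetimes; alert if the feed is stale or empty.
if latest is None or datetime.utcnow() - latest > timedelta(hours=2):
    # In production this would page an on-call channel instead of raising.
    raise RuntimeError(f"Freshness SLA breached: latest event_time is {latest}")
```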
Posted 2 weeks ago