317 Datafactory Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

2 - 6 Lacs

Chennai

Remote

This is a remote position. Required skills:
1. Python and PySpark
2. Azure cloud services such as Synapse, Databricks, and Data Factory (Databricks is mandatory)
3. OOP concepts
4. Data modelling with scalability

Key Responsibilities: Design, develop, and maintain scalable and robust data pipelines to support data processing and analysis. Collaborate with cross-functional teams to understand data requirements and implement effective solutions. Perform data modeling and design to ensure the integrity, availability, and performance of data systems. Implement and optimize ETL processes for extracting, transforming, and loading data from various sources into our data warehouse. Identify and troubleshoot data-related issues, ensuring data quality and integrity throughout the data lifecycle. Stay abreast of industry best practices and emerging technologies to continuously improve data engineering processes.
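For illustration, a minimal PySpark ETL sketch of the kind of pipeline work this listing describes; the paths, column names, and partitioning scheme are hypothetical, not taken from the posting.

```python
# Minimal PySpark ETL sketch: read raw CSV, apply basic cleansing, write Parquet.
# All paths and column names below are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/data/raw/sales/")  # hypothetical landing zone
)

clean = (
    raw.dropDuplicates(["order_id"])                       # de-duplicate on a business key
    .withColumn("amount", F.col("amount").cast("double"))  # enforce a numeric type
    .withColumn("order_date", F.to_date("order_date"))     # normalize dates
    .filter(F.col("amount").isNotNull())                   # simple quality gate
)

(
    clean.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/curated/sales/")  # hypothetical curated zone
)
```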

Posted 6 days ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation. Contribute to reusable components / assets / accelerators to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and act as design authority.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala/PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Experience architecting complex data platforms on the Azure Cloud Platform and on-prem. Exposure to implementing Data Fabric and Data Mesh concepts with solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, and Snowflake Data Glossary.

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Hyderabad

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP Information Lifecycle Management (ILM)
Good-to-have skills: NA
Minimum 7.5 years of experience is required. Educational Qualification: BE

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities:
- Act as a subject matter expert (SME)
- Collaborate with and manage the team to perform
- Be responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Ensure timely project delivery
- Provide guidance and support to team members

Professional & Technical Skills:
- Must-have: Proficiency in SAP Information Lifecycle Management (ILM)
- Strong understanding of data lifecycle management
- Experience in data archiving and retention policies
- Knowledge of SAP data management solutions
- Hands-on experience in SAP data migration
- Experience in SAP data governance

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in SAP Information Lifecycle Management (ILM)
- This position is based at our Hyderabad office
- A BE degree is required

Posted 1 week ago

Apply

7.0 - 12.0 years

21 - 30 Lacs

Bengaluru

Work from Office

Responsibilities: Design, implement, and optimize Azure infrastructure solutions. Collaborate with cross-functional teams on project delivery. Manage Azure networks, VMs, storage, ADF, databases, and data pipelines. Azure data migration experience required. Perks: food allowance, health insurance, provident fund, annual bonus.

Posted 1 week ago

Apply

10.0 - 13.0 years

15 - 30 Lacs

India, Bengaluru

Work from Office

Job Requirements: Perform statistical analysis; identify trends, patterns, and anomalies within datasets using Python libraries such as NumPy, SciPy, and scikit-learn. Develop and implement predictive models and machine learning algorithms to address specific business problems. We are looking for a lead engineer for a data analytics role with good knowledge of Python programming and experience with the core data analytics libraries in Python (Pandas, NumPy). He/she should have experience with web frameworks and APIs (Flask, FastAPI) and containerization with Docker, plus experience and knowledge in building ETL pipelines, data engineering workflows, and data visualization (Spark, Plotly, Dash, SQL). Experience working with the MS Azure cloud and the Azure services below (Databricks, Data Factory, Azure App Service, Azure Batch Service, Cosmos DB, Azure Functions, Azure Data Lake). Machine learning concepts and libraries (time-series forecasting, anomaly detection, failure prediction, scikit-learn, statsmodels, PyOD, TensorFlow, PyTorch).

Work Experience
1. Expert in the Python programming language: strong understanding of Python syntax, data structures, OOP, and best practices; experience writing efficient, maintainable, and scalable Python code.
2. Experience with the core data analytics libraries in Python: Pandas (data manipulation and analysis), NumPy (numerical computation), SciPy (scientific computing, signal processing), scikit-learn (classical machine learning algorithms), statsmodels (statistical modeling and time-series analysis).
3. Web frameworks and APIs: Flask and FastAPI for RESTful APIs and web apps.
4. Experience and knowledge in building ETL pipelines and data engineering workflows, including: the ability to work with structured and unstructured data from various sources; best practices for data quality, reliability, and scalability; proficiency in data ingestion, cleansing, transformation, and orchestration; PySpark and Databricks for distributed data processing; Azure Data Factory as an ETL tool; good SQL knowledge.
5. Data visualization and dashboards: Plotly and Dash for interactive visualizations and dashboards; Matplotlib and Seaborn for static visualizations.
6. Experience with the MS Azure cloud and the following Azure services: Azure Databricks, Azure Data Factory, Azure App Service, Azure Batch Service, Azure Cosmos DB, Azure Data Lake, Azure DevOps, Azure Functions, Azure Container Registry, and version control (Git).
7. Experience in containerization and Docker, including building, deploying, and managing applications using Docker containers, and writing and optimizing Dockerfiles for Python and analytics projects.
8. Machine learning concepts and libraries: time-series forecasting, failure prediction (classification/regression), feature engineering from sensor data, anomaly detection, model evaluation and deployment; scikit-learn, statsmodels, PyOD, TensorFlow, PyTorch.

Role-Specific Skill Summary: We are primarily looking for an experienced Python data analytics / data science professional with the following emphasis.
Core strengths: Strong Python programming with a solid mathematical/statistical foundation for deriving reliable inferences. Hands-on expertise in ETL workflows using Python libraries (Pandas, NumPy, SciPy, scikit-learn, PySpark, Databricks). Ability to handle structured/unstructured data and transform it effectively for analytics and ML pipelines.
Applied machine learning: Experience building and deploying models (e.g., TensorFlow, PyTorch) for time-series forecasting, prediction, and anomaly detection; capable of preparing and feeding transformed data into ML models for real-world predictions.
APIs & deployment: Practical knowledge of Flask/FastAPI to expose inference models as APIs, and experience deploying solutions within the Azure ecosystem (Databricks, Data Factory, App Service, Cosmos DB, Functions, etc.).
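The posting asks for Flask/FastAPI experience to expose inference models as APIs. A minimal FastAPI sketch of that pattern follows; the model file, feature names, and route are invented for the example.

```python
# Hypothetical FastAPI service exposing a pre-trained anomaly-detection model.
# Run with: uvicorn scoring_service:app --reload
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="anomaly-scoring")
model = joblib.load("models/iforest.joblib")  # e.g. a scikit-learn IsolationForest (assumed)

class SensorReading(BaseModel):
    temperature: float
    vibration: float
    pressure: float

@app.post("/score")
def score(reading: SensorReading) -> dict:
    features = np.array([[reading.temperature, reading.vibration, reading.pressure]])
    # IsolationForest.predict returns -1 for anomalies and 1 for normal points.
    label = int(model.predict(features)[0])
    return {"anomaly": label == -1}
```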

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 25 Lacs

Pune

Remote

Design Databases & Data Warehouses, Power BI Solutions, Support Enterprise Business Intelligence, Strong Team Player & Contributor, Continuous Improvement Experience in SQL, Oracle, SSIS/SSRS, Azure, ADF, CI/CD, Power BI, DAX, Microsoft Fabric Required Candidate profile Source system data structures, data extraction, data transformation, warehousing, DB administration, query development, and Power BI. Develop WORK FROM HOME

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Hybrid

We are looking for a skilled and experienced Data Engineer with experience building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Azure Databricks, along with Spark, Python, and SQL expertise. Familiarity with data lake and data warehouse concepts and end-to-end data pipelines is essential.

Key Responsibilities: Lead the design of end-to-end data engineering solutions across cloud platforms. Develop and oversee robust data pipelines and ETL workflows using Python and Apache Spark. Implement scalable Delta Lake solutions for structured and semi-structured data. Orchestrate complex workflows using Databricks Workflows or Azure Data Factory. Translate business rules, retention metadata, and data governance policies into reusable, modular, and scalable pipeline components. Ensure adherence to data privacy, security, and compliance standards (e.g., GDPR, HIPAA). Mentor and guide junior data engineers, fostering best practices in coding, testing, and deployment. Collaborate with cross-functional teams, including analysts and business stakeholders, to align data solutions with business goals. Drive performance optimization, cost-efficiency, and innovation in data engineering practices.

Required Skills & Qualifications: 5+ years of experience in data engineering. Expert-level proficiency in Python, Apache Spark, and Delta Lake. Strong experience with Databricks Workflows and/or Azure Data Factory. Deep understanding of data governance, metadata management, and business rule integration. Proven track record implementing data privacy, security, and regulatory compliance in insurance or financial domains. Strong leadership, communication, and stakeholder management skills. Experience with cloud platforms such as Azure.
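A minimal sketch of a Delta Lake upsert (MERGE), one common building block of the "scalable Delta Lake solutions" this role describes; the table paths and merge key are assumptions, and the snippet presumes a Delta-enabled Spark session (e.g., Databricks or delta-spark).

```python
# Illustrative Delta Lake upsert: merge an incoming batch into a target table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/data/staging/customers/")  # incoming batch (assumed path)

target = DeltaTable.forPath(spark, "/data/delta/customers")  # assumed target table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")  # hypothetical business key
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert rows that are new
    .execute()
)
```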

Posted 1 week ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Gurugram

Work from Office

Data engineering emphasis supports the ongoing digital transformation and modernization of Internal Audit's risk assessment, automation efforts, and risk monitoring activities. This position is responsible for supporting Internal Audit engagements with scalable, end-to-end ETL and analytic processes. Additionally, the role works closely with data analytics teams to create robust scripted data solutions, develop and support business monitoring tools, and support existing data systems and analytic reports. This includes identifying and integrating data sources, assessing data quality, and developing and executing data analytic tools/languages to support enterprise analytical risk assessments. This role is integral to our strategy to enable Internal Audit with data-driven insights and bring value to our business partners. The role will challenge you to leverage your data analytics skills on a variety of initiatives in a hands-on role, with the opportunity to develop your skills as an auditor in a matrixed, cross-functional internal audit department.

Primary Responsibilities:

Automation and Data Modeling
- Design, build, and maintain automated data pipelines for extracting, transforming, and loading data from diverse sources (enterprise platforms, SharePoint, NAS drives, etc.)
- Develop robust and scalable data models to support risk surveillance analytics and reporting needs
- Implement and maintain workflows for scheduling and monitoring ETL/ELT jobs to ensure data freshness and reliability
- Utilize scripting and workflow automation tools to reduce manual intervention in data movement and processing
- Integrate new data sources and automate ingestion processes to expand surveillance coverage

Data Management and Governance
- Ensure data quality, completeness, and consistency across all risk surveillance datasets
- Develop and enforce data validation, cleansing, and transformation procedures to support accurate analysis
- Implement data security and access controls in compliance with regulatory and organizational standards
- Maintain detailed metadata, data dictionaries, and lineage documentation for all data assets
- Support data governance initiatives, including data cataloguing, retention policies, and audit readiness

Collaboration and Communication
- Partner with Risk Surveillance partners, data analysts, and audit teams to understand requirements and deliver analysis-ready datasets
- Collaborate with IT, data stewards, and business partners to resolve data issues and facilitate access to new data sources
- Communicate data pipeline status, issues, and solution approaches clearly to both technical and non-technical stakeholders
- Provide training and support for users on data tools, repositories, and best practices
- Document data workflows, processes, and solutions for knowledge sharing and operational continuity

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 8+ years of overall program experience in Computer Science, Information Technology, Mathematics, Engineering, Data Analytics, or a related field
- 4+ years of SQL programming
- 4+ years programming in Python and/or R
- 2+ years of data modeling and scaled automation experience
- 2+ years of data visualization experience (Tableau and/or Power BI)
- Solid interpersonal and analytical skills while working effectively with a matrixed team
- Solid oral and written communication skills

Preferred Qualifications:
- 2+ years of experience developing scalable solutions with SSIS, Data Factory, Python, or R
- Extensive program experience in Computer Science, Information Technology, Mathematics, Engineering, or a related field
- Internal Audit / Control experience
- Cloud computing experience including Azure, AWS, Databricks, and/or Spark
- Experience working in the healthcare industry and/or a complex IT environment
- Experience with automation surrounding API calls
- Working knowledge of Big Data tools, cloud platforms, and SQL Server database engineering
- Data science experience, including regression analysis and machine learning techniques
- Change management tool experience (e.g., GitHub, Jenkins, or similar)
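To make the "data validation, cleansing, and transformation" duty concrete, here is an illustrative set of data-quality checks in pandas; the file, columns, and date range are hypothetical, not drawn from the posting.

```python
# Hypothetical data-quality gate for an audit extract.
import pandas as pd

df = pd.read_csv("extracts/journal_entries.csv", parse_dates=["posting_date"])

checks = {
    "no_null_keys": df["entry_id"].notna().all(),
    "unique_keys": df["entry_id"].is_unique,
    "dates_in_range": df["posting_date"].between("2024-01-01", "2024-12-31").all(),
    "amounts_numeric": pd.api.types.is_numeric_dtype(df["amount"]),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed.")
```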

Posted 1 week ago

Apply

6.0 - 11.0 years

21 - 30 Lacs

Bengaluru

Work from Office

Responsibilities: Collaborate with cross-functional teams on project delivery. Design, implement, and optimize Azure solutions using certified expertise. Ensure compliance, security, and scalability standards are met. Perks: food allowance, health insurance, provident fund, annual bonus.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an ETL Test Engineer, you will play a crucial role in performing manual and automation testing to validate data sources, extract data, apply transformation logic, and load data into target tables. Your primary responsibility will involve close collaboration with data engineering teams to gather requirements, understand data needs, and translate them into effective test cases. Additionally, you will design, build, and maintain test plans and cases specifically tailored to ETL and data testing processes.

Your day-to-day tasks will include conducting data validation and testing to ensure the accuracy, consistency, and compliance of data. You will be responsible for executing both manual and automated tests using Azure Databricks and ADF pipelines, as well as conducting validations for Power BI semantic models and reports using PBI tools. Working in an Agile environment using Azure tools, you will need to showcase your proficiency in SQL, experience with Databricks, and data warehousing concepts.

To excel in this role, you should hold a Bachelor's degree in Computer Science Engineering or a related field; equivalent work experience will also be considered. A minimum of 5 years of experience in data engineering testing, particularly ELT testing, is required. Strong expertise in SQL, Databricks, and data warehousing is essential, along with a deep understanding of SQL and database concepts to write efficient queries and optimize performance.

Moreover, you should possess a strong grasp of Power BI semantic models, measures, and reports, coupled with proficient programming skills in Python for automation in data testing. Knowledge of cloud-based data platforms, preferably Azure, and associated data services is highly desirable. Exposure to Linux/Unix platforms is an added advantage. Your problem-solving and troubleshooting skills should be exceptional, with keen attention to detail. Effective communication and collaboration abilities are crucial, as you will be working in cross-functional teams.

The must-have skills for this role include ETL testing and tools, Python programming, SQL, Power BI, Azure Databricks, and Data Factory. If you are looking to leverage your expertise in data engineering testing and automation within a dynamic environment, this role presents an exciting opportunity to excel and contribute significantly.
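A brief sketch of what an automated ETL reconciliation test could look like for a role like this, using pytest with PySpark; the paths, formats, and column names are assumptions rather than anything specified in the posting.

```python
# Hypothetical pytest suite reconciling a staging extract against the loaded target.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("etl-tests").getOrCreate()

def test_row_counts_match(spark):
    source = spark.read.parquet("/data/staging/orders/")              # assumed source extract
    target = spark.read.format("delta").load("/data/curated/orders")  # assumed target table
    assert source.count() == target.count()

def test_amount_totals_match(spark):
    source = spark.read.parquet("/data/staging/orders/")
    target = spark.read.format("delta").load("/data/curated/orders")
    src_total = source.agg(F.sum("amount")).first()[0]
    tgt_total = target.agg(F.sum("amount")).first()[0]
    assert src_total == pytest.approx(tgt_total)
```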

Posted 1 week ago

Apply

11.0 - 16.0 years

22 - 37 Lacs

Chennai

Hybrid

Primary Responsibilities: Be curious and a quick learner of the technologies the project needs. Explore and suggest new ideas and technologies for the project. Understand feature requirements from the product team and come up with technical solutions, keeping in mind scalability, extensibility, security, cost, etc. Come up with a plan and break the feature down into workable stories. Work closely with the product and development teams. Deliver features of high quality with a low defect rate. Work collaboratively with the team and help others with technical issues or functional questions. Own a feature, run it through with the team, and deliver on agreed-upon timelines. Be very solid in analyzing and providing complete technical solutions for given problems. Perform good code reviews. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: Bachelor's degree in Computer Science or a related computer discipline. Solid knowledge with 7+ years of experience in Postgres, Azure Cloud, and Kafka/Pulsar. 7+ years of experience on the Azure cloud platform: Azure Data Factory, Azure Databricks. Experience in SQL programming and SQL databases. Solid problem-solving capability. Programming using Scala (or Python). Fair in architecting/designing feature implementations and data integration mechanisms (sync/async, batch/stream), HLD/LLD. Fair understanding of an FE framework such as React.

Must-Have Skills: Azure Data Engineer: Azure Data Factory, Azure Blob, ADLS, Azure Databricks (preferably Scala), SQL programming knowledge.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Automation Tester with Java and Databricks, you will be responsible for automating APIs, web services, and data files to ensure the accuracy of data retrieved via APIs and of file contents. You should have at least 3 years of experience automating APIs and web services, along with 2 years of experience with the Selenium automation tool. Additionally, 1+ years of experience with Data Factory and Databricks is required.

You will be expected to develop automated tests using the existing Selenium framework for APIs and web services, as well as develop Java and Python programs to validate the content of data files. Furthermore, you will be responsible for developing automation for Databricks/Data Factory using Python and executing manual tests as needed to support project work.

In this role, you will coordinate with development team members on defect validation, assist in defect re-creation, and create appropriate test cases within the TestRail test management tool. Beyond automation testing, you will also be involved in establishing the appropriate projects in Jenkins, maintaining technology expertise, and adhering to the company's compliance program. Your strong verbal and written communication skills will be essential for collaborating with team members and ensuring the quality of testing processes.

The ideal candidate should have a minimum of 4 years of total experience, including 3 years in automation testing, 3 years in Java, and 1 year in Databricks testing. If you are highly skilled in Java, have experience with BDD implementations using Cucumber, and possess excellent SQL skills, this role is a strong fit.

This is a full-time, permanent position based in Chennai (hybrid) with benefits such as health insurance, Provident Fund, and work-from-home options. The expected start date for this role is 25/03/2025. If you are ready to take on this exciting opportunity, reach out to the employer at +91 9244079355.
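As an illustration of the "validate the content of data files" duty, a hypothetical Python check that compares an exported CSV against an API response; the endpoint, authentication, and file layout are invented for the example.

```python
# Hypothetical test: the records exported to CSV must match what the API reports.
import csv
import requests

def load_expected(csv_path: str) -> dict:
    # Build a lookup of record id -> status from the exported data file.
    with open(csv_path, newline="") as fh:
        return {row["record_id"]: row["status"] for row in csv.DictReader(fh)}

def test_api_matches_file():
    expected = load_expected("exports/records.csv")  # assumed export location
    response = requests.get("https://api.example.com/v1/records", timeout=30)  # invented endpoint
    response.raise_for_status()
    actual = {item["id"]: item["status"] for item in response.json()["records"]}
    assert actual == expected, "API payload diverges from exported file"
```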

Posted 1 week ago

Apply

4.0 - 7.0 years

25 - 27 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Key Responsibilities: Lead the design of end-to-end data engineering solutions across cloud platforms. Develop and oversee robust data pipelines and ETL workflows using Python and Apache Spark. Implement scalable Delta Lake solutions for structured and semi-structured data. Orchestrate complex workflows using Databricks Workflows or Azure Data Factory. Translate business rules, retention metadata, and data governance policies into reusable, modular, and scalable pipeline components. Ensure adherence to data privacy, security, and compliance standards (e.g., GDPR, HIPAA). Mentor and guide junior data engineers, fostering best practices in coding, testing, and deployment. Collaborate with cross-functional teams, including analysts and business stakeholders, to align data solutions with business goals. Drive performance optimization, cost-efficiency, and innovation in data engineering practices.

Required Skills & Qualifications: 4+ years of experience in data engineering. Expert-level proficiency in Python, Apache Spark, and Delta Lake. Strong experience with Databricks Workflows and/or Azure Data Factory. Deep understanding of data governance, metadata management, and business rule integration. Proven track record implementing data privacy, security, and regulatory compliance in insurance or financial domains. Strong leadership, communication, and stakeholder management skills. Experience with cloud platforms such as Azure, AWS, or GCP.

Preferred Qualifications: Experience with CI/CD pipelines and DevOps practices in data engineering. Familiarity with data cataloging and data quality tools. Certifications in Azure Data Engineering or related technologies. Exposure to modern data stack tools.

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Overview: We are looking for a skilled and proactive Senior Cybersecurity Expert (M365 Security, MS Purview) to join our security team. This role combines defining the strategy and main goals of the different services hosted in the Purview portal with hands-on technical implementation, making sure we get the most out of the tools at our disposal. Strong knowledge and experience managing the different tools within Microsoft Purview is key, as the central goal of this position is to assess current usage, define priorities, work with stakeholders, and drive implementation to address our gaps in the risk & compliance, data governance, and data security areas. Experience with other solutions in the Microsoft 365 environment is also valued (Microsoft Defender XDR, Entra ID, Intune, etc.).

Tasks and responsibilities:
- You drive conversations and collaboration with all relevant stakeholders, inside and outside of IT, to meet the requirements for using the different services hosted in Purview. Main stakeholders to consider: HR, Legal, Compliance, the cybersecurity community, relevant business lines, and data owners.
- You assess the current usage of tools and services within the Microsoft Purview suite (including those outside cybersecurity), identify potential gaps, and collaborate with relevant stakeholders to address those gaps and integrate new tools into Siemens Healthineers where applicable.
- You identify and collaborate with the departments within IT that should act as service owners for the different services hosted in Microsoft Purview.
- You lead the implementation and strategic use of Microsoft Purview as the organization's central data governance solution, providing guidance and alignment across diverse business data owners to ensure consistent data policies, classification, and responsible data stewardship, including clear ownership, proper documentation, access control, and data quality standards.
- You support organizational efforts to automate the classification of information using Microsoft Purview, enabling proactive identification and protection of sensitive data across the enterprise through scalable, policy-driven tagging and labelling.
- You take ownership of the Microsoft Purview portal as a whole, ensuring that permissions granted to different stakeholders are assigned responsibly. Proper access control must be in place, fulfilling the least-privilege principle; you conduct periodic access reviews and maintain up-to-date documentation in this regard.
- You benchmark and select Microsoft Purview products and deploy proofs of concept with relevant stakeholders to decide which new capabilities to deploy.
- You play a key role in troubleshooting, problem-solving, and end-user support to ensure the seamless operation of our services.
- You ensure that appropriate licensing is in place for all solutions and collaborate with relevant teams on a budget forecast if additional licensing is required, analyzing whether current license spending is efficient and genuinely needed so we can optimize cost.
- You assist in establishing monitoring processes to ensure admin activities align with the least-privilege principle, safeguarding the company against risky actions that could indicate potential compromise and fulfilling audit requirements.
- You coordinate and facilitate discussions with the provider (Microsoft) on Purview-related topics.
- You participate in projects and activities dedicated to improving our security posture in Microsoft 365, in any of the other areas managed by our team (Data Loss Prevention, Insider Risk Management, Defender for Endpoint, Defender for Identity, Defender for Office 365, etc.).

What is in it for you:
- A hybrid work schedule, allowing you to maintain a healthy work-life balance.
- A multinational environment where you will have the chance to meet and cooperate with colleagues from all over the globe.
- Engagement in both hands-on IT/security operations and strategic security improvements.
- Participation in and leadership of large security projects, introducing improvements that will make a difference in the daily work of more than 70k employees. Organization, coordination, and communication are key to success.
- The chance to constantly increase your knowledge and develop your skills by combining training courses with on-the-job training.
- A successful career built by participating in all relevant stages: definition, planning, implementation, and supervision.

Qualifications:
- 7+ years of relevant work experience in IT, and 5+ years working with Microsoft Compliance/Purview solutions and in the field of cybersecurity.
- Proven experience with Microsoft Purview and the solutions it hosts (Data Governance, Communication Compliance, Data Lifecycle Management, etc.), with a strong focus on implementation and hands-on configuration. Relevant certifications such as Microsoft SC-400 or SC-401 are highly valued.
- Experience designing, deploying, and managing Microsoft Purview policies in complex environments.
- Knowledge of other Microsoft 365 security solutions and topics (Microsoft Defender suite, Microsoft Information Protection, Insider Risk Management).
- Advanced English, communication, and negotiation skills: clear and concise communication, able to address stakeholders of different backgrounds and technical expertise.

Additionally:
- You enjoy engaging with different teams and facilitating discussions to find solutions that meet stakeholders' expectations.
- You have a proactive mindset with a passion for staying ahead of potential security risks.
- You are analytical and work methodically, both autonomously and in a team setting.
- You work, or have worked, in globally distributed teams.
- You are a quick learner with the aptitude to pick up new technologies and architectures.
- You provide guidance and mentorship to other team members, especially on Purview and best practices.
- You are able to drive projects from initiation to completion, ensuring deliverables are met on time and in alignment with business objectives.

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Pune

Work from Office

Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation. Contribute to reusable components / assets / accelerators to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and act as design authority.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala/PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Experience architecting complex data platforms on the Azure Cloud Platform and on-prem. Exposure to implementing Data Fabric and Data Mesh concepts with solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, and Snowflake Data Glossary.

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Project Role: Azure Data Engineer
Work Experience: 3 to 8 years
Work Location: Bangalore/Gurugram/Kochi/Pune
Work Mode: Hybrid
Must-Have Skills: Azure Data Engineering, SQL, Spark/PySpark

Job Overview: Responsible for the on-time completion of projects or components of large, complex projects for clients in the life sciences field. Identifies and elevates potential new business opportunities and assists in the sales process.

Skills required: Experience developing Azure components such as Azure Data Factory, Azure Databricks, Logic Apps, and Functions. Develop efficient and smart data pipelines migrating various sources onto an Azure data lake. Proficient in working with Delta Lake and Parquet file formats. Design, implement, and maintain CI/CD pipelines; deploy and merge code. Expert in programming in SQL, PySpark, and Python. Creation of databases on an Azure data lake following data warehousing best practices. Build smart metadata databases and solutions, parameterization, and configurations (as sketched below). Develop Azure frameworks and automated systems for deployment and monitoring. Hands-on experience in continuous delivery and continuous integration of CI/CD pipelines, and in CI/CD infrastructure and process troubleshooting. Extensive experience with version control systems such as Git and their use in release management, branching, merging, and integration strategies.

Essential Functions: Participates in or leads teams in the design, development, and delivery of consulting projects or components of larger, complex projects. Reviews and analyzes client requirements or problems and assists in developing proposals for cost-effective solutions that ensure profitability and high client satisfaction. Provides direction and guidance to Analysts, Consultants, and, where relevant, Statistical Services staff assigned to the engagement. Develops detailed documentation and specifications. Performs qualitative and/or quantitative analyses to assist in identifying client issues and developing client-specific solutions. Designs, structures, and delivers client reports and presentations appropriate to the characteristics and needs of the audience. May deliver some findings to clients.

Qualifications: Bachelor's degree required; Master's degree in Business Administration preferred. 4-8 years of related experience in consulting and/or the life sciences industry required.
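A small sketch of the metadata-driven, parameterized ingestion pattern the listing alludes to; in practice the configuration might live in a control table or ADF pipeline parameters, and every name, path, and target schema here is hypothetical.

```python
# Hypothetical config-driven ingestion: a small metadata list drives which sources
# are copied into the data lake as Delta tables.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-ingest").getOrCreate()

# Stand-in for a control table or pipeline parameters.
SOURCES = [
    {"name": "customers", "path": "/landing/crm/customers/", "format": "csv"},
    {"name": "orders",    "path": "/landing/erp/orders/",    "format": "json"},
]

for src in SOURCES:
    df = (
        spark.read.format(src["format"])
        .option("header", True)  # needed for CSV, ignored for JSON
        .load(src["path"])
    )
    # Write each source to its own Delta table in an assumed 'raw' database.
    df.write.format("delta").mode("overwrite").saveAsTable(f"raw.{src['name']}")
```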

Posted 1 week ago

Apply

5.0 - 10.0 years

17 - 27 Lacs

Bengaluru

Work from Office

Design and maintain data pipelines using Azure (Data Factory, Databricks, Synapse, SQL). Strong PySpark and Python skills required. Ensure performance, security, and compliance. Collaborate with teams to deliver scalable data solutions.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

About The Role: We are seeking a skilled and motivated Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will have hands-on experience with big data technologies, cloud platforms, and programming languages, and will play a key role in enabling data-driven decision-making across the organization.

Key Responsibilities: Design, develop, and optimize data pipelines for ETL processes using Apache Hadoop, Spark, and other big data tools. Implement and manage data workflows in cloud environments, primarily Microsoft Azure. Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver robust solutions. Ensure data quality, integrity, and security across all stages of data processing. Develop and maintain scalable data architectures for structured and unstructured data. Write efficient SQL queries for data extraction, transformation, and analysis. Monitor and troubleshoot data pipeline performance and reliability. Document data engineering processes and best practices.

Primary Skills: Big data technologies: Apache Hadoop, Spark. Cloud platforms: Microsoft Azure (Data Factory, Synapse, Blob Storage, etc.). Programming languages: Python, Java. ETL tools and techniques: data ingestion, transformation, and loading. SQL and data querying: advanced SQL for data manipulation and analysis. Data processing and management: batch and real-time data processing. Data analysis and business intelligence: integration with BI tools and dashboards.

Secondary Skills: Cloud computing concepts: public cloud, hybrid cloud, cloud security. Multi-paradigm programming: functional and object-oriented programming. Software development practices: version control, CI/CD, testing. Data science fundamentals: understanding of statistical methods and machine learning workflows. Information technology: general IT knowledge including networking, storage, and system architecture. Cloud providers: familiarity with AWS or Google Cloud Platform is a plus. Communication and collaboration: ability to work cross-functionally and explain technical concepts to non-technical stakeholders.
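For illustration, a minimal batch aggregation with Spark SQL in the spirit of the ETL duties above; the table and column names are invented.

```python
# Hypothetical daily aggregation job over a curated events dataset.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-agg").getOrCreate()

spark.read.parquet("/data/curated/events/").createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT event_date,
           event_type,
           COUNT(*)                AS event_count,
           COUNT(DISTINCT user_id) AS unique_users
    FROM events
    GROUP BY event_date, event_type
""")

daily.write.mode("overwrite").parquet("/data/marts/daily_events/")
```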

Posted 2 weeks ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Bengaluru

Hybrid

TECHNICAL SKILLS AND EXPERIENCE

Most important:
- 7+ years of professional experience as a data engineer, with at least 4 utilizing cloud technologies.
- Proven experience building ETL or ELT data pipelines with Databricks, in either Azure or AWS, using PySpark.
- Strong experience with the Microsoft Azure data stack (Databricks, Data Lake Gen2, ADF, etc.).
- Strong SQL skills and proficiency in Python, adhering to standards such as PEP 8.
- Proven experience with unit testing and applying appropriate testing methodologies using libraries such as Pytest, Great Expectations, or similar (see the sketch after this list).
- Demonstrable experience with CI/CD, including release and test automation tools and processes such as Azure DevOps, Terraform, PowerShell, and Bash scripting or similar.
- Strong understanding of data modeling, data warehousing, and OLAP concepts.
- Excellent technical documentation skills.
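A minimal pytest-style unit test for a PySpark transformation, in line with the testing expectations above; the business rule under test is a stand-in, not something from the posting.

```python
# Hypothetical unit test for a small PySpark transformation using a local session.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_net_amount(df):
    # Stand-in business rule: net amount after a 10% fee.
    return df.withColumn("net_amount", F.col("amount") * 0.9)

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

def test_add_net_amount(spark):
    df = spark.createDataFrame([(1, 100.0)], ["id", "amount"])
    result = add_net_amount(df).first()
    assert result["net_amount"] == pytest.approx(90.0)
```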

Posted 2 weeks ago

Apply

12.0 - 15.0 years

30 - 40 Lacs

Bengaluru

Work from Office

Highly skilled Data Engineer with 12+ years in data engineering, including 8+ years of hands-on experience in Azure technologies. Expertise in ELT, data modeling, integration, and Kubernetes with tools like Data Factory and Databricks. Proven project management skills and experience leading small teams in agile environments. Relevant Azure and AWS certifications are preferred.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Bengaluru

Work from Office

Seasoned Azure Data Engineer with over 10 years of experience, including 7+ in data engineering. Expertise in ELT, data modeling, and integration. Proficient in Azure Data Factory, Databricks, and Azure DevOps.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

3 - 8 Lacs

Kolkata, Pune, Bengaluru

Work from Office

Job Title: Developer
Work Location: Pune (MH), Kolkata
Skills Required: Python, Databricks, PySpark, Azure Data Factory, MySQL
Experience Range: 6-8 years

Job Description: A minimum of 5 years' experience with large SQL data marts and expert relational database experience. The candidate should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners. Experience in troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL imports of large volumes of data extracted from multiple systems; and capacity planning.

Essential Skills: Strong knowledge of Extraction, Transformation, and Loading (ETL) processes using frameworks like Azure Data Factory, Synapse, or Databricks; establishing cloud connectivity between systems such as ADLS, ADF, Synapse, and Databricks (see the sketch below). Databricks Architect, Cloud Architect, Python, SQL.

Desirable Skills: Design and develop ETL processes based on functional and non-functional requirements in Python/PySpark within the Azure platform. Databricks Architect, Cloud Architect, Python, SQL.
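A hedged sketch of the ADLS-to-Databricks connectivity the description mentions, using OAuth service-principal configuration; it assumes a Databricks notebook context (where `spark` and `dbutils` are predefined), and the storage account, secret scope, and paths are hypothetical.

```python
# Hypothetical ADLS Gen2 access from a Databricks notebook via a service principal.
storage = "mystorageacct"  # invented storage account name

spark.conf.set(f"fs.azure.account.auth.type.{storage}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{storage}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.id.{storage}.dfs.core.windows.net",
    dbutils.secrets.get("etl-scope", "sp-client-id"),  # assumed secret scope/keys
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.secret.{storage}.dfs.core.windows.net",
    dbutils.secrets.get("etl-scope", "sp-client-secret"),
)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{storage}.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",  # placeholder tenant
)

df = spark.read.parquet(f"abfss://curated@{storage}.dfs.core.windows.net/sales/")
```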

Posted 2 weeks ago

Apply

5.0 years

4 - 9 Lacs

Chennai

Remote

Req ID: 336026. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer - Azure to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Job Duties / Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Azure and Databricks. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 5+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. Ability to travel at least 25%.

Preferred Skills: Production experience in core data platforms such as Azure and Databricks. Hands-on knowledge of cloud and distributed data storage, including expertise in Azure data services, ADLS, ADF, Databricks, data quality, and ETL/ELT. A strong understanding of data integration technologies, encompassing Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form at https://us.nttdata.com/en/contact-us. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users; if you need assistance completing the application process, please contact us at the same address. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Req ID: 336028. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer - Azure to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Job Duties / Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Azure and Databricks. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 5+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. Ability to travel at least 25%.

Preferred Skills: Production experience in core data platforms such as Azure and Databricks. Hands-on knowledge of cloud and distributed data storage, including expertise in Azure data services, ADLS, ADF, Databricks, data quality, and ETL/ELT. A strong understanding of data integration technologies, encompassing Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form at https://us.nttdata.com/en/contact-us. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users; if you need assistance completing the application process, please contact us at the same address. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.

Posted 2 weeks ago

Apply