
41 Data Factory Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12 - 17 years

25 - 40 Lacs

Mumbai, Bengaluru, Gurgaon

Work from Office


Role:
- Responsible for building the data lake, data foundation and analytical solutions with standard design and modern cloud/hybrid architecture patterns.
- Work with cross-functional teams to make data usable for functional users and applications, enabling delivery of business value to customers.
- Provide architectural leadership and vision for the Bank's next-generation data platform and data lakehouse.
- Develop and maintain an architectural roadmap for data products and data services, ensuring alignment with business and enterprise architecture strategies and standards.
- Drive design and architecture for data transformation and aggregation; design and develop a roadmap and implementation based on current vs. future state in a cohesive architecture viewpoint.
- Review and understand business requirements and technical designs for physical data design, data pipelines (ETL) and other technical integrations.
- Build data pipelines for multiple storage solutions, including distributed platforms such as Databricks, Trino, Hadoop, MPP databases and cloud data warehouses.
- Design and implement low-latency analytical platform services leveraging open-source and cloud technology.
- Design and implement data governance and MDM tools.
- Design and build data capabilities to support the cloud data strategy, including fully automated data pipelines, data curation and consumption.

Experience:
- Experienced technology leader with 12+ years of software development experience, including 10+ years of data application or data platform architecture experience, with deep technology expertise.
- Deep knowledge and hands-on experience in the design and implementation of Big Data technologies (Apache Spark, Apache Airflow, Apache Flink, streaming data, ADLS, S3, object data stores, Trino, Databricks, Unity Catalog, data governance tools) and familiarity with data architecture patterns (data warehouse, data lake, data lakehouse, data ingestion, curation and consumption).

Posted 2 months ago

Apply

3 - 8 years

11 - 12 Lacs

Gurgaon

Hybrid


Position: Azure Data Engineer
Company: US MNC
Location: Gurgaon
Experience: ~5 years
Shift: 12:00 PM to 9:00 PM; two-way cabs provided

Skills Needed: Data Analysis, Azure, Data Lake, Data Factory, Databricks, Synapse, Azure SQL

Summary: The Azure Senior Data Engineer will be responsible for designing, building and maintaining efficient ELT/ETL pipelines using Data Factory, along with data movement using the relevant Azure services. The engineer will work in close coordination with the Tech Lead and/or Architect to understand requirements, implement solutions effectively, and ensure best practices are followed.

Key Tasks:
- Integrate data from various sources into a unified Azure data warehouse and suitable data marts.
- Continuously monitor and test the availability, performance and quality of data pipelines.
- Collaborate with peers, following SDLC best practice while maintaining code repositories and activity using Agile, DevOps and CI/CD methodologies across Dev, Test and QA environments.
- Work closely with stakeholders to understand ongoing requirements, build effective products and align data modelling principles.
- Adhere to agreed Release and Change Management processes.
- Troubleshoot and investigate anomalies and bugs in code or data.
- Adhere to test and reconciliation standards to produce confidence in delivery.
- Produce appropriate, comprehensive documentation to support ease of access for technical and non-technical users.
- Engage in a culture of continual process improvement and best practice.
- Respond efficiently to changing business priorities through effective time management.
- Conduct all activities and duties in line with company policy and compliantly.
- Carry out any other ad-hoc duties as requested by management.

Required:
- Close to 5 years' experience in Data Analytics.
- Working knowledge of the Azure data analytics ecosystem.
- Hands-on experience with Microsoft SQL Server, Azure Data Factory, Azure SQL and Azure Synapse.
- Computer graduate, currently in Delhi/NCR, available to join in 0-30 days.

For more details or to apply, connect with Mariyam at 7302214372 / mariyam@manningconsulting.in
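The "test and reconciliation standards" mentioned above typically mean comparing source and target after each load. A minimal, illustrative sketch in plain Python (table names and counts are invented, not from this listing; a real pipeline would pull these counts from the source system and the warehouse):

```python
# Hypothetical source-to-target row-count reconciliation after an ETL load.
# All table names and counts are illustrative.

def reconcile(source_counts: dict, target_counts: dict) -> list:
    """Return (table, source_count, target_count) tuples that disagree."""
    mismatches = []
    for table, src in source_counts.items():
        tgt = target_counts.get(table)
        if tgt != src:
            mismatches.append((table, src, tgt))
    return mismatches

source = {"orders": 1_000, "customers": 250}
target = {"orders": 1_000, "customers": 249}   # one row lost in the load

print(reconcile(source, target))  # [('customers', 250, 249)]
```

A matching pair of dictionaries yields an empty list, which is the signal that the load can be signed off.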

Posted 2 months ago

Apply

8 - 11 years

25 - 40 Lacs

Chennai, Bengaluru, Coimbatore

Hybrid


Job Overview: We are seeking a highly skilled and motivated Informatics Specialist to support the management, flow, monitoring, and reporting of data across our Azure Synapse data lake. The role will focus on data engineering, business intelligence and data governance, ensuring the accuracy, consistency, and security of our data while supporting analytics and reporting functions. This position requires strong technical expertise in data management, as well as experience with Azure Synapse, data governance frameworks, and best practices for data integrity and compliance.

Key Responsibilities:
- Data Flow Management: Oversee the flow of data within the Azure Synapse data lake, ensuring timely and efficient data movement between different systems and sources.
- Monitoring and Maintenance: Implement and manage processes to monitor data pipelines, data quality, and system performance, proactively addressing issues and optimizing workflows.
- Data Engineering: Develop, maintain, and optimize data pipelines for efficient data ingestion, transformation, and storage within the Azure cloud environment.
- Reporting and Analytics Support: Collaborate with business stakeholders to define reporting requirements, create insightful data reports, and support data-driven decision-making.
- Data Governance Implementation: Support the development and enforcement of data governance policies, ensuring adherence to standards for data quality, privacy, security, and compliance.
- Data Cataloging & Documentation: Assist in maintaining a comprehensive data catalog, ensuring all data assets are accurately documented and traceable to their sources.
- Stakeholder Collaboration: Work closely with cross-functional teams including data engineers, analysts, and business users to ensure data meets business needs and compliance requirements.
- Azure Synapse Optimization: Manage and optimize Azure Synapse environments for data storage, processing, and querying, ensuring high performance and cost efficiency.

Qualifications:
- Bachelor's degree in Informatics, Data Science, Computer Science, or a related field. Master's degree is a plus.
- 3+ years of experience in data management, including data lakes and data governance frameworks.
- Expertise in Azure Synapse Analytics, including data ingestion, transformation, and integration workflows.
- Strong understanding of data governance best practices and tools (e.g., data quality, metadata management, and data lineage).
- Proficiency in SQL, Python, or other relevant programming languages.
- Experience with data reporting tools (e.g., Power BI, Tableau) and an understanding of business intelligence and reporting processes.
- Strong problem-solving skills, with a focus on automation and process improvement.
- Familiarity with data privacy regulations (e.g., GDPR, CCPA) and security protocols.
- Excellent communication skills, with the ability to translate technical concepts into business insights.

Preferred Skills:
- Knowledge of cloud-based data architecture and experience with other Azure services (e.g., Azure Data Factory, Azure Data Catalog).
- Experience working in a data-driven organization with a focus on supporting advanced analytics and machine learning initiatives.
- Certification in Azure Data Services or Data Governance is a plus.
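The "data quality" monitoring this role describes usually boils down to a handful of rule checks run against each batch, such as completeness (no unexpected nulls) and uniqueness (no duplicate keys). A minimal sketch in plain Python, with invented field names; in practice these checks would run inside a Synapse or Spark pipeline:

```python
# Toy data-quality checks: completeness and key uniqueness.
# Row shape and field names are illustrative, not from any real system.

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},        # completeness violation
    {"id": 2, "email": "b@x.com"},   # uniqueness violation on "id"
]

def null_count(rows, field):
    """Count rows where the given field is missing or null."""
    return sum(1 for r in rows if r.get(field) is None)

def duplicate_keys(rows, key):
    """Return the set of key values that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        k = r[key]
        if k in seen:
            dupes.add(k)
        else:
            seen.add(k)
    return dupes

print(null_count(rows, "email"))   # 1
print(duplicate_keys(rows, "id"))  # {2}
```

Non-zero results from either check would typically fail the pipeline run or raise an alert rather than let bad data flow downstream.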

Posted 2 months ago

Apply

11 - 14 years

40 - 45 Lacs

Bengaluru

Work from Office


Experience: 15+ years in solution design for data and analytics deliveries.

Skills:
• Experience with a range of Azure-based big data and analytics platforms: ADLS, ADF, Databricks, Azure Data Warehouse, Power BI, Azure Cosmos DB, Azure ML, etc.
• Hands-on experience designing near-real-time data processing solutions using Databricks, in-stream architecture, and modular programming.
• Hands-on experience in Spark SQL, Spark ML, PySpark and Python.
• Extensive experience consulting with the business: translating business use cases into data analytics specifications and designing big data solutions to deliver those use cases.
• Conducting review sessions with architecture, solution design and project development resources as required during the design and development phases.
• Certifications: certification in Designing an Azure Data Solution, or one relevant to an Azure Architect role, is mandatory.

Expertise:
• Solution/technical architecture in the cloud for infrastructure migration, Big Data/analytics, information analysis, database management in the cloud, and IoT/event-driven/microservices in the cloud.
• Experience with private and public cloud architectures, their pros/cons, and migration considerations.
• Extensive hands-on experience implementing infrastructure migration, data migration, and data processing using Azure services: Networking, Windows/Linux virtual machines, Containers, Storage, ELB, AutoScaling, Azure Functions, serverless architecture, ARM templates, Azure SQL DB/DW, Data Factory, Azure Stream Analytics, Azure Analysis Services.
• DevOps on the Azure platform; developing and deploying ETL solutions on Azure.
• Strong in Power BI, Java, C, Spark, PySpark, and Unix shell/Perl scripting.
• Familiarity with the industry technology stack for metadata management: data governance, data quality, MDM, lineage.

Posted 3 months ago

Apply

5 - 10 years

20 - 25 Lacs

Chennai, Bengaluru

Hybrid


Job Description: AWS Data Engineer (Hadoop Migration)

We are seeking an experienced AWS Principal Data Architect to lead the migration of Hadoop DWH workloads from on-premise to AWS EMR. As an AWS Data Architect, you will be a recognized expert in cloud data engineering, developing solutions for the data processing and warehousing requirements of large enterprises. You will be responsible for designing, implementing, and optimizing the data architecture in AWS, ensuring highly scalable, flexible, secure and resilient cloud architectures that solve business problems and help accelerate the adoption of our clients' data initiatives on the cloud.

Key Responsibilities:
- Lead the migration of Hadoop workloads from on-premise to the AWS EMR stack.
- Design and implement data architectures on AWS, including data pipelines, storage, and security.
- Collaborate with cross-functional teams to ensure seamless migration and integration.
- Optimize data architectures for scalability, performance, and cost-effectiveness.
- Develop and maintain technical documentation and standards.
- Provide technical leadership and mentorship to junior team members.
- Work closely with stakeholders to understand business requirements and ensure data architectures meet business needs.
- Work alongside customers to build enterprise data platforms using AWS data services such as Elastic MapReduce (EMR), Redshift, Kinesis, Data Exchange, DataSync, RDS, Data Store, Amazon MSK, DMS, Glue, AppFlow, AWS Zero-ETL, Glue Data Catalog, Athena, Lake Formation, S3, RMS, DataZone, Amazon MWAA, and Kong APIs.
- Deep understanding of Hadoop components, conceptual processes and system functioning, and the corresponding components in AWS EMR and other AWS services.
- Good experience with Spark on EMR; experience with Snowflake/Redshift.
- Good grasp of AWS system engineering aspects of setting up CI/CD pipelines on AWS using CloudWatch, CloudTrail, KMS, IAM Identity Center, Secrets Manager, etc.
- Extract best-practice knowledge, reference architectures, and patterns from these engagements for sharing with the worldwide AWS solution architect community.

Basic Qualifications:
- 10+ years of IT experience, with 5+ years in Data Engineering and 5+ years of hands-on experience with AWS data/EMR services (e.g. S3, Glue, Glue Catalog, Lake Formation).
- Strong understanding of Hadoop architecture, including HDFS, YARN, MapReduce, Hive, and HBase.
- Experience with data migration tools like Glue and DataSync.
- Excellent knowledge of data modeling, data warehousing, ETL processes, and other data management systems.
- Strong understanding of security and compliance requirements in the cloud.
- Experience with Agile development methodologies and version control systems.
- Excellent communication and leadership skills; ability to work effectively across internal and external organizations and virtual teams.
- Deep experience with AWS-native data services, including Glue, Glue Catalog, EMR, Spark on EMR, DataSync, RDS, Data Exchange, Lake Formation, and Athena.
- AWS Certified Data Analytics – Specialty; AWS Certified Solutions Architect – Professional.
- Experience with containerization and serverless computing; familiarity with DevOps practices and automation tools.
- Experience with a Snowflake/Redshift implementation is an additional advantage.

Posted 3 months ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Hybrid


Azure Data Engineer Job Description

As an Azure Data Engineer, the candidate is expected to specialize in designing, implementing, and managing large-scale data solutions on the Microsoft Azure cloud platform, with expertise across data storage, data integration, and analytics using Azure data services. The ideal candidate will have a strong background in data engineering, Azure cloud services, and data processing technologies.

# Requirements:
1. 5+ years of experience in data engineering, preferably on Microsoft Azure (the requirement increases for more senior positions).
2. Strong knowledge of Azure cloud services, including Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage.
3. Experience with data processing technologies such as Apache Spark, Apache Hadoop, and SQL.
4. Strong understanding of data modeling, data warehousing, and data governance.
5. Experience with data security, compliance, and regulatory requirements.
6. Strong programming skills in languages such as Python, Scala, or Java.
7. Experience with agile development methodologies and version control systems such as Git.
8. Azure certifications, such as Azure Data Engineer Associate or Azure Solutions Architect Expert.
9. Experience with containerization technologies such as Docker.
10. Knowledge of data visualization tools such as Power BI and Tableau.
11. Experience with machine learning and AI technologies is an added advantage.

Soft Skills:
- Problem-solving: solve complex data problems effectively.
- Communication: communicate clearly with both technical and non-technical team members.
- Attention to detail: pay close attention to detail to ensure accuracy.
- Adaptability and learning: stay up to date with new technologies and trends.
- Teamwork: collaborate well with others for project success.
- Leadership and mentorship: take on leadership roles and mentor junior team members for growth.

Posted 3 months ago

Apply

15 - 22 years

45 - 65 Lacs

Chennai, Bengaluru, Hyderabad

Hybrid


We are looking for a Principal Data Architect to be based out of our Chennai, Bangalore, Pune, or Hyderabad office. This role combines hands-on contribution, customer engagement, and technical team management.

As a Data Architect, you will:
- Collaborate with architects, providing direction and guidance to design and maintain data management solutions on the Azure cloud using various PaaS services, big data technologies, and open-source frameworks.
- Lead initiatives to build practice-level capabilities, including creating POVs and case studies, building/enhancing accelerators, mentoring, brainstorming, partnerships (Azure, Databricks), validating architectural patterns and recommendations, and representing the practice in technical summits.
- Manage the full life cycle of a large data lake / lakehouse / data mesh implementation: from discovery and assessment, through architectural alignment and design, defining standards and best practices, to guiding architects and the team during implementation and deployment.
- Drive proposals (RFPs/RFIs) and opportunities from the technical solution standpoint.
- Collaborate with a team of business domain experts, data scientists, and application developers to develop solutions.
- Explore and learn new technologies for creative business problem-solving, and mentor a team of Data Engineers.

Required Experience, Skills & Competencies:
- Implementation experience across data management projects, including data platform modernization, data lake, lakehouse, and data mesh implementation, data virtualization, migration, and advanced use cases handling streaming, unstructured data, and Gen AI integration.
- Experience designing well-architected data management solutions: defining standards and best practices for data security, data classification, data governance (quality, lineage), data sovereignty, compliance, optimization, and cost management.
- Expertise in conducting discovery and assessment workshops involving multiple stakeholders: driving discussions, capturing pain points and wish lists, brainstorming, aligning on technical architecture, and creating a roadmap for data lake/lakehouse implementation.
- Strong hands-on experience implementing data lakes using technologies such as Databricks Spark, Microsoft Fabric, Data Factory, and Azure Data Lake.
- Excellent understanding of the architectural patterns for ingesting and processing data from batch and streaming sources.
- Strong programming skills in Python/Scala, PySpark, and SQL.
- Ability to understand data (attributes, metrics), relate it to reporting/analytical use cases of various business functions at a high level, and include those nuances in the design for scalability and availability.
- Hands-on experience with, or exposure to, NoSQL databases and data modeling.
- Experience setting up a consumption layer and serving data assets securely for BI reporting, ad-hoc analytics, DS/ML use cases, and external users.

Posted 3 months ago

Apply

3 - 5 years

10 - 11 Lacs

Mohali

Work from Office


Responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable, reliable data pipelines using Python, Spark, and other relevant technologies. Extract, transform, and load (ETL) data from various sources into data lakes and data warehouses. Implement data quality checks and monitoring to ensure data accuracy and consistency.
- Big Data Technologies: Work with Hadoop, Spark, Confluent Kafka, and other big data technologies to process and analyze large datasets. Build and maintain data lakes for storing structured and unstructured data. Optimize data processing workflows for performance and efficiency.
- Cloud Platform Integration: Develop and deploy data solutions on cloud platforms such as Azure or Google Cloud. Utilize cloud-based data services like Azure Data Factory, Redshift, Snowflake, or BigQuery. Implement cloud-based data storage and processing solutions.
- Database Management: Design and implement database schemas in PostgreSQL and other SQL/NoSQL databases. Write complex SQL queries for data extraction and analysis. Optimize database performance and ensure data integrity.
- Data Streaming: Implement data streaming solutions using Confluent Kafka. Build real-time data pipelines for data ingestion and processing.
- Data Warehousing: Design and implement data warehousing solutions. Work with data models and dimensional modeling.

Key skills: Hadoop, Spark, Confluent Kafka, Data Lake, PostgreSQL, Data Factory, Python, Scala or Java, Azure, Google Cloud, SQL, NoSQL databases, Redshift, Snowflake, BigQuery.
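The streaming-ingestion step described above (consume events, validate, land them in a partitioned lake layout) can be sketched in a few lines. This is a toy illustration only: a real pipeline would use a Kafka consumer and object storage, while here both are simulated in memory and all names are invented:

```python
# Simulated streaming ingestion: validate each event and route it to a
# date-partitioned "data lake" path. Event shape and paths are illustrative.

from collections import defaultdict

lake = defaultdict(list)  # partition path -> list of payloads

def ingest(event: dict) -> bool:
    """Validate and route one event; return False for malformed events."""
    if "event_date" not in event or "payload" not in event:
        return False                         # reject: cannot be partitioned
    partition = f"events/dt={event['event_date']}"
    lake[partition].append(event["payload"])
    return True

stream = [
    {"event_date": "2024-01-01", "payload": {"v": 1}},
    {"event_date": "2024-01-02", "payload": {"v": 2}},
    {"payload": {"v": 3}},                   # malformed: no event_date
]
results = [ingest(e) for e in stream]
print(results)        # [True, True, False]
print(sorted(lake))   # ['events/dt=2024-01-01', 'events/dt=2024-01-02']
```

Rejected events would normally go to a dead-letter topic for later inspection rather than being silently dropped.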

Posted 3 months ago

Apply

2 - 4 years

4 - 6 Lacs

Bengaluru

Work from Office


Job Description:
- Proven experience assembling large, complex data sets that meet functional and non-functional business requirements.
- Good exposure to Azure Databricks: PySpark, Spark SQL, Scala (lazy evaluation, Delta tables, Parquet formats, working with large and complex datasets).
- Experience/knowledge of ETL, data pipelines, and data flow techniques using Azure Data Services.
- Leverage Databricks features such as Delta Lake, Unity Catalog, and advanced Spark configurations for efficient data management.
- Debug Spark jobs, analyze performance issues, and implement optimizations to ensure pipelines meet SLAs for latency and throughput.
- Implement data partitioning, caching, and clustering strategies for performance tuning.
- Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and the various data storage options on the cloud.
- Develop and manage CI/CD pipelines for deploying Databricks notebooks and jobs, using tools like Azure DevOps, Git, or Jenkins for version control and automation.
- Experience in development projects as a Data Architect.
- Must-have skills: Data Factory, Databricks, Databricks architecture, Synapse, PySpark, Python and Spark; Azure DBs: Azure SQL, Cosmos DB.
- Integrate data validation frameworks like Great Expectations and implement data quality checks to ensure reliability and consistency.
- Build and maintain a lakehouse architecture in ADLS/Databricks.
- Manage access controls and ensure compliance with data governance policies using Unity Catalog and role-based access control (RBAC).
- Experience integrating different data sources.
- Good experience with Snowflake is an added advantage.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Create and maintain comprehensive documentation for data processes, procedures, and architecture designs.
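The partitioning strategy mentioned above pays off through partition pruning: a query filtered on the partition column only touches the matching files instead of scanning the whole table. A minimal sketch of the idea in plain Python, with invented paths and row counts (a real lakehouse engine such as Spark or Delta Lake does this from table metadata):

```python
# Toy illustration of partition pruning over a date-partitioned layout.
# File paths and row counts are illustrative.

files = {
    "sales/dt=2024-01-01/part-0.parquet": 120,
    "sales/dt=2024-01-02/part-0.parquet": 95,
    "sales/dt=2024-02-01/part-0.parquet": 110,
}

def prune(files, prefix):
    """Keep only files whose partition path matches the filter prefix."""
    return [f for f in files if f.startswith(prefix)]

jan = prune(files, "sales/dt=2024-01")
print(len(jan))   # 2 of 3 files scanned; the February file is skipped
```

The same principle is why choosing a partition column that matches common query filters (usually a date) matters more than any per-file tuning.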

Posted 3 months ago

Apply

7 - 9 years

9 - 12 Lacs

Bengaluru

Work from Office


Senior Azure Data Engineer - AM - BLR/Kochi - J48762

Responsibilities:
- Design, build, and maintain Azure data services for internal and client platforms, including Azure SQL Database, Azure Data Lake, Data Factory, Stream Analytics, Azure Analysis Services, Azure Databricks, and MS Fabric.
- Develop and implement ETL processes, data pipelines, and data integration solutions on PySpark clusters.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Collaborate with the team to design optimised data models for traditional warehouses as well as the latest delta lakes.
- Ensure data security, compliance, and best practices are always maintained.
- Optimise system performance, ensuring the fastest possible data retrieval and efficient data updates.
- Keep up to date with emerging database technologies and trends.

Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field; a Master's degree is preferred.
- Proven work experience as a Data Engineer or in a similar role.
- Expertise in Azure data services and tools, including Azure Databricks and MS Fabric.
- Proficiency in PySpark and experience with other programming languages (SQL, Python, Java).
- Experience designing optimised data models in traditional warehouses and delta lakes.
- Strong analytical skills and the ability to create useful data insights and visualisations.
- Excellent problem-solving and communication skills.
- Knowledge of other cloud platforms is a plus.
- Certifications in Azure Data Engineering or related fields are a plus.

Required Candidate Profile:
- Experience: 7 to 9 years.
- Degree: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA.

Posted 3 months ago

Apply

5 - 10 years

9 - 18 Lacs

Bengaluru

Hybrid


We are seeking a highly skilled Power BI expert with over 5 years of experience in business intelligence and data analytics. The ideal candidate will have expertise in Azure, Data Factory, Microsoft Fabric, and data warehousing.

Required Candidate Profile:
- Experience with Power BI, Azure, data warehousing, and related technologies.
- Proficiency in DAX, Power Query, SQL, and data visualization best practices.
- Degree in Computer Science, Data Analytics.

Posted 3 months ago

Apply

7 - 12 years

25 - 30 Lacs

Pune

Hybrid


Job Title: Planning, Data, and Analytics Operations Architect
Location: Pimple Saudagar, Pune
Work Mode: Hybrid (3 days WFO, 2 days remote)

We are looking for a Planning, Data, and Analytics Operations Architect to lead a team responsible for first- and second-level support for data and analytics platforms, solutions, and data operations. The ideal candidate will ensure smooth daily operations, handle bug fixes, assist with regression testing, and implement minor enhancements to continuously improve analytics and data management.

Key Responsibilities:
• Lead a team providing first- and second-level support for data and analytics platforms.
• Manage and prioritize support requests, ensuring timely issue resolution.
• Foster a collaborative and innovative team environment.
• Monitor daily data jobs to ensure efficiency and troubleshoot issues.
• Develop and implement bug fixes, minimizing disruptions.
• Assist in regression testing and perform project quality checks before deployment.
• Work on minor enhancements to optimize data solutions and analytics platforms.
• Provide training and guidance to enhance team skills in Power BI, SAP, and Azure Synapse.
• Maintain comprehensive documentation of support processes, fixes, and enhancements.
• Ensure data governance policies align with industry standards and regulatory requirements.
• Stay updated on industry best practices in data operations and analytics.

Required Skills:
• Data & Analytics Platforms: Power BI, SAP, Azure Synapse
• Cloud Technologies: Azure (Synapse, Data Factory), AWS
• Data Operations & Governance: data pipelines, ETL, data quality, compliance
• Monitoring & Troubleshooting: job monitoring, bug fixing, regression testing
• Scripting & Automation: Python, SQL, PowerShell
• Leadership & Collaboration: team management, stakeholder engagement

Why Join Us?
• Opportunity to work on large-scale global projects
• Fast-paced, challenging environment
• Collaborative culture with a strong focus on innovation and quality

Interested candidates can share their resume at: minal_mohurle@persolkelly.com

CONFIDENTIAL NOTE: We at PERSOLKELLY India, and our representatives, if any, do not ask job seekers to pay any kind of fee, fine, or penalty, or to make cash or online payments through any channel in exchange for interviews, offer letters, jobs, or penalty claims, for PERSOLKELLY or any of our clients. Nor do we ask candidates to supply credit card numbers, PINs, or OTP details relating to bank accounts. All our emails will be sent from our official domain only, @persolkelly.com; we are not liable for communication from any other domain. If you receive any suspicious requests purportedly from PERSOLKELLY India, please alert us at Contactus_in@persolkelly.com.

Posted 3 months ago

Apply

3 - 6 years

5 - 8 Lacs

Uttar Pradesh

Work from Office


- Proven experience as an MS Fabric Data Architect.
- Comfortable developing and implementing a delivery plan with key milestones based on requirements.
- Strong working knowledge of Microsoft Fabric core features, including setup, configuration, and use of:
  - Azure Data Lake (OneLake) for big data storage.
  - Azure Synapse Data Warehouse for database management.
  - Azure Synapse Data Engineering and Data Factory for data integration.
  - Microsoft Purview (preview for Fabric) for data governance.
  - Azure Data Science for analytics and AI workloads.
  - Eventstream and Data Activator for real-time data flows.
- Strong understanding of data modelling, including relational and NoSQL data models.
- Ability to interpret an organisation's information needs.
- Experience collaborating with Azure Cloud Architects to achieve platform goals.
- Proven experience designing data architecture to support self-serve analytics and AI development.
- Knowledge of dimensional modelling and data warehousing techniques.
- Expertise in data partitioning, indexing, and optimisation strategies for large datasets.
- Solution/technical architecture in the cloud: Big Data/analytics/information analysis/database management, and IoT/event-driven/microservices.
- Experience with private and public cloud architectures, pros/cons, and migration considerations.
- Extensive hands-on experience implementing data migration and data processing using Azure services: serverless architecture, Azure Storage, Azure SQL DB/DW, Data Factory, Azure Stream Analytics, Azure Analysis Services, HDInsight, Databricks, Azure Data Catalog, Cosmos DB, ML Studio, AI/ML, Azure Functions, ARM templates, Azure DevOps, CI/CD, etc.
- Cloud migration methodologies and processes, including tools like Azure Data Factory, Event Hubs, etc.
- Familiarity with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Familiarity with networking, Windows/Linux virtual machines, containers, storage, ELB, and auto-scaling is a plus.

Nice-to-Have Certifications:
- AZ-303: Microsoft Azure Architect Technologies
- AZ-304: Microsoft Azure Architect Design
- DP-200: Implementing an Azure Data Solution
- DP-201: Designing an Azure Data Solution

Nice-to-Have Skills/Qualifications:
- DevOps on the Azure platform; experience developing and deploying ETL solutions on Azure.
- Strong in Power BI, C#, Spark, PySpark, Unix shell/Perl scripting.
- Familiarity with the industry technology stack for metadata management: data governance, data quality, MDM, lineage, data catalog, etc.
- Multi-cloud experience (Azure, AWS, Google) is a plus.

Professional Skill Requirements:
- Proven ability to build, manage, and foster a team-oriented environment.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Desire to work in an information systems environment.
- Excellent communication (written and oral) and interpersonal skills.
- Excellent leadership and management skills.
- Excellent organizational, multitasking, and time management skills.
- Proven ability to work independently.

Posted 3 months ago


7 - 12 years

8 - 18 Lacs

Gurgaon

Remote


Application integration on Microsoft Azure using Logic Apps and Service Bus
Experience with connectors and developing custom connectors
Developing applications on the Microsoft Azure platform
Good understanding of Azure VM, VNET, Storage, Subscriptions, and Security
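Service Bus integrations like those above authenticate with a Shared Access Signature. As a stdlib-only sketch, this builds the SAS token used by the Service Bus REST API (the namespace, queue name, and key below are placeholders; real applications typically use the azure-servicebus SDK or the Logic Apps connector instead):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

def service_bus_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build a Shared Access Signature token for the Service Bus REST API:
    HMAC-SHA256 over the URL-encoded resource URI and an expiry timestamp."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = quote_plus(resource_uri)
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={quote_plus(signature.decode())}&se={expiry}&skn={key_name}"
    )

# Hypothetical namespace, queue, and key for illustration only
token = service_bus_sas_token(
    "https://mybus.servicebus.windows.net/myqueue",
    "RootManageSharedAccessKey",
    "dummy-key",
)
```

The resulting token goes into the `Authorization` header of REST calls to the queue or topic endpoint.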

Posted 3 months ago


3 - 8 years

30 - 35 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office


Skills required: Azure Data Factory, Kubernetes, Azure DevOps
Must have:
Working experience with Azure DevOps (4+ years)
Working experience with Kubernetes: scripting, deployment
Data Factory
Terraform scripting
Ansible
PowerShell
Python, CloudFormation
Good knowledge of the ITIL process (good to have)
Must have:
Strong knowledge of Kubernetes and the Istio service mesh
Linux: CLI and basic knowledge of the OS
Scripting (Bash and YAML)
Containerization and Docker essentials
Jenkins pipeline creation and execution
SCM management such as GitHub and SVN
Cloud platform knowledge: Azure
Monitoring tools such as Grafana, Prometheus, and the ELK stack
Certifications (good to have):
1. Solutions Architect Associate
2. Certified Kubernetes Administrator (CKA)
Location: Remote, anywhere in Delhi/NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
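The Kubernetes deployment work listed above revolves around writing manifests. As a minimal sketch, this builds a Deployment manifest programmatically (the app name and image are placeholders; `kubectl apply` accepts JSON as well as YAML, so the stdlib `json` module suffices here):

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical app name and registry image
manifest = deployment_manifest("data-pipeline-agent", "myregistry.azurecr.io/agent:1.0")
print(json.dumps(manifest, indent=2))
```

Generating manifests from code like this is also the usual stepping stone to templating them with Helm or provisioning them via Terraform's Kubernetes provider.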

Posted 3 months ago


4 - 6 years

7 - 9 Lacs

Hyderabad

Work from Office


Your role
Develop and maintain data pipelines tailored to Azure environments, ensuring security and compliance with client data standards.
Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
Leverage Python libraries for data handling, enhancing processing efficiency and robustness.
Ensure SQL workflows meet client performance standards and handle large data volumes effectively.
Build and maintain reliable ETL pipelines, supporting full and incremental loads and ensuring data integrity and scalability in ETL processes.
Implement CI/CD pipelines for automated deployment and testing of data solutions.
Optimize and tune data workflows and processes to ensure high performance and reliability.
Monitor, troubleshoot, and optimize data processes for performance and reliability.
Document data infrastructure and workflows, and maintain industry knowledge in data engineering and cloud technology.
Your profile
Bachelor's degree in computer science, information systems, or a related field
4+ years of data engineering experience with a strong focus on Azure data services for client-centric solutions
Extensive expertise in Azure Synapse, Data Lake Storage, Data Factory, Databricks, and Blob Storage, ensuring secure, compliant data handling for clients
Good interpersonal communication skills
Skilled in designing and maintaining scalable data pipelines tailored to client needs in Azure environments
Proficient in SQL and PL/SQL for complex data processing and client-specific analytics
What you will love about working here
We recognize the significance of flexible work arrangements. Be it remote work or flexible work hours, you will get an environment that supports a healthy work-life balance.
At the heart of our mission is your career growth. Our array of career growth programs and diverse professions is crafted to support you in exploring a world of opportunities.
Equip yourself with valuable certifications in the latest technologies such as Generative AI.
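The incremental loads this role calls for are usually built on a high-watermark pattern: only rows changed since the last recorded watermark are extracted, and the watermark then advances. A minimal sketch, with hypothetical in-memory rows standing in for a source table:

```python
from datetime import datetime

def incremental_load(source_rows, last_watermark):
    """High-watermark incremental extract: return rows changed since the last
    watermark, plus the new watermark for the next run."""
    batch = [r for r in source_rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=last_watermark)
    return batch, new_watermark

# Hypothetical source rows with change timestamps
rows = [
    {"id": 1, "updated_at": datetime(2024, 5, 1)},
    {"id": 2, "updated_at": datetime(2024, 5, 3)},
    {"id": 3, "updated_at": datetime(2024, 5, 6)},
]
batch, wm = incremental_load(rows, datetime(2024, 5, 2))
# batch contains ids 2 and 3; wm advances to 2024-05-06
```

In an Azure pipeline, the same pattern typically appears as a watermark value stored in a control table and injected into the source query of a Data Factory copy activity.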

Posted 3 months ago
