
1265 Azure Databricks Jobs - Page 24

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

25 - 30 Lacs

Telangana

Work from Office

Immediate openings for Azure Data Engineer, Hyderabad (Contract). Experience: 10+ years. Skills: Azure Data Engineer. Location: Hyderabad. Notice period: Immediate. Employment type: Contract. 10+ years of overall experience in support and development. Primary skills: Microsoft Azure cloud platform, Azure administration (good to have), Azure Data Factory (ADF), Azure Databricks, Azure Synapse Analytics, Azure SQL, Azure DevOps, Python or PySpark. Secondary skills: Data Lake, Azure Blob Storage, Azure Data Warehouse as a Service (DWaaS), Azure Log Analytics, Oracle, Postgres, Microsoft Storage Explorer, ServiceNow.

Posted 1 month ago

Apply

14.0 - 22.0 years

35 - 50 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role & responsibilities Job Summary: We are seeking a highly experienced Azure Databricks Architect to design and implement large-scale data solutions on Azure. The ideal candidate will have a strong background in data architecture, data engineering, and data analytics, with a focus on Databricks. Key Responsibilities: Design and implement end-to-end data solutions on Azure, leveraging Databricks, Azure Data Factory, Azure Storage, and other Azure services Lead data architecture initiatives, ensuring alignment with business objectives and best practices Collaborate with stakeholders to define data strategies, architectures, and roadmaps Develop and maintain data pipelines, ensuring seamless data integration and processing Optimize performance, scalability, and cost efficiency for Databricks clusters and data pipelines Ensure data security, governance, and compliance across Azure data services Provide technical leadership and mentorship to junior team members Stay up-to-date with industry trends and emerging technologies, applying knowledge to improve data solutions Requirements: 15+ years of experience in data architecture, data engineering, or related field 6+ years of experience with Databricks, including Spark, Delta Lake, and other Databricks features Databricks certification (e.g., Databricks Certified Data Engineer or Databricks Certified Architect) Strong understanding of data architecture principles, data governance, and data security Experience with Azure services, including Azure Data Factory, Azure Storage, Azure Synapse Analytics Programming skills in languages such as Python, Scala, or R Excellent communication and collaboration skills Nice to Have: Experience with data migration from on-premises data warehouses (e.g., Oracle, Teradata) to Azure Knowledge of data analytics and machine learning use cases Familiarity with DevOps practices and tools (e.g., Azure DevOps, Git) What We Offer: Competitive salary and benefits package Opportunity to work on 
large-scale data projects and contribute to the development of cutting-edge data solutions Collaborative and dynamic work environment Professional development and growth opportunities

Posted 1 month ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Hybrid

Immediate openings for ITSS Senior Azure Developer / Data Engineer, Bangalore (Contract). Experience: 5+ years. Skill: ITSS Senior Azure Developer / Data Engineer. Location: Bangalore. Notice period: Immediate. Employment type: Contract. Working mode: Hybrid. Job description: Senior Azure Developer. Locations: Bangalore, Hyderabad, Chennai, and Noida. Role: Data Engineer. Experience: Mid-level. Primary skillsets: Azure (ADF, ADLS, Key Vault). Secondary skillsets: Databricks. Good to have: strong communication skills; experience with cloud applications, especially Azure (the primary skillsets listed above); experience working in an agile framework; experience in ETL, SQL, and PySpark; ability to run with a task without waiting for direction; experience with Git repositories and release pipelines. Any Azure certification is a plus; a Databricks certification is icing on the cake.

Posted 1 month ago

Apply

6.0 - 11.0 years

4 - 7 Lacs

Hyderabad

Hybrid

Job details: Skill: Solution Analyst (MLOps). Experience: 6+ years. Location: PAN India. Job type: Contract to hire. Payroll company: IDESLABS. Work model: Hybrid. Job description: Mandatory skill: MLOps with Azure Databricks and DevOps. We need someone with exposure to the Models/MLOps ecosystem and Model Life Cycle Management. The primary responsibility is engaging with stakeholders on requirements elaboration: breaking requirements into stories for the pods by engaging with architects/leads on designs, participating in UAT, creating user-scenario tests, and producing product documentation describing features and capabilities. We are not looking for a Project Manager who will simply track things.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Develops data processing solutions using Scala and PySpark.

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai

Work from Office

Develop big data solutions using Azure Databricks. Optimize data processing and machine learning workflows.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 12 Lacs

Hyderabad, Gurugram

Work from Office

Develop and maintain SQL and NoSQL databases in Azure, including schema design, stored procedures, and data integrity. Continuously improve data pipelines using Azure Data Factory as a foundation. Develop insightful and interactive business reporting. Minimum of 4 years of Database Administrator experience with Microsoft SQL Server; minimum of 2 years with Azure SQL Database.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Greetings from tsworks Technologies India Pvt. Ltd. We are hiring for Sr. Data Engineer / Lead Data Engineer; if you are interested, please share your CV to mohan.kumar@tsworks.io. About this role: tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services team. You will get hands-on experience with projects employing industry-leading technologies. The role would initially focus on operational readiness and maintenance of existing applications, and would transition into a build-and-maintain role in the long run. Position: Senior Data Engineer / Lead Data Engineer. Experience: 5 to 11 years. Location: Bangalore, India / Remote. Mandatory qualifications: Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage; expertise in DevOps and CI/CD implementation; excellent communication skills. Skills and knowledge: Bachelor's or master's degree in Computer Science, Engineering, or a related field. 5 to 10 years of experience in Information Technology, designing, developing, and executing solutions. 3+ years of hands-on experience designing and executing data solutions on Azure cloud platforms as a Data Engineer. Familiarity with the Snowflake data platform is nice to have. Hands-on experience in data modelling and in batch and real-time pipelines using Python, Java, or JavaScript, and experience working with RESTful APIs, are required. Hands-on experience with SQL and NoSQL databases. Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems. Familiarity with data quality, governance, and security best practices. Knowledge of big data technologies such as Hadoop, Spark, or Kafka. Familiarity with machine learning concepts and integration of ML pipelines into data workflows. Hands-on experience working in an Agile setting. Self-driven, naturally curious, and able to adapt to a fast-paced work environment. Can articulate, create, and maintain technical and non-technical documentation. Public cloud certifications are desired.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 32 Lacs

Bengaluru

Remote

Greetings! We currently have an urgent opening for an Azure Architect role on one of our projects; this is a remote role. Job location: Remote. Looking only for immediate joiners. Must-have skills: Terraform, scripting, Azure migration activities. The candidate's two most recent projects should involve Azure migration, and he/she needs solid architectural skills. Job description: Architect for migration intakes, collaborating with the Cloud Platform Engineering team on demand forecasting. Primary skills: application migration, with broad experience migrating apps from on-prem and AWS to Azure; ability to identify potential challenges to migration; experience sizing applications in terms of R-type and complexity; experience assessing applications to understand and articulate the correct technical approach to migration; Azure app migration experience; strong knowledge of Azure cloud products/services. Experience range: should have at least 5 years' experience working on complex Azure migrations spanning multiple technologies (Windows, SQL, AKS, Oracle, .NET, Java). Must have experience working with lead architects and app owners to articulate challenges, and be comfortable guiding them. Interested candidates, kindly revert with your updated resume to gsathish@sonata-software.com. Regards, Sathish, Talent Acquisition, Sonata Software Services, 9840669681.

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 27 Lacs

Bengaluru

Hybrid

6-7 years of data and analytics experience, with a minimum of 3 years on Azure cloud. Excellent communication and interpersonal skills. Extensive experience with the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hubs, etc. Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler. Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala. Good experience designing and delivering data analytics solutions using Azure cloud-native services. Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration, and data migration design. Documentation of solutions (e.g. data models, configurations, and setup). Well versed in Waterfall, Agile, Scrum, and similar project delivery methodologies. Experienced in internal as well as external stakeholder management. Experience with MDM / DQM / data governance technologies like Collibra, Ataccama, Alation, or Reltio is an added advantage. Azure Data Engineer or Azure Solution Architect certification is an added advantage. Nice-to-have skills: working experience with Snowflake, Databricks, and the open-source stack (Hadoop big data, PySpark, Scala, Python, Hive, etc.).

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Employment type: Contract. Skills: Azure Data Factory, SQL, Azure Blob Storage, Azure Logic Apps.

Posted 1 month ago

Apply

12.0 - 15.0 years

45 - 50 Lacs

Bengaluru

Hybrid

Azure Data Architect with 12+ years' experience in Azure Databricks, Power BI, ETL, ADF, SQL, and data lakes. Skilled in cloud data architecture, reporting, data pipelines, and governance. Azure certification preferred. Strong leadership and Agile experience.

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 12 Lacs

Bengaluru

Work from Office

We are seeking an experienced Data Engineer to join our dynamic product development team. In this role, you will be responsible for designing, building, and optimizing data pipelines that ensure efficient data processing and insightful analytics. You will work collaboratively with cross-functional teams, including data scientists, software developers, and product managers, to transform raw data into actionable insights while adhering to best practices in data architecture, security, and scalability. Role & responsibilities * Design, build, and maintain scalable ETL processes to ingest, process, and store large datasets. * Collaborate with cross-functional teams to integrate data from various sources, ensuring data consistency and quality. * Leverage Microsoft Azure services for data storage, processing, and analytics, integrating with our CI/CD pipeline on Azure Repos. * Continuously optimize data workflows for performance and scalability, identifying bottlenecks and implementing improvements. * Deploy and monitor ML/GenAI models in production environments. * Develop and enforce data quality standards and data validation checks, and ensure compliance with security and privacy policies. * Work closely with backend developers (PHP/Node/Python) and DevOps teams to support seamless data operations and deployment. * Stay current with industry trends and emerging technologies to continually enhance data strategies and methodologies. Required Skills & Qualifications * Minimum of 4+ years in data engineering or a related field. * In-depth understanding of streaming technologies like Kafka and Spark Streaming. * Strong proficiency in SQL, Python, and Spark SQL for data manipulation, data processing, and automation. * Solid understanding of ETL/ELT frameworks, data pipeline design, data modelling, data warehousing, and data governance principles. * Must have in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.
* Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Fabric, Delta tables, and Unity Catalog. * Deep understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Snowflake, Redshift, BigQuery). * Good knowledge of Agile and SDLC/CI-CD practices and tools, with a good understanding of distributed systems. * Proven ability to work effectively in agile/scrum teams, collaborating across disciplines. * Excellent analytical, troubleshooting, and problem-solving skills and attention to detail. Preferred candidate profile * Experience with NoSQL databases and big data processing frameworks, e.g., Apache Spark. * Knowledge of data visualization and reporting tools. * Strong understanding of data security, governance, and compliance best practices. * Effective communication skills with an ability to translate technical concepts for non-technical stakeholders. * Knowledge of AIOps and LLM data pipelines. Why Join GenXAI? * Innovative Environment: Work on transformative projects in a forward-thinking, collaborative setting. * Career Growth: Opportunities for professional development and advancement within a rapidly growing company. * Cutting-Edge Tools: Gain hands-on experience with industry-leading technologies and cloud platforms. * Collaborative Culture: Join a diverse team where your expertise is valued and your ideas make an impact.
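The posting above asks for data quality standards and validation checks inside ETL pipelines. As an illustration only (not part of the listing), a minimal row-level validation pass might look like the following Python sketch; the rule set and field names are hypothetical:

```python
# Minimal data-quality check pass for an ETL pipeline (illustrative only).
# The rules and field names below are hypothetical examples.

def validate_rows(rows, rules):
    """Split rows into (valid, rejected) according to per-field rules.

    rules maps a field name to a predicate that must hold for the value.
    Rejected rows are tagged with the fields that failed, so they can be
    routed to a quarantine table instead of being silently dropped.
    """
    valid, rejected = [], []
    for row in rows:
        failures = [field for field, check in rules.items()
                    if not check(row.get(field))]
        if failures:
            rejected.append({"row": row, "failed_checks": failures})
        else:
            valid.append(row)
    return valid, rejected


rules = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"INR", "USD", "EUR"},
}

rows = [
    {"order_id": 1, "amount": 250.0, "currency": "INR"},
    {"order_id": -5, "amount": 99.0, "currency": "INR"},   # bad id
    {"order_id": 2, "amount": 10.0, "currency": "JPY"},    # bad currency
]

valid, rejected = validate_rows(rows, rules)
print(len(valid), len(rejected))  # 1 valid row, 2 quarantined
```

In a Spark-based pipeline the same idea is usually expressed as filter conditions that split a DataFrame into a clean stream and a quarantine stream.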

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

4+ years of hands-on experience using Azure cloud, ADLS, ADF, and Databricks. Finance domain data stewardship; finance data reconciliation with SAP downstream systems. Run and monitor pipelines; validate the Databricks notebooks. Able to interface with onsite/business stakeholders. Hands-on Python and SQL. Knowledge of Snowflake/DW is desirable.
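The posting above centers on reconciling finance data against SAP downstream systems. Purely as a hedged sketch (account keys, amounts, and the tolerance are hypothetical, not from the listing), the core of such a reconciliation is comparing control totals per key and flagging any differences:

```python
# Toy reconciliation between a source extract and a downstream copy
# (illustrative sketch; account names and tolerance are hypothetical).
from collections import defaultdict

def totals_by_key(rows, key, value):
    """Aggregate value per key, the usual unit of control-total checks."""
    agg = defaultdict(float)
    for r in rows:
        agg[r[key]] += r[value]
    return dict(agg)

def reconcile(source, downstream, tolerance=0.01):
    """Return per-key differences (source minus downstream) beyond tolerance."""
    keys = set(source) | set(downstream)
    return {k: round(source.get(k, 0.0) - downstream.get(k, 0.0), 2)
            for k in keys
            if abs(source.get(k, 0.0) - downstream.get(k, 0.0)) > tolerance}

src = [{"account": "4000", "amount": 100.0}, {"account": "4000", "amount": 50.0},
       {"account": "5000", "amount": 75.0}]
dst = [{"account": "4000", "amount": 150.0},
       {"account": "5000", "amount": 70.0}]

breaks = reconcile(totals_by_key(src, "account", "amount"),
                   totals_by_key(dst, "account", "amount"))
print(breaks)  # {'5000': 5.0}
```

On real volumes the same pattern would run as grouped aggregations in SQL or Spark rather than in-memory Python.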

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Immediate job openings for Big Data Engineer, PAN India (Contract). Experience: 6+ years. Skill: Big Data Engineer. Location: PAN India. Notice period: Immediate. Employment type: Contract. PySpark; Azure Databricks; experience with Workflows; Unity Catalog; managed/external data with Delta tables.
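The managed/external Delta table distinction mentioned above comes down to one DDL clause: without LOCATION the metastore (e.g. Unity Catalog) owns the files and DROP TABLE deletes the data; with LOCATION the table is external and DROP TABLE removes only the metadata. A small helper sketch (table names and the storage path are hypothetical; the generated DDL would be run via spark.sql on a real cluster):

```python
# Sketch: the DDL difference between managed and external Delta tables
# (table names and storage path are hypothetical examples).

def delta_table_ddl(table, columns, location=None):
    """Build CREATE TABLE DDL for a Delta table.

    Without LOCATION the table is managed: the metastore owns the files
    and DROP TABLE deletes the data. With LOCATION it is external:
    DROP TABLE removes only the metadata, leaving the files in place.
    """
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    ddl = f"CREATE TABLE {table} ({cols}) USING DELTA"
    if location is not None:
        ddl += f" LOCATION '{location}'"
    return ddl

columns = [("id", "BIGINT"), ("event_ts", "TIMESTAMP")]
managed = delta_table_ddl("main.sales.events", columns)
external = delta_table_ddl(
    "main.sales.events_ext", columns,
    location="abfss://lake@storageacct.dfs.core.windows.net/events")
print(managed)
print(external)
```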

Posted 1 month ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud primarily using Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions. 
Primary Responsibilities: Ingest data from multiple on-prem and cloud data sources using various tools & capabilities in Azure Design and develop Azure Databricks processes using PySpark/Spark-SQL Design and develop orchestration jobs using ADF, Databricks Workflow Analyzing data engineering processes being developed and act as an SME to troubleshoot performance issues and suggest solutions to improve Develop and maintain CI/CD processes using Jenkins, GitHub, Github Actions etc Building test framework for the Databricks notebook jobs for automated testing before code deployment Design and build POCs to validate new ideas, tools, and architectures in Azure Continuously explore new Azure services and capabilities; assess their applicability to business needs Create detailed documentation for cloud processes, architecture, and implementation patterns Work with data & analytics team to build and deploy efficient data engineering processes and jobs on Azure cloud Prepare case studies and technical write-ups to showcase successful implementations and lessons learned Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture Contribute to full lifecycle project implementations, from design and development to deployment and monitoring Ensure solutions adhere to security, compliance, and governance standards Monitor and optimize data pipelines and cloud resources for cost and performance efficiency Identifies solutions to non-standard requests and problems Support and maintain the self-service BI warehouse Mentor and support existing on-prem developers for cloud environment Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to 
flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Undergraduate degree or equivalent experience 7+ years of overall experience in Data & Analytics engineering 5+ years of experience working with Azure, Databricks, and ADF, Data Lake 5+ years of experience working with data platform or product using PySpark and Spark-SQL Solid experience with CICD tools such as Jenkins, GitHub, Github Actions, Maven etc. In-depth understanding of Azure architecture & ability to come up with efficient design & solutions Highly proficient in Python and SQL Proven excellent communication skills Preferred Qualifications: Snowflake, Airflow experience Power BI development experience Experience or knowledge of health care concepts – E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes – an enterprise priority reflected in our mission. #NIC External Candidate Application Internal Employee Application

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role : Relational Database - Database Administrator (DBA) Experience : 5+ Years Notice period : Immediate to 30 Days Location : Bangalore Employment Type : Full Time, Permanent Working mode : Regular Job Description : We are seeking an experienced Database Administrator (DBA) to join our dynamic team. The ideal candidate will have extensive knowledge in relational database management systems, with a strong focus on optimization, administration, and support. Key Responsibilities : - Administer and maintain both standalone and clustered database environments. - Optimize database performance through efficient indexing, partitioning strategies, and query optimization techniques. - Manage and support relational databases (e.g., MySQL, MS SQL). - Ensure data integrity, availability, security, and scalability of databases. - Implement and manage data storage solutions like S3/Parquet and Delta tables. - Monitor database performance, troubleshoot issues, and implement solutions. - Collaborate with development teams to design and implement database solutions. - Implement and manage database backups, restores, and recovery models. - Perform routine database maintenance tasks such as upgrades, patches, and migrations. - Document database processes, procedures, and configurations. Requirements : Required : - 5-10 years of proven experience as a Database Administrator (DBA). - Strong understanding of database management principles. - Proficiency in relational databases (e.g., MySQL, MS SQL). - Experience in optimizing database performance and implementing efficient query processing. - Familiarity with Linux environments and basic administration tasks. - Knowledge of Parquet file structures as relational data stores. Preferred : - Experience with RDS (AWS) and Databricks (AWS). - Understanding of Databricks and Unity Catalogue. - Experience with S3/Parquet and Delta tables. - Knowledge of Apache Drill and Trino DB connectors. 
- Prior experience with Hadoop, Parquet file formats, Impala, and HIVE.

Posted 1 month ago

Apply

3.0 - 6.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Project description: A DevOps Support Engineer will perform tasks related to data pipeline work, monitoring and support of job execution and data movement, and on-call support. In addition, deployed pipeline implementations will be tested for production validation. Responsibilities: Provide production support (first-tier, after-hours, and on-call). The candidate will eventually develop into more of a data engineering role within the Network Operations team, learning the telecommunications domain while developing data skills. Must-have skills: ETL pipelines, data engineering, data movement/monitoring, Azure Databricks, Watchtower, automation tools, testing. Nice to have: data engineering. Languages: English (C2, Proficient). Seniority: Regular.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Chennai

Work from Office

Key skills: Azure DevOps, Data Engineer, Azure Databricks, Azure. Roles and responsibilities: Design and develop scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake. Build robust ETL/ELT processes to ingest data from structured and unstructured sources. Implement data models and manage large-scale data warehouses and lakes. Optimize data processing workloads for performance and cost efficiency. Work closely with data scientists, analysts, and software engineers to meet data needs. Ensure data governance, quality, security, and compliance best practices. Monitor, troubleshoot, and enhance data workflows and environments. Skills required: Proven experience as a Data Engineer working in Azure environments. Strong expertise in Azure Data Factory, Synapse, Databricks, Azure SQL, and Data Lake. Proficient in SQL, Python, and PySpark. Solid understanding of ETL/ELT pipelines, data integration, and data modeling. Experience with CI/CD, version control (Git), and automation tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., ARM, Bicep, Terraform). Excellent problem-solving, communication, and collaboration skills. Education: Bachelor's degree in a related field.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Must-have skills: Azure Databricks, Python and PySpark, Spark. Please find the JD in the mail chain. Expert-level understanding of distributed computing principles. Expert-level knowledge and experience in Apache Spark. Hands-on experience in Azure Databricks, Data Factory, Data Lake Store/Blob Storage, and SQL DB. Experience creating big data pipelines with Azure components. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Experience with messaging systems such as Kafka or RabbitMQ. Good understanding of big data querying tools such as Hive and Impala. Experience integrating data from multiple data sources such as RDBMSs (SQL Server, Oracle), ERPs, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience designing and implementing big data solutions.
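The stream-processing systems mentioned in this posting (Spark Streaming, Storm) revolve around windowed aggregation over event time. As a concept sketch only, in plain Python with no Spark, and with hypothetical event fields and window size, a tumbling (fixed, non-overlapping) window count looks like this:

```python
# Concept sketch of a tumbling-window aggregation, the core idea behind
# the stream-processing systems named above (pure Python, no Spark;
# event fields and window size are hypothetical).

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed, non-overlapping time window.

    Each event is (epoch_seconds, key); the window start is the
    timestamp floored to the window size, analogous to grouping by a
    time window in Spark Structured Streaming.
    """
    counts = {}
    for ts, key in events:
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] = counts.get((window_start, key), 0) + 1
    return counts

events = [(0, "a"), (5, "a"), (9, "b"), (12, "a"), (21, "b")]
print(tumbling_window_counts(events, 10))
# {(0, 'a'): 2, (0, 'b'): 1, (10, 'a'): 1, (20, 'b'): 1}
```

A real engine adds what this sketch omits: incremental state, late-data handling via watermarks, and fault tolerance.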

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Skillset: In-depth knowledge of Azure Synapse Analytics (with dedicated pools). Proficient in Azure Data Factory (ADF) for ETL processes. Strong SQL skills for complex queries and data manipulation. Knowledge of data warehousing and big data analytics. Good analytical and problem-solving skills.

Posted 1 month ago

Apply

12.0 - 16.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Azure Data Factory / Azure Data Lake / Azure Databricks with Terraform professional. Mandatory: able to write Terraform code (modules and main configuration) to deploy Azure data services on the Azure platform; Azure subscription, networking, resource groups, and ETL services deployment; Azure CLI; Terraform template and module creation; Bitbucket knowledge. Nice to have: GitHub.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Pune

Work from Office

Responsibilities: Design, develop, and maintain scalable data pipelines using Databricks, PySpark, Spark SQL, and Delta Live Tables. Collaborate with cross-functional teams to understand data requirements and translate them into efficient data models and pipelines. Implement best practices for data engineering, including data quality and data security. Optimize and troubleshoot complex data workflows to ensure high performance and reliability. Develop and maintain documentation for data engineering processes and solutions. Requirements: Bachelor's or master's degree. Proven experience as a Data Engineer, with a focus on Databricks, PySpark, Spark SQL, and Delta Live Tables. Strong understanding of data warehousing concepts, ETL processes, and data modelling. Proficiency in programming languages such as Python and SQL. Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services. Excellent problem-solving skills and the ability to work in a fast-paced environment. Strong leadership and communication skills, with the ability to mentor and guide team members.

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Hyderabad

Work from Office

6+ years of experience in data engineering projects using Cosmos DB and Azure Databricks (minimum 3-5 projects). Strong expertise in building data engineering solutions using Azure Databricks and Cosmos DB. Strong T-SQL programming skills, or skills in any other flavor of SQL. Experience working with high-volume data, large objects, and complex data transformations. Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines. Good understanding of data modelling for data warehouses and data marts. Strong verbal and written communication skills. Ability to learn, contribute, and grow in a fast-paced environment. Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, ADLS Gen2, and Azure Event Hubs. Nice to have: experience using Jira and ServiceNow in project environments; experience implementing data warehouse and ETL solutions.

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 17 Lacs

Kochi, Hyderabad, Bengaluru

Work from Office

Role: Snowflake Azure Developer. Experience: 5-10 years. Location: Hyderabad, Bangalore, Nagpur, Kochi. We require a Snowflake developer proficient in SQL programming and in Snowflake architecture and components, who has also worked with ADF (Azure Data Factory) and on data migration from other clouds to Snowflake.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
