6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
Mandatory skill: MLOps with Azure Databricks / DevOps. We are looking for someone with exposure to the Models/MLOps ecosystem and to Model Life Cycle Management. The primary responsibility is engaging with stakeholders on requirements elaboration: breaking requirements into stories for the pods by working with architects/leads on designs, participating in UAT, creating user-scenario tests, and writing product documentation that describes features and capabilities. We are not looking for a Project Manager who only tracks things.
Posted 2 months ago
6.0 - 10.0 years
8 - 12 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Role Overview: We are looking for a skilled .NET Backend Developer with Azure Data Engineering expertise to join our dynamic and growing team. This role demands strong hands-on experience in .NET technologies along with cloud-based data engineering platforms like Azure Databricks or Snowflake.

Primary Technical Skills (Must-Have):
- .NET Core / ASP.NET Core / C# (strong backend development)
- Web API & microservices architecture
- SQL Server, NoSQL, Entity Framework (EF 6+)
- Azure Cloud Platform, Azure Data Engineering
- Azure Databricks, Microsoft Fabric, or Snowflake
- Database performance tuning & optimization
- Strong understanding of OOP & design patterns
- Agile methodology experience

Nice to Have (Secondary Skills):
- Angular / JavaScript frameworks
- MongoDB, NPM
- Azure DevOps build/release configuration
- Strong troubleshooting and communication skills
- Experience working with US clients is a plus

Required Qualifications:
- B.Tech / B.E / MCA / M.Tech or equivalent
- Minimum 6+ years of relevant hands-on experience
- Must be willing to work onsite in Hyderabad
- Excellent communication (verbal & written)
Posted 2 months ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Senior Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.

Responsibilities:
- Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI; strong skills in data pipelines and Power BI reporting.
- Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory.
- Document the high-level design components of the Databricks data pipeline framework.
- Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan.
- Lead the design and implementation of an MVP Databricks framework.
- Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework.
- Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing.
- Support the development team's capability building by establishing an L&D and knowledge-transition approach.
- Support the implementation of data pipelines against the new framework in line with the agreed migration plan.
- Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients.

Skill Set:
- Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps.
- Proficient Python, PySpark, and SQL coding skills.
- Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines.
- Creating data management frameworks and data pipelines that are metadata- and business-rules-driven, using Databricks.
- Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers.
- Experience with data profiling, data quality management, and data cleansing tools.

Immediate joining or short notice is required. Please call Varsha at 7200847046 for more info. Thanks, Varsha (7200847046)
Posted 2 months ago
8.0 - 12.0 years
15 - 22 Lacs
Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer
Company: NAM Info Private Limited
Location: Bangalore
Experience: 6-8 Years

Responsibilities:
- Develop and optimize data pipelines using Azure Databricks and PySpark (see the sketch after this posting).
- Write SQL / advanced SQL queries for data transformation and analysis.
- Manage data workflows with Azure Data Factory and Azure Data Lake.
- Collaborate with teams to ensure high-quality, efficient data solutions.

Required Skills:
- 6-8 years of experience in Azure Databricks and PySpark.
- Advanced SQL query skills.
- Experience with Azure cloud services, ETL processes, and data optimization.

Please send profiles for this role to narasimha@nam-it.com.
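For illustration only (not part of the posting), a minimal sketch of the kind of Databricks pipeline this role describes: ingest files landed in ADLS Gen2, transform with PySpark, and persist a Delta table. The storage account, paths, table, and column names are all hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingest raw CSVs landed in ADLS Gen2 (e.g., by an Azure Data Factory copy activity).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))  # hypothetical path

# Transform: de-duplicate on the business key, normalise types, stamp the load date.
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("load_date", F.current_date()))

# Persist as a Delta table, partitioned for downstream SQL analysis.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("load_date")
      .saveAsTable("curated.orders"))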
Posted 2 months ago
7.0 - 12.0 years
20 - 35 Lacs
Pune, Bengaluru
Hybrid
Hi all, we have a senior position for a Databricks expert.
Job Location: Pune and Bangalore (hybrid)
Perks: pick-up and drop provided
Note: overall experience should be 7+ years; immediate joiners only.

Role & responsibilities:
- Data engineering: data pipeline development using Azure Databricks (5+ years).
- Optimizing data processing performance, efficient resource utilization, and execution time.
- Workflow orchestration (5+ years): Databricks features like Databricks SQL, Delta Lake, and Workflows to orchestrate and manage complex data workflows.
- Data modelling (5+ years).

Nice to haves: knowledge of PySpark, good knowledge of data warehousing.
Posted 2 months ago
15.0 - 19.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Apache Spark, PySpark
Minimum 15 year(s) of experience is required
Educational Qualification: Graduate

Key Responsibilities:
1. Azure DevOps CI/CD Integration Specialist to help us set up the end-to-end technical Continuous Integration/Continuous Deployment framework in Azure for ADF, Databricks code, SQL, and AAS, and to embed the processes around it with the team.
2. Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
3. Azure Data Factory, Azure Data Lake Storage, Azure SQL, PySpark.

Technical Experience:
1. Extensive experience with Azure Databricks; good to have Synapse, SQL, PySpark.
2. Experience with Azure: Azure Data Factory, Azure Data Lake Storage, Databricks, Stream Analytics, Azure Functions, serverless architecture, ARM templates.
3. Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark SQL.
4. Advanced working SQL knowledge and experience working with relational databases and query authoring, as well as working familiarity with a variety of data sources.

Professional Attributes:
1. Strong project management and organizational skills.
2. Experience supporting and working with cross-functional teams in a dynamic environment.
3. Analytical bent of mind.
4. Ability to manage interaction with business stakeholders and others within the organization.
5. Good communication and documentation skills.

Qualification: Graduate
Posted 2 months ago
5.0 - 10.0 years
7 - 12 Lacs
Ahmedabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Microsoft Azure Analytics Services
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft Azure Analytics Services. Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Analytics Services.
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones.
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.

Professional & Technical Skills:
- Must-have skills: strong experience with Microsoft Azure Analytics Services; Databricks and PySpark skills are required.
- Good-to-have skills: experience with other Azure services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Experience in designing, building, and configuring applications using Microsoft Azure Analytics Services.
- Strong understanding of data warehousing concepts and best practices.
- Experience with ETL processes and tools such as SSIS or Azure Data Factory.
- Experience with SQL and NoSQL databases.
- Experience with Agile development methodologies.

Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality applications. This position is based at our Bengaluru office.

Qualifications: BE
Posted 2 months ago
3.0 - 8.0 years
20 - 32 Lacs
Bengaluru
Work from Office
Translate ideas & designs into running code. Automate business processes using Office 365 Power Automate, Power Apps, and Power BI. Perform software design, debugging, testing, and deployment. Implement custom solutions leveraging Canvas Apps, Model-Driven Apps, and Office 365.

Required Candidate profile: production-level app development experience using Power Apps, Power Automate, and Power BI. Experience in C#, JavaScript, jQuery, Bootstrap, HTML. Experience in SAP HANA, ETL processes, data modeling, data cleaning, and data pre-processing.
Posted 2 months ago
3.0 - 6.0 years
5 - 15 Lacs
Kochi, Thiruvananthapuram
Hybrid
Hiring for Azure Data Engineer in Kochi
Experience: 3 to 6 years
Location: Kochi

JD:
- Overall 3+ years of IT experience with 2+ years of relevant experience in Azure Data Factory (ADF), with good hands-on exposure to the latest ADF version.
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse).
- Project experience in Azure Data Lake / Blob (for storage).
- Basic understanding of Batch account configuration and the various control options.
- Sound knowledge of Databricks and Logic Apps.
- Able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF.

Interested candidates, please share your updated resume with the details below at Smita.Dattu.Sarwade@gds.ey.com:
Total Experience -
Relevant Experience -
Current Location -
Preferred Location -
Current CTC -
Expected CTC -
Notice period -
Posted 2 months ago
4.0 - 7.0 years
7 - 12 Lacs
Gurugram
Hybrid
Role & responsibilities:
- Design and build effective solutions using the primary key skills required for the profile.
- Support the Enterprise Data Environment team, particularly for data quality and production support.
- Collaborate on a data migration strategy for existing systems that need to migrate to a next-generation Cloud / AWS application software platform.
- Collaborate with teams as a key contributor of data architecture directives & documentation, including data models, technology roadmaps, standards, guidelines, and best practices.
- Focus on data quality throughout the ETL & data pipelines, driving improvements to data management processes, data storage, and data security to meet the needs of the business customers (a sketch of this kind of quality gate follows after this posting).

Preferred candidate profile:
EDUCATION: Bachelor's
FIELD OF STUDY: Information Technology
EXPERIENCE:
- 4+ years of total experience in the IT industry as a developer/senior developer/data engineer.
- 3+ years of experience working extensively with Azure services such as Azure Data Factory, Azure Synapse, and Azure Data Lake.
- 3+ years of experience working extensively with Azure SQL and MS SQL Server, with good exposure to writing complex SQL queries.
- 1+ years of experience working with the production support operations team as a production support engineer.
- Good knowledge of and exposure to important SQL concepts such as query optimization, data modelling, and data governance.
- Working knowledge of CI/CD processes using Azure DevOps and Azure Logic Apps.
- Very good written and verbal communication skills.

Perks and Benefits:
- Transportation services: convenient and reliable commute options to ensure a hassle-free journey to and from work.
- Meal facilities: nutritious and delicious meals provided to keep you energized throughout the day.
- Career growth opportunities: clear pathways for professional development and advancement within the organization.
- Captive unit advantage: work in a stable, secure environment with long-term projects and consistent workflow.
- Continuous learning: access to training programs, workshops, and resources to support your personal and professional growth.

Link to apply: https://encore.wd1.myworkdayjobs.com/externalnew/job/Gurgaon---Candor-Tech-Space-IT---ITES-SEZ/Senior-Data-Engineer_HR-18537
Or share your CV at Anjali.panchwan@mcmcg.com
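Purely as an illustration (not part of the posting), a minimal sketch of the kind of data-quality gate such a pipeline might run before loading downstream; the table, columns, and thresholds are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("staging.customers")  # hypothetical staging table

total = df.count()
null_emails = df.filter(F.col("email").isNull()).count()
dupe_keys = total - df.dropDuplicates(["customer_id"]).count()

# Fail the run early if quality thresholds are breached, before any downstream load.
assert null_emails / max(total, 1) < 0.01, f"too many null emails: {null_emails}"
assert dupe_keys == 0, f"duplicate customer_id rows: {dupe_keys}"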
Posted 2 months ago
1.0 - 2.0 years
4 - 9 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon

Role Overview:
We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure Databricks or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.

Key Responsibilities:
- Develop robust and scalable data pipelines using PySpark on cloud platforms like Azure Databricks or GCP Dataflow.
- Write optimized SQL queries for data transformation, analysis, and validation.
- Implement and support data warehouse models and principles, including fact and dimension modeling, star and snowflake schemas, Slowly Changing Dimensions (SCD), Change Data Capture (CDC), and the Medallion Architecture (see the SCD sketch after this posting).
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
- Communicate technical updates and contribute to sprint-level delivery.

Mandatory Skills:
- Strong hands-on experience with SQL and Python
- Working knowledge of PySpark for data transformation
- Exposure to at least one cloud platform: Azure Databricks or GCP
- Good understanding of data engineering and warehousing fundamentals
- Excellent debugging and problem-solving skills
- Strong written and verbal communication skills

Preferred Skills:
- Experience working with Databricks Community Edition or the enterprise version
- Familiarity with data orchestration tools like Airflow or Azure Data Factory
- Exposure to CI/CD processes and version control (e.g., Git)
- Understanding of Agile/Scrum methodology and collaborative development
- Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
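For illustration (not part of the posting), one common way to implement a Slowly Changing Dimension Type 2 upsert with a Delta Lake MERGE on Databricks. The table names, columns, and change-detection condition are hypothetical, and a production version would also restrict the final append to rows that actually changed.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
changes = spark.table("staging.customer_changes")          # hypothetical CDC feed
dim = DeltaTable.forName(spark, "warehouse.dim_customer")  # hypothetical dimension

# Step 1: close out current rows whose tracked attributes changed.
(dim.alias("d")
    .merge(changes.alias("c"),
           "d.customer_id = c.customer_id AND d.is_current = true")
    .whenMatchedUpdate(condition="d.address <> c.address",
                       set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append the incoming versions as new current rows (simplified).
(changes.withColumn("is_current", F.lit(True))
        .withColumn("start_date", F.current_date())
        .withColumn("end_date", F.lit(None).cast("date"))
        .write.format("delta").mode("append")
        .saveAsTable("warehouse.dim_customer"))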
Posted 2 months ago
6.0 - 11.0 years
18 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Please fill in the form below to apply for the position: https://forms.office.com/r/cZjqnwVUDw

Experience range: 6-12 years in design, data modelling, and data warehousing.
- Strong experience creating complex stored procedures and functions, and dynamic SQL.
- Strong experience in performance tuning activities.
- Must have experience with Azure Data Factory V2, Azure Synapse, Azure Databricks, and SSIS.
- Strong Azure SQL Database and Azure SQL Data Warehouse concepts.
- Strong verbal and written communication skills.
Posted 2 months ago
3.0 - 6.0 years
14 - 18 Lacs
Pune
Work from Office
- Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability.
- Collaborate with data analysts, engineers, and business teams to align data transformations with business needs.
- Monitor and troubleshoot data pipelines to ensure accuracy and performance.
- Work with Azure-based cloud technologies to support data storage, transformation, and processing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Strong MS SQL and Azure Databricks experience.
- Implement and manage data models in DBT, covering data transformation and alignment with business requirements.
- Ingest raw, unstructured data from cloud object storage and use DBT to convert it into structured datasets, enabling efficient analysis and reporting.
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.

Preferred technical and professional experience:
- Establish best DBT practices to improve performance, scalability, and reliability.
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as required.
Posted 2 months ago
4.0 - 8.0 years
4 - 8 Lacs
Gurugram
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role:
- Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and the semantic model.
- Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams.
- Strong understanding of Delta Lake, Parquet, and distributed data systems.
- Strong programming skills in Python, PySpark, Scala, or Spark SQL/T-SQL for data transformations.

Your Profile:
- Strong experience in the implementation and management of a lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
- Understanding of machine learning algorithms, AI/ML frameworks (e.g., TensorFlow, PyTorch), and Power BI is an added advantage.
- MS Fabric and PySpark are a must.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 months ago
6.0 - 8.0 years
8 - 11 Lacs
Chennai, Bengaluru
Work from Office
Skill: Azure Data Factory
Notice Period: 30 Days

Skills Required:
- 6-8 years of professional experience in data engineering or a related field.
- Profound expertise in SQL, T-SQL, database design, and data warehousing principles.
- Strong experience with Microsoft Azure tools, including SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Proficient in Python, PySpark, and PySQL for data processing and analytics tasks.
- Experience with Power BI and other reporting and analytics tools.
- Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimization in database and query processing.
- Excellent problem-solving, analytical, and communication skills.
Posted 2 months ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Opening: Senior Data Engineer (Remote, Contract, 6 Months)
Remote | Contract Duration: 6 Months | Experience: 6-8 Years

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain a secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills:
- Experience: 6+ years in data engineering.
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults.
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance.
- Agile, SDLC, containerization (Docker), clean coding practices.

Good-to-Have Skills:
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and a competitive programming background

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks, PySpark, Spark SQL, and Delta Live Tables (a minimal sketch follows after this posting).
- Collaborate with cross-functional teams to understand data requirements and translate them into efficient data models and pipelines.
- Implement best practices for data engineering, including data quality and data security.
- Optimize and troubleshoot complex data workflows to ensure high performance and reliability.
- Develop and maintain documentation for data engineering processes and solutions.

Requirements:
- Bachelor's or Master's degree.
- Proven experience as a Data Engineer, with a focus on Databricks, PySpark, Spark SQL, and Delta Live Tables.
- Strong understanding of data warehousing concepts, ETL processes, and data modelling.
- Proficiency in programming languages such as Python and SQL.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
- Strong leadership and communication skills, with the ability to mentor and guide team members.
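As a hedged illustration of the Delta Live Tables work mentioned above (not part of the posting), a minimal DLT pipeline in Python. It runs only inside a Databricks DLT pipeline, where the `dlt` module and the `spark` session are provided by the runtime; the path, dataset names, and expectation rule are hypothetical.

import dlt  # available only inside a Databricks Delta Live Tables pipeline
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested from cloud storage")
def events_raw():
    # `spark` is provided by the DLT runtime; the path is hypothetical.
    return spark.read.format("json").load(
        "abfss://raw@examplelake.dfs.core.windows.net/events/")

@dlt.table(comment="Validated events for downstream consumption")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # drop rows failing the rule
def events_clean():
    return dlt.read("events_raw").withColumn("ingest_date", F.current_date())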
Posted 2 months ago
6.0 - 10.0 years
6 - 10 Lacs
Hyderabad, Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights for better decisions. Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools. Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Provide technical leadership, and collaborate within a team environment as well as work independently. Be part of a DevOps team that completely owns and supports its product. Implement batch and streaming data pipelines using cloud technologies (see the ingestion sketch after this posting). Lead development of coding standards, best practices, and privacy and security guidelines. Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success. Minimum Qualifications, Education & Prior Job Experience:
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
3. 3 years of data engineering experience using SQL.
4. 2 years of cloud development (Microsoft Azure preferred), including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
5. A combination of development, administration, and support experience in several of the following tools/platforms is required:
   a. Scripting: Python, PySpark, Unix, SQL
   b. Data platforms: Teradata, SQL Server
   c. Azure Data Explorer (administration skills are a plus)
   d. Azure cloud technologies

Top 3 mandatory skills and experience: SQL, Python, PySpark.
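Purely as an illustration of the batch-and-streaming pipeline work described above (not part of the posting), a small sketch using Databricks Auto Loader to incrementally ingest newly landed files into a Delta table. The paths and table name are hypothetical, and `spark` is the ambient Databricks session.

# Incrementally pick up newly landed JSON files with Auto Loader (cloudFiles).
events = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .load("abfss://landing@examplelake.dfs.core.windows.net/events/"))

# Stream into a bronze Delta table; availableNow drains pending files and then stops,
# so the same job can also be scheduled batch-style.
(events.writeStream
       .format("delta")
       .option("checkpointLocation", "/checkpoints/bronze_events")
       .trigger(availableNow=True)
       .toTable("bronze.events"))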
Posted 2 months ago
6.0 - 10.0 years
7 - 11 Lacs
Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights for better decisions. Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools. Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Provide technical leadership, and collaborate within a team environment as well as work independently. Be part of a DevOps team that completely owns and supports its product. Implement batch and streaming data pipelines using cloud technologies. Lead development of coding standards, best practices, and privacy and security guidelines. Mentor others on technical and domain skills to create multi-functional teams.

All you'll need for success. Minimum Qualifications, Education & Prior Job Experience:
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
3. 3 years of data engineering experience using SQL.
4. 2 years of cloud development (Microsoft Azure preferred), including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
5. A combination of development, administration, and support experience in several of the following tools/platforms is required:
   a. Scripting: Python, PySpark, Unix, SQL
   b. Data platforms: Teradata, SQL Server
   c. Azure Data Explorer (administration skills are a plus)
   d. Azure cloud technologies

Top 3 mandatory skills and experience: SQL, Python, PySpark.
Posted 2 months ago
5.0 - 10.0 years
10 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
- Databricks skill set with PySpark and SQL; strong proficiency in both.
- Understanding of data warehousing concepts.
- ETL processes / data pipeline building with ADB/ADF.
- Experience with the Azure cloud platform and knowledge of data manipulation techniques.
- Experience working with business teams to convert requirements into technical stories for migration.
- Leading technical discussions and implementing the solution.
- Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination.
- Exposure to Unity Catalog is useful.
Posted 2 months ago
6.0 - 9.0 years
5 - 14 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
- Databricks skill set with PySpark and SQL; strong proficiency in both.
- Understanding of data warehousing concepts.
- ETL processes / data pipeline building with ADB/ADF.
- Experience with the Azure cloud platform and knowledge of data manipulation techniques.
- Experience working with business teams to convert requirements into technical stories for migration.
- Leading technical discussions and implementing the solution.
- Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination.
- Exposure to Unity Catalog is useful.
Posted 2 months ago
10.0 - 15.0 years
30 - 36 Lacs
Thiruvananthapuram
Work from Office
* Manage Azure data infrastructure using DevOps practices.
* Ensure security compliance through automation and collaboration.
* Develop IaC (infrastructure-as-code) tooling for efficient data management.
Immediate joiners preferred.
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Noida, Hyderabad, Delhi / NCR
Work from Office
Job Role: Azure Data Engineer
Location: Greater Noida & Hyderabad
Experience: 5 to 10 years
Notice Period: Immediate to 30 days

Job Description:
- Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
- Several years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions.
- 3 years of data engineering experience using SQL.
- 2 years of cloud development (Microsoft Azure preferred), including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
- A combination of development, administration, and support experience in several of the following tools/platforms is required:
  a. Scripting: Python, PySpark, Unix, SQL
  b. Data platforms: Teradata, SQL Server
  c. Azure Data Explorer (administration skills are a plus)
  d. Azure cloud technologies: Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Power Apps, and Azure Functions
  e. CI/CD: GitHub, Azure DevOps, Terraform
Posted 2 months ago
5.0 - 10.0 years
10 - 17 Lacs
Pune, Chennai, Bengaluru
Hybrid
Job opportunity from Hexaware Technologies! We are hiring an Azure Databricks consultant; an immediate joiner is required. Interested candidates, please reply to manojkumark2@hexaware.com with the details below. Shortlisted candidates will get an interview call on Saturday, 7th June.
Total IT Exp:
Exp in Azure Databricks:
Exp in PySpark:
Exp in Synapse:
CCTC & ECTC:
Immediate joiner: Yes / No
Location:
Posted 2 months ago
4.0 - 9.0 years
7 - 17 Lacs
Chennai, Coimbatore
Hybrid
Greetings from Cognizant! We are hiring for a permanent position with Cognizant.
Experience: 3-6 years
Mandatory to have experience in ETL testing, MongoDB, Python, SQL, and Azure Databricks.
Work Location: Chennai, Coimbatore
Interview Mode: Virtual
Interview Date: Weekday & Weekend
Posted 2 months ago
7672 Jobs | Paris,France