
1262 Azure Databricks Jobs

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

You are invited to join our team as a Data Engineer specializing in Azure Databricks, with in-person hiring sessions scheduled for 2 August 2025 in Pune and Bengaluru. Interview locations: Pune: Persistent Systems, 9a, Aryabhata-Pingala, 12, Kashibai Khilare Path, Marg, Erandwane, Pune, Maharashtra 411004. Bangalore: Persistent Systems, The Cube at Karle Town Center Rd, Dada Mastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024.

As a Data Engineer with 4+ years of experience, you will design, implement, and manage scalable data pipelines using Azure Databricks, DBT, Python/PySpark, and SQL, collaborating with various teams to ensure the efficiency, scalability, and availability of our data pipelines.

Key Responsibilities:
- Design and implement complex, scalable data pipelines using Azure technologies.
- Collaborate with Architects, Data Analysts, and Business Analysts to develop efficient workflows.
- Manage data storage solutions including Azure SQL Database, Data Lake, and Blob Storage.
- Utilize Azure Data Factory and cloud-native tools for ETL processes.
- Conduct unit testing, mentor junior engineers, and optimize data workflows for performance and cost-efficiency.
- Monitor pipeline performance, troubleshoot issues, and provide regular updates.

Skills and Qualifications:
- Strong experience with Azure and Databricks.
- Proficiency in Python/PySpark and SQL.
- Familiarity with DBT and Dremio is a plus.

In addition to a competitive salary and benefits package, we offer a culture focused on talent development, employee engagement initiatives, health benefits, and an inclusive environment that supports diversity and inclusion. We provide hybrid work options, flexible hours, and accessible facilities to accommodate diverse needs and preferences. At Persistent, we are committed to creating an inclusive environment where all employees can thrive, grow both professionally and personally, impact the world positively, and enjoy collaborative innovation using cutting-edge technologies. Join us to unlock global opportunities and unleash your full potential in a values-driven and people-centric work environment. Persistent is an Equal Opportunity Employer that prohibits discrimination and harassment of any kind. Apply now and be a part of our diverse and innovative team!
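As a rough illustration of the pipeline work this role describes, here is a minimal, hedged PySpark sketch for Azure Databricks; the storage path, table name, and columns are hypothetical placeholders rather than details from the posting.

```python
# Minimal Azure Databricks / PySpark sketch: ingest raw files from ADLS, clean them, write a Delta table.
# The storage path, table name, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"

orders = spark.read.format("json").option("multiLine", "true").load(raw_path)

cleaned = (
    orders
    .dropDuplicates(["order_id"])                        # remove duplicate events
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
    .withColumn("order_date", F.to_date("order_ts"))     # derive a partition column
    .filter(F.col("amount") > 0)                         # basic data-quality rule
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```

In a production pipeline this logic would usually be parameterized and orchestrated from Azure Data Factory or Databricks Workflows rather than run as a standalone script.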

Posted 10 hours ago

Apply

5.0 - 12.0 years

0 Lacs

coimbatore, tamil nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies. Your expertise should include a deep understanding of distributed computing principles and strong knowledge of Apache Spark. Proficiency in Python programming is required, along with experience using technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming for building stream-processing systems.

You should have a good understanding of Big Data querying tools like Hive and Impala, as well as experience integrating data from various sources such as RDBMS, ERP systems, and files. Knowledge of SQL queries, joins, stored procedures, and relational schemas is essential, as is experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks.

The role requires performance tuning of Spark jobs, experience with Azure Databricks, and the ability to lead a team efficiently. Designing and implementing Big Data solutions and following Agile methodology are key aspects of this position.
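The stream-processing experience this listing asks for (Spark Streaming with messaging systems) often looks like the following Structured Streaming sketch; the Kafka broker, topic, schema, and target table are assumed placeholders.

```python
# Spark Structured Streaming sketch: consume a Kafka topic and append to a Delta table.
# Broker address, topic name, schema, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

raw_stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
events = (
    raw_stream
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")
    .outputMode("append")
    .toTable("bronze.clickstream_events")
)
```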

Posted 11 hours ago

Apply

4.0 - 10.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

At EY, you will have the opportunity to shape a career as unique as you are, supported by a global network, inclusive culture, and cutting-edge technology to help you reach your full potential. Your individual perspective and voice are valued as contributions to the continuous improvement of EY. By joining us, you can create an outstanding experience for yourself while contributing to a more efficient and inclusive working world for all.

As a Data Engineering Lead, you will work closely with the Data Architect to design and implement scalable data lake architecture and data pipelines. Your responsibilities will include designing and implementing scalable data lake architectures using Azure Data Lake services, developing and maintaining data pipelines for ingestion from various sources, optimizing data storage and retrieval for efficiency and performance, ensuring data security and compliance with industry standards, collaborating with data scientists and analysts to enhance data accessibility, monitoring and troubleshooting data pipeline issues to ensure reliability, and documenting data lake designs, processes, and best practices. You should have experience with SQL and NoSQL databases, as well as familiarity with big data file formats such as Parquet and Avro.

Roles and Responsibilities:

Must-Have Skills:
- Azure Data Lake
- Azure Synapse Analytics
- Azure Data Factory
- Azure Databricks
- Python (PySpark, NumPy, etc.)
- SQL
- ETL
- Data warehousing
- Azure DevOps
- Experience developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming
- Experience integrating with business intelligence tools such as Power BI

Good-to-Have Skills:
- Big Data technologies (e.g., Hadoop, Spark)
- Data security

General Skills:
- Experience with Agile and DevOps methodologies and the software development lifecycle
- Proactive and accountable for deliverables
- Ability to identify and escalate dependencies and risks
- Proficient in working with DevOps tools with limited supervision
- Timely completion of assigned tasks and regular status reporting
- Capability to train new team members
- Knowledge of cloud solutions such as Azure or AWS, with DevOps/Cloud certifications desirable
- Ability to work effectively with multicultural, global teams, including virtually
- Strong relationship-building skills with project stakeholders

Join EY in its mission to build a better working world by creating long-term value for clients, people, and society, and fostering trust in the capital markets. Leveraging data and technology, diverse EY teams across 150+ countries provide assurance and support clients in growth, transformation, and operations across various sectors. Through its services in assurance, consulting, law, strategy, tax, and transactions, EY teams strive to address complex global challenges by asking insightful questions to discover innovative solutions.

Posted 12 hours ago

Apply

2.0 - 6.0 years

0 Lacs

ahmedabad, gujarat

On-site

You will be responsible for designing, developing, and maintaining scalable data pipelines using Azure Databricks. Your role will involve building and optimizing ETL/ELT processes for structured and unstructured data, collaborating with data scientists, analysts, and business stakeholders, integrating Databricks with Azure Data Lake, Synapse, Data Factory, and Blob Storage, developing real-time data streaming pipelines, and managing data models and data warehouses. Additionally, you will optimize performance, manage resources, ensure cost efficiency, implement best practices for data governance, security, and quality, troubleshoot and improve existing data workflows, contribute to architecture and technology strategy, mentor junior team members, and maintain documentation.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, IT, or a related field, along with 5+ years of Data Engineering experience (minimum 2+ years with Databricks). Strong expertise in Azure cloud services (Data Lake, Synapse, Data Factory), proficiency in Spark (PySpark/Scala) and big data processing, experience with Delta Lake, Structured Streaming, and real-time pipelines, strong SQL skills, an understanding of data modeling and warehousing, familiarity with DevOps tools such as CI/CD, Git, Terraform, and Azure DevOps, and excellent problem-solving and communication skills are essential.

Preferred qualifications include Databricks certification (Associate/Professional), experience with machine learning workflows on Databricks, knowledge of data governance tools like Purview, experience with REST APIs, Kafka, and Event Hubs, and cloud performance tuning and cost optimization experience.

Join us to be part of a supportive and collaborative team, work with a growing company in the exciting BI and data industry, enjoy a competitive salary and performance-based bonuses, and benefit from opportunities for professional growth and development. If you are interested in this opportunity, please send your resume to hr@exillar.com and fill out the form at https://forms.office.com/r/HdzMNTaagw.
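One common pattern behind the Delta Lake and real-time pipeline requirements above is an incremental upsert into a curated table; a minimal sketch follows, with the staging table, target table, and join key being hypothetical assumptions.

```python
# Delta Lake upsert (MERGE) sketch on Azure Databricks.
# Table names and the join key are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

# Latest batch of changed records, e.g. landed by an upstream ingestion job.
updates = spark.read.table("staging.customer_updates")

target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # overwrite existing rows with the newest values
    .whenNotMatchedInsertAll()   # insert rows seen for the first time
    .execute()
)
```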

Posted 13 hours ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure, platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest in Financial Crime, Financial Risk, and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets as a foundation.

In this role, you will:
- Ingest and provision raw datasets, enriched tables, and curated, re-usable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines to collect data from disparate sources across the enterprise and external sources and deliver it to the data platform.
- Implement Extract, Transform and Load (ETL) workflows, ensuring data availability at each stage in the data flow.
- Identify and onboard data sources, conduct exploratory data analysis, and evaluate modern technologies, frameworks, and tools in the data engineering space.

Core/Must-Have skills:
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (ETL: ODI, SSIS; DB: PL/SQL; AWS Redshift).
- Experience managing data extraction, transformation, and loading from various sources using Oracle Data Integrator and other tools such as SSIS.
- Database design and dimensional modeling using Oracle PL/SQL and Microsoft SQL Server.
- Advanced working SQL knowledge and experience with relational and NoSQL databases.
- Strong analytical and critical thinking skills, expertise in data modeling and database design, and experience building and optimizing data pipelines.

Good to have:
- Experience in Financial Crime, Financial Risk, and Compliance technology transformation domains.
- Certification on any cloud tech stack, preferably Microsoft Azure.
- In-depth knowledge and hands-on experience with data engineering, data warehousing, and Delta Lake on-prem and on cloud platforms.
- Ability to script, code, query, and design systems for maintaining an Azure/AWS lakehouse, ETL processes, business intelligence, and data ingestion pipelines.

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 14 hours ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

As a dedicated professional at Leighton Asia, you will be responsible for spearheading the development of end-to-end Azure integration solutions in alignment with CIMIC's OneIT Target Reference Architecture. Your key responsibilities include ensuring compliance with CIMIC's security standards, constructing data and integration solutions that adhere to CIMIC's Target Reference Architecture, and hands-on development and unit testing of the solutions. You will play a crucial role in following CIMIC's path-to-production environments, providing technical support during test phases, and conducting handover sessions to transition solutions to BAU support. Additionally, you will be involved in deploying solutions to PROD through CIMIC's CAB process and offering post go-live hypercare support. Your expertise will be instrumental in preparing technical specification documents, obtaining code review approvals, and ensuring compliance with standards. Your work will focus on the development and maintenance of CIMIC's data and integration solutions while aligning with CIMIC's Target Reference Architecture, strategic roadmap, and emerging technologies.

To excel in this role, you should hold a degree in ICT, Engineering, or Science in a relevant discipline and possess at least 2-3 years of experience in a similar capacity. Your qualifications should include data and integration solution design skills, familiarity with Azure projects related to data ingestion, storage, transformation/enrichment, data lake solutions, point-to-point integrations, real-time and batch integrations, and an understanding of security compliance requirements. A solid grasp of Azure standards applicable to data and integration solutions, experience collaborating with onshore and offshore teams, and a commitment to architectural design and solutions are essential.

Moreover, your technical proficiency should cover two of the three pillars of technical skills outlined below:

Tech Pillar #1 - Ingestion:
- API Manager, REST APIs, Web API
- Azure Logic Apps, Function Apps (C# - no, Python - 5 years)
- Azure Data Factory and Integration Runtimes
- Azure Storage, Delta Lake
- Creation of Azure GitHub repos, build and release pipelines
- Azure Key Vault
- Service principals, security, and Azure AD groups/Entra ID

Tech Pillar #2 - Process/Enrichment:
- Azure Databricks
- Azure SQL and SQL MI (with SQL skills)
- Azure Data Factory (including orchestration of Databricks notebooks)
- Azure Storage, Delta Lake
- Creation of Azure GitHub repos, build and release pipelines
- Azure Key Vault
- Service principals, security, and Azure AD groups/Entra ID

By joining our team, you will play a vital role in delivering innovative solutions that drive the success of our projects while contributing to the growth and development of CIMIC Group.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

coimbatore, tamil nadu

On-site

You will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Azure Databricks. This includes implementing data integration from various sources such as RDBMS, ERP systems, and files. You will design and optimize SQL queries, stored procedures, and relational schemas. Additionally, you will build stream-processing systems using technologies like Apache Storm or Spark Streaming, and utilize messaging systems like Kafka or RabbitMQ for data ingestion. Performance tuning of Spark jobs for optimal efficiency will be a key focus area.

Collaboration with cross-functional teams to deliver high-quality data solutions is essential in this role. You will also lead and mentor a team of data engineers, fostering a culture of continuous improvement and Agile practices. Key skills required for this position include proficiency in Apache Spark and Azure Databricks, strong experience with the Azure ecosystem and Python, and working knowledge of PySpark (nice to have). Experience in data integration from varied sources, expertise in SQL optimization and stream-processing systems, familiarity with Kafka or RabbitMQ, and the ability to lead and mentor engineering teams are also crucial, as is a strong understanding of distributed computing principles.

To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

You will be responsible for designing, developing, and implementing data pipelines using Azure Data Factory. Your primary objective will be to efficiently extract, transform, and load data from diverse sources into Azure Data Lake Storage (ADLS). Additionally, you may have the opportunity to work with Azure Databricks, Python, and PySpark to expand your capabilities in this role.

Posted 1 day ago

Apply

5.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP systems, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services such as AWS or Azure Databricks are essential.

The role requires the ability to lead a team efficiently, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the Data Engineer category and is also suitable for ML/AI Engineers, Data Scientists, and Software Engineers.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Data Scientist, you will be responsible for analyzing complex data using statistical and machine learning models to derive actionable insights. You will use Python for data analysis and visualization, working with technologies such as APIs, Linux, databases, big data technologies, and cloud services. Additionally, you will develop innovative solutions for natural language processing and generative modeling tasks, collaborating with cross-functional teams to understand business requirements and translate them into data science solutions. You will work in an Agile framework, participating in sprint planning, daily stand-ups, and retrospectives. Furthermore, you will research, develop, and analyze computer vision algorithms in areas related to object detection, tracking, product identification and verification, and scene understanding, ensuring model robustness, generalization, accuracy, testability, and efficiency. You will also write product or system development code, design and maintain data pipelines and workflows within Azure Databricks for optimal performance and scalability, and communicate findings and insights effectively to stakeholders through reports and visualizations.

To qualify for this role, you should have a Master's degree in Data Science, Statistics, Computer Science, or a related field, and over 5 years of proven experience in developing machine learning models, particularly for time series data within a financial context. Advanced programming skills in Python or R, with extensive experience in libraries such as Pandas, NumPy, and Scikit-learn, are required, along with comprehensive knowledge of AI and LLM technologies and a track record of developing applications and models. Proficiency in data visualization tools like Tableau, Power BI, or similar platforms is essential, as are exceptional analytical and problem-solving abilities, meticulous attention to detail, and superior communication skills for presenting complex findings clearly and concisely. Extensive experience in Azure Databricks for data processing, model training, and deployment is preferred, along with proficiency in Azure Data Lake and Azure SQL Database for data storage and management. Experience with Azure Machine Learning for model deployment and monitoring, as well as an in-depth understanding of Azure services and tools for data integration and orchestration, will be beneficial for this position.

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

We are seeking a Senior Data Engineer proficient in Azure Databricks, PySpark, and distributed computing to create and enhance scalable ETL pipelines for manufacturing analytics. You will work with industrial data to support both real-time and batch processing needs.

Your role will involve constructing scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark. You will be responsible for data pre-processing tasks such as cleaning, transformation, deduplication, normalization, encoding, and scaling to guarantee high-quality input for downstream analytics. Designing and managing cloud-based data architectures, such as data lakes, lakehouses, and warehouses following the Medallion Architecture, will also be part of your duties. You will deploy and optimize data solutions on Azure, AWS, or GCP, focusing on performance, security, and scalability; develop and optimize ETL/ELT pipelines for structured and unstructured data sourced from IoT, MES, SCADA, LIMS, and ERP systems; and automate data workflows using CI/CD and DevOps best practices for security and compliance. Monitoring, troubleshooting, and enhancing data pipelines for high availability and reliability, as well as utilizing Docker and Kubernetes for scalable data processing, will be key aspects of your role, along with collaboration with automation teams for effective project delivery.

The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (IIT graduates specifically required) and have at least 4 years of experience in data engineering with a focus on cloud platforms like Azure, AWS, or GCP. Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, along with expertise in relational, time series, and NoSQL databases, is necessary. Experience with containerization tools like Docker and Kubernetes, strong analytical and problem-solving skills, familiarity with MLOps and DevOps practices, excellent communication and collaboration abilities, and the flexibility to adapt to a dynamic startup environment are desirable qualities for this role.
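The pre-processing duties listed above (cleaning, deduplication, normalization) typically map to a bronze-to-silver step in a Medallion-style lakehouse; below is a hedged PySpark sketch in which the table names and sensor columns are assumptions for illustration.

```python
# Bronze -> silver cleaning step in a Medallion-style lakehouse (sketch).
# Table names and sensor columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("sensor-silver").getOrCreate()

bronze = spark.read.table("bronze.sensor_readings")

# Keep the latest record per (device_id, reading_ts) to deduplicate replayed events.
latest = Window.partitionBy("device_id", "reading_ts").orderBy(F.col("ingest_ts").desc())

silver = (
    bronze
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("temperature_c", F.col("temperature_raw").cast("double"))  # normalize units/types
    .filter(F.col("temperature_c").between(-50, 150))                      # drop implausible readings
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.sensor_readings")
```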

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

Your Responsibilities:
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL).
- Collaborate with solution teams and Data Architects to implement data strategies, build data flows, and develop logical/physical data models.
- Work with Data Architects to define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Engage in hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Proactively and independently address project requirements and articulate issues and challenges to reduce project delivery risks.

Your Profile:
- Bachelor's degree in computer/data science or related technical experience.
- 7+ years of hands-on relational, dimensional, and/or analytic experience utilizing RDBMS, dimensional, and NoSQL data platform technologies, as well as ETL and data ingestion protocols.
- Demonstrated experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts.
- Proficiency in metadata management, data modeling, and related tools (e.g., Erwin, ER/Studio).
- Experience with Azure/Azure Databricks services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse, and Azure Databricks) preferred; experience with SAP Datasphere is a plus.
- Experience in team management, communication, and presentation.
- Understanding of agile delivery methodology and experience working in a Scrum environment.
- Ability to translate business needs into data vault and dimensional data models supporting long-term solutions.
- Collaborate with the Application Development team to implement data strategies and create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects, and maintain them along with corresponding metadata.
- Develop best practices for standard naming conventions and coding practices to ensure data model consistency, and recommend opportunities for data model reuse in new environments.
- Perform reverse engineering of physical data models from databases and SQL scripts, evaluate data models and physical databases for variances and discrepancies, and validate business data objects for accuracy and completeness.
- Analyze data-related system integration challenges and propose appropriate solutions, and develop data models according to company standards.
- Guide System Analysts, Engineers, Programmers, and others on project limitations and capabilities, performance requirements, and interfaces.
- Review modifications to existing data models to improve efficiency and performance, and examine new application designs, recommending corrections as needed.

#IncludingYou
Diversity, equity, inclusion, and belonging are cornerstones of ADM's efforts to continue innovating, driving growth, and delivering outstanding performance. ADM is committed to attracting and retaining a diverse workforce and creating welcoming, inclusive work environments that enable every ADM colleague to feel comfortable, make meaningful contributions, and grow their career. ADM values the unique backgrounds and experiences that each person brings to the organization, understanding that diversity of perspectives makes us stronger together. For more information regarding ADM's efforts to advance Diversity, Equity, Inclusion & Belonging, please visit the website: Diversity, Equity and Inclusion | ADM.

About ADM
At ADM, the power of nature is unlocked to provide access to nutrition worldwide. With industry-advancing innovations, a comprehensive portfolio of ingredients and solutions catering to diverse tastes, and a commitment to sustainability, ADM offers customers an edge in addressing nutritional challenges. As a global leader in human and animal nutrition and the premier agricultural origination and processing company worldwide, ADM's capabilities in insights, facilities, and logistical expertise are unparalleled. From ideation to solution, ADM enriches the quality of life globally. Learn more at www.adm.com.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are an exceptional, innovative, and passionate individual seeking to grow with NTT DATA, a trusted global innovator of business and technology services. As a Systems Integration Senior Analyst based in Hyderabad, Telangana (IN-TG), India (IN), you will join a forward-thinking organization that values inclusivity and adaptability. The role requires the following hands-on experience:
- At least 5 years of overall experience, with a minimum of 2 years in Azure Databricks.
- Proficiency in Python/PySpark.
- Strong hands-on experience in SQL, including MS SQL Server, SSIS, stored procedures, and views.
- Experience in ETL testing, with a good understanding of all testing concepts.
- Familiarity with Agile methodologies.
- Excellent communication skills to effectively handle client calls.

About NTT DATA: NTT DATA is a $30 billion global leader in business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA fosters innovation, optimization, and transformation for long-term success. With diverse experts in over 50 countries and a robust partner ecosystem, our services encompass business consulting, data and artificial intelligence, industry solutions, application development, infrastructure management, and connectivity. NTT DATA is at the forefront of digital and AI infrastructure globally, ensuring organizations and societies transition confidently into the digital future. As part of the NTT Group, we invest over $3.6 billion annually in R&D to drive sustainable progress. Learn more about us at us.nttdata.com.

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

pune, maharashtra

On-site

The ideal candidate for this position should have 8-12 years of experience and possess a strong understanding of, and hands-on experience with, Microsoft Fabric. You will be responsible for designing and implementing end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes, and for developing scalable and efficient data architectures that support large-scale data processing and analytics workloads. Ensuring high performance, security, and compliance within Azure data solutions will be a key aspect of this role. You should have knowledge of lakehouse and warehouse techniques, along with experience implementing them. Additionally, you will be required to evaluate and select appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Unity Catalog, and Azure Data Factory; deep knowledge and hands-on experience with these Azure data services are essential.

Collaborating closely with business and technical teams to understand and translate data needs into robust and scalable data architecture solutions will be part of your responsibilities. You should also have experience with data governance, data privacy, and compliance requirements, along with excellent communication and interpersonal skills for effective collaboration with cross-functional teams. In this role, you will provide expertise and leadership to the development team implementing data engineering solutions, work with Data Scientists, Analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements, and optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability.

Experience in programming languages such as SQL, Python, and Scala is required. Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms is preferred, and familiarity with Azure DevOps and CI/CD pipeline development is beneficial. An in-depth understanding of database design principles and distributed processing of big data batch or streaming pipelines is essential, as is knowledge of data visualization tools such as Power BI and Tableau, along with data modeling and strong analytics skills. You should be able to convert OLTP data structures into a star schema, and ideally have dbt and data modeling experience. A problem-solving attitude, self-motivation, attention to detail, and effective task prioritization are essential qualities for this role. At Hitachi, attitude and aptitude are highly valued, as collaboration is key. While not all skills are required, experience with Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala, data modeling, Power BI, and database migration is desirable, and designing conceptual, logical, and physical data models using tools like ER/Studio and Erwin is a plus.
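To make the OLTP-to-star-schema conversion mentioned above concrete, here is a hedged PySpark sketch that derives one dimension and one fact table; the source tables, columns, and surrogate-key approach are illustrative assumptions rather than details from the posting.

```python
# Sketch: deriving a star schema (one dimension + one fact) from OLTP-style tables.
# Source and target table names and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-build").getOrCreate()

orders = spark.read.table("oltp.orders")        # order_id, customer_id, order_ts, amount
customers = spark.read.table("oltp.customers")  # customer_id, name, city, segment

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    customers.dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: one row per order, referencing the dimension by surrogate key.
fact_orders = (
    orders
    .join(dim_customer.select("customer_id", "customer_sk"), "customer_id", "left")
    .select(
        "order_id",
        "customer_sk",
        F.to_date("order_ts").alias("order_date"),
        "amount",
    )
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dw.dim_customer")
fact_orders.write.format("delta").mode("overwrite").saveAsTable("dw.fact_orders")
```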

Posted 2 days ago

Apply

5.0 - 12.0 years

0 Lacs

noida, uttar pradesh

On-site

You are a seasoned Delivery Lead specializing in Azure Integration Services, with over 12 years of experience managing and delivering enterprise-grade Azure projects, including implementations, migrations, and upgrades. As a strategic leader, you bring in-depth expertise in Azure services and a proven track record of managing enterprise customers and driving project success across Azure Integration and Data platforms.

Your key responsibilities include leading end-to-end delivery of Azure integration, data, and analytics projects while ensuring adherence to scope, timeline, and budget. You will plan and manage execution roadmaps, define milestones, handle dependencies, and oversee enterprise-level implementations, migrations, and upgrades using Azure services, ensuring compliance with best practices in security, performance, and governance.

In terms of customer and stakeholder engagement, you will collaborate with enterprise customers to understand their business needs and translate them into technical solutions, serve as a trusted advisor who aligns technology with business objectives, and manage stakeholders including business users, architects, and engineering teams.

Your technical leadership responsibilities include defining and guiding architecture, design patterns, and best practices for Azure Integration Services; delivering integration solutions using services such as Logic Apps, APIM, Azure Functions, Event Grid, and Service Bus; leveraging ADF, Azure Databricks, and Synapse Analytics for data processing and analytics; and promoting an automation and DevOps culture within the team.

As the Delivery Lead, you will lead a cross-functional team of Azure developers, engineers, and architects, provide technical mentorship, and drive team performance. You will also coordinate with Microsoft and third-party vendors to ensure seamless delivery, and support pre-sales activities by contributing to solution architecture, proposals, and effort estimation.

To excel in this role, you must possess deep expertise in Azure Integration Services, hands-on experience with Azure App Services, microservices architecture, and serverless solutions, and proficiency in data platforms such as Azure Data Factory, Azure Databricks, Synapse Analytics, and ADLS Gen2. A solid understanding of Azure security and governance tools is essential, along with experience in DevOps tools like Azure DevOps, CI/CD, Terraform, and ARM templates.

In terms of professional experience, you should have at least 10 years in IT with a minimum of 5 years in Azure integration and data platforms, a proven track record of leading enterprise migration and implementation projects, sound knowledge of hybrid, on-prem, and cloud-native integration architectures, and experience delivering projects using Agile, Scrum, and DevOps frameworks. Strong leadership and stakeholder engagement abilities, effective problem-solving skills, and excellent verbal and written communication, presentation, and documentation skills are required.

Preferred qualifications include Microsoft certifications in Azure Solutions Architecture, Integration Services, or Data Engineering; experience integrating with SAP, Salesforce, or other enterprise applications; and awareness of AI/ML use cases within Azure's data ecosystem.

This role is primarily based in Noida with a hybrid work model, and you should be willing to travel for client meetings as required.

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Senior Data Engineer (Azure) at Fractal, you will be an integral part of large-scale client business development and delivery engagements. You will develop the software and systems needed for end-to-end execution of large projects, working across all phases of the SDLC and applying software engineering principles to build scaled solutions, while building the knowledge base required to deliver increasingly complex technology projects.

To be successful in this role, you should hold a bachelor's degree in Computer Science or a related field with 5-10 years of technology experience, and have strong experience in system integration, application development, or data warehouse projects across technologies used in the enterprise space. Your software development experience should include working with object-oriented languages and frameworks such as Python and PySpark, and you should have expertise in relational and dimensional modeling, including big data technologies.

Expertise in Microsoft Azure is mandatory for this role, including components such as Azure Databricks, Azure Data Factory, Azure Data Lake Storage, Azure SQL, HDInsight, and ML Service. Proficiency in Python and Spark is required, along with a good understanding of enabling analytics using cloud technology and MLOps. Experience with Azure infrastructure and Azure DevOps will be a strong plus. You should have a proven track record of keeping existing technical skills up to date and developing new ones, so you can contribute effectively to deep architecture discussions around systems and applications in the cloud (Azure).

If you are an extraordinary developer who loves pushing boundaries to solve complex business problems with creative solutions, and you are a forward thinker and self-starter, then this role at Fractal is the perfect opportunity for you. Join us to work with happy, enthusiastic over-achievers and experience rapid growth in your career. If this opportunity is not the right fit for you currently, you can express interest in future opportunities by clicking "Introduce Yourself" in the top-right corner of the page or creating an account to set up email alerts for new job postings that match your interests.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

pune, maharashtra

On-site

You will be responsible for architecting data warehousing and business intelligence solutions to address cross-functional business challenges. This will involve interacting with business stakeholders to gather requirements and deliver comprehensive data engineering, data warehousing, and analytics solutions, and collaborating with other technology teams to extract, transform, and load data from diverse sources.

You should have a minimum of 5-8 years of end-to-end data engineering development experience, preferably across industries such as Retail, FMCG, Manufacturing, Finance, and Oil & Gas. Experience in functional domains like Sales, Procurement, Cost Control, Business Development, and Finance is desirable. You are expected to have 3 to 10 years of experience in data engineering projects using Azure or AWS services, with hands-on expertise in data transformation, processing, and migration using tools such as Azure Data Lake Storage, Azure Data Factory, Databricks, AWS Glue, Redshift, and Athena. Familiarity with MS Fabric and its components will be advantageous, along with experience working with different source and target systems such as Oracle Database, SQL Server, Azure Data Lake Storage, and ERP, CRM, and SCM systems. Proficiency in reading data from sources via APIs/web services and using APIs to write data to target systems is essential. You should also have experience in data cleanup, data cleansing, and optimization tasks, including working with non-structured data sets in Azure. Knowledge of analytics tools like Power BI and Azure Analysis Services, as well as exposure to private and public cloud architectures, will be beneficial. Excellent written and verbal communication skills are crucial for this role. Ideally, you hold an M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS degree.

Key requirements include expertise in MS Azure Data Factory, Python and PySpark coding, Synapse Analytics, Azure Function Apps, Azure Databricks, AWS Glue, Athena, Redshift, and Databricks PySpark, as well as exposure to integration with various applications and systems (ERP, CRM, SCM, web apps) using APIs, cloud and on-premise systems, databases, and file systems. The role requires a minimum of three full-cycle data engineering implementations (5-10 years of experience) with a focus on building data warehouses and implementing data models. Exposure to the consulting industry is mandatory, along with strong verbal and written communication skills.

Your primary skills should encompass data engineering development; cloud engineering with Azure or AWS; data warehousing and BI solutions architecture; programming in Python and PySpark; data integration across various systems; consulting experience; ETL and data transformation; and knowledge of cloud architecture. Additionally, familiarity with MS Fabric, handling non-structured data, data cleanup and optimization, APIs/web services, data visualization, and relevant industry and functional knowledge will be advantageous. The compensation package ranges from INR 12-28 lpa, subject to the candidate's performance and experience level.
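The API-based ingestion described above can be sketched roughly as follows; the endpoint URL, authentication token, pagination scheme, and target table are hypothetical, and a real implementation would typically pull secrets from Azure Key Vault rather than hard-code them.

```python
# Sketch: pull records from a REST API and land them as a Delta table for downstream ETL.
# The endpoint, token, pagination scheme, and field names are hypothetical placeholders.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest").getOrCreate()

def fetch_all(url: str, token: str) -> list[dict]:
    """Follow simple page-number pagination until the API returns no rows."""
    rows, page = [], 1
    while True:
        resp = requests.get(
            url,
            params={"page": page},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            return rows
        rows.extend(batch)
        page += 1

records = fetch_all("https://api.example.com/v1/invoices", token="<secret-from-key-vault>")
df = spark.createDataFrame(records)  # infer a schema from the JSON records

df.write.format("delta").mode("append").saveAsTable("raw.invoices_api")
```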

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be responsible for designing high-performance data models optimized for reporting in Power BI and SSRS, while ensuring compliance with enterprise architecture and data governance standards. Your expertise will be crucial in all stages of project life cycles, including design, analysis, implementation, and testing. You will research technical issues, analyze business requirements, evaluate design alternatives, and develop technical processes to meet those requirements, as well as estimate and document the work effort required to implement new technical processes and changes. Collaboration with data engineers to create and manage stored procedures and views is essential. You will provide sustainable long-term solutions for business data concerns and offer technical support while mentoring other team members. As a representative of DP World, you are expected to uphold positive behaviors in line with the company's principles, values, and culture, ensure a high level of safety, and adhere to the Code of Conduct and Ethics policies.

In this role, you will develop reports and dashboards using tools like Power BI, SSRS, and Power BI Report Builder; create reusable and scalable data models and/or OLAP cubes; work with technologies such as Azure Data Lake, Azure SQL Database, Azure Databricks, and Azure Analysis Services; and handle advanced DAX queries and functions proficiently. You will also manage KPI metrics, optimize SQL queries, and implement row-level security on reports and dashboards. Moreover, your role will include maintaining Power BI apps, workspaces, and user access; configuring personal and enterprise gateways on the Power BI service; creating automated workflows using Power Automate; and working in an agile environment with activities managed in Azure DevOps. Previous hands-on experience with programming languages like Python and R will be beneficial.

A college/university degree in Computer Science or Software Engineering is preferred, along with Power BI or Azure certifications such as PL-300, AZ-900, AZ-204, AZ-303, AZ-304, or AZ-400. The ideal candidate has 5 to 8 years of relevant experience and skills in conceptual, logical, and physical data modeling; Azure Databricks; Azure SQL Database; Spark SQL; Python; R; data visualization tools like Power BI and SSRS; SSAS; and analytical problem-solving.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Data Engineer, you will be responsible for designing, developing, and delivering ADF pipelines for the Accounting & Reporting stream. Your role will involve creating and maintaining scalable data pipelines using PySpark and ETL workflows in Azure Databricks and Azure Data Factory, and working on data modeling and architecture to optimize data structures for analytics and business requirements. Your responsibilities will include monitoring, tuning, and troubleshooting pipeline performance for efficiency and reliability. Collaboration with business analysts and stakeholders is key to understanding data needs and delivering actionable insights, and collaboration with cross-functional teams to understand data requirements and provide technical advice is equally important. Implementing data governance practices to ensure data quality, security, and compliance with regulations is essential, and you will develop and maintain documentation for data pipelines and architecture. Experience in testing and test automation is necessary for this role.

A strong background in data engineering is required, with proficiency in SQL, Azure Databricks, Blob Storage, Azure Data Factory, and programming languages like Python or Scala, as well as knowledge of Logic Apps and Key Vault. Strong problem-solving skills and the ability to communicate complex technical concepts to non-technical stakeholders are essential.

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Genpact is a global professional services and solutions firm committed to shaping the future. With a workforce of over 125,000 people in more than 30 countries, we are dedicated to creating lasting value for our clients through our innate curiosity, entrepreneurial agility, and deep industry knowledge. At Genpact, we serve and transform leading enterprises, including the Fortune Global 500, by leveraging our expertise in digital operations services, data, technology, and AI.

We are currently seeking applications for the position of Principal Consultant, .NET Developer. In this role, you will play a crucial part in coding, testing, and delivering high-quality deliverables, and you should be enthusiastic about learning new technologies to enhance your skill set.

Responsibilities:
- Collaborate closely with the business unit and team members globally to understand and document requirements.
- Offer innovative solutions to complex business issues using our technology practices.
- Develop business-tier components and relational database models.
- Create interactive web-based user interfaces and integration solutions with third-party data providers and systems.
- Establish unit/integration/functional tests and contribute to the enhancement of our architecture.
- Follow the development process and guidelines, conduct code reviews, troubleshoot production issues, and stay updated on technology trends to recommend improvements.

Minimum Qualifications:
- BE/B.Tech/MCA
- Excellent written and verbal communication skills

Preferred Qualifications/Skills:
- Bachelor's degree in computer science or computer engineering.
- Proficiency in building highly interactive web-based user interfaces using HTML, CSS, JavaScript, and AngularJS.
- Experience in .NET, .NET Core, C#, SQL Server, Python, Azure Databricks, and Snowflake.
- Familiarity with building APIs (REST and GraphQL) and distributed caching (Redis, Cassandra, etc.).
- Working experience with Azure PaaS services and SQL/NoSQL database platforms.
- Strong .NET and C# skills for implementing object- and service-oriented architecture.
- ASP.NET Core experience for web and API development, OIDC and OAuth2 experience, and experience building automated test suites.
- Experience configuring CI/CD pipelines, effective communication skills, Agile development practices, and familiarity with Git source control and GitFlow fundamentals.
- Sitecore CMS experience is a plus.

Job Details:
- Job Title: Principal Consultant
- Primary Location: India-Bangalore
- Schedule: Full-time
- Education Level: Bachelor's/Graduation/Equivalent
- Job Posting: Oct 4, 2024, 6:11:16 AM
- Unposting Date: Nov 3, 2024, 11:59:00 PM
- Master Skills List: Consulting
- Job Category: Full Time

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As an Azure Databricks professional at YASH Technologies, you will draw on your 6-8 years of experience to work with cutting-edge Azure services and Databricks. The role requires a strong understanding of the medallion architecture, as well as proficiency in Python and PySpark.

YASH Technologies is a leading technology integrator that focuses on helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. Our team is comprised of bright individuals who are dedicated to making real positive changes in an increasingly virtual world. Working at YASH, you will have the opportunity to shape your career in an inclusive team environment. We believe in continuous learning and development, leveraging career-oriented skilling models and technology to empower our employees to grow and adapt at a rapid pace. Our Hyperlearning workplace is based on principles such as flexible work arrangements, free spirit, emotional positivity, agile self-determination, trust, transparency, open collaboration, support for realizing business goals, and a stable employment environment with a great atmosphere and ethical corporate culture.

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Tosca Automation Engineer with 6-8 years of experience, you will be responsible for conducting functional and ETL testing across various systems to ensure accuracy and data integrity. Your primary focus will be on automation, ETL testing, SQL proficiency, and hands-on experience with Azure Databricks (ADB). You should have excellent skills in developing automation frameworks using Tosca and a solid understanding of Tosca's tools and architecture.

Your key responsibilities will include developing and maintaining automation frameworks for GUI and API testing using Tosca, working with Tosca TDM/TDS for test data management, conducting hands-on testing with Azure Databricks, and creating custom automation reports using Tosca Custom Reporting. You will collaborate with development teams to ensure proper test coverage and automation integration, as well as set up and maintain Tosca DEX and Jenkins pipelines for continuous testing.

Your primary skills should include strong SQL proficiency for writing and understanding SQL queries, hands-on experience in ETL testing, proficiency in automating GUIs and APIs using Tosca, knowledge of Tosca TDM/TDS, and experience in developing automation frameworks using Tosca. Additionally, experience with Azure Databricks, knowledge of Tosca components and server architecture, the ability to set up and maintain Tosca DEX environments, experience with ADO or Jenkins for continuous integration and delivery pipelines, and skills in configuring and customizing Tosca automation reports are beneficial secondary skills.

Overall, as a Tosca Automation Engineer, you will play a crucial role in ensuring the quality and efficiency of testing processes, automation frameworks, and test data management while collaborating with cross-functional teams to achieve successful automation integration and testing outcomes.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

jaipur, rajasthan

On-site

As a Databricks Engineer specializing in the Azure Data Platform, you will be responsible for designing, developing, and optimizing scalable data pipelines within the Azure ecosystem. You should have hands-on experience with Python-based ETL development, lakehouse architecture, and building Databricks workflows using the bronze-silver-gold data modeling approach.

Your key responsibilities will include developing and maintaining ETL pipelines using Python and Apache Spark in Azure Databricks, implementing and managing bronze-silver-gold data lake layers using Delta Lake, and working with Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for end-to-end pipeline orchestration. It will be crucial to ensure data quality, integrity, and lineage across all layers of the data pipeline; optimize Spark performance; manage cluster configurations; and schedule jobs effectively in Databricks. Collaboration with data analysts, architects, and business stakeholders to deliver data-driven solutions will also be part of your role.

To be successful in this role, you should have at least 3 years of experience with Python in a data engineering environment, 2+ years of hands-on experience with Azure Databricks and Apache Spark, and a strong background in building scalable data lake pipelines following the bronze-silver-gold architecture. In-depth knowledge of Delta Lake, Parquet, and data versioning, along with familiarity with Azure Data Factory, ADLS Gen2, and SQL, is required. Experience with CI/CD pipelines and job orchestration tools such as Azure DevOps or Airflow would be advantageous, and excellent verbal and written communication skills are essential.

Nice-to-have qualifications include experience with data governance, security, and monitoring in Azure, exposure to real-time streaming or event-driven pipelines (Kafka, Event Hub), and an understanding of MLflow, Unity Catalog, or other data cataloging tools.

By joining our team, you will be part of high-impact, cloud-native data initiatives, work in a collaborative and growth-oriented team focused on innovation, and contribute to modern data architecture standards using the latest Azure technologies. If you are ready to advance your career as a Databricks Engineer on the Azure Data Platform, please send your updated resume to hr@vidhema.com. We look forward to hearing from you and potentially welcoming you to our team.
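For context on the bronze-silver-gold layering this listing emphasizes, a silver-to-gold aggregation step might look like the sketch below; the table names, grain, and metrics are assumed for illustration.

```python
# Silver -> gold aggregation in a bronze-silver-gold (medallion) lakehouse (sketch).
# Table names, grain, and metrics are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gold-daily-sales").getOrCreate()

silver_sales = spark.read.table("silver.sales")  # cleaned, deduplicated transactions

gold_daily_sales = (
    silver_sales
    .groupBy(F.to_date("sale_ts").alias("sale_date"), "store_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

(
    gold_daily_sales.write.format("delta")
    .mode("overwrite")
    .partitionBy("sale_date")
    .saveAsTable("gold.daily_store_sales")
)
```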

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. This is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 6-8 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk, and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets as a foundation.

In this role, you will:
- Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise and from external sources, and deliver it to our data platform.
- Build Extract, Transform, Load (ETL) workflows, using both advanced data manipulation tools and programmatic data manipulation throughout our data flows, ensuring data is available at each stage in the flow and in the form needed for each system, service, and customer along that flow.
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/Must-Have Skills:
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (ETL: ODI, SSIS; DB: PL/SQL; and AWS Redshift).
- At least 4 years of experience managing data extraction, transformation, and loading from various sources using Oracle Data Integrator, with exposure to other tools such as SSIS.
- At least 4 years of experience in database design and dimensional modeling using Oracle PL/SQL and Microsoft SQL Server.
- Experience developing ETL processes: ETL control tables, error logging, auditing, data quality, etc., with reusability, parameterization, and workflow design.
- Advanced working SQL knowledge and experience with relational and NoSQL databases, plus working familiarity with a variety of databases (Oracle, SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Expertise in data modeling and database design, with performance-tuning skills.
- Experience with OLAP and OLTP databases, and with data structuring/modeling and an understanding of key data points.
- Experience building and optimizing data pipelines on Azure Databricks, AWS Glue, or Oracle Cloud.
- Experience creating and supporting ETL pipelines and table schemas to accommodate new and existing data sources for the Lakehouse.
- Experience with data visualization (Power BI/Tableau) and SSRS.

Good to Have:
- Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains.
- Certification on any cloud tech stack, preferably Microsoft Azure.
- In-depth knowledge and hands-on experience with data engineering, data warehousing, and Delta Lake, both on-prem (Oracle RDBMS, Microsoft SQL Server) and in the cloud (Azure, AWS, or Oracle Cloud).
- Ability to script (Bash, Azure CLI), code (Python, C#), and query (SQL, PL/SQL, T-SQL), coupled with software version control systems (e.g., GitHub) and CI/CD systems.
- Design and development of systems for the maintenance of the Azure/AWS Lakehouse, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
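For readers mapping these requirements onto concrete code, the following is a minimal sketch of the kind of ingest-and-provision pipeline described above. It assumes a Databricks (or any Spark) environment with Delta Lake support; the source path, column names, and target table (/mnt/raw/transactions/, finance.curated_transactions) are hypothetical placeholders, not taken from the posting.

```python
# Minimal ETL sketch: land a raw CSV feed, apply light cleansing,
# and publish a curated Delta table for downstream consumers.
# Paths, column names, and the target table are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-transactions-etl").getOrCreate()

# Extract: read the raw landing zone (schema inference kept simple here).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/transactions/")
)

# Transform: standardise names, drop obviously bad records, add an audit column.
curated = (
    raw
    .withColumnRenamed("txn_amt", "transaction_amount")
    .filter(F.col("transaction_amount").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())
)

# Load: publish as a curated Delta table.
# The Delta format is available out of the box on Databricks runtimes.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("finance.curated_transactions")
)
```

In practice a pipeline like this would typically be scheduled as a Databricks Job or wired into an orchestration tool; the extract-transform-load structure stays the same either way.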

Posted 3 days ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Pune

Work from Office

Key responsibilities often include designing and implementing data science workflows using Azure Databricks, collaborating on data pipelines, optimizing PySpark jobs, and developing and deploying scalable ML and AI models, including Generative AI and LLMs. Data scientists also drive MLOps practices, analyze data for trends and models, translate requirements into data models, present findings, and stay updated on current research.

Required qualifications typically include relevant experience in applied data science, AI, and machine learning, with experience developing ML models and hands-on experience with LLMs and Generative AI. Technical skills often needed are advanced Python programming with relevant libraries, proficiency in SQL and statistical languages, and deep familiarity with Databricks platform components such as Delta Lake, MLflow, and Unity Catalog. A strong understanding of big data technologies, machine learning techniques, and MLOps principles is also common. Other essential skills include excellent communication, problem-solving, analytical, and mathematical skills.
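Because MLflow and MLOps practices come up repeatedly in roles like this, a minimal experiment-tracking sketch may be useful for preparation. It assumes an environment where mlflow and scikit-learn are installed (both ship with Databricks ML runtimes); the experiment path, run name, and toy dataset are placeholders, not part of the posting.

```python
# Minimal MLflow tracking sketch: train a small classifier and log the run.
# The experiment path and data are placeholders, not from any specific project.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

mlflow.set_experiment("/Shared/databricks-job-prep")  # hypothetical experiment path

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, the headline metric, and the fitted model artifact.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```

On Databricks, runs logged this way appear in the workspace experiment UI, which is usually the starting point for the MLOps discussion (model registry, promotion, and deployment) that such roles expect.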

Posted 3 days ago

Apply

Exploring Azure Databricks Jobs in India

Azure Databricks is a popular cloud-based big data analytics platform that is widely used by organizations in India. As the demand for big data professionals continues to grow, the job market for Azure Databricks roles in India is also expanding rapidly. Job seekers with skills in Azure Databricks can find a plethora of opportunities across various industries in the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Gurgaon

Average Salary Range

The average salary range for Azure Databricks professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

A typical career progression in Azure Databricks may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually moving up to roles like Architect or Data Engineer.

Related Skills

In addition to Azure Databricks, professionals in this field are often expected to have skills in: - Apache Spark - SQL - Python - Data Warehousing concepts - Data visualization tools like Power BI or Tableau

Interview Questions

  • What is Azure Databricks and how does it differ from Apache Spark? (basic)
  • How do you optimize performance in Azure Databricks? (medium)
  • Explain the concept of Delta Lake in Azure Databricks. (medium)
  • What are the different types of clusters in Azure Databricks and when would you use each? (medium)
  • How do you handle security in Azure Databricks? (advanced)
  • Explain the process of job scheduling in Azure Databricks. (medium)
  • What are the advantages of using Azure Databricks over on-premises data processing solutions? (basic)
  • How do you handle schema evolution in Azure Databricks? (medium; illustrated in the sketch after this list)
  • Explain the concept of Structured Streaming in Azure Databricks. (medium)
  • How does Azure Databricks integrate with other Azure services like Azure Data Lake Storage or Azure SQL Database? (advanced)
  • What are the different pricing tiers available for Azure Databricks and how do they differ? (medium)
  • Explain the role of a Workspace in Azure Databricks. (basic)
  • How do you troubleshoot performance issues in Azure Databricks? (medium)
  • What is the role of a Job in Azure Databricks and how do you create one? (basic)
  • How do you monitor and manage costs in Azure Databricks? (medium)
  • Explain the concept of Libraries in Azure Databricks. (basic)
  • How do you implement data encryption at rest and in transit in Azure Databricks? (advanced)
  • What are the different data storage options available in Azure Databricks? (basic)
  • How do you handle data skew in Azure Databricks? (medium)
  • Explain the concept of Autoscaling in Azure Databricks. (medium)
  • How do you perform ETL operations in Azure Databricks? (medium)
  • What are the best practices for data governance in Azure Databricks? (advanced)
  • How do you handle version control in Azure Databricks notebooks? (medium)
  • Explain the concept of Machine Learning in Azure Databricks. (medium)
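
Some of the questions above, notably those on Delta Lake and schema evolution, are easier to answer with a concrete snippet in mind. The following is a minimal sketch, assuming a Spark environment with Delta Lake available (as on Databricks); the table path and data are illustrative only.

```python
# Sketch for the Delta Lake / schema evolution questions above.
# The table path and records are placeholders; on Databricks the `spark`
# session already exists in a notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Initial write: a two-column Delta table.
initial = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
initial.write.format("delta").mode("overwrite").save("/tmp/demo/customers")

# A later batch arrives with an extra column. A plain append would fail with
# a schema mismatch; enabling mergeSchema evolves the table schema instead.
updated = spark.createDataFrame([(3, "carol", "IN")], ["id", "name", "country"])
(
    updated.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save("/tmp/demo/customers")
)

# Existing rows show NULL for the new column; previous table versions remain
# queryable via Delta time travel (versionAsOf) and DESCRIBE HISTORY.
spark.read.format("delta").load("/tmp/demo/customers").show()
```

A useful point to add in an interview answer: mergeSchema only adds new columns; incompatible type changes still require an overwrite with overwriteSchema or an explicit migration.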

Closing Remark

As you explore opportunities in the Azure Databricks job market in India, make sure to brush up on your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and a positive attitude, you can excel in your Azure Databricks career journey. Good luck!
