
1265 Azure Databricks Jobs - Page 10


6.0 - 10.0 years

0 - 0 Lacs

karnataka

On-site

As a Data Engineer at Gyansys with 6 to 8 years of experience, your primary focus will be on using Azure Databricks, PySpark, and SQL to design and implement Hadoop big data solutions that align with business requirements and project timelines. Your responsibilities include coding, testing, and documenting data systems to build scalable applications for data analytics. You will collaborate with other Big Data developers to ensure the consistency of all data solutions, and partner with the business community to gather requirements, identify training needs, and conduct user training sessions.

You will research technologies and product enhancements to define requirements, address critical issues, and improve the analytics technology stack; evaluate new technologies and upgrades; and support Big Data and batch/real-time analytical solutions. You will also contribute to various projects, either as a technical team member or in a lead role, covering user requirement analysis, software application design and development, testing, automation tools, and research into new technologies and frameworks. Experience with agile methodologies and tools such as Bitbucket, Jira, and Confluence is beneficial.

To excel in this role, you must have hands-on experience with the Databricks stack and data engineering technologies (e.g., Spark, Hadoop, Kafka), proficiency in streaming technologies, and practical skills in Python, SQL, and implementing data warehousing solutions. Expertise in an ETL tool (e.g., SSIS, Redwood) and a good understanding of submitting jobs using Workflows, the API, and the CLI are essential. If you have experience with public cloud providers such as AWS, Azure, or GCP and can adapt to a hybrid work environment, this position offers an exciting opportunity to contribute to cloud migration projects and enhance the organization's analytical capabilities.
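For context on the Workflows/API/CLI requirement, here is a minimal sketch of triggering an existing Databricks job from Python via the Jobs 2.1 REST API; the workspace URL, token, and job ID below are placeholders, not real values:

```python
import requests

# Placeholder workspace URL and personal access token (assumptions for illustration).
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-..."  # store secrets in Azure Key Vault rather than in code

def run_job(job_id: int, params: dict) -> int:
    """Trigger a run of an existing Databricks job and return its run_id."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": job_id, "notebook_params": params},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

# Hypothetical job ID; roughly equivalent to `databricks jobs run-now` in the CLI.
print(run_job(42, {"run_date": "2024-01-01"}))
```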

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

We are seeking a skilled data engineering professional with over 5 years of experience designing and implementing end-to-end ETL solutions using Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks, and SSIS. The ideal candidate has strong proficiency in SQL, REST API integration, and automation through CI/CD pipelines using Azure DevOps and Git, along with a solid understanding of maintaining and optimizing data pipelines, warehouses, and reporting within the Microsoft SQL stack.

As a Senior Data Engineer, you will design and implement comprehensive data solutions using Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks, and SSIS. The role involves developing complex transformation logic using SQL Server, SSIS, and ADF, creating ETL jobs/pipelines to execute those mappings concurrently, and maintaining and enhancing existing ETL pipelines, warehouses, and reporting built on the traditional MS SQL stack.

The role requires a deep understanding of REST API principles and the ability to create ADF pipelines that handle HTTP requests for APIs. Proficiency in best practices for developing and deploying SSIS packages, SQL jobs, and ADF pipelines is essential. You will implement and manage source control practices using Git within Azure DevOps to ensure code integrity and facilitate collaboration, and participate in the development and maintenance of CI/CD pipelines for automated testing and deployment of BI solutions.

Preferred skills include an understanding of the Azure environment and experience developing Azure Logic Apps and Azure Function Apps. Knowledge of code deployment, Git, CI/CD, and deployment of developed ETL code (SSIS, ADF) would be advantageous but is not mandatory. This position is based in Ahmedabad/Pune and requires a BS/MS in Computer Science or another engineering/technical degree.
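The "ADF pipelines that handle HTTP requests" pattern is normally configured in ADF itself as a REST-source Copy or Web activity; as a hedged illustration of the same REST-to-lake idea expressed outside ADF, here is a Python sketch using the azure-storage-file-datalake SDK, with the endpoint, account, and container all hypothetical:

```python
import json
import requests
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical API endpoint and storage account; prefer managed identity in production.
API_URL = "https://api.example.com/v1/orders"
ACCOUNT_URL = "https://mydatalake.dfs.core.windows.net"
CREDENTIAL = "?sv=..."  # SAS token placeholder

def land_api_page(page: int) -> None:
    """Fetch one page from a REST API and land it as raw JSON in ADLS Gen2."""
    payload = requests.get(API_URL, params={"page": page}, timeout=30).json()
    service = DataLakeServiceClient(account_url=ACCOUNT_URL, credential=CREDENTIAL)
    filesystem = service.get_file_system_client("raw")
    file_client = filesystem.create_file(f"orders/page_{page}.json")
    file_client.upload_data(json.dumps(payload), overwrite=True)

land_api_page(1)
```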

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

As a Data Engineer at Srijan, a Material company, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your primary responsibilities include optimizing data pipelines, collaborating with cross-functional teams, and ensuring documentation and knowledge sharing. You will work closely with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Your Microsoft Fabric expertise will be used to build robust pipelines with both batch and real-time processing techniques, integrating with Azure Data Factory for seamless data movement.

Continuous monitoring, enhancement, and optimization of Fabric pipelines, notebooks, and lakehouse artifacts will be essential to ensure performance, reliability, and cost-efficiency. You will collaborate with analysts, BI developers, and data scientists to deliver high-quality datasets and enable self-service analytics via Power BI datasets connected to Fabric lakehouses. Maintaining up-to-date documentation for all data pipelines, semantic models, and data products, and sharing Fabric best practices with junior team members, are integral parts of the role. Your expertise in SQL, data modeling, and cloud architecture design will be crucial in designing modern data platforms using Microsoft Fabric, OneLake, and Synapse.

To excel in this role, you should have 7+ years of experience in the Azure ecosystem, with relevant experience in Microsoft Fabric, data engineering, and data pipeline components. Proficiency in Azure Data Factory, advanced data engineering skills, and strong collaboration and communication abilities are also required. Knowledge of Azure Databricks, Power BI integration, and DevOps practices, and familiarity with OneLake, Delta Lake, and Lakehouse architecture, will be advantageous. Join our awesome tribe at Srijan and leverage your expertise in Microsoft Fabric to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As an Engineer, IT Data at American Airlines, you will be part of a diverse, high-performing team dedicated to technical excellence, focused on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will work in is the area of Information Technology that manages and leverages data as a strategic asset, covering data management, storage, integration, and governance, and leaning into Machine Learning, AI, Data Science, and Business Intelligence.

In this role, you will work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights for better decisions. You will implement data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, and Azure Databricks, as well as traditional data warehouse tools. Your responsibilities span multiple aspects of the development lifecycle, including design, cloud engineering, ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. You will provide technical leadership, collaborate within a team environment, and work independently. You will also be part of a DevOps team that fully owns and supports the product, implementing batch and streaming data pipelines using cloud technologies. As an essential member of the team, you will lead the development of coding standards, best practices, and privacy and security guidelines, and mentor others on technical and domain skills to create multi-functional teams.

Success in this role requires a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. You should have at least 3 years of software solution development experience using agile and DevOps while operating in a product model, plus 3+ years of data analytics experience using SQL and cloud development and data lake experience, preferably with Microsoft Azure. Preferred qualifications include 5+ years of software solution development experience, 5+ years of data analytics experience using SQL, 3+ years of full-stack development experience, and familiarity with Azure technologies. Skills, licenses, and certifications valued in this role include expertise with the Azure technology stack, practical direction within Azure native cloud services, Azure Development Track certification, Spark certification, and a combination of development, administration, and support experience across scripting (Python, Spark, Unix, SQL), data platforms (Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, Snowflake), Azure cloud technologies, CI/CD tools (GitHub, Jenkins, Azure DevOps, Terraform), the BI analytics tool stack (Cognos, Tableau, Power BI, Alteryx, Denodo, Grafana), and data governance and privacy tools (Alation, Monte Carlo, Informatica, BigID).

Join us at American Airlines, where you can explore a world of possibilities, travel the world, grow your expertise, and become the best version of yourself while contributing to the transformation of technology delivery for our customers and team members worldwide.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Hiring for an Azure Developer with an experience range of 4 years and above. Mandatory skills: Azure, Azure Data Factory, Azure Data Lake. Education: BE/B.Tech/MCA/M.Tech/MSc./MS

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be responsible for comprehensive testing of ETL pipelines to ensure data accuracy and completeness across different systems. This includes validating Data Warehouse objects such as fact and dimension tables, designing and executing test cases and test plans for data extraction, transformation, and loading processes, and conducting regression testing to validate enhancements without breaking existing data flows. You will write complex SQL queries for data verification and backend testing, and test data processing workflows in Azure Data Factory and Databricks environments. Collaborating with developers, data engineers, and business analysts to understand requirements and proactively raise defects is a key part of this role. You will also perform root cause analysis for data-related issues, suggest improvements, and create clear, concise test documentation, logs, and reports.

The ideal candidate has strong knowledge of ETL testing methodologies and tools; excellent SQL skills including joins, aggregation, subqueries, and performance tuning; hands-on experience with data warehousing and data models (star/snowflake); and experience in test case creation, execution, defect logging, and closure. Proficiency in regression testing, data validation, and data reconciliation is required, as is a working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks. Experience with test management tools such as JIRA, TestRail, or HP ALM is essential. Nice-to-have qualifications include exposure to automation testing for data pipelines, scripting knowledge in Python or PySpark, an understanding of CI/CD in data testing, and experience with data masking, data governance, and privacy rules.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Systems, or a related field, at least 3 years of hands-on experience in ETL/Data Warehouse testing, excellent analytical and problem-solving skills, strong attention to detail, and good communication skills.
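To make the reconciliation requirement concrete, here is a minimal sketch of a source-versus-target row-count and checksum comparison in PySpark; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()

# Hypothetical source (staging) and target (warehouse) tables.
source = spark.table("staging.orders")
target = spark.table("dw.fact_orders")

# Check 1: row-count parity between source and target.
assert source.count() == target.count(), "Row counts diverge"

# Check 2: content checksum - hash key columns per row, then aggregate and compare.
def checksum(df, key_cols):
    hashed = df.select(F.sha2(F.concat_ws("||", *key_cols), 256).alias("h"))
    # Fold the first 8 hex chars of each hash into a summable integer.
    return hashed.agg(
        F.sum(F.conv(F.substring("h", 1, 8), 16, 10).cast("long"))
    ).first()[0]

cols = ["order_id", "amount"]
assert checksum(source, cols) == checksum(target, cols), "Checksums diverge"
print("Reconciliation passed")
```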

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Job Overview: We are looking for a Microsoft Fabric Data Engineer to join our dynamic team. In this role, you will design, develop, and optimize cutting-edge data solutions using Microsoft Fabric and related technologies, working with technical teams, ensuring best practices, and driving innovation in data engineering, including hands-on development of data pipelines.

Key Responsibilities:
- Work with large datasets to solve complex analytical problems.
- Conduct end-to-end data analyses, including collection, processing, and visualization.
- Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders, to develop data-driven solutions.
- Implement data pipelines, ETL processes, and data models using Microsoft Fabric.
- Develop data lakehouses, Delta Lake tables, and data warehouses.
- Optimize data storage, query performance, and cost efficiency.
- Provide technical knowledge, collaboration, and guidance to other data engineers.
- Implement data security, governance, and compliance with industry standards.

Required Technical Skills:
- 3-5 years of experience in data engineering on Microsoft cloud solutions.
- Expertise in Microsoft Fabric, Azure Data Services, and data integration.
- Experience with Azure Data Factory, Synapse Analytics, Databricks, and DevOps practices.
- Knowledge of data governance, compliance frameworks, and CI/CD pipelines.
- Excellent communication and problem-solving skills.
- Strong proficiency in SQL, Python, and Power BI.
- Experience with data warehousing, ETL processes, data modeling, and dimensional schemas.
- Knowledge of cloud platforms, particularly Microsoft Azure and Fabric.
- Microsoft certifications (e.g., DP-600, DP-203) are highly desirable.

Preferred Qualifications:
- Microsoft Fabric Data Engineer or Fabric Analytics Engineer certification, or similar.
- Bachelor's or Master's degree in computing, engineering, or a relevant area.
- Knowledge of CI/CD pipelines for data engineering workflows, and use of DevOps for development and deployment.
- Agile methodologies such as Scrum or XP.
- Strong analytical and problem-solving skills.
- Experience in AI and ML model deployment is a plus.
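For a flavor of the lakehouse development this role describes, here is a hedged PySpark sketch of cleaning a raw file and saving it as a Delta table, the pattern used in Fabric or Databricks notebooks; the path, columns, and table name are invented for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-load").getOrCreate()

# Hypothetical raw CSVs landed in the lakehouse Files area.
raw = spark.read.option("header", True).csv("Files/raw/sales/2024/*.csv")

cleaned = (raw
           .withColumn("sale_date", F.to_date("sale_date"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .dropDuplicates(["sale_id"]))

# Save as a managed Delta table; partitioning by date keeps later scans cheap.
(cleaned.write.format("delta")
 .mode("overwrite")
 .partitionBy("sale_date")
 .saveAsTable("fact_sales"))
```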

Posted 2 weeks ago

Apply

5.0 - 10.0 years

9 - 18 Lacs

Hyderabad

Hybrid

Experience: 5 to 15 years of hands-on experience in Azure Databricks, PySpark, and cloud.

Knowledge and skills:
- App Dev Language: Azure Databricks, PySpark, and cloud
- Data Management: Database (SQL Server DB), APIs (Kong, APIC, APIM)
- Development & Delivery Methods: Agile (Scaled Agile Framework)
- DevOps and CI/CD: Containers (Azure Kubernetes), CI/CD (Azure DevOps, Git), Scheduling Tools (Azure Scheduler)
- Development Tools & Platforms: IDE (GitHub Copilot, VS Code, IntelliJ), Cloud Platform (Microsoft Azure)
- Security and Monitoring: Secure Coding (Veracode); writing and executing automated tests using Java, JavaScript, Selenium, and a test automation framework

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will analyze data requirements and translate them into effective solutions that align with the organization's overall data strategy, staying current with the latest trends in data engineering and contributing to the continuous improvement of data processes and systems.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in the design and implementation of data pipelines to support data integration and analytics.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Good to have: experience with Microsoft Azure Databricks and Microsoft Power Business Intelligence (BI).
- Strong understanding of data modeling concepts and best practices.
- Experience with ETL processes and data warehousing solutions.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be responsible for analyzing data requirements and translating them into effective solutions that support the organization's data strategy, and you will participate in team meetings to share insights and contribute to the overall success of the data platform initiatives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with the latest trends and technologies in data platforms.
- Assist in documenting data architecture and design processes to ensure clarity and consistency across the team.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Good to have: experience with Microsoft Azure Databricks and Microsoft Power Business Intelligence (BI).
- Strong understanding of data integration techniques and best practices.
- Experience with data modeling and database design principles.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

1 - 5 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Primary Responsibilities:
- Create and maintain data storage solutions, including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain pipelines for data ingestion, processing, and transformation in Azure, and create data models for analytics purposes.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Use Azure Data Factory and Databricks to assemble large, complex data sets.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Ensure data security and compliance.
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable, reliable data platform architectures.

Required skills:
- A blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies

Data engineers should be well versed in coding, Spark core, and data ingestion using Azure, with core Azure data engineering and coding skills (PySpark, Python, and SQL) as well as solid communication skills.

Preferred candidate profile: good exposure to Python, Azure, and SQL; good communication.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtimes
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Reading and writing data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
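As one concrete version of the Databricks-to-Synapse loading step, here is a hedged sketch using the Azure Synapse connector that ships with Azure Databricks (it stages data in ADLS and loads via PolyBase/COPY behind the scenes); the JDBC URL, staging path, and table names are placeholders:

```python
# Runs inside an Azure Databricks notebook, where `spark` is already defined.
df = spark.table("curated.daily_sales")  # hypothetical curated table

(df.write
   .format("com.databricks.spark.sqldw")  # Azure Synapse connector
   .option("url", "jdbc:sqlserver://myws.sql.azuresynapse.net:1433;database=dw")
   .option("tempDir", "abfss://staging@mylake.dfs.core.windows.net/synapse")
   .option("forwardSparkAzureStorageCredentials", "true")
   .option("dbTable", "dbo.DailySales")
   .mode("append")
   .save())
```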

Posted 2 weeks ago

Apply

6.0 - 11.0 years

19 - 27 Lacs

Haryana

Work from Office

About the Company: Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving into an end-to-end decarbonization partner, providing solutions in a just and inclusive manner across clean energy, green hydrogen, value-added energy offerings through digitalisation, storage, and carbon markets that are increasingly integral to addressing climate change.

With a total capacity of more than 13.4 GW (including projects in the pipeline), ReNew's solar and wind energy projects are spread across 150+ sites in 18 states in India, contributing 1.9% of India's power capacity. This has helped avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In over 10 years of operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly, and has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote the sector's growth. ReNew's current group of stockholders includes several marquee investors, among them CPP Investments, Abu Dhabi Investment Authority, Goldman Sachs, GEF SACEF, and JERA. Its mission is to play a pivotal role in meeting India's growing energy needs in an efficient, sustainable, and socially responsible manner. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India.

Job Description - Key responsibilities:
1. Understand, implement, and automate ETL pipelines to strong industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lake/data warehouses) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience with distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. At least limited experience with other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with Jira, and a clear understanding of how Git works
10. Must have 5-7 years of experience

Posted 2 weeks ago

Apply

6.0 - 9.0 years

20 - 25 Lacs

Pune

Work from Office

Skills & Experience:
- Strong hands-on experience with Dynamics 365; understands data structures and business processes without relying on others.
- Experience working with Azure Data Lake and Synapse Link; knows how D365 data is replicated and how to work with it.
- Skilled in Power BI; can build datasets, dashboards, and reports independently.
- Experience in D365 development, to pick up technical work beyond reporting.
- Comfortable working directly with business users to gather requirements and translate them into working solutions.

Behaviours & Approach:
- Self-sufficient; doesn't need to rely on other D365 consultants to understand tables, relationships, or data sources.
- Proactive and comfortable working with non-technical stakeholders.
- Flexible; willing to pivot between reporting and development work based on demand.
- Takes ownership of their work and sees tasks through to delivery.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Telangana

Work from Office

Key Responsibilities:

Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.

Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up to date with Azure data technologies and recommend improvements to enhance data processing capabilities.

Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.

Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.

Collaboration & Communication:
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a team lead or manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).

Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Chennai

Work from Office

Job Title: Fullstack Architect / Application Architect
Experience: 10-15 Years
Location: Remote

A NASDAQ-listed company, known for its prominent position in the food and beverage sector, is seeking a Fullstack Architect with expertise in Java Spring Boot. In this role, you will understand problem statements and conduct architecture assessments, specifically in the manufacturing and warehousing industry. This is an exciting opportunity for an experienced architect eager to work with a leading multinational corporation that has a significant impact on millions of lives worldwide.

Required Skills:
- 10+ years of experience, with a minimum of 5 years as an Architect
- 5+ years of experience in Java and Spring Boot
- Hands-on experience creating monitoring dashboards and conducting infrastructure monitoring using Datadog, Splunk, or Grafana
- Proficiency with AppDynamics and Azure Databricks
- Strong problem-solving skills and the ability to navigate complex organizational environments
- Excellent verbal and written English communication skills

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-Have Skills: Microsoft Azure Databricks
Good-to-Have Skills: Microsoft Azure Architecture
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. A typical day involves collaborating with cross-functional teams to gather insights, analyzing user needs, and translating them into functional specifications. You will engage in discussions to refine application designs and ensure alignment with business objectives, while also addressing challenges that arise during development. Your role will be pivotal in ensuring that the applications developed are user-friendly and effectively meet the organization's needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze requirements for application design.
- Develop and document application specifications and design documents.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks.
- Good to have: experience with Microsoft Azure Architecture.
- Strong understanding of cloud computing concepts and services.
- Experience in application design and development methodologies.
- Familiarity with agile development practices and tools.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-Have Skills: Microsoft Azure Databricks
Good-to-Have Skills: Microsoft Azure Architecture
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve collaborating with various stakeholders to gather and analyze requirements, creating application designs that align with business objectives, and ensuring that the applications are user-friendly and efficient. You will also participate in team meetings to discuss project progress and contribute innovative ideas to enhance application functionality and performance.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with industry trends and technologies.
- Collaborate with cross-functional teams to ensure alignment on project goals and deliverables.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks.
- Good to have: experience with Microsoft Azure Architecture.
- Strong understanding of application design principles and methodologies.
- Experience in developing and deploying applications on cloud platforms.
- Familiarity with programming languages relevant to application development.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

12 - 15 Lacs

Pune, Chennai, Bengaluru

Work from Office

Azure Databricks Developer
Location: Hyderabad, Bangalore, Gurgaon, Chennai, Pune
Contract Duration: 12 Months

Job Description:
- 4+ years of experience as an Azure Databricks ETL developer
- Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services
- Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks and Data Factory
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure

If you are interested in the opportunity, please share the following details, along with your most recent resume, to geeta.negi@compunnel.com: total experience, relevant experience, current CTC, expected CTC, notice period (last working day if you are serving notice), current location, and a self-rating out of 5 for each of your top three skills (mention the skill).

Posted 2 weeks ago

Apply

3.0 - 5.0 years

12 - 20 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Work from Office

Job Description: Data Engineer, TransOrg Analytics

Why would you like to join us? TransOrg Analytics specializes in Data Science, Data Engineering, and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, the US, APAC, and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to learn more about us.

Responsibilities:
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks workflows.
- Develop an integrated data solution in Snowflake to unify data.
- Implement and manage big data solutions using Azure Databricks.
- Design and maintain relational databases using Azure Delta Lake.
- Ensure data quality and integrity through rigorous testing and validation.
- Monitor and troubleshoot data pipelines and workflows to ensure seamless operation.
- Implement data security and compliance measures in line with industry standards.
- Continuously improve data infrastructure (including CI/CD) for scalability and performance.
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into Snowflake, using ETL tools (e.g., ADF, Talend) to automate and manage data workflows.
- Develop and maintain CI/CD pipelines using GitHub and Jenkins for automated deployment of data models and ETL processes, and troubleshoot pipeline issues to ensure smooth deployment and integration.
- Design and implement scalable, efficient data models in Snowflake, and optimize data structures for performance and storage efficiency.
- Collaborate with stakeholders to understand data requirements and ensure data integrity.
- Integrate multiple data sources to create a data lake/data mart.
- Perform data ingestion and ETL processes using SQL, Sqoop, Spark, or Hive.
- Monitor job performance; manage file system/disk space, cluster and database connectivity, and log files; manage backup/security; and troubleshoot various user issues.
- Design, implement, test, and document a performance benchmarking strategy for platforms as well as for different use cases.
- Set up, administer, monitor, tune, optimize, and govern large-scale implementations.
- Drive customer communication during critical events and participate in or lead various operational improvement initiatives.

Qualifications, skill set, and competencies:
- Bachelor's in Computer Science, Engineering, Statistics, Maths, or a related quantitative degree.
- 2-5 years of relevant experience in data engineering, with work on at least one cloud engineering platform: AWS, Azure, GCP, or Cloudera.
- Proven experience as a Data Engineer with a focus on Azure cloud technologies/Snowflake.
- Strong proficiency in Azure Data Factory, Azure Databricks, ADLS, and Azure SQL Database.
- Experience with big data processing frameworks such as Apache Spark, with a strong focus on PySpark, Scala, and Pandas.
- Expert-level proficiency in SQL and experience with data modeling and database design.
- Knowledge of data warehousing concepts and ETL processes.
- Proficiency in Python programming and experience with other data processing frameworks.
- Solid understanding of networking concepts and Azure networking solutions.
- Strong problem-solving skills and attention to detail; excellent communication and collaboration skills.
- Azure Data Engineer certifications AZ-900 and DP-203 (good to have).
- Familiarity with DevOps practices and tools for CI/CD in data engineering.
- Certification as an MS Azure / Databricks Data Engineer (good to have).
- Data ingestion: coding and automating ETL pipelines, both batch and streaming; should have worked with ETL or ELT methodologies using any of the traditional and new-age tech stack: SSIS, Informatica, Databricks, Talend, Glue, DMS, ADF, Spark, Kafka, Storm, Flink, etc.
- Data transformation: experience working with MPPs, big data, and distributed computing frameworks on a cloud or cloud-agnostic tech stack: Databricks, EMR, Hadoop, DBT, Spark, etc.
- Data storage: experience working on data lakes and lakehouse architecture: S3, ADLS, Blob, HDFS.
- DWH: strong experience modelling and implementing data warehousing on technologies such as Redshift, Snowflake, Azure Synapse, BigQuery, and Hive.
- Orchestration and lineage: Airflow, Oozie, etc.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Gurugram

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to developing data solutions that leverage Microsoft Azure Databricks, and the engineer will work closely with data scientists and analytics teams to transform raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. The role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Integrate data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated on the latest trends in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant Azure certification is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
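As a small illustration of the "optimize Spark jobs" responsibility, here is a hedged PySpark sketch of two routine tactics, broadcasting a small dimension table and right-sizing shuffle partitions; the tables and output path are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-tuning").getOrCreate()

# Right-size shuffle parallelism for the cluster instead of the default of 200.
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.table("lake.page_views")   # large fact table (hypothetical)
dims = spark.table("lake.dim_country")   # small dimension table (hypothetical)

# Broadcasting the small side avoids shuffling the large table for the join.
enriched = facts.join(F.broadcast(dims), on="country_code", how="left")

daily = (enriched.groupBy("country_name", "view_date")
         .agg(F.count("*").alias("views")))
daily.write.mode("overwrite").partitionBy("view_date").parquet("/lake/agg/daily_views")
```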

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Gurugram

Work from Office

Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO, CCO, or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
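To ground the RDF/SPARQL requirement, here is a minimal sketch using the rdflib Python library to assemble a tiny graph and query it; the namespace, class, and triples are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/ontology#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A tiny ontology fragment: one class, one instance, one relationship.
g.add((EX.WindFarm, RDF.type, RDFS.Class))
g.add((EX.site42, RDF.type, EX.WindFarm))
g.add((EX.site42, EX.locatedIn, Literal("Gujarat")))

# SPARQL over the in-memory graph: list every wind farm and its location.
query = """
    SELECT ?farm ?loc WHERE {
        ?farm a ex:WindFarm ;
              ex:locatedIn ?loc .
    }
"""
for farm, loc in g.query(query, initNs={"ex": EX}):
    print(farm, loc)
```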

Posted 2 weeks ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Noida

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipelines, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.
- Coding fluency: proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Noida

Work from Office

Role: Senior Databricks Engineer

As a Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable, efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable, reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Noida

Work from Office

Job Title: Sr. Data Engineer, Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO, CCO, or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.

Posted 2 weeks ago

Apply