7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
As a highly skilled Senior Developer with 7 to 10 years of experience, you will be responsible for developing and maintaining scalable data solutions using Databricks Unity Catalog and Azure Data Lake Store to enhance data accessibility and security. Your key responsibilities will include:

- Collaborating with cross-functional teams to integrate Azure DevOps into the development lifecycle, ensuring seamless deployment and continuous integration.
- Utilizing Python and PySpark to design and implement efficient data processing pipelines, optimizing performance and reliability.
- Creating and managing Databricks SQL queries to extract, transform, and load data, supporting business intelligence and analytics initiatives.
- Overseeing the execution of Databricks Workflows, ensuring timely and accurate data processing to meet project deadlines.
- Providing technical expertise and support to team members, fostering a collaborative and innovative work environment.
- Analyzing complex data sets to identify trends and insights, contributing to data-driven decision-making processes.
- Ensuring data quality and integrity by implementing robust validation and error-handling mechanisms.
- Staying updated with the latest industry trends and technologies, applying new knowledge to improve existing systems and processes.
- Collaborating with stakeholders to understand business requirements and translate them into technical specifications.
- Documenting technical designs, processes, and procedures to facilitate knowledge sharing and future reference.
- Supporting the development of data management strategies that align with organizational goals and objectives.
- Contributing to the continuous improvement of development practices, enhancing efficiency and effectiveness.

Qualifications required for this role include:

- Demonstrated proficiency in Databricks Unity Catalog, Azure Data Lake Store, and Azure DevOps.
- Strong programming skills in Python and PySpark for data processing and analysis.
- Experience with Databricks SQL and Workflows for data management and analytics.
- A background in Data Management, Hedge Fund Accounting, or Account Management is a plus.
- Ability to work in a hybrid model, with excellent communication and collaboration skills.

Certifications required for this role:

- Databricks Certified Data Engineer Associate
- Microsoft Certified: Azure Data Engineer Associate
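For orientation, a minimal sketch of the kind of validated PySpark pipeline this listing describes might look like the following. The ADLS path and Unity Catalog table names are hypothetical, and the validation rules are illustrative only:

```python
# Minimal sketch of a PySpark ETL step with validation and quarantine,
# assuming hypothetical ADLS paths and Unity Catalog table names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.format("json").load(
    "abfss://landing@account.dfs.core.windows.net/orders/"
)

# Flag rows against simple rules; coalesce so null checks don't silently
# drop rows from both branches.
rules = F.col("order_id").isNotNull() & (F.col("amount") >= 0)
flagged = raw.withColumn("is_valid", F.coalesce(rules, F.lit(False)))

valid = flagged.filter("is_valid").drop("is_valid")
invalid = flagged.filter(~F.col("is_valid")).drop("is_valid")

# Quarantine bad rows instead of discarding them, so they can be inspected.
valid.write.format("delta").mode("append").saveAsTable("main.silver.orders")
invalid.write.format("delta").mode("append").saveAsTable("main.quarantine.orders_rejected")
```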
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Lead Azure Data Engineer at Syneos Health, you will play a crucial role in utilizing your expertise to drive successful outcomes in the biopharmaceutical solutions industry. Here is what you can expect in this role:

**Role Overview:**
- You will lead a small team of Azure Data Engineers, bringing your 8+ years of overall IT experience to the table.
- Your responsibilities will include database development, data integration initiatives, and building ETL/ELT solutions using Azure data integration tools such as Azure Data Factory, Azure Functions, Logic Apps, and Databricks.
- You will be highly experienced with DLTs, Unity Catalog, Databricks SQL, and Workflows, using hands-on experience in Azure Databricks and PySpark to develop ETL pipelines.
- Strong experience in reading and writing queries in Azure or any relational database is required, along with prior knowledge of SDLC best practices and source control management tools, preferably Azure DevOps with GitHub integration.
- You will work in a production environment for multiple clients in a compliance industry, translating customer needs into technical requirements and contributing to data management framework implementation and adoption.
- Good verbal and written communication skills, excellent interpersonal skills, troubleshooting abilities, and a knack for problem-solving will be key traits for success in this role.

**Key Responsibilities:**
- Lead a small team of Azure Data Engineers
- Develop ETL/ELT solutions using Azure data integration tools
- Use hands-on experience in Azure Databricks and PySpark to build ETL pipelines
- Read and write queries in Azure or any relational database
- Implement SDLC best practices and use source control management tools
- Translate customer needs into technical requirements
- Contribute to data management framework implementation and adoption

**Qualifications Required:**
- 8+ years of overall IT experience, with a minimum of 5 years leading a small team of Azure Data Engineers
- 5+ years of experience in database development and data integration initiatives
- 5+ years of experience building ETL/ELT solutions using Azure data integration tools
- Hands-on experience in Azure Databricks and PySpark
- Strong experience in reading and writing queries in Azure or any relational database
- Prior knowledge of SDLC best practices and source control management tools
- Experience working in a production environment for multiple clients in a compliance industry
- Ability to work in a team and translate customer needs into technical requirements
- Good verbal and written communication skills, excellent interpersonal skills, and problem-solving abilities

Please note that tasks, duties, and responsibilities may vary, and the Company may assign other responsibilities as needed. The Company values diversity and inclusion to create a workplace where everyone feels they belong.
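Since this listing emphasizes DLTs with Unity Catalog, here is a hedged sketch of a Delta Live Tables pipeline with a data quality expectation. The storage path and table names are placeholders, and the code only runs inside a DLT pipeline, where `spark` is provided by the runtime:

```python
# Hedged sketch of a Delta Live Tables (DLT) pipeline; paths and table
# names are assumptions, not from any specific posting.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw trades landed from cloud storage via Auto Loader")
def bronze_trades():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("abfss://landing@account.dfs.core.windows.net/trades/")
    )

@dlt.table(comment="Validated trades with an ingestion timestamp")
@dlt.expect_or_drop("valid_trade_id", "trade_id IS NOT NULL")
def silver_trades():
    return dlt.read_stream("bronze_trades").withColumn(
        "ingested_at", F.current_timestamp()
    )
```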
Posted 6 days ago
10.0 - 15.0 years
18 - 20 Lacs
Noida, Gurugram
Work from Office
Lead the design and implementation of scalable data pipelines using Snowflake and Databricks. Drive data architecture and governance. Build ETL/ELT, optimize models, mentor the team, and ensure security and compliance. Strong Snowflake, Databricks, SQL, and Python skills required.

Required candidate profile: Experienced Data Analytics Lead skilled in Snowflake, Databricks, SQL, and Python. Proven leader in designing scalable pipelines, data governance, ETL/ELT, and team mentoring.
Posted 1 week ago
5.0 - 10.0 years
12 - 19 Lacs
Vadodara
Work from Office
Job Title: Senior Data Engineer

Basic Function
The Senior Data Engineer should be an expert familiar with all areas of data warehousing technical components (e.g., ETL, reporting, data model), connected infrastructure, and their integrations. The ideal candidate will be responsible for developing the overall architecture and high-level design of the data schema environment. The candidate must have extensive experience with star schemas, dimensional models, and data marts. The individual is expected to build efficient, flexible, extensible, and scalable ETL designs and mappings. Excellent written and verbal communication skills are required, as the candidate will work very closely with diverse teams. A wide degree of creativity and latitude is expected. This position reports to the Manager of Data Services.

Typical Requirements
Requires strong technical and analytical skills, data management expertise, and business acumen to achieve results. The ideal candidate should be able to deep dive into data, perform advanced analysis, discover root causes, and design scalable long-term solutions using Databricks, Spark, and related technologies to address business questions. A strong understanding of business data needs and alignment with strategic goals will significantly enhance effectiveness. The role requires the ability to prepare high-level architectural frameworks for data services and present them to business leadership. Additionally, the candidate must work well in a collaborative environment while performing a variety of detailed tasks daily. Strong oral and written communication skills are essential, along with expertise in application design and a deep understanding of distributed computing, data lake architecture, and relational database concepts. This position requires the ability to leverage both business and technical capabilities regularly.

Essential Functions
- Gather, structure, and process data from various sources (e.g., transactional systems, third-party applications, cloud-based financial systems, customer feedback) using Databricks and Apache Spark to enhance business insights.
- Develop and enforce standards, procedures, and quality control measures for data analytics in compliance with enterprise policies and best practices.
- Partner with business stakeholders to build scalable data models and infrastructure, leveraging Databricks' Delta Lake, MLflow, and Unity Catalog.
- Identify, analyze, and interpret complex data sets to develop insightful analytics and predictive models.
- Use Databricks to design and optimize data processing pipelines for large-scale data ingestion, transformation, and storage.
- Ensure data infrastructure completeness and compatibility to support system performance, availability, and reliability requirements.
- Architect and implement robust data pipelines using PySpark, SQL, and Databricks Workflows for automation.
- Provide input on technical challenges and recommend best practices for data engineering solutions within Databricks.
- Design and optimize data models for analytical and operational use cases (see the dimension-upsert sketch after this listing).
- Develop and implement monitoring, alerting, and logging frameworks for data pipelines.
- Lead the architecture and implementation of next-generation cloud-based data solutions.
- Build scalable and reliable data integration pipelines using Databricks, SQL, Python, and Spark.
- Mentor and develop junior team members, fostering a data-driven culture within the organization.
- Develop high-quality, scalable data solutions to support business intelligence, analytics, and data science initiatives.
- Interface with technology teams to extract, transform, and load (ETL) data from diverse data sources into Databricks.
- Continuously improve data processes, automating and simplifying workflows for self-service analytics.
- Work with large, complex data sets to solve non-routine analysis problems, applying advanced machine learning and data processing techniques as needed.
- Prototype, iterate, and scale data analysis pipelines, advocating for improvements in Databricks data structures and governance.
- Collaborate cross-functionally to present findings effectively through data visualizations and executive-level presentations.
- Research and implement advanced analytics, forecasting, and optimization methods to drive business outcomes.
- Stay up to date with industry trends and emerging Databricks technologies to enhance data-driven capabilities.

Specialized Skills or Technical Knowledge
- Bachelor's degree or higher in a quantitative/technical field (e.g., Computer Science, Statistics, Engineering). A Master's degree in Computer Science, Mathematics, Statistics, or Economics is preferred.
- 5+ years of experience in data engineering, business intelligence, or data analytics, with a focus on Databricks and Apache Spark.
- Extensive experience with SQL and Python for developing optimized queries and data transformations.
- Expertise in the Databricks ecosystem, including Delta Lake, MLflow, and Databricks SQL.
- Experience designing and implementing ETL/ELT pipelines on Databricks using Spark and cloud-based data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage).
- Strong data modeling, data warehousing, and data governance knowledge.
- Experience working with structured and unstructured data, including real-time and batch processing solutions.
- Familiarity with data visualization tools such as Power BI, Tableau, or Looker.
- Deep understanding of distributed computing, scalable data architecture, and cloud computing frameworks.
- Hands-on experience with CI/CD pipelines, Infrastructure as Code (IaC), and DevOps practices in data engineering.
- Proven track record of working with cross-functional teams, stakeholders, and senior management to deliver high-impact data solutions.
- Knowledge of machine learning and AI-driven analytics is a plus.
- Strong problem-solving skills and the ability to work independently and in a team-oriented environment.
- Excellent communication skills and the ability to convey complex data concepts to non-technical stakeholders.
- Experience in a franchised organization is a plus.
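For the dimensional-modeling work this listing stresses, a Type 1 dimension upsert with Delta MERGE is a common pattern. A minimal sketch follows, assuming a hypothetical `gold.dim_customer` dimension and a staged updates table:

```python
# Sketch of a Type 1 dimension upsert using the Delta Lake MERGE API;
# table names are placeholders for illustration.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.table("staging.customer_updates")

dim = DeltaTable.forName(spark, "gold.dim_customer")
(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_key = u.customer_key")
    .whenMatchedUpdateAll()      # overwrite changed attributes in place
    .whenNotMatchedInsertAll()   # insert brand-new dimension members
    .execute()
)
```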
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be working as a full-time, on-site Databricks Developer in Hyderabad. Your responsibilities will include designing, developing, and maintaining highly scalable, efficient data pipelines using Databricks, PySpark, and related technologies to process large-scale datasets. Collaboration with cross-functional teams to design and implement data engineering and analytics solutions will be a key part of your role. To excel in this role, you should have expertise in using Unity Catalog and the Metastore, and in optimizing Databricks notebooks, Delta Lake, and DLT pipelines. You should also be experienced in using Databricks SQL, implementing highly configurable data processing solutions, and building solutions for data quality and reconciliation requirements. A solid understanding of data governance frameworks, policies, and best practices for data management, security, and compliance is essential. Additionally, you should have strong knowledge of data modelling techniques and be proficient in PySpark or Spark SQL.
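A minimal sketch of the reconciliation checks this posting asks for might compare row counts and a control total between a source extract and its target table; the table names and the `amount` column are assumptions:

```python
# Minimal reconciliation sketch: compare row counts and a control total
# between a source and target table; names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

src = spark.table("bronze.transactions_raw")
tgt = spark.table("silver.transactions")

src_stats = src.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()
tgt_stats = tgt.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()

# Fail loudly if the load dropped or duplicated data.
if (src_stats["rows"], src_stats["total"]) != (tgt_stats["rows"], tgt_stats["total"]):
    raise ValueError(f"Reconciliation failed: source={src_stats}, target={tgt_stats}")
```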
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement, and cloud teams to gain deep insights into cost optimization, compliance, and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier, and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint in hybrid environments, from on-premises to SaaS to containers to cloud.

We're transforming the software industry. We're Flexera. With more than 50,000 customers across the world, we're achieving that goal. But we know we can't do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we're consistently recognized by Gartner, Forrester, and IDC as a category leader in the marketplace. Learn more at flexera.com.

Senior Manager, Development

We are seeking a dynamic and technically proficient Senior Manager to lead our data engineering initiatives within Flexera's Cloud Cost Optimization space. This role combines hands-on expertise in Databricks with proven leadership in managing high-performing engineering teams. The ideal candidate will be passionate about building scalable data solutions and mentoring teams to deliver impactful business outcomes.

Key Responsibilities

Technical Leadership
- Architect, design, and implement scalable data pipelines using PySpark on the Databricks platform.
- Leverage Delta Lake, Delta Tables, and Databricks SQL to build robust data solutions.
- Develop and maintain batch processing and Spark streaming workflows (see the streaming sketch after this listing).
- Implement orchestration workflows using Databricks Workflows and Azure Data Factory, ensuring automation, monitoring, and alerting.
- Optimize cluster configurations, autoscaling strategies, and cost management within Databricks environments.
- Stay current with emerging technologies and bring innovation to the team.

Team Management
- Manage teams responsible for building microservices-based applications using Golang, React, and Databricks.
- Lead, mentor, and grow a team of data engineers, fostering a culture of collaboration, ownership, and continuous improvement.
- Conduct performance evaluations, provide feedback, and support career development.
- Manage team dynamics and resolve challenges to maintain productivity and engagement.

Cross-Functional Collaboration
- Partner with product managers, architects, and operations teams to align technical deliverables with business goals.
- Identify dependencies, manage risks, and ensure seamless coordination across teams.

Qualifications
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience in software/data engineering, with 3+ years in a managerial role.
- Hands-on experience with Databricks, including pipeline development and orchestration.
- Strong programming skills in Python and PySpark.
- Proven experience in cloud-native development, preferably on AWS.
- Deep understanding of data modelling, ETL best practices, and DevOps for data pipelines.
- Experience deploying Databricks resources using Terraform is a plus.
- Excellent problem-solving, decision-making, and communication skills.

Flexera is proud to be an equal opportunity employer.
Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies, and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing .
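For the streaming workflows named in the Flexera listing above, a hedged sketch of a Spark Structured Streaming job on Databricks using Auto Loader might look like this; the S3 paths, checkpoint location, and table name are placeholders chosen because the listing prefers AWS:

```python
# Hedged sketch of an incremental Auto Loader stream into a Delta table;
# all paths and the target table name are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/costs")  # schema tracking
    .load("s3://billing-landing/costs/")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/costs")  # exactly-once progress
    .trigger(availableNow=True)  # drain the backlog, then stop (batch-style runs)
    .toTable("main.finops.cloud_costs")
)
```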
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
NTT DATA is looking for an Azure Databricks Developer to join their team in Bengaluru, Karnataka, India. As an Azure Databricks Developer, you will need advanced experience with Databricks, including Databricks SQL, DataFrames, and Spark using PySpark or Scala. You should have a deep understanding of Spark architecture and optimization strategies, such as tuning Spark SQL configurations, managing data partitions, handling data skew, and leveraging broadcast joins. Your role will also require proficiency in building and optimizing large-scale data pipelines for ETL/ELT processes using Databricks, and familiarity with Delta Lake and data lake architectures. Strong programming skills in Python, SQL, or Scala are required, along with experience in version control (e.g., Git), CI/CD pipelines, and automation tools. You should have an understanding of Databricks cluster setup, resource management, and cost optimization, as well as experience with query optimization, performance monitoring, and troubleshooting complex data workflows. Familiarity with the Databricks Photon engine and its enablement for accelerating workloads would be a plus.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is dedicated to helping clients innovate, optimize, and transform for long-term success. Visit us at us.nttdata.com.
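The skew-handling and broadcast-join techniques this listing names can be sketched in a few lines of PySpark; the table names are illustrative, and the AQE settings shown are standard Spark SQL configuration keys:

```python
# Sketch of the optimizations named above: Adaptive Query Execution (AQE)
# skew-join handling plus an explicit broadcast hint; names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# AQE can split skewed shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

facts = spark.table("sales.fact_orders")   # large, possibly skewed
dims = spark.table("sales.dim_product")    # small enough to fit on each executor

# Broadcasting the small side avoids shuffling the large fact table.
joined = facts.join(F.broadcast(dims), "product_id")
joined.write.mode("overwrite").saveAsTable("sales.orders_enriched")
```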
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
India
On-site
About Us

Preferred job location: Mumbai, India

When you work with us, you'll find that we deliver results without compromising on respect. We value each other's differences while recognising individual strength. We are the world's leading contract logistics company. We create competitive advantage for our customers through customized warehousing and transportation services. We combine our global scale with local knowledge and sector expertise. At DHL Supply Chain (DSC), there's more to a role than the work we do. Whatever your role is, we never forget that you make us who we are. We work hard to make sure a career with DHL is as satisfying and successful as it can be. Join a supportive work environment where you'll have the tools and training you need to grow and succeed. DHL Supply Chain is Great Place To Work certified.

Responsibilities

The Regional Warehousing IT Consultant is responsible for leading WMS data solutions and supporting MAWM activities. The role involves designing, delivering, and supporting strategic data solutions and structured analysis of business data requirements within the WMS ecosystem strategy in the APAC region, as well as improving and supporting Manhattan Active WMS standardization and serving as a key contact for WMS-related queries and issues.

Overall, the role of a Regional Warehousing IT Solution Consultant is multifaceted, requiring a blend of technical knowledge, project management skills, and the ability to collaborate with various stakeholders. It is a critical role in ensuring that WMS solutions are effectively implemented and aligned with the region's strategic goals, ultimately enhancing operational efficiency and data management within the IT warehousing environment. You will collaborate closely with the IT Solution Lead, APAC WD Director, and APAC WD VP to ensure the successful implementation of the strategic roadmap for the WMS solution in the APAC region.

Data solutions
- Develop and implement data solutions in the warehousing domain, such as standard reporting policy, archiving, data use, real-time operational dashboards, and key red flags, that address specific business requirements within the WMS ecosystem in the APAC region.
- Create and optimize the common data source (e.g., DB views) as a baseline for shared usage, in alignment with the APAC data infrastructure strategy (a sketch of such a shared view follows this listing).
- Play a key role in proposing appropriate data solutions from approved data tech stacks and available data products in the APAC MAWM Data Ecosystem that fit specific business use case requirements.
- Lead and manage UAT activities between stakeholders and delivery partners to ensure data solutions are validated appropriately against business requirements before release into production.

MAWM-related activities
- Issue management and coordination between APAC country leads, the Center of Excellence (CoE), and the vendor.
- Day-to-day support of the Regional Warehousing IT Solution Lead.
- Pioneering new MAWM functionalities that can improve operational efficiency while maximizing WMS capabilities.
- Testing and supporting the regional standard solution.
- Multi-country solution design.

Stakeholder collaboration and communication
- Provide regular communications among stakeholders as well as internal teams to keep everyone updated on progress and address any issues that arise.
- Provide support to the DHL DSC APAC data analytics team during the data mapping phase and data validation for warehousing data product builds/enhancements.
- Coordinate the data standardization approach across the APAC IT Business Unit.

Requirements
- Minimum 5 years of working experience with WMS and data-related solutions (reporting, data analytics, and visualization in the supply chain industry)
- Data-related certifications would be a plus
- Experience/certification with BI tools: Power BI, Qlik Sense
- Experience with SQL/NoSQL databases: Oracle, MS SQL, MySQL
- Experience with Databricks SQL and Snowflake preferred
- MHE solution experience preferred
- Strong experience in the following data disciplines: data management, data governance, data analytics, and data visualization
- Strong problem-solving and continuous-improvement mindset to overcome challenges
- Good interpersonal and communication skills
- Ability to drive regional data strategy
- Clear understanding of how to interpret data and visualize results for warehouse floor users as well as all management levels
- Understanding of the relationship of DB data/records to their business usage

Job applications are open until 31 July 2025.
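The "common data source" this listing describes is often published as a reusable view. A hedged sketch in Databricks SQL (issued here from PySpark) follows; the catalog, schema, table, and column names are all assumptions:

```python
# Hedged sketch of publishing a shared reporting view as a common data
# source; every name in the statement is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE VIEW wms.reporting.outbound_orders_v AS
    SELECT warehouse_id,
           order_id,
           order_status,
           DATE(shipped_at) AS ship_date
    FROM wms.raw.outbound_orders
    WHERE order_status <> 'CANCELLED'
""")
```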
Posted 1 month ago
9.0 - 11.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description

Qualifications:
- Overall 9+ years of IT experience; a minimum of 5+ years managing Data Lakehouse environments preferred, with Azure Databricks, Snowflake, and DBT (nice to have) experience a plus.
- Hands-on experience with data warehousing, data lake/lakehouse solutions, data pipelines (ELT/ETL), SQL, Spark/PySpark, and DBT.
- Strong understanding of data modelling, SDLC, Agile, and DevOps principles.
- Bachelor's degree in management/computer information systems, computer science, accounting information systems, or a relevant field.

Knowledge/Skills:
- Tools and technologies: Azure Databricks, Apache Spark, Python, Databricks SQL, Unity Catalog, and Delta Live Tables. Understanding of cluster configuration and the compute and storage layers.
- Expertise with Snowflake architecture, with experience in design, development, and evolution.
- System integration experience, including data extraction, transformation, and quality controls design techniques.
- Familiarity with data science concepts, as well as MDM, business intelligence, and data warehouse design and implementation techniques.
- Extensive experience with the medallion architecture data management framework as well as Unity Catalog (see the sketch after this listing).
- Data modeling and information classification expertise at the enterprise level.
- Understanding of metamodels, taxonomies, and ontologies, as well as of the challenges of applying structured techniques (data modeling) to less-structured sources.
- Ability to assess rapidly changing technologies and apply them to business needs.
- Ability to translate the information architecture contribution to business outcomes into simple briefings for use by various data-and-analytics-related roles.

About Us

Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About The Team

Datavail's Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, designing and building complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail's team is comprised of not just excellent BI and analytics consultants, but great people as well. Datavail's data intelligence consultants are experienced, knowledgeable, and certified in best-in-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools that best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the data management experts on demand you desire. Datavail's flexible and client-focused services always add value to your organization.
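To illustrate the medallion-plus-Unity-Catalog combination this listing references, here is a minimal sketch of promoting data from a silver schema to a governed gold table; the catalog, schema, table, and group names are assumptions:

```python
# Minimal sketch of Unity Catalog governance from a notebook: three-level
# naming for a gold-layer table plus an access grant; names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS lakehouse.gold")
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.gold.daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue
    FROM lakehouse.silver.orders
    GROUP BY order_date
""")

# Grant read access to a (hypothetical) analyst group via Unity Catalog.
spark.sql("GRANT SELECT ON TABLE lakehouse.gold.daily_revenue TO `bi_analysts`")
```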
Posted 1 month ago
8.0 - 15.0 years
0 Lacs
Karnataka
On-site
This is a hands-on Databricks Senior Developer position within State Street Global Technology Services. We are seeking a candidate with a strong understanding of Big Data technology and significant development expertise with Databricks. In this role, you will be responsible for managing the Databricks platform for the application, implementing enhancements, performance improvements, and AI/ML use cases, as well as leading a team.

As a Databricks Sr. Developer, your responsibilities will include designing and developing custom high-throughput and configurable frameworks/libraries (see the sketch after this listing). You should possess the ability to drive change through collaboration, influence, and the demonstration of proofs of concept. Additionally, you will be accountable for all aspects of the software development lifecycle, from design and coding to integration testing, deployment, and documentation. Collaboration within an agile project team is essential, and you must ensure that best practices and coding standards are adhered to by the team. Providing technical mentoring to the team and overseeing the ETL team are also key aspects of this role.

To excel in this position, the following skills are highly valued: data analysis and data exploration experience, familiarity with agile delivery environments, hands-on development skills in Java, exposure to DevOps best practices and CI/CD (such as Jenkins), proficiency in working within a multi-developer environment using version control (e.g., Git), strong knowledge of Databricks SQL/PySpark for data engineering pipelines, expertise in Unix, Python, and complex SQL, strong critical thinking, communication, and problem-solving abilities, troubleshooting of DevOps pipelines, and experience with AWS services.

The ideal candidate will hold a Bachelor's degree in a computer or IT-related field, with at least 15 years of overall Big Data pipeline experience, 8+ years of hands-on experience with Databricks, and 8+ years of cloud-based development expertise, including AWS services.

Job ID: R-774606
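A "configurable framework" of the kind described above is often a small config-driven runner around Spark; here is a hedged sketch under that assumption, with entirely hypothetical sources and target tables (a production version would live in a tested library):

```python
# Sketch of a minimal config-driven ingestion framework; the PIPELINES
# config, paths, and table names are all illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

PIPELINES = [
    {"source": "s3://raw/positions/", "format": "parquet", "target": "bronze.positions"},
    {"source": "s3://raw/trades/", "format": "json", "target": "bronze.trades"},
]

def run_pipeline(cfg: dict) -> None:
    """Load one configured source into its Delta target table."""
    df = spark.read.format(cfg["format"]).load(cfg["source"])
    df.write.format("delta").mode("append").saveAsTable(cfg["target"])

for cfg in PIPELINES:
    run_pipeline(cfg)
```

Adding a new feed then becomes a one-line config change rather than new pipeline code, which is the main appeal of this design.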
Posted 1 month ago
8.0 - 13.0 years
30 - 45 Lacs
Hyderabad
Work from Office
Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence.

Key Responsibilities:
- Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
- Design, implement, and optimize scalable, high-performance data architectures using Databricks.
- Build and manage data pipelines and workflows within Databricks.
- Ensure that best practices for security, scalability, and performance are followed.
- Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
- Oversee the technical aspects of the migration process, from planning through to execution.
- Create documentation of the architecture, migration processes, and solutions.
- Provide training and support to teams post-migration to ensure they can leverage Databricks.

Preferred candidate profile:

Experience:
- 7+ years of experience in data engineering, cloud architecture, or related fields.
- 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and workload optimization.
- Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
- Experience in end-to-end data migration projects involving large-scale data infrastructure.
- Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
- Expertise in Databricks architecture and best practices for data processing.
- Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components.
- Proficiency in Databricks Asset Bundles.
- Expertise in the design and development of migration frameworks using Databricks.
- Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
- Familiarity with data governance, security, and compliance in cloud environments.
- Solid understanding of cloud-native data solutions and services.
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Remote
Role & responsibilities
- Develop, maintain, and enhance new data sources and tables, contributing to data engineering efforts to ensure a comprehensive and efficient data architecture.
- Serve as the liaison between the Data Engineering team and the airport operations teams, developing new data sources and overseeing enhancements to the existing database; act as one of the main contact points for data requests, metadata, and statistical analysis.
- Migrate all existing Hive Metastore tables to Unity Catalog, addressing access issues and ensuring a smooth transition of jobs and tables (a migration sketch follows this listing).
- Collaborate with IT teams to validate package (gold-level data) table outputs during the production deployment of developed notebooks.
- Develop and implement data quality alerting systems and Tableau alerting mechanisms for dashboards, setting up notifications for various thresholds.
- Create and maintain standard reports and dashboards to provide insights into airport performance, helping guide stations to optimize operations and improve performance.

Preferred candidate profile
- Master's degree/UG
- Minimum 5-10 years of experience with Databricks (Azure)
- Good communication
- Experience developing solutions on a Big Data platform utilizing tools such as Impala and Spark
- Advanced knowledge of/experience with Azure Databricks, PySpark, and (Teradata)/Databricks SQL
- Advanced knowledge of/experience in Python, along with associated development environments (e.g., JupyterHub, PyCharm)
- Advanced knowledge of/experience in building Tableau, QlikView, or Power BI dashboards
- Basic knowledge of HTML and JavaScript
- Immediate joiner

Skills, Licenses & Certifications
- Strong project management skills
- Proficient with Microsoft Office applications (MS Excel, Access, and PowerPoint); advanced knowledge of Microsoft Excel
- Advanced aptitude in problem-solving, including the ability to logically structure an appropriate analytical framework
- Proficient in SharePoint and PowerApps, with the ability to use the Graph API
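For the Hive Metastore to Unity Catalog migration this listing calls out, one common approach uses the SYNC and DEEP CLONE commands; a hedged sketch follows, with catalog, schema, and table names as placeholders:

```python
# Hedged sketch of migrating Hive Metastore tables into Unity Catalog;
# every catalog/schema/table name here is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# SYNC upgrades an external hive_metastore table into a Unity Catalog schema
# in place, keeping the underlying data files where they are.
spark.sql("""
    SYNC TABLE main.airport_ops.flight_events
    FROM hive_metastore.airport_ops.flight_events
""")

# Managed tables can instead be copied over with DEEP CLONE.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.airport_ops.turnaround_times
    DEEP CLONE hive_metastore.airport_ops.turnaround_times
""")
```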
Posted 3 months ago