The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL's data strategy. We are seeking data professionals to work with our colleagues at JLL around the globe to provide solutions, develop new products, and build enterprise reporting and analytics capabilities that reshape the business of Commercial Real Estate using the power of data, and we are just getting started on that journey!
We are looking for a Senior Data Engineer who is a self-starter, comfortable working in a diverse and fast-paced environment, to join our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. It is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, drawing on in-depth knowledge of data, infrastructure, technologies, and data engineering.
As a Data Engineer at JLL Technologies, you will:
Contribute to the design of information infrastructure and data management processes to move the organization to a more sophisticated, agile, and robust target-state data architecture
Develop systems that ingest, cleanse, and normalize diverse datasets; build data pipelines from various internal and external sources; and create structure for previously unstructured data
Interface with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development
Develop a strong understanding of how data flows and is stored across the organization's applications, such as CRM, Broker & Sales, Finance, HR, MDM, ODS, Data Lake, and EDW
Develop data & API solutions that enable non-technical staff to make data-driven decisions
Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities
Develop POCs that help platform architects, product managers, and software engineers validate solution proposals and migration plans
Design and develop data lake and API solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to a modern technology platform
Sounds like you? To apply you need to be:
Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
3-7 years of experience as a data and API developer using Python/Node.js, Azure Functions, Cosmos DB, Azure Event Hubs, Azure Data Lake Storage, Azure Storage Queues, etc. Experience with AI technologies, including machine learning frameworks, AI/ML model deployment, and MLOps.
Excellent technical, analytical, and organizational skills.
Effective written and verbal communication skills, including technical writing.
A hands-on engineer who is curious about technology, adapts quickly to change, and understands supporting technologies in areas such as cloud computing (Azure preferred; AWS, etc.), microservices, streaming technologies, networking, and security.
Hands on experience with event processing and pub-sub consumption patterns.
Good knowledge of and exposure to data models, databases such as SQL Server, NoSQL databases, Elasticsearch, API management, etc.
Working experience with standard API development protocols, API gateways, and tokens.
Experience building and maintaining a data warehouse/data lake in a production environment, with efficient ETL design, implementation, and maintenance.
A team player: reliable, self-motivated, and self-disciplined, capable of executing multiple projects simultaneously in a fast-paced environment while working with cross-functional teams.