Posted: 2 weeks ago
Work from Office | Full Time
DTS R&D Lead Data Engineer

Job Description

You were made to do this work: designing new technologies, diving into data, optimizing digital experiences, and constantly developing better, faster ways to get results. You want to be part of a performance culture dedicated to building technology for a purpose that matters. You want to work in an environment that promotes sustainability, inclusion, wellbeing, and career development. In this role, you'll help us deliver better care for billions of people around the world. It starts with YOU.

In this role, you will:

- Understand business problems, analyze data, and define success criteria.
- Work with engineering and architecture teams on data identification, collection, harmonization, and cleansing for data analysis and preparation.
- Analyze and identify appropriate algorithms for the defined problem statement.
- Analyze additional data inputs and methods that would improve model results, and look for such opportunities.
- Build models that are interpretable, explainable, sustainable at scale, and meet business needs.
- Build visualizations and demonstrate model results to stakeholders and the leadership team.
- Be conversant with Agile methodologies and tools, with a track record of delivering products in a production environment.
- Lead the design of prototypes in our AI factory, partnering with product teams, AI strategists, and other stakeholders throughout the AI development life cycle.
- Lead and transform data science prototypes.
- Mentor a diverse team of junior engineers in machine learning techniques, tools, and concepts, providing guidance and leadership.
- Work with Technical Architects, Product Owners, and business teams to translate requirements into technical designs for data modelling and data integration.
- Demonstrate a deep background in data warehousing, data modelling, and ETL/ELT data processing patterns.
- Design and develop ETL/ELT pipelines with reusable patterns and frameworks (see the illustrative sketch after this list).
- Design and build efficient SQL to process and curate data sets in Azure, Databricks, and Snowflake.
- Design and review data ingestion frameworks leveraging Python, Spark, Azure Data Factory, Snowpipe, etc.
- Design and build Data Quality models and ABCR (audit, balance, control, and reconciliation) frameworks to ingest, validate, curate, and prepare data for consumption.
- Understand the functional domain and business needs, and proactively identify gaps in requirements before implementing solutions.
- Work with platform teams to design and build processes that automate pipeline builds, testing, and code migrations.
- Demonstrate exceptional impact in delivering projects, products, and/or platforms: scalable data processing and application architectures, technical deliverables, and delivery throughout the project lifecycle.
- Provide design and guiding principles for building data models and semantic models in Snowflake, enabling true self-service.
- Ensure the effectiveness of the ingestion and data delivery frameworks and patterns.
- Build and maintain data development standards and principles, providing guidance and project-specific recommendations as appropriate.
- Be conversant with the DevOps delivery approach and tools, with a track record of delivering products in an agile model.
- Provide insight and direction on the roles and responsibilities required for platform/product operations.
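The posting itself contains no code; the following is a minimal, hypothetical sketch of the metadata-driven ingestion pattern the responsibilities describe, written in Python/PySpark. Every path, table name, and config field here is invented for illustration, and a real framework of this kind would layer in fuller ABCR checks, logging, and orchestration (for example via Azure Data Factory).

    # Minimal, illustrative sketch of a metadata-driven ingestion step (PySpark).
    # All paths, table names, and config fields below are hypothetical.
    from dataclasses import dataclass
    from typing import List

    from pyspark.sql import SparkSession, DataFrame

    @dataclass
    class IngestionConfig:
        source_path: str        # landing-zone location (hypothetical)
        target_table: str       # curated target table (hypothetical)
        file_format: str        # "csv", "parquet", "json", ...
        not_null_columns: List[str]  # stand-in for a fuller quality ruleset

    def ingest(spark: SparkSession, cfg: IngestionConfig) -> None:
        # Read the source as described by the config record rather than
        # hard-coding it, which is the essence of a metadata-driven framework.
        df: DataFrame = (
            spark.read.format(cfg.file_format)
            .option("header", "true")
            .load(cfg.source_path)
        )
        # Simple quality gate: drop rows with nulls in required columns.
        # A production ABCR layer would also audit row counts and reconcile
        # source totals against the target.
        for col in cfg.not_null_columns:
            df = df.filter(df[col].isNotNull())
        df.write.mode("append").saveAsTable(cfg.target_table)

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()
        ingest(spark, IngestionConfig(
            source_path="/landing/sales/",        # hypothetical
            target_table="curated.sales_orders",  # hypothetical
            file_format="parquet",
            not_null_columns=["order_id", "order_date"],
        ))

Driving the pipeline from a config record (which could equally be a row in a metadata table or a JSON file) is what lets one pipeline serve many sources, the "reusable patterns and frameworks" the role calls for.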
About You

You perform at the highest level possible, and you appreciate a performance culture fueled by authentic caring. You want to be part of a company actively dedicated to sustainability, inclusion, wellbeing, and career development. You love what you do, especially when the work you do makes a difference. At Kimberly-Clark, we're constantly exploring new ideas on how, when, and where we can best achieve results. When you join our team, you'll experience Flex That Works: flexible (hybrid) work arrangements that empower you to have purposeful time in the office and partner with your leader to make flexibility work for both you and the business. In one of our technical roles, you'll focus on winning with consumers and the market, while putting safety, mutual respect, and human dignity at the center.

To succeed in this role, you will need the following qualifications:

- Bachelor's degree required.
- 10+ years of experience in data engineering and in designing, developing, and building solutions on platforms such as HANA, Snowflake, Databricks, and Teradata.
- Strong proficiency in Azure Data Factory and Databricks.
- Experience designing and building metadata-driven data ingestion frameworks using SAP BO/Data Services, Databricks, Azure Data Factory, SnowSQL, and Snowpipe, as well as building mini-batch, real-time, and event-driven data processing jobs.
- Experience with one or more programming languages such as Python, Node, or Scala is preferred.
- Basic understanding of AI/ML pipelines (bonus).
Kimberly-Clark Corporation
Salary: 35.0 - 40.0 Lacs P.A.
Location: Chennai