Experience: 3.0 - 7.0 years
Salary: 0 Lacs
Location: Karnataka
Work mode: On-site
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organizations seeking independent talent. Our client, a global leader in energy management and automation, is currently seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for various data consumers, including BI, analytics, and data science applications. As a Data Engineer, you will work with current technologies such as Apache Spark, Lambda & Step Functions, Glue Data Catalog, and RedShift in the AWS environment.

Key Responsibilities:
- Design and develop new data ingestion patterns into the IntelDS Raw and/or Unified data layers based on the requirements and needs of connecting new data sources or building new data objects, and automate data pipelines to streamline the process.
- Implement DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment, including designing and implementing end-to-end data integration tests and CI/CD pipelines.
- Analyze existing data models and identify performance optimizations for data ingestion and consumption to accelerate data availability within the platform and for consumer applications.
- Support client applications in connecting to and consuming data from the platform, ensuring compliance with guidelines and best practices.
- Monitor the platform, debug detected issues and bugs, and provide the necessary support.

Skills required:
- Minimum of 3 years of prior experience as a Data Engineer, with expertise in Big Data and Data Lakes in a cloud environment.
- Bachelor's or Master's degree in computer science, applied mathematics, or equivalent.
- Proficiency in data pipelines, ETL, and BI, regardless of the technology.
- Hands-on experience with AWS services, including at least three of: RedShift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
- Familiarity with Big Data technologies and distributed systems such as Spark, Presto, or Hive.
- Proficiency in Python for scripting and object-oriented programming.
- Fluency in SQL for data warehousing; experience with RedShift is a plus.
- Strong understanding of data warehousing and data modeling concepts.
- Familiarity with Git, Linux, and CI/CD pipelines is advantageous.
- Strong systems/process orientation with analytical thinking, organizational skills, and problem-solving abilities.
- Ability to self-manage and prioritize tasks in a demanding environment.
- Consultancy orientation and experience, with the ability to form collaborative working relationships across diverse teams and cultures.
- Willingness and ability to train and teach others.
- Proficiency in facilitating meetings and following up on action items.
Posted 2 weeks ago