5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at Zebra, your primary responsibility will be to understand clients' technical requirements and to design and build data pipelines that meet them. You will also oversee the work of other Engineers. Strong verbal and written communication skills are essential, as you will communicate regularly with clients and internal teams. Success in this role requires a deep understanding of databases, SQL, cloud technologies, and modern data integration and orchestration tools such as GCP Dataflow, GKE, Workflows, Cloud Build, and Airflow.

You will play a critical role in designing and implementing data platforms for AI products, developing productized and parameterized data pipelines, and writing efficient data transformation code in languages and frameworks such as Python, Scala, Java, and Dask. You will also build workflows that automate data pipelines using Python, Argo, and Cloud Build, develop data validation tests, and conduct performance testing and profiling of the code.

In this role, you will guide Data Engineers in delivery teams to follow best practices when deploying data pipeline workflows, build data pipeline frameworks to automate high-volume and real-time data delivery, and operationalize scalable data pipelines to support data science and advanced analytics. You will also optimize customer data science workloads and manage cloud service costs and utilization while developing sustainable, data-driven solutions with cutting-edge data technologies.

To qualify for this position, you should have a Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering, along with at least 5 years of experience programming in languages such as Python, Scala, or Go.
You should also have extensive experience with SQL and data transformation, with building distributed systems using open-source technologies such as Spark and Dask, and with relational or NoSQL databases. Experience in AWS, Azure, or GCP environments is highly desirable, as is knowledge of data models in the Retail and Consumer Products industry and of agile project methodologies. Strong communication skills, the ability to learn new technologies quickly and independently, and the capacity to work in a diverse, fast-paced environment are key competencies for this role. You should be able to collaborate in a team setting and achieve stretch goals while teleworking. Travel is not expected for this position.
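To illustrate the kind of data validation test the role calls for, here is a minimal sketch in plain Python. The `Record` schema, the `validate` helper, and the three rules it checks are hypothetical assumptions for illustration only, not Zebra's actual data model; in practice such checks would typically run as a step inside a pipeline framework like Airflow or Argo.

```python
from dataclasses import dataclass


@dataclass
class Record:
    """Hypothetical pipeline row: a retail line item (illustrative only)."""
    sku: str
    quantity: int
    price: float


def validate(records):
    """Split records into valid rows and (index, problems) error tuples.

    A minimal row-level validation pass: real pipelines would add schema,
    referential, and statistical checks, and route failures to a dead-letter
    store rather than a plain list.
    """
    valid, errors = [], []
    for i, rec in enumerate(records):
        problems = []
        if not rec.sku:
            problems.append("missing sku")
        if rec.quantity < 0:
            problems.append("negative quantity")
        if rec.price < 0:
            problems.append("negative price")
        if problems:
            errors.append((i, problems))
        else:
            valid.append(rec)
    return valid, errors
```

A validation step like this is deliberately side-effect free, so it can be unit-tested in isolation and reused across batch and streaming pipelines.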
Posted 2 weeks ago