Requirements
- 4–8 years of experience in building and maintaining data pipelines for enterprise/SaaS applications.
- Strong knowledge of Python.
- Solid understanding of relational SQL and query optimization.
- Experience designing and implementing ETL workflows and data transformation processes (PySpark or similar libraries).
- Deep knowledge of Kafka (or similar pub/sub systems) for data streaming.
- Strong experience with Apache Airflow or similar tools for scheduling, monitoring, and managing complex data pipelines.
- Experience with AWS cloud (data storage, compute, and managed services).
- Understanding of how to integrate datasets into BI/reporting tools (Tableau, Power BI, or QuickSight).
- Experience with CI/CD tooling for data pipeline deployment.
Responsibilities
- Implement a cloud-native analytics platform with high performance and scalability.
- Build an API-first infrastructure for data in and data out.
- Build data ingestion capabilities for the platform's data as well as for external spend data.
- Leverage data classification AI algorithms to cleanse and harmonize data.
- Own data modelling, microservice orchestration, monitoring & alerting.
- Develop comprehensive expertise in the entire application suite and leverage this knowledge to design more effective applications and data frameworks.
- Adhere to iterative development processes to deliver concrete value each release while driving longer-term technical vision.
- Collaborate with cross-organizational teams, including product management, integrations, services, support, and operations, to ensure the overall success of software development, implementation, and deployment.
We Offer
- US and EU projects based on advanced technologies.
- Competitive compensation based on skills and experience.
- Regular performance appraisals to support your growth.
- 15 vacation days, 10 national holidays, 5 sick days.
- Free tech webinars and meetups organized by Svitla.
- Reimbursement for private medical insurance.
- Personalized learning program tailored to your interests and skill development.
- Bonuses for article writing, public talks, and other activities.
- Fun corporate online/offline celebrations and activities.
- Awesome team, friendly and supportive community!
Nice To Have
- Familiarity with AI/ML-based data cleansing, deduplication, and entity resolution techniques.
- Familiarity with microservices and event-driven architecture.
- Knowledge of performance tuning and monitoring tools for data workflows.