Welo Data works with technology companies to provide datasets that are high-quality, ethically sourced, relevant, diverse, and scalable to supercharge their AI models. As a Welocalize brand, Welo Data leverages over 25 years of experience in partnering with the world's most innovative companies and brings together a curated global community of over 500,000 AI training and domain experts to offer services that span:
ANNOTATION & LABELLING: Transcription, summarization, image and video classification and labeling.
ENHANCING LLMs: Prompt engineering, SFT, RLHF, red teaming and adversarial model training, and model output ranking.
DATA COLLECTION & GENERATION: From institutional languages to remote field audio collection.
RELEVANCE & INTENT: Culturally nuanced and aware ranking, relevance, and evaluation to train models for search, ads, and LLM output.
Want to join our Welo Data team? We bring practical, applied AI expertise to projects, combining strong academic experience with a deep working knowledge of state-of-the-art AI tools, frameworks, and best practices. Help us elevate our clients' data at Welo Data.
Job Reference: #LI-JC1
Job Profile Summary:
The Business Intelligence Engineer is a core member of Welo Data's analytics team, responsible for designing, building, and maintaining the data infrastructure and transformations that power high-quality, reliable insights across the organization. This role bridges the gap between data engineering and analytics by developing scalable data models, orchestrating pipelines, and enabling data accessibility for reporting and analysis.
Working closely with Data Engineers, Analysts, and business stakeholders, the Business Intelligence Engineer ensures that Welo Data's analytics environment, built on AWS, Redshift, dbt, Airflow, and Power BI, operates efficiently, reliably, and in alignment with data governance and quality standards. This role plays a key part in advancing our cloud-based analytics platform, ensuring the scalability, performance, and sustainability of our data ecosystem.
This role involves working with a distributed team across multiple time zones (primarily U.S. Pacific, UTC, and India Standard).
Main Duties:
Data Pipeline Development and Management
Design, build, and maintain data pipelines and ELT/ETL workflows in AWS to support enterprise analytics and reporting.
Develop and optimize dbt models that transform raw data into reliable, well-structured datasets for business use.
Orchestrate workflows using Apache Airflow (MWAA) to ensure automation, reliability, and reproducibility of data processes.
Partner with the Data Engineering team to align data architecture and ensure schema design supports analytics and business requirements.
Monitor and optimize pipeline performance, reliability, and data quality across all layers of the data stack.
Analytics Enablement and Collaboration
Collaborate with analysts and business stakeholders to understand data needs and deliver governed, high-quality datasets.
Build and maintain Power BI data models and support analysts in optimizing report performance and structure.
Ensure data models and transformations are well-documented, version-controlled, and tested as part of a mature CI/CD analytics workflow.
Engage with the broader analytics team to identify opportunities for automation, standardization, and improved data accessibility.
Data Governance and Quality Assurance
Support Welo Data's ongoing data quality and governance initiatives by ensuring accuracy, consistency, and traceability within pipelines and data models.
Collaborate with the Data Quality & Governance Lead to identify and resolve data quality issues and contribute to continuous improvement of data management processes.
Contribute to the development and maintenance of metadata, lineage, and documentation standards for analytics assets.
Agile Collaboration and Delivery
Participate in sprint planning, retrospectives, and backlog grooming using Jira to manage tasks and progress.
Document architecture decisions, data models, and workflows in Confluence to promote transparency and knowledge sharing.
Operate effectively in an Agile environment, balancing technical rigor with responsiveness to evolving business needs.
Additional Job Description:
- Bachelor's degree in Computer Science, Information Systems, or a related field strongly preferred; equivalent experience acceptable. Master's degree is a plus.
- Minimum of 4 years of experience in data engineering, analytics engineering, or a related role, with demonstrated impact in building and maintaining data pipelines and models.
- Proficiency in SQL and dbt for data transformation and modeling.
- Hands-on experience with AWS services, including Redshift, S3, Lambda, and Glue.
- Experience with Airflow (MWAA or similar orchestration tools) for workflow management.
- Familiarity with CI/CD workflows and version control using Git.
- Understanding of data warehousing concepts, dimensional modeling, and ELT design principles.
- Experience working within Agile methodologies, with practical use of Jira and Confluence for planning and documentation.
- Excellent communication and collaboration skills, with the ability to engage effectively across technical and non-technical stakeholders.
- (Preferred) Proficiency in Power BI, including data modeling and performance optimization.
- (Preferred) Knowledge of Python for automation and data pipeline development.
We use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.