**About the Role**

We are seeking an AI Engineer to architect and optimize our agent-based forecasting and decision systems. You'll work at the intersection of LLMs, distributed computing, and statistical forecasting to build intelligent agents that transform how enterprises plan their supply chains. This role combines hands-on development of AI agents with rigorous evaluation frameworks to ensure our systems deliver measurably superior business outcomes.

**Core Responsibilities**

*Agent Development & Context Engineering*
- Design and implement intelligent agents for demand planning and decision-making using Pydantic AI and similar frameworks
- Craft sophisticated system prompts and agent instructions that encode domain expertise and business heuristics
- Test and implement context-pipeline improvements to optimize agent performance and cost
- Build evaluation frameworks that measure agent performance against statistical baselines (MAPE, WMAPE, bias metrics)
- Implement self-improving agent systems that learn from feedback and adapt to customer-specific patterns

*Data Engineering & Analytics*
- Build scalable data pipelines using Databricks/Spark for processing millions of SKU-location combinations
- Optimize distributed computing patterns to minimize costs while maximizing performance
- Design feature engineering strategies for time-series forecasting and anomaly detection
- Create data-quality validation frameworks for ML model inputs and outputs

*LLM Integration & Workflow Automation*
- Develop multi-agent orchestrations for complex supply chain workflows
- Integrate code execution environments with LLMs for autonomous data analysis
- Build custom tools and function-calling patterns for agent-based systems
- Design fallback strategies and error handling for production agent deployments

*Experimentation & Evaluation*
- Design A/B testing frameworks for comparing agent strategies against traditional methods
- Build comprehensive evaluation suites measuring both accuracy and business impact
- Create simulation environments for testing agent behaviors under various scenarios
- Develop metrics dashboards showing per-customer optimization gains

**Required Qualifications**
- 3+ years of experience in data science, ML engineering, or analytics roles
- Strong Python skills with expertise in data manipulation (Pandas, PySpark, Polars)
- Experience with SQL and distributed computing frameworks (Spark/Databricks)
- Experience with cloud deployments and infrastructure as code (IaC)
- Hands-on experience with LLMs and prompt engineering
- Background in statistical analysis and experiment design
- Experience building data pipelines and ETL processes

**Preferred Qualifications**
- Experience with supply chain analytics or demand forecasting
- Knowledge of agent frameworks (Pydantic AI, LangChain, AutoGPT)
- Familiarity with time-series forecasting methods (ARIMA, Prophet, neural approaches)
- Experience with workflow orchestration tools (Dagster, Airflow)
- Background in optimization algorithms or operations research
- Previous work with enterprise B2B customers

**Technical Environment**
- AI/Agent Stack: Pydantic AI, Claude/GPT-4, custom evaluation frameworks
- Data Platform: Databricks, Spark Connect, Dagster, Airbyte
- ML Libraries: XGBoost, LightGBM, statsmodels, scikit-learn, transformers
- Infrastructure: Kubernetes, Docker, AWS, PostgreSQL
- Languages: Python (primary), SQL, potentially TypeScript for UI integrations

**What Makes This Role Unique**

You'll pioneer the application of agentic AI to supply chain optimization, where small accuracy improvements translate to millions in inventory savings. This isn't about building chatbots – it's about creating autonomous systems that outperform traditional forecasting methods through intelligent reasoning and adaptive learning. You'll have direct impact on how Fortune 500 companies plan their operations.
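For context on the statistical baselines named above, here is a minimal sketch of how MAPE, WMAPE, and bias are typically computed. The functions and the toy numbers are illustrative only, not part of our evaluation framework; a production version would operate on Pandas or Spark columns rather than plain lists.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: average of per-point |error| / |actual|."""
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def wmape(actual, forecast):
    """Weighted MAPE: total absolute error over total actual volume.
    More robust than plain MAPE when some actuals are near zero."""
    return 100.0 * sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

def bias(actual, forecast):
    """Forecast bias: positive means over-forecasting on average."""
    return 100.0 * sum(f - a for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

# Toy example: three SKU-periods of demand vs. forecast
actual = [100, 50, 10]
forecast = [110, 45, 12]
print(mape(actual, forecast), wmape(actual, forecast), bias(actual, forecast))
```

Note how the low-volume SKU (actual of 10) dominates MAPE but barely moves WMAPE; that difference is exactly why supply chain evaluation usually reports both.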
**Growth Opportunities**
- Drive the development of our next-generation agent architecture
- Publish research on agent-based forecasting methods
- Present at conferences on practical applications of LLMs in the enterprise
- Shape the technical direction of our AI strategy
- Mentor teams on agent development best practices

**Ideal Candidate Profile**

You're someone who gets excited about turning messy real-world data into actionable insights through AI. You understand that in enterprise B2B, the "analysis battle" matters more than pure accuracy – customers need to understand and trust AI recommendations. You're comfortable with ambiguity, enjoy experimenting with new approaches, and have the resilience to iterate through multiple solutions. Most importantly, you're customer-obsessed and understand that each enterprise has unique patterns that require tailored optimization strategies.

We offer competitive compensation, equity participation, and the opportunity to define how AI transforms supply chain management at scale.

Daybreak is an Equal Opportunity Employer
**Senior QA Engineer**

**About the Role**

We're seeking a Senior QA Engineer to lead quality initiatives for our AI-powered supply chain forecasting and decision platforms. This is a strategic role where you'll leverage cutting-edge AI tools and automation to build intelligent testing systems that validate complex ML models, data pipelines, and distributed systems. You'll drive our shift-left quality practices while ensuring our platforms deliver accurate, reliable results at scale.

- Prediction Platform: A data-centric, domain-specific, model-agnostic platform built especially for supply chain practitioners to generate accurate predictions.
- Decision System: A supply chain planning decision system purpose-built to combine the computational power of AI predictions with human judgement, enabling enterprise adoption and continuous improvement of decision quality, speed, and confidence.

**Key Responsibilities**

*AI-Focused Test Automation*
- Build intelligent test automation frameworks that leverage AI for test generation, maintenance, and self-healing capabilities (using tools like Playwright MCP)
- Implement automated validation for Daybreak's ML model outputs, forecast accuracy, and data pipeline integrity
- Design performance-testing strategies for distributed systems and concurrent workflows
- Create smart test-selection strategies that optimize regression suite execution based on code changes

*Quality Strategy & Architecture*
- Own the end-to-end quality strategy across our microservices architecture and data platforms
- Embed quality gates throughout CI/CD pipelines and enforce shift-left practices at the feature-branch level
- Build frameworks for testing event-driven architectures, async workflows, and agent-based systems
- Define, track, and publish quality metrics including test coverage, defect escape rate, and mean time to recovery

*Data & ML Quality Assurance*
- Develop automated validation for ETL processes, data transformations, and feature engineering pipelines
- Build testing strategies for orchestrated workflows and batch processing systems
- Implement data quality checks across ingestion, processing, and serving layers

**Required Qualifications**
- 5+ years in QA/SDET roles with 2+ years testing data-intensive or ML products
- Strong programming skills in Python and JavaScript/TypeScript
- Hands-on experience using modern AI tools (Claude Code, Windsurf, Playwright MCP, etc.)
- Hands-on experience with modern test automation frameworks (Playwright, Cypress, Selenium, etc.)
- Experience with performance testing tools (Locust, K6, JMeter) and API testing frameworks
- Background in testing microservices, distributed systems, and cloud-native applications
- Understanding of data engineering workflows, CI/CD pipelines, and infrastructure as code

**Preferred Qualifications**
- Knowledge of data orchestration platforms (Dagster, Airflow, Databricks)
- Familiarity with ML platforms and model validation techniques (preparing datasets, validating model-driven test scenarios)
- Experience testing data processing systems
- Background in supply chain, forecasting, or optimization domains

**Tech Stack You'll Work With**
- Backend: FastAPI, Python, PostgreSQL, PydanticAI
- Frontend: React, TypeScript
- Data & ML: Databricks, Dagster, MLflow
- Infrastructure: AWS (EKS, RDS, S3), Docker, Kubernetes
- CI/CD: Azure DevOps
- Monitoring: Datadog and modern observability tools

**What Makes This Role Unique**

You'll architect quality systems that use AI to test AI-powered platforms. This is a hands-on strategic role where you'll revolutionize how we approach quality through intelligent automation, predictive analytics, and data-driven testing strategies. You'll have the autonomy to research and implement next-generation testing tools while directly impacting product reliability and customer satisfaction.
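To make the smart test-selection responsibility concrete, here is a minimal sketch of change-based selection: map source modules to the test files that exercise them, then run only the tests affected by a diff. The file paths and the hand-written mapping are hypothetical; real systems typically derive this mapping from per-test coverage data.

```python
# Hypothetical coverage map: source file -> test files that exercise it.
COVERAGE_MAP = {
    "pipelines/ingest.py": {"tests/test_ingest.py", "tests/test_e2e.py"},
    "models/forecast.py": {"tests/test_forecast.py", "tests/test_e2e.py"},
}

def select_tests(changed_files):
    """Return the sorted union of test files covering any changed source file.

    Files absent from the map select nothing; a conservative variant
    would instead fall back to the full suite for unknown files.
    """
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return sorted(selected)
```

A change touching only `pipelines/ingest.py` would then skip `tests/test_forecast.py` entirely, which is where the regression-suite savings come from.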
**Impact You'll Have**
- Transform quality practices from reactive testing to proactive prevention
- Build self-improving test systems that learn from failures and adapt to changes
- Enable rapid, confident deployments through intelligent automation
- Establish quality as a competitive advantage in the AI supply chain space

We offer competitive compensation, equity participation, and the opportunity to shape the future of AI-powered quality assurance in a rapidly growing company.

Daybreak is an Equal Opportunity Employer