Quality Analytics Lead

AI Evaluation
Apply Now →

Descripción

The Quality Analytics Lead is the dedicated technical resource bridging Welo Data’s Analytics and Quality organizations. Sitting within the Analytics team, this senior individual contributor partners enterprise-wide with Quality Managers, Analysts, and leadership to design and maintain the data models, measurement frameworks, and analytical infrastructure that power evidence-based quality decisions across programs and regions.

At its core, this is an analytics engineering role. The primary responsibility is building and owning the quality data layer — the dbt models, data marts, and Python-driven modeling that transform raw operational data into a trusted, well-documented foundation the Quality organization can rely on. Experimentation, stakeholder consulting, and BI delivery are all extensions of that foundation, not parallel tracks.

The ideal candidate combines deep fluency in modern data modeling with a genuine understanding of quality operations, AI training data workflows, and experimental design. They ensure the analytical systems they build directly improve how quality teams detect issues, validate improvements, and demonstrate impact to clients and leadership.

Key Responsibilities

1. Quality Data Modeling & Analytics Infrastructure

  • Design, build, and maintain dbt models and data marts that serve the Quality organization’s enterprise reporting needs — covering throughput, accuracy, defect rates, CAPA effectiveness, annotator/rater performance, and program-level quality health.
  • Use Python for higher-order data modeling tasks including cohort analysis, performance trend modeling, and custom aggregations that go beyond standard SQL/dbt scope (a minimal sketch follows this list).
  • Partner with data engineers to define source data requirements, document data lineage, and ensure quality data is reliable, consistent, and analytics-ready.
  • Own the quality analytics data layer end-to-end: from raw operational inputs to clean, tested, well-documented marts consumed by dashboards, reports, and ad hoc analyses.
  • Apply dbt testing, documentation, and best practices to build a trusted, maintainable codebase that scales as new programs and data sources are onboarded.
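
By way of illustration, the sketch below shows the kind of cohort analysis described above, using pandas. All file and column names (quality_reviews.csv, annotator_id, review_date, is_defect) are hypothetical placeholders, not an actual Welo Data schema; the idea is to group annotators by the month they first appear and track each cohort's defect rate over tenure.

    import pandas as pd

    # Hypothetical input: one row per audited task. Column names are
    # illustrative, not an actual production schema.
    reviews = pd.read_csv("quality_reviews.csv", parse_dates=["review_date"])

    # Cohort each annotator by the month of their first audited task.
    first_seen = reviews.groupby("annotator_id")["review_date"].transform("min")
    reviews["cohort"] = first_seen.dt.to_period("M")
    reviews["review_month"] = reviews["review_date"].dt.to_period("M")

    # Months of tenure, so cohorts can be compared at the same age.
    reviews["cohort_age"] = (reviews["review_month"] - reviews["cohort"]).apply(lambda d: d.n)

    # Defect rate per cohort per month of tenure: rows are cohorts,
    # columns are months since the cohort started.
    cohort_defect_rate = (
        reviews.groupby(["cohort", "cohort_age"])["is_defect"]
        .mean()
        .unstack("cohort_age")
    )
    print(cohort_defect_rate.round(3))

Reading across a row of the resulting table shows how one cohort's defect rate evolves with tenure; comparing rows shows whether newer cohorts ramp to quality targets faster than older ones.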

2. Quality Measurement Frameworks & Metrics Design

  • Collaborate with Quality Managers and Analysts to define, standardize, and operationalize quality metrics — including accuracy rates, defect categorization, sampling coverage, inter-rater agreement, and CAPA closure effectiveness — consistently across all programs (a minimal inter-rater agreement sketch follows this list).
  • Design measurement frameworks aligned to acceptance criteria and quality thresholds, ensuring metrics faithfully reflect program health and client commitments.
  • Support rubric and guideline effectiveness measurement, helping quality teams understand whether their standards produce consistent, measurable outcomes across annotators and raters.
  • Champion data quality governance within the Quality org: own metric definitions, threshold documentation, and analytical methodology standards to reduce inconsistency and reporting variance.
  • Define enterprise-level quality dashboards in partnership with BI resources, translating mart output into clear, decision-ready views for Quality Managers through to senior leadership.
  • Analyze patterns in model evaluation outcomes, annotator disagreement, and guideline interpretation to surface systemic issues in AI training data and evaluation processes.
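
As one concrete example from this family of metrics, the sketch below computes inter-rater agreement with Cohen's kappa via scikit-learn. The rater labels are invented for the example.

    import pandas as pd
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical data: quality labels two raters assigned to the same six tasks.
    labels = pd.DataFrame({
        "rater_a": ["pass", "fail", "pass", "pass", "fail", "pass"],
        "rater_b": ["pass", "fail", "fail", "pass", "fail", "pass"],
    })

    # Raw percent agreement overstates consistency because some agreement
    # happens by chance; Cohen's kappa corrects for that.
    raw_agreement = (labels["rater_a"] == labels["rater_b"]).mean()
    kappa = cohen_kappa_score(labels["rater_a"], labels["rater_b"])

    print(f"raw agreement: {raw_agreement:.2f}")  # 0.83
    print(f"Cohen's kappa: {kappa:.2f}")          # 0.67, weaker than raw agreement suggests

Fleiss' kappa or Krippendorff's alpha generalize the same idea to more than two raters or to missing labels.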

3. Experimental Design & Performance Validation

  • Design and execute A/B tests and controlled experiments to measure the impact of quality interventions, process changes, and annotator training programs — applying proper power analysis, significance testing, and results interpretation (a minimal power-analysis sketch follows this list).
  • Build success validation frameworks to confirm that CAPA actions and process improvements produce measurable, sustained outcomes — not just short-term fluctuations.
  • Develop performance attribution models that quantify the contribution of specific quality initiatives to outcome improvements, separating causal signal from noise in program performance trends.
  • Apply statistical methods to sampling design, audit analysis, and error pattern detection, surfacing systemic quality issues and their root causes with data-backed evidence.
  • Conduct pre/post analyses for major quality program changes, training rollouts, and rubric updates, delivering clear impact assessments to quality leadership and clients.
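
To make the power-analysis and significance-testing steps concrete, the sketch below uses statsmodels. The baseline and target defect rates and the observed counts are invented for the example.

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

    # Hypothetical scenario: the baseline defect rate is 8%, and the
    # experiment should be able to detect a drop to 6% after a training change.
    effect = proportion_effectsize(0.08, 0.06)

    # Sample size per arm for 80% power at alpha = 0.05, two-sided.
    n_per_arm = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"audits required per arm: {n_per_arm:.0f}")

    # After the experiment: two-proportion z-test on observed defect counts.
    defects = [160, 118]    # control, treatment (illustrative counts)
    audited = [2000, 2000]  # tasks audited per arm
    z_stat, p_value = proportions_ztest(defects, audited)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

In practice, the minimum detectable effect and significance level would follow from the program's quality thresholds and client commitments rather than the illustrative values used here.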

4. Decision Support & Stakeholder Partnership

  • Act as the analytical partner to Quality Managers (P2–L2) and senior quality leadership, translating complex data models and analytical findings into clear, decision-ready recommendations.

Details

Category: AI Evaluation
Location: Portland, Oregon
Employment Type: Independent Contractor
Posted: April 14, 2026
