📄 Reference: AI Event Audit System — Project Status

Why We're Building This

Every event at Sound in Town moves through four major handoffs: Sales to Planning, Planning to Operations (Showbook to Lead Tech), Operations to Admin (Post-Show), and Admin back to Sales (Feedback). Each handoff has documented requirements in our KB — required fields, deadlines, quality standards.

Today, we rely entirely on humans to catch every gap before handing off. The result: things slip through. A venue contact missing here, a staffing note that says "TBD" there, an equipment scrub that's overdue. These gaps waste time at the next stage — Planning can't advance a venue without a contact, a Lead Tech can't run a show without a complete Showbook.

The goal: Build AI-powered audit agents that review events at handoff time and catch the things humans miss. Not to nag about what's already known to be incomplete, but to audit events the human believes are ready and surface what slipped through.

What It Is

A set of pluggable audit agents, each focused on a specific stage of the event lifecycle. Each agent:

  1. Reads business rules from our KB articles (the KB is the source of truth)
  2. Pulls event data from Odoo (fields, tickets, activities, planning slots)
  3. Sends both to Claude (AI) with instructions to evaluate compliance
  4. Returns a clear report: blockers, warnings, and observations

The system doesn't make up its own rules. It enforces what's written in our KB. When we improve the KB, the auditors automatically get smarter.

Design Principles

These were established through a design and stress-testing process and should not change without GM approval.

1. Gate = Human Says "Ready"

Auditors run when a human triggers them — "I think this event is ready for handoff, audit it." They do not run on time windows or schedules (yet). This avoids flooding people with findings about events they already know are incomplete.

Later, once the team is consistently green and on track, we can add scheduled runs.

2. KB Is Source of Truth — Rules Are Compiled, Not Copied

KB articles are written for humans — long, narrative, context-rich. They're too verbose to send directly to the AI. So each KB article that feeds an auditor gets a companion "compiled rules" file that extracts the checkable facts: deadlines, required fields, thresholds, conditions.

  • When a KB article changes, the compiled rules file is re-compiled
  • The KB stays human-first; the compiled rules are the machine-optimized view
  • Both live in the repo so changes can be tracked
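As a concrete sketch, a compiled rules file might reduce a KB article to a structure like the following. The schema, rule kinds, and field names here are illustrative assumptions, not the actual compiled-rules format:

```python
# Illustrative sketch of a compiled rules entry. The schema below is an
# assumption for discussion, not the actual compiled-rules format.
RULES_KB_1608 = {
    "source_article": "KB 1608",
    "rules": [
        {
            "id": "handoff-deadline",
            "kind": "deadline",
            "description": "Sales-to-Planning handoff complete",
            "days_before_event": 30,  # per the 30-day decision below
        },
        {
            "id": "required-fields",
            "kind": "required_fields",
            # Hypothetical field names for illustration only
            "fields": ["venue_contact", "deposit_received", "production_scope"],
        },
    ],
}

def checkable_rules(compiled):
    """Flat list of rules an auditor would evaluate against an event."""
    return compiled["rules"]
```

The point of the shape: each rule is one checkable fact, so re-compiling after a KB edit changes exactly the rules the auditor enforces.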

3. Route Findings to the Right Person

Findings go to the person responsible for moving the event to the next stage — not to a generic field. The KB defines who owns what at each stage (see The Handoffs between Sales, Planning, Operations, and Admin).


If the routing is unclear for a finding, that's a signal the KB needs a fix.

4. Manual First, Scale Later

Every auditor starts as a command-line tool. Terminal output only. No Odoo write-back, no ticket creation, no scheduled runs — until the output has been validated by humans and trusted.

5. Depth Control

Each auditor has three depth levels:

  • Quick — Field-level checks only (are required fields populated? are deadlines met?)
  • Standard — Quick + quality analysis (are populated fields meaningful? cross-field consistency?)
  • Deep — Standard + reasoning (schedule feasibility, risk inference, cross-event conflicts)
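The three levels are cumulative: standard runs everything quick runs, and deep runs everything standard runs. A minimal sketch (check names are illustrative):

```python
# The three depth levels are cumulative: standard runs everything quick
# runs, deep runs everything standard runs. Check names are illustrative.
DEPTH_CHECKS = {
    "quick": ["required_fields", "deadlines"],
    "standard": ["field_quality", "cross_field_consistency"],
    "deep": ["schedule_feasibility", "risk_inference", "cross_event_conflicts"],
}
DEPTH_ORDER = ["quick", "standard", "deep"]

def checks_for(depth):
    """Accumulate checks from 'quick' up through the requested depth."""
    checks = []
    for level in DEPTH_ORDER[: DEPTH_ORDER.index(depth) + 1]:
        checks.extend(DEPTH_CHECKS[level])
    return checks
```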

6. Start with Opus

Use the best available Claude model during development. Staff will judge the system by its first findings — if those are shallow or noisy, they'll dismiss it permanently. Optimize for cost later once the system has earned trust.

The Auditors

Auditor 0: KB Quality (The Foundation)

Purpose: Validate that KB articles are clear enough to be enforced by automated agents. The event auditors are only as good as the rules they enforce — if the KB has undefined deadlines, DRAFT markers, or contradictory rules, the event auditors produce unreliable findings.

What it checks:

  • DRAFT markers and TBD placeholders still present in published articles
  • Undefined deadlines (rules that say "on time" without a specific number of days)
  • Missing ownership (process steps with no stated responsible person)
  • Contradictory rules across articles (two articles giving different deadlines for the same thing)
  • Vague or unenforceable language ("should try to" vs. "must")
  • Missing cross-references and broken links
  • Gap analysis (processes described but with no documented standard)
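The mechanical side of these checks can be sketched with simple pattern scans for DRAFT/TBD markers and unenforceable language. The actual auditor sends articles to Claude for much deeper analysis; the regexes below are illustrative pre-checks only:

```python
import re

# Minimal sketch of the mechanical side of the KB quality checks.
# Illustrative pre-checks only; the real audit is done by Claude.
PATTERNS = {
    "draft_marker": re.compile(r"\b(DRAFT|TBD)\b"),
    "vague_language": re.compile(r"\bshould try to\b", re.IGNORECASE),
    # "on time" with no concrete day count before the sentence ends
    "undefined_deadline": re.compile(r"\bon time\b(?![^.]*\b\d+\s*days?\b)", re.IGNORECASE),
}

def scan_article(text):
    """Return (check_name, matched_text) pairs for every hit."""
    return [
        (name, match.group(0))
        for name, pattern in PATTERNS.items()
        for match in pattern.finditer(text)
    ]
```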

Runs against: KB articles directly (not compiled rules — this auditor validates the rules themselves)

Routes findings to: General Manager

Status: First run complete (2026-04-06). See Test Results below.

Auditor 1: Sales Readiness (Handoff 1: Sales to Planning)

Purpose: When the Sales Manager believes an event is ready for handoff to Planning, this auditor checks whether the package is actually complete.

What it checks:

  • All 10 required handoff fields populated per KB 1608
  • Equipment scrub complete (32-day rule) per KB 1645
  • Customer promises documented (inclusions, exclusions, price) per KB 1205
  • Deposit received, Flex quote built, production scope and staffing requirements set
  • Quality of populated fields (catches "TBD" or "standard" entries that aren't actionable)
  • Cross-field consistency (audio scope flagged but no audio roles in staffing notes?)
  • Blocking helpdesk tickets
  • At deep depth: risk assessment (rush event? complex scope? new client?)
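The quick-depth portion of these checks is mechanical. A minimal sketch, assuming illustrative field names and an illustrative placeholder list (not the actual KB 1608 fields):

```python
# Sketch of the quick-depth field check. Field names and the
# placeholder list are illustrative, not the actual KB 1608 fields.
PLACEHOLDERS = {"", "tbd", "n/a", "standard"}

def quick_field_check(event, required_fields):
    """Flag required fields that are empty or hold a non-actionable placeholder."""
    blockers = []
    for field in required_fields:
        value = str(event.get(field) or "").strip()
        if value.lower() in PLACEHOLDERS:
            blockers.append(f"Required field '{field}' is missing or a placeholder ('{value}')")
    return blockers
```

This is what catches the "TBD" and "standard" entries named above; the standard and deep depths layer AI judgment on top.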

Source KB articles: 1608, 1645, 1205, 1218

Routes findings to: Sales Manager

Status: Config and prompt built. Waiting on KB fixes from Auditor 0 before compiling rules.

Auditor 2: Logistics and Conflict (Cross-Event Intelligence)

Purpose: Look across ALL events in a 2-week window and spot risks and opportunities that only emerge from the cross-event view. This is the auditor that finds things no individual event review would catch.

What it checks:

  • Direct transfer opportunity: Event A strikes near Event B's venue — skip the warehouse round-trip
  • Soft/firm schedule conflict: Event A has a flexible end time, Event B has a firm start, and they share crew
  • Crew double-booking: Same technician assigned to overlapping shifts across events
  • Equipment conflict: Two events need the same scarce gear in the same window
  • Warehouse bottleneck: Multiple preps or depreps landing on the same day
  • Venue proximity optimization: Nearby events could share transport
  • Missing venue address: Can't assess logistics if address is empty — flag to fix the record
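The crew double-booking pattern reduces to an interval-overlap test across shifts. A sketch, assuming a (tech, event, start, end) tuple shape rather than the actual planning.slot schema:

```python
from itertools import combinations

# Sketch of the crew double-booking pattern: an interval-overlap test
# across shifts. The (tech, event, start, end) shape is an assumption,
# not the actual planning.slot schema; start/end can be datetimes or
# any comparable values.
def double_bookings(shifts):
    conflicts = []
    for a, b in combinations(shifts, 2):
        same_tech = a[0] == b[0]
        different_event = a[1] != b[1]
        overlap = a[2] < b[3] and b[2] < a[3]  # interval overlap test
        if same_tech and different_event and overlap:
            conflicts.append((a[0], a[1], b[1]))
    return conflicts
```

The other patterns (warehouse bottlenecks, shared gear) follow the same cross-event shape: join records across events, then test a condition no single-event review would evaluate.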

Source KB articles: 1202, 1203, 1292, 1293, 1613, 1648

Routes findings to: Planning Manager

Status: Planned — Phase 3

Auditor 3: Planning Readiness (Handoff 2: Showbook to Lead Tech)

Purpose: When the Lead Planner believes the Showbook is ready for handoff, this auditor checks whether the plan is actually executable.

What it checks:

  • Showbook status set (21-day rule) per KB 1589
  • Key positions booked (A1, V1, L1, etc.) per KB 1296
  • Lead Tech, Lead Prep, Lead Deprep assigned
  • Ship and return dates set
  • Venue advancing complete
  • System ready buffer of at least 2 hours before doors per KB 1202
  • Contingency buffers for risk factors (out of town, complex setup, new venue) per KB 1203
  • At deep depth: whether setup time is realistic for the equipment scope
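The System Ready buffer check can be sketched as a simple time comparison. This assumes the "2 hours minimum + risk buffer" formula that the TODO list notes still needs to be stated explicitly in KB 1202/1203:

```python
from datetime import datetime, timedelta

# Sketch of the System Ready buffer check. Assumes the
# "2 hours minimum + risk buffer" formula, which the KB does not yet
# state explicitly (see TODO list).
def system_ready_ok(system_ready, doors, risk_buffer_hours=0):
    required = timedelta(hours=2 + risk_buffer_hours)
    return (doors - system_ready) >= required
```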

Source KB articles: 1589, 1201, 1202, 1203, 1296, 1402, 1606

Routes findings to: Lead Planner

Status: Planned — Phase 4

Future Auditors (add as needed)

New auditors can be added by creating a config file and a prompt file — no code changes required. Ideas for future auditors:

  • Promise Auditor — Reads all helpdesk tickets linked to an event and extracts customer requests (e.g., "4 wireless mics", "screens visible from the back"). Then checks whether those commitments are reflected in the promise fields on the event. Catches gaps where Sales discussed something with the client but never documented it as a promise — which means it won't appear in the Showbook.
  • Venue Auditor — Checks venue records in Odoo contacts for completeness before Planning begins advancing. Flags missing critical info: loading dock access, elevator requirements, ground-level or upper-floor, key/access arrangements, parking costs, extra access fees, power availability, rigging points. For repeat venues, checks whether notes from previous events captured issues that should be pre-flagged.
  • Pre-Show Readiness — 7-day final sanity check on planned events
  • Close-Out — Post-event deprep, financial close, and survey compliance
  • Weekly Risk Digest — Birds-eye report for GM and leadership
  • Client History — Flag events for clients with a pattern of late scope changes
  • Training Opportunity — Simple events that could pair a trainee with a mentor

Implementation Plan

Phase 1: KB Quality Audit (current)

Goal: Validate the KB rules before building event auditors.

Steps:

  1. Build the audit harness (CLI, prompt compiler, output formatter) — DONE
  2. Build KB Quality Auditor — DONE
  3. Run against the Sales and Planning KB articles — DONE (first run 2026-04-06)
  4. Review findings with GM — IN PROGRESS
  5. Fix KB blockers found (see TODO list below)
  6. Re-run KB Quality Auditor to confirm fixes
  7. Run against the remaining Planning KB articles (1589, 1201, 1202, 1203, 1296, 1402, 1606)
  8. Fix any additional blockers found
  9. Update this article with results

Verification: The KB Quality Auditor is verified when it consistently finds real issues and produces zero false blockers on articles we know are clean. Compare its findings against manual review by GM.

Phase 2: Sales Readiness Auditor

Goal: First event auditor — proves the concept works against live Odoo data.

Steps:

  1. Compile rules files from the (now-validated) KB articles 1608, 1645, 1205, 1218
  2. Build the Odoo data layer (query events, tickets, activities, scorecard)
  3. Pick 3-5 test events:
     • One event we know is ready for handoff (should produce few or no blockers)
     • One event we know has gaps (should flag the right things)
     • One edge case (rush event, BEO, or install type)
  4. Run the auditor against each test event at quick, standard, and deep depth
  5. Compare Claude's findings to what we would manually flag
  6. Tune the prompt and rules until findings match human judgment
  7. Log all test results in this article
  8. Share sample output with Planning Manager — do the findings match what they'd manually check?

Verification: The Sales Readiness Auditor is verified when:

  • It catches at least one real issue on a test event that a human missed
  • It produces no more than one false positive per event on average
  • The Planning Manager confirms the findings are useful

Phase 3: Logistics and Conflict Auditor

Goal: Cross-event intelligence — the auditor that finds things humans can't easily spot.

Steps:

  1. Add planning.slot queries to the data layer (crew schedules, call times)
  2. Build the Logistics auditor config and prompt
  3. Pick a busy 2-week window from recent memory as the test window
  4. Run the auditor and review each finding against what actually happened
  5. Tune: are transfer opportunities realistic? Are conflict warnings accurate?
  6. Log test results in this article

Verification: The Logistics Auditor is verified when it finds at least one real conflict or opportunity that wasn't previously noticed. The Planning Manager reviews and confirms findings are actionable.

Phase 4: Planning Readiness Auditor

Goal: Showbook-to-Lead-Tech handoff quality gate.

Steps:

  1. Compile rules from (now-validated) Planning KB articles
  2. Pick test events at various stages of Showbook completeness
  3. Run, compare, tune — same process as Phase 2
  4. Have the Lead Planner and Show Ops Manager review findings

Verification: Same criteria as Phase 2, evaluated by the Lead Planner.

Phase 5: Automation (when trust is established)

Goal: Move from manual runs to scheduled runs with Odoo write-back.

Steps:

  1. Switch execution mode from Claude Code (Max plan) to Anthropic API
  2. Add an x_studio_audit_notes field to x_events in Odoo
  3. Implement helpdesk ticket creation for blocker-severity findings
  4. Add scheduled runs for auditors that have proven reliable
  5. Consider switching high-frequency auditors from Opus to Sonnet if quality holds

Verification: Monitor false positive rates after switching to scheduled mode. If noise increases, scale back to manual.

TODO — Immediate Next Steps

KB Fixes Required (from Phase 1 first run)

Decisions Made

  • Handoff deadline: 30 days. All references will be standardized to 30 days. This gives a clean 2-day buffer after the 32-day equipment scrub deadline.
  • KB 1608 is the canonical handoff article. KB 1218 will be rewritten as a simplified summary that points to each handoff's dedicated article for details.

Sales KB Fixes (from Run 1 — articles 1608, 1645, 1205, 1218)

  1. ~~KB 1608: Standardize to 30-day deadline~~ DONE 2026-04-06. SLA, checklist, and KB 1218 all say 30 days.
  2. ~~KB 1608: Define the 48-hour clock start~~ DONE 2026-04-06. "48 hours from when the quote is marked Firm in Odoo."
  3. ~~KB 1608: Clarify "within 2 days of the new request"~~ DONE 2026-04-06. "2 business days of receiving the inquiry."
  4. ~~KB 1608: Add link to Production Scope Set~~ DONE 2026-04-06. References Odoo activity type "Production Scope Set."
  5. ~~KB 1218: Rewrite as simplified summary~~ DONE 2026-04-06. Each handoff defers to its dedicated article. No duplicate field lists.
  6. ~~KB 1645: Specify activity type name~~ DONE 2026-04-06. "Equipment List Scrubbing" activity in Odoo.
  7. A7 process undocumented — Known gap (MASTER_TODO HAND-04). Cannot be audited until documented.

Planning KB Fixes (from Run 2 — articles 1589, 1201, 1202, 1203, 1296, 1402, 1606)

  1. ~~KB 1589: Add Odoo field name for Showbook Ready~~ DONE 2026-04-06. Field: x_studio_selection_field_kt_1j462te0b = planned ("Showbook Ready").
  2. ~~KB 1589: Map Lead Tech confirmation to system action~~ DONE 2026-04-06. The status change to "Showbook Ready" IS the confirmation record.
  3. ~~KB 1296: Add booking deadline~~ DONE 2026-04-06. 21 days before event, aligns with Showbook handoff.
  4. ~~KB 1402: Add ownership and deadline for risk documentation~~ DONE 2026-04-06. Lead Planner owns it; before Showbook handoff.
  5. KB 1606: Add deadline for Lead Planner assignment — Still needed. Must be assigned within X days of Sales-to-Planning handoff.
  6. ~~KB 1589: Standardize to "Show Operations Manager"~~ DONE 2026-04-06. All references now use "Show Operations Manager" (matches Odoo hr.job title).
  7. ~~KB 1589: Clarify "The Planner" = Lead Planner~~ DONE 2026-04-06. All references now say "Lead Planner."
  8. KB 1202 vs 1203: State the System Ready formula explicitly — Still needed. "2 hours minimum + risk buffer = required System Ready time."

Remaining from Run 3 (verification audit, 2026-04-06)

  1. KB 1608: Map "Firm" to Odoo field — The word "Firm" is used as a trigger state but the Odoo field name is not specified. (Status field: x_studio_selection_field_kt_1j462te0b = confirmed)
  2. KB 1608: Confirm Firm timestamp is tracked — The 48-hour clock depends on knowing when the quote was marked Firm. Need to verify this is a tracked date in Odoo.
  3. KB 1296: Change "should" to "must" — Activity completion uses aspirational language.
  4. KB 1218: Update Handoff 3 table deadline — Table says "2 business days after deprep" but body says "begin within 2, complete within 5." Align.

Build Steps

  1. Compile rules files from validated Sales KB articles
  2. Build and test Sales Readiness Auditor against 3-5 known events

How the System Learns

The system improves through three layers:

Layer 1: KB Articles (source of truth)

Written for humans. Updated by the team as processes evolve. When we update a KB article and re-compile its rules file, the auditor starts enforcing the new rule on the next run.

Layer 2: Compiled Rules (machine-optimized)

Extracted from KB articles. Structured as checkable facts — deadlines, required fields, thresholds, conditions. These are what the AI actually reads during an audit. Re-compiled when the source KB article changes.

Layer 3: Learnings (human feedback)

Plain English notes about what the auditor gets right, gets wrong, or misses. Stored in scripts/event_audit/learnings/ and compiled into the prompt as guidance. This is how we teach the system our business intuition over time — not by writing code, but by noting "that was a good catch" or "stop flagging this" in plain English.

Refinement workflow:

  • After each audit run (2 minutes): skim findings, jot a one-liner if something was a good catch, a false positive, or a missed issue
  • When a KB article changes: re-compile the rules file
  • When a new process is documented: write the KB article, compile a rules file, add it to the auditor config
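Folding learnings into the prompt at compile time could look like the sketch below. The directory layout follows scripts/event_audit/learnings/ as described above; the guidance wording and .md file convention are assumptions:

```python
from pathlib import Path

# Sketch of folding learnings files into the prompt at compile time.
# The .md file convention and guidance wording are assumptions.
def build_guidance(learnings_dir):
    notes = [p.read_text().strip() for p in sorted(Path(learnings_dir).glob("*.md"))]
    notes = [n for n in notes if n]
    if not notes:
        return ""
    return "Guidance from past runs:\n" + "\n".join(f"- {n}" for n in notes)
```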

Growth path:

  • Month 1: 5 rules files, basic compliance checks
  • Month 3: 15 rules files, 20 learnings, nuanced context-aware audits
  • Month 6: 25 rules files, 50 learnings, auditor understands the business deeply

Test Results Log

  • 2026-04-06, KB Quality, target: KB 1608, 1645, 1205, 1218 (Sales). Findings: 4 blockers, 10 warnings, 3 notes. Accuracy: Strong. Found real inconsistencies: 30 vs 31 day deadline, mismatched field lists between 1608 and 1218.
  • 2026-04-06, KB Quality, target: KB 1589, 1201, 1202, 1203, 1296, 1402, 1606 (Planning). Findings: 5 blockers, 7 warnings, 5 notes. Accuracy: Strong. Found missing Odoo field names, no booking deadline in 1296, role name inconsistency (Ops Mgr vs Show Ops Mgr).
  • 2026-04-06, KB Quality, target: KB 1608, 1645, 1218, 1589, 1296, 1402 (post-fix verification). Findings: 2 blockers, 7 warnings, 4 notes. Accuracy: Strong. Original 4 blockers resolved. Remaining 2 blockers are both Handoff 4 (A7 undocumented, known gap). New warnings surfaced for deeper issues (Firm field mapping, "should" vs "must").
  • 2026-04-06, Sales Readiness, target: Event 1094 (APR18 Peace Passage Ice Show). Findings: 5 blockers, 5 warnings, 4 notes. Accuracy: Strong. Caught missing on-site contact, overdue scrub, rush protocol not invoked, vague promises ("Leopard or something hanging"), and ice arena rigging gap. All actionable.
  • 2026-04-06, Sales Readiness, target: Event 1266 (APR22-24 NCLGMA Prince George). Findings: 6 blockers, 7 warnings, 4 notes. Accuracy: Strong. Correctly distinguished venue contact vs. client contact. Caught truncated power promise, 26-day overdue hotel booking, empty staffing notes on a complex multi-day production.
  • 2026-04-07, Promise, target: Event 1094 (APR18 Peace Passage Ice Show). Findings: 1 blocker, 7 warnings, 2 notes. Accuracy: Strong. Found per-song lighting commitment in ticket thread not documented in promise fields. Extracted client phone number from ticket. Flagged unconfirmed quote 12 days out.
  • 2026-04-07, Venue, target: Event 1094 (APR18 Peace Passage Ice Show). Findings: 4 blockers, 4 warnings, 3 notes. Accuracy: Strong. Venue used 2x before with zero notes captured. No address, no phone. Ship date vs load-in discrepancy. Venue contact buried in Run of Show text.
  • 2026-04-07, Logistics, target: 19 events, next 14 days. Findings: 2 conflicts, 7 risks, 5 opportunities. Accuracy: Strong. Found GP Pomeroy triple-room crunch (3 SIT productions same day), 6-event warehouse bottleneck Apr 9-10, GP gear consolidation opportunity. Identified 2 events shipping tomorrow with zero crew.
  • 2026-04-07, Planning Readiness, target: Event 1094 (APR18 Ice Show). Findings: 5 blockers, 6 warnings, 5 notes. Accuracy: Strong. Caught impossible schedule (system ready noon but truck leaves 4pm). Identified Thursday access as keystone dependency. Found Russell wearing 4 roles with no backup.
  • 2026-04-07, Planning Readiness, target: Event 858 (APR12-15 AARFP Conference). Findings: 7 blockers, 10 warnings, 5 notes. Accuracy: Strong. No showbook URL, no system ready/doors times, unassigned deprep, scope flags don't match crew. Correctly identified room-flip constraint with Pembina.

Technical Reference

Harness code: scripts/event_audit/

How to run:

# List available auditors
python -m event_audit --list

# Run KB quality audit on specific articles
python -m event_audit kb_quality --articles 1608 1645 1205 1218

# Run with deeper analysis
python -m event_audit kb_quality --articles 1608 --depth deep

# Dry run (see what would be sent to Claude, without calling it)
python -m event_audit kb_quality --articles 1608 --dry-run

# Run sales readiness audit on an event (Phase 2)
python -m event_audit sales_readiness --event 1247
python -m event_audit sales_readiness --event 1247 --depth deep

Currently running via: Claude Code (Max plan, no incremental cost)

Future: Anthropic API direct calls (estimated cost under $5/week at full scale)

Adding a new auditor: Create a YAML config file in auditors/ and a prompt file in prompts/. No Python code changes required.

Audit Data Model in Odoo

Audit results are stored in two custom Odoo models so findings can be tracked, queried, and reviewed by process owners.

x_audit_run (Audit Run)

One record per audit execution. Linked to the event.

  • x_event_id (many2one, x_events): Which event was audited
  • x_auditor (selection): Which auditor ran (sales_readiness, planning_readiness, etc.)
  • x_run_date (datetime): When the audit ran
  • x_depth (selection): quick / standard / deep
  • x_blocker_count (integer): Number of blockers found
  • x_warning_count (integer): Number of warnings found
  • x_note_count (integer): Number of notes found
  • x_summary (text): Full markdown report from the audit
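Recording a run could be sketched as building the record payload and creating it through Odoo's external API (xmlrpc.client). The field names mirror the table above; the count logic and call shape are illustrative, and credentials/connection are omitted:

```python
# Sketch of recording an audit run in x_audit_run. Field names mirror
# the model table; the count logic is illustrative.
def build_run_payload(event_id, auditor, depth, run_date, findings, report_md):
    counts = {"blocker": 0, "warning": 0, "note": 0}
    for finding in findings:
        counts[finding["severity"]] += 1
    return {
        "x_event_id": event_id,
        "x_auditor": auditor,
        "x_run_date": run_date,
        "x_depth": depth,
        "x_blocker_count": counts["blocker"],
        "x_warning_count": counts["warning"],
        "x_note_count": counts["note"],
        "x_summary": report_md,
    }

# With an authenticated xmlrpc.client proxy, the write-back would be:
# models.execute_kw(db, uid, password, "x_audit_run", "create", [payload])
```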

x_audit_finding (Audit Finding)

One record per individual finding. Linked to the run and to the event. Process owners review and provide feedback on each finding.

  • x_run_id (many2one, x_audit_run): Parent audit run
  • x_event_id (many2one, x_events): Event (denormalized for querying)
  • x_severity (selection): blocker / warning / note
  • x_title (char): Short title ("Missing venue contact")
  • x_finding (text): Full finding description
  • x_kb_reference (char): KB rule reference ("KB 1608, Field #4")
  • x_action (text): Recommended action
  • x_status (selection): open / acted_on / not_applicable / false_positive / new_rule_created
  • x_owner_feedback (text): Process owner's feedback on this finding
  • x_auditor (char): Which auditor
  • x_finding_date (datetime): When the finding was created

How the Feedback Loop Works

  1. Auditor runs and creates findings with status "open"
  2. Process owner reviews each finding in Odoo
  3. For each finding, they set a status and optionally add feedback:
     • acted_on — "Fixed the venue contact, thanks for catching this"
     • not_applicable — "This is a BEO event, no A1 needed"
     • false_positive — "Venue is 5 min from warehouse, crew self-transports"
     • new_rule_created — "Added this as a formal KB rule based on this finding"
  4. Periodically, a Trends Auditor reads all feedback and proposes KB improvements

The false_positive and not_applicable feedback becomes direct input to the auditor's learnings file — teaching it to avoid those findings in the future. The new_rule_created status tracks when a finding led to a permanent business rule change.
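Turning that feedback into learnings lines could be as simple as the sketch below. The status values match the x_status selection above; the output phrasing is an assumption:

```python
# Sketch: fold false_positive / not_applicable feedback into new lines
# for the auditor's learnings file. The output phrasing is an assumption.
def feedback_to_learnings(findings):
    lines = []
    for f in findings:
        if f["x_status"] in ("false_positive", "not_applicable") and f.get("x_owner_feedback"):
            lines.append(f"Do not flag '{f['x_title']}' when: {f['x_owner_feedback']}")
    return lines
```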

Finding Categories

Each finding is classified into one of five categories:

  • Missing Data: A field or record that should be populated but isn't. Typical response: owner fills in the field.
  • Missed Deadline: A deadline or SLA that has passed. Typical response: escalate or trigger the exception process.
  • Data Quality: Data that is incorrect, contradictory, truncated, or inconsistent. Typical response: owner corrects the record.
  • Risk: An identified risk condition that needs monitoring or mitigation. Typical response: may promote to the risk register (see below).
  • Opportunity: A potential cost saving or efficiency gain. Typical response: Planning Manager evaluates.

Future: Event Risk Register

Findings categorized as "Risk" point to an underlying risk condition that persists beyond the finding itself. The planned evolution:

Phase 1 (current): Findings have an x_category field. Filter the workbook to category = risk to see all risk findings.

Phase 2 (planned): A risk type library (x_risk_type) with pre-populated common event production risks:

  • Staffing risks: Green worker on key position, single point of failure, key person double-booked
  • Venue risks: New venue (no institutional knowledge), difficult load-in, limited power, no rigging points, outdoor exposure
  • Logistics risks: Tight turnaround, warehouse bottleneck, out-of-town event, shared equipment across events
  • Timeline risks: Compressed planning cycle, late sales handoff, pending client decisions
  • Scope risks: Unconfirmed client requirements, conditional scope items, scope creep history
  • Financial risks: Missing deposit, budget-constrained scope, unconfirmed quote

The library would be pre-populated with ~50 common risks from event production industry knowledge and refined over time with real data from our audits.

Phase 3 (planned): Event risk instances (x_event_risk) linking a risk type to a specific event with likelihood, impact, mitigation plan, owner, and status. Audit findings that identify a risk would link to the risk instance. Risk matrix view for leadership.
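If the risk instances carry likelihood and impact, the matrix view reduces to a simple score. A sketch, where the 1-5 scales and band cut-offs are assumptions rather than a decided design:

```python
# Sketch of a likelihood x impact score for the planned x_event_risk
# records. The 1-5 scales and band cut-offs are assumptions.
def risk_score(likelihood, impact):
    """likelihood and impact on a 1-5 scale; returns (score, band)."""
    score = likelihood * impact
    if score >= 15:
        band = "high"
    elif score >= 8:
        band = "medium"
    else:
        band = "low"
    return score, band
```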

Querying Findings

In Odoo, you can filter and group findings to spot patterns:

  • All open blockers: x_severity = blocker AND x_status = open
  • All false positives (to improve the auditor): x_status = false_positive
  • All findings for a specific event: x_event_id = [event ID]
  • Findings by auditor type: x_auditor = sales_readiness
  • Most common finding titles: group by x_title
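The filters above can be expressed as standard Odoo search domains, i.e. lists of ('field', operator, value) triples. The toy evaluator below handles only simple all-AND equality domains, just to show the shape:

```python
# The filters above as Odoo-style search domains. The evaluator is a
# toy for simple all-AND equality domains, to show the shape only.
OPEN_BLOCKERS = [("x_severity", "=", "blocker"), ("x_status", "=", "open")]
FALSE_POSITIVES = [("x_status", "=", "false_positive")]

def matches(record, domain):
    """Evaluate an all-AND domain of equality triples against a dict."""
    return all(record.get(field) == value for field, op, value in domain if op == "=")
```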