The data now spans November 2025 through April 2026 — revealing a clear seasonal cycle. January/February benefits renewal creates a production peak that the R8 rolling average anchors to. By March/April, production returns to November/December baseline levels, making the R8 prediction unreachable.
Weekly Network Production Trajectory
Total weekly production across all practices with R8 prediction overlay
Peak Month (Jan)
$7.06M
$7,058,729/wk
Current (Apr)
$6.34M
$6,338,364/wk
R8 Prediction
$7.54M
$7,539,503/wk
Peak-to-Current Drop
-10.2%
Jan to Apr decline
Dec ≈ Apr
Cyclical
$6.33M ≈ $6.34M
Monthly Summary by Day of Week
Month
Mon
Tue
Wed
Thu
Fri
Weekly Total
vs R8
vs January
Day-of-Week Decomposition
Five mini-charts showing the 6-month trajectory for each day of the week. Each chart has its own R8 line for comparison.
Thursday: Steepest Decline
Dropped from $1.61M (Jan) to $1.34M (Apr) — a loss of $263K/week. Thursday production has declined every month since January, the most consistent downward trend of any DOW.
Tuesday: Largest Absolute Day
Still the highest-production day at $1.58M in April, but down from $1.70M in January. The -7.4% drop represents ~$125K/week in lost production from the network's strongest day.
Wednesday: December Anomaly
Wednesday was the only DOW that peaked in December ($1.68M) rather than January. The decline from that peak to April levels is $303K/week.
Friday: Smallest but Most Stable
Fewest practices (59–62) and lowest volume, but the most stable trajectory. The spread between best and worst months (excl. Dec anomaly) is the narrowest of any DOW.
For each currently active practice, take its own rolling 8-week per-DOW averages × the actual count of each weekday in the target month, minus federal holidays. Sum across the network. Why this is the headline: it includes every practice that's active today (not just those open in May 2025), and each practice's own current production and trend are baked into its R8 — no network-wide multiplier required. Three other reference numbers appear below for context.
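In sketch form, the calculation above looks like this (a minimal sketch: the `practices` array and `dow_detail` field mirror the shapes described in the methodology notes, but the function names and exact structure here are assumptions):

```javascript
const THRESHOLD = 300; // $300/day floor: below it, the practice is treated as closed that DOW

// Count occurrences of a weekday (0=Sun..6=Sat) in a month (0-indexed), skipping holidays.
function countWeekdays(year, month, dow, holidays = []) {
  let count = 0;
  for (const d = new Date(year, month, 1); d.getMonth() === month; d.setDate(d.getDate() + 1)) {
    const iso = `${d.getFullYear()}-${String(d.getMonth() + 1).padStart(2, '0')}-${String(d.getDate()).padStart(2, '0')}`;
    if (d.getDay() === dow && !holidays.includes(iso)) count++;
  }
  return count;
}

// Bottom-Up forecast: each practice's own per-DOW R8 times its open-day counts, summed.
function bottomUpForecast(practices, year, month, holidays = []) {
  let total = 0;
  for (const p of practices) {
    for (const [dow, detail] of Object.entries(p.dow_detail)) {
      if (detail.r8 < THRESHOLD) continue; // a DOW the practice doesn't typically work
      total += detail.r8 * countWeekdays(year, month, Number(dow), holidays);
    }
  }
  return total;
}
```

For May 2026 the call would be `bottomUpForecast(practices, 2026, 4, ['2026-05-25'])`, with Memorial Day passed as the single holiday closure.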
May 2026 Forecast
—
—
June 2026 Forecast
—
—
Practices in Forecast
—
includes new practices since May 2025
Per-Month Bottom-Up Breakdown
Each panel decomposes the Bottom-Up monthly forecast (the green headline number) by day of week. Network Daily R8 is the sum across all open practices of their per-practice 8-week DOW average. Multiplied by the open day count for that DOW (working days minus holidays), it produces the monthly contribution. The five DOW contributions sum to the headline forecast.
May 2026 — Bottom-Up Breakdown
—
Working Days
—
—
Holiday Closures
—
none
No-Holiday Total
—
if every weekday open
Holiday Drag
—
closed-day impact
DOW
Working Days
Practices Open
Network Daily R8
Monthly Contribution
—
June 2026 — Bottom-Up Breakdown
—
Working Days
—
—
Holiday Closures
—
none
No-Holiday Total
—
if every weekday open
Holiday Drag
—
closed-day impact
DOW
Working Days
Practices Open
Network Daily R8
Monthly Contribution
—
Method Comparison — Why We Use Bottom-Up
Method
May 2026
June 2026
What It Does
Verdict
How the Methods Disagree
Four projections for May / June 2026. Bottom-Up is the headline (every active practice counted with its own current production). Calendar-Aware anchors to last May plus a network-wide growth multiplier. R8 × DOW Calendar uses network-level R8 baselines with the same calendar math. R8 × 4.33 ignores actual weekday counts and shows the same number for both months.
Top 20 Practices by Bottom-Up May Contribution
Practice
OD
Source
May Contribution
% of Network
Why Calendar-Aware Adds Value Over R8
Even with R8 corrected to the true rolling 8-week network average, it still treats every future month the same. The calendar-aware model adds three things: (1) seasonal anchoring — uses May/June 2025 actuals so it knows summer's profile differs from spring, (2) actual day counts — June 2026 has 5 Mondays vs May 2026's 4, which materially shifts the total, and (3) explicit holiday handling — Memorial Day and Juneteenth are subtracted, not assumed away.
Holiday Treatment
Memorial Day (Mon May 25) is treated as fully closed — nearly all SGA practices observe it. Juneteenth (Fri Jun 19) is treated as fully closed in the conservative case; if half the network operates that day, add roughly +$310K to the June forecast.
Methodology — Calendar-Aware Forecast
For each target month, the projection sums per-DOW contributions:
Baseline: Prior-year same-month per-DOW weekly network average (e.g., May 2025 Tuesday = $1.72M). Calculated from the YoY weekly series, $300/practice-day threshold applied.
YoY adjustment: Multiplied by 1 + (yoy_4wk_pct/100) — the rolling 4-week network total vs the same 4 weeks last year.
Working day count: Actual count of each DOW in the target month minus federal holidays where practices close.
Holiday assumptions (current run): Memorial Day = closed (May 25), Juneteenth = closed (Jun 19). Both are subtracted from working-day counts. The R8 Calendar comparison model uses the same DOW-count math but pulls the per-DOW R8 averages instead of seasonal baselines — isolating the seasonal-vs-anchored question.
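The per-DOW arithmetic above can be sketched as follows (names are illustrative; `workingDays` is assumed to be the per-DOW weekday count already reduced by holiday closures):

```javascript
// Calendar-aware projection for one target month.
// baseline: prior-year same-month per-DOW weekly network average, e.g. { tue: 1_720_000 }
// workingDays: count of each DOW in the target month, minus holiday closures
function calendarAwareForecast(baseline, yoy4wkPct, workingDays) {
  const growth = 1 + yoy4wkPct / 100; // rolling 4-week YoY multiplier
  return Object.entries(baseline).reduce(
    (sum, [dow, weeklyAvg]) => sum + weeklyAvg * growth * (workingDays[dow] ?? 0),
    0
  );
}
```

With the May 2025 Tuesday baseline of $1.72M, a flat 0% YoY, and May 2026's 4 Tuesdays, the Tuesday contribution alone is $6.88M.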
Limitations: Prior-year baseline implicitly carries any one-off events from May/June 2025 (e.g., weather, regional issues). The YoY 4-week multiplier is a single-point trend estimate — if the network's trajectory shifts, the projection should be re-run. New practices added since May 2025 are NOT in the prior-year base, so the projection is conservative for network growth.
Declining
54
65% of practices
Stable
14
17% of practices
Improving
15
18% of practices
January Average vs April Average by Practice
Points above the diagonal are improving; below are declining
Practice Detail
Location ▲
OD ▲
Trend ▲
Nov ▲
Dec ▲
Jan ▲
Feb ▲
Mar ▲
Apr ▲
Change ▲
R8/Day ▲
Ops Director Impact
Daily dollars lost from declining practices in each OD's portfolio. Higher bars indicate ODs whose declining practices are losing the most production. Focus coaching and intervention where the dollar impact is greatest.
Daily Production Lost per Ops Director
Sum of daily losses from declining practices in each portfolio
Ops Director Detail
Ops Director ▲
Declining ▲
Total ▲
% Declining ▲
$/Day Lost ▲
Affected Practices
Highest Dollar Impact
Highest Decline Rate
24-Month Year-over-Year View
Pulls two full years of Velox daily booked production so every week in the most recent year has a same-calendar-week prior-year pair. The 8-week trailing rolling average is overlaid as a tracking baseline that adapts as weeks roll — not a fixed line.
Weeks of Data
—
—
Last 4 Weeks
—
network production
Same 4 Weeks PY
—
prior year baseline
YoY 4-Week
—
vs same period last year
Latest Week
—
—
24-Month Weekly Network Production
Solid: weekly Mon–Fri total · Dashed: trailing 8-week rolling average · drag to pan, scroll to zoom, double-click to reset
Year-over-Year Overlay
Current year (solid) vs prior year same calendar week (dashed) · drag to pan, scroll to zoom, double-click to reset
Day-of-Week Year-over-Year
Same six months as the Day-of-Week Trends tab, paired against last year. Solid bars are current year; lighter bars are prior year. Header shows the latest-month YoY %.
Source-of-Truth Audit — Month to Date
Per-practice booked, completed, scheduled, and budget for the current month. Numbers use the same Gen4 measures as Power BI's Pacing Report — Provider Level Pacing at the full-month grain, so this view should reconcile line-for-line against PBI. Booked = Completed + Scheduled.
Network MTD Booked
—
current month
Network MTD Completed
—
across all practices
Network Budget
—
monthly budget total
Booked vs Budget
—
% to budget (booked ÷ budget)
Practices Reporting
—
—
Practice Audit Detail
Practice ▲
OD ▲
Src ▲
Booked ▲
Completed ▲
Scheduled ▲
Budget ▲
% Bud ▲
Days ▲
R8 Rolling Average — Corrected Definition (2026-05-01)
The R8 displayed across this dashboard is the true rolling 8-week network DOW average — the actual sum of network production on Mondays (then Tuesdays, etc.) over the last 8 weeks, divided by 8.
Previously, R8 was computed by summing each practice's 8-week per-DOW average (filtered to days where it actually worked). That overstated the network R8 by 52–171% per DOW, because every practice contributed an "if they worked" number even on DOWs where it is typically closed. Friday R8 was 171% too high; Monday–Thursday R8s were 52–57% too high.
True R8 weekly: ~$6.9M (vs the broken $11.4M in data.json)
True R8 monthly equivalent: ~$30M (vs the broken $49M)
Source: yoy.weekly_series last 8 weeks — the actual network DOW totals already in the dataset
The override is applied at the dashboard layer. The next pull-daily.js refresh will rewrite data.json with the broken values; the dashboard re-corrects them on every load. To make this permanent, fix the r8_summary calculation in tools/pacing/pull-daily.js to use rolling network DOW totals instead of the per-practice sum.
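The corrected calculation reduces to a plain mean of network DOW totals. A sketch (the `weeklySeries` entry shape is an assumption for illustration; the principle is the point):

```javascript
const DOWS = ['mon', 'tue', 'wed', 'thu', 'fri'];

// True R8: per DOW, average the NETWORK total over the last 8 weeks.
// Never a sum of per-practice "if they worked" averages.
function trueR8(weeklySeries) {
  const last8 = weeklySeries.slice(-8);
  const r8 = {};
  for (const dow of DOWS) {
    r8[dow] = last8.reduce((sum, wk) => sum + (wk[dow] ?? 0), 0) / last8.length;
  }
  return r8;
}
```

Summing the five per-DOW values of `trueR8` gives the weekly R8 figure (~$6.9M in the current data).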
$300 Threshold
Any practice reporting less than $300 in daily production is excluded from that day's calculation. The $300 threshold represents approximately one doctor-day of minimum production — below that, the practice likely had no provider and the zero would distort the average.
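A minimal sketch of the filter (the function name and input shape are illustrative, not the dashboard's actual code):

```javascript
const MIN_DAILY_PRODUCTION = 300; // roughly one doctor-day of minimum production

// Average a day's production across practices, excluding sub-$300 reports:
// a near-zero day likely means no provider was on site and would distort the mean.
function dailyNetworkAverage(practiceDayTotals) {
  const reporting = practiceDayTotals.filter((v) => v >= MIN_DAILY_PRODUCTION);
  if (reporting.length === 0) return 0;
  return reporting.reduce((a, b) => a + b, 0) / reporting.length;
}
```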
Data Sources
83 practices: Direct VELOX reporting with daily production actuals
7 practices: Estimated from monthly aggregates (no daily VELOX feed)
90 total: Represent the SGA network for pacing analysis
Timeframe: November 2025 through April 2026 (6 months, ~130 business days)
Bottom-Up Forecast (Primary, added 2026-05-04)
For each currently active practice, the dashboard takes its own per-DOW rolling 8-week average (from practices[].dow_detail[DOW].r8), filters out DOWs the practice doesn't typically work (where R8 is below the $300/day threshold), and multiplies by the actual count of each weekday in the target month minus federal holidays. Contributions are summed across all practices.
Why this is the primary:
No composition bias. Includes every practice currently producing — new practices added since May 2025 are counted with their actual production, not assumed away.
No uniform multiplier. Each practice's own current trend is baked into its R8. Fast growers are projected forward at their own pace; declining practices at theirs.
No prior-year contamination. No reliance on May/June 2025 actuals, so one-off events from last year (weather weeks, payer disruptions) don't propagate forward.
Calendar-aware. Same DOW-counting and holiday-closure logic as the Calendar-Aware reference.
Adjustment for EOD-only practices: The 8 practices reporting via End-of-Day monthly aggregates (rather than daily Velox feeds) have a flat per-DOW value. Their r8 field can be polluted by partial-month May data being treated as a daily figure, so the dashboard substitutes the trailing 5-month (Dec–Apr) daily-average mean.
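That substitution is, in sketch form (the `monthlyDailyAvg` map is a hypothetical per-practice shape, used here only to illustrate the trailing-mean fallback):

```javascript
// For EOD-only practices, replace the polluted r8 with the mean of the
// trailing five monthly daily averages (Dec 2025 through Apr 2026).
const TRAILING_MONTHS = ['2025-12', '2026-01', '2026-02', '2026-03', '2026-04'];

function eodDailyValue(practice) {
  const vals = TRAILING_MONTHS
    .map((m) => practice.monthlyDailyAvg[m])
    .filter((v) => v != null);
  return vals.reduce((a, b) => a + b, 0) / vals.length;
}
```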
Limitations: Practice DOW patterns are assumed stationary — if a practice changes its hours (adds Saturdays, drops Fridays) the forecast won't reflect that until the new pattern enters their 8-week window. Universal-closure assumption for federal holidays (Memorial Day, Juneteenth) is conservative; actual closure rate is closer to 60–80%.
Known Limitations of the Calendar-Aware Forecast (Tracker)
Calendar-Aware (Seasonal + YoY) is now a reference tracker, not the primary — but it's still useful as a seasonal sanity check. A statistician review on 2026-05-04 surfaced these flaws (which is why we moved to Bottom-Up):
Composition bias. Practices added to the network since May 2025 are not in the prior-year baseline, so their production is invisible to the model. If even 10% of current production comes from new practices, the network forecast is biased ~10% low.
Heterogeneous growth. A single network-wide YoY 4-week multiplier is a weighted average. It under-shoots fast-growing practices and over-shoots declining ones. Errors don't cleanly cancel.
Single-point trend estimate. The YoY 4-week multiplier swings 200–400 bps on a single bad week (weather, payer disruption, AR posting timing). A trimmed 8–13 week YoY would be more stable.
One-off contamination. A May 2025 weather week or regional issue is permanently embedded in the 2026 baseline with no flag.
Uniform holiday closures. Memorial Day and Juneteenth are modeled as 100% closed; in reality 40–60% of practices stay partially open on float holidays.
No confidence band. A point estimate without a +/- range invites false precision.
Status: The recommended fix — switch to a per-practice Bottom-Up forecast — was implemented on 2026-05-04 and is now the primary headline number. Calendar-Aware remains as a tracker for the seasonal-comparison perspective.
Linear Regression — Removed as a Forecast (2026-05-04)
An earlier version of this dashboard included an OLS linear regression on each DOW's monthly series, extrapolated forward. It has been removed entirely:
The trailing data window is too short (~6 monthly observations per DOW) to fit a stable slope; standard error on the slope exceeds the slope itself for most days of the week.
December is a structural break (fewer working days, holiday cancellations), not a trend point. Including it biases every slope upward.
Linear extrapolation through a known seasonal cycle isn't a trend — it's an artifact.
If a trend signal is needed in the future, use a smoothed YoY (trimmed 8–13 week) on the network weekly series rather than OLS on monthly DOW totals.
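A sketch of what such a smoothed signal could look like (the window and trim amount are illustrative choices, not a specification):

```javascript
// Trimmed YoY: average the last n weekly YoY percentages after dropping the single
// best and single worst week, making the signal robust to one outlier week
// (weather, payer disruption, AR posting timing).
function trimmedYoY(weeklyYoYPct, n = 13) {
  const recent = weeklyYoYPct.slice(-n).sort((a, b) => a - b);
  const trimmed = recent.length > 2 ? recent.slice(1, -1) : recent;
  return trimmed.reduce((a, b) => a + b, 0) / trimmed.length;
}
```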
The Seasonal Pattern
Six months of data reveal a clear seasonal U-curve:
November: $6.79M/wk — baseline production
December: $6.33M/wk — holiday trough (fewer working days, patient cancellations)
April: $6.34M/wk — returns to December baseline levels
December and April are nearly identical ($6.33M vs $6.34M), confirming this is a seasonal cycle rather than a structural decline. The R8 will always overpredict during trough months because it includes January/February in its 8-week window.
Power BI Direct Integration
The dataset ID b57375c9-d64b-4643-be71-378a520d8f93 was extracted from the Excel connection string. A direct API integration is feasible via:
POST https://api.powerbi.com/v1.0/myorg/datasets/b57375c9-d64b-4643-be71-378a520d8f93/executeQueries
This would eliminate the Excel middleman and enable automated daily refresh
DAX queries can be sent directly — the same measures used in Power BI reports
Requirements: Azure AD service principal with Power BI API permissions (Dataset.Read.All)
Benefit: Dashboard could pull live data instead of requiring manual Excel exports
Status: Proof-of-concept validated via the Power BI bridge at VPS:3050. Full automation requires Azure AD app registration and service principal setup.
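A request to that endpoint could be assembled as follows (sketch only: acquiring the Azure AD token via the service principal is out of scope here, and the DAX query is a placeholder):

```javascript
const DATASET_ID = 'b57375c9-d64b-4643-be71-378a520d8f93';

// Build an executeQueries request for the Power BI REST API.
function buildExecuteQueriesRequest(accessToken, dax) {
  return {
    url: `https://api.powerbi.com/v1.0/myorg/datasets/${DATASET_ID}/executeQueries`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        queries: [{ query: dax }],
        serializerSettings: { includeNulls: true },
      }),
    },
  };
}

// Usage (requires a valid token with Dataset.Read.All):
// const { url, options } = buildExecuteQueriesRequest(token, 'EVALUATE <table expression>');
// const rows = await fetch(url, options).then((r) => r.json());
```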