Quantegy Laboratory · Systems Integration

Willow Brook Manor

Custom-built real-time dashboard for a client's media room. Purpose-designed to surface what matters: time, weather, hardware health, and what's playing. Deployed on dedicated hardware.

1280×400 Strip Display
3 Live Data Sources
V7 Current Revision
Pi Hardware Target

01 / Project Overview

Built for a specific room.
Engineered for any environment.

Willow Brook Manor Dashboard is a fullscreen information display running on a Raspberry Pi, mounted in a dedicated media room environment. The primary display is a 1280×400px strip panel positioned below the main television screen.

The system integrates three live data sources: the HTPC media center via its JSON-RPC API, real-time weather via Open-Meteo, and local hardware telemetry via psutil. These are presented as three distinct views that transition automatically based on what's happening in the room.

When a film is playing, the dashboard switches to Cinema View: clearlogo, tagline, progress bar, and estimated finish time. When the HTPC is idle, it rotates among a clock/weather display and two hardware gauge pages every 15 seconds. The system polls for state changes every 2 seconds and responds without operator input.

Everything is custom. No off-the-shelf dashboard software, no configuration files, no web browser running in kiosk mode. Pure Python, Tkinter, and Pillow, chosen for complete control over every pixel.

Platform: Raspberry Pi (Linux)
Language: Python 3 / Tkinter
Display: 1280 × 400 px fullscreen
HTPC API: JSON-RPC over HTTP
Weather: Open-Meteo (REST, no key)
Telemetry: psutil (CPU, RAM, thermal, net I/O)
Poll Rate: 2 s (HTPC state) / 2 s (gauges)
Weather Refresh: 15 min (threaded, non-blocking)
Views: Cinema · System Gauges · Network
Auto-Transition: State-driven + 15 s idle cycle
Dial Asset: Pillow circular mask applied at load
Revision: V7 (V5 base, incrementally iterated)
02 / Technical Architecture

Four engineering
decisions that matter.

01 / 04

Cross-System Integration

The dashboard orchestrates three independent external systems: the HTPC's JSON-RPC, a REST weather API, and the Linux hardware layer, each on its own data pathway and refresh interval. Network requests run on daemon threads so a slow HTPC response never blocks the UI or the animation loop. All API calls are wrapped in try/except with silent degradation: if the HTPC doesn't respond, the display stays on whatever it last showed. The system never crashes from a network timeout.
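The guarded-call pattern described above can be sketched as follows. This is a minimal illustration, not the deployed code: the endpoint URL is hypothetical, and only `Player.GetActivePlayers` (named later in this document) is taken from the source. On any failure the function returns the caller's last known result instead of raising.

```python
import json
import urllib.request

def get_active_players(url, timeout=1.0, last=None):
    """Poll Player.GetActivePlayers over JSON-RPC; on any network
    failure, return the last known result instead of raising, so
    the display simply holds whatever it was showing."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "Player.GetActivePlayers",
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())["result"]
    except Exception:
        return last  # silent degradation: never crash on a timeout
```

Run on a daemon thread, a call shaped like this can time out or fail without ever touching the UI loop.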

JSON-RPC · REST API · Threading
02 / 04

Real-Time Telemetry Display

Three analog gauges covering processor load, memory utilization, and core temperature are redrawn on a Tkinter canvas every 2 seconds from live psutil readings. A separate Network View displays download and upload throughput in Mbps using psutil's net I/O counter delta, calibrated to ISP plan speeds. Needle sweep covers a 240° arc; the thermal scale is normalized to a 70–180 °F range so the needle reads meaningfully at typical Pi operating temperatures rather than pegging at rest.
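The normalization step amounts to one pure function: clamp the reading into the gauge's range, then map the fraction onto the arc. A sketch, where the 210° start angle and clockwise sweep are assumptions about the dial geometry:

```python
def needle_angle(value, lo, hi, sweep=240.0, start=210.0):
    """Map a sensor reading onto the dial's 240-degree arc.
    Readings are clamped so out-of-range values pin the needle
    at the end stops instead of wrapping past them. The 210-degree
    start angle (lower-left, sweeping clockwise) is an assumption."""
    frac = (value - lo) / (hi - lo)
    frac = min(max(frac, 0.0), 1.0)   # clamp to the dial face
    return start - frac * sweep

# Thermal gauge normalized to the 70-180 °F range from the text:
# a reading of 125 °F sits at exactly mid-sweep.
```

With the 70–180 °F normalization, an idle Pi lands mid-scale instead of resting against the low stop.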

psutil · Canvas Rendering · Normalization
03 / 04

State-Driven View Management

The system operates on a simple state machine: HTPC active → Cinema View. HTPC idle → gauge rotation. A 2-second polling loop watches Player.GetActivePlayers and triggers the appropriate transition when state changes. The idle cycle rotates through three views on a 15-second timer, but the HTPC state check always takes priority and can interrupt the cycle at any point. Transition guards prevent race conditions when polling fires during an active fade.
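Reduced to its decision logic, the state machine fits in a few lines. A sketch under assumed view names (the idle rotation order is not specified in the source):

```python
IDLE_VIEWS = ["clock", "gauges", "network"]   # idle rotation order (assumed)

def resolve_view(htpc_active, idle_index):
    """One step of the state machine: an active HTPC always wins and
    locks the display to Cinema View; otherwise advance the 15-second
    idle rotation to its next page."""
    if htpc_active:
        return "cinema", idle_index           # priority interrupt
    idle_index = (idle_index + 1) % len(IDLE_VIEWS)
    return IDLE_VIEWS[idle_index], idle_index
```

Because the HTPC check runs first on every poll, playback can interrupt the idle cycle at any point, exactly as described above.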

State Machine · Race Condition Guard · Priority Interrupt
04 / 04

Transition Engineering

View transitions use a sequential black fade: outgoing canvas fades to black, view swaps at the cut point, incoming canvas fades in from black. A simultaneous crossfade was attempted first and abandoned. At roughly 50% opacity overlap the dark background bleeds through both layers, producing a visible brightness pulse. The sequential approach eliminates the overlap window entirely. An is_transitioning flag suppresses interval redraws during the fade to prevent canvas artifacts mid-animation.

Sequential Fade · Artifact Prevention · Interval Guard
03 / Engineering Notes

The dissolve problem
and how it was solved.

The Problem

The gauge pages, System View and Network View, share the same visual language. Switching between them with a full black fade felt heavy; a crossfade between the two canvas layers seemed like the right call. The first implementation faded both canvases simultaneously: outgoing opacity from 1→0, incoming from 0→1, running in parallel over 500ms.

On the Pi display, the result was a clear brightness pulse at the midpoint of the transition. At ~50% opacity, both canvas layers were semi-transparent simultaneously. The background color, #020202, very nearly black, bled through both at once, and the eye read the combined output as a brief dimming flash at the exact center of the fade.

Root Cause Analysis

Tkinter on Linux has no native per-widget alpha channel. The stipple workaround (drawing a gray50 checkerboard rectangle over the canvas) was the only available approximation, and it was visually unacceptable in motion, rendering as a visible dot pattern rather than a smooth transition. The CSS equivalent would have been clean; the Tkinter equivalent was not.

# First attempt: simultaneous crossfade (ABANDONED)
# Both layers semi-transparent at once; the near-black background
# bleeds through both and reads as a brightness pulse at ~50%.
def bad_dissolve(self):
    self.set_opacity(self.outgoing, 0.5)  # fading out
    self.set_opacity(self.incoming, 0.5)  # fading in, at the same time
    # (set_opacity is pseudocode: Tkinter has no per-widget alpha,
    # so this was approximated with a gray50 stipple overlay)

# Solution: sequential, not simultaneous
# No overlap window = no background bleed
def sequential_dissolve(self, next_view):
    self.is_transitioning = True     # blocks interval redraws
    self.fade_to_black(ms=88)        # step 1: outgoing → black, ease-in
    self.show(next_view)             # step 2: swap view at the cut
    self.fade_from_black(ms=550)     # step 3: incoming reveal, ease-out
    self.is_transitioning = False
    # (helpers shown as blocking for clarity; the real code
    # chains after() callbacks)

The Fix

Sequential rather than simultaneous. The outgoing canvas fades to black first over 88ms. At the cut point the incoming canvas is raised and begins its fade in over 550ms, a longer and slower reveal. The two phases never overlap, which eliminates the background bleed entirely.

An is_transitioning flag was added to both the gauge interval timer and the network speed timer. Without it, a 2-second psutil poll firing during a 638ms transition would issue a canvas redraw mid-fade, resetting the opacity artificially. The flag makes both timers skip their draw call if a transition is in progress and reschedule normally; they don't miss data, they just don't render it at the wrong moment.
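The guard pattern itself is small enough to show in isolation. A sketch with `draw` and `schedule` injected as stand-ins for the real redraw and Tk's `after()`, so the logic is visible without a running event loop:

```python
class GaugeTimer:
    """Sketch of the is_transitioning guard: the interval tick always
    reschedules itself, but skips its draw call while a fade owns the
    canvas. `draw` and `schedule` are injected stand-ins for the real
    gauge redraw and Tkinter's after()."""
    def __init__(self, draw, schedule):
        self.draw = draw
        self.schedule = schedule
        self.is_transitioning = False

    def tick(self):
        if not self.is_transitioning:
            self.draw()                   # safe: canvas is stable
        self.schedule(2000, self.tick)    # never drop the schedule
```

The key design choice is that the reschedule happens unconditionally: a tick skipped during a fade costs one frame of data on screen, never the timer itself.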

This class of bug, concurrent processes contending for a shared resource during a state change, appears in show systems in exactly the same form. Automation cues firing during a manual override. DMX values being written by two operators simultaneously. A Kinesys cue sheet executing while an E-stop clears. The solution is always the same: establish who owns the resource during the critical window and make everyone else wait.

Principles Demonstrated
Root-cause analysis before applying a fix: understanding why the flash occurs, not just patching it
Knowing platform limitations: Tkinter has no widget alpha on Linux, stipple is inadequate, and sequential is the path
Concurrency guard pattern: is_transitioning flag as a mutex for the canvas resource
Iterative revision: V5, V6, and V7 each addressing concrete failure modes found in deployment
Graceful degradation: every API call is wrapped, and an HTPC timeout never surfaces as a crash
Portable asset paths: the __file__-relative dial.png load works from any working directory on any machine
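The last principle, the __file__-relative asset load, reduces to a one-liner. A sketch with the script path passed in explicitly; in the dashboard it would be called with `__file__`:

```python
from pathlib import Path

def asset_path(name, script):
    """Resolve an asset (e.g. dial.png) relative to a script file
    rather than the current working directory, so the dashboard
    launches correctly from systemd, cron, or a shell alike.
    Called in practice as asset_path("dial.png", __file__)."""
    return Path(script).resolve().parent / name
```

This is what lets the same code run unmodified from an autostart unit on the Pi or from a development checkout on another machine.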
04 / Relevance to Show Systems

Same problems.
Larger stage.

A media room dashboard and a touring show automation system operate on different scales, but the engineering problems are structurally identical: multiple data sources, shared state, real-time rendering, and a system that cannot crash during the show.

[ SYS ]

Cross-Vendor Coordination

HTPC speaks JSON-RPC. Weather speaks REST. The Pi speaks psutil. Each has different latency, different failure modes, different data shapes. Integrating them into a single coherent display without letting one slow source stall the others is the same problem as coordinating PRG, TAIT, and Solotech systems on a shared show network: different vendors, one stage.

[ NET ]

Real-Time Constraint

The dashboard's animation loop cannot block. Weather fetches run on daemon threads; HTPC polling is fire-and-forget with a 1-second timeout. In show systems, a video server that's slow to respond cannot hold up a cue trigger. The threading model here is the same discipline: separate concerns, set timeouts, never block the critical path.
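The daemon-thread discipline looks like this in miniature. A sketch, with `fetch` and `store` as stand-ins for the Open-Meteo call and the shared state the UI reads from:

```python
import threading

def refresh_weather(store, fetch):
    """Fire-and-forget refresh: the fetch runs on a daemon thread so
    a slow weather API can never stall the animation loop. A failed
    fetch is swallowed and the last known reading stays in place."""
    def worker():
        try:
            store["weather"] = fetch()
        except Exception:
            pass              # keep last known reading on failure
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The daemon flag matters on shutdown too: a fetch stuck in a timeout can never keep the process alive after the UI exits.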

[ GRD ]

Graceful Failure Under Load

Every external call is wrapped. If the HTPC doesn't respond, the Cinema View holds its last state. If the weather API is unavailable, the display shows the last known temperature. Nothing crashes. In a live production environment, this is not optional. It is the baseline expectation. The system that keeps running quietly when one node goes down is the system that gets used on the road.