LoudAwards: Web Design Lessons from Voting

The Black Box: Surviving Live Voting at LoudAwards

This isn’t a portfolio story. This is a black box analysis: an after-action report from four years running the Manitoba Loud Music Awards voting platform solo, under real traffic, real risk, and no reset button. No backup team, no dry runs. If it failed, the event failed. The evidence is here: the system didn’t fail. Here’s how, why, and what would have killed it.

For the community side of this build and why we keep sponsoring the chaos, read our LoudAwards sponsorship recap; that piece shows the people behind these numbers.

Stakes and Scale: The Pressure Was Real

Voting volume wasn’t hypothetical. Every year, the numbers climbed:
  • 2021: 3,085 votes
  • 2022: 8,578 votes
  • 2023: 26,794 votes
  • 2024: 33,627 votes

Peak: 1,000+ concurrent users hitting the vote endpoint, traffic surging in the last hours, social shares driving unpredictable spikes. The site ran on shared LiteSpeed hosting, same box as the main FunkPd site. No CDN. No load balancer. No failover. All votes, all events, one server, one developer.

Mission Parameters: Success or Scandal

  • Every vote must count exactly once. No double-dips, no ghost records, no data loss.
  • System must remain responsive under peak. No lag, no timeout, no manual recovery.
  • No downtime, no excuses, no post-event "fixes."

Failure was public, permanent, and unrecoverable. This is the black box from that run.

Decisions That Saved the System: Option Trees

Every part of the stack was chosen by one rule: boring survives, fancy dies. Each tradeoff is logged below.

Authentication

  • Rejected: Custom auth (unproven, time sink, untested under load)
  • Rejected: Email-only magic links (undermined by unreliable email deliverability, prone to delays)
  • Rejected: Social OAuth-only (risk if Facebook fails or bans)
  • Selected: WordPress core auth + Facebook OAuth fallback.
    Reason: Both are battle-tested, have built-in session and CSRF handling, and fail gracefully if one breaks.

Vote Storage

  • Rejected: Custom vote tables (risk of race conditions, table locks under heavy writes, single point of failure)
  • Rejected: Third-party services (Firebase, Redis, external APIs; they introduce latency, data-persistence risk, and network dependencies)
  • Selected: WordPress usermeta per-user isolation.
    Reason: No cross-user contention, every write atomic, natural sharding by user, no external dependencies.
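The per-user isolation described above can be sketched in plain PHP. The meta-key scheme (`ng_vote_{category}_{day}`) and the helper names here are assumptions for illustration, not the plugin's actual code; in WordPress the existing meta would come from `get_user_meta()`:

```php
<?php
// Hypothetical sketch of per-user vote isolation: each vote lives under a
// meta key scoped to one user, one category, and one day, so concurrent
// writes from different users never touch the same record.

// Normalize a category name into a key-safe slug (helper is assumed).
function ng_slug(string $s): string {
    return strtolower(preg_replace('/[^a-z0-9]+/i', '_', $s));
}

// Build the meta key for one category on one day (key scheme is assumed).
function ng_vote_meta_key(string $category, string $day): string {
    return 'ng_vote_' . ng_slug($category) . '_' . $day;
}

// Check a user's existing meta (an array in this sketch) for a prior vote
// in the same category/day slot. A hit means the vote is a duplicate.
function ng_has_voted(array $user_meta, string $category, string $day): bool {
    return array_key_exists(ng_vote_meta_key($category, $day), $user_meta);
}
```

Because every key embeds the user's own category/day slot and lives in that user's meta rows, two users voting at the same instant can never contend for the same write.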

Vote Validation

  • Rejected: Client-side validation (trivial to bypass with browser tools or bots)
  • Rejected: Optimistic UI (risk of false success, invisible data loss, re-sync bugs)
  • Selected: All validation server-side with immediate feedback.
    Reason: Server is the only authority. Vote fails? User knows instantly. No silent errors.

Deployment & Infra

  • Rejected: Automated deploys, CI/CD, containers (good in theory, unproven in shared hosting, more moving parts to debug under fire)
  • Rejected: CDN (would require careful cache-busting and endpoint bypass; no time, no budget for mistakes)
  • Selected: Manual deploy, local staging, live plugin hotfixing.
    Reason: Know every change, can patch instantly, can roll back fast. Risk contained by discipline, not by automation.

Black Box Internals: How It Actually Worked

  • All voting logic lived in a single custom plugin. No custom tables, no external services, no scheduled jobs.
  • Votes processed via AJAX call to ng_cast_vote endpoint. Nonce validated, user session checked, category and prior votes checked, then usermeta written atomically.
  • Frontend showed instant feedback, but only after server confirmation. No optimistic UI. If you lost, you saw it; if you landed, you knew it.
  • No live leaderboards, no public endpoints to scrape. Results were tallied after voting closed, removing attack incentives.
  • Plugin footprint kept minimal. All dependencies contained. No chain reactions from external failures.
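Tallying after close, rather than live, reduces to a single pass over per-user records. The data shapes and function name below are illustrative assumptions, not the plugin's real code:

```php
<?php
// Hypothetical post-close tally: iterate every user's vote meta and count
// ballots per candidate. No live counters exist during voting, so there is
// no shared state to contend over and nothing public to scrape.
function ng_tally(array $all_user_meta): array {
    $totals = [];
    foreach ($all_user_meta as $user_id => $meta) {
        foreach ($meta as $key => $candidate) {
            // Only count keys written by the voting plugin.
            if (strpos($key, 'ng_vote_') !== 0) {
                continue;
            }
            $totals[$candidate] = ($totals[$candidate] ?? 0) + 1;
        }
    }
    arsort($totals); // highest vote count first, keys preserved
    return $totals;
}
```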

Failure Modes Avoided: The Near-Misses

  • Bot attacks: Could have overwhelmed the vote endpoint in minutes if client-side checks were trusted.
    Survived because: All votes checked server-side, nonce and session required, no retry logic to exploit.
  • Race conditions: With tens of thousands of votes, a shared vote table would have deadlocked or dropped writes.
    Survived because: Per-user usermeta isolation. No cross-user contention. Writes are atomic.
  • Plugin conflicts: WordPress update or rogue plugin could have taken out voting mid-event.
    Survived because: Minimal plugin surface, all voting logic in one module. CSS broke once, voting never did.
  • Live hotfixes: Patching production during traffic.
    Survived because: Tight code, backup discipline, small attack surface.
  • Feature revolt: PayPal paid voting addition in 2022 caused immediate community backlash.
    Survived because: Feature was modular, easily pulled, voting unaffected.

What Would Have Broken It

  • Database corruption: usermeta table failure = all votes lost. Single-point failure with no redundancy.
  • Hosting outage: Server down, event down. No geographic failover, no backup host.
  • Manual deployment error: No CI/CD, no automated rollback. One bad push could have dropped votes or locked out users.
  • Resource limits: Shared hosting means hard RAM and CPU caps. A true viral spike would have hit a wall.

Load & Performance: By the Numbers

  • Votes per season: 3,085 → 33,627
  • Peak concurrent users: 1,000+
  • Final voting window (2024): 40% of votes in 6 hours
  • Response time under load: always <2 seconds
  • Zero vote rejections from overload. No timeouts, no manual intervention, no lost ballots.

Monitoring We Lacked (and What Would Have Helped)

  • No live error or latency monitoring; only post-mortem review
  • No alerting; no way to catch silent failure until too late
  • No resource utilization data; couldn’t forecast risk, only react if/when it broke
  • No automated health checks or rollback triggers

If anything had gone wrong, it would have been detected late, if at all. Only discipline and the minimalism of the stack kept disaster away.

Technical Blueprint

ng_cast_vote() - AJAX endpoint
├── Nonce validation (anti-CSRF)
├── User authentication check
├── Category validation (one vote per user/category/day)
├── Duplicate vote prevention
└── Atomic usermeta write (per-user, never global)
  1. Frontend AJAX → WordPress AJAX handler
  2. All validation server-side
  3. If pass: usermeta write, atomic and instant
  4. Immediate feedback to user: success or error, no gray area
  5. Frontend UI updates only after server response

Security model: Trust nothing but the backend. All attack surfaces are server-controlled. No secrets in the browser. No client-side state.
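A minimal WordPress sketch of that flow might look like the following. The `ng_cast_vote` action name comes from the article; everything else (nonce name, POST field names, the meta-key scheme) is an assumption, and the snippet needs a WordPress runtime to execute:

```php
<?php
// Hypothetical sketch of the server-side flow, using standard WordPress
// APIs. The wp_ajax_ prefix means only logged-in sessions reach this hook.
add_action('wp_ajax_ng_cast_vote', function () {
    // 1. Nonce validation (anti-CSRF); dies with an error if invalid.
    check_ajax_referer('ng_vote_nonce', 'nonce');

    // 2. User authentication check.
    if (!is_user_logged_in()) {
        wp_send_json_error(['message' => 'Not logged in.'], 401);
    }
    $user_id  = get_current_user_id();
    $category = sanitize_key($_POST['category'] ?? '');
    $nominee  = sanitize_text_field($_POST['nominee'] ?? '');

    // 3. Category validation: one vote per user/category/day (key assumed).
    $meta_key = 'ng_vote_' . $category . '_' . current_time('Y-m-d');

    // 4. Duplicate vote prevention via the user's own meta.
    if (get_user_meta($user_id, $meta_key, true)) {
        wp_send_json_error(['message' => 'Already voted in this category today.'], 409);
    }

    // 5. Atomic per-user usermeta write; no global counter is touched.
    update_user_meta($user_id, $meta_key, $nominee);
    wp_send_json_success(['message' => 'Vote recorded.']);
});
```

The frontend only flips to a "vote counted" state after `wp_send_json_success` returns, matching the no-optimistic-UI rule above.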

The Real Post-Mortem: Autopsy and Lessons

  • Atomicity over elegance: No partial votes. No “we’ll try again later.” The system was binary; votes landed or they didn’t. No scandal, no manual cleanup.
  • Isolation over analytics: Every user was isolated. No global counter to deadlock. All tallies after the fact. Zero cross-user failure risk.
  • No trust in the client: Everything server-side. If the client lied, it didn’t matter. Only the backend counted votes.
  • Stack boredom equals uptime: Every shiny tool or feature had to prove it could survive a spike, a bot, or a live patch. If not, it died on the whiteboard.
  • What would I do differently?
    - Modularize from day one.
    - Add real-time error and resource monitoring.
    - Build in geographic failover and CI/CD for real safety.
    - Never let a solo op be the only knowledge base.

This wasn’t luck. It was discipline, minimalism, and a willingness to do the boring thing every time. That’s what kept it alive. No drama, no excuses, just a clean black box: votes in, votes out, never failed. The rest is forensics.

About the author

Nolan Phelps

Nolan Phelps founded FunkPd in 2017, specializing in performance-optimized web development for trades and industrial businesses across Canada and internationally. With hands-on web development experience dating back to 2006 and over a decade of prior construction trade experience, he delivers full-stack solutions that combine technical depth with real-world operational understanding. His client roster includes mining corporations, equipment manufacturers, and service operators like Minetek, Actiwork, and Fanquip, with a focus on sub-3-second load times and search-ready architecture. FunkPd maintains a 95%+ client retention rate through direct, in-house development with no outsourcing and no delegation, ensuring every build is lean, owner-editable, and optimized for Core Web Vitals performance.
