Services

Backend Engineering, Infrastructure

Industry

B2B Marketplace

Year

2021-2024

One cache. Two hundred tenants. Zero stale pages.

A B2B marketplace served 200+ company profile pages from a single Django instance. Every profile was publicly accessible and queried on every visit - company details, image galleries, map embeds, related listings. Without caching, each page load hit the database 8-12 times. With naive caching, one company's edit would serve stale data to visitors of completely unrelated profiles. The system needed per-tenant cache isolation with instant invalidation and zero cross-tenant leakage.

01.
THE CHALLENGE

The Stale Data Problem

The first caching attempt used Django's per-view cache with the URL as the key. It worked until two companies edited their profiles within the same cache TTL window. Company A's edit invalidated the cache, but the invalidation was too broad - it flushed pages for Company B, C, and every other tenant. The workaround (longer TTLs with manual invalidation) introduced a worse problem: editors would save changes, refresh the page, and see old content. Support tickets piled up. 'I updated my description 10 minutes ago and it still shows the old one.'

Cache invalidation is not hard. Cache invalidation across 200 tenants sharing one cache backend is hard.

02.
THE SOLUTION

Version-Hash Cache Keys

The solution uses a composite cache key: company slug, locale, and a truncated hash of the company's last_modified timestamp. When a company saves any change, the timestamp updates automatically. The next request computes a new hash, which produces a new cache key. The old cached page is never explicitly deleted - it simply becomes unreachable because nothing will ever request that key again. The cache's built-in TTL evicts orphaned entries. No invalidation logic. No cache.delete() calls. No race conditions between 'delete old' and 'write new.'

The composite cache key generator:

Python
import hashlib

def company_cache_key(company, locale):
    """Composite cache key with version hash.

    The hash changes whenever the company saves any field,
    because Django auto-updates last_modified.
    No explicit invalidation needed.
    """
    ts = company.last_modified.isoformat()
    # md5 is used for key versioning only, not security
    version = hashlib.md5(ts.encode()).hexdigest()[:8]
    return f"company:{company.slug}:{locale}:{version}"
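
Read-through usage of this key might look like the sketch below. The `Company` stand-in, `get_profile_html`, and the dict-backed cache are illustrative, not the production code (which uses Django's cache backend), but the flow is the one described above: an edit changes the timestamp, the timestamp changes the key, and the next request misses.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Company:
    """Minimal stand-in for the Django model."""
    slug: str
    last_modified: datetime

def company_cache_key(company, locale):
    ts = company.last_modified.isoformat()
    version = hashlib.md5(ts.encode()).hexdigest()[:8]
    return f"company:{company.slug}:{locale}:{version}"

def get_profile_html(cache, company, locale, render):
    """Read-through fetch: no delete calls, no invalidation logic.

    An edit bumps last_modified, which changes the key, so the old
    entry is simply never requested again and ages out via TTL.
    """
    key = company_cache_key(company, locale)
    html = cache.get(key)
    if html is None:
        html = render(company, locale)  # expensive render on a miss
        cache[key] = html               # real backend: cache.set(key, html, 3600)
    return html
```

Note that the "invalidation" here is purely implicit: the second request with an unchanged timestamp is a hit, and the first request after an edit is a guaranteed miss.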

Watch the Cache

A live cache simulator. Each tile is a tenant page. Incoming requests hit the cache (green) or miss (amber). Click any tile to simulate an edit: the version hash changes and the next request is a guaranteed miss. Watch the hit rate climb as the cache warms.


Template fragment caching for heavy components:

HTML
{% load cache %}

{# Gallery fragment — cached independently #}
{# gallery_hash derives from gallery_modified, not company.last_modified #}
{% cache 3600 gallery company.slug gallery_hash %}
  {% for image in company.gallery_images.all %}
    <img src="{{ image.url }}" alt="{{ image.alt }}" loading="lazy">
  {% endfor %}
{% endcache %}

{# Map embed — rarely changes, long TTL #}
{% cache 86400 map company.slug company.address_hash %}
  {% include "company/_map_embed.html" %}
{% endcache %}

Production Safeguards

Fragment-Level Caching

Heavy page sections - image galleries, map embeds, related listings - are cached independently with their own tenant-scoped keys. When a company updates their description, the gallery fragment stays cached because it has its own last_modified tracking. This reduces database load by 60% compared to full-page-only caching, because most edits touch text fields, not images.
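
One way the per-fragment version could be derived, following the same pattern as the page-level key (the `gallery_modified` field name matches the template comment above; this is a sketch, not the production helper):

```python
import hashlib
from datetime import datetime, timezone

def fragment_hash(last_modified):
    """Version hash for a single fragment, keyed off that fragment's
    own timestamp (e.g. a gallery_modified field) rather than
    company.last_modified, so a text edit leaves the gallery cached."""
    return hashlib.md5(last_modified.isoformat().encode()).hexdigest()[:8]

# In the view context (illustrative):
#   context["gallery_hash"] = fragment_hash(company.gallery_modified)
```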

Staff Preview Bypass

Staff users always see the uncached version. The cache middleware checks request.user.is_staff before serving from cache. This eliminates the most common support issue: editors saving changes and seeing stale content. It also means staff see live rendering performance, which keeps template optimization honest.
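
The bypass reduces to a single predicate applied on both the read and the serve path. A minimal sketch, assuming a `request` that exposes `user.is_staff` (the helper names are illustrative, not the production middleware):

```python
from types import SimpleNamespace

def should_serve_from_cache(request):
    """Staff always get a live render, so editors never report
    their own saved changes as 'stale content'."""
    return not request.user.is_staff

def fetch_page(cache, key, request, render):
    if should_serve_from_cache(request):
        cached = cache.get(key)
        if cached is not None:
            return cached
    html = render()    # live render for staff, or a cold miss
    cache[key] = html  # a staff render still warms the cache for visitors
    return html
```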

Locale-Aware Key Isolation

The cache key includes the locale segment. A German visitor and an English visitor hitting the same company page produce different cache keys. Without this, a visitor switching languages would see the previous locale's cached version. The initial implementation missed this and served German content to English visitors for one company whose German profile was more popular.
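
With the key format above, isolation follows directly from the locale segment. A minimal check (the version string is a placeholder standing in for the truncated timestamp hash):

```python
def company_cache_key(slug, locale, version):
    """Simplified form of the generator above; `version` stands in
    for the truncated last_modified hash."""
    return f"company:{slug}:{locale}:{version}"

key_de = company_cache_key("acme-gmbh", "de", "a1b2c3d4")
key_en = company_cache_key("acme-gmbh", "en", "a1b2c3d4")
# Same company, same version, different locales -> distinct entries,
# so switching languages can never serve the other locale's page.
```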

03.
THE RESULT

12ms Page Loads

Average page load dropped from 340ms (uncached) to 12ms (cached). Cache hit rate stabilized at 94% after the first hour of traffic. Database queries per page dropped from 11 to 1 (the timestamp check). Zero stale-data incidents in 18 months of production. Zero cross-tenant cache leakage. The cache stores roughly 800 entries (200 companies × 2 locales × 2 fragment types) using under 40MB of memory.

KEY METRICS

12ms Avg Load Time
94% Cache Hit Rate
1 DB Query/Page
WHAT THE CLIENT SAYS

"Pages load instantly now. Editors see their changes immediately. We went from 3-4 support tickets per week about stale content to zero. The best infrastructure is the kind nobody thinks about."

Technical Lead

B2B Marketplace · Platform Team

FAQ

Why not use Redis tags or cache versioning?

What happens when the cache fills up?

Does the timestamp check add latency?

TECHNOLOGY STACK