# Memcached Backend
Requires:

```bash
pip install cachekit[memcached]
```
Stores cached values in Memcached with consistent hashing across multiple servers: high-throughput, volatile in-memory caching shared across processes and pods.
```python
from cachekit import cache
from cachekit.backends.memcached import MemcachedBackend

# Use default configuration (127.0.0.1:11211)
backend = MemcachedBackend()

@cache(backend=backend)
def cached_function():
    return expensive_computation()
```

Configuration via environment variables:

```bash
# Server list (JSON array format)
export CACHEKIT_MEMCACHED_SERVERS='["mc1:11211", "mc2:11211"]'

# Timeouts
export CACHEKIT_MEMCACHED_CONNECT_TIMEOUT=2.0  # Default: 2.0 seconds
export CACHEKIT_MEMCACHED_TIMEOUT=1.0          # Default: 1.0 seconds

# Connection pool
export CACHEKIT_MEMCACHED_MAX_POOL_SIZE=10     # Default: 10 per server
export CACHEKIT_MEMCACHED_RETRY_ATTEMPTS=2     # Default: 2

# Optional key prefix
export CACHEKIT_MEMCACHED_KEY_PREFIX="myapp:"  # Default: "" (none)
```

Config objects don't require a running Memcached server:
```python
from cachekit.backends.memcached import MemcachedBackendConfig

config = MemcachedBackendConfig(
    servers=["mc1:11211", "mc2:11211", "mc3:11211"],
    connect_timeout=1.0,
    timeout=0.5,
    max_pool_size=20,
    key_prefix="myapp:",
)
```

To use the config with a live backend:
```python
from cachekit.backends.memcached import MemcachedBackend, MemcachedBackendConfig

config = MemcachedBackendConfig(
    servers=["mc1:11211", "mc2:11211", "mc3:11211"],
    connect_timeout=1.0,
    timeout=0.5,
    max_pool_size=20,
    key_prefix="myapp:",
)

backend = MemcachedBackend(config)
```

Use MemcachedBackend when:
- You need hot in-memory caching with sub-millisecond reads
- You need a shared cache across multiple processes/pods (like Redis, but simpler)
- Your workload is high-throughput and read-heavy
- Your application already runs Memcached infrastructure
When NOT to use:
- You need persistence (Memcached is volatile; data is lost on restart)
- You need distributed locking (use Redis instead)
- You need TTL inspection/refresh (Memcached doesn't support it)
- Your cache values exceed 1MB (Memcached's default slab limit)
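The 1MB limit means anything you cache must serialize below Memcached's default item size. A minimal pre-flight check can make the failure mode explicit; this is an illustrative sketch (the helper name and the use of pickle are assumptions, not cachekit API):

```python
import pickle

# Default Memcached item size limit (server-configurable via the -I flag).
MAX_ITEM_BYTES = 1024 * 1024

def fits_in_memcached(value, limit=MAX_ITEM_BYTES):
    """Return True if the pickled value fits under the item size limit."""
    return len(pickle.dumps(value)) <= limit

small = {"user_id": 42, "name": "alice"}
big = "x" * (2 * 1024 * 1024)  # ~2 MB string, over the default limit

fits_in_memcached(small)  # True
fits_in_memcached(big)    # False
```

A guard like this lets an application fall back to another store (or skip caching) instead of getting a rejected set at runtime.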
Characteristics:

- Latency: 1–5ms per operation (network-dependent)
- Throughput: Very high (multi-threaded C server)
- TTL support: Yes (max 30 days)
- Cross-process: Yes (shared across pods)
- Persistence: No (volatile memory only)
- Consistent hashing: Yes (via pymemcache HashClient)
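Consistent hashing is what keeps key placement stable when the server list changes: each server owns many points on a hash ring, and a key belongs to the first server point clockwise from its hash. The idea can be sketched with a toy ring using only the standard library (this is a conceptual illustration, not pymemcache's actual implementation):

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: each server owns many points on the ring."""

    def __init__(self, servers, points_per_server=100):
        self._ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(points_per_server)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_server(self, key):
        # Walk clockwise to the first server point at or after the key's
        # hash, wrapping around the ring.
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._hashes)
        return self._ring[idx][1]

ring = HashRing(["mc1:11211", "mc2:11211", "mc3:11211"])
server = ring.get_server("user:42")  # the same key always maps to the same server

# Dropping a server only remaps the keys that lived on it; keys on the
# surviving servers keep their placement, because those servers' ring
# points are unchanged.
smaller = HashRing(["mc1:11211", "mc3:11211"])
```

This minimal-remapping property is why a Memcached node failure invalidates only a fraction of the cache rather than reshuffling every key, as naive `hash(key) % n_servers` placement would.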
Limitations:

- No persistence: All data is in-memory. Server restart = data loss.
- No locking: No distributed lock support (use Redis for stampede prevention).
- 30-day TTL maximum: TTLs exceeding 30 days are automatically clamped.
- 1MB value limit: Default Memcached slab size limits values to ~1MB.
- No TTL inspection: Cannot query remaining TTL on a key.
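The 30-day clamp exists because the Memcached protocol interprets any expiration value above 2,592,000 seconds (30 days) as an absolute unix timestamp rather than a relative TTL. The clamping behavior amounts to the following sketch (the helper name is illustrative, not cachekit's internal API):

```python
# Memcached treats expiration values > 30 days as absolute unix
# timestamps, so longer TTLs must be clamped to stay relative.
MEMCACHED_MAX_TTL = 30 * 24 * 60 * 60  # 2,592,000 seconds

def clamp_ttl(ttl_seconds: int) -> int:
    """Clamp a requested TTL to Memcached's 30-day relative maximum."""
    return min(int(ttl_seconds), MEMCACHED_MAX_TTL)

clamp_ttl(3600)        # unchanged: 3600
clamp_ttl(90 * 86400)  # 90 days clamped to 2592000
```

Without clamping, a 90-day TTL would be read as the timestamp "1970-03-31" and the entry would expire immediately.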
See also:

- Backend Guide — Backend comparison and resolution priority
- Redis Backend — Persistent shared caching with locking support
- File Backend — Single-process local caching without infrastructure
- Configuration Guide — Full environment variable reference