Package httpcache provides an http.RoundTripper implementation that works as a mostly RFC 7234 compliant cache for HTTP responses.
Note: This is a maintained fork of gregjones/httpcache, which is no longer actively maintained. This fork aims to modernize the codebase, fix bugs, and add new features while maintaining backward compatibility.
- ✅ RFC 7234 Compliant (~95% compliance) - Implements HTTP caching standards
- ✅ Age header calculation (Section 4.2.3)
- ✅ Warning headers for stale responses (Section 5.5)
- ✅ must-revalidate directive enforcement (Section 5.2.2.1)
- ✅ Pragma: no-cache support (Section 5.4)
- ✅ Cache invalidation on unsafe methods (Section 4.4)
- ✅ Multiple Backends - Memory, Disk, Redis, LevelDB, Memcache
- ✅ Thread-Safe - Safe for concurrent use
- ✅ Zero Dependencies - Core package uses only Go standard library
- ✅ Easy Integration - Drop-in replacement for `http.Client`
- ✅ ETag & Validation - Automatic cache revalidation
- ✅ Stale-If-Error - Resilient caching with RFC 5861 support
- ✅ Stale-While-Revalidate - Async cache updates for better performance
- ✅ Private Cache - Suitable for web browsers and API clients
```go
package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/sandrolain/httpcache"
)

func main() {
	// Create a cached HTTP client
	transport := httpcache.NewMemoryCacheTransport()
	client := transport.Client()

	// Make requests - the second request will be served from the cache
	resp, _ := client.Get("https://example.com")
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()

	// Check if the response came from cache
	if resp.Header.Get(httpcache.XFromCache) == "1" {
		fmt.Println("Response was cached!")
	}
}
```

Install with:

```sh
go get github.com/sandrolain/httpcache
```

httpcache supports multiple storage backends. Choose the one that fits your use case:
| Backend | Speed | Persistence | Distributed | Use Case |
|---|---|---|---|---|
| Memory | ⚡⚡⚡ Fastest | ❌ No | ❌ No | Development, testing, single-instance apps |
| Disk | ⚡ Slow | ✅ Yes | ❌ No | Desktop apps, CLI tools |
| LevelDB | ⚡⚡ Fast | ✅ Yes | ❌ No | High-performance local cache |
| Redis | ⚡⚡ Fast | ✅ Configurable | ✅ Yes | Microservices, distributed systems |
| Memcache | ⚡⚡ Fast | ❌ No | ✅ Yes | Distributed systems, App Engine |
- `sourcegraph.com/sourcegraph/s3cache` - Amazon S3 storage
- `github.com/die-net/lrucache` - In-memory with LRU eviction
- `github.com/die-net/lrucache/twotier` - Multi-tier caching (e.g., memory + disk)
- `github.com/birkelund/boltdbcache` - BoltDB implementation
- `github.com/moul/hcfilters` - HTTP cache middleware and filters for advanced cache control
```go
transport := httpcache.NewMemoryCacheTransport()
client := transport.Client()
```

Best for: Testing, development, single-instance applications
```go
import "github.com/sandrolain/httpcache/diskcache"

cache := diskcache.New("/tmp/my-cache")
transport := httpcache.NewTransport(cache)
client := &http.Client{Transport: transport}
```

Best for: Desktop applications, CLI tools that run repeatedly
⚠️ Breaking Change: The disk cache hashing algorithm has been changed from MD5 to SHA-256 for security reasons. Existing caches created with the original fork (gregjones/httpcache) are not compatible and will need to be regenerated.
```go
import (
	"github.com/gomodule/redigo/redis"

	rediscache "github.com/sandrolain/httpcache/redis"
)

conn, _ := redis.Dial("tcp", "localhost:6379")
cache := rediscache.NewWithClient(conn)
transport := httpcache.NewTransport(cache)
client := &http.Client{Transport: transport}
```

Best for: Microservices, distributed systems, high availability
```go
import "github.com/sandrolain/httpcache/leveldbcache"

cache, _ := leveldbcache.New("/path/to/cache")
transport := httpcache.NewTransport(cache)
client := &http.Client{Transport: transport}
```

Best for: High-performance local caching with persistence
```go
// Use a custom underlying transport
transport := httpcache.NewTransport(cache)
transport.Transport = &http.Transport{
	MaxIdleConns:       100,
	IdleConnTimeout:    90 * time.Second,
	DisableCompression: false,
}

transport.MarkCachedResponses = true // Add X-From-Cache header

client := &http.Client{
	Transport: transport,
	Timeout:   30 * time.Second,
}
```

See the examples/ directory for complete, runnable examples:
- Basic - Simple in-memory caching
- Disk Cache - Persistent filesystem cache
- Redis - Distributed caching with Redis
- LevelDB - High-performance persistent cache
- Custom Backend - Build your own cache backend
Each example includes:
- Complete working code
- Detailed README
- Use case explanations
- Best practices
httpcache implements RFC 7234 (HTTP Caching) by:
- Intercepting HTTP requests through a custom `RoundTripper`
- Checking the cache for matching responses
- Validating freshness using Cache-Control headers and Age calculation
- Revalidating with ETag/Last-Modified when stale (respecting must-revalidate)
- Updating cache with new responses
- Invalidating cache on unsafe methods (POST, PUT, DELETE, PATCH)
- Adding headers (Age, Warning) per RFC specifications
Request Headers:

- `Cache-Control` (max-age, max-stale, min-fresh, no-cache, no-store, only-if-cached)
- `Pragma: no-cache` (HTTP/1.0 backward compatibility per RFC 7234 Section 5.4)
- `If-None-Match` (ETag validation)
- `If-Modified-Since` (Last-Modified validation)

Response Headers:

- `Cache-Control` (max-age, no-cache, no-store, must-revalidate, stale-if-error, stale-while-revalidate)
- `ETag` (entity tag validation)
- `Last-Modified` (date-based validation)
- `Expires` (expiration date)
- `Vary` (content negotiation)
- `Age` (time in cache per RFC 7234 Section 4.2.3)
- `Warning` (cache warnings per RFC 7234 Section 5.5)
- `stale-if-error` (RFC 5861)
- `stale-while-revalidate` (RFC 5861)
When MarkCachedResponses is enabled, cached responses include the X-From-Cache header set to "1".
Additionally, the X-Cache-Freshness header indicates the freshness state of the cached response:
- `fresh` - Response is within its max-age and can be served directly
- `stale` - Response has expired and will be revalidated
- `stale-while-revalidate` - Response is stale but can be served immediately while being revalidated asynchronously
- `transparent` - Response should not be served from cache
When a cached response is revalidated with the server (receiving a 304 Not Modified), the X-Revalidated header is also set to "1". This allows you to distinguish between:
- Responses served directly from cache (only `X-From-Cache: 1`)
- Responses that were revalidated with the server (both `X-From-Cache: 1` and `X-Revalidated: 1`)
When a stale response is served due to an error (using stale-if-error), the X-Stale header is set to "1". This indicates:
- Responses served from cache due to backend errors (both `X-From-Cache: 1` and `X-Stale: 1`)
The Transport struct provides several configuration options:
```go
transport := httpcache.NewTransport(cache)

// Mark cached responses with X-From-Cache, X-Revalidated, and X-Stale headers
transport.MarkCachedResponses = true // Default: true

// Skip serving server errors (5xx) from cache, even if fresh
// This forces a new request to the server for error responses
transport.SkipServerErrorsFromCache = true // Default: false
```

`SkipServerErrorsFromCache` is useful when you want to:
- Always get fresh error responses from the server
- Prevent hiding ongoing server issues with cached errors
- Ensure monitoring systems detect real-time server problems
Example:

```go
transport := httpcache.NewMemoryCacheTransport()
transport.SkipServerErrorsFromCache = true
client := transport.Client()

// Any 5xx responses in cache will be bypassed
// and a fresh request will be made to the server
```

httpcache uses Go's standard `log/slog` package for logging. The logger is used to emit warnings for errors that were previously silent, helping you identify potential issues in cache operations.
```go
import (
	"log/slog"
	"os"

	"github.com/sandrolain/httpcache"
)

// Create a custom logger
logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
	Level: slog.LevelWarn,
}))

// Set the logger for httpcache
httpcache.SetLogger(logger)

// Now all httpcache operations will use your custom logger
transport := httpcache.NewMemoryCacheTransport()
client := transport.Client()
```

If no logger is set, httpcache uses `slog.Default()`.
For more information on configuring slog loggers, see the official slog documentation.
Automatically serve stale cached content when the backend is unavailable:
```go
// Server returns 500, but the cached response is served instead
resp, _ := client.Get(url) // Returns cached response, not the 500 error
// Response will have X-From-Cache: 1 and X-Stale: 1 headers
```

This implements RFC 5861 for better resilience.
Improve perceived performance by serving stale content immediately while updating the cache in the background:
```go
transport := httpcache.NewMemoryCacheTransport()

// Optional: Set timeout for async revalidation requests
transport.AsyncRevalidateTimeout = 30 * time.Second // Default: 0 (no timeout)

client := transport.Client()

// Server responds with: Cache-Control: max-age=60, stale-while-revalidate=300
// First request: Fetches from server and caches (60s fresh)
// Second request (after 70s): Returns stale cache immediately + revalidates in background
// Third request (after 80s): Returns fresh cache (updated by background revalidation)
```

This implements the stale-while-revalidate directive from RFC 5861, which:
- Reduces latency: Returns cached response immediately without waiting for revalidation
- Improves UX: Users get instant responses even when cache is slightly stale
- Updates cache: Background goroutine fetches fresh data for subsequent requests
How it works:
- When a response is stale but within the `stale-while-revalidate` window
- The cached response is returned immediately to the client
- A background goroutine makes a fresh request to update the cache
- Subsequent requests get the updated cached response
Configuration:
```go
transport.AsyncRevalidateTimeout = 30 * time.Second // Timeout for background updates
transport.MarkCachedResponses = true                // See X-Cache-Freshness header
```

Detecting stale-while-revalidate responses:

```go
if resp.Header.Get(httpcache.XFreshness) == "stale-while-revalidate" {
	fmt.Println("Serving stale cache, updating in background")
}
```

Differentiate cache entries based on request header values. This is useful when different header values should result in separate cache entries.
Common Use Cases:
- User-specific caching: Different cache per user (via Authorization header)
- Internationalization: Language-specific responses (via Accept-Language)
- API versioning: Version-specific responses (via API-Version header)
- Multi-tenant apps: Tenant-specific responses (via X-Tenant-ID header)
Important: This is different from the HTTP Vary response header mechanism, which is handled separately by httpcache. CacheKeyHeaders allows you to specify which request headers should be included in the cache key generation.
Configuration:
```go
transport := httpcache.NewMemoryCacheTransport()

// Specify headers to include in the cache key
transport.CacheKeyHeaders = []string{"Authorization", "Accept-Language"}

client := transport.Client()

// Each unique combination of Authorization + Accept-Language gets its own cache entry
```

Example Scenario:
```go
transport := httpcache.NewMemoryCacheTransport()
transport.CacheKeyHeaders = []string{"Authorization"}
client := transport.Client()

// Request 1: Authorization: Bearer token1
req1, _ := http.NewRequest("GET", "https://api.example.com/user/profile", nil)
req1.Header.Set("Authorization", "Bearer token1")
resp1, _ := client.Do(req1) // Cache miss, fetches from server
io.Copy(io.Discard, resp1.Body)
resp1.Body.Close()

// Request 2: Authorization: Bearer token2 (different token)
req2, _ := http.NewRequest("GET", "https://api.example.com/user/profile", nil)
req2.Header.Set("Authorization", "Bearer token2")
resp2, _ := client.Do(req2) // Cache miss, fetches from server (different cache entry)
io.Copy(io.Discard, resp2.Body)
resp2.Body.Close()

// Request 3: Authorization: Bearer token1 (same as request 1)
req3, _ := http.NewRequest("GET", "https://api.example.com/user/profile", nil)
req3.Header.Set("Authorization", "Bearer token1")
resp3, _ := client.Do(req3) // Cache hit! Serves the cached response from request 1
io.Copy(io.Discard, resp3.Body)
resp3.Body.Close()

fmt.Println(resp3.Header.Get(httpcache.XFromCache)) // "1"
```

Cache Key Format:
Without `CacheKeyHeaders`:

```
http://api.example.com/data
```

With `CacheKeyHeaders`:

```
http://api.example.com/data|Accept-Language:en|Authorization:Bearer token1
```
Important Notes:
- Header names are case-insensitive (automatically canonicalized)
- Headers are sorted alphabetically for consistent key generation
- Only non-empty header values are included in the key
- Empty `CacheKeyHeaders` slice maintains backward compatibility (headers are not included in the key)
Vary Header:
Even when using CacheKeyHeaders, the server's Vary header is still validated. This means:
- Matching headers: If `CacheKeyHeaders` includes the same headers as the server's `Vary`, everything works correctly:

  ```go
  transport.CacheKeyHeaders = []string{"Authorization"}
  // Server responds with: Vary: Authorization
  // ✅ Works perfectly - separate cache entries + validation
  ```

- Missing headers: If the server's `Vary` includes headers NOT in `CacheKeyHeaders`, the cache entry will be invalidated:

  ```go
  transport.CacheKeyHeaders = []string{"Authorization"}
  // Server responds with: Vary: Authorization, Accept
  // Request 1: Auth: token1, Accept: json → Cached
  // Request 2: Auth: token1, Accept: html → Same cache key, but Vary validation fails
  // ❌ Cache invalidated and overwritten
  ```
Best Practice: Always include all headers mentioned in server's Vary response in your CacheKeyHeaders configuration to avoid cache invalidation and overwrites.
Override default caching behavior for specific HTTP status codes using the ShouldCache hook:
```go
transport := httpcache.NewMemoryCacheTransport()

// Cache 404 Not Found responses
transport.ShouldCache = func(resp *http.Response) bool {
	return resp.StatusCode == http.StatusNotFound
}

client := transport.Client()

// Now 404 responses with appropriate Cache-Control headers will be cached
```

Default Cacheable Status Codes (per RFC 7231):

- `200` OK
- `203` Non-Authoritative Information
- `204` No Content
- `206` Partial Content
- `300` Multiple Choices
- `301` Moved Permanently
- `404` Not Found
- `405` Method Not Allowed
- `410` Gone
- `414` Request-URI Too Long
- `501` Not Implemented
Use Cases:
```go
// Cache temporary redirects (302, 307)
transport.ShouldCache = func(resp *http.Response) bool {
	return resp.StatusCode == http.StatusFound ||
		resp.StatusCode == http.StatusTemporaryRedirect
}

// Cache specific error pages for offline support
transport.ShouldCache = func(resp *http.Response) bool {
	if resp.StatusCode == http.StatusNotFound {
		// Only cache 404s from a specific domain
		return strings.HasPrefix(resp.Request.URL.Host, "api.example.com")
	}
	return false
}

// Complex caching logic
transport.ShouldCache = func(resp *http.Response) bool {
	switch resp.StatusCode {
	case http.StatusOK:
		return true // Already cached by default, but explicit
	case http.StatusNotFound:
		// Cache 404s but only for GET requests with a specific header
		return resp.Request.Method == "GET" &&
			resp.Request.Header.Get("X-Cache-404") == "true"
	case http.StatusBadRequest:
		// Cache validation errors to reduce server load
		return resp.Header.Get("Content-Type") == "application/json"
	default:
		return false
	}
}
```

Important Notes:
- `ShouldCache` is called AFTER checking `Cache-Control` headers
- Responses without appropriate cache headers (e.g., `no-store`, `max-age=0`) are never cached
- The hook only adds additional status codes to cache; it doesn't remove default ones
- Set `ShouldCache = nil` to use the default RFC 7231 behavior
The `Vary` response header is currently used for validation only, not for creating separate cache entries.
What this means:
- The cached response stores the values of the headers listed in `Vary` (e.g., `Accept`, `Accept-Language`)
- When retrieving from cache, httpcache checks whether the current request headers match the stored values
- If they don't match, the cached entry is considered invalid and a new request is made
- However, the new response overwrites the previous cache entry instead of creating a separate entry
Example of current behavior:
```go
// Server responds with: Vary: Accept

// Request 1: Accept: application/json
resp1, _ := client.Do(req1) // Fetches from server, caches with Accept: application/json

// Request 2: Accept: text/html (different Accept header)
resp2, _ := client.Do(req2) // Cache miss (doesn't match), fetches from server
// ❌ OVERWRITES the previous cache entry

// Request 3: Accept: application/json (same as Request 1)
resp3, _ := client.Do(req3) // ❌ Cache miss! (was overwritten by Request 2)
```

Recommended Solution:
Use CacheKeyHeaders to create true separate cache entries based on request headers:
```go
transport := httpcache.NewMemoryCacheTransport()
transport.CacheKeyHeaders = []string{"Accept", "Accept-Language"}

// Now each unique combination creates a separate cache entry
req1.Header.Set("Accept", "application/json")
client.Do(req1) // Cached separately

req2.Header.Set("Accept", "text/html")
client.Do(req2) // Cached separately (doesn't overwrite req1)

req3.Header.Set("Accept", "application/json")
client.Do(req3) // ✅ Cache hit! (separate entry still exists)
```

Note: This limitation may be addressed in a future version to fully comply with RFC 7234 Section 4.1 (Vary header semantics).
httpcache implements several important RFC 7234 features for production-ready HTTP caching:
The Age header is automatically calculated and added to all cached responses, indicating how long the response has been in the cache:
```go
resp, _ := client.Get(url)
age := resp.Header.Get("Age") // e.g., "120" (seconds)
// Clients can calculate: time_until_expiration = max-age - age
```

Warning headers are automatically added to inform clients about cache conditions:
- `Warning: 110 - "Response is Stale"` - When serving stale content
- `Warning: 111 - "Revalidation Failed"` - When revalidation fails and stale content is served
```go
resp, _ := client.Get(url)
if warning := resp.Header.Get("Warning"); warning != "" {
	log.Printf("Cache warning: %s", warning)
}
```

The must-revalidate directive is enforced, ensuring that stale responses are always revalidated:
```go
// Server response: Cache-Control: max-age=60, must-revalidate
// After 60s, the cache MUST revalidate (ignores the client's max-stale)
```

This is critical for security-sensitive content that must not be served stale.
HTTP/1.0 backward compatibility via Pragma: no-cache request header:
```go
req, _ := http.NewRequest("GET", url, nil)
req.Header.Set("Pragma", "no-cache")
resp, _ := client.Do(req)
// Bypasses the cache (when Cache-Control is absent)
```

The cache is automatically invalidated for affected URIs when unsafe methods succeed:
```go
// POST/PUT/DELETE/PATCH with a 2xx or 3xx response invalidates:
// - The request URI
// - The Location header URI (if present)
// - The Content-Location header URI (if present)
client.Post(url, "application/json", body) // Invalidates the GET cache for url
```

This ensures cache consistency after data modifications.
Implement the Cache interface for custom backends:
```go
type Cache interface {
	Get(key string) (responseBytes []byte, ok bool)
	Set(key string, responseBytes []byte)
	Delete(key string)
}
```

See examples/custom-backend for a complete example.
The Problem:
If you use the same Transport instance to make requests on behalf of different users, responses may be incorrectly shared between users unless properly configured:
```go
// ❌ DANGEROUS: Same transport for different users
transport := httpcache.NewMemoryCacheTransport()
client := transport.Client()

// User 1 requests their profile
req1, _ := http.NewRequest("GET", "https://api.example.com/user/profile", nil)
req1.Header.Set("Authorization", "Bearer user1_token")
client.Do(req1) // Cached with key: https://api.example.com/user/profile

// User 2 requests their profile (same URL!)
req2, _ := http.NewRequest("GET", "https://api.example.com/user/profile", nil)
req2.Header.Set("Authorization", "Bearer user2_token")
client.Do(req2) // ❌ Gets User 1's cached response!
```

Solutions:
- Use `CacheKeyHeaders` to include user-identifying headers in cache keys:
```go
// ✅ SAFE: Different cache entries per Authorization token
transport := httpcache.NewMemoryCacheTransport()
transport.CacheKeyHeaders = []string{"Authorization"}
client := transport.Client()

// Each user gets their own cache entry
req1.Header.Set("Authorization", "Bearer user1_token")
client.Do(req1) // Cached: https://api.example.com/user/profile|Authorization:Bearer user1_token

req2.Header.Set("Authorization", "Bearer user2_token")
client.Do(req2) // Cached: https://api.example.com/user/profile|Authorization:Bearer user2_token
```

- Server-side `Vary` header - ⚠️ Current Limitation: While the `Vary` response header is supported for validation, the current implementation does NOT create separate cache entries for different header values. Instead, it overwrites the previous cache entry for the same URL.
```go
// Server response headers:
// Cache-Control: max-age=3600
// Vary: Authorization

// ❌ CURRENT BEHAVIOR:
// Request 1 (Authorization: Bearer token1) -> Cached
// Request 2 (Authorization: Bearer token2) -> Overwrites previous cache
// Request 3 (Authorization: Bearer token1) -> Cache miss (was overwritten)

// ✅ USE CacheKeyHeaders INSTEAD for true separate cache entries:
transport.CacheKeyHeaders = []string{"Authorization"}
```

Important: If you rely on the server's `Vary` header for cache separation, you must also configure `CacheKeyHeaders` with the same headers to ensure separate cache entries are created. This is a known limitation that may be addressed in a future version.
- Prevent caching of user-specific data - Use `Cache-Control` or `Pragma` headers:
```go
// Server response for sensitive user data:
// Cache-Control: private, no-store
// or
// Pragma: no-cache
// These responses will never be cached
```

httpcache ignores the `private` directive because it's designed as a "private cache". This means:

- `Cache-Control: private` does NOT prevent caching in httpcache
- This is correct for single-user scenarios (browser, CLI tool)
- This is problematic in multi-user scenarios (web server, API gateway)
Why this matters:
```go
// Server tries to prevent shared caching:
// HTTP/1.1 200 OK
// Cache-Control: private, max-age=3600
// {"user": "john", "email": "john@example.com"}

// httpcache IGNORES "private" and caches it anyway!
// If the same Transport serves multiple users → data leak!
```

Workarounds for multi-user applications:
- Best: Use `Cache-Control: no-store` (httpcache respects this)
- Alternative: Configure `CacheKeyHeaders` to separate the cache by user
- Alternative: Use separate Transport instances per user
- Separate Transport per user - Create individual cache instances:
```go
// ✅ SAFE: Each user has an isolated cache
func getClientForUser(userID string) *http.Client {
	cache := diskcache.New(fmt.Sprintf("/tmp/cache/%s", userID))
	transport := httpcache.NewTransport(cache)
	return &http.Client{Transport: transport}
}
```

When is this a concern?
- ✅ Web servers handling requests from multiple users
- ✅ API gateways proxying authenticated requests
- ✅ Background workers processing jobs for different accounts
- ❌ CLI tools (single user per instance)
- ❌ Desktop apps (single user per instance)
- ❌ Single-user services
Best Practice:
Always use CacheKeyHeaders or ensure the server sends appropriate Vary headers when caching user-specific or tenant-specific data.
Note: When using `CacheKeyHeaders` with sensitive headers (e.g., `Authorization`, `X-API-Key`), these values may be stored in plain text in the cache backend.
- Private cache only - Not suitable for shared proxy caching
- No automatic eviction - MemoryCache grows unbounded (use size-limited backends)
- GET/HEAD only - Only caches GET and HEAD requests
- No range requests - Range requests bypass the cache
Typical performance characteristics:
| Operation | Memory | Disk | LevelDB | Redis (local) |
|---|---|---|---|---|
| Cache Hit | ~1µs | ~1ms | ~100µs | ~1ms |
| Cache Miss | Network latency + ~1µs overhead (all backends) | | | |
| Storage | RAM | Disk | Disk (compressed) | RAM/Disk |
Benchmarks vary based on response size, hardware, and network conditions.
```sh
# Run all tests
go test ./...

# Run with coverage
go test -cover ./...

# Run benchmarks
go test -bench=. ./...
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Submit a pull request
This project is a maintained fork of gregjones/httpcache, originally created by @gregjones. The original project was archived in 2023.
We're grateful for the original work and continue to maintain this project with:
- Bug fixes and security updates
- Modern Go practices and tooling
- Enhanced documentation and examples
- Backward compatibility with the original
Copyright (c) 2012 Greg Jones (original)
Copyright (c) 2025 Sandro Lain (fork maintainer)
- 📖 Documentation
- 💬 Issues
- 🔧 Examples