
Practical Strategies for Centralized Cache-Control in GET APIs

by 새싹 아빠 2026. 1. 20.

Recently, while designing a server, I found myself thinking deeply about cache strategies for read-only APIs. As traffic grows, questions like how to reduce server load and how to apply caching without introducing security risks become unavoidable.

This article starts with the following questions:

  • Why is Cache-Control necessary for GET read APIs?
  • What is the difference between Redis cache and HTTP cache?
  • Should Cache-Control be defined in every controller?
  • How can cache policies be centrally managed?

 

1. Why Do We Use Caching? (The Real Reason)

Caching is often explained as a way to speed up responses, but in real-world systems, there is a more important purpose.

Caching prevents the server from doing the same work repeatedly as traffic increases.

For example, if a schedule list API is called thousands of times per second and each request hits the database, the database will inevitably become the bottleneck. Caching eliminates this repetition and protects both the server and the database.

 

2. HTTP Cache vs Redis Cache

2-1. HTTP Cache (Browser / CDN / Proxy)

  • Operates outside the server (browser, CDN, reverse proxy)
  • Repeated requests can be served without reaching the server at all
  • Controlled via the Cache-Control header

Cache-Control: max-age=30

With this header, clients and intermediate caches may reuse the response for up to 30 seconds, so repeated identical requests can bypass the server entirely during that window.

2-2. Redis Cache (Application Cache)

  • Operates inside the server
  • Makes no distinction between GET and POST
  • Caches database query and computation results

// Cache-aside: return the cached value, or load it and cache it
val key = "schedule:${date}"
redis.get(key) ?: service.getSchedule(date).also {
    redis.set(key, it, 30) // TTL in seconds
}

The key difference is that HTTP cache prevents requests from reaching the server, while Redis cache prevents database access inside the server.
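The cache-aside flow in the Redis snippet above can be illustrated without Redis at all. The sketch below is a minimal in-memory stand-in with a TTL; the `TtlCache` class and the loader lambda are illustrative, not part of the original article:

```kotlin
// A minimal in-memory stand-in for the Redis cache-aside pattern.
// Entries expire after ttlMillis; expired entries are reloaded on access.
class TtlCache<K, V>(private val ttlMillis: Long) {
    private data class Entry<V>(val value: V, val expiresAt: Long)
    private val store = HashMap<K, Entry<V>>()
    var loads = 0 // counts how often the loader (the "database") was called
        private set

    fun getOrLoad(key: K, loader: (K) -> V): V {
        val now = System.currentTimeMillis()
        val cached = store[key]
        if (cached != null && cached.expiresAt > now) return cached.value
        loads++
        val value = loader(key)
        store[key] = Entry(value, now + ttlMillis)
        return value
    }
}

fun main() {
    val cache = TtlCache<String, String>(ttlMillis = 30_000)
    // Hypothetical loader standing in for service.getSchedule(date)
    val loader = { key: String -> "schedule for $key" }

    cache.getOrLoad("2026-01-20", loader) // miss: hits the loader
    cache.getOrLoad("2026-01-20", loader) // hit: served from memory
    println(cache.loads) // the loader ran only once
}
```

This is exactly what Redis does for the server: the second call never reaches the database, only the cache.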

 

3. Why Should HTTP Cache Never Be Used for Login or Authentication APIs?

HTTP cache is stored on the client or intermediate systems. If login responses or personal data are cached, it can lead to serious security incidents.

That is why authentication-related APIs follow one strict rule:

Cache-Control: no-store

Typical examples include:

  • Login
  • /me, /profile
  • Token refresh

These APIs must disable HTTP caching entirely, and use Redis cache only if necessary.
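In Spring, this can be made explicit per endpoint. A minimal sketch, assuming a hypothetical `/me` controller method (`UserProfile` and `currentUserProfile()` are placeholders):

```kotlin
// Hypothetical profile endpoint: the response must never be stored
// by the browser or any intermediary.
@GetMapping("/me")
fun me(): ResponseEntity<UserProfile> {
    return ResponseEntity.ok()
        .cacheControl(CacheControl.noStore())
        .body(currentUserProfile())
}
```

Spring's `CacheControl.noStore()` emits exactly the `Cache-Control: no-store` header described above.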

 

4. Where Should Cache-Control Be Defined for GET Read APIs?

Initially, Cache-Control is often defined directly in each controller:

return ResponseEntity.ok()
    .cacheControl(CacheControl.maxAge(30, TimeUnit.SECONDS))
    .body(response)

However, as the number of APIs grows, cache policies become scattered and harder to manage.

This is where centralized cache control becomes useful.

 

5. Centralized Cache-Control Using an Interceptor

In Spring, Cache-Control can be managed centrally using a HandlerInterceptor.

5-1. Basic Example

class CacheControlInterceptor : HandlerInterceptor {

    override fun postHandle(
        request: HttpServletRequest,
        response: HttpServletResponse,
        handler: Any,
        modelAndView: ModelAndView?
    ) {
        if (request.requestURI.startsWith("/schedule")) {
            response.setHeader("Cache-Control", "max-age=30")
        }
    }
}

With this approach, all requests to /schedule/** can be governed by a single cache policy.
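For the interceptor to take effect, it must also be registered. A minimal sketch, assuming a standard `WebMvcConfigurer`; the configuration class name and path pattern are illustrative:

```kotlin
@Configuration
class WebConfig : WebMvcConfigurer {

    // Register the interceptor so it runs for schedule read APIs.
    override fun addInterceptors(registry: InterceptorRegistry) {
        registry.addInterceptor(CacheControlInterceptor())
            .addPathPatterns("/schedule/**")
    }
}
```

Restricting the registration to `/schedule/**` here, or registering it globally and branching inside the interceptor, are both workable; the article's examples take the latter approach.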

 

5-2. Safety Measures Required in Real-World Systems

Using the interceptor as-is can be dangerous. In practice, the following conditions must be added:

  • Apply only to GET requests
  • Disable caching for sensitive APIs
  • Respect Cache-Control headers already set by controllers

class CacheControlInterceptor : HandlerInterceptor {

    override fun postHandle(
        request: HttpServletRequest,
        response: HttpServletResponse,
        handler: Any,
        modelAndView: ModelAndView?
    ) {
        // Do not override if Cache-Control is already set
        if (response.getHeader("Cache-Control") != null) return

        // Only apply to GET requests
        if (request.method != "GET") return

        // Disable cache for authentication or personal data APIs
        if (request.requestURI.startsWith("/auth") ||
            request.requestURI.startsWith("/me")) {
            response.setHeader("Cache-Control", "no-store")
            return
        }

        // Apply cache for schedule read APIs
        if (request.requestURI.startsWith("/schedule")) {
            response.setHeader("Cache-Control", "max-age=30")
        }
    }
}
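The branching in the interceptor can also be factored into a pure function that maps a request to a header value. This refactor is not from the original article, but it makes the policy trivial to unit-test without a servlet container:

```kotlin
// Resolves the Cache-Control value for a request, or null when the
// interceptor should leave the response untouched.
fun resolveCacheControl(method: String, uri: String): String? {
    if (method != "GET") return null
    return when {
        uri.startsWith("/auth") || uri.startsWith("/me") -> "no-store"
        uri.startsWith("/schedule") -> "max-age=30"
        else -> null
    }
}

fun main() {
    println(resolveCacheControl("GET", "/schedule/today"))  // max-age=30
    println(resolveCacheControl("POST", "/schedule/today")) // null
    println(resolveCacheControl("GET", "/me"))              // no-store
}
```

The interceptor body then reduces to a single lookup followed by a `setHeader` call.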

 

6. Benefits of Centralized Cache Control

  • Cleaner controller code
  • Cache policy managed in one place
  • Team-wide rules can be enforced consistently

Controllers focus on business logic, while caching is handled as a policy.

 

7. Summary

  • Caching is a tool for stability, not just performance
  • HTTP cache and Redis cache serve different purposes
  • HTTP cache must never be used for sensitive data
  • GET read APIs should explicitly define Cache-Control
  • Interceptors enable centralized cache policy management

Caching is not optional; it is part of the API specification.

By considering cache strategies from the API design stage, you can build a server that remains stable even as traffic grows.