PHP RFC: OPcache Static Cache

Introduction

This RFC proposes OPcache Static Cache: an OPcache-managed shared-memory cache for explicit userland values and selected PHP static state.

The feature is designed for applications that want faster cross-request caching while continuing to run under the traditional PHP request model. It adds explicit cache functions under the OPcache namespace, plus attributes that allow selected static properties and method static variables to survive across requests.

Before OPcache and APCu were split into separate extensions, APC combined opcode caching and userland caching in one place. This proposal moves part of that design space back toward a unified shape, but in a form that matches the modern engine boundary: OPcache already owns persistent script metadata, interned structures, preloading integration, and JIT-adjacent engine hooks, so it is the appropriate subsystem for cache paths that need cooperation from the engine, the VM, and selected internal classes.

Motivation

PHP applications commonly use APCu or external services when they need request-to-request cache state. These options remain useful, but they have costs that are hard to avoid in some workloads.

APCu stores user values outside the request heap. Non-trivial values cross a serialization boundary when stored and again when fetched. Large arrays and object graphs are rebuilt into request-local memory even when the request only needs to read a small part of the value.

The author also attempted to solve these APCu limitations by building colopl_cache, a non-public APCu drop-in extension. That work showed that OPcache compatibility, JIT-heavy workloads, and the Zend VM intervention needed for static-state caching are difficult to provide as an ordinary extension. This RFC therefore proposes implementing the feature in OPcache, where script metadata, shared-memory ownership, JIT assumptions, and VM hooks can be coordinated directly.

Long-running runtimes can keep PHP objects alive across requests, but adopting them safely often requires reviewing request isolation assumptions, service lifetimes, global state, framework bootstrapping, and extension behavior.

Static properties and method static variables are another common cache shape:

final class RouteMetadata
{
    public static function compiled(): array
    {
        static $routes = null;
 
        return $routes ??= self::compileRoutes();
    }
}

This is efficient within one request, but the cached state is discarded at request shutdown. Replacing it with a manual cache usually means adding keys, invalidation, fetch/store error handling, and explicit serialization behavior to application code.

There is also a class of applications that already works around this gap by combining preload with persistent arrays or similar engine-adjacent techniques to keep non-volatile derived data alive across requests. That pattern is already used in practice today, but it exists as a workaround rather than as an explicit, supported API shape. If the engine is already being used this way, it is more reasonable to provide a documented API with well-defined behavior than to force each application to reinvent the pattern through preload-specific bootstrapping.

Not every application wants an external master database or cache service for this kind of state. Some deployments are intentionally self-contained and would rather publish authoritative routing tables, metadata, or other derived structures directly into shared memory and fail loudly if those structures no longer fit. For those workloads, a non-volatile cache is not merely an optimization; it is a useful programming model.

OPcache Static Cache provides an engine-managed middle ground: selected values can be stored in OPcache-owned shared memory, while normal request isolation remains the default for all code that does not opt in.

Proposal

This RFC adds three related capabilities:

- An explicit volatile key/value cache API in the OPcache namespace (the OPcache\volatile_* functions).
- An explicit persistent key/value cache API in the OPcache namespace (the OPcache\persistent_* functions).
- Attributes (#[OPcache\VolatileStatic] and #[OPcache\PersistentStatic]) that allow selected static properties and method static variables to survive across requests.

The caches are disabled by default. Administrators must allocate memory explicitly through new INI directives. If no memory is configured, the new userland functions and attributes are present but the corresponding cache backend is unavailable.

The implementation keeps two separate shared-memory areas: one for the volatile cache backend and one for the persistent cache backend.

Both areas have separate storage headers, hash tables, allocator state, locks, lookup caches, and status reporting.

Why Two Cache Backends

The volatile cache and the persistent cache serve different roles and intentionally do not share one storage policy.

The volatile cache covers APCu-adjacent use cases: values that are useful to keep across requests, but may be dropped under memory pressure and rebuilt by application code. These use cases are included here not because APCu lacks maintenance (it remains an actively maintained extension), but because the proposed behavior cannot be achieved by changes limited to APCu: static-property and method-static integration, VM mutation hooks, OPcache invalidation semantics, shared graph pinning, preload, and JIT assumptions all need coordination from OPcache and the engine. This is the backend used by the explicit OPcache\volatile_* API and by #[OPcache\VolatileStatic].

The persistent cache is for cached data that must not silently evaporate once the application has published it. This is intended for non-volatile state such as precomputed metadata, routing tables, or other values where losing the write should be treated as an application error rather than a routine cache miss. This is the backend used by the explicit OPcache\persistent_* API and by #[OPcache\PersistentStatic].

This is not a purely theoretical distinction. In practice, applications already approximate this kind of state through preload and persistent-array workarounds when they want request-to-request data that behaves more like process-owned state than like an evictable cache entry. The persistent cache turns that demand into an explicit API with visible configuration, status reporting, and failure behavior.

The cost of that guarantee is that the persistent cache is intentionally not the cheapest write path. To preserve the “publish now or fail now” contract, persistent-store paths must validate that the value fits in the configured shared-memory budget before committing it. That means additional size calculation and publication overhead compared with the volatile cache. The persistent cache therefore exists because the semantics are useful, not because it is expected to dominate volatile caches on raw write throughput.

Keeping these roles separate lets the volatile cache behave like a recoverable APCu-style cache, while the persistent cache can enforce stricter store-time guarantees and failure behavior for values that are not allowed to disappear.

Public API

The following class and functions are added to the OPcache namespace:

namespace OPcache;
 
class StaticCacheException extends \Exception {}
 
function volatile_store(string $key, null|bool|int|float|string|array|object $value, int $ttl = 0): bool;
function volatile_store_array(array $values, int $ttl = 0): bool;
function volatile_fetch(string $key, null|bool|int|float|string|array|object $default = null): null|bool|int|float|string|array|object;
function volatile_fetch_array(array $keys, ?array $default = null): ?array;
function volatile_exists(string $key): bool;
function volatile_lock(string $key): bool;
function volatile_delete(string $key): void;
function volatile_delete_array(array $keys): void;
function volatile_clear(): void;
function volatile_cache_info(): array;
 
function persistent_store(string $key, null|bool|int|float|string|array|object $value): void;
function persistent_store_array(array $values): void;
function persistent_fetch(string $key, null|bool|int|float|string|array|object $default = null): null|bool|int|float|string|array|object;
function persistent_fetch_array(array $keys, ?array $default = null): ?array;
function persistent_exists(string $key): bool;
function persistent_lock(string $key): bool;
function persistent_delete(string $key): void;
function persistent_delete_array(array $keys): void;
function persistent_clear(): void;
function persistent_atomic_increment(string $key, int $step = 1): int;
function persistent_atomic_decrement(string $key, int $step = 1): int;
function persistent_cache_info(): array;

Volatile Cache API

OPcache\volatile_store() stores a value in the volatile cache. The optional $ttl is expressed in seconds. A zero TTL means that the entry does not expire by time.

OPcache\volatile_store_array() stores multiple non-empty-string-keyed entries with the same TTL. It validates all keys before storing any entry. If any array key is not a string, or is an empty string, a ValueError is thrown and no entry from that call is stored.
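Under the proposed API, the batch store path might be used as follows (build_config() and build_routes() are hypothetical application helpers):

```php
$ok = OPcache\volatile_store_array([
    'config' => build_config(),
    'routes' => build_routes(),
], ttl: 300);

try {
    // Keys are validated before any entry is stored, so this call
    // stores nothing at all.
    OPcache\volatile_store_array(['' => 'value']);
} catch (ValueError $e) {
    // Empty-string key rejected up front.
}
```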

OPcache\volatile_fetch() returns the stored value. If the key is missing or expired, it returns $default. The default for $default is null, so routine cache misses do not throw.

The first successful single-key fetch for a key and cache epoch attempts to record request-local fetch state as a prototype zval slot reconstructed from the stored payload. Repeated fetches for the same key and epoch copy from that slot when the value is supported by the request-local clone path, so object-free arrays keep PHP's ordinary copy-on-write behavior and avoid repeated PHP value graph reconstruction.

For object-bearing values, OPcache treats the slot as a request-local prototype and returns a fresh object graph produced by an internal clone path that does not invoke userland __clone. Ordinary PHP objects use the std-object clone helper, while engine-vetted OPcache\__DirectCacheSafe internal objects use per-class copy handlers registered by the owning extension. Object identities that are shared inside one fetched graph remain shared inside that graph, but object handles are not shared with values returned by earlier or later fetches. Mutating a fetched object graph therefore does not mutate another fetched value, the request-local prototype, or the stored cache entry.
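A sketch of the object-fetch semantics described above, using a hypothetical userland Cached class:

```php
class Cached { public function __construct(public string $name) {} }

OPcache\volatile_store('obj', new Cached('a'));

$first  = OPcache\volatile_fetch('obj');
$second = OPcache\volatile_fetch('obj');

// Each object-bearing fetch yields an independent graph: $first and
// $second are distinct object handles, and userland __clone is never
// invoked while producing them.
$first->name = 'b';
// $second->name is still 'a', and so is the stored cache entry.
```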

Single-key cache keys are non-empty strings; an empty key raises ValueError. Single-key store values and single-key fetch fallback values are typed as null|bool|int|float|string|array|object. PHP does not have a native type spelling for “object except Closure”, so Closure objects are rejected by argument validation. Resources are likewise rejected as public API values or fallback values.

OPcache\volatile_fetch_array() fetches multiple keys from an input key list. Each array element must be a non-empty string or int; ints are converted to their decimal string key, and other types are rejected without invoking userland conversion such as __toString(). The whole key list is validated and converted before any cache lock is taken. The return value is an associative array keyed by the converted string keys. Its fallback is ?array; missing or expired entries receive null or the supplied array fallback.
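For example, under the proposed key-conversion rules (assuming rejection surfaces as ValueError, as it does for the single-key APIs):

```php
// Int keys are converted to their decimal string form, and the
// result array is keyed by the converted strings. Misses receive
// null (or the supplied array fallback when one is given).
$values = OPcache\volatile_fetch_array(['routes', 42]);
// e.g. ['routes' => <value or null>, '42' => <value or null>]

try {
    OPcache\volatile_fetch_array([new stdClass()]);
} catch (ValueError $e) {
    // Non-string/int keys are rejected before any lock is taken,
    // without invoking userland conversion such as __toString().
}
```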

$miss = new stdClass();
$value = OPcache\volatile_fetch('routes', $miss);
if ($value === $miss) {
    $value = build_routes();
    OPcache\volatile_store('routes', $value, ttl: 300);
}

Applications that need to distinguish a stored null from a miss can choose a sentinel default value or call OPcache\volatile_exists() before fetching.

OPcache\volatile_exists() checks whether a key exists without decoding the stored value. OPcache\volatile_lock() attempts to acquire a request-retained context/key reservation lock. It returns true when the current request acquired or already owns the reservation, and false when another request currently owns the same key or the same reservation lock stripe. A later successful store for that key releases the reservation; a successful delete releases the reservation only when the current request owns that exact key reservation. Request shutdown also releases abandoned reservations.

This is the safe entry-building primitive for OPcache Static Cache. It provides the single-builder property needed by APCu-style apcu_entry() use cases without executing userland callbacks while holding the cache write lock. User code observes a miss, reserves the key with OPcache\volatile_lock(), computes the value in the same request, then publishes it with OPcache\volatile_store(). Other requests that try to commit a public store for the same key wait on the reservation and re-check the key after it is released. Destructive operations, including volatile_delete(), volatile_clear(), and opcache_reset(), do not wait for reservation locks; they remove visible entries under the cache write lock and retire any shared-graph payload that may still be referenced by another request. Plain volatile_fetch() and volatile_exists() do not wait for reservations.
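The single-builder pattern described above can be sketched as follows (build_routes() is a hypothetical application helper):

```php
$miss  = new stdClass();
$value = OPcache\volatile_fetch('routes', $miss);

if ($value === $miss) {
    if (OPcache\volatile_lock('routes')) {
        // This request owns the reservation: build once, then publish.
        // The successful store releases the reservation.
        $value = build_routes();
        OPcache\volatile_store('routes', $value, ttl: 300);
    } else {
        // Another request owns the reservation. Plain fetches do not
        // wait on it, so either re-check later or build request-locally.
        $value = build_routes();
    }
}
```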

OPcache\volatile_delete(), OPcache\volatile_delete_array(), and OPcache\volatile_clear() remove entries from the volatile cache namespace.

If the volatile cache cannot store a value due to memory pressure, volatile_store() and volatile_store_array() return false after attempting the configured recovery policy. The recovery path first expunges expired entries. If allocation still fails because free payload space is fragmented, the allocator compacts movable blocks only when the requested payload can fit in a contiguous free block after compaction. After key validation succeeds, volatile_store_array() stores entries in iteration order; if a later value cannot be stored, entries already stored by the same call remain visible.

The volatile API intentionally does not provide volatile_atomic_increment() or volatile_atomic_decrement(). A volatile entry may evaporate because of TTL expiry, explicit clearing, or memory-pressure recovery, so the continuity expected from an atomic counter cannot be guaranteed across requests. Atomic increment and decrement operations are therefore limited to the persistent cache backend.

Persistent Cache API

OPcache\persistent_store() stores a value in the persistent cache and returns void on success. It has no TTL. The persistent cache is intended for non-volatile cross-request state.

OPcache\persistent_store_array() stores multiple non-empty-string-keyed entries. Like volatile_store_array(), it validates all keys before storing any entry. If any array key is not a string, or is an empty string, a ValueError is thrown and no entry from that call is stored.

If the value cannot be stored because the persistent cache is exhausted, OPcache\persistent_store() throws OPcache\StaticCacheException. The persistent path may compact allocator fragmentation when movable allocated blocks can be packed so the existing free space becomes the requested contiguous payload block, but it does not expunge TTL entries, evict valid entries, or clear the cache as a pressure recovery path. This mirrors the stricter failure mode used by #[OPcache\PersistentStatic], while keeping the explicit volatile cache API recoverable.
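The stricter failure mode means a publish either succeeds or surfaces as an exception at the store site:

```php
try {
    OPcache\persistent_store('routing_table', $table);
} catch (OPcache\StaticCacheException $e) {
    // The persistent cache is exhausted, or the value cannot be
    // encoded. This is an application/deployment error, not a
    // routine miss: fail loudly rather than run without the data.
    throw $e;
}
```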

OPcache\persistent_fetch(), OPcache\persistent_fetch_array(), and OPcache\persistent_exists() behave like their volatile-cache counterparts, but operate on the persistent cache backend. persistent_lock() uses the same request-retained key reservation semantics as volatile_lock(). persistent_fetch_array() uses the same non-empty-string/int input key-list and default-value behavior as volatile_fetch_array().

OPcache\persistent_delete() and OPcache\persistent_delete_array() remove entries from the explicit persistent cache namespace.

OPcache\persistent_atomic_increment() and OPcache\persistent_atomic_decrement() update integer entries under the persistent cache write lock and return the new value. Persistent atomic operations do not accept a TTL argument. persistent_atomic_increment() creates a missing key with the value of $step under the reservation lock and cache write lock, then releases any reservation held for that key after the successful write. If the current request already reserved the missing key with persistent_lock(), the atomic increment reuses that reservation and releases it after storing the initial value. persistent_atomic_decrement() throws OPcache\StaticCacheException if the key is missing. Both functions throw OPcache\StaticCacheException if the stored value is not an integer.
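A counter sketch under the proposed atomic semantics:

```php
// A missing key is created with the step value, under the
// reservation lock and the cache write lock.
$n = OPcache\persistent_atomic_increment('requests');      // first call: 1
$n = OPcache\persistent_atomic_increment('requests', 10);  // previous + 10

try {
    OPcache\persistent_atomic_decrement('never_stored');
} catch (OPcache\StaticCacheException $e) {
    // Decrement does not create missing keys.
}
```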

OPcache\persistent_clear() removes entries from the explicit persistent cache namespace and static-state entries stored in the persistent cache backend. It does not clear the volatile cache backend.

The persistent cache is shared with #[OPcache\PersistentStatic]. Implementations may reserve internal key prefixes for static-state storage. Applications should treat OPcache-documented prefixes as reserved once they are specified.

Storable Values

The explicit cache APIs and static attributes use the same value storage machinery. The intended value support is:

Value kind | Support | Notes
Scalars | Yes | Stored directly in the cache entry or payload area.
Arrays | Yes | Eligible arrays may use the shared-graph representation; otherwise they use serializer fallback.
Ordinary objects | Yes, subject to serialization support | Supported object graphs may use shared graph or OPcache serializer paths. Objects that cannot be encoded or serialized fail to store.
Internal objects with engine-vetted direct paths | Yes | Only internal classes marked with OPcache\__DirectCacheSafe may use direct restoration.
Closure | No | Closure objects are request-local executable state and cannot be represented as stable shared cache values.
Resources | No | Resources wrap process-local external handles and are rejected.

Top-level resources and Closure objects passed to the single-key explicit store/fetch APIs fail argument validation. Resources and Closure objects reached recursively through arrays, object properties, __serialize() result arrays, __sleep() selected properties, OPcache\*_store_array() values, or static publication are rejected during store preparation. Volatile paths return false, while OPcache\persistent_store(), OPcache\persistent_store_array(), and #[OPcache\PersistentStatic] treat unsupported values as hard storage failures and raise OPcache\StaticCacheException.
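The split between soft and hard failure can be illustrated with a Closure reached through an array value:

```php
$value = ['handler' => fn () => 'x'];   // Closure below the top level

// Volatile paths reject the graph during store preparation and
// report the failure by returning false.
$ok = OPcache\volatile_store('cb', $value);   // false

// Persistent paths treat the same value as a hard storage failure.
try {
    OPcache\persistent_store('cb', $value);
} catch (OPcache\StaticCacheException $e) {
    // Unsupported value reached recursively.
}
```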

Status API

OPcache\volatile_cache_info() returns status for the volatile cache backend. OPcache\persistent_cache_info() returns status for the persistent cache backend.

opcache_get_status() is extended with volatile_cache and persistent_cache arrays. opcache_get_configuration() is extended with the new INI directives.

Attributes

The following attributes are added:

namespace OPcache;
 
#[\Attribute(\Attribute::TARGET_CLASS | \Attribute::TARGET_METHOD | \Attribute::TARGET_PROPERTY)]
final class PersistentStatic {}
 
enum CacheStrategy: int
{
  case Immediate = 0;
  case Tracking = 1;
}
 
#[\Attribute(\Attribute::TARGET_CLASS | \Attribute::TARGET_METHOD | \Attribute::TARGET_PROPERTY)]
final class VolatileStatic
{
  public readonly int $ttl;
 
  public readonly CacheStrategy $strategy;
 
  public function __construct(int $ttl = 0, CacheStrategy $strategy = CacheStrategy::Immediate) {}
}
 
#[\Attribute(\Attribute::TARGET_CLASS)]
final class __DirectCacheSafe {}

OPcache\__DirectCacheSafe is an internal marker. OPcache registers it as a visible attribute class so selected internal classes can be marked, but userland classes cannot apply it: attempting to do so raises a compile error, and the serializer still limits direct restoration to internal bases.

#[OPcache\PersistentStatic] persists selected static state in the persistent cache. This is for non-volatile state where losing the write due to memory exhaustion is considered an application error.

#[OPcache\VolatileStatic] uses the same restore machinery, but always stores state in the volatile cache regardless of strategy; it never uses the persistent cache backend.

VolatileStatic accepts an optional $ttl argument of type int and an optional $strategy argument of type OPcache\CacheStrategy. The TTL is expressed in seconds. A zero TTL means that the entry does not expire by time. Each publication of a class, static-property, or method-static entry applies the configured TTL and therefore refreshes the expiration time. The default strategy is OPcache\CacheStrategy::Immediate. OPcache\CacheStrategy::Tracking defers publication until request shutdown and republishes only the static roots or class snapshots dirtied during the request.
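For example, a class-level VolatileStatic with a TTL and the deferred strategy might look like this:

```php
// Each publication of the class snapshot applies ttl: 300 again,
// so active writes keep refreshing the expiration time.
#[OPcache\VolatileStatic(ttl: 300, strategy: OPcache\CacheStrategy::Tracking)]
final class RouteCache
{
    public static array $routes = [];
}
```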

Both attributes can be applied to:

- Classes, caching the class's static properties and its methods' static variables together as a class blob.
- Static properties.
- Methods, caching the method's static variables.

The attribute form is intentionally more than syntactic sugar over *_fetch(). The explicit key/value fetch APIs must return an independent PHP value for each object-bearing fetch, so repeated object reads either reconstruct the PHP value graph from the stored payload or clone from the request-local prototype using OPcache-controlled ordinary-object and OPcache\__DirectCacheSafe copy handlers. Attribute-backed static properties and method static variables have different semantics. Once restored into the request's static slot, ordinary reads use that slot directly and do not need to produce a fresh object graph for every access. OPcache\__DirectCacheSafe internal state therefore pays either the restore or prototype-copy cost at explicit-fetch time, but only the static-slot initialization cost for attribute-backed static reads. Attributes cover object-heavy static state without making every read produce a fresh object graph.

Applying both PersistentStatic and VolatileStatic to the same target is a fatal error.

Class-level attributes are not inherited. If a parent class is annotated with #[OPcache\PersistentStatic] or #[OPcache\VolatileStatic], a child class is not automatically cached; the child class must declare its own attribute. Inherited static properties and inherited methods are likewise not treated as child-owned attribute targets.

Example:

#[OPcache\PersistentStatic]
final class Metadata
{
    public static array $routes = [];
 
    public static function compiled(string $name): mixed
    {
        static $index = [];
 
        return $index[$name] ??= self::build($name);
    }
}

State is restored lazily when static storage or method static variables are initialized or accessed. The publication point and mutation tracking behavior are controlled by the attribute and, for VolatileStatic, by its strategy. The selected VolatileStatic strategy applies to class, static-property, and method-static targets; CacheStrategy::Tracking is not limited to class-level attributes.

Publication behavior is:

Attribute target | Backend | Array mutation behavior | Object mutation behavior | Publication point
#[OPcache\VolatileStatic] or #[OPcache\VolatileStatic(strategy: OPcache\CacheStrategy::Immediate)] | Volatile cache | No recursive tracking after the root snapshot | No recursive tracking after the root snapshot | Root assignments publish an immediate snapshot.
#[OPcache\VolatileStatic(strategy: OPcache\CacheStrategy::Tracking)] | Volatile cache | Restored reachable arrays are tracked as dirty markers; shared arrays dirty all owners | Restored reachable objects are tracked as dirty markers; shared objects dirty all owners | Request shutdown publishes changed static roots/class blobs; read-only requests skip publication.
#[OPcache\PersistentStatic] class | Persistent cache | Arrays held by the class blob, without crossing an object boundary, are published immediately after mutation | No recursive object-property mutation publication | Root assignments publish immediately; array mutations publish immediately; storage failure throws OPcache\StaticCacheException at the assignment or mutation site.
#[OPcache\PersistentStatic] property or method | Persistent cache | Arrays held by the static root, without crossing an object boundary, are published immediately after mutation | No recursive object-property mutation publication | Root assignments publish immediately; array mutations publish immediately; storage failure throws OPcache\StaticCacheException at the assignment or mutation site.

For CacheStrategy::Immediate, OPcache snapshots the root assigned value at the store point and writes it to the volatile cache immediately. The assigned array or object graph is resolved when the root is assigned, so unsupported values and memory pressure are detected at the store point. Later mutations to the same object or nested array are request-local unless the root static property or method static variable is assigned again.

For method static variables using CacheStrategy::Immediate, OPcache observes the reference update performed by the VM when the static variable receives a new root value. This lets method-level VolatileStatic publish an assignment-time snapshot rather than waiting for request shutdown.
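The assignment-time snapshot behavior of CacheStrategy::Immediate can be sketched as:

```php
final class Example
{
    #[OPcache\VolatileStatic]   // Immediate is the default strategy
    public static array $data = [];
}

Example::$data = ['a' => 1];     // root assignment: snapshot published now
Example::$data['b'] = 2;         // nested mutation: request-local only
Example::$data = Example::$data; // reassigning the root publishes again
```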

For class-level CacheStrategy::Immediate, static properties and method static variables are stored together as a class blob in the volatile cache. Root static-property assignments and method static-variable assignments update that class blob immediately. Before a class blob is published, OPcache synchronizes initialized method static-variable slots into the blob's method section. Uninitialized method static variables keep their dynamic-initializer sentinel behavior; the blob does not eagerly create null members for them. The class blob uses the volatile-static key namespace and the volatile-cache shared-memory context.

For CacheStrategy::Tracking, OPcache restores the root value, records the original root state, and publishes at request shutdown only when that root or its restored reachable graph changed. Root assignments are detected by comparing the request-shutdown slot value with the original value. Arrays and objects decoded from the restored graph are registered as dirty markers; their mutation hooks mark every owning root or class blob dirty, and the actual store is deferred until request shutdown. If the same restored array or object is reachable from multiple cached roots, mutating that shared dependency causes all affected roots to be published. Tracking starts from the value that has been assigned into the static root; later changes to the outer local variable or temporary that was used for that assignment are not themselves tracked unless they still mutate the same reachable array/object identity now held by the static root. A request that only reads a tracking slot does not republish that slot.
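By contrast, a Tracking slot defers publication until shutdown:

```php
final class Counter
{
    #[OPcache\VolatileStatic(strategy: OPcache\CacheStrategy::Tracking)]
    public static array $stats = [];
}

// Restored lazily from the volatile cache on first access. The
// mutation marks the root dirty; the actual store is deferred until
// request shutdown. A request that only reads does not republish.
Counter::$stats['hits'] = (Counter::$stats['hits'] ?? 0) + 1;
```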

For #[OPcache\PersistentStatic], direct root assignments to static properties and method static variables are treated as immediate persistent-store operations. After the assignment succeeds, OPcache snapshots the root or class blob and stores it in the persistent cache immediately. This validates serialization and shared-memory capacity at assignment time instead of deferring the work until request shutdown.

Arrays currently held by PersistentStatic state are also strict persistent-store state. After an array has been assigned to or restored into the static root, OPcache registers that array and nested arrays reachable from it without crossing an object boundary with the VM mutation hooks. Appending to the array or updating one of its elements publishes the affected root or class blob to persistent cache SHM immediately after the mutation completes, as long as the post-mutation array identity is still reachable from the current static root or class blob through arrays only. If copy-on-write separation moves the write to a local copy outside that boundary, the pending publication is discarded. If the updated static value cannot be stored because the persistent cache is exhausted or the value cannot be encoded, the request throws StaticCacheException.

#[OPcache\PersistentStatic]
final class Metadata
{
    public static array $routes = [];
}
 
Metadata::$routes = ['foo'];  // Stores to persistent cache SHM immediately.
Metadata::$routes[] = 'bar';  // Stores the updated array immediately.

Objects assigned to PersistentStatic state are not recursively tracked for later object-property mutations. The assigned object graph is snapshotted when the root static value is assigned or published; subsequent scalar or object-property writes on that object are request-local unless the root static value is assigned again. If the object graph contains an array property, that array is still behind an object boundary and is not registered merely because the object was assigned to PersistentStatic state. Mutating that array property is therefore request-local unless the same array identity is also reachable from a tracked static array root, or the static root is reassigned. This makes PersistentStatic closer to OPcache\persistent_store(): storing a value captures that value at the store point, with the additional strict array-root mutation publication rule described above.
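The object-boundary rule can be sketched with a hypothetical Config object:

```php
#[OPcache\PersistentStatic]
final class Holder
{
    public static ?object $config = null;
}

class Config { public array $items = []; }

$c = new Config();
Holder::$config = $c;   // root assignment: graph snapshotted and stored
$c->items[] = 'extra';  // array behind an object boundary: request-local
Holder::$config = $c;   // reassigning the root republishes the new state
```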

Use CacheStrategy::Immediate when assignment-time snapshots should be stored in the volatile cache without recursive mutation tracking. Use CacheStrategy::Tracking when the final request-shutdown state of the restored or assigned graph should be published. Use PersistentStatic when the persistent cache backend and stricter non-volatile failure mode are desired.

Only CacheStrategy::Tracking defers mutation publication until request shutdown. PersistentStatic array-root edits are deliberately not deferred, so persistent-cache capacity failures are visible at the assignment or array-edit operation that caused them.

OPcache\volatile_clear(), opcache_reset(), and opcache_invalidate() mark the current request so that request-shutdown publication does not re-publish stale static state after a clear, reset, or invalidation operation.

Reset and invalidation behavior is:

Operation | Explicit volatile cache | Explicit persistent cache | VolatileStatic state | PersistentStatic state | Notes
OPcache\volatile_clear() | Cleared | Unchanged | Cleared for the volatile-cache backend | Unchanged | Prevents same-request republish of cleared VolatileStatic values.
OPcache\persistent_clear() | Unchanged | Cleared | Unchanged | Cleared for the persistent-cache backend | Prevents same-request republish of cleared PersistentStatic values.
opcache_reset() | Cleared | Cleared | Cleared | Cleared | The complete static-cache storage owned by OPcache is discarded together with script-cache reset scheduling.
opcache_invalidate($file) | Unchanged | Unchanged | Only VolatileStatic values belonging to classes in $file are deleted | Only PersistentStatic values belonging to classes in $file are deleted | Explicit key/value entries are not associated with source files and are not deleted.
Timestamp revalidation detects that a cached file changed | Unchanged | Unchanged | VolatileStatic values belonging to classes in the changed file are deleted | PersistentStatic values belonging to classes in the changed file are deleted | If a class definition changes across requests, cached state for that class is discarded before the new definition is used.

INI Directives

Two INI directives are added:

opcache.static_cache.volatile_size_mb=0
opcache.static_cache.persistent_size_mb=0

Values are configured in megabytes. Zero disables the corresponding cache. Non-zero values below 8MB are rejected with a warning. The directives are PHP_INI_SYSTEM and must be configured before OPcache shared memory is set up.

opcache.static_cache.volatile_size_mb controls the explicit volatile cache and #[OPcache\VolatileStatic].

opcache.static_cache.persistent_size_mb controls the explicit persistent cache and #[OPcache\PersistentStatic].

Security and Trust Model

OPcache Static Cache is shared memory for one PHP runtime instance. It is not a tenant-isolation boundary. When either cache is enabled, all workers that share the same OPcache static-cache shared-memory segment share the same explicit key/value namespace and the same attribute-backed static-state namespace. In FPM, the configured static-cache segments are initialized before workers start and are inherited by workers, so multiple pools under the same FPM master share the same cache namespace.

Administrators should enable these directives only when every worker that can access the same OPcache static-cache segment belongs to the same trust domain, for example one application, one tenant, or mutually trusted applications that already coordinate their cache keys. Shared-hosting deployments that run mutually untrusted pools, vhosts, Unix users, chroots, or customers under one PHP master should leave the feature disabled for that master, or run those tenants under separate PHP/FPM master processes, containers, or equivalent OS-level isolation that gives each trust domain its own OPcache and static-cache shared memory. Setting these directives in an individual FPM pool does not create a per-pool static-cache segment once OPcache has been set up.

The existing opcache.validate_permission and opcache.validate_root directives mitigate script-cache access and chroot key-collision issues by revalidating file access or varying script-cache hashes by root. They do not isolate OPcache Static Cache entries, because explicit cache keys and attribute-backed static-state keys are application data rather than script filenames. This is similar to local shared user caches such as APCu: applications should still namespace keys for correctness, but user-controlled key prefixes are not a security boundary when untrusted code can choose keys or call cache mutation APIs.

Implementation Notes

The implementation is split across the OPcache static-cache subsystem:

The storage layer uses open-addressed entries for keys and a shared-memory allocator for payloads. Backend startup reserves the configured SHM segment and initializes the header and entry table, but it does not eagerly zero the full payload area; allocator blocks are written when payload space is first consumed.

The backend read/write lock uses byte-range process locks by default. It does not rely on a process-shared pthread rwlock as the default synchronization primitive, because POSIX rwlocks do not provide portable owner-death recovery for a process that terminates while holding the lock.

The allocator has a pressure-recovery compaction path for fragmentation: before moving data, it checks whether the requested payload size can fit into a contiguous free block after movable allocations are packed around immovable anchors, and it skips compaction when that cannot succeed. Movable key, string, and serialized payload blocks are relocated by updating entry offsets; shared-graph payload blocks are immovable because their final buffers may be pinned and may contain pointers into themselves.
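As an illustration of the feasibility check described above, the following Python sketch packs movable blocks into the gaps between immovable anchors and then asks whether a contiguous free run of the requested size remains. The greedy packing, block representation, and names are inventions of this sketch, not the C allocator's algorithm.

```python
def compaction_can_fit(capacity, anchors, movable_sizes, request):
    """Model the pre-compaction feasibility check.

    anchors: list of (offset, size) immovable blocks, non-overlapping.
    movable_sizes: sizes of blocks that may be relocated.
    Returns True if repacking movable blocks into the gaps between
    anchors can leave one contiguous free run of at least `request`.
    """
    # Collect the gaps between consecutive anchors and the segment ends.
    gaps = []
    pos = 0
    for off, size in sorted(anchors):
        if off > pos:
            gaps.append(off - pos)
        pos = off + size
    if pos < capacity:
        gaps.append(capacity - pos)

    # Greedily place the largest movable blocks first (first fit into
    # the gap list), then check the largest remaining contiguous run.
    gaps.sort(reverse=True)
    for size in sorted(movable_sizes, reverse=True):
        for i, gap in enumerate(gaps):
            if gap >= size:
                gaps[i] = gap - size
                break
        else:
            return False  # cannot even repack the existing payloads
    return max(gaps, default=0) >= request
```

When the check fails, the modeled allocator would skip compaction entirely rather than move data for nothing, matching the behavior described above.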

Each cache backend has a mutation epoch. Request-local lookup-cache entries include this epoch, so repeated hits and misses can avoid probing the shared table while still invalidating after any writer mutates the segment. The epoch can change within the same request as well as across requests: *_store(), *_store_array(), *_delete(), *_delete_array(), *_clear(), persistent_atomic_increment(), persistent_atomic_decrement(), static-attribute publication, opcache_reset(), and script invalidation all mutate the backend entry table when they publish, remove, or clear entries. A write path that expunges expired volatile entries during pressure recovery, or compacts payload blocks and updates entry offsets, also advances the epoch.
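The epoch-validated lookup cache can be modeled in a few lines of Python. The class names and the probe counter below are invented for illustration; the real structures are C entries in shared memory and request-local accelerator state.

```python
class SharedBackend:
    def __init__(self):
        self.epoch = 0   # advanced by every entry-table mutation
        self.table = {}  # stands in for the open-addressed entry table
        self.probes = 0  # counts shared-table probes, for illustration

    def store(self, key, value):
        self.table[key] = value
        self.epoch += 1  # writers invalidate every request-local cache

    def probe(self, key):
        self.probes += 1
        return self.table.get(key)


class RequestLookupCache:
    def __init__(self, backend):
        self.backend = backend
        self.local = {}  # key -> (epoch, cached value or miss)

    def fetch(self, key):
        hit = self.local.get(key)
        if hit is not None and hit[0] == self.backend.epoch:
            return hit[1]  # repeated hit or miss: no shared probe
        value = self.backend.probe(key)
        self.local[key] = (self.backend.epoch, value)
        return value
```

Both hits and misses are cached request-locally, and any epoch advance, whether from another request or from a write earlier in the same request, forces the next fetch back to the shared table.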

The storage layer also provides request-retained reservation locks for *_lock($key). The lock is keyed by cache context and key hash, backed on Unix by F_SETLK/F_SETLKW byte-range process locks, and, in ZTS builds, process-local heap-allocated stripe mutexes so threads in the same worker also serialize. The public reservation table records exact keys in request-local state, while the process lock is taken on a fixed stripe derived from the key hash. This means unrelated keys that map to the same stripe may be conservatively serialized, and *_lock($key) may return false for that temporary stripe contention even when the exact key is not reserved. These are process-associated byte-range locks rather than open-file-description locks: a forked child does not inherit ownership of the parent's byte-range locks, and closing the inherited file descriptor in the child does not release locks owned by the parent process.

Public store, store-array entry commits, and persistent atomic mutations acquire the same reservation stripe before taking the cache write lock, unless the current request already owns the exact key reservation. A successful store or persistent atomic mutation for a key releases its reservation; a successful delete releases the current request's exact reservation for the deleted key, if it owns one, without waiting for reservations owned by other requests. Request shutdown releases any abandoned reservations. Delete, delete-array, namespace-wide clear operations, and opcache_reset() bypass reservation locks and only take the cache write lock, which avoids reservation-stripe deadlocks between a clearing request and a builder request. Because these destructive operations are not reservation barriers, a request that already owns a reservation may still publish a value after a concurrent delete, clear, or reset.

Fetched shared-graph payloads carry cross-request reference state: destructive operations retire referenced payloads and free them only after the last request releases its reference, so removing an entry does not reclaim memory still in use by another request. Forked child processes discard inherited request-local reservation state before acquiring new locks, so they do not accidentally treat a parent request's reservation as their own. In ZTS children, inherited process-local mutex stripes are replaced after fork, the inherited mutex allocations are freed in the child, and the child-owned replacement stripes are released during request shutdown; a later request can lazily create fresh stripes if needed.
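The stripe-versus-exact-key behavior described above can be sketched in Python. The stripe count, hash function, and class names are invented for the sketch; the real locks are F_SETLK byte-range locks plus ZTS stripe mutexes.

```python
STRIPES = 8  # invented stripe count for the sketch

def stripe_of(key):
    # Deterministic stand-in for the key-hash-to-stripe mapping.
    return sum(key.encode()) % STRIPES


class ReservationLocks:
    def __init__(self):
        self.stripe_owner = {}  # stripe -> request id holding the lock


class Request:
    def __init__(self, rid, locks):
        self.rid = rid
        self.locks = locks
        self.reserved = set()  # exact keys this request has reserved

    def try_lock(self, key):
        if key in self.reserved:
            return True  # already own the exact reservation
        stripe = stripe_of(key)
        owner = self.locks.stripe_owner.get(stripe)
        if owner is not None and owner != self.rid:
            return False  # stripe contention, even for unrelated keys
        self.locks.stripe_owner[stripe] = self.rid
        self.reserved.add(key)
        return True

    def release(self, key):
        if key in self.reserved:
            self.reserved.discard(key)
            del self.locks.stripe_owner[stripe_of(key)]
```

In this model, keys "a" and "i" happen to share a stripe, so a second request's try_lock("i") fails while the first request holds "a", exactly the conservative false return described above.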

Reservation entries carry the owner PID. When a forked child discards inherited reservation entries, it does not unlock stripes owned by the parent process; the inherited process-local mutex stripes are replaced before the child acquires new entry locks.

In ZTS builds, OPcache captures the module-startup configuration that new request threads must copy into thread-local accelerator globals. PHP module startup still precedes request handling, and the handoff also uses an atomic validity flag: startup writes the configuration fields first and then stores the valid flag, while RINIT loads that flag before copying the fields.

Explicit OPcache\volatile_fetch() and OPcache\persistent_fetch() also have request-local fetch state keyed by context, key, and mutation epoch. Values reuse the request-local prototype zval slot while the epoch matches when the fetched value is supported by the request-local clone path. Object-free values stay on the fast slot-copy path. Object-bearing values clone object and reference branches out of the prototype for each fetch, without calling userland __clone, so each fetched graph has independent object state. Ordinary PHP objects use OPcache's std-object clone helper, and OPcache\__DirectCacheSafe internal objects use the registered per-class copy handler supplied by their owning extension. Mutating a fetched object graph does not dirty the prototype and does not affect earlier or later fetched values.
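A Python model of the fetch prototype path, with invented names: object-free values are returned straight from the slot, while object branches are cloned per fetch so mutations never dirty the prototype.

```python
class Box:
    """Stands in for a PHP object held inside a cached value."""
    def __init__(self, value):
        self.value = value


def has_objects(value):
    # Decide whether the value is on the fast slot-copy path.
    if isinstance(value, Box):
        return True
    if isinstance(value, (list, dict)):
        items = value.values() if isinstance(value, dict) else value
        return any(has_objects(v) for v in items)
    return False


def clone_object_branches(value):
    # Rebuild containers and clone objects; scalars pass through.
    if isinstance(value, Box):
        return Box(clone_object_branches(value.value))
    if isinstance(value, list):
        return [clone_object_branches(v) for v in value]
    if isinstance(value, dict):
        return {k: clone_object_branches(v) for k, v in value.items()}
    return value


def fetch_from_prototype(prototype):
    if not has_objects(prototype):
        return prototype  # fast path: reuse the prototype slot
    return clone_object_branches(prototype)
```

Each object-bearing fetch returns an independent graph, so mutating one fetched value affects neither the prototype nor earlier or later fetches, as the paragraph above describes.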

Large arrays and supported object graphs may be stored as shared graphs. Explicit stores prepare expensive sizing, serializer fallback, and optional scratch buffers before taking the cache write lock, and static-attribute publication does not execute userland serialization hooks while holding the cache write lock. Shared graphs are rebuilt directly into their final SHM destination during commit. This keeps direct-array payloads tied to the buffer that will later be fetched, instead of byte-copying a prepared buffer whose embedded array data pointers would still point at request-local scratch memory.

Fetch decodes also keep userland-visible value reconstruction out of the cache read lock: serialized payload bytes are copied while locked, shared-graph payloads are pinned while locked, and PHP object reconstruction runs after the read lock is released. Fetched shared-graph payloads are pinned until request shutdown. Repeated fetches of the same shared-graph payload in the same request and cache context reuse one request-local pin, so the payload refcount and request-local reference list do not grow with the number of read operations.

Deleting or clearing a cache entry removes it from the visible namespace, but the backing payload is not returned to the allocator until active request references have been released. For the same reason, allocator compaction treats shared-graph payloads as immovable anchors. If shared-graph restoration fails after a fetch has acquired a payload reference, and releasing that reference makes an already-retired payload eligible for allocator reclamation, the fetch path queues the retired payload and leaves allocator mutation to request cleanup under the cache write lock.
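The retire-then-reclaim lifecycle for shared-graph payloads can be modeled as a reference count plus a retired flag; all names below are invented for this sketch.

```python
class Payload:
    def __init__(self, data):
        self.data = data
        self.refs = 0        # pins held by active requests
        self.retired = False # removed from the namespace, not yet freed


class GraphCache:
    def __init__(self):
        self.entries = {}  # visible namespace: key -> Payload
        self.freed = []    # payloads returned to the allocator

    def fetch_pin(self, key):
        payload = self.entries.get(key)
        if payload is not None:
            payload.refs += 1  # one pin per request, held to shutdown
        return payload

    def delete(self, key):
        # Hide the entry immediately, but defer reclamation.
        payload = self.entries.pop(key, None)
        if payload is not None:
            payload.retired = True
            self._maybe_free(payload)

    def release(self, payload):
        payload.refs -= 1
        self._maybe_free(payload)

    def _maybe_free(self, payload):
        if payload.retired and payload.refs == 0:
            self.freed.append(payload)
```

Deleting the entry makes it invisible at once, while the memory is handed back to the allocator only after the last pin is released, matching the deferred reclamation described above.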

The VM changes add hooks for class static initialization, function static initialization, static property access, reference assignment, array mutation, and object mutation. These hooks are guarded by executor-global fast flags so ordinary code pays only a cheap branch when no static-cache hook is active.

The duplicated object-mutation tail checks in the executor, including the object-dimension binary-assignment, object-dimension write, and unset handlers, are funneled through shared zend_execute.h macros and cold helpers. The hot path only tests the executor-global fast flag in-line; the zobj != NULL, actual hook pointer, exception-free, and &EG(error_zval) checks run in a zend_never_inline ZEND_COLD helper before zend_tracked_object_mutation_hook() is called. This keeps the many generated VM specializations from inlining the full compound branch in the common no-hook case, while still avoiding a NULL function-pointer call if the fast flag and hook lifecycle ever become temporarily out of phase.
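The two-level guard can be sketched in Python: the hot path tests only a cheap flag, and the cold helper performs the zobj, hook-pointer, and exception checks before invoking the hook. All names are invented for the sketch.

```python
class ExecutorGlobals:
    """Stand-in for the executor globals referenced above."""
    def __init__(self):
        self.fast_flag = False  # models the executor-global fast flag
        self.hook = None        # models the actual hook pointer
        self.exception = None


EG = ExecutorGlobals()
calls = []  # records hook invocations, for illustration


def mutation_hook_cold(zobj):
    """Cold helper: the expensive checks live here, off the hot path."""
    if zobj is None:
        return
    if EG.hook is None:
        return  # fast flag and hook lifecycle temporarily out of phase
    if EG.exception is not None:
        return
    EG.hook(zobj)


def vm_mutation_site(zobj):
    """Hot path: a single branch when no static-cache hook is active."""
    if EG.fast_flag:
        mutation_hook_cold(zobj)
```

Ordinary code pays only the one flag branch; every other precondition is checked in the out-of-line helper, and a NULL hook pointer is tolerated rather than called.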

The VM also exposes a post static-property assignment hook used by class-level immediate static-cache attributes. OPcache uses this hook to publish changed class blobs immediately after root static-property assignment. A reference-update hook lets method-level CacheStrategy::Immediate and PersistentStatic publish method static-variable root assignments at the assignment point. Both zend_assign_to_variable() and zend_assign_to_variable_ex() notify this hook after a successful update to a reference cell, whether or not the assignment's result is used, including assignments through typed references. CacheStrategy::Tracking uses array/object mutation hooks only to dirty-mark restored reachable graph owners; a shared dependency can point at more than one owner root or class blob, publication still happens at request shutdown, and read-only requests skip the store path. PersistentStatic registers arrays reachable without crossing an object boundary, but publishes them immediately through the persistent cache backend rather than dirty-marking them for shutdown. The tracking boundary is the static root and the reachable identities currently stored under it, not later value changes of the outer variable that was originally assigned into that root.

JIT-generated static-property reads remain hookable. The JIT fast path does not constant-fold the static slot zval itself: it loads the static-property slot pointer from the run-time cache and copies or returns an indirect reference to the current slot at execution time. When OPcache Static Cache's class-static access hook is active, the JIT static-property fast path invokes the same access hook as the VM run-time-cache static-property path and checks for exceptions before using the resolved slot pointer. This lets class-level snapshots, including preloaded class static slots, refresh their request-local state even when the property slot has already been resolved by the run-time cache. PersistentStatic array mutation publication stores a new shared-memory snapshot and advances the backend epoch; it does not replace the request-local static slot identity, so mutation publication does not require de-optimizing existing traces.

Array mutation hooks are pre-mutation notifications. They are invoked before SEPARATE_ARRAY(), so a write to a refcounted array may notify OPcache about the pre-separation array even though the actual element update happens on a newly separated copy. Static-cache semantics intentionally allow this conservative notification. Direct writes to a tracked static root use the root array identity that OPcache needs for owner discovery. For PersistentStatic's immediate publication path, OPcache re-checks the post-mutation array identity before publishing: the mutated hash must still be reachable from the current static root or class blob through arrays only. Writes through an outer local copy are therefore outside the static root tracking boundary and are not treated as newly tracked static state. CacheStrategy::Tracking still uses the conservative pre-mutation owner discovery to dirty-mark affected owners for request-shutdown publication. Array mutation call sites use EG(tracked_mutation_hooks_active) as the fast guard, then call through zend_maybe_track_hash_mutation(), which verifies the actual hash mutation hook pointer before invoking it.
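The arrays-only reachability re-check can be sketched as a traversal that never descends into objects; the classes and traversal below are invented for illustration.

```python
class Obj:
    """Stand-in for a PHP object: crossing into it stops the walk."""
    def __init__(self, inner):
        self.inner = inner


def reachable_through_arrays(root, target):
    """True if `target` is reachable from `root` via arrays only."""
    stack, seen = [root], set()
    while stack:
        node = stack.pop()
        # Objects and scalars are not traversed; only arrays are.
        if id(node) in seen or not isinstance(node, (list, dict)):
            continue
        seen.add(id(node))
        if node is target:
            return True
        values = node.values() if isinstance(node, dict) else node
        stack.extend(values)
    return False
```

In this model, a hash stored only behind an object boundary fails the check and would not be published by the immediate path, while the same hash reachable through nested arrays passes.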

The assignment fast path reduces request-shutdown work for immediate object assignments: the graph is walked, serialized or encoded, and capacity-checked at the assignment point. PersistentStatic array-root edits use the same strict persistent-store failure mode at the mutation point. The tradeoff is semantic: later object-property writes are intentionally not republished into PersistentStatic or CacheStrategy::Immediate class blobs, so code must reassign the root static value or use CacheStrategy::Tracking to publish a later object state.

Selected internal classes can be marked with OPcache\__DirectCacheSafe by the engine to allow safe direct restoration and request-local prototype-copy paths. The marker class is visible to userland, but userland classes cannot apply it and direct restoration still only applies when OPcache finds an internal marked base class with a registered safe-direct handler table.

Direct restoration paths for internal classes are intentionally tied to the PHP build that produced the cache payload. Internal layouts in extensions such as ext-date may change over time, but the static-cache implementation is updated together with those layout changes and continues to produce and consume the matching representation for that build. The feature does not provide any operation to export cache data to external storage or import it into a later process lifetime or different PHP build, so cross-version or cross-layout compatibility for cache payloads is not a supported scenario that implementations need to preserve.

The OPcache safe-direct integration is registered through function-pointer tables. ext-date and ext-spl expose non-static getter functions that return const zend_opcache_static_cache_safe_direct_handlers tables, while the actual copy, unstorable-state detection, state serialization, and state unserialization callbacks remain private to the owning extension. OPcache registers those tables during initialization and uses the same registry from the serializer, shared-graph checks, and request-local prototype clone path. This keeps OPcache from calling ext-date/ext-spl implementation helpers directly and makes adding another vetted internal class a matter of providing and registering another handler table.

The OPcache serializer decode path does not assume that serialized bytes in shared memory are naturally aligned for C struct access. Generic serializer headers and registered safe-direct state payloads are copied from the payload into aligned local storage or ordinary zvals before their fields are read.

This feature cannot be implemented as a pure userland library with the same behavior. Existing userland libraries or ordinary extension caches can provide fast key/value storage, but they cannot bind cached payloads to PHP static-property and method-static slots, observe VM-level array/object mutations, coordinate with OPcache script invalidation, or keep a shared graph representation pinned safely across request-local zvals. Preload and JIT make that boundary even more important: the cache must cooperate with persistent script metadata, preloaded class state, and VM/JIT assumptions about static storage rather than only storing serialized userland values behind function calls.

Backward Incompatible Changes

No behavior changes are intended for applications that do not configure the new INI directives or use the new functions and attributes.

The new names in the OPcache namespace become reserved by this RFC.

Proposed PHP Version(s)

next PHP version (8.6 or 9.0)

RFC Impact

To Extensions

The implementation adds small C-only handler accessors to ext-date and ext-spl so OPcache can register engine-vetted OPcache\__DirectCacheSafe object handlers without relying on userland serialization. These handler accessors are implementation details and are not exposed as userland APIs.

To OPcache

OPcache gains optional shared-memory areas for volatile/persistent static cache state, cache-specific locks, a mutable allocator, request-local lookup caches, VM hooks for static-state restore/publication, and mutation tracking for arrays and objects.

To SAPIs

SAPIs that use OPcache normally should not need userland changes.

FPM must initialize the configured static-cache shared-memory segments before workers start. If a backend would be initialized too late, the implementation reports the cache as unavailable instead of creating worker-local state.

Open Issues

Future Scope


Voting Choices

Voting has not started. The proposed voting questions are:


Primary votes, each requiring a 2/3 majority to accept the RFC:

  * Add the explicit volatile cache API
  * Add the explicit persistent cache API
  * Add the #[OPcache\VolatileStatic] attribute
  * Add the #[OPcache\PersistentStatic] attribute

Performance

The benchmarks in this section were run in the Ubuntu 24.04 devcontainer on a MacBook Air (M4, 32GiB RAM).

Using this benchmark harness application: https://github.com/colopl/php-opcache_static_cache_benchmark_harness

The benchmark resets the relevant state, primes one value, runs warmup requests, and then measures repeated read-only requests. If a measured request misses and would need to build or store the value, the benchmark fails instead of folding that build cost into the sample. The harness supports --runs-on container|devcontainer|local and --target fpm|frankenphp|fpm,frankenphp. The final stdout is DokuWiki text; progress and runtime logs are written to stderr.

The benchmark workload includes framework-shaped route table reads, large array reads, metadata object reads, explicit object fetches that mutate the returned graph before the next fetch, internal objects supported by OPcache\__DirectCacheSafe, SPL collection objects, Carbon/DateTime objects, and nested-array mutation/publication cases. The harness has class, static property, and method static variable backends for #[OPcache\VolatileStatic(strategy: OPcache\CacheStrategy::Immediate)], #[OPcache\VolatileStatic(strategy: OPcache\CacheStrategy::Tracking)], and #[OPcache\PersistentStatic]. The benchmark tables below report the property and method targets, because those are the static-state shapes most directly comparable to explicit cache reads in the benchmarked workloads.

The benchmark suite includes named scenarios for longer steady-state reads, explicit object fetches that mutate each returned graph, sequential explicit-cache write throughput, 5-way write contention against shared and distinct key layouts, and 5-way single-builder entry reservation contention. The commands in this section use devcontainer runs against NTS php-fpm + nginx and ZTS FrankenPHP, with APCu rebuilt from master and reporting 5.1.29-dev. JIT is disabled for all rows shown here. The final run was rebuilt from ./buildconf --force and fresh NTS/ZTS configure invocations before the scenario matrix was measured; later scenarios reused the same build artifacts with --skip-rebuild.

The NTS FPM build used --enable-cli --enable-fpm --enable-pcntl --enable-session. The NTS CLI startup build used --enable-cli --enable-pcntl --enable-session. The ZTS build used --enable-cli --enable-pcntl --enable-session --enable-embed=static --enable-zend-max-execution-timers --enable-zts --disable-zend-signals, and FrankenPHP was linked against the same ZTS static embed library.

Read scenarios use vote_read_long with 20 measured iterations, 3 warmup requests, and 3000 operations per request. All read rows had a 100% hit ratio and a max build count of 0. The explicit OPcache\volatile_fetch() and OPcache\persistent_fetch() rows reuse request-local lookup state and prototype zval slots when the value is supported by the request-local clone path. Object-free values can be copied from the slot directly; object-bearing values clone object branches from the prototype on every fetch so returned object state is isolated. Ordinary PHP objects use OPcache's internal std-object clone helper, while OPcache\__DirectCacheSafe internal objects use registered per-class copy handlers. The fetch_mutate_object scenario uses the same 20/3/3000 shape, but each operation mutates the fetched metadata object graph after probing it, so the OPcache rows measure prototype-clone-after-mutation cost rather than full value reconstruction from storage.

Write scenarios use the same runtime setup: vote_write_throughput uses 15 measured iterations, 2 warmup requests, 128 stores per batch, a single worker, and a 32-key ring; vote_write_contention_shared uses 8 measured iterations, 1 warmup request, 32 stores per worker, 5 workers, and one shared key; vote_write_contention_distinct uses the same 8/1/32 batch shape with 5 workers and a 16-key ring; vote_entry_reservation_contention uses the same 5-worker shared-key shape while racing to populate one missing key with apcu_entry() or *_lock($key) plus store. The ZTS FrankenPHP runs completed with normal runtime shutdowns.

The benchmark metadata records runtime architecture via php_uname('m') and JIT status from opcache_get_status(). The same named scenarios can be rerun on additional architectures if needed.

CLI startup overhead

The following one-shot CLI run used a separate clean-build NTS CLI binary and the ZTS CLI binary from the clean FrankenPHP build. Each setting executes 10 processes with opcache.enable=1, opcache.enable_cli=1, opcache.jit=0, and both static-cache memory directives set to the same value under -n. The zero row keeps the static-cache backends disabled.

^ Runtime ^ Volatile/persistent cache memory ^ CLI runs ^ Total time ^ Mean per run ^ Overhead vs disabled ^ Overhead ^
| NTS CLI | 0 MiB | 10 | 43.615 ms | 4.361 ms | +0.000 ms | +0.0% |
| NTS CLI | 128 MiB | 10 | 37.548 ms | 3.755 ms | -0.607 ms | -13.9% |
| NTS CLI | 256 MiB | 10 | 33.986 ms | 3.399 ms | -0.963 ms | -22.1% |
| NTS CLI | 512 MiB | 10 | 33.284 ms | 3.328 ms | -1.033 ms | -23.7% |
| ZTS CLI | 0 MiB | 10 | 22.001 ms | 2.200 ms | +0.000 ms | +0.0% |
| ZTS CLI | 128 MiB | 10 | 34.109 ms | 3.411 ms | +1.211 ms | +55.0% |
| ZTS CLI | 256 MiB | 10 | 32.831 ms | 3.283 ms | +1.083 ms | +49.2% |
| ZTS CLI | 512 MiB | 10 | 32.576 ms | 3.258 ms | +1.057 ms | +48.1% |

Static-cache SHM startup reserves the configured segment before requests start, but initializes only the header and entry table eagerly. Payload pages are touched on demand when entries are first stored. The remaining overhead is therefore small process-startup noise plus fixed metadata setup, and no longer scales linearly with the configured payload size. This is relevant to one-shot CLI invocations, but it is not the steady-state request cost for long-running SAPIs such as FPM or FrankenPHP.

Zend VM/JIT baseline overhead

Because this implementation adds VM hooks and adjusts JIT static-slot handling, the current branch was also compared with clean builds of commit 43b56c96af5c373a0539bca49bdc568b01a3163c using Zend/bench.php. Each row runs 10 one-shot CLI processes and reports the mean of the benchmark's Total line. JIT-off rows use opcache.jit=0; JIT-on rows use opcache.jit_buffer_size=64M and opcache.jit=tracing. The same -d command-line options are passed to both builds; in the base commit the static-cache INI directives are not defined, so those options do not enable any static-cache backend there.

^ Runtime ^ JIT ^ Static-cache INI ^ 43b56c96 mean ^ Current mean ^ Delta ^ Change ^
| NTS CLI | off | opcache.static_cache.volatile_size_mb=32 | 113.600 ms | 113.400 ms | -0.200 ms | -0.2% |
| NTS CLI | off | opcache.static_cache.persistent_size_mb=32 | 112.300 ms | 113.800 ms | +1.500 ms | +1.3% |
| NTS CLI | off | opcache.static_cache.volatile_size_mb=32, persistent_size_mb=32 | 113.000 ms | 113.300 ms | +0.300 ms | +0.3% |
| NTS CLI | on | opcache.static_cache.volatile_size_mb=32 | 41.600 ms | 41.100 ms | -0.500 ms | -1.2% |
| NTS CLI | on | opcache.static_cache.persistent_size_mb=32 | 41.200 ms | 42.000 ms | +0.800 ms | +1.9% |
| NTS CLI | on | opcache.static_cache.volatile_size_mb=32, persistent_size_mb=32 | 42.000 ms | 42.200 ms | +0.200 ms | +0.5% |
| ZTS CLI | off | opcache.static_cache.volatile_size_mb=32 | 116.700 ms | 118.500 ms | +1.800 ms | +1.5% |
| ZTS CLI | off | opcache.static_cache.persistent_size_mb=32 | 122.600 ms | 121.800 ms | -0.800 ms | -0.7% |
| ZTS CLI | off | opcache.static_cache.volatile_size_mb=32, persistent_size_mb=32 | 117.900 ms | 119.900 ms | +2.000 ms | +1.7% |
| ZTS CLI | on | opcache.static_cache.volatile_size_mb=32 | 42.100 ms | 42.000 ms | -0.100 ms | -0.2% |
| ZTS CLI | on | opcache.static_cache.persistent_size_mb=32 | 42.200 ms | 42.600 ms | +0.400 ms | +0.9% |
| ZTS CLI | on | opcache.static_cache.volatile_size_mb=32, persistent_size_mb=32 | 43.200 ms | 44.200 ms | +1.000 ms | +2.3% |

In this CLI micro-benchmark, the observed current-branch change is within -1.2% to +2.3% across the tested NTS/ZTS, JIT-off/JIT-on, and static-cache backend combinations. This is treated as run-to-run measurement noise for this benchmark, and the implementation is not expected to introduce a measurable VM or JIT performance regression when the feature is present.

Long-read steady state

The following table uses vote_read_long with JIT disabled. Values are mean operation time from the final benchmark summaries.

^ Workload ^ Runtime ^ APCu ^ volatile_cache ^ persistent_cache ^ VolatileStatic Immediate property ^ VolatileStatic Tracking property ^ PersistentStatic property ^ VolatileStatic Immediate method ^ VolatileStatic Tracking method ^ PersistentStatic method ^
| route_table_read | php-fpm + nginx (NTS) | 156.160 us | 36.479 us | 35.355 us | 0.279 us | 0.305 us | 0.297 us | 0.458 us | 0.467 us | 0.336 us |
| route_table_read | FrankenPHP (ZTS) | 155.151 us | 33.897 us | 34.144 us | 0.292 us | 0.324 us | 0.315 us | 0.485 us | 0.493 us | 0.375 us |
| large_array | php-fpm + nginx (NTS) | 85.797 us | 18.027 us | 17.645 us | 0.268 us | 0.274 us | 0.274 us | 0.438 us | 0.424 us | 0.310 us |
| large_array | FrankenPHP (ZTS) | 86.510 us | 16.704 us | 17.019 us | 0.253 us | 0.263 us | 0.262 us | 0.422 us | 0.445 us | 0.309 us |
| metadata_object_read | php-fpm + nginx (NTS) | 162.036 us | 36.493 us | 36.402 us | 0.334 us | 0.356 us | 0.319 us | 0.493 us | 0.510 us | 0.343 us |
| metadata_object_read | FrankenPHP (ZTS) | 167.258 us | 34.540 us | 34.234 us | 0.313 us | 0.345 us | 0.313 us | 0.496 us | 0.513 us | 0.360 us |
| safe_direct_object | php-fpm + nginx (NTS) | 2.530 us | 1.031 us | 1.007 us | 0.506 us | 0.498 us | 0.505 us | 0.689 us | 0.670 us | 0.538 us |
| safe_direct_object | FrankenPHP (ZTS) | 2.485 us | 0.992 us | 0.994 us | 0.492 us | 0.494 us | 0.489 us | 0.679 us | 0.663 us | 0.545 us |
| spl_collection_object | php-fpm + nginx (NTS) | 19.882 us | 5.551 us | 5.914 us | 0.376 us | 0.376 us | 0.385 us | 1.098 us | 0.590 us | 0.438 us |
| spl_collection_object | FrankenPHP (ZTS) | 18.509 us | 5.255 us | 5.323 us | 0.369 us | 0.434 us | 0.381 us | 0.564 us | 0.541 us | 0.429 us |
| carbon_datetime_object | php-fpm + nginx (NTS) | 187.309 us | 49.431 us | 48.173 us | 1.497 us | 1.530 us | 1.461 us | 1.619 us | 1.679 us | 1.563 us |
| carbon_datetime_object | FrankenPHP (ZTS) | 187.260 us | 45.901 us | 45.220 us | 1.469 us | 1.514 us | 1.472 us | 1.671 us | 1.746 us | 1.508 us |
| nested_array_assignment | php-fpm + nginx (NTS) | 8.705 us | 2.284 us | 2.484 us | 0.227 us | 0.231 us | 0.229 us | 0.402 us | 0.397 us | 0.274 us |
| nested_array_assignment | FrankenPHP (ZTS) | 8.244 us | 2.453 us | 2.295 us | 0.233 us | 0.236 us | 0.235 us | 0.407 us | 0.404 us | 0.275 us |

The following focused table uses the same final clean FPM/FrankenPHP runs with JIT disabled. It isolates explicit object fetch behavior, including the mutation-after-fetch workload.

^ Workload ^ Runtime ^ APCu ^ volatile_cache ^ persistent_cache ^
| metadata_object_read | php-fpm + nginx (NTS) | 162.036 us | 36.493 us | 36.402 us |
| metadata_object_read | FrankenPHP (ZTS) | 167.258 us | 34.540 us | 34.234 us |
| metadata_object_fetch_mutate | php-fpm + nginx (NTS) | 162.448 us | 36.353 us | 35.833 us |
| metadata_object_fetch_mutate | FrankenPHP (ZTS) | 165.814 us | 34.150 us | 34.518 us |
| safe_direct_object | php-fpm + nginx (NTS) | 2.530 us | 1.031 us | 1.007 us |
| safe_direct_object | FrankenPHP (ZTS) | 2.485 us | 0.992 us | 0.994 us |
| spl_collection_object | php-fpm + nginx (NTS) | 19.882 us | 5.551 us | 5.914 us |
| spl_collection_object | FrankenPHP (ZTS) | 18.509 us | 5.255 us | 5.323 us |
| carbon_datetime_object | php-fpm + nginx (NTS) | 187.309 us | 49.431 us | 48.173 us |
| carbon_datetime_object | FrankenPHP (ZTS) | 187.260 us | 45.901 us | 45.220 us |

Explicit-cache write throughput

The following table uses vote_write_throughput. Each cell reports mean store throughput, with mean per-store latency in parentheses.

^ Workload ^ Runtime ^ APCu ^ volatile_cache ^ persistent_cache ^
| route_table_read | php-fpm + nginx (NTS) | 7409.38 ops/s (134.964 us) | 7435.66 ops/s (134.487 us) | 7333.14 ops/s (136.367 us) |
| route_table_read | FrankenPHP (ZTS) | 6991.28 ops/s (143.035 us) | 7406.78 ops/s (135.011 us) | 7527.05 ops/s (132.854 us) |
| metadata_object_read | php-fpm + nginx (NTS) | 6063.52 ops/s (164.921 us) | 6531.50 ops/s (153.104 us) | 6348.68 ops/s (157.513 us) |
| metadata_object_read | FrankenPHP (ZTS) | 6872.70 ops/s (145.503 us) | 7021.88 ops/s (142.412 us) | 7053.14 ops/s (141.781 us) |
| safe_direct_object | php-fpm + nginx (NTS) | 294478.53 ops/s (3.396 us) | 117914.39 ops/s (8.481 us) | 95484.38 ops/s (10.473 us) |
| safe_direct_object | FrankenPHP (ZTS) | 295021.51 ops/s (3.390 us) | 134727.39 ops/s (7.422 us) | 134003.35 ops/s (7.463 us) |
| spl_collection_object | php-fpm + nginx (NTS) | 47049.60 ops/s (21.254 us) | 47655.69 ops/s (20.984 us) | 48204.87 ops/s (20.745 us) |
| spl_collection_object | FrankenPHP (ZTS) | 56184.71 ops/s (17.798 us) | 56809.78 ops/s (17.603 us) | 56078.04 ops/s (17.832 us) |
| nested_array_assignment | php-fpm + nginx (NTS) | 92183.60 ops/s (10.848 us) | 66931.60 ops/s (14.941 us) | 53176.76 ops/s (18.805 us) |
| nested_array_assignment | FrankenPHP (ZTS) | 112974.40 ops/s (8.852 us) | 85626.37 ops/s (11.679 us) | 86599.61 ops/s (11.547 us) |

Explicit-cache write contention, shared key

The following table uses vote_write_contention_shared with 5 workers publishing to the same key. Each cell reports mean store throughput, with mean per-store latency in parentheses.

^ Workload ^ Runtime ^ APCu ^ volatile_cache ^ persistent_cache ^
| route_table_read | php-fpm + nginx (NTS) | 16155.90 ops/s (61.897 us) | 6540.56 ops/s (152.892 us) | 7242.85 ops/s (138.067 us) |
| route_table_read | FrankenPHP (ZTS) | 16463.45 ops/s (60.741 us) | 6978.44 ops/s (143.298 us) | 6989.42 ops/s (143.073 us) |
| metadata_object_read | php-fpm + nginx (NTS) | 17616.54 ops/s (56.765 us) | 6843.71 ops/s (146.120 us) | 6807.79 ops/s (146.891 us) |
| metadata_object_read | FrankenPHP (ZTS) | 15055.64 ops/s (66.420 us) | 5896.44 ops/s (169.594 us) | 6080.88 ops/s (164.450 us) |
| safe_direct_object | php-fpm + nginx (NTS) | 69125.67 ops/s (14.466 us) | 57853.11 ops/s (17.285 us) | 59955.97 ops/s (16.679 us) |
| safe_direct_object | FrankenPHP (ZTS) | 65286.14 ops/s (15.317 us) | 55396.87 ops/s (18.052 us) | 52753.05 ops/s (18.956 us) |
| spl_collection_object | php-fpm + nginx (NTS) | 48685.86 ops/s (20.540 us) | 48532.65 ops/s (20.605 us) | 46533.61 ops/s (21.490 us) |
| spl_collection_object | FrankenPHP (ZTS) | 47553.59 ops/s (21.029 us) | 46356.66 ops/s (21.572 us) | 44694.30 ops/s (22.374 us) |
| nested_array_assignment | php-fpm + nginx (NTS) | 58626.85 ops/s (17.057 us) | 41802.74 ops/s (23.922 us) | 37233.11 ops/s (26.858 us) |
| nested_array_assignment | FrankenPHP (ZTS) | 62518.32 ops/s (15.995 us) | 40722.83 ops/s (24.556 us) | 42462.85 ops/s (23.550 us) |

Explicit-cache write contention, distinct keys

The following table uses vote_write_contention_distinct with 5 workers publishing to per-worker key rings. Each cell reports mean store throughput, with mean per-store latency in parentheses.

Workload Runtime APCu volatile_cache persistent_cache
route_table_read php-fpm + nginx (NTS) 17198.29 ops/s (58.145 us) 6147.61 ops/s (162.665 us) 6416.30 ops/s (155.853 us)
route_table_read FrankenPHP (ZTS) 13105.76 ops/s (76.302 us) 5379.53 ops/s (185.890 us) 6323.58 ops/s (158.138 us)
metadata_object_read php-fpm + nginx (NTS) 16307.60 ops/s (61.321 us) 6688.09 ops/s (149.520 us) 6650.08 ops/s (150.374 us)
metadata_object_read FrankenPHP (ZTS) 16528.71 ops/s (60.501 us) 6731.88 ops/s (148.547 us) 6687.46 ops/s (149.534 us)
safe_direct_object php-fpm + nginx (NTS) 67110.58 ops/s (14.901 us) 60580.25 ops/s (16.507 us) 54581.89 ops/s (18.321 us)
safe_direct_object FrankenPHP (ZTS) 69226.61 ops/s (14.445 us) 55267.70 ops/s (18.094 us) 60608.93 ops/s (16.499 us)
spl_collection_object php-fpm + nginx (NTS) 53041.60 ops/s (18.853 us) 46993.17 ops/s (21.280 us) 47363.55 ops/s (21.113 us)
spl_collection_object FrankenPHP (ZTS) 52918.80 ops/s (18.897 us) 51187.71 ops/s (19.536 us) 46811.00 ops/s (21.363 us)
nested_array_assignment php-fpm + nginx (NTS) 52962.60 ops/s (18.881 us) 40598.83 ops/s (24.631 us) 40314.96 ops/s (24.805 us)
nested_array_assignment FrankenPHP (ZTS) 63438.57 ops/s (15.763 us) 44714.60 ops/s (22.364 us) 44624.18 ops/s (22.409 us)

Explicit-cache entry reservation contention

The following table uses vote_entry_reservation_contention with 5 workers racing to populate one missing key. APCu uses apcu_entry(); OPcache uses volatile_lock($key) or persistent_lock($key) followed by store. Each cell reports mean operation throughput, with mean per-operation latency in parentheses. All rows had a max build count of 1.

Workload Runtime APCu entry volatile_cache reservation persistent_cache reservation
route_table_read php-fpm + nginx (NTS) 5506.35 ops/s (181.609 us) 30181.56 ops/s (33.133 us) 31538.75 ops/s (31.707 us)
route_table_read FrankenPHP (ZTS) 5446.90 ops/s (183.591 us) 27276.41 ops/s (36.662 us) 28193.21 ops/s (35.470 us)
metadata_object_read php-fpm + nginx (NTS) 5582.47 ops/s (179.132 us) 32210.17 ops/s (31.046 us) 29526.42 ops/s (33.868 us)
metadata_object_read FrankenPHP (ZTS) 5152.11 ops/s (194.095 us) 28059.10 ops/s (35.639 us) 29145.89 ops/s (34.310 us)

Where the write rows are slower than APCu, especially the 5-way contended route_table_read and metadata_object_read cases, the difference follows directly from the implementation strategy. A store first goes through zend_static_cache_entries.c: zend_opcache_static_cache_prepare_value() classifies the value, may calculate a shared-graph size, may build a shared-graph scratch buffer outside the lock, and otherwise falls back to the OPcache serializer or php_var_serialize(). zend_opcache_static_cache_store_prepared_locked() then takes the cache write lock, probes the open-addressed entry table, allocates or reuses SHM blocks, and copies the prepared scalar/string/serialized payloads or rebuilds shared graphs directly into OPcache-owned shared memory. Along the way it retires or frees overwritten payloads, may expunge expired volatile entries, and may compact fragmented movable payload blocks before reporting allocation failure.

That extra work is not incidental. The static-cache backends must publish values into OPcache-managed SHM in a form that can later participate in request-local lookup caching, shared-graph restoration, static-slot restore/publication hooks, and script invalidation semantics. #[OPcache\PersistentStatic] is stricter still: its store path must validate that the published value fits in shared memory at the assignment or publication point so that exhaustion becomes an immediate error instead of a delayed failure at request shutdown. That size calculation is part of the non-volatile guarantee, not an accidental implementation detail. APCu does not need to preserve those OPcache/VM-level invariants, so its contended write path can be cheaper for the larger graph payloads.
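A minimal sketch of that publication-time guarantee follows. The #[OPcache\PersistentStatic] attribute and OPcache\StaticCacheException come from this proposal; build_huge_route_table() is a hypothetical builder, and the exact point at which the capacity error surfaces is illustrative:

```php
<?php

final class RouteRegistry
{
    // The proposal validates the published value against shared-memory
    // capacity at the assignment/publication point, not at shutdown.
    #[OPcache\PersistentStatic]
    public static array $routes = [];
}

try {
    // If this graph does not fit in OPcache-managed SHM, the failure is
    // raised here, where the caller can still react to it...
    RouteRegistry::$routes = build_huge_route_table(); // hypothetical builder
} catch (OPcache\StaticCacheException $e) {
    // ...instead of surfacing as a delayed failure at request shutdown.
    error_log('route table exceeds static cache capacity: ' . $e->getMessage());
}
```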

The entry reservation scenario measures a different shape: avoid redundant builder work when concurrent requests observe a miss for the same key. The OPcache path does not execute userland code while holding the cache write lock; it reserves the missing key, builds the value outside the lock, and releases the reservation on successful store or request shutdown. Public mutators from other requests wait for that reservation instead of writing through it. In that workload all rows build exactly once, and the OPcache reservation path is substantially cheaper than the apcu_entry() rows measured here.
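Sketched in userland, that reservation flow looks roughly like the following. volatile_lock() and store appear in the scenario above; the boolean return convention for volatile_lock(), the volatile_fetch() signature with a default, and build_route_table() are assumptions for illustration:

```php
<?php

use function OPcache\{volatile_fetch, volatile_lock, volatile_store};

$miss = new stdClass(); // sentinel so a stored null is not mistaken for a miss
$value = volatile_fetch('route_table', $miss);

if ($value === $miss) {
    if (volatile_lock('route_table')) {
        // This request holds the reservation: build OUTSIDE the cache
        // write lock, then publish. A successful store (or request
        // shutdown) releases the reservation.
        $value = build_route_table(); // hypothetical builder
        volatile_store('route_table', $value);
    } else {
        // Another request holds the reservation; public mutators wait
        // behind it, so a re-fetch observes the built value.
        $value = volatile_fetch('route_table');
    }
}
```

The essential property is that no userland code runs while the cache write lock is held; the reservation only serializes who gets to build.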

That said, the proposal is intentionally optimized for read-dominated workloads, not for write-heavy cache churn. In the 100%-hit object-free read rows with max build count 0, once a value has been primed and restored into request-local state, static property targets stay at or below about 0.33 us and method targets stay around 0.31-0.49 us for the representative route-table and large-array rows, while APCu remains around 86-156 us. Explicit cache fetches still cross the key/value API and cache lock path, but remain around 17-36 us for the route-table and large-array rows.

Ordinary object-bearing explicit fetches pay an internal clone cost to keep returned object graphs independent: in the final clean FPM/ZTS runs, the metadata-object fetch path is about 34-36 us versus APCu at about 162-167 us. The OPcache\__DirectCacheSafe, SPL, and Carbon explicit-fetch rows now exercise registered request-local copy handlers for supported Date/Time and SPL internal state, so they avoid repeated reconstruction from the stored payload when the epoch matches. DateTime-shaped safe-direct rows are about 0.99-1.03 us, SPL collection rows are about 5.26-5.91 us, and Carbon rows are about 45-49 us through the explicit OPcache backends. Attribute-backed static access to the Carbon shape remains about 1.46-1.75 us because restoration is paid once at static-slot initialization. Mutating the metadata object graph before the next fetch does not force full value reconstruction; the next fetch clones again from the request-local prototype, which is still slower than object-free slot copy but remains in the same range as read-only metadata-object fetches.

In other words, the slower contended write rows are a secondary cost paid to install a value into the fast path. For repeated-hit request-local lookup, attribute-backed static slots, and supported prototype-copy paths that dominate the target workload, steady-state reads remain substantially faster than APCu, so the write-side disadvantage is acceptable for the intended use cases.
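The read-path difference is easiest to see side by side. A hedged sketch, assuming the #[OPcache\PersistentStatic] attribute from this proposal, a persistent_fetch() signature inferred from the fetch semantics described in the FAQ, and a hypothetical build_route_table() builder:

```php
<?php

final class Router
{
    // Attribute-backed static slot: restored once at static-slot
    // initialization, then read at request-local speed (sub-microsecond
    // in the rows above) with no per-read cache API call.
    #[OPcache\PersistentStatic]
    private static ?array $table = null;

    public static function match(string $path): ?string
    {
        self::$table ??= build_route_table(); // hypothetical; runs only until primed
        return self::$table[$path] ?? null;
    }
}

// Explicit-fetch equivalent for comparison: every call crosses the
// key/value API and the cache lock path (around 17-36 us in the rows above).
function match_explicit(string $path): ?string
{
    $table = OPcache\persistent_fetch('route_table', []); // assumed signature
    return $table[$path] ?? null;
}
```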

Benchmark takeaways

Validation

The OPcache static-cache PHPT coverage exercises the explicit volatile and persistent cache APIs, including non-empty key validation, attribute restore and publication, and clear/reset/invalidate behavior. It covers TTL expiry and large TTL values, entry reservation locks, public mutator waits behind reservations, tracking shared dependencies, and persistent failure modes including unsupported PersistentStatic values. Allocator behavior is tested for reuse after store/delete across requests, forked processes, and ZTS threads, as well as compaction under fragmentation, TTL-expunge-before-compaction ordering, and skip conditions for unnecessary or impossible compaction. Further tests cover fetch value reconstruction and userland serialization hooks outside cache locks, request-local object-copy isolation for ordinary objects with userland __clone, OPcache\__DirectCacheSafe registered-handler copy/restore paths, ZTS helper paths, defensive tracked mutation hook helpers, and tracing-JIT static-slot reads after PersistentStatic array mutation publication. The benchmark verification above also builds APCu from master and exercises the NTS FPM and ZTS FrankenPHP runtimes used by the measured rows.

FAQ

Why are error policies different between volatile and persistent?

The public APIs choose their error policy from the cache semantics rather than from a single namespace-wide convention. The volatile cache is a recoverable cache: memory pressure, TTL expiry, and rebuildable values are part of the programming model, so volatile_store() and volatile_store_array() report storage failure with false. The persistent cache is strict non-volatile state: once an application decides to publish routing tables, metadata, or #[OPcache\PersistentStatic] state, losing that write would violate the contract, so capacity and encoding failures raise OPcache\StaticCacheException. Fetch operations in both backends are miss-tolerant reads and return the supplied default, which keeps routine cache misses out of the exception path while still letting callers distinguish a stored null from a genuine miss via a sentinel default or *_exists().
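A hedged sketch of the two policies side by side. volatile_store() and OPcache\StaticCacheException appear in this proposal; persistent_store(), the fetch signatures, and the ttl argument are assumptions inferred from the surrounding text:

```php
<?php

// Volatile: recoverable cache, so storage failure is a boolean.
if (!OPcache\volatile_store('render_fragment', $fragment, 60)) { // assumed TTL argument
    // Memory pressure or expiry-domain failure: fall through and
    // rebuild next time; nothing contractual was lost.
}

// Persistent: strict non-volatile state, so failure is exceptional.
try {
    OPcache\persistent_store('route_table', $routes); // assumed function name
} catch (OPcache\StaticCacheException $e) {
    // Capacity or encoding failure: the publish contract was violated
    // and must be handled, not silently dropped.
    throw $e;
}

// Miss-tolerant fetch: a sentinel default distinguishes a stored null
// from an absent key; the *_exists() form is the alternative.
$miss = new stdClass();
$value = OPcache\persistent_fetch('feature_flag', $miss); // assumed signature
if ($value === $miss) {
    // genuine miss
}
```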

Patches and Tests

Changelog