November 26

I have released cachetools, a package with @safe and @nogc cache and hash table implementations.

It inherits the @nogc property from the key's toHash and opEquals methods, and can store immutable keys and values (with some restrictions).

Currently, only an LRU cache with TTL is implemented.
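To illustrate what an LRU cache with TTL does, here is a minimal language-neutral sketch in Python. The class and method names here are hypothetical and are not the cachetools D API: entries expire after a fixed time-to-live, and when the cache is full the least recently used entry is evicted.

```python
import time
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache with a global TTL.
    Illustrative sketch only; not the cachetools D API."""

    def __init__(self, size=1024, ttl=None):
        self.size = size            # maximum number of entries
        self.ttl = ttl              # global time-to-live in seconds (None = no expiry)
        self._data = OrderedDict()  # key -> (value, expiry_time)

    def put(self, key, value):
        if key in self._data:
            del self._data[key]     # re-inserting refreshes position and expiry
        elif len(self._data) >= self.size:
            self._data.popitem(last=False)  # evict the least recently used entry
        expiry = time.monotonic() + self.ttl if self.ttl else None
        self._data[key] = (value, expiry)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expiry = item
        if expiry is not None and time.monotonic() > expiry:
            del self._data[key]     # expired entry behaves like a miss
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return value

cache = LRUCache(size=2, ttl=60)
cache.put(1, "one")
cache.put(2, "two")
cache.get(1)           # touch key 1 so key 2 becomes the LRU entry
cache.put(3, "three")  # cache is full: evicts key 2
print(cache.get(2))    # None: evicted
print(cache.get(1))    # "one": still cached
```

The combination of recency-based eviction and expiry is what makes such a cache bounded in both size and staleness.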

In my benchmarks, the underlying hash table seems to perform better than std associative arrays and the EMSI containers (see )

Test: inserts and lookups, int[int]

| impl    | time                            | GC memory Δ |
|---------|---------------------------------|-------------|
| std     | 303 ms, 541 μs, and 3 hnsecs    | 41 MB       |
| c.t.    | 181 ms, 173 μs, and 2 hnsecs    | 0 MB        |
| c.t.+GC | 184 ms, 594 μs, and 5 hnsecs    | 16 MB       |
| emsi    | 642 ms and 120 μs               | 0 MB        |

Test: insert, remove, lookup, int[int]

| impl    | time                            | GC memory Δ |
|---------|---------------------------------|-------------|
| std     | 327 ms, 982 μs, and 1 hnsec     | 17 MB       |
| c.t.    | 229 ms, 11 μs, and 7 hnsecs     | 0 MB        |
| c.t.+GC | 240 ms, 135 μs, and 4 hnsecs    | 16 MB       |
| emsi    | 678 ms, 931 μs, and 9 hnsecs    | 0 MB        |

Test: inserts and lookups, struct[int]

| impl    | time                            | GC memory Δ |
|---------|---------------------------------|-------------|
| std     | 468 ms, 411 μs, and 7 hnsecs    | 109 MB      |
| c.t.    | 392 ms, 146 μs, and 1 hnsec     | 0 MB        |
| c.t.+GC | 384 ms, 771 μs, and 5 hnsecs    | 88 MB       |
| emsi    | 1 sec, 328 ms, 974 μs, and 9 hnsecs | 0 MB    |


Thanks for bug reports and proposals.

January 11

v0.1.1 released.

Significant changes since the previous announcement:


1. Added a 2Q cache. 2Q is a more advanced strategy than plain LRU: it is faster, scan-resistant, and can yield more cache hits.

2. The LRU cache now supports per-item TTL in addition to the global TTL.
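For readers unfamiliar with 2Q: the idea is to keep brand-new keys in a small FIFO queue and promote a key to the main LRU queue only when it is seen again, so a one-off scan of many keys cannot flush the hot set. Here is a simplified Python sketch of the strategy; the queue names, sizes, and methods are assumptions for illustration, not the cachetools D API.

```python
from collections import OrderedDict, deque

class TwoQCache:
    """Simplified 2Q cache sketch (illustrative; not the cachetools D API)."""

    def __init__(self, in_size=4, out_size=8, main_size=8):
        self.in_q = OrderedDict()            # FIFO of recently inserted entries
        self.out_q = deque(maxlen=out_size)  # "ghost" keys evicted from in_q
        self.main = OrderedDict()            # LRU of proven-hot entries
        self.in_size = in_size
        self.main_size = main_size

    def put(self, key, value):
        if key in self.main:
            self.main[key] = value
            self.main.move_to_end(key)
        elif key in self.in_q:
            self.in_q[key] = value
        elif key in self.out_q:
            # Seen again after eviction from in_q: promote to the main LRU.
            if len(self.main) >= self.main_size:
                self.main.popitem(last=False)
            self.main[key] = value
        else:
            # Brand-new key goes into the FIFO queue. A single scan only
            # churns in_q and never pollutes main (scan resistance).
            if len(self.in_q) >= self.in_size:
                old, _ = self.in_q.popitem(last=False)
                self.out_q.append(old)  # remember the key, drop the value
            self.in_q[key] = value

    def get(self, key):
        if key in self.main:
            self.main.move_to_end(key)
            return self.main[key]
        return self.in_q.get(key)
```

Because only re-accessed keys reach the main queue, frequently used entries survive workloads that would evict them from a plain LRU.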


1. Added unrolled doubly-linked lists. Unrolled lists are much faster than plain doubly-linked lists; they are used in the 2Q cache implementation.

2. HashMap: code cleanup; use core.internal.hash : bytesHash for strings.
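The idea behind an unrolled list is that each node holds a small array of elements instead of a single element, which cuts per-element allocations and pointer chasing and improves cache locality. A hypothetical Python sketch of the structure (for illustration only, not the cachetools implementation):

```python
class Node:
    __slots__ = ("items", "prev", "next")

    def __init__(self):
        self.items = []   # up to `capacity` elements live in one node
        self.prev = None
        self.next = None

class UnrolledList:
    """Sketch of an unrolled doubly-linked list; illustrative only."""

    def __init__(self, node_capacity=16):
        self.capacity = node_capacity
        self.head = self.tail = Node()
        self.length = 0

    def append(self, value):
        if len(self.tail.items) == self.capacity:
            # Only one node allocation per `capacity` appends, versus one
            # allocation per element in a plain doubly-linked list.
            node = Node()
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        self.tail.items.append(value)
        self.length += 1

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.items  # elements inside a node are contiguous
            node = node.next
```

With a capacity of 16, traversal follows one pointer per 16 elements instead of one per element, which is where most of the speedup over a plain list comes from.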

Cachetools is a set of cache strategies and containers. Caches and containers are @safe and @nogc (inherited from the properties of the key and value types).

Project page:

Some performance test results: