
Memcached design and comparison with Redis

http://key-value-stories.blogspot.kr/2015/02/memcached-internals-design.html?mkt_tok=3RkMMJWWfF9wsRonvKXKZKXonjHpfsX56uolX6KxlMI/0ER3fOvrPUfGjI4DT8BrI%2BSLDwEYGJlv6SgFQ7HAMa5m3rgMWRk%3D



Discussion

Strong sides of Memcached, compared to Redis

  • It is multithreaded and scales up to about 5 concurrent threads (according to a note in the memcached source code itself). That sounds plausible.
  • It stores the key and the value along with the entry structure within a single allocation, reducing memory overhead, improving access locality, and lowering memory allocation pressure.
  • It can neatly reuse just-expired entries to store new ones, using per-size-class LRU lists, which should nearly eliminate trips to the memory allocator in some workloads.
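The single-allocation layout from the second bullet can be sketched in C. This is a simplified, hypothetical struct, not memcached's actual `item` (the real one carries more fields such as LRU links, flags, and a CAS id); the point it illustrates is that the header, key bytes, and value bytes live in one malloc'd block:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical single-allocation entry: header + key + value in one block,
 * so a lookup touches one contiguous region of memory. */
typedef struct item {
    struct item *h_next;  /* hash-bucket chain */
    uint32_t     vlen;    /* value length */
    uint8_t      klen;    /* key length (memcached likewise uses 1 byte) */
    char         data[];  /* key bytes immediately followed by value bytes */
} item;

static item *item_alloc(const char *key, uint8_t klen,
                        const void *val, uint32_t vlen) {
    /* One malloc covers the header and both payloads. */
    item *it = malloc(sizeof(item) + klen + vlen);
    if (!it) return NULL;
    it->h_next = NULL;
    it->klen = klen;
    it->vlen = vlen;
    memcpy(it->data, key, klen);
    memcpy(it->data + klen, val, vlen);
    return it;
}

static const char *item_key(const item *it)   { return it->data; }
static const char *item_value(const item *it) { return it->data + it->klen; }
```

By contrast, a design that allocates the key string, the entry structure, and the value separately pays three allocator round-trips and scatters one logical entry across the heap.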

General weaknesses and shortcomings

Concurrent access management
  • While the bucket lock is well striped, the other three locks (memory allocator, LRU cache, and statistics) are global, which limits scalability to only about 5 threads.
  • At the very least, having per-slabclass allocator and LRU cache locks should help in some cases, and should be easy to implement, because slabclasses don't share state.
  • Mutex locks are slow because a contended acquisition causes two context switches. Context switch cost estimates vary over a wide range, depending on whether it is an in-process thread switch or an OS-level process switch (which requires far more kernel work and a TLB flush), the CPU model, the workload, whether the server is dedicated to Memcached or also runs other busy processes, and so on.

    The broadest estimate is that a context switch takes from 1 to 50 microseconds, i.e. from 1,000 to 50,000 nanoseconds.

    Most Memcached operations take much less time than two context switches, which means busy-wait locking would result in smaller time losses.

    Given the current architecture and lock layout, the statistics lock should be converted into a spinlock.

    The slab allocation lock(s) and bucket locks should have a tunable spin time in the acquisition loop before parking on a heavy mutex. This is because slab allocation can either complete very quickly (just unlink an entry from the free list) or hit page faults while establishing a new slab. Bucket locks wrap the memory allocator lock, so they generally inherit slab allocation's latency properties.
  • Hash table bucket locks should be read-write granular.
  • Concurrent memory allocation performs better when it is split into independent regions, where the region used for an allocation depends on the current thread (jemalloc) or is simply chosen at random (one-nio). However, this isn't easily applicable to Memcached.
  • Using a single byte to store the key size, while the minimum entry overhead is 63 bytes, seems an almost pointless optimization.
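The spin-then-park idea proposed above can be sketched as a thin wrapper over a POSIX mutex: attempt `pthread_mutex_trylock` in a bounded loop (cheap, no sleep) and only fall back to a blocking `pthread_mutex_lock` after the spin budget runs out. This is a hypothetical sketch, not memcached code; `spin_max` stands in for the tunable the text proposes:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical spin-then-park lock: a bounded trylock loop avoids the
 * two context switches of a mutex sleep when the critical section is
 * short; after spin_max failed attempts, the caller parks on the mutex. */
typedef struct {
    pthread_mutex_t m;
    unsigned spin_max;  /* tunable spin budget before parking */
} spin_park_lock;

static void spl_init(spin_park_lock *l, unsigned spin_max) {
    pthread_mutex_init(&l->m, NULL);
    l->spin_max = spin_max;
}

static void spl_lock(spin_park_lock *l) {
    for (unsigned i = 0; i < l->spin_max; i++) {
        if (pthread_mutex_trylock(&l->m) == 0)
            return;                 /* acquired without sleeping */
    }
    pthread_mutex_lock(&l->m);      /* spin budget exhausted: park */
}

static void spl_unlock(spin_park_lock *l) {
    pthread_mutex_unlock(&l->m);
}
```

A production version would insert a CPU pause instruction in the spin loop and perhaps back off exponentially; glibc's adaptive mutex type (`PTHREAD_MUTEX_ADAPTIVE_NP`) implements a similar spin-before-sleep policy internally.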

