Guide to the Storage Engine in Apache Cassandra

Last modified: September 25, 2022

1. Overview


Modern database systems are tailored to guarantee a set of capabilities such as reliability, consistency, high throughput, and so on by leveraging sophisticated storage engines for writing and reading data.


In this tutorial, we’ll dive deep into the internals of the storage engine used by Apache Cassandra, which is designed for write-heavy workloads while maintaining good performance for reads as well.


2. Log-Structured Merge-Tree (LSMT)


Apache Cassandra leverages a two-level Log-Structured Merge-Tree based data structure for storage. At a high level, there are two tree-like components in such an LSM tree, an in-memory cache component (C0) and an on-disk component (C1):
Log-structured Merge-tree


Reading and writing directly from memory is usually much faster than from disk. Therefore, by design, all requests hit C0 before reaching out to C1. Moreover, a sync operation periodically persists data from C0 to C1. As a result, the engine uses disk bandwidth efficiently by reducing random I/O operations.
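To make the two-level flow concrete, here is a minimal sketch (not Cassandra's actual classes; the names are illustrative) of the C0/C1 read and write paths, where all writes land in memory first, a periodic sync persists them, and reads check C0 before falling back to C1:

```python
# A toy two-level LSM store: C0 is the in-memory component, and a plain
# dict stands in for the on-disk component C1.

class TwoLevelStore:
    def __init__(self):
        self.c0 = {}          # in-memory component (MemTable)
        self.c1 = {}          # stands in for the on-disk component (SSTable)

    def put(self, key, value):
        self.c0[key] = value  # all writes hit C0 first

    def sync(self):
        # the periodic sync persists C0 into C1 and empties C0
        self.c1.update(self.c0)
        self.c0.clear()

    def get(self, key):
        # C0 is consulted before C1, so the most recent write wins
        if key in self.c0:
            return self.c0[key]
        return self.c1.get(key)

store = TwoLevelStore()
store.put("tiger", 3)
store.sync()
store.put("tiger", 5)      # newer value stays in C0 until the next sync
print(store.get("tiger"))  # -> 5
```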


In the following sections, we’ll learn more about the C0 and C1 data structures of the two-level LSM tree in Apache Cassandra, commonly known as MemTable and SSTable, respectively.


3. MemTable


As the name suggests, a MemTable is a memory-resident data structure, such as a red-black tree, which is a self-balancing binary search tree. As a result, all read and write operations, namely search, insert, update, and delete, are achievable with O(log n) time complexity.


Being a mutable in-memory data structure, the MemTable makes all writes sequential and allows for fast write operations. Further, due to the typical constraints of physical memory, namely its limited capacity and volatile nature, we need to persist data from the MemTable to disk:


Once the size of the MemTable reaches a threshold value, all read/write requests switch to a new MemTable, while the old one is discarded after being flushed onto the disk.
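The flush cycle above can be sketched as follows. This is a hypothetical simplification (the threshold, the zoo data, and the use of a plain dict in place of a balanced tree are all illustrative):

```python
# A toy MemTable flush cycle: once the table reaches a size threshold,
# it is flushed as an immutable sorted segment and a fresh MemTable
# takes over.

THRESHOLD = 3  # illustrative; real thresholds are byte-based

class MemTable:
    def __init__(self):
        self.data = {}  # a dict stands in for the balanced search tree

    def put(self, key, value):
        self.data[key] = value

    def is_full(self):
        return len(self.data) >= THRESHOLD

    def flush(self):
        # an immutable segment is just the entries sorted by key
        return sorted(self.data.items())

segments = []  # on-disk segments, oldest first
memtable = MemTable()
for key, count in [("tiger", 3), ("lion", 2), ("zebra", 7), ("ape", 4)]:
    if memtable.is_full():
        segments.append(memtable.flush())  # persist, then switch tables
        memtable = MemTable()
    memtable.put(key, count)

print(segments)  # -> [[('lion', 2), ('tiger', 3), ('zebra', 7)]]
```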


So far, so good! We can handle a large number of writes efficiently. However, what happens if the node crashes before a flush operation? It's simple: we'd lose the data that isn't yet flushed to disk.


In the next section, we’ll see how Apache Cassandra solves this problem by using the concept of Write-Ahead Logs (WAL).


4. Commit Log


Apache Cassandra defers the flush operation that persists data from memory to disk. Hence, an unexpected node or process crash can lead to data loss.


Durability is a must-have capability for any modern database system, and Apache Cassandra is no exception. It guarantees durability by ensuring all writes persist onto the disk in an append-only file called Commit Log. Thereafter, it uses MemTable as a write-back cache in the write-path:


Commit Log

We must note that append-only operations are fast as they avoid random seeks on the disk. So, the Commit Log brings durability without compromising write performance. Further, Apache Cassandra refers to the Commit Log only during a crash recovery scenario; regular read requests don't go to the Commit Log.
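A simplified sketch of this write-ahead scheme follows. File handling is elided: a plain list stands in for the append-only file, and the crash and recovery are simulated, so none of this mirrors Cassandra's actual commit-log format:

```python
# Every write is appended to the commit log *before* touching the
# MemTable, so a crashed node can rebuild unflushed data by replaying
# the log in order.

commit_log = []  # append-only; sequential writes avoid random disk seeks
memtable = {}

def write(key, value):
    commit_log.append((key, value))  # 1. persist to the log first
    memtable[key] = value            # 2. then update the write-back cache

def recover():
    # replay the log in order; later entries overwrite earlier ones
    recovered = {}
    for key, value in commit_log:
        recovered[key] = value
    return recovered

write("elephant", 5)
write("tiger", 3)
write("elephant", 6)
memtable = {}     # simulate a crash losing the in-memory MemTable
print(recover())  # -> {'elephant': 6, 'tiger': 3}
```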


5. SSTable


Sorted String Table (SSTable) is the disk-resident component of the LSM tree used by the Apache Cassandra storage engine. It derives its name from a similar data structure, first used by Google’s BigTable database, and indicates that the data is available in a sorted format. Generally speaking, each flush operation from the MemTable generates a new immutable segment in the SSTable.


Let’s try to visualize how an SSTable would look while containing data about the count of various animals kept in a zoo:


Though the segments are sorted by key, the same key can be present in multiple segments. So, if we have to look for a specific key, we need to start our search from the latest segment and return the result as soon as we find it.


With such a strategy, read operations for the recently written keys would be fast. However, in the worst case, the algorithm executes with an O(N*log(K)) time complexity, where N is the total number of segments, and K is the segment size. As K is a constant, we can say that the overall time complexity is O(N), which is not efficient.
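The newest-to-oldest scan can be sketched as below. The zoo data is illustrative; each segment is binary-searched in O(log K), and in the worst case all N segments are visited:

```python
# Naive SSTable lookup: scan segments from newest to oldest,
# binary-searching each sorted segment, and stop at the first hit.

import bisect

segments = [                                  # oldest first
    [("alligator", 10), ("elephant", 5), ("tiger", 2)],
    [("elephant", 6), ("kangaroo", 4)],
]

def lookup(key):
    for segment in reversed(segments):        # newest segment first
        keys = [k for k, _ in segment]
        i = bisect.bisect_left(keys, key)     # O(log K) per segment
        if i < len(keys) and keys[i] == key:
            return segment[i][1]
    return None                               # worst case: all N segments scanned

print(lookup("elephant"))  # -> 6 (the latest version wins)
print(lookup("giraffe"))   # -> None
```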


In the following few sections, we’ll learn how Apache Cassandra optimizes the read operations for SSTable.


6. Sparse Index


Apache Cassandra maintains a sparse index to limit the number of segments it needs to scan while looking for a key.


Each entry in the sparse index contains the first key of a segment, along with its page offset position on the disk. Further, the index is maintained in memory as a B-Tree data structure so that we can search for an offset in the index with O(log(K)) time complexity.


Let’s say we want to search for the key “beer.” We’ll start by searching for all the keys in the sparse index that’d come before the word “beer.” After that, using the offset value, we’ll look into only a limited number of segments. In this case, we’ll look into the fourth segment where the first key is “alligator”:


Sparse Index

On the other hand, if we had to search for an absent key such as “kangaroo,” we’d have to look through all the segments in vain. So, we realize that using a sparse index optimizes the search to a limited extent.
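A minimal sketch of the index lookup, assuming illustrative keys and offsets (and a sorted list with binary search standing in for the in-memory B-Tree):

```python
# Sparse index lookup: the index holds only each segment's first key and
# its disk offset, so a binary search over the index narrows the scan to
# one candidate segment.

import bisect

# (first key of segment, disk offset), kept sorted and in memory
sparse_index = [("alligator", 0), ("dog", 4096), ("lion", 8192)]

def candidate_offset(key):
    first_keys = [k for k, _ in sparse_index]
    # rightmost index entry whose first key is <= the search key
    i = bisect.bisect_right(first_keys, key) - 1
    if i < 0:
        return None  # key sorts before every segment's first key
    return sparse_index[i][1]

print(candidate_offset("beer"))   # -> 0 (the segment starting at "alligator")
print(candidate_offset("mouse"))  # -> 8192
```

Note that for an absent key like "kangaroo" the index still returns a candidate offset, which is why the scan can be in vain.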


Moreover, we should note that SSTable allows the same key to be present in different segments. Therefore, with time, more and more updates will happen for the same key, thereby creating duplicate keys in the sparse index too.


In the following sections, we’ll learn how Apache Cassandra addresses these two problems with the help of bloom filters and compaction.


7. Bloom Filter


Apache Cassandra optimizes the read queries using a probabilistic data structure known as a bloom filter. Simply put, it optimizes the search by first performing a membership check for a key using a bloom filter.


So, by attaching a bloom filter to each segment of the SSTable, we’d get significant optimizations for our read queries, especially for keys that aren’t present in a segment:


Bloom Filter

As bloom filters are probabilistic data structures, we could get “Maybe” as a response, even for missing keys. However, if we get “No” as a response, we can be sure that the key’s definitely missing.


Despite this limitation, we can improve the accuracy of bloom filters by allocating more storage space to them.
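A toy bloom filter illustrating the membership check: "no" answers are definitive, while "maybe" answers can be false positives. The hash scheme and sizes below are illustrative choices, not Cassandra's implementation:

```python
# A minimal bloom filter: each key sets several bits derived from seeded
# hashes; a key "might" be present only if all of its bits are set.

class BloomFilter:
    def __init__(self, size=64, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0  # an int works as a fixed-size bit array

    def _positions(self, key):
        # derive several bit positions from seeded hashes of the key
        return [hash((seed, key)) % self.size for seed in range(self.hashes)]

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means "definitely absent"; True means "maybe present"
        return all(self.bits & (1 << pos) for pos in self._positions(key))

bf = BloomFilter()
for key in ("alligator", "elephant", "tiger"):
    bf.add(key)

print(bf.might_contain("elephant"))  # -> True ("maybe": go read the segment)
```

There are no false negatives: any added key always answers True, so a False answer safely skips the segment.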


8. Compaction


Despite using bloom filters and a sparse index, the performance of the read queries would degrade with time. That’s because the number of segments containing different versions of a key will likely grow with each MemTable flush operation.


To solve the issue, Apache Cassandra runs a background process of compaction that merges smaller sorted segments into larger segments while keeping only the latest value for each key. So, the compaction process gives the dual benefit of faster reads with lesser storage.


Let’s see what a single run of compaction would look like on our existing SSTable:



We notice that the compaction operation reclaimed some space by keeping only the latest version. For instance, old versions of keys such as “elephant” and “tiger” are no longer present, thereby freeing up disk space.


Additionally, the compaction process enables the hard deletion of keys. While a delete operation would mark a key with a tombstone, the actual deletion is deferred until compaction.
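A sketch of a single compaction run under these rules: sorted segments are merged into one, only the newest version of each key survives, and tombstoned keys are hard-deleted. The `TOMBSTONE` sentinel and the zoo data are illustrative:

```python
# Compaction: merge segments (oldest first) into one sorted segment,
# keeping only the latest value per key and dropping tombstoned keys.

TOMBSTONE = object()  # illustrative deletion marker

def compact(segments):
    latest = {}
    for segment in segments:  # later segments overwrite earlier ones
        for key, value in segment:
            latest[key] = value
    # hard-delete tombstoned keys, then re-sort by key
    return sorted((k, v) for k, v in latest.items() if v is not TOMBSTONE)

segments = [
    [("elephant", 5), ("tiger", 2)],          # oldest
    [("elephant", 6), ("kangaroo", 4)],
    [("tiger", 3), ("kangaroo", TOMBSTONE)],  # newest; kangaroo was deleted
]
print(compact(segments))  # -> [('elephant', 6), ('tiger', 3)]
```

Note how the old versions of "elephant" and "tiger" and the tombstoned "kangaroo" are gone after the run, reclaiming disk space.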


9. Conclusion


In this article, we explored the internal components of Apache Cassandra’s storage engine. While doing so, we learned about advanced data structure concepts such as LSM Tree, MemTable, and SSTable. Moreover, we also learned a few optimization techniques using Write-Ahead Logging, Bloom Filters, Sparse Index, and Compaction.
