RocksDB-Learned Range Filter


  • LeRF: A Learned Range Filter for Key-Value Stores
  • AegisKV: A Range-query Optimized Key-Value Store via Learned Range Filter and Efficient Partitioning
  • AegisKV: Optimizing Range Queries for Key-Value Stores
  • Submission targets
    • Filter as a standalone paper: NeurIPS if framed as machine learning, a systems venue if framed as KV storage
    • Range-optimized KV store: cite the filter paper; deletion optimization; asynchronous scan

Abstract

Introduction

  • Range query performance problems
  • Problems caused by large-scale deletions

Background and Motivation

  • LSM-tree and RocksDB
  • Range Query
  • Range Filter

Examples: Rosetta, SuRF

  • System Stalls

Design

  • Overall System Architecture

Learned Range Filter

  • Treat the filter as a classification problem
  • Guarantee no false negatives: a backup filter
  • Classification algorithms
  • Reducing the error rate

Partition Scheduler

Implementation and Evaluation

Conclusion

Related Work

  • Filter characteristics
    • An existence index: a 0/1 classification problem
    • Allows false positives but never false negatives (a "present" answer may be wrong; an "absent" answer is always right)
    • Trade-off: space vs. accuracy
  • Bloom filter

    • A Bloom filter represents a set \(S = \{x_1, x_2, \ldots, x_n\}\) of n keys. It consists of an array of m bits and uses k independent hash functions \(h_1, h_2, \ldots, h_k\), with the range of each \(h_i\) being integer values between 0 and m − 1

    • False positive rate: \(FPR = (1-(1-1/m)^{kn})^k\)

  • Problems with Bloom filters

    • Not data-aware, so space overhead is large and the false positive rate is high: hence Learned Filters
    • No support for range queries: hence SuRF and Rosetta
    • Low performance and high overhead: hence Chucky
      • On earlier SSDs/HDDs, device access latency was large, so the overhead of Bloom filters was a small, negligible share of query cost
      • NVMe SSDs narrow the performance gap to DRAM; since a Bloom filter is maintained at every level, lookups through them add noticeable latency, making the Bloom filters one of the new bottlenecks of the LSM-tree
  • Related papers: prediction and filtering

    • Range filters for KV stores
    • Learned filters
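The Bloom filter construction and the FPR formula quoted above can be sketched in a few lines (a minimal illustration only; deriving the k hash functions by double hashing over SHA-256 is an implementation choice for this sketch, not part of the definition):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an array of m bits and k hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, key):
        # Double hashing: k indices derived from two 64-bit halves of SHA-256.
        d = hashlib.sha256(key.encode()).digest()
        h1 = int.from_bytes(d[:8], "little")
        h2 = int.from_bytes(d[8:16], "little") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, key):
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))

def expected_fpr(m, n, k):
    """The formula quoted above: FPR = (1 - (1 - 1/m)^{kn})^k."""
    return (1.0 - (1.0 - 1.0 / m) ** (k * n)) ** k
```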

Paper 1-1 SuRF: Practical Range Query Filtering with Fast Succinct Tries

  • SIGMOD'18 Best Paper, Andrew Pavlo's group at CMU

  • Proposes a new data structure, SuRF (Succinct Range Filter), built on FST (Fast Succinct Tries), which supports both point and range queries. FST encodes the trie with LOUDS-DS: the upper levels have few nodes and hold the frequently accessed hot data, while the lower levels contain most of the nodes and hold cold data. LOUDS-DS lays nodes out in sorted order within each level, and this property lets it support range queries efficiently


  • A full trie can answer every lookup exactly but consumes a large amount of memory. To balance the filter's false positive rate against memory usage, SuRF uses a pruned trie and proposes several variants:

    • SuRF-Base stores only the shortest prefix that distinguishes each key, but this yields a high false positive rate when key prefixes are very similar. For example, experiments show nearly 25% false positives on data keyed by email addresses
    • To lower SuRF-Base's false positive rate, SuRF-Hash appends a few hash bits per key; a query hashes the target and checks the corresponding bits. Experiments show that 2-4 hash bits suffice to bring the false positive rate down to about 1%. While SuRF-Hash effectively reduces false positives, it does not improve range query performance
    • Unlike SuRF-Hash, SuRF-Real stores n additional real key bits after each key prefix to increase key distinguishability, reducing the false positive rate while also improving both point and range query performance. But because some keys differ little in their prefixes, SuRF-Real's false positive rate is not as low as SuRF-Hash's
    • SuRF-Mixed combines the strengths of SuRF-Hash and SuRF-Real and supports both point and range queries effectively
  • Evaluation

    • Code: https://github.com/efcient/SuRF

    • Metrics: false positive rate (FPR), performance, and space

    • Workloads: YCSB, email address data

      • The datasets are 100M 64-bit random integer keys and 25M email keys
      • "We test two representative key types: 64-bit random integers generated by YCSB and email addresses (host reversed, e.g., “com.domain@foo”) drawn from a real-world dataset (average length = 22 bytes, max length = 129 bytes)."

    • Application: end-to-end RocksDB test with YCSB workloads

      • Procedure: "We first warm the cache with 1M uniformly-distributed point queries to existing keys so that every SSTable is touched ∼ 1000 times and the block indexes and filters are cached. After the warm-up, both RocksDB’s block cache and the OS page cache are full. We then execute 50K application queries, recording the end-to-end throughput and I/O counts"
  • Ditch Excel!
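SuRF-Base's pruning rule, keeping only the shortest prefix that distinguishes each key, can be illustrated with a toy sketch (a sorted prefix list stands in for the real FST/LOUDS-DS encoding, which is far more compact; keys are assumed distinct):

```python
from bisect import bisect_right

def min_prefixes(keys):
    """Shortest prefix of each distinct key that distinguishes it from its
    neighbors in sorted order (the SuRF-Base pruning rule)."""
    ks = sorted(keys)
    out = []
    for i, k in enumerate(ks):
        lcp = 0  # longest common prefix with either sorted neighbor
        for nb in ((ks[i - 1] if i > 0 else ""), (ks[i + 1] if i + 1 < len(ks) else "")):
            j = 0
            while j < min(len(k), len(nb)) and k[j] == nb[j]:
                j += 1
            lcp = max(lcp, j)
        out.append(k[:lcp + 1])  # one character past the divergence point
    return out

class ToySuRF:
    def __init__(self, keys):
        self.prefixes = sorted(min_prefixes(keys))

    def may_contain(self, key):
        # A hit means some stored prefix is a prefix of the query key; other
        # keys sharing a stored prefix cause false positives, never false negatives.
        i = bisect_right(self.prefixes, key)
        return i > 0 and key.startswith(self.prefixes[i - 1])

    def may_contain_range(self, lo, hi):
        # True if some stored prefix could extend to a key in [lo, hi].
        if self.may_contain(lo):
            return True
        i = bisect_right(self.prefixes, lo)
        return i < len(self.prefixes) and self.prefixes[i] <= hi
```

This also makes the SuRF-Base failure mode above concrete: the more similar the key prefixes, the longer the stored prefixes must be, and any non-key sharing a stored prefix is a false positive.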

Paper 1-2 Rosetta: A Robust Space-Time Optimized Range Filter for Key-Value Stores

  • SIGMOD'20, Stratos Idreos's group at Harvard (also behind Monkey and Dostoevsky)
  • Rosetta: improves on SuRF's range-query capability
    • Problem 1: short and medium range queries are significantly sub-optimal
    • Problem 2: lack of support for workloads with key-query correlation or skew
  • Design: index the binary prefixes of each key with a hierarchy of Bloom filters, then translate each range query into a series of Bloom filter probes


  • Evaluation

    • Workload 1: YCSB key-value workloads that are variations of Workload E
    • Metrics: latency, CPU cost, memory footprint, FPR

    • Workload 2: several real-world workloads and a variable-length string dataset, Wikipedia Extraction
      • "We use a variable-length string data set, Wikipedia Extraction (WEX), comprising a processed dump (of size 6M) of English language in Wikipedia"
      • "We generate 50 million (50 × 10^6) keys each of size 64 bits (and 512 byte values) using both uniform, and normal distributions."
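Rosetta's core mechanism, indexing every binary prefix of each key in a per-length filter and answering a range query by recursively probing prefixes over the dyadic tree, can be sketched as follows (plain sets stand in for the per-level Bloom filters, so the sketch shows the probing logic without their false positives; the 16-bit key width is an assumption of this toy):

```python
L = 16  # key width in bits (an assumption for this sketch)

class PrefixRangeFilter:
    def __init__(self, keys):
        # levels[i] indexes the i-bit prefixes of all keys; real Rosetta
        # uses one Bloom filter per level instead of an exact set.
        self.levels = [set() for _ in range(L + 1)]
        for k in keys:
            for i in range(L + 1):
                self.levels[i].add(k >> (L - i))

    def _probe(self, prefix, length):
        return prefix in self.levels[length]

    def may_intersect(self, lo, hi, prefix=0, length=0):
        """Could [lo, hi] contain a key? Recursive probe of the prefix tree."""
        node_lo = prefix << (L - length)
        node_hi = node_lo | ((1 << (L - length)) - 1)
        if node_hi < lo or node_lo > hi:
            return False          # this subtree is disjoint from the range
        if not self._probe(prefix, length):
            return False          # no key has this prefix
        if length == L:
            return True           # a full-length key inside [lo, hi]
        # Recurse to both children; descending to the bottom level is what
        # lets Rosetta "doubt" internal-node Bloom false positives.
        return (self.may_intersect(lo, hi, prefix << 1, length + 1) or
                self.may_intersect(lo, hi, (prefix << 1) | 1, length + 1))
```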

Paper 1-3 Chucky: A Succinct Cuckoo Filter for LSM-Tree

  • SIGMOD'21, Stratos Idreos's group at Harvard
  • Focus: with high-performance storage media, the overhead caused by Bloom filters can no longer be ignored
  • Design: Succinct + Cuckoo Filter
    • Chucky replaces the many Bloom filters of the LSM-tree with a single succinct cuckoo filter, effectively reducing lookup overhead


  • Evaluation: YCSB workloads
  • Metrics: memory I/O scalability, FPR scalability, data in storage vs. memory, end-to-end write cost

Paper 2-1 A Model for Learned Bloom Filters, and Optimizing by Sandwiching

  • NeurIPS 2018, a single-author paper by Michael Mitzenmacher (Harvard)

  • The original learned Bloom filter

    • We then train a model with the dataset \(\mathcal{D} = \{(x_i, y_i = 1) : x_i \in \mathcal{K}\} \cup \{(x_i, y_i = 0) : x_i \in \mathcal{U}\}\), where \(\mathcal{K}\) is the key set and \(\mathcal{U}\) is a sample of non-keys

    • that is, they suggest using a neural network on this binary classification task to produce a probability \(f(x)\), based on minimizing the log loss function \(L = -\sum_{(x, y) \in \mathcal{D}} \big( y \log f(x) + (1 - y) \log(1 - f(x)) \big)\)

    • Definition: a learned Bloom filter consists of a model \(f\), a threshold \(\tau\), and a backup Bloom filter \(B\); a query \(x\) is reported present if \(f(x) \ge \tau\), or if \(f(x) < \tau\) and \(x \in B\). The keys with \(f(x) < \tau\) (the model's false negatives) are exactly what \(B\) must store

    • Evaluating the FPR

      • Unlike a standard Bloom filter's FPR, it is highly dependent on the query set and is not well defined independently of the queries

      • Although the false positive rate \(F(B)\) is itself a random variable, it concentrates around its expectation, which depends only on the size of the backup filter \(B\) and on the number of false negatives from \(\mathcal{K}\) that must be stored in the filter, which in turn depends on \(f\)

    • On non-keys

      • "An assumption in this framework is that the training sample distribution needs to match or be close to the test distribution of non-keys. For many applications, past workloads or historical data can be used to get an appropriate non-key sample."
      • Justification: "Given sufficient data, we can determine an empirical false positive rate on a test set, and use that to predict future behavior. Under the assumption that the test set has the same distribution as future queries, standard Chernoff bounds provide that the empirical false positive rate will be close to the false positive rate on future queries, as both will be concentrated around the expectation. In many learning theory settings, this empirical false positive rate appears to be referred to as simply the false positive rate; we emphasize that false positive rate, as we have explained above, typically means something different in the Bloom filter literature"


  • In addition to the back Bloom filter, the sandwiched construction places a Bloom filter in front of the learned model as well
    • Since the size of the back Bloom filter grows with the number of false-negative elements coming out of the learned (RNN) model, screening queries through a front Bloom filter first reduces the space the back filter needs
    • Another advantage of the sandwiched construction is that it is more robust than the learned Bloom filter of Kraska et al.: if the training and test sets have different distributions, the model's FNR can be far larger than expected, and the front Bloom filter mitigates this by filtering out a portion of the queries before they reach the model
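The no-false-negative guarantee of the learned-plus-backup construction can be sketched as follows (a trivial parity rule stands in for the paper's neural model, and a plain set for the backup Bloom filter, purely to keep the sketch short):

```python
class LearnedFilter:
    """Learned Bloom filter skeleton: model + backup, no false negatives."""
    def __init__(self, keys, model, tau):
        self.model, self.tau = model, tau
        # Keys the model scores below tau are its false negatives; storing
        # exactly those in a backup structure restores the one-sided guarantee.
        # (The paper uses a Bloom filter here; a set keeps the sketch exact.)
        self.backup = {k for k in keys if model(k) < tau}

    def query(self, x):
        # Positive if the model accepts x, or the backup catches a model miss.
        return self.model(x) >= self.tau or x in self.backup

# Toy stand-in for the learned model: "keys look even".
keys = {2, 4, 6, 9}
model = lambda x: 1.0 if x % 2 == 0 else 0.0
lf = LearnedFilter(keys, model, tau=0.5)
```

Sandwiching then adds a small Bloom filter over all keys in front of this pipeline: most non-keys are rejected before ever reaching the model, so the back filter can be smaller at the same overall FPR.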

Paper 2-2 Adaptive Learned Bloom Filter (Ada-BF)

  • NeurIPS 2020

  • How is the key distribution obtained? How are non-keys selected?

Paper 2-3 The Case for Learned Index Structures & Partitioned Learned Bloom Filter

  • Tim Kraska, Jeffrey Dean

Other Papers

  • Hash Adaptive Bloom Filter, ICDE'21
  • Compressing (Multidimensional) Learned Bloom Filters, NeurIPS Workshop 2020
  • Meta-Learning Neural Bloom Filters, ICML'19
  • Learned FBF: Learning-Based Functional Bloom Filter for Key-Value Storage, TOC'21

Problem Summary

  • Range Filter
    • Rosetta: cannot avoid the cost of probing the data and merging probe results; ill-suited to long range queries
    • SuRF: builds a new structure, so inserting new data requires costly index reconstruction, and the succinct encoding performs poorly
    • Chucky: does not support range queries
  • Learned Filter
    • Strengths: aware of data patterns, high accuracy, small size
    • Weaknesses: no range-query support, model accuracy varies with the data, no dynamic inserts or updates
  • SNARF: A Learning-Enhanced Range Filter

Design and Tests

  • Combine the two: Learned Filter + Range Filter

Filter Design


  • Learned model: a binary classification problem

    • Choice of f(x): RMI, LR, PLR, SVM, CART, CNN, RNN
  • Theoretical grounding: how should the FPR be evaluated?
  • LRF design 1 (an optimization problem): find the maximum of f(x) within the query range

  • Is this sound and solvable? Needs mathematical derivation/proof

  • LRF design 2 (an interpolation problem): fit the key-score mapping and check whether the highest score within the range exceeds the threshold t; example: (k1, k2)

    • Sample selection: positive samples are the keys in the SSTable; negative samples: how to generate non-keys?

    • Regional maxima (local maxima): via differentiation

    • Spline interpolation & polynomial regression

  • Other approaches?

    • Kernel Density Estimation (KDE) fitting
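The second LRF design, checking whether the fitted key-score curve's maximum over [a, b] clears the threshold t, can be sketched with piecewise-linear interpolation, whose range maximum is simply the larger of the endpoint values and any interior knots (this stands in for the spline/polynomial fits above; the sample arrays and threshold are illustrative):

```python
from bisect import bisect_left, bisect_right

class ScoreCurveFilter:
    """Range filter over a fitted key->score curve: report 'maybe present'
    iff the curve's maximum on [a, b] reaches the threshold t."""
    def __init__(self, xs, ys, t):
        self.xs, self.ys, self.t = xs, ys, t  # xs: sorted sample keys; ys: scores

    def _at(self, x):
        # Piecewise-linear interpolation of the score at x.
        if x <= self.xs[0]:
            return self.ys[0]
        if x >= self.xs[-1]:
            return self.ys[-1]
        i = bisect_right(self.xs, x)
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    def range_query(self, a, b):
        # For a piecewise-linear curve, the max on [a, b] is attained at an
        # endpoint or at an interior knot, so no calculus is needed.
        m = max(self._at(a), self._at(b))
        lo, hi = bisect_right(self.xs, a), bisect_left(self.xs, b)
        if lo < hi:
            m = max(m, max(self.ys[lo:hi]))
        return m >= self.t
```

A higher-order spline would need a genuine extremum search within each segment (via the derivative), which is exactly the "regional maxima" question raised above.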

In the KV Store

  • Motivation

    • On NVMe SSDs, the existing Bloom filters become a bottleneck
    • No support for range queries
    • Data awareness
  • Design: equip each SSTable with a filter; the read-only data matches the requirements of a learned index

  • Model and string handling: can this be combined with the partitioned learned Bloom filter / RocksDB's partitioned Bloom filter?

    • RocksDB partitioned Bloom filter: with the full-filter format, the filter block can be partitioned into multiple smaller blocks to reduce pressure on the block cache

  • Range filter issues: optimizing merge overhead and long range queries

  • Key-range partition and garbage collection

Tests

  • LBF implementations: tried LR, SVM, CNN, RNN (LSTM, GRU), RMI

    • URL data, CNN: model size 25M, accuracy 0.972

    • DB_bench data

      • X: 0-10000, y ∈ [0, 1]; five thousand 0s and five thousand 1s
      • LR: 0.453; SVM: 0.562; RF: 0.503; CNN: 0.693; LSTM: 0.706; bidirectional LSTM: 0.834
      • Two-layer bidirectional LSTM: model size 5.4M, accuracy 0.997

Finding Extrema

  • Method 1: scipy.optimize.minimize

  • Method 2: gradient descent

  • Method 3: turn it into a training process

    • Training a neural network is itself an extremum search: minimize the loss
    • Input (x, y, z), parameters (a, b, c), model ax + by + cz
    • Training finds the (a, b, c) that minimizes loss = pre - y
    • Our situation is the reverse: the model (a, b, c) is fixed, and we search for the (x, y, z) that minimizes pre
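A gradient-based search for the range maximum (method 2 above) can be sketched as projected gradient ascent from several starting points; the quadratic score function and its gradient here are purely illustrative stand-ins for a learned model:

```python
def range_max(f, grad, a, b, starts=8, lr=0.1, steps=200):
    """Approximate the maximum of f over [a, b] by projected gradient
    ascent from several evenly spaced starting points."""
    best = max(f(a), f(b))                        # endpoints are always candidates
    for s in range(starts):
        x = a + (b - a) * s / (starts - 1)        # spread starts across [a, b]
        for _ in range(steps):
            x = min(b, max(a, x + lr * grad(x)))  # ascend, then clamp to [a, b]
        best = max(best, f(x))
    return best

# Illustrative stand-in for a learned score model: a smooth bump peaking at x = 3.
f = lambda x: 1.0 - (x - 3.0) ** 2
grad = lambda x: -2.0 * (x - 3.0)
```

Multiple starts hedge against the multimodal score curves a real learned model produces; a single ascent would only find the nearest local maximum.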


  • Testing the original paper's code
    • Model: GRU
    • Dataset: URL data
1. Bloom Filter
Bits needed 14293028
Hash functions needed 6
Tast False positives 0.010326521200924817

2. GRU
Params needed 2545
Bloom filter bits needed 7308941
Total bits needed 7311486
Test False positive rate:  0.010351505356869193

Range Sampling

  • The mushroom-picking problem
    • On the one-dimensional interval [0, N], k mushrooms are planted at positions determined by a function f(x), with coordinates P0, P1, …, Pk, where Pi = f(i);

    • A farmer picks mushrooms within a range (a, b), where a and b are arbitrary and 0 <= a < b <= N; determine whether the farmer can find a mushroom;

    • That is, construct a boolean function g(a, b) such that g = 1 means a mushroom can be found and g = 0 means it cannot.
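With the mushroom positions stored exactly, g(a, b) is a two-line sorted-array check; this is the exact oracle that a learned range filter would approximate in less space:

```python
from bisect import bisect_right

def make_g(positions):
    """Build g(a, b): 1 iff some mushroom lies strictly inside (a, b)."""
    ps = sorted(positions)
    def g(a, b):
        i = bisect_right(ps, a)   # first position strictly greater than a
        return 1 if i < len(ps) and ps[i] < b else 0
    return g
```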


Data: num0: 1703 num1: 178090

num0: 10509 num1: 199281

Prediction: num-1: 0 num0: 179128 num1: 665

num-1: 607 num0: 169174 num1: 10012

LR model

num0: 10504 num1: 199286 num-1: 0 num0: 209785 num1: 5

Next

  • Range Filter
    • Theoretical support?
    • Non-key data?
    • How should the range filter be implemented?
  • Tests
    • Try other LBF models; LR alone is too inaccurate
    • Run the LBF inside RocksDB and test with other datasets

SuRF Test

1. Without a filter

Point queries

throughput: 778.53 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1192.427183 95 : 2191.246016 99 : 2813.883330 100 : 18167.000000 rocksdb.block.cache.miss COUNT : 309894253 rocksdb.block.cache.hit COUNT : 1204937 rocksdb.block.cache.add COUNT : 6935340 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1428195 rocksdb.block.cache.index.hit COUNT : 1055823 rocksdb.block.cache.index.add COUNT : 1428195 rocksdb.block.cache.index.bytes.insert COUNT : 605603079464 rocksdb.block.cache.index.bytes.evict COUNT : 605518338384 rocksdb.block.cache.filter.miss COUNT : 0 rocksdb.block.cache.filter.hit COUNT : 0 rocksdb.block.cache.filter.add COUNT : 0 rocksdb.block.cache.filter.bytes.insert COUNT : 0 rocksdb.block.cache.filter.bytes.evict COUNT : 0 rocksdb.block.cache.data.miss COUNT : 308466058 rocksdb.block.cache.data.hit COUNT : 149114 rocksdb.block.cache.data.add COUNT : 5507145 rocksdb.block.cache.data.bytes.insert COUNT : 23038163432 rocksdb.block.cache.bytes.read COUNT : 391096497672 rocksdb.block.cache.bytes.write COUNT : 628641242896 rocksdb.bloom.filter.useful COUNT : 0 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 575 rocksdb.memtable.miss COUNT : 1049425 rocksdb.l0.hit COUNT : 1840 rocksdb.l1.hit COUNT : 2311 rocksdb.l2andup.hit COUNT : 995274 rocksdb.compaction.key.drop.new COUNT : 412 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 100000000 rocksdb.number.keys.read COUNT : 1050000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 104800000000 rocksdb.bytes.read COUNT : 1024000000 rocksdb.number.db.seek COUNT : 0 rocksdb.number.db.next 
COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 0 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 0 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 41216 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 2041476131 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 104800000000 rocksdb.write.self COUNT : 100000000 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 200000000 rocksdb.compact.read.bytes COUNT : 1301603543827 rocksdb.compact.write.bytes COUNT : 1272673781248 rocksdb.flush.write.bytes COUNT : 105017065602 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 310 rocksdb.number.superversion_releases COUNT : 307 rocksdb.number.superversion_cleanups COUNT : 305 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1192.427183 95 : 2191.246016 99 : 2813.883330 100 : 18167.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 5.057838 95 : 20.650113 99 : 1077.627671 100 : 9253.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 818358.974359 95 : 1890631.313131 99 : 6125120.000000 100 : 7993369.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 365.655172 95 : 947.844828 99 : 2357.894737 100 : 3274.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 282.647026 95 : 677.332593 99 : 1926.250000 100 : 29137.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 294.953460 95 : 526.962209 99 : 744.476136 100 : 5748.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 826.696150 95 : 1597.548845 99 : 1890.380107 100 : 14988.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 102.342230 95 : 163.463288 99 : 169.834521 100 : 8715.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.618389 95 : 2.123947 99 : 3.862707 100 : 69230.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics 
Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 1.000000 95 : 1.141121 99 : 17.197391 100 : 26.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.528350 95 : 898.708763 99 : 1220.016908 100 : 6072.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 111.179925 95 : 519.101450 99 : 638.471213 100 : 14061.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 1048.000000 95 : 1048.000000 99 : 1048.000000 100 : 1048.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L0 3/0 184.33 MB 0.8 0.0 0.0 0.0 97.8 97.8 0.0 1.0 0.0 476.6 210 1630 0.129 0 0 L1 5/0 233.37 MB 0.9 193.2 97.6 95.6 193.2 97.6 0.0 2.0 415.4 415.4 476 111 4.290 197M 20 L2 54/0 2.46 GB 1.0 534.3 96.9 437.5 534.3 96.8 0.5 5.5 395.8 395.8 1382 1522 0.908 546M 97 L3 403/0 24.96 GB 1.0 405.0 78.3 326.6 405.0 78.3 16.5 5.2 405.8 405.8 1022 1180 0.866 414M 256 L4 1117/0 69.92 GB 0.3 52.8 20.8 32.0 52.8 20.8 49.1 2.5 385.4 385.4 140 313 0.448 53M 39 Sum 1582/0 97.75 GB 0.0 1185.3 293.6 891.7 1283.1 391.4 66.2 13.1 375.7 406.7 3231 4756 0.679 1211M 412 Int 0/0 0.00 KB 0.0 11.4 2.5 8.9 11.4 2.5 3.3 12256922651.0 389.1 389.1 30 39 0.770 11M 8 Uptime(secs): 4250.6 total, 1236.5 interval Flush(GB): cumulative 97.804, interval 0.000 AddFile(GB): cumulative 0.000, interval 0.000 AddFile(Total Files): cumulative 0, interval 0 AddFile(L0 Files): cumulative 0, interval 0 AddFile(Keys): cumulative 0, interval 0 Cumulative compaction: 1283.06 GB write, 309.10 MB/s write, 1185.31 GB read, 285.55 MB/s read, 3230.8 seconds Interval compaction: 11.42 GB write, 9.45 MB/s write, 11.42 GB read, 9.45 MB/s read, 30.0 seconds Stalls(count): 1373 level0_slowdown, 96 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 3130 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] ** ** Level 0 read latency histogram (micros): Count: 3050773 Average: 101.1093 StdDev: 33.87 Min: 67 Median: 101.2603 Max: 8710

Percentiles: P50: 101.26 P75: 130.28 P99: 169.79 P99.9: 501.99 P99.99: 1041.59

( 51, 76 ] 407209 13.348% 13.348% ### ( 76, 110 ] 1505048 49.333% 62.681% ########## ( 110, 170 ] 1111908 36.447% 99.128% ####### ( 170, 250 ] 14362 0.471% 99.599% ( 250, 380 ] 7072 0.232% 99.830% ( 380, 580 ] 3481 0.114% 99.945% ( 580, 870 ] 1271 0.042% 99.986% ( 870, 1300 ] 293 0.010% 99.996% ( 1300, 1900 ] 80 0.003% 99.998% ( 1900, 2900 ] 37 0.001% 100.000% ( 2900, 4400 ] 11 0.000% 100.000% ( 6600, 9900 ] 1 0.000% 100.000%

** Level 1 read latency histogram (micros): Count: 408588 Average: 113.3401 StdDev: 64.77 Min: 21 Median: 105.6646 Max: 5478

Percentiles: P50: 105.66 P75: 139.20 P99: 467.08 P99.9: 655.97 P99.99: 1349.41

( 15, 22 ] 1 0.000% 0.000% ( 22, 34 ] 2 0.000% 0.001% ( 34, 51 ] 9 0.002% 0.003% ( 51, 76 ] 39530 9.675% 9.678% ## ( 76, 110 ] 188830 46.215% 55.893% ######### ( 110, 170 ] 160402 39.258% 95.151% ######## ( 170, 250 ] 4547 1.113% 96.263% ( 250, 380 ] 8428 2.063% 98.326% ( 380, 580 ] 6323 1.548% 99.874% ( 580, 870 ] 410 0.100% 99.974% ( 870, 1300 ] 63 0.015% 99.989% ( 1300, 1900 ] 26 0.006% 99.996% ( 1900, 2900 ] 10 0.002% 99.998% ( 2900, 4400 ] 6 0.001% 100.000% ( 4400, 6600 ] 1 0.000% 100.000%

** Level 2 read latency histogram (micros): Count: 759492 Average: 146.4239 StdDev: 119.45 Min: 20 Median: 107.4768 Max: 14061

Percentiles: P50: 107.48 P75: 154.66 P99: 564.24 P99.9: 855.17 P99.99: 2441.82

( 15, 22 ] 3 0.000% 0.000% ( 22, 34 ] 14 0.002% 0.002% ( 34, 51 ] 36 0.005% 0.007% ( 51, 76 ] 78836 10.380% 10.387% ## ( 76, 110 ] 324974 42.788% 53.175% ######### ( 110, 170 ] 222695 29.322% 82.497% ###### ( 170, 250 ] 19860 2.615% 85.112% # ( 250, 380 ] 58206 7.664% 92.776% ## ( 380, 580 ] 51317 6.757% 99.532% # ( 580, 870 ] 2942 0.387% 99.920% ( 870, 1300 ] 406 0.053% 99.973% ( 1300, 1900 ] 94 0.012% 99.986% ( 1900, 2900 ] 61 0.008% 99.994% ( 2900, 4400 ] 40 0.005% 99.999% ( 4400, 6600 ] 6 0.001% 100.000% ( 6600, 9900 ] 1 0.000% 100.000% ( 14000, 22000 ] 1 0.000% 100.000%

** Level 3 read latency histogram (micros): Count: 1860592 Average: 255.4789 StdDev: 183.81 Min: 28 Median: 158.8238 Max: 13596

Percentiles: P50: 158.82 P75: 426.17 P99: 787.92 P99.9: 1203.28 P99.99: 2744.80

( 22, 34 ] 5 0.000% 0.000% ( 34, 51 ] 14 0.001% 0.001% ( 51, 76 ] 99110 5.327% 5.328% # ( 76, 110 ] 525849 28.262% 33.590% ###### ( 110, 170 ] 375208 20.166% 53.756% #### ( 170, 250 ] 7241 0.389% 54.146% ( 250, 380 ] 264664 14.225% 68.370% ### ( 380, 580 ] 534359 28.720% 97.090% ###### ( 580, 870 ] 49564 2.664% 99.754% # ( 870, 1300 ] 3506 0.188% 99.942% ( 1300, 1900 ] 657 0.035% 99.978% ( 1900, 2900 ] 271 0.015% 99.992% ( 2900, 4400 ] 135 0.007% 100.000% ( 4400, 6600 ] 2 0.000% 100.000% ( 6600, 9900 ] 1 0.000% 100.000% ( 9900, 14000 ] 6 0.000% 100.000%

** Level 4 read latency histogram (micros): Count: 857472 Average: 267.4790 StdDev: 181.75 Min: 21 Median: 228.9559 Max: 13714

Percentiles: P50: 228.96 P75: 436.88 P99: 785.86 P99.9: 1170.02 P99.99: 2847.70

( 15, 22 ] 1 0.000% 0.000% ( 22, 34 ] 2 0.000% 0.000% ( 34, 51 ] 1 0.000% 0.000% ( 51, 76 ] 36309 4.234% 4.235% # ( 76, 110 ] 222593 25.959% 30.194% ##### ( 110, 170 ] 168009 19.594% 49.788% #### ( 170, 250 ] 2471 0.288% 50.076% ( 250, 380 ] 138488 16.151% 66.227% ### ( 380, 580 ] 264517 30.848% 97.075% ###### ( 580, 870 ] 23253 2.712% 99.787% # ( 870, 1300 ] 1391 0.162% 99.949% ( 1300, 1900 ] 229 0.027% 99.976% ( 1900, 2900 ] 129 0.015% 99.991% ( 2900, 4400 ] 77 0.009% 100.000% ( 9900, 14000 ] 2 0.000% 100.000%

** DB Stats ** Uptime(secs): 4250.6 total, 1236.5 interval Cumulative writes: 100M writes, 100M keys, 100M commit groups, 1.0 writes per commit group, ingest: 97.60 GB, 23.51 MB/s Cumulative WAL: 100M writes, 0 syncs, 100000000.00 writes per sync, written: 97.60 GB, 23.51 MB/s Cumulative stall: 00:34:1.476 H:M:S, 48.0 percent Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s Interval stall: 00:00:0.000 H:M:S, 0.0 percent

10339690501 6002 192210247282 2245074190 1013077194 226596 124515442891 2584834989 0 195847054 631303944

I/O count: 547295

Closed range queries

Filter DISABLED No Compression closed range query throughput: 780.034 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1259.090241 95 : 2094.587629 99 : 2738.917526 100 : 2871.000000 rocksdb.block.cache.miss COUNT : 4323025 rocksdb.block.cache.hit COUNT : 1506810 rocksdb.block.cache.add COUNT : 4323025 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1427187 rocksdb.block.cache.index.hit COUNT : 1442487 rocksdb.block.cache.index.add COUNT : 1427187 rocksdb.block.cache.index.bytes.insert COUNT : 606839608776 rocksdb.block.cache.index.bytes.evict COUNT : 606753687720 rocksdb.block.cache.filter.miss COUNT : 0 rocksdb.block.cache.filter.hit COUNT : 0 rocksdb.block.cache.filter.add COUNT : 0 rocksdb.block.cache.filter.bytes.insert COUNT : 0 rocksdb.block.cache.filter.bytes.evict COUNT : 0 rocksdb.block.cache.data.miss COUNT : 2895838 rocksdb.block.cache.data.hit COUNT : 64323 rocksdb.block.cache.data.add COUNT : 2895838 rocksdb.block.cache.data.bytes.insert COUNT : 12110899712 rocksdb.block.cache.bytes.read COUNT : 560821193152 rocksdb.block.cache.bytes.write COUNT : 618950508488 rocksdb.bloom.filter.useful COUNT : 0 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 0 rocksdb.l1.hit COUNT : 2132 rocksdb.l2andup.hit COUNT : 997868 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 1024000000 rocksdb.number.db.seek COUNT : 50000 
rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 24956 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 25754592 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1583 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 rocksdb.read.amp.estimate.useful.bytes COUNT : 0 
rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 942.612488 95 : 1828.156979 99 : 2513.466334 100 : 4026.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 399.000000 95 : 399.000000 99 : 399.000000 100 : 399.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1623.897059 95 : 2833.102493 99 : 7582.928571 100 : 13973.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.072952 95 : 162.995106 99 : 168.827873 100 : 726.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.hard.rate.limit.delay.count 
statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1259.090241 95 : 2094.587629 99 : 2738.917526 100 : 2871.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 131.922207 95 : 559.730739 99 : 771.025619 100 : 13008.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L1 4/0 218.11 MB 0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 L2 53/0 2.46 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 L3 405/0 24.96 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 L4 1121/0 70.17 GB 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 Sum 1583/0 97.80 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0 Uptime(secs): 1072.3 total, 1072.3 interval Flush(GB): cumulative 0.000, interval 0.000 AddFile(GB): cumulative 0.000, interval 0.000 AddFile(Total Files): cumulative 0, interval 0 AddFile(L0 Files): cumulative 0, interval 0 AddFile(Keys): cumulative 0, interval 0 Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

```
** File Read Latency Histogram By Level [default] **
** Level 1 read latency histogram (micros): Count: 688067 Average: 104.8341 StdDev: 46.99 Min: 68 Median: 102.9612 Max: 13008
Percentiles: P50: 102.96 P75: 132.79 P99: 308.18 P99.9: 561.97 P99.99: 772.10
(   51,    76 ]  69178  10.054%  10.054% ##
(   76,   110 ] 346613  50.375%  60.429% ##########
(  110,   170 ] 264000  38.368%  98.797% ########
(  170,   250 ]    417   0.061%  98.858%
(  250,   380 ]   2186   0.318%  99.176%
(  380,   580 ]   5479   0.796%  99.972%
(  580,   870 ]    189   0.027%  99.999%
(  870,  1300 ]      1   0.000%  99.999%
( 4400,  6600 ]      1   0.000% 100.000%
( 6600,  9900 ]      2   0.000% 100.000%
( 9900, 14000 ]      1   0.000% 100.000%
```

```
** Level 2 read latency histogram (micros): Count: 844987 Average: 135.5396 StdDev: 103.99 Min: 64 Median: 107.4353 Max: 12412
Percentiles: P50: 107.44 P75: 149.06 P99: 546.76 P99.9: 617.24 P99.99: 857.22
(   51,    76 ]  87782  10.389%  10.389% ##
(   76,   110 ] 362020  42.843%  53.232% #########
(  110,   170 ] 282553  33.439%  86.671% #######
(  170,   250 ]  16003   1.894%  88.564%
(  250,   380 ]  50617   5.990%  94.555% #
(  380,   580 ]  45049   5.331%  99.886% #
(  580,   870 ]    919   0.109%  99.995%
(  870,  1300 ]      6   0.001%  99.996%
( 1300,  1900 ]      1   0.000%  99.996%
( 1900,  2900 ]     12   0.001%  99.997%
( 2900,  4400 ]      5   0.001%  99.998%
( 4400,  6600 ]      7   0.001%  99.998%
( 6600,  9900 ]     11   0.001% 100.000%
( 9900, 14000 ]      2   0.000% 100.000%
```

```
** Level 3 read latency histogram (micros): Count: 1248302 Average: 243.9094 StdDev: 174.68 Min: 64 Median: 151.1579 Max: 3738
Percentiles: P50: 151.16 P75: 410.40 P99: 766.85 P99.9: 862.71 P99.99: 1812.93
(   51,    76 ]  67202   5.383%   5.383% #
(   76,   110 ] 363627  29.130%  34.513% ######
(  110,   170 ] 281825  22.577%  57.090% #####
(  170,   250 ]   4104   0.329%  57.419%
(  250,   380 ] 169689  13.594%  71.012% ###
(  380,   580 ] 327475  26.234%  97.246% #####
(  580,   870 ]  33986   2.723%  99.968% #
(  870,  1300 ]    117   0.009%  99.978%
( 1300,  1900 ]    178   0.014%  99.992%
( 1900,  2900 ]     93   0.007% 100.000%
( 2900,  4400 ]      6   0.000% 100.000%
```

```
** Level 4 read latency histogram (micros): Count: 1543252 Average: 285.9925 StdDev: 195.82 Min: 65 Median: 169.8734 Max: 3911
Percentiles: P50: 169.87 P75: 469.17 P99: 822.55 P99.9: 869.04 P99.99: 2274.67
(   51,    76 ]  58795   3.810%   3.810% #
(   76,   110 ] 384037  24.885%  28.695% #####
(  110,   170 ] 329489  21.350%  50.045% ####
(  170,   250 ]   3077   0.199%  50.244%
(  250,   380 ] 142341   9.223%  59.468% ##
(  380,   580 ] 537614  34.836%  94.304% #######
(  580,   870 ]  86644   5.614%  99.919% #
(  870,  1300 ]    387   0.025%  99.944%
( 1300,  1900 ]    629   0.041%  99.985%
( 1900,  2900 ]    226   0.015%  99.999%
( 2900,  4400 ]     13   0.001% 100.000%
```

```
** DB Stats **
Uptime(secs): 1072.3 total, 1072.3 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
```

45459589 25 7194151454 9921764 173751 425 20742141 38549 0 4547118 9869938

I/O count: 518413

Range query (open)

```
Filter DISABLED, No Compression, open range query throughput: 661.763
rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1509.746157 95 : 2683.052885 99 : 2883.373397 100 : 3721.000000
rocksdb.block.cache.miss COUNT : 4331192
rocksdb.block.cache.hit COUNT : 1480420
rocksdb.block.cache.add COUNT : 4331192
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 1427113
rocksdb.block.cache.index.hit COUNT : 1426244
rocksdb.block.cache.index.add COUNT : 1427113
rocksdb.block.cache.index.bytes.insert COUNT : 607073117800
rocksdb.block.cache.index.bytes.evict COUNT : 606988648528
rocksdb.block.cache.filter.miss COUNT : 0
rocksdb.block.cache.filter.hit COUNT : 0
rocksdb.block.cache.filter.add COUNT : 0
rocksdb.block.cache.filter.bytes.insert COUNT : 0
rocksdb.block.cache.filter.bytes.evict COUNT : 0
rocksdb.block.cache.data.miss COUNT : 2904079
rocksdb.block.cache.data.hit COUNT : 54176
rocksdb.block.cache.data.add COUNT : 2904079
rocksdb.block.cache.data.bytes.insert COUNT : 12145210456
rocksdb.block.cache.bytes.read COUNT : 555255537360
rocksdb.block.cache.bytes.write COUNT : 619218328256
rocksdb.bloom.filter.useful COUNT : 0
rocksdb.persistent.cache.hit COUNT : 0
rocksdb.persistent.cache.miss COUNT : 0
rocksdb.sim.block.cache.hit COUNT : 0
rocksdb.sim.block.cache.miss COUNT : 0
rocksdb.memtable.hit COUNT : 0
rocksdb.memtable.miss COUNT : 1000000
rocksdb.l0.hit COUNT : 0
rocksdb.l1.hit COUNT : 2132
rocksdb.l2andup.hit COUNT : 997868
rocksdb.compaction.key.drop.new COUNT : 0
rocksdb.compaction.key.drop.obsolete COUNT : 0
rocksdb.compaction.key.drop.range_del COUNT : 0
rocksdb.compaction.key.drop.user COUNT : 0
rocksdb.compaction.range_del.drop.obsolete COUNT : 0
rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0
rocksdb.number.keys.written COUNT : 0
rocksdb.number.keys.read COUNT : 1000000
rocksdb.number.keys.updated COUNT : 0
rocksdb.bytes.written COUNT : 0
rocksdb.bytes.read COUNT : 1024000000
rocksdb.number.db.seek COUNT : 50000
rocksdb.number.db.next COUNT : 0
rocksdb.number.db.prev COUNT : 0
rocksdb.number.db.seek.found COUNT : 50000
rocksdb.number.db.next.found COUNT : 0
rocksdb.number.db.prev.found COUNT : 0
rocksdb.db.iter.bytes.read COUNT : 51600000
rocksdb.no.file.closes COUNT : 0
rocksdb.no.file.opens COUNT : 1583
rocksdb.no.file.errors COUNT : 0
rocksdb.l0.slowdown.micros COUNT : 0
rocksdb.memtable.compaction.micros COUNT : 0
rocksdb.l0.num.files.stall.micros COUNT : 0
rocksdb.stall.micros COUNT : 0
rocksdb.db.mutex.wait.micros COUNT : 0
rocksdb.rate.limit.delay.millis COUNT : 0
rocksdb.num.iterators COUNT : 0
rocksdb.number.multiget.get COUNT : 0
rocksdb.number.multiget.keys.read COUNT : 0
rocksdb.number.multiget.bytes.read COUNT : 0
rocksdb.number.deletes.filtered COUNT : 0
rocksdb.number.merge.failures COUNT : 0
rocksdb.bloom.filter.prefix.checked COUNT : 0
rocksdb.bloom.filter.prefix.useful COUNT : 0
rocksdb.number.reseeks.iteration COUNT : 0
rocksdb.getupdatessince.calls COUNT : 0
rocksdb.block.cachecompressed.miss COUNT : 0
rocksdb.block.cachecompressed.hit COUNT : 0
rocksdb.block.cachecompressed.add COUNT : 0
rocksdb.block.cachecompressed.add.failures COUNT : 0
rocksdb.wal.synced COUNT : 0
rocksdb.wal.bytes COUNT : 0
rocksdb.write.self COUNT : 0
rocksdb.write.other COUNT : 0
rocksdb.write.timeout COUNT : 0
rocksdb.write.wal COUNT : 0
rocksdb.compact.read.bytes COUNT : 0
rocksdb.compact.write.bytes COUNT : 0
rocksdb.flush.write.bytes COUNT : 0
rocksdb.number.direct.load.table.properties COUNT : 0
rocksdb.number.superversion_acquires COUNT : 1
rocksdb.number.superversion_releases COUNT : 0
rocksdb.number.superversion_cleanups COUNT : 0
rocksdb.number.block.compressed COUNT : 0
rocksdb.number.block.decompressed COUNT : 0
rocksdb.number.block.not_compressed COUNT : 0
rocksdb.merge.operation.time.nanos COUNT : 0
rocksdb.filter.operation.time.nanos COUNT : 0
rocksdb.row.cache.hit COUNT : 0
rocksdb.row.cache.miss COUNT : 0
rocksdb.read.amp.estimate.useful.bytes COUNT : 0
rocksdb.read.amp.total.read.bytes COUNT : 0
rocksdb.number.rate_limiter.drains COUNT : 0
rocksdb.db.get.micros statistics Percentiles :=> 50 : 1076.973931 95 : 1984.094354 99 : 2723.547898 100 : 4497.000000
rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 307.000000 95 : 307.000000 99 : 307.000000 100 : 307.000000
rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1610.023041 95 : 2806.656347 99 : 7957.400000 100 : 12006.000000
rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.737752 95 : 163.109112 99 : 168.836146 100 : 2514.000000
rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1509.746157 95 : 2683.052885 99 : 2883.373397 100 : 3721.000000
rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.sst.read.micros statistics Percentiles :=> 50 : 132.248805 95 : 678.569015 99 : 833.727059 100 : 10837.000000
rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000
rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
```

```
** Compaction Stats [default] **
Level  Files   Size      Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
L1     4/0     218.11 MB  0.9   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
L2     53/0    2.46 GB    1.0   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
L3     405/0   24.96 GB   1.0   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
L4     1121/0  70.17 GB   0.3   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
Sum    1583/0  97.80 GB   0.0   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
Int    0/0     0.00 KB    0.0   0.0     0.0    0.0      0.0       0.0      0.0       0.0   0.0      0.0      0         0         0.000    0     0
Uptime(secs): 1194.9 total, 1194.9 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
```

```
** File Read Latency Histogram By Level [default] **
** Level 1 read latency histogram (micros): Count: 691456 Average: 107.3161 StdDev: 50.52 Min: 68 Median: 103.3895 Max: 9078
Percentiles: P50: 103.39 P75: 133.08 P99: 328.89 P99.9: 686.93 P99.99: 852.65
(   51,    76 ]  55593   8.040%   8.040% ##
(   76,   110 ] 360159  52.087%  60.127% ##########
(  110,   170 ] 267361  38.666%  98.793% ########
(  170,   250 ]    290   0.042%  98.835%
(  250,   380 ]   1876   0.271%  99.107%
(  380,   580 ]   5084   0.735%  99.842%
(  580,   870 ]   1089   0.157%  99.999%
( 6600,  9900 ]      4   0.001% 100.000%
```

```
** Level 2 read latency histogram (micros): Count: 847794 Average: 144.4460 StdDev: 121.74 Min: 65 Median: 107.8328 Max: 10837
Percentiles: P50: 107.83 P75: 149.09 P99: 571.57 P99.9: 827.91 P99.99: 868.06
(   51,    76 ]  70428   8.307%   8.307% ##
(   76,   110 ] 377534  44.531%  52.839% #########
(  110,   170 ] 288363  34.013%  86.852% #######
(  170,   250 ]  12401   1.463%  88.315%
(  250,   380 ]  24234   2.858%  91.173% #
(  380,   580 ]  69276   8.171%  99.344% ##
(  580,   870 ]   5510   0.650%  99.994%
(  870,  1300 ]     12   0.001%  99.996%
( 1300,  1900 ]      2   0.000%  99.996%
( 1900,  2900 ]      8   0.001%  99.997%
( 2900,  4400 ]      3   0.000%  99.997%
( 4400,  6600 ]      8   0.001%  99.998%
( 6600,  9900 ]     12   0.001% 100.000%
( 9900, 14000 ]      3   0.000% 100.000%
```

```
** Level 3 read latency histogram (micros): Count: 1249974 Average: 276.6674 StdDev: 208.52 Min: 65 Median: 151.4395 Max: 2834
Percentiles: P50: 151.44 P75: 476.95 P99: 836.02 P99.9: 868.16 P99.99: 1783.17
(   51,    76 ]  52814   4.225%   4.225% #
(   76,   110 ] 374972  29.998%  34.224% ######
(  110,   170 ] 285526  22.843%  57.066% #####
(  170,   250 ]   1252   0.100%  57.166%
(  250,   380 ]  24983   1.999%  59.165%
(  380,   580 ] 408336  32.668%  91.833% #######
(  580,   870 ] 101484   8.119%  99.951% ##
(  870,  1300 ]    329   0.026%  99.978%
( 1300,  1900 ]    190   0.015%  99.993%
( 1900,  2900 ]     88   0.007% 100.000%
```

```
** Level 4 read latency histogram (micros): Count: 1543548 Average: 320.5750 StdDev: 226.17 Min: 66 Median: 169.8488 Max: 3729
Percentiles: P50: 169.85 P75: 517.12 P99: 852.15 P99.9: 1078.44 P99.99: 2191.77
(   51,    76 ]  46777   3.030%   3.030% #
(   76,   110 ] 392592  25.434%  28.465% #####
(  110,   170 ] 333245  21.590%  50.054% ####
(  170,   250 ]   2490   0.161%  50.216%
(  250,   380 ]  16428   1.064%  51.280%
(  380,   580 ] 534021  34.597%  85.877% #######
(  580,   870 ] 215846  13.984%  99.861% ###
(  870,  1300 ]   1249   0.081%  99.942%
( 1300,  1900 ]    687   0.045%  99.986%
( 1900,  2900 ]    201   0.013%  99.999%
( 2900,  4400 ]     15   0.001% 100.000%
```

```
** DB Stats **
Uptime(secs): 1194.9 total, 1194.9 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
```

36972782 18 5979176801 8177260 161941 385 20642352 38200 0 3791505 8138772

I/O count: 522434

2. Bloom filter

Point query

```
throughput: 1480.24
rocksdb.db.get.micros statistics Percentiles :=> 50 : 1130.399992 95 : 1863.636429 99 : 2679.271290 100 : 18371.000000
rocksdb.block.cache.miss COUNT : 309039278
rocksdb.block.cache.hit COUNT : 12148
rocksdb.block.cache.add COUNT : 4609065
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 1077069
rocksdb.block.cache.index.hit COUNT : 3415
rocksdb.block.cache.index.add COUNT : 1077069
rocksdb.block.cache.index.bytes.insert COUNT : 466662846600
rocksdb.block.cache.index.bytes.evict COUNT : 466661654432
rocksdb.block.cache.filter.miss COUNT : 2512342
rocksdb.block.cache.filter.hit COUNT : 8733
rocksdb.block.cache.filter.add COUNT : 2512342
rocksdb.block.cache.filter.bytes.insert COUNT : 265798353326
rocksdb.block.cache.filter.bytes.evict COUNT : 265118269703
rocksdb.block.cache.data.miss COUNT : 305449867
rocksdb.block.cache.data.hit COUNT : 0
rocksdb.block.cache.data.add COUNT : 1019654
rocksdb.block.cache.data.bytes.insert COUNT : 4257616096
rocksdb.block.cache.bytes.read COUNT : 2400642649
rocksdb.block.cache.bytes.write COUNT : 736718816022
rocksdb.bloom.filter.useful COUNT : 4598028
rocksdb.persistent.cache.hit COUNT : 0
rocksdb.persistent.cache.miss COUNT : 0
rocksdb.sim.block.cache.hit COUNT : 0
rocksdb.sim.block.cache.miss COUNT : 0
rocksdb.memtable.hit COUNT : 574
rocksdb.memtable.miss COUNT : 1049426
rocksdb.l0.hit COUNT : 1839
rocksdb.l1.hit COUNT : 2358
rocksdb.l2andup.hit COUNT : 995229
rocksdb.compaction.key.drop.new COUNT : 427
rocksdb.compaction.key.drop.obsolete COUNT : 0
rocksdb.compaction.key.drop.range_del COUNT : 0
rocksdb.compaction.key.drop.user COUNT : 0
rocksdb.compaction.range_del.drop.obsolete COUNT : 0
rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0
rocksdb.number.keys.written COUNT : 100000000
rocksdb.number.keys.read COUNT : 1050000
rocksdb.number.keys.updated COUNT : 0
rocksdb.bytes.written COUNT : 104800000000
rocksdb.bytes.read COUNT : 1024000000
rocksdb.number.db.seek COUNT : 0
rocksdb.number.db.next COUNT : 0
rocksdb.number.db.prev COUNT : 0
rocksdb.number.db.seek.found COUNT : 0
rocksdb.number.db.next.found COUNT : 0
rocksdb.number.db.prev.found COUNT : 0
rocksdb.db.iter.bytes.read COUNT : 0
rocksdb.no.file.closes COUNT : 0
rocksdb.no.file.opens COUNT : 41456
rocksdb.no.file.errors COUNT : 0
rocksdb.l0.slowdown.micros COUNT : 0
rocksdb.memtable.compaction.micros COUNT : 0
rocksdb.l0.num.files.stall.micros COUNT : 0
rocksdb.stall.micros COUNT : 2248090487
rocksdb.db.mutex.wait.micros COUNT : 0
rocksdb.rate.limit.delay.millis COUNT : 0
rocksdb.num.iterators COUNT : 0
rocksdb.number.multiget.get COUNT : 0
rocksdb.number.multiget.keys.read COUNT : 0
rocksdb.number.multiget.bytes.read COUNT : 0
rocksdb.number.deletes.filtered COUNT : 0
rocksdb.number.merge.failures COUNT : 0
rocksdb.bloom.filter.prefix.checked COUNT : 0
rocksdb.bloom.filter.prefix.useful COUNT : 0
rocksdb.number.reseeks.iteration COUNT : 0
rocksdb.getupdatessince.calls COUNT : 0
rocksdb.block.cachecompressed.miss COUNT : 0
rocksdb.block.cachecompressed.hit COUNT : 0
rocksdb.block.cachecompressed.add COUNT : 0
rocksdb.block.cachecompressed.add.failures COUNT : 0
rocksdb.wal.synced COUNT : 0
rocksdb.wal.bytes COUNT : 104800000000
rocksdb.write.self COUNT : 100000000
rocksdb.write.other COUNT : 0
rocksdb.write.timeout COUNT : 0
rocksdb.write.wal COUNT : 200000000
rocksdb.compact.read.bytes COUNT : 1324925221313
rocksdb.compact.write.bytes COUNT : 1280986265088
rocksdb.flush.write.bytes COUNT : 105192257680
rocksdb.number.direct.load.table.properties COUNT : 0
rocksdb.number.superversion_acquires COUNT : 311
rocksdb.number.superversion_releases COUNT : 308
rocksdb.number.superversion_cleanups COUNT : 307
rocksdb.number.block.compressed COUNT : 0
rocksdb.number.block.decompressed COUNT : 0
rocksdb.number.block.not_compressed COUNT : 0
rocksdb.merge.operation.time.nanos COUNT : 0
rocksdb.filter.operation.time.nanos COUNT : 0
rocksdb.row.cache.hit COUNT : 0
rocksdb.row.cache.miss COUNT : 0
rocksdb.read.amp.estimate.useful.bytes COUNT : 0
rocksdb.read.amp.total.read.bytes COUNT : 0
rocksdb.number.rate_limiter.drains COUNT : 0
rocksdb.db.get.micros statistics Percentiles :=> 50 : 1130.399992 95 : 1863.636429 99 : 2679.271290 100 : 18371.000000
rocksdb.db.write.micros statistics Percentiles :=> 50 : 4.780012 95 : 19.777859 99 : 1098.236474 100 : 14062.000000
rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 843368.091762 95 : 1982105.263158 99 : 6767882.352941 100 : 7806996.000000
rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.table.sync.micros statistics Percentiles :=> 50 : 453.993808 95 : 1087.986111 99 : 2088.888889 100 : 4219.000000
rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 326.390808 95 : 783.074169 99 : 1698.693878 100 : 15456.000000
rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 294.308814 95 : 532.961373 99 : 778.957576 100 : 20747.000000
rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1013.691163 95 : 1809.195302 99 : 2707.411301 100 : 14963.000000
rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 103.683874 95 : 164.490373 99 : 210.974179 100 : 4125.000000
rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.608095 95 : 1.997809 99 : 3.746680 100 : 84636.000000
rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 1.000000 95 : 1.178319 99 : 17.180500 100 : 26.000000
rocksdb.db.seek.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.529673 95 : 915.755958 99 : 1223.453079 100 : 13991.000000
rocksdb.sst.read.micros statistics Percentiles :=> 50 : 217.993679 95 : 527.355345 99 : 731.775890 100 : 15213.000000
rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000
rocksdb.bytes.per.write statistics Percentiles :=> 50 : 1048.000000 95 : 1048.000000 99 : 1048.000000 100 : 1048.000000
rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
```

```
** Compaction Stats [default] **
Level  Files   Size      Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn  KeyDrop
L0     3/0     184.63 MB  0.8   0.0     0.0    0.0      98.0      98.0     0.0       1.0   0.0      461.0    218       1630      0.133    0      0
L1     5/0     240.20 MB  0.9   193.5   97.8   95.7     193.5     97.8     0.0       2.0   406.3    406.3    488       111       4.393    197M   19
L2     53/0    2.47 GB    1.0   536.8   97.0   439.8    536.8     97.0     0.5       5.5   380.5    380.5    1444      1521      0.950    547M   110
L3     404/0   24.95 GB   1.0   406.9   77.3   329.6    406.9     77.3     17.7      5.3   393.4    393.4    1059      1165      0.909    415M   256
L4     1115/0  70.07 GB   0.3   55.8    21.9   33.9     55.8      21.9     48.1      2.5   372.7    372.6    153       327       0.469    56M    42
Sum    1580/0  97.91 GB   0.0   1193.1  294.1  899.0    1291.0    392.0    66.4      13.2  363.3    393.2    3362      4754      0.707    1217M  427
Int    0/0     0.00 KB    0.0   58.5    13.6   44.9     59.7      14.8     12.7      49.7  382.5    390.4    157       215       0.728    59M    33
Uptime(secs): 4303.7 total, 1289.5 interval
Flush(GB): cumulative 97.967, interval 1.202
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 1290.96 GB write, 307.17 MB/s write, 1193.05 GB read, 283.87 MB/s read, 3362.4 seconds
Interval compaction: 59.69 GB write, 47.40 MB/s write, 58.49 GB read, 46.45 MB/s read, 156.6 seconds
Stalls(count): 1398 level0_slowdown, 96 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 3246 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 93 total count
```

```
** File Read Latency Histogram By Level [default] **
** Level 0 read latency histogram (micros): Count: 20214 Average: 170.2285 StdDev: 165.18 Min: 68 Median: 118.1592 Max: 6594
Percentiles: P50: 118.16 P75: 161.43 P99: 820.52 P99.9: 1285.14 P99.99: 2895.72
(   51,    76 ]   1781   8.811%   8.811% ##
(   76,   110 ]   7373  36.475%  45.285% #######
(  110,   170 ]   7008  34.669%  79.954% #######
(  170,   250 ]    530   2.622%  82.576% #
(  250,   380 ]   1225   6.060%  88.637% #
(  380,   580 ]   1647   8.148%  96.784% ##
(  580,   870 ]    540   2.671%  99.456% #
(  870,  1300 ]     93   0.460%  99.916%
( 1300,  1900 ]     10   0.049%  99.965%
( 1900,  2900 ]      5   0.025%  99.990%
( 2900,  4400 ]      1   0.005%  99.995%
( 4400,  6600 ]      1   0.005% 100.000%
```

```
** Level 1 read latency histogram (micros): Count: 470338 Average: 235.5529 StdDev: 67.23 Min: 19 Median: 224.8040 Max: 15213
Percentiles: P50: 224.80 P75: 263.91 P99: 475.54 P99.9: 897.28 P99.99: 1961.44
(    15,    22 ]      5   0.001%   0.001%
(    22,    34 ]     31   0.007%   0.008%
(    34,    51 ]     77   0.016%   0.024%
(    51,    76 ]    644   0.137%   0.161%
(    76,   110 ]   2488   0.529%   0.690%
(   110,   170 ]   4876   1.037%   1.727%
(   170,   250 ] 331433  70.467%  72.194% ##############
(   250,   380 ] 123339  26.223%  98.417% #####
(   380,   580 ]   5739   1.220%  99.637%
(   580,   870 ]   1212   0.258%  99.895%
(   870,  1300 ]    373   0.079%  99.974%
(  1300,  1900 ]     72   0.015%  99.990%
(  1900,  2900 ]     32   0.007%  99.996%
(  2900,  4400 ]     14   0.003%  99.999%
(  4400,  6600 ]      2   0.000% 100.000%
( 14000, 22000 ]      1   0.000% 100.000%
```

```
** Level 2 read latency histogram (micros): Count: 720652 Average: 209.2450 StdDev: 99.55 Min: 19 Median: 201.5082 Max: 13293
Percentiles: P50: 201.51 P75: 240.51 P99: 530.72 P99.9: 1023.88 P99.99: 3465.58
(   15,    22 ]     26   0.004%   0.004%
(   22,    34 ]    128   0.018%   0.021%
(   34,    51 ]    294   0.041%   0.062%
(   51,    76 ]   3992   0.554%   0.616%
(   76,   110 ]  26437   3.668%   4.285% #
(  110,   170 ] 183899  25.518%  29.803% #####
(  170,   250 ] 369555  51.281%  81.084% ##########
(  250,   380 ] 115838  16.074%  97.158% ###
(  380,   580 ]  17617   2.445%  99.602%
(  580,   870 ]   1936   0.269%  99.871%
(  870,  1300 ]    585   0.081%  99.952%
( 1300,  1900 ]    128   0.018%  99.970%
( 1900,  2900 ]    111   0.015%  99.985%
( 2900,  4400 ]     90   0.012%  99.998%
( 4400,  6600 ]      6   0.001%  99.999%
( 6600,  9900 ]      3   0.000%  99.999%
( 9900, 14000 ]      7   0.001% 100.000%
```

```
** Level 3 read latency histogram (micros): Count: 2154501 Average: 247.6514 StdDev: 146.71 Min: 19 Median: 221.3222 Max: 13938
Percentiles: P50: 221.32 P75: 329.57 P99: 773.13 P99.9: 1181.63 P99.99: 2805.79
(   15,    22 ]     29   0.001%   0.001%
(   22,    34 ]     47   0.002%   0.004%
(   34,    51 ]     24   0.001%   0.005%
(   51,    76 ]  50131   2.327%   2.331%
(   76,   110 ] 298212  13.841%  16.173% ###
(  110,   170 ] 288623  13.396%  29.569% ###
(  170,   250 ] 686150  31.847%  61.416% ######
(  250,   380 ] 478117  22.192%  83.608% ####
(  380,   580 ] 298181  13.840%  97.448% ###
(  580,   870 ]  50217   2.331%  99.779%
(  870,  1300 ]   3609   0.168%  99.946%
( 1300,  1900 ]    672   0.031%  99.977%
( 1900,  2900 ]    302   0.014%  99.991%
( 2900,  4400 ]    166   0.008%  99.999%
( 4400,  6600 ]      9   0.000%  99.999%
( 6600,  9900 ]      4   0.000% 100.000%
( 9900, 14000 ]      8   0.000% 100.000%
```

```
** Level 4 read latency histogram (micros): Count: 1219261 Average: 251.7676 StdDev: 152.73 Min: 19 Median: 225.0891 Max: 12944
Percentiles: P50: 225.09 P75: 352.03 P99: 752.40 P99.9: 1193.57 P99.99: 2872.93
(   15,    22 ]      3   0.000%   0.000%
(   22,    34 ]      4   0.000%   0.001%
(   34,    51 ]      3   0.000%   0.001%
(   51,    76 ]  35062   2.876%   2.876% #
(   76,   110 ] 210607  17.273%  20.150% ###
(  110,   170 ] 163921  13.444%  33.594% ###
(  170,   250 ] 290483  23.825%  57.419% #####
(  250,   380 ] 273138  22.402%  79.821% ####
(  380,   580 ] 219972  18.041%  97.862% ####
(  580,   870 ]  23340   1.914%  99.776%
(  870,  1300 ]   2005   0.164%  99.941%
( 1300,  1900 ]    424   0.035%  99.975%
( 1900,  2900 ]    182   0.015%  99.990%
( 2900,  4400 ]    109   0.009%  99.999%
( 4400,  6600 ]      6   0.000% 100.000%
( 9900, 14000 ]      2   0.000% 100.000%
```

```
** DB Stats **
Uptime(secs): 4303.7 total, 1289.5 interval
Cumulative writes: 100M writes, 100M keys, 100M commit groups, 1.0 writes per commit group, ingest: 97.60 GB, 23.22 MB/s
Cumulative WAL: 100M writes, 0 syncs, 100000000.00 writes per sync, written: 97.60 GB, 23.22 MB/s
Cumulative stall: 00:37:28.090 H:M:S, 52.2 percent
Interval writes: 1229K writes, 1229K keys, 1229K commit groups, 1.0 writes per commit group, ingest: 1228.57 MB, 0.95 MB/s
Interval WAL: 1229K writes, 0 syncs, 1229247.00 writes per sync, written: 1.20 MB, 0.95 MB/s
Interval stall: 00:00:43.933 H:M:S, 3.4 percent
```

10358005520 6002 196186905168 2254500231 1025791168 229369 127443302253 2592493142 0 198095258 648289708

I/O count: 126149
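The `rocksdb.bloom.filter.useful COUNT : 4598028` line above counts point lookups where a filter block reported "definitely absent", letting RocksDB skip the SST data-block read entirely. The mechanism behind that counter — m bits, k salted hashes, false positives allowed but never false negatives — can be sketched in a few lines. This is an illustrative toy, not RocksDB's actual filter-block format; the class and parameters below are my own:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k salted SHA-1 hashes over an m-bit array."""

    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, key: str):
        # Derive k bit positions by salting the key with the hash index.
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, key: str) -> bool:
        # False => definitely absent (no false negatives);
        # True  => possibly present (false positives allowed).
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))

bf = BloomFilter(m=10 * 8192, k=7)  # ~10 bits per key for 8192 keys
for i in range(8192):
    bf.add(f"user{i:08d}")

# Every inserted key must answer "maybe present" -- the no-false-negative guarantee.
assert all(bf.may_contain(f"user{i:08d}") for i in range(8192))
fp = sum(bf.may_contain(f"ghost{i:08d}") for i in range(8192))
print(f"false positives on absent keys: {fp}/8192")
```

At ~10 bits/key with k=7 this lands near the ~1% FPR predicted by the formula in the notes above; the theoretical optimum is k = (m/n) ln 2.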

Range query (closed)

```
Using rocksdb.BuiltinBloomFilter, No Compression, closed range query throughput: 432.434
rocksdb.db.seek.micros statistics Percentiles :=> 50 : 2385.102631 95 : 2893.738132 99 : 4062.618084 100 : 4093.000000
rocksdb.block.cache.miss COUNT : 4954786
rocksdb.block.cache.hit COUNT : 1594163
rocksdb.block.cache.add COUNT : 3660610
rocksdb.block.cache.add.failures COUNT : 0
rocksdb.block.cache.index.miss COUNT : 1164052
rocksdb.block.cache.index.hit COUNT : 43224
rocksdb.block.cache.index.add COUNT : 1164052
rocksdb.block.cache.index.bytes.insert COUNT : 500267958336
rocksdb.block.cache.index.bytes.evict COUNT : 500267958336
rocksdb.block.cache.filter.miss COUNT : 1290604
rocksdb.block.cache.filter.hit COUNT : 1546639
rocksdb.block.cache.filter.add COUNT : 1290604
rocksdb.block.cache.filter.bytes.insert COUNT : 142640936220
rocksdb.block.cache.filter.bytes.evict COUNT : 142463171932
rocksdb.block.cache.data.miss COUNT : 2500130
rocksdb.block.cache.data.hit COUNT : 4300
rocksdb.block.cache.data.add COUNT : 1205954
rocksdb.block.cache.data.bytes.insert COUNT : 5038838992
rocksdb.block.cache.bytes.read COUNT : 179148852747
rocksdb.block.cache.bytes.write COUNT : 647947733548
rocksdb.bloom.filter.useful COUNT : 1630105
rocksdb.persistent.cache.hit COUNT : 0
rocksdb.persistent.cache.miss COUNT : 0
rocksdb.sim.block.cache.hit COUNT : 0
rocksdb.sim.block.cache.miss COUNT : 0
rocksdb.memtable.hit COUNT : 0
rocksdb.memtable.miss COUNT : 1000000
rocksdb.l0.hit COUNT : 0
rocksdb.l1.hit COUNT : 2200
rocksdb.l2andup.hit COUNT : 997800
rocksdb.compaction.key.drop.new COUNT : 0
rocksdb.compaction.key.drop.obsolete COUNT : 0
rocksdb.compaction.key.drop.range_del COUNT : 0
rocksdb.compaction.key.drop.user COUNT : 0
rocksdb.compaction.range_del.drop.obsolete COUNT : 0
rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0
rocksdb.number.keys.written COUNT : 0
rocksdb.number.keys.read COUNT : 1000000
rocksdb.number.keys.updated COUNT : 0
rocksdb.bytes.written COUNT : 0
rocksdb.bytes.read COUNT : 1024000000
rocksdb.number.db.seek COUNT : 50000
rocksdb.number.db.next COUNT : 0
rocksdb.number.db.prev COUNT : 0
rocksdb.number.db.seek.found COUNT : 24956
rocksdb.number.db.next.found COUNT : 0
rocksdb.number.db.prev.found COUNT : 0
rocksdb.db.iter.bytes.read COUNT : 25754592
rocksdb.no.file.closes COUNT : 0
rocksdb.no.file.opens COUNT : 1760
rocksdb.no.file.errors COUNT : 0
rocksdb.l0.slowdown.micros COUNT : 0
rocksdb.memtable.compaction.micros COUNT : 0
rocksdb.l0.num.files.stall.micros COUNT : 0
rocksdb.stall.micros COUNT : 0
rocksdb.db.mutex.wait.micros COUNT : 0
rocksdb.rate.limit.delay.millis COUNT : 0
rocksdb.num.iterators COUNT : 0
rocksdb.number.multiget.get COUNT : 0
rocksdb.number.multiget.keys.read COUNT : 0
rocksdb.number.multiget.bytes.read COUNT : 0
rocksdb.number.deletes.filtered COUNT : 0
rocksdb.number.merge.failures COUNT : 0
rocksdb.bloom.filter.prefix.checked COUNT : 0
rocksdb.bloom.filter.prefix.useful COUNT : 0
rocksdb.number.reseeks.iteration COUNT : 0
rocksdb.getupdatessince.calls COUNT : 0
rocksdb.block.cachecompressed.miss COUNT : 0
rocksdb.block.cachecompressed.hit COUNT : 0
rocksdb.block.cachecompressed.add COUNT : 0
rocksdb.block.cachecompressed.add.failures COUNT : 0
rocksdb.wal.synced COUNT : 0
rocksdb.wal.bytes COUNT : 0
rocksdb.write.self COUNT : 0
rocksdb.write.other COUNT : 0
rocksdb.write.timeout COUNT : 0
rocksdb.write.wal COUNT : 0
rocksdb.compact.read.bytes COUNT : 5581154591
rocksdb.compact.write.bytes COUNT : 5506159800
rocksdb.flush.write.bytes COUNT : 0
rocksdb.number.direct.load.table.properties COUNT : 0
rocksdb.number.superversion_acquires COUNT : 3
rocksdb.number.superversion_releases COUNT : 2
rocksdb.number.superversion_cleanups COUNT : 2
rocksdb.number.block.compressed COUNT : 0
rocksdb.number.block.decompressed COUNT : 0
rocksdb.number.block.not_compressed COUNT : 0
rocksdb.merge.operation.time.nanos COUNT : 0
rocksdb.filter.operation.time.nanos COUNT : 0
rocksdb.row.cache.hit COUNT : 0
rocksdb.row.cache.miss COUNT : 0
rocksdb.read.amp.estimate.useful.bytes COUNT : 0
rocksdb.read.amp.total.read.bytes COUNT : 0
rocksdb.number.rate_limiter.drains COUNT : 0
rocksdb.db.get.micros statistics Percentiles :=> 50 : 997.889590 95 : 1664.635517 99 : 1865.960909 100 : 13936.000000
rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 945000.000000 95 : 1526046.000000 99 : 1526046.000000 100 : 1526046.000000
rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.table.sync.micros statistics Percentiles :=> 50 : 437.000000 95 : 437.000000 99 : 437.000000 100 : 437.000000
rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 324.848485 95 : 558.461538 99 : 1915.000000 100 : 1915.000000
rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 322.222222 95 : 2200.000000 99 : 2543.000000 100 : 2543.000000
rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1516.746411 95 : 2726.923077 99 : 7463.076923 100 : 17829.000000
rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.247163 95 : 163.236767 99 : 169.095890 100 : 2452.000000
rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.570880 95 : 1.770671 99 : 2.849481 100 : 4033.000000
rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 1.000000 95 : 3.350000 99 : 3.870000 100 : 4.000000
rocksdb.db.seek.micros statistics Percentiles :=> 50 : 2385.102631 95 : 2893.738132 99 : 4062.618084 100 : 4093.000000
rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.sst.read.micros statistics Percentiles :=> 50 : 229.246258 95 : 563.726406 99 : 783.919701 100 : 12757.000000
rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000
rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
```

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.1 0.1 0.0 1.0 0.0 427.2 0 1 0.135 0 0
L1 4/0 224.39 MB 0.9 0.5 0.2 0.2 0.5 0.2 0.0 2.0 385.6 385.6 1 1 1.251 480K 0
L2 51/0 2.47 GB 1.0 1.4 0.3 1.2 1.4 0.3 0.0 5.7 413.1 413.1 4 4 0.892 1469K 0
L3 403/0 24.95 GB 1.0 1.9 0.3 1.6 1.9 0.3 0.0 7.5 417.4 417.4 5 4 1.165 1939K 0
L4 1120/0 70.33 GB 0.3 1.3 0.3 1.0 1.3 0.3 0.0 5.0 389.5 389.5 3 4 0.829 1286K 0
Sum 1578/0 97.97 GB 0.0 5.1 1.0 4.1 5.1 1.0 0.0 91.1 401.6 406.1 13 14 0.924 5176K 0
Int 0/0 0.00 KB 0.0 5.1 1.0 4.1 5.1 1.0 0.0 5445635472.0 405.9 405.8 13 13 0.984 5176K 0

Uptime(secs): 1110.0 total, 1109.3 interval
Flush(GB): cumulative 0.056, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 5.13 GB write, 4.73 MB/s write, 5.07 GB read, 4.68 MB/s read, 12.9 seconds
Interval compaction: 5.07 GB write, 4.68 MB/s write, 5.07 GB read, 4.68 MB/s read, 12.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 12 Average: 4120.9167 StdDev: 3690.48 Min: 154 Median: 1300.0000 Max: 8650

Percentiles: P50: 1300.00 P75: 8250.00 P99: 8650.00 P99.9: 8650.00 P99.99: 8650.00

( 110, 170 ] 1 8.333% 8.333% ## ( 170, 250 ] 1 8.333% 16.667% ## ( 250, 380 ] 2 16.667% 33.333% ### ( 380, 580 ] 1 8.333% 41.667% ## ( 870, 1300 ] 1 8.333% 50.000% ## ( 6600, 9900 ] 6 50.000% 100.000% ##########

** Level 1 read latency histogram (micros): Count: 112908 Average: 236.9306 StdDev: 155.66 Min: 68 Median: 208.1038 Max: 9588

Percentiles: P50: 208.10 P75: 376.28 P99: 572.15 P99.9: 579.56 P99.99: 848.31

( 51, 76 ] 8232 7.291% 7.291% # ( 76, 110 ] 33032 29.256% 36.547% ###### ( 110, 170 ] 11995 10.624% 47.170% ## ( 170, 250 ] 6708 5.941% 53.111% # ( 250, 380 ] 25442 22.533% 75.645% ##### ( 380, 580 ] 27447 24.309% 99.954% ##### ( 580, 870 ] 44 0.039% 99.993% ( 870, 1300 ] 2 0.002% 99.995% ( 2900, 4400 ] 1 0.001% 99.996% ( 6600, 9900 ] 5 0.004% 100.000%

** Level 2 read latency histogram (micros): Count: 234452 Average: 213.1414 StdDev: 131.10 Min: 32 Median: 188.8882 Max: 9979

Percentiles: P50: 188.89 P75: 286.41 P99: 563.02 P99.9: 680.76 P99.99: 2756.16

( 22, 34 ] 2 0.001% 0.001% ( 34, 51 ] 1 0.000% 0.001% ( 51, 76 ] 5663 2.415% 2.417% ( 76, 110 ] 44335 18.910% 21.327% #### ( 110, 170 ] 53264 22.719% 44.045% ##### ( 170, 250 ] 59131 25.221% 69.266% ##### ( 250, 380 ] 48001 20.474% 89.740% #### ( 380, 580 ] 23725 10.119% 99.859% ## ( 580, 870 ] 275 0.117% 99.977% ( 870, 1300 ] 9 0.004% 99.980% ( 1300, 1900 ] 8 0.003% 99.984% ( 1900, 2900 ] 17 0.007% 99.991% ( 2900, 4400 ] 3 0.001% 99.992% ( 4400, 6600 ] 7 0.003% 99.995% ( 6600, 9900 ] 10 0.004% 100.000% ( 9900, 14000 ] 1 0.000% 100.000%

** Level 3 read latency histogram (micros): Count: 1070906 Average: 250.1457 StdDev: 134.43 Min: 20 Median: 223.8383 Max: 3687

Percentiles: P50: 223.84 P75: 340.89 P99: 663.22 P99.9: 858.19 P99.99: 1723.21

( 15, 22 ] 1 0.000% 0.000% ( 22, 34 ] 1 0.000% 0.000% ( 34, 51 ] 1 0.000% 0.000% ( 51, 76 ] 25441 2.376% 2.376% ( 76, 110 ] 155441 14.515% 16.891% ### ( 110, 170 ] 127775 11.931% 28.822% ## ( 170, 250 ] 336999 31.469% 60.291% ###### ( 250, 380 ] 225296 21.038% 81.329% #### ( 380, 580 ] 185128 17.287% 98.616% ### ( 580, 870 ] 14336 1.339% 99.955% ( 870, 1300 ] 260 0.024% 99.979% ( 1300, 1900 ] 170 0.016% 99.995% ( 1900, 2900 ] 54 0.005% 100.000% ( 2900, 4400 ] 4 0.000% 100.000%

** Level 4 read latency histogram (micros): Count: 2243851 Average: 279.1160 StdDev: 170.54 Min: 20 Median: 240.0615 Max: 12757

Percentiles: P50: 240.06 P75: 417.01 P99: 810.47 P99.9: 868.29 P99.99: 1983.11

( 15, 22 ] 1 0.000% 0.000% ( 22, 34 ] 1 0.000% 0.000% ( 51, 76 ] 58056 2.587% 2.587% # ( 76, 110 ] 401366 17.887% 20.475% #### ( 110, 170 ] 310234 13.826% 34.301% ### ( 170, 250 ] 402238 17.926% 52.227% #### ( 250, 380 ] 406980 18.138% 70.365% #### ( 380, 580 ] 562042 25.048% 95.413% ##### ( 580, 870 ] 101285 4.514% 99.927% # ( 870, 1300 ] 761 0.034% 99.960% ( 1300, 1900 ] 643 0.029% 99.989% ( 1900, 2900 ] 236 0.011% 100.000% ( 2900, 4400 ] 8 0.000% 100.000% ( 9900, 14000 ] 1 0.000% 100.000%
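The per-level histograms above also show where read amplification concentrates. A quick Python sketch over the SST-read counts copied from those histograms (closed range query run):

```python
# SST read counts per level, copied from the latency histograms above
# (closed range query run with the builtin Bloom filter).
counts = {"L0": 12, "L1": 112908, "L2": 234452, "L3": 1070906, "L4": 2243851}

total = sum(counts.values())
deep_share = (counts["L3"] + counts["L4"]) / total

print(f"total SST reads: {total}")
print(f"share served by L3/L4: {deep_share:.1%}")  # ≈ 90.5%
```

Roughly nine out of ten SST reads land in the two deepest levels, which is consistent with most of the seek latency being paid on L3/L4 index, filter, and data blocks.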

** DB Stats **

Uptime(secs): 1110.0 total, 1109.3 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

21385892 18 3495504910 4739817 146816 330 20514505 37758 0 2227254 4736410

I/O count: 940629
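A few derived ratios from the closed-range-query counter dump above make the cache behavior easier to read. A minimal Python sketch, with the numbers copied verbatim from the statistics:

```python
# Counters copied from the closed-range-query statistics dump above.
filter_hit, filter_miss = 1546639, 1290604   # rocksdb.block.cache.filter.{hit,miss}
index_hit, index_miss = 43224, 1164052       # rocksdb.block.cache.index.{hit,miss}
seeks, seeks_found = 50000, 24956            # rocksdb.number.db.seek{,.found}

print(f"filter block cache hit rate: {filter_hit / (filter_hit + filter_miss):.1%}")  # ≈ 54.5%
print(f"index block cache hit rate:  {index_hit / (index_hit + index_miss):.1%}")     # ≈ 3.6%
print(f"seeks that found a key:      {seeks_found / seeks:.1%}")                      # ≈ 49.9%
```

Index blocks are almost never cache-resident here, so each seek pays block reads for index and filter blocks at multiple levels; with only about half of the closed-range seeks finding a key in range, that is the overhead a range filter is meant to remove.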

Open range query

Using rocksdb.BuiltinBloomFilter No Compression open range query throughput: 393.679 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 2489.898309 95 : 4077.233181 99 : 4335.653372 100 : 5493.000000 rocksdb.block.cache.miss COUNT : 3670404 rocksdb.block.cache.hit COUNT : 1564130 rocksdb.block.cache.add COUNT : 3670404 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1149700 rocksdb.block.cache.index.hit COUNT : 40462 rocksdb.block.cache.index.add COUNT : 1149700 rocksdb.block.cache.index.bytes.insert COUNT : 495710911760 rocksdb.block.cache.index.bytes.evict COUNT : 495710911760 rocksdb.block.cache.filter.miss COUNT : 1317075 rocksdb.block.cache.filter.hit COUNT : 1519950 rocksdb.block.cache.filter.add COUNT : 1317075 rocksdb.block.cache.filter.bytes.insert COUNT : 145627441887 rocksdb.block.cache.filter.bytes.evict COUNT : 145452417756 rocksdb.block.cache.data.miss COUNT : 1203629 rocksdb.block.cache.data.hit COUNT : 3718 rocksdb.block.cache.data.add COUNT : 1203629 rocksdb.block.cache.data.bytes.insert COUNT : 5029194696 rocksdb.block.cache.bytes.read COUNT : 174876509926 rocksdb.block.cache.bytes.write COUNT : 646367548343 rocksdb.bloom.filter.useful COUNT : 1630095 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 0 rocksdb.l1.hit COUNT : 2200 rocksdb.l2andup.hit COUNT : 997800 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read 
COUNT : 1024000000 rocksdb.number.db.seek COUNT : 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 50000 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 51600000 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1578 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1107.316614 95 : 1787.193811 99 : 1892.098085 100 : 4176.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 511.000000 95 : 511.000000 99 : 511.000000 100 : 511.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1613.131313 95 : 2766.158537 99 : 6909.692308 100 : 12939.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 105.760547 95 : 163.556600 99 : 169.097458 100 : 675.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 
100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 2489.898309 95 : 4077.233181 99 : 4335.653372 100 : 5493.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 248.302695 95 : 701.566784 99 : 839.256910 100 : 11940.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L1 4/0 224.39 MB 0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2 51/0 2.47 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3 403/0 24.95 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4 1120/0 70.33 GB 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 1578/0 97.97 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0

Uptime(secs): 1243.2 total, 1243.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Level 1 read latency histogram (micros): Count: 92677 Average: 242.8106 StdDev: 170.21 Min: 68 Median: 160.0690 Max: 8951

Percentiles: P50: 160.07 P75: 417.68 P99: 653.90 P99.9: 849.45 P99.99: 869.00

( 51, 76 ] 6034 6.511% 6.511% # ( 76, 110 ] 29054 31.350% 37.861% ###### ( 110, 170 ] 13482 14.547% 52.408% ### ( 170, 250 ] 4924 5.313% 57.721% # ( 250, 380 ] 10923 11.786% 69.507% ## ( 380, 580 ] 27018 29.153% 98.660% ###### ( 580, 870 ] 1237 1.335% 99.995% ( 870, 1300 ] 1 0.001% 99.996% ( 1900, 2900 ] 1 0.001% 99.997% ( 4400, 6600 ] 2 0.002% 99.999% ( 6600, 9900 ] 1 0.001% 100.000%

** Level 2 read latency histogram (micros): Count: 236425 Average: 239.7524 StdDev: 159.52 Min: 65 Median: 196.3548 Max: 11940

Percentiles: P50: 196.35 P75: 338.58 P99: 739.84 P99.9: 859.57 P99.99: 3190.89

( 51, 76 ] 4647 1.966% 1.966% ( 76, 110 ] 43687 18.478% 20.444% #### ( 110, 170 ] 53809 22.759% 43.203% ##### ( 170, 250 ] 48779 20.632% 63.835% #### ( 250, 380 ] 38738 16.385% 80.220% ### ( 380, 580 ] 41560 17.579% 97.798% #### ( 580, 870 ] 5154 2.180% 99.978% ( 870, 1300 ] 8 0.003% 99.982% ( 1300, 1900 ] 15 0.006% 99.988% ( 1900, 2900 ] 3 0.001% 99.989% ( 2900, 4400 ] 7 0.003% 99.992% ( 4400, 6600 ] 11 0.005% 99.997% ( 6600, 9900 ] 3 0.001% 99.998% ( 9900, 14000 ] 4 0.002% 100.000%

** Level 3 read latency histogram (micros): Count: 1085892 Average: 279.0070 StdDev: 160.45 Min: 65 Median: 238.3491 Max: 2610

Percentiles: P50: 238.35 P75: 383.52 P99: 813.56 P99.9: 869.20 P99.99: 1778.09

( 51, 76 ] 19731 1.817% 1.817% ( 76, 110 ] 156608 14.422% 16.239% ### ( 110, 170 ] 129779 11.951% 28.190% ## ( 170, 250 ] 277198 25.527% 53.718% ##### ( 250, 380 ] 227168 20.920% 74.638% #### ( 380, 580 ] 223531 20.585% 95.223% #### ( 580, 870 ] 50931 4.690% 99.913% # ( 870, 1300 ] 690 0.064% 99.976% ( 1300, 1900 ] 185 0.017% 99.993% ( 1900, 2900 ] 71 0.007% 100.000%

** Level 4 read latency histogram (micros): Count: 2256986 Average: 309.7356 StdDev: 196.32 Min: 65 Median: 270.3037 Max: 3362

Percentiles: P50: 270.30 P75: 459.10 P99: 847.39 P99.9: 1011.07 P99.99: 1894.95

( 51, 76 ] 45772 2.028% 2.028% ( 76, 110 ] 405534 17.968% 19.996% #### ( 110, 170 ] 316400 14.019% 34.015% ### ( 170, 250 ] 293807 13.018% 47.032% ### ( 250, 380 ] 428858 19.001% 66.034% #### ( 380, 580 ] 511662 22.670% 88.704% ##### ( 580, 870 ] 252032 11.167% 99.871% ## ( 870, 1300 ] 2024 0.090% 99.960% ( 1300, 1900 ] 677 0.030% 99.990% ( 1900, 2900 ] 216 0.010% 100.000% ( 2900, 4400 ] 5 0.000% 100.000%

** DB Stats **

Uptime(secs): 1243.2 total, 1243.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

28494898 18 4763902776 6422339 158122 371 20609955 38081 0 3032124 6404833

I/O count: 883903
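Putting the two runs side by side (closed vs. open range queries, numbers copied from the dumps above):

```python
# Throughput (ops/s), seek p50 latency (micros), and seek-found rates,
# copied from the closed- and open-range-query runs above.
closed = {"tput": 432.434, "seek_p50": 2385.102631, "found": 24956 / 50000}
open_range = {"tput": 393.679, "seek_p50": 2489.898309, "found": 50000 / 50000}

slowdown = 1 - open_range["tput"] / closed["tput"]
print(f"open-range throughput drop:       {slowdown:.1%}")            # ≈ 9%
print(f"closed-range seeks finding a key: {closed['found']:.1%}")     # ≈ 49.9%
print(f"open-range seeks finding a key:   {open_range['found']:.1%}")
```

Every open-ended seek returns the next existing key (found rate 100%), so point filters cannot prune any of them; only about half the closed-range seeks had a key in range, and that empty half is the headroom a SuRF/Rosetta-style range filter can exploit.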

3. SuRF filter

Point query

throughput: 1376.25 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1010.872656 95 : 1877.105227 99 : 2781.958314 100 : 18372.000000 rocksdb.block.cache.miss COUNT : 328383992 rocksdb.block.cache.hit COUNT : 14525 rocksdb.block.cache.add COUNT : 4822175 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1339054 rocksdb.block.cache.index.hit COUNT : 4582 rocksdb.block.cache.index.add COUNT : 1339054 rocksdb.block.cache.index.bytes.insert COUNT : 568519518896 rocksdb.block.cache.index.bytes.evict COUNT : 568519518896 rocksdb.block.cache.filter.miss COUNT : 2165353 rocksdb.block.cache.filter.hit COUNT : 9943 rocksdb.block.cache.filter.add COUNT : 2165353 rocksdb.block.cache.filter.bytes.insert COUNT : 145892313104 rocksdb.block.cache.filter.bytes.evict COUNT : 145350915288 rocksdb.block.cache.data.miss COUNT : 324879585 rocksdb.block.cache.data.hit COUNT : 0 rocksdb.block.cache.data.add COUNT : 1317768 rocksdb.block.cache.data.bytes.insert COUNT : 5503880352 rocksdb.block.cache.bytes.read COUNT : 2762252368 rocksdb.block.cache.bytes.write COUNT : 719915712352 rocksdb.bloom.filter.useful COUNT : 1017396 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 575 rocksdb.memtable.miss COUNT : 1049425 rocksdb.l0.hit COUNT : 0 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 473 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 100000000 rocksdb.number.keys.read COUNT : 1050000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 104800000000 rocksdb.bytes.read COUNT : 1024000000 rocksdb.number.db.seek COUNT : 0 
rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 0 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 0 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 43797 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 2898007602 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 104800000000 rocksdb.write.self COUNT : 100000000 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 200000000 rocksdb.compact.read.bytes COUNT : 1406541711370 rocksdb.compact.write.bytes COUNT : 1360829093376 rocksdb.flush.write.bytes COUNT : 105146855022 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 327 rocksdb.number.superversion_releases COUNT : 324 rocksdb.number.superversion_cleanups COUNT : 323 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss 
COUNT : 0 rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1010.872656 95 : 1877.105227 99 : 2781.958314 100 : 18372.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 4.813537 95 : 20.587777 99 : 1143.670743 100 : 8374.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 993491.012299 95 : 2524170.616114 99 : 7155170.731707 100 : 9679838.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 415.741445 95 : 996.927711 99 : 1665.454545 100 : 3842.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 319.661817 95 : 653.554120 99 : 1294.862338 100 : 44184.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 288.457622 95 : 527.747604 99 : 790.064894 100 : 28478.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 904.969239 95 : 1723.884943 99 : 2629.387464 100 : 21320.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 103.504734 95 : 166.294153 99 : 271.536490 100 : 4022.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.593299 95 : 1.924462 99 : 3.518433 100 : 53680.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count 
statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 1.000000 95 : 1.250000 99 : 17.240000 100 : 28.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.531240 95 : 934.106732 99 : 1227.224184 100 : 8312.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 163.563537 95 : 515.341242 99 : 667.297606 100 : 15351.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 1048.000000 95 : 1048.000000 99 : 1048.000000 100 : 1048.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 97.9 97.9 0.0 1.0 0.0 398.1 252 1630 0.155 0 0
L1 5/0 230.72 MB 0.9 193.5 97.9 95.6 193.5 97.9 0.0 2.0 346.5 346.5 572 107 5.344 197M 20
L2 56/0 2.46 GB 1.0 534.4 97.2 437.3 534.4 97.2 0.5 5.5 328.9 328.9 1664 1518 1.096 545M 103
L3 402/0 24.99 GB 1.0 409.1 80.4 328.7 409.0 80.4 14.8 5.1 338.1 338.1 1239 1197 1.035 417M 271
L4 1120/0 70.17 GB 0.3 130.4 30.5 100.0 130.4 30.4 39.7 4.3 337.1 337.0 396 464 0.854 133M 79
Sum 1583/0 97.84 GB 0.0 1267.4 305.9 961.5 1365.3 403.8 55.0 13.9 314.8 339.1 4122 4916 0.839 1294M 473
Int 0/0 0.00 KB 0.0 94.3 16.4 77.9 95.7 17.8 12.2 66.4 344.8 350.0 280 255 1.098 96M 52

Uptime(secs): 4858.0 total, 1237.6 interval
Flush(GB): cumulative 97.924, interval 1.442
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 1365.28 GB write, 287.78 MB/s write, 1267.44 GB read, 267.16 MB/s read, 4122.5 seconds
Interval compaction: 95.75 GB write, 79.22 MB/s write, 94.31 GB read, 78.03 MB/s read, 280.1 seconds
Stalls(count): 1457 level0_slowdown, 121 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 3592 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 145 total count
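Two sanity checks on the SuRF-run stats, as a small Python sketch (all numbers copied from the dumps above):

```python
# From the SuRF point-query run above.
compaction_write_gb = 1365.28   # "Cumulative compaction: 1365.28 GB write"
flush_gb = 97.924               # "Flush(GB): cumulative 97.924"
stall_micros = 2_898_007_602    # rocksdb.stall.micros
uptime_secs = 4858.0            # "Uptime(secs): 4858.0 total"

write_amp = compaction_write_gb / flush_gb
stall_share = stall_micros / 1e6 / uptime_secs
print(f"write amplification: {write_amp:.1f}")    # matches the Sum row's W-Amp of 13.9
print(f"time spent stalled:  {stall_share:.1%}")  # ≈ 60% of total uptime
```

The load phase spent roughly 60% of wall-clock time in write stalls (1457 level0_slowdown events, 3592 pending-compaction slowdowns), which is exactly the stall problem the Background section's "System Stalls" point is concerned with.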

** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 21616 Average: 187.3155 StdDev: 161.52 Min: 64 Median: 132.0281 Max: 2838

Percentiles: P50: 132.03 P75: 223.47 P99: 811.80 P99.9: 1445.03 P99.99: 2703.49

( 51, 76 ] 1470 6.801% 6.801% # ( 76, 110 ] 7039 32.564% 39.364% ####### ( 110, 170 ] 6262 28.969% 68.334% ###### ( 170, 250 ] 2156 9.974% 78.308% ## ( 250, 380 ] 2064 9.548% 87.856% ## ( 380, 580 ] 1994 9.225% 97.081% ## ( 580, 870 ] 519 2.401% 99.482% ( 870, 1300 ] 87 0.402% 99.884% ( 1300, 1900 ] 14 0.065% 99.949% ( 1900, 2900 ] 11 0.051% 100.000%

** Level 1 read latency histogram (micros): Count: 827777 Average: 186.9114 StdDev: 118.33 Min: 19 Median: 149.1304 Max: 13434

Percentiles: P50: 149.13 P75: 229.18 P99: 570.21 P99.9: 1206.43 P99.99: 2668.05

( 15, 22 ] 4 0.000% 0.000% ( 22, 34 ] 27 0.003% 0.004% ( 34, 51 ] 87 0.011% 0.014% ( 51, 76 ] 34077 4.117% 4.131% # ( 76, 110 ] 113101 13.663% 17.794% ### ( 110, 170 ] 408776 49.382% 67.177% ########## ( 170, 250 ] 87543 10.576% 77.752% ## ( 250, 380 ] 138711 16.757% 94.509% ### ( 380, 580 ] 39086 4.722% 99.231% # ( 580, 870 ] 4312 0.521% 99.752% ( 870, 1300 ] 1566 0.189% 99.941% ( 1300, 1900 ] 299 0.036% 99.977% ( 1900, 2900 ] 137 0.017% 99.994% ( 2900, 4400 ] 42 0.005% 99.999% ( 4400, 6600 ] 4 0.000% 99.999% ( 6600, 9900 ] 2 0.000% 100.000% ( 9900, 14000 ] 3 0.000% 100.000%

** Level 2 read latency histogram (micros): Count: 837605 Average: 192.4806 StdDev: 128.83 Min: 19 Median: 166.9948 Max: 15351

Percentiles: P50: 166.99 P75: 228.64 P99: 568.13 P99.9: 1498.89 P99.99: 3967.10

( 15, 22 ] 18 0.002% 0.002%
( 22, 34 ] 141 0.017% 0.019%
( 34, 51 ] 346 0.041% 0.060%
( 51, 76 ] 10500 1.254% 1.314%
( 76, 110 ] 85512 10.209% 11.523% ##
( 110, 170 ] 339279 40.506% 52.029% ########
( 170, 250 ] 262509 31.340% 83.369% ######
( 250, 380 ] 94452 11.276% 94.646% ##
( 380, 580 ] 38774 4.629% 99.275% #
( 580, 870 ] 3749 0.448% 99.722%
( 870, 1300 ] 1379 0.165% 99.887%
( 1300, 1900 ] 327 0.039% 99.926%
( 1900, 2900 ] 383 0.046% 99.972%
( 2900, 4400 ] 214 0.026% 99.997%
( 4400, 6600 ] 12 0.001% 99.999%
( 6600, 9900 ] 2 0.000% 99.999%
( 9900, 14000 ] 7 0.001% 100.000%
( 14000, 22000 ] 1 0.000% 100.000%

** Level 3 read latency histogram (micros): Count: 1262927 Average: 229.4264 StdDev: 153.54 Min: 19 Median: 174.3809 Max: 12875

Percentiles: P50: 174.38 P75: 320.18 P99: 727.81 P99.9: 1277.52 P99.99: 3101.22

( 15, 22 ] 20 0.002% 0.002%
( 22, 34 ] 40 0.003% 0.005%
( 34, 51 ] 35 0.003% 0.008%
( 51, 76 ] 34084 2.699% 2.706% #
( 76, 110 ] 195557 15.484% 18.191% ###
( 110, 170 ] 390667 30.933% 49.124% ######
( 170, 250 ] 201979 15.993% 65.117% ###
( 250, 380 ] 231194 18.306% 83.423% ####
( 380, 580 ] 187977 14.884% 98.308% ###
( 580, 870 ] 17157 1.359% 99.666%
( 870, 1300 ] 3117 0.247% 99.913%
( 1300, 1900 ] 673 0.053% 99.966%
( 1900, 2900 ] 283 0.022% 99.989%
( 2900, 4400 ] 132 0.010% 99.999%
( 4400, 6600 ] 3 0.000% 99.999%
( 6600, 9900 ] 3 0.000% 100.000%
( 9900, 14000 ] 6 0.000% 100.000%

** Level 4 read latency histogram (micros): Count: 1846923 Average: 225.6404 StdDev: 152.60 Min: 19 Median: 166.5208 Max: 13042

Percentiles: P50: 166.52 P75: 321.13 P99: 703.55 P99.9: 1260.58 P99.99: 3343.82

( 15, 22 ] 6 0.000% 0.000%
( 22, 34 ] 10 0.001% 0.001%
( 34, 51 ] 13 0.001% 0.002%
( 51, 76 ] 58915 3.190% 3.191% #
( 76, 110 ] 305431 16.537% 19.729% ###
( 110, 170 ] 593502 32.135% 51.863% ######
( 170, 250 ] 242580 13.134% 64.998% ###
( 250, 380 ] 337619 18.280% 83.278% ####
( 380, 580 ] 280826 15.205% 98.483% ###
( 580, 870 ] 22420 1.214% 99.697%
( 870, 1300 ] 4133 0.224% 99.921%
( 1300, 1900 ] 877 0.047% 99.968%
( 1900, 2900 ] 335 0.018% 99.986%
( 2900, 4400 ] 241 0.013% 99.999%
( 4400, 6600 ] 5 0.000% 99.999%
( 6600, 9900 ] 1 0.000% 100.000%
( 9900, 14000 ] 9 0.000% 100.000%

** DB Stats **

Uptime(secs): 4858.0 total, 1237.6 interval
Cumulative writes: 100M writes, 100M keys, 100M commit groups, 1.0 writes per commit group, ingest: 97.60 GB, 20.57 MB/s
Cumulative WAL: 100M writes, 0 syncs, 100000000.00 writes per sync, written: 97.60 GB, 20.57 MB/s
Cumulative stall: 00:48:18.008 H:M:S, 59.7 percent
Interval writes: 1524K writes, 1524K keys, 1524K commit groups, 1.0 writes per commit group, ingest: 1523.90 MB, 1.23 MB/s
Interval WAL: 1524K writes, 0 syncs, 1524739.00 writes per sync, written: 1.49 MB, 1.23 MB/s
Interval stall: 00:01:36.093 H:M:S, 7.8 percent
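The stall ratio reported under DB Stats can be cross-checked by hand: 00:48:18.008 of cumulative stall over a 4858.0 s uptime should reproduce the logged "59.7 percent". A quick check, assuming the H:M:S field is total wall time spent stalled:

```python
# Sanity check of the "Cumulative stall" figure in the DB Stats above:
# 00:48:18.008 of stall time over a 4858.0 s uptime.
h, m, s = 0, 48, 18.008
stall_secs = h * 3600 + m * 60 + s   # 2898.008 s
uptime_secs = 4858.0
pct = 100 * stall_secs / uptime_secs
print(round(pct, 1))  # 59.7 -- matches the logged "59.7 percent"
```

In other words, this load phase spends well over half its wall time throttled by stalls, which is the behavior the System Stalls section is targeting.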

10377868315 6002 200282260598 2264649690 1039225037 232939 130538971748 2603087071 0 200402937 668931892

I/O count: 242508

Range query (closed) 69310

closed range query throughput: 612.074 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 824.776845 95 : 1807.981530 99 : 2598.433420 100 : 3460.000000 rocksdb.block.cache.miss COUNT : 3687851 rocksdb.block.cache.hit COUNT : 2385403 rocksdb.block.cache.add COUNT : 3687851 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1115510 rocksdb.block.cache.index.hit COUNT : 305501 rocksdb.block.cache.index.add COUNT : 1115510 rocksdb.block.cache.index.bytes.insert COUNT : 479534912272 rocksdb.block.cache.index.bytes.evict COUNT : 479532471312 rocksdb.block.cache.filter.miss COUNT : 1112451 rocksdb.block.cache.filter.hit COUNT : 2075814 rocksdb.block.cache.filter.add COUNT : 1112451 rocksdb.block.cache.filter.bytes.insert COUNT : 74473698720 rocksdb.block.cache.filter.bytes.evict COUNT : 74368648344 rocksdb.block.cache.data.miss COUNT : 1459890 rocksdb.block.cache.data.hit COUNT : 4088 rocksdb.block.cache.data.add COUNT : 1459890 rocksdb.block.cache.data.bytes.insert COUNT : 6103209088 rocksdb.block.cache.bytes.read COUNT : 262414641824 rocksdb.block.cache.bytes.write COUNT : 560111820080 rocksdb.bloom.filter.useful COUNT : 1664075 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 575 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 1024000000 rocksdb.number.db.seek COUNT 
: 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 24956 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 25754592 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1584 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 60580662 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 rocksdb.read.amp.estimate.useful.bytes 
COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1023.779961 95 : 1678.353598 99 : 1997.923656 100 : 4936.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 697.000000 95 : 697.000000 99 : 697.000000 100 : 697.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 2833.000000 95 : 2833.000000 99 : 2833.000000 100 : 2833.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1527.826087 95 : 2718.390805 99 : 2926.666667 100 : 5807.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 107.978970 95 : 165.726635 99 : 234.315206 100 : 814.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.891310 95 : 2.710371 99 : 3.500000 100 : 1181.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 
rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 824.776845 95 : 1807.981530 99 : 2598.433420 100 : 3460.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 185.490866 95 : 643.197989 99 : 828.926609 100 : 4632.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000
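The statistics dump above packs every counter and percentile series onto one line; a small sketch for pulling one metric's percentiles back out — `seek_percentiles` is an illustrative helper for these notes, not part of RocksDB:

```python
import re

# Illustrative only: recover the "Percentiles :=> p : v p : v ..."
# series for one metric from a flattened statistics dump.
def seek_percentiles(dump: str, metric: str = "rocksdb.db.seek.micros") -> dict:
    m = re.search(
        re.escape(metric) + r" statistics Percentiles :=> ([\d .:]+?)(?= rocksdb\.|$)",
        dump)
    pairs = re.findall(r"(\d+) : ([\d.]+)", m.group(1))
    return {int(p): float(v) for p, v in pairs}

dump = ("... rocksdb.db.seek.micros statistics Percentiles :=> "
        "50 : 824.776845 95 : 1807.981530 99 : 2598.433420 100 : 3460.000000 "
        "rocksdb.block.cache.miss COUNT : 3687851 ...")
p = seek_percentiles(dump)
print(p[99])  # 2598.43342
```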

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L0 1/0 57.77 MB 0.2 0.0 0.0 0.0 0.1 0.1 0.0 1.0 0.0 441.9 0 1 0.131 0 0
L1 5/0 230.72 MB 0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2 56/0 2.46 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3 402/0 24.99 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4 1120/0 70.17 GB 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 1584/0 97.90 GB 0.0 0.0 0.0 0.0 0.1 0.1 0.0 1.0 0.0 441.9 0 1 0.131 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.1 0.1 0.0 1.0 0.0 441.9 0 1 0.131 0 0
Uptime(secs): 1107.3 total, 1107.3 interval
Flush(GB): cumulative 0.056, interval 0.056
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Interval compaction: 0.06 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 201684 Average: 293.5129 StdDev: 245.96 Min: 23 Median: 152.5957 Max: 2734

Percentiles: P50: 152.60 P75: 497.78 P99: 862.68 P99.9: 1205.33 P99.99: 1291.39

( 22, 34 ] 89 0.044% 0.044%
( 34, 51 ] 15 0.007% 0.052%
( 76, 110 ] 23304 11.555% 11.606% ##
( 110, 170 ] 109073 54.081% 65.687% ###########
( 170, 250 ] 7893 3.914% 69.601% #
( 250, 380 ] 2822 1.399% 71.000%
( 380, 580 ] 13699 6.792% 77.792% #
( 580, 870 ] 43880 21.757% 99.549% ####
( 870, 1300 ] 907 0.450% 99.999%
( 1300, 1900 ] 1 0.000% 100.000%
( 1900, 2900 ] 1 0.000% 100.000%

** Level 1 read latency histogram (micros): Count: 212117 Average: 123.9146 StdDev: 92.12 Min: 64 Median: 100.3633 Max: 4589

Percentiles: P50: 100.36 P75: 133.42 P99: 552.80 P99.9: 800.43 P99.99: 865.18

( 51, 76 ] 27484 12.957% 12.957% ###
( 76, 110 ] 109654 51.695% 64.652% ##########
( 110, 170 ] 56233 26.510% 91.162% #####
( 170, 250 ] 4042 1.906% 93.068%
( 250, 380 ] 4582 2.160% 95.228%
( 380, 580 ] 9260 4.366% 99.594% #
( 580, 870 ] 855 0.403% 99.997%
( 870, 1300 ] 2 0.001% 99.998%
( 1900, 2900 ] 1 0.000% 99.998%
( 2900, 4400 ] 3 0.001% 100.000%
( 4400, 6600 ] 1 0.000% 100.000%

** Level 2 read latency histogram (micros): Count: 256496 Average: 194.4559 StdDev: 132.45 Min: 68 Median: 157.1437 Max: 4632

Percentiles: P50: 157.14 P75: 227.84 P99: 657.08 P99.9: 851.98 P99.99: 1194.38

( 51, 76 ] 7171 2.796% 2.796% #
( 76, 110 ] 62673 24.434% 27.230% #####
( 110, 170 ] 74331 28.979% 56.209% ######
( 170, 250 ] 66658 25.988% 82.197% #####
( 250, 380 ] 11821 4.609% 86.806% #
( 380, 580 ] 30364 11.838% 98.644% ##
( 580, 870 ] 3435 1.339% 99.983%
( 870, 1300 ] 23 0.009% 99.992%
( 1300, 1900 ] 6 0.002% 99.995%
( 1900, 2900 ] 3 0.001% 99.996%
( 2900, 4400 ] 10 0.004% 100.000%
( 4400, 6600 ] 1 0.000% 100.000%

** Level 3 read latency histogram (micros): Count: 807807 Average: 266.0575 StdDev: 177.54 Min: 66 Median: 200.4110 Max: 2952

Percentiles: P50: 200.41 P75: 429.60 P99: 818.20 P99.9: 867.51 P99.99: 2240.66

( 51, 76 ] 16125 1.996% 1.996%
( 76, 110 ] 136871 16.944% 18.940% ###
( 110, 170 ] 175868 21.771% 40.711% ####
( 170, 250 ] 197401 24.437% 65.147% #####
( 250, 380 ] 27240 3.372% 68.519% #
( 380, 580 ] 211109 26.134% 94.653% #####
( 580, 870 ] 42752 5.292% 99.945% #
( 870, 1300 ] 193 0.024% 99.969%
( 1300, 1900 ] 126 0.016% 99.985%
( 1900, 2900 ] 121 0.015% 100.000%
( 2900, 4400 ] 1 0.000% 100.000%

** Level 4 read latency histogram (micros): Count: 2211329 Average: 273.5481 StdDev: 185.71 Min: 66 Median: 202.5938 Max: 3638

Percentiles: P50: 202.59 P75: 439.50 P99: 829.97 P99.9: 1000.51 P99.99: 1850.31

( 51, 76 ] 46411 2.099% 2.099%
( 76, 110 ] 380607 17.212% 19.310% ###
( 110, 170 ] 462763 20.927% 40.237% ####
( 170, 250 ] 529877 23.962% 64.199% #####
( 250, 380 ] 65013 2.940% 67.139% #
( 380, 580 ] 584294 26.423% 93.562% #####
( 580, 870 ] 139508 6.309% 99.871% #
( 870, 1300 ] 2124 0.096% 99.967%
( 1300, 1900 ] 557 0.025% 99.992%
( 1900, 2900 ] 174 0.008% 100.000%
( 2900, 4400 ] 3 0.000% 100.000%

** DB Stats **

Uptime(secs): 1107.3 total, 1107.3 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

7275731 12 1127716575 1578402 90618 255 9680485 17805 0 736129 1581772

I/O count: 427788

Range query (open) 69310

Using rocksdb.SuRFFilter No Compression open range query throughput: 777.228 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1174.175861 95 : 2560.310278 99 : 2883.516484 100 : 4689.000000 rocksdb.block.cache.miss COUNT : 3699413 rocksdb.block.cache.hit COUNT : 1291570 rocksdb.block.cache.add COUNT : 3699413 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1092803 rocksdb.block.cache.index.hit COUNT : 209496 rocksdb.block.cache.index.add COUNT : 1092803 rocksdb.block.cache.index.bytes.insert COUNT : 472186241160 rocksdb.block.cache.index.bytes.evict COUNT : 472184196232 rocksdb.block.cache.filter.miss COUNT : 1112278 rocksdb.block.cache.filter.hit COUNT : 1077128 rocksdb.block.cache.filter.add COUNT : 1112278 rocksdb.block.cache.filter.bytes.insert COUNT : 74444282760 rocksdb.block.cache.filter.bytes.evict COUNT : 74339158512 rocksdb.block.cache.data.miss COUNT : 1494332 rocksdb.block.cache.data.hit COUNT : 4946 rocksdb.block.cache.data.add COUNT : 1494332 rocksdb.block.cache.data.bytes.insert COUNT : 6246939072 rocksdb.block.cache.bytes.read COUNT : 152695968688 rocksdb.block.cache.bytes.write COUNT : 552877462992 rocksdb.bloom.filter.useful COUNT : 1664075 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 575 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 
1024000000 rocksdb.number.db.seek COUNT : 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 50000 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 51600000 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1584 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1017.790644 95 : 1660.790853 99 : 1907.375660 100 : 4296.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 450.000000 95 : 450.000000 99 : 450.000000 100 : 450.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1609.202454 95 : 2791.685393 99 : 6232.000000 100 : 18176.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 107.742335 95 : 165.655582 99 : 235.871580 100 : 2847.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 
100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1174.175861 95 : 2560.310278 99 : 2883.516484 100 : 4689.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 182.166259 95 : 580.593146 99 : 815.911964 100 : 12534.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop

L0 1/0 57.77 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L1 5/0 230.72 MB 0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2 56/0 2.46 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3 402/0 24.99 GB 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4 1120/0 70.17 GB 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 1584/0 97.90 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Uptime(secs): 1082.7 total, 1082.7 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 152895 Average: 164.2269 StdDev: 98.06 Min: 88 Median: 138.9819 Max: 11577

Percentiles: P50: 138.98 P75: 159.96 P99: 553.89 P99.9: 762.40 P99.99: 861.18

( 76, 110 ] 23628 15.454% 15.454% ###
( 110, 170 ] 109350 71.520% 86.973% ##############
( 170, 250 ] 7594 4.967% 91.940% #
( 250, 380 ] 3322 2.173% 94.113%
( 380, 580 ] 8594 5.621% 99.734% #
( 580, 870 ] 404 0.264% 99.998%
( 1300, 1900 ] 1 0.001% 99.999%
( 2900, 4400 ] 1 0.001% 99.999%
( 9900, 14000 ] 1 0.001% 100.000%

** Level 1 read latency histogram (micros): Count: 214167 Average: 121.0803 StdDev: 97.11 Min: 65 Median: 100.0563 Max: 12534

Percentiles: P50: 100.06 P75: 132.08 P99: 541.63 P99.9: 774.33 P99.99: 862.91

( 51, 76 ] 28430 13.275% 13.275% ###
( 76, 110 ] 111165 51.906% 65.180% ##########
( 110, 170 ] 57139 26.680% 91.860% #####
( 170, 250 ] 4026 1.880% 93.740%
( 250, 380 ] 4927 2.301% 96.040%
( 380, 580 ] 7843 3.662% 99.703% #
( 580, 870 ] 631 0.295% 99.997%
( 1900, 2900 ] 1 0.000% 99.998%
( 6600, 9900 ] 4 0.002% 100.000%
( 9900, 14000 ] 1 0.000% 100.000%

** Level 2 read latency histogram (micros): Count: 272581 Average: 196.9780 StdDev: 144.20 Min: 69 Median: 156.8506 Max: 9913

Percentiles: P50: 156.85 P75: 230.71 P99: 643.95 P99.9: 851.43 P99.99: 2248.38

( 51, 76 ] 8074 2.962% 2.962% #
( 76, 110 ] 66996 24.578% 27.540% #####
( 110, 170 ] 78403 28.763% 56.304% ######
( 170, 250 ] 67160 24.639% 80.942% #####
( 250, 380 ] 13592 4.986% 85.929% #
( 380, 580 ] 34874 12.794% 98.723% ###
( 580, 870 ] 3429 1.258% 99.981%
( 870, 1300 ] 10 0.004% 99.984%
( 1300, 1900 ] 14 0.005% 99.989%
( 1900, 2900 ] 5 0.002% 99.991%
( 2900, 4400 ] 7 0.003% 99.994%
( 4400, 6600 ] 9 0.003% 99.997%
( 6600, 9900 ] 6 0.002% 99.999%
( 9900, 14000 ] 2 0.001% 100.000%

** Level 3 read latency histogram (micros): Count: 824890 Average: 265.4743 StdDev: 177.18 Min: 66 Median: 199.5194 Max: 2802

Percentiles: P50: 199.52 P75: 430.33 P99: 815.16 P99.9: 866.74 P99.99: 2156.86

( 51, 76 ] 17072 2.070% 2.070%
( 76, 110 ] 141614 17.168% 19.237% ###
( 110, 170 ] 181217 21.969% 41.206% ####
( 170, 250 ] 196595 23.833% 65.039% #####
( 250, 380 ] 26977 3.270% 68.309% #
( 380, 580 ] 219319 26.588% 94.897% #####
( 580, 870 ] 41740 5.060% 99.957% #
( 870, 1300 ] 94 0.011% 99.968%
( 1300, 1900 ] 151 0.018% 99.987%
( 1900, 2900 ] 111 0.013% 100.000%

** Level 4 read latency histogram (micros): Count: 2236462 Average: 272.7666 StdDev: 185.34 Min: 66 Median: 202.2448 Max: 3017

Percentiles: P50: 202.24 P75: 439.28 P99: 827.85 P99.9: 948.62 P99.99: 2102.71

( 51, 76 ] 48560 2.171% 2.171%
( 76, 110 ] 388532 17.373% 19.544% ###
( 110, 170 ] 468033 20.927% 40.471% ####
( 170, 250 ] 528721 23.641% 64.112% #####
( 250, 380 ] 66433 2.970% 67.083% #
( 380, 580 ] 597384 26.711% 93.794% #####
( 580, 870 ] 136236 6.092% 99.885% #
( 870, 1300 ] 1786 0.080% 99.965%
( 1300, 1900 ] 497 0.022% 99.987%
( 1900, 2900 ] 278 0.012% 100.000%
( 2900, 4400 ] 2 0.000% 100.000%

** DB Stats **

Uptime(secs): 1082.7 total, 1082.7 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

14214791 12 2212881101 3035281 94176 270 9710763 17914 0 1422688 3025455

I/O count: 420497

SuRF-Hash

Range query (closed)

Using rocksdb.SuRFFilter No Compression closed range query throughput: 1236.7 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 809.727230 95 : 1689.537380 99 : 2190.830946 100 : 3150.000000 rocksdb.block.cache.miss COUNT : 3637343 rocksdb.block.cache.hit COUNT : 1289824 rocksdb.block.cache.add COUNT : 3637343 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1065637 rocksdb.block.cache.index.hit COUNT : 208146 rocksdb.block.cache.index.add COUNT : 1065637 rocksdb.block.cache.index.bytes.insert COUNT : 460936894520 rocksdb.block.cache.index.bytes.evict COUNT : 460935207608 rocksdb.block.cache.filter.miss COUNT : 1112276 rocksdb.block.cache.filter.hit COUNT : 1077130 rocksdb.block.cache.filter.add COUNT : 1112276 rocksdb.block.cache.filter.bytes.insert COUNT : 74447622504 rocksdb.block.cache.filter.bytes.evict COUNT : 74342498256 rocksdb.block.cache.data.miss COUNT : 1459430 rocksdb.block.cache.data.hit COUNT : 4548 rocksdb.block.cache.data.add COUNT : 1459430 rocksdb.block.cache.data.bytes.insert COUNT : 6101093984 rocksdb.block.cache.bytes.read COUNT : 152282454160 rocksdb.block.cache.bytes.write COUNT : 541485611008 rocksdb.bloom.filter.useful COUNT : 1664075 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 575 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 
1024000000 rocksdb.number.db.seek COUNT : 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 24956 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 25754592 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1584 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1014.102138 95 : 1635.440323 99 : 1896.727380 100 : 3837.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 510.000000 95 : 510.000000 99 : 510.000000 100 : 510.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1524.009604 95 : 2698.326360 99 : 6637.714286 100 : 15877.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.747874 95 : 163.108145 99 : 168.848010 100 : 556.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 
100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 809.727230 95 : 1689.537380 99 : 2190.830946 100 : 3150.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 180.394806 95 : 577.939547 99 : 812.937335 100 : 9018.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

```
Level  Files    Size       Score  Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
L0     1/0      57.77 MB   0.2    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L1     5/0      230.72 MB  0.9    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2     56/0     2.46 GB    1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3     402/0    24.99 GB   1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4     1120/0   70.17 GB   0.3    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum    1584/0   97.90 GB   0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Int    0/0      0.00 KB    0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0

Uptime(secs): 1053.3 total, 1053.3 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
```

```
** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 151990 Average: 104.0095 StdDev: 30.96 Min: 68 Median: 104.0107 Max: 7599
Percentiles: P50: 104.01 P75: 133.30 P99: 168.63 P99.9: 169.95 P99.99: 239.44
(   51,   76 ]  10805   7.109%   7.109% #
(   76,  110 ]  79129  52.062%  59.171% ##########
(  110,  170 ]  61954  40.762%  99.933% ########
(  170,  250 ]    100   0.066%  99.999%
( 4400, 6600 ]      1   0.001%  99.999%
( 6600, 9900 ]      1   0.001% 100.000%

** Level 1 read latency histogram (micros): Count: 212082 Average: 122.4363 StdDev: 91.89 Min: 64 Median: 100.2695 Max: 8515
Percentiles: P50: 100.27 P75: 133.06 P99: 538.34 P99.9: 701.78 P99.99: 856.83
(   51,   76 ]  27138  12.796%  12.796% ###
(   76,  110 ] 110538  52.120%  64.916% ##########
(  110,  170 ]  55653  26.241%  91.158% #####
(  170,  250 ]   3954   1.864%  93.022%
(  250,  380 ]   5993   2.826%  95.848% #
(  380,  580 ]   8444   3.981%  99.829% #
(  580,  870 ]    357   0.168%  99.998%
( 4400, 6600 ]      1   0.000%  99.998%
( 6600, 9900 ]      4   0.002% 100.000%

** Level 2 read latency histogram (micros): Count: 256143 Average: 192.6781 StdDev: 138.58 Min: 68 Median: 156.1893 Max: 9018
Percentiles: P50: 156.19 P75: 226.74 P99: 579.30 P99.9: 844.48 P99.99: 2028.57
(   51,   76 ]   6956   2.716%   2.716% #
(   76,  110 ]  63997  24.985%  27.701% #####
(  110,  170 ]  74197  28.967%  56.668% ######
(  170,  250 ]  66210  25.849%  82.516% #####
(  250,  380 ]  11776   4.597%  87.114% #
(  380,  580 ]  30552  11.928%  99.042% ##
(  580,  870 ]   2411   0.941%  99.983%
(  870, 1300 ]      6   0.002%  99.985%
( 1300, 1900 ]     12   0.005%  99.990%
( 1900, 2900 ]      3   0.001%  99.991%
( 2900, 4400 ]      4   0.002%  99.993%
( 4400, 6600 ]     13   0.005%  99.998%
( 6600, 9900 ]      6   0.002% 100.000%

** Level 3 read latency histogram (micros): Count: 808125 Average: 264.0039 StdDev: 175.03 Min: 65 Median: 199.4616 Max: 2629
Percentiles: P50: 199.46 P75: 426.45 P99: 812.11 P99.9: 866.70 P99.99: 1866.16
(   51,   76 ]  15905   1.968%   1.968%
(   76,  110 ] 138560  17.146%  19.114% ###
(  110,  170 ] 177537  21.969%  41.083% ####
(  170,  250 ] 195673  24.213%  65.296% #####
(  250,  380 ]  29107   3.602%  68.898% #
(  380,  580 ] 212338  26.275%  95.173% #####
(  580,  870 ]  38636   4.781%  99.954% #
(  870, 1300 ]    124   0.015%  99.970%
( 1300, 1900 ]    174   0.022%  99.991%
( 1900, 2900 ]     73   0.009% 100.000%

** Level 4 read latency histogram (micros): Count: 2210585 Average: 271.2868 StdDev: 183.37 Min: 65 Median: 202.0324 Max: 3042
Percentiles: P50: 202.03 P75: 436.70 P99: 825.78 P99.9: 925.25 P99.99: 1812.38
(   51,   76 ]  46735   2.114%   2.114%
(   76,  110 ] 386207  17.471%  19.585% ###
(  110,  170 ] 461019  20.855%  40.440% ####
(  170,  250 ] 527794  23.876%  64.316% #####
(  250,  380 ]  69519   3.145%  67.461% #
(  380,  580 ] 587836  26.592%  94.052% #####
(  580,  870 ] 129047   5.838%  99.890% #
(  870, 1300 ]   1692   0.077%  99.967%
( 1300, 1900 ]    603   0.027%  99.994%
( 1900, 2900 ]    131   0.006% 100.000%
( 2900, 4400 ]      2   0.000% 100.000%

** DB Stats **
Uptime(secs): 1053.3 total, 1053.3 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
```

52272764 38 8258696058 11347503 177412 451 20773820 38668 0 5213911 11282684

I/O count: 282494

Open range query

Using rocksdb.SuRFFilter No Compression open range query throughput: 876.183 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1032.819441 95 : 1931.852143 99 : 2718.324813 100 : 3470.000000 rocksdb.block.cache.miss COUNT : 3680159 rocksdb.block.cache.hit COUNT : 1310824 rocksdb.block.cache.add COUNT : 3680159 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1091798 rocksdb.block.cache.index.hit COUNT : 210501 rocksdb.block.cache.index.add COUNT : 1091798 rocksdb.block.cache.index.bytes.insert COUNT : 471699830864 rocksdb.block.cache.index.bytes.evict COUNT : 471697844128 rocksdb.block.cache.filter.miss COUNT : 1094755 rocksdb.block.cache.filter.hit COUNT : 1094651 rocksdb.block.cache.filter.add COUNT : 1094755 rocksdb.block.cache.filter.bytes.insert COUNT : 73222953752 rocksdb.block.cache.filter.bytes.evict COUNT : 73117829504 rocksdb.block.cache.data.miss COUNT : 1493606 rocksdb.block.cache.data.hit COUNT : 5672 rocksdb.block.cache.data.add COUNT : 1493606 rocksdb.block.cache.data.bytes.insert COUNT : 6243985488 rocksdb.block.cache.bytes.read COUNT : 154407188504 rocksdb.block.cache.bytes.write COUNT : 551166770104 rocksdb.bloom.filter.useful COUNT : 1664075 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 575 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 
1024000000 rocksdb.number.db.seek COUNT : 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 50000 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 51600000 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1584 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1015.237147 95 : 1641.396626 99 : 1898.918409 100 : 4884.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 513.000000 95 : 513.000000 99 : 513.000000 100 : 513.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1537.380628 95 : 2759.235669 99 : 8671.384615 100 : 17888.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.712739 95 : 163.089212 99 : 168.831605 100 : 1211.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 
100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 1032.819441 95 : 1931.852143 99 : 2718.324813 100 : 3470.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 180.083697 95 : 579.698960 99 : 815.265115 100 : 13044.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

```
Level  Files    Size       Score  Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
L0     1/0      57.77 MB   0.2    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L1     5/0      230.72 MB  0.9    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2     56/0     2.46 GB    1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3     402/0    24.99 GB   1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4     1120/0   70.17 GB   0.3    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum    1584/0   97.90 GB   0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Int    0/0      0.00 KB    0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0

Uptime(secs): 1069.4 total, 1069.4 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
```

```
** File Read Latency Histogram By Level [default] **

** Level 0 read latency histogram (micros): Count: 152627 Average: 102.9138 StdDev: 35.33 Min: 68 Median: 103.5140 Max: 8187
Percentiles: P50: 103.51 P75: 132.64 P99: 168.59 P99.9: 169.94 P99.99: 239.52
(   51,   76 ]  12316   8.069%   8.069% ##
(   76,  110 ]  79084  51.815%  59.885% ##########
(  110,  170 ]  61137  40.056%  99.941% ########
(  170,  250 ]     86   0.056%  99.997%
(  250,  380 ]      2   0.001%  99.999%
( 6600, 9900 ]      2   0.001% 100.000%

** Level 1 read latency histogram (micros): Count: 214799 Average: 120.8600 StdDev: 94.82 Min: 65 Median: 100.1631 Max: 13044
Percentiles: P50: 100.16 P75: 132.30 P99: 532.63 P99.9: 683.11 P99.99: 856.14
(   51,   76 ]  27446  12.778%  12.778% ###
(   76,  110 ] 112503  52.376%  65.153% ##########
(  110,  170 ]  56905  26.492%  91.646% #####
(  170,  250 ]   4052   1.886%  93.532%
(  250,  380 ]   5887   2.741%  96.273% #
(  380,  580 ]   7676   3.574%  99.846% #
(  580,  870 ]    324   0.151%  99.997%
(  870, 1300 ]      1   0.000%  99.998%
( 6600, 9900 ]      2   0.001%  99.999%
( 9900, 14000 ]     3   0.001% 100.000%

** Level 2 read latency histogram (micros): Count: 269227 Average: 196.3445 StdDev: 151.42 Min: 69 Median: 155.8218 Max: 11928
Percentiles: P50: 155.82 P75: 230.84 P99: 579.59 P99.9: 845.06 P99.99: 2686.37
(   51,   76 ]   8057   2.993%   2.993% #
(   76,  110 ]  67758  25.168%  28.160% #####
(  110,  170 ]  76992  28.597%  56.758% ######
(  170,  250 ]  64576  23.986%  80.743% #####
(  250,  380 ]  13981   5.193%  85.936% #
(  380,  580 ]  35243  13.090%  99.027% ###
(  580,  870 ]   2572   0.955%  99.982%
(  870, 1300 ]      9   0.003%  99.986%
( 1300, 1900 ]      5   0.002%  99.987%
( 1900, 2900 ]      9   0.003%  99.991%
( 2900, 4400 ]      4   0.001%  99.992%
( 4400, 6600 ]      3   0.001%  99.993%
( 6600, 9900 ]     13   0.005%  99.998%
( 9900, 14000 ]     5   0.002% 100.000%

** Level 3 read latency histogram (micros): Count: 820224 Average: 265.5928 StdDev: 177.01 Min: 65 Median: 199.6544 Max: 2993
Percentiles: P50: 199.65 P75: 430.19 P99: 814.78 P99.9: 866.80 P99.99: 2008.45
(   51,   76 ]  16868   2.057%   2.057%
(   76,  110 ] 142727  17.401%  19.457% ###
(  110,  170 ] 178180  21.723%  41.181% ####
(  170,  250 ] 195147  23.792%  64.973% #####
(  250,  380 ]  27464   3.348%  68.321% #
(  380,  580 ] 218320  26.617%  94.938% #####
(  580,  870 ]  41152   5.017%  99.955% #
(  870, 1300 ]    120   0.015%  99.970%
( 1300, 1900 ]    154   0.019%  99.989%
( 1900, 2900 ]     92   0.011% 100.000%
( 2900, 4400 ]      1   0.000% 100.000%

** Level 4 read latency histogram (micros): Count: 2224865 Average: 272.9724 StdDev: 185.19 Min: 66 Median: 202.2712 Max: 2906
Percentiles: P50: 202.27 P75: 439.74 P99: 827.71 P99.9: 922.67 P99.99: 1851.61
(   51,   76 ]  48219   2.167%   2.167%
(   76,  110 ] 391629  17.602%  19.770% ####
(  110,  170 ] 461987  20.765%  40.534% ####
(  170,  250 ] 522069  23.465%  64.000% #####
(  250,  380 ]  66746   3.000%  67.000% #
(  380,  580 ] 595889  26.783%  93.783% #####
(  580,  870 ] 135894   6.108%  99.891% #
(  870, 1300 ]   1691   0.076%  99.967%
( 1300, 1900 ]    564   0.025%  99.992%
( 1900, 2900 ]    175   0.008% 100.000%
( 2900, 4400 ]      2   0.000% 100.000%

** DB Stats **
Uptime(secs): 1069.4 total, 1069.4 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
```

59193759 43 9340570156 12793357 181238 468 20806220 38791 0 5888347 12715484

I/O count: 423393

SuRF real

closed

Using rocksdb.SuRFFilter No Compression closed range query throughput: 1347.56 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 736.017228 95 : 1517.751825 99 : 1868.116788 100 : 3616.000000 rocksdb.block.cache.miss COUNT : 3607680 rocksdb.block.cache.hit COUNT : 1319487 rocksdb.block.cache.add COUNT : 3607680 rocksdb.block.cache.add.failures COUNT : 0 rocksdb.block.cache.index.miss COUNT : 1058209 rocksdb.block.cache.index.hit COUNT : 215574 rocksdb.block.cache.index.add COUNT : 1058209 rocksdb.block.cache.index.bytes.insert COUNT : 457814784664 rocksdb.block.cache.index.bytes.evict COUNT : 457812240312 rocksdb.block.cache.filter.miss COUNT : 1092631 rocksdb.block.cache.filter.hit COUNT : 1096775 rocksdb.block.cache.filter.add COUNT : 1092631 rocksdb.block.cache.filter.bytes.insert COUNT : 73220759744 rocksdb.block.cache.filter.bytes.evict COUNT : 73115635496 rocksdb.block.cache.data.miss COUNT : 1456840 rocksdb.block.cache.data.hit COUNT : 7138 rocksdb.block.cache.data.add COUNT : 1456840 rocksdb.block.cache.data.bytes.insert COUNT : 6090044496 rocksdb.block.cache.bytes.read COUNT : 156642543528 rocksdb.block.cache.bytes.write COUNT : 537125588904 rocksdb.bloom.filter.useful COUNT : 1664075 rocksdb.persistent.cache.hit COUNT : 0 rocksdb.persistent.cache.miss COUNT : 0 rocksdb.sim.block.cache.hit COUNT : 0 rocksdb.sim.block.cache.miss COUNT : 0 rocksdb.memtable.hit COUNT : 0 rocksdb.memtable.miss COUNT : 1000000 rocksdb.l0.hit COUNT : 575 rocksdb.l1.hit COUNT : 2269 rocksdb.l2andup.hit COUNT : 997156 rocksdb.compaction.key.drop.new COUNT : 0 rocksdb.compaction.key.drop.obsolete COUNT : 0 rocksdb.compaction.key.drop.range_del COUNT : 0 rocksdb.compaction.key.drop.user COUNT : 0 rocksdb.compaction.range_del.drop.obsolete COUNT : 0 rocksdb.compaction.optimized.del.drop.obsolete COUNT : 0 rocksdb.number.keys.written COUNT : 0 rocksdb.number.keys.read COUNT : 1000000 rocksdb.number.keys.updated COUNT : 0 rocksdb.bytes.written COUNT : 0 rocksdb.bytes.read COUNT : 
1024000000 rocksdb.number.db.seek COUNT : 50000 rocksdb.number.db.next COUNT : 0 rocksdb.number.db.prev COUNT : 0 rocksdb.number.db.seek.found COUNT : 24956 rocksdb.number.db.next.found COUNT : 0 rocksdb.number.db.prev.found COUNT : 0 rocksdb.db.iter.bytes.read COUNT : 25754592 rocksdb.no.file.closes COUNT : 0 rocksdb.no.file.opens COUNT : 1584 rocksdb.no.file.errors COUNT : 0 rocksdb.l0.slowdown.micros COUNT : 0 rocksdb.memtable.compaction.micros COUNT : 0 rocksdb.l0.num.files.stall.micros COUNT : 0 rocksdb.stall.micros COUNT : 0 rocksdb.db.mutex.wait.micros COUNT : 0 rocksdb.rate.limit.delay.millis COUNT : 0 rocksdb.num.iterators COUNT : 0 rocksdb.number.multiget.get COUNT : 0 rocksdb.number.multiget.keys.read COUNT : 0 rocksdb.number.multiget.bytes.read COUNT : 0 rocksdb.number.deletes.filtered COUNT : 0 rocksdb.number.merge.failures COUNT : 0 rocksdb.bloom.filter.prefix.checked COUNT : 0 rocksdb.bloom.filter.prefix.useful COUNT : 0 rocksdb.number.reseeks.iteration COUNT : 0 rocksdb.getupdatessince.calls COUNT : 0 rocksdb.block.cachecompressed.miss COUNT : 0 rocksdb.block.cachecompressed.hit COUNT : 0 rocksdb.block.cachecompressed.add COUNT : 0 rocksdb.block.cachecompressed.add.failures COUNT : 0 rocksdb.wal.synced COUNT : 0 rocksdb.wal.bytes COUNT : 0 rocksdb.write.self COUNT : 0 rocksdb.write.other COUNT : 0 rocksdb.write.timeout COUNT : 0 rocksdb.write.wal COUNT : 0 rocksdb.compact.read.bytes COUNT : 0 rocksdb.compact.write.bytes COUNT : 0 rocksdb.flush.write.bytes COUNT : 0 rocksdb.number.direct.load.table.properties COUNT : 0 rocksdb.number.superversion_acquires COUNT : 1 rocksdb.number.superversion_releases COUNT : 0 rocksdb.number.superversion_cleanups COUNT : 0 rocksdb.number.block.compressed COUNT : 0 rocksdb.number.block.decompressed COUNT : 0 rocksdb.number.block.not_compressed COUNT : 0 rocksdb.merge.operation.time.nanos COUNT : 0 rocksdb.filter.operation.time.nanos COUNT : 0 rocksdb.row.cache.hit COUNT : 0 rocksdb.row.cache.miss COUNT : 0 
rocksdb.read.amp.estimate.useful.bytes COUNT : 0 rocksdb.read.amp.total.read.bytes COUNT : 0 rocksdb.number.rate_limiter.drains COUNT : 0 rocksdb.db.get.micros statistics Percentiles :=> 50 : 1000.630332 95 : 1610.805218 99 : 1889.829563 100 : 4979.000000 rocksdb.db.write.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.subcompaction.setup.times.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.table.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compaction.outfile.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.wal.file.sync.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.manifest.file.sync.micros statistics Percentiles :=> 50 : 405.000000 95 : 405.000000 99 : 405.000000 100 : 405.000000 rocksdb.table.open.io.micros statistics Percentiles :=> 50 : 1542.569270 95 : 2744.366197 99 : 8163.692308 100 : 16656.000000 rocksdb.db.multiget.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.compaction.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.block.get.micros statistics Percentiles :=> 50 : 104.368382 95 : 162.994781 99 : 168.815012 100 : 893.000000 rocksdb.write.raw.block.micros statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.l0.slowdown.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.memtable.compaction.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.num.files.stall.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 
100 : 0.000000 rocksdb.hard.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.soft.rate.limit.delay.count statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.numfiles.in.singlecompaction statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.db.seek.micros statistics Percentiles :=> 50 : 736.017228 95 : 1517.751825 99 : 1868.116788 100 : 3616.000000 rocksdb.db.write.stall statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.sst.read.micros statistics Percentiles :=> 50 : 179.186073 95 : 576.032738 99 : 809.918133 100 : 11805.000000 rocksdb.num.subcompactions.scheduled statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.read statistics Percentiles :=> 50 : 1024.000000 95 : 1024.000000 99 : 1024.000000 100 : 1024.000000 rocksdb.bytes.per.write statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.per.multiget statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.compressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.bytes.decompressed statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.compression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.decompression.times.nanos statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000 rocksdb.read.num.merge_operands statistics Percentiles :=> 50 : 0.000000 95 : 0.000000 99 : 0.000000 100 : 0.000000

** Compaction Stats [default] **

```
Level  Files    Size       Score  Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
L0     1/0      57.77 MB   0.2    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L1     5/0      230.72 MB  0.9    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L2     56/0     2.46 GB    1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L3     402/0    24.99 GB   1.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
L4     1120/0   70.17 GB   0.3    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum    1584/0   97.90 GB   0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Int    0/0      0.00 KB    0.0    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0

Uptime(secs): 1034.4 total, 1034.4 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
```

** File Read Latency Histogram By Level [default] ** ** Level 0 read latency histogram (micros): Count: 151175 Average: 102.8127 StdDev: 32.05 Min: 69 Median: 103.5578 Max: 6988

Percentiles: P50: 103.56 P75: 132.78 P99: 168.57 P99.9: 169.92 P99.99: 235.65

( 51, 76 ] 12695 8.398% 8.398% ## ( 76, 110 ] 77595 51.328% 59.725% ########## ( 110, 170 ] 60819 40.231% 99.956% ######## ( 170, 250 ] 62 0.041% 99.997% ( 250, 380 ] 2 0.001% 99.999% ( 4400, 6600 ] 1 0.001% 99.999% ( 6600, 9900 ] 1 0.001% 100.000%

** Level 1 read latency histogram (micros): Count: 207637 Average: 117.9722 StdDev: 88.35 Min: 65 Median: 99.6618 Max: 11805

Percentiles: P50: 99.66 P75: 130.46 P99: 524.50 P99.9: 669.74 P99.99: 855.34

( 51, 76 ] 27965 13.468% 13.468% ### ( 76, 110 ] 108995 52.493% 65.961% ########## ( 110, 170 ] 55038 26.507% 92.468% ##### ( 170, 250 ] 3944 1.899% 94.368% ( 250, 380 ] 4988 2.402% 96.770% ( 380, 580 ] 6409 3.087% 99.856% # ( 580, 870 ] 292 0.141% 99.997% ( 870, 1300 ] 1 0.000% 99.998% ( 6600, 9900 ] 4 0.002% 100.000% ( 9900, 14000 ] 1 0.000% 100.000%

** Level 2 read latency histogram (micros): Count: 253383 Average: 191.7469 StdDev: 143.26 Min: 69 Median: 156.0863 Max: 10349

Percentiles: P50: 156.09 P75: 226.86 P99: 578.20 P99.9: 842.77 P99.99: 2778.34

( 51, 76 ] 7498 2.959% 2.959% # ( 76, 110 ] 63449 25.041% 28.000% ##### ( 110, 170 ] 72574 28.642% 56.642% ###### ( 170, 250 ] 65452 25.831% 82.473% ##### ( 250, 380 ] 12211 4.819% 87.292% # ( 380, 580 ] 29935 11.814% 99.106% ## ( 580, 870 ] 2219 0.876% 99.982% ( 870, 1300 ] 3 0.001% 99.983% ( 1300, 1900 ] 7 0.003% 99.986% ( 1900, 2900 ] 11 0.004% 99.991% ( 2900, 4400 ] 2 0.001% 99.991% ( 4400, 6600 ] 9 0.004% 99.995% ( 6600, 9900 ] 10 0.004% 99.999% ( 9900, 14000 ] 3 0.001% 100.000%

** Level 3 read latency histogram (micros): Count: 804681 Average: 262.7535 StdDev: 174.15 Min: 65 Median: 198.7372 Max: 2562

Percentiles: P50: 198.74 P75: 426.35 P99: 810.00 P99.9: 866.62 P99.99: 1834.33

( 51, 76 ] 17098 2.125% 2.125% ( 76, 110 ] 138257 17.182% 19.306% ### ( 110, 170 ] 177894 22.107% 41.414% #### ( 170, 250 ] 192340 23.903% 65.316% ##### ( 250, 380 ] 28537 3.546% 68.863% # ( 380, 580 ] 213095 26.482% 95.345% ##### ( 580, 870 ] 37087 4.609% 99.954% # ( 870, 1300 ] 126 0.016% 99.969% ( 1300, 1900 ] 187 0.023% 99.993% ( 1900, 2900 ] 60 0.007% 100.000%

** Level 4 read latency histogram (micros): Count: 2192387 Average: 270.2196 StdDev: 182.95 Min: 65 Median: 201.3206 Max: 3160

Percentiles: P50: 201.32 P75: 436.47 P99: 823.14 P99.9: 899.56 P99.99: 1862.32

( 51, 76 ] 50080 2.284% 2.284% ( 76, 110 ] 386398 17.625% 19.909% #### ( 110, 170 ] 459492 20.959% 40.867% #### ( 170, 250 ] 511416 23.327% 64.194% ##### ( 250, 380 ] 69938 3.190% 67.384% # ( 380, 580 ] 591302 26.971% 94.355% ##### ( 580, 870 ] 121464 5.540% 99.895% # ( 870, 1300 ] 1522 0.069% 99.965% ( 1300, 1900 ] 593 0.027% 99.992% ( 1900, 2900 ] 178 0.008% 100.000% ( 2900, 4400 ] 5 0.000% 100.000%

** DB Stats ** Uptime(secs): 1034.4 total, 1034.4 interval Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s Interval stall: 00:00:0.000 H:M:S, 0.0 percent
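The per-level latency histograms above can be mined programmatically. A minimal sketch (the function name is my own, and it assumes the `Percentiles:` line format shown in the dump) that pulls the percentile values out of one line:

```python
import re

def parse_percentiles(line):
    """Extract the PXX values (in micros) from a RocksDB read-latency
    histogram 'Percentiles:' line, e.g. {'P50': 201.32, ...}."""
    return {f"P{p}": float(v)
            for p, v in re.findall(r"P([\d.]+): ([\d.]+)", line)}

# Level 4 percentile line from the log above
line = "Percentiles: P50: 201.32 P75: 436.47 P99: 823.14 P99.9: 899.56 P99.99: 1862.32"
pcts = parse_percentiles(line)
print(pcts["P99"])  # -> 823.14
```

This makes it easy to compare tail latency across levels, e.g. P99 grows from ~169 µs at L0 to ~823 µs at L4.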

72888663 53 11480322480 15632249 188427 502 20870364 39026 0 7224062 15531545

I/O count: 268926

| Filter | Workload | Build time | Throughput | Positives | True positives | False positives | True negatives | FPR | Memory (bytes) |
|---|---|---|---|---|---|---|---|---|---|
| SuRF | zipfian | 24.2963 | 1.98383 | 7105118 | 7002747 | 102371 | 2894883 | 0.0341549 | 66377919 |
| SuRFHash | zipfian | 24.679 | 2.01478 | 7105118 | 7002747 | 102371 | 2894883 | 0.0341549 | 91377919 |
| SuRFReal | zipfian | 24.3743 | 1.92494 | 7011945 | 7002747 | 9198 | 2988056 | 0.00306881 | 91377919 |
| SuRF | uniform | 23.896 | 1.86278 | 6815614 | 6702050 | 113564 | 3184387 | 0.0344347 | 66377919 |
| SuRFHash | uniform | 24.3468 | 1.87323 | 6815614 | 6702050 | 113564 | 3184387 | 0.0344347 | 91377919 |
| SuRFReal | uniform | 24.3376 | 1.70486 | 6713106 | 6702050 | 11056 | 3286895 | 0.00335238 | 91377919 |
| SuRF | latest | 24.1038 | 2.01029 | 6799946 | 6618394 | 181552 | 3200055 | 0.0536881 | 66377919 |
| SuRFHash | latest | 24.6203 | 1.99807 | 6799946 | 6618394 | 181552 | 3200055 | 0.0536881 | 72627927 |
| SuRFReal | latest | 24.5668 | 1.95996 | 6753584 | 6618394 | 135190 | 3246417 | 0.039978 | 72627927 |

(`count` was 0 in every run.)
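The FPR column follows directly from the raw counts: FPR = FP / (FP + TN), i.e. the fraction of truly-absent keys the filter reports as present. Checking the SuRF/zipfian row:

```python
def fpr(false_positives, true_negatives):
    """False positive rate: share of truly-absent keys that the
    filter nevertheless reports as (possibly) present."""
    return false_positives / (false_positives + true_negatives)

# SuRF / zipfian row from the results above
print(round(fpr(102371, 2894883), 7))  # -> 0.0341549
```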


```sh
echo 'Bloom Filter, zipfian'
../build/bench/workload Bloom 1 mixed 50 0 randint point zipfian
echo 'SuRFHash, zipfian'
../build/bench/workload SuRFHash 4 mixed 50 0 randint point zipfian
echo 'SuRFReal, zipfian'
../build/bench/workload SuRFReal 4 mixed 50 0 randint point zipfian
echo 'SuRFMixed, zipfian'
../build/bench/workload SuRFMixed 2 mixed 50 0 randint mix zipfian

echo 'Bloom Filter, uniform'
../build/bench/workload Bloom 1 mixed 50 0 randint point uniform
echo 'SuRFHash, uniform'
../build/bench/workload SuRFHash 4 mixed 50 0 randint point uniform
echo 'SuRFReal, uniform'
../build/bench/workload SuRFReal 4 mixed 50 0 randint point uniform
echo 'SuRFMixed, uniform'
../build/bench/workload SuRFMixed 2 mixed 50 0 randint mix uniform

echo 'Bloom Filter, latest'
../build/bench/workload Bloom 1 mixed 50 0 randint point latest
echo 'SuRFHash, latest'
../build/bench/workload SuRFHash 4 mixed 50 0 randint point latest
echo 'SuRFReal, latest'
../build/bench/workload SuRFReal 4 mixed 50 0 randint point latest
echo 'SuRFMixed, latest'
../build/bench/workload SuRFMixed 2 mixed 50 0 randint mix latest
```

Range queries are inefficient in LSM-tree based KV stores because of the leveled structure of the LSM-tree. Range filters have become popular because they effectively reduce IO costs and improve range-query performance. However, in modern storage environments, existing range filters have become a performance bottleneck on storage devices with high bandwidth and low latency; moreover, data migrations and mass deletes are common and severely degrade range-query performance (also referred to as the large-scale delete problem). In this paper, AegisKV is proposed to improve the range-query performance of LSM-tree based KV stores. A learned range filter is first designed to speed up file filtering and reduce extra IOs. An efficient partition strategy is then proposed to solve the large-scale delete problem. In addition, an asynchronous query design is adopted, and SPDK is supported for high concurrency and low latency. AegisKV is implemented on RocksDB. The evaluation results show that, compared with RocksDB, AegisKV improves range-query performance by 4× to 7× without loss of write performance, and provides stable query performance even when a large number of deletes or migrations occur.
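The "no false negatives" guarantee of the learned filter (treat membership as a classification problem, then back the classifier with an exact structure for the keys it misclassifies) can be sketched as follows. This is an illustrative toy, not AegisKV's actual implementation: the class name, the model, and the 0.5 threshold are all assumptions, and a real design would use a trained model plus a compact backup filter (e.g. a Bloom filter) instead of a Python set.

```python
class LearnedFilterSketch:
    """Toy existence filter: a classifier decides membership, and an
    exact backup set catches the keys the classifier wrongly rejects,
    so negative answers are always correct (no false negatives)."""

    def __init__(self, keys, model, threshold=0.5):
        self.model = model          # model(key) -> score in [0, 1]
        self.threshold = threshold
        # Keys that exist but score below the threshold would be
        # false negatives of the model alone; remember them exactly.
        self.backup = {k for k in keys if model(k) < threshold}

    def may_contain(self, key):
        # May return a false positive, never a false negative.
        return self.model(key) >= self.threshold or key in self.backup


# Toy model that believes "existing keys are small"; key 150 exists
# but scores low, so it lands in the backup set.
keys = [10, 150]
model = lambda k: 1.0 if k < 100 else 0.0
f = LearnedFilterSketch(keys, model)
print(f.may_contain(150), f.may_contain(200))  # -> True False
```

The usual learned-filter tradeoff applies: a better model shrinks the backup structure, while false positives (e.g. `may_contain(50)` above) remain allowed, just as with a Bloom filter.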

Storage meets AI


Storage Technologies Meets Artificial Intelligence: A Survey