SSD noop / anticipatory / deadline / cfq IO scheduler performance test

Introduction:
The previous post tested SSD IOPS performance with aligned versus unaligned partitions.
This post looks at how the different IO scheduling algorithms affect IOPS on an aligned SSD partition, using pg_test_fsync.
The test results are as follows.
1. noop scheduler
[root@db-xx ~]# cat /sys/block/sde/queue/scheduler 
[noop] anticipatory deadline cfq 
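
noop is already the active scheduler here (the bracketed entry in the sysfs file marks the one in use), so nothing needs to be changed before the first run. For reference, a scheduler is selected at runtime by writing its name into the same sysfs file, exactly as done before each of the later runs; a minimal sketch, assuming the SSD under test is /dev/sde and keeping in mind that the setting does not survive a reboot:
[root@db-xx ~]# echo noop > /sys/block/sde/queue/scheduler    # select noop explicitly (as root)
[root@db-xx ~]# cat /sys/block/sde/queue/scheduler            # verify: the active scheduler is bracketed
[noop] anticipatory deadline cfq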

[postgres@db-xx pgdata]$ pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 16kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                       10574.965 ops/sec      95 usecs/op
        fsync                            9959.066 ops/sec     100 usecs/op
        fsync_writethrough                            n/a
        open_sync                       13894.208 ops/sec      72 usecs/op

Compare file sync methods using two 16kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                        7272.895 ops/sec     137 usecs/op
        fsync                            6892.519 ops/sec     145 usecs/op
        fsync_writethrough                            n/a
        open_sync                        6968.750 ops/sec     143 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write       13905.409 ops/sec      72 usecs/op
         2 *  8kB open_sync writes       8994.942 ops/sec     111 usecs/op
         4 *  4kB open_sync writes       5241.717 ops/sec     191 usecs/op
         8 *  2kB open_sync writes       2500.587 ops/sec     400 usecs/op
        16 *  1kB open_sync writes       1300.364 ops/sec     769 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close              9292.012 ops/sec     108 usecs/op
        write, close, fsync              9201.620 ops/sec     109 usecs/op

Non-Sync'ed 16kB writes:
        write                           124100.336 ops/sec       8 usecs/op



2. anticipatory scheduler
[root@db-xx ~]# echo anticipatory > /sys/block/sde/queue/scheduler 
[root@db-xx ~]# cat /sys/block/sde/queue/scheduler 
noop [anticipatory] deadline cfq 

[postgres@db-xx pgdata]$ pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 16kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                       10980.717 ops/sec      91 usecs/op
        fsync                            9989.117 ops/sec     100 usecs/op
        fsync_writethrough                            n/a
        open_sync                       13447.041 ops/sec      74 usecs/op

Compare file sync methods using two 16kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                        7001.547 ops/sec     143 usecs/op
        fsync                            6757.166 ops/sec     148 usecs/op
        fsync_writethrough                            n/a
        open_sync                        6852.453 ops/sec     146 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write       13445.661 ops/sec      74 usecs/op
         2 *  8kB open_sync writes       8902.478 ops/sec     112 usecs/op
         4 *  4kB open_sync writes       5137.514 ops/sec     195 usecs/op
         8 *  2kB open_sync writes       2473.485 ops/sec     404 usecs/op
        16 *  1kB open_sync writes       1248.299 ops/sec     801 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close              8992.447 ops/sec     111 usecs/op
        write, close, fsync              8999.892 ops/sec     111 usecs/op

Non-Sync'ed 16kB writes:
        write                           131947.640 ops/sec       8 usecs/op


3. deadline scheduler
[root@db-xx ~]# echo deadline > /sys/block/sde/queue/scheduler 
[root@db-xx ~]# cat /sys/block/sde/queue/scheduler 
noop anticipatory [deadline] cfq 

[postgres@db-xx pgdata]$ pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 16kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                       10475.719 ops/sec      95 usecs/op
        fsync                            9861.660 ops/sec     101 usecs/op
        fsync_writethrough                            n/a
        open_sync                       13804.860 ops/sec      72 usecs/op

Compare file sync methods using two 16kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                        6998.343 ops/sec     143 usecs/op
        fsync                            6704.945 ops/sec     149 usecs/op
        fsync_writethrough                            n/a
        open_sync                        6906.296 ops/sec     145 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write       13755.100 ops/sec      73 usecs/op
         2 *  8kB open_sync writes       8987.003 ops/sec     111 usecs/op
         4 *  4kB open_sync writes       5242.396 ops/sec     191 usecs/op
         8 *  2kB open_sync writes       2491.866 ops/sec     401 usecs/op
        16 *  1kB open_sync writes       1249.717 ops/sec     800 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close              8981.361 ops/sec     111 usecs/op
        write, close, fsync              9001.074 ops/sec     111 usecs/op

Non-Sync'ed 16kB writes:
        write                           121845.410 ops/sec       8 usecs/op


4. cfq scheduler
[root@db-xx ~]# echo cfq > /sys/block/sde/queue/scheduler 
[root@db-xx ~]# cat /sys/block/sde/queue/scheduler 
noop anticipatory deadline [cfq] 

[postgres@db-xx pgdata]$ pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 16kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                       10981.601 ops/sec      91 usecs/op
        fsync                            9312.220 ops/sec     107 usecs/op
        fsync_writethrough                            n/a
        open_sync                       13029.732 ops/sec      77 usecs/op

Compare file sync methods using two 16kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                                 n/a
        fdatasync                        6997.665 ops/sec     143 usecs/op
        fsync                            6082.214 ops/sec     164 usecs/op
        fsync_writethrough                            n/a
        open_sync                        6519.312 ops/sec     153 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write       12990.761 ops/sec      77 usecs/op
         2 *  8kB open_sync writes       8603.877 ops/sec     116 usecs/op
         4 *  4kB open_sync writes       4986.898 ops/sec     201 usecs/op
         8 *  2kB open_sync writes       2369.063 ops/sec     422 usecs/op
        16 *  1kB open_sync writes       1220.165 ops/sec     820 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close              8919.973 ops/sec     112 usecs/op
        write, close, fsync              8857.715 ops/sec     113 usecs/op

Non-Sync'ed 16kB writes:
        write                           124013.883 ops/sec       8 usecs/op
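
The four runs above can also be scripted so that each scheduler is set, verified, and benchmarked in turn. The following is a minimal sketch, not part of the original test; the device name, PostgreSQL data directory path, and log location are placeholders to adjust for the system under test:
#!/bin/bash
# Illustrative sketch: cycle through the four IO schedulers on one device and
# run pg_test_fsync under each, capturing the output per scheduler. Run as root.
DEV=sde                        # block device of the SSD under test
PGDATA_DIR=/path/to/pgdata     # PostgreSQL data directory (placeholder path)
for sched in noop anticipatory deadline cfq; do
    echo "$sched" > /sys/block/$DEV/queue/scheduler      # switch scheduler at runtime
    cat /sys/block/$DEV/queue/scheduler                  # active scheduler is shown in brackets
    su - postgres -c "cd $PGDATA_DIR && pg_test_fsync" \
        > /tmp/pg_test_fsync.$sched.log 2>&1             # save results for later comparison
done
The per-scheduler logs can then be compared side by side, as was done manually above.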
