KVM disk performance: raw vs qcow2 format

Some time ago I compared disk driver performance in KVM. Today I compared different storage formats – raw and qcow2. Let’s have a look:

Test procedure: Create an empty 10 GB image, attach it to a VM using the VirtIO driver, boot F20 Alpha Live x86_64, and measure the installation time. Then repeat the installation, this time reusing the existing image instead of creating a new one. Do this for both formats.
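
For reference, the test images can be created with qemu-img along these lines (file names are just examples):

$ # 10 GB raw image (sparse unless pre-allocated)
$ qemu-img create -f raw test.img 10G
$ # 10 GB qcow2 image (grows on demand)
$ qemu-img create -f qcow2 test.qcow2 10G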

Test results (installation time, minutes:seconds):

raw 1st pass          2:36
raw 2nd pass          2:38
qcow2 1st pass        2:36
qcow2 2nd pass        2:44

As you can see, the results are very much the same. It seems it doesn’t matter much which format you use.

But the qcow2 format has some nice additional features, like copy-on-write cloning. If I need to test something very quickly in my existing VM and then revert the changes, this is the easiest way:

$ cd /var/lib/libvirt/images
$ mv f19.qcow2 f19.qcow2_orig
$ qemu-img create -f qcow2 -b f19.qcow2_orig f19.qcow2
Formatting 'f19.qcow2', fmt=qcow2 size=10737418240 backing_file='f19.qcow2_orig' encryption=off cluster_size=65536 lazy_refcounts=off
$ # Run the VM now and do your tasks
$ # Throw away the overlay with your changes and restore the original image
$ mv f19.qcow2_orig f19.qcow2
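
And if you decide you want to keep the changes instead of throwing them away, the overlay can be merged back into the original image (with the VM shut down):

$ # Merge the overlay's changes into the backing file
$ qemu-img commit f19.qcow2
$ # The overlay is no longer needed, restore the original name
$ mv f19.qcow2_orig f19.qcow2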

Enjoy.

20 thoughts on “KVM disk performance: raw vs qcow2 format”

  1. Very interesting idea… Could you also test with hdparm -t or bonnie++ after installation? Those should give more depth to the numbers you came up with.

    1. I perform mostly clean installations, so the numbers above are the most important for me. You’re right that bonnie++ output would be interesting as well. I’ll try to include that in some future test.
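
      For the record, the suggested in-guest benchmarks would look something like this (assuming the VirtIO disk shows up as /dev/vda and bonnie++ runs as a regular user):

      $ # quick sequential read test of the virtual disk
      $ sudo hdparm -t /dev/vda
      $ # filesystem benchmark in a writable directory
      $ bonnie++ -d /tmp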

  2. You don’t mention how you configured the disk cache mode on the virtual machine. The default cache mode allows for host I/O caching. Also, you don’t mention whether the raw image was sparse or pre-allocated. A pre-allocated raw image is expected to clearly beat both a sparse raw image and a qcow2 image, because it avoids the I/O penalty inherent in allocating blocks for grow-on-demand formats.

    1. In virt-manager the cache mode is set to “default” (without actually indicating what the default is). I’d like to compare different cache modes as well, when time permits.
      The raw images were created with “qemu-img create foo.img 10G”. I guess that means sparse, because the initial file size is very small.
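
      For what it’s worth, whether an image is sparse and which cache mode libvirt uses can be checked with standard tools (domain name f19 is just an example):

      $ # a large gap between apparent size and actual usage means a sparse file
      $ du -h --apparent-size foo.img
      $ du -h foo.img
      $ # create a fully pre-allocated raw image instead
      $ fallocate -l 10G foo-prealloc.img
      $ # show the cache mode in the domain XML (nothing appears if it is the default)
      $ virsh dumpxml f19 | grep cache=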

  3. qcow2 also supports snapshots, which is a great feature:

    virsh snapshot-create-as f19 my-snapshot
    #do your dangerous magic
    virsh snapshot-revert f19 my-snapshot
    virsh snapshot-delete f19 my-snapshot
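
    A quick way to see which snapshots exist for a domain:

    virsh snapshot-list f19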

  4. And then of course there’s using LVM as a backend, which also allows easy cloning/snapshotting and should offer (in theory) better performance, because it avoids another complicated level of abstraction (the host filesystem). Perhaps the next iteration could compare that as well. 🙂

    1. Seconded — I’m much more interested in how qcow2 compares to LVM. Raw files aren’t even on my radar for VM storage backends.

    2. LVM is not that interesting to me, because it uses much more space, the same as full disk allocation. That doesn’t work well on my SSD. Also, if I create a snapshot, I have to edit the machine XML (a rough LVM sketch follows at the end of this thread). I like shuffling plain image files more.

        1. Ah, interesting, thanks. But I still see no benefit over file images 🙂 File images can be cached in memory and therefore they are faster to work with.

        2. I thought that kernel caching works only with filesystems, not raw devices. In my experience a repeated VM boot is faster when using a file image than when using an LVM device. I attributed this to the kernel caching the image file in memory. Maybe there is a different reason, then.
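
    For reference, the LVM workflow being discussed looks roughly like this (volume group name vg0 is just an example):

    # create a 10 GB logical volume as the VM disk
    lvcreate -L 10G -n f19 vg0
    # take a snapshot before experimenting (reserve space for changed blocks)
    lvcreate -s -L 2G -n f19-snap /dev/vg0/f19
    # roll back by merging the snapshot into the origin (origin must not be in use)
    lvconvert --merge /dev/vg0/f19-snap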

  5. Whoa!!!! Comparing performance of file systems when using an SSD isn’t very useful. Heck – use sparse and performance will still be FANTASTIC.

    What about spinning HDDs?

    Seriously – leaving out that SSDs were used is a MAJOR oversight.

    1. I’m so used to SSD that I’ve completely forgotten something else exists 🙂 Yes, I should have mentioned that in the article. Thanks for pointing it out.

  6. I did some testing with different allocation styles on spinning disks when Ubuntu 14.04 was first released. These were desktop installs, which aren’t very interesting but impact more users. Also, this was VirtualBox on a Windows host (not that I use it myself, but … I was there and it was convenient to test). The results are here:

    Installation times were crazy different.
    * QCOW vHDD – 50 min
    * VDI Fully Allocated vHDD – 12 min
    * VDI Sparse – no 3D Accel – 14 min

    Login:
    * QCOW vHDD – 20 sec
    * VDI Fully Allocated vHDD – 12 sec
    * Prealloc + 3D accel – 10 sec
    * VDI Sparse + 3D Accel – 12 sec
    * VDI Sparse – no 3D Accel – 13 sec
    * VDI Sparse w/Guest Adds – no 3D Accel – 11 sec
    * VDI Sparse w/Guest Adds + 3D Accel – 9 sec

    Launch firefox:
    * QCOW vHDD – 23 sec
    * VDI Fully Allocated vHDD – 8 sec
    * Prealloc + 3D accel – 6 sec
    * VDI Sparse – no 3D accel – 10 sec
    * VDI Sparse w/Guest Adds – no 3D Accel – 10 sec
    * VDI Sparse w/Guest Adds + 3D Accel – 7 sec

    Hope these help someone.
    On KVM, I’ve been using raw files on host-OS LVM allocations for so long that anything else just isn’t a consideration. Too many hassles with performance initially to look further. Guess things have changed?
