Some time ago I compared disk driver performance in KVM. Today I compared two different storage formats – raw and qcow2. Let’s have a look:
Test procedure: Create an empty 10 GB image, attach it to a VM using the VirtIO driver, boot the F20 Alpha Live x86_64 image, and measure the installation time. Repeat the installation, this time reusing the existing image (instead of creating a new one). Do this for both formats.
raw, 1st pass: 2:36
raw, 2nd pass: 2:38
qcow2, 1st pass: 2:36
qcow2, 2nd pass: 2:44
As you can see, the results are very much the same. It seems it doesn’t matter much which format you use.
But the qcow2 format has some nice additional features, like copy-on-write cloning. If I need to test something very quickly in my existing VM and then revert the changes, this is the easiest way:
$ cd /var/lib/libvirt/images
$ mv f19.qcow2 f19.qcow2_orig
$ qemu-img create -f qcow2 -b f19.qcow2_orig f19.qcow2
Formatting 'f19.qcow2', fmt=qcow2 size=10737418240 backing_file='f19.qcow2_orig' encryption=off cluster_size=65536 lazy_refcounts=off
$ # Run the VM now and do your tasks
$ mv f19.qcow2_orig f19.qcow2
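Conversely, if you decide you actually want to keep the changes, qemu-img can merge the overlay back into the backing file. A sketch (the VM should be shut down first):

```shell
$ qemu-img commit f19.qcow2        # fold overlay changes into f19.qcow2_orig
$ rm f19.qcow2                     # the overlay is no longer needed
$ mv f19.qcow2_orig f19.qcow2      # restore the original name
```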
20 thoughts on “KVM disk performance: raw vs qcow2 format”
Very interesting idea… Could you also test with hdparm -t or bonnie++ after installation? Those should give more depth to the numbers you came up with.
I perform mostly clean installations, so the numbers above are the most important for me. You’re right that bonnie++ output would be interesting as well. I’ll try to include that in some future test.
You don’t mention how you configured the disk cache mode on the virtual machine. The default cache mode allows for host I/O caching. Also, you don’t mention whether the raw image was sparse or pre-allocated. A pre-allocated raw image is expected to clearly beat both a sparse raw image and a qcow2 image, because it avoids the I/O penalty inherent in allocating blocks for grow-on-demand formats.
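For comparison, a fully pre-allocated raw image can be created like this (preallocation=full writes out every block up front; preallocation=falloc is a faster alternative on filesystems that support fallocate):

```shell
$ qemu-img create -f raw -o preallocation=full disk.img 10G
```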
In virt-manager the cache mode is set to “default” (without actually indicating what the default is). I’d like to compare different cache modes as well, when time permits.
The raw images were created with “qemu-img create foo.img 10G”. I guess that means sparse, because the initial file size is very small.
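That’s easy to verify: a sparse file reports its full apparent size but occupies almost no blocks on disk. For example, with plain coreutils (file name is illustrative):

```shell
$ truncate -s 10G test.img            # sparse, like a bare "qemu-img create"
$ du -h --apparent-size test.img      # reports the full 10G
$ du -h test.img                      # actual disk usage: close to zero
$ rm test.img
```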
Try disabling the cache.
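For reference, the cache mode is set per disk in the libvirt domain XML (edit it with “virsh edit f19”). A snippet, assuming a qcow2 disk:

```xml
<driver name='qemu' type='qcow2' cache='none'/>
```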
qcow2 also supports snapshots, which is a great feature:
virsh snapshot-create-as f19 my-snapshot
#do your dangerous magic
virsh snapshot-revert f19 my-snapshot
virsh snapshot-delete f19 my-snapshot
That’s neat! (I had to use --force when reverting.) Thanks a lot.
And then of course there’s using LVM as a backend, which also allows easy cloning/snapshotting and should offer (in theory) better performance, because it avoids another complicated level of abstraction (the host filesystem). Perhaps the next iteration could compare that as well. 🙂
Seconded — I’m much more interested in how qcow2 compares to LVM. Raw files aren’t even on my radar for VM storage backends.
LVM is not that interesting to me, because it uses much more space, just like full disk allocation. That doesn’t work well on my SSD. Also, if I create a snapshot, I have to edit the machine XML. I like shuffling plain image files more.
LVM has been able to do thin provisioning for quite some time.
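A sketch of LVM thin provisioning, assuming a volume group named vg0 (all names here are illustrative):

```shell
$ lvcreate --type thin-pool -L 50G -n pool0 vg0            # create a thin pool
$ lvcreate --type thin -V 10G --thinpool pool0 -n f19 vg0  # thin volume, blocks allocated on demand
$ lvcreate -s -n f19-snap vg0/f19                          # thin snapshot, no size needed
```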
Ah, interesting, thanks. But I still see no benefit over file images 🙂 File images can be cached in memory and therefore they are faster to work with.
I am not sure I understand the comment wrt caching. LVM uses block buffer/cache system just like any other block-layer device.
I thought that kernel caching works only with filesystems, not raw devices. In my experience a repeated VM boot is faster when using a file image than when using an LVM device. I attributed this to the kernel caching the image file in memory. Maybe there is a different reason, then.
What about qed?
Whoa!!!! Comparing performance of file systems when using an SSD isn’t very useful. Heck – use sparse and performance will still be FANTASTIC.
What about spinning HDDs?
Seriously – leaving out that SSDs were used is a MAJOR oversight.
I’m so used to SSD that I’ve completely forgotten something else exists 🙂 Yes, I should have mentioned that in the article. Thanks for pointing it out.
I did some testing with different allocation styles on spinning disks when Ubuntu 14.04 was first released. These were desktop installs, which aren’t very interesting but impact more users. Also, this was VirtualBox on a Windows host (not that I use it myself, but … I was there and it was convenient to test). The results are here:
Installation times were crazily different.
* QCOW vHDD – 50 min
* VDI Fully Allocated vHDD – 12 min
* VDI Sparse – no 3D Accel – 14 min
* QCOW vHDD – 20 sec
* VDI Fully Allocated vHDD – 12 sec
* Prealloc + 3D accel – 10 sec
* VDI Sparse + 3D Accel – 12 sec
* VDI Sparse – no 3D Accel – 13 sec
* VDI Sparse w/Guest Adds – no 3D Accel – 11 sec
* VDI Sparse w/Guest Adds + 3D Accel – 9 sec
* QCOW vHDD – 23 sec
* VDI Fully Allocated vHDD – 8 sec
* Prealloc + 3D accel – 6 sec
* VDI Sparse – no 3D accel – 10 sec
* VDI Sparse w/Guest Adds – no 3D Accel – 10 sec
* VDI Sparse w/Guest Adds + 3D Accel – 7 sec
Hope these help someone.
On KVM, I’ve been using raw files on host-OS LVM volumes for so long that anything else just isn’t a consideration. There were too many hassles with performance initially to look further. I guess things have changed?