LibVMI: virtual machine introspection (libvmi.com)
90 points by ingve on Oct 22, 2016 | 9 comments


The "crash" utility[1] has been doing this for a while on live QEMU/KVM guests, but there's certainly a need for easier to use tooling.

A long time ago I wrote some tools that let you get process listings, dmesg and so on from guests[2]. Unfortunately the kernel developers didn't want to provide an easy introspection method, so it ended up being a constant battle to keep the tools updated for changes to the internal kernel structures (not impossible, just a full-time job which I didn't want to do).
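
Not my original tools, but to give a flavour of the technique: below is a rough, untested sketch of that kind of out-of-guest process listing using LibVMI's classic C API. The domain name "guest1" is made up, the struct task_struct offsets come from a per-kernel libvmi.conf profile, and those offsets (plus the API itself, which differs between LibVMI versions) are exactly the sort of thing that keeps shifting.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <libvmi/libvmi.h>

    int main(void)
    {
        vmi_instance_t vmi;
        addr_t list_head, cur, next, task;
        unsigned long tasks_off, name_off, pid_off;
        uint32_t pid;
        char *name;

        /* Attach to a running domain called "guest1" (assumed name). */
        if (vmi_init(&vmi, VMI_AUTO | VMI_INIT_COMPLETE, "guest1") == VMI_FAILURE)
            return 1;

        /* struct task_struct offsets, taken from the libvmi.conf profile
           for this exact kernel build -- the part that keeps breaking. */
        tasks_off = vmi_get_offset(vmi, "linux_tasks");
        name_off  = vmi_get_offset(vmi, "linux_name");
        pid_off   = vmi_get_offset(vmi, "linux_pid");

        vmi_pause_vm(vmi);

        /* Walk the circular task list, starting from init_task's list node. */
        list_head = vmi_translate_ksym2v(vmi, "init_task") + tasks_off;
        cur = list_head;
        do {
            task = cur - tasks_off;
            vmi_read_32_va(vmi, task + pid_off, 0, &pid);
            name = vmi_read_str_va(vmi, task + name_off, 0);
            printf("[%5u] %s\n", pid, name ? name : "?");
            free(name);
            if (vmi_read_addr_va(vmi, cur, 0, &next) == VMI_FAILURE)
                break;
            cur = next;
        } while (cur != list_head);

        vmi_resume_vm(vmi);
        vmi_destroy(vmi);
        return 0;
    }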

Actually, just getting dmesg from a guest is super-useful when the guest "goes away" and you've no idea if it has crashed or why. Nowadays I use virt-ls to find out what a guest is doing[3].

[1] https://github.com/crash-utility/crash

[2] https://people.redhat.com/~rjones/virt-dmesg/

[3] https://rwmj.wordpress.com/2012/02/27/using-libguestfs-to-fi...


It's viewing the memory of one virtual machine from a different virtual machine. Shouldn't it be called "inspection" rather than "introspection"?


Is there anything like this for / can this be used with AWS?


These kinds of tools use hypervisor APIs that let you inspect memory from outside the guest, e.g. QEMU's pmemsave monitor command. I would think it's highly unlikely that AWS would want to give out such low-level access, not least because you can do very bad stuff if that access gets into the wrong hands (e.g. read "secure" bits of guest kernel memory).
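
For the non-AWS case, here's a rough, untested sketch of the same idea as pmemsave done from the host side with LibVMI: dump the first 16 MB of a domain's physical memory to a file. The domain name "guest1", the output filename and the old-style vmi_read_pa() call are assumptions; signatures vary between LibVMI versions.

    #include <stdio.h>
    #include <string.h>
    #include <libvmi/libvmi.h>

    int main(void)
    {
        vmi_instance_t vmi;
        char buf[4096];
        addr_t paddr;
        FILE *out = fopen("guest1-lowmem.raw", "wb");

        if (!out)
            return 1;

        /* Attach to the running domain from the host, outside the guest. */
        if (vmi_init(&vmi, VMI_AUTO | VMI_INIT_COMPLETE, "guest1") == VMI_FAILURE)
            return 1;

        /* Copy guest-physical memory one page at a time; pages that can't
           be read (MMIO holes etc.) are written out as zeroes. */
        for (paddr = 0; paddr < 16 * 1024 * 1024; paddr += sizeof(buf)) {
            memset(buf, 0, sizeof(buf));
            vmi_read_pa(vmi, paddr, buf, sizeof(buf));
            fwrite(buf, 1, sizeof(buf), out);
        }

        vmi_destroy(vmi);
        fclose(out);
        return 0;
    }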


Thanks. I didn't expect this particular tool would work. I am diagnosing a performance issue on hosts using HVM virtualization (where, instead of being better than paravirtualization, it is actually markedly worse); if there were a tool that let a guest provide, at some level, debug information about itself, I thought it might aid my investigation. Sadly, I suspect even if any such tools exist, Amazon will have disabled their use for all the obvious, managed, multi-tenant reasons.


If you can dump the VM's "physical memory" (even from an AWS VM, e.g. using the LiME kernel module: https://github.com/volatilityfoundation/volatility/wiki/Lime...), you can still use Volatility, and I believe libvmi as well, to get some details about what's happening, and I don't think it would be possible to disable this technique. Also, it shouldn't interfere with any other guests on the same metal.


Not knowing more about your problem, I'm not so sure that something at the hypervisor level is necessary to see where you're losing performance on HVM, even though it could potentially help. Have you tried using tools like perf, SystemTap, or dtrace inside the VM?


I used to think paravirtualization must be faster than HVM, at the very least for network and I/O.


Papyrusssssss

(to people with a bit of design experience, it's just as bad as Comic Sans)



