KVM Lab inside a VirtualBox VM (Nested Virtualization) using Vagrant

It has been a while since I wanted to blog about nested virtualization, and the good news is that I now have a Vagrant build to share with you, so you can vagrant it up :). KVM (Kernel-based Virtual Machine) started to interest me as soon as I learned that Oracle allows hard partitioning on KVM hosts when installed on top of Oracle Linux. I instantly thought of the benefit for customers struggling with Oracle licensing on VMware farms.

Anyway, today Oracle Cloud Infrastructure (OCI) and Engineered Systems rely heavily on KVM, and the old OVM is gradually being replaced by the OLVM manager, which orchestrates VM administration on top of KVM hosts.


Nested Virtualization

KVM inside VirtualBox

So how do you make a hypervisor (KVM) aware of the host hardware when it is itself installed under another hypervisor layer (VirtualBox)? Well, this is now possible thanks to the nested virtualization feature available in recent versions of VirtualBox, and it is very simple to enable even after your VM has been provisioned. More on how to enable it in my previous blog.

Please note that it's actually qemu-kvm that is used under nested virtualization here, which acts as a type 2 hypervisor (virtual hardware emulation).
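A quick way to see what acceleration qemu actually gets is to check for /dev/kvm inside the guest; when the device is absent, qemu falls back to pure software emulation (TCG). A minimal sketch:

```shell
# Check whether this machine exposes /dev/kvm (hardware-accelerated KVM);
# without it, qemu falls back to pure software emulation (TCG).
if [ -e /dev/kvm ]; then
  accel="kvm"
else
  accel="tcg"
fi
echo "qemu will use: $accel"
```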


How to get started

Can’t afford a dedicated host for KVM? I’ve got you covered – you can start exploring KVM right now on your laptop with my vagrant build. Sharing is good, but live labs are even better.

GitHub Repo

  • Clone the repo
C:\Users\brokedba> git clone https://github.com/brokedba/KVM-on-virtualbox.git
C:\Users\brokedba> cd KVM-on-virtualbox
  • Start the VM (make sure you have 2Cores and 4GB RAM to spare before the launch)
C:\Users\brokedba> vagrant up


KVM Tools


Virsh is a command-line interface that can be used to create, destroy, stop, start, and edit virtual machines, and to configure the virtual environment (virtual networks, etc.). >> see Cheatsheet


Although this tool is a reference in terms of VM creation, it has never succeeded in creating QEMU VMs on my setup, which is why I won't cover it today.


This is a wonderful CLI tool created by karmab that interacts with the libvirt API to manage KVM environments, from configuring hosts to managing guest VMs. It can even interact with other virtualization providers (KubeVirt, oVirt, OpenStack, VMware vSphere, GCP, and AWS) and easily deploys and customizes VMs from cloud images. See the official GitHub repository for details.

I will show some examples using KCLI in this post, since it is already shipped with my nested Vagrant build.


These three are GUI-based management tools for managing guest VMs.


This is a utility that displays a graphical console for a virtual machine.



Virsh: Create and build a default storage pool (already done in my nested vagrant VM) and describe the host

# mkdir /u01/guest_images
# virsh pool-define-as default dir - - - - "/u01/guest_images"
# virsh pool-build default
# virsh pool-start default
# virsh pool-autostart default

# virsh pool-list --all
Name        State      Autostart
------------ --------- -------------
 default      active      yes

[root@localhost ~]# virsh pool-info default
Name:           default
UUID:           2b273ed0-e666-4c52-a383-c47a03727fc1
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       49.97 GiB
Allocation:     32.21 MiB
Available:      49.94 GiB
[root@localhost ~]# virsh nodeinfo  ---> the host is actually my virtualbox vm 
CPU model:           x86_64
CPU(s):              2
CPU frequency:       2592 MHz
CPU socket(s):       1
Core(s) per socket:  2
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         3761104 KiB

---- Inspect VMs
# virsh list --all  (list all guest VMs, including those shut down)
# virsh dominfo db2  (describe the "db2" VM)
# virsh edit db2   (edit the "db2" VM's attributes)

--- Start / reboot / shutdown VMs
# virsh start my-vm
# virsh reboot my-vm
# virsh shutdown my-vm  (graceful shutdown via ACPI)
# virsh destroy my-vm   (force power-off)



I will be using KCLI in my examples, where I will:

  • Create a default storage pool and configure it (already done in my vagrant VM)
# kcli create pool -p /u01/guest_images default
# kcli list pool
| Pool         |        Path             |
| default      | /u01/guest_images       |
  • Since kcli runs through Docker, we will need to update the kcli alias according to the pool path
# alias kcli='docker run --net host -it --rm --security-opt label=disable -v /root/.kcli:/root/.kcli -v /root/.ssh:/root/.ssh -v /u01/guest_images:/u01/guest_images -v /var/run/libvirt:/var/run/libvirt -v $PWD:/workdir quay.io/karmab/kcli'
  • Create a default network (already done in my vagrant VM)
# kcli create network  -c default
# kcli list network
| Network |  Type  |       Cidr       | Dhcp |  Domain | Mode |
| default | routed | | True | default | nat  |
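One side note on the alias shown above: aliases only expand in interactive shells, so if you plan to call kcli from scripts, a shell function with the same mounts is a more robust wrapper. A sketch (same image and volume paths as the alias):

```shell
# Sketch: the containerized kcli invocation wrapped in a function instead of
# an alias, so it also works from non-interactive scripts.
kcli() {
  docker run --net host -it --rm --security-opt label=disable \
    -v /root/.kcli:/root/.kcli \
    -v /root/.ssh:/root/.ssh \
    -v /u01/guest_images:/u01/guest_images \
    -v /var/run/libvirt:/var/run/libvirt \
    -v "$PWD":/workdir \
    quay.io/karmab/kcli "$@"
}
```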


Download an Image

KCLI makes it very easy to download an image from a cloud repository, as shown in the example below.

  • Download Ubuntu 18.04 (Bionic) from the Ubuntu cloud image repository
# kcli download image ubuntu1804  -p default
Using url https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img...

# kcli list image
| Images                                             |
| /u01/guest_images/bionic-server-cloudimg-amd64.img |
  • You can also use curl if you have a specific image you want to download
# curl -sL image-URL -o /Pool/Path/image.img


Create a VM

Once the image is in the storage pool you only have to run the kcli create command as below ( see syntax)

# kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu -P memory=512 -P numcpus=1

Deploying vm ubuntuvm from profile ubuntu1804...
ubuntuvm created on local
# kcli list vm
|   Name   | Status | Ips |              Source              | Plan  |  Profile   |
| ubuntuvm |   up   |     | bionic-server-cloudimg-amd64.img | kvirt | ubuntu1804 |

usage : kcli create vm [-h] [-p PROFILE] [--console] [-c COUNT] [-i IMAGE]
                      [--profilefile PROFILEFILE] [-P PARAM]
                      [--paramfile PARAMFILE] [-s] [-w]


Login to the VM

  • The IP address will take some time before it's assigned, but once it is, just log in using ssh. KCLI creates the VM with your default ssh key (~/.ssh/id_rsa).
# kcli ssh ubuntuvm
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)
ubuntu@ubuntuvm:~$ uname -a
Linux ubuntuvm 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • You can also log in using a root password, but for this, you’ll have to set it during the VM creation via Cloud-init
# kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu \
-P cmds=['echo root:unix1234 | chpasswd']

# virsh console ubuntuvm
Connected to domain ubuntuvm
ubuntuvm login: root
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)
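Since kcli injects your default public key into the guest at creation time, it is worth making sure a keypair exists before creating any VMs. A minimal sketch:

```shell
# Ensure a default ssh keypair exists; kcli injects the public key into the
# guest via cloud-init during VM creation.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
```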


KCLI  Tips

  • KCLI configuration is done in ~/.kcli directory, which you need to manually create (done in my vagrant build already). It will contain:

    • config.yml generic configuration where you declare clients.
    • profiles.yml stores your profiles where you combine things like memory, numcpus and all supported parameters into named profiles to create VMs from

For example, you could create the same VM described earlier by storing the VM specs in the profiles.yml

 --- excerpt from ~/.kcli/profiles.yml
local_ubuntu1804:
  image: bionic-server-cloudimg-amd64.img
  numcpus: 1
  memory: 512
  nets:
    - default
  pool: default
  cmds:
    - echo root:unix1234 | chpasswd

Then call the named profile using the -p argument when creating the VM

 # kcli create vm ubuntuvm -p local_ubuntu1804



  • I'm very glad to finally share this with you, especially since it includes my vagrant build that you can try yourself to play with KVM from VirtualBox.
  • Keep in mind that the more resources you allocate to your host/root VM, the more you can spin up.
  • My default vagrant build allocates 2 vCPUs and 4 GB of RAM, but you can tweak the values in the Vagrantfile.
  • Please give the KCLI project a star on GitHub, as its creator has helped me a lot and deserves a huge shout-out.
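As a reference for that Vagrantfile tweak, the VirtualBox provider block typically looks like this (a sketch with illustrative values, not necessarily the exact block in my build):

```ruby
# Sketch of a VirtualBox provider block in a Vagrantfile (illustrative values)
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096   # RAM in MB -- raise this to spin up more guests
    vb.cpus   = 2      # vCPU count
    # Nested virtualization, required to run KVM inside the guest
    vb.customize ["modifyvm", :id, "--nested-hw-virt", "on"]
  end
end
```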