Virtual Machine Types

Introduction

This document describes the two virtual machine (VM) types minimega is capable of launching: QEMU/KVM virtual machines and containers. minimega uses a common API to describe VM features and supports launching experiments consisting of both types of VM.

A quick example

Launching a VM is as simple as describing features of the VM, and then instructing minimega to launch one or more VMs based on the current description.

For example, to launch a QEMU/KVM-based VM, we need to specify at least a disk image (or kernel/initrd pair). Say we have a disk foo.qcow2:

# set a disk image
vm config disk foo.qcow2

# set some other common parameters
vm config memory 4096
vm config net 100

# launch one VM, named foo
vm launch kvm foo

# launch 10 more, named bar1, bar2, bar3...
vm launch kvm bar[1-10]

Notice that we configure the disk image once but end up launching 11 VMs that all use foo.qcow2. minimega always uses the current configuration when launching VMs, which makes it easy to launch multiple copies of a VM.
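Because the current configuration is sticky, changing a parameter affects only subsequent launches. As a sketch (the VM names here are arbitrary):

# later launches pick up the new value;
# foo and bar[1-10] keep their original 4096 MB
vm config memory 2048
vm launch kvm baz[1-5]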

Also notice that in the vm launch command, we specify that we want to launch a kvm type VM. minimega currently supports the kvm type and the container type, described in detail below.

Common configuration

All VM types are configured using the vm config API. Many configuration parameters are common to all VM types, such as memory, net, and uuid. Others are specific to one type of VM, such as disk, which is specific to KVM type VMs, and filesystem, which is specific to container type VMs.

When launching a VM, only the configuration parameters relevant to that VM type are applied. For example, it is safe to have vm config disk set when launching a container type VM because the container type ignores that parameter.
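For instance, a single configuration can drive launches of both VM types; each type simply ignores the parameters it doesn't use (the filesystem path below is a placeholder):

# disk is used by kvm type VMs, filesystem by container type VMs
vm config disk foo.qcow2
vm config filesystem /tmp/containerfs
vm config memory 2048

# uses disk, ignores filesystem
vm launch kvm mykvm

# uses filesystem, ignores disk
vm launch container mycontainer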

VM types

KVM Virtual Machines

minimega supports booting kvm type VMs by using QEMU/KVM. When launching kvm type VMs, minimega will use the configuration provided in the vm config API to generate command line arguments for QEMU. Additional configuration, such as creating network taps and assigning them to openvswitch bridges, occurs at launch time.

minimega manages running kvm type VMs, including providing an interface to the VM's QMP socket. When a VM quits or crashes, minimega will reflect this in the state column of vm info.
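For example, assuming a few VMs are running, their state can be inspected with vm info; the .columns prefix (minimega's standard output filter) narrows the output to specific columns:

# show only the name and state of each VM
.columns name,state vm info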

KVM-specific configuration parameters

Most vm config parameters are common to all VM types, though some are specific to KVM and container instances. See the minimega API for documentation on specific configuration parameters.

Configuration parameters specific to KVM instances:

Technical details

minimega uses QEMU version 1.6 or greater. When launching kvm type VMs, the following occurs, in order:

A number of QEMU arguments are hardcoded, such as using the host CPU architecture (for kvm support) and enabling memory ballooning. In rare circumstances, some of these arguments need to be removed or modified. The vm config qemu-override API provides a mechanism to patch the QEMU argument string before launching VMs.
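A sketch of such an override follows. The subcommand form shown here (an add subcommand taking a match string and a replacement) is an assumption that may differ across minimega versions, and the device names are only illustrative:

# replace every occurrence of "e1000" in the generated QEMU
# argument string with "virtio-net-pci" (names are illustrative)
vm config qemu-override add e1000 virtio-net-pci

# inspect the current overrides and the patched argument string
vm config qemu-override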

Containers

minimega supports booting container type VMs via a custom container implementation. A container requires, at minimum, a root filesystem (rootfs) and an executable to run as init (PID 1) within the rootfs; the init program can be a shell script. minimega's container implementation supports full-system containers only, meaning that every container receives its own PID, network, mount, and IPC namespaces. The rootfs must have certain directories populated in order to function correctly. A prebuilt container filesystem can be obtained by running the misc/get_containerfs.bash script in the minimega distribution.
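Putting this together, a minimal container launch might look like the following; the filesystem path and init program are placeholders:

# point minimega at an unpacked rootfs
vm config filesystem /tmp/containerfs

# program to run as init (PID 1); may be a shell script
vm config init /init

vm launch container c1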

Container-specific configuration parameters

Most vm config parameters are common to all VM types, though some are specific to KVM and container instances. See the minimega API for documentation on specific configuration parameters.

Configuration parameters specific to container instances:

Technical details

minimega uses a custom container implementation to boot container type VMs. minimega requires that cgroups be enabled (see notes below for special constraints), and that the linux host supports overlayfs. overlayfs is enabled by default in linux 3.18+.
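To check these prerequisites on a given host, something like the following works on most distributions (a sketch; output details vary by kernel):

# overlayfs appears as "overlay" in the kernel's filesystem list
grep overlay /proc/filesystems

# list available cgroup subsystems (memory, freezer, devices, ...)
cat /proc/cgroups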

When launching container type VMs, the following occurs, in order:

When the container shim is launched in a new set of namespaces, the following occurs:

Notes

Many linux distributions explicitly disable the memory cgroup, which is required for minimega to boot container type VMs. On Debian-based linux hosts (including Ubuntu), add

cgroup_enable=memory

to the kernel boot parameters (in GRUB or otherwise) to enable the memory cgroup. To enable this in GRUB on Debian, open /etc/default/grub and edit the GRUB_CMDLINE_LINUX_DEFAULT line to include the cgroup parameter. Then run update-grub to update the GRUB config. When you reboot, the cgroup will be enabled.
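The edited line in /etc/default/grub might look like the following, where "quiet" stands in for whatever parameters are already present on your system:

GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory"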

To better work with systemd-based systems, minimega requires the cgroup hierarchy be mounted as individual mounts (as opposed to one large cgroup mount with all enabled subsystems). If you don't already have a cgroup hierarchy mounted, the following will create a minimal one for minimega:

# mount a tmpfs to hold the cgroup hierarchy
mount -t tmpfs cgroup /sys/fs/cgroup

# create a mountpoint for each subsystem minimega uses
mkdir /sys/fs/cgroup/memory
mkdir /sys/fs/cgroup/freezer
mkdir /sys/fs/cgroup/devices

# mount each subsystem individually
mount -t cgroup cgroup -o memory /sys/fs/cgroup/memory
mount -t cgroup cgroup -o freezer /sys/fs/cgroup/freezer
mount -t cgroup cgroup -o devices /sys/fs/cgroup/devices

Authors

The minimega authors

22 Mar 2016