The correct way to copy LXD VMs and containers between hosts

Saturday, March 12th, 2022 at 9:06 pm

There are quite a few posts out there describing some very odd methods to copy LXD containers from host to host, including shipping snapshots and tarballs of the container’s data directory around.

Those instructions are wrong. Don’t follow them.

The correct, clean way to do this is to configure your LXD hosts to talk to each other and simply copy the containers between them. There are a few reasons to use this approach:

  1. It’s more secure, using an encrypted transport and proper authorization
  2. It doesn’t clutter up the source and destination hosts with 2-3x the container’s size in temporary tarballs that get shipped between them
  3. It allows you to start moving towards using LXD clusters, which is a “Good Thing(tm)”
  4. It relies purely on LXD concepts and built-ins, not external apps, programs or workarounds

So let’s get to it.

On LXD host_1, you can create a VM and a container as follows:

lxc launch ubuntu:20.04 --vm vm1 # virtual machine 1
lxc launch ubuntu:20.04 c1       # container 1

Wait for those to spin up and get an IP from your network.

lxc list
+------------------------------+---------+-------------------------+------+-----------------+-----------+
|             NAME             |  STATE  |          IPV4           | IPV6 |      TYPE       | SNAPSHOTS |
+------------------------------+---------+-------------------------+------+-----------------+-----------+
| c1                           | RUNNING | 192.168.101.57 (eth0)   |      | CONTAINER       | 0         |
+------------------------------+---------+-------------------------+------+-----------------+-----------+
| vm1                          | RUNNING | 192.168.101.56 (enp5s0) |      | VIRTUAL-MACHINE | 0         |
+------------------------------+---------+-------------------------+------+-----------------+-----------+
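
Before host_1 can add host_2 as a remote, host_2 needs to be listening on the network and have a trust password set. If that hasn’t been done yet, a minimal one-time sketch on host_2 looks like this (port 8443 is LXD’s default; the password value is just a placeholder):

lxc config set core.https_address "[::]:8443"             # listen for remote LXD clients
lxc config set core.trust_password "some-strong-password"  # host_1 will use this to authorize itself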

On this same LXD host, we can now configure a “remote” LXD host for it to speak to:

lxc remote add host_2

You will be prompted to accept the host’s fingerprint, and then for the trust password, to authorize the addition. Once added, you can verify it with:

lxc remote list
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| host_2          | https://host_2:8443                      | lxd           | tls         | NO     | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+

Before we do anything to the running VM (vm1) and container (c1), we want to take a snapshot of each, so that if anything goes wrong we can restore safely from that snapshot.

lxc stop vm1
lxc stop c1
lxc snapshot vm1 2022-03-12-snapshot # any name will do
lxc snapshot c1 2022-03-12-snapshot

We always confirm our changes, especially where they relate to data preservation:

lxc info vm1
Name: vm1
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Created: 2022/03/12 19:35 EST
Last Used: 2022/03/12 19:36 EST

Snapshots:
+---------------------+----------------------+------------+----------+
|        NAME         |       TAKEN AT       | EXPIRES AT | STATEFUL |
+---------------------+----------------------+------------+----------+
| 2022-03-12-snapshot | 2022/03/12 19:44 EST |            | NO       |
+---------------------+----------------------+------------+----------+
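
If anything goes wrong later, rolling back to that snapshot is a single command per instance (shown here for vm1; the same applies to c1):

lxc restore vm1 2022-03-12-snapshot # roll vm1 back to the snapshot taken above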

Now we can start those back up:

lxc start vm1 c1 

From here, we can now copy the snapshots we just made on LXD host_1 to LXD host_2:

lxc copy vm1/2022-03-12-snapshot host_2:vm1 --verbose
Transferring instance: vm1: 938.46MB (117.30MB/s) 
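
The container follows the same pattern; using the snapshot taken earlier, the copy would look like this:

lxc copy c1/2022-03-12-snapshot host_2:c1 --verbose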

On host_2, you can see that the vm1 we created on host_1 has now been copied over and remains in a ‘STOPPED‘ state:

lxc list
+------------------------------+---------+-----------------------+------+-----------------+-----------+
|             NAME             |  STATE  |         IPV4          | IPV6 |      TYPE       | SNAPSHOTS |
+------------------------------+---------+-----------------------+------+-----------------+-----------+
| vm1                          | STOPPED |                       |      | VIRTUAL-MACHINE | 0         |
+------------------------------+---------+-----------------------+------+-----------------+-----------+

You can now start that VM and have it running there on host_2:

lxc start vm1
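
Because host_2 is configured as a remote on host_1, you can also do this from host_1 without logging in to host_2 at all, for example:

lxc start host_2:vm1 # start the copy on host_2, driven from host_1
lxc list host_2:     # list the instances on host_2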

Note: host_2 may live on the same subnet as host_1, which means the copied instance may need a new IP address if the original is still running on host_1.

You will need to either stop the original on host_1, or give one of the two copies a new IP address before both are running at once. Two identical copies on the same L2 network will conflict, and your DHCP server will refuse to hand out a lease to the second one requesting it.

There are a couple of ways to do this:

  1. Give the container a static IP address. When you copy it to the second host, give it a different static IP address there, or
  2. If these containers will request a DHCP lease, you can remove /etc/machine-id and generate a new one by running systemd-machine-id-setup (see the sketch just below this list). With a new machine-id, the container will appear to your DHCP server as a new machine, and it will hand out a second lease to the second copy.
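
The machine-id reset can be driven from the LXD host itself. A minimal sketch for the copied container on host_2 (assuming it kept the name c1, and that its DHCP client derives its identifier from the machine-id, as the stock Ubuntu cloud images do):

lxc exec c1 -- rm /etc/machine-id       # drop the machine-id inherited from the original
lxc exec c1 -- systemd-machine-id-setup # generate a fresh machine-id
lxc restart c1                          # restart so the new identity is used when requesting a lease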

With the container(s) copied from host to host and their networking reconfigured to fit your LAN topology, you should have them up and running.

This is a stopgap though, as it isn’t an HA setup. If you truly want resilience, you should set up an LXD cluster across both hosts; then you can see/create/move/migrate containers between the hosts on demand, seamlessly. When you configure those LXD servers to use shared storage (common to both hosts in this case), the containers will survive a full outage of either host.
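
For reference, once the two hosts are joined into a cluster (an option offered interactively by lxd init on each host), day-to-day management no longer needs the remote: prefix at all; something like the following, assuming a member named host_2:

lxc cluster list            # show the cluster members and their status
lxc move c1 --target host_2 # relocate a (stopped) instance to another member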

Good luck!
