lxc.conf(5)
NAME
lxc.conf - linux container configuration file
DESCRIPTION
Linux containers (lxc) are always created before being used. This
creation defines a set of system resources to be virtualized / isolated
when a process uses the container. By default, the pids, sysv ipc
and mount points are virtualized and isolated. The other system
resources are shared across containers until they are explicitly
defined in the configuration file. For example, if there is no network
configuration, the network will be shared between the creator of the
container and the container itself; but if the network is specified, a
new network stack is created for the container and the container can no
longer use the network of its ancestor.
The configuration file defines the different system resources to be
assigned for the container. At present, the utsname, the network, the
mount points, the root file system and the control groups are sup‐
ported.
Each option in the configuration file has the form key = value, fitting
on one line. A line starting with the '#' character is a comment.
ARCHITECTURE
Allows setting the architecture for the container; for example, a
32-bit architecture for a container running 32-bit binaries on a 64-bit
host. This fixes container scripts which rely on the architecture to
do some work, such as downloading packages.
lxc.arch
Specify the architecture for the container.
Valid options are x86, i686, x86_64, amd64
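For example, to run a 32-bit userland on a 64-bit host (illustrative value):
lxc.arch = x86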
HOSTNAME
The utsname section defines the hostname to be set for the container.
That means the container can set its own hostname without changing the
one from the system. That makes the hostname private for the container.
lxc.utsname
specify the hostname for the container
STOP SIGNAL
Allows specifying the signal name or number sent by lxc-stop to shut
down the container. Different init systems may use different signals to
perform a clean shutdown sequence. The signal can be specified in
kill(1) fashion, e.g. SIGKILL, SIGRTMIN+14, SIGRTMAX-10, or as a plain
number.
lxc.stopsignal
specify the signal used to stop the container
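For example (illustrative value):
lxc.stopsignal = SIGRTMIN+14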
NETWORK
The network section defines how the network is virtualized in the con‐
tainer. The network virtualization acts at layer two. In order to use
the network virtualization, parameters must be specified to define the
network interfaces of the container. Several virtual interfaces can be
assigned and used in a container even if the system has only one physi‐
cal network interface.
lxc.network.type
specify what kind of network virtualization to use for the
container. Each time an lxc.network.type field is found, a new
round of network configuration begins. In this way, several
network virtualization types can be specified for the same
container, as well as several network interfaces assigned to one
container. The different virtualization types can be:
empty: will create only the loopback interface.
veth: a peer network device is created with one side assigned to
the container and the other side attached to a bridge specified
by lxc.network.link. If the bridge is not specified, then the
veth pair device will be created but not attached to any bridge.
Otherwise, the bridge has to be set up on the system beforehand;
lxc won't handle any configuration outside of the container. By
default lxc chooses a name for the network device belonging to
the outside of the container; this name is handled by lxc, but
if you wish to handle this name yourself, you can tell lxc to
set a specific name with the lxc.network.veth.pair option.
vlan: a vlan interface is linked with the interface specified by
the lxc.network.link and assigned to the container. The vlan
identifier is specified with the option lxc.network.vlan.id.
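For example, a vlan interface on top of eth0 with vlan id 100 (illustrative values):
lxc.network.type = vlan
lxc.network.link = eth0
lxc.network.vlan.id = 100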
macvlan: a macvlan interface is linked with the interface
specified by lxc.network.link and assigned to the container.
lxc.network.macvlan.mode specifies the mode the macvlan will use
to communicate between different macvlan interfaces on the same
upper device. The accepted modes are:
private: the device never communicates with any other device on
the same upper_dev (default).
vepa: the new Virtual Ethernet Port Aggregator (VEPA) mode. It
assumes that the adjacent bridge returns all frames where both
source and destination are local to the macvlan port, i.e. the
bridge is set up as a reflective relay. Broadcast frames coming
in from the upper_dev get flooded to all macvlan interfaces in
VEPA mode; local frames are not delivered locally.
bridge: provides the behavior of a simple bridge between
different macvlan interfaces on the same port. Frames from one
interface to another get delivered directly and are not sent out
externally. Broadcast frames get flooded to all other bridge
ports and to the external interface, but when they come back
from a reflective relay, they are not delivered again. Since all
the MAC addresses are known, the macvlan bridge mode does not
require learning or STP like the bridge module does.
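For example, a macvlan interface in bridge mode on top of eth0 (illustrative values):
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0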
phys: an already existing interface specified by the lxc.net‐
work.link is assigned to the container.
lxc.network.flags
specify an action to perform on the network.
up: activates the interface.
lxc.network.link
specify the interface to be used for real network traffic.
lxc.network.name
the interface name is dynamically allocated, but if another name
is needed because the configuration files used by the container
expect a generic name, eg. eth0, this option will rename the
interface in the container.
lxc.network.hwaddr
the interface mac address is dynamically allocated by default to
the virtual interface, but in some cases this is needed to
resolve a mac address conflict or to always have the same link-local
ipv6 address.
lxc.network.ipv4
specify the ipv4 address to assign to the virtualized interface.
Several lines specify several ipv4 addresses. The address is in
format x.y.z.t/m, eg. 192.168.1.123/24. The broadcast address
should be specified on the same line, right after the ipv4
address.
lxc.network.ipv4.gateway
specify the ipv4 address to use as the gateway inside the con‐
tainer. The address is in format x.y.z.t, eg. 192.168.1.123.
Can also have the special value auto, which means to take the
primary address from the bridge interface (as specified by the
lxc.network.link option) and use that as the gateway. auto is
only available when using the veth and macvlan network types.
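For example (illustrative addresses):
lxc.network.ipv4 = 192.168.1.123/24 192.168.1.255
lxc.network.ipv4.gateway = 192.168.1.1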
lxc.network.ipv6
specify the ipv6 address to assign to the virtualized interface.
Several lines specify several ipv6 addresses. The address is in
format x::y/m, eg. 2003:db8:1:0:214:1234:fe0b:3596/64
lxc.network.ipv6.gateway
specify the ipv6 address to use as the gateway inside the con‐
tainer. The address is in format x::y, eg. 2003:db8:1:0::1 Can
also have the special value auto, which means to take the pri‐
mary address from the bridge interface (as specified by the
lxc.network.link option) and use that as the gateway. auto is
only available when using the veth and macvlan network types.
lxc.network.script.up
add a configuration option to specify a script to be executed
after creating and configuring the network used from the host
side. The following arguments are passed to the script: the
container name and the config section name (net). Additional
arguments depend on the config section employing a script hook;
the following are used by the network system: the execution
context (up) and the network type (empty/veth/macvlan/phys).
Depending on the network type (veth/macvlan/phys), a further
argument may be passed: the (host-side) device name.
lxc.network.script.down
add a configuration option to specify a script to be executed
before destroying the network used from the host side. The
following arguments are passed to the script: the container name
and the config section name (net). Additional arguments depend
on the config section employing a script hook; the following are
used by the network system: the execution context (down) and the
network type (empty/veth/macvlan/phys). Depending on the network
type (veth/macvlan/phys), a further argument may be passed: the
(host-side) device name.
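For example, with hypothetical script paths:
lxc.network.script.up = /etc/lxc/net-up.sh
lxc.network.script.down = /etc/lxc/net-down.sh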
NEW PSEUDO TTY INSTANCE (DEVPTS)
For stricter isolation the container can have its own private instance
of the pseudo tty.
lxc.pts
If set, the container will have a new pseudo tty instance,
making it private to the container. The value specifies the
maximum number of pseudo ttys allowed for a pts instance (this
limitation is not yet implemented).
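For example, to allow up to 1024 pseudo ttys (illustrative value):
lxc.pts = 1024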
CONTAINER SYSTEM CONSOLE
If the container is configured with a root filesystem and the inittab
file is set up to use the console, you may want to specify where the
output of this console goes.
lxc.console
Specify a path to a file where the console output will be
written. The keyword 'none' will simply disable the console.
This is dangerous if the rootfs has a console device file to
which the application can write: the messages will then end up
on the host.
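For example, writing the console output to a hypothetical log file:
lxc.console = /var/log/container.console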
CONSOLE THROUGH THE TTYS
If the container is configured with a root filesystem and the inittab
file is set up to launch a getty on the ttys, this option specifies
the number of ttys made available to the container. The number of
gettys in the inittab file of the container should not be greater than
the number of ttys specified in this configuration file; otherwise the
excess getty sessions will die and respawn indefinitely, giving
annoying messages on the console.
lxc.tty
Specify the number of ttys to make available to the container.
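For example, matching an inittab that spawns four gettys (illustrative value):
lxc.tty = 4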
CONSOLE DEVICES LOCATION
LXC consoles are provided through Unix98 PTYs created on the host and
bind-mounted over the expected devices in the container. By default,
they are bind-mounted over /dev/console and /dev/ttyN. This can prevent
package upgrades in the guest. Therefore you can specify a directory
location (under /dev) under which LXC will create the files and bind-mount
over them. These will then be symbolically linked to /dev/console
and /dev/ttyN. A package upgrade can then succeed as it is able to
remove and replace the symbolic links.
lxc.devttydir
Specify a directory under /dev under which to create the con‐
tainer console devices.
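For example, to have the console devices created under /dev/lxc (illustrative name):
lxc.devttydir = lxc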
/DEV DIRECTORY
By default, lxc does nothing with the container's /dev. This allows the
container's /dev to be set up as needed in the container rootfs. If
lxc.autodev is set to 1, then after mounting the container's rootfs LXC
will mount a fresh tmpfs under /dev (limited to 100k) and fill in a
minimal set of initial devices. This is generally required when start‐
ing a container containing a "systemd" based "init" but may be optional
at other times. Additional devices in the container's /dev directory
may be created through the use of the lxc.hook.autodev hook.
lxc.autodev
Set this to 1 to have LXC mount and populate a minimal /dev when
starting the container.
ENABLE KMSG SYMLINK
Enable creating /dev/kmsg as a symlink to /dev/console. This defaults to
1.
lxc.kmsg
Set this to 0 to disable /dev/kmsg symlinking.
MOUNT POINTS
The mount points section specifies the different places to be mounted.
These mount points will be private to the container and won't be
visible to processes running outside of the container. This is useful
to mount /etc, /var or /home, for example.
lxc.mount
specify a file location in the fstab format, containing the
mount information. If the rootfs is an image file or a block
device and the fstab is used to mount a point somewhere in this
rootfs, the path of the rootfs mount point should be prefixed
with the /usr/lib64/lxc/rootfs default path, or the value of
lxc.rootfs.mount if specified.
lxc.mount.entry
specify a mount point corresponding to a line in the fstab for‐
mat.
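For example (the fstab path is hypothetical):
lxc.mount = /etc/lxc/mycontainer.fstab
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0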
ROOT FILE SYSTEM
The root file system of the container can be different than that of the
host system.
lxc.rootfs
specify the root file system for the container. It can be an
image file, a directory or a block device. If not specified, the
container shares its root file system with the host.
lxc.rootfs.mount
where to recursively bind lxc.rootfs before pivoting. This is to
ensure success of the pivot_root(8) syscall. Any directory
suffices; the default should generally work.
lxc.pivotdir
where to pivot the original root file system under lxc.rootfs,
specified relative to that. The default is mnt. It is created
if necessary, and also removed after unmounting everything from
it during container setup.
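For example (illustrative path):
lxc.rootfs = /var/lib/lxc/mycontainer/rootfs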
CONTROL GROUP
The control group section contains the configuration for the different
subsystems. lxc does not check the correctness of the subsystem name.
This has the disadvantage of not detecting configuration errors until
the container is started, but has the advantage of permitting any
future subsystem.
lxc.cgroup.[subsystem name]
specify the control group value to be set. The subsystem name is
the literal name of the control group subsystem. The permitted
names and the syntax of their values are not dictated by LXC;
instead they depend on the features of the Linux kernel running
at the time the container is started, eg. lxc.cgroup.cpuset.cpus
CAPABILITIES
Capabilities can be dropped in the container if it is run as root.
lxc.cap.drop
Specify the capability to be dropped in the container. A single
line defining several capabilities with a space separation is
allowed. The format is the lowercase of the capability
definition without the "CAP_" prefix, eg. CAP_SYS_MODULE should
be specified as sys_module. See capabilities(7).
UID MAPPINGS
A container can be started in a private user namespace with user and
group id mappings. For instance, you can map userid 0 in the container
to userid 200000 on the host. The root user in the container will be
privileged in the container, but unprivileged on the host. Normally a
system container will want a range of ids, so you would map, for
instance, user and group ids 0 through 20,000 in the container to the
ids 200,000 through 220,000.
lxc.id_map
Four values must be provided. First a character, either 'u' or
'g', to specify whether user or group ids are being mapped. Next
is the first userid as seen in the user namespace of the
container. Next is the userid as seen on the host. Finally, a
range indicating the number of consecutive ids to map.
STARTUP HOOKS
Startup hooks are programs or scripts which can be executed at various
times in a container's lifetime.
lxc.hook.pre-start
A hook to be run in the host's namespace before the container
ttys, consoles, or mounts are up.
lxc.hook.pre-mount
A hook to be run in the container's fs namespace but before the
rootfs has been set up. This allows for manipulation of the
rootfs, e.g. to mount an encrypted filesystem. Mounts done in
this hook will not be reflected on the host (apart from mounts
propagation), so they will be automatically cleaned up when the
container shuts down.
lxc.hook.mount
A hook to be run in the container's namespace after mounting has
been done, but before the pivot_root.
lxc.hook.autodev
A hook to be run in the container's namespace after mounting has
been done and after any mount hooks have run, but before the
pivot_root, if lxc.autodev == 1. The purpose of this hook is to
assist in populating the /dev directory of the container when
using the autodev option for systemd based containers. The con‐
tainer's /dev directory is relative to the ${LXC_ROOTFS_MOUNT}
environment variable available when the hook is run.
lxc.hook.start
A hook to be run in the container's namespace immediately before
executing the container's init. This requires the program to be
available in the container.
lxc.hook.post-stop
A hook to be run in the host's namespace after the container has
been shut down.
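For example, with hypothetical script paths:
lxc.hook.pre-start = /var/lib/lxc/mycontainer/pre-start.sh
lxc.hook.post-stop = /var/lib/lxc/mycontainer/post-stop.sh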
STARTUP HOOKS ENVIRONMENT VARIABLES
A number of environment variables are made available to the startup
hooks to provide configuration information and assist in the function‐
ing of the hooks. Not all variables are valid in all contexts. In par‐
ticular, all paths are relative to the host system and, as such, not
valid during the lxc.hook.start hook.
LXC_NAME
The LXC name of the container. Useful for logging messages in
common log environments. [-n]
LXC_CONFIG_FILE
Host-relative path to the container configuration file. This
allows the container to reference the original, top-level
configuration file for the container in order to locate any
additional configuration information not otherwise made
available. [-f]
LXC_CONSOLE
The path to the console output of the container if not NULL.
[-c] [lxc.console]
LXC_CONSOLE_LOGPATH
The path to the console log output of the container if not NULL.
[-L]
LXC_ROOTFS_MOUNT
The mount location to which the container is initially bound.
This will be the host relative path to the container rootfs for
the container instance being started and is where changes should
be made for that instance. [lxc.rootfs.mount]
LXC_ROOTFS_PATH
The host relative path to the container root which has been
mounted to the rootfs.mount location. [lxc.rootfs]
EXAMPLES
In addition to the few examples given below, you will find some other
examples of configuration files in /usr/share/doc/lxc/examples
NETWORK
This configuration sets up a container to use a veth pair device with
one side plugged to a bridge br0 (which has been configured before on
the system by the administrator). The virtual network device visible in
the container is renamed to eth0.
lxc.utsname = myhostname
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
UID/GID MAPPING
This configuration will map both user and group ids in the range 0-9999
in the container to the ids 100000-109999 on the host.
lxc.id_map = u 0 100000 10000
lxc.id_map = g 0 100000 10000
CONTROL GROUP
This configuration will set up several control groups for the
application: cpuset.cpus restricts usage to the defined cpus,
cpu.shares prioritizes the control group, and devices.allow makes the
specified devices usable.
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw
COMPLEX CONFIGURATION
This example shows a complex configuration: building a complex network
stack, using control groups, setting a new hostname, mounting some
locations, and changing the root file system.
lxc.utsname = complex
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bf
lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 10.2.3.4/24
lxc.network.ipv4 = 192.168.10.125/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = dummy0
lxc.network.hwaddr = 4a:49:43:49:79:ff
lxc.network.ipv4 = 10.2.3.6/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw
lxc.mount = /etc/fstab.complex
lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
lxc.rootfs = /mnt/rootfs.complex
lxc.cap.drop = sys_module mknod setuid net_raw
lxc.cap.drop = mac_override
SEE ALSO
chroot(1), pivot_root(8), fstab(5), lxc(1), lxc-create(1),
lxc-destroy(1), lxc-start(1), lxc-stop(1), lxc-execute(1), lxc-kill(1),
lxc-console(1), lxc-monitor(1), lxc-wait(1), lxc-cgroup(1), lxc-ls(1),
lxc-ps(1), lxc-info(1), lxc-freeze(1), lxc-unfreeze(1), lxc-attach(1)
AUTHOR
Daniel Lezcano <daniel.lezcano@free.fr>
Sat Sep 28 09:42:42 UTC 2013 lxc.conf(5)