1.4. Running vCGNAT in Docker


This installation is only suitable for functional tests!

First, install Docker and KVM. Make sure the following kernel modules are loaded (on AMD hosts, load kvm_amd instead of kvm_intel):

$ sudo modprobe kvm kvm_intel
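To confirm that KVM is actually usable on the host, a quick check (the kvm_amd variant applies to AMD CPUs):

```shell
# Check that the KVM modules are loaded and the device node exists
lsmod | grep -E '^kvm(_intel|_amd)?'
ls -l /dev/kvm
```

If /dev/kvm is missing, verify that virtualization is enabled in the BIOS/UEFI.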

1.4.1. Building Docker Image

You do not need to build a Docker image yourself. You can contact our sales department for Docker and qcow2 images.

1.4.2. Running Container


This section describes how to connect the container to the network using veth interfaces.

  • Run the container, specifying the required number of dataplane interfaces in the CLAB_INTFS environment variable. In this example, the default Docker bridge network is used for management; an address from this network is assigned to the vCGNAT management interface. You can also create a separate network and attach the container to it with the --network option:

    $ docker run -d --name vcgnat -e CLAB_INTFS=1 --privileged <image_name> --username admin --password admin --hostname vcgnat --connection-mode tc --trace
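If a dedicated management network is preferred over the default bridge, it can be created first and passed via --network. A sketch; the network name vcgnat-mgmt and the subnet are illustrative:

```shell
# Create a user-defined bridge network for management (illustrative subnet)
docker network create --subnet 172.30.0.0/24 vcgnat-mgmt

# Attach the container to it instead of the default bridge
docker run -d --name vcgnat --network vcgnat-mgmt -e CLAB_INTFS=1 --privileged \
    <image_name> --username admin --password admin --hostname vcgnat \
    --connection-mode tc --trace
```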
  • Expose the container's network namespace so that it can be managed from the shell with the ip netns tooling:

    $ sudo mkdir -p /var/run/netns
    $ export pid="$(docker inspect -f '{{.State.Pid}}' vcgnat)"
    $ sudo ln -sf /proc/$pid/ns/net "/var/run/netns/vcgnat"
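If the symlink was created correctly, the namespace should now be visible to ip netns under the name vcgnat:

```shell
# List named namespaces and inspect interfaces inside the container's netns
sudo ip netns list                      # should include "vcgnat"
sudo ip netns exec vcgnat ip link show  # interfaces inside the container
```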
  • Using pairs of veth interfaces, connect the main network namespace to the namespace in which vCGNAT runs. Create as many veth pairs as there are dataplane interfaces required. Bring the interfaces up on both sides:
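A sketch for a single dataplane interface. The host-side name eth101 matches the bridge step below; the container-side name eth1 is an assumption — vrnetlab-based images typically expect dataplane interfaces named eth1, eth2, … inside the container namespace:

```shell
# Create a veth pair: one end stays in the main namespace,
# the peer is placed directly into the container's namespace
sudo ip link add eth101 type veth peer name eth1 netns vcgnat

# Bring both ends up
sudo ip link set eth101 up
sudo ip netns exec vcgnat ip link set eth1 up
```

Repeat with distinct names (eth102/eth2, and so on) for each additional dataplane interface.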

  • After that, the script inside the container should detect the dataplane interfaces that have already been added to it, and the virtual machine should start booting:

    user:~$ docker logs vcgnat
    2022-09-08 12:04:35,470: vrnetlab   DEBUG    Creating overlay disk image
    2022-09-08 12:04:35,527: vrnetlab   DEBUG    Starting vrnetlab NFWare
    2022-09-08 12:04:35,527: vrnetlab   DEBUG    VMs: [<__main__.NFWare_vm object at 0x7f44598bae80>]
    2022-09-08 12:04:35,531: vrnetlab   DEBUG    VM not started; starting!
    2022-09-08 12:04:35,531: vrnetlab   INFO     Starting NFWare_vm
    2022-09-08 12:04:35,531: vrnetlab   DEBUG    number of provisioned data plane interfaces is 1
    2022-09-08 12:04:35,532: vrnetlab   DEBUG    waiting for provisioned interfaces to appear...
    2022-09-08 12:05:00,556: vrnetlab   DEBUG    interfaces provisioned, continuing...
    2022-09-08 12:05:00,557: vrnetlab   DEBUG    joined cmd: qemu-system-x86_64 -enable-kvm -display none -machine pc
    -monitor tcp:0.0.0.0:4000,server,nowait -m 7000 -serial telnet:,server,nowait
    -drive if=ide,file=/None_968cbe26405a_vcgnat_4.3.2.qcow2,cache=unsafe -cpu host -smp 2 -monitor tcp:,server,nowait
    -device pci-bridge,chassis_nr=1,id=pci.1 -device virtio-net-pci,netdev=p00,mac=52:54:00:1e:cc:00
    -netdev user,id=p00,net=,tftp=/tftpboot,hostfwd=tcp::2022-,hostfwd=udp::2161-10.0.0.15:161,hostfwd=tcp::2830-,hostfwd=tcp::2080-,hostfwd=tcp::2443-
    -device virtio-net-pci,netdev=p01,mac=52:54:00:e2:38:01,bus=pci.1,addr=0x2
    -netdev tap,id=p01,ifname=tap1,script=/etc/tc-tap-ifup,downscript=no
  • Booting takes a few minutes; once it finishes, you can connect to vCGNAT via SSH:

    user:~$ ssh admin@<container-ip>
    admin@<container-ip>'s password:
    Last login: Thu Sep  8 12:09:14 2022
    Hello, this is NFWare OS.
    vcgnat# sh int brief
    Interface       Status  VRF             Addresses
    ---------       ------  ---             ---------
    if0             up      default
    lo              up      default
  • Add the veth interfaces in the main network namespace to a bridge and use them to pass traffic according to your needs:

    $ sudo brctl addif br-vcgnat eth101
    $ sudo ip a add dev br-vcgnat
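The commands above assume the bridge already exists. A minimal sketch using iproute2 only; the bridge name matches the step above, and the address 192.168.100.1/24 is illustrative:

```shell
# Create the bridge and bring it up
sudo ip link add br-vcgnat type bridge
sudo ip link set br-vcgnat up

# Attach the host side of the veth pair (equivalent to: brctl addif br-vcgnat eth101)
sudo ip link set eth101 master br-vcgnat

# Assign an address to the bridge (illustrative)
sudo ip addr add 192.168.100.1/24 dev br-vcgnat
```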