Zerotier Controller on Mikrotik CHR

As of RouterOS 7.4Beta4, Mikrotik have added container functionality back into RouterOS. I’ve had mixed experiences with containers over the years and never really took to them. However, running them on RouterOS provides an excellent opportunity to add some functionality to RouterOS that’s missing.

Most notable among these, for me at least, is the lack of Zerotier on CHR.

I absolutely love Zerotier as a relatively pain-free way of achieving a kind of quasi-SDN solution. However, the device limit on their free tier, while understandable, is a bit restrictive. You can get around that by running your own Zerotier controller. This functionality is baked into the Zerotier package for Mikrotik ARM devices, but it’s an older release of Zerotier, and there is more to be gained by using a more recent release.

So with containers available again, it was the perfect time to put them to the test and run a Zerotier controller on a CHR. The solution I opted for is ZTNCUI by Key Networks.

My test CHR has two CPU cores and I gave it 2GB of RAM. It turns out you don’t actually need that much RAM but anyway, the more the merrier.

Mikrotik suggest using dedicated storage for containers, so I added a second disk to the CHR to store them.

A word of advice here – when you configure the second disk through Winbox you need to use an EXT4 file system and DO NOT create a partition table! If you do, you will discover your CHR will not boot and will throw a panic. Don’t worry! Simply remove the second disk from your CHR config on your hypervisor and then you can add it again.

RouterOS Disk Format Dialog Box
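If you prefer the terminal over Winbox, something along these lines should do the same job – this is only a sketch, the item number comes from /disk print and the exact parameter names may differ between RouterOS versions:

/disk/print
/disk/format-drive 1 file-system=ext4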

So now you need to install the container package – you can do this by downloading the “Extras Package” for CHR (or your device) from the Mikrotik Download page. I assume you know how to install a package!
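Once the package is installed and the router has rebooted, you can confirm it’s there from the terminal:

/system/package/print

The container package should appear in the list.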

Before we go any further you need to configure RouterOS Device Mode – this was a new one on me and it did catch me out.

To enable containers you need to run this command from the terminal of your CHR:

/system/device-mode/update container=yes

Now you need to POWER OFF your CHR before the timer runs out. You have 5 minutes. DO NOT reset the power, DO NOT use the reboot command, and DO NOT press the reboot button in Winbox. You HAVE TO POWER OFF the CHR.
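When the CHR has been powered back on you can check that the setting took:

/system/device-mode/print

container should now show as yes.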

Once you’re back up and running, we can create our container. Note that in the instructions that follow my second disk is disk2:

Create a Bridge for our VETH devices that will be used for our container’s networking and give it an IP Address:

/interface/bridge/add name=dockers
/ip/address/add address=172.17.0.1/16 interface=dockers

Add a veth interface and give it an IP address for the container. You need one for each container you run.

/interface/veth/add name=veth1 address=172.17.0.2/16 gateway=172.17.0.1

Add the veth device to the docker bridge:

/interface/bridge/port add bridge=dockers interface=veth1

Normally at this stage you would define your environment variables; however, they are not needed for this container.
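For reference, if a container did need them you would create an environment list and reference it with envlist= when adding the container later – the TZ value here is just a made-up example:

/container/envs/add name=ztncui_envs key=TZ value=Europe/London

ztncui runs happily without any, so we can skip this.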

Next we need to define our mount points on our second disk:

/container/mounts/add name=etc_ztncui src=disk2/etc dst=/etc/ztncui

Now we define our Docker Registry where the image will be pulled from:

/container/config/set registry-url=https://registry-1.docker.io

Now we pull our image:

/container/add remote-image=keynetworks/ztncui:latest interface=veth1 root-dir=disk2/ztncui mounts=etc_ztncui logging=yes

Depending on your download speed, your container should be ready in a couple of minutes. You can check with this command:

/container print

You will see a number assigned to your container, and we now need to start it with this command:

/container/start 0

Now our container is running with IP address 172.17.0.2. Depending on your network topology you may be able to access this IP address directly. Alternatively you may need to add a masquerade firewall rule, or, as in my case, have your CHR advertise the docker subnet via OSPF.
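If you do go the NAT route, a masquerade rule along these lines will do it – out-interface is an assumption here, so substitute your own upstream interface:

/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/16 out-interface=ether1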

ZTNCUI listens on port 3443 and will be accessible at https://172.17.0.2:3443, but now you need to log in to it.

This is probably the most complicated part of the setup. When the ztncui image runs for the first time it writes a one-time password to its log file, and this file is not accessible from RouterOS itself.

To get around this I shared disk2 on my CHR using SMB on my internal interface and was then able to access the log file and retrieve the password.
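If you want to do the same, a share can be set up roughly like this – a sketch only, as the SMB menus have moved around between RouterOS 7 releases and the interface list here is an assumption:

/ip/smb/shares/add name=disk2 directory=disk2
/ip/smb/set enabled=yes interfaces=LAN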

As of 7.4RC2 you can now enter the shell of the running container with the following command:

/container shell 0

Where 0 is the number of your running container, as shown by /container print.

The logfile is disk2/ztncui/var/log/docker-ztncui on my CHR.
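If you go the container shell route instead, the same file should sit under /var/log inside the container – my assumption, based on the root-dir used earlier and assuming the image ships a shell with cat:

cat /var/log/docker-ztncui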

Log into the portal with user admin and the password at the top of the logfile. You will be prompted to change the password.

As you will likely no longer need SMB, be sure to disable the SMB share!

If you successfully log into the ztncui console you will see that it’s similar to the actual Zerotier web console, and you can go ahead and create your first self-hosted Zerotier network.
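From there it’s the usual Zerotier workflow – create a network in ztncui, join your clients to the network ID it gives you, and authorise them in the ztncui console. On a client that would look something like this (the network ID is obviously your own):

zerotier-cli join <network-id>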
