Migrating from VMware ESXi to Proxmox

So, the news that VMware is going subscription-only has not gone down well with some home labbers, although lots of home labbers are already using Proxmox. Since Broadcom’s announcement of its purchase of VMware I’ve been slowly transitioning over to Proxmox and phasing out ESXi at home. There is a touch of sadness, as I used to work for VMware and always loved using their products.

Assuming you have a Proxmox host or two up and running – how do you move a VMware VM over to it?

The simplest solution I’ve discovered is to have a common NFS share between an ESXi host and a Proxmox host. In my home lab I have an NFS share called FS02VMs that is mounted in both environments. If you are running vCenter, you need to migrate your VM’s storage to the NFS share.

Once the VM is moved to NFS, you can go to your Proxmox web interface and create a new VM with settings that match, or are close to, those of the VM you want to import. It is important here that you do not create a hard disk in Proxmox for this VM, as we will be importing it from the NFS share. Once the VM is created, make a note of the VM ID. You will also need to make a note of the original VMware VMDK name.
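As a sketch, creating the shell of such a VM can also be done from the Proxmox command line. The VM ID and name below are the ones used later in this post, but the memory, core and NIC settings are assumptions you should adjust to match your source VM:

```shell
# Hypothetical settings: 2 cores, 2GB RAM, virtio NIC on bridge vmbr0.
# Note there is deliberately no disk option: the disk gets imported later.
qm create 110 --name test.lab.sweetnam.eu --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci
```

The web interface does exactly the same thing, which is what the steps above describe.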

The Proxmox host I want to import to has local LVM storage attached called NVME_LVM, and the VM I want to import is an Ubuntu Linux 22.04 VM called test.lab.sweetnam.eu.

The first step involves connecting to your Proxmox host via SSH and changing directory to the NFS mount where your ESXi VM is located. NFS mounts live under /mnt/pve and, as mentioned already, mine is called FS02VMs, so I need to change directory to /mnt/pve/FS02VMs.

Under this directory is the folder containing my ESXi VM, so change directory into it. The VMDK I want to import is called test.lab.sweetnam.eu.vmdk

There may be other VMDK files, including ones with flat.vmdk in the name. We only want the main descriptor VMDK, which may be just a kilobyte or so in size!
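To illustrate the distinction, a descriptor VMDK is just a small text file whose "Extent description" points at the big -flat.vmdk holding the actual data. The fabricated example below (file names and extent size are made up) shows what to look for:

```shell
# Build a minimal, fake descriptor VMDK to inspect:
tmp=$(mktemp -d)
cat > "$tmp/test.vmdk" <<'EOF'
# Disk DescriptorFile
version=1
createType="vmfs"

# Extent description
RW 83886080 VMFS "test-flat.vmdk"
EOF
# The descriptor is tiny and plain text, and names the flat file it wraps:
grep 'flat.vmdk' "$tmp/test.vmdk"
```

If `file yourvm.vmdk` reports ASCII text and the file is only a few hundred bytes, that is the one to feed to the import.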

So now you have your Proxmox VM ID, the storage you want to import to, and the VM ready on Proxmox to attach the disk to. Time to begin.

From the directory with the VMDK you just need to type in the following command:

qm importdisk 110 test.lab.sweetnam.eu.vmdk NVME_LVM --format raw

You can see the VM ID is 110 and the local storage is NVME_LVM.

After a while it will be imported and attached to the VM but not quite ready for use yet.

You need to go to the VM’s Hardware page, where you will see that your imported disk is there but unused. Simply double-click on it to get your VM to use it. It is usually fine to keep the defaults. If your storage is SSD-backed, you should tick the “Discard” checkbox. If your VM is Windows 10, 11, Server 2019 or newer, you need to make a few more changes. More on this further below.

Almost ready now to boot the VM for the first time, but first we need to make sure the VM is configured to boot from the disk. Again in the VM settings, click on Options and then “Boot Order”. Remove the checks from all entries except your hard disk. Now go to the console, start your VM and get ready for some guest configuration.

Linux

If your imported disk is Linux, the first thing you will notice is that the VM may take a very long time to boot and/or you have no network connectivity. This is because the Ethernet device name will have changed.

Simply run dmesg | grep eth to find the new interface name. For my test VM the original interface was ens192 and the new device name is ens18.

For Ubuntu 20.04 and 22.04 you need to edit the netplan config located in /etc/netplan. There is probably only a single file in that directory, and I have seen it with a few different names. In my VM it is called 00-installer-config.yaml

All you need to do is use your favourite editor to change the old interface name to the new one.
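That edit can also be scripted. A minimal sketch, using the interface names from my VM (yours may differ) and a temporary file standing in for the real netplan config:

```shell
# Hypothetical netplan file using the old VMware interface name ens192:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true
EOF
# Swap every occurrence of the old name for the new virtio one:
sed -i 's/ens192/ens18/g' "$cfg"
cat "$cfg"   # the ethernets key now reads ens18
```

After editing the real file under /etc/netplan, `netplan apply` activates the change.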

If you are using Debian or an earlier Ubuntu version then you will need to change the interface name in the file /etc/network/interfaces

Regardless of your distribution you should remove open-vm-tools and also install the QEMU Guest Agent. On Debian/Ubuntu this is a simple apt install qemu-guest-agent.
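On Debian/Ubuntu the whole swap can be done in three commands (assuming systemd; run as root). The agent only becomes useful once the matching VM option is enabled in Proxmox:

```shell
apt remove --purge open-vm-tools    # drop the VMware guest tools
apt install qemu-guest-agent        # install the QEMU guest agent
systemctl enable --now qemu-guest-agent
```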

Once the guest agent is installed, power off the VM, edit the Options and make sure “QEMU Guest Agent” is enabled; the setting must be changed while the VM is powered off. Once it’s enabled, power on your VM and you should be good to go!

Windows

If you are using a recent version of Windows (10 and up, or Server 2019 and up) you will need to change the BIOS from Proxmox’s default SeaBIOS to OVMF (UEFI) in the hardware settings.

As the VM is imported from VMware it should have the required EFI partitions on it already so you can ignore any EFI messages that appear.

Once your Windows VM boots successfully you probably won’t have networking. But first things first: uninstall VMware Tools and remove any leftover VMware devices in Device Manager.

You will also need to install the VirtIO drivers. For this you will need to download the latest ISO from here:

https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/?C=M;O=D

You can upload the ISO to your Proxmox host and then add it as a CD-ROM device to your VM.

Run the installer in your VM and shutdown the VM when complete. Next edit the VM Options and make sure “QEMU Guest Agent” is enabled and power on the VM.

Once booted you probably won’t have any network connectivity, or you’ll have a DHCP address where you were using a static one. In either case, simply change the network settings on the VirtIO network adapter.

FreeBSD

Importing a FreeBSD disk is fairly straightforward; however, my boot device name changed on first boot and it couldn’t mount the root file system. Thankfully FreeBSD will prompt you to press ? for a list of devices, and you can enter the relevant one to mount your root filesystem.

MAKE A NOTE of the devices. In my case there were two: the root UFS file system, now called /dev/ada0s1a, and the swap, now called /dev/ada0s1b.

Once booted edit /etc/fstab and change the devices to the new ones.

You will also probably find, as with Windows and Linux, that you have no networking. In that case, get your new network card name with dmesg | grep Ethernet and then edit /etc/rc.conf to change the old name to the new one. Then reboot!
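As a sketch, the rc.conf edit looks like this. The interface names (vmx0 for the old VMware NIC, vtnet0 for the new virtio one) are assumptions, and a temp file stands in for the real /etc/rc.conf:

```shell
# Hypothetical rc.conf fragment with the old VMware NIC name:
rc=$(mktemp)
cat > "$rc" <<'EOF'
hostname="test.lab.example"
ifconfig_vmx0="DHCP"
EOF
# GNU sed shown here; FreeBSD's sed wants: sed -i '' 's/vmx0/vtnet0/g'
sed -i 's/vmx0/vtnet0/g' "$rc"
grep ifconfig "$rc"
```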

If it boots correctly you can then proceed with removing open-vm-tools, if installed, and installing the qemu-guest-agent. As with Windows and Linux, you will need to shut down the VM and power it on again for Proxmox to pick up the QEMU Guest Agent.

Conclusion

It takes quite some time to import even small 40GB VMs, but this is a network limitation more than anything else.

I completed my migration today and I now have 4 hosts with 14 VMs, all 14 of which were imported from ESXi. There are 3 Windows VMs, 2 FreeBSD VMs and 9 Debian and Ubuntu VMs, all now performing as well as they did on ESXi.

Baby Jumbo Frames and PPPoE

It’s quite common for ISPs to assign static IP addresses to their customers via PPPoE rather than via DHCP. Eir, the largest ISP in Ireland, is no exception, and since 2005 I think I’ve had the same IP address assigned to me via ADSL, VDSL and now FTTH via PPPoE.

One of the first things you may notice, or be told as in my case many years ago when I first got my static IP, is that because of the 8 bytes of overhead introduced by PPPoE, the MTU needs to be set to 1492 bytes instead of the usual 1500.

MTU stands for Maximum Transmission Unit and refers to the maximum size of a packet, or piece of data, that can be transferred across a network at any one time. This is also known as a frame. Frames don’t have to be the same size, but the de facto standard is the above-mentioned 1500 bytes.

For example, a 1 Gigabyte file is 1,073,741,824 bytes. With a standard 1500-byte frame size, that 1GB file will be transferred across a network or the internet in 715,828 frames or packets (rounded up!)

Because the PPPoE overhead is 8 bytes, it needs to be subtracted from the standard 1500 bytes, which is where the maximum value of 1492 comes from. The same 1GB file at an MTU of 1492 would therefore require 719,666 packets. It doesn’t sound like a lot, but for large transfers it can have a significant impact because of packet fragmentation.

I already mentioned above that frames don’t have to be the same size, but fragmentation occurs when two devices (or something else in their path) have different MTU sizes. An interface configured for an MTU of 1492 receiving packets or frames from a device with an MTU of 1500 needs to fragment the packets and reassemble them, and vice versa. A lot of fragmentation can put a significant load on a router. Take my own router as an example: it can route 1518-byte packets at a maximum of 9851Mbps and 512-byte packets at 9551Mbps, but with 64-byte packets it can only manage 3128Mbps.

In the greater scheme of things you might not notice any difference a lower MTU makes however some applications may not work well in this environment. To complicate matters, some routers ignore packet-too-big messages and keep sending packets that exceed the MTU. (Ref).

Where this really causes issues is when using methods to transfer data that were never really designed to traverse the internet such as Windows SMB/CIFS file shares, NFS, etc.

So what’s a Jumbo Frame? As mentioned above, the de facto standard frame size for Ethernet is 1500 bytes. A Jumbo Frame is therefore anything larger than that, although the term typically refers to a frame that’s much larger than 1500 bytes. The term Baby Jumbo or Baby Giant specifically refers to a frame that’s only a fraction larger than 1500 bytes.

Using a baby jumbo frame on your PPPoE connection means setting the MTU on your router’s PPPoE-facing interfaces to 1508 bytes to allow for the 8 bytes of PPPoE overhead, thereby allowing a full 1500-byte frame inside the tunnel.

So before you change anything, you need to know that baby jumbo frames are wholly dependent on two things:

  1. Your ISP’s infrastructure (they might not support baby jumbos)
  2. Your router (also might not support baby jumbos)

As I mentioned already my ISP is Eir and they do support baby jumbos. I use Mikrotik routers and they also support baby jumbos. However there are a few hoops to jump through first.

Eir FTTH is delivered via a VLAN, specifically VLAN 10. PPPoE is usually configured as a subinterface of an existing interface. In Mikrotik’s world a VLAN interface is itself a subinterface of an existing interface, which means there are three interfaces to change the MTU on!

In my setup at home using my RB5009, ether1 is connected to the FTTH ONT. To get service I added a VLAN interface to ether1 called EIR_VLAN_10 and because I use PPPoE I added a PPPoE interface to EIR_VLAN_10. Sounds convoluted but not really!

So we have two interfaces where the MTU needs to be set to 1508 (ether1 and EIR_VLAN_10), and the PPPoE interface itself needs to be set to 1500. A picture speaks a thousand words, apparently, so here’s a screenshot of my configuration:

Screenshot of Mikrotik Winbox showing interface information for baby jumbo configuration.
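For reference, the same configuration can be applied from the RouterOS CLI along these lines. The interface names are from my setup, and the PPPoE client name (pppoe-out1) is an assumption; treat this as a sketch rather than a drop-in config:

```
/interface ethernet set ether1 mtu=1508
/interface vlan set EIR_VLAN_10 mtu=1508
/interface pppoe-client set pppoe-out1 max-mtu=1500 max-mru=1500
```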

Zerotier Controller on Mikrotik CHR

As of RouterOS 7.4beta4, Mikrotik have added container functionality back into RouterOS. I’ve had mixed experiences with containers over the years and never really took to them; however, running them on RouterOS provides an excellent opportunity to add some functionality to RouterOS that’s missing.

Most notable among these for me at least is the lack of Zerotier on CHR.

I absolutely love Zerotier as a relatively pain-free way of achieving a kind of quasi-SDN solution. However, the device limit on their free tier, while understandable, is a bit limiting. You can get around it by running your own Zerotier controller. This functionality is baked into the Zerotier package for Mikrotik ARM devices, but it’s an older release of Zerotier; there is more to be gained by using a more recent release.

So with containers available again it was the perfect time to put it to the test and run a Zerotier controller on a CHR. The solution I opted for is ZTNCUI by Key Networks.

My test CHR has two CPU cores and I gave it 2GB of RAM. It turns out you don’t actually need that much RAM, but anyway, the more the merrier.

Mikrotik suggest using dedicated storage for the containers and I added a second disk to the CHR to store the containers.

A word of advice here: when you configure the second disk through Winbox you need to use an EXT4 file system and DO NOT create a partition table! If you do, you will discover your CHR will not boot and will throw a panic. Don’t worry! Simply remove the second disk from your CHR config on your hypervisor and then you can add it again.

RouterOS Disk Format dialog box

So now you need to install the container package. You can do this by downloading the “Extra packages” archive for CHR (or your device) from the Mikrotik download page. I assume you know how to install a package!

Before we go any further you need to configure the RouterOS device-mode. This was a new one on me and it did catch me out.

To enable containers you need to run this command from the terminal of your CHR:

/system/device-mode/update container=yes

Now you need to POWER OFF your CHR before the timer runs out. You have 5 minutes. DO NOT pull the power, DO NOT use the reboot command and DO NOT press the reboot button in Winbox. You HAVE TO POWER OFF the CHR.

Once you’re back up and running now we can create our container. Note that in the instructions that follow my second disk is disk2:

Create a bridge for the VETH devices that will be used for the container’s networking and give it an IP address:

/interface/bridge/add name=dockers
/ip/address/add address=172.17.0.1/16 interface=dockers

Add a veth interface and give it an IP address for the container. You need one for each container:

/interface/veth/add name=veth1 address=172.17.0.2/16 gateway=172.17.0.1

Add the veth device to the dockers bridge:

/interface/bridge/port add bridge=dockers interface=veth1

Normally at this stage you would define environment variables for the container; however, they are not needed for this one.

Next we need to define our mount points on our second disk:

/container/mounts/add name=etc_ztncui src=disk2/etc dst=/etc/ztncui

Now we define our Docker Registry where the image will be pulled from:

/container/config/set registry-url=https://registry-1.docker.io

Now we pull our image:

/container/add remote-image=keynetworks/ztncui:latest interface=veth1 root-dir=disk2/ztncui mounts=etc_ztncui logging=yes

Depending on your download speed, your container should be ready in a couple of minutes. You can check with this command:

/container print

You will see a number assigned to your container, and we now need to start the container with this command:

/container/start 0

Now our container is running with IP address 172.17.0.2. Depending on your network topology you may be able to reach this IP address directly; otherwise you may need to add a masquerade firewall rule or, as in my case, have your CHR advertise the Docker subnet via OSPF.
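If you do need the masquerade rule, a minimal sketch would be something like the following. The out-interface name (ether1) is an assumption for whichever interface faces the rest of your network:

```
/ip firewall nat add chain=srcnat src-address=172.17.0.0/16 out-interface=ether1 action=masquerade
```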

ZTNCUI listens on port 3443 and will be accessible on https://172.17.0.2:3443, but now you need to log in to it.

This is probably the most complicated part of the setup. When the ztncui image runs for the first time it writes a one-time password to its log file. This file is not accessible from RouterOS itself.

To get around this I shared disk2 on my CHR using SMB on my internal interface and was then able to access the log file and retrieve the password.

As of 7.4rc2 you can enter the shell of a running container with the following command:

/container shell 1

Where 1 is the number of your running container.

The logfile is disk2/ztncui/var/log/docker-ztncui on my CHR.

Log into the portal with user admin and the password at the top of the logfile. You will be prompted to change the password.

As you will likely no longer need SMB, be sure to disable the SMB share!

If you successfully log into the ztncui console you will see it’s similar to the actual Zerotier web console, and you can go ahead and create your first self-hosted Zerotier network.

Two Weeks with Starlink

I had almost forgotten that way back in February I had pre-ordered Starlink until I got an e-mail to say it was now available in Ireland and that I could pay the balance and they would ship Dishy to me. I completed the order. The following day I got an e-mail to say that Dishy had shipped from Los Angeles and just one day later a DHL driver was at my door handing over a very large box!

It all happened so fast that I didn’t really have time to consider where I was going to mount it. I live in the middle of a town at the bottom of a hill and I’ve a very small yard at the back of the house. Luckily I have very high walls and as fate would have it I had recently cleared the ivy off them as my neighbour had recently cleared his side of the wall so I had no choice but to reciprocate!

Dishy Dishy on the Wall

Above you can see dishy in his temporary mounting position. I’m waiting for a custom pole adapter to be made and soon dishy should be about 4m above its current position.

Dishy needs a 100° field of view from the vertical. That is, when it is powered up the flat front faces straight up, and if you visualise a 100° cone extending upwards from the dish, that’s all the clear sky it needs. It’s actually not a huge amount, but absolutely no obstructions is essential for a decent service. There is a diagram of that field of view and some excellent installation advice on this page here.

Once dishy is placed and the 100ft cable is ready, all you need to do is plug the cable for the router and dishy into the supplied PoE block and then finish the setup using the Starlink app on your phone. It’s very straightforward and in no time at all your dishy will start searching for satellites.

It does take about 10 to 15 minutes for Dishy to connect and settle a bit. After a short while I ran a speed test and immediately got a speed around 180Mbps. So far so impressive. Latency is between 30ms and 40ms. But then after a few hours I saw peak speeds of around 290Mbps.

For your first day you can expect plenty of outages. The service is in beta, and when you sign up it is made clear that some outages are to be expected. In the statistics section of the app these take two forms: Beta outages, which are expected from time to time, and Other outages, which can be caused by obstructions or no satellites being visible.

For the first couple of days you will get outages as both Dishy and the router automatically update.

You can use your own router rather than the Starlink-supplied router. This is actually what I have done: I’m using a spare Mikrotik RB750Gr3 into which the white cable from the PoE block is plugged. It will receive a CG-NAT IP address and works perfectly. However, to access the stats in the app you need to make sure you set up a route to Dishy, which has its own IP address of 192.168.100.1. In most routers this is fairly straightforward. Here is what it looks like for Mikrotik, with ether1 being the interface that is plugged into the PoE block.

Mikrotik Static Route
Screenshot of Mikrotik Static Route to Dishy
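In RouterOS CLI terms, the static route in the screenshot amounts to something like the following sketch, with ether1 as the interface facing the PoE block as described above:

```
/ip route add dst-address=192.168.100.1/32 gateway=ether1
```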

Within a day I noticed my speeds were rarely going above 100Mbps. The app was constantly warning of “Poor Ethernet Connection”. Naturally I checked all the connections and they all seemed fine so I rebooted everything and speeds hit over 200Mbps again. For about 10 minutes. Then the dreaded “Poor Ethernet Connection” error appeared again.

After several reboots and the same behaviour being observed I decided to open a support ticket. You can do this through the app. Within 40 minutes of opening the ticket I got a phone call from Starlink support who ultimately decided there was an issue with the Dish itself and that they would send me out an entire new kit. A few days later I got an e-mail notifying me that the replacement kit had shipped so I boxed up the original kit to be shipped back and one day later the new kit had arrived from LA! Absolutely stellar service from Starlink I must say.

With Dishy 2 installed there was an instant improvement. Over a week later and I haven’t seen a “Poor Ethernet Connection” error and speeds are consistently between 130 and 290Mbps.

For a few days there were quite a few “No Satellites” errors, but it seems I wasn’t the only person in this part of the country to get them. In the last few days Dishy appears to have received another firmware upgrade and I have had absolutely no errors or downtime. A quick check of the stats shows it’s reporting 2 seconds of downtime in the last 12 hours. I’ll happily take that!

Dishy Ping Stats

From the get go Starlink made big claims as to what they expected from their service and personally I think it has already exceeded my expectations. There are more satellites being launched each month and the service will advance as a result. It’s a perfect solution for both rural and some urban locations alike. Particularly if you are being left out of the Fibre Rollouts underway in Ireland. It also for the moment at least has none of the contention issues of some LTE masts and also has no measly data cap.

There are some disadvantages though:

  • You need a clear unobstructed field of view for the dish.
  • It’s expensive at €500 to purchase the equipment and €99 per month.
  • You also need to expect outages as the service is in Beta.
  • It uses CG-NAT so you won’t be able to host behind it. (Not an issue for most!)
  • Latency might be too high for hardcore gamers.

Once you are aware of these issues and you still want to go ahead I would recommend it in a heartbeat.

LTE Broadband 4G+ – Playing with LTE Bands

My fixed line broadband is VDSL supplied by Ireland’s largest Telco Eir. By and large I’m getting a pretty good service. I have a static IP so my eir F3000 modem (Sagemcom) is bridged so my PPPoE connection is handled by a Mikrotik RB3011.

Occasionally however DSL resyncs itself and always at the most inopportune times. It got so bad at one stage (up to 7 times a day) that I decided to replace the main telephone wiring in the house using two pairs of a Cat6 cable and decided to get a pay as you go SIM card for an LTE backup connection.

The rewiring did the trick on the DSL sync issues. Now it only resyncs every 4 or 5 days. However the backup link is quite handy for offloading some other traffic so you can see in my previous posts how I implemented policy based routing to utilise the LTE link.

I opted to go for a 3 SIM only pay as you go package and for the router I chose a Mikrotik Chateau LTE12.

As I live at the bottom of a hill I also opted for an external antenna, and I’m using a dual-polarisation Yagi antenna in the attic pointed in the general direction of the nearest mast. The Mikrotik Chateau supports carrier aggregation and initially chose LTE bands 3 (1800 MHz) and 20 (800 MHz) for itself.

Band 3 on the 3 network has double the bandwidth of Band 20 available, and 2100 MHz has more available again.

Initial speed tests to my own server in a data centre in Cork City showed that speeds were not at all bad. Depending on the time of day it was possible to get speeds of up to 80Mbps. Not too shabby and on a par with my VDSL connection.

LTE carrier aggregation test on 3 Network. Bands 3 and 20

Above was a test this morning at approximately 09:00

I know that LTE Band 1 (2100 MHz) was temporarily repurposed and licensed for 4G to provide extra capacity for the home working and home schooling made necessary by the Covid pandemic. However, I never saw it available near me until this morning.

For some reason the Chateau couldn’t use Band 1 with Band 3, but it could use Band 1 with Band 20. After configuring it to use these bands I performed another speed test, and I think the result speaks for itself:
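On the Chateau this band selection can also be done from the CLI. A sketch, assuming the LTE interface has the default name lte1:

```
/interface lte set lte1 band=1,20
```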

LTE speedtest on 3 Network using Bands 1 and 20.

A quick check of the router and it seems to be using Band 1 for CA as well:

LTE Statistics from Mikrotik WinBox

The performance increase from Band 1 speaks for itself really! However, the repurposing of 2100 MHz (Band 1) is only valid until the 1st of July 2021, which is only 18 days from the time of writing. Here’s hoping they extend the license!

FreeBSD HDMI Audio

I have multiple small-form-factor desktops that I use on a day-to-day basis, each with its own operating system. One is running Windows 10, one is running Manjaro Linux, and on another I decided I was going to run a FreeBSD desktop. Just to be different!

I have all the desktops connected to an HDMI KVM switch, which means I can use the headphone socket on the monitor to listen to whichever desktop I happen to be using. And while the desktops use DisplayPort, I use HDMI converters and have had no issues with audio. Until trying to play something on the FreeBSD desktop, that is.

FreeBSD uses OSS as its default audio interface. While I could get no audio out of the monitor via HDMI, I could get audio from a device plugged into one of the two jacks on the desktop. So I had to do a bit of poking about.

The first thing you need to do is to see how the audio devices are enumerated by the kernel at boot time. Like most desktops and laptops these days there are usually two audio hardware devices. My desktop is no different in having a Realtek ALC233 on board audio (that uses the two jacks on the PC) and an Intel HD Audio Device.

Whichever one is enumerated first becomes the default audio device. In my case it was the Realtek device.

You can check this yourself with the following command (as root):

cat /dev/sndstat

You will see something like this:

results of the command cat /dev/sndstat

Here you can see the Realtek is enumerated first as both pcm0 and pcm1.

The important thing to remember here is that while three devices are listed, there are actually just two audio devices, which means the HDMI device is device 2.

To make the HDMI audio device the default device you need to execute a sysctl command as root:

sysctl hw.snd.default_unit=2

If this works and your audio is now routed correctly you will need to make the change permanent.

To do this you need to edit the file /etc/sysctl.conf and add the line we used above (minus the sysctl part) to it. Mine looks like this:

My sysctl.conf file
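The edit can be made idempotent, so rerunning it never duplicates the line. A sketch, using a temp file in place of the real /etc/sysctl.conf:

```shell
# Append hw.snd.default_unit=2 only if it is not already set:
conf=$(mktemp)
grep -q '^hw.snd.default_unit=' "$conf" || echo 'hw.snd.default_unit=2' >> "$conf"
cat "$conf"
```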

Now your HDMI audio output should persist after a reboot.

Or at least it should! In my case on this particular desktop I had no audio at all. Even though the HDMI audio was set correctly to default! It was driving me crazy until I realised that the desktop has two Display Port sockets at the back. I switched the cable to the other one and finally I had sound.

Mikrotik RouterOS: Simple Policy Based Routing

So let’s say you have multiple ISPs or different rules for VLAN traffic and you want a simple way to define which network routes through which gateway.

In this example we have a routing table main and a routing table 4G

  • Routing table main uses the default route which is a PPPoE connection on the same router.
  • Routing table 4G uses a secondary router which has IP Address 172.20.0.29

Defining Routes

First we need to define our default route. This may already exist:

/ip route add dst-address=0.0.0.0/0 gateway=PPPoE

Next we need to add a route to the second gateway with a routing mark.

/ip route add dst-address=0.0.0.0/0 gateway=172.20.0.29 routing-mark=4G

Once you are finished your Route List in Winbox will look similar to this:

This is self-explanatory, all external traffic will go through the PPPoE interface and all traffic marked 4G will route through 172.20.0.29.

Now we need to define which subnets have the routing rules applied. For this example we want all traffic from 192.168.200.0/24 to go through gateway 172.20.0.29:

/ip route rule add src-address=192.168.200.0/24 dst-address=0.0.0.0/0 action=lookup table=4G

However now all local traffic from 192.168.200.0/24 will go through the 4G router rather than the main router.

To resolve this you need to add a rule for the local network before the rule above to route local traffic using the router’s main lookup table:

/ip route rule add src-address=192.168.200.0/24 dst-address=172.20.0.0/16 action=lookup table=main

Here is a screenshot of my Router on a Stick configuration that handles VLAN traffic on my LAN.

You can see there are three lookup tables and three subnets with their associated routing rules:

It’s fairly obvious, but you can see how rules are processed in order, and that subnet 192.168.200.0/24 uses the 4G lookup table while the 192.168.101.0/24 subnet uses the UBNT table.

Mikrotik RouterOS: VLANs

VLANs on RouterOS used to be a bit of a dark art and were very much dependent on the hardware: what worked on one device might not work on another. Thankfully that changed in the 6.3x releases, and now there’s a standard way using bridge VLAN filtering.

A quick note before we begin: not all Mikrotik hardware supports hardware-based VLAN filtering, and devices that don’t, like the CRS-125 in my example, will rely heavily on the CPU instead.

You’re probably familiar with how RouterOS bridges work? If not, it’s simply a matter of creating a bridge in Winbox and then adding every port to it.

With this done, the first thing we want to do is make sure VLAN filtering is disabled so we don’t lose connectivity to our switch:

/interface bridge set CoreNet vlan-filtering=no

Next, add the bridge VLAN entries. For access ports you also need to set a pvid on the bridge port itself (under /interface bridge port) so that their untagged traffic is assigned to the intended VLAN.

ether1 is our trunk and needs to be tagged; ether16 will be an access port on VLAN 999, so it is untagged and has a pvid of 999:

/interface bridge vlan
add bridge=CoreNet tagged=ether1 untagged=ether16 vlan-ids=999

Repeat for each VLAN or interface you desire. For example, to add ether5 to VLAN 40:

/interface bridge vlan
add bridge=CoreNet tagged=ether1 untagged=ether5 vlan-ids=40

Once you’re happy, enable Bridge VLAN filtering and hope everything works!

/interface bridge set CoreNet vlan-filtering=yes

That’s it! You’re done!