OpenVZ: Bridged IPv6 subnets

I've been working on a gre tunnel interface for a while, and I wanted my OpenVZ host to leave the tunnel-related services to the containers themselves: instead of assigning each single IPv6 address manually via vzctl, addressing should be handled from inside the container. As long as vzctl and the venet interfaces are used, though, it has to be done the manual way. With OpenVZ this is not entirely obvious, since the documentation is not always collected in one place.

As a matter of fact, after searching for half a day, I think I've got it covered. First, make sure you're not using vzctl --ipadd when you're adding a larger subnet. Let's use an example:

vzctl set <ctid> --ipadd 2a01:299:a0:7000::/64

The example above will only assign one IP (2a01:299:a0:7000::) to your container, not the entire subnet. To get more addresses this way, you have to make vzctl add them one by one: 2a01:299:a0:7000::1/64, 2a01:299:a0:7000::2/64, and so on; a small loop for that is sketched right after the next snippet. The real magic occurs when you start using veth and brctl correctly. To make it quick:

vzctl set <ctid> --netif_add eth0 --save

# Find the right veth-interface and ...
brctl addif br0 veth-interface
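
As an aside, if you do stay on venet, the one-address-at-a-time approach mentioned above can at least be scripted. A rough sketch using the example prefix (the range of ten addresses is arbitrary):

# Add the first ten addresses of the example prefix via venet
for i in $(seq 1 10)
do
    vzctl set <ctid> --ipadd 2a01:299:a0:7000::${i}/64 --save
done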

In the OpenVZ release I use, the bridging is set up by linking br0 with the created veth interface. How to identify this interface when running more than one container is currently undiscovered ground, since, again, the documentation is not very clear on this. I've seen names like veth101.0 being used, but in my case, with Virtuozzo 7.x, I get interfaces like veth123a4bcd, and they are a bit hard to identify and connect to the right container. This should be handled automatically by scripts that run during the OpenVZ startup sequence, but that part is still undiscovered.
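
One way to at least map host-side veth names to containers, assuming legacy-style vzctl config files with a NETIF= line (the layout may differ on Virtuozzo 7):

# Show which host_ifname each container config declares
grep -H host_ifname /etc/vz/conf/*.conf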

Instead, I've created a cron job that makes sure all the interfaces really are linked up after the server boots (which is probably a security concern if you host many containers for many users, as bridging opens up the network a bit more than you probably wish):

#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Collect all veth interfaces currently visible on the host
veth=$(ifconfig | grep veth | sed 's/:/ /' | awk '{printf $1 " "}')
echo "Bridging VE interfaces..."

for interface in $veth
do
    # Only bridge interfaces that are not already attached somewhere
    hasInterface=$(brctl show | grep ${interface})
    if [ "" = "$hasInterface" ] ; then
        echo brctl addif br0 ${interface}
        brctl addif br0 ${interface}
    fi
done
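
The script can then be run at boot and on a schedule; an /etc/cron.d entry along these lines should do, where the path is just an assumption about where the script lives:

# /etc/cron.d/bridge-ve - path and schedule are examples, adjust to taste
@reboot     root /usr/local/sbin/bridge-ve-interfaces.sh
*/5 * * * * root /usr/local/sbin/bridge-ve-interfaces.sh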

With this ugly little one we make sure that unbridged interfaces really get bridged, and only once, so brctl does not have to re-run the setup all the time. I think there are much better ways of doing this, however. The last thing to do after this setup is to actually assign the subnets properly. On the host server:

ip -6 route add A:B:C:D::/64 dev br0
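
For the routed prefix to actually be forwarded onto br0, the host also needs IPv6 forwarding enabled. The original notes do not mention it, but it is usually required:

# Enable IPv6 forwarding on the host (persist it in sysctl.conf once it works)
sysctl -w net.ipv6.conf.all.forwarding=1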

Inside the container:

#!/bin/bash

# Local bridge
ip -6 addr add A:B:C:D::/64 dev eth0

# Gateway
ip -6 route add <gateway> dev eth0

# Do not route ipv6 via venet0
ip -6 route del default dev venet0

# Route IPv6 default via the gateway on eth0
ip -6 route add default via <gateway> dev eth0
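
A quick way to verify from inside the container that the address and default route took effect; the Google DNS address is just a convenient public IPv6 target:

ip -6 addr show dev eth0
ip -6 route show
ping6 -c 3 2001:4860:4860::8888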

New updates

On newer OpenVZ releases neither tun/tap nor gre tunneling seems to be a problem anymore. SIT, however, remains impossible to run. Most comments on the internet are very likely old, linking to userspace applications that have to be compiled, or to a note that just says: "You have to compile it into the kernel as it is disabled by default. Security issues." Many people also say that IF you run SIT you should probably handle most of the traffic on the host server, without any further examples of HOW they mean. The example below IS a live example, even if the solution itself, as always, does not work properly. By running those commands I at least managed to see signs of life from the tunnel server, which in this case was a Hurricane Electric tunnel. The setup also has a static IP address that HE can reach.

The only problem with the example below is that I still struggle with the most common error of them all: protocol 41 port 0 unreachable. The connection itself also "demands" that you are not using the venet links. The example tries to connect to HE Fremont.

vzctl set 7030 --netdev_del he-fremont --save
vzctl restart 7030
vzctl exec 7030 ip addr add <ipv4-address> dev eth0
ip tunnel add he-fremont mode sit remote 72.52.104.74 local any ttl 255 dev <interface>
vzctl set 7030 --netdev_add he-fremont --save
vzctl exec 7030 ip link set he-fremont up
vzctl exec 7030 ip tun change he-fremont local <ipv4-address>
vzctl exec 7030 ip addr add <ipv6-address> dev he-fremont
vzctl exec 7030 ip route add ::/0 dev he-fremont
vzctl exec 7030 ip addr add <ipv6-address> dev eth0
vzctl exec 7030 ip -f inet6 addr

Trying to ping the addresses added above makes tcpdump react, so something is going on. But since protocol 41 fails here, that is as far as it goes. If someone has a fantastic solution for how to activate ipv6/protocol 41 in a VE, please tell!
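
For anyone debugging the same thing: a generic tcpdump invocation like this on the host (not part of the original notes) shows whether protocol 41 traffic from the HE endpoint arrives at all; eth0 is assumed to be the public interface:

# Watch for 6in4 (protocol 41) packets to or from the HE Fremont server
tcpdump -ni eth0 'ip proto 41 and host 72.52.104.74'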

OpenVZ and sit-tunnels (Hurricane Electric) +openvpn

It has become absolutely clear to me that sit tunneling on OpenVZ is practically impossible. A technician from Hurricane Electric summed it up with the following comment:

Ok, I’ve been handling a bunch of tickets opened by folks now trying to get OpenVZ or Virtuozzo set up. The common mistake being done is people trying to bring up the tunnel inside the virtualized server. You MUST set up the tunnel on the OS that runs the physical machine. Then you can assign IPv6 addresses to the virtualized servers from your routed allocations.

And he's definitely not joking. Setting up tunnels based on sit does not work inside a container, regardless of the method you use. However, gre tunneling actually seems to work with a bit of effort. First of all, you have to make sure the relevant modules and interfaces are really available on the host.

To make tunneling with sit work, the tunnel has to be added on the host, not in the VPS itself; a rough host-side sketch follows below. Inside a container I have currently found no way of making sit work: either I get no permission to the interface, or I get "No buffer space available".
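
For reference, this is roughly what the host-side setup the HE technician describes looks like. The Fremont endpoint is the one used above, while <host-ipv4> and <client-ipv6> are placeholders for the values from the HE tunnel details page, not a tested configuration from this post:

# On the host, not in the container
ip tunnel add he-ipv6 mode sit remote 72.52.104.74 local <host-ipv4> ttl 255
ip link set he-ipv6 up
ip addr add <client-ipv6>/64 dev he-ipv6
ip route add ::/0 dev he-ipv6

From there, addresses out of the routed allocation can be handed to the containers in the same way as in the bridged setup in the previous post.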

Another tip that people have linked to (the links are, of course, dead) is tb-tun (https://code.google.com/archive/p/tb-tun/), an application that lets sit tunnels be created in userspace (not tested). The modules below activate both gre and sit on the host, but the container still gets no permission to use sit:

modprobe ip_gre
modprobe ip_tunnel
modprobe sit
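
To get these modules loaded again after a reboot, a modules-load.d drop-in should do on a systemd-based host; the file name is just an example:

cat > /etc/modules-load.d/tunnels.conf << 'EOF'
ip_gre
ip_tunnel
sit
EOF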

The next step is to activate some features for the container. This step got the gre interface working for me, and I thought I had sit working too, but no, that failed. Do not forget to shut down your VPS here, as the following steps require it.

vzctl set ctid --features ipgre:on,sit:on,ipip:on,bridge:on --save
vzctl set ctid --devnodes net/tun:rw --save
vzctl set ctid --netfilter full

The last step was to set the VPS capabilities. As the capability setting is deprecated in vzctl, it's better to use prlctl for this. I actually did this in bash since there were too many rows to set manually...

# Container to modify
ctid=<ctid>

capabilities="net_admin net_raw sys_admin ve_admin sys_resource"
for cap in $capabilities
do
    prlctl set $ctid --capability ${cap}:on
done

If the interfaces do not show up when you start the VPS again, you might also need to add the devices manually.

vzctl set ctid --netdev_add gre0 --save
vzctl set ctid --netdev_add sit0 --save

In some cases you also need to use mknod to create /dev/net/tun, which is used by the tunnel interfaces (I did this both on the host and on the virtual server).

mkdir -p /dev/net
mknod /dev/net/tun c 10 200
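
A quick sanity check that the node was created with the expected type (character device, major 10, minor 200):

ls -l /dev/net/tun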

At this point you should be able to create gre tunnels and actually also use openvpn. However, trying to use sit is still a no-go.
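
As a closing illustration, a minimal gre tunnel inside the container could look roughly like this; the endpoints and the IPv6 address are placeholders, not values from the original setup:

# Inside the container, once the gre feature and /dev/net/tun are in place
ip tunnel add gre1 mode gre remote <remote-ipv4> local <local-ipv4> ttl 255
ip link set gre1 up
ip addr add <ipv6-address>/64 dev gre1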