
OpenVZ: Bridged IPv6 subnets


I’ve been working on a GRE tunneling interface for a while, but I wanted my OpenVZ host to hand the tunnel-related services over to the containers themselves – for example, instead of assigning every single IPv6 address manually via vzctl, addressing should be handled from inside the container. As long as vzctl and the venet interfaces are used, it has to be done the manual way. With OpenVZ this is not entirely obvious, since the documentation is not always collected in one place.

As a matter of fact, after searching for half a day, I think I’ve got it covered. First, make sure you’re not using vzctl --ipadd when you’re adding a larger subnet. Let’s use an example:

vzctl set <ctid> --ipadd 2a01:299:a0:7000::/64

The example above will only assign one IP – 2a01:299:a0:7000:: – to your container, not the entire subnet. To get more addresses this way, you have to make vzctl add them one by one: 2a01:299:a0:7000::1/64, 2a01:299:a0:7000::2/64, and so on. The real magic happens when you start using veth and brctl correctly. In short:

vzctl set <ctid> --netif_add eth0 --save

# Find the right veth-interface and ...
brctl addif br0 veth-interface
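Since --ipadd only takes one address at a time, the one-by-one assignment described earlier can be scripted. A minimal sketch – CTID 123 and the suffix range 1–3 are made-up examples, and the commands are only printed so you can review them before running anything:

```shell
#!/bin/bash
# Generate the vzctl calls needed to add several single addresses from the
# /64, since --ipadd does not add a whole subnet. CTID 123 and suffixes
# 1-3 are examples only.
emit_ipadd_cmds() {
    local ctid="$1" prefix="$2" i
    for i in 1 2 3
    do
        echo "vzctl set ${ctid} --ipadd ${prefix}:${i}/64 --save"
    done
}

# Print first; pipe to 'sh' only after reviewing the output.
emit_ipadd_cmds 123 "2a01:299:a0:7000:"
```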

In the OpenVZ release I use, the bridging is set up by linking br0 with the created veth interface. How to identify this interface when you run more than one container is, again, ground the documentation does not cover very clearly. I’ve seen names like veth101.0 in use, but in my case – with Virtuozzo 7.x – I get interfaces like veth123a4bcd, which are a bit hard to identify and connect to the right container. This should be handled automatically by scripts that run during the OpenVZ startup sequence, but that part is still undiscovered territory for me.
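One way I know of to map a container to its host-side veth name is to read the NETIF line from the container’s config file. The path /etc/vz/conf/<ctid>.conf and the host_ifname field match what I have seen on OpenVZ/Virtuozzo installs, but verify them on your own host; the sample line below is made up for the demo:

```shell
#!/bin/bash
# Extract host_ifname=... from a NETIF="..." line read on stdin.
get_host_ifname() {
    sed -n 's/.*host_ifname=\([^,"]*\).*/\1/p'
}

# On a real host you would do something like:
#   get_host_ifname < /etc/vz/conf/123.conf
# Demo with a sample NETIF line instead:
echo 'NETIF="ifname=eth0,mac=00:18:51:A0:6A:D0,host_ifname=veth123a4bcd"' | get_host_ifname
```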

Instead, I’ve created a cron job that makes sure all the interfaces really are linked up after the server boots (which is probably a security issue in itself if you host many containers for many users, since bridging opens up the network a bit more than you probably wish):

#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# List all veth interfaces known to the kernel.
veth=$(ifconfig | grep veth | sed 's/:/ /' | awk '{printf $1 " "}')
echo "Bridging VE interfaces..."

for interface in $veth
do
    # Only add interfaces that are not already part of the bridge.
    hasInterface=$(brctl show | grep "${interface}")
    if [ "" = "$hasInterface" ] ; then
        echo brctl addif br0 "${interface}"
        brctl addif br0 "${interface}"
    fi
done

With this ugly little one, we make sure that unbridged interfaces really get bridged, and only once, so brctl does not have to re-add them on every run. I’m sure there are much better ways of doing this, though. The last thing to do after this setup is to actually assign the subnets properly. On the host server:

ip route add <A:B:C:D::/64> dev br0
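For the route above to do anything, the host must also forward IPv6. This is an assumption about your sysctl defaults rather than part of the original setup, but on my systems it boils down to something like this:

```
# /etc/sysctl.conf - enable IPv6 forwarding so the routed /64 reaches br0
net.ipv6.conf.all.forwarding = 1
```

Apply it with sysctl -p (or reboot) after editing.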

At the container:

#!/bin/bash

# Local address
ip -6 addr add A:B:C:D::/64 dev eth0

# Gateway
ip -6 route add <gateway> dev eth0

# Do not route ipv6 via venet0
ip -6 route del default dev venet0

# Route ipv6 via eth0, with the gateway as default
ip -6 route add default via <gateway> dev eth0
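Going back to the bridging cron job for a moment: a variant that avoids parsing ifconfig output is to do the text processing on its own, so it can also be tested without a live host. This is a sketch, assuming the usual `ip -o link show` and `brctl show` output formats – verify both on your release before wiring it into cron:

```shell
#!/bin/bash
# Given the output of `ip -o link show` and `brctl show`, print the veth
# interfaces that are not yet attached to the bridge.
unbridged_veths() {
    local links="$1" bridged="$2" iface
    for iface in $(echo "$links" | awk -F': ' '/veth/ {print $2}' | cut -d'@' -f1)
    do
        echo "$bridged" | grep -qw "$iface" || echo "$iface"
    done
}

# On a real host:
#   unbridged_veths "$(ip -o link show)" "$(brctl show)" |
#       while read -r i; do brctl addif br0 "$i"; done
```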

New updates

On newer OpenVZ releases, neither tun/tap nor GRE tunneling seems to be a problem anymore. SIT, however, remains impossible to run. Most comments on the internet are very likely old, linking to userspace applications that have to be compiled, or just noting: ”You have to compile it into the kernel, as it is disabled by default. Security issues.” Many people also say that IF you’re running SIT, you should probably run most of it on the host server – with no further examples of HOW they mean. The example below IS a live example, even if the solution itself – as always – does not work properly. By running those commands I did manage to see life from the tunnel server, which in this case was a Hurricane Electric tunnel. The setup also has a static IP address that HE can reach.

The only problem with the example below is that I still struggle with the most common error of them all: protocol 41 port 0 unreachable. The connection itself also ”demands” that you are not using the venet links. The example tries to connect to HE Fremont.

vzctl set 7030 --netdev_del he-fremont --save
vzctl restart 7030
vzctl exec 7030 ip addr add dev eth0
ip tunnel add he-fremont mode sit remote 72.52.104.74 local any ttl 255 dev
vzctl set 7030 --netdev_add he-fremont --save
vzctl exec 7030 ip link set he-fremont up
vzctl exec 7030 ip tun change he-fremont local
vzctl exec 7030 ip addr add dev he-fremont
vzctl exec 7030 ip route add ::/0 dev he-fremont
vzctl exec 7030 ip addr add dev eth0
vzctl exec 7030 ip -f inet6 addr

Trying to ping the addresses added above makes tcpdump react – there is something going on. But since protocol 41 fails here, that is where everything stops. If someone has a fantastic solution for how to activate IPv6/protocol 41 on a VE, please tell!

Netflix and the blocking of tunneled ipv6-routes


The solution below is implemented on one of my recursive DNS servers, 88.80.16.49, so it can be used out of the box.

Today I discovered that Netflix has started blocking tunneled IPv6 routes. In the SixXS case (which I primarily use to reach IPv6 routes), this means I’m blocked from using Netflix this way for now. It also means I have a few options for making Netflix work again while still running IPv6 simultaneously:

  • Edit the hosts file. Do a lookup on netflix.com and pick up all the IPv4-based addresses. Problem: any changes Netflix makes will never reach me. Besides, the streaming servers are probably named differently than just ”www.netflix.com”.
  • Disable IPv6 while watching Netflix. Problem: all IPv6 connectivity is lost while watching Transformers.

So the real problem here is that Netflix resolves over both IPv4 and IPv6, and I need a DNS server that only gives me IPv4 responses, so I don’t have to watch for DNS updates myself. Since I host my own DNS services, I solved this by setting up a secondary DNS server that explicitly returns IPv4 addresses – without the list of IPv6 addresses – when lookups are made from an IPv4 network, like this:


On the primary master server, I put up a forward zone like this:

zone "netflix.com" IN {
        type forward;
        forwarders {
                10.1.1.129;
        };
};

And suddenly Netflix becomes available again, on an IPv4-only network…
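The forwarder at 10.1.1.129 is the server that actually has to answer without AAAA records. The post does not show its config, but on BIND versions before 9.14 I believe this was done with the built-in filter-aaaa feature (only available if BIND was compiled with --enable-filter-aaaa), roughly like this in that server’s named.conf – a sketch, so verify it against your own build:

```
options {
        // Strip AAAA records from answers given to IPv4 clients.
        filter-aaaa-on-v4 yes;
};
```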

Update 2019-12-29

As of BIND 9.14, the solution above is obsolete [in the native daemon] and should be removed. If you’ve installed BIND with the correct plugins (I installed BIND via the ISC PPA), there’s a replacement for it. In named.conf, place this outside the configuration block, and everything should run as before:

plugin query "filter-aaaa.so" {
        filter-aaaa-on-v4 yes;
};