Category archive: Uncategorized

I have a static IP at OVPN but I want to route it from a Linux firewall. How do I fix this?

ovpn.com is a VPN service that offers connectivity over both OpenVPN and WireGuard. As of today, ovpn.com has a beta release of static IP addresses over WireGuard. As I, for some weird reason, love to collect IP addresses, I also have a few accounts at OVPN with different tasks. They are, however, very much bound to a single static interface and nothing else. The default solution does not allow you to forward the entire IP address somewhere else on your LAN, so if you need to give access to several machines this way, you have to set up port forwarding.

Now I have a server pool where different IPs are routed to different places on my LAN. Several years ago, I found out how to handle PRQ (Periquito) VPN routing in much the same way, when several IP addresses were configured.

This problem can be solved with both OpenVPN and WireGuard. Both allow post-up scripts where you can explicitly configure more advanced routing.

Before starting #1

Before configuring your VPN connectivity, do a first traceroute from your VPN connection and see where the traffic goes, like this:

root@ovpn-wg:/home/me# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  172.32.0.1 (172.32.0.1)  13.130 ms  12.973 ms  12.941 ms

As you can see, your first stop towards the internet is at 172.32.0.1 – you need to know this and set up your local server as, for example, 172.32.0.2 (you use the entire network for this, i.e. 172.32.0.0/24). For OpenVPN, a similar setup is based on 10.127.0.0/24 instead.
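If you want a script to discover that first hop automatically, a small sketch could look like the following. The first_hop helper and the canned sample output are my own illustration, not part of traceroute itself.

```shell
# Hypothetical helper: pull the first-hop address out of traceroute output.
first_hop() {
  # traceroute prints a header line first; hop 1 is line 2, field 2.
  awk 'NR==2 {print $2}'
}

# Canned sample, so the sketch is self-contained.
sample='traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  172.32.0.1 (172.32.0.1)  13.130 ms  12.973 ms  12.941 ms'

printf '%s\n' "$sample" | first_hop   # prints 172.32.0.1
```

In practice you would pipe `traceroute -n 8.8.8.8` straight into first_hop instead of the canned sample.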

Before starting #2

The first thing you should take care of is IP forwarding. Go to /etc/sysctl.conf and change:

#net.ipv4.ip_forward=1
#net.ipv6.conf.all.forwarding=1

to …

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

This makes the changes permanent, since you probably want this at boot. Then run sysctl -p to apply the settings without rebooting. IP forwarding is now activated.
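If you prefer to script that edit, a minimal sketch could look like this. It is demonstrated on a temporary copy so nothing on the system changes; point the sed line at /etc/sysctl.conf yourself (as root) once you are happy with it, and follow up with sysctl -p as above.

```shell
# Demo input: the two commented-out keys as they ship in /etc/sysctl.conf.
tmp=$(mktemp)
printf '%s\n' '#net.ipv4.ip_forward=1' '#net.ipv6.conf.all.forwarding=1' > "$tmp"

# Uncomment exactly those two keys, leaving everything else alone.
sed -i -E 's/^#(net\.(ipv4\.ip_forward|ipv6\.conf\.all\.forwarding)=1)$/\1/' "$tmp"

cat "$tmp"   # both lines are now uncommented
```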

OpenVPN

For OpenVPN, this is more straightforward. When OpenVPN is executed, you just have to replace the IP addresses like this:

ip addr del 1.2.3.4/24 dev tun0
ip addr add 10.127.0.2/24 dev tun0
ip route add 1.2.3.4/32 dev eth0

For the moment, I usually set up the auto connectivity with OpenVPN with this script. At this point I have no better solution for a cron job or service (I realize that I have to fix this too), so unless you use the script, there is a lot of manual work in this solution. For that reason, I've started to avoid it – not only because cron-jobbing OpenVPN is a pain, but also because OpenVPN is extremely slow compared to WireGuard, mostly because of the encryption libraries.

If you have come this far, you are probably ready to set up your LAN machine, so jump to that section from here.

WireGuard

For WireGuard, I actually used a new, clean instance. I usually run OpenVZ for my machines, but OpenVZ does not support the setup that WireGuard requires. So instead, when I wrote this, I started with a new VirtualBox server. If your machines or VM host support WireGuard, there will be no difference here for you.

I also use a bunch of scripts to make sure that OpenVPN is always running. This is not necessary for the WireGuard setup so far, so those parts are skipped.

Make sure you have IP forwarding enabled as described above. Begin by installing WireGuard. This is done on Ubuntu 20.04, so it's very straightforward from here too. Optionally, you can install the recommended packages as well, as in the example.

sudo apt install wireguard resolvconf

Copy your WireGuard config from OVPN and put it in /etc/wireguard. If you don't know what this means, you have probably missed a step: you have to log in at ovpn.com and generate keys and the configuration first.

To enable autostart on boot, run:

sudo systemctl enable wg-quick@vpn<num>-se-ipv4
sudo systemctl daemon-reload

After this, you are probably ready to connect.

sudo service wg-quick@vpn<num>-se-ipv4 restart

If everything works this far, you will see this in your IP configuration (ip addr):

3: vpn<num>-se-ipv4: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet <your-ip-address>/32 scope global vpn<num>-se-ipv4
       valid_lft forever preferred_lft forever

It is now time to check the trace as described in ”Before starting #1”, where you do the traceroute. If everything seems OK, you can now reconfigure your interface. Manually, it looks like this:

## SERVER
ip addr del <static-ip>/32 dev <vpn-device-name>
ip addr add 172.32.0.2/24 dev <vpn-device-name>
ip route add <static-ip> dev <local-network-device>
ip route add 0.0.0.0/1 dev <vpn-device-name>

The most likely scenario now is that you won't be able to reach the gateway from the server you've done this on. On the other hand, you will reach your client when the configuration is finished. This is probably caused by OVPN's firewalls: the local addresses are probably not permitted to communicate with each other, since everything has been set up for a single static IP.

There might be more to fix here; for example, the OVPN endpoint periodically pings the static IP to check whether it is alive. As we are forwarding the traffic somewhere else, your server will report back that it is unavailable:

14:29:41.466350 IP 172.32.0.1 > <static-ip>: ICMP echo request, id 54127, seq 12, length 24
14:29:44.529574 IP 10.1.1.41 > 172.32.0.1: ICMP host <static-ip> unreachable, length 52

That is something I still have to work on, at least as long as no client is configured.

So, let’s continue to the client configuration.

Client Configuration

This is the easiest part. As you can see below, we still use ip rules here. You probably have another IP address configured on your local computer, so we need to tell the routing tables where to put the inbound and outbound traffic for the static IP address:

ip addr add <static-ip>/32 dev enp4s0
ip rule add from <static-ip>/32 table <tableId>
ip route add default via <wireguard-gateway> table <tableId>
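If you want to script the client side too, one cautious approach is a dry-run generator: the function below (my own invention, shown with documentation addresses) only prints the three ip commands so you can review them first, then pipe the output to sh as root when satisfied.

```shell
# Print (do not run) the client-side routing commands for a given static IP.
client_route_cmds() {
  local static_ip="$1" dev="$2" gw="$3" table="$4"
  printf 'ip addr add %s/32 dev %s\n' "$static_ip" "$dev"
  printf 'ip rule add from %s/32 table %s\n' "$static_ip" "$table"
  printf 'ip route add default via %s table %s\n' "$gw" "$table"
}

# Example values; substitute your own static IP, device, gateway and table id.
client_route_cmds 192.0.2.10 enp4s0 172.32.0.2 100
```

Running `client_route_cmds … | sudo sh` applies the commands once you trust the output.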

How do I script this, serverside?

To make life a bit easier, you can also script the above solution. If you use the example here, you can for example put the script in /etc/wireguard/reroute.sh – and do not forget to chmod +x /etc/wireguard/reroute.sh.

Also make sure that you change localNet and wgIp to whatever fits your connection. localNet could, for example, also be eth0 or eth1, while wgIp is the value reflected in the traceroute above.

#!/bin/bash
# Reroute the static IP from the WireGuard interface to the LAN.
# Called by wg-quick as: PostUp = bash /etc/wireguard/reroute.sh %i

vpnName=$1
localNet=enp0s3
wgIp=172.32.0.2/24
workDir=/etc/wireguard

echo "WireGuard"
# Bail out if there is no matching WireGuard configuration.
if [ ! -f "${workDir}/${vpnName}.conf" ] ; then
	exit 1
fi

# Extract the static IP from the Address line of the configuration.
ipAddr=$(grep Address "${workDir}/${vpnName}.conf" | awk '{print $3}')

echo "Reroute ${ipAddr} to ${localNet}, exit interface is now ${wgIp}."

# Replace the static IP on the WireGuard interface with the local gateway address.
ip addr del "${ipAddr}" dev "${vpnName}" >/dev/null 2>&1
ip addr del "${wgIp}" dev "${vpnName}" >/dev/null 2>&1
ip addr add "${wgIp}" dev "${vpnName}"

# Remove the route if it is already there, to silence errors.
ip route del "${ipAddr}" dev "${localNet}" >/dev/null 2>&1
ip route add "${ipAddr}" dev "${localNet}"

# Send outbound traffic through the tunnel.
ip route add 0.0.0.0/1 dev "${vpnName}"

When this is all done, add this line to the [Interface] section of the WireGuard configuration:

PostUp = bash /etc/wireguard/reroute.sh %i
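Put together, the relevant part of the WireGuard configuration would look something like the fragment below. The file name and key values are placeholders; only the PostUp line is the addition from this post.

```ini
# /etc/wireguard/vpn<num>-se-ipv4.conf (placeholder values)
[Interface]
PrivateKey = <your-private-key>
Address = <static-ip>/32
PostUp = bash /etc/wireguard/reroute.sh %i
```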

MASQUERADE notice

For this solution I never had to use any MASQUERADE setup, but if you feel it is necessary – for example if you use many clients with this machine as a gateway – you can add this rule:

iptables -t nat -A POSTROUTING -o <vpn-device-name> -j MASQUERADE

This is probably less of a problem if you use the same IP network all the way out to the exit point.

You’re done!

That’s it, folks!

If something went wrong, feel free to tell me about it.

DeepFaceLab in Linux – How to handle cuda libraries

I’ve been struggling with CUDA libraries and DeepFaceLab for the last day, and failed to make it work properly in Linux. The most common cause of Linux failures (at least for this one) is that people tend to be happy with their systems while the rest of the world moves forward in development, so a lot of old instructions quickly become outdated. This was exactly the case with DeepFaceLab.

So I ended up restarting my little project with FULL instructions for making DeepFaceLab work as of July 2021, because developers usually do not maintain documentation properly. This probably includes me too. When I first started this, I ended up with an nvidia-machine-learning package. During today's session, this package did not show up anywhere, which is why I believe it is not actually required. So here we go!

Environment

  • Linux MINT (Tricia) with Ubuntu 18-bionic core
  • NVIDIA GeForce GTX 1060

In this guide, we will install DeepFaceLab with the drivers recommended for the graphics card. First, I'd like to find my recommended graphics driver; I ended up with 470. Example:

# export LC_ALL=C.UTF-8
# export LANG=C.UTF-8
# ubuntu-drivers devices

Output:

WARNING:root:_pkg_get_support nvidia-driver-390: package has invalid Support Legacyheader, cannot determine support level
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C03sv00001043sd000085AEbc03sc00i00
vendor   : NVIDIA Corporation
model    : GP106 [GeForce GTX 1060 6GB]
driver   : nvidia-driver-460-server - distro non-free
driver   : nvidia-driver-390 - distro non-free
driver   : nvidia-driver-450-server - distro non-free
driver   : nvidia-driver-470 - distro non-free recommended
driver   : nvidia-driver-418-server - distro non-free
driver   : nvidia-driver-460 - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin

Conflicts and dependencies may be a problem here, since I had other drivers installed from the last round. In my case it was solved by installing one of them first. I assume that on ”normal systems” libnvidia-compute does not have to be installed first.

apt-get -y install libnvidia-compute-470
apt-get -y install nvidia-driver-470

You probably want to reboot now, since nvidia-smi (/usr/bin/nvidia-smi) will not work properly until then. After rebooting, nvidia-smi should look like this:

NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4

Now, TensorFlow will be your greatest enemy. It is the layer that tries to find out whether you can run DeepFaceLab with a GPU or not. So first of all, we need as much as possible of the CUDA libraries, since Python needs to find them when it's time for TensorFlow to locate the graphics card. Go to https://developer.nvidia.com/cuda-toolkit and download the CUDA toolkit. While trying to make this work manually, the deb packages seemed ”problematic”. However, the local repo package actually seems to work! During the local install I saw a couple of libraries being installed which had been missing during the last struggle.

At this moment, when the installation is done, we should have a /usr/local/cuda which – the last time – was missing all the files necessary for Python to find the libraries. I can now see that it contains a lib64 too, which wasn't the case last time. You can run the command below to check the libraries (according to https://www.easy-tensorflow.com/tf-tutorials/install/cuda-cudnn).

# ldconfig -p | grep cuda

The CUDA libraries are what we look for in that output.

At this point, according to ”easy-tensorflow” above, we are also adding this into the .bashrc:

export CUDA_ROOT=/usr/local/cuda/bin/
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/

However, I also chose to put this in /etc/profile so it becomes system-global. According to the instructions on that page, we also need more drivers (cuDNN, for CUDA Deep Neural Networks) from https://developer.nvidia.com/cudnn. As described there, an account – which is free – is required. At this moment, I have no idea whether cuDNN is required for DeepFaceLab or not, but I chose to install it anyway. If someone knows anything about this, please tell. The deb files may or may not work, but according to easy-tensorflow they used the tar archive. The fix-broken step below rather removed the drivers again than installed them.

apt-get install libcupti-dev

Before going any further at this stage, we should check whether TensorFlow works properly. I realized this after getting an error from Python that looked like this:

Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64/

TensorRT can be found here: https://developer.nvidia.com/tensorrt-getting-started.
This is something that NVIDIA themselves link to in the ”error messaging”, so now we must ensure that this is also in place. While testing with Python (below), I also had to do some symlink magic, since some of the files were missing their links. The files ARE there, but with different versions. The exception is libcudnn.so.7, which is needed in my case.

The next step was necessary to run (at least for me) because of the symlinks requested by TensorFlow. It may be different for you if you install this on another system.

ln -s libcudart.so libcudart.so.10.0
ln -s libcublas.so libcublas.so.10.0
ln -s libcufft.so libcufft.so.10.0
ln -s libcurand.so libcurand.so.10.0
ln -s libcusolver.so libcusolver.so.10.0
ln -s libcusolver.so libcusolver.so.10
ln -s libcusparse.so libcusparse.so.10.0
ln -s libcudnn.so.8 libcudnn.so.7
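The same links can be written as a loop. This sketch only echoes the ln commands – a dry run, since the exact versioned names are specific to my install; review the output and then run it for real inside /usr/local/cuda/lib64, for example by piping it to sh.

```shell
# Dry run: print the symlink commands instead of executing them.
for lib in cudart cublas cufft curand cusolver cusparse; do
  echo ln -s "lib${lib}.so" "lib${lib}.so.10.0"
done
# The two extra links from the list above.
echo ln -s libcusolver.so libcusolver.so.10
echo ln -s libcudnn.so.8 libcudnn.so.7
```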

Make sure you install tensorflow like this before trying the next step:

python -m pip install tensorflow-gpu==2.0.0

Installing a higher version of tensorflow-gpu may produce fewer warnings. Start Python and do this:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output result should look like this:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4200197191832695499
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 7983059117765893941
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 315957471147119305
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 5468454912
locality {
  bus_id: 1
  links {
  }
}
incarnation: 11492666565116267717
physical_device_desc: "device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1"
]

When I last tested this it failed, but now the devices finally show up here. So now it is time for Anaconda. We will use the recommended installer at https://www.anaconda.com/products/individual. Currently they offer Anaconda with Python 3.8. It should be quite straightforward to install Anaconda.

Now go for DeepFaceLab for Linux – https://github.com/nagadit/DeepFaceLab_Linux. This may be a bit tricky, since the instructions point at other versions than the ones we just installed. If unsure, do this after the installation to make sure that your environment is ready. You could also try a reboot.

# source ~/.bashrc

Then go for the source and do the rest as described in the GitHub repo, with some minor updates. They are shown after the code block.

# conda create -n deepfacelab -c main python=3.8 cudnn=8.2.1 cudatoolkit=11.3.1
# conda activate deepfacelab
# git clone --depth 1 https://github.com/nagadit/DeepFaceLab_Linux.git
# cd DeepFaceLab_Linux
# git clone --depth 1 https://github.com/iperov/DeepFaceLab.git

As you can see above, we use Python 3.8, cuDNN 8.2.1 and cudatoolkit 11.3.1 instead of 3.7/7.6.5/10.1.243. The instructions on GitHub seem to have become a bit old, so this has to be changed first. Secondly, you also have to change some values in requirements-cuda.txt before you can proceed, since Python also changes over time. The errors below are what I got during the pip install.

opencv-python==4.1.0.25

Should instead be:

opencv-python

Otherwise you'd get at least one error (at least I did) that looks like this:

ERROR: No matching distribution found for opencv-python==4.1.0.25
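That edit can also be scripted. The sketch below loosens the pin on a temporary copy, just to demonstrate the sed expression; run the same sed against ./DeepFaceLab/requirements-cuda.txt in the repo when you are ready.

```shell
# Demo on a copy: drop the version pin from the opencv-python line.
tmp=$(mktemp)
echo 'opencv-python==4.1.0.25' > "$tmp"
sed -i -E 's/^(opencv-python)==.*/\1/' "$tmp"
cat "$tmp"   # prints: opencv-python
```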

Now, install the requirements:

# python -m pip install -r ./DeepFaceLab/requirements-cuda.txt

At this point you may be finished. However, the environment file delivered in the current Linux repo still points to the Python version used there. This has to be changed. Go to this line:

export DFL_PYTHON="python3.7"

and change it to

export DFL_PYTHON="python3.8"

If Anaconda changes its supported versions, you need to remember this too, or you will get further errors when starting DeepFaceLab.

And now, you should be finished!

Do you want to hit the wall this summer, entirely without working?

Well, then you should listen to Aftonbladet's advice, for the vacation is the time when others expect you not to rest up from your work. So let me redo Aftonbladet's ”hemester” (staycation) article for you, so that it better matches reality. It begins like this:

Do you want to hit the wall during this summer's staycation?

With Aftonbladet's intensive guide you'll easily find as many excursion destinations as possible, in as short a time as possible, near you. And we give you the chance to share your loveliest collapses with others. Because that is where you will end up: in a nervous breakdown.

Find your next excursion disaster – search for your municipality in the map below.

You can also filter activity types via the icon at the bottom left of the map.

The map's filters have also been revised to cover the entire vacation period: the planning, the start, and the hospitalization.