Kategori IT/Development

How to proxy-relay SMTP over squid services with postfix and netcat

I’ve seen this question asked in a lot of places: how do you send SMTP traffic over a proxy? As usual, no one has given a proper answer to this rather specific problem. Let me explain.

  • I have a mail server (postfix), but it is blocked from using the SMTP ports.
  • I also have access to a squid proxy.
  • Now I want to send e-mail over this squid proxy. Can this be done?
  • All such questions just get the answer ”Nope, squid is a http/https proxy. No can do!”

I beg to differ. You can send SMTP traffic over a SQUID proxy. You just have to tweak the sending side and the ports. First of all, we know that a squid proxy requires a handshake that looks very much like this:

CONNECT remote.http.server:80 HTTP/1.0

With this done, the SQUID proxy replies like this:

HTTP/1.1 200 Connection established

While playing with the postfix settings, it turns out that postfix ignores that response. So instead of a web server, you can just connect to an SMTP server. If the squid proxy is not controlled by you, however, the connection will fail on the SMTP ports. So this solution is based on your real SMTP relay server answering on port 80 or 443, as if it were a web service itself.

Several years ago, I configured a postfix instance to answer on those ports, since ports 25 and 587 were usually blocked locally. To make sure I have a relay that can handle the mail service, I configured a VPS specifically to receive e-mail on the default web ports. Once a message is received, this postfix service makes sure the mail is delivered properly.
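On the relay VPS, making postfix answer on the web ports boils down to a couple of extra smtpd listeners in /etc/postfix/master.cf. This is a sketch of the idea, not my exact configuration:

```
# extra listeners in /etc/postfix/master.cf on the relay server
# service  type  private unpriv  chroot  wakeup  maxproc command
80         inet  n       -       y       -       -       smtpd
443        inet  n       -       y       -       -       smtpd
```

Run postfix reload afterwards, and make sure nothing else (like a real web server) already occupies those ports on the VPS.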

So, where does the squid service come in?

Well, I’ve seen a lot of recommendations that involve installing complex, crappy applications that need to be configured to death. I was wondering if this could be handled more easily, for example with netcat. After a little experimenting I figured out that the only things required to make this work were a simple script handling the proxy link and an inetd-kind-of service – but for systemd. It looks very much like this:



cat <(echo "CONNECT SMTP.SERVER.IP:443 HTTP/1.0"; echo "") -|nc squid.server.host 3128

This little nifty script makes sure we connect to the squid proxy. When connected, it sends the command to the SQUID proxy, necessary to make squid reconnect to the real server. As it probably blocks the regular ports, but allows http/https-traffic, squid will now open a connection to the real server.
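For reference, the one-liner above can be wrapped in a small file so that systemd can run it. This is a sketch – the path /usr/local/bin/proxysmtp.sh and the host names are examples that must match your environment:

```shell
#!/bin/sh
# /usr/local/bin/proxysmtp.sh (example path) - relays one SMTP session
# through the squid proxy. Hosts and ports below are placeholders.
relay="SMTP.SERVER.IP:443"        # your postfix relay, answering on 443
proxy_host="squid.server.host"    # the squid proxy
proxy_port="3128"

# Send the CONNECT preamble first, then pass stdin/stdout through netcat.
{ printf 'CONNECT %s HTTP/1.0\r\n\r\n' "$relay"; cat -; } | nc "$proxy_host" "$proxy_port"
```

systemd’s socket activation hands the client connection to the script on stdin/stdout, which is why cat - follows the CONNECT line.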

With this knowledge, I now need a local service that handles postfix relay connections as if it were a real SMTP relay, since postfix does not handle proxies itself. For this example, I’ve been using port 1588 (instead of 587). With help from https://mgdm.net/weblog/systemd-socket-activation/ I managed to do the following:


The socket unit (for example /etc/systemd/system/proxysmtp.socket) looks roughly like this – treat the details as an example:

[Unit]
Description=ProxySMTP Socket

[Socket]
ListenStream=1588
Accept=yes

[Install]
WantedBy=sockets.target

It is paired with a template service (proxysmtp@.service) that runs the netcat one-liner for every accepted connection:

[Unit]
Description=ProxySMTP relay instance

[Service]
ExecStart=/usr/local/bin/proxysmtp.sh
StandardInput=socket
StandardOutput=socket
One thing pointed out at the site that inspired me is to take note of the @ in the service filename. It is significant: it indicates that the service is a template, and a new instance of the service will be run for every connection.

With all this in place, it is time to enable the service and test it. This is done in following steps:

  • systemctl daemon-reload
  • systemctl enable proxysmtp
  • systemctl start proxysmtp.socket
  • systemctl enable proxysmtp.socket

When testing this solution, I will now get the following response when trying to connect to port 1588 as configured above:

my-server:~$ telnet localhost 1588
Trying localhost...
Connected to localhost.
Escape character is '^]'.
HTTP/1.1 200 Connection established

220 my-smtp-server ESMTP Postfix

This is apparently perfectly supported by postfix, so we have now configured SMTP-over-proxy with very low effort and very high efficiency.
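A quick non-interactive smoke test of the relay (hypothetical, assuming the socket above is listening on 1588):

```shell
# Pipe a minimal SMTP dialog through the local proxy relay and
# watch for the "220 ... ESMTP Postfix" greeting in the output.
printf 'EHLO client.example\r\nQUIT\r\n' | nc -w 5 localhost 1588
```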

PrestaShop 1.7.7.x and its inability to have a complete configuration file

Stored here as an emergency measure, since none of the documentation spread around the world seems to have a complete, working nginx config file for PrestaShop. Here is one that worked perfectly, without any weird 404 errors, etc.

Don’t forget to check your hostname and, if applicable, the SSL configuration for port 443.


I have a static IP at OVPN but I want to route it from a Linux firewall. How do I fix this?

ovpn.com is a VPN service that offers connectivity over OpenVPN and WireGuard. As of today, ovpn.com has a beta release of static IP addresses over WireGuard. As I, for some weird reason, love to collect IP addresses, I also have a few accounts at OVPN with different tasks. They are, however, very much bound to a single static interface and nothing else. The default solution does not allow you to forward the entire IP address somewhere else on your LAN, so if you need to give several machines access this way, you have to set up port forwarding.

Now I have a server pool where different IPs are routed to different places on my LAN. Several years ago, I found out how to handle PRQ (PeRiQuito) VPN routing in very much the same way, when several IP addresses were configured.

This problem can be solved with both OpenVPN and WireGuard. Both allow post-up scripts where you can explicitly configure more advanced routing.

Before starting #1

Before starting, once your VPN connection is configured and up, do a first traceroute from it and see where the traffic goes, like this:

root@ovpn-wg:/home/me# traceroute <any-host>
traceroute to <any-host> (<ip>), 30 hops max, 60 byte packets
 1  <wg-gateway-ip>  13.130 ms  12.973 ms  12.941 ms

As you can see, your first stop towards the internet is the gateway on the first hop – you need to know this address, and you set up your local server with an address in the same network (the entire network is used for this). For OpenVPN, a similar setup applies, just with the gateway that OpenVPN hands out instead.

Before starting #2

The first thing you should take care of is IP forwarding. Go to /etc/sysctl.conf and change:

#net.ipv4.ip_forward=1

to …

net.ipv4.ip_forward=1

This makes the change permanent, since you probably want it active at boot. Then run sysctl -p to apply the settings without rebooting. You now have IP forwarding activated.
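To verify that forwarding really is on, read the value back from /proc (it should print 1):

```shell
# The kernel's live view of the sysctl; 1 means forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
```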


OpenVPN

For OpenVPN, this is fairly straightforward. When OpenVPN is running, you just have to replace the IP addresses like this:

ip addr del <static-ip>/32 dev tun0
ip addr add <gateway-ip>/32 dev tun0
ip route add <static-ip> dev eth0

For the moment, I usually set up the auto-connectivity for OpenVPN with this script. At this point I have no better solution than a cronjob or a service (I realize I have to fix this too). So unless you use the script, there is a lot of manual work in this solution. Because of that, I’ve started to avoid it – not only because cronjobbing OpenVPN is a pain, but also because OpenVPN is extremely slow compared to WireGuard, mostly due to the encryption libraries.
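Until a proper service exists, a minimal cron watchdog is one way to automate it. This is a sketch that assumes the tunnel device is tun0 and the unit is called openvpn@client – adjust both to your setup:

```shell
#!/bin/sh
# Hypothetical watchdog, run from cron every few minutes:
# restart OpenVPN if the tunnel device has disappeared.
if ! ip link show tun0 >/dev/null 2>&1 ; then
    systemctl restart openvpn@client
fi
```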

Coming this far, means you are probably ready to set up your LAN machine, so jump to that section from here.


WireGuard

For WireGuard, I actually used a new, clean instance. I usually run OpenVZ for my machines, but OpenVZ does not support the setup that WireGuard requires. So instead, when I wrote this, I started with a new VirtualBox server. If your machines or VM host support WireGuard, there is no difference here for you.

Also, I use a bunch of scripts to make sure OpenVPN is always running. Nothing like that is necessary for the WireGuard setup so far, so those parts are skipped.

Make sure you have IP forwarding enabled as described above. Begin by installing WireGuard. This is done on Ubuntu 20.04, so it is very straightforward here too. Optionally, install the recommended packages as well, as in the example:

sudo apt install wireguard resolvconf

Copy your ovpn-config from ovpn and put it in /etc/wireguard. If you don’t know what this means, you have probably missed that part. You have to log in at ovpn.com and generate keys and the configuration first.

To enable autostart on boot, run:

sudo systemctl enable wg-quick@vpn<num>-se-ipv4
sudo systemctl daemon-reload

After this, you are probably ready to connect.

sudo service wg-quick@vpn<num>-se-ipv4 restart

If everything works this far, you have this visible in your ip config (ip addr):

3: vpn<num>-se-ipv4: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    inet <your-ip-address>/32 scope global vpn<num>-se-ipv4
       valid_lft forever preferred_lft forever

It is now time to check the trace, as described in ”Before starting #1” above. If everything seems OK, you can now reconfigure your interface. Done manually, it looks like this:

ip addr del <static-ip>/32 dev <vpn-device-name>
ip addr add <wg-gateway-ip>/32 dev <vpn-device-name>
ip route add <static-ip> dev <local-network-device>
ip route add <wg-gateway-network> dev <vpn-device-name>

The most likely scenario now is that you won’t be able to reach the gateway from the server where you did this. On the other hand, you will reach your client once the configuration is finished. This is probably caused by OVPN’s firewalls: the local addresses are probably not permitted to communicate, since everything has been set up for a single static IP.

There might be more to fix here. For example, the OVPN endpoint periodically pings the static IP to check that it is alive. As we are forwarding the traffic somewhere else, your server will report it as unreachable:

14:29:41.466350 IP <ovpn-endpoint> > <static-ip>: ICMP echo request, id 54127, seq 12, length 24
14:29:44.529574 IP <server-ip> > <ovpn-endpoint>: ICMP host <static-ip> unreachable, length 52

That is something I still have to work on, at least as long as no client is configured.

So, let’s continue to the client configuration.

Client Configuration

This is the easiest part. As you can see below, we still use ip rules here. You probably have another IP address configured on your local computer, so we need to tell the routing tables where to put the inbound and outbound traffic for the static IP address:

ip addr add <static-ip>/32 dev enp4s0
ip rule add from <static-ip>/32 table <tableId>
ip route add default via <wireguard-gateway> table <tableId>
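The three commands above can be wrapped in a small helper script. This is a sketch; every value below is a placeholder that must be replaced with your own addresses and interface names:

```shell
#!/bin/sh
# Client-side routing helper (sketch); placeholders must be replaced.
staticIp="<static-ip>"          # the OVPN static address
lanDev="enp4s0"                 # your local network interface
tableId="100"                   # any unused routing table id
wgGateway="<wireguard-gateway>" # the gateway on the WireGuard server

# Bind the static ip locally, then route its traffic via its own table.
ip addr add ${staticIp}/32 dev ${lanDev}
ip rule add from ${staticIp}/32 table ${tableId}
ip route add default via ${wgGateway} table ${tableId}
```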

How do I script this, serverside?

To make life a bit easier, you can also script the above solution. If you use the example here, you can for example put that script in /etc/wireguard/reroute.sh – do not forget to chmod +x /etc/wireguard/reroute.sh

Also make sure that you change localNet and wgIp to what fits your connection. localNet for example could also be eth0 or eth1, while wgIp is the value reflected in the above traceroute.



#!/bin/bash
# %i is passed from PostUp as the first argument (the interface name).
vpnName=$1
workDir=/etc/wireguard
localNet=eth0                # change this: your LAN-side interface
wgIp=<wg-gateway-ip>/32      # change this: the gateway from the traceroute

echo "WireGuard"
if [ ! -f ${workDir}/${vpnName}.conf ] ; then
	echo "No configuration found for ${vpnName}."
	exit 1
fi

ipAddr=$(grep Address ${workDir}/${vpnName}.conf | awk '{printf $3}')

echo "Reroute ${ipAddr} to ${localNet}, exit interface is now ${wgIp}."

ip addr del $ipAddr dev ${vpnName} >/dev/null 2>&1
ip addr del $wgIp dev $vpnName >/dev/null 2>&1
ip addr add $wgIp dev $vpnName

# Remove the route if it is already there, to make errors silent.
ip route del $ipAddr dev $localNet >/dev/null 2>&1
ip route add $ipAddr dev $localNet

ip route add <wg-gateway-network> dev ${vpnName}

When this is all done, add this row in the [Interface] section of the wireguard configuration:

PostUp = bash /etc/wireguard/reroute.sh %i


For this solution I never had to use any MASQUERADE setup, but if you feel it is necessary – for example if you use many clients with this machine as a gateway – you can add this rule:

iptables -t nat -A POSTROUTING -o <vpn-device-name> -j MASQUERADE

This is probably less of a problem if you use the same IP network all the way out to the exit point.

You’re done!

That’s it, folks!

If something went wrong, feel free to tell me about it.

DeepFaceLab in Linux – How to handle cuda libraries

I’ve been struggling with the cuda libraries and DeepFaceLab for the last day, failing to make it work properly in Linux. The most common cause of Linux failures (at least for this one) is that people tend to be happy with their systems, while the rest of the world keeps moving forward in development, so many old instructions quickly become outdated. This was exactly the case with DeepFaceLab.

So I ended up restarting my little project with FULL instructions for making DeepFaceLab work as of July 2021 – because developers usually do not maintain documentation properly, and that probably includes me. When I first started this, I ended up with an nvidia-machine-learning package. During today’s session this package did not show up anywhere, which is why I believe it is not actually required. So here we go!


  • Linux MINT (Tricia) with Ubuntu 18-bionic core
  • NVIDIA GeForce GTX 1060

In this document, we will try to install DeepFaceLab with the drivers recommended for the graphics card. First, find your recommended graphics driver. I ended up with 470. Example:

# export LC_ALL=C.UTF-8
# export LANG=C.UTF-8
# ubuntu-drivers devices


WARNING:root:_pkg_get_support nvidia-driver-390: package has invalid Support Legacyheader, cannot determine support level
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C03sv00001043sd000085AEbc03sc00i00
vendor   : NVIDIA Corporation
model    : GP106 [GeForce GTX 1060 6GB]
driver   : nvidia-driver-460-server - distro non-free
driver   : nvidia-driver-390 - distro non-free
driver   : nvidia-driver-450-server - distro non-free
driver   : nvidia-driver-470 - distro non-free recommended
driver   : nvidia-driver-418-server - distro non-free
driver   : nvidia-driver-460 - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin

Conflicts and dependencies may be a problem here, since I had other drivers installed from an earlier round. In my case it was solved by installing one of the libraries first. I assume that in ”normal systems” libnvidia-compute does not have to be installed first.

apt-get -y install libnvidia-compute-470
apt-get -y install nvidia-driver-470

You probably want to reboot now, since nvidia-smi (/usr/bin/nvidia-smi) will not work properly until then. After rebooting, nvidia-smi should look like this:

NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4

Now, Tensorflow will be your greatest enemy. This is the layer that tries to figure out whether you can run DeepFaceLab with a GPU or not. So first of all, we need as much as possible of the cuda libraries, since python needs to find them when it is time for tensorflow to detect the graphics card. Go to https://developer.nvidia.com/cuda-toolkit and download the cuda toolkit. While trying to make this work manually, the deb packages seemed to be ”problematic”. However, the local repo package actually seems to work! During the local install I saw a couple of libraries being installed that had been missing during my last struggle with the libs.

At this moment, when the installation is done, we should have a /usr/local/cuda which – last time – was missing all the files necessary for python to find the libraries. I can now see that it contains a lib64 too, which was not the case last time. You can run the command below to check the libraries (according to https://www.easy-tensorflow.com/tf-tutorials/install/cuda-cudnn):

# ldconfig -p | grep cuda

And this is what we look for:

At this point, according to ”easy-tensorflow” above, we are also adding this into the .bashrc:

export CUDA_ROOT=/usr/local/cuda/bin/
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/

However, I also chose to put this in /etc/profile so it becomes system global. According to the instructions on that page, we also need more drivers (cudnn, for Cuda Deep Neural Networks), available at https://developer.nvidia.com/cudnn. An account – which is free – is required. At this moment I have no idea whether this is required for DeepFaceLab or not, but I chose to install it anyway; if someone knows, please tell. The deb files may or may not work, but easy-tensorflow used the tar archive. The fix-broken step below rather removed the drivers again than installed them.

apt-get install libcupti-dev

Before going any further, we should now check whether tensorflow works properly. I realized this after getting an error from python that looked like this:

Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64/

TensorRT can be found here: https://developer.nvidia.com/tensorrt-getting-started.
This is something that Nvidia themselves link to in the ”error messaging”. So now we must ensure that this is also in place. While testing with python (below), I also had to do some symlink magic, since some of the files were missing links. The files ARE there, but versioned differently. The exception is libcudnn.so.7, which was needed in my case.

The next step was necessary to run (at least for me) because of the symlinks that tensorflow requests. It may differ on your system if you install this elsewhere.

cd /usr/local/cuda/lib64
ln -s libcudart.so libcudart.so.10.0
ln -s libcublas.so libcublas.so.10.0
ln -s libcufft.so libcufft.so.10.0
ln -s libcurand.so libcurand.so.10.0
ln -s libcusolver.so libcusolver.so.10.0
ln -s libcusolver.so libcusolver.so.10
ln -s libcusparse.so libcusparse.so.10.0
ln -s libcudnn.so.8 libcudnn.so.7

Make sure you install tensorflow like this before trying the next step:

python -m pip install tensorflow-gpu==2.0.0

Installing a higher version of tensorflow-gpu may be more silent about these problems. Start python and run:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output should look like this:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
incarnation: 4200197191832695499
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
incarnation: 7983059117765893941
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
incarnation: 315957471147119305
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 5468454912
locality {
  bus_id: 1
  links {
incarnation: 11492666565116267717
physical_device_desc: "device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1"

When I last tested this, it failed. But now the devices finally show up here. So now it is time for anaconda. We will use the recommended installer at https://www.anaconda.com/products/individual. Currently they offer Anaconda with Python 3.8. Installing anaconda should be quite straightforward.

Now go for DeepFaceLab for Linux — https://github.com/nagadit/DeepFaceLab_Linux. This may be a bit tricky, since the instructions point at other versions than we just installed. If unsure here, do this after the installation to make sure your environment is ready. You could also try rebooting once.

# source ~/.bashrc

Then go for the source and do the rest as described in the github repo, with some minor updates. They are shown after the code block.

# conda create -n deepfacelab -c main python=3.8 cudnn=8.2.1 cudatoolkit=11.3.1
# conda activate deepfacelab
# git clone --depth 1 https://github.com/nagadit/DeepFaceLab_Linux.git
# cd DeepFaceLab_Linux
# git clone --depth 1 https://github.com/iperov/DeepFaceLab.git

As you can see above, we use python 3.8, cudnn 8.2.1 and cudatoolkit 11.3.1 instead of 3.7/7.6.5/10.1.243. The instructions at github seem to have become a bit old, so this has to be changed first. Secondly, you also have to change some values in requirements-cuda.txt before you can proceed, since python also changes over time. The errors below are what I got during the pip install.


Should instead be:


Otherwise you will get at least one error (at least I did) that looks like this:

ERROR: No matching distribution found for opencv-python==

Now, install the requirements:

# python -m pip install -r ./DeepFaceLab/requirements-cuda.txt

At this point you may be finished. However, the environment file delivered in the current Linux repo still points to the python version used there. This has to be changed. Go for this row:

export DFL_PYTHON="python3.7"

and change it to

export DFL_PYTHON="python3.8"

If anaconda changes its supported versions, you need to keep that in mind too, or you will get further errors when starting DeepFaceLab.

And now, you should be finished!

Letsencrypt and webservices auto-restart

So I have a bunch of wildcard SSL certificates that are renewed from time to time. And since I hate googling for things and ending up without any results anyway, I recently wrote a script that checks, against the apache and nginx pid files, whether any pem files in the letsencrypt directory (under /etc/letsencrypt) are newer than the last restart time of the webserver.

If the pid files for apache and nginx are older than the respective pem file, the script restarts the webserver. The script itself focuses on apache, since I still have unmigrated services left in my systems. The script below was also set to email me on such changes, but that part has been removed from this snippet.
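The core of the comparison is nothing more than date -r on the pid file and the certificate. A self-contained illustration of that logic, using two temporary files instead of real pem/pid files:

```shell
# Simulate an old certificate and a newer webserver start time.
touch -d '2021-01-01' /tmp/old.pem
touch -d '2021-06-01' /tmp/new.pid

certDate=$(date -r /tmp/old.pem "+%s")
pidDate=$(date -r /tmp/new.pid "+%s")

# Restart only when the certificate is newer than the running webserver.
if [ $certDate -gt $pidDate ] ; then
	echo "restart needed"
else
	echo "up to date"
fi
```

Here the pid file is newer than the certificate, so it prints ”up to date”.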



#!/bin/bash
# Pid file paths and the flags below were defined outside this snippet;
# the values here are examples.
apachePid=/var/run/apache2/apache2.pid
nginxPid=/var/run/nginx.pid
allowSslScan=1
requireRestart=0
apacheDate=""
restartCmd=""

ap=$(which apachectl)
if [ -f $apachePid ] ; then
	apacheDate=$(date -r ${apachePid} "+%s")
	restartCmd="$ap restart"
fi

if [ -f $nginxPid ] ; then
	nginxDate=$(date -r ${nginxPid} "+%s")
	if [ "" != "$apacheDate" ] ; then
		if [ $nginxDate -gt $apacheDate ] ; then
			echo "Nginx date is newer than apache, will use that instead."
			apacheDate=$nginxDate
			restartCmd="service nginx restart"
		fi
	else
		apacheDate=$nginxDate
		restartCmd="service nginx restart"
	fi
fi

if [ "$allowSslScan" = "1" ] && [ "" != "$apacheDate" ] ; then
	if [ -d /etc/letsencrypt/live ] ; then
		pems=$(find /etc/letsencrypt/live -type l)
		for pem in $pems ; do
			realfile=$(readlink -f $pem)
			thisDate=$(date -r $realfile "+%s")
			if [ $thisDate -gt $apacheDate ] ; then
				requireRestart=1
			fi
		done
	fi
fi

if [ "$requireRestart" = "1" ] ; then
	echo "Chosen restart command: $restartCmd"
	echo "One or more SSL certificates are newer than the current apache2 session. We require a restart!"
	$restartCmd
fi