CrazyCompute
Education • Science & Tech
Calibrating a camera with OpenCV using a Charuco board.
October 21, 2023

Calibration is something we must do with every camera before using it for vision projects. Other articles describe the theory better than I can, but the idea is that every lens introduces some distortion, and we want to undo those imperfections so we can more accurately locate objects in the image. This helps with every vision task: OCR, pose estimation, object tracking, and so on. If you want more background, look up "homography" and "epipolar geometry"; those techniques rely on two matrices, the camera matrix and the distortion coefficients, which are exactly what calibration produces.

Originally I was following online tutorials for "calibrate camera with Charuco board". The problem I ran into was that those tutorials targeted OpenCV 4.5 or 4.7, while the latest release is OpenCV 4.8.1, and in recent versions the Aruco API changed significantly. That rendered the tutorials outdated, so their code would not compile. I got around this by using the C++ examples in the latest OpenCV repository, which meant I needed to compile OpenCV from source.

 

Get Ubuntu Prerequisites

The first step is to make sure we have the prerequisites to build OpenCV from source. In a terminal, paste the following command to install the needed packages:

 

sudo apt install -y python3-dev python3-pip python3-numpy build-essential libgtk-3-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev gfortran openexr libatlas-base-dev libdc1394-22-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libfaac-dev libmp3lame-dev libtheora-dev libopencore-amrnb-dev libopencore-amrwb-dev libgphoto2-dev libeigen3-dev libhdf5-dev doxygen x264 v4l-utils

# https://gist.github.com/Mahedi-61/804a663b449e4cdb31b5fea96bb9d561

 

Compiling OpenCV from source

 

To compile the code we’ll need to set up the following folder structure:

opencv_build

    opencv_build/opencv

    opencv_build/opencv_contrib

    opencv_build/opencv/build   (created in a moment to hold the compile output)

To do this we run the following:

 

mkdir ~/opencv_build && cd ~/opencv_build

git clone https://github.com/opencv/opencv.git

git clone https://github.com/opencv/opencv_contrib.git

 

cd opencv && mkdir -p build && cd build

 

Before we actually tell the system to build the code, we configure it with CMake. At this point it is possible to enable CUDA, but I am using CPU only, so I don’t turn that feature on. I also wanted to generate a pkg-config file; while CMake config files are the recommended way to link against OpenCV, I used pkg-config to link the OpenCV libraries when I built the examples. My CMake command, which you can paste into a terminal, looks like this:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
-D BUILD_EXAMPLES=ON ..

 

We then specify how many cores to use for the compile. I used 8, but for this example I’ll leave it at 4 (you can also pass $(nproc) to use every core the machine has):

 

make -j4

sudo make install

sudo sh -c 'echo "/usr/local/lib" >> /etc/ld.so.conf.d/opencv.conf'

sudo ldconfig
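At this point it is worth a quick sanity check that the pkg-config file we asked for is actually visible, since the compile commands later rely on it. This assumes the opencv.pc name we set in the CMake step above:

pkg-config --modversion opencv

If that prints nothing, the .pc file most likely landed in /usr/local/lib/pkgconfig and that directory needs to be added to PKG_CONFIG_PATH.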

 

We now have OpenCV installed on the system, but we still need to install the Python bindings:

 

pip install opencv-contrib-python
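To confirm the Python side can actually see OpenCV (and which version it picked up), a one-line check is enough:

python3 -c "import cv2; print(cv2.__version__)"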

 

Creating the Charuco Board

 

This step got much easier than when I first tried the calibration code. Originally I compiled the "create_board_charuco.cpp" sample into an executable myself with:

g++ -ggdb create_board_charuco.cpp -o create_board_charuco `pkg-config --cflags --libs opencv`

 

The nice thing about building OpenCV yourself is that one of the flags we passed to CMake was “-D BUILD_EXAMPLES=ON”, which compiled all of the examples for us. We simply need to navigate to:

 

cd ~/opencv_build/opencv/build/bin

 

We can create our Charuco board with

 

./example_aruco_create_board_charuco -w=5 -h=7 -sl=200 -ml=120 -d=10 ~/charucoboard.png
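If you don’t want to hunt down the compiled example, the same kind of board can also be generated from Python with the post-4.7 aruco API. This is only a minimal sketch under my assumptions (5x7 squares, DICT_6X6_250, an arbitrary output resolution); adjust it to match whatever you actually plan to print:

import cv2

# same dictionary and 5x7 layout as the C++ example above
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

# the square/marker lengths here are relative units; the printed size is what
# matters, which is why we measure the paper afterwards
board = cv2.aruco.CharucoBoard((5, 7), 0.04, 0.024, dictionary)

# render at roughly 200 px per square with a small margin and save it
img = board.generateImage((1000, 1400), marginSize=20)
cv2.imwrite("charucoboard.png", img)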

 

Next, print this image. When printing, try not to adjust the image: no scaling, print it as is. Once it is printed, measure the square and marker lengths with a caliper or a millimetre machinist ruler; we’ll need those measurements when we specify the board sizes later.

 

Gather images from the camera

 

Our next task is to capture images of the board at different positions in the camera’s view. When capturing the images, try to keep the board at the same distance from the camera: we do not want to change the Z axis, only the X and Y coordinates.

 

To capture images I created a “calibrate” directory to store them in:

 

mkdir calibrate

 

Now I can create my python file

 

touch collectImages.py

 

and paste the code below in it…

 

 

# import the opencv library
import cv2

# we need an aruco dictionary; I generated the board with the default DICT_6X6_250
# (the pre-4.7 calls aruco.Dictionary_get / aruco.CharucoBoard_create no longer exist)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
parameters = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(dictionary, parameters)

# define a video capture object
vid = cv2.VideoCapture(0)
imgCount = 0

while True:
    # Capture the video frame by frame
    ret, frame = vid.read()
    if not ret:
        break

    # Display the resulting frame
    cv2.imshow('frame', frame)

    # the 'q' button is set as the quitting button;
    # you may use any desired button of your choice
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    # detect the aruco markers in the current frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    markerCorners, markerIds, rejectedCandidates = detector.detectMarkers(gray)

    # only save frames where (almost) the whole board was found
    if len(markerCorners) >= 17:
        frame_markers = cv2.aruco.drawDetectedMarkers(frame.copy(), markerCorners, markerIds)
        cv2.imshow('frame_markers', frame_markers)
        cv2.imwrite('calibrate/calibrate_' + str(imgCount) + '.png', frame)
        imgCount = imgCount + 1

# After the loop release the cap object
vid.release()
# Destroy all the windows
cv2.destroyAllWindows()

 

(Indentation matters in Python, so keep the loop body indented exactly as shown above.)

Run the python file with:

 

python ./collectImages.py

 

When the video window comes up, move the board around, again keeping the distance to the camera the same. Don’t sit in one place for too long, because the script saves images as fast as it can. It can’t read the board in a blurred image, so while you are moving it won’t see the markers until you hold still for a moment. After a few seconds you should have a whole bunch of images in the calibrate folder, which we’ll need for the next step.
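Before moving on, it’s worth checking how many frames you actually collected, since the calibration step later expects a minimum count:

ls calibrate | wc -l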

 

Calibrate Camera Using OpenCV

 

There is an example we can compile in the calib3d tutorial code. This one uses a “default.xml” file to specify the configuration rather than a bunch of command-line arguments. To build it:

 

cd ~/opencv_build/opencv/samples/cpp/tutorial_code/calib3d/camera_calibration

g++ -ggdb camera_calibration.cpp -o camera_calibration `pkg-config --cflags --libs opencv`

 

 

The default.xml file contains the settings for the application: the board size, the board type (it supports more than just Charuco), which images it will use, and so on.

 

You can copy the existing in_VID5.xml file to default.xml as a base to modify.

 

cp in_VID5.xml default.xml

 

Now modify default.xml with our settings. For the supported board types, look at the example code we built to see what options are available.

 

The last file I need is an image list, because I am pointing the tool at images on disk. It must contain at least as many images as the count I set in default.xml: I told it to use 25 images, and if there aren’t at least that many the program simply dies without explaining why and without giving us the camera matrix, so make sure enough images are listed. You can also calibrate from a video stream, but I chose the image route since I had already captured the pictures. My image list looks like this:

 

<?xml version="1.0"?>
<opencv_storage>
<images>
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_0.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_1.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_30.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_37.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_24.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_25.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_113.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_120.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_184.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_177.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_195.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_194.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_192.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_190.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_188.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_186.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_182.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_180.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_178.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_176.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_174.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_172.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_170.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_168.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_166.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_164.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_162.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_160.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_158.png
/home/matthew/workspace/13CameraCalibration/Charuco/calibrate/calibrate_156.png
</images>
</opencv_storage>

Run the compiled app with:

 

./camera_calibration
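As an aside, if you’d rather not touch the C++ sample at all, the post-4.7 aruco Python API can produce the same camera matrix and distortion coefficients directly. This is only a rough sketch under my assumptions (5x7 board, DICT_6X6_250, placeholder square/marker lengths that you should replace with your measured values); the default.xml route above is what I actually used:

import glob
import cv2

# describe the printed board; replace 0.04 / 0.024 with your measured sizes in metres
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
board = cv2.aruco.CharucoBoard((5, 7), 0.04, 0.024, dictionary)
detector = cv2.aruco.CharucoDetector(board)

objPointsAll, imgPointsAll = [], []
imageSize = None

for path in glob.glob('calibrate/*.png'):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    imageSize = gray.shape[::-1]
    charucoCorners, charucoIds, markerCorners, markerIds = detector.detectBoard(gray)
    if charucoIds is not None and len(charucoIds) > 3:
        # map the detected chessboard corners back to the board's 3D coordinates
        objPoints, imgPoints = board.matchImagePoints(charucoCorners, charucoIds)
        objPointsAll.append(objPoints)
        imgPointsAll.append(imgPoints)

rms, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(
    objPointsAll, imgPointsAll, imageSize, None, None)
print(rms)
print(cameraMatrix)
print(distCoeffs)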

 

Now we have a file, “out_camera_data.xml”, which contains the matrices we need.

 

You can see the row and column counts mentioned in the xml. Keep in mind that the data is listed row by row, from left to right, so the 3x3 camera matrix is:

 

3.6272145870642380e+04    0                         6.3950000000000000e+02
0                         3.6272145870642380e+04    3.5950000000000000e+02
0                         0                         1

 

While it’s not critical for our project here, it’s helpful to know that the camera matrix contains the focal lengths and the camera-center (principal point) values; those are the only four non-trivial entries, so we only need four values to build this matrix. Another way to write it is:

(fx, 0, offsetx,
 0, fy, offsety,
 0, 0, 1)

 

We will define it and the distortion coefficients in python like this:

cameraMatrix = np.array([[ 3.6272145870642380e+04, 0.0, 6.3950000000000000e+02 ],
                         [ 0., 3.6272145870642380e+04, 3.5950000000000000e+02 ],
                         [ 0., 0., 1. ]])

distCoeffs = np.array([ -3.9253296104015567e+02, 6.7280534769708663e+05, 0., 0., -1.0812339216870673e+09 ])
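Rather than hard-coding the numbers, you can also read them straight out of the xml the calibration app wrote. A small sketch, assuming the node names the sample uses (camera_matrix and distortion_coefficients) haven’t changed in your version:

import cv2

fs = cv2.FileStorage('out_camera_data.xml', cv2.FILE_STORAGE_READ)
cameraMatrix = fs.getNode('camera_matrix').mat()
distCoeffs = fs.getNode('distortion_coefficients').mat()
fs.release()
print(cameraMatrix)
print(distCoeffs)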

 

The Python file to undistort an image is incredibly simple: all you have to do is apply those matrices to an image with the undistort function.

 

img = cv2.undistort(frame.copy(), cameraMatrixInit,distCoeffsInit)

 

My file is “undistort.py”:

import os
import cv2
import numpy as np

# based on https://mecaruco2.readthedocs.io/en/latest/notebooks_rst/Aruco/sandbox/ludovic/aruco_calibration_rotation.html
datadir = "Charuco/calibrate/"
images = np.array([datadir + f for f in os.listdir(datadir) if f.endswith(".png")])
order = np.argsort([int(p.split(".")[-2].split("_")[-1]) for p in images])
images = images[order]

# the values from out_camera_data.xml
cameraMatrixInit = np.array([[ 3.6272145870642380e+04, 0.0, 6.3950000000000000e+02 ],
                             [ 0., 3.6272145870642380e+04, 3.5950000000000000e+02 ],
                             [ 0., 0., 1. ]])
print("camera matrix shape")
print(cameraMatrixInit.shape)

distCoeffsInit = np.array([ -3.9253296104015567e+02, 6.7280534769708663e+05, 0., 0., -1.0812339216870673e+09 ])

for im in images[150:155]:
    print("=> Processing image {0}".format(im))
    frame = cv2.imread(im)

    cv2.imshow('original', frame)

    # apply the calibration to remove the lens distortion
    img = cv2.undistort(frame.copy(), cameraMatrixInit, distCoeffsInit)
    cv2.imshow('undistortedFrame', img)

    # the 'q' button is set as the quitting button;
    # you may use any desired button of your choice
    if cv2.waitKey(0) & 0xFF == ord('q'):
        # Destroy all the windows
        cv2.destroyAllWindows()
        quit()


 


To run the code, execute:

 

python ./undistort.py
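One optional refinement: with distortion this strong, the undistorted frame can end up with curved black borders. If that bothers you, OpenCV’s getOptimalNewCameraMatrix can compute a refined matrix plus a region of interest to crop to. A small sketch (variable names follow the script above):

h, w = frame.shape[:2]
newCameraMatrix, roi = cv2.getOptimalNewCameraMatrix(cameraMatrixInit, distCoeffsInit, (w, h), 1, (w, h))
img = cv2.undistort(frame, cameraMatrixInit, distCoeffsInit, None, newCameraMatrix)
x, y, rw, rh = roi
img = img[y:y + rh, x:x + rw]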

 

 

Sources:

 

https://stackoverflow.com/questions/16329867/why-does-the-focal-length-in-the-camera-intrinsics-matrix-have-two-dimensions

https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html

https://neuraspike.com/blog/3-rookie-mistakes-people-make-installing-opencv-avoid-it/

https://www.skynats.com/blog/installing-opencv-on-ubuntu-20-04/

https://stackoverflow.com/questions/15320267/package-opencv-was-not-found-in-the-pkg-config-search-path

https://answers.opencv.org/question/121885/cant-find-opencvhpp-however-it-is-not-installed-in-usrinclude/

https://linuxhint.com/getting_started_opencv_ubuntu/

https://stackoverflow.com/questions/39379311/how-to-generate-a-charuco-board-calibration

 

It looks like the API for detection changed; it is actually a bit simpler now.

https://stackoverflow.com/questions/74964527/attributeerror-module-cv2-aruco-has-no-attribute-dictionary-get

 

In case you need to use the older patterns:

https://github.com/opencv/opencv/issues/23873
