In my previous post, I used Nebula to set up a secure network between two virtual machines.
This time, I'll try to make a MySQL client and server communicate through a Nebula tunnel. And to make it a little more challenging, I'll use Podman to run the client and the server in containers.
I begin by restarting the virtual machines:

```shell
vagrant up
```
To restart Nebula automatically, I'm using systemd. I generate the unit file for the Nebula service:

```shell
cat <<EOF > nebula.service
[Unit]
Description=Nebula service

[Service]
Type=simple
ExecStart=/opt/nebula/nebula -config /etc/nebula/config.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```
Then I push this file into boxA's temporary folder. I wish I could place it directly in /etc/systemd/system, but that would require more privileges:

```shell
vagrant upload nebula.service /tmp/ boxA
```
Now, inside boxA, I can "sudo" to move the unit file to the appropriate folder:

```shell
vagrant ssh boxA -c "sudo mv /tmp/nebula.service /etc/systemd/system/"
```
Then I enable this service so it starts at boot time:

```shell
vagrant ssh boxA -c "sudo systemctl enable nebula"
```
I do the same for boxB, beginning by copying the unit file:

```shell
vagrant upload nebula.service /tmp/ boxB
```
Then I move it to the right place:

```shell
vagrant ssh boxB -c "sudo mv /tmp/nebula.service /etc/systemd/system/"
```
Finally, I enable the service:

```shell
vagrant ssh boxB -c "sudo systemctl enable nebula"
```
Now it is Podman's turn to be installed, first on boxA:

```shell
vagrant ssh boxA -c "sudo apt install -y podman && sudo reboot"
```
Then on boxB:

```shell
vagrant ssh boxB -c "sudo apt install -y podman && sudo reboot"
```
Right after installing Podman, I need to reboot the virtual machines so Podman can run rootless (in the user session).
And because I configured Nebula as a systemd service, the tunnel will come back up as well.
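As a quick sanity check once the machines are back, I could confirm both claims at once: that systemd restarted Nebula, and that Podman is indeed running rootless. This is a hedged sketch (the `--format` template assumes a reasonably recent Podman release that exposes `host.security.rootless` in `podman info`):

```shell
# Confirm the Nebula service is active on boxA (same check applies to boxB).
vagrant ssh boxA -c "systemctl is-active nebula"

# Ask Podman whether it is running in rootless mode; should print "true"
# since we invoke it from the unprivileged vagrant user session.
vagrant ssh boxA -c "podman info --format '{{.Host.Security.Rootless}}'"
```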
I just need to wait for the two virtual machines to finish booting. I can check their state with vagrant status:

```shell
> vagrant status

Current machine states:

boxA                      running (virtualbox)
boxB                      running (virtualbox)
```
I pull the MySQL image and start the server on boxA:

```shell
vagrant ssh boxA -c "podman run -p 192.168.168.100:3306:3306 --name=db --env MYSQL_ALLOW_EMPTY_PASSWORD='true' -dt docker.io/library/mysql"
```
- `podman run`: I use Podman without sudo (one big advantage over Docker) to start the container running MySQL.
- `-p 192.168.168.100:3306:3306`: I publish the MySQL port on the Nebula IP so I can reach the server from another machine on this network.
- `--name=db`: I name this container db so I can easily manipulate it later.
- `--env MYSQL_ALLOW_EMPTY_PASSWORD='true'`: I allow an empty password for this test. Of course, I would not do that in production.
- `-dt docker.io/library/mysql`: at last, I specify the MySQL image to use.
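To confirm that the publish flag did what I expect, one possible check (assuming `ss` is available in the box, which it is on recent Ubuntu) is to look for a listener bound to the Nebula IP:

```shell
# Rootless podman publishes the port through a userspace forwarder,
# so a TCP listener should appear on the Nebula IP inside boxA.
vagrant ssh boxA -c "ss -tln | grep '192.168.168.100:3306'"
```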
To check that the server started correctly, I can use podman ps:

```shell
> vagrant ssh boxA -c "podman ps"

CONTAINER ID  IMAGE                    COMMAND  CREATED         STATUS             PORTS                           NAMES
d6c2625aafb4  docker.io/library/mysql  mysqld   30 seconds ago  Up 30 seconds ago  192.168.168.100:3306->3306/tcp  db
```
It is working!
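One caveat: the container can show as "Up" while MySQL is still initializing its data directory, so an immediate connection attempt might be refused. A hedged way to wait for readiness, using `mysqladmin` which ships in the MySQL image (and assuming the container is named `db` as above):

```shell
# Poll the server from inside the container; mysqladmin ping exits with
# success once MySQL accepts connections (--silent suppresses errors).
vagrant ssh boxA -c 'until podman exec db mysqladmin ping --silent; do sleep 2; done; echo "MySQL is ready"'
```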
So now, I try to access this server from boxB. I use nearly the same podman command as before, but this time I run the MySQL client. If all is OK, I will be greeted by the MySQL prompt:

```shell
> vagrant ssh boxB -c "podman run -ti --rm docker.io/library/mysql mysql -h192.168.168.100 -uroot"
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.29 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

mysql>
```
Hooray!