HTTP/3 integration with Nginx on Ubuntu

QUIC (image taken from https://github.com/quicwg)

HTTP/3 Intro

The HTTP/3 protocol is the successor to HTTP/2; unlike it, HTTP/3 uses QUIC, which is based on UDP. The reason for using UDP is low latency and fast delivery, while the complete HTTP protocol, with all its messages, codes, and standards, stays the same. It is worth noting that QUIC builds on TLS 1.3, which has a list of extensions (e.g. ALPN and similar) to the TLS protocol, so TLS can be used in different ways.
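A quick way to see which protocol version actually gets negotiated is curl's %{http_version} write-out variable; this is just a sketch and assumes a curl build with HTTP/3 support:

curl -sI --http3 -o /dev/null -w '%{http_version}\n' https://www.vladimircicovic.com/
# prints "3" when the response arrived over HTTP/3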

HTTP/3 speed, benchmarks and latency

According to the referenced page, the performance figures are:

Connection Setup Time: HTTP/1 (50–120 ms), HTTP/2 (40–100 ms), HTTP/3 (20–50 ms in low-latency networks)

File Download (1 MB with 2% packet loss): HTTP/1 (1.8 s), HTTP/3 (1.2 s)

Page Load Latency (mobile 3G): HTTP/1 (600 ms), HTTP/3 (300 ms)

HTTP/2 improved web performance by allowing multiple requests and responses over a single connection, reducing latency. HTTP/3, built on QUIC, improves speed and reduces latency further by replacing TCP with a UDP-based transport, making it noticeably faster, especially for real-time applications and mobile devices.

According to a test from the same page, we have these numbers for comparison:

Loading a typical e-commerce page (100 resources):

HTTP/1.1: • opens 6–8 parallel connections • processes each connection sequentially • total load time: ~8 seconds

HTTP/2: • a single connection with multiplexing • header compression and server push • total load time: ~4 seconds

HTTP/3: • QUIC eliminates the remaining bottlenecks • better handling of lost packets • total load time: ~3.2 seconds
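These numbers are easy to sanity-check with curl's timing variables (again assuming a curl built with HTTP/3 support); time_appconnect marks the moment the TLS/QUIC handshake finished:

curl -so /dev/null --http2 -w 'h2: connect %{time_appconnect}s, total %{time_total}s\n' https://www.vladimircicovic.com/
curl -so /dev/null --http3-only -w 'h3: connect %{time_appconnect}s, total %{time_total}s\n' https://www.vladimircicovic.com/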

Checking the browser/web server and installing HTTP/3 on Ubuntu 24.04 with Nginx

Checking the browser

One of the first requirements is that our browser supports HTTP/3, which we can check at: https://quic.nginx.org/ (a list of browsers and versions that support HTTP/3)

Installing the latest nginx (1.29.0) on Ubuntu 24.04

In this part I will skip general nginx configuration and firewall setup, and go straight to installing and configuring nginx.

Steps to install the latest nginx on Ubuntu (taken from the nginx.org instructions):

sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

gpg --dry-run --quiet --no-keyring --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg

After the last command, the expected output is:


pub   rsa4096 2024-05-29 [SC]  8540A6F18833A80E9C1653A42FD21310B49F6B46
uid    nginx signing key 

pub   rsa2048 2011-08-19 [SC] [expires: 2027-05-24] 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
uid    nginx signing key 

pub   rsa4096 2024-05-29 [SC]  9E9BE90EACBCDE69FE9B204CBCDCD8A38D88A2B3
uid    nginx signing key 

Next steps (we use the mainline repo, i.e. the latest version of nginx):


echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/mainline/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

If we want the stable version instead, use this command in place of the one above:


echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

The next step:


echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
    | sudo tee /etc/apt/preferences.d/99nginx

sudo apt update
sudo apt install nginx

To verify the installation:


nginx -v

output: nginx version: nginx/1.29.0

nginx -V
output: nginx version: nginx/1.29.0
built by gcc 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)
built with OpenSSL 3.0.13 30 Jan 2024
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/run/nginx.pid --lock-path=/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -ffile-prefix-map=/home/builder/debuild/nginx-1.29.0/debian/debuild-base/nginx-1.29.0=. -flto=auto -ffat-lto-objects -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fdebug-prefix-map=/home/builder/debuild/nginx-1.29.0/debian/debuild-base/nginx-1.29.0=/usr/src/nginx-1.29.0-1~noble -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -flto=auto -ffat-lto-objects -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

Note the "--with-http_v3_module" flag near the end of the output, which is what enables HTTP/3.

HTTP/3 module

Configuring nginx

The following settings can be found at: https://nginx.org/en/docs/quic.html

To get nginx set up and working, there are two parts to configure. The first is nginx.conf, the second is the server/domain that needs these settings.


Remove this line (newer versions no longer use it): ssl on;

Add:

quic_retry on;

ssl_early_data on;

quic_gso on;



And in the server block:

server {
        listen 443 quic reuseport;
        listen 443 ssl;

        index index.php; # if this is set, PHP needs to be configured as well

        location / {
                add_header Alt-Svc 'h3=":443"; ma=86400';
        }

        location ~ \.php$ {
                add_header Alt-Svc 'h3=":443"; ma=86400';
        }
}

After this we can run the command nginx -t, which should print:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Then we can run systemctl restart nginx.

We can then confirm that nginx is listening on the UDP port:

ss -ulnp | grep 443
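Although firewall configuration is out of scope here, keep in mind that QUIC runs over UDP, so UDP port 443 must be reachable as well; with ufw, for example (adjust to your firewall):

sudo ufw allow 443/tcp
sudo ufw allow 443/udp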

Testing the web server

We can test the web server at (you can pick another domain; this one is set up as a test): https://http3check.net/?host=https%3A%2F%2Fwww.vladimircicovic.com%2F

Web server test

With Inspect (Chrome) we can see resources loading as h3, i.e. HTTP/3, under the Network tab: Inspect Chrome

Testing with curl:

curl --http3-only -vvvv https://www.vladimircicovic.com/

or, if HTTP/3 is unavailable, to fall back to HTTP/2 or HTTP/1.1:

curl --http3 -vvvv https://www.vladimircicovic.com/

an older version of curl might work with:

curl --alt-svc altcache -vvvv https://www.vladimircicovic.com/

Further analysis of our TLS setup is available at: https://www.ssllabs.com/ssltest/analyze.html?d=www.vladimircicovic.com

SadServer Solutions - Melbourne

SadServer solution for https://sadservers.com/scenario/melbourne

Reviewing the code shows that the Content-Length header is set to 0, so I set the proper size (the length of "Hello, World\n"), then ran "sudo systemctl restart gunicorn.service" and "sudo systemctl restart gunicorn.socket". The final code looks like this:

Proper code
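As a sanity check, the expected Content-Length can be computed in the shell; a small sketch:

printf 'Hello, World\n' | wc -c
# 13 -> the header should read Content-Length: 13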

The next issue is with Nginx: the "default" config references a socket file ending in .socket, while the actual file is /run/gunicorn.sock, ending in .sock. Replace the name and restart Nginx with "sudo systemctl restart nginx".

Bad config

proper name for socket

Final test:

final test

SadServer Solutions - Oaxaca

SadServer - Oaxaca solution for https://sadservers.com/scenario/oaxaca

So, the process was started by the current bash and it holds FD (file descriptor) 77. In the picture you can see "FD/77w". Typical file descriptor usage is: 0 for stdin, 1 for stdout, 2 for stderr, and 255 for the bash shell, pointing to /dev/tty or /dev/pts/N, where N is a number. The main process is the bash shell and this FD 77 belongs to it, so by killing the main process we would destroy our connection. If we run "lsof somefile", it will show our bash shell, and under /proc/[PID of shell]/fd/77 we have a symbolic link to /home/admin/somefile.
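Roughly, the inspection can be done like this (a sketch; lsof -t prints only the PID):

PID=$(lsof -t /home/admin/somefile)
ls -l /proc/$PID/fd    # fd 77 shows up as a symlink to /home/admin/somefile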

To release the file descriptor we need to close it by its number with the command: eval "exec 77>&-"

.bashrc contains the running command:

close file descriptor

symbolic link to file:

close file descriptor

SadServer Solutions - Salta solution

SadServer Salta solution URL: https://sadservers.com/scenario/salta

After logging into the server, notice that port 8888 is already in use. Since the lsof tool was missing, I installed it with "sudo apt install lsof" and checked which process was using port 8888. It was Nginx, so I stopped it with "sudo systemctl stop nginx"

nginx using port 8888
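For reference, the check itself is a one-liner (a sketch of what I ran):

sudo lsof -i :8888          # shows nginx listening on the port
sudo systemctl stop nginx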

Inside the Dockerfile I found the wrong port (8880 was written instead of 8888), and the CMD referenced "serve.js" instead of "server.js", a local file in the same directory.

Dockerfile fix

With the fixes in place, the Docker image is built with: "sudo docker build -t sampleapp:v1 ."

Dockerfile build

Run the app with "sudo docker run -p 8888:8888 sampleapp:v1" and the task is done.

Dockerfile run

SadServer Solutions - Cape Town

Solution for Cape Town task from URL: https://sadservers.com/scenario/capetown

After logging into the server, nginx is not working.

nginx not working

Examining why nginx does not work shows that the first line contains a stray ";". I removed the ";" from the nginx file, but nginx still did not work.

first issue

After examining the error log, I was able to spot a problem with file limits.

second issue

Viewing /proc/[pid]/limits, I spotted this: Max open files 10.

issue

I checked the limits for the www-data user, as well as other settings (fs.file-max and others).

In the end, I suspected systemd might impose some per-process limitation.

After reading the systemd .service file:

second issue

Comment the line out by adding # at its start, reload the systemd daemon, restart nginx, and it works!
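The concrete commands, assuming the unit file is the one that systemctl cat prints:

systemctl cat nginx.service | grep -i limit   # locate the offending Limit* line
sudo systemctl daemon-reload                  # after commenting the line out with '#'
sudo systemctl restart nginx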

This task took 30 minutes to solve.

SadServer Solutions - Manhattan medium

SadServer Manhattan - medium, url: https://sadservers.com/scenario/manhattan. My first medium task and solution. After logging in, I ran just:


sudo systemctl restart postgresql@14-main.service

The issue is in these lines: "no space left on device"

Postgres issue

After running df -h, I noticed 100% usage on /opt/pgdata/. Removing files that were not needed at all solved the issue.

Postgres solution
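A quick way to find what is eating the space (a sketch; choose what to delete carefully):

sudo du -ah /opt/pgdata | sort -rh | head -10   # the ten biggest files/directories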

SadServer Solutions - Bilbao K8S task

SadServer task Bilbao url: https://sadservers.com/scenario/bilbao

After login and inspection of the issue, we get this picture:

Pod status

After googling I found an issue with nodeSelector - so I removed it from the manifest.yml file:

nodeSelector

Remove nodeSelector

After removing it, we need to delete all pods, re-apply the manifest, and check with curl (a sketch follows below):

Delete pods & run yaml file
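A sketch of that sequence, assuming the manifest file name from above:

kubectl delete -f manifest.yml    # remove the stuck pods
kubectl apply -f manifest.yml     # re-create them without the nodeSelector
kubectl get pods -o wide          # confirm they reach Running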

SadServer Solutions - Bucharest

Postgres solution for Bucharest

Bucharest - task url: https://sadservers.com/scenario/bucharest

File edit

If we run the command:


PGPASSWORD=app1user psql -h 127.0.0.1 -d app1 -U app1user -c '\q'

We could see issue with access and file named "pg_hba.conf " If we open file, and we could see next lines:


host    all             all             all                     reject
host    all             all             all                     reject

If we dig into the manual for this file, we find that "reject" does exactly that: it rejects any connection. So replace the word reject with md5 (use sudo to edit the file), and restart the service with:


sudo systemctl restart postgresql@13-main.service
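For reference, the edit itself can be done with sed; a sketch assuming the default Debian path for PostgreSQL 13:

sudo sed -i 's/reject/md5/' /etc/postgresql/13/main/pg_hba.conf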

SadServer Solutions - Taipei

Task at: https://sadservers.com/scenario/taipei

This task involves port knocking, a well-known technique for protecting access to a certain port. More information on this: https://wiki.archlinux.org/title/Port_knocking

One simple way to unlock port 80 is to run nmap 2-3 times across all ports, which performs the knock:


nmap -Pn -p 1-65535 localhost

Famous port knocking with the nmap tool
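After the knock, port 80 should respond; a quick check (assuming the service answers on localhost):

curl -s http://localhost/    # should now return a response instead of timing out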

SadServer Solutions - Command Line Murders

Solution for: https://sadservers.com/scenario/command-line-murders

The file people contains all the names; we need to find the name whose MD5 hash matches the given one and put it into the mysolution file.

First, a command to extract all the people's names:


awk '{print $1" "$2}' people >> find

awk usage

After that we need to find the right name by checking the MD5 hash of each line in the file "find", so use this command:


while read -r col1 ; do
   echo "$col1" "$(echo "$col1" | md5sum)" | grep 9bba101c7369f49ca890ea96aa242dd5
done < find

and there you go, your killer's name:

Killer name

SadServer Solutions - Saskatoon

Solution for Saskatoon. The command:


awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -20

With awk we pick up the first field of each line, which is the IP address. sort groups identical lines together so that uniq -c can count them easily, and sort -rn orders the list from the highest count to the lowest.

Count of IPs from the access log

SadServer Solutions - Saint John task

For the task https://sadservers.com/scenario/saint-john there is an easy solution using the tool lsof (short for "list open files"); the man page https://man7.org/linux/man-pages/man8/lsof.8.html gives the details of its usage. Take notice here: I used kill -9, but for important systems such as email servers and similar, where data is valuable, please use kill -15.
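A sketch of the approach; the file path here is hypothetical, use whatever file the scenario points you at:

PID=$(sudo lsof -t /path/to/somefile)   # -t prints only the PID holding the file
sudo kill -15 "$PID"                    # prefer SIGTERM; use -9 only as a last resort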

Solution in one picture:


SadServer Solutions

I'm solving SadServers Challenges!

I've decided to dive into the world of SadServers challenges (https://sadservers.com/scenarios)! This platform offers a variety of system administrator scenarios that test your troubleshooting skills and Linux knowledge.

With over 26 years of experience, I've encountered a wide range of Linux issues, including the infamous "Out of Memory" (OOM) problems with drivers.

Here's the exciting part:

I'll be tackling these challenges and sharing my solutions right here! The first one will be published today, June 7th, 2024, and I'll keep this page updated with links to all my future solutions.

Stay tuned for some in-depth troubleshooting and Linux problem-solving!

Easy

Saint John - Easy - solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-saint-john-task

Saskatoon - Easy solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-saskatoon

Santiago - Easy solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-santiago

Command Line Murders - Easy solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-command-line-murderes

Taipei - Easy solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-taipei

Lhasa - Easy math solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-lhasa

Bucharest - Easy, Postgres solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-bucharest

Bilbao - Easy Kubernetes issue solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-biblao-k8s-task

Apia - Easy file tools usage solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-apia-task

Medium

Manhattan - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-manhattan-medium

Tokyo - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-tokyio-solution

Cape Town - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-cape-town

Salta - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-salta-solution

Venice - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-venice

Oaxaca - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-oaxaca

Melbourne - medium, solution: https://www.vladimircicovic.com/2024/06/sadserver-solutions-melbourne

GeoDNS + Nginx reverse proxy

Optimizing and speeding up a blog with GeoDNS and reverse proxies

nginx proxy digitalocean image

This weekend's experiment is setting up multiple PoP (Point of Presence; more at https://www.cachefly.com/news/why-points-of-presence-pops-are-pivotal-for-optimal-cdn-performance/) servers so that lag/latency is reduced.

I used off-the-shelf services: EasyDNS and Linode Cloud.

EasyDNS offers GeoDNS for $9 a month. Based on geographic location, you can decide which DNS record, i.e. which answer to a DNS query, is returned (from A records onward). With Linode, once the first box/server is set up, it can be "copied" to multiple locations around the world (in parallel). Linode changes the IP address of the cloned server while everything else stays as it is (config, passwords, keys, SSL certificates, the rest).

So we set up one server, clone it to multiple places, and thereby create PoPs (https://www.cachefly.com/news/why-points-of-presence-pops-are-pivotal-for-optimal-cdn-performance/)

Before adding the proxy servers

After adding GeoDNS and the reverse proxy servers

Testing with the help of: https://www.dotcom-tools.com/website-speed-test

Geo DNS pool

The steps we need to take are:

  • Create the base reverse Nginx proxy
  • Clone it to different locations
  • Test with curl
  • Set up GeoDNS at EasyDNS
  • Test DNS propagation and web presence

Nginx reverse proxy

Configuration for the reverse proxy (I will leave out other, unnecessary details):



http {

   # the /cache path must exist and be owned by the nginx user/group (www-data)
   # levels - how deep the cache subdirectory tree goes
   # keys_zone m_cache:10m - 10 MB of shared memory for cache keys
   # max_size=1g - maximum total size of the cache on disk
   proxy_cache_path /cache levels=1:2 keys_zone=m_cache:10m max_size=1g;

   server {

           location / {

                proxy_cache m_cache;

                proxy_cache_valid 200 302 120m;
                # how long responses stay cached - 120 minutes

                proxy_cache_valid 404 1m;

                proxy_pass https://9.8.7.1;
                # here 9.8.7.1 is the origin site

           }
   }
}


How to test whether the proxy is set up

If you have a Linux command line, run:


curl -H "host: www.vladimircicovic.com" -k https://172.232.148.193/

Here we use the host header together with the IP address, and tell curl to ignore SSL/TLS certificate validity (it would compare the IP address against the domain in the SSL certificate and then refuse to send the request, which is why we add the -k option). With this command we should see the home page.
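To also see whether responses are served from the cache, one option (an addition, not part of the config above) is to put add_header X-Cache-Status $upstream_cache_status; into the location block; it can then be observed with:

curl -sI -H "host: www.vladimircicovic.com" -k https://172.232.148.193/ | grep -i x-cache
# X-Cache-Status: MISS on the first request, HIT on repeats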

How to test DNS for specific countries

One of the quickest ways to see DNS propagation (https://www.digicert.com/faq/dns/what-is-dns-propagation) is the site: https://dnschecker.org/#A/www.vladimircicovic.com

Another way, if you have a Linux command line:


dig +short A www.vladimircicovic.com @118.127.62.178 

172.105.181.107

The public DNS server for Australia was taken from: https://public-dns.info/nameserver/au.html

The IP address 172.105.181.107 is the answer served for Australia and other Oceania countries.

How to test web access from specific countries

The best way is with the site https://www.dotcom-tools.com/website-speed-test, but similar sites exist where you can test.

Optimizing the site and limiting content for faster loading

For example, you can display only the 5 or 10 most recent posts on your site; loading everything at once is very heavy for the client. So one approach is to limit the number of posts on the front page.

Testing tool: https://pagespeed.web.dev/

Before limiting content:

Mobile

Desktop

After limiting content:

Mobile

Desktop

Kubernetes setup on Ubuntu 22.04 LTS (Jammy Jellyfish)

This is a quick and short text on how to install Kubernetes on Ubuntu 22.04. I spent the last 2 weeks solving issues with containerd and Kubernetes.

After reviewing a GitHub issue for Kubernetes, I found the fix for the problem I had.

The problem manifests as Kubernetes health flapping up and down and generally buggy behavior. On Ubuntu 20.04 there was no problem. The issue is connected with containerd cgroups settings in version 1.5.9 (any version above this works fine; see the link I provided for details). So let us start with the installation.

Kubernetes map - from cloudsigma.com

Kubernetes setup on Ubuntu 22.04

Let's assume you have 2 machines with Ubuntu 22.04 installed. The first will be used as the k8s control plane, the second as a worker. Both machines go through the same k8s installation steps. Please note that the IP addresses I am using are from my network, so put in the proper IPs for cp1 and worker1.
Here are the steps for both the k8s control plane and the worker:


apt update
apt upgrade

# make sure these are the IP addresses of your machines
echo "192.168.50.204  worker1.example.com worker1" >> /etc/hosts

echo "192.168.50.165 cp1.example.com cp1" >> /etc/hosts

# set the hostname: run the first line on the control plane, the second on the worker
hostnamectl set-hostname cp1
hostnamectl set-hostname worker1

modprobe br_netfilter
modprobe overlay

cat << EOF | tee /etc/modules-load.d/k8s-modules.conf
br_netfilter
overlay
EOF

cat << EOF |  tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system

apt-get update ; apt-get install -y containerd

mkdir -p /etc/containerd

containerd config default | tee /etc/containerd/config.toml

sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" /etc/containerd/config.toml

systemctl restart containerd

swapoff -a
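# note (an addition to the original steps): swapoff -a lasts only until reboot;
# to make it permanent, comment out the swap entry in /etc/fstab, e.g.:
sed -i '/ swap / s/^/#/' /etc/fstab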

apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add

apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

apt install -y kubeadm kubelet kubectl

After you finish these commands on both machines, there are commands specific to the k8s control plane and to the worker. For the control plane:


kubeadm init --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
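A quick sanity check at this point (standard kubectl commands; the node may need a minute to become Ready):

kubectl get nodes                  # cp1 should report Ready once Calico is up
kubectl get pods -n kube-system    # calico and core pods should be Running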


After you finish this on the control plane machine, a command similar to the following will be printed; copy it into the CLI of worker1:



kubeadm join 192.168.50.165:6443 --token azlcfh.36lef5ty7890en1r6 --discovery-token-ca-cert-hash sha256:47a02b63e60278025d3423883a05a48e06f943195f06ebe414d929f60028e


Or you can run this on the control plane to get it printed:



kubeadm token create --print-join-command


and then execute the printed command on worker1; the output is copied/pasted onto the worker machine. You can add more worker machines with no problem.

If all goes ok you will be able to check with:



kubectl run curl --image=radial/busyboxplus:curl -i --tty 


This runs a container in a pod, so you can see whether the network and other parts are working. Type exit to leave the container. To delete this pod:



kubectl delete pods curl


You can also see how this goes with my video: Youtube K8s setup

Thanks to these people

@Jocix84 and @Bojana_dev for support, direction, and inspiration!