SadServer Solutions - Melbourne

SadServer solution for https://sadservers.com/scenario/melbourne

Reviewing the code shows that the Content-Length header is hardcoded to 0. So I set it to the proper size of the response body (the size of "Hello, World\n"), then restarted gunicorn with "sudo systemctl restart gunicorn.service" and "sudo systemctl restart gunicorn.socket". The final code looks like this:

Proper code
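In case the screenshot does not render, here is a minimal sketch of the corrected WSGI app (file and function names are assumptions; the only change from the scenario is that Content-Length matches the body size):

```python
# wsgi.py (sketch) -- the bug was a hardcoded "Content-Length: 0"
def application(environ, start_response):
    body = b"Hello, World\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # Content-Length must equal the real body size (13), not 0
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Gunicorn picks the change up after the two restarts above.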

The next issue is with Nginx: in the "default" site config the socket path ends with .socket, while the actual socket is /run/gunicorn.sock, ending in .sock. Fix the name and restart Nginx with "sudo systemctl restart nginx".

Bad config

proper name for socket
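The change amounts to one suffix in the proxy target; in outline (the surrounding directives are a sketch, the paths are the scenario's):

```nginx
# /etc/nginx/sites-enabled/default (sketch)
location / {
    # was: proxy_pass http://unix:/run/gunicorn.socket;
    proxy_pass http://unix:/run/gunicorn.sock;
}
```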

Final test:

final test

SadServer Solutions - Oaxaca

SadServer - Oaxaca solution for https://sadservers.com/scenario/oaxaca

So, the process was started from the current bash shell, and it holds the file open on FD (file descriptor) 77; in the picture you can see "FD/77w". Conventional file descriptor usage is: 0 for stdin, 1 for stdout, 2 for stderr, and 255 for the bash shell itself, which points to /dev/tty or /dev/pts/N, where N is a number. The main process is the bash shell and FD 77 belongs to it, so killing the shell would also destroy our connection. If we run "lsof somefile", it shows our bash shell, and under /proc/[PID of shell]/fd/77 there is a symbolic link to /home/admin/somefile.

To release the file without killing the shell, close that descriptor by number with the command: eval "exec 77>&-"

.bashrc contains the running command:

close file descriptor

symbolic link to file:

close file descriptor
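The mechanics can be reproduced outside the scenario; this self-contained demo opens a file on FD 77 and then closes it the same way (the temp file stands in for the scenario's /home/admin/somefile):

```shell
#!/bin/bash
# Demo: hold a file open on FD 77, observe it in /proc, then release it
tmp=$(mktemp)
exec 77>"$tmp"            # open FD 77 for writing (the "w" in FD/77w)
ls -l "/proc/$$/fd/77"    # symlink to $tmp while the descriptor is open
exec 77>&-                # close FD 77; the file is released
ls "/proc/$$/fd/77" 2>/dev/null || echo "fd 77 closed"
rm -f "$tmp"
```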

SadServer Solutions - Salta

SadServer Salta solution, URL: https://sadservers.com/scenario/salta

After logging into the server, I noticed that port 8888 is already in use. The lsof tool was missing, so I installed it with "sudo apt install lsof" and checked which process was using port 8888. It was Nginx, so I stopped it with "sudo systemctl stop nginx".

nginx using port 8888

Inside the Dockerfile I found two problems: the wrong port (8880 instead of 8888), and a CMD referencing "serve.js" instead of "server.js", the actual file in the same directory.

Dockerfile fix
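In outline, the corrected Dockerfile looks like this (the base image and COPY lines are assumptions; the two fixes from the scenario are marked):

```dockerfile
# Dockerfile (sketch)
FROM node:15-alpine          # base image is an assumption
WORKDIR /app
COPY . .
EXPOSE 8888                  # was 8880
CMD ["node", "server.js"]    # was "serve.js"
```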

With the fixes in place, build the Docker image with: "sudo docker build -t sampleapp:v1 ."

Dockerfile build

Run the app with "sudo docker run -p 8888:8888 sampleapp:v1" and the task is done.

Dockerfile run

SadServer Solutions - Cape Town

Solution for Cape Town task from URL: https://sadservers.com/scenario/capetown

After logging into the server, nginx is not running.

nginx not working

Examining why nginx does not start shows that the first line of the nginx config begins with a stray ";". I removed the ";", but nginx still did not work.

first issue

After examining the error log, I was able to spot a limit on open files.

second issue

Viewing /proc/[pid]/limits, I spotted it: "Max open files 10".

issue

I checked the limits for the www-data user, as well as other settings (fs.file-max and friends), and found nothing wrong there.

In the end, the remaining suspect was systemd imposing a per-process limit.

Reading the nginx .service file from systemd confirmed it:

second issue

Comment the line out with a leading #, reload the systemd daemon, restart nginx, and it works!
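In commands (the unit-file edit is shown as a comment; LimitNOFILE is the systemd directive that maps to "Max open files"):

```shell
# In the nginx systemd unit file, comment out the tiny limit:
#   LimitNOFILE=10   ->   #LimitNOFILE=10
sudo systemctl daemon-reload
sudo systemctl restart nginx
```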

This task took about 30 minutes to solve.

SadServer Solutions - Manhattan (medium)

SadServer Manhattan (medium), URL: https://sadservers.com/scenario/manhattan My first medium task and solution. After logging in, I simply ran:


sudo systemctl restart postgresql@14-main.service

The issue is in these lines: "no space left on device"

Postgres issue

After running df -h, I noticed 100% usage on /opt/pgdata/. Removing files that are not needed there at all solved the issue.

Postgres solution
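The diagnosis and cleanup as commands (which files to delete is whatever du surfaces on the box; nothing here is specific beyond the mount point and the service name):

```shell
df -h /opt/pgdata                          # shows 100% use
sudo du -ah /opt/pgdata | sort -rh | head  # largest files first
# remove the non-Postgres junk files du reveals, then:
sudo systemctl restart postgresql@14-main.service
```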

SadServer Solutions - Bilbao K8S task

SadServer task Bilbao url: https://sadservers.com/scenario/bilbao

After login and inspection of the issue, we get this picture:

Pod status

After some googling I found that the issue was the nodeSelector, so I removed it from the manifest.yml file:

nodeSelector

Remove nodeSelector
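For reference, the removed block sits in the pod spec and looks roughly like this (the label key/value shown is illustrative, not the scenario's exact one):

```yaml
# manifest.yml (fragment) -- this selector matched no node, so the
# pod stayed Pending; deleting the block lets the scheduler place it
spec:
  nodeSelector:
    disktype: ssd        # illustrative label, not the real one
```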

After removing it, delete all pods, re-apply the manifest, and check with curl:

Delete pods & run yaml file
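The re-rollout from the line above, spelled out (the curl port is hypothetical; use whatever check the scenario specifies):

```shell
kubectl delete pod --all        # remove the stuck pods
kubectl apply -f manifest.yml   # re-create them without the nodeSelector
kubectl get pods                # wait until the pod is Running
curl localhost:PORT             # PORT is a placeholder for the scenario's check
```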

SadServer Solutions - Bucharest

Postgres solution for Bucharest

Bucharest, task URL: https://sadservers.com/scenario/bucharest

If we run command:


PGPASSWORD=app1user psql -h 127.0.0.1 -d app1 -U app1user -c '\q'

We see an access issue mentioning the file "pg_hba.conf". Opening that file, we see these lines:


host    all             all             all                     reject
host    all             all             all                     reject

If we dig into the manual for this file, we find what "reject" does: it rejects any connection. So replace the word reject with md5 (use sudo to edit the file), and restart the service with


sudo systemctl restart postgresql@13-main.service
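After the edit, the two pg_hba.conf lines from above read as follows (md5 asks for password authentication instead of rejecting outright):

```
host    all             all             all                     md5
host    all             all             all                     md5
```

Re-running the PGPASSWORD=... psql check from above should now connect cleanly.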