Deleting a Kubernetes namespace stuck in Terminating

Deleting a Kubernetes namespace is normally as easy as running the kubectl delete command. However, sometimes the deletion gets stuck in the “Terminating” status and never finishes. There is an open issue about this on the Kubernetes GitHub repository, and it happened to my Kubernetes v1.13 cluster as well.
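For context, the deletion is typically started like this (using the monitoring namespace from this post as the example; by default kubectl waits for the namespace to actually disappear, which is why the command appears to hang):

$ kubectl delete namespace monitoring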

The following command shows the monitoring namespace stuck in “Terminating”:

$ kubectl get namespaces
NAME              STATUS        AGE
default           Active        17d
kube-node-lease   Active        17d
kube-public       Active        17d
kube-system       Active        17d
traefik           Active        16d
monitoring        Terminating   14d

To troubleshoot this issue, let’s take a look at the namespace’s full definition:

$ kubectl get namespace monitoring -o json

You should see output similar to this:

{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"monitoring\"}}\n"
        },
        "creationTimestamp": "2019-05-20T10:19:50Z",
        "deletionTimestamp": "2019-06-03T10:30:48Z",
        "name": "monitoring",
        "resourceVersion": "4330042",
        "selfLink": "/api/v1/namespaces/monitoring",
        "uid": "cce62477-7ae8-11e9-860d-527c82638905"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

The finalizers array has a single value, kubernetes. This is what keeps the terminating process running forever.

Finalizers are arbitrary string values that, while present, prevent a hard delete of a resource. Kubernetes only deletes the object once the list of finalizers is empty, meaning all finalizers have been executed.
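As a quick check, you can also print just the finalizers with a jsonpath query instead of dumping the whole object; for the namespace above it should print the single kubernetes entry:

$ kubectl get namespace monitoring -o jsonpath='{.spec.finalizers[*]}'
kubernetes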

This issue can be solved quickly by emptying the finalizers array. However, to be safe, we should first double-check the namespace’s remaining resources. The following command lists every resource that still belongs to the monitoring namespace:

$ kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n monitoring

If you see any resources in the output above, try to delete them with kubectl delete first, as shown in the example below. If nothing is left, we will modify the finalizer instead; you will not be able to remove it with kubectl edit, so we have to update the namespace’s JSON data via the API.
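For instance, a hypothetical leftover pod named prometheus-0 would be removed like this:

$ kubectl delete pod prometheus-0 -n monitoring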

First of all, let’s expose the API server locally using kubectl proxy:

$ kubectl proxy &
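By default the proxy listens on 127.0.0.1:8001 and prints the address once it is ready:

Starting to serve on 127.0.0.1:8001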

Then use curl to push a modified version of the JSON data above to the Kubernetes API: empty the finalizers array and save the result into a file, for example data.json.
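For reference, data.json is the namespace object from earlier with the finalizers array emptied. A trimmed-down payload like this is usually sufficient, though keeping the full object from the kubectl get output (minus the finalizer) also works:

{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "name": "monitoring"
    },
    "spec": {
        "finalizers": []
    }
}

With that file in place, PUT it to the namespace’s finalize endpoint: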

$ curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @data.json http://localhost:8001/api/v1/namespaces/monitoring/finalize
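If you would rather not create the file by hand, the same PUT can be assembled in a single pipeline with jq, which is essentially what the automation script at the end of this post does:

$ kubectl get namespace monitoring -o json | jq '.spec.finalizers = []' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/monitoring/finalize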

Now let’s check the namespace list again; you should see that the monitoring namespace has been removed successfully:

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   17d
kube-node-lease   Active   17d
kube-public       Active   17d
kube-system       Active   17d
traefik           Active   16d

If you wish to automate the deletion process, check out the following script from ctron:

#!/bin/bash
set -eo pipefail
die() { echo "$*" 1>&2 ; exit 1; }

need() {
which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

# checking pre-reqs
need "jq"
need "curl"
need "kubectl"

PROJECT="$1"
shift
test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"

# start a local API proxy in the background and make sure it is killed on exit
kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy () {
  kill $PROXY_PID
}
trap killproxy EXIT

sleep 1 # give the proxy a second

# strip the kubernetes finalizer and PUT the result to the finalize endpoint
kubectl get namespace "$PROJECT" -o json \
  | jq 'del(.spec.finalizers[] | select(. == "kubernetes"))' \
  | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- "http://localhost:8001/api/v1/namespaces/$PROJECT/finalize" \
  && echo "Killed namespace: $PROJECT"
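Save the script as kill-ns (the name its usage message suggests), make it executable, and pass the stuck namespace as the only argument; the final echo confirms the result:

$ chmod +x kill-ns
$ ./kill-ns monitoring
Killed namespace: monitoring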