K8S Utility IV - kubectl utility plugin

This article was last updated on July 24, 2024.

Opening

📜 Introduction

  • Sharpening the axe does not delay the chopping of wood
  • To do a good job, one must first sharpen one's tools

In One of the K8S utilities - How to merge multiple kubeconfig?, we introduced Krew, kubectl's plugin management tool. In this article, I will introduce a few practical kubectl plugins.

kubectl Utility plugins

access-matrix

Displays the RBAC access matrix for server resources.

Have you ever wondered what access you have to a given Kubernetes cluster? For a single resource you can use kubectl auth can-i list deployments, but maybe you're looking for a complete overview? That's what access-matrix does: it lists the current user's access rights for all server resources, similar to kubectl auth can-i --list.

Installation

kubectl krew install access-matrix

Usage

# Review access to cluster-scoped resources
$ kubectl access-matrix

# Review access to namespaced resources in 'default'
$ kubectl access-matrix --namespace default

# Review access as a different user
$ kubectl access-matrix --as other-user

# Review access as a service-account
$ kubectl access-matrix --sa kube-system:namespace-controller

# Review access for different verbs
$ kubectl access-matrix --verbs get,watch,patch

# Review access rights diff with another service account
$ kubectl access-matrix --diff-with sa=kube-system:namespace-controller

The display effect is as follows:

 access-matrix

ca-cert

Print the PEM CA certificate for the current cluster

Installation

kubectl krew install ca-cert

Usage

kubectl ca-cert

kubectl ca-cert

cert-manager

This doesn’t need to be introduced, right? The famous cert-manager is used to manage the certificate resources in the cluster.

cert-manager

It needs to be used together with cert-manager installed in the K8S cluster. I'll go into more detail later when I have time.

kubectl cert-manager help
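Besides help, a few subcommands are handy day to day. A minimal sketch, assuming the check api / status certificate / renew subcommands of the cert-manager kubectl plugin (the certificate name my-cert is a placeholder):

# Verify that the cert-manager API is installed and reachable
kubectl cert-manager check api

# Inspect the status of a Certificate and its related resources
kubectl cert-manager status certificate my-cert

# Manually trigger renewal of a Certificate
kubectl cert-manager renew my-cert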

cost

View cluster cost information.

kubectl-cost is a kubectl plugin that provides simple CLI access to Kubernetes cost allocation metrics through the Kubecost API. It allows developers, DevOps, and others to quickly determine the cost and efficiency of Kubernetes workloads.

Installation

  1. Install Kubecost (the Helm chart options can be found here: cost-analyzer-helm-chart):

    helm repo add kubecost https://kubecost.github.io/cost-analyzer/
    helm upgrade -i --create-namespace kubecost kubecost/cost-analyzer --namespace kubecost --set kubecostToken="a3ViZWN0bEBrdWJlY29zdC5jb20=xm343yadf98"

    When the deployment completes, the output is as follows:

    NAME: kubecost
    LAST DEPLOYED: Sat Nov 27 13:44:30 2021
    NAMESPACE: kubecost
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    --------------------------------------------------Kubecost has been successfully installed. When pods are Ready, you can enable port-forwarding with the following command:
    
        kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
    
    Next, navigate to http://localhost:9090 in a web browser.
    
    Having installation issues? View our Troubleshooting Guide at http://docs.kubecost.com/troubleshoot-install
    
  2. Install the kubectl cost plugin:

    kubectl krew install cost

Usage

Costs can then be viewed directly in the browser:

kubecost UI
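The plugin can also query cost allocation straight from the CLI. A minimal sketch, assuming the namespace/deployment subcommands and the --window flag of upstream kubectl-cost:

# Show projected monthly cost per namespace
kubectl cost namespace

# Show cost per deployment over the last 5 days
kubectl cost deployment --window 5d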

ctx

Switch contexts in kubeconfig

Installation

kubectl krew install ctx

Usage

It is simple to use: run kubectl ctx, then choose the context you want to switch to.

$ kubectl ctx
Switched to context "multicloud-k3s".
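You can also switch non-interactively, which is handy in scripts:

# Switch to a context by name
kubectl ctx multicloud-k3s

# Switch back to the previous context
kubectl ctx -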

deprecations

Check for deprecated objects in the cluster. It is generally used for a pre-flight check before upgrading K8S. Also known as KubePug.

KubePug

Installation

kubectl krew install deprecations

Usage

It is also simple to use: run kubectl deprecations and, as shown below, it tells you which APIs have been deprecated, making it easy to plan your K8S upgrade.

$ kubectl deprecations
W1127 16:04:58.641429 28561 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1127 16:04:58.664058 28561 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W1127 16:04:59.622247 28561 warnings.go:70] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1127 16:05:00.777598 28561 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1127 16:05:00.808486 28561 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
RESULTS:
Deprecated APIs:

PodSecurityPolicy found in policy/v1beta1
├─ PodSecurityPolicy governs the ability to make requests that affect the Security Context that will be applied to a pod and container. Deprecated in 1.21.
-> GLOBAL: kube-prometheus-stack-admission
-> GLOBAL: loki-grafana-test
-> GLOBAL: loki-promtail
-> GLOBAL: loki
-> GLOBAL: loki-grafana
-> GLOBAL: prometheus-operator-grafana-test
-> GLOBAL: prometheus-operator-alertmanager
-> GLOBAL: prometheus-operator-grafana
-> GLOBAL: prometheus-operator-prometheus
-> GLOBAL: prometheus-operator-prometheus-node-exporter
-> GLOBAL: prometheus-operator-kube-state-metrics
-> GLOBAL: prometheus-operator-operator
-> GLOBAL: kubecost-grafana
-> GLOBAL: kubecost-cost-analyzer-psp

ComponentStatus found in /v1
├─ ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+
-> GLOBAL: controller-manager
-> GLOBAL: scheduler


Deleted APIs:

It can also be used in conjunction with CI processes:

$ kubectl deprecations --input-file=./deployment/ --error-on-deleted --error-on-deprecated

df-pv

Like the Unix df (disk free), but for persistent volumes.

Installation

kubectl krew install df-pv

Usage

Execute kubectl df-pv:

kubectl df-pv
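It can also be limited to a single namespace. A minimal sketch (the namespace name is just an example), assuming the plugin accepts the standard -n flag:

# Show disk usage only for the persistent volumes in the loki-stack namespace
kubectl df-pv -n loki-stack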

get-all

Really get all the resources of Kubernetes.

Installation

kubectl krew install get-all

Usage

Execute kubectl get-all directly; example output is as follows:

kubectl get-all
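A few useful variations, sketched after the flags of the upstream ketall project (treat them as assumptions and check kubectl get-all --help):

# Only resources in the kube-system namespace
kubectl get-all --namespace=kube-system

# Only cluster-scoped resources
kubectl get-all --only-scope=cluster

# Only resources created within the last 2 minutes
kubectl get-all --since 2m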

images

Displays the container images used in the cluster.

Installation

kubectl krew install images

Usage

Execute kubectl images -A; the result is as follows:

kubectl images
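Instead of all namespaces (-A), the scan can be narrowed down. A minimal sketch, assuming the plugin accepts the usual -n/--namespace flag:

# List container images used in the kube-system namespace only
kubectl images -n kube-system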

kubesec-scan

Use kubesec.io to scan Kubernetes resources.

Installation

kubectl krew install kubesec-scan

Usage

Examples are as follows:

$ kubectl kubesec-scan statefulset loki -n loki-stack
scanning statefulset loki in namespace loki-stack
kubesec.io score: 4
-----------------
Advise:
1. .spec .volumeClaimTemplates[] .spec .accessModes | index("ReadWriteOnce")
2. containers[] .securityContext .runAsNonRoot == true
Force the running image to run as a non-root user to ensure least privilege
3. containers[] .securityContext .capabilities .drop
Reducing kernel capabilities available to a container limits its attack surface
4. containers[] .securityContext .runAsUser > 10000
Run as a high-UID user to avoid conflicts with the host's user table
5. containers[] .securityContext .capabilities .drop | index("ALL")
Drop all capabilities and add only those required to reduce syscall attack surface

neat

Remove clutter from Kubernetes manifests to make them more readable.

Installation

kubectl krew install neat

Usage

Examples are as follows:

Information we generally do not care about, such as creationTimestamp and managedFields, is removed. Very refreshing.

$ kubectl neat get -- pod loki-0 -oyaml -n loki-stack
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: b9ab988df734dccd44833416670e70085a2a31cfc108e68605f22d3a758f50b5
    prometheus.io/port: http-metrics
    prometheus.io/scrape: "true"
  labels:
    app: loki
    controller-revision-hash: loki-79684c849
    name: loki
    release: loki
    statefulset.kubernetes.io/pod-name: loki-0
  name: loki-0
  namespace: loki-stack
spec:
  containers:
  - args:
    - -config.file=/etc/loki/loki.yaml
    image: grafana/loki:2.3.0
    livenessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    name: loki
    ports:
    - containerPort: 3100
      name: http-metrics
    readinessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - mountPath: /etc/loki
      name: config
    - mountPath: /data
      name: storage
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jhsvm
      readOnly: true
  hostname: loki-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  securityContext:
    fsGroup: 10001
    runAsGroup: 10001
    runAsNonRoot: true
    runAsUser: 10001
  serviceAccountName: loki
  subdomain: loki-headless
  terminationGracePeriodSeconds: 4800
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: config
    secret:
      secretName: loki
  - name: storage
  - name: kube-api-access-jhsvm
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.namespace
            path: namespace
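kubectl-neat also works as a filter at the end of a pipe, which achieves the same result:

# Pipe any manifest through neat
kubectl get pod loki-0 -n loki-stack -o yaml | kubectl neat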

node-shell

Spawn a root shell on a node via kubectl.

Installation

kubectl krew install node-shell

Usage

Examples are as follows:

$ kubectl node-shell instance-ykx0ofns
spawning "nsenter-fr393w" on "instance-ykx0ofns"
If you don't see a command prompt, try pressing enter.
root@instance-ykx0ofns:/# hostname
instance-ykx0ofns
root@instance-ykx0ofns:/# ifconfig
...
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.64.4 netmask 255.255.240.0 broadcast 192.168.79.255
inet6 fe80::f820:20ff:fe16:3084 prefixlen 64 scopeid 0x20<link>
ether fa:20:20:16:30:84 txqueuelen 1000 (Ethernet)
RX packets 24386113 bytes 26390915146 (26.3 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18840452 bytes 3264860766 (3.2 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
root@instance-ykx0ofns:/# exit
logout
pod default/nsenter-fr393w terminated (Error)
pod "nsenter-fr393w" deleted
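A one-off command can also be run without staying in an interactive shell. A hedged sketch, assuming the plugin forwards everything after --:

# Run a single command on the node and exit
kubectl node-shell instance-ykx0ofns -- ps aux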

ns

Switch between Kubernetes namespaces.

Installation

kubectl krew install ns

Usage

$ kubectl ns loki-stack
Context "multicloud-k3s" modified.
Active namespace is "loki-stack".

$ kubectl get pod
NAME READY STATUS RESTARTS AGE
loki-promtail-fbbjj 1/1 Running 0 12d
loki-promtail-sx5gj 1/1 Running 0 12d
loki-0 1/1 Running 0 12d
loki-grafana-8bffbb679-szdpj 1/1 Running 0 12d
loki-promtail-hmc26 1/1 Running 0 12d
loki-promtail-xvnbc 1/1 Running 0 12d
loki-promtail-5d5h8 1/1 Running 0 12d
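Like ctx, it can list namespaces and switch back:

# List namespaces (the active one is highlighted)
kubectl ns

# Switch back to the previous namespace
kubectl ns -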

outdated

Find stale container images running in the cluster.

Installation

kubectl krew install outdated

Usage

$ kubectl outdated
Image Current Latest Behind
index.docker.io/rancher/klipper-helm v0.6.6-build20211022 0.6.8-build20211123 2
docker.io/rancher/klipper-helm v0.6.4-build20210813 0.6.8-build20211123 4
docker.io/alekcander/k3s-flannel-fixer 0.0.2 0.0.2 0
docker.io/rancher/metrics-server v0.3.6 0.4.1 1
docker.io/rancher/coredns-coredns 1.8.3 1.8.3 0
docker.io/rancher/library-traefik 2.4.8 2.4.9 1
docker.io/rancher/local-path-provisioner v0.0.19 0.0.20 1
docker.io/grafana/promtail 2.1.0 2.4.1 5
docker.io/grafana/loki 2.3.0 2.4.1 2
quay.io/kiwigrid/k8s-sidecar 1.12.3 1.14.2 5
docker.io/grafana/grafana 8.1.6 8.3.0-beta1 8
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v0.18.1 1.3.0 9
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v0.20.0 0.23.0 5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v0.0.1 0.0.1 0
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v1.9.4 2.0.0-beta 5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v2.15.2 2.31.1 38
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v0.35.0 0.42.1 11
docker.io/kiwigrid/k8s-sidecar 0.1.20 1.14.2 46
docker.io/grafana/grafana 6.5.2 8.3.0-beta1 75
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay... v0.35.0 0.42.1 12
docker.io/squareup/ghostunnel v1.5.2 1.5.2 0
docker.io/grafana/grafana 8.1.2 8.3.0-beta1 12
docker.io/kiwigrid/k8s-sidecar 1.12.3 1.14.2 5
docker.io/prom/prometheus v2.22.2 2.31.1 21

popeye (Popeye)

Scan the cluster for potential resource issues. It is the same Popeye that K9s uses.

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes the cluster based on what is actually deployed, not what is sitting on disk. By scanning the cluster, it detects misconfigurations and helps you ensure that best practices are in place, avoiding future headaches. It aims to reduce the cognitive overload of operating Kubernetes clusters in the wild. Also, if your cluster uses metrics-server, it reports over- and under-allocated resources and attempts to warn you if the cluster runs out of capacity.

Popeye is a read-only tool that doesn’t change any of your Kubernetes resources in any way!

Installation

kubectl krew install popeye

Usage

As follows:

❯ kubectl popeye

___ ___ _____ _____ K .-'-.
| _ \___| _ \ __\ \ / / __| 8 __| `\
| _/ _ \ _/ _| \ V /| _| s `-,-`--._ `\
|_| \___/_| |___| |_| |___| [] .->' a `|-'
Biffs`em and Buffs`em! `=/ (__/_ /
\_, ` _)
`----; |




DAEMONSETS (1 SCANNED) 💥 0 😱 1 🔊 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-promtail.......................................................................😱
🔊 [POP-404] Deprecation check failed. Unable to assert resource version.
🐳 promtail
😱 [POP-106] No resources requests/limits defined.


DEPLOYMENTS (1 SCANNED) 💥 0 😱 1 🔊 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-grafana........................................................................😱
🔊 [POP-404] Deprecation check failed. Unable to assert resource version.
🐳 grafana
😱 [POP-106] No resources requests/limits defined.
🐳 grafana-sc-datasources
😱 [POP-106] No resources requests/limits defined.



PODS (7 SCANNED) 💥 0 😱 7 🔊 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-0..............................................................................😱
🔊 [POP-206] No PodDisruptionBudget defined.
😱 [POP-301] Connects to API Server? ServiceAccount token is mounted.
🐳 loki
😱 [POP-106] No resources requests/limits defined.
· loki-stack/loki-grafana-8bffbb679-szdpj........................................................😱
🔊 [POP-206] No PodDisruptionBudget defined.
😱 [POP-301] Connects to API Server? ServiceAccount token is mounted.
🐳 grafana
😱 [POP-106] No resources requests/limits defined.
🔊 [POP-105] Liveness probe uses a port#, prefer a named port.
🔊 [POP-105] Readiness probe uses a port#, prefer a named port.
🐳 grafana-sc-datasources
😱 [POP-106] No resources requests/limits defined.
· loki-stack/loki-promtail-5d5h8.................................................................😱
🔊 [POP-206] No PodDisruptionBudget defined.
😱 [POP-301] Connects to API Server? ServiceAccount token is mounted.
😱 [POP-302] Pod could be running as root user. Check SecurityContext/image.
🐳 promtail
😱 [POP-106] No resources requests/limits defined.
😱 [POP-103] No liveness probe.
😱 [POP-306] Container could be running as root user. Check SecurityContext/Image.

SUMMARY
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
Your cluster score: 80 -- B
o .-'-.
o __| B `\
o `-,-`--._ `\
[] .->' a `|-'
`=/ (__/_ /
\_, ` _)
`----; |
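Scans can also be scoped and exported. A hedged sketch based on upstream Popeye flags (-n, -o, and --save are assumptions to verify with kubectl popeye --help):

# Scan a single namespace
kubectl popeye -n loki-stack

# Emit the report as JSON and persist it to disk
kubectl popeye -o json --save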

resource-capacity

Provides an overview of resource requests, limits, and usage.

This is a simple CLI that provides an overview of resource requests, limits, and utilization in a Kubernetes cluster. It attempts to combine the best parts of the output from kubectl top and kubectl describe into one easy-to-use CLI that focuses on cluster resources.

Installation

kubectl krew install resource-capacity

Usage

The following example looks at nodes; you can also look at pods, filter by labels, sort, and more (see the sketch after the output below).

$ kubectl resource-capacity
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 710m (14%) 300m (6%) 535Mi (6%) 257Mi (3%)
09b2brd7robnn5zi-1106883 0Mi (0%) 0Mi (0%) 0Mi (0%) 0Mi (0%)
hecs-348550 100m (10%) 100m (10%) 236Mi (11%) 27Mi (1%)
instance-wy7ksibk 310m (31%) 0Mi (0%) 174Mi (16%) 0Mi (0%)
instance-ykx0ofns 200m (20%) 200m (20%) 53Mi (5%) 53Mi (5%)
izuf656om146vu1n6pd6lpz 100m (10%) 0Mi (0%) 74Mi (3%) 179Mi (8%)
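As mentioned above, pods, utilization, and sorting are also supported. A minimal sketch, assuming the upstream kube-capacity flags (utilization requires metrics-server):

# Per-pod rows with live utilization, sorted by CPU utilization
kubectl resource-capacity --pods --util --sort cpu.util

# Filter pods by label
kubectl resource-capacity --pods --pod-labels app=loki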

score

Kubernetes static code analysis (kube-score).

Installation

kubectl krew install score

Usage

It can also be integrated with CI. Examples are as follows:

kubectl score
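Scoring a manifest looks like the following. A hedged sketch, assuming the plugin takes a manifest file or - for stdin, like the standalone kube-score binary:

# Score a local manifest file
kubectl score my-deployment.yaml

# Score a live object by piping its manifest in
kubectl get deployment loki-grafana -n loki-stack -o yaml | kubectl score -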

sniff

Highly recommended: I once analyzed a pod network problem with the help of this plugin. It uses tcpdump and Wireshark to start a remote packet capture on any pod.

Installation

kubectl krew install sniff

Usage

# kubectl < 1.12:
kubectl plugin sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

# kubectl >= 1.12:
kubectl sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

As follows:

kubectl sniff
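A concrete invocation built from the flags above (the pod name, capture filter, and output file are examples):

# Capture port-3100 traffic from the loki-0 pod into a pcap file for Wireshark
kubectl sniff loki-0 -n loki-stack -f "port 3100" -o loki.pcap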

starboard

Also a security scanning tool.

Installation

kubectl krew install starboard

Usage

kubectl starboard report deployment/nginx > nginx.deploy.html

A security report can be generated:

starboard
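The report is generated from previously stored scan results. A hedged sketch, assuming the scan subcommands of the upstream Starboard CLI:

# Run a vulnerability scan and store the result as a CRD
kubectl starboard scan vulnerabilityreports deployment/nginx

# Then generate the HTML report as shown above
kubectl starboard report deployment/nginx > nginx.deploy.html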

tail - kubernetes tail

Kubernetes tail: streams logs from all containers of all matched pods. Pods are matched by service, replicaset, deployment, and so on. It adjusts to a changing cluster: when pods fall in or out of the selection, they are added to or removed from the log stream.

Installation

kubectl krew install tail

Usage

# Match all pods
$ kubectl tail

# Match all pods in the staging namespace
$ kubectl tail --ns staging

# Match pods of the replicaset named workers in any namespace
$ kubectl tail --rs workers

# Match pods of the replicaset named workers in the staging namespace
$ kubectl tail --rs staging/workers

# Match pods belonging to the webapp deployment and the frontend service
$ kubectl tail --svc frontend --deploy webapp

The effect is as follows; the pod each log line comes from is prepended:

tail effect

trace

Use system tools to track Kubernetes pods and nodes.

kubectl-trace is a kubectl plugin that allows you to schedule the execution of bpftrace programs in a Kubernetes cluster.

Installation

kubectl krew install trace

Usage

I don’t know much about this piece, so I won’t comment much.

kubectl trace run ip-180-12-0-152.ec2.internal -f read.bt
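Besides loading a bpftrace program from a file with -f, a program can be passed literally. A hedged sketch, assuming the -e flag shown in the upstream kubectl-trace examples (the one-liner counts syscall entries per probe):

kubectl trace run ip-180-12-0-152.ec2.internal -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); }"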

tree

A kubectl plugin that explores the ownership relationships between Kubernetes objects through their ownerReferences.

Installation

Install using the krew plugin manager:

kubectl krew install tree
kubectl tree --help

Usage

DaemonSet example:

DaemonSet  Tree

Knative Service example:

Knative Service
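On the command line, you pass an object kind and name. A minimal sketch (the StatefulSet is the loki example from earlier sections):

# Show everything owned, directly or transitively, by the loki StatefulSet
kubectl tree statefulset loki -n loki-stack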

tunnel

Reverse tunnel between the cluster and your own machine.

It allows you to expose machines as services in a cluster, or expose them to specific deployments. The purpose of this project is to provide a holistic solution to this particular problem (accessing local machines from Kubernetes pods).

Installation

kubectl krew install tunnel

Usage

The following command will allow pods in the cluster to access your local web application (listening on port 8000) over HTTP (i.e., Kubernetes applications can send requests to myapp:80):

ktunnel expose myapp 80:8000
ktunnel expose myapp 80:8000 -r # deployment & service will be reused if they exist, or they will be created

warp

Sync and execute local files in pods

A kubectl (Kubernetes CLI) plugin that works like kubectl run, with rsync added.

It creates a temporary pod, syncs local files into the desired container, and executes any command.

For example, this can be used to build and run your local project in Kubernetes with more resources, required architecture, etc., while using your preferred editor locally.

Installation

kubectl krew install warp

Usage

# Start bash in an ubuntu image and sync the files in the current directory into the container
kubectl warp -i -t --image ubuntu testing -- /bin/bash

# Start a Node.js project in a node container
cd examples/nodejs
kubectl warp -i -t --image node testing-node -- npm run watch

who-can

Shows who has RBAC access to Kubernetes resources.

Installation

kubectl krew install who-can

Usage

$ kubectl who-can create ns --all-namespaces
No subjects found with permissions to create ns assigned through RoleBindings

CLUSTERROLEBINDING SUBJECT TYPE SA-NAMESPACE
cluster-admin system:masters Group
helm-kube-system-traefik-crd helm-traefik-crd ServiceAccount kube-system
helm-kube-system-traefik helm-traefik ServiceAccount kube-system
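The same verb/resource pattern works for any permission question, optionally scoped to a namespace:

# Who can delete pods in kube-system?
kubectl who-can delete pods -n kube-system

# Who can read secrets cluster-wide?
kubectl who-can get secrets --all-namespaces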

EOF

