I wanted to set up and test DirectPV myself, so I used Vagrant, which lets me quickly tear the environment down and bring it back up locally.
The plan is to test attaching DirectPV-provisioned storage to an nginx pod's log path.
This is a write-up from the CloudNet@ MinIO study group; the steps may differ somewhat from the lab material provided by the study.
I first tried kind, but after kubectl directpv install, the node-controller container in the node-server pod failed to start with a status.devices error.
E0917 15:54:08.970761 2706 main.go:148] "unable to execute command" err="DirectPVNode.directpv.min.io \"kind-control-plane\" is invalid: status.devices: Required value"
Since kind runs its nodes as containers, the host's block devices are presumably not exposed properly. There are related issues on GitHub, but they do not look likely to be resolved:
https://github.com/minio/directpv/issues?q=is%3Aissue%20status.devices
Instead, I attach four 10 GiB virtual disks to a VirtualBox VM and use them as DirectPV drives.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-24.04"
  config.vm.provider "virtualbox" do |vb|
    (1..4).each do |i|
      file_to_disk = "./disk#{i}.vdi"
      unless File.exist?(file_to_disk)
        vb.customize ['createhd', '--filename', file_to_disk, '--size', 10 * 1024]
      end
      vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', i, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
    end
    vb.cpus = 4
    vb.memory = 8192
  end
  config.vm.provision "shell", inline: <<-SHELL
    echo " install docker"
    sudo apt-get update && sudo apt-get install -y ca-certificates curl > /dev/null
    sudo install -m 0755 -d /etc/apt/keyrings > /dev/null
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc > /dev/null
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$UBUNTU_CODENAME") stable" \
      | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update && \
      sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin > /dev/null
    sudo usermod -aG docker vagrant > /dev/null
    sudo systemctl enable --now docker.service > /dev/null
    echo " install k3s"
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.33.4+k3s1 INSTALL_K3S_EXEC=" --disable=traefik" K3S_KUBECONFIG_MODE="644" sh -s - server --token miniotoken > /dev/null
    mkdir -p ~/.kube
    cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
    echo " install kubectl, k9s, krew"
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && \
      sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && rm kubectl > /dev/null
    wget https://github.com/derailed/k9s/releases/download/v0.50.9/k9s_linux_amd64.deb && sudo dpkg -i k9s_linux_amd64.deb && rm k9s_linux_amd64.deb > /dev/null
    (
      set -x; cd "$(mktemp -d)" &&
      OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
      ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
      KREW="krew-${OS}_${ARCH}" &&
      curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
      tar zxvf "${KREW}.tar.gz" &&
      ./"${KREW}" install krew
    ) > /dev/null
    echo " finalizing setup"
    echo 'alias k="kubectl"' >> ~/.bashrc
    echo "export PATH=\"$(echo "${KREW_ROOT:-$HOME/.krew}/bin"):\$PATH\"" >> ~/.bashrc
    echo 'sudo su - && exit' > /home/vagrant/.bash_login
  SHELL
  config.vm.post_up_message = <<-EOT
    vagrant is up and running!
    vagrant ssh default
  EOT
end
After installing the plugin, deploy the driver:
kubectl krew install directpv
kubectl directpv install
The install creates the directpv-min-io StorageClass:
root@vagrant:~# kubectl get storageclass
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
directpv-min-io      directpv-min-io         Delete          WaitForFirstConsumer   true                   8m6s
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  23m
root@vagrant:~# kubectl get deployment,daemonset,pod -n directpv
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   3/3     3            3           6m6s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-server   1         1         1       1            1           <none>          6m6s

NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-6fdb5bf4f7-2n6l2   3/3     Running   0          6m6s
pod/controller-6fdb5bf4f7-r9fq4   3/3     Running   0          6m6s
pod/controller-6fdb5bf4f7-wl5l2   3/3     Running   0          6m6s
pod/node-server-f6rj7             4/4     Running   0          6m6s
The controller pod is composed of the csi-provisioner, csi-resizer, and controller containers, which matches the architecture from the official documentation summarized in the previous post.
root@vagrant:~# kubectl get pods -n directpv controller-6fdb5bf4f7-2n6l2 -oyaml | yq ".spec.containers.[] | .name"
"csi-provisioner"
"csi-resizer"
"controller"

Since the controller runs as a Deployment, the leader pod can be identified with the command below:
root@vagrant:~# kubectl describe lease -n directpv external-resizer-directpv-min-io
Name:         external-resizer-directpv-min-io
Namespace:    directpv
Labels:       <none>
Annotations:  <none>
API Version:  coordination.k8s.io/v1
Kind:         Lease
Metadata:
  Creation Timestamp:  2025-09-17T15:56:53Z
  Resource Version:    1743
  UID:                 a8d3ccb5-95f0-4f09-bb7d-f8ed48b68897
Spec:
  Acquire Time:            2025-09-17T15:56:53.791653Z
  Holder Identity:         controller-6fdb5bf4f7-2n6l2
  Lease Duration Seconds:  15
  Lease Transitions:       0
  Renew Time:              2025-09-17T16:23:31.860625Z
Events:
  Type    Reason          Age   From                                                          Message
  ----    ------          ----  ----                                                          -------
  Normal  LeaderElection  26m   external-resizer-directpv-min-io/controller-6fdb5bf4f7-2n6l2  controller-6fdb5bf4f7-2n6l2 became leader
The node-server pod is composed of the node-driver-registrar, node-server, node-controller, and liveness-probe containers.

root@vagrant:~# kubectl get pods -n directpv node-server-f6rj7 -oyaml | yq ".spec.containers.[] | .name"
"node-driver-registrar"
"node-server"
"node-controller"
"liveness-probe"
root@vagrant:~# kubectl get crd
NAME                                   CREATED AT
addons.k3s.cattle.io                   2025-09-17T15:56:03Z
directpvdrives.directpv.min.io         2025-09-17T15:56:38Z
directpvinitrequests.directpv.min.io   2025-09-17T15:56:38Z
directpvnodes.directpv.min.io          2025-09-17T15:56:38Z
directpvvolumes.directpv.min.io        2025-09-17T15:56:38Z
etcdsnapshotfiles.k3s.cattle.io        2025-09-17T15:56:03Z
helmchartconfigs.helm.cattle.io        2025-09-17T15:56:03Z
helmcharts.helm.cattle.io              2025-09-17T15:56:03Z
The architecture above mentions DirectPVDrive and DirectPVVolume, and those names show up among the CRDs as well.
Both may be empty at first: Drive objects appear after drive initialization, and Volume objects appear after a PVC is created.
root@vagrant:~# kubectl directpv discover
Discovered node 'vagrant' ✔
┌─────────────────────┬─────────┬───────┬────────┬────────────┬───────────────────┬───────────┬─────────────┐
│ ID                  │ NODE    │ DRIVE │ SIZE   │ FILESYSTEM │ MAKE              │ AVAILABLE │ DESCRIPTION │
├─────────────────────┼─────────┼───────┼────────┼────────────┼───────────────────┼───────────┼─────────────┤
│ 8:16$8JGs+LVuHL9... │ vagrant │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$P1QOiN3d3oS... │ vagrant │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$t4ozu08SmGs... │ vagrant │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$p6TMzb/Y1UN... │ vagrant │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
└─────────────────────┴─────────┴───────┴────────┴────────────┴───────────────────┴───────────┴─────────────┘
Generated 'drives.yaml' successfully.
root@vagrant:~#
Let's look at the drives.yaml file generated by the discover command.
Disks marked select: "yes" in the generated YAML are the ones that will be initialized.
root@vagrant:~# cat drives.yaml | yq
{
  "version": "v1",
  "nodes": [
    {
      "name": "vagrant",
      "drives": [
        {
          "id": "8:64$p6TMzb/Y1UNWjKqFFbmtvmMTbOJteIm9O7R0hPWu+5w=",
          "name": "sde",
          "size": 10737418240,
          "make": "ATA VBOX_HARDDISK",
          "select": "yes"
        },
...snip
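Before running init, it can be worth double-checking how many drives are marked for initialization. A minimal sketch of the idea using grep against an inlined sample file (the file below is a stand-in; on the VM you would run the grep against the real drives.yaml generated by discover):

```shell
# Write a small drives.yaml-style sample; the quoted 'EOF' keeps $ literal.
cat <<'EOF' > /tmp/drives-sample.yaml
version: v1
nodes:
  - name: vagrant
    drives:
      - id: "8:16$..."
        name: sdb
        select: "yes"
      - id: "8:32$..."
        name: sdc
        select: "yes"
EOF
# Count how many drives are selected for initialization.
grep -c 'select: "yes"' /tmp/drives-sample.yaml   # prints 2
```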
Now initialize the disks from drives.yaml using the --dangerous flag. Initialization permanently erases all data on those disks.
root@vagrant:~# kubectl directpv init drives.yaml
ERROR Initializing the drives will permanently erase existing data. Please review carefully before performing this *DANGEROUS* operation and retry this command with --dangerous flag.
root@vagrant:~# kubectl directpv init drives.yaml --dangerous
███████████████████████████████████████████████████████████████████████████ 100%
Processed initialization request 'd6840d53-f7da-4b20-a222-b92aff452bdd' for node 'vagrant' ✔
┌──────────────────────────────────────┬─────────┬───────┬─────────┐
│ REQUEST_ID                           │ NODE    │ DRIVE │ MESSAGE │
├──────────────────────────────────────┼─────────┼───────┼─────────┤
│ d6840d53-f7da-4b20-a222-b92aff452bdd │ vagrant │ sdb   │ Success │
│ d6840d53-f7da-4b20-a222-b92aff452bdd │ vagrant │ sdc   │ Success │
│ d6840d53-f7da-4b20-a222-b92aff452bdd │ vagrant │ sdd   │ Success │
│ d6840d53-f7da-4b20-a222-b92aff452bdd │ vagrant │ sde   │ Success │
└──────────────────────────────────────┴─────────┴───────┴─────────┘
root@vagrant:~#
Checking with lsblk and blkid shows that DirectPV formatted the disks as XFS and mounted them.
root@vagrant:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...snip
sdb    8:16   0  10G  0 disk /var/lib/directpv/mnt/911b296c-765d-47f9-ab9b-a3cef30eca6a
sdc    8:32   0  10G  0 disk /var/lib/directpv/mnt/ba86bcd2-5465-4354-a8e0-6b346bedb0fe
sdd    8:48   0  10G  0 disk /var/lib/directpv/mnt/f7a0f10b-ced8-4234-8263-d1898af320ec
sde    8:64   0  10G  0 disk /var/lib/directpv/mnt/fe8938a7-9933-40e8-9ce1-f270d2e40cc8
root@vagrant:~# blkid
/dev/sdd: LABEL="DIRECTPV" UUID="f7a0f10b-ced8-4234-8263-d1898af320ec" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdb: LABEL="DIRECTPV" UUID="911b296c-765d-47f9-ab9b-a3cef30eca6a" BLOCK_SIZE="512" TYPE="xfs"
/dev/sde: LABEL="DIRECTPV" UUID="fe8938a7-9933-40e8-9ce1-f270d2e40cc8" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdc: LABEL="DIRECTPV" UUID="ba86bcd2-5465-4354-a8e0-6b346bedb0fe" BLOCK_SIZE="512" TYPE="xfs"
...snip
root@vagrant:~# df -hT --type xfs
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sde       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/fe8938a7-9933-40e8-9ce1-f270d2e40cc8
/dev/sdc       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/ba86bcd2-5465-4354-a8e0-6b346bedb0fe
/dev/sdb       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/911b296c-765d-47f9-ab9b-a3cef30eca6a
/dev/sdd       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/f7a0f10b-ced8-4234-8263-d1898af320ec
And kubectl directpv info shows the total capacity: four 10 GiB disks, for 40 GiB in total.
root@vagrant:~# k directpv info
┌───────────┬──────────┬───────────┬─────────┬────────┐
│ NODE      │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├───────────┼──────────┼───────────┼─────────┼────────┤
│ • vagrant │ 40 GiB   │ 0 B       │ 0       │ 4      │
└───────────┴──────────┴───────────┴─────────┴────────┘
0 B/40 GiB used, 0 volumes, 4 drives
root@vagrant:~#
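The 40 GiB figure is simply the sum of the four drives; expressed in bytes, it also lines up with the size field reported per drive in drives.yaml:

```shell
# 4 drives x 10 GiB each.
drives=4
per_drive_bytes=$((10 * 1024 * 1024 * 1024))   # 10737418240, the "size" shown in drives.yaml
total=$((drives * per_drive_bytes))
echo "$((total / 1024 / 1024 / 1024)) GiB total"   # 40 GiB total
```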
Checking the CRDs that were empty earlier, DirectPVDrive resources now exist, while DirectPVVolume is still empty because no volume has been created yet.
root@vagrant:/# kubectl get DirectPVDrive
NAME                                   AGE
911b296c-765d-47f9-ab9b-a3cef30eca6a   3m47s
ba86bcd2-5465-4354-a8e0-6b346bedb0fe   3m47s
f7a0f10b-ced8-4234-8263-d1898af320ec   3m47s
fe8938a7-9933-40e8-9ce1-f270d2e40cc8   3m47s
root@vagrant:/# kubectl get DirectPVVolume
No resources found
After a reboot, everything kept working even though /etc/fstab contains nothing special, so I looked through the boot logs.
2025-09-17T16:36:12.907918+00:00 vagrant systemd[1]: Started cri-containerd-1ded95198761a8182f1e63c4e43c0c7c44b206936826d3d9ffc931cbb3f730ef.scope - libcontainer container 1ded95198761a8182f1e63c4e43c0c7c44b206936826d3d9ffc931cbb3f730ef.
2025-09-17T16:36:12.925238+00:00 vagrant systemd[1]: Started cri-containerd-a4379b8eaf0308d0f40edbf290f042b6d66b5adadf3de4d49403b6028812f5e8.scope - libcontainer container a4379b8eaf0308d0f40edbf290f042b6d66b5adadf3de4d49403b6028812f5e8.
2025-09-17T16:36:13.266350+00:00 vagrant systemd[1]: Started cri-containerd-b5f5e55041057cfc6789b1da665fe7a963b02366a377272f2592190e27bf9ced.scope - libcontainer container b5f5e55041057cfc6789b1da665fe7a963b02366a377272f2592190e27bf9ced.
2025-09-17T16:36:13.491566+00:00 vagrant kernel: loop0: detected capacity change from 0 to 32768
2025-09-17T16:36:13.752896+00:00 vagrant kernel: SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
2025-09-17T16:36:13.754759+00:00 vagrant kernel: XFS (loop0): Mounting V5 Filesystem bbfe9639-349c-4f53-a264-675f2e4020b4
2025-09-17T16:36:13.756614+00:00 vagrant kernel: XFS (sdb): Mounting V5 Filesystem 911b296c-765d-47f9-ab9b-a3cef30eca6a
2025-09-17T16:36:13.757748+00:00 vagrant kernel: XFS (loop0): Ending clean mount
2025-09-17T16:36:13.757752+00:00 vagrant kernel: XFS (loop0): Quotacheck needed: Please wait.
2025-09-17T16:36:13.771897+00:00 vagrant kernel: XFS (loop0): Quotacheck: Done.
2025-09-17T16:36:13.771905+00:00 vagrant kernel: xfs filesystem being mounted at /tmp/xfs.check.mnt.4254791425 supports timestamps until 2038-01-19 (0x7fffffff)
2025-09-17T16:36:13.776696+00:00 vagrant kernel: XFS (sdb): Ending clean mount
2025-09-17T16:36:13.783233+00:00 vagrant kernel: xfs filesystem being mounted at /var/lib/directpv/mnt/911b296c-765d-47f9-ab9b-a3cef30eca6a supports timestamps until 2038-01-19 (0x7fffffff)
2025-09-17T16:36:13.791727+00:00 vagrant kernel: XFS (sdc): Mounting V5 Filesystem ba86bcd2-5465-4354-a8e0-6b346bedb0fe
2025-09-17T16:36:13.811954+00:00 vagrant kernel: XFS (sdc): Ending clean mount
...snip
The first container started has the hash 1ded95198761a8182f1e63c4e43c0c7c44b206936826d3d9ffc931cbb3f730ef; querying the containers with just the first 13 characters of that hash:
root@vagrant:~# crictl ps | grep -E "1ded95198761a|CONTAINER"
CONTAINER       IMAGE           CREATED          STATE     NAME         ATTEMPT   POD ID          POD
1ded95198761a   086eb6b9a9354   27 minutes ago   Running   controller   1         1695862a6900e   controller-6fdb5bf4f7-wl5l2
Even without /etc/fstab entries, when the kubelet restarts the node-server appears to remount the drives based on the DirectPV CRD state and the kubelet's CSI workflow.
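A quick way to see what got remounted after the reboot is to filter the mount table with findmnt (a generic check, not a DirectPV command; on a machine without DirectPV the fallback message is printed instead):

```shell
# List xfs mounts under the DirectPV mount root; print a fallback
# message if none are present so the pipeline always produces output.
findmnt -rn -t xfs -o SOURCE,TARGET 2>/dev/null \
  | grep '/var/lib/directpv/mnt' \
  || echo "no DirectPV mounts found"
```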
Create a PVC with the directpv-min-io StorageClass and mount nginx's /var/log/nginx on it.
cat <<EOF > nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 8Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: nginx-volume
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx-container
      image: nginx:alpine
      volumeMounts:
        - mountPath: "/var/log/nginx"
          name: nginx-volume
EOF
kubectl apply -f nginx-pvc.yaml
After creating these, send a request with curl to generate some log entries.
k exec nginx-pod -- curl 127.0.0.1
Now lsblk shows three mountpoints for sde.
root@vagrant:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...snip
sdb    8:16   0  10G  0 disk /var/lib/directpv/mnt/911b296c-765d-47f9-ab9b-a3cef30eca6a
sdc    8:32   0  10G  0 disk /var/lib/directpv/mnt/ba86bcd2-5465-4354-a8e0-6b346bedb0fe
sdd    8:48   0  10G  0 disk /var/lib/directpv/mnt/f7a0f10b-ced8-4234-8263-d1898af320ec
sde    8:64   0  10G  0 disk /var/lib/kubelet/pods/4efb6ab2-9529-493d-a003-349437bbc263/volumes/kubernetes.io~csi/pvc-bc5763cb-17e4-44ff-9d97-441fe672154a/mount
                             /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/de5c3f94a0d0406b88aa03234060bba5777b46a7fe84afcf33a30cf830c7df60/globalmount
                             /var/lib/directpv/mnt/fe8938a7-9933-40e8-9ce1-f270d2e40cc8
Besides the mountpoint DirectPV was already using, a mount path and a globalmount path have been added. Checking with k get pv, the mount path appears to be the path actually attached to the pod, and globalmount, although it differs somewhat from the figure above, appears to be what the architecture on GitHub calls the staging target path.
The status section of the directpvvolumes CRD confirms the names dataPath, stagingTargetPath, and targetPath:
root@vagrant:~# kubectl get directpvvolumes -oyaml
apiVersion: v1
items:
- apiVersion: directpv.min.io/v1beta1
  ...snip
  status:
    availableCapacity: 8388608
    dataPath: /var/lib/directpv/mnt/fe8938a7-9933-40e8-9ce1-f270d2e40cc8/.FSUUID.fe8938a7-9933-40e8-9ce1-f270d2e40cc8/pvc-bc5763cb-17e4-44ff-9d97-441fe672154a
    fsuuid: fe8938a7-9933-40e8-9ce1-f270d2e40cc8
    stagingTargetPath: /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/de5c3f94a0d0406b88aa03234060bba5777b46a7fe84afcf33a30cf830c7df60/globalmount
    status: Ready
    targetPath: /var/lib/kubelet/pods/4efb6ab2-9529-493d-a003-349437bbc263/volumes/kubernetes.io~csi/pvc-bc5763cb-17e4-44ff-9d97-441fe672154a/mount
    totalCapacity: 8388608
    usedCapacity: 0
kind: List
metadata:
  resourceVersion: ""
root@vagrant:~#
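The availableCapacity and totalCapacity of 8388608 are simply the PVC's 8Mi request expressed in bytes:

```shell
# Kubernetes "Mi" is a binary (1024-based) unit: 8Mi = 8 * 1024^2 bytes.
echo $((8 * 1024 * 1024))   # 8388608
```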
Both the mount and globalmount paths are mounted to the same location, and the actual log file can be read through any of the paths below:
root@vagrant:~# cat /var/lib/directpv/mnt/fe8938a7-9933-40e8-9ce1-f270d2e40cc8/.FSUUID.fe8938a7-9933-40e8-9ce1-f270d2e40cc8/pvc-bc5763cb-17e4-44ff-9d97-441fe672154a/access.log
127.0.0.1 - - [17/Sep/2025:17:12:08 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:12:20 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:13:29 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
root@vagrant:~# cat /var/lib/kubelet/pods/4efb6ab2-9529-493d-a003-349437bbc263/volumes/kubernetes.io~csi/pvc-bc5763cb-17e4-44ff-9d97-441fe672154a/mount/access.log
127.0.0.1 - - [17/Sep/2025:17:12:08 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:12:20 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:13:29 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
root@vagrant:~# cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/de5c3f94a0d0406b88aa03234060bba5777b46a7fe84afcf33a30cf830c7df60/globalmount/access.log
127.0.0.1 - - [17/Sep/2025:17:12:08 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:12:20 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
127.0.0.1 - - [17/Sep/2025:17:13:29 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.14.1" "-"
root@vagrant:~#
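One way to convince yourself that these are the same file rather than three copies is to compare inode numbers with stat -c %i; on the VM, the three access.log paths above should all report the same inode. A self-contained illustration of the technique using a temporary file (the directory and file names here are made up for the demo):

```shell
# Two different path spellings of one file share a single inode,
# just as dataPath/stagingTargetPath/targetPath do for access.log.
dir=$(mktemp -d)
echo "hello" > "$dir/access.log"
ino1=$(stat -c %i "$dir/access.log")
ino2=$(stat -c %i "$dir/../$(basename "$dir")/access.log")
[ "$ino1" = "$ino2" ] && echo "same file"   # same file
```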