The local path support built into Kubernetes makes persistence quite convenient, but it has one drawback: you must manually create the corresponding directory on the machine ahead of time before a scheduled application can use it for persistence. Not very convenient. Is there a way to create it automatically?
For the Rancher platform, Rancher provides a tool for exactly this: local-path-provisioner.
 
Usage

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["endpoints", "persistentvolumes", "pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.14
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
      {
        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
        "paths":["/data"]
      }
      ]
    }
  setup: |-
    #!/bin/sh
    path=$1
    mkdir -m 0777 -p ${path}
  teardown: |-
    #!/bin/sh
    path=$1
    rm -rf ${path}
```
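Assuming the manifest above is saved locally as local-path-storage.yaml (the filename is arbitrary; the project also publishes a ready-to-apply manifest in its GitHub repo), installation is a single apply:

```sh
# Creates the namespace, RBAC objects, provisioner Deployment,
# StorageClass and ConfigMap in one shot.
kubectl apply -f local-path-storage.yaml

# The provisioner pod should reach Running before any PVCs are created.
kubectl -n local-path-storage get pod
```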
 
The YAML can be deployed almost untouched. Once applied, a StorageClass named local-path appears in the cluster.
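A quick sanity check (just the command; the relevant detail is that the provisioner column reads rancher.io/local-path):

```sh
# Should list local-path with provisioner rancher.io/local-path.
kubectl get storageclass local-path -o wide
```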
With this in place, an application that wants to persist to a local directory on a specific machine needs two steps (a sketch of both follows this list):

1. schedule the pod onto that machine;
2. when creating the PVC, set its storageClassName to local-path.
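A minimal sketch of those two steps, assuming a hypothetical node named node-1 (the nodeSelector uses the standard kubernetes.io/hostname label; any other scheduling mechanism works just as well):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-on-node-1               # hypothetical names throughout
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1  # step 1: pin the pod to the node
  containers:
  - name: app
    image: nginx:stable-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path      # step 2: let the provisioner create the PV
  resources:
    requests:
      storage: 1Gi
```

Because the StorageClass above sets volumeBindingMode: WaitForFirstConsumer, the PV is only provisioned once the pod has been scheduled, on whichever node it lands; pinning the pod therefore also pins the data.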
 
Using the official example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
```
 
Apply it with `kubectl apply -f`, then check the PV/PVC binding:
```sh
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                    STORAGECLASS   REASON    AGE
pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            Delete           Bound     default/local-path-pvc   local-path               4s
$ kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Bound     pvc-bc3117d9-c6d3-11e8-b36d-7a42907dda78   2Gi        RWO            local-path     16s
$ kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
volume-test   1/1       Running   0          3s
```
 
The PV and PVC are bound, and the application's persistent data lives under /data on the machine.
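To double-check that the data really lands on the host, write a file through the mount and look for it on the node. A sketch, assuming the default /data path from the config above; the name of the per-volume subdirectory is generated by the provisioner, so the wildcard below is a guess:

```sh
# Write a marker file from inside the pod, through the mounted volume.
kubectl exec volume-test -- sh -c 'echo hello > /data/marker'

# Then, on the node where volume-test runs:
ls /data/pvc-*            # one generated subdirectory per provisioned PV
cat /data/pvc-*/marker    # should print "hello"
```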
Configuration

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
      {
        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
        "paths":["/data"]
      },
      {
        "node":"yasker-lp-dev1",
        "paths":["/opt/local-path-provisioner", "/data1"]
      },
      {
        "node":"yasker-lp-dev3",
        "paths":[]
      }
      ]
    }
  setup: |-
    #!/bin/sh
    path=$1
    mkdir -m 0777 -p ${path}
  teardown: |-
    #!/bin/sh
    path=$1
    rm -rf ${path}
```
 
In the config, different nodes can be given different paths (a node may even list several candidate directories); any node not listed falls back to the DEFAULT_PATH_FOR_NON_LISTED_NODES entry.
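For example, to give one machine a dedicated SSD mount while every other node keeps /data, the map could look like this (worker-1 and /mnt/ssd are hypothetical values for illustration):

```yaml
# Fragment of the local-path-config ConfigMap's data section.
config.json: |-
  {
    "nodePathMap":[
      { "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES", "paths":["/data"] },
      { "node":"worker-1", "paths":["/mnt/ssd"] }
    ]
  }
```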
Shortcomings

With this approach we no longer have to create PVs by hand; a PVC alone is enough. There are still gaps, though: most notably, the size used by the persisted directory cannot currently be limited, as shown below.
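This is easy to see with the volume-test pod from earlier (a sketch; the 3Gi figure is arbitrary and assumes the node has that much free disk): the PVC requested 2Gi, yet a larger write still succeeds, because a plain host directory carries no quota.

```sh
# Writes 3GiB into a volume whose PVC requested only 2Gi -- it succeeds;
# the requested capacity is bookkeeping, not an enforced limit.
kubectl exec volume-test -- dd if=/dev/zero of=/data/fill bs=1M count=3072
```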
Separately, the Kubernetes storage SIG also maintains a local-volume tool, sig-storage-local-static-provisioner. It is more general, but it has its own shortcomings; take a look if you are interested.