
Kubernetes Learning: Source Code Analysis of the Rancher local-path Dynamic Provisioner

I previously did some hands-on work with the local-path persistent volume provisioner on Rancher; reading through its source code should deepen the understanding.

Configuration

First, here is the configuration file again:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/data"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    path=$1
    mkdir -m 0777 -p ${path}
  teardown: |-
    #!/bin/sh
    path=$1
    rm -rf ${path}
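
To make the ConfigMap concrete: config.json maps node names to one or more host directories, and the DEFAULT_PATH_FOR_NON_LISTED_NODES entry acts as the catch-all for nodes that are not listed explicitly. Below is a minimal sketch of how this JSON could be modelled and parsed in Go; the struct and field names are illustrative, not necessarily the ones used in the actual source.

package main

import (
    "encoding/json"
    "fmt"
)

// Illustrative types for config.json; the real provisioner defines its own.
type NodePathMap struct {
    Node  string   `json:"node"`
    Paths []string `json:"paths"`
}

type Config struct {
    NodePathMap []NodePathMap `json:"nodePathMap"`
}

func main() {
    // The same content as config.json in the ConfigMap above.
    raw := `{"nodePathMap":[{"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES","paths":["/data"]}]}`
    var cfg Config
    if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", cfg)
}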


The key source code boils down to two Go files, main.go and provisioner.go; let's look at each in turn.

main.go

Most of main.go is plumbing: parsing startup flags, reading the ConfigMap configuration, and so on.

The important part:

func startDaemon(c *cli.Context) error {
    // ...
    provisioner, err := NewProvisioner(stopCh, kubeClient, configFile, namespace, helperImage, configMapName)
    if err != nil {
        return err
    }
    pc := pvController.NewProvisionController(
        kubeClient,
        provisionerName,
        provisioner,
        serverVersion.GitVersion,
    )
    logrus.Debug("Provisioner started")
    pc.Run(stopCh)
    logrus.Debug("Provisioner stopped")
    return nil
}

This calls pvController.NewProvisionController, which sets up the control loop. It is fairly involved and I haven't fully worked through it yet, probably because my grasp of Kubernetes control loops is still lacking; if you're interested, dig into it yourself, the source is controller.go.

Put simply, this starts the provisioner's control loop, which watches PVC objects and calls the provisioner when a claim needs a volume. Two key types come into play here:

type Provisioner interface {
    // Provision creates a volume i.e. the storage asset and returns a PV object
    // for the volume
    Provision(ProvisionOptions) (*v1.PersistentVolume, error)
    // Delete removes the storage asset that was created by Provision backing the
    // given PV. Does not delete the PV object itself.
    //
    // May return IgnoredError to indicate that the call has been ignored and no
    // action taken.
    Delete(*v1.PersistentVolume) error
}

And the other one:

type ProvisionOptions struct {
    // StorageClass is a reference to the storage class that is used for
    // provisioning for this volume
    StorageClass *storageapis.StorageClass

    // PV.Name of the appropriate PersistentVolume. Used to generate cloud
    // volume name.
    PVName string

    // PVC is reference to the claim that lead to provisioning of a new PV.
    // Provisioners *must* create a PV that would be matched by this PVC,
    // i.e. with required capacity, accessMode, labels matching PVC.Selector and
    // so on.
    PVC *v1.PersistentVolumeClaim

    // Node selected by the scheduler for the volume.
    SelectedNode *v1.Node
}
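
To see how the two fit together, here is a heavily simplified sketch of the work the control loop does for each unbound PVC. It assumes a recent (context-aware) client-go and uses local stand-ins for the two types quoted above, plus a made-up function name; the real controller.go additionally handles StorageClass lookup, WaitForFirstConsumer / SelectedNode, retries, events and informer caches.

package sketch

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// Stand-ins for the library types quoted above, trimmed to what this sketch needs.
type ProvisionOptions struct {
    PVName string
    PVC    *v1.PersistentVolumeClaim
}

type Provisioner interface {
    Provision(ProvisionOptions) (*v1.PersistentVolume, error)
}

// reconcileClaim is an illustrative, stripped-down version of the work the
// provisioning control loop performs per PVC; it is not the library's actual code.
func reconcileClaim(ctx context.Context, kubeClient kubernetes.Interface, p Provisioner, claim *v1.PersistentVolumeClaim) error {
    if claim.Spec.VolumeName != "" {
        return nil // already bound to a PV, nothing to do
    }
    opts := ProvisionOptions{
        // The real controller also fills in StorageClass and, with
        // WaitForFirstConsumer, the SelectedNode chosen by the scheduler.
        PVName: "pvc-" + string(claim.UID),
        PVC:    claim,
    }
    pv, err := p.Provision(opts)
    if err != nil {
        return err
    }
    // Persist the PV object; the PV controller then binds it to the claim.
    _, err = kubeClient.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
    return err
}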


Both of these are used in provisioner.go.

provisioner.go

LocalPathProvisioner implements the Provisioner interface.

The core is the Provision method; the key code:

func (p *LocalPathProvisioner) Provision(opts pvController.ProvisionOptions) (*v1.PersistentVolume, error) {
    // ...
    // Multiple directories can be configured per node, so one of them is picked at random.
    basePath, err := p.getRandomPathOnNode(node.Name)
    if err != nil {
        return nil, err
    }

    name := opts.PVName
    folderName := strings.Join([]string{name, opts.PVC.Namespace, opts.PVC.Name}, "_")

    path := filepath.Join(basePath, folderName)
    logrus.Infof("Creating volume %v at %v:%v", name, node.Name, path)

    // The commands the helper pod runs: one script for create and one for delete,
    // both defined in the ConfigMap.
    createCmdsForPath := []string{
        "/bin/sh",
        "/script/setup",
    }
    // A busybox helper container is created here to make the directory at the given path.
    if err := p.createHelperPod(ActionTypeCreate, createCmdsForPath, name, path, node.Name); err != nil {
        return nil, err
    }
    // ...
}
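
Before following createHelperPod, the getRandomPathOnNode call above deserves a quick note: it resolves the directories configured for the scheduled node in nodePathMap, falls back to the DEFAULT_PATH_FOR_NON_LISTED_NODES entry, and picks one of them at random. A simplified sketch along those lines; the plain-map layout and the hard-coded key are illustrative, the real method works off the parsed ConfigMap data.

package main

import (
    "fmt"
    "math/rand"
)

// Illustrative version of the path selection used during provisioning.
func getRandomPathOnNode(nodePathMap map[string][]string, node string) (string, error) {
    paths, ok := nodePathMap[node]
    if !ok {
        // Fall back to the catch-all entry from the ConfigMap.
        paths, ok = nodePathMap["DEFAULT_PATH_FOR_NON_LISTED_NODES"]
    }
    if !ok || len(paths) == 0 {
        return "", fmt.Errorf("no paths configured for node %q", node)
    }
    return paths[rand.Intn(len(paths))], nil
}

func main() {
    m := map[string][]string{"DEFAULT_PATH_FOR_NON_LISTED_NODES": {"/data"}}
    path, err := getRandomPathOnNode(m, "node-1")
    fmt.Println(path, err)
}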

Let's follow createHelperPod:

func (p *LocalPathProvisioner) createHelperPod(action ActionType, cmdsForPath []string, name, path, node string) (err error) {
    // ...
    Containers: []v1.Container{
        {
            Name:  "local-path-" + string(action),
            Image: p.helperImage,
            // The command run when the container starts; with the parameters above
            // this effectively becomes `mkdir -p /data/xxxx`.
            Command: append(cmdsForPath, filepath.Join("/data/", volumeDir)),
            VolumeMounts: []v1.VolumeMount{
                {
                    // The configured path on the node is mounted at /data inside the
                    // container, and the PV directory is then created under it.
                    Name:      "data",
                    ReadOnly:  false,
                    MountPath: "/data/",
                },
                {
                    Name:      "script",
                    ReadOnly:  false,
                    MountPath: "/script",
                },
            },
            ImagePullPolicy: v1.PullIfNotPresent,
        },
    },
    // ...
}

This method also contains logic that checks whether the helper container ran successfully; with that, the PV directory has been created.
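
Roughly speaking, that success check amounts to polling the helper pod's phase until it reports Succeeded or a timeout expires. A hedged sketch, assuming a recent (context-aware) client-go; the function name, polling interval and timeout are made up.

package sketch

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// waitForHelperPod is an illustrative stand-in for the success check inside
// createHelperPod: poll the helper pod until it reports Succeeded or give up.
func waitForHelperPod(ctx context.Context, kubeClient kubernetes.Interface, namespace, name string) error {
    const attempts = 120 // illustrative: ~2 minutes at one check per second
    for i := 0; i < attempts; i++ {
        pod, err := kubeClient.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Status.Phase == v1.PodSucceeded {
            return nil
        }
        time.Sleep(time.Second)
    }
    return fmt.Errorf("helper pod %s/%s did not complete in time", namespace, name)
}

The real method also takes care of cleaning up the helper pod afterwards, and a failure there surfaces as a provisioning error.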

Workflow

So the overall workflow becomes: the cluster administrator only needs to deploy the local-path dynamic provisioner into the cluster. Whoever needs persistent storage simply declares a PVC that references the local-path StorageClass; the provisioner then automatically creates the PV directory on the node the pod is scheduled to, the PV and PVC are bound, and the workflow is complete.
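
One detail worth spelling out is the PV object that Provision ultimately returns (elided with // ... in the excerpt above): essentially a HostPath-backed PersistentVolume whose node affinity pins it to the selected node, which is why pods using the claim keep landing on the node that holds the data. A rough sketch of such an object; the field choices below are illustrative rather than copied from the source.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildHostPathPV sketches the kind of PersistentVolume the provisioner hands
// back from Provision: a HostPath volume pinned to one node via node affinity.
func buildHostPathPV(pvName, path, nodeName, storageClass string, capacity v1.ResourceList) *v1.PersistentVolume {
    hostPathType := v1.HostPathDirectoryOrCreate
    return &v1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: pvName},
        Spec: v1.PersistentVolumeSpec{
            StorageClassName:              storageClass,
            PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
            AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
            Capacity:                      capacity,
            PersistentVolumeSource: v1.PersistentVolumeSource{
                HostPath: &v1.HostPathVolumeSource{Path: path, Type: &hostPathType},
            },
            // Pin the volume to the node that actually holds the directory.
            NodeAffinity: &v1.VolumeNodeAffinity{
                Required: &v1.NodeSelector{
                    NodeSelectorTerms: []v1.NodeSelectorTerm{{
                        MatchExpressions: []v1.NodeSelectorRequirement{{
                            Key:      "kubernetes.io/hostname",
                            Operator: v1.NodeSelectorOpIn,
                            Values:   []string{nodeName},
                        }},
                    }},
                },
            },
        },
    }
}

func main() {
    pv := buildHostPathPV("pvc-123", "/data/pvc-123_default_demo", "node-1", "local-path", nil)
    fmt.Println(pv.Name, pv.Spec.HostPath.Path)
}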

For concrete usage, see the earlier hands-on post.


Please credit the original author when reposting: 周淑科 (https://izsk.me)
