Service Mesh: Linkerd

Install

Install by following the official documentation.

Core commands:

# install `linkerd` command
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh
export PATH=$HOME/.linkerd2/bin:$PATH
linkerd check --pre

# install linkerd resources
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
# optional: linkerd check

Linkerd Authorization Policy

Linkerd supports encrypting and authenticating traffic, i.e.:

  • For a given Pod (e.g. minio), you can control which traffic (which Pods) may access it
  • When other Pods access it, the traffic is encrypted with mTLS

Linkerd offers other traffic-authentication mechanisms as well, but this article only covers the parts our project is likely to use.

Encrypting and authenticating application traffic

Linkerd uses the following CRDs for traffic encryption and authentication:

  • Server: as its name suggests, the destination of the traffic; it defines a set of Pods and a single port on them, comparable to a native K8s Service
  • NetworkAuthentication: authenticates requests by source IP (used for traffic coming from outside the cluster, or for K8s probes); this traffic is currently NOT encrypted
  • MeshTLSAuthentication: mTLS-encrypted traffic, tied to the ServiceAccount of the traffic's source (Pod)
  • AuthorizationPolicy: associates a Server with a MeshTLSAuthentication (or a NetworkAuthentication)

Example walkthrough

The example below uses the MinIO client (mc) to access the MinIO service.

Example MinIO configuration
  1. MinIO server (StatefulSet)
apiVersion: v1
kind: Namespace
metadata:
  name: minio-test-server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio-test-server
  namespace: minio-test-server
  labels:
    app: minio
    project: mytest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  namespace: minio-test-server
  labels:
     app: minio
spec:
  selector:
    matchLabels:
      app: minio
  serviceName: minio
  replicas: 1
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/default-inbound-policy: deny
      labels:
        app: minio
        project: mytest
    spec:
      containers:
      - name: minio
        envFrom:
        - secretRef:
            name: minio-admin-config
        image: minio/minio:RELEASE.2024-11-07T00-52-20Z
        args: ["server", "/data"]
        ports:
        - name: minio
          containerPort: 9000
      serviceAccountName: minio-test-server

---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio-test-server
spec:
  selector:
    app: minio
  ports:
  - name: service
    protocol: TCP
    port: 9000
    targetPort: 9000
  type: ClusterIP
---
apiVersion: v1
kind: Secret
metadata:
  name: minio-admin-config
  namespace: minio-test-server
type: Opaque
stringData:
  MINIO_ROOT_USER: "minio"
  MINIO_ROOT_PASSWORD: "IPFvslzI1pVCrfbO"
  2. MinIO test client
apiVersion: v1
kind: Namespace
metadata:
  name: minio-test-client
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio-test-client
  namespace: minio-test-client
  labels:
    app: minio
    project: mytest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-client
  namespace: minio-test-client
  labels:
     app: minio-client
spec:
  selector:
    matchLabels:
      app: minio-client
  replicas: 1
  template:
    metadata:
      annotations:
        # automatically inject the linkerd sidecar
        linkerd.io/inject: enabled
      labels:
        app: minio-client
        project: mytest
    spec:
      containers:
      - name: minio
        image: minio/minio:RELEASE.2024-11-07T00-52-20Z
        envFrom:
        - secretRef:
            name: minio-admin-config
        command:
        - sh
        - -c
        - |
          set -ex
          mc alias set myminio http://minio.minio-test-server.svc:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD
          while true; do
            if [ $(mc ls myminio | wc -l) -eq 0 ]; then
              mc mb myminio/test-bucket
            fi
            mc ls myminio/
            sleep 30
          done          

      serviceAccountName: minio-test-client
---
apiVersion: v1
kind: Secret
metadata:
  name: minio-admin-config
  namespace: minio-test-client
type: Opaque
stringData:
  MINIO_ROOT_USER: "minio"
  MINIO_ROOT_PASSWORD: "IPFvslzI1pVCrfbO"
  3. Create the authentication and authorization resources
apiVersion: policy.linkerd.io/v1beta3
kind: Server
metadata:
  name: minio-server
  namespace: minio-test-server
spec:
  podSelector:
    matchLabels:
      app: minio
  port: minio
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: minio-client-auth
  namespace: minio-test-server
spec:
  identityRefs:
    - kind: ServiceAccount
      name: minio-test-client
      namespace: minio-test-client
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: minio-auth-policy
  namespace: minio-test-server
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: minio-server
  requiredAuthenticationRefs:
    - name: minio-client-auth
      kind: MeshTLSAuthentication
      group: policy.linkerd.io

QA

  1. How to verify the MinIO service

Success example

[root@test-tools-657748779f-5w88z /]# mc alias set myminio http://minio.minio-test-server.svc:9000 minio IPFvslzI1pVCrfbO
Added `myminio` successfully.
[root@test-tools-657748779f-5w88z /]# mc mb myminio/test-bucket
Bucket created successfully `myminio/test-bucket`.

Failure example: after the MinIO server starts, accessing it from any other Pod fails

[root@test-tools-657748779f-wm2vd /]# mc alias set myminio http://minio.minio-test-server.svc:9000 minio IPFvslzI1pVCrfbO
Added `myminio` successfully.
[root@test-tools-657748779f-wm2vd /]# mc mb myminio/test-bucket
mc: <ERROR> Unable to make bucket `myminio/test-bucket`. Access Denied.

The reason is that minio-test-server carries the annotation config.linkerd.io/default-inbound-policy: deny, which rejects all traffic by default.

PS: One odd observation: although creating the bucket failed, mc alias set was still able to verify that the credentials were correct. Is this because linkerd automatically recognized the S3 protocol and adapted to it?

  2. About the MinIO service's annotations
  • linkerd.io/inject: enabled automatically injects the linkerd sidecar. Here it is applied to the Pod directly; you can instead add the annotation on the Namespace so that resources in it need no manual change; see Automatic Proxy Injection
  • config.linkerd.io/default-inbound-policy: deny
    • Controls how inbound traffic is authorized by default; here all traffic is denied unless a matching AuthorizationPolicy allows the source
    • Without this annotation, a Pod is accessible as long as no corresponding Server exists; once a Server is created, access is denied. (This is not absolute, since it only considers the Pod-level configuration; the official documentation gives the full picture.)
    • default-inbound-policy can also be set on the Namespace, so existing deployment templates need no changes; it can also be configured when installing linkerd. See the official documentation for details.
    • Reference: Configuring Per-Route Authorization Policy
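As the points above note, both annotations can also be set on the Namespace instead of on each workload, so existing deployment templates stay untouched. A minimal sketch, reusing the namespace from this example (the values are the same ones used on the Pod template above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: minio-test-server
  annotations:
    # every Pod created in this namespace gets the linkerd sidecar
    linkerd.io/inject: enabled
    # and inherits deny as its default inbound policy
    config.linkerd.io/default-inbound-policy: deny
```

Workload-level annotations take precedence over namespace-level ones, so individual Pods can still override these defaults if needed.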

Authentication for probes

Once a Server exists, or default-inbound-policy denies traffic by default, the kubelet's automatic Pod health-check requests start to fail. For example:

kind: Deployment
apiVersion: apps/v1
spec:
  template:
    spec:
      containers:
      - name: minio
        image: minio/minio:RELEASE.2024-11-07T00-52-20Z
        readinessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
        ports:
        - name: minio
          containerPort: 9000
...

K8s marks the Pod as failed because it cannot reach the /minio/health/live endpoint, so linkerd's NetworkAuthentication is needed to authorize this traffic. See the official documentation for more.

NetworkAuthentication example
apiVersion: policy.linkerd.io/v1alpha1
kind: NetworkAuthentication
metadata:
  name: authors-probe-authn
  namespace: minio-test-server
spec:
  networks:
  - cidr: 0.0.0.0/0
  - cidr: ::/0
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-probe-policy
  namespace: minio-test-server
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: minio-server
  requiredAuthenticationRefs:
    - name: authors-probe-authn
      kind: NetworkAuthentication
      group: policy.linkerd.io

Security note: the example above opens the MinIO Server to traffic from any source, which effectively nullifies the MeshTLSAuthentication configured earlier and creates a security risk. Two ways to harden it:

  • [Recommended] Harden with an HTTPRoute, so the NetworkAuthentication only applies to GET /minio/health/live requests
  • Set the source CIDRs carefully, allowing only traffic that comes from the kubelet
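The recommended HTTPRoute approach can be sketched as follows (resource names are illustrative; check which policy.linkerd.io API versions your Linkerd release serves). The route attaches to the Server from the earlier example, and the probe AuthorizationPolicy targets the route instead of the whole Server, so unauthenticated traffic is only admitted for the health-check path:

```yaml
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: minio-probe-route
  namespace: minio-test-server
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: minio-server
  rules:
    - matches:
        - path:
            value: /minio/health/live
          method: GET
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-probe-policy
  namespace: minio-test-server
spec:
  targetRef:
    # target the route, not the Server
    group: policy.linkerd.io
    kind: HTTPRoute
    name: minio-probe-route
  requiredAuthenticationRefs:
    - name: authors-probe-authn
      kind: NetworkAuthentication
      group: policy.linkerd.io
```

Per Linkerd's per-route policy documentation, once any HTTPRoute is attached to a Server, requests that match no route are rejected, so regular client traffic then needs a route of its own as well (for example a catch-all PathPrefix / route); policies targeting the Server itself continue to apply to all attached routes.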

Linkerd: restricting access to services outside the cluster

linkerd does not support this capability.

Integration of Traefik and Linkerd

(To be added)