
Kubernetes/K8S Basic Usage Notes [27] - Configuring Let's Encrypt Certificates for Istio with cert-manager


Note:

The following example shows how to configure a Let's Encrypt domain certificate in a cluster that uses Istio for traffic management. If you want to configure a Let's Encrypt certificate based on the Kubernetes Ingress instead, refer to the following article:

Kubernetes/K8S Basic Usage Notes [23] - Deploying and Using cert-manager

1. Create the Issuer resource first (here I create a ClusterIssuer directly)

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-cluster
spec:
  acme:
    email: jakeli@jakeli.com 
    server: https://acme-v02.api.letsencrypt.org/directory 
    privateKeySecretRef:
      name: letsencrypt-prod-cluster
    solvers:
    - http01:
        ingress:
          class: istio
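
A quick way to confirm that the ClusterIssuer has registered its ACME account is to check its Ready status:

# The ClusterIssuer should report READY=True once the ACME account is registered
kubectl get clusterissuer letsencrypt-prod-cluster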

2. Create the Certificate

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: devops-jakeli
  namespace: istio-system
spec:
  secretName: devops-jakeli-cert-prod 
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  dnsNames:
    - "code.devops.jakeli.com"
    - "coder.devops.jakeli.com"
    - "nginx.devops.jakeli.com"
  issuerRef:
    name: letsencrypt-prod-cluster
    kind: ClusterIssuer
    group: cert-manager.io
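
Once the HTTP-01 challenge completes, the Certificate should become Ready and the referenced secret should exist in istio-system:

kubectl -n istio-system get certificate devops-jakeli
kubectl -n istio-system get secret devops-jakeli-cert-prod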

3. Create the Gateway

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  labels:
    release: istio
  name: gateway 
  namespace: default 
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
  - hosts:
    - "*.devops.jakeli.com"
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: devops-jakeli-cert-prod

4. Create the VirtualService

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx 
  namespace: kube-public
spec:
  hosts:
  - "nginx.devops.jakeli.com"
  gateways:
  - "default/gateway"
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: "nginx.kube-public.svc.cluster.local"
        port:
          number: 80
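
Assuming DNS for the host already points at the istio-ingressgateway service, a simple end-to-end check could be:

curl -v https://nginx.devops.jakeli.com/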

Reference: https://medium.com/@rd.petrusek/kubernetes-istio-cert-manager-and-lets-encrypt-c3e0822a3aaf

Summary of How to Use Cluster API


I. Overview

Cluster API is another open-source Kubernetes project. It provides CRD resources for different cloud and virtualization platforms, so that platform resources can be defined, used, and managed the same way as Kubernetes objects. Below is a record of my own usage, based on the AWS platform.

II. Prerequisites

Some preparation is required before using Cluster API; the detailed installation steps are skipped here.

1. Prepare a Kubernetes cluster to serve as the management cluster.

The detailed steps for creating and configuring this Kubernetes cluster are also skipped here.

2. Install the necessary tools and related configuration:

1) kubectl

2) docker

3) clusterctl (download the latest version)

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.3/clusterctl-linux-amd64 -o clusterctl

4) clusterawsadm

Download: https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases

This tool is mainly used to handle the IAM permissions needed to create AWS resources. You must hold administrator privileges before using clusterawsadm, and configure the following environment variables:

  • AWS_REGION
  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN (required if you use multi-factor authentication)
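
For example, the variables could be exported like this (all values are placeholders):

export AWS_REGION=us-east-1                        # region where the resources will be created
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
# export AWS_SESSION_TOKEN=<your-session-token>    # only needed with multi-factor authentication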

a. Then run the following command to create the related IAM resources:

clusterawsadm bootstrap iam create-cloudformation-stack

Tip: additional permission configuration can be found here.

b. Store the AWS environment variable information above in a Kubernetes Secret:

export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile) 

5) Configure the default configuration file

The default configuration file is located at $HOME/.cluster-api/clusterctl.yaml, in which many provider variables can be configured, as shown below:

[root@ip-172-31-13-197 src]# clusterctl generate provider --infrastructure aws --describe
Name: aws
Type: InfrastructureProvider
URL: https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/
Version: v1.4.0
File: infrastructure-components.yaml
TargetNamespace: capa-system
Variables:
- AUTO_CONTROLLER_IDENTITY_CREATOR
- AWS_B64ENCODED_CREDENTIALS
- AWS_CONTROLLER_IAM_ROLE
- CAPA_EKS
- CAPA_EKS_ADD_ROLES
- CAPA_EKS_IAM
- CAPA_LOGLEVEL
- EVENT_BRIDGE_INSTANCE_STATE
- EXP_BOOTSTRAP_FORMAT_IGNITION
- EXP_EKS_FARGATE
- EXP_MACHINE_POOL
- K8S_CP_LABEL
Images:
- k8s.gcr.io/cluster-api-aws/cluster-api-aws-controller:v1.4.0

To override this configuration, you can also configure an Overrides Layer.

Note:

If an environment variable with the same name is also set, the environment variable takes precedence.
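
As a rough illustration (the values are placeholders, not taken from the original post), the entries in clusterctl.yaml are plain key/value pairs for the variables listed above:

# $HOME/.cluster-api/clusterctl.yaml
AWS_B64ENCODED_CREDENTIALS: "<output of: clusterawsadm bootstrap credentials encode-as-profile>"
EXP_MACHINE_POOL: "true"        # feature gate used later by the machinepool flavor
CAPA_LOGLEVEL: "4"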

III. Initialize the Management Cluster

By default, clusterctl init installs the latest available provider versions (AWS is used here as an example).

[root@ip-172-31-13-197 customer]# clusterctl init --infrastructure aws --target-namespace capa
Fetching providers
Installing cert-manager Version="v1.5.3"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.1.3" TargetNamespace="capa"
Installing Provider="bootstrap-kubeadm" Version="v1.1.3" TargetNamespace="capa"
Installing Provider="control-plane-kubeadm" Version="v1.1.3" TargetNamespace="capa"
I0417 09:20:04.840223 15472 request.go:665] Waited for 1.026955854s due to client-side throttling, not priority and fairness, request: GET:https://8AD3E49178C37D17AAE79D9114DD0D5F.gr7.us-east-1.eks.amazonaws.com/apis/controlplane.cluster.x-k8s.io/v1beta1?timeout=30s
Installing Provider="infrastructure-aws" Version="v1.4.0" TargetNamespace="capa"
Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -

If you see an error like the following:

[root@ip-172-31-13-197 customer]# clusterctl init --infrastructure aws --target-namespace capa
Fetching providers
Installing cert-manager Version="v1.5.3"
Error: failed to read "cert-manager.yaml" from provider's repository "cert-manager": failed to get GitHub release v1.5.3: rate limit for github api has been reached. Please wait one hour or get a personal API token and assign it to the GITHUB_TOKEN environment variable 

Click here to obtain a GITHUB_TOKEN value, then set the GITHUB_TOKEN environment variable to resolve the error:

# export GITHUB_TOKEN=ghp_SHNvEyOYMHw040eMlPMOYLWxLtRFsC0J

IV. Create a Workload Cluster

1. Generate the workload cluster manifest with the clusterctl tool

clusterctl generate cluster capa01 \
  --kubernetes-version v1.21.1 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  --flavor machinepool \
  --target-namespace mycluster \
  > capa01.yaml

2. Create the mycluster namespace

kubectl create ns mycluster

3. Create the workload cluster

kubectl apply -f capa01.yaml

4. Check the control plane status

At this point the control plane is not ready yet, as shown below:

# kubectl get kubeadmcontrolplane -A 
NAMESPACE NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
mycluster capa01-control-plane capa01 true 1 1 0 28m v1.21.1

5. Save the kubeconfig of the workload cluster capa01 to the file capa01.kubeconfig

# clusterctl get kubeconfig capa01 > capa01.kubeconfig

6. Install the network plugin for the workload cluster via the kubeconfig file

Calico is used as the network plugin here.

# kubectl --kubeconfig=./capa01.kubeconfig   apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml                                     
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

7. Check the control plane status again

As shown below, it is now reported as ready:

# kubectl get kubeadmcontrolplane -A 
NAMESPACE NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
mycluster capa01-control-plane capa01 true true 1 1 1 0 28m v1.21.1

Now you can inspect the cluster, for example the node information:

# kubectl --kubeconfig=./capa01.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-213-197.ec2.internal Ready control-plane,master 20m v1.21.1

Check the pod information:

# kubectl --kubeconfig=./capa01.kubeconfig get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-fd5d6b66f-w7wck 1/1 Running 0 2m29s
kube-system calico-node-gx246 1/1 Running 0 2m29s
kube-system coredns-558bd4d5db-66jz8 1/1 Running 0 21m
kube-system coredns-558bd4d5db-vxs82 1/1 Running 0 21m
kube-system etcd-ip-10-0-213-197.ec2.internal 1/1 Running 0 20m
kube-system kube-apiserver-ip-10-0-213-197.ec2.internal 1/1 Running 0 20m
kube-system kube-controller-manager-ip-10-0-213-197.ec2.internal 1/1 Running 0 20m
kube-system kube-proxy-4fd6b 1/1 Running 0 21m
kube-system kube-scheduler-ip-10-0-213-197.ec2.internal 1/1 Running 0 20m

V. Clean Up Resources

1. Delete the workload cluster

kubectl delete cluster capa01 -n mycluster

2. Delete the management cluster

clusterctl delete cluster

Exporting All Kubernetes Cluster Resource Information in YAML Format


When using Kubernetes, if you need to manually export all cluster resource information, the following script can export it automatically, grouped by namespace.

After the export, all files are saved into a tar.gz archive. After extraction, each directory is named after a namespace, and the files inside each directory are the resources of that namespace, stored as YAML files.

The complete script is available on the original site as members-only content.

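A minimal sketch of the approach described above, using only kubectl and standard shell tools (directory and file names are illustrative; this is not the original script):

#!/usr/bin/env bash
# Sketch: export all namespaced resources, grouped by namespace, as YAML files,
# then pack everything into a single tar.gz archive.
set -euo pipefail

OUTDIR="cluster-export-$(date +%Y%m%d%H%M%S)"
mkdir -p "${OUTDIR}"

# All namespaced resource types that support "list" (pods, deployments.apps, ...)
RESOURCES=$(kubectl api-resources --namespaced=true --verbs=list -o name)

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  mkdir -p "${OUTDIR}/${ns}"
  for res in ${RESOURCES}; do
    # Only write a file when the namespace actually contains objects of this type
    if [ -n "$(kubectl -n "${ns}" get "${res}" -o name 2>/dev/null)" ]; then
      kubectl -n "${ns}" get "${res}" -o yaml > "${OUTDIR}/${ns}/${res}.yaml"
    fi
  done
done

tar -czf "${OUTDIR}.tar.gz" "${OUTDIR}"
echo "Exported to ${OUTDIR}.tar.gz"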

Kubernetes/K8S Basic Usage Notes [26] - Defining and Using Variables


I. Introduction

When writing Kubernetes manifest files, variables are often needed to substitute the actual values of certain fields in the YAML. In practice, to keep the directory structure simple and clear, we usually use kustomize to render the manifests and thereby control and manage Kubernetes objects. The following demonstrates how to define and use variables with kustomize.

II. File-Based Variables

1. Define variables

1) Define the variables in a custom file

Define variables in the file in key=value form, one variable per line. For example, create a file named params.env with the following content:

COP_DUMP_URL=__COP_DUMP_URL__

2) Define where the variables may be used

As shown below (this is the params.yaml file referenced later). The listed entries are not necessarily exhaustive; if a defined variable cannot be referenced, extend the list along the same lines.

varReference:
- path: metadata/name
  kind: Deployment
- path: metadata/namespace
  kind: Deployment
- path: metadata/labels
  kind: Deployment
- path: spec/template/metadata/labels
  kind: Deployment
- path: spec/template/spec/containers/name
  kind: Deployment
- path: spec/template/spec/containers/env/value
  kind: Deployment
- path: spec/template/spec/containers/env/valueFrom/secretKeyRef
  kind: Deployment
- path: spec/template/spec/containers/volumeMounts/name
  kind: Deployment
- path: spec/template/spec/containers/volumeMounts/mountPath
  kind: Deployment
- path: spec/template/spec/containers/envFrom/configMapRef
  kind: Deployment
- path: spec/template/spec/containers/envFrom/secretRef
  kind: Deployment
- path: spec/template/spec/volumes/configMap/name
  kind: Deployment
- path: spec/template/spec/volumes/secret/secretName
  kind: Deployment
- path: spec/template/spec/volumes/secret/items/key
  kind: Deployment
- path: spec/template/spec/volumes/secret/items/path
  kind: Deployment
- path: spec/template/spec/volumes/name
  kind: Deployment
- path: spec/selector/matchLabels
  kind: Deployment
- path: metadata/labels
  kind: Service
- path: metadata/name
  kind: Service
- path: metadata/namespace
  kind: Service
- path: metadata/annotations
  kind: Service
- path: spec/ports/name
  kind: Service
- path: spec/selector
  kind: Service
- path: metadata/name
  kind: Ingress
- path: metadata/namespace
  kind: Ingress
- path: spec/rules/http/paths/backend
  kind: Ingress
- path: spec/rules/host
  kind: Ingress
- path: spec/tls/secretName
  kind: Ingress
- path: spec/tls/hosts
  kind: Ingress
- path: metadata/name
  kind: BackendConfig
- path: metadata/namespace
  kind: BackendConfig
- path: metadata/name
  kind: Namespace
- path: metadata/name
  kind: Secret
- path: metadata/namespace
  kind: Secret
- path: data
  kind: Secret
- path: metadata/name
  kind: ConfigMap
- path: metadata/namespace
  kind: ConfigMap
- path: data
  kind: ConfigMap
- path: metadata/name
  kind: VirtualService
- path: metadata/namespace
  kind: VirtualService
- path: spec/gateways
  kind: VirtualService
- path: spec/http/route/destination/host
  kind: VirtualService
- path: metadata/name
  kind: Gateway
- path: metadata/namespace
  kind: Gateway

3) Define the variables in kustomization.yaml

The main purpose here is to reference the params.env and params.yaml files defined above. The principle is that the variable content is generated into a ConfigMap, and the variable value is then read from that ConfigMap. Reference content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- job.yaml

vars:
- name: COP_DUMP_URL
  objref:
    kind: ConfigMap
    name: update-esa-policy-cop
    apiVersion: v1
  fieldref:
    fieldpath: data.COP_DUMP_URL
    
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: update-esa-policy-cop
  env: params.env
configurations:
- params.yaml

2. Use variables

Using a variable is straightforward: simply reference it in the YAML manifest with $(), for example $(COP_DUMP_URL):

---
apiVersion: batch/v1
kind: Job
metadata:
  name: update-esa-policy-cop
  namespace: edsf-dsg
  labels:
    app.kubernetes.io/name: update-esa-policy-cop
    app.kubernetes.io/instance: update-esa-policy-cop
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: update-esa-policy-cop
    spec:
      automountServiceAccountToken: false
      restartPolicy: Never
      containers:
        - args:
            - "-c"
            - "curl -k $(COP_DUMP_URL) -o /var/data/policy/cop_dump.tgz"
          command:
            - "/bin/sh" 
          name: update-esa-policy-cop
          image: update-esa-policy-cop
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
          volumeMounts:
            - name: policy-storage
              mountPath: /var/data/policy
              subPath: policy-storage
          resources:
            limits:
              cpu: 500m
              memory: 3500Mi
            requests:
              cpu: 200m
              memory: 256Mi
      volumes:
        - name: policy-storage
          persistentVolumeClaim:
            claimName: dsg-policy-pv-claim
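
To render the final manifests with the variable substituted, a typical flow could look like this (the URL is a placeholder, and the files above are assumed to live alongside kustomization.yaml):

# Fill in the real value for the placeholder defined in params.env
sed -i 's|__COP_DUMP_URL__|https://example.com/cop_dump.tgz|' params.env

# Render the manifests and apply them
kubectl kustomize . | kubectl apply -f -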

III. Pod Information Variables

Passing Pod information to containers as variables generally covers two use cases:

1. Use Pod fields as the values of environment variables

Reference code:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
          printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
          sleep 10;
        done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never

In this configuration file you can see five environment variables. The env field is an array of EnvVar objects. The first element of the array specifies that the MY_NODE_NAME environment variable takes its value from the Pod's spec.nodeName field. Likewise, the other environment variables take their values from Pod fields.

2. Use Container fields as the values of environment variables

Reference code:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-resourcefieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox:1.24
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_CPU_REQUEST MY_CPU_LIMIT;
          printenv MY_MEM_REQUEST MY_MEM_LIMIT;
          sleep 10;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      env:
        - name: MY_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.cpu
        - name: MY_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.cpu
        - name: MY_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.memory
        - name: MY_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.memory
  restartPolicy: Never

In this configuration file you can see four environment variables. The env field is an array of EnvVar objects. The first element of the array specifies that the MY_CPU_REQUEST environment variable takes its value from the Container's requests.cpu field. Likewise, the other environment variables take their values from Container fields.
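
To see either example in action, the values show up in the container log (assuming the manifest was saved as pod.yaml):

kubectl apply -f pod.yaml
kubectl logs dapi-envars-resourcefieldref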

Kubernetes/K8S Basic Usage Notes [25] - Garbage Collection


Refer to the official Kubernetes documentation: Garbage Collection | Kubernetes

1. Get all Kubernetes resources

kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, kind: .kind, ownerReferences:  .metadata.ownerReferences }'

2. Get the metadata.ownerReferences of all Kubernetes resources

kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, kind: .kind, ownerReferences: select( has ("ownerReferences") ).ownerReferences }'

3. Cleanup approach

How do we clean up garbage inside a Kubernetes cluster? Cleaning up unused PVs is used here as an example. The idea is to create a Kubernetes CronJob object that performs the cleanup periodically. The complete YAML manifest is available on the original site as members-only content.

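As a minimal sketch of the idea (not the original manifest): a CronJob whose pod runs kubectl to delete PersistentVolumes stuck in the Released phase. The image, schedule, and service account name are assumptions, and the service account needs RBAC permission to list and delete persistentvolumes.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pv-cleaner
  namespace: kube-system
spec:
  schedule: "0 3 * * *"                    # run once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pv-cleaner   # assumed SA with list/delete rights on persistentvolumes
          restartPolicy: OnFailure
          containers:
            - name: pv-cleaner
              image: bitnami/kubectl:latest
              command: ["/bin/sh", "-c"]
              args:
                - >
                  kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{"\n"}{end}'
                  | xargs -r kubectl delete pv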

 

Summary of Commonly Used Kubernetes Components [1] - Cluster Autoscaler


I. Introduction

Cluster Autoscaler is an essential component of a K8S cluster. It scales the number of nodes horizontally based on node pressure, so that cluster workloads keep running stably.

Official documentation references:

GitHub:

AWS:

II. Installation and Deployment

AWS is used here as an example for installation, deployment, and usage.

1. Preparation

1) Create the IAM Policy

The policy content is as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}

Tip: you can create it directly from the console, or via the command line as follows (save the policy content to the file cluster-autoscaler-policy.json, then run the command below):

aws iam create-policy \
--policy-name AmazonEKSClusterAutoscalerPolicy \
--policy-document file://cluster-autoscaler-policy.json

2) Create the IAM Role

Tip: it can be created directly from the IAM Management Console (amazon.com), or via the command line; just attach the Policy created above.

Command line:

eksctl create iamserviceaccount \
--cluster=<my-cluster> \
--namespace=kube-system \
--name=cluster-autoscaler \
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/<AmazonEKSClusterAutoscalerPolicy> \
--override-existing-serviceaccounts \
--approve

2. Deployment

1) Download the application deployment manifest

wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

2) In the file, find the ServiceAccount named cluster-autoscaler

  • Add the annotation:
eks.amazonaws.com/role-arn: "arn:aws:iam::<ACCOUNT_ID>:role/<AmazonEKSClusterAutoscalerRole>"

3) In the file, find the Deployment named cluster-autoscaler

  • Add the annotation:
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
  • Add the following two lines, replacing <YOUR CLUSTER NAME> with the name of your own cluster:
--balance-similar-node-groups
--skip-nodes-with-system-pods=false

The result looks like this:

    spec:
      containers:
      - command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
  • Find the corresponding image tag value

Find the corresponding image tag in the official GitHub repository. Choose it according to your own cluster version; for example, a v1.21.n version is selected below:

cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v<1.21.n>

In my case the Kubernetes version is 1.18, so the image version v1.18.1 was used.
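
After applying the edited manifest, one way to set the image tag on the Deployment is kubectl set image (the tag here matches the v1.18.1 choice above; adjust it to your own version):

kubectl apply -f cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system set image deployment.apps/cluster-autoscaler \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.1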

3. Testing

Increase the load on the cluster and watch the number of nodes; you will see nodes being added automatically. No screenshots here, but it has been verified to work.

Kubernetes/K8S Basic Usage Notes [24] - Installing and Using Confluence


Confluence is your remote-friendly team workspace where knowledge and collaboration meet.

Install Confluence with Helm

We can refer to the official Helm chart to install Confluence. Click here for more information.

Download the chart

1. Add the repo to your Helm repositories with the command below:

helm repo add stevehipwell https://stevehipwell.github.io/helm-charts/

2. You can install the chart with the command:

helm install my-confluence-server stevehipwell/confluence-server --version 3.3.2
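
Because the later steps edit values.yaml and templates/deployment.yaml locally, you may prefer to fetch and unpack the chart instead of installing it straight away; one possible way, using the same chart and version:

helm pull stevehipwell/confluence-server --version 3.3.2 --untar
cd confluence-server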

Modify the chart configuration files to suit your environment

1. For the file values.yaml

In general, we need to change two parts: ingress and persistence. For example:

ingress:
  enabled: true
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
  path: /
  hosts:
    - atlassian.db.bciadopt.com
  tls:
  - hosts:
    - atlassian.db.bciadopt.com
    secretName: devops-tls

persistence:
  enabled: true
  annotations: {}
  accessMode: ReadWriteOnce
  storageClass: "aws-efs"
  size: 9Gi

In order to obtain a Confluence license, we also need to configure an environment variable:

env:
  - name: JAVA_OPTS
    value: "-javaagent:/var/atlassian/application-data/confluence/atlassian-agent-v1.2.3/atlassian-agent.jar"

2. For the file templates/deployment.yaml

Edit the initContainers section as shown below. To get a license, click here for more information.

      initContainers:
        - name: busybox-latest
          image: busybox:latest
          imagePullPolicy: IfNotPresent
          command: ["sh"]
          args: ["-c", "wget -O /var/atlassian/application-data/confluence/atlassian-agent-v1.2.3.tar.gz https://gitee.com/pengzhile/atlassian-agent/attach_files/283101/download/atlassian-agent-v1.2.3.tar.gz; cd /var/atlassian/application-data/confluence/; tar -xvf /var/atlassian/application-data/confluence/atlassian-agent-v1.2.3.tar.gz;"]
          volumeMounts:
            - mountPath: /var/atlassian/application-data/confluence
              name: {{ include "confluence-server.pvcname" . }}

Install the chart

Install the chart from the local chart directory with the following command. Wait about 2 minutes, then visit the Confluence homepage.

helm install atlassian -n devops -f values-production.yaml ./

Tips

If you want to download the complete file, please click here.

Other Confluence installation methods

If you don’t want to make complicated changes, click here to install directly.

Installing the Chart

1. Before installing the chart, clone the repository that contains the packaged chart:

git clone https://github.com/zhangqiongjie/qiongjiebiji.git -b helm-chart2

2. Unpack the tgz file and switch to the unpacked directory.

tar -xvf confluence-server-3.3.1.tgz && cd confluence-server

3. After that, you can install the chart:

helm upgrade --install --namespace default --values ./values-production.yaml my-release ./

Obtain the Confluence license

Refer to the command below (change the related values to suit your situation). You have to exec into the pod created by the Deployment before you run it.

java -jar atlassian-agent.jar -p conf -m example@example.com -n my_name -o https://example.com -s ABCD-1234-EFGH-5678
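
Getting a shell inside the pod first might look like this (the namespace and release name follow the helm install command above; the exact Deployment name is an assumption):

kubectl -n devops exec -it deploy/atlassian-confluence-server -- /bin/bash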

Configure LDAP for Confluence

After logging in to the Confluence homepage, go to User management -> User Directories -> Add Directory. For the configuration, refer to the screenshot below.
