Yesterday I gave a brief introduction to Prometheus. Today we'll install it in our cluster using Helm; Helm itself is covered in other articles, so I won't repeat that here. Let's get straight to installing Prometheus.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus
helm pull prometheus-community/kube-prometheus-stack
tar -xvf kube-prometheus-stack-12.12.1.tgz
cd kube-prometheus-stack
vim values.yaml
prometheus storageSpec
Change the storageClassName field to the name of the Storage Class you created.
## Deploy a Prometheus instance
##
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: {Your Storage Class}
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager storage
Change the storageClassName field to the name of the Storage Class you created.
## Configuration for alertmanager
## ref: https://prometheus.io/docs/alerting/alertmanager/
##
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus-storage-class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
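If you prefer not to edit the chart's values.yaml in place, the same two settings can also live in a small override file passed to helm install with `-f`. A sketch, assuming you name the file custom-values.yaml (the file name is arbitrary):

```yaml
# custom-values.yaml (name is arbitrary): storage overrides only.
# Pass it at install time with: helm install ... -f custom-values.yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus-storage-class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus-storage-class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
```

This keeps your changes separate from the chart, so upgrading the chart later doesn't overwrite them.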
cd ..
mkdir prometheus-storage-class
cd prometheus-storage-class
cat <<EOF >./rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF >./storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-storage-class
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
EOF
cat <<EOF >./nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.1.236
            - name: NFS_PATH
              value: /var/nfsshare/
      volumes:
        - name: nfs-client-root
          nfs:
            server: {IP} # must match NFS_SERVER above
            path: /var/nfsshare/
EOF
kubectl apply -f rbac.yaml
kubectl apply -f storageclass.yaml
kubectl apply -f nfs-client-provisioner.yaml
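Before pointing Prometheus at the new class, it's worth confirming that dynamic provisioning actually works. A minimal test claim (the name test-claim and the 1Mi size are made up for this check):

```yaml
# test-pvc.yaml: throwaway claim to verify the NFS provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # hypothetical name, used only for this check
spec:
  storageClassName: prometheus-storage-class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
```

Apply it with kubectl apply -f, confirm that kubectl get pvc shows it as Bound (and that a matching directory appears under the NFS export), then delete it.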
This step installs NFS and uses it as our storage class. If you already have a storage class, you can skip this step.
apt-get install nfs-kernel-server nfs-common
mkdir /var/nfsshare
chmod -R 777 /var/nfsshare/
echo "/var/nfsshare *(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
/etc/init.d/nfs-kernel-server restart
Every node that will mount the share also needs the NFS client:
apt-get install nfs-common
kubectl create ns mornitor
cd ..
helm package kube-prometheus-stack
helm install kube-prometheus-stack-12.12.1.tgz --name-template prometheus -n mornitor
kubectl port-forward --address=0.0.0.0 svc/prometheus-grafana -n mornitor 30001:80
You can then browse to your host's IP on port 30001 to reach the Grafana UI.
Grafana default credentials:
account: admin
password: prom-operator
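Leaving the default password in place is risky on anything reachable from outside. The bundled Grafana subchart exposes an adminPassword value; a sketch of overriding it in values.yaml before installing (the password shown is a placeholder):

```yaml
# Grafana subchart override: set your own admin password.
grafana:
  adminPassword: "change-me"   # placeholder: pick your own
```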
kubectl port-forward --address=0.0.0.0 svc/prometheus-kube-prometheus-prometheus -n mornitor 30002:9090
You can then browse to your host's IP on port 30002 to reach the Prometheus UI.
Another day :)