
Commit 555c6c8

yalosev authored and legacyrj committed
Conformance results for v1.24/DeckhouseInstaller (cncf#2220)
Signed-off-by: Yuriy Losev <[email protected]>
Signed-off-by: legacyrj <[email protected]>
1 parent a5bbf63 commit 555c6c8

File tree

4 files changed

+36175
-0
lines changed


v1.24/dhctl/PRODUCT.yaml

Lines changed: 9 additions & 0 deletions
```yaml
vendor: Flant
name: Deckhouse Installer (DHCTL)
version: v1.37.5
website_url: https://github.com/deckhouse/deckhouse
repo_url: https://github.com/deckhouse/deckhouse
documentation_url: https://deckhouse.io/en/documentation/v1/installing/
product_logo_url: https://deckhouse.io/images/logos/deckhouse-platform.svg
type: installer
description: 'Deckhouse Installer (DHCTL) is an application for creating Kubernetes clusters and configuring their infrastructure.'
```

v1.24/dhctl/README.md

Lines changed: 112 additions & 0 deletions
# Reproducing the test results

##### 1. Create basic configuration
```console
$ cat > config.yml <<EOF
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "1.24"
clusterDomain: "cluster.local"
---
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        publicDomainTemplate: "%s.example.com"
    cniFlannelEnabled: true
    cniFlannel:
      podNetworkMode: VXLAN
---
apiVersion: deckhouse.io/v1
kind: StaticClusterConfiguration
internalNetworkCIDRs:
- 192.168.0.0/24
EOF
```
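As a quick sanity check, the heredoc from step 1 can be rewritten as a standalone script that also verifies the file's structure (a sketch; the `grep` checks only count document separators and `kind:` lines, they are not a real YAML validator):

```shell
#!/bin/sh
# Write the same three-document configuration as in step 1.
# The quoted 'EOF' delimiter prevents any shell expansion inside the heredoc.
cat > config.yml <<'EOF'
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Static
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "1.24"
clusterDomain: "cluster.local"
---
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        publicDomainTemplate: "%s.example.com"
    cniFlannelEnabled: true
    cniFlannel:
      podNetworkMode: VXLAN
---
apiVersion: deckhouse.io/v1
kind: StaticClusterConfiguration
internalNetworkCIDRs:
- 192.168.0.0/24
EOF

# Three resources means two "---" separators and three "kind:" lines.
seps=$(grep -c '^---$' config.yml)
kinds=$(grep -c '^kind:' config.yml)
echo "separators=$seps kinds=$kinds"   # prints: separators=2 kinds=3
```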

##### 2. Run the prebuilt installation image of Deckhouse Installer
```console
$ docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
    registry.deckhouse.io/deckhouse/ce/install:stable bash
```

##### 3. Initiate the installation process
```console
# dhctl bootstrap \
    --ssh-user=<username> \
    --ssh-host=<master_ip> \
    --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
    --config=/config.yml
```

`username` here refers to the user that generated the SSH key.
`master_ip` is the IP address of the target machine.

After the installation is complete, you will be returned to the command line.
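A small pre-flight check inside the install container can catch mis-typed `-v` mounts before `dhctl bootstrap` is run (a sketch; the two paths are the ones the flags in step 3 reference, and `dhctl` itself is not invoked here):

```shell
#!/bin/sh
# Verify that the files mounted in step 2 are visible at the paths
# the dhctl flags expect. Prints one status line per path.
missing=0
for f in /config.yml /tmp/.ssh/id_rsa; do
  if [ -e "$f" ]; then
    echo "ok: $f"
  else
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
echo "missing files: $missing"
```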
52+
53+
##### 4. SSH to host machine
54+
```console
55+
$ ssh <username>@<master_ip>
56+
$ sudo -i
57+
```
58+
Admin kubernetes config symlinked to /root/.kube/config. You can run further commands with root privilege

##### 5. Remove taints for a single-master configuration:
```console
# kubectl patch nodegroup master --type json -p '[{"op": "remove", "path": "/spec/nodeTemplate/taints"}]'
```

##### 5.1 Alternatively, you can add more nodes by following this [guide](https://deckhouse.io/en/documentation/v1/modules/040-node-manager/faq.html#how-do-i-automatically-add-a-static-node-to-a-cluster)

## Run the tests
##### 1. Download a [binary release](https://github.com/vmware-tanzu/sonobuoy/releases) of the CLI.
Like this:

```console
# wget https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.10/sonobuoy_0.56.10_linux_amd64.tar.gz
# tar xzf sonobuoy_0.56.10_linux_amd64.tar.gz
```

##### 2. Deploy a Sonobuoy pod to your cluster with:

```console
# ./sonobuoy run --mode=certified-conformance
```

##### 3. View actively running pods:

```console
# ./sonobuoy status
```

##### 4. To inspect the logs:

```console
# ./sonobuoy logs
```

##### 5. Once `sonobuoy status` shows the run as `completed`, copy the output directory from the main Sonobuoy pod to a local directory:

```console
# ./sonobuoy retrieve .
```

This copies a single `.tar.gz` snapshot from the Sonobuoy pod into your local `.` directory. Extract the contents into `./results` with:

```console
# mkdir ./results; tar xzf *.tar.gz -C ./results
```
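The retrieve-and-extract pair in step 5 can be rehearsed without a cluster by standing in a mock snapshot archive for the one `sonobuoy retrieve` would produce (a sketch; the `plugins/e2e/e2e.log` path inside the archive is an assumption for illustration):

```shell
#!/bin/sh
# Build a mock snapshot archive in place of `sonobuoy retrieve .` output.
mkdir -p mock/plugins/e2e
echo "e2e log placeholder" > mock/plugins/e2e/e2e.log
tar czf mock_snapshot.tar.gz -C mock .

# Extract it the same way as in step 5.
mkdir -p ./results
tar xzf mock_snapshot.tar.gz -C ./results
ls ./results/plugins/e2e   # prints: e2e.log
```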

##### 6. To clean up Kubernetes objects created by Sonobuoy, run:

```console
# ./sonobuoy delete
```
