Upgrades
Upgrade support matrix
The following table shows the upgrade path of all supported versions.
| Upgrade from version | Supported new version(s) |
| --- | --- |
| v1.4.2 | |
| v1.4.1 | |
| v1.3.2 | |
| v1.3.1 | |
| v1.2.2 | |
| v1.2.1 | |
Rancher upgrade
If you are using Rancher to manage your Harvester cluster, we recommend upgrading your Rancher server first. For more information, please refer to the Rancher upgrade guide.
For the Harvester & Rancher support matrix, please visit our website here.
Before starting an upgrade
Check out the available `upgrade-config` setting to tweak the upgrade strategies and behaviors that best suit your cluster environment.
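If you prefer the command line, you can inspect or edit the setting directly. This is a minimal sketch, assuming kubectl access to the cluster and that the setting is exposed as the `upgrade-config` object of the `settings.harvesterhci.io` resource:

```bash
# Show the current upgrade-config value (sketch; the resource name is an
# assumption based on how Harvester exposes its settings).
kubectl get settings.harvesterhci.io upgrade-config -o yaml

# Open the setting in an editor to tweak the strategies in place.
kubectl edit settings.harvesterhci.io upgrade-config
```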
Start an upgrade
- Make sure to read the Warning paragraph at the top of this document first.
- Harvester checks for new upgradable versions periodically. If a new version is available, an upgrade button appears on the Dashboard page. You can also list the available versions from the command line (see the sketch after these steps).
- If the cluster is in an air-gapped environment, see the Prepare an air-gapped upgrade section first. You can also use the approach in that section to speed up the ISO download.

1. Navigate to the Harvester GUI and click the upgrade button on the Dashboard page.
2. Select a version to start the upgrade.
3. Click the circle at the top to display the upgrade progress.
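As an optional check, the versions that the upgrade button is based on can also be listed from the command line. A minimal sketch, assuming kubectl access and the `harvester-system` namespace used elsewhere in this document:

```bash
# List the Version objects Harvester has discovered as upgrade candidates
# (sketch; assumes the versions.harvesterhci.io resource in harvester-system).
kubectl -n harvester-system get versions.harvesterhci.io
```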
Prepare an air-gapped upgrade
Make sure to check the Upgrade support matrix section first for the upgradable versions.
1. Download a Harvester ISO file from the release pages.

2. Save the ISO to a local HTTP server (see the sketch after this procedure for serving the files and computing the ISO checksum). Assume the file is hosted at http://10.10.0.1/harvester.iso.

3. Download the version file from the release pages, for example, https://releases.rancher.com/harvester/{version}/version.yaml.

4. Replace the isoURL value in the version.yaml file:

   ```yaml
   apiVersion: harvesterhci.io/v1beta1
   kind: Version
   metadata:
     name: v1.0.2
     namespace: harvester-system
   spec:
     isoChecksum: <SHA-512 checksum of the ISO>
     isoURL: http://10.10.0.1/harvester.iso # change to local ISO URL
     releaseDate: '20220512'
   ```

5. Host the modified version.yaml file on the local HTTP server. Assume the file is hosted at http://10.10.0.1/version.yaml.

6. Log in to one of your control plane nodes.

7. Become root and create the version:

   ```shell
   rancher@node1:~> sudo -i
   rancher@node1:~> kubectl create -f http://10.10.0.1/version.yaml
   ```

8. An upgrade button should now appear on the Harvester GUI Dashboard page.
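The following is a minimal sketch of preparing the files on the HTTP server host (assumed to be 10.10.0.1); the `sha512sum` and `python3` commands are illustrative choices, not a required part of the procedure:

```bash
# On the HTTP server host, in the directory containing the downloaded files:

# Compute the SHA-512 checksum to paste into the isoChecksum field of version.yaml.
sha512sum harvester.iso

# Serve harvester.iso and the modified version.yaml over HTTP on port 80
# (any web server works; Python's built-in server is used only as an example).
sudo python3 -m http.server 80
```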
Free system partition space requirement
SUSE Virtualization checks the amount of free system partition space on each node when you select Upgrade. If any node does not meet the requirement, the upgrade is denied.

If you want to try upgrading even if the free system partition space is insufficient on some nodes, you can update the `harvesterhci.io/minFreeDiskSpaceGB` annotation of the `Version` object.
```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Version
metadata:
  annotations:
    harvesterhci.io/minFreeDiskSpaceGB: "30" # the value is pre-defined and may be customized
  name: 1.2.0
  namespace: harvester-system
spec:
  isoChecksum: <SHA-512 checksum of the ISO>
  isoURL: http://192.168.0.181:8000/harvester-master-amd64.iso
  minUpgradableVersion: 1.1.2
  releaseDate: "20230609"
```
Setting a smaller value than the pre-defined value may cause the upgrade to fail and is not recommended in a production environment.
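If you prefer not to edit the object in place, the annotation can also be updated with a single command. A hedged sketch, using the object name and value from the example above; adjust both to match your cluster:

```bash
# Override the free-space requirement on the Version object
# (sketch; "1.2.0" and "30" are taken from the example above).
kubectl -n harvester-system annotate versions.harvesterhci.io 1.2.0 \
  harvesterhci.io/minFreeDiskSpaceGB=30 --overwrite
```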
The following sections describe solutions for issues related to this requirement.
Set up a private container registry and skip image preloading
The system partition might still lack free space even after you remove images. To address this, set up a private container registry for both the current and new images, and configure the `upgrade-config` setting with the following value:

```json
{"imagePreloadOption":{"strategy":{"type":"skip"}}, "restoreVM": false}
```
SUSE Virtualization skips the upgrade image preloading process. When the deployments on the nodes are upgraded, the container runtime loads the images stored in the private container registry.
Do not rely on the public container registry. Be aware of potential internet service interruptions and of how close you are to reaching your Docker Hub rate limit. Failure to download any of the required images may cause the upgrade to fail and may leave the cluster in an intermediate state.
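A minimal sketch of applying this value from the command line, assuming the `upgrade-config` setting is exposed through the `settings.harvesterhci.io` resource with a string `value` field; setting it through the UI is equivalent:

```bash
# Switch the image preload strategy to "skip" (sketch; the JSON matches
# the value shown above and is stored as a string in the setting).
kubectl patch settings.harvesterhci.io upgrade-config --type merge \
  -p '{"value":"{\"imagePreloadOption\":{\"strategy\":{\"type\":\"skip\"}}, \"restoreVM\": false}"}'
```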
Longhorn Manager Crashes Due to Backing Image Eviction
When upgrading to SUSE Virtualization v1.4.x, Longhorn Manager may crash if eviction is requested for a backing image. To prevent the issue from occurring, ensure that no backing image eviction is requested before starting the upgrade.
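A hedged way to look for pending eviction requests before upgrading; the `backingimages.longhorn.io` resource name and the field pattern are assumptions about the Longhorn schema in your cluster, so adapt them as needed:

```bash
# Search backing image definitions for eviction-related fields
# (sketch; assumes Longhorn runs in the longhorn-system namespace).
kubectl -n longhorn-system get backingimages.longhorn.io -o yaml | grep -i eviction
```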
Re-enable RKE2 ingress-nginx admission webhooks (CVE-2025-1974)
If you disabled the RKE2 ingress-nginx admission webhooks to mitigate CVE-2025-1974, you must re-enable the webhook after upgrading to SUSE Virtualization v1.5.0 or later.
1. Verify that SUSE Virtualization is using nginx-ingress v1.12.1 or later.

   ```shell
   $ kubectl -n kube-system get po -l"app.kubernetes.io/name=rke2-ingress-nginx" -ojsonpath='{.items[].spec.containers[].image}'
   rancher/nginx-ingress-controller:v1.12.1-hardened1
   ```
2. Run `kubectl -n kube-system edit helmchartconfig rke2-ingress-nginx` to remove the following configurations from the `HelmChartConfig` resource:

   - `.spec.valuesContent.controller.admissionWebhooks.enabled: false`
   - `.spec.valuesContent.controller.extraArgs.enable-annotation-validation: true`
3. Verify that the new `.spec.valuesContent` configuration is similar to the following example.

   ```yaml
   apiVersion: helm.cattle.io/v1
   kind: HelmChartConfig
   metadata:
     name: rke2-ingress-nginx
     namespace: kube-system
   spec:
     valuesContent: |-
       controller:
         admissionWebhooks:
           port: 8444
         extraArgs:
           default-ssl-certificate: cattle-system/tls-rancher-internal
         config:
           proxy-body-size: "0"
           proxy-request-buffering: "off"
         publishService:
           pathOverride: kube-system/ingress-expose
   ```

   If the `HelmChartConfig` resource contains other custom `ingress-nginx` configurations, you must retain them when editing the resource.
4. Exit the `kubectl edit` command execution to save the configuration. SUSE Virtualization automatically applies the change once the content is saved.
5. Verify that the `rke2-ingress-nginx-admission` webhook configuration is re-enabled.

   ```shell
   $ kubectl get validatingwebhookconfiguration rke2-ingress-nginx-admission
   NAME                           WEBHOOKS   AGE
   rke2-ingress-nginx-admission   1          6s
   ```
6. Verify that the `ingress-nginx` pods are restarted successfully.

   ```shell
   $ kubectl -n kube-system get po -lapp.kubernetes.io/instance=rke2-ingress-nginx
   NAME                                  READY   STATUS    RESTARTS   AGE
   rke2-ingress-nginx-controller-l2cxz   1/1     Running   0          94s
   ```