Velero backup fails with the error “unable to create lock: repository is already locked (…)” or similar.


The error means a previous backup attempt was interrupted, and Velero could not release the lock.


The solution requires unlocking the repository and ensuring it will not be locked again.


First, unlock the repository. This can be done manually or with a Restic command:


  • Manually - Go to the backup storage location (e.g. the S3 bucket) → restic → {app-name} → locks and delete the locks folder. Usually, {app-name} is the namespace of the application to back up (e.g. “default” or “labforward”). There is only one “locks” folder in the storage. An example AWS CLI command for this is shown below the list;


  • With Restic - In the cluster, run the command:


kubectl --namespace velero exec pod/velero-67457b986-mzvvt --container velero -- restic unlock --repo=s3:s3-eu-central-1.amazonaws.com/velero-bucket/restic/labforward --password-file=/tmp/credentials/velero/velero-repo-credentials-repository-password


Adjust it to your specific environment:


  • pod/velero-67457b986-mzvvt → the Velero pod name returned by the command:
    kubectl get pod --namespace velero -o name;
  • s3:s3-eu-central-1.amazonaws.com/velero-bucket/restic/labforward → your backup repository. You can see it in the error message, in the --repo=… flag.
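

If you prefer the command line for the manual option, the locks folder can be removed with the AWS CLI. A minimal sketch, assuming the same bucket and repository path as in the Restic command above:


# Confirm the lock objects exist under the expected prefix
aws s3 ls s3://velero-bucket/restic/labforward/locks/

# Delete the locks folder and everything in it
aws s3 rm s3://velero-bucket/restic/labforward/locks/ --recursive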


Second, identify the root cause of the locking and prevent it. There is no straightforward solution other than digging into Velero's logs and looking for it. However, this issue is usually caused by a lack of memory to complete the backup process: if a pod runs out of memory, it restarts. To check the number of restarts of the backup-related pods, run the command:


kubectl get pods -n velero -o custom-columns='NAME:.metadata.name,RESTART_NUMBER:.status.containerStatuses[*].restartCount'


If the restart number exceeds 0, try increasing the respective pod's memory.
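

To also see the pods' current memory limits, a similar custom-columns query can be used (a sketch; the column names are only examples, and pods without an explicit limit will show <none>):


kubectl get pods -n velero -o custom-columns='NAME:.metadata.name,MEMORY_LIMIT:.spec.containers[*].resources.limits.memory'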


To increase the Velero deployment's memory limit to 2G, run:

kubectl patch deployment velero -n velero --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "2G"}]'

To increase the node-agent's memory limit to 1G, run:

kubectl patch daemonset node-agent -n velero --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "1G"}]'

These commands update the memory limits to Labforward's recommended values, which cover around 500 GB of data. Feel free to adjust the values to your requirements.
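

To confirm the new limits are in place, the patched values can be read back (a sketch using kubectl's jsonpath output; container index 0 matches the paths used in the patch commands above):


kubectl get deployment velero -n velero -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}{"\n"}'
kubectl get daemonset node-agent -n velero -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}{"\n"}'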


If the snapshots are still failing after increasing the pods' memory, create a support bundle and send it to Labforward support.





