VMs with a cloudinit disk can't be processed
Proxmox does not create RBD image snapshots for cloudinit drives, so the backup has to create them itself (a possible approach is sketched below).
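A minimal sketch of what that could look like, assuming the backup host can reach the source node over SSH the same way the tool already does in the log below. The helper name and its call site are assumptions, not the project's actual API; the command itself mirrors the `ssh ... rbd -p rbd snap ...` and `rbd snap create` invocations visible in the log:

```python
import subprocess

def create_cloudinit_snapshot(host: str, pool: str, image: str, snapshot_name: str) -> None:
    """Create the RBD snapshot for a cloudinit disk ourselves, because Proxmox
    skips cloudinit drives when a VM snapshot is taken via the API.

    Hypothetical helper: the ssh/rbd command line is copied from the patterns
    in the log below, everything else is an assumption.
    """
    cmd = [
        "ssh", f"root@{host}", "-T", "-o", "Compression=no", "-x",
        "rbd", "-p", pool, "snap", "create", f"{image}@{snapshot_name}",
    ]
    subprocess.run(cmd, check=True)

# Example with the values from this run:
# create_cloudinit_snapshot("10.1.1.205", "rbd", "vm-118-cloudinit",
#                           "backup_daily_0afd7d12da630e05")
```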
Context:
[2021-01-27 06:57:48.637311] INFO: backup starting for fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365)
[2021-01-27 06:57:48.637726] DEBUG: found proxmox vm disk: rbd:vm-118-disk-0
[2021-01-27 06:57:48.637828] DEBUG: found proxmox vm disk: rbd:vm-118-cloudinit
[2021-01-27 06:57:48.638235] DEBUG: GET https://10.1.1.205:8006/api2/json/nodes/pve5/qemu/118/feature
[2021-01-27 06:57:48.661135] DEBUG: Status code: 200, output: b'{"data":{"hasFeature":1,"nodes":["pve1","pve5","pve4"]}}'
[2021-01-27 06:57:48.661287] INFO: save current config into vm metadata image of vm 9d3b412a-dcb6-4c4c-be1a-07152abe8365 (id=118, name=fedora33-template-master)
[2021-01-27 06:57:48.661422] DEBUG: exec command 'rbd -p rbd ls --format json'
[2021-01-27 06:57:48.750125] INFO: metadata image for vm not existing; creating...
[2021-01-27 06:57:48.750633] INFO: creating ceph rbd image rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata
[2021-01-27 06:57:48.750910] DEBUG: exec command 'rbd create rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata -s 10M'
[2021-01-27 06:57:48.966041] DEBUG: exec command 'rbd -p rbd ls --format json'
[2021-01-27 06:57:49.056690] DEBUG: exec command 'rbd feature disable 9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata object-map fast-diff deep-flatten'
[2021-01-27 06:57:49.258555] DEBUG: mapping ceph image rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata
[2021-01-27 06:57:49.258811] DEBUG: exec command 'rbd -p rbd device map 9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.303928] DEBUG: get info about mapped rbd images locally
[2021-01-27 06:57:52.304190] DEBUG: exec command 'rbd device list --format json'
[2021-01-27 06:57:52.369149] DEBUG: exec command '/usr/sbin/mkfs.ext4 -L 9d3b412a-dcb6-4c /dev/rbd0'
[2021-01-27 06:57:52.413236] DEBUG: mount vm metadata filesystem: 9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata
[2021-01-27 06:57:52.413475] DEBUG: exec command 'mkdir -p /tmp/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.419724] DEBUG: exec command 'mount /dev/rbd0 /tmp/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.454550] DEBUG: save current config into metadata image -> /tmp/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata/118.conf
[2021-01-27 06:57:52.457411] DEBUG: exec command 'umount /tmp/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.493040] DEBUG: exec command 'rmdir /tmp/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.498572] DEBUG: unmapping ceph image rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata
[2021-01-27 06:57:52.498791] DEBUG: exec command 'rbd -p rbd device unmap 9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata'
[2021-01-27 06:57:52.594487] DEBUG: exec command 'rbd image-meta set rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata "vm.id" "118"'
[2021-01-27 06:57:52.723870] DEBUG: exec command 'rbd image-meta set rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata "vm.uuid" "9d3b412a-dcb6-4c4c-be1a-07152abe8365"'
[2021-01-27 06:57:52.849915] DEBUG: exec command 'rbd image-meta set rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata "vm.name" "fedora33-template-master"'
[2021-01-27 06:57:52.989051] DEBUG: exec command 'rbd image-meta set rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata "vm.running" "False"'
[2021-01-27 06:57:53.109398] DEBUG: exec command 'rbd image-meta set rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata "last_updated" "2021-01-27 06:57:53.109325"'
[2021-01-27 06:57:53.244459] INFO: creating ceph snapshot for image rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata
[2021-01-27 06:57:53.244842] DEBUG: exec command "rbd -p rbd snap create 9d3b412a-dcb6-4c4c-be1a-07152abe8365_vm_metadata@backup_daily_0afd7d12da630e05"
[2021-01-27 06:57:53.695199] DEBUG: ceph snapshot created backup_daily_0afd7d12da630e05
[2021-01-27 06:57:53.695969] DEBUG: GET https://10.1.1.205:8006/api2/json/nodes/pve5/qemu/118/snapshot
[2021-01-27 06:57:53.723009] DEBUG: Status code: 200, output: b'{"data":[{"running":0,"digest":"ce421417eda17046e034539c6c5e750d32fd5cef","description":"You are here!","name":"current"}]}'
[2021-01-27 06:57:53.723323] INFO: create vm snapshot via proxmox api for fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365)
[2021-01-27 06:57:53.723705] DEBUG: POST https://10.1.1.205:8006/api2/json/nodes/pve5/qemu/118/snapshot {'snapname': 'backup_daily_0afd7d12da630e05', 'vmstate': 0, 'description': '!!!DO NOT REMOVE!!! automated snapshot by proxmox-rbd-backup. !!!DO NOT REMOVE!!!'}
[2021-01-27 06:57:53.764991] DEBUG: Status code: 200, output: b'{"data":"UPID:pve5:00366A75:18E46437:601100E1:qmsnapshot:118:root@pam:"}'
[2021-01-27 06:57:54.766527] DEBUG: GET https://10.1.1.205:8006/api2/json/nodes/pve5/qemu/118/snapshot
[2021-01-27 06:57:54.798416] DEBUG: Status code: 200, output: b'{"data":[{"snaptime":1611727073,"vmstate":0,"name":"backup_daily_0afd7d12da630e05","description":"!!!DO NOT REMOVE!!! automated snapshot by proxmox-rbd-backup. !!!DO NOT REMOVE!!!","snapstate":"prepare"},{"digest":"063671b465239681d85433133780d0123445d537","running":0,"name":"current","description":"You are here!"}]}'
[2021-01-27 06:57:54.798595] DEBUG: snapshot creation for fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) was successful
[2021-01-27 06:57:54.798726] DEBUG: wait for snapshot creation completion of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-disk-0@backup_daily_0afd7d12da630e05. 500 tries left of 500
[2021-01-27 06:57:55.799873] DEBUG: get ceph snapshot count for image vm-118-disk-0
[2021-01-27 06:57:55.800077] DEBUG: exec command 'ssh root@10.1.1.205 -T -o Compression=no -x rbd -p rbd ls --format json'
[2021-01-27 06:57:56.066821] DEBUG: exec command 'ssh root@10.1.1.205 -T -o Compression=no -x rbd -p rbd snap ls --format json vm-118-disk-0'
[2021-01-27 06:57:56.371216] DEBUG: snapshot of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-disk-0@backup_daily_0afd7d12da630e05 found
[2021-01-27 06:57:56.371623] INFO: initial backup, starting full copy of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-disk-0
[2021-01-27 06:57:56.371968] DEBUG: exec command 'ssh root@10.1.1.205 -T -o Compression=no -x rbd info rbd/vm-118-disk-0 --format json'
[2021-01-27 06:57:56.690998] DEBUG: exec command '/bin/bash -c set -o pipefail; ssh root@10.1.1.205 -T -o Compression=no -x "rbd export --no-progress rbd/vm-118-disk-0@backup_daily_0afd7d12da630e05 - | lz4 -z --fast=12 --sparse" | pv --rate --bytes --timer -c -N compressed-network | lz4 -d | pv --rate --bytes --progress --timer --eta --size 21474836480 -c -N import | rbd import --no-progress - rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0'
[2021-01-27 07:01:09.783877] INFO: creating ceph snapshot for image rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0
[2021-01-27 07:01:09.784268] DEBUG: exec command "rbd -p rbd snap create 9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0@backup_daily_0afd7d12da630e05"
[2021-01-27 07:01:10.082024] DEBUG: ceph snapshot created backup_daily_0afd7d12da630e05
[2021-01-27 07:01:10.082358] INFO: initial backup of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-disk-0 complete
[2021-01-27 07:01:10.082766] DEBUG: check if image and snapshot does exist on backup cluster for fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> 9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0@backup_daily_0afd7d12da630e05
[2021-01-27 07:01:10.082946] DEBUG: get ceph snapshot count for image 9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0
[2021-01-27 07:01:10.083051] DEBUG: exec command 'rbd -p rbd ls --format json'
[2021-01-27 07:01:10.190082] DEBUG: exec command 'rbd -p rbd snap ls --format json 9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0'
[2021-01-27 07:01:10.302751] DEBUG: snapshot rbd/9d3b412a-dcb6-4c4c-be1a-07152abe8365-rbd-vm-118-disk-0@backup_daily_0afd7d12da630e05 found
[2021-01-27 07:01:10.303114] DEBUG: wait for snapshot creation completion of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-cloudinit@backup_daily_0afd7d12da630e05. 500 tries left of 500
[...]
[2021-01-27 07:14:07.559213] DEBUG: wait for snapshot creation completion of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-cloudinit@backup_daily_0afd7d12da630e05. 1 tries left of 500
[2021-01-27 07:14:08.560818] DEBUG: get ceph snapshot count for image vm-118-cloudinit
[2021-01-27 07:14:08.561419] DEBUG: exec command 'ssh root@10.1.1.205 -T -o Compression=no -x rbd -p rbd ls --format json'
[2021-01-27 07:14:08.826837] DEBUG: exec command 'ssh root@10.1.1.205 -T -o Compression=no -x rbd -p rbd snap ls --format json vm-118-cloudinit'
[2021-01-27 07:14:09.115967] ERROR: unexpected exception (probably a bug): waiting for ceph rbd snapshot creation completion of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-cloudinit tined out after 500 tries
[2021-01-27 07:14:09.118185] ERROR: Traceback (most recent call last):
File "/root/proxmox-rbd-backup/lib/backup.py", line 283, in run_backup
self.backup_vm_disk(vm, disk, snapshot_name, is_backup_mode_incremental, existing_backup_snapshot)
File "/root/proxmox-rbd-backup/lib/backup.py", line 220, in backup_vm_disk
self.wait_for_rbd_image_snapshot_completion(vm, image, snapshot_name, self.get_snapshot_name_prefix())
File "/root/proxmox-rbd-backup/lib/backup.py", line 197, in wait_for_rbd_image_snapshot_completion
raise RuntimeError(f'waiting for ceph rbd snapshot creation completion of {vm} -> {image} tined out after {tries_attempted} tries')
RuntimeError: waiting for ceph rbd snapshot creation completion of fedora33-template-master (id=118, uuid=9d3b412a-dcb6-4c4c-be1a-07152abe8365) -> rbd/vm-118-cloudinit tined out after 500 tries
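Based on the traceback, the timeout happens while `wait_for_rbd_image_snapshot_completion` waits for a snapshot that Proxmox will never create for the cloudinit volume. One possible fix (a sketch only; the function name below is hypothetical) would be to flag cloudinit volumes during disk discovery and create their snapshots directly instead of waiting:

```python
def is_cloudinit_disk(rbd_image_name: str) -> bool:
    # Proxmox names cloudinit volumes "vm-<id>-cloudinit" (see the
    # "found proxmox vm disk" lines at the top of the log).
    return rbd_image_name.endswith("-cloudinit")
```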