Kopano-Grommunio migration

Read out the users on the Kopano server:

kopano-admin -l | sed -e '1,4d' -e '/^$/d' | awk '{ print $1 }' | sort |
  while read user; do kopano-admin --details "$user"; done |
  egrep '^(Username|Fullname|Emailaddress|Store GUID| Warning| Soft| Hard)' |
  sed -e 's#^ ##g' -e 's#^Username:\t*##g' -e 's#.*:[\t ]*#;#g' |
  sed ':a;N;$!ba;s/\n;/;/g' >> kopano-users.txt

Start the migration on the Grommunio server

# Mount the attachment directory on the new server
sshfs root@kp:/var/lib/kopano/attachments /mnt

# Open an SSH tunnel for MySQL
ssh -L 12345:localhost:3306 root@kp
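Before kicking off the migration, the tunnel can be sanity-checked from the Grommunio side. This assumes a local mysql client is installed; user and port as above:

```
# Should list the Kopano databases through the forwarded port
mysql -h 127.0.0.1 -P 12345 -u sqlUsername -p -e 'SHOW DATABASES;'
```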

# Start the migration

SQLPASS=pass123 gromox-kdb2mt --sql-host 127.0.0.1 --sql-port 12345 \
  --sql-user sqlUsername --src-attach /mnt --mbox-mro kopanousername |
  gromox-import -u destinationusername@example.com
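The per-user command above can be driven by the kopano-users.txt export created on the Kopano server. This is only a sketch, not the documented Grommunio procedure: the field order per line is assumed from the export command, and migrate_user merely echoes the pipeline (dry run) so it can be tested safely; drop the echo to actually migrate.

```shell
# Sketch: batch migration over the kopano-users.txt export.
# Assumed field order per line (from the export one-liner):
#   username;fullname;email;storeguid;warning;soft;hard
# migrate_user only echoes the pipeline (dry run); remove the echo to execute.
migrate_user() {
  user=$1; email=$2
  echo "SQLPASS=pass123 gromox-kdb2mt --sql-host 127.0.0.1 --sql-port 12345" \
       "--sql-user sqlUsername --src-attach /mnt --mbox-mro $user" \
       "| gromox-import -u $email"
}

# Demo input standing in for kopano-users.txt (hypothetical user):
cat > /tmp/kopano-users-demo.txt <<'EOF'
alice;Alice Example;alice@example.com;0123456789ABCDEF;0;0;0
EOF

while IFS=';' read -r user _fullname email _rest; do
  migrate_user "$user" "$email"
done < /tmp/kopano-users-demo.txt
```

Replace /tmp/kopano-users-demo.txt with the real kopano-users.txt once the dry-run output looks right.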

Links

Docs: Grommunio: reading out Kopano users

Docs: Grommunio: Kopano migration

Proxmox PBS ZFS best practice

Create a ZFS pool with a special device for metadata

zpool create -o ashift=12 backupstorage raidz1 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0JCB5 \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0EPHT \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0K5GS \
  /dev/disk/by-id/ata-ST24000NM000C-3WD103_ZXA0MQM7 \
  special mirror /dev/disk/by-id/nvme-eui.01000000000000008ce38ee306063c4d /dev/disk/by-id/nvme-eui.01000000000000008ce38ee30a536b19

zpool add backupstorage log mirror /dev/disk/by-id/xxx /dev/disk/by-id/xxx
zpool add backupstorage cache /dev/disk/by-id/xxx

zfs set mountpoint=/mnt/pbs backupstorage
zfs set recordsize=1M backupstorage
zfs set compression=zstd backupstorage
zfs set atime=off backupstorage
zfs set xattr=sa backupstorage
zfs set acltype=posixacl backupstorage

Alternative: a smaller recordsize with small blocks routed to the special device. Note that with special_small_blocks >= recordsize every data block counts as "small" and lands on the special vdev, so size the NVMe mirror accordingly.

zfs set recordsize=128K backupstorage
zfs set special_small_blocks=128K backupstorage # alternatively 32K
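After pool creation it is worth double-checking the vdev layout and the active properties. These are read-only commands using the names from above:

```
# Show vdev layout incl. special/log/cache vdevs and per-vdev usage
zpool status backupstorage
zpool list -v backupstorage

# Confirm the dataset properties set above
zfs get recordsize,special_small_blocks,compression,atime,xattr,acltype backupstorage
```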

Benchmark

Sequential backup performance (fio write, 1M)

fio --name=seq-write --rw=write --bs=1M --numjobs=1 --iodepth=4 \
  --ioengine=libaio --direct=1 --size=10g --directory=/mnt/pbs
Storage          Throughput      Latency (avg)
1× HDD mirror    250–300 MB/s    4–6 ms
2× HDD mirror    450–550 MB/s    5–8 ms
Limiting factor: HDD sequential throughput

Sequential restore performance (fio read, 1M)

fio --name=seq-read --rw=read --bs=1M --numjobs=1 --iodepth=4 \
  --ioengine=libaio --direct=1 --size=10g --directory=/mnt/pbs
Storage          Throughput
1× HDD mirror    280–320 MB/s
2× HDD mirror    500–650 MB/s
With warm ARC    +10–20 %

Verify-like read (fio read, 128K, 4 jobs)

fio --name=verify-read --rw=read --bs=128K --numjobs=4 --iodepth=2 \
  --ioengine=libaio --direct=1 --size=10g --directory=/mnt/pbs
Scenario                 Throughput      IOPS
Without special device   80–150 MB/s     600–1,200
With NVMe special        250–400 MB/s    2,000–3,500
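The three ad-hoc commands above can also be kept as a single fio job file; directory, size, and ioengine are assumptions chosen to make the runs reproducible on the pool mountpoint:

```ini
# pbs-bench.fio - run with: fio pbs-bench.fio
[global]
directory=/mnt/pbs
size=10g
ioengine=libaio
direct=1
group_reporting=1

# Sequential backup performance
[seq-write]
rw=write
bs=1M
numjobs=1
iodepth=4
stonewall

# Sequential restore performance
[seq-read]
rw=read
bs=1M
numjobs=1
iodepth=4
stonewall

# Verify-like read
[verify-read]
rw=read
bs=128K
numjobs=4
iodepth=2
stonewall
```

The stonewall option runs the jobs one after another instead of in parallel, so each result matches the corresponding single-command run above.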