Friday, July 17, 2020

Installing OpenMediaVault 5 + OMV-Extras in a Proxmox LXC

This post is mainly based on the following article

[Update] While searching the web for how to install OMV-Extras, I discovered that OpenMediaVault doesn't need to be installed separately at all: after Step 5 you can jump straight to Step 10, and that one command installs OpenMediaVault + OMV-Extras together. Even experience shared online can have blind spots~
  1. List the available LXC system templates with the following command
    root@pve2:~# pveam available --section system
    system          alpine-3.10-default_20190626_amd64.tar.xz
    system          alpine-3.11-default_20200425_amd64.tar.xz
    system          archlinux-base_20200508-1_amd64.tar.gz
    system          centos-6-default_20191016_amd64.tar.xz
    system          centos-7-default_20190926_amd64.tar.xz
    system          centos-8-default_20191016_amd64.tar.xz
    system          debian-10.0-standard_10.0-1_amd64.tar.gz
    system          debian-8.0-standard_8.11-1_amd64.tar.gz
    system          debian-9.0-standard_9.7-1_amd64.tar.gz
    system          fedora-31-default_20191029_amd64.tar.xz
    system          fedora-32-default_20200430_amd64.tar.xz
    system          gentoo-current-default_20200310_amd64.tar.xz
    system          opensuse-15.1-default_20190719_amd64.tar.xz
    system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
    system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
    system          ubuntu-19.10-standard_19.10-1_amd64.tar.gz
    system          ubuntu-20.04-standard_20.04-1_amd64.tar.gz
  2. Download the Debian 10 template
    root@pve2:~# pveam download debian-10.0-standard_10.0-1_amd64.tar.gz
  3. Create a new Debian 10 LXC (1 core / 1024 MB RAM / 1024 MB swap), but don't start it yet; go to the pve shell and edit its config file
    root@pve2:~# nano /etc/pve/lxc/105.conf
    Add the following settings at the end of the file, then save and exit.
    lxc.apparmor.profile: unconfined
    lxc.mount.auto: cgroup:rw
    lxc.mount.auto: proc:rw
    lxc.mount.auto: sys:rw
    
  4. Start the newly created Debian 10 container and open its shell. First, point the apt sources at the Taiwan mirrors
    root@OMV:~# cat /etc/apt/sources.list
    deb http://ftp.tw.debian.org/debian buster main contrib
    
    deb http://ftp.tw.debian.org/debian buster-updates main contrib
    
    # security updates
    deb http://security.debian.org buster/updates main contrib
  5. Update the Debian 10 system
    root@OMV:~# apt update && apt dist-upgrade -y && reboot
  6. After the reboot, add the OMV package repository
    root@OMV:~# echo "deb http://packages.openmediavault.org/public usul main" \
    >> /etc/apt/sources.list.d/omv.list
  7. The gnupg package must be installed first
    root@OMV:~# apt install gnupg -y
  8. Install the PGP key for the OpenMediaVault 5 packages; the steps are as follows
    root@OMV:~# export LANG=C.UTF-8
    root@OMV:~# export DEBIAN_FRONTEND=noninteractive
    root@OMV:~# export APT_LISTCHANGES_FRONTEND=none
    root@OMV:~# wget -O \
    "/etc/apt/trusted.gpg.d/openmediavault-archive-keyring.asc" \
    https://packages.openmediavault.org/public/archive.key
    root@OMV:~# apt-key add \
    "/etc/apt/trusted.gpg.d/openmediavault-archive-keyring.asc" 
    root@OMV:~# apt update 
  9. Next, install OpenMediaVault 5 itself
    root@OMV:~# apt --yes --auto-remove --show-upgraded \
    --allow-downgrades --allow-change-held-packages \
    --no-install-recommends \
    --option Dpkg::Options::="--force-confdef" \
    --option DPkg::Options::="--force-confold" \
    install openmediavault-keyring openmediavault
    root@OMV:~# omv-confdbadm populate
  10. Install omv-extras
    root@OMV:~# wget -O - \
    https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install \
    | bash
  11. Done!
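The manual edit in Step 3 can also be scripted. A minimal sketch, assuming it runs on the PVE host; `append_lxc_overrides` is a helper name of my own, and the VMID 105 path mirrors the example above:

```shell
# Append the LXC overrides OMV needs to a container config file.
# (Hypothetical helper; settings are exactly the four lines from Step 3.)
append_lxc_overrides() {
  cat >> "$1" <<'EOF'
lxc.apparmor.profile: unconfined
lxc.mount.auto: cgroup:rw
lxc.mount.auto: proc:rw
lxc.mount.auto: sys:rw
EOF
}

# Example (on the PVE host): append_lxc_overrides /etc/pve/lxc/105.conf
```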

Thursday, July 16, 2020

Installing a macOS 10.15 (Catalina) VM on Proxmox

This guide mainly follows the article below.
The Proxmox VE version I'm using is 6.2.
  1. First, download the Catalina BaseSystem.dmg
    Use fetch-macOS.py, as described in the referenced article, to download Catalina's BaseSystem.dmg
  2. Upload BaseSystem.dmg to the pve server
    # scp BaseSystem.dmg root@pve2:/root
  3. Convert the dmg to an iso with the following commands and move it into the iso template directory
    root@pve2:~# qemu-img convert BaseSystem.dmg -O raw Catalina-installer.iso
    root@pve2:~# mv Catalina-installer.iso /var/lib/vz/template/iso/
  4. Download the latest OpenCore.iso.gz from here, decompress it into an iso, and upload it to pve
  5. This article explains how to find Catalina's OSK (no wonder foreign bloggers don't put it straight into their posts)
  6. Follow the referenced article step by step to create the VM, install Catalina, install OpenCore onto the EFI Disk, and set Catalina to a suitable screen resolution.
  7. Finally, turn on Screen Sharing in Catalina, so that I can control Catalina over the VNC protocol with Remmina on Ubuntu.
  8. Done!
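The convert-and-move in step 3 is easy to get wrong by hand, so a tiny helper can derive the target path. This is only a sketch: `iso_path` is my own name, the directory is PVE's default ISO store, and the post itself renames the file to Catalina-installer.iso instead:

```shell
# Derive a PVE ISO-store target path for a converted dmg (hypothetical helper).
iso_path() {
  printf '/var/lib/vz/template/iso/%s.iso\n' "$(basename "$1" .dmg)"
}

# On the PVE host you would then run:
#   qemu-img convert BaseSystem.dmg -O raw "$(iso_path BaseSystem.dmg)"
```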
 

Wednesday, July 15, 2020

Installing an Xpenology DSM 6.2 VM on Proxmox

This time I'm using Proxmox VE 6.2. Before installing, download the following files first.
  • Xpenology loader 1.03b for DS3617xs (link)
  • DSM 6.2.3 system file for DS3617xs (.pat)
Create a new VM for DSM
  1. General page: give the VM a name.
  2. OS page: select "Do not use any media"; Guest OS: Linux / 5.x-2.6 Kernel.
  3. System page: just change Machine to q35 (I personally prefer a newer chipset). Keep SeaBIOS; there is no need to switch to OVMF (UEFI), which saves you an extra EFI Disk.
  4. Hard Disk page: set Bus to SATA (the xpenology loader only properly supports a limited set of controllers). My pve uses ZFS, and I plan to let pve manage all the disks and system backups rather than use the "disk passthrough" setup commonly seen online, so the virtual disk lives on the zfs-local pool. For performance, set Cache to Write back (unsafe) and tick Discard.
  5. Display page: just use the defaults.
  6. CPU page: 1 Socket / 2 Cores.
  7. Memory page: 1024 / 1024, with Ballooning unchecked.
  8. Network page: choose E1000; leave everything else unchanged.
Before starting the DSM VM, a few more settings are needed
  1. Upload synoboot.img to pve
    # scp synoboot.img root@pve2:/var/lib/vz/images/{VMID}/
  2. Edit the VM config file directly from the pve shell
    root@pve2:~# nano /etc/pve/qemu-server/{VMID}.conf
  3. Add the following at the beginning of the config file
    args: -device ich9-usb-ehci1,id=usb,addr=0x18 -drive file=/var/lib/vz/images/{VMID}/synoboot.img,format=raw,if=none,id=drive-synoboot-usb -device usb-storage,drive=drive-synoboot-usb,id=synoboot-usb,bootindex=1
  4. Also add the line below, so the loader's boot log can be watched with Xterm.js.
    serial0: socket 
  5. Save and exit.
  6. Open the VM's Options page: set 'Boot Order' to Disk and 'Use tablet for pointer' to No.
Start the VM, find DSM's IP address, and then configure DSM from a browser. For the remaining installation details, see the article below.
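Because the `args:` line above is long and embeds the VMID in the image path, generating it can avoid typos. A sketch; `synoboot_args` is a name of my own, and the device IDs and path follow the config snippet above:

```shell
# Print the qemu `args:` line for a given VMID (hypothetical helper).
synoboot_args() {
  printf 'args: -device ich9-usb-ehci1,id=usb,addr=0x18 -drive file=/var/lib/vz/images/%s/synoboot.img,format=raw,if=none,id=drive-synoboot-usb -device usb-storage,drive=drive-synoboot-usb,id=synoboot-usb,bootindex=1\n' "$1"
}

# Example (on the PVE host): synoboot_args 106 >> /etc/pve/qemu-server/106.conf
# (then still add the `serial0: socket` line by hand)
```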

Reinstalling Ubuntu 20.04 on ZFS root

After using Ubuntu on a ZFS root for about a week, I've noticed that taking system snapshots on ZFS, unlike on Btrfs, doesn't send the CPU usage soaring and freeze practically every UI response; on Btrfs the whole stutter lasted around 30 seconds, which was genuinely hard to put up with. When I tried snapshotting every hour, each time the clock struck the hour it felt like being forced to get up and stretch my legs. It was nearly driving me mad...

This time the goal is to delete the Btrfs previously installed on partition 5 and turn a full half of the SSD into ZFS for Ubuntu. My apologies to Btrfs for now; I'll revisit it once it matures a bit more. After all, Btrfs plus TimeShift is quite well integrated on Ubuntu: installation and setup are simple, there's a decent GUI, and the experience comes close to Time Machine on the Mac. The one regret is that in Btrfs mode TimeShift can only keep snapshots on the local disk; to store them on an external drive you need a tool like btrbk, which has no GUI, and btrbk creates yet another set of snapshots, which feels redundant.

First, boot from an Ubuntu 20.04 installer disk and run Disks. The SSD partition layout before the reinstall looks like this:
The first step is to delete partitions 5, 6 and 7 with Disks, then add the needed ZFS partitions from the command line with sgdisk (because sgdisk can set the partition type code to BF01: Solaris). I've seen documentation suggesting that bpool should not be made too small, or /boot may run out of room once several kernel versions pile up, so this time bpool gets 2 GB and everything else goes to rpool.
ubuntu@ubuntu:~$ sudo sgdisk -p /dev/nvme0n1
Disk /dev/nvme0n1: 2000409264 sectors, 953.9 GiB
Model: INTEL SSDPEKKW010T7                     
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): C85DF9DB-279D-4D14-83DC-4D49489E105A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 2000409230
Partitions will be aligned on 2048-sector boundaries
Total free space is 1025223277 sectors (488.9 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  EF00  EFI system partition
   2         2099200         2361343   128.0 MiB   0C01  Microsoft reserved ...
   3         2361344       955197439   454.3 GiB   0700  Basic data partition
   4      1980418048      2000408575   9.5 GiB     2700  Basic data partition

ubuntu@ubuntu:~$ sudo sgdisk -n5:0:+2G -t5:BF01 /dev/nvme0n1
The operation has completed successfully.
ubuntu@ubuntu:~$ sudo sgdisk -n6:0:0 -t6:BF01 /dev/nvme0n1
The operation has completed successfully.
ubuntu@ubuntu:~$ sudo sgdisk -p /dev/nvme0n1
Disk /dev/nvme0n1: 2000409264 sectors, 953.9 GiB
Model: INTEL SSDPEKKW010T7
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): C85DF9DB-279D-4D14-83DC-4D49489E105A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 2000409230
Partitions will be aligned on 2048-sector boundaries
Total free space is 2669 sectors (1.3 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  EF00  EFI system partition
   2         2099200         2361343   128.0 MiB   0C01  Microsoft reserved ...
   3         2361344       955197439   454.3 GiB   0700  Basic data partition
   4      1980418048      2000408575   9.5 GiB     2700  Basic data partition
   5       955197440       959391743   2.0 GiB     BF01
   6       959391744      1980418047   486.9 GiB   BF01
Before this reinstall I had saved the system's ZFS snapshots to a large SD card, so now I only need to import that zpool and then copy the whole system back over.
ubuntu@ubuntu:~$ sudo zpool import -d /dev/mmcblk0 sd256gpool
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
sd256gpool   240G  85.3G   155G        -         -     0%    35%  1.00x    ONLINE  -

ubuntu@ubuntu:~$ ls /media/SD256GB/backup/
BOOT  HOME  ROOT

Creating the ZFS pools

Partition 5 used to be Btrfs and partition 6 belonged to another pool, so zpool create needs -f to force-overwrite them.
ubuntu@ubuntu:~$ sudo zpool create -f \
>     -o ashift=12 \
>     -O acltype=posixacl -O canmount=off -O compression=lz4 \
>     -O dnodesize=auto -O normalization=formD -O relatime=on \
>     -O xattr=sa -O mountpoint=/ -R /mnt \
>     rpool /dev/nvme0n1p6
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool       1.88G   444K  1.87G        -         -     0%     0%  1.00x    ONLINE  /mnt
sd256gpool   240G  63.3G   177G        -         -     0%    26%  1.00x    ONLINE  -

ubuntu@ubuntu:~$ sudo zpool create -f \
>     -o ashift=12 -d \
>     -o feature@async_destroy=enabled \
>     -o feature@bookmarks=enabled \
>     -o feature@embedded_data=enabled \
>     -o feature@empty_bpobj=enabled \
>     -o feature@enabled_txg=enabled \
>     -o feature@extensible_dataset=enabled \
>     -o feature@filesystem_limits=enabled \
>     -o feature@hole_birth=enabled \
>     -o feature@large_blocks=enabled \
>     -o feature@lz4_compress=enabled \
>     -o feature@spacemap_histogram=enabled \
>     -o feature@zpool_checkpoint=enabled \
>     -O acltype=posixacl -O canmount=off -O compression=lz4 \
>     -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
>     -O mountpoint=/boot -R /mnt \
>     bpool /dev/nvme0n1p5
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool       1.88G   444K  1.87G        -         -     0%     0%  1.00x    ONLINE  /mnt
rpool        484G   516K   484G        -         -     0%     0%  1.00x    ONLINE  /mnt
sd256gpool   240G  63.3G   177G        -         -     0%    26%  1.00x    ONLINE  -

Creating the ZFS datasets

ubuntu@ubuntu:~$ sudo zfs create -o canmount=on -o mountpoint=/ rpool/ROOT
ubuntu@ubuntu:~$ sudo zfs create -o canmount=on -o mountpoint=/home rpool/HOME
ubuntu@ubuntu:~$ sudo zfs create -o canmount=on -o mountpoint=/boot  bpool/BOOT

ubuntu@ubuntu:~$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
bpool                    516K  1.75G    96K  /mnt/boot
bpool/BOOT                96K  1.75G    96K  /mnt/boot
rpool                    704K   469G    96K  /mnt
rpool/HOME                96K   469G    96K  /mnt/home
rpool/ROOT               104K   469G   104K  /mnt
sd256gpool              63.3G   169G    24K  /media/SD256GB
sd256gpool/Pictures     38.2G   169G  38.2G  /media/SD256GB/Pictures
sd256gpool/backup       25.1G   169G    27K  /media/SD256GB/backup
sd256gpool/backup/BOOT   185M   169G   185M  /media/SD256GB/backup/BOOT
sd256gpool/backup/HOME  19.0G   169G  17.6G  /media/SD256GB/backup/HOME
sd256gpool/backup/ROOT  5.89G   169G  5.04G  /media/SD256GB/backup/ROOT

ubuntu@ubuntu:~$ mount
...
sd256gpool on /media/SD256GB type zfs (rw,xattr,noacl)
sd256gpool/Pictures on /media/SD256GB/Pictures type zfs (rw,xattr,noacl)
sd256gpool/backup on /media/SD256GB/backup type zfs (rw,xattr,noacl)
sd256gpool/backup/HOME on /media/SD256GB/backup/HOME type zfs (rw,xattr,noacl)
sd256gpool/backup/ROOT on /media/SD256GB/backup/ROOT type zfs (rw,xattr,noacl)
sd256gpool/backup/BOOT on /media/SD256GB/backup/BOOT type zfs (rw,xattr,noacl)
rpool/ROOT on /mnt type zfs (rw,relatime,xattr,posixacl)
rpool/HOME on /mnt/home type zfs (rw,relatime,xattr,posixacl)
bpool/BOOT on /mnt/boot type zfs (rw,nodev,relatime,xattr,posixacl)

Copying the complete system with rsync

Copying ROOT
ubuntu@ubuntu:~$ sudo rsync -av --info=progress2 --no-inc-recursive \
>    --human-readable /media/SD256GB/backup/ROOT/ /mnt
building file list ...
...
...
...
var/tmp/systemd-private-96661d6909d9419ba0c1e66f7193125f-upower.service-wvp5Ng/
var/tmp/systemd-private-96661d6909d9419ba0c1e66f7193125f-upower.service-wvp5Ng/tmp/
          8.45G  99%   44.37MB/s    0:03:01 (xfr#161021, to-chk=0/216830)
sent 8.47G bytes  received 3.23M bytes  46.15M bytes/sec
total size is 8.45G  speedup is 1.00
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool       1.88G   624K  1.87G        -         -     0%     0%  1.00x    ONLINE  /mnt
rpool        484G  5.54G   478G        -         -     0%     1%  1.00x    ONLINE  /mnt
sd256gpool   240G  63.3G   177G        -         -     0%    26%  1.00x    ONLINE  -
ubuntu@ubuntu:~$ zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  nvme0n1p6 ONLINE       0     0     0

errors: No known data errors

Copying BOOT
ubuntu@ubuntu:~$ sudo rsync -av --info=progress2 --no-inc-recursive \
>    --human-readable /media/SD256GB/backup/BOOT/ /mnt/boot
building file list ... done
./
System.map-5.4.0-31-generic
          4.74M   2%  106.81MB/s    0:00:00 (xfr#1, to-chk=300/302)
...
...
grub/x86_64-efi/zstd.mod
        207.41M  99%   39.84MB/s    0:00:04 (xfr#293, to-chk=0/302)
sent 207.48M bytes  received 5.61K bytes  37.73M bytes/sec
total size is 207.41M  speedup is 1.00
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool       1.88G   187M  1.69G        -         -     0%     9%  1.00x    ONLINE  /mnt
rpool        484G  5.54G   478G        -         -     0%     1%  1.00x    ONLINE  /mnt
sd256gpool   240G  63.3G   177G        -         -     0%    26%  1.00x    ONLINE  -
ubuntu@ubuntu:~$ zpool status bpool
  pool: bpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	bpool       ONLINE       0     0     0
	  nvme0n1p5 ONLINE       0     0     0

errors: No known data errors

Copying HOME
ubuntu@ubuntu:~$ sudo rsync -av --info=progress2 --no-inc-recursive \
>    --human-readable /media/SD256GB/backup/HOME/ /mnt/home
building file list ...
...
...
...
rick/toolchains/gcc-arm-8.3-2019.03-x86_64-arm-eabi/share/man/man7/gpl.7
         27.77G  99%   23.46MB/s    0:18:48 (xfr#337485, to-chk=0/359204)
sent 27.80G bytes  received 6.48M bytes  23.58M bytes/sec
total size is 27.77G  speedup is 1.00
ubuntu@ubuntu:~$ zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool       1.88G   187M  1.69G        -         -     0%     9%  1.00x    ONLINE  /mnt
rpool        484G  24.3G   460G        -         -     0%     5%  1.00x    ONLINE  /mnt
sd256gpool   240G  63.4G   177G        -         -     0%    26%  1.00x    ONLINE  -
ubuntu@ubuntu:~$ zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  nvme0n1p6 ONLINE       0     0     0

errors: No known data errors
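After the three rsync passes, a recursive diff of each backup dataset against its target should come back empty. Here is a sketch on throwaway directories (`verify_copy` is my own helper; the real check would compare e.g. /media/SD256GB/backup/ROOT/ against /mnt):

```shell
# verify_copy <src> <dst>: succeeds only if the two trees are identical
# (hypothetical helper; demonstrated on temp dirs, not the real pools).
verify_copy() { diff -r "$1" "$2" > /dev/null; }

src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/f"
cp "$src/f" "$dst/f"
verify_copy "$src" "$dst" && echo "trees match"
```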

Installing GRUB

Preparing the chroot
ubuntu@ubuntu:~$ sudo su
root@ubuntu:/home/ubuntu# for d in proc sys dev; do mount --rbind /$d /mnt/$d; done
root@ubuntu:/home/ubuntu# chroot /mnt
root@ubuntu:/# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/nvme0n1p5 during installation
#UUID=123f16e7-43b5-412d-8872-ee464f237c12 /     btrfs   defaults,subvol=@ 0 1
# /boot/efi was on /dev/nvme0n1p1 during installation
UUID=0E18-3C17  /boot/efi       vfat    umask=0077      0       1
# /home was on /dev/nvme0n1p5 during installation
#UUID=123f16e7-43b5-412d-8872-ee464f237c12 /home btrfs   defaults,compress=lzo,subvol=@home 0 2
#/swapfile      none            swap    sw              0       0
/dev/disk/by-id/wwn-0x5000cca77ff2de2b /media/brtbkhd btrfs noatime,nodiratime,compress=lzo,discard,space_cache,nosuid,nodev,nofail,noauto,x-gvfs-show 0 0
/dev/disk/by-id/usb-PNY_USB_3.0_FD_7FF55046ABD74833890F0B63E0BFD5-0:0-part1 /media/PNY-USB30 btrfs noatime,nodiratime,compress=lzo,discard,space_cache,nosuid,nodev,nofail,noauto,x-gvfs-show 0 0
/dev/disk/by-id/usb-PNY_USB_3.0_FD_7FF55046ABD74833890F0B63E0BFD5-0:0 /media/PNY-USB30 btrfs noatime,nodiratime,compress=lzo,discard,space_cache,nosuid,nodev,nofail,noauto,x-gvfs-show 0 0
root@ubuntu:/# ls /boot/efi/
root@ubuntu:/# mount /boot/efi
root@ubuntu:/# ls -l /boot/efi
total 12
drwx------ 5 root root 4096 Oct 16  2019 EFI
drwx------ 3 root root 4096 Jan 16 11:57 McAfee
-rwx------ 1 root root   68 Oct 15  2019 _SMSTSVolumeID.7159644d-f741-45d5-ab29-0ad8aa4771ca
Check the grub-probe result
root@ubuntu:/# grub-probe /boot
zfs
Update the initrd
root@ubuntu:/# update-initramfs -c -k all
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LANG = "zh_TW.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
update-initramfs: Generating /boot/initrd.img-5.4.0-31-generic
update-initramfs: Generating /boot/initrd.img-5.4.0-33-generic
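The perl locale warnings above are harmless, but they can be silenced for the rest of the chroot session by pinning a locale that always exists (a sketch; C.UTF-8 ships with the glibc locales on Debian/Ubuntu):

```shell
# Pin a known-good locale so perl-based tools stop warning inside the chroot.
export LC_ALL=C.UTF-8
```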
Make sure GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub contains init_on_alloc=0

root@ubuntu:/# cat /etc/default/grub 
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init_on_alloc=0"
GRUB_CMDLINE_LINUX=""

Update grub
root@ubuntu:/# update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
cannot open 'bpool/BOOT/ROOT': dataset does not exist
Found linux image: vmlinuz-5.4.0-31-generic in rpool/ROOT
Found initrd image: initrd.img-5.4.0-31-generic in rpool/ROOT
Found linux image: vmlinuz-5.4.0-33-generic in rpool/ROOT
Found initrd image: initrd.img-5.4.0-33-generic in rpool/ROOT
device-mapper: reload ioctl on osprober-linux-nvme0n1p6  failed: Device or resource busy
Command failed.
grub-probe: error: cannot find a GRUB drive for /dev/sda1.  Check your device.map.
Adding boot menu entry for UEFI Firmware Settings
done

root@ubuntu:/# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
>     --bootloader-id=ubuntu --recheck --no-floppy
Installing for x86_64-efi platform.
Installation finished.
No error reported.

Setting the ZFS pools to mount automatically at boot

First, make sure the /etc/zfs/zfs-list.cache directory contains a file for every pool that should be auto-mounted, and that none of them is empty.

root@ubuntu:/# cat /etc/zfs/zfs-list.cache/bpool 
bpool	/boot	off	on	on	off	on	off	on	off	-	none
bpool/BOOT	/boot	on	on	on	off	on	off	on	off	-	none
root@ubuntu:/# cat /etc/zfs/zfs-list.cache/rpool 
rpool	/	off	on	on	on	on	off	on	off	-	none
rpool/HOME	/home	on	on	on	on	on	off	on	off	-	none
rpool/ROOT	/	on	on	on	on	on	off	on	off	-	none
root@ubuntu:/# cat /etc/zfs/zfs-list.cache/sd256gpool 
sd256gpool	/media/SD256GB	on	on	off	on	on	off	on	off	-	none
sd256gpool/Pictures	/media/SD256GB/Pictures	on	on	off	on	on	off	on	off	-	none
sd256gpool/backup	/media/SD256GB/backup	on	on	off	on	on	off	on	off	-	none
sd256gpool/backup/BOOT	/media/SD256GB/backup/BOOT	on	on	off	on	on	off	on	off	-	none
sd256gpool/backup/HOME	/media/SD256GB/backup/HOME	on	on	off	on	on	off	on	off	-	none
sd256gpool/backup/ROOT	/media/SD256GB/backup/ROOT	on	on	off	on	on	off	on	off	-	none
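A quick way to confirm that precondition is a loop over the expected pool names. A sketch: `check_cache` is my own helper, and the default directory is the one zfs-mount-generator reads:

```shell
# check_cache "<pools>" [dir]: fail if any pool's cache file is missing or empty.
check_cache() {
  dir="${2:-/etc/zfs/zfs-list.cache}"
  for p in $1; do
    [ -s "$dir/$p" ] || { echo "missing or empty: $dir/$p" >&2; return 1; }
  done
}

# Example (in the chroot): check_cache "bpool rpool sd256gpool"
```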
Now we can leave the chroot environment; the last step is to unmount the zpools safely.
root@ubuntu:/# exit
exit
root@ubuntu:/home/ubuntu# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
root@ubuntu:/home/ubuntu# zpool export -a
root@ubuntu:/home/ubuntu# 
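The unmount one-liner above works by taking `mount` output, dropping the ZFS lines (those are handled by `zpool export -a`), reversing the list with `tac` so deeper mounts come first, and lazily unmounting everything under /mnt. Its filter stage can be demonstrated on canned `mount` output (a sketch; the sample lines are illustrative):

```shell
# Reproduce the filter stage of the one-liner on sample input:
# keep non-ZFS mounts under /mnt, deepest first.
deepest_first() { grep -v zfs | tac | awk '/\/mnt/ {print $3}'; }

deepest_first <<'EOF'
proc on /mnt/proc type proc (rw)
sysfs on /mnt/sys type sysfs (rw)
udev on /mnt/dev type devtmpfs (rw)
devpts on /mnt/dev/pts type devpts (rw)
EOF
```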

All done! If everything went well, after rebooting you should see the grub menu and then boot into the Ubuntu desktop~