Install Arch Linux with Fake RAID

From the Arch Linux Chinese Wiki


The aim of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller, so that GRUB can boot both the Linux and Windows partitions residing on the array. When using so-called "fake RAID" or "host RAID", the device nodes appear as /dev/mapper/chipsetName_randomName instead of /dev/sdX.

What is "fake RAID"

From Wikipedia:

Operating system-based RAID does not always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID is expensive and proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with proprietary firmware and drivers. During early boot, the RAID is implemented by the firmware; once a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over the RAID handling.
These controllers are described by their manufacturers as RAID controllers, but it is rarely made clear to purchasers that the burden of RAID processing is borne by the host CPU rather than the RAID controller itself, an overhead that true hardware RAID does not incur. Firmware controllers can often only use certain types of hard drives (for example, SATA for Intel Matrix RAID; modern Intel ICH southbridges support neither PATA nor SCSI, though motherboard makers implement RAID controllers outside the southbridge on some boards). Because "RAID controller" had previously implied that the controller itself did the processing, this new type came to be called "fake RAID" by the technically inclined, even though the RAID itself is implemented correctly. Adaptec calls them "host RAID". wikipedia:RAID

See Wikipedia:RAID or FakeRaidHowto @ Community Ubuntu Documentation for more information.

Terminology aside, the "fake RAID" software RAID provided via dmraid is robust: it delivers a solid system for mirroring or striping data across multiple disks with negligible overhead. Compared with mdraid (pure Linux software RAID), dmraid offers the advantage that, after a failure, a drive can be completely rebuilt before the system is rebooted.

History

In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). In Linux 2.6, the device-mapper framework can, among other things such as LVM and EVMS, do the same work as ATARAID did in 2.4. While the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using device-mapper for RAID, detection would happen in userspace.

Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The supported hardware is fake RAID IDE/SATA controllers with BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.

Backup

Warning: Back up your data before using RAID. Striped RAID is very sensitive to drive failures. Consider regular backups or a mirrored RAID set instead.

Outline

  • Preparation
  • Boot the installer
  • Load dmraid
  • Perform a traditional installation
  • Install GRUB

Preparation

  • Open the required guides (e.g. the Installation guide) on another machine. If no other machine is available, print them out.
  • Download the latest Arch Linux install image.
  • Back up all important files, since everything on the target partitions will be destroyed.

Configure RAID sets

Warning: If your drives are not already configured as RAID and Windows is already installed, switching to "RAID" may cause a BSOD when Windows boots. [1][dead link 2021-05-13]
  • Enter the BIOS setup and enable the RAID controller.
    • The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; make sure "RAID" is selected.
  • Save and exit the BIOS setup. During boot, enter the RAID setup utility.
    • The RAID setup utility is usually reached via the boot menu (often F8, F10 or Ctrl+I) or from the RAID controller's prompt during boot.
  • Use the RAID setup utility to create your preferred striped or mirrored sets.
Tip: See your motherboard documentation for details. The exact procedure may vary by model.

Boot the installer

See Installation guide#Pre-installation for details.

Tip: If your monitor supports it, consider adding vga=795 to the boot options for a higher framebuffer resolution.

Load dmraid

Load device-mapper and look for RAID sets:

# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/

Example output:

/dev/mapper/control            <- created by device-mapper; if it exists, device-mapper is working
/dev/mapper/sil_aiageicechah   <- a RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- the first partition of this RAID set

If there is only one file (/dev/mapper/control), use lsmod to check whether the module for your controller chipset is loaded. If it is loaded, then either dmraid does not support this chipset or the system contains no RAID sets (double-check the RAID setup in the BIOS). If both are correct, you may only be able to use software RAID (which means you cannot use a dual-booted RAID system).

If your chipset module is not loaded, load it, e.g.:

# modprobe sata_sil

See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers.

To test the RAID sets:

# dmraid -tay

Perform a traditional installation

Switch to tty2 and start the installer:

# /arch/setup

Partition the RAID set

  • Under Prepare Hard Drive, choose Manually partition hard drives, since the Auto-prepare option will not find your RAID set.
  • Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch to tty1 to check your spelling.
  • Partition as usual.
Tip: If you plan to dual-boot, this is the time to install the other operating system. For example, to install Windows XP to drive "C:", change the type of every partition preceding the intended Windows partition to [1B] (hidden FAT32) so they stay hidden during the Windows installation. When the installation is finished, change them back to type [83] (Linux). A reboot is of course required for each of these steps.
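The type change described in the tip can be done with fdisk from tty1. A hypothetical session might look roughly like the following (the device name, partition number, and prompt wording are illustrative, not exact):

```
# fdisk /dev/mapper/sil_aiageicechah
Command (m for help): t
Partition number: 1
Hex code (type L to list codes): 1b
Command (m for help): w
```

Repeat with hex code 83 after the Windows installation to restore the partition type.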

Mount the filesystems

If -- and this is likely the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:

  • Switch to tty1.
  • Remove all device-mapper nodes:
# dmsetup remove_all
  • Re-activate the newly created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
  • Switch back to tty2 and re-enter the Manually configure block devices, filesystems and mountpoints menu; the partitions should now be available.

Install and configure Arch Linux

Tip: Use three terminals: one with the installer to configure the system, one with a chroot to install GRUB, and one with cfdisk as a reference, since the RAID device names are long and unusual.
  • tty1: chroot and install GRUB
  • tty2: /arch/setup
  • tty3: cfdisk as a reference while partitioning
Leave the programs running and switch between terminals as needed.

Switch back to the installer (tty2) and continue:

  • Select Packages
    • Make sure dmraid is marked for installation
  • Configure System
    • Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, also add dm_mirror
    • If needed, also add your chipset driver module chipset_module_driver to the MODULES line
    • Add dmraid to the HOOKS line in mkinitcpio.conf, preferably after sata but before filesystems
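After these edits, the relevant lines of mkinitcpio.conf might look like the following sketch (sata_sil stands in for whatever chipset module your controller needs, and the exact set of other hooks will vary with the installer version):

```
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect sata dmraid filesystems"
```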

Install GRUB

Warning: You can normally specify default saved instead of a number in menu.lst so that the default entry is the entry saved with the command savedefault. If you are using dmraid do not use savedefault or your array will de-sync and will not let you boot your system.

Please read GRUB for more information about configuring GRUB. Installation is begun by selecting Install Bootloader from the Arch installer.

Note: For an unknown reason, the default menu.lst will likely be incorrectly populated when installing via fake RAID. Double-check the root lines (e.g. root (hd0,0)). Additionally, if you did not create a separate /boot partition, ensure the kernel/initrd paths are correct (e.g. /boot/vmlinuz-linux and /boot/initramfs-linux.img instead of /vmlinuz-linux and /initramfs-linux.img).

For example, if you created logical partitions (creating the equivalent of sda5, sda6, sda7, etc.) that were mapped as:

  /dev/mapper     |    Linux    GRUB Partition
                  |  Partition      Number
nvidia_fffadgic   |
nvidia_fffadgic5  |    /              4
nvidia_fffadgic6  |    /boot          5
nvidia_fffadgic7  |    /home          6

The correct root designation would be (hd0,5) in this example.
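With that mapping, a corrected menu.lst entry might look like the following sketch (the kernel and initramfs file names are illustrative, and since /boot is its own partition here, the paths are relative to it):

```
title  Arch Linux
root   (hd0,5)
kernel /vmlinuz-linux root=/dev/mapper/nvidia_fffadgic5 ro
initrd /initramfs-linux.img
```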

Note: If you use more than one set of dmraid arrays, or have multiple Linux distributions installed on different dmraid arrays (for example, 2 disks in nvidia_fdaacfde and 2 disks in nvidia_fffadgic, with the installation going to the second array, nvidia_fffadgic), you will need to designate the second array's /boot partition as the GRUB root. In the example above, if nvidia_fffadgic were the second dmraid array you were installing to, your root designation would be root (hd1,5).

After saving the configuration file, the GRUB installer will FAIL. However it will still copy files to /boot. DO NOT GIVE UP AND REBOOT -- just follow the directions below:

  • Switch to tty1 and chroot into the installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
  • Switch to tty3 and look up the geometry of the RAID set. In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk providing your raid set as the first argument. (i.e. cfdisk /dev/mapper/nvidia_fffadgic):
    • The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order.
Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
  • GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB:
    • Switch to tty1, the chrooted environment.
    • Install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null

grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S

Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk).
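As a sanity check on the geometry, cylinders × heads × sectors × 512 bytes should roughly equal the total size of the array. A quick check of the first example above (two 74 GB Raptors striped, so about 148 GB total):

```shell
# Geometry sanity check for the 18079/255/63 example above:
# cylinders * heads * sectors * 512 bytes approximates the array size.
C=18079; H=255; S=63
echo $(( C * H * S * 512 / 1000000000 ))  # size in decimal GB; prints 148
```

148 GB matches two striped 74 GB drives, so the geometry is plausible.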

If geometry is entered properly, GRUB will list partitions found on this RAID set. You can confirm that grub is using the correct geometry and verify the proper grub root device to boot from by using the grub find command. If you have created a separate boot partition, then search for /grub/stage1 with find. If you have no separate boot partition, then search /boot/grub/stage1 with find. Examples:

grub> find /grub/stage1       # use when you have a separate boot partition
grub> find /boot/grub/stage1  # use when you have no separate boot partition

GRUB will report the proper device to designate as the grub root (i.e. (hd0,0), (hd0,4), etc.). Then continue to install the bootloader into the Master Boot Record, changing "hd0" to "hd1" if required.

grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Note: With dmraid >= 1.0.0.rc15-8, partitions are labeled "raidSetp1", "raidSetp2", etc. instead of "raidSet1", "raidSet2", etc. If the setup command fails with "error 22: No such partition", temporary symlinks must be created.[2]

The problem is that GRUB still uses an older detection algorithm, and is looking for /dev/mapper/raidSet1 instead of /dev/mapper/raidSetp1.

The solution is to create a symlink from /dev/mapper/raidSetp1 to /dev/mapper/raidSet1 (changing the partition number as needed). The simplest way to accomplish this is to:
# cd /dev/mapper
# for i in raidSetp*; do ln -s "$i" "${i/p/}"; done
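The `${i/p/}` expansion used above simply drops the first `p` found in each name. A dry run on sample names (illustrative strings only, no device nodes touched) shows the effect:

```shell
# Dry run of the substitution used above: print old -> new name pairs.
for i in raidSetp1 raidSetp2 raidSetp3; do
    echo "$i -> ${i/p/}"
done
```

Note that this removes the first `p` anywhere in the name, so it only behaves as intended when the RAID set name itself contains no `p`.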

Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the /boot/grub/device.map file to help GRUB retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:

(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic

And now you are finished with the installation!

# reboot

Troubleshooting

Booting with degraded array

One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility.

Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:

  1. Edit the kernel line from the GRUB menu
    1. Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1)
    2. Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
  2. Boot the system
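For a mirrored array whose first member is /dev/sda, the edited kernel line in the GRUB menu might look like the following sketch (the kernel file name and partition are illustrative):

```
kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid
```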