NFS
From Wikipedia: Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. It allows a user on a client computer to access files over a network in the same way as local storage.
Installation
Both the client and the server only require the installation of the nfs-utils package.
It is strongly recommended to use a time synchronization daemon to keep the clocks of the client and the server in sync. Without accurate clocks on all nodes, NFS can introduce unwanted delays.
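For example, to enable clock synchronization via systemd-timesyncd (one option among several time synchronization daemons):
# timedatectl set-ntp true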
Configuration
Server
Global configuration options are set in /etc/nfs.conf. Users of simple setups should not need to edit this file.
The NFS server requires a list of "exports" to share, defined in /etc/exports or /etc/exports.d/*.exports (see exports(5) for details). These shares are relative to the so-called NFS root. For security reasons, it is recommended to define a discrete directory as the NFS root, which keeps users limited to that mount point. Bind mounts are used to link the share mount points to the actual directories elsewhere on the filesystem.
Consider the following example, in which:
- the NFS root is /srv/nfs,
- the share to be exported is /srv/nfs/music, a bind mount pointing to its actual location, /mnt/music.
# mkdir -p /srv/nfs/music /mnt/music
# mount --bind /mnt/music /srv/nfs/music
To make the bind mount persistent across server reboots, add it to fstab:
/etc/fstab
/mnt/music /srv/nfs/music none bind 0 0
Add the directories to be shared to /etc/exports, restricting them to a range of addresses via a CIDR block or to the hostnames of client machines that will be allowed to mount them, e.g.:
/etc/exports
/srv/nfs 192.168.1.0/24(rw,sync,crossmnt,fsid=0)
/srv/nfs/music 192.168.1.0/24(rw,sync)
/srv/nfs/home 192.168.1.0/24(rw,sync,nohide)
/srv/nfs/public 192.168.1.0/24(ro,all_squash,insecure) desktop(rw,sync,all_squash,anonuid=99,anongid=99) # map guests to a specific user/group - in this case nobody
Note:
- The NFS root is designated by the entry marked with the fsid=0 option; all other exported directories must be located below it. The rootdir option in /etc/nfs.conf has no effect in this case.
- The crossmnt option makes it possible for clients to access all filesystems mounted on a filesystem marked with crossmnt, so clients do not need to mount every child export separately. Note that you may not want to use this option if a child export is also shared with a different range of addresses.
- Instead of crossmnt, you can use the nohide option on child exports, so that they are automatically mounted when the root export is mounted. Unlike crossmnt, nohide still respects the address ranges of the child exports.
- The insecure option allows clients to connect from ports above 1023. (Presumably only root can use low-numbered ports, so blocking the other ports can serve as a crude access control. In practice, using the insecure option or not makes no difference to security.)
- Use an asterisk (*) to allow access from any interface.
If you modify /etc/exports while the server is running, you will need to re-export the shares for the changes to take effect:
# exportfs -arv
To view the detailed state of the currently loaded shares, use:
# exportfs -v
For details about all available options, see exports(5).
Note: If the target export is a tmpfs filesystem, the fsid=1 option is required.
Starting the server
Start and enable nfs-server.service.
Note: Users of protocol version 4 exports will probably want to mask at a minimum both rpcbind.service and rpcbind.socket to prevent superfluous services from running. See FS#76453. Additionally, consider masking nfs-server.service, which is pulled in for some reason as well.
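A minimal sketch of the commands involved, assuming an NFSv4-only server as discussed above:
# systemctl enable --now nfs-server.service
# systemctl mask rpcbind.service rpcbind.socket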
Restricting NFS to allow access only from specific interfaces/IP addresses
By default, starting nfs-server.service will listen for connections on all network interfaces, ignoring the contents of /etc/exports. This can be changed by defining which IPs and/or hostnames to listen on.
/etc/nfs.conf
[nfsd]
host=192.168.1.123
# Alternatively, use the hostname.
# host=myhostname
Restart nfs-server.service to apply the changes.
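For example:
# systemctl restart nfs-server.service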
Firewall configuration
To enable access through a firewall, TCP and UDP ports 111, 2049, and 20048 may need to be opened when using the default configuration; use rpcinfo -p to examine the exact ports in use on the server:
$ rpcinfo -p | grep nfs
100003    3   tcp   2049  nfs
100003    4   tcp   2049  nfs
100227    3   tcp   2049  nfs_acl
When using NFSv4, make sure TCP port 2049 is open. No other port opening should be required:
/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
When using an older NFS version, make sure other ports are open:
# iptables -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 20048 -j ACCEPT
To have this configuration load on every system start, edit /etc/iptables/iptables.rules to include the following lines:
/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20048 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 20048 -j ACCEPT
The previous commands can be saved by executing:
# iptables-save > /etc/iptables/iptables.rules
If using NFSv3 and the above listed static ports for rpc.statd and lockd, the following ports may also need to be added to the configuration:
/etc/iptables/iptables.rules
-A INPUT -p tcp -m tcp --dport 32765 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 32765 -j ACCEPT
-A INPUT -p udp -m udp --dport 32803 -j ACCEPT
To apply the changes, restart iptables.service.
Enabling NFSv4 idmapping
Note:
- NFSv4 idmapping does not work with the default sec=sys mount option [https://web.archive.org/web/20220602190451/https://dfusion.com.au/wiki/tiki-index.php?page=Why+NFSv4+UID+mapping+breaks+with+AUTH_UNIX].
- NFSv4 idmapping needs to be enabled on both the client and the server.
- Another option is to make sure the user and group IDs (UID and GID) match on both the client and the server.
- Enabling/starting nfs-idmapd.service should not be needed, as it has been replaced by a new id mapper:
# dmesg | grep id_resolver
[ 3238.356001] NFS: Registering the id_resolver key type
[ 3238.356009] Key type id_resolver registered
The NFSv4 protocol represents the local system's UID and GID values on the wire as strings of the form user@domain. The process of translating between UID values and strings is referred to as ID mapping; see nfsidmap(8) for details.
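To check which NFSv4 domain is currently in effect on a host, recent versions of nfs-utils can print it (assuming nfsidmap supports the -d flag):
$ nfsidmap -d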
Even though idmapd may be running, it may not be fully enabled. If /sys/module/nfs/parameters/nfs4_disable_idmapping or /sys/module/nfsd/parameters/nfs4_disable_idmapping returns Y on the client or server, enable idmapping with:
Note: The nfs4 and nfsd kernel modules need to be loaded (respectively) for the following paths to be available.
On the client:
# echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
On the server:
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
To make this change permanent, set it as a kernel module parameter, i.e.:
/etc/modprobe.d/nfsd.conf
options nfs nfs4_disable_idmapping=0
options nfsd nfs4_disable_idmapping=0
To fully use idmapping, make sure the domain is configured in /etc/idmapd.conf on both the server and the client:
/etc/idmapd.conf
# The following should be set to the local NFSv4 domain name
# The default is the host's DNS domain name.
Domain = domain.tld
See [1] for details.
Client
Users intending to use NFSv4 with Kerberos need to start and enable nfs-client.target.
Manual mounting
For NFSv3, use this command to show the server's exported file systems:
$ showmount -e servername
For NFSv4, mount the root NFS directory and look around for the available mounts:
# mount servername:/ /mountpoint/on/client
Then mount a share, omitting the server's NFS export root:
# mount -t nfs -o vers=4 servername:/music /mountpoint/on/client
If the mount fails, try including the server's export root (required for Debian/RHEL/SLES; some distributions need -t nfs4 instead of -t nfs):
# mount -t nfs -o vers=4 servername:/srv/nfs/music /mountpoint/on/client
Note: servername has to be replaced with a valid hostname (not just an IP address); otherwise, mounting of the remote share will hang.
Mount using /etc/fstab
Using fstab is useful for a server which is always on, so that the NFS shares are available whenever the client boots up. Edit the /etc/fstab file, and add an appropriate line reflecting the setup. Again, the server's NFS export root is omitted.
/etc/fstab
servername:/music /mountpoint/on/client nfs defaults,timeo=900,retrans=5,_netdev 0 0
Some additional mount options to consider:
- rsize and wsize
- The rsize value is the number of bytes used when reading from the server. The wsize value is the number of bytes used when writing to the server. By default, if these options are not specified, the client and server negotiate the largest values they can both support (see nfs(5) for details). After changing these values, it is recommended to test the performance (see #Performance tuning).
- soft or hard
- Determines the recovery behaviour of the NFS client after an NFS request times out. If neither option is specified (or if the hard option is specified), NFS requests are retried indefinitely. If the soft option is specified, then the NFS client fails an NFS request after retrans retransmissions have been sent, causing the NFS client to return an error to the calling application.
Warning: A soft timeout can cause silent data corruption in certain cases. As such, use the soft option only when client responsiveness is more important than data integrity. Using NFS over TCP or increasing the value of the retrans option may mitigate some of the risks of using the soft option.
- timeo
- The timeo value is the amount of time, in tenths of a second, to wait before resending a transmission after an RPC timeout. The default value for NFS over TCP is 600 (60 seconds). After the first timeout, the timeout value is doubled for each retry for a maximum of 60 seconds or until a major timeout occurs. If connecting to a slow server or over a busy network, better stability can be achieved by increasing this timeout value.
- retrans
- The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. The NFS client generates a "server not responding" message after retrans retries, then attempts further recovery (depending on whether the hard mount option is in effect).
- _netdev
- The _netdev option tells the system to wait until the network is up before trying to mount the share - systemd assumes this for NFS.
Note: Setting the sixth fstab field (fs_passno) to a nonzero value may lead to unexpected behaviour, e.g. hangs when the systemd automount waits for a check which will never happen.
Mount using /etc/fstab with systemd
Another method is using the x-systemd.automount option which mounts the filesystem upon access:
/etc/fstab
servername:/home /mountpoint/on/client nfs _netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
To make systemd aware of the changes to fstab, reload systemd and restart remote-fs.target [2].
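For example:
# systemctl daemon-reload
# systemctl restart remote-fs.target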
- The noauto mount option will not mount the NFS share until it is accessed: use auto for it to be available immediately. If experiencing any issues with the mount failing due to the network not being up/available, enable NetworkManager-wait-online.service. It will ensure that network.target has all the links available prior to being active.
- The users mount option would allow user mounts, but be aware that it implies further options, such as noexec.
- The x-systemd.idle-timeout=1min option will unmount the NFS share automatically after 1 minute of non-use. Good for laptops which might suddenly disconnect from the network.
- If shutdown/reboot holds too long because of NFS, enable NetworkManager-wait-online.service to ensure that NetworkManager is not exited before the NFS volumes are unmounted.
- Do not add the x-systemd.requires=network-online.target mount option as this can lead to ordering cycles within systemd [3]. systemd adds the network-online.target dependency to the unit for _netdev mounts automatically.
- Using the nocto option may improve performance for read-only mounts, but should be used only if the data on the server changes only occasionally.
As systemd unit
Create a new .mount file inside /etc/systemd/system, e.g. mnt-home.mount. See systemd.mount(5) for details.
Note: Make sure the unit name corresponds to the mount point: the unit name mnt-home.mount can only be used if the share is going to be mounted under /mnt/home; otherwise the following error might occur: systemd[1]: mnt-myshare.mount: Where= setting does not match unit name. Refusing. If the mount point contains non-ASCII characters, use systemd-escape.
What= the path to the share
Where= the path where the share should be mounted
Options= share mounting options
Note:
- Network mount units automatically acquire After dependencies on remote-fs-pre.target, network.target and network-online.target, and gain a Before dependency on remote-fs.target, unless the nofail mount option is set. In the latter case, a Wants unit is added as well.
- Append noauto to Options to prevent the share from being mounted automatically during boot (unless it is pulled in by some other unit).
- If you want to use a hostname for the server you want to share (instead of an IP address), add nss-lookup.target to After. This might avoid mount errors at boot time that do not arise when testing the unit.
/etc/systemd/system/mnt-home.mount
[Unit]
Description=Mount home at boot

[Mount]
What=172.16.24.192:/home
Where=/mnt/home
Options=vers=4
Type=nfs
TimeoutSec=30

[Install]
WantedBy=multi-user.target
Tip: In case of an unreachable system, append ForceUnmount=true to [Mount], allowing the export to be (force-)unmounted.
To use mnt-home.mount, start the unit and enable it to run on system boot.
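For example:
# systemctl enable --now mnt-home.mount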
Automount
To automatically mount a share, one may use the following automount unit:
/etc/systemd/system/mnt-home.automount
[Unit]
Description=Automount home

[Automount]
Where=/mnt/home

[Install]
WantedBy=multi-user.target
Disable/stop the mnt-home.mount unit, and enable/start mnt-home.automount to automount the share when the mount path is being accessed.
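For example:
# systemctl disable --now mnt-home.mount
# systemctl enable --now mnt-home.automount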
Mount using autofs
Using autofs is useful when multiple machines want to connect via NFS; they could both be clients as well as servers. The reason this method is preferable over the earlier one is that if the server is switched off, the client will not throw errors about being unable to find NFS shares. See autofs#NFS network mounts for details.
Tips and tricks
Performance tuning
When using NFS on a network with a significant number of clients, one may increase the default NFS threads from 8 to 16 or even higher, depending on the server/network requirements:
/etc/nfs.conf
[nfsd]
threads=16
It may be necessary to tune the rsize and wsize mount options to meet the requirements of the network configuration.
In recent linux kernels (>2.6.18) the size of I/O operations allowed by the NFS server (default max block size) varies depending on RAM size, with a maximum of 1M (1048576 bytes). The server's max block size will be used even if NFS clients request bigger rsize and wsize values. See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/5.8_technical_notes/known_issues-kernel
It is possible to change the default max block size allowed by the server by writing to the /proc/fs/nfsd/max_block_size file before starting nfsd. For example, the following command restores the previous default iosize of 32k:
# echo 32768 > /proc/fs/nfsd/max_block_size
Note: Reducing max_block_size may decrease NFS performance on modern hardware.
To make the change permanent, create a systemd-tmpfile:
/etc/tmpfiles.d/nfsd-block-size.conf
w /proc/fs/nfsd/max_block_size - - - - 32768
To mount with the increased rsize and wsize mount options:
# mount -t nfs -o rsize=32768,wsize=32768,vers=4 servername:/srv/nfs/music /mountpoint/on/client
Furthermore, despite violating the NFS protocol, setting async instead of sync or sync,no_wdelay may achieve a significant performance gain, especially on spinning disks. Configure exports with this option and then execute exportfs -arv to apply.
/etc/exports
/srv/nfs 192.168.1.0/24(rw,async,crossmnt,fsid=0)
/srv/nfs/music 192.168.1.0/24(rw,async)
Warning: Using async comes with a risk of possible data loss or corruption if the server crashes or restarts uncleanly.
Automatic mount handling
This trick is useful for NFS shares on a wireless network and/or on a network that may be unreliable. If the NFS host becomes unreachable, the NFS share will be unmounted, to hopefully prevent system hangs when using the hard mount option [4].
Make sure that the NFS mount points are correctly indicated in fstab:
/etc/fstab
lithium:/mnt/data /mnt/data nfs noauto 0 0
lithium:/var/cache/pacman /var/cache/pacman nfs noauto 0 0
Create the auto_share script that will be used by cron or systemd/Timers to use ICMP ping to check if the NFS host is reachable:
/usr/local/bin/auto_share
#!/bin/bash

function net_umount {
  umount -l -f $1 &>/dev/null
}

function net_mount {
  mountpoint -q $1 || mount $1
}

NET_MOUNTS=$(sed -e '/^.*#/d' -e '/^.*:/!d' -e 's/\t/ /g' /etc/fstab | tr -s " ")$'\n'b

printf %s "$NET_MOUNTS" | while IFS= read -r line
do
  SERVER=$(echo $line | cut -f1 -d":")
  MOUNT_POINT=$(echo $line | cut -f2 -d" ")

  # Check if server already tested
  if [[ "${server_ok[@]}" =~ "${SERVER}" ]]; then
    # The server is up, make sure the share is mounted
    net_mount $MOUNT_POINT
  elif [[ "${server_notok[@]}" =~ "${SERVER}" ]]; then
    # The server could not be reached, unmount the share
    net_umount $MOUNT_POINT
  else
    # Check if the server is reachable
    ping -c 1 "${SERVER}" &>/dev/null

    if [ $? -ne 0 ]; then
      server_notok[${#server_notok[@]}]=$SERVER
      # The server could not be reached, unmount the share
      net_umount $MOUNT_POINT
    else
      server_ok[${#server_ok[@]}]=$SERVER
      # The server is up, make sure the share is mounted
      net_mount $MOUNT_POINT
    fi
  fi
done
Note: If you want to use a TCP probe instead of ICMP ping (default is tcp port 2049 in NFS4), then replace the lines:
# Check if the server is reachable
ping -c 1 "${SERVER}" &>/dev/null
with:
# Check if the server is reachable
timeout 1 bash -c ": < /dev/tcp/${SERVER}/2049"
in the auto_share script above.
Make sure the script is executable.
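For example:
# chmod +x /usr/local/bin/auto_share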
Next, configure the script to run at regular intervals; in the examples below, it runs every minute.
Cron
# crontab -e
* * * * * /usr/local/bin/auto_share
systemd/Timers
/etc/systemd/system/auto_share.timer
[Unit]
Description=Automount NFS shares every minute

[Timer]
OnCalendar=*-*-* *:*:00

[Install]
WantedBy=timers.target
/etc/systemd/system/auto_share.service
[Unit]
Description=Automount NFS shares
After=syslog.target network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/auto_share

[Install]
WantedBy=multi-user.target
Finally, enable and start auto_share.timer.
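For example:
# systemctl enable --now auto_share.timer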
Using a NetworkManager dispatcher
NetworkManager can also be configured to run a script on network status change.
The easiest method to mount shares on network status change is to symlink the auto_share script:
# ln -s /usr/local/bin/auto_share /etc/NetworkManager/dispatcher.d/30-nfs.sh
However, in that particular case unmounting will happen only after the network connection has already been disabled, which is unclean and may result in effects like freezing of KDE Plasma applets.
The following script safely unmounts the NFS shares before the relevant network connection is disabled, by listening for the down, pre-down and vpn-pre-down events. Make sure the script is executable:
/etc/NetworkManager/dispatcher.d/30-nfs.sh
#!/bin/sh

# Find the connection UUID with "nmcli con show" in terminal.
# All NetworkManager connection types are supported: wireless, VPN, wired...
WANTED_CON_UUID="CHANGE-ME-NOW-9c7eff15-010a-4b1c-a786-9b4efa218ba9"

if [ "$CONNECTION_UUID" = "$WANTED_CON_UUID" ]; then
    # Script parameter $1: network interface name, not used
    # Script parameter $2: dispatched event
    case "$2" in
        "up")
            mount -a -t nfs4,nfs
            ;;
        "down"|"pre-down"|"vpn-pre-down")
            umount -l -a -t nfs4,nfs -f >/dev/null
            ;;
    esac
fi
Note: This script ignores mounts with the noauto option; remove this mount option or use auto to allow the dispatcher to manage these mounts.
Create a symlink inside /etc/NetworkManager/dispatcher.d/pre-down.d/ to catch the pre-down events:
# ln -s /etc/NetworkManager/dispatcher.d/30-nfs.sh /etc/NetworkManager/dispatcher.d/pre-down.d/30-nfs.sh
TLS encryption
NFS traffic can be encrypted using TLS as of Linux 6.5, via the xprtsec=tls mount option. To begin, install the ktls-utils (AUR) package on the client and server, and follow the below configuration steps for each.
Server
Create a private key and obtain a certificate containing your server's DNS name (see Transport Layer Security for more detail). These files do not need to be added to the system's trust store.
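For example, a self-signed certificate can be created with openssl; the file paths match the tlshd.conf example below, and servername.domain is a placeholder for your server's DNS name:
# openssl req -x509 -newkey rsa:4096 -nodes -days 365 -keyout /etc/nfsd-private-key.pem -out /etc/nfsd-certificate.pem -subj "/CN=servername.domain" -addext "subjectAltName=DNS:servername.domain"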
Edit /etc/tlshd.conf to use these files, using your own values for x509.certificate and x509.private_key:
[authenticate.server]
x509.certificate= /etc/nfsd-certificate.pem
x509.private_key= /etc/nfsd-private-key.pem
Now start and enable tlshd.service.
Client
Add the server's TLS certificate generated in the previous step to the system's trust store (see Transport Layer Security for more detail).
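One way to do this on Arch Linux is with p11-kit's trust tool (the certificate file name is illustrative):
# trust anchor --store servername-certificate.pem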
Start and enable tlshd.service.
Now you should be able to mount the server using the server's DNS name:
# mount -o xprtsec=tls servername.domain:/ /mountpoint/on/client
Checking journalctl on the client should show that the TLS handshake was successful:
$ journalctl -b -u tlshd.service
Sep 28 11:14:46 client tlshd[227]: Built from ktls-utils 0.10 on Sep 26 2023 14:24:03
Sep 28 11:15:37 client tlshd[571]: Handshake with servername.domain (192.168.122.100) was successful
Troubleshooting
See the dedicated NFS/Troubleshooting page.
See also
- See also Avahi, a Zeroconf implementation which allows automatic discovery of NFS shares.
- HOWTO: Diskless network boot NFS root
- Microsoft Services for Unix NFS Client info
- NFS on Snow Leopard
- http://chschneider.eu/linux/server/nfs.shtml
- How to do Linux NFS Performance Tuning and Optimization
- Linux: Tune NFS Performance