NFS v2 & v3 and MOUNT v1 & v3 protocol server
You must be root to start this daemon.
nfsd [-DFPt] [-c file] [-f n] [-H n] [-h n] [-o option] [-p n] [-s n] [-x n] &
Runs on: QNX Neutrino
The open file cache is used to cache open files and directories (with a 5-second idle timeout). If you know that nfsd services only one client, and that client reads and writes a single file, reducing this cache may save memory. If nfsd services many clients that read and write many files, increasing this cache could improve read/write performance.
Keep this cache a reasonable size, because file descriptors (open files) are a limited resource; by default, QNX Neutrino allows a maximum of 1000 open files per process. Besides this cache, nfsd needs file descriptors for sockets (servicing TCP consumes more of them than UDP does) and for internal readdir() operations.
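If you're tuning this cache, first confirm the per-process open-file limit in the environment that starts nfsd. Assuming a ksh-compatible shell (the default on QNX Neutrino), you can check it with:
ulimit -n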
The file handle cache is a straight memory/performance trade-off; however, it doesn't significantly affect read/write performance. It mainly speeds up ls-type operations (very useful for compiling and makefiles). To get a rough idea of how large this cache should optimally be, use the output of:
find mnt1 ... mntN | wc -l
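For example, if the exports file shares two directories, /home and /public (hypothetical mountpoints, used only for illustration), a rough count of the entries beneath them is:
find /home /public | wc -l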
The XID cache isn't used for performance; rather, it ensures that nonidempotent operations are responded to correctly.
Consider what happens when a client issues a remove request. Normally, the server receives the request, removes the file, and sends back a successful response. Suppose that, for some reason, the server doesn't respond fast enough, and the client retransmits the request. If the server simply re-executed the retransmitted remove, it would fail, and the client would get an error even though the file was actually removed.
Instead, each request is assigned a transaction identifier, known as an xid, which remains constant for retransmissions. If the client retransmits the request, the server matches it with the previous request and just replies with the previous status. Generally, the busier the network and server are, the more requests are retransmitted by the client(s), and the larger the XID cache should be.
The nfsd daemon services both NFS mount requests and NFS requests, as specified by the exports file. When it starts, nfsd reads the /etc/exports.hostname file (if this file doesn't exist, nfsd reads /etc/exports instead) to determine which mountpoints to service. Changes made to this file don't take effect until you either restart nfsd or send it a SIGHUP signal:
slay -s SIGHUP nfsd
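If nfsd is logging through syslogd (see the cross references below), you can then check the system log to confirm that the new exports file was parsed cleanly. The log location here is an assumption; it depends on your /etc/syslog.conf:
tail /var/log/syslog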
There's no direct check for root on the mount; nfsd checks only that requests come in on a privileged port, which implies root access.
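For TCP mounts, you can also list the connections to the well-known NFS port (2049) and confirm that the clients' source ports are in the privileged range (below 1024). The exact output format depends on your io-pkt stack:
netstat -n | grep 2049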
The nfsd daemon supports a maximum of 15 nested directory levels.
The nfsd command doesn't tolerate any parsing errors while reading the exports file. If an error is detected, nfsd terminates. To keep downtime to a minimum if you modify the exports file, we recommend that you either:
NFS is a very insecure protocol. Although nfsd checks each request's origin against the restrictions specified in the exports file, this helps only in an “honest” network; it isn't difficult to spoof NFS requests.
Fine-tuning the nfsd caches may result in less memory usage or improved performance, but these goals are usually mutually exclusive. Before modifying the default behavior of nfsd, it's important to know what its clients will demand from it. Also note that these caches are shared across all mountpoints.
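Before tuning, confirm which command-line options size each cache on your version of nfsd by checking its usage message on the target:
use nfsd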
/etc/exports, fs-nfs2, fs-nfs3, io-pkt*, mount, syslogd, umount
“NFS filesystem” in the Working With Filesystems chapter of the User's Guide