What is NFS? Network File System, a network access protocol for file systems. Also covered: how to open an NFS file, and how to unlock a body extension in Need for Speed (NFSU).

If you have an antivirus program installed on your computer, it can scan all files on the computer, as well as each file individually. You can scan any file by right-clicking on it and selecting the appropriate option to scan the file for viruses.

For example, in this figure the file my-file.nfs is highlighted; you need to right-click on this file and select the option "Scan with AVG" from the context menu. Selecting this option opens AVG Antivirus, which scans this file for viruses.


Sometimes an error may occur as a result of incorrect software installation, caused by a problem encountered during the installation process. This can prevent your operating system from associating your NFS file with the correct application, affecting the so-called "file extension associations".

Sometimes simply reinstalling F1 2015 may solve your problem by correctly associating NFS with F1 2015. In other cases, problems with file associations may result from poor programming by the software developer, and you may need to contact the developer for additional help.


Tip: Try updating F1 2015 to the latest version to make sure you have the latest patches and updates installed.


This may seem too obvious, but often the NFS file itself may be causing the problem. If you received the file as an email attachment or downloaded it from a website and the download process was interrupted (for example, by a power outage), the file may be damaged. If possible, get a new copy of the NFS file and try opening it again.


Caution: A damaged file may cause collateral damage from previous or existing malware on your PC, so it is very important to keep an updated antivirus running on your computer at all times.


If your NFS file is related to hardware on your computer, you may need to update the device drivers associated with that hardware in order to open the file.

This problem is usually associated with media file types that depend on hardware inside the computer, such as a sound card or video card, to open successfully. For example, if you are trying to open an audio file but cannot open it, you may need to update your sound card drivers.


Tip: If you receive a .SYS file error message when you try to open an NFS file, the problem is probably related to corrupted or outdated device drivers that need to be updated. This process can be made easier by using driver update software such as DriverDoc.


If these steps do not solve the problem and you are still having trouble opening NFS files, this may be due to a lack of available system resources. Some versions of NFS files may require a significant amount of resources (e.g. memory/RAM, processing power) to open properly on your computer. This problem occurs quite often if you are using fairly old computer hardware together with a much newer operating system.

This problem can occur when the computer has difficulty completing the task because the operating system (and other services running in the background) consume too many resources while opening the NFS file. Try closing all applications on your PC before opening F1 2015 Speech Data. Freeing up all available resources on your computer provides the best conditions for attempting to open the NFS file.


If you have completed all the steps described above and your NFS file still won't open, you may need a hardware upgrade. In most cases, even with older hardware, the processing power is still more than sufficient for most user applications (unless you do a lot of CPU-intensive work such as 3D rendering, financial or scientific modeling, or intensive multimedia work). Thus, it is more likely that your computer does not have enough memory (commonly called "RAM") to perform the file-open task.

When it comes to computer networks, you can often hear NFS mentioned. What does this abbreviation mean?

It is a distributed file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network, similar to accessing local storage. NFS, like many other protocols, is based on the Open Network Computing Remote Procedure Call (ONC RPC) system.

In other words, what is NFS? It is an open standard, defined by Request for Comments (RFC), allowing anyone to implement the protocol.

Versions and variations

The inventor used only the first version for his own experimental purposes. When the development team added significant changes to the original NFS and released it outside of Sun's ownership, they designated the new version as v2 so that interoperability between distributions could be tested and a fallback created.

NFS v2

Version 2 initially worked only over the User Datagram Protocol (UDP). Its developers wanted to keep the server side stateless, with locking implemented outside the core protocol.

The virtual file system interface allows for a modular implementation reflected in a simple protocol. By February 1986, implementations had been demonstrated for operating systems such as System V release 2, DOS and VAX/VMS using Eunice. NFS v2 only allowed the first 2 GB of a file to be read because of its 32-bit limitations.

NFS v3

The first proposal to develop NFS version 3 at Sun Microsystems was announced shortly after the release of the second version. The main motivation was to mitigate the performance problem of synchronous writes. By July 1992, practical improvements had resolved many of the shortcomings of NFS version 2, leaving only its limited file support (64-bit file sizes and file offsets) as an open issue. The third version added:

  • support for 64-bit file sizes and offsets to handle files larger than 2 gigabytes (GB);
  • support for asynchronous writes on the server to improve performance;
  • additional file attributes in many replies to avoid having to re-fetch them;
  • a READDIRPLUS operation to obtain data and attributes along with file names when scanning a directory;
  • many other improvements.

Around the time version 3 was introduced, support for TCP as a transport-layer protocol began to grow. Using TCP as the transport for NFS over a WAN made it practical to transfer large files for reading and writing, and allowed developers to overcome the 8 KB limit imposed by the User Datagram Protocol (UDP).

What is NFS v4?

Version 4, influenced by the Andrew File System (AFS) and Server Message Block (SMB, also called CIFS), includes performance improvements, provides better security, and introduces a stateful protocol.

Version 4 was the first release developed by the Internet Engineering Task Force (IETF) after Sun Microsystems handed over protocol development.

NFS version 4.1 aims to provide protocol support for leveraging clustered server deployments, including the ability to provide scalable parallel access to files distributed across multiple servers (pNFS extension).

The newest file system protocol, NFS 4.2 (RFC 7862), was officially released in November 2016.

Other extensions

As the standard developed, corresponding tools for working with it appeared. For example, WebNFS, an extension for versions 2 and 3, allows the network file access protocol to be integrated more easily into web browsers and to work across firewalls.

Various third party protocols have also become associated with NFS. The most famous of them are:

  • Network Lock Manager (NLM) with byte-range locking support (added to support the UNIX System V file locking API);
  • Remote Quota (RQUOTAD), which allows NFS users to view storage quotas on NFS servers;
  • NFS over RDMA is an adaptation of NFS that uses remote direct memory access (RDMA) as the transmission medium;
  • NFS-Ganesha is an NFS server running in user space and supporting CephFS FSAL (File System Abstraction Layer) using libcephfs.

Platforms

Network File System is often used with Unix operating systems (such as Solaris, AIX, HP-UX), Apple's macOS, and Unix-like operating systems (such as Linux and FreeBSD).

It is also available for platforms such as Acorn RISC OS, OpenVMS, MS-DOS, Microsoft Windows, Novell NetWare and IBM AS/400.

Alternative remote file access protocols include Server Message Block (SMB, also called CIFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and the OS/400 Server File System (QFileSvr.400).

This is due to the requirements of NFS, which are aimed mostly at Unix-like systems.

At the same time, the SMB and NetWare (NCP) protocols are used more often than NFS on systems running Microsoft Windows. AFP is most common on Apple Macintosh platforms, and QFileSvr.400 on OS/400.

Typical implementation

Assuming a typical Unix-style scenario in which one computer (the client) needs access to data stored on another (the NFS server):

  • The server implements Network File System processes, running by default as nfsd, to make its data available to clients. The server administrator determines which directories to export and with what settings, typically using the /etc/exports configuration file and the exportfs command.
  • The server's security administration ensures that it can recognize and approve an authenticated client, and its network configuration ensures that eligible clients can negotiate with it through any firewall system.
  • The client machine requests access to the exported data, usually by issuing a mount command. It queries the server's port mapper (rpcbind) for the port used by NFS and then connects to it.
  • If everything happens without errors, users on the client machine can view and interact with the mounted file systems on the server within the permitted parameters.
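As a minimal sketch of this flow (host names, paths, the network range and the service name are illustrative assumptions, not taken from the original):

# on the server: an assumed entry in /etc/exports
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
# apply the export table and start the service (nfs-kernel-server on Debian/Ubuntu, nfs-server on RHEL-like systems)
sudo exportfs -ra
sudo systemctl start nfs-server
# on the client: mount the exported directory
sudo mount -t nfs server.example.com:/srv/share /mnt/share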

It should also be noted that the Network File System mounting process can be automated, for example using /etc/fstab and/or other similar tools.

Development to date

By the 21st century, the competing protocols DFS and AFS had not achieved any major commercial success compared to the Network File System. IBM, which had previously acquired all commercial rights to these technologies, donated most of the AFS source code to the free software community in 2000; the OpenAFS project still exists today. In early 2005, IBM announced the end of sales of AFS and DFS.

In turn, in January 2010, Panasas proposed NFS v4.1 based on technology that improves parallel data access capabilities. The NFS v4.1 protocol defines a method for separating file system metadata from the location of particular files, so it goes beyond simple name/data separation.

What does this version of NFS mean in practice? The feature above distinguishes it from the traditional protocol, which keeps the names of files and their data under one connection to the server. With Network File System v4.1, some files can be distributed across multi-node servers, while the client's involvement in the separation of metadata and data is limited.

In the fourth version of the protocol, the NFS server is a set of server resources or components, which are assumed to be controlled by the metadata server.

The client still contacts a single metadata server to traverse or interact with the namespace. As it moves files to and from the server, it can interact directly with the set of data servers belonging to the NFS group.

Already the start screen of Need for Speed Underground 2 introduces the main goal of the game: a beautiful and powerful car, built to match the player's style, with modified body parts and unrivaled speed characteristics. In the game you can completely recreate replicas of a Ford Mustang, Nikki Maurice's car with the original NFSU vinyl, or the Nissan Z, which was Rachel Taylor's main car and had the rarest set of tuning parts - a wide body kit.

There is a lot of controversy surrounding this topic, because no one knows a definitive way to gain access to such upgrades. It has been established that wide body kits become available regardless of game progress, but they can be unlocked no earlier than the fifth stage of your career.

Car reputation points and sponsors

Parts for improving a car are often bonuses for winning competitions, usually sponsored and special ones. In addition, winning any race brings not only a monetary reward but also reputation points, and accumulating a large number of them increases the player's chances of discovering unique parts.

It should be recalled that each sponsor offers several special races at the finals of the competition, with unique prizes at the end. There are several known cases where a body extension in Need for Speed was unlocked after winning sponsored races or immediately afterwards.

Random receipt and prizes for hidden races

Players who intend to unlock all possible improvements for the car, including wide body kits, should not rush through the game using only the main storyline tasks. Each career stage is accompanied by additional opportunities that are not entirely obvious to the gamer.

One of the "Easter eggs" of Underground 2 is the special racers who can meet the main character as rivals in the main competitions. Winning such a race is almost always accompanied by the unlocking of additional options in the workshop.

On the game world map there are many races available, but the sponsor does not oblige you to take part in all of them to fulfill the terms of the contract. The more victories a player earns, the greater the likelihood of receiving additional prizes. There are also hidden races in the game. You can't find them on the map in free drive mode, but you can see their halos when you drive past. Still, by looking at the map while in the garage, the player can determine the approximate locations of the competition sites. It is worth noting that receiving an additional prize for participating in races is completely random.

Participation in Out Run races

The most effective method of obtaining unique upgrades, including unlocking body extension kits, is to participate in the free races known as Out Run. You can start one by approaching an opponent in city driving mode (other racers are marked on the map with red triangles).

To receive a body extension from Out Run races, the player needs to win several victories in a row, and the more unique and valuable the unlock, the greater this number. For example, the first wide body kit is unlocked by winning four times, and to get the NFSU logo on your car you need to defeat opponents eleven times in a row. In addition to body extensions, other upgrades can be received as rewards, and the victory counter is reset after a loss or after receiving a prize.

NFS, or Network File System, is a popular network file system protocol that allows users to mount remote network directories on their machine and transfer files between servers. You can use the disk space of another machine for your files and work with files located on other servers. Essentially it is an alternative to Windows shared folders for Linux; unlike Samba, it is implemented at the kernel level and works more stably.

This article covers installing NFS on Ubuntu 16.04. We will look at installing all the necessary components, setting up a shared folder, and mounting network folders.

As already mentioned, NFS is a network file system. To use it, you need a server that hosts the shared folder and clients that can mount the network folder as a regular disk in the system. Unlike other protocols, NFS provides transparent access to remote files. Programs see the files as if they were in a regular file system and work with them as with local files; NFS returns only the requested part of a file instead of the whole file, so this file system works well on systems with a fast internet connection or on a local network.

Installing NFS Components

Before we can work with NFS, we have to install several packages. On the machine that will act as the server, you need to install the nfs-kernel-server package, which will be used to share folders. To do this, run:

sudo apt install nfs-kernel-server

Now let's check whether the server was installed correctly. The NFS service listens for connections over both TCP and UDP on port 2049. You can see whether these ports are actually in use with the command:

rpcinfo -p | grep nfs

It is also important to check whether NFS is supported at the kernel level:

cat /proc/filesystems | grep nfs

We can see that it works; but if not, you need to load the nfs kernel module manually:
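Presumably with a command like this (the exact module name may differ between kernels, so treat it as an assumption):

sudo modprobe nfs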

Let's also add nfs to startup:

sudo systemctl enable nfs

On the client computer you need to install the nfs-common package to be able to work with this file system. You do not have to install the server components; this package alone is enough:

sudo apt install nfs-common

Setting up an NFS server on Ubuntu

We can share any folder over NFS, but let's create a new one for this purpose:
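For example (the path /var/nfs is an assumption for this walkthrough, matching the export example used further below):

sudo mkdir /var/nfs

The shared folder is then described in the /etc/exports file. Each line of this file has the following format: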

folder_address client(options)

The folder address is the folder that should be made accessible over the network. Client is the IP address or network address from which this folder can be accessed. The options are a little more complicated. Let's look at some of them:

  • rw - allow reading and writing in this folder
  • ro - allow read-only access
  • sync - respond to further requests only after the data has been saved to disk (the default)
  • async - do not block connections while data is being written to disk
  • secure - use only ports below 1024 for the connection
  • insecure - use any ports
  • nohide - do not hide subdirectories when sharing several directories
  • root_squash - map requests from root to the anonymous user
  • all_squash - map all requests to the anonymous user
  • anonuid and anongid - specify the uid and gid for the anonymous user.

For example, for our folder this line might look like this:

/var/nfs 127.0.0.1(rw,sync,no_subtree_check)

Once everything is configured, all that remains is to update the NFS export table:

sudo exportfs -a

That's it; sharing the folder over NFS on Ubuntu 16.04 is complete. Now let's configure the client and try to mount the shared folder.

NFS connection

We will not dwell on this issue in detail in today's article. This is a fairly large topic that deserves its own article. But I will still say a few words.

To mount a network folder you don't need any special Ubuntu NFS client; just use the mount command:

sudo mount 127.0.0.1:/var/nfs/ /mnt/

Now you can try to create a file in the connected directory:
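For example (the file name is arbitrary; depending on the export options you may need sudo):

touch /mnt/test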

We will also look at the mounted file systems using df:
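For example:

df -h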

127.0.0.1:/var/nfs 30G 6.7G 22G 24% /mnt

To unmount this file system, just use the standard umount:

sudo umount /mnt/

Conclusions

This article covered setting up NFS on Ubuntu 16.04. As you can see, everything is done very simply and transparently. Mounting NFS shares takes just a few commands, and sharing folders over NFS on Ubuntu 16.04 is not much more complicated than mounting them. If you have any questions, write in the comments!



Good afternoon, readers and guests. There was a very long break between posts, but I'm back in action). In today's article I will look at how the NFS protocol works, as well as setting up an NFS server and NFS client on Linux.

Introduction to NFS

NFS (Network File System) is, in my opinion, an ideal solution on a local network where fast data exchange is needed (faster than SAMBA and less resource-intensive than remote file systems with encryption - sshfs, SFTP, etc...) and the security of the transmitted information is not a priority. The NFS protocol allows you to mount remote file systems over the network into the local directory tree as if they were mounted disk file systems. This lets local applications work with a remote file system as if it were local. But you need to be careful (!) when configuring NFS, because with certain configurations it is possible to freeze the client's operating system waiting for endless I/O. The NFS protocol is based on the RPC protocol, which is still beyond my understanding)), so the material in the article will be a little vague... Before you can use NFS, be it a server or a client, you must make sure that your kernel has support for the NFS file system. You can check whether the kernel supports the NFS file system by looking for the corresponding lines in the file /proc/filesystems:

ARCHIV ~ # grep nfs /proc/filesystems
nodev   nfs
nodev   nfs4
nodev   nfsd

If the specified lines do not appear in the /proc/filesystems file, then you need to install the packages described below. This will most likely also install the dependent kernel modules to support the required file systems. If, after installing the packages, NFS support is still not shown in this file, you will need to enable this function in the kernel.

History of the Network File System

The NFS protocol was developed by Sun Microsystems and has had 4 versions in its history. NFSv1 was developed in 1989 and was experimental, running over the UDP protocol. Version 1 is described in . NFSv2 was released in the same year, 1989, described by the same RFC1094 and also based on the UDP protocol, while allowing no more than 2 GB to be read from a file. NFSv3 was finalized in 1995 and is described in . The main innovations of the third version were support for large files and the addition of support for the TCP protocol and large TCP packets, which significantly accelerated the technology's performance. NFSv4 was finalized in 2000 and is described in RFC 3010, revised in 2003 and described in . The fourth version included performance improvements, support for various authentication mechanisms (in particular Kerberos and LIPKEY using the RPCSEC GSS protocol) and access control lists (of both POSIX and Windows types). NFS version 4.1 was approved by the IESG in 2010 and received the number . An important innovation in version 4.1 is the specification of pNFS - Parallel NFS, a mechanism for parallel NFS client access to data on multiple distributed NFS servers. The presence of such a mechanism in the network file system standard will help build distributed "cloud" storage and information systems.

NFS server

Since NFS is a network file system, a working network configuration is required (you can also read the article on basic networking concepts). Next, the appropriate packages need to be installed. On Debian these are the packages nfs-kernel-server and nfs-common, in RedHat it is the nfs-utils package. You also need to allow the daemon to run at the required OS run levels (the command in RedHat is /sbin/chkconfig nfs on, in Debian - /usr/sbin/update-rc.d nfs-kernel-server defaults).

Installed packages in Debian are launched in the following order:

ARCHIV ~ # ls -la /etc/rc2.d/ | grep nfs
lrwxrwxrwx 1 root root 20 Oct 18 15:02 S15nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root 27 Oct 22 01:23 S16nfs-kernel-server -> ../init.d/nfs-kernel-server

That is, nfs-common starts first, then the server itself, nfs-kernel-server. In RedHat the situation is similar, with the only exception that the first script is called nfslock and the server is simply called nfs. About nfs-common, the Debian website tells us verbatim: shared files for the NFS client and server; this package must be installed on a machine that will operate as an NFS client or server. The package includes the programs lockd, statd, showmount, nfsstat, gssd and idmapd. By viewing the contents of the startup script /etc/init.d/nfs-common you can trace the following sequence of work: the script checks for the presence of the executable binary /sbin/rpc.statd, checks the files /etc/default/nfs-common, /etc/fstab and /etc/exports for parameters that require running the daemons idmapd and gssd, starts the daemon /sbin/rpc.statd, then, before launching /usr/sbin/rpc.idmapd and /usr/sbin/rpc.gssd, checks for the presence of those executable binaries; next, for the daemon /usr/sbin/rpc.idmapd it checks for the kernel modules sunrpc, nfs and nfsd, as well as for support of the rpc_pipefs file system in the kernel (that is, its presence in the file /proc/filesystems); if everything succeeds, it starts /usr/sbin/rpc.idmapd. Additionally, for the daemon /usr/sbin/rpc.gssd it checks for the kernel module rpcsec_gss_krb5 and starts the daemon.

If you look at the contents of the NFS server startup script on Debian (/etc/init.d/nfs-kernel-server), you can trace the following sequence: at startup the script checks for the existence of the file /etc/exports, the presence of nfsd and support for the NFS file system in the kernel (that is, in the file /proc/filesystems); if everything is in place, it starts the daemon /usr/sbin/rpc.nfsd, then checks whether the parameter NEED_SVCGSSD is set (in the server settings file /etc/default/nfs-kernel-server) and, if it is, starts the daemon /usr/sbin/rpc.svcgssd, and finally launches the daemon /usr/sbin/rpc.mountd. From this script it is clear that NFS server operation consists of the daemons rpc.nfsd and rpc.mountd and, if Kerberos authentication is used, the rpc.svcgssd daemon. In RedHat, the rpc.rquotad and nfslogd daemons are also started (for some reason I did not find information in Debian about this daemon and the reasons for its absence; apparently it was removed...).

From this it becomes clear that the Network File System server consists of the following processes (read: daemons), located in the /sbin and /usr/sbin directories: rpc.statd, rpc.nfsd, rpc.mountd and rpc.idmapd (plus rpc.rquotad and, with Kerberos, rpc.svcgssd).

In NFSv4, when using Kerberos, additional daemons are started:

  • rpc.gssd- The NFSv4 daemon provides authentication methods via GSS-API (Kerberos authentication). Works on client and server.
  • rpc.svcgssd- NFSv4 server daemon that provides server-side client authentication.

portmap and RPC protocol (Sun RPC)

In addition to the packages above, NFSv2 and v3 require the additional package portmap (replaced in newer distributions by the renamed rpcbind). This package is usually installed automatically with NFS as a dependency and implements the RPC server, that is, it is responsible for the dynamic assignment of ports for the services registered with the RPC server. Literally, according to the documentation, it is a server that converts RPC (Remote Procedure Call) program numbers into TCP/UDP port numbers. portmap operates on several entities: RPC calls or requests, TCP/UDP ports, protocol versions (tcp or udp), program numbers and program versions. The portmap daemon is started by the /etc/init.d/portmap script before NFS services start.

In short, the job of an RPC (Remote Procedure Call) server is to process RPC calls (so-called RPC procedures) from local and remote processes. Using RPC calls, services register or remove themselves with the port mapper (aka portmap, aka portmapper, aka, in newer versions, rpcbind), and clients use RPC calls to ask the port mapper for the information they need. User-friendly names of program services and their corresponding numbers are defined in the /etc/rpc file. As soon as a service has sent the corresponding request and registered itself with the RPC server (the port mapper), the RPC server assigns and maps to the service the TCP and UDP ports on which the service started, and stores in the kernel the corresponding information about the running service (its name), the unique service number (in accordance with /etc/rpc), the protocol and port on which the service runs and the version of the service, and provides this information to clients on request. The port converter itself has program number 100000, version number 2, TCP port 111 and UDP port 111. Above, when listing the NFS server daemons, I indicated the main RPC program numbers. I've probably confused you a little with this paragraph, so here is the basic idea that should make things clear: the main function of the port mapper is to return, at the request of a client that has provided an RPC program number (or RPC program name) and version, the port on which the requested program is running. Accordingly, if a client needs to access RPC with a specific program number, it must first contact the portmap process on the server machine and determine the number of the port used to communicate with the RPC service it needs.

The operation of an RPC server can be represented by the following steps:

  1. The port converter should start first, usually when the system boots. This creates a TCP endpoint and opens TCP port 111. It also creates a UDP endpoint that waits for a UDP datagram to arrive on UDP port 111.
  2. At startup, a program running through an RPC server creates a TCP endpoint and a UDP endpoint for each supported version of the program. (An RPC server can support multiple versions. The client specifies the required version when making the RPC call.) A dynamically assigned port number is assigned to each version of the service. The server logs each program, version, protocol, and port number by making the appropriate RPC call.
  3. When the RPC client program needs to obtain the necessary information, it calls the port resolver routine to obtain a dynamically assigned port number for the specified program, version, and protocol.
  4. In response to this request, the server returns a port number.
  5. The client sends an RPC call message to the port number obtained in step 3. If UDP is used, the client simply sends a UDP datagram containing the RPC call message to the UDP port number on which the requested service is running. In response, the service sends a UDP datagram containing an RPC reply message. If TCP is used, the client performs an active open to the TCP port number of the desired service and then sends an RPC call message over the established connection. The server responds with an RPC reply message on the connection.

To obtain information from the RPC server, use the rpcinfo utility. With the parameters -p host, the program displays a list of all registered RPC programs on the given host. Without a host, the program displays the services on localhost. Example:

ARCHIV ~ # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  59451  status
    100024    1   tcp  60872  status
    100021    1   udp  44310  nlockmgr
    100021    3   udp  44310  nlockmgr
    100021    4   udp  44310  nlockmgr
    100021    1   tcp  44851  nlockmgr
    100021    3   tcp  44851  nlockmgr
    100021    4   tcp  44851  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100005    1   udp  51306  mountd
    100005    1   tcp  41405  mountd
    100005    2   udp  51306  mountd
    100005    2   tcp  41405  mountd
    100005    3   udp  51306  mountd
    100005    3   tcp  41405  mountd

As you can see, rpcinfo displays (in columns from left to right) the registered program number, version, protocol, port and name. Using rpcinfo you can remove a program's registration or get information about a specific RPC service (see man rpcinfo for more options). As you can see, the portmapper daemon version 2 is registered on udp and tcp ports, rpc.statd version 1 on udp and tcp ports, the NFS lock manager versions 1, 3 and 4, the NFS server daemon versions 2, 3 and 4, and the mount daemon versions 1, 2 and 3.

The NFS server (more precisely, the rpc.nfsd daemon) receives requests from the client in the form of UDP datagrams on port 2049. Although NFS works with a port mapper, which would allow the server to use dynamically assigned ports, UDP port 2049 is hard-coded for NFS in most implementations.

Network File System Protocol Operation

Mounting remote NFS

The process of mounting a remote NFS file system can be represented by the following diagram:

Description of the NFS protocol when mounting a remote directory:

  1. An RPC server is launched on both the server and the client (usually at boot); it is serviced by the portmapper process and registered on ports tcp/111 and udp/111.
  2. Services are launched (rpc.nfsd, rpc.statd, etc.) that register with the RPC server and are registered on arbitrary network ports (unless a static port is specified in the service settings).
  3. On the client computer, the mount command sends the kernel a request to mount a network directory, specifying the file system type, the host and the directory itself; the kernel forms and sends an RPC request to the portmap process on the NFS server on port udp/111 (unless the option to work over tcp is set on the client).
  4. The NFS server kernel asks RPC whether the rpc.mountd daemon is present and returns to the client kernel the network port on which the daemon is running.
  5. mount sends an RPC request to the port on which rpc.mountd is running. The NFS server can now validate the client based on its IP address and port number to decide whether this client can mount the specified file system.
  6. The mount daemon returns a description of the requested file system.
  7. The client's mount command issues the mount system call to associate the file handle obtained in the previous step with a local mount point on the client host. The file handle is stored in the client's NFS code, and from now on any access by user processes to files on the server's file system will use the file handle as a starting point.

Communication between client and NFS server

A typical access to a remote file system can be described as follows:

Description of the process of accessing a file located on an NFS server:

  1. The client (user process) does not care whether it is accessing a local file or an NFS file. The kernel handles interaction with the hardware through kernel modules or built-in system calls.
  2. The kernel module kernel/fs/nfs/nfs.ko, which performs the functions of an NFS client, sends RPC requests to the NFS server via the TCP/IP module. NFS normally uses UDP, but newer implementations may use TCP.
  3. The NFS server receives requests from the client as UDP datagrams on port 2049. Although NFS can work with a port mapper, which would allow the server to use dynamically assigned ports, UDP port 2049 is hard-coded for NFS in most implementations.
  4. When the NFS server receives a request from a client, it is passed to a local file access routine, which provides access to the local disk on the server.
  5. The result of the disk access is returned to the client.

Setting up an NFS server

Setting up the server generally consists of specifying, in the file /etc/exports, the local directories that remote systems are allowed to mount. This action is called exporting a directory hierarchy. The main sources of information about exported directories are the following files:

  • /etc/exports- the main configuration file that stores the configuration of the exported directories. Used when starting NFS and by the exportfs utility.
  • /var/lib/nfs/xtab- contains a list of directories mounted by remote clients. Used by the rpc.mountd daemon when a client attempts to mount a hierarchy (a mount entry is created).
  • /var/lib/nfs/etab- a list of directories that can be mounted by remote systems, indicating all the parameters of the exported directories.
  • /var/lib/nfs/rmtab- a list of directories that are not currently unexported.
  • /proc/fs/nfsd- a special file system (kernel 2.6) for managing the NFS server.
    • exports - a list of active exported hierarchies and the clients to whom they were exported, as well as their parameters. The kernel obtains this information from /var/lib/nfs/xtab.
    • threads- contains the number of threads (can also be changed)
    • using filehandle you can get a pointer to a file
    • etc...
  • /proc/net/rpc- contains “raw” statistics, which can be obtained using nfsstat, as well as various caches.
  • /var/run/portmap_mapping- information about services registered in RPC

Note: In general, on the Internet there are a lot of interpretations and formulations of the purpose of the xtab, etab, rmtab files, I don’t know who to believe. Even on http://nfs.sourceforge.net/ the interpretation is not clear.

Setting up the /etc/exports file

In the simplest case, the /etc/exports file is the only file that requires editing to configure the NFS server. This file manages the following aspects:

  • which clients can access files on the server
  • which directory hierarchies on the server each client can access
  • how client user names will be mapped to local user names

Each line of the exports file has the following format:

export_point client1 (options) [client2(options) ...]

Where export_point is the absolute path of the exported directory hierarchy, client1 - n are the names of one or more clients or IP addresses, separated by spaces, that are allowed to mount export_point. Options describe the mounting rules for the client specified before the options.

Here is a typical example of an exports file configuration:

ARCHIV ~ # cat /etc/exports
/archiv1  files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)

In this example, the computers files and 10.0.0.1 are allowed access to the export point /archiv1; the host files has read/write access, while the host 10.0.0.1 and the subnet 10.0.230.1/24 have read-only access.

Host descriptions in /etc/exports are allowed in the following format:

  • The names of individual nodes are described as files or files.DOMAIN.local.
  • A domain mask is described in the following format: *.DOMAIN.local includes all nodes of the DOMAIN.local domain.
  • Subnets are specified as IP address/mask pairs. For example: 10.0.0.0/255.255.255.0 includes all nodes whose addresses begin with 10.0.0.
  • Specifying the name of the @myclients network group that has access to the resource (when using an NIS server).
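To illustrate, here is a hypothetical /etc/exports line combining these host forms (the host, domain, subnet and netgroup names are invented for this example):

/archiv1 files(rw,sync) *.DOMAIN.local(ro,sync) 10.0.0.0/255.255.255.0(ro,sync) @myclients(rw,sync)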

General options for exporting directory hierarchies

The exports file uses the following general options (the options used by default in most systems are listed first, with the non-default ones in brackets):

  • auth_nlm (no_auth_nlm) or secure_locks (insecure_locks) - specifies that the server should require authentication of lock requests (using the NFS Lock Manager protocol).
  • nohide (hide) - if the server exports two directory hierarchies, one of which is nested (mounted) within the other, the client has to mount the second (child) hierarchy explicitly, otherwise the child hierarchy's mount point appears as an empty directory. The nohide option makes the second directory hierarchy appear without an explicit mount. (note: I couldn't get this option to work...)
  • ro (rw) - allows only read (or read and write) requests. (Ultimately, whether reading or writing is possible is determined by file system permissions, and the server cannot distinguish a request to read a file from a request to execute one, so it allows reading if the user has read or execute permission.)
  • secure (insecure) - requires NFS requests to come from secure ports (< 1024), so that a program without root privileges cannot mount the directory hierarchy.
  • subtree_check (no_subtree_check) - if a subdirectory of a file system is exported, but not the whole file system, the server checks whether the requested file is located in the exported subdirectory. Disabling this check reduces security but increases data transfer speed.
  • sync (async) - specifies that the server should respond to requests only after the changes made by those requests have been written to disk. The async option tells the server not to wait for the information to be written to disk, which improves performance but reduces reliability, because in the event of a dropped connection or equipment failure information may be lost.
  • wdelay (no_wdelay) - instructs the server to delay executing write requests if a subsequent write request is pending, writing the data in larger blocks. This improves performance when sending large queues of write commands. no_wdelay specifies not to delay execution of write commands, which can be useful if the server receives a large number of unrelated commands.

Exporting symbolic links and device files. When exporting a directory hierarchy containing symbolic links, the link target must be accessible to the client (remote) system, that is, one of the following rules must be true:

A device file refers to an interface. When you export a device file, this interface is exported. If the client system does not have a device of the same type, the exported device will not work. On the client system, when mounting NFS objects, you can use the nodev option so that device files in the mounted directories are not used.

Default options may vary between systems; they can be viewed in the file /var/lib/nfs/etab. After describing an exported directory in /etc/exports and restarting the NFS server, all missing options (read: default options) will be reflected in the /var/lib/nfs/etab file.

User ID mapping options

For a better understanding of what follows, I would advise you to read the article. Each Linux user has its own UID and primary GID, which are described in the files /etc/passwd and /etc/group. The NFS server assumes that the remote host's operating system has already authenticated the users and assigned them the correct UID and GID. Exporting files gives users of the client system the same access to those files as if they had logged in directly on the server. Accordingly, when an NFS client sends a request to the server, the server uses the UID and GID to identify the user on the local system, which can lead to some problems:

  • a user may not have the same identifiers on both systems and therefore may be able to access another user's files.
  • because the root user's ID is always 0, this user is mapped to a local user depending on the options specified.

The following options set the rules for mapping remote users to local ones:

  • root_squash (no_root_squash) - With root_squash specified, requests from the root user are mapped to the anonymous uid/gid, or to the user specified in the anonuid/anongid parameter.
  • no_all_squash (all_squash) - Does not change the UID/GID of the connecting user. The all_squash option maps ALL users (not just root) to the anonymous user or to the one specified in the anonuid/anongid parameter.
  • anonuid=UID and anongid=GID - Explicitly set the UID/GID for the anonymous user.
  • map_static=/etc/file_maps_users - Specifies a file in which the mapping of remote UIDs/GIDs to local UIDs/GIDs can be set.

Example of using a user mapping file:

ARCHIV ~ # cat /etc/file_maps_users
# User mapping
# remote     local    comment
uid 0-50     1002     # mapping users with remote UID 0-50 to local UID 1002
gid 0-50     1002     # mapping groups with remote GID 0-50 to local GID 1002

NFS Server Management

The NFS server is managed using the following utilities:

  • nfsstat
  • showmount

nfsstat: NFS and RPC statistics

The nfsstat utility allows you to view statistics of RPC and NFS servers. The command options can be found in man nfsstat.

showmount: Display NFS status information

The showmount utility queries the rpc.mountd daemon on the remote host about mounted file systems. By default, a sorted list of clients is returned. Options:

  • --all- a list of clients and mount points is displayed indicating where the client mounted the directory. This information may not be reliable.
  • --directories- a list of mount points is displayed
  • --exports- a list of exported file systems is displayed from the point of view of nfsd

When you run showmount without arguments, information about the systems that are allowed to mount local directories is printed to the console. For example, the ARCHIV host provides us with a list of exported directories together with the IP addresses of the hosts that are allowed to mount them:

FILES ~ # showmount --exports archiv
Export list for archiv:
/archiv-big    10.0.0.2
/archiv-small  10.0.0.2

If you specify the hostname/IP in the argument, information about this host will be displayed:

ARCHIV ~ # showmount files
clnt_create: RPC: Program not registered
# this message tells us that the NFSd daemon is not running on the FILES host

exportfs: manage exported directories

This command serves the exported directories specified in the file /etc/exports; more precisely, it does not serve them but synchronizes them with the file /var/lib/nfs/xtab and removes nonexistent entries from xtab. exportfs is executed when the nfsd daemon is started with the -r argument. On 2.6 kernels the exportfs utility communicates with the rpc.mountd daemon through files in the /var/lib/nfs/ directory and does not talk to the kernel directly. Without parameters, it displays the list of currently exported file systems.

exportfs parameters:

  • [client:directory-name] - add or remove the specified file system for the specified client
  • -v - display more information
  • -r - re-export all directories (synchronize /etc/exports and /var/lib/nfs/xtab)
  • -u - remove from the list of exported
  • -a - add or remove all file systems
  • -o - options separated by commas (similar to the options used in /etc/exports; i.e. you can change the options of already mounted file systems)
  • -i - do not use /etc/exports when adding, only current command line options
  • -f - reset the list of exported systems in kernel 2.6;
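A few usage examples, reusing the /archiv1 export and the 10.0.0.1 client from the earlier example (the exact combinations are illustrative):

exportfs -v                        # list currently exported file systems with their options
exportfs -ra                       # re-export everything, synchronizing /etc/exports and /var/lib/nfs/xtab
exportfs -o ro 10.0.0.1:/archiv1   # temporarily export /archiv1 read-only to 10.0.0.1
exportfs -u 10.0.0.1:/archiv1      # remove that export again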

NFS client

Before accessing a file on a remote file system, the client (the client OS) must mount it and receive a pointer to it from the server. NFS mounting can be done using the mount command or with one of the proliferating automatic mounters (amd, autofs, automount, supermount, superpupermount). The mounting process is well illustrated above.

On NFS clients there is no need to run any daemons; the client functions are performed by the kernel module kernel/fs/nfs/nfs.ko, which is used when mounting a remote file system. Exported directories from the server can be mounted on the client in the following ways:

  • manually using the mount command
  • automatically at boot, when mounting file systems described in /etc/fstab
  • automatically using the autofs daemon

I will not cover the third method, autofs, in this article because of the amount of information involved. Perhaps there will be a separate description in future articles.

Mounting the Network File System with the mount command

An example of using the mount command is presented in the post. Here I will look at an example of the mount command for mounting an NFS file system:

FILES ~ # mount -t nfs archiv:/archiv-small /archivs/archiv-small
FILES ~ # mount -t nfs -o ro archiv:/archiv-big /archivs/archiv-big
FILES ~ # mount
.......
archiv:/archiv-small on /archivs/archiv-small type nfs (rw,addr=10.0.0.6)
archiv:/archiv-big on /archivs/archiv-big type nfs (ro,addr=10.0.0.6)

The first command mounts the exported directory /archiv-small on the server archiv to the local mount point /archivs/archiv-small with the default options (i.e. read and write). Although the mount command in recent distributions can work out which file system type is being used even without the type being specified, it is still desirable to give the -t nfs parameter. The second command mounts the exported directory /archiv-big on the server archiv to the local directory /archivs/archiv-big with the read-only option (ro). The mount command without parameters clearly shows us the mounting result. Besides the read-only option (ro), other basic options can be specified when mounting NFS (a combined example follows the list of options below):

  • nosuid- This option prohibits executing programs from the mounted directory.
  • nodev(no device - not a device) - This option prohibits the use of character and block special files as devices.
  • lock (nolock)- Allows NFS locking (default). nolock disables NFS locking (does not start the lockd daemon) and is useful when working with older servers that do not support NFS locking.
  • mounthost=name- The name of the host on which the NFS mount daemon is running - mountd.
  • mountport=n - Port used by the mountd daemon.
  • port=n- port used to connect to the NFS server (default is 2049 if the rpc.nfsd daemon is not registered on the RPC server). If n=0 (default), then NFS queries the portmap on the server to determine the port.
  • rsize=n(read block size - read block size) - The number of bytes read at a time from the NFS server. Standard - 4096.
  • wsize=n(write block size - write block size) - The number of bytes written at a time to the NFS server. Standard - 4096.
  • tcp or udp- To mount NFS, use the TCP or UDP protocol, respectively.
  • bg- If you lose access to the server, try again in the background so as not to block the system boot process.
  • fg- If you lose access to the server, try again in priority mode. This parameter may block the system boot process by repeating mount attempts. For this reason, the fg parameter is used primarily for debugging.
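As an illustration only (the server name archiv and the mount point follow the earlier examples; the particular option values are arbitrary):

FILES ~ # mount -t nfs -o nosuid,nodev,tcp,rsize=32768,wsize=32768,bg archiv:/archiv-small /archivs/archiv-small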

Options affecting attribute caching on NFS mounts

File attributes, stored in (inodes), such as modification time, size, hard links, owner, typically change infrequently for regular files and even less frequently for directories. Many programs, such as ls, access files read-only and do not change file attributes or content, but waste system resources on expensive network operations. To avoid wasting resources, you can cache these attributes. The kernel uses the modification time of a file to determine whether the cache is out of date by comparing the modification time in the cache and the modification time of the file itself. The attribute cache is periodically updated in accordance with the specified parameters:

  • ac (noac) (attribute cache - attribute caching) - Allows attribute caching (the default). Although the noac option slows down the server, it avoids attribute staleness when multiple clients are actively writing information to a common hierarchy.
  • acdirmax=n (attribute cache directory file maximum- maximum attribute caching for a directory file) - The maximum number of seconds that NFS waits before updating directory attributes (default 60 sec.)
  • acdirmin=n (attribute cache directory file minimum- minimum attribute caching for a directory file) - Minimum number of seconds that NFS waits before updating directory attributes (default 30 sec.)
  • acregmax=n (attribute cache regular file maximum- attribute caching maximum for a regular file) - The maximum number of seconds that NFS waits before updating the attributes of a regular file (default 60 sec.)
  • acregmin=n (attribute cache regular file minimum- minimum attribute caching for a regular file) - Minimum number of seconds that NFS waits before updating the attributes of a regular file (default 3 seconds)
  • actimeo=n (attribute cache timeout - attribute caching timeout) - Replaces the values for all the above options. If actimeo is not specified, the above options take their default values.

NFS Error Handling Options

The following options control what NFS does when there is no response from the server or when I/O errors occur:

  • fg(bg) (foreground- foreground, background- background) - Attempts to mount a failed NFS in the foreground/background.
  • hard (soft) - When a major timeout is reached, hard prints the message "server not responding" to the console and keeps retrying. With the soft option, a timeout instead reports an I/O error to the program that called the operation. (It is recommended not to use the soft option.)
  • nointr (intr) (no interrupt- do not interrupt) - Does not allow signals to interrupt file operations in a hard-mounted directory hierarchy when a large timeout is reached. intr- enables interruption.
  • retrans=n (retransmission value- retransmission value) - After n small timeouts, NFS generates a large timeout (default 3). A large timeout stops operations or prints a "server not responding" message to the console, depending on whether the hard/soft option is specified.
  • retry=n (retry value- retry value) - The number of minutes the NFS service will repeat mount operations before giving up (default 10000).
  • timeo=n (timeout value- timeout value) - The number of tenths of a second the NFS service waits before retransmitting in case of RPC or a small timeout (default 7). This value increases with each timeout up to a maximum of 60 seconds or until a large timeout occurs. In the case of a busy network, a slow server, or when the request is passing through multiple routers or gateways, increasing this value may improve performance.

Automatic NFS mount at boot (description of file systems in /etc/fstab)
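A hedged /etc/fstab example (the server name, export path, mount point and option values here reuse the earlier examples and are purely illustrative):

archiv:/archiv-small  /archivs/archiv-small  nfs  rw,hard,intr,bg,rsize=32768,wsize=32768  0  0

After adding such a line, mount -a (or a reboot) mounts the share; the bg option keeps a temporarily unreachable server from blocking the boot process.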

You can select the optimal timeo for a specific value of the transmitted packet (rsize/wsize values) using the ping command:

FILES ~ # ping -s 32768 archiv
PING archiv.DOMAIN.local (10.0.0.6) 32768(32796) bytes of data.
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=1 ttl=64 time=0.931 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=2 ttl=64 time=0.958 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=3 ttl=64 time=1.03 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=4 ttl=64 time=1.00 ms
32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=5 ttl=64 time=1.08 ms
^C
--- archiv.DOMAIN.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.931/1.002/1.083/0.061 ms

As you can see, when sending a packet of 32768 bytes (32 KB), its round-trip time from the client to the server and back is about 1 millisecond. If this time exceeds 200 ms, you should think about increasing the timeo value so that it is three to four times greater than the measured exchange time. It is advisable to run this test during heavy network load.

Launching NFS and setting up Firewall

The note was copied from the blog http://bog.pp.ru/work/NFS.html, for which many thanks!!!

Running the NFS server, mount, lock, quota and status daemons on "correct" (fixed) ports (for a firewall)

  • It is advisable to first unmount all resources on clients
  • stop and disable rpcidmapd from starting if you do not plan to use NFSv4: chkconfig --level 345 rpcidmapd off service rpcidmapd stop
  • if necessary, allow the portmap, nfs and nfslock services to start:
    chkconfig --levels 345 portmap on   # on newer systems: rpcbind
    chkconfig --levels 345 nfs on
    chkconfig --levels 345 nfslock on
  • if necessary, stop the nfslock and nfs services, start portmap/rpcbind and unload the modules:
    service nfslock stop
    service nfs stop
    service portmap start   # service rpcbind start
    umount /proc/fs/nfsd
    service rpcidmapd stop
    rmmod nfsd
    service autofs stop   # it will need to be started again later
    rmmod nfs
    rmmod nfs_acl
    rmmod lockd
  • open the ports in the firewall (see the iptables sketch after this list):
    • for RPC: UDP/111, TCP/111
    • for NFS: UDP/2049, TCP/2049
    • for rpc.statd: UDP/4000, TCP/4000
    • for lockd: UDP/4001, TCP/4001
    • for mountd: UDP/4002, TCP/4002
    • for rpc.rquota: UDP/4003, TCP/4003
  • for the rpc.nfsd server, add the line RPCNFSDARGS="--port 2049" to /etc/sysconfig/nfs
  • for the mount server, add the line MOUNTD_PORT=4002 to /etc/sysconfig/nfs
  • to configure rpc.rquota for new versions, you need to add the line RQUOTAD_PORT=4003 to /etc/sysconfig/nfs
  • to configure rpc.rquota on older versions (you must have the quota package 3.08 or newer), instead add the following to /etc/services:
    rquotad 4003/tcp
    rquotad 4003/udp
  • check that /etc/exports is correct
  • start the rpc.nfsd, mountd and rpc.rquota services (rpcsvcgssd and rpc.idmapd are started at the same time, unless you removed them): service nfsd start, or on newer versions service nfs start
  • for the blocking server for new systems, add the lines LOCKD_TCPPORT=4001 LOCKD_UDPPORT=4001 to /etc/sysconfig/nfs
  • for the lock server for older systems, add directly to /etc/modprobe[.conf]: options lockd nlm_udpport=4001 nlm_tcpport=4001
  • bind the rpc.statd status server to port 4000 by adding STATD_PORT=4000 to /etc/sysconfig/nfs (on older systems, run rpc.statd with the -p 4000 switch in /etc/init.d/nfslock)
  • start the lockd and rpc.statd services: service nfslock start
  • make sure that all ports are bound normally using "lsof -i -n -P" and "netstat -a -n" (some of the ports are used by kernel modules that lsof does not see)
  • if before the “rebuilding” the server was used by clients and they could not be unmounted, then you will have to restart the automatic mounting services on the clients (am-utils, autofs)
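A hedged iptables sketch for opening the ports listed above (adjust the source addresses and interfaces to your own network; firewalls managed by other front-ends need the equivalent rules):

for p in 111 2049 4000 4001 4002 4003; do
    iptables -A INPUT -p tcp --dport $p -j ACCEPT
    iptables -A INPUT -p udp --dport $p -j ACCEPT
done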

Example NFS server and client configuration

Server configuration

If you want to make your NFS shared directory open and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to set permissions for the user "nobody" in the group "nobody", you could do the following:

ARCHIV ~ # cat /etc/exports
# Read and write access for the client on 192.168.0.100, with rw access mapped to user 99 with gid 99
/files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

This also means that if you want to allow access to the specified directory, nobody:nobody must be the owner of the shared directory:
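Presumably (the /files path comes from the example above):

ARCHIV ~ # chown nobody:nobody /files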

man mount
man exports
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/nfs_perf.htm - NFS performance from IBM.

Best regards, McSim!


