NVMe over Fabrics (NVMe-oF) is a technology that extends the performance and efficiency of NVMe devices across network fabrics such as TCP, RDMA, and Fibre Channel. With NVMe-oF, remote storage devices can be accessed just like local NVMe drives, offering low-latency and high-throughput storage solutions.
In this blog, we will explore how to configure an NVMe-oF Linux target as well as the Linux client (host), and how to use the nvme-cli tool to manage, discover, and connect to NVMe devices. Whether you're configuring a storage solution for a data center or a lab setup, this guide walks you through the essential steps.
Before diving into the technical setup, let's understand the roles involved: the NVMe-oF target is the system that exposes one or more NVMe namespaces over the network, while the client (host) is the system that discovers and connects to those namespaces and uses them like local drives.
Now, let's get started with setting up NVMe over Fabrics using NVMe/TCP.
Step 1: Install nvme-cli
First, you need to install the nvme-cli tool, which allows you to interact with NVMe devices. You can install it using the package manager:
$ sudo apt install nvme-cli
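You can confirm the installation by printing the tool's version:
$ nvme version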
Step 2: Ensure a Supported Kernel Version
Make sure that your Linux kernel is updated to version 5.0 or above to support NVMe over TCP. Update your system with:
$ sudo apt update
$ sudo apt upgrade
Note: NVMe/TCP host and NVM subsystem software must be installed in order to run NVMe/TCP. This software ships with the Linux kernel (v5.0 and later) and SPDK (v19.01 and later), and is also available in commercial NVMe/TCP target devices.
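To quickly check which kernel you are running before proceeding, print the kernel release:
$ uname -r
# Should report 5.0 or higher for NVMe/TCP support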
Step 3: Setting Up the Linux Target
Next, you will set up the NVMe over TCP target on a Linux system:
$ sudo modprobe nvme_tcp
$ sudo modprobe nvmet
$ sudo modprobe nvmet-tcp
$ sudo mkdir /sys/kernel/config/nvmet/subsystems/nvmet-test
$ cd /sys/kernel/config/nvmet/subsystems/nvmet-test
$ echo 1 | sudo tee -a attr_allow_any_host > /dev/null
$ sudo mkdir namespaces/1
$ cd namespaces/1/
$ echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
$ echo 1 | sudo tee -a enable > /dev/null
If you don’t have access to an NVMe device, you can use a null block device instead:
$ sudo modprobe null_blk nr_devices=1
$ sudo ls /dev/nullb0
$ echo -n /dev/nullb0 | sudo tee -a device_path > /dev/null
$ echo 1 | sudo tee -a enable > /dev/null
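Note that writing 1 to attr_allow_any_host above lets any host connect to this subsystem. If you prefer an allow-list instead, the nvmet configfs tree also exposes a hosts directory; a rough sketch follows (the host NQN below is only a placeholder, take the real value from the client's /etc/nvme/hostnqn):
$ echo 0 | sudo tee -a /sys/kernel/config/nvmet/subsystems/nvmet-test/attr_allow_any_host > /dev/null
$ sudo mkdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:client-host-nqn
$ sudo ln -s /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.nvmexpress:uuid:client-host-nqn /sys/kernel/config/nvmet/subsystems/nvmet-test/allowed_hosts/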
Then, configure the network port for the NVMe target:
$ sudo mkdir /sys/kernel/config/nvmet/ports/1
$ cd /sys/kernel/config/nvmet/ports/1
$ echo 192.168.1.29 | sudo tee -a addr_traddr > /dev/null
# Replace with target IP
$ echo tcp | sudo tee -a addr_trtype > /dev/null
$ echo 8009 | sudo tee -a addr_trsvcid > /dev/null
$ echo ipv4 | sudo tee -a addr_adrfam > /dev/null
$ sudo ln -s /sys/kernel/config/nvmet/subsystems/nvmet-test/ /sys/kernel/config/nvmet/ports/1/subsystems/nvmet-test
Once done, save this as a .sh file and make it executable:
$ chmod +x filename.sh
$ ./filename.sh
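If you plan to recreate the target repeatedly, the commands above can be collected into a single script. The sketch below assumes the same subsystem name (nvmet-test), backing device (/dev/nvme0n1), IP address (192.168.1.29), and port (8009) used in this guide, and is meant to be run as root:
#!/bin/bash
# Sketch: set up an NVMe/TCP target via configfs (run as root)
set -e

SUBSYS=nvmet-test
DEVICE=/dev/nvme0n1   # backing block device
TRADDR=192.168.1.29   # target IP
TRSVCID=8009          # target port

modprobe nvme_tcp
modprobe nvmet
modprobe nvmet-tcp

# Create the subsystem and its first namespace
mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS
echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host
mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1
echo -n $DEVICE > /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/1/enable

# Create the TCP port and expose the subsystem on it
mkdir -p /sys/kernel/config/nvmet/ports/1
echo $TRADDR > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo $TRSVCID > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS /sys/kernel/config/nvmet/ports/1/subsystems/$SUBSYS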
Step 4: Verify TCP Port Configuration
Use the dmesg command to confirm that the TCP port is enabled:
$ dmesg | grep "nvmet_tcp"
Example output:
Now that the target is set up, you can configure the client to discover and connect to the NVMe device.
Step 1: Install nvme-cli
As on the target, install the nvme-cli tool on the client machine so you can interact with NVMe devices:
$ sudo apt install nvme-cli
Step 2: Ensure a Supported Kernel Version
The client kernel must also be version 5.0 or above to support NVMe over TCP. Update your system with:
$ sudo apt update
$ sudo apt upgrade
Step 3: Setting Up the Linux Client (Host)
Load the NVMe host modules, then discover the subsystems exposed by the target:
$ sudo modprobe nvme
$ sudo modprobe nvme-tcp
$ sudo nvme discover -t tcp -a 192.168.1.29 -s 8009
# Replace with target IP
Example output:
To connect to the NVMe target, use the nvme connect command:
$ sudo nvme connect -t tcp -n nvmet-test -a 192.168.1.29 -s 8009 # Replace with target IP
Example output:
At this point, you have a remote NVMe block device that can be read and written just like a local high-performance block device.
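As a quick sanity check, you can list the new namespace and run a simple read test. The device name used below, /dev/nvme1n1, is only an example; the remote namespace shows up as the next free /dev/nvmeXnY node on your host:
$ sudo nvme list
$ lsblk /dev/nvme1n1
$ sudo dd if=/dev/nvme1n1 of=/dev/null bs=4k count=1024 status=progress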
The nvme-cli tool is a powerful utility for managing and monitoring NVMe devices. It supports a wide range of commands that allow you to list devices, check drive health, and manage namespaces.
If you haven't already installed nvme-cli, you can do so with:
$ sudo apt install nvme-cli
To verify the installation:
$ which nvme
Example output:
Note: nvme-cli requires root access for most commands to communicate directly with your drive, so use it from an account with sudo privileges.
To list all NVMe devices and namespaces on your machine:
$ sudo nvme list
Example output:
This command will display details like the device node, serial number, model, namespace, and usage.
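If you need machine-readable output, for example for scripts, nvme-cli can also print the listing as JSON:
$ sudo nvme list -o json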
Viewing Detailed Drive Information
For detailed information about a specific NVMe drive:
$ sudo nvme id-ctrl /dev/nvme0n1
Example output:
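The id-ctrl output is quite long, so it is common to filter just the fields of interest or request JSON. The field names below (mn, sn, fr for model, serial, and firmware) are taken from the plain-text output format:
$ sudo nvme id-ctrl /dev/nvme0n1 | grep -E "^(mn|sn|fr) "
$ sudo nvme id-ctrl /dev/nvme0n1 -o json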
To check the overall health of the drive, use the smart-log command:
$ sudo nvme smart-log /dev/nvme0n1
Example output:
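You can likewise narrow the output to the values you typically watch; the field names below (critical_warning, temperature, percentage_used, media_errors) come from the standard smart-log output:
$ sudo nvme smart-log /dev/nvme0n1 | grep -E "critical_warning|temperature|percentage_used|media_errors"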
After setting up the target, you can discover and connect to it:
$ sudo nvme discover -t tcp -a 192.168.1.29 -s 8009
# Replace with target IP
Example output:
discover: The primary command used to locate NVMe subsystems available on the specified network target.
-t tcp: Specifies the transport protocol, in this case TCP (Transmission Control Protocol), which enables communication with the remote NVMe storage over the network.
-a 192.168.1.29: Defines the IP address of the target host where the NVMe devices are located.
-s 8009: Specifies the port (8009 in this case) that the target NVMe subsystem is using to accept connections.
Wireshark Screenshot:
To establish the connection:
$ sudo nvme connect -t tcp -n nvmet-test -a 192.168.1.29 -s 8009
# Replace with target IP
connect: This command establishes a connection to a remote NVMe subsystem that has been previously discovered.
-t tcp: Specifies the transport protocol, in this case TCP (Transmission Control Protocol), for communicating with the remote NVMe subsystem over the network.
-n nvmet-test: Defines the NVMe subsystem name (in this example, nvmet-test) that you are trying to connect to on the target.
-a 192.168.1.29: Specifies the IP address of the target system where the NVMe subsystem is hosted.
-s 8009: Identifies the port (8009) on which the NVMe subsystem is listening for incoming connections.
Example output:
Wireshark Screenshot:
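To double-check which fabric controllers the host is now attached to, and over which transport and address, you can also inspect the subsystem topology:
$ sudo nvme list-subsys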
To Disconnect from the Linux Target:
$ sudo nvme disconnect -n (subnqn)
Example output:
To Disconnect from All Linux Targets at Once:
nvme disconnect-all - Disconnect from all connected Fabrics controllers.
$ sudo nvme disconnect-all
Example output:
Explanation:
In the image above, the /dev/ng0n1 device (Samsung SSD 980 500GB) represents the local drive, while /dev/ng1n1 and /dev/ng2n1 (both labeled as Linux) are external devices connected via the nvme connect command. When executing the nvme disconnect-all command, or using nvme disconnect -n (subnqn), only the local NVMe drive remains connected, and the external devices are disconnected.
$ echo "hello world" | sudo nvme write /dev/nvme0n1 --data-size=520 --prinfo=1
write: Success
echo "hello world"
: This outputs the string "hello world" and pipes it into the NVMe write command.nvme write /dev/nvme0n1
: Writes data to the specified NVMe device (/dev/nvme0n1).--data-size=520
: Specifies the size of the data to write (520 bytes in this case). Adjust the size as needed for your data.--prinfo=1
: Specifies the protection information (PI) format for the write operation. This option manages metadata associated with data integrity.This command writes the string "hello world" to the NVMe device and returns "write: Success" upon completion.
$ sudo nvme read /dev/nvme0n1 --data-size=520 --prinfo=1
Output:
hello world
read: Success
nvme read /dev/nvme0n1: Reads data from the specified NVMe device (/dev/nvme0n1).
--data-size=520: Specifies the size of the data to read (520 bytes).
--prinfo=1: Uses the same protection information (PI) format as in the write operation, ensuring consistency.
This command reads the data from the NVMe device, in this case retrieving the "hello world" string, and outputs "read: Success" after the operation completes.
With these commands, you can manage NVMe devices over fabrics using TCP.
This concludes our introduction to setting up NVMe-oF targets using NVMe/TCP and using the nvme-cli tool to manage NVMe devices. We hope this guide has helped you understand the basic concepts and commands needed to get started with NVMe over Fabrics!
Next Blog Topic: Setting Up an SPDK Target and Its Commands
Stay tuned for the next blog in this series, where we will explore how to set up an SPDK target and the commands related to it.