Recently there have been some questions about which storage technology to use on multi-server video projects. Small, single-server deployments are straightforward because local storage is normally a reasonable option and easy to deploy.
The storage strategy for larger projects is more complex and is dictated by many factors: the size of the project, the scope and scale of the existing infrastructure, and, of course, the IT budget. The critical decision is choosing between a storage area network (SAN) and network attached storage (NAS). The factors affecting that decision have changed as both SAN and NAS technology have evolved and virtualization has become a major consideration. Network infrastructure may also be decisive, especially if a Fibre Channel fabric is already in place or 10 Gigabit Ethernet is already in the plan.
Let’s begin by clarifying the difference between a SAN and NAS. Understanding the differences and similarities will go a long way towards understanding where each is useful and appropriate.
A SAN is a block storage device. Any device that exposes its storage externally as a block device falls into this category, such as an external hard drive or Direct Attached Storage (DAS). The same device is called an external hard drive when attached to a desktop and a DAS when attached directly to a server. A SAN adds a layer of networking, generally a switch and a fiber-based cabling system, between the device and the server that is consuming the storage. Common protocols for communicating with block storage include iSCSI and Fibre Channel. In the end, a computer attaching to a block storage device will always see the storage presented as a disk drive.
A NAS is a file storage device. This means that it exposes its storage as a network file system. Any computer attaching to this storage does not see a disk drive but instead sees a file system. Users and servers attach to the NAS primarily using TCP/IP over Ethernet, and the NAS has its own IP address. Common protocols for communicating with file storage devices include NFS, SMB/CIFS, and AFP. What separates block storage and file storage is the type of interface that they present to the outside world.
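The interface difference can be sketched in a few lines. This is a minimal illustration, not a real deployment: a local file stands in for a block device such as /dev/sdb, and a temporary directory stands in for an NFS or SMB mount; the file names are hypothetical.

```python
import os
import tempfile

# Block storage presents raw bytes: the client addresses the device by
# offset and length, with no notion of files. A local file stands in
# for a block device such as /dev/sdb here.
block_dev = tempfile.NamedTemporaryFile(delete=False)
block_dev.write(b"\x00" * 4096)          # a tiny 4 KiB "disk"
block_dev.close()

with open(block_dev.name, "r+b") as dev:
    dev.seek(512)                        # address "sector 1" directly
    dev.write(b"raw sector data")        # the device sees only bytes

# File storage presents a file system: the client addresses data by
# path and name, and the NAS resolves that to blocks internally.
share = tempfile.mkdtemp()               # stands in for an NFS/SMB mount
with open(os.path.join(share, "camera1.mkv"), "wb") as f:
    f.write(b"video payload")

print(os.listdir(share))                 # ['camera1.mkv']
```

The point is that the block client must manage its own layout on the raw device (normally by formatting it with a file system), while the file client simply asks the server for a named file.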
Both types have the option of providing extended features beneath the "demarcation point" before they hand off the storage to the outside. Both may (or may not) provide RAID, logical volume management, monitoring, etc.
While both can potentially sit on a network and allow multiple computers to attach to them, only a file storage device has the ability to arbitrate that access. This is very important and cannot be glossed over. Block storage appears as a disk drive, and without a clustered file system only one server can safely write or change the data at a time. A file storage device, on the other hand, has natural arbitration: the NAS itself handles the communication for access to the file system.
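One visible form of that arbitration is file locking, which file servers can enforce but a raw block device cannot. The sketch below is a local simulation under stated assumptions: two handles to one temporary file stand in for two VMS servers contending for the same recording, and POSIX advisory locks stand in for the server-side arbitration.

```python
import fcntl
import tempfile

# Two independent opens of the same file simulate two VMS servers
# contending for one recording on a shared file system.
path = tempfile.NamedTemporaryFile(delete=False).name

server_a = open(path, "wb")
fcntl.flock(server_a, fcntl.LOCK_EX)          # server A locks the file

server_b = open(path, "wb")
try:
    # Server B's non-blocking attempt is refused while A holds the lock.
    fcntl.flock(server_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    arbitrated = False
except BlockingIOError:
    arbitrated = True                         # access was arbitrated

fcntl.flock(server_a, fcntl.LOCK_UN)          # A releases; B may proceed
print("arbitrated:", arbitrated)              # arbitrated: True
```

A block device offers no equivalent mechanism: if two servers mount the same LUN with an ordinary file system, nothing stops them from corrupting each other's writes.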
This distinction is what makes a NAS attractive for VMS deployments. Conventionally, a SAN was used to provide the performance needed on large-scale VMS projects. Because a SAN transfers data over Fibre Channel rather than the network file protocols a NAS requires, it reduces latency and improves performance. However, using block storage adds complexity, since each server must be statically mapped to a defined RAID target, or LUN in SCSI terms.
Thus, the larger the system, the more mappings exist. When an outage occurs, such as a VMS server failing or a network disruption, the LUNs must be re-mapped from the failed server to another. This results in the loss of access to recorded video while the LUNs are re-mapped to the new server.
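The bookkeeping involved can be sketched as a simple table of server-to-LUN assignments and a failover step. All names here (vms-01, lun-0, and so on) are hypothetical, and the function is an illustration of the administrative re-mapping, not any vendor's API.

```python
# Hypothetical static server-to-LUN mappings in a block-storage
# deployment. Each LUN is owned by exactly one server at a time.
lun_map = {
    "vms-01": ["lun-0", "lun-1"],
    "vms-02": ["lun-2", "lun-3"],
    "vms-03": ["lun-4"],
}

def fail_over(mapping, failed, standby):
    """Move the failed server's LUNs to a standby server. Video on
    those LUNs is inaccessible until the move completes and the
    standby rescans its storage."""
    orphaned = mapping.pop(failed)
    mapping.setdefault(standby, []).extend(orphaned)
    return orphaned

moved = fail_over(lun_map, "vms-02", "vms-03")
print(moved)                  # ['lun-2', 'lun-3'] offline during the move
print(lun_map["vms-03"])      # ['lun-4', 'lun-2', 'lun-3']
```

Every server added to the system adds rows to this table, and every failover requires editing it, which is exactly the operational burden the article describes.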
When a NAS storage element is used, no re-mapping of video storage is necessary, since the NAS can provide a single volume that all VMS servers access simultaneously on demand. Think of this as a big C: drive that is accessible via an IP address.
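The shared-volume model can be sketched the same way. In this simulation a temporary directory stands in for the NFS or SMB export, and the server names are hypothetical; the point is that every server writes into, and can read from, the one common volume, so a failover needs no storage changes at all.

```python
import os
import tempfile

# A temp directory stands in for the single shared NAS volume that
# every VMS server mounts at the same path.
shared_volume = tempfile.mkdtemp()

servers = ["vms-01", "vms-02", "vms-03"]
for server in servers:
    # Each server records into the common volume; the NAS, not the
    # servers, arbitrates the concurrent writes.
    with open(os.path.join(shared_volume, f"{server}.mkv"), "wb") as f:
        f.write(b"recorded video")

# If any server fails, the survivors already see every recording;
# there is no LUN to re-map and no rescan to wait for.
print(sorted(os.listdir(shared_volume)))
```

Contrast this with the LUN table above a block deployment would carry: here there is no ownership table to maintain, because ownership of the volume is shared by design.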