Something that is making me very angry on my current project is the confusion between DAS, NAS, and SAN technologies. The worst part is that I'm working with these people on something not related to storage infrastructure at all (development architecture), and yet the people dealing with the storage infrastructure are the ones who don't know what the hell they're talking about. In particular, the hosting provider that does all of our storage infrastructure work doesn't know what the differences are. Oh, and don't get me started on a VMware paper we had that didn't know the difference either. It just drives me nuts.
For those of you keeping score, here's the breakdown.
DAS = Direct Attached Storage. These are disks that are physically located in your host machine.
NAS = Network Attached Storage. NAS is file based: for example, a CIFS or NFS share, typically accessed over TCP/IP. The NAS device “owns” the data on it; that is, the NAS device administers the filesystem. For example, you connect to a NAS device from a Windows machine by accessing \\servername\share.
SAN = Storage Area Network. SAN is block based. This is when LUNs (logical unit numbers) are presented to a host. The host “owns” the data: the host is in charge of partitioning, formatting, and access to the LUN. You can access a SAN via two protocols: iSCSI (SCSI over TCP/IP) and/or Fibre Channel (FC).
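To make the ownership difference concrete, here's a rough sketch of what each looks like from a Linux host. The hostnames, IPs, IQNs, share paths, and device names are all made up for illustration; your environment will differ.

```shell
# Hypothetical NAS access: mount an NFS share. The filer owns and
# administers the filesystem; the client just speaks the NFS file
# protocol over TCP/IP.
mount -t nfs filer01:/vol/projects /mnt/projects

# Hypothetical SAN access over iSCSI: discover and log in to the target,
# after which the LUN appears as a raw block device that THIS host must
# partition and format itself.
iscsiadm -m discovery -t sendtargets -p 10.0.0.50
iscsiadm -m node -T iqn.1992-08.com.example:target1 -p 10.0.0.50 --login
mkfs.ext4 /dev/sdb    # the host, not the array, owns the filesystem
mount /dev/sdb /mnt/lun0
```

Note that in the NAS case the client never sees a block device at all, while in the SAN case the array has no idea what filesystem (if any) lives on the LUN.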
I’m so sick of seeing people talk about “iSCSI NAS.” There’s no such thing: in a NAS scenario you are sending CIFS or NFS protocols over TCP/IP, while in a SAN solution you’re sending SCSI commands over TCP/IP. Huge difference.
And yes, you can have a device that serves both NAS and SAN from one filer. This is called Unified Storage. All NetApp devices can do this.
Are we clear now?!