Building a better NAS (for use with Macs), part 3 - Getting started

Well, there's not much more to say about the hardware, which was covered in pretty good detail in the [first article](/groups/infrageeks/weblog/540bc/Building_a_better_NAS_(for_use_with_Macs__part_1_-_Introduction.html) of the series. Setting up the box presented a few different choices, starting with which operating system/distribution was the most appropriate to my needs.

Basically the choices boiled down to:

  • Solaris 10

  • OpenSolaris

  • Nexenta

Nexenta was taken off the list quickly since the free version is limited in the amount of capacity you can use, and I didn't see much added value for my use: I'll be doing most of the management from the command line, and the ZFS commands are so easy to learn and use that the web interface seems almost silly. That said, for a serious production environment the interface developed by Sun for its Unified Storage bays is nothing short of astounding, with the detail available in its real-time reporting.

Between Solaris 10 and OpenSolaris the decision was a bit harder, but OpenSolaris won out with its more open approach, mostly to do with managing package installations and updates.

Installing OpenSolaris is a breeze and, other than the issue with the network card noted in the first article, it's pretty much next, next, next. There are a couple of little points that are worth noting though.

Creating user accounts

If you choose to create user accounts during the installation process, this will activate the Solaris RBAC (Role-Based Access Control) mode, which is massive overkill for my needs and introduces a number of small but potentially annoying issues. RBAC goes beyond the sudo model for accessing root privileges, and you use pfexec to execute commands with superuser privileges. Not a big deal, but given that this is primarily a storage server on a private network, I went the simple route and use only the root account for all management. I did open the huge security hole of permitting root access via ssh (edit /etc/ssh/sshd_config), which I generally frown upon, but for this use it's reasonable.
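For reference, the change is just one line in the ssh configuration plus a service restart; this is a sketch assuming the stock OpenSolaris ssh service, and all the usual warnings about letting root in over ssh still apply.

# edit /etc/ssh/sshd_config and change the PermitRootLogin line to read:
#   PermitRootLogin yes
# then restart the ssh service so the change is picked up
svcadm restart svc:/network/ssh:default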

I chose to use ZFS as the filesystem for the boot volume in order to profit from snapshots, and to eventually set up a mirror of the boot drive.
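I haven't actually built the mirror yet, but the general shape of it would be something like the following; the pool name rpool is the installer default, and the two device names are purely hypothetical stand-ins for the real boot disks, so check yours with zpool status before trying this.

# attach a second disk to the boot pool to turn it into a mirror
# (c5d0s0 and c6d0s0 are hypothetical - use your own device names)
zpool attach rpool c5d0s0 c6d0s0

# put GRUB on the new disk so the machine can boot from either half of the mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0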

To avoid running into the problem I had with the network card, I strongly suggest that you burn a copy of the OpenSolaris CD and run the driver check application on your target hardware before making any final decisions. It's a LiveCD type of install, so you can boot off the CD, run the check, and make sure that all of the necessary drivers are included in the base install package.

I did run into some issues with the motherboard and determining the boot volume. For reasons that remain mysterious, the P5K will not take the latest BIOS update, and as a result it ignores my requests to set the boot drive to the PATA disk, so I have to hammer on F8 at reboot time to tell it which drive to boot from. But given that the server rarely requires a reboot (and even more rarely an unattended one), and I've got it on a UPS, this is not a big deal for me.

So the basic install has four 1 TB SATA disks, all connected to the onboard SATA ports, which I identified using the format command to list all available disk devices. Setting up the basic storage pool is simplicity itself using the zpool command.

zpool create siovale raidz c8d0 c9d0 c10d0 c11d0
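It's worth a quick sanity check afterwards that all four disks really did end up in the raidz vdev and that the pool is healthy:

# show the layout of the pool and the state of each disk
zpool status siovale

# show the total size and how much is in use
zpool list siovale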

Now I have my basic pool from which I will start creating ZFS filesystems. I've done some testing on the environment, and I could easily host user home directories on the server directly for any machine on a wired connection, but since the media server is too far away and I can't run wires, I'm limited to WiFi, which is (for me) not sufficiently fast or reliable for this task.

So I created a few different filesystems shared over NFS (something that OS X supports very nicely). Creating filesystems is another painfully simple operation, and it doesn't hard allocate any of your space, so each filesystem has access to the total space in the pool. This means you don't have to make early decisions with long-term impact at the very moment when you have the least reliable data to base them on.

I started with the following filesystems:

zfs create siovale/archive
zfs create siovale/media
zfs create siovale/portable

archive will be used for all of the stuff I have cluttering up various drives that I really don’t need on an immediate basis but that I can’t bring myself to throw away.

media will be used for backing up the media drive (more in the next article).

portable is my generic workspace for current data that I’m working on.
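And since nothing is hard allocated, I can always come back later and put a cap on any of these once I have some real usage data; a quota is a single command and can be changed or removed at any time (the 500G value here is just an example).

# cap the archive filesystem at 500 GB (the value is only an example)
zfs set quota=500G siovale/archive

# and take the cap off again if it turns out to be a bad idea
zfs set quota=none siovale/archive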

Sharing can be turned on as a property at the top of the pool and inherited by the filesystems underneath, but for the moment I'm setting it manually on each share. There are a number of little things to take care of at the beginning that aren't immediately obvious. The first is that you want to set the rights on the filesystem so that when you're connected across the network you can write to the volume. This is the old-fashioned chmod command on the directory representing the filesystem. Example:

chmod a+rwx /siovale/archive

The next step is turning NFS sharing on. The simplest (and least secure) approach is to set the sharenfs property to rw with the command:

zfs set sharenfs=rw siovale/archive
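To check that the share actually went live you can look at the property and at the server's share table, and from the Mac side showmount will show you what the server is exporting (shemhazai being the name of my server):

# on the server: confirm the property and that the filesystem is being shared
zfs get sharenfs siovale/archive
share

# from the Mac: list what the server is exporting
showmount -e shemhazai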

Now using NFS requires a bit of a shift in perception if you're used to sharing files via AFP or SMB. Those are protocols that demand user authentication in order to be given access to a share. NFS really is the “Network File System”, and as such it has a different way of looking at things. Basically the rule is that if the rw flag is set, anyone who can get to the share over the IP network can use it (subject to the security settings or ACLs on a given file or directory). If you want to lock it down a little more (say, if you let other people use your network), you can restrict access in a number of different ways, but always based on the identity of the connecting computer. It's not perfect or bulletproof security, but it's a reasonable compromise, especially since I would normally put this on a completely private storage network. I'm not going to go to the bother of setting up private VLANs on my home network just yet, but I might later on, just to see how well it works (and whether the Airport Extreme switches support this or not).

If you have multiple machines on your network you also need to review how you want to handle security and sharing. When you connect from your Mac, what the NFS server looks at to determine whether you have rights to something is your UID. The problem you may run into is that on a Mac straight out of the box, the first user is always assigned UID 501, so the server will see everyone as the same person when they connect. I'm running OS X Server with portable accounts, so I manage individual UIDs on all of my machines from one context, but you might need to look into manually changing UIDs on user accounts if the default 501 is used on multiple machines for different people. The side issue that goes along with this is that Solaris knows nothing about any local groups you may have set up to simplify sharing, so you'll need to recreate user accounts and groups manually on the server if you need those kinds of advanced sharing options. I haven't yet looked into getting OpenSolaris to work as an LDAP client to Open Directory on the OS X Server, but theoretically that should be possible.
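If you do need to line things up, the starting point is finding out what UID your Mac account actually has and then creating a matching user (and group) on the OpenSolaris side; the names and the 1001 group ID below are just placeholders for whatever makes sense on your network.

# on the Mac: find out your UID
id -u

# on the server: create a matching group and user
# (the names and numbers are placeholders)
groupadd -g 1001 family
useradd -u 501 -g family alice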

If you manage your own internal DNS and use static IP addresses or DHCP reservations, you can name the only computers that have rights to access a share with the syntax:

zfs set sharenfs=rw=alphaport.infrageeks.com siovale/archive

Yes, the double equals sign is ugly, but that's the way it is. The basic configuration does a reverse lookup on your address and tries to match the fully qualified name of the computer to determine whether you can connect or not, so your DNS had better be set up correctly.
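If more than one machine needs access, the hosts in the access list are separated by colons; the second hostname here is made up purely for the example.

# give two specific machines read/write access (hostnames are examples)
zfs set sharenfs=rw=alphaport.infrageeks.com:macbook.infrageeks.com siovale/archive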

Connecting from a Mac can be done a number of different ways. The first obvious method is simply to use Command-K (Connect to Server…) in the Finder and enter the NFS URL like so:

nfs://shemhazai/siovale/portable

which will mount the share as a new volume. However, Leopard's volume management is a little odd when it comes to NFS shares: while you'll see the volume with the correct name in the sidebar, if you go poking around in the Volumes directory the mounted volume will have the name of the server, with a dash and a number appended for additional shares mounted from the same server. For day-to-day use you might not care about this, but if you intend to do any kind of scripting or automation, you want fixed paths that you can rely on.

But there are other ways to manage this. You can map any NFS share to a directory on the existing HFS filesystem at the command line, but that's not necessarily the easiest approach. Apple has kindly included a simple interface for managing NFS mounts in the Directory Utility (found, of course, in the Utilities folder).
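For completeness, the command line version looks roughly like this; shemhazai and the paths are from my setup, and depending on the server's settings you may need the resvport option so that the Mac connects from a privileged port.

# create a mount point and mount the share by hand
sudo mkdir -p /Volumes/shemhazai/portable
sudo mount -t nfs -o resvport shemhazai:/siovale/portable /Volumes/shemhazai/portable

# and to unmount it again
sudo umount /Volumes/shemhazai/portable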

In the course of playing around with various different settings I've run into a few conflicts, which led me to create a folder in the Volumes folder with the name of the server, under which I map the various shares. In the example given I was testing the ability to map a user home directory to an NFS share under /Users, which works, but it caused SuperDuper! to die when cloning the disk. However, SuperDuper! ignores the contents of the /Volumes directory, so putting mounts in here seemed to be a better solution. For day-to-day access I've dragged the manually created folder into the sidebar for direct access.

The other nice thing about these mountpoints is that Leopard will automatically mount them when the server is available. This means that you can define these mountpoints on a portable: when you're away from your network, selecting the share will just give you an error about an alias not resolving correctly, but the moment you're back on your network, the links are live without any intervention on your part.

Note: Snow Leopard has moved a number of things around, and I haven't yet sorted out just where the NFS mounts are managed.

Next up: Snapshots and other backup goodies