So my OpenFiler VM crashed (OK, you got me: I was trying to ‘tweak’ it). That wasn’t so bad, but I had 25GB worth of ISOs being shared out to my other VMs. This was bad news, but I managed to save the day. Luckily, I always make it a point to add discrete virtual disks (VDIs) in XenCenter when I want to add another share or iSCSI disk in OpenFiler. So maybe I was in luck.

My first step was not so successful: I decided to bring up a quick-and-dirty Debian VM, detach the VDI from the now-defunct OpenFiler VM, and attach it to the Debian VM. After doing the usual:

fdisk -l

…to see what the device name of the VDI was (XenCenter kindly tells you this as well), I tried to mount it. Now, you may or may not know that the mount command is really just a front end: it hands the work off to a mount helper specific to the type of filesystem you’re trying to mount. Often mount can detect what kind of filesystem is on the device and call the correct helper for you. You can also give it a hint by saying something like:

mount /dev/sdc1 /mnt -t xfs

…which in this case tells mount that the device contains an “xfs” filesystem. Use the manual pages, like this:

man mount

…for a full description of this (or any other) command.
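If mount keeps refusing, a couple of stock tools can tell you what’s actually on the device before you start guessing at -t options. A minimal sketch, using the /dev/sdc1 device name from the example above (substitute whatever fdisk -l reported on your system):

```shell
# blkid reads the on-disk signature; on an OpenFiler data disk this
# typically reports TYPE="LVM2_member" rather than "xfs", which is
# exactly why a plain mount of the raw device fails
blkid /dev/sdc1

# 'file -s' inspects the raw block device and gives a similar answer
file -s /dev/sdc1
```

Had I run either of these first, the rest of this story would have been a lot shorter.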

After the mount command kept on burping that it didn’t know the filesystem type, I tried to coerce it, since I knew I had chosen to format the disk with the xfs filesystem type in OpenFiler. Nothing would work; it was almost as if mount was telling me to give up, because this filesystem was toast. I found myself telling the Debian VM that it “just didn’t understand what I wanted”, and that if it didn’t do what I wanted, I would give it the virtual equivalent of the “elevator shaft” treatment. This is what we used to do back in the day to real hardware that wouldn’t comply – a next-to-top-floor drop down the elevator shaft.

My first attempt behind me, I had the brilliant idea of reinstalling a fresh copy of OpenFiler, because surely it would know what I wanted; after all, this VDI was created by OpenFiler. Lucky for me, these things take minutes using XenServer: you can do things, all from the comfort of your ergonomically correct office chair, that just a few short years ago would have earned you two weeks of sick leave for even suggesting them.

So there I was, issuing the mount command again, and again the mount command just didn’t get it (it turns out I was the one who didn’t get it, but you know how that goes). Time for a cuppa tea. By the way, I’ve always been ‘green’ when drinking my black tea: always from a mug, never from Styrofoam.

During my tea break, I started thinking it through (which is like reading the manual for your barn door after your Harley has been nicked): OpenFiler actually leverages the Logical Volume Manager (LVM) to do its stuff. LVM has the notion of a Volume Group (VG), which is a collection of Physical Volumes (PVs); in the case of a VM, each PV is a VDI that you assign to that VM. On a real machine it would be a collection of physical disk drives. You can collect lots of physical disks of differing sizes and types into a Volume Group, and then meter out Logical Volumes (LVs) from that group. A Logical Volume looks and acts like a Physical Volume, except that LVM may be spreading your one LV across several PVs, depending on the attributes of your VG. A good example is setting up RAID: at the LV level, you’re not really interested in how LVM is dealing with it, but you’re glad it can. When I say that an LV ‘looks’ like a PV, I mean it’s just another block device, like:

/dev/XSLocalEXT-0cbc8c20-3268-5876-db13-128ad9d0b9c1/0cbc8c20-3268-5876-db13-128ad9d0b9c1

You can get a list of LVs on your system by using:

lvdisplay

Similarly, you can get a list of VGs on your system by using:

vgdisplay

One ‘a-ha!’ moment later, I was able to explain it to myself. The xfs filesystem wasn’t on the physical drive I was trying to mount; it was on the Logical Volume. The PV has the LVM format imposed on it, and the LV has the xfs filesystem format imposed on it. So maybe, if I mounted the LV, that would give me what I wanted: access to that 25GB-worth of ISOs. Bingo! (That reminds me: I’ll be doing some training at Synergy this May in Vegas.) Lucky for me, I had kept this particular VDI in a VG of its own, all by itself. I suspect, though, that had I had multiple VDIs in the VG, I could still have gotten it to work, since LVM marks the disks appropriately to make them system independent.
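Putting the pieces together, the recovery boils down to activating the volume group and mounting the logical volume instead of the raw disk. A sketch, with made-up VG and LV names for illustration (take the real ones from the vgdisplay and lvdisplay output):

```shell
# scan the attached disks for LVM physical volumes and volume groups
pvscan
vgscan

# activate every volume group that was found; activated LVs then
# appear as block devices under /dev/<vg-name>/<lv-name>
vgchange -ay
lvdisplay

# mount the *logical volume* -- that's where the xfs filesystem lives.
# 'openfiler_vg' and 'iso_lv' are placeholder names, not OpenFiler defaults.
mount -t xfs /dev/openfiler_vg/iso_lv /mnt
ls /mnt
```

The key line is vgchange -ay; until the VG is activated, the LV device nodes don’t exist, and there is nothing to mount.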

The rest of the story is simple, but worth mentioning since, again, doing this back in the day would have meant pawning your wedding ring, and spending as much time on the job as you’d later spend in the dog-house for having done so. I dropped a brand-new 30GB VDI into the OpenFiler VM, because even though the old VDI was recognized, I couldn’t get OpenFiler to re-share it. After a few clicks in the OpenFiler admin tool, I had shared the new disk. Using the mount command by itself, i.e.:

mount

…I was able to discover where OpenFiler had mounted my new disk, and copy everything from the old disk to the new from the CLI. I was then able to detach the old disk, and delete it.
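That last step looked something like the sketch below; the mount points are made up for illustration, so read the real ones from the output of a bare mount first:

```shell
# find where OpenFiler mounted the new share
mount

# copy everything across, preserving permissions, ownership and
# timestamps; the trailing '/.' copies the *contents* of the old
# mount point rather than the directory itself
cp -a /mnt/old_iso_disk/. /mnt/new_iso_share/
```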

Basically, this story is true, but I may have made the stuff up about the Harley being stolen, the tea-bag and the ergo-chair. I’ll leave that for you to figure out! Let me know.