What I Learned

This is essentially part two of the post mortem on the server failure.  The first post was basically just me outlining exactly what happened, while this post will be about what I’ve learned.

1.  System State backups are not the greatest thing in the world.  In fact, they are pretty much useless except for a few key situations.  In all of the Microsoft Press books for the MCSE tests (and just about any other study material), system state backups are treated as God’s gift to backups.  In reality this is hardly the case.  After doing system state backups on all of my servers, the only one that actually worked after a restore was the domain controller, and that was only because there was nothing else on the machine.

All the other machines had software installed when the backups were taken, and now, after restoring the system state, they’re in a weird state where the registry (which gets restored) is full of entries for software that isn’t physically on the machine.  That would be fine if I had backed up the whole machine, but I didn’t.  Oh, and don’t even get me started on a system state restore and IIS.  Put simply, the metabase restored from the system state won’t function on a new server, because the machine’s crypto key is different.
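For reference, on Windows Server 2003 a system state backup can be kicked off from the command line with ntbackup; the job name and destination path below are just placeholders:

    rem Back up the system state (registry, AD, IIS metabase, etc.) to a .bkf file.
    rem The destination path is hypothetical.
    ntbackup backup systemstate /J "SystemState" /F "D:\Backups\systemstate.bkf"

The catch, as described above, is that the resulting .bkf assumes it’s coming back onto the same machine, crypto keys and all.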

2.  The physical network at the apartment is a mess, and it definitely limits our ability to do a lot of things.  Right now it’s limiting my ability to create a perimeter and internal network with ISA.  For an as-yet-unknown reason, anything not connected to the bridge/switch that my ESX box is on cannot ping the 192.168.2.0/24 network, which lives as a virtual switch on the ESX box, even with the static routes set.  What makes this really aggravating is that if I initiate a ping from the 192.168.2.0/24 network to a specific machine in the 192.168.1.0/24 network, everything works fine until that tunnel through ISA is closed.  Once it closes, nothing even hits ISA’s external interface, so it’s not really a tunnel through ISA but a mapped route that appears and disappears.  Annoying to say the least.  If you feel like helping, or just want a better explanation, check out my thread over at isaserver.org.
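For what it’s worth, the static routes in question look something like the following on the Windows side.  The gateway address is made up; it would be whatever interface faces the ESX box’s bridge:

    rem Persistent route pointing the 192.168.2.0/24 virtual-switch network
    rem at the ISA box.  192.168.1.1 is a hypothetical gateway address.
    route -p add 192.168.2.0 mask 255.255.255.0 192.168.1.1

The -p flag makes the route survive reboots; without it, the route disappears the next time the machine restarts.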

3.  WinSCP.  I can’t believe I wasn’t using this app with ESX before.  Setting up FTP is a pain and a security hole, so being able to easily upload ISOs or whatever to the ESX box has been unbelievably helpful.
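Since the ESX service console runs SSH, anything that speaks SCP or SFTP works; WinSCP just puts a friendly GUI on it.  From a command line, the equivalent would be something like this (the hostname and destination directory are hypothetical):

    # Copy an ISO up to the ESX box over SSH; no FTP daemon needed.
    # Both the hostname and the path are made up for this example.
    scp win2003.iso root@esxbox:/vmimages/win2003.iso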

4.  Linux.  It’s amazing how much easier it is to learn things when you actually have a reason to, like when something’s broken.  Unfortunately, a lot of the original problems I had weren’t the kind I could look up on Google.  After thinking about them for a bit and applying basic troubleshooting skills, though, I’ve been able to solve all the Linux problems.  Thankfully.

5.  The new Perc controller rocks.  The site is noticeably faster, and the card doesn’t take forever to initialize an array.  It’s amazing what a generation later and an extra 112 MB of cache can do for you.

Post Mortem

Anyways, it’s alive.  It may have taken a little longer than expected, but it’s back.  Hopefully. 

I’ve rebuilt all the virtual machines, mostly from backups, so I didn’t actually lose anything, but I’ve also changed a lot of the layout behind the scenes.  This, along with ordering new parts and the rest of life, has kept the site down longer than I would’ve liked, but such is life without enterprise-level machines and support.

So now it’s time for a post mortem on all this fun stuff.

This story starts the week of April 17th.  The website kept going down, and the server hosting it became unresponsive to everything but ping.  I couldn’t SSH into the box or even log in AT the box.  So I’d simply restart it.  After this happened a few times, I started scouring the logs to see what exactly was going on, and found nothing.  As you may remember from a previous post, I thought I had the problem licked.  However, I had never actually seen an error message telling me exactly what was going on; I was just going on gut instinct.

So, figuring I’d fixed the problem, I went on with life, and the box did work for quite a few days.  Then it started happening again.  I decided to reinstall ESX, thinking the problem might be there.  It still hung a few times, and since I couldn’t log in at the box after a hang, I started logging in right after rebooting the server and just staying logged in, in case something was being written to the display before it hung.  The server worked for a while, and then sometime on Sunday the 28th it went down again.

I wasn’t at home at the time and had to wait until I got back, around 10 PM.  I went to the machine and, sure enough, there was the first actual error I’d seen:

    SCSI Host 0 Reset (PID 0)
    Time Out Again ----

Looking at the error, I thought it might be the hard drive on SCSI ID 0.  In hindsight, this was the first sign of what was actually wrong.  I replaced the hard drive and booted the machine back up.  It didn’t go anywhere: no ESX, nothing except an attempt to boot from the NIC.  Definitely not a good sign.  This was a RAID 5 setup; it should’ve rebuilt the array onto the new drive and been fine.  Apparently it didn’t want to do that, but it was too late now.  This was sign number two of the true problem.  It was now 2 AM, with work the following day, so I turned everything off and gave up for the night.

The following day I attempted to fix it again.  Since I still wasn’t sure what was going on and wanted to rule out the RAM, I ran Memtest86+ on the machine for a few hours: no problems found.  I tried to do an upgrade install of ESX, but ESX told me it couldn’t find any of the old partitions or installs.  Great.  Well, maybe it was just the partition table that was gone, and not all the data.  I found a great utility CD called the Ultimate Boot CD.  On it there’s a program called TestDisk, which can recover lost partition tables, Linux ones included.  After messing with the boot CD a while to get the MegaRAID SCSI drivers loaded, I was off and running.  Needless to say, that didn’t work: no partitions found.
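For anyone who ends up in the same spot, TestDisk runs from a shell on the boot CD, something like the following.  The device name is a guess; what the array actually shows up as depends on the MegaRAID driver:

    # Scan a disk for lost partition tables, logging what it finds.
    # /dev/sda is hypothetical; check dmesg for the real device name.
    testdisk /log /dev/sda

From there it’s menu-driven: pick the partition table type (Intel for a standard PC), run Analyse and then Quick Search, and if the lost partitions turn up you can write them back.  In my case it came up empty.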

Well, that meant all the data was essentially gone, since I was definitely not going to pay someone to recover it.  Thankfully, I had started doing backups no more than two weeks before all this happened!

The rebuild of all the virtual machines then commenced.  With the server still hanging, it took quite a while to get everything back up and running.  What made it even more interesting was the myriad of errors each hang would create; honestly, I don’t think I saw the same error more than twice the whole time it was down.

During this time I also redid the setup to put all my machines on an internal network behind an ISA server.  Right now there’s the external network (the internet), a perimeter network (my workstation, Binford’s workstation, and some miscellaneous machines that don’t need security), and then the internal network (all my enterprise-level machines).  There’s still one big problem with this, but I’m working on it and it’s not a showstopper: from my workstation and Binford’s workstation, you can’t ping the internal network unless the machine you’re trying to ping pings out first.  It’s something to do with our messed-up physical infrastructure, but hopefully I can fix it.

Basically, this whole time went toward getting the site and back end up to where they were before the problems, and toward fixing whatever was wrong.  The more it happened, the more I suspected the SCSI card.  So I moved all the drives to the card’s other channel and rebuilt the array.  Needless to say, that didn’t help much, so this past Sunday I bought a Dell Perc 3/DC card on eBay for $61 shipped.  Yesterday it arrived, and last night I migrated all the virtual machines off, installed the new card, rebuilt the array, reinstalled ESX, migrated the virtual machines back, and brought everything online.

Right now we’re flying on the new Perc card, which has 112 MB more cache and can initialize an array in under 5 seconds as opposed to 100 minutes.  Hopefully we don’t see another hang.  Let’s all hold our breath, mkay?

Under Construction Page

Well, if you attempted to visit my page yesterday (and probably some of today), you were hit with an Under Construction page.  I put it up so there would be something explaining what was going on: the server died, again.

At least I know it’s definitely not the logs filling up the disk this time.  I still have no clue what’s going on, though.  While troubleshooting, I did find that the CD-ROM drive in the server was bad.  Thankfully I have another that’s pretty much identical.

I’ve also got a functioning monitor and keyboard hooked up to the machine right now, so hopefully I’ll be able to log on to it and see what’s going on.

Here’s hoping.

Oh yeah, and for those with @rebelpeon.com email addresses: I had set up forwarding to another of your accounts, so you should’ve gotten all your email during the downtime.  I’ve since switched it back so mail is delivered to rebelpeon.com normally.  If it happens again, I’ll just keep switching things around so that there’s no email downtime (or at least very little).

Update 4/25 10:33 PM
More downtime again.  I’ve finally reinstalled ESX to see what happens, but I still have no idea what’s going on.  I ran some memory tests, and memory doesn’t seem to be the culprit either.  If the reinstall doesn’t fix it, my next thought is that it could be power related, so a new surge protector may be in order.

For those using email, it’s still forwarding to another address.  I’ve just set up the website so I can easily tell if it goes down again.

Gallery RSS

For those of you who never actually visit the site and use some sort of syndication instead, there’s a new RSS feed for my gallery.  The only downside is that I’ll probably be adding pictures in large clumps, so the 10 images the feed shows will be woefully inadequate.

Downtime(s)

I’m sure some of you have noticed the downtime I’ve been having here.  I’m not sure exactly what’s going on yet.  Basically, my ESX host becomes unresponsive at around 1:30 AM and takes all the virtual machines down with it.  I’m planning on playing with it some tonight, so there will probably be more downtime to come.

Thanks for your patience while I deal with this matter.

Update 4/21/2006 6:11 PM
Ugh, more downtime as it happened again, but the good news is that I think I may have found the problem.  I’m not 100% positive, but I’m pretty sure.  I feel pretty stupid about this one, too.  Basically, some of the log files had grown a wee bit large, and it looks like I was running out of disk space.  I guess I should set up a cron job to take care of that, eh?  Something like the sketch below.
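A minimal sketch of what that cron job might look like, assuming the offending logs live under /var/log (the path and two-week retention window are just placeholders):

    # Hypothetical crontab entry: every night at 3 AM, delete *.log files
    # under /var/log that haven't been modified in two weeks.
    0 3 * * * find /var/log -name "*.log" -mtime +14 -delete

The more standard answer is probably logrotate, which rotates and compresses logs on a schedule instead of just deleting them.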

Another Five Years

In addition to buying the game today, I’ve also renewed this domain for another 5 years.  With talk of VeriSign upping the price of domain names, I figure 5 years should keep me from worrying about it for a while.

Downtime Yesterday

So, there were some technical difficulties yesterday.  And by technical difficulties, I mean that I killed the router.  What happened was that I attempted to fix my router problem by following the “Linksys WRT54G + Bittorrent Problems?” link from yesterday.  Note to anyone attempting this: do not apply the fix while you’re at work, coming in from the WAN side of the router.  Yes, I know it wasn’t my brightest moment, but it should have worked fine.

It turns out it had worked fine, but upon reboot the router didn’t pick up an IP address from Comcast, so it just sat there, all stupid-like.

Anyways, the fix appeared to work, but not for long.  The same problems came right back.  I think I know what the problem may be now, so I’m going to attempt to fix it again tonight.

Link Sharing

As you can see from the article below, I’m trying something new.  Instead of spamming everyone via instant messages, I thought I’d consolidate the links in one place.  No idea how well it will work, but we’ll see.