vmware 2.0 startup problems (and a solution)
Posted Wed, 09 Jan 2008
Tonight, I rebooted my server after accidentally powering it off while cleaning dust off the intake vents, and vmware didn't start back up. Technically, all of the startup scripts (/etc/init.d/vmware) ran fine and reported no errors, but I couldn't connect to the management interface on port 8333. Netstat output confirmed that nothing was listening on this port. Crap.
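For reference, the check was something along these lines (flags from memory, so treat it as a sketch):

    # is anything listening on the vmware-hostd port?
    netstat -tln | grep 8333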
After grepping around in various places, I figured out that the tomcat server that comes with vmware (named webAccess) had no intention of listening on port 8333, and that this was normal. I checked /var/log/ for anything useful, and found /var/log/vmware. In this directory was a set of hostd-N.log files, where N is a number. In hostd-0.log was this entry (truncated for readability):
    [2008-01-08 21:31:23.790 'vm:/vmdisks/vms/filer (solaris 64bit)/filer (solaris 64bit).vmx' 47879793637584 warning] Disk was not opened successfully. Backing type unknown: 0
    [2008-01-08 21:31:23.790 'vm:/vmdisks/vms/filer (solaris 64bit)/filer (solaris 64bit).vmx' 47879793637584 warning] Disk was not opened successfully. Backing type unknown: 0
    [2008-01-08 21:31:23.791 'App' 47879793637584 error] Exception: ASSERT /build/mts/release/bora-63231/bfg-atlantis/bora/vim/hostd/vmsvc/vmConfigReader.cpp:3251
    [2008-01-08 21:31:23.794 'App' 47879793637584 error] Backtrace: <actual backtrace snipped>

Keep in mind that even though vmware-hostd was failing, /etc/init.d/vmware reported success for every operation. Eek.
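If you want to hunt for the same kind of failure on your own box, something like this should turn it up (adjust the path to wherever your logs live):

    grep -iE 'error|warning' /var/log/vmware/hostd-*.log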
So, I went to my filer vmx file and commented out the rawDisk entries and restarted vmware (with the init script). No more failures were logged in hostd-0.log, and a subsequent netstat showed vmware-hostd listening on port 8333. Peachy.
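For the curious, the edit amounted to something like this in the .vmx; the scsi IDs and device path here are invented for illustration, and the exact keys in my file may have differed:

    # raw disk entries, commented out to appease vmware-hostd
    # scsi0:1.present = "TRUE"
    # scsi0:1.deviceType = "rawDisk"
    # scsi0:1.fileName = "/dev/sdb"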
Back on my windows box, I ran the vmware console, and guess what... I could manage my vmware sessions again.
I can only hope that VMware decides to allow raw, local disk access in the finished version of vmware 2.0, because I am rather dependent on it. If they don't, I might be able to get away with moving the data out of the zfs pool, initializing the drives with some random linux file system, creating a 500gig vmware virtual drive on each disk, and finally telling Solaris to rebuild its zfs pool on those virtual drives. Since I don't have too much data there, I could probably drain one disk out of the zfs pool and do the conversion from raw to virtual disk one physical disk at a time (a sketch of what that might look like is below). It might be a useful exercise in learning more about zfs, anyway.
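In zfs terms, the drain-and-convert dance would go roughly like this, assuming the pool is a mirror; the pool name (tank) and device names are invented for the example:

    # pull one disk out of the mirrored pool
    zpool detach tank c1t1d0

    # (hand that disk to vmware, create a 500gig virtual disk on it,
    # and present the virtual disk to the solaris guest as a new device)

    # then pull the new virtual disk into the pool and let it resilver
    zpool attach tank c1t0d0 c2t0d0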
I'll cross that bridge when I get to it.