Yeah, I am still using the machines in question under pretty heavy I/O load (they are dedicated storage servers for an OpenVZ cluster). They are running RAID5 from the card. I can't remember the specs on it; if I could run the CLI tool I would tell you. But it's a definite regression: if the CLI tool doesn't work anymore, we can't monitor the RAID for failures, and so we won't know when a drive has failed. By the way, just so everyone knows, there are a couple of Nagios plugins that monitor this card using the CLI tool here:
http://www.nagiosexchange.org/Search_Projects.43.0.html?tx_netnagext_pi1%5Bphrase%5D=areca&tx_netnagext_pi1%5Bsubmit%5D=search&tx_netnagext_pi1%5Bsearch%5D=1
(sorry about the massive url).
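For anyone who would rather roll their own, those plugins basically just wrap the CLI tool and look at the State column of its output. Here is a minimal sketch in Python of that idea; the /usr/sbin/cli64 path, the "vsf info" subcommand, and the row layout (numeric index first, "Normal" state last) are assumptions based on my setup, so check your own output before relying on it:

#!/usr/bin/env python
# Minimal sketch of a Nagios-style check wrapping the Areca CLI tool.
# Assumptions: the binary is at /usr/sbin/cli64 and "vsf info" prints one
# row per volume set, starting with a numeric index and ending with a
# State column that reads "Normal" when healthy. Adjust for your card.
import subprocess
import sys

CLI = "/usr/sbin/cli64"  # path to the Areca CLI binary (assumption)

def main():
    try:
        out = subprocess.check_output([CLI, "vsf", "info"],
                                      stderr=subprocess.STDOUT,
                                      universal_newlines=True)
    except OSError:
        # Tool missing or not executable (exactly the regression above).
        print("UNKNOWN: cannot execute %s" % CLI)
        sys.exit(3)
    except subprocess.CalledProcessError as err:
        print("UNKNOWN: %s exited non-zero: %s" % (CLI, err.output.strip()))
        sys.exit(3)

    # Volume-set rows start with a numeric index; State is the last column.
    bad = []
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and fields[-1] != "Normal":
            bad.append(line.strip())

    if bad:
        print("CRITICAL: degraded volume set(s): " + "; ".join(bad))
        sys.exit(2)
    print("OK: all volume sets report Normal")
    sys.exit(0)

if __name__ == "__main__":
    main()

Wire it up as an ordinary Nagios command definition; the 0/2/3 exit codes map to OK/CRITICAL/UNKNOWN in the usual way.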
I don't get any other errors out of the cards unless I run the tool. Of course, Areca may release a different version of CLI64 that makes the problem go away as well.
For my money, though, I would try to get the latest driver into the Hardy kernel.