It does happen outside of this window, and usually I know when it does, because I have a monitoring script which checks the system load every 5 seconds, kills processes according to the load, and logs everything to a file.
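For the curious, the script is nothing fancy - a minimal sketch along these lines, where the load threshold, log path, and "kill the heaviest process" policy are simplified placeholders rather than my real values:

    #!/usr/bin/env python3
    # Sketch of a load monitor: sample the load average every 5 seconds,
    # log it, and terminate the most CPU-hungry process when the load
    # crosses a threshold. Threshold, log path, and kill policy are
    # illustrative assumptions.
    import os
    import time
    import signal
    import subprocess

    LOAD_LIMIT = 8.0                        # assumed 1-minute load threshold
    LOG_FILE = "/var/log/load-monitor.log"  # assumed log location

    def heaviest_process():
        """Return (pid, cpu, command) of the top CPU consumer via ps."""
        out = subprocess.check_output(
            ["ps", "-eo", "pid,pcpu,comm", "--sort=-pcpu", "--no-headers"],
            text=True,
        )
        pid, cpu, comm = out.splitlines()[0].split(None, 2)
        return int(pid), float(cpu), comm

    def log(msg):
        with open(LOG_FILE, "a") as f:
            f.write(time.strftime("%Y-%m-%d %H:%M:%S ") + msg + "\n")

    while True:
        load1, _, _ = os.getloadavg()
        log(f"load={load1:.2f}")
        if load1 > LOAD_LIMIT:
            pid, cpu, comm = heaviest_process()
            log(f"load over {LOAD_LIMIT}, killing pid {pid} ({comm}, {cpu}% cpu)")
            os.kill(pid, signal.SIGTERM)
        time.sleep(5)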
The problem is - I can't do anything about it. Either there must be fewer VPSes per host system, or an upgrade to another hosting plan ($80 - still a VPS, though) is needed.
The system was effectively down for 7 hours. I spent 2 hours in a live chat session with support, escalated the ticket (an admin was indeed working on the system), and went to bed at 6 AM.
The result - the system is still mostly down.
Almost 20 hours of downtime.
The VPS finally crashed and could not boot up. The problem with the disks is not gone. Escalating the ticket again.
A disk in the RAID array died. It was pretty much obvious, but the hosts had not tested for it in time.
The array is resynchronizing at the moment. I think it will take no less than 6-8 hours.
Looks like the disks are still synchronizing.
The load has gone down. I suppose the disks are in sync now.
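As a sanity check that is a bit more direct than watching the load average, a quick probe like the one below times small fsync'ed writes from inside the VPS; consistently high latencies would suggest the host's array is still busy. The file path and the "slow" threshold are placeholder assumptions, and the path needs to be on a disk-backed filesystem for the numbers to mean anything:

    # Probe disk write latency by timing small fsync'ed writes.
    # TEST_FILE location and SLOW_MS threshold are illustrative assumptions.
    import os
    import time

    TEST_FILE = "/var/tmp/disk-latency-probe"  # assumed disk-backed path
    SAMPLES = 10
    SLOW_MS = 50.0                             # assumed "still degraded" threshold

    latencies = []
    for _ in range(SAMPLES):
        start = time.monotonic()
        with open(TEST_FILE, "wb") as f:
            f.write(b"x" * 4096)               # one 4 KiB block
            f.flush()
            os.fsync(f.fileno())               # force it to the underlying storage
        latencies.append((time.monotonic() - start) * 1000.0)
        time.sleep(1)

    os.remove(TEST_FILE)
    avg = sum(latencies) / len(latencies)
    print(f"average fsync latency: {avg:.1f} ms "
          f"({'still slow' if avg > SLOW_MS else 'looks normal'})")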
I think the performance is quite acceptable now. I still get "server errors" from time to time, but from what I can tell the disks are not the reason - unless there are activity spikes that the monitoring programs do not catch.