We're having some lingering issues with the server, specifically the database. It's partly because I'm running a big (actually huge) re-index process in the background, and it is taxing the database a bit more than I expected.
I've been rehearsing this particular re-index for days (a week?), and it went more smoothly on a test database, but then nobody else was using that one at the time. Whenever I stop watching the log output, the process starts slowing down until it brings the whole site to its knees.
So, sorry about that. I've scaled back the re-index a bit, biting off smaller chunks at a time so the server doesn't choke.
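For the curious, the scaled-back approach is basically the pattern below. This is just a rough sketch of the idea, not the actual script: the table, column, and helper names are made up, and sqlite3 stands in for whatever the real database driver is. The point is the small batches and the pause between them.

```python
import sqlite3
import time

BATCH_SIZE = 500      # smaller bites instead of one giant pass
PAUSE_SECONDS = 2     # breathing room between batches so normal traffic gets through


def rebuild_search_index(row_id, body):
    # Placeholder for the real re-index work on a single row.
    pass


def reindex_in_chunks(db_path):
    conn = sqlite3.connect(db_path)
    last_id = 0
    while True:
        # Grab the next small slice of rows instead of selecting everything at once.
        rows = conn.execute(
            "SELECT id, body FROM posts WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH_SIZE),
        ).fetchall()
        if not rows:
            break
        for row_id, body in rows:
            rebuild_search_index(row_id, body)
            last_id = row_id
        conn.commit()
        time.sleep(PAUSE_SECONDS)  # let regular queries have the database for a moment
    conn.close()
```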
There are a couple of other issues, too. In a few places the new database is stricter than the old one, so I'm finding occasional query failures in the logs where I just need to tighten up the parameters a bit. Put another way, some of the database queries were sloppy.
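Here's a hypothetical example of what that tightening looks like. The table and function are invented, and sqlite3 is again just a stand-in, but the shape of the fix is the same: validate and cast the parameter up front instead of letting the database quietly coerce it the way the old one did.

```python
import sqlite3


def comments_for_post(conn, post_id):
    # The sloppy version passed post_id through as-is and relied on the old
    # database to coerce strings like "1234 " into numbers. The stricter
    # database rejects the type mismatch, so the cast happens here instead.
    post_id = int(str(post_id).strip())
    cur = conn.execute(
        "SELECT id, author, body FROM comments WHERE post_id = ? ORDER BY id",
        (post_id,),
    )
    return cur.fetchall()
```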
The final issue is that someone (or several someones) is running a lot of searches at the moment. No fault there, and I'm not asking anyone not to search, but searches are the most strenuous thing the database does, and combined with the other issues that means I'm seeing more than a few failures in the logs. A LOT of failures, actually. Hopefully that will improve as I scale back the re-indexing.
One positive note, too: I've upgraded the underlying storage system used by both the database and the main server itself. AWS recently rolled out an upgraded SSD volume type (gp3) with much better throughput, and I am taking advantage of it everywhere I can.
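If anyone is curious about doing the same on their own AWS setup, moving an existing gp2 volume to gp3 is an in-place modification. Here's a quick sketch with boto3, using made-up volume IDs and made-up throughput/IOPS numbers, and assuming credentials are already configured:

```python
import boto3

ec2 = boto3.client("ec2")

# Made-up IDs; the real ones belong to the database and web server volumes.
VOLUME_IDS = ["vol-0123456789abcdef0", "vol-0fedcba9876543210"]

for volume_id in VOLUME_IDS:
    ec2.modify_volume(
        VolumeId=volume_id,
        VolumeType="gp3",
        Throughput=250,  # MiB/s; gp3 lets you set throughput independently of size
        Iops=4000,       # likewise for IOPS
    )
```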
Thanks, yet again, for your patience.