Last weekend some of you noticed we were having issues with data processing. It was an embarrassing cock-up on our part: the processing engine wasn’t restarted after a reboot of all the systems, and we failed to notice for about ten hours! 🙂 But, as I hope you also noticed, as soon as it became apparent that something was amiss it was all hands on deck, and we eventually got the system fully functional again with no loss of data, then began reprocessing to catch up.
That went OK as a quick fix, but longer-term planning has been the top priority for some days now. Quick fixes only go so far, and the overall stability of the system matters more. At the end of the day, a quick fix here, as almost anywhere else, is just the illusion of “fixed”, and not really a fix at all.
Optimizing queries, processes and several seriously heavy-duty servers takes time, so this post is just to let you know that if you see a few oddities in your reports this week, it’s all in a good cause, and no data is being lost. With over eight million records being logged every single day, the optimization process has to be handled very carefully, so it will take a little while.
Optimization was always the plan, we just need to do it now, not later…
We always expected to have to optimize the system once we had real data running through it; it was always part of the plan. We just didn’t expect to have to do it quite so soon after launch. With over 5,000 blogs being tracked, we’re about three times as far ahead as we expected to be at this stage, which is cool on one hand, but a bit of a pain on the other, as resources need to be moved around and reprioritized accordingly.
I hope that answers a few of the questions I’m seeing in the forums, and that you’ll bear with us for a few days while we put some long-term solutions in place, as opposed to quick fixes.
My apologies for any inconvenience; you will, of course, be the first to know when things change.