How fast is too fast?


What happens to real-time systems when things accelerate to infinity and beyond? Slowdowns that require some counter-intuitive head-scratching to circumvent.

I’m a database junkie. There, I admitted it. But it often surprises people to find I loathe relational databases with a venom equalled only by my disdain for the mainstream media. I prefer my databases nimble, light, breezy and very fast. Which means Oracle, MySQL, M$ SQL Server, Informix, Sybase, Postgres, TimesTen, etc. are all out of the running.

I suppose I’ve been spoiled somewhat by ERDB and, more recently, Ancelus. There’s something deeply exhilarating about shifting bytes around significantly faster than everyone else and being able to design a database without worrying about the drudgery of indexing, query performance and doing the normalization / denormalization dance. It’s a liberating experience if databases are your thang.

But we’re now facing a problem — which in one sense is a nice problem to have — in that the performance of the database is being hampered by the hardware and the underlying operating system.

In theory, things scale almost linearly under Ancelus. Throw 16 cores at it and you’ll get just shy of 16 times the performance of a single core, compared to a relational database where Amdahl’s Law caps the gain at 3 or 4 times at best: with a parallel fraction p, the maximum speedup on N cores is 1/((1 − p) + p/N), so even at p = 0.8 sixteen cores buy you only a 4x gain. But when we tried it on a modest 6-core desktop box we didn’t hit the theoretical maximum. The reason was twofold:

  1. Scheduler thrashing: at full load our test box spent too much time waking waiting threads, running the ones it had slots for, and putting the rest back to sleep, only for them to be woken again nanoseconds later and sent back to sleep once more. Anyone with a rudimentary knowledge of fluid dynamics will recognise the solution as the one applied to traffic bottlenecks: back off the throttle a little and you can fit more through the same space. The first sketch after this list shows the idea.
  2. Lock contention: multiple clients pounding the same table cause spinlocks to spin. This is easily solvable with a Nagle-style approach: delay each transaction for a few microseconds, batch up any similar transactions that arrive in the intervening time, and then post them all at once under a single lock. The second sketch after this list shows one way to do it.
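
Here’s a minimal sketch of the throttling idea in C, using POSIX threads and a counting semaphore; the names and numbers are illustrative rather than anything from Ancelus itself. Capping the number of runnable workers at the core count stops the scheduler from drowning in wake/sleep churn:

    /* Throttle sketch: never let more threads be runnable than there
     * are cores, so the scheduler stops churning through wake/sleep
     * cycles. Compile with: cc -pthread throttle.c */
    #include <pthread.h>
    #include <semaphore.h>

    #define NUM_CORES   6    /* our modest desktop test box        */
    #define NUM_WORKERS 64   /* clients all wanting to run at once */

    static sem_t run_slots;  /* at most NUM_CORES holders at a time */

    static void do_transaction(long id)
    {
        (void)id;            /* placeholder for the real database work */
    }

    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (int i = 0; i < 1000; i++) {
            sem_wait(&run_slots);  /* block until a slot frees up      */
            do_transaction(id);
            sem_post(&run_slots);  /* hand the slot to the next waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NUM_WORKERS];

        /* Fewer runnable threads sounds slower, but it avoids the
         * wake/sleep thrashing that ate our box alive at full load. */
        sem_init(&run_slots, 0, NUM_CORES);

        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(tid[i], NULL);

        sem_destroy(&run_slots);
        return 0;
    }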
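
And a sketch of the Nagle-style batching, again with assumed names and structures rather than the real Ancelus API. Clients queue transactions instead of grabbing the table lock themselves; a single flusher thread wakes every few microseconds and applies everything queued so far under one lock acquisition:

    /* Nagle-style batching sketch: trade a few microseconds of latency
     * for one lock acquisition per batch instead of one per transaction. */
    #include <pthread.h>
    #include <stddef.h>
    #include <unistd.h>

    #define BATCH_MAX      128
    #define BATCH_DELAY_US 50   /* the short delay; tune to taste */

    typedef struct { int op; int key; int value; } txn_t;

    static pthread_mutex_t pending_mu = PTHREAD_MUTEX_INITIALIZER;
    static txn_t  pending[BATCH_MAX];
    static size_t npending;

    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    static void apply_txn(const txn_t *t)
    {
        (void)t;                 /* the real table write goes here */
    }

    /* Clients call this instead of taking the table lock directly. */
    void submit_txn(txn_t t)
    {
        pthread_mutex_lock(&pending_mu);
        if (npending < BATCH_MAX)
            pending[npending++] = t;
        pthread_mutex_unlock(&pending_mu);
    }

    /* A single flusher thread runs this loop: sleep for the Nagle
     * delay, drain whatever accumulated, then post the whole batch
     * under ONE acquisition of the table lock. */
    static void *flusher(void *arg)
    {
        (void)arg;
        for (;;) {
            usleep(BATCH_DELAY_US);

            txn_t  batch[BATCH_MAX];
            size_t n;

            pthread_mutex_lock(&pending_mu);
            n = npending;
            for (size_t i = 0; i < n; i++)
                batch[i] = pending[i];
            npending = 0;
            pthread_mutex_unlock(&pending_mu);

            if (n == 0)
                continue;

            pthread_mutex_lock(&table_lock);
            for (size_t i = 0; i < n; i++)
                apply_txn(&batch[i]);
            pthread_mutex_unlock(&table_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t ft;
        pthread_create(&ft, NULL, flusher, NULL);

        txn_t t = { .op = 1, .key = 42, .value = 7 };
        submit_txn(t);

        usleep(10 * BATCH_DELAY_US);  /* let the flusher drain it */
        return 0;
    }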

These two phenomena aren’t directly linked to the database; they simply arise when the workload running on top of the OS goes too fast for it to cope.

A similar problem occurs at the chip level with pipelining and so-called intelligent caching. The chip pre-emptively fills its on-chip caches with data from RAM, effectively hedging bets on what the next bit of data to be fetched will be; but a write to any byte in a cached line invalidates that line in every other core’s cache, forcing refetches and wasting CPU cycles. In the real-time arena this can become a killer problem and needs attention.
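
One common mitigation, sketched here with made-up structures rather than anything from Ancelus: keep hot, independently-written fields on separate cache lines, so one core writing its field never invalidates the line another core is using:

    /* False-sharing sketch (C11): a write to any byte in a cached line
     * invalidates that whole line in every other core's cache, so two
     * hot counters sharing a line keep knocking each other out. */
    #include <stdint.h>

    #define CACHE_LINE 64   /* typical x86 line size; check your CPU */

    /* Bad: both counters live in the same 64-byte line, so core A
     * bumping 'reads' forces core B to refetch before bumping 'writes'. */
    struct counters_bad {
        uint64_t reads;
        uint64_t writes;
    };

    /* Better: pad each counter onto its own line so writes to one
     * never invalidate the other. Costs memory, buys cycles. */
    struct counters_good {
        _Alignas(CACHE_LINE) uint64_t reads;
        _Alignas(CACHE_LINE) uint64_t writes;
    };

    int main(void)
    {
        struct counters_good c = {0, 0};
        return (int)c.reads;
    }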

Even with these operating system and hardware niggles, Ancelus still outperforms any relational system by several orders of magnitude; on our little desktop PC we were achieving a sustained six hundred thousand transactions per second, jumping to over a million per second once we adjusted the application to circumvent the OS limitations. But it goes to show that the quest for speed sometimes falls foul of external factors, and if you accept the one-click solution at face value you could be denying yourself and your users significant horsepower.
