Sunday, March 8, 2009

Performance and scalability concepts

Performance is about going fast; it has nothing to do with application features or meeting business needs. It is purely about low latency: performance describes the elapsed time it takes to execute an operation.

Scalability is about adding units of capacity so that a system can handle more volume and meet increased demand.

Ideally we want both of these attributes in a running system. This is usually not so easy, since more often than not, scalability features will conflict with raw performance figures. It is hard to decide (up front) between scalability and raw performance (though most applications do not need to scale).

In addition, scalability describes how a system behaves under an increasing number of simultaneous operations. Scalable performance describes a system that can scale predictably under load while also executing individual operations quickly. This behavior has to be architected in.

How do we make applications that rely on databases scalably performant? A few rules define the situation:
  1. Databases are far away, usually on the other side of a serialization barrier (lots of I/O, up and down the network stack)
  2. Databases are optimized for bulk calculations
  3. Databases don't scale past a single server machine
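
The first two rules can be sketched in a few lines. This is an illustrative sketch only: an in-memory SQLite database stands in for a remote server, so there is no real network hop here, but the statement-count difference is exactly what you pay for over the wire.

```python
import sqlite3

# SQLite stands in for a networked database in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(a,) for a in (10.0, 20.0, 30.0)])

# Anti-pattern: one statement per row, then a client-side calculation.
# Against a real server, each loop iteration is a full network roundtrip.
ids = [r[0] for r in conn.execute("SELECT id FROM orders")]
total_chatty = sum(
    conn.execute("SELECT amount FROM orders WHERE id = ?", (i,)).fetchone()[0]
    for i in ids
)

# Better: a single roundtrip, and the DB does the bulk calculation (rule 2).
total_bulk = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

assert total_chatty == total_bulk == 60.0
```

Rule 3 is the counterweight: the aggregate runs fast on the server, but every cycle spent there consumes capacity of the one machine that cannot be multiplied.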

And a few things to keep in mind:
  • Minimize roundtrips: because of 1, trips to the DB are slow, and because of 3, each hit brings us closer to the limit of the DB server.
  • Offload calculations and joins: because of 2, the DB does these very fast, but because of 3, this could end up limiting scalability.
  • Coarse-grained transactions: because of 1, coarse-grained boundaries and caching boost performance (make the unit of work as wide as it can be). The number of transactions goes down, increasing scalability, and the door opens to write-behind caching and queued updates (which can remove synchronicity from DB operations).
  • Effective statement batching: combined with coarse-grained transactions, batching boosts performance and reduces the statement execution count on the server, which results in better scalability. A good ORM can do the job.
  • Reduce lock duration: locking limits concurrency, which is necessary to maintain the ACID properties, but locks on data force parallel accesses to happen in sequence. Use optimistic locks to improve scalability!
  • Polish the DB schema: use the DBA :)
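
Coarse-grained transactions and statement batching can be sketched together. Again SQLite stands in for a remote server, so the commit-count and statement-count differences are what matter, not the absolute timings.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
rows = [("event-%d" % i,) for i in range(1000)]

# Fine-grained: one statement and one commit per row -- 1000 transactions.
for (payload,) in rows:
    conn.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
    conn.commit()

# Coarse-grained + batched: one batched statement, one commit for the
# whole unit of work (the connection context manager commits on success).
conn.execute("DELETE FROM events")
with conn:
    conn.executemany("INSERT INTO events (payload) VALUES (?)", rows)

assert conn.execute("SELECT COUNT(*) FROM events").fetchone()[0] == 1000
```

An ORM typically gives you the same effect for free: a session flushes all pending changes as batched statements inside one transaction at commit time.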
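
Optimistic locking is usually implemented with a version column: instead of holding a lock while the user thinks, each writer updates the row only if the version it read is still current. A minimal sketch, assuming a hypothetical `account` table with a `version` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL, version INTEGER)"
)
conn.execute("INSERT INTO account VALUES (1, 100.0, 0)")
conn.commit()

def optimistic_update(conn, account_id, delta):
    """Read the row, then update only if nobody changed it in between."""
    balance, version = conn.execute(
        "SELECT balance, version FROM account WHERE id = ?", (account_id,)
    ).fetchone()
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance + delta, account_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows touched => another writer won the race

assert optimistic_update(conn, 1, -25.0)  # succeeds, version goes 0 -> 1

# Simulate a stale writer: it read version 0, but the row is now at version 1.
stale = conn.execute("UPDATE account SET balance = 0 WHERE id = 1 AND version = 0")
conn.commit()
assert stale.rowcount == 0  # the stale write is rejected, not applied
```

No lock is held between the read and the write, so parallel readers proceed freely; only conflicting writers pay, by retrying.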
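
Write-behind caching, mentioned under coarse-grained transactions, removes synchronicity by acknowledging the write immediately and flushing to the database later in bulk. A minimal sketch with an invented `WriteBehindCache` class (not any particular caching product):

```python
import sqlite3

class WriteBehindCache:
    """Buffers writes in memory; a flush pushes them to the DB in one batch."""

    def __init__(self, conn):
        self.conn = conn
        self.pending = []  # queued updates, not yet on the DB

    def put(self, key, value):
        # Acknowledged immediately -- no DB roundtrip on the write path.
        self.pending.append((key, value))

    def flush(self):
        # One transaction, one batched statement for all queued writes.
        with self.conn:
            self.conn.executemany(
                "INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)",
                self.pending,
            )
        self.pending = []

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

cache = WriteBehindCache(conn)
cache.put("a", "1")
cache.put("b", "2")
cache.put("a", "3")  # overwrites the queued value for "a" on flush
cache.flush()

assert conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0] == 2
assert conn.execute("SELECT value FROM kv WHERE key = 'a'").fetchone()[0] == "3"
```

The trade-off is durability: anything still in `pending` is lost if the process dies before a flush, which is why this pattern suits data you can afford to replay or lose.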