There’s been a meme going around recently that SQL and relational databases are somehow “too complicated”, antiquated and “old hat”, and should be replaced with something simpler and therefore more efficient.
This opinion is misguided (and perhaps slightly juvenile). Nevertheless, a kind of “NoSQL” movement formed which has created some very useful things in the Distributed Hash Table (DHT) space. (In a video on Cassandra, Eric Evans claims to have invented the term NoSQL and wishes he hadn’t!)
I hope to show that SQL and DHT (NoSQL) systems are complementary to each other and not in competition.
Useful data storage systems have “ACID” characteristics (Atomicity, Consistency, Isolation, Durability). SQL systems are very strong on Atomicity, Consistency and Isolation, and can also achieve “five nines” or more reliability in terms of Durability. But, even with highly partitioned data stores, the Consistency requirements often prove to be a bottleneck in terms of performance. This can be seen as an impact on Durability – i.e. database performance under sufficient write load can drop to a point where the database is effectively unavailable.
Sharding – completely splitting the database into isolated parts – can be used to increase performance very effectively, but Consistency, and queries that require access to the whole database, can become costly and complicated. In the latter case a proxy is usually required to submit the same query to all shards and then combine the results before returning them to the client. This can be very inefficient when making range queries.
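To make the scatter-gather cost concrete, here is a minimal sketch of such a proxy. The `FakeShard` class and its `range_query` method are stand-ins invented for illustration, not any real sharding API; the point is that every shard must be queried and the partial results merged before anything can be returned.

```python
import heapq

class FakeShard:
    """Stand-in for a shard connection; holds a sorted list of keys."""
    def __init__(self, rows):
        self.rows = sorted(rows)

    def range_query(self, low, high):
        # Each shard can only answer for its own slice of the data.
        return [r for r in self.rows if low <= r <= high]

def range_query_all_shards(shards, low, high):
    """Proxy behaviour: send the same range query to every shard,
    then merge the sorted partial results into one sorted answer -
    the step that makes cross-shard range queries expensive."""
    partial_results = [shard.range_query(low, high) for shard in shards]
    return list(heapq.merge(*partial_results))

# Keys hashed across three shards, so a range is scattered over all of them.
shards = [FakeShard([1, 5, 9]), FakeShard([2, 6, 10]), FakeShard([3, 7, 11])]
print(range_query_all_shards(shards, 2, 9))  # [2, 3, 5, 6, 7, 9]
```

Note that the proxy does work proportional to the number of shards even when most shards hold nothing in the requested range.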
DHT systems trade Atomicity and Consistency even further for more Durability under load (i.e. performance scaling). Strictly speaking, NoSQL can be implemented by a simple hash table on a single host – e.g. Berkeley DB – but these implementations have no scaling capability so are not included in this discussion.
SQL implementations include: MySQL, Oracle, PostgreSQL, SQL Server, etc. DHT implementations include: Cassandra, HBase, Membase, Voldemort, etc. MapReduce implementations (e.g. Hadoop) are a form of DHT, but one that can trade key uniqueness for the speed of “stream/tail processing”.
|SQL|DHT (NoSQL)|
|---|---|
|Immediate (or blocking) consistency|Eventual consistency: reads don’t wait for a write to completely propagate. Last write wins, conflict resolution on read, etc.|
|Transactional|Multiple-operation transactions implemented in the application.|
|Scale write performance by partitioning (utilise multiple disk spindles). Writes go to a privileged master or master cluster (which may also service reads). Scale read performance by “fan out”: multiple read slaves replicating from the master.|All nodes are functionally equal; no privileged “name” or meta nodes. Scale reads and writes by adding new nodes (preferably heterogeneous).|
|Relational. Indexes available on multiple columns (one column optionally a “primary” unique key).|Non-relational, single-index, key-value stores (“column family” DHT systems are just an extension of the single key).|
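The “last write wins, conflict resolution on read” entry above can be sketched as a minimal last-write-wins register, the simplest conflict-resolution scheme used by eventually consistent stores. This is an illustrative toy, not any particular system’s implementation: each replica keeps a `(timestamp, value)` pair, and a read that consults diverged replicas keeps whichever saw the most recent write.

```python
class LWWRegister:
    """Toy last-write-wins register: each replica tracks the
    timestamp of its latest write, and conflicts are resolved
    by keeping the newest value."""
    def __init__(self):
        self.ts = 0.0
        self.value = None

    def write(self, value, ts):
        # Accept the write only if it is at least as new as what we hold.
        if ts >= self.ts:
            self.ts, self.value = ts, value

    def merge(self, other):
        # Conflict resolution on read: take whichever replica
        # saw the most recent write ("last write wins").
        if other.ts > self.ts:
            self.ts, self.value = other.ts, other.value

# Two replicas diverge: the same key is written on each independently.
a, b = LWWRegister(), LWWRegister()
a.write("v1", ts=1.0)
b.write("v2", ts=2.0)   # later write lands on a different replica
a.merge(b)              # read repair: replica a adopts the newer value
print(a.value)          # v2
```

Real systems use vector clocks or per-cell timestamps rather than a single float, but the trade is the same: writes never block on propagation, and stale reads are possible until replicas reconcile.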
The metric is then quite simple: if high capacity (data volume or operations per second) is required, data is only ever accessed by primary key, and eventual consistency is good enough, then you have an excellent candidate for storage in a DHT.
Other relational storage can be replaced with DHT systems, but only at the cost of denormalising the data – structuring it for reads rather than writes – and this should probably be avoided!
You can, however, use a DHT to speed up an RDBMS with regard to the storage of blobs. Some RDBMSs have a separate disk space for blobs; some include them in the normal memory space along with the rest of the data. If you have a DHT to hand, another technique is to split any update into two halves: the first uses the RDBMS to store the simple, relational data and returns a primary key; the second then stores the blobs in the DHT against that primary key instead of in the RDBMS. This shortens the write thread, and any associated locking, in the RDBMS as much as possible.
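The two-halves technique can be sketched as follows. This is a minimal illustration under stated assumptions: SQLite stands in for the RDBMS, and a plain dict (keyed by the primary key) stands in for the DHT; `store_document` and `load_document` are hypothetical helpers, not part of any library.

```python
import sqlite3

# Stand-in DHT: a plain dict keyed by the RDBMS primary key.
dht = {}

# Stand-in RDBMS: an in-memory SQLite database holding only the
# small relational row, never the blob itself.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT)")

def store_document(title, blob):
    # First half: store the small relational row and get a primary key.
    # The RDBMS write (and any locking) stays as short as possible.
    cur = db.execute("INSERT INTO docs (title) VALUES (?)", (title,))
    db.commit()
    pk = cur.lastrowid
    # Second half: store the large blob in the DHT against that key.
    dht[pk] = blob
    return pk

def load_document(pk):
    # Reads recombine the two halves using the shared primary key.
    (title,) = db.execute("SELECT title FROM docs WHERE id=?", (pk,)).fetchone()
    return title, dht.get(pk)

pk = store_document("report", b"...large binary payload...")
print(load_document(pk))
```

One design consequence worth noting: because the two writes are no longer in a single transaction, a failure between the halves can leave a row without its blob (or vice versa), which is exactly the Atomicity trade discussed above.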
Update Sept 22nd, useful links: