When it comes to scaling MySQL (and in fact scaling DBs in general), Cal Henderson, in his excellent book "Building Scalable Web Sites", gives a concise summary:
1. For limited nonlinear read capacity scaling, we can use MySQL's replication system
2. For architectural linear scaling, we can go:
a. vertical, and partition our table space into clusters, or
b. horizontal, and partition our data into shards
A bit more detail follows.
1. With replication, data (typically the whole database) is copied to multiple machines. This provides multiple sources to read from, i.e. it is a technique for scaling reads. For a typical web application, where reads outnumber writes by a factor of 10 to 100, this may be good enough.
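A common way to exploit replication in the application layer is to split reads from writes: send every write to the master and spread reads across the replicas. A minimal sketch (the host names are illustrative, and a real driver would replace the string endpoints):

```python
import random

# Hypothetical endpoints: one writable master, several read-only replicas.
MASTER = "db-master:3306"
REPLICAS = ["db-replica-1:3306", "db-replica-2:3306"]

def pick_host(sql: str) -> str:
    """Route a statement: reads go to a random replica, writes to the master."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return random.choice(REPLICAS) if is_read else MASTER
```

Note that this ignores replication lag: a read issued right after a write may hit a replica that has not yet applied that write, which is one reason replication alone only stretches so far.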
2. If we want good scalability of both writes and reads, we need to split the data into pieces, so that several write operations can proceed in parallel (each write touching a piece of data independent from the others). We can split the data in two ways:
a. Clustering (vertical partitioning): each chunk of split data contains a subset of your tables; this chunk is called a cluster, and it can be organized as you wish: a single machine, or again, data replicated (see 1) across multiple machines in master-slave or master-master configurations.
b. Federation (horizontal partitioning): each chunk of data contains part of a table; for example, suppose we split our USERS table by the username column: users go into different chunks based on the first letter of the username. Such a chunk of data, together with the machines behind it, is called a shard.
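The first-letter scheme above boils down to a routing function the application consults before every query. A minimal sketch, assuming a hypothetical two-shard layout (names and ranges are made up for illustration):

```python
def shard_for(username: str) -> str:
    """Pick a shard from the first letter of the username.
    Hypothetical layout: usernames starting a-m live on one shard,
    everything else on the other."""
    first = username[0].lower()
    return "shard-am" if "a" <= first <= "m" else "shard-nz"
```

In practice the letter-to-shard mapping would live in a lookup table rather than in code, so shards can be rebalanced when one letter range grows hotter than the others.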
Pitfall: queries that need data from several shards.
To avoid it, make sure that all records a query needs to fetch reside on the same shard. In some cases this is difficult: one query may be satisfiable from a single shard, while another query needs the same data as the first, plus some additional data that lives on a different shard. A solution is denormalization: we store the data on both shards (accepting duplication), once in the existing, normalized tables and once in new, denormalized tables.
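As a sketch of what that looks like at write time, consider a comment written by one user to another, where both "comments I sent" and "comments I received" pages must each be served from a single shard. The table names, the `write_to` callback, and the sharding layout here are all hypothetical:

```python
def shard_for(username: str) -> str:
    """First-letter sharding (hypothetical a-m / n-z layout)."""
    return "shard-am" if "a" <= username[0].lower() <= "m" else "shard-nz"

def post_comment(author: str, recipient: str, text: str, write_to) -> None:
    """Store the same comment twice: a normalized copy on the author's
    shard, and a denormalized duplicate on the recipient's shard, so
    neither page requires a cross-shard query."""
    row = {"author": author, "recipient": recipient, "text": text}
    write_to(shard_for(author), "comments_sent", row)         # normalized copy
    write_to(shard_for(recipient), "comments_received", row)  # duplicate
```

The price of the duplicate write is, of course, having two copies that can drift apart.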
Of course, another issue here is keeping the duplicated data synchronized, in order to avoid inconsistencies. But that's yet another chapter in the long history of designing for performance.