We run one or more independent '''storage clusters'''. Each cluster is identified by a URL at which it speaks a common storage-server protocol. Different clusters may be implemented in vastly different ways and have different operational properties. For example, one might be using Cassandra, another might be using MySQL. But they all look the same from the outside.
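For concreteness, the uniform protocol surface might be consumed through a thin client along these lines. This is only a sketch: the class name and the endpoint path are illustrative assumptions, not part of any specified protocol.

```python
import json
import urllib.request

class StorageClusterClient:
    """Speaks the common storage protocol to one cluster at a given URL.

    Whether the cluster is backed by Cassandra, MySQL, or anything else
    is invisible from here: only the protocol and the URL matter.
    """

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def get_collection(self, user_id, collection):
        # Hypothetical endpoint shape, purely for illustration.
        url = f"{self.base_url}/{user_id}/storage/{collection}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
```

Because every cluster answers the same protocol, pointing this client at a different base URL is all it takes to switch clusters.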
Each user account is explicitly assigned to a particular cluster. Clients are responsible for discovering their assigned cluster and communicating with it using the common storage protocol.
A user's cluster assignment might change over time, due to evolving infrastructure needs. For example, we might decommission a cluster and migrate all its users to a shiny new one. We will take responsibility for moving the data around, but clients must be prepared to re-discover their cluster URL during a migration.
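The re-discovery requirement can be sketched as a small retry wrapper on the client side. The `NotMyUser` signal and the function names here are assumptions for illustration; the protocol's actual migration signal is not specified in this section.

```python
class NotMyUser(Exception):
    """Hypothetical error a cluster raises for a user it no longer serves."""

_cluster_cache = {}  # user_id -> last known cluster URL

def fetch_with_rediscovery(user_id, request_fn, discover_fn):
    """Run request_fn(cluster_url, user_id), re-discovering the cluster once.

    discover_fn(user_id) asks the discovery service for the user's current
    cluster URL; request_fn performs the storage request against it.
    """
    if user_id not in _cluster_cache:
        _cluster_cache[user_id] = discover_fn(user_id)
    try:
        return request_fn(_cluster_cache[user_id], user_id)
    except NotMyUser:
        # The user was migrated out from under us: look up the new
        # cluster and retry the request once against it.
        _cluster_cache[user_id] = discover_fn(user_id)
        return request_fn(_cluster_cache[user_id], user_id)
```

The key property is that a migration costs the client one failed request and one extra discovery lookup, after which the new cluster URL is cached.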
We will run an independent, highly-available piece of infrastructure just for doing cluster management and discovery: the '''userdb'''. This system maintains the mapping from each user account to its assigned cluster.
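A minimal sketch of the userdb's role, assuming a simple key-value shape (user account in, assigned cluster URL out). The account names and URLs are illustrative, and a real deployment would be a replicated service rather than an in-memory dict.

```python
# Illustrative assignments only; the real userdb is a separate,
# highly-available service, not an in-memory dict.
_ASSIGNMENTS = {
    "alice@example.com": "https://cluster1.storage.example.com",
    "bob@example.com": "https://cluster2.storage.example.com",
}

def lookup_cluster(user_id):
    """Answer the one question the userdb exists to answer:
    which cluster URL currently serves this user?"""
    try:
        return _ASSIGNMENTS[user_id]
    except KeyError:
        raise LookupError(f"no cluster assignment for {user_id!r}")
```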
Architecturally, the system winds up looking something like this: