How does Radix scale linearly?

To solve the problem of scalability, Radix started with the data architecture required for that kind of scale. We realized that to serve the world, we needed a data architecture that could allow billions of people, and even more devices, to use it simultaneously. That architecture would need to be both easy to index (so the data structure stays usable) and massively sharded (cut into fragments), so that as demand on the network grew, more computers could be added to increase its throughput.

Learn what sharding is in this explainer blog post.

We realized that cutting up the data ad hoc would not work. Instead, you need to know in advance how the data will be cut up: you should be able to look at any piece of data and know where it lives in the data structure, without re-indexing every time large amounts of new data are added. Radix achieves this through a deterministic process (a process that, given a set of inputs, always produces the same answer, no matter when it is computed). We pre-cut the platform's data structure into 18.4 quintillion shards (2^64), then use key reference fields, such as a public key, to determine where in that shard space a particular piece of data lives.
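The idea of a deterministic key-to-shard mapping can be sketched in a few lines. This is a hypothetical illustration, not Radix's actual algorithm: it assumes the shard index is derived by hashing the public key and taking the first 8 bytes of the digest as an unsigned integer, which lands uniformly in a 2^64 shard space.

```python
import hashlib

SHARD_COUNT = 2 ** 64  # the pre-cut shard space described above


def shard_for_key(public_key: bytes) -> int:
    """Deterministically map a key reference (e.g. a public key) to a shard index.

    Hypothetical sketch: interpret the first 8 bytes of a SHA-256 digest
    as a big-endian unsigned integer, giving a value in [0, 2^64).
    """
    digest = hashlib.sha256(public_key).digest()
    return int.from_bytes(digest[:8], "big")


# The same key always maps to the same shard, no matter when it is computed,
# so no global re-index is needed when new data arrives.
assert shard_for_key(b"alice-public-key") == shard_for_key(b"alice-public-key")
assert 0 <= shard_for_key(b"alice-public-key") < SHARD_COUNT
```

Because the mapping is a pure function of the key, any node can independently compute where a piece of data lives without consulting a central index.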

The result of this innovation is a sharded data structure that can scale linearly, without re-indexing overhead, from tiny data sets to ones large enough to serve every person and business in the world. The total throughput of the network is limited only by the number of participating nodes.

Learn why and how Radix shards in this blog post.
