Why Windows Azure is great for SaaS digital signage

I have just been reading an MSDN blog post about the scalability of Windows Azure, and it makes for some interesting reading. The entire article is here for the propeller heads who wish to dig deeper, but the part that stands out for me is this…

Scalability and Performance Targets

Now that we have given a high-level description of storage accounts, storage abstractions and how they are grouped into partitions, we want to talk about the scalability targets for storage accounts, objects and their partitions.

The following are the scalability targets for a single storage account:

* Capacity – Up to 100 TB
* Transactions – Up to 5,000 entities/messages/blobs per second
* Bandwidth – Up to 3 gigabits per second

The 100 TB figure is a strict limit for a storage account, whereas the transaction and bandwidth figures are the current targets we’ve built the system to for a single storage account. Note that the actual transaction rate and bandwidth achieved by your storage account will depend very much upon the size of your objects, your access patterns, and the type of workload your application exhibits. To go above these targets, a service should be built to use multiple storage accounts, partitioning its blob containers, tables, queues and objects across those accounts. By default, a subscription gets five storage accounts, and you can contact customer support for more if you need to store more data than that (e.g., petabytes).

A hosted service is expected to need only as many storage accounts as it takes to meet the performance targets above, which is typically a handful, up to tens of storage accounts when storing petabytes of data. The point here is that a hosted service should not plan on creating a separate storage account for each of its customers. Instead, it should either represent each customer within a storage account (e.g., each customer could have its own Blob Container), or map/hash each customer’s data across the hosted service’s storage accounts.
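A minimal sketch of that map/hash approach, assuming a made-up set of account names and a hypothetical helper function (this is not part of any Azure SDK): a stable hash of the customer ID picks one of the service’s storage accounts, and each customer then gets its own blob container within that account.

```python
import hashlib

# Hypothetical account names; a real service would list its own accounts.
STORAGE_ACCOUNTS = [
    "signagedata0", "signagedata1", "signagedata2",
    "signagedata3", "signagedata4",
]

def account_for_customer(customer_id: str) -> str:
    """Deterministically map a customer onto one storage account.

    A stable hash (not Python's process-randomised hash()) ensures the
    same customer always lands in the same account across restarts.
    """
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(STORAGE_ACCOUNTS)
    return STORAGE_ACCOUNTS[index]

# Each customer's blobs then live in their own container in that account.
print(account_for_customer("acme-displays"))  # stable pick from the five accounts
```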

Within a storage account, all of the objects are grouped into partitions as described above. Therefore, it is important to understand the performance targets of a single partition for our storage abstractions, which are:

* Single Queue – all of the messages in a queue are accessed via a single queue partition. A single queue is targeted to be able to process:
  * Up to 500 messages per second
* Single Table Partition – a table partition is all of the entities in a table with the same partition key value, and most tables have many partitions. The throughput target for a single partition is:
  * Up to 500 entities per second
  * Note that this is for a single partition, not a single table. A table with good partitioning can therefore process up to a few thousand requests per second (up to the storage account target).
* Single Blob – the partition key for blobs is the “container name + blob name”, so we can partition blobs down to a single blob to spread blob access across our servers. The target throughput of a single blob is:
  * Up to 60 MBytes/sec

The above throughputs are the high-end targets for the current system. What your application can actually achieve very much depends upon the size of the objects being accessed, the operations (workload) and the access patterns. We encourage all services to test performance at the partition level for their workload.
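To make the table numbers concrete, here is a rough capacity-planning sketch using only the targets quoted above; the function name and constants are illustrative assumptions, not an Azure API.

```python
import math

# Targets quoted above: ~500 entities/sec per table partition,
# ~5,000 entities/sec per storage account.
PARTITION_TARGET_EPS = 500
ACCOUNT_TARGET_EPS = 5_000

def partitions_needed(required_eps: int) -> int:
    """Minimum number of table partitions (distinct partition key values)
    the load must be spread over to sustain the required entities/sec."""
    if required_eps > ACCOUNT_TARGET_EPS:
        raise ValueError("beyond one account's target: use multiple storage accounts")
    return math.ceil(required_eps / PARTITION_TARGET_EPS)

print(partitions_needed(2_000))  # -> 4: use at least 4 distinct partition keys
```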

When your application reaches the limit of what a partition can handle for your workload, it will start to get back “503 Server Busy” responses. When this occurs, the application should use exponential backoff for its retries. The backoff allows the load on the partition to decrease and eases out spikes in traffic to it. If this is a regular occurrence, then the application should look at improving its data partitioning and throughput for the storage abstraction in question.
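In code, that retry loop looks something like the following minimal sketch. ServerBusyError is a hypothetical stand-in for however your client library surfaces a 503, and the names and defaults are assumptions, not a real API.

```python
import random
import time

class ServerBusyError(Exception):
    """Stand-in for a 503 'server busy' response from the storage service."""

def with_exponential_backoff(operation, max_attempts=6, base_delay=0.5):
    """Call `operation`, retrying with exponential backoff on 503s."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ServerBusyError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the failure
            # Double the wait each attempt, plus jitter so that many
            # clients do not all hammer the partition again in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Waiting 0.5 s, then 1 s, 2 s and so on gives the overloaded partition room to drain before the next attempt, which is exactly what the article is asking for.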

What does all this mean? Well, here is an example: with the default five storage accounts each targeting 3 gigabits per second, digitalsignage.NET gets an aggregate file download bandwidth of 15 Gbit/s, with each individual file (a single blob) having up to 60 MBytes/s of available bandwidth.
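A quick back-of-the-envelope check of those figures, assuming the default five accounts all hit their bandwidth target:

```python
accounts = 5
per_account_gbit_s = 3                  # bandwidth target per storage account
aggregate_gbit_s = accounts * per_account_gbit_s   # 15 Gbit/s in total

per_blob_gbit_s = 60 * 8 / 1000         # 60 MBytes/s per blob ≈ 0.48 Gbit/s
max_full_rate_files = aggregate_gbit_s / per_blob_gbit_s

print(f"{aggregate_gbit_s} Gbit/s aggregate")
print(f"~{max_full_rate_files:.0f} files downloadable at full blob speed at once")
```

In other words, roughly thirty files could be served at the full per-blob rate simultaneously before the aggregate target is reached.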

That is not something one can easily achieve just by putting a number of servers into an ISP 🙂

I would imagine Amazon offers similar performance figures, but even the digital signage companies with dedicated data centres must burn serious cash to support that sort of bandwidth.

