Comparing Cloud Compute Services

Over the years we've spent a good amount of time testing and thinking about how to compare cloud services. Some services, like content delivery networks (CDN), managed DNS and object storage, are relatively easy to compare because they have few deployment options and similar features across providers.

Comparing cloud compute or servers is a different story entirely. Because of the diverse deployment options and dissimilar features of different services, formulating relevant and fair comparisons is challenging to say the least. In fact, we've come to the conclusion that there is no perfect way to do it. This isn't to say that you can't - but if you do, or if you are handed a third party comparison to look over, there are some things you should keep in mind and watch out for (we've seen some poorly constructed comparisons).

The purpose of this post is to highlight some of these considerations. To do so, I'll present actual comparisons from testing we've done recently on Amazon EC2, DigitalOcean, Google Compute Engine, Microsoft Azure, Rackspace and SoftLayer. I won't declare any grand triumphs of one service over another (this is something we try to avoid anyway). Instead, I'll just present the numbers as we observed them with some cursory commentary, and let you draw your own conclusions. I'll also discuss value and some non-performance related factors you might want to consider.

This post is essentially a summary of a report we've published in tandem. The report covers the same topics, but in more detail. You can download the report for free.

Apples and Oranges

Before you start running tests, you first need to pick some compute instances from different services you are considering. Ideally your choices will be based on your actual requirements and workloads.

Some comparisons I've seen get off to a bad start here - picking instances that are essentially apples and oranges. For example, I still see studies that compare older Amazon EC2 m1 instances (which have been replaced with newer classes) to the latest and greatest of other services. I've also seen comparisons where instances have dissimilar CPU cores (CPU benchmark metrics are often proportional to cores). Watch out for these types of studies, because often the conclusions they come to are inaccurate and irrelevant.

Test Workloads

Before comparing compute services, you should have an idea about the type of workloads you'd like to compare. Different workloads have different performance characteristics. For our comparisons, we based testing on two workloads - web and database servers.

For the web server workload, our focus was CPU, disk read and external network performance. Because web servers usually don't store mission critical data, we picked instances with generally faster local drives (SSD where possible). We looked for instances with 1-2X memory per CPU core (e.g. 1 core / 2 GB memory) - sufficient for many web server workloads.

For the database server workload, our primary focus was CPU, disk read and write, memory and internal network performance. Because database servers are typically a critical component in an application stack, we chose off instance storage instead of local storage because of its better resilience. If a host system fails, off instance storage volumes can usually be quickly restored on a new compute instance. We looked for instances with 2-4X memory per CPU core for this workload.

We picked three instance sizes for each workload - small, medium and large. In order to provide an apples-to-apples comparison, we chose from current instance classes with each service and matched the number of CPU cores precisely.

Our Comparisons

Based on the criteria above, the tables below show the instances we picked for each service and workload. More details on our reasoning for these selections are covered in the report.

Memory-to-core ratios are where it is often nearly impossible to match services exactly. Our primary consideration was matching CPU cores, since this is what affects CPU benchmark metrics the most.

Web Server Comparisons

On July 1, 2014, Amazon announced T2 instances with burstable CPU capacity. This instance class offers 1 to 2 CPU cores and provides bursting using a predictable, credit based method. CPU bursting is nothing new, but the T2 implementation with a predictable, credit based burst model is, and it offers good value for workloads that fall within its 10-20% bursting allowance. Because workloads with temporary bursting are common, we included the t2.medium instance in the small web server CPU performance and value analysis (in addition to the c3.large instance type).
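For intuition, a credit based burst model behaves like a token bucket: credits accrue at a fixed hourly rate and are consumed in proportion to CPU utilization. Below is a minimal sketch of that mechanic - the earn rate, burn rate and cap are illustrative assumptions, not Amazon's published T2 parameters:

    # Sketch of a credit based CPU burst model (hypothetical rates,
    # not Amazon's published T2 parameters).
    def credit_balance(hourly_utilization, earn_per_hour=24,
                       burn_per_hour_full=60, cap=288, start=288):
        """Yield the credit balance after each hour of operation."""
        credits = start
        for u in hourly_utilization:          # u: CPU utilization, 0.0-1.0
            credits += earn_per_hour - burn_per_hour_full * u
            credits = max(0.0, min(cap, credits))
            yield credits

    # Sustained 100% CPU drains the bucket; 40% utilization (24/60) breaks even
    print(list(credit_balance([1.0] * 8)))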

Compute Service | Small Web Server | Medium Web Server | Large Web Server
Amazon EC2 | c3.large + t2.medium | c3.xlarge | c3.2xlarge
DigitalOcean | 4 GB / 2 Cores | 8 GB / 4 Cores | 16 GB / 8 Cores
Google Compute Engine | n1-highcpu-2 | n1-highcpu-4 | n1-highcpu-8
Microsoft Azure | Medium (A2) | Large (A3) | Extra Large (A4)
Rackspace | Performance 1 2GB | Performance 1 4GB | Performance 1 8GB
SoftLayer | 2 GB / 2 Cores | 4 GB / 4 Cores | 8 GB / 8 Cores

Database Server Comparisons

Compute Service | Small Database Server | Medium Database Server | Large Database Server
Amazon EC2 | c3.xlarge | c3.2xlarge | c3.4xlarge
DigitalOcean | 8 GB / 4 Cores | 16 GB / 8 Cores | 48 GB / 16 Cores
Google Compute Engine | n1-standard-4 | n1-standard-8 | n1-standard-16
Microsoft Azure | Large (A3) | Extra Large (A4) | A9 (1)
Rackspace | Performance 2 15GB | Performance 2 30GB | Performance 2 60GB
SoftLayer | 8 GB / 4 Cores | 16 GB / 8 Cores | 32 GB / 16 Cores

1 The A9 instance is part of Azure's new compute intensive instance class. It is based on newer hardware and has a higher memory-to-core ratio (7X) than the other Azure instances included in the comparisons.

Benchmark Relevance

Once you've picked services and compute instances to compare, and workloads to focus on, the next step is to choose benchmarks that are relevant to those workloads. This is another area where I've seen some bad comparisons - often arriving at broad conclusions based on simplistic or irrelevant benchmarks.

The benchmarks we chose are SPEC CPU 2006 for CPU performance, fio for disk IO, STREAM for memory, and our own benchmarks (based on standard Linux tools) for internal and external network testing. More details about these benchmarks, and the runtime configurations we used are provided in the report.

Performance Comparisons

The graphs below show some of the results from our testing. Complete results are available in the full report.

CPU Performance

With the exception of Microsoft Azure (and occasionally SoftLayer), all of the services we tested use current generation Sandy Bridge or Ivy Bridge processors, resulting in similar CPU metrics.

SPEC CPU 2006 is an industry standard benchmark for measuring CPU performance using 29 different CPU test workloads. A higher number in these graphs represents better CPU performance. SPEC CPU 2006 has two components - integer and floating point. Only integer results are included because floating point results were similar. Complete results are available in the report.

Due to SPEC rules governing test repeatability (which cannot be guaranteed in virtualized, multi-tenant cloud environments), the results below should be taken as estimates.

In the web server tests Amazon EC2 edged out the other services, likely due to slightly faster hardware (2.8 GHz Ivy Bridge). This was particularly apparent for the t2.medium instance in peak/burst operating mode in the small web server comparisons. The report lists all CPU architectures we observed for each service. Azure predictably performed worse due to its use of older AMD 4171 processors.

In the database server tests Rackspace Performance 2 instances were slightly faster except for the Large Database Server where Azure's new A9 compute intensive instance performed best.

CPU Variability

One factor many comparisons overlook is performance variability. Cloud services are usually virtualized and multi-tenant, and these factors can introduce performance variability due to resource sharing.

To capture CPU performance variability, we ran SPEC CPU multiple times on multiple instances from each service and measured the relative standard deviation of the resulting metrics. In these graphs a higher value represents greater variability.
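Relative standard deviation (the coefficient of variation) is simply the standard deviation expressed as a percentage of the mean. A minimal sketch of the computation, using hypothetical SPEC CPU scores from repeated runs:

    # Relative standard deviation (%) of benchmark metrics from repeated runs
    import statistics

    def relative_stdev(metrics):
        return 100 * statistics.stdev(metrics) / statistics.mean(metrics)

    # Example: SPEC CPU int scores from 5 runs (hypothetical values)
    print(round(relative_stdev([24.1, 23.8, 25.0, 22.9, 24.4]), 2))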

For web server tests CPU variability was highest for SoftLayer and Rackspace. For SoftLayer the cause was its use of different CPU models, some older and some newer (we observed 6 different architectures on SoftLayer, from a 2009 X3470 to 2013 Ivy Bridge). For Rackspace and DigitalOcean the CPU architecture was consistent, so the cause may be resource contention and/or use of hyperthreading and floating cores.

For database server tests SoftLayer variability was again the highest due to changing CPU types across instances. Rackspace Performance 2 instances were less variable than Performance 1 instances.

Disk Performance

Our disk performance tests were conducted using 6 block sizes (4k, 16k, 32k, 64k, 128k and 1024k) and asynchronous IO using an optimal queue depth for each service, workload and block size. For the web server comparisons our testing used 100% read workloads. For database server comparisons we used mixed read + write workloads (80% write and 20% read). More details about the fio test settings are provided in the report.
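To give a sense of what such a sweep looks like, here is a sketch that shells out to fio across the same block sizes. The device path, queue depth and runtime are placeholders - our actual settings varied by service and are documented in the report:

    # Sketch: run fio across the same block sizes (placeholder settings,
    # not the exact parameters from our testing - see the report for those)
    import subprocess

    BLOCK_SIZES = ['4k', '16k', '32k', '64k', '128k', '1024k']

    for bs in BLOCK_SIZES:
        subprocess.run([
            'fio', '--name=randread-' + bs,
            '--filename=/dev/xvdb',        # target device (hypothetical)
            '--rw=randread',               # 100% reads for the web workload;
                                           # the database workload would use
                                           # --rw=randrw --rwmixwrite=80
            '--bs=' + bs,
            '--ioengine=libaio', '--direct=1',
            '--iodepth=32',                # placeholder queue depth
            '--runtime=60', '--time_based',
            '--output-format=json',
            '--output=randread-' + bs + '.json',
        ], check=True)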

At the time of testing, Google, Microsoft and SoftLayer did not offer local SSD storage (Google recently announced external SSD storage). Because of this, web server disk performance for these services was notably slower compared to services with a local SSD option.

In these graphs, a higher value signifies faster performance. Our fio test results include 12 sets of tests per instance (6 block sizes with both random and sequential IO). For brevity, I've provided a sampling of these results in this post. The complete results are available in the report.

For the large web server random read tests, Amazon EC2 and Rackspace performed best. Although DigitalOcean also uses local SSD storage, its performance was consistently slower than other local SSD based services.

For the database server disk tests we used Amazon EC2 standard and EBS optimized instances with General Purpose (SSD) and provisioned IOPS (PIOPS) EBS volumes. For Rackspace we used both SSD and SATA off instance storage. DigitalOcean was excluded from the database server tests because they do not have an off instance storage option. For the small database server the Amazon EBS PIOPS volumes were provisioned for 1500 IOPS, and the Rackspace database servers used SATA storage volumes.

For the medium database server the Amazon EBS PIOPS volumes were provisioned for 3000 IOPS. The Rackspace database servers used SSD storage volumes.

For the large database server the Amazon EBS PIOPS volumes were provisioned for 4000 IOPS. The Rackspace database servers used SATA storage volumes.

Disk Consistency

Disk IO is a performance characteristic where variability can be very high for cloud environments. This is often due to the same physical disk drives being shared by multiple users. For off instance storage variability can be magnified due to network effects.

To measure disk performance consistency, we captured relative standard deviation of IOPS from many test iterations on multiple instances. In these graphs a higher value signifies greater variability. These graphs provide a subset of the analysis included in the report.
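A sketch of how IOPS figures can be pulled from fio's JSON output and reduced to a relative standard deviation (the JSON field layout varies by fio version, so treat the paths below as an assumption to verify against your output):

    # Sketch: compute IOPS relative standard deviation across fio runs.
    # The JSON layout varies by fio version - verify against your output.
    import json, statistics

    def iops_from_fio_json(path, direction='read'):
        with open(path) as f:
            data = json.load(f)
        return data['jobs'][0][direction]['iops']

    runs = [iops_from_fio_json('run%d.json' % i) for i in range(1, 6)]
    rsd = 100 * statistics.stdev(runs) / statistics.mean(runs)
    print('IOPS RSD: %.2f%%' % rsd)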

For the web server tests, DigitalOcean consistently demonstrated the highest variability of all services.

For all three database server test iterations Amazon EBS PIOPS volumes were the least variable, and Rackspace SATA volumes had the highest variability.

The medium Rackspace database servers used SSD external volumes, which were less variable than SATA volumes. IO for Amazon EBS PIOPS volumes was very consistent. Microsoft Azure consistency improved with this instance type, perhaps because it is a larger type.

Rackspace SATA external volumes demonstrated high variability in the large database server tests.

Memory Performance

Memory performance is usually dependent on CPU architecture, with newer CPU models usually having faster memory. Unlike CPU and disk, memory is often not shared and thus provides consistent performance.

We measured memory performance using the STREAM benchmark, which uses multiple memory tests to measure memory bandwidth. In these graphs a higher value signifies faster memory bandwidth.
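STREAM itself is a small C program, but the measurement idea is easy to sketch: time a simple array operation over buffers much larger than cache and divide bytes moved by elapsed time. A rough NumPy approximation (interpreter overhead means it will understate what the C benchmark reports):

    # Rough STREAM-style kernel: a = b + scalar * c, bandwidth = bytes / time
    import time
    import numpy as np

    N = 20_000_000                      # ~160 MB per array, far larger than cache
    b = np.ones(N)
    c = np.ones(N)
    start = time.perf_counter()
    a = b + 3.0 * c                     # STREAM "triad" style operation
    elapsed = time.perf_counter() - start
    bytes_moved = 3 * N * 8             # read b, read c, write a (8 byte doubles)
    print('%.2f GB/s' % (bytes_moved / elapsed / 1e9))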

Memory performance results were similar for all database server sizes with the exception of SoftLayer (due to its use of multiple CPU types), so I've only included results for the medium and large database instances. Amazon EC2 and Google were consistently the fastest in the memory tests.

The Microsoft Azure A9 instance performed well in the large database server group due to its use of newer Intel E5-2670 processors (as opposed to the older AMD 4171 processors used in other Azure instances).

External Network Performance

Web servers typically respond to requests from users over the Internet. The speed at which web servers respond is dependent on the distance between the user and web server, and the Internet connectivity provided by the service.

Measuring external network performance is complex due to a large number of variables (there are tens of thousands of ISPs and infinite routing possibilities). We host a cloud speedtest that lets users run network tests to measure their connectivity to different cloud services. Using a Geo IP database, we summarize the results of these tests regionally. These summaries are the basis for the data provided in the graphs below. In these graphs we provide mean latency between cloud services and users located in different geographical regions. A lower value signifies better latency - a shorter logical path between users and the service.
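The speedtest runs in the browser, but the underlying idea is simple: time round trips from the user to an endpoint in each candidate region. A minimal sketch using TCP connect time - the endpoint hostnames are placeholders:

    # Sketch: estimate network latency to a region via TCP connect time
    import socket, time

    def connect_latency_ms(host, port=80, samples=5):
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            sock = socket.create_connection((host, port), timeout=5)
            times.append((time.perf_counter() - start) * 1000)
            sock.close()
        return min(times)               # min approximates the path latency

    # Hypothetical endpoints, one per candidate region
    for host in ['us-east.example.com', 'eu-west.example.com']:
        print(host, '%.1f ms' % connect_latency_ms(host))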

Compute services often allow users to provision instances in data centers located in different regions. These results are based on the most optimal service region for each service and continent.

Region | Amazon EC2 | DigitalOcean | Google Compute Engine | Microsoft Azure | Rackspace
North America | us-east-1 | NY2 | us-central1 | South Central US | ORD
Europe | eu-west-1 | AMS1 | europe-west1 | West Europe | LON
Asia | ap-southeast-1 | SG1 | asia-east1 | Southeast Asia | HKG
Oceania | ap-southeast-1 | SFO1 | asia-east1 | Southeast Asia | SYD
South America | sa-east-1 | NY2 | us-central1 | South Central US | DFW

SoftLayer is excluded from these tests because at the time of writing, we did not have it available on the cloud speedtest.

Network connectivity in Asia and South America is generally slower than in North America and Europe. Because of this, every service performed worse in those regions.

Internal Network Performance

Database servers often interact with other servers located in the same data center over the internal network. For example, a web server might query a database server to display dynamic content on a website. Because of this, we chose to measure internal network performance for the database workload. To do so, we used a custom test that uses ping and http to measure latency and throughput performance within the internal network of each service.

The data below is based on interaction between each of the three sizes of web and database server instances - small, medium and large. To maximize performance, we provisioned instances using the optimal network settings available for each service (e.g. placement groups and VPC on Amazon EC2).

For latency tests a lower value signifies better performance, while for throughput tests a higher value is better.
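A condensed sketch of the ping plus HTTP approach between two instances on a service's internal network - the peer address and test file below are placeholders:

    # Sketch: internal network latency (ping) and throughput (timed HTTP GET)
    import re, subprocess, time, urllib.request

    PEER = '10.0.0.12'                  # private IP of peer instance (hypothetical)

    # Latency: mean RTT parsed from ping's summary line
    out = subprocess.run(['ping', '-c', '10', PEER],
                         capture_output=True, text=True).stdout
    avg_ms = re.search(r'= [\d.]+/([\d.]+)/', out).group(1)
    print('latency: %s ms' % avg_ms)

    # Throughput: timed HTTP download of a large file served by the peer
    start = time.perf_counter()
    data = urllib.request.urlopen('http://%s/test-file.bin' % PEER).read()
    elapsed = time.perf_counter() - start
    print('throughput: %.2f Gb/s' % (len(data) * 8 / elapsed / 1e9))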

Use of Amazon EC2 placement groups and VPC provided significantly higher throughput compared to other services - nearly 9 Gb/s. SoftLayer and DigitalOcean networks are likely capped at 1 Gb/s.

For some services, uplink and downlink network throughput was asymmetrical. Rackspace, for example, appears to cap the uplink (outbound traffic) but not the downlink (inbound traffic), even on the internal network interface. The caps listed on the Rackspace website range from 200 Mb/s to 1,600 Mb/s for Performance 1 compute instances.

Network latency for all services was less than 0.7ms. Amazon EC2 and SoftLayer had the best latency at between 0.10 and 0.15ms.

Value

When picking a service, you may want to look for the best combination of performance and price - in other words, the best value.

In this section, I estimate value for each service using CPU performance and hourly costs for each service and instance type. Because services sometimes offer different prices based on term commitments, I included on-demand (hourly) pricing along with longer term options such as monthly, 1 year and 3 year commitments (where applicable).

Value was carried over for services not offering a particular term option. All prices are normalized to hourly rates. If a term option has a setup fee, that fee was added to the hourly cost by dividing it by the total number of hours in the term.

The benchmark metric used for this value analysis is SPEC CPU 2006. The value metric provided is a ratio of SPEC CPU 2006 to the hourly cost for each service and instance type. More detail and data is available in the report.
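Concretely, the value metric reduces to a few lines of arithmetic, with any setup fee amortized across the hours in the term. A sketch with hypothetical prices:

    def effective_hourly(hourly, setup_fee=0.0, term_hours=None):
        """Amortize any setup fee across the hours in the term."""
        if setup_fee and term_hours:
            return hourly + setup_fee / term_hours
        return hourly

    def value(spec_cpu, hourly, setup_fee=0.0, term_hours=None):
        """Value = SPEC CPU 2006 metric / effective hourly cost."""
        return spec_cpu / effective_hourly(hourly, setup_fee, term_hours)

    # Hypothetical: SPEC int of 24 at $0.105/hr with a $50 fee on a 1 year term
    print(round(value(24, 0.105, setup_fee=50, term_hours=365 * 24), 1))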

In peak/burst operating mode the t2.medium provides the best value by a significant margin in the small web server category. Keep in mind that this level of performance is achievable for only 10-20% of total operational time. The baseline (non-burst) value is substantially lower (about 5X).

Google's new monthly discounts offer a good value without requiring any upfront setup fees (if you keep the instance live for a month you automatically get a discount). DigitalOcean provides good value for on-demand instances. Amazon EC2 is generally the best value for 1 or 3 year terms. Microsoft Azure value is low due to use of older CPU hardware resulting in poor performance on CPU benchmarks.

DigitalOcean value for database server instances was best for on-demand pricing, while Amazon EC2 was best for 1 and 3 year terms.

Although Microsoft Azure performed well in the large database server comparisons, its value was low due to its higher price tag (partially attributable to its 7X CPU cores-to-memory ratio).

Other Considerations

When choosing a compute service, there are other things you should consider unrelated to performance or price. Depending on your workloads, these may be of greater or lesser importance.

Service Reliability

Service reliability is likely important to any organization. Quantifying reliability is difficult because there are many possible points of failure. We maintain compute instances with most compute services and use them to track outages and measure availability. Although maintaining a single compute instance with each service isn't a foolproof method for measuring availability, we do manage to capture many service related outages this way. The table below shows our availability data for each service over the past 30 days. These metrics are based on mean availability for each service across all service regions. Our cloud status page provides real-time status and lets you view availability and outages over different periods.

Service | 30 Day Availability | Outages | Downtime
Amazon EC2 | 100% | 0 | 0 mins
Rackspace | 99.9971% | 3 | 7.45 mins
Google Compute Engine | 99.9862% | 10 | 17.85 mins
Microsoft Azure | 99.9922% | 23 | 25.5 mins
DigitalOcean | 99.9756% | 16 | 52.67 mins
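The availability arithmetic behind these numbers is straightforward: for each region, divide uptime by total time in the window, then average across regions. A sketch:

    # Sketch: 30 day availability as the mean across regions
    def mean_availability(outage_minutes_by_region, window_minutes=30 * 24 * 60):
        per_region = [1 - m / window_minutes for m in outage_minutes_by_region]
        return 100 * sum(per_region) / len(per_region)

    # Hypothetical: 3 regions, one with a 15 minute outage in 30 days
    print('%.4f%%' % mean_availability([0, 15, 0]))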

Service Quotas

Most compute services impose limits on the number of instances you can have active at a given time. These limits may affect you if you experience rapid growth, or have elastic workloads with high usage requirements during peak times. Although you can typically request an increase to quotas, the default limits can convey the scale at which a service operates. Services operating at a larger scale may be better able to support rapid growth and elastic workloads.

Each service typically has a different procedure for obtaining quota increases. Our experience with these has been mixed across services. With Amazon and Google, for example, we have often obtained increases within hours, while with some others responses have been slower and capacity more limited.

These tables list both the quota policies and how they would affect provisioning of compute instances of different sizes.

Quota Policies

Service Policy
Amazon EC2 20 instances per region for most compute instance types
DigitalOcean 10 compute instances (Droplets)
Google Compute Engine 24 CPU cores per region
Microsoft Azure 20 CPU cores
Rackspace 128 GB memory
SoftLayer 20 compute instances per day, additional instances reviewed manually for approval

Instance Quotas

Instance Size | Amazon EC2 | DigitalOcean | Google Compute Engine | Microsoft Azure | Rackspace | SoftLayer
2 GB / 1 Core | 160 (20 per region) | 10 | 72 (24 per region) | 20 | 64 | 20 daily (1)
4 GB / 2 Core | 160 (20 per region) | 10 | 32 (12 per region) | 10 | 32 | 20 daily
8 GB / 4 Core | 160 (20 per region) | 10 | 18 (6 per region) | 5 | 16 | 20 daily
16 GB / 8 Core | 160 (20 per region) | 10 | 9 (3 per region) | 2 | 8 | 20 daily
32 GB / 16 Core | 160 (20 per region) | 10 | 3 (1 per region) | 1 | 4 | 20 daily

1 Certain SoftLayer regions have more cloud capacity than others. We have experienced provisioning requests in some regions being cancelled due to lack of capacity, while in others they are successful.
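The instance quota table is derived mechanically from each policy: a core or memory quota is divided by the per-instance size, while an instance count quota applies directly. A sketch of that arithmetic:

    # Sketch: translate a quota policy into an instance limit
    def max_instances(policy, quota, cores=1, memory_gb=2):
        if policy == 'instances':
            return quota
        if policy == 'cores':
            return quota // cores
        if policy == 'memory_gb':
            return quota // memory_gb
        raise ValueError(policy)

    print(max_instances('cores', 24, cores=2))            # Google: 24 cores/region -> 12
    print(max_instances('memory_gb', 128, memory_gb=16))  # Rackspace: 128 GB -> 8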

Storage Offerings

Compute services often offer different storage options. These may include local instance storage, external (off instance) storage, different drive types (e.g. SATA and SSD), multiple volumes, snapshots and provisioned IO.

This table shows support for each of these storage options by each service in this post:

Compute Service | Local | External | Drive Types | Multiple Volumes | Snapshots | Provisioned IO
Amazon EC2 | Yes | Yes | SATA; SSD | Yes | Yes | Yes
DigitalOcean | Yes | No | SSD | No | Yes | No
Google Compute Engine | Yes (beta) | Yes | Unknown (1) | Yes | Yes | No
Microsoft Azure | Yes | Yes | Unknown | Yes | Yes | No
Rackspace | Yes | Yes | SATA; SSD | Yes | Yes | No
SoftLayer | Yes | Yes | SATA | Yes | Yes | No

1 Google recently announced a preview of SSD based off instance storage

Networking Capabilities

Another consideration for compute services is networking capabilities. A few such capabilities are IPv6 support, multiple IPv4 addresses, private IP addressing, load balancing and health checks.

This table shows support for each of these networking capabilities by each service in this post:

Compute Service | IPv6 Support | Multiple IPv4 | Private IP | Load Balancing | Health Checks
Amazon EC2 | Yes (1) | Yes | Yes | Yes | Yes
DigitalOcean | No | No | Partial (3 of 6 regions) | No | No
Google Compute Engine | No | No | Yes | Yes | Yes
Microsoft Azure | No | Yes | Yes | Yes | Yes
Rackspace | Yes | Yes (2) | Yes | Yes | Yes
SoftLayer | Yes | Yes | Yes | Yes | Yes

1 IPv6 supported when used with elastic load balancer (ELB) only

2 Must request with support ticket with Rackspace and provide valid justification

Data Center Locations

If your users are dispersed globally, or located primarily in a specific geographical region, you may want to consider where a service's data centers are located. The table below lists the number of data centers each service operates in North America, South America, Europe, Asia and Australia.

Compute Service | North America | South America | Europe | Asia | Australia
Amazon EC2 | 4 | 1 | 1 | 3 | 1
DigitalOcean | 3 | 0 | 1 | 1 | 0
Google Compute Engine | 1 | 0 | 1 | 1 | 0
Microsoft Azure | 6 | 1 | 2 | 4 | 0
Rackspace | 4 | 0 | 1 | 1 | 1
SoftLayer | 4 | 0 | 1 | 1 | 0

Security

Security is another concern many have with the cloud, so a service's security capabilities may be an important consideration. Although operating systems usually support security features and software based firewalls, it is often better to deal with security separately, outside of the operating system entirely. Some common security features include firewalls, VPN, virtual private clouds (VPC) and PCI DSS compliance.

This table lists support for these security features by each compute service in this post:

Compute Service | Firewall | VPN | VPC | PCI DSS
Amazon EC2 | Yes | Yes | Yes | Yes
DigitalOcean | No | No | No | No
Google Compute Engine | Yes | No | Yes | No
Microsoft Azure | Yes | Yes | Yes | Yes
Rackspace | $160/mo (1) | $160/mo (1) | Yes | Yes
SoftLayer | Yes | Yes | Yes | Yes

1 Requires additional subscription to Brocade Vyatta vRouter starting at $160 per month for each compute instance

Service Ecosystem

If you are using a cloud compute service, you likely use other types of cloud services as well. When evaluating compute services you should consider the provider's ability to fulfill other hosting requirements you may have. Some other types of cloud services include object storage, CDN, DNS, database-as-a-service (DBaaS) and platform-as-a-service (PaaS).

This table lists support for each of these other cloud service types by each provider included in this post:

Compute Service | Object Storage | CDN | DNS | DBaaS | PaaS
Amazon | Yes | Yes | Yes | Yes | Yes
DigitalOcean | No | No | No | No | No
Google | Yes | No | Yes | Yes | Yes
Microsoft Azure | Yes | Yes | No | Yes | Yes
Rackspace | Yes | Yes (1) | Yes | Yes | No
SoftLayer | Yes | Yes (2) | Yes | No | No

1 Rackspace resells Akamai CDN using a subset of 219 Akamai POPs

2 SoftLayer resells Edgecast CDN

Conclusion

Comparing compute services is a challenging task. I've covered a lot of ground in this post, including how to properly choose instances from different services, picking relevant benchmarks, some actual comparisons of services, estimating value, and other considerations. The biggest takeaway I'd hope for is a better understanding of how to compare compute services accurately and how to identify comparisons of questionable quality.

It should also be noted that because compute services are frequently updated, the validity of the benchmark metrics in this post is time limited.

If you'd like to know more, the full report download contains 120 pages of graphs, tables and additional commentary.

Correction 7/16/2014: Post and report have been updated to reflect availability of local storage, reduced pricing, and a compute region in South America for Microsoft Azure.

TAGS:
Cloud Compute;
Performance Comparisons;
CPU Performance;
Disk Performance;
Memory Performance;
Network Performance;
Compute Service Considerations;
Service Availability

Alexa and Fortune 500 DNS Marketshare - 4 Jul 2014

This post provides current marketshare statistics for managed DNS services amongst top Alexa websites and US Fortune 500 companies. The methodology used to collect this data is explained in a prior post.

Alexa Top 1,000 DNS Marketshare - 4 Jul 2014

Historical Alexa 1,000 DNS Marketshare - 5 Jun 14 to 4 Jul 2014

Alexa Top 1,000 DNS Marketshare - 4 Jul 2014

This table provides marketshare for managed DNS services amongst top 1,000 Alexa websites. Change metrics are derived from the difference in marketshare from Jun 2014 to Jul 2014.
Provider | Rank | Websites (out of 1000) | Marketshare | Marketshare Change
Dyn DNS | 1 | 105 | 10.5% | +4
Amazon Route 53 | 2 | 78 | 7.8% | +3
UltraDNS | 3 | 68 | 6.8% | 0
Akamai DNS | 4 | 64 | 6.4% | -1
CloudFlare DNS | 5 | 51 | 5.1% | +3
DNSPod | 6 | 29 | 2.9% | 0
DNS Made Easy | 7 | 28 | 2.8% | -1
Verisign DNS | 8 | 15 | 1.5% | 0
GoDaddy DNS | 9 | 15 | 1.5% | -1
Enom DNS | 10 | 6 | 0.6% | 0
Rackspace Cloud DNS | 11 | 5 | 0.5% | -2
Internap | 12 | 4 | 0.4% | 0
ClouDNS | 13 | 4 | 0.4% | +1
SoftLayer DNS | 14 | 3 | 0.3% | 0
Nettica | 15 | 3 | 0.3% | 0
Easy DNS | 16 | 3 | 0.3% | 0
Namecheap | 17 | 2 | 0.2% | 0
Savvis | 18 | 2 | 0.2% | 0
NSONE DNS | 19 | 1 | 0.1% | 0
CDNetworks DNS | 21 | 1 | 0.1% | 0

Alexa Top 1,000 DNS Switchers - 4 Jul 2014

This table lists Alexa 1,000 websites that have switched DNS services since the prior marketshare analysis in Jun 2014.
Hostname | Rank | New DNS Provider | Previous DNS Provider
soundcloud.com | 209 | Dyn DNS | Amazon Route 53
media-fire.org | 639 | CloudFlare DNS | GoDaddy DNS
glassdoor.com | 655 | CloudFlare DNS | Rackspace Cloud DNS
behance.net | 910 | Amazon Route 53 | Dyn DNS
souq.com | 946 | Dyn DNS | DNS Park

Alexa Top 10,000 DNS Marketshare - 4 Jul 2014

Historical Alexa 10,000 DNS Marketshare - 5 Jun 14 to 4 Jul 2014

Alexa Top 10,000 DNS Marketshare - 4 Jul 2014

This table provides marketshare for managed DNS services amongst top 10,000 Alexa websites. Change metrics are derived from the difference in marketshare from 5 Jun 2014 to 4 Jul 2014.
Provider | Rank | Websites (out of 10000) | Marketshare | Marketshare Change
CloudFlare DNS | 1 | 783 | 7.83% | +17
Amazon Route 53 | 2 | 754 | 7.54% | +40
Dyn DNS | 3 | 505 | 5.05% | +14
UltraDNS | 4 | 359 | 3.59% | -7
Akamai DNS | 5 | 339 | 3.39% | +3
GoDaddy DNS | 6 | 321 | 3.21% | -6
DNSPod | 7 | 292 | 2.92% | -1
DNS Made Easy | 8 | 285 | 2.85% | +3
Rackspace Cloud DNS | 9 | 146 | 1.46% | -2
Verisign DNS | 10 | 128 | 1.28% | +2
SoftLayer DNS | 11 | 67 | 0.67% | +1
Enom DNS | 12 | 62 | 0.62% | -1
Easy DNS | 13 | 59 | 0.59% | +2
Namecheap | 14 | 49 | 0.49% | -2
ClouDNS | 15 | 30 | 0.3% | +2
Savvis | 16 | 28 | 0.28% | -1
Internap | 17 | 22 | 0.22% | 0
DNS Park | 18 | 17 | 0.17% | -1
No-IP | 21 | 14 | 0.14% | 0
CDNetworks DNS | 24 | 9 | 0.09% | 0
NSONE DNS | 26 | 3 | 0.03% |

Alexa Top 10,000 DNS Switchers - 4 Jul 2014

This table lists Alexa 10,000 websites that have switched DNS services since prior marketshare analysis on 5 Jun 2014.
Hostname | Rank | New DNS Provider | Previous DNS Provider
soundcloud.com | 209 | Dyn DNS | Amazon Route 53
media-fire.org | 639 | CloudFlare DNS | GoDaddy DNS
glassdoor.com | 655 | CloudFlare DNS | Rackspace Cloud DNS
behance.net | 910 | Amazon Route 53 | Dyn DNS
souq.com | 946 | Dyn DNS | DNS Park
freelotto.com | 1054 | Dyn DNS | UltraDNS
t411.me | 1206 | Amazon Route 53 | Nettica
digitalpoint.com | 1444 | CloudFlare DNS | Namecheap
searchenginejournal.com | 1545 | Amazon Route 53 | DNS Made Easy
tripleclicks.com | 1807 | Zerigo DNS | CloudFlare DNS
juicyads.com | 1926 | Dyn DNS | UltraDNS
cosmopolitan.com | 2048 | Amazon Route 53 | Akamai DNS
themindunleashed.org | 2126 | CloudFlare DNS | GoDaddy DNS
cleartrip.com | 2159 | Dyn DNS | UltraDNS
statscrop.com | 2263 | Easy DNS | CloudFlare DNS
petardas.com | 2937 | Amazon Route 53 | Nettica
vgsgaming-ads.com | 3115 | DNS Made Easy | UltraDNS
motherjones.com | 3557 | Dyn DNS | Nettica
ovguide.com | 3567 | Amazon Route 53 | GoDaddy DNS
clicksia.com | 3836 | GoDaddy DNS | Namecheap
ricardoeletro.com.br | 4503 | DNS Made Easy | Nettica
eporner.com | 4622 | DNS Made Easy | Easy DNS
instructure.com | 4662 | Amazon Route 53 | Dyn DNS
adorama.com | 4742 | DNS Made Easy | UltraDNS
bakusai.com | 4796 | Worldwide DNS | GoDaddy DNS
ifilez.org | 4852 | CloudFlare DNS | GoDaddy DNS
songkick.com | 5306 | Easy DNS | Amazon Route 53
dicio.com.br | 5454 | Amazon Route 53 | Nettica
shopzilla.com | 6132 | Amazon Route 53 | Akamai DNS
avsforum.com | 6226 | Amazon Route 53 | DNS Made Easy
fstoppers.com | 6945 | Amazon Route 53 | GoDaddy DNS
housing.com | 7032 | Amazon Route 53 | GoDaddy DNS
videostripe.com | 7440 | CloudFlare DNS | GoDaddy DNS
wondershare.com | 7741 | GoDaddy DNS | DNSPod
spin.com | 7937 | Akamai DNS | GoDaddy DNS
shoghlanty.com | 8703 | CloudFlare DNS | GoDaddy DNS
couchtuner.eu | 9293 | CloudFlare DNS | GoDaddy DNS
pbh2.com | 9318 | Amazon Route 53 | CloudFlare DNS
el7l.co | 9348 | SoftLayer DNS | CloudFlare DNS

Fortune 500 DNS Marketshare - 4 Jul 2014

Historical Fortune 500 DNS Marketshare - 5 Jun 14 to 4 Jul 2014

Fortune 500 2013 DNS Marketshare - 4 Jul 2014

This table provides marketshare for managed DNS services amongst 2013 US Fortune 500 company websites.
Provider | Rank | Websites (out of 500) | Marketshare | Marketshare Change
UltraDNS | 1 | 38 | 7.6% | +1
Verisign DNS | 2 | 35 | 7.0% | +1
Akamai DNS | 3 | 23 | 4.6% | 0
Dyn DNS | 4 | 15 | 3% | 0
DNS Made Easy | 5 | 8 | 1.6% | 0
GoDaddy DNS | 6 | 7 | 1.4% | 0
Amazon Route 53 | 7 | 4 | 0.8% | 0
Internap | 8 | 4 | 0.8% | 0
Savvis | 9 | 4 | 0.8% | 0
Enom DNS | 10 | 2 | 0.4% | 0
Rackspace Cloud DNS | 11 | 2 | 0.4% | 0
Easy DNS | 12 | 1 | 0.2% | 0
SoftLayer DNS | 13 | 1 | 0.2% | 0
No-IP | 14 | 1 | 0.2% | 0

Fortune 500 DNS Switchers - 4 Jul 2014

There were no changes in managed DNS providers amongst Fortune 500 websites within this time period.

TAGS:
DNS Marketshare;
Alexa 10,000;
Alexa 1,000;
Fortune 500;
Dyn;
AWS Route 53;
UltraDNS;
DNSPod

CDN Performance Summary 2011-2014

This is the first in what I hope will be a monthly series summarizing CDN performance using data from our cloud speedtest. Because it's the first post, I've also summarized data from all tests run to date - 2011 to present.

The cloud speedtest is a tool we created to measure network latency and throughput between real users and cloud services. When a user starts the test, we use Maxmind's GeoIP database to determine their location (city, state, country) and ISP. We then store the results of every test in a database. Since 2011, we've captured results from over 20 million CDN tests from 5.1 million unique users on 27,000 different ISPs.

The Tests

The speedtest lets users run 3 types of CDN performance tests. Each test includes a brief warmup followed by multiple test iterations. We capture multiple data points from every test, including mean, median and standard deviation. The analysis in this post is based on the median.
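Using the median rather than the mean keeps the metric from being skewed by outlier iterations. A sketch of the aggregation, assuming per-iteration results with the warmup discarded:

    # Aggregate test iterations: drop the warmup, report the median
    import statistics

    def summarize(iterations, warmup=1):
        results = iterations[warmup:]   # discard warmup iteration(s)
        return {'mean': statistics.mean(results),
                'median': statistics.median(results),
                'stdev': statistics.stdev(results)}

    # Hypothetical throughput samples (Mb/s); first value is the warmup
    print(summarize([1.1, 2.3, 2.5, 2.4, 9.8, 2.2]))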

CDN 101

CDN is an abbreviation for Content Delivery Network. A CDN is essentially a collection of web servers you can host web content on. These servers are deployed at locations called points of presence (POPs), usually distributed across multiple data centers globally. A CDN speeds up web requests by responding to them from servers that are closer to the user.

For example, consider a situation where you have users in US, Europe and Australia, and a web server in the US. Your website responsiveness would likely be acceptable in the US, slower in Europe, and nearly unresponsive in Australia (where bits are traveling ~8,500 miles each way). Using a CDN in this scenario could dramatically improve performance if the CDN uses POPs on the same continent. Other CDN benefits are fault tolerance and offloading of traffic from your web servers.
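To put the Australia example in numbers: light in fiber travels at roughly two-thirds the speed of light in a vacuum, so distance alone puts a floor on round-trip time before any server processing. A quick estimate:

    # Lower bound on round-trip time from distance alone
    def min_rtt_ms(miles, fiber_km_per_s=200_000):  # ~2/3 of c in glass
        km = miles * 1.609
        return 2 * km / fiber_km_per_s * 1000

    print('%.0f ms' % min_rtt_ms(8500))             # ~137 ms US <-> Australia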

Picking the Right POP

The most important task for a CDN is to speed up user requests by responding to them using the fastest POP in its network for that user. Often this POP is the one that is physically nearest the user - but this is not always true. CDNs use different methods to choose a POP when a request is received. The most common methods - reflected in the Routing Methods column of the table below - are DNS based routing, EDNS (client subnet) routing, IP Anycast (1) and proprietary mechanisms.

1 IP Anycast for CDN POP routing should not be confused with IP Anycast for DNS. Most, if not all, CDNs use IP Anycast for DNS.

Geographical Presence

The performance improvement a CDN can provide is constrained by the number and geographical distribution of its POPs. In North America and Europe it is relatively inexpensive and easy for a CDN to provision new POPs and bandwidth. Where this becomes more challenging and costly is in less connected regions like Asia, Oceania and South America.

CDNs Covered

This table lists each of the CDNs included in this post, along with the POP selection methods used and continent based POP counts. The CDN name links to a more detailed profile we've created for each service, including pricing.

CDN | Routing Methods | North America | South America | Europe | Asia | Oceania | Africa
Akamai | DNS/Proprietary | Akamai claims to have over a hundred thousand servers in 75 countries. Many of these servers are located within ISP facilities
Amazon CloudFront | DNS/EDNS/Proprietary | 14 | 2 | 10 | 9 | 1 | 0
Azure CDN | DNS | 8 | 1 | 8 | 7 | 1 | 0
CacheFly | IP Anycast/DNS/EDNS | 11 | 1 | 10 | 9 | 2 | 2
CDNetworks | DNS/Proprietary | 12 | 7 | 21 | 42 (1) | 1 | 4
EdgeCast | DNS/EDNS/Proprietary | 9 | 1 | 11 | 11 (2) | 2 | 0
Internap CDN | DNS/Proprietary | 7 | 0 | 2 | 2 | 0 | 0
Level 3 CDN | DNS/Proprietary | 19 | 3 | 13 | 8 | 1 | 0
Limelight | DNS/Proprietary | 11 (3) | 0 | 4 | 3 | 1 | 0
MaxCDN | IP Anycast/DNS | 9 | 0 | 3 | 1 (4) | 1 (4) | 0

1 Some CDNetworks Asia POPs are located in China and Russia and require special permission to use

2 Some EdgeCast Asia POPs are located in China and require special permission to use

3 Limelight POP numbers and locations are estimated because specifics are not disclosed

4 MaxCDN POPs outside of North America and Europe require special provisioning and an additional monthly fee

CDN Performance - 2014 Summary

Latency

In this chart, each continent is represented by a different segment, the length of which represents median latency for that region. A shorter segment represents lower (better) latency. The segments for North America and Europe are relatively similar for each CDN. Where we see larger differentiation is in the segments representing Asia, Oceania, South America and Africa, with some services having much longer segments (higher latency) than others.

CDN Latency - 2014 (milliseconds)
Service | North America | Europe | South America | Africa | Asia | Oceania
Akamai | 56.88 | 48.15 | 94.84 | 84.06 | 114.24 | 103.53
Amazon CloudFront | 61.07 | 58.78 | 119.09 | 198.45 | 105.75 | 92.31
Azure CDN | 53.63 | 56.71 | 160.10 | 160.83 | 118.95 | 94.15
CacheFly | 54.71 | 62.63 | 105.07 | 177.78 | 138.21 | 76.11
CDNetworks | 58.91 | 64.57 | 169.37 | 115.82 | 107.93 | 98.10
EdgeCast | 52.49 | 52.58 | 170.06 | 184.82 | 126.22 | 79.06
Internap CDN | 66.67 | 82.86 | 192.73 | 203.58 | 179.72 | 229.42
Level 3 CDN | 67.30 | 69.31 | 149.99 | 100.79 | 179.24 | 215.00
LimeLight | 53.74 | 59.97 | 155.32 | 173.33 | 104.95 | 121.35
MaxCDN | 56.09 | 65.86 | 184.99 | 183.56 | 255.82 | 240.92

Small File Throughput

In this chart, each continent is represented by a different segment, the length of which represents median small file throughput (1-80KB files) in megabits per second for that region. A longer segment represents higher (better) throughput. Small file testing is more sensitive to latency because the requests are brief.

CDN Small File Throughput - 2014 (Mb/s)
Service | North America | Europe | South America | Africa | Asia | Oceania
Akamai | 2.07 | 2.01 | 0.77 | 1.60 | 1.07 | 0.65
Amazon CloudFront | 2.20 | 1.56 | 0.85 | 0.53 | 1.19 | 1.04
Azure CDN (Microsoft) | 2.82 | 2.00 | 0.67 | 0.99 | 1.37 | 1.03
CacheFly | 3.88 | 3.21 | 2.11 | 0.73 | 1.97 | 2.13
CDNetworks | 2.38 | 2.36 | 0.53 | 0.90 | 1.25 | 1.32
EdgeCast | 3.37 | 2.88 | 0.91 | 0.48 | 1.63 | 1.93
Internap CDN | 1.57 | 1.34 | 0.74 | 0.44 | 0.68 | 0.39
Level 3 CDN | 2.05 | 1.91 | 0.98 | 0.83 | 1.46 | 0.47
LimeLight | 2.50 | 1.60 | 0.61 | 0.55 | 1.02 | 0.97
MaxCDN | 2.78 | 2.44 | 0.40 | 0.62 | 0.53 | 0.55

Large File Throughput

In this chart, each continent is represented by a different segment, the length of which represents median large file throughput (198KB-5.1MB files) in megabits per second for that region. A longer segment represents higher (better) throughput.

CDN Large File Throughput - 2014 (Mb/s)
Service | North America | Europe | South America | Africa | Asia | Oceania
Akamai | 15.11 | 13.71 | 4.41 | 1.83 | 7.39 | 5.73
Amazon CloudFront | 10.11 | 8.17 | 4.49 | 1.13 | 4.79 | 2.47
Azure CDN (Microsoft) | 15.54 | 13.33 | 3.40 | 2.35 | 5.62 | 4.88
CacheFly | 18.62 | 18.51 | 3.82 | 2.57 | 8.26 | 9.38
CDNetworks | 15.97 | 17.52 | 2.61 | 1.76 | 6.19 | 6.21
EdgeCast | 17.45 | 15.17 | 2.79 | 1.18 | 7.39 | 6.98
Internap CDN | 13.18 | 10.65 | 2.46 | 1.51 | 4.45 | 2.28
Level 3 CDN | 12.32 | 9.86 | 3.46 | 1.45 | 4.22 | 1.95
LimeLight | 14.40 | 9.48 | 3.14 | 0.92 | 6.33 | 4.53
MaxCDN | 16.22 | 16.89 | 1.89 | 1.59 | 2.93 | 2.52

CDN Performance: North America - 2011 to 2014

The remaining sections provide historical performance data based on test results collected between 2011 (when we first released the speedtest) and present.

North America - Latency

CDN network latency has consistently improved since 2011 based on our test results. Today North America latency test results for all CDNs in this post are closely grouped - around 40-70 milliseconds.

North America - Small File Throughput

Since 2012, CacheFly, EdgeCast and MaxCDN have been top small file performers in North America. Both CacheFly and MaxCDN use Anycast CDN networks in this region, which likely helps them achieve better performance.

North America - Large File Throughput

Since 2013, CacheFly, EdgeCast and MaxCDN have also been top large file performers in North America.

CDN Performance: Europe - 2011 to 2014

Europe - Latency

CDN network latency has consistently improved in Europe based on our test results. Today Europe latency is tightly grouped at around 40-80 milliseconds for all CDNs in this post.

Europe - Small File Throughput

Since 2013, CacheFly, CDNetworks, EdgeCast, and MaxCDN have been top small file performers in Europe. Both CacheFly and MaxCDN use Anycast CDN networks in this region, which likely helps them achieve better performance.

Europe - Large File Throughput

Since 2013, CacheFly, CDNetworks, EdgeCast, and MaxCDN have also been top large file performers in Europe.

CDN Performance: South America - 2011 to 2014

Testing activity in South America is much lower than in North America and Europe. Because of this, the data provided here is based on smaller test populations and is thus less robust and reliable compared to the analysis for other regions.

South America - Latency

Latency improvements in South America have been less pronounced since 2011. Only two-thirds of the CDNs in this post currently have POPs in South America. Akamai, Amazon CloudFront and CacheFly have all shown consistent latency improvement in this region, and all 3 have 1 or more POPs there. Lack of a POP in South America is less detrimental than in Asia or Oceania because distances to US based POPs are shorter.

South America - Small File Throughput

As the only CDN with both a regional POP and IP Anycast in South America, CacheFly has been the top performer for small file throughput in this region since 2013.

South America - Large File Throughput

Since 2011, Akamai has consistently been a top large file performer in South America. During this time, Amazon CloudFront and CacheFly have expanded their networks to include POPs in this region and now join Akamai in the top 3 positions.

CDN Performance: Africa - 2011 to 2014

Testing activity in Africa is the lowest of any continent. Because of this, the analysis provided here is based on a smaller test population and is thus less robust than for other regions.

Africa - Latency

Since 2011 Akamai and CDNetworks have consistently provided the lowest latency in Africa.

Africa - Small File Throughput

Akamai and CDNetworks have also been consistent top performers for small file throughput in Africa.

Africa - Large File Throughput

For large file throughput, Akamai and CDNetworks have been top performers since 2011. However, we have also observed good performance from CacheFly and Azure CDN.

CDN Performance: Asia - 2011 to 2014

Asia - Latency

There is a distinct separation between CDNs with Asia POPs and those without. For services with regional POPs, median latency results generally fall around 100-140 milliseconds. Akamai has long been a top performer in this region, but the region has become more competitive in the past few years. CacheFly appears to use mixed DNS and Anycast POP selection in this region (as opposed to straight Anycast in North America, Europe and South America). Our MaxCDN account did not have Asia POPs enabled, and this is likely the cause of its higher latency in this region.

Asia - Small File Throughput

Since 2013, CacheFly, CDNetworks and EdgeCast have been top performers for small file throughput in Asia.

Asia - Large File Throughput

Since 2013, Akamai, CacheFly, CDNetworks and EdgeCast have been top performers for large file throughput in Asia.

CDN Performance: Oceania - 2011 to 2014

Oceania is the continent where lack of a CDN POP presence is most pronounced in our test data. Due to its distance from other major Internet regions (~8000 miles from the US), a regional POP is necessary for a CDN to provide acceptable performance.

Oceania - Latency

There is a notable separation between services with Oceania POPs and those without. For services with regional POPs, median latency results generally fall around 80-120 milliseconds. Our MaxCDN account did not have Asia or Oceania POPs enabled, and this is likely the cause of its higher latency in this region.

Oceania - Small File Throughput

Since 2013, CacheFly and EdgeCast have been top performers for small file throughput in Oceania.

Oceania - Large File Throughput

Since 2013, CacheFly and EdgeCast have also been top performers for large file throughput in Oceania.

Alexa and Fortune 500 DNS Marketshare - 5 Jun 2014

This post provides current marketshare statistics for managed DNS services amongst top Alexa websites and US Fortune 500 companies. The methodology used to collect this data is explained in a prior post.

Alexa Top 1,000 DNS Marketshare - 5 Jun 2014

Historical Alexa 1,000 DNS Marketshare - 8 Apr 14 to 5 Jun 2014

Alexa Top 1,000 DNS Marketshare - 5 Jun 2014

This table provides marketshare for managed DNS services amongst top 1,000 Alexa websites. Change metrics are derived from the difference in marketshare from Apr 2014 to Jun 2014.
Provider | Rank | Websites (out of 1000) | Marketshare | Marketshare Change
Dyn | 1 | 106 | 10.6% | +3
AWS Route 53 | 2 | 68 | 6.8% | +5
UltraDNS | 3 | 66 | 6.6% | +1
Akamai | 4 | 66 | 6.6% | +6
CloudFlare | 5 | 52 | 5.2% | +5
DNS Made Easy | 6 | 28 | 2.8% | 0
DNSPod | 7 | 24 | 2.4% | +3
Verisign DNS | 8 | 18 | 1.8% | 0
GoDaddy DNS | 9 | 12 | 1.2% | -1
Rackspace Cloud DNS | 10 | 8 | 0.8% | 0
Enom DNS | 11 | 5 | 0.5% | 0
Internap | 12 | 4 | 0.4% | 0
Nettica | 13 | 3 | 0.3% | 0
Namecheap | 14 | 3 | 0.3% | 0
ClouDNS | 15 | 3 | 0.3% | 0
Softlayer DNS | 16 | 3 | 0.3% | 0
easyDNS | 17 | 3 | 0.3% | 0
Savvis | 18 | 2 | 0.2% | 0
DNS Park | 19 | 2 | 0.2% | +1
EuroDNS | 20 | 1 | 0.1% | 0
CDNetworks DNS | 21 | 1 | 0.1% | 0
DTDNS | 22 | 1 | 0.1% | 0

Alexa Top 1,000 DNS Switchers - 5 Jun 2014

This table lists Alexa 1,000 websites that have switched DNS services since the prior marketshare analysis in Apr 2014.
Hostname | Rank | New DNS Provider | Previous DNS Provider
soundcloud.com | 166 | AWS Route 53 | Dyn
shutterstock.com | 193 | Dyn | UltraDNS
warriorforum.com | 240 | AWS Route 53 | Softlayer DNS
wetransfer.com | 703 | AWS Route 53 | Dyn
nfl.com | 800 | Akamai | UltraDNS
linkwithin.com | 999 | Softlayer DNS | GoDaddy DNS

Alexa Top 10,000 DNS Marketshare - 5 Jun 2014

Historical Alexa 10,000 DNS Marketshare - 8 Apr 14 to 5 Jun 2014

Alexa Top 10,000 DNS Marketshare - 5 Jun 2014

This table provides marketshare for managed DNS services amongst top 10,000 Alexa websites. Change metrics are derived from the difference in marketshare from 8 Apr 2014 to 5 Jun 2014.
Provider | Rank | Websites (out of 10000) | Marketshare | Marketshare Change
CloudFlare | 1 | 748 | 7.48% | +40
AWS Route 53 | 2 | 701 | 7.01% | +30
Dyn | 3 | 496 | 4.96% | +12
UltraDNS | 4 | 369 | 3.69% | +4
GoDaddy DNS | 5 | 335 | 3.35% | +9
Akamai | 6 | 331 | 3.31% | +25
DNS Made Easy | 7 | 281 | 2.81% | +9
DNSPod | 8 | 276 | 2.76% | -1
Rackspace Cloud DNS | 9 | 157 | 1.57% | +1
Verisign DNS | 10 | 125 | 1.25% | -1
Enom DNS | 11 | 69 | 0.69% | -3
Softlayer DNS | 12 | 68 | 0.68% | 0
easyDNS | 13 | 59 | 0.59% | -3
Namecheap | 14 | 50 | 0.5% | +1
ClouDNS | 15 | 32 | 0.32% | -2
Savvis | 16 | 29 | 0.29% | -1
Nettica | 17 | 26 | 0.26% | -1
Internap | 18 | 23 | 0.23% | 0
DNS Park | 19 | 19 | 0.19% | +5
ZoneEdit | 20 | 15 | 0.15% | -2
No-IP | 21 | 11 | 0.11% | 0
CDNetworks DNS | 22 | 10 | 0.1% | 0
EuroDNS | 23 | 10 | 0.1% | +2
Zerigo DNS | 24 | 8 | 0.08% | -1
Worldwide DNS | 25 | 3 | 0.03% | 0
DTDNS | 26 | 1 | 0.01% | 0

Alexa Top 10,000 DNS Switchers - 5 Jun 2014

This table lists Alexa 10,000 websites that have switched DNS services since prior marketshare analysis on 8 Apr 2014.
Hostname | Rank | New DNS Provider | Previous DNS Provider
soundcloud.com | 166 | AWS Route 53 | Dyn
shutterstock.com | 193 | Dyn | UltraDNS
warriorforum.com | 240 | AWS Route 53 | Softlayer DNS
wetransfer.com | 703 | AWS Route 53 | Dyn
nfl.com | 800 | Akamai | UltraDNS
linkwithin.com | 999 | Softlayer DNS | GoDaddy DNS
statscrop.com | 1195 | CloudFlare | GoDaddy DNS
name.com | 1323 | Akamai | Verisign DNS
searchenginejournal.com | 1618 | DNS Made Easy | Dyn
authorize.net | 1684 | Verisign DNS | UltraDNS
flirt4free.com | 1764 | UltraDNS | Verisign DNS
naturalnews.com | 1945 | Softlayer DNS | GoDaddy DNS
speedbit.com | 2098 | AWS Route 53 | Akamai
onefloorserve.com | 2134 | AWS Route 53 | GoDaddy DNS
bigstockphoto.com | 2204 | Dyn | UltraDNS
ryushare.com | 2262 | CloudFlare | easyDNS
whatsapp.com | 2316 | Dyn | Softlayer DNS
codecademy.com | 2559 | AWS Route 53 | DNS Made Easy
couchtuner.eu | 2561 | GoDaddy DNS | CloudFlare
tasnimnews.com | 2739 | ClouDNS | DNS Made Easy
gilt.com | 2987 | CloudFlare | UltraDNS
zurb.com | 3519 | CloudFlare | Rackspace Cloud DNS
wanelo.com | 3749 | DNS Made Easy | Dyn
filefactory.com | 3838 | Dyn | DNS Made Easy
iflscience.com | 3880 | DNS Made Easy | GoDaddy DNS
sammydress.com | 4799 | Akamai | DNSPod
watchfreemovies.ch | 5163 | CloudFlare | ClouDNS
minnano-av.com | 5316 | GoDaddy DNS | AWS Route 53
eweb4.com | 5462 | GoDaddy DNS | ZoneEdit
optimizepress.com | 6047 | CloudFlare | Namecheap
lifescript.com | 6110 | Akamai | Savvis
pornport.xxx | 6334 | CloudFlare | Enom DNS
express.com | 6344 | Akamai | UltraDNS
jquery4u.com | 6346 | DNS Made Easy | GoDaddy DNS
informe21.com | 6729 | CloudFlare | AWS Route 53
findthecompany.com | 6938 | AWS Route 53 | CloudFlare
tvrage.com | 7105 | CloudFlare | ZoneEdit
iphonehacks.com | 7636 | AWS Route 53 | GoDaddy DNS
fashionista.com | 7656 | Dyn | CloudFlare
ssense.com | 8280 | CloudFlare | Rackspace Cloud DNS
food52.com | 8758 | CloudFlare | GoDaddy DNS
hubspot.net | 8956 | Akamai | DNS Made Easy
filfan.com | 9026 | CloudFlare | easyDNS
imgrind.com | 9109 | DNS Made Easy | CloudFlare
autotrader.ca | 9277 | AWS Route 53 | easyDNS
pijamasurf.com | 9350 | CloudFlare | Rackspace Cloud DNS
internetbs.net | 9411 | AWS Route 53 | CloudFlare
mrmoneymustache.com | 9488 | DNS Made Easy | ClouDNS
downloadming.me | 9952 | GoDaddy DNS | UltraDNS
qianzhan.com | 9986 | ClouDNS | DNSPod

Fortune 500 DNS Marketshare - 5 Jun 2014

Historical Fortune 500 DNS Marketshare - 8 Apr 14 to 5 Jun 2014

Fortune 500 2013 DNS Marketshare - 5 Jun 2014

This table provides marketshare for managed DNS services amongst 2013 US Fortune 500 company websites.
Provider | Rank | Websites (out of 500) | Marketshare | Marketshare Change
UltraDNS | 1 | 37 | 7.4% | -1
Verisign DNS | 2 | 34 | 6.8% | 0
Akamai | 3 | 23 | 4.6% | +1
Dyn | 4 | 15 | 3% | +2
DNS Made Easy | 5 | 8 | 1.6% | +1
GoDaddy DNS | 6 | 7 | 1.4% | +1
AWS Route 53 | 7 | 4 | 0.8% | 0
Internap | 8 | 4 | 0.8% | 0
Savvis | 9 | 4 | 0.8% | 0
Enom DNS | 10 | 2 | 0.4% | 0
Rackspace Cloud DNS | 11 | 2 | 0.4% | 0
easyDNS | 12 | 1 | 0.2% | 0
Softlayer DNS | 13 | 1 | 0.2% | 0
No-IP | 14 | 1 | 0.2% | 0

Fortune 500 DNS Switchers - 5 Jun 2014

There were no changes in managed DNS providers amongst Fortune 500 websites within this time period.