Alexa 10,000 and Fortune 500 DNS Marketshare - 30 Jun 2013

This post provides current marketshare statistics for managed DNS services amongst top Alexa websites and US Fortune 500 companies. The methodology used to collect this data is explained in a prior post.

Alexa Top 1,000 DNS Marketshare - 30 June 2013

Historical Alexa 1,000 DNS Marketshare - 6 March to 30 June 2013

Alexa Top 1,000 DNS Marketshare - 30 June 2013

This table provides marketshare for managed DNS services amongst the top 1,000 Alexa websites. Change metrics are derived from the difference in marketshare from 5 May 2013 to 30 Jun 2013. This list is generally more consistent than the Alexa 10,000. In this segment only AWS Route 53 showed a notable marketshare increase. Akamai's increase is the result of migrations off the Cotendo platform, which it acquired and intends to shut down later this year.
Provider | Rank | Websites (out of 1,000) | Marketshare | Marketshare Change
DynECT | 1 | 82 | 8.2% | 0
UltraDNS | 2 | 62 | 6.2% | 0
Akamai | 3 | 48 | 4.8% | +3 / +6.667%
AWS Route 53 | 4 | 43 | 4.3% | +7 / +19.444%
DNSPod | 5 | 30 | 3% | -1 / -3.226%
DNS Made Easy | 6 | 21 | 2.1% | +1 / +5%
GoDaddy DNS | 7 | 13 | 1.3% | -1 / -7.143%
Verisign DNS | 8 | 12 | 1.2% | 0
CloudFlare | 9 | 9 | 0.9% | +1 / +12.5%
Cotendo Advanced DNS | 10 | 9 | 0.9% | -3 / -25%
easyDNS | 11 | 7 | 0.7% | -1 / -12.5%
Rackspace Cloud DNS | 12 | 6 | 0.6% | -2 / -25%
Softlayer DNS | 13 | 5 | 0.5% | 0
Namecheap | 14 | 4 | 0.4% | -1 / -20%
Enom DNS | 15 | 4 | 0.4% | 0
Internap | 16 | 3 | 0.3% | 0
Savvis | 17 | 3 | 0.3% | 0
ZoneEdit | 18 | 2 | 0.2% | -1 / -33.333%
Nettica | 19 | 2 | 0.2% | -1 / -33.333%
DNS Park | 20 | 1 | 0.1% | 0
DTDNS | 21 | 1 | 0.1% | 0
Worldwide DNS | 22 | 1 | 0.1% | 0
ClouDNS | 23 | 1 | 0.1% | -1 / -50%
No-IP | 24 | 1 | 0.1% | 0
EuroDNS | 25 | 1 | 0.1% | 0
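The change column pairs an absolute website-count delta with a relative percent change. A minimal sketch of how such a pair is derived (the function name is illustrative, and the prior counts in the examples are inferred from the published change figures):

```python
def change_metrics(previous, current):
    """Format the change column: absolute website delta / relative percent change."""
    delta = current - previous
    if delta == 0:
        return "0"
    # Percent change relative to the prior count, trailing zeros trimmed
    pct = f"{delta / previous * 100:+.3f}".rstrip("0").rstrip(".")
    return f"{delta:+d} / {pct}%"

# Akamai went from 45 to 48 of the Alexa 1,000 sites:
print(change_metrics(45, 48))  # +3 / +6.667%
print(change_metrics(36, 43))  # +7 / +19.444% (AWS Route 53)
```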

Alexa Top 1,000 DNS Switchers - 30 June 2013

This table lists Alexa 1,000 websites that have switched DNS services since the prior marketshare analysis on 5 May 2013.
Hostname | Rank | New DNS Provider | Previous DNS Provider
wigetmedia.com | 114 | ClouDNS | ZoneEdit
themeforest.net | 206 | AWS Route 53 | Rackspace Cloud DNS
myspace.com | 296 | Akamai | Cotendo Advanced DNS
1channel.ch | 316 | CloudFlare | ClouDNS
wix.com | 384 | Akamai | Cotendo Advanced DNS
mashable.com | 386 | Akamai | DynECT
codecanyon.net | 672 | AWS Route 53 | Rackspace Cloud DNS
softigloo.com | 757 | AWS Route 53 | Nettica
xtube.com | 760 | UltraDNS | easyDNS
myegy.com | 786 | CloudFlare | Namecheap
gizmodo.com | 990 | DynECT | Cotendo Advanced DNS

Alexa Top 10,000 DNS Marketshare - 30 June 2013

Historical Alexa 10,000 DNS Marketshare - 6 March to 30 June 2013

Alexa Top 10,000 DNS Marketshare - 30 June 2013

This table provides marketshare for managed DNS services amongst the top 10,000 Alexa websites. Change metrics are derived from the difference in marketshare from 5 May 2013 to 30 Jun 2013. AWS Route 53, China-based DNSPod, and CloudFlare have continued to capture significant marketshare in this list.
Provider | Rank | Websites (out of 10,000) | Marketshare | Marketshare Change
DynECT | 1 | 436 | 4.36% | +8 / +1.869%
AWS Route 53 | 2 | 426 | 4.26% | +43 / +11.227%
DNSPod | 3 | 420 | 4.2% | +53 / +14.441%
CloudFlare | 4 | 366 | 3.66% | +51 / +16.19%
UltraDNS | 5 | 363 | 3.63% | 0
GoDaddy DNS | 6 | 292 | 2.92% | -10 / -3.311%
DNS Made Easy | 7 | 241 | 2.41% | +6 / +2.553%
Akamai | 8 | 225 | 2.25% | +16 / +7.656%
Rackspace Cloud DNS | 9 | 128 | 1.28% | -12 / -8.571%
Verisign DNS | 10 | 106 | 1.06% | -3 / -2.752%
Softlayer DNS | 11 | 78 | 0.78% | +1 / +1.299%
easyDNS | 12 | 64 | 0.64% | -9 / -12.329%
Enom DNS | 13 | 60 | 0.6% | +1 / +1.695%
Namecheap | 14 | 58 | 0.58% | -8 / -12.121%
Savvis | 15 | 41 | 0.41% | 0
Cotendo Advanced DNS | 16 | 34 | 0.34% | -11 / -24.444%
Internap | 17 | 26 | 0.26% | -2 / -7.143%
Nettica | 18 | 25 | 0.25% | -5 / -16.667%
ZoneEdit | 19 | 25 | 0.25% | -5 / -16.667%
ClouDNS | 20 | 23 | 0.23% | +5 / +27.778%
DNS Park | 21 | 14 | 0.14% | -2 / -12.5%
No-IP | 22 | 10 | 0.1% | -2 / -16.667%
Zerigo DNS | 23 | 7 | 0.07% | -2 / -22.222%
EuroDNS | 24 | 7 | 0.07% | -1 / -12.5%
Worldwide DNS | 25 | 6 | 0.06% | +1 / +20%

Alexa Top 10,000 DNS Switchers - 30 June 2013

This table lists the top 50 Alexa 10,000 websites that have switched DNS services since the prior marketshare analysis on 5 May 2013.
Hostname | Rank | New DNS Provider | Previous DNS Provider
wigetmedia.com | 114 | ClouDNS | ZoneEdit
themeforest.net | 206 | AWS Route 53 | Rackspace Cloud DNS
myspace.com | 296 | Akamai | Cotendo Advanced DNS
1channel.ch | 316 | CloudFlare | ClouDNS
wix.com | 384 | Akamai | Cotendo Advanced DNS
mashable.com | 386 | Akamai | DynECT
codecanyon.net | 672 | AWS Route 53 | Rackspace Cloud DNS
softigloo.com | 757 | AWS Route 53 | Nettica
xtube.com | 760 | UltraDNS | easyDNS
myegy.com | 786 | CloudFlare | Namecheap
gizmodo.com | 990 | DynECT | Cotendo Advanced DNS
usmagazine.com | 1028 | AWS Route 53 | Verisign DNS
yepi.com | 1209 | DynECT | Akamai
seekingalpha.com | 1247 | Akamai | Cotendo Advanced DNS
aa.com | 1417 | Akamai | UltraDNS
lapatilla.com | 1502 | CloudFlare | easyDNS
etoro.com | 1513 | Akamai | Cotendo Advanced DNS
adxite.com | 1521 | ClouDNS | ZoneEdit
24trk.com | 1562 | ClouDNS | ZoneEdit
break.com | 1607 | DNS Made Easy | GoDaddy DNS
z5x.net | 1660 | DynECT | UltraDNS
graphicriver.net | 1817 | AWS Route 53 | Rackspace Cloud DNS
naughtyamerica.com | 2230 | DynECT | UltraDNS
rollingstone.com | 2256 | AWS Route 53 | Verisign DNS
solidtrustpay.com | 2257 | Softlayer DNS | DynECT
justjared.com | 2309 | Akamai | easyDNS
wikiwiki.jp | 2328 | CloudFlare | AWS Route 53
adultadmedia.com | 2370 | ClouDNS | ZoneEdit
filgoal.com | 2386 | easyDNS | DynECT
howtogeek.com | 2484 | DynECT | Softlayer DNS
multiply.com | 2567 | UltraDNS | AWS Route 53
etype.com | 2691 | DynECT | UltraDNS
yepme.com | 2895 | Akamai | easyDNS
serviporno.com | 2897 | CloudFlare | Nettica
forumspecialoffers.com | 2928 | Rackspace Cloud DNS | AWS Route 53
envato.com | 3136 | AWS Route 53 | Rackspace Cloud DNS
runetki.com | 3191 | Enom DNS | Namecheap
sweepstakes.com | 3269 | Akamai | Cotendo Advanced DNS
sevenforums.com | 3557 | DynECT | GoDaddy DNS
wpmu.org | 3697 | CloudFlare | Enom DNS
freeridegames.com | 3705 | Akamai | Cotendo Advanced DNS
woozworld.com | 3746 | AWS Route 53 | DynECT
ringtonematcher.com | 3773 | AWS Route 53 | GoDaddy DNS
bsaving.com | 3963 | Akamai | Cotendo Advanced DNS
dailykos.com | 4173 | DynECT | Zerigo DNS
mawaly.com | 4222 | CloudFlare | ZoneEdit
creditkarma.com | 4236 | DynECT | Rackspace Cloud DNS
chirpme.com | 4256 | DynECT | GoDaddy DNS
xat.com | 4399 | CloudFlare | AWS Route 53
8tracks.com | 4412 | AWS Route 53 | DynECT

Fortune 500 DNS Marketshare - 30 June 2013

Historical Fortune 500 DNS Marketshare - 6 March to 30 June 2013

Fortune 500 DNS Marketshare - 30 June 2013

Fortune 500 2012 DNS Marketshare - 30 June 2013

This table provides marketshare for managed DNS services amongst 2012 US Fortune 500 companies. Change metrics are derived from the difference in marketshare from 5 May 2013 to 30 Jun 2013.
Provider | Rank | Websites (out of 500) | Marketshare | Marketshare Change
UltraDNS | 1 | 34 | 6.8% | -2 / -5.556%
Verisign DNS | 2 | 25 | 5% | +1 / +4.167%
Akamai | 3 | 14 | 2.8% | +1 / +7.692%
DynECT | 4 | 8 | 1.6% | 0
DNS Made Easy | 5 | 6 | 1.2% | 0
Savvis | 6 | 4 | 0.8% | 0
GoDaddy DNS | 7 | 4 | 0.8% | 0
Internap | 8 | 3 | 0.6% | -1 / -25%
AWS Route 53 | 9 | 2 | 0.4% | 0
Rackspace Cloud DNS | 10 | 2 | 0.4% | 0
Enom DNS | 11 | 2 | 0.4% | +1 / +100%
easyDNS | 12 | 1 | 0.2% | 0
No-IP | 13 | 1 | 0.2% | 0
ZoneEdit | 14 | 1 | 0.2% | 0

Fortune 500 2013 DNS Marketshare - 30 June 2013

Fortune Magazine recently updated its list of Fortune 500 US companies for 2013. This table represents DNS marketshare for this new list. Future posts will be based on this list.
Provider | Rank | Websites (out of 500) | Marketshare
UltraDNS | 1 | 34 | 6.8%
Verisign DNS | 2 | 29 | 5.8%
Akamai | 3 | 15 | 3%
DynECT | 4 | 9 | 1.8%
DNS Made Easy | 5 | 7 | 1.4%
GoDaddy DNS | 6 | 5 | 1%
Savvis | 7 | 4 | 0.8%
Internap | 8 | 3 | 0.6%
Rackspace Cloud DNS | 9 | 2 | 0.4%
Enom DNS | 10 | 2 | 0.4%
easyDNS | 11 | 1 | 0.2%
ZoneEdit | 12 | 1 | 0.2%
AWS Route 53 | 13 | 1 | 0.2%
No-IP | 14 | 1 | 0.2%

Fortune 500 DNS Switchers - 30 June 2013

There were no changes in managed DNS providers amongst Fortune 500 websites within this time period.

TAGS:
DNS Marketshare;
Alexa 10,000;
Alexa 1,000;
Fortune 500;
Dyn;
AWS Route 53;
UltraDNS;
DNSPod

Value of the Cloud - CPU Performance

Abstract

This post compares CPU performance and value for 18 compute instance types from 5 cloud compute platforms - AWS EC2, Google Compute Engine, Windows Azure, HP Cloud and Rackspace Cloud. The most interesting content is the data and resulting analysis. If you're in a rush, scroll down or click below to go straight to it.

Go To Comparisons

Overview

In the escalating cloud arms race, performance is a frequent topic of conversation. Often, overly simplistic test models and fuzzy logic are used to substantiate sweeping claims. In a general sense, computing performance is relative to, and dependent on workload type. There is no single metric or measurement that encapsulates performance as a whole.

In the context of cloud, performance is also subject to variability due to nondeterministic factors such as multitenancy and hardware abstraction. These factors combined increase the complexity of cloud performance analysis because they reduce one's ability to dependably repeat and reproduce such analysis. This is not to say that cloud performance cannot be measured, rather that doing so is not a precise science, and differs somewhat from traditional hardware performance analysis where such factors are not present.

Performance is workload dependent. Cloud performance is hard to measure consistently because of variability from multitenancy and hardware abstraction.

Motivation

My goal in starting CloudHarmony in 2010 was to provide a credible source for objective and reliable performance analysis about cloud services. Since then, cloud has grown extensively and become an even more confusing place. The intent of this post is to present techniques and a visual tool we're using to help assess and compare performance and value of cloud services. The focus of this post is cloud compute CPU performance and value. In the coming weeks, follow up posts will be published covering other performance topics including block storage, network, and object storage. As is our general policy, we have not been paid or otherwise influenced in the testing or analysis presented in this post.

The focus of this post is compute CPU performance and value. Follow up posts will cover other performance topics. We were not paid to write this post.

Testing Methods

To test performance of compute services we run a suite of about 100 benchmarks on each type of compute instance offered. These benchmarks measure various performance properties including CPU, memory and disk IO. Each test iteration takes between 1 and 2 days to complete. When multiple configuration options are offered, we usually run additional test iterations for each such option (e.g. compute services often offer multiple block storage options). Linux CentOS 6.* is our operating system of choice because of its nearly ubiquitous availability across services.

CPU Performance

Although our test suite includes many CPU benchmarks, our preferred method for compute CPU performance analysis is based on metrics provided by the CPU2006 benchmark suites. CPU2006 is an industry standard benchmark created by the Open Systems Group of the non-profit Standard Performance Evaluation Corporation (SPEC). CPU2006 consists of 2 benchmark suites that measure Integer and Floating Point CPU performance. The Integer suite contains 12 benchmarks, and the Floating Point suite 17. According to the CPU2006 website, "SPEC designed CPU2006 to provide a comparative measure of compute-intensive performance across the widest practical range of hardware using workloads developed from real user applications." Thorough documentation about CPU2006, including descriptions of the benchmarks used, is available on the CPU2006 website. CloudHarmony is a SPEC CPU2006 licensee.

The results table below contains CPU2006 SPECint (Integer) and SPECfp (Floating Point) metrics for each compute instance type included in this post. Each score is linked to a PDF report generated by the CPU2006 runtime for that specific test run. CPU2006 run and reporting rules require disclosure of settings and parameters used when compiling and running the CPU2006 test suites and this data is included in the reports. To summarize, our runs are based on the following settings:

Compiler
Intel C++ and Fortran Compilers version 12.1.5
Compilation Guidelines
Base
Run Type
Rate
Rate Copies
1 copy per CPU core or per 1GB memory (lesser of the two)
SSE Compiler Option
SSE4.2 or SSE4.1 (if supported by the compute instance)
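The "lesser of the two" rule for rate copies can be sketched as a small helper (illustrative only, not part of the CPU2006 tooling):

```python
import math

def rate_copies(cpu_cores, memory_gb):
    """CPU2006 rate copies: 1 per CPU core or per 1GB of memory, lesser of the two."""
    return min(cpu_cores, math.floor(memory_gb))

# e.g. an 8-core instance with only 4GB of memory runs 4 copies
print(rate_copies(8, 4))   # 4
print(rate_copies(4, 15))  # 4
```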
Our preferred method for compute CPU performance analysis is based on metrics provided by the SPEC CPU2006 benchmark suites

CPU2006 Test Results

To be considered official, CPU2006 results must adhere to specific run and reporting guidelines. One such guideline states that results should be reproducible. While this is important in the context of hardware testing, it is impractical for cloud due to performance variability resulting from multitenancy and hardware abstraction. However, CPU2006 guidelines allow for reporting of estimated results in cases where not all guidelines can be adhered to. In such cases results must be clearly designated as estimates. It is for this reason that results in the table below are designated as such.

Compute Service | Instance Type | CPU Type | Cores | Price² | SPECint¹ | SPECfp¹
AWS EC2 | cc2.8xlarge | Intel E5-2670 2.60GHz | 32 | 2.40 | 441.511194 | 357.602046
HP Cloud | double-extra-large | Intel T7700 2.40GHz | 8 | 1.12 | 168.55417 | 132.3234
AWS EC2 | m3.2xlarge | Intel E5-2670 2.60GHz | 8 | 1.00 | 150.30509 | 128.159625
Google Compute | n1-standard-8 | Intel 2.60GHz | 8 | 1.06 | 149.354133 | 143.1015
HP Cloud | extra-large | Intel T7700 2.40GHz | 4 | 0.56 | 98.430955 | 85.24574
Rackspace Cloud | 30gb | AMD Opteron 4170 | 8 | 1.00 | 95.43979 | 83.89602
Windows Azure | A4 | AMD Opteron 4171 | 8 | 0.48 | 91.33294 | 77.93744
AWS EC2 | m3.xlarge | Intel E5-2670 2.60GHz | 4 | 0.50 | 80.180578 | 71.753345
Google Compute | n1-standard-4 | Intel 2.60GHz | 4 | 0.53 | 66.945866 | 66.84303
Rackspace Cloud | 8gb | AMD Opteron 4170 | 4 | 0.32 | 51.709779 | 47.562079
Windows Azure | A3 | AMD Opteron 4171 | 4 | 0.24 | 51.58953 | 46.9475
HP Cloud | medium | Intel T7700 2.40GHz | 2 | 0.14 | 48.825275 | 44.085027
Google Compute | n1-standard-2 | Intel 2.60GHz | 2 | 0.265 | 39.469478 | 39.094813
AWS EC2 | m1.large | Intel E5645 2.40GHz | 2 | 0.24 | 39.023586 | 34.7884
AWS EC2 | m1.large | Intel E5-2650 2.00GHz | 2 | 0.24 | 38.816635 | 37.10992
AWS EC2 | m1.large | Intel E5430 2.66GHz | 2 | 0.24 | 29.534628 | 23.805172
Windows Azure | A2 | AMD Opteron 4171 | 2 | 0.18 | 27.38071 | 25.92939
Rackspace Cloud | 4gb | AMD Opteron 4170 | 2 | 0.16 | 25.854861 | 24.25972

1: Base/Rate - Estimate
2: Hourly, USD - On Demand

Simplifying the Results

In order to provide simple and concise analysis derived from multiple relevant performance properties, it is helpful to reduce metrics from multiple related benchmarks to a single comparable value. The CPU2006 benchmark suites produce two metrics, SPECint for Integer, and SPECfp for Floating Point performance. A naive approach might be to combine them using a mean or sum of their values. However, doing so would be inaccurate because they are dissimilar values. Although they are calculated using the same algorithms, SPECint and SPECfp are produced from different benchmarks, and thus represent different meanings - as the idiom goes, this would be an apples to oranges comparison. An external analogy might be attempting to average 1 gallon of milk with 2 dozen eggs - in doing so, the resulting value: $$(1+2)/2=1.5$$ is meaningless because they are dissimilar values to begin with.

To merge dissimilar values like metrics from different benchmarks, the values must first be normalized to a common notional scale. One method for doing so is ratio conversion using factors from a common scale. The resulting ratios represent relationships between the original metrics and the common scale. Because the values share the same scale, they may then be operated on together using mathematical functions like mean and median. Using the same milk and eggs analogy, and assuming a common scale of groceries needed for the week, defined as 2 gallons of milk and 3 dozen eggs, grocery deficiency ratios may then be calculated as follows: \[\text"Milk deficiency" = \text"2 gallons needed" / \text"1 gallon on hand" = \text"Deficiency ratio 2"\] \[\text"Eggs deficiency" = \text"3 dozen needed" / \text"2 dozen on hand" = \text"Deficiency ratio 1.5"\] The resulting ratios, 2 and 1.5, may then be reduced to a single ratio representing the average grocery deficiency for both milk and eggs: \[\text"Average grocery deficiency" = (2+1.5)/2 = \text"1.75"\] In other words, in order to stock up on groceries for the week, we'll need to buy 1.75 times the milk and eggs currently on hand. Take note, however, that this ratio is only relevant in the context of milk and eggs as a whole, not separately, nor does it apply to other types of groceries.
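The grocery calculation maps directly to how dissimilar benchmark metrics are merged: divide each value by its common-scale counterpart, then average the resulting ratios. A minimal sketch (names are illustrative):

```python
def normalized_mean(values, scale):
    """Normalize each value against a common scale, then average the ratios."""
    ratios = [values[name] / scale[name] for name in values]
    return sum(ratios) / len(ratios)

# The grocery example: quantities needed vs. on hand
needed = {"milk_gallons": 2, "eggs_dozen": 3}
on_hand = {"milk_gallons": 1, "eggs_dozen": 2}
print(normalized_mean(needed, on_hand))  # (2/1 + 3/2) / 2 = 1.75
```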

The benefit of reducing dissimilar benchmarks values to a single representative metric is to simplify the expression and comparison of related performance properties. It allows us to present cloud performance more generally, and at a level more fitting to the interests and time of cloud users. As much as we'd like users to become well versed in the intricacies of benchmarking and performance analysis, this is simply not feasible for most, and is a primary reason for our existence. Our goal is to provide users with a simple starting point to help narrow the scope from hundreds of possible cloud services.

In order to more generally and simply present cloud performance information we generate a single value derived from multiple related benchmarks

CPU Performance Metric

The CPU performance metric displayed in the graph below was calculated using both SPECint and SPECfp metrics and the common scale ratio normalization technique described above. The common scale was the mid 80th percentile mean of all CloudHarmony SPECint and SPECfp test results from the prior year. These results included many different compute services and compute instance types, not just those included in this post. This calculation results in the following common normalization factors:

SPECint Factor
64.056
SPECfp Factor
55.995

To shorten the resulting long decimal values, ratios were multiplied by 100. The metric can thus be interpreted as CPU performance relative to the mean of compute instances from many different cloud services. A value of 100 represents performance comparable to the mean, 200 twice the mean, and 50 half the mean. For example, the HP double-extra-large compute instance produced scores of 168.55417 for SPECint, and 132.3234 for SPECfp. The resulting CPU performance metric of 249.72 was then calculated using the following formula: $$\text"CPU Performance"\ = (((168.55417/64.056) + (132.3234/55.995))/2)*100 → (4.99448532/2)*100 → 249.724266$$ The value 249.72 signifies this instance type performed at about 2.5 times the mean.
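Expressed as code, the metric is just the ratio-normalization technique applied with the two published factors (a sketch; only the factors and the HP example figures come from the post):

```python
# Common-scale factors: mid-80th-percentile means of prior-year results
SPECINT_FACTOR = 64.056
SPECFP_FACTOR = 55.995

def cpu_performance(specint, specfp):
    """Average the scale-normalized SPECint/SPECfp ratios, scaled by 100."""
    int_ratio = specint / SPECINT_FACTOR
    fp_ratio = specfp / SPECFP_FACTOR
    return (int_ratio + fp_ratio) / 2 * 100

# HP double-extra-large: SPECint 168.55417, SPECfp 132.3234
print(round(cpu_performance(168.55417, 132.3234), 2))  # 249.72
```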

The CPU performance metric used below represents SPECint and SPECfp scores relative to compute instances from many cloud services. A higher value is better

Value Calculation

Cloud compute pricing is usually tied to CPU and memory allocation, with larger instance types offering more (or faster) CPU cores and memory. The CPU2006 benchmark suites are designed to take advantage of multicore systems when compiled and run correctly. Given the same hardware type, our test results generally show a near linear correlation between CPU allocation and CPU2006 scores. Because of these factors, the CPU performance metric derived from CPU2006 is well-suited for estimating value of compute instance types. To do so, we calculate value by dividing the metric by the hourly USD instance cost. For example, the HP extra-large compute instance costs 0.56 USD per hour and has a performance metric of 152.96. The resulting value metric 273.14 is calculated using the following formula: $$\text"Fixed Value"\ = 152.96/0.56 → 273.142857$$

Tiered Value

The graph below allows selection of either Tiered or Fixed Value options. Tiered Value is Fixed Value with an adjustment applied to instances ranked in the top or bottom 20 percent. The table below lists the exact adjustments used. The concept behind tiered values is based loosely on CPU pricing models where the top end processors generally command premium per GHz pricing, while the low end is often discounted. The HP double-extra-large compute instance costs 1.12 USD per hour and has a performance metric of 249.72. It is also ranked in the 91st percentile which receives a +10% value adjustment. The resulting tiered value metric 245.256 is calculated using the following formula: $$\text"Tiered Value"\ = (249.72/1.12)*1.1 → 222.96*1.1 → 245.256$$

Tiered Value Ranking Adjustments
Ranking Percentile | Value Adjustment
Top 5% | +20%
Top 10% | +10%
Top 20% | +5%
Mid 60% | None
Bottom 20% | -5%
Bottom 10% | -10%
Bottom 5% | -20%
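The two value formulas can be sketched together as follows (function names are illustrative; note the post rounds the intermediate quotient to 222.96 before applying the +10% adjustment, yielding 245.256, while computing without intermediate rounding gives 245.26):

```python
def fixed_value(performance, hourly_usd):
    """Fixed Value: CPU performance metric divided by hourly on-demand cost."""
    return performance / hourly_usd

def tiered_value(performance, hourly_usd, percentile):
    """Fixed Value with the ranking adjustments from the table above applied."""
    value = fixed_value(performance, hourly_usd)
    if percentile >= 95:
        return value * 1.20  # top 5%: +20%
    if percentile >= 90:
        return value * 1.10  # top 10%: +10%
    if percentile >= 80:
        return value * 1.05  # top 20%: +5%
    if percentile <= 5:
        return value * 0.80  # bottom 5%: -20%
    if percentile <= 10:
        return value * 0.90  # bottom 10%: -10%
    if percentile <= 20:
        return value * 0.95  # bottom 20%: -5%
    return value             # mid 60%: no adjustment

# HP extra-large: 152.96 metric at 0.56 USD/hr
print(round(fixed_value(152.96, 0.56), 2))       # 273.14
# HP double-extra-large: 249.72 metric, 1.12 USD/hr, 91st percentile (+10%)
print(round(tiered_value(249.72, 1.12, 91), 2))  # 245.26
```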
Cloud compute pricing is usually tied to CPU and memory allocation. Value metrics in the graph below are derived by dividing CPU performance by the hourly cost

Price Normalization

Most cloud providers, including all those covered in this post, offer on demand hourly pricing for compute instances. In addition, some providers offer commit based pricing and volume discounts. AWS EC2, for example, offers six 1 and 3 year reserve/commit based pricing tiers. These pricing tiers exchange lower hourly rates for a setup fee paid in advance, and in the case of heavy reserve, a commitment to run the compute instance 24x7x365 for the duration of the term (light and medium reserve tiers do not have this requirement). In order to represent these pricing tiers in the graph below, the total cost was normalized to an hourly rate by amortizing the setup fee into the hourly rate. For example, the m3.xlarge instance type is offered under a 1 year heavy reserve tier for a 1,489 USD setup fee and 0.123 USD per hour. For this instance type and pricing model, the hourly rate used in the graph and for value metrics was 0.293 USD/hr, calculated using the following formula: $$\text"Normalized Hourly Rate"\ = ((1489/365)/24) + 0.123 → 0.17 + 0.123 → 0.293$$
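The amortization can be sketched as a small helper (function name illustrative; assumes 24x7 usage over the full term):

```python
def normalized_hourly_rate(setup_fee_usd, hourly_usd, term_years=1):
    """Amortize a reserve setup fee into an effective hourly rate (24x7 usage)."""
    hours_in_term = term_years * 365 * 24
    return setup_fee_usd / hours_in_term + hourly_usd

# m3.xlarge, 1 year heavy reserve: 1489 USD setup + 0.123 USD/hr
print(round(normalized_hourly_rate(1489, 0.123), 3))  # 0.293
```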

AWS EC2 is also available under a bid based pricing model called Spot pricing. Although spot pricing is typically priced substantially below standard rates, it is highly volatile and subject to transient spikes that may result in unexpected termination of instances without notice. Due to this, spot pricing is generally not recommended for long term usage. The spot pricing included in the graph below is based on a snapshot taken in early June 2013 and may not represent current rates.

Volume discount and membership based pricing, like Windows Azure MSDN, were not included in the graph and value analysis because they are not as straightforward, and often require substantial monthly spend commitments at which users would likely be able to negotiate similar discounts with any vendor.

The graph provides a drop down list allowing selection of different pricing models. When changed, the graph and table below will automatically update.

The AWS EC2 reserve hourly pricing in the graph below is based on a normalized hourly value calculated by amortizing the setup fee into the hourly rate

Visualizing Value & Performance

On our current website and in prior posts we've often used traditional bar charts to represent data visually. While this is a typical approach to presenting comparative analysis, it often results in lengthy displays and does not lend itself well to large multivariate data sets. In the search for a more efficient and intuitive way to visualize such data, we discovered the D3 visualization library, which provides excellent tools and examples for creating data visualizations, and used it to design the graph below. The goal of this graph is to present large multivariate data sets in a concise, intuitive and interactive format. In a relatively small space, this graph allows users to observe many different characteristics of cloud services including:

Performance
The size or diameter of the circle represents proportional CPU performance of each compute instance. A larger circle represents more performant systems.
Price & Value
The fill color of each circle represents either the value or the price of each compute instance (defaults to value). Users can toggle between price, fixed value and tiered value fill options. Blue represents better value/lower price, while red represents lower value/higher price. A grey color is used for the midrange.
Vertical Scalability
Not all workloads lend themselves well to horizontal scaling models (where load is spread across many compute nodes). Legacy database servers, for example, often do not (easily) support multi-node clusters. By observing the variation in circle sizes from small to large, users may better understand the vertical scaling range and limits of each cloud service.
Instance Type Variability
Results are grouped by instance type and CPU architecture. In the case of EC2, this allowed display of multiple records for a single instance type. The m1.large, for example, deployed to 3 different host types during our testing, each demonstrating slightly different performance characteristics.
Multiple Pricing Models
Users may view pricing and value based on different service pricing models. In the case of EC2, this allows toggling between on demand, reserve and spot pricing. Results in the graph and details table are updated instantly when the pricing model selection is changed.

Below the graph a sortable table displays details for each service and compute instance displayed in the graph. This table updates dynamically when fill color or pricing model selections are changed. Details for specific compute instances can also be viewed by hovering over a circle. In addition, users may zoom into a particular service by clicking on the container for that service. The graph can also be displayed in a larger popup view by clicking on the blue zoom icon displayed in the upper right corner when hovering over it.

The interactive graph below displays multiple characteristics of compute services and instance types including performance, price, value and vertical scalability. EC2 price and value can be toggled between on demand and reserve pricing tiers

CPU2006 Results Summary Diagram

This diagram displays the actual CPU2006 SPECint and SPECfp metrics for each compute service and instance type. Hovering over a specific segment in the diagram displays these metrics.

Segments in this diagram depict individual benchmark metrics for each compute service and instance type, color coded so that blue represents a better score and red a worse one. A "group by service" option lists all instances for a specific compute service together; cloud services are then ordered by the mean performance of all instance types belonging to each service, with the best performing service in the 12 o'clock position. When not grouped by service, compute instances are ordered by mean results, with the best performing instance in the 12 o'clock position.

Comments and Observations

As is our general policy, we don't recommend any one service over another. However, we'd like to point out some observations about each compute service included in this post.

AWS EC2

Google Compute Engine (GCE)

Windows Azure

HP Cloud

Rackspace Cloud

Next Up - Storage IO

CPU and storage IO are generally the two most important performance characteristics for compute services. Depending on workload, one might be more important than the other. Compute services often offer multiple storage options. Many storage options are networked and thus subject to higher variability than CPU and memory. Many workloads are sensitive to IO variations and may perform poorly in such environments. In the next post, we'll present IO performance and consistency analysis for the same providers covered in this post. Storage options covered will include:

AWS EC2
Ephemeral, EBS, EBS Provisioned IOPS, EBS Optimized
Google Compute Engine
Local/Scratch, Persistent Storage
HP Cloud
Local, Block/External Storage
Azure
Local Replicated, Geo Replicated
Rackspace
Local, SATA and SSD Block/External Storage

Following storage IO, we will also release posts covering network performance (inter-region, intra-region and external) and object storage IO.

TAGS:
CPU Performance;
Cloud Compute;
Memory IO Performance;
AWS EC2;
Rackspace;
Google Compute Engine;
HP Cloud;
Windows Azure

Alexa Top 10,000 DNS Marketshare - May 6, 2013

This post provides an update on DNS marketshare statistics for May 2013. This is a follow up to our previous posts in March and April 2013.

Alexa Top 10,000 DNS Marketshare - May 6, 2013

The following table provides a comparison of DNS provider marketshare between Apr 8 and May 6 2013 for the top 10,000 Alexa websites. Alexa rankings change frequently. This analysis is based on an Alexa rankings snapshot taken on April 5 2013.

Provider | Rank | Websites (out of 10,000) | Marketshare | Marketshare Change
DynECT | 1 | 440 | 4.4% | +6 / +1.382%
AWS Route 53 | 2 | 381 | 3.81% | +14 / +3.815%
UltraDNS | 3 | 361 | 3.61% | -2 / -0.551%
DNSPod | 4 | 336 | 3.36% | +5 / +1.511%
CloudFlare | 5 | 314 | 3.14% | +23 / +7.904%
GoDaddy DNS | 6 | 287 | 2.87% | -10 / -3.367%
DNS Made Easy | 7 | 246 | 2.46% | 0
Akamai | 8 | 217 | 2.17% | +10 / +4.831%
Rackspace Cloud DNS | 9 | 156 | 1.56% | -2 / -1.266%
Verisign DNS | 10 | 106 | 1.06% | +5 / +4.95%
Softlayer DNS | 11 | 79 | 0.79% | 0
Namecheap | 12 | 76 | 0.76% | 0
easyDNS | 13 | 76 | 0.76% | -1 / -1.299%
Enom DNS | 14 | 66 | 0.66% | -1 / -1.493%
Cotendo Advanced DNS | 15 | 47 | 0.47% | -11 / -18.966%
Savvis | 16 | 42 | 0.42% | 0
Nettica | 17 | 30 | 0.3% | 0
ZoneEdit | 18 | 29 | 0.29% | 0
Internap | 19 | 27 | 0.27% | 0
ClouDNS | 20 | 21 | 0.21% | +3 / +16.667%
DNS Park | 21 | 17 | 0.17% | +1 / +6.25%
No-IP | 22 | 12 | 0.12% | 0
Zerigo DNS | 23 | 10 | 0.1% | 0
EuroDNS | 24 | 7 | 0.07% | 0
Worldwide DNS | 25 | 5 | 0.05% | -1 / -16.667%
DTDNS | 26 | 2 | 0.02% | 0
CDNetworks DNS | 27 | 2 | 0.02% | +1 / +100%

Akamai's marketshare growth is primarily being driven by migrations off the Cotendo platform, which is scheduled to be shut down sometime this year.

AWS Route 53 and CloudFlare continue to grow significantly in this market segment.

Alexa Top 1,000 DNS Marketshare - May 6, 2013

The following table provides a comparison of DNS provider marketshare between Apr 8 and May 6 2013 for the top 1,000 Alexa websites. Alexa rankings change frequently. This analysis is based on an Alexa rankings snapshot taken on April 5 2013.

This market segment is much less volatile month-to-month relative to the top 10,000.

Provider | Rank | Websites (out of 1,000) | Marketshare | Marketshare Change
DynECT | 1 | 79 | 7.9% | 0
UltraDNS | 2 | 63 | 6.3% | +1 / +1.613%
Akamai | 3 | 48 | 4.8% | 0
AWS Route 53 | 4 | 34 | 3.4% | -1 / -2.857%
DNSPod | 5 | 32 | 3.2% | 0
DNS Made Easy | 6 | 21 | 2.1% | 0
GoDaddy DNS | 7 | 14 | 1.4% | 0
Cotendo Advanced DNS | 8 | 11 | 1.1% | -1 / -8.333%
Verisign DNS | 9 | 10 | 1% | 0
easyDNS | 10 | 10 | 1% | 0
CloudFlare | 11 | 8 | 0.8% | +1 / +14.286%
Rackspace Cloud DNS | 12 | 7 | 0.7% | 0
Namecheap | 13 | 6 | 0.6% | 0
Softlayer DNS | 14 | 5 | 0.5% | 0
Enom DNS | 15 | 5 | 0.5% | 0
Internap | 16 | 3 | 0.3% | 0
Savvis | 17 | 3 | 0.3% | 0
Nettica | 18 | 2 | 0.2% | 0
ClouDNS | 19 | 2 | 0.2% | 0
ZoneEdit | 20 | 2 | 0.2% | 0
DTDNS | 21 | 1 | 0.1% | 0
EuroDNS | 22 | 1 | 0.1% | 0
No-IP | 23 | 1 | 0.1% | 0
Worldwide DNS | 24 | 1 | 0.1% | 0
TAGS:
DNS Marketshare;
Alexa 10,000;
Alexa 1,000;
Fortune 500;
Dyn;
AWS Route 53;
UltraDNS;
DNSPod

DNS Marketshare - Alexa 10,000 + Fortune 500 - April 2013

This is a follow-up to last month's DNS marketshare post. In this post we use the same top 10,000 Alexa websites as last month (based on a 10 Mar 2013 Alexa snapshot). We then mined the DNS servers for each of these websites again using the same technique. The tables below represent the changes in marketshare and DNS hosting that occurred during this time period (about 30 days).

New to this post is marketshare analysis for 2012 US Fortune 500 companies. This is a list published annually by Fortune Magazine ranking public US companies by gross revenue. For this analysis, we used the corporate websites listed for these companies on the CNN.com website. This marketshare analysis shows a very different provider makeup relative to top Alexa websites.

Notable during the past 30 days was a large scale DNS DDoS attack against spam blacklist provider spamhaus. To mitigate this attack, spamhaus switched DNS hosting to CloudFlare, which worked internally and with upstream providers to successfully defend spamhaus against the attack, as documented on their blog. The resulting positive press appears to have attracted many new customers, with CloudFlare's marketshare increasing by nearly 9% (22 Alexa 10,000 websites).

Fortune 500 DNS Marketshare - Apr 8, 2013

The following table provides a comparison of DNS provider marketshare between Mar 5 and Apr 8 2013 for the 2012 US Fortune 500 Companies.

Provider | Rank | Websites (out of 500) | Marketshare | Marketshare Change
UltraDNS | 1 | 35 | 7% | +1 / +2.941%
Verisign DNS | 2 | 24 | 4.8% | 0
Akamai | 3 | 13 | 2.6% | 0
DynECT | 4 | 8 | 1.6% | 0
DNS Made Easy | 5 | 6 | 1.2% | 0
Savvis | 6 | 4 | 0.8% | 0
GoDaddy DNS | 7 | 4 | 0.8% | 0
Internap | 8 | 4 | 0.8% | 0
Rackspace Cloud DNS | 9 | 2 | 0.4% | 0
AWS Route 53 | 10 | 2 | 0.4% | +1 / +100%

Alexa Top 1,000 DNS Marketshare - Apr 8, 2013

The following table provides a comparison of DNS provider marketshare between Mar 5 and Apr 8 2013 for the top 1,000 Alexa websites.

Provider | Rank | Websites (out of 1,000) | Marketshare | Marketshare Change
DynECT | 1 | 78 | 7.8% | +5 / +6.849%
UltraDNS | 2 | 62 | 6.2% | 0
Akamai | 3 | 49 | 4.9% | +3 / +6.522%
AWS Route 53 | 4 | 36 | 3.6% | -1 / -2.703%
DNSPod | 5 | 28 | 2.8% | 0
DNS Made Easy | 6 | 21 | 2.1% | 0
GoDaddy DNS | 7 | 13 | 1.3% | 0
Cotendo Advanced DNS | 8 | 11 | 1.1% | -1 / -8.333%
Verisign DNS | 9 | 11 | 1.1% | +1 / +10%
easyDNS | 10 | 10 | 1% | 0
Rackspace Cloud DNS | 11 | 8 | 0.8% | 0
CloudFlare | 12 | 8 | 0.8% | +2 / +33.333%
Enom DNS | 13 | 7 | 0.7% | 0
Namecheap | 14 | 6 | 0.6% | 0
Softlayer DNS | 15 | 5 | 0.5% | 0
Internap | 16 | 4 | 0.4% | 0
Savvis | 17 | 3 | 0.3% | 0
Nettica | 18 | 2 | 0.2% | 0
ClouDNS | 19 | 2 | 0.2% | 0
ZoneEdit | 20 | 2 | 0.2% | 0
DTDNS | 21 | 1 | 0.1% | 0
EuroDNS | 22 | 1 | 0.1% | 0
Worldwide DNS | 23 | 1 | 0.1% | 0
No-IP | 24 | 1 | 0.1% | 0