Math Is Interesting

Saturday, January 30, 2010

NComputing Brings Inverse Cloud Computing To Joe The Plumber

My apologies, but with the election over, I just couldn't resist the urge to use 'Joe the Plumber'. What a joke, but I'll tell you what isn't a joke: NComputing's networked computer system. The tiny, fanless box contains no CPU or high-end hardware, yet it lets its user perform day-to-day PC tasks such as Web surfing, document editing and more. It works by connecting to one central computer that does all the heavy lifting and is shared with other users. As a result, NComputing's machines are ultra green, using 95% less energy than the laptop I write this post on – about 1–4 watts. Also, with no moving parts in the NComputers there's less to break down and less heat dissipation, which means no cooling fans and no costly, non-eco-friendly A/C.

NComputing's devices are already in use by over a million people in India, Bangladesh and Macedonia, and are largely deployed in schools, businesses and public-access areas.

Official product page here

Mathematical Proof of the Inevitability of Cloud Computing

http://cloudonomics.wordpress.com/2009/11/30/mathematical-proof-of-the-inevitability-of-cloud-computing/

In the emerging business model and technology known as cloud computing, there has been discussion regarding whether a private solution, a cloud-based utility service, or a mix of the two is optimal. My analysis examines the conditions under which dedicated capacity, on-demand capacity, or a hybrid of the two are lowest cost. The analysis applies not just to cloud computing, but also to similar decisions, e.g.: buy a house or rent it; rent a house or stay in a hotel; buy a car or rent it; rent a car or take a taxi; and so forth.

To jump right to the punchline(s), a pay-per-use solution obviously makes sense if the unit cost of cloud services is lower than that of dedicated, owned capacity. And, in many cases, clouds provide this cost advantage.

Counterintuitively, though, a pure cloud solution also makes sense even if its unit cost is higher, as long as the peak-to-average ratio of the demand curve is higher than the utility premium, i.e., the ratio of on-demand unit cost to dedicated unit cost. In other words, even if cloud services cost, say, twice as much, a pure cloud solution makes sense for those demand curves where the peak-to-average ratio is two-to-one or higher. This is very often the case across a variety of industries. The reason for this is that the fixed-capacity dedicated solution must be built to peak, whereas the cost of the on-demand, pay-per-use solution is proportional to the average.
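
To make the break-even concrete, here is a minimal sketch in Python comparing the two totals; the demand series, unit costs, and function name are illustrative assumptions, not figures from the original analysis.

```python
# Minimal sketch of the pure-cloud vs. dedicated comparison.
# All names and numbers below are illustrative assumptions.

def compare_totals(demand, dedicated_unit_cost, cloud_unit_cost):
    """Dedicated capacity is sized to peak and paid for every period;
    pay-per-use capacity is paid for only when demand actually occurs."""
    peak = max(demand)
    dedicated_total = dedicated_unit_cost * peak * len(demand)  # built to peak, always on
    cloud_total = cloud_unit_cost * sum(demand)                 # proportional to average use
    return dedicated_total, cloud_total

# Hypothetical hourly demand with a 3:1 peak-to-average ratio.
demand = [10, 10, 10, 10, 10, 50]
dedicated, cloud = compare_totals(demand, dedicated_unit_cost=1.0, cloud_unit_cost=2.0)
print(dedicated, cloud)  # 300.0 200.0 -- the 2x unit premium is beaten by the 3:1 peak-to-average ratio
```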

Also important and not obvious: leveraging pay-per-use pricing, either in a wholly on-demand solution or in a hybrid with dedicated capacity, turns out to make sense any time there is a peak of “short enough” duration. Specifically, if the percentage of time spent at peak is less than the inverse of the utility premium, using a cloud or other pay-per-use utility for at least part of the solution makes sense. For example, even if the cost of cloud services were, say, four times as much as owned capacity, they would still make sense as part of the solution if peak demand occurred only one-quarter of the time or less.
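
A sketch of that threshold test, stated exactly as above (fraction of time at peak versus the inverse of the utility premium); the function name and sample figures are assumptions for illustration.

```python
# Sketch of the "short enough peak" rule: pay-per-use belongs in the solution
# whenever the fraction of time spent at peak is less than 1 / utility_premium.
# Names and figures are illustrative assumptions.

def utility_worthwhile(peak_fraction: float, utility_premium: float) -> bool:
    """Return True if serving the peak from pay-per-use capacity costs less
    than keeping dedicated capacity for it.

    Dedicated capacity for the peak is paid for all of the time; utility
    capacity costs utility_premium times as much per unit, but only for
    the peak_fraction of time it is actually used."""
    return peak_fraction < 1.0 / utility_premium

print(utility_worthwhile(peak_fraction=0.20, utility_premium=4.0))  # True: peak is short enough
print(utility_worthwhile(peak_fraction=0.30, utility_premium=4.0))  # False: peak lasts too long
```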

In practice, this means that cloud services should be widely adopted, since absolute peaks rarely last that long. For example, today, Cyber Monday, represents peak demand for many etailers; it is a peak whose duration is only one three-hundred-sixty-fifth of the year. Online flower services that reach peaks around Valentine's Day and Mother's Day have a peak duration of only one one-hundred-eightieth of the time. While retailers experience most of their business during one month of the year, there are busy days and slow days even during those peaks. “Peak” is actually a fractal concept, so if cloud resources can be provisioned, deprovisioned, and billed by the hour or by the minute, then instead of peak month or peak day we need to look at peak hours or peak minutes, in which case the conclusions are even more compelling.
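
Plugging the peak durations cited above into the same rule gives a sense of how forgiving the threshold is; the arithmetic below is illustrative only.

```python
# Illustrative arithmetic for the peak durations cited above: the largest
# utility premium at which pay-per-use still pays off is 1 / peak_fraction.
examples = [
    ("Cyber Monday (about 1 day per year)", 1 / 365),
    ("Valentine's Day / Mother's Day flowers (about 2 days per year)", 1 / 180),
]
for label, peak_fraction in examples:
    max_premium = 1 / peak_fraction
    print(f"{label}: worthwhile up to roughly a {max_premium:.0f}x unit-cost premium")
```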

I look at the optimal cost solutions between dedicated capacity, which is paid for whether it is used or not, and pay-per-use utilities. My assumptions for this analysis are that pay-per-use capacity is 1) paid for when used and not paid for when not used; 2) the cost for such capacity does not depend on the time of request or use; 3) the unit cost for on-demand or dedicated capacity does not depend on the quantity of resources requested; 4) there are no additional relevant costs needed for the analysis; 5) all demand must be served without delay.

These are assumptions which may or may not correspond to reality. For example, with respect to assumption (1), most pay-per-use pricing mechanisms offered today are pure. However, in many domains there are membership fees, non-refundable deposits, option fees, or reservation fees, so one may end up paying even if the capacity is not used. Assumption (2) may not hold due to the time value of money, or to the extent that dynamic pricing exists in the industry under consideration: a (pay-per-use) hotel room may cost $79 on Tuesday but $799 the following Saturday night. Assumption (3) may not hold due to quantity discounts or, conversely, because the service provider uses yield-management techniques to charge less when provider capacity is underutilized and more as provider capacity nears 100% utilization. Assumption (4) may or may not apply depending on the nature of the application and the marginal costs of linking dedicated resources to on-demand resources versus having them all dedicated or all on-demand. As an example, there may be wide-area network bandwidth costs to link an enterprise data center to a cloud service provider's location. Finally, assumption (5) actually says two things: one, that we must serve all demand, not just a limited portion, and two, that we don't have the ability to defer demand until there is sufficient capacity available. Serving all demand makes sense, because presumably the cost of serving the demand is greatly exceeded by the revenue or value of serving it. Otherwise, the lowest-cost solution is zero dedicated and zero utility resources; in other words, just shut down the business. In some cases we can defer demand, e.g., scheduling elective surgery or waiting for a restaurant table to open up. However, most tasks today seem to require nearly real-time response, whether it's web search, streaming a video, buying or selling stocks, communicating, collaborating, or microblogging.

It is tempting to view this analysis as relating to “private enterprise data centers” vs. “cloud service providers,” but strictly speaking this is not true. For example, the dedicated capacity may be viewed as owned resources in a co-location facility, managed servers or storage with fixed capacity under a long term lease or managed services contract, or even “reserved instances.” By “dedicated” we really mean “fixed for the time period under consideration.” For this reason, I will use the terms “pay-per-use” or “utility” rather than “cloud” except when providing colloquial interpretations.

Let the demand D for resources during the interval 0 to T be a function of time D(t), 0 <= t <= T.
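
One way to write down the quantities the earlier arguments rely on, in my own notation (P for peak, A for average, c for the unit cost of dedicated capacity, U for the utility premium); this is a sketch of the comparison, not the author's derivation.

```latex
% Notation (mine, not necessarily the author's): P = peak demand, A = average
% demand, c = unit cost of dedicated capacity, U = utility premium.
P = \max_{0 \le t \le T} D(t), \qquad
A = \frac{1}{T}\int_{0}^{T} D(t)\,\mathrm{d}t

% Dedicated capacity must be built to peak, so its total cost is c\,P\,T.
% Pay-per-use cost is proportional to use: U\,c\int_{0}^{T} D(t)\,\mathrm{d}t = U\,c\,A\,T.
% Hence a pure pay-per-use solution is cheaper exactly when P/A > U.
```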

Inverse Cloud Computing

Current selling points of cloud computing: computing power and storage space
System development based on an Internet n-tier server architecture, with the browser as the terminal
Wu Sheng's (吳昇) Web 3. and the advanced features of the Opera browser
Cloud computing, interactive e-book reading, and knowledge management
Cloud computing and DC power supply systems
The benefits of cloud computing for disaster reporting and disaster-governance systems
Taiwan's large-scale cloud computing experiment programs
Taiwan's front-end infrastructure for cloud computing
