What is the PUE of your cloud data center? Google's is 1.10, Microsoft's is 1.13 - 1.2, Amazon's is ?

It is a pretty safe assumption that a Cloud data center has a low PUE.  The Cloud business is so competitive that the cost to run the power and cooling systems comes straight out of profit margins, so almost everyone should be driving more efficient systems.

How efficient are the cloud companies?  

Google is easy to figure out as they quote PUE quarterly here.

[Image: Google's quarterly PUE data]

GigaOm's Stacey Higginbotham had a post on efficient data centers quoting PUE.

Microsoft gave Stacey a bunch of data, but not an exact number.

Microsoft sent me a bunch of information on its PUE figures for its newest data centers which range from 1.13 to 1.2. It doesn’t disclose the PUE for all of its data centers, however.

For Amazon, there is no clear answer.  Note: James Hamilton does not claim the PUE below is representative of Amazon.  Given Amazon will let temperatures rise in its warehouses for workers, it is hard to believe it wouldn't do the same for voiceless servers.

Amazon’s data center guru James Hamilton published a presentation on Amazon last year that assumed a PUE of 1.45 for the online retailer’s data centers.
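For perspective, PUE is total facility power divided by IT equipment power, so the spread between Google's 1.10 and the 1.45 assumed for Amazon is the difference between 100 kW and 450 kW of overhead for every megawatt of IT load.  Below is a minimal sketch of that arithmetic; the PUE figures are just the ones quoted above, and the 1 MW IT load is a made-up number for illustration.

```python
# Rough PUE comparison using the figures quoted above.
# PUE = total facility power / IT equipment power, so overhead = (PUE - 1) * IT load.

def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Power spent on cooling, distribution losses, lighting, etc. for a given IT load."""
    return (pue - 1.0) * it_load_kw

IT_LOAD_KW = 1_000  # hypothetical 1 MW of IT load

for name, pue in [("Google", 1.10), ("Microsoft (low end)", 1.13),
                  ("Microsoft (high end)", 1.20), ("Amazon (assumed)", 1.45)]:
    print(f"{name:20s} PUE {pue:.2f} -> {overhead_kw(pue, IT_LOAD_KW):4.0f} kW of overhead")
```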

Virtualization is 50 years old; maybe the ones excited about virtualization don't remember the mainframe

IBM Systems Magazine has an article on the history of virtualization.

Origins Back to the Mainframe


While often considered a new concept, the idea of virtualization is more than a half-century old. In 1959, computer program language pioneer Christopher Strachey published “Processing Time Sharing in Large Fast Computers” at the International Conference on Information Processing at UNESCO in New York. This article dealt with the use of multiprogramming time sharing and established a new concept of using large machines to increase the productivity of hardware resources. Multiprogramming was used in the Atlas super computer in the early 1960s.

Virtualization started as a way to share an expensive mainframe.

Starting with Time Sharing

Recognizing a new opportunity, in the early ’60s, the IBM Watson Research Center initiated the M44/44X project to evaluate the time-sharing system concept. The architecture was based on VMs; the main one was an IBM 7044 (M44). The address space of the 44X was resident in the M44 memory hierarchy machines, implemented by means of virtual memory and multiprogramming. After the first experiments, IBM made a series of updates to its architecture and spawned several other projects, including the IBM 7040 and IBM 7094, in conjunction with the Compatible Time Sharing System (CTSS) developed by the Massachusetts Institute of Technology (MIT).

The latest wave of server virtualization, led by VMware, was driven by the realization that a large number of applications needed only a fraction of a server, not even half.  VMware was able to sell users on virtualization as a way to reduce server costs.
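A back-of-the-envelope calculation shows why that pitch worked.  The per-application utilization numbers below are invented for illustration, but if each application averages well under half a server, consolidating them onto shared virtualization hosts cuts the physical server count sharply.

```python
import math

# Hypothetical fleet: each application's average utilization of the one
# physical server it currently has to itself (made-up numbers).
app_utilizations = [0.05, 0.10, 0.08, 0.30, 0.15, 0.12, 0.07, 0.20]

TARGET_HOST_UTILIZATION = 0.70  # leave headroom on each virtualization host

hosts_needed = math.ceil(sum(app_utilizations) / TARGET_HOST_UTILIZATION)
print(f"{len(app_utilizations)} dedicated servers -> {hosts_needed} virtualized hosts")
```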

Driving Force

In recent years, enterprises have made virtualization a priority. This has been driven by the need to decrease costs associated with:

  • IT facilities—space, power and cooling
  • Software—licenses, maintenance and support
  • Hardware—servers, storage, network switches and routers
  • Administration—site, server, software, applications and data

In the article, IBM goes on to state that it has the longest history of virtualization and therefore is the most advanced.  There are many users who don't know the history of IBM's virtualization efforts and wouldn't consider a 40-year head start to be an advantage.

 

These motivators have contributed to the evolution of virtualization technology, and the introduction of a multitude of proprietary and non-proprietary products. However, IBM’s solutions are the most advanced because of the company’s long history in this area. IBM released its first hypervisor in 1967, giving it a 40-year head start on the competition. Most recently, IBM released the z/VM V6.2 update this past April.

For many, virtualization expertise is not like Scotch, which sells for a premium if it is 40 years old.

Scotch 40 Year Old Whisky

 

If you are looking for the perfect gift for an acquaintance, or simply a great libation to ponder over, we have a superb selection; different regions, different styles and different ages, we have a whisky for every palate. We offer a great range of 40 year old whiskies. These are very well aged, and thus extremely rare. These are in limited supply and offer an outstanding amount of oaken maturity and complexity. These are collector's items. We have 40 year old Scotch whiskies. These are spirits with a massively broad flavour profile. It is surprising just how much variation there can be in one country. If you're still undecided, then why not get in touch with us? We would be more than happy to point you in the right direction; we have a team of whisky zealots at the ready to recommend the perfect whisky for any occasion.


Getting some good laughs from the Federal Data Center Consolidation

I don't know about you, but from the beginning, when the US Fed Gov't announced they were going to close data centers, I was getting some good laughs.  Really, you think the way to close a data center is to tell the staff that their data center will be shut down?  IT staff have learned to ignore strategic projects from the CIO, as he/she will be gone in 2 years.

Vivek Kundra, who announced the Federal data center consolidation, lasted 2 1/2 years.

Vivek Kundra (Hindi: विवेक कुंद्रा; born October 9, 1974) is an Indian American administrator who served as the first chief information officer of the United States from March 2009 to August 2011 under President Barack Obama.

Network Computing has a post on the three lessons learned from the Fed's data center consolidation.  Jumping to the last two paragraphs:

To understand just how wrong a data center consolidation can go, consider the findings of MeriTalk’s recent survey of 200 federal data center operators. Not only did 56 percent give their agencies’ consolidation efforts a grade of C or below -- a clear sign of a project headed south -- a whopping 45 percent believe that it’s more likely they’ll win the lottery, the Washington Redskins will win the next Super Bowl, or another large meteor will hit the Earth than agencies will meet FDCCI objectives.

Far from a glowing endorsement, but it serves as a third lesson -- an important reminder to IT execs that they’d be well advised to consider their data center teams’ perspectives before going full steam with consolidation mandates.

The IT staff is getting a good laugh as they continue to do the same job they did yesterday, the month before, the year before, and last decade. :-)

Why is the Cloud winning? It has less friction than legacy IT systems

Security systems work by making it harder for things to get done.  Security creates more friction in the system.  If you have enough momentum you can overcome the friction; if you don't, you get stuck.  If a cyber criminal has a user's login and password, the security systems fail because he has taken the low-friction path that gives him access.

Legacy IT systems are riddled with systems and processes that create more friction: standards, meetings, specifications, and requirements all usually make it harder to get things done.

The Cloud is easy.  Log in, give them a credit card number, request a service, and it is up and running in seconds.  The Cloud focuses on less friction, and that is why it is winning.
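To make the contrast concrete, here is a minimal sketch of that low-friction path: requesting a server from a cloud API.  It assumes AWS EC2 via the boto3 library, credentials already configured on the machine, and a placeholder machine image ID.

```python
# Minimal sketch: request a virtual server from a cloud API (AWS EC2 via boto3).
# Assumes credentials are already configured; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image
    InstanceType="t2.micro",  # small, inexpensive instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}")
```

Compare that with the legacy path of purchase orders, rack space requests, and change-control meetings.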

Who would want a cloud service that took just as long as a legacy service, eating up your time and money just as you always have in the past?  Whether it is virtualized or not, who cares?  Well, VMware cares, but most of you shouldn't really care whether it is a virtualized environment.