In The Cloud, do you have visibility into the Physical Infrastructure? No, which is a problem when a Superstorm like Sandy hits

The way some people talk, the Cloud is the only way to go.  But part of being the Cloud is being opaque. You can't see the details; the Cloud is not transparent.  What kind of transparency am I talking about?  If you knew a Superstorm like Sandy was coming to hit an area where you have your Cloud service, could you determine the physical risk to the facility?

I was reading Barb Darrow's GigaOm post on lessons from Superstorm Sandy for IT, and it is a good summary of some of the lessons.

What superstorm Sandy taught us about protecting IT infrastructure

 


Some post-storm lessons are no brainers — make sure you have the right fuel hoses as well as the fuel itself? Duh. Others may come as a surprise.

Then I thought: wait, if you are on a Cloud service in NYC, you can't find out anything like what she suggests.

3: Carefully assess the backup power situation

Even if you have plenty of fuel for backup generators, it won’t help if the generators themselves or the pumps to supply them get flooded in a basement. If this gear must stay on lower floors, make sure it’s fully encapsulated and waterproofed, said Michael Levy, analyst at 451 Research.

Post Sandy, service providers should also make sure they have roll-up generators as well as fuel hoses onsite and easily accessible, said Ryan Murphey, VP of operations for PEER 1 Hosting. Oh, and make sure those hoses fit both the generators and the fuel trucks.

If you are in a colocation site, you can tour the facility to do your own assessment, and you have contacts with whom to discuss the physical facility.  Can you communicate with the facilities department at AWS in Ashburn, VA?  Hell no.

If you have mission critical information in the cloud, do you want more transparency?  Hell yes.

Add this as another reason why users move out of Clouds like AWS to colocation facilities.

Here is an example where AWS's data centers went down during a power outage while Digital Realty and DuPont Fabros stayed up.

The two largest wholesale data center operators in the northern Virginia market said their data centers performed flawlessly during last weekend’s electrical storms, maintaining electrical power during grid outages and keeping customers online. Digital Realty Trust (DLR) owns 19 data centers in the region, while DuPont Fabros Technology (DFT) operates seven facilities. Between them, the two companies operate more than 3.3 million square feet of data center space in northern Virginia.

The announcements provide a contrast to the performance of Amazon Web Services, which had a data center that was knocked offline by power outages during the storms that hit northern Virginia Friday night. While the storms were powerful, other leading data center operators were able to keep their facilities online. The announcements also indicated where Amazon’s failures didn’t happen. Amazon is a major tenant for Digital Realty, leasing 448,895 square feet in six properties, including several in northern Virginia. The announcement made it clear that the Amazon outage did not occur in one of Digital Realty’s buildings, as some have speculated.

Digital Realty said its data centers in northern Virginia “operated as designed and engineered to maintain the highest degree of reliability. These uptime metrics are based on a comprehensive evaluation of the Company’s facilities worldwide using standard industry methodology.

Clouds have outages.  And with the lack of transparency into physical facilities, your risk exposure is unknown.  Kind of scary.

 

Have you hit $50k a month on AWS? Do you have the itch to move?

GigaOm's Barb Darrow has a post on startups that have moved off of AWS.

Amazon Web Services: should you stay or should you go?

 


Startups love the flexibility and pricing of AWS. But then again, no one cloud is right for everyone. Here are a few startups who decided to move at least some of their workloads off AWS.

$50K is the magic number that gets discussed in the article.  One bigger player, not a startup, is in the post with its views.

Paris Georgiallis, VP of platform operations for RMS, which builds catastrophe risk models for insurance companies, also put some credence in the $50,000 per month cut off, although he cautioned that every user’s needs vary. “$50K a month equates to $1.8M of capital spend over 36 months. An experienced IT team can work miracles with $1.8M in infrastructure, especially in a mid-to-large enterprise,” he said via email.
RMS started out with AWS because, well, its developers loved AWS. That appeal is Amazon’s ace in the hole. AWS “is winning the developer war much like Microsoft did in the 90s — by creating simple tools and eliminating infrastructure as a concern for developers; that attraction is high.” The problem with that is developers tend to treat AWS as an all-you-can-eat buffet, which is fine — to a point. But with poor management of VMs and storage, costs can and often do skyrocket.
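It is worth pausing on that arithmetic. Here is a minimal sketch of it; the $50K/month figure and the 36-month horizon come from the quote above, and treating the entire cloud bill as redeployable capital is a simplification, not RMS's actual model.

```python
# A back-of-the-envelope check of the $50K/month threshold quoted above.
# Only the monthly spend and the 36-month window come from the article;
# treating the whole cloud bill as redeployable capital is a simplification.

monthly_cloud_spend = 50_000   # USD per month on AWS
horizon_months = 36            # a typical server depreciation window

equivalent_capital = monthly_cloud_spend * horizon_months
print(f"${equivalent_capital:,} over {horizon_months} months")
# -> $1,800,000 over 36 months, the figure cited in the quote
```

At that level an experienced IT team has real buying power, which is exactly the point being made.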

And in the comment section there is a reader who references his own blog post on hosting their own servers vs. SoftLayer in London.

The hardware experiment – London colocation

Written by David Mytton

Recently we’ve been reviewing the infrastructure that powers our server and website monitoring service, Server Density, and as a result we have started an experiment looking into buying and colocating our own physical hardware.

Currently, the service is run from 2 data centers in the US with Softlayer and we’re very happy with the service. The ability to deploy new hardware or cloud VMs within hours or minutes on a monthly contract, plus the supporting services like global IPs is very attractive. However, we’re now spending a significant amount of money each month which makes it worth considering running our own hardware.

Time to stop using the word "Cloud"

I was having a dinner conversation that covered a bunch of different topics.  One was a future SaaS solution being built in the cloud.  We were discussing the cloud, and then I made the suggestion that it may be best not to use the word "Cloud."

Why?  Doesn't everyone want a Cloud?  No.  Some people want the Cloud.  To some, the Cloud means it is not secure, it goes down, and it is not as good as legacy systems.  Thanks to AWS outages, many perceive Clouds as less reliable.  Microsoft, Google, and Twitter have had outages, and the media jumps on them.  Cloud services like LinkedIn have had security breaches.  Perception is reality.

Discussing this idea with another executive who supports the rollout of a SaaS Cloud service, I asked whether he spends time in "damage control" mode when a user thinks the Cloud is not secure and not reliable.  Yes, all the time. He has to explain how his Cloud is better than others.

So, how about just not calling your service a Cloud SaaS?

If users make the leap that your service is like a cloud service and they view that positively, then fine, say it is a cloud.  Otherwise, focus on the business value of your service.  What is the business value of the cloud?  There are plenty of companies, event companies, and product companies that make money on the cloud.  If that is what you are, then fine, use the word cloud.  If you are not marketing the cloud, then drop the word.

I've convinced myself to stop talking about the cloud in presentations and documents when I can.  It is better to talk about what your service does: it is highly available, secure, and it scales.  To many, the cloud means a service that has compromises in security and availability, and the word leaves a bad aftertaste.

VMware says Enterprises still need Data Centers, D'oh

The Cloud has changed the data center industry, and some people think the cloud means the end of data centers.  I am reminded of the popular wise man Homer Simpson's phrase: D'oh.

InformationWeek reports on the revelation that the Cloud is not the end of the enterprise data center.

VMware: Enterprises Still Need Data Centers

Charles Babcock
Editor At Large, InformationWeek
 

VMware's Gelsinger tells VMworld that cloud services can't yet handle tough compliance, governance and service level requirements.

The insight is here.

He didn't make this assertion in his keynote address at VMworld Monday or in his press conference on Tuesday. Gelsinger's prediction of legacy data center longevity happened after Marc Andreessen, former Netscape developer and now a venture capitalist, stated in an executive roundtable on the first day of VMworld that Silicon Valley startups shun building their own data centers.

"It's extremely rare to see a capital expense budget in a Silicon Valley startup anymore. It consists of four laptops [and cloud computing]," Andreessen said.

Gelsinger responded, "We disagree. People who say put everything into the cloud have never met a highly regulated customer. A lot of people like Graeme [Graeme Hay, head of infrastructure architecture at Credit Suisse] have real service-level agreements, real governance, real compliance needs that can't be easily met in the cloud."

Those who think the Cloud means the end of the enterprise data center will have a Homer Simpson moment, "D'oh".  Those who think the cloud doesn't create a threat for the enterprise data center will also have a "D'oh" moment.  The smart are realizing the Public Cloud breaks the monopoly of enterprise data centers, and it means customers now have a choice. Actually, many choices, as companies see an opportunity to compete against enterprise IT groups.

Cloud is a waste of money for some startups

There is a common myth that the Cloud is the low-cost solution vs. having your own IT environment.  The Cloud is only lower cost up to a certain point.  To prove the point, consider: do Google, Microsoft, or Amazon use the Public Cloud to host their services?  No.  Their lowest-cost way to host IT services is in their own IT environments.  The cloud has limits on where it makes sense.  The cloud is great when you are in the initial startup phase and you don't want to be distracted with buying servers, getting them hosted, configuration, management, etc.

Wired discusses a smaller company that made the switch from AWS to its own servers.

In Silicon Valley, tech startups typically build their businesses with help from cloud computing services — services that provide instant access to computing power via the internet — and Frenkiel’s startup, a San Francisco outfit called MemSQL, was no exception. It rented computing power from the granddaddy of cloud computing, Amazon.com.

But in May, about two years after MemSQL was founded, Frenkiel and company came down from the Amazon cloud, moving most of their operation onto a fleet of good old fashioned computers they could actually put their hands on. They had reached the point where physical machines were cheaper — much, much cheaper — than the virtual machines available from Amazon. “I’m not a big believer in the public cloud,” Frenkiel says. “It’s just not effective in the long run.”

There are details on the hardware costs, but beware: there are hidden costs, and you need to ask whether you have the people in your company who can run your own hardware.

This past April, MemSQL spent more than $27,000 on Amazon virtual servers. That’s $324,000 a year. But for just $120,000, the company could buy all the physical servers it needed for the job — and those servers would last for a good three years. The company will add more machines over that time, as testing needs continue to grow, but its server costs won’t come anywhere close to the fees it was paying Amazon.

Frenkiel estimates that, had the company stuck with Amazon, it would have spent about $900,000 over the next three years. But with physical servers, the cost will be closer to $200,000. “The hardware will pay for itself in about four months,” he says.
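For what it is worth, here is a minimal sketch of that arithmetic. The $27K April bill, the $120K hardware purchase, and the roughly $200K three-year total are from the Wired piece quoted above; splitting the remainder into colocation, power, and operations is purely my assumption.

```python
# Rough check of the MemSQL numbers quoted above. The $27K/month AWS bill,
# the $120K server purchase, and the ~$200K three-year total come from the
# Wired piece; the colo/power/ops split below is an assumption for illustration.

aws_monthly = 27_000            # USD/month on Amazon virtual servers (April bill)
hardware_capex = 120_000        # one-time server purchase, ~3-year lifetime
colo_and_ops_3yr = 80_000       # assumed colo/power/ops to reach the ~$200K figure
months = 36

aws_3yr = aws_monthly * months                  # ~$972K if the April bill stayed flat
own_3yr = hardware_capex + colo_and_ops_3yr
payback_months = hardware_capex / aws_monthly   # ignores colo/ops during payback

print(f"AWS over {months} months: ${aws_3yr:,}")
print(f"Own hardware over {months} months: ${own_3yr:,}")
print(f"Hardware pays for itself in about {payback_months:.1f} months")
```

The flat extrapolation lands a bit above Frenkiel's $900,000 estimate, and the payback works out to roughly the four months he mentions.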

There are limits to where the cloud makes sense.  Can you see where the line is drawn?  Where it starts to make sense to move out of the Public Cloud?