AWS's unexplained reboot degrades trust. Wake up and focus on improving trust

AWS is secretive about how it runs its infrastructure.  Last night I saw that AWS is going to reboot a bunch of its EC2 instances on Friday.

Yikes: Big Amazon Web Services reboot on the way Friday

 


SUMMARY:

Many Amazon Web Services customers will soon be subjected to a reboot of their EC2 instances — but no one outside of AWS knows why.

This morning others in the press spread the news.

AWS users fret over downtime ahead of Amazon's massive EC2 reboot

ZDNet - 2 hours ago
Some AWS users have also expressed concern on the AWS user forum that they've been given too short notice to monitor services that may be affected during the maintenance event. Meanwhile, others have commended AWS for forcing a reboot at the ... 
 

AWS issuing 'urgent patch' to EC2 instances

Computer Business Review - 1 hour ago

AWS has said that not all instances of the impacted instance types will be rebooted, and that "if you relaunch an instance before the maintenance, you are not guaranteed to get an already-patched host." T1, T2, M2, R3, and HS1 instance types will not be affected.
 

Cue The Cloud Naysayers--Amazon Web Services Set To Nuke Bulk Servers

Forbes - 9 hours ago
The details, according to Von Eicken, are that Amazon Web Services (AWS) notified its customers today, Sept 24, that it will be rolling out an urgent patch to all hosts causing a maintenance reboot of nearly all EC2 instances starting September 26, 2014 and ...

So what is the big deal?  There is speculation that the reboot is for security issues.  Whatever the reason, Amazon is not saying for now.

Does this improve or degrade trust in Amazon?  

Don’t you think the messaging would be different if Amazon focused on improving trust in its cloud?

A conversation I wish I could have: Bill Loeffler, a Microsoft Cloud Architect, passed away Sept 2014

A friend forwarded the news that Bill Loeffler passed away from melanoma this month at the age of 54.


The friend asked if I knew Bill, and yes, I had many conversations with him on the concept of Infrastructure Patterns for IT in my Microsoft days.  Bill and I connected on LinkedIn 5 years ago, but he was based in NYC, so it never seemed convenient to chat about his work and mine.

Bill posted some good content on building a cloud infrastructure.

http://social.technet.microsoft.com/wiki/contents/articles/5711.private-cloud-infrastructure-as-a-service-self-service.aspx

Private Cloud Infrastructure as a Service Self Service

Self Service capability is a characteristic of private cloud computing and must be present in any implementation. The intent is to permit users to approach a self-service capability and be presented with options available for provisioning in an organization. The capability may be basic provisioning of a virtual machine with a pre-defined configuration or may be more advanced allowing configuration options to the base configuration and leading up to a platform capability or service.
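To make that idea concrete, here is a minimal sketch of what such a self-service catalog might look like. All of the names (VMTemplate, catalog, provision_vm) are invented for illustration and are not any real private cloud API:

```python
# Hypothetical self-service provisioning sketch; all names are
# illustrative, not a real private cloud product interface.
from dataclasses import dataclass

@dataclass
class VMTemplate:
    name: str      # catalog entry presented to the user
    vcpus: int
    ram_gb: int
    disk_gb: int

# Pre-defined configurations the organization exposes for provisioning.
catalog = {
    "small":  VMTemplate("small", vcpus=1, ram_gb=4, disk_gb=50),
    "medium": VMTemplate("medium", vcpus=4, ram_gb=16, disk_gb=200),
}

def provision_vm(template_name: str, **overrides) -> VMTemplate:
    """Basic case: provision a pre-defined configuration as-is.
    Advanced case: allow configuration options on the base template."""
    base = catalog[template_name]
    return VMTemplate(**{**base.__dict__, **overrides})

vm_basic = provision_vm("small")               # pre-defined config
vm_custom = provision_vm("medium", ram_gb=32)  # base config plus an option
```

The design point Bill's article makes is in the catalog: users pick from options the organization has already decided to offer, rather than requesting arbitrary hardware.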

http://social.technet.microsoft.com/wiki/contents/articles/4633.what-is-infrastructure-as-a-service.aspx 

What is Infrastructure as a Service?

In defining Infrastructure as a Service (IaaS), we need to drill into specific characteristics that a cloud platform provider must provide to be considered Infrastructure as a Service. This has been no easy task as nearly every cloud platform provider has recently promoted features and services designed to address the IaaS and cloud computing market. Fortunately, as the technology has evolved, a definition of cloud computing has emerged from the National Institute of Standards and Technology (NIST) that is composed of five essential characteristics, three service models, and four deployment models.

I looked up some of our shared connections on LinkedIn; I should set aside some time to chat with them about Bill.

Can you coerce people to think Mobile, Cloud, and Data First? IBM tries

I made the switch years ago to think about Mobile, Cloud, and Data as the ways to build solutions.  Mobile devices are with people so much more than a laptop or desktop, so information is at your fingertips more often.  The Cloud gets you around the monopoly of internal IT organizations and lets you scale faster.  Data has changed from forms people fill out that get stored in a SQL database (which gave comfort and control to those who controlled the schema) to a key-value pair approach where you can put in anything you want.

A name–value pair, key–value pair, field–value pair or attribute–value pair is a fundamental data representation in computing systems and applications. Designers often desire an open-ended data structure that allows for future extension without modifying existing code or data. In such situations, all or part of the data model may be expressed as a collection of tuples ⟨attribute name, value⟩; each element is an attribute–value pair.
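A quick illustration of why that open-endedness matters, as a minimal Python sketch (the record fields are invented): records can gain new attributes without a schema migration or changes to existing code.

```python
# Key-value (attribute-value) records: each record is just a collection
# of <attribute name, value> pairs, here modeled as Python dicts.
user_v1 = {"name": "Ann", "email": "ann@example.com"}

# Later, new attributes appear; no schema migration, and no change to
# existing records, which simply lack the new keys.
user_v2 = {"name": "Bob", "email": "bob@example.com",
           "mobile_push_token": "abc123", "last_gps_fix": (47.6, -122.3)}

for user in (user_v1, user_v2):
    # Readers tolerate missing attributes instead of failing on them.
    print(user["name"], user.get("mobile_push_token", "<no token>"))
```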

The problem is that the transition can be difficult for people who lived in a world of application software licenses and the associated hardware.

IBM is trying to coerce its staff.

IBM ruffles feathers in push to re-educate veteran employees for cloud, data, mobile

 


SUMMARY:

Some IBM Global Technology Services employees have been told they can keep their jobs (and 90 percent of their pay) as long as they retrain on hot new technologies.

Being more efficient in the data center by changing how you organize the hardware

So much of IT is governed by the self-optimizing behavior of different fiefdoms.  Security, Web, Data, Apps, Mobile, Marketing, Finance, and Manufacturing most of the time own their own gear.  I had a chance to talk to Gigaom's Jonathan Vanian about how data center clusters (sometimes called cores or pods) are used to organize IT resources efficiently vs. a departmental approach.  This has been done for so long that it's well proven.

Here is Jonathan’s post.

Want a more efficient data center? Maybe it’s time you tried a core and pod setup

 

AUG. 27, 2014 - 5:00 AM PDT


SUMMARY:

In order for companies to improve their internal data centers’ efficiency and improve their applications’ performance, many are turning to using a “core and pod” setup. In this type of arrangement, data center operators figure out the best configuration of common data center gear and software to suit their applications’ needs.

The fundamental idea is to let system architects define systems as resources for departments.

“You need to have some group that is looking at the bigger picture,” Ohara said. “A group that looks at what is acquired and makes it work all together.”

If more companies used this approach, they would be more efficient, and many times they could get better price performance, making them competitive with public clouds.
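Here is a rough sketch of that idea, with invented numbers: the central group defines the pod once as a standard unit, and capacity questions become questions about whole pods.

```python
# Minimal pod/core sketch; rack counts and server sizes are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class PodSpec:
    racks: int
    servers_per_rack: int
    cores_per_server: int

    @property
    def total_cores(self) -> int:
        return self.racks * self.servers_per_rack * self.cores_per_server

# One configuration, defined by the central team, deployed repeatedly.
standard_pod = PodSpec(racks=8, servers_per_rack=40, cores_per_server=16)
print(standard_pod.total_cores)  # 5120 cores per deployed pod
```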

Jonathan wrote about my perspective.

How pod and core configurations boost performance

Google, Facebook and eBay are all examples of major tech companies that have been using the pod and core setup, said Dave Ohara, a Gigaom Research analyst and founder of GreenM3. With massive data centers that need to serve millions of users on a daily basis, it’s important for these companies to have the ability to easily scale their data centers with user demand.

Using software connected to the customized hardware, companies can now program their gear to take on specific tasks that the gear was not originally manufactured for, like analyzing data with Hadoop, ensuring that resources are optimized for the job at hand and not wasted, Ohara said.

It used to be that the different departments within a company — such as the business unit or the web services unit — directed the purchases of rack gear as opposed to a centralized data center team that can manage the entirety of a company’s infrastructure.

Because each department may have had different requirements from each other, the data center ended up being much more bloated than it should have been and resulted in what Ohara referred to as “stranded compute and storage all over the place.”

And Jonathan was able to talk with Redapt, a systems integrator that has built many clusters/pods/cores.

Working with companies to build their data centers

At Redapt, a company that helps organizations configure and build out their data centers, the emergence of the pod and core setup has come out of the technical challenges companies like those in the telecommunications industry face when having to expand their data centers, said Senior Vice President of Cloud Solutions Jeff Dickey.

By having the basic ingredients of a pod figured out per your organization’s needs, it’s just a matter of hooking together more pods in case you need to scale out, and now you aren’t stuck with an excess amount of equipment you don’t need, explained Dickey.

Redapt consults with customers on what they are trying to achieve in their data center and handles the legwork involved, such as ordering equipment from hardware vendors and setting up the gear into customized racks. Dickey said that Redapt typically uses an eight-rack-pod configuration to power up application workloads of a similar nature (like multiple data-processing tasks).
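With a fixed pod design like that, scaling out reduces to back-of-envelope arithmetic, sketched below. The eight racks per pod follows Dickey's description; the servers per rack and the demand numbers are invented for the example.

```python
# Scale-out arithmetic for a fixed pod design.  Eight racks per pod is
# from Dickey's description; servers per rack and demand are invented.
RACKS_PER_POD = 8
SERVERS_PER_RACK = 40                      # assumed
SERVERS_PER_POD = RACKS_PER_POD * SERVERS_PER_RACK

current_pods = 3
required_servers = 1_500

# Ceiling division: you hook together whole pods, not loose racks.
needed_pods = -(-required_servers // SERVERS_PER_POD)
extra_pods = max(0, needed_pods - current_pods)
print(f"add {extra_pods} pods of {SERVERS_PER_POD} servers each")
```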

Inefficient data centers are fading vs. the rapid growth of the efficient - Google, AWS, Facebook, Microsoft

The NRDC fired off a blog post this morning that got a bunch of other media to discuss the waste in the data center industry.

Our study shows that many small, mid-sized, corporate and multi-tenant data centers still waste much of the energy they use. One of the key issues is that many of the roughly 12 million U.S. servers operate most of the time doing little or no work but still drawing power – up to 30 percent of servers are “comatose” and no longer needed but still drawing significant amounts of power, many others are grossly underutilized. However, opportunities abound to reduce energy waste in the data center industry as a whole. The technology exists, but systemic measures are needed to remove the barriers limiting its broad adoption across the industry. 


Over the last 5 years, the data centers of Google, Amazon, Facebook, Microsoft, and many others have been growing at a pace that blows away the rest.  The old dominants, the financials, have grown slowly or even declined, with the exception of the analytics groups that have built huge data farms.

Even though the NRDC raises concerns about the waste, the reality is the Cloud is helping put many of these old servers out of business.  But the top issue is that IT asset management is too often poorly executed, making it difficult to identify the servers that can be turned off.

Gigaom Research analyst Dave Ohara said the report brings up valid points, but more factors need to be considered. CPU utilization is just one metric, Ohara said via email. “RAM and hard-disks also use up energy … and can be just as underutilized …  The problem is that IT asset management is mostly done as a bookkeeping exercise, not as part of a technical IT operations team who purchases, owns and operates the servers.”

Too often, the people who operate the servers can't find the history of who owns the assets and what is on them.
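As a minimal sketch of the kind of check an IT operations team could run, assuming the utilization data is even collected (the hosts, metrics, and thresholds below are invented for illustration):

```python
# Hypothetical comatose-server check; metric names and thresholds are
# invented.  Per Ohara's point, CPU alone isn't enough: RAM and disks
# also draw power and can be just as underutilized.
samples = [
    # (hostname, avg_cpu_pct, avg_ram_pct, avg_disk_io_pct) over 30 days
    ("web-014",  1.2,  8.0,  0.3),
    ("etl-007", 62.0, 71.0, 40.0),
]

def looks_comatose(cpu, ram, disk_io, threshold=5.0):
    return cpu < threshold and ram < threshold * 2 and disk_io < threshold

for host, cpu, ram, disk in samples:
    if looks_comatose(cpu, ram, disk):
        # The hard part isn't this check; it's the asset record that says
        # who owns the box and what's on it before you power it off.
        print(f"{host}: candidate for decommission review")
```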