Paul Rand's Design Principles to think about in Data Centers

John Maeda, President of RISD, gave one of the better design-focused presentations at GigaOm Roadmap.

One area John focused on was how he brought Paul Rand to MIT to present.

Who is Paul Rand?

Paul Rand (born Peretz Rosenbaum, August 15, 1914 – November 26, 1996) was a well-known American graphic designer, best known for his corporate logo designs. Rand was educated at the Pratt Institute (1929-1932), the Parsons School of Design (1932-1933), and the Art Students League (1933-1934). He was one of the originators of the Swiss Style of graphic design. From 1956 to 1969, and beginning again in 1974, Rand taught design at Yale University in New Haven, Connecticut. Rand was inducted into the New York Art Directors Club Hall of Fame in 1972. He designed many posters and corporate identities, including the logos for IBM, UPS and ABC. Rand died of cancer in 1996.

Here is what John wrote about Paul Rand's visit and the parts he focused on.

It is ironic that 8 years later, I would return to MIT as a professor of design, and that I would host a lecture by Paul Rand at MIT, which I did on November 14 of last year. The time for the lecture was set at 10am. For those familiar with how an American university works, an early lecture is very rare because students usually study late into the night and are less apt to attend events in the morning. But Rand insisted that he speak in the morning. He said, "If someone isn't willing to wake up to hear me speak, I don't want to speak to them!"

The auditorium was packed beyond capacity with people from all over New England, some waking up as early as 5am to arrive in time for the lecture. The Director of the Media Lab, Professor Nicholas Negroponte, later remarked that during all his career at MIT he had never seen such an overwhelming audience for a morning lecture. Although conditions in the lecture hall were crowded, there was complete silence during the lecture as everyone's attention was completely focused on Rand.

The presentation was in a question-and-answer format.

JM: "What is design?"

"Design is the method of putting form and content together. Design, just as art, has multiple definitions; there is no single definition. Design can be art. Design can be aesthetics. Design is so simple, that's why it is so complicated."

What is bad design?

"What is the difference between 'good' design and 'bad' design?"

"A bad design is irrelevant. It is superficial, pretentious, ... basically like all the stuff you see out there today."

And here is the part that will resonate with those of you who love design.

"Most of your designs have lasted for several decades, what would you say is your secret?"

"Keeping it simple. Being honest, I mean, completely objective about your work. Working very hard at it."

Air-cooled, State-of-the-Art Molecular Lab/Office Shares Its Design and Assumptions

Air cooling has been a hot topic in the data center industry, with much of the attention going to Yahoo's chicken coop design, which people don't really talk about in much detail. What is disappointing is that when companies say they have a new and better way, they don't share the details. What kind of details?

UW has opened a new molecular engineering office/lab that has a focus on clean energy.

SEATTLE (AP) — As a building, the University of Washington's new molecular engineering lab is interesting in its own right — designed with cutting-edge features to make it energy efficient, and bright lab spaces with killer views of Mount Rainier and the Olympics.

But what really marks the newest building on the UW campus is the work going on inside, where scientists are designing proteins that could one day cure diseases, and clean technology materials that could make it easier to convert solar energy into electricity.

The building is mandated to meet LEED Silver and should achieve Gold. But the best part is the air cooling detailed in this PDF.

It would be nice if air-cooled data center projects shared this amount of data. Then people could harvest best practices that would apply to their own projects.

There are diagrams of the airflow.


Seattle climate data.

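The climate data is what turns the airflow diagrams into something you can act on. As a rough illustration, here is a minimal sketch in Python of the kind of free-cooling estimate such data enables; the temperature bins, setpoint, and margin below are made-up assumptions for illustration, not numbers from the UW PDF or any real weather dataset.

```python
# Hypothetical sketch: estimate air-side free-cooling hours from a climate
# temperature distribution. The bin counts are illustrative only -- they are
# NOT taken from the UW PDF or from real Seattle weather data.

# (upper-bound dry-bulb temperature in deg F, hours per year in that bin)
seattle_like_bins = [
    (40, 2300),
    (50, 2400),
    (60, 2100),
    (70, 1300),
    (80, 500),
    (90, 140),
    (100, 20),
]

SUPPLY_AIR_SETPOINT_F = 75   # assumed supply-air target
MIXING_MARGIN_F = 5          # assumed allowance for fan heat and mixing losses

def free_cooling_hours(bins, setpoint_f, margin_f):
    """Count the hours where outside air alone can meet the supply setpoint."""
    limit = setpoint_f - margin_f
    return sum(hours for upper_f, hours in bins if upper_f <= limit)

total_hours = sum(hours for _, hours in seattle_like_bins)
free_hours = free_cooling_hours(seattle_like_bins, SUPPLY_AIR_SETPOINT_F, MIXING_MARGIN_F)
print(f"Free-cooling hours: {free_hours} of {total_hours} ({free_hours / total_hours:.0%})")
```

With these illustrative bins the answer comes out around 90% of the year on outside air alone, which is exactly the kind of conclusion a document like the UW PDF lets you check rather than guess.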

Understanding Data Center Types - Cloud, Hosted, Colocation, Wholesale, Owned

I was having a conversation with a client and it occurred to me that most company executives probably don't understand the different data center types that exist. It seemed worthwhile to describe the different types and how executives should understand the differences. They hear terms like cloud, hosted, colo, and wholesale all the time, but what do these mean?

Besides doing a few searches, I reached out to Jones Lang LaSalle's Michael Siteman to see what he had on data center types. It was quite thorough, but I needed something simpler, something a non-data center executive could understand in one PPT slide.

I am sure this seems obvious to most of you, but trying to get this into one slide was a good exercise.

Here is my current thinking as a slide.


The text is here.

• Cloud (VMs) – bring your code and data; OS, server, network, and storage are available for lease; on-site operations and IT services are all done by the cloud provider

• Hosted (Servers) – physical servers are the unit of delivery within the IT environment design; hardware for lease

– Similar to an internal IT service group at small scale

– Cloud hybrids are more common

• Colocation (Racks) – bring your IT equipment and pick your ISP; power, space, and facility operations are provided

– Lease space and power capacity – for example, 10 racks @ 10 kW/rack

• Wholesale (MW of capacity) – you rent space, power, cooling, and an open floor plan; you decide the layout of your space – power, cooling, network

– Need facility operations as well as IT operations for your space

– For example, lease 10,000 sq ft @ 1 MW

• Owned DC (everything) – you have everything under your control

– Big things gained vs. the previous steps: you pick the site, you pick the design that best meets your business needs, and the whole facility is yours

– Problem: if you haven't built a lot of DCs, you will make mistakes

– 2 MW and up; the big guys (Google, Facebook, Amazon, and Microsoft) are building 10 – 30 MW

 
I think this works pretty well: technical enough, with details like power consumption, yet high level enough to convey the differences.
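To push on that a bit, here is a minimal sketch in Python of the same taxonomy as a data structure; the layer names and the tenant-versus-provider split are my own reading of the slide, not something from JLL or an industry standard.

```python
# Sketch of the slide as a data structure. Layer names and the tenant/provider
# split are my interpretation of the slide above, not an industry standard.

LAYERS = ["site", "building", "power/cooling", "racks", "servers", "OS/VM", "code/data"]

DC_TYPES = {
    # tenant-owned layers (what you bring)                       unit of lease
    "cloud":      {"tenant": ["code/data"],                      "unit": "VMs"},
    "hosted":     {"tenant": ["code/data", "OS/VM"],             "unit": "physical servers"},
    "colocation": {"tenant": ["code/data", "OS/VM", "servers"],  "unit": "racks, e.g. 10 racks @ 10 kW"},
    "wholesale":  {"tenant": ["code/data", "OS/VM", "servers", "racks"],
                   "unit": "space + power, e.g. 10,000 sq ft @ 1 MW"},
    "owned":      {"tenant": LAYERS,                             "unit": "the whole facility, 2 MW and up"},
}

def provider_layers(dc_type):
    """Everything the tenant does not bring falls to the provider."""
    tenant = set(DC_TYPES[dc_type]["tenant"])
    return [layer for layer in LAYERS if layer not in tenant]

for name, info in DC_TYPES.items():
    runs = ", ".join(provider_layers(name)) or "nothing"
    print(f"{name:>10}: lease {info['unit']}; provider runs {runs}")
```

As a sanity check on the slide's numbers, the wholesale example works out to 1 MW over 10,000 sq ft, or roughly 100 watts per square foot of leased space.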

A Seth Godin Post on the Collision of Code and Humans Can Be Applied to Data Centers

Seth Godin has a post on the collision of code and humans.

Thriving in a wet environment

If you've ever fixed any kind of machinery, you know that a device that's exposed to the elements is incredibly difficult to maintain. A washing machine or the underside of a car gets grungy, fast.

On the other hand, the driest, cleanest environment of all is the digital one. Code stays code. If it works today, it's probably going to work tomorrow.

A challenge for data centers comes to mind with his next paragraph.

Whatever we build gets misunderstood, corroded and chronic, and it happens quickly and in unpredictable ways. That's one reason why the web is so fascinating--it's a collision between the analytic world of code and wet world of people.

Data Center Innovators: The Guys Who Are Willing to Break the Rules

I just had a great day at DataCenter Dynamics Seattle, and some of the best conversations were with the guys (sorry, no offense to women, but there aren't a whole lot of women in data center design, construction, and operations) who break the rules. Guys who don't live in the safe zone of a risk-averse culture.

Why is this important? Well, if you have plenty of money for CapEx and OpEx for your data center, it is not a problem. But in these times, that is a dying mindset. The survivors are those who are willing to break the rules of convention. The so-called expertise in the industry, coming from long-established practices, has less value.

For an example of one guy who is winning and breaking the rules, check out this WSJ blog post on Ross Brawn.

Ross Brawn: The Most Dangerous Man in Formula One?


When Nico Rosberg won the Chinese Grand Prix for Mercedes last month, it marked a historic milestone in Formula One racing. The victory was the first by the Mercedes team in F1 since 1955, and it was Rosberg’s first on the world’s foremost auto-racing circuit after 111 races.

The WSJ post calls Ross a genius.

But it also provided confirmation of something that has become increasingly clear in recent years: In the world of F1 racing, and perhaps even the sports world period, Ross Brawn, the Mercedes team principal, may be the closest thing there is to a certifiable genius.

What does Ross do?  He knows where he can innovate.

Brawn thus has what may be an unparalleled knowledge of the arcane regulations and specifications that make up F1's rule book. By navigating its gray areas, he has produced some of the most creative—and contentious—innovations in F1 history.

Ross’s magic is done within budget constraints.

In today’s F1, Brawn’s knack for operating at the limit of the rules is more valuable than ever. In an era when cost-cutting measures have restricted how much testing teams can do and how many technical staffers they can employ, one innovation can provide an insurmountable edge.

It is easy to break the rules when you have lots of money, but those days are gone. The really smart data center guys are acting like a Ross Brawn.