A New Option for Moving Out of AWS: Forsythe's Data Center in 1,000 sq ft Increments

Moving out of AWS can save a lot of money. Huh? Yes, here is one public disclosure from Moz's CEO:

Building our private cloud

We spent part of 2012 and all of 2013 building a private cloud in Virginia, Washington, and mostly Texas.

This was a big bet with over $4 million in capital lease obligations on the line, and the good news is that it's starting to pay off. On a cash basis, we spent $6.2 million at Amazon Web Services, and a mere $2.8 million on our own data centers. The business impact is profound. We're spending less and have improved reliability and efficiency.

Our gross profit margin had eroded to ~64%, and as of December, it's approaching 74%. We're shooting for 80+%, and I know we'll get there in 2014.
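
To put the math in perspective, here is a quick back-of-the-envelope sketch in Python. The infrastructure spend figures come from the quote above; the revenue number is purely an assumed placeholder for illustration, not a Moz disclosure.

```python
# Back-of-the-envelope margin math using the figures quoted above.
# Only the infrastructure costs and margin percentages come from the
# post; revenue is an assumption for illustration.

aws_spend = 6.2e6          # cash spent at AWS (from the post)
private_dc_spend = 2.8e6   # cash spent on own data centers (from the post)
assumed_revenue = 30e6     # hypothetical annual revenue for illustration

savings = aws_spend - private_dc_spend
margin_lift = savings / assumed_revenue

print(f"Cash savings: ${savings/1e6:.1f}M")
print(f"Margin impact at ${assumed_revenue/1e6:.0f}M revenue: "
      f"{margin_lift:.1%} of revenue")
# With these numbers the ~$3.4M delta is roughly an 11-point swing,
# the same ballpark as the ~64% -> ~74% gross margin move cited.
```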


So you want to move out of AWS, but you dread the task of finding something the right size. A cage in a colocation facility? Seems too old school. A wholesale pod? Too big, and you aren't ready to jump into managing your own electrical and mechanical infrastructure.

How about 1,000 sq ft of data center space configured exactly the way you want? Need more? Get another 1,000 sq ft. This is what Forsythe Data Centers has announced with its latest data center, offering the solution in the middle of this table.

[Image: table comparing retail colocation, 1,000 sq ft suites, and wholesale data center options]



“Forsythe’s facility offers the flexibility and agility of the retail data center market, in terms of size and shorter contract length, with the privacy, control and density of large-scale, wholesale data centers,” said Albert Weiss, president of Forsythe Data Centers, Inc., the new subsidiary managing the center. He is also Forsythe Technology’s executive vice president and chief financial officer.

I got a chance to talk to Steve Harris, and the flexibility for customers to have multiple suites designed exactly to support their gear is a dream come true for those who know that one size fits all usually means you are wasting money somewhere. You could have one suite just for storage, tape backup, and other gear that is more sensitive to heat. The high-temperature gear could be right next to the storage suite. You could have a higher level of redundancy for some equipment and less for others in another suite.

And just like the cloud, adding capacity is so much easier than "I need to move to a bigger cage." Just add another suite.

How much power do you want per rack? What does a suite look like?

[Image: example suite layout and power-per-rack options]
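
How a 1,000 sq ft suite pencils out depends entirely on your density, so here is a rough sizing sketch. All of the densities and footprints in it are my own illustrative assumptions, not Forsythe's specs.

```python
# Rough sizing of a 1,000 sq ft suite. Every number here is an
# illustrative assumption; the post does not give Forsythe's actual
# layout or density figures.

suite_sq_ft = 1000
usable_fraction = 0.5      # assume ~half the floor goes to aisles/clearance
rack_footprint_sq_ft = 10  # assume ~10 sq ft per rack incl. service area

racks = int(suite_sq_ft * usable_fraction / rack_footprint_sq_ft)

for kw_per_rack in (4, 8, 16):  # low, medium, high density
    total_kw = racks * kw_per_rack
    print(f"{racks} racks at {kw_per_rack} kW/rack -> {total_kw} kW suite load")
```

Under those assumptions a single suite spans everything from a ~200 kW storage room to a ~800 kW high-density hall, which is exactly why per-suite design beats one size fits all.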

Oh yeah, and the data center is green, too.

The facility is being designed to comply with U.S. Green Building Council LEED certification standards for data centers and to obtain Tier III certification from the Uptime Institute, a certification currently held by approximately 60 data centers in the U.S. and 360 worldwide, few of which are colocation facilities.

FedEx's Data Center is humming at Xmas

Here is a video on FedEx’s latest data center in Colorado Springs.

The news site is here.

"Saving money and saving resources, they go hand in hand. If it's operating efficiently and sustainability, then we've hit the bulls eye," according to Mitch Jackson, VP of Environmental Affairs for FedEx. That's a very important target, when managing tens of millions of packages a day, and billions of pieces of information.

GM Greens Its Data Center

I’ve been staring at these browser tabs and have been meaning to post on GM’s green data center efforts.

Here is the official GM press release.  They focused on LEED.

GM’s LEED Gold Data Center Drives IT Efficiency

Fri, Sep 13 2013

WARREN, Mich. – A flywheel for battery-free backup power and in-row cooling that reduces the need for electricity contribute to a 70 percent reduction in energy use at General Motors’ world-class Enterprise Data Center, which has earned Gold certification by the U.S. Green Building Council’s LEED, or Leadership in Energy and Environmental Design, program.

Fewer than 5 percent of data centers in the U.S. achieve LEED certification, according to the building council. GM’s data hub on its Technical Center campus in this Detroit suburb is the company’s fifth LEED-certified facility and second brownfield project.

Ars Technica does a much better job of telling the story.

Waterfalls and flywheels: General Motors’ new hyper-green data center

Ars gets a look inside at the first GM-owned data center in nearly 20 years.

A look down the completed "data hall" of GM's Warren Enterprise Data Center. With 2,500 virtual servers up and running, the center is at a tiny fraction of its full capacity. (Image: General Motors)

WARREN, Michigan—General Motors has gone through a major transformation since emerging from bankruptcy three years ago. Now cashflow-positive, the company is in the midst of a different transformation—a three-year effort to reclaim its own IT after 20 years of outsourcing.

Here are few more details.

Building a cloud, under one roof

GM's IT Operations and Command Center, where all of GM's IT infrastructure—including its partner network, OnStar systems, and design and engineering systems—is monitored and controlled.

The first step in that transformation, Liedel said, was converting everyone running its IT operations to GM employees. Next came centralizing control over the company's widely-scattered IT assets.

So far, three of the company's 23 legacy data centers have been rolled into the new Warren data center. That's eliminated a significant chunk of the company's wide-area network costs. "We have 8,000 engineers at (Vehicle Engineering Center) here," Liedel said. And those engineers are pushing around big chunks of data—the "math" for computer-aided design, computer-aided manufacturing, and a wide range of high-performance computing simulations.

And GM chose flywheels.

Almost no batteries required

One of the Warren Enterprise Data Center's two diesel generators.

Aside from its energy efficiency, GM's Warren Data Center picks up green cred in the way it handles its emergency power. Instead of using an array of lead-acid batteries to provide current in the event of an interruption of power, the data center is equipped with uninterruptible power supplies from Piller that use 15,000 pound flywheels spinning at 3,300 revolutions per minute.
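
For a feel of how much energy one of those wheels holds, here is a rough physics sketch. The mass and speed come from the article; the solid-disk geometry and radius are assumptions I made for illustration.

```python
import math

# Energy stored in one flywheel: E = 0.5 * I * w^2.
# Mass (15,000 lb) and speed (3,300 RPM) come from the article; the
# solid-disk shape and 0.5 m radius are assumptions for illustration.

mass_kg = 15000 * 0.4536               # 15,000 lb in kilograms
rpm = 3300
radius_m = 0.5                         # assumed disk radius

omega = rpm * 2 * math.pi / 60         # angular velocity, rad/s
inertia = 0.5 * mass_kg * radius_m**2  # solid disk: I = 0.5 * m * r^2
energy_j = 0.5 * inertia * omega**2

print(f"Stored energy: {energy_j/1e6:.0f} MJ (~{energy_j/3.6e6:.1f} kWh)")
# At a 1 MW critical load that is on the order of ~50 seconds of
# ride-through before the diesels must pick up -- an upper bound,
# since the wheel can't be drained all the way to a standstill.
print(f"Ride-through at 1 MW: ~{energy_j/1e6:.0f} s (upper bound)")
```

Tens of megajoules per wheel is plenty to bridge the gap to generator start without a single lead-acid cell, which is where the green cred comes from.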

Lego rendering of Data Center History

Data centers are what almost all of you care about. And some of you may enjoy Legos. Here is a blog post that combines both.

Datacenter History: Through the Ages in Lego

The data center has changed dramatically through the ages, as our Lego minifigures can testify!

As a rule, I don’t participate in contests: There’s usually little reward, considering chances of winning. But when Juniper Networks asked me to build a datacenter from Lego bricks, I took a second look. And, seeing that the winner can support a charity of their choice, I felt that this was an excellent opportunity for me to have some fun while doing some good!

The above post goes through history. For those of you who won't click through to the post, here is the modern Lego data center.

The Modern Datacenter

We now turn to today. Our modern datacenter evolved from the history shown here: We retain the same 19-inch rack mount system used for Colossus way back during World War II. All of our machines are “Turing Complete” like the ENIAC. We run UNIX and Windows Server on CPUs spawned from the PDP-11, and our Windowed GUIs reflect the Xerox Alto. Today’s multi-core servers and multi-threaded operating systems carry the lessons learned by Cray and Thinking Machines.

A modern datacenter, complete with an EMC VMAX, Juniper router, and rackmount servers

My Lego datacenter tour ends here, with two racks of modern equipment. At the rear is an EMC Symmetrix VMAX which, like the CM-5, calls attention to its black monolith shape with a light bar. At front is a Juniper T-Series router (white vertical cards with a blue top) rack-mounted with a number of gold servers. Our technician holds an iPad while walking across a smooth raised floor. I even used a stress-reducing blue color for the walls!

Although the Symmetrix model only has three Lego axes, the router rack features four: The servers sit on forward-facing studs while the router is vertical. Both use black side panels, reflecting today’s “refrigerator” design.

Compass DC serves up 1.2 MW increments to Savvis in Minneapolis-St Paul

I don't know about you, but it is refreshing to see a press release that says exactly what the first deployment is vs. future capacity. Savvis has a press release on a new data center lease in Minneapolis-St. Paul, and the second paragraph makes this statement:

The Savvis MP2 data center is designed to serve the region's growing demand for colocation, cloud, managed hosting and network services. Built to support 4.8 megawatts of IT load on 100,000 square feet of raised floor space, it will open with an initial 1.2 megawatts and 13,000 square feet of raised floor space.
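
Running those numbers is a quick sanity check on density (my arithmetic, not anything in the release):

```python
# Quick density check on the figures in the press release.

design_mw, design_sq_ft = 4.8, 100_000    # full build-out
initial_mw, initial_sq_ft = 1.2, 13_000   # opening phase

print(f"Design density:  {design_mw * 1e6 / design_sq_ft:.0f} W/sq ft")
print(f"Initial density: {initial_mw * 1e6 / initial_sq_ft:.0f} W/sq ft")
# The first 1.2 MW phase runs denser (~92 W/sq ft) than the full-build
# average (~48 W/sq ft): the opening hall is built out at higher density
# than the facility-wide design target.
```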

Compass Datacenters is serving up 1.2-megawatt increments to Savvis for the local market.

"We are excited to partner with CenturyLink's Savvis organization, which combines a global leadership position in data center excellence with a deep understanding of Minnesota market needs through the existing local CenturyLink presence," said Chris Crosby, chief executive officer of Compass Datacenters. "Working with Savvis to quickly facilitate expansion in the Minneapolis-St. Paul market, we've developed a streamlined strategy for future expansion and response to the growing demands of businesses in the region."

Now that Savvis has a taste of consuming in 1.2-megawatt increments anywhere it has a market need, we'll see how many more data centers start cropping up outside the major internet hubs.