A great electromechanical system to learn from, the Apollo navigation system

I am sitting watching the SciHD channel's broadcast of the Moon Machines - Navigation Computer video.  You can watch the video here on YouTube.

Here is more about the navigation system.  We take for granted knowing our location.  Location is an immutable fact, except when you can't figure out where you are.  It's called being lost.

Sextant, Apollo Guidance and Navigation System
   An Apollo sextant and scanning telescope, from the collections of the National Air and Space Museum. The device penetrated the pressure hull of the Command Module.  Photo: National Air and Space Museum.

Date manufactured: 1960s, Kollsman Instrument Company

Description: Between December 1968 and December 1972, a total of nine Apollo spacecraft carried human crews away from the Earth to another heavenly body. Primary navigation for these missions was done from the ground. As a backup, and for segments of the mission where ground tracking was not practical, an on-board inertial navigation system was used. Astronauts periodically used a sextant to sight on stars and the horizons of the Earth and Moon to align the inertial system, and to verify the accuracy of the Earth-based tracking data.
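As a rough illustration of the kind of consistency check behind those star sightings, here is a small Python sketch of my own (not the actual Apollo guidance code): it compares the measured angle between two sighted stars against their catalog angle, where a large difference would suggest a bad sighting or a misaligned platform.  The star vectors and tolerance below are made-up values.

    import math

    def angle_between(u, v):
        """Angle in degrees between two 3-vectors."""
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        dot = sum(a * b for a, b in zip(u, v)) / (norm_u * norm_v)
        dot = max(-1.0, min(1.0, dot))  # clamp for numerical safety
        return math.degrees(math.acos(dot))

    # Hypothetical catalog unit vectors for two navigation stars (made-up values).
    star_a_catalog = (0.9397, 0.3420, 0.0000)
    star_b_catalog = (0.7660, 0.0000, 0.6428)

    # Hypothetical measured unit vectors from sextant sightings (made-up values).
    star_a_measured = (0.9396, 0.3423, 0.0005)
    star_b_measured = (0.7658, 0.0004, 0.6430)

    catalog_angle = angle_between(star_a_catalog, star_b_catalog)
    measured_angle = angle_between(star_a_measured, star_b_measured)
    difference = abs(catalog_angle - measured_angle)

    print(f"catalog angle:  {catalog_angle:.3f} deg")
    print(f"measured angle: {measured_angle:.3f} deg")
    print(f"difference:     {difference:.4f} deg")

    # A small difference suggests consistent sightings; a large one means re-sight or re-align.
    TOLERANCE_DEG = 0.05  # made-up tolerance
    print("sightings look consistent" if difference < TOLERANCE_DEG else "re-sight the stars")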

Canary in a Data Center, maybe it's time to break down the security, IT, and facility barriers

Fast Company has a post on the Canary intelligent home security system.  But my first thought is that this would be great in a data center, colocation facility, or server closet.  The biggest problem would be that it threatens the silos of the security, facilities, and IT operations teams and their specialized systems.

I know one of you out there will take the leap and order one of these for your server area.

BTW, I have a home video recording system and it is so much better than a typical security system approach.

The hardware has these features.

[Image: Canary hardware features]

The software has these features.

[Image: Canary software features]

Counting Servers is Easy, there are a lot of other things that are much harder

James Hamilton has a post saying that it is hard to count servers.

At the Microsoft World-Wide Partners Conference, Microsoft CEO Steve Ballmer announced that “We have something over a million servers in our data center infrastructure. Google is bigger than we are. Amazon is a little bit smaller. You get Yahoo! and Facebook, and then everybody else is 100,000 units probably or less.”

That’s a surprising data point for a variety of reasons. The most surprising is that the data point was released at all. Just about nobody at the top of the server world chooses to boast with the server count data point. Partly because it’s not all that useful a number but mostly because a single data point is open to a lot of misinterpretation by even skilled industry observers. Basically, it’s pretty hard to see the value of talking about server counts and it is very easy to see the many negative implications that follow from such a number.

What is hard is figuring out how many cores these servers have.  What is the age of the servers?  Is the oldest four years old, or three?  What is the rate of adding new data center capacity, and how does that relate to the overall growth in cores and storage?
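To make that concrete, here is a small Python sketch with entirely made-up numbers showing why a raw server count says so little: the total core count depends on the age mix of the fleet and the cores per server in each generation.

    # Hypothetical fleet broken down by server generation (all numbers made up).
    # Each entry: (share of fleet, cores per server)
    fleet_mix = {
        "4 years old": (0.25, 8),
        "3 years old": (0.25, 12),
        "2 years old": (0.25, 16),
        "1 year old":  (0.25, 16),
    }

    total_servers = 1_000_000  # the headline number

    total_cores = sum(share * total_servers * cores
                      for share, cores in fleet_mix.values())

    print(f"{total_servers:,} servers -> {int(total_cores):,} cores")

    # The same 1,000,000 servers, refreshed entirely with newer 16-core machines,
    # would be a very different amount of compute.
    all_new_cores = total_servers * 16
    print(f"all-new fleet -> {all_new_cores:,} cores")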

The one advantage Microsoft has in making a statement on server count is that the other companies will not say what theirs is.

The first question when thinking about this number is where does the comparative data actually come from?  I know for sure that Amazon has never released server count data. Google hasn’t either although estimates of their server footprint abound. Interestingly the estimates of Google server counts 5 years ago was 1,000,000 servers whereas current estimates have them only in the 900k to 1m range.

We'll see if others speak up on server count or not.  

The US Census Bureau has for years conducted a survey of manufacturing capacity.

 

Quarterly Survey of Plant Capacity Utilization (QPC)

The Survey of Plant Capacity Utilization provides statistics on the rates of capacity utilization for the U.S. manufacturing and publishing sectors.

  • The Federal Reserve Board (FRB) and The Department of Defense (DOD) co-fund the survey.
  • The survey collects data on actual, full, and emergency production levels.
  • Data are obtained from manufacturing and publishing establishments by means of a mailed questionnaire.
  • Respondents are asked to report actual production, an estimate of their full production capability, and an estimate of their national emergency production.
  • From these reported values, full and emergency utilization rates are calculated.
  • The survey produces full and emergency utilization rates for the manufacturing and publishing sectors defined by the North American Industry Classification System (NAICS).
  • Final utilization rates are based on information collected from survey respondents.

Wouldn't it be useful for the FRB and DOD to understand data center capacity and utilization?  It is hard to assess, but that doesn't mean it shouldn't be done.
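As a sketch of what an analogous data center survey could calculate, here is a small Python example patterned loosely on the QPC's actual-versus-full-capability ratio.  The sites and numbers are hypothetical.

    # Hypothetical data center reporting, loosely mirroring the QPC structure
    # (actual output vs. full capability vs. emergency capability). All numbers made up.
    sites = {
        "Site A": {"actual_kw": 6_500, "full_kw": 10_000, "emergency_kw": 12_000},
        "Site B": {"actual_kw": 3_200, "full_kw": 4_000,  "emergency_kw": 4_500},
    }

    for name, s in sites.items():
        full_util = s["actual_kw"] / s["full_kw"]
        emergency_util = s["actual_kw"] / s["emergency_kw"]
        print(f"{name}: full utilization {full_util:.0%}, "
              f"emergency utilization {emergency_util:.0%}")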

The peak of dual-processor servers is coming, Intel sets a new path toward the mainframe

Twenty-four years ago, in 1989, Compaq released the SystemPro, the first dual-processor, RAID-capable Intel server.

At its initial release in November 1989, the SystemPro supported up to two 33 MHz 386 processors, but early in 1990 33 MHz 486 processors became an option (the processors were housed on proprietary daughterboards).

The SystemPro, along with the simultaneously released Compaq Deskpro 486, was one of the first two commercially available computer systems containing the new EISA bus. The SystemPro was also one of the first PC-style systems specifically designed as a network server, and as such was built from the ground up to take full advantage of the EISA bus. It included such features as multiprocessing (the original systems were asymmetric-only), hardware RAID, and bus-mastering network cards. All models of SystemPro used a full-height tower configuration, with eight internal hard drive bays.

Over the past 24 years the data center has seen steady growth in dual-processor servers.

Yesterday Intel announced a re-architecting of the datacenter.

[Image: Intel slide on re-architecting the datacenter]

And the future is not dual-processor servers delivering compute, I/O, and memory.  Pooled compute, pooled memory, and pooled I/O looks like a mainframe.

[Image: Intel slide on pooled compute, pooled memory, and pooled I/O]
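To illustrate the idea of composing servers out of pools rather than buying fixed dual-processor boxes, here is a toy Python sketch of a rack-scale allocator.  It is purely conceptual; the pool sizes, names, and API are mine, not Intel's design.

    # Toy model of rack-scale pooled resources (all names and sizes are hypothetical).
    class ResourcePool:
        def __init__(self, cores, memory_gb, nic_ports):
            self.cores = cores
            self.memory_gb = memory_gb
            self.nic_ports = nic_ports

        def compose_node(self, cores, memory_gb, nic_ports):
            """Carve a 'logical server' out of the shared pools, if capacity allows."""
            if cores > self.cores or memory_gb > self.memory_gb or nic_ports > self.nic_ports:
                raise RuntimeError("not enough pooled capacity for this node")
            self.cores -= cores
            self.memory_gb -= memory_gb
            self.nic_ports -= nic_ports
            return {"cores": cores, "memory_gb": memory_gb, "nic_ports": nic_ports}

    rack = ResourcePool(cores=1024, memory_gb=16_384, nic_ports=64)

    # Instead of fixed dual-processor boxes, each workload gets the shape it needs.
    db_node = rack.compose_node(cores=64, memory_gb=2_048, nic_ports=4)
    web_node = rack.compose_node(cores=8, memory_gb=64, nic_ports=1)

    print("database node: ", db_node)
    print("web node:      ", web_node)
    print("remaining pool:", rack.cores, "cores,", rack.memory_gb, "GB,", rack.nic_ports, "ports")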

Most media coverage is focusing on the newly announced processors.  That is the old world of thinking.

[Image: Intel slide on the new processors]

Intel makes the point that supercomputers are moving from proprietary designs to standards.

[Image: Intel slide on supercomputers moving from proprietary to standards]

And a diversity of workloads.  High CPU, memory, and I/O demands were once the realm of supercomputers and mainframes.

[Image: Intel slide on the diversity of workloads]

Intel is also driving innovation at the low end, but those are not the systems to run the high-resource workloads.

Traditional servers are also evolving. To meet the diverse needs of datacenter operators who deploy everything from compute intensive database applications to consumer facing Web services that benefit from smaller, more energy-efficient processing, Intel outlined its plan to optimize workloads, including customized CPU and SoC configurations.

Do you care more about Top Supercomputers in China and NSA or Massive Clusters at Google, Facebook, Microsoft, and Amazon?

There is news that China holds the world record for the fastest supercomputer.

The ten fastest supercomputers on the planet, in pictures

Chinese supercomputer clocks in at 33.86 petaflops to break speed record.

A Chinese supercomputer known as Tianhe-2 was today named the world's fastest machine, nearly doubling the previous speed record with its performance of 33.86 petaflops. Tianhe-2's ascendance was revealed in advance and was made official today with the release of the new Top 500 supercomputer list.

The media will gladly write about who has the biggest and most powerful supercomputer.

As one of my friends who has worked on supercomputer data centers said, we realized we could reduce a lot of costs in the data center, because the supercomputer would often have weekly maintenance intervals as well as monthly and quarterly ones.  Components are constantly failing, and yes, there is a degree of isolation in the failures, but you eventually need to repair them, which can mean a complete shutdown.  These shutdowns are when data center maintenance can be performed.

But at Google, Facebook, Microsoft, and Amazon there is no time to shut down services.  Hundreds of thousands of servers need to run all the time.
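A rough back-of-the-envelope calculation in Python (my numbers, not anyone's published figures) shows how much those maintenance windows cost in availability, and why the web-scale operators cannot accept them.

    HOURS_PER_YEAR = 365 * 24  # 8,760

    # Hypothetical maintenance schedule for a supercomputer-style facility (made-up numbers).
    weekly_hours = 52 * 4      # 4-hour weekly window
    monthly_hours = 12 * 8     # 8-hour monthly window
    quarterly_hours = 4 * 24   # 24-hour quarterly window
    downtime = weekly_hours + monthly_hours + quarterly_hours

    availability = 1 - downtime / HOURS_PER_YEAR
    print(f"planned downtime: {downtime} hours/year")   # hundreds of hours
    print(f"availability:     {availability:.2%}")

    # A service aiming for 'four nines' gets less than an hour of downtime per year.
    four_nines_budget = HOURS_PER_YEAR * (1 - 0.9999)
    print(f"99.99% budget:    {four_nines_budget:.2f} hours/year")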

Amazon put up a supercomputer entry in 2011, and it is still on the list, now ranked 127.

List     Rank  System                                             Vendor     Total Cores  Rmax (TFlops)  Rpeak (TFlops)  Power (kW)
06/2013  127   Amazon EC2 Cluster, Xeon 8C 2.60GHz, 10G Ethernet  Self-made  17,024       240.1          354.1
11/2012  102   Amazon EC2 Cluster, Xeon 8C 2.60GHz, 10G Ethernet  Self-made  17,024       240.1          354.1
06/2012   72   Amazon EC2 Cluster, Xeon 8C 2.60GHz, 10G Ethernet  Self-made  17,024       240.1          354.1
11/2011   42   Amazon EC2 Cluster, Xeon 8C 2.60GHz, 10G Ethernet  Self-made  17,024       240.1          354.1
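One thing the table does make easy is computing how efficiently the cluster turns theoretical peak into measured Linpack performance.  The small Python calculation below uses the Rmax and Rpeak figures from the listing above.

    # Figures from the Top500 listing above for the Amazon EC2 cluster entry.
    total_cores = 17_024
    rmax_tflops = 240.1   # measured Linpack performance
    rpeak_tflops = 354.1  # theoretical peak

    efficiency = rmax_tflops / rpeak_tflops
    tflops_per_core = rmax_tflops / total_cores

    print(f"Linpack efficiency: {efficiency:.1%}")                      # roughly 68%
    print(f"Rmax per core:      {tflops_per_core * 1000:.2f} GFlops")   # roughly 14 GFlops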

Can you imagine if Google, Facebook, Microsoft, or Amazon put up their clusters as an entry?

Part of the advantage companies like Google have is that they have teams of people, led by guys like Jeff Dean, who really think hard about compute clusters.  Here is a presentation Dean gave 4 years ago.

[Images: slides from Jeff Dean's presentation]

Google, Facebook, Microsoft, and Amazon are solving the problem of keeping supercomputer-scale performance running 24x7, 365 days a year.  I think this type of innovation affects us much more than who has the fastest supercomputer, which requires hundreds of hours of downtime a year for maintenance.