64-bit ARM Servers Coming Sooner Than Expected, Thanks to Apple's A7 Waking Up the 64-bit Opportunity

CNET has a post on 64-bit ARM processors catching ARM and other fabs flat-footed.

Phone and tablet makers are rushing to embrace 64-bit designs, surprising even those executives behind the chip platform.

Tom Lantzsch, ARM's executive vice president of corporate strategy, spoke with CNET after the company reported first-quarter earnings on Wednesday.

ARM supplies virtually all of the basic processor designs for phones and tablets running on Android.

"Certainly, we've had big uptick in demand for mobile 64-bit products. We've seen this with our [Cortex] A53, a high-performance 64-bit mobile processor," he said.

This caught the chip designer's executives off guard, as they believed that 64-bit ARM would only be needed for corporate servers in the initial phase of the technology's rollout.

"We've been surprised at the pace that [64-bit] is now becoming mobile centric. Qualcomm, MediaTek, and Marvell are examples of public 64-bit disclosures," he said.

The past assumption was that only large memory addressing would drive the need for 64-bit chips. But thanks to Apple's A7, the market has found a new feature to differentiate on.
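
As a quick sanity check on that assumption, here is the back-of-envelope arithmetic (plain Python, nothing vendor-specific) behind the addressing argument:

    # A flat 32-bit address space tops out at 4 GiB, which phones were only
    # starting to approach in 2014; 64-bit removes the ceiling entirely.
    GIB = 2**30
    EIB = 2**60
    print(f"32-bit: {2**32 / GIB:.0f} GiB addressable")   # 4 GiB
    print(f"64-bit: {2**64 / EIB:.0f} EiB addressable")   # 16 EiB

Four gigabytes was still roomy for a 2014 phone, which is why addressing alone was a weak driver and the A7's marketing halo mattered so much.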

This echoes comments from a Taiwan Semiconductor Manufacturing Co. executive last week, who said the conversion to 64-bit in the mobile device industry has accelerated in the last six months, after Apple announced its 64-bit A7 processor -- also an ARM design.

How soon are the 64-bit chips showing up?  By Christmas.

So, when will the transition to 64-bit processors happen for Android phones and tablets?

"We believe the capability will be there for a 64-bit phone by Christmas," he said, referring to phones and tablets with 64-bit processors.

A peek into Amazon's approach to servers comes from LSI

Here is a blog post by Silicon Angle with Robert Ober. In this post Robert discusses some of Amazon's approaches to IT hardware.

“The evolutionary direction we’re going in the data center, you can call it many things – you can call it pooling, you can call it disaggregation – but at a large scale, at a rack or multiple racks or a hyperscale data center, you wanna start pulling apart the parts,” he remarks.

Optimizing infrastructure down to the component level has many benefits, both architectural and operational, but Ober considers the improvements in thermal management to be the most notable. The reason, he details, is that processors, DRAM, flash and mechanical disk all have different temperature thresholds that have to be sub-optimally balanced in traditional configurations.
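
Here is a minimal sketch of that thermal argument, using made-up inlet-temperature limits per component class (real limits vary by part and vendor):

    # Hypothetical temperature limits per component class, in Celsius.
    limits_c = {
        "cpu": 45,    # processors tolerate the warmest air
        "dram": 40,
        "flash": 35,
        "disk": 30,   # mechanical disks want the coolest air
    }

    # Traditional chassis: every component shares one airflow, so the box
    # must be cooled to the most sensitive part's limit.
    print(f"shared-chassis setpoint: {min(limits_c.values())} C")

    # Disaggregated pools: each resource pool gets its own setpoint, so the
    # CPU and DRAM pools can run warmer -- that is where the savings come from.
    for part, limit in limits_c.items():
        print(f"{part}-pool setpoint: {limit} C")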

Here is a video you can watch of Robert discussing some of these ideas. The interview was done at OCP Summit V.

Dudes (and Gals), Why Would Google Design Server Silicon? Silicon for High-Performance Clusters Makes More Sense

EETimes has an article quoting John Doerr saying that Google has chip designers.

Google Ramps Up Chip Design

2/12/2014 02:08 PM EST

Would Google really be designing its own server chips?  The media is all excited that Google would develop ARM servers.  But how much better can Google make a two-processor server run?

What makes way more sense is Google designing silicon for a high-performance fabric that works across hundreds or thousands of servers, or maybe even 10,000 to 100,000 servers.  If you look at Partha's resume you can see he has the background for disaggregated systems and systems that scale, like a supercomputer.
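
For a feel of the scale, here is a generic fat-tree (folded-Clos) port-count calculation; the switch radixes are arbitrary examples and none of this is anything Google has disclosed:

    # A 3-tier fat-tree built from k-port switches reaches k**3 / 4 servers
    # at full bisection bandwidth and needs 5 * k**2 / 4 switches in total.
    for k in (32, 48, 64, 128):
        servers = k**3 // 4
        switches = 5 * k**2 // 4
        print(f"{k:>3}-port switches: {servers:>9,} servers, {switches:>6,} switches")

Past roughly 10,000 servers the pressure lands on switch radix and fabric latency rather than on the server socket, which is exactly the kind of problem you would hire chip designers for.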

Amazon's James Hamilton throws in support for the idea of Blu-ray cold storage

Facebook showed its proof-of-concept Blu-ray-based cold storage solution at Open Compute Summit V.

Facebook has built a prototype system for storing petabytes on Blu-ray

SUMMARY:

During the Open Compute Summit in San Jose, Facebook VP of Engineering Jay Parikh shared some big statistics for the company’s cold storage efforts, including those for a prototype Blu-ray system capable of storing a petabyte of data today.

James Hamilton posts on the idea of optical storage.

 

Next week, Facebook will show work they have been doing in cold storage mostly driven by their massive image storage problem. At OCP Summit V an innovative low-cost archival storage hardware platform will be shown. Archival projects always catch my interest because the vast majority of the world’s data is cold, the percentage that is cold is growing quickly, and I find the purity of a nearly single dimensional engineering problem to be super interesting. Almost the only dimension of relevance in cold storage is cost. See Glacier: Engineering for Cold Data Storage in the Cloud for more on this market segment and how Amazon Glacier is addressing it in the cloud.

 

This Facebook hardware project is particularly interesting in that it’s based upon an optical media rather than tape. Tape economics come from a combination of very low cost media combined with only a small number of fairly expensive drives. The tape is moved back and forth between storage slots and the drives when needed by robots. Facebook is taking the same basic approach of using robotic systems to allow a small number of drives to support a large media pool. But, rather than using tape, they are leveraging the high volume Blu-ray disk market with the volume economics driven by consumer media applications. Expect to see over a Petabyte of Blu-ray disks supplied by a Japanese media manufacturer housed in a rack built by a robotic systems supplier.

 

I’m a huge believer in leveraging consumer component volumes to produce innovative, low-cost server-side solutions. Optical is particularly interesting in this application and I’m looking forward to seeing more of the details behind the new storage platform. It looks like very interesting work.
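
To make the tape-versus-optical economics concrete, here is a minimal cost model in the spirit of Hamilton's description. Every number in it is a made-up placeholder, not real pricing from Facebook, Amazon, or anyone else:

    def archive_cost_per_tb(media_cost_per_tb, drive_cost, n_drives,
                            robot_cost, pool_tb):
        """Blended $/TB for a robotic archive: media cost scales with
        capacity, while drives and robotics amortize across the pool."""
        fixed = drive_cost * n_drives + robot_cost
        return media_cost_per_tb + fixed / pool_tb

    # Made-up placeholder numbers for a 1 PB pool:
    bluray = archive_cost_per_tb(media_cost_per_tb=40, drive_cost=2_000,
                                 n_drives=12, robot_cost=50_000, pool_tb=1_000)
    tape = archive_cost_per_tb(media_cost_per_tb=25, drive_cost=10_000,
                               n_drives=4, robot_cost=75_000, pool_tb=1_000)
    print(f"Blu-ray rack: ${bluray:,.0f}/TB   tape library: ${tape:,.0f}/TB")

With different placeholder numbers the comparison can flip; the structural point is just that a handful of shared drives and robotics lets cheap consumer media dominate the blended cost.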

IBM Sells x86 Server Business and Doubles Capacity of SoftLayer Cloud - Sounds Like a Good Swap

There are all kinds of news reports on IBM selling its x86 server business to Lenovo.

 

Lenovo to Buy IBM Low-End Server Business for $2.3 Billion

Chinese Computer Maker Aims to Expand Corporate-Client Business Beyond Office PCs

Updated Jan. 23, 2014 8:13 a.m. ET
 
The press release is here, and there is not a single mention of SoftLayer, which in theory would be using IBM x86 servers.
 
On Jan 17, 2014, IBM announced it would spend $1.2 billion to double its SoftLayer cloud capacity, and the word "server" does not show up.

IBM Commits $1.2 Billion to Expand Global Cloud Footprint

Builds a Massive Network of Local Cloud Hubs for Businesses Worldwide with 40 Data Centers Across Five Continents

ARMONK, N.Y. - 17 Jan 2014: IBM (NYSE: IBM) today announced plans to commit  over $1.2 billion to significantly expand its global cloud footprint. This investment includes a network of cloud centers designed to bring clients greater flexibility, transparency and control over how they manage their data, run their business and deploy their IT operations locally in the cloud. 

I once asked a SoftLayer person if they had started running IBM servers in their cloud environment. His answer surprised me: no. "We can't run the IBM servers. We have fine-tuned our automation to work with a particular BIOS that we control and make sure is on all servers."
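
As a purely hypothetical illustration of why that matters, provisioning automation often keys off exact firmware strings. The sketch below reads them with dmidecode (a standard Linux utility) and refuses anything unexpected; it is not SoftLayer's actual tooling, just the failure mode their engineer described:

    import subprocess

    # Assumed values for illustration only.
    EXPECTED_BIOS = {"vendor": "ExampleVendor", "version": "EV-1.2.3"}

    def bios_info():
        """Read the BIOS vendor and version strings from SMBIOS."""
        def read(key):
            return subprocess.check_output(
                ["dmidecode", "-s", f"bios-{key}"], text=True).strip()
        return {"vendor": read("vendor"), "version": read("version")}

    def check_node():
        info = bios_info()
        if info != EXPECTED_BIOS:
            # Automation tuned to one BIOS refuses anything else -- the
            # situation described above with IBM's own servers.
            raise RuntimeError(f"unsupported BIOS {info}, expected {EXPECTED_BIOS}")
        print("BIOS OK, node eligible for provisioning")
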
Check out this picture of IBM SoftLayer CEO Lance Crosby. Notice how he is at the back of the server rack and not in front of the servers. I know who the server vendor is, but it is not appropriate to share, and who knows whether they have changed since I talked to the SoftLayer person last summer.

[Image: Lance Crosby standing at the back of a server rack]

I wonder if the IBM folks got a wake-up call when they realized they couldn't use their own servers in their cloud environment without making a lot of changes to accommodate the SoftLayer BIOS, and probably breaking a huge number of other management tools.