China's and Taiwan's #1 Telecom companies join efforts in Cloud Computing between the two countries

Chunghwa Telecom (Taiwan's #1 Telecom) and China Telecom (China's #1 Telecom) have signed an agreement to deploy telecom services in the Western Taiwan Straits Economic Zone to support Cloud and Smart Grid services between the countries, reports Taiwan Economic News.

Chunghwa Telecom, China Telecom Co-Tap Coastal Economic Zone Market

2011/03/07

Taipei, March 7, 2011 (CENS)--Chunghwa Telecom Co., Ltd. and China Telecom Co., Ltd. recently signed an agreement to jointly deploy telecom services, including intelligent energy management and cloud-computing networks, in the Western Taiwan Straits Economic Zone, which mainly lies along the coast of Fujian Province in mainland China.
The two telecom carriers vow to cooperate on the two telecom services across the mainland, starting from the said zone, which was proposed by the Chinese central government to integrate the economies, transport, infrastructure, and policies of the coastal cities west of the Taiwan Straits in order to boost competitiveness, social development, and economic cooperation with Taiwan.

The cooperation supports expanded services and cost reduction.

Chunghwa Telecom executives pointed out that the two companies will integrate their networks, distribution channels, products and technologies to explore business opportunities in the zone, with cooperation covering cloud computing, Internet data centers, intelligent energy management, electronic commerce and energy-saving services.

China Telecom Chairman Wang Xiaochu pointed out that the cooperation will help the two companies pare down the cost of operations in the zone.

423 job openings at AWS; how many employees does AWS have? About 2,000

The Seattle Times has an article about Amazon's job growth in the Seattle area.

Amazon.com on a hiring spree

Amazon.com has been expanding its office complex in South Lake Union and has about 1,900 openings in Seattle.

By Amy Martinez

Seattle Times business reporter

Amazon began moving into its new headquarters complex in South Lake Union last spring and now occupies seven buildings covering 845,000 square feet.

It's tough times all around, just not at Amazon's new headquarters in the South Lake Union area of Seattle.

Young workers dressed in jeans and T-shirts type away on laptops in small conference rooms named after products sold on Amazon's website. Vending machines serve up pricey gluten-free cookies and dry-roasted edamame. And for their pooches, doggy biscuits are handed out at the main reception desk.

Amid a sluggish job market and shaky economic recovery, the world's biggest Internet retailer is hiring like crazy at its headquarters complex. Consider Amazon's online jobs board: It lists about 1,900 openings in Seattle, at least twice as many as a year ago. More than 900 call for techies.

If Amazon is adding 900 technical jobs, how many are in AWS? I blogged about 166 open AWS positions in May 2010.

I was in Home Depot yesterday and ran into one of my ex-Microsoft friends who joined Amazon Web Services 4 years ago, back when people weren't familiar with the term Cloud Computing. We had a quick catch-up between the Grill Aisle and the Lawn Mowers. I told him how I had blogged about AWS having 100 job openings. Curious, I checked today, and there were 147 US jobs and 19 international jobs for AWS.

So, going to Amazon's careers site and searching for "AWS," there are 43 pages of results at 10 jobs per page, and the 43rd page has 3 entries: 42 × 10 + 3 = 423.

Amazon's 1,900 current job postings against 33,700 employees works out to a 5.6% open-position rate.

Worldwide, Amazon had 33,700 employees at the end of 2010, about 9,400 more than at the end of the previous year.

For a fast-growing business, if you assume open positions run at 20% of headcount, then multiplying 423 by 5 gives about 2,000 employees. Take your own guess on whether AWS is bigger or smaller than that; either way, AWS is on fire and hiring.
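
To make the back-of-envelope math explicit, here is a minimal Python sketch of the estimate above; the 20% openings-to-headcount ratio is my own assumption, not an Amazon or AWS figure.

```python
# Back-of-envelope estimate of AWS headcount from public job postings.
# The 20% openings-to-headcount ratio is an assumption, not an Amazon figure.

aws_openings = 42 * 10 + 3    # 43 result pages at 10 per page, 3 on the last page = 423
amazon_openings = 1900        # Seattle openings on Amazon's jobs board
amazon_employees = 33700      # worldwide headcount at the end of 2010

opening_rate = amazon_openings / amazon_employees
print(f"Amazon openings as a share of headcount: {opening_rate:.1%}")    # ~5.6%

assumed_aws_opening_rate = 0.20   # hypothetical rate for a fast-growing unit
aws_headcount_estimate = aws_openings / assumed_aws_opening_rate
print(f"Implied AWS headcount: ~{aws_headcount_estimate:,.0f}")          # ~2,115
```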

Don't forget that, versus all the rest of the cloud companies, Amazon has more customers and more data than anyone else, plus a huge investment in customer analytics systems. Analyzing that behavior gives Amazon an advantage, and its sheer size makes it difficult for competitors to catch up.

SeaMicro ships 64-bit Atom Server at 1/4 the power of Xeon

SeaMicro has a press release on its latest 64-bit Intel Atom Server.

SeaMicro Now Shipping the World’s Most Energy Efficient x86 Server with New 64-bit Intel Atom N570 Processor

The New SM10000-64™ Integrates 256 Intel® Atom™ Dual-core 1.66GHz Processors – 512 64-bit Cores and 850GHz, into a 10 Rack Unit System

SeaMicro ran a Hadoop benchmark that beat the performance of Xeon-based servers while using 1/4 the power.

For example, in the Hadoop MinuteSort benchmark test (http://sortbenchmark.org), 29 SeaMicro SM10000-64s were able to beat the performance of 1,406 dual socket quad core servers, but used just one-quarter of the power and took one-fifth the space.
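
As a rough sanity check on those ratios, here is a hedged Python sketch; the 350 W draw per dual-socket Xeon server is a hypothetical figure assumed for illustration only, not a number reported in the benchmark.

```python
# Rough sanity check on the MinuteSort comparison quoted above.
# The per-server Xeon power draw is a hypothetical assumption for illustration only.

xeon_servers = 1406                  # dual-socket quad-core servers in the comparison
seamicro_boxes = 29                  # SeaMicro SM10000-64 systems

assumed_xeon_server_watts = 350      # hypothetical draw per Xeon server (assumption)
xeon_cluster_kw = xeon_servers * assumed_xeon_server_watts / 1000

seamicro_cluster_kw = xeon_cluster_kw / 4       # "one-quarter of the power"
seamicro_kw_per_box = seamicro_cluster_kw / seamicro_boxes

print(f"Assumed Xeon cluster power:   {xeon_cluster_kw:.0f} kW")      # ~492 kW
print(f"Implied SeaMicro cluster:     {seamicro_cluster_kw:.0f} kW")  # ~123 kW
print(f"Implied power per SM10000-64: {seamicro_kw_per_box:.1f} kW")  # ~4.2 kW
print(f"Consolidation: {xeon_servers / seamicro_boxes:.0f} Xeon servers per SeaMicro box")
```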

Here is a Mozilla case study on using the SeaMicro box.

Mozilla compared performance on the SeaMicro SM10000 with their incumbent system, an HP C7000 Dual Socket Quad Core L5530 Xeon Blade, and found SeaMicro to be dramatically superior on all of the competitive dimensions: capital expense per unit compute, HTTP requests serviced per Watt, and HTTP requests serviced per rack unit. These advantages combined to produce dramatic savings in both capital expense and operating expense. On the operating expense side, the SeaMicro SM10000 used less than 1/5 the power per request, and took less than 1/4 the space of the HP C7000 for small-transaction, high-volume workloads.

VentureBeat also covers the press announcement.

“The response has been extraordinary,” said Andrew Feldman (pictured at top), chief executive of SeaMicro. “The sucking sound in the market is unbelievable. Everybody wants low-power computing.”

Cloud Hype reaches milestone, Toilet Humor

The cloud is everywhere, and now it has reached a milestone and achieved Toilet Humor status with Chris Hoff's presentation on Commode Computing.

One nice thing about blogging about a topic is that it makes you read an article twice; I missed Part 2 of the video on my first read.

 

Here is Chris's blog post with more background.

Video Of My CSA Presentation: “Commode Computing: Relevant Advances In Toiletry & I.T. – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”

February 19th, 2011, by beaker

This is probably my most favorite presentation I have given.  It was really fun.  I got so much positive feedback on what amounts to a load of crap. ;)

This video is from the Cloud Security Alliance Summit at the 2011 RSA Security Conference in San Francisco.  I followed Marc Benioff from Salesforce and Vivek Kundra, CIO of the United States.

Apple/Intel Thunderbolt enables low-cost 10 Gbps connections

Apple and Intel are announcing Thunderbolt support in Apple's new notebooks.

Intel's Light Peak event, Thursday 10 a.m. PT (live blog)

by Josh Lowensohn

Intel today is revealing some of the final details of its Light Peak technology as it makes its way into the first wave of consumer and business gadgetry.

Now officially known as Thunderbolt, the data transfer and high-definition PC connection runs at 10 gigabits per second and "can transfer a full-length HD movie in less than 30 seconds," Intel announced this morning.
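
As a quick back-of-envelope check on that claim, here is a small Python sketch; the 20 GB movie size is a hypothetical assumption, and protocol overhead is ignored.

```python
# Back-of-envelope check of the "full-length HD movie in under 30 seconds" claim.
# The 20 GB movie size is a hypothetical assumption; real files and overhead vary.

link_gbps = 10                        # Thunderbolt channel rate, gigabits per second
link_gbytes_per_sec = link_gbps / 8   # 1.25 GB/s, ignoring protocol overhead

movie_gbytes = 20                     # hypothetical full-length HD movie
transfer_seconds = movie_gbytes / link_gbytes_per_sec

print(f"Raw link rate: {link_gbytes_per_sec:.2f} GB/s")
print(f"Transfer time for a {movie_gbytes} GB movie: {transfer_seconds:.0f} seconds")  # 16 s
```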

Part of Thunderbolt is PCI Express.

Here is a white paper on PCI Express.  Part of the idea behind PCI Express is adding a switch to the I/O design.

Given Apple's and Intel's announcement, PCI Express interconnects will get cheaper.  See this HPC article on how a PCI Express interconnect could be used in high-performance clusters.

January 24, 2011

A Case for PCI Express as a High-Performance Cluster Interconnect

Vijay Meduri, PLX Technology


In high-performance computing (HPC), there are a number of significant benefits to simplifying the processor interconnect in rack- and chassis-based servers by designing in PCI Express (PCIe). The PCI-SIG, the group responsible for the conventional PCI and the much-higher-performance PCIe standards, has released three generations of PCIe specifications over the last eight years and is fully expected to continue this progression in the future with even newer generations, from which HPC systems will continue to see newer features, faster data throughput and improved reliability.

The latest PCIe specification, Gen 3, runs at 8Gbps per serial lane, enabling a 48-lane switch to handle a whopping 96 GBytes/sec. of full duplex peer to peer traffic. Due to the widespread usage of PCI and PCIe in computing, communications and industrial applications, this interconnect technology's ecosystem is widely deployed and its cost efficiencies as a fabric are enormous. The PCIe interconnect, in each of its generations, offers a clean, high-performance interconnect with low-latency and substantial savings in terms of cost and power. The savings are due to its ability to eliminate multiple layers of expensive switches and bridges that previously were needed to blend various standards. This article explains the key features of a PCIe fabric that now make clusters, expansion boxes and shared-I/O applications relatively easy to develop and deploy.
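
To see where the 96 GBytes/sec figure comes from, here is a short worked check in Python; it assumes the raw signaling rate with both directions of the full-duplex links summed, and ignores 128b/130b encoding overhead.

```python
# Worked check of the 96 GB/s figure for a 48-lane PCIe Gen 3 switch.
# Uses the raw signaling rate and sums both directions of the full-duplex links;
# 128b/130b encoding overhead is ignored for simplicity.

gen3_gbps_per_lane = 8   # PCIe Gen 3 raw rate per lane, per direction
lanes = 48

one_direction_gbytes = lanes * gen3_gbps_per_lane / 8   # 48 GB/s each way
full_duplex_gbytes = one_direction_gbytes * 2           # 96 GB/s aggregate

print(f"One direction:         {one_direction_gbytes:.0f} GB/s")
print(f"Full duplex aggregate: {full_duplex_gbytes:.0f} GB/s")
```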

Here is the idea of using PCI Express for cloud infrastructure.

Figure 1 illustrates a typical topology of building out a server cluster today, in which, while the form factors may change, the basic configuration follows a similar pattern. Given the widespread availability of open-source software and off-the-shelf hardware, companies have successfully built large topologies for their internal cloud infrastructure using this architecture.

Figure 1: Typical Data Center I/O interconnect

Figure 2 illustrates a server cluster built using a native PCIe fabric. As is evident, the usage of numerous adapters and controllers is significantly reduced and this results in a tremendous reduction in power and cost of the overall platform, while delivering better performance in terms of lower latency and higher throughput.

Figure 2: PCI Express-based Server Cluster