Quanta's direct sales transformation: 65% in 2012, expected to be 85% in 2013

Quanta was mostly known as an ODM, the company the OEMs went to to have their hardware built.  But Quanta has made the move into the direct sales business, and guess what: it is moving very fast.

Compaq built some of the first dual-processor, dual-hard-drive, x86-based servers. In the beginning it was Compaq, HP, and IBM who had the knowledge to build servers.  Over time, to reduce costs, the manufacturing was moved to Asia, and eventually the engineering moved to Asia as well, leaving the OEMs with the customer relationship.  Then came the shift to the bigger data center players.  Remember five years ago how small Google, Facebook, Amazon, Microsoft, and Apple's data centers were?  Now they are the dominant players, and the economies of scale have shifted: 10,000s of servers is a small yearly order, and the big guys buy 100,000s of servers a year.

This shift benefits a player like Quanta.

GigaOm has an article on Quanta.

How an unknown Taiwanese server maker is eating the big guys’ lunch

 

MAR. 16, 2013 - 1:30 PM PDT

SUMMARY:

In the server business, Taiwanese hardware company Quanta has shifted from an original-design manufacturer to much more of a direct seller. It wants to extend the trend and sell other products, too.

Here is the part that caught my eye.

Back then, Quanta didn’t sell servers directly to customers, it only built them for traditional server vendors who then put their name on them and sold them to customers. Fast forward a few years, and a majority of Quanta’s server revenue stems from direct deals — 65 percent in 2012, and a forecasted 85 percent this year.

Quanta is also expanding into the Seattle cloud development hub.

Next month, the company will open an office in Seattle in order to be closer to customers. Yang said Quanta has several customers in the area, although he declined to name them. Microsoft, which is building huge data center capacity for Windows Azure and its Live offerings, is a short drive from Seattle, in Redmond, Wash., and Seattle is much closer to Quincy, Wash., a hotbed of data centers, than the Fremont office. Quanta will add more U.S. offices for sales and service this year, Yang said.

Oracle acquires cloud infrastructure software company Nimbula

I met the Nimbula guys a few years ago and I was quite impressed.

What hasn't been impressive is Oracle's cloud efforts.  Well, one way for Oracle to solve that problem is to buy Nimbula.

Oracle buys private-cloud pioneer Nimbula

 


SUMMARY:

Oracle’s acquisition of Nimbula gives it some needed private-cloud savvy and a toehold in the OpenStack camp — should it choose to keep Nimbula’s product around.

An interesting observation by Derrick Harris is the difficulty of selling the private cloud.

Nimbula was on the scene early and, from all accounts, built a good product, but appears to have succumbed to a lackluster private-cloud buying market. It has a handful of publicly named customers, including Russian search engine giant Yandex, but like so many other private-cloud startups, it might have fallen victim to market confusion (i.e., “Can’t we just keep buying VMware?”) and an industry consensus around OpenStack as the private-cloud savior. Indeed, last year, Nimbula made a strong pivot and actually began rebuilding itself as an OpenStack distribution.

Google doubles its presence in Kirkland (Seattle); Seattle is a cloud development hub with Amazon and Microsoft

I moved to Seattle 20 years ago.  Clouds are part of Seattle, and it makes you wonder if all those clouds are influencing software developers. :-)  Now Google is investing more in cloud development in Seattle, alongside the Amazon and Microsoft cloud development teams.

The NYTimes posts on the Google expansion in Kirkland, WA.  Kirkland is probably better known for Costco's Kirkland-branded items; the city of Kirkland is near Costco HQ.  And Kirkland is where Google has its hub of offices in the Seattle area.

In a battle for dominance in cloud computing, Google is taking on Microsoft and Amazon in their own back yard.

Google said Tuesday that it was doubling its office space near Seattle, just miles from the campuses of Amazon and Microsoft, and stepping up the hiring of engineers and others who work on cloud technology.

It is part of Google’s dive into a business known as cloud services — renting to other businesses access to its enormous data storage and computing power, accessible by the Internet.

Here is a video by the local KOMO News with a tour inside the Google Kirkland facility.

SimCity's online disaster, 92% resolved

SimCity chose to make its latest version online only, and has found out what it means to run a 24x7 service, with painful PR results and customer frustration.

I am sure for the data center crowd this is an entertaining event.  Here are some nuggets that have me questioning what they were thinking to start with.

In their latest update they sing the praises of resolving issues.  Well, 92% of them.

I’m happy to report that the core problem with getting in and having a great SimCity experience is almost behind us. Our players have been able to connect to their cities in the game for nearly 8 million hours of gameplay time and we’ve reduced game crashes by 92% from day one.

In the third update they said they upgraded their servers.  They started with 11.  I think the EA folks gave the operations team a bit too little money.

First things first: we’ve been making great strides towards improving our servers. In addition to adding several new ones the past couple of days (including the addition of Antarctica today), we’ve been applying upgrades to 11 of our initial servers. What does this mean? We’ve beefed up these servers to allow for a larger capacity that will not only allow more players to get in, but will also help address a lot of the connection issues we’ve been addressing.

And, they figured out they needed a test server.

Also, we’ve released a new Test server today. As you may have guessed by the name, we’ll be using this server to test changes and new features before we deploy them to all of our other servers. Before you ask, yes, you can play on it. In fact, we’d be grateful if you did. Just note, because this is a test environment, you may experience some unstableness as we push new data to improve non-test servers.

Q: What is it for? 
A: This server is used by us the SimCity developers, and players to test changes and new features before they are released across all the other servers. This test server will improve our ability to deploy these updates as quickly and accurately as possible.

Bet there are a bunch of people inside the company saying we should have gone to the cloud instead of deploying our own physical servers.  Especially if all you had to start with was 11.

Tip for IT to control the cloud: don't control it, get data

I was on a webinar yesterday to discuss the best route to the cloud.  One of the last questions was how IT can take control of the cloud.


The day before, I had a conversation with Luke Kanies, CEO of Puppet Labs, to catch up.  I was introduced by a mutual friend a couple of years ago, and we have always had great discussions.

I told Luke I was participating in a webinar on the cloud, and it seemed like a tool such as Puppet Enterprise could be used to get data on what clouds are being built and deployed.

Puppet Enterprise is IT automation software that gives system administrators the power to easily automate repetitive tasks, quickly deploy critical applications, and proactively manage infrastructure changes, on-premise or in the cloud. Learn more about Puppet Enterprise below, or download now and manage up to 10 nodes free.


Puppet Enterprise automates tasks at any stage of the IT infrastructure lifecycle, including:

  • Provisioning
  • Discovery
  • OS & App Configuration Management
  • Build & Release Management
  • Patch Management
  • Infrastructure Audit & Compliance

I didn't specifically mention Puppet Labs, but I made the point that the biggest step you can take to control the cloud is to get data, data from the deployment tools.  If central IT bought a tool that helped all the users, then it could get the data.

If Puppet Enterprise logs were sent to a central IT function, it would have the data to determine what users are doing in the cloud.  With that data you can determine how best to serve their needs.

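To make the idea concrete, here is a minimal Python sketch of that central aggregation. The report records and field names (`node`, `cloud`, `resources_changed`) are hypothetical stand-ins, not the actual Puppet report schema; the point is just that once deployment reports land in one place, central IT can answer "who is deploying to which cloud, and how much?" with a few lines of code.

```python
from collections import Counter

# Hypothetical report records, standing in for Puppet Enterprise run
# reports forwarded to a central IT log store (all values illustrative).
reports = [
    {"node": "web-01", "cloud": "aws", "resources_changed": 4},
    {"node": "web-02", "cloud": "aws", "resources_changed": 0},
    {"node": "db-01", "cloud": "rackspace", "resources_changed": 2},
]

def cloud_usage(reports):
    """Summarize which clouds users are deploying to, from report data."""
    usage = Counter(r["cloud"] for r in reports)   # nodes per cloud
    churn = Counter()                              # config changes per cloud
    for r in reports:
        churn[r["cloud"]] += r["resources_changed"]
    return usage, churn

usage, churn = cloud_usage(reports)
print(dict(usage))
print(dict(churn))
```

Central IT doesn't have to block anyone to get this picture; it just has to be the place the reports flow to.
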
This recommendation flies in the face of what I think 80% of the people would do, which is to just take control.  That makes sense, as these same 80% would have no idea what a Puppet Enterprise log means.

I constantly tell people the misperception of corporate IT is that it is a technical organization.  No, IT is not necessarily technical.  Take a look around: how many of these people have CS degrees, let alone an MS or PhD?  What is technical?  The Google, Apple, Facebook, and Microsoft product development teams are technical.  PuppetLabs is also technical, and they have a good method to manage IT infrastructure.

How Puppet Works

Puppet uses a declarative, model-based approach to IT automation.

  1. Define the desired state of the infrastructure’s configuration using Puppet’s declarative configuration language.
  2. Simulate configuration changes before enforcing them.
  3. Enforce the deployed desired state automatically, correcting any configuration drift.
  4. Report on the differences between actual and desired states and any changes made enforcing the desired state.

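The four steps above can be sketched in a few lines of Python. This is a toy model, not Puppet's actual implementation: desired state is plain data, `noop=True` stands in for simulating changes before enforcing them, and the returned dict plays the role of the change report. The service names and states are made up for illustration.

```python
# Desired state as data, in the spirit of a declarative manifest.
desired = {"ntp": "running", "httpd": "running", "telnet": "stopped"}

def enforce(desired, actual, noop=False):
    """Compute the drift from desired state; apply it unless simulating."""
    changes = {k: v for k, v in desired.items() if actual.get(k) != v}
    if not noop:
        actual.update(changes)  # step 3: correct the drift
    return changes              # step 4: report what would/did change

actual = {"ntp": "stopped", "httpd": "running"}
pending = enforce(desired, actual, noop=True)  # step 2: simulate first
applied = enforce(desired, actual)             # step 3: enforce for real
```

The appeal of the model is that the operator only ever edits the `desired` data; the same generic routine handles discovery of drift and correction.
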
Which reminds me: one of the things I enjoy about talking to Luke, and why another Portland friend introduced us, is that we both like the use of models.

Enforce Desired State

After you deploy your configuration modules, the Puppet Agent on each node communicates regularly with the Puppet Master server to automatically enforce the desired states of the nodes.

  1. The Puppet Agent on the node sends Facts, or data about its state, to the Puppet Master server.
  2. Using the Facts, the Puppet Master server compiles a Catalog, or detailed data about how the node should be configured, and sends this back to the Puppet Agent.
  3. After making any changes to return to the desired state (or, in “no-op mode,” simply simulating these changes), the Puppet Agent sends a complete Report back to the Puppet Master.
  4. The Reports are fully accessible via open APIs for integration with other IT systems.

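Here is a rough Python sketch of that agent/master round trip. The data shapes are illustrative only (Puppet's real Facts, Catalogs, and Reports are much richer structures), but the flow matches the four steps: facts go up, a catalog comes down, changes are applied, and a report goes back.

```python
def agent_facts(node):
    """Step 1: the agent gathers Facts about its current state."""
    return {"node": node, "services": {"ntp": "stopped"}}

def master_compile(facts, desired):
    """Step 2: the master turns facts + desired state into a Catalog."""
    return {svc: state for svc, state in desired.items()
            if facts["services"].get(svc) != state}

def agent_apply(facts, catalog, noop=False):
    """Steps 3-4: apply (or merely simulate) the catalog, then report."""
    if not noop:
        facts["services"].update(catalog)
    return {"node": facts["node"], "changed": sorted(catalog), "noop": noop}

desired = {"ntp": "running", "sshd": "running"}
facts = agent_facts("web-01")
catalog = master_compile(facts, desired)
report = agent_apply(facts, catalog)
```

That final report dict is exactly the kind of data I would want flowing to a central IT function.
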
Uh, BTW, this is the way I think a data center should work as well.