Will Zynga's stock performance affect its data center build-out? Stock nears $8, 20% below IPO price

Zynga's stock price is close to $8 today.


Zynga has increased its data center capacity with a recent 9 MW lease at Vantage.

Zynga signed the largest deal of the year for Grubb & Ellis, leasing nine megawatts of capacity, or six PODs, at a data center in Santa Clara, Calif.

Zynga had about 5 MW across east and west coast locations before this addition.

Part of the purpose of the IPO was to fund data center expansion.

But to cut costs and diversify its risks, Zynga is now investing more money in building its own data centers, according to the company’s initial public offering filing with the Securities and Exchange Commission.

Zynga considers the investment in its own infrastructure to be important enough to warrant an investment of $100 – $150 million in the second half of 2011, according to the filing.

But with the stock opening at $10 and now at $8 less than a month later, you would think a lot of people are thinking about how to get the stock price up. One way is to cut costs and figure out how to support more load on less hardware, which in turn reduces the need for data center space.

Because Zynga doesn't own its data centers, it can't do much about their PUE. But it can still take steps to be green in its data center space by evaluating how efficient its code and hardware systems are.

Now that Zynga is a public company, it needs to manage its costs to match its revenue if it wants its stock to go up long term.


27 of IBM's 35 European Data Centers Receive European Commission Energy Efficiency Award

IBM has a press release on 27 of its 35 European data centers receiving energy efficiency status.

European Commission Awards IBM for Energy Efficient Data Centers

27 IBM Data Centers in Europe Receive Data Center Energy Efficiency Status

ARMONK, N.Y. - 05 Jan 2012: IBM (NYSE: IBM) today announced the European Commission (EC), the executive body of the European Union, has awarded 27 IBM Data Centers for energy efficiency, based on the European Union (EU) Code of Conduct for Data Centers. The honor represents the largest portfolio of data centers from a single company to receive the recognition.

The EU Code of Conduct was created in response to increasing energy consumption in data centers. The EU aims to inform and encourage data center operators and owners to reduce energy consumption in a cost-effective manner without decreasing mission critical data center functions. The assessment is made against a set of best practices to reduce energy losses which include the usage of energy efficient hardware, installing free cooling and cold aisle containment. Power usage effectiveness (PUE) is an indicator for how efficiently a computer data center uses its power. In May, the Uptime Institute gave IBM data centers a rating of 1.65 for average power usage compared to the industry average of 1.8.
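To put those PUE numbers in context, here is a quick sketch of my own showing what 1.65 versus 1.8 means in facility power for the same IT load. The 1 MW IT load is an assumed example, not an IBM figure.

```python
# Rough PUE arithmetic: PUE = total facility power / IT equipment power.
# The 1.65 and 1.8 figures come from the press release; the 1 MW IT load
# is an assumed example for illustration, not an IBM number.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE."""
    return it_load_mw * pue

it_load = 1.0  # assumed IT load in MW

ibm_avg = facility_power_mw(it_load, 1.65)       # IBM's reported average
industry_avg = facility_power_mw(it_load, 1.80)  # industry average cited

savings_pct = (industry_avg - ibm_avg) / industry_avg * 100
print(f"IBM avg:      {ibm_avg:.2f} MW total for {it_load:.1f} MW of IT load")
print(f"Industry avg: {industry_avg:.2f} MW total for {it_load:.1f} MW of IT load")
print(f"Roughly {savings_pct:.1f}% less facility power at the same IT load")
```

For the same IT load, the 1.65 average works out to roughly 8% less total facility power than the 1.8 industry average.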


I had a chance to chat in more detail about the press announcement with Rich Lechner, IBM VP, Global Technology Services and Cloud.


One of the things IBM has achieved in these same facilities is a doubling of compute and storage capacity in three years without increasing power use.

The energy improvements implemented in these data centers helped IBM meet a goal set in 2007 to double the IT capacity of its data centers within three years without increasing the power consumption.

Rich took some time to explain the range of the 27 facilities that have achieved the award, and there are more facilities planned to be submitted. IBM also has services to help companies get certification for their own facilities.

For more details on what IBM did to achieve the energy efficiency improvements, check out this PDF; it includes a graph you'll find interesting.


One of the tangents we took was a discussion of IBM's modular/flexible data center designs. Rich said there are 500 data center sites that use IBM's flexible modular architecture, described here.

Upgrade to a modular data center design with IBM. Modular data centers are not about containerization, but about using smaller increments of standardized components. These enable you to match your business requirements to your IT requirements and add data center capacity when needed. A modular approach can enable you to pay as you grow and buy only what you need, when you need it, to defer capital and operational costs by perhaps 40 to 50 percent.

Data centers designed by IBM deliver flexibility throughout their entire life cycles, over an expected life of 10 or 20 years. But how can you estimate the capacity your business will need years from now? The best up-front investment is a statement of requirements created during data center strategy planning. We've infused our data center strategy with mathematical modeling services and tools to help bring the future into the present, so you can take action today. The analytics are designed to reveal your existing data center's status. They report on your data center's financial and operational terms, while modeling alternative scenarios based on your business data.

IBM services for designing a flexible, smarter data center

Data Center Design and Construction Services can help you design your data center with the future in mind.
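The "pay as you grow" idea is easy to sketch with some toy numbers. The figures below are my own assumptions, not IBM's; the point is just that spreading capex over modular increments defers spending, and IBM's 40 to 50 percent figure presumably also counts avoided stranded capacity and operating costs.

```python
# Toy comparison of "build it all up front" vs. modular increments.
# All numbers (cost per MW, discount rate, build schedule) are made-up
# assumptions to illustrate deferral, not IBM figures.

discount_rate = 0.08   # assumed cost of capital
cost_per_mw = 10.0     # assumed capex in $M per MW of capacity

def npv(payments):
    """Net present value of a list of (year, amount) capex payments."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in payments)

# Option A: build 10 MW in year 0.
upfront = [(0, 10 * cost_per_mw)]

# Option B: build 2 MW now and another 2 MW every two years as demand grows.
modular = [(year, 2 * cost_per_mw) for year in range(0, 10, 2)]

print(f"Up-front NPV of capex: ${npv(upfront):.0f}M")
print(f"Modular NPV of capex:  ${npv(modular):.0f}M")
```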

Loggly suffers extended outage after AWS reboot shuts down its service

Loggly is a cloud service that offers, among other things, systems monitoring and alerting.

Systems Monitoring & Alerting

Alerting on log events has never been so easy.  Alert Birds will help you eliminate problems before they start by allowing you to monitor for specific events and errors.  Create a better user experience and improve customer satisfaction through proactive monitoring and troubleshooting. Alert Birds are available to squawk & chirp when things go awry.

But Loggly suffered an extended outage caused by AWS rebooting 100% of its servers, and that accounted for only half of the downtime. The other half was due to Loggly not knowing its service was down.

Loggly's Outage for December 19th

Posted 19 Dec, 2011 by Kord Campbell

Sometimes there's just no other way to say  "we're down" than just admitting you screwed up and are down.  We're coming back up now, and in theory by the time this is read, we'll be serving the app again normally.  There will be a good amount of time until we can rebuild the indexes for historic data of our paid customers. This is our largest outage to date, and I'm not at all proud of it.

...

Loggly uses a variety of monitoring mechanisms to ensure our services are healthy.  These include, but are not limited to, extensive monitoring with Nagios, external monitors like Zerigo, and using a slew of our own API calls for monitoring for errors in our logs.  When the mass reboot occurred we failed to alert because a) our monitoring server was rebooted and failed to complete the boot cycle, b) the external monitors were only set to test for pings and established connections to syslog and http (more about that in a moment), and c) the custom API calls using us were no longer running because we were down.

Combined, these failures effectively prevented us from noticing we were down.  This in and of itself was the cause of at least half our down time, and to me, the most unacceptable part of this whole situation.

The other half of the outage was caused by Loggly not testing for a 100% reboot of all machines.

The Human Element

The other cause to our failures is what some of you on Twitter are calling "a failure to architect for the cloud".  I would refine that a bit to say "a failure to architect for a bunch of guys randomly rebooting 100% of your boxes".  A reboot of all boxes has never been tested at Loggly before.  It's a test we've failed completely as of today.  We've been told by Amazon they actually had to work hard at rebooting a few of our instances, and one scrappy little box actually survived their reboot wrath.

One of the lessons Loggly learned, and one that some of my software buddies and I are applying in a software design of our own, is to use more than one monitoring solution.

The second step is to ensure more robust external monitoring.  With multiple deployments, this issue becomes less of an issue, but clearly we need more reliable checks than what we rely on with Zerigo or other services.  Sorry, but simple HTTP checks, pings and established connections to a box do not guarantee it's up!
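That lesson translates directly into code. Here is a minimal sketch of an end-to-end check; the endpoints and alert hook are hypothetical placeholders, not Loggly's actual API. Instead of pinging a box, it sends a uniquely tagged test event and then verifies the service actually ingested it, alerting over a channel that does not depend on the monitored service.

```python
# Sketch of an end-to-end "canary" check: send a tagged event, then confirm
# the service actually processed it. The URLs and alert hook below are
# hypothetical placeholders, not Loggly's real API.

import time
import uuid
import urllib.request

INGEST_URL = "https://logs.example.com/ingest"     # hypothetical ingestion endpoint
SEARCH_URL = "https://logs.example.com/search?q="  # hypothetical search endpoint

def send_canary() -> str:
    """Send a uniquely tagged test event and return its tag."""
    tag = f"canary-{uuid.uuid4()}"
    req = urllib.request.Request(INGEST_URL, data=tag.encode(), method="POST")
    urllib.request.urlopen(req, timeout=10)
    return tag

def canary_visible(tag: str) -> bool:
    """Check whether the service has indexed the canary event."""
    with urllib.request.urlopen(SEARCH_URL + tag, timeout=10) as resp:
        return tag in resp.read().decode(errors="ignore")

def alert(message: str) -> None:
    """Alert over a channel that does NOT depend on the monitored service."""
    print(f"ALERT: {message}")  # stand-in for a pager/SMS/email integration

if __name__ == "__main__":
    try:
        tag = send_canary()
        time.sleep(30)  # give the pipeline time to index the event
        if not canary_visible(tag):
            alert("canary event was accepted but never showed up in search")
    except Exception as exc:
        alert(f"end-to-end check failed outright: {exc}")
```

The point is that the check exercises the same path a customer uses, and the alerting path lives outside the system being monitored, which is exactly the gap Loggly describes.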


Big Data, Hadoop, Dell, and Splunk: where is the connection?

I have been busy working on a Big Data paper, so I haven't been blogging as often. Getting into the technical details of Big Data has been easy, and then it hit me that Big Data has a lot in common with the monitoring and management systems used in data centers and IT. Collecting gigabytes or even terabytes of data a day to monitor operations is a Big Data problem for data centers.

Researching the Big Data topic, it was interesting to see where Dell, Hadoop, and Splunk intersect.

Barton George has a post on Splunk.

Hadoop World: Talking to Splunk’s Co-founder

Last but not least in the 10 interviews I conducted while at Hadoop World is my talk with Splunk‘s CTO and co-founder Erik Swan. If you’re not familiar with Splunk, think of it as a search engine for machine data, allowing you to monitor and analyze what goes on in your systems. To learn more, listen to what Erik has to say:

Barton references a GigaOm post on Splunk and Hadoop.

Splunk connects with Hadoop to master machine data

Splunk has integrated its flagship product with Apache Hadoop to enable large-scale batch analytics on top of Splunk’s existing sweet spot around real-time search, analysis and visualization of server logs and other machine-generated data. Splunk has long had to answer questions about why anyone should use its product over Hadoop, and the new integration not only addresses those concerns but actually opens the door for hybrid environments.
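To make "large-scale batch analytics on machine data" concrete, here is a minimal Hadoop Streaming sketch of my own, not Splunk's actual integration, that counts log lines by severity across a pile of server logs.

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming job: count log lines by severity level.
# Generic illustration of batch analytics over machine data; this is NOT
# Splunk's Hadoop integration. Run (roughly) as:
#   hadoop jar hadoop-streaming.jar \
#     -input /logs/raw -output /logs/by_severity \
#     -mapper "python3 severity_count.py map" \
#     -reducer "python3 severity_count.py reduce" \
#     -file severity_count.py

import sys

SEVERITIES = ("DEBUG", "INFO", "WARN", "ERROR", "FATAL")

def mapper():
    """Emit '<severity>\t1' for every log line that mentions a severity."""
    for line in sys.stdin:
        for level in SEVERITIES:
            if level in line:
                print(f"{level}\t1")
                break

def reducer():
    """Sum the counts per severity (input arrives sorted by key)."""
    current, total = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value or 0)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```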


Dell's Barton George was himself interviewed at Hadoop World as well.

Hadoop World: What Dell is up to with Big Data, Open Source and Developers


Besides interviewing a bunch of people at Hadoop World, I also got a chance to sit on the other side of the camera.  On the first day of the conference I got a slot on SiliconANGLE’s the Cube and was interviewed by Dave Vellante, co-founder of Wikibon and John Furrier, founder of SiliconANGLE.

-> Check out the video here.

Storage Vendor predicts data centers could be 25% of power consumption by 2020. huh??

I saw this press release and got a good laugh.

PRESS RELEASE

Dec. 5, 2011, 9:01 a.m. EST

Symform Forecasts Top 5 Cloud and Storage Predictions for 2012

New Year Will Ring in a "Storage Revolution" Amid Record Data Growth and Continued Data Center Bloat


SEATTLE, WA, Dec 05, 2011 (MARKETWIRE via COMTEX) -- The coming year will herald in a "storage revolution," according to Symform, which released its top cloud and storage predictions for 2012. Over the last year, top headlines centered on the growing popularity of cloud services, the staggering growth in data, and several high-profile data center outages. Based on these strong industry trends and insights gathered from customers, partners and industry experts, Symform predicts 2012 will be all about data -- how to store, secure, access and manage it. This will not only be a large enterprise trend but also impactful to the millions of small and medium-sized businesses (SMB), and the service providers who deliver solutions to them.

Check out this claim for energy consumption.

By 2020, Symform predicts that if left unchecked, more than 25 percent of the nation's power will be required to power data centers, unless businesses can identify new means for storing data without building additional data centers.
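A quick back-of-the-envelope calculation shows why this claim deserves the "huh??". Assuming data centers were around 2 percent of US electricity at the time (EPA- and Koomey-era estimates put the share in the 1.5 to 2 percent range), here is the growth rate that 25 percent by 2020 would imply, holding national consumption roughly flat.

```python
# Back-of-the-envelope check on the "25% of the nation's power by 2020" claim.
# The ~2% starting share is an approximation based on EPA/Koomey-era estimates;
# national consumption is assumed roughly flat to keep the arithmetic simple.

start_share = 0.02   # assumed data center share of US electricity, ~2011
target_share = 0.25  # Symform's 2020 prediction
years = 9            # 2011 -> 2020

growth_factor = target_share / start_share
cagr = growth_factor ** (1 / years) - 1

print(f"Data center energy would have to grow {growth_factor:.1f}x,")
print(f"about {cagr:.0%} per year, every year, for {years} years.")
```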