World Cup Soccer, assessment of unrest and violence

Today is the start of World Cup Soccer, and watching the screen reminded me of some work done by some really smart people.

Below is the setup for South Africa's first goal.

image

The movement that allows South Africa to get in position.

image 

And the goal

image

It is easy to do a post-analysis on what caused the goal.  But it is much harder to do a pre-analysis on the factors that affect winning or losing.

Now let me show you what the smart people I know are able to do at the World Cup to understand the potential dynamics that can cause unrest and violence.  Here is the PDF I will be referring to.

The following is a demo showing the use of Thetus's Savanna product.

image

In the upper left is the assumption, "Demographics and migration patterns may influence stability during the world cup.  Immigration is a primary issue"

image

Which then supports, "What are the potential dynamics that can cause unrest or violence during the World Cup?"

image

How many systems have you seen that accept questions as input?  Almost all other systems are focused on data you have already set up monitoring for, looking only within your system.  Can you ask questions that get you looking for information beyond your data feeds?

You can browse the PDF to see more of the events and relationships.

Thetus Savanna allows an analysis process to see relationships and how events occur.  With this understanding you can mitigate undesirable actions and invest in areas that are win conditions.

People actually do this activity all the time, but almost no one has a tool like Thetus Publisher.  (Disclosure: I am working with Thetus to figure out how their technology can be applied to data center scenarios, because they are some of the smartest people I've found to figure out how to model the complexity in the data center.)

There are only a handful of people at this time who can understand this technology and apply it.  And, luckily, after two years of figuring out who the right people are and what the right scenarios are, this method is closer to being deployed.

Writing this blog entry was the easiest way to tell the fewer than 10 people I know who will understand this approach, "here is a cool graphic you can use to illustrate the potential use to your co-workers and team."

And, it is time to start sharing this approach.  I'll continue to post on this modeling method, as it explains how I am using my blog posts.  In general, I am blogging about public facts that fit into modeling data centers.

As a few of my business associates and clients know, there is logic to my posts, and they can read the relationships between the posts.  This does not apply to all posts, but they know how to parse what I write.  One benefit of this blogging approach is that when I meet with people, we can immediately drop into details because they have been reading what I have been blogging.

As my wife told me just last night, "I wish you could tell me how to use technology as well as you write things."  I told her, "Well, I spend more time thinking about what to write than I do thinking about what to say."  :-)

Just as it is sometimes hard to see the exact movements in a soccer match that set up a goal, once you recognize the patterns you want to repeat, you start to score more.

Enjoy the World Cup Soccer. 


Freedom to think of things others don't, accepting different belief structures - Human Factors and the Data Center

Last week I blogged about a big-thinking session I joined in Portland with a cloud computing director and ten others.  One of the things I do in big meetings is drop into analyst mode: watching the conversations, saying little, listening, and observing the dynamics in the meeting.  Normally I would be discussing big ideas, but with 12 people in the room, plenty of brain power going, and an extremely smart guy presenting, I could be quiet.  I frequently find I learn more and figure out things by being quiet and watching the dynamics between the people.  Isn't it funny how your brain stops listening when you want to talk?

One of the entertaining moments was when a VC came into the meeting late and spent two minutes telling the group how important he is and how much influence he has.  I didn't say one word to him, even though he was the man with the money.  I had more important conversations, and for a person like this it is often difficult to explain my role, and that this meeting wouldn't be happening if I hadn't been architecting the solution.

I was sitting next to my friend, and we scribbled notes and whispered ideas during the presentation, taking the time to highlight important concepts. One of the big concepts was modeling different belief structures to interpret data differently, which allows you to put data in the context of the user.

A side story: I worked with some data center construction guys and found their belief and value system was totally different from what I had assumed.  The more I worked with them, the less smart they thought I was.  What I came to understand is that their value and belief system was brittle when exposed to openness and transparency, which is a requirement for building a knowledge model for the data center.  Luckily I escaped that project.  In the process, I learned a valuable lesson about why it is so hard to bridge thinking across data center design, construction, and operation.  Most people are fixed in their belief and value system; they can't translate what others do into their beliefs, and vice versa.  Openness and transparency are not compatible with many existing approaches in data centers, where keeping things secret is standard practice.  Keeping secrets also maximizes control and profits for the suppliers, as the customers are mystified by the black magic skills required to build a data center.

Note: I have met other data center construction guys who don't exhibit this behavior, so don't take this to mean all data center construction is this way.  And I have met data center designers who believe data center efficiency can be achieved with simpler designs that are easier to operate and maintain when black magic is not part of the design.

After this lesson, I've spent more time analyzing people and companies for how well they fit in open approaches like we intend to use in the "Open Source Data Center Initiative."

While most people in the meeting were down in the details reviewing the ideas presented, I was watching the people and their beliefs, trying to figure out whether they accept other people's belief systems.  The more arrogant a person is, the less they accept another person's view as being right in its own context.

How many different belief systems do you accept as valid in the data center system?

Executive & CIO

Business Unit VP

Facility Operations

IT Operations

Application/Services Operations - Dev and Test

Enterprise Architect

Security

Networking

Database & Storage

Finance

Public Relations

Environmental Impact & Sustainability

Customers

Partners

Suppliers

Government, Finance, and Compliance Regulations

One view I haven't heard is Human Factors.  I wrote the above yesterday but knew it was not finished enough to post; this morning at 6 a.m. it clicked.  One view that touches almost all of the above, but is not discussed, is the holistic view from Human Factors.  I studied Human Factors in college and believed it was key to becoming a better Industrial Engineer. When I interviewed at IBM, one of the questions was, "How do you know what to change?"  Being young and naive, I said you have to care about the people.  The IBM engineers probably thought I was a leftist tree-hugging radical thinker, as I was graduating from UC Berkeley.

What is Human Factors?

Human factors involves the study of all aspects of the way humans relate to the world around them, with the aim of improving operational performance, safety, through life costs and/or adoption through improvement in the experience of the end user.

An area where Human Factors shows up in most data centers is in facilities due to the maturity of the equipment used, regulations like OSHA, and safety requirements around large mechanical and power systems.  But, the application of Human Factors in data centers is relatively new.  In talking to Mike Manos, he described how Microsoft designed its data centers to make it easier to receive fully assembled racks and deploy the heavy racks to their location.

In software and hardware, User Interface design is a more popular term.

In the industrial design field of human-machine interaction, the user interface is (a place) where interaction between humans and machines occurs. The goal of interaction between a human and a machine at the user interface is effective operation and control of the machine, and feedback from the machine which aids the operator in making operational decisions. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.

Human Factors and User Interface design are discussed in isolated areas within the data center, but I don't think I have heard anyone discussing human factors and data centers in the same breath.

The #1 risk to data center operations is human related.  How much did the T-Mobile data loss disaster, which was a human error, cost Microsoft and Hitachi Data Systems?

While users will be relieved that their information looks likely to be recovered, the episode poses several questions over the competence of Danger’s staff; the technical ability of contractor Hitachi Data Systems; and the inherent stupidity of the Cloud concept.

While we are unlikely ever to be told the full story, it looks very much as if Hitachi’s attempts to upgrade Danger’s Storage Area Network failed big time and that the data was put at risk not by hardware failure, but by good old-fashioned human error.

This one event, with its multiple human errors, did hundreds of millions of dollars of damage to Hitachi and Microsoft.  Can Hitachi sell a storage system?  Can Microsoft sell its smartphones?

This problem was caused by people who didn't spend the time to think about how people interact with the data center systems.

This is one of my more rambling posts; there are some good ideas here, but I need to think about them a bit more.


Can Data Centers benefit from Supply Chain Management concepts?

Currently, I am studying data center site selection, and I have been asking what is wrong with data centers having 1% of their cost in land when other commercial real estate will typically have land at 20-25% of the cost.  One big thing most miss is that land is not a cost; it is a non-depreciable asset.

Capital assets that are inexhaustible or where the useful life does not diminish or expire over time, such as land and land improvements. Infrastructure assets reported using the modified approach to depreciation are also not depreciated.

Land is not an expense; it is an investment.  So land, including land improvements, should be evaluated on its ROI, not its overall cost.
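To make the accounting point concrete, here is a minimal Python sketch with made-up numbers: the 1% land share is from above, while the 20-year building life, resale value, and holding period are my own assumptions for illustration. Straight-line depreciation applies to the building only; the land is evaluated as an investment on its annualized return.

```python
# Hypothetical sketch: land is a non-depreciable asset, so it never hits the
# income statement as depreciation expense, while the building does.
# All dollar figures below are illustrative, not real project numbers.

def annual_depreciation(building_cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation on the building only; land is excluded."""
    return building_cost / useful_life_years

def land_roi(resale_value: float, purchase_price: float, years_held: int) -> float:
    """Simple annualized return on the land treated as an investment."""
    return (resale_value / purchase_price) ** (1 / years_held) - 1

site_cost = 100_000_000          # assumed total project cost
land_cost = site_cost * 0.01     # land at 1% of a typical data center budget
building_cost = site_cost - land_cost

dep = annual_depreciation(building_cost, useful_life_years=20)
roi = land_roi(resale_value=2_000_000, purchase_price=land_cost, years_held=10)

print(f"Annual depreciation (building only): ${dep:,.0f}")
print(f"Annualized land ROI: {roi:.1%}")
```

The point of the sketch: the building shows up as a recurring expense, while the land line only enters the picture as a return on an asset.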

Which then led me to think: why don't data centers use more supply chain management concepts, which would address issues like land cost in the overall solution and most likely save much more than the cost of the land?

Supply Chain Management is defined as:

Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers (Harland, 1996).[1] Supply Chain Management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain).

Another definition is provided by the APICS Dictionary when it defines SCM as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally."

Can't you think of all the different groups and vendors involved in providing data center and IT services as a supply chain management problem?  Is the CIO in charge of the supply chain? Maybe.

Here is a piece of irony from a CIO.com article on supply chain management: supply chain management software is a mess.

Supply chain management software is possibly the most fractured group of software applications on the planet. Each of the five major supply chain steps previously outlined is comprised of dozens of specific tasks, many of which have their own specific software. Some vendors have assembled many of these different chunks of software together under a single roof, but no one has a complete package that is right for every company. For example, most companies need to track demand, supply, manufacturing status, logistics (i.e. where things are in the supply chain), and distribution. They also need to share data with supply chain partners at an ever increasing rate. While products from large ERP vendors like SAP's Advanced Planner and Optimizer (APO) can perform many or all of these tasks, because each industry's supply chain has a unique set of challenges, many companies decide to go with targeted best of breed products instead, even if some integration is an inevitable consequence.

So, if a bunch of people who focus only on supply chain management can't get the software right, how can the data center industry get the right software to run data centers like a supply chain?

I think I have an answer on how to approach supply chain management for data centers.  The first step is to identify the problem, then test which approaches solve the problem best. The fragmentation and silos are the opportunity to address.  How do you pull all the pieces together?  My ideas are based on using social networking and memetics.

More to come.


Alternative to Google's hiring a Renewable Energy Systems Modeling Engineer

I am spending more time researching the Low Carbon Data Center ideas, and I ran across Google's job posting for a Renewable Energy System Modeling Engineer.

The role: Renewable Energy System Modeling Engineer - Mountain View

RE<C will require development of new utility-scale energy production systems. But design iteration times for large-scale physical systems are notoriously slow and expensive. You will use your expertise in computer simulation and modeling to accelerate the design iteration time for renewable energy systems. You will build software tools and models of optical, mechanical, electrical, and financial systems to allow the team to rapidly answer questions and explore the design-space of utility-scale energy systems. You will draw from your broad systems knowledge and your deep expertise in software-based simulation. You will choose the right modeling environment for each problem, from simple spreadsheets to time-based simulators to custom software models you create in high-level languages. The models you create will be important software projects unto themselves. You will follow Google's world-class software development methodologies as you create, test, and maintain these models. You will build rigorous testing frameworks to verify that your models produce correct results. You will collaborate with other engineers to frame the modeling problem and interpret the results.

It's great that Google sees the need for this person, but I was curious whether anyone else has done Renewable Energy System Modeling.  Guess what: someone has, since 1993 in fact.  NREL has this page on HOMER.

New Distribution Process for NREL's HOMER Model

Note! HOMER is now distributed and supported by HOMER Energy (www.homerenergy.com)

To meet the renewable energy industry’s system analysis and optimization needs, NREL started developing HOMER in 1993. Since then it has been downloaded free of charge by more than 30,000 individuals, corporations, NGOs, government agencies, and universities worldwide.

HOMER is a computer model that simplifies the task of evaluating design options for both off-grid and grid-connected power systems for remote, stand-alone, and distributed generation (DG) applications. HOMER's optimization and sensitivity analysis algorithms allow the user to evaluate the economic and technical feasibility of a large number of technology options and to account for uncertainty in technology costs, energy resource availability, and other variables. HOMER models both conventional and renewable energy technologies:

image

I signed up for the HOMER Energy site, which has 510 users, none apparently Google engineers.

image

I hope to make contact with the HOMER Energy team, as we are trying to have a session at DataCenterDynamics Seattle on a Low Carbon Data Center.

Maybe Google doesn't have to hire the Renewable Energy System Modeling Engineer after all.  :-)


Flaw in Data Center Site Selection, one number vs. range of performance approach

I was just talking to some folks about data center site selection.  The usual method is to create a long list of criteria, create weightings, multiply the numbers, add up the scores, and then select the site with the highest score.  That method is flawed.
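The weighted-score method I'm describing can be sketched in a few lines of Python; the criteria, weights, and scores here are all hypothetical:

```python
# Sketch of the common weighted-score site selection method.
# Criteria, weights, and scores are hypothetical, for illustration only.
weights = {"power_cost": 0.4, "network_latency": 0.3,
           "land_cost": 0.1, "tax_incentives": 0.2}

sites = {
    "Site A": {"power_cost": 8, "network_latency": 6, "land_cost": 9, "tax_incentives": 5},
    "Site B": {"power_cost": 6, "network_latency": 9, "land_cost": 5, "tax_incentives": 8},
}

def weighted_score(scores: dict) -> float:
    # Multiply each criterion score by its weight, then sum into a single number.
    return sum(weights[c] * scores[c] for c in weights)

# The method picks whichever site collapses to the biggest single number.
best = max(sites, key=lambda s: weighted_score(sites[s]))
print(best, weighted_score(sites[best]))
```

Everything the business cares about gets collapsed into one scalar per site, which is exactly where the trouble starts.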

The flaw? Thinking that the weightings are the right numbers and the criteria can be counted as independent factors.

First one: weightings help prioritize the factors that are more important to the business, and this collapses everything into a single number.  The problem is that the business changes, and not all businesses can be represented by a single number.

The right approach: data center sites should be characterized by a range of performance that supports the range of the business now and in the future.  The sites that should score highest are the ones that best suit the range of performance for the business, not simply the ones with the highest single score.

The error of a single number view vs. the range can be illustrated by the "Flaw of Averages."

The Flaw of Averages
A common cause of bad planning is an error Dr. Savage calls the Flaw of Averages which may be stated as follows: plans based on average assumptions are wrong on average.

As a sobering example, consider the state of a drunk, wandering around on a busy highway. His average position is the centerline, so...
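The same flaw shows up in capacity planning.  A quick Python sketch with hypothetical numbers: size a facility to average demand and the plan shows no shortfall, yet the average shortfall over the actual demand distribution is decidedly non-zero.

```python
import random

# Flaw of Averages sketch: a plan sized to *average* demand looks fine,
# but averaging over the real demand distribution tells a different story.
# Demand range and capacity are hypothetical.
random.seed(42)  # reproducible samples

capacity = 100                                              # sized to the average
demands = [random.uniform(50, 150) for _ in range(100_000)] # demand actually varies

avg_demand = sum(demands) / len(demands)            # ~100
shortfall_at_avg = max(0, avg_demand - capacity)    # ~0: the average-based plan "works"
avg_shortfall = sum(max(0, d - capacity) for d in demands) / len(demands)  # ~12.5

print(f"average demand: {avg_demand:.1f}")
print(f"shortfall of the average-based plan: {shortfall_at_avg:.1f}")
print(f"average actual shortfall: {avg_shortfall:.1f}")
```

Plans based on average assumptions are wrong on average: the shortfall of the average is near zero, while the average of the shortfalls is not.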

Second one: the criteria listed are assumed to be independent factors, but most criteria have relationships to other things, and the interaction of criteria creates good and bad conditions that experienced people know.  Yet the so-called site selection gurus think they can solve the problem with enough criteria and weightings.

For the amount of money spent on data centers over their lifecycles, data center models should be built.  The trouble is few companies know how to do this, as it requires a holistic view bridging the site, the data center building, IT hardware, and software. This is a problem worth solving.
