How I Attend a Conference Is Tiring, So Allow for Downtime

I've just spent two days at the Open Compute Summit and my brain is still decompressing from all the activity.

Everybody has a different style for how they attend conferences.  Some want to try to meet as many people as they can, looking for the whales the way the casino industry looks for the high rollers.  This method is favored by many salespeople.

Technology people are looking for the newest software and hardware.

Out of 500 people attending the Open Compute Summit, I came away with 7 new business cards.  So, you can tell I don't believe in the idea of having to work a room to build your Rolodex.

I focused on talking to the influentials and the thought leaders.  With people I met for the first time, we talked about big ideas to see where they are at.  Some people I had known by e-mail or phone but not met in person.  But the vast majority of people I talked to were people I have met in the past, so we could exchange new ideas.

Now, some may attend to catch up, almost like a reunion, but that is kind of limiting for me.  What is much more interesting is understanding how the ecosystem is working.  Who is there.  Who is not.  Who is talking to whom.  Who is presenting and driving new ideas.  How things are changing vs. previous events.  How this event compares to others.  Then you start to discover new patterns of ideas, the thought-leadership kind of ideas.

On the plane trip out I watched Sherlock Holmes 2, where the power of deduction has been morphed into almost superhuman powers.

Here is a well-thought-out list of 12 steps on How to Develop Sherlock Holmes Intuition.  I really like #10.

Talk through your conclusion with a trusted person. Holmes was a guarded person and trusted people only when they had proven themselves trustworthy and loyal. In turn, those persons had Holmes' complete trust, such as Watson. By the same token, "nothing clears up a case so much as stating it to another person,"[13] so be sure to open up and talk through your conclusions with someone you do trust, to use them as a sounding board when you've worked through the deductions.


One of the things that makes a conference more enjoyable is having a partner, like Sherlock Holmes and Dr. Watson: discussing, analyzing, criticizing, exchanging ideas.  Olivier Sanche was one of those guys who was a partner.  And at OCP I had a partner I trusted to discuss the conclusions I had come to.

After 2 days of intense analysis I especially like the last point.  Getting home, it is family time, and we are getting a new ping pong table to play as a family.

Plan downtime, party time, and leisure time into your life. Sherlock Holmes worked hard when sleuthing but he also loved his leisure and being languid.[15] Deducing things and pushing your intuition to its limits can wear you down and rejuvenation is an essential part of ensuring that you continue to stay sharp, focused, and clever.

CIA Publication says most Intelligence errors are caused by filtering errors, not data collection

Years ago, I worked on a closed-loop monitoring system, and one part of the design was an analysis of the data that was considered not relevant, looking for data that had been mistakenly discarded.
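A minimal sketch of the pattern in Python (the function name and bounds are hypothetical, not from the original system): instead of throwing filtered-out readings away, keep them in a side channel so the "not relevant" stream can be audited later.

# Sketch of a filter that keeps what it discards, so the rejected
# stream can be audited later. Names and bounds are hypothetical.
def filter_readings(readings, lo=0.0, hi=100.0):
    kept, discarded = [], []
    for r in readings:
        (kept if lo <= r <= hi else discarded).append(r)
    return kept, discarded

kept, discarded = filter_readings([12.5, 250.0, 98.6, -3.2])
# Periodically review the discarded stream for readings the
# filter bounds rejected by mistake.
print("kept:", kept, "| for review:", discarded)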

A friend sent me a CIA publication that discusses catching the outlier ideas, the stuff that gets discarded.

Hunting for Foxes: Capturing the Potential of Outlier Ideas in the Intelligence Community
Clint Watts and John E. Brennan



In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth.
—Helmuth von Moltke

Here is the main point made:

Of all the examinations of intelligence surprise and failure, Richards Heuer provides perhaps the most succinct characterization of the problem:

Major intelligence failures are usually caused by failures of analysis, not failures of collection. Relevant information is discounted, misinterpreted, ignored, rejected, or overlooked because it fails to fit a prevailing mental model or mind-set.

The trouble with filters is that you filter out good data along with the bad data.

Which brings up the issue of Outliers.

Outlier:
—A data point far outside the norm for a variable or population;
—An observation that "deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism";
—A value that is "dubious in the eyes of the researcher";
—A contaminant.
Source: J. Osborne, "The Power of Outliers"
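To make the definition concrete, here is a small first-pass outlier test in Python. I use the median absolute deviation rather than mean and standard deviation, because a big outlier inflates the mean and standard deviation enough to mask itself; the cutoff of 5 is an illustrative choice, not something the paper prescribes.

# Robust first-pass outlier detection using the median absolute
# deviation (MAD). The cutoff k=5.0 is an illustrative choice.
from statistics import median

def outliers(data, k=5.0):
    med = median(data)
    mad = median(abs(x - med) for x in data)
    return [x for x in data if abs(x - med) > k * mad]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]
print(outliers(data))  # [42.0] -- investigate it, don't just delete it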


If there can be an Open Data Standard for Food, why can't the same be done in data centers?

Attending SXSW, it can look like it is all about location.

SXSW: Location, location, location fuels mobile apps

A spate of location-based apps generates buzz at SXSW

By Cameron Scott, IDG News Service | Software

Soon FourSquare won't be the only cool kid on the location-based apps block: A new wave of startups, including Highlight, Zaarly, TaskRabbit and Localmind, are generating buzz at South by Southwest by drawing on smartphone location data to deliver a range of social, commercial and informational experiences.

Highlight is a "social discovery" app that notifies users when they are near someone they don't know with whom they might share interests. It runs in the background, only requiring the user's attention when it has found a potential social contact.

A group of us got a chance to chat with GigaOm's Stacey Higginbotham and asked her, as an Austin native, what is hot at SXSW.  Food and health.  Here is a post that Stacey put up this morning on open data standards for food.

This is cool: An open data standard for food

An open data standard for food has emerged on the web. With such a tool, restaurants, food apps, grocery stores, the government and other interested parties can tell that arugula is also called rocket salad, no matter where on the web it occurs or what a restaurant menu or recipe app calls it. Right now, that’s an impossible task, which leads to inefficiencies in both consumer-facing apps and the supply chains of restaurants and grocery stores.

A group of folks concerned about sustainable foods have built the seeds of an open food database hosted on Heroku, with the code pertaining to it located at Github. The group, which gave an awesome panel at South by Southwest in Austin, consisted of a restaurateur, an urban gardening movement, someone from Code for America and someone who rates sustainable restaurants.

What is cool about this idea is what could be done if we had open data standards for the components in data centers.  Like how?  Knowing the ingredients in a recipe is useful.  Wouldn't it be great if you could get details on the components in any piece of equipment?
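As a thought experiment, here is what one record in such a standard might look like. To be clear, the schema, field names, and values below are invented for illustration; no such standard exists that I know of.

# Purely hypothetical sketch of a record in an open data standard
# for data center components. Schema and values are invented.
import json

server_record = {
    "type": "server",
    "model": "example-2u-node",  # hypothetical model name
    "components": [
        {"part": "psu", "rated_watts": 750, "efficiency": "80 PLUS Gold"},
        {"part": "cpu", "tdp_watts": 95, "count": 2},
        {"part": "dimm", "size_gb": 16, "count": 8},
    ],
    # like arugula vs. rocket salad, the same gear goes by many names
    "aliases": ["2U server", "compute node"],
}

print(json.dumps(server_record, indent=2))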

More from Stacey's post:

He texted me after the panel to say that his primary concern was that the effort be cautious about how it tries to attribute things to restaurants. For example, while he might gain value from starting from such a database, his real value is the taxonomy his team has created around dishes. So, if one checks out catfish po' boys on Tasted Menu, his app could use the Open Food data for the catfish or the breading, but his app will also note that the food is Cajun or Creole, fried, a sandwich and other things that will help real users figure out where they want to go and what they want to eat.

I don't know about you, but I am much more excited about the idea of open data standards than another location app.

A Lesson from Minority Report, sometimes you want everybody agreeing to be right

Two of my friends and I have been discussing a variety of technical and business decisions that need to be made.  One of the things we have done is to make it a rule that all three of us need to be in agreement on decisions.  Having three decision makers is a good pattern to ensure that a diversity of perspectives is included in the analysis, and decisions can still be made if one decision maker is not available.

Triple redundancy, though, is typically used where, as long as two systems are in agreement, you can make a decision.

In computing, triple modular redundancy, sometimes called triple-mode redundancy,[1] (TMR) is a fault-tolerant form of N-modular redundancy, in which three systems perform a process and that result is processed by a voting system to produce a single output. If any one of the three systems fails, the other two systems can correct and mask the fault.
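To see the mechanics, here is a minimal sketch of a TMR-style majority voter in Python; the names are mine, not from any particular fault-tolerant system.

# Minimal TMR-style majority voter: three independent results go
# in, the value at least two agree on comes out.
from collections import Counter

def tmr_vote(a, b, c):
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no two systems agree")
    return value

print(tmr_vote(42, 42, 7))  # 42 -- the faulty third system is masked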

But an example of the flaw in this approach can be taken from Minority Report and its use of precogs, where a zealousness to come to a conclusion allows a "minority report" to be discarded.

Majority and minority reports

Each of the three precogs generates its own report or prediction. The reports of all the precogs are analyzed by a computer and, if these reports differ from one another, the computer identifies the two reports with the greatest overlap and produces a majority report, taking this as the accurate prediction of the future. But the existence of majority reports implies the existence of a minority report.
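Following that thought, here is a variant of the voter sketch above that keeps the minority report for review instead of silently discarding it:

# Variant of the majority voter that surfaces the minority report
# instead of silently discarding it.
from collections import Counter

def vote_with_minority(a, b, c):
    counts = Counter([a, b, c]).most_common()
    majority = counts[0][0]
    minority = [value for value, n in counts[1:]]
    return majority, minority

result, dissent = vote_with_minority("no crime", "no crime", "crime")
print(result, "| minority report:", dissent)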

James Hamilton has a blog post on error detection.  Errors could be considered the crimes in the data center.  And you can falsely assume there are no errors (crimes) because there is error correction in various parts of the system.

Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC mother boards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted.
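In that spirit, here is a minimal sketch of application-level checksumming per data block, layered on top of whatever the disk and TCP/IP already do (the functions are illustrative, not from Hamilton's post):

# Application-level checksum per data block, on top of the error
# correction the layers below already provide.
import zlib

def write_block(data: bytes):
    return data, zlib.crc32(data)  # store the checksum with the block

def read_block(data: bytes, stored_crc: int) -> bytes:
    if zlib.crc32(data) != stored_crc:
        raise IOError("block corrupted somewhere below us")
    return data

block, crc = write_block(b"hello, data center")
read_block(block, crc)  # verifies cleanly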

Here is how Hamilton thinks about it:

This incident reminds us of the importance of never trusting anything from any component in a multi-component system. Checksum every data block and have well-designed, and well-tested failure modes for even unlikely events. Rather than have complex recovery logic for the near infinite number of faults possible, have simple, brute-force recovery paths that you can use broadly and test frequently. Remember that all hardware, all firmware, and all software have faults and introduce errors. Don’t trust anyone or anything. Have test systems that bit flips and corrupts and ensure the production system can operate through these faults – at scale, rare events are amazingly common.
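And a sketch of the "test systems that bit flips" idea, assuming the simple CRC scheme above: flip a single bit the way a fault injector would and confirm the layer above notices.

# Flip one bit, as a fault-injection test system would, and check
# that the application-level checksum catches the corruption.
import zlib

data = b"hello, data center"
crc = zlib.crc32(data)

corrupted = bytearray(data)
corrupted[3] ^= 0x01  # single bit flip

assert zlib.crc32(bytes(corrupted)) != crc  # corruption detected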

Maybe you shouldn't just let the majority rule; listen to the minority.  All it takes is a small system, a system in the minority, to bring down a service.

Implementing a Green Data Center? Think about Changing your Habits

Greening the data center is not easy.  Contrary to what many think, a green data center is not simply a low PUE, a certification like LEED, and other ways to say "look at me, I have a Green Data Center."

A green data center is not a binary thing of do this, and now you are green.  Driving a Prius used to be thought of as a green thing.  Driving a hybrid isn't a big deal any more.

What should you do?  Think about changing your habits.  Bad habits can accumulate a lot of waste.

Charles Duhigg has a new book, "The Power of Habit":
“Charles Duhigg masterfully combines cutting-edge research and captivating stories to reveal how habits shape our lives and how we can shape our habits. Once you read this book, you’ll never look at yourself, your organization, or your world quite the same way.”
—Daniel H. Pink, author of the #1 New York Times bestselling Drive and A Whole New Mind