Notes from the 2008 Almaden Institute

I spent Wednesday and Thursday of this week at the Almaden Institute. This year’s theme was “Innovating with Information”, and I found it very interesting.

I took notes in three media:

  • Paper — I thought using the notebook they gave us and leaving my laptop in my office would reduce my distractibility and improve my ability to capture relevant information. I was wrong; all I proved was that I can’t write legibly.

  • Text Editor — this worked better, at least in providing legibility afterwards. But in some ways, it was too easy to just copy down what the speakers were saying rather than thinking about it and processing it.

  • Twitter — this is what I used most often, both from my MacBook Pro and from my iPhone. Typing on the MBP was, of course, far easier, but the iPhone worked. In both cases, I had to concentrate on what the speaker meant instead of just copying what he or she said, so that I could get the points down to 140 characters.

As far as I could tell, there was only one other active Twitterer at the Institute, @jyarmis, Jonathan Yarmis of AMR Research.

Day 1: Selected Session Notes

I make no claim for completeness here, and I didn’t make it to all of the talks (in particular, I missed a lot of the IBM talks). But I did find a lot of interesting material that I thought might be worth sharing.

Hal Varian (Chief Economist, Google): “Innovation, Components, and Complements”

Hal’s talk seemed familiar, and it was: a quick Google search shows that he’s been giving variations on it for at least five years, including at an earlier Almaden Institute. The basic message is simple: innovation doesn’t usually happen as a singular invention. Instead, the environment plays a key role, either by providing components the innovator can combine to create the innovation, or by providing complementary inventions which, together, create more value than either would alone (think DVD players and discs, or Wintel). In the latter case, it can be difficult to convince the owners of the complementary products to work in sync to maximize their joint gain, but changes in the economic model can make a huge difference: look at how the DVD market changed when the studios went from high prices for movies ($80) to “sell-thru” pricing ($20), and then to “revenue sharing” with big players like Blockbuster and Netflix.

Hal’s talk also reminded me of Tim O’Reilly’s talk a couple of years ago, where he pointed out that if you look at a technology stack, alternating layers tend to be either commoditized or very profitable (in the Windows environment, Microsoft sucks up a large part of the profit from the layer just below it, the PC makers).

Kris Pister of Berkeley and Dust Networks: “Instrumenting the Planet for Intelligence”

He talked about “Smart Dust”, which (so far) means self-organizing mesh networks with nodes about the size of a US penny at the smallest. They have proven very valuable in places like oil refineries, where clever network design gives them the equivalent of wired reliability without the difficulty of installing wires or conduit. The next step is location-aware “smart dust”, which scares me for privacy reasons (for similar reasons, I have never tried APRS, nor am I playing with BrightKite).

Atul Arya from BP: “Business Value of Using Sensors”

BP’s estimates of world petroleum supplies are:

  • Total global petroleum: 7 trillion barrels
  • Extracted to date: 1 trillion barrels
  • “Easily” extractable: 1.2 trillion barrels
  • “Potentially” extractable, with improvements in technology: 0.7 trillion barrels
  • Annual world consumption: 0.032 trillion barrels (US: 0.0075 trillion barrels).

So the potential improvement in technology, mostly driven by improved information, represents about 20 years’ world supply at current consumption rates.

BP, of course, doesn’t have all the petroleum in the world, but they think they can add 1 billion barrels to their reserves through better use of sensors and information technology. At then-current oil prices, that represents over 120 billion dollars (and growing).
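
A quick sanity check of those numbers, using Python as a calculator (the per-barrel price is my assumption, read back from the “over 120 billion dollars” figure):

    # Back-of-envelope check of the BP figures above.
    potentially_extractable = 0.7e12   # barrels unlocked by better technology
    annual_consumption = 0.032e12      # barrels per year, worldwide
    print(potentially_extractable / annual_consumption)   # ~21.9 years of supply

    bp_added_barrels = 1e9             # barrels BP hopes sensors/IT will add
    price_per_barrel = 120             # USD per barrel (my assumption)
    print(bp_added_barrels * price_per_barrel / 1e9)      # 120.0 billion USD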

They’ve developed an “Advanced Collaboration Environment” where engineers from different disciplines can work together (that’s physically together) using the same data and shared displays to make better decisions; this is a BIG DEAL for them. They are also suffering from the loss of knowledge with the aging of the Baby Boomers, and are looking for ways to still draw on that knowledge and pass it along as (and after) boomers retire, especially since there’s a big gap in the talent pipeline (there were a few years when they weren’t recruiting).

Brenda Dietrich from IBM Research: “Adding Value to Information via Analytics”

I found two things in her presentation to be particularly interesting.

One is a new approach to supply chain optimization. Traditionally, you optimize a single objective (profit) and treat everything else as constraints; but if you decide to jointly optimize profit and, say, carbon emissions, you can end up with significantly different strategies.
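
As a concrete illustration (my own toy numbers, not anything from her talk), here is a minimal sketch of the idea: fold a carbon price into the objective of a tiny linear program and watch the sourcing plan change as that price rises.

    import numpy as np
    from scipy.optimize import linprog

    # Two plants supplying one product; plant A earns more per unit but
    # emits far more CO2. All numbers below are invented for illustration.
    profit = np.array([12.0, 9.0])   # dollars per unit, plants A and B
    carbon = np.array([8.0, 2.0])    # kg CO2 per unit, plants A and B
    demand = 100                     # units that must be supplied
    bounds = [(0, 80), (0, 80)]      # per-plant capacity

    for lam in [0.0, 1.0, 3.0]:      # carbon price, dollars per kg CO2
        # linprog minimizes, so negate profit and add the carbon penalty
        c = -profit + lam * carbon
        res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=bounds)
        x = res.x
        print(f"carbon price {lam}: plan {x.round(1)}, "
              f"profit ${profit @ x:.0f}, CO2 {carbon @ x:.0f} kg")

Even this toy shows the effect she described: as soon as carbon carries a nonzero price, the “optimal” plan flips from the dirty, profitable plant to the clean one.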

The other was her discussion of “personal benchmarking” sites, like carbonfootprint.com, which she extended to include some sites with more social networking, like ravelry.com, a knitting site with a permanent waiting list and at least two IBM groups. I would add LibraryThing to her list, although I haven’t been using it for anything but logging my books (and I’m behind there).

Chris Meyer of Monitor Networks: “Evolving Offers in an Instrumented Economy”

“Offers” are what arise when you combine products and services; one interesting feature is that they (and the buyer’s relationship with the seller) have to last as long as the buyer’s needs do. He also pointed out that all those credit card mailings that banks send out (at least in the US) are sensors, most of which die silently.

I ran out of energy (both figuratively and literally) around this point, and took few useful notes from the panel, and none from the dinner speaker. However, I did note that Brian Arthur (Santa Fe Institute) claims that the economy developed muscles during the Industrial Revolution; now, he says, the Information Revolution is causing it to develop a neural system. I wonder what will happen when (or if) the economy develops self-awareness?

Day 2: Selected Session Notes

Martin Fleming from IBM introduced the morning session

He noted that IT currently consumes about 2% of the human race’s energy budget (which is a lot), and that we need to get better at making that 2% influence the other 98%. There are, of course, many political and social factors beyond the technology; he noted that both Europe and Japan have increased carbon emissions since signing the Kyoto Protocol, despite the cap-and-trade system in Europe and regulations in Japan.

Horst Simon, UC Berkeley: “Using High Performance Computing to Drive Innovation and Knowledge Discovery from Petascale Data”

He claims that Computational Science and Engineering (CSE) is a new field, combining theory, experimentation, and simulation in roughly equal parts; so far, it plays its biggest roles in physics and biology. There is a “Data Tsunami”: we reached a turning point in 2003, as experimental data started to dominate simulation data. And the value of experimental data grows over time, while the value of simulated data declines; the Nobel Prize-winning work from 1992 on the Cosmic Microwave Background can now be assigned to upper-division students to repeat on their laptops.

He also pointed out that fast analysis makes it possible to ask questions of data that you wouldn’t dream of asking if all you had was overnight batch capability. And he claims that the Big Data driving Big Science has specialized networking requirements that can’t necessarily be met by a scaled-up TCP/IP network (isn’t he worried about being accused of heresy?).

Andreas Weigend, formerly Chief Scientist of Amazon.com: “New Consumer Data Revolution – Pay or Be Paid”

He opened his talk by asking “Who works for Google?” A couple of hands went up, but his point was that we all work for Google, because we provide them with data which they turn into money. The same is true for Amazon and other firms. This was a good talk, but I didn’t do well at taking notes. Fortunately, his web site has lots of material worth checking out!

Cynthia Dwork, Microsoft: “Privacy: A Natural Resource to be Conserved”

This was the most mathematical talk of the Institute (I’m not sure there were any equations in any other talk, in fact). The basic thrust was that it is surprisingly easy for an attacker to correlate “anonymous” information from a database with other public information to defeat the anonymity; her poster child was the attack on the anonymized Netflix Prize dataset, where the Internet Movie Database provided the necessary clues, but there have been others, including the AOL search fiasco. She went on to talk about the work her team has been doing on how to introduce (steadily increasing) errors into the results of queries against an anonymized database to avoid leaking information.
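
This is the line of work that became known as differential privacy. A minimal sketch of the flavor (the mechanism is from that work; the parameters and data below are mine, purely for illustration): add Laplace noise to each query answer, scaled to the query’s sensitivity, and make every query spend part of a fixed privacy budget, so the more questions you ask, the noisier the answers become.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_count(data, predicate, epsilon):
        # A counting query has sensitivity 1 (one person changes the
        # count by at most 1), so Laplace noise of scale 1/epsilon suffices.
        true_count = sum(1 for row in data if predicate(row))
        return true_count + rng.laplace(scale=1.0 / epsilon)

    # Toy "database" of ages. A fixed total budget is split across the
    # queries, so each extra query gets a smaller epsilon and more noise.
    ages = [23, 35, 42, 29, 61, 57, 33, 48]
    total_budget = 1.0
    for n_queries in [1, 4, 16]:
        eps = total_budget / n_queries
        answer = noisy_count(ages, lambda a: a > 40, eps)
        print(f"budget split {n_queries} ways: answer {answer:.1f} (true: 4)")

The budget view is exactly why her closing line works: once the privacy budget is spent, it is gone.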

Her takeaway message: “Privacy is a natural resource. It’s non-renewable, and it’s not yours. Conserve it.”

Conversations

Even though the sessions were good, for me the best part of the Institute was the conversations. I caught up with a number of IBMers, of course, like Peter Andrews, Currie Boyle, and Mark Wegman, but I really appreciated the opportunity to interact with folks from outside, such as Andreas Weigend, Cynthia Dwork, Jonathan Yarmis, Ellen Levy (LinkedIn), Bruce Paton (SFSU), and Darryl Williams (Partnership Solutions International).

I’m looking forward to next year’s Institute.