Breaking the 1GB barrier

In my exciting last post, I complained about discovering that my 4-year-old Casio digital camera couldn’t handle a 2GB card. I had planned to solve the problem by going back to Radio Shack and buying a couple of 1GB cards, but a different answer presented itself the next morning as I was perusing the Murky Nooz. San Jose Camera and Video was having a “no sales tax” sale, and they were advertising the Nikon Coolpix L18 for under $130, which seemed like a reasonable price if the camera was at all tolerable (especially since if I was going to buy a camera in time for Jeff’s trip, I didn’t have time for anything but a local store).

To make a long (and probably boring) story short, we wound up leaving the store with two L18s; Jeff has one with him in Israel, and we have the other one. I even bought the extended warranty for each camera, but I don’t feel too guilty about it — it came “free” with the $19 case for the camera, and I wasn’t going to leave without buying a case.

I was thinking about buying a Canon for myself so that I could play with the CHDK firmware hack, but three things turned me in the direction of the Nikon:

  • I wasn’t thrilled with the feel of the Canon model I played with
  • Price
  • I need another tech toy like I need another hole in my head

I think the first item was the deciding one, but the other two did play some part. I need to remember that last one more often, too.

Today’s unpleasant discovery about my digital camera

I’m fairly happy with my digital camera, a Casio Exilim P-600. It doesn’t record sounds any more (that broke early in its life, and an attempt to use Fry’s “Performance Guarantee” convinced me never to buy a “Performance Guarantee” again, as well as to buy as little as I can from Fry’s), but that wasn’t a core feature anyway.

But Jeff is getting ready to go to Israel for his senior class trip, and I wanted to send the camera with him and for him to have enough memory not to have to worry about filling it up. So I picked up a 2GB SD card at Radio Shack this evening for $18, dropped it in the camera, and was happy when the display showed 793 pictures remaining.

I, however, am paranoid about these things, so I told the camera to reformat the card, which it did…and suddenly the capacity was only 393 pictures. Hmm, said I, and took the camera to my PC. Which told me that my 2GB card had an E: partition of 968MB. And nothing I could do in the way of reformatting the card changed the situation.

A bit of Googling led me to RITEK USA's recovery utility to regain the missing capacity on the card. And a bit more Googling got me an unRAR program, since, for unknown reasons, they distribute the program as an EXE inside a RAR inside a ZIP. And a minute later, I had a 2GB card again, formatted in FAT.

Inserting it into my camera got me a “CARD ERROR”. I tried again, same result. I tried formatting the card as FAT32, and the camera wouldn’t even boot up.

Eventually, I found the answer at Steve’s Digicam Forums. The maximum capacity SD card for the Exilim P-600 is 1GB. Putting a 2GB card in can work…sort of…but it’s fraught with peril.

So tomorrow, it’s back to Radio Shack to buy a 1GB card for Jeff to take on the trip. I’ll keep the 2GB card, though, because I suspect there’s a new camera in my future anyway.

Notes from the 2008 Almaden Institute

I spent Wednesday and Thursday of this week at the Almaden Institute. This year’s theme was “Innovating with Information”, and I found it to be very interesting.

I took notes in three media:

  • Paper — I thought using the notebook they gave us and leaving my laptop in my office would reduce my distractibility and improve my ability to capture relevant information. I was wrong; all I proved was that I can’t write legibly.

  • Text Editor — this worked better, at least in providing legibility afterwards. But in some ways, it was too easy to just copy down what the speakers were saying rather than thinking about it and processing it.

  • Twitter — this is what I used most often, both from my MacBook Pro and from my iPhone. Typing on the MBP was, of course, far easier and faster, but using the iPhone worked. In both cases, I had to concentrate on what the speaker meant instead of just copying what he or she said, so that I could boil points down to 140 characters.

As far as I could tell, there was only one other active Twitterer at the Institute, @jyarmis, Jonathan Yarmis of AMR Research.

Day 1: Selected Session Notes

I make no claim for completeness here, and I didn’t make it to all of the talks (in particular, I missed a lot of the IBM talks). But I did find a lot of interesting material that I thought might be worth sharing.

Hal Varian (Chief Economist, Google): “Innovation, Components, and Complements”

Hal’s talk seemed familiar, and it was — a quick Google search shows that he’s been giving a variation on this talk for at least 5 years, including at an earlier Almaden Institute. The basic message is simple: innovation doesn’t usually happen as a singular invention; instead, the environment plays a key role, either by providing components the innovator can combine to create the innovation, or by providing complementary inventions which, together, create more value than either would have alone (think DVD players and discs, or Wintel). In the latter case, it can be difficult to convince the owners of the complementary products to work in sync to maximize their joint gain, but changes in the economic model can make a huge difference (look at the change in the DVD market when the studios went from high prices for movies ($80) to “sell-thru” pricing ($20), and then to “revenue sharing” with the big players like Blockbuster and Netflix).

Hal’s talk also reminded me of Tim O’Reilly’s talk a couple of years ago, where he pointed out that as you looked at a stack, alternating levels tended to be either commoditized or very profitable (in the Windows environment, Microsoft sucks up a large part of the profit from the layer just below, the PC maker).

Kris Pister of Berkeley and Dust Networks: “Instrumenting the Planet for Intelligence”

He talked about “Smart Dust”, which turns out (so far) to mean self-organizing mesh networks with nodes about the size of a US penny at the smallest. They turn out to be very valuable in places like oil refineries, where they provide (through clever network design) the equivalent of wired reliability without the difficulty of installing wires or conduit. The next step is location-aware “smart dust”, which scares me for privacy reasons (for similar reasons, I have never tried APRS, nor am I playing with BrightKite).

Atul Arya from BP: “Business Value of Using Sensors”

BP’s estimates of world petroleum supplies are:

  • Total global petroleum: 7 trillion barrels
  • Extracted to date: 1 trillion barrels
  • “Easily” extractable: 1.2 trillion barrels
  • “Potentially” extractable, with improvements in technology: 0.7 trillion barrels
  • Annual world consumption: 0.032 trillion barrels (US: 0.0075 trillion).

So the potential improvement in technology, mostly driven by improved information, represents about 20 years’ world supply.

BP, of course, doesn’t have all the petroleum in the world, but they think they can add 1 billion barrels to their stock by better use of sensors and information technology — that represents over 120 billion dollars (and growing).
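
Both of those figures are easy to sanity-check; here's a quick back-of-the-envelope in Python, using only the numbers above:

```python
# Back-of-the-envelope check on BP's figures.
potentially_extractable = 0.7   # trillion barrels, unlocked by better technology
annual_consumption = 0.032      # trillion barrels consumed worldwide per year
print(potentially_extractable / annual_consumption)  # ~21.9, i.e. "about 20 years"

added_barrels = 1e9   # barrels BP hopes to add through sensors and IT
added_value = 120e9   # dollars, per their estimate
print(added_value / added_barrels)  # ~$120/barrel, roughly the oil price at the time
```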

They’ve developed an “Advanced Collaboration Environment” where engineers from different disciplines can work together (that’s physically together) using the same data and shared displays to make better decisions; this is a BIG DEAL for them. They are also suffering from the loss of knowledge with the aging of the Baby Boomers, and are looking for ways to still draw on that knowledge and pass it along as (and after) boomers retire, especially since there’s a big gap in the talent pipeline (there were a few years when they weren’t recruiting).

Brenda Dietrich from IBM Research: “Adding Value to Information via Analytics”

I found two things in her presentation to be particularly interesting.

One is a new way of doing supply chain optimization; traditionally, you optimize against one variable (profit) and treat everything else as a constraint. But if you decide that you want to jointly optimize profit and, say, carbon emissions, you can end up with significantly different strategies.
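
She didn't show any code, but the flavor is easy to sketch. Here's a toy example of my own devising (the numbers, the two shipping modes, and the scipy formulation are all mine, not hers): the profit-only plan and the profit-plus-carbon plan come out quite different.

```python
# Toy illustration: ship 100 units by truck or rail.  Truck earns more per
# unit but emits more CO2.  Compare the plan that optimizes profit alone
# with one that prices carbon into the objective.
from scipy.optimize import linprog

profit = [5.0, 4.0]          # $ per unit shipped: [truck, rail]
carbon = [3.0, 1.0]          # kg CO2 per unit shipped: [truck, rail]
bounds = [(0, 80), (0, 80)]  # per-mode capacity in units
A_eq, b_eq = [[1, 1]], [100] # truck + rail shipments must total 100 units

# linprog minimizes, so negate the objective to maximize it.
profit_only = linprog(c=[-p for p in profit], A_eq=A_eq, b_eq=b_eq, bounds=bounds)

carbon_price = 1.5           # $ per kg CO2, an arbitrary weight for illustration
joint = linprog(c=[-(p - carbon_price * e) for p, e in zip(profit, carbon)],
                A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("profit only:     truck=%.0f rail=%.0f" % tuple(profit_only.x))
print("profit + carbon: truck=%.0f rail=%.0f" % tuple(joint.x))
```

With these made-up numbers, optimizing profit alone fills the trucks first, while pricing carbon into the objective pushes most of the volume onto rail, which is the kind of strategy shift she was describing.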

The other was her discussion of “personal benchmarking” sites, like carbonfootprint.com, which she extended to include some sites with more social networking, like ravelry.com, a knitting site with a permanent waiting list and at least 2 IBM groups. I would add LibraryThing to her list, although I haven’t been using it for anything but logging my books (and I’m behind there).

Chris Meyer of Monitor Networks: “Evolving Offers in an Instrumented Economy”

“Offers” are what arise when you combine products and services; one interesting feature is that they (and the buyer’s relation to the seller) have to last as long as the buyer’s needs. He also pointed out that all those credit card mailings that banks send out (at least in the US) are sensors, most of which die, silently.

I ran out of energy (both figuratively and literally) around this point, and took few useful notes from the panel, and none during the dinner speaker. However, I did note that Brian Arthur (Santa Fe Institute) claims that the economy developed muscles during the Industrial Revolution; now, he says that the Information Revolution is causing it to develop a neural system. I wonder what will happen when/if the economy develops self-awareness?

Day 2: Selected Session Notes

Martin Fleming from IBM introduced the morning

He noted that IT currently consumes about 2% of the human race’s energy budget (which is a lot), and that we need to be better at making that 2% influence the other 98%. There are, of course, many political and social factors beyond the technology — and he noted that both Europe and Japan have increased carbon emissions since signing the Kyoto Treaty, despite the cap and trade system in Europe and regulations in Japan.

Horst Simon, UC Berkeley: “Using High Performance Computing to Drive Innovation and Knowledge Discovery from Petascale Data”

He claims that Computational Science and Engineering (CSE) is a new field, combining theory, experimentation, and simulation in roughly equal parts. It plays a big role in physics and biology so far. There is a “Data Tsunami”; in 2003, we reached a turning point, as experimental data started to dominate over simulations. And the value of experimental data grows over time, while the value of simulated data declines — the Nobel-Prize winning work from 1992 on the Cosmic Microwave Background can now be assigned to upper-division students to repeat on their laptops.

He also pointed out that fast analysis makes it possible to ask questions of data that you wouldn’t dream of if all you have is overnight batch capability. And he claims that the Big Data which is driving Big Science has specialized networking requirements that can’t necessarily be met by a scaled-up TCP/IP network (isn’t he worried about being accused of heresy?).

Andreas Weigend, formerly Chief Scientist of Amazon.com: “New Consumer Data Revolution – Pay or Be Paid”

He opened his talk by asking “who works for Google?”. A couple of hands went up, but his point was that we all work for Google, because we provide them with data which they turn into money. And the same is true for Amazon and other firms. This was a good talk, but I didn’t do well at taking notes. Fortunately, his web site has lots of material worth checking out!

Cynthia Dwork, Microsoft: “Privacy: A Natural Resource to be Conserved”

This was the most mathematical talk of the Institute (I’m not sure there were any equations in any other talk, in fact). The basic thrust was that it was surprisingly easy for an attacker to correlate “anonymous” information from a database with other public information to defeat the anonymity (her poster child was the attack on the anonymized Netflix Prize dataset, where the Internet Movie Database provided the necessary clues), but there have been others, including the AOL search fiasco. She went on to talk about the work her team has been doing on how to introduce (steadily increasing) errors into the results of queries against an anonymized database to avoid leaking information.
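
For the curious, the flavor of the approach (as I understood it) is close to the Laplace mechanism from the differential-privacy literature: perturb each query answer with random noise, and make later queries noisier so a fixed privacy budget is never exhausted. The sketch below is my own illustration, not her actual system:

```python
# Illustrative only: a count query answered with Laplace noise, where each
# successive query gets half the remaining privacy budget, so the noise
# (the "error") steadily increases as more questions are asked.
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(true_count, query_index, total_epsilon=1.0):
    epsilon_k = total_epsilon / (2 ** (query_index + 1))  # budget for this query
    scale = 1.0 / epsilon_k      # a counting query has sensitivity 1
    return true_count + rng.laplace(loc=0.0, scale=scale)

for k in range(5):
    print(f"query {k}: {noisy_count(4200, k):.1f}")
```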

Her takeaway message: “Privacy is a natural resource. It’s non-renewable, and it’s not yours. Conserve it.”

Conversations

Even though the sessions were good, the best part of the Institute for me was the conversations. I caught up with a number of IBMers, of course, like Peter Andrews, Currie Boyle, and Mark Wegman, but I really appreciated the opportunity to interact with folks from the outside, such as Andreas Weigend, Cynthia Dwork, Jonathan Yarmis, Ellen Levy (LinkedIn), Bruce Paton (SFSU), and Darryl Williams (Partnership Solutions International).

I’m looking forward to next year’s Institute.

Innovation that matters, courtesy of Twitter

Yesterday, I took a brief break at work and tuned into Twitter. One of the first tweets I saw was an Amber Alert with a request to retweet it so that others would see it.

A moment later, Twitter user @Pistachio posted “Thanks @dmitrigunn and @joshlarson for help with Amber deets. Josh notes: current Amber alerts are HARD TO FIND online. This sucks.”

I took that as a challenge and consulted the oracle…err, Google…and quickly found the Amber Alert page on the National Center for Missing and Exploited Children's website, which I tweeted, along with the URL for the page with details of the Amber Alert being broadcast (thankfully, the child has been found, so I don’t have to provide that URL any more).

@Twitteratti then wondered “@Davidsinger just struck me, are amber alerts “on” twitter? would seem like a good candidate.”

I checked the obvious Twittername, @AmberAlert, but it had been taken by someone named Amber (and had only been used once), which I mentioned in a tweet.

Then I remembered that @IkePigott had once posted about plans for using Twitter to relay details of evacuations (he works for the Red Cross), so I asked him if he knew if Amber Alerts were on Twitter. He didn’t know of anywhere, but said “@davidsinger – In fact, you could create a Yahoo Pipe that amalgamates the various Amber feeds, and make an Uber-Amber Feed for Tweeting.”

That made me go back out to the Amber Alert site to look for a feed — there wasn’t one, which seemed odd.
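
Had per-state feeds existed, the amalgamation Ike described would have been almost trivial, whether in Yahoo Pipes or in a few lines of Python. Here's a rough sketch (the feed URLs are made up, which was exactly the problem):

```python
import feedparser   # pip install feedparser

# Hypothetical per-state Amber Alert feeds; none of these actually existed.
STATE_FEEDS = [
    "http://example.org/amber/california.rss",
    "http://example.org/amber/texas.rss",
]

def amalgamate(feed_urls):
    """Pull every entry from each feed into one combined list."""
    combined = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            combined.append((entry.get("title", ""), entry.get("link", "")))
    return combined

# Each combined entry could then be trimmed to 140 characters and tweeted.
for title, link in amalgamate(STATE_FEEDS):
    print(title, link)
```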

Then I remembered reading a Larry Magid column in the San Jose Mercury News (yes, on dead trees!) where he mentioned that he was an unpaid advisor to NCMEC. So I dropped him a quick email asking “Can you use your influence on NCMEC to create an Amber Alert RSS/Atom feed?” 20 minutes later, I got a note from him saying “I passed this on to NCMEC’s COO with a recommendation that they consider it seriously”, and this morning, I woke to another note saying “I got a note back from NCMEC. They will implement an RSS feed. More later.” (By the way, I’d never corresponded with Larry before this.)

Elapsed time, start to Larry’s first reply: 50 minutes. And the note relaying NCMEC’s “yes” came only 11 hours after that, at 3:33am Pacific Time.

I don’t think it would have been possible to make something like this happen in such a short time without social media like Twitter (yes, my correspondence with Larry was by old-fashioned email because I didn’t have his Twitter username and because I wanted to write more than 140 characters, but if we’d been in the same circle, I would have used Twitter without a second thought). The ability to collaborate in public, in real-time, was essential — as were the Amber Alert and @Pistachio’s observation that finding alerts was difficult, neither of which was directed to me.

One footnote: while this was going on, another user, @princess_belle, pointed out that there was, indeed, a Twitter account which tweeted Amber Alerts: @missingchildren; but it only had 90 followers (now 107). Much later, I found out that @NateRitter had created that account and the system to take the NCMEC’s email feed and convert it to tweets, and I urge you to read his posting about the system and why he made it.