Saturday, June 19, 2010

SoCG, Day 3 and MASSIVE

I suppose that before I get too wrapped up in the next conference, I should say a bit about the previous two.

The last day of SoCG featured the second invited talk. Claudio Silva spoke about verification of visualization software. For example, suppose a doctor does a CAT scan and then looks at the resulting images to determine if/how/where surgery should be performed. It'd be good if that visualization were accurate. And in fact, sometimes it's not. (Scary, I know.) He's been working on (and succeeding at) developing algorithms to verify that such visualizations are accurate. Nice problem.

The second workshop on massive data algorithmics (MASSIVE) was held the day after SoCG. It was small and excellent. I gave a talk on range searching over compressed kinetic data that, while not in the standard I/O-efficient vein of much of the workshop, does begin to address how to handle the massive data generated by moving objects.

The highlights of MASSIVE for me, other than the good food and company, were the two talks on the MapReduce algorithmic framework for parallel/distributed computing. I've never been particularly interested in parallel algorithms, since they always seemed annoyingly messy to me, and the frameworks somehow weren't compelling. This framework, however, seems elegant. (And yes, it's patented and used by Google, where I'll be working in a few months.) I won't try to explain it here, but even if you're skeptical about designing parallel algorithms, it's worth looking into.
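To give a flavor of what makes the framework appealing (this is my own minimal single-machine sketch of the classic word-count example, not anything from the talks): you write just two functions, a mapper and a reducer, and the framework handles distributing the work and grouping intermediate results by key.

```python
from collections import defaultdict

def map_fn(doc):
    # Mapper: emit (word, 1) for each word in a document.
    for word in doc.split():
        yield word, 1

def reduce_fn(word, counts):
    # Reducer: combine all partial counts for one word.
    return word, sum(counts)

def map_reduce(docs):
    # Map phase over every input, then a simulated "shuffle"
    # that groups emitted values by key.
    intermediate = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            intermediate[key].append(value)
    # Reduce phase: one call per distinct key.
    return dict(reduce_fn(k, v) for k, v in intermediate.items())

result = map_reduce(["the quick fox", "the lazy dog"])
# result == {"the": 2, "quick": 1, "fox": 1, "lazy": 1, "dog": 1}
```

In a real deployment the map and reduce calls run on many machines in parallel and the shuffle happens over the network, but the programmer's view is just these two functions.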

About 30 seconds after the last talk of MASSIVE had ended, the electricity went out. Nice timing, conference organizers. Congrats and thanks to Suresh and the other folks at U. Utah and to MADALGO.
