
Sunday, March 6, 2011

SoCG Accepted Motion/Sensing Papers

Yes, this is somewhat delayed. Better late than never?

Here's the list of exciting-looking motion/sensing papers culled from the SoCG 2011 accepted papers list. Also known as my to-read list.

  • Mark de Berg, Marcel Roeloffzen and Bettina Speckmann. Kinetic Convex Hulls and Delaunay Triangulations in the Black-Box Model
  • Umut Acar, Benoît Hudson and Duru Türkoglu. Kinetic Mesh Refinement in 2D
  • Rishi Gupta, Piotr Indyk, Eric Price and Yaron Rachlin. Compressive sensing with local geometric features

Sadly, none of these seem to have a pdf preprint up yet, providing me with yet another excuse for procrastination...

Tuesday, February 1, 2011

SODA: Instance Optimality

The highlight of SODA for me was the plenary talk Computational Geometry for Non-Geometers by Timothy Chan, especially the part in which he discussed instance optimality. The basic idea is to require an algorithm to perform optimally on every individual input, in comparison with the best algorithm tailored to that particular input (but not to that input's order). I believe this was first considered in a geometric setting (Instance-Optimal Geometric Algorithms), so you could imagine that the input is a set of points and the algorithm you're competing against doesn't know in advance in what order the points will be given to you. This wonderfully fixes the weakness of worst-case analyses, under which an algorithm can perform poorly on most input instances and still be declared optimal, since only its behavior on some hardest instance determines the bound. Instead, under an instance-optimal analysis, the algorithm is expected to perform well on all input instances.
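True instance optimality is subtle, but the flavor of instance-sensitive analysis shows up in a simpler classical setting: adaptive sorting, where the cost depends on how presorted the input is (here, its number of ascending runs k) rather than only on n. A toy sketch, not related to the paper's algorithms:

```python
import heapq

def runs(a):
    """Split a into its maximal ascending runs."""
    out, cur = [], [a[0]]
    for x in a[1:]:
        if x >= cur[-1]:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def adaptive_sort(a):
    """Merge the k ascending runs with a heap: roughly O(n log k) comparisons,
    so nearly-sorted instances are cheap and only adversarial ones cost O(n log n)."""
    if not a:
        return []
    return list(heapq.merge(*runs(a)))
```

On an already-sorted input k = 1 and the "sort" is a single linear pass; an instance-optimal analysis makes this kind of per-instance guarantee against every correct algorithm, not just against one sorting method.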

In the study of data structures for moving objects, a similar idea known as motion sensitivity was previously introduced under a motion-specific setting. Motion sensitivity generally describes algorithms that perform well under orderly motion and have performance that degrades as the point motion becomes more random. In my own work, I consider entropy-based analyses in order to capture this concept. In fact, the instance-optimal analysis Timothy described also resulted in an entropy-like inequality, so perhaps this relationship is inevitable/necessary for these analyses.

It's an exciting new area that I'm hoping to see a lot more of. It was certainly an excellent talk.

Sunday, January 30, 2011

SODA talks

I'm always a bit more interested in what exciting problems I'm introduced to at a conference than the proof techniques used in the solution (probably partially because the best talks often don't get into the details). So here's my report back from SODA, in the form of exciting problem descriptions.

Implicit Flow Routing on Terrains with Applications to Surface Networks and Drainage Structures by Mark de Berg, Herman Haverkort, and Constantinos Tsirogiannis - This problem was described as the "farmer's problem": determining which fields upslope might contaminate yours as the water drains downhill across the terrain.

Shortest Non-Crossing Walks in the Plane by Jeff Erickson and Amir Nayyeri - This problem is as the title describes. The authors also consider the version in which there are polygonal obstacles. It was a nice problem and made me wonder if a variation has been solved... For public transportation networks it might be interesting to consider short walks that cross as many times as possible (or some balance of short paths and many crossings). This could probably be a variation of a facility location problem. Obviously, the polygonal obstacles would also have their analogue in the form of neighborhoods that hate public transit and block construction attempts (or, more generously, in the form of existing buildings). Anyone know if a problem of this sort has been considered?

And here's one where I really did just like the solution...
Fast, Precise and Dynamic Distance Queries by Yair Bartal, Lee-Ad Gottlieb, Tsvi Kopelowitz, Moshe Lewenstein, and Liam Roditty - This paper builds on a series of other nice ones about dynamic spanners and hierarchies. The first in that immediate group is Deformable Spanners and Applications by Gao, Guibas, and Nguyen, which sets up a tree hierarchy based on local distances between points, in which points in the same level of the hierarchy and within some distance of each other are considered neighbors for the purpose of creating a spanner. The present authors make the nice observation that the distance between two points in the hierarchy whose parents are neighbors is a good approximation of the distance between their descendants. The full algorithm and proof are, of course, much more involved. (For a better description, complete with beautiful pictures, see the slides linked from Lee-Ad's homepage.)
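To make the hierarchy idea concrete, here's a toy sketch of the net-hierarchy construction as I understand it (a simplification in the spirit of the deformable spanner, not the actual structure from either paper): each level i is a 2^i-net of the level below, and every point is assigned its nearest net point as a parent.

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_net(points, r):
    """Greedy r-net: net points are pairwise more than r apart,
    and every input point has some net point within distance r."""
    net = []
    for p in points:
        if all(dist(p, c) > r for c in net):
            net.append(p)
    return net

def build_hierarchy(points, levels):
    """hier[i] is a 2^i-net of hier[i-1]; parents[i-1][p] is p's net point at level i."""
    hier = [list(points)]
    parents = []
    for i in range(1, levels + 1):
        net = greedy_net(hier[-1], 2 ** i)
        parents.append({p: min(net, key=lambda c: dist(p, c)) for p in hier[-1]})
        hier.append(net)
    return hier, parents

random.seed(0)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(60)]
hier, parents = build_hierarchy(pts, 3)
```

Since a point lies within 2^1 + ... + 2^i < 2^(i+1) of its level-i ancestor, the triangle inequality says the distance between two level-i net points differs from the distance between any two of their respective descendants by less than 2^(i+2), which is the observation above in additive form.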

Finally, the award for the best previous work slide goes to Maarten Loffler for his talk on the paper Triangulating the Square and Squaring the Triangle: Quadtrees and Delaunay Triangulations are Equivalent by Maarten Loffler and Wolfgang Mulzer. While I haven't been able to find the incredible slides online, the previous work flow chart can be seen on page 2 of the paper. If I drew cartoons, I'd be able to do a nice parody tribute to this. Instead, all I have is this PhD Comics strip about references.

Saturday, June 19, 2010

SoCG, Day 3 and MASSIVE

I suppose that before I get too wrapped up in the next conference, I should say a bit about the previous two.

The last day of SoCG featured the second invited talk. Claudio Silva spoke about verification of visualization software. For example, suppose a doctor does a CAT scan and then looks at the resulting images to determine if/how/where surgery should be performed. It'd be good if that visualization was accurate. And in fact, sometimes it's not. (Scary, I know.) He's been working on (and succeeding at) developing algorithms to verify that the visualization is accurate. Nice problem.

The second workshop on massive data algorithmics (MASSIVE) was held the day after SoCG. It was small and excellent. I gave a talk on range searching over compressed kinetic data that, while not in the standard I/O-efficient vein of much of the conference, begins to address how to deal with the massive data generated by moving objects.

The highlight of MASSIVE for me, other than the good food and company, was the pair of talks on the MapReduce algorithmic framework for parallel/distributed computing. I've never been particularly interested in parallel algorithms since they always seemed annoyingly messy to me, and the frameworks somehow weren't compelling. This framework, however, seems elegant. (And yes, it's patented and used by Google, where I'll be working in a few months.) I won't try to explain it here, but even if you're skeptical about designing parallel algorithms, it's worth looking into.

About 30 seconds after the last talk of MASSIVE had ended, the electricity went out. Nice timing, conference organizers. Congrats and thanks to Suresh and the other folks at U. Utah and to MADALGO.

Wednesday, June 16, 2010

SoCG, Day 2

The highlight of today was the weather and the trip up the mountain (by tram) to this incredible view at the top. Yes, the sky was really that blue, and yes, that's snow we were standing on. You can see Salt Lake City in the valley in the distance.

Otherwise, the day started with another good talk by Chazelle introducing the problem of understanding the convergence of collective motion. He gave a beautiful example about fireflies in Thailand and Malaysia that flash simultaneously.

I won't go into detail, but two other papers I found interesting/exciting are:

Dynamic can sometimes get close to kinetic...

Monday, June 14, 2010

SoCG, Day 1

I'm here in beautiful, cold, Salt Lake City (or really, in Snowbird, a ski resort in the mountains above the city) for the Symposium on Computational Geometry. So far, it looks like it's going to be a great conference.

I don't know if the first session was purposefully packed with excellent talks, or maybe it just happened to be a session I was interested in, but not only were the results interesting, the talks were good too. Some highlights:

Optimal Partition Trees and Tight Lower Bounds for Halfspace Range Searching:

Timothy Chan presented a new data structure for range searching with essentially the same bounds as a previous structure by Matousek (aside from preprocessing costs), but which has the important properties of being simpler (implementations possible!) and being based on a multilevel partition tree approach, which is useful for many applications. In fact, this upper bound was shown to be matched later in the session by a lower bound requiring this partition tree approach. David Mount presented that joint paper with Sunil Arya and Jian Xia. (Another connection between the two talks was the levity provided by Dave's attempt to open a Mountain Dew bottle in the middle of Chan's talk - a bottle I suspect was purchased at the bottom of the mountain and then driven up 2000 feet, thus providing for an explosive change in pressure...)
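I won't try to reproduce Chan's structure, but the query mechanics of partition-tree-style range counting are easy to sketch with a humbler relative: a kd-tree storing subtree counts (far from the optimal bounds, but the same logic — discard cells entirely outside the halfspace, count cells entirely inside wholesale, and recurse only on cells the boundary crosses). A hedged toy:

```python
class Node:
    """kd-tree node storing its bounding box and subtree point count."""
    def __init__(self, pts, depth=0):
        self.count = len(pts)
        self.lo = (min(p[0] for p in pts), min(p[1] for p in pts))
        self.hi = (max(p[0] for p in pts), max(p[1] for p in pts))
        if len(pts) <= 1:
            self.pts, self.left, self.right = pts, None, None
            return
        pts = sorted(pts, key=lambda p: p[depth % 2])
        mid = len(pts) // 2
        self.pts = None
        self.left = Node(pts[:mid], depth + 1)
        self.right = Node(pts[mid:], depth + 1)

def halfspace_count(node, a, b, c):
    """Count points with a*x + b*y <= c. A linear function over a box is
    extremized at a corner, so the corners decide inside/outside/crossing."""
    corners = [(x, y) for x in (node.lo[0], node.hi[0]) for y in (node.lo[1], node.hi[1])]
    inside = [a * x + b * y <= c for x, y in corners]
    if all(inside):
        return node.count          # cell entirely inside: count wholesale
    if not any(inside):
        return 0                   # cell entirely outside: discard
    if node.pts is not None:       # leaf crossed by the boundary: test directly
        return sum(1 for x, y in node.pts if a * x + b * y <= c)
    return halfspace_count(node.left, a, b, c) + halfspace_count(node.right, a, b, c)
```

The partition-tree structures in the talks make the "crossed by the boundary" case provably rare, which is where the real bounds come from.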

Consistent Digital Line Segments:

This talk provided, for me, an introduction to a really beautiful problem I had never thought in depth about before. The problem is essentially about defining methods for drawing digital lines (lines that are drawn with pixel paths) that are consistent with underlying geometric axioms - ideally axioms that define the important pieces of the intuition about lines that we have taken from Euclidean geometry, for example, two lines should intersect in only one place. It's a nice problem (and solution) that seems to go to the literal heart of computational geometry.
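For concreteness, the standard way to digitize a segment is to round the true line to the grid, column by column; the paper's point, as I understand the motivation, is that rounding each segment independently need not produce a consistent family (two digitized segments can intersect in a disconnected set). A sketch of the rounding baseline only, restricted to slopes in [0, 1] for simplicity:

```python
import math

def digital_segment(p, q):
    """Digitize segment pq by rounding the true line to the nearest pixel row
    in each column. Assumes integer endpoints, p left of q, slope in [0, 1]."""
    (x0, y0), (x1, y1) = p, q
    dx, dy = x1 - x0, y1 - y0
    return [(x, y0 + math.floor((x - x0) * dy / dx + 0.5)) for x in range(x0, x1 + 1)]
```

Each such digital segment is connected and monotone on its own; the consistency axioms constrain how the whole family of segments may interact.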

The Geometry of Flocking:

Chazelle presented a continuation of his work on natural algorithms, with a focus on bird flocking. I've written about this line of work a bit already, and don't have much more to say, but I'm looking forward to hearing another talk by him later in the conference.

Discrete Geometric Structures for Architecture:

The invited talk today was by Helmut Pottmann of the Geometric Modeling and Industrial Geometry group at Vienna University of Technology. I must admit, I was skeptical of a talk by a non-geometer, but it was excellent. It was about modern "natural" architecture (like the picture above that I stole from the group page), a style in which architects eschew the traditional lines and right angles of buildings in favor of a more curved appearance. He spoke about types of meshes that could work with these design concepts. I can't do the talk justice here, and I do hope that he puts the slides online.

Lunch today featured the student meet-up that was discussed after the 2008 SoCG, forgotten for a while, and revived by Suresh in an email to me last week. In other words, it took very little organizational effort - Suresh made an announcement this morning that any interested students should meet outside the conference room at lunch time. Then we all went to a deli, got sandwiches, and took them to a room to eat. About 8-10 students showed up (though we lost some to the need to change rooms to sit down to eat) and it was nice to sit and chat. Thanks folks for showing up and thanks Suresh for making this happen! I hope it continues at future SoCGs - it's certainly worth the effort.

Finally, the business meeting was relatively uncontroversial. SoCG 2012 will be at UNC Chapel Hill (the only bid). If I remember correctly, attendance this year is somewhat down at 105 (compared to about 135 last year and 141 the year before in Maryland). Student attendance is correspondingly down to 28 from about 45. (So getting 8-10 students at the informal lunch is actually pretty good!) All numbers may be somewhat made up since I didn't write anything down. There was some encouragement from the current PC chair to continue the not-actually-a-rebuttal process next year, but no guarantees. Next year in Paris!

Tuesday, June 8, 2010

Range Searching Over Compressed Kinetic Data

The ESA 2010 accepted papers are up. My advisor, David Mount, and I have a paper accepted there - "Spatio-temporal Range Searching Over Compressed Kinetic Sensor Data." There is a (not yet revised and camera ready) version on my website. I'll also be presenting the paper at MASSIVE, a workshop held in conjunction with SoCG, next week.

The motivation of our paper (and generally, of much of my thesis) is to provide an observation-based framework for kinetic data. An earlier post talks more about why we'd want to do that - essentially the goal is to model scientifically collected data from moving objects. In the previous paper, we presented the framework and an accompanying compression scheme. In this paper, we're solving the problem that occurs when you realize that you've compressed all your data but still want to be able to query it (ideally, without decompressing it). We consider spatio-temporal queries, essentially the intersection of a temporal range and a spatial one - answering questions like "how many cars were observed on my street yesterday during rush hour?"
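The structure in the paper operates on the compressed stream, but the query type itself is easy to picture with a naive uncompressed baseline: tabulate counts per (sensor, time step) and answer rectangle queries with 2D prefix sums in O(1) time, at the cost of space linear in the raw data, which is exactly what we want to avoid. A sketch of that baseline (not our data structure):

```python
def build_prefix(counts):
    """counts[s][t] = number of objects sensor s observed at time step t.
    Returns P with P[s][t] = total count over sensors < s and times < t."""
    S, T = len(counts), len(counts[0])
    P = [[0] * (T + 1) for _ in range(S + 1)]
    for s in range(S):
        for t in range(T):
            P[s + 1][t + 1] = counts[s][t] + P[s][t + 1] + P[s + 1][t] - P[s][t]
    return P

def range_count(P, s0, s1, t0, t1):
    """Observations for sensors s0..s1 during times t0..t1 (inclusive), e.g.
    "how many cars were observed on my street yesterday during rush hour?"."""
    return P[s1 + 1][t1 + 1] - P[s0][t1 + 1] - P[s1 + 1][t0] + P[s0][t0]
```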

The abstract:

As sensor networks increase in size and number, efficient techniques are required to process the very large data sets that they generate. Frequently, sensor networks monitor objects in motion within their vicinity; the data associated with the movement of these objects is known as kinetic data. In an earlier paper we introduced an algorithm which, given a set of sensor observations, losslessly compresses this data to a size that is within a constant factor of the asymptotically optimal joint entropy bound. In this paper we present an efficient algorithm for answering spatio-temporal range queries. Our algorithm operates on a compressed representation of the data, without the need to decompress it. We analyze the efficiency of our algorithm in terms of a natural measure of information content, the joint entropy of the sensor outputs. We show that with space roughly equal to entropy, queries can be answered in time that is roughly logarithmic in entropy. In addition, we show experimentally that on real-world data our range searching structures use less space and have faster query times than the naive versions. These results represent the first solutions to range searching problems over compressed kinetic sensor data.

Monday, April 12, 2010


The ESA 2010 deadline is tonight at midnight Honolulu time.

The MASSIVE 2010 deadline is Wednesday.

Thursday, December 10, 2009

National Computer Science Education Week

You may not have noticed, but this week is National Computer Science Education Week, a week dedicated to increasing the amount of computer science education (especially in high schools). In this spirit, I looked up the US government projections for computer science occupations in 2018 (a ten-year projection from their 2008 analysis). Unsurprisingly, the number of computer science related jobs is expected to increase. Drastically. The subsection with the most increase is "computer software engineers, applications," which is projected to increase by 34%. The total number of new jobs projected is approximately 450,400. In contrast, the Taulbee Survey shows that the total number of bachelor's degrees awarded in computer science and computer engineering in 2008 was 12,815. In other words, if these levels of new graduates continue (and all take jobs in computer science and no other applicants enter the field), there will be 322,250 jobs unfilled, or more than two-thirds of the newly created jobs.
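The back-of-the-envelope arithmetic behind those numbers:

```python
new_jobs = 450_400       # projected new CS-related jobs by 2018
grads_per_year = 12_815  # 2008 CS/CE bachelor's degrees (Taulbee Survey)
years = 10

unfilled = new_jobs - grads_per_year * years
print(unfilled)                       # 322250
print(round(unfilled / new_jobs, 2))  # 0.72, i.e., more than two-thirds
```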

Of course, the US is not a self-contained system, and neither is our field. Still, perhaps it's worth remembering the statistics when chatting with the dean about department growth...

Wednesday, December 2, 2009

Wednesday, October 28, 2009

Clustering Over Space and Time

(This is partially in response to Suresh's wonderful series on clustering.)

Recently, my advisor (Dave Mount) and I have been kicking around the idea of how to define the problem of clustering over time. Imagine birds traveling in a flock from Canada to Mexico, and suppose that you are able to perfectly observe their locations over time (of course, for any practical use we'd need to come up with a better model, but for now...). Given all these locations, you'd like to be able to identify the flock as a cluster. If this flock crosses another flock (say, stupidly traveling from California to Maryland), you'd like to be able to identify these as two separate flocks, even if the "current" time is the one at which they're all in the same place.

Now, what about if the birds' pattern is a bit more complicated... Suppose that instead of staying nicely within the same approximate radius as they fly, the radius of the cluster varies drastically. Perhaps all the birds fly away from each other for some amount of time, then converge back to a central point, then fly away again, etc. Or perhaps there are many flocks all flying from Canada to Mexico that follow parallel linear paths, but spaced many miles apart. Are these clusters that we want to identify?

These are certainly questions that biologists are interested in (understanding infections over time), as well as folks in data mining and statistical modeling (general ideas about changing data over time). I've seen algorithmic approaches to clustering data streams (Cormode, Muthukrishnan, and Zhuang), kinetic data structures models that maintain a clustering snapshot (Gao, Guibas, and Nguyen, and my own work), and database papers about essentially this question (Kalnis, Mamoulis, and Bakiras), but no algorithmic treatment of this particular formulation. So, I ask the internet, are there any papers I should know about? Anything else I should be thinking about?
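As a strawman for the flock example: clustering on the time-averaged distance between whole trajectories, rather than on any single snapshot, already separates two crossing flocks, even at the instant they coincide. A toy sketch with made-up 1D trajectories (not a proposed definition, just the intuition):

```python
def avg_dist(ta, tb):
    """Average pointwise distance between trajectories sampled at the same times."""
    return sum(abs(a - b) for a, b in zip(ta, tb)) / len(ta)

def num_clusters(trajs, d, thresh):
    """Single-linkage: connect trajectories with d <= thresh, count components."""
    parent = list(range(len(trajs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(trajs)):
        for j in range(i + 1, len(trajs)):
            if d(trajs[i], trajs[j]) <= thresh:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(trajs))})

times = range(11)
flock_a = [[t + off for t in times] for off in (0.0, 0.1, 0.2)]       # heading south
flock_b = [[10 - t + off for t in times] for off in (0.0, 0.1, 0.2)]  # heading north
trajs = flock_a + flock_b  # the two flocks cross at t = 5
```

With the time-averaged metric the six trajectories form two clusters; clustering only the positions at t = 5, where everyone is within 0.2 of everyone else, lumps them into one.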

Monday, October 26, 2009

National Computer Science Education Week

Last week, Congress (trying to push off health care reform for as long as possible) passed House Resolution 558 to create National Computer Science Education Week, the week of December 7th. Why the week of December 7th? Grace Hopper's birthday was December 9th!

If these statistics (from the ACM) are still accurate, then we're badly in need of more computer science education. Talking with a friend about this recently, I found myself defending the position that computer science is not so hard that many more people can't learn it. Though originally a devil's advocate position, I talked myself into believing it. While it's unreasonable to expect that many people could get Ph.D.s in computer science, it does seem reasonable to expect many people to be able to understand basic algorithms and master basic programming skills. And that needs to happen. As money leads, so people will follow. Or, people will follow if the government leads. The government's encouragement is nice, but not actually transformative unless they put muscle behind it - perhaps a mandate for computer science education in high schools, or at least less focus on NCLB testing so that schools can teach subjects other than math and English. Meanwhile, we can all do our best to lure unsuspecting freshmen to our classes with robots, videos, and other shiny things.

Wednesday, October 21, 2009

Conference Roommate Finding Forum

Announcing a conference roommate finding board:

The hope is that this will help students (and others) to find roommates to share hotel rooms with at conferences, thus decreasing cost and increasing the possibility of attendance. Ideally, I think this is something that should be hosted by CRA or ACM, but until they take over, it seems better to have something than nothing. Similarly, I don't intend this to be limited to theoretical computer science.

Please spread the word so that it will actually be useful!

Monday, October 19, 2009

On the Job Market

Having just sent off a bunch of applications, it seems fair to say that I'm officially on the job market. I'm planning to graduate this coming summer. Given the current market, I'm applying to all sorts of jobs - academia, labs, industry, permanent, 2-year, etc. I'd love to end up somewhere with exciting colleagues, interesting research possibilities, engaged students, and amazing things to do on the weekends... but I'm cursed with a certain dose of realism. Still, if you know how to achieve all my dreams, I'd love to hear the advice. Or, of course, if you have job tips...

(For more about who I am and what I do, see my more professional site.)

Friday, October 9, 2009

SoCG 2010 Call for Papers

The call for papers for SoCG 2010 just went out. It's in Utah, and Suresh is one of the local organizers. Plus, last I heard I'll be helping out with a students-only meet-and-greet event. So you have just under two months to solve a problem and write a paper! Since it looks like the call hasn't appeared on the website yet, I'm including the basic details below:

26th Annual Symposium on Computational Geometry (SoCG 2010)
June 13-16, 2010
Snowbird, Utah, USA
In cooperation with ACM SIGACT and SIGGRAPH

The Twenty-sixth Annual Symposium on Computational Geometry will be held at Snowbird, Utah. We invite submissions of high-quality papers, videos, and multimedia presentations describing original research addressing computational problems in a geometric setting, in particular their algorithmic solutions, implementation issues, applications, and mathematical foundations.

The topics of the Symposium reflect the rich diversity of research interests in computational geometry. They are intended to highlight both the depth and scope of computational geometry, and to invite fruitful interactions with other disciplines. Topics of interest include, but are not limited to:
* design, analysis, and implementation of geometric algorithms and data structures; lower bounds on the computational complexity of geometric problems;
* mathematical, numerical, and algebraic issues arising in the formulation, analysis, implementation, and experimental evaluation of geometric algorithms and heuristics; discrete and combinatorial geometry; computational topology;
* novel algorithmic applications of geometry in computer graphics, geometric modeling, computer-aided design and manufacturing, scientific computing, geographic information systems, database systems, robotics, computational biology, machine learning, sensor networks, medical imaging, combinatorial optimization, statistical analysis, discrete differential geometry, theoretical computer science, graph drawing, pure mathematics, and other fields.

----- Important Dates -----

* November 23, 2009: Paper titles and (short) abstracts due
* December 02, 2009: Full submissions due (23:59, Honolulu time)
* February 15, 2010: Notification of acceptance/rejection of papers
* February 19, 2010: Video and multimedia submissions due
* March 01, 2010: Notification of acceptance or rejection of video/multimedia submissions
* March 15, 2010: Camera-ready papers and video/multimedia abstracts due
* April 20, 2010: Final versions of video/multimedia presentations due
* June 13-16, 2010: Symposium at Snowbird, Utah

----- Local Arrangements Co-Chairs ------

* Valerio Pascucci (University of Utah)
* Suresh Venkatasubramanian (University of Utah)


We invite submissions of high-quality papers describing original research on geometric algorithms and data structures, their mathematical foundations and correctness, their implementation, and their applications.

The program committee explicitly encourages the submission of video or multimedia in support of submitted papers. Authors may wish to consider making a separate submission to the video/multimedia track (see the guidelines below). Papers and video/multimedia submissions will be reviewed separately; acceptance or rejection of one will not influence acceptance or rejection of the other.

Final versions of accepted papers will be published by ACM in the symposium proceedings. Proceedings will be distributed to symposium participants and will also be available from ACM for purchase or through the digital library. An author of each accepted paper will be expected to attend the Symposium and give a presentation (approximately 20 minutes) of the paper. Authors of a selection of papers from the conference will be invited to submit extended versions of their papers to a special issue of one or more journals.

Thursday, September 24, 2009

LATIN deadline

Just a reminder that the (extended) LATIN deadline is on Monday at 4pm EDT. Given the size of the PC, you probably already knew that.

Friday, September 4, 2009

SODA accepted papers list

The SODA accepted papers list is here (as previously noted). From the titles, there seem to be lots of potentially geometry-related papers, but the promised abstracts should make that clearer.

Monday, August 10, 2009

Numb3rs Algorithm Clips

One of the things I've been enjoying doing while teaching Algorithms during this summer session is including algorithm explanations from Numb3rs whenever possible. Unfortunately, due to copyright constraints, I can't share the actual clips with all of you, but I can at least tell you where to find them. (My limited understanding of the copyright law is that since I bought the episode - $2 from iTunes - I can show short clips for educational use in my classroom. I could be wrong, so proceed based on your own understanding.) Here are the clips by algorithm/problem, episode, title, and approximate time within the episode.

  • Art Gallery Problem - Season 2, Episode 3, Obsession. 9:05 - 9:20. This is unfortunately a very short clip that doesn't really give a statement of the problem.
  • Dijkstra's Shortest Path Algorithm - Season 3, Episode 23, Money for Nothing. 9:20 - 10:10. This gives a nice description of greedy algorithms in general (via the change algorithm, though without David E's more detailed comments) and an overview of Dijkstra's algorithm.
  • Knapsack Problem - Season 3, Episode 24, The Janus List. 18:30 - 19:00. This clip gives a nice and mostly complete description of the problem in the context of "things you would bring on a hiking trip" based on weight and their importance.
  • P vs. NP - Season 1, Episode 2, Uncertainty Principle. 18:30 - 19:00. The P =? NP problem is a theme running throughout this episode... but unfortunately there's no nice clip that summarizes the problem, just lots of shots of Charlie working hard and a mention of the Minesweeper Consistency Problem. The specific times I mention are as close as I could find to a useful clip for teaching purposes.
  • Monty Hall Problem - This is a clip that's actually on YouTube, so I'll let whoever posted it take the blame for the copyright issues and just give a link.
  • The Seven Bridges of Königsberg - Season 2, Episode 9, Toxin. 24:30 - 25:20. If you imagine that this problem would look beautiful if well-animated and described, you're right.
  • Voronoi Diagrams - Season 2, Episode 10, Bones of Contention. 29:45 - 30:25. A description of Voronoi diagrams in terms of "the closest place to find a cheeseburger."
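Since the Dijkstra clip stops at the overview, here's a minimal reference implementation to hand out alongside it (the standard textbook heap version, nothing show-specific):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]} with nonnegative weights.
    Returns shortest-path distances from source to every reachable vertex."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path to u was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```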

I've been thoroughly enjoying watching these clips in class, and I think my students have too. Perhaps this will become a running series as/if I find more. If you know of any additional clips, from Numb3rs or elsewhere, please leave them in the comments.

Monday, June 29, 2009

A Sensor-Based Framework for Kinetic Data

I leave in about a week for Greece, where I'll be presenting a paper at AlgoSensors, a workshop associated with ICALP. The paper is entitled "Compressing Kinetic Data From Sensor Networks" and is joint work with my advisor, David Mount. You can see the camera ready version here. Slides will be posted on my website once they exist.

While the main result of the paper is a compression algorithm, the point of the paper is broader than that. When reasoning about kinetic data (the data generated by a set of moving points), the current accepted model is the kinetic data structures model that assumes that points are given flight plans in the form of algebraic expressions. This works well when reasoning theoretically (assume the points follow predetermined algebraic curves...), when modeling physical properties, or possibly when dealing with actual planes that really do need to give us their flight plans in advance. But what if we want to do statistical analyses on collected data to determine what motion trends the objects follow? The cell phone users, migratory birds, enemy tanks, whatever you like to think about, aren't going to tell us how and when and where they're going to move. Many of these observations are collected using a sensor network, so it makes sense to design algorithms that work over these networks using the minimal amount of information available - the number of objects each sensor sees at each time instant. The framework we introduce assumes nothing about the motion of the objects or our advance knowledge of their plans. Instead, we analyze our results with respect to the entropy of this information. The ultimate goal, then, is that algorithms designed over this framework would perform better on "neat" motion, say all the birds following a straight line from Canada to Mexico, while algorithms on "messy" data would take longer. This paper takes the first step in that direction, giving a compression algorithm that compresses the sensor observations to within a constant factor of the optimal joint entropy bound.
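The constant-factor-of-entropy guarantee rests on a classical building block: a prefix code built from the empirical distribution of the readings encodes n symbols in between nH and n(H+1) bits, where H is the empirical entropy. A toy illustration with Huffman coding of a single sensor's count stream (our algorithm must additionally exploit statistical dependence between nearby sensors, which this sketch ignores):

```python
import heapq
import math
from collections import Counter

def huffman_lengths(freqs):
    """Codeword length per symbol for a Huffman code on the given frequencies."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

readings = [0] * 50 + [1] * 25 + [2] * 15 + [3] * 10  # counts seen by one sensor over time
freqs = Counter(readings)
n = len(readings)
lengths = huffman_lengths(freqs)
bits = sum(freqs[s] * lengths[s] for s in freqs)
H = -sum((f / n) * math.log2(f / n) for f in freqs.values())
# n*H <= bits <= n*(H+1): the encoded size tracks the entropy of the observations
```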

For those interested in more precision, the abstract is included below:

We introduce a framework for storing and processing kinetic data observed by sensor networks. These sensor networks generate vast quantities of data, which motivates a significant need for data compression. We are given a set of sensors, each of which continuously monitors some region of space. We are interested in the kinetic data generated by a finite set of objects moving through space, as observed by these sensors. Our model relies purely on sensor observations; it allows points to move freely and requires no advance notification of motion plans. Sensor outputs are represented as random processes, where nearby sensors may be statistically dependent. We model the local nature of sensor networks by assuming that two sensor outputs are statistically dependent only if the two sensors are among the k nearest neighbors of each other. We present an algorithm for the lossless compression of the data produced by the network. We show that, under the statistical dependence and locality assumptions of our framework, asymptotically this compression algorithm encodes the data to within a constant factor of the information-theoretic optimum dictated by the joint entropy of the system.

Also, a reminder - the SODA abstract deadline is today at 4:59pm EDT.

Tuesday, June 2, 2009


Other folks already have some nice STOC summaries up (Bill/Lance, Michael), so I won't repeat them except to say that not only was Shafi Goldwasser's Athena Lecture excellent, her jokes were intelligent too (if you missed it, it looks like there might be a video posted here some day). Truly a talk worth seeing.

Another talk that's gotten less attention, but that I thought was exciting, was the paper "Homology flows, cohomology cuts" by Erin Chambers, Jeff Erickson, and Amir Nayyeri. There's no point paraphrasing when they say it so well themselves:

We describe the first algorithms to compute maximum flows in surface-embedded graphs in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, our algorithm computes a maximum (s,t)-flow in O(g^7 n log^2 n log^2 C) time for integer capacities that sum to C, or in (g log n)^O(g) n time for real capacities. Except for the special case of planar graphs, for which an O(n log n)-time algorithm has been known for 20 years, the best previous time bounds for maximum flows in surface-embedded graphs follow from algorithms for general sparse graphs. Our key insight is to optimize the relative homology class of the flow, rather than directly optimizing the flow itself. A dual formulation of our algorithm computes the minimum-cost cycle or circulation in a given (real or integer) homology class.

Check out their full paper from Jeff's site, where I also stole the nice graphic and abstract from.

I was volunteering for the conference (Maryland was the local host) and spent some of my time working the registration desk, which gave me a nice chance to see people as they entered. It was nice to meet some of you in person. Even though the business meeting was a bit tame (perhaps Chazelle should be given best paper awards at every conference to add entertainment value...), it seems to have been a successful conference. Congrats and thanks to all the organizers/speakers/PC members/etc.