On 10/09/2010 3:32 PM, Simonson, Lucanus J wrote:
It doesn't sound like a tree. I don't quite grasp what you mean by adjacent edges in an edge bound.
The empirical scaling factor of an optimal sweepline that uses a tree should be about n^1.05 for all test cases.
I agree the data structure I've used is not a typical tree, but it's not a simple linked list either. It is, however, pretty much exactly the data structure described by Vatti in his original paper. As per his algorithm, polygons are divided into series of left and right bounds (connected edges that form either a left or a right border, or 'bound', of a polygon). There may be any number of left and right bounds for any polygon (unless it's convex, in which case there will be only one of each). Anyhow, what I meant by 'adjacent edges' was the next (or previous) edge in the linked list that makes up a given bound.

Concerning issues of scaling, I believe the sweep line approach I've followed (as outlined by Vatti) is near maximally efficient. A left and a right bound will meet at a 'local minima', so there's also a local minima structure which contains pointers to its associated left and right bounds and the Y coordinate of the local minima. These local minima are sorted (on their Y values) into another linked list (the LML). With this structure there's almost no 'lookup' required as the sweep progresses, except to check at the start of each sweep whether the first local minima in the LML needs to be popped (based on its Y coordinate), adding its left and right bound edges to the 'active edge list' (AEL). Of course it's not really possible to condense Vatti's data structure into a few sentences, let alone the implementation logic, but I understand you already have at least a basic understanding of this.
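To make the description above a little more concrete, here is a minimal sketch of those structures in Python. The names (Edge, LocalMinima, build_lml, pop_local_minima) are illustrative only, not Vatti's or any actual implementation's identifiers; it shows just the shape of the idea: bounds as linked lists of edges, local minima sorted on Y into the LML, and bounds moved onto the AEL when the sweep reaches their minima.

```python
# Hypothetical sketch of Vatti-style sweep structures (names are illustrative).
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class Edge:
    y_top: float
    # Next edge up the same bound -- the 'adjacent edge' in the linked list.
    next_in_bound: Optional["Edge"] = None


@dataclass
class LocalMinima:
    y: float              # Y coordinate where the left and right bounds meet
    left_bound: Edge      # first edge of the left bound
    right_bound: Edge     # first edge of the right bound


def build_lml(minima: List[LocalMinima]) -> List[LocalMinima]:
    """Sort local minima on their Y values into the LML (ascending sweep order)."""
    return sorted(minima, key=lambda lm: lm.y)


def pop_local_minima(lml: List[LocalMinima], y: float, ael: List[Edge]) -> None:
    """At the start of a sweep, pop any minima at the current Y off the LML,
    adding their left and right bound edges to the active edge list (AEL)."""
    while lml and lml[0].y == y:
        lm = lml.pop(0)
        ael.append(lm.left_bound)
        ael.append(lm.right_bound)
```

Because the LML is pre-sorted, each sweep only needs to peek at its head, which is why there is almost no lookup cost as the scanline advances.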
There is a pretty significant difference between making the sub-optimal sweepline algorithm numerically robust and the optimal tree based sweepline algorithm numerically robust, so you might want to step back and decide whether you are looking to improve the robustness of your current algorithm or to improve the algorithm then come back to robustness.
I believe my current algorithm (which very closely adheres to Vatti's data structure) is close to maximally efficient. The data structure at least doesn't need touching, but no doubt the implementation can always be improved.
I have been mentoring a successful Google Summer of Code project this year that implemented an optimal and robust sweepline algorithm for the Voronoi diagram of points and, soon, line segments. The robust and optimal sweepline for line segment intersection and polygon clipping is also achievable, but very tricky. Your work shows that you know how to test your code well, which counts for a lot. I'd like to encourage you to persevere in pursuing robustness as well as algorithmic optimality.
I'd certainly welcome your continuing feedback/suggestions/help along these lines.

Angus