Angus Johnson wrote:
I believe Clipper is currently substantially more reliable than GPC because I don't think GPC properly deals with complex intersections. I'm not aware of the tree data structure recommended by Vatti et al. (I couldn't find any mention of it in Vatti's original paper.) My implementation uses singly and doubly linked list data structures, which are the ones described in Vatti's paper. I would be very interested if you could provide a link to Vatti's and Bentley-Ottmann's discussion of a tree-structured approach.
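(As an illustrative aside on the data structure mentioned above: below is a minimal, hypothetical sketch of a doubly linked active edge list in the style Vatti describes. The names and fields are mine, not Clipper's actual code; the comments note where a balanced tree would change the cost.)

```cpp
// Hypothetical sketch of a doubly linked "active edge list": edges crossing
// the current scanbeam kept in x-sorted order. Neighbour links allow O(1)
// swaps when two edges intersect, but locating an insertion point is O(n).
struct ActiveEdge {
    double curX;        // x where the edge crosses the current scanline
    double dx;          // change in x per unit of y (used to advance curX)
    ActiveEdge* prev;   // neighbour to the left in the active edge list
    ActiveEdge* next;   // neighbour to the right in the active edge list
};

// Insert 'e' into the list headed by 'head', keeping ascending curX order.
// With a plain linked list this scan is linear in the number of active
// edges; a balanced tree or bucketed list would reduce it to O(log n).
void InsertActiveEdge(ActiveEdge*& head, ActiveEdge* e) {
    if (!head || e->curX <= head->curX) {
        e->prev = nullptr;
        e->next = head;
        if (head) head->prev = e;
        head = e;
        return;
    }
    ActiveEdge* cur = head;
    while (cur->next && cur->next->curX < e->curX) cur = cur->next;
    e->next = cur->next;
    e->prev = cur;
    if (cur->next) cur->next->prev = e;
    cur->next = e;
}
```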
Concerning robustness - I believe numerical robustness problems only arise when complex intersections are encountered. Specifically, how does an algorithm cope with multiple intersections occurring in very close proximity? Simply relying on a 'tolerance' when using floating point values will guarantee failure at some point. On failure, intersection operations are performed in an incorrect order, resulting in incorrect edge sorts. Fortunately, this is easily detected after each sweep.

While not a final solution, I have an update that's close to release that tests the edge sort order before actually processing the intersections. If the test sort order is wrong, the 'tolerance' value is adjusted (both up and down) and retested. Fortunately, this testing has only a minor impact on performance (about 10%). In my stress tests*, and using this 'tuned' tolerance approach, the incidence of test failure drops to about one in a million tests. (*The test involves generating random subject and clip polygons with 100 vertices each, rotated at varying degrees, where the vertices are specifically tuned to generate many complex intersections.)

Given that complex intersections in clipping operations are infrequent events in normal use (which is why GPC rarely blinks in normal use), I would argue that the likelihood of users encountering a failure with Clipper would be very much lower than one in a million.

You also mentioned scaling as an issue with floating point implementations. I agree that it is an issue and that input polygons with very small or very large coordinate values should be scaled. Anyhow, a fully numerically robust solution is still my final goal.
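(To make the "adjust the tolerance and retest" idea concrete, here is a rough, hypothetical sketch. The names, the bucketed tolerance comparison, and the adjacency check are assumptions of mine for illustration, not Clipper's actual implementation.)

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Intersection {
    double y;          // sweep position at which the two edges cross
    int edge1, edge2;  // indices of the two intersecting edges
};

// Check that processing the intersections in the given order only ever
// swaps edges that are adjacent in the current left-to-right edge order;
// if not, the intersections were sorted incorrectly.
bool SortOrderIsValid(const std::vector<Intersection>& xs,
                      std::vector<int> edgeOrder) {  // copy: mutated below
    for (const Intersection& x : xs) {
        auto i1 = std::find(edgeOrder.begin(), edgeOrder.end(), x.edge1);
        auto i2 = std::find(edgeOrder.begin(), edgeOrder.end(), x.edge2);
        if (i1 == edgeOrder.end() || i2 == edgeOrder.end()) return false;
        auto d = i1 - i2;
        if (d != 1 && d != -1) return false;  // must be neighbours
        std::iter_swap(i1, i2);               // apply the swap and continue
    }
    return true;
}

// Sort the intersections by sweep position, treating y values that fall in
// the same tolerance-wide bucket as simultaneous, and retry with a larger
// or smaller tolerance if the resulting order fails the adjacency test.
bool SortIntersections(std::vector<Intersection>& xs,
                       const std::vector<int>& edgeOrder) {
    for (double tol : {1e-10, 1e-9, 1e-11, 1e-8, 1e-12}) {
        std::stable_sort(xs.begin(), xs.end(),
            [tol](const Intersection& a, const Intersection& b) {
                return std::floor(a.y / tol) < std::floor(b.y / tol);
            });
        if (SortOrderIsValid(xs, edgeOrder)) return true;
    }
    return false;  // caller must fall back to another strategy
}
```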
That's a worthwhile goal to be sure. When I discussed scalability I meant algorithmic complexity and the ability to process large volumes of input geometry in a single test case. An algorithm that scales near linearly will be able to process inputs on the order of one hundred million vertices in reasonable run time, for example. If your sweepline data structure is a linked list and not a tree or bucketed list, then your algorithmic complexity must be worse than the O(n log n) (with respect to input vertices plus intersection vertices) described in Vatti's paper. In that case I wouldn't expect it to scale past tens of millions of vertices in reasonable runtime. In my testing the empirical scaling factor for GPC was about O(n^1.75), so you could be substantially faster than that but still worse than O(n log n).

While you mention that the odds of failure may be one in a million for tests of a given size, if you increase the size of the input by a factor of one million your odds of failure get close to one in one. That was the point I was trying to make about algorithmic scalability being undermined by lack of robustness. It doesn't matter if your implementation could process one-hundred-million-vertex test cases in reasonable runtime if the odds of it succeeding are remote. I just wanted to clarify this point because it isn't always intuitively obvious.

Also, I believe GPC does handle high order intersections, because I'm quite certain I was generating large numbers of them in the tests I was running, and GPC seemed to handle them just fine. I could be wrong of course, but I suggest you double check the claim that GPC doesn't properly deal with complex intersections. I know it doesn't deal with overlapping or self-intersecting polygons at all, but that is a separate issue from polygons that have vertices in common or polygons that abut along edges but do not overlap. GPC is used quite extensively and paid for by many people; we should be responsible about making claims about its shortcomings.

Regards, Luke
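(An illustrative aside on the probability argument above: taking the quoted one-in-a-million failure rate as a per-test probability p, and treating a much larger input as roughly k independent chances to fail, the overall failure probability is 1 - (1 - p)^k. The small program below - purely illustrative, not from either library - shows how quickly that approaches one as k grows.)

```cpp
// Back-of-the-envelope calculation: if one "stress test sized" unit of work
// fails with probability p = 1e-6, an input that is effectively k such units
// fails with probability 1 - (1 - p)^k. For k = 1e6 that is already ~63%.
#include <cmath>
#include <cstdio>

int main() {
    const double p = 1e-6;  // assumed per-unit failure probability
    for (double k : {1e3, 1e6, 1e7}) {
        double failAny = 1.0 - std::pow(1.0 - p, k);
        std::printf("scale factor %.0e -> overall failure probability %.4f\n",
                    k, failAny);
    }
    return 0;
}
```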