Angus Johnson wrote:
Anyhow, reworking the data with the correct number of edges, the trendline for this (near worst case) test polygon comes out as y = 1E-08x^1.4821. If I instead generate the squiggly lines vertically in my test polygon (de-stressing the Vatti algorithm), which is likely to be closer to normal use, I suspect this exponent will drop further.
OK, when I rotated the test polygon 90 deg the trendline equation becomes: y = 2E-07x^1.0841. I would suggest that in normal use my clipping library would perform somewhere between O(n^1.0841) and O(n^1.4821), though I suspect nearer to the lower value.
Here's the function that produces my test polygons (generating what I believe would be near worst case vertices for the Vatti algorithm):
//struct TDoublePoint { double X; double Y; };
//typedef std::vector< TDoublePoint > TPolygon;
TPolygon MakeTestPoly(int startX, int startY, int boxSize)
{
  int k, x, y;
  TPolygon result;
  result.resize(boxSize * boxSize + 2);
  x = startX + 4;
  y = startY;
  k = 0;
  for (int i = 1; i <= boxSize; ++i) {
    for (int j = 1; j <= boxSize; ++j) {
      x = x + (i % 2) * 4 - 2;
      y = y + (j % 2) * 2 - 1;
      result[k].X = x;
      result[k].Y = y;
      k++;
    }
    y = y + 4;
    x = x + (i % 2) * 4 - 2;
  }
  result[k].X = startX;
  result[k].Y = y - 4;
  result[k + 1].X = startX;
  result[k + 1].Y = startY;
  return result;
}
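For reference, power-law trendlines like the two quoted above are typically obtained by timing the clip operation on MakeTestPoly output at a range of sizes and then fitting y = a*x^b by least squares on the log-transformed (edge count, time) pairs. A minimal sketch of that fitting step, with placeholder samples standing in for real measurements:

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Least-squares fit of y = a * x^b using the log-transformed samples,
// i.e. an ordinary linear regression of log(y) against log(x).
std::pair<double, double> FitPowerLaw(const std::vector<std::pair<double, double> >& samples)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  const double n = static_cast<double>(samples.size());
  for (std::size_t i = 0; i < samples.size(); ++i) {
    const double lx = std::log(samples[i].first);   // log of edge count
    const double ly = std::log(samples[i].second);  // log of measured time
    sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly;
  }
  const double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // exponent
  const double a = std::exp((sy - b * sx) / n);               // scale factor
  return std::make_pair(a, b);
}

int main()
{
  // Placeholder (edge count, seconds) pairs; replace these with real
  // measurements taken by clipping MakeTestPoly output at increasing boxSize.
  std::vector<std::pair<double, double> > samples;
  samples.push_back(std::make_pair(1000.0, 0.0004));
  samples.push_back(std::make_pair(4000.0, 0.0031));
  samples.push_back(std::make_pair(16000.0, 0.024));
  samples.push_back(std::make_pair(64000.0, 0.19));
  const std::pair<double, double> fit = FitPowerLaw(samples);
  std::printf("y = %gx^%.4f\n", fit.first, fit.second);
  return 0;
}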
Just a C++ style pointer: we have what we call a resource acquisition is initialization (RAII) policy, where we initialize all variables at the time they are declared rather than deferring that to later. The rationale for this is that it avoids bugs introduced later when a variable is used between where it was declared and where it is initialized, resulting in undefined behavior and different behavior on different platforms/compiler optimization flags. RAII becomes much more important when memory allocation, locks, file handles, sockets and other tricky-to-handle resources are thrown into the mix, because it allows you to encapsulate the acquisition and relinquishment of those resources in the declaration and destructor call for your objects, which is much less error prone than the alternative.

TPolygon MakeTestPoly(int startX, int startY, int boxSize)
{
  int k = 0;
  int x = startX + 4;
  int y = startY;
  TPolygon result; //it would be better if the TPolygon constructor accepted the size
  result.resize(boxSize * boxSize + 2); //making this line unneeded
  //and to help prevent people from using the TPolygon but forgetting to resize it first
  for (int i = 1; i <= boxSize; ++i) {
    for (int j = 1; j <= boxSize; ++j) {
      x = x + (i % 2) * 4 - 2;
      y = y + (j % 2) * 2 - 1; //squiggles right twice as fast as it squiggles up
      result[k].X = x;
      result[k].Y = y;
      k++;
    }
    y = y + 4;
    x = x + (i % 2) * 4 - 2;
  }
  result[k].X = startX;
  result[k].Y = y - 4;
  result[k + 1].X = startX;
  result[k + 1].Y = startY;
  return result;
}

This case is actually a common case and not a worst case for your algorithm, because the line segments are very short on average: only about O(sqrt(n)) of them intersect your sweepline at any one time, which is why the observed cost comes out closer to O(n^1.5). If you instead make n horizontal rectangles with a small height and with random start and end x coordinates drawn from separate ranges of X for the starts and the ends, you will get O(n) unique x coordinates and on average O(n) (roughly n/2) line segments intersecting your sweepline data structure at each unique x. This will be the worst case input, with expected runtime complexity of O(n^2) using a linked list as the sweepline data structure, while an optimal Vatti using a tree would have O(n log n) complexity and an empirical scale factor of about O(n^1.05).

Regards,
Luke
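To make the resource side of the RAII point above concrete, here is a minimal sketch of wrapping a raw FILE* so that acquisition happens at declaration and release happens automatically in the destructor (the class name is just illustrative):

#include <cstdio>
#include <stdexcept>

// Minimal RAII wrapper: the handle is acquired in the constructor and
// released in the destructor, so every exit path (including exceptions
// and early returns) closes the file exactly once.
class ScopedFile
{
public:
  ScopedFile(const char* path, const char* mode)
    : handle_(std::fopen(path, mode))
  {
    if (!handle_) throw std::runtime_error("could not open file");
  }
  ~ScopedFile() { std::fclose(handle_); }

  std::FILE* get() const { return handle_; }

private:
  // Non-copyable so two objects never own (and close) the same handle.
  ScopedFile(const ScopedFile&);
  ScopedFile& operator=(const ScopedFile&);

  std::FILE* handle_;
};

The same idea scales from plain variables to locks and sockets: the object can never exist in a half-initialized state, which is exactly the failure mode the policy above guards against.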
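And to make the worst case construction above concrete, here is a rough sketch of a generator for that kind of input, reusing the TDoublePoint/TPolygon types from the test code. The specific ranges, the rectangle height and the vertical stacking are illustrative choices; the important property is that the left and right x coordinates come from separate ranges, so roughly half of the rectangles straddle any given sweep position.

#include <cstdlib>
#include <vector>

struct TDoublePoint { double X; double Y; };
typedef std::vector<TDoublePoint> TPolygon;

// n thin horizontal rectangles whose left edges fall in [0, width) and whose
// right edges fall in [width, 2*width), giving O(n) unique x coordinates with
// roughly half of the rectangles straddling each of them.
std::vector<TPolygon> MakeWorstCaseRects(int n, double width, double height)
{
  std::vector<TPolygon> result;
  result.reserve(n);
  for (int i = 0; i < n; ++i) {
    const double x1 = width * std::rand() / (RAND_MAX + 1.0);          // left edge
    const double x2 = width + width * std::rand() / (RAND_MAX + 1.0);  // right edge
    const double y1 = i * (height + 1.0);  // stack the rectangles vertically
    const double y2 = y1 + height;
    TPolygon rect(4);
    rect[0].X = x1; rect[0].Y = y1;
    rect[1].X = x2; rect[1].Y = y1;
    rect[2].X = x2; rect[2].Y = y2;
    rect[3].X = x1; rect[3].Y = y2;
    result.push_back(rect);
  }
  return result;
}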