I'm curious if experienced users can comment on CGAL's numerics approach (https://www.cgal.org/exact.html) where CGAL states that they track error bounds explicitly and fall back to high-precision exact arithmetic.
In numerical linear algebra, problems have an intrinsic condition number, and most algorithms admit backward- or forward-error bounds; combined with the condition number, these imply a kind of Lipschitz continuity in computation: you can't be too far off from the true exact solution, and if your input was rounded, the result is still close enough to the exact solution.
The documentation seems to imply that CG problems don't have this kind of regularity ("no such arsenal"), hence the need for fallback to exact.
But if this regularity doesn't exist in CG (i.e., you can create natural geometric constructs for which low-bit perturbations result in large-scale changes in the output), then why would exact numerics even save us here? Wouldn't it be possible to construct examples where the amount of precision required is super-polynomial in the input bitwidth? Or are there different regularity theorems which limit the complexity of the output (but still require, say, polynomially more precision)?
The numerics in computational geometry are used for making combinatorial decisions (do three points make a left or a right turn in the plane; is point p inside or outside of the circle defined by three other points; etc.). These are called predicates in the CG literature. What makes this difficult is that multiple different predicates interact (e.g., take the three points in different order), and the answers they give need to be consistent. If you want some nice examples of how very simple things can go wrong, see L. Kettner, K. Mehlhorn, S. Pion, S. Schirra, and C. Yap, “Classroom examples of robustness problems in geometric computations,” Comput. Geom., vol. 40, no. 1, pp. 61–78, 2008.
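To make the left/right-turn predicate concrete, here's a minimal sketch (the function names are mine, not CGAL's): the decision is the sign of a 2×2 determinant, and the two versions below evaluate that determinant in plain floating point and in exact rational arithmetic on the same (already rounded) inputs.

```python
from fractions import Fraction

def orient2d_float(p, q, r):
    """Sign of the determinant |q-p, r-p| in plain floating point.
    +1 = left turn (counterclockwise), -1 = right turn, 0 = collinear.
    For near-collinear inputs, rounding can produce the wrong sign."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

def orient2d_exact(p, q, r):
    """Same predicate, evaluated in exact rational arithmetic.
    Fraction(float) is exact, so this gives the true sign for the
    rounded coordinates actually stored."""
    px, py = Fraction(p[0]), Fraction(p[1])
    qx, qy = Fraction(q[0]), Fraction(q[1])
    rx, ry = Fraction(r[0]), Fraction(r[1])
    det = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (det > 0) - (det < 0)
```

The Kettner et al. paper's examples are exactly of this shape: take a near-collinear triple, perturb coordinates by a few ulps, and watch `orient2d_float` give answers that are inconsistent across reorderings of the same three points, which then derails convex hull and triangulation code built on top of it.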
Most predicates boil down to deciding whether some quantity is less than, equal to, or greater than zero. CGAL implements filtered predicates (a work of art, in my opinion): evaluate the expression in ordinary floating-point arithmetic, and if the result is far enough from zero, return its sign; if not, switch to interval arithmetic or higher-precision exact arithmetic. A good explanation of how this works (what "far enough from zero" actually means) is in O. Devillers and S. Pion, “Efficient Exact Geometric Predicates for Delaunay Triangulations,” in Proceedings of the 5th Workshop on Algorithm Engineering and Experiments, pp. 37–44, 2003.
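A sketch of the filtering idea in the spirit of Shewchuk-style semi-static filters (my naming, not CGAL's actual implementation; the error constant below is deliberately conservative rather than the tight one derived in the literature, which only means the exact fallback fires more often than strictly necessary):

```python
import sys
from fractions import Fraction

# Conservative bound on the relative rounding error of the 2x2
# determinant below. The tight constant for this expression is
# roughly (3 + 16*eps)*eps with eps = 2**-53; we round it up.
ERRBOUND = 4.0 * sys.float_info.epsilon

def orient2d_filtered(p, q, r):
    """+1 / -1 / 0 for left turn / right turn / collinear.
    Fast float path when the sign is certain; exact fallback otherwise."""
    detleft = (q[0] - p[0]) * (r[1] - p[1])
    detright = (q[1] - p[1]) * (r[0] - p[0])
    det = detleft - detright
    # "Far enough from zero": |det| must exceed a bound on the
    # accumulated rounding error of the float computation.
    errbound = ERRBOUND * (abs(detleft) + abs(detright))
    if det > errbound:
        return 1
    if -det > errbound:
        return -1
    # Filter failed: recompute exactly on the same rounded inputs.
    det = (Fraction(q[0]) - Fraction(p[0])) * (Fraction(r[1]) - Fraction(p[1])) \
        - (Fraction(q[1]) - Fraction(p[1])) * (Fraction(r[0]) - Fraction(p[0]))
    return (det > 0) - (det < 0)
```

The payoff is that in practice almost all calls take the cheap float path, and the expensive exact path runs only on the (near-)degenerate cases where the float sign genuinely can't be trusted.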
The larger problem of symbolically perturbing input, so that it's in general position (in a sense, what to do if your predicate returns 0) was a very active research topic in the 90s, and both the problem and much of the work is explained really well in R. Seidel, “The Nature and Meaning of Perturbations in Geometric Computing,” Discrete Comput. Geom., vol. 19, no. 1, pp. 1–17, 1998.
Thank you for the detailed reply. "Classroom examples of robustness problems in geometric computations" is great; I look forward to doing a deep read there.
Your reference, "The Nature and Meaning of Perturbations in Geometric Computing" itself refers to "On degeneracy in geometric computations" (https://dl.acm.org/doi/10.5555/314464.314474) which I think points to a less tame reality where the answer to my question (continuity claims about CG outputs) is "it depends on the CG problem".
I think this is an interesting question, and I'd be curious to know the answer, too.
That said, I might push back on the idea that CG problems don't have this kind of regularity. The CGAL docs give the example of a point being either on a line determined by two other points, or to one side or the other of it. They say that you need to know this information exactly, which is combinatorial.
It's not so clear to me that you really do need to know this information exactly. For this example, you could represent the line as a zero level set, and then the query above is answered by the sign of the level set function evaluated at the query point. There is some latent numerical error here, but this problem is well-conditioned in the sense you mention above.
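A sketch of that level-set framing (my naming, and a hypothetical helper, not any library's API): the line through a and b is the zero set of a signed linear function, and the query reduces to that function's sign at the query point. The value degrades gracefully with input error; the subtlety is that downstream code usually consumes only the sign, and near the zero set the sign is exactly the bit that rounding can flip.

```python
def line_level_set(a, b):
    """Return f with f(x, y) = 0 on the line through a and b,
    f > 0 on one side and f < 0 on the other (twice the signed
    area of the triangle a, b, (x, y))."""
    ax, ay = a
    bx, by = b
    return lambda x, y: (bx - ax) * (y - ay) - (by - ay) * (x - ax)

f = line_level_set((0.0, 0.0), (1.0, 1.0))  # the diagonal y = x
# The magnitude of f is well-conditioned away from the line ...
left = f(0.0, 1.0)    # positive: left of the directed line
right = f(1.0, 0.0)   # negative: right of it
# ... but for a query point close to the line, rounding can put f on
# either side of zero, and only the sign reaches the combinatorial layer.
```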
I think it would be interesting to see whether it's possible to build a library of CG routines based on a "fuzzier" level set conception of things.
I'm not the best person to give counterexamples here, but a classical demo of "lack of regularity" would be the point-in-polygon problem (https://en.wikipedia.org/wiki/Point_in_polygon).
Say you have a spiral-shaped but simple polygon that's wrapped around a query point. Even if your query point is far from the innermost "loop" of the spiral, a raycast could be arbitrarily confused if the spiral has many "thin" loops on the outside. (However, notably, the bits of precision required would be a reasonable function of the spiral's specification. My point is just that fuzziness in general can be a compounding problem as a CG algorithm "progresses".)
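For reference, the standard even-odd raycast that the spiral example stresses looks like this (a plain sketch, not a robust implementation): every edge crossing is itself a sign-of-expression decision, so a spiral with many thin loops forces one such fragile decision per loop between the query point and the outside.

```python
def point_in_polygon(pt, poly):
    """Even-odd raycast: shoot a ray from pt in the +x direction and
    count edge crossings. poly is a list of (x, y) vertices.
    Each crossing test below is a predicate that rounding can flip
    when the ray passes very close to a vertex or edge."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line.
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside
```

For a well-separated point and a square this is unproblematic; the spiral scenario above is exactly the case where many near-degenerate crossing tests stack up between the query point and infinity.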