Abstract

How do we model, structure, and quantify the information contained in geometric data? We study these three components of the question through the lens of computational geometry.

First, we review how geometric information and computations on this data can be theoretically modelled. Specifically, we discuss two common models of computation in theoretical computer science: the word RAM, which fixes a maximal number of bits per variable (the word size), and the real RAM, which assumes that a single register may contain an arbitrary real value up to infinite precision. We study the execution of real RAM programs on the word RAM using smoothed analysis, a probabilistic paradigm that models the day-to-day inaccuracies of algorithmic input. We show that under smoothed analysis, real RAM programs are 'correctly' executed using the common logarithmic word size: the real RAM program and its word RAM counterpart reach the same conclusion.

Next, we discuss how to structure geometric data for efficient reuse. This question is studied in three contexts. First, we show how to preprocess a simple polygon of n vertices such that, given two query segments (each traversed by an entity at constant speed), we can decide in sublinear time whether there is a moment at which the two entities are mutually visible. Second, we show how to preprocess a trajectory P of n vertices such that, given a query segment q, we can compute the Fréchet distance between P and q in sublinear time. Third, we show how to dynamically maintain a smooth quadtree with worst-case constant update time.

In the third part of this thesis, we reason about how to mathematically quantify the inherent imprecision in geometric measurements. We model this imprecision as follows: the algorithmic input is an imprecise point set, that is, a set of n regions that each contain a unique point. The goal is to preprocess the regions such that a data structure on the underlying point set can be computed faster than would be possible without the given regions. If the regions are pairwise disjoint, we show how to efficiently construct the sorted order of a one-dimensional imprecise point set, or the Pareto front of a planar imprecise point set. We show that our running time is not only worst-case optimal, but optimal for every set of (circular or rectangular) regions, against the worst-case point set corresponding to those regions.

These research questions yield a variety of computational lower bounds, expected complexity bounds, and algorithmic upper bounds. Together, these fundamental results contribute to our theoretical and mathematical understanding of geometric data.
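To make the word RAM versus real RAM contrast concrete, here is a toy illustration (not the thesis's construction): the classic orientation predicate evaluated exactly, standing in for the real RAM via Python's arbitrary-precision `Fraction`, and in fixed precision via floating point, standing in for the word RAM. In the spirit of smoothed analysis, an adversarially degenerate (collinear) instance is perturbed by small random noise; the two evaluations then almost always return the same sign.

```python
import random
from fractions import Fraction

def orient_float(ax, ay, bx, by, cx, cy):
    # Fixed-precision (word RAM style) evaluation of the orientation sign.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orient_exact(ax, ay, bx, by, cx, cy):
    # Exact (real RAM style) evaluation: Fraction converts each float
    # exactly, so the determinant is computed without rounding error.
    ax, ay, bx, by, cx, cy = (Fraction(v) for v in (ax, ay, bx, by, cx, cy))
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def sign(x):
    return (x > 0) - (x < 0)

random.seed(1)
trials, agree = 10_000, 0
for _ in range(trials):
    # Degenerate (collinear) base instance plus small random noise:
    # the smoothed-analysis input model.
    pts = [v + random.uniform(-1e-6, 1e-6)
           for v in (0.0, 0.0, 1.0, 1.0, 2.0, 2.0)]
    agree += sign(orient_float(*pts)) == sign(orient_exact(*pts))

print(agree, "of", trials, "perturbed instances evaluated consistently")
```

A float sign error requires the exact determinant to fall within roughly machine epsilon of zero, which after perturbation happens only with vanishing probability; this is the intuition behind the 'correct execution with logarithmic word size' statement above, though the actual result is about word RAM programs, not floating point.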
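For context on the Fréchet distance queries mentioned above, the following sketch shows the standard quadratic-time baseline that sublinear-time query structures improve upon. It computes the *discrete* Fréchet distance (a simplification: the thesis concerns the continuous distance) between a trajectory `P` and a query segment `q` sampled at its endpoints, using the classic dynamic program.

```python
from math import dist

def discrete_frechet(P, Q):
    # Classic O(|P| * |Q|) dynamic program over the coupling of the
    # two point sequences; ca[i][j] is the discrete Fréchet distance
    # between the prefixes P[:i+1] and Q[:j+1].
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(
                    min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]

P = [(0, 0), (1, 1), (2, 0), (3, 1)]   # a small example trajectory
q = [(0, 0), (3, 0)]                   # a query segment (its two endpoints)
answer = discrete_frechet(P, q)
print(answer)  # sqrt(2) for this instance
```

Without preprocessing, every query pays this full pass over P; the data structure described above answers each segment query in time sublinear in n.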
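The benefit of preprocessing imprecise points is easiest to see in the one-dimensional disjoint case. The sketch below is a minimal illustration (not the thesis's algorithm): when the regions are pairwise disjoint intervals, sorting the intervals once fixes the order of whatever true points they contain, so after an O(n log n) preprocessing step the true points can be output in sorted order with a single O(n) pass.

```python
import random

def preprocess(intervals):
    # The only O(n log n) work: sort the pairwise disjoint intervals
    # by left endpoint, before any true point is known.
    return sorted(range(len(intervals)), key=lambda i: intervals[i][0])

def sort_points(order, points):
    # Disjoint intervals totally order their contents, so emitting the
    # true points in interval order sorts them in O(n) time.
    return [points[i] for i in order]

random.seed(7)
# Pairwise disjoint intervals, given in arbitrary order.
intervals = [(10 * k, 10 * k + 5) for k in random.sample(range(100), 20)]
order = preprocess(intervals)

# Later, the unique true point inside each interval is revealed.
points = [random.uniform(lo, hi) for lo, hi in intervals]
result = sort_points(order, points)
assert result == sorted(points)
```

The thesis results are far stronger than this toy case: they cover circular and rectangular regions in the plane and prove the running time optimal for every fixed set of regions, not just in the worst case.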