Abstract
Since the beginning of this century, there has been explosive growth in the use of GPS technology. It has many mainstream uses, such as powering car navigation systems and allowing phones to geolocate themselves, and it is also widely used for more specialized activities such as tracking animal populations
or freight ships. If a GPS tracking device is used to track the location of a person, animal, or object over an extended period of time, the result is a sequence of timestamped positional measurements. Such a sequence is called a trajectory. For a point in time between measurements, we can approximate the position of the tracked object by linearly interpolating between the nearest samples. We can imagine this as connecting the measured points with straight line segments to create a polygonal curve.
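To illustrate this interpolation step, here is a minimal Python sketch; the function name and the (timestamp, x, y) tuple format are assumptions made for this example only and are not taken from the thesis.

```python
from bisect import bisect_right

def interpolate_position(trajectory, t):
    """Approximate the position at time t by linearly interpolating
    between the two nearest samples of a timestamped trajectory.

    trajectory: list of (timestamp, x, y) tuples, sorted by timestamp.
    """
    times = [p[0] for p in trajectory]
    if not (times[0] <= t <= times[-1]):
        raise ValueError("t lies outside the measured time interval")
    i = bisect_right(times, t)
    if i == len(times):          # t equals the last timestamp
        return trajectory[-1][1:]
    t0, x0, y0 = trajectory[i - 1]
    t1, x1, y1 = trajectory[i]
    a = (t - t0) / (t1 - t0)     # fraction of the way from sample i-1 to i
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

# Example: position at t = 15 lies halfway along the straight segment
# connecting the first two measurements.
traj = [(10, 0.0, 0.0), (20, 4.0, 2.0), (30, 4.0, 6.0)]
print(interpolate_position(traj, 15))   # (2.0, 1.0)
```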
The amount of available trajectory data has grown immensely as GPS technology has become commonplace. By analyzing this data we can gain a deeper understanding of the movements of whatever was tracked. Since trajectory databases can be very large and more data is generated every day, there is a need for good algorithms that can analyze the data automatically. Algorithms are also needed to preprocess the data, performing tasks such as error correction or compression. In this thesis, four different algorithmic tasks pertaining to the processing of trajectory data are studied. The thesis combines theoretical and practical research. In the theoretical parts we consider how the trajectory processing tasks can be modeled and whether efficient algorithms exist to solve them. In the experimental parts we implement several trajectory algorithms and apply them to real-world trajectory data. We consider a different task in each chapter. Specifically, we investigate the following four tasks:
Firstly, we study outlier detection in trajectory data. We introduce algorithms that approach this problem by considering the physical characteristics (such as the maximum speed) of the object that was tracked. We experimentally compare these algorithms to benchmark algorithms.
Secondly, we study trajectory simplification, or, put more broadly, the simplification of polygonal curves. We look into algorithms for finding a minimum-complexity simplification under a variety of constraints.
Thirdly, we study how to compute a good representative for a cluster of trajectories. We implement the Central Trajectories algorithm by Van Kreveld et al. and use it to run experiments on real data.
Lastly, we study the generalization of road networks and how trajectories can be used for data-driven approaches to generalization.
By studying these tasks through a combination of theoretical and experimental research, we aim to broaden our understanding of trajectory data and how it can be used.