Abstract
Privacy is the right of individuals to keep personal information to themselves or to restrict it to those with whom they are comfortable sharing it. When individuals use online systems, they should be given the right to decide what information they would like to share and what to keep private. When
a piece of information pertains only to a single individual, preserving privacy is possible by providing the right access options to that user. However, when a piece of information pertains to multiple individuals, such as a picture of a group of friends or a collaboratively edited document, deciding how to share this information and with whom is challenging. The problem becomes more difficult when the individuals affected by the information have different, possibly conflicting privacy constraints. In this dissertation, we investigate collaborative privacy mechanisms that can provide a fair resolution, one that also considers equity, for preserving the privacy of all parties in question. We focus mainly on the online social networks domain, since it is widely used and contains pieces of content that can affect the privacy of more than one individual. We propose an auction-based mechanism in which each related party bids for a privacy preference in a collaborative privacy decision. The mechanism aims to be robust, immune to abuse, and fair to all parties, so that each individual has a similar amount of influence on privacy decisions. We also prevent undersharing as well as oversharing, ensuring that every piece of content that does not constitute a privacy violation is shared in the system. We employ software agents that assist users in participating in this mechanism; these agents are capable of learning the privacy preferences of users with different levels of knowledge and motivation. Privacy assistant agents provide privacy-preserving resolutions to conflicts without requiring users to spend time and effort, allowing them to use online social networks in much the same way as the widely used applications provide. Furthermore, we investigate the effect of human values on privacy decisions, in order to understand the reasoning of online social network users. This leads us to social norms, which emerge when a group of users behaves in a similar way in similar circumstances. We propose a mechanism to identify these norms and incorporate them into collaborative privacy decisions, giving users the option to simply follow the norms, without the need for a complex decision system, while still preserving privacy and preventing significant privacy violations. We evaluate our work with multi-agent simulations and case studies, and report the results in terms of success in preserving privacy, equity, and usability. We show that our work offers an easy-to-use solution for preventing privacy violations, one in which each user's privacy concerns are taken into account, regardless of their knowledge about privacy or their motivation to be a part of the process.