Collaboration among organizations or individuals is common. While participants are often unwilling to share all of their information with one another, some information sharing is unavoidable in pursuit of a common goal. The need to share information and the desire to keep it confidential are competing concerns that shape the outcome of a collaboration. When collaborating agents share sensitive information to achieve a common goal, it is helpful for them to determine in advance whether doing so will lead to an unwanted release of confidential data. Such decisions depend on which other agents are involved, what those agents can do in the given context, and the individual confidentiality preferences of each agent. This thesis proposes a formal model of collaboration that addresses confidentiality concerns. We draw on the notion of a plan, which originates in the Artificial Intelligence literature. We use data confidentiality policies to specify the confidentiality concerns of each agent and offer three ways of defining policy compliance. We also distinguish between systems containing only well-balanced actions, in which the pre- and post-conditions are of the same size, and general systems that contain unbalanced actions. For each definition of policy compliance and each type of system, we determine the decidability and complexity of scheduling a plan that leads from a given initial state to a desired goal state while simultaneously deciding compliance with the agents' policies.
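To illustrate the well-balanced/unbalanced distinction, one common formalization (not necessarily the exact one used in the thesis) models a state as a multiset of facts and an action as a rewrite rule that consumes a multiset of pre-condition facts and produces a multiset of post-condition facts; the action is well-balanced when the two multisets have the same size. The following is a minimal sketch under that assumption; the fact names and function names are hypothetical.

```python
from collections import Counter

def is_balanced(pre, post):
    """A well-balanced action has pre- and post-conditions of the
    same size, i.e. it consumes and produces equally many facts."""
    return len(pre) == len(post)

def apply_action(state, pre, post):
    """Apply a rewrite action to a state (a multiset of facts):
    remove the pre-condition facts and add the post-condition facts.
    Returns the new state as a Counter, or None if the pre-condition
    is not contained in the state."""
    state = Counter(state)
    need = Counter(pre)
    if any(state[f] < n for f, n in need.items()):
        return None  # action is not enabled in this state
    return state - need + Counter(post)
```

For example, an action with pre-condition {P(a)} and post-condition {R(a)} is well-balanced, whereas one producing two facts from one is not; restricting a system to well-balanced actions keeps the state size constant along any plan, which is what makes the scheduling problems for such systems behave differently from the general case.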
Thesis (Ph.D. in Mathematics) -- University of Pennsylvania, 2009. Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3548. Adviser: Andre Scedrov.