150 Western Avenue, Allston, MA 02134


The talks will begin at 2:30pm. Refreshments will be served beforehand, at 2:00pm, outside of SEC LL2.224.


Title: Towards a Unified Theory of Network Sparsification

Speaker: Aaron (Louie) Putterman, graduate student in Computer Science at Harvard SEAS
Abstract: Networks model complex systems in terms of local interactions among their elements. Network analysis reveals underlying implicit structures (for example, communities in social networks) and is central to solving global optimization problems such as routing data efficiently, placing servers to optimize performance, and identifying likely failure points in a network.

Today’s networks are massive, modeling interactions among hundreds of millions of entities. Analyzing such networks is challenging due to the sheer number of entities alone, and becomes even more so when the number of interactions grows super-linearly with the number of entities. In this talk, I will discuss recent progress on network sparsification, a technique that reduces these networks to manageable sizes while maintaining theoretical guarantees on the sparsified network’s similarity to the original.
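
To make the sparsification idea concrete, here is a minimal sketch of one classical recipe, uniform edge sampling with reweighting; it is a generic illustration chosen for simplicity, not the construction from the talk, and strong sparsifiers sample edges non-uniformly (for example, by effective resistance):

```python
# Sample each edge independently with probability p and reweight survivors
# by 1/p, so every cut's weight is preserved in expectation.
import random

def sparsify_uniform(edges, p, seed=0):
    """edges: iterable of (u, v, weight) triples; returns a reweighted subsample."""
    rng = random.Random(seed)
    return [(u, v, w / p) for u, v, w in edges if rng.random() < p]

# Example: keep roughly 30% of the edges of a complete graph on 6 nodes.
dense = [(u, v, 1.0) for u in range(6) for v in range(u + 1, 6)]
print(sparsify_uniform(dense, p=0.3))
```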


Title: Making Differential Privacy Usable Through Human-Centered Tools

Speaker: Priyanka Nanayakkara, Postdoctoral Fellow in the Center for Research on Computation and Society at Harvard SEAS

Abstract: It is often useful to learn patterns about a population while protecting individuals’ privacy. Differential privacy (DP) is a state-of-the-art framework for limiting how much information is revealed about individuals during analysis. Under DP, statistical noise is injected into analyses to obscure individual contributions while maintaining overall patterns. The amount of noise is calibrated by a unitless privacy loss parameter, ε, which controls a tradeoff between the strength of privacy protections and the accuracy of estimates. This tradeoff is difficult to reason about because it is probabilistic, non-linear, and inherently value-laden. However, for DP to be broadly usable, people across the data ecosystem must be able to reason about it effectively.
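
As a concrete illustration of the noise-injection step described above, here is a minimal sketch of the textbook Laplace mechanism, a standard DP primitive; the function name and parameters are illustrative, and this generic example stands in for whatever mechanisms the talk actually covers:

```python
# Add Laplace noise with scale sensitivity/epsilon to a statistic:
# smaller epsilon -> more noise -> stronger privacy, lower accuracy.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, seed=None):
    """Return a differentially private estimate of true_value."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-transform sample from Laplace(0, scale).
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a counting query (sensitivity 1) at two privacy levels.
print(laplace_mechanism(1234, sensitivity=1.0, epsilon=0.1, seed=7))  # noisy
print(laplace_mechanism(1234, sensitivity=1.0, epsilon=2.0, seed=7))  # close
```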


In my work, I develop human-centered tools that help data curators and data analysts reason about DP and its tradeoffs. In this talk, I will illustrate my approach by presenting Visualizing Privacy (ViP), an interactive visualization interface for data curators setting ε while balancing accuracy and privacy. I will also describe a controlled user study in which potential curators without DP expertise used ViP to complete tasks related to setting ε. To end, I will describe current lines of work aimed at further improving DP’s usability.
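
To give a flavor of the ε-setting task that such tools support, here is a toy sweep of the expected accuracy cost of a sensitivity-1 count at several candidate ε values; this is a generic back-of-the-envelope calculation for the Laplace mechanism above, not ViP’s actual interface:

```python
# For a sensitivity-1 count under the Laplace mechanism, the expected
# absolute error is exactly 1/epsilon, so sweeping candidate epsilons
# shows how accuracy degrades as privacy protections strengthen.
for eps in (0.01, 0.1, 0.5, 1.0, 2.0):
    print(f"epsilon = {eps:<5}  expected |error| on a count: {1 / eps:8.1f}")
```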
