Image: Pietro Jeng/Unsplash

In an age of abundant information, a central question is how to quantify the value of all that data. “The potential value of data is not just about the quality of the information contained in a dataset, but about what types of questions we can answer with it,” says SFI External Professor Amos Golan (American University). “On one hand, this is a very philosophical question; on the other hand, we want to make it very empirical.”

This April 2–3, researchers from around the world will meet at SFI for a two-day workshop titled “The Potential Value of Data” to discuss methods for quantifying the potential value of specific datasets. “Information can appear to be useless until a model is constructed that renders it useful,” says SFI External Professor John Harte (UC Berkeley), who is co-organizing the workshop with Golan and Min Chen, a professor at Oxford University.

The push to quantify the value of data was prompted in part by recent efforts from government agencies to compile and maintain publicly available datasets. Because creating and maintaining these datasets is enormously costly, there is a growing need to quantify the value of existing datasets and to predict the value of future ones.

“Usually when people talk about the value of data, they look at what people have already gotten out of a dataset, such as the number of papers published, but a more important issue is to think about the potential value,” says Golan.

In addition to discussing ways of quantifying the value of existing datasets, the participants plan to discuss methods for optimizing future datasets, including how to identify the specific high-value information that would increase the number of questions a dataset can answer. “Thinking about the questions that you can answer with the data can help us in the practical sense, because then we can also evaluate what is the data that we wish we had, and what would be the cost of acquiring it,” Golan says.

Sometimes the answer is very simple.