Rafael Treibich
DKK 1.577.472,00
Independent Research Fund Denmark (DFF)
Big Theory of Big Data: Theoretical Foundations of Data Aggregation Methods
Information can be difficult to process, conceptualise, and leverage into better
decisions. Big data applications in particular require mechanisms that reduce the
data's "bigness" in a theoretically well-founded way, much as Google's PageRank
algorithm summarises the World Wide Web into a simple ranking of the websites
relevant to an internet search. Analogously, in this project we will develop
methods to evaluate the reliability of an information source, elicit information from
large groups of individuals robustly, and characterise information diffusion
patterns. We will derive these methods from basic information-aggregation
criteria (axioms) that specify the key features of the data we care about. This
analysis enables a theoretically founded transformation of complex,
high-dimensional data into easily interpretable indices.
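
The PageRank analogy above can be made concrete with a minimal power-iteration sketch. The toy three-page link graph and the damping factor below are illustrative assumptions, not part of the project itself; they show how an aggregation rule collapses a whole graph into a single interpretable index per page.

```python
# Minimal PageRank via power iteration on a hypothetical toy graph.
# The graph, damping factor, and tolerance are illustrative choices.

DAMPING = 0.85   # standard damping value; an assumption here
TOL = 1e-9       # convergence tolerance

# Hypothetical web graph: page -> list of pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=DAMPING, tol=TOL):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from the uniform distribution
    while True:
        # Each page keeps a (1 - damping) baseline share ...
        new = {p: (1 - damping) / n for p in pages}
        # ... and passes a damped share of its rank along its out-links.
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new

ranks = pagerank(links)
# The ranks sum to 1; page "C" is linked by both A and B, so it scores highest.
```

The whole link structure is thus summarised into one number per page, which is the kind of axiom-driven reduction from complex data to a simple index that the project aims to develop for other settings.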