For neuroscientists studying complex systems, data exhibit patterns that may or may not correspond to higher-level cognitive processes. Tyler Millhouse proposes a criterion for evaluating how real a pattern is likely to be, refining SFI External Professor Daniel Dennett’s 1991 account, which used ‘compressibility’ to judge how genuine a pattern is likely to be. Dennett characterized genuine patterns by whether complicated scientific data can be faithfully represented by smaller scientific models, much as extremely detailed pictures can be compressed into JPEG files that capture the important elements of the original image. Millhouse further argues that the more complex the interpretation a pattern requires, the less real that pattern is likely to be.
My aim is to argue for a new and more demanding criterion for the reality of patterns. This criterion is inspired by Dennett’s account of real patterns, both in its original form and in later interpretations, but it also builds on algorithmic information theory and on similarity-based criteria of model fidelity.
To understand how Dennett puts the connection between regularity and compression to philosophical work, it is worth revisiting one of his central examples: image compression. The presence of a pattern in data is a matter of degree, a philosophical insight exemplified by the checkerboard compression test.
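As a rough sketch of that test (our illustration, using zlib as a stand-in compressor, not an example from the paper), a clean checkerboard bitmap compresses to a tiny fraction of its raw size, and the compressed size climbs as random noise is mixed in, so the data exhibit a pattern to a degree rather than all or nothing:

```python
# A minimal sketch of the checkerboard compression test, assuming zlib
# compression as a crude stand-in for 'finding an efficient description'.
import random
import zlib

random.seed(0)  # reproducible noise

def checkerboard_bytes(n=64, noise=0.0):
    """An n x n checkerboard, one byte per cell, each cell flipped with probability `noise`."""
    cells = []
    for i in range(n):
        for j in range(n):
            bit = (i + j) % 2
            if random.random() < noise:
                bit ^= 1  # corrupt this cell
            cells.append(bit)
    return bytes(cells)

# More noise means less regularity and worse compression: pattern as a matter of degree.
for noise in (0.0, 0.05, 0.25, 0.5):
    raw = checkerboard_bytes(noise=noise)
    packed = zlib.compress(raw, 9)
    print(f"noise={noise:.2f}  raw={len(raw)} bytes  compressed={len(packed)} bytes")
```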
Dennett’s second key insight is that patterns in data are objective. A pattern exists in some data, and is real, if there is a description of the data that is more efficient than the bit map, whether or not anybody can devise it. The data might be anything from baseball statistics to astronomical observations of the actual world. The important thing is that we have some raw data that we can recover from a more economical representation. The accuracy and economy of this description tell us how much of a pattern is present in the data, and the specifics of the description tell us about the pattern itself.
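A minimal sketch of the resulting test, again assuming zlib as the description-finder (our choice, not Dennett’s): if any compressor beats the verbatim bit map, that is a witness that a pattern is there, while failure to compress is inconclusive, since a better description might exist that no available method finds.

```python
# Hedged sketch: compression success is a sufficient witness for a real
# pattern; compression failure only means *this* compressor found none.
import os
import zlib

def compresses(data: bytes) -> bool:
    """True if zlib finds a description shorter than the raw data."""
    return len(zlib.compress(data, 9)) < len(data)

structured = bytes(range(256)) * 64  # highly regular byte sequence
noise = os.urandom(256 * 64)         # random bytes: incompressible with overwhelming probability

print(compresses(structured))  # True: an efficient description exists, so a pattern is present
print(compresses(noise))       # False: no pattern found by this compressor
```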
An objective interpretation of these patterns must answer at least two questions: ‘What is there?’ and ‘How does it behave?’ To this end, Millhouse proposes a simplicity constraint on the mappings by which physical systems instantiate models.
Mapping simplicity depends on a kind of structural similarity between the instantiating system and the instantiated model (in this case, a physical system and a model of computation, respectively). In particular, the constraint suggests that mappings will be simpler when the physical states corresponding to each computational state are (i) similar to each other and (ii) different from the physical states corresponding to other computational states. In other words, mappings will be simpler when distinctions drawn by the model reflect patterns of similarity and difference present in the instantiating system.
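The flavor of this constraint can be shown with made-up data (the score and all names below are our illustration, not the paper’s formalism): treat physical states as points, computational states as labels, and favor mappings whose labels group similar points together.

```python
# Crude proxy for mapping simplicity: mean between-label distance divided by
# mean within-label distance. High values mean the model's distinctions track
# genuine similarity structure in the physical system.
from itertools import combinations
import math

def mapping_naturalness(points, labels):
    within, between = [], []
    for (p, lp), (q, lq) in combinations(zip(points, labels), 2):
        (within if lp == lq else between).append(math.dist(p, q))
    return (sum(between) / len(between)) / (sum(within) / len(within))

# Hypothetical voltage readings mapped to computational states 0 and 1.
readings = [(0.1, 0.0), (0.2, 0.1), (4.9, 5.0), (5.1, 4.8)]
natural = [0, 0, 1, 1]        # labels follow the physical clusters
gerrymandered = [0, 1, 0, 1]  # same points, arbitrary grouping

print(mapping_naturalness(readings, natural))        # large: a simple, natural mapping
print(mapping_naturalness(readings, gerrymandered))  # small: distinctions ignore similarity
```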
This criterion is based on real patterns, but it addresses several major concerns about the original account’s commitment to realism. It does so by combining the ideas of compression and similarity with the notion that model fidelity amounts to a kind of structural similarity between a model and its target system.
“This is about getting us to reflect on how much interpretive work we do,” Millhouse says. “And it also cautions us to think about how scientific theorizing works in general. It’s easy to come up with reasons your theory is OK despite evidence to the contrary.”
This work suggests that the amount of ‘reading into’ we have to do is closely connected to what it means for the world to really exhibit a pattern.
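One speculative way to make that connection concrete, in the spirit of minimum-description-length reasoning (our framing, not the paper’s formalism), is to charge a pattern claim for the interpretation as well as the model, so that heavier ‘reading into’ raises the total cost of the claim:

```python
# Toy illustration: the cost of a pattern claim includes the interpretive
# mapping needed to apply the model, not just the model itself.
import zlib

def total_cost(model: bytes, interpretation: bytes) -> int:
    return len(zlib.compress(model)) + len(zlib.compress(interpretation))

model = b"period-2 checkerboard"
plain_reading = b"read cells left to right, top to bottom"
baroque_reading = b"reorder cells by a bespoke permutation chosen so the model fits; " * 4

# The same model with a simpler interpretation makes the stronger claim to reality.
print(total_cost(model, plain_reading) < total_cost(model, baroque_reading))  # True
```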
Really Real Patterns, Tyler Millhouse
Published: June 2021
DOI: https://doi.org/10.1080/00048402.2021.1941153