
Everything is a model

by Scott Yanco and Elizabeth Pansing

[Image: Thylacoleo skeleton in Naracoorte Caves]

It’s all shadows on the wall. Plato’s “Allegory of the Cave” serves as a useful (if unintentional) metaphor for how science derives meaning from models.

On a sunny, early spring day, a group of graduate students and faculty are walking across campus. One professor is discussing some problems she encountered with a particular analysis. Someone suggests she use a linear model, to which she replies: “Ah, models! Why is it always models?! Can’t I just test my hypothesis?” No one is quite sure what to say (besides extolling the benefits of linear models), but someone should have replied: “Everything is a model.”

This conversation is representative of how many ecologists view models: set in opposition to “real” statistical tests. But this dichotomy is only imagined. All quantitative research methods are based on models. All statistical tests, all summary statistics, all raw data, and even our ideas are models. Failing to appreciate the ubiquity of models leads to misunderstanding the epistemology of science itself. Conversely, realizing that all science is an act of model building leads to more creative and robust inquiry and, ultimately, better inference.

Models Are Models

A model is an abstraction of some real thing or process meant to represent the pertinent features thereof while disregarding unnecessary detail. Typically, when ecologists think of models they picture a particular kind: simulation models. Simulation models are mathematical and/or computerized abstractions of a process built from a researcher’s assumptions about system function. Simulation models can be empirically parameterized, or researchers can explore the parameter space. Importantly, this type of model is not fit to data but instead generates “data” through the simulation process.
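For instance, a simulation model can be just a few lines of code. The sketch below is a minimal illustration, not drawn from any real study: a discrete-time (Ricker-form) logistic growth model with Poisson demographic noise, where every parameter value is invented for demonstration. Note that it generates “data” directly from its assumptions rather than being fit to observations:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_population(n0=10, r=0.8, K=500, years=50):
    """Discrete-time logistic (Ricker) growth with demographic stochasticity.

    All parameters here are illustrative assumptions. Any inference drawn
    from the output is inference about THIS model: logistic density
    dependence, a fixed carrying capacity K, and Poisson demographic noise.
    """
    counts = [n0]
    for _ in range(years):
        # Deterministic skeleton: expected population next year
        expected = counts[-1] * np.exp(r * (1 - counts[-1] / K))
        # Demographic stochasticity: integer counts drawn around the expectation
        counts.append(rng.poisson(expected))
    return np.array(counts)

trajectory = simulate_population()
print(trajectory)  # the "data" this model generates, valid only under its assumptions
```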

Critically, the assumptions of simulation models constrain subsequent inference: any knowledge gained is true only insofar as the model assumptions are correct. Put another way, any inference made from a simulation model is really inference relative to, or about, that model. As we will see below, having model assumptions limit the inferential space is not unique to simulation models; rather, it is an often-overlooked but pervasive feature of all inference.

Statistical “Tests” Are Models

Despite common perception, model assumptions also constrain inferences from statistical tests. Statistical tests are simply hypotheses (almost always the null hypothesis) confronted with data. All statistical tests constrain inference in that they evaluate the plausibility of observed data given the posited model (or, sometimes, the plausibility of a model given some data). In either case, the inference is always relative to a proposed model. Consider a simple example: the common two-sample t-test. This test estimates the probability that the observed difference in means between groups (or a larger difference) arose from a probability distribution (i.e., a model) in which that difference doesn’t exist. We call this probability the “p-value”. If the p-value is small enough, we infer that the data are unlikely to have been generated by that “no difference” model. Note that this p-value is, by definition, a conditional probability: it is the probability of observing a difference between groups at least as large as the one we saw, given the model for the null hypothesis. The resulting inference is constrained by the assumptions of the null model we proposed and is entirely relative to that model.
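To make that conditionality concrete, here is a minimal Python sketch using scipy. The sample sizes, group means, and normality of the data are illustrative assumptions, not values from any real study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two hypothetical samples (all values invented for illustration)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

# The two-sample t-test evaluates the data against a null MODEL:
# both groups drawn from distributions with equal means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A small p-value says the observed difference would be improbable
# IF the null model were true -- inference relative to that model,
# not an unconditional statement about the world.
```

Nothing in the output certifies the null model itself; the test only tells us how surprising the data would be if that model held.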

Because researchers call approaches like the t-test (or ANOVA, Chi-squared, etc.) “tests” and apply data to them, they are often tempted to view them as distinct from and, perhaps, better than simulation models. However, we have just seen that the same constraints that apply to simulation models also apply to statistical “tests”; our inference is only valid within the bounds established by the model we propose. Furthermore, even though we have brought data to bear on this process, all we have learned is the plausibility of those data predicated on the given model.

Descriptive Statistics Are Models

Even descriptive statistics calculated from a dataset invoke a model. For example, reporting the mean and standard deviation of a sample indicates a belief that the sample follows a Gaussian distribution (since those are the parameters needed to describe that distribution). Indeed, even setting aside distributional assumptions, reporting something like a median indicates that researchers believe the sample is best described by its central tendency. Other descriptive statistics could also apply (e.g., range, variance, etc.) and would indicate a belief that the best description of the dataset is that summary value. In the plainest terms, a summary statistic is an abstraction of a set of numbers meant to describe the “relevant” information contained within that set—in other words, a model.
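A quick numerical sketch (Python, with a made-up right-skewed sample) shows how the choice of summary statistic is a choice of model:

```python
import numpy as np

rng = np.random.default_rng(7)

# A hypothetical right-skewed sample (e.g., body masses); values are illustrative
sample = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Reporting mean and SD implicitly invokes a Gaussian model of the data...
print(f"mean = {sample.mean():.2f}, sd = {sample.std(ddof=1):.2f}")

# ...while the median invokes a different model: "this set of numbers is
# best described by its central tendency."
print(f"median = {np.median(sample):.2f}")

# For skewed data the two "models" tell different stories: the mean sits
# well above the median, and mean - 2*sd can even imply impossible
# negative masses.
```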

The models invoked in selecting a summary statistic also constrain the knowledge they produce. By summarizing a set of numbers as the median, we choose to disregard the variance in the data and imply that the central tendency is the ecologically relevant aspect of the dataset. Although summary statistics are not typically viewed as inferential, the information they convey still carries the constraints of the model used to generate the summary.

Hypotheses Are Models

A hypothesis is a researcher’s noetic model of how the world works, and the data collected reflect the researcher’s conceptual model of how to test that hypothesis. The mathematical hypotheses we use for null hypothesis testing are themselves models (e.g., H0: μ1 − μ2 = 0, the null hypothesis for the two-sample t-test described above). Multi-model frameworks likewise operate on mathematical definitions of our working hypotheses about how a process works. Any time we see AIC, BIC, etc., the authors are putting forth multiple hypotheses in the form of mathematical equations and subsequently confronting those models with data. When using a multi-model approach, we might ask how probable our data are given the hypotheses we’ve put forth, but we can also ask which model/hypothesis is the most parsimonious given our data. It is worth noting that we are constrained to the set of hypotheses we present. Hypotheses that we do not propose cannot be considered, and we are therefore limited by the assumed set of hypotheses.
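As a rough illustration of that constraint, here is a minimal Python sketch comparing two candidate hypotheses by AIC. The data, the two-model candidate set, and the Gaussian likelihood are all assumptions made for demonstration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data: a response y with a weak linear trend in x
x = np.linspace(0, 10, 50)
y = 2.0 + 0.3 * x + rng.normal(scale=1.0, size=x.size)

def gaussian_aic(y, y_hat, k):
    """AIC = 2k - 2 ln L for a Gaussian likelihood with MLE variance."""
    resid = y - y_hat
    sigma = np.sqrt(np.mean(resid**2))
    log_lik = stats.norm.logpdf(resid, scale=sigma).sum()
    return 2 * k - 2 * log_lik

# Hypothesis 1: the mean alone describes y (intercept-only model)
aic_null = gaussian_aic(y, np.full_like(y, y.mean()), k=2)  # mean + variance

# Hypothesis 2: y depends linearly on x
slope, intercept = np.polyfit(x, y, deg=1)
aic_linear = gaussian_aic(y, intercept + slope * x, k=3)  # slope, intercept, variance

print(f"AIC (intercept-only): {aic_null:.1f}")
print(f"AIC (linear):         {aic_linear:.1f}")
# Whichever hypothesis "wins" is best only relative to this candidate set;
# hypotheses we never wrote down cannot be considered.
```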

We can take this a step further and out of the mathematical realm entirely. When a colleague or committee member asks for a diagram depicting a process of interest, the resulting diagram represents a hypothesis and a model of the system or process. Eventually, these diagrams could be written as mathematical equations, but don’t have to be. They are still models.

Conclusion

Everything is a model. This extends from the way our brains process stimuli, to the conceptual models we draw for colleagues, to the models we use for quantitative analysis. Every one of these is a useful simplification of the world around us—one that makes general assumptions about the process in which we’re interested and reduces complexity to manageable levels. The practical point for ecologists is to recognize that the process of science is entirely an exercise in proposing and evaluating models.

Accepting that all science is an exercise in model building promotes two critical realizations: 1) We must stop thinking of models as less trustworthy or “unreal”; and, 2) We must think more clearly and accurately about what our statistics actually do. Both of these realizations lead to better inference and, therefore, better science. On the one hand we should realize that the inference produced by simulation models is no less constrained or caveated than inference derived from data. On the other hand, we should be more cautious with statistical tests and ensure that the inference we report acknowledges its constraint by underlying model assumptions.

Author biographies:  

Scott Yanco is a PhD student at the University of Colorado Denver. He studies avian movement ecology using a combination of simulation modeling and empirical inference. Twitter: @scottyanco

Elizabeth Pansing is a doctoral candidate at the University of Colorado Denver. She is a forest ecologist interested in how forest disturbances and climate influence tree population dynamics. She is currently using simulation models to investigate the interaction between increased fire frequency and population dynamics in a long-lived conifer.

Image credit: Image is in public domain (via Wikimedia Commons).
