
A Primer on IMEC by Max Maier

Updated: Mar 16, 2022

This blog post is a primer for

Maier, M., van Dongen, N. N. N., & Borsboom, D. (2021). Comparing Theories with the Ising Model of Explanatory Coherence. https://doi.org/10.31234/osf.io/shaef

 

Scientific theories are our most important epistemic achievements. Our knowledge of individual pieces of information is little compared to theories such as general relativity and evolution by natural selection that shape our understanding of many different kinds of phenomena.

Thagard (1988, p. 33)


A good scientific theory is like a crystal ball: it makes predictions and explains phenomena beyond what has been observed or tested. For instance, evolution through natural selection explains why there are different species and why whales and land-dwelling mammals have comparable skeletal structures, and it even allows us to speculate about life on other planets (as illustrated in the recent Netflix series Alien Worlds). Theorizing matters even in everyday activities such as cooking. A novice cook will probably want to try out every new recipe before cooking it for guests, as they cannot accurately predict the flavor of a dish solely from its ingredients and preparation. With experience, however, comes the ability to envision the meal from certain principles (e.g., flavors should be contrasted and balanced: a dish that is too savory can be balanced by adding something acidic, like vinegar or lemon juice), which makes it possible to predict the taste of a new recipe more accurately and to create new recipes.


Unfortunately, most fields in psychology are characterized by a lack of theories or an overabundance of weak ones. These theories make imprecise predictions, rest on unclear assumptions, and are sometimes even self-contradictory. This makes psychological science hyper-empirical and its progress slow (Borsboom, 2013). Fried (2020, p. 272) states it concisely with the following example:

Strong theories enable us to test what would happen in situations that are not actually realized. For example, we know quite a bit about skyscrapers and earthquakes, and can test what would happen to a specific skyscraper under a specific earthquake scenario in theory (e.g. via a computational model), allowing the construction of better skyscrapers. Imagine we could do such a thing in clinical psychology: testing the effect of a treatment without actually conducting a clinical trial!


While many papers and essays have focused on the problems of weak theory in psychology and on ways to move forward (e.g., Borsboom et al., 2021; Gigerenzer, 1991; Meehl, 1978), little progress appears to have been made. Advice often takes the form of general recommendations or principles, such as “theories with fewer assumptions are better” or “theories that make more precise predictions are better”. In practice, however, researchers usually face trade-offs between different desiderata of theory appraisal, and there are few tools available to evaluate theories side by side and weigh their qualities against each other.


What would be useful are tools akin to the information criteria used in regression analysis. For example, the (W)AIC or BIC help determine whether adding a predictor to a regression model is expected to improve predictive performance or to worsen it through overfitting. Theorists could benefit from similar measures to determine whether the ad-hoc assumptions added to a theory to explain more phenomena are justified, or whether they risk overfitting. In the extreme case, one could modify a theory to account for all contingencies by adding a new assumption for each new experiment that is run. While such a theory would explain all experimental outcomes, we would hardly trust it to successfully account for new phenomena. A systematic approach is necessary for weighing the complexity and explanatory relations of a theory against the number of phenomena it explains.
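
To make the analogy concrete, here is a minimal R sketch (R because IMEC is distributed as an R package; the data are simulated purely for illustration) of how AIC penalizes a superfluous predictor:

```r
# Simulate data in which only x1 predicts y; x2 is an irrelevant predictor.
set.seed(1)
x1 <- rnorm(100)
x2 <- rnorm(100)
y  <- 2 * x1 + rnorm(100)

AIC(lm(y ~ x1))        # the simpler model
AIC(lm(y ~ x1 + x2))   # usually higher (worse): the extra predictor adds
                       # complexity without improving predictive performance
```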


One tool that captures different considerations of theory comparison is the theory of explanatory coherence (TEC; Thagard, 1989). TEC is a metatheory based on the following fundamental principles (Thagard, 2000, p. 43):

  1. Symmetry. Explanatory coherence is a symmetric relation, unlike, say, conditional probability. That is, proposition p coheres with proposition q to the same degree that q coheres with p.

  2. Explanation. (a) A hypothesis coheres with what it explains, which can either be evidence or another hypothesis. (b) Hypotheses that together explain some other proposition cohere with each other. (c) The more hypotheses it takes to explain something, the lower the degree of coherence.

  3. Data Priority. Propositions that describe the results of observations (usually claims about phenomena) have a degree of acceptability on their own.

  4. Contradiction. Contradictory propositions are incoherent with each other.

  5. Acceptability. The acceptability of a proposition in a system of propositions depends on its coherence with them.

From these principles, TEC jointly captures and trades off different considerations in theory evaluation, such as explanatory breadth, refutation, simplicity, and the downplaying of potentially irrelevant evidence.
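
As a rough illustration of how such principles can be turned into a network, the toy R sketch below encodes principles 2 to 4 as edge weights and thresholds. The specific numbers and the construction are illustrative assumptions, not the encoding used in the IMEC package:

```r
# Toy encoding of TEC principles as symmetric weights W and thresholds tau.
# Nodes 1 and 2 are hypotheses H1 and H2; node 3 is a phenomenon E1.
n <- 3
W <- matrix(0, n, n)

# Principle 2 (Explanation): H1 and H2 jointly explain E1, so all three
# cohere; dividing the weight by the number of explaining hypotheses
# penalizes explanations that need many hypotheses (principle 2c).
w <- 1 / 2
W[1, 3] <- W[3, 1] <- w
W[2, 3] <- W[3, 2] <- w
W[1, 2] <- W[2, 1] <- w

# Principle 4 (Contradiction): contradictory propositions would instead be
# linked by a negative weight, e.g., W[1, 2] <- W[2, 1] <- -1.

# Principle 3 (Data priority): observed phenomena get a positive threshold,
# giving them some acceptability independent of the hypotheses.
tau <- c(0, 0, 0.5)
```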


In our recent paper (Maier, van Dongen, & Borsboom, 2021), we implemented the theory of explanatory coherence through an Ising model. The Ising model is a network model originally developed in statistical physics to describe the polarization of elemental magnets in ferromagnetic materials (Ising, 1925). It consists of nodes (the elemental magnets), each of which can be in one of two states; connections (edges) between these nodes; and thresholds that represent external forces acting on individual nodes. In the Ising Model of Explanatory Coherence (IMEC), the nodes represent hypotheses of a theory or phenomena that the theory explains, connections between them represent explanatory or contradictory relations, and the thresholds on phenomenon nodes represent empirical evidence for those phenomena.
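
To give a flavor of how such a network evaluates competing assignments, here is a minimal sketch of the Ising energy function, applied to the toy weights W and thresholds tau from the previous sketch (again an illustration, not the interface of the IMEC package). Configurations with lower energy are more coherent:

```r
# Ising energy of a state vector s (+1 = accepted, -1 = rejected), given
# symmetric edge weights W (zero diagonal) and thresholds tau.
ising_energy <- function(s, W, tau) {
  -0.5 * sum(W * outer(s, s)) - sum(tau * s)  # lower energy = more coherent
}

ising_energy(c( 1,  1,  1), W, tau)  # accept everything: energy -2.0
ising_energy(c(-1, -1,  1), W, tau)  # reject both hypotheses: energy 0.0
```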


While a detailed demonstration of the strengths and weaknesses of IMEC is beyond the scope of this blog post, Figure 1 illustrates an application to psychology. The figure compares two competing theories from intelligence research: the mutualism theory and the common cause theory. The yellow circles correspond to the hypotheses of the mutualism theory and the green circles to the hypotheses of the common cause theory. The squares represent phenomena of intelligence that the theories are meant to explain (blue = supporting empirical evidence, red = contradicting empirical evidence). In addition, blue connections between nodes represent explanatory relations, while red connections represent contradictions. The phenomena and hypotheses are listed in Table 1.


IMEC allows us to calculate the relative explanatory coherence of each theory as a function of the number of propositions, the number of explained phenomena, and the specific explanatory relations. The model presented in Figure 1 indicates that the mutualism theory has higher explanatory coherence than the common cause theory (with an explanatory coherence of 0.788 on a scale from 0 to 1). A noteworthy feature of IMEC is that it allows us to identify critical experiments that could adjudicate between theories. For example, E8 represents the fact that no unitary biological cause of intelligence has been found, even though the common cause theory implies that one should exist. We can calculate how the relative explanatory coherence would change if we discovered such a unifying biological cause: this would mildly reduce the coherence of the mutualism theory to 0.782 and boost the coherence of the common cause theory to 0.836. Thus, the discovery of such a biological cause would change which theory of intelligence should be preferred.
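
The mechanics of such a counterfactual analysis can be illustrated with the toy network from the earlier sketches. One simple way to quantify a proposition's acceptability (not necessarily the exact measure used in our paper) is its probability of being accepted under the Boltzmann distribution over all network states; flipping the sign of a phenomenon's evidence threshold then shows how a hypothesis's standing shifts:

```r
# Toy "critical experiment": how does the probability that hypothesis H1 is
# accepted change when the evidence on the phenomenon node flips sign?
accept_prob <- function(W, tau, node) {
  states <- as.matrix(expand.grid(rep(list(c(-1, 1)), length(tau))))
  E <- apply(states, 1, function(s) -0.5 * sum(W * outer(s, s)) - sum(tau * s))
  p <- exp(-E) / sum(exp(-E))    # Boltzmann distribution over all states
  sum(p[states[, node] == 1])    # probability that the node is accepted
}

accept_prob(W, tau, node = 1)                # supporting evidence: ~0.64
accept_prob(W, tau * c(1, 1, -1), node = 1)  # evidence flipped:    ~0.36
```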


This brief demonstration shows how IMEC can be used to assess the quality of psychological theories. While extensions of IMEC, for example with sensitivity analyses, are areas of active investigation in our research group, we believe that the current version can already guide psychologists in theory evaluation, hopefully improving the quality of theories in the field. IMEC is available in the R package IMEC on CRAN, which we hope will further facilitate adoption among empirical researchers. For those interested in a detailed description of IMEC and a more extensive treatment of the intelligence example, a preprint of our paper is available at https://psyarxiv.com/shaef/.


Figure 1. Comparison between the mutualism (positive manifold) and common cause (latent variable) theories of intelligence.




Table 1. Intelligence phenomena and the hypotheses of the intelligence theories.

PHENOMENA

  E1. There exists a positive manifold between test scores.
  E2. Intelligence is hard to predict from early childhood performance.
  E3. Intelligence can be described by a hierarchical factor structure.
  E4. General intelligence is highly heritable.
  E5. Jensen effect: the larger the g-loading of a cognitive test, the larger its heritability.
  E6. Flynn effect: IQ has increased over time.
  E7. Differentiation effect: the positive manifold is not uniform in the population.
  E8. No biological variable has been identified as the unitary cause of the positive manifold.

MUTUALISM THEORY

  HM1. Each cognitive process supports the development of other processes.
  HM2. Growth can be described by a logistic model.
  HM3. There are small genetic intercorrelations of resources.
  HM4. Intelligence and environment cause each other reciprocally.

COMMON CAUSE THEORY

  HL1. Intelligence is caused by a common latent variable.
  HL2. Intelligence and environment cause each other reciprocally.

Note. Extracted from van der Maas et al. (2006).




References

  • Borsboom, D. (2013). Theoretical amnesia. http://osc.centerforopenscience.org/2013/11/20/theoretical-amnesia/

  • Borsboom, D. (2017). A network theory of mental disorders. World Psychiatry, 16(1), 5–13.

  • Borsboom, D., van der Maas, H. L. J., Dalege, J., Kievit, R. A., & Haig, B. D. (2021). Theory construction methodology: A practical framework for building theories in psychology. Perspectives on Psychological Science, 16(4), 756–766. https://doi.org/10.1177/1745691620969647

  • Fried, E. I. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry, 31(4), 271–288.

  • Gigerenzer, G. (1991). From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review, 98(2), 254–267. https://doi.org/10.1037/0033-295X.98.2.254

  • Maier, M., van Dongen, N. N. N., & Borsboom, D. (2021). Comparing Theories with the Ising Model of Explanatory Coherence. https://doi.org/10.31234/osf.io/shaef

  • Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806–834. https://doi.org/10.1037/0022-006X.46.4.806

  • Thagard, P. (1988). Computational philosophy of science. MIT Press.

  • Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12(3), 435–467. https://doi.org/10.1017/S0140525X00057046

  • Thagard, P. (2000). Coherence in thought and action. MIT Press.

  • van der Maas, H. L., Dolan, C. V., Grasman, R. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842–861. https://doi.org/10.1037/0033-295X.113.4.842

