The post Tutorial paper on new methods for estimating psychological networks appeared first on Psych Networks.

In a paper recently published in *Personality and Individual Differences* (Costantini et al., 2017), we provide a tutorial in R on new methods for estimating and analyzing personality and psychopathology networks. We focus on datasets that are often collected in psychology, but that are not often used to their full potential: datasets including multiple groups and datasets including irregularly spaced repeated measures.

In personality and psychopathology research, it is relatively common to collect data on *different groups of individuals* (e.g., patients vs. controls, males vs. females, individuals assigned to different experimental conditions, etc.). The patterns of similarities and differences among these groups are often particularly interesting. If one estimates a single network across groups (e.g., using the graphical lasso; Friedman, Hastie, & Tibshirani, 2008), one completely loses track of inter-group differences. Conversely, if one estimates a separate network independently for each group, one does not exploit inter-group similarities to improve the estimates. Furthermore, one cannot be sure whether the differences among the estimated networks reflect genuine differences or just small sampling fluctuations. We propose to jointly estimate networks in different groups using the *Fused Graphical Lasso* (FGL; Danaher, Wang, & Witten, 2014), an extension of the graphical lasso algorithm that adds a lasso penalty on the differences of the parameters across groups. The FGL jointly estimates different networks across groups of individuals by exploiting their similarities without masking their differences^{1}.
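The actual estimation is done in R (see below), but to make concrete what object is being estimated per group, here is a minimal numpy sketch of how a partial-correlation network follows from the (here unregularized) precision matrix; the graphical lasso and FGL shrink exactly this inverse. Group labels and sizes are invented:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse sample covariance.
    Graphical lasso / FGL regularize this inverse; the sign convention
    below is the standard one for Gaussian graphical models."""
    K = np.linalg.inv(np.cov(data, rowvar=False))  # precision matrix
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)
    np.fill_diagonal(pcor, 0.0)  # a network has no self-loops
    return pcor

# Two hypothetical groups; FGL would estimate them jointly with a
# penalty on cross-group parameter differences. Here, for illustration
# only, each group's matrix is inverted separately.
rng = np.random.default_rng(0)
net_a = partial_correlations(rng.normal(size=(500, 4)))
net_b = partial_correlations(rng.normal(size=(500, 4)))
```

Comparing `net_a` and `net_b` edge by edge after independent estimation is exactly where sampling fluctuations masquerade as group differences, which is the problem the fused penalty addresses.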

In the case of irregularly spaced repeated measures, temporal networks (which encode cross-lagged relationships among variables; Bringmann et al., 2013) cannot be easily estimated. This is the case for event-contingent Ecological Momentary Assessment (EMA) data, in which the presentation of a questionnaire is tied to an event rather than to a specific time. This data-collection strategy typically results in participants filling in the questionnaire at different moments and a different number of times. As discussed by Epskamp and colleagues (2017), repeated measures can be used to estimate not only temporal networks, but also contemporaneous and between-subject networks. Contemporaneous networks encode relationships among variables at the same timepoint, whereas between-subject networks encode stable differences among individuals. Both types of networks can be estimated even on irregularly spaced repeated-measures data. Furthermore, one can use FGL to compute separate between-subject and contemporaneous networks for each group.
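To make the between-subject vs. contemporaneous distinction concrete, here is a hedged pandas sketch of the decomposition step such analyses share: splitting each variable into a person's stable mean level (which feeds the between-subject network) and momentary deviations from that mean (which feed the contemporaneous network). The variable names and data are invented, and note that no equal spacing of measurements is required for this split:

```python
import numpy as np
import pandas as pd

# Invented long-format EMA data: one row per (person, interaction).
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(600, 3)),
                  columns=["stress", "worry", "fatigue"])
df["person"] = np.repeat(np.arange(30), 20)

# Between-subject part: each person's stable mean level; correlating
# these (across people) yields a between-subject network.
between = df.groupby("person").mean()

# Within-person part: momentary deviations from one's own mean;
# correlating these yields a contemporaneous network.
within = df.groupby("person").transform(lambda x: x - x.mean())
contemporaneous = within.corr()
```

The real estimators (e.g., in mlVAR or graphicalVAR) regularize and model these parts jointly; the sketch only shows why unequal numbers of observations per person are not a problem for the contemporaneous and between-subject parts.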

For our tutorial, we consider a dataset in which male and female participants filled in a questionnaire every time they experienced a significant social interaction (event-contingent EMA) and rated their behavior and emotions experienced during the interaction, as well as those of the other individual. We use package *qgraph* (Epskamp, Cramer, Waldorp, Schmittmann, & Borsboom, 2012) and a newly developed package that implements FGL, *EstimateGroupNetwork* (Costantini & Epskamp, 2017), to estimate between-subject and contemporaneous networks. We also discuss how such networks can be interpreted and what they can reveal about personality and psychopathology dynamics.

**References**

- Bringmann, L. F., Vissers, N., Wichers, M., Geschwind, N., Kuppens, P., Peeters, F., … Tuerlinckx, F. (2013). A Network approach to psychopathology: New insights into clinical longitudinal data. PLoS ONE, 8(4), e60188. http://doi.org/10.1371/journal.pone.0060188
- Costantini, G., & Epskamp, S. (2017). EstimateGroupNetwork: Perform the Joint Graphical Lasso and select tuning parameters. R package version 0.1.2.
- Costantini, G., Richetin, J., Preti, E., Casini, E., Epskamp, S., & Perugini, M. (2017). Stability and variability of personality networks. A tutorial on recent developments in network psychometrics. Personality and Individual Differences. http://doi.org/10.1016/j.paid.2017.06.011
- Danaher, P., Wang, P., & Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(2), 373–397. http://doi.org/10.1111/rssb.12033
- Epskamp, S., Cramer, A. O. J., Waldorp, L. J., Schmittmann, V. D., & Borsboom, D. (2012). qgraph: Network visualizations of relationships in psychometric data. Journal of Statistical Software, 48(4), 1–18. http://doi.org/10.18637/jss.v048.i04
- Epskamp, S., Waldorp, L. J., Mõttus, R., & Borsboom, D. (2017). Discovering psychological dynamics: The gaussian graphical model in cross-sectional and time-series data. Retrieved from http://arxiv.org/abs/1609.04156v3
- Friedman, J., Hastie, T., & Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 432–441. http://doi.org/10.1093/biostatistics/kxm045



The post The network approach to psychopathology: pitfalls, challenges, and future directions appeared first on Psych Networks.

Recent years have seen more attention to simulation studies, methodological rigor, and the stability and replicability of network models. Consistent with this trend, it is not surprising that two papers focusing on the challenges of the network approach are now in print at the same time. These papers were written independently of each other, and while Sinan was kind enough to refer to our preprint, we were unable to cite their paper because we were not aware of it.

The first paper, “Application of network methods for understanding mental disorders: pitfalls and promise”, was published by Guloksuz, Pries and van Os (from now on GPvO) in *Psychological Medicine*. The fairly concise paper focuses on four challenges: (1) a reductionist understanding of medicine and psychiatry, (2) a shortsighted view of signs and symptoms, (3) overlooking the limitations of available datasets, and (4) over-interpreting evidence (PDF).

The second paper, “Moving forward: challenges and directions for psychopathological network theory and methodology”, authored by me and Angelique Cramer^{1}, is in press in *Perspectives on Psychological Science*. The challenges we discuss are: (1) the validity of the network approach beyond some commonly investigated disorders, (2) the definition of psychopathological systems and their constituent elements, (3) how to gain a better understanding of the causal nature and real-life underpinnings of associations among symptoms, (4) the heterogeneity of samples studied with network analytic models, and (5) a lurking replicability crisis in this strongly data-driven and exploratory field. Reviewers requested fairly extensive additions, so the paper ended up also including an introduction to network theory and a brief overview of network psychometrics (PDF).

Instead of discussing these papers separately, point by point, this blog post aims to highlight some of the key messages. This means that several challenges remain unaddressed, but I wanted to give an overview rather than reiterate all individual points. The blog post follows this structure:

- Complex dynamical systems, common causes, & oversimplifications
- A multilayer perspective on psychopathology
- Hybrid models: the complex reality of psychopathology

- The heterogeneity and limited validity of DSM categories
- Heterogeneity of clinical populations
- It’s complicated!
- Limitations of the DSM, and a call for transdiagnostic work

- What to include in networks? Sign & symptom limitations
- Biology!
- Symptom semantics
- What variables to include in network models

- Overinterpretation of evidence: networks as a cautionary tale
- Stability and generalizability of networks
- Causality and cross-sectional data
- The good old within vs between debate

- Ways forward

Psychopathology network theory, which holds that symptoms are correlated because they interact with each other causally, has been pitted against the common cause model in many prior papers. Examples often raised are depression on the one hand (insomnia → fatigue → concentration problems), and measles on the other, where symptoms are passive consequences of a common cause.

We use an adapted version of this figure in our paper to explain this difference.

Both challenges papers take issue with this oversimplification, for the following reasons.

GPvO argue that the separation between conceptual network and common cause models is somewhat artificial, and re-introduces a sort of Cartesian Dualism (mind vs matter), which is not helpful in advancing clinical science. Instead, they propose a multilayer network to explain the intricate interactions among more biological and more psychological variables (I say ‘more’ here to avoid reintroducing this dualism in the explanation of GPvO’s work):

From this perspective, so GPvO, lung cancer and major depression might be very similar, the only difference being “that we have a deeper understanding of underlying biological abnormalities in the former […] while psychiatric classification is stuck at the level of signs and symptoms.”

In my mind, this is an interesting idea and ties into similar work on interactions between layers, from pure biology all the way to pure psychology, with plenty of intermediate layers. Causality among layers in this sort of model is usually very hard to disentangle because it can go both ways (cf. epigenetics).

Our paper highlights a different shortcoming: that the distinction between pure network and pure common cause models oversimplifies the complex reality of mental disorders. Is it really likely that there are no common causes for symptoms at all? And, on the other hand, do residual correlations in factor models not show us that the common cause model might better be complemented by a network model that can causally explain these associations?

We propose *hybrid models* in which both common causes and networks work together. For instance, a traumatic experience could explain the initial onset of PTSD symptoms, while a network takes over in the maintenance phase of the disorder. We discuss this for other disorders such as depression as well, and go through a few more complex examples, including e.g. moderators.

In both papers, the heterogeneity of mental disorders features prominently; it can be difficult to capture in network models. We discuss this in detail, advance network mixture models as a possible solution to heterogeneity in cross-sectional data, and discuss how recent advances in time-series models can be used to tackle heterogeneity in experience sampling data with several timepoints a day collected over several weeks. In such data, multilevel models allow, e.g., three patients to have different individual networks, and group-level networks and variability networks can be used to give complementary information on how similar these networks are:

We also discuss other possibilities, such as clustering people in time-series networks, or modeling approaches such as GIMME developed by Gates & Molenaar that may offer insights into heterogeneity.

Heterogeneity connects to another point: while statistical models in cross-sectional data (including network models) are capable of investigating hypotheses at the group level (e.g., women have more strongly connected depression networks than men), such results may not generalize to all individuals in the population, in the same way that the group-level information “men are taller than women” does not hold for all individual people (my girlfriend, for instance, is 187cm tall).

We bridge the gap between (a) network models, (b) common cause models and (c) hybrid models on the one hand and nomothetic vs idiographic analyses on the other by concluding:

*“Therefore, an equally interesting and possibly more complicated question is: Which of the three models described above fits the psychopathology of a given person best? MD, for instance, could stem from a common cause (e.g., brain pathology), a network model (e.g., vicious circles between negative thoughts and emotions; Beck et al., 1979), or a hybrid model (e.g., a network following severe adversity), depending on the specific individual and her or his specific circumstances […]. This view stresses an idiographic perspective on mental health research and acknowledges that only embracing the heterogeneity of diagnostic categories will enable us to make true progress toward personalized medicine (Kramer et al., 2014; Molenaar, 2004).”*

GPvO also stress the topic of heterogeneity, and add that current datasets are often limited by design. For instance, while sad mood and anhedonia often come up as the most central symptoms in depression network studies, the reason could simply be that they are necessary for a diagnosis, and that people without these symptoms are not included in clinical studies based on diagnostic interviews.

Both teams make the point that DSM categories have received considerable criticism in recent years. Current diagnoses may be somewhat reasonable clinical descriptions and a judicious starting point for network studies, but network theory will not be able to deliver on its revolutionary promises if the datasets analyzed are not transdiagnostic in nature. One of the most central tenets of network theory is that “problems attract problems”; this holds across diagnostic boundaries and offers a different explanation for comorbidity than the notion that, e.g., major depression and generalized anxiety disorder are the result of two distinct etiologies.

As GPvO put it:

*“However, it is difficult to understand the logic of inhibiting the potential of datasets by deliberately confining the network to boundaries of DSM […]. It is reassuring to see that this common practice has been changing gradually in more recent studies. Otherwise, one might ask that if network theory has no other option but to play the game with the rules of DSM, how will it turn out to be the promised game-changer?”*

Many studies in this field have re-analyzed existing datasets, which are largely based on rating scales or DSM diagnostic criteria. This is, as GPvO say above, a limitation by design. Many other variables might be interesting inclusions in network models, including environmental and biological variables.

GPvO write a general critique of the usefulness of signs and symptoms, arguing that they are “usually non-specific, commonly subjective, qualitative or difficult to quantify, and therefore, inadequate to unequivocally diagnose a patient”, and that clinical classification is therefore inherently limited. They make the case that insights into the underlying pathoetiology will lead to better classification. I would add here that many symptoms are featured multiple times in the DSM (fatigue and insomnia numerous times), and also shared with medical diagnoses, which makes them fairly bad candidates for any “classification” (with the idea that symptoms indicate an underlying disorder).

However, I am not sure this is a limitation of network theory, which in its purest form posits that a disorder such as major depression is nothing but an emergent property of symptom interactions. I believe this position to be overstated; in the hybrid model section of our paper we discuss in detail that many local common causes exist that make symptom endorsement more likely, and we speculate how this could work conceptually and statistically. Nonetheless, from the purist perspective there would be no “underlying pathoetiology” as suggested by GPvO: the notion of an underlying disease rests on the common cause idea that a disorder is causally responsible for symptom covariation, in which case networks are not the models you want to use to look at your data. Such a “purist” network perspective would not rule out biological changes in patients with mental disorders, of course; it would merely posit that these are not causal for symptomatology. GPvO offer a different approach, however, by suggesting a multilayer network with causal relationships between, e.g., a ‘biological’ network and a ‘psychological’ network. Causation seems to go both ways, since GPvO draw undirected edges, which strikes me as very plausible. In this case, a psychological network can have an *underlying* biological network, although underlying here has different causal implications. GPvO also stress that biological and psychological variables may best be modeled together, and specifically mention inflammation and depression symptoms ^{2}.

In our paper, we first discuss the term ‘symptom’ more generally, showing that it actually makes little sense in the context of networks, given common dictionary definitions. The Cambridge dictionary, for instance, states:

For this reason, other researchers like Donald Robinaugh and Richard McNally have suggested using the term “elements” instead. Rather than focusing on underlying biology like GPvO, we argue that many other psychological variables can be included in network studies, and that “the traditional conceptualization of the relation between mental disorders and symptoms has granted symptom variables a certain importance above and beyond other clinical variables”.

We move on to define dynamical systems, and discuss important additions for future studies (both variables in the system and variables in the external field). The transdiagnostic arguments provided above of course speak for including symptoms of numerous disorders, and beyond symptoms, important variables could be impairment of functioning, cognitive processes (e.g. self-esteem or a sense of self-efficacy), distress, approach or avoidance behaviors, positive or negative social interactions per day, rejection events, physical activities, and substance abuse.

Further, we discuss the consequences of failing to incorporate relevant variables in dynamical systems, and whether somewhat similarly phrased items should be combined. In the first case, spurious edges among the other items in the network will likely emerge, and we need both statistical approaches and sound theory to define what we should model. In the second case, we describe the notion of *topological overlap* in more detail, which might help to decide whether similar items (e.g. ‘feeling blue’ and ‘sad mood’ from the CES-D depression screener) should feature in a network separately or be combined into one single node:

In the left case, the two white items are highly correlated and occupy the same “position” in the network, which suggests they ought to be merged. In the right case, they show differential associations (akin to weight and height, which would be highly correlated but are different things and would show different edges with variables such as “do I want to lose weight” or “can I see far at concerts”).
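A rough way to operationalize this idea (only a sketch; the topological overlap measure discussed in the paper is more refined) is to correlate the two candidate nodes' edge profiles while leaving out their mutual edge. The toy network and node labels below are invented:

```python
import numpy as np

def profile_similarity(pcor, i, j):
    """Correlate nodes i and j's connections to all other nodes,
    excluding the i-j edge itself. Values near 1 suggest the items
    occupy the same position in the network and might be merged."""
    keep = np.ones(pcor.shape[0], dtype=bool)
    keep[[i, j]] = False
    return np.corrcoef(pcor[i, keep], pcor[j, keep])[0, 1]

# Invented 5-node network in which nodes 0 and 1 ('feeling blue',
# 'sad mood') have identical edges to everyone else: the left case.
pcor = np.zeros((5, 5))
for k, w in [(2, 0.3), (3, 0.2), (4, -0.1)]:
    pcor[0, k] = pcor[k, 0] = w
    pcor[1, k] = pcor[k, 1] = w
pcor[0, 1] = pcor[1, 0] = 0.5
```

Here `profile_similarity(pcor, 0, 1)` is 1, mirroring the left panel; in the right-panel situation the two profiles would diverge and the similarity would drop.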

GPvO articulate a somewhat related challenge by criticizing that network researchers often combine items such as weight gain and weight loss, or psychomotor retardation and agitation, into one aggregate score, following the logic of the DSM. Combining such opposites makes little sense, and the research sides with the position of GPvO here^{3}.

Networks provide us with beautiful visualizations of models and data, and as my colleagues in Groningen say: there is a certain ‘Rorschach’ element to looking at them. This is true, and more caution is required when interpreting networks (I have also devoted a recent blog post to this topic). There are several issues here.

An important topic we have worked on in the last years is network stability and generalizability, and Angelique and I also discuss it in depth as a core challenge in our paper. We talk about overfitting and exploratory models, and make the point that parameter (or model) stability is necessary, but of course not sufficient, for replication of models across datasets. This means that we should put focused effort into testing the stability of our network models (and, of course, of all other statistical models such as factor models), and in a second step investigate whether network models replicate across datasets or not.

How exactly is this related to visualizations of data? In our paper, we provide a toy example in which we invite readers to write a brief network paper with us.

*“So let us write a quick paper together to see why stability matters. We estimate a network of 17 PTSD symptoms in a sample of 180 women with posttraumatic stress disorder: A strong edge emerges between 3 and 4, representing a clinically plausible association between being startled easily and being overly alert. We also observe a negative edge between symptoms 10 and 12 and conclude that people who do not remember the trauma are less likely to have trouble sleeping (and vice versa). In a second step, we investigate the centrality (connectedness) of nodes (Opsahl, Agneessens, & Skvoretz, 2010). In our example network, node 17 has the highest degree of centrality (1.25) and node 7 the lowest (0.65). We now finalize the paper and suggest that future studies should pay specific attention to edges 3–4 and 10–12 and that targeted treatment of node 3 may achieve the greatest benefits for patients. Success!”*

We then “stumble” across a second dataset that is similar to the first. The network in this second dataset is quite different, the reason being that parameters are not estimated accurately in either dataset due to the low sample size. In essence, a lack of stability also means a lack of replicability, and we show that paying attention to parameter stability is a safeguard against drawing wrong conclusions that the data do not support.
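The intuition behind such stability checks can be sketched with a plain nonparametric bootstrap of a single edge. This is only the idea, not bootnet's actual routines (which also cover case-dropping, centrality stability, and regularized estimators); all data below are simulated, with the sample size n = 180 borrowed from the toy example:

```python
import numpy as np

def bootstrap_edge(data, i, j, n_boot=200, seed=0):
    """95% bootstrap interval for one partial-correlation edge."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    draws = []
    for _ in range(n_boot):
        boot = data[rng.integers(0, n, n)]           # resample rows
        K = np.linalg.inv(np.cov(boot, rowvar=False))
        draws.append(-K[i, j] / np.sqrt(K[i, i] * K[j, j]))
    return np.percentile(draws, [2.5, 97.5])

rng = np.random.default_rng(42)
x = rng.normal(size=(180, 4))
x[:, 1] = x[:, 1] + 0.5 * x[:, 0]   # one genuine association
lo, hi = bootstrap_edge(x, 0, 1)
```

If the interval for an edge is wide or straddles zero, conclusions like "future studies should pay specific attention to edge 3–4" are exactly the kind that a second dataset will fail to support.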

Both papers stress the importance of understanding the limitations of data. Cross-sectional data are cross-sectional, and that is that. While there are different opinions on whether one can infer a *causal skeleton* from such data, e.g. via directed acyclic graphs, there is no question that one cannot infer causality from such data. And while we are at it, you cannot infer causality from time-series data either (ask Granger). While I tend to disagree with the stricter critics of the network approach that the literature drowns in papers drawing causal inferences from cross-sectional data, it *is* true that research has largely focused on cross-sectional data, which is somewhat of a mismatch with network theory, which is inherently dynamic (i.e. temporal).

Our paper also features a chapter on how we need to move forward to truly advance networks as *causal* systems, with a focus on experimental manipulations and mechanistic explanations:

*If insomnia → fatigue is the true model underlying the observed correlation between insomnia and fatigue, intervening on insomnia should reduce subsequent fatigue. In contrast, this intervention will not be successful if a common cause underlies the two symptoms, in which case only intervening on the common cause will successfully cure both symptoms.*

Both papers also highlight, in different levels of detail, that between-person and within-person analyses are different and answer different questions. Some of the reviewers of our paper argued strongly against between-person network research in general, but we do not agree here, a point we make repeatedly in the paper. Both views are interesting and valid, but we should only draw conclusions that can be supported by the data.

In our paper, we discuss possibilities to develop network theory for other mental disorders that have not been investigated so far, and end with a call for action for both methodologists and researchers. Methodologists can advance the field by looking into (1) confirmatory network modeling, (2) network mixture models, (3) the statistical comparison of latent variable models to network models, (4) a priori power analysis for networks^{4}, and (5) more non-technical tutorial papers. Clinical researchers can look into (1) developing network theory for different disorders, (2) collecting data that can be readily analyzed using network approaches (we don’t quite think about this enough before collecting data), and (3) always testing and reporting stability and accuracy, which is necessary for drawing inferences about replicability.

GPvO end with a call for (1) more transdiagnostic research, (2) proper replication studies, (3) more cautionary interpretation of network studies in general, and (4) focused assessment of experience sampling methodology to collect time-series data.

Here comes a last final concluding sentence, I guess. But since I don’t expect anyone to read this far, I will just skip it ^{5}.

**References**

» Guloksuz, S., Pries, L.-K., & van Os, J. (in press). Application of network methods for understanding mental disorders: pitfalls and promise. *Psychological Medicine*, 1–10. doi:10.1017/S0033291717001350. (PDF)

» Fried, E. I., & Cramer, A. O. J. (in press). Moving forward: Challenges and directions for psychopathological network theory and methodology. *Perspectives on Psychological Science*, 1–22. doi:10.1177/174569161770589. (PDF)


The post Network analysis innovations at APS Boston 2017: summary and slides appeared first on Psych Networks.

For those of you who couldn’t be there, I wanted to provide a summary of the topics that were discussed, and upload all presentations I could gather. If there were other presentations I missed, please send me an email and I will update the collection below.

**Comorbidity of depression and OCD**

Payton Jones from Harvard University gave a talk about network analysis in a dataset of adolescent patients with depression and OCD (preprint; slides), and Claudia van Borkulo from the University of Amsterdam presented on predicting OCD remitters vs persisters using network analysis, modeling both depression and OCD symptoms (slides).

**Personalized complicated grief networks**

Don Robinaugh from Harvard University gave a presentation on time-series network analysis of patients with complicated grief, and presented two example networks. This is an ongoing study that looks into similarities and differences of complicated grief networks across people (slides).

**Idiographic analyses of patients with depression and anxiety**

Aaron Fisher from UC Berkeley presented some new results of his idiographic study that assesses networks of patients before treatment and uses insights gained from these networks to select specific treatment modules. Aaron blogged about this study here, and you can also find the dataset for re-analysis in his blog post. His presentation is available here.

**Hysteresis**

Claudia van Borkulo from the University of Amsterdam also gave a talk about a paper on hysteresis that was published by Angelique Cramer et al. in *PLOS ONE* recently. The paper tackles the issue that networks with stronger connections may be more vulnerable to phase transitions, and that such networks may not return to healthy states even after the external stressor that triggered the transition is removed (a phenomenon called hysteresis). The slides are available here.

**Neuroimaging: the dynamics of cognitive flexibility in the human brain**

John Medaglia from University of Pennsylvania gave a presentation on the “The Dynamics of Cognitive Flexibility in the Human Brain” (slides).

**A network perspective on social anxiety disorder**

Alexandre Heeren from Harvard University gave insights into a new study on symptom-to-symptom associations in social anxiety disorder, which result in a densely connected network structure (slides).

**Replicability of PTSD networks**

I presented our research on fitting cross-sectional network models on 4 different large samples of traumatized patients to investigate the replicability and generalizability of PTSD networks (blog, paper, slides).

**Estimating intraindividual networks using autoregressive models**

Noémi Schuurman from Utrecht University provided an introduction to intraindividual network models, and showed some of the features Mplus 8 brings with it that are hard to tackle in R currently (such as correlated residuals). She also talked about two pitfalls: measurement error and standardization, which also feature prominently in her work (e.g. here or here). You can find Noémi’s presentation here.

**Inferring Causal Networks from Experimental Data**

Jolanda Kossakowski from the University of Amsterdam gave a talk about the caveats of trying to infer causal networks from observational data, and presented her project in which she asks people the same set of attitude items multiple times in a row, each time “intervening” on one of the items (“imagine you would not like meat, how guilty would you feel if you ate meat?”). This highly interesting innovation may allow for easier causal inferences (slides).

**Predictability and controllability of network models**

Jonas Haslbeck from the University of Amsterdam presented his work on predictability, an absolute metric that quantifies how much influence one can have on a node in the network via its neighbors. Jonas blogged about predictability here, and he wrote two papers about it: one technical paper which explains the metric in more detail, and one application paper in which we re-analyzed 25 network analysis datasets from 18 published papers to see what predictability looks like in the field so far. His slides are available here.
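Predictability proper is computed by Jonas's mgm package, which handles mixed variable types and regularization; for continuous data the core idea reduces to nodewise R², which can be sketched (data and seed invented) as:

```python
import numpy as np

def nodewise_r2(data):
    """R^2 from regressing each variable on all others: how much of a
    node's variance its neighbors account for (a Gaussian sketch of
    the predictability idea, not mgm's estimator)."""
    n, p = data.shape
    out = np.empty(p)
    for j in range(p):
        X = np.column_stack([np.ones(n), np.delete(data, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
        resid = data[:, j] - X @ beta
        out[j] = 1.0 - resid.var() / data[:, j].var()
    return out

rng = np.random.default_rng(3)
data = rng.normal(size=(400, 4))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=400)  # well-predicted node
r2 = nodewise_r2(data)
```

Because R² is on an absolute 0–1 scale, it tells you how much leverage intervening on a node's neighbors could plausibly have, unlike relative centrality indices.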

**Assessing the risk for the development of critical transitions in psychopathology**

Jolanda Kossakowski also talked about critical phase transitions in psychopathology research, where patients move from healthy to, e.g., depressed network states, and presented new models that may enable us to detect such transitions (slides).

**A tutorial on regularized partial correlation networks**

I presented a brief introduction to the strengths, limitations, and future directions of regularized partial correlation networks in cross-sectional data, along with a tutorial on how to estimate these models in R; the tutorial paper is currently under revision at *Psychological Methods* (tutorial paper, slides).

Thanks to everyone who made my APS 2017 such a fantastic experience, and a big shoutout to all presenters who shared their slides with me so I could make them publicly available! If you would like to reuse the slides, please contact the authors and also make sure to acknowledge them for their contributions.


The post Network replicability: a cross-cultural PTSD study across four clinical datasets appeared first on Psych Networks.

This is what the four networks look like:

I will discuss below (1) why research on replicability and generalizability is important, (2) why we should start estimating network and factor models in clinical datasets instead of community samples, and (3) the paper in more detail.

We’ve worked hard in the last year to tackle the challenge of stability or accuracy of network models. As explained in a recent tutorial blog post, the R-package *bootnet* [1] can be used to investigate how stable state-of-the-art network models such as the Ising Model or the Gaussian Graphical Model are. The package was updated recently and now can also be used for relative importance networks, and we are working hard to be able to incorporate time-series models in the future.

Now that we can look into the stability of network models (and can show that network parameters can be estimated accurately in large samples), the next question is whether data-driven and exploratory network models generalize and replicate across different samples. Interestingly, no published paper that I am aware of has investigated this specifically. A few papers do exist that compared subsamples of the same dataset, such as groups of individuals with different substance abuse disorders [2].

So about a year ago, I set out to write a paper on network replicability, and found a dozen collaborators in Denmark, the Netherlands, and Italy who were willing to help with this endeavour.

In addition to replicability, there is a second main limitation of the network literature so far: similar to the SEM literature, the majority of prior studies were carried out in community or subclinical samples, rendering network structures in clinical samples largely unknown. This is most pronounced in the PTSD literature, where more than 10 network papers have been published, but not a single one in a purely clinical dataset. That is why we investigated the replicability of PTSD networks in four clinical samples.

I uploaded the preprint of the paper yesterday. The repository not only includes the paper itself, but also the covariance matrix of the four datasets, along with all network parameter estimates, symptom means and standard deviations, and many other R objects to make the paper fully reproducible. While we cannot share the original datasets, the covariance matrices are sufficient to estimate network and factor models, and we encourage re-analysis of the data.

What did we do, and what did we find? We estimated state-of-the-art Gaussian Graphical Models in four clinical datasets of traumatized patients seeking treatment. For the first time, to our knowledge, we estimated four networks *jointly*, using the Fused Graphical Lasso (FGL) [3] implemented by Giulio Costantini who collaborated on the paper. The FGL improves network estimates by exploiting similarities among different groups in case such similarities emerge; otherwise, networks are estimated independently. We then compared the resulting networks in terms of network structure and centrality indices. Here is a short summary:
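For readers who want the fusion idea in symbols: the FGL of Danaher and colleagues maximizes a penalized joint log-likelihood over the group-specific precision matrices $\Theta^{(k)}$, with $n_k$ the sample size and $S^{(k)}$ the sample covariance matrix of group $k$:

```latex
\max_{\{\Theta^{(k)}\}}\; \sum_{k=1}^{K} n_k \Big[ \log\det\Theta^{(k)} - \operatorname{tr}\!\big(S^{(k)}\Theta^{(k)}\big) \Big]
\;-\; \lambda_1 \sum_{k}\sum_{i\neq j} \big|\theta^{(k)}_{ij}\big|
\;-\; \lambda_2 \sum_{k<k'}\sum_{i,j} \big|\theta^{(k)}_{ij}-\theta^{(k')}_{ij}\big|
```

The $\lambda_1$ term encourages sparse networks within each group; the $\lambda_2$ fusion term shrinks corresponding edges across groups toward each other, so estimated differences survive only when the data clearly support them.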

The network approach to psychopathology understands mental illnesses such as Posttraumatic Stress Disorder (PTSD) as networks of causally interacting symptoms. The prior literature is limited in three aspects: studies estimated networks in one sample each, leaving open the question whether networks replicate across samples; studies estimated networks in primarily small samples that may lack power for reliable estimation; and studies examined community or subclinical samples, rendering the network structure in clinical samples unknown. In this cross-cultural multisite study, we estimated state-of-the-art regularized partial correlation networks of 16 PTSD symptoms across four datasets of traumatized patients (total N=2,782). Considerable similarities emerged, with high correlations between network structures (0.62 to 0.74) and centrality estimates (0.63 to 0.75). Only 1.7% to 6.7% of the 120 edges differed across networks. Despite sample differences, networks showed substantial similarities, suggesting that PTSD symptoms may be associated in similar ways. We discuss implications for generalizability and replicability.

First, it was interesting to see that symptom profiles were fairly similar across datasets, despite the very different composition and background of the four samples (ranging from a sample of predominantly male military veterans to a dataset of severely traumatized refugees with up to 30% psychotic symptoms). The plot below also contains standard deviations because dataset four showed some ceiling effects: symptomatology was very severe and standard deviations were correspondingly lower, resulting in a negative correlation between means and standard deviations that can be seen nicely in the plot (details in the paper).

Second, network structures were not exactly the same: the omnibus network comparison test (with the null-hypothesis that all 120 edges per comparison are exactly identical) was significant for all pairs of networks. Overall, however, considerable similarities emerged, and correlations between network structures and centrality indices were high (see paper for more details).
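The structure correlations reported here are, in essence, correlations between the vectorized edge weights of two networks. A minimal Python sketch (toy matrices, not our data):

```python
import numpy as np

def structure_correlation(W1, W2):
    """Correlate the upper-triangular edge weights of two
    symmetric networks -- a simple similarity metric."""
    iu = np.triu_indices_from(W1, k=1)
    return np.corrcoef(W1[iu], W2[iu])[0, 1]

# toy example: a 16-node network and a noisy copy of it
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 16))
W1 = (W1 + W1.T) / 2
noise = 0.3 * rng.normal(size=(16, 16))
W2 = W1 + (noise + noise.T) / 2
r = structure_correlation(W1, W2)
print(round(r, 2))
```

With 16 nodes there are 120 unique edges per network, which is exactly the number of comparisons the omnibus test above runs over.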

Third, we estimated (a) a cross-sample network and (b) a cross-sample variability network. These depict similarities and differences across the four datasets.

Overall, we were surprised by the similarities of both symptom profiles and network structures across the four datasets, given the considerable differences in sample compositions. We conclude the paper with a call for more replicability work in psychopathology factor and network models:

We therefore conclude that investing time in more thoroughly conducted cross-sample studies for both network and factor models is warranted in order to facilitate insights about replicability and generalizability. We hope the present paper will encourage more researchers to do so.

» Fried, E. I., Eidhof, M. B., Palic, S., Costantini, G., Huisman-van Dijk, H. M., Bockting, C. L. H., Engelhard, I., Armour, C., Nielsen, A. B. S., Karstoft, K. (submitted). Replicability and generalizability of PTSD networks: A cross-cultural multisite study of PTSD symptoms in four trauma patient samples. DOI: 10.17605/OSF.IO/2T7QP. (Preprint and Supplementary Materials)

Introduction. The network approach to psychopathology understands disorders like Posttraumatic Stress Disorder (PTSD) as networks of mutually interacting symptoms. The prior literature is limited in three aspects. First, studies have estimated networks in one sample only, leaving open the crucial question of replicability and generalizability across populations. Second, many prior studies estimated networks in small samples that may not be sufficiently powered for reliable estimation. Third, prior PTSD network papers examined community or subclinical samples, rendering the PTSD network structure in clinical samples unknown. In this cross-cultural multisite study, we estimate and compare networks of PTSD symptoms in four heterogeneous populations of trauma patients with different trauma-types, including civilian-, refugee-, combat-, post-war offspring-, and professional duty-related trauma.

Methods. We jointly estimated state-of-the-art regularized partial correlation networks across four datasets (total N=2,782), and compared the resulting networks on various metrics such as network structure, centrality, and predictability.

Results. Networks were not exactly identical, but considerable similarities among the four networks emerged, with moderate to high correlations between network structures (0.62 to 0.74) and centrality estimates (0.63 to 0.75); only a few edges differed significantly across networks.

Conclusion. Despite differences in culture, trauma-type and severity of the four samples, the networks showed substantial similarities, suggesting that PTSD symptoms may be associated in similar ways. We discuss implications for generalizability and replicability. A step-by-step tutorial is available in the supplementary materials, including all analytic syntax and all relevant data to make the paper fully reproducible.

[2] Rhemtulla, M., Fried, E. I., Aggen, S. H., Tuerlinckx, F., Kendler, K. S., & Borsboom, D. (2016). Network analysis of substance abuse and dependence symptoms. Drug and Alcohol Dependence, 161, 230–237. http://doi.org/10.1016/j.drugalcdep.2016.02.005

[3] Guo, J., Levina, E., Michailidis, G., & Zhu, J. (2011). Joint estimation of multiple graphical models. Biometrika, 98(1), 1–15. http://doi.org/10.1093/biomet/asq060

* There is also a recent preprint by Forbes et al., to be published in the *Journal of Abnormal Psychology*, that aimed to tackle replicability; it makes more sense to discuss that paper once the commentaries on it are published, which — to my knowledge — will point to substantial problems in the analyses.


The post Public time-series data of 40 outpatients (21 items, 130 measurements) appeared first on Psych Networks.

As readers of this blog are likely aware, network analysis represents a new and exciting paradigm that holds the potential to better delineate the structure and dynamic organization of psychopathology. However, as I’ve written about elsewhere, the symptomatology of common psychiatric disorders is composed of time-varying phenomena, occurring within individuals. This presents sizable challenges to research generally, and network analysis specifically. Quite simply, time-varying phenomena can only be effectively observed through time-varying data collection, and within-individual (i.e. idiographic) processes can only be uncovered via idiographic research methodologies. Peter Molenaar has written on these issues extensively and I encourage everyone to read his manifesto on idiographic science. To summarize his position, inter-individual variation and intra-individual variation are inherently unrelated, an assertion that Molenaar supports through mathematical proof. As a consequence, he argues, the burden of proof should fall on researchers to demonstrate that the nomothetic and idiographic are at least consonant – if not identical – if we want to generalize the former to the latter. Phenomena for which inter- and intra-individual processes (e.g. variance-covariance) are equivalent are known as ergodic. If a given statistical relationship exhibits the same relative strength and direction across both idiographic and nomothetic paradigms, then conclusions from one can be extrapolated to the other. However, Molenaar has argued that most psychological processes are unlikely to be ergodic. In fact, James Boswell and I found evidence that the well-known (and widely-replicated) positive correlation between depression and anxiety is not reliably reproduced within individuals, and may even exhibit a negative relationship over time (see Fisher & Boswell, 2016).

Thus, at the risk of denigrating or minimizing the groundbreaking, paradigm-shifting, and altogether important work that has been published in the past several years on cross-sectional and/or nomothetic data, I would argue that researchers should endeavor to collect intensive repeated measures data in order to conduct idiographic time series analyses of pathologic phenomena. While our cross-sectional and nomothetic work may help to establish new methods, develop rationales, stimulate hypothesis generation, and foment interest, we must be careful about the degree to which we use nomothetic research to understand the phenomenology and behavior of individuals. Ambulatory and ecological momentary assessments can be readily employed to collect hundreds of observations of thoughts, feelings, and actions, which can then be leveraged to produce multivariate time series for dynamic, intraindividual analysis.

Over the past three years, my lab has collected such data. In the course of conducting an open trial for a personalized, modular cognitive-behavioral therapy (see Fisher & Boswell, 2016 for more details), we have interviewed and enrolled 40 individuals with primary, DSM-based generalized anxiety disorder (GAD) and/or major depressive disorder (MDD). Prior to therapy, participants complete an intensive repeated measures paradigm in which they report their momentary experience of 21 items related to mood and anxiety psychopathology (such as avoidance behavior, positive and negative affect, anhedonia, depressed mood, worry, etc.). Surveys are completed 4x/day for approximately 30 days. To date, the average number of completed surveys has been 130.43 (SD = 19.27), with a range from 87 to 212.

Recently, we have developed a method for leveraging both structural equation modeling (SEM) and network analytic methodologies in order to analyze contemporaneous and time-lagged relationships in intraindividual mood and anxiety syndromes. This work is currently under review; however, the revised manuscript and supporting documents can be found on the Open Science Framework at https://osf.io/zefbc/. While responding to reviewer feedback, it became apparent that the best way to present our methods and results – to maximize transparency, clarity, and technical explication – was to share the raw data alongside the code for data preparation and analysis. Thus, at the OSF location noted above, readers can find the complete multivariate time series for the 40 participants reported in the manuscript under review. In addition, readers will find the revised manuscript and step-by-step instructions for carrying out the analyses described in the manuscript. Participant data such as age, sex, ethnicity, and diagnosis can be found in Table 1.

To our knowledge this is the largest and most detailed data set of its kind and I am sincerely excited to share it with interested parties. As an advocate for idiographic science, I believe that seeing is believing. Therefore, it is my hope that working with these data will help to illuminate the necessity and potential impact of idiographic work. Our lab welcomes any and all potential collaborators. Feel free to contact me at *afisher@berkeley.edu* with any queries or comments. For those interested in working with these data, please note that there are currently two manuscripts in preparation: One on the idiographic factor structure (p-technique) of each time series and another examining the predictability of nodes in both contemporaneous and time-lagged networks. As other projects develop, we will endeavor to update this post.

Enjoy!


The post Public dataset with 1478 timepoints over 239 consecutive days appeared first on Psych Networks.

The paper was published in the new Journal of Open Psychology Data, which you may want to keep an eye on. They publish datasets, which incentivizes data sharing because you can get cited for it. #openscience

The paper encompasses a dataset and a detailed description thereof. The data were used in a prior paper by Marieke Wichers and colleagues, who identified that the dynamic network of one person with remitted depression showed signs of *critical slowing down* before he transitioned from a healthy to a depressed state. The authors argue that this could be an early warning signal which may enable us to predict such phase transitions in the future.

So what do the data look like? It is an n=1 case-study time-series dataset with 50 variables and 1478 timepoints assessed over 239 consecutive days. Many items were collected 10 times per day, others daily (sleep quality and mood) or weekly (depression checklist). Here is a visualization of the data (AD = antidepressant) from Wichers et al. 2016:

The reuse potential of this dataset is considerable. Not only does it exceed any other study I am aware of in terms of measurement points, but it also features a critical transition where the patient relapses into depression. As Kossakowski and colleagues state: In general, the data “are suitable for various time-series analyses and studies in complex dynamical systems […].” Specifically, the data can be used for three main purposes. “First, the data are extremely suitable for researchers to validate new methods for predicting the onset of a critical transition. Second, there have been recent developments into estimating time-varying networks. This data can be used as an empirical example to show how time-varying networks can be estimated and how the network develops over time. Lastly, since items were measured at different time scales, this dataset can aid research that aims to combine (time-series) data from different time scales.”

Abstract

We present a dataset of a single (N = 1) participant diagnosed with major depressive disorder, who completed 1478 measurements over the course of 239 consecutive days in 2012 and 2013. The experiment included a double-blind phase in which the dosage of anti-depressant medication was gradually reduced. The entire study looked at momentary affective states in daily life before, during, and after the double-blind phase. The items, which were asked ten times a day, cover topics like mood, physical condition and social contacts. Also, depressive symptoms were measured on a weekly basis using the Symptom Checklist Revised (SCL-90-R). The data are suitable for various time-series analyses and studies in complex dynamical systems.

Kossakowski, J.J. et al., (2017). Data from ‘Critical Slowing Down as a Personalized Early Warning Signal for Depression’. Journal of Open Psychology Data. 5(1). DOI: http://doi.org/10.5334/jopd.29


The post Tutorial: how to review psychopathology network papers appeared first on Psych Networks.

- The nature of the investigated sample
- Item variability
- Selection effects and heterogeneity
- Appropriate correlations among items
- Publish your syntax and some important output
- Item content
- Stability
- Considerations for time-series models
- Further concerns and new developments

My main focus here is on network papers based on between-subjects cross-sectional data, although some of the points below also apply to within-subjects time-series network models. Note that these are all personal views, and I am sure other colleagues focus on other aspects when they review papers. Feedback or comments are very welcome.

In the structural equation modeling (SEM) literature, researchers often investigate the factor structure of instruments that screen for psychiatric disorders. Frequently, data come from general population samples which predominantly consist of healthy individuals. This means that a conclusion such as “we identified 5 PTSD factors” seems difficult to draw, seeing that only a few people in the sample actually met the criteria for a PTSD diagnosis.

The same holds for network models: I have reviewed many papers recently that draw inferences about the network structure of depression, or about comorbidity, but they investigate largely healthy samples. I don’t think that’s a good idea for somewhat obvious substantive reasons: don’t draw conclusions about a disorder if you don’t study patients with that disorder. But there are also two statistical reasons why the above may be a challenge, which I will get to below.

One argument against my position is that when we study intelligence or neuroticism in SEM, we investigate the factor structure of these constructs in broad samples, and don’t focus on populations of very intelligent or very neurotic people. That is true, and in that sense what you should do here depends a bit on how you understand a given mental disorder. If you think depression is a continuum between healthy and sick, with a somewhat arbitrary threshold, maybe you’re ok studying a general population sample to draw conclusions about depression. If you think depressed people are those with a DSM-5 diagnosis of depression, it may make less sense to study depression when only 5% of your sample meet these criteria. My main point is that as a reviewer, I’d like to see that you think about this problem.

Community samples may have very low levels of psychopathology, which can make the study of psychopathology in such samples difficult. If the item means are too low, the variability of the items becomes very small, and that can lead to estimation problems (an item without variability will not be connected in your estimated network because it cannot covary with other items). Note that technically this is not really an estimation problem – you can estimate your network just fine – but you may want to be aware of this when interpreting your network. I wrote a brief blog post about this topic recently, inspired by a paper by Terluin et al., where you can find more information.

The same problem can occur in samples with very severe levels of psychopathology or in case you have selection effects. For instance, we recently submitted a paper where the sample consisted of patients with very severe recurrent Major Depression. The mean of the “Sad Mood” item was 0.996 (item range 0-1), so we decided to drop it from the network analysis. Another example is the PTSD A criterion that states: “trauma survivors must have been exposed to actual or threatened death, serious injury, or sexual violence”. It makes little sense to include this item in a network analysis of PTSD patients because every person endorses this item, and it would show (partial) correlations of zero with other items.

Independent of what populations you study, differential item variability can pose a challenge to the interpretation of networks (items with low variability are unlikely to end up as central items); that is, if an item in your network has low variability and low centrality, it is not clear whether that item would also have low centrality in a population in which its variability is comparable to that of the other items. So I recommend you always check and report both means and variances of all your items. This can also be a problem when you, for instance, compare healthy and sick groups with each other. Multiple papers have now found that the connectivity or density of networks in depressed populations is higher than in healthy samples; connectivity is the sum of all absolute edge weights in a network. But this may be driven by differential variability: in healthy samples, the standard deviations of items will be smaller, which means items cannot be as connected as they are in the depressed sample.
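Since connectivity comes up repeatedly in this literature, here is the computation spelled out in a short Python sketch (the toy weight matrix is made up):

```python
import numpy as np

def connectivity(W):
    """Global strength: sum of absolute edge weights over the
    unique (upper-triangular) edges of a symmetric network."""
    iu = np.triu_indices_from(W, k=1)
    return np.abs(W[iu]).sum()

# toy 3-node network: one positive and one negative edge
W = np.array([[0.0, 0.3, 0.0],
              [0.3, 0.0, -0.2],
              [0.0, -0.2, 0.0]])
print(connectivity(W))  # 0.5
```

Note that absolute values are taken, so a negative edge of -0.2 contributes as much to connectivity as a positive edge of 0.2.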

A second problem with studying mental disorders in healthy people is that there are good reasons to assume that the factor or network structure of psychopathology symptoms differs between healthy and sick people. We have found very convincing and consistent evidence for this phenomenon in Major Depression, across four different rating scales and two large datasets. And there are also good substantive reasons why this could be the case (e.g. 1, 2, 3).

In healthy samples, you may thus often end up drawing conclusions – for instance about the dimensionality or network structure of depression or PTSD symptoms – that would not replicate in a sample of depressed or traumatized patients.

Related to this is the problem of selection or conditioning on sum-scores. This is a tricky problem, and I’ve been trying to wrap my head around this for over a year now. Essentially, if you select a subpopulation (e.g. people with depression) based on a sum-score of items that you then put into a network, you get a biased network and factor structure. This is shown analytically in a great paper by Bengt Muthén (1989), to which Dylan Molenaar pointed me a few months ago. This also holds for other psychological constructs of course, such as personality or intelligence.

It’s not always easy to decide what type of correlation coefficient is appropriate for what type of data. In psychopathology research, data are often ordered-categorical and skewed. There are different ways to deal with this type of data, and Sacha Epskamp recently asked the following tricky question to students in the Network analysis course at University of Amsterdam:

*“Often, when networks are formed on symptom data, the data is ordinal and highly skewed. For example, an item ‘do you frequently have suicidal thoughts’ might be rated on a three point scale: 0 (not at all), 1 (sometimes) and 2 (often). Especially in general population samples, we often see that the majority of people respond with 0 and only few people respond with a 2. This presents a problem for network estimation, as such data is obviously not normally distributed. Which of the following methods would you prefer to analyze such highly skewed ordinal data?”*

We see the following 3 possibilities:

- You can dichotomize your data, but you will lose information.
- You can use polychoric or Spearman correlations that usually deal well with skewed ordinal variables.
- You can transform your data, for instance using the nonparanormal transformation. But that only works in certain cases.

Personally, I tend to dichotomize in case I only have 3 categories and items are skewed (e.g. this paper). With 4 or more categories, the polychoric correlation seems to work best, and there is also some unpublished simulation work showing this. What I want to see when I review papers is that researchers are aware of this problem, thought about it and tried to find out whether what they are doing with their data is ok. For instance, I usually do both polychoric and Spearman, and if there are dramatic differences, I have a problem I need to solve. Note that polychoric correlations have problems if there are too few observations in cross-tables, which can sometimes lead to spurious negative edges; in these cases, Spearman does better. We discuss this here in more detail.
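A small Python simulation (with scipy; the lognormal transform is just an illustrative choice of mine) shows why rank-based coefficients are attractive here: Spearman's correlation is invariant under monotone transformations, whereas Pearson's is attenuated by skew:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# latent bivariate normal with true correlation 0.7
rng = np.random.default_rng(0)
cov = [[1.0, 0.7], [0.7, 1.0]]
z = rng.multivariate_normal([0, 0], cov, size=2000)

# a monotone transform makes the observed data heavily skewed
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])

r_pearson = pearsonr(x, y)[0]    # attenuated by the skew
r_spearman = spearmanr(x, y)[0]  # unaffected by the monotone transform
print(round(r_pearson, 2), round(r_spearman, 2))
```

This does not replace the polychoric machinery for items with only a handful of categories, but it illustrates the general point about skew.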

What you can always deposit online, or publish as supplementary materials, is your R code. This will make your results reproducible. What you also should publish are the means and standard deviations of your items. In the best case, you can simply publish your data, but that is often not possible. If you have ordinal or metric data, you can make your networks reproducible by publishing the covariance matrix of your items (because we use this matrix as input when we estimate the Gaussian Graphical Model). In case of the Ising Model, we need the raw data to estimate the network model, so without data your network will not be reproducible. However, you can still publish your model output (i.e. the edges and threshold parameters) so that your graph itself becomes reproducible, and you can also publish the tetrachoric correlations among items for some more insights.
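To see why the covariance matrix is sufficient for the Gaussian Graphical Model: the (unregularized) edge weights are partial correlations, obtained by inverting the covariance matrix and standardizing the resulting precision matrix. A minimal Python sketch with a made-up covariance matrix:

```python
import numpy as np

def partial_correlations(cov):
    """Partial correlation matrix from a covariance matrix:
    invert to get the precision matrix K, then standardize
    via r_ij = -k_ij / sqrt(k_ii * k_jj)."""
    K = np.linalg.inv(cov)
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# toy covariance matrix of three items
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
print(np.round(partial_correlations(cov), 2))
```

In practice you would add EBIC-tuned glasso regularization on top of this (as `EBICglasso` in the R package qgraph does), but the input is still just the covariance matrix plus the sample size.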

I have only recently come to pay attention to this more, but it seems important to consider the content of the instrument you want to investigate as a network of items. That sounds both obvious and vague, but in some of my prior work, I simply threw all items of a rating scale into a network analysis. For instance, consider this network from a paper we published on bereavement:

Thinking about this now, it seems that sad mood and feeling depressed are fairly similar to each other: do we really want to include both in one network? Or would it be better to represent these items as one node, for instance by averaging them or by estimating a latent variable? Angélique Cramer and I wrote about this in more detail in a paper entitled “Moving forward: challenges and directions for psychopathological network theory and methodology” that is currently under revision:

“*We see two remaining challenges pertaining to the topic of constituent elements: 1) what if important variables are missing from a system, and 2) what to do with nodes that are highly correlated and may measure the same construct (such as ‘sad mood’ and ‘feeling blue’)?*“

You can find the relevant section on pp. 17-19. I don’t have a solution, but it seems an important topic to pay attention to, especially since we work with partial correlation networks, and I wonder what remains of the association between sad mood and e.g. insomnia after we partial out depressed mood and feeling blue.

We have written about this a lot in the past, so I will only briefly reiterate: please check the stability, accuracy, and robustness of your network models (blog post; paper). It really helps the editor and reviewers of the paper to gauge its relevance and implications, but also helps readers to get a better grasp of the results. Conclusions should be proportional to the evidence that is presented, and robust models certainly help with stronger evidence. I also gave a short presentation about robustness at APS 2016, and you can find a whole collection of updated slides on network robustness in the online materials of the network analysis workshop we held just two weeks ago.

I have only reviewed a few time-series papers, and other people are much better suited to give feedback here. A good start for state-of-the-art models, model assumptions, and pitfalls are recent papers by Kirsten Bulteel, Jonas Haslbeck, Laura Bringmann, and Noémi Schuurman.

You can find a number of additional concerns and topics I wonder about in our draft on challenges to the network approach, but these currently do not play a major role when I review papers. And maybe you’d like to add important concerns in the comments below. In general, please be careful when drawing causal inferences from cross-sectional data, and keep in mind that between-subjects and within-subjects effects may differ from each other.

If you want to try out something novel that will hopefully become state-of-the-art in the next half year, check out the predictability metric that Jonas Haslbeck developed. This results in something akin to an R^2 for each node, explained by all other nodes (indicated by the grey area around the nodes):
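Haslbeck's predictability is computed from mixed graphical models in the R package mgm; as a crude Gaussian analogue (toy data and naming are mine), one can regress each node on all remaining nodes and record the R^2:

```python
import numpy as np

def predictability(data):
    """Crude analogue of node predictability: R^2 of each node
    regressed (OLS) on all remaining nodes."""
    n, p = data.shape
    X = data - data.mean(axis=0)  # center columns
    r2 = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2[j] = 1 - resid.var() / y.var()
    return r2

# toy data: nodes 0 and 1 are strongly related, node 2 is noise
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 1))
data = np.hstack([a, a + 0.5 * rng.normal(size=(500, 1)),
                  rng.normal(size=(500, 1))])
print(np.round(predictability(data), 2))
```

The noise node ends up with an R^2 near zero, which is exactly the information the grey area around each node conveys: how much of a node's variance its neighbours account for.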

And it never hurts to have a research question, of course: something you’re interested in, a hypothesis. I find network papers stronger and more interesting if they do more than just apply

`estimateNetwork(data, default="EBICglasso")`

to a new dataset or disorder. The same applies to SEM papers as well of course.

Some of these concerns come from discussions with colleagues such as Sacha Epskamp, Angélique Cramer, Denny Borsboom, Jonas Haslbeck, Claudia van Borkulo, Kamran Afzali, and Aidan Wright. So kudos to them and all other colleagues who commented on this blog post, and who have helped me grasp these issues in the last 2 years.


The post The meaning of model equivalence: Network models, latent variables, and the theoretical space in between appeared first on Psych Networks.

Recently, an important set of equivalent representations of the Ising model was published by Joost Kruis and Gunter Maris in Scientific Reports. The paper constructs elegant representations of the Ising model probability distribution in terms of a network model (which consists of direct relations between observables), a latent variable model (which consists of relations between a latent variable and observables, in which the latent variable acts as a common cause), and a common effect model (which also consists of relations between a latent variable and observables, but here the latent variable acts as a common effect). The latter equivalence is a novel contribution to the literature and a quite surprising finding, because it means that a formative model can be statistically equivalent to a reflective model, which one may not immediately expect (do note that this equivalence need not maintain dimensionality, so a model with a single common effect may translate in a higher-dimensional latent variable model).

However, the equivalence between ordinary (reflective) latent variable models and network models has been with us for a long time, and I therefore was rather surprised at some people’s reaction to the paper and the blog post that accompanies it. Namely, it appears that some think (a) that the fact that network structures can mimic reflective latent variables and vice versa is a recent discovery, and (b) that this somehow spells trouble for the network approach itself (because, well, what’s the difference?). The first of these claims is sufficiently wrong to go through the trouble of refuting it, if only to set straight the historical record; the second is sufficiently interesting to investigate it a little more deeply. Hence the following notes.

The equivalence between statistical network models (more specifically, random fields like the Ising model) and latent variable models (e.g., the IRT model) is not actually new. Peter Molenaar in fact suspected the equivalence as early as 2003, when he was still in Amsterdam, stating that “it always struck me that there appears to be a close connection between the basic expressions underlying item-response theory and the solutions of elementary lattice fields in statistical physics. For instance, there is almost a one-to-one formal correspondence of the solution of the Ising model (a lattice with nearest neighbor interaction between binary-valued sites; e.g., Kindermann et al. 1980, Chapter 1) and the Rasch model.” (see p. 82 of this book). Peter never provided a formal proof of his assertion, as far as I know, but clearly the idea that network models and other dynamical models bear a close relation to latent variable models was already in the air back then. I remember various lively discussions on the topic.
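For reference, the two models Molenaar is comparing: the Ising model assigns binary patterns $\mathbf{x}$ the probability below (with thresholds $\tau_i$, pairwise interactions $\omega_{ij}$, and normalizing constant $Z$), while the Rasch model gives item responses as logistic functions of a latent trait $\theta$ and item difficulties $b_i$:

```latex
P(\mathbf{x}) = \frac{1}{Z}\exp\!\Big(\sum_i \tau_i x_i + \sum_{i<j} \omega_{ij}\, x_i x_j\Big),
\qquad
P(x_i = 1 \mid \theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}.
```

Marginalizing the latent variable model over a distribution for $\theta$ induces dependencies among the $x_i$ that take a pairwise form like the one on the left, which is, loosely speaking, the route the formal equivalence proofs take.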

The connection between latent variables and networks got kick-started a few years later, when Han van der Maas walked into my office. At the time, he was thinking a lot about the relation between general intelligence and IQ subtest scores, as represented in Spearman’s famous g-factor model. In a conversation with biologists with whom he worked, Han had tried to explain what a factor model does and how it works statistically. Because they didn’t really get it, he had tried to use one of their own examples: lakes. Obviously, he said, the water quality in lakes is associated with various observables; for instance, the number of fish, the size of the algae population, the biodiversity of the lake’s ecosystem, the level of pollution, etc. Han explained that, in psychometrics, water quality would be thought of as a latent variable, which is measured through all of these indicators. He told me that the biologists had stared at him incredulously. No, they had answered, water quality is not really a latent variable. Rather, it is a description of a stable state of a complex system defined by the interactions between the observables in question. For instance, pollution can cause turbidity, which causes plants to die, which causes reduction of biodiversity, which allows the algae population to get out of hand, which increases turbidity, etc. (I am not a biologist, so if you really want to know what’s going on in shallow lakes, read this).

I vividly recall that Han sat in my room and said: couldn’t something like this be the case for general intelligence, too? That different cognitive attributes and processes, as measured by IQ-subtests (working memory, reading ability, general knowledge, etc.) influence each other’s development so as to create a positive manifold in the correlations between test scores? He drew arrows between boxes on a sheet of paper and held it up for me to see. I remember so well that I saw him do that. It was so simple but you could see that the ramifications were huge (although nobody at the time probably guessed just how huge). I said “I think that’s a really good idea”. He said: “Yes it is, isn’t it?!” and walked out with a smile. That drawing later became Figure 1b in Han’s mutualism model, eventually published in Psychological Review. The appendix to that paper formally proves that the mutualism model (which is basically a dynamical network model) can produce data that are exactly equivalent to the (hierarchical) factor model. That, I think, was the first real equivalence proof that was done in our group.

Essentially, this equivalence proof was what got the network approach going, because after many years of fruitless thinking about plausible causal mechanisms that would connect something like the g-factor to IQ, or the internalizing factor to insomnia, it suddenly appeared to us that networks could provide reasonable starting points for explaining the correlation structures commonly observed in psychometrics, for which the latent variable hypothesis provided very few believable stories. That’s why I decided to develop a general methodological framework around the idea that psychometric items can be profitably modeled using networks.

After starting our network research program in 2008, I played around with network models that we now know are in fact Ising models. I quickly found out that simulations from a network model for binary items produced data very close to IRT models, as would be expected from Peter Molenaar’s intuition and Van der Maas et al.’s proof. I gave several talks on this in various locations, including a keynote at a Rasch conference in 2010, where actual thunder broke after I said the Rasch model might be better thought of as a network (no coincidence, of course). However, I could never really show the formal equivalence between the network model and the latent variable model, partly because I lacked the mathematical skills and partly because I erroneously believed that I wasn’t simulating from an Ising model but from a closely related model.
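The kind of simulation described here is easy to reproduce. A minimal sketch (the parameter values are invented for illustration, not those used at the time): draw binary data from a small Ising network with uniformly positive couplings via Gibbs sampling, and check that the items show the positive manifold a unidimensional IRT or factor model would also produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a small 4-node Ising network:
# uniform positive couplings, with negative thresholds chosen so
# that each node is "on" about half of the time.
p = 4
J = 0.8 * (np.ones((p, p)) - np.eye(p))  # pairwise interactions, zero diagonal
tau = np.full(p, -1.2)                   # thresholds

def gibbs_sample(J, tau, n_samples=2000, burn_in=200):
    """Draw binary (0/1) samples from an Ising model via Gibbs sampling."""
    x = rng.integers(0, 2, size=len(tau))
    samples = []
    for t in range(burn_in + n_samples):
        for i in range(len(tau)):
            # Conditional log-odds of x_i = 1 given the current state
            # of the other nodes (the diagonal of J is zero).
            logit = tau[i] + J[i] @ x
            x[i] = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        if t >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

data = gibbs_sample(J, tau)
corr = np.corrcoef(data, rowvar=False)
off_diagonal = corr[~np.eye(p, dtype=bool)]
# Positive couplings yield a positive manifold: all pairwise
# correlations between items come out positive.
print(off_diagonal.min() > 0)
```

Fitting such data with a standard IRT model will typically go through without complaint, which is exactly the point: from the data alone, the network and the latent variable give indistinguishable accounts.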

That took a better mathematical mind: that of Gunter Maris, who was the first to really penetrate the nature of the correspondence between all of these models, and who, in a beautiful mathematical move, proved that they provide equivalent representations of the probability distribution of observables. I believe this happened in 2012, and I consider it one of the main psychometric breakthroughs I have had the honor of witnessing. I expect it to have lasting effects on the psychometric landscape – we are merely at the beginning of exploiting the connection that this equivalence opens up: a secret tunnel that allows us to travel back and forth between a century of statistical physics literature and a century of psychometrics. The equivalence has now been written down in this chapter by Sacha Epskamp, written in 2014, in this paper by Maarten Marsman from around the same time, and of course, most recently, in the work of Kruis and Maris. Also, Maarten Marsman has a forthcoming paper that extends the equivalence to a whole range of other models.
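The flavor of such an equivalence proof can be sketched with a Gaussian-integral (Hubbard–Stratonovich) identity. This is a schematic reconstruction, not the exact derivation in any of the papers cited above, and it assumes the coupling matrix can be written as J = λλᵀ (adding a constant to the diagonal of J only shifts the thresholds, because x_i² = x_i for binary items):

```latex
P(\mathbf{x}) \propto \exp\Big(\boldsymbol{\tau}^{\top}\mathbf{x}
  + \tfrac{1}{2}\,\mathbf{x}^{\top}\boldsymbol{\lambda}\boldsymbol{\lambda}^{\top}\mathbf{x}\Big)
  = \int \prod_i \exp\big(x_i(\tau_i + \boldsymbol{\lambda}_i^{\top}\boldsymbol{\theta})\big)\,
    \phi(\boldsymbol{\theta})\, d\boldsymbol{\theta},
% using E_{\boldsymbol{\theta} \sim N(0, I)}\big[e^{\mathbf{a}^{\top}\boldsymbol{\theta}}\big]
%   = e^{\frac{1}{2}\mathbf{a}^{\top}\mathbf{a}}
% with \mathbf{a} = \boldsymbol{\lambda}^{\top}\mathbf{x},
% where \phi is the standard normal density and \boldsymbol{\lambda}_i
% is the i-th row of \boldsymbol{\lambda}.
```

The integrand has exactly the shape of a multidimensional two-parameter item response model, with intercepts τ_i and loadings λ_i, and the network’s couplings reappear as products of loadings, J = λλᵀ. (In the actual proofs the latent distribution of the equivalent IRT model comes out as a specific mixture rather than a plain normal density; this sketch only conveys the mechanism.)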

I have noted that some people think that, because there exists an equivalent latent variable model for each network model and vice versa, networks are equivalent to latent variables in general. This is erroneous. That one can always come up with some equivalent latent variable structure to match any network structure (and vice versa) doesn’t mean everything is equivalent to everything else. Care should be taken to distinguish a statistical model, which describes the probability distribution of a given dataset gathered in a particular way, from the theoretical model that motivates it, which describes not only the probability distribution of this particular dataset, but also that of many other potential datasets that would, for instance, arise upon various manipulations of the system or upon different schemes of gathering data.

This all sounds highly theoretical and abstract, so it is useful to consider some examples. For instance, network structures that project from plausible latent variable models (e.g., scores on working memory items that really all depend on a common psychological process) can be (and as a rule are) highly implausible from a substantive theoretical viewpoint. Your ability to recall the digit span series “1,4,6,3,7,3,5” really doesn’t influence your ability to answer the series “9,3,6,5,7,2,4”; instead, both items depend on the same attribute, namely your memory capacity. This indicates that the theoretical model (both item scores depend causally on memory capacity) is more general than, and thus different in meaning from, the statistical model (the joint probability distribution of the items can be factored as specified by an IRT model). Here, the statistical latent variable model is equivalent to a network model, but the theoretical model in terms of a common cause – memory capacity – is not.

Likewise, latent variable structures that project from networks can be highly implausible too. For example, an edge in a network between “losing one’s spouse” and “feelings of loneliness” can be statistically represented by a latent variable, as is true for any connected clique of observables. But from an explanatory standpoint, the associated explanation of the correlation in terms of a latent common cause makes no sense whatsoever. It is rather more likely that losing one’s spouse causes feelings of loneliness directly. Again, the difference in meaning between the theoretical model (losing one’s spouse causes feelings of loneliness) and the statistical model (the correlation between these variables remains nonzero after partialling out any other set of variables in the data) lies in the greater generality of the theoretical model, which extends to cases we haven’t observed (e.g., what would have happened if person i’s spouse had not passed away), cases in which we had used different observational protocols (e.g., consulting the population register instead of administering a questionnaire item on whether i’s spouse had passed away), or cases in which we would causally manipulate relevant variables. Statistical models by themselves do not allow for such generalizations (that is in fact one of the reasons that theories are so immensely useful).

Also, at the level of the complete model, the implications of latent variable models can differ greatly from those of network models. For instance, the model proposed for depression in Angélique Cramer’s recent paper is equivalent to some latent variable model, but not, as far as I know, to any of the latent variable models that have been proposed in the literature on depression. In general, if one has two competing theoretical models, one of which is a latent variable model with its structure fixed by theory, and the other of which is a network model with its structure also fixed, it will be possible to distinguish between these because the latent variable model equivalent to the postulated network model is not the same as the latent variable model suggested by the latent variable theory; Riet van Bork is currently working on ways to disentangle such cases.

Finally, even though latent variable models and network models may offer equivalent descriptions of a given dataset, they often predict very different behavior in a dynamic sense. For example, the network model for depression symptoms, in certain areas of the parameter space, behaves in a strongly nonlinear fashion (featuring sudden jumps between healthy and depressed states), while the most standard IRT model should show smooth and relatively continuous behavior. Relatedly, the network model predicts that sudden transitions between states should be announced by early warning signals, like an increase in the predictability of the system prior to the jump (critical slowing down, for which recent work has provided some preliminary evidence). There is no reason to expect that sort of thing to happen under the latent variable model that is statistically equivalent to the network model from which these predictions were simulated.

So the fact that two models are statistically equivalent with respect to a set of correlational data does not render them equivalent in a general theoretical sense. One could say that while the data models are equivalent, the theories that motivate them are not. This is why it is in fact not so difficult to come up with very simple data extensions that allow one to discriminate between observationally equivalent network and latent variable models. For example, in a latent variable model the effects of external variables are naturally conceived of as going through the latent variable (e.g., genetic and environmental effects on phenotypic variation in behavioral traits, or life events that can trigger depressive episodes), whereas in network explanations these effects propagate through the network. This means that the latent variable model predicts the external effect to be expressed in proportion to the factor loadings (so the model should be measurement invariant over the levels of the external variable), while the network model predicts the external effect to propagate over the topology of the network (so the effect should be smaller on variables more distant from the place where it impinges on the network).

Also, if experimental interventions are available, it should be reasonably easy to discriminate between latent variable models and network models, because in a latent variable model intervening on the observables is causally ineffective with respect to the other observables in the model. This is because (in standard models) effects cannot travel from indicators to the latent variable, so they cannot propagate. However, in a network model, experimental manipulations of observables should be able to shake the system as a whole, insofar as the observables are causally connected in the network. So statistical equivalence under passive observation does not imply semantic or theoretical equivalence in general (see also Keith Markus’ interesting conclusion that statistical equivalence rarely implies theoretical, semantic equivalence; rather, statements to the effect that two models are statistically equivalent, as a rule, suggest that the models are not identical).

The post The meaning of model equivalence: Network models, latent variables, and the theoretical space in between appeared first on Psych Networks.

The first sentence of the ‘About’ section on this website (and, for that matter, of most scientific publications about psychological networks) mentions the increasing popularity of the network perspective in the social sciences. Statements such as these essentially describe the increasingly popular practice among researchers of explaining associations observed between measured variables as a consequence of mutualistic relations among these variables themselves.

Examining the structure of observed associations between measured variables is an integral part of many branches of science. At face value, associations inform us about a possible relation between two variables, yet contain no information about the nature and direction of these relations. This is captured in the (infamous) phrase about the quantity that measures the extent of the interdependence of variable quantities: correlation does not imply causation. Making causal inferences from associations requires the specification of a mechanism that explains the emergence of those associations.

In our paper we discuss three of these theoretically very distinct mechanisms and their prototypical statistical models. These three mechanisms, represented here within the context of depression, are the following:

The first mechanism represents the (until recently most dominant) perspective on psychopathology, in which a mental disorder is viewed as the common cause of its symptoms. The common cause mechanism is statistically represented by the latent variable model, and explains the emergence of observed associations through an unobserved variable (depression) acting as a common cause with respect to the observed variables (sleep disturbances, loss of energy, concentration problems). The manifest variables (symptoms) are thus indicators of the latent variable (mental disorder): they are independent of one another given its state, and reflect its current state.

The network perspective on psychopathology is captured by what we, in our paper, term the reciprocal affect mechanism. In this framework, the associations between observed variables are explained as a consequence of mutualistic relations between these variables. The unobservable variable depression does not exist here, but is merely a word used to describe particular collective states of a set of interacting features.

The third mechanism, the common effect mechanism, explains associations between observed variables as arising from (unknowingly) conditioning on a common effect of these variables, and is statistically represented by a collider model. In this framework, the observed variables act as a collective cause of an effect. An example is receiving a depression diagnosis (effect) as a consequence of the occurrence of multiple symptoms (causes) that are linked by the DSM to the term depression.
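Conditioning on a common effect is the mechanism behind Berkson-style selection effects, and it is easy to see numerically how it manufactures an association between otherwise independent causes. A generic sketch (the diagnosis rule and symptom probabilities below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two independent binary "symptoms" (marginally unrelated causes).
a = rng.random(n) < 0.3
b = rng.random(n) < 0.3

# Hypothetical diagnosis rule acting as a common effect:
# a person is diagnosed whenever at least one symptom is present.
diagnosed = a | b

# In the full sample the symptoms are (near-)uncorrelated, but among
# the diagnosed an association appears purely from the conditioning.
corr_full = np.corrcoef(a, b)[0, 1]
corr_diagnosed = np.corrcoef(a[diagnosed], b[diagnosed])[0, 1]
print(corr_full, corr_diagnosed)
```

Within the diagnosed subsample, observing one symptom “explains away” the other, inducing a dependence that does not exist in the population; the point of the collider representation is that such conditioning can generate the association structure that the other two mechanisms produce by different means.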

While each of these mechanisms proposes a radically different explanation for the emergence of associations between a set of manifest variables, we demonstrate in the paper that their associated statistical models for binary data are mathematically equivalent. From this it follows that each of the three mechanisms is capable of generating exactly the same observations, and as such that any set of associations between variables that is sufficiently described by a statistical model in one framework can be explained as emerging from the mechanism represented by any of the three theoretical frameworks.

Having multiple possible interpretations of the same model allows for more plausible explanations of the theoretical concepts and the causal inferences we obtain from the measurement model applied to our data. Furthermore, the historical success of theoretically very implausible models, such as the latent variable model, can in retrospect arguably be explained by the equivalence of these three models.

However, it also means that obtaining a sufficient fit for the statistical model in one of these frameworks is by no means evidence that the mechanism from that framework actually generated the observations. That is, there will always exist representations from the other mechanisms that can explain our observations equally well.

We should thus not only apply a network model to our data because it gives us a pretty picture (which it does), but because we believe that the associations between the variables we have measured are explained as a consequence of mutualistic relations between these variables themselves.

Abstract

Statistical models that analyse (pairwise) relations between variables encompass assumptions about the underlying mechanism that generated the associations in the observed data. In the present paper we demonstrate that three Ising model representations exist that, although each proposes a distinct theoretical explanation for the observed associations, are mathematically equivalent. This equivalence allows the researcher to interpret the results of one model in three different ways. We illustrate the ramifications of this by discussing concepts that are conceived as problematic in their traditional explanation, yet when interpreted in the context of another explanation make immediate sense.

— Kruis, J. and Maris, G. Three representations of the Ising model. Sci. Rep. 6, 34175; doi: 10.1038/srep34175 (2016).

The post How theoretically distinct mechanisms can generate identical observations appeared first on Psych Networks.

The network literature on psychopathology so far has been dominated by R, for instance when estimating cross-sectional network models via the packages *qgraph*, *IsingFit*, *mgm*, or *bootnet*, or time-series network models via *mlVAR* or *graphicalVAR*.

Modeling intensive time-series processes has become a topic important enough that the Mplus team asked Ellen Hamaker from Utrecht University to collaborate on the implementation of such models. I don’t know the details, but an invitation to a workshop states that Mplus will be able to estimate dynamical processes in data collected via “daily diaries, ecological momentary assessments (EMA), experience sampling methodology (ESM), and ambulatory assessments”. As usual, the Muthéns have found a sweet name and abbreviation for this: Dynamic Structural Equation Models (DSEM).

If you are interested in learning more about Mplus 8, which will come out this spring, there are several workshops planned. The first will take place July 13/14, 2017, at Utrecht University, and I believe there will also be presentations by early adopters of DSEM. If you are using Mplus 8 and want to present, email Rens van de Schoot.

Personally, I “grew up” with Mplus, and it brings great benefits like fantastic state-of-the-art SEM modeling and very good support. Over the years, I have slowly transitioned to R because it is free open-source software, deals much better with data management (with Mplus I usually need a second program for that), and allows me to make my analyses 100% reproducible (everybody can get R and run my scripts, while not everybody can purchase Mplus and run my scripts). No matter where you stand on this debate, we should all be excited that Mplus is jumping on the dynamical process modeling bandwagon: it guarantees a thorough implementation of novel methodology, and the more software there is that supports and further develops these models, the better.

The post Mplus 8.0 with Dynamic Structural Equation Models appeared first on Psych Networks.
