The post Experience sampling software ‘mobileQ’: new, free, open source appeared first on Psych Networks.


The Experience Sampling Method (ESM), also known as Ecological Momentary Assessment (EMA), is becoming ubiquitous in research investigating the nature and dynamics of everyday psychological phenomena.

When our research group started doing ESM research back in 2004, we used special wristwatches that could be programmed to beep at fixed time points. Participants wore these watches on the wrist opposite their usual watch, and when they beeped, took out a booklet to complete a pencil-and-paper survey. To my surprise, dual watches did not become a fashion thing. This method helped us collect the data we were after, but entering the data was laborious and time-consuming, and the approach was not great for participant compliance. I think it was around 2007 that we moved to palmtop computers. Yes, it fit in the palm of your hand, but “computer” was a bit of an overstatement! The device could be used for notes and calendar keeping, but could not call, text, or browse the internet, let alone tweet or find you a date (and it cost more than a modern smartphone!). But it worked well for beeping participants and digitally recording their responses to the questions we formulated. We used iESP to program our palmtop studies, and I am still indebted to Barrett and Barrett (2001)^{1} for making these devices amenable to ESM research and for instilling in me the jealousy to also one day publish a paper that has my name on it twice.

Flash forward to 2011, when we figured we should join the smartphone age. It was easy to purchase smartphones, but less easy to do ESM research on them. There were a few homegrown software platforms available, but they suffered from various ailments. There were a few paid platforms available too, but they quickly became expensive when you had plans to conquer the world, cure cancer, and halt all psychopathology with ESM research. So we thought it would be easiest to just make our own software to program smartphones. That was an *underthinkment*! It took us several years to make something that is easy to use, does not require any programming or coding experience on the part of the user, is reliable in terms of data collection and storage, and has enough flexibility to accommodate most of the varieties of research protocols typically encountered in ESM research. We called it *mobileQ*. While many people were involved in its development along the way, it was Kristof Meers who was the driving force and coding genius who made it all happen and work.

We and our collaborators have been using mobileQ for several years now. In total, it has collected over 7 million responses from 200,000+ surveys. From the start, it was our plan to also make our ESM smartphone brainchild freely available to other researchers. Yet, this also took longer than expected, as we had to properly document its features for novice researchers and to figure out the Terms and Conditions and Privacy statements researchers should agree to when wanting to use mobileQ. This turned out to be our biggest obstacle in today’s GDPR and lawsuit-obsessed age (a big thanks here to the KU Leuven R&D department).

So here it is.

We are very happy to make available to you, the research community, a free and flexible platform for ESM research. Let me explain a number of its features, advantages, and possible disadvantages. Aside from being free, a big advantage is that we hope it is easy to use for novice researchers and able to accommodate most (standard) ESM protocols, including different question types, randomization options, flexible time points, and support for both time- and event-based sampling. As a simple research group, we cannot afford to set up a call center or helpdesk to answer all questions and queries. Yet we have documented all features as well as possible in an accompanying paper (Meers et al., 2019)^{2} and in instructional videos featuring the irresistible Aussie accent of Elise Kalokerinos and the video editing skills of Egon Dejonckheere. There is also a forum for researchers to post questions, which we will monitor ourselves.

One potential drawback is that mobileQ only works on Android phones, and we advise working with dedicated research phones. That is, we suggest that researchers buy their own Android phones for their labs and install mobileQ on these phones (which are cheap these days: recommended devices do not cost much more than 100 euros or dollars). Why? One first reason is perhaps that we came from palmtops, and research-dedicated devices did not seem unreasonable at the time. A second reason is that, unlike today, when we started it was difficult to develop something that worked on both Android and iOS. But I also think there is a case to be made for research-dedicated phones. First, they may aid compliance. Participants see the device as a research instrument that they may ignore less easily than their own smartphone, they cannot tinker with the notification settings (or avoid notifications!), and their personal phone use cannot interfere with the notification system (something we have found problematic in some versions of Android with a particularly persistent battery-saver mode). Second, you have more control over what is displayed, and it is displayed in a way that is uniform across participants. Third, you can include all participants, even those without a smartphone (important for some populations, e.g., older adults).

Currently, mobileQ is available for free use from our own protected KU Leuven servers. The platform is GDPR compliant and takes into account today’s security, legal, and privacy requirements (of course, you are still responsible for the nature of the data you collect). In addition, we will make the code open source over the course of 2019. This will allow you to store your data on your own servers, which may be a requirement at some institutions, especially when collecting sensitive data. Perhaps more importantly, it will also allow you, the research community, to further build on our work and adapt and enhance mobileQ for whatever purpose and functionality you need on top of what is already there.

Feel free to try it out.

From our point of view, we are happy to give something to the research community that will hopefully help your research along, and in doing so, help us understand the complexity of human behavior, thoughts, and emotions (and who knows, cure cancer).

**Peter Kuppens**

with thanks to all involved in the development of mobileQ including Marlies Houben, Pete Koval, Madeline Pe, Koen Rummens, …


The post Idiography: Where have we come from, where should we go to? appeared first on Psych Networks.

Idiography – or the study of individuals – is achieving new prominence within psychology. In a field that seems inextricably linked to the study of an individual’s experience, personality, relationships, and symptomatology, it seems almost strange that idiographic methodology hasn’t always been at the forefront of psychological research. To be fair, numerous psychologists and therapists in the 20th century designed person-centered studies or integrated idiographic methodology into their clinical work. However, these earlier studies were limited by the use of (understandably) rudimentary methodology. Over the past decade or so, there has been substantial improvement in the data collection and statistical methodology available for N = 1 studies and a notable increase in the accessibility of these methods for psychologists. This has allowed those interested in idiography to model more complex psychological dynamics of an individual’s experience.

There has been a long history of idiographic work during the 20th century. Raymond Cattell introduced the data-box as a method for orienting psychologists to person-centered research. A figure of the data-box is shown below. As opposed to studying several variables in many people at one point in time (see the front panel of the data-box), a researcher could instead measure a set of variables in one individual over several points in time (see the shaded panel of the data-box).

Clinicians and clinical scientists continued in this direction, using analytic methods like P-technique or dynamic factor analysis to analyze a single patient’s psychotherapy. Similarly, psychologists began using these methods to examine the structure of personality. Numerous other researchers published ground-breaking idiographic studies, and reading their work can quickly turn into a delightful dive into the history of psychology and psychological methodology (personal experience from Marilyn L. Piccirillo’s 2017 comprehensive exam studying)! A selected reading list of these historical studies can be found here.

During this early era of idiography, data collection and statistical methodology was understandably more basic and less comprehensive. With advancements in data collection techniques, we’re now able to use experience sampling methodology to collect in-vivo assessments of the individual’s mood or experience rather than relying on retrospective self-report. Likewise, our statistical methods are increasingly able to capture the complexity of psychological time-series data.

Several researchers have promoted idiography through the use of these improved methods. In a review of idiographic studies within psychology that I authored with Tom Rodebaugh, we highlighted work from Aaron Fisher, Aidan Wright, and colleagues who have published results demonstrating the idiographic nature of symptomatology in generalized anxiety and borderline personality disorders, respectively. Additionally, using a factor-based time-series approach, Peter Molenaar, Emilio Ferrer, John Nesselroade, and colleagues have examined the affective dynamics of interpersonal interactions between various close relatives. Notably, the work of Laura Bringmann, Ellen Hamaker, and colleagues in developing and testing time-varying analytic methods marks an important improvement in idiography. These analytic approaches can assist with modeling more complex psychological processes and can account for some violations to stationarity.

Other researchers have worked with group-level approaches that are also able to model individual-level processes. Methods such as group iterative multiple model estimation (GIMME), used by researchers including Adriene Beltz, Kathleen Gates, Stephanie Lane, and Aidan Wright, as well as multilevel dynamic structural equation modeling, used by Ellen Hamaker and colleagues, improve our ability to study individual-level processes within the context of the group. Most notably, in the case of the GIMME method, group-level models are constructed from individual-level models rather than relying on averages calculated across individuals. A selected reading list of more contemporary idiographic studies is included here.

It was truly exciting to review the work from our colleagues who are working to move the field of idiography forward. Their work demonstrated substantial progress towards using newer and more advanced time series methods. Yet, there is also considerable work ahead if we are to continue working towards integrating idiographic methods into applied areas, especially clinical work. Lian van der Krieke and colleagues have conducted some of the first dissemination and implementation work by developing an automated platform to collect and analyze idiographic data using vector autoregression (autoVAR). They have published studies measuring reactions and attitudes towards this automated platform, and the use of an automated platform may help to improve the accessibility of idiographic methods.

However, studies integrating idiographic methods into applied settings are still limited, which may be due to the lack of accessible information about how to best design an idiographic study. In a tutorial for clinical scientists and clinicians, Emorie Beck, Tom Rodebaugh, and I put forth our suggestions for designing an idiographic study based on the papers reviewed above and information gathered via personal communications at conferences, email, and academic Twitter.

Our main takeaways for designing an idiographic study are included below:

- Select items that can be measured continuously and use a continuous response scale (e.g., a 0–100 scale). Without a continuous scale, there may not be enough variance in each item to analyze.
- When possible, use more than one item to assess a given construct. Although participant burden is a concern, there are plenty of issues that can arise from relying on single item measurement. In terms of data analysis, you will also want to consider compositing items that are highly correlated or modeling latent variables.
- Be cautious about stationarity! Many of the easily accessible time-series methods assume stationarity of a given process, yet it is an open question whether psychological processes can ever be truly stationary. Consider collecting data during periods of likely stationarity, such as after symptom change has stabilized. When it comes time to analyze your data, examine the data for trends that can be accounted for through detrending procedures, or use a time-varying approach that can appropriately model nonstationarity.
- Consider the timing of your assessments. This may be one of the trickiest decision points, because we don’t have much, if any, empirical evidence about the within- and between-person time trends of an emotional experience or symptom. Our best advice is to strike a compromise between numerous assessment points and participant burden. Our previous studies have administered surveys at 5–7 time points throughout a 12-hour period. Regardless, be prepared to analyze data at different lags (i.e., lag 1, lag 2, lag 3), as this will allow you to model item and inter-item relationships on multiple timescales.
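For the last point, the lagging step can be sketched in a few lines of R (a toy illustration with hypothetical data and variable names, not code from the studies discussed above):

```r
# toy ESM series: one mood rating per assessment
d <- data.frame(mood = c(60, 55, 70, 65, 80, 75))
# shift the series down by one (two) assessments to obtain lag-1 (lag-2) predictors
d$mood_lag1 <- c(NA, head(d$mood, -1))
d$mood_lag2 <- c(NA, NA, head(d$mood, -2))
# a lag-1 autoregressive effect could then be modeled as, e.g.:
# lm(mood ~ mood_lag1, data = d)
```

Repeating this for each item yields the lagged design matrix needed to examine relationships at multiple timescales.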

The final two points above represent big hurdles for the field of idiography, and I’m looking forward to advancements in our understanding of how psychological systems change over time, both within and between individuals. In a field that is inherently focused on the individual, it is exciting to witness the rapid evolution of idiographic methodology as it can be used to improve our theoretical and clinical understanding of psychological systems!


The post ICPS 2019: Collection of presentations related to dynamical systems appeared first on Psych Networks.

If you know about other talks I missed, please let me know and I’m happy to add them here anytime. Special thanks to Sacha and Payton for helping me collect these talks!

- Emily Bernstein: Unique And Predictive Relationships Between Components Of Cognitive Vulnerability And Symptoms Of Depression (slides)
- Giovanni Briganti: Network analysis of empathy items from the Interpersonal Reactivity Index in 1973 young adults (slides)
- Julian Burger: TIPS: Therapy Implications from Psychopathological Dynamical Systems (slides)
- Giulio Costantini: Towards disentangling correspondence and emergence: The case of conscientiousness (slides)
- Jonas Dalege: The Attitudinal Entropy (AE) Framework as a General Theory of Attitude (slides)
- Sacha Epskamp: Intermediate Stable States in Substance Use — Can allowing use prevent abuse? (slides)
- Sacha Epskamp: Network psychometrics — phase 2 (slides)
- Talya Greene: Dynamic network analysis of depression symptoms (slides)
- Alexandre Heeren: Deconstructing trait anxiety — A network perspective (slides)
- Adela Isvoranu: State of the Art and Clinical Applications of Network Psychometrics (slides)
- Payton Jones: Beyond symptoms — why diagnostic criteria are not enough for network analysis (slides)
- Payton Jones: Depression comorbidity – applying bridge centrality in networks to understand overlap with other mental disorders (slides)
- Payton Jones: Breaking the Assumption of Group Homogeneity in Networks — Partitioning Networks with Machine Learning (slides)
- Lachlan McWilliams: Reconceptualizing adult attachment relationships — a network perspective (slides)
- Lachlan McWilliams: A Network Perspective on the Relationship between Life Satisfaction and Depression (slides)
- Daniel Moriarity: Comparison of the networks of depression and anxiety symptoms in adolescents as a function of inflammation (slides)
- Maien Sachisthal: Uncovering Countries’ Science Interest Structure Using a Psychometric Network Approach (slides)
- Matthew Southward: Which deficits are most central to Borderline Personality Disorder? A network analysis of 4,000 participants (slides)

I also zipped all slides, which you can find here.


The post Network models of factor scores: mixing apples with oranges appeared first on Psych Networks.

I conducted some sensitivity and robustness analyses for the paper, and became interested in all the different ways in which one can summarize the subscale scores (e.g., sum scores vs. factor scores), and how this affects the resulting correlation matrices and regularized partial correlation networks. Since a lot of this was beyond the scope of the paper, I wanted to write this up here, in a tutorial with open code and open data. This seems especially relevant because more and more researchers are interested in modeling networks of meaningful constructs, and I firmly believe that subscales can have a lot of advantages over individual items in case they represent such meaningful constructs; see Giovanni’s paper — and also our recent paper on schizotypal personality — for examples. Folks with more time & talent could easily turn this tutorial into a simulation study. Hint hint.

In any case, you can find data and code for this tutorial here. I will not show all code below, only the most relevant parts. Now, let’s start to mix apples and oranges! Overall, the goal of this tutorial is to estimate correlation and regularized partial correlation networks based on either sum scores or factor scores of the items belonging to each of the 7 subscales. Finally, we will also explore what happens if we estimate the factor scores not based on simple structure, but allowing for cross-loadings.

Before we start, please make sure to add Sacha Epskamp’s paper on generalized network psychometrics to your reading list, which is the first comprehensive publication on combining latent variable models and network models. Sacha wrote the R-package *lvnet*, which allows you to tackle the challenge we faced in Giovanni’s paper in one single step. *lvnet* first takes out the shared variances of items and models them as latent variables, and then models the relations among the latent variables as a network. This is called a *latent network model* — here an example from Sacha’s paper:

Unfortunately, we were unable to use *lvnet* for the paper because we had too many items; *lvnet* does not scale well beyond 20 or so items. If you have fewer items, this is likely what you want to do.

In a first step, we estimate correlations of the 7 domains estimated via sum-scores, and compare that to correlations among the 7 domains estimated via factor-scores (using a confirmatory factor analysis).

We estimate sum-scores by adding up items, and estimate factor scores in a 7-factor model in lavaan:

```
library(lavaan)  # CFA estimation

cmodel <- ' FS =~ c7 + c10 + c16 + c24 + c29
C =~ c3 + c12 + c20 + c25 + c32
A =~ c1 + c4 + c17 + c21 + c30
GL =~ c2 + c8 + c18 + c26 + c31
AC =~ c13 + c19 + c22 + c27 + c33
V =~ c5 + c11 + c14 + c28 + c34
OA =~ c6 + c9 + c15 + c23 + c35 '
fit <- cfa(cmodel, data=data)
```

Fit is surprisingly good for a 7-factor simple structure model^{1}, with a significant chi-square of ~1593 on 539 df in n=680; CFI=0.91, TLI=0.90, and RMSEA=0.05.
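For completeness: one way to obtain the factor scores themselves from the fitted model is lavaan's `lavPredict()` (a sketch; for continuous indicators the default corresponds to regression-method factor scores):

```r
# extract factor scores from the fitted CFA: one column per latent variable
fscores <- as.data.frame(lavPredict(fit))
head(fscores)  # columns FS, C, A, GL, AC, V, OA
```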

In the next step, we now estimate and visualize correlations among either sum scores or factor scores:

```
library(readr)   # read_delim()
library(qgraph)  # cor_auto() and network plotting

sum <- read_delim("sumscoredata.csv", ";", escape_double = FALSE, trim_ws = TRUE)
factor <- read_delim("factorscoredata.csv", ";", escape_double = FALSE, trim_ws = TRUE)
cor_sum <- cor_auto(sum)
cor_fac <- cor_auto(factor)
layout(t(1:2))
nw_sum <- qgraph(cor_sum, details=T, maximum=.72, title="correlation among sum scores")
nw_fac <- qgraph(cor_fac, details=T, title="correlation among factor scores")
dev.off()
```

As you can see, the correlations among factor scores are considerably larger than those among sum scores, because the factor scores disattenuate the correlation coefficients for unreliability. That is, if we assume that variation in the latent variables causes variation in the observed items, i.e., if we believe that the observed items are passive indicators that are caused by the latent variable — an assumption I find highly plausible given the nature of the scale — then the results can be interpreted as the factor scores removing measurement error. This increases the relationships between subscales because we now have more reliable subscale scores^{2}.
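The disattenuation logic can be illustrated with a quick simulation (my own sketch, not part of the tutorial's data): two latent variables correlate at .6, but adding measurement error so that each observed score has reliability .5 attenuates the observed correlation by the classical factor sqrt(.5 × .5) = .5.

```r
set.seed(1)
n <- 100000
eta1 <- rnorm(n)
eta2 <- 0.6 * eta1 + sqrt(1 - 0.6^2) * rnorm(n)  # latent correlation = .6
x <- eta1 + rnorm(n)  # observed score; error variance 1 -> reliability .5
y <- eta2 + rnorm(n)  # same for the second variable
round(cor(eta1, eta2), 2)  # ~0.60: the (nearly) error-free correlation
round(cor(x, y), 2)        # ~0.30: attenuated by sqrt(.5 * .5) = .5
```

Factor scores move us from the second number back toward the first, which is exactly the pattern seen in the two correlation networks above.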

The next code snippet shows the sum of all absolute correlations in each case, and you can see that this sum is larger for the factor scores than for the sum scores:

```
sum(abs(cor_sum[upper.tri(cor_sum)])) # 4.91
sum(abs(cor_fac[upper.tri(cor_fac)])) # 6.95
```

The two correlation matrices are nearly perfectly linearly related, with a correlation of 0.99:

In a second step, we estimate regularized partial correlation networks (Gaussian Graphical Models; GGMs) on the data. They look (somewhat?) different^{3}, and once again, the coefficients in the factor score GGM are stronger:

```
library(bootnet)  # estimateNetwork()

n1 <- estimateNetwork(sum, default="EBICglasso", threshold=TRUE, lambda.min.ratio=0.001)
n2 <- estimateNetwork(factor, default="EBICglasso", threshold=TRUE, lambda.min.ratio=0.001)
layout(t(1:2))
plot1 <- plot(n1, layout='spring', title="sum score GGM", details=TRUE, maximum=0.55)
plot2 <- plot(n2, layout=plot1$layout, title="factor score GGM", details=TRUE)
dev.off()
```

The correlation among the two adjacency matrices is 0.92, and again, the coefficients seem to be linearly related and stronger in case of the factor scores.

The plot below summarizes both correlations and regularized partial correlation structures for sum scores vs factor scores:

The specific results here are entirely consistent with the disattenuation hypothesis: correlations and regularized partial correlations among subscales estimated from factor scores of the 7 subscales are stronger than those estimated from sum scores. And while the two GGMs differ in that the factor score GGM has 2 negative edges, the analysis here reveals that this is likely due to the lasso setting these specific negative relations to zero in the sum score GGM. This happens because the reliability of the sum-score subscales was lower, which translates into lower power for the lasso to detect relations above zero. We can sort of test this by estimating a partial correlation network without regularization, in which case we'd expect the coefficient for the negative edge that is featured in the regularized factor score GGM, but not in the regularized sum score GGM, to be slightly negative. This is indeed the case:

```
n1pcor <- estimateNetwork(sum, default="pcor")
n2pcor <- estimateNetwork(factor, default="pcor")
layout(t(1:2))
plot1 <- plot(n1pcor, layout=plot1$layout, title="sum score GGM, no regularization", details=TRUE, maximum=0.58)
plot2 <- plot(n2pcor, layout=plot1$layout, title="factor score GGM, no regularization", details=TRUE)
dev.off()
```

Overall, I'd like to see more simulation work on this, but I see no principled obstacle to using factor scores for subscales, which offer advantages such as removing measurement error (if that indeed makes sense for the items & construct under scrutiny).

This blog led to some interesting discussions, and I would like to highlight two points specifically.

First, as pointed out by Carlo Chiorri, correlations among factor scores tend to be inflated in case of cross-loadings when simple structure is enforced. Assume we have a data-generating model with some cross-loadings (i.e., some items load on more than one factor). Now we fit two models: model 1, a simple-structure CFA model as we do above; and model 2, a factor model where we model all cross-loadings via ESEM (Exploratory Structural Equation Modeling). The inter-correlations among factor scores in model 1 will be larger than in model 2. This has been shown in the 2013 paper by Marsh and colleagues.

If we estimate this ESEM model and compare the resulting networks to the two models we estimated above (sum-score and factor-score networks), we get a network model that is somewhere between the sum-score network (smaller relations) and the factor score network (larger relations). This makes sense: there is some disattenuation (inflation?), but it is smaller than in the simple structure model.

It's been a while since I fit ESEM models in lavaan, if somebody knows a quick way to do that, please let me know and I'll add the code here.

Second, Erikson Kaszubowski summarized some general issues of using factor scores as observed variables, which are worth quoting in full:

The problem of using factor score in any subsequent analysis as an observed variable is a very old problem in factor analysis literature, as you probably know. It's mainly a problem because there is an infinite number of solutions that satisfy the equations used to estimate factor scores. Most software packages (like lavaan) simply spit the least square estimates for the factor scores, which have some interesting properties (well, they ARE least square estimates and they also maximize factor score and latent factor correlation). But they don't preserve latent variable correlation: even if the model has orthogonal latent variables, the factor scores computed from the solution will correlate.

Correlations between factor scores computed using least square estimates are usually an overestimate of latent factor correlation (compare cor(factor_scores) with inspect(lav_obj, 'std.all')$psi). And the problem goes deeper: we can build factor scores that are orthogonal or better reflect the correlation between latent variables (the 'Anderson' and 'ten Berge' methods in the 'psych' package), but, given the infinite number of possible solutions, there is some arbitrariness in factor score solutions and their correlation matrix.

Given factor scores indeterminacy, I would suggest three alternatives to using (least squares) factor scores:

(1) Apply the latent variable network model from lvnet to single indicator latent variable using some reliability estimate to fix parameters.

(2) Apply the network model directly to the estimated latent variable correlation (or covariance) matrix. Not the best alternative, but still possible.

(3) Evaluate factor score indeterminacy to guarantee the indeterminacy is small enough to be ignored and proceed with the analysis with factor scores anyway.

You can find code & data here; source for the image "apples & oranges": Michael Johnson, flickr.


The post Fixed-margin sampling & networks: New commentary on network replicability appeared first on Psych Networks.


The methodological journal *Multivariate Behavioral Research* just published our latest contribution to the debate surrounding the replicability of psychological networks (the pre-print and code were already available on OSF). To recap, last year, the *Journal of Abnormal Psychology* published a series of four papers:

- A paper claiming networks have limited replicability (Forbes, Wright, Markon, & Krueger, 2017a)
- Our commentary on this paper, showing the networks to replicate well in a comprehensive re-analysis (Borsboom et al., 2017)^{1}
- A commentary by Steinley, Hoffman, Brusco, & Sher (2017), introducing a new method and claiming networks do not differ from what is expected by chance, supporting the arguments of Forbes et al.
- And a rebuttal by the original authors (Forbes, Wright, Markon, & Krueger, 2017b), relying on the work by Steinley et al. (2017) as well as a literature review of PTSD networks to present further evidence that networks have limited replicability.

Papers 1, 2, and to some extent 4 have already been discussed extensively online, and I will not discuss them in detail again here. The Psychosystems group posted a short statement on its blog, Eiko posted a longer blog on the whole process, and I posted a public post-publication peer review on pubpeer (the original authors responded to these, so make sure to read their comments as well as ours to get a fair and balanced overview). We mentioned working on a (critical) commentary on paper 3 in these discussions as well.

In an unprecedented display of scientific integrity, Douglas Steinley himself invited us to submit this commentary to the prestigious methodological journal *Multivariate Behavioral Research* instead of the *Journal of Abnormal Psychology*, which we happily accepted. This brings me to the topic at hand. In this blog post, I will summarize the two main points of our commentary, showing that the conclusions made in paper 3 are unwarranted. Next, I will showcase an example not discussed in our commentary, in which the proposed methodology has strong utility, by re-analyzing a 10-year-old network of the DSM-IV-TR.

The commentary by Steinley et al. (2017) (paper 3) introduces a new method for creating “confidence intervals” for network parameters and descriptives. We term this method “fixed-margin sampling”, as it entails generating new random binary datasets while keeping the margins (row and column totals) intact. These sampled datasets can subsequently be used to create intervals for any statistic. Using this method, the authors conclude that “many of the results are indistinguishable from what would be expected by chance”, label such findings “uninteresting”, and suggest that “previously published findings using [eLasso] should be reevaluated using the above testing procedure.” Forbes et al. (2017b) reiterate the last statement in paper 4: “this finding highlights the central role that Steinley et al.’s (2017) proposed method should have in psychopathology network research going forward.”
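To make the procedure concrete, here is a minimal sketch of fixed-margin sampling via 2×2 “checkerboard” swaps (my own illustration of the general idea, not Steinley et al.'s implementation): flipping a submatrix of the form (1,0 / 0,1) to (0,1 / 1,0) leaves every row and column total intact.

```r
fixed_margin_sample <- function(X, n_swaps = 5000) {
  for (i in seq_len(n_swaps)) {
    rr <- sample(nrow(X), 2)
    cc <- sample(ncol(X), 2)
    sub <- X[rr, cc]
    # a "checkerboard" 2x2 submatrix can be flipped without changing any margin
    if (sub[1, 1] == sub[2, 2] && sub[1, 2] == sub[2, 1] && sub[1, 1] != sub[1, 2]) {
      X[rr, cc] <- 1 - sub
    }
  }
  X
}

set.seed(1)
X <- matrix(rbinom(200, 1, 0.3), nrow = 20)  # toy person-by-symptom matrix
Xs <- fixed_margin_sample(X)
all(rowSums(Xs) == rowSums(X))  # TRUE: symptoms per person preserved
all(colSums(Xs) == colSums(X))  # TRUE: endorsement rates per symptom preserved
```

Repeating this many times and recomputing a statistic on each sampled matrix yields the intervals the method is built on.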

In our new commentary in *Multivariate Behavioral Research*, we show that the work of Steinley et al. (2017) relies on a misinterpretation of psychological networks. The crux of the matter lies in this paragraph:

“Clearly, psychopathology networks fall into the class of affiliation matrices where the connections are measured between observation and diagnostic criteria. The relationships between the criteria are then derived by transforming the two-mode affiliation matrix to a one-mode so-called “overlap/similarity” matrix between the criteria, where traditional network methods are applied to this overlap/similarity matrix.”

Steinley et al. (2017) interpret Ising models used in psychology as one-mode projections of so-called two-mode or bipartite graphs. That means that they interpret a standard person by symptom data matrix:

|       | Depressed mood | Fatigue |
|-------|----------------|---------|
| Bob   | 1              | 1       |
| Alice | 1              | 0       |

To actually encode a network:

Depressed mood — Bob — Fatigue — Alice

Of which the symptom by symptom network is a so-called *projection*:

Depressed mood – Fatigue

That is, depressed mood and fatigue interact with one another because they share one person: Bob. Similarly, Bob and Alice interact with one another because they share one symptom: fatigue. But this is not the intention of the Ising model, which is a model for conditional independencies. In fact, one core assumption in many multivariate statistical models is that the cases (Bob and Alice) are *independent*, which means they do not interact with one another because they share a symptom. The symptom fatigue is also a different property of Alice and Bob, and not an entity in the world they both interact with.
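The toy example can be written out in a few lines (a sketch): the one-mode projection is just the cross-product of the binary person-by-symptom matrix, and its off-diagonal entries count shared persons, which is not the conditional (in)dependence structure the Ising model estimates.

```r
X <- matrix(c(1, 1,    # Bob:   depressed mood and fatigue
              1, 0),   # Alice: depressed mood only
            nrow = 2, byrow = TRUE,
            dimnames = list(c("Bob", "Alice"),
                            c("depressed", "fatigue")))
t(X) %*% X  # one-mode projection: off-diagonal 1 = one shared person (Bob)
```

The projection treats Bob as a link between the two symptoms; the Ising model instead treats Bob and Alice as independent draws from the same symptom network.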

While keeping the column totals intact has little to no effect in generating such data, keeping the row totals (in this case: number of symptoms per person) intact has a striking effect; it leads to highly one-dimensional models used as null-distribution:

This means that due to latent variable – network equivalences, fixed-margin sampling takes a fully connected network model as null-distribution to test estimated network models. Such a procedure will lead to false conclusions on the importance of estimated network parameters. We show in our commentary that the method performs poorly in classifying true effects as interesting and false effects as uninteresting.

Fixed-margin sampling generates data under a particular kind of unidimensionality: a model in which each item is interchangeable (Verhelst, 2008). Such a model is also known as the Rasch model. As the DSM classification of disorders typically treats symptoms as interchangeable, it is interesting to see how well combining fixed-margin sampling with the eLasso Ising estimation method performs as a non-parametric test for the Rasch model. This may be worthwhile, as it would give us insight into where the data diverge from the Rasch model and thus where alternative explanations are warranted (although not required). We investigated this in two simulation studies. In one simulation study, we simulated data under the following model:

By varying the C parameter (correlation between factors), we can change the model from two independent variables (C = 0) to one latent variable (C = 1), and by increasing the R parameter (residual effect), we can add two violations of the one- or two-factor model. The results are as follows:

The colored areas in the background show the probability of flagging the edges related to parameter R as not being in line with the Rasch model. They show that the method works very well in detecting these local violations of the Rasch model. The boxplots show global departures and should be high if all edges are flagged as departures from the Rasch model. This should be the case in the C = 0 condition, but doesn't happen often. This shows that while the method is powerful in detecting local departures from the Rasch model, it is far less powerful in detecting global departures from the Rasch model. As such, I would recommend using this method to gain insight into where unidimensionality does not hold, but not as a test for the Rasch model itself by counting the number of flagged edges.

While fixed-margin sampling should not be used to assess psychometric networks that are based on estimating statistical models from large samples of independent cases (e.g., the Ising model), the method has strong utility in the analysis of one-mode network structures that are derived from bipartite graphs. One such network is actually the first network I ever constructed and analyzed: the DSM-IV-TR network (Borsboom, Cramer, Schmittmann, Epskamp, & Waldorp, 2011):

I worked on this network about 10 years ago as an undergraduate student, long before we even entertained the notion of estimating network models from data. All the code and data used for the network visualizations are still online. To create this network, we created an affiliation matrix of 439 symptoms by 148 disorders, encoding whether a symptom was listed as a symptom of a disorder in the DSM-IV-TR. The data are simply a 439-row by 148-column matrix, with 0 indicating a symptom is not listed in a disorder and 1 indicating a symptom is listed in a disorder. This dataset can subsequently be transformed into a 439 by 439 adjacency matrix, encoding whether two symptoms are both listed in at least one shared disorder, by multiplying the data matrix with its transpose and making every non-zero element one^{2}.
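In R, the transformation described above amounts to a matrix product followed by dichotomization. A sketch, with a random placeholder matrix standing in for the actual DSM-IV-TR data:

```
# Placeholder 0/1 affiliation matrix: 439 symptoms x 148 disorders
set.seed(1)
A <- matrix(rbinom(439 * 148, 1, 0.02), nrow = 439, ncol = 148)

# Project to a 439 x 439 symptom-by-symptom adjacency matrix
adj <- A %*% t(A)   # entry (i, j): number of disorders listing both symptoms
diag(adj) <- 0      # remove self-loops
adj[adj > 0] <- 1   # edge = at least one shared disorder
```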

While the dataset used for this network looks similar to a dataset you may use when estimating an Ising model (zeroes and ones), it is actually a very different kind of data. In an Ising model, the more cases we add, the more precise our estimates of the network model become: if we doubled the sample size from 10,000 to 20,000, we would not expect a completely different model, merely to be able to estimate the parameters even more precisely. In the DSM-IV-TR affiliation matrix, however, this is not the case: doubling the number of symptoms listed will fundamentally change the interpretation of the model (doubling the number of nodes), and doubling the number of disorders listed will fundamentally change the structure of the network. We also cannot do this, as we already listed all symptoms and disorders from the DSM-IV-TR. Rather than columns representing random stochastic variables and rows representing independent realizations, the columns and rows both simply represent static entities: words in a book. The network structure is simply a description of this book, and the equivalent of adding more cases would be to study more books (e.g., Tio, Epskamp, Noordhof, & Borsboom, 2016).

This means we also cannot bootstrap the dataset, as resampling symptoms with replacement or dropping symptoms hardly makes sense. So what can we do? The fixed-margin sampling method described by Steinley et al. (2017) actually gives us a very nice new tool for investigating such structures. Given that some symptoms are listed in many disorders (e.g., insomnia is listed in 17 disorders), and some disorders feature many symptoms (e.g., Schizoaffective Disorder lists 33 symptoms), we would expect certain levels of connectivity by chance alone. If that is the case, the network structure itself is not very *interesting*, and investigating the symptom and disorder sum totals would be sufficient by itself.

I re-investigated the dataset using fixed-margin sampling and constructed 1,000 networks (code available here). These are three random samples of the generated networks:

In this case, there is no need for any quantitative analysis and the plots themselves already reveal a remarkable difference between the networks expected by chance alone and the network observed in the DSM-IV-TR: the fixed-margin sampling networks are far denser (more edges) and more interconnected. This means that we can conclude that there is structure in the DSM-IV-TR, and symptoms are not randomly assigned to disorders. Of course, there is a structure in the DSM-IV-TR imposed by the chapters alone (e.g., mood disorders, personality disorders, etcetera). A follow-up analysis could be to split up the data per chapter, apply fixed-margin sampling to each block, and subsequently combine the data again. Three snapshots of these networks are as follows:

These look **much** more similar to the observed DSM-IV-TR network, which means that the clustering per chapters already explains a lot of the structure. However, these networks are still denser (number of edges ranging from 3,513 to 3,674, compared to 2,626 in the observed network), meaning that investigating the graph structure is still interesting. When looking at strength centrality, we can see that in the high ranges of strength centrality the observed node strengths are *less* than could be expected by chance:

Here, red dots indicate nodes with a strength that was not in the expected interval by fixed-margin sampling.

To conclude, our new manuscript shows that the fixed-margin sampling routine proposed by Steinley et al. (2017) should not be used to evaluate psychometric network models, but shows promise in detecting local departures from Rasch models. Furthermore, fixed-margin sampling is highly valuable for analyzing typical network structures that are constructed rather than estimated. I think that the combination of our commentary in the *Journal of Abnormal Psychology* last year (Borsboom et al., 2017) and the new commentary discussed in this blog post safely puts most of the criticism raised in last year's series of papers to rest, and I look forward to moving this discussion on to the crucial challenges network analysis faces in the coming years, of which there are many (see, e.g., comment #5 on the pubpeer discussion, several publications on challenges to network analysis, and continued debate on the interpretation of networks).

If you would like to study fixed-margin sampling yourself, all code for our simulations is available on the Open Science Framework. Note, however, that fully replicating the analysis as proposed by Steinley et al. (2017) requires both R and Matlab. For R-based alternatives, the R packages RaschSampler and vegan should have similar performance.
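As a rough illustration of what an R-only workflow could look like with vegan (treating its quasiswap null model as an approximate stand-in for the routine of Steinley et al.; the data here are random placeholders):

```
library(vegan)

# Placeholder binary data matrix (rows = cases, columns = items)
set.seed(1)
dat <- matrix(rbinom(200 * 10, 1, 0.3), nrow = 200)

# Generate datasets with the same row and column totals as 'dat'
nm <- nullmodel(dat, "quasiswap")
sims <- simulate(nm, nsim = 1000)

# Each slice sims[, , i] is one fixed-margin sample; compute any
# statistic of interest on each slice to form a reference interval
all(colSums(sims[, , 1]) == colSums(dat))  # column margins preserved
all(rowSums(sims[, , 1]) == rowSums(dat))  # row margins preserved
```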

Borsboom, D., Cramer, A. O. J., Schmittmann, V. D., Epskamp, S., & Waldorp, L. J. (2011). The Small World of Psychopathology. *PLoS ONE*, *6*(11), e27407.

Borsboom, D., Fried, E., Epskamp, S., Waldorp, L., Van Borkulo, C., Van Der Maas, H., & Cramer, A. (2017). False alarm? A comprehensive reanalysis of “Evidence that psychopathology symptom networks have limited replicability” by Forbes, Wright, Markon, and Krueger. *Journal of Abnormal Psychology*, *126*(7), 989–999. http://doi.org/10.17605/OSF.IO/TGEZ8

Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. (2017a). Evidence that Psychopathology Symptom Networks have Limited Replicability. *Journal of Abnormal Psychology*, *126*(7), 969–988. http://doi.org/10.1037/abn0000276

Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. (2017b). Further evidence that psychopathology networks have limited replicability and utility: Response to Borsboom et al. and Steinley et al. *Journal of Abnormal Psychology*, *126*(7), 1011–1016.

Steinley, D., Hoffman, M., Brusco, M. J., & Sher, K. J. (2017). A Method for Making Inferences in Network Analysis: Comment on Forbes, Wright, Markon, and Krueger (2017). *Journal of Abnormal Psychology*, *126*(7), 1000–1010.

Tio, P., Epskamp, S., Noordhof, A., & Borsboom, D. (2016). Mapping the manuals of madness: Comparing the ICD-10 and DSM-IV-TR using a network approach. *International Journal of Methods in Psychiatric Research*, *25*(4), 267–276. http://doi.org/10.1002/mpr.1503

Verhelst, N. D. (2008). An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals. *Psychometrika*, *73*(4), 705–728. http://doi.org/10.1007/s11336-008-9062-3


The post Fixed-margin sampling & networks: New commentary on network replicability appeared first on Psych Networks.

The post (Mis)interpreting Networks: An Abbreviated Tutorial on Visualizations appeared first on Psych Networks.

Network analysis is an exploding field! I absolutely love seeing the constant flow of new papers and new researchers using network methods.

With such a quickly growing science, it’s difficult to keep up! Although I have personally found the network community to be very welcoming, friendly, open, and accessible, that doesn’t negate the fact that there is just a lot of information to keep up with.

As I work to keep up and learn new information, I’ve become aware of some mistakes I made early on. This tutorial is intended to keep you from making the same mistakes that I did.

At this point, I’ve seen at least a few dozen symposium presentations on network analysis, many of them from researchers just starting out with network analysis. Here are some of the most frequent errors:

**Error #1: interpreting a node's position as its centrality**

“The somatic symptoms of depression were out on the periphery, barely part of the network”

“Extraversion is right in the middle of the personality network”

This misinterpretation pops up all the time. I blame linguistics.

In reality, there are several different types of node centrality, and none of them necessarily correspond to network plots. You have your centrality values and centrality plots—use those instead of looking at the network plot. Eiko wrote about this and similar centrality interpretation problems in a recent blog post.

**Error #2: interpreting the distance between nodes as similarity**

“As you can see, sad mood and agitation were on opposite ends of the network”

“Surprisingly, weight gain and weight loss were right next to each other”

Again, not so. A good way to reality check is to look at the edges: if node distance corresponds perfectly to node similarity, all edges of a certain thickness should have exactly the same length, and all edges of the same length should have the same thickness (hint: that’s rarely if ever true).
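If you want to check this numerically rather than by eye, one rough approach is to correlate edge lengths with edge weights. This sketch assumes `myqgraph` is a qgraph object (such as the one created in the code later in this post) and uses its `$layout` and `$Edgelist` components:

```
# Does edge length track edge weight in a spring layout? Usually not.
coords <- myqgraph$layout   # x/y position of each node
el <- myqgraph$Edgelist     # from, to, and weight of each edge
lengths <- sqrt(rowSums((coords[el$from, ] - coords[el$to, ])^2))
cor(lengths, abs(el$weight))
```

If node distance really encoded similarity, this correlation would be close to -1; in practice it rarely is.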

**Error #3: interpreting a node's absolute position (left, right, top, bottom)**

“Intrusive thoughts were far to the right, close to the depression cluster”

This one is rarer but pops up occasionally. I see people make this error especially when there are meaningful clusters. Resist temptation; if you want to know if a node is “close to the depression cluster”, use bridge centrality instead.

**Error #4: comparing networks visually to judge replicability**

“As you can see from the plots, the networks did not replicate well, indicating that edges in network analyses are mostly comprised of measurement error”

*Cough*. Relating to Error #3, in most network plots rotation is totally arbitrary (the enemy’s gate is down!). In addition, certain types of network plots (e.g., force-directed) are very unstable even with similar networks. This can wreak some serious havoc when trying to interpret multiple networks.

In my experience it’s much more informative to use a correlational approach (e.g., do the edges correlate? does centrality correlate?) to judge replicability (Eiko discussed these and similar metrics in the section “A word of caution” in this blog post). For plotting, it’s best to either use a consistent averaged layout for both plots or the Procrustes method (see Figure 6 in the full tutorial).

One way to fix the interpretation problem is to stop making any visual interpretation! Certainly, we shouldn’t pretend we understand 20-dimensional causal information just because we made a 2-dimensional plot of partial correlations (!).

But the whole point of a visualization is to help us understand our data better. And although we should stick to the numbers for our research conclusions, there is something to be said for exploratory hypothesis generation that comes from good visualizations (as long as you don’t pretend that these hypotheses were confirmatory all along).

So our second option is to try and do the best we can to make accurate visualizations, while simultaneously reining ourselves in with visual interpretations. Here is a super quick overview of some of the options.

This is a short version of the open-access tutorial. You’ll need the qgraph and networktools packages for the code to work, and we’ll get some data from package MPsychoR. First some code for getting a network:

```
library("qgraph")
library("networktools")
library("MPsychoR")

# Load the Rogers dataset from MPsychoR
data(Rogers)

# Estimate a regularized partial correlation network and plot it
# with the default force-directed ("spring") layout
mynetwork <- EBICglasso(cor_auto(Rogers), nrow(Rogers))
myqgraph <- qgraph(mynetwork, layout = "spring")
```

Most networks you see "in the wild" are plotted with the Fruchterman-Reingold algorithm. This algorithm works by treating each network edge like a spring: it pulls when connected nodes get too far apart and pushes when they get too close.

This creates really nice-looking networks in which nodes never overlap, and edges are mostly about the same length (the "resting state" for the spring forces). In very sparse networks, it can be a good way to visualize clusters. But all of the Big Four are dangerous here.

`MDSnet(myqgraph, MDSadj=mynetwork)`

Multidimensional scaling solves our Error #2—distances between nodes actually become interpretable in an MDS plot. In other words, the algorithm works so that nodes placed close together usually share a strong relationship, and nodes far apart do not. This is, of course, accounting for the fact that we've squashed everything down into just two dimensions—so stay careful with interpretations!

`PCAnet(myqgraph, cormat=cor_auto(Rogers))`

You've probably heard of PCA—but for plotting a network? PCA is a simplification method—it tries to squash all of your complex data down into just a few variables. This is perfect for us, because our plots have only two (count 'em) dimensions! The idea here is that we give each node a score on Component #1 and on Component #2, and then use these scores to plot on an X/Y axis (this solves Error #3). We preserve complexity in the form of network edges but make the plot as simple as two principal components. If you're feeling adventurous, you could even come up with labels for what the dimensions might mean.

`EIGENnet(myqgraph)`

If you liked PCA, you're in for a treat with eigenmodels. PCA is great, but it requires that you either have the original data or a correlation matrix from that original data. In other words, PCA isn't really based on your network per se, it's just based on the same data that generated the network. Thankfully, someone (https://www.stat.washington.edu/~pdhoff/code.php) came up with a way to extract latent variables from symmetric relational data (AKA undirected network data). The interpretation is similar to PCA plotting, but everything comes straight from the network itself.

And that's it!

If you liked the abbreviated version, you can check out the full tutorial for a deeper look at the same concepts and some more sophisticated code. Happy visualizations!

Citation:

Jones, P. J., Mair, P., & McNally, R. (2018). Visualizing Psychological Networks: A Tutorial in R. Frontiers in Psychology, 9, 1742. https://doi.org/10.3389/fpsyg.2018.01742


The post How to interpret centrality values in network structures (not) appeared first on Psych Networks.

- A paper by Madhoo & Levin 2016 prompted me to write a tutorial on community detection
- A paper by Terluin et al. 2016 led to a blog post on differential variability
- A paper by Afzali et al. 2017 prompted me to write a tutorial on network stability
- A paper by Guloksuz et al. 2017 led me to write a piece on challenges of the network approach
- And a paper by Forbes et al. 2017 led to another blog post on stability

I wanted to write about centrality inference for a while, and a new paper published in *Molecular Psychiatry*, one of the leading journals in psychiatry in terms of impact factor and visibility, convinced me I should write this up. The paper is entitled “The symptom network structure of depressive symptoms in late-life: Results from a European population study”, by Murri and colleagues. This paper is written up in a similar way to many other papers, and I really don’t mean to single out this specific paper or the specific authors here. It just comes at a time when I don’t want to prepare my course or review the paper for Abnormal … so here we go.

After estimating network structures, e.g. among symptoms, in between-subjects (cross-sectional) or within-subjects (time-series) data^{1}, researchers often calculate centrality estimates. This provides information about the inter-connectedness of a variable. There are different ways to do that, and many different centrality measures exist.

For instance, this R syntax creates a small network, and shows that the green node has a centrality of 6 because it is connected to 6 other variables:

```
library("qgraph")

# Build the adjacency matrix of a small undirected network (10 nodes)
AM <- matrix(0, 10, 10)
AM[1,2] <- AM[2,1] <- AM[2,3] <- AM[3,2] <- AM[2,4] <- AM[4,2] <-
  AM[3,4] <- AM[4,3] <- AM[3,6] <- AM[6,3] <- AM[3,8] <- AM[8,3] <-
  AM[3,9] <- AM[9,3] <- AM[3,10] <- AM[10,3] <- AM[4,7] <- AM[7,4] <-
  AM[5,7] <- AM[7,5] <- AM[5,10] <- AM[10,5] <- AM[9,10] <- AM[10,9] <- 1

# Color node 3 (the green node) differently from the rest
gr <- list(c(1, 2, 4:10), 3)

# Node labels show each node's degree (number of connections)
names <- c("1", "3", "6", "3", "2", "1", "2", "1", "2", "3")

N <- qgraph(AM, groups = gr, color = c('#cccccc', '#3CB371'), labels = names,
            border.width = 3, edge.width = 2, vsize = 9,
            border.color = '#555555', edge.color = "#555555",
            label.color = "#555555")
```

In the abstract of their paper, Murri et al. conclude, after estimating a network structure in cross-sectional data:

*Death wishes, depressed mood, loss of interest, and pessimism had the highest values of centrality. Insomnia, fatigue and appetite changes had lower centrality values […]. In conclusion, death wishes, depressed mood, loss of interest, and pessimism constitute the “backbone” that sustains depressive symptoms in late-life. Symptoms central to the network of depressive symptoms may be used as targets for novel, focused interventions and in studies investigating neurobiological processes central to late-life depression.*

I am not sure this necessarily follows, and I will explain below why.

Researchers often estimate centrality values after the network structures are estimated, and then use these to draw substantive inferences. One common inference in cross-sectional data is that central symptoms are the most important symptoms, another that we should intervene on central symptoms. Murri et al. above describe that central symptoms "sustain" depression, which has a clear temporal component to it.

There are a number of statistical and substantive concerns you should keep in mind here. And just to clarify this again, this is not an exercise in finger-pointing. While I have always tried to be careful, and while my colleagues will tell you how careful I try to be when it comes to causal language (thanks in large part to my education as a postdoc in the lab of Francis Tuerlinckx), I am sure a few sentences have slipped through my fingers in papers I am a co-author on. In my own work, the strongest statement I could find is in my first network paper, where we found loneliness to play a crucial role in bereavement, in longitudinal data. In the abstract, we concluded that "future studies should examine interventions that directly target such symptoms". In the discussion, we wrote "that intervention programs should directly target loneliness". This is supported by and embedded in the clinical literature on the relation between loneliness and bereavement, but writing this paper today I would clarify that this conclusion does not follow from the network model alone.

So what are the main concerns and pitfalls when interpreting centrality values?

For my very first paper published in 2014, I analyzed the relations between 14 specific depression symptoms and impairment. It turned out that some symptoms explained a lot more variance than other symptoms. One of the reviewers raised the concern of *differential variability*, which I have not forgotten since. Differential variability means that items differ in their variability (standard deviation and variance), and items with little to no variability cannot relate to other variables. Since centrality is a function of relations among items, such floor or ceiling effects that can stem from differential variability will affect centrality estimates. Terluin et al. 2016 wrote a paper specifically about this for network models, which I discuss in more detail elsewhere. This means that when you estimate centrality, you should consider checking means and variances of items, and try to understand how these values determine network structures and centrality estimates. In other words, what happens if you correlate centrality values and standard deviations of your items … and they result in a correlation of 0.5? It's worth thinking about this.
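One way to run that check, sketched here with qgraph's `centrality_auto()` and assuming `mydata` is a placeholder for your raw item data:

```
library(qgraph)

# Estimate the network and extract strength centrality
net <- EBICglasso(cor_auto(mydata), nrow(mydata))
strength <- centrality_auto(net)$node.centrality$Strength

# Do items with more variability end up more central?
item_sds <- apply(mydata, 2, sd, na.rm = TRUE)
cor(strength, item_sds)
```

A substantial correlation here does not prove the network is an artifact, but it should make you think hard about differential variability before interpreting centrality.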

Another issue is reliable estimation: Are you sure the most central symptom in your network is actually meaningfully more central than the other symptoms? This is similar to other statistics, where a mean height of 177 cm in a group of men vs. 171 cm in a group of women does not tell you whether there is a meaningful (or statistically significant) difference between the heights of men and women — unless you know the sample sizes and distributions. You can test this statistically, and probably want to do that before drawing inferences. We describe here how to do that in detail, via the centrality difference test.
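In bootnet, such a test could look roughly like this (`mydata` and the node names are placeholders; see the bootnet documentation for details):

```
library(bootnet)

# Estimate the network and bootstrap it
net <- estimateNetwork(mydata, default = "EBICglasso")
boots <- bootnet(net, nBoots = 1000, statistics = "strength")

# Is "node_A" significantly more central than "node_B"?
differenceTest(boots, "node_A", "node_B", "strength")
```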

What if 3 nodes in your network actually measure the same latent variable, such as the CES-D items that capture sad mood, feeling blue, and feeling depressed? Your network will feature strong edges between these nodes, and their centrality will be very high, but intervening on one to decrease the others would not be a real "network intervention", because all you would do is reduce sadness by intervening on sadness. That may be interesting all by itself, but the edges between these 3 items are not legitimate putative causal relations: They are simply shared variances due to measuring the same thing multiple times, as we highlight (along with a potential solution) in our challenges paper published in *Perspectives on Psychological Science*.

There is also the danger of conditioning on colliders or other estimation problems. Conditioning on colliders, for instance, will induce artificial edges in your network that are not part of the true model. In other words, be careful not to confuse an estimated parameter (like an edge weight) for the truth … obviously, this applies to all models, and not only network models.

Finally, there is the issue of mixing levels. Network models in cross-sectional data are estimated on between-subjects data, and as has been highlighted in recent work, it does not automatically follow that such results lead to proper conclusions regarding the within-subjects level. I am not saying it never follows, and I think these levels might align quite often, but it is an empirical question we have not yet answered. Here is a fairly strongly worded recent investigation by Fisher et al., published in *PNAS*, showing that there are important differences between these levels; Simpson's paradox is also highly relevant in this context.

Now let's assume our network is estimated without any problems or bias, and concentration problems is the most central depression symptom in the network structure of symptoms based on cross-sectional data. Can we conclude that it is the "most important" problem, and that we should focus our interventions on concentration problems?

As we wrote up in the discussion section of a recent paper we published in *Clinical Psychology Science*, this conclusion does not necessarily follow, for various reasons.

*It is important to highlight that centrality does not automatically translate to clinical relevance and that highly central symptoms are not automatically viable intervention targets. Suppose a symptom is central because it is the causal endpoint for many pathways in the data: Intervening on such a product of a causal chain would not lead to any changes in the system.*

In other words, the endpoint of a causal chain would end up being a highly central symptom in your network structure *if* there are many problems that lead to this specific symptom. Given the cross-sectional nature of your data, you cannot find evidence for this temporal relationship, and will, in this case, draw wrong causal inferences that do not follow from the results. So I urge caution with these and similar interpretations.

*Another possibility is that undirected edges imply feedback loops (i.e., A — B comes from A ⇄ B), in which case a highly central symptom such as insomnia would feature many of these loops. This would make it an intervention target that would have a strong effect on the network if it succeeded — but an intervention with a low success probability, because feedback loops that lead back into insomnia would turn the symptom “on” again after we switch it “off” in therapy.*

Put differently, it is an important and non-trivial question whether it might be worthwhile to intervene on peripheral (non-central) symptoms, because the probability of switching them off permanently is higher: few other symptoms will keep them in their original state. If sleep problems lead to 5 other problems, but are at the same time the consequence of 5 problems, it will be nearly impossible to simply target insomnia via interventions, because you do not target the *causes* of insomnia.

*A third example is that a symptom with the lowest centrality, unconnected to most other symptoms, might still be one of the most important clinical features. No clinician would disregard suicidal ideation or paranoid delusions as unimportant just because they have low centrality values in a network. Another possibility is that a symptom is indeed highly central and causally affects many other nodes in the network but might be very difficult to target in interventions. As discussed by Robinaugh, Millner, and McNally (2016), “Nodes may vary in the extent to which they are amenable to change” (p. 755).*

I believe these are significant challenges to common centrality interpretations. We conclude in the paper by stating:

*In sum, centrality is a metric that needs to be interpreted with great care and in the context of what we know about the sample, the network characteristics, and its elements. If we had to put our money on selecting a clinical feature as an intervention target in the absence of all other clinical information, however, choosing the most central node might be a viable heuristic.*

There is other critical work on centrality on the way. One paper that is accepted in the *Journal of Consulting and Clinical Psychology*, by Rodebaugh and colleagues, features a detailed empirical investigation of centrality in both cross-sectional and time-series data. You can find the preprint here. The most relevant parts of the abstract read:

*We first estimated a state-of-the-art regularized partial correlation network based on participants with social anxiety disorder (N = 910) to determine which symptoms were more central. Next, we tested whether change in these central symptoms were indeed more related to overall symptom change in a separate dataset of participants with social anxiety disorder who underwent a variety of treatments (N = 244). […] Centrality indices successfully predicted how strongly changes in items correlated with change in the remainder of the items. Findings were limited to the measure used in the network and did not generalize to three other measures related to social anxiety severity. In contrast, infrequency of endorsement ^{2} showed associations across all measures. […] The transfer of recently published results from cross-sectional network analyses to treatment data is unlikely to be straightforward.*

The whole idea of network theory is that things are complicated. We should draw inferences proportional to this level of complexity, and be careful of over-interpreting our data. Obviously, this is just as important for time-series analyses, where we have time as an additional (and very important) dimension, but that only buys us Granger-causality, and only helps with a few of the issues described above.

A crucial step forward is to actually *test interventions* in patients based on centrality (and other) estimates, and I am excited to see such projects putting network theory to the test; this also provides a fantastic opportunity for falsifying network theory, which we should all embrace.

**EDIT 11-29-2018: **

There are two new preprints discussing centrality inferences critically, which you can find here (Dablander & Hinne, 2018) and here (Bringmann et al., 2018).


The post Bootstrapping edges after regularization: clarifications & tutorial appeared first on Psych Networks.

One of the core features of the R package bootnet is bootstrapping of network edge weights. Bootstrapping is a procedure where you estimate your network structure and parameters of interest many times (e.g. 1000), each time with a slightly different sample. You obtain these different samples by drawing people from your data randomly with replacement. This means that in your first bootstrap, Bob might be in there 3 times (but not Alice), whereas in the second bootstrap, Alice is in there twice (but Bob is absent). The larger the sample, and the more similar the people to each other, the more stable your parameters will be.

We put together bootnet to give you an idea about the stability of the edge weights and other parameters. If the edge between two nodes, A and B, is widely different every time you resample, it means your bootstrapped 95% CI will be all over the place. I have described bootnet and its functionalities in a previous tutorial blog post, and we have a tutorial paper on bootnet that was published in 2017.
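In code, the basic workflow is roughly as follows (see the tutorial paper for the canonical version; `mydata` is a placeholder for your raw data):

```
library(bootnet)

# Estimate the network, then bootstrap edge weights by resampling
# rows (people) with replacement
net <- estimateNetwork(mydata, default = "EBICglasso")
boots <- bootnet(net, nBoots = 1000, type = "nonparametric")

# Plot bootstrapped intervals around each edge weight
plot(boots, labels = FALSE, order = "sample")
```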

Here, for instance, is the output of the `bootnet()` function from our recent CPS paper for dataset 1, available in the supplementary materials:

On the y-axis are all 120 edges in the network (labels omitted to keep it legible), the x-axis shows the strength of the edge weights (you can see that nearly all edge weights are positive). The red dots are the point estimates of all edges, and the grey area is the “95% bootstrapped CI”, as I used to call it. The important point is that many of the CIs will just overlap with zero. How do we interpret this? Usually, if a point estimate of a parameter is 0.1 (e.g. a correlation), we do not know whether that parameter is different from zero. This is a normal situation in statistics, and the reason why we usually look at the CI coverage: if the CI includes 0, we interpret the parameter as likely not being different from 0.

In the case of regularized partial correlation networks, the story is different. If an edge is 0.1 *after regularization*, that means we have two types of information about the parameter: 1) our best guess is that the parameter is 0.1; 2) our best guess is that the parameter is different from 0.

Why? Because we use regularization, a well-validated, sophisticated statistical technique that keeps only those coefficients in the network that are not zero^{1}. Obviously, regularization can still lead to errors, there are situations in which regularization does not do well, and there are numerous other methods that should be considered when estimating networks (for a summary on these points, see Sacha Epskamp’s recent blog post). But the main point is that we have to interpret the 95% CI of regularized edge weights differently than we usually do.
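To see what regularization does to edges, here is a hypothetical sketch using scikit-learn's `GraphicalLasso` on simulated data (not the R glasso routine that bootnet uses, but the same lasso idea): the penalty shrinks partial correlations toward zero and, when strong enough, sets many of them to exactly zero.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Toy data: 300 observations of 5 variables, with one true dependency
n, p = 300, 5
X = rng.normal(size=(n, p))
X[:, 1] += 0.5 * X[:, 0]

def partial_correlations(precision):
    """Convert a precision matrix to a partial correlation matrix."""
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# A weak and a strong penalty: the lasso shrinks all edges toward zero,
# and with a strong enough penalty sets most of them to exactly zero
weak = GraphicalLasso(alpha=0.01).fit(X)
strong = GraphicalLasso(alpha=0.5).fit(X)

edges_weak = np.count_nonzero(np.triu(partial_correlations(weak.precision_), k=1))
edges_strong = np.count_nonzero(np.triu(partial_correlations(strong.precision_), k=1))
```

An edge that survives this penalty carries the information described above: the estimate itself, plus the lasso's judgment that the parameter is not zero.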

For the supplementary materials of a network paper on depression symptoms & inflammation that we are about to submit, Jonas Haslbeck helped us look at this topic from a somewhat different angle and provided some additional insights. It would take me more sentences to reiterate what Jonas said very concisely, so I will simply paste the relevant part of the supplementary materials here:

“In order to quantify the uncertainty associated with all edge-estimates, we computed a bootstrapped sampling distribution based on 100 bootstrap samples, for each of the edge-estimates. For each of the six networks estimated in the main article we present summaries of the p(p-1)/2 bootstrapped sampling distributions, one for each edge parameter. Specifically, we display the 5% and 95% quantiles of the bootstrapped sampling distribution and show the proportion of nonzero estimates at the point that indicates the mean of the sampling distribution.

Because we use regularization to estimate the network models, all edge-estimates are biased towards zero, which implies that all sampling distributions are biased towards zero. Thus, these sampling distributions are not Confidence Intervals (CIs) centered on the true (unbiased) parameter value. This means that if the quantiles of the bootstrapped sampling distribution overlap with zero, it could be that the corresponding CI does not overlap with zero. However, if the quantiles of the bootstrapped sampling distribution do not overlap with zero, we know that the corresponding CI does not overlap with zero either (explained in detail in Epskamp, Borsboom & Fried, 2017). Further details of the bootstrap analyses are available in the supplemented R code.”

Jonas also produced the following plot^{2}:

The numbers show how often an edge was estimated non-zero in the 100 bootstraps. As you can see, the edge C2—C7 was included in all networks, and while the 95% bootstrapped CI of D1—C1 does include zero, it was estimated to be non-zero in 78% of the 100 estimated networks. The code for these plots can be found in the supplementary materials of our paper.
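The three summaries described in the supplement can be computed directly from the stored bootstrap estimates of a single edge. Here is a minimal sketch in Python, with a fabricated sampling distribution standing in for real bootstrap output (the thresholding line only mimics how regularization produces exact zeros):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fake bootstrap output for one edge across 100 bootstraps:
# shrunken values, with small estimates set to exactly zero,
# roughly mimicking what regularization does
boot_estimates = rng.normal(0.08, 0.05, size=100)
boot_estimates[boot_estimates < 0.02] = 0.0

# The three summaries from the supplement: the 5% and 95% quantiles
# of the bootstrapped sampling distribution, and the proportion of
# bootstraps in which the edge was estimated to be nonzero
q05, q95 = np.quantile(boot_estimates, [0.05, 0.95])
prop_nonzero = np.mean(boot_estimates != 0.0)
```

For the D1—C1 edge described above, `prop_nonzero` would be 0.78 even though the quantile interval touches zero, which is exactly the distinction the plot is designed to show.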

And as announced a while ago on Twitter, Sacha has recently implemented a function somewhat similar to Jonas’s in bootnet 1.1. This version of bootnet is currently available on GitHub, and should be on CRAN soon. As Sacha explained in the blog, you can now plot the quantile intervals only for the times the parameter was not set to zero, in addition to a box indicating how often the parameter was set to zero.

```
# Install the development version of bootnet from GitHub
install.packages("devtools")
library("devtools")
install_github("sachaepskamp/bootnet")
library("bootnet")
library("psych")
data(bfi)  # Big Five inventory data from the psych package
# Estimate a regularized partial correlation network on the first 5 items
network1 <- estimateNetwork(bfi[, 1:5], default = "glasso")
# Bootstrap the edge weights 500 times, using 8 cores in parallel
boot1 <- bootnet(network1, nBoots = 500, nCores = 8)
# Plot the quantile intervals; split0 = TRUE separates out the proportion
# of bootstraps in which each edge was set to exactly zero
plot(boot1, plot = "interval", split0 = TRUE, order = "sample", labels = FALSE)
```

The above code will lead to a few warnings (ignore for the purpose of this tutorial^{3}), and leads to the following figure:

The saturation is proportional to how often an edge was included in the network. The figure doesn't scale too well at present (i.e. to more than 5 or 10 nodes), but it's something you'd likely report in the supplementary materials anyway, and not in the main part of your paper.

Thanks to Sacha and Jonas for the work they've put into this. Oh, and you know what's also new? Bootnet estimates and tells you how long your coffee break should be ...


The post APS 2018: Collection of all network presentations appeared first on Psych Networks.

Large conferences like APS can sometimes be too generalist and too broad, lacking detailed information and a focus on specialist questions. This was not the case at APS when it came to network modeling. I counted 5 symposia on network-related topics, and several more in which network talks were featured. It felt like a small mini-conference on networks, with a ton of familiar faces in the symposia, and also a ton of new faces. There were many insightful talks, as well as in-depth discussions with very well informed audience members. Together, the presentations covered a very wide range of topics, ranging from the equivalence between factor and network models, nomothetic analyses in cross-sectional and longitudinal data, idiographic analyses in time-series data, and clinical trials with interventions based on network models, all the way to numerous statistical extensions such as novel centrality indices or entirely new network models.

I was most excited to see several talks about the integration of 1) new data assessment strategies (e.g. via smartwatches); 2) new methodological tools to analyze such data; and 3) empirical studies in everyday clinical practice. When we submitted our own symposium on the topic to APS, we ended the symposium title with a question mark: “From description to intervention: Can network models based on ambulatory assessments provide novel treatment targets?” At APS, I realized how many groups in the world are working at this intersection of methodology and clinical psychology. This is crucial: it will allow us to actually test and try to falsify network theory, and will show us how useful the framework really is. But talks were not limited to clinical psychology and methodology – personality is also becoming a hot topic that network models are applied to more often.

Here is a brief summary of all talks & presenters; the order is simply the order by which I obtained the presentations. You can find all slides on the Open Science Framework^{1}.

- Richard McNally | Bayesian Network Analyses of Symptoms in Patients with Bipolar Disorder
- Riet van Bork | Simplicity of networks and factor models
- Adela Isvoranu | Big5 In Schizophrenia: Personality through Exploratory Structural Equation Modeling and Network Analysis
- Alexandre Heeren | Mapping Network Connectivity among Symptoms of Social Anxiety and Comorbid Depression in People with Social Anxiety Disorder
- Sacha Epskamp | Intra-individual Networks and Latent Variable Models
- Sacha Epskamp | Personalized Networks in Clinical Practices: Recent developments, Challenges and Future Directions.
- Emorie Beck | Idiographic Personality: A Methodological Perspective on Measuring and Modeling Individuals
- Adriene Beltz | Behavioral Networks in Oral Contraceptive Users: Exploring Ovarian Hormone Links to Gendered Cognition and Personality Qualities
- Casper Albers | Changing Individuals: Modelling smooth and sudden changes in temporal dynamics
- Aaron Fisher | Data-Driven Case Conceptualization: Applying Research to Routine Care
- Payton Jones | Bridge centrality: identifying bridge symptoms in psychopathology networks
- Charlotte Vrijen | Personalized interventions based on experience sampling can effectively improve pleasure
- Tim Kaiser | Intersession Processes in Psychotherapy
- Oisín Ryan | Centrality and Interventions in Continuous-Time Dynamical Networks
- Siwei Liu | Can We Use the Random Effects Estimates in Multilevel Models to Characterize Individuals?
- Benjamin Bellet | Bereavement Outcomes as Causal Systems
- Angelique Cramer | The baby and the bathwater: The promise of both nomothetic and idiographic (network) modeling
- Aidan Wright | Toward an Individualized Psychology: Promises and Challenges in Modeling the Individual
- Katherine Jonas | A Comparison of Network Models and Latent Variable Models for Longitudinal Data
- Date van der Veen | Prel@pse – Preventing Relapse in OCD, a proof of principle study
- Julia Möller | Mixed emotions about school: A co-endorsement network analysis of positive and negative emotions

In case you are interested, I wrote two other posts about APS 2018 on my personal blog: The first covers issues with transparency, inclusion, and open science at APS; the second summarizes our APS symposium entitled “Measurement Schmeasurement”, featuring talks by Jessica Flake, Mijke Rhemtulla, Andre Wang, Scott Lilienfeld, and yours truly.

I also want to highlight briefly that psych-networks.com is transitioning more into a community platform, featuring many guest bloggers. I want this site to become a hub of communication among network researchers in psychology, where they can post new papers, new ideas, discuss hot topics, and so forth. So if you want to write something, please contact me, and I’d be very happy to see if we can make it work! This opportunity is meant for everybody, from very early career researchers all the way to professors. Since most guest bloggers last year were male, I would like to feature more female guest bloggers … help me make it happen!


The post New paper on the role of stabilizing and communicating symptoms appeared first on Psych Networks.

As two graduate students in the Psychological Methods department at the University of Amsterdam, we were familiarized with the work of Cramer and Borsboom on conceptualizing mental disorders as complex networks of interacting symptoms. This conceptualization emphasizes the role of symptoms and their interactions within and across disorders, and has inspired novel theoretical definitions of clinical concepts such as core symptoms and comorbidity^{1}.

We often found ourselves discussing the potential of tools and metrics from other research areas that use network analytic techniques. In the summer of 2016 we came across Santo Fortunato’s Community detection in graphs (2010) – an excellent paper on various applications and implications of network analytic techniques^{2}. One specific sentence caught our attention:

“Identifying modules and their boundaries allows for a classification of vertices, according to their structural position in the modules. So, vertices with a central position in their clusters, i.e. sharing a large number of edges with the other group partners, may have an important function of control and stability within the group; vertices lying at the boundaries between modules play an important role of mediation and lead the relationships and exchanges between different communities.” (p. 3)

Reading this passage immediately sparked a discussion on the numerous possibilities of utilizing the community detection toolbox to develop empirical definitions of these theoretical concepts. The notion of “vertices with a central position within their cluster […] may have an important function of control and stability within the group” can readily be translated to the idea of core symptoms. Similarly, the idea that “vertices lying at the boundaries between modules play an important role [… in] exchanges between different communities” can be mapped onto the theoretical definition of comorbidity within the network perspective on psychopathology.

In our paper, entitled “The role of stabilizing and communicating symptoms given overlapping communities in psychopathology”, we aspired to complement the statistical toolbox of the network approach to psychopathology by exploring what overlapping community detection analysis has to offer. Using community detection and inspecting the differential role of symptoms within and between communities offers a framework to study the clinical concepts of comorbidity, heterogeneity and hallmark symptoms. Symptoms with many and strong connections within a community, defined as stabilizing symptoms, could be thought of as the core of a community, whereas symptoms that belong to multiple communities, defined as communicating symptoms, facilitate the communication between problem areas.

We applied community detection to a large dataset (N=2089) assessing a variety of psychological problems using the Symptom Checklist 90. We identified 18 communities of closely related symptoms. Importantly, these communities are empirically derived instead of theoretically defined. In the paper we illustrate how the proposed definitions on the differential role of symptoms can inform us on the structure of the psychopathological landscape: both globally as well as locally. As such, we adopted established metrics in network science to accelerate our understanding of the psychopathological landscape.
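As a toy illustration of the idea (not the authors' actual method, data, or symptom labels, which are all invented here): clique percolation, one standard overlapping community detection algorithm available in networkx, assigns a node to every community whose cliques it belongs to, so a symptom sitting in more than one community plays the "communicating" role, while densely connected members within a community play the "stabilizing" role.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Toy symptom graph: two dense clusters joined by one bridging node
G = nx.Graph()
G.add_edges_from([
    ("sad", "guilt"), ("sad", "worthless"), ("guilt", "worthless"),    # cluster 1
    ("worry", "tense"), ("worry", "restless"), ("tense", "restless"),  # cluster 2
    ("sleep", "sad"), ("sleep", "guilt"),    # "sleep" connects into cluster 1
    ("sleep", "worry"), ("sleep", "tense"),  # ... and into cluster 2
])

# Clique percolation with k = 3: communities are unions of adjacent
# triangles, and a node may belong to several communities at once
communities = [set(c) for c in k_clique_communities(G, 3)]

# Communicating symptoms = nodes that appear in more than one community
membership = {}
for c in communities:
    for node in c:
        membership[node] = membership.get(node, 0) + 1
communicating = {node for node, count in membership.items() if count > 1}
```

In this toy graph, "sleep" ends up in both communities and is flagged as communicating, mirroring the definition in the paper.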

Figure 1. Illustration of (a) the local structure of Feelings of Worthlessness community, (b) its connection to other communities; and (c) a symptom-level example of its connection to the community Worried about Sloppiness.

From our perspective, this endeavour highlights that diving into the world of network science across all kinds of research areas can inspire great advances for the toolbox we use to study psychopathology networks. Drawing inspiration from fields concerned with complex systems such as brain networks, economic networks and social networks, the options seem infinite – and we cannot wait to explore them.

