Neuropsychologia

Volume 138, 17 February 2020, 107352

Testing the magnocellular-pathway advantage in facial expressions processing for consistency over time

https://doi.org/10.1016/j.neuropsychologia.2020.107352

Highlights

  • This study examined the reliability of magnocellular bias for processing emotions.

  • Participants completed a facial emotion identification task.

  • Analyses revealed an advantage for the magnocellular pathway in processing facial expressions.

  • Reaction time results were highly stable over time.

Abstract

The ability to identify facial expressions rapidly and accurately is central to human evolution. Previous studies have demonstrated that this ability relies to a large extent on the magnocellular, rather than parvocellular, visual pathway, which is biased toward processing low spatial frequencies. Despite the generally consistent finding, no study to date has investigated the reliability of this effect over time. In the present study, 40 participants completed a facial emotion identification task (fearful, happy, or neutral faces) using facial images presented at three different spatial frequencies (low, high, or broad spatial frequency), at two time points separated by one year. Bayesian statistics revealed an advantage for the magnocellular pathway in processing facial expressions; however, no effect of time was found. Furthermore, participants' RT results were highly stable over time. Our replication, together with the consistency of our measurements within subjects, underscores the robustness of this effect. This capacity may therefore be considered trait-like, suggesting that individuals possess varying levels of ability for processing facial expressions that can be captured in behavioral measurements.

Introduction

The ability to identify facial expressions has been central to human evolution as a social species. It enables individuals to determine the emotional states and intentions of others and, as such, is crucial for anticipating and predicting social and environmental situations. Hence, from an evolutionary perspective, rapid processing of facial expressions is essential for survival, as it facilitates the detection of potential dangers, such as approaching predators or aggression in others. Moreover, the accurate identification and evaluation of facial expressions guides our reactions to others, thereby facilitating social interactions (Adolphs, 2003; Dolan, 2002; Erickson and Schulkin, 2003).

The identification of facial expressions relies on intact processing of visual stimuli. Starting at the retina, the visual system is divided into two sub-systems: the magnocellular (M) and parvocellular (P) pathways. The M-pathway is composed of large, rapidly conducting neurons that specialize in the processing of rapid changes in stimuli, such as detection of movement. These cells quickly transfer information to fast-responding brain areas, such as the prefrontal cortex (Bar et al., 2006) and the amygdala (Vuilleumier et al., 2003). In contrast, the P-pathway is composed of smaller, less rapidly responding cells that specialize in processing fine visual details. These cells project information to the visual cortex through the ventral visual stream (Schechter et al., 2003).

The neurons in the M and P pathways respond to different physical properties of the stimuli. One of the properties that determines this response is spatial frequency (Kaplan, 2004; Legge, 1978; Slaghuis and Curran, 1999; Tootell et al., 1988). Whereas the M-pathway responds mainly to low spatial frequencies (LSF), providing relatively large and coarse details, the P-pathway responds mainly to high spatial frequencies (HSF), providing fine and detailed information about visual stimuli (Butler et al., 2001; Merigan and Maunsell, 1993). Thus, filtering visual stimuli into specific spatial frequency bands (LSF vs. HSF) makes it possible to probe the M and P pathways differentially.
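In practice, filtering an image into LSF and HSF versions is commonly done with a low-pass (e.g., Gaussian) filter and its high-pass residual. The sketch below is illustrative only: the cutoff (`sigma`) is a hypothetical value, not a parameter taken from this study or any of the cited ones.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_spatial_frequency(image, sigma=3.0):
    """Split a grayscale image into LSF and HSF versions.

    sigma (in pixels) sets the low-pass cutoff; larger sigma keeps
    only coarser structure. The value here is purely illustrative.
    """
    img = image.astype(float)
    lsf = gaussian_filter(img, sigma=sigma)  # low-pass: coarse structure (M-biased)
    hsf = img - lsf                          # high-pass residual: fine detail (P-biased)
    return lsf, hsf

# Stand-in array for a face photograph (the broad-spatial-frequency condition
# would simply use the unfiltered image).
face = np.random.default_rng(0).random((128, 128))
lsf, hsf = filter_spatial_frequency(face)
```

Because the HSF version is defined as the residual, the two filtered images sum back to the original, which is a convenient sanity check on the decomposition.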

Various studies have utilized this technique to investigate the interplay between the M-pathway and P-pathway in processing facial expressions, by using various methodologies, including behavioral (Jahshan et al., 2017), electromagnetic (Holmes et al., 2005a, 2005b; Maratos et al., 2009; Vlamings et al., 2009), fMRI (Rotshtein et al., 2007; Winston et al., 2003), neuropsychological (Burra et al., 2019), intracranial electrophysiology (Méndez-Bértolo et al., 2016), and transcranial magnetic stimulation (Bognár et al., 2017). Most of these studies revealed a clear advantage for the LSF in processing facial expressions, suggesting that the M-pathway is more relevant than the P-pathway for processing facial expressions (Bocanegra and Zeelenberg, 2009; Pourtois et al., 2005; Rassovsky et al., 2013; Vuilleumier et al., 2003). For example, in a series of five studies, Holmes et al., 2005a, 2005b compared filtered LSF and HSF faces, displaying fearful and neutral expressions, and found a significant advantage for processing LSF fearful expressions. They concluded that information at a coarse spatial scale (i.e., the M-pathway) is crucial for rapid emotional processing. These authors’ decision to focus on fearful expressions was based on previous findings showing that individuals are more attuned to, and process more easily, stimuli signaling threat (i.e., angry or fearful facial expressions) than other types of stimuli (Fox et al., 2000; Haberkamp et al., 2018; Öhman et al., 2001; Tipples et al., 2002).

Another example of the M-pathway's advantage in processing facial expressions comes from a unique fMRI study investigating the role of spatial frequency in processing facial expressions in a stroke survivor with complete cortical blindness (Burra et al., 2019). The authors reported that a fearful LSF image was sufficient to activate the amygdala, suggesting the existence of a subcortical, predominantly M-pathway, route that is sensitive to LSF. This route bypasses the visual cortex, enabling rapid responses by the amygdala. This conclusion was supported by another study that directly recorded neural activity with high temporal resolution in eight patients suffering from epilepsy (Méndez-Bértolo et al., 2016). Results from this study showed a very rapid response in the amygdala (as early as 74 ms post-stimulus) to LSF fearful expressions, again supporting the advantage of the M- over the P-pathway in processing LSF facial expressions.

As discussed above, from an evolutionary standpoint, prioritizing attention to fearful expressions increases one's chances of survival, as the source of threat can be more rapidly identified and addressed. Nevertheless, the M-pathway's advantage does not appear to be limited to fearful expressions. Indeed, Kumar and Srinivasan (2011) found that the accurate identification of happy facial expressions was biased toward LSF, with HSF more relevant for the identification of sad expressions. A similar study reported an interaction between spatial frequency and facial expression, indicating that happy expressions were easier to judge when presented in LSF than in HSF (Morawetz et al., 2011). In yet another study that assessed participants' sensitivity to facial expressions (by judging whether the expression was genuine or faked), participants were much more sensitive to happy expressions (as well as to fearful and pain expressions) when these were presented in LSF (Wang, 2016).

Despite the general agreement in the literature regarding the advantage of the M-pathway in processing facial expressions (whether threat-related or positive) and the common use of LSF to probe the M-pathway, no study to date has investigated the consistency of this effect over time. Test-retest approaches have, however, been employed to establish the reliability of other cognitive measures. For example, Hockey and Geffen (2004) evaluated the reliability of the visuospatial n-back task, a commonly used paradigm for assessing working memory. Using the Pearson correlation between the first and second administrations, they found that RT (but not accuracy) was a highly reliable measure. In another study, Cohn et al. (2002) assessed the temporal stability of facial expressions over intervals of 4–12 months, employing facial EMG, automatic feature-point tracking, and manual FACS coding. They reported that individual differences in facial expression were stable over time and comparable in magnitude to the stability of self-reported emotion. Thus, given the wide use of this paradigm over the years, both for probing the visual system in healthy subjects and for assessing affect-processing abnormalities in clinical populations (Butler et al., 2001; Jahshan et al., 2017; Laprévote et al., 2010; McBain et al., 2010), it is essential to establish its temporal stability.
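Test-retest reliability of the kind Hockey and Geffen computed can be sketched as a Pearson correlation between the two administrations. The data below are simulated, not drawn from any of the cited studies; the variable names and the noise level are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical mean RTs (ms) for 40 participants at two sessions one year apart;
# T2 is simulated to covary strongly with T1, mimicking a reliable measure.
rt_t1 = rng.normal(600, 50, size=40)
rt_t2 = rt_t1 + rng.normal(0, 20, size=40)

# r is the test-retest reliability coefficient; values near 1 indicate that
# individual differences are preserved across administrations.
r, p = pearsonr(rt_t1, rt_t2)
```

With between-subject variability much larger than the session-to-session noise, r comes out high, which is the pattern a stable, trait-like measure would show.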

The aim of the current study was to fill this void in the literature by (1) examining the reliability of this advantage over time for both reaction time and error rate; (2) providing further support for the M-pathway advantage in processing facial expressions; and (3) testing whether fearful expressions are processed preferentially relative to happy expressions, vice versa, or whether the two are processed equally, thereby addressing the inconsistency in the literature on this issue. Thus, capitalizing on prior research, we compared M versus P processing of fearful, happy, and neutral expressions. The reliability of this effect was examined at two time points separated by one year. We hypothesized that (a) consistent with previous findings, the M-pathway would show an advantage in processing fearful and happy expressions, and (b) because the advantage of the M-pathway in processing facial expressions is likely based on a hardwired bottom-up mechanism (i.e., the differential properties of the M and P neurons), this advantage would remain stable over time. Owing to the inconsistency in the literature on the relative superiority of fearful versus happy expressions, we did not have a directional hypothesis for the third aim of the study.

Section snippets

Participants

Sixty healthy individuals were recruited for the first stage of the experiment (T1). Participants were first-year undergraduate students in Psychology at Bar-Ilan University and received course credit for their participation. All participants had normal or corrected to normal vision. In the following year, all 60 participants were re-contacted and asked to return to the second stage of the experiment (T2), for which they were offered compensatory payment. Nineteen subjects from the original

Results

The BANOVA for RT revealed that the best model predicting RT is the model including time, spatial frequency, facial expression, and the interaction between spatial frequency and facial expression, BF10 = 5.47e+56 (in favor of H1), indicating that the data are 5.47e+56 times more likely under the alternative hypothesis than under the null hypothesis (Rouder et al., 2012). Kass and Raftery (1995) proposed a classification of the strength of evidence of Bayes factors for H0 and H1 (which are
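The Kass and Raftery (1995) classification referred to above maps Bayes factor ranges onto verbal evidence categories via 2·ln(BF) bins (0–2, 2–6, 6–10, >10). A minimal sketch of that mapping, with category labels paraphrased from their scale:

```python
import math

def kass_raftery_evidence(bf10):
    """Classify evidence for H1 from BF10 using Kass & Raftery's (1995)
    2*ln(BF) bins: 0-2, 2-6, 6-10, >10 (roughly BF 1-3, 3-20, 20-150, >150)."""
    if bf10 <= 1:
        return "no evidence for H1"
    score = 2 * math.log(bf10)
    if score < 2:
        return "not worth more than a bare mention"
    if score < 6:
        return "positive"
    if score < 10:
        return "strong"
    return "very strong"

kass_raftery_evidence(5.47e56)  # the BF10 reported above -> "very strong"
```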

Discussion

In the present study, the Bayesian analytical approach was employed to replicate M-pathway bias toward visual emotion processing and to evaluate the temporal stability of this effect. As hypothesized, time did not influence the effect of spatial frequency on facial expression processing (i.e., there was no support for any interaction with time). Although this finding was consistent with the results from the classic ANOVA (see Appendix A), the use of Bayesian statistics in the present study

Funding source

The work was supported by the Israel Science Foundation (ISF) for scientific research and development (grant 621/14).

CRediT authorship contribution statement

Maor Zeev-Wolf: Data curation, Formal analysis, Writing - original draft. Yuri Rassovsky: Conceptualization, Formal analysis, Funding acquisition, Methodology, Supervision, Validation, Writing - review & editing.

Acknowledgments

MZF collected the data, conducted data analyses, and prepared the first draft of the manuscript. YR conceived the study, participated in data interpretation, and edited the manuscript. We would like to thank Nathan Kohn-Magnus and Shirel Haker for their tremendous help in running the experiment.

References (48)

  • C. Morawetz et al., Effects of spatial frequency and location of fearful faces on human amygdala activity, Brain Res. (2011)

  • J.N. Rouder et al., Default Bayes factors for ANOVA designs, J. Math. Psychol. (2012)

  • I. Schechter et al., Magnocellular and parvocellular contributions to backward masking dysfunction in schizophrenia, Schizophr. Res. (2003)

  • J.S. Winston et al., Effects of low-spatial frequency components of fearful faces on fusiform cortex activity, Curr. Biol. (2003)

  • M. Zeev-Wolf et al., Fine-coarse semantic processing in schizophrenia: a reversed pattern of hemispheric dominance, Neuropsychologia (2014)

  • R. Adolphs, Cognitive neuroscience: cognitive neuroscience of human social behaviour, Nat. Rev. Neurosci. (2003)

  • M. Bar et al., Top-down facilitation of visual recognition, Proc. Natl. Acad. Sci. U.S.A. (2006)

  • I. Blanco et al., Don't look at my teeth when I smile: teeth visibility in smiling faces affects emotionality ratings and gaze patterns, Emotion (2017)

  • B.R. Bocanegra et al., Emotion improves and impairs early vision, Psychol. Sci. (2009)

  • A. Bognár et al., Transcranial stimulation of the orbitofrontal cortex affects decisions about magnocellular optimized stimuli, Front. Neurosci. (2017)

  • P. Butler et al., Dysfunction of early-stage visual processing in schizophrenia, Am. J. Psychiatry (2001)

  • J.F. Cohn et al., Individual differences in facial expression: stability over time, relation to self-reported emotion, and ability to inform person identification

  • R.J. Dolan, Emotion, cognition, and behavior, Science (2002)

  • E. Fox et al., Facial expressions of emotion: are angry faces detected more efficiently?, Cognit. Emot. (2000)