People sometimes have specific behavioral responses to stimuli in their environment; these behavioral responses are called evaluative responses. During my PhD, I have been focusing on how people learn new evaluative responses. Specifically, I have been interested in the mental processes involved in evaluative response acquisition through approach and avoidance training.
I am also interested in advances in statistical modelling, with two objectives: assessing the validity of the statistical methods we use in psychology and making robust methods more accessible.
In this context, I have been developing the JSmediation package, an R package providing an easy-to-use interface to test statistical mediation with the joint-significance method (see Yzerbyt, Muller, Batailler, & Judd, 2018).
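As a minimal sketch of what testing a simple mediation model with JSmediation looks like (the data frame `dat` and the variables `x`, `m`, and `y` are hypothetical placeholders):

```r
library(JSmediation)

# Hypothetical data frame `dat` with a contrast-coded independent
# variable `x`, a mediator `m`, and an outcome `y`.
fit <- mdt_simple(data = dat, IV = x, DV = y, M = m)

# Printing the fitted model reports the individual a, b, and c paths
# with their significance tests (the joint-significance approach).
fit

# An estimate of the indirect effect can then be added if needed.
add_index(fit)
```

The joint-significance logic is to conclude that mediation is present only when both the a path (IV on mediator) and the b path (mediator on DV, controlling for IV) are individually significant.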
Among other projects, I also contributed to an introduction to Bayesian mixed-modelling (Nalborczyk, Batailler, Lœvenbruck, Vilain, & Bürkner, 2019).
Research practices in psychology have changed considerably in the past few years. I am interested in these changes, how to popularize them, and how to teach the credibility revolution in psychology (Vazire, 2018).
MSc in Social Psychology, 2016
Univ. Grenoble Alpes, France
BSc in Psychology, 2014
Université Pierre Mendès-France, France
Research across many disciplines seeks to understand how misinformation spreads with a view towards limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. The current article discusses the value of Signal Detection Theory (SDT) in disentangling two distinct aspects in the identification of fake news: (1) ability to accurately distinguish between real news and fake news and (2) response biases to judge news as real versus fake regardless of news veracity. The value of SDT for understanding the determinants of fake news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed.
Approach/avoidance paradigms could constitute an interesting alternative in measuring intergroup attitudes, notably if they overcome one criticism often addressed toward classic indirect tasks: Measuring attitudes beyond the influence of cultural knowledge. Using intergroup stimuli and a population likely to be exposed to a similar cultural knowledge, we observed two informative results regarding this issue: Approach/avoidance effects measured by the Visual Approach/Avoidance by the Self Task (VAAST) varied across participants (i.e., consistent with the variability of intergroup attitudes; Experiment 1) and participants of both dominant and non-dominant groups produced an ingroup bias (Experiment 2). A final experiment (Experiment 3) showed that compatibility scores in the VAAST predict trustworthiness ratings of the ingroup/outgroup. This experiment also investigated potential differences between the VAAST and the IAT. These results suggest that approach/avoidance tasks (notably the VAAST) could be relevant to assess personal attitudes when it comes to normatively sensitive topics.
Bayesian multilevel models are increasingly used to overcome the limitations of frequentist approaches in the analysis of complex structured data. This tutorial introduces Bayesian multilevel modeling for the specific analysis of speech data, using the brms package developed in R. In this tutorial, we provide a practical introduction to Bayesian multilevel modeling by reanalyzing a phonetic data set containing formant (F1 and F2) values for 5 vowels of standard Indonesian (ISO 639-3:ind), as spoken by 8 speakers (4 females and 4 males), with several repetitions of each vowel. We first give an introductory overview of the Bayesian framework and multilevel modeling. We then show how Bayesian multilevel models can be fitted using the probabilistic programming language Stan and the R package brms, which provides an intuitive formula syntax. Through this tutorial, we demonstrate some of the advantages of the Bayesian framework for statistical modeling and provide a detailed case study, with complete source code for full reproducibility of the analyses.
In light of current concerns with replicability and reporting false-positive effects in psychology, we examine Type I errors and power associated with 2 distinct approaches for the assessment of mediation, namely the component approach (testing individual parameter estimates in the model) and the index approach (testing a single mediational index). We conduct simulations that examine both approaches and show that the most commonly used tests under the index approach risk inflated Type I errors compared with the joint-significance test inspired by the component approach. We argue that the tendency to report only a single mediational index is worrisome for this reason and also because it is often accompanied by a failure to critically examine the individual causal paths underlying the mediational model. We recommend testing individual components of the indirect effect to argue for the presence of an indirect effect and then using other recommended procedures to calculate the size of that effect. Beyond simple mediation, we show that our conclusions also apply in cases of within-participant mediation and moderated mediation. We also provide a new R package that allows for an easy implementation of our recommendations.
Because approach/avoidance is a crucial response to environmental stimuli, this type of action should have left its trace on our sensorimotor system. Recent work, however, downplayed the role of sensorimotor information in producing approach/avoidance compatibility effects (i.e., faster response times to approach positive stimuli and avoid negative stimuli, than the reverse). We suggest that this is likely due to an overemphasis of the role of motor aspects of arm movement in these effects. The goal of this research is therefore to reevaluate the role of sensorimotor information in the production of compatibility effects by suggesting that large and replicable effects can be observed when the task simulates the visual information that comes with whole-body movements. In line with this idea, we present six experiments showing that such a task (the Visual Approach/Avoidance by the Self Task; VAAST) can produce large and replicable compatibility effects. Importantly, these experiments also test the core aspects producing these effects. These experiments reassert the role of sensorimotor information in the production of approach/avoidance compatibility effects. This entails, however, focusing on the visual information associated with whole-body movements instead of motor aspects associated with arm movements.
Good self-control has been linked to adaptive outcomes such as better health, cohesive personal relationships, success in the workplace and at school, and less susceptibility to crime and addictions. In contrast, self-control failure is linked to maladaptive outcomes. Understanding the mechanisms by which self-control predicts behavior may assist in promoting better regulation and outcomes. A popular approach to understanding self-control is the strength or resource depletion model. Self-control is conceptualized as a limited resource that becomes depleted after a period of exertion, resulting in self-control failure. The model has typically been tested using a sequential-task experimental paradigm, in which people completing an initial self-control task have reduced self-control capacity and poorer performance on a subsequent task, a state known as ego depletion. Although a meta-analysis of ego-depletion experiments found a medium-sized effect, subsequent meta-analyses have questioned the size and existence of the effect and identified instances of possible bias. The analyses served as a catalyst for the current Registered Replication Report of the ego-depletion effect. Multiple laboratories (k = 23, total N = 2,141) conducted replications of a standardized ego-depletion protocol based on a sequential-task paradigm by Sripada et al. Meta-analysis of the studies revealed that the size of the ego-depletion effect was small with 95% confidence intervals (CIs) that encompassed zero (d = 0.04, 95% CI [−0.07, 0.15]). We discuss implications of the findings for the ego-depletion effect and the resource depletion model of self-control.