Recent & Upcoming Talks

2023

The new statistics (Cumming, 2007) are simply not that new anymore. Effect sizes have been around since the 1940s and 1950s (Huberty, 2002), yet despite their popularity in recent decades (speculatively, in reaction to known flaws in NHST), there are still problems surrounding reporting practices, misconceptions about effect sizes, and potential gaps in our pedagogical approaches to teaching about effect sizes. Within biomedical and psychological research, there are numerous effect size measures, which can be categorized into eight techniques for estimating and interpreting effect sizes (Cook et al., 2014; Lakens & Caldwell, 2019). This project focuses first on assessing the prevalence of these eight techniques, and then on identifying technique- and/or instructor-related barriers that hinder their use within classrooms. The hope is that steps toward mitigating those barriers, and closing the educational gap students face as they transition to becoming researchers and consumers of information, can be found and implemented.
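
To ground the discussion, here is a minimal sketch of one of the most familiar standardized effect sizes, Cohen's d for two independent groups. The data and group labels are hypothetical, and this is only one of the many measures covered by the techniques above.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    # Pooled SD combines the two Bessel-corrected variances
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical scores from two teaching conditions
control = [72, 75, 68, 80, 74, 69, 77]
treatment = [78, 82, 75, 88, 80, 79, 85]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```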

2022

Data cleaning is a crucial skill required of every researcher, yet there is little time dedicated (or even available) to formally teach data cleaning principles and procedures within psychology statistics classes. The aim of the current project is to develop online learning-focused content centered around data cleaning as a way to promote a deeper understanding of data. However, designing this type of content, and its organizational structure, needs to consider both user experience (UX) and behavioural design right from its inception if it is to effectively accomplish its learning objectives. Evaluation methods for UX will be discussed, along with how behavioural design elements can be leveraged to improve the overall UX and promote the desired learning outcomes.
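
To illustrate the kind of principles such content might cover, here is a minimal, hypothetical cleaning sketch in Python with pandas; the column names, sentinel codes, and rules are invented for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical raw survey data with common issues: duplicate rows,
# a -99 missing-data sentinel, and inconsistent group labels.
raw = pd.DataFrame({
    "id":    [1, 2, 2, 3, 4],
    "age":   [34, 29, 29, -99, 41],
    "group": ["Control", "control ", "control ", "Treatment", "treatment"],
})

clean = (
    raw
    .drop_duplicates(subset="id")  # one row per participant
    .assign(
        age=lambda d: d["age"].replace(-99, np.nan),          # recode sentinel as missing
        group=lambda d: d["group"].str.strip().str.lower(),   # normalize labels
    )
)
print(clean)
```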

2019

The gist of the talk will be the things I have learned so far about improving my own research practices, as well as a curated list of fantastic resources and tips to help anyone become just a little more open. I will also be reflecting on my time spent at Oxford University this past September for the Oxford | Berlin Summer School on Open Science.

As researchers, we are trained to be critical of the research we consume, but even the most well-trained eye can miss details hidden within reported statistics (both descriptive and inferential). Tools have been developed that can detect inconsistencies in reported statistics and even reconstruct plausible sample distributions. While nothing can replace the careful scrutiny of a research article in its entirety, these tools can help at any stage of the publication process by allowing readers to glean further insights into sample data not readily available within the text.
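
As a flavour of how the simplest of these consistency checks works, the sketch below recomputes the p-value implied by a reported t statistic and its degrees of freedom and compares it against the reported p, in the spirit of statcheck. The reported numbers and the tolerance are hypothetical, and real tools handle rounding more carefully.

```python
from scipy import stats

def check_t_report(t, df, reported_p, tol=0.0005, two_tailed=True):
    """Recompute the p-value implied by a reported t(df) and flag mismatches."""
    p = stats.t.sf(abs(t), df)  # upper-tail probability
    if two_tailed:
        p *= 2
    return p, abs(p - reported_p) <= tol

# Hypothetical reported result: t(28) = 2.15, p = .04
recomputed, consistent = check_t_report(t=2.15, df=28, reported_p=0.04)
print(f"recomputed p = {recomputed:.4f}, consistent with report: {consistent}")
```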

2018

As psychologists, our currency is methodological rigor. The value of our research is directly linked to the degree to which we follow sound research practices. Without such practices, the results from even the best-intentioned research will have shadows of doubt cast upon them. Any way by which we can improve our craft should be explored, and one such improvement is checking for statistical anomalies in reported results. A few technical solutions will be presented that can test for statistical anomalies within research reports, ranging from reported test statistics and their associated p-values (StatCheck) to statistically impossible reported means, standard deviations, variances, and standard errors for Likert-type scales (GRIM, GRIMMER, and SPRITE). While these tools are a great addition to a researcher's toolbox, they can also be used for large-scale literature reviews to assess how prevalent such statistical anomalies are across the history of our discipline. While sporadic anomalies are not necessarily symptomatic of larger problems, patterns of anomalies are another story. An easy-to-use Shiny application will be presented which showcases how to seamlessly organize results from large-scale literature reviews and produce simple graphics to explore patterns.
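
To give a concrete feel for one of these checks, below is a minimal sketch of the GRIM test: with n integer-valued responses, a sample mean must be an integer divided by n, so a reported mean that no such fraction rounds to is numerically impossible. The reported values here are hypothetical.

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test: can the mean of n integer-valued responses round to the
    reported mean at the reported precision?"""
    target = round(reported_mean, decimals)
    base = round(reported_mean * n)  # nearest candidate integer sum
    # Checking neighbouring sums guards against floating-point edge cases
    return any(round(k / n, decimals) == target for k in (base - 1, base, base + 1))

# Hypothetical reported means from n = 28 Likert responses
print(grim_consistent(5.19, 28))  # False: no integer sum / 28 rounds to 5.19
print(grim_consistent(5.18, 28))  # True: 145 / 28 = 5.1786 rounds to 5.18
```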

2017

Given the current climate surrounding the replication crisis facing scientific research, a subsequent call for methodological reform has been issued which explicates the need for a shift from null hypothesis significance testing to the reporting of effect sizes and their confidence intervals (CIs). However, little is known about the relative performance of CIs constructed following the application of techniques designed to accommodate nonnormality and heterogeneity under the general linear model (GLM). We review three such techniques: normalizing data transformations, Huber-White robust standard errors, and percentile bootstrapping; present an empirical illustration to demonstrate their construction and interpretation; and discuss a planned Monte Carlo study designed to evaluate the performance of the CIs based on these techniques. The factors examined in the study are sample size, degree of multicollinearity among predictors, number of predictors, and skewness of the residuals. Based on the performance of the CIs in terms of coverage, accuracy, and efficiency, general recommendations will be made regarding best practices for constructing CIs for the GLM under a wide range of data-analytic conditions.
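
As a rough illustration of one of the three techniques, the sketch below constructs a percentile bootstrap CI for a regression slope from simulated data with skewed residuals. The simulation setup is invented for illustration and is not the design of the planned Monte Carlo study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated single-predictor data with skewed (exponential) errors
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.exponential(scale=1.0, size=n)

def ols_slope(x, y):
    """Slope from a one-predictor OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Percentile bootstrap: resample cases, refit, take empirical quantiles
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols_slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {ols_slope(x, y):.3f}, "
      f"95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```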

2016

Seniors are particularly vulnerable to gambling problems due to age-related cognitive decline, limited income, and other life-cycle events such as the loss of a partner. Using the Ontario Seniors Gambling data (N = 2,103), an analysis was conducted to explore person-level, environmental, and person-level by environmental effects on gambling-related outcomes. The primary analyses focused on a subset of meaningful predictors, demographic covariates, and gambling outcomes initially identified by the original analyses conducted by McCready et al. (2014). Logistic regression models were used to examine the predictors and possible interactions. Being married and formally employed were negative predictors of problem gambling, while specific avoidance motives, attitudes regarding the relative benefits versus harms of gambling, frequency of slot play, and spending more than $1000 annually all predicted problem gambling.
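
A minimal sketch of this kind of model is below, fit with statsmodels on simulated stand-in data; the variable names and coefficients are hypothetical placeholders, not the actual survey measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey; the real analysis used the
# Ontario Seniors Gambling data (N = 2,103) and its actual measures.
rng = np.random.default_rng(1)
n = 2103
df = pd.DataFrame({
    "married":        rng.integers(0, 2, n),
    "employed":       rng.integers(0, 2, n),
    "slot_frequency": rng.poisson(3, n),
    "high_spend":     rng.integers(0, 2, n),  # spends > $1000 annually
})
linpred = (-1.0 - 0.6 * df.married - 0.5 * df.employed
           + 0.2 * df.slot_frequency + 0.8 * df.high_spend)
df["problem_gambling"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# Logistic regression with a person-level by environmental interaction
model = smf.logit(
    "problem_gambling ~ married + employed + slot_frequency * high_spend",
    data=df,
).fit(disp=False)
print(model.params)
```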

2015

The present research aims to explore the effectiveness of two educational interventions whose purpose is to develop students' informal inferential reasoning and improve their ability to make accurate informal statistical inferences. A modified version of an assessment tool developed by David Trumpower (2011) tests the efficacy of these interventions. One intervention is a series of instructions that walks subjects through an example of day-to-day reasoning that uses informal statistical reasoning. The other is an interactive computer task that aims to give students a deeper understanding of the important data characteristics involved in statistical reasoning. Results and implications will be discussed.