For psychologists, methodological rigor is our currency. The value of our research is directly tied to the degree to which we follow sound research practices; without them, the results of even the best-intentioned research will have shadows of doubt cast upon them. Any means of improving our craft deserves exploration, and one such improvement is checking reported results for statistical anomalies. Several technical solutions will be presented that test for statistical anomalies in research reports, ranging from reported test statistics and their associated p-values (StatCheck) to statistically impossible reported means, standard deviations, variances, and standard errors for Likert-type scales (GRIM, GRIMMER, and SPRITE). While these tools are a great addition to a researcher's toolbox, they can also be applied in large-scale literature reviews to assess how prevalent such statistical anomalies are across the history of our discipline. Sporadic anomalies are not necessarily symptomatic of larger problems, but patterns of anomalies are another story. An easy-to-use Shiny application will be presented that shows how to seamlessly organize results from large-scale literature reviews and produce simple graphics for exploring patterns.
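To illustrate the idea behind the simplest of these checks, the following is a minimal Python sketch of a GRIM-style consistency test; it is not the published implementation, and the function name and decimal-precision parameter are illustrative. The logic rests on the fact that for integer-valued items (such as Likert responses), the sum of scores must be a whole number, so the reported mean multiplied by the sample size must round to an integer that reproduces the mean at its reported precision.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if a reported mean is arithmetically possible
    for n integer-valued responses (a GRIM-style check).

    A mean of integer scores times n must be a whole number; we test
    whether any nearby integer sum reproduces the reported mean at
    its reported precision (`decimals`).
    """
    target = reported_mean * n
    # Check the two integer sums bracketing mean * n, which guards
    # against floating-point error in the reconstructed total.
    for total in (int(target), int(target) + 1):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False


# Example: with n = 25, possible means step in increments of 0.04,
# so a reported mean of 3.49 is impossible while 3.48 is not.
print(grim_consistent(3.48, 25))  # True  (87 / 25 = 3.48)
print(grim_consistent(3.49, 25))  # False (no integer sum works)
```

GRIMMER and SPRITE extend this reasoning to standard deviations, variances, and standard errors, but the underlying strategy is the same: reconstruct what the raw data could have been and flag reported statistics that no possible data set can produce.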