Amid the replication crisis facing scientific research, calls for methodological reform have urged a shift from null hypothesis significance testing toward the reporting of effect sizes and their confidence intervals (CIs). However, little is known about the relative performance of CIs constructed using techniques designed to accommodate nonnormality and variance heterogeneity under the general linear model (GLM). We review three such techniques: normalizing data transformations, Huber-White robust standard errors, and percentile bootstrap approaches; present an empirical illustration to demonstrate their construction and interpretation; and outline a planned Monte Carlo study designed to evaluate the performance of CIs based on these techniques. The factors examined in the study are sample size, degree of multicollinearity among predictors, number of predictors, and skewness of the residuals. Based on the performance of the CIs in terms of coverage, accuracy, and efficiency, we will offer general recommendations on best practices for constructing CIs under the GLM across a wide range of data-analytic conditions.
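
As a minimal sketch of how two of the reviewed techniques might be implemented, the following Python code constructs CIs for GLM regression coefficients using Huber-White robust standard errors and the percentile bootstrap. This is an illustration under assumed conditions, not the study's actual design: the simulated data, sample size, coefficient values, and the choice of the HC3 robust estimator and 2,000 bootstrap replications are all hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: two predictors with skewed (exponential) residuals
# to mimic the nonnormality conditions discussed in the abstract.
n = 200
X = rng.normal(size=(n, 2))
e = rng.exponential(scale=1.0, size=n) - 1.0  # mean-zero, right-skewed errors
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + e

Xd = sm.add_constant(X)

# Technique 1: Wald-type CIs based on Huber-White (here, HC3) robust
# standard errors, which guard against variance heterogeneity.
fit_robust = sm.OLS(y, Xd).fit(cov_type="HC3")
print("Huber-White (HC3) 95% CIs:")
print(fit_robust.conf_int(alpha=0.05))

# Technique 2: percentile bootstrap CIs via case resampling -- refit the
# model on resampled rows and take the 2.5th and 97.5th percentiles of
# the resulting coefficient estimates.
B = 2000
boot_params = np.empty((B, Xd.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot_params[b] = sm.OLS(y[idx], Xd[idx]).fit().params
lo, hi = np.percentile(boot_params, [2.5, 97.5], axis=0)
print("Percentile bootstrap 95% CIs:")
print(np.column_stack([lo, hi]))
```

In a simulation such as the planned Monte Carlo study, these interval endpoints would be collected over many replications to assess coverage (how often the interval captures the true coefficient), accuracy, and efficiency (interval width).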