Lab 4: Model presentation: Using MCMC output for postestimation

Applied Bayesian Modeling (ICPSR Summer Program 2025)
Ursinus College

Published: July 20, 2025

The purpose of this tutorial for our open lab is to show you some options to work with and efficiently present output from Bayesian models in article manuscripts: regression tables, regression plots, predicted probabilities, marginal effects, and others. This also reinforces the mechanics of working with MCMC output after fitting a Bayesian model in R (see the previous lab). While there are a number of dedicated R packages to generate similar output,1 this tutorial introduces the BayesPostEst package (Scogin et al. 2019) and aims to give you the tools to produce this output on your own and customize it, with the hope that you can use this in your own applied work.

1 Examples: marginaleffects (an amazing and fast-expanding set of tools for generating predicted probabilities, comparisons, etc.), tidybayes (the most comprehensive tool for MCMC output so far), bayesplot, bayestable, bayestestR, ggmcmc, sjstats, and sjPlot.

For consistency with course content, we begin with a model fit using rstanarm, but the tutorial below also works with models fit with rstan, JAGS, or BUGS.

This tutorial should be read in conjunction with Lecture #10 and the two articles on postestimation listed on the syllabus: King, Tomz, and Wittenberg (2000) and Hanmer and Kalkan (2013).

Examples of other relevant recent write-ups of these techniques in other disciplines are Mize (2019), Mood (2010), and Muller and MacLehose (2014).

If you find any errors in this document or have suggestions for improvement, please email me.

Fitting the model

We pick up at Lab 3 and fit a model of individual voter turnout in Switzerland:

Code
cses.dat <- rio::import("https://www.jkarreth.net/files/full_cses3.dta")
swiss.dat <- cses.dat[cses.dat$election == "CHE_2007", ]
swiss.dat$agegroup <- cut(swiss.dat$age, 
                          breaks = c(18, 30, 45, 65, 96), include.lowest = TRUE)
library("rstanarm")
bayes.mod <- stan_glm(voted ~ agegroup + female + education + income + demsatis + information, 
  data = swiss.dat, 
  family = binomial(link = "logit"),
  prior = normal(location = 0, scale = 2.5),
  chains = 4,
  iter = 5000,
  warmup = 2500,
  thin = 1,
  init = 0,
  seed = 123,
  cores = 4)

Extracting the MCMC output

Next, we extract the MCMC output from the stanreg object created by stan_glm(). Here, we want to keep only the columns with draws from posterior distributions of our parameters of interest (regression coefficients). Hence, we subset the matrix of MCMC output to contain only the columns with the same names as the variables in the model matrix, i.e. the intercept and the predictors: (Intercept), agegroup(30,45], agegroup(45,65], agegroup(65,96], female, education, income, demsatis, information.

Code
swiss_mcmc <- as.matrix(bayes.mod$stanfit)
head(swiss_mcmc)
          parameters
iterations (Intercept) agegroup(30,45] agegroup(45,65] agegroup(65,96]   female education income demsatis information mean_PPD log-posterior
      [1,]       -1.47          0.1736           1.001            1.50 -0.25633    0.0604  0.255   -0.205       0.532    0.704         -1472
      [2,]       -2.28         -0.0629           0.724            1.49 -0.12144    0.2072  0.221   -0.127       0.584    0.705         -1472
      [3,]       -1.57          0.2111           0.900            1.57 -0.17817    0.0797  0.203   -0.143       0.558    0.715         -1471
      [4,]       -3.12          0.1142           0.989            1.65 -0.00836    0.3091  0.250   -0.196       0.576    0.688         -1477
      [5,]       -2.32          0.2280           1.037            1.76 -0.14972    0.2164  0.190   -0.184       0.517    0.696         -1472
      [6,]       -1.22          0.0497           0.885            1.48 -0.11219    0.0211  0.233   -0.139       0.564    0.698         -1474
Code
swiss_mcmc <- swiss_mcmc[, colnames(model.matrix(bayes.mod))]
head(swiss_mcmc)
          parameters
iterations (Intercept) agegroup(30,45] agegroup(45,65] agegroup(65,96]   female education income demsatis information
      [1,]       -1.47          0.1736           1.001            1.50 -0.25633    0.0604  0.255   -0.205       0.532
      [2,]       -2.28         -0.0629           0.724            1.49 -0.12144    0.2072  0.221   -0.127       0.584
      [3,]       -1.57          0.2111           0.900            1.57 -0.17817    0.0797  0.203   -0.143       0.558
      [4,]       -3.12          0.1142           0.989            1.65 -0.00836    0.3091  0.250   -0.196       0.576
      [5,]       -2.32          0.2280           1.037            1.76 -0.14972    0.2164  0.190   -0.184       0.517
      [6,]       -1.22          0.0497           0.885            1.48 -0.11219    0.0211  0.233   -0.139       0.564

Regression tables

Bayesian variants of the regression model don’t differ much in their interpretation from frequentist models, aside from how you interpret the uncertainty around your estimates. Hence, if you want to present a regression table in your paper, a “Bayesian” regression table will look very similar to the “frequentist” regression tables you are all familiar with. For Bayesian models, you want to display the standard deviation of the posterior estimates in lieu of coefficient standard errors. Alternatively, you may choose to display the 95% credible interval beside (or below) the posterior mean.

As before, you can use the mcmcTab() function to create a table:

Code
library("BayesPostEst")
mcmcTab(bayes.mod)
         Variable Median    SD  Lower  Upper
1     (Intercept) -1.913 0.350 -2.601 -1.241
2 agegroup(30,45]  0.168 0.144 -0.110  0.453
3 agegroup(45,65]  0.937 0.145  0.655  1.225
4 agegroup(65,96]  1.585 0.167  1.260  1.918
5          female -0.167 0.095 -0.353  0.022
6       education  0.136 0.051  0.038  0.236
7          income  0.227 0.038  0.154  0.303
8        demsatis -0.183 0.047 -0.276 -0.089
9     information  0.558 0.050  0.462  0.656

The function takes the following arguments (see the example call after this list):

  • sims: the posterior output from your model. mcmcTab() automatically recognizes posterior distributions that were produced by R2jags, rjags, R2WinBUGS, R2OpenBUGS, MCMCpack, rstan, and rstanarm.
  • ci: desired level for credible intervals; defaults to c(0.025, 0.975), i.e. a 95% credible interval
  • pars: character vector of parameters to be printed; defaults to NULL (all parameters are printed)
  • Pr: print the percent of posterior draws that have the same sign as the median of the posterior distribution; defaults to FALSE
  • ROPE: defaults to NULL. If not NULL, a vector of two values defining the region of practical equivalence (“ROPE”); returns the % of posterior draws to the left/right of the ROPE. For this quantity to be meaningful, all parameters must be on the same scale (e.g. standardized coefficients or first differences). See Kruschke (2013) and Kruschke (2018) for more on the ROPE.

You can also export a conventional regression table to LaTeX or Word using mcmcReg. See the package website or Lab 2 for more details.
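
For instance, a minimal sketch (the default output format is LaTeX; see ?mcmcReg for the arguments that write the table to a file or produce a Word document):

Code
# Prints LaTeX code for a regression table of bayes.mod to the console
mcmcReg(bayes.mod)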

Regression dot plots

Coefficient dot plots (or caterpillar plots) are a more intuitive way to present regression results: many readers can evaluate the size and significance of relationships more easily from a plot than from a table.

You have a variety of options to generate these plots. Here are two: one from the BayesPostEst package, and one “by hand”:

Code
library("BayesPostEst")
mcmcCoefPlot(mod = bayes.mod)

Because the function also returns the underlying data frame, you can use it to create your own plot:

Code
plot_dat <- mcmcCoefPlot(mod = bayes.mod, plot = FALSE)
suppressPackageStartupMessages(library("tidyverse"))
ggplot(data = filter(plot_dat, variable != "(Intercept)"), aes(x = pe, y = variable)) +
  geom_vline(xintercept = 0, linetype = "dashed") +
  geom_point() +
  geom_errorbarh(aes(xmin = lo, xmax = hi), height = 0.1) +
  theme_bw() + ylab("") + xlab("Posterior estimate, with 95% credible intervals") + 
  scale_y_discrete(labels = c("Information", "Dissatisfaction with democracy", "Income", "Education", 
"Female", "Age 65+", "Age 45-64", "Age 30-44"))

As an alternative, you can show the full posterior distribution of each coefficient estimate. To do so, we first reorganize the MCMC output. Note that I’m using the dplyr::select() function below to remove columns I don’t want to retain for this plot:

Code
swiss_coefs <- select(as.data.frame(swiss_mcmc), -`(Intercept)`)

Next, I convert the posterior draws into a long dataframe for plotting:

Code
swiss_coefs_long <- pivot_longer(swiss_coefs, cols = `agegroup(30,45]`:information)
head(swiss_coefs_long)
# A tibble: 6 × 2
  name              value
  <chr>             <dbl>
1 agegroup(30,45]  0.174 
2 agegroup(45,65]  1.00  
3 agegroup(65,96]  1.50  
4 female          -0.256 
5 education        0.0604
6 income           0.255 

I can use the ggridges package to create density plots of each posterior coefficient:

Code
library("ggridges")
ggplot(data = swiss_coefs_long, aes(x = value, y = name)) + 
  stat_density_ridges(quantile_lines = TRUE, 
                      quantiles = c(0.025, 0.5, 0.975), 
                      alpha = 0.7) + 
  geom_vline(xintercept = 0, linetype = "dashed") + 
  theme_bw() + xlab("Posterior estimate") + ylab("") + 
  scale_y_discrete(labels = c("Age 30-44", "Age 45-64", "Age 65+", "Dissatisfaction with democracy", "Education", "Female", 
"Income", "Information"))

Alternatively, I could summarize the data myself and recreate the dot-and-whisker plot from above:

Code
swiss_coefs_long_sum <- summarize(group_by(swiss_coefs_long, name),
                                        median_coef = median(value),
                                        lower_coef = quantile(value, probs = c(0.025)),
                                        upper_coef = quantile(value, probs = c(0.975)))
ggplot(data = swiss_coefs_long_sum, aes(x = median_coef, y = name)) + 
  geom_errorbarh(aes(xmin = lower_coef, xmax = upper_coef), height = 0.1) + 
  geom_point() + 
  geom_vline(xintercept = 0, linetype = "dashed") + 
  xlab("Posterior estimate") + ylab("") + 
  theme_bw() + 
  scale_y_discrete(labels = c("Age 30-44", "Age 45-64", "Age 65+", "Dissatisfaction with democracy", "Education", "Female", 
"Income", "Information"))

Predicted probabilities

For nonlinear models, predicted probabilities are usually an intuitive option to present interpretable results to readers. Two popular approaches plot the predicted probability of the outcome across the simulated range of a predictor while holding all other covariates constant at typical values (the average-case approach) or across all observations (the observed-values approach). To do this using the output of a Bayesian model for binary outcomes, follow the steps below, which are also documented in Lecture 10.2

2 The marginaleffects package also offers a very accessible way to compute most of the quantities and graphs in this section - check it out in action below!
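
Regardless of which approach you choose, the computation is the same for every posterior draw. For draw \(s\) with coefficient vector \(\beta^{(s)}\) and a covariate vector \(x\), the logit model implies \(\hat{p}^{(s)} = \exp(x'\beta^{(s)}) / (1 + \exp(x'\beta^{(s)}))\). Summarizing the \(\hat{p}^{(s)}\) across draws, e.g. with the median and the 2.5th and 97.5th percentiles, then yields point estimates and credible intervals for the predicted probability.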

Generate predicted probabilities for a simulated “average” case

The following approach (which you also saw in Lecture 10) follows the practice in King, Tomz, and Wittenberg (2000) and calculates the predicted probability for each value of a focal predictor with all other predictors set to one “typical” value (usually a mean or a median).

The relevant function in the BayesPostEst package is mcmcAveProb; you can find more information about it in the package documentation.

Code
library("BayesPostEst")

Note: as with every R function, you can type the name of the function into the console to see what the function actually does.

Some of the objects to feed to the function’s arguments are best created before calling it:

Code
swiss_mm <- model.matrix(bayes.mod)
swiss_mcmc <- as.matrix(bayes.mod$stanfit)[, 1:ncol(swiss_mm)]
dissatis_sim <- seq(from = min(swiss.dat$demsatis, na.rm = TRUE), 
                  to = max(swiss.dat$demsatis, na.rm = TRUE),
                  by = 1)

Now, you can call the function and generate predicted probabilities across the range of the demsatis variable (dissatisfaction with democracy):

Code
dissatis_pp_sim <- mcmcAveProb(modelmatrix = swiss_mm, 
                                   mcmcout = swiss_mcmc, 
                                   xcol = 8, 
                                   xrange = dissatis_sim, 
                                   link = "logit", 
                                   ci = c(0.025, 0.975))

The resulting object is a data frame:

Code
dissatis_pp_sim
# A tibble: 5 × 4
      x median_pp lower_pp upper_pp
  <int>     <dbl>    <dbl>    <dbl>
1     0     0.630    0.563    0.692
2     1     0.587    0.521    0.648
3     2     0.542    0.473    0.609
4     3     0.496    0.417    0.576
5     4     0.451    0.360    0.546

You may then plot the predicted probabilities for easier communication:

Code
p_dissatis_pp_sim <- ggplot(data = dissatis_pp_sim, aes(x = x, y = median_pp)) + 
  geom_segment(aes(y = lower_pp, yend = upper_pp, xend = x)) + 
  geom_point() + 
  xlab("Dissatisfaction with democracy") + ylab("Pr(Voted)") + 
  theme_bw()
p_dissatis_pp_sim

Generate average predicted probabilities for all observed cases

The following approach (which you also saw in Lecture 10) follows the points raised by Hanmer and Kalkan (2013) that the previous approach is somewhat limited in exploring the full range of cases. In the words of the authors, “this approach creates a weaker connection between the results and the larger goals of the research enterprise … rather than seeking to understand the effect for the average case, the goal is to obtain an estimate of the average effect in the population.”

BayesPostEst also provides a function for this approach: mcmcObsProb.

Because we already created the input objects above, we can move directly to calling the function. Note that this function can be quite slow as it calculates probabilities for \(n \times k \times z\) cases, where \(n\) is the number of observations in the data, \(k\) is the number of posterior draws, and \(z\) is the number of distinct values of the simulated variable.

Code
dissatis_pp_obs <- mcmcObsProb(modelmatrix = swiss_mm, 
                                   mcmcout = swiss_mcmc, 
                                   xcol = 8, 
                                   xrange = dissatis_sim, 
                                   link = "logit", 
                                   ci = c(0.025, 0.975))

The resulting object is again a data frame:

Code
dissatis_pp_obs
# A tibble: 5 × 4
      x median_pp lower_pp upper_pp
  <dbl>     <dbl>    <dbl>    <dbl>
1     0     0.736    0.710    0.760
2     1     0.704    0.688    0.721
3     2     0.671    0.650    0.692
4     3     0.636    0.600    0.671
5     4     0.599    0.545    0.653

You may then plot the predicted probabilities for easier communication:

Code
p_dissatis_pp_obs <- ggplot(data = dissatis_pp_obs, aes(x = x, y = median_pp)) + 
  geom_segment(aes(y = lower_pp, yend = upper_pp, xend = x)) + 
  geom_point() +
  xlab("Dissatisfaction with democracy") + ylab("Pr(Voted)") + 
  theme_bw()
p_dissatis_pp_obs

Comparing the two side by side is useful to adjudicate the sensitivity of the results to either approach:
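
Because we stored both plots as objects above, one quick way to do this (a sketch assuming the patchwork package is installed) is:

Code
library("patchwork")
# Combine the average-case and observed-values plots side by side
(p_dissatis_pp_sim + ggtitle("Average case")) +
  (p_dissatis_pp_obs + ggtitle("Observed values"))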

First differences

You can also easily use the posterior distribution to calculate first differences, i.e. the difference in predicted probabilities for two types of cases - one with low and one with high values in a predictor of interest.

First differences “by hand”

First, here is some step-by-step code; this echoes the walkthrough for predicted probabilities in Lecture 10. I calculate here the first difference in Pr(Voted) between men and women.

Code
swiss_sim_woman <- as.matrix(data.frame(constant = 1, 
                                          `agegroup(30,45]` = 1,
                                          `agegroup(45,65]` = 0,
                                          `agegroup(65,96]` = 0,
                                          female = 1, 
                                          education = median(model.frame(bayes.mod)$education), 
                                          income = median(model.frame(bayes.mod)$income), 
                                          demsatis = median(model.frame(bayes.mod)$demsatis), 
                                          information = median(model.frame(bayes.mod)$information)))

swiss_sim_man <- as.matrix(data.frame(constant = 1, 
                                          `agegroup(30,45]` = 1,
                                          `agegroup(45,65]` = 0,
                                          `agegroup(65,96]` = 0,
                                          female = 0, 
                                          education = median(model.frame(bayes.mod)$education), 
                                          income = median(model.frame(bayes.mod)$income), 
                                          demsatis = median(model.frame(bayes.mod)$demsatis), 
                                          information = median(model.frame(bayes.mod)$information)))

swiss_coefs <- as.matrix(bayes.mod$stanfit)[, c(1:ncol(swiss_sim_woman))] 

Xb_woman <- t(swiss_sim_woman %*% t(swiss_coefs))

Xb_man <- t(swiss_sim_man %*% t(swiss_coefs))

swiss_pp_woman <- exp(Xb_woman) / (1 + exp(Xb_woman))
swiss_pp_woman <- as.data.frame(swiss_pp_woman)
names(swiss_pp_woman) <- swiss_sim_woman[, "female"]
swiss_pp_woman$Iteration <- 1:nrow(swiss_pp_woman)

swiss_pp_man <- exp(Xb_man) / (1 + exp(Xb_man))
swiss_pp_man <- as.data.frame(swiss_pp_man)
names(swiss_pp_man) <- swiss_sim_man[, "female"]
swiss_pp_man$Iteration <- 1:nrow(swiss_pp_man)

swiss_pp_woman_long <- pivot_longer(data = swiss_pp_woman, 
                               names_to = "female", 
                               values_to = "Pr_Voting_Women", 
                               cols = -Iteration)

swiss_pp_man_long <- pivot_longer(data = swiss_pp_man, 
                               names_to = "female", 
                               values_to = "Pr_Voting_Men", 
                               cols = -Iteration)

swiss_fd <- inner_join(swiss_pp_woman_long, swiss_pp_man_long,
                       by = "Iteration")

swiss_fd$fd <- swiss_fd$Pr_Voting_Women - swiss_fd$Pr_Voting_Men
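
Before plotting, you can summarize this first difference directly, e.g. with its posterior median and a 95% credible interval:

Code
median(swiss_fd$fd)
quantile(swiss_fd$fd, probs = c(0.025, 0.975))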

You can now plot the posterior distribution of the difference in voting propensity between men and women:

Code
ggplot(data = swiss_fd, aes(x = fd)) + 
  geom_density(fill = "gray", color = NA) + 
  xlab("First difference in voting propensity: Women - Men\n(negative numbers indicate a lower voting propensity for women)") + 
  ylab("") + 
  scale_y_continuous(breaks = NULL) + 
  geom_vline(xintercept = 0, linetype = "dashed") + 
  theme_bw()

Using a function to calculate first differences

To facilitate the calculation and plotting of first differences for all explanatory variables, BayesPostEst also contains a function: mcmcFD. It requires the following arguments, most of which are familiar from above:

  • modelmatrix: model matrix, including intercept. Create with model.matrix(rstanarm object)
  • mcmcout: posterior distributions of all coefficients in matrix form - can easily be created from rstan, MCMCpack, R2jags, R2OpenBUGS, etc.
  • link: type of generalized linear model; a character vector set to “logit” (default) or “probit”
  • ci: the desired credible interval for the first differences; defaults to c(0.025, 0.975)
  • percentiles: the percentiles of each explanatory variable to compare; defaults to c(0.25, 0.75) for continuous variables. Binary variables are automatically set to 0 and 1. Note: for explanatory variables that are categorical with few values (e.g., 1, 2, 3, 4) and do not vary much across categories, the default c(0.25, 0.75) may return a first difference estimate of 0. This is because in that case, the 25th and 75th percentile of the variable are identical. You need to expand the range of percentiles in that case, e.g. to c(0.1, 0.9). I will eventually update the function to handle this case better. (And of course, you’re always welcome to fork the repository on Github and rework the function yourself!)
  • fullsims: should the function return the full distribution of first differences or only the median and credible interval? Defaults to FALSE.
Code
swiss_fd <- mcmcFD(modelmatrix = swiss_mm, 
                  mcmcout = swiss_mcmc, 
                  ci = c(0.025, 0.975), 
                  percentiles = c(0.1, 0.9), 
                  fullsims = FALSE)

The resulting first differences can then be plotted:

Code
ggplot(data = swiss_fd, aes(x = median_fd, y = VarName)) + 
  geom_segment(aes(x = lower_fd, xend = upper_fd, yend = VarName)) + 
  geom_point() + theme_bw() + 
  geom_vline(xintercept = 0, linetype = "dashed") + 
  xlab("Difference in Pr(Voted) between low and high values") + 
  ylab("")

Alternatively, you can plot the full density of the distribution of each first difference. Simply using plot() does this for you once you have created a full posterior distribution of first differences with mcmcFD():

Code
swiss_fd_full <- mcmcFD(modelmatrix = swiss_mm, 
                  mcmcout = swiss_mcmc, 
                  ci = c(0.025, 0.975), 
                  percentiles = c(0.1, 0.9), 
                  fullsims = TRUE)
plot(swiss_fd_full)

Note that this plotting function sets the x-axis to the percentage point change in \(\text{Pr}(y = 1)\); e.g., those in the 45-64 age group voted at a rate about 20 percentage points higher than those in the baseline group (ages 18-29).

This function also offers a visualization of the “region of practical equivalence” (ROPE) to visualize what a meaningful first difference would look like. See the package vignette for details.
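
As a sketch (the ROPE limits below are an arbitrary illustration of one percentage point around zero; see ?mcmcFDplot for the argument):

Code
# Shade a region of practical equivalence of +/- 1 percentage point;
# choose limits that are substantively meaningful for your application
plot(swiss_fd_full, ROPE = c(-0.01, 0.01))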

Postestimation with convenient functions from the modelsummary and marginaleffects packages

You can also speed up many of the operations above by using the marginaleffects and modelsummary packages, as long as your Bayesian model was estimated with rstanarm or brms. As a general guide, please read through the excellent documentation of both packages and try out some of the code below!3

3 Please note that not all of the functions below reproduce the exact output generated by the BayesPostEst functions above. That’s because some of the defaults and estimation methods in marginaleffects differ. See the documentation of each package for details.

Regression tables

Use the modelsummary function from the modelsummary package. The documentation is rich and should help you customize this table in almost any way possible!

Code
library("modelsummary")
modelsummary(models = list(bayes.mod))
                      (1)
(Intercept)        -1.913
agegroup(30,45]     0.168
agegroup(45,65]     0.937
agegroup(65,96]     1.585
female             -0.167
education           0.136
income              0.227
demsatis           -0.183
information         0.558
Num.Obs.             2713
R2                  0.153
Log.Lik.        -1459.761
ELPD              -1468.8
ELPD s.e.            25.4
LOOIC              2937.6
LOOIC s.e.           50.7
WAIC               2937.6
RMSE                 0.42
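
The table above uses modelsummary’s defaults. As one example of customization (both arguments are documented in the modelsummary manual), the following call adds 95% credible intervals and omits some fit statistics:

Code
modelsummary(models = list(bayes.mod),
             statistic = "conf.int",
             gof_omit = "RMSE|WAIC")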

Regression dot plots

From the same package, use the modelplot function. Again, see the manual for customization options!

Code
modelplot(models = list(bayes.mod),
          coef_omit = "(Intercept)")

Predicted probabilities

Turning to the marginaleffects package, please review the documentation carefully – the package is under active development and the functions shown here might change over time.

To plot predicted probabilities across the range of a variable, use plot_predictions. Please refer to the documentation for more details on how these probabilities are calculated. One of the main advantages of plot_predictions (and its cousin, predictions) is that it allows for specifying values for all other covariates or combinations of covariates, even in the case of multilevel models.

First, I plot a similar quantity to “predictions for the average case”; marginaleffects refers to these as “adjusted predictions at representative values”:

Code
library("marginaleffects")
plot_predictions(model = bayes.mod, 
                 condition = "demsatis")
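
If you want the numbers behind this plot, the companion predictions() function together with datagrid() returns them. The sketch below (with illustrative values) computes predictions at each observed value of demsatis, holding all other covariates at their means or modes:

Code
predictions(model = bayes.mod,
            newdata = datagrid(demsatis = 0:4))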

Next, I use the marginaleffects cousin of the “observed-values approach”, here called the “average adjusted prediction by group”:

Code
plot_predictions(model = bayes.mod,
                 by = "demsatis")

First differences

To plot first differences (as defined above), use the comparisons function. Note that I am spelling out the values of the explanatory variables to compare (unlike mcmcFD above, which automatically chooses the 25th and 75th percentiles for numeric variables, or the percentiles supplied by the user, as I did above). For other reasons why the numbers below differ slightly from the mcmcFD results above, please refer to the documentation of the comparisons function.

Code
swiss_fd2 <- comparisons(model = bayes.mod,
              newdata = "median",
              comparison = "difference",
              variables = list(agegroup = "reference",
                             female = c(0, 1),
                             education = "minmax",
                             income = "minmax",
                             demsatis = c(0, 3),
                             information = c(0, 3)))

I can then plot these comparisons:

Code
ggplot(data = swiss_fd2, aes(x = estimate, y = paste(term, contrast, sep = " "))) + 
  geom_segment(aes(x = conf.low, xend = conf.high, yend = paste(term, contrast, sep = " "))) + 
  geom_point() + theme_bw() + 
  geom_vline(xintercept = 0, linetype = "dashed") + 
  xlab("Difference in Pr(Voted) between low and high values") + 
  ylab("")

Predicted probabilities after ordered or multinomial logit/probit models

To generate similar figures, please refer to the code in the slides for Lectures 11-13. Here are some examples of useful presentations of predicted probabilities across the range of predictors.

[Figure: Probability of each outcome in an ordered logistic regression.]

[Figure: Differences in the probability of each choice in a multinomial logistic regression.]

I can replicate a similar version of the plot above in just one line of marginaleffects code:

Code
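# vote_mod is a multinomial model of party choice from the Lecture 11-13
# slides; it is not estimated in this document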
plot_predictions(vote_mod, condition = c("econselfworse")) + 
  facet_wrap(~ group) + 
  ylab("Probability of voting") + 
  xlab("Personal economic situation is bad") + 
  scale_x_discrete(labels = c("No", "Yes"))

To add the party names after estimating the model, I use this trick:

Code
party_names <- c(
                    `1` = "CDU/CSU",
                    `2` = "SPD",
                    `3` = "FDP",
                    `4` = "Linke",
                    `5` = "Greens"
                    )
plot_predictions(vote_mod, condition = c("econselfworse")) + 
  facet_wrap(~ group, labeller = as_labeller(party_names), ncol = 1) + 
  ylab("Probability of voting") + 
  xlab("Personal economic situation is bad") + 
  scale_x_discrete(labels = c("No", "Yes")) + 
  coord_flip()

[Figure: Differences in the probability of each choice in a multinomial logistic regression.]

In addition to the more laborious code in my slides, the marginaleffects package offers some functions to create similar plots with ease. Please refer to the expanding documentation for examples.

Model fit: ROC curves and other assessments

Regression models for binary and ordinal/categorical outcomes offer a variety of tools for assessing model fit. Many of these measures evaluate how well the model classifies cases by the outcome variable. The BayesPostEst package offers a function to plot precision-recall curves and receiver operating characteristic curves: mcmcRocPrcGen. See the package vignette for details.

The authors of rstan and rstanarm also implemented a popular tool: approximate leave-one-out cross-validation (LOO, LOOIC). See the vignette for the rstanarm package.
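
For a minimal example, rstanarm provides a loo() method that works directly on the fitted model object:

Code
loo(bayes.mod)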

References

Hanmer, Michael J., and Kerem Ozan Kalkan. 2013. “Behind the Curve: Clarifying the Best Approach to Calculating Predicted Probabilities and Marginal Effects from Limited Dependent Variable Models.” American Journal of Political Science 57 (1): 263–77. https://doi.org/10.1111/j.1540-5907.2012.00602.x.
King, Gary, Michael Tomz, and Jason Wittenberg. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science 44 (2): 347–61. http://www.jstor.org/stable/2669316.
Kruschke, John K. 2013. “Bayesian Estimation Supersedes the t-Test.” Journal of Experimental Psychology: General 142 (2): 573–603. https://doi.org/10.1037/a0029146.
———. 2018. “Rejecting or Accepting Parameter Values in Bayesian Estimation.” Advances in Methods and Practices in Psychological Science 1 (2): 270–80. https://doi.org/10.1177/2515245918771304.
Mize, Trenton D. 2019. “Best Practices for Estimating, Interpreting, and Presenting Nonlinear Interaction Effects.” Sociological Science 6 (4): 81–117. https://doi.org/10.15195/v6.a4.
Mood, Carina. 2010. “Logistic Regression: Why We Cannot Do What We Think We Can Do, and What We Can Do about It.” European Sociological Review 26 (1): 67–82. https://doi.org/10.1093/esr/jcp006.
Muller, Clemma J, and Richard F MacLehose. 2014. “Estimating Predicted Probabilities from Logistic Regression: Different Methods Correspond to Different Target Populations.” International Journal of Epidemiology 43 (3): 962–70. https://doi.org/10.1093/ije/dyu029.
Scogin, Shana, Johannes Karreth, Andreas Beger, and Rob Williams. 2019. “BayesPostEst: An r Package to Generate Postestimation Quantities for Bayesian MCMC Estimation.” Journal of Open Source Software 4 (42): 1722. https://doi.org/10.21105/joss.01722.