The purpose of this tutorial is to demonstrate how to merge and process data into one dataframe in R. This includes data from different sources and at different levels of analysis. To get to this point, the tutorial will show:
I recommend one additional resource to learn more about how to manage and manipulate data in R:
As always, it is useful to set a project-specific working directory—especially if you work with many files. You can set the WD to the location of your R script using our previous approach:
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))
Alternatively, use the here package to build file paths relative to your project folder instead of setting a WD at all. Or, use an RStudio project to organize folders and files automatically.
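For example, with the here package you refer to files by their path relative to the project root, so scripts keep working no matter where R was started. A minimal sketch (the "data" subfolder is a hypothetical layout, not part of this tutorial's files):

```r
library(here)

# Build an absolute path from the project root;
# "data" is a hypothetical subfolder used only for illustration.
path <- here("data", "merged_r4_data.sav")
```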
Since many of you work with data you collect yourself, you will often enter data directly into spreadsheets. I don’t have any specific recommendations for this, but I strongly encourage you to use the following workflow:
Remember, one of the goals of our course is to help you build a better relationship with your future self. Following this workflow will help with that.
I recommend keeping a separate script file for data management and data analysis. This will also help you maintain a reproducible workflow and keep your code manageable. For instance, in my projects, I typically have at least two R scripts in my project folder:
- project_datamgmt.R, which starts by importing the original source data and cleans and prepares it for analysis
- project_analysis.R, which conducts all analysis and creates tables and graphs

A very good model to follow is the Project TIER protocol, as illustrated in Day 5 of our course. You can take a look at the demo project. An R version, created by me, is available on my Github page.
You will encounter data in many different formats when doing research. For this purpose, the R package “rio” (with which you’re already familiar) is particularly useful. Its developer describes it as “a set of tools that aims to simplify the process of importing/exporting data.” The package has two main functions, import() and export(). It allows you to import and export data from/to the following popular formats (and others that I don’t list here):
- Tab-separated data (.tsv)
- Comma-separated data (.csv)
- R objects (.RData)
- Stata (.dta)
- SPSS (.sav, .por)
- Excel (.xls, .xlsx)
- Minitab (.mtp)
- OpenDocument Spreadsheet (.ods)

For more information, see the readme page for the “rio” package on Github: https://github.com/leeper/rio.
Important note on package versions: Because data formats change frequently (e.g., with new versions of commercial software), dealing with data import and export requires special attention. Be sure to always use the most recent version of the “rio” package and all dependencies.
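Both functions infer the file format from the file extension, so converting between formats takes one line each way. A minimal sketch (the file name demo.csv is arbitrary):

```r
library(rio)

d <- data.frame(x = 1:3, y = c("a", "b", "c"))
export(d, "demo.csv")    # format inferred from the .csv extension
d2 <- import("demo.csv") # read it back into a data frame
```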
In this example, we import a dataset from the Afrobarometer project
into R. The Afrobarometer is an African-led series of national public
attitude surveys on democracy and governance in Africa, and you can find
more information on it at http://www.afrobarometer.org/. The survey data are
provided to scholars in SPSS format. SPSS is a statistical software
package akin to Stata or R. At http://www.afrobarometer.org/data/merged-data, you can
find a download link for the fourth round of the Afrobarometer. The file
is called “merged_r4_data.sav”. Let’s use this link to read this dataset
into R using the import()
function from the “rio”
package.
First, install the “rio” package if you haven’t done so before. You only have to do this once.
install.packages("rio")
Next, load the package.
library("rio")
Now you can import the Afrobarometer dataset into your R environment. I’ll call it ab. This will take a few seconds since the dataset is over 15 MB. Note: as always, R will search for the file in your working directory.

ab <- import(file = "merged_r4_data.sav")
Alternatively, you can also import the file directly from its source by using the URL from the Afrobarometer website. But if the file is removed from that site, the code below won’t work.
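A direct import would look like the sketch below; the URL shown is a placeholder for illustration, not the actual download link:

```r
library(rio)

# Placeholder URL -- substitute the real link from the
# Afrobarometer website before running.
ab <- import("https://www.afrobarometer.org/path/to/merged_r4_data.sav")
```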
Before actually looking at the dataset itself, you should look at its dimensions.
dim(ab)
## [1] 27713 294
You can also use the glimpse() function from the dplyr package (which is part of the “tidyverse”, which we’ll use more below).
library("tidyverse")
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✔ ggplot2 3.3.6 ✔ purrr 0.3.4
## ✔ tibble 3.1.7 ✔ dplyr 1.0.9
## ✔ tidyr 1.2.0 ✔ stringr 1.4.0
## ✔ readr 2.1.2 ✔ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
glimpse(ab)
## Rows: 27,713
## Columns: 294
## $ COUNTRY <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ RESPNO <chr> "BEN0001", "BEN0002", "BEN0003", "BEN0004", "BEN0005", "BEN00…
## $ URBRUR <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2…
## $ BACKCHK <dbl> 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2…
## $ REGION <dbl> 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 107, 1…
## $ DISTRICT <chr> "COTONOU", "COTONOU", "COTONOU", "COTONOU", "COTONOU", "COTON…
## $ EA_SVC_A <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_SVC_B <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_SVC_C <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ EA_SVC_D <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_FAC_A <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_FAC_B <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_FAC_C <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_FAC_D <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_FAC_E <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_SEC_A <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_SEC_B <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ EA_ROAD <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0…
## $ NOCALL_1 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_2 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_3 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_4 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_5 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_6 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ NOCALL_7 <dbl> 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 997, 9…
## $ PREVINT <dbl> 1, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 1…
## $ THISINT <dbl> 2, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 2…
## $ ADULT_CT <dbl> 1, 1, 1, 1, 1, 4, 2, 3, 2, 1, 3, 1, 2, 2, 1, 2, 1, 1, 2, 1, 1…
## $ CALLS <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ DATEINTR <date> 2008-06-23, 2008-06-23, 2008-06-24, 2008-06-24, 2008-06-23, …
## $ STRTIME <time> 18:30:00, 19:40:00, 18:30:00, 17:20:00, 17:33:00, 18:33:00, …
## $ Q1 <dbl> 38, 46, 28, 30, 23, 24, 40, 50, 24, 36, 22, 31, 50, 19, 41, 2…
## $ Q2 <dbl> 0, 9, 0, 1, 0, 0, 0, 1, 0, -1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, …
## $ Q3 <dbl> 100, 104, 101, 100, 100, 100, 109, 100, 101, 100, 100, 100, 1…
## $ Q3OTHER <chr> "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "…
## $ Q4A <dbl> 2, 2, 1, 4, 3, 1, 2, 1, 3, 2, 3, 3, 5, 2, 1, 2, 2, 2, 2, 3, 3…
## $ Q4B <dbl> 2, 3, 3, 3, 2, 2, 2, 1, 3, 2, 3, 3, 3, 4, 1, 2, 2, 2, 2, 3, 3…
## $ Q5 <dbl> 2, 2, 3, 4, 4, 2, 2, 2, 2, 2, 3, 3, -1, 3, 4, 4, 2, 2, 2, 3, …
## $ Q6A <dbl> 2, 2, 2, 3, 2, 2, 2, 2, 4, 3, 4, 3, 3, 4, 1, 4, 2, 2, 3, 2, 2…
## $ Q6B <dbl> 2, 3, 3, 3, 4, 3, 2, 2, 3, 3, 4, 3, 3, 3, 1, 4, 2, 2, 3, 3, 2…
## $ Q7A <dbl> 9, 9, 4, 4, 4, 3, 2, 2, 4, 1, 3, 2, 4, 4, 4, 4, 4, 4, 9, 4, 4…
## $ Q7B <dbl> 9, 9, 4, 4, 5, 4, 2, 2, 4, 9, 3, 2, 4, 4, 4, 4, 4, 4, 9, 4, 4…
## $ Q8A <dbl> 2, 1, 0, 0, 0, 1, 4, 0, 2, 0, 0, 0, 1, 0, 2, 0, 1, 0, 4, 3, 0…
## $ Q8B <dbl> 0, 0, 0, 0, 0, 0, 2, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 2, 0…
## $ Q8C <dbl> 1, 1, 0, 0, 0, 3, 2, 4, 1, 0, 0, 0, 1, 0, 2, 0, 0, 0, 2, 2, 0…
## $ Q8D <dbl> 1, 1, 0, 0, 0, 1, 2, 4, 1, 0, 0, 0, 1, 0, 2, 0, 1, 0, 0, 1, 0…
## $ Q8E <dbl> 4, 4, 0, 3, 2, 3, 3, 4, 2, 4, 0, 0, 1, 0, 4, 0, 3, 0, 4, 3, 0…
## $ Q9A <dbl> 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 1, 0, 0, 2…
## $ Q9B <dbl> 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 0, 0, 1, 1, 0, 0, 0, 2…
## $ Q9C <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0…
## $ Q10 <dbl> 4, 4, 3, 4, 4, 3, 3, 4, 1, 1, 4, 3, 4, 3, 2, 3, 3, 3, 4, 3, 3…
## $ Q11 <dbl> 1, 1, 3, 3, 3, 4, 2, 1, 1, 1, 3, 2, 2, 2, 4, 4, 3, 2, 1, 4, 2…
## $ Q12A <dbl> 4, 3, 2, 4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4, 0, 3, 4, 3, 4, 3…
## $ Q12B <dbl> 4, 4, 4, 3, 4, 4, 4, 3, 3, 2, 3, 2, 1, 4, 1, 0, 9, 0, 0, 4, 0…
## $ Q12C <dbl> 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 9, 0, 0, 0, 0…
## $ Q13 <dbl> 3, 3, 2, 0, 1, 3, 2, 3, 2, 2, 2, 2, 1, 0, 1, 0, 2, 2, 3, 3, 2…
## $ Q14 <dbl> 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2…
## $ Q15A <dbl> 4, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3…
## $ Q15B <dbl> 4, 4, 4, 4, 4, 3, 1, 4, 4, 4, 2, 4, 3, 4, 1, 3, 3, 3, 4, 4, 3…
## $ Q15C <dbl> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 4, 3, 4, 4, 4, 3, 3, 4, 4, 3…
## $ Q16 <dbl> 4, 4, 3, 4, 2, 4, 4, 4, 4, 4, 2, 2, 4, 4, 4, 3, 3, 3, 4, 4, 2…
## $ Q17 <dbl> 1, 1, 3, 1, 3, 2, 4, 3, 2, 1, 3, 3, 4, 3, 4, 4, 2, 2, 1, 1, 2…
## $ Q18 <dbl> 1, 4, 3, 4, 4, 1, 2, 4, 1, 4, 2, 3, 2, 1, 1, 1, 3, 2, 4, 4, 3…
## $ Q19 <dbl> 4, 4, 3, 3, 4, 4, 4, 3, 4, 4, 3, 3, 3, 4, 4, 3, 3, 2, 4, 4, 3…
## $ Q20 <dbl> 4, 4, 3, 3, 3, 4, 4, 3, 4, 4, 3, 2, 3, 4, 4, 1, 2, 3, 4, 4, 2…
## $ Q21 <dbl> 4, 4, 3, 3, 1, 4, 3, 4, 1, 4, 3, 2, 3, 4, 4, 4, 3, 2, 4, 4, 3…
## $ Q22A <dbl> 1, 1, 1, 2, 2, 0, 1, 2, 0, 1, 1, 1, 2, 0, 1, 1, 0, 0, 1, 2, 1…
## $ Q22B <dbl> 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 2, 0, 2, 0…
## $ Q23A <dbl> 4, 4, 0, 1, 1, 2, 1, 1, 1, 4, 1, 1, 1, 1, 0, 1, 3, 3, 1, 4, 1…
## $ Q23B <dbl> 3, 1, 0, 1, 0, 1, 1, 1, 1, 3, 1, 1, 1, 1, 0, 1, 3, 1, 1, 4, 1…
## $ Q23C <dbl> 1, 1, 0, 1, 0, 1, 0, 1, 1, 4, 9, 1, 0, 1, 0, 0, 0, 1, 1, 4, 1…
## $ Q23D <dbl> 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 6, 4, 1, 1, 1, 1…
## $ Q24A <dbl> 2, 3, 2, 2, 3, 3, 0, 0, 2, 2, 2, 2, 0, 3, 3, 9, 2, 2, 2, 2, 2…
## $ Q24B <dbl> 2, 0, 2, 2, 3, 3, 2, 0, 0, 0, 2, 2, 0, 2, 0, 9, 2, 2, 0, 1, 1…
## $ Q25A <dbl> 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2…
## $ Q25B <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2…
## $ Q25C <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2…
## $ Q26A <dbl> 7, 2, 7, 7, 7, 7, 7, 7, 7, 2, 7, 7, 7, 7, 7, 7, 7, 2, 7, 7, 2…
## $ Q26B <dbl> 7, 1, 7, 7, 7, 7, 7, 7, 7, 1, 7, 7, 7, 7, 7, 7, 7, 1, 7, 7, 1…
## $ Q27A <dbl> 0, 0, 0, 0, 3, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0…
## $ Q27B <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0…
## $ Q27C <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 1, 0, 0, 0, 2, 0, 0, 1, 0, 0, 0…
## $ Q28A <dbl> 7, 7, 7, 7, 2, 7, 7, 7, 1, 2, 1, 7, 7, 7, 1, 9, 7, 1, 7, 7, 7…
## $ Q28B <dbl> 7, 7, 7, 7, 1, 7, 7, 7, 1, 2, 2, 7, 7, 7, 2, 9, 7, 1, 7, 7, 7…
## $ Q29A <dbl> 4, 4, 2, 1, 1, 1, 2, 1, 1, 4, 2, 3, 2, 2, 1, 2, 3, 3, 4, 2, 3…
## $ Q29B <dbl> 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 3, 1, 1, 2, 1, 3, 3, 1, 1, 3…
## $ Q29C <dbl> 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 3, 2, 2, 1, 1, 3, 3, 1, 1, 3…
## $ Q30 <dbl> 3, 3, 3, 2, 3, 2, 2, 3, 3, 3, 2, 3, 2, 1, 3, 3, 3, 3, 3, 3, 2…
## $ Q31 <dbl> 1, 1, 3, 3, 2, 3, 4, 2, 1, 1, 2, 3, 1, 1, 1, 1, 3, 2, 1, 1, 3…
## $ Q32 <dbl> 1, 4, 3, 3, 3, 1, 3, 4, 4, 1, 2, 3, 3, 1, 4, 4, 2, 3, 4, 1, 3…
## $ Q33 <dbl> 1, 1, 3, 1, 1, 3, 2, 2, 1, 1, 1, 3, 3, 1, 1, 4, 3, 3, 1, 1, 3…
## $ Q34 <dbl> 4, 4, 3, 4, 4, 3, 3, 1, 1, 4, 3, 2, 3, 2, 1, 4, 2, 3, 1, 4, 3…
## $ Q35 <dbl> 1, 1, 2, 1, 1, 3, 2, 1, 1, 1, 2, 2, 1, 1, 1, 4, 2, 2, 1, 1, 3…
## $ Q36 <dbl> 1, 1, 2, 3, 1, 2, 2, 2, 1, 1, 1, 3, 3, 1, 1, 1, 2, 3, 1, 1, 3…
## $ Q37 <dbl> 4, 4, 2, 2, 4, 2, 3, 2, 4, 4, 3, 3, 3, 2, 4, 4, 3, 3, 4, 4, 3…
## $ Q38 <dbl> 1, 1, 3, 1, 2, 2, 2, 2, 1, 1, 2, 3, 1, 2, 1, 1, 3, 3, 1, 1, 2…
## $ Q39 <dbl> 3, 4, 3, 3, 3, 3, 3, 2, 4, 1, 3, 3, 3, 3, 1, 4, 2, 2, 1, 4, 2…
## $ Q40A <dbl> 4, 4, 4, 4, 4, 1, 3, 4, 1, 4, 4, 4, 4, 3, 2, 4, 4, 4, 4, 3, 4…
## $ Q40B <dbl> 1, 3, 2, 1, 3, 3, -1, 3, 4, 2, 2, 1, 1, 4, 4, 2, 2, 2, 1, 4, …
## $ Q41A1 <chr> "-1", "-1", "-1", "-1", "-1", "NAGO MATHURIN", "-1", "-1", "N…
## $ Q41A2 <dbl> 9, 1, 9, 9, 9, 2, 9, 9, 2, 1, 9, 9, 3, 1, 9, 9, 1, 9, 9, 3, 9…
## $ Q41B1 <chr> "-1", "LAWANI SOULE MANA", "-1", "-1", "-1", "-1", "-1", "LAW…
## $ Q41B2 <dbl> 9, 3, 9, 9, 1, 1, 9, 3, 3, 2, 3, 9, 3, 1, -1, 9, 9, 9, 9, 3, …
## $ Q42A <dbl> 4, 4, 4, 3, 2, 3, 2, 3, 3, 3, 3, 3, 2, 4, 3, 4, 4, 4, 4, 3, 3…
## $ Q42B <dbl> 4, 4, 4, 4, 4, 3, 4, 3, 4, 4, 4, 4, 3, 3, 3, 2, 3, 3, 4, 4, 3…
## $ Q42C <dbl> 2, 2, 3, 3, 2, 1, 2, 3, 2, 2, 3, 3, 3, 3, 1, 2, 2, 2, 2, 2, 2…
## $ Q42D <dbl> 1, 1, 2, 2, 3, 2, 2, 2, 1, 1, 2, 2, 2, 3, 2, 2, 1, 1, 1, 1, 1…
## $ Q43 <dbl> 4, 4, 2, 2, 2, 3, 2, 4, 4, 3, 2, 3, 3, 1, 2, 4, 3, 3, 3, 2, 3…
## $ Q44A <dbl> 4, 3, 2, 2, 4, 1, 4, 4, 4, 3, 2, 4, 2, 2, 1, 4, 3, 3, 4, 4, 4…
## $ Q44B <dbl> 5, 4, 2, 4, 2, 4, 4, 4, 5, 5, 2, 4, 2, 2, 4, 5, 3, 3, 5, 4, 4…
## $ Q44C <dbl> 5, 5, 2, 4, 4, 4, 4, 4, 5, 5, 2, 4, 2, 4, 4, 4, 3, 3, 5, 4, 4…
## $ Q45A <dbl> 0, 0, 1, 1, 3, 1, 2, 1, 1, 1, 1, 0, 2, 2, 1, 1, 1, 1, 0, 1, 1…
## $ Q45B <dbl> 0, 1, 1, 0, 2, 1, 1, 1, 0, 2, 1, 0, 1, 0, 1, 1, 0, 0, 9, 1, 0…
## $ Q45C <dbl> 0, 2, 1, 0, 2, 1, 1, 1, 0, 2, 1, 0, 1, 3, 2, 1, 0, 0, 0, 2, 0…
## $ Q45D <dbl> 0, 2, 1, 0, 2, 0, 2, 1, 0, 0, 1, 0, 1, 2, 3, 1, 0, 0, 0, 2, 0…
## $ Q45E <dbl> 0, 3, 1, 0, 2, 3, 2, 1, 0, 0, 1, 0, 2, 0, 0, 1, 0, 0, 0, 2, 0…
## $ Q46 <dbl> 3, 3, 2, 2, 2, 1, 3, 3, 3, 2, 2, 2, 1, 3, 2, 2, 2, 2, 3, 3, 2…
## $ Q47 <dbl> 3, 1, 1, 2, 3, 1, 0, 2, 3, 1, 1, 1, 2, 3, 3, 3, 2, 2, 3, 3, 1…
## $ Q48A <dbl> 0, 0, 1, 1, 1, 2, 0, 1, 0, 0, 1, 1, 0, 3, 0, 0, 2, 2, 0, 0, 1…
## $ Q48B <dbl> 0, 0, 1, 1, 1, 2, 2, 1, 0, 1, 1, 1, 0, 3, 0, 3, 2, 2, 0, 0, 2…
## $ Q49A <dbl> 3, 1, 3, 1, 3, 1, 2, 3, 2, 0, 1, 0, 3, 3, 2, 1, 2, 3, 2, 2, 3…
## $ Q49B <dbl> 1, 1, 2, 1, 3, 2, 1, 2, 2, 2, 1, 0, 2, 3, 1, 1, 2, 2, 2, 2, 2…
## $ Q49C <dbl> 1, 1, 2, 0, 1, 2, 0, 1, 2, 1, 1, 0, 1, 1, 0, 1, 1, 2, 9, 2, 1…
## $ Q49D <dbl> 0, 1, 2, 0, 1, 0, 1, 1, 2, 1, 1, 0, 1, 1, 9, 1, 1, 3, 1, 1, 2…
## $ Q49E <dbl> 1, 0, 2, 0, 1, 2, 0, 2, 2, 2, 1, 0, 2, 1, 1, 1, 1, 3, 1, 1, 3…
## $ Q49F <dbl> 1, 0, 2, 1, 1, 1, 0, 1, 2, 2, 1, 0, 2, 0, 1, 1, 1, 2, 1, 1, 2…
## $ Q49G <dbl> 1, 2, 2, 1, 1, 2, 1, 1, 2, 1, 1, 0, 1, 1, 1, 1, 1, 2, 2, 2, 1…
## $ Q49H <dbl> 0, 2, 2, 1, 3, 2, 1, 1, 2, 1, 1, 0, 1, 3, 1, 1, 1, 2, 2, 2, 1…
## $ Q49I <dbl> 1, 1, 2, 1, 1, 1, 0, 1, 2, 2, 1, 0, 1, 1, 1, 9, 1, 2, 1, 1, 2…
## $ Q50A <dbl> 9, 2, 2, 3, 3, 1, 1, 1, 3, 3, 1, 2, 1, 0, 1, 0, 2, 1, 2, 2, 1…
## $ Q50B <dbl> 3, 1, 2, 2, 3, 1, 1, 1, 3, 2, 1, 2, 1, 2, 2, 0, 2, 1, 2, 2, 1…
## $ Q50C <dbl> 2, 1, 2, 3, 3, 1, 2, 1, 2, 1, 1, 2, 1, 0, 2, 0, 2, 2, 1, 1, 1…
## $ Q50D <dbl> 2, 2, 2, 2, 3, 1, 2, 1, 3, 3, 1, 2, 1, 1, 1, 0, 2, 1, 2, 2, 1…
## $ Q50E <dbl> 1, 1, 3, 2, 3, 2, 2, 1, 2, 1, 1, 2, 1, 2, 9, 0, 2, 2, 1, 1, 1…
## $ Q50F <dbl> 2, 2, 3, 2, 3, 2, 2, 1, 2, 2, 1, 2, 1, 0, 1, 0, 2, 2, 3, 2, 1…
## $ Q50G <dbl> 2, 2, 3, 2, 3, 2, 2, 1, 2, 3, 1, 2, 1, 0, 1, 0, 2, 2, 3, 2, 1…
## $ Q50H <dbl> 0, 1, 2, 9, 3, 1, 3, 1, 1, 0, 1, 2, 1, 0, 2, 0, 2, 2, 1, 1, 1…
## $ Q51A <dbl> 7, 0, 7, 7, 7, 7, 7, 0, 7, 2, 7, 2, 0, 7, 0, 0, 7, 7, 7, 7, 7…
## $ Q51B <dbl> 0, 0, 7, 7, 7, 7, 2, 0, 7, 0, 7, 2, 0, 7, 0, 0, 7, 7, 7, 7, 7…
## $ Q51C <dbl> 7, 7, 7, 7, 7, 7, 7, 0, 7, 7, 7, 2, 0, 7, 0, 0, 7, 7, 7, 7, 7…
## $ Q52 <dbl> 4, 4, 2, 4, 3, 4, 4, 3, 3, 4, 2, 1, 3, 3, 3, 3, 1, 2, 4, -1, …
## $ Q53A <dbl> 2, 2, 4, 3, 4, 3, 1, 2, 2, 0, 3, 2, 2, 3, 2, 9, 2, 3, 2, 2, 3…
## $ Q53B <dbl> 2, 2, 4, 3, 0, 1, 1, 0, 0, 0, 3, 2, 0, 0, 2, 9, 3, 3, 2, 2, 3…
## $ Q54A <dbl> 0, 1, 1, 1, 1, 1, 2, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 2, 0, 1, 0…
## $ Q54B <dbl> 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 2, 0…
## $ Q54C <dbl> 0, 0, 1, 1, 1, 0, 2, 3, 0, 3, 0, 1, 0, 1, 2, 0, 2, 1, 0, 0, 1…
## $ Q55 <dbl> 4, 4, 3, 3, 4, 4, 3, 3, 4, 4, 4, 2, 4, 1, 4, 1, 3, 3, 4, 4, 2…
## $ Q56PT1 <dbl> 4, 7, 1, 1, 8, 1, 1, 3, 17, 1, 1, 1, 1, 1, 1, 3, 3, 2, 7, 1, …
## $ Q56PT2 <dbl> 1, 1, 9, 3, 16, 14, 3, 13, 1, 3, 7, 14, 13, 4, 23, 1, 8, 8, 1…
## $ Q56PT3 <dbl> 32, 13, 16, 10, 14, 13, 7, 7, 13, 15, 14, 20, 23, 24, 20, 1, …
## $ Q57A <dbl> 3, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 2, 3, 3, 2, 1, 3, 3, 2, 2, 3…
## $ Q57B <dbl> 2, 2, 3, 3, 3, 3, 3, 3, 2, 1, 2, 1, 3, 4, 2, 2, 3, 3, 2, 1, 3…
## $ Q57C <dbl> 2, 2, 2, 1, 4, 1, 3, 3, 2, 1, 1, 1, 2, 3, 2, 3, 3, 3, 3, 2, 3…
## $ Q57D <dbl> 2, 2, 3, 1, 3, 1, 3, 3, 2, 1, 1, 1, 3, 4, 1, 4, 3, 1, 2, 2, 3…
## $ Q57E <dbl> 1, 1, 3, 1, 3, 1, 3, 3, 1, 1, 1, 1, 2, 3, 1, 4, 3, 1, 1, 1, 2…
## $ Q57F <dbl> 3, 3, 2, 2, 3, 1, 3, 3, 3, 3, 1, 1, 3, 3, 3, 3, 3, 2, 3, 1, 2…
## $ Q57G <dbl> 3, 3, 2, 3, 3, 1, 3, 4, 3, 2, 1, 3, 3, 3, 3, 4, 3, 2, 4, 2, 2…
## $ Q57H <dbl> 3, 3, 2, 2, 3, 1, 3, 3, 3, 3, 2, 3, 3, 3, 9, 3, 3, 2, 3, 3, 3…
## $ Q57I <dbl> 3, 3, 2, 2, 3, 3, 3, 4, 3, 3, 2, 3, 3, 3, 3, 3, 2, 3, 2, 2, 3…
## $ Q57J <dbl> 3, 3, 2, 2, 3, 1, 3, 4, 3, 2, 2, 3, 3, 4, 3, 3, 2, 3, 3, 2, 3…
## $ Q57K <dbl> 3, 2, 2, 2, 3, 1, 3, 4, 2, 1, 2, 1, 3, 4, 9, 3, 2, 3, 2, 2, 3…
## $ Q57L <dbl> 3, 3, 2, 2, 3, 1, 3, 4, 3, 1, 2, 2, 3, 4, 3, 3, 3, 3, 4, 3, 3…
## $ Q57M <dbl> 2, 3, 2, 2, 3, 1, 3, 3, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 1, 2, 3…
## $ Q57N <dbl> 2, 3, 2, 2, 4, 1, 2, 3, 2, 2, 2, 2, 2, 3, 2, 3, 2, 3, 1, 2, 3…
## $ Q57O <dbl> 2, 2, 2, 2, 3, 3, 2, 3, 3, 2, 2, 2, 2, 3, 1, 3, 3, 3, 1, 2, 3…
## $ Q57P <dbl> 3, 3, 2, 2, 3, 3, 3, 3, 3, 3, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3…
## $ Q58A <dbl> 2, 2, 2, 2, 4, 2, 4, 2, 2, 2, 2, 2, 2, 2, 4, 4, 1, 4, 4, 4, 1…
## $ Q58B <dbl> 1, 1, 1, 9, 1, 2, 4, 2, 1, 1, 4, 4, 1, 2, 1, 2, 1, 4, 1, 1, 1…
## $ Q58C <dbl> 1, 1, 1, 1, 1, 1, 4, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1…
## $ Q58D <dbl> 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 2, 1, 3, 1, 1, 1…
## $ Q58E <dbl> 4, 2, 2, 3, 3, 2, 2, 2, 2, 2, 4, 3, 1, 2, 3, 2, 4, 4, 2, 2, 3…
## $ Q58F <dbl> 4, 4, 2, 1, 2, 1, 4, 2, 4, 4, 4, 3, 2, 2, 2, 2, 4, 4, 4, 4, 3…
## $ Q58G <dbl> 1, 1, 2, 1, 1, 1, 4, 1, 1, 1, 4, 2, 1, 2, 1, 2, 4, 4, 1, 1, 1…
## $ Q58H <dbl> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 4, 2, 1, 1, 1, 4, 4, 4, 1, 1, 1…
## $ Q59A <dbl> 2, 3, 2, 3, 2, 1, 9, 3, 2, 2, 2, 2, 2, 4, 1, 3, 2, 2, 1, 1, 2…
## $ Q59B <dbl> 3, 3, 2, 3, 2, 1, 9, 3, 2, 2, 2, 3, 2, 2, 3, 4, 2, 2, 1, 1, 2…
## $ Q59C <dbl> 2, 9, 2, 2, 2, 1, 3, 3, 3, 3, 2, 2, 2, 2, 9, 4, 2, 2, 3, 2, 2…
## $ Q59D <dbl> 3, 3, 2, 2, 2, 1, 2, 3, 3, 2, 2, 3, 2, 3, 4, 4, 2, 2, 2, 2, 2…
## $ Q59E <dbl> 2, 2, 2, 3, 2, 2, 3, 3, 2, 4, 2, 3, 2, 4, 3, 3, 2, 3, 1, 3, 2…
## $ Q59F <dbl> 2, 2, 2, 3, 2, 1, 3, 4, 2, 4, 2, 3, 2, 4, 3, 3, 2, 3, 2, 3, 2…
## $ Q60A <dbl> 9, 9, 2, 2, 2, 1, 3, 3, 2, 1, 1, 2, 2, 3, 1, 9, 2, 2, 3, 1, 2…
## $ Q60B <dbl> 9, 9, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 3, 1, 9, 2, 2, 3, 2, 3…
## $ Q60C <dbl> 9, 9, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 1, 1, 9, 2, 2, 3, 2, 2…
## $ Q60D <dbl> 9, 9, 2, 2, 2, 1, 3, 2, 2, 1, 1, 2, 2, 1, 4, 9, 2, 2, 3, 2, 2…
## $ Q60E <dbl> 9, 9, 2, 2, 2, 2, 3, 2, 2, 2, 1, 2, 2, 3, 9, 9, 2, 2, 3, 2, 2…
## $ Q60F <dbl> 9, 9, 2, 2, 2, 1, 3, 3, 3, 3, 1, 2, 2, 3, 3, 9, 2, 2, 3, 2, 2…
## $ Q61 <dbl> 1, 9, 3, 3, 4, 4, 2, 1, 4, 1, 3, 2, 2, 4, 1, 3, 3, 3, 4, 3, 2…
## $ Q62A1 <dbl> 0, 9, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q62A <dbl> 7, 0, 7, 7, 2, 7, 7, 7, 7, 7, 0, 7, 2, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q62B <dbl> 7, 0, 7, 7, 2, 7, 7, 7, 7, 7, 0, 7, 1, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q62C <dbl> 7, 0, 7, 7, 2, 7, 7, 7, 7, 7, 0, 7, 0, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q62D <dbl> 7, 0, 7, 7, 0, 7, 7, 7, 7, 7, 0, 7, 0, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q62E <dbl> 7, 0, 7, 7, 0, 7, 7, 7, 7, 7, 0, 7, 1, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q62F <dbl> 7, 0, 7, 7, 0, 7, 7, 7, 7, 7, 0, 7, 0, 7, 7, 7, 7, 7, 7, 7, 7…
## $ Q63A <dbl> 9, 9, 3, 3, 3, 2, 9, 9, 9, 9, 3, 3, 2, 1, 9, 9, 3, 3, 9, 3, 3…
## $ Q63B <dbl> 9, 9, 3, 3, 3, 4, 9, 9, 9, 9, 3, 3, 2, 4, 9, 9, 3, 3, 9, 9, 3…
## $ Q63C <dbl> 9, 9, 3, 3, 3, 2, 9, 9, 9, 9, 3, 3, 2, 2, 9, 9, 3, 3, 9, 9, 3…
## $ Q63D <dbl> 9, 9, 3, 3, 3, 1, 9, 9, 9, 9, 3, 3, 2, 1, 9, 9, 3, 3, 9, 9, 3…
## $ Q64A <dbl> 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0…
## $ Q64B <dbl> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0…
## $ Q64C <dbl> 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q64D <dbl> 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1…
## $ Q64E <dbl> 1, 1, 1, 1, 0, 0, 1, 0, 9, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0…
## $ Q65 <dbl> 9, 9, 3, 2, 4, 4, 4, 4, 9, 1, 2, 2, 2, 1, 2, 9, 3, 3, 9, 9, 2…
## $ Q66 <dbl> 9, 9, 4, 4, 2, 2, 1, 2, 9, 1, 3, 2, 2, 2, 4, 9, 3, 2, 9, 9, 3…
## $ Q67 <dbl> 9, 9, 0, 0, 0, 3, 0, 0, 9, 3, 0, 1, 0, 2, 3, 9, 0, 1, 9, 9, 1…
## $ Q68 <dbl> 9, 9, 3, 3, 3, 3, 3, 3, 9, 3, 3, 3, 2, 3, 1, 9, 3, 3, 9, 9, 3…
## $ Q69 <dbl> 9, 9, 3, 3, 3, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 9, 3, 3, 9, 9, 3…
## $ Q70A <dbl> 3, 9, 3, 3, 4, 2, 4, 4, 3, 3, 3, 2, 4, 3, 3, 3, 3, 4, 3, 3, 3…
## $ Q70B <dbl> 9, 9, 3, 9, 3, 2, 3, 3, 3, 2, 3, 1, 3, 3, 9, 3, 3, 4, 3, 3, 3…
## $ Q70C <dbl> 9, 9, 3, 2, 3, 3, 3, 2, 3, 2, 2, 3, 2, 3, 9, 9, 2, 4, 3, 2, 3…
## $ Q71 <dbl> 3, 3, 4, 3, 3, 1, 3, 4, 2, 3, 4, 1, 4, 3, 3, 3, 4, 3, 4, 3, 3…
## $ Q72A <dbl> 2, 2, 2, 1, 1, 2, 2, 2, 2, 3, 1, 2, 2, 2, 9, 1, 2, 2, 1, 1, 1…
## $ Q72B <dbl> 2, 2, 2, 1, 1, 1, 2, 2, 3, 3, 1, 2, 2, 2, 9, 3, 2, 2, 1, 1, 1…
## $ Q73A <dbl> 3, 3, 3, 3, 3, 1, 3, 3, 3, 1, 3, 3, 3, 0, 3, 3, 3, 3, 3, 3, 3…
## $ Q73B <dbl> 3, 3, 3, 3, 3, 1, 3, 3, 3, 9, 3, 3, 3, 0, 3, 9, 3, 3, 3, 3, 3…
## $ Q73C <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 1, 9, 3, 3, 3, 1, 3, 3, 3, 3, 3, 3, 3…
## $ Q74 <dbl> 1, 9, 3, 2, 2, 2, 3, 2, 9, 9, 1, 2, 3, 4, 1, 9, 2, 3, 1, 1, 3…
## $ Q79 <dbl> 100, 104, 990, 100, 108, 101, 109, 100, 101, 104, 104, 101, 1…
## $ Q79OTHER <chr> "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "…
## $ Q80 <dbl> 3, 3, 7, 3, 3, 4, 3, 3, 3, 3, 4, 3, 3, 2, 9, 2, 3, 2, 3, 3, 3…
## $ Q81 <dbl> 9, 5, 7, 3, 3, 5, 4, 1, 3, 5, 3, 3, 3, 2, 9, 1, 3, 4, 5, 5, 3…
## $ Q82 <dbl> 0, 0, 7, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 3, 9, 0, 0, 0, 0, 0…
## $ Q83 <dbl> 2, 4, 7, 5, 5, 5, 5, 5, 5, 4, 5, 2, 5, 5, 3, 5, 4, 5, 4, 4, 5…
## $ Q84A <dbl> 0, 0, 2, 2, 1, 2, 0, 0, 0, 0, 1, 0, 2, 1, 1, 1, 2, 1, 0, 2, 2…
## $ Q84B <dbl> 0, 0, 1, 2, 1, 0, 0, 0, 0, 0, 1, 0, 2, 1, 1, 1, 2, 1, 0, 1, 2…
## $ Q84C <dbl> 0, 0, 1, 2, 1, 0, 0, 0, 0, 0, 1, 0, 2, 1, 1, 1, 1, 1, 0, 0, 2…
## $ Q85 <dbl> 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1…
## $ Q86 <dbl> 100, 997, 997, 100, 106, 997, 997, 100, 997, 997, 997, 100, 9…
## $ Q87 <dbl> 0, 0, 4, 0, 0, 0, 3, 0, 2, 4, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0…
## $ Q88A <dbl> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 0, 4, 0, 0, 0, 0, 4, 0…
## $ Q88B <dbl> 0, 0, 0, 0, 3, 1, 1, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q88C <dbl> 0, 0, 0, 0, 3, 1, 0, 0, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q88D <dbl> 9, 3, 1, 1, 3, 3, 3, 4, 2, 3, 2, 2, 1, 3, 3, 0, 1, 1, 3, 4, 1…
## $ Q88E <chr> "FON,FRANCAIS", "FRANCAIS,FON,YORUBA", "YORUBA,ADJA", "FON,FR…
## $ Q88F <dbl> 2, 3, 2, 2, 3, 4, 3, 2, 4, 3, 3, 3, 1, 1, 2, 3, 2, 2, 2, 3, 1…
## $ Q89 <dbl> 4, 2, 4, 3, 4, 4, 5, 2, 4, 4, 5, 2, 0, 4, 4, 4, 2, 2, 0, 7, 0…
## $ Q90 <dbl> 2, 18, 18, 1, 2, 2, 2, 12, 13, 2, 1, 1, 2, 2, 2, 12, 25, 1, 2…
## $ Q91 <dbl> 4, 1, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4, 4, 4, 4…
## $ Q92A <dbl> 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0…
## $ Q92B <dbl> 1, 1, 1, 9, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0…
## $ Q92C <dbl> 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 9, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0…
## $ Q93A <dbl> 3, 3, 3, 3, 2, 2, 1, 3, 2, 1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3…
## $ Q93B <dbl> 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q94 <dbl> 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 4, 0…
## $ Q95 <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0…
## $ Q96 <dbl> 997, 997, 997, 997, 997, 997, 2, 997, 997, 997, 997, 998, 998…
## $ Q97 <dbl> 100, 100, 100, 100, 100, 102, 998, 100, 999, 100, 101, 100, 1…
## $ Q98A <dbl> 3, 3, 1, 2, 2, 3, 2, 9, -1, -1, -1, -1, -1, -1, -1, 9, 1, 2, …
## $ Q98B <dbl> 3, 3, 1, 2, 2, 1, 2, 9, 3, 3, 1, 1, 1, 3, 9, 3, 1, 2, 3, 2, 3…
## $ Q98C <dbl> 3, 3, 1, 2, 2, 2, 2, 9, 3, 3, 2, 1, 1, 3, 9, 3, 2, 2, 3, 2, 3…
## $ Q98D <dbl> 3, 3, 1, 2, 2, 2, 2, 9, 3, 3, 2, 1, 1, 3, 9, 3, 2, 1, 3, 2, 3…
## $ Q98E <dbl> 3, 3, 1, 2, 2, 2, 2, 9, 3, 3, 1, 1, 1, 3, 9, 2, 2, 1, 3, 2, 3…
## $ Q98F <dbl> 3, 3, 1, 2, 2, 2, 2, 9, 3, 2, 2, 1, 1, 3, 2, 3, 2, 2, 3, 2, 3…
## $ Q98G <dbl> 3, 2, 1, 2, 2, 0, 2, 9, 3, 2, 2, 1, 1, 3, 1, 9, 2, 2, 3, 2, 3…
## $ Q98H <dbl> 3, 3, 1, 2, 2, 2, 2, 9, 3, 3, 2, 1, 1, 1, 9, 9, 2, 2, 3, 2, 3…
## $ Q98I <dbl> 3, 3, 1, 2, 2, 0, 2, 9, 3, 3, 2, 1, 1, 3, 9, 3, 2, 2, 3, 2, 3…
## $ Q98J <dbl> 3, 3, 1, 2, 2, 0, 2, 9, 3, 3, 2, 1, 1, 3, 9, 1, 2, 2, 3, 2, 3…
## $ Q98J1 <dbl> 3, 3, 1, 2, 2, 0, 2, 9, 3, 3, 1, 1, 1, 3, 2, 3, 2, 2, 3, 2, 3…
## $ Q98K <dbl> 1, 1, 3, 9, 2, 3, 2, 9, 1, 1, 1, 3, 9, 3, 9, 9, 3, 3, 1, 1, 9…
## $ Q99A <dbl> 2, 2, 3, 2, 3, 3, 2, 9, 2, 4, 2, 2, 3, 2, 9, 4, 3, 3, 9, 5, 3…
## $ Q99B <dbl> 2, 1, 3, 2, 3, 3, 2, 9, 2, 4, 1, 2, 4, 2, 9, 1, 2, 3, 9, 5, 3…
## $ Q99C <dbl> 1, 2, 3, 2, 3, 2, 2, 9, 2, 3, 1, 2, 4, 2, 2, 2, 2, 3, 9, 4, 3…
## $ Q100 <dbl> 19, 17, 1, 1, 20, 17, 1, 1, 19, 19, 17, 1, 17, 2, 19, 19, 1, …
## $ ENDTIME <time> 19:30:00, 20:40:00, 19:10:00, 18:17:00, 18:25:00, 19:35:00, …
## $ LENGTH <dbl> 60, 60, 40, 57, 52, 62, 74, 42, 76, 87, 46, 56, 75, 63, 101, …
## $ Q101 <dbl> 2, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1, 2…
## $ Q102 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q103 <dbl> 100, 2, 2, 100, 2, 2, 2, 2, 2, 2, 2, 100, 100, 2, 2, 101, 101…
## $ Q104 <dbl> 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 5, 1, 1, 1…
## $ Q105A <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q105B <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q105C <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q105D <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q105E <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ Q106 <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 3, 0, 0, 0, 0, 0…
## $ Q107A <chr> "000", "000", "000", "000", "000", "000", "000", "000", "0", …
## $ Q107B <chr> "000", "000", "000", "000", "000", "000", "000", "000", "0", …
## $ Q107C <chr> "000", "000", "000", "000", "000", "000", "000", "000", "0", …
## $ Q108A <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q108B <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q108C <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1…
## $ Q108D <dbl> 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q108E <dbl> 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q108F <dbl> 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q110 <chr> "BEN10", "BEN10", "BEN12", "BEN12", "BEN11", "BEN11", "BEN09"…
## $ Q111 <dbl> 34, 34, 30, 30, 26, 26, 26, 26, 34, 34, 30, 30, 26, 26, 26, 2…
## $ Q112 <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Q113 <dbl> 1, 1, 2, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 2…
## $ Q114 <dbl> 101, 101, 101, 101, 2, 2, 2, 2, 101, 101, 101, 101, 2, 2, 2, …
## $ Q115 <dbl> 7, 7, 8, 8, 8, 8, 7, 7, 7, 7, 8, 8, 8, 8, 7, 7, 8, 8, 7, 7, 8…
## $ Withinwt <dbl> 1.412669, 1.412669, 1.412669, 1.412669, 1.412669, 1.412669, 1…
## $ Acrosswt <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ Combinwt <dbl> 1.412669, 1.412669, 1.412669, 1.412669, 1.412669, 1.412669, 1…
This is a fairly large dataset. While it doesn’t matter much for computing time, sometimes you may want to trim datasets down a bit. From the codebook at http://afrobarometer.org/sites/default/files/data/round-4/merged_r4_codebook3.pdf, I determine that I’ll be interested only in the following variables:
- COUNTRY: Country
- URBRUR: Urban or Rural Primary Sampling Unit
- Q42A: In your opinion how much of a democracy is [Ghana/Kenya/etc.] today?
- Q89: What is the highest level of education you have completed?
- Q101: Respondent’s gender
- Q1: Respondent’s age

So we use the select() function from the “dplyr” package to create a new object, ab.small, that contains only these six variables. Here, we also introduce the “pipe” symbol |>. The pipe was recently added to base R, but has been around for a while. Its main purpose is to make long, complex operations easier to read:

x |> mean() |> round() |> print()

can be read as: Start with x, then mean(), then round(), then print(). You will see the pipe used a lot in most tidyverse documentation. You can set a shortcut for the pipe under Tools > Global Options > Code.
ab.small <- ab |> select(COUNTRY, URBRUR, Q42A, Q89, Q101, Q1)
dim(ab.small)
## [1] 27713 6
str(ab.small)
## 'data.frame': 27713 obs. of 6 variables:
## $ COUNTRY: num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Country"
## ..- attr(*, "format.spss")= chr "F4.0"
## ..- attr(*, "labels")= Named num [1:20] 1 2 3 4 5 6 7 8 9 10 ...
## .. ..- attr(*, "names")= chr [1:20] "Benin" "Botswana" "Burkina Faso" "Cape Verde" ...
## $ URBRUR : num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Urban or Rural Primary Sampling Unit"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:2] 1 2
## .. ..- attr(*, "names")= chr [1:2] "Urban" "Rural"
## $ Q42A : num 4 4 4 3 2 3 2 3 3 3 ...
## ..- attr(*, "label")= chr "Q42a. Extent of democracy"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:8] -1 1 2 3 4 8 9 998
## .. ..- attr(*, "names")= chr [1:8] "Missing" "Not a democracy" "A democracy, with major problems" "A democracy, but with minor problems" ...
## $ Q89 : num 4 2 4 3 4 4 5 2 4 4 ...
## ..- attr(*, "label")= chr "Q89. Education of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:13] -1 0 1 2 3 4 5 6 7 8 ...
## .. ..- attr(*, "names")= chr [1:13] "Missing" "No formal schooling" "Informal schooling only" "Some primary schooling" ...
## $ Q101 : num 2 1 2 1 2 1 2 1 1 2 ...
## ..- attr(*, "label")= chr "Q101. Gender of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 1 2
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Male" "Female"
## $ Q1 : num 38 46 28 30 23 24 40 50 24 36 ...
## ..- attr(*, "label")= chr "Q1. Age"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 998 999
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Refused" "Don't know"
summary(ab.small)
## COUNTRY URBRUR Q42A Q89
## Min. : 1.00 Min. :1.00 Min. :-1.000 Min. :-1.00
## 1st Qu.: 6.00 1st Qu.:1.00 1st Qu.: 2.000 1st Qu.: 2.00
## Median :12.00 Median :2.00 Median : 3.000 Median : 3.00
## Mean :11.21 Mean :1.62 Mean : 3.452 Mean : 3.27
## 3rd Qu.:16.00 3rd Qu.:2.00 3rd Qu.: 4.000 3rd Qu.: 5.00
## Max. :20.00 Max. :2.00 Max. : 9.000 Max. :99.00
## Q101 Q1
## Min. :1.000 Min. : -1.00
## 1st Qu.:1.000 1st Qu.: 25.00
## Median :2.000 Median : 33.00
## Mean :1.501 Mean : 47.68
## 3rd Qu.:2.000 3rd Qu.: 45.00
## Max. :2.000 Max. :999.00
How would you achieve the same using square brackets?
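If you want to check your answer later: one possible solution uses base R’s square-bracket indexing with a character vector of column names (a sketch, equivalent to the select() call above):

```r
# Keep all rows, and only the six named columns
ab.small2 <- ab[, c("COUNTRY", "URBRUR", "Q42A", "Q89", "Q101", "Q1")]
```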
Now that we’ve read the dataset into R, we can process it for further analysis, which we’ll do later in this tutorial.
Datasets in SPSS format often contain variables with value labels. You can see this above in the output following the str(ab.small) command. Value labels can be useful to help you quickly identify the meaning of different codes without revisiting the codebook, e.g. that for Q101, 1 stands for "Male" and 2 for "Female". In many situations, this makes your life easier.
There are a few ways to display and use labels instead of numbers in R. One of them is through the sjlabelled package. The vignettes (1, 2) offer more detail, but here is the gist:
library("sjlabelled")
##
## Attaching package: 'sjlabelled'
## The following object is masked from 'package:forcats':
##
## as_factor
## The following object is masked from 'package:dplyr':
##
## as_label
## The following object is masked from 'package:ggplot2':
##
## as_label
table(ab.small$Q42A)
##
## -1 1 2 3 4 8 9
## 15 1875 7338 8249 7310 1202 1724
table(as_label(ab.small$Q42A))
##
## Missing Not a democracy
## 15 1875
## A democracy, with major problems A democracy, but with minor problems
## 7338 8249
## A full democracy Do not understand question/democracy
## 7310 1202
## Don't know Refused
## 1724 0
What do the numbers under the country names tell you?
For this example, we use the European Social Survey, an academically
driven cross-national survey that has been conducted every two years
across Europe since 2001. You can find more information on the ESS at http://www.europeansocialsurvey.org/ (under Data and
Documentation > Round 6) after you register on the site to access
data and codebooks. Let’s download the Stata version of the 2012 round
of the ESS, called “ESS6e02_1.dta”, and use the import()
function again to read the dataset into R.
ess <- import(file = "ESS6e02_1.dta")
Before actually looking at the dataset itself, you should look at its dimensions.
dim(ess)
## [1] 54673 626
This is again a fairly large dataset, so let’s trim it down a bit. From the variable list at http://www.europeansocialsurvey.org/docs/round6/survey/ESS6_appendix_a7_e02_1.pdf, I decide that I’ll be interested only in the following variables:
- cntry: Country
- trstlgl: Trust in the legal system, where 0 means you do not trust an institution at all and 10 means you have complete trust
- lrscale: Placement on left-right scale, where 0 means the left and 10 means the right
- fairelc: How important the respondent thinks it is for democracy in general that national elections are free and fair
- yrbrn: Year of birth
- gndr: Gender
- hinctnta: Household’s total net income, all sources

Again, we use the select() function to create a new object, ess.small, that contains only these seven variables.
ess.small <- ess |> select(cntry, trstlgl, lrscale, fairelc, yrbrn, gndr, hinctnta)
dim(ess.small)
## [1] 54673 7
str(ess.small)
## 'data.frame': 54673 obs. of 7 variables:
## $ cntry : chr "AL" "AL" "AL" "AL" ...
## ..- attr(*, "label")= chr "Country"
## ..- attr(*, "format.stata")= chr "%2s"
## $ trstlgl : num 0 0 2 7 6 5 10 9 7 0 ...
## ..- attr(*, "label")= chr "Trust in the legal system"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:14] 0 1 2 3 4 5 6 7 8 9 ...
## .. ..- attr(*, "names")= chr [1:14] "No trust at all" "1" "2" "3" ...
## $ lrscale : num 0 88 5 5 10 5 1 5 5 0 ...
## ..- attr(*, "label")= chr "Placement on left right scale"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:14] 0 1 2 3 4 5 6 7 8 9 ...
## .. ..- attr(*, "names")= chr [1:14] "Left" "1" "2" "3" ...
## $ fairelc : num 10 10 10 88 10 10 10 10 10 10 ...
## ..- attr(*, "label")= chr "National elections are free and fair"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:14] 0 1 2 3 4 5 6 7 8 9 ...
## .. ..- attr(*, "names")= chr [1:14] "Not at all important for democracy in general" "1" "2" "3" ...
## $ yrbrn : num 1949 1983 1946 9999 1953 ...
## ..- attr(*, "label")= chr "Year of birth"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:3] 7777 8888 9999
## .. ..- attr(*, "names")= chr [1:3] "Refusal" "Don't know" "No answer"
## $ gndr : num 1 2 2 1 1 1 2 2 2 2 ...
## ..- attr(*, "label")= chr "Gender"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:3] 1 2 9
## .. ..- attr(*, "names")= chr [1:3] "Male" "Female" "No answer"
## $ hinctnta: num 5 2 2 99 2 2 1 2 4 1 ...
## ..- attr(*, "label")= chr "Household's total net income, all sources"
## ..- attr(*, "format.stata")= chr "%10.0g"
## ..- attr(*, "labels")= Named num [1:13] 1 2 3 4 5 6 7 8 9 10 ...
## .. ..- attr(*, "names")= chr [1:13] "J - 1st decile" "R - 2nd decile" "C - 3rd decile" "M - 4th decile" ...
summary(ess.small)
## cntry trstlgl lrscale fairelc
## Length:54673 Min. : 0.000 Min. : 0.00 Min. : 0.00
## Class :character 1st Qu.: 3.000 1st Qu.: 4.00 1st Qu.: 8.00
## Mode :character Median : 5.000 Median : 5.00 Median :10.00
## Mean : 7.054 Mean :17.54 Mean :10.91
## 3rd Qu.: 7.000 3rd Qu.: 8.00 3rd Qu.:10.00
## Max. :99.000 Max. :99.00 Max. :99.00
## yrbrn gndr hinctnta
## Min. :1909 Min. :1.000 Min. : 1.00
## 1st Qu.:1950 1st Qu.:1.000 1st Qu.: 3.00
## Median :1964 Median :2.000 Median : 6.00
## Mean :1981 Mean :1.546 Mean :20.06
## 3rd Qu.:1980 3rd Qu.:2.000 3rd Qu.:10.00
## Max. :9999 Max. :9.000 Max. :99.00
table(ess.small$hinctnta)
##
## 1 2 3 4 5 6 7 8 9 10 77 88 99
## 5063 5492 5011 4888 4538 4324 4147 3829 3262 3427 6211 4342 139
Now that we’ve read the dataset into R, we can process it for further analysis. We won’t revisit the ESS data in this tutorial.
Datasets in Stata format often contain variables with value labels. You can see this above in the output following the str(ess.small) command. Value labels can be useful to help you quickly identify the meaning of different codes without revisiting the codebook, e.g. that for gndr, 1 stands for "Male" and 2 for "Female". In many situations, this makes your life easier.
Just like with SPSS above, the as_label()
function is
handy to print value labels in tables or graphs. Just wrap
as_label()
around your variable of interest where
needed.
table(as_label(ess.small$hinctnta))
##
## J - 1st decile R - 2nd decile C - 3rd decile M - 4th decile F - 5th decile
## 5063 5492 5011 4888 4538
## S - 6th decile K - 7th decile P - 8th decile D - 9th decile H - 10th decile
## 4324 4147 3829 3262 3427
## Refusal Don't know No answer
## 6211 4342 139
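One situation where this helps is plotting: wrapping the variable in as_label() makes the axis show labels rather than numeric codes. A sketch, assuming ggplot2 and sjlabelled are loaded as above:

```r
# Bar chart of income deciles with value labels on the axis
ggplot(data = ess.small, mapping = aes(x = as_label(hinctnta))) +
  geom_bar() +
  xlab("Household income decile") +
  coord_flip()  # flip axes so the long labels stay readable
```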
Before any analysis, you will often, if not always, need to process data that you obtained from elsewhere or that you collected yourself. In this section, we’ll go over some typical scenarios for this.
Often, you need to make sure that the variables have the correct numerical or character values. Different data sources often use different codes for missing values, for instance -99, -9999, ., or NA. Let’s work through this with the ab.small dataset. First, I check the structure of the dataset:
str(ab.small)
## 'data.frame': 27713 obs. of 6 variables:
## $ COUNTRY: num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Country"
## ..- attr(*, "format.spss")= chr "F4.0"
## ..- attr(*, "labels")= Named num [1:20] 1 2 3 4 5 6 7 8 9 10 ...
## .. ..- attr(*, "names")= chr [1:20] "Benin" "Botswana" "Burkina Faso" "Cape Verde" ...
## $ URBRUR : num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Urban or Rural Primary Sampling Unit"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:2] 1 2
## .. ..- attr(*, "names")= chr [1:2] "Urban" "Rural"
## $ Q42A : num 4 4 4 3 2 3 2 3 3 3 ...
## ..- attr(*, "label")= chr "Q42a. Extent of democracy"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:8] -1 1 2 3 4 8 9 998
## .. ..- attr(*, "names")= chr [1:8] "Missing" "Not a democracy" "A democracy, with major problems" "A democracy, but with minor problems" ...
## $ Q89 : num 4 2 4 3 4 4 5 2 4 4 ...
## ..- attr(*, "label")= chr "Q89. Education of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:13] -1 0 1 2 3 4 5 6 7 8 ...
## .. ..- attr(*, "names")= chr [1:13] "Missing" "No formal schooling" "Informal schooling only" "Some primary schooling" ...
## $ Q101 : num 2 1 2 1 2 1 2 1 1 2 ...
## ..- attr(*, "label")= chr "Q101. Gender of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 1 2
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Male" "Female"
## $ Q1 : num 38 46 28 30 23 24 40 50 24 36 ...
## ..- attr(*, "label")= chr "Q1. Age"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 998 999
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Refused" "Don't know"
It looks like we’re dealing with all numeric variables here. This is good, but you want to make sure that codes for missing observations, or otherwise non-numeric values, are discarded appropriately. The most fail-safe way to do this is to convert the variable into a factor manually and to assign value labels as you find them in the codebook.
Here, I can already conclude that I only wish to keep the values of
this variable that indicate a substantive response, as missing values or
refusals to answer won’t be part of my statistical model. Therefore, I
create a new variable perceivedDem
that recodes all values
to missing (NA
in R) that are not 1, 2, 3, or
4.
ab.small$perceivedDem <- ifelse(ab.small$Q42A < 1 | ab.small$Q42A > 4,
                                NA,
                                ab.small$Q42A)
table(ab.small$perceivedDem)
##
## 1 2 3 4
## 1875 7338 8249 7310
table(is.na(ab.small$perceivedDem))
##
## FALSE TRUE
## 24772 2941
The ifelse() function has three main arguments:
- The condition to test: ab.small$Q42A < 1 | ab.small$Q42A > 4. Here, | stands for "or".
- The value if the condition is TRUE: NA means replace with NA.
- The value if the condition is FALSE: ab.small$Q42A means keep the current values of ab.small$Q42A.
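To see the logic in isolation, here is the same condition applied to a short toy vector:

```r
x <- c(-1, 2, 9, 3)
ifelse(x < 1 | x > 4, NA, x)  # values outside 1-4 become NA
## [1] NA  2 NA  3
```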
Next, I want to assign the relevant values to the four levels of the newly created variable perceivedDem, so we can create a second version of perceivedDem as a factor and assign these levels:
ab.small$perceivedDem_factor <- factor(ab.small$perceivedDem,
                                       levels = c(1, 2, 3, 4),
                                       labels = c("Not a democracy",
                                                  "A democracy, with major problems",
                                                  "A democracy, but with minor problems",
                                                  "A full democracy"))
table(ab.small$perceivedDem_factor)
##
## Not a democracy A democracy, with major problems
## 1875 7338
## A democracy, but with minor problems A full democracy
## 8249 7310
Lastly, let’s compare our new variable with the original one to make sure we didn’t mix up labels:
table(ab.small$perceivedDem_factor, ab.small$Q42A)
##
## -1 1 2 3 4 8 9
## Not a democracy 0 1875 0 0 0 0 0
## A democracy, with major problems 0 0 7338 0 0 0 0
## A democracy, but with minor problems 0 0 0 8249 0 0 0
## A full democracy 0 0 0 0 7310 0 0
Now let’s make the education variable an ordinal numerical variable. First, let’s have a look at the variable in its current form:
table(ab.small$Q89)
##
## -1 0 1 2 3 4 5 6 7 8 9 99
## 10 4365 1260 5111 3897 5950 4165 1674 649 506 92 34
We should first recode “Missing”, “Don’t know”, and “Refused” to
NA
. Again, we create a new variable, education.
ab.small$education <- ifelse(ab.small$Q89 == 99 |
                             ab.small$Q89 == 998 |
                             ab.small$Q89 == -1,
                             NA, ab.small$Q89)
table(ab.small$education)
##
## 0 1 2 3 4 5 6 7 8 9
## 4365 1260 5111 3897 5950 4165 1674 649 506 92
We now have numeric values. If we wanted to put labels on the
variable, we can use the factor
function again. This time,
instead of typing the value labels manually, I use the
get_labels()
function from the sjlabelled package, and pick
only those labels corresponding to substantive answers to the question.
Here, these are the labels after the first (“missing”) and before the
last two (“don’t know” and “refused”).
get_labels(ab.small$Q89)
## [1] "Missing"
## [2] "No formal schooling"
## [3] "Informal schooling only"
## [4] "Some primary schooling"
## [5] "Primary school completed"
## [6] "Some secondary school/high school"
## [7] "Secondary school completed/high school"
## [8] "Post-secondary qualifications, not university"
## [9] "Some university"
## [10] "University completed"
## [11] "Post-graduate"
## [12] "Don't know"
## [13] "Refused"
ab.small$education_factor <- factor(ab.small$education,
                                    levels = c(0:9),
                                    labels = get_labels(ab.small$Q89)[2:11])
table(ab.small$education_factor)
##
## No formal schooling
## 4365
## Informal schooling only
## 1260
## Some primary schooling
## 5111
## Primary school completed
## 3897
## Some secondary school/high school
## 5950
## Secondary school completed/high school
## 4165
## Post-secondary qualifications, not university
## 1674
## Some university
## 649
## University completed
## 506
## Post-graduate
## 92
Again, a quick comparison with the original variable:
table(ab.small$education_factor, ab.small$Q89)
##
## -1 0 1 2 3 4
## No formal schooling 0 4365 0 0 0 0
## Informal schooling only 0 0 1260 0 0 0
## Some primary schooling 0 0 0 5111 0 0
## Primary school completed 0 0 0 0 3897 0
## Some secondary school/high school 0 0 0 0 0 5950
## Secondary school completed/high school 0 0 0 0 0 0
## Post-secondary qualifications, not university 0 0 0 0 0 0
## Some university 0 0 0 0 0 0
## University completed 0 0 0 0 0 0
## Post-graduate 0 0 0 0 0 0
##
## 5 6 7 8 9 99
## No formal schooling 0 0 0 0 0 0
## Informal schooling only 0 0 0 0 0 0
## Some primary schooling 0 0 0 0 0 0
## Primary school completed 0 0 0 0 0 0
## Some secondary school/high school 0 0 0 0 0 0
## Secondary school completed/high school 4165 0 0 0 0 0
## Post-secondary qualifications, not university 0 1674 0 0 0 0
## Some university 0 0 649 0 0 0
## University completed 0 0 0 506 0 0
## Post-graduate 0 0 0 0 92 0
Next, let’s check the Q101
variable for gender:
table(ab.small$Q101)
##
## 1 2
## 13837 13876
I want this variable to be numerical so that males are 0 and females 1. Here, this is easy because males are currently coded as 1 and females as 2.
ab.small$female <- ab.small$Q101 - 1
table(ab.small$female)
##
## 0 1
## 13837 13876
ab.small$female_factor <- factor(ab.small$female,
                                 levels = c(0, 1),
                                 labels = c("Male", "Female"))
Lastly, let’s check the age variable.
table(ab.small$Q1)
##
## -1 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
## 6 852 944 1066 778 948 908 886 1147 837 829 1012 789 1149 524 922
## 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
## 590 534 873 581 479 694 463 818 341 578 410 350 607 346 297 479
## 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
## 297 536 233 311 248 245 242 266 189 229 162 328 134 147 144 109
## 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80
## 156 114 95 146 77 154 90 91 66 64 82 61 37 58 36 59
## 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 97
## 17 22 20 11 18 5 15 8 5 6 3 2 1 1 2 2
## 98 99 100 110 998 999
## 1 2 1 1 8 319
We can see that the values 999
, 998
, and
-1
don’t seem to correspond to realistic ages. A quick look
at the codebook reveals the following:
These values correspond to “Don’t know”, “Refused”, and “Missing”. So
let’s recode those again to NA
as we did with the democracy
variable above:
ab.small$age <- ifelse(ab.small$Q1 == 999 |
                       ab.small$Q1 == 998 |
                       ab.small$Q1 == -1,
                       NA, ab.small$Q1)
summary(ab.small$age)
## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 18.00 25.00 33.00 36.33 45.00 110.00 333
ggplot(data = ab.small, mapping = aes(x = age)) +
geom_histogram() +
xlab("Age")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 333 rows containing non-finite values (stat_bin).
Now let’s look at our dataset:
str(ab.small)
## 'data.frame': 27713 obs. of 13 variables:
## $ COUNTRY : num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Country"
## ..- attr(*, "format.spss")= chr "F4.0"
## ..- attr(*, "labels")= Named num [1:20] 1 2 3 4 5 6 7 8 9 10 ...
## .. ..- attr(*, "names")= chr [1:20] "Benin" "Botswana" "Burkina Faso" "Cape Verde" ...
## $ URBRUR : num 1 1 1 1 1 1 1 1 1 1 ...
## ..- attr(*, "label")= chr "Urban or Rural Primary Sampling Unit"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:2] 1 2
## .. ..- attr(*, "names")= chr [1:2] "Urban" "Rural"
## $ Q42A : num 4 4 4 3 2 3 2 3 3 3 ...
## ..- attr(*, "label")= chr "Q42a. Extent of democracy"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:8] -1 1 2 3 4 8 9 998
## .. ..- attr(*, "names")= chr [1:8] "Missing" "Not a democracy" "A democracy, with major problems" "A democracy, but with minor problems" ...
## $ Q89 : num 4 2 4 3 4 4 5 2 4 4 ...
## ..- attr(*, "label")= chr "Q89. Education of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:13] -1 0 1 2 3 4 5 6 7 8 ...
## .. ..- attr(*, "names")= chr [1:13] "Missing" "No formal schooling" "Informal schooling only" "Some primary schooling" ...
## $ Q101 : num 2 1 2 1 2 1 2 1 1 2 ...
## ..- attr(*, "label")= chr "Q101. Gender of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 1 2
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Male" "Female"
## $ Q1 : num 38 46 28 30 23 24 40 50 24 36 ...
## ..- attr(*, "label")= chr "Q1. Age"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 998 999
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Refused" "Don't know"
## $ perceivedDem : num 4 4 4 3 2 3 2 3 3 3 ...
## $ perceivedDem_factor: Factor w/ 4 levels "Not a democracy",..: 4 4 4 3 2 3 2 3 3 3 ...
## $ education : num 4 2 4 3 4 4 5 2 4 4 ...
## $ education_factor : Factor w/ 10 levels "No formal schooling",..: 5 3 5 4 5 5 6 3 5 5 ...
## $ female : num 1 0 1 0 1 0 1 0 0 1 ...
## ..- attr(*, "label")= chr "Q101. Gender of respondent"
## ..- attr(*, "format.spss")= chr "F3.0"
## ..- attr(*, "labels")= Named num [1:3] -1 1 2
## .. ..- attr(*, "names")= chr [1:3] "Missing" "Male" "Female"
## $ female_factor : Factor w/ 2 levels "Male","Female": 2 1 2 1 2 1 2 1 1 2 ...
## $ age : num 38 46 28 30 23 24 40 50 24 36 ...
For our further analyses, let’s keep only the variables we just
created. For this, we create a new object, ab.work
by
keeping only the columns we need. I would generally recommend working
with numeric variables rather than factors if you will use these
variables in regression, but there are contexts where factors and their
labels are useful. So for now, I’ll keep both the numeric and factor
versions of perceivedDem
and education
in my
working dataset.
ab.work <- ab.small |> select(COUNTRY, URBRUR, perceivedDem, perceivedDem_factor,
                              education, education_factor, female, female_factor, age)
Alternatively, we could also drop the columns we don’t need by putting a - sign in front of the variable names. This might make more sense if you want to keep many and drop few variables:
ab.work <- ab.small |> select(-Q42A, -Q89, -Q101, -Q1)
Finally, let’s summarize our dataset:
summary(ab.work)
## COUNTRY URBRUR perceivedDem
## Min. : 1.00 Min. :1.00 Min. :1.000
## 1st Qu.: 6.00 1st Qu.:1.00 1st Qu.:2.000
## Median :12.00 Median :2.00 Median :3.000
## Mean :11.21 Mean :1.62 Mean :2.848
## 3rd Qu.:16.00 3rd Qu.:2.00 3rd Qu.:4.000
## Max. :20.00 Max. :2.00 Max. :4.000
## NA's :2941
## perceivedDem_factor education
## Not a democracy :1875 Min. :0.000
## A democracy, with major problems :7338 1st Qu.:2.000
## A democracy, but with minor problems:8249 Median :3.000
## A full democracy :7310 Mean :3.154
## NA's :2941 3rd Qu.:5.000
## Max. :9.000
## NA's :44
## education_factor female female_factor
## Some secondary school/high school :5950 Min. :0.0000 Male :13837
## Some primary schooling :5111 1st Qu.:0.0000 Female:13876
## No formal schooling :4365 Median :1.0000
## Secondary school completed/high school:4165 Mean :0.5007
## Primary school completed :3897 3rd Qu.:1.0000
## (Other) :4181 Max. :1.0000
## NA's : 44
## age
## Min. : 18.00
## 1st Qu.: 25.00
## Median : 33.00
## Mean : 36.33
## 3rd Qu.: 45.00
## Max. :110.00
## NA's :333
Note that there are also other functions to recode variables, aiming to make the process more convenient than what you see here with the ifelse() function. Some of these include:
- car::recode(), like so: ab.small$age <- car::recode(ab.small$Q1, "c(999, 998, -1) = NA")
- dplyr::recode(), like so: ab.small$female <- dplyr::recode(ab.small$Q101, "1" = 0, "2" = 1)
I’m prefixing each recode()
function by the package name
because the functions have the same names. Also: Do keep a close eye on
variable types (numeric, factor, string, …) before writing your code to
recode variables. The tidyverse package offers some more functionalities
to recode factor variables that you might find useful.
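As one example of those tidyverse functionalities, forcats::fct_collapse() can group several factor levels into broader categories. A sketch: the three-category grouping below is my own illustration, not from the Afrobarometer codebook.

```r
library("forcats")

# Collapse the ten education levels into three broader categories
ab.small$education_3cat <- fct_collapse(ab.small$education_factor,
  "Primary or less" = c("No formal schooling", "Informal schooling only",
                        "Some primary schooling", "Primary school completed"),
  "Secondary"       = c("Some secondary school/high school",
                        "Secondary school completed/high school"),
  "Post-secondary"  = c("Post-secondary qualifications, not university",
                        "Some university", "University completed",
                        "Post-graduate"))
table(ab.small$education_3cat)
```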
Factors can be useful in some contexts because they carry extra information (numeric values and value labels). But if the underlying numeric values are important for your calculations, you should know that the numeric values underlying factors always start with 1. So if you have a factor "female", it will typically take the values 1 and 2 (and not 0 and 1, which you might expect). The code above shows you how to deal with this. If this is of interest to you, you can learn more about when factors are more useful than numeric variables in this talk by Amelia McNamara.
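A quick illustration of this pitfall:

```r
# Factors are stored internally as integers starting at 1
f <- factor(c("Male", "Female", "Female"), levels = c("Male", "Female"))
as.numeric(f)
## [1] 1 2 2
# Subtract 1 if you need a 0/1 indicator for regression
as.numeric(f) - 1
## [1] 0 1 1
```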
In any event, be sure to always inspect your variables of interest to
make sure you know whether you’re dealing with a numeric, factor, or
character variable, and what labels are associated with which values, if
any. Use the str()
or class()
functions for
this.
class(ab.work$education_factor)
## [1] "factor"
In the article Tidy Data you learn that the default structure of a dataset consists of rows and columns. However, datasets typically have more underlying structure than just rows and columns. For instance, the ab.small dataframe is properly structured so that each observation is one row and each variable is one column. But each observation comes from one of the 20 countries in the survey. You can therefore think of respondent i as being nested in country j.
This can become important when you want to summarize information across a group indicator j, such as countries in this case. What if you were interested in creating a country-level variable that measures the average perception of the country’s political system as democratic, based on the perceivedDem variable we created from the Afrobarometer? To do this, you need to "collapse" the dataset by country and create a new dataset with only country names and the average value of perceivedDem by country.
For this operation, we will begin to make use of the "dplyr" package in R. This package is a great tool for "data wrangling" and will become one of your standard tools. It is also part of the tidyverse set of packages. Take a look at the Github page for this package, where you can find more information on how it works.
We’ll first create a new dataset named ab.country
with
only mean values of perceived democracy by country.
If we collapse the ab.work
dataset by countries, how
many rows will the resulting dataset have?
ab.country <- ab.work |> group_by(COUNTRY) |>
  summarize(perceivedDem = mean(perceivedDem, na.rm = TRUE))
Note the use of na.rm = TRUE within the mean function. The perceivedDem variable has missing values, and R cannot calculate a mean on a vector with missing values unless na.rm is set to TRUE.
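You can see why with a toy vector:

```r
mean(c(2, 4, NA))                # a single NA makes the result NA
## [1] NA
mean(c(2, 4, NA), na.rm = TRUE)  # drop the NA before averaging
## [1] 3
```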
What command/s would you use if you wanted to summarize this variable with the median instead of the mean? What if you wanted to calculate its range?
Also note that we don’t pass ab.work directly to the summarize() function; we first pipe it through group_by(). This tells R to summarize ab.work within each level of the grouping variable COUNTRY.
This is the dataset we just created:
ab.country
## # A tibble: 20 × 2
## COUNTRY perceivedDem
## <dbl> <dbl>
## 1 1 3.19
## 2 2 3.49
## 3 3 2.85
## 4 4 3.11
## 5 5 3.44
## 6 6 2.57
## 7 7 2.51
## 8 8 2.91
## 9 9 2.80
## 10 10 2.82
## 11 11 2.95
## 12 12 2.97
## 13 13 3.15
## 14 14 2.43
## 15 15 2.57
## 16 16 2.82
## 17 17 3.23
## 18 18 2.74
## 19 19 2.69
## 20 20 2.06
If you wanted to simply add the average perceivedDem
variable to your original survey data without collapsing it, you can do
this using the exact same language, but substitute mutate
for summarize
. This will be the typical step if you
wish to create group-level means for further analysis, as we did in
class today.
ab.work <- ab.work |> group_by(COUNTRY) |>
  mutate(perceivedDemAvg = mean(perceivedDem, na.rm = TRUE))
If we look at the data, you will notice a new variable on the right:
head(ab.work)
## # A tibble: 6 × 10
## # Groups: COUNTRY [1]
## COUNTRY URBRUR perceivedDem perceivedDem_factor education
## <dbl> <dbl> <dbl> <fct> <dbl>
## 1 1 1 4 A full democracy 4
## 2 1 1 4 A full democracy 2
## 3 1 1 4 A full democracy 4
## 4 1 1 3 A democracy, but with minor problems 3
## 5 1 1 2 A democracy, with major problems 4
## 6 1 1 3 A democracy, but with minor problems 4
## education_factor female female_factor age perceivedDemAvg
## <fct> <dbl> <fct> <dbl> <dbl>
## 1 Some secondary school/high school 1 Female 38 3.19
## 2 Some primary schooling 0 Male 46 3.19
## 3 Some secondary school/high school 1 Female 28 3.19
## 4 Primary school completed 0 Male 30 3.19
## 5 Some secondary school/high school 1 Female 23 3.19
## 6 Some secondary school/high school 0 Male 24 3.19
To center an individual-level variable by a group mean, you similarly use mutate() to calculate the group mean first, and then subtract the group mean from the individual-level variable. Here, if I wanted to center the individual-level variable age by the group (country) mean of age, I would first generate age_country_mean and then subtract age_country_mean from each individual respondent’s age to create age_ctd. Then, I could standardize it by dividing by two country-specific standard deviations (age_country_sd) to generate age_ctd_std:
ab.work <- ab.work |> group_by(COUNTRY) |>
  mutate(age_country_mean = mean(age, na.rm = TRUE),
         age_country_sd = sd(age, na.rm = TRUE))
ab.work$age_ctd <- ab.work$age - ab.work$age_country_mean
ab.work$age_ctd_std <- ab.work$age_ctd / (2 * ab.work$age_country_sd)
ab.work |> select(age_ctd, age_ctd_std) |> summary()
## Adding missing grouping variables: `COUNTRY`
## COUNTRY age_ctd age_ctd_std
## Min. : 1.00 Min. :-23.373 Min. :-0.7599
## 1st Qu.: 6.00 1st Qu.:-10.858 1st Qu.:-0.3951
## Median :12.00 Median : -3.215 Median :-0.1112
## Mean :11.21 Mean : 0.000 Mean : 0.0000
## 3rd Qu.:16.00 3rd Qu.: 8.437 3rd Qu.: 0.2968
## Max. :20.00 Max. : 70.983 Max. : 2.3969
## NA's :333 NA's :333
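As a quick sanity check (a sketch), the mean of a group-centered variable should be numerically zero within every country:

```r
# Country-by-country means of age_ctd should all be (essentially) 0
ab.work |>
  group_by(COUNTRY) |>
  summarize(mean_age_ctd = mean(age_ctd, na.rm = TRUE))
```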
In cross-country survey data, you can think of respondents i nested in countries j. When you work with macro-economic or macro-political data from several countries over multiple time points, you can similarly think of countries i and years t. One row in your dataset is then a country-year it. In this context, you can use the same operations as above.
As an example, we’ll introduce a handy data compendium from the University of Gothenburg, Sweden. The Quality of Government project, online at http://qog.pol.gu.se/data/, has put together one master dataset containing dozens of political and economic indicators for a large number of countries over long time spans. If you work with cross-country data, this makes your life much easier.
First, have a look at the codebook at http://qog.pol.gu.se/data/datadownloads/qogstandarddata. This codebook is a great example of how to document a dataset. When you collect your own data and share it with others, you should aim for a similarly clear style of documentation.
Let’s now use the "QoG Standard Time-Series Data (version January 2020)". I first downloaded the .csv version from http://qog.pol.gu.se/data/datadownloads/qogstandarddata (look for the link on the right) to my working directory, and then read it into R with the familiar import() function (you could use read.csv() as well). Note: this is a large file (93MB), so be patient when downloading it.
qog <- import("qog_std_ts_jan20.csv")
dim(qog)
## [1] 15614 2086
This dataset is organized in the structure I described above: countries i and years t, with one row being one country-year observation. Countries are identified by names and a set of country codes.
As in the earlier examples, let’s assume we’re working on a project where we only need a limited set of indicators. In this case, let’s focus on measures for democracy and economic development. After consulting the codebook, we decide we want to work with the following variables:
For country identifiers, we’ll keep the Correlates of War country codes ccodecow and the country names cname as well as a regional identifier ht_region; we’ll only look at the years 2000-2010 for now. This means we’ll be subsetting the dataset on two dimensions: we only need six columns, and we only need the rows for which the condition year >= 2000 & year <= 2010 applies. You will remember that R operates by rows first and columns second, with a comma separating the two indices within square brackets [ , ]. That way, we can use one line of code to create our desired subset, which we call qog.small:
qog.small <- qog[qog$year >= 2000 & qog$year <= 2010,
                 c("cname", "ccodecow", "ht_region", "year", "gle_cgdpc", "uds_mean")]
str(qog.small)
## 'data.frame': 2321 obs. of 6 variables:
## $ cname : chr "Afghanistan" "Afghanistan" "Afghanistan" "Afghanistan" ...
## $ ccodecow : int 700 700 700 700 700 700 700 700 700 700 ...
## $ ht_region: int 8 8 8 8 8 8 8 8 8 8 ...
## $ year : int 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 ...
## $ gle_cgdpc: num 694 637 790 870 917 ...
## $ uds_mean : num -1.363 -1.632 -0.878 -0.885 -0.702 ...
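If you prefer dplyr verbs over square brackets, the same subset can be written as follows (a sketch):

```r
# filter() picks the rows, select() picks the columns
qog.small <- qog |>
  filter(year >= 2000, year <= 2010) |>
  select(cname, ccodecow, ht_region, year, gle_cgdpc, uds_mean)
```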
You’ve already worked on transforming variables before, so let’s briefly reiterate one type of transformation and introduce a few new ones.
Have a quick look at the GDP per capita variable.
ggplot(data = qog.small, mapping = aes(x = gle_cgdpc)) +
geom_histogram() +
xlab("GDP per capita")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 206 rows containing non-finite values (stat_bin).
This variable has a strong right skew, and we may consider using the
logarithmic transformation for it. For this, we create a new variable,
gle_cgdpc_ln
:
qog.small$gle_cgdpc_ln <- log(qog.small$gle_cgdpc)
ggplot(data = qog.small, mapping = aes(x = gle_cgdpc_ln)) +
geom_histogram() +
xlab("GDP per capita (logged)")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 206 rows containing non-finite values (stat_bin).
Time-series cross-sectional (TSCS) data are data with multiple observations over time (“time-series”) for multiple units (“cross-sectional”). You should be familiar with some basic data manipulation that applies specifically to time-series cross-sectional data. This includes the creation of time-series operators.
To start, let’s focus on one single time series though: we’ll only pick the United States from our QoG dataset. In the Correlates of War (COW) country code system, the United States is assigned the ID 2 (see http://www.jkarreth.net/countrycodes.html for a sortable list of all COW country codes).
usa <- filter(qog.small, ccodecow == 2)
usa
## cname ccodecow ht_region year gle_cgdpc uds_mean gle_cgdpc_ln
## 1 United States 2 5 2000 35412.48 1.561573 10.47482
## 2 United States 2 5 2001 36122.87 1.559541 10.49468
## 3 United States 2 5 2002 36804.33 1.544341 10.51337
## 4 United States 2 5 2003 38098.34 1.567313 10.54793
## 5 United States 2 5 2004 40142.37 1.558781 10.60019
## 6 United States 2 5 2005 42329.65 1.549633 10.65324
## 7 United States 2 5 2006 44447.90 1.551548 10.70207
## 8 United States 2 5 2007 46208.62 1.553307 10.74092
## 9 United States 2 5 2008 46519.89 1.556611 10.74764
## 10 United States 2 5 2009 44845.30 1.560328 10.71097
## 11 United States 2 5 2010 46203.23 1.559407 10.74080
Now, let’s say we want to create a variable that expresses the absolute change in GDP per capita from year to year. We can do this in two steps. First, we create a "lag" of GDP per capita, using the lag() function from "dplyr". This function takes two main arguments. The first is the vector from which we are creating the lag, and the second, n, is the number of time units (here: years) by which we want to lag the original variable. Before we use the lag function, I sort the data by year to make sure that the lag is taken from the previous year. I do this using the arrange function, which is also part of the "dplyr" package that you loaded above.
usa <- arrange(usa, year)
usa$gle_cgdpc_lag <- lag(usa$gle_cgdpc, n = 1)
usa
## cname ccodecow ht_region year gle_cgdpc uds_mean gle_cgdpc_ln
## 1 United States 2 5 2000 35412.48 1.561573 10.47482
## 2 United States 2 5 2001 36122.87 1.559541 10.49468
## 3 United States 2 5 2002 36804.33 1.544341 10.51337
## 4 United States 2 5 2003 38098.34 1.567313 10.54793
## 5 United States 2 5 2004 40142.37 1.558781 10.60019
## 6 United States 2 5 2005 42329.65 1.549633 10.65324
## 7 United States 2 5 2006 44447.90 1.551548 10.70207
## 8 United States 2 5 2007 46208.62 1.553307 10.74092
## 9 United States 2 5 2008 46519.89 1.556611 10.74764
## 10 United States 2 5 2009 44845.30 1.560328 10.71097
## 11 United States 2 5 2010 46203.23 1.559407 10.74080
## gle_cgdpc_lag
## 1 NA
## 2 35412.48
## 3 36122.87
## 4 36804.33
## 5 38098.34
## 6 40142.37
## 7 42329.65
## 8 44447.90
## 9 46208.62
## 10 46519.89
## 11 44845.30
I can now create the “change in GDP per capita” variable by simply subtracting the previous from the current value:
usa$gle_cgdpc_ch <- usa$gle_cgdpc - usa$gle_cgdpc_lag
usa
## cname ccodecow ht_region year gle_cgdpc uds_mean gle_cgdpc_ln
## 1 United States 2 5 2000 35412.48 1.561573 10.47482
## 2 United States 2 5 2001 36122.87 1.559541 10.49468
## 3 United States 2 5 2002 36804.33 1.544341 10.51337
## 4 United States 2 5 2003 38098.34 1.567313 10.54793
## 5 United States 2 5 2004 40142.37 1.558781 10.60019
## 6 United States 2 5 2005 42329.65 1.549633 10.65324
## 7 United States 2 5 2006 44447.90 1.551548 10.70207
## 8 United States 2 5 2007 46208.62 1.553307 10.74092
## 9 United States 2 5 2008 46519.89 1.556611 10.74764
## 10 United States 2 5 2009 44845.30 1.560328 10.71097
## 11 United States 2 5 2010 46203.23 1.559407 10.74080
## gle_cgdpc_lag gle_cgdpc_ch
## 1 NA NA
## 2 35412.48 710.391
## 3 36122.87 681.457
## 4 36804.33 1294.012
## 5 38098.34 2044.031
## 6 40142.37 2187.277
## 7 42329.65 2118.250
## 8 44447.90 1760.723
## 9 46208.62 311.270
## 10 46519.89 -1674.590
## 11 44845.30 1357.929
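The two-step lag-and-difference pattern can be sketched on a toy one-country series; the data frame and all values below are invented purely for illustration.

```r
library(dplyr)

# Toy one-country series; values are invented for illustration only.
toy <- data.frame(year = 2001:2005,
                  gdp  = c(100, 110, 121, 133, 146))

toy <- toy |>
  arrange(year) |>                     # make sure rows are in time order
  mutate(gdp_lag = lag(gdp, n = 1),    # previous year's value (NA in year 1)
         gdp_ch  = gdp - gdp_lag)      # absolute year-on-year change

toy$gdp_ch
# NA 10 11 12 13
```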
And now we can create a quick plot of percentage growth in GDP per capita, using another new variable, gle_cgdpc_growth, that we create as well:
usa$gle_cgdpc_growth <- usa$gle_cgdpc_ch / usa$gle_cgdpc_lag
ggplot(data = usa, mapping = aes(x = year, y = gle_cgdpc_growth)) +
geom_line() +
geom_hline(yintercept = 0, linetype = "dashed") +
scale_x_continuous(breaks = seq(from = 2000, to = 2010, by = 2)) +
xlab("") + ylab("Economic growth")
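As a side note, for small changes the difference in logged GDP per capita (which we already carry around as gle_cgdpc_ln) closely approximates this growth rate. A quick base-R sketch with invented numbers:

```r
# Growth rates vs. log-differences; toy values for illustration only.
gdp <- c(100, 102, 105)
growth_exact <- diff(gdp) / head(gdp, -1)  # (x_t - x_{t-1}) / x_{t-1}
growth_log   <- diff(log(gdp))             # log(x_t) - log(x_{t-1})
round(growth_exact - growth_log, 4)        # differences are tiny
```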
We can also create lags in time-series cross-sectional data as in the qog.small dataset, where we want lags for individual units. For this, we return to the mutate function from the “dplyr” package that you already used above. We also need to take into account the grouping structure and use the group_by function for that purpose.
First, I eliminate all observations that do not have COW country codes, assuming that these are duplicate observations. In your research, you should carefully check such cases by hand.
qog.small <- qog.small |> filter(!is.na(ccodecow))
Next, I sort the dataset by country codes and then years. I do this in order to have the data ready for creating lagged variables further below.
qog.small <- qog.small |> arrange(ccodecow, year)
Which country and year will be first (and last) in this sorted dataset?
Now, I again use the mutate() function to add a variable to this dataset without changing the structure of the dataset.
qog.small <- qog.small |> group_by(ccodecow) |>
  mutate(gle_cgdpc_lag = lag(gle_cgdpc, n = 1))
tail(qog.small)
## # A tibble: 6 × 8
## # Groups: ccodecow [1]
## cname ccodecow ht_region year gle_cgdpc uds_mean gle_cgdpc_ln gle_cgdpc_lag
## <chr> <int> <int> <int> <dbl> <dbl> <dbl> <dbl>
## 1 Samoa 990 9 2005 2924. 0.388 7.98 2787.
## 2 Samoa 990 9 2006 3030. 0.243 8.02 2924.
## 3 Samoa 990 9 2007 2829. 0.243 7.95 3030.
## 4 Samoa 990 9 2008 2864. 0.241 7.96 2829.
## 5 Samoa 990 9 2009 3142. 0.641 8.05 2864.
## 6 Samoa 990 9 2010 3155. 0.641 8.06 3142.
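A toy sketch with two fictional countries shows why group_by() matters here: without grouping, the first year of each country would wrongly inherit the last value of the previous country.

```r
library(dplyr)

# Two fictional countries, stacked as in a TSCS dataset; invented values.
toy <- data.frame(ccode = c(1, 1, 2, 2),
                  year  = c(2000, 2001, 2000, 2001),
                  gdp   = c(10, 11, 50, 55))

ungrouped <- toy |> mutate(gdp_lag = lag(gdp, n = 1))
grouped   <- toy |> group_by(ccode) |> mutate(gdp_lag = lag(gdp, n = 1))

ungrouped$gdp_lag  # NA 10 11 50 -- country 2 inherits country 1's value
grouped$gdp_lag    # NA 10 NA 50 -- the lag restarts within each country
```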
You can use all sorts of other variable creation commands like you did above for the usa time series.
Often in your work, you will find yourself having to combine datasets from different sources. We’ll work through two quick examples returning to the Afrobarometer survey data from the beginning of this tutorial.
Let’s say we are interested in the relationship between a common measure for democracy, the Unified Democracy Score, and survey respondents’ perception of how democratic their country is. We already have both pieces of information. Recall that we created a country-level version of the Afrobarometer data that contains the average perception of democracy:
ab.country
## # A tibble: 20 × 2
## COUNTRY perceivedDem
## <dbl> <dbl>
## 1 1 3.19
## 2 2 3.49
## 3 3 2.85
## 4 4 3.11
## 5 5 3.44
## 6 6 2.57
## 7 7 2.51
## 8 8 2.91
## 9 9 2.80
## 10 10 2.82
## 11 11 2.95
## 12 12 2.97
## 13 13 3.15
## 14 14 2.43
## 15 15 2.57
## 16 16 2.82
## 17 17 3.23
## 18 18 2.74
## 19 19 2.69
## 20 20 2.06
And, we have the qog.small dataset containing the Unified Democracy Scores variable.
To bring the two together, we need matching identifiers that help our software package assign the appropriate observations to each other. The QoG data contains COW country codes, but our Afrobarometer data only has country numbers. We need to create a character vector with country names first. To do this, we use the same practice as above: create a factor using the labels included in the vector.
Again, double-check the codebook to make sure all labels are applied correctly.
ab.country$country_factor <- factor(ab.country$COUNTRY,
                                    levels = unique(ab.country$COUNTRY),
                                    labels = get_labels(ab.country$COUNTRY))
ab.country
## # A tibble: 20 × 3
## COUNTRY perceivedDem country_factor
## <dbl> <dbl> <fct>
## 1 1 3.19 Benin
## 2 2 3.49 Botswana
## 3 3 2.85 Burkina Faso
## 4 4 3.11 Cape Verde
## 5 5 3.44 Ghana
## 6 6 2.57 Kenya
## 7 7 2.51 Lesotho
## 8 8 2.91 Liberia
## 9 9 2.80 Madagascar
## 10 10 2.82 Malawi
## 11 11 2.95 Mali
## 12 12 2.97 Mozambique
## 13 13 3.15 Namibia
## 14 14 2.43 Nigeria
## 15 15 2.57 Senegal
## 16 16 2.82 South Africa
## 17 17 3.23 Tanzania
## 18 18 2.74 Uganda
## 19 19 2.69 Zambia
## 20 20 2.06 Zimbabwe
Now, we can use the countrycode() function from the package with the same name to add a variable ccodecow to our Afrobarometer data. Install the “countrycode” package once, and then load it in every R session where you use it. Have a look at the help file for the countrycode() function before using it here.
library("countrycode")
ab.country$ccodecow <- countrycode(ab.country$country_factor, origin = "country.name",
                                   destination = "cown")
ab.country
## # A tibble: 20 × 4
## COUNTRY perceivedDem country_factor ccodecow
## <dbl> <dbl> <fct> <dbl>
## 1 1 3.19 Benin 434
## 2 2 3.49 Botswana 571
## 3 3 2.85 Burkina Faso 439
## 4 4 3.11 Cape Verde 402
## 5 5 3.44 Ghana 452
## 6 6 2.57 Kenya 501
## 7 7 2.51 Lesotho 570
## 8 8 2.91 Liberia 450
## 9 9 2.80 Madagascar 580
## 10 10 2.82 Malawi 553
## 11 11 2.95 Mali 432
## 12 12 2.97 Mozambique 541
## 13 13 3.15 Namibia 565
## 14 14 2.43 Nigeria 475
## 15 15 2.57 Senegal 433
## 16 16 2.82 South Africa 560
## 17 17 3.23 Tanzania 510
## 18 18 2.74 Uganda 500
## 19 19 2.69 Zambia 551
## 20 20 2.06 Zimbabwe 552
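One caveat worth knowing: countrycode() returns NA (with a warning) for names it cannot match, so it pays to check for missing codes right after the call. A sketch, where “Atlantis” is a deliberately unmatchable name:

```r
library("countrycode")

codes <- countrycode(c("Ghana", "Atlantis"), origin = "country.name",
                     destination = "cown")
codes                # 452 NA -- Ghana matched, "Atlantis" did not
which(is.na(codes))  # positions that need manual attention
```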
We can now prepare the QoG data for the merge. Recall that the Afrobarometer survey was conducted in 2008, so limiting ourselves to the QoG data from that year probably makes sense.
qog.2008 <- qog.small |> filter(year == 2008)
Now we use the left_join() function to create a new object that contains both the ab.country and qog.2008 datasets. Because we are only interested in the observations for which we have survey data, we use left_join(), which keeps all observations in the “left” dataset in the function call and adds matching information from the right. (Look up right_join() and inner_join(), too.)
ab.qog <- left_join(x = ab.country, y = qog.2008,
                    by = "ccodecow")
ab.qog
## # A tibble: 20 × 11
## COUNTRY perceivedDem country_factor ccodecow cname ht_region year
## <dbl> <dbl> <fct> <dbl> <chr> <int> <int>
## 1 1 3.19 Benin 434 Benin 4 2008
## 2 2 3.49 Botswana 571 Botswana 4 2008
## 3 3 2.85 Burkina Faso 439 Burkina Faso 4 2008
## 4 4 3.11 Cape Verde 402 Cape Verde 4 2008
## 5 5 3.44 Ghana 452 Ghana 4 2008
## 6 6 2.57 Kenya 501 Kenya 4 2008
## 7 7 2.51 Lesotho 570 Lesotho 4 2008
## 8 8 2.91 Liberia 450 Liberia 4 2008
## 9 9 2.80 Madagascar 580 Madagascar 4 2008
## 10 10 2.82 Malawi 553 Malawi 4 2008
## 11 11 2.95 Mali 432 Mali 4 2008
## 12 12 2.97 Mozambique 541 Mozambique 4 2008
## 13 13 3.15 Namibia 565 Namibia 4 2008
## 14 14 2.43 Nigeria 475 Nigeria 4 2008
## 15 15 2.57 Senegal 433 Senegal 4 2008
## 16 16 2.82 South Africa 560 South Africa 4 2008
## 17 17 3.23 Tanzania 510 Tanzania 4 2008
## 18 18 2.74 Uganda 500 Uganda 4 2008
## 19 19 2.69 Zambia 551 Zambia 4 2008
## 20 20 2.06 Zimbabwe 552 Zimbabwe 4 2008
## gle_cgdpc uds_mean gle_cgdpc_ln gle_cgdpc_lag
## <dbl> <dbl> <dbl> <dbl>
## 1 1387. 0.534 7.24 1374.
## 2 12935. 0.550 9.47 10689.
## 3 993. -0.310 6.90 950.
## 4 3498. 1.13 8.16 3129.
## 5 2056. 0.546 7.63 1888.
## 6 1367. 0.200 7.22 1321.
## 7 1398. 0.373 7.24 1363.
## 8 417. 0.261 6.03 421.
## 9 830. 0.292 6.72 799.
## 10 877. 0.264 6.78 774.
## 11 924. 0.325 6.83 888.
## 12 731. 0.101 6.59 691.
## 13 4560. 0.343 8.43 4645.
## 14 1868. -0.0706 7.53 1965.
## 15 1482. 0.325 7.30 1447.
## 16 8159. 0.592 9.01 7865.
## 17 1209. -0.0813 7.10 1121.
## 18 1190. -0.0518 7.08 1139.
## 19 1657. 0.188 7.41 1628.
## 20 3582. -0.694 8.18 3832.
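Two quick sanity checks after a left_join() are worth building into your workflow: confirm that the join did not duplicate rows, and list the left-hand rows that found no match. A sketch with toy data frames (the column names here are invented):

```r
library(dplyr)

left  <- data.frame(ccode = c(1, 2, 3), survey = c(3.1, 2.9, 2.5))
right <- data.frame(ccode = c(1, 2),    gdp    = c(100, 200))

merged <- left_join(left, right, by = "ccode")
nrow(merged) == nrow(left)            # TRUE: no rows were duplicated
anti_join(left, right, by = "ccode")  # left rows with no match (here: ccode 3)
```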
And now we can investigate the relationship between respondents’ average perception of democracy and the Unified Democracy Scores, which are based on information about political institutions and political processes.
ggplot(data = ab.qog, aes(y = perceivedDem, x = uds_mean)) +
geom_point() +
ylab("Perceived Democracy (average response)") +
xlab("Unified Democracy Score")
Of course we can also estimate a bivariate regression model of this relationship:
mod <- lm(perceivedDem ~ uds_mean, data = ab.qog)
summary(mod)
##
## Call:
## lm(formula = perceivedDem ~ uds_mean, data = ab.qog)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.42451 -0.24345 -0.01489 0.19484 0.53863
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.73580 0.07814 35.010 < 2e-16 ***
## uds_mean 0.53326 0.17764 3.002 0.00765 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.2926 on 18 degrees of freedom
## Multiple R-squared: 0.3336, Adjusted R-squared: 0.2966
## F-statistic: 9.012 on 1 and 18 DF, p-value: 0.007653
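Beyond summary(), a few extractor functions are handy for pulling individual numbers out of a fitted lm object. A sketch using R’s built-in mtcars data so that it runs on its own:

```r
# Fit a simple bivariate model on built-in data, for illustration only.
mod.demo <- lm(mpg ~ wt, data = mtcars)

coef(mod.demo)               # intercept and slope
confint(mod.demo)            # 95% confidence intervals
summary(mod.demo)$r.squared  # R-squared as a plain number
```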
A typical use case in multilevel analysis is to merge data at level-1 with data at level-2, or a higher level. We had initially created a version of the Afrobarometer data that contains a number of variables of interest at the level of the survey respondent:
glimpse(ab.work)
## Rows: 27,713
## Columns: 14
## Groups: COUNTRY [20]
## $ COUNTRY <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ URBRUR <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2,…
## $ perceivedDem <dbl> 4, 4, 4, 3, 2, 3, 2, 3, 3, 3, 3, 3, 2, 4, 3, 4, 4,…
## $ perceivedDem_factor <fct> "A full democracy", "A full democracy", "A full de…
## $ education <dbl> 4, 2, 4, 3, 4, 4, 5, 2, 4, 4, 5, 2, 0, 4, 4, 4, 2,…
## $ education_factor <fct> "Some secondary school/high school", "Some primary…
## $ female <dbl> 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0,…
## $ female_factor <fct> Female, Male, Female, Male, Female, Male, Female, …
## $ age <dbl> 38, 46, 28, 30, 23, 24, 40, 50, 24, 36, 22, 31, 50…
## $ perceivedDemAvg <dbl> 3.190941, 3.190941, 3.190941, 3.190941, 3.190941, …
## $ age_country_mean <dbl> 35.38674, 35.38674, 35.38674, 35.38674, 35.38674, …
## $ age_country_sd <dbl> 12.82612, 12.82612, 12.82612, 12.82612, 12.82612, …
## $ age_ctd <dbl> 2.613255, 10.613255, -7.386745, -5.386745, -12.386…
## $ age_ctd_std <dbl> 0.10187237, 0.41373592, -0.28795706, -0.20999117, …
And, we still have the qog.small dataset containing the Unified Democracy Scores variable.
To bring the two together, we again need matching identifiers that help our software package assign the appropriate observations to each other. The QoG data contains COW country codes, but our Afrobarometer data only has country numbers. We need to create a character vector with country names first. To do this, we use the same practice as above: create a factor using the labels included in the vector.
Again, double-check the codebook to make sure all labels are applied correctly.
ab.work$country_factor <- factor(ab.work$COUNTRY,
                                 levels = unique(ab.work$COUNTRY),
                                 labels = get_labels(ab.work$COUNTRY))
table(ab.work$country_factor)
##
## Benin Botswana Burkina Faso Cape Verde Ghana Kenya
## 1200 1200 1200 1264 1200 1104
## Lesotho Liberia Madagascar Malawi Mali Mozambique
## 1200 1200 1350 1200 1232 1200
## Namibia Nigeria Senegal South Africa Tanzania Uganda
## 1200 2324 1200 2400 1208 2431
## Zambia Zimbabwe
## 1200 1200
Now, we can use the countrycode() function from the package with the same name to add a variable ccodecow to our Afrobarometer data.
ab.work$ccodecow <- countrycode(ab.work$country_factor, origin = "country.name",
                                destination = "cown")
table(ab.work$country_factor, ab.work$ccodecow)
##
## 402 432 433 434 439 450 452 475 500 501 510 541 551
## Benin 0 0 0 1200 0 0 0 0 0 0 0 0 0
## Botswana 0 0 0 0 0 0 0 0 0 0 0 0 0
## Burkina Faso 0 0 0 0 1200 0 0 0 0 0 0 0 0
## Cape Verde 1264 0 0 0 0 0 0 0 0 0 0 0 0
## Ghana 0 0 0 0 0 0 1200 0 0 0 0 0 0
## Kenya 0 0 0 0 0 0 0 0 0 1104 0 0 0
## Lesotho 0 0 0 0 0 0 0 0 0 0 0 0 0
## Liberia 0 0 0 0 0 1200 0 0 0 0 0 0 0
## Madagascar 0 0 0 0 0 0 0 0 0 0 0 0 0
## Malawi 0 0 0 0 0 0 0 0 0 0 0 0 0
## Mali 0 1232 0 0 0 0 0 0 0 0 0 0 0
## Mozambique 0 0 0 0 0 0 0 0 0 0 0 1200 0
## Namibia 0 0 0 0 0 0 0 0 0 0 0 0 0
## Nigeria 0 0 0 0 0 0 0 2324 0 0 0 0 0
## Senegal 0 0 1200 0 0 0 0 0 0 0 0 0 0
## South Africa 0 0 0 0 0 0 0 0 0 0 0 0 0
## Tanzania 0 0 0 0 0 0 0 0 0 0 1208 0 0
## Uganda 0 0 0 0 0 0 0 0 2431 0 0 0 0
## Zambia 0 0 0 0 0 0 0 0 0 0 0 0 1200
## Zimbabwe 0 0 0 0 0 0 0 0 0 0 0 0 0
##
## 552 553 560 565 570 571 580
## Benin 0 0 0 0 0 0 0
## Botswana 0 0 0 0 0 1200 0
## Burkina Faso 0 0 0 0 0 0 0
## Cape Verde 0 0 0 0 0 0 0
## Ghana 0 0 0 0 0 0 0
## Kenya 0 0 0 0 0 0 0
## Lesotho 0 0 0 0 1200 0 0
## Liberia 0 0 0 0 0 0 0
## Madagascar 0 0 0 0 0 0 1350
## Malawi 0 1200 0 0 0 0 0
## Mali 0 0 0 0 0 0 0
## Mozambique 0 0 0 0 0 0 0
## Namibia 0 0 0 1200 0 0 0
## Nigeria 0 0 0 0 0 0 0
## Senegal 0 0 0 0 0 0 0
## South Africa 0 0 2400 0 0 0 0
## Tanzania 0 0 0 0 0 0 0
## Uganda 0 0 0 0 0 0 0
## Zambia 0 0 0 0 0 0 0
## Zimbabwe 1200 0 0 0 0 0 0
We can now prepare the QoG data for the merge. Recall that the Afrobarometer survey was conducted in 2008, so limiting ourselves to the QoG data from that year still makes sense.
qog.2008 <- qog.small |> filter(year == 2008)
Now we use the left_join() function to create a new object that contains both the ab.work and qog.2008 datasets. Because we are only interested in the observations for which we have survey data, we use left_join(), which keeps all observations in the “left” dataset in the function call and adds matching information from the right. (Look up right_join() and inner_join(), too.)
ab.survey.qog <- left_join(x = ab.work, y = qog.2008,
                           by = "ccodecow")
glimpse(ab.survey.qog)
## Rows: 27,713
## Columns: 23
## Groups: COUNTRY [20]
## $ COUNTRY <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ URBRUR <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2,…
## $ perceivedDem <dbl> 4, 4, 4, 3, 2, 3, 2, 3, 3, 3, 3, 3, 2, 4, 3, 4, 4,…
## $ perceivedDem_factor <fct> "A full democracy", "A full democracy", "A full de…
## $ education <dbl> 4, 2, 4, 3, 4, 4, 5, 2, 4, 4, 5, 2, 0, 4, 4, 4, 2,…
## $ education_factor <fct> "Some secondary school/high school", "Some primary…
## $ female <dbl> 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0,…
## $ female_factor <fct> Female, Male, Female, Male, Female, Male, Female, …
## $ age <dbl> 38, 46, 28, 30, 23, 24, 40, 50, 24, 36, 22, 31, 50…
## $ perceivedDemAvg <dbl> 3.190941, 3.190941, 3.190941, 3.190941, 3.190941, …
## $ age_country_mean <dbl> 35.38674, 35.38674, 35.38674, 35.38674, 35.38674, …
## $ age_country_sd <dbl> 12.82612, 12.82612, 12.82612, 12.82612, 12.82612, …
## $ age_ctd <dbl> 2.613255, 10.613255, -7.386745, -5.386745, -12.386…
## $ age_ctd_std <dbl> 0.10187237, 0.41373592, -0.28795706, -0.20999117, …
## $ country_factor <fct> Benin, Benin, Benin, Benin, Benin, Benin, Benin, B…
## $ ccodecow <dbl> 434, 434, 434, 434, 434, 434, 434, 434, 434, 434, …
## $ cname <chr> "Benin", "Benin", "Benin", "Benin", "Benin", "Beni…
## $ ht_region <int> 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,…
## $ year <int> 2008, 2008, 2008, 2008, 2008, 2008, 2008, 2008, 20…
## $ gle_cgdpc <dbl> 1387.37, 1387.37, 1387.37, 1387.37, 1387.37, 1387.…
## $ uds_mean <dbl> 0.5338026, 0.5338026, 0.5338026, 0.5338026, 0.5338…
## $ gle_cgdpc_ln <dbl> 7.235165, 7.235165, 7.235165, 7.235165, 7.235165, …
## $ gle_cgdpc_lag <dbl> 1373.85, 1373.85, 1373.85, 1373.85, 1373.85, 1373.…
And now we can investigate the relationship between respondents’ perception of democracy and the Unified Democracy Scores, which are based on information about political institutions and political processes, but this time with a multilevel model, using the individual-level predictors education, gender, and age.
library("lme4")
mod.lmer <- lmer(perceivedDem ~ education + female + age + uds_mean + (1 | country_factor),
                 data = ab.survey.qog)
summary(mod.lmer)
## Linear mixed model fit by REML ['lmerMod']
## Formula:
## perceivedDem ~ education + female + age + uds_mean + (1 | country_factor)
## Data: ab.survey.qog
##
## REML criterion at convergence: 62855.8
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.02019 -0.79660 0.07749 0.82491 2.41052
##
## Random effects:
## Groups Name Variance Std.Dev.
## country_factor (Intercept) 0.08007 0.283
## Residual 0.75520 0.869
## Number of obs: 24531, groups: country_factor, 20
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 2.8328086 0.0790143 35.852
## education -0.0358514 0.0032160 -11.148
## female 0.0245008 0.0112994 2.168
## age 0.0001443 0.0004165 0.346
## uds_mean 0.5318391 0.1724952 3.083
##
## Correlation of Fixed Effects:
## (Intr) eductn female age
## education -0.194
## female -0.114 0.146
## age -0.235 0.281 0.145
## uds_mean -0.524 0.001 -0.001 -0.005
This tutorial concludes with two common ways of creating new variables: recoding a variable into a binary variable, and cutting a continuous variable into percentiles.
Sometimes you may be interested in condensing information into a binary yes/no indicator. For instance, instead of including an education measure with 10 categories (which we have in our survey data, as education and education_factor), you may want to know whether someone finished high school or not. To do this, we return to using the ifelse() function. Let’s create a variable hs that is set to 1 for all survey respondents who finished high school and to 0 for all who did not. For this, we need to recall the levels of the education variable.
table(ab.work$education_factor, ab.work$education)
##
## 0 1 2 3 4 5
## No formal schooling 4365 0 0 0 0 0
## Informal schooling only 0 1260 0 0 0 0
## Some primary schooling 0 0 5111 0 0 0
## Primary school completed 0 0 0 3897 0 0
## Some secondary school/high school 0 0 0 0 5950 0
## Secondary school completed/high school 0 0 0 0 0 4165
## Post-secondary qualifications, not university 0 0 0 0 0 0
## Some university 0 0 0 0 0 0
## University completed 0 0 0 0 0 0
## Post-graduate 0 0 0 0 0 0
##
## 6 7 8 9
## No formal schooling 0 0 0 0
## Informal schooling only 0 0 0 0
## Some primary schooling 0 0 0 0
## Primary school completed 0 0 0 0
## Some secondary school/high school 0 0 0 0
## Secondary school completed/high school 0 0 0 0
## Post-secondary qualifications, not university 1674 0 0 0
## Some university 0 649 0 0
## University completed 0 0 506 0
## Post-graduate 0 0 0 92
So, values of 5 and above were assigned to respondents who completed high school. We use this cutoff for our new binary variable hs.
ab.work$hs <- ifelse(ab.work$education >= 5, 1, 0)
table(ab.work$education_factor, ab.work$hs)
##
## 0 1
## No formal schooling 4365 0
## Informal schooling only 1260 0
## Some primary schooling 5111 0
## Primary school completed 3897 0
## Some secondary school/high school 5950 0
## Secondary school completed/high school 0 4165
## Post-secondary qualifications, not university 0 1674
## Some university 0 649
## University completed 0 506
## Post-graduate 0 92
What would you do if you wanted to create a “trichotomous” variable with values of 0, 1, and 2 for respondents whose education ended before high school (0), after high school (1), and who graduated from a post-secondary institution (2)?
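One possible answer, assuming the same education coding as in the table (5 = secondary/high school completed, 6 and above = post-secondary): nested ifelse() calls, evaluated from the highest cutoff down. The values below are toy examples spanning the relevant cutoffs.

```r
# Toy education values for illustration only.
education <- c(0, 3, 5, 6, 8)

edu3 <- ifelse(education >= 6, 2,   # post-secondary
        ifelse(education >= 5, 1,   # high school completed
                               0))  # ended before high school
edu3
# 0 0 1 2 2
```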
For many variables, percentiles are often more useful to interpret than absolute values. Let’s take a slightly challenging example and say we want to create an indicator for groups of 10% of GDP per capita for countries. That is, the bottom 10% of countries receive a 0, the next 10% a 1, and so forth. To do this, we combine a couple of R functions:
cut() divides the range of a continuous variable into intervals and codes the values according to the interval they fall into.
quantile() calculates percentiles (and requires the na.rm argument to be set to TRUE!).
as.numeric() converts a factor into a numeric variable.
We work with the qog.small data again.
deciles <- quantile(qog.small$gle_cgdpc, probs = seq(0, 1, by = 0.1), na.rm = TRUE)
qog.small$gle_cgdpc_dec <- cut(qog.small$gle_cgdpc,
                               breaks = deciles, include.lowest = TRUE)
table(qog.small$gle_cgdpc_dec)
##
## [173,940] (940,1.55e+03] (1.55e+03,2.53e+03] (2.53e+03,3.58e+03]
## 210 209 209 209
## (3.58e+03,5.28e+03] (5.28e+03,7.96e+03] (7.96e+03,1.17e+04] (1.17e+04,2.08e+04]
## 210 209 209 209
## (2.08e+04,3.12e+04] (3.12e+04,1.13e+05]
## 209 210
If we want to make this new variable a numeric variable, we can do the following:
qog.small$gle_cgdpc_dec2 <- as.numeric(qog.small$gle_cgdpc_dec)
table(qog.small$gle_cgdpc_dec, qog.small$gle_cgdpc_dec2)
##
## 1 2 3 4 5 6 7 8 9 10
## [173,940] 210 0 0 0 0 0 0 0 0 0
## (940,1.55e+03] 0 209 0 0 0 0 0 0 0 0
## (1.55e+03,2.53e+03] 0 0 209 0 0 0 0 0 0 0
## (2.53e+03,3.58e+03] 0 0 0 209 0 0 0 0 0 0
## (3.58e+03,5.28e+03] 0 0 0 0 210 0 0 0 0 0
## (5.28e+03,7.96e+03] 0 0 0 0 0 209 0 0 0 0
## (7.96e+03,1.17e+04] 0 0 0 0 0 0 209 0 0 0
## (1.17e+04,2.08e+04] 0 0 0 0 0 0 0 209 0 0
## (2.08e+04,3.12e+04] 0 0 0 0 0 0 0 0 209 0
## (3.12e+04,1.13e+05] 0 0 0 0 0 0 0 0 0 210
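dplyr’s ntile() offers a shortcut for this quantile()/cut()/as.numeric() chain; note that it numbers groups starting at 1 rather than 0, and may break ties at bin edges slightly differently. A sketch on toy values:

```r
library(dplyr)

x <- c(5, 1, 9, 3, 7, 2, 8, 4, 10, 6)  # toy values
ntile(x, 5)  # each value assigned to one of 5 roughly equal-sized groups
```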
You can read about wide and long formats in the Tidy Data article. Sometimes you will encounter data in wide format, where different measurements of the same variable (e.g., measurements of income over several years) are entered in a spreadsheet as columns, not rows. This is the case, for instance, for economic indicators released by statistics offices.
R offers some easy options to convert these data into long format. We’ll briefly work along two examples.
The first example is one provided by UCLA’s Stat Consulting Group at https://stats.oarc.ucla.edu/stata/modules/reshaping-data-wide-to-long/. First, we read in a dataset in wide format. Note: the UCLA link I used to use for the data may be offline, so I’m providing the data for course participants on Canvas. Download the dataset, place it in the working directory for this lab, and proceed.
dat.wide <- import(file = "faminc.dta")
dat.wide
## famid faminc96 faminc97 faminc98
## 1 3 75000 76000 77000
## 2 1 40000 40500 41000
## 3 2 45000 45400 45800
Now, we use the pivot_longer() function from the “tidyr” package that you loaded with the tidyverse package. Read the documentation for examples. Here, I wish to convert the dat.wide dataframe so that year is my new identifier (key), faminc is the name of the new variable measuring the value of interest, and famid is the identifier of units in the existing data.
dat.long <- dat.wide |> pivot_longer(
  cols = -famid,
  names_to = "year",
  values_to = "faminc")
dat.long
## # A tibble: 9 × 3
## famid year faminc
## <dbl> <chr> <dbl>
## 1 3 faminc96 75000
## 2 3 faminc97 76000
## 3 3 faminc98 77000
## 4 1 faminc96 40000
## 5 1 faminc97 40500
## 6 1 faminc98 41000
## 7 2 faminc96 45000
## 8 2 faminc97 45400
## 9 2 faminc98 45800
Next, I convert the year variable into a true numeric variable, replacing the faminc prefix with “19” so that, for example, faminc96 becomes 1996:
dat.long$year <- as.numeric(gsub(pattern = "faminc",
                                 replacement = "19",
                                 x = dat.long$year))
arrange(dat.long, famid, year)
## # A tibble: 9 × 3
## famid year faminc
## <dbl> <dbl> <dbl>
## 1 1 1996 40000
## 2 1 1997 40500
## 3 1 1998 41000
## 4 2 1996 45000
## 5 2 1997 45400
## 6 2 1998 45800
## 7 3 1996 75000
## 8 3 1997 76000
## 9 3 1998 77000
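In recent versions of tidyr, the prefix removal and the numeric conversion can also happen inside pivot_longer() itself, via the names_prefix and names_transform arguments. A sketch on a toy data frame with the same structure as faminc.dta (values invented):

```r
library(tidyr)

# Toy wide data mimicking the faminc structure; values are invented.
toy.wide <- data.frame(famid    = 1:2,
                       faminc96 = c(40000, 45000),
                       faminc97 = c(40500, 45400))

toy.long <- toy.wide |>
  pivot_longer(cols = -famid,
               names_to = "year",
               names_prefix = "faminc",  # strip the "faminc" prefix
               names_transform = list(year = function(x) as.numeric(x) + 1900),
               values_to = "faminc")
toy.long$year
# 1996 1997 1996 1997
```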
We can also use these functions for “messier” datasets. For instance, let’s say we want to download some economic indicators from the U.S. Bureau of Economic Analysis. The website “U.S. Economy at a Glance” offers these data as an Excel spreadsheet (link on the top right: http://www.bea.gov/newsreleases/xls/glance.xlsx).
After downloading this spreadsheet, we get the following data:
I clean these data up by hand in Excel so that they can be read into R:
com.wide <- import("commerce202006.csv")
com.wide
## Percent change at seasonally adjusted annual rate (unless otherwise noted)
## 1 Gross domestic product*
## 2 Gross domestic purchases*
## 3 Personal consumption expenditures*
## 4 Nonresidential fixed investment*
## 5 Residential investment*
## 6 Exports of goods and services*
## 7 Imports of goods and services*
## 8 Government consumption expenditures and gross investment*
## 2018Q1 2018Q2 2018Q3 2018Q4 2019Q1 2019Q2 2019Q3 2019Q4 2020Q1
## 1 2.5 3.5 2.9 1.1 3.1 2.0 2.1 2.1 -5.0
## 2 2.5 2.8 4.9 1.4 2.3 2.6 2.2 0.6 -6.1
## 3 1.7 4.0 3.5 1.4 1.1 4.6 3.2 1.8 -6.8
## 4 8.8 7.9 2.1 4.8 4.4 -1.0 -2.3 -2.4 -7.9
## 5 -5.3 -3.7 -4.0 -4.7 -1.0 -3.0 4.6 6.5 18.5
## 6 0.8 5.8 -6.2 1.5 4.1 -5.7 1.0 2.1 -8.7
## 7 0.6 0.3 8.6 3.5 -1.5 0.0 1.8 -8.4 -15.5
## 8 1.9 2.6 2.1 -0.4 2.9 4.8 1.7 2.5 0.8
com.long <- pivot_longer(com.wide, cols = -1, names_to = "quarter", values_to = "change")
names(com.long) <- c("indicator", "quarter", "change")
com.long <- com.long |> pivot_wider(names_from = "indicator", values_from = "change")
com.long
## # A tibble: 9 × 9
## quarter `Gross domestic product*` `Gross domestic purchases*`
## <chr> <dbl> <dbl>
## 1 2018Q1 2.5 2.5
## 2 2018Q2 3.5 2.8
## 3 2018Q3 2.9 4.9
## 4 2018Q4 1.1 1.4
## 5 2019Q1 3.1 2.3
## 6 2019Q2 2 2.6
## 7 2019Q3 2.1 2.2
## 8 2019Q4 2.1 0.6
## 9 2020Q1 -5 -6.1
## `Personal consumption expenditures*` `Nonresidential fixed investment*`
## <dbl> <dbl>
## 1 1.7 8.8
## 2 4 7.9
## 3 3.5 2.1
## 4 1.4 4.8
## 5 1.1 4.4
## 6 4.6 -1
## 7 3.2 -2.3
## 8 1.8 -2.4
## 9 -6.8 -7.9
## `Residential investment*` `Exports of goods and services*`
## <dbl> <dbl>
## 1 -5.3 0.8
## 2 -3.7 5.8
## 3 -4 -6.2
## 4 -4.7 1.5
## 5 -1 4.1
## 6 -3 -5.7
## 7 4.6 1
## 8 6.5 2.1
## 9 18.5 -8.7
## `Imports of goods and services*`
## <dbl>
## 1 0.6
## 2 0.3
## 3 8.6
## 4 3.5
## 5 -1.5
## 6 0
## 7 1.8
## 8 -8.4
## 9 -15.5
## `Government consumption expenditures and gross investment*`
## <dbl>
## 1 1.9
## 2 2.6
## 3 2.1
## 4 -0.4
## 5 2.9
## 6 4.8
## 7 1.7
## 8 2.5
## 9 0.8
Now all I need to do is change the variable names. For this, I use the gsub() function, which is very helpful for changing each element of a character vector:
names(com.long) <- gsub(pattern = "[/*]",
replacement = "",
x = names(com.long))
com.long
## # A tibble: 9 × 9
## quarter `Gross domestic product` `Gross domestic purchases`
## <chr> <dbl> <dbl>
## 1 2018Q1 2.5 2.5
## 2 2018Q2 3.5 2.8
## 3 2018Q3 2.9 4.9
## 4 2018Q4 1.1 1.4
## 5 2019Q1 3.1 2.3
## 6 2019Q2 2 2.6
## 7 2019Q3 2.1 2.2
## 8 2019Q4 2.1 0.6
## 9 2020Q1 -5 -6.1
## `Personal consumption expenditures` `Nonresidential fixed investment`
## <dbl> <dbl>
## 1 1.7 8.8
## 2 4 7.9
## 3 3.5 2.1
## 4 1.4 4.8
## 5 1.1 4.4
## 6 4.6 -1
## 7 3.2 -2.3
## 8 1.8 -2.4
## 9 -6.8 -7.9
## `Residential investment` `Exports of goods and services`
## <dbl> <dbl>
## 1 -5.3 0.8
## 2 -3.7 5.8
## 3 -4 -6.2
## 4 -4.7 1.5
## 5 -1 4.1
## 6 -3 -5.7
## 7 4.6 1
## 8 6.5 2.1
## 9 18.5 -8.7
## `Imports of goods and services`
## <dbl>
## 1 0.6
## 2 0.3
## 3 8.6
## 4 3.5
## 5 -1.5
## 6 0
## 7 1.8
## 8 -8.4
## 9 -15.5
## `Government consumption expenditures and gross investment`
## <dbl>
## 1 1.9
## 2 2.6
## 3 2.1
## 4 -0.4
## 5 2.9
## 6 4.8
## 7 1.7
## 8 2.5
## 9 0.8
For more info on how to use gsub(), see http://www.endmemo.com/program/R/gsub.php.
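Two details about gsub() worth knowing: it is vectorized over its x argument, and pattern is interpreted as a regular expression, so the square brackets in "[/*]" above define a character class matching either / or *. A minimal sketch:

```r
# gsub() replaces every match in each element of a character vector.
gsub(pattern = "[/*]", replacement = "", x = c("GDP*", "Imports/Exports*"))
# "GDP" "ImportsExports"
```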
R also allows users to easily read tables from PDF files and process them for data analysis.
To import tables from PDF files into R, I recommend the “tabulizer” package. This package works well in my experience, but it may require some extra work to install and configure Java on your computer (and the rJava interface for R). This page has some suggestions for troubleshooting common issues during installation as well as some code examples.
Data management and data processing will almost always eat up much of the overall time you spend on a quantitative research project, especially when you work with clustered data. Often, you might end up spending 80% of your time on data management before getting to the remaining 20% that it takes to analyze your data. Developing good habits and an easy-to-use toolset will therefore go a long way toward making you a more efficient (and effective) researcher.
Here are some more tutorials on data cleaning and data processing if you find some time and want to practice your skills: