Applied exercises

This section provides exercises meant to deepen your knowledge of the topics covered above and to give you experience solving real-world problems.

1. Computing the OLS estimator

In this exercise you will compute the OLS estimator on a simulated data set using basic MATLAB commands. Please refer to the theory section below for the necessary formulas.

Import the simulated data from olsdata.m and compute the OLS estimator $\hat{\beta}$ using matrix expressions. Create a results matrix which stacks the estimated parameters and the values supplied in the vector beta_true side by side. Are the estimated and the true values close?

Next, read up on MATLAB's regress function in the MATLAB documentation. Estimate the OLS coefficients using this function and compare the results to the ones you computed manually.
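The two estimation steps can be sketched as follows. Since the contents of olsdata.m are not shown here, the sketch simulates comparable data itself; the sample size, number of regressors, and true coefficients are assumptions for illustration only.

```matlab
% Illustrative sketch: olsdata.m is not shown, so we simulate comparable
% data ourselves (assumed N = 500, K = 3 including the constant).
rng(1);                         % reproducibility
N = 500;
beta_true = [1; 0.5; -2];       % hypothetical true coefficients
X = [ones(N,1) randn(N,2)];     % regressors, first column of ones
y = X*beta_true + randn(N,1);   % dependent variable

% OLS via the matrix formula beta_hat = (X'X)^(-1) X'y
beta_hat = (X'*X) \ (X'*y);     % backslash is numerically preferable to inv()

% Stack estimates and true values side by side
results = [beta_hat beta_true];
disp(results)

% Cross-check against MATLAB's built-in regress
beta_regress = regress(y, X);
disp(max(abs(beta_hat - beta_regress)))   % should be numerically zero
```

Note the use of the backslash operator `(X'*X) \ (X'*y)` rather than `inv(X'*X)*(X'*y)`: both implement the same formula, but the solver is faster and more numerically stable than forming the explicit inverse.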

Theoretical Background

Let $y$ be an $N \times 1$ vector of data on the dependent variable and let $X$ be an $N \times K$ matrix with data on the regressors, where the first column is a vector of ones.

The OLS estimator of the regression coefficients is defined as $\hat{\beta} = (X'X)^{-1}(X'y)$.

2. Computing the log-likelihood of a logit model

In this exercise you will compute the log-likelihood of a logit model on a simulated data set using basic MATLAB commands. Please refer to the theory section below for the necessary formulas.

Import the simulated data from logitdata.m and calculate the value of the log-likelihood for different values of the parameter $\beta$ using matrix expressions.

Approximately, for which value of $\beta$ is the log-likelihood maximal?
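A grid search over candidate values of $\beta$ can be sketched as follows. Since logitdata.m is not shown, the sketch simulates its own scalar data with an assumed true parameter of 1; the sample size and grid range are likewise illustrative assumptions.

```matlab
% Illustrative sketch: logitdata.m is not shown, so we simulate scalar
% logit data with an assumed true beta of 1.
rng(2);
N = 1000;
x = randn(N,1);
p = exp(x) ./ (1 + exp(x));        % logistic probabilities at beta = 1
y = double(rand(N,1) < p);         % binary 0/1 outcomes

% Evaluate the log-likelihood on a grid of beta values
beta_grid = -2:0.01:3;
logL = zeros(size(beta_grid));
for j = 1:numel(beta_grid)
    xb = x * beta_grid(j);
    logL(j) = sum( y .* log(exp(xb)./(1+exp(xb))) ...
                 + (1-y) .* log(1./(1+exp(xb))) );
end

[~, jmax] = max(logL);
disp(beta_grid(jmax))              % maximizer should lie near the true beta
```

Plotting `logL` against `beta_grid` (e.g. with `plot(beta_grid, logL)`) makes the single peak of the log-likelihood easy to see.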

Theoretical Background

Consider the following discrete choice logit model with no constant and one regressor

$y_i^* = x_i \beta + \varepsilon_i, \qquad y_i = \mathbf{1}\{y_i^* > 0\}$

where all variables are scalars, $\varepsilon_i$ follows a logistic distribution, and the observed outcome $y_i$ is a binary variable (i.e. it takes the values 0 and 1).

The log-likelihood of the data given a value for the parameter $\beta$ is defined as

$\mathcal{L}(\beta) = \sum_{i=1}^{N} \; y_i \ln\left( \frac{\exp(x_i \beta)}{1+\exp(x_i \beta)} \right) + (1-y_i) \ln\left( \frac{1}{1+\exp(x_i \beta)} \right)$

3. Estimating a factor model using Principal Components (Advanced)

In this exercise you will estimate a factor model on a simulated data set using basic MATLAB commands. Please refer to the theory section below for the necessary formulas.

Read up on MATLAB's eig function in the MATLAB documentation. Use the eig function to estimate the matrix of factors $F$ and loadings $\Lambda$ for the following dataset for $r = 2$.

Caution: eig does not guarantee any particular ordering of the eigenvalues (for symmetric matrices they are typically returned in ascending order of magnitude). Sort them explicitly and make sure you extract the $r$ eigenvectors corresponding to the $r$ largest eigenvalues.
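The principal-components estimation described in the theory section below can be sketched as follows. The dataset is not shown here, so the sketch simulates its own $T \times N$ panel with two factors; the dimensions and noise scale are illustrative assumptions.

```matlab
% Illustrative sketch: we simulate a T x N panel generated by r = 2
% factors, then recover factors and loadings via eig on X*X'.
rng(3);
T = 200; N = 50; r = 2;
F_true = randn(T, r);
Lambda_true = randn(N, r);
X = F_true * Lambda_true' + 0.5*randn(T, N);   % X is T x N

[V, D] = eig(X*X');                  % eigen-decomposition of the T x T matrix
[~, idx] = sort(diag(D), 'descend'); % sort eigenvalues in descending order
V = V(:, idx);                       % reorder eigenvectors to match

F_hat = sqrt(T) * V(:, 1:r);         % first r eigenvectors, scaled: T x r
Lambda_hat = X' * F_hat / T;         % loadings: N x r

disp(size(F_hat))
disp(size(Lambda_hat))
```

Note that the estimated factors are only identified up to rotation and sign, so `F_hat` should span approximately the same space as `F_true` rather than reproduce it column by column.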

Theoretical Background

We will use the following factor model

$X_t = \Lambda \; F_t + u_t$

where $X_t$ is a large $N \times 1$ vector of series which we would like to explain by a smaller number of factors, $F_t$ is an $r \times 1$ vector of factors, and $u_t$ is an $N \times 1$ vector of idiosyncratic shocks. $\Lambda$ is a matrix of factor loadings of dimension $N \times r$. $T$ is the number of observations.

Under some normalizations, the rr factors and their factor loadings Λ\Lambda can be estimated by principal components using the following formulae.

$\hat{F} = \sqrt{T} \; EV(XX')_{1:r} \qquad \hat{\Lambda}' = \hat{F}' X / T$

where $F$ is a $T \times r$ matrix and $X$ is a $T \times N$ matrix. $EV(A)_{1:r}$ denotes the first $r$ eigenvectors of the matrix $A$, which correspond to the $r$ largest eigenvalues.
