Applied Exercises

This page provides exercises that are meant to deepen your knowledge of the topics covered in this section and to give you experience solving real-world problems.


1. Forecast density

In this exercise you will simulate a density forecast from a simple AR(1) model.

  • Using an AR(1) model (see below) and assuming that $\rho = 0.7$ and $y_T = 1.5$, simulate 1000 paths for the process, 20 periods into the future, assuming that the error term is iid $N(0, 0.5^2)$.

  • Using the prctile function, find the median path for the process.

  • Create a plot over time of the median path and the $16^{th}$ and $84^{th}$ percentiles of the forecast "distribution".

Theory

To forecast from an AR(1) model, $y_t = \rho \, y_{t-1} + \epsilon_t$, in general we need to know the parameter values and the last value of the process, $y_T$. We can then construct the forecast for time $T+1$ as

$$\hat{y}_{T+1} = \rho \, y_{T}$$

We can then use the forecasted value for $T+1$ to create a forecast for $T+2$.

Note that the error term is ignored above, since $E(\epsilon_{T+i} \mid \Omega_T) = 0 \;\; \forall \, i > 0$, where $\Omega_T$ is the information set at time $T$. We can, however, use Monte Carlo simulation to incorporate the uncertainty inherent in the model and get a "distribution" of forecasts.
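
Below is a minimal sketch of how this simulation could be set up; the variable names (nPaths, horizon, etc.) are illustrative choices, not part of the exercise.

```matlab
% Density forecast from an AR(1) model via Monte Carlo simulation
rho     = 0.7;      % AR(1) coefficient
yT      = 1.5;      % last observed value of the process
sigma   = 0.5;      % std. dev. of the error term
horizon = 20;       % forecast horizon
nPaths  = 1000;     % number of simulated paths

paths = zeros(nPaths, horizon);            % each row is one simulated path
for i = 1:nPaths
    yPrev = yT;
    for t = 1:horizon
        yPrev = rho*yPrev + sigma*randn;   % y_{T+t} = rho*y_{T+t-1} + eps
        paths(i, t) = yPrev;
    end
end

% Median and 16th/84th percentiles of the forecast "distribution"
pctiles = prctile(paths, [16 50 84]);      % 3 x horizon matrix

figure
plot(1:horizon, pctiles(2,:), 'b-', 'LineWidth', 1.5)   % median path
hold on
plot(1:horizon, pctiles(1,:), 'r--')                    % 16th percentile
plot(1:horizon, pctiles(3,:), 'r--')                    % 84th percentile
xlabel('Forecast horizon'), ylabel('y')
legend('Median', '16th percentile', '84th percentile')
```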

2. Finding the probability of an event using Monte Carlo simulation

In this exercise you will write a Monte Carlo simulation to answer a simple question on probabilities.

  • Write a script that performs a Monte Carlo simulation to find the probability that the sum of the numbers coming up on two (fair) dice is equal to 6.

  • Perform the simulation 10, 100, 1000, and 10,000 times and compare the results to the theoretical answer (see the sketch below).
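
One possible way to set up this simulation is sketched below; the vector of replication counts and the printed comparison are illustrative choices.

```matlab
% Monte Carlo estimate of P(sum of two fair dice equals 6)
nReps = [10 100 1000 10000];   % numbers of simulated throws to try
pTrue = 5/36;                  % theoretical probability

for k = 1:length(nReps)
    n    = nReps(k);
    dice = randi(6, n, 2);             % each row: one throw of two dice
    pHat = mean(sum(dice, 2) == 6);    % fraction of throws summing to 6
    fprintf('n = %6d: estimate = %.4f (theory = %.4f)\n', n, pHat, pTrue)
end
```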

3. Large-sample distribution of the OLS estimator

In this exercise you will simulate the large-sample distribution of the OLS estimator.

From standard asymptotic theory we know that the OLS estimator is normally distributed with mean equal to the true coefficients and covariance equal to $\sigma^2 (X'X)^{-1}$.

Write a Monte Carlo simulation to explore these large-sample properties. Assume that the true model is $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$, where $\beta_0 = 1$, $\beta_1 = 2$, $\beta_2 = 5$, and $\epsilon \sim N(0,1)$. Let the sample size be 50 and set the number of replications to 2000.

  1. Create the data sets assuming that the regressors are iid $N(0,1)$.

  2. For each replication, estimate the OLS coefficients, the variance of the error term, and the variance of the OLS coefficients.

  3. Run the following experiments (a possible implementation is sketched after this list):

    • Check that the OLS estimate has the right mean, i.e. compare the mean of the estimated coefficients to the true coefficients.

    • Check that the variance of the OLS coefficients is correctly estimated, i.e. compare your estimate to the covariance of the coefficient estimates using the cov function.

    • Check that the OLS estimator has the right distribution: compare the CDF of the normalized estimate of $\beta_2$ to the CDF of the standard normal distribution.
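
A possible skeleton for this simulation is sketched below; the variable names (bHat, varBhat, etc.) and the way the results are displayed are illustrative choices.

```matlab
% Monte Carlo study of the large-sample distribution of the OLS estimator
beta = [1; 2; 5];           % true coefficients [beta0; beta1; beta2]
n    = 50;                  % sample size
nRep = 2000;                % number of replications

bHat    = zeros(nRep, 3);    % estimated coefficients, one row per replication
varBhat = zeros(3, 3, nRep); % estimated covariance of the coefficients

for r = 1:nRep
    X  = [ones(n,1) randn(n,2)];    % constant plus two iid N(0,1) regressors
    y  = X*beta + randn(n,1);       % eps ~ N(0,1)
    b  = X\y;                       % OLS estimates
    e  = y - X*b;                   % residuals
    s2 = (e'*e)/(n - 3);            % estimated error variance
    bHat(r,:)      = b';
    varBhat(:,:,r) = s2*inv(X'*X);  % estimated covariance of the coefficients
end

% 1. Mean of the estimates vs. the true coefficients
disp([mean(bHat)' beta])

% 2. Average estimated covariance vs. sample covariance of the estimates
disp(mean(varBhat, 3))
disp(cov(bHat))

% 3. CDF of the normalized estimate of beta2 vs. the standard normal CDF
se2 = sqrt(squeeze(varBhat(3,3,:)));   % estimated std. errors of beta2
z   = (bHat(:,3) - beta(3)) ./ se2;    % normalized estimates
figure
cdfplot(z); hold on
zGrid = linspace(-4, 4, 200);
plot(zGrid, normcdf(zGrid), 'r--')
legend('Empirical CDF', 'Standard normal CDF')
```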