## Theory

Metropolis-Hastings is a Markov Chain Monte Carlo (MCMC) method for sampling from a posterior where the normalizing constant is unknown. The algorithm proceeds in a number of steps. We initialize with a guess $\theta^{(0)}$; one choice could be a point estimate, such as the posterior mode. Then for $t = 1, \dots, T$:

1. We generate a proposal $\theta^{*}$ from a proposal distribution $q\left(\theta^{*} \mid \theta^{(t-1)}\right)$.
2. We compute the acceptance ratio $\alpha = \min\left(1, \frac{p\left(\theta^{*}\right)\,q\left(\theta^{(t-1)} \mid \theta^{*}\right)}{p\left(\theta^{(t-1)}\right)\,q\left(\theta^{*} \mid \theta^{(t-1)}\right)}\right)$, where $p$ is the unnormalized posterior.
3. We set $\theta^{(t)} = \theta^{*}$ with probability $\alpha$, else $\theta^{(t)} = \theta^{(t-1)}$.

A popular proposal distribution is a random walk of the form $\theta^{*} = \theta^{(t-1)} + \epsilon_{t}$, with the random variables $\epsilon_{t}$ generated from a multivariate normal $N\left(0, \Sigma\right)$. The scale of the covariance matrix $\Sigma$ is tuned to achieve a good acceptance rate. The optimal acceptance rate for random-walk Metropolis-Hastings is between 0.234 and 0.44, depending on the number of latent variables, so PyFlux will run a tuning stage until the acceptance rate falls between these goal posts. Once tuning is done, we can obtain draws from the algorithm.
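The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not PyFlux's implementation: the target here is a stand-in standard normal log-posterior (known only up to a constant in a real problem), and because the random-walk proposal is symmetric, the $q$ terms in the acceptance ratio cancel.

```
import numpy as np

def log_posterior(beta):
    # Unnormalized log-posterior; a standard normal target as a stand-in
    return -0.5 * beta ** 2

def random_walk_mh(n_sims, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    beta = 0.0                                   # initial guess beta_0
    draws = np.empty(n_sims)
    accepted = 0
    for t in range(n_sims):
        proposal = beta + scale * rng.normal()   # symmetric random-walk proposal
        # For a symmetric q, alpha = min(1, p(proposal) / p(beta));
        # comparing on the log scale avoids underflow
        log_alpha = log_posterior(proposal) - log_posterior(beta)
        if np.log(rng.uniform()) < log_alpha:    # accept with probability alpha
            beta = proposal
            accepted += 1
        draws[t] = beta                          # on rejection, repeat the old draw
    return draws, accepted / n_sims

draws, rate = random_walk_mh(20000, scale=2.4)
```

With a well-chosen scale, the empirical distribution of `draws` approximates the target and the acceptance rate lands near the goal posts mentioned above.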

## PyFlux

First we will load some stock data using the DataReader function (now in the pandas-datareader package):

```
import numpy as np
import pyflux as pf
from pandas_datareader.data import DataReader  # pandas.io.data was removed from pandas
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
%matplotlib inline

fb = DataReader('FB', 'yahoo', datetime(2013,1,1), datetime(2016,6,10))
fb['Logged Adj Close'] = np.log(fb['Adj Close'].values)

plt.figure(figsize=(15,5))
plt.plot(fb.index[1:len(fb.index)], np.diff(fb['Adj Close'].values))
plt.ylabel('Returns')
plt.title('Facebook Returns')
plt.show()
```

We will estimate an ARIMA(2,1,2) model on this dataset using Metropolis-Hastings. But first we need to check that we are happy with our prior specification:

```
model = pf.ARIMA(data=fb,ar=2,ma=2,integ=1,target='Logged Adj Close')
print(model.latent_variables)
```

```
Index    Latent Variable    Prior      Prior Hyperparameters      V.I. Dist    Transform
=======  =================  =========  =========================  ===========  =========
0        Constant           Normal     mu0: 0, sigma0: 3          Normal       None
1        AR(1)              Normal     mu0: 0, sigma0: 0.5        Normal       None
2        AR(2)              Normal     mu0: 0, sigma0: 0.5        Normal       None
3        MA(1)              Normal     mu0: 0, sigma0: 0.5        Normal       None
4        MA(2)              Normal     mu0: 0, sigma0: 0.5        Normal       None
5        Sigma              Uniform    n/a (non-informative)      Normal       exp
```

The prior on the constant is weakly informative for this problem. The AR and MA priors are a little more informative, to discourage values that would imply non-stationarity. Plotting the pdf of the Normal prior with these hyperparameters gives:

```
from scipy.stats import norm  # mlab.normpdf was removed from matplotlib

mu = 0
sigma = 0.5
x = np.linspace(-3, 3, 100)
plt.figure(figsize=(15,5))
plt.title("Default prior for the AR and MA latent variables")
plt.plot(x, norm.pdf(x, mu, sigma));
```

In practice, we may want slightly stronger regularization here, but let's proceed and run random-walk Metropolis-Hastings using the *M-H* model fit option:

```
model = pf.ARIMA(data=fb,ar=2,ma=2,integ=1,target='Logged Adj Close')
x = model.fit("M-H",nsims=100000)
```

```
Acceptance rate of Metropolis-Hastings is 0.00172
Acceptance rate of Metropolis-Hastings is 0.25594
Tuning complete! Now sampling.
Acceptance rate of Metropolis-Hastings is 0.25698
```
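The output shows the tuning stage at work: the first, tiny acceptance rate triggers rescaling of the proposal covariance until the rate sits between the goal posts. A hypothetical sketch of the kind of rule such a tuning loop might apply (this is illustrative, not PyFlux's actual tuning code):

```
def tune_scale(acceptance, scale):
    # Shrink the step when too few proposals are accepted (big jumps get
    # rejected); grow it when too many are (tiny steps explore slowly).
    # 0.234 and 0.44 are the standard random-walk M-H goal posts.
    if acceptance < 0.234:
        return scale * 0.9
    if acceptance > 0.44:
        return scale * 1.1
    return scale
```

Repeating short sampling runs and applying a rule like this drives the acceptance rate into the target band before the final draws are collected.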

We can plot the latent variables using **plot_z**; below we plot the AR and MA latent variables:

```
model.plot_z(list(range(1,5)))
```

For informal checks of whether the algorithm appears to have converged, we can call the **trace_plot** method on the model's latent variables object:

```
model.latent_variables.trace_plot(figsize=(15,15))
```
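A numerical companion to eyeballing trace plots is the chain's autocorrelation: draws from a well-mixing chain decorrelate quickly, while slowly decaying autocorrelation suggests the sampler is moving sluggishly. A small helper along these lines (my own sketch, not a PyFlux function):

```
import numpy as np

def autocorr(x, lag):
    # Sample autocorrelation of a chain at a given lag
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))
```

Applied to a latent variable's chain of draws, values near zero at small lags indicate good mixing; values near one indicate the chain is barely moving between draws.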