# Quant Questions

Most recent 30 questions from quant.stackexchange.com, retrieved 2013-05-22T11:19:23Z

### How to fix ARMA coefficients in fGarch package?

Tue, 05/21/2013 - 15:22

I want to fix certain coefficients in the ARMA equation of the garchFit command of the fGarch package in R.

E.g. consider:

```r
garchFit(spr ~ arma(2, 2) + garch(1, 1), data = mydata, cond.dist = "ged")
```

Now I want to fix the ar1 coefficient to zero. How can I do this?

I know that this is possible in e.g. Stata, EViews, the rugarch package, and Matlab, but how can I do it with the fGarch package in R?

### VaR for portfolio of funds

Tue, 05/21/2013 - 06:32

Let's assume we need to calculate a 1-day VaR for a portfolio of funds. Funds are traded, they can be bought and sold every day. We know exactly what the assets in each fund are. What is the right way to calculate VaR?

1. flatten the portfolio and calculate VaR on a portfolio of assets
2. treat each fund as a tradeable asset and calculate VaR based on that?
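A small numerical sketch (with made-up asset returns and fund weights, not data from the question) illustrates that when fund holdings are fully transparent and the portfolio is a linear combination of the funds, both approaches give exactly the same historical VaR:

```python
import random

random.seed(0)

# Hypothetical setup: 3 assets, 2 funds holding those assets, and a
# portfolio holding the funds. All names and weights are illustrative.
n_days, n_assets = 1000, 3
asset_returns = [[random.gauss(0, 0.01) for _ in range(n_assets)]
                 for _ in range(n_days)]

fund_weights = [
    [0.5, 0.5, 0.0],   # fund A: assets 1 and 2
    [0.0, 0.3, 0.7],   # fund B: assets 2 and 3
]
portfolio_in_funds = [0.6, 0.4]  # 60% in fund A, 40% in fund B

def hist_var(returns, alpha=0.99):
    """1-day historical VaR: the alpha-quantile of the loss distribution."""
    losses = sorted(-r for r in returns)
    return losses[int(alpha * len(losses)) - 1]

# Approach 2: treat each fund as a tradeable asset.
fund_returns = [[sum(w * r for w, r in zip(fw, day)) for fw in fund_weights]
                for day in asset_returns]
port_ret_via_funds = [sum(p * f for p, f in zip(portfolio_in_funds, day))
                      for day in fund_returns]

# Approach 1: flatten the portfolio to asset-level weights first.
flat_weights = [sum(p * fw[i] for p, fw in zip(portfolio_in_funds, fund_weights))
                for i in range(n_assets)]
port_ret_flat = [sum(w * r for w, r in zip(flat_weights, day))
                 for day in asset_returns]

print(hist_var(port_ret_via_funds), hist_var(port_ret_flat))
```

The two methods can only diverge once the mapping from funds to assets is not linear or not fully known, e.g. with stale fund NAVs or incomplete holdings data.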

### Sign Bias test output interpretation of rugarch?

Mon, 05/20/2013 - 04:19

I fitted an ARMA-GARCH process to my data and looked at the output:

```
*---------------------------------*
*          GARCH Model Fit        *
*---------------------------------*

Conditional Variance Dynamics
-----------------------------------
GARCH Model  : sGARCH(1,1)
Mean Model   : ARFIMA(0,0,0)
Distribution : norm

Optimal Parameters
------------------------------------
        Estimate  Std. Error  t value Pr(>|t|)
omega   0.000006    0.000001   4.2209  2.4e-05
alpha1  0.092386    0.011281   8.1898  0.0e+00
beta1   0.893276    0.012512  71.3935  0.0e+00

LogLikelihood : 6751.414

Information Criteria
------------------------------------
Akaike       -5.2313
Bayes        -5.2245
Shibata      -5.2313
Hannan-Quinn -5.2289

Q-Statistics on Standardized Residuals
------------------------------------
              statistic  p-value
Lag[1]            7.939 0.004839
Lag[p+q+1][1]     7.939 0.004839
Lag[p+q+5][5]    15.939 0.007021
d.o.f=0
H0 : No serial correlation

Q-Statistics on Standardized Squared Residuals
------------------------------------
              statistic  p-value
Lag[1]            1.390 0.238324
Lag[p+q+1][3]     9.242 0.002365
Lag[p+q+5][7]    11.963 0.035303
d.o.f=2

ARCH LM Tests
------------------------------------
             Statistic DoF P-Value
ARCH Lag[2]      5.474   2 0.06478
ARCH Lag[5]      8.973   5 0.11014
ARCH Lag[10]    12.090  10 0.27910

Nyblom stability test
------------------------------------
Joint Statistic: 0.4445
Individual Statistics:
omega  0.1021
alpha1 0.2020
beta1  0.2157

Asymptotic Critical Values (10% 5% 1%)
Joint Statistic:      0.846 1.01 1.35
Individual Statistic: 0.35  0.47 0.75

Sign Bias Test
------------------------------------
                   t-value      prob sig
Sign Bias            2.020 4.344e-02  **
Negative Sign Bias   1.014 3.105e-01
Positive Sign Bias   1.579 1.145e-01
Joint Effect        21.632 7.779e-05 ***

Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
  group statistic p-value(g-1)
1    20     70.53    7.516e-08
2    30     91.63    2.040e-08
3    40    120.81    2.683e-10
4    50    134.53    6.620e-10

Elapsed time : 0.405601
```

Now I want to interpret the results of the sign bias test. As you can see, there is no significant negative or positive sign bias, but there is a significant sign bias and a significant joint effect. How do I interpret this? Is this evidence for the leverage effect, or what does it tell me? Should I use an apARCH model instead?

### Why are Fund Managers' Average/Minimum Purchase Prices from 13F Form Data the Same?

Sun, 05/19/2013 - 13:13

Using fund managers' current and past 13F form filings, you can calculate the average price, minimum price, and number of shares of a stock they listed for each quarter.

Looking at the data to see who bought AAPL last quarter (2013-03-31) and performing the calculations resulted in all their average and minimum prices for AAPL being the same price:

Average: $467.26, Minimum: $420.05

The following links also list their average and minimum price for AAPL as the same:

Why, in this example (and in general), do all fund managers who performed a buy, add, or reduce of AAPL have an average price of $467.26 and a minimum price of $420.05?

### Replicating strategy in the Black-Scholes model

Sun, 05/19/2013 - 09:51

I have a two-asset Black-Scholes model for a financial market:

$dB_t=B_t r dt$

$dS_t=S_t(\mu dt+\sigma dW_t)$

I introduce a European claim $\xi=\max(K,S_T)$ with maturity $T$, for some fixed $K$. I have calculated what the no-arbitrage price of this claim should be at each time $t<T$ by computing expectations under the equivalent martingale measure; it is a function of $S_t$, $t$, and the fixed parameters in the model. I am now asked to find a replicating portfolio in the original two-asset market for this claim.

I know that if $V(t,S)$ is a solution to the Black-Scholes PDE subject to the terminal condition $V(T,S)=\max(K,S)$, then $V(t,S_t)$ is a no-arbitrage time-$t$ price for $\xi$, and the trading strategy given by taking initial wealth to be $V(0,S_0)$ and the time-$t$ holding in the stock to be $\frac{\partial V}{\partial S}$ is a replicating strategy for the claim.

If I view the pricing function I originally found (by computing expectations) as a function $\xi(t,S_t)$, is it necessarily true that taking initial wealth to be $\xi(0,S_0)$ and taking the time-$t$ holding in the stock to be $\frac{\partial \xi}{\partial S}$ will give a replicating portfolio? It should be, simply because there is a unique equivalent martingale measure in this market, so there must be a unique no-arbitrage time-$t$ cost for the claim at each time $t$, and so $\xi(t,S_t)$ must solve the Black-Scholes PDE.

My question is, is it possible to prove that this trading strategy does replicate the claim without appealing to the fact that the pricing function solves the Black-Scholes PDE?

### Interpretation and consequences of the Nyblom test in the rugarch package?

Sun, 05/19/2013 - 04:04

I fitted a garch model using rugarch of the r package and I got the following output:

```
*---------------------------------*
*          GARCH Model Fit        *
*---------------------------------*

Conditional Variance Dynamics
-----------------------------------
GARCH Model  : sGARCH(1,1)
Mean Model   : ARFIMA(0,0,0)
Distribution : norm

Optimal Parameters
------------------------------------
        Estimate  Std. Error  t value Pr(>|t|)
omega   0.000006    0.000001   4.2209  2.4e-05
alpha1  0.092386    0.011281   8.1898  0.0e+00
beta1   0.893276    0.012512  71.3935  0.0e+00

LogLikelihood : 6751.414

Information Criteria
------------------------------------
Akaike       -5.2313
Bayes        -5.2245
Shibata      -5.2313
Hannan-Quinn -5.2289

Q-Statistics on Standardized Residuals
------------------------------------
              statistic  p-value
Lag[1]            7.939 0.004839
Lag[p+q+1][1]     7.939 0.004839
Lag[p+q+5][5]    15.939 0.007021
d.o.f=0
H0 : No serial correlation

Q-Statistics on Standardized Squared Residuals
------------------------------------
              statistic  p-value
Lag[1]            1.390 0.238324
Lag[p+q+1][3]     9.242 0.002365
Lag[p+q+5][7]    11.963 0.035303
d.o.f=2

ARCH LM Tests
------------------------------------
             Statistic DoF P-Value
ARCH Lag[2]      5.474   2 0.06478
ARCH Lag[5]      8.973   5 0.11014
ARCH Lag[10]    12.090  10 0.27910

Nyblom stability test
------------------------------------
Joint Statistic: 0.4445
Individual Statistics:
omega  0.1021
alpha1 0.2020
beta1  0.2157

Asymptotic Critical Values (10% 5% 1%)
Joint Statistic:      0.846 1.01 1.35
Individual Statistic: 0.35  0.47 0.75

Sign Bias Test
------------------------------------
                   t-value      prob sig
Sign Bias            2.020 4.344e-02  **
Negative Sign Bias   1.014 3.105e-01
Positive Sign Bias   1.579 1.145e-01
Joint Effect        21.632 7.779e-05 ***

Adjusted Pearson Goodness-of-Fit Test:
------------------------------------
  group statistic p-value(g-1)
1    20     70.53    7.516e-08
2    30     91.63    2.040e-08
3    40    120.81    2.683e-10
4    50    134.53    6.620e-10

Elapsed time : 0.405601
```

Now I am wondering about the Nyblom stability test. It tests whether the parameters change over time. The null hypothesis of stability is rejected if the value of the relevant test statistic is larger than the corresponding critical value.

0.4445 is not larger than 0.846, 1.01, or 1.35, so we cannot reject the stability hypothesis. So my parameters stay the same over time and I have no problems with my model, right?

Now I have cases in which the joint test statistic is larger than the critical values, and also single parameters whose test statistics are larger than the critical value. My question is how to interpret such results. Mainly, this means that my parameters are not constant over time, so they change and there is a break point, right?

Can I now say the following? Since the Nyblom test shows that some parameters vary over time, it would be good to do a rolling forecast, i.e. re-estimate the parameters over time and look at how they change. Since the Nyblom test showed that the parameters do change over time, I would expect the rolling forecast to confirm this: the parameters the Nyblom test flagged as varying should indeed vary more over time than the parameters it found to be constant. Will this work? Or is it not possible to connect the Nyblom test to rolling forecasting?

### Optimal mortgage rate strategy

Sat, 05/18/2013 - 11:50

When taking out a mortgage, you can choose to "lock in" a rate at any point within 60 days of your closing date. Once locked in, you can't revert.

This makes it a secretary problem: in the traditional problem, we would wait $\sqrt{60}$ days and then lock in at the first rate lower than anything seen so far. However, unlike in the traditional problem, the rates are not i.i.d., so it becomes harder.

One model is to assume rates follow a random walk; I've found this paper, whose abstract says that the optimal strategy in random-walk secretary problems is to choose the first rate, but the text is behind a paywall, so I'm not sure of all the assumptions.

Can someone point me to a reference on optimal stopping in the case of locking in mortgage rates?

### Is Unexpected Loss ever used in Basel II?

Sat, 05/18/2013 - 10:29

In Basel II, EL (expected loss) is useful. It's calculated as

$$EL = PD \cdot EAD \cdot LGD$$

in the advanced IRB (internal ratings-based) approach:

Correlation $$R = 0.12 \cdot \frac{1 - e^{-50 \cdot PD}}{1 - e^{-50}} + 0.24 \cdot \left(1 - \frac{1 - e^{-50 \cdot PD}}{1 - e^{-50}}\right)$$

$$b = [0.11852 - 0.05478 \ln(PD)]^2$$

Capital requirement $$K = \left[ LGD \cdot N\!\left(\sqrt{\frac{1}{1 - R}} \cdot G(PD) + \sqrt{\frac{R}{1 - R}} \cdot G(0.999)\right) - PD \cdot LGD \right] \cdot \frac{1 + (M - 2.5)\, b}{1 - 1.5\, b}$$

Here $\ln$ denotes the natural logarithm; $N(x)$ denotes the cumulative distribution function of a standard normal random variable; $G(z)$ denotes the inverse cumulative distribution function of a standard normal random variable (i.e. the value of $x$ such that $N(x) = z$).

Afterwards,

Risk-weighted assets $$RWA = K \cdot 12.5 \cdot EAD$$

then

$$CAR = \frac{\text{Tier 1 capital} + \text{Tier 2 capital}}{\text{Risk-weighted assets}}$$

-- Basel II defines limits on CAR.

But did Basel II place any restriction on unexpected loss?

The FRM curriculum has a formula calculating UL from LGD, EAD, etc.: Unexpected loss $$UL = EAD \cdot \sqrt{PD \cdot \sigma_{LGD}^2 + LGD^2 \cdot \sigma_{PD}^2}$$
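To make the formulas above concrete, here is a numerical sketch of the IRB chain (PD, LGD, EAD, M are illustrative inputs, not taken from any real exposure), using only the standard normal CDF and its inverse:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf        # standard normal CDF
G = NormalDist().inv_cdf    # its inverse (quantile function)

# Illustrative inputs for one exposure:
PD, LGD, EAD, M = 0.02, 0.45, 1_000_000.0, 2.5

# Asset correlation R and maturity adjustment b, as in the formulas above.
R = (0.12 * (1 - exp(-50 * PD)) / (1 - exp(-50))
     + 0.24 * (1 - (1 - exp(-50 * PD)) / (1 - exp(-50))))
b = (0.11852 - 0.05478 * log(PD)) ** 2

# Capital requirement K, then risk-weighted assets and expected loss.
K = ((LGD * N(sqrt(1 / (1 - R)) * G(PD) + sqrt(R / (1 - R)) * G(0.999))
      - PD * LGD)
     * (1 + (M - 2.5) * b) / (1 - 1.5 * b))
RWA = K * 12.5 * EAD
EL = PD * EAD * LGD

print(K, RWA, EL)
```

Note that K is a capital charge against the *unexpected* part of the loss distribution (the 99.9% quantile minus EL), which is how Basel II handles UL even though it never imposes a separate formula on it.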

### When the Inverse Correlation between the SPX and VIX breaks down

Sat, 05/18/2013 - 09:03

As we all know, the S&P 500 and its implied volatility index, the VIX, generally move in opposite directions. To a large extent the correlation makes sense: IV is one of the main drivers of option prices, so going long options is also going long IV. When the market drops, option prices, adjusted for the drop in the S&P price, appreciate further; this extra appreciation of the options, above and beyond the price movement, can be attributed to an increase in IV. My question is: when the usual inverse relationship between IV (the VIX) and the SPX (corr = -0.95) breaks down, what do you think this implies about future returns? Why do these instances occur? There are many times when the correlation drops to 0 and even becomes positive!

Below is a picture with the 20-day correlation between the two indices.

### Different results between Box-Ljung test and ARCHLM test?

Sat, 05/18/2013 - 07:45

I fitted a GARCH model using the rugarch package. The output (extract) is as follows:

Now I have trouble interpreting the results of the Q-statistics. First of all, to test the mean equation, we look at the standardized residuals; these should behave iid(0,1). Since the p-values are very small, we can conclude that they are not independent, since there exists serial correlation. Is this right?

To test the volatility equation, we look at the standardized squared residuals; they should also behave iid(0,1). Since the p-values for lag orders higher than one are small, we can conclude that they are not independent. So there still exist ARCH effects, right?

Now my problem is that the ARCH-LM test also looks at the standardized squared residuals and comes to a different result: the p-values are not very small, so H0 cannot be rejected. That means there are no ARCH effects left?

If I look at the plots I get: *(plots not shown)*

### What's the first time-integral of price called?

Fri, 05/17/2013 - 11:11

In general I'm wondering about the names of time-derivatives of price.

E.g. in physics the first few time-derivatives of position are:

• f(t) = displacement
• f'(t) = velocity
• f''(t) = acceleration

And the first integral (anti-derivative) of displacement is called absement.

What would the equivalent financial terms be?

### Quality of GAINDATA timestamps

Fri, 05/17/2013 - 10:42

Does anyone have a view on the quality of the timestamping of GAINCAPITAL's free historical data?

There is also non-FX data there, and I wonder whether the timestamps are in sync.

### Desired portfolio volume

Fri, 05/17/2013 - 09:25

I am working on a toy model, in part of which an investor has to decide (based on some utility theory) how much money to invest in a given portfolio. For simplicity, assume that the portfolio is already constructed and has an expected return $\mu$ and volatility $\sigma$, both known to the investor. If the investor invests $x$ in the portfolio, he gets $(1+\rho)x$ on the next step, where $$\rho \sim\mathscr N(\mu,\sigma^2)$$ is the stochastic return on the investment. Suppose that at the current moment the investor has capital $X$. Are there any formulas from utility theory for computing the desired level of investment given $X$, $\mu$, and $\sigma$, and perhaps some additional parameters such as the investor's risk aversion?
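One textbook answer: under exponential (CARA) utility $U(w)=-e^{-\gamma w}$ with normally distributed $\rho$ and the uninvested capital earning zero, expected utility has a closed form and the optimum is $x^*=\mu/(\gamma\sigma^2)$, independent of $X$. A sketch with illustrative parameter values, cross-checked by a grid search:

```python
# Illustrative parameters (assumptions, not from the question):
mu, sigma, gamma = 0.05, 0.20, 2.0   # mean and vol of rho; risk aversion

# With U(w) = -exp(-gamma*w) and rho ~ N(mu, sigma^2), expected utility of
# investing x is proportional to -exp(h(x)) with
#   h(x) = -gamma*x*mu + gamma^2 * x^2 * sigma^2 / 2,
# so maximizing utility means minimizing h. Setting h'(x) = 0 gives:
x_closed = mu / (gamma * sigma ** 2)

def h(x):
    """Exponent of expected CARA utility (up to constants); minimize it."""
    return -gamma * x * mu + gamma ** 2 * x ** 2 * sigma ** 2 / 2

# Crude grid search over x in [0, 2] as a sanity check.
grid = [i / 1000 for i in range(0, 2001)]
x_grid = min(grid, key=h)

print(x_closed, x_grid)   # both 0.625
```

With CRRA (power) utility the analogous Merton result makes the optimal *fraction* of $X$ constant, so the dollar amount scales with capital; which form fits depends on how risk aversion should interact with $X$ in the toy model.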

### Confused about APARCH, not APGARCH?

Fri, 05/17/2013 - 06:02

I am wondering about the apARCH model: as you can see, it clearly has both terms that a GARCH model has. The apARCH volatility equation (with power parameter $d$ and leverage parameters $\eta_i$) is given by

\begin{align} \sigma^d_t&=\omega + \sum_{i=1}^r \gamma_i (|\epsilon_{t-i}|-\eta_i \epsilon_{t-i})^d + \sum_{j=1}^s \delta_j \sigma^d_{t-j} \end{align}

And the standard GARCH volatility equation is given by

\begin{align} \sigma^2_t&=\omega + \sum_{i=1}^r \gamma_i \epsilon_{t-i}^2 + \sum_{j=1}^s \delta_j \sigma^2_{t-j} \end{align}

So one can see that both models clearly use the $\epsilon$ and $\sigma$ terms. My question is therefore: why is the apARCH model called apARCH and not apGARCH? And if I am wrong, what is the difference to the apGARCH (does it exist?)

### Is my VaR calculation correct?

Fri, 05/17/2013 - 05:33

I want to use an ARMA-GARCH process to calculate the value at risk.

I use the rugarch package of R.

First of all, I specify my model:

```r
mymodel <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(1, 1), include.mean = FALSE),
  distribution.model = "norm")
```

Then I fit it:

```r
modelfit <- ugarchfit(spec = mymodel, data = mydata)
```

Now comes the crucial point: I want to use the fitted values to calculate the VaR. The model is given by:

\begin{align} r_t&=\mu_t + a_t, \qquad \mu_t = \alpha_1 r_{t-1}+ \beta_1 a_{t-1} \\ \sigma^2_t&=\gamma_0 + \gamma_1 a^2_{t-1} + \delta_1\sigma^2_{t-1} \end{align}

where

$a_t=\sigma_t \epsilon_t$ and $\epsilon_t$ is iid(0,1)

The value at risk (as far as I know) can now be calculated via \begin{align} \widehat{VaR}_{0.99,T|T-1}&=\hat{\mu}_{T|T-1} + \hat{\sigma}_{T|T-1} \cdot q_{0.99}. \end{align} I therefore need the estimated values of $\mu$ and $\sigma$, but I am not sure how to get them. Is one of the following correct, and if neither is, what is?

This here:

```r
spec = getspec(modelfit)
setfixed(spec) <- as.list(coef(modelfit))
forecast = ugarchforecast(spec, n.ahead = 1, n.roll = 1900,
                          data = mydata[1:1901, , drop = FALSE], out.sample = 1900)
sigma(forecast)
fitted(forecast)
```

or do I have to take the fitted and sigma values of the modelfit?

```r
fitted(modelfit)
sigma(modelfit)
```

I want to calculate the VaR using the fitted and sigma values, and I can check them via the quantile function, which gives me the quantiles directly:

```r
quantile(modelfit, 0.99)
quantile(forecast, 0.99)
```

Nevertheless, my problem stays the same: what is correct? To use the fitted and sigma values of the modelfit or of the forecast? And if both are incorrect, what is?
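Whichever object supplies them, the final VaR step is purely mechanical. A sketch with hypothetical one-step-ahead values (stand-ins for whatever fitted() and sigma() return), assuming a normal conditional distribution:

```python
from statistics import NormalDist

# Hypothetical one-step-ahead conditional mean and volatility forecasts:
mu_hat, sigma_hat = 0.0004, 0.012

# VaR_{0.99} = mu + sigma * q_{0.99}, as in the formula above. Note the sign
# convention: for a long position, the large *loss* sits in the lower tail,
# so many texts instead use q_{0.01} = -q_{0.99}.
q99 = NormalDist().inv_cdf(0.99)
var_upper = mu_hat + sigma_hat * q99                         # upper-tail quantile
var_lower = mu_hat + sigma_hat * NormalDist().inv_cdf(0.01)  # lower-tail quantile

print(var_upper, var_lower)
```

The substantive question, in-sample fitted values versus a genuine out-of-sample forecast, is a backtest-design choice: for a $T|T-1$ VaR evaluated on data the model has not seen, the rolling-forecast values are the ones that match the formula's conditioning.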

Thanks a lot for sharing your wisdom!

### R Outputs from Johansen test. Linear combination still not stationary?

Thu, 05/16/2013 - 13:46

I am trying to see if house price is cointegrated with interest rate, per capita income, and rental vacancy rate, and got the following output from ca.jo in R:

```
######################
# Johansen-Procedure #
######################

Test type: maximal eigenvalue statistic (lambda max), with linear trend

Eigenvalues (lambda):
[1] 0.52471580 0.12579545 0.10395269 0.06262468

Values of teststatistic and critical values of test:

          test 10pct  5pct  1pct
r <= 3 |  8.47  6.50  8.18 11.65
r <= 2 | 14.38 12.91 14.90 19.19
r <= 1 | 17.61 18.90 21.07 25.75
r = 0  | 97.44 24.78 27.14 32.14

Eigenvectors, normalised to first column:
(These are the cointegration relations)

                    y.l2   income.l2 interest.l2     vac.l2
y.l2          1.00000000  1.00000000   1.0000000 1.00000000
income.l2   -10.16285869 -1.32443038 -12.6597547 0.61669614
interest.l2  -0.06759846 -0.35179735  -0.1535533 0.02143767
vac.l2        0.22771577  0.02087503  -0.4814448 0.02113804
```

So from what I understand, the output indicates that there is one cointegrating relation. And using the first eigenvector, I should get that y (which is log price) = -10.16*income - 0.0675*interest + 0.227*vacancy rate. However, I ran an ADF test on this combination and got a p-value of 0.11 (meaning the combination is still non-stationary!). Why is that? Am I using the wrong thing? What does the ".l2" mean after the variable names?
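One thing worth checking is the sign convention. The candidate stationary series is the inner product of the first eigenvector with the variables, using the entries exactly as printed; solving that relation for y flips the signs of the other entries. A small sketch (the function and sample values are illustrative, only the eigenvector entries come from the output):

```python
# First eigenvector as printed by ca.jo:
beta = {"y": 1.0, "income": -10.16285869,
        "interest": -0.06759846, "vac": 0.22771577}

def spread(y, income, interest, vac):
    """Candidate stationary combination beta' z_t:
    y - 10.163*income - 0.0676*interest + 0.2277*vac."""
    return (beta["y"] * y + beta["income"] * income
            + beta["interest"] * interest + beta["vac"] * vac)

# Solving beta' z_t = spread_t for y flips the signs:
#   y_t = 10.163*income_t + 0.0676*interest_t - 0.2277*vac_t + spread_t,
# which differs in sign from reading the eigenvector entries off directly.
print(spread(12.0, 1.0, 5.0, 8.0))
```

If the ADF test was run on a combination built with the unflipped signs on the right-hand side, the resulting series need not be stationary even when the Johansen relation is.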

Thank you for any help!

### Exact value of mean reversion rate knowing terminal value of the process

Thu, 05/16/2013 - 13:20

Suppose you have the following mean-reverting process:

$\text{d}x_{t}=a(\theta-x_{t})\text{d}t$,

where the diffusion term is absent, that is, the process is not stochastic.

Suppose you know the value of $\theta$.

You also know that at time $t=T$ it must be $x_{T}\simeq\theta$.

(That is, $|x_{T}-\theta|$ is so small as to be negligible, because $x_{t}\rightarrow\theta$ as $t\rightarrow\infty$.)

Does a closed form and/or a proxy for $a(\theta,T)$ exist?
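Yes: without the diffusion term the equation is a linear ODE with solution $x_t = \theta + (x_0-\theta)e^{-at}$, so fixing a tolerance $\varepsilon$ for the terminal gap $|x_T-\theta|$ gives the closed form $a = \ln(|x_0-\theta|/\varepsilon)/T$. A sketch verifying this numerically (theta, x0, T, eps are illustrative values):

```python
from math import exp, log

# dx = a*(theta - x) dt solves to x_t = theta + (x_0 - theta)*exp(-a*t),
# so requiring |x_T - theta| = eps (the chosen "negligible" gap) yields
#   a = ln(|x_0 - theta| / eps) / T.
theta, x0, T, eps = 1.0, 5.0, 10.0, 1e-6   # illustrative values

a = log(abs(x0 - theta) / eps) / T

# Verify by evaluating the closed-form solution at t = T:
x_T = theta + (x0 - theta) * exp(-a * T)
print(a, abs(x_T - theta))   # terminal gap equals eps
```

The only free choice is $\varepsilon$: the exact limit $x_T=\theta$ is reached only as $t\to\infty$, so $a$ is necessarily a function of how much residual gap you are willing to call negligible.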

### Is it a random walk?

Thu, 05/16/2013 - 11:36

I would like to ask a question about random walks. Campbell, Lo & MacKinlay defined the random walk in the following way (RW3):

$$cov[f(r_{t}),g(r_{t+k})]=0,\qquad k\neq0$$

for all $f(\cdot)$ and $g(\cdot)$, where $f(\cdot)$ and $g(\cdot)$ are linear functions, and $\{r_{t}\}$ is a series of returns. So, the question is about the following equation: $$r_{t}=\alpha\varepsilon_{t-1}^{2}+\varepsilon_{t},\qquad\varepsilon_{t}\sim IID(0,\sigma^{2}).$$

Is it a random walk or not? And why? (I have an idea, but I don't know if it's true.)
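A simulation illustrates the subtlety (parameters and the choice of Gaussian noise are illustrative). With symmetric innovations, $\mathrm{cov}(r_t, r_{t+k}) = 0$ because the cross term involves $E[\varepsilon_t^3]=0$, so the *linear* autocovariances vanish, consistent with RW3; yet the series is clearly not independent, which a nonlinear transform exposes:

```python
import random

random.seed(42)

# Simulate r_t = alpha * eps_{t-1}^2 + eps_t with Gaussian IID innovations.
alpha, n = 0.5, 100_000
eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
r = [alpha * eps[t - 1] ** 2 + eps[t] for t in range(1, n + 1)]

def corr(xs, ys):
    """Sample correlation of two equal-length sequences."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
    vx = sum((x - mx) ** 2 for x in xs) / m
    vy = sum((y - my) ** 2 for y in ys) / m
    return cov / (vx * vy) ** 0.5

lin = corr(r[:-1], r[1:])                       # linear autocorrelation: ~ 0
nonlin = corr([x ** 2 for x in r[:-1]], r[1:])  # corr(r_t^2, r_{t+1}): ~ 0.25

print(lin, nonlin)
```

Note that the zero-covariance result hinges on $E[\varepsilon_t^3]=0$: IID(0, $\sigma^2$) alone does not guarantee it, and with skewed innovations $\mathrm{cov}(r_t, r_{t+1}) = \alpha E[\varepsilon_t^3] \neq 0$, so even RW3 would fail.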

Thanks

### Add transaction costs to prediction

Thu, 05/16/2013 - 07:12

An algorithm predicts price movement with some certainty and invests proportionally to the confidence level. Predictions range from -1 to +1: -1 means sell for a value of $1, +1 means buy for a value of $1. The profit is then calculated by multiplying the prediction by the relative price movement of the security traded.

Now assume a transaction cost of 0.6%. How does that change the profit the algorithm makes? For now, we only calculate the transaction cost for one cycle, i.e. buy or sell once and, at the next time step, sell or buy again in order to realize the profit.

So to clarify: I have two variables. pred is a prediction ranging from -1 to +1. d_price is the relative price movement of the security; this can be 0.0003 or -0.002 or something similar. You calculate it as d_price = (price_t1 - price_t0) / price_t0.

I have this equation now:

profit = pred * d_price

The algorithm makes two trades. It makes a trade when it makes the prediction at time step t0, then it makes another trade at time step t1 in order to realize a profit. So if it predicts +0.5 and then the relative price movement is +0.01 then the profit it makes is 0.005.

What I'm asking is how this changes when there is a transaction cost of trans = 0.006. The transaction cost is percentage-based, meaning if I buy 1 unit, I will receive only 0.994. Likewise, if I sell 1 unit, I will receive price * 0.994.

profit = f(pred,d_price,trans)

What is f ?

### Why Were SSFs and Futures on Stocks Banned in the US Until Recently [closed]

Thu, 05/16/2013 - 06:39

I have heard that futures on stocks were not allowed in the US until recently. What is the rationale behind this?