This chapter first describes the data used in this study: the Hang Seng Index (HSI), The Hongkong and Shanghai Banking Corporation Holding plc (HSBC Hldgs) and Oriental Press Group Limited. It then presents the forecasting methodologies, which are based on past volatilities and on GARCH-class conditional volatility models. Finally, it explains how the forecasting performances are evaluated in order to select the best models. I begin with the data analyzed in this study.
Global investors treat the Hang Seng Index (HSI) as an indicator of the performance of the Hong Kong stock market. Publicly launched on 24 November 1969, the HSI is one of the earliest stock market indexes in Hong Kong. It is now maintained by Hang Seng Indexes Company Limited, a subsidiary of Hang Seng Bank, which is itself one of the constituent companies. The HSI is a freefloat-adjusted, market-capitalization-weighted stock market index, calculated as

\mathrm{HSI}_t = \mathrm{HSI}_{t-1} \times \frac{\sum_i P_{t,i} \cdot IS_i \cdot FAF_i \cdot CF_i}{\sum_i P_{t-1,i} \cdot IS_i \cdot FAF_i \cdot CF_i}

where P_{t,i} is the current price of constituent i at day t, P_{t-1,i} is its closing price at day t-1, IS_i is the number of issued shares, FAF_i is the freefloat-adjusted factor (between 0 and 1) and CF_i is the cap factor (between 0 and 1), with the sums taken over the constituent stocks.

The index consists of 43 constituent companies, which represent around 60% of the total market capitalization of the Hong Kong Stock Exchange (HKEX). To show the price movements of the major sectors of the market more clearly, the HSI constituent stocks are grouped into four sub-indexes: Properties, Utilities, Commerce and Industry, and Finance. Normally, qualified potential constituents are companies with a primary listing on the Main Board of the HKEX. In recent years, more and more mainland China companies have listed on the HKEX; they can also become qualified potential constituents if they meet several conditions. The company's total ordinary share capital must be in the form of H shares, the shares of companies incorporated in mainland China that are traded on the Hong Kong Stock Exchange, and the company must first complete the process of Share Reform, so that it has no unlisted share capital. As the market capitalization, turnover ranking and financial performance of the companies may change over time, the list of constituent stocks is reviewed quarterly. In this paper, both the daily and the weekly closing Hang Seng Index are used. The data set, ranging from 1 July 1997 to 30 June 2008, is retrieved from Datastream, a U.K.-incorporated data service company. The data set is partitioned into the in-sample estimation period 1997 - 2007 and the out-of-sample forecast period 2007 - 2008. This separation provides 2467 and 522 in-sample observations for the daily and weekly series respectively; for the out-of-sample period, there are 246 daily and 52 weekly observations. To obtain more accurate results, public holidays and special incidents leading to the public announcement of non-trading days, such as the black signal of the Rainstorm Warning System and the no. 8 storm force wind signal of Hong Kong's Tropical Cyclone Warning Signals, are excluded. During this period, quite a few economic events affected the stock market in Hong Kong. Hong Kong was one of the victims of the 1997 Asian Financial Crisis: after the HSI peaked at 16820, the market was attacked by international speculators, leading to a plunge of about 50%. In 1998, the Hong Kong government intervened in the stock market by purchasing constituent shares, which supported the market, and the HSI rebounded to 18000. The rise in both interest rates and the crude oil price burst the dot-com bubble in 2000; the HSI dropped to 14000 and then fluctuated between 14000 and 16000. After the 11 September 2001 attacks, the HSI kept falling, reaching a minimum of 8894. The stock market started to recover after the Mainland and Hong Kong Closer Economic Partnership Arrangement (CEPA) was signed on 29 June 2003. In August 2007, mainland China announced a plan to allow some of its citizens to invest in the Hong Kong market directly, and in October, stimulated by the news of this scheme and of A-H share arbitrage activity, the HSI broke through 30000. Unfortunately, the HSI was then dragged down by falls in major stock markets around the world and dropped 2061.23 points on 22 January 2008. During this period, the average level of the HSI is 14313.39 and the standard deviation is 4563.68. The lowest point in these eleven years was 6660.42, which occurred on 13 August 1998, and the highest was 31638.22, on 30 October 2007.
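As an illustration of the index formula above, the one-step update can be sketched in Python. The constituent names and figures below are made up purely for illustration; they are not actual HSI data.

```python
def hsi_update(prev_index, prev_prices, cur_prices, issued, faf, cf):
    """One-step index update: scale the previous index level by the ratio
    of today's to yesterday's freefloat- and cap-adjusted market caps."""
    num = sum(p * s * f * c for p, s, f, c in zip(cur_prices, issued, faf, cf))
    den = sum(p * s * f * c for p, s, f, c in zip(prev_prices, issued, faf, cf))
    return prev_index * num / den

# Two hypothetical constituents (illustrative numbers only).
new_level = hsi_update(20000.0,
                       prev_prices=[100.0, 50.0],
                       cur_prices=[101.0, 50.0],
                       issued=[1e9, 2e9],
                       faf=[0.5, 0.8],
                       cf=[1.0, 1.0])
print(new_level)
```

Only the first hypothetical constituent moved, so the index rises by that stock's adjusted-cap-weighted price change.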
Besides the HSI, one of the constituent stocks is randomly selected. At the beginning of the selection process, each constituent stock is assigned a number from 1 to 43. A number between 1 and 43 is then randomly generated in Excel, and the company whose assigned number matches the generated number is selected and studied in this paper. The Hongkong and Shanghai Banking Corporation Holding plc (HSBC Hldgs) is the selected constituent company and belongs to the Finance sub-index. HSBC Hldgs is one of the most popular stocks in Hong Kong. The bank was founded in Hong Kong in March 1865 and in Shanghai one month later. Its headquarters were in Hong Kong until 1992, when they moved to London as a condition of the acquisition of Midland Bank in the UK, and also in view of the handover of Hong Kong's sovereignty. Currently, HSBC Hldgs is both the world's largest banking and financial services group and the fifth largest stock on the Hong Kong Stock Exchange (HKEX) by market capitalization. HSBC Hldgs is also listed on the Bermuda, New York, London and Paris stock exchanges. It is a constituent of the FTSE 100 Index and the largest company listed on the FTSE. In 2010, CEO Michael Geoghegan moved to Hong Kong, reflecting HSBC Hldgs' increased focus on Asia. Similarly, both daily and weekly closing prices of HSBC Hldgs from 1 July 1997 to 30 June 2008 are retrieved from Datastream. The data set is again divided into the in-sample estimation period 1997 - 2007 and the out-of-sample forecast period 2007 - 2008, and non-trading days are excluded, so the numbers of observations match the HSI series: 2467 daily and 522 weekly in-sample observations, and 246 daily and 52 weekly out-of-sample observations. During this period, the average closing price of HSBC Hldgs is 96.92 and the standard deviation is 23.26. The lowest closing price in these eleven years was 40.48, which occurred on 21 September 1998, and the highest was 140.586, on 15 October 2007.
In this paper, another stock, which is not a constituent stock, is also studied. It is selected randomly in the same way as the chosen constituent stock. Oriental Press Group Limited, incorporated in Hong Kong, is the selected company. The group publishes the daily newspapers Oriental Daily News and The Sun, as well as The Sun Racing Journal. Oriental Daily News was first published in 1969; today, the paper leads in daily circulation and has a record readership of around 3,100,000. Unlike Oriental Daily News, The Sun targets a younger readership. The Sun has been published since 1999 and can also be found outside Hong Kong: in view of the many Hong Kong people living in North America, and in order to develop The Sun's market, a North American edition is published in New York and Toronto. The Sun Racing Journal was established in 1991 and is one of the major horse racing magazines in Hong Kong. Like the HSI and HSBC Hldgs series, both daily and weekly closing prices of Oriental Press Group Limited from 1 July 1997 to 30 June 2008 are retrieved from Datastream. The data set is again divided into in-sample and out-of-sample periods matching those of the two series above, and the numbers of in-sample and out-of-sample observations for both the daily and the weekly series are equal to those of the HSI and HSBC Hldgs series. During this period, the average closing price of Oriental Press Group Limited is 1.499 and the standard deviation is 0.657. The lowest closing price in these eleven years was 0.533, which occurred on 23 June 1998, and the highest was 3.175, on 14 January 2004.
As this paper studies volatility, I focus on the returns of the HSI and the stocks rather than on the index levels and closing prices. Therefore, a transformation of the data is needed before modeling. The data are transformed into daily returns by taking the first difference of the natural logarithm of the daily index levels and closing prices:

r_t = \ln P_t - \ln P_{t-1}

where r_t is the daily return, P_t is the current price at day t and P_{t-1} is the closing price at day t-1. For the HSI returns, the mean equals 0.00138 and the standard deviation is 0.0176; for the HSBC Hldgs returns, the mean equals 0.00163 and the standard deviation is 0.0169; for the Oriental Press Group returns, the mean equals -0.00032 and the standard deviation is 0.0315.
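The log-return transformation can be sketched in a few lines of Python; the prices below are made-up numbers for illustration, not actual data from the series above.

```python
import math

def log_returns(prices):
    """First difference of the natural log of a price series:
    r_t = ln(P_t) - ln(P_{t-1})."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Hypothetical closing prices for illustration only.
prices = [100.0, 101.0, 99.5, 102.0]
returns = log_returns(prices)
print(returns)  # first element is ln(101/100)
```

Note that the transformation shortens the series by one observation, since the first price has no predecessor.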
A time series can only be estimated with time series models if it is a stationary process, as non-stationary processes cannot be studied directly. If the joint probability distribution of the time series does not change as time moves, the series is said to be stationary, and its mean and variance do not change over time. In contrast, the joint probability distribution of a non-stationary process keeps changing over the period, and its mean and variance vary at different points in time. Usually, however, a non-stationary series becomes stationary after a transformation such as differencing or log-differencing. Accordingly, it is necessary to test whether the data set follows a stationary process before any further analysis. The Augmented Dickey-Fuller (ADF) test is employed in this paper. The ADF test is an improved version of the Dickey-Fuller (DF) test; the null hypothesis of both tests is that the series has a unit root, which means the series is non-stationary. The DF test is based on the simple regression

\Delta y_t = \gamma y_{t-1} + \varepsilon_t

whose test statistic follows the Dickey-Fuller distribution. The hypotheses are H_0: \gamma = 0 (unit root) against H_1: \gamma < 0, and the test statistic is

\tau = \hat{\gamma} / SE(\hat{\gamma}).

The test statistic is then compared with the relevant critical value of the Dickey-Fuller distribution; if the statistic is more negative than the critical value, the null hypothesis is rejected and the series is stationary, and vice versa. The ADF test is based on a more complicated model,

\Delta y_t = \alpha + \beta t + \gamma y_{t-1} + \sum_{i=1}^{p} \delta_i \Delta y_{t-i} + \varepsilon_t

where \alpha is a constant, \beta is the coefficient of a deterministic trend and the summation adds p lags of the autoregressive process.
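The mechanics of the DF statistic can be illustrated with a minimal pure-Python sketch of the simplest variant (no constant, no trend, no lagged differences); in practice one would use a statistics package, which also supplies the Dickey-Fuller critical values. The simulated series below is a hypothetical, strongly mean-reverting AR(1), so the statistic should come out strongly negative.

```python
import math
import random

def df_statistic(y):
    """Dickey-Fuller tau statistic for the no-constant regression
    dy_t = gamma * y_{t-1} + e_t.  H0: gamma = 0 (unit root);
    reject H0 when tau is more negative than the DF critical value."""
    lag = y[:-1]
    dy = [b - a for a, b in zip(y, y[1:])]
    sxx = sum(x * x for x in lag)
    gamma = sum(x * d for x, d in zip(lag, dy)) / sxx
    resid = [d - gamma * x for x, d in zip(lag, dy)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return gamma / math.sqrt(s2 / sxx)

# A stationary AR(1) with coefficient 0.2: clearly no unit root.
random.seed(0)
y = [0.0]
for _ in range(500):
    y.append(0.2 * y[-1] + random.gauss(0, 1))
print(df_statistic(y))  # strongly negative -> reject the unit root
```

For a random-walk series the same statistic would hover near zero, and the null of a unit root could not be rejected.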
The historical mean probably provides the easiest way to forecast the volatility. All the in-sample observations are used and equally weighted:

\hat{\sigma}_{T+h} = \frac{1}{T} \sum_{t=1}^{T} \sigma_t, \quad h = 1, \dots, N,

where T is the number of in-sample observations and N is the number of out-of-sample observations.
Under the moving average method, the forecast of the volatility is given by an unweighted average of the in-sample observations. Unlike the historical mean method, not all the in-sample observations are used: the average is taken over the in-sample observations in a particular time interval of fixed length,

\hat{\sigma}_{t+1} = \frac{1}{T} \sum_{j=0}^{T-1} \sigma_{t-j},

where T is called the moving average period or 'rolling window'. Three different lengths are considered for each frequency. For daily data, three months, six months and one year are chosen; for weekly data, six months, one year and two years are chosen.
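The contrast between the two methods above can be sketched as follows; the volatility series is a made-up example, not data from this study.

```python
def historical_mean_forecast(vols):
    """Historical mean: equally weight every past observation."""
    return sum(vols) / len(vols)

def moving_average_forecast(vols, window):
    """Unweighted average over only the last `window` observations."""
    recent = vols[-window:]
    return sum(recent) / len(recent)

# Hypothetical volatility series for illustration.
vols = [0.010, 0.012, 0.020, 0.018, 0.016, 0.030]
hm = historical_mean_forecast(vols)
ma3 = moving_average_forecast(vols, 3)
print(hm, ma3)
```

Because the rolling window discards old observations, the moving average reacts faster when recent volatility drifts away from its long-run level, as in this example.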
Both the historical mean and the moving average methods assume that the volatility is stable or changes slowly with a trend. If, however, the volatility fluctuates unpredictably, the best forecast of next period's volatility is the current realized volatility, \hat{\sigma}_{t+1} = \sigma_t; this is the random walk method.
Under the exponential smoothing method, the forecast of the volatility is a weighted average of the previous actual observation and the previous forecast,

\hat{\sigma}_{t+1} = (1 - \alpha)\,\sigma_t + \alpha\,\hat{\sigma}_t,

where \alpha is the smoothing factor and must lie between 0 and 1. If \alpha equals zero, the forecast exactly equals the prior actual observation and the exponential smoothing method reduces to the random walk. When \alpha approaches one, the preceding forecast receives most of the weight. The value of the smoothing factor is determined by minimizing the in-sample sum of squared errors. The estimated smoothing factors for the daily data lie between 0.64 and 0.95; for the weekly data, they lie in the range 0.88 to 0.92.
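A sketch of the recursion and of the grid search for the smoothing factor is given below, following the convention above in which \alpha = 0 reduces to the random walk. The grid resolution and the volatility numbers are illustrative assumptions.

```python
def exp_smoothing_forecast(obs, alpha):
    """F_{t+1} = (1 - alpha) * x_t + alpha * F_t; alpha = 0 is the
    random walk, alpha near 1 weights the previous forecast heavily."""
    forecast = obs[0]            # initialise with the first observation
    for x in obs:
        forecast = (1 - alpha) * x + alpha * forecast
    return forecast

def fit_alpha(obs, grid=200):
    """Pick alpha minimising the in-sample sum of squared errors."""
    best = (float("inf"), 0.0)
    for i in range(grid + 1):
        a = i / grid
        f, sse = obs[0], 0.0
        for x in obs:
            sse += (x - f) ** 2
            f = (1 - a) * x + a * f
        best = min(best, (sse, a))
    return best[1]

vols = [0.010, 0.012, 0.020, 0.018]         # hypothetical series
rw = exp_smoothing_forecast(vols, 0.0)      # equals the last observation
print(rw, fit_alpha(vols))
```

With \alpha = 0 the recursion simply passes through the last observation, confirming the random-walk limiting case.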
The exponentially weighted moving average (EWMA) method is similar to the exponential smoothing method, the difference being that a moving average of past volatilities enters the recursion,

\hat{\sigma}_{t+1} = (1 - \lambda)\,\sigma_t + \lambda\,\bar{\sigma}_t,

where \bar{\sigma}_t is the moving average and \lambda is a smoothing factor lying between 0 and 1. Like the exponential smoothing method, the EWMA method reduces to the random walk if the smoothing factor equals zero; as the factor gets closer and closer to one, the moving average is weighted more and more heavily. For the daily data, the estimated factors lie in the range 0.001 to 0.53; for the weekly data, the smoothing factors lie between 0.001 and 0.4.
If the variance of the data is constant, the process is called homoscedastic. However, much real-world time series data has a time-varying variance. In 1982, Engle proposed the autoregressive conditional heteroscedasticity (ARCH) model to deal with such series, and four years later Bollerslev suggested the generalized ARCH (GARCH) model. A GARCH(p, q) model is

r_t = \mu + \varepsilon_t, \qquad \varepsilon_t \mid \Omega_{t-1} \sim N(0, h_t),

h_t = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j h_{t-j},

where \omega > 0, \alpha_i \ge 0 and \beta_j \ge 0. The conditional mean and the conditional variance processes are estimated jointly in the GARCH model. The error term \varepsilon_t follows a normal distribution with zero mean and a variance that varies with time. The sum \sum_i \alpha_i + \sum_j \beta_j indicates the persistence of shocks to volatility; if the sum equals one, the GARCH model becomes the integrated GARCH (IGARCH) model.
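The GARCH(1,1) conditional variance recursion can be sketched as below. The parameter values and the return series are illustrative assumptions; actually estimating the parameters requires a maximum likelihood routine, which is beyond this sketch.

```python
def garch_variance_path(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1},
    with e_t the demeaned return."""
    mu = sum(returns) / len(returns)
    e = [r - mu for r in returns]
    h = [sum(x * x for x in e) / len(e)]   # start at the sample variance
    for t in range(1, len(e)):
        h.append(omega + alpha * e[t - 1] ** 2 + beta * h[t - 1])
    return h

# Hypothetical returns and parameters; alpha + beta = 0.98 implies
# highly persistent (but stationary) volatility.
rets = [0.01, -0.02, 0.015, -0.01, 0.005]
path = garch_variance_path(rets, omega=1e-6, alpha=0.08, beta=0.90)
print(path)
```

Each variance in the path feeds into the next one, which is how a large shock raises volatility for many subsequent periods when \alpha + \beta is close to one.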
The GARCH model assumes that shocks of equal magnitude have the same impact on volatility regardless of their sign, positive or negative. However, shocks of different signs may have different impacts on volatility. The TGARCH model, proposed by Glosten, Jagannathan and Runkle in 1993, is one of several ways to parameterize this idea. The model is expressed as

h_t = \omega + \alpha \varepsilon_{t-1}^2 + \gamma \varepsilon_{t-1}^2 d_{t-1} + \beta h_{t-1},

where d_{t-1} = 1 if \varepsilon_{t-1} < 0 and d_{t-1} = 0 if \varepsilon_{t-1} \ge 0. Consequently, a positive shock has an impact of \alpha on the volatility, while a negative shock has an impact of \alpha + \gamma. If \gamma is greater than zero, a negative shock has a larger impact on the volatility, and vice versa. The sum \alpha + \gamma/2 + \beta quantifies the persistence of shocks.
The AGARCH model is another way to parameterize the idea that shocks of equal magnitude but different sign have different impacts on volatility. The AGARCH model looks quite similar to the TGARCH model and is expressed as

h_t = \omega + \alpha (\varepsilon_{t-1} - \gamma)^2 + \beta h_{t-1}.

If \gamma is a positive number, a positive shock has a smaller impact on the volatility, and vice versa.
The EGARCH model (Nelson, 1991) does not impose the non-negativity constraints on the parameters \omega, \alpha and \beta required by the above three GARCH models. It is also the third asymmetric GARCH model considered in this paper. The model is expressed as

\ln h_t = \omega + \beta \ln h_{t-1} + \alpha \left( |z_{t-1}| - E|z_{t-1}| \right) + \gamma z_{t-1},

where z_{t-1} = \varepsilon_{t-1} / \sqrt{h_{t-1}} is the standardized shock, with E|z_{t-1}| = \sqrt{2/\pi} under normality. A positive value of \gamma indicates that a positive shock has a larger impact on the volatility, and vice versa.
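The one-step variance updates of the three asymmetric models above can be compared side by side in a short sketch; all parameter values are illustrative assumptions, not estimates from this study.

```python
import math

def tgarch_step(h_prev, e_prev, omega, alpha, gamma, beta):
    """GJR/TGARCH: negative shocks receive the extra gamma term."""
    ind = 1.0 if e_prev < 0 else 0.0
    return omega + (alpha + gamma * ind) * e_prev ** 2 + beta * h_prev

def agarch_step(h_prev, e_prev, omega, alpha, gamma, beta):
    """AGARCH: the shock is shifted by gamma before squaring."""
    return omega + alpha * (e_prev - gamma) ** 2 + beta * h_prev

def egarch_step(h_prev, e_prev, omega, alpha, gamma, beta):
    """EGARCH: models log-variance, so no sign constraints apply."""
    z = e_prev / math.sqrt(h_prev)
    ez = math.sqrt(2.0 / math.pi)          # E|z| under normality
    return math.exp(omega + beta * math.log(h_prev)
                    + alpha * (abs(z) - ez) + gamma * z)

# With gamma > 0, a negative shock raises next-period variance more
# than a positive shock of the same magnitude.
p = dict(omega=1e-6, alpha=0.05, gamma=0.10, beta=0.90)
neg = tgarch_step(1e-4, -0.02, **p)
pos = tgarch_step(1e-4, 0.02, **p)
print(neg > pos)  # True
```

Running the same comparison through `agarch_step` and `egarch_step` (with sign conventions as in the text) reproduces the leverage effects described above.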
After the volatility has been forecast, the forecasts need to be evaluated to see how small the forecast errors are. There are several loss functions for the forecast error, and the choice among them depends entirely on investors' preferences.
The mean error (ME), mean absolute error (MAE) and root mean squared error (RMSE) are the simplest and most common ways to measure forecasting performance. The ME can indicate the direction of overprediction or underprediction on average; since errors from overprediction and underprediction offset each other, it is not surprising that the ME statistic is usually the lowest of the three. Unlike the ME, the MAE does not suffer from this offsetting effect. The RMSE is the appropriate measurement if the investor prefers to penalize larger forecast errors more heavily.
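The three symmetric loss functions can be sketched as follows; the actual and forecast values are made-up numbers chosen so that the offsetting effect of the ME is visible.

```python
import math

def me(actual, forecast):
    """Mean error: its sign shows over-/under-prediction on average."""
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean absolute error: no offsetting between error signs."""
    return sum(abs(f - a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error: penalises large errors more heavily."""
    return math.sqrt(sum((f - a) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))

a = [0.010, 0.020, 0.015]   # hypothetical realized volatilities
f = [0.012, 0.018, 0.015]   # hypothetical forecasts
print(me(a, f), mae(a, f), rmse(a, f))
```

Here one overprediction and one underprediction of equal size cancel, so the ME is zero even though the MAE and RMSE correctly register the forecast errors.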
As mentioned before, every investor has an individual preference among loss functions, and investors are likely to have asymmetric rather than symmetric loss functions. Following past research (Pagan and Schwert, 1990; Brailsford and Faff, 1996), the mean mixed error (MME) is employed in this paper. Firstly, the mean mixed error with a heavier penalty on the forecast errors from underpredicting is

MME(U) = \frac{1}{N} \left[ \sum_{o=1}^{O} \left| \hat{\sigma}_o - \sigma_o \right| + \sum_{u=1}^{U} \sqrt{\left| \hat{\sigma}_u - \sigma_u \right|} \right].

Secondly, the mean mixed error penalizing the overpredicted forecast errors more heavily is

MME(O) = \frac{1}{N} \left[ \sum_{o=1}^{O} \sqrt{\left| \hat{\sigma}_o - \sigma_o \right|} + \sum_{u=1}^{U} \left| \hat{\sigma}_u - \sigma_u \right| \right],

where O is the number of overpredictions and U is the number of underpredictions among the N = O + U out-of-sample forecasts. Because the forecast errors are smaller than one, taking the square root inflates them, which is how the chosen side receives the heavier penalty.
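A sketch of the mean mixed error, following the Brailsford and Faff (1996) form cited above, is given below with made-up actual and forecast values.

```python
def mme(actual, forecast, penalise="under"):
    """Mean mixed error.  Forecast errors are assumed smaller than one,
    so the square root inflates them; the chosen side ("under" or
    "over") receives the square-root penalty."""
    total = 0.0
    for a, f in zip(actual, forecast):
        err = abs(f - a)
        under = f < a
        heavy = under if penalise == "under" else not under
        total += err ** 0.5 if heavy else err
    return total / len(actual)

# Hypothetical series with two underpredictions and one overprediction.
a = [0.02, 0.01, 0.03]
f = [0.01, 0.02, 0.025]
mme_u = mme(a, f, "under")
mme_o = mme(a, f, "over")
print(mme_u, mme_o)
```

Since this example contains more underpredictions than overpredictions, MME(U) exceeds MME(O), reflecting the heavier penalty on the underpredicting side.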