Learn advanced forecasting models through a practical course with R statistical software, using S&P 500® Index ETF historical price data. It explores the main concepts from proficient to expert level, helping you achieve better grades, develop your academic career, apply your knowledge at work, or carry out your own advanced investment management or sales forecasting research, all while drawing on the wisdom of the field's best academics and practitioners.
Become an Advanced Forecasting Models Expert in this Practical Course with R
•Read S&P 500® Index ETF prices data and perform advanced forecasting models operations by installing related packages and running script code on RStudio IDE.
•Identify Box-Jenkins autoregressive integrated moving average model integration order through level and differentiated time series first order trend stationary augmented Dickey-Fuller and Phillips-Perron unit root tests.
•Recognize autoregressive integrated moving average model autoregressive and moving average orders through autocorrelation and partial autocorrelation functions.
•Estimate autoregressive integrated moving average models such as random walk with drift and differentiated first order autoregressive.
•Identify seasonal autoregressive integrated moving average model seasonal integration order through level and seasonally differentiated time series first order seasonal stationary Hylleberg-Engle-Granger-Yoo seasonal unit root test.
•Estimate seasonal autoregressive integrated moving average models such as seasonal random walk with drift and general seasonal.
•Automatically select the non-seasonal or seasonal autoregressive integrated moving average model with the lowest information loss criteria.
•Estimate autoregressive fractionally integrated moving average models such as fractional random walk and fractionally differentiated first order autoregressive.
•Evaluate autoregressive integrated moving average models' forecasting accuracy through scale-dependent metrics (mean absolute error, root mean squared error) and scale-independent metrics (mean absolute percentage error, mean absolute scaled error).
•Identify general autoregressive conditional heteroscedasticity modelling need through autoregressive integrated moving average model squared residuals or forecasting errors second order stationary Engle autoregressive conditional heteroscedasticity and Ljung-Box autocorrelation tests.
•Recognize non-Gaussian general autoregressive conditional heteroscedasticity modelling need through the autoregressive integrated moving average and general autoregressive conditional heteroscedasticity model with highest forecasting accuracy residuals or forecasting errors multiple order stationary Jarque-Bera and Q-Q plot normality tests.
•Estimate autoregressive integrated moving average models with residuals or forecasting errors assumed as Gaussian or Student-t distributed and with Bollerslev simple, Nelson exponential or Glosten-Jagannathan-Runkle threshold general autoregressive conditional heteroscedasticity effects, such as random walk with drift and differentiated first order autoregressive.
•Assess the model with highest forecasting accuracy for the strong white noise modelling requirement on its standardized residuals or forecasting errors.

Become an Advanced Forecasting Models Expert and Put Your Knowledge in Practice
Learning advanced forecasting models is indispensable for finance careers in areas such as portfolio management and risk management. It is also essential for academic careers in advanced applied statistics, econometrics and quantitative finance. And it’s necessary for advanced sales forecasting research.
But because the learning curve becomes steeper as complexity grows, this course helps by leading you step by step through advanced forecast modelling of S&P 500® Index ETF historical price data, so you can achieve greater effectiveness.
Content and Overview
This practical course contains 48 lectures and 6.5 hours of content. It's designed for an advanced forecasting models knowledge level; a basic understanding of R statistical software is useful but not required.
At first, you’ll learn how to read S&P 500® Index ETF prices historical data to perform advanced forecasting models operations by installing related packages and running script code on RStudio IDE.
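The setup step can be sketched as follows. This is a minimal, hedged illustration: the package list is typical for this workflow rather than the course's exact list, and `spy_prices.csv` with a `Close` column is a hypothetical file standing in for the course's actual data, so a simulated series is used to keep the sketch self-contained.

```r
# One-time package installation (run manually if needed):
# install.packages(c("forecast", "tseries", "rugarch"))
library(forecast)  # ARIMA estimation, automatic selection, accuracy metrics
library(tseries)   # unit root and normality tests

# Hypothetical file name; the course supplies its own S&P 500 ETF data:
# spy <- read.csv("spy_prices.csv")
# prices <- ts(spy$Close)

# Simulated stand-in so the sketch runs on its own: a random walk with
# drift, mimicking the level behaviour of an equity index ETF price series.
set.seed(42)
prices <- ts(cumsum(rnorm(500, mean = 0.05, sd = 1)) + 100)
plot(prices, main = "Simulated ETF price level")
```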
Then, you'll define Box-Jenkins autoregressive integrated moving average models. Next, you'll identify autoregressive integrated moving average models' integration order through level and differentiated time series first order trend stationary augmented Dickey-Fuller and Phillips-Perron unit root tests. After that, you'll identify autoregressive integrated moving average models' autoregressive and moving average orders through autocorrelation and partial autocorrelation functions. For autoregressive integrated moving average models, you'll define random walk with drift and differentiated first order autoregressive models.

Later, you'll define seasonal autoregressive integrated moving average models. Then, you'll identify seasonal autoregressive integrated moving average models' seasonal integration order through level and seasonally differentiated time series first order seasonal stationary Hylleberg-Engle-Granger-Yoo seasonal unit root test. Next, you'll identify seasonal autoregressive integrated moving average models' seasonal autoregressive and seasonal moving average orders through autocorrelation and partial autocorrelation functions. For seasonal autoregressive integrated moving average models, you'll define seasonal random walk with drift and general seasonal models. After that, you'll automatically select the non-seasonal or seasonal autoregressive integrated moving average model with the lowest information loss criteria. For information loss criteria, you'll define Akaike, corrected Akaike and Schwarz Bayesian information criteria.

Later, you'll define autoregressive fractionally integrated moving average models. Then, you'll identify autoregressive fractionally integrated moving average models' fractional integration order through level and fractionally differentiated time series first order trend stationary augmented Dickey-Fuller and Phillips-Perron tests.
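The identification-and-estimation loop above can be sketched in R. The series is simulated so the block is self-contained, and the specific orders shown are illustrative assumptions, not the course's results:

```r
library(forecast)
library(tseries)

set.seed(42)
prices <- ts(cumsum(rnorm(500, mean = 0.05)) + 100)  # stand-in for ETF prices

# Integration order d: unit root tests on levels vs. first differences.
adf.test(prices)        # H0: unit root -- typically not rejected on levels
pp.test(prices)
adf.test(diff(prices))  # differences reject H0, suggesting d = 1

# AR and MA orders from the correlograms of the differenced series.
acf(diff(prices)); pacf(diff(prices))

# Two of the models named above:
rwd <- Arima(prices, order = c(0, 1, 0), include.drift = TRUE)  # random walk with drift
dar <- Arima(prices, order = c(1, 1, 0), include.drift = TRUE)  # differenced AR(1)

# Automatic order selection by lowest information criterion
# (ic = "aicc" here; "aic" and "bic" are the other options).
best <- auto.arima(prices, ic = "aicc")
summary(best)

# For the seasonal case, one implementation of the
# Hylleberg-Engle-Granger-Yoo test is uroot::hegy.test on a seasonal ts.
```

The drift term matters here: without it, an ARIMA(0,1,0) fit would ignore the series' upward trend.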
Next, you'll identify autoregressive fractionally integrated moving average models' autoregressive and moving average orders through autocorrelation and partial autocorrelation functions. For autoregressive fractionally integrated moving average models, you'll define fractional random walk and fractionally differentiated first order autoregressive models. After that, you'll automatically select the autoregressive fractionally integrated moving average model with the lowest information loss criteria. Later, you'll evaluate autoregressive integrated moving average models' forecasting accuracy. For forecasting accuracy metrics, you'll define the scale-dependent mean absolute error and root mean squared error, and the scale-independent mean absolute percentage error and mean absolute scaled error.
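A sketch of the fractional-integration and accuracy-evaluation steps, again on a simulated stand-in series; `forecast::arfima()` estimates the fractional order together with the autoregressive and moving average orders (via the fracdiff package):

```r
library(forecast)

set.seed(42)
prices <- ts(cumsum(rnorm(500, mean = 0.05)) + 100)  # simulated stand-in

# Hold out the last 50 observations to measure out-of-sample accuracy.
train <- window(prices, end = 450)
test  <- window(prices, start = 451)

# ARFIMA: the fractional integration order d is estimated from the data.
fit <- arfima(train)
fc  <- forecast(fit, h = length(test))

# accuracy() reports scale-dependent (MAE, RMSE) and scale-independent
# (MAPE, MASE) metrics on both the training and the test set.
accuracy(fc, test)
```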
Next, you’ll define general autoregressive conditional heteroscedasticity models. Then, you’ll identify general autoregressive conditional heteroscedasticity modelling need through autoregressive integrated moving average model squared residuals or forecasting errors second order stationary Engle autoregressive conditional heteroscedasticity and Ljung-Box autocorrelation tests. After that, you’ll identify general autoregressive conditional heteroscedasticity model autoregressive and moving average orders through autocorrelation and partial autocorrelation functions. Later, you’ll define autoregressive integrated moving average models with residuals or forecasting errors assumed as Gaussian or normally distributed and with Bollerslev simple, Nelson exponential or Glosten-Jagannathan-Runkle threshold general autoregressive conditional heteroscedasticity effects. For general autoregressive conditional heteroscedasticity models, you’ll define random walk with drift and differentiated first order autoregressive. Then, you’ll evaluate general autoregressive conditional heteroscedasticity models forecasting accuracy.
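The conditional heteroscedasticity diagnostics and estimation above can be sketched with the rugarch package. The simulated series and the GARCH(1,1) order are illustrative assumptions:

```r
library(forecast)
library(rugarch)

set.seed(42)
prices <- ts(cumsum(rnorm(500, mean = 0.05)) + 100)   # simulated stand-in
res <- residuals(Arima(prices, order = c(0, 1, 0), include.drift = TRUE))

# ARCH-effects diagnostics on the squared residuals:
Box.test(res^2, lag = 12, type = "Ljung-Box")  # H0: no autocorrelation
# Engle's LM test is available as FinTS::ArchTest(res, lags = 12).

# Bollerslev's simple GARCH(1,1) with Gaussian errors on the differenced
# series; swap model = "eGARCH" (Nelson exponential) or "gjrGARCH"
# (Glosten-Jagannathan-Runkle threshold) for the asymmetric variants.
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "norm")
fit <- ugarchfit(spec, data = diff(prices))
fit
```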
After that, you'll define non-Gaussian general autoregressive conditional heteroscedasticity models. Next, you'll identify non-Gaussian general autoregressive conditional heteroscedasticity modelling need through the autoregressive integrated moving average and general autoregressive conditional heteroscedasticity model with highest forecasting accuracy residuals or forecasting errors multiple order stationary Jarque-Bera and Q-Q plot normality tests. Then, you'll define autoregressive integrated moving average models with residuals or forecasting errors assumed as Student-t distributed and with Bollerslev simple, Nelson exponential or Glosten-Jagannathan-Runkle threshold general autoregressive conditional heteroscedasticity effects. Later, you'll evaluate non-Gaussian general autoregressive conditional heteroscedasticity models' forecasting accuracy. Finally, you'll assess the model with highest forecasting accuracy for the strong white noise modelling requirement on its standardized residuals or forecasting errors.
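The final diagnostics can be sketched as follows, using a simulated heavy-tailed return series as a stand-in; the Student-t error assumption enters rugarch through `distribution.model = "std"`:

```r
library(rugarch)
library(tseries)

set.seed(42)
returns <- rt(500, df = 5)  # simulated heavy-tailed stand-in for returns

# Normality diagnostics: heavy tails should reject the Jarque-Bera H0
# and bend the tails of the Q-Q plot away from the reference line.
jarque.bera.test(returns)
qqnorm(returns); qqline(returns)

# Same GARCH(1,1) specification as before, but with Student-t errors.
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "std")
fit <- ugarchfit(spec, data = returns)

# Strong white noise check: neither the standardized residuals nor their
# squares should show remaining autocorrelation.
z <- as.numeric(residuals(fit, standardize = TRUE))
Box.test(z,   lag = 12, type = "Ljung-Box")
Box.test(z^2, lag = 12, type = "Ljung-Box")
```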
Use R to work on real-world time series analysis and forecasting examples. Applied data science with R.
★★★★★ 4.6/5 (4,380 students)