Introduction
This code implements a general framework for out-of-sample predictive ability testing and forecast selection when the models may be misspecified. It can be applied to forecasts from both nested and non-nested models, estimated by different techniques, under a general loss function chosen by the user, and it accommodates both conditional and unconditional evaluation objectives. The null hypothesis is H0: E[ Loss(model A) - Loss(model B) ] = 0. The sign of the test statistic indicates which forecast performs better: a positive statistic means the model A forecast produces larger average loss than the model B forecast (model B outperforms model A), while a negative statistic indicates the opposite.
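As an illustration of the unconditional case, the sketch below (not part of this package) computes a Diebold-Mariano-style statistic for H0: E[ Loss(A) - Loss(B) ] = 0. It assumes squared-error loss and, for simplicity, a plain i.i.d. variance estimator rather than the HAC estimator one would normally use for multi-step forecasts; the function and variable names are hypothetical.

```python
import numpy as np

def predictive_ability_tstat(loss_a, loss_b):
    """t-statistic for H0: E[Loss(A) - Loss(B)] = 0.

    Positive value -> model A has larger average loss (B outperforms A);
    negative value -> the opposite. Uses a simple (non-HAC) standard
    error, which is only appropriate for serially uncorrelated loss
    differentials (e.g. one-step-ahead forecasts).
    """
    d = np.asarray(loss_a, dtype=float) - np.asarray(loss_b, dtype=float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Toy example with squared-error loss: model B's forecast is less noisy,
# so the statistic should come out positive (B outperforms A).
rng = np.random.default_rng(0)
y = rng.normal(size=200)                      # realized values
fa = y + rng.normal(scale=1.0, size=200)      # model A forecast (noisier)
fb = y + rng.normal(scale=0.5, size=200)      # model B forecast (better)
t_stat = predictive_ability_tstat((y - fa) ** 2, (y - fb) ** 2)
```

Under the null, the statistic is compared to standard normal critical values, so values well above 1.96 in the toy example would favor model B.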