Improved algorithm for cleaning high frequency data: An analysis of foreign currency


Abstract

High-frequency data are notorious for their noise and asynchrony, which may bias or contaminate the empirical analysis of prices and returns. In this study, we develop a novel data filtering approach that simultaneously addresses volatility clustering and irregular spacing, which are inherent characteristics of high-frequency data. Using high-frequency currency data collected at five-minute intervals, we find substantial microstructure noise coupled with random volatility clusters, and observe an extremely non-Gaussian distribution of returns. To process non-Gaussian high-frequency data for time series modelling, we propose two efficient and robust standardisation methods that cater for volatility clusters, which clean the data and achieve near-normal distributions. We show that the filtering process efficiently cleans high-frequency data for use in empirical settings while retaining the underlying distributional properties.
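To illustrate the general idea of standardising returns for volatility clustering, the sketch below scales five-minute log returns by a lagged EWMA volatility estimate. This is not the authors' algorithm; the function name, the `span` parameter, and the synthetic data are illustrative assumptions only.

```python
import numpy as np
import pandas as pd


def volatility_standardised_returns(prices: pd.Series, span: int = 288) -> pd.Series:
    """Standardise five-minute log returns by a lagged EWMA volatility estimate.

    `span=288` roughly corresponds to one day of five-minute bars; it is an
    illustrative choice, not a value taken from the paper.
    """
    log_ret = np.log(prices).diff().dropna()
    # EWMA of squared returns gives a simple volatility estimate that tracks clusters.
    ewma_var = log_ret.pow(2).ewm(span=span, adjust=False).mean()
    # Lag the volatility by one bar so each return is scaled without look-ahead.
    ewma_vol = np.sqrt(ewma_var).shift(1)
    return (log_ret / ewma_vol).dropna()


# Usage with synthetic five-minute prices (assumed data, for illustration only).
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=2000, freq="5min")
prices = pd.Series(1.10 * np.exp(np.cumsum(rng.normal(0.0, 1e-4, 2000))), index=idx)
z = volatility_standardised_returns(prices)
print(z.skew(), z.kurtosis())  # closer to 0 skew and 0 excess kurtosis after scaling
```

Scaling by a lagged conditional volatility is one common way to move heavy-tailed, clustered returns toward a near-normal distribution before time series modelling; the paper's two proposed standardisation methods are more involved than this sketch.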

Citation (APA)

Jayawardena, N. I., West, J., Li, B., & Todorova, N. (2015). Improved algorithm for cleaning high frequency data: An analysis of foreign currency. Corporate Ownership and Control, 12(3CONT1), 125–132. https://doi.org/10.22495/cocv12i3c1p1
