This article examines the use of artificial intelligence (AI), specifically deep learning, to create financial robo-advisers. These machines have the potential to be perfectly honest fiduciaries, acting in their clients’ best interests without the conflicting self-interest or greed of their human counterparts. However, the application of AI technology to create financial robo-advisers is not without risk, and this article focuses on the unique risks posed by deep learning. One of the main fears regarding deep learning is that it is a ‘black box’: its decision-making process is opaque and not open to scrutiny, even by the people who developed it. This poses a significant challenge to financial regulators, who would be unable to examine the underlying rationale and rules of the robo-adviser to determine its safety for public use. The rise of deep learning has accordingly been met with calls for ‘explainability’ of how deep learning agents make their decisions. This article argues that greater explainability can be achieved by describing the ‘personality’ of deep learning robo-advisers, and it proposes a framework for describing the parameters of a deep learning model in concepts readily understood by people without technical expertise, such as whether the robo-adviser is ‘greedy’, ‘selfish’ or ‘prudent’. Greater understanding will enable regulators and consumers to better judge the safety and suitability of deep learning financial robo-advisers.
Chia, H. (2019). In Machines We Trust: Are Robo-Advisers More Trustworthy Than Human Financial Advisers? Law, Technology and Humans, 1(1), 129–141. https://doi.org/10.5204/lthj.v1i0.1261