Tuning fuzzy controller using approximated evaluation function

Abstract

A fuzzy controller requires a control engineer to tune its fuzzy rules for the problem to be solved. To reduce this burden, we develop a gradient-based tuning method for a fuzzy controller. The method is closely related to reinforcement learning but takes advantage of a practical assumption that enables faster learning. In reinforcement learning, the values of problem states must be learned through many trial-and-error interactions between the controller and the plant, and the controller must also learn the plant dynamics. In this research, we assume that an approximated value function of the problem states can be represented as a function of the Euclidean distance from a goal state and the action executed at that state. Using this function as an evaluation function, the fuzzy controller is tuned to follow an optimal policy for reaching the goal state despite unknown plant dynamics. Our experimental results on a pole-balancing problem show that the proposed method is efficient and effective in solving not only a set-point problem but also a tracking problem.
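
To make the idea concrete, the following is a minimal sketch of gradient-based tuning of a simple fuzzy controller against an assumed evaluation function of the form E(x, a) = d(x)^2 + lam * a^2, where d(x) is the Euclidean distance from the goal state. The controller structure (zero-order Takagi-Sugeno rules with Gaussian memberships), the toy plant, and all parameter names (centers, width, lam, plant_step) are illustrative assumptions, not the paper's actual formulation; the paper's update rule may differ.

```python
import numpy as np

GOAL = 0.0        # goal state for a scalar set-point problem (assumed)
LAM = 0.01        # weight on the action term of the evaluation function
LR = 0.05         # gradient-descent step size

centers = np.linspace(-1.0, 1.0, 5)   # Gaussian membership centers
width = 0.5                            # shared membership width
consequents = np.zeros(5)              # tunable rule consequents

def memberships(x):
    """Gaussian firing strengths, normalized to sum to one."""
    w = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    return w / w.sum()

def control(x):
    """Zero-order Takagi-Sugeno output: weighted average of consequents."""
    return memberships(x) @ consequents

def evaluation(x, a):
    """Assumed evaluation function: squared distance from goal plus action cost."""
    return (x - GOAL) ** 2 + LAM * a ** 2

def plant_step(x, a):
    """Toy plant used only for simulation; the tuner builds no model of it."""
    return x + 0.1 * a

def tune_step(x):
    """One gradient-descent update of the consequents at state x.

    dE/dtheta_i = dE/da * da/dtheta_i, where da/dtheta_i is the normalized
    firing strength of rule i and dE/da is estimated by a finite difference,
    so no explicit plant model is needed.
    """
    global consequents
    a = control(x)
    eps = 1e-3
    e_plus = evaluation(plant_step(x, a + eps), a + eps)
    e_minus = evaluation(plant_step(x, a - eps), a - eps)
    dE_da = (e_plus - e_minus) / (2.0 * eps)
    consequents -= LR * dE_da * memberships(x)
    return plant_step(x, a)

x = 0.8
for step in range(200):
    x = tune_step(x)
print(f"final state: {x:.4f} (goal {GOAL})")
```

Running this drives the state toward the goal as the consequents are tuned online, which mirrors the abstract's setting: the controller improves using only the distance-plus-action evaluation, without learning the plant dynamics.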

Citation (APA)

Naba, A., & Miyashita, K. (2005). Tuning fuzzy controller using approximated evaluation function. In Advances in Soft Computing (pp. 113–122). Springer Verlag. https://doi.org/10.1007/3-540-32391-0_19
