This paper examines numerical issues in computing solutions to networks of stochastic automata. It is well known that when the matrices representing the automata contain only constant values, the cost of the operation basic to all iterative solution methods, the matrix-vector multiply, is given by $\rho_N = \prod_{i=1}^{N} n_i \times \sum_{i=1}^{N} n_i$, where $n_i$ is the number of states in the $i$th automaton and $N$ is the number of automata in the network. We introduce the concept of a generalized tensor product and prove a number of lemmas concerning this product. These lemmas allow us to show that this relatively small number of operations suffices in many practical cases of interest in which the automata contain functional and not simply constant transitions. Furthermore, we show how the automata should be ordered to achieve this.
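The cost $\rho_N$ quoted above comes from the standard technique of applying each automaton's matrix along its own mode of the global state space rather than forming the full tensor-product matrix. The following is a minimal sketch of that idea for the ordinary (constant, non-functional) Kronecker-product case only, not the paper's generalized tensor products; the NumPy setting, the function name kron_vector_multiply, and the example sizes are our own illustrative assumptions.

```python
import numpy as np

def kron_vector_multiply(matrices, x):
    """Multiply x by A_1 (x) A_2 (x) ... (x) A_N without forming the full
    Kronecker product.  Each factor A_i (size n_i x n_i) is applied along
    its own axis of the state space, so the total multiplication count is
    (prod of n_i) * (sum of n_i), i.e. the rho_N of the abstract."""
    sizes = [A.shape[0] for A in matrices]
    # View the vector as an N-dimensional tensor indexed by the local
    # state of each automaton (row-major order matches np.kron).
    tensor = x.reshape(sizes)
    for i, A in enumerate(matrices):
        # Apply A_i along axis i: this realizes the factor
        # I (x) ... (x) A_i (x) ... (x) I of the product decomposition.
        tensor = np.tensordot(A, tensor, axes=([1], [i]))
        # tensordot puts the contracted result's new axis first;
        # move it back to position i.
        tensor = np.moveaxis(tensor, 0, i)
    return tensor.reshape(-1)

# Illustrative check against the explicit Kronecker product (small sizes only).
A = [np.random.rand(n, n) for n in (2, 3, 4)]
x = np.random.rand(2 * 3 * 4)
dense = np.kron(np.kron(A[0], A[1]), A[2]) @ x
assert np.allclose(kron_vector_multiply(A, x), dense)
```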
Fernandes, P., Plateau, B., & Stewart, W. J. (1998). Efficient Descriptor-Vector Multiplications in Stochastic Automata Networks. Journal of the ACM, 45(3), 381–414. https://doi.org/10.1145/278298.278303