Optimizing neural networks on SIMD parallel computers


Abstract

Hopfield neural networks are often used to solve difficult combinatorial optimization problems. Multiple-restarts versions find better solutions but are slow on serial computers. Here, we study two parallel implementations of multiple-restarts Hopfield networks for solving the maximum clique problem on SIMD computers. The first is a fine-grained implementation on the Kestrel Parallel Processor, a linear SIMD array designed and built at the University of California, Santa Cruz. The second is an implementation on the MasPar MP-2 according to the "SIMD Phase Programming Model", a new method for solving asynchronous, irregular problems on SIMD machines. We find that the neural networks map well to the parallel architectures and afford substantial speedups with respect to the serial program, without sacrificing solution quality. © 2004 Elsevier B.V. All rights reserved.
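
The abstract describes multiple-restarts discrete Hopfield dynamics for maximum clique. As a rough illustration only, the Python sketch below implements one standard serial formulation of that idea (binary neurons, asynchronous updates, a penalty on "on" non-neighbors strong enough that stable states are maximal cliques), keeping the best clique over all random restarts. The function and parameter names (`hopfield_max_clique`, `restarts`, etc.) are illustrative assumptions; this is not the paper's Kestrel or MasPar implementation.

```python
import random

def hopfield_max_clique(n, edges, restarts=100, seed=0):
    """Multiple-restarts discrete Hopfield search for a large clique.

    n: number of vertices (0..n-1)
    edges: iterable of (u, v) pairs, undirected, no self-loops
    Returns the largest clique (set of vertices) found over all restarts.
    """
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    best = set()
    for _ in range(restarts):
        # Random initial state: each binary neuron "on" with probability 1/2.
        on = {i for i in range(n) if rng.random() < 0.5}
        changed = True
        while changed:
            changed = False
            for i in rng.sample(range(n), n):  # asynchronous updates, random order
                # Unit self-bias minus a penalty (> 1) for every "on" non-neighbor:
                # the neuron fires iff it is adjacent to all currently-on vertices.
                fires = all(j in adj[i] for j in on if j != i)
                if fires and i not in on:
                    on.add(i)
                    changed = True
                elif not fires and i in on:
                    on.discard(i)
                    changed = True
        # Each stable state of these dynamics is a maximal clique; keep the largest.
        if len(on) > len(best):
            best = set(on)
    return best


if __name__ == "__main__":
    # Toy graph: a 4-clique {0, 1, 2, 3} plus a pendant vertex 4.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
    print(hopfield_max_clique(5, edges, restarts=50))
```

Each neuron flip strictly lowers the underlying energy (reward for being on, large penalty for pairs of "on" non-neighbors), so every restart converges in finitely many sweeps; the parallel implementations in the paper distribute this kind of computation across SIMD processing elements.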

Author-supplied keywords

  • Combinatorial optimization
  • Hopfield neural networks
  • Maximum clique
  • Parallel programming models for irregular problems
  • Single-instruction, multiple-data (SIMD) parallel computer


Authors

  • Andrea Di Blas

  • Arun Jagota

  • Richard Hughey
