Minsky, Chomsky and Deep Nets

Abstract

When Minsky and Chomsky were at Harvard in the 1950s, they started their careers by questioning a number of machine learning methods that have since regained popularity. Minsky's Perceptrons was a reaction to neural nets, and Chomsky's Syntactic Structures was a reaction to n-gram language models. Many of their objections are being ignored and forgotten (perhaps for good reasons, and perhaps not). While their arguments may sound negative, I believe there is a more constructive way to think about their efforts: they were both attempting to organize computational tasks into larger frameworks, such as what is now known as the Chomsky Hierarchy and algorithmic complexity. Section 5 will propose an organizing framework for deep nets. Deep nets are probably not the solution to all the world's problems. They don't do the impossible (solve the halting problem), and they probably aren't great at tasks such as sorting large vectors and multiplying large matrices. In practice, deep nets have produced extremely exciting results in vision and speech, though other tasks may prove more challenging.

Citation (APA)

Church, K. W. (2018). Minsky, Chomsky and deep nets. In Lecture Notes in Computer Science (Vol. 11107 LNAI, pp. 3–14). Springer Verlag. https://doi.org/10.1007/978-3-030-00794-2_1
