Philosophy of Artificial Intelligence

Recent Submissions

  • Item
    Turing on the integration of human and machine intelligence
    (Springer, 2016) Sterrett, Susan G.
    Philosophical discussion of Alan Turing’s writings on intelligence has mostly revolved around a single point made in a paper published in the journal Mind in 1950. This is unfortunate, for Turing’s reflections on machine (artificial) intelligence, human intelligence, and the relation between them were more extensive and sophisticated. In retrospect, they are seen to be extremely well-considered and sound. Recently, IBM developed a question-answering computer (Watson) that could compete against humans on the game show Jeopardy! There are hopes it can be adapted to other contexts besides that game show, in the role of a collaborator with, rather than a competitor to, humans. Another research project, an artificial intelligence program put into operation in 2010, is the machine learning program NELL (Never Ending Language Learning), which continuously ‘learns’ by ‘reading’ massive amounts of material on millions of web pages. Both of these recent endeavors in artificial intelligence rely to some extent on the integration of human guidance and feedback at various points in the machine’s learning process. In this paper, I examine Turing’s remarks on the development of intelligence used in various kinds of search, in light of the experience gained to date on these projects. (A minimal sketch of this kind of human-in-the-loop learning appears after this list.)
  • Item
    Bringing up Turing's 'Child-Machine'
    (Springer, Berlin, 2012) Sterrett, Susan G.
    Turing wrote that the "guiding principle" of his investigation into the possibility of intelligent machinery was "The analogy [of machinery that might be made to show intelligent behavior] with the human brain." [10] In his discussion of the investigations that Turing said were guided by this analogy, however, he employs a more far-reaching analogy: he eventually expands the analogy from the human brain out to "the human community as a whole." Along the way, he takes note of an obvious fact in the bigger scheme of things regarding human intelligence: grownups were once children; this leads him to imagine what a machine analogue of childhood might be. In this paper, I'll discuss Turing's child-machine, what he said about different ways of educating it, and what impact the "bringing up" of a child-machine has on its ability to behave in ways that might be taken for intelligent. I'll also discuss how some of the various games he suggested humans might play with machines are related to this approach.
  • Item
    Too many instincts: contrasting philosophical views on intelligence in humans and non-humans
    (Taylor and Francis Ltd, 2002) Sterrett, Susan G.
    This paper investigates the following proposal about machine intelligence: that behaviour in which a habitual response that would have been inappropriate in a certain unfamiliar situation is overridden and replaced by a more appropriate response be considered evidence of intelligence. The proposal was made in an earlier paper (Sterrett 2000) and arose from an analysis of a neglected test for intelligence hinted at in Turing's legendary 'Computing Machinery and Intelligence'; it was also argued there that this was a more principled test of machine intelligence than straightforward comparisons with human behaviour. The present paper first summarizes the previous claim, then looks at writings about intelligence, or the lack of it, in animals and machines by various writers (Descartes, Hume, Darwin and James). It is then shown that, despite their considerable differences regarding fundamental things such as what kinds of creatures are intelligent and the relationship between reason, instinct and behaviour, all of these writers would regard behaviour that meets the proposed criterion as evidence of intelligence. Finally, some recent work employing logic and reinforcement learning in conjunction with 'behaviour-based' principles in the design of intelligent agents is described, and its significance for the prospect of machine intelligence according to the proposed criterion is discussed. (An illustrative sketch of an agent meeting the proposed criterion appears after this list.)
  • Item
    Nested algorithms and "the original imitation game test": a reply to James Moor
    (Kluwer Academic Publishers, 2002) Sterrett, Susan G.
    In "The Status and Future of the Turing Test" (Moor, 2001), which appeared in an earlier issue of this journal, James Moor remarks on my paper "Turing's Two Tests for Intelligence." In my paper I had claimed that, whatever Turing may or may not have thought, the test described in the opening section of Turing's now legendary 1950 paper "Computing Machinery and Intelligence" is not equivalent to, and in fact is superior to, the test described in a passage that occurs much later in Turing's paper (i.e., in Section 5 of Turing, 1950). I'm pleased Moor chose to give such prominence to my point, and very happy to see that he recognized that my claim was a normative one about the superiority of one test over another, rather than a claim about Turing's intentions. However, as I think the way he describes my point could lead to misunderstandings, I'd like to clarify the points I made. One major clarification is which two tests I am contrasting; another is that the difference in overall structure of the two tests is of philosophical significance.
  • Item
    Turing's two tests for intelligence
    (Kluwer Academic Publishers, 2000) Sterrett, Susan G.
    On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed. (A sketch contrasting the structure of the two tests appears after this list.)
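
The first item above describes Watson and NELL as systems that integrate human guidance and feedback into the machine's learning process. The following is a minimal, hypothetical Python sketch of that human-in-the-loop pattern; the extraction rule, the function names, and the simulated 'human' oracle are illustrative assumptions, not the actual Watson or NELL algorithms.

    import re

    def extract_candidates(text):
        """Propose candidate category 'facts' from 'X is a Y' sentences (toy rule)."""
        return set(re.findall(r"(\w+) is a (\w+)", text))

    def human_review(candidates, oracle):
        """Stand-in for periodic human feedback: keep only the approved candidates."""
        return {fact for fact in candidates if oracle(fact)}

    def learn(pages, oracle, knowledge=None):
        """'Read' pages one after another, folding human-vetted facts into the knowledge base."""
        knowledge = set() if knowledge is None else knowledge
        for page in pages:
            candidates = extract_candidates(page) - knowledge
            knowledge |= human_review(candidates, oracle)
        return knowledge

    pages = [
        "A sparrow is a bird. A penguin is a bird.",
        "A whale is a fish. A whale is a mammal.",   # one candidate fact is wrong
    ]
    # The 'human' is simulated here by a rule that rejects the known error.
    approved = learn(pages, oracle=lambda fact: fact != ("whale", "fish"))
    print(sorted(approved))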
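
The abstract of 'Too many instincts' proposes that overriding an inappropriate habitual response in an unfamiliar situation counts as evidence of intelligence. Below is a hedged toy sketch of an agent organized around that idea; the situations, the habit table, and the deliberation rule are invented for illustration and do not reproduce the 'behaviour-based' agent designs the paper discusses.

    # Illustrative habit table: cue-driven responses that are sensible at home.
    HABITS = {
        "smoke detected": "open a window",
        "doorbell rings": "open the door",
    }

    FAMILIAR_SITUATIONS = set(HABITS)

    def habitual_response(situation):
        """Cue-driven habit: fires whenever a known cue appears in the situation."""
        for cue, action in HABITS.items():
            if cue in situation:
                return action
        return None

    def deliberate(situation, goal):
        """Fallback reasoning for unfamiliar situations (a toy lookup standing in for planning)."""
        considered = {
            "smoke detected on an aeroplane": "alert the crew",
            "doorbell rings during a burglary": "stay hidden",
        }
        return considered.get(situation, "gather more information")

    def act(situation, goal="stay safe"):
        habit = habitual_response(situation)
        # On the proposed criterion, the mark of intelligence is that in an unfamiliar
        # situation an inappropriate habit is suppressed and replaced by a response
        # better suited to the situation as a whole.
        if habit is not None and situation in FAMILIAR_SITUATIONS:
            return habit
        return deliberate(situation, goal)

    print(act("doorbell rings"))                   # habitual: 'open the door'
    print(act("smoke detected on an aeroplane"))   # habit overridden: 'alert the crew'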
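
Finally, the last two items turn on a structural difference between Turing's two tests: in the original imitation game, the machine's rate of success at impersonating a woman is compared with a man's rate of success at the same task, so a human's performance sets the benchmark without behavioural similarity being the criterion, whereas in the now-standard 'Turing Test' the interrogator tries to tell machine from human directly. The sketch below only mirrors that structure; the interrogator is a random stub and every name is a placeholder, not a workable protocol.

    import random

    def interrogator_fooled(contestant_answers, genuine_answers):
        """Placeholder interrogator: True if the contestant is mistaken for the genuine party."""
        return random.random() < 0.5  # a real interrogator would compare the two transcripts

    def impersonation_success_rate(contestant, woman, trials=1000):
        """How often a contestant fools the interrogator in the gendered guessing game."""
        wins = sum(interrogator_fooled(contestant(), woman()) for _ in range(trials))
        return wins / trials

    def original_imitation_game_test(machine, man, woman):
        """First test: the machine passes if it impersonates the woman at least as well
        as the man does; the man's performance sets the benchmark."""
        return impersonation_success_rate(machine, woman) >= impersonation_success_rate(man, woman)

    def standard_turing_test(machine, human, trials=1000):
        """Second ('Turing Test') reading: the machine passes if the interrogator cannot
        reliably tell machine from human."""
        mistaken = sum(interrogator_fooled(machine(), human()) for _ in range(trials))
        return abs(mistaken / trials - 0.5) < 0.05

    # Placeholder contestants: each returns a transcript of answers.
    machine = man = woman = human = lambda: "answers to the interrogator's questions"
    print(original_imitation_game_test(machine, man, woman))
    print(standard_turing_test(machine, human))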