
Intelligence has always been artificial or at least artefactual.

I (Keith O’Rourke) thought I would revisit a post of Andrew’s on artificial intelligence (AI) and statistics. The main point seemed to be that “AI can be improved using long-established statistical principles. Or, to put it another way, that long-established statistical principles can be made more useful through AI techniques.” The points I will try to make here are that AI, like statistics, can be (and has been) improved by focusing on representations themselves. That is, focusing on what makes them good in a purposeful sense and on how we and machines (perhaps only jointly) can build more purposeful ones.

To start, I’ll suggest the authors of the paper Andrew reviewed might wish to consider Robert Kass’ arguments that it is time to move past the standard conception of sampling from a population to one more focused on the hypothetical link between variation in data and its description using statistical models. Even more elaborately, on the hypothetical link between data and statistical models, where here the data are connected more specifically to their representation as random variables. Google “rob kass pragmatic statistics” for his papers on this as well as reactions to them (including some from Andrew).

Given my armchair knowledge of AI, the focus on representations themselves and how algorithms can learn better ones first came to my attention in a talk by Yoshua Bengio at the 2013 Montreal JSM. (The only reason I went was that David Dunson had suggested to me that the talk would be informative.) Now, I had attended many of Geoff Hinton’s talks when I was in Toronto, but never picked up the idea of learning ways to represent rather than just predict. Even in the seminar he gave to a group very interested in the general theory of representation – the Toronto Semiotic Circle in 1991 (no, I am not good at remembering details – it’s on his CV). Of course this was well before deep neural nets.

So what is my motivation and sense of purpose for this post? The so what? Perhaps, as Dennett put it, to “produce new ways of looking at things, ways of thinking about things, ways of framing the questions, ways of seeing what is important and why” [The Intentional Stance. MIT Press, Cambridge (1987)]. For instance, at 32:32 in Yann LeCun – How Does The Brain Learn So Quickly? there is a diagram depicting reasoning with World Simulator -> Actor -> Critic. Perhaps re-ordering as Critic -> Actor -> World Simulator, re-expressed as Aesthetics -> Ethics -> Logic, and then expanded as: what you value should set out how you act, and that, in turn, how you represent what to possibly act upon – would be insightful. Perhaps not. But even Dennett’s comment does seem to be about representing differently to get better, more purposeful representations.

Of course, there will be the expected, if not obligatory, “what insight can be garnered from CS Peirce’s work” in this regard.

p.s. A variety of comments suggest some further clarification. By claiming artificial intelligence has always been part of human reasoning, I was trying to deflate the hype rather than add to it. When I started this post, I was very wary of claims of AI learning ways to represent, or even representing at all. However, I came to think it was more reasonable not to speculate or argue for any limits to what machines could do, nor what things could or could not actually represent. Peirce did argue that signs stand for something to someone or something. For instance, I do think it is accepted that bees represent where they obtained the nectar to other bees. However, the current differences between human and machine representation that were pointed to below, and more fully in some of the links, are huge. Given this, my primary interest is in “how we and machines (perhaps only jointly) can build more purposeful ones”. For instance, I believe autonomous driving vehicles are a wasteful distraction that is likely delaying the development and deployment of life-saving computer-assisted driving aids.

It turns out Peirce thought AI was an important topic. “Precisely how much of the business of thinking a machine could possibly be made to perform, and what part of it must be left for the living mind, is a question not without conceivable practical importance; the study of it can at any rate not fail to throw needed light on the nature of the reasoning process.” (Logical Machines, 1887)

Now, he argued that much of representing is done outside one’s mind, using artefacts, paper and machines. As Steiner put it, “for Peirce, artefacts and machines can be parts of cognitive processes: a logical argument concerning the semiotic [representing] character of mental phenomena, and (especially) a functional argument on the constitutive role of these machines and artefacts for human intelligence.” That is, representing, along with critical thinking, has for a very long time been made more useful through AI-like techniques. Today, I believe some are even speculating that representing was first done outside the (pre)human’s mind, using artefacts that may then have led to the development of language and internal representations in the mind.

The main take-home point perhaps being that externalizing the representations allows us to stare at them, if not them to stare back at us. And publicly, so others can also stare and perhaps point out how the representations could be made less wrong. That is, we (or some of us) treat representations as thinking devices to be tried out and reworked – controlled – to better address purposes. The most important purpose being to represent reality so as to be able to act without frustration, because the representation somehow connects us with that reality we have no direct access to.

Perhaps also the main point of this blog?

OK, if really interested – read Steiner’s paper. Here are a couple of quotes.

“The idea that the cognitive processes of individuals may extend beyond their skin and skull, as they are notably composed of, constituted by, or spatially distributed over the manipulation, use or transformation of artefacts and machines was already suggested by Peirce. But not only: I now want to show how Peirce’s philosophy is very relevant if we want to inquire about the differences between the reasoning abilities of machines and human intelligence, in a framework in which human intelligence is notably made of machines, symbols, and their use.”

“The difference between human reasoning and machine reasoning is basically related neither to consciousness nor to originality, but to the degrees of control, purpose, and reflexivity human reasoning can exhibit … Human intelligence – including how we acquire and exercise self-control, purpose, and reflexivity – is basically made up of exo-somatic artefacts (including representational systems) and their use”.

  1. Corey says:

    “…that externalizing the representations allow as to stare at them, if not them stare back out us.”

    I’m staring, but I’m not comprehen– oh, oh, wait, it’s just typos.

    “…that externalizing the representations allow us to stare at them, if not them to stare back at us.”

  2. Sounds like a guy’s perspective. LOL kinda kidding. I would speculate that AI is amenable to degrees of control, purpose, etc. But that focus on AI constrains creativity, which is why creativity has become a premium today. And why originality is also near extinction.

    • Keith O'Rourke says:

      Well from Steiner – “machines are machines because we do not want them to be original! If an automatic system were to display originality in the production of its behaviour – or, more precisely, too many unpredictable reactions – we would not call it or use it as a “machine” anymore!”

      I don’t think Peirce argued creativity would always be beyond machines but there was this point he did make “The secret of all reasoning machines is after all very simple. It is that whatever relation among the objects reasoned about is destined to be the hinge of a ratiocination, that same general relation must be capable of being introduced between certain parts of the machine.” (Similar I think to the point Corey makes below.) So we would need that creativity be capable of being introduced between certain parts of the machine.

      But then, I am more interested in what can be done jointly.

    • Keith O’Rourke says:

      On second thought: Silly reader, creativity is only for the confused ;-)

      Though maybe the words control and purpose were taken in ways other than intended – creativity is of “some definite human purpose”.

      [CS Peirce] invented the name pragmatism. Some of his friends wished him to call it practicism or practicalism … but … along with nineteen out of every twenty experimentalists who have turned to philosophy … praktisch [practical] and pragmatisch [pragmatic] were as far apart as the two poles, the former belonging in a region of thought where no mind of the experimentalist type can ever make sure of solid ground under his feet, the latter expressing relation to some definite human purpose.

      What Pragmatism Is, The Monist, 15:2 (April 1905), pp. 161–181.

  3. If I may offer a variation on Peirce: Precisely how much of the business of lifting a machine could possibly be made to perform, and what part of it must be left for the living body, is a question not without conceivable practical importance; the study of it can at any rate not fail to throw needed light on the nature of leverage.

    Or, to use my favorite example, the idea that a machine can “represent” makes sense only if we’re willing to grant that a cardboard box can “add” because if we put two apples in there, and then another two apples, there are in fact four apples in the box. I’m not willing to grant that this constitutes being able to add.

    Machines carry out physical operations and sometimes we use physical operations to think through a problem. Long division is assisted by a piece of paper where we keep track of the process. But no one has ever proposed that the piece of paper displays intelligence on its own.

    So, when Peirce asks what part of “the business of thinking” can be given to a machine, he’s really just talking about helpful devices like an abacus, which is to calculating as a lever is to moving a boulder. At no point will machines do anything other than provide leverage for our own thoughts. This is because in order to represent something we have to first imagine it, and imagination requires us to bring thoughts into contact with feelings. We don’t ask precisely how much of the business of feeling a machine could possibly be made to perform because the whole point of the lever is that it does its part without feeling the weight. That’s its utility to us.

    The idea of “intelligent” machines has caused a lot of confusion. I don’t think any fancy statistical footwork will save it from nonsense.

    • Corey says:

      Can we upgrade that to: the idea that a machine can “represent” makes sense only if we’re willing to grant that a calculator can “add” because its causal structure is isomorphic to the axioms of arithmetic truncated at some upper limit of representation.
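      (Corey’s “truncated at some upper limit of representation” can be made concrete with a minimal sketch — the function name, the 8-bit limit, and the wrapping behaviour are illustrative assumptions, not anything stated in the thread.)

```python
# A toy "calculator" whose causal structure matches the axioms of
# arithmetic only up to an upper limit of representation: it behaves
# like an 8-bit register, wrapping on overflow.

LIMIT = 2 ** 8  # 256 distinct representable states

def add_truncated(a: int, b: int) -> int:
    """Addition as a fixed-width physical register would carry it out."""
    return (a + b) % LIMIT

# Below the limit, the machine tracks arithmetic exactly...
print(add_truncated(2, 2))      # 4
# ...past the limit, the isomorphism with the axioms breaks down.
print(add_truncated(200, 100))  # 44, not 300
```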


      • I’m trying to imagine the intermediate step: the idea that a machine can “represent” makes sense only if we’re willing to grant that a cardboard box can “add” because its causal structure is …

        and I wonder if we can’t just keep going:

        … isomorphic to the axioms of arithmetic truncated at some upper limit of representation.

        The thing is that any physical fact (like a box of apples) is “isomorphic to the axioms of arithmetic” and, indeed, first-order propositional logic, simply because two things can’t be in the same place at the same time, and nothing can be somewhere else without being moved. That is, again, if you put two apples in a box, leave them there, and put in another two, then the box contains four apples. Of course, what you did was add those two apples. You did it.

        • Corey says:

          Our brains are made up of components whose operation, at the lowest level, we understand to be carrying out computations via neural processing. I’m basically asserting substrate independence — whatever it is that our brains are doing that we call “representation” can, in principle, be done by a physical system that we construct that has the same kind of causal structures that we’ve got squishing along in our noggins.

  4. Thomas Basbøll says:

    I’m curious to know what evidence you believe we have for thinking of the brain as engaged fundamentally (“at the lowest level”) in computation. But, more interestingly, I’m curious to know what you mean by “carrying out computations”. The point of my box of apples is that, though it’s clearly a description of a physical process, we wouldn’t say that the box is computing anything. The same goes for an abacus. In both cases, it is the operator that is using the machine to assist in a computation. The machine itself is not, properly speaking, a “computer”. That, too, is just a kind of hype.

    My view is, first, that the brain isn’t fundamentally a computer. Computing is just one of the things we do (probably using more than just our brain, by the way) and it’s one of the more trivial things. Second, representation is a prerequisite for computation and representation requires imagination, which is not a purely cognitive operation. It requires feeling. (An image is a coordination of thought and feeling, as anyone can discover in their own experience.)

    Finally, no mechanical process actually “computes” anything in the sense of manipulating representations that reliably predict the results of physical operations, like how many apples there will be in a box after you put a certain amount in it. A machine is merely a reliable physical process. We can build one process to simulate another. And it may be faster and cheaper than the first. We, not the process, then use the one process to represent an aspect of the other.

    For example, instead of moving thousands of apples around in boxes and counting how many I end up with in one of them, I can move some beads around on an abacus and work out the result without doing all that work. It’s me, not the abacus, that represents the apples with the beads. I have never met an electronic computer that has convinced me it is capable of more qua representation and computation. It remains a more sophisticated abacus, a box of apples.

    • Jens Åström says:

      To me, these types of arguments use a strawman type of computing device. Sure, an abacus cannot represent/use semantics, but that doesn’t show that no computer ever could. It’s a bit like how John Searle’s Chinese room is a strawman version of a computer that “speaks” Chinese. But this debate seems unsolvable through argument, given its long history. Someone will have to show the workings of the brain in practice for it to die.

      I’m betting my money that it is possible to build a “computer” (meaning a device that performs logical operations on sensory data) that is able to represent and “think” much like we humans do, and that when all the various functions and self-reflecting routines are there, there won’t be any magical stuff left to explain.

      This, from my armchair…

      • Interesting that you invoke “magic stuff”. I would counter that the AI hype precisely black-boxes those “various functions and self-reflecting routines”, hiding them from view like a professional magician, so that the machine appears intelligent but is really channeling the programmer’s intelligence. Getting a machine to appear intelligent is like making it look like I can levitate. In the first case I have to hide the intervention of human agency that explains everything; in the second case I have to hide the mechanism of leverage.

        Imagine someone betting that one day a machine will be built that can read minds (i.e., discern thoughts in brains). Your bet has the same odds. In fact, I think they are exactly equally likely. My money says the probability of both is zero. If I were a betting man I would bet any money at any odds and allow any amount of time. It will never happen.

    • Corey Yanofsky says:

      I’m curious to know what evidence you believe we have for thinking of the brain as engaged fundamentally (“at the lowest level”) in computation.

      Well, the evidence *I* have is many years of academic training in relevant disciplines. To start with, there was a B.Sc. in biochemistry; electrochemistry of the nervous system formed an appreciable part of the syllabus. Voltage-gated ion channels, action potentials, synapses and dendrites and axons, oh my. Neurons and glia, neurotransmitters and neuroreceptors, agonists and antagonists (but no protagonists for some reason). Signal transmission in neurons is very well understood; the reason an artificial neural net’s “neurons” do what they do — take a bunch of transformed input values and output the sum — is because this is in essence what biological neurons do (only real neurons do it with spike train firing rates instead of continuous signals). After that, there was the doctorate in biomedical engineering. In my graduate coursework I learned that “neural circuitry” isn’t just figurative language but rather literally how various sub-components of the nervous system (composed of aggregations of neurons) are understood to perform signal processing. The basic math is the same as that used in mechanical and electrical engineering to design and analyze the properties of electronic and electromechanical devices such as sensors and actuators. You can look up the vestibulo-ocular reflex for an example of the kinds of systems that can be understood in this way.

      And here’s the thing: in terms of information processing, there’s no other kind of thing but neurons in the brain. We know and have known for decades how the “transistors” of the brain work on the sorts of time scales at which thought takes place. You say, “A machine is merely a reliable physical process.” I’m here to tell you that your brain is made up of components that, at the lowest level, are also simply reliable physical processes. The mystery is in how to hook them together to do the sort of information processing that the human brain does.
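      (The abstraction Corey describes — “take a bunch of transformed input values and output the sum” — can be sketched in a few lines. A minimal illustration with made-up weights and inputs; the logistic squashing stands in, very loosely, for a bounded firing rate.)

```python
import math

# A single artificial "neuron": weight its inputs, sum them, and squash
# the total into a bounded output -- a crude stand-in for a biological
# neuron's firing rate. The weights and inputs below are made-up numbers.

def neuron(inputs, weights, bias=0.0):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic activation

print(neuron([1.0, 0.5], [2.0, -1.0]))  # weighted sum 1.5 -> about 0.82
```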

      • Corey Yanofsky says:

        (It’s funny that your objection is about the triviality of the process of moving apples into and out of boxes. A wave in the ocean can travel far but its propagation doesn’t require the actual molecules of water to move very far; in a similar fashion, the propagation of electric signals along neurons doesn’t require the ions carrying the electric charge to move very far either. When the electric signal moves along a neuron the ions are moving back and forth across the cell membrane at a single location — rather like a bunch of apples being put into and taken out of boxes.)

      • Thomas Basbøll says:

        Keep in mind that I don’t think even a computer is best understood as computational. That’s what my box of apples is for. I understand how an electric circuit works and how it can be designed to add numbers. But I don’t believe that a circuit ever actually represents something and carries out a computation; what it does, as you say, is transform inputs into outputs. Until we interpret them as such, these are neither “values” nor “sums” but charges. The circuit is a physical process but so too the box of apples. Intelligence is something different.

        I’m prepared to accept that the brain is a physical process. What I’m not prepared to accept is that a circuit board is somehow a better model for the totality of that process (i.e., what goes on in the brain) than, say, a kitchen sink. It seems implausible to me that the invention of electronics somehow gave us a complete conceptual model for how the brain produces consciousness, letting us do away with everything we thought we knew about the “soul” until then, including, say, “the anatomy of melancholy”. All that careful first-person observation of the mind and heart by philosophers and poets should be discarded because someone figured out how to make a series of lights go on and off when you throw switches and turn knobs? Why? Why even think that “signal processing” captures the essence of what the brain does?

        At the end of the day, I simply have no empathy with computers. They don’t seem to be doing anything like what I’m doing when I’m thinking or writing. They’re not even doing what I’m doing when I’m calculating, computing. I have great respect for electrical engineers, though.

        • Corey Yanofsky says:

          I’m not really interested in wrangling about the definitions of representation, or intelligence, or consciousness. It’s enough that human brains are capable of them, whatever they are, and also that human brains are highly structured agglomerations of a basic component that transforms inputs to outputs. You assert that transforming inputs to outputs shouldn’t be understood as “carrying out a computation.” Well that’s fine too — I’m not interested in wrangling about the definition of computation either.

          But you want to start with the premise “I simply have no empathy with computers — they don’t seem to be doing anything like what I’m doing when I’m thinking or writing,” and somehow draw the conclusion that, “At no point will machines do anything other than provide leverage for our own thoughts… because in order to represent something we have to first imagine it, and imagination requires us to bring thoughts into contact with feelings” and this is just an argument from personal incredulity.

          • Thomas Basbøll says:

            If by personal incredulity you mean that I don’t believe the hype then, yes, you’ve got it exactly right.

            I’m really just saying that the invention of the digital computer gives me no better reason to believe that we’ll one day invent a “thinking machine” — i.e., that we’ll build something with its own cognitive agency — than the abacus did. I’m sure that belief in golems was encouraged by increasingly realistic statues. It became easier and easier to imagine them coming alive. But of course granite statues were not a step toward androids – not even a small one. A lever is more like a robot than a statue, and a statue is as much like a human being as a robot, i.e., it only looks like one in a certain set of ways.

            • Corey Yanofsky says:

              I’m really just saying that the invention of the digital computer gives me no better reason to believe that we’ll one day invent a “thinking machine” — i.e., that we’ll build something with its own cognitive agency — than the abacus did.

              To me it seemed you were making a rather stronger claim than this one — that you were saying there’s something that we can do that a machine never can, even in principle, a thing you described with the phrase “bring thoughts into contact with feelings” (I don’t really know what this actually means) and called a prerequisite to imagination. If this is not the claim you meant to make, or if at this time you don’t want to defend it, then I’m left with no objection… but with one comment. The relevant comparison that bears on the question of whether it’s possible in principle (and, as a consequence, whether we’ll be able) to build a machine with <insert human cognitive capacity here> isn’t abacus to digital computer but rather brain-constituents to reliable-mechanisms-in-general.

              • Thomas Basbøll says:

                I’m saying something like this: there’s something that I can do, which I call “thinking” and, in my experience, it is essentially bound to something else I do, namely, “feeling”. I do these things, always, together in something I call my “imagination”. I gather from the history of philosophy and poetry that I’m not alone in having an imagination, and I recognize my own experiences in the writings of others, though we’re obviously all struggling to really understand what’s going on. The idea of “pure thought”, i.e., without feeling, and not carried out in imagination, is nonsensical to me. But I guess I should admit I just can’t imagine it.

                When I say “I have no better reason to believe” in AI on the basis of the existence of computers than the existence of the abacus, I do ultimately mean that this thing I do is not something I can imagine a machine ever doing. Think of the abacus as being to the computer as a block of clay to a sculpture of a human figure. Now imagine someone saying, “See? It’s coming along. Soon we’ll breathe life into this thing and we’ll have a full-blown golem!” My point is: we could have just as plausibly said that about the unformed block of clay. The statue is not more “lifelike” in the sense we need, i.e., it is not more “animate”.

                Do I think people can do things that clay can’t do “in principle”? Yes, I do. Does that logically rule out some weird magic clay that gets up and walks around and talks like a person? Not strictly, I guess. But I don’t think the burden of proof is on me. I can’t prove that a rock can’t think either. But why would I ever need to? No rock has ever seemed even a little thoughtful. The same goes for every computer I’ve ever seen.

              • Corey Yanofsky says:

                I originally replied with a link to Terry Bisson’s short story “Meat”, but I guess the spam filter ate it. It’s short and worth googling and reading just for the entertainment value, and it’ll probably do a better job of conveying what I mean by “substrate independence” than I can do with any amount of blather.

              • Corey Yanofsky says:

                er, it’s very short

              • Thomas Basbøll says:

                I don’t think intelligence is substrate independent.

              • Keith O’Rourke says:

                Comments from meat are immediately eaten by the spam filter – no exceptions!

      • Christian Hennig says:

        Terms like “computation” and “representation” are assigned to phenomena by human beings, and human beings have a choice of framework and conception when communicating and thinking about phenomena (which, when it comes to data analysis, is one of the points of the original posting).

        Do computers “represent”? Actually, A does not represent B generally and objectively; A represents B in the view of a certain observer (who may or may not be a user of that representation). One can legitimately say that the electric circuits in a computer represent something for the computer scientist who uses and understands the workings of the computer. This doesn’t mean that the computer represents something *in its own view*, which is my interpretation of why Thomas is writing that the computer doesn’t represent. Does the computer think, does it compute? Again, I’d think these can only be answered relative to the observer, and one may hold that the computer doesn’t qualify as an observer itself, but this, too, is debatable and ultimately observer-dependent.

        I am wary of the AI hype as well but I think that “the computer is just like a cardboard box” vs. “the human mind is just like a computer” won’t settle this controversy.

        By the way, “the computer is just like a cardboard box” vs. “the human mind is just like a computer” are themselves issues of representation. All representations are essentially different from what they’re representing, so the question shouldn’t really be which one of these is “true” but rather: How are these representations used, by what observer, are they fit for purpose, and perhaps: is the purpose a good one?

        • Thomas Basbøll says:

          I think we agree about this. It’s likely that it’s only possible to represent something if one is capable of representing oneself. The only purpose of my cardboard-box analogy is to challenge people to explain how digital computers are somehow more “promising”. The answer always turns out to be science fiction in which an intelligent black box is installed in an otherwise inanimate husk. Without that device it’s just an ordinary robot.

  5. Dale Lehman says:

    While I find the subject fascinating, and these contributions beyond what I could say, I don’t find this discussion to be focused on the most important issues. It is far too theoretical for me. Clearly, computers have been programmed to do things beyond what we had envisioned even a few years ago. They can “learn” the rules of a game and then successfully win those games. They have clearly been programmed to do this, so in a sense, they are not displaying thinking or imagination. But I’ve also seen a remarkable video recently showing elephants in Japan doing calligraphy. Truly remarkable. Perhaps they have been “taught” to do this or perhaps they are displaying their own intelligence. Issues such as whether the machines/elephants display “thinking” or “imagination” are ultimately very important, but I’m not sure they can be resolved – and I’m pretty sure I won’t be contributing to that understanding. At some point, it becomes impossible for practical human beings to tell the difference. We might as well refer to them as displaying “intelligence” because most of us can’t tell whether that is true or not.

    What I think is more important is human behavior in this world. Surely there is something to be gained by addressing how humans should treat or coexist with elephants (not to mention each other). Similarly, there is something to be gained by addressing how humans coexist with computers. We are well past the point at which this issue should have been addressed. Many people (and much science fiction) have addressed visions of the world with “intelligent” computers/machines, but the moral issues are not being addressed quickly enough to be meaningful. As algorithmic decision making takes over more and more of our lives, we are hardly posing the right questions, not to mention getting any practical answers. I fear that there is little control that humans can exercise that can be effective any more. And this discussion about whether or not the machines are really displaying anything like “intelligence” doesn’t seem to be getting at the heart of the issue. Perhaps I am wrong and someone can convey how this discussion is anything more than a diversion from the real issues posed by “artificial intelligence.”

    • Thomas Basbøll says:

      I think the real issues posed by information technology are like the issues of nuclear power and nuclear weapons. Computers make us able to do things we once couldn’t. But there are dangers. And there are questions about who benefits and who suffers as a result.

      There are meaningful questions about the ethical treatment of animals that spring, I think, from our natural empathy with them. We demean ourselves when we treat them cruelly. But there are no similar questions about the ethical treatment of computers and raising them is probably a diversion from the political questions about who should control and benefit from information technology.

      I don’t think it’s always insincere – the confusion is sometimes genuine – but it’s a mistake to think that there is some interesting confrontation coming between humans and (“intelligent”) machines. It will be a proxy war between the humans who control the machines and the humans who are controlled by them.

      • Dale Lehman says:

        Perhaps more to the point, the war between humans who “think” they control the machines and the humans who are controlled by them. Those who think they are in control have sufficient control to unleash the algorithms but often do not fully understand the consequences. I think this is where we are with AI today. Their lack of understanding does not prevent them from having the ability to act. And those of us subject to the algorithms have limited ability (and are increasingly unwilling to use that ability) to prevent their use. There are benefits (e.g., Siri, Cortana, etc.) but also costs. It is the lack of meaningful discussion of these – and, more importantly, a lack of institutions that might meaningfully allow us to control the development – that disturbs me. I think even the notion of “control” is problematic. I am reluctant to permit anyone to have such control, but absence of control means that those producing the algorithms are free to proceed as they see fit. “Do no harm” is all we have to protect us. Is it enough?

        • Thomas Basbøll says:

          I’d say it’s more like this: a war between humans who claim “the machines are out of control” but are really just unleashing managed chaos on the world (the 2008 financial crisis offers a good model for this sort of irresponsibility with benefits) and humans who, believing this lie, think the machines have taken over and are now the enemy (I think the Luddites might be a good early model of this misunderstanding). Like I say, it’s a proxy war where one side is essentially saying, “We aren’t killing people, guns are killing people!” Hmmm….

          My point is just that the “trick” is granting agency to the machines, rather than assigning responsibility to those who could, at any time, at least unplug them.

          • Dale Lehman says:

            I’m in agreement with this. I think it is a step in the right direction to stop granting agency to machines and recognize that it is our (collective and individual) responsibility to control them. I was trying to steer the conversation away from what I see as relatively unproductive and uninteresting – whether AI is a representation apart from humans – towards some meaningful conversation about how we are using AI. AI is too far along already and the next few years will see its use explode – for both good and bad. Yet I don’t see that we are any closer to control (or, for that matter, even discussion) of its consequences than we have been in the past.

            • Thomas Basbøll says:

              I guess I’m fixated on the term AI. If we just called it “automation” I wouldn’t have a problem. All the same issues arise. What processes do we want automated and what processes do we want to have a hand in ourselves? What should be allowed to happen without a thought (or feeling) from humans? I think AI is a marketing spin on automation that is trying to get us to accept thoughtlessness where we otherwise wouldn’t even consider it.

      • Dzhaughn says:

        “But there are no similar questions about the ethical treatment of computers.”

        I am not sure it is so clear. What about an emulated brain, à la Robin Hanson’s Age of Em? (Basically we scan the neurons in a brain and provide simulated stimulation. It is not technically that difficult; we only need 1000x more computing and imaging power. One doesn’t need to “understand” the brain in the large to do this. The result ought to be able to use language, learn, remember, have a sense of self, have motivations and “tastes.”)

        Is it ethical to torture the emulation to measure its behavior, get it to reveal its secrets, or make it do some work? Is it acceptable to offer it pleasure, conditionally, to do the same? Can rights to do such to an emulation of oneself be sold?

        Does the emulated self have rights? Even if not, is it ethical to ignore our empathy with an emulation? Is it ethical for an emulation of person A to ignore its empathy with an emulation of person B? (Emulation B would certainly “feel that way.”)

        • Thomas Basbøll says:

          I haven’t read Hanson’s book but it looks like the sort of hype I don’t believe. One reason is that, as far as I know, no significant progress has yet been made in emulating the 302 neurons of the worm C. elegans. (See the OpenWorm project.)

          Philosophically, the problem with an emulation is that its embodiment is arbitrary. We could give it the illusion of having a body with any amount of powers. This arbitrariness would make imagination both unnecessary and impossible. (Does the emulation ever feel the need to have a meal or get into shape or find a mate?) Descartes was wrong: we cannot be ourselves merely by thinking we are. We must be the bodies we have, become what they do. You’d have to emulate a brain, its body, and its universe.

          • Keith O’Rourke says:

            > You’d have to emulate a brain, its body, and its universe.
            Or at least a large enough fish bowl.

            But it may need many AI’s, along with artefacts that some would use to represent things outside “themselves,” in a way that they and other AI’s would realise what they are doing (representing for some purpose).

            As I put it in the post “The main take home point perhaps being that externalizing the representations allow us to stare at them, if not them stare back at us. And publicly, so others can also stare and perhaps point out how the representations could be made less wrong.” See also the p.s. I added to the post.

            So I cannot imagine how this would happen or even if it is necessary, but that is not sufficient for me to assert it cannot happen.

            • My beef with the hype is its opportunism about digital computers. If I’m right, the invention of the microchip didn’t make AI more possible than it was at the time of Babbage or, indeed, Gilgamesh. So it’s not that I’m using my inability to imagine AI to justify an assertion that it cannot happen. I’m trying to shift the burden of proof to those who suggest that, after Turing, I’m some sort of pessimist. I don’t have to say it cannot happen because I simply have no reason to think it ever will. No better reason than the Sumerians had when they congratulated themselves on the marvelous utility of the abacus.

              • Keith O'Rourke says:

                I put this in one of my comments though likely should have put it in the post – Peirce: “The secret of all reasoning machines is after all very simple. It is that whatever relation among the objects reasoned about is destined to be the hinge of a ratiocination, that same general relation must be capable of being introduced between certain parts of the machine.”

                So nothing qualitatively special about digital computers – though they are very fast and flexible.

              • Corey Yanofsky says:

                Thomas, you keep focusing on the — I’m not sure what to call it: boringness? triviality? non-thought-likeness? — of abacuses and digital computers while failing to grapple with the fact that brains are made of neurons, and neurons have no more quality of thought than any other reliable implementation of an input-output transformation. It was the discovery of the fact that the basic constituent of the brain is machine-like and does not derive its power from, I don’t know, having an inscription of the 72-lettered name of God in very small type, that gave people evidence that an artificial intelligence was possible.

                I don’t know; maybe people forget that it wasn’t even clear that organic materials could be created from non-organic precursors until urea was synthesized in 1828.

              • Thomas Basbøll says:

                Notice that in this image the objects are given and can then be correlated with the parts of the machine. And it is precisely the mind (“intelligence”) that is “capable of introducing” the relations involved. The mind is not simply a structure of relations between parts of a machine that are (somehow, but how?) as given as the objects it thinks about.

                If there’s nothing qualitatively special about digital computers, I rest my case. There’s no reason to think computers will ever think. Artifice has always been intelligent. But I don’t buy that “intelligence has always been artificial”. It’s as natural as it gets.

              • @Corey, Thomas,

                This is just Searle all over again:

                The thing that Searle (and similar arguments) just doesn’t seem to acknowledge, and that is highly relevant in my opinion, is that whether we’re simulating a brain/mind via a “Chinese Room” or “Waterpipes” or whatever, the thing is MINDBOGGLINGLY ENORMOUS.

                The Brain has something like 100 Billion neurons, and something like 100 Trillion synapses. Presumably a network of water pipes simulating a brain at that scale, made from say 1/2 inch copper would mass so much that it would collapse on itself and form a small moon.

                The “Chinese Room” with its guy sitting there transcribing things according to the rules listed in a book would of course require a volume of books with something like the mass of the crust of the earth to encode the program, and would, at the conceivable rate of translation for such a simulation, do something like one sentence in the time that has passed since the earth was formed 4 billion years ago.

                But make it all very small and fast, like, I don’t know, maybe each unit would be about the size of… say a neuron, and each connection about the size of say… a dendrite synapse… and you can now do trillions of calculations in the time it would take the Chinese Room occupant to turn a single page.

                Scale matters. One ant is “never going to devastate a town” but a massive swarm might do so just fine.

                If a “neural computer chip” can be made to connect 100 billion units through 100 trillion connections, and the operation of the device rapidly iterated through a genetic algorithm at the rate of, say, 10,000 generations per day on a population of 1 million configurations, with genetic selection appropriately based on achieving the appropriate goals, in a few weeks it will sing, dance, write poetry, and eloquently demand representation in congress.

              • Thomas Basbøll says:

                @Corey: “…neurons have no more quality of thought than any other reliable implementation of an input-output transformation. It was the discovery of the fact that the basic constituent of the brain is machine-like…”

                I simply deny the completeness of the neurological description of the brain, i.e., its fundamental “machine-like-ness”, with respect to its ability (which I’ll grant, for the sake of argument, it has) to produce consciousness, my experience of thinking.

                If it’s any consolation, I also deny the completeness of the electro-mechanical description of my arm with respect to the experience of lifting stuff. It’s experience that needs explaining and the nerves-muscle-blood-and-bone machine just doesn’t give me a satisfying explanation.


                I like my box of apples better than Searle’s Chinese Room but I do see the connection. The point is not that a more complicated room is needed. The point is that Chinese is already in the room and is not (on Searle’s description of how it works) understood by the room. The billions of neurons can explain the production of “outputs” at the end of nerves, twitches of muscle etc. But they can’t explain the meaning and you therefore can’t have a causal explanation that has synaptic activity and then, presto, a poem or a dance at the end of it … unless, like Searle’s room, you encode the poetry in the neurons prior to the analysis. And that’s begging the question. To use Corey’s very apt image, in order to be an empirical model of the actual brain we’d have to first discover “an inscription of the 72-lettered name of God in very small type” among the neurons of the brain.

                Tell me how moving billions of (very small) apples (very fast) around in billions of (very small) boxes could produce a poem that is not already implicit in the arrangement of the boxes and we’re going somewhere. But this just isn’t possible.

                It will, in any case, NOT “sing, dance, write poetry, and eloquently demand representation in congress,” unless it has a body that wants these things.

              • My fundamental position is that all the poetry in the world is already encoded in the Lagrangian of the universe. It’s all subatomic particles jiggling.

              • From the link under “The Brain Simulator Reply”

                ‘Searle thinks, obviously, “the man certainly doesn’t understand Chinese, and neither do the water pipes.”‘

                But it’s patently obvious to me that the water pipes understand Chinese, and Searle gives no real reason why this shouldn’t be so, just basically that “water pipes can’t understand Chinese.”

                The massive complexity of the waterpipes is easily ignorable because it’s *physically impossible* to build such a large water-pipe system. 100 trillion analog valves… Among other things, the speed of sound propagation in water is sufficiently slow that you simply cannot do it. One of the fundamental dimensionless ratios that controls things is t_t/(D/c), where t_t is the time required for an elementary thought and D/c is the time required for information to propagate across the diameter of the organ at the speed c of the information-carrying quantity. In the case of water pipes that’s c = 1498 m/s, the speed of sound in water. If we acknowledge that we can think several things per second, we’re now talking about 1/(D/1498); since the ratio needs to be greater than 1, we’re limiting the size of the water-pipe brain to about 1500 m diameter. As a sphere, this means about 1.8e-5 m^3 per dendrite valve, which for spherical valves indicates about 3 cm diameter, and this is just the volume of the valves, with no interconnections.

                If we assume multiple trips back and forth between different neurons propagating around… it becomes clear why we need neurons whose bodies are on the order of 10 microns diameter and propagation of electrical signals on the order of 10 m/s in neurons.

                Let’s calculate the dimensionless quantity for the brain: call it 10 cm diameter, 10 m/s propagation, 1 second per “thought”. Then 1/(0.1/10) = 100, so information can flow between any two parts of the brain 100 times per thought.
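                The back-of-the-envelope arithmetic above can be checked with a short script. This is only a sketch of the comment’s own estimates, taking its stated assumptions (1 s per elementary thought, 100 trillion valves, the quoted signal speeds) at face value:

```python
import math

def thought_ratio(t_thought_s, diameter_m, signal_speed_m_s):
    """t_t / (D / c): how many times a signal can cross an organ of
    the given diameter within one elementary thought."""
    return t_thought_s / (diameter_m / signal_speed_m_s)

# Water-pipe brain: requiring t_t/(D/c) >= 1 with t_t = 1 s and
# c = 1498 m/s (speed of sound in water) caps the diameter at c * t_t.
max_pipe_diameter = 1498.0 * 1.0  # ~1500 m

# Packing 100 trillion dendrite valves into a sphere of that diameter:
sphere_volume = (4 / 3) * math.pi * (max_pipe_diameter / 2) ** 3
volume_per_valve = sphere_volume / 100e12  # ~1.8e-5 m^3 each
valve_diameter = (6 * volume_per_valve / math.pi) ** (1 / 3)  # ~3 cm

# Human brain: D ~ 0.1 m, c ~ 10 m/s, t_t ~ 1 s.
brain_ratio = thought_ratio(1.0, 0.1, 10.0)  # ~100 crossings per thought
```

                The same ratio with the brain’s numbers comes out around 100, which is the comparison the comment is driving at: the water-pipe brain sits right at the feasibility boundary while the real brain has two orders of magnitude of slack.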

                If all you’re saying is that there are fundamental physical limitations such as these, which mean brains can’t be the size of planets or operate from principles other than lightweight electron flows… then I don’t think this is all that controversial. If what you’re saying is that it really does require the inscription of God’s name on each and every thoughtful object… then I think we can just agree to disagree.

              • Corey says:

                Daniel, I’m aware that we’re recapitulating the Chinese Room argument; I thought that if Thomas wasn’t familiar with the terms of the debate and the literature then other readers also might not be, and so I should keep my argument on the object level as much as possible. I’ve also tried to say something at least a little bit novel (in the context of just this thread) in each comment so I’m not simply repeating myself. But maybe some links are appropriate.

                Do I think people can do things that clay can’t do “in principle”? Yes, I do. Does that logically rule out some weird magic clay that gets up and walks around and talks like a person? Not strictly, I guess.

                Thomas, in the above quote you endorse the coherence of the notion of a p-zombie. I personally think the notion is incoherent.

              • I suspected as much, Corey, but it seemed worth pointing out the existence of decades of philosophy to the audience, whoever they are. Also, why is it that all the cool kids can’t live within at least a few hours of me? One day I will have a reason to go to your corner of Canada, I guess. Hopefully.

              • Thomas Basbøll says:

                @Corey: We agree that a p-zombie is incoherent (that’s what I meant by “magically”). My entire argument is that if you describe the brain only as a causal electro-chemical machine then you are positing a p-zombie. What you seem to be saying is that since this notion is incoherent, once a machine passes the Turing test, we have to believe it has conscious experience. Surely that is begging the question, but we agree to a point: I’m saying that, without conscious experience, no machine will ever really pass the Turing test.

                @Daniel: (In continuation of this very point.) I don’t think all arguments over the Chinese room apply to my box of apples. The important difference is that Searle is asking us to imagine a room that “seems” to know Chinese. But then he tells us how it works and now we understand how this appearance is possible. (It’s like revealing the trick that makes a mentalist seem to be able to read your mind.) I’m asking you to imagine a box that obviously doesn’t know how to add. And I’m trying to show you that a computer, likewise, doesn’t really know how to add.

                @Both: I’m not saying that God’s name is in fact inscribed in our brains. I’m characterizing the incompleteness of our account of the brain qua “conscious machine” by suggesting that it would still need such an inscription, which is absurd. Searle’s room can’t learn (and certainly can’t invent) Chinese. Chinese has to be “installed” in it.

                I’d still like to hear your prediction of when the OpenWorm project will successfully have simulated the 302 neurons of C. elegans, such that the computer “behaves” like an actual worm. I want to stress that I think that notion is incoherent too. If you don’t make a robot worm (but just track simulated synaptic output under conditions of simulated “environmental” input) you’re begging the question.

              • Thomas Basbøll says:

                Another way of putting it: I think saying that E.T.A. Hoffmann’s Olimpia is a “fantasy” and Philip K. Dick’s Rachael Rosen is “science fiction” is a false distinction. I’m saying that nothing happened, whether in (brain) science or (information) technology, between 1816 and 1968 to make these literary inventions more “realistic”. The so-called science fiction of artificial intelligence is, at best, a kind of magic realism. Spike Jonze’s Her is a wonderful example.

              • Thomas, I think you misunderstand Corey’s point. He claims there is no such thing as a p-zombie, so anything which replicates a brain *is* a conscious object not a p-zombie.

                Yet you state “The mind is not simply a structure of relations between parts of a machine” and you basically say that it would require “magic” for us to turn silicon (clay) into a brain simulation.

                And yet you also say that a p-zombie is incoherent, which I take to mean that if we could create an artificial brain that replicates a real one you would agree that it is conscious. So my conclusion is that you have an incoherent view. Unless your view is just that boundary conditions matter and we have to simulate all the sensory neurons too. But then did Christopher Reeve become a p-zombie after his spinal trauma? I say no. Some limited sensory input is evidently required, but not a full body; people don’t become p-zombies when a leg is amputated, for example.

                The openworm project is not that interesting for philosophy of mind honestly. No one I know thinks that worms are conscious. They simply don’t have the neural complexity required. Hell even humans aren’t conscious if you give them an appropriate anaesthetic.

              • “My entire argument is that if you describe the brain only as a causal electro-chemical machine then you are positing a p-zombie.”

                No, you are positing that brains are mind machines.

                If you reject the existence of a mind in said brain, then you are positing a p-zombie. A p-zombie is physically, in every way measurable, like a human down to the atomic level, and yet has no consciousness because of a lack of whatever “magic” makes a “true” consciousness.

              • Let’s say that a p-zombie is an object that is physically and behaviorally indistinguishable from a human being but doesn’t have consciousness. I agree with you and Corey that this is incoherent.

                A “mind machine” is, in a sense, the opposite. It is an object that may be radically different from a human being physically and behaviorally (it is “substrate independent”) and yet have consciousness. I think that notion, too, is incoherent.

                I don’t think the brain is a “mind machine”. I think it is an organ in a human body and its function, like the function of many of our organs, is still only partially understood. I would not be surprised if the heart plays an important role in feeling emotions (as poets have suggested), that we can’t think without feeling, and that any simulated brain would be missing something essential if it didn’t have a pulse, a beat, a rhythm of the heart.

                Even if we simulate a brain, synapse for synapse, neuron for neuron, we will not simulate its mind-supporting function. We don’t know that that is how it works.

                It’s not that a C. elegans has anything approaching consciousness. It’s that it has exactly 302 neurons and we don’t know how to make a complete simulation even of that. My prediction is that a complete neuronal simulation of a brain will fail to produce anything resembling human behavior. I will be still more confident about this when a complete neuronal simulation of C. elegans fails to produce worm-like behavior.

              • Thomas: you are aware of the existence of this, right?


                I deny anyone with that installed is now a p-zombie.

                The incontrovertible fact is that damage to the brain causes personality changes, and general cognitive changes, and that suggests that those parts of the brain were important to the consciousness of the human involved. And people do brain damaging experiments to animals as well, reliably producing the same kinds of results. Furthermore, brain surgeons regularly stimulate the brains of patients and evoke memories, feelings, taste, etc. to map out regions when tumors need to be removed… All the scientific evidence we have is consistent with the notion that brains are mind machines.

              • Carlos Ungil says:

                > All the scientific evidence we have is consistent with the notion that brains are mind machines.

                But what does it have to do with the discussion? If you managed to run a (necessarily imperfect) computer simulation of a brain, do you think the simulation would have a mind? Maybe the “mind” emerges from something that you left out of the simulation. A computer simulation of the brain is clearly not in every way measurable like a brain down to the atomic level.

              • Thomas Basbøll says:

                I don’t deny that the brain is important for thinking. What I’m denying is that an electrical description of the brain captures this importance and that, therefore, a machine that reverse-engineers its electrical activity will be able to think.

                (Of course I know that there are artificial hearts. But no one has yet proposed to make an artificial brain essentially aware of an artificial heart. Indeed, the point here is that our actual brains are dependent on our actual hearts and, I suspect, we can feel this in ways that are important for consciousness.)

              • Carlos, in philosophy we often wind up taking liberties. Of course, it’s possible to botch the simulation of a brain. But suppose you didn’t. Suppose you actually accidentally started up your electronic brain-copy of me and hooked it up to a mechanical body, and it got up, went over to the computer and started spouting off on the internet about how Searle is totally missing the point, and Bayes is not about frequency of occurrence and we really need to teach people more about dimensional analysis… would you deny that it had a mind simply because it wasn’t built of meat by a biological growth process involving decades of cell division?

              • Carlos: Thomas seems to be of the opinion that even a perfect copy of a brain couldn’t exhibit consciousness without something “extra” yet that extra isn’t any of the things that would be electrically disconnected from the brain by a spinal trauma in the neck…

                Well, of course chemicals that pass the blood-brain barrier alter the consciousness of humans. The question is: do they alter the consciousness of humans *without* altering the electrical activity? All the evidence suggests that things like LSD and epinephrine and melatonin and so forth alter electrical activity. They are mechanisms for altering consciousness, sure, but they seem to do so by altering electrical activity. I would bet that you can’t give someone with a severed spinal cord hallucinations by injecting LSD into the veins of their arm while cutting off the blood supply to the head and providing a separate blood supply to the brain from a separate vat of blood with a temporary artificial heart; the drug needs to reach the neurons and have an effect on the electrical behavior at the synapses.

                To me the point is this: minds can be made by biological machines called brains, and if anything behaves as if it were conscious (i.e., it speaks to us coherently about its feelings and thoughts over a vast variety of topics appropriate to a conscious human) then it is conscious. All I need next is that in principle we can create a complex enough electrical circuit that it would speak to us coherently about its feelings and thoughts over a vast variety of topics… and we have that *the electrical circuit is by our definition conscious*. It doesn’t need to replicate the electrical activity of the brain, it could do so by some other means, but it needs to replicate the external behavior of a conscious being.

                Thomas seems to deny that it is possible *even in principle* to create such an electrical circuit because it doesn’t have … something “a heart” or deep biological needs for oxygen or something… I’m not even sure what.

                I just claim that this question is partially empirical — can we do it? We’ll know when we succeed — and partially theoretical: we know that any such electrical circuit will require a lot of complexity, a lot of “processing power”, and will need to obey basic physical principles like t_t/(D/c) > 1, and so will need to be of a certain scale and use a certain kind of information transfer material (like electrons, probably, and very likely NOT pressure waves traveling through tubes).

            • Alex Gamma says:

              1. The Chinese room is a red herring. It is not a good thought experiment because it makes a tacit assumption that is so speculative that we have no reason to accept it: that a conscious agent that is a cog in a larger system would share the conscious experience of the larger system if it produced any.

              2. Philosophical zombies. Corey, you say the notion of a p-zombie is incoherent. That in itself doesn’t tell me much. Why do you find it incoherent? Is it logically impossible? Logically inconsistent? Nomologically impossible? And then: what, for you, follows from that?

              Also, I’m not sure how you guys (Corey, Thomas, Daniel) think p-zombies are relevant to a discussion of whether computers or machines can have intelligence or consciousness? (I’m not saying they’re irrelevant, I just get the impression that you’re using them inappropriately.)

              • Alex: I think you misunderstand the Chinese Room argument, though I also think that Searle is just stubborn and makes mistakes in wording that suggest he interprets things in your way, or leans on this misunderstanding.

                1) The point of the “Chinese Room” is that it produces an object (a room) which externally appears to translate Chinese. He then shows us the inside workings (it’s a guy transcribing characters according to a rule book, a 1940’s era “computer” such as the people who calculated things with pencil, paper, and slide-rule for the Manhattan project). And asks us: “Can this room possibly understand Chinese? No way.” Searle makes the mistake of being far too philosophical and not physical enough: “it’s just a rule book.” What he fails to realize is that any such rule book would require something like a couple trillion volumes at 1000 pages a volume. It would never work, precisely because it couldn’t be as complex as the language processing center of your brain and still be feasible even on time-scales between the dinosaur mass extinction and now. Then he imagines us simulating a brain using water pressure instead of voltage… again he totally ignores the enormous complexity of such a system. It seems obvious that “my sink doesn’t understand Chinese,” but the water-pressure brain simulator is both physically impossible (due to the speed of sound in water) and, if it were somehow possible due to using some enormously stiffer fluid with 10 or 100 times the speed of sound, would be enormously complex: hundreds of trillions of valves and inevitably the size of a whole city. It’s more like an ant colony than a sink by multiple orders of magnitude.

                It isn’t whether the person inside the room understands Chinese that’s at question (though Searle seems to lean on that to some extent) but that the *room itself* appears to understand Chinese externally. Is there a device we could put into the room, other than a physical human who knows Chinese, which would in fact “understand” Chinese? Searle extrapolates from imaginary rule-books and water-pipes that are totally non-physical and pointless to answer “No.” The point of the “brain simulator” response is that you can’t extrapolate from your armchair understanding of the physics of dishwashers and your experience of sitting in a library to the neuroscientist’s understanding of the reality of the neural circuitry of a real brain processing language. Emergent phenomena are well known in all areas where complexity gets sufficiently large… things happen when you put together several million, much less billion, neurons that don’t happen in C. elegans’s 302 neurons.

                2) The point of a p-zombie is that it’s the logical endpoint of a “proof by induction”: “1 neuron isn’t conscious”, “2 neurons aren’t conscious”, … “302 neurons aren’t conscious”… I conclude that no number of neurons is conscious without something else besides just the neurons.

                This “proof by induction” only works if you actually build a 100 billion neuron human brain simulator, exact in every connection detail, timing, conduction speed, and all the rest of the electrical behavior, and find that, though it replicates the electrical activity of my brain, it doesn’t even remotely output the neural signals required to type this message. You can’t get there by imagining dramatically less complex things and saying “they don’t do it so nothing can.”

              • Alex Gamma says:

                Daniel, what you just wrote (8:45 am / 9:11 am) is helpful.

                However, in your first post you also mis-characterize the p-zombie. It’s crucial that it is not a replicate of the brain *simpliciter*, but a *physical* replicate of the brain. Of course, an identical copy (a replicate simpliciter) of a conscious brain would also be conscious.

                I also sensed that Thomas might be confused on the p-zombie issue. However, the larger issue of how p-zombies are relevant to the question of machine consciousness is still important: p-zombies are part of an argument against physicalism, with the conclusion that there is an epistemological, and possibly ontological, divide between the physical and the non-physical. The argument is orthogonal to the artificial-biological divide. Without further assumptions, there is no implication from the truth or falsity of physicalism to the possibility or impossibility of conscious machines.

                Whether machine consciousness is possible is a perfectly open question. There are no arguments on either side that would significantly tip the balance. This is partly because we have a sample size of 1 for “systems known to have consciousness”, and the one element is something like “biological (human) life”. This is no good basis for out-of-sample predictions.

              • Thomas Basbøll says:

                Alex, I agree with you in one sense that the question is “perfectly” open, i.e., that “there are no arguments on either side that would significantly tip the balance.” But I assume that Daniel and Corey think that the probability of discovering, say, plant intelligence is lower than the probability of inventing artificial intelligence. I don’t think there is a significant difference in these probabilities. That is, I don’t think there’s any more reason to think that machines can think than there is to think that plants can think. This, in my view, is the core of the disagreement.

              • Alex, my post got held up in the filter… so we’re talking over each other a bit here. I’m glad some of my earlier posts helped you understand my point.

                As for the replicate “simpliciter”, which I take to mean a real atom-for-atom replicate… yes, this is the thought experiment which exposes the notion of a p-zombie to ridicule, in my opinion. But the point of it is: consciousness is a property of at least one type of physical arrangement of atoms (the brain). Thomas seems to deny this; he believes something else is needed, but can’t say what, and waves in the general direction of a heart or other parts of the body, a need for oxygen, some chemicals, processing foods, or something.

                I don’t think the argument about physicalism is orthogonal to artificial vs biological. There’s nothing in principle different about an electron moving around inside a neuron and an electron moving around inside a capacitor; they’re both electrons. If the electrical activity of the brain induces a state we call consciousness… then it’s the pattern of the electron movements that matters. In particular, *if* we can produce an electrical circuit that mimics the electrical outputs of the neurons of the brain, then hook it up to the motor and sensory neurons of a body, it will *by virtue of the fact that we’ve already assumed it replicates the electrical outputs of the brain neurons* get up and walk around and talk and have feelings. That the electrical activity is occurring inside silicon capacitors and field-effect transistors won’t affect the consciousness of the being.

                So, there are two questions:

                1) Empirically, can we ever produce an electrical circuit made of silicon and so forth that produces behavioral complexity “from the outside” as complex as a human’s, before we start a nuclear war and obliterate the human race or are impacted by an asteroid? This is a technological question. We have to agree that we won’t make any architectural assumptions: it could be an in-silico neuro-replicate, it could use some other basic architecture, but its actions need to be consistent with passing a DEEP Turing test. We need to be able to “talk to HAL” about quantum physics, and the difficulties of women living in the social system of the 1700s, and the relative merits of the taste of two recipes of soup, and the importance of the fossil record for our understanding of the evolution of vascular plants, etc etc.

                2) In principle, if we did produce a silicon electrical circuit capable of producing behavioral complexity observed “from the outside” as complex as a human, would it be conscious? This is a philosophical question, and it still seems that people are wont to answer this in a non-physical way: no it wouldn’t be conscious because only brains can be conscious. This is Searle’s answer. Brains have mysterious “causal powers” that we don’t understand and only they can be conscious. I think it is mysticism of the first order.

              • Thomas: if you claim simply that in the future of the human race prior to extinction we won’t succeed in actually producing any artificial object that acts conscious… this is just an empirical question; we’ll have to wait for an answer, or perhaps for our descendants to answer it. This is the “It Won’t Ever Happen” hypothesis.

                If you deny that it’s even in principle possible for anything but a human organism with a brain (and some other important bits, you’re not sure what they are) to ever be conscious… this is the “It Can’t Ever Happen” hypothesis. And, well, I just disagree.

                Anyway, at this point I think my job is done, that being to explicate some of what has already been argued to death since the 1980s, when Searle rejected the AI boom. After the AI winter of 1984, the only people who spent much time on this topic were professional philosophers, and they did spend a bunch of time. It’s worth knowing about that before replicating all that stuff (doomed to repeat history, etc etc).

                Now that AlphaGo has beaten the best human Go player, and AlphaZero has (maybe) beaten the previous brute-force champion chess computer Stockfish, and it’s routine to have computers do facial recognition and transcribe sort-of-OK versions of people’s voicemail, we’re seeing a new interest in the ideas of AI… well, these are just tricks. Deep AI is a different thing.

              • Thomas Basbøll says:

                OK, Daniel, thanks for the exchange. I agree that those are “just tricks”. “AI” is a spin on those tricks that makes them seem deeper than they are. Every winter has a spring, so there’ll no doubt be funding booms and busts in this area for a long time. They will produce ever more impressive tricks, and no doubt very useful machines. But I, for one, will not believe the hype. We have never actually made a significant advance towards deep AI.

              • Corey Yanofsky says:

                Sorry, been busy this morning.

                Thomas, apologies — I was sloppy and should have been more specific. The type of p-zombie you were implicitly talking about when you wrote “weird magic clay that gets up and walks around and talks like a person” is a “behavioral zombie”; so not necessarily strictly physiologically identical to a human, but having the behavior of one without subjective experience. (Sorry too, Daniel, for misleading you about what I was claiming — the link behind “incoherent” was for entertainment purposes only. Again: sloppy.) The part where you accept the coherence of the notion is where you write that it’s not strictly ruled out on your view. (Stipulating to the coherence of some notion is a weak admission in the context of a conversation about actually building such a thing.)

                Carlos, I only brought up p-zombies because the behavioral version of the notion was implicit in what Thomas wrote and I wanted to introduce some secondary material concerning arguments for and against reductive physicalism.

                To be honest, I’m only interested in artificial consciousness insofar as I’d like to avoid building one if at all possible and thereby create a being worthy of moral consideration. (Thomas, you no doubt find this worry vacuous.) I doubt that consciousness is necessary for the economic function of an artificial intelligence, to wit, creating strategies and plans of action that steer the universe into states high in some preference ordering. (You can call that capacity “intelligence” if you like, or not; regardless, that’s what I find most worthy of thought.)

              • Thomas, yes you’re welcome, and yes I agree with you, they’re tricks. I had a discussion with my cousin’s boyfriend about whether strong AI was coming and going to destroy the world or whatever. He was really honestly concerned about that. I said no way, what is of more concern is that many many individual “tricks” will replace much of the specialized knowledge work we currently pay humans to do, and our social, legal, and economic systems will collapse before we figure out how to compensate because social, legal, and economic systems are pretty slow to adapt.

                Examples: car mechanic diagnostics, retail investment advisors, store stock keeping, checkout counters, bank tellers, estate and trust lawyers, roofing contractors, structural engineers, recording/sound engineers, CPU designers, commercial airline pilots… these could all be individually replaced (in large part, say 90%) by individual specialized computer systems, with mechanical robot actuators for physical actions. And maybe faster than you think. Suppose that the application of large quantities of computing power, plus large quantities of training data, plus “deep learning” toolkits utilizing basic neural network models, etc., makes it possible for 95% of people to go to a web site, describe their estate planning issues (or car troubles or whatever) using spoken language, and get a response from a computer that solves their problem: dispatches a robot to fix the car, or sends a drone that delivers a paper document, films the appropriate people signing the document, scans the document and emails it to them with a cryptographic signature validating the video witnessing, and then delivers the original to a government recording office for official records…

                I mean, 90% of everything everyone does in an office is kind of pointless make-work due to rules and regulations, fixing problems other people created due to errors, or coffee-drinking anyway ;-)

              • Corey: I agree with you that accidentally building an artificial consciousness would be a real problem. I also agree with Thomas that it’s not really likely to happen in the next couple of weeks, years, even decades. The 100-billion-neuron, 100-trillion-connection simulator version would require hundreds of trillions of transistors, and that’s around 10000 or 100000 times as much as we can put on a silicon chip today. And the interconnectedness would need to be far more general than the very specialized arrays we currently use, and we’re not really trying to do that at the moment anyway, so general-purpose interconnected neuron simulators are not near. Of course, I agree with Corey that a behavioral zombie is not a thing, so if we can create the behaviors in some way other than neural simulation, it’d be a conscious being worthy of moral consideration. Yet I still don’t see that happening soonish either.
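                As an aside, that scale claim is easy to sanity-check with back-of-envelope arithmetic. Both constants below are rough assumptions (transistors needed per simulated synapse, transistors on a large present-day chip), not established figures:

                ```python
                # Back-of-envelope check of the scale argument above. Both constants
                # are rough assumptions, not established figures.
                synapses = 100e12            # ~100 trillion connections in a human brain
                transistors_per_synapse = 4  # assume a handful of transistors per synapse
                chip_transistors = 20e9      # assume ~20 billion transistors on a big chip

                total = synapses * transistors_per_synapse  # hundreds of trillions
                chips_needed = total / chip_transistors     # how many of today's chips

                print(f"~{total:.0e} transistors needed, about {chips_needed:,.0f}x one chip")
                ```

                Under these particular assumptions the ratio lands around 20,000, i.e. inside the 10000–100000 range quoted above.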

                I also think that the more important question is how well we can get machines to optimize the state of the world towards our own utility functions using smart “tricks”: can we eliminate essentially all “hazardous” types of work, for example mining, chemical processing, professional driving, hazmat cleanup, whatnot? Or can we produce many important goods at very cheap prices: can we build a 2000 sqft modern house for $50,000 in materials and labor cost using robotic factories and assembly machines that take advantage of “tricks” for visual processing and planning and coordination, etc.? To the extent that automation enables these things due to “AI tricks” and semi-autonomous robotics and so forth… great, let’s do it. I do think it will require a change to society. But I have opinions on the kinds of changes we’ll need too.

              • Alex Gamma says:


                I’m sorry, but your use of the notion of a p-zombie is totally foreign to me, and also to Wikipedia.

                To repeat, its central use is as part of an anti-physicalist argument, most famously developed by David Chalmers. That argument has nothing to do with a proof by induction that a bunch of neurons can’t produce consciousness.

                Yes, the zombie-based anti-physicalist argument cuts across the artificial-biological divide, simply because it is an argument about whether physical properties are all that’s needed to explain or to constitute/cause consciousness. It doesn’t differentiate between physical properties in brains vs silicon. In other words, if physicalism is false, then – in BOTH computers and humans – physical properties alone do not explain/determine consciousness. If it is true, then – in BOTH computers and humans – they do.

                Re: Chinese room. In the original paper, Searle writes: “As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories…”

                Seems to me he’s very much relying on the assumption that if the room produced conscious understanding, the human in the room would share in it.

              • Carlos Ungil says:

                > There’s nothing in principle different about an electron moving around inside a neuron and an electron moving around inside a capacitor.

                There are no free electrons moving around inside neurons; nerve impulses propagate by moving ions around.

                > If the electrical activity of the brain induces a state we call consciousness… then it’s the pattern of the electron movements that matter. in particular, *if* we can produce an electrical circuit that mimics the electrical outputs of the neurons of the brain

                Why would it be possible to produce an electrical circuit that mimics the electrical “outputs” of every neuron in the brain? Neurons are not just binary elements producing a well-defined output. You cannot even determine how a brain is wired, let alone create a fake one.

              • Alex: yet Searle also goes on (is it a different paper? it’s been years since I actually read these) to respond to the water-pipes example, claiming it’s completely ridiculous to suppose that you could simulate a brain in something other than actual neurons and produce consciousness. I personally think Searle is way off base, so you don’t have to argue with me that many of his arguments are almost non sequiturs ;-)

                I guess my assertion about induction is perhaps a little too abstractly related to the p-zombie to be obvious. You’ve got much of the gist though: the behavioral p-zombie argument says that there either is, or isn’t, such a thing as a behavioral p-zombie. If there is, then minds are not *merely* physical, because we could produce a physical thing that was externally indistinguishable from a human but devoid of consciousness. If there isn’t, then any “platform independent” implementation of a walking, talking undergraduate student is conscious, whether it has a silicon brain or a regular one. Searle clearly sets out to say that a Chinese room, no matter how perfectly it translates human languages from one to another, is a p-zombie because… well, I don’t really buy his argument either… but it comes down to not having the right “causal power” to cause a mind.


                The next step in anti-physicalism is to go beyond the behavioral p-zombie and say that even “mere perfect actual human brains” don’t make consciousness; you need something else. And you can kind of hear this in Thomas’ argument about “magic clay” or “hearts and feelings” or whatever. It seems that it’s not enough to output electrical signals to neurons that cause you to head-bang to punk rock and seek out nihilist tattoos and eat a lot of noodles and sign up for art history classes and argue with your spouse about wall paint colors or whatever; you have to actually have the “special sauce” that comes with the full-monty neurons-embedded-in-meat, “name of god tattooed on your soul” experience.

                And it’s that notion of the extreme p-zombie that is related to a kind of induction: planaria and C. elegans and Cnidaria and amphibians and so forth aren’t conscious, and so as we get ever bigger in brain we don’t just arrive at consciousness eventually; it takes also the special sauce or name of god or whatever, beyond just what brains do.

                If you accept a pure p-zombie can exist, and actually can exist even if it has a real brain, you’re a special kind of full on anti-physicalist, particularly if you don’t have a specific assertion for what is required beyond a “mere brain” such as some particular organs which have to be present. It’s going to get really hard to name what else besides the brain, because take any particular part of your body and likely someone has lived for some period of time without it and still expressed feelings of love for their family and etc etc (artificial heart, kidney transplants, liver transplants, etc etc).

                But if you are willing (as Thomas is) to say “behavioral p-zombies don’t exist; anything that behaves like a conscious human would BE conscious,” you can apparently still argue that only brains embedded in meat are capable of producing consciousness. It now becomes a technological argument.

                At this point, Thomas can argue that in his view it’s just absolutely logically inconceivable (to him) that a fancy computer could be hooked up between the ears of a recently deceased motorcycle victim and “wake up” and behave like a human, *no matter how sophisticated the electrical circuits*, even if it could be shown in pure silicon simulation to, for example, output signals to the silicon spinal cord which would cause a human body to jump up and dance a jig or play the piano or demand to be fed Jamaican meat patties (hi jrc!). His assertion sounds something like “once embedded into the meat, it would just fail to work.” He could then disbelieve the p-zombie while still also disbelieving the technological possibility of a brain simulator: you could never make the brain simulator actually cause the behaviors needed, and so the p-zombie wouldn’t behave properly, and so your fancy silico-brain would not have consciousness because it just won’t ever work.

                It’s really an assertion of an in-principle empirically verifiable fact about technology, but made without evidence either way; the existence of things like cochlear implants and camera implants that produce rudimentary sight suggests it’s probably not obviously true, but it’s not refutable without exhibiting the full behavioral silico-human.

                Nevertheless, I don’t particularly find it convincing as a pure assertion, and I honestly think that we could make a kind of “silico robo gibbon” or something much more easily than a silico robo human capable of commenting on Keats and translating Chinese to English. Now you could argue that a gibbon isn’t really conscious, but I won’t accept any argument that says that it’s ok to torture gibbons for fun, and so they get at least some kind of pass from me for having moral worth, at least some kind of pseudo-consciousness. So I take a continuum approach that says that we can get more or less close to consciousness, and for example dolphins or humpbacks or search and rescue dogs or the like have some level of it.


                The problem seems to be that we as humans just don’t know what it’s like to be a bat, and so some people suppose that it kind of isn’t “like” anything. Other people think that it’s probably a lot like being a human, except that you have a kind of fancy flashlight installed in your throat… or whatever.

              • Carlos: fine, there’s nothing in principle very different about the flow of calcium ions between two probes of an electrolysis machine and the flow of calcium ions across a calcium ion channel in a nerve.

                My use of the word “possible” was to mean “logically possible.” Is it technologically achievable? Not at the moment. Is it ruled out by logic alone? Not that I know of.

              • Thomas Basbøll says:

                Reading over these last few comments, I think that my position has been slightly misunderstood. (What my position is doesn’t really matter, of course, as long as we’re all enjoying the conversation, which I certainly am.) I’m not claiming to have some definitive argument against the logical possibility of a conscious “machine” (scare quotes leave a wide range for choice of “substrate”). I am actually trying to make a kind of statistical argument. My view is that the probability of building a thinking machine is very low. I’m happy to grant that it is not zero, but, and here is the kicker: I don’t think the invention of the digital computer improved our chances.

                That is, I think it will only happen using a technology that is not just electronic and that has not yet been imagined. The only “logical” move in my argument is, perhaps, that I’m not entirely sure that the needed advance would be recognizably “technological” or “scientific”, i.e., I don’t think it will be the invention of a machine or the discovery of a process that will make the difference. I find it more plausible that the difference will be made by poetry or meditation, i.e., by our direct examination of the phenomenon of consciousness. (But I don’t think this is going to happen either!) In any case, so long as we are imagining the “artifice” as an electrical circuit modeled on the neurology of the brain, I’m not more hopeful about AI than I would be if someone proposed to accomplish it using a difference engine, a system of levers and pulleys, or, indeed, hydraulics.

                In short, I’m only against the hype that is obviously an attempt to fund projects that won’t (and often aren’t even intended to) produce (strong) AI but may, of course, produce perfectly useful apps that can be perfectly profitable for its inventors. I say something similar about SETI. I don’t believe that the invention of radio astronomy improved our chances of “contact”. But I do understand how the prospect can be used to attract funding to projects that will drive science and technology forward in other ways. In that light, Yuri Milner’s “Breakthrough” millions make sense. They don’t make any sense as a way of “finally” deciding whether there’s intelligent life in the galaxy.

              • Carlos Ungil says:

                > only brains embedded in meat

                Brains ARE meat (I ate one last week). An electronic model of the neurons is not a brain (any more than a mechanical model is). A computer simulation of a brain is not a brain (any more than a paper simulation is).

                We “know” we have a conscious mind, apparently emerging from the brain functions, and are happy to extrapolate that to our fellow humans. At what age does it appear? Do other animals have it? Dogs? Birds? Squids? Slugs?

                You say that an artificial construct that behaves like a human would have a conscious mind, but it’s not clear why. It seems that an additional requirement for you is that it’s implemented as an electronic model of the brain or as a simulation of the brain in an electronic computer, but it’s not clear why.

              • “At what age does it appear?” asks Carlos. A very astute question.

                Would the simulated neurons have the plasticity of an actual brain? That is, would we simulate the brain of a newborn and then “stimulate” it into consciousness? (I guess we can have a discussion about whether babies are conscious or merely have the potential to become conscious.) Or would we simulate an adult brain that is already conscious? Even an adult brain, however, has some plasticity; it’s a circuit that is subject to change. Are those changes themselves part of consciousness? Are they caused only by the electrical process in the brain? That is, are we assuming that changes in the brain are caused by the activity of the brain?

                Like I say, very good questions. They add a diachronic dimension to our attempts to emulate a brain. We can’t just copy a current configuration of neurons; we have to copy whatever governs its transformation into all future configurations. Some of the changes are no doubt the unfolding of a genetic program. Others are caused by sensory experience, and these can, arguably, be captured in their entirety as “input” in an electrical system. But how important is the experience of a bottle of cheap wine? The experience of losing control on the slopes of a ski hill? Getting into a fist fight with another conscious being? Love and the other thousand natural shocks that flesh is heir to?

                The brain is indeed meat. That may not be at all trivial.

              • Carlos:
                >> only brains embedded in meat

                > Brains ARE meat (I ate one last week).

                Yes, sure, brains are “meat” (taken to mean the more general “organic tissue”, not the more specific “muscle tissue”), but my impression was that Thomas argued we need something more than a brain for consciousness; this is either the name of god, or a soul (i.e. an intangible nonphysical thing), or additional meat beyond the brain, so to speak.

                Can we, in sci-fi theory, Star Trek-“transporter” *just the neurons* out of a body into a vat of supporting fluid that provides oxygen and nutrients to keep the neurons alive and functioning… and still have a conscious mind associated with the resulting neurons? If so, then brains are mind machines; if not, then either we also need additional meat, or the Star Trek transporter wipes the special “name of god” off the neurons. I.e., either brains are incomplete mind machines (but we’re still physicalists and so believe some additional attachments of the body would suffice, such as a heart or toenails or whatever), or we are nonphysicalist believers in some kind of “souls” and it’s not enough even to be a fully complete human body; we also need something more than that.

                > An electronic model of the neurons is not brain (no more than a mechanical model). A computer simulation of a brain is not a brain (no more than a paper simulation).

                Of course an electronic model of a brain is not a brain, but that doesn’t mean *it’s not conscious*. Is “Data” from Star Trek: The Next Generation (ST-TNG) not conscious? He displays behavior that suggests he’s fully able to reason and act largely human… well, except emotion, because they needed some drama for a TV show… but let’s just imagine him as also emotional. His “brain” is evidently an inorganic machine, but it’s capable of displaying *ALL* the behaviors associated with beings we consider conscious, not merely a few tricks. The believer in “behavioral p-zombies” might say “1) Data is not conscious because he’s not made of meat (or insert other reason here)”, whereas the disbeliever says “2) there is no meaningful thing as a p-zombie; Data displays consciousness, so he is conscious.” Same thing for HAL in Stanley Kubrick’s movie “2001”.

                I’m in category 2, things that display comprehensive conscious behavior are conscious…

                Thomas: Yes, I did actually understand you to be making a kind of “technological” argument. But I also think that, taken as a whole, several things you’ve said seem to conflict with each other. Nevertheless, let’s just say that I agree with you that we’re not anywhere “close” to producing “Data” the ST-TNG character. Still, I do think that if we *did*, we’d *need to* acknowledge its consciousness as a matter of ethics, and it would be unethical to, say, torture the being for fun. I think Corey is with me on that.

                What I’m not clear on Thomas is whether you believe it is in theory possible to produce “Data” the android, or if you believe somehow that only biological neurons can ever display that level of consciousness. And does that mean that “intelligent life” on other planets if it exists must inevitably be DNA based and have neurons and muscles with all that entails?

                Also, we’re really out of reply depth here. Perhaps better to start a new thread near the bottom.

              • Carlos Ungil says:

                I cannot speak for Thomas. I can accept that the mind is somehow a product of the brain (and everything that goes with it: the sensory inputs, the physiological environment including neurotransmitters, etc). I am a physicalist in that sense.

                For the sake of the argument, let’s say that you are Jeff Goldblum in “The Fly” (I’ve just learned that the short story it’s based on was published in Playboy!) and you successfully teletransport yourself. Would both the original and the copy remain conscious? I guess so, but being able to think about it doesn’t make it more possible than being able to think about travelling back in time makes it possible.

                > Of course an electronic model of a brain is not a brain, but that doesn’t mean *it’s not conscious*.

                But it doesn’t mean it’s conscious either. Saying that something that behaves like a human (to what extent?) has to be conscious is a very weak argument. We don’t know how consciousness emerges; it could be in the process and not in the outcome. And this “machine” is being designed to mimic the behaviour! If it adapted to its environment as a human (or even an animal) does, as Thomas mentioned, I would be much more ready to consider that it has grown some kind of mind.

              • Carlos:
                “Saying that something that behaves like a human (to what extent?) has to be conscious is a very weak argument”

                Sure, I agree extent is important. Suppose it behaves as extensively as, say, someone with “locked-in syndrome” (that is, say they can only blink their eyes: ONE LONG blink for “yes” or TWO SHORT for “no”). You can arrange to ask it any series of yes-or-no questions of rather extensive content, with any connections you like between successive series of questions, and it will answer yes or no to each. This includes some kind of equivalent of, say, scanning your pencil across a chart of the alphabet and letting it spell out words.

                So, you have a long conversation with it about whether it prefers various different forms of modern art, whether it understands aspects of quantum physics, how Bayesian statistics works in science, what it thinks are the ethical obligations that humans have to non-human sentience, and you ask it to prepare a lecture on some rather advanced topic which it does over a period of weeks by “blinking its eyes” (whatever the equivalent is for this machine) to select letters and numbers, and eventually it uses voice synthesis to deliver the speech, like Stephen Hawking.

                I’m not talking about this kind of thing.

                I’m talking about something that interacts intellectually with the world around it, learns from observation, interacts etc.
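
                As an aside, that letter-by-letter spelling scheme (scanning a chart and getting yes/no answers) amounts to a simple search over the alphabet; here’s a minimal sketch, with all names hypothetical:

                ```python
                import string

                def ask_before(target, letter):
                    """Stand-in for the responder's blink: yes iff the target
                    letter comes before `letter` in the alphabet."""
                    return target < letter

                def spell_letter(target, alphabet=string.ascii_lowercase):
                    """Binary-search the alphabet using yes/no answers only.
                    Returns the selected letter and the number of questions asked."""
                    lo, hi = 0, len(alphabet)
                    questions = 0
                    while hi - lo > 1:
                        mid = (lo + hi) // 2
                        questions += 1
                        if ask_before(target, alphabet[mid]):
                            hi = mid
                        else:
                            lo = mid
                    return alphabet[lo], questions

                def spell_word(word):
                    return "".join(spell_letter(c)[0] for c in word)

                print(spell_word("hal"))  # spelled entirely from yes/no answers
                ```

                Each letter takes at most 5 yes/no questions (2^5 = 32 ≥ 26), so even this minimal channel carries a full conversation, just slowly.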

              • Thomas Basbøll says:

                Thread continues here

        • Thomas Basbøll says:

          Here’s a working link to the OpenWorm project.

  6. Dale Lehman says:

    For some reason, my first post was lost in the ether (so if it suddenly appears, this is just my attempt to paraphrase what I thought I had already said before).

    I find this discussion too theoretical and possibly off topic – for a very important topic. The advances in AI make it almost impossible to tell whether computers now show “intelligence” or are just following programming. I recently saw an amazing video of elephants in Japan practicing calligraphy. I don’t know if they were trained to do so or are exhibiting true intelligence and creativity. As with computers, I’m not sure it matters – at least not nearly as much as the question of how humans are to behave/coexist with machines and elephants. I fear that we have barely started asking the right questions (aside from science fiction) while the advances in AI have only accelerated. At this point, I don’t think the crucial question is whether computers exhibit true intelligence or just follow programming. Of course, they have been programmed, but AI techniques have rapidly advanced to the point that it is hard to tell where the programming leaves off and where the machine picks up – they appear to be able to “learn” the rules of games and then win those games without any explicit programming for those particular games.

    As algorithms rapidly take over many decisions from humans, I think we need to be asking what our relationship with computers should be like. In many ways it is far too late to be asking this – and I fear we don’t have the institutions or ethics to begin to exert meaningful control over our creations.

    This theoretical discussion of whether our creations are exhibiting true intelligence seem off point to me. Computers are able to make “decisions” that go well beyond our ability to control, except in the abstract. Should we prevent this capability? Even if you think we should, could we? And, if some people believe we should rein in the capability of algorithms, how do our systems evolve when there are other people who do not exercise such control?
    I think these questions are too important for us to argue about whether the intelligence is truly intelligence or merely programming. Just as I believe it is critical for us to articulate our relationship with elephants regardless of whether their calligraphy represents training or innate intelligence. It is our relationships with matter, not the definition of the terms we use to describe these things. But perhaps I just can’t see through the theoretical argument well enough to see that this is what the posts are about.

    • Keith O'Rourke says:

      The original was being held for approval – let me know if you want the original here instead or both.
      (I will respond at some point.)

    • Alex Gamma says:

      Dale, I’m very sympathetic to your viewpoint and how you put things into perspective.

      Just an addition: you say that “It is our relationships with [which?] matter, not the definition of the terms we use to describe these things.”

      When – instead of the question of “intelligence” – it comes to the question of machines one day becoming “persons”, with consciousness and the ability to suffer and all, then it will matter a lot how we view or describe them. (I’m not particularly convinced, though, that we’ll get there soon or ever.)

    • Dzhaughn says:

      “As algorithms rapidly take over many decisions from humans…”

      I think that reflects an apparent desperation but a secret assuagement. (Paraphrasing Borges.)

      Whenever did an algorithm volunteer for anything or insist on anything? Don’t blame the algorithm; alas, these choices are ours to make, and the outcome, alas, is our responsibility.

  7. Curious says:


    This notion:

    “To start, I’ll suggest the authors of the paper Andrew reviewed might wish to consider Robert Kass’ arguments that it is time to move past the standard conception of sampling from a population to one more focused on the hypothetical link between variation in data and its description using statistical models.”

    How can the distribution across a population of either real or hypothetical constructs be ignored when analyzing a problem?

    • Keith O'Rourke says:

      Not ignored, but rather not simply taken to exist or be well defined. Kass and others argue that that is too simplistic and distracts students from understanding statistical practice in many areas.

      • Curious says:

        If that is the case, then I think we need a different way of phrasing it.

      • Curious says:

        If something approaching a population level distribution does not exist — how could we possibly measure it?

        • Christian Hennig says:

          We can have a model, derive predictions from it, and compare these to what is observed in reality.
          Even if the result is positive, it hardly tells us anything about “existence”, whether model assumptions are “truly” fulfilled, etc. It can only tell us that the model is useful for prediction to the extent tested, and that it wasn’t wrong enough to fall at that hurdle.
          All (sets of) observations are compatible with infinitely many models so it is never possible to nail down a specific one as true or “existing”.

          • Curious says:

            I agree with your comments about models. That said, it is important to keep model and existence distinct. If we dismiss reality because we cannot adequately model it, this does not provide justification for any interpretation of a model. If we want to draw a generalizable inference from a model, even if only for predictive utility, we must be able to connect it to the causal sequences of reality in some way.

            Correlating height and weight will not allow us to increase height through an intervention that directly increases weight.

      • Curious says:

        What seems overly simplistic is the notion that employing a random sampling method provides an adequate means of satisfying distributional assumptions of modeling methods.

      • If you don’t like talking about populations existing, how about pragmatic exchangeability assumptions in modeling? There’s usually a lot of heterogeneity that’s not being modeled (other than perhaps by an item-level effect). So there will often be large residual unexplained variance in the outcome of interest. Making exchangeability assumptions and using hierarchical priors can reduce that unexplained variance in measurable ways through calibration on held out data. At least that’s the perspective I try to emphasize in my teaching (and my case study on repeated binary trials).
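        A minimal numeric sketch of that partial-pooling idea (hypothetical counts, and fixed Beta hyperparameters for simplicity; in practice they would be estimated from the data or given their own priors):

```python
import numpy as np

# Hypothetical repeated binary trials: y successes out of n attempts per item.
y = np.array([0, 1, 2, 5, 8, 9])
n = np.array([10, 10, 10, 10, 10, 10])

# No pooling: each item's rate estimated from its own data alone.
theta_no_pool = y / n

# Complete pooling: one shared rate for every item.
theta_pool = y.sum() / n.sum()

# Partial pooling via a Beta(a, b) hierarchical prior: posterior means
# shrink each item's raw rate toward the prior mean a / (a + b).
a, b = 2.0, 2.0
theta_partial = (y + a) / (n + a + b)

print(theta_no_pool)   # raw rates, noisy for small n
print(theta_pool)      # single shared rate
print(theta_partial)   # shrunken rates, between raw rate and prior mean
```

        Calibration on held-out data would then compare predictions under the unpooled versus partially pooled estimates; the shrunken ones typically show measurably lower out-of-sample error, which is the reduction in unexplained variance described above.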

        The rise of the machine(s) learning is due in large part to its pragmatic focus on prediction and its success in building classifiers. It’s really hard to argue with results like Google’s spell checker and the ATM that reads handwritten checks for deposit. And it looks like we get self-driving cars before personal jet packs. One of the advantages of the approach to Bayes laid out in the first section of the first chapter of BDA is its pragmatic, application-oriented approach to model design and selection.

        I also agree with some of Andrew’s recent comments in another post that the statistics perspective can help in understanding and evaluating machine learning methods. There’s value there even if classical or Bayesian estimation techniques aren’t used (often due to massive non-identifiability, combinatorial multimodality, or heuristics like early stopping or adversarial data imputation).

        One area I’d really like to see pushed is probabilistic corpora and evaluation in terms of more decision-theoretic goals. The applied machine learning people do this, but the theoreticians still seem to concentrate mainly on 0/1 loss in all their evaluations. I never found customers in industry who wanted 0/1 outputs—they always wanted ranking and were always concerned about sensitivity/specificity (recall and something related to precision) tradeoffs. Probabilistic corpora are a plug-in update to most machine learning algorithms (you just need weighted data rather than every item being weighted 1).

        The idea that we can make a gold standard of digits from the wild without censoring is just wrong from an informational point of view—there are scratches on envelopes where we can’t tell if the addressor intended to write 0 or 6, or 4 or 9, or 1 or 7, even with context. I realize MNIST was collected in an artificial way, with writers who were told what to write in boxes—still, I would’ve expected a large number of clerical errors there where respondents filled in the wrong number (there were thousands and thousands of trials, and people aren’t that accurate, even for simple tasks like this). So I’d very much like to see how these systems trained to win the MNIST contest work on real digits.

        My bank’s ATM hasn’t performed at 99% accuracy—I’ve maybe deposited a few dozen checks and I’ve had to make a couple corrections (I could’ve gotten unlucky, but I don’t think the checks I’ve deposited were unusually hard to read—I could easily make out what they said). And the ATM has two sources of information—the numeric form and the written form. The problem’s a bit harder in that they need to find those areas on the check, but that’s largely regimented for checks.
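        The weighted-data point can be made concrete with a small sketch (hypothetical numbers): in a probabilistic corpus each item carries a probability distribution over labels instead of a single hard label, and log loss just weights each (item, label) pair accordingly:

```python
import numpy as np

# Hypothetical probabilistic corpus: label distributions per item, e.g.
# annotators think item 0 is 70% likely a "4" and 30% likely a "9".
probs = np.array([[0.7, 0.3],
                  [1.0, 0.0],
                  [0.5, 0.5]])

# A classifier's predicted label probabilities for the same three items.
preds = np.array([[0.6, 0.4],
                  [0.9, 0.1],
                  [0.5, 0.5]])

# Hard evaluation: collapse each item to its single most probable label.
hard_loss = -np.mean(np.log(preds[np.arange(3), probs.argmax(axis=1)]))

# Probabilistic evaluation: weight each label by its corpus probability,
# so genuinely ambiguous items (like the 0.5/0.5 digit) are not forced
# into an arbitrary 0/1 gold label.
soft_loss = -np.mean(np.sum(probs * np.log(preds), axis=1))

print(hard_loss, soft_loss)
```

        The only change to a standard training or evaluation loop is the per-label weight, which is why this is a plug-in update for most algorithms.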

  8. Alex Gamma says:

    “For Peirce, artefacts and machines can be parts of cognitive processes…”

    For a modern take on this, see Chalmers and Clark’s thesis of the “extended mind”:

    “A subject’s cognitive processes and mental states can be partly constituted by entities that are external to the subject, in virtue of the subject’s interacting with these entities via perception and action.”

    Clark & Chalmers, 1998, The extended mind:
    Chalmers, forthcoming, Extended Cognition and Extended Consciousness:

  9. Alex Gamma says:

    “Representation” is a much overextended term.

    The key insight, in my view, is that the only true notion of representation that we have derives from our phenomenal consciousness. It is only there that we know (assuming the existence of an external world) that our visual perception of a tree is a *representation* of the tree because we experience this representation *consciously*. All other uses are derived.

    The second important insight is that derived cases get whatever persuasiveness they have by ignoring that the claimed representation only is one by virtue of there being a human constructing it in their consciousness. Computer code, results of a calculator, pictures from a camera, words in a book: they do not represent things by themselves. The relevant representation happens in our minds. Without minds, there is no reason to speak of representations existing.

    Very similar reasoning applies to the notion of a “function”.

  10. Thomas Basbøll says:

    Thread continued from here

    I think AI enthusiasts have a peculiar disdain for the fleshy side of life (like those aliens in Bisson’s “Meat”) and a peculiar arrogance about how well we understand its mysteries. I think flesh, and especially human flesh, is much more than what science has yet discovered in it. We keep learning more and more about it. So I don’t need something more than meat to make my case. I just need to insist that we don’t yet understand our flesh fully. I think artists are much better at appreciating this.

    Now, Daniel asks me to take a position on the theoretical possibility of Data. My answer remains: however possible he is (I don’t know), he was as possible in 100 BC as he is today, and was in 1816, 1968, 1995… Or we can put it this way: I don’t think Data was more “futuristic” in 1253 than he is today. Inventing him is as imponderably far off in the future as he ever was. What I’m saying is that (strong) AI enthusiasts are no more rational than medieval alchemists trying to make a Golem. And many AI propagandists are as cynical as a good many 15th-century priests peddling indulgences.

    I really like Carlos’s contributions, e.g., “being able to think about it doesn’t make it more possible than being able to think about traveling back in time makes it possible.” That’s an excellent analogy. Is time travel “theoretically possible”? Sure. But only in a sense that has no bearing on whether or not it will ever happen. It’s possible by means hitherto unimagined by us.

    Last point: it is telling that Star Trek had to come up with a back story for Data that made him unique (at least within the Federation). Otherwise, of course, every Star Ship would be intelligent too. In fact, it’s completely unclear (and never dealt with, I think) why Data stays in that ridiculous body, rather than sometimes transferring himself into other machines, or controlling devices as extensions of his body by wireless, etc. Once we think it through we realize that flesh is needed to situate consciousness, to give it its necessary finitude. AI enthusiasts really are mainly seduced by fictions.

    • Corey Yanofsky says:

      I’ll admit, I’m kind of annoyed at the suggestion that artists have a better appreciation for the mysteries of “human flesh” than scientists who spend years engaging in experiments to uncover those very mysteries. Also just in terms of argumentation, maybe don’t follow the description of a fictional character’s backstory as telling (in support, one presumes, of your embodied cognition-ish views) with the assertion that people you disagree with are mainly seduced by fiction.

      • Thomas Basbøll says:

        I wasn’t taking a jab at scientists. I agree (and I gave them full credit for what they’re discovering). It’s the AI propagandists that I think are puritans about bodily fluids.

        (You’re going meta on me, Corey, and I’m not going to follow you down that road. Sorry. I enjoy the conversation, not the meta-commentary.)

        • Corey Yanofsky says:

          But what if such an individual is also an AI enthusiast? (Personally I’m actually an AI alarmist — possibly something like Daniel’s cousin’s boyfriend — but my opinions on what’s possible in principle wouldn’t be all that different from an enthusiast.)

          • Thomas Basbøll says:

            Yes, enthusiasts and alarmists are both wrong (in my opinion) and in the same way. And as Daniel pointed out above, their error is worth correcting because it obscures the real dangers of information technology. We don’t need assurances that the machines won’t become smarter than us and take over. There’s no danger in that. We need to make sure that our little tricks and contrivances don’t interfere with our ability to take pleasure in life.

          • I find plenty of modern “AI” ish research alarming. My particular concern would be for a specialized “set of tricks” that make for autonomous war machines. It’s pretty easy to see how you could program a swarm of 100 pound drones to seek and destroy every living human inside a certain perimeter, delivering hand-grenades and 5.56 NATO ammunition or Sarin gas or whatever, and how say the US Govt (or the North Koreans or whatever) might want to make such a swarm… YIKES

            but I’m not concerned about someone accidentally starting up a “strong AI” on a server cabinet, and then having it decide to say create a massive “worm” program that hacks into and botnetifies all the other server cabinets in the world into a massive global consciousness which then holds all our communications and shipping and financial computing hostage unless we build it more hardware on which to multiply itself until we’re all working for it in the Matrix or whatever… Far more likely that some human-made machines using AI tricks do something awful at the behest of a bad human actor. Or that we put huge numbers of people out of work and have a second, even worse Great Depression before we come up with a new economic system that doesn’t rely on “you have to work to survive, and by the way, robots are cheaper than anything you can do… so goodbye”

            • Corey Yanofsky says:

              Just for the record, “accidentally” isn’t the thing I worry about — more like, “under pressure from the expectation that someone else will start their own autonomous domain-general optimizing agent first”.

            • Thomas Basbøll says:

              One thing to keep in mind is that if the relevant new economic system were implemented (universal basic income, is the right idea) we’d stop hyping up AI and other grand technology projects (like SETI or going to Mars) and just start enjoying the work at hand. Our lives would improve so much, simply from freeing up time for leisure, that we wouldn’t try to automate every last chore out of existence. We’d learn to find satisfaction in doing things that actually need doing.

              If this system had been implemented 100, 1000, or 10,000 years ago, technology would have developed more slowly but human happiness would have grown steadily. Sure, a few lone geniuses would use their leisure to tinker with “thinking machines”. But they wouldn’t have the funding to make any real progress. They’d be, like I say, like alchemists trying to animate clay or turn lead into gold. And that, of course, is what I’m claiming they really are today. We just can’t see it because we’re so hopeful that our meaningless chores will be taken from us.

              • Well, I’m not sure I follow you, but I think “meaningless chores will be taken from us” using “automation tricks”, not general-purpose AI. And I don’t think UBI would stop that from happening. More like, as that happens, UBI will be the only thing that keeps economic information flowing. Dollars are really “bits” of information about what is desired where and how much. When people can’t earn them, they can’t spend them, and when they can’t spend them, we stop knowing what those people want or need, and so we don’t provide for their needs… and in the worst case they die; in the less worse case they spend 100% of their time gibbering to bureaucrats satisfying the particulars of the laws that allow them to receive particular kinds of welfare benefits so they can get their basic needs met… a particularly pernicious form of wealth destruction (wasting people’s useful time on satisfying meaningless rules).

                If we had UBI I think we’d still have plenty of AI hype, but one thing I think we’d have more of is people like the guys who created “Cards Against Humanity” who are on record saying that they give away the rights to the cards under creative commons and sell the physical objects just so that they can have resources to continue creating great games… in other words: they make money so that they can continue to work, not they work so they can make money.

              • Keith O'Rourke says:

                > If this system had been implemented …
                This seems very speculative and not at all a good guess to me.

                Many capable and potentially enthusiastic scholars have chosen to support themselves and their families over the years rather than do research. For instance, I left undergrad studies to support myself and put off graduate studies for 15 years to support my family – UBI would have prevented the first and perhaps the second.

                As for those who chose scholarship and risked starvation, CS Peirce is a case in point (a graduate student did find him unconscious once, due to lack of food). I used to think it was unfortunate that Peirce did not get academic appointments (he only ever had a very short one) or funding. William James and others put together a basic income package for him at one point.

                But I have come to think that that is why he did so much scholarship that was useful to others. For instance, no distractions from university research VPs pushing him to fluff up his CV, spend time getting awards, and give image-building talks.

                Now research does require a lot of resources – but military interests have always seemed to be able to make those available :-(

              • I’ll cop to speculating there. :-)

                But I want to make clear that I’m not questioning that people would work on the project even if they had no funding. I’m saying they wouldn’t be as interested in the distracting apps that extract the profits that motivate investment. (If UBI had been implemented in 1920, for example, I don’t think Facebook would have been of interest to anyone.) So the funding wouldn’t be there. Also, since UBI would bring about permanent peace on earth, the military wouldn’t have as much money to throw around. ;-)

            • Chris Wilson says:

              Let me interject in this long discourse that I wholeheartedly second Daniel’s thought here: “Far more likely that some human made machines using AI tricks do something awful at the behest of a bad human actor. Or that we put huge numbers of people out of work and have a second, even worse Great Depression before we come up with a new economic system that doesn’t rely on “you have to work to survive, and by the way, robots are cheaper than anything you can do… so goodbye””

    • It’s not at all clear to me that time travel is possible. I can think about it of course, but it seems to violate laws of physics.

      On the other hand, as a physics guy who is married to a biologist, and for Corey as someone trained in bioengineering I think, we get used to the idea that at its heart, meat is just all kinds of tiny machines held together by the microscopic equivalent of duct tape and baling wire. I also get used to the idea of separating a system from its environment at an interface. My skin, for example, is a kind of interface between me and the outside world. Similarly, the spinal cord is an interface between my brain and its motor and sensory neurons interacting with my environment. I’m happy to say that meat is pretty amazing. But I don’t see how that makes it any less true that similarly comprehensively amazing electro-mechanical devices would be conscious. The character Data is embedded in a TV show, and so we need it to be a humanoid and have all kinds of limitations, because that’s what makes stuff interesting on a TV show. Who’d want to watch a cabinet full of blinking lights doing some amazing calculations, controlling a puppet army of ant-like beings on a planet’s surface, and simultaneously viewing the surface of the planet from 1 million individual cameras mounted on the ant-like exoskeletons? Try showing a wall full of 1 million separate 1-megapixel images to humans and they’ll spend a few minutes looking at one or two of the images, and then REALLY want to get out of the room because it’s kind of overwhelming.

      You can’t wrap your brain around it in any way, you don’t know what it’s like to be such a cabinet, but you’d have to call it conscious if it can interact purposefully with its environment and communicate to you through some kind of language.

      Now, you might well ask, is an ant colony conscious? And I don’t really have an answer for you. What I can say is that individual ants don’t seem to be conscious, and there’s no particular “centralized” location controlling the colony, so I doubt the colony is conscious. That’s the purpose of my “cabinet of blinkenleitz”: to centralize the control into an object that has the “mind”, and to head off this objection ;-)

      Anyway, we both agree that we’re not close to strong AI, but we’re certainly closer than in 1253. In particular, we know a LOT about biology that we didn’t know even in 1853. We know how to interface electronics to meat in rudimentary ways; we can hook up a chimp brain to some wires and the chimp can control a robot arm with its brain alone. One of the things this tells me is that brains are not inherently different from mechanical devices. It’s sufficient for them to set up certain voltages, and they can learn to generate and use those voltages to control objects.

      So, I think we understand the rudiments of a brain-machine interface, and this makes it possible to think about mind-machine-body interfaces, and so forth. People in 1253 might well have believed in tiny angels on the heads of pins inside the brain pulling puppet strings or whatever; our mechanistic description of biology is FAR FAR more complete than it was in 1253.

      • Further cell-machine interfaces: a machine that uses rat neurons to detect adjacent objects via computation associated with inputs from piezoelectric distance sensors.

        The point of this isn’t to say “gee soon we’ll be growing brains that control robot bodies” or even “soon we’ll be making silicon circuits that replace the brain cells” or anything like that. The point is just: neurons are machines for making complex patterns of voltages.

      • Thomas Basbøll says:

        If I’m right, strong AI is at least 10,000 years away. I’m not sure that means we’re significantly closer than they were in 1253. This goes both for the state of “robotics” (admittedly nascent in 1253) and our understanding of human biology (largely in error in 1253, but not nothing). It depends on a game changing invention or discovery unlike any in recorded … or even pre-history. Fire and the wheel are as nothing to this.

        It’s interesting you mention brain-computer interfaces. I studied this a bit while I was doing my PhD. The team I looked at called theirs a “thought translation device”, though it of course did no such thing. It just offered you an opportunity to train your slow cortical potential to move a cursor on a screen and spell out words. “Thought translation” was hype. And of course it was in the titles of all their peer-reviewed journal articles, as well as in the journalism that covered it.

        • I agree with “thought translation” as hype, but I still think brain-computer interfaces show that neurons are fancy biological voltage controllers. I don’t think about how I touch-type, I just think about the words I want to type, and my hands kind of move in appropriate ways and words appear on the screen. To “just move my hands” my brain fires off certain neurons in certain patterns. If someone can make a robo hand that a person who lost the use of their hands can control at the level of skill that I touch-type (which is pretty substantial) after they spend years of training themselves (like I did) to touch-type with the robot-hand… this shows a form of symmetry between my hands and robot hands. These rudimentary interfaces linked above show that the principle is possible.

          At their core, all biological objects are “machines made of atoms” and this is what makes me believe that other machines made of atoms, where the atoms are not primarily carbon, hydrogen, oxygen, nitrogen, maybe calcium, and various trace elements… could also do a wide variety of stuff. *in principle* a comprehensively “intelligent and purposeful” machine made of silicon, copper, and titanium could exist…

          That it isn’t very likely to appear in the near future is still I think true, but I wouldn’t put a timescale on it, 10,000 years seems too long to me. You might have to work to convince me that we’re absolutely sure humans were conscious 10,000 years ago, you DEFINITELY would have to work to convince me that there were any comprehensively conscious beings 200,000 years ago (I’m not saying there weren’t just that I don’t take it for granted).

          • Thomas Basbøll says:

            I didn’t have time to look more closely into that chimp’s robot arm. I have questions about how it works. They’ve obviously locked down the chimp’s real arm. And its impulse is obviously to reach for the food with it. Whenever it does this, the robot arm dutifully delivers the food. Does the chimp control the hand (i.e., open and close it)? Does it even guide the hand toward its mouth? Or is this robot arm given the simple instruction “close hand, swing right” whenever there’s a signal in the chimp’s brain that may be indistinguishable from the chimp trying to do anything else? The problem is that the situation gives the chimp only one thing it might want to do. So when the chimp signals an intention, any intention, it’s plausible to assume it wants the food. But the situation also only allows for one simple action: getting food. And the robot dutifully delivers. It actually underscores Keith’s point in the title of the post. Artificial intelligence is artifactual. It’s all about reducing the complexity of the environment to a computable level; at that point the computer might indeed “represent” everything that a human would. But a human would quickly get bored. Just as the chimp is not, I imagine, enjoying any of this.


                My understanding is these interfaces wire directly into certain areas of the brain, and the person/monkey learns to move the arm basically by feedback: “imagine something” and see what happens; eventually you learn to imagine words and your fingers fly across the keyboard… It’s not “oh, the monkey clenched its fist, now the robot goes and gets the food that the robot knows is at location x,y,z”; it’s more “the person’s brain is signalling move left, now it’s signalling move up, now it’s signalling turn…” etc.

              • Thomas Basbøll says:

                I’d have to look at the way the training is set up. It does look like the BrainGate chip is sensitive to motor impulses in the brain. That’s convenient, because it lets me use rough arm movements (or restrained-but-attempted arm movements) as signals. I’d want to see how much input the robot arm gets. (According to the video, the hand closes automatically. That’s not an instruction from the operator.) The question is how many options the operator has. The researcher is excited about the difference between a cursor on a 2D screen and the arm in 3D space. But is the “action space” really much more complex? That is, is the operator doing much more than moving a joystick? Is she moving two joysticks? (That’s all we need for complete, 3D control.) I suspect the robot arm is hitting virtual walls all the time, parameters for its range of motion set up in its software. If I’m right, it’s like bowling with railings for the kids so they never get a gutterball.

                I feel the same way about this video as I did in my own research, btw. I had to drop it. They’re doing something really good for these ALS patients, even if they’re not translating their thoughts, and even if the robot arm will never be better than a human assistant. The research itself is giving meaning to the patients’ lives. I couldn’t bring myself to spend a career debunking such devices as “hype”.

              • You’re right, it is probably a very restricted set of motions; otherwise the initial movements would look like the movements a baby makes with its arms: flailing, thrashing, poking itself in the eye, etc. Still, it “reads” motor impulses in the brain and results in motion that is useful to an ALS patient. All that is necessary for my point is that voltages in the brain = mechanical control; neurons are voltage makers. Neurons *are* machines; there’s no real clear bright divide between “natural” and “artificial” machines.

              • There is a qualitative difference between detecting motor impulses and reading motor intentions. Perhaps the chip is distinguishing between thoughts of my left and right arms, left and right hands, perhaps left and right shoulders, and perhaps combinations. That’s a lot of different levers to pull. But my intention to move the robot arm through a particular region of space is located somewhere else in my brain and is not detected by the computer. That is, it does not read my mental representations; it just detects some distinct mental activity. Ultimately, I’m just training my motor imagination to function like a joystick. When I know how to use a joystick to play a video game, it (the joystick) eventually drops out of my attention, and my focus is only on the action on the screen. That can also happen when you’re using your motor impulses with ulterior intentions. At the end of the day, this is just a very fancy lever for someone whose ability to exert effort on one end is severely limited. It’s neither a harbinger of artificial intelligence nor of a mind-computer interface.

              • I don’t doubt that neurons are voltage makers. My point is precisely that that’s why they can’t be the “atoms” of an account of how the mind works. And if the mind reduces to the brain, then they can’t be the sole constituents of a complete account of the brain either. I’m agnostic about the reduction, but I like Carlos’s point that it’s entirely “possible” (i.e., open to “speculation”) that the brain’s consciousness-supporting functions occur at the quantum level. Neurons may be as “medieval” as thinking of brain/mind activity in terms of circulating fluids (humors). Perhaps an account of the brain’s “mentality” will require a complete simulation, not of its electrical states, but of the totality of its “wave function” (or whatever).

              • Thomas: suppose that someone creates a program that runs on a silicon computer chip that wirelessly controls a “spider” robot and is convincingly spider-ish… well it’s easy to see how maybe a spider isn’t conscious, but it isn’t so hard to imagine doing this today. The dynamics of controlling complex “skeletons” is something that Boston Dynamics is fully capable of doing today:

                the more difficult part is a more general behavior: decision making, image recognition, seeking mates, whatever.

                From the “gist” of your argument, what I hear is that even if we convincingly model all of this, the virtual spider won’t have even as much “consciousness” as a real spider… and that if I step by step move towards, say, chimpanzees and eventually have a robot that displays *all* the behaviors of a chimpanzee, from picking its nose to using tools to extract termites, the “mere fact” that it behaves *comprehensively* as a chimpanzee does not mean that it is conscious.

                This means you believe in the logical possibility of a p-zombie. Because a “chimp robot” that does everything a chimp does but doesn’t have chimp-level consciousness is by definition a behavioral chimp p-zombie.

                If you believe in the logical possibility of a p-zombie, how do you know that the p-zombie virus hasn’t overtaken say 50% of the current human population? Perhaps your bus driver or the sandwich maker at your local restaurant or the president of the United States is a p-zombie… totally unthinking, unfeeling, completely devoid of consciousness but still able to do all the things that regular humans do. I just don’t buy it. It’s not logically possible.

                The point here is that it *is* logically possible (note this is different from actually technologically possible) to create a robot that displays *all the behaviors* of a human. The existence of humans proves this, because humans *are* robots. In particular humans are made of carbon, oxygen, hydrogen, nitrogen and various other trace atoms. I simply deny the existence of anything other than atoms and their associated sub-atomic particles and the fields they produce etc. There’s not a “special neural sauce” in between the atoms.

                Saying that you need some “quantum special sauce” for consciousness is maybe fine with me. But saying that the special sauce “must be in a neuron or related/associated DNA/RNA/Protein based cell” is not fine with me. That we may need a comprehensive quantum computational device to really get to the point where we can implement all the behaviors needed for consciousness is ok as a hypothesis, it’s not logically ruled out in my world view, but in my world view, there’s nothing keeping that from being embodied in extremely complex 3D doped silicon structures. We already have macro-scale quantum nondeterministic behaviors in today’s Intel chips: the RdRand instruction generates a random number in part by quantum nondeterminism. Of course, I’m not suggesting that this is all we need, but it just *can’t* be the case in my opinion that only neurons can “compute” consciousness. Saying that implies some asymmetry in the laws of physics that makes “neurons” special. And there simply isn’t such an asymmetry in the laws of physics.

              • Keith O'Rourke says:

                > “chimp robot” that does everything a chimp [human] does but doesn’t have chimp[human]-level consciousness
                I would want to discern that they are representing something purposefully and reflecting on that. To get to the “externalizing the representations allow us to stare at them, if not them stare back at us. And publicly, so others can also stare and perhaps point out how the representations could be made less wrong.”

              • Thomas Basbøll says:

                Daniel, I already offered you a much simpler problem than that spider: C. elegans. Here we have the complete connectome and no working behavioral model, even in simulation, let alone a robot in a real environment. Your spider would need to pass its Turing test for at least six months in a reasonably complex environment. (We could put a real spider in an identical environment as a control.) It would need to spin, mate, kill and die in a realistic way. I’m happy to let its brain communicate with it by wifi, but its input must come solely from the spider itself. The computer, additionally, cannot have access to a database of observed spider behavior. I think this robot is also at least 10,000 years away (and forever, if I’m right).

                “I simply deny the existence of anything other than atoms and their associated sub-atomic particles and the fields they produce etc.” But you don’t, I hope, deny the possibility of undiscovered subatomic particles or fields that, once discovered, increase our predictive power over observed phenomena. Behavior, let’s agree, is an observable phenomenon over which we don’t yet have complete predictive power (not for humans, nor spiders, nor C. elegans). It’s not just that we need a bigger computer to compute all the elements of an existing theory. We may need any number of new theoretical particles and hypothetical fields to get the math to work out. Maybe the brain is a tangle of 13-dimensional superstrings in a behaviorally significant way, who knows. The point is just that we don’t even know what we don’t know that might be relevant. All we know is that we can’t predict how even 302 neurons produce behavior.

                And every time we discover a little causal mechanism in the brain, we are discovering something that is just another, like I say, little box that to our conscious eye represents whatever objects it contains. The representation is forever in the eye of that beholder, not in the mechanism we have found.

              • I don’t deny the possibility of additional physics, I just doubt strongly that it will have anything to do with behavior. We have a very very very comprehensive description of non-relativistic electrodynamics, which predicts experimental results accurately to 13 decimal places or more, and I just don’t think that anything more than electrodynamics is needed to explain biology. Essentially everything we experience is due to electrical forces and gravity. Sub-nuclear physics is interesting but just doesn’t enter as far as I can tell.

                The big problem, in my view, with the OpenWorm project is that it’s actually not at ALL a simpler problem than a spider mechanically. The locomotion of a squishy bag of muscle cells is far more complex to model than something like a spider exoskeleton, which can be approximated as a jointed set of rigid elements. So what you see on YouTube if you look at OpenWorm is people running the connectome on much, much simpler-to-build hardware:


                or people working on some kind of rigid approximation to the more realistic mechanics without worrying about the neurology at all:


                Spiders are mechanically simpler, and neurologically much more complex. Since you and I are interested in questions of neurology, I think it’d be much more interesting to simulate a spider, use WiFi to transport the commands to the mechanical object and to collect the sensor data, and then have a nice server closet full of tons of computing power to try to simulate the neurology directly… see how it goes.

              • Thought experiment: create a large number of robo-spider exoskeletal machines, add solar panels, and artificial neural networks with randomly initialized weights. Let them run for a while in a south-facing room with lots of light. Select for those neural networks whose batteries are still charged after N hours to remain in the population; those that seek each other out and touch special antennas together briefly get to “reproduce” additional copies of themselves. Those that use special “venom antennas” to touch other robots get to charge their batteries while those that are envenomated die right away. Run the simulation entirely in the computer using very coarse simulations for a while until you get reasonable candidate behaviors, and then run it in the actual hardware for weeks on end. By the end I strongly suspect you’d see spiders following patches of light as the sun moves, fighting over these patches, hiding from each other, and “mating”… it’s not that hard to imagine.

                I took a whole course on artificial life simulation from Daniel Ashlock back in the ’90s: he had various toy scenarios like this using pure computer simulations and complex fitness functions. He’d run them through evolutionary selection for weeks on end, and then look at the behaviors that evolved. You’d see very obvious patterns of behavior emerge.

                For example, there would be little “light beacons” in the environment that gave each “bot” energy. When nothing else mattered, the bots would just spin wildly around the “room” gobbling the beacons. But when collisions caused damage, the bots would evolve to avoid each other or the walls using controlled motions. When time didn’t matter, they’d go very slow to avoid collisions; if time mattered more, they’d tend to go slow in tight areas and accelerate rapidly when they had open areas, etc. The environment was simple enough to simulate in a computer at super-real-time speeds, but the design of the bots was also simple enough that you could reproduce the bot as a physical object, program its 4 or 5 neural circuits as real op-amps, and off it’d go reproducing the computer experiments in the real world…
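                The select-and-mutate loop those courses used can be sketched in a few lines. This is a toy illustration, not Ashlock’s actual code: the 4-weight “bot” and the speed-vs-caution fitness function are invented stand-ins for whatever the real fitness functions rewarded.

```python
import random

def fitness(weights):
    # Invented stand-in fitness: reward "speed" (first two weights) and
    # penalize recklessness (magnitude of the last two), loosely echoing
    # the speed-vs-collision tradeoff described above.
    speed = weights[0] + weights[1]
    caution_penalty = abs(weights[2]) + abs(weights[3])
    return speed - caution_penalty

def mutate(weights, scale=0.1):
    # Small Gaussian perturbation of each weight.
    return [w + random.gauss(0, scale) for w in weights]

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]
initial_best = max(fitness(w) for w in population)

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                               # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(40)]
    population = survivors + offspring                        # elitism + mutation

final_best = max(fitness(w) for w in population)
print(initial_best <= final_best)  # → True: elitism guarantees no regression
```

                Even a loop this crude shows the basic point of those experiments: selection pressure alone, iterated long enough, pulls recognizable “behavior” out of randomly initialized weights.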

                I agree with you about LOTS of hype in the world of AI. In one sense, you know that someone is serious about research qua understanding when they are willing to work on something simple like these “bots” because the little “bot” is not sexy at all like “OpenWorm” with its 302 neurons and smooth-particle-hydrodynamic locomotion simulations, and blablabla… but it gets at the heart of real questions: How do behaviors arise as a consequence of relatively simple connectionist computing and relatively complex artificial evolution?

                So, I guess that experience from long ago colors what I think is really possible. That being said, I think we’re on the same wavelength when you talk about the quantity of hype and how it affects our ability to actually get anywhere answering the real questions rather than just checking off the boxes that make our project “TED-cool and radically fundable”. Ashlock left Iowa State University for Guelph in part because of the “hype problem”.

              • Thomas Basbøll says:

                Daniel, we disagree about the comparative complexity of spiders and C. elegans. I’m confident that C. elegans is a simpler system and that its behavior is easier to predict than that of a spider. It therefore also constitutes an easier problem for simulation than a spider.

                The sun-chasing robots you are proposing are not spiders. They are obviously (on your description) machines operating like machines. They are not “behaving”. Their “adaptive” abilities are nothing like actual evolution. I would not be impressed with them qua progress towards AI.

                “How do behaviors arise as a consequence of relatively simple connectionist computing and relatively complex artificial evolution?”

                It’s a classic scientific question. And I hope we agree that the answer is a long way off. It turns out to be a long way off in the case of C. elegans. My beef is not with the question. It’s with the presumption that it’s just a matter of time before it is answered and that its answer will be complete enough to reduce the mind to “relatively simple connectionist computing.”

                How does personality arise as a consequence of four interacting bodily fluids guided by the providence of a supreme being?

                That’s a question that serious people were also once asking. It never resulted in a good predictive model of what people (or any other creature) actually do. Until we have such a model, I’m simply not going to be impressed (the hype depends on our being unduly impressed with what computers can do). We do have impressive models for inanimate matter. We know much, much less about living creatures and almost nothing about the workings of our own damnable souls. Or, I should say, we know nothing about our souls that reduces neatly to a combination of “relatively simple connectionist computing and relatively complex artificial evolution”.

          • Carlos Ungil says:

            I find it interesting that you’re more inclined to see consciousness in a computer programmed to keep a conversation than in the first homo sapiens or even the people living in the Neolithic (who had “invented” language, figurative painting, farming, cattle, etc.).

            One last comment before leaving: There are theories about how consciousness emerges from quantum processes in the brain. They imply that what you need to have consciousness is an actual brain, not a model of the brain or a simulation of the brain or an ad-hoc machine that behaves as a person with a brain. What matters is not how you behave, but how your brain works. Of course these theories are completely speculative… as everything else that can be said on the subject.

    • Alex Gamma says:


      You’re concocting a theoretical mix of p-zombies and some Searle that doesn’t really make sense. To repeat, the p-zombie is a device in a thought experiment designed to show that physical properties are not sufficient to explain/cause consciousness. This has nothing to do with Searle’s claim that only biological matter can cause consciousness. So your statement

      “If there isn’t [such a thing as a behavioral p-zombie] then any ‘platform independent’ implementation of a walking talking undergraduate student is conscious whether it has a silicon brain or a regular one.”

      does not follow. What really follows from the nonexistence of p-zombies is only that consciousness is physical, not that there’s any guarantee that you can build one in a non-biological system. This latter claim you’re importing from somewhere else, I don’t know where, but certainly not from the p-zombie argument.

      I also don’t see where you get your “next step in anti-physicality” from: the claim that human brains don’t make consciousness either. Who holds that view? It’s certainly possible to hold it if you are a substance dualist, but not many scientists or philosophers are. Typical anti-physicalist positions hold e.g. that consciousness in some form is a fundamental property of the world (alongside mass, charge etc), or that it is ubiquitous. In both cases there would be strong reasons to assume that it was to be found in brains, too. So I’m not sure if you believe you’re arguing against a popular position here, or just against Thomas.

      Finally, your “If you accept a pure p-zombie can exist, and actually can exist even if it has a real brain, you’re a special kind of full on anti-physicalist, particularly if you don’t have a specific assertion for what is required beyond a ‘mere brain’…” again seems to show that you confuse the properties of the brain with the *physical* properties of the brain. The zombie argument, when accepted, doesn’t imply that mere brains cannot create consciousness, rather it implies that if the brain generates consciousness, it does so *also* by virtue of some *non-physical* properties.

      • From the Wiki: “However, physicalists like Daniel Dennett counter that Chalmers’s physiological zombies are logically incoherent and thus impossible.[3][4]”

        I’ll put myself down as a physicalist and a fan of Dennett. I therefore believe that if it acts like a conscious being it *is* a conscious being. This is the only definition of consciousness that makes sense to me, and it makes it so that I can realize that *you* and my children and so forth are conscious beings.

        Given that opinion, I think that *if* you can build one in a non-biological system, it must be conscious. I don’t guarantee that this is technologically possible. I just find it extremely likely to be possible. Put another way, if conscious life evolved anywhere else in the universe, it would seem extraordinarily coincidental if it were entirely similar in DNA/RNA/protein structure to life here…

        >So I’m not sure if you believe you’re arguing against a popular position here, or just against Thomas.

        I was arguing against what I took Thomas’s position to be, something about “magic clay” and “needing a heart” and a lighthearted nod towards the name of god being required and so forth. See for example:



        Similar comments where he takes the position that something besides the electrical activity of neurons is required for consciousness, and that maybe having a heart is required for feelings (artificial hearts notwithstanding… I now take it that he’s being poetic, really, rather than scientific).

        Again, I just do not accept the existence of p-zombies; I think they make no logical sense. Therefore, for me, whatever consciousness is, one example of it is *identical* with the *physical* properties of brains. I believe Christopher Reeve was still conscious after his spinal cord was severed, and I believe that he would still be conscious if his arteries could be perfused externally and his head removed from his body, as gruesome as that sounds. The consciousness is situated inside the brain, and I believe this because of all the evidence that consciousness changes when we alter the brain (stroke, injury, drugs, anaesthesia) and that it doesn’t dramatically alter when we injure other parts of the body (losing a limb, artificial heart, etc.).

        • Carlos Ungil says:

          > I’ll put myself down as a physicalist and a fan of Dennett. I therefore believe that if it acts like a conscious being it *is* a conscious being.

          Maybe you are a functionalist like Dennett. Functionalism might imply physicalism but they are clearly not the same thing. Or maybe you are a behaviourist, some of your arguments including this quote point in that direction.

          “Theorists have not been much interested in the possibility of behavioural zombies, creatures that are exactly similar to conscious creatures in their behaviour but which lack consciousness. This is because almost all of the contending theories of consciousness admit that it is even nomologically possible that two things could be behaviourally identical and yet differ in their states of consciousness.”

          By the way, “if it acts like a conscious being it *is* a conscious being” reminds me of Monty Python (we burn witches, we also burn wood, wood floats and so do ducks, therefore if she weighs the same as a duck, she’s a witch).

          • Well, I know of no other logical system that lets me conclude that Carlos Ungil is conscious. I witness that you display *comprehensive* behaviors of a conscious being, responding very intelligently on a wide, wide variety of subjects… I think this needs to mean you’re conscious. If I also have to verify that there is “special sauce,” particularly *nonphysical* special sauce… then I have really no way to know that anything else is conscious. It makes no sense to me that there be any doubt in my mind about your consciousness, so I accept that *comprehensive* behavior is sufficient to define consciousness.

            Of course, if outside conversations about the philosophy of Bayes you kept responding “tell me more about X” and “that’s interesting” and other meaningless phrases, I’d conclude you’re a specialized “conversation bot” rather than a conscious being. Specialized “conversation bots” are tricks precisely because they aren’t *comprehensive* in their abilities.

      • Also from the wiki: “A p-zombie that is behaviorally indistinguishable from a normal human being but lacks conscious experiences is therefore not logically possible according to the behaviorist, so an appeal to the logical possibility of a p-zombie furnishes an argument that behaviorism is false. Proponents of zombie arguments generally accept that p-zombies are not physically possible, while opponents necessarily deny that they are metaphysically or even logically possible.”

        If you believe in p-zombies being logically possible, you refute the idea of behaviorism… whereas if you think they are illogical, then you can logically stick to the behaviorist view… which I do. Anything that behaves *comprehensively* as a conscious being must be conscious.

        I thought it was all fairly understandable but evidently something about my arguments has confused you. I still see it all as very clear and connected to the core of the p-zombie argument.

        • Alex Gamma says:

          I get the impression that besides Daniel mixing up p-zombies with Searle, there’s another mix-up going on regarding mental ascription (Dennett’s “intentional stance”). I get that impression from Daniel and Tom Dietterich’s statement below.

          Consciousness is categorically different from other mental concepts like “intelligence” or “understanding”. (Phenomenal) consciousness is a natural phenomenon that every normal conscious being can immediately observe/experience and verify for themselves. All other mental concepts are ultimately abstractions and only gain independent reality to the extent they have consciously experienced aspects. That also means that to the extent they are not experienced consciously, these mental concepts are ascriptions that we make from the third-person perspective based on broadly behavioral (incl. neurobiological) evidence. There is some opinion (“subjectivity”) involved in deciding whether to ascribe intelligence or understanding to, e.g. an artificial system.

          Although consciousness also has an “ascription problem”, it is importantly different from that of intelligence and other mental concepts: first, there is a matter of fact about each organism whether it has consciousness or not. And it is every bit as “matter-of-fact” as the question of whether there is a tree before my eyes or a stone in my hand. Second, the ascription problem comes in only because the nature of consciousness is such that it can only be strictly verified by the bearer of consciousness him-/herself. In that regard, consciousness is unlike any other natural phenomenon. That means that, *as with intelligence etc.*, outside observers have to make ascriptions of consciousness based on broadly behavioral evidence. However, these ascriptions do not settle the question in the same way as ascriptions of intelligence do: there is still an independent matter of fact whether any given system is conscious or not, it’s just that we cannot verify this due to the very special nature of consciousness. The practical question will (presumably) always be whether or on what grounds to ascribe consciousness to an artificial system, but this should not obscure the fact that for any given system, there is a matter of fact whether there is something it is like for that system or not (i.e. whether it has consciousness or not).

          • Keith O'Rourke says:

            > “there is still an independent matter of fact whether any given system is conscious or not, it’s just that we cannot verify this due to the very special nature of consciousness”
            This seems like a very key point.

            (I believe Peirce speculated that consciousness was just awareness of one’s own representing and re-representing behaviors and the consistency of it. That likely cannot be verified either.)

  11. Tom Dietterich says:

    Here are a couple of remarks from the AI research perspective.

    1. A thought regarding “consciousness”: AI researchers are discovering that in order to make our systems robust, we need them to monitor their own behavior. They need to check whether the inputs are anomalous, whether the information flows in the software are unusual, and whether the output conforms to an expected distribution. (Incidentally, we use statistical models for each of these, of course.) These are early steps toward making AI systems that have a degree of self awareness. So this aspect of consciousness is arising in response to an engineering requirement.

    2. The role of simulation. Stuart Russell likes to point out that a computer that simulates playing chess is actually playing chess. I like to point out that a computer simulating human empathy is faking it. If a computer says “I know how you feel”, it is lying. A computer is not made of meat, so it cannot have the same internal experiences that we have. At some level, the substrate matters. And at other levels (and tasks), it does not.

    3. Realism vs. the “Intentional Stance”. Many AI researchers, including me, are not comfortable with realist terminology regarding “intelligence”, “consciousness”, “representation”, “agent”, “goal”, “action”, “knowledge”, “cause”, etc. Instead, we prefer to follow Dennett and treat these as formal terms that can be applied to model, predict, and debug the behavior of our AI systems (and of people). From this formal perspective, we judge a system to be intelligent if it takes actions that achieve its goals. The goals could include efficiency, risk, and so on in addition to things like “correctly translate this Chinese sentence”. This functional view is productive as a research paradigm and useful as an engineering discipline. We leave it to the neuroscientists and philosophers to determine whether AI research gives any insight into human intelligence.

    5. Regarding the original post: Every year, machine learning researchers discover another part of statistics that they need to know. This year, the most salient topic is the question of how the data are collected and what biases may result. We are also deepening our appreciation for making inferences about causality, because causal models are able to make predictions outside the training distribution. In short, the convergence between ML and Stat continues! And we all love Hamiltonian Monte Carlo and automatic differentiation!!
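    The self-monitoring idea in point 1 can be sketched very simply: compare each incoming input against the distribution seen during training and flag outliers. The following is a minimal illustration with invented numbers and an invented z-score threshold; real systems use far richer statistical models for inputs, internal information flows, and outputs.

```python
import statistics

# Minimal input-anomaly monitor: flag inputs that fall far outside
# the distribution observed during training (numbers are invented).
training_inputs = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
mu = statistics.mean(training_inputs)        # 5.0
sigma = statistics.stdev(training_inputs)    # 0.2

def is_anomalous(x, threshold=3.0):
    # z-score test: an input more than `threshold` standard deviations
    # from the training mean is treated as out-of-distribution.
    return abs(x - mu) / sigma > threshold

print(is_anomalous(5.1))   # → False (typical input)
print(is_anomalous(12.0))  # → True (far outside the training distribution)
```

    Even this crude check captures the engineering requirement: the system has a model of what its own inputs normally look like, and notices when reality departs from it.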

    • Keith O'Rourke says:

      Thanks – has remark 5 been written up somewhere that can be quoted? ;-)

      I recall Geoffrey Hinton remarking in a talk sometime in the 1990s: “In ML we have two choices, learn statistics or make friends with a statistician. I am not going to comment on which of the two is easier!”
