The Materialism of Limited Toolset

Wednesday, January 19, AD 2011

I make a point of listening to the EconTalk podcast each week — a venue in which George Mason University economics professor Russ Roberts conducts a roughly hour-long interview with an author or academic about some topic related to economics. A couple of weeks ago, the guest was Robin Hanson, also an economics professor at GMU, who was talking about the “technological singularity” that could result from perfecting the technique of “porting” copies of humans into computers. Usually the topic is much more down-to-earth, but these kinds of speculations can be interesting to play with, and a couple of things really struck me listening to the interview with Hanson, which ran to some 90 minutes.

Hanson’s basic contention is that the next big technological leap that will change the face of the world economy will be the ability to create a working copy of a human by “porting” that person’s brain into a computer. He argues that this could come much sooner than the ability to create an “artificial intelligence” from scratch, because it doesn’t require knowing how intelligence works — you simply create an emulation program on a really powerful computer, and then do a scan of the brain which picks up the current state of every part of it and how those parts interact. (There’s a Wikipedia article on the concept, called “whole brain emulation,” here.) Hanson thinks this would create an effectively unlimited supply of what are, functionally, human beings, though they may look like computer programs or robots, and that this would fundamentally change the economy by creating an effectively infinite supply of labor.

Let’s leave all that aside for a moment, because what fascinates me here is something which Roberts, a practicing Jew, homed in on right away: Why should we believe that the sum and total of what you can physically scan in the brain is all there is to know about a person? Why shouldn’t we think that there’s something else to the “mind” than just the parts of the brain and their current state? Couldn’t there be some kind of will which is not materially detectable and is what is causing the brain to act the way it is?

Continue reading...

15 Responses to The Materialism of Limited Toolset

  • This is like saying, “I’ve examined books with the most powerful microscopes and chemical detection kits, and I can’t detect anything except ink and paper. Therefore books do not refer to anything else and do not contain any ‘meaning’ — it’s all just ink and paper.”

  • The atheist would respond by saying:
    1. Our non-deterministic mind may be like a computer’s random number generator. In certain situations, or perhaps constantly, our brains pick random paths and this can be emulated by a computer though obviously the computer would end up picking different paths.
    2. Things like appreciation for beauty and justice are hardwired.
    3. It’s illogical to believe in something that has no proof of being, or at the very least it’s reasonable not to believe in something that has no proof of being.
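Point 1 above can be made concrete with a few lines of Python. This is only a toy sketch of the “mind as random number generator” idea; the paths and seed are invented for illustration:

```python
import random

# Toy sketch of the "brains pick random paths" idea from point 1.
# The paths and seed are invented for illustration.
def pick_paths(seed, n=5):
    rng = random.Random(seed)  # a PRNG: deterministic given its seed
    paths = ["left", "right", "straight"]
    return [rng.choice(paths) for _ in range(n)]
```

Replaying the same seed replays the exact same “choices”, which is the crux of the determinism worry: a PRNG only looks random.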

  • SB,

    Well, I think it’s a bit different, in that a book is a static record of information, while a human brain clearly has a lot going on in it — it’s just unclear to me that the measurable activity includes the actual cause of the activity. But I’m having trouble coming up with another analogy. Perhaps trying to replicate a car and expecting it to drive itself around while neglecting to account for the existence of a driver?

    RR,

    Oh, and believe me, I’ve encountered those in conversations. However:

    1. The random explanation does not seem to explain the actual experience. My experience of why I married my wife rather than someone else seems neither deterministic nor random; it seems chosen.
    2. If so, there’s no particular reason we should adhere to them, and yet most people do not think that. (Actually, more frequently, I’m told that justice and beauty are evolutionary adaptations for efficiency and can be arrived at through game theory, but again I don’t think that fits with our experience.)
    3. My whole beef with this line of thinking is that we do have evidence for the existence of the will — the evidence of experience. But in this line of thinking we completely dispense with that experience of being an I who decides things and instead assume that we’re not, simply because the particular set of tools we are using isn’t able to come up with a measurable thing which correlates to our experience. Now, I can accept it if someone is willing to explicitly say that he’s making a dogmatic choice to believe in the existence only of what is physically measurable, but I’m unclear why that should be considered an obvious or even necessarily rational choice.

  • They’re still better off trying to make artificial intelligence, because it will be the will and intellect of man essentially presupposing decision questions and programming in the appropriate answers. If they ported the brain of a man into a computer, the computer would fail miserably, but would have the benefit of proving the effect of the will.

    Here’s what would happen: a robot run on a ported human brain would not have the will to help keep it in check. Let’s say the robot is set off to engineer a hybrid melon that is larger, sweeter, and juicier. The robot will start comparing known melon varieties and then naturally, because it has the mind of a man, start thinking of boobs. There will be no will to consider social norms or inter-human consequences and then divert the attention to the task at hand. The robot will then head off to grope the nearest woman and won’t stop until someone pulls the plug.

  • Doubt it? Then address how a harmless discussion about artificial intelligence led to mentioning boobs.

    🙂

  • So in all those Star Trek episodes where Kirk had to make an evil super-computer blow up by telling it something like, “Everything I tell you is a lie,” the easier approach would have been to send Uhura in to flash the computer?

  • “…the easier approach would have been to send Uhura in to flash the computer?”

    Gives a new meaning to flash drive.

  • I think a computer with a ported human brain would still have a self-preservation instinct.

  • The funny thing is, I already know what would happen when the copy failed: it would be decided that the computer wasn’t set up right, or didn’t account for interactions properly, or had some other hardware failure.

    Failure is always a hardware problem, not a theory problem.

  • How’s this for an analogy: you walk down a beach with a metal detector. You find nothing but metal. You conclude that there’s nothing buried in the sand but metal, and since you’ve swept it already, there’s nothing left buried in the sand.

  • If it were possible, then the real question is whether this human-computer hybrid will have the same mental defects that humans do. If so, what will an interconnected, pervasive, system-wide binary intelligence with feelings of envy, greed, lust and pride do?

    Will it be SkyNet, or will it be the Borg?

    Either way, nothing good can come from it. One has to wonder why Bill Gates has been hiring biologists at an alarming rate? What is he really up to? With his intellectual inheritance of population control and eugenics – it could go either way – wipe people out, or assimilate them. Hmm . . . it is much more pleasant to think about a different kind of boob than Bill Gates.

  • AK – You raise a good point: why would anyone want to recreate the human brain, if not for its will? To a materialist, the human brain is only a thinking machine, and a screwy one at that. So why enshrine it? Why limit a computer to the confines of human thought?

  • Initially I thought that speeding up the computerized brain would be a benefit, but trying to make it do things that the biological brain wouldn’t might drive it crazy.

  • Pinky,

    If you are a materialist, then you necessarily live in fear of being wiped out of existence as if you never existed in the first place, since material existence is all that there is. Liberated from the oppressive commandments of an imaginary god, all ten of those pesky thou shalt nots, you are free to do all that is within your evolutionary impulses and technological know-how.

    What could be better than ‘living’ on forever, so you can become god, yourself? Since your thoughts and superiority are naturally selected by chance, then you MUST exercise this superior power, before another intelligent monkey figures it out and uses it against you.

    This brain download thingy will make the one who controls it, the god of the machine that is our paltry meaningless existence. Boy, I wish I’d spent more time studying computer science, now I’ll never get to be god.

  • Darwin — in either case, the question is whether the material, directly observable object is all there is, or whether there could be something beyond that.

The Promises of Artificial Intelligence

Friday, January 16, AD 2009

Most of us are familiar with some concept of artificial intelligence, be it Data from Star Trek: The Next Generation, C-3PO and R2-D2 from Star Wars, HAL from 2001: A Space Odyssey, Skynet from The Terminator, or Joshua from WarGames, to name a few popular examples. We’ve long been introduced to the notion of the struggle to determine whether artificial intelligence constitutes life and whether these beings, which we have created, deserve rights. We’ve also come across the notion of whether we need to restrict these beings so that they cannot turn and extinguish human life (think Asimov’s Three Laws of Robotics, and movies like The Terminator and The Matrix, where the artificial intelligence has turned on humankind). Yet we very rarely hear the debate as to whether such artificial intelligence can ever be a reality. In fact, partially due to the promises made in the ’50s and ’60s, many people think that super-intelligent machines are destined to occur any day now.

Continue reading...

15 Responses to The Promises of Artificial Intelligence

  • The books of Father Stanley Jaki pretty well cover the topic.

  • “Did you know that we cannot truly generate random numbers on a computer?”

    Ryan! It warms my heart to see this post and that statement. I was just having a conversation with my wife the other day about this very thing. (I think we’d just watched an episode of Battlestar Galactica and the whole Cylon thing sparked it.) I was telling her about my grad school class in math modeling and operations research, and how random number generators always need algorithms with seeds. My take on the whole problem is the same as yours. If the cosmos is just colliding atoms without supernature, how do we escape determinism? Just how sophisticated would a computer have to be to mimic a human mind and be self-aware? What is “understanding” and “meaning” in such a universe???

    Sometimes I just don’t get materialists…
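The point about seeds can be seen directly by writing a generator out by hand. A linear congruential generator (the constants below are the common Numerical Recipes ones) is nothing but deterministic arithmetic applied to a seed:

```python
# A minimal linear congruential generator: "randomness" that is really
# just modular arithmetic applied to a seed.
def lcg(seed):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

gen = lcg(42)
first_five = [next(gen) for _ in range(5)]
```

Run it twice with the same seed and you get the identical sequence, which is why no such algorithm can, of itself, produce genuinely random numbers.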

  • In your introduction you state that we rarely debate whether AI is actually possible. Actually I think that there is way too much time spent on this question. All the available evidence indicates that the universe is Turing computable. If anyone can prove, or even find any evidence at all that there was a part of the universe (such as the human mind) that was not Turing computable that would be a huge revolution in physics bigger than anything since Newton.

    And that’s the problem with any contention that AI is not possible. A scientific demonstration that AI is not possible would amount to such new physics as I just mentioned above. Without a scientific demonstration you are left with saying that you could have something which passes every test you can devise for intelligence and yet you do not regard as being intelligent (likewise conscious etc.). This has the standard solipsistic problems. So unless this is the possibility you are considering then the idea that AI is impossible (rather than just very very difficult) is mere wishful speculation and will remain so until some actual evidence is presented.

    I should also point out that Turing computation isn’t the only possible determinist framework for physical theories. But for you to be right would really imply that some form of hypercomputation is at work within the human brain/mind. Hypercomputation is a research interest of mine and take it from me, there is no evidence that my research is physically relevant (let alone relevant to the philosophy of mind)!
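For readers who haven’t met the term, “Turing computable” means computable by a Turing machine. A minimal simulator makes the notion concrete; the machine below (which just flips every bit on its tape) is invented for illustration:

```python
# A bare-bones Turing machine simulator. `rules` maps
# (state, symbol) -> (new_state, symbol_to_write, move).
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        symbol = cells.get(pos, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, halt at the first blank.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}
```

Anything whose behavior can be reduced to a (possibly enormous) rule table of this kind is Turing computable in the sense Barnaby is using.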

  • “This has the standard solipsistic problems. So unless this is the possibility you are considering then the idea that AI is impossible (rather than just very very difficult) is mere wishful speculation and will remain so until some actual evidence is presented”.

    This is asking to prove a negative. If AI is possible, it is AI that must be demonstrated. Among the great problems [as usual] is that of defining intelligence. I take it to be the ability to make connections [inter legere] without having to install the connections in the machine. In a phrase, can the machine make its own connections?

  • It isn’t asking you to prove a negative because there are examples of evidence that would make the contention that AI is impossible more plausible:

    1) Finding a problem class which can be solved by minds (reliably) which is not Turing soluble. An example would be the Turing halting problem; another, the word problem.

    Technically you’d need to show that the minds can do this without significant external input to rule out nature containing the necessary information but this is a logical subtlety.

    2) You could find new laws of physics that are not Turing computable (or Turing computable with some random noise added).

    If the laws of physics, relevant to the functioning of the human brain/mind, are Turing computable and we reject a solipsistic position then artificial intelligence is possible (or at least as possible as normal intelligence!). Now in order to contend that it is not, one would have to show that there are laws of physics that are relevant to the human brain/mind which are not Turing computable. A solipsistic position wouldn’t help because then you could not demonstrate that other people were intelligent.

    As I said before, demonstrating either (1) or (2) would qualify you for a Nobel Prize. This doesn’t mean you can’t! But it does make me doubtful.

    Furthermore the argument I am trying to make is for the possibility of AI in principle. Thus it is not necessary for me to exhibit an AI to prove my point. I doubt anyone will do that for at least another decade or two.

    Incidentally I meant “the philosophy of mind” in my original comment.
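The halting problem mentioned in (1) rests on a diagonal argument that can be sketched in code. `claims_to_halt` below stands in for a hypothetical halting decider; no correct one can exist, and the sketch shows why — whatever fixed verdict it gives about `diag` run on itself, `diag` is built to do the opposite:

```python
# Hypothetical halting decider (no correct one can exist; this one
# hard-codes an answer so the contradiction can be exhibited).
def claims_to_halt(func, arg):
    return True

def diag(f):
    if claims_to_halt(f, f):
        # A real diag would loop forever here, making the decider's
        # "halts" verdict wrong.
        return "would loop forever, contradicting the 'halts' verdict"
    return "halts immediately, contradicting the 'loops' verdict"
```

Either branch of `diag(diag)` refutes the decider, which is the heart of Turing’s proof.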

  • “Did you know that we cannot truly generate random numbers on a computer?”

    This is not quite correct. As far as we know, nuclear decay is non-deterministic and has been, and can be, used in random number generators. Other sources of (as far as we know) truly random or random-enough numbers exist, including taking photographs of incoming cosmic rays, the time and type of user input, and so on. This is not limited to seeding the generator, but, for example, the UNIX device /dev/random will force anything reading bits from it to wait until it has got enough entropy before continuing.

    But anyway, you don’t provide anything to tie together free will and self-awareness on the one hand, and intelligence on the other. You equate free will with nondeterminism – very dubious since it gets the “free” bit right but what happens to the will? A computer program which uses true randomness in combination with algorithmic rules does not have free will. Self-awareness is apparently something more than just “having information about oneself” (more generally, I presume you think that awareness is more than possessing information) since computers are already aware in this sense of their internal environments such as their temperature, and are easily made aware of other things.

    But even so, you don’t set up any implications between lack of these qualities and lack of intelligence. The Chinese Room thought experiment is interesting but hardly settling!
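On the /dev/random point: in Python, OS entropy is exposed through `os.urandom` and `random.SystemRandom` (which read from the operating system’s entropy pool, /dev/urandom on UNIX), in contrast to the purely algorithmic `random.Random`. A small illustration:

```python
import os
import random

os_bytes = os.urandom(16)        # 16 bytes from the OS entropy pool
sys_rng = random.SystemRandom()  # same pool, Random-style interface
roll = sys_rng.randint(1, 6)     # not reproducible from any seed

# The algorithmic generator, by contrast, replays exactly from a seed.
replay = [random.Random(12345).random() for _ in range(2)]
```

The first two draw on physical nondeterminism; the last is deterministic arithmetic, which is the distinction both sides of this exchange are circling.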

  • I will insert my admittedly uneducated, and largely intuitive perspective on this.

    If AI is possible, it would not look like human intelligence, making it a questionable possibility. Take, for example, this discussion: it demonstrates considerable intelligence, among other capabilities, in both interlocutors. AI may be able to calculate amazing scientific possibilities, but when it comes to non-material ideas there is no comparison between man and animal, nor do I think that there could be a reasonable comparison between man and machine.

    As a common person, in order to accept true intelligence in a machine it would have to be capable of developing abstract, non-material, and original ideas.

    God Bless,

    Matt
    ps. the computer’s self-awareness (as in its temperature) is not really the computer’s but the programmer’s awareness, encoded in the system in order to respond to a future event.

  • Response to Matt: My intuitions and yours differ here, so I’m not prepared to accept an argument based just on your intuitions.

    I think the problem with your argument lies in the very dubious assumption that people have an unbounded capacity for abstract reasoning and for creating novel ideas (in the absence of significant environmental input). Sure, we have some capability, but your argument needs that capacity to be unlimited. Given what we know about the human brain/mind this would be a very speculative assumption.

    Artificial intelligence programs may well have limits to their ability to engage in abstract reasoning, create new ideas or understand concepts, but the issue is whether it’s possible in principle to produce a program which has about the same level of limitation that humans have.

    In summary in order to show that AIs could not be intelligent (at the same level that humans are) you must not only show that artificial intelligence will be limited but you must also show that human intelligence is not likewise limited. But the same reasoning (based on the halting problem) that shows that AIs will have certain limits can be applied to humans if the laws of physics that are relevant to the brain/mind are Turing computable.

  • Well, thanks all for the interesting comments. I’ll try to address some things that caught my eye as demanding a response.

    This is not quite correct. As far as we know, nuclear decay is non-deterministic and has been, and can be used in random number generators. Other sources of (as far as we know) truly random or random-enough numbers exist, including taking photographs of incoming cosmic rays, the time and type of user input and so on. This is not limited to seeding the generator, but, for example, the UNIX device /dev/random will force anything reading bits from it to wait until it has got enough entropy before continuing.

    You misread what I meant. For the first half of your response, you’re talking about seeding the generator or otherwise taking in random input to help produce numbers at random. I’m saying that no algorithm can, of itself, produce random numbers, because the whole notion is contradictory. We cannot use deterministic means to produce random effects. As for taking in input to produce random effects, that does very well in practice, but it does not alter my point. I’d also warn about paying too much attention to entropy in the matter of randomness, as the two are not necessarily correlated. Indeed, I can produce (with enough time), from an algorithm that takes in no input, a sequence with maximal entropy for any string length. Our standard compression algorithms increase the entropy of files by removing redundancy.
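The entropy/compression point can be checked numerically. Below, the byte-level Shannon entropy of a highly redundant string is computed before and after zlib compression; compression removes the redundancy, and the per-byte entropy jumps even though the data carries no more information:

```python
import math
import zlib
from collections import Counter

# Shannon entropy in bits per byte of a byte string.
def shannon_entropy(data):
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

raw = b"abab" * 1000         # two symbols, evenly mixed: 1 bit/byte
packed = zlib.compress(raw)  # far shorter, but much denser per byte
```

This is exactly the sense in which high entropy and genuine randomness come apart: `packed` is fully determined by `raw`, yet looks statistically “random”.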

    Self-awareness is apparently something more than just “having information about oneself” (more generally, I presume you think that awareness is more than possessing information) since computers are already aware in this sense of their internal environments such as their temperature, and are easily made aware of other things.

    Self-awareness is the understanding of the concept “I” as distinct from “you” or “it”. Thus having data on processor temperature, failure status of devices, what devices are present, and whatnot does not constitute self-awareness. Having data on “my” processor and “my” devices and whatnot is closer.

    You equate free will with nondeterminism – very dubious since it gets the “free” bit right but what happens to the will?

    As I feel nondeterminism is a component of free will (not necessarily the whole shebang), and we cannot compute nondeterministically, I felt the case sufficiently made there. You do have my back against the wall with:

    But anyway, you don’t provide anything to tie together free will and self-awareness on the one hand, and intelligence on the other.

    Tying these together is hard to do, and my attempt basically went like this: Suppose intelligence does not depend on free will. Then intelligence is deterministic (denying truly random in nature) and thus equivalent to a giant lookup table. Since I deny that intelligence is simply a lookup table (asserted by appeal to experience), intelligence must depend on free will. This argument is full of gaps, so if anyone else would like to take a stab at it, I’d love to see what others can say!
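The “giant lookup table” step of the argument is easy to exhibit for any deterministic function over a finite domain; the function below is an arbitrary stand-in:

```python
# Any deterministic function on a finite domain can be replaced by a
# precomputed table that answers identically on every input.
def respond(stimulus):
    return stimulus * stimulus + 1  # arbitrary fixed rule

domain = range(100)
lookup_table = {s: respond(s) for s in domain}
```

On the domain, the table and the “live” function are behaviorally indistinguishable, which is exactly the equivalence the argument appeals to.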

    Thanks, C. Le Sueur!

    All the available evidence indicates that the universe is Turing computable.

    I have a hard time with that one. I think you’ll need to clarify “universe” in this discourse, because my universe contains abstract concepts that are not computable in any paradigm. And then I would appeal to the seemingly truly random events in nature, mainly those posed by quantum mechanics–particle decay, Heisenberg’s uncertainty principle, superposition of states of particles, and so on–and ask how you justify the computability of such phenomena. Do you hold to hidden variable theory?

    But for you to be right would really imply that some form of hypercomputation is at work within the human brain/mind.

    Assuming that intelligence, thought, etc. are actually phenomena of computation, I would maybe concede that this statement is essentially correct. However, I’m not a student of mind/brain interaction, save on the theological side, so I can’t really add more to this argument than what I’ve said in my post. Theologically speaking, thought, self-awareness, and intelligence in general are manifestations of our spiritual souls, which in themselves have no parts, which to me denies that there is any computation (hyper or otherwise) going on in us. But I doubt that’s a satisfactory answer to your charge (indeed, I think I’m just copping out…).

    2) You could find new laws of physics that are not Turing computable (or Turing computable with some random noise added).

    There’s something about this statement I just don’t like, and I’m not sure I can put a finger on it. What specifically do you mean by laws being computable? I can think of a couple possible meanings of this–the effects of the laws can be simulated, or the laws are derived algorithmically from a set of axioms–but you’ll need to clarify.

    Now, I don’t mean any insult, but you do brandish “Turing computable” around like a magic sword, and I’m tempted to quote Inigo Montoya: “You keep using that word. I do not think it means what you think it means.” You said hypercomputation is a research area you’re interested in, but also spoke of being in philosophy of mind, so I need to ask: what is your field?

    I agree, though, that just about any “test” we can devise to prove or disprove intelligence runs the risk of being an argument for solipsism.

    Thanks, Barnaby! I hope we’ll hear more from you.

    ps. the computer’s self-awareness (as in its temperature) is not really the computer’s but the programmer’s awareness, encoded in the system in order to respond to a future event.

    Matt, this touches on exactly the problem I have with even producing good evolutionary algorithms, much less artificial intelligence. Programmers set up the environment, and so the whole process is completely determined from square one, even if we have a hard time seeing all the ramifications. (After all, there are only a finite number of chess games, at least once we include the 50-moves-without-a-capture draw, but that finite number is so big that we could never examine every single game.) From a practical standpoint, I’d then argue that in order to produce A.I., we have to be able to fully understand our own intelligence, and that’s still a work in progress.

    Thanks, Matt!
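Ryan’s point that the whole process is “completely determined from square one” holds even for evolutionary algorithms, which merely look open-ended. A toy seeded run (maximizing 1-bits in a string; all parameters invented for illustration) reproduces its entire history from the seed:

```python
import random

# Toy evolutionary search: maximize the number of 1-bits. Seeding the
# PRNG determines the entire run, mutations and all.
def evolve(seed, pop_size=20, length=16, generations=30):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        survivors = pop[: pop_size // 2]
        # Each survivor leaves one mutated child (5% bit-flip rate).
        children = [[bit ^ (rng.random() < 0.05) for bit in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=sum)
```

Two runs with the same seed yield the identical “evolved” individual, which is the sense in which nothing in the run escapes the programmer’s setup.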

  • Barnaby,

    I guess if you impose enough artificial constraints, then it’s impossible to prove ANYTHING is impossible.

    We know that man’s capacity to “engage in abstract reasoning, create new ideas or understand concepts” is not limitless, because that would make us God. But you’ve yet to show that AI is capable of ANY original thought let alone limitless.

    It seems to me that AI could achieve the level of intelligence of the highest animals short of humans, and with massive computational power, but that is distinct from human thought.

    Just curious, are you a materialist? It seems that you’re treating man as just a higher animal, rather than possessing an eternal soul.

    If you are arguing from a purely materialist perspective then it would be impossible to demonstrate the impossibility of AI achieving human intelligence.

    Matt
    ps. snootiness aside, do you REALLY believe intuitively that AI could ever participate in such a discussion?

  • Careful, Matt. I don’t think Barnaby is being snooty. Rather, I have a suspicion (and I hope he’ll either confirm or deny this) that he’s in a particular field like philosophy, rather than theology or computer science. I say this–and I’m not being mean-spirited, Barnaby, I promise!–because he seems to have appropriated the term “Turing computable” and is twisting it slightly to fit his field. Now, all fields do that to some extent (A.I. itself borrows heavily from psychology, and in ways that make psychologists flinch), so I’m not in any way calling him down for it. (If you want an example of something that gets grossly pulled out of context, just think of Gödel’s Incompleteness Theorems!) With a little more clarification, we should know exactly where each of us stands, and hey, we might have even more insightful dialogue!

  • Barnaby,

    I meant no offense by the “snootiness”, but a little sarcasm, and for that I apologize. I guess I was just trying to reject the idea that intuitive ideas ought to be rejected out of hand, or are not worth discussing. It’s my understanding that Einstein’s development of the special theory of relativity was triggered by an intuition that it was the case.

    I think Ryan has very effectively placed a lot more intellectual rigor into the points I was trying to make.

    Matt

  • Response to Matt:

    No offense taken. I’m arguing that if AI is impossible then that would imply a revolution in physics. And I am concluding that until further evidence emerges we should assume that AI is possible.

    “..AI capable of ANY original thought..”. I would argue that you have not shown that people are capable of any original thought either by the exceedingly stringent definition you appear to be using. I am arguing that by any reasonable definition if people can reach a certain level of intelligence then that level can be reached by a suitably programmed, and powerful enough, computer.

    I don’t think the term materialist is very well defined so I wouldn’t call myself one. I do think that the laws of physics are Turing computable where they are relevant to the human brain/mind.

    I think there is a much bigger difference between today’s computers and ‘higher’ animals than between ‘higher’ animals and people. But nevertheless I really am convinced that artificial intelligence is possible! Furthermore my intuition that AI is possible is as strong as my intuition that other people think and feel. I am fascinated by the fact that others lack this intuition or have an opposing one. I try not to be over-reliant on my intuitions, however, even when they are this strong.

    “If you are arguing from a purely materialist perspective then it would be impossible to demonstrate the impossibility of AI achieving human intelligence.”

    This is only true if you think the idea that the universe involves hypercomputation is not compatible with being a materialist. Do you assume a materialist must believe the universe has a finite number of laws of physics? Because if not then a materialist could in principle reject the possibility of AI (realised by faster computers of the type we have today rather than hypercomputers).

    Response to Ryan:

    “You’ll have to clarify universe in this dialogue”.

    I normally use the definition: “Causally connected region” and for ‘our universe’ I use “The unique, and smallest, causally connected region including myself”. I do not try to separate the universe up into domains such as material and spiritual.

    “Do you hold to hidden variable theory?”

    I meant to add the caveat: OR Turing computable with some random noise added. In any case I understand Feynman proved that the predictions of quantum mechanics can be computably calculated which I think is enough for the purposes of my argument.

    “What is your field?”

    I am a mathematician working within set theory on hypercomputation. If I have misused the term Turing computable, it is through carelessness, not a lack of understanding. Nevertheless I think that at worst I have failed to specify what I meant rigorously enough. I didn’t say at any point that I work in the philosophy of mind (I don’t). I just mentioned the area.

    “What specifically do you mean by laws being computable?”

    I mean that the predictions of those laws can be calculated (with initial conditions as input) by a Turing computer. Richard Feynman proved that quantum mechanics is computable in this sense. Strictly speaking the same is only true of general relativity under the assumption of a spacetime like the one we observe in our universe (but this is enough).
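Computability “with initial conditions as input” can be illustrated with a toy physical law: a harmonic oscillator (x'' = -x) stepped forward by semi-implicit Euler integration. The integrator and step size are invented for illustration:

```python
import math

# Predicting a simple "law of physics" from initial conditions by pure
# calculation: x'' = -x, integrated with semi-implicit Euler steps.
def simulate(x0, v0, dt=1e-3):
    steps = round(2 * math.pi / dt)  # advance through one full period
    x, v = x0, v0
    for _ in range(steps):
        v -= x * dt  # the law updates velocity...
        x += v * dt  # ...then position
    return x, v
```

After one period the computed state returns, to within the integration error, to the initial conditions, as the analytic solution says it must; nothing beyond mechanical arithmetic on the inputs was needed.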

    “Intelligence, thought, etc. are actually phenomena…”

    Hmmm, I didn’t really mean to say this. I really ought to have said: But for you to be right would really imply that the physics relevant to the mind is not just a combination of Turing computation and randomness. This doesn’t really affect my argument though.

    Now that was a very long response! I’ve enjoyed this discussion and regret I may not have the time to continue it (I have my research to write up).

  • Barnaby,

    The philosophy of materialism holds that the only thing that can be truly proven to exist is matter, and is considered a form of physicalism. Fundamentally, all things are composed of material and all phenomena (including consciousness) are the result of material interactions; therefore, matter is the only substance.

    What I am saying is that we believe that there is more to man than the sum of his biological parts. Our thought processes extend beyond the material world to the non-material world. We possess an immortal soul which gives us this ability, which a purely material creature or construct could not. I suggest that this capacity is a critical component of human intelligence.

    Matt

  • Barnaby,

    Hey, thanks for clearing things up! Forgive my misconceptions. And now I’m curious. Can you pare down in a few sentences (they can be incredibly technical and terse, I don’t mind) what you’re looking into as far as hypercomputation? I admit, the extent of my knowledge of hypercomputation is limited to things like letting a Turing machine compute for infinitely long (which then removes concerns of computable reals among other things). Or do you have a paper you’d point me at? So… Any thoughts on the P v NP problem? Equal? Separate? Independent?

    “The unique, and smallest, causally connected region including myself”.

    I just have to laugh. This is so a mathematician’s answer! And I can say that, ‘cuz I ar one, too.

    In any case I understand Feynman proved that the predictions of quantum mechanics can be computably calculated which I think is enough for the purposes of my argument.

    If you’re simply talking about the predictions being computable in that sense, then I suppose I don’t have too much to quibble about (other than maybe asking whether we’re talking completely computable, or probabilistically computable…). I certainly haven’t researched the computability of the laws of physics in that regard, but then, your answer suggests you were stating a much weaker proposal than I originally thought.

    If I have misused the term Turing computable, it is through carelessness, not a lack of understanding. Nevertheless I think that at worst I have failed to specify what I meant rigorously enough.

    Well, now knowing that you’re a mathematician working within the realm of hypercomputation, it makes perfect sense why you’re fairly strident in saying “Turing computable”. In my field (resource bounded measure and dimension), all the notions of computability we work with are polynomial-time equivalent, so we tend to just say “computable”. I definitely retract my flippant Montoya comment.

    Hmmm, I didn’t really mean to say this. I really ought to have said: But for you to be right would really imply that the physics relevant to the mind is not just a combination of Turing computation and randomness. This doesn’t really affect my argument though.

    Well, this comes down to fundamental views of mind/brain interaction. If we suppose that all human thought, intelligence, and whatnot is determined by physical laws, if there’s nothing more than the brain at work, that’s one thing. If there’s a spiritual soul, which we can’t prove or disprove mathematically, but which is a doctrinal statement of the Catholic Church, then there’s more at play than is touched by physical laws. That’s the only point I was trying to make.

    Thanks again, Barnaby! Now, I should probably hit my research, as well.