Dancing on the bar (10/5/00)
____________
As for our other misadventures:
I did find the Joy piece in the
Wired archive, but tired after half an hour or so of slogging through extraneous downloads and never got to the end of it. [Surfing has now become roughly as exciting as standing in line at the Post Office.] Meaning, I guess, that I got through the part where the machines are taking over, but never quite got to the part where we all learn kung fu like Keanu and beat them up in virtual reality while looking very cool in black leather and mirrorshades. It is profoundly depressing to compare this freshman-essay composition with the philosophical pieces we were accustomed to receiving from scientists a generation ago: Dyson, Monod, even Bethe or Feynman; let alone Bohr, Einstein, Born, Heisenberg, Schrödinger. Or [more directly to the point] Turing or Von Neumann. Mr. Joy doesnt seem to be able to reason on his own, preferring, apparently, a technique of random quotation introduced by namedropping anecdote; the result, I presume, of a career spent thinking in buzzwords. I dont care how much money the guy has made; hes an illiterate cretin.
On the other hand Ive read one or two occasional pieces of Kurzweils, and hes very smart. He wrote an article in the American Scientist a few years ago, for instance, that indicated an appreciation of the fact that the historical development of artificial intelligence has recapitulated the development of analytical [linguistic] philosophy; even quoting Wittgenstein, as I recall. [Of course this won my heart at once.] Whatever he has said on the subject is probably much more interesting. But is this yet in print?
[The significance of Wittgenstein in this connection is that he selfconsciously took an engineers attitude toward the philosophy of logic; thus the invention of truth-tables, the original mechanical procedure which, carried to its logical conclusion, led to Turings treatment of abstract machines. Insofar as theres a philosophy of mind in Wittgensteins Tractatus, its a theory of automata.]
I flipped through the responses to Joys article in a subsequent issue while standing in line at the grocery store, but couldnt find much substance; indeed, it looked as though theyd simply printed the comments of the people who sounded most impressive; still more namedropping. [Again, the corrupting influence of the principle that all journalism is celebrity journalism.] I have somewhere the Xerox of an entire issue of one of the AI journals devoted to ritual denunciations of Roger Penrose, published shortly after the appearance of
The Emperors New Mind; no less than forty separate polemics, and every one of them sucked. I wouldnt expect much from any public forum, in other words; even if the remarks were not constrained to sound-bite length.
As for the end of the world and the destruction of humanity, all this is true, sure enough. But since nothing can be done to stop it, and since the exact course events will take is impossible to foresee [for the usual reason, i.e., even if the initial and final states and even the change in energy driving the transition were known, the chain of intermediate catalytic reactions would not be], theres not much point in worrying about it.
[This is a slightly stronger version of the usual observation on unpredictability: not simply instability of the evolution under perturbation of the initial conditions, but instability of the path connecting initial and final states under slight perturbations of either. Or slight perturbations of the form of the Lagrangian, for that matter.]
There was a period when [never mind exactly why] I went around polling people on their opinions regarding the identities of the Four Horsemen of the Apocalypse. This turned out to be one of those questions for which the officially received answer was by no means the best one. [E.g. Stefano: Poverty, Flatulence, Horniness, and Atrophy.] Anyway I never did get it straight, and found it easiest to stick with the first thing that had come into my own head: predictably, Groucho, Chico, Harpo, and Zeppo.
I doubt we need a romantic lead, making Zeppo once again dispensable. This leaves us with three obvious threats: artificial intelligence; nanotechnology [specifically artificial life, the mean between the two extremes]; and reprogramming the genome.
The problem you have in discussing these several Horsemen, then [something which certainly sails over the head of Joy, but when I need a whipping boy in these polemics I generally flog Minsky][we stuck a fork in Skinner long ago], is just the problem that their writers used to have with the Marx Brothers: although its gratifying to pretend for the benefit of journalists and babes at cocktail parties that youve scripted the performers routines, in reality you never have any idea before the curtain goes up precisely what theyre going to do. Obviously you remember the story about George S. Kaufman at a performance of
Animal Crackers [Wait a minute: I thought I heard one of the original lines]; guys like Joy and Minsky, alas, are never so honest.
This problem is intrinsic, you cant discuss it without confronting the fallacious idea that you can bottle intelligence, which is [psychologically, anyway] a corollary of the hacker obsession with fantasies of control, and, accordingly, most of the salient points fall out in the first third of the discussion; which, therefore, is the longest. [And anyway it figures that Groucho would do most of the talking.] Actually the same points keep falling out over and over again in slightly different form, but since even now no one seems to understand them, redundancy in exposition probably serves a useful purpose. [I read only the other day that Minsky is still on tour claiming that once the Master Algorithm has been unearthed the human brain can be replaced by a hundred-megahertz CPU: amazing stupidity.]
At any rate:
With regard to artificial intelligence, you are doubtless aware that the progress of the research program, at least as classically conceived, has been vastly exaggerated.
It is amusing to contrast the brilliant advertising campaign with the relative triviality of the problems no one seems to be able to solve: e.g., finding the date fields in old computer programs; anyone with a way to automate this task would have made more money than Gates overnight, in 1999. Or, more generally, the unsolved problem of the intelligent compiler. Of all the purportedly mechanizable intellectual activities with which I am familiar, hacking is the one that should be easiest to automate. And if you could mechanize the optimization of the translation of higher-level languages into assembly language, you could buy Microsoft with petty cash.
This doesnt mean that I put much stock in the traditional arguments against the possibility of machine intelligence [e.g. Searles room that doesnt know Chinese, or the folklore arguments Penrose attempted to summarize from the Gödel incompleteness theorem.] These have never made sense to me, and I presume that theyre wrong [in fact not even wrong in an interesting fashion.]
It seems to me instead that, though machine intelligence is obviously possible, there has been an irrational insistence on trying to develop it in exactly the wrong way. [Or: with the wrong definition of machine.] And that this insistence has been motivated by illusions about rationality, precision, predictability, understanding, and the ability to manipulate and control.
You observe, for instance, that the apparent triumphs of the program, e.g. the computers victory over Kasparov, on closer examination only reveal its futility. For here you have on one side a machine evaluating billions of positions before it makes a move; and on the other a guy who doesnt seem to be examining more than a couple of dozen [and who then loses only because he chokes.] Obviously whatever he is doing [whatever the workings of his natural/all-too-natural intelligence] bears no resemblance to the brute-force machine algorithm, and [at least on the face of it] is infinitely more efficient.
[Moreover (and maybe most important) what Kasparov does seems to scale better, in the sense that if you increased the complexity of the problem and asked the computer to evaluate ten times as many positions, its performance would deteriorate much more dramatically than Kasparovs.]
And, really, the classical AI idea isnt much more than this: a brute-force tree-search, with some [essentially arbitrary] evaluation function at the leaves. The implicit thesis is that the function of intelligence can be reduced to deterministic algorithms [ideally, to just one master algorithm like resolution in Prolog]; the implication is always that even if this isnt the way thinking is done in a state of nature, this is the way to do it with a computer; and anyway that it is more exact, and therefore better.
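[To fix ideas, here is what the recipe amounts to, in a dozen lines of Python; the GameState object, its methods, and the evaluate function are invented stand-ins rather than any actual engine, and a real program would add pruning, but the shape of the thing is just this:]

    def negamax(state, depth, evaluate):
        # Exhaustive tree-search: examine every legal continuation to a fixed
        # depth and score the leaves with an [essentially arbitrary] evaluation
        # function.  GameState and evaluate() are hypothetical stand-ins.
        if depth == 0 or state.is_terminal():
            return evaluate(state), None
        best_score, best_move = float("-inf"), None
        for move in state.legal_moves():       # enumerate every possibility...
            score, _ = negamax(state.apply(move), depth - 1, evaluate)
            score = -score                     # ...seen from the opposing side
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move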
A sympathetic reading of this approach might compare it with Carnaps interpretation of the logic of induction as a method of justification, not of discovery: you itemize a set of rules, and brush away protests that these do not represent the real methods of the human scientist with the rejoinder that you arent concerned with that; and anyway they ought to.
This is the distinction between, e.g., how you know that 25 is the next number in the sequence 1, 4, 9, 16, ..., and how you would try to prove that this statement is correct; or between the act of recognition of a picture as a picture of Richard Nixon, and the assignment of a probability or degree of truth to the statement that the picture in question is a picture of Richard Nixon. [Somehow though the former is easy, and anyone can do it, the latter is difficult, and only a logician can even try; moreover (since the act of recognition is a real process in a real brain, and the degree of truth is a mathematical fantasy) somehow in the name of logical reconstruction you replace science with science fiction. But when I was reading Carnap it made a twisted sort of sense.]
Another of the classical approaches, rulebased systems, involved enumerating an enormous number of facts and principles for a class of problems and trying to deduce the solution for any individual instance by brute force. The analogy with civil law is interesting. The difficulty that the set of facts and axioms is never internally consistent was always an embarrassment. This worked fairly well with things like medical diagnosis (EMYCIN) and very badly with things like trying to teach robots to drive. No one ever attempted to address the obvious question, namely, if we only have a hundred thousand genes, how do we code all these rules? and how did the rules evolve?
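[For concreteness, the whole architecture fits in a few lines of Python; the medical facts and rules below are invented for illustration, not anything from EMYCIN:]

    # A pile of facts, a pile of if-then rules, and brute-force forward
    # chaining: keep firing rules until nothing new follows.  The rules here
    # are made up for the example.
    facts = {"fever", "cough"}
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "order_chest_xray"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)    # {'fever', 'cough', 'flu_suspected'}

The embarrassments show up immediately: every rule has to be written out by hand, and nothing prevents two of them from quietly contradicting each other.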
But [anyway] I find it hard to give classical AI the sympathetic reading. Everything about it suggests an appeal to the fantasies of control so dear to the hacker psyche.
[About which, so far as I can recall, the original reference is Weizenbaums book
Computer Power And Human Reason: I havent looked at this in years, but seem to remember that he stressed the idea that most AI applications, e.g. his own ELIZA, were based on very simple tricks, and emphasized the importance of revealing the geek behind the curtain, as it were; moreover, and surprisingly for an MIT guy, he expressed grave reservations about hacker culture before anybody outside Cambridge even knew what it was.]
The peculiar inflexibility of the doctrine is the giveaway: it suggests some unstated [unconscious, if you like] set of motivations or reasons which are the actual supports of the structure. [Some sort of compulsion.]
That is, there is an assumption, as it were unquestionable, a priori, that the possible moves of an intelligence can be enumerated, catalogued, before the fact, and searched, surveyed, examined. After all, this is how computer programs are written. You anticipate everything in advance. For each situation, you craft a response. If youre writing a computer algebra program, you have a list of procedures you can employ to transform an expression, and, basically, you try all of them; if youre trying to integrate an expression involving elementary functions, you list all the procedures you can employ to transform that; if youre writing air-traffic-control software, you list all the paths the planes can follow and try to keep them from running into one another [yeah, right]; etc., etc.
Obviously it ought to be an embarrassment that the strategy is always the same but the lists of procedures are always contrived ad hoc. Still, if you have a problem like chess for which you can handtune the search, the results can be impressive.
But in consequence, in the design of computer programs employed by others, the designer tends to develop the attitude that the user [or luser, as the MIT guys always loved to say] is just a rat in a maze, whose every move has been anticipated in advance by the [godlike] programmer. And presently the designer begins to think that he can anticipate anybodys moves, anytime, anywhere; and that the fact that his computer programs never acquire the true autonomy that is characteristic of real intelligence is an asset, not a liability: the program need do no more than bottle the intelligence of the programmer, as it were, and this should be good enough. For after all the programmer can code anything. Thus inevitably he does not address the question of what problem-solving is so much as he attempts to solve all the problems in a given class in advance and then bottle the results; though presumably the classes are getting bigger as the project moves along, and he expects someday to arrive, as it were, at root.
This always reminds me of the story about the late scion of the degenerate Habsburg line who saw his first train pulled by an engine and assumed there had to be a horse inside. But in fact in traditional AI programs [in computer programs generally] there is no engine; and there is always a programmer inside.
[Again, this is not very different from what Carnap thought: that human reasoning could be codified, regularized, and then (of course) improved upon. I always thought it was amusing that Carnap kept backing away from the implication that one would be able with his inductive logic to evaluate the probabilities of truth of competing theories and thus, as it were by calculation, determine which was most correct, resolving all scientific controversies scientifically [obviously this should have been the whole point of the exercise]; an idea that goes back at least to Leibniz. Though to be fair only recently has it become possible to mechanically verify even mathematical proofs of any complexity.]
For the hacker, at any rate, megalomania is not simply an occupational disorder, but a kind of methodological imperative.
Im never really sure whether I should take all this seriously. Im always reminded of the way that Rotwang the inventor, with wild hair and mad glowing eyes, describes his robot in Fritz Langs Metropolis: "I have created a machine in the image of man, that never tires or makes a mistake!" Then I remember that the first thing the [emphatically female] robot actually does is to mesmerize a club full of bankers with an erotic striptease. Isnt this the story of the Internet, after all.
Anyway. All this provides an illustration of what I call the Napoleonic fallacy: the idea that you can sit on a white horse on top of a high hill overlooking the battle, looking down at the noise, the smoke, and the confusion, and with a swift commanding glance of your godlike eagle eye compose the whole into a unifying vision; and then, presumably, summon an orderly, and commit your reserves to the charge.
But Napoleon is like Maxwells Demon: when you examine what he has to do in order to function as you imagine that he would, you find that he cant. The swift commanding glance presupposes gathering the [noisecorrupted] data and then piping it through some [bandwidth-limited] conduit back to Napoleon: exactly the kind of situation for which Backus invented the phrase von Neumann bottleneck; and composing the whole into a unifying vision usually involves a combinatorially impossible problem in pattern recognition which, as it were by definition, youre pretending will be handled by a single processor.
Implicitly you presuppose a pyramid: a hierarchical structure, a chain of command with Napoleon at the top, evaluating everything thats piped up to him. But whatever Napoleons speed might be, it is finite. Given that, and the simple fact that the size of the pyramid he sits upon must grow exponentially with its height, its maximum height must be proportional to the logarithm of his speed.
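[To spell out the arithmetic: if the chain of command branches by a factor b at each level and has height h, there are on the order of b^h units reporting from the field; if Napoleon can digest at most v reports per unit time, then surveying them all within a time T requires

    \[
      b^{h} \le vT \qquad\Longrightarrow\qquad h \le \log_b(vT),
    \]

so however fast he is, the height of the pyramid he can actually command grows only as the logarithm of that speed.]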
So you know you cant do this as a serial computation. But once you start to try to analyze this as a parallel computation, you start deconstructing Napoleon! and he factors into a network of less-omniscient generals, on grayer horses on lower hills, who are arguing with one another via dispatches that only get through once in a while because the orderlies transmitting them keep getting shot ... presently nobody is taking orders at all, and you have a sort of procedural chaos from which order emerges only from the bottom up, not by imposition from the top down. Or percolates outward from the middle of the network, which meanwhile you have realized isnt really a hierarchical tree at all, but a very unhierarchical general graph in which, you perceive, the characteristic size of the neighborhood over which any individual node can maintain a coherent picture of the global state is, again, fixed by the logarithm of that nodes speed.
What works in nature, in other words, is something much closer to anarchy, and involves local communication and local control. It is important to think like a physicist here, and not like an engineer: to realize that this state of affairs is dictated not by the frivolous accidents of history, but by mathematical necessity. Nature never bothered with the rational approach because, actually, it doesnt work.
On the usual argument thered be no point in implementing biological evolution, for instance; adopting the role of Napoleon, youd simply look at the possible genomes and then select the best one. [Actually, and even harder: best maybe in the sense of largest mutually compatible subset of them. Compare Leibniz idea that the best of all possible worlds was the largest mutually compatible set of possibilities, and translate this into biology.] Part of the fallacy, obviously, lies in supposing you could examine and rank, say, 2^6000000000 possible genomes; another, curiously enough, lies in the failure to recognize that the process that constitutes your internal examination of the set of genomes is essentially isomorphic to the external, experimental process of evolution: that it is exactly this calculation, in other words, that Nature is performing. [Its a related observation that with problems of this order of complexity you find yourself thinking, literally, that God himself could not tell you beforehand what the answer is, and that the world represents some kind of simulation hes running to find out. Reminding you of the old observation about Augustines idea that the world might consist only of thoughts in the mind of God, that this wouldnt make any difference to anything.]
The point is a trifle subtler, of course: it would be absurd to suppose that you could examine a zillion genomes one by one, but it might not be absurd to assume the existence of an algorithm that permitted you, as it were, to prune the tree of choices fast enough to allow a deterministic calculation in a reasonable amount of time. But the first thing you learn from complexity theory is that almost any interesting problem [e.g., Boolean satisfiability, the existence of a Hamiltonian circuit, three-colorability, the travelling salesman] that appears to grow in difficulty exponentially in the size of the input actually does; that it cannot be pruned effectively, that the exponential fanout of the tree is irreducible.
The futility of parallelism [which is apparently less obvious] then follows. The number of processors you can stuff into a machine grows [since we dont live in Hilbert space] only as the cube [or in arbitrary fixed finite dimension as a polynomial] of its size; the number of possibilities that have to be examined in a general tree-search goes up exponentially. Accordingly, though for instance like everyone else I stand in awe of Adlemans ingenuity in inventing a technique for solving the canonical combinatorial optimization problems by coding their representations into DNA strands and, in effect, using every molecule in a sample of macroscopic size as an independent processor to test a trial solution, with a mole of DNA youd be able to solve the travelling salesman problem for [lets wave our hands] 24 cities. If you filled all the oceans with DNA youd be able to solve it for something like 37 or 38 cities. If you filled the physical universe, you might be able to solve it for 60 cities. [Thats 8320987112741390144276341183223364380754172606361245952449277696409600000000000000 routes you have to examine, incidentally.][Thanks. Ill be here all week.]
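[The back of the envelope, in Python, for anyone who wants to check the hand-waving; the processor budgets are round numbers of my own, tours are counted crudely as n!, and the answers shift by a city or two depending on how you count:]

    import math

    def cities_within(processors):
        # Largest n whose n! candidate tours fit inside the processor budget.
        n = 1
        while math.factorial(n + 1) <= processors:
            n += 1
        return n

    print(cities_within(6.02e23))   # one mole of strands: about 23 cities
    print(cities_within(1e80))      # particles in the visible universe: about 58
    print(math.factorial(60))       # the 60! quoted above, if you care to check it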
This assumption [of the irreducibility of fanout] is essentially the assumption that, as per conjecture, P doesnt equal NP [so far as Im concerned this, not Gödels, is the theorem that says intelligence isnt mechanizable in the traditional sense]; an open question, strictly speaking, but equivalent to the assertion that, in any nontrivial formal system, finding a proof of a theorem is inherently harder than verifying that proof; which is, to say the least, intuitively sound. [Moreover when you think about it you realize that it is just some such point that must vitiate the mechanization program: remembering, e.g., may be a matter of mechanical procedure, but discovery or invention is something dual to it, like remembering backwards in time; just the distinction between verifying a proof and finding one.]
[Also, as Im always observing, its much harder to write than it is to read.]
[Theres something right about that idea: that reversing the direction of time is, mainly, computationally prohibitive. Compare Maxwells Demon, which seemed to have a computational version. Maybe this: given the macrostate in which the gas is divided evenly between the halves of the box, the computational difficulty of identifying the microstate (out of an enormous number N) that puts all the molecules back on one side is huge; the Demon has something like a sorting problem. The number of steps then probably just goes as log(N), which is to say, the entropy is the length of the sort. More succinctly: consider the computational difficulty, as, say, a picture-puzzle, of putting Humpty-Dumpty back together again.]
One obvious [though never publicly admitted] corollary is the impossibility of writing long computer programs without mistakes. Writing a program is equivalent to finding a proof, in a suitable formal system. [Naively, its a proof that a given function is recursive, though I think theres better theory on this point, cf. the literature on typed lambda-calculi, particularly on the Curry-Howard correspondence; and its a bit of a mystery how intuition provides the alternative description of the function. Part of the problem in real life, actually, is that intuition simply doesnt; the process of writing the program is largely a process of inventing and refining the specification.] The difficulty should go up exponentially with length and, if you examine the empirical evidence honestly, this is obviously the case. [The difficulties of matching a series of program segments dont add, as by some mental-optical illusion they seem to: they multiply.] The limits of human capability have long since been reached and exceeded. [And no wonder nothing ever really works.] There is no hill high enough, no eagle-eye sharp enough, no horse white enough to finish debugging Windows 95. Or the operating system for the IBM 360, for that matter; the locus classicus [cf. The Mythical Man-Month] which seemed most apropos to the problem when I started thinking about this in connection with the problem of validating the SDI code.
On a less cosmic scale, this [the impossibility of Napoleon] is exactly the problem with a command economy. First, as a practical matter the people pretending to run it cant acquire and assimilate all the information they need fast enough to make decisions [I remember hearing a story about a midlevel Soviet apparatchik who was supposed to try to adjust sixty thousand prices a month]; second, if you ask yourself exactly how youre supposed to compute, e.g., the greatest good of the greatest number, you realize immediately that (a) this is an impossibly difficult combinatorial optimization problem and (b) this is [modulo the arbitrariness of the utility function] what the market is doing for you, anyway in a distributed-computational model much more efficient than anything you can design from the top down. [However, contra the usual free-market mythology, the invisible hand is only guaranteed to find a local, not a global minimum; and its obvious that large corporations and governmental bureaucracies are inherently inefficient in exactly the same way.]
Ah, these are great times to be an anarchist. You find yourself winning arguments you hadnt even thought to contest.
Anyway this suggests the approach that anyone familiar with mathematical physics would have thought of in the first place: namely, when you have a seemingly intractable problem, you take the hint from the way that Nature solves it.
[This is exactly the fascination of protein folding, since the problem is prima facie combinatorially impossible (Levinthals paradox): you might have 10^200 ways of folding a polypeptide of 200 amino acids; the natural cycle time of the system (at 300 degrees Kelvin) is about 1.6*10^-13 seconds, suggesting that it cant sample much more than a few trillion configurations in the time it observably takes to fold; how does it find the ground state? Literally the question is: how does the protein make this computation?]
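[The division, made explicit: even granting the chain a full second to fold, at one trial conformation per cycle it manages

    \[
      \frac{t_{\mathrm{fold}}}{\tau} \;\approx\; \frac{1\ \mathrm{s}}{1.6\times 10^{-13}\ \mathrm{s}} \;\approx\; 6\times 10^{12}
    \]

guesses, a few trillion, against the 10^200 nominally on offer; whatever the protein is doing, it is not searching the tree.]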
Historically this motivated the idea of trying a theory of automata more closely modelled on the workings of the brain; this is, of course, the theory of neural networks, which began in the Forties with McCulloch and Pitts, languished in the Horse Latitudes for several decades, and was then revived in the Eighties with the work on associative memory models advanced, e.g., by Hopfield.
To make a long story short, this is the first real advance in the philosophy of mind since Hume, and its obviously the correct approach; though the details are formidable, only details remain. So its clear that a real thinking machine will not be deterministic, that its output on given input will not be predictable, that it will solve problems by guessing, that [just as evolution does] it will depend on error to function, that it will not in the ordinary sense be programmable [parallel programming is almost a contradiction in terms]; that it will, in short, be selforganizing, near-chaotic, undesigned, autonomous. Also [and this is a critical realization] that it wont ever solve the problems its presented with exactly, not because of the imperfection of its design, but because exact solutions for those problems dont really exist.
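[A minimal sketch of the associative-memory idea, in the Hopfield style, in Python; toy dimensions and Hebbian weights, a cartoon rather than a model of any actual cortex:]

    import numpy as np

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 64))     # three 64-unit "memories"

    # Hebbian weight matrix: symmetric, no self-coupling, the memories smeared
    # across the whole matrix rather than stored at addresses.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)

    def recall(cue, steps=300):
        # Relax a corrupted cue by repeated local guessing-and-correcting.
        s = cue.copy()
        for _ in range(steps):
            i = rng.integers(len(s))
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    noisy = patterns[0] * rng.choice([1, -1], size=64, p=[0.85, 0.15])
    print(np.mean(recall(noisy) == patterns[0]))     # usually close to 1.0

Nothing in the weight matrix is a program in the classical sense; the updates are local and disordered, and the thing settles rather than executes.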
So, anyway, the idea of dissecting the human mind on a laboratory bench, itemizing its parts and their functions, enumerating its possible outputs for each of its possible inputs, and then building a well-oiled and wholly deterministic machine that will reproduce not its actual behavior but the behavior it is supposed to exhibit is delusional. [But note that this fantasy is also shared to some extent by true believers in behaviorism and psychoanalysis.]
Still this doesnt mean you cant design and build [or at least grow] something like a human brain. In fact you should be able to model it efficiently in [nondeterministic] software; and then improve upon it. Its just that this doesnt imply that youll know how it works.
[There is already at least one guy Ive heard of who builds neural network chips that evolve their own programming to solve the specific problems he assigns them using genetic algorithms, and then, like any other biologist, spends most of his time trying to figure out how the hell they do it. In a nutshell, this is the future of computer programming.]
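[The flavor of the thing, reduced to a Python toy: bit strings evolve toward an arbitrary target by selection, crossover, and mutation. This has nothing to do with the actual chip work, which I know only by hearsay:]

    import random

    TARGET = [1] * 40                          # an arbitrary fitness peak, for the toy
    def fitness(g):
        return sum(a == b for a, b in zip(g, TARGET))

    pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                     # keep the fittest third
        children = []
        while len(children) < 20:
            a, b = random.sample(parents, 2)
            cut = random.randrange(40)
            child = a[:cut] + b[cut:]          # crossover...
            child = [bit ^ (random.random() < 0.01) for bit in child]   # ...and mutation
            children.append(child)
        pop = parents + children

    print(max(fitness(g) for g in pop))        # typically at or near 40

With a fitness function less transparent than this one (evolved circuitry, say) the winning genome tells you nothing about how it works; hence the biologist routine.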
In fact its not even obvious that youd know when youd done it. Gibsons original prediction was that artificial intelligences would emerge more or less by accident on the global network, and that it would take, as it were, the best part of three novels to figure out exactly what had happened. This still seems plausible.
Having made this fairly overwhelming case against determinism, I should admit that its possible that strict algorithms might be able to run in polynomial time on quantum computers. That is, its possible that quantum computation is inherently more powerful than classical computation; and that, perhaps, in this model P equals NP, Churchs thesis is modified, combinatorial optimization problems can be solved exactly within reasonable time limits, and Napoleon might be able to climb as high a hill as he likes. As evidence in favor of this conjecture you can wave your hands generally at the native parallelism of quantum-mechanical dynamical evolution [the particle doesnt traverse the classical path of least action but all paths simultaneously with an amplitude whose phase is proportional to that action] and specifically at Shors quantal algorithm for factoring integers in polynomial time. As evidence against it you can wave your hands at the apparent necessity of preserving reversibility [aka unitarity] in quantum computations, which seems to indicate that the previous handwaving argument linking the polynomial/exponential distinction to the irreversibility of time wasnt complete bullshit; and guess that the size of a reversible quantum computer might have to grow exponentially with the size of the problem.
Penrose, one should note, did half-seriously propose that the brain is inherently quantum-mechanical in its operation [as did Eddington long before him, actually, though no one seems to recall it]; however his arguments werent very convincing, and anyway the point seems to be that quantum mechanics doesnt banish determinism but rather reinstates it. But I suspect the correct analysis of quantum computation will show the way to solve the P/NP problem; which is to say, that none of this will really be understood until the theorem has been proven. Meanwhile Ill stick with my conclusions.
Anyway. The assumption of Minsky, of Frankenstein, of Rotwang and of Joy [were he smart enough to be that interesting] is that in the process of replacing all the messy wetware of the human brain with the rule-based systems that it imperfectly attempts to implement, the riddle of the nature of intelligence will be laid bare to a select few, the dudes on the white horses on the high hill, the philosopher-kings, the hacker Napoleons; and that these guys will have it within their power, as it were by construction [as a mathematician would put it] to control the behavior of their creations. [And the behavior of all those messy imperfect humans as well; but lets not go there. Yet.] But the artificial brains wont work that way, the riddle is though transparent nonetheless impenetrable, and control as always is an illusion. You can fantasize yourself the master puppeteer; but no one, not even Cusack, can pluck a trillion strings.
Thus though machine intelligence has no apparent bounds, hacker intelligence has very obvious limits. And though it will not be that long before its possible to fabricate a machine smarter than Von Neumann, it wont be a Von Neumann machine; will not be programmed since not programmable; and its behavior, being more complex than that of its creators, will be even more impossible to predict in advance.
[Von Neumann himself was, alas, rather easily impressed by bozos in uniform; robots, being really alien, will be harder not easier to tame.]
So the fantasies of control are delusional. What emerges will be something inherently uncontrollable: insofar as they know what its doing, it wont work, and insofar as they dont, it will. It wont even be possible to direct the research in such a way as to avoid unintended consequences. And nobody gets to play the short dead dude.
In fact thats probably the best summary: nobody gets to play Napoleon; as usual, everybody gets to play Bill and Ted.
As for the second menace: I dont know much of the literature. In principle its possible to manipulate things at the atomic level [not that we dont do this already, cf. chemistry], and this may have applications to the fabrication of materials; fine. What might be construed as controversial or alarming is the idea of [physically-realized] artificial life. As best I can determine from flipping through the papers in a couple of volumes of the Santa Fe proceedings, the people proposing such experiments, Drexler, for example, are not entirely stupid; but nowhere near as smart as, say, Oppenheimer and Bethe, whose minor oversights nearly precipitated the end of civilization.
For instance Drexler seems to see the necessity of drawing a clear conceptual distinction between nanomachinery designed for some specific [and necessarily very narrow] purpose and, say, uncontrollable synthetic viruses that will breed until they devour the world; but doesnt really succeed in doing it. This is not encouraging.
The problem with infinitesimal machines, obviously, is that [contra
Fantastic Voyage] they arent good for much if manufactured one at a time and operated singly by human telepresence; they have to be autonomous, and probably they have to be self-reproducing or at least self-modifying. [Theyd have to be hand-tooled for any given site or application, and youd need trillions for every job. What kind of assembly line could turn them out?] Skipping a few steps, what this means, mathematically, is that you have to imagine some kind of generative grammar [in the sense of Lindenmayers variation on Chomsky] which produces the little suckers as the endproduct of a recursive development process using a fixed set of rules from an initial quasigenomic string of specifications [no accident this sounds like morphogenesis]; if the end result is supposed to be nontrivial, these have the complexity of computer programs, and the outputs of computer programs are [see above] wholly unpredictable. Protests that the programs can be debugged before the critters are set loose are [I say yet again] simply fatuous. [E.g., the problem of predicting what language is generated by a general phrase-structure grammar literally: whether or not the language is nonvoid is essentially the same as the halting problem for Turing machines; not simply NP-hard, in other words, but recursively unsolvable.] Moreover if the point is to manufacture these machines for applications which involve their interaction with real living things, e.g. scrubbing sclerotic arteries or killing tumor cells, their programming will have to be extremely flexible: the most obvious prototypes of such gadgets are the antibodies of the immune system, and these adapt to attack foreign intruders by mutating at a prodigious rate until they develop binding sites specific to the alien objects they want to recognize and negate.
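[The mathematical point in miniature: the two rules below are Lindenmayers original algae example, nothing out of the nanotechnology literature, but they already show how a fixed grammar plus a short seed balloons under parallel rewriting:]

    rules = {"A": "AB", "B": "A"}        # Lindenmayers original two-rule L-system
    s = "A"                              # the quasigenomic seed

    for step in range(8):
        s = "".join(rules.get(c, c) for c in s)    # rewrite every symbol at once
        print(step + 1, len(s), s[:40])
    # the lengths run 2, 3, 5, 8, 13, 21, 34, 55: Fibonacci growth from two rules

Two rules and a one-letter genome already generate unbounded structure; predicting what a grammar of realistic size will generate is, per the remark above, not merely hard but undecidable.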
I suppose theres some fantasy entertained about directing this army of little boogers by remote control; but this is just Napoleon again, obviously; and the smaller the critters are, the more literally Napoleon and Maxwells Demon look alike. So forget that.
But to restate the most obvious objection yet again: you well remember your first happy adventures in computer programming, that harmless-looking little procedure that crashed the operating system, the input-output routine that wrote zeroes throughout core and copied itself in fragments all over the disk drive, those entertaining embarrassments that recalled to you Mickeys exploits as the Sorcerers Apprentice in Fantasia. So whos backed up the biosphere? because some imbecile will certainly find a way to erase it, if this line of research is pursued.
In conclusion, though some kind of miscegenation between the organic and the not-yet-organic is certain to occur, I doubt it will take the form of reinventing bacteria and letting them eat us. Who could be that stupid?
[Dont answer that.]
Lets pause to state the computational challenge problem which, if we translate it back into existing biology, this line of thought suggests: given the genome for an unknown organism, to generate a [complete] simulation of it. And, so long as were imagining impossibilities, the inverse problem: from a [necessarily incomplete] description of an organism, to produce the code that will generate it. [Suppose, e.g., that you wanted to make dinosaurs, but you couldnt find Crichtons mosquitoes trapped in amber.] The forward problem [suitably constrained] cant be impossible, even though it properly contains relative trivialities like protein folding; you can always grow the organism [i.e., Nature can perform the computation.] But the inverse might be. [Again, this is the difference between following a proof and finding one; in this case, the proof that a given organism can be obtained by morphogenesis.]
[Im haunted by an idea I first found in an old story by Keith Laumer,
Worlds of the Imperium, which described a magic televison that allowed you to channelsurf (as it were) sideways in time; he illustrated this with a vivid description of a scene of a farmer plowing a field behind a couple of oxen which morphed repeatedly as the viewers tuned their way from the original settings through a series of variations in which the animals changed into alien forms, the farmers skin turned purple, he grew more fingers and extruded antennae from his forehead, the landscape rippled into hills, the sun and the sky changed color, etc., etc. I remembered this not long ago when I was trying to figure out some way to steer the changes in a generic picture of a face: if you could put the right controls on the software, you could, maybe, produce some kind of generalized police-artist that would allow you to draw people from memory or, maybe more interesting, make them up. (Actually I think the cops do have something like this now, but I dont know how it works, or how well.) Similarly you might imagine a tuner that would steer you through possible variations on a genome and generate simulations of the corresponding creatures for you in real time. The problem, in both cases, is trying to figure out some happy mean between your naive vision of a single tuning knob and the nasty reality, which is that youre trying to navigate your way around a Hilbert space, and need an infinite number of them. Or at least a few billion.]
[Note added later: the underlying skeletal model in Shrek is supposed to have five hundred forty joints; so in some sense you can see from the verisimilitude of the animation that a few hundred knobs (the first few hundred dimensions of the Hilbert space) would suffice, insofar as reproducing human motion is the object. Im still not sure about faces.]
As for the third menace, the reprogramming of the genome, this will certainly happen. Since it will probably start happening in the very near future, people are already talking [in the pages of Business Week!] about legal restrictions, etc. as if there were some distinction you could clearly enunciate in a courtroom between eliminating genetic defects and introducing genetic assets. Even if there were legal restrictions, they would not apply universally [Im happy that we do for the moment run the world, but there is an alarming hubris in this unspoken assumption that American law is the only law]: offshore, in Asia, if need be on the Moon; and the competitive advantages of ignoring these constraints will be so enormous that a way will be found to circumvent them.
For instance at the first moment that someone finds it advantageous to breed supermen for specialized purposes, or simply to clone the most obvious candidates, it will happen. In the Gibsonian scenarios [and Gibson is our surest guide here] the ruling classes end up owning them, and theres certainly ample historical precedent: Leibniz and Bach and Euler were all kept as pets.
On the other hand the lead time for delivery of a crop of Ed Wittens is at least twenty years, and a lot can happen in that time.
At the other end of the food chain, its hard enough already to figure out what separates men from apes; very modest augmentations would suffice to turn chimpanzees into a new servant class. Try to stop that. [A career at the butt end of the service industries has taught me the great truth that makes the world go round: people are cheaper than machines. And if people were only cheaper, it would all go round that much faster. Just ask the editors of the Wall Street Journal.]
In general its so easy to think of mechanisms that would accentuate the internal differentiation of the species that you cant imagine that it wont happen; more, you have to suspect theres some kind of principle that dictates this out of natural necessity.
But the motives that drive the initial applications will likely be more prosaic. I remember having been struck by Gibsons observation [now a couple of decades old] that whole gangs of disaffected youth could decide to look like James Dean; and thinking that this had the stink of the truth about it. Indeed a sort of improved cosmetic surgery is a natural first step. But where does this stop?
None of this is at all new; its just that its finally within reach. Most of the obvious points were made by J. D. Bernal in the Twenties in a speculative essay entitled The World, The Flesh, And The Devil; a work which I looked up recently and read again, just to confirm the suspicion that all this had been foretold long ago.
Bernal was no mean stylist. The first sentence is memorable: There are two futures, the future of desire and the future of fate, and mans reason has never learnt to separate them. So much, I want to say, for science fiction. But, skipping over a number of interesting speculations about materials science, the first descriptions of photon sails, space stations, solar panels, etc., etc., and cutting to the biological chase, you find the piquant summary: It is quite conceivable that the mechanism of evolution, as we know it up to the present, may well be superseded ... after all it is only natures way of achieving a shifting equilibrium with an environment; and if we can find a more direct way by the use of intelligence, that way is bound to supersede the unconscious mechanism of growth and reproduction. Pointing out that this began, in effect, with the invention of tools, he continues: Normal man is an evolutionary dead end; mechanical man, apparently a break in organic evolution, is actually more in the true tradition of a further evolution. Though Bernals explicit vision of this is a trifle old-fashioned, a sort of brain-in-a-barrel idea [reading this over again I realized this was the origin of the old classic Thirties scifi story
Professor Jamesons Satellite, about a scholar who dies, has his body put into orbit, and then wakes up a couple of million years later when a party of exploring robot-dudes find the orbiting casket and transplant his brain to one of their bodies presumably the origin of the fantasy which apparently governs the decision by the Alcor-bracelet dudes to have their brains frozen when they die and stored in Scottsdale], his idea is, as it were, mechanism-independent, and its obviously correct.
Bernal must obviously have had a direct literary influence on Huxley; presumably it extended farther. This sounds, actually, like another urtext of cyberpunk, which is in large part the elaboration and development of the theme of the interpenetration of the animate and the inanimate; regarded variously with paranoiac alarm [in Pynchon], fascination [in Gibson], or with a sort of grisly playfulness [in Cronenberg].
Bernal concludes that a class division will appear in humanity depending on whether they do or do not embrace these changes; and suggests that the ones who do will be the more intelligent, adventurous, etc., and that theyll probably end up living somewhere off the planet. I too expect that this will happen, but expect instead that the division will fall along pre-existing class lines: i.e., the ones who will have the money then to pay for cosmetic alterations, to have their children augmented, and to remove themselves from the reach of terrestrial law will be essentially the same ones who have the money now for cosmetic surgery, to place their children in private schools, and to buy their way out of murder raps; but as more power is concentrated in fewer hands, and the already considerable competitive advantages of the wealthy over the disadvantaged become qualitative biological differences, the gap will become a phase boundary. Its a kind of Scott Fitzgerald joke now, when people refer to the very rich as a different species; but presently this will literally become the case.
And then theyll all go to war and smoke all the rest of us trying to get at one another. [Like 1914, only worse.] Not exactly a cheerful prospect.
But probably the machines get smart and take over first.
Lets make a parenthetical note that, if it should prove economical [in the broadest sense] for humans [and not intelligent robots designed specifically for the purpose] to do things like colonize Mars and mine the asteroids, they will certainly be drastically modified and thoroughly re-engineered humans. [But, pace Dyson, it seems as if a robot would be a better idea all around: something that could live in vacuum directly off sunlight.] And of course its been obvious for a long time that interstellar flight is completely impractical for organic lifeforms; a million-year lifespan would be the minimum requirement. I think you need something stabler than DNA to pull that off.
[So, inverting the argument, if we were to be visited by extraterrestrials, its unlikely wed recognize them as organic lifeforms. The little green men would be wholly superfluous; the flying saucers themselves would be the aliens. Or worse, something like Hoyles Black Cloud, some nanotechnological virus descending in a swarm to devour the Earth. It never ceases to amaze me what a fucking Pollyanna Carl Sagan was.]
With regard to the other parlor stunts, e.g. cloning, it seems inevitable that some Gatesian megalomaniac will try it; the temptation to hand over the empire of Microsoft, for instance, not simply to a designated heir, but to ones self, would certainly be irresistible. [Think of it this way: this is what the evil mastermind in a James Bond movie would do; and there are now any number of overnight billionaires who would like to think theyre the kind of guys for whom Doctor No is the only appropriate role model.] But in view of whats now possible, the idea already sounds retro.
In summary:
Wittgenstein said once that a philosophical work could be written entirely in the form of a series of jokes; and it must have occurred to you, in your meditations on the nature of comedy, that not only are there jokes which express something really deep, but that you cant imagine any other way of stating what the joke expresses that doesnt destroy its meaning in the translation; even if you knew what it was. [Donald Richie on Kurosawa: "While quite ready to talk about lenses, or acting, or the best kind of camera-dolly, he is unwilling to discuss meaning or aesthetics. Once I asked what a certain scene was really about. He smiled and said: 'Well, if I could answer that, it wouldnt have been necessary for me to have filmed the scene, would it?'"] So its the point, finally, that Douglas Adams gag about the purpose of life on earth [that its all some kind of enormous computation, by some kind of organic massively-parallel computer, all run to find the answer to the question (or the question to the answer) of life, the universe, and everything] is exactly right; it is exactly that. [And the idea that the output might be trivial could be the best part.] Obviously we dont understand what the computation is, or what its for, or whether calling it a computation is really the best way to look at it, or whether we are even allowed to figure any of this out [but, dig we must]; and thats why the most direct statement you can make about it takes the form of a joke. But it is something like this. And thats why its a great joke.
But then it seems obvious that [as they say] the software is independent of the hardware its running on [this seems to be the engineers version of Platonism: platform-independence], indeed that its continuously redesigning the hardware its running on; and that if theres some radically different direction it can take to continue the optimization of whatever metaphysical function its trying to maximize, it will certainly take it.
In fact arguably its changed platforms before: theres some entertaining science fiction about templates stored in clays preceding organic life, and there was almost certainly an RNA world which preceded the DNA/RNA/protein era governed by the central dogma.
Nor does it matter a great deal whether we [as individuals, or even collectively] can predict what direction it will take, or figure out in more than the vaguest terms what it is doing; we can understand this well enough to see that it is by definition something that we cannot design, harness, or control. Not all the Kings horses nor all the Kings men could reassemble Humpty Dumpty; and something here is being put together that no one can take apart. [Theres a missing principle, something thats been disguised by the way we usually look at statistical mechanics: not simply the entropic but also the organizing principles are irreversible and inexorable.]
When I write this out and look at it, it doesnt seem terribly original: evolutionary philosophies are old and fairly lame; none of this is a whole lot different from what Bergson said, or Whitehead [when that mood was on him], or even a hack like Herbert Spencer; and all of that derives [cf. Arthur Lovejoy] from temporalizing the idea of the Chain of Being, which dates from the philosophical stone age. The difference is that those guys werent in the position of swimming in the shallow water, watching the amphibians march away onto the land.
As usual, Nietzsche understood it better than anybody else: its less the idea of some sort of life force or elan vital than an abstract will-to-power: the dual to entropy; a force with the properties of an ineluctable necessity or an immutable Fate. The story [as it were] is not about us, but about that; it is silly to suppose that we can rewrite it.
But, look on the bright side: the Übermenschen would have killed us off as a minor corollary of their competition; at worst our machines will end up keeping us as pets.
........
A couple of afterthoughts:
Regarding the automatic generation of theory, etcetera: the ancestry of the idea actually extends farther into the past than Leibniz: I may still have a little volume by Martin Gardner on Logic Machines which traces the notion of the grand Ars Combinatoria back to medieval times; I seem to recall the name of Ramon Lull, for instance. It keeps getting reinvented. E.g., writing on the semiotics of the cinema, Christian Metz refers to someones proposal of a permutational art in which poetry, discarding the chaste mystery of inspiration, will openly reveal the portion of manipulation it has always contained, and will finally address itself to computers...The poet would program the machine, giving it a certain number of elements and setting limitations; the machine would then explore all the possible combinations, and the author would, at the end of the process, make his selection. And compare Swift, of course, when Gulliver visits Laputa; etc., etc. Theres another monograph here. Of course it never seems to occur to anyone just how many all the possible combinations are. [Except Umberto Eco, I now recall; who traces this idea back to some tradition about the Torah of improbable antiquity.]
[I once had an elaborate scheme for musical composition by automata that sounded like this computational poetry. Fortunately that was before I had computers.]
In re evolutionary philosophies: I happened across a copy of Dawkins The Selfish Gene in a used bookstore a while ago and [for the first time] read it; what he means by a gene is difficult to figure out, but he probably intends to promote some kind of Platonism that reifies chunks of biological programming. If you ignore the propagandizing, this is more or less correct. Some decent ideas in this, but not enough to explain why he got to marry the blonde Romana.
And, incidentally: though it is presumed that there is no [classical] polynomial-time algorithm for finding the factors of an integer, from a plausible hypothesis you can show the validity of a [nonconstructive] test for compositeness that runs in something like quintic time. The plausible hypothesis is a generalized Riemann hypothesis, which gives you some idea as to the difficulty of this kind of question. In general the formidability of the mathematical machinery that has to be brought to bear to prove even the simplest propositions about running times is daunting. I sat through a three-hour talk by Smale not all that long ago, into which he inserted progressively more and more impressive apparatus until finally invoking results about the cohomology of the braid group [I burst out laughing in the middle of the lecture: I mean, how very] to establish some relatively trivial bounds on the difficulty of locating the zeroes of polynomials. The mathematical theory of algorithms is largely terra incognita.
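[The test alluded to is presumably Millers: checking every witness a below roughly 2(ln n)^2 is known to suffice if the generalized Riemann hypothesis holds, which is what makes the deterministic version conditional. A Python sketch, with no pretense of efficiency:]

    import math

    def miller_is_prime(n):
        # Strong-pseudoprime test over the GRH-sized set of witnesses.
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d, s = n - 1, 0
        while d % 2 == 0:                    # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        limit = min(n - 1, int(2 * math.log(n) ** 2) + 1)
        for a in range(2, limit + 1):
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                 # a witnesses that n is composite
        return True

    print([p for p in range(2, 60) if miller_is_prime(p)])

Note that a fast compositeness test only tells you a number has factors; it does not produce them, which is why factoring remains the harder animal.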
Later.
......
Notes after the fact (5/14/02):
In general I find it difficult to push a composition one way or the other when it gets stuck as this one did, about halfway along in the evolution from a three-paragraph flame to a heavily-footnoted twenty-thousand word philosophical essay in the style of my old hero Paul Feyerabend. On the one hand, the short and cryptic flames are fun, and brevity is the soul of wit; if you can just stop. The lengthy essays in the classical style, on the other hand, are, I suppose, rewarding in their own way, but Ive never had much luck getting anyone to read them, and enthusiasm, accordingly, tends to flag. [This isnt necessarily just the sort of who-gives-a-shit apathy that settles over you when you realize you cant imagine anyone ever reading the fucking thing, but a strong suspicion that the form itself is foolish and the exercise inappropriate.] Moreover the most obvious compromise, the midsized compilation of cryptic wisecracks, has its own problems. It took me years to break the habit of trying to write like Wittgenstein, and I dont care to provoke a relapse.
Anyway. Lets drop this into the outbasket with a few emendations and a few prefatory observations:
First, the P/NP distinction is fundamental but, conceivably, controversial, and part of my hesitation in shooting my mouth off about its consequences stems simply from the feeling that you really shouldnt wax authoritative about the philosophical implications of a mathematical theorem if you cant prove it.
On the other hand it certainly looks true, and the consequences dont seem to be well understood. So why not.
Second, Kurzweils book did eventually come out, but I havent read it. At the moment real biology seems more interesting than its imitations. Or something like that.
Third, it probably isnt fair to say Searle isnt even wrong in an interesting fashion: he is wrong in an interesting fashion. A lot of the time we dont know what were thinking about or why it dictates what were doing Freud is the locus classicus, but there are other examples - and there are many situations in which that classic paranoiac tendency to project intention where strictly speaking it does not exist is not wholly incorrect: the actions of institutions and corporations, for example, often evidence motivations and reasonings that are not instantiated in any particular person belonging to them. Something speaks Chinese, even if none of the people pushing the bits of paper around do.
[Right. I should be on the Enron defense team.]
Fourth, I dont think the Gödel incompleteness theorem says that the brain isnt a machine. Instead I think Churchs thesis says that Nature contains no computer more powerful than a Turing machine [I seem to recall that Fred Thompson used to call this a metaphysical hypothesis], the brain included; that, therefore, anything can be simulated by a digital computer [as universally assumed]; but that the unsolvability of the halting problem for Turing machines [equivalent to Gödel incompleteness] implies that the behavior of sufficiently complex machines is essentially unpredictable. [The brain may be a machine but machines arent machines. If you catch my drift.]
Its possible this argument would be vitiated if quantum computation were inherently more powerful than classical computation and, for instance, allowed the solution of NP-hard problems in polynomial time; which might affect Churchs thesis as well. [The relationship between the polynomial/nonpolynomial and decidable/undecidable distinctions isnt understood, though theres a strong temptation to identify the latter as some kind of limit of the former; cf. recent papers of Freedman.]
Fifth, I do think you ought to be able to write a sort of Boltzmann compiler that would optimize code nondeterministically. [How you evaluate the cost-effectiveness of this depends on how many times you plan on running a particular piece of code. Obviously.] And that the traditional reliance on deterministic [fixed-algorithmic] compilation, which necessitates rewriting the compiler for every new target architecture [as opposed to entering the target architecture as a parameter, and letting the compiler itself do the work], has imposed a very low ceiling on the complexity of practicable computer languages. [Why everybody gave up and settled on C.]
This is part of the solution to the problem of the impossibility of writing computer programs: as with any other combinatorial optimization problem, you have to try to find your way by rolling weighted dice.
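[The weighted dice in question are Metropolis dice. One bare-bones way of reading the suggestion is the annealing loop below; the names and the cooling schedule are mine, strictly for illustration, with the compiler application spelled out in the comment.]

    import math, random

    def anneal(state, neighbor, cost, steps=100000, t0=1.0, t1=0.001):
        # Metropolis-style annealing: propose a random local change and
        # accept it with probability exp(-delta/T), T falling from t0 to t1.
        # For the Boltzmann compiler, read: state = a candidate instruction
        # sequence, neighbor = a random legal edit, cost = measured run time.
        best, best_c = state, cost(state)
        cur, cur_c = state, best_c
        for k in range(steps):
            t = t0 * (t1 / t0) ** (k / steps)        # geometric cooling
            cand = neighbor(cur)
            delta = cost(cand) - cur_c
            if delta <= 0 or random.random() < math.exp(-delta / t):
                cur, cur_c = cand, cur_c + delta
                if cur_c < best_c:
                    best, best_c = cur, cur_c
        return best, best_c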
Theres more to it than that, but, another time.
Sixth, though the analogy between the attempt to reduce intelligence to logical calculation and Carnaps attempt to reduce science to a logic of induction is digressive, I figured Id leave it alone because of its intrinsic interest. It seems to me, in fact, that the question of how youd try to reduce the process of guessing a function from a finite set of its values [induction in the sense of Polya, a sort of inverse problem for the lambda calculus] to an algorithm is the central question in artificial intelligence [not exactly an accident; variations on this question keep recurring in Wittgensteins notes on the foundations of mathematics]; and though I expect that the elements of the solution are already in hand [nondeterminism, the principle of the associative memory as it applies to the problem of pattern recognition, the Metropolis algorithm, genetic recombination], at the moment I dont see the general solution. Russell used to say that theories were logical constructions from elementary facts. You wish.
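[A toy version, just to fix ideas: brute-force the inverse problem by enumerating small arithmetic expressions in one variable until one reproduces the given values. A serious version would search the same space with the weighted dice mentioned above rather than exhaustively; everything here, names included, is illustrative.]

    import itertools, operator

    def induce(pairs, max_depth=2, constants=(0, 1, 2)):
        # Guess a function from a finite set of (input, output) pairs by
        # enumerating expressions built from x, small constants, +, -, *.
        ops = [('+', operator.add), ('-', operator.sub), ('*', operator.mul)]

        def terms(depth):
            yield ('x', lambda x: x)
            for c in constants:
                yield (str(c), lambda x, c=c: c)
            if depth == 0:
                return
            subterms = list(terms(depth - 1))
            for (ls, lf), (rs, rf) in itertools.product(subterms, repeat=2):
                for name, op in ops:
                    yield ('(%s %s %s)' % (ls, name, rs),
                           lambda x, lf=lf, rf=rf, op=op: op(lf(x), rf(x)))

        for text, f in terms(max_depth):
            if all(f(x) == y for x, y in pairs):
                return text
        return None

    # induce([(0, 1), (1, 2), (2, 5), (3, 10)]) returns an expression equal
    # to x*x + 1, in whatever parenthesization the enumeration hits first.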
Seventh, its worth mentioning that Fritz Lang was far ahead of his time in his understanding of geek psychology; there is an essay, for instance, in the relationship between the theme of the underground city, in the early Lang, and the traditional Caltech romance of the steam tunnels. And, damn it, Im just the one to write it. But not today.
Eighth, it is true [though Ive never written out all the details] that the correct interpretation of quantum mechanics involves really fundamental logical issues; for example, the world is not a model of a set theory. [This is the content of the Bell inequality, for instance.] I always thought this should have illustrations in the theory of computation, but never had time to work them out. It may yet happen.
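[For reference, the CHSH form of the inequality is the sort of thing meant: if the four spin correlations were jointly realized as averages over one set of pre-assigned values, i.e. if the world were a model of the relevant set theory, then

    |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2

whereas quantum mechanics allows, and experiment observes, values up to 2\sqrt{2}, the Tsirelson bound.]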
Last, there is, alas, an even more excruciatingly elaborate discussion of all of this which, unfortunately, I will now probably have to push through toward some kind of conclusion. Since it began as a pile of excerpts out of my back correspondence [which I use, obviously, to think out loud], this letter included, if and when you ever get a copy, parts of it will probably sound very familiar. I apologize in advance.
Meanwhile, of course, you can quote me in whole or in part to your hearts content.
Sitting here watching Point Break as I put this back together. Whatever happened to Kathryn Bigelow? The best chick action director ever. [Check out her early biker noir, The Loveless, starring Willem Dafoe.] And note, incidentally, that The Fast And The Furious [which I also loved] reprises Point Break note for note. The Ex-Presidents, says Busey, are...surfers! What genius.
Later.
.....
[Jonathan Swift, from A Voyage to Laputa:]
The first Professor I saw was in a very large Room, with forty Pupils about him. After Salutation, observing me to look earnestly upon a Frame, which took up the greatest part of both the Length and Breadth of the Room, he said perhaps I might wonder to see him employed in a Project for improving speculative Knowledge by practical and mechanical Operations. But the World would soon be sensible of its Usefulness, and he flattered himself that a more noble exalted Thought never sprung in any other Mans Head. Every one knew how laborious the usual Method is of attaining to Arts and Sciences; whereas by his Contrivance, the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks and Theology, without the least Assistance from Genius or Study. He then led me to the Frame, about the Sides whereof all his Pupils stood in Ranks. It was twenty Foot Square, placed in the middle of the Room. The Superficies was composed of several bits of Wood, about the bigness of a Dye, but some larger than others. They were all linked together by slender Wires. These bits of Wood were covered on every Square with Paper pasted on them, and on these Papers were written all the Words of their Language, in their several Moods, Tenses, and Declensions, but without any Order. The Professor then desired me to observe, for he was going to set his Engine at Work. The Pupils at his Command took each of them hold of an Iron Handle, whereof there were fourty fixed round the Edges of the Frame, and giving them a sudden turn, the whole Disposition of the Words was entirely changed. He then commanded six and thirty of the Lads to read the several Lines softly as they appeared upon the Frame; and where they found three or four Words together that might make part of a Sentence, they dictated to the four remaining Boys who were Scribes. This Work was repeated three or four Times, and at every turn the Engine was so contrived that the Words shifted into new Places, as the Square bits of Wood moved upside down.
[A plate is inserted here illustrating The Literary Engine.]
Six Hours a-day the young Students were employed in this Labour, and the Professor shewed me several Volumes in large Folio already collected, of broken Sentences, which he intended to piece together, and out of those rich Materials to give the World a compleat Body of all Arts and Sciences; which however might be still improved, and much expedited, if the Publick would raise a Fund for making and employing five hundred such Frames in Lagado, and oblige the Managers to contribute in common their several Collections.
He assured me, that this Invention had employed all his Thoughts from his Youth, that he had emptyed the whole Vocabulary into his Frame, and made the strictest Computation of the general Proportion there is in Books between the Numbers of Particles, Nouns, and Verbs, and other Parts of Speech.
Compare John Stuart Mills account of his depression:
After the tide had turned, and I was in process of recovery, I had been helped forward by music ... . I at this time first became acquainted with Webers Oberon, and the extreme pleasure which I drew from its delicious melodies did me good by showing me a source of pleasure to which I was as susceptible as ever. The good, however, was much impaired by the thought that the pleasure of music (as is quite true of such pleasure as this was, that of mere tune) fades with familiarity, and requires either to be revived by intermittence, or fed by continual novelty. And it is very characteristic both of my then state, and of the general tone of my mind at this period of my life, that I was seriously tormented by the thought of the exhaustibility of musical combinations. The octave consists only of five tones and two semi-tones, which can be put together in only a limited number of ways, of which but a small proportion are beautiful: most of these, it seemed to me, must have been already discovered, and there could not be room for a long succession of Mozarts and Webers, to strike out, as these had done, entirely new and surpassingly rich veins of musical beauty.
Other examples can, of course, be multiplied at will [have we even mentioned Borges description of the Library of Babel?], but let these suffice for the moment.
..........
Final note (9/1/02): a polynomial-time algorithm for determining primality has been found [Agrawal, Kayal, and Saxena, PRIMES is in P]. So much for requiring the Riemann hypothesis.
____________
Falling bodies (8/8/00)