Ellis (and other Unbelievers ;) fails to allow for the Star Trek/Neuromancer phenomenon: the "if we can imagine it, we can build it" idea, which has worked for many, many apparently impossible technologies. It's not quite the wikialistic equivalent of "clap if you believe in fairies", but in some ways it's close. Basically, the more engineers who have been exposed to the idea of a technology, the more likely it is that some of them will eventually work out a way to build it. Since there's nothing inherently impossible about general AI (even if you're religiously opposed to the idea of machine consciousness), its commercial value alone suggests that it's in fact inevitable (as long as someone else's eschatology doesn't intervene). From there to the Singularity is, well, not exactly a leap of faith.