Review: WWW: Wonder, by Robert J. Sawyer

This was the final book in the WWW trilogy, which chronicles the emergence of Webmind, an intelligence born within the fabric of the internet. The first two books dealt with its creation and then its early forays into the public light. This last one deals with how it and humanity come to terms for peaceful coexistence.

With such examples as HAL and Skynet to prejudice us, it’s hardly surprising that Webmind was not received with open arms. Some want to kill it immediately. Others want to isolate it somewhere. But Webmind has its own priorities and shows itself to be a worthy opponent and a magnanimous winner. I don’t want to spoil it with specifics, but eventually Webmind proves itself to be a useful addition to humanity.

Meanwhile Caitlin, the blind teenage girl who discovered and nurtured Webmind, manages to ride out her celebrity status and move further into adulthood. There’s nothing particularly sci-fi about that part of the story, but it was sweet, and it kept Webmind’s increasingly high-stakes propositions tied to the realm of mere mortals.

All in all, it was a nice conclusion to the trilogy.

Review: WWW Wake, by Robert J. Sawyer

I don’t remember how I first came across this one, but the basic idea is that the Internet becomes self-aware. It’s an idea I’ve toyed with from time to time but never figured out how to turn into a story. Sawyer did.

It’s mostly told through the POV of a blind teenage girl who gets an experimental implant to grant her sight, but there are also some other characters scattered around the globe playing their own parts. While the girl’s operation is at first deemed a failure, time changes that. I don’t want to say too much more, because it’s a spoiler worth preserving, though I will say I was initially annoyed by what she sees via her implant. I recognized it was required by the plot, but I was glad to see it go.

We also see some of the story told from the POV of the emerging sentience of the internet. Though it appears only in small snippets, I found that part very interesting. Over the course of the book, it goes from a barely aware sentience to a fully self-aware, communicative mind. That in itself was an interesting journey.

So, overall I enjoyed it, with only minor points of nit-picking. It’s clearly the first in a trilogy, so I look forward to seeing where the rest of this goes.

Von Neumann Sanity Check

Von Neumann probes are going to turn the universe into grey goo. Or at least that’s what I’m led to believe by most of the fictional representations of their existence. The negative consensus is that these self-replicating exploration probes will go mindlessly off-mission and keep going until they’ve turned every useful atom in the universe into one of their offspring.

It would be irresponsible for us or any other sentient biological race to create them. No, we biological folk would treat the universe much more ethically.

Ummm… why is that?

Humans have committed most of the supposed Von Neumann sins here on Earth over our long climb up to dominance. Why is it that we trust a colonization wave of self-replicating humans more than we trust the machines?

Is it merely that we would replicate more slowly? That doesn’t make much difference over astronomical time scales.

Is it that being biological makes us less prone to mistakes or bad ideas? Looking at some of the shit from our history does not bolster any sense of human infallibility.

Or is it simply that we humans have reached the point where we realize that unleashing a wave of unthinking resource eaters upon the universe would be a bad idea? Yes, we are wise, and so we know far better than any machine ever could.


I think the big disconnect for me is that we imagine we can create a self-replicating spacecraft capable of travelling to a new star system, exploring it, mining it for resources, building and launching child craft to carry on, and performing its actual mission in that system, yet somehow lacking the intelligence and stability to stay on mission rather than go all wonky and consume the universe.

Smart folks have suggested various ways to constrain them, such as capping the number of offspring or imposing other hard-rule limits, but I feel they’re missing the more obvious solution: just make them as smart as we are and see to it that they have high ethical standards.

That may seem like I’m asking a lot, but given that we’re nowhere close to being able to launch interstellar probes at even one percent the speed of light or build self-replicating machines, I think that the AI folks have some time on their hands. Even pushing the supposedly impending singularity back a couple of hundred years, I suspect our computing resources will get there before our spacecraft capabilities are ready.

Heck, if the “upload your brain” folks ever get their wish, it might effectively be us humans popping out into the cosmos. Send a lot of us, as many as you can cram into the silicon. Fifty or a hundred human minds aboard each probe ought to be able to make sound decisions about not smelting the entire universe. Plus, they’d be able to keep one another company.

I’d still worry a bit about one group going a little fanatical, so I’d ensure some cross-fertilization. Plot out the exploration so that two or more child ships are sent to the same location, each from a different parent ship. Upon arriving together, the outgoing child ships would each get a mixed crew of minds drawn from the different parent crews. Then don’t leave operational machinery behind. Do your thing and move on. The vast majority of resources will remain untouched.
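For the curious, the crew-mixing idea above can be sketched in a few lines of code. This is purely illustrative: the function name, crew sizes, and round-robin dealing scheme are my own assumptions, since the post doesn’t specify any numbers.

```python
import itertools

def mix_crews(parent_crews, crew_size):
    """Blend the crews of parent ships that rendezvous at one system.

    Minds are dealt out round-robin, one from each parent in turn, so
    every outgoing child ship carries minds from multiple lineages.
    (Hypothetical sketch; names and sizes are made up.)
    """
    # Interleave: parent A's 1st mind, parent B's 1st, A's 2nd, B's 2nd, ...
    pool = [m for group in itertools.zip_longest(*parent_crews)
            for m in group if m is not None]
    # Slice the interleaved pool into child crews of the requested size.
    return [pool[i:i + crew_size] for i in range(0, len(pool), crew_size)]

# Two parent ships arrive with four minds each; launch two child ships.
a = ["A1", "A2", "A3", "A4"]
b = ["B1", "B2", "B3", "B4"]
children = mix_crews([a, b], crew_size=4)
# -> [['A1', 'B1', 'A2', 'B2'], ['A3', 'B3', 'A4', 'B4']]
```

Each child crew ends up with minds from both parents, which is the point: no single lineage’s quirks ride alone into the next system.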

It doesn’t seem that hard to me to plan something that won’t go catastrophically off-mission. Just give them the good sense that we seem to have.

Or is that the good sense we hope we have?