Review: Going Interstellar, by Johnson & McDevitt

This is a collection of short stories and essays edited by Les Johnson and Jack McDevitt.

I’ve reviewed other novels by McDevitt and enjoyed them quite a bit, so when this anthology came out, I was quick to buy it. Overall, it was pretty good, but its mixed nature made it inconsistent.

I enjoyed all the essays. They were full of facts, history, and a reasonable amount of hard science. They even had a few diagrams, so I’m glad I bought it in dead-tree edition rather than e-book. Mostly the essays dealt with various proposals for real interstellar spacecraft that would plod along at slower than the speed of light. While that can make for weak fiction, it’s actually possible under our current understanding of the universe. No magic physics is required.

The fiction was hit or miss for me. I did really enjoy one of the stories by McDevitt, and it truly did make me care about the main character, an AI computer that finally got a shot at the big game. A couple of others left me flat, and one truly disappointed me. It dealt with a multi-generation colony ship, and I found it lacking compared to my own novel of a similar colony ship. That’s not really fair to this story, of course, but that’s how it hit me.

So, if you want some info on real interstellar proposals, get this for the articles, and maybe check out the fiction.

DARPA’s 100-year Starship Program

I spent the weekend down in Houston for ApolloCon, and the most surprising panel I attended was on a DARPA program to lay down the research necessary to launch an interstellar ship a hundred years from now. To quote from their announcement last year:

In 1865, Jules Verne put forward a seemingly impossible notion in From Earth to the Moon: he wrote about building a giant space gun that would rocket men to the moon. Just over a century later, the impossible became reality when Neil Armstrong took that first step onto the moon’s surface in 1969.

A century can fundamentally change our understanding of our universe and reality. Man’s desire to explore space and achieve the seemingly impossible is at the center of the 100 Year Starship Study Symposium. The Defense Advanced Research Projects Agency (DARPA) and NASA Ames Research Center (serving as execution agent), are working together to convene thought leaders dealing with the practical and fantastic issues man needs to address to achieve interstellar flight one hundred years from now.

They’re not handing out research grants at the moment, but they hosted a symposium last fall to talk about issues from propulsion to philosophy, i.e. not just how to get there, but why we should go. This year, they kicked off the seed funding to create a private organization called the 100 Year Starship. They’re holding another symposium this September in Houston. Given that it’s just a few hours’ drive for me, I’m seriously thinking about it.

I mean, really, this is seriously cranking my geek.  Or… you know, something that sounds maybe a little less disturbing.

Von Neumann Sanity Check

Von Neumann probes are going to turn the universe into grey goo. Or at least that’s what I’m led to believe by most of the fictional representations of their existence. The negative consensus is that these self-replicating exploration probes will go mindlessly off-mission and keep going until they’ve turned every useful atom in the universe into one of their offspring.

It would be irresponsible for us or any other sentient biological race to create them. No, we biological folk would treat the universe much more ethically.

Ummm… why is that?

Humans have committed most of the supposed Von Neumann sins here on Earth over our long climb up to dominance. Why is it that we trust a human colonization wave of self-replicating people more than the machines?

Is it merely that we would replicate more slowly? That doesn’t make much difference over astronomical time scales.

Is it that being biological makes us less prone to mistakes or bad ideas? Looking at some of the shit from our history does not bolster any sense of human infallibility.

Or is it simply that we humans have reached the point where we realize that unleashing a wave of unthinking resource eaters upon the universe would be a bad idea? Yes, we are wise, and so we know far better than any machine ever could.
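To put a number on that replication-speed point: even slow, human-paced replication saturates a galaxy-sized resource pool in a cosmological eyeblink, because exponential growth doesn’t care much about the doubling time. A back-of-the-envelope sketch (the star count and doubling times are illustrative assumptions, not data):

```python
import math

STARS_IN_GALAXY = 1e11  # rough order-of-magnitude assumption

def years_to_fill(doubling_time_years):
    """Years for one replicator to grow to one per star,
    ignoring travel time and assuming pure exponential growth."""
    doublings = math.log2(STARS_IN_GALAXY)  # about 36.5 doublings
    return doublings * doubling_time_years

# A "fast" machine probe doubling every 10 years versus a "slow"
# biological colony wave doubling every 1,000 years:
fast = years_to_fill(10)     # roughly 365 years
slow = years_to_fill(1_000)  # roughly 36,500 years
```

Both figures are negligible against the galaxy’s roughly ten-billion-year age, so replication speed alone can’t be the ethical distinction.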


I think the big disconnect for me is that we imagine we can create a self-replicating spacecraft capable of travelling to a new star system, exploring it, mining it for resources, building and launching child craft to carry on, and performing its actual mission in that system, and yet somehow lacking the intelligence and stability to stay on mission rather than going all wonky and consuming the universe.

Smart folks have suggested various ways to limit them by only producing so many offspring or other such hard-rule limits, but I feel they’re missing the more obvious solution: just make them as smart as we are and see to it that they have high ethical standards.

That may seem like I’m asking a lot, but given that we’re nowhere close to being able to launch interstellar probes at even one percent the speed of light or build self-replicating machines, I think the AI folks have some time on their hands. Even pushing the supposedly impending singularity back a couple of hundred years, I suspect our computing resources will get there before our spacecraft capabilities are ready.

Heck, if the “upload your brain” folks ever get their wish, it might effectively be us humans popping out into the cosmos. Send a lot of us, as many as you can cram into the silicon. Fifty or a hundred human minds aboard each probe ought to be able to make sound decisions about not smelting the entire universe. Plus, they’d be able to keep one another company.

I’d still worry a bit about one group going a little fanatical, so I’d ensure some cross-fertilization. Plot out the exploration so that two or more child ships are sent to the same location from different parent ships. Upon arriving together, the outgoing child ships would each get a mixed crew drawn from the different parent crews. Then don’t leave operational machinery behind. Do your thing and move on. The vast majority of resources will remain untouched.
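That rendezvous-and-remix rule is simple enough to sketch. Everything here is hypothetical, just to make the idea concrete: two child ships from different parent lineages meet at the same system and swap half their minds, so each outgoing crew blends both lineages and no single lineage’s quirks can drift unchecked.

```python
def mix_crews(crew_a, crew_b):
    """At a rendezvous, two arriving crews from different parent
    lineages each trade half their minds, so both outgoing crews
    carry members of both lineages."""
    half = len(crew_a) // 2
    child_1 = crew_a[:half] + crew_b[half:]
    child_2 = crew_b[:half] + crew_a[half:]
    return child_1, child_2

# Example: four uploaded minds per parent lineage.
lineage_a = ["A0", "A1", "A2", "A3"]
lineage_b = ["B0", "B1", "B2", "B3"]
child_1, child_2 = mix_crews(lineage_a, lineage_b)
# child_1 -> ["A0", "A1", "B2", "B3"]
# child_2 -> ["B0", "B1", "A2", "A3"]
```

The deterministic half-and-half split is just one policy; the point is only that every outgoing crew is guaranteed to contain minds from more than one parent ship.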

It doesn’t seem that hard to me to plan something that won’t go catastrophically off-mission. Just give them the good sense that we seem to have.

Or is that the good sense we hope we have?