Review: Fated, by Benedict Jacka

This is the first book in the Alex Verus series, of which there are currently three – and a fourth one coming out later in 2013. A friend from Australia recommended them to me, saying “If you liked the Dresden Files, you’ll like these.” That set a high bar, but I was not disappointed.

In many ways, it’s a Dresden-like world with wizards and other magical creatures hiding beneath the surface. Jacka even makes a cute reference to Dresden with a remark of “supposedly there’s a wizard in Chicago who advertises in the phone book.” There are even some wizard organizations, of both good and evil varieties. And our hero is one of these wizards, somewhat caught between the two camps, much like Dresden.

But the similarities end there.

Alex Verus is no fire-wielding combat wizard. In fact, he’s quite the opposite. In face-to-face combat, he can’t do much more than throw a punch or try to trip you. You see, he’s a diviner who can see the future. But of course, seeing the future means you can change it, so what he really sees is the massively bifurcating tree of possible futures.

Do I turn right or left here? Do I say hello or run like hell? Do I accept this offer, or do I find out that it truly is an offer I cannot refuse? He can explore all those options and try to make the decisions that keep his body and soul joined. But he can only see so far, and he can’t see past someone else’s independent decisions. So, like most good diviners, he puts most of his efforts towards lying low and staying out of trouble.

But then trouble comes looking for him. The various powers-that-be want help cracking open a mysterious artifact, and to do that, they need a diviner. It’s a magical safe-box of sorts, and who better than a diviner to see the future of all those magical combination locks? Alex is not exactly at the top of the list for diviners, but the best of them have all coincidentally realized that now is a good time to be far, far away from this artifact and those who would open it. Alex isn’t quite that smart, or that lucky.

So he gets drawn back into a world he had done his best to leave behind, hoping he’s smart enough to find his way back out again when it’s all over.

I liked it. A lot. As much as I enjoy Harry Dresden blundering in with his blasting rod and .44 Magnum, he solves more of his day-to-day problems through brute force than cunning guile. Alex Verus doesn’t have the option of firepower.

He has to be smart. Or dead. He tries to be smart.

No Singularity

Last week I talked about missing out on the next big revolution in fiction, and how that can make future fiction hard to write believably. However, if you thought I was going to go so far as to predict the impending technological singularity, you’re wrong.

The supposedly approaching technological singularity is some point of exponential advancement that changes the game so much that we cannot really see past it. Depending on the exact definition, I’ve seen it predicted to occur as early as 2011 and as late as 2050.

Well, I disagree. Depending on the more precise definition of this technological singularity, I say maybe, no, and Hell No. If you’ll bear with me on this rather long entry, I’ll explain why.

AI: the Easy Singularity

The tamest definition of this technological singularity is that we will create a computer intelligence that is more intelligent than the smartest humans. On the face of it, this seems believable. Given the advancements that Moore’s Law brought to computational power in the last fifty years, it might even seem to be inevitable.
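To put a number on why it seems inevitable, here’s a back-of-the-envelope sketch (my own figures, not anything from a source): a fixed doubling period compounds into staggering growth over a few decades.

```python
# Back-of-the-envelope sketch (illustrative numbers): transistor counts
# doubling roughly every two years compound into enormous growth factors.

def moores_law_factor(years, doubling_period=2.0):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Fifty years at a two-year doubling period: 2**25
print(f"{moores_law_factor(50):,.0f}x")  # 33,554,432x
```

Whatever the exact doubling period, the shape of the curve is what feeds the intuition that raw processing power will eventually get there.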

In specific areas, we have already reached this. Notably, computers can play some games perfectly, i.e. they cannot be beaten. For other games, they can beat the best human players. Chess was a recent and notable triumph for the silicon team. But they are still losing other games to human players. (See this informative and humorous XKCD comic.)

But skill at games is not the only measure of human intelligence. Visual and speech processing are still difficult for computers, though they are improving. Creativity is hard to measure, but with the exception of some isolated problems, computers have not shown much creativity. A sense of humor still seems a long way off. The brass ring, of course, is the self-aware computer. That’s the real cogito ergo sum moment.

If Moore’s Law continues, we may reach the required processing power within the predicted timeframe, but I foresee a couple of problems for the hyper-intelligent computers of the singularity prediction.

The first problem is that reaching the level of even human intelligence is probably harder than it looks on paper. It’s about more than just processing power. Specifically, it’s going to require that we reach an understanding of how human intelligence works in the first place, and we’re simply not there yet. How is it that these sporadically firing neurons translate into the subjective experience of sentience? How important is the structure of the human brain that has evolved from more primitive brains? How do the chemical regulators keep our neural nets in good operating condition? It’s not just a matter of connecting enough transistorized neurons and flipping the switch. There’s structure and a billion years of Darwinian design at work.

The second problem is this notion of hyper-intelligence. Briefly consider qualitative aspects of intelligence vs. quantitative aspects of intelligence. A human is qualitatively more intelligent than a lizard. He thinks about problems, designs tools to solve them, and ultimately eats the lizard. Mmmm, that’s good lizard. Some humans (my wife, for example) are quantitatively more intelligent than I am. She can solve mathematical problems much faster than I can, but given enough time, I’ll get there eventually.

The upward ramp of Moore’s Law gives us a lot of hope for computers that could be quantitatively more intelligent than humans, but I don’t think it automatically provides a qualitatively higher intelligence. Certainly, the old Church-Turing thesis is often interpreted to suggest that any calculation that can be performed (i.e. the human experience of consciousness) can be performed by a Turing machine, and that interpretation is one of the strongest arguments that increasing computing power will lead to human-level computer intelligence. However, the same theory of computation also makes it clear that there are some problems (e.g. the halting problem) that are beyond the ability of any Turing machine.

Thus, it seems to me that while computers may become significantly quantitatively more intelligent than humans, there may be a real upper limit on qualitative improvements in intelligence. What would that look like? I would expect it to be like talking to someone who knows pretty much everything and can answer hard questions quickly, but they would still be just as clueless as we are on questions like “will I be happy with Sue?”
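To make the halting-problem point concrete, here’s a small illustrative sketch of my own (not from any source): no general `halts()` test can exist, because a program could consult it and then do the opposite, so in practice we can only check halting under a step budget.

```python
# Why halting is only approximately testable in practice. A true
# halts(f) cannot exist; the best we can do is run with a step budget
# and sometimes come back empty-handed.

def halts_within(make_program, max_steps):
    """Run a generator-based program step by step, up to max_steps.
    Returns True if it finishes, or None if the budget runs out
    (it might halt later, or it might run forever - we can't tell)."""
    program = make_program()
    for _ in range(max_steps):
        try:
            next(program)
        except StopIteration:
            return True
    return None

def finite():            # halts after 10 steps
    for _ in range(10):
        yield

def endless():           # never halts
    while True:
        yield

print(halts_within(finite, 100))   # True
print(halts_within(endless, 100))  # None (inconclusive)
```

The `None` case is the interesting one: no amount of extra budget ever turns “inconclusive” into a guarantee, which is the hard limit Turing proved.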

Of course, the zero-eth problem with all this is that Moore’s Law may not continue long enough to reach this singularity. For the last twenty years, I’ve been reading predictions that Moore’s Law only has another five to ten years left in it. Eventually, they’ll be right. I’m not saying that we’ll never get the required processing power – after all, evolution managed to crank it out – but we might have to give up on the notion of getting exponential results in logarithmic time.

So, will we see this easy singularity of artificial intelligence by 2050? Ehhh, maybe. Maybe not. I think we’ll see it eventually, but I don’t know if we’ll ever get that qualitative advance.

Impenetrable Wall: the Medium Singularity

More advanced definitions of “the singularity” typically say that once we build these hyper-intelligent computers, they will change the world in ways that we cannot imagine, and hence, we cannot see into the future past that event. After all, we only have normal intelligence, so how can we possibly guess at where hyper-intelligence is going to lead us?

Personally, I don’t think that gives human imagination enough credit. Scholarly study of this question leads to possibilities ranging from utopia to human extinction and all manner of possibilities in between. Utopias are fairly easy to imagine, though the road to reach them is hard. Human extinction has been over-imagined, from The Terminator to The Matrix. I think we’ve also seen plenty of in-betweens. One of my favorites is the Poul Anderson series ending in The Fleet of Stars, where hyper-intelligent computers simply want to manage humanity into a safe, peaceful, and boring existence.

We don’t seem to have any trouble imagining futures with hyper-intelligent computers, and face it, we’re getting by in this area on the odd-balls, the kooks, the SF-writers. Put some serious policy wonks on it, and we’ll soon be talking about the best tax strategies to manage Skynet’s homicidal rage.

Ah, but it’s not enough just to imagine the possibilities, is it? In order to foil this aspect of the singularity, we have to predict what’s going to happen beyond that impenetrable wall of exponential change. How on earth can we lowly humans do that?

Well, we can’t.

But we can’t predict what’s going to happen on this side of that impenetrable wall of change either. Who’s going to win the U.S. presidential election this fall? Will Iran build a nuclear bomb or fall to a popular revolution? Will wireless broadband ever reach parity with physical cables for the last-mile problem of connectivity? Will solar panels ever get cheap enough to drive us towards a privately-owned distributed power system, and if so, when? Will the Cubs ever get back to the World Series?

The only thing I can grant the singularity camp is that predictions beyond the achievement of hyper-intelligent computers will be more difficult, just as any significant change makes predictions more complex. The creation of the personal computer threw technologists for a loop. Ditto with the creation of the Web. However, some things remain the same, no matter how much change we throw at them. Top among them is human nature.

My predictions for a post-hyper-intelligent-computer world: Humans will be noble but petty. They will be greedy and charitable. They will love, and they will hate. Fathers will want to play ball with their sons, and daughters will declare that their mothers have RUINED THEIR LIVES!!! These things haven’t fundamentally changed in ten thousand years. The arrival of hyper-intelligent computers, friendly or not, won’t change them either.


Post-Humanism: The Really Hard Singularity

In fairness to the original singularity camp (Vernor Vinge, etc.), this kind of thing was not in their definition of the singularity. They were making what they felt were reasonable predictions up to the point where they felt they could no longer make such predictions. They didn’t sign on for humans becoming immortal demi-gods.

But I include this here because enough post-humanists (or trans-humanists, take your pick) have hitched their miraculous transformations onto the computing singularity bandwagon, and they’re making predictions in the same timeframe as the computer singularity folks. What’s more, I’ve run into too many woo-woo technology lovers who have looked at a few exponential charts and convinced themselves that the techno-rapture is at hand.

So, what the hell am I talking about here? Some folks believe that we’re on the verge of changing human nature in big ways. The most aggressive think that we’re going to download our minds into computers at the earliest opportunity, shedding our physical bodies like gas-guzzling SUVs. Others think that life-extension is advancing rapidly towards the point that life expectancy will grow by greater than one year for each year that passes – effective immortality, even for those of us alive today. Still others think that we’re a generation away from engineering children who are as far in advance of us as the hyper-intelligent computer is ahead of my laptop.

To which I say: Bullshit, not likely, and not soon.

The notion of downloading into a computer has been around for a while. I can’t say when I first ran into it, but when I saw it dealt with in SF (again by Poul Anderson) it seemed an old concept to me. Old yes, but practical, no. The first thing I’ll throw out there is the technical problem of non-destructively reading a brain’s complete state, building an electronic system that can match it, and duplicating all the chemical support systems electronically. But they landed a man on the moon, so I won’t make it a sticking point.

The second problem, though, is a messier one. Would you really want to live as a computer? In most of the ways I’ve seen this envisioned, the downloads live virtual lives with no physicality. Perhaps they interact some with the physical world, but only at an intellectual level. Is that really enough for you?

I direct your attention again to that games diagram from XKCD. One of the games that computers will never play better than humans is “Seven Minutes in Heaven”. I think a human mind living in a computer would go mad without the comfort of physical touch, without the sensation of the wind and the rain, without the taste of food or the smell of freshly cut grass. I believe this goes beyond a mere craving. I think our minds need that physicality. It’s part of who we are. We are animals of flesh, not free-floating motes of intellect.

We could, of course, turn ourselves into robots, but they would have to be exceptional robots. More properly, they would have to be androids with at least all the senses and capabilities we have today. Again, that’s another technical challenge, but I’ll waive it here in Wonderland. Still, if we do manage all this, how different is our human nature? Haven’t we just turned ourselves into immortals with an off-site backup?

That brings me to that second notion of post-humanism: biological immortality through life-extension techniques. Again, there are technical problems, though before I waive my objection, let me point out that we know far less about manipulating biology with precision than we know about silicon, and there’s no Moore’s Law pushing us along here. Still, life expectancy is increasing. How far can it go?
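The arithmetic behind that “one year per year” threshold is worth spelling out. In a toy model of my own (the starting age and life expectancy are illustrative, not from any actuarial table), expected remaining years shrink whenever life expectancy gains less than a year per calendar year, and grow without bound once it gains more:

```python
# Toy model of the "escape velocity" claim: does a person's expected
# remaining lifespan ever hit zero as both their age and the general
# life expectancy keep climbing? Numbers here are illustrative.

def years_remaining(age, life_expectancy, gain, horizon=500):
    """Advance year by year; return how many years until mortality
    catches up, or None if it never does within the horizon."""
    for year in range(horizon):
        if life_expectancy - age <= 0:
            return year          # caught: expected years ran out
        age += 1
        life_expectancy += gain  # expectancy rises by `gain` per year
    return None                  # escape velocity: never caught

print(years_remaining(40, 80, 0.5))  # gain < 1: caught after 80 years
print(years_remaining(40, 80, 2))    # gain > 1: None (never caught)
```

The model is crude, but it shows why the threshold matters: anything short of a full year gained per year only postpones the end, while anything past it changes the kind of problem entirely.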

The real problem is that we’re kind of fighting evolution here, or at the very least, evolution is not our friend in this case. We’ve been bred to breed and then die. Pass on our genes to the next generation, and evolution is done with us. At best, we’re useful to make sure that our genes continue on to a second or even third generation, but before long, we’re standing in Darwin’s way.

So we’ve been designed to not last that long, or at the very least, we’ve not been designed to last that long. Planned obsolescence at the genetic level. To get around that, we have to solve some problems that evolution has never bothered to try, and we’re trying to do it for people who are already up and moving. Personally, I think we’ve got a better shot at downloading into androids.

But perhaps that third post-human notion has some merit, eh? Design our kids to be immortal, immune to disease and age and any human frailty we want to edit out. How about that? We’ve mapped the human genome. Let’s start writing some new code.

I don’t think so, and for once, I’m not going to put the strongest barrier at the technical level – though be assured, that’s no cakewalk either. Instead, it’s human nature that’s going to slow us down, and ironically, I think it will be parents’ love for their children that will limit the gifts that we give them.

Think about it. You and your spouse are about to start your family. This is today, or perhaps it’s a year after the hyper-intelligent computers have dropped by to say “dude”. Now a doctor tells you that he wants to significantly rewrite the genetic code of your offspring so that he’ll be smarter, healthier, and immortal. “Sounds great,” you say, but then you ask the first question any potential parent would ask. “How many times have you done this?”

“Ummm… well, never. You’ll be the first.”

“I’m sorry, but you need to get the fuck out of my house.”

Sure, sooner or later, someone would give it a shot, but 99.99% of parents would wait until that first 0.01% had grown up and designed some kids of their own. Then maybe another one or two percent of that next generation would try it. It would grow, generation by generation, until there would be a tipping point of everyone doing it, and the poor would be demanding universal genetic health care. But it would not happen overnight, and it sure as hell won’t happen in the next couple of generations from now as a number of folks are predicting. This will take a century or more, especially for some of the more radical proposals.
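That generational arithmetic can be sketched out. In this toy model, the 0.01% seed comes from the guess above, but the tenfold growth per generation and the 25-year generation length are my own illustrative assumptions:

```python
# Toy adoption model. The 0.01% seed fraction is from the post's
# guesswork; the tenfold-per-generation multiplier and 25-year
# generations are illustrative assumptions, not data.

def generations_to_tipping(seed=0.0001, multiplier=10.0, tipping=0.5):
    """Count generations until the adopting fraction passes `tipping`."""
    fraction, gens = seed, 0
    while fraction < tipping:
        fraction = min(1.0, fraction * multiplier)
        gens += 1
    return gens, gens * 25  # (generations, years at 25 per generation)

print(generations_to_tipping())  # (4, 100): a century, give or take
```

Even with adoption multiplying tenfold every generation – an aggressive assumption – it still takes about a hundred years to go from the first brave family to a majority, which is the point.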

Still, in all three of these post-humanist scenarios, I think they fail on the impenetrable wall of unpredictability. People will still be people, even if they’re androids or immortal meat-bags. We can hope that they will be better people, but we’ve already known better people: Mother Teresa, the Dalai Lama, Martin Luther King Jr., and of course, Tom Landry. (Go Cowboys!) We can readily imagine stories in a world filled with these types, just as we can imagine worlds filled with their opposites. Utopian and dystopian fantasies are a staple of the SF genre.

So, no, we’re not on the verge of some biotech rapture which blinds us to the future.

Story-telling: the Non-existent Singularity

But as much as I may pooh-pooh the likelihood of any of these singularity events, I don’t ignore them. Even if they never come to pass, they’re fun ideas to play with, simply because we SF geeks like to think about odd scenarios and then ask, “What happens next?” Because they postulate such a different world, we’re drawn to the other side of that impenetrable wall to explore, have fun, and tell stories.

It’s because of that imaginative drive that I don’t think any change will ever present us with an impenetrable wall.

And I also think it’s that same drive that gives us any chance of ever reaching those theoretical walls in the first place.

What Are We Missing?

One of the least avoidable dangers of writing about future worlds in science fiction is missing the technological revolution that’s just around the corner. Certainly, it’s equally easy to forecast a technology that never arrives, but that doesn’t date the story. A story written in the 1930’s with flying cars can still feel like the future, but one that leaves out computers is fatally dated.

Missing the Call…

The most glaring example of this that I’ve run into in recent memory was Connie Willis’ 1992 “Doomsday Book”. It is an excellent novel and won both the Hugo and Nebula awards. It involves some time travel back to medieval England from the year 2054. It wasn’t the time travel that bothered me though, since that still feels like future-tech. No, what kept throwing me out of this future world of 2054 was that they had no mobile phones.

Certainly, they had advanced video phones, but all of them were tied to landlines. Normally, a little thing like this would have been easily ignored if it remained in the background, but it had a significant impact on the plot. Specifically, characters were trying to get hold of one another, and they kept missing each other because one or another was away from their desk or office when the call arrived. Not being able to get hold of various people was rapidly escalating into a life and death situation. I even recall one scene where someone is told to wait by the phone no matter what. Really – glued to the landline!

When I read it five or six years ago, I’d had a mobile phone for eight years or so, and in that time, they had already gone from miniature bricks that businessmen carried to the early smart phones that were well on their way to becoming ubiquitous. Now mobile phones are everywhere, from grandmothers to African bushmen, and it’s only been twenty years since Willis’ book was released. The notion of not being able to get hold of someone in an emergency because they’re away from their desk now seems ludicrous.

In fairness, it’s hard to fault Willis. In 1992, mobile phones really were bricks, and they were most common as car phones. Even then, they were idle toys for the rich or politically connected, not everyday tools for the common man. It wasn’t just that the technology got so much better so quickly. It’s that the demand that wasn’t there at all in 1992 became rampant in just a decade.

A Fleet of Missed Boats

But Willis is not alone in having missed out on the shape of technologies around the corner. Plenty of authors in the 1960’s familiar with room-sized computers completely missed the desktop computer that arrived just ten to fifteen years later. While several authors in the 60’s and 70’s talked about computers networked together, I don’t think many (or any) of them foresaw the massive peer-to-peer impact that the web has had on personal communications. And I think most everyone missed the pending collapse of the Soviet Union pretty much right up to summer of 1989.

Are We Forever Doomed?

So where does that leave us now? What technological revolutions are just around the corner waiting to mock today’s science fiction writers? Are we on the verge of common and effective anti-viral treatments, i.e. no more common cold, influenza, or AIDS? Are computer implants about to become not only possible but turn into the mobile phone of the next generation? Are we about to get that peace-loving world government, not through war or democratic revolution, but through that unexpected philosophy to be named later?

This is pretty hard to guess at because not getting caught by the unexpected revolution means guessing not just one thing but all things. Miss one life-changing advancement and your story could be like Connie Willis’, with everyone playing phone tag, afraid to get up from their desks. With possible changes looming in computers, genetics, medicine, politics and more, it’s hard to know where to jump. Certainly, you can jump too far without much penalty since your flying car will be either commonplace or still futuristic, but if you don’t jump far enough in the right direction, you might start looking foolish in just a few years.


This might seem to be an endorsement of some variation on “the coming Singularity”, but it’s not. I’ll talk about that in more detail next week, but for now I’m going to say that yes, this kind of guesswork is hard now, but it’s always been hard. It was hard for writers back in the 50’s, just as it was hard when Connie Willis missed mobile phones in 1992. Probably the biggest thing that’s changed in the last 60 years on this front is that now we have a real appreciation for how hard this kind of guesswork is.

But still, any tips for the future would be nice. What life-changing advance is waiting around the corner, hoping to make me look foolish in twenty years? I’d like to beat it if I can, but even if I can’t, at least I’ll be in good company.