
Future Imperative

What if technology were being developed that could enhance your mind or body to extraordinary or even superhuman levels -- and some of these tools were already here? Wouldn't you be curious?

Actually, some are here. But human enhancement is an incredibly broad and compartmentalized field. We’re often unaware of what’s right next door. This site reviews resources and ideas from across the field and makes it easy for readers to find exactly the information they're most interested in.


The future is coming fast, and it's no longer possible to ignore how rapidly the world is changing. As the old order changes -- or more frequently crumbles altogether -- I offer a perspective on how we can transform ourselves in turn... for the better. Nothing on this site is intended as legal, financial or medical advice. Indeed, much of what I discuss amounts to possibilities rather than certainties, in an ever-changing present and an ever-uncertain future.

Friday, April 07, 2006

A Wrinkle in the Science Found in Wrinkles in the Brain

The Washington Post reports a study suggesting the brains of highly intelligent children develop in ways distinctly different from those of ordinary children. The article notes:

The study is the first to try to measure whether differences in brain development are linked to intelligence, said researchers at the National Institute of Mental Health, who did several brain scans on 309 healthy children between the ages of 6 and 19.

The scans showed that children with the highest IQs began with a relatively thin cortex -- the folded outer layer of the brain that is involved in complex thinking -- which rapidly grew thicker before reaching a peak and then rapidly becoming thinner, said Philip Shaw, the lead investigator. Children of average intelligence had a thicker cortex around age 6, but by around 13 it was thinner than in children of superior intelligence.

A graphic actually notes the variations in thickness between the two groups converging around 19 years of age. Still, as the article also mentions, finding clear physical differences in brains that correlate to significant differences in intelligence has long been a critical goal of neurophysiology. Everything we can identify about the discernible causes or even the effects of heightened intelligence will enable us not only to measure such characteristics more accurately, but also to get an idea of how to increase them further -- through genetic engineering, conventional education, pharmaceuticals (nootropics), accelerated learning, etc. After all, if you have a number of benchmarks indicating relative intelligence in a subject, then one measure of an experiment's success is seeing how those benchmarks change. Of course, there are others, such as the mental health of the patient, but having a basic, verifiable scale is extremely useful for radical enhancement programs.
The study, being published today in the journal Nature, does not suggest any particular interventions that might boost a child's intelligence. But Richard J. Davidson, a brain imaging expert at the University of Wisconsin at Madison, said the fact that the region of the brain being studied is highly malleable suggests that experience and environmental cues may play a very important role in shaping intelligence.

And of course, the question is: What experiences and environmental cues are impacting these children? What genetic factors are? What educational factors? And what additional factors could, if only we knew to use them?

Just asking these questions increases the likelihood we will find answers to them. Unfortunately, we may not like all the answers we get. Just like steroids, there are apt to be some "enhancements" whose drawbacks are considered too severe by the rest of us. But if someone is in an extremely competitive environment (like professional sports, only with more abstract or visual/artistic thinking), they may feel the flaws are worth it. How do we regulate them without interfering too greatly in their personal freedom? Or, if we do not regulate such "advantages," what happens to the people who forego the short-term benefits in favor of their long-term health or emotional or mental stability?

All questions which even a harmless experiment like this one forces us to reconsider.

Future Imperative

Sunday, April 02, 2006

Of Dragon Eggs and Posthuman Gods...

Yes, clearly I should pay more attention to the date. When you read articles in a major paper like The Economist in the late afternoon on April 2nd, beware! The paper's editorial staff may not have a very solid grasp of when April 1st actually falls on the calendar. The sad thing isn't that the story is so outlandish, but that the computer modelling they're talking about is coming along in genuine scientific research. And that it is but one of many reasons why human augmentation technology is heading in our direction far faster than most people -- even most enthusiasts -- would have ever expected. But yes, this particular story is apparently false. If someone there in Britain could send The Economist a calendar, I think we'd all gain from it.

Nevertheless, I am going to include the following post because the technology imagined isn't all that implausible and such breakthroughs have implications worth considering. However long they may actually take.

You may soon be able to own your own dragon, a technological step forward which may not seem like the most revolutionary thing in the world right now... except that it might be the most revolutionary thing in the world right now. Why? Because the technology itself may have dramatic implications in two completely different ways.

Economist.com notes that Paolo Fril, the chairman and chief scientist at the corporation GeneDupe, "is a man with a dream. That dream is a dragon in every home."

GeneDupe is in the business of biotech pets. Specifically, mythological ones. (Their attempt at a goldfish with literally golden scales sank, literally, like a stone.) The Economist explains:

Making a mythical creature real is not easy. But GeneDupe's team of biologists and computer scientists reckon they are equal to the task. Their secret is a new field, which they call “virtual cell biology”.

Biology and computing have a lot in common, since both are about processing information—in one case electronic; in the other, biochemical. Virtual cell biology aspires to make a software model of a cell that is accurate in every biochemical detail. That is possible because all animal cells use the same parts list—mitochondria for energy processing, the endoplasmic reticulum for making proteins, Golgi body for protein assembly, and so on.

Armed with their virtual cell, GeneDupe's scientists can customise the result so that it belongs to a particular species, by loading it with a virtual copy of that animal's genome. Then, if the cell is also loaded with the right virtual molecules, it will behave like a fertilised egg, and start dividing and developing—first into an embryo, and ultimately into an adult.

Because this “growth” is going on in a computer, it happens fast. Passing from egg to adult in one of GeneDupe's enormous Mythmaker computers takes less than a minute. And it is here that Charles Darwin gets a look in. With such a short generation time, GeneDupe's scientists can add a little evolution to their products.

Using this rapid evolutionary process, GeneDupe's scientists have arrived at genomes for a range of mythological creatures—in a computer, at least. The next stage, on which they are just embarking, is to do it for real.

This involves synthesising, with actual DNA, the genetic material that the computer models predict will produce the mythical creatures. The synthetic DNA is then inserted into a cell that has had its natural nucleus removed. The result, Dr Fril and his commercial backers hope, will be a real live dragon, unicorn or what have you.
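The "evolve it in the computer" step the piece describes -- score each simulated genome, keep the best, recombine and mutate, repeat -- is, at heart, a genetic algorithm, a real and well-established technique even if GeneDupe is not. A minimal, illustrative sketch in Python, where a trivial stand-in fitness function takes the place of the (fictional) virtual-cell developmental simulation:

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mutation_rate=0.05):
    """Toy genetic algorithm: binary genomes, truncation selection,
    single-point crossover, and per-bit point mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # score every "genome"
        survivors = pop[: pop_size // 2]           # keep the fittest half unchanged
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)     # pick two parents
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            # point mutations: each bit flips with probability mutation_rate
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: just count the 1-bits (the classic "one-max" toy problem).
best = evolve(fitness=sum)
```

In a serious application the fitness function would be the expensive part -- here, running an entire simulated development from egg to adult -- which is exactly why the article's point about short generation times in silico matters.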

This technology could be revolutionary in two ways. First, sufficiently refined, this may prove to be a model for developing genetically augmented humans, either through germ-line manipulation or gene therapy. All you need to know is how theoretical augmentations work in your computer models (as well as in modified animals) before you engage in human trials.

But in what overt ways could the human brain be enhanced? Better circulation, a better biochemical balance, faster neural processing, better neural interconnectivity... there are a host of potential alterations, and while you would want to have a firm grasp on what the consequences of an enhancement should be, absolute knowledge is probably not a requirement for human trials with willing volunteers. There are plenty of human beings who would be eager to receive a dramatic improvement in their intelligence.

And greatly improved computer modelling of these modifications would be a major step towards bringing them about.

Being able to model the cell perfectly at the biochemical level is also a dream of some artificial intelligence researchers. The idea is that if all other attempts prove too challenging, eventually you'll be able to create a "thinking machine" simply by modelling every cell in a typical human brain. Obviously this brute-force technique has its challenges, but given a human brain to model it may well be possible to duplicate the core functions of human intellect, if not the human experience, or human development and education.

But if you have human-equivalent computers, developed by whatever means, the argument is that they could then assist researchers in further improving computer technology, including their own minds. And by mass-producing computers ultimately capable of working much faster than human beings, without rest, you could quickly achieve a technological "Singularity" -- a point at which technology is advancing so rapidly that it becomes impossible for people from our vantage point even to guess intelligently at what will happen next.

These two options are indeed dramatic, but oddly enough, there is a third. What happens if someone decides to bridge the gap between artificial intelligence and ordinary supercomputers by synthesizing a brain as close to human as possible without technically crossing that line... and then integrating that tissue into a computer system as "biochips"? Or, alternatively, what if someone uses a mix of "real" cells and virtually simulated neurons to create a human-equivalent mind? One which really is more than a (slightly non-human) "brain in a jar," cybernetically connected to faster artificial systems? But rather, is partially composed of those systems itself?

These may seem like esoteric questions to discuss, but they are pressing for the very reason that very few people realize they exist, or how soon their consequences may be knocking at our doors. Despite the fact that no one involved in this research seems to have had any of these possibilities in mind at the time...

Bio, Cyber, AI
Future Imperative