So what is cyberware, really? - Most dictionaries don't contain a definition for cyberware. This is unsurprising for such a relatively new and little-known field. In science fiction circles, however, it is commonly understood to mean the hardware or machine parts implanted in the human body and acting as an interface between our central nervous system and the computers or machinery connected to it. More formally: Cyberware is technology that attempts to create a working interface between machines/computers and the human nervous system, including (but not limited to) the brain. Examples of potential cyberware cover a wide range, but current research tends to approach the field from one of two different angles: Interfaces or Prosthetics.
Interfaces - The first variety attempts to connect directly with the brain. The data-jack mentioned earlier is probably the most well known, having featured heavily in works of fiction (even in mainstream productions such as "Johnny Mnemonic"). Unfortunately, it is currently the most difficult device to implement, but it is also the most important in terms of interfacing directly with the mind. For those of us who aren't science fiction fans, the data-jack is the envisioned I/O port for the brain. Its job is to translate our thoughts into something meaningful to a computer, and to translate the computer's output into meaningful thoughts for us. Once perfected, it would allow direct communication between your computer and your mind. Large university laboratories conduct most of the experiments done in the area of direct neural interfaces. For ethical reasons, the tests are usually performed on animals or on slices of brain tissue from donor brains. Mainstream research currently focuses on electrical impulse monitoring: recording and translating the many different electrical signals that the brain transmits. A number of companies are working on what is essentially a "hands-free" mouse or keyboard [Lusted, 1996]. This technology uses these brain signals to control computer functions. More intensive research, into full in-brain interfaces, is under way but still in its infancy. Few can afford the huge cost of such enterprises, and those who can find the work slow-going and very far from the ultimate goals. Current research has reached the level where hundreds of tiny electrodes are etched out of silicon, to be inserted into a nerve cluster. Unfortunately, research has not progressed beyond experiments on live tissue cultures.
Prosthetics - The second variety of cyberware consists of a more modern form of the rather old field of prosthetics. Modern prostheses attempt to deliver a natural functionality and appearance. In the sub-field where prosthetics and cyberware cross over, experiments have been done where microprocessors, capable of controlling the movements of an artificial limb, are attached to the severed nerve-endings of the patient. The patient is then taught how to operate the prosthetic, trying to learn how to move it as though it were a natural limb [Lusted, 1996]. Crossing over between prostheses and interfaces are those pieces of equipment attempting to replace lost senses. A great success in this field is the cochlear implant. A tiny device inserted into the inner ear, it replaces the lost functionality of damaged, or merely missing, hair cells (the cells that, when stimulated, create the sensation of sound). This device comes firmly under the field of prosthetics, but experiments are also being performed to tap into the brain itself. Coupled with a speech-processor, this could be a direct link to the speech centres of the brain [Branwyn, 1993].
Why make cyberware? - But why should we do this? Is it to be relegated to the techno-dreams of robot house-cleaning slaves, or is it actually a relevant, practical technology? What is the use of developing the technology as a whole? Roderick Carder-Russell has expressed my own feelings in this paragraph from his webpage of collected cyberware links: "As we grow in age, we also evolve our person. We grow mentally, changing and adapting to new situations, gaining experience, becoming more intelligent and wise. Physically we grow in size, strength and if we consciously make an effort gain more talent, more precise control of our bodies. But there is a limit. The human mind has a processing threshold, the body can be pushed only so far before it fails. As we utilize new developments in aging research and begin to live longer, would we not want to also push back the limits on our minds and bodies?" Currently almost all research is aimed at the disabled. Most research falls in the fields of prosthetics or neurophysiology. The advances happening now tend to be prosthetic interfaces, new sensory replacements, or brain-signal-controlled computer cursors. In the future, however, the technology can benefit anyone. The main areas I see are education, entertainment, communications and transportable technology. Technology for these areas currently holds a large sway over the general populace (read: a big market share), and advances in them are always heartily welcomed, both by consumers and by producers.
Problems and difficulties - I, personally, am all for continued research in this area, as I feel it will add so much to our understanding of ourselves. Though I am 100% behind it, I feel we must consider the possible consequences of bringing this technology into being. New technology always brings change, and cyberware is likely to be no exception. We are likely to face replacements in the workplace, bankrupt businesses unable to cope with the changes, a new elite, not to mention a new generation gap between the current generation and those who will have had the technology from birth. A big danger is the very real possibility of abuse. The criminal element has always been very effective at taking full advantage of whatever technology is available – and can often be extremely inventive. This is not a reason to abandon the technology, but we will have to be on guard for a new breed of crime. There may be any number of new physiological problems relating to the equipment, not to mention the psychological shock of being in so different a situation. What happens when mind and machine become one? The old questions of what it means to be human become important once more.
How much flesh can be replaced before we are no longer a human with machine parts but a machine with human parts? If we are to move forward, we must be fully aware of what it is we are doing to ourselves, and what could come of that. One of the big difficulties with producing cyberware is the inherent complexity of the neural system. The individual human brain is incredibly complex. There are billions of neurons, so connecting up to enough of them without a truckload of wires poking out the back of the skull is quite a problem to surmount. In addition, even if we do connect up one brain perfectly, every person's brain is different! The individual differences could make it very difficult to adapt the hardware from one person to another. The problems being faced in the lab haven't even got that far yet. The difficulty still lies in how to connect up a small number of neurons without causing excess damage. Some significant breakthroughs have been occurring, and a number of prosthetic advancements have worked successfully – but the risk is high. No one wants to expose their currently perfectly working neurons to possible harm for an uncertain reward. The process still needs a lot of refinement before the general populace would be likely to be interested.
Where are we going? - The technology that could eventually become available is nearly limitless. I believe we will see better prosthetics for limbs and senses – possibly breakthroughs in reconstructing some of the more complex senses (an artificial eye that works as well as the cochlear implant does would be nice). I do not think we will see any of a quality that would make it worthwhile replacing functional systems yet, but I'm sure it will happen eventually. Enhancements are probably going to be more popular – silent communications or enhanced hearing, for example, or enhancements to muscles to allow you to move faster or lift heavier things. How about an internal air filter to let you screen out pollution without having to wear a gas mask! So, do we have a time frame? I'll answer that with a quote. "Berger believes [his] team is about five years away from designing a brain implant for animals and about 10 to 15 years away from the first device for humans. With custom microchip designs taking weeks or months and other technical hurdles at every turn, it's certainly not a project for anyone with less than the patience of Buddha and the persistence of someone who sells insurance. That suits Berger just fine. Nobody ever said that building an electronic brain would be easy, and it's clear that he's just as infatuated with the process as with the thought of changing people's minds. "You build it neuron by neuron and chip by chip," he says. "You enjoy each experiment and piece of the puzzle while keeping a focus on the bigger picture." Berger won't rest until he has built a bionic brain. He definitely wants to get inside your head." [Greengard]
Interfaces: Signal monitoring and computer control - At present, the study of brain signals seems to be the widest area of research in this field. This probably has a lot to do with the fact that the machines for such study already exist, and people have been studying brain signals for some time – this is merely an extension of current areas of research. The research that I classify in this field uses machines to externally measure the natural signals of the brain. These signals are then run through a computer that attempts to interpret their meaning and act on them in a pre-programmed manner. It is possible to learn how to voluntarily control certain patterns of brainwaves. In doing so, we can produce a change that is strong enough to be detected by an EEG. The resulting change can then be used to signal a computer. This technique has been used in the laboratory to control the movements of a mouse cursor on screen. One problem with this method is that it can sometimes be difficult to train a person to use the technique effectively. Controlling brain waves involves a modification of thinking patterns – for example, either doing hard puzzles in the head to produce one type of wave or calmly thinking of very little to produce another. The big problem is that there are only really a few different ways of thinking (while remaining awake) that are easy enough to learn. There often aren't enough distinguishable input types to adequately control even a simple computer function (such as a mouse cursor).
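To make the idea concrete, here is a minimal sketch in Python of how a computer might turn voluntarily controlled brainwave patterns into cursor steps; the sampling rate, band edges and thresholds are assumptions for illustration, not values from any cited study. Strong alpha from a calm, empty-minded state moves the cursor one way, while the beta-dominated activity of working on a mental puzzle moves it the other.

```python
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed)

def band_power(signal, low, high, fs=FS):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def cursor_step(eeg_window, rest_alpha):
    """Map one window of EEG to a cursor step.

    High alpha (relaxed, 'thinking of very little') steps the cursor left;
    suppressed alpha with raised beta (mental arithmetic) steps it right.
    """
    alpha = band_power(eeg_window, 8, 12)
    beta = band_power(eeg_window, 13, 30)
    if alpha > 1.5 * rest_alpha:
        return -1   # relaxed state: step left
    elif beta > alpha:
        return +1   # concentrating state: step right
    return 0        # ambiguous: don't move
```

Even this toy version illustrates the limitation discussed above: only a handful of mental states are easy to hold deliberately, so only a handful of commands can be distinguished.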
A popular device for research seems to be the visual keyboard. These use evoked potentials to allow the disabled to communicate with the outside world. Some people have been so severely disabled that they cannot move most of their body – or cannot control their muscles enough to make the fine movements necessary to communicate. The visual keyboard effectively bypasses the muscles entirely, instead using the brain's natural reaction to a stimulus that it is expecting. The basic technique is to present the subject with the picture of a keyboard or a specialised group of words/commands. The person concentrates on the letter or word he or she wishes to convey. The computer then highlights each row of letters in turn. When the row containing the required letter is highlighted, the brain of the person will create a spike of activity (the evoked potential). The computer records this and assumes that the required letter is in that row. The columns are then highlighted and the person continues to concentrate on the required letter – eventually the computer also finds the correct column and so the letter is pinpointed. As you can imagine, this technique is very slow – a word can easily take a minute or more to spell out. That is not so good if quick help is required, but it is a marked improvement over having no means of communicating at all.
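The row-and-column scanning logic itself is simple; the hard part is reliably spotting the evoked potential. The Python sketch below shows the selection loop under two stated assumptions: a hypothetical acquisition callback, get_epoch, that returns the EEG recorded while a given row or column was highlighted, and a deliberately crude spike detector. A real system would average many repetitions and use a properly trained detector.

```python
import numpy as np

LAYOUT = [
    list("ABCDEF"),
    list("GHIJKL"),
    list("MNOPQR"),
    list("STUVWX"),
    list("YZ ,.?"),
]

def detect_spike(eeg_epoch, baseline, threshold=3.0):
    """Crudely flag an evoked potential: peak amplitude well above baseline noise."""
    return np.max(np.abs(eeg_epoch)) > threshold * np.std(baseline)

def spell_one_letter(get_epoch, baseline):
    """Highlight each row, then each column of the chosen row.

    `get_epoch(kind, index)` is a hypothetical callback returning the EEG
    epoch recorded while row/column `index` was highlighted on screen.
    """
    row = next(r for r in range(len(LAYOUT))
               if detect_spike(get_epoch("row", r), baseline))
    col = next(c for c in range(len(LAYOUT[0]))
               if detect_spike(get_epoch("col", c), baseline))
    return LAYOUT[row][col]
```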
Another possibility has been created by BioControl Systems Inc (Palo Alto) and is called the Biomuse. This is a device that measures EMG in the muscles to determine whether any muscle movements have been made, and it can be used to control cursor movements on a screen. It is quite simple to program a computer to interpret the movements of, say, muscles on the face as signals that can control a mouse. There are only six signals that need to be programmed: up/down, left/right, and left/right-click. Coupled with a mouse-controlled keyboard onscreen, this can allow (and, as Lusted et al describe, has allowed) paraplegics to operate a computer - a task that would otherwise be completely out of their reach. Another working example of the above technique is the EOG MIDI device for paralysed people by BioControl Systems Inc. [Tonneson et al]. This device allows the physically disabled to create music through the movement of muscles that are not damaged. One example given in the article was a device that attached to the facial muscles of a person: it monitored the signals from those muscles and interpreted certain movements as certain sounds. With a bit of practice, people can write their own music with it.
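A sketch of how those six signals might be wired up is below (Python; the channel order, normalisation and activation threshold are assumptions for illustration, not the Biomuse's actual design). Each rectified EMG channel is treated as one command, and a level above the threshold counts as a deliberate twitch.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical channel assignment: one facial-muscle EMG channel per command.
COMMANDS = ["up", "down", "left", "right", "left_click", "right_click"]
THRESHOLD = 0.4  # normalised activation counted as a deliberate twitch (assumed)

@dataclass
class MouseAction:
    dx: int = 0
    dy: int = 0
    click: Optional[str] = None

def interpret(emg_levels):
    """Translate six rectified, normalised EMG levels into one mouse action."""
    action = MouseAction()
    active = {cmd for cmd, level in zip(COMMANDS, emg_levels) if level > THRESHOLD}
    if "up" in active:
        action.dy -= 1
    if "down" in active:
        action.dy += 1
    if "left" in active:
        action.dx -= 1
    if "right" in active:
        action.dx += 1
    if "left_click" in active:
        action.click = "left"
    elif "right_click" in active:
        action.click = "right"
    return action

# Example: a twitch on the 'right' and 'left_click' channels.
print(interpret([0.1, 0.0, 0.2, 0.8, 0.7, 0.0]))  # MouseAction(dx=1, dy=0, click='left')
```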
David Cole's montage amplifier is a very bizarre example of electrical signal cyberware. It has been designed to be a primitive thought-transference device. It has been tested by volunteers and seems capable of transmitting some basic sensory information in a general manner. The science of it seems sensible enough, though possibly a little dangerous. First, it records the EEG pattern of one subject. An EEG is a pattern of electrical potentials, and thus also of magnetic fields. The signal is recorded and sent to a helmet worn by the second subject. This helmet amplifies the signal and applies it electrically, so that the resulting fields induce patterns emulating the first person's electrical activity in the brain of the second. Supposedly this overlays the EEG of one person onto the other. Surprisingly, it seems to work reasonably well when the transmitting person is given simple stimuli – for example, when a bright spot of light is in a certain area of the visual field, the receiver often perceives a phosphene in the same general area. This study was performed by some of the backyard neurohackers mentioned in [Branwyn].
There is no denying that EEGs and similar machines are very expensive. This is why this kind of research is usually only carried out by universities, hospitals and other large research institutions. To combat this problem, two smaller EEG machines: Aquathought's Mindset and Psychic Labs IBVA [Branwyn] have now come on the market. These are both much cheaper than the huge hospital machines. Even though they have fewer electrodes, they are quite usable for studying the brain at home and come with software that allows you to experiment with your own brainwave patterns. These two are much more accessible to the average researcher and so make it easier for more people to study this field. So, how useful are electrical signals? A lot has been achieved with these very simple methods, but I believe they are merely a stopgap method that will eventually be surpassed by the more intensive internal methods in the next section. They are so prolific because the technology is here now, and they are an extension of what we’ve already been doing. This is a very powerful motivator for research as it means that it costs less money per research project, even if the potential gains are not as good. It has been very important, however, for our understanding of the brain and how it signals. We are still, after all, going to have to learn these signals to be able to communicate. I just don’t believe that anything external to the skull has anywhere near enough power to actually produce complex communication with the human brain. After all, we’ve been trying to understand the brain via its EEG signals for decades. We’ve made some very useful discoveries regarding the overall working of the brain and how this can be interpreted – especially in the cases where something is obviously going wrong. It isn’t so useful, however, for fine discrimination of thoughts. Since this is the big goal of cyberware, I believe we don’t have a huge amount to gain from studies using external machines. We have to get inside the brain!
Direct Neural interfaces - Direct neural interfaces are the big goal of cyberware - the ability to talk to the brain directly through the interfaces that we have created for it. Despite this, little research has been done in this vital area and even less is published. My guess is that monetary constraints tend to stop a good many of these projects from getting through – they are a form of very long-range research that has only recently begun to show any solid benefits (mainly in the field of prosthetics). The lack of published material is likely due to the newness of the field. Anything that people have found out, they are still working on, and they probably don't want to give their colleagues the vital link that will allow them to discover it first. They also might not be willing to put down their vague theories when they are only just beginning to form them and are still working out the bugs. All in all, there is a distinct lack of solid research material in this field. There have been a number of Internet sites and a few papers available, but most of them say pretty much the same thing, usually along the lines of: "We are working on a 'brain chip' that will hopefully provide an interface between the brain and a computer." They sometimes give a very bare description of how this might be achieved (usually the array-of-spikes electrode set that I'll describe below) and what benefits it may bring. I will try to expand on the scraps of information that seeped through with what I can foresee happening.
I think the most information I was able to find in one place was from a Dr Fromherz working at the Max Planck Institute in Germany. His team has been working in this field for years. His article describes the search for a safer, less damaging and more long-term method of interacting with a nerve through the use of silicon electrodes. This has been successful in the lab with very large single nerve cells (from a leech), but was more difficult with a smaller rat nerve cell (closer to our own size). He says that the technology is there and feasible now for such small pursuits, but it is a completely different matter when it comes to a living brain. Like most researchers in this field, he has had to start with the most basic method of interfacing and is trying to build up from there. The technique he is starting with involves creating an array of very fine electrodes (what I tend to call the array-of-spikes, as it resembles a tiny bed-of-nails), laser-cut and acid-etched from a wafer of silicon (such as is used in microchips). This array can be implanted into a nerve bundle so that a large number of neurons can be accessed simultaneously. The technique seems very crude, but it should be recognised merely as an early attempt, a small beginning and an important first stage. This research has been very useful, as it has helped us understand nerves well enough to start working on their use in prosthetic interfaces.
Another potential use for these interfaces is as a cure for brain damage. Many people suffer currently irreparable damage due to strokes, accidents and tumours, not to mention degenerative brain diseases such as Alzheimer's disease. Dr. Ted Berger is working on a brain implant that will address these types of problems. His approach is similar, at the moment, to Dr Fromherz's: the study of donated brain tissue and the attempt to interface with individual neurons, in the hope that understanding the simple will lead up to the more complex problem of the whole brain. While Fromherz's research leans more to the hardware side - practical means of interfacing with individual nerves - Dr Berger is attempting to study the brain's inputs and outputs thoroughly enough to be able to create a working substitute for sections of lost brain tissue. His team is currently developing a microchip that attempts to mimic the activity of the hippocampus. This chip would take the information in short-term memory, repackage it, and move it into long-term memory. The trick is to learn how the brain repackages the information so that the chip can emulate this function. Once working, this chip would help innumerable people with their memory problems [Fleischer]. As mentioned earlier, these interfaces are the most important research in terms of the long-term goals of cyberware technology. They are also the farthest from practical implementation. Luckily, each step we take in other aspects of cyberware research helps us toward this Holy Grail.
Other weird interfaces - A very closely related field of research is that of Virtual Reality (VR). VR also seeks to communicate with the body about what it is doing and to return meaningful sense information back to it. The difference is that VR is an entirely external medium. However, some of its research overlaps with areas that cyberware research currently occupies. One example of this is the VR suit [Tonneson et al]. In these suits, EMG biosensors are used to determine the position of muscles and their state of relaxation. This conveys the body's position to the computer, so we can achieve a better representation of it in the virtual environment. The VR helmet could also be equipped with EOG sensors [Macauley] to make an accurate determination of the direction of the user's gaze. Research into electrical signaling helps both fields, as we better understand the language of the peripheral nervous system and how to communicate with our muscles and sensory organs. An interesting technology that is likely to greatly benefit cyberware was created only recently. TG Zimmerman has created a device known as a Personal Area Network. This is based on the electrical field of the human body and allows transmission of information via human contact. This device promises to incorporate our wide range of personal information and communications devices into a network that can exchange data with one another. It also has the ability to exchange data with another person's network - the example given and tested is the exchange of business cards by simply shaking the other person's hand. This technology has great potential for integrating cyberware technology with itself and also with that of other people. The great benefit for this field is that one of the major problems cyberware has to contend with is the huge amount of wiring that would have to drape through the body from implant to implant. If we could do away with all of that and merely have 'sending' and 'receiving' units attached to each implant, everything would be a lot less messy. Networking technology is certainly advanced enough to cope with the small networks we would likely start out with.
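As a way of picturing how such body-borne networks might cooperate, here is a toy sketch in Python. Everything in it (the hub class, the registration step, the checksum) is an assumption for illustration, not Zimmerman's actual design; it only shows the flow of two personal networks swapping business-card records when their owners shake hands.

```python
import json
import hashlib

class PersonalAreaNetwork:
    """Hypothetical body-local hub that implants and carried devices register with."""

    def __init__(self, owner):
        self.owner = owner
        self.devices = {}         # device name -> payload it can share
        self.received_cards = []  # cards collected from other people

    def register(self, name, payload):
        self.devices[name] = payload

    def business_card(self):
        card = {"owner": self.owner, **self.devices.get("card_store", {})}
        # Checksum stands in for whatever integrity check the real link would use.
        card["checksum"] = hashlib.sha1(
            json.dumps(card, sort_keys=True).encode()).hexdigest()
        return card

    def handshake(self, other):
        """Simulate skin contact: both networks exchange cards."""
        self.received_cards.append(other.business_card())
        other.received_cards.append(self.business_card())

# Usage: two people shake hands and swap contact details.
alice, bob = PersonalAreaNetwork("Alice"), PersonalAreaNetwork("Bob")
alice.register("card_store", {"email": "alice@example.com"})
bob.register("card_store", {"email": "bob@example.com"})
alice.handshake(bob)
print(alice.received_cards[0]["owner"])  # Bob
```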
Wearable computers are also in the same realm as cyberware. The interfaces are external and utilise our existing sensory apparatus, but many of the principles and problems are similar. From wearable computing technology we can learn how to make computers smaller and more man-portable, how to strip them down to what we need for everyday living, and how they can be more fully incorporated into our activities. As these two technologies develop in tandem, we may start to incorporate a mixture of the two. Some objects may still be too bulky to fashion into appropriate implants, or not used often enough to warrant permanent attachment. These will be carried externally, but remain able to interface with our internal cyberware. I believe that all these related fields of research will develop in parallel to cyberware, cross-fertilising each other with ideas, new developments spurring new ideas in each field and lifting the others to greater heights. Such mixing should definitely be encouraged.
Prosthetics: Limbs - Most current research in prosthetics is focused on the manipulation of data, or signal processing. A patient must learn how to control the muscle movements required to activate the limb's movement. For example, a knee response drives the servo-electric motors of a prosthetic knee. A computer must be programmed to record the signals it receives and translate them correctly. Cyberware-related prosthetic research still seems very far away to the average mechatronics engineer. At the moment, they are trying to deal more with the practicalities of driving prostheses off muscular movements – maximising output from a limited number of inputs. The difference cyberware can bring to prostheses is that the interface is completely internal. It melds with the remnants of the body's own neural pathways, and thus movement becomes completely natural. Current prostheses tend to rely on learning a set of muscular 'commands' to give to the limb in question, but cyberware allows the use of the same signals as with any other working limb.
Building the internal interface - Fromherz illustrates a number of interesting possibilities. His research centres on building tiny arrays of even tinier silicon electrodes. This little array can be inserted onto the end of a severed nerve bundle, and the signals sent to the bundle will be received and recorded. These signals can then be used to drive a prosthetic limb as though the brain were sending signals to the original limb. Another method involves a small sheet of metal, perforated with microscopic holes – each electrically separated and with wires leading out. The nerve is severed and then placed on either side of the sheet – each nerve fibre grows through a hole to find a partner on the other side. In the case of both of the above methods, the signals received at the array would need to be recorded as the patient attempts different types of movements. Both the patient and the computer must learn how to interact – the patient learning what they need to do to move in a certain way, the computer learning what movement to make given a certain signal. The logical next step from the nerve-ending arrays is technology that directly taps the brain. Kalcher et al are working on a piece of hardware that uses an EEG to record signals in the precentral gyrus while a person moves their finger or foot. The studies went on to record what happens when the person merely thought about the movement. With a better method of connecting with a large section of brain (see Making the Connection), we could access the body's own map of itself stored in the precentral gyrus. Once linked to the prosthesis, the person would not be able to tell the difference between telling a normal limb to move and telling the artificial limb to make an identical movement - the signal would be identical. What's more, signals can move through wire faster and with greater reliability than through a reflex arc, so the message would get through more quickly and clearly.
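The 'both sides learn' loop described above is essentially a calibration-and-classification problem. The following Python sketch shows the shape of it: record electrode windows while the patient attempts labelled movements, average them into per-movement templates, then decode new activity by its nearest template. The feature choice and the nearest-centroid rule are assumptions for illustration, not any particular lab's method.

```python
import numpy as np

def features(window):
    """window: (samples, electrodes) array -> one feature per electrode
    (mean rectified amplitude, chosen here purely for simplicity)."""
    return np.mean(np.abs(window), axis=0)

def calibrate(recordings):
    """recordings: dict of movement label -> list of signal windows recorded
    while the patient attempted that movement. Returns one template per label."""
    return {label: np.mean([features(w) for w in windows], axis=0)
            for label, windows in recordings.items()}

def decode(window, templates):
    """Return the movement whose template is closest to this window's features."""
    feats = features(window)
    return min(templates, key=lambda label: np.linalg.norm(feats - templates[label]))
```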
Sensory replacements - Senses are one of the big, new things in prosthetics. Never before have we been able to create a machine sophisticated enough to attempt to match the capabilities of our own bodies. The attempts so far aren't perfect, but the ability to mimic even a part of our senses is an amazing achievement. The complexity of the sensory systems is what has perplexed us for so long. The actual sense organs operate in a very simple way – light is focused on a large array of tiny receptors, sound is vibrated along a tube of receptors, and so on. The complexity comes in as a problem of signal processing. Each receptor in the eyes, ears etc. is very simple on its own – it records whether a certain type of stimulus is or is not present. It is when you put the receptors all together and try to build an overall picture of the incoming stimulus that the problem becomes very complex. The pattern matching that the human brain does is far beyond the capacity of any human-made machine, at least within the time frame the brain manages it in. To fully replace a lost sensory organ, we must be able to match the complexity and size of the system normally present. The difficulty increases dramatically depending on how much of the sensory system is missing.
There are a number of new technologies that are brilliant at enhancing a system that doesn’t function at top-level. For example, the current hearing aid amplifies sound quite well, and glasses refocus light on a retina that is slightly the wrong distance from the focal point of the eye. Current breakthrough technology is focusing on the next level, where the receptors in the system are missing or damaged, but the underlying nerve structure remains intact. The best example of this is the cochlear implant. These devices are made for people who are missing the tiny hair cells found in the inner ear. Hair cells are the receptors for the ear, recording important differences in sound depending on where they are placed along the spiral-shaped cochlea. The device is implanted into the cochlea and tiny electrodes are inserted along its length. These electrodes send signals into the underlying nerves, taking over the job of the missing hair cells. “Although current versions of these devices may not match the fidelity of normal ears, they have proven very useful. Dr. Terry Hambrecht, a chief researcher in neural prosthetics, reports in the Annual Review of Biophysics and Bioengineering (1979) that implanted patients had "significantly higher scores on tests of lip-reading and recognition of environmental sounds, as well as increased intelligibility of some of the subjects' speech." [Branwyn]
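The signal-processing heart of a cochlear implant can be pictured as a filterbank: incoming sound is split into frequency bands, and the loudness in each band sets the stimulation level of the electrode sitting at the matching spot along the cochlea. The Python sketch below shows just that mapping; the sampling rate, electrode count and band layout are assumptions, and real devices add far more processing.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000            # microphone sampling rate in Hz (assumed)
N_ELECTRODES = 8      # electrode count is illustrative; real implants vary

# Centre frequencies spaced logarithmically, mimicking the cochlea's layout:
# low frequencies map to electrodes deep in the spiral, high ones near the base.
EDGES = np.logspace(np.log10(200), np.log10(7000), N_ELECTRODES + 1)

def electrode_levels(audio):
    """Split sound into bands and return one stimulation level per electrode."""
    levels = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        band = lfilter(b, a, audio)
        levels.append(np.sqrt(np.mean(band ** 2)))   # band envelope (RMS)
    return np.array(levels)
```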
Research is now focusing on creating a similar device to replace missing receptors in the eyes. This would be accomplished with something very similar to the cochlear implant, merely requiring an array that can be fitted into the eyeball and attached to the retina. The next level up is to help people who have lost their sensory organs, but whose brain is still capable of understanding the sensory information. This is especially the case in people who, for example, were able to see at birth but lost their eyes through misfortune. The techniques of seeing aren't lost, merely the organ to see with. If we can successfully replace the eyeball itself (or whatever organ it is that was lost), and then successfully attach to and communicate with the undamaged brain area that controls that sense, we would be able to restore the lost sense. This is where cyberware really comes into its own. Richard Alan Norman, of the University of Utah, has been studying how the use of phosphenes can help blind people to read Braille faster. Often a blind person has a problem with their eyes, but the visual cortex is still perfectly normal. He has been developing an electrode array that can be implanted onto the visual cortex to create phosphenes in the perceived field of vision. The array creates the sensation of spots of bright light even though the person cannot really 'see' anything. The array is programmed to create the patterns of spots that correspond to the Braille alphabet and can then 'display' them in the person's perceived field of vision. Text can thus be 'read' more quickly in this manner than through the usual method of reading Braille with the fingers. [Thomas]
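To illustrate the mapping involved, here is a small Python sketch. The Braille dot patterns for the letters shown are standard, but the way dots are numbered onto electrodes and laid side by side in the visual field is purely an assumption for illustration.

```python
# Each Braille cell is a 2x3 grid of dots; a stimulated electrode lights the
# matching phosphene in the perceived visual field.

BRAILLE = {              # letter -> dot numbers (1-6) that are raised
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def phosphene_pattern(letter, cell_origin=0):
    """Return the electrode indices to stimulate for one Braille cell.

    `cell_origin` offsets the pattern so several cells can sit side by side
    in the perceived field of vision.
    """
    dots = BRAILLE.get(letter.lower(), set())
    return sorted(cell_origin + (d - 1) for d in dots)

def display_word(word, cell_stride=6):
    """Lay a word out as successive phosphene cells."""
    return [phosphene_pattern(ch, i * cell_stride) for i, ch in enumerate(word)]

print(display_word("cab"))   # [[0, 3], [6], [12, 13]]
```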
The National Institutes of Health in the USA is currently working on a 38-electrode array to be placed into the visual mapping areas of the cortex. This device creates phosphenes in the visual field that give a rough 'pixelated' view of the world. This isn't enough information to truly see by, but it can help an otherwise completely blind person to orientate themselves and avoid some objects. It is also an interesting first step. Obviously, if the electrodes can be made fine enough, the array could contain many more of them and thus give better and better precision to this artificial sight. Sensory replacement is not limited to sight and sound. With studies in the somato-senses (the senses of touch, temperature and vibration), those who have lost a limb may soon be able to regain some of their lost sensation. Funnily enough, this type of research tends to leak over into the field of robotics. It is very useful for a robot to be able to sense how much pressure an artificial limb is putting on an object it is holding. To pick up an object, it is useful to know how much pressure is being exerted – to tell if the object is going to slip out of grasp or be crushed under too much pressure. Prosthetic limbs would also be greatly enhanced by this capability. Somato-sensory replacements deal with the attempt to recreate the sensation of touch. This is one of the most difficult senses to recreate, as our sense of touch is really made up of many different types of sensations: mainly pressure, vibration, temperature and pain. These are all very important when it comes to fully understanding our environment through our skin. One of the big difficulties in this area is that we can indeed create sensors to detect each of these different aspects, but making them small enough to cram them all into the volume usually assigned to skin is very difficult. The scaling problem is still being worked on and the sensors are being gradually reduced in size. Eventually they will be small enough to fit a fair number of them onto prosthetic devices, and then we will return again to the necessity of determining how to communicate with the body's existing neural pathways in a meaningful way.
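A rough sense of what such a coarse phosphene view involves can be had from the sketch below (Python). The grid size and brightness threshold are assumptions; a real 38-electrode layout is neither square nor regular, but the idea of reducing a camera frame to a handful of on/off phosphenes is the same.

```python
import numpy as np

GRID = (6, 6)        # illustrative electrode grid (a real array would differ)
THRESHOLD = 0.5      # normalised brightness needed to fire a phosphene (assumed)

def downsample(frame, grid=GRID):
    """Average a grayscale frame (values 0-1) into grid cells."""
    rows, cols = grid
    h, w = frame.shape
    return frame[:h - h % rows, :w - w % cols] \
        .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

def phosphene_map(frame):
    """Return a boolean grid: True where an electrode should stimulate."""
    return downsample(frame) > THRESHOLD
```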
Where could we go? What is possible? - Certainly most of the important current research deals with aiding the disabled. The prosthetic devices mentioned in the previous section will open up new vistas for those who have had to make do with lesser quality replacements of lost limbs or senses. But eventually we will move on from using this technology to only replace missing or damaged human parts. At that point, we will start to fully utilise the potential that cyberware has to offer. As I have mentioned earlier, the main avenues I see cyberware flourishing in are: education, entertainment, communication and transportable technology. However, cyberware can also be split into the groups of people who are likely to be using it. One of the major subdivisions will be for military or law enforcement use. I am bringing this up first, as it tends to be the sort of thing talked about in fiction. Most cyberpunk literature tends to focus on the criminal element battling the forces of law and order – usually with a great deal of cybernetic enhancements on both sides. One of the big options is an enhanced limb capable of lifting greater weights or with faster reflexes. This could either be a replacement of the previous limb or an added extra limb (weird looking, but could have its uses). One of the interesting possibilities posited in fiction is known as wired reflexes. Our reflexes are merely a pathway built up out of connected neurons in the spinal cord – these have either been given to us at birth or slowly learned through long hours of practicing over and over. Wired reflexes allow us to create our own reflex arcs, hardwiring into place the sequence of events we would like to occur. Electricity travels through a wire faster than neurons can generally signal - this creates the added benefit that the reflexes can be sped up considerably.
Other physical augmentations in the literature tend to concentrate on hidden armour and weaponry. Guns or knives could be hidden inside arms or body cavities and triggered by a learned reflex or a wired command from a brain-connected control module. Armour could be hidden under the skin or interlaced around bones for greater protection while remaining hidden from view. This hidden armoury would be very useful for anyone who needs to remain undercover (for example, undercover policemen). They would be able to have their own weapons and armour without raising any suspicion in the criminal group they are infiltrating, and would have the element of surprise if they were attacked. There are enhanced senses, to better see or hear "the enemy". Replacing the sense organ with a better one would achieve this, but it will not be possible for a long while yet, as it will be some time before we can even match our current sensory organs' abilities. Extra senses can be incorporated to enhance our perceptive abilities. For example, we could add an awful lot to our ability to see. Imagine the ability to see in UV or IR, to have light amplification added to our own eyeballs! We could tap into the nerve bundle for the appropriate sense organ and record from and/or add to the inputs to our brain. Why would we do this? Well, a recording from the human eye might be very useful – think of the possibility of keeping a man's-eye account of what happened at the scene of a crime: did the policeman have a full view of the criminal as he was pulling his hand out of his jacket? In addition, if you can tap into the stream of sensory information fast enough, you might be able to work on the data and find things otherwise easy to miss. For example, if it were necessary to follow one person in a crowd, the enhancements might be able to filter out some of the other movements and make it easier to keep an eye on the target.
Another popular device in fiction is the subvocal device. This could utilise EMG in the facial muscles to make actual voicing of commands unnecessary. It could easily be coupled with a receiving device placed on the tiny bones of the ear so that instructions can be received without anyone else hearing them, though a tiny earplug speaker would do the trick in most circumstances. This could be extremely useful for anyone who wants to talk silently. The examples in literature often involve covert operatives and clandestine communications, allowing them to remain stealthily silent while sneaking through the enemy base. It could equally well be attached to a mobile phone so that anybody can talk with perfect clarity even in a noisy environment – or even in the movie theatres where people much prefer you to remain completely silent. It would also benefit people who do their business over the telephone but wish to keep their conversations confidential – nobody else could hear what you were saying. Another big subdivision of users would be the scientists using cyberware to study the human body. One of the most interesting potential uses is the study of human psychology and brain physiology. By tapping directly into the brain, we will be better able to read what it is doing, and thus to try to understand what it is doing and why. Better understanding leads to better help, though we must avoid using it for better manipulation. This will be very useful for understanding how to help people with psychological problems and disabilities. For example, some deaf or blind people have no problems with the sensory organs; the problem lies in the brain. With exploratory cyberware, we could access their brain patterns and compare them to a 'normal' brain, to find out why they are the way they are. This could lead to new ways of helping these people, not to mention a better understanding of how we process data normally.
So what would be the Joe Average upgrades? - For the average human, there would be numerous additions for better leisure activities, but these would also aid in better education and communications. As far as leisure goes, entertainment today seems to demand ever more complete sensory input. If we could tap into the sensory pathways to insert information into them, we could make the fantastic worlds that we create seem next to real. We would be operating through the very senses of the person receiving the information, so it would be nearly indistinguishable from "Real Life". This would be THE step in entertainment. Who wants the illusion of something when they can experience it for themselves, all from the comfort and safety of their own brain? Imagine – go skydiving, rock climbing, space walking – without any of the dangers involved. Some will consider it better than the real thing – after all, it will be perfect: no problems, no hassles, no rain, no biting insects, no setting up beforehand or cleaning up afterwards – just the thrill of the activity itself. But these implants don't have to be used exclusively for entertainment; they are also very useful in education – what better way to learn than to have actually been there, or at least to have sensed it in a way your brain would accept as "being there"? See, hear, touch and feel the different things that you are learning and you will learn much more effectively. As a learning aid, cyberware would be unparalleled. For example, a person with dyslexia has difficulty learning because reading can be a big problem – the words are difficult to read, so the information doesn't go in well. With a cyberware implant, learning could be restructured in such a way that reading is not required. However, it's not confined to learning disabilities. Cyberware can open up huge vistas of education – want to see what's going on inside a nuclear reactor? Want to see what lives deep under the sea?
You could go anywhere and look and touch without worrying about cost (millions of people could share the same experience), and without worrying about the dangers of the real thing. The brain could also be enhanced to include more memory or easily accessible databases of information. If we can truly tap into the brain, we will be able to store abilities or skills and transfer them from one person to another by merely 'uploading' the information. An example could be learning to dance – a person could record the 'output' signal from their limbs as they dance. This signal could be given to the person wishing to learn the dance, who plugs it into their own cyberware, which sends the signals to their own muscles. This counts as practice – the person will eventually learn the dance and be able to perform it without the use of the recording. Which brings me to the idea of the matrix. VR in a cyber-enhanced world would be unparalleled. You would actually 'be there' as far as your brain was concerned. Do you want to visit your family, attend a conference, hug your absent partner? The illusion could be made very close to reality. The Internet, as we know it, would become a thing of the past.
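Purely as a way of making the 'record and replay a skill' idea concrete, here is a toy Python sketch. Nothing like this exists: the frame format, the playback loop and the assumption that one person's limb signals would mean anything to another body are all illustrative simplifications.

```python
import time

# Toy 'skill recording': a list of (seconds_from_start, muscle_activation_levels).
# Channel names and normalised 0-1 levels are illustrative assumptions.
recording = [
    (0.0, {"hip": 0.2, "knee": 0.1}),
    (0.5, {"hip": 0.6, "knee": 0.4}),
    (1.0, {"hip": 0.3, "knee": 0.8}),
]

def replay(frames, drive_muscles, speed=1.0):
    """Send each recorded frame to the learner's (hypothetical) muscle driver.

    `drive_muscles` stands in for whatever interface the learner's cyberware
    exposes; `speed` below 1.0 lets a beginner practise the movement slowed down.
    """
    start = time.monotonic()
    for t, levels in frames:
        while time.monotonic() - start < t / speed:
            time.sleep(0.01)
        drive_muscles(levels)

# Example: 'drive' the muscles by just printing the command that would be sent.
replay(recording, drive_muscles=print, speed=0.5)
```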
Making the connection - How would the cyberware get wired into place? I've read a few different books that present ideas on this subject, and there are a couple of different paradigms. The most simple, and currently cheapest, of available methods is surgery. Dr Fromherz, at the Max Planck institute in Germany, has been researching the possibilities of arrays of electrodes that can be attached to a nerve ending or to a thin slice of brain tissue where each electrode can receive and transmit a signal. Presumably this array would be surgically put into the required position and tested for the required outputs. This is the array-of-spikes method I have mentioned earlier. A similar method involves the creation of a very fine mesh. The nerve bundle is cut then placed on either side of the mesh and encouraged to grow through it. Each nerve that passes through a hole can be individually accessed. The problems with these methods are many. First, they involve cutting nerves completely, a worrying prospect for most human beings who don't wish to lose functionality (but possibly not as bad for those who find that this is their only available choice). The worry is always there that it will not grow back properly! The second problem is that a simple array or layer of mesh doesn't give a huge amount of complexity - only a single layer of neurons can be accessed. This is fine when interfacing with a single nerve bundle (e.g. one that operates a muscle), but cannot access complex three dimensional structures. The third problem is that any complex structure (for example any part of the brain) would be out of the question as the process may result in irrevocable loss of the structure of the nerve bundle. This would also be applicable for attachment to some sensory nerve bundles (e.g. the optic nerves). If these were ever severed, even if all the nerves reattached to the bundle on the other side of the mesh, they might not reattach to the same ones and the resultant signal would be irrecoverably jumbled. You would have to learn how to see again almost from scratch!
Despite the remarkable work Fromherz has been able to achieve, I believe that these methods will prove to be too time-intensive as well as very invasive. The scale of the work he does is incredibly tiny compared to the number of nerves in the average human brain. It has a very useful niche in the prosthetic department, allowing severed nerve-endings to be re-tapped and to transmit once more, but I do not believe that it could ever be complex enough to allow full communication between man and machine. Luckily, Fromherz is aware of this. He has outlined the great difference between these small-scale projects and the amount of effort that would be needed a) to get enough connections in the brain and b) to work out a way in which the brain and computer could meaningfully communicate [Fromherz]. A really interesting theory is that given by Wu for the Shadowrun TM game system [Wu, 1992]. This consists of bio-engineered microorganisms that are attracted to a rare form of glucose. The organisms are injected with a tiny amount of the required conductive material and then introduced to the body. They collect where this glucose is located (manipulated magnetically by the surgeon) and die from genetically engineered suicide genes. The conductive material is then left behind, and the cell walls of the organisms are stripped away, leaving a smooth coating of the stuff. This procedure (if it could be created and perfected) would be just as time-consuming as open surgery but much less invasive. It would probably end up being more expensive, but it could possibly be completed in a number of successive small sessions instead of a single long one. This theory seems to borrow a bit from PET scanning techniques, where a radioactive liquid is taken in and accumulates in the areas of the brain that are active. Perhaps this sort of scanning could also be used to aid in the manipulation of microbes like this: "Ok, think about moving your hand … good, now keep thinking about it…"
There are still a number of possible problems with this sort of procedure (for example, would it be possible for the coating to be left behind smoothly, or would something else be needed to fuse the little pieces together?). Not to mention the fact that we may have nowhere near this level of genetic engineering technology yet! But it is still a very interesting and different approach to the problem. The most common solution (proffered in many works of futuristic fiction, including the Cyberpunk game system [Cyberpunk]) is nanotechnology (or nanotech). Nanotech consists of tiny (molecular-level) machines that can perform any number of preprogrammed tasks, all controlled by a nano-scale computer (resembling a Babbage engine in construction, but smaller than a pinhead). These machines would be inserted into the required area and do the very tiny, delicate manipulations required without damaging the nerves. This would be possible because nanomachines are small enough to move atoms individually. It would seem, at first, that it would take forever for a nanomachine to build up enough circuitry to make anything worthwhile, but in fact each nanocomputer could have a hierarchy of hundreds of thousands of these machines under its command. The computers, in turn, could number in the thousands, and the whole system would be able to create all the delicate connections and interconnections necessary in only a few hours.
I believe that it would indeed be easily possible to do the job if we had a high level of nanotech. In fact, it looks, at present, to be the method with the greatest potential for solving the complexity problems. Not only could nanomachines create the delicate structures necessary, but machines this small could also crawl between the neurons of the brain without disturbing them, to place connections anywhere you choose. Other possibilities allow for nanotech to monitor connections even after the initial insertion. There is a vast range of possible applications for nanotech, but they do not fall within the scope of this project. So what is the problem with using nanotech? Well, firstly, the science doesn't exist yet. Depending on whom you ask, it is still only at the theoretical level or outright fiction. In other words, there are a few people willing to work on it, but it is generally believed to be too far into the world of science fiction to be considered seriously. This is despite the fact that some very influential and respected scientists of our time (for example, Richard Feynman) have felt that it is a field within our grasp right now.
In conclusion, brute open surgery would be effective for simple jobs and for large pieces of cyberware such as prosthetic attachments - it's what they're doing now. It's the complex, tiny stuff - the in-brain surgery - that's difficult, and that needs tinier scalpels than we're used to seeing even in the best microsurgery. Are the microbes the way to go? Though a quaint idea, I don't think they're really applicable. You'd still need lots of research into tailored genes and into how to smoothly apply the required substances. So, do we need nanotech? If we can get the technology, it would make things much easier. It would also speed up our understanding of the brain, as nanomachines would be able to access and monitor individual neurons in action. Together, these points put forward a powerful case for trying to get nanotech up and running as a future tool for cyberware surgery.
Impact on us - Let's say cyberware is here; how would our lives be different? It really depends on how far cyberware can truly be taken. If we never move beyond the small-scale stuff, where all we can really access is a small group of neurons, not many people would be likely to take the plunge. Perhaps it would become fashionable for the technology-conscious to have a head-linked cyber-phone or a digital watch that shines on the retina (so you can see the time no matter where you are), but it probably wouldn't be that far-reaching. We'd see some odd new peripherals and a few gadgets that might eventually become commonplace, but all in all it probably wouldn't be that intense. If, however, we have managed to achieve the ultimate goal, a direct link with the brain that lets us think at our computers, we could have so much more. Our World Wide Web is already a great part of our everyday lives. With the ability to tap into our computers directly, we would be able to greatly enhance our ability to operate on this network. One of the most intriguing possibilities is the ability to truly work online – in an office that only exists on the web, the employees could all work from home, accessing it through their implants. Yes, this sort of thing happens now (without the implants, of course), but many workplaces require a higher level of face-to-face interaction that just isn't the same through a videoconference or a text interface. This technology could provide the link necessary for all of us to be able to stay at home and to work without having to leave. If nothing else, there'd be a lot less road rage from traffic in the mornings!
Not everyone would have access to the technology. It is liable to be very expensive for a long time. This means that only those in high-paying jobs would have access to it in large amounts, but smaller things would come through, and technology often filters through society eventually. One of the big areas of current society that would be affected is the entertainment industry. Who wants to watch a movie on a flat screen when they could be right inside it, possibly even playing out a character in their favourite story? Much game software is likely to be written for those who want to play a fully interactive game. Even passive entertainment takes on a new level when it incorporates all of the senses – smell the scent of the sea or feel a waterfall run through your hands, feel the heat of the sun as a famous battle plays out before you. Leading off from this would be education. A better education is gained through experience than from being told. A fully immersive environment could be used to great benefit in the teaching of many skills. Also, teaching in this manner would allow more students to be reached at once – especially in outlying areas – and students would be able to interact with one another naturally even if they were unable to be physically present together. I believe that, like most radical technology changes, it will greatly enhance our standard of living. While, as per normal, the lives of the wealthy will be enriched more than the lives of us ordinary men and women, we will still see great benefits. Even if the technology is not directly in our grasp, the people working in high-tech fields such as medicine and computers will have access to it. If they are better able to create what they create best, then the effects will filter down through all levels of society.
The issues - This section covers ethics, problems with creating cyberware, problems that cyberware might create, and other scattered issues. I will present a number of what I believe to be important considerations in the undertaking of this field of research. I will pose a number of questions and describe what I mean by each, but, in most circumstances, I won't answer them. One of the major points about ethics is that most of the questions are under debate. The answers to these questions are unclear – people tend to have widely differing opinions – so I present them for consideration so that you may form your own opinion on the topic.
Experimentation in a new field - As with all new fields, there is a question that pops up time and again: is it ethical to experiment on animals (or even on humans)? Is the potential reward great enough? Obviously research will start on donated tissue before progressing to animals – but how much can we learn from such techniques? Medical research is notorious for deciding that human lives are worth more than anything else, but cyberware is only partly a medical field. Certainly it can be argued that its development will benefit the disabled and help us understand the human mind, but is this enough to justify it all? It's a question that I'm glad I don't have to answer right now. This sort of thing will be left to the individual medical boards approached by the research teams. One of the things I did come across, however, was a reference to the neurohackers. These are people who are perfectly willing to make themselves into guinea pigs to further this technology themselves. While the big medical boards are arguing over whether to allow certain experiments, these people will be bypassing all of that and experimenting on each other. I do believe, however, that the tissue research is a dead end. Yes, it has helped Fromherz in the early stages of determining how to make the individual electrodes that he has been working on, but when dealing with cyberware, we are trying to learn how to deal with vast arrays of living, functioning brain. We can't do that with donated tissue; it has to be still attached to a functioning individual – eventually it must be human. Yes, there are people willing to sign themselves away just to be the first to try this stuff, but that sort of technology would be the end result of years of animal testing. It seems almost a necessity to use animals as the middle step along the road to any sort of medical technology.
Weaponry - Do we need better/smarter weapons? Can we make cyberware weaponless (this seems an impossible task)? How about crime, will this be yet another step toward social dissolution? Will we just be putting better weapons in the hands of those prepared to use them against us? Should the possibility of having new weapons stop us? I think not. Anything can be turned into a weapon if used correctly – we shouldn’t cancel all research just because it might possibly be used thus. There are going to be those that develop this technology anyway, so we might as well study it so that at least we have it also. At least it isn't the sort of technology whose exclusive province is the field of warfare. I believe that the benefits are likely to outweigh the danger. We also must consider that even if we did decide to ban all research in this area – someone would be able to take advantage of our lack of this particular technology. If we study it, we’ll be able to devise ways of beating it in the case of criminal use or use in war.
Examples of current use - At the Alternative Control Technology Laboratory at Wright-Patterson Air Force Base in Dayton, Ohio, researchers are investigating ways of controlling flight simulators using EEG signals. Pilots are being trained to use brainwave patterns as an extra channel in the control of aircraft. One of the main difficulties found was that training the pilots to control their own brainwaves took more effort than it was worth [Thomas]. I would also think that under stress (such as in battle) it would be next to impossible to control your thoughts in that way. You would need to keep your wits about you, and that would be much more difficult while trying to change what you are thinking about from one moment to the next.
Examples of potential future use - The US Army has developed wearable computing and communications devices such as head-mounted displays, cameras and personal communicators to receive and transmit information on the battlefield. Moreover, the army has already tested these "augmented soldiers" in the field, to good effect [Thomas]. The future of this technology pushes further into the realm of cyberware and is limited only by what we can imagine. Presently, the science of wearable computers helps soldiers with what they might need in the field – communication, tracking, mapping, augmented sight (UV/IR/light amplification etc.), navigation, targeting and so on. Anything that can be miniaturised and carried around is potentially useful.

One of the most important points is: does any of this need to be wetwired? A gadget that can be carried around and then packed away is quite often better than a cyber-enhancement that cannot be taken out. An object should only be wired in if it is going to be useful all the time. Otherwise, you may as well make it a wearable extra. That way it can be replaced if broken or outmoded, or transferred to another person, with much greater ease than if it were effectively soldered to the person. The technologies that would be really useful to hardwire into place are the ones that provide an interface to other modules. For example, the Smart Link interface is a Shadowrun idea – a general interface that can project targeting crosshairs onto the retina of the eye. This enhances a person's ability to shoot straight, as it uses a more natural form of sight and doesn't hinder the normal field of view. Weapons could then be fitted with the other half of the interface, including sensors that tell the interface where the gun is pointing. Another benefit is that when a new weapon came out the person would not need to learn a new way of shooting; they could use previously learned skills, as the interface would not have changed.

Hidden weaponry could be useful, but not for general soldiers. By hidden I mean weaponry concealed so that the person appears to be carrying no weapon at all, even when searched. A holdout pistol grafted into a body cavity, or blades set into the forearm (so that they can spring out through the wrists), are examples often quoted in fictional literature. Obviously these aren't useful in general, only for people who require a cover – covert operatives, undercover police officers and the like. This is, however, the very sort of weaponry most likely to be used by the criminal element.
Jobs - Who will be replaced this time, and who will be disadvantaged? This seems to be the question that always comes up in regard to any form of 'progress'. So many people have been ousted due to computing technology and machinery. This shouldn't stop us from creating the technology, but it is something we should be prepared for. I do not see it happening much at first, though, as cyberware does not directly take over any of our current job fields. I believe that most fields will probably upgrade their systems to cope with this new technology rather than throwing people out on their ears. This, however, leads to another important question: will it be yet another thing that struggling businesses must fork out for to survive amongst the competition? I believe that most businesses won't be affected very much, but certainly the information-rich businesses will find that cyberware is necessary to survive amongst competitors. The first businesses to take advantage of all that this technology has to offer will have a decided advantage over their peers.
Haves vs. have-nots - Do we need yet another elite? The usual age gap will not be the only divide. Discrimination based on augmented abilities may leave more of the poorer classes unable to get the best jobs – and thus they stay poor. Early on, while the technology is experimental, the opposite may hold true: only the very poor will consent to the most dangerous of the early implants – though there are also volunteers just itching to be the first. The age gap will be particularly wide with this type of technology. It is so different from anything else that many people will not accept it, and there will be those who need to learn the new technology merely to keep their jobs. Those who can adapt will be fine, but those who cannot, or will not, will be disadvantaged.
Physiological problems/benefits
Difficulties - There are a number of difficulties imposed by the physiology of the human body. The major difficulty is that the brain is an immensely complex structure, one that we still do not fully understand. Although researchers are perfectly capable of communicating with a few neurons (for example Fromherz's arrays of electrodes), there are literally billions of neurons in the brain. We do not, at the moment, have the capability to build the huge systems required to communicate with even a large percentage of these. This also highlights the near impossibility of using external measurement techniques to understand brain function. An EEG can pick up the general electrical activity of the brain, but it lacks the acuity required to see individual neural function. I do not think it will ever be refined enough to form a useful picture of what the brain is doing, and I believe there is very little chance of ever forming fully integrated communication with a computer using this method. It is, however, an acceptable compromise for those who cannot communicate in the normal way (i.e. paralysis victims for whom this is their only method of communication).

The next problem is one of scaling. Theoretically we could create a billion electrodes and try to access each neuron. In practice we wouldn't need that many – we don't need to talk to every neuron, just a few major structures – so say we create only a few million electrodes. Each one may be made quite small, but a few million tiny objects still add up to a very large piece of circuitry. There isn't a lot of room in the cranial cavity for much more than the brain itself (the room that's left is also helpful in case of accidents, to cushion blows to the head). A huge, clunky piece of hardware with hundreds of wires simply won't fit in the skull. As Greengard puts it: “We’d need an implant the size of a pickup to emulate a brain function”. A rough calculation below illustrates the point.
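To make the scaling argument concrete, here is a back-of-envelope sketch in Python. Every figure in it (electrode count, volume per recording site, packaging overhead, free cranial space) is an assumption chosen purely for illustration, not a measured value; the point is only that millions of tiny parts still add up to something that cannot fit inside a skull.

    # Back-of-envelope sketch of the scaling problem described above.
    # Every figure is an illustrative assumption, not a measured value.

    electrodes = 2_000_000          # "a few million" recording sites
    volume_per_site_mm3 = 0.5       # assumed volume of one electrode plus its share of wiring
    packaging_overhead = 3.0        # assumed multiplier for amplifiers, interconnect and casing

    implant_volume_cm3 = electrodes * volume_per_site_mm3 * packaging_overhead / 1000.0

    free_space_cm3 = 150.0          # assumed free space between brain and skull

    print(f"Estimated implant volume: ~{implant_volume_cm3:,.0f} cubic centimetres")
    print(f"Assumed free cranial space: ~{free_space_cm3:,.0f} cubic centimetres")
    print(f"Oversize factor: ~{implant_volume_cm3 / free_space_cm3:,.0f}x too big")

With these assumed numbers the implant works out at roughly twenty times the available space, which is the flavour of the problem Greengard's quote is pointing at.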
The human brain is very complex, and every human brain is also unique. Broad areas of the brain tend to correspond to broad categories of function, but anything beyond that starts to become speculation. Everyone develops slightly differently as they grow and have different experiences, so specific functions tend to fall into slightly different areas. An implant can't be made overly specific or it might only work for one person. Luckily, the human brain is very adaptable: if an implant has the general capability of fulfilling the required function, the neurons around it will adapt to utilise it better. The next problem is bandwidth. Each operation on a current microchip is very fast, but the human brain is massively parallel and performs an enormous number of operations at the same time, giving a far greater overall throughput. If we were to try to talk to an implant made with today's technology, each function would take too long to process for it to be truly feasible. Both of the above problems could be overcome with better technology in the realm of parallel chips. Armand R. Tanguay Jr., director of the Center for Neural Engineering at USC, is studying one possibility. He hopes to remedy the scaling and bandwidth problems through the use of lasers and holography. Using light signals instead of wires, the chips can be stacked closer together, potentially allowing real-time response [Greengard].
Physiological damage - In any form of neurosurgery today, nerves are damaged. Neurons are so small that it is almost impossible to protect all of them, and the main problem with this is that neurons don't recover: once you kill a neuron it stays dead, and any resulting memory loss can be permanent. At the moment, damaging surgery is only performed on people where the alternative is far worse, or on neurons that have no other purpose (for example the severed nerve endings of an amputee). For cyberware to gain any popularity with the average human being, however, researchers will have to be able to ensure that the surgery is not going to be so damaging. The problem is that a spiked electrode has to penetrate the neuron, causing damage to the cell wall (and sometimes to neighbouring cells). This often reduces the life span of the neuron and, in some cases, kills it outright. This could be remedied by using tiny metal plates placed in the proximity of the neuron (in much the same way as a synapse), but these are currently more difficult to make. "One of the main things frustrating this research is finding (or developing) materials that are not toxic to the organism and that will not be degraded by the organism." [Branwyn] Silicone breast implants and IUDs, for example, have led to some serious physical problems and even deaths. The human body has formidable defences against invading hardware, and it would be very sad to see someone having to fight for their life after a purely recreational implant breaks. Throughout medical science we are used to the possibility that the body may reject foreign objects. This is especially a problem when the object is important for maintaining life (for example a replacement heart valve). I do not know how difficult it will be to get the body to accept this technology, but any problems could be critical, as the implants connect directly to the nervous system (a very delicate structure). This would be a very important problem to research.
Nervous overload has been mentioned in many places as a possible problem. In Johnny Mnemonic, for example, it was called the black shakes: a nervous condition affecting people who had put too much stress on their bodies through excessive use of cyberware implants. Such a condition is quite plausible. We already see the problems involved in high-stress work (for example RSI) and in excessive use of technology (such as telephones). How much stress can our neurons take? We should study just how far our nerves are likely to take us before they tell us that it's too much! I should also point out the danger of software problems. "For example, if we have software embedded in our brains, how do we ensure its quality and reliability? What happens when there is a new hardware upgrade or a new software release? What if somebody discovers a software bug or a design error? Even a Hollywood script writer would be hard-pressed to picture the consequences." [Thomas] Cyberware sits very close to something that is irreplaceable – our brain. We need to be very careful about getting the software as bug-free as possible.
We'll need failsafe systems and the ability to turn them off manually, if required, without damage to them or to us. Another physical danger comes with the problem of wiring throughout the body. There needs to be some way for implants to communicate. If an implant is near the site it is hardwired to, this shouldn't be a big problem. If the implant is separated from some of its hardware, however, wires might have to trail through the body. This could cause any number of problems: rubbing against muscles and bones, chemical problems if the wires degrade, further problems if a wire breaks, and the messy fact that you'd have scars along the entire length of the wire's path. The best solution would be wireless communication. One possibility is the Personal Area Network (or PAN), created by Dr Zimmerman. This technology utilises the electrical field of the body to send and receive signals. It would make the implants slightly larger (needing transmitters and receivers as well as the usual circuitry) but would reduce the wiring to next to nothing. Although this research is very preliminary and there are still many intimidating technical and biological hurdles (on-board signal processing, radio transmittability, learning how to translate neuronal communications [Branwyn]), the long-term future of this technology is exciting. As you can see, there are a number of problems that we can already see coming. These can be anticipated and possibly even solved before we have to deal with any of the repercussions, but once the technology is out there, what new problems will we find that never existed before? We can't think of everything, and there has to be some risk involved in the process. This is fine for those who have nothing to lose – those who have already lost their sight or their limbs; it can only be of benefit to them. The rest of us would have to think twice before diving in. Who wants to risk their sight or limbs on a risky new enhancement when they are perfectly healthy to begin with? I have heard rumours that some people are so desperate to be a part of this that they are willing to sign away all liability just to be the first to have it done to them. That, however, doesn't necessarily make it legal or ethical.
Physiological benefits - The obvious benefits come from the main purpose of cyberware: to augment or replace bodily functions. Lost limbs, senses and brain functions could eventually be replaced by implants that replicate normal behaviour as closely as possible, and enhancements could increase our abilities in many interesting and helpful ways. But there are other benefits apart from the obvious. I discovered a paper describing how prostheses can help relieve phantom pain [Katz Pictures, 1999] by giving the old nerves something to operate. I find this an interesting possibility and hope that many more previously unknown benefits will follow.
Psychological problems/benefits
Psychological damage - Cyberware can affect the minds of people as well. Here are some possible pitfalls that we should try to head off before they become problems. The world today is very stressful. We live our lives as fast-paced as we can handle, and many people crumble under the pressure. Cyberware has the potential to add to that stress as well as to take away from it. Many technological improvements are promoted as reducing workload, and thus stress, allowing a person more leisure time. Unfortunately, most people use such technology to fit more activity into their time without reducing their stress levels, and cyberware will likely run into the same problem. We frequently overtax ourselves by not being able to get away from it all even now – how much worse will it get when your mobile telephone and your office computer are located in your head? Stress is a problem we already have to deal with in the over-worked. Cyberware will give these harried workaholics access to more information, faster, now pumped directly into their minds.
We will have to be prepared to help those who obsessively take more and more onto their plate until they cannot handle anything at all. Luckily, not everyone is susceptible to this sort of thing, only those who are already likely to try to take on the world in one go. Cyberware would be only one more weapon in their arsenal, but psychologists should be taught to keep an eye out for it happening. There are also some problems that have little to do with the actual computational side of cyberware. How strange would it be to communicate with a computer? It's hard enough communicating with human beings who have similar experiences, but a computer is completely alien. Typing on a keyboard is one thing; having your brain wired into the machine might be something entirely different. Most people will be perfectly capable of adapting (humans are remarkable in this respect), but there might be those who just cannot adjust. Consider the current-day problems of people trying to learn the interfaces of computers. A well-known example is the older generation attempting to use an ATM for the first time. It's not difficult for most people to learn, but some people just cannot adapt to the new technology. This becomes quite distressing when ATMs have become the only easy way of interacting with the bank. Cyberware presents the same problem: a new technology that people will need to learn. If it begins to permeate every level of society, there will be those who cannot adapt, and they will have a hard time fitting in. This also ties into the section (above) on generation gaps and elitism.
Cyberpunk is a game set in the near future, where this technology has been readily available for some time and is fully integrated into society. In the game, one of the more serious problems associated with cyberware is a psychological condition known as cyberpsychosis. This is what happens to a person who becomes enhanced to the point of being almost totally machine. It begins as a form of superiority complex, in which people perceive themselves to be better than an average human because of their enhancements. This in itself is a big problem and needs to be worked through with psychological counselling. Cyberpsychosis, however, is the extreme case, in which the cyborg finds humans so inferior that they should be removed from the world. The person often believes that they represent the next evolutionary step of humanity and that the previous step should be eliminated or turned into slave labour. Obviously this is a little far-fetched and would not happen in large numbers of cases (as with most forms of psychosis), but anyone undergoing enhancement might need to be educated against the possibility of such superiority complexes. This section is full of speculation. We can guess at problems that might occur in a small number of people, but psychology is not understood well enough for us to fully determine the range of problems that might arise. Nor can we predict how many people will be affected; we can only guess by extrapolating from previous technology changes.
Psychological benefits - So those are the problems; what benefits could there be? I see the main benefit occurring in the arena of psychological research, which would become much easier when you can plug in directly. We could connect to various areas of the brain and see directly how it deals with different stimuli and how thoughts are formed. There is a great deal to be learned from the study of the brain, and this technology (and its spin-offs) would greatly aid that research.
What it means to be human - This is not a new concept, but a question that has plagued philosophers for millennia. It is closely tied to the age-old question of where 'we' reside. Does the 'soul' have a seat, or is it only ephemerally connected, existing 'somewhere else'? Do we have a soul at all? Perhaps a better word would be 'consciousness'; in this way I hope to sidestep the thorny mystical arguments about the existence of a soul for the moment and concentrate on the former question. Many years ago there was a thought experiment on this topic that serves to describe the problem. Consider the average human being. If, by some accident or through necessary surgery, some part of that person had to be removed (say a limb), that person would be likely to continue to live. Suppose you continue removing limbs: the person will continue to live (given adequate medical attention). Now let's start removing other parts. The theory goes that at whatever point the person dies, that part must be the seat of the soul. The experiment is slightly flawed in that there are a number of organs without which the human body can't survive for very long. But bringing it into the modern and future ages of medicine (where artificial organs would be usable), let us remove pieces but replace them with artificial parts. Most likely the human would be able to survive almost any replacement as long as it fully replicated the function of the organ. At which point can we say that we have found the soul? What if we even find a way of transferring the state of the brain into a sturdier, possibly even enhanced, artificial construct? Are you still the same person?
Masamune Shirow, in "Ghost in the Shell", has his heavily 'enhanced' characters converse on this topic, asking themselves whether they are human any more. One of the characters goes on to suggest that the only reason she felt she was still human was that she was treated as one. The main theme of the book (and film) is that we are becoming more like our machines at the same time as our machines are becoming more like us. The central question is: if it is possible for a machine to develop sentience (known in the book as a ghost), what would be so special about being human? Certainly programs might be created that seem to mimic humanity in every way – so are they then human? If not, why not? I hear someone at the back shout 'Because they are not alive'. So, can you prove to me that you are alive? I don't think you can, not in any way that would rule out the possibility of sentience in a computer. The closest definition I have ever heard is from René Descartes: "I think, therefore I am." But could you not define thinking to include a computer program? So what defines humanity?
Appendix A: Neurophysiology primer - I thought for some while about whether or not to include this section. It is quite long and discusses many things that are not covered in the project above. I decided, in the end, to include it because a full understanding of the brain helps in understanding the difficulties of creating cyberware. Above I argue for the need to determine how to communicate with the brain; below I discuss how single nerve cells communicate and what sorts of signals arise through that communication. I present this section not as something that must be known to understand the problems of cyberware, but to provoke further thought on how it could be used to understand and communicate with the most complex computers in the world – our own minds.
Structure of a neuron - A nerve cell consists of three main parts. The largest part is the main body of the cell, called the soma. This contains the nucleus and the structures that keep the cell alive. From the soma come many branching fibres called dendrites. The dendrites are lined with specialised junctions, called synapses, through which a neuron receives information from other neurons. Some dendrites also contain dendritic spines - small growths that seem to play a part in learning and memory. The third main part is the axon. This is a single fibre, thicker and longer than the dendrites. Mature cells either have one axon or none at all, but may have many dendrites. An axon often has many branches at the end farthest from the soma. In this way, a neuron carries information to many cells and also has many neurons connected to itself. The axon ends in a small branching structure that attaches to other nerve cells.
The action potential - At the simplest level of operation, it can be said that a cell receives its 'inputs' from the dendrites and sends its 'output' down the axon. This output is in the form of an electrochemical impulse; in other words, the exchange of charged chemical particles (ions) is used to send the messages. The cell sends an electrical impulse along the axon by exchanging ions through the cell membrane. In general, the inside of the neuron is slightly negatively charged with respect to the outside of the cell. This is known as the resting potential. Applying a small depolarising current to a neuron (i.e. one that makes it closer to a neutral charge) will shift its potential for a short time, but it quickly returns to the resting potential. If the current is raised slowly, we eventually reach a level called the threshold. Once we pass the threshold, gateways open up and allow a massive, rapid flow of positive ions into the cell; this causes the potential to shoot up to a high positive level before dropping off again. This is known as an action potential. An action potential passes along an axon because the positive charge of one area slightly depolarises adjacent areas of the membrane, setting off an action potential there, which in turn depolarises the surrounding areas, and so on. Thus the potential propagates down the length of the axon. Once an area has gone through an action potential, it becomes less permeable for a short time to the ions that are part of the potential. This stops the action potential from reverberating forever or from propagating back the way it came. The time it takes for an area to recover from an action potential is known as the refractory period, and it sets a maximum on the firing frequency of the neuron. If the refractory period is short enough, several action potentials can be moving down an axon at the same time.
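The threshold-and-reset behaviour described above is easy to caricature in code. The following is a minimal sketch in Python of a 'leaky integrate-and-fire' toy neuron, not a physiological model; the resting potential, threshold, leak rate, input current and refractory length are all assumed values chosen for illustration.

    # A minimal sketch (not a physiological model) of the behaviour described above:
    # a leaky integrate-and-fire neuron with a threshold and a refractory period.
    # All constants are illustrative assumptions, not measured values.

    resting = -70.0       # resting potential, millivolts
    threshold = -55.0     # once the potential crosses this, the cell fires
    leak = 0.1            # fraction of the deviation from rest that decays each time step
    refractory_steps = 5  # steps during which the cell cannot fire again

    v = resting
    refractory = 0
    spikes = []

    for t in range(200):
        current = 2.0 if 50 <= t < 150 else 0.0  # depolarising input applied between steps 50 and 150
        if refractory > 0:
            refractory -= 1
            v = resting                          # held at rest while recovering
            continue
        v += current - leak * (v - resting)      # input pushes the potential up, leak pulls it back toward rest
        if v >= threshold:                       # all-or-nothing: the spike itself is always the same size,
            spikes.append(t)                     # only its timing carries information
            v = resting
            refractory = refractory_steps

    print(f"{len(spikes)} spikes fired at steps: {spikes}")

Running this, the toy cell is silent until the input arrives, then fires at a steady rate limited by the refractory period, which is the qualitative behaviour the paragraph describes.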
Some (but not all) axons are also sheathed in a substance known as myelin. Myelin covers the length of the axon except for small nodes about 1 mm apart. Myelin prevents ions from moving through the membrane, but the nodes contain many of the ion gates necessary for action potentials. When an action potential depolarises one node, the charge is strong enough to depolarise the next node, skipping all of the distance in between. In this way, the action potential jumps quickly from node to node down the length of the axon, much faster than it would normally propagate. This jumping effect is known as saltatory conduction. Action potentials are all-or-nothing and their size is always the same, completely independent of the size of the stimulus that created them. This all-or-none law is where the analogies with computers came from, seeming so similar to binary 0/1 signals. The timing of action potentials carries the 'message' of a neuron, and there are several ways to do this. A one-off action potential may signal 'something is here'. Another neuron may fire constantly in its resting state, but change the frequency of its action potentials in the presence of (or at higher intensities of) a certain stimulus. Other neurons signal by sending action potentials in clusters rather than regularly spaced.
The graded potential - So far I've only talked about how the output of a neuron travels. The inputs (from the dendrites, and from there through the soma) propagate in a different way, as a graded potential. As the name implies, the potential differs for different levels of stimulus, and the signal also degrades as it travels along the dendrite. Because of this, an axon attached to a synapse further from the soma will have less effect than one attached closer. It is at the soma, where the axon begins, that these signals must gather to create the current that triggers an action potential. If the combined signals are stronger than the threshold value, the action potential starts and the neuron fires. If the cumulative electrical charge of the graded potential is below the threshold, the neuron does not fire. In addition, some inputs act to inhibit a neuron, i.e. they lower the cumulative result. This process of adding together the contributions from many neurons is modelled (in a greatly simplified form) in neural network software, as sketched below.
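Here is a minimal sketch, in Python, of that kind of greatly simplified summation: excitatory inputs add to the total, inhibitory inputs subtract from it, signals attenuate with distance from the soma, and the neuron fires only if the total reaches the threshold. The weights, distances, attenuation factor and threshold are arbitrary illustrative values, not physiological measurements.

    # A greatly simplified model of graded-potential summation, in the style of
    # neural network software. All numbers are illustrative assumptions.

    def neuron_fires(inputs, threshold=1.0, attenuation=0.8):
        """inputs: list of (strength, distance_from_soma) pairs.
        Positive strengths are excitatory, negative strengths are inhibitory.
        Signals decay the further along the dendrite they have to travel."""
        total = 0.0
        for strength, distance in inputs:
            total += strength * (attenuation ** distance)  # graded potential fades with distance
        return total >= threshold                          # all-or-nothing decision at the soma

    # Three excitatory inputs and one inhibitory input, at various distances:
    synaptic_inputs = [(0.9, 0), (0.6, 2), (0.4, 3), (-0.5, 1)]
    print("Neuron fires:", neuron_fires(synaptic_inputs))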
The synapse - But how does one neuron actually affect another? On the ends of the axon's branches are synaptic knobs. These contain vesicles (small cellular containers) of transmitter substance, or neurotransmitter. When the neuron fires, the change in ion levels inside the cell membrane triggers the release of the transmitter substance into the synaptic cleft (the space between the synaptic knob and the dendrite it attaches to). The actual chemical effect of the neurotransmitter differs from neuron to neuron. There are many different types of neurotransmitter, and each affects different neurons in different ways; I won't go into the chemistry. The most important difference is that some synapses are inhibitory (i.e. they reduce the likelihood that the next neuron will fire) and some are excitatory (they make it more likely that the next neuron will fire). When adding up the inputs from various synapses, the excitatory synapses add to the total whereas the inhibitory ones subtract from it.
Structure of the Central Nervous System - The nervous system as a whole is generally divided into three sections: the periphery (strictly speaking, the peripheral nervous system), the spinal cord and the brain itself; the latter two make up the central nervous system (CNS).
The periphery - The periphery is made up of all the nerve endings and sensory nerves where information begins and ends in the body. If information is gathered somewhere, or a message to do something is delivered there, that place is likely to be part of the periphery. The many sections of the periphery are complex and widely varied, and beyond the scope of this primer.
The spinal cord - The spinal cord communicates between the periphery and the brain. Sensory nerves enter it, bringing information to the brain about our world; motor nerves exit, carrying messages that tell the muscles and organs how to operate. In cross-section, the central part of the spinal cord is vaguely H-shaped. This central H is known as grey matter and is composed mainly of unmyelinated interneurons, cell bodies and dendrites, all tightly packed. The surrounding area is white matter, composed mostly of myelinated axons (myelin is white); this is where information travels up to the brain. Each segment of spinal cord communicates with a particular section of the body and with the segments directly above and below it. An important part of the spinal cord is what is known as the reflex arc. Sensory nerves enter the spinal cord and synapse with small interneurons within; these synapse with more interneurons and with exiting motor neurons. It is here that reflexes are stored, creating a short loop direct from sensation to action that completely bypasses the brain.
The brain - The brain itself consists of three major subdivisions: the hindbrain, the midbrain and the forebrain. I'll briefly go through the basic functions associated with each minor structure (the full detail is huge and really deserves further reading if you are interested).
The hindbrain - The hindbrain (underneath and behind the larger parts of the brain) consists of the medulla, pons and cerebellum. The medulla, pons, midbrain and certain structures of the forebrain are also collectively known as the brain stem.
Medulla - Controls basic reflexes such as breathing and heart rate, so damage to this area is invariably fatal. Because of the types of functions the medulla controls, large doses of drugs that affect this area can also be extremely harmful; this is what happens when a person overdoses.
Pons - Latin for 'bridge', this structure serves as the gateway for sensory nerves that cross from the left side of the body to the right side of the brain (and vice versa). It also contains nuclei that are centres for the integration of sensory information and often regulate motor output.
Cerebellum - Best known for the control of learned movements and classically conditioned responses (when you learn to ride a bike, the processes are stored here). All your programmed behaviour goes here, so when you drive your car 'on autopilot' this is the part you've put in charge.
The midbrain - The midbrain mainly contains the superior and inferior colliculi. These form an important part of the routes for sensory information. The superior colliculus is active in vision and visuo-motor coordination; the inferior colliculus deals with auditory information.
The forebrain - The major part of the forebrain is the cortex. This is the biggest section of the brain and is what most people think of when you say the word 'brain'. Hidden underneath lie other forebrain structures including the thalamus, the basal ganglia, and the limbic system. The cortex is quite complex and deserves its own subsection below.
Thalamus - This is the main source of sensory input to the cerebral cortex. It acts as a way-station for sensation, but also does a significant amount of processing on that information before handing it on to the appropriate lobes of the cortex for final evaluation.
Basal ganglia - This contributes information to the cortex regarding movement, including speech and other complex behaviours.
Limbic system - This is a heavily linked set of structures controlling motivated and emotional behaviours (e.g. eating, drinking, sexual behaviour and aggression). The list below is a partial list of the limbic system's main structures.
Olfactory bulb - This is where we process the sense of smell. Research has been carried out on the effect of smell on sexual behaviour through the use of pheromones; the effect is well established in the animal kingdom, and it is an open question whether it affects us as well.
Hypothalamus - Deals with the regulation of motivated behaviours. It also regulates hormone levels by controlling the pituitary, both through nerves and through hormones that it releases.
Pituitary gland - This is an endocrine gland (hormone producing) attached to the hypothalamus. It receives messages from the hypothalamus then releases hormones into the blood stream. It controls timing and amount of hormone secretion for the rest of the body.
Hippocampus - Plays a vital role in learning and memory.
The cerebral cortex - This is the biggest structure of the brain. It controls a wide variety of complex functions and behaviours, from sensation to action to personality. The brain is split in two down the middle, each half being known as a hemisphere. Generally, the right hemisphere deals with sensations and motor instructions for the left side of the body, and vice versa. The cortex is further split into four main lobes; each hemisphere contains its own copy of each lobe, and usually deals with that type of information from the opposite side of the body. There are notable exceptions: for example, the language centres seem to be split into a section dealing with the words themselves and a section dealing mainly with the emotion conveyed through intonation.
The frontal lobe - This is generally thought to be the seat of personality. Movements are planned here and behaviours are modified. A large strip right at the back of the frontal lobe reaches around the head like a headband. This is known as the precentral gyrus, or primary motor cortex. It is the centre for control of fine movements of the body (for example precisely moving one's fingers), and it contains a detailed map of the body stretched over the surface of the lobe.
The parietal lobe - Located just behind the frontal lobe, this area specialises in body information, including touch and input from muscle and joint receptors. At the very front of this lobe is the postcentral gyrus. This sits directly behind the precentral gyrus and contains a very similar body map; its function, however, is to receive sensations from each of the body areas.
The occipital lobe - Located right at the back of the head, the main function of this lobe is to process information from the eyes. It is the largest area devoted to a single sense, reflecting how much we rely on vision over any other sense. An interesting thing about the occipital lobe is that if it is damaged we can no longer see, even if our eyes are still fully functional. An interesting phenomenon that shows this is called blindsight. If you shine a spot of light on a dark wall, a person with blindsight cannot see it, but if you ask them to guess where they think it might be, they will point to it with disturbing accuracy. Obviously, information is still coming into the brain from the eyes; it just isn't being processed as a visual stimulus any more.
The temporal lobe - The temporal lobe is located around the sides of the brain, near the temples. It is the main centre for auditory information but seems to add something to recognition of complex visual images as well (such as recognition of faces). The temporal lobe also houses the two areas that deal with the understanding of language [Kalat].
Communicating with our brain - So, how do we communicate with our brain? In what ways does our brain present us with information, and how can we tap into that for a greater understanding of how we think? Through the years, scientists have tried many methods, from the downright barbaric to the more modern (but usually incredibly expensive). I'll go through the different methods, explaining what brain signals (or lack thereof) each method attempts to utilise, and listing actual devices that use each method.
Brain signals: Electrical signals - As explained in the previous section, the brain's signals travel via electrochemical processes. Though no actual 'electricity' (as in a flow of free electrons) is involved, the movement of charged particles causes a similar effect. It has thus been possible to devise methods of measuring and manipulating the electrical potentials that occur during brain activity. Electrical signals (measured directly) can be used to control the movements of a cursor onscreen. Devices that detect and analyse the electrical potentials of the brain are: electrodes, the EEG (electroencephalograph) and the MEG (magneto-encephalograph).
Evoked potentials - Any device that measures the electrical potential of the brain can use this technique; it is often used in conjunction with an EEG. When the brain is presented with a stimulus, it exhibits a response approximately 300 milliseconds afterward. If you present a subject with such a stimulus, you can record their brain patterns using an EEG and pinpoint where in the brain the response occurred, and thus determine where that stimulus is dealt with. Evoked potentials can also be used to determine whether a person is looking at a stimulus presented on a screen, and can therefore be used to create what has been dubbed the visual keyboard. Devices that detect and analyse evoked potentials of the brain include the EEG (electroencephalograph) and the MEG (magneto-encephalograph). The brain is not the only source of electrical potentials, however. An electromyogram (EMG) is measured by placing electrodes over the muscles and reading the potentials created by their movements. If the recorded signals are processed by a computer, we can determine which sets of signals relate to which movements, and thus how the muscles (and corresponding limbs) have moved. These signals have obvious uses for the control of prosthetic devices, and also for determining the position of limbs for VR.
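The usual trick for seeing an evoked potential at all is averaging: the response to any single stimulus is buried in background EEG noise, but averaging many recordings time-locked to the stimulus cancels the noise while the stimulus-locked response remains. The sketch below demonstrates the idea in Python on purely synthetic data; the sampling rate, noise level and the shape of the ~300 ms response are all assumptions made up for the example.

    # A minimal sketch of pulling an evoked potential out of noisy EEG by averaging
    # many stimulus-locked epochs. All signal parameters are synthetic assumptions.

    import random
    import math

    sample_rate = 250                      # samples per second (assumed)
    epoch_len = 150                        # 600 ms of data recorded after each stimulus
    response_at = int(0.3 * sample_rate)   # the ~300 ms response described above

    def record_epoch():
        """One simulated recording after a stimulus: noise plus a small bump at ~300 ms."""
        epoch = []
        for i in range(epoch_len):
            noise = random.gauss(0.0, 5.0)                           # background EEG, microvolts
            bump = 3.0 * math.exp(-((i - response_at) ** 2) / 50.0)  # the evoked response
            epoch.append(noise + bump)
        return epoch

    n_trials = 200
    average = [0.0] * epoch_len
    for _ in range(n_trials):
        epoch = record_epoch()
        average = [a + e / n_trials for a, e in zip(average, epoch)]

    peak_index = max(range(epoch_len), key=lambda i: average[i])
    print(f"Averaged peak occurs {1000 * peak_index / sample_rate:.0f} ms after the stimulus")

Any single epoch from this sketch looks like noise; only the average over many trials shows a clear peak near 300 ms, which is exactly why evoked-potential work relies on repeated presentations of the stimulus.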
EOG - An electrooculogram is similar to an electromyogram, but the sensors are placed on the muscles around the eyes. In this way we can determine the direction of a person's gaze. This technology could be used to address a number of medical problems (see Tonneson et al; this falls outside the scope of this project), but it also has potential benefits for the emerging VR technologies. It can also be used in a similar manner to the visual keyboard (and certainly much more effectively). A company called BioControl Systems Inc has done just that with a device they call the BioMuse.
NMR - NMR stands for Nuclear Magnetic Resonance and is the same technique as is used in MRI. Atoms have an inherent spin whose axis usually points in a random direction. When exposed to a magnetic field, the atoms align; when the field is turned off, they release a little energy that can be measured. See MRI for a more detailed description of the technique.
CBF - CBF stands for Cerebral Blood Flow, an ingenious method for determining brain activity. How does it do so? The neurons of the brain, like any other cells in the body, require nutrients to work; when they are active, more blood needs to flow to them. If we can measure the rate of blood flow through a particular region of the brain, we can tell whether it is more active than the surrounding areas. The great benefit of using CBF is that you can tell which parts of the brain are active during a specific type of activity without having to directly touch or measure the brain itself. Through CBF methods you can test a person while they perform various activities and plot the areas that dominate for certain types of related activity. CBF is used by: rCBF (regional Cerebral Blood Flow) and MRI (Magnetic Resonance Imaging).
rCBF - Regional Cerebral Blood Flow is a refinement of the CBF method. It measures blood flow in the brain by further exploiting the way in which cells live: brain cells require nutrients to operate, and they consume glucose when they are very active. If radioactively labelled glucose is injected into the bloodstream, we can detect where in the brain the radiation is strongest. rCBF is used by: PET (Positron Emission Tomography).
Detection methods: Physically invasive methods - The methods below involve direct physical manipulation of the brain and often involve a lot of guesswork. They can usually only give us a vague idea of what is going on; if we want to be more specific, these techniques must be paired with more precise methods. However, they have been available to us for centuries and have contributed to all our early knowledge of brain function. The problem with these methods is that, due to the potential for irrecoverable nerve damage, they should ethically only be used when absolutely necessary. Backyard practitioners are rare, but are known to exist [Branwyn].
Lesions and ablations - The first method is available to anyone, though I wouldn't recommend it – you tend to run into legal problems, something to do with human rights! This is the general class of lesions and ablations. Though you can't really communicate via this method, people have been using it for centuries to work out which areas of the brain control which functions of the body. Basically, this method involves cutting through axons (a lesion) or cutting out a section of the brain (an ablation) and watching the person's behaviour to see what they can no longer do. This is very damaging and very permanent, and the results are often ambiguous. For example, if a person can no longer recognise someone they once knew well, is it a problem with how they put the visual image together, or did they lose the part of the brain where the memory of *this* person is stored?

One of the most famous applications of this method is the good old frontal lobotomy. This involved cutting the paths to the frontal lobe and was performed on people considered to be 'unmanageably insane'. Given that a lot of what we would consider personality is stored here, this operation usually turned the person into a walking vegetable. It may also be interesting to note that alcohol affects us by decreasing the activity of the frontal lobe, much like a temporary lobotomy; it has often been suggested that the effects of long-term alcoholism closely resemble the loss of personality and drive of the typical pre-frontal lobotomy patient.

However grotesque this method may have been in the past, be aware that it is still in use today, though in a slightly altered form. Whether or not there are secret laboratories of evil geniuses performing ablations on unwilling victims, I don't know. I do know, however, that there are many stroke, brain cancer and accident victims every day of the year. Though it is unethical and illegal to cause a brain lesion, you are quite allowed (with suitable permission) to study the effects on those who have suffered one through accident. Indeed, there are many famous patients who have provided important information by happening to show an unusual change of function after sustaining a head injury.
Natural development and arrested development - Some structures or cell types develop at a later age. By studying the abilities of a person as they grow, we can determine what these structures do. We can also study people who have a natural problem in development: by studying their brains we can see which structures are deficient or oversized.
Electrodes - An electrode usually consists of a very thin wire that can be precisely positioned in the brain at almost any depth or position. Either a small current can be applied at the very tip, or the tip can record the electrical potential present at that point. Electrodes have been the mainstay of brain research for many years (since before the development of the large brain scanners that I'll talk about later). They have been used in innumerable experiments, usually involving animals, but there have been some with live human subjects. The benefit of electrodes is that they can directly stimulate a very precise group of neurons with a minimum of damage. Through vast amounts of selective stimulation, they have been used to fine-tune our knowledge of brain structures. Most maps of brain activity have been formulated by selectively test-stimulating each area of the brain in turn, asking the subject (awake at the time) what they can sense, and observing any changes in behaviour. The main problem with using electrodes in humans is that they do cause small amounts of nerve damage, and it is thus unethical to use them unnecessarily.
Studies with humans are often conducted just prior to major brain surgery, when a patient has sustained damage to their brain (such as a tumour or a blood clot). Obviously, the surgeon will try to minimise the amount of brain tissue that has to be removed, and so will try to determine which tissue is healthy and which is not. The patient's scalp is anaesthetised locally and opened to expose the brain. The surgeon then inserts electrodes into the surrounding areas and runs a small current through them, one at a time. The patient is awake and must tell the surgeon if they notice that something is different when the current is on. This technique is really only useful in brain areas where an electrode produces an obvious effect. Areas such as the visual cortex or any other major sense will do this (e.g. a bright pinpoint of light in a specific part of the field of vision). If the damaged tissue is in an area that does not produce such an obvious effect (e.g. areas generally associated with memory storage), this technique is much less useful.
Electrodes are also used to passively measure the electrical potential of a neuron. This has been used to measure the effects of sub-threshold currents on a neuron, how currents can add to or subtract from the electrical potential, and how this affects the chances of an action potential. Electrodes can also be used to monitor whether or not a particular brain area is being used for a certain activity. For example, experiments have been performed on animals in which the animal's head is fixed in place and a tiny light is shone onto a particular part of its field of vision; an electrode in the brain can then determine which neurons are activated by this stimulus. As a method of finely detailed research, electrodes are very useful, and as a method of stimulating a very specific neuron they are the best we have. As a method of communicating with the brain, they are still not so good: we would have to be able to attach an electrode to a very large set of neurons – for example to every axon in a nerve bundle (like the thousands of nerves in the optic nerve bundle) – for this to be much use. Regardless, electrodes are the main possibility for the application of cyberware. They are the main avenue along which we are progressing, even though I consider us to be still in the barbarism stage (current methods requiring a severed nerve ending and a spiked plate covered in microelectrodes). I believe that we must find a less invasive, but still very precise, method of reading the information coming from our nerves.
Non-invasive methods - Really these should be labelled 'less-invasive', as all methods affect the brain in some way, but these are considered less damaging to a human and are generally performed outside the body. The problem with these methods is that the equipment is usually prohibitively expensive for the average person; research in this field is restricted to those who have access to the equipment or can persuade those who do to let them use it.
EEG - This device works by attaching several electrodes (not the pin-shaped ones described above, but ones that lie flat against the skin) to the scalp. These measure the brain's electrical activity during thought processing. The output of the electrodes is amplified and recorded, and the researcher can then analyse the data to tell the overall state of brain activity. A researcher can usually detect such things as whether the subject is asleep, dreaming or awake, and whether they are problem solving or just daydreaming. Abnormalities in the EEG can be detected when the subject has severe problems such as epilepsy or a tumour. The EEG measures the average activity of a very large number of neurons under each electrode. Using a large number of electrodes can better pinpoint the location of neural activity, but only to a certain extent; it remains a large-scale, very generalised technique for measuring brain activity. EEGs can be used to control the movements of a cursor onscreen, and they have also been used in some very heavy experimentation on direct brain stimulation using the so-called montage amplifier.
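As a rough illustration of how EEG signals might drive a cursor, the sketch below estimates the power in one frequency band (the 8-12 Hz alpha band) over a one-second window and maps high or low power to an up or down cursor step. The sampling rate, window length, band choice and threshold are assumptions for illustration only; real systems of this kind are considerably more elaborate.

    # A minimal sketch of band-power-based cursor control from an EEG-like signal.
    # The sampling rate, window, band and threshold are illustrative assumptions.

    import math

    SAMPLE_RATE = 128   # samples per second (assumed)
    WINDOW = 128        # one-second analysis window

    def band_power(samples, low_hz, high_hz):
        """Crude power estimate in [low_hz, high_hz] using a direct Fourier sum."""
        n = len(samples)
        power = 0.0
        for k in range(1, n // 2):
            freq = k * SAMPLE_RATE / n
            if low_hz <= freq <= high_hz:
                re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
                im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
                power += (re * re + im * im) / n
        return power

    def cursor_step(window_samples, threshold=50.0):
        """Move the cursor up when alpha power is high (relaxed), down when it is low."""
        alpha = band_power(window_samples, 8.0, 12.0)
        return +1 if alpha > threshold else -1

    # Synthetic one-second window: a 10 Hz rhythm plus a little extra wobble.
    window = [5.0 * math.sin(2 * math.pi * 10 * i / SAMPLE_RATE) + 0.5 * math.sin(17 * i)
              for i in range(WINDOW)]
    print("Cursor step:", cursor_step(window))

The point of the sketch is only that a single, coarse feature of the whole signal is being used; this is why EEG control is described above as large-scale and generalised rather than a route to detailed communication.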
MEG - stands for Magneto-encephalography and is very similar to the EEG. Instead of measuring the electrical potential, it measures the magnetic fields caused by the electrical potentials. Apparently, this method allows a more precise localisation of regional activity. [Coren et al, p645]
CAT scanners - CAT stands for Computerised Axial Tomography. To take a CAT scan, the physician starts by injecting a dye into the bloodstream. The patient's head is then placed in a large X-ray machine. The machine takes an X-ray, rotates around the head by one degree, takes another, and so on until 180 X-rays have been taken. A computer analyses the data and creates a composite image of the brain. This technique is useful for determining brain structure without having to open up the skull. Unfortunately, it is like taking a still picture: you get an idea of where everything is, but you cannot see it in action.
PET - Positron Emission Tomography relies on regional Cerebral Blood Flow (rCBF) to determine the active parts of the brain. The substance injected into the bloodstream decays in a known manner, ejecting positrons at a statistically reliable rate. Glucose is often used, as it congregates in active neurons. When a positron hits an electron, the two particles annihilate one another, releasing some of their pent-up energy in the form of two gamma rays that travel in opposite directions simultaneously. A PET scanner is built from a ring of gamma-ray detectors. When two simultaneous gamma rays are detected, the scanner knows the annihilation occurred somewhere on the line between the two detectors; by accumulating many such lines, a computer can plot the brain activity of the region.
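The coincidence idea can be caricatured in a few lines of code: every detected pair of simultaneous gamma rays defines a line, and accumulating many such lines on a grid makes the cells near the source stand out. The sketch below does this in Python with a made-up source position and deliberately simplified geometry (each line is simply drawn a fixed distance either side of the annihilation); it illustrates the principle, not a real scanner's reconstruction.

    # A caricature of PET coincidence detection: each annihilation sends two gamma
    # rays in opposite directions, each detected pair defines a line, and stacking
    # many lines on a grid highlights the active region. All geometry and counts
    # are illustrative assumptions.

    import math
    import random

    GRID = 21                      # coarse 21 x 21 image grid
    HALF_LENGTH = 10.0             # how far each line is drawn either side of the annihilation
    SOURCE = (4.0, -2.0)           # assumed location of the active tissue

    image = [[0 for _ in range(GRID)] for _ in range(GRID)]

    def mark_line(x0, y0, x1, y1):
        """Add one count to every grid cell that the coincidence line passes through."""
        cells = set()
        steps = 200
        for s in range(steps + 1):
            x = x0 + (x1 - x0) * s / steps
            y = y0 + (y1 - y0) * s / steps
            cells.add((int(round(y)) + GRID // 2, int(round(x)) + GRID // 2))
        for row, col in cells:
            if 0 <= row < GRID and 0 <= col < GRID:
                image[row][col] += 1

    for _ in range(2000):
        angle = random.uniform(0.0, math.pi)   # direction of one gamma ray; the other is opposite
        dx, dy = math.cos(angle), math.sin(angle)
        mark_line(SOURCE[0] + HALF_LENGTH * dx, SOURCE[1] + HALF_LENGTH * dy,
                  SOURCE[0] - HALF_LENGTH * dx, SOURCE[1] - HALF_LENGTH * dy)

    # The brightest cell should sit close to the (otherwise unknown) source position.
    brightest = max(((r, c) for r in range(GRID) for c in range(GRID)),
                    key=lambda rc: image[rc[0]][rc[1]])
    expected = (int(round(SOURCE[1])) + GRID // 2, int(round(SOURCE[0])) + GRID // 2)
    print("Brightest cell:", brightest, "/ cell containing the source:", expected)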
MRI - Magnetic Resonance Imaging is based on the same technique as NMR. Unlike a PET scan, it is capable of producing very detailed images of the brain without exposing the brain to radioactivity of any sort. MRI utilises a very interesting property of the atoms that make up our brain: each atom has a certain spin, and while the axes of rotation are usually randomly oriented, a strong magnetic field will align them. The hydrogen atoms carried in the blood are usually targeted, as they are easier to align. When a radio-frequency electromagnetic field is then applied to the aligned hydrogen atoms, they spin like tiny gyroscopes. When the field is turned off, they relax into their previous positions, simultaneously emitting a very small amount of magnetic energy. By measuring this energy we can deduce the concentration of hydrogen atoms in the region being monitored, which tells us which brain regions are currently active (see CBF for why). MRI is slow, however, taking around 15 minutes per scan. A newer, faster form called echo-planar MRI can form images in less than a tenth of a second – fast enough to watch blood flowing. This device will greatly help us to study the structure of activity in the brain. [BioControl Systems Inc] [Coren et al] [Kalat] [Lusted, 1996]
Appendix B: Glossary
cyborg - CYBernetic ORGanism – Generally speaking, any person or creature that is partly machine and partly organic. This definition is wide-ranging, covering anything from a person with a pacemaker to someone who has replaced large sections of their body with artificial limbs and other enhancements. It can also be used as a term for machines that have been enhanced with organic parts (for example, 'The Terminator' in the movie of the same name was a machine covered with skin grown from living skin cells).
phosphene - A phosphene is a bright light that is perceived when the visual cortex is stimulated. The eyes have not actually created this effect, but the brain thinks it has seen the light due to an emulation of the processes that are usually in place when a light is seen. These are the ‘stars’ that are seen when you rub your eyes or get hit on the head.
wetware - A slang term for the body’s own neural processing systems. It comes as an extension from the words hardware and software.
wetwired - A slang term from literature, generally meaning the wiring of something to the body.
References - The list of references below consists of the articles and books that contributed to my knowledge of this field. Not all of them are directly referred to in the text, but I considered each to be important and interesting in its own right. You may notice that some of these references are works of fiction. Don't be put off by this; much of our best science was first conceived in fiction. William Gibson (to take a more modern example of this effect) has had a profound effect on the science of computer communications: the development of the Internet has been at least partly attributed to his ideas of 'cyberspace', and many are still working toward the full realisation of his dream of a 'consensual hallucination'. It is this dream that I myself strive toward in this project, and I hope that I can help it to someday become a fully developed technology.
Beardsley, Tim The Machinery of Thought. In Scientific American Trends in Neuroscience, August 1997
BioControl Systems Inc, Neural interface technology-The future of Human Computer Interaction. In: WebPages belonging to BioControl Systems Inc
Branwyn, Gareth The desire to be wired. In Wired 1.04, October 1993
Carder-Russell, Roderick A Personal Reasons for Seeking a Brain-Computer Interface In Human/Brain-Computer Interface WebPages, 1996.
Coren S, Ward L, Enns J, Sensation and Perception [4th Ed], Harcourt Brace college publishers, Fort Worth, 1994
Cyberpunk: 2.0.2.0: the role-playing game of the dark future [2nd Ed]. By R Talsorian Games Inc, Berkeley CA, 1993
Fleischer Brain Chip In Stepback: The Fleischer Files. Episode 101
Fromherz, P Neuron-Silicon Junction or Brain-Computer Junction? In: Ars Electronica Festival. Eds.: G. Stocker, C. Schöpf. Springer, Wien 1997, pp.158-161
Gibbs, W. Wayt Artificial Muscles. In Scientific American: Explorations (Smart Materials) May 1996
Gibbs, W. Wayt Mind Readings. In Scientific American: Analysis (Neuroscience), June 1996
Gibbs, W. Wayt Taking computers to task. In Scientific American: Trends in computing, July 1997
Gibson, William Neuromancer
Greengard, Samuel Head start In: Wired 5.02, Feb 1997
Johnny Mnemonic I need a ref for this.
Kalat, JW Biological Psychology [5th Ed]. Brookes/Cole Publishing Co., Pacific Grove, California, USA, 1995
Kalcher J, Flotzinger D, Neuper Ch, Goelly S, Petz, Pfurtscheller G Brain-Computer Interface Prototype BCI2 for online classification of 3 types of movements. Department of Medical Informatics and Ludwig Boltzmann Institute of Medical Informatics and Neuroinformatics
Katz Pictures Seeing is believing. In New Scientist, 24th April 1999
Lusted, HS and Knapp, RB Controlling Computers with Neural Signals. In Scientific American, October 1996
Macauley, William R., From Rubber Catsuits to Silicon Wetware: Transforming the Human Body and Polymorphic Desire(s) in Synthetic Media
Margulis, Zachary Going mental: let your neurons do the typing. In Wired 1.04, October 1993
Nadis, Steve We can rebuild you. In Trends, October 1997
Newquist, H P The Brain Makers: Genius, Ego, and Greed in the Quest for Machines that Think. C 1994, Sam's Publishing (a division of Prentice Hall), Indianapolis.
Ridley, Kimberly Artificial sensations In Technology Review, 1994
Shirow, Masamune Ghost in the shell
Spence, Kristin Updata: When wet meets dry. In: Wired 4.08, August 1996
Thomas, Peter Planet science: Thought control In: New scientist
Tonneson, Cindy and Withrow, Gary BioSensors
Wu, Karl Shadowtech: A Shadowrun book, FASA Corporation, 1992
Zacks, Rebecca Spinal cord repair. In Scientific American: Explorations, 18 August 1997
Zimmerman, T. G. Personal Area Networks: Near-field intrabody communication. In IBM Systems Journal, Vol. 35, Nos. 3 & 4 – MIT