"Anyone who nonconsensually violates your brain/mind/mentation using Mengele-like methods is a Nazi pig. You do not care what a Nazi pig thinks. You do not care about a Nazi pig's opinions. You do not respond to a Nazi pig ridiculing you, threatening you, trying to distract you, or otherwise trying to manipulate you. You work to get a Nazi pig hanged." - Allen Barker, NPT Theorem

Thursday, May 31, 2012

Human Brain Implant Research Suspended At Major University by Andrew Brownstein

By Andrew Brownstein, Staff Writer, The Albany Times Union, 8-25-99

Albany -- The professor whose work is at issue has focused on surgically inserted mind-control devices.

The University at Albany has shut down the research of a psychology professor probing the "X-Files" world of government surveillance and mind control. At conferences, in papers and in research over two semesters, Professor Kathryn Kelley explored the claims of those who say they were surgically implanted with communications devices to read their thoughts. According to colleagues, Kelley has privately claimed the university is violating her academic freedom. She declined to discuss the matter with a reporter. Kelley's research and the controversy surrounding it echo the experience of John Mack, a renowned Harvard psychiatrist who wrote the 1994 best seller "Abduction: Human Encounters with Aliens." By lending credence to the stories of those who claimed they were abducted and molested by space aliens, the book led to an unprecedented inquiry by the Harvard Medical School. A school committee eventually chastised Mack for engaging in unorthodox research and "affirming the delusions" of his patients.

But unlike Kelley, Mack has an international reputation. He earns hundreds of thousands of dollars in grants and won a Pulitzer Prize for his biography of T.E. Lawrence, known as Lawrence of Arabia. And while Harvard challenged Mack's conclusions, the investigation at UAlbany is focused on methods. Last week, university spokeswoman Mary Fiess released this statement on the matter: "The university imposed the suspension because of serious concerns that the experiment did not meet the standards governing such projects on campus. While we're working to gather all the facts in this case, we cannot comment further." A memo sent to all psychology professors and graduate students last week instructed them to refer calls "looking for information on any psychological research conducted in our department" to the university's public relations office. According to three sources -- two faculty members and a graduate student -- the school's Institutional Review Board, which monitors human research, closed the project when a student complained late last spring. The student, sources said, was not allowed to leave a lecture that was part of Kelley's experiment. Refusal to allow a subject to leave an experiment violates National Science Foundation guidelines.

Despite the inquiry, Kelley, a fully tenured professor who earned $67,000 last year, is slated to teach two graduate courses in the fall. The department became aware of Kelley's theories as early as the spring of 1998, when a note on her office door announced a lecture called "The Psychology of Invading the Self." The note described implant research funded by the National Security Agency and the Department of Defense with an annual budget of $2 billion. The "uninformed, unconsenting subjects" of these devices were typically "federal prisoners and political dissidents," the note said. At the same time, Kelley won approval from the review board to conduct research on "advances in technology that affect interpersonal communication." In a 16-page outline to the board, Kelley said she wanted to look at the uses of technology for "monitoring and control." She proposed presenting a lecture to research subjects and then having them respond to 60 questions about how the case study she would describe affected their views.

The interest in technology marked an extreme departure for Kelley, a professor at UAlbany since 1979. Kelley, who earned her Ph.D. from Purdue University, was a professor at Marquette University and the University of Wisconsin before joining the psychology faculty at UAlbany. Her previous research dealt with issues like health, date rape and risk-taking. With her ex-husband, distinguished psychology professor Donn Byrne, she co-authored a textbook on gender differences. The shift in the focus of her research puzzled many. Gregory George, a graduate student who has since left the university, said he was part of a team assigned to lay the factual foundation for the implants research. To his astonishment, he found several firms had developed "trans-tympanic transducers," instruments that function as mini-telephones, sending voice messages to the inner ear. Companies declined to market the product for fear of bad transmissions causing deafness, he said. George believed the point of the research was to look at how people would perceive those with the implants, and whether there might be a social stigma attached.

"Kathryn has never been one to go traditional," said George. "But some of us wondered why we were looking at the social stigma of something that hadn't been developed yet. Why not look at the stigma of using something more common, like a wheelchair?" Papers Kelley delivered at two recent conferences suggest that she was becoming fascinated with the subject of mind control. At the annual conference of the Eastern Psychological Association in Providence, R.I. -- attended by several UAlbany graduate students -- she delivered a paper that looked at implant claims as "one of the indicators of schizophrenia." Yet many colleagues began wondering to what extent Kelley believed that such implants were actually occurring. "A lot of people wonder where she draws the line," said one graduate student, who asked not to be named. "Is it hypothetical? Or is it fact?" In a more detailed treatment she gave at a conference earlier this month in Orlando, Fla., Kelley lent more credence to the phenomenon. She described how a subject might be implanted with the device during anesthesia, perhaps leaving tiny stitches visible in the ear. She called the devices RAATs, short for radio wave, auditory, assaultive, transmitting implants.

"When (short-wave) operators transmit to or scan RAAT implants in victims, they can talk to the victims remotely and anonymously, and hear the victim's speech and thoughts," Kelley wrote. The paper noted that the National Institutes of Health denied any governmental role in such research. The EPA is a respected psychological organization. But few professors had heard of the groups behind the Orlando meeting: the World Multiconference on Systemics, Cybernetics and Informatics, and the International Conference on Information Systems Analysis and Synthesis. The Web site for the organization, based in Venezuela, said it is devoted to cybernetics, which it describes as integrating various disciplines into "a whole that is permeating human thinking and practice." The current investigation into Kelley's work is considered highly sensitive at the university, coming four years after a gunman who claimed the government planted microchips in his body held a class of 37 students hostage and shot one student during a struggle. Ralph Tortorici, the gunman, recently hanged himself in his state prison cell. Without commenting on specifics, sociology Professor David Wagner, outgoing chair of the review board, said that shutting down a professor's research was "quite rare." Some faculty members said the last time they remember the board making such a move was in the early 1970s. Source: The UCLA Violence Project

next generation of implants - "...We've seen a few newsworthy brain implants in the last few years, including one designed to treat epilepsy and others that allow motor neurons to control computer cursors. But all of these devices were in the experimental phase of development. Medtronic's DBS implants have been FDA approved for more than a decade (for some conditions) and such devices have been used tens of thousands of times. That's beyond 'experimental'; we're reaching 'well-tested'. If the number of patients treated with these devices continues to climb as it has in the past few years (we were at only 35,000 or so back in 2007), brain implants are going to become much more common in the next few years. Keep in mind that these first generation devices are still rather crude. The best Medtronic has to offer has just eight electrodes (4 per lead), and scientists can only roughly target the desired areas needed to alleviate symptoms for disorders like Parkinson's. In many ways DBS implants are basically just pacemakers with wires leading into your head. Still, they've been shown to reduce movement dysfunction in patients with Parkinson's and dystonia, and to alleviate some cases of chronic pain. They're also relatively safe, especially considering that you're placing electrodes in the brain -- the mortality rate is less than 1%. And these devices are getting better. We've seen how the next generation of DBS implants for Parkinson's will be able to actively monitor and respond to brain activity. In the future, optogenetics will allow doctors to use light, not electricity, to stimulate parts of the brain (as we've seen with rodents). How widespread might these types of devices become when they have the precision to target just a few neurons at a time, and can respond autonomously to treat patients on their own?..." (80,000 and Counting, Brain Implants on the Rise World Wide).

hippocampus for long-term memory - "...For this first hybrid circuit, they followed the lead of neurology pioneers such as Eric Kandel and used neurons from snails. These unappealing invertebrates are popular among neuroscientists, because their neurons are an order of magnitude larger than ours, and because circuits consisting of only a few cells can already display a measurable biological function. As a substrate to grow the cells on, the researchers designed a specific chip with 14 two-way junctions (i.e., areas that can both send signals to neurons and receive signals back) arranged in a circle of about 200 μm diameter. Typically, they planted five to seven snail neurons onto such junctions and cultivated them for a few days, hoping that at least some would form electrical synapses with others. The experiment succeeded in producing a few such pairs of linked neurons that could build a bridge between a signal emitter and receiver in the silicon chip. Earlier this year, the equivalent achievement was also reported with a chemical instead of an electrical synapse. However, the process was much too inefficient and random to enable the construction of well-defined larger networks. If a complex and well-defined neuronal network cannot be generated on the chip directly, maybe the chip can be interfaced with a pre-existing network, for instance a brain? Following this line of research, Fromherz and Michael Hutzler have recently presented the first successful connection between a chip of the kind described above, containing capacitors to stimulate and transistors to sense nerve action, and a brain slice containing well-characterised neuronal connections. Specifically, the researchers turned their attention to the rat hippocampus, a brain region associated with long-term memory. It is known that in this part of the rat brain, a region known as CA3 stimulates the CA1 to which it is connected by extensive wiring.
Brain slices can be prepared such that the cut runs alongside the CA3 to CA1 connection and makes this entire communications channel accessible to experiments. Using such slices, Hutzler and Fromherz demonstrated that their chip can (via its capacitor) stimulate the CA3 region such that these brain cells pass on the signal to CA1, where it can be recorded with the chip's transistors..." (Plugging brains into computers).

Brain implants, often referred to as neural implants, are technological devices that connect directly to a biological subject's brain - usually placed on the surface of the brain, or attached to the brain's cortex. A common purpose of modern brain implants, and the focus of much current research, is establishing a biomedical prosthesis circumventing areas in the brain that have become dysfunctional after a stroke or other head injuries. This includes sensory substitution, e.g. in vision. Other brain implants are used in animal experiments simply to record brain activity for scientific reasons. Some brain implants involve creating interfaces between neural systems and computer chips. This work is part of a wider research field called brain-computer interfaces. (Brain-computer interface research also includes technology such as EEG arrays that allow interface between mind and machine but do not require direct implantation of a device.) Neural implants such as deep brain stimulation and vagus nerve stimulation are increasingly becoming routine for patients with Parkinson's disease and clinical depression respectively, proving themselves a boon for people with diseases which were previously regarded as incurable...In 1870, Eduard Hitzig and Gustav Fritsch demonstrated that electrical stimulation of the brains of dogs could produce movements. Robert Bartholow showed the same to be true for humans in 1874. By the start of the 20th century Fedor Krause began to systematically map human brain areas, using patients that had undergone brain surgery. Prominent research was conducted in the 1950s. Robert G. Heath experimented with aggressive mental patients, aiming to influence his subjects' moods through electrical stimulation. Yale University physiologist Jose Delgado demonstrated limited control of animal and human subjects' behaviours using electronic stimulation.
He invented the stimoceiver, or transdermal stimulator, a device implanted in the brain to transmit electrical impulses that modify basic behaviours such as aggression or sensations of pleasure. Delgado later wrote a popular book on mind control, called "Physical Control of the Mind," in which he stated: "the feasibility of remote control of activities in several species of animals has been demonstrated [...] The ultimate objective of this research is to provide an understanding of the mechanisms involved in the directional control of animals and to provide practical systems suitable for human application." In the 1950s, the CIA also funded research into mind control techniques, through programs such as MKULTRA. Perhaps because he received funding for some research through the US Office of Naval Research, it has been suggested (but not proven) that Delgado also received backing through the CIA. He denied this claim in a 2005 article in Scientific American, describing it as mere speculation by conspiracy theorists. He stated that his research was only ever scientifically motivated, aimed at understanding how the brain works...(Wikipedia).

Related Reading:

Monday, May 28, 2012

What Cat Sees by Dr David Whitehouse

Scientists have literally seen the world through a cat's eyes, by Dr David Whitehouse. In what is bound to become a much debated and highly controversial experiment, a team of US scientists have wired a computer to a cat's brain and created videos of what the animal was seeing. According to a paper published in the Journal of Neuroscience, Garrett Stanley, Yang Dan and Fei Li, from the Department of Molecular and Cell Biology, University of California, Berkeley, have been able to "reconstruct natural scenes with recognizable moving objects". The researchers attached electrodes to 177 cells in the so-called thalamus region of the cat's brain and monitored their activity. The thalamus is connected directly to the cat's eyes via the optic nerve. Each of its cells is programmed to respond to certain features in the cat's field of view. Some cells "fire" when they record an edge in the cat's vision, others when they see lines at certain angles, etc. This way the cat's brain acquires the information it needs to reconstruct an image.

Recognisable objects - The scientists recorded the patterns of firing from the cells in a computer. They then used a technique they describe as a "linear decoding technique" to reconstruct an image.

Scientists saw recognisable objects - To their amazement, they say, they saw natural scenes with recognisable objects such as people's faces. They had literally seen the world through a cat's eyes. Other scientists have hailed this as an important step in our understanding of how signals are represented and processed in the brain. It is research that has enormous implications.
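The "linear decoding technique" the researchers describe amounts to fitting a linear map from neural responses back to stimulus pixels. Here is a minimal sketch of that idea on simulated data; the cell count of 177 comes from the article, but the pixel count, noise level, and linear encoding model are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_pixels, n_frames = 177, 64, 5000

# Unknown encoding: each cell responds linearly to the stimulus, plus noise.
encoding = rng.normal(size=(n_cells, n_pixels))
stimuli = rng.normal(size=(n_frames, n_pixels))          # training "movie" frames
responses = stimuli @ encoding.T + 0.1 * rng.normal(size=(n_frames, n_cells))

# Linear decoding: fit a matrix D mapping responses back to pixels
# by least squares on the training data.
D, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out frame from its neural responses alone.
test_stim = rng.normal(size=n_pixels)
test_resp = test_stim @ encoding.T + 0.1 * rng.normal(size=n_cells)
reconstruction = test_resp @ D

corr = np.corrcoef(test_stim, reconstruction)[0, 1]
print(f"correlation between actual and reconstructed frame: {corr:.2f}")
```

With more cells than pixels and modest noise, the least-squares decoder recovers held-out frames almost exactly; real neural responses are far noisier and nonlinear, which is why the published reconstructions are recognisable but blurry.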

Artificial brain extensions - It could prove a breakthrough in the hoped-for ability to wire artificial limbs directly into the brain. More amazingly, it could lead to artificial brain extensions. By understanding how information can be presented to the brain, scientists may some day be able to build devices that interface directly with the brain, providing access to extra data storage or processing power, or the ability to control devices just by thinking about them. One of the scientists behind this current breakthrough, Garrett Stanley, now working at Harvard University, has already predicted machines with brain interfaces. Such revolutionary devices should not be expected in the very near future. They will require decoding information from elsewhere in the brain, looking at signals far more complicated than those decoded from the cat's thalamus, but, in a way, the principle has been demonstrated.

Figure 2. - Reconstruction of natural scenes from the responses of a population of neurons. (a), Receptive fields of 177 cells used in the reconstruction. Each receptive field was fitted with a two-dimensional Gaussian function. Each ellipse represents the contour at one standard deviation from the center of the Gaussian fit. Note that the actual receptive fields (including surround) are considerably larger than these ellipses. Red: On center. Blue: Off center. An area of 32 by 32 pixels (0.2 degrees/pixel) where movie signals were reconstructed is outlined in white. The grid inside the white square delineates the pixels. (b), Comparison between the actual and the reconstructed images in an area of 6.4 degrees by 6.4 degrees (white square in (a)). Each panel shows four consecutive frames (interframe interval: 31.1 msec) of the actual (upper) and the reconstructed (lower) movies. Top panel: scenes in the woods, with two trunks of trees as the most prominent objects. Middle panel: scenes in the woods, with smaller tree branches. Bottom panel: a face at slightly different displacements on the screen. (c), Quantitative comparison between the reconstructed and the actual movie signals. Top: histogram of temporal correlation coefficients between the actual and the reconstructed signals (both as functions of time) at each pixel. The histogram was generated from 1024 (32x32) pixels in the white square. Bottom: histogram of spatial correlation coefficients between the actual and the reconstructed signals (both as functions of spatial position) at each frame. The histogram was generated from 4096 frames (512 frames/movie, 8 movies) (reconstructed images).  Source: What Cat Sees by Dr David Whitehouse (Sci/Tech, October 8, 1999 BBC News Online Science UK)
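The quantitative comparison in the figure caption (temporal correlation coefficients per pixel, spatial correlation coefficients per frame) is straightforward to compute. A sketch using the caption's dimensions (32x32 pixels, 512 frames); the "movies" themselves are synthetic stand-ins here:

```python
import numpy as np

rng = np.random.default_rng(1)
frames, h, w = 512, 32, 32

actual = rng.normal(size=(frames, h * w))
# A noisy "reconstruction" partially correlated with the actual movie.
recon = 0.7 * actual + 0.7 * rng.normal(size=actual.shape)

def corr(a, b, axis):
    """Pearson correlation between a and b along the given axis."""
    a = a - a.mean(axis=axis, keepdims=True)
    b = b - b.mean(axis=axis, keepdims=True)
    return (a * b).sum(axis) / np.sqrt((a * a).sum(axis) * (b * b).sum(axis))

temporal = corr(actual, recon, axis=0)   # one coefficient per pixel
spatial = corr(actual, recon, axis=1)    # one coefficient per frame

print(temporal.shape, spatial.shape)     # (1024,) (512,)
print(f"median temporal r = {np.median(temporal):.2f}")
```

Histogramming `temporal` and `spatial` reproduces the two comparisons the caption describes: 1024 per-pixel values and 512 per-frame values.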


Friday, May 25, 2012

Remote Monitoring and Thought Inference

This article deals with what might commonly be called "mind reading." More specifically and technically it deals with methods by which the thoughts a person is thinking can be inferred by analyzing physiological data obtained about that person from any sort of sensor devices. (This inference need not be 100% effective to be a powerful technique for information gathering from, and harassment of, a targeted individual. Even a remote heartbeat and breathing monitor, combined with surveillance cameras, would allow for thought inferences that would be extremely unnerving to a targeted person.) Combining such capabilities with techniques and technologies described on other pages of this site would allow for a "full-duplex" sort of mind manipulation. This article at the DOE Openness site deals with the remote measurement of physiological data using microwave techniques. It is titled "Measurement of Heart and Breathing Signals of Human Subjects Through Barriers With Microwave Life-Detection Systems."

This 1981 Science Digest article by Gary Selden, "Machines That Read Minds" provides a good introduction to brainwave analysis as a means for thought inference. It describes some of the research being conducted at that time, and notes that the CIA actively funded such research. A technique described as brain fingerprinting has been developed to determine if a subject recognizes some aspect of a scene, etc. It is being marketed as a tool for law enforcement, like a brainwave version of a lie detector. It uses a technique like that described for the P300 "recognition wave" in the Gary Selden article above. This is a PubMed abstract of an article "Brain-wave recognition of words." See also the Related Articles. The full text of recent articles in the Proceedings of the National Academy of Sciences can be accessed online.

Here is a brief excerpt from an article titled "Thought Control," by Peter Thomas, in New Scientist, March 9, 1996: - Last year, at the University of Tottori, near Osaka in Japan, a team of computer scientists led by Michio Inoue took this idea further by analysing the EEG signals that correspond to a subject concentrating on a specific word... The system depends on a database of EEG patterns taken from a subject concentrating on known words. To work out what the subject is thinking, the computer attempts to match their EEG signals with the patterns in the database. For the moment the computer has a vocabulary of only five words and takes 25 seconds to make its guess. In tests, Inoue claims a success rate of 80 percent, but he is working on improvements... Note that the time to "guess" is not really meaningful by itself (depending on the computer and algorithm used) and that significant improvements to both the signal measurements and the pattern matching algorithm are almost surely possible. Scientists from the Department of Molecular and Cell Biology at the University of California, Berkeley, have successfully wired a cat's visual system so that they are able to reconstruct computer images of what the cat sees. They attached electrodes to 177 cells in the thalamus region of the cat's brain. Here is a BBC article on the research, and these are images of the reconstructed views. (Here is a local copy of the images page.) This page at the C.A.H.R.A. site includes an L.A. Times article from 1976 about research into inferring thoughts and remotely measuring magnetoencephalograph (MEG) data. One source of information about a person's thought process is where the eyes focus. Eye gaze monitoring devices are currently being marketed for the disabled and for applications like studying the effectiveness of advertising. One such system, originally developed at the University of Virginia, is called ERICA.
Stanford University has an Advanced Eye Interpretation Project, and the following quote is from their web site. Our patented work on inferring mental states from eye-movement patterns allows us to better describe what users are doing. This ongoing research moves beyond merely asking where a person's eyes are focused, but aims to infer high-level behaviors from observing various patterns of eye-movement. Interesting results become apparent when behaviors such as "reading" and "searching" are analyzed as users interact with dynamic, complex applications.
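The Inoue system described above is essentially template matching: store an EEG pattern for each vocabulary word, then classify a new trial by its best-matching template. A toy sketch of that scheme follows; the five-word vocabulary size matches the article, but the word list, channel count, sampling rate, and random "templates" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
words = ["yes", "no", "water", "food", "help"]
n_channels, n_samples = 8, 250   # hypothetical 8-channel EEG, 1 s at 250 Hz

# Database: one stored template per word (in practice, averaged training trials).
templates = {w: rng.normal(size=(n_channels, n_samples)) for w in words}

def classify(eeg):
    """Return the word whose stored template best matches the signal
    (highest Pearson correlation over the flattened trial)."""
    def score(a, b):
        a, b = a.ravel(), b.ravel()
        return np.dot(a - a.mean(), b - b.mean()) / (np.std(a) * np.std(b) * a.size)
    return max(words, key=lambda w: score(eeg, templates[w]))

# A noisy new trial of the subject "thinking" one of the known words:
trial = templates["water"] + 0.5 * rng.normal(size=(n_channels, n_samples))
print(classify(trial))  # prints "water"
```

The article's 80 percent success rate and 25-second decision time reflect the noisiness of real EEG, not the matching step itself, which is near-instant on modern hardware.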

What do people do when they read on-line news? How much do they read on which providers? The Stanford/Poynter Eyetrack2000 Project is analyzing typical users' eye-movements and surfing behavior as they read on-line news to answer these (and many more) questions. Here is a Scientific American article describing how researchers can predict a rhesus monkey's arm movements from brain recordings a fraction of a second before the monkey's muscles move. This Wired article from 1993, "The Desire to Be Wired," is about neural interfacing. Among other topics, it discusses independent, do-it-yourself researchers of brain interfacing devices. The following excerpt describes an experiment in EEG transference. It is not sufficiently documented in the article to be taken as a scientific study, but is nonetheless interesting. David Cole of the non-profit group AquaThought is another independent researcher willing to explore the inside of his own cranium. Over the years, he's been working on several schemes to transfer EEG patterns from one person's brain to another. The patterns of recorded brain waves from the source subject are amplified many thousands of times and then transferred to a target subject (in this case, Cole himself). The first tests on this device, dubbed the Montage Amplifier, were done using conventional EEG electrodes placed on the scalp. The lab notes from one of the first sessions with the Amplifier report that the target (Cole) experienced visual effects, including a "hot spot" in the very location where the source subject's eyes were being illuminated with a flashlight. Cole experienced a general state of "nervousness, alarm, agitation, and flushed face" during the procedure. The results of these initial experiments made Cole skittish about attempting others using electrical stimulation. He has since done several sessions using deep magnetic stimulation via mounted solenoids built from conventional iron nails wrapped with 22-gauge wire. 
"The results are not as dramatic, but they are consistent enough to warrant more study," he says. This cyberpunk site, for example, contains related information. See also the page on implants.

Tuesday, May 22, 2012

The Cold War Experiments

Radiation tests were only one small part of a vast research program that used thousands of Americans as guinea pigs.

U.S. News and World Report, January 24, 1994. By Stephen Budiansky, Erica E. Goode and Ted Gest

On June 1, 1951, top military and intelligence officials of the United States, Canada and Great Britain, alarmed by the frightening reports of communist success at "intervention in the individual mind," summoned a small group of eminent psychologists to a secret meeting at the Ritz-Carlton Hotel in Montreal. The Soviets had gotten Hungary's Jozsef Cardinal Mindszenty, an outspoken anti-communist, to confess to espionage, and they also seemed to be able to indoctrinate political enemies and even control the thoughts of entire populations. The researchers were convinced that the communists' success must be the fruit of some mysterious breakthroughs. By the following September, U.S. government scientists, spurred on by reports that American prisoners of war were being brainwashed in North Korea, were proposing an urgent, top-secret research program on behavior modification. Drugs, hypnosis, electroshock, lobotomy -- all were to be studied as part of a vast U.S. effort to close the mind-control gap. New revelations that government cold war experiments exposed thousands of Americans to radiation have prompted fresh congressional inquiries, including a hearing last week on tests conducted on retarded children in Massachusetts. A Department of Energy hotline set up to handle calls from possible subjects of the tests has been swamped. But the radiation experiments are only one facet of a vast cold war research program that used thousands of Americans as guinea pigs.

From the end of World War II well into the 1970s, the Atomic Energy Commission, the Defense Department, the military services, the CIA and other agencies used prisoners, drug addicts, mental patients, college students, soldiers, even bar patrons, in a vast range of government-run experiments to test the effects of everything from radiation, LSD and nerve gas to intense electric shocks and prolonged "sensory deprivation." Some of the human guinea pigs knew what they were getting into; many others did not even know they were being experimented on. But in the life-and-death struggle with communism, America could not afford to leave any scientific avenue unexplored. With the cold war safely over, Energy Secretary Hazel O'Leary has ordered the declassification of millions of pages of documents on the radiation experiments, and the administration is now considering compensating the hundreds of subjects of these odd and sometimes gruesome atomic tests. But the government has long ignored thousands of other cold war victims, rebuffing their requests for compensation and refusing to admit its responsibility for injuries they suffered. And the Clinton administration shows no sign of softening that hard line. "We're not looking for drugs," says cabinet secretary Christine Varney. "At least initially, we need to keep our focus limited to human radiation."

In Clinton's court. Now, the only hope for thousands who were injured or who were experimented on without their informed consent is that President Clinton or Congress will take action to compensate the forgotten casualties of the cold war. Continued secrecy and legal roadblocks erected by the government have made it virtually impossible for victims of these cold war human experiments to sue the government successfully, legal experts say. Despite the administration's reluctance, Congress may be moving to seek justice for all the government's cold war victims. "It's not just radiation we're talking about," says Democratic Sen. John Glenn of Ohio, a former Marine and astronaut who is holding hearings on the subject this week. "Any place government experimenting caused a problem we should make every effort to notify the people and follow up. We ought to set up some sort of review and compensation for those who were really hurt." Many of the stories of people whose lives were destroyed by mind-altering drugs, electroshock "treatments" and other military and CIA experiments involving toxic chemicals or behavior modification have been known for almost 20 years. But U.S. News has discovered that only a handful were ever compensated -- or even told what was done to them. "There has essentially been no legitimate followup, despite the CIA's promise to track down the victims and see what happened to them," says Alan Scheflin, a professor at Santa Clara University Law School and an authority on cold war mind control research. "It's just one of the many broken promises." A CIA spokesman last week said the agency is searching its files for radiation tests but has no plans to revisit other human experimentation.

MKULTRA. Most victims have never been informed by the government of the nature of the experiments they were subjected to or, in some cases, even of the fact that they were subjects. In a 1977 hearing, then CIA director Stansfield Turner said he found the experiments "abhorrent" and promised that the CIA would find and notify the people used in the tests. Turner last week insisted that "they found everyone they possibly could find." But internal memos and depositions taken from CIA officials in a lawsuit against the agency in the 1980s reveal that of the hundreds of experimental subjects used in the CIA's mind-control program, code-named MKULTRA, only 14 were ever notified and only one was compensated -- for $15,000. The 14 had all been given LSD surreptitiously by CIA agents in San Francisco in an attempt to test the drug in an "operationally realistic" setting. One of the victims, U.S. News discovered, was a San Francisco nightclub singer, Ruth Kelley, now deceased. In the early 1960s, according to a deposition from a CIA official who was assigned in the 1980s to track down MKULTRA victims, LSD was slipped into Kelley's drink just before her act at a club called The Black Sheep. The agents who had drugged her "felt the LSD definitely took some effect during her act," testified Frank Laubinger, the official in charge of the notification program. One agent went to the bar the next day and reported that she was fine, though another recalled that she had to be hospitalized.

Most of the MKULTRA documents were destroyed in 1973 on orders of then CIA Director Richard Helms, and the records that remain do not contain the names of human subjects used in most of the tests. But they do clearly suggest that hundreds of people were subjected to experiments funded by the CIA and carried out at universities, prisons, mental hospitals, and drug rehabilitation centers. Even so, according to Laubinger's 1983 deposition, "it was decided that there were no subjects that required notification other than those in the [San Francisco] project," and the CIA made no effort to search university records or conduct personal interviews to find other victims. Admiral Turner, in his 1983 deposition, conceded that "a disappointingly small number" were notified but defended the agency's continuing refusal to declassify the names of the researchers and universities involved. "I don't think that would have been necessarily the best way," Turner said. "Not in the litigious society we live in." In 1985, the agency successfully appealed to the Supreme Court to block release of that information.

One of the grisliest CIA-funded experiments -- and one of the few that led to a successful lawsuit -- involved the work of a Canadian psychiatrist, Dr. Ewen Cameron. In the 1950s, Cameron developed a method to treat psychotics using what he called "depatterning" and "psychic driving." According to a grant application he submitted in 1957 to the Society for the Investigation of Human Ecology, a CIA-funded front set up to support behavior-control research, the procedure consisted of "breaking down of ongoing patterns of the patient's behavior by means of particularly intensive electroshocks (depatterning)" -- and in some cases, with repeated doses of LSD. This was followed with "intensive repetition (16 hours a day for six or seven days)" of a tape-recorded message, during which time "the patient is kept in partial sensory isolation." Cameron's application proposed trying a variety of drugs, including the paralytic curare, as part of a new technique of "inactivating the patient."

The 56-day sleep. The analogy to brainwashing was obvious to the CIA, which provided a $60,000 grant through the human ecology society. Nine of Cameron's former patients, who had sought treatment for depression, alcoholism and other problems at the Allan Memorial Institute at McGill University, where Cameron was a director, filed a lawsuit against the CIA in 1979. One patient, Rita Zimmerman, was "depatterned" with 30 electroshock sessions followed by 56 days of drug-induced sleep. It left her incontinent; others suffered permanent brain damage, lost their jobs or otherwise deteriorated. The case, Orlikow v. U.S., was settled in 1988 for $750,000. (Cameron died in 1967.) A more typical experience of those seeking recompense is that of Air Force officer Lloyd Gamble, who volunteered in 1957 to take part in a test at the Army Chemical Warfare Laboratories in Edgewood, Md. He told U.S. News that he was informed he would be testing gas masks and protective gear. Instead, he learned in 1975, he and 1,000 other soldiers were given LSD. "If they had told me of the risks, I never would have done it," he says now. "It was outrageous." He says after the test he was simply "turned loose to drive from Aberdeen to Delaware" while under the influence of LSD. "I didn't even remember having been there."

Gamble began suffering blackouts, periods of deep depression, acute anxiety and violent behavior. He attempted suicide in 1960, lost his top-secret security clearance and finally took early retirement in 1968. When he belatedly learned he had been given LSD, he sought recompense. The Justice Department rejected his request because the statute of limitations had expired; the Veterans' Administration denied disability payments, saying there was no evidence of permanent injury. The Defense Department says Gamble signed a "volunteer's participation agreement" and that he received two LSD doses. Gamble and others were told that "they would receive a chemical compound, the effects of which would be similar to those of being intoxicated by alcoholic beverages." Democratic Rep. Leslie Byrne of Virginia is sponsoring a bill that seeks $253,488 for Gamble; DOD opposes the bill, saying there is "insufficient factual basis" for compensation. Such "private bills" usually are difficult to pass in the face of executive branch opposition.

Unreasonable men? Other cases filed by prisoners or soldiers who were given a variety of drugs have been dismissed by judges who have ruled that although the subjects did not learn until the 1970s exactly what had been done to them, the side effects and flashbacks they experienced immediately after the tests should have prompted "a reasonable man to seek legal advice" at the time. "The failure to notify and promptly compensate the people who were victimized by these cold war excesses is inexcusable," argues James Turner, one of the lawyers in the Orlikow case. But he says the courts and the agencies now have made it virtually impossible for a victim to succeed in a legal claim. "Records are gone, key witnesses have died, people have moved; in the drug-testing cases, people are damaged in other ways, which undermines their credibility." The justifications offered for these tests cover everything from cloak-and-dagger schemes to discredit foreign politicians to training military personnel. The Army exposed as many as 3,000 soldiers to BZ, a powerful hallucinogen then under development as a chemical weapon. The drug attacks the nervous system, causing dizziness, vomiting, and immobility. Thousands more also participated in the Army's Medical Volunteer Program, testing nerve gas, vaccines and antidotes.

Talkative. The earliest behavior-control experiments were part of a 1947 Navy project called Operation CHATTER, which was seeking "speech-inducing drugs" for use in interrogating "enemy or subversive personnel." The project was eventually abandoned because the drugs "had such a bitter taste it was not possible to keep the human subjects from knowing" they had been drugged. But by 1952, undaunted by such setbacks, secret psychological research was booming. "One of the problems we had all the way along was the ingrained belief on the part of [CIA] agents that the Soviets were 10 feet tall, that there were huge programs going on in the Soviet Union to influence behavior," John Gittinger, a CIA psychologist who oversaw the Human Ecology society's operations, told U.S. News. A classified 1952 study by the U.S. government's Psychological Strategy Board laid out an entire agenda for behavior-control research. Calling communist brainwashing "a serious threat to mankind," scientists urged that drugs, electric shock and other techniques be examined in "clinical studies ... done in a remote situation." The report even mused about the potential of lobotomy, arguing that "if it were possible to perform such a procedure on members of the Politburo, the U.S.S.R. would no longer be a problem to us," though it also noted that the "detectability" of the surgical operation made its use problematic.

Although there is no evidence that lobotomy experiments were ever performed, many other bizarre and intrusive procedures were. In 1955, the Army supported research at Tulane University in which mental patients had electrodes implanted in their brains to measure the effects of LSD and other drugs. In other experiments, volunteers were kept in sensory-deprivation chambers for as long as 131 hours and bombarded with white noise and taped messages until they began hallucinating. The goal: to see if they could be "converted" to new beliefs. As recently as 1972, U.S. News found, the Air Force was supporting research by Dr. Amedeo Marrazzi, who is now dead, in which psychiatric patients at the University of Missouri Institute of Psychiatry and the University of Minnesota Hospital -- including an 18-year-old girl who subsequently went into a catatonic state for three days -- were given LSD to study "ego strength." Gittinger concedes that some of the research was quite naive. "We were trying to learn about subliminal perception and all the silly things people were believing in at that time," he says. One study even tried to see if extrasensory perception could be developed by "training" subjects with electric shocks when they got the wrong answer. But "most of it was exciting and interesting and stimulating, and quite necessary as it happens, during that period of time," Gittinger insists. Another former CIA official, Sidney Gottlieb, who directed the MKULTRA behavior-control program almost from its inception, refused to discuss his work when a U.S. News reporter visited him last week at his home. He said the CIA was only trying to encourage basic work in behavioral science. But he added that after his retirement in 1973, he went back to school, practiced for 19 years as a speech pathologist and now works with AIDS and cancer patients at a hospice. He said he has devoted the years since he left the CIA "trying to get on the side of the angels instead of the devils."

Saturday, May 19, 2012

The "Soft Kill" Fallacy by Steven Aftergood

Details about programs to develop so-called "non-lethal" weapons are slowly emerging from the U.S. government's secret "black budget." The futuristic aura of many non-lethal weapons is seductive, and their advent has been heralded uncritically by many media reports of kinder, gentler weapons (1). But basic political, legal, and strategic questions about the utility of the non-lethal thrust remain unanswered - sometimes even unasked. "Non-lethal weapons disable or destroy without causing significant injury or damage," asserted Undersecretary of Defense Paul Wolfowitz in a March 1991 memorandum. This is an important misconception. Nevertheless, Wolfowitz wrote, "A U.S. lead in non-lethal technologies will increase our options and reinforce our position in the post-Cold War world" (2). The concept of non-lethal weapons is not new; the term appears in heavily censored CIA documents dating from the 1960s. But research and development in non-lethal technologies has received new impetus as post-Cold War Pentagon planning has shifted its focus to regional conflicts, insurgencies, and peacekeeping. Dozens of non-lethal weapons have been proposed or developed, mostly in laboratory-scale models. They encompass a broad range of technologies, including chemical, biological, kinetic, electromagnetic, and acoustic weapons, as well as informational techniques such as computer viruses (see "A Non-lethal Laundry List," page 43).

Of course, the arsenal of conventional warfare already includes systems like electronic jamming devices and anti-radar missiles that are "non-lethal" in the sense that they disable enemy weapons - but only in the context of armed and deadly conflict. In contrast, the proponents of non-lethal weapons apparently envision a relatively benign battlefield. Sticky foams and "calmatives" would immobilize or sedate adversaries. Specially cultured bacteria would corrode and degrade components of weapons systems. Optical munitions would cripple sensors and dazzle, if not blind, soldiers. Acoustic beam weapons would knock them out. Netting and shrouds would thwart the movement of aircraft, tanks, and armored vehicles. These and many other related technologies have already been demonstrated at a proof-of-concept level. In some cases, the proposed technologies raise questions about compliance with international agreements (see "Non-lethal Weapons May Violate Treaties," page 44). In the aftermath of the Persian Gulf War, the Pentagon has moved to coordinate these diverse research programs, as well as plan the acquisition of non-lethal weapons and their incorporation into military training (3). Current funding, which is secret, is probably no more than several tens of millions of dollars per year, although reportedly it could grow to more than $1 billion over the next several years.

One of the people spending that money is John B. Alexander of Los Alamos National Laboratory. The 32-year army veteran belongs to the "special technologies" group of the lab's International Technology Division, which assesses the national security implications of emerging technologies. Alexander coordinates the lab's multidisciplinary effort on non-lethal weapons. For his outstanding leadership in this capacity, Aviation Week & Space Technology honored him as one of its annual aerospace "laureates" this past January. Alexander also has a pronounced interest in paranormal phenomena. In a notorious 1980 article, "The New Mental Battlefield," which appeared in Military Review (a U.S. Army journal), Alexander wrote that "there are weapons systems that operate on the power of the mind and whose lethal capacity has already been demonstrated. . . . The psychotronic weapon would be silent, difficult to detect, and would require only a human operator as a power source." Discussing "out-of-body experiences," he asserted that "it has been demonstrated that certain persons appear to have the ability to mentally retrieve data from afar while physically remaining in a secure location. . . . The strategic and tactical applications are unlimited." (And in fact there is evidence that "remote viewers" have provided consulting services to U.S. intelligence agencies.)

"There is sufficient concern about psychic intrusion to cause work to begin on countermeasures," Alexander advised in the article. Last year, Alexander organized a national conference devoted to researching "reports of ritual abuse, near-death experiences, human contacts with extraterrestrial aliens and other so-called 'anomalous experiences,'" the Albuquerque Journal reported in March 1993. The Australian magazine Nexus reported last year that in 1971, Alexander "was diving in the Bimini Islands looking for the lost continent of Atlantis. He was an official representative for the Silva mind control organization and a lecturer on precataclysmic civilizations . . . [and] he helped perform ESP experiments with dolphins." In The Warrior's Edge: Front-line Strategies for Victory on the Corporate Battlefield - a 1990 book he co-authored with Maj. Richard Groller and Janet Morris - Alexander describes himself as having "evolved from hard-core mercenary to thanatologist." "As a Special Forces A-Team commander in Thailand and Vietnam, he led hundreds of mercenaries into battle," the book explains. "At the same time, he studied meditation in Buddhist monasteries and later engaged in technical exploration and demonstration of advanced human performance." Along the way, Alexander seems to have immersed himself in nearly every peripheral or imaginary mode of human experience. Alexander is forthright and unapologetic about his interest in the paranormal, insisting that it has no connection at all with his work at Los Alamos. He also says that a forthcoming book by Jim Marrs will "take the wraps off" the field of parapsychology and help to vindicate his position. Marrs is the author of a Kennedy assassination conspiracy book, Crossfire. John Alexander is by all accounts a resourceful and imaginative individual. He would make a splendid character in a science fiction novel. But he probably shouldn't be spending taxpayer money without adult supervision.

The first step in fathoming the non-lethal weapons program is to get past the name. When even proponents concede that non-lethal weapons are not necessarily non-lethal, why are they still called that? Because the term is politically attractive. "Major political benefit can be accrued by being the first nation to announce a policy advocating projection of force in a manner that does not result in killing people," Alexander writes (4). Various names were considered and are still sometimes used: soft kill, mission kill, less-than-lethal weapons, noninjurious incapacitation, disabling measures, strategic immobility. "Having been through a number of names, I can say that nothing has had the impact of 'non-lethal,'" avers Alexander. The growing prominence of the non-lethal program tends to validate this strategy. Rebelling against the program's marketing spin, analysts across the political spectrum have rejected the assertion that non-lethal weapons represent a new development in warfighting - or even a fruitful area for investment. Defense expert William M. Arkin says that the resurgence of interest in non-lethal weapons was spawned in part by "the use of special weapons [the Kit 2 carbon-fiber warheads on Tomahawk sea-launched cruise missiles] against electrical installations" in the Persian Gulf War. However, he notes, this "non-lethal" application devastated a civilian population that was otherwise largely spared the direct effects of bombing. Non-lethal weaponry, Arkin concludes, is a "fantasy program" (5).

Likewise, Eliot A. Cohen, writing in Foreign Affairs, declares that "the most dangerous legacy of the Persian Gulf War [is] the fantasy of near-bloodless use of force." Further, Cohen says, "the occupants of a helicopter crashing to earth after its flight controls have fallen prey to a high-power microwave weapon would take little solace from the knowledge that a non-lethal weapon had sealed their doom. Some of these weapons (blinding lasers, for example) may not kill, but have exceedingly nasty consequences for their victims. And in the end a disabling weapon works only if it leaves an opponent vulnerable to full-scale, deadly force" (6). Many official spokespersons concede the point and argue that non-lethality is being evaluated as an adjunct to - not a replacement for - large-scale conventional warfare. "We're not looking at this as a new warfighting strategy per se, but rather as another effective tool for the users," says Frank Kendall, director of the Pentagon's tactical warfare programs (7). But some, like Alexander of Los Alamos, are far more ambitious. Alexander envisions a fundamental reorganization of the military in which non-lethal weapons would apparently occupy a central position. "Non-lethal defense concepts are comprehensive, far beyond adjuncts to present warfighting capabilities. Non-lethal defense has applicability across the continuum of conflict up to and including strategic paralysis of an adversary" (8).

If there is any substantive basis for this contention, it is obscured by the secrecy surrounding the programs. Because most non-lethal weapons programs are classified, they are shielded from the democratic process. Such secrecy precludes informed public discussion of some of the most basic policy issues as well. One immediate consequence of the excessive secrecy is a wasteful duplication of effort. Justice Department officials who surveyed some of the "black budget" programs for possible law enforcement applications found the same technologies being developed in as many as six independent programs. "We've been startled at the number of times we've run into this," says David Boyd of the National Institute of Justice (9). The waste results from a lack of independent oversight of non-lethal programs, which - like other highly classified "special access" or "black" programs in defense and intelligence - operate beyond the reach of the checks and balances that U.S. citizens take for granted. Special access programs employ security measures that are draconian compared to ordinary classified programs. Cong. Patricia Schroeder, the Colorado Democrat who chairs the House Armed Services Research and Technology subcommittee, mentioned at a recent hearing that she'd asked the Defense Department for a list of special access programs and their costs. Her request was turned down. Anita Jones, the Pentagon director of defense research and engineering, claimed that providing the list to Schroeder - whose subcommittee must authorize spending for such programs - would be too dangerous to national security (10).

This decision was grotesque but not surprising. In the special access world, constitutional democracy does not apply, and government accountability is a subversive idea. The government secrecy system as a whole is among the most poisonous legacies of the Cold War-and a godsend for non-lethal weapons programs. Of course, there has always been a degree of secrecy in the U.S. government, and it's always been recognized that certain narrow areas must be protected from disclosure for the sake of a larger public interest. But the Cold War secrecy system still in place today goes far beyond those areas of consensus. It is in fact a kind of political cancer that has been allowed to spread unchecked into vast sectors of government, masking huge secret budgets, suppressing incredible volumes of historical and policy records, and concealing environmental data and other essential requirements of democratic decision-making and government accountability. Beyond simply concealing enormous quantities of information from the public, the Cold War secrecy system also mandates active deception. A security manual for special access programs authorizes contractors to employ "cover stories" to disguise their activities. The only condition is that "cover stories must be believable" (11).

Even the government is starting to recognize that official cover and deception programs are getting out of hand and need to be curtailed. A Joint Security Commission established by the secretary of defense and the director of central intelligence reported in March that "the use of cover to conceal the existence of a government facility or the fact of government research and development interest in a particular technology is broader than necessary and significantly increases costs." "For example, one military service routinely uses cover mechanisms for its [special access] programs without regard to individual threat or need. Another military organization uses cover to hide the existence of certain activities or facilities. Critics maintain that in many cases, cover is being used to hide what is already known and widely reported in the news media," the commission stated (12). The hazards of unaccountable government, from secret wars to secret radiation experiments, are well known. And yet the system continues. The Clinton administration has made progress toward reforming it, but measurable results still have not materialized. The nominal justification for secrecy in non-lethal weapons is that developing them on a totally unclassified basis would enable potential adversaries to duplicate the effort or develop countermeasures. This is a valid concern that is exploited beyond all justification to the point of concealing the budgets and even the very existence of many programs. If a non-lethal technology is so fragile that simply acknowledging its existence would negate its effectiveness, then it probably isn't worth much.

End Notes
1. "Soon, 'Phasers on Stun,'" Newsweek (Feb. 7, 1994), pp. 24-26; "Not So Deadly Weapons," Los Angeles Times, Dec. 20, 1994, p. A4.
2. Paul Wolfowitz, "Do We Need a Nonlethal Defense Initiative?" memorandum to the Secretary of Defense (March 10, 1991) USD(P).
3. Barbara Opall, "DoD to Boost Nonlethal Options," Defense News (March 28, 1994), p. 46.
4. John B. Alexander, Los Alamos National Laboratory Report No. LA-UR-92-3206.
5. William M. Arkin, "Arms Control After the Gulf War," (Center for International and Security Studies at Maryland, 1993), p. 26.
6. Eliot A. Cohen, "The Mystique of U.S. Air Power," Foreign Affairs (Jan./Feb. 1994), p. 121.
7. Opall, Defense News.
8. John B. Alexander, Los Alamos National Laboratory Report No. LA-UR-92-3773.
9. Stacey Evers, "Police, Prisons Want Cheap Non-lethal Technologies," Aerospace Daily, Nov. 19, 1993, p. 299.
10. "Perry Studying Black Budget Oversight Procedures," Defense Daily, April 19, 1994, p. 106.
11. John Horgan, "Lying By the Book," Scientific American (Oct. 1992), p. 20.
12. "Redefining Security," a Report to the Secretary of Defense and the Director of Central Intelligence by the Joint Security Commission (Feb. 28, 1994), pp. 19-20.

A non-lethal laundry list: Dozens of technologies are being studied or developed under the elastic rubric of "non-lethal weapons." Some of the non-lethal weapons concepts most frequently cited in the unclassified literature were proposed or first publicly described by Janet Morris of the U.S. Global Strategy Council, whose advocacy helped motivate the current Pentagon initiative on non-lethal weapons. Publicly described weapons include:

Infrasound: Very low-frequency sound generators could be tuned to incapacitate humans, causing disorientation, nausea, vomiting, or bowel spasms.

Laser weapons: Low-energy laser rifles could dazzle or temporarily blind enemy soldiers, or disable optical and infrared systems used for target acquisition, tracking, night vision, and range finding. The International Committee of the Red Cross has initiated a humanitarian campaign to prohibit the use of lasers as weapons of war.

Supercaustics: These highly acidic chemical agents can be millions of times more caustic than hydrofluoric acid. They could destroy the optics of heavily armored vehicles, as well as tires and structural metals.

Biological agents: Microbial cultures can be "designed" to chew up almost anything. Scientists at Los Alamos National Laboratory reviewed naturally occurring organisms that could be cultured to enhance certain characteristics. "As a result, we discovered a bacterium that degrades a specific material used in many weapons systems" ("Antimaterial Technology," Los Alamos National Lab FY 1991 Progress Report, LA-12319-PR, p. 19).

Acoustic beam weapons: High-frequency acoustic "bullets" used against people would induce blunt-object trauma "like being hit by a baseball." According to John Alexander of Los Alamos, "Proof of principle has been established; we can make relatively compact acoustic weapons" (Mark Tapscott and Kay Atwal, "New Weapons That Win Without Killing on DOD's Horizon," Defense Electronics (Feb. 1993), p. 41).

Combustion inhibitors: Chemical agents can be released into the atmosphere or introduced directly into fuel tanks to contaminate fuel or change its viscosity to degrade or disable all mechanical devices powered by combustion.

Mini-nukes: Jumping onto the non-lethal bandwagon, the redoubtable Edward Teller has proposed developing mini-nuclear weapons with explosive yields of 100 tons ("The 'Soft Kill' Solution," March/April 1994 Bulletin). These would be non-lethal, one supposes, as long as the enemy does what he is told.

Among the many other technologies under consideration in the Pentagon's non-lethal program are sticky foams to immobilize individuals, anti-traction chemicals to slicken roads and runways, high-power microwave generators, mechanical entanglements, holographic projectors, non-nuclear electromagnetic pulse generators, neural inhibitors, and wireless stun devices (Elaine M. Grossman, "Pentagon to Set Priorities in Non-lethal Technologies, Weapons," Inside the Air Force (April 15, 1994), p. 1). - S.A.

"Non-lethal" weapons may violate treaties - Development of many of the proposed weapons described on these pages has been undertaken by NATO, the United States, and probably other nations as well. Most of the weapons could be considered "pre-lethal" rather than non-lethal. They would actually provide a continuum of effects ranging from mild to lethal, with varying degrees of controllability. Serious questions arise about the legality of these expensive and highly classified development programs. Four international treaties are particularly relevant.

The Biological Weapons Convention. - The development of biological agents for "non-lethal" uses such as degradation of aircraft fuel, lubricants, or electrical insulation would appear to violate the Biological Weapons Convention (BWC), which prohibits the development, production, or possession of biological agents that have no justification for prophylactic, protective, or other peaceful purposes. Although "protective purposes" is not defined in the treaty, by analogy with the Chemical Weapons Convention (CWC) it can be presumed to mean protection against dangerous biological agents. U.S. law implementing the treaty provides criminal penalties for the development or possession of "any biological agent" for use as a weapon; "biological agent" is defined to include any microorganism capable of causing "deterioration of food, water, equipment, supplies, or material of any kind; or deleterious alteration of the environment." There is no exemption for use in law enforcement.

The Chemical Weapons Convention and the Geneva Protocol. - The development of "non-lethal" chemical weapons, such as sedatives delivered in aerosols absorbed through the skin or supercaustics that corrode roads and tires (and inevitably also clothing, shoes, skin, and flesh), threatens to violate the Chemical Weapons Convention. The convention is expected to come into force next year. It prohibits the development, production, or retention for prohibited purposes of toxic chemicals, defined as "any chemical which through its chemical action on life processes can cause death, temporary incapacitation or permanent harm to humans or animals." The definition would include substances such as caustics and other harmful chemicals not usually classified as poisons. The convention permits the production of toxic chemicals if they are used for peaceful purposes, as in agriculture; protective purposes (against toxic chemicals); "military purposes not connected with the use of chemical weapons and not dependent on the use of the toxic properties of chemicals as a method of warfare"; and "law enforcement including domestic riot control purposes." The third of these permissible purposes might be construed to include chemicals such as supercaustics, on the grounds that "life processes" are not the intended target, provided that use of the chemicals as weapons would entail little contact with living things. For some weapons, this would be difficult to establish. The Geneva Protocol of 1925 prohibits "the use in war of asphyxiating, poisonous or other gases, and of all analogous liquid materials or devices." The "or other" appears to broaden the prohibition beyond asphyxiating or toxic substances.

Thus, the use in warfare of harmful chemicals such as supercaustics and sticky foams (which, in addition to forming a kind of "roach motel" for people, could act as asphyxiating agents) may be illegal, and the use of metal embrittlement agents, superlubricants, chemicals that interfere with fuel combustion, and so forth could also be questioned. Under the CWC, the fourth permissible purpose for developing chemical weapons agents - law enforcement, including domestic riot control - is the major reason currently offered for developing non-lethal weapons. This permissible purpose, however, contains an ambiguity in urgent need of clarification. "Law enforcement" is not defined in the treaty. Does it include anything more than riot control? If so, what? And what law? In contrast, the convention defines "riot control agents" narrowly and prohibits their use in warfare. Regarding law enforcement, it excludes the use of "Schedule 1" chemicals (one of several categories of chemical weapon agents) but says no more. This implies that for any law enforcement purposes other than domestic riot control, any non-Schedule 1 chemical may be developed, produced, acquired, stockpiled, or transferred as a weapon. Furthermore, although riot control agents must be declared, the treaty says nothing about declaring other agents that might be developed or held for law enforcement.

In the report containing the final text of the CWC, several of the national delegations to the negotiating body pointed out the problems raised by the undefined term "law enforcement" as a permissible purpose (1). One delegation stated that "this might give rise to far-fetched interpretations of what the negotiators intended." Indeed, the three delegations that commented on this issue interpreted this permissible purpose in widely different ways: as limited to domestic actions, as applicable outside national boundaries, or as including only domestic and U.N. peacekeeping activities. Unless the CWC Preparatory Commission takes steps to define more closely and limit this wild card, it could subvert much of the intent of the convention and render its elaborate verification mechanism futile. For domestic riot control, the development of chemical agents is clearly permissible under the CWC, although their use in warfare is prohibited. Riot control agents are defined in the CWC as chemicals "which can produce rapidly in humans sensory irritation or disabling physical effects which disappear within a short time following termination of exposure." This might be true of certain sedatives, depending on the dose; it is certainly not true of corrosive chemicals or immobilizing glues. No agent that causes a deleterious effect not automatically reversible can be considered acceptable and humane for use in domestic riot control; it would be unethical to subject innocent bystanders, children, or hostages to severe psychological stress, possible permanent injury, or death. The development of chemical weapons in the guise of domestic riot control agents must not be allowed as a means of circumventing the CWC. The treaty states that every chemical held for domestic riot control purposes must be declared; the CWC Preparatory Commission needs to specify that these chemicals must fit the convention's humanitarian definition of a riot control agent.

The Certain Conventional Weapons Convention (also known as the Inhumane Weapons Convention). [2] Many of the non-lethal weapons under consideration utilize infrasound or electromagnetic energy (including lasers, microwave or radio-frequency radiation, or visible light pulsed at brain-wave frequency) for their effects. These weapons are said to cause temporary or permanent blinding, interference with mental processes, modification of behavior and emotional response, seizures, severe pain, dizziness, nausea and diarrhea, or disruption of internal organ functions in various other ways. In addition, the use of high-power microwaves to melt down electronic systems would incidentally cook every person in the vicinity. Typically, the biological effects of these weapons depend on a number of variables that, theoretically, could be tuned to control the severity of the effects. However, the precision of control is questionable. The use of such weapons for law enforcement might constitute severe bodily punishment without due process. In warfare, the use of these weapons in a non-lethal mode would be analogous to the use of riot control agents in the Vietnam War, a practice now outlawed by the CWC. Regardless of the level of injury inflicted, the use of many non-lethal weapons is likely to violate international humanitarian law on the basis of superfluous suffering and/or indiscriminate effects. [3] In addition, under the Certain Conventional Weapons Convention, international discussions are now under way that may lead to the development of specific new protocols covering electromagnetic weapons; a report is expected sometime next year. The current surge of interest in electromagnetic and similar technologies makes the adoption of a protocol explicitly outlawing the use of these dehumanizing weapons an urgent matter. -Barbara Hatch Rosenberg

End Notes:
1. Conference on Disarmament, Report of the Ad Hoc Committee on Chemical Weapons to the Conference on Disarmament, Aug. 26, 1992, Nos. 22, 25, 34 (CD/1170).
2. The full name of this treaty is "Convention on Prohibition or Restriction of the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects."
3. Louise Doswald-Beck, ed., Blinding Weapons: Reports of the Meetings of Experts Convened by the International Committee of the Red Cross on Battlefield Laser Weapons, 1989-1991 (Geneva: International Committee of the Red Cross, 1993).

Wednesday, May 16, 2012

Cyberware Technology Project by Taryn East

So what is cyberware, really? - Most dictionaries don't contain a definition for cyberware. This is unsurprising in this relatively new and unknown field. In science fiction circles, however, it is commonly known to mean the hardware or machine parts implanted in the human body and acting as an interface between our central nervous system and the computers or machinery connected to it. More formally: Cyberware is technology that attempts to create a working interface between machines/computers and the human nervous system, including (but not limited to) the brain. Examples of potential cyberware cover a wide range, but current research tends to approach the field from one of two different angles: Interfaces or Prosthetics.

Interfaces - The first variety attempts to connect directly with the brain. The data-jack mentioned earlier is probably the most well known, having heavily featured in works of fiction (even in mainstream productions such as "Johnny Mnemonic"). Unfortunately, it is currently the most difficult object to implement, but it is also the most important in terms of interfacing directly with the mind. For those of us who aren't science fiction fans, the data-jack is the envisioned I/O port for the brain. Its job is to translate our thoughts into something meaningful to a computer, and to translate something from our computer into meaningful thoughts for us. Once perfected, it would allow direct communication between your computer and your mind. Large university laboratories conduct most of the experiments done in the area of direct neural interfaces. For ethical reasons, the tests are usually performed on animals or slices of brain tissue from donor brains. The mainstream research currently focuses on electrical impulse monitoring, recording and translating the many different electrical signals that the brain transmits. A number of companies are working on what is essentially a "hands-free" mouse or keyboard [Lusted, 1996]. This technology uses these brain signals to control computer functions. The more intense research, concerning full in-brain interfaces, is being studied, but is in its infancy. Few can afford the huge cost of such enterprises and those who can find the work slow-going and very far from the ultimate goals. Current research has reached the level where hundreds of tiny electrodes are etched out of silicon, to be inserted into a nerve cluster. Unfortunately, research has not progressed beyond experiments on live tissue cultures.

Prosthetics - The second variety of cyberware consists of a more modern form of the rather old field of prosthetics. Modern prostheses attempt to deliver a natural functionality and appearance. In the sub-field where prosthetics and cyberware cross over, experiments have been done where microprocessors, capable of controlling the movements of an artificial limb, are attached to the severed nerve-endings of the patient. The patient is then taught how to operate the prosthetic, trying to learn how to move it as though it were a natural limb [Lusted, 1996]. Crossing over between prostheses and interfaces are those pieces of equipment attempting to replace lost senses. A great success in this field is the cochlear implant. A tiny device inserted into the inner ear, it replaces the lost functionality of damaged, or merely missing, hair cells (the cells that, when stimulated, create the sensation of sound). This device comes firmly under the field of prosthetics, but experiments are also being performed to tap into the brain itself. Coupled with a speech-processor, this could be a direct link to the speech centres of the brain [Branwyn, 1993].

Why make cyberware? - But why should we do this? Is it to be relegated amongst the techno-dreams of robot house-cleaning slaves or is it actually a relevant, practical technology? What is the use of developing the technology as a whole? Roderick Carder-Russell has expressed my own feelings in this paragraph from his webpage of collected cyberware links: “As we grow in age, we also evolve our person. We grow mentally, changing and adapting to new situations, gaining experience, becoming more intelligent and wise. Physically we grow in size, strength and if we consciously make an effort gain more talent, more precise control of our bodies. But there is a limit. The human mind has a processing threshold, the body can be pushed only so far before it fails. As we utilize new developments in aging research and begin to live longer, would we not want to also push back the limits on our minds and bodies?” Currently almost all research is aimed at the disabled. Most research seems to be in the field of prosthetics or neurophysiology. The advances happening now tend to be the prosthetic interfaces, the new sensory replacements, or brain-signal controlled computer cursors. However, in the future, the technology can benefit anyone. The main areas I see are education, entertainment, communications and transportable technology. Technology for these areas currently holds a large sway over the general populace (read: big market share) and advances in them are always heartily welcomed, both by consumers and by producers.

Problems and difficulties: - I, personally, am all for our continued research in this area as I feel it will add so much to our understanding of ourselves. Though I am 100% behind it, I feel we must consider the possible consequences of bringing this technology into being. New technology always brings change and cyberware is likely to be no exception. We are likely to face replacements in the workplace, bankrupt businesses that are unable to cope with changes, a new elite, not to mention a new generation gap between the current generation and those who will have had the technology from birth. A big danger is the very real possibility of abuse. The criminal element has always been very effective at taking full advantage of whatever technology is available – and can often be extremely inventive. This is not a reason to not make the technology, but we will have to be on guard for a new breed of crime. There may be any number of new physiological problems relating to the equipment, not to mention the psychological shock of being in so different a situation. What happens when mind and machine become one? We begin to find important the old questions of what it means to be human.

How much flesh can be replaced before we are no longer a human with machine parts but a machine with human parts? If we are to move forward, we must be fully aware of what it is we are doing to ourselves, and what could come of that. One of the big difficulties with producing cyberware is the inherent complexity of the neural system. The individual human brain is incredibly complex. There are billions of neurons, so connecting up to enough of them without a truckload of wires poking out the back of the skull is quite a problem to surmount. In addition, even if we do connect up one brain perfectly – every person’s brain is different! The individual differences could make it very difficult to adapt the hardware from one person to another. The problems being faced in the lab haven’t even got that far yet. The difficulty still lies in how to connect up a small number of neurons without causing excess damage. Some significant breakthroughs have been occurring and a number of prosthetic advancements have worked successfully – but the risk is high. No one wants to risk their currently perfectly working neurons to possible harm for an uncertain reward. The process still needs a lot of refinement before the general populace would be likely to be interested.

Where are we going? - The range of technology that could eventually become available is near infinite. I believe we will be able to see better prosthetics for limbs and senses – possibly breakthroughs in reconstructing some of the more complex senses (an artificial eye that works as well as the cochlear implant does would be nice). I do not think we will see any of a quality that would make it worthwhile replacing functional systems, yet, but I’m sure it will happen eventually. Enhancements are probably going to be more popular – silent communications or enhanced hearing, for example, or enhancements to muscles to allow you to move faster or lift heavier things. How about an internal air filter to allow you to filter out pollution without having to wear a gas mask! So, do we have a time frame? I’ll answer that with a quote. “Berger believes [his] team is about five years away from designing a brain implant for animals and about 10 to 15 years away from the first device for humans. With custom microchip designs taking weeks or months and other technical hurdles at every turn, it's certainly not a project for anyone with less than the patience of Buddha and the persistence of someone who sells insurance. That suits Berger just fine. Nobody ever said that building an electronic brain would be easy, and it's clear that he's just as infatuated with the process as with the thought of changing people's minds. "You build it neuron by neuron and chip by chip," he says. "You enjoy each experiment and piece of the puzzle while keeping a focus on the bigger picture." Berger won't rest until he has built a bionic brain. He definitely wants to get inside your head.” [Greengard]

Interfaces Signal monitoring and computer control - At present, the study of brain signals seems to be the widest area of research in this field. This probably has a lot to do with the fact that the machines for study already exist out there, and people have been studying brain signals for some time – this is merely an extension of current areas of research. The research that I classify in this field uses machines to externally measure the natural signals of the brain. These signals are then run through a computer that attempts to interpret their meaning and act on them in a pre-programmed manner. It is possible to learn how to voluntarily control certain patterns of brainwaves. In doing so, we can provide a change that is strong enough to be detected by an EEG. The resulting change can then be used to signal a computer. This technique has been used in the laboratory to control the movements of a mouse cursor on screen. One problem with this method is that it can sometimes be difficult to train a person to use the technique effectively. Controlling brain waves involves a modification of thinking patterns – for example either doing hard puzzles in the head to produce one type of wave or calmly thinking of very little to produce another. The big problem is that there are only really a few different ways to think (while remaining awake) that are easy enough to learn. Often there aren’t enough input types to adequately control even a simple computer function (such as a mouse cursor).
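The band-power idea above can be sketched in a few lines. This is a toy illustration only, not any lab's actual system: the sampling rate, band edges, and the `cursor_command` mapping are all assumptions, and a real system would use proper spectral estimation on filtered, multi-channel EEG rather than a naive DFT over a synthetic signal.

```python
import math

def band_power(samples, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a naive discrete Fourier transform."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def cursor_command(samples, fs=128):
    """Map relative alpha (8-12 Hz) vs beta (16-24 Hz) power to a cursor command."""
    alpha = band_power(samples, fs, 8, 12)
    beta = band_power(samples, fs, 16, 24)
    return "up" if alpha > beta else "down"

# Synthetic one-second "EEG": a relaxed subject shows a strong 10 Hz alpha
# rhythm; concentrating on a hard puzzle shifts power toward faster waves.
fs = 128
relaxed = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
busy = [math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]
print(cursor_command(relaxed, fs))  # alpha dominates
print(cursor_command(busy, fs))     # beta dominates
```

Note how coarse the channel is: with only a couple of reliably distinguishable "ways to think", the vocabulary of commands stays tiny, which is exactly the limitation described above.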

A popular device for research seems to be the visual keyboard. These use evoked potentials to allow the disabled to communicate with the outside world. Some people have been so severely disabled that they cannot move most of their body – or cannot control their muscles enough to make the fine movements necessary to communicate. The visual keyboard effectively bypasses the muscles entirely, instead using the brain’s natural reaction to a stimulus that it is expecting. The basic technique is to present the subject with the picture of a keyboard or a specialised group of words/commands. The person concentrates on the letter or word he or she wishes to convey. The computer then highlights each row of letters in turn. When the row containing the required letter is highlighted, the brain of the person will create a spike of activity (the evoked potential). The computer records this and assumes that the letter required is in that row. The columns are then highlighted and the person continues to concentrate on the required letter – eventually the computer also finds the correct column and so the letter is pinpointed. As you can imagine, this technique is very slow – a word often takes around a minute to spell out – not much good if quick help is required, but a marked increase in ability for someone with no other method of communicating at all.
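The row-then-column scanning logic described above is easy to express directly. Everything here is a stand-in: the letter grid is invented, and `evoked_response` simply fakes the spike of activity that a real system would have to detect in noisy EEG.

```python
# A hypothetical on-screen keyboard laid out as a grid of rows.
GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ0123"]

def evoked_response(target, kind, index):
    """Stand-in for the EEG: returns a spike (1.0) when the highlighted
    row or column contains the letter the user is concentrating on,
    and background noise (0.1) otherwise."""
    for r, row in enumerate(GRID):
        if target in row:
            target_row, target_col = r, row.index(target)
    if kind == "row":
        return 1.0 if index == target_row else 0.1
    return 1.0 if index == target_col else 0.1

def spell_letter(target):
    """Highlight each row in turn and pick the one with the biggest
    response, then do the same for columns; the intersection is the letter."""
    row_scores = [evoked_response(target, "row", r) for r in range(len(GRID))]
    col_scores = [evoked_response(target, "col", c) for c in range(len(GRID[0]))]
    r = row_scores.index(max(row_scores))
    c = col_scores.index(max(col_scores))
    return GRID[r][c]

print(spell_letter("K"))  # K
```

The slowness follows directly from the structure: every letter costs a full pass over all rows plus all columns, and in practice each highlight must be repeated several times to average out noise.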

Another possibility has been created by BioControl Systems Inc (Palo Alto) and is called the Biomuse. This is a device that measures EMG in the muscles to determine if any muscle movements have been made. This device can be used to control cursor movements on a screen. It is quite simple to program a computer to interpret the movements of, say, muscles on the face into signals that can control a mouse. There are only 6 signals that need to be programmed: up/down, left/right, and left/right-click. Coupled with a mouse-controlled keyboard onscreen, this has (see Lusted et al) allowed paraplegics to operate a computer - a task that would otherwise be completely out of their reach. Another working example of the above technique is the EOG MIDI device for paralysed people by BioControl Systems Inc. [Tonneson et al]. This device allows the physically disabled to create music through the movement of muscles that are not damaged. One example given in the article was a device that attached to the facial muscles of a person. It monitored the EOG signals and interpreted certain movements as certain sounds. With a bit of practice, people can write their own music with it.
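A minimal sketch of the six-signal mapping just described. The channel names, the threshold, and the `interpret_emg` function are invented for illustration and bear no relation to the Biomuse's actual interface; a real device would first rectify and smooth the raw EMG before thresholding.

```python
def interpret_emg(levels, threshold=0.5):
    """Map six hypothetical facial-muscle EMG channels to mouse actions.
    `levels` is a dict of channel name -> rectified signal amplitude."""
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    dx = dy = 0
    for name, (mx, my) in moves.items():
        if levels.get(name, 0.0) > threshold:
            dx += mx
            dy += my
    clicks = [btn for btn in ("left_click", "right_click")
              if levels.get(btn, 0.0) > threshold]
    return (dx, dy), clicks

# Raising the brow ("up") while tensing the right cheek ("right") and
# blinking hard ("left_click") moves the cursor diagonally and clicks.
print(interpret_emg({"up": 0.9, "right": 0.7, "left_click": 0.8}))
```

Six binary channels are enough for full mouse control, which is why this approach reached working products so much sooner than direct brain interfaces did.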

David Cole’s montage amplifier is a very bizarre example of electrical signal cyberware. It has been designed to be a primitive thought-transference device. It has been tested by volunteers and seems capable of transmitting some basic sensory information in a general manner. The science of it seems sensible enough though possibly a little dangerous. Firstly, it records the EEG pattern of one subject. An EEG is a pattern of electrical potentials and thus also magnetic fields. The signal is recorded and sent to the helmet on the second subject. This other helmet amplifies the signal and electrically charges the helmet so that the magnetic fields cause patterns that emulate the same electrical pattern in the brain of the second person. Supposedly this will overlay the EEG of one person onto the second person. Surprisingly, it seems to work reasonably well when the transmitting person is given simple stimuli – for example when a bright spot of light is in a certain area of the visual field, the receiver often perceives a phosphene in the same general area. This study was performed by some of the back yard neurohackers mentioned in [Branwyn].

There is no denying that EEGs and similar machines are very expensive. This is why this kind of research is usually only carried out by universities, hospitals and other large research institutions. To combat this problem, two smaller EEG machines, Aquathought's Mindset and Psychic Labs IBVA [Branwyn], have now come on the market. These are both much cheaper than the huge hospital machines. Even though they have fewer electrodes, they are quite usable for studying the brain at home and come with software that allows you to experiment with your own brainwave patterns. These two are much more accessible to the average researcher and so make it easier for more people to study this field. So, how useful are electrical signals? A lot has been achieved with these very simple methods, but I believe they are merely a stopgap that will eventually be surpassed by the more intensive internal methods in the next section. They are so prolific because the technology is here now, and they are an extension of what we’ve already been doing. This is a very powerful motivator for research as it means that it costs less money per research project, even if the potential gains are not as good. It has been very important, however, for our understanding of the brain and how it signals. We are still, after all, going to have to learn these signals to be able to communicate. I just don’t believe that anything external to the skull has anywhere near enough power to actually produce complex communication with the human brain. After all, we’ve been trying to understand the brain via its EEG signals for decades. We’ve made some very useful discoveries regarding the overall working of the brain and how this can be interpreted – especially in the cases where something is obviously going wrong. It isn’t so useful, however, for fine discrimination of thoughts. Since this is the big goal of cyberware, I believe we don’t have a huge amount to gain from studies using external machines.
We have to get inside the brain!

Direct Neural interfaces - Direct neural interfaces are the big goal of cyberware - the ability to directly talk to the brain through the interfaces that we have created for it. Despite this, little research has been done in this vital area and even less is published. My guess is that monetary constraints tend to stop a good many of these projects getting through – they are a form of very long range research that has only just recently been showing any solid benefits (mainly in the field of prosthetics). The lack of published material is likely due to the newness of the field. Anything that people have found out, they are still working on and probably don’t want to give their colleagues the vital link that will allow them to discover it first. They also might not be willing to put down their vague theories when they are only just beginning to form them, and are still working out the bugs. All in all, there is a distinct lack of any solid research material in this field. There have been a number of Internet sites and a few papers available, but most of them say pretty much the same thing. Usually along the lines of: “We are working on a “brain chip” that will hopefully provide an interface between the brain and a computer.” They sometimes give a very bare description of how this might be achieved (usually the array-of-spikes electrode set that I’ll describe below) and what benefits this may incur. I will try to expand on the scraps of information that seeped through with what I can foresee happening.

I think the most information I was able to find in one place was from a Dr Fromherz working at the Max Planck Institute in Germany. His team has been working in this field for years. His article describes the search for a safer, less damaging and more long-term method of interacting with a nerve through the use of silicon electrodes. This has been successful in the lab with very large single nerve cells (from a leech), but was more difficult with a smaller rat nerve cell (closer to our own size). He says that the technology is there and feasible now for such small pursuits, but it is a completely different matter when it comes to a living brain. Like most researchers in this field he has had to start with the most basic method of interfacing and is trying to build up from there. The technique he is starting with involves creating an array of very fine electrodes (what I tend to call the array-of-spikes as it resembles a tiny bed-of-nails) laser cut and acid-etched from a wafer of silicon (such as is used in microchips). This array can be implanted into a nerve bundle so that a large number of neurons can be accessed simultaneously. This technique seems very crude but should be recognised merely as an early attempt, a small beginning and an important first stage. This research has been very useful, as it has helped us understand the nerves enough to be able to start working on its use in prosthetic interfaces.

Another potential use for these interfaces is as a cure for brain damage. Many people suffer currently irreparable damage due to strokes, accidents and tumours, not to mention the degenerative brain diseases such as Alzheimer’s disease. Dr. Ted Berger is working on a brain implant that will address these types of problems. His approach is similar, at the moment, to Dr Fromherz’s – the study of donated brain tissue and the attempt to interface with individual neurons, in the hope of understanding the simple case before working up to the more complex problem of the whole brain. While Fromherz’s research leans more to the hardware side - practical means of interfacing with individual nerves - Dr Berger is attempting to study the brain’s inputs and outputs enough to be able to create a working substitute for sections of lost brain tissue. His team is currently developing a microchip that attempts to mimic the activity of the hippocampus. This chip would take the information in short-term memory, repackage it, and move it into long-term memory. The trick to learn is how the brain repackages the information so that the chip can emulate this function. Once working, this chip would help innumerable people with their memory problems [Fleischer]. As mentioned earlier, these interfaces are the most important research in terms of the long-term goals of cyberware technology. They are also the farthest away from practical implementation. Luckily, each step we take in other aspects of cyberware research helps us toward this Holy Grail.

Other weird interfaces - A very closely related field of research is that of Virtual Reality (VR). VR also seeks to communicate with the body about what it is doing and to return meaningful sense information back to it. The difference is that VR is an entirely external medium. However, some research overlaps with areas that cyberware research currently occupies. One example of this is the VR suit [Tonneson et al]. In these suits, EMG biosensors are used to determine the position of muscles and their state of relaxation. This conveys the body’s position to the computer and we can achieve a better representation of it in the virtual environment. The VR helmet could also be equipped with EOG sensors [Macauley] to make an accurate determination of the direction of the user’s gaze. Research into electrical signaling helps both fields, as we better understand the language of the peripheral nervous system and how to communicate with our muscles and sensory organs. An interesting technology that is likely to greatly benefit cyberware was created only recently. TG Zimmerman has created a device known as a Personal Area Network. This is based on the electrical field of the human body and allows transmission of information via human contact. This device promises to incorporate our wide range of personal information and communications devices into a network that can exchange data with one another. It also has the ability to exchange data with another person's network - the example given and tested is the exchange of business cards by simply shaking the other person's hand. This technology has great potential for integrating cyberware devices with one another, and with those of other people. One of the major problems that cyberware has to contend with is the huge amount of wiring that would have to drape through the body from implant to implant.
If we could do away with all of that and merely have ‘sending’ and ‘receiving’ units attached to each implant everything would be a lot less messy. Network communications is certainly advanced enough to cope with the small networks we would likely start out with.

Wearable computers are also in the same realm as cyberware. The interfaces are external and utilise our existing sensory apparatus, but many of the principles and problems are similar. From wearable computing technology we can learn about how to make computers smaller and more man-portable. We strip the computers down to what we need for everyday living and learn how they can be more fully incorporated into our activities. As these two technologies develop in tandem, we may start to incorporate a mixture of the two. Some objects may still be too bulky to fashion into appropriate implants, or not used often enough to warrant permanent attachment. These will be carried externally, but remain able to interface with our internal cyberware. I believe that all these related fields of research will develop in parallel to cyberware, the fields cross-fertilising each other with ideas, new developments spurring new ideas in each field and lifting each other to greater heights. Such mixing should definitely be encouraged.

Prosthetics Limbs - Most current research in prosthetics is focused on the manipulation of data, or signal processing. A patient must learn how to control the muscle-movements required to activate the limb's movement. For example, a knee response drives the servo-electric motors of a prosthetic knee. A computer must be programmed to record and translate the signals it receives and interpret them correctly. Cyberware-related prosthetic research still seems very far away to the average mechatronics engineer. At the moment, they are trying to deal more with the practicalities of driving prostheses off muscular movements – maximising output from a limited number of inputs. The difference cyberware can bring to prostheses is that the interface is completely internal. It melds with the remnants of the body’s own neural pathways and thus movement becomes completely natural. Current prostheses tend to rely on learning a set of muscular ‘commands’ to give to the limb in question, but cyberware allows the use of the same signals as with any other working limb.

Building the internal interface - Fromherz illustrates a number of interesting possibilities. His research centres on building tiny arrays of even tinier silicon electrodes. This little array can be inserted onto the end of a severed nerve bundle and the signals sent to the bundle will be received and recorded. These signals can then be used to drive a prosthetic limb as though the brain is sending signals to the original limb. Another method involves a small sheet of metal, perforated with microscopic holes – each electrically separated and with wires leading out. The nerve is severed and then placed on either side of the sheet – each nerve grows through a hole to find a partner on the other side. In the case of both of the above methods, the signals received at the array would need to be recorded as the patient attempts different types of movements. Both the patient and the computer must learn how to interact – the patient learning what they need to do to move in a certain way, the computer learning what movement to make given a certain signal. The logical next step from the nerve-ending arrays is technology that directly taps the brain. Kalcher et al. are working on a piece of hardware that uses an EEG to record signals in the precentral gyrus while a person moves their finger or foot. The studies went on to record what happens when the person merely thought about the movement. With a better method of connecting with a large section of brain (see Making the Connection), we could access the body's own map of itself stored in the precentral gyrus. Once linked to the prosthesis, the person would not be able to tell the difference between telling a normal limb to move and telling the artificial limb to make an identical movement - the signal would be identical. What's more, signals can move through wire faster and with greater reliability than through a reflex arc, so the message would get through quicker and clearer.
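The mutual learning described above (the computer learning which attempted movement a recorded signal means) can be illustrated with the simplest possible classifier: a nearest-centroid model over recorded electrode vectors. The three-element vectors and movement labels below are made up for the sketch; real nerve recordings would be high-dimensional and far noisier.

```python
import math

def train(recordings):
    """recordings: list of (signal_vector, movement_label) pairs gathered
    while the patient attempts each movement. Stores one mean vector
    (centroid) of the electrode readings per movement."""
    sums, counts = {}, {}
    for vec, label in recordings:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    """Drive the prosthesis with whichever trained movement is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], vec))

# Hypothetical 3-electrode recordings taken while the patient attempts
# to grip and to release.
training = [([0.9, 0.1, 0.0], "grip"), ([0.8, 0.2, 0.1], "grip"),
            ([0.1, 0.9, 0.2], "release"), ([0.0, 0.8, 0.1], "release")]
model = train(training)
print(classify(model, [0.85, 0.15, 0.05]))  # grip
```

The patient's half of the learning is the mirror image: by watching the limb respond, they adjust what they attempt until their signals fall reliably near one centroid or another.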

Sensory replacements - Senses are one of the big, new things in prosthetics. Never before have we been able to create a machine so sophisticated as to attempt to match the capabilities of our own bodies. The attempts so far aren’t completely perfect, but the ability to mimic even a part of our senses is an amazing achievement. The complexity of the sensory systems is what has perplexed us for so long. The actual sense organs operate in a very simple way – light is focused on a large array of tiny receptors, sound is vibrated along a tube of receptors and so on. The complexity comes in as a problem of signal processing. Each receptor in the eyes, ears, etc. is very simple on its own – it records whether a certain type of stimulus is or is not present. It’s when you put the receptors all together and try to build an overall picture of the stimulus coming in that the problem becomes very complex. The pattern matching that the human brain does is far beyond the capacity of any human-made machine, at least in the same time frame as the brain does it in. To fully replace a lost sensory organ, we must be able to match the complexity and size of the system present normally. This increases dramatically in difficulty, depending on how much of the sensory system is missing.

There are a number of new technologies that are brilliant at enhancing a system that doesn’t function at top-level. For example, the current hearing aid amplifies sound quite well, and glasses refocus light on a retina that is slightly the wrong distance from the focal point of the eye. Current breakthrough technology is focusing on the next level, where the receptors in the system are missing or damaged, but the underlying nerve structure remains intact. The best example of this is the cochlear implant. These devices are made for people who are missing the tiny hair cells found in the inner ear. Hair cells are the receptors for the ear, recording important differences in sound depending on where they are placed along the spiral-shaped cochlea. The device is implanted into the cochlea and tiny electrodes are inserted along its length. These electrodes send signals into the underlying nerves, taking over the job of the missing hair cells. “Although current versions of these devices may not match the fidelity of normal ears, they have proven very useful. Dr. Terry Hambrecht, a chief researcher in neural prosthetics, reports in the Annual Review of Biophysics and Bioengineering (1979) that implanted patients had "significantly higher scores on tests of lip-reading and recognition of environmental sounds, as well as increased intelligibility of some of the subjects' speech." [Branwyn]
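The tonotopic idea behind the cochlear implant (each electrode standing in for the hair cells at one position along the cochlea) can be sketched as a logarithmic filterbank. The electrode count and frequency range here are rough assumptions for illustration, not the specification of any actual device, and a real processor splits the incoming sound into these bands continuously rather than handling one frequency at a time.

```python
def electrode_bands(n_electrodes=22, lo=200.0, hi=8000.0):
    """Divide a speech-relevant frequency range into logarithmic bands,
    mimicking the tonotopic layout of hair cells along the cochlea
    (one band per electrode)."""
    ratio = (hi / lo) ** (1.0 / n_electrodes)
    return [(lo * ratio ** i, lo * ratio ** (i + 1)) for i in range(n_electrodes)]

def electrode_for(freq, bands):
    """Return the index of the electrode whose band covers `freq`,
    i.e. which point along the cochlea should be stimulated."""
    for i, (b_lo, b_hi) in enumerate(bands):
        if b_lo <= freq < b_hi:
            return i
    return None  # outside the implant's range

bands = electrode_bands()
print(electrode_for(250.0, bands))   # a low-numbered (apical) electrode
print(electrode_for(6000.0, bands))  # a high-numbered (basal) electrode
```

The logarithmic spacing matters: like the cochlea itself, the device devotes roughly equal electrode "real estate" to each octave rather than to each hertz.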

Research is now focusing on creating a similar device to replace missing receptors in the eyes. This would be accomplished with something very similar to the cochlear implant, requiring an array that can be fitted into the eyeball and attached to the retina. The next level up is to help people who have lost their sensory organs but whose brain is still capable of understanding the sensory information. This is especially the case in people who, for example, were able to see at birth but lost their eyes through misfortune. The techniques of seeing aren’t lost, merely the organ to see with. If we can successfully replace the eyeball itself (or whatever organ was lost), and then successfully attach to and communicate with the undamaged brain area that controls that sense, we would be able to restore the lost sense. This is where cyberware really comes into its own. Richard A. Normann, of the University of Utah, has been studying how phosphenes can help blind people to read Braille faster. Often a blind person has a problem with their eyes, but the visual cortex is still perfectly normal. He has been developing an electrode array that can be implanted onto the visual cortex to create phosphenes in the perceived field of vision. The array creates the sensation of spots of bright light even though the person cannot really ‘see’ anything. The array is programmed to create the patterns of spots that correspond to the Braille alphabet and can then ‘display’ them in the person’s perceived field of vision. Text can thus be ‘read’ more quickly in this manner than through the usual method of reading Braille with the fingers. [Thomas]
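The Braille-by-phosphene scheme can be sketched as a simple mapping. The dot layout and letters below follow standard Braille numbering (dots 1-3 down the left column, 4-6 down the right), but the rendering is purely illustrative of how such an array might be programmed:

```python
BRAILLE = {      # dot sets for a few letters (standard Braille)
    'a': {1},
    'b': {1, 2},
    'c': {1, 4},
}

def phosphene_cell(letter):
    """Render a Braille cell as a 3x2 grid: '*' = phosphene lit, '.' = dark."""
    dots = BRAILLE[letter]
    rows = []
    for r in range(3):
        left = '*' if (r + 1) in dots else '.'   # dots 1-3, left column
        right = '*' if (r + 4) in dots else '.'  # dots 4-6, right column
        rows.append(left + right)
    return rows

print(phosphene_cell('b'))   # ['*.', '*.', '..']
```

Each '*' would correspond to one electrode firing, producing a spot of light at a known location in the perceived visual field.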

The National Institutes of Health in the USA is currently working on a 38-electrode array to be placed into the visual mapping areas of the cortex. This device creates phosphenes in the visual field that give a rough ‘pixelated’ view of the world. This isn’t enough information to truly see by, but it can help an otherwise completely blind person to orientate themselves and avoid some objects. It is also an interesting first step. Obviously, if the electrodes can be made fine enough, the array could contain many more of them and thus give better and better precision to this artificial sight.

Sensory replacement is not limited to sight and sound. With studies in the somato-senses (the senses of touch, temperature and vibration), those who have lost a limb may soon be able to regain some of their lost sensation. Funnily enough, this type of research tends to leak over into the field of robotics. It is very useful for a robot to be able to sense how much pressure an artificial limb is putting on an object it is holding – to tell if the object is going to slip out of grasp or if it will be crushed under too much pressure. Prosthetic limbs would also be greatly enhanced by this capability. Somato-sensory replacements deal with the attempt to recreate the sensation of touch. This is one of the most difficult senses to recreate, as our sense of touch is really made up of many different types of sensations: mainly pressure, vibration, temperature and pain. These are all very important when it comes to fully understanding our environment through our skin. One of the big difficulties in this area is that we can indeed create sensors for each of these different aspects, but making them small enough to cram them all into the volume usually assigned to skin is very difficult.
The scaling problem is still being worked on, and the sensors are gradually being reduced in size. Eventually they will be small enough to fit a fair number of them onto prosthetic devices, and then we will return to the necessity of determining how to communicate with the body’s existing neural pathways in a meaningful way.
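The slip-or-crush judgement mentioned above – the kind our skin makes constantly – can be sketched as a toy fusion rule. All of the sensor names and thresholds here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SkinSite:
    """One patch of artificial skin must multiplex several sensor types."""
    pressure_kpa: float
    vibration_hz: float
    temperature_c: float

def classify_grip(site):
    """A toy fusion rule: is a held object slipping, being crushed, or stable?"""
    if site.vibration_hz > 50:        # micro-vibrations suggest slip
        return "slipping"
    if site.pressure_kpa > 300:       # assumed crush threshold
        return "crushing"
    return "stable"

print(classify_grip(SkinSite(pressure_kpa=80, vibration_hz=120, temperature_c=24)))
# high vibration at low pressure -> "slipping"
```

Even this crude rule needs three sensor types at every site, which is exactly the packing problem: real skin fits all of them, plus pain and temperature gradients, into every square millimetre.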

Where could we go? What is possible? - Certainly most of the important current research deals with aiding the disabled. The prosthetic devices mentioned in the previous section will open up new vistas for those who have had to make do with lesser-quality replacements of lost limbs or senses. But eventually we will move on from using this technology only to replace missing or damaged human parts. At that point, we will start to fully utilise the potential that cyberware has to offer. As I have mentioned earlier, the main avenues I see cyberware flourishing in are: education, entertainment, communication and transportable technology. However, cyberware can also be split into the groups of people who are likely to be using it. One of the major subdivisions will be for military or law enforcement use. I am bringing this up first, as it tends to be the sort of thing talked about in fiction. Most cyberpunk literature tends to focus on the criminal element battling the forces of law and order – usually with a great deal of cybernetic enhancement on both sides. One of the big options is an enhanced limb capable of lifting greater weights or with faster reflexes. This could either be a replacement of the previous limb or an added extra limb (weird looking, but it could have its uses). One of the interesting possibilities posited in fiction is known as wired reflexes. Our reflexes are merely pathways built up out of connected neurons in the spinal cord – these have either been given to us at birth or slowly learned through long hours of practising over and over. Wired reflexes would allow us to create our own reflex arcs, hardwiring into place the sequence of events we would like to occur. Electricity travels through a wire faster than neurons can generally signal, which brings the added benefit that the reflexes can be sped up considerably.
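Fiction aside, the wired-reflex idea can be pictured loosely in software terms: a precompiled stimulus-to-response table that skips slow deliberation entirely. This is only an analogy of my own, not a claim about how neural wiring works – the table and function names are invented:

```python
import time

REFLEX_ARC = {                       # a hardwired stimulus -> response table
    "hot_surface": "withdraw_hand",
    "incoming_object": "duck",
}

def deliberate(stimulus):
    """Slow path: stands in for conscious processing before acting."""
    time.sleep(0.001)                # token delay representing deliberation
    return REFLEX_ARC.get(stimulus, "assess")

def wired(stimulus):
    """Fast path: the precompiled arc fires directly, no deliberation."""
    return REFLEX_ARC.get(stimulus, "assess")

print(wired("hot_surface"))          # -> withdraw_hand
```

The trade-off is the same one the fiction imagines: the wired path is faster precisely because it is inflexible – anything not in the table still falls back to slow assessment.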

Other physical augmentations in the literature tend to concentrate around hidden armour and weaponry. Guns or knives could be hidden inside arms or body cavities and triggered by a learned reflex or a wired command from a brain-connected control module. Armour could be hidden under the skin or interlaced around bones for greater protection while remaining hidden from view. This hidden armoury would be very useful for anyone who needs to remain undercover (for example undercover policemen). They would be able to carry their own weapons and armour without raising any suspicion in the criminal group they are infiltrating, and would have the element of surprise in case they are attacked. Then there are enhanced senses, to better see or hear "the enemy". Replacing the sense organ with a better one would achieve this, but it will not be possible for a long while yet, as it will be some time before we can even match our current sensory organs’ abilities. Extra senses could be incorporated to enhance our perceptive abilities. For example, we could add a great deal to our ability to see. Imagine the ability to see in UV or IR, to have light amplification added to our own eyeballs! We could tap into the nerve bundle for the appropriate sense organ and record from and/or add to the inputs to our brain. Why would we do this? Well, a recording from the human eye might be very useful – think of the possibility of keeping a man's-eye account of what happened at the scene of a crime – did the policeman have full view of the criminal as he was pulling his hand out of his jacket? In addition, if you can tap into the stream of sensory information fast enough, you might be able to work on the data and find things otherwise easy to miss. For example, if it were necessary to follow one person in a crowd, the enhancements might be able to filter out some of the other movements and make it easier to keep an eye on the target.

Another popular device in fiction is the subvocal device. This could utilise EMG in the facial muscles to make actual voicing of commands unnecessary. It could easily be coupled with a receiving device placed on the tiny bones of the ear so that instructions can be received without anyone else hearing them, though a tiny earplug speaker would do the trick in most circumstances. This could be extremely useful for anyone who wants to talk silently. The examples in literature often involve covert operatives and clandestine communications, allowing them to remain stealthily silent while sneaking through the enemy base. It could equally well be attached to a mobile phone so that anybody can talk with perfect clarity even in a noisy environment – or even in movie theatres, where people much prefer you to remain completely silent. It would also benefit people who do their business over the telephone but wish to keep their conversations confidential – nobody else could hear what you were saying. Another big subdivision of users would be the scientists using cyberware to study the human body. One of the most interesting potential uses is the study of human psychology and brain physiology. Tapping directly into the brain, we will be better able to read what it is doing and thus try to understand why. Better understanding leads to better help, though we must avoid using it for better manipulation. This will be very useful for understanding how to help people with psychological problems and disabilities. For example, some deaf or blind people have no problems with the sensory organs; the problem lies in the brain. With exploratory cyberware, we could access their brain patterns and compare them to a ‘normal’ brain, to find out why they are the way they are. This could lead to new ways of helping these people, not to mention a better understanding of how we process data normally.

So what would be the Joe Average upgrades? - For the average human, there would be numerous additions for better leisure activities, but these would also aid in better education and communications. As far as leisure goes, entertainment today seems to demand an ever more total sensory input. If we could tap into the sensory pathways to insert information into them, we could make the fantastic worlds that we create seem next to real. We would be operating through the very senses of the person receiving the information, so it would be nearly indistinguishable from "Real Life". This would be THE step in entertainment. Who wants the illusion of something when they can experience it for themselves, all from the comfort and safety of their own brain? Imagine – go skydiving, rock climbing, space walking – without any of the dangers involved. Some will consider it better than the real thing – after all, it will be perfect: no problems, no hassles, no rain, no biting insects, no setting up beforehand or cleaning up afterwards – just the thrill of the activity itself. But these implants don’t have to be used exclusively for entertainment; they are also very useful in education – what better way to learn than to have actually been there, or at least to have sensed it in a way your brain would see as “being there”? See, hear, touch and feel the different things that you are learning and you will learn much more effectively. As a learning aid, cyberware would be unparalleled. For example, a person with dyslexia has difficulty learning because reading can be a big problem – the words are difficult to read, so they don’t go in well. With a cyberware implant, learning could be restructured in a way that does not require reading. However, it’s not confined to learning disabilities. Cyberware can open up huge vistas of education – want to see what’s going on inside a nuclear reactor? Want to see what lives deep under the sea?

You could go anywhere and look and touch without worrying about cost (millions of people could share the same experience), and without worrying about the dangers of the real thing. The brain could also be enhanced to include more memory or easily accessible databases of information. If we can truly tap into the brain, we will be able to store abilities or skills and transfer them from one person to another by merely ‘uploading’ the information. An example could be learning to dance – a person could record the ‘output’ signal from their limbs as they dance. This signal could be given to the person wishing to learn the dance, who plugs it into their own cyberware, which sends the signals to their own muscles. This counts as practice – the person will eventually learn the dance and be able to perform it without the use of the recording. Which brings me to the idea of the matrix. VR in a cyber-enhanced world would be unparalleled. You would actually 'be there' as far as your brain was concerned. Do you want to visit your family, attend a conference, hug your absent partner? The illusion could be made very close to reality. The Internet, as we know it, would become a thing of the past.
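The record-and-replay idea behind the dance example can be sketched as a simple data flow. Real motor signals would be vastly richer than a list of named moves; the function names here are my own:

```python
def record_performance(moves):
    """The teacher's cyberware logs a timestamped sequence of motor commands."""
    return [(t, move) for t, move in enumerate(moves)]

def replay(recording, send_to_muscles):
    """The learner's implant replays the log into their own muscles --
    counting as practice until the pattern is learned unaided."""
    for _, move in recording:
        send_to_muscles(move)

performed = []
tape = record_performance(["step-left", "turn", "step-right"])
replay(tape, performed.append)
print(performed)   # ['step-left', 'turn', 'step-right']
```

The interesting (and unsolved) part is hidden inside `send_to_muscles`: translating one person's recorded signals into commands meaningful to another person's nervous system.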
Making the connection - How would the cyberware get wired into place? I've read a few different books that present ideas on this subject, and there are a couple of different paradigms. The simplest, and currently cheapest, of the available methods is surgery. Dr Fromherz, at the Max Planck Institute in Germany, has been researching the possibilities of arrays of electrodes that can be attached to a nerve ending or to a thin slice of brain tissue, where each electrode can receive and transmit a signal. Presumably this array would be surgically put into the required position and tested for the required outputs. This is the array-of-spikes method I have mentioned earlier. A similar method involves the creation of a very fine mesh. The nerve bundle is cut, the ends are placed on either side of the mesh and the nerves are encouraged to grow through it. Each nerve that passes through a hole can be individually accessed. The problems with these methods are many. First, they involve cutting nerves completely, a worrying prospect for most human beings who don't wish to lose functionality (but possibly not as bad for those who find that this is their only available choice). The worry is always there that the nerves will not grow back properly! The second problem is that a simple array or layer of mesh doesn't give a huge amount of complexity - only a single layer of neurons can be accessed. This is fine when interfacing with a single nerve bundle (e.g. one that operates a muscle), but cannot access complex three-dimensional structures. The third problem is that any complex structure (for example any part of the brain) would be out of the question, as the process may result in irrevocable loss of the structure of the nerve bundle. This would also apply to attachment to some sensory nerve bundles (e.g. the optic nerves).
If these were ever severed, even if all the nerves reattached to the bundle on the other side of the mesh, they might not reattach to the same ones and the resultant signal would be irrecoverably jumbled. You would have to learn how to see again almost from scratch!
Despite the remarkable work Fromherz has been able to achieve, I believe that these methods will prove to be too time-intensive as well as very invasive. The scale he works at is incredibly tiny compared to the number of nerves in the average human brain, and his work has a very useful niche in the prosthetic department, allowing severed nerve endings to be retapped and transmit once more. However, I do not believe that it could ever be complex enough to allow full communication between man and machine. Luckily, Fromherz is aware of this. He has outlined the great difference between these small-scale projects and the amount of effort that would be needed a) to get enough connections in the brain and b) to work out a way that the brain and computer could meaningfully communicate [Fromherz]. A really interesting theory is that given by Wu for the Shadowrun TM game system [Wu, 1992]. This consists of bio-engineered microorganisms that are attracted to a rare form of glucose. The organisms are injected with a tiny amount of the required conductive material and then introduced into the body. They collect where this glucose is located (manipulated magnetically by the surgeon) and die from genetically engineered suicide genes. The conductive material is then left behind, and the cell walls of the organisms are stripped away, leaving a smooth coating of the stuff. This procedure (if it could be created and perfected) would be just as time-consuming as open surgery but much less invasive. It would probably end up being more expensive, but it could possibly be completed in a number of successive small sessions instead of a single long one. This theory seems to borrow a bit from PET scanning techniques, where radioactive liquid is taken in and accumulates in the areas of the brain that are active. Perhaps this sort of scanning could also be used to aid in the manipulation of such microbes: "OK, think about moving your hand … good, now keep thinking about it…"

There are still a number of possible problems with this sort of procedure (for example, would the coating be left behind smoothly, or would something else be needed to fuse the little pieces together?). Not to mention the fact that we are nowhere near this level of genetic engineering technology yet! But it is still a very interesting and different approach to the problem. The most common solution (proffered in many works of futuristic fiction, including the Cyberpunk game system [Cyberpunk]) is nanotechnology (or nanotech). Nanotech consists of tiny (molecular-level) machines that can perform any number of preprogrammed tasks, all controlled by a nano-scale computer (resembling a Babbage engine in construction, but smaller than a pinhead). These machines would be inserted into the required area and perform the very tiny, delicate manipulations required without damaging the nerves. This would be possible because nanomachines are small enough to move atoms individually. It would seem, at first, that it would take forever for nanomachines to build up enough circuitry to make anything worthwhile, but in fact each nanocomputer could have a hierarchy of hundreds of thousands of these machines under its command. The computers, in turn, could number in the thousands, and the whole system would be able to create all the delicate connections and interconnections necessary in only a few hours.
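A back-of-envelope calculation – every figure below is an assumption of mine, chosen only to match the "thousands of computers" and "hundreds of thousands of machines" in the text – shows why the hierarchy makes "a few hours" arithmetically plausible:

```python
nanocomputers = 5_000             # controllers ('thousands')
machines_per_computer = 200_000   # workers per controller ('hundreds of thousands')
connections_needed = 1e12         # assumed size of the wiring job
rate_per_machine = 0.1            # connections per second per machine, assumed

total_rate = nanocomputers * machines_per_computer * rate_per_machine
hours = connections_needed / total_rate / 3600
print(f"{hours:.1f} hours")       # massive parallelism makes the job tractable
```

Each individual machine is glacially slow (one connection every ten seconds), yet a billion of them working in parallel finish a trillion connections in under three hours. The whole argument rests on the parallelism, not on any single machine's speed.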

I believe that it would indeed be possible to do the job if we had a high level of nanotech. In fact, it looks, at present, to be the method with the most potential for solving the complexity problems. Not only could nanomachines create the delicate structures necessary, but, being this small, they could crawl between the neurons of the brain without disturbing them, to place connections anywhere you choose. Other possibilities allow for nanotech to monitor connections even after the initial insertion. There is a vast range of possible applications for nanotech, but they do not fall within the scope of this project. So what is the problem with using nanotech? Well, firstly, the science doesn't exist yet. Depending on whom you ask, it is still only at the theoretical level or outright fiction. In other words, there are a few people willing to work on it, but it is generally believed to be too far into the world of science fiction to be considered seriously. This is despite the fact that some very influential and respected scientists of our time (for example Richard Feynman) have felt that it is a field within our grasp.

In conclusion, brute open surgery would be effective for the simple stuff and for large pieces of cyberware such as prosthetic attachments - it's what they're doing now. It's the complex, tiny stuff - the in-brain surgery - that's difficult, that needs tinier scalpels than we're used to seeing even in the best microsurgery. Are the microbes the way to go? Though a quaint idea, I don't think they're really applicable. You'd still need lots of research into tailored genes and into how to smoothly apply the required substances. So, do we need nanotech? If we can get the technology, the job would be much easier. It would also speed up our understanding of the brain, as nanomachines would be able to access and monitor individual neurons in action. Together, these put forward a powerful case for trying to get nanotech up and running as a future source of cyberware surgery.

Impact on us - Let’s say cyberware is here; how would our life be different? It really depends on how far cyberware can truly be taken. If we never move beyond the small-scale stuff, where all we can really access is a small group of neurons, not many people would be likely to take the plunge. Perhaps it would become fashionable for the technology-conscious to have a head-linked cyber-phone or a digital watch that shines on the retina (so you can see the time no matter where you are), but it probably wouldn’t be that far-reaching. We’d see some odd new peripherals and a few gadgets that might eventually become commonplace, but all in all it probably wouldn’t be that intense. If, however, we managed to achieve the ultimate goal – a direct link with the brain, a way to think at our computers – we could have so much more. Our World Wide Web is already a great part of our everyday lives. With the ability to tap into our computers directly, we would be able to greatly enhance our ability to operate on this network. One of the most intriguing possibilities is the ability to truly work online – in an office that only exists on the web, the employees could all work from home, accessing it through their implants. Yes, this sort of thing happens now (without the implants, of course), but many workplaces require a higher level of face-to-face interaction that just isn’t the same through a videoconference or a text interface. This technology could provide the link necessary for us all to be able to work from home without having to leave. If nothing else, there’d be a lot less road rage from traffic in the mornings!

Not everyone would have access to the technology. It is liable to be very expensive for a long time. This means that only those in high-paying jobs would have access to it in large amounts, but smaller things would come through, and technology often filters through society eventually. One of the big areas of current society that would be affected is the entertainment industry. Who wants to watch a movie on a flat screen when they could be right inside it? Possibly even playing out a character in their favourite story. Much game software is likely to be written for those who want to play a fully interactive game. Even passive entertainment takes on a new level when it incorporates all of the senses – smell the scent of the sea, feel a waterfall run through your hands, feel the heat of the sun as a famous battle plays out before you. Leading on from this would be education. A better education is gained through experience than from being told. A fully immersive environment could be used to great benefit in the teaching of many skills. Also, teaching in this manner would allow more students to be reached at once – especially in outlying areas – students would be able to interact with one another naturally even if they were unable to be physically present together. I believe that, like most radical technology changes, it will greatly enhance our standard of living. While, as usual, the lives of the wealthy will be enriched more than the lives of us ordinary men and women, we will still see great benefits. Even if the technology is not directly in our grasp, the people working in high-tech fields such as medicine and computers will have access to it. If they are better able to create what they create best, then the effects will filter down through all levels of society.

The issues - This section covers ethics, problems with creating cyberware, problems that cyberware might create, and other scattered issues. I will present a number of what I believe to be important considerations in the undertaking of this field of research. I will posit a number of questions and describe what I mean by each but, in most circumstances, I won’t answer them. One of the major points about ethics is that most of the questions are under debate. The answers to these questions are unclear – people tend to have widely differing opinions – so I present them for consideration so that you may form your own opinion on the topic.

Experimentation in a new field - As with all new fields, there is a question that pops up time and again. Is it ethical to experiment on animals (or even on humans)? Is the potential reward great enough? Obviously research will start on donated tissue before progressing to animals – but how much can we learn from such techniques? Medical research is notorious for deciding that human lives are worth more than anything else, but cyberware is only partly a medical field. Certainly it can be argued that its development will benefit the disabled and help us understand the human mind, but is this enough to justify it all? It’s a question that I’m glad I don’t have to answer right now. This sort of thing will be left to the individual medical boards approached by the research teams. One of the things I did come across, however, was a reference to the neurohackers. These are people who are perfectly willing to make themselves into guinea pigs to further this technology themselves. While the big medical boards are arguing over whether to allow certain experiments, these people will be bypassing all of that and experimenting on each other. I do believe, however, that the tissue research is a dead end. Yes, it has helped Fromherz in the early stages of determining how to make the individual electrodes that he has been working on, but when dealing with cyberware, we are trying to learn how to deal with vast arrays of living, functioning brain. We can’t do that with donated tissue; it has to be still attached to a functioning individual – eventually it must be human. Yes, there are people willing to sign themselves away just to be the first to try this stuff, but that sort of technology would be the end result of years of animal testing. Animals seem to be an almost unavoidable middle step along the road to any sort of medical technology.

Weaponry - Do we need better/smarter weapons? Can we make cyberware weaponless (this seems an impossible task)? How about crime – will this be yet another step toward social dissolution? Will we just be putting better weapons in the hands of those prepared to use them against us? Should the possibility of new weapons stop us? I think not. Anything can be turned into a weapon if used correctly – we shouldn’t cancel all research just because it might possibly be used thus. There are going to be those who develop this technology anyway, so we might as well study it so that at least we have it too. At least it isn't the sort of technology whose exclusive province is the field of warfare. I believe that the benefits are likely to outweigh the dangers. We must also consider that even if we did decide to ban all research in this area, someone would be able to take advantage of our lack of this particular technology. If we study it, we’ll be able to devise ways of beating it in the case of criminal use or use in war.

Examples of current use - At the Alternative Control Technology Laboratory at the Wright-Patterson Air Force Base in Dayton, Ohio, researchers are investigating ways of controlling flight simulators using EEG signals. Pilots are being trained to use their brainwaves as an extra channel in the control of aircraft. One of the main difficulties found was that training the pilots to control their own brainwaves was more trouble than it was worth. [Thomas] I would also think that under stress (such as in battle) it would be next to impossible to control your thoughts in that way. You would need to keep your wits about you, and that would be much more difficult while trying to change what you are thinking about from one moment to the next.

Examples of potential future use - The US Army has developed wearable computing and communications devices such as head-mounted displays, cameras and personal communicators to receive and transmit information on the battlefield. Moreover, the army has already tested these "augmented soldiers" in the field, to good effect [Thomas]. The future of this technology pushes further into the realm of cyberware and is limited only by what we can imagine. Presently, the science of wearable computers helps soldiers with what they might need in the field – communication, tracking, mapping, augmented sight (UV/IR/light amplification etc.), navigation, targeting and so on. Anything that can be miniaturised and carried around is potentially useful. One of the most important questions is: does any of this need to be wetwired? If you have a gadget that can be carried around and then packed away, it is quite often better than a cyber-enhancement that cannot be taken out. An object should only be wired in if it is going to be useful all the time. Otherwise, you may as well just make it a wearable extra. That way it can be replaced if broken or outmoded – or transferred to another person – all with much greater ease than if it were effectively soldered to the person. The technologies that would be really useful to hardwire into place are the ones that provide an interface to other modules. For example, the Smart Link interface is a Shadowrun idea – a general interface that can shine targeting crosshairs onto the retina of your eye. This enhances a person's ability to shoot straight, as it uses a more natural form of sight and doesn’t hinder the normal field of view. Weapons could then be fitted with the other half of the interface, including sensors telling the interface where the gun is pointing. Another benefit is that when a new weapon came out, the person would not need to learn a new way of shooting.
They could use previously learned skills, as the interface would not have changed. Hidden weaponry could be useful, but not for general soldiers. By hidden I mean that it looks like the person is not carrying a weapon at all – even when searched. A holdout pistol grafted into a body cavity, or blades set into the arm so that they spring out through the wrists, are examples often quoted in fictional literature. Obviously these aren’t useful in general, but only for people who require a cover – covert operatives, undercover police officers and so on. This is, however, the very sort of weaponry most likely to be used by the criminal element.

Jobs - Who will be replaced this time, and who will be disadvantaged? This seems to be the question that always comes up in regard to any form of ‘progress’. So many people have been ousted by computing technology and machinery. This shouldn’t stop us from creating the technology, but it should be prepared for. I do not see it happening much at first, though, as cyberware does not directly take over any of our current job fields. I believe that most fields will probably upgrade their systems to cope with this new technology rather than throwing people out on their ears. This, however, leads us to another important question. Will it be yet another thing that struggling businesses must fork out for to survive amongst competition? I believe that most businesses won’t be affected very much, but certainly the information-rich businesses will find that cyberware is necessary to survive amongst competitors. The first businesses to take advantage of all that this technology has to offer will have a decided advantage over their peers.

Haves vs. have-nots - Do we need yet another elite? The usual age gap will not be the only divide. Discrimination based on augmented abilities may lead to more of the poorer classes being unable to get the best jobs – and thus staying poor. Early on, while the technology is experimental, the opposite may be true – only those who are very poor will consent to the most dangerous of the early improvements – though there are plenty of volunteers just itching to be the first. The age gap will be widened by this type of technology. It is so different from anything else that it will not be accepted by many people. There will be those who need to learn the new technology merely to keep their jobs. Those who can adapt will be fine, but those who cannot, or will not, will be disadvantaged.

Physiological problems/benefits
Difficulties - There are a number of difficulties imposed by the physiology of the human body. The major one is that the brain is an immensely complex structure, one that we still do not fully understand. Although researchers are perfectly capable of communicating with a few neurons (for example Fromherz’s arrays of electrodes), there are literally billions of neurons in the brain, and we do not yet have the capability of building the huge systems required to communicate with even a large percentage of them. This illustrates the near impossibility of using external stimulus techniques to understand brain function. An EEG can pick up the general electrical activity of the brain, but it lacks the acuity required to see individual neural function, and I do not think we will ever get it fine enough to form a useful picture of what the brain is doing. I believe there is very little chance of forming fully integrated communication with the computer using this method. It is, however, an acceptable compromise for those who cannot communicate in the normal way (i.e. paralysis victims for whom this is the only method of communication). The next problem is one of scaling. Theoretically we could create a billion electrodes and try to access each neuron. In practice we wouldn’t need that many – we don’t need to talk to every neuron, just a few major structures – so say we create only a few million electrodes. Each one may be quite small, but a million tiny objects can add up to some very large circuitry. There isn’t much room in the brain cavity for anything beyond the brain itself (the room that is left also helps cushion blows to the head in case of accidents), so a huge, clunky piece of hardware with maybe hundreds of wires just won’t fit in the skull. As Greengard puts it: “We’d need an implant the size of a pickup to emulate a brain function.”

The human brain is very complex, and every human brain is unique. Broad areas of the brain tend to correspond to broad categories of function, but anything beyond that starts to become speculation. Everyone develops slightly differently as they grow and have different experiences, so specific functions tend to fall into different areas. An implant can’t be made overly specific or it might only work for one person. Luckily, the human brain is very adaptable: if an implant has the general capability of fulfilling the required function, the neurons around it will adapt to utilise it better. The next problem is bandwidth. Although each operation of a current microchip is very fast, the human brain is massively parallel and performs a vast number of operations simultaneously, combining to a far higher overall throughput. If we were to try and talk to an implant made with today’s technology, each function would take too long to process for it to be truly feasible. Both of these problems could be overcome with better technology in the realm of parallel chips. Armand R. Tanguay Jr., director of the Center for Neural Engineering at USC, is studying one possibility: he hopes to remedy the scaling and bandwidth problems through the use of lasers and holography. Using light signals instead of wires, the chips can be stacked closer together, potentially allowing real-time response [Greengard].

Physiological damage - In any form of neurosurgery today, nerves are damaged, because neurons are so small that it is almost impossible to protect all of them. The main problem with this is that neurons don’t recover: once you kill a neuron it stays dead, and any resulting memory loss can be permanent. At the moment, damaging surgery is only performed on people where the alternative is far worse, or on neurons that have no other purpose (for example the severed nerve endings of an amputee). For cyberware to gain any popularity with the average human being, however, surgeons will have to be able to ensure that the operation is not going to be so damaging. The problem is that a spiked electrode has to penetrate the neuron, damaging the cell membrane (and sometimes neighbouring cells). This often reduces the life span of the neuron and, in some cases, kills it outright. This could be remedied by placing tiny metal plates in the proximity of the neuron (in much the same way as a synapse), but these are currently more difficult to make. "One of the main things frustrating this research is finding (or developing) materials that are not toxic to the organism and that will not be degraded by the organism." [Branwyn] Silicone breast implants and IUDs, for example, have led to some serious physical problems and even deaths. The human body has formidable defenses against invading hardware, and it would be very sad to see someone fighting for their life after a purely recreational implant breaks. Throughout medical science we are used to the possibility that the body may reject foreign objects. This is especially a problem when the object is important for maintaining life (for example a new heart valve). I do not know how difficult it is to get the body to accept implanted technology, but any problems could be critical, as these devices connect directly to the nervous system (a very delicate structure).
This would be a very important problem to research.

Nervous overload has been mentioned in many places as a possible problem. Johnny Mnemonic, for example, called it the black shakes – a nervous condition affecting a person who had put too much stress on their body through excessive use of cyberware implants. This is quite plausible as a problem: we already see the troubles involved in high-stress work (for example repetitive strain injury) and excessive use of technology (such as telephones). How much stress can our neurons take? We should study just how far our nerves are likely to take us before they tell us that it’s too much! I should also point out the danger of software problems. “For example, if we have software embedded in our brains, how do we ensure its quality and reliability? What happens when there is a new hardware upgrade or a new software release? What if somebody discovers a software bug or a design error? Even a Hollywood script writer would be hard-pressed to picture the consequences.” [Thomas] Cyberware sits very close to something irreplaceable – our brain. We need to be very careful about getting the software as bug-free as possible.

We’ll need failsafe systems and an ability to turn them off manually, if required, without damage to them or us. Another physical danger comes with wiring throughout the body. There needs to be some way for implants to communicate. If an implant is near the site it is hardwired to, this shouldn’t be a big problem. If the implant is separated from some of its hardware, however, wires might have to trail through the body. This could cause any number of problems: rubbing against muscles and bones, chemical problems if the wires degrade, other problems if a wire breaks, and the messy fact that you’d have scars along the entire length of the wire’s path. The best solution would be wireless communication. An example of a possible solution is the Personal Area Network (or PAN), created by Dr Zimmerman. This technology utilises the electrical field of the body to send and receive electrical signals. It would make the implants slightly larger (needing transmitters and receivers as well as the usual circuitry) but would reduce the wiring to next to nothing. Although this research is very preliminary and there are still many intimidating technical and biological hurdles (on-board signal processing, radio transmittability, learning how to translate neuronal communications [Branwyn]), the long-term future of this technology is exciting. As you can see, there are a number of problems that we can already see coming. These can be anticipated and possibly even solved before we have to deal with any repercussions, but once the technology is out there, what new problems will we find that never existed before? We can’t think of everything involved, and there has to be some risk in the process. That is fine for those who have nothing to lose – those who have already lost their sight or their limbs can only benefit. Most of us would have to think twice before diving in.
Who wants to risk their sight or limbs on an unproven new enhancement when they are perfectly healthy to begin with? I have heard rumours of people so desperate to be a part of this that they are willing to sign away all liability just to be the first to have it done. That, however, doesn’t necessarily make it legal or ethical.

Physiological benefits - Well, the obvious benefits come from the main purpose of cyberware – to augment or replace bodily functions. Lost limbs, senses and brain functions could eventually be replaced by implants that replicate normal behaviour as closely as possible, and enhancements could increase our abilities in many interesting and helpful ways. But there are other benefits besides the obvious. I discovered a paper describing how prostheses ease phantom pain [Katz Pictures, 1999] by giving the old nerves something to operate. I find this an interesting possibility and hope that many more previously unknown benefits will follow.

Psychological problems/benefits
Psychological damage - Cyberware can affect people’s minds as well. Here are some possible pitfalls that we should try to head off before they become problems. The world today is very stressful: we live our lives as fast-paced as we can handle, and many people crumble under the pressure. Cyberware has the potential both to add to that stress and to take away from it. Many technological improvements are promoted as reducing workload, and thus stress, allowing a person more leisure time. Unfortunately, most people use such technology to fit more activity into their time without reducing their stress levels, and cyberware will likely run into the same problem. Even now we overtax ourselves by never being able to get away from it all – how much worse will it get when your mobile telephone and your office computer are located in your head? Stress is already a serious problem among the over-worked, and cyberware will give these harried workaholics access to more information, faster, pumping it directly into their minds.

We will have to be prepared to help those who obsessively take more and more onto their plate until they cannot handle anything at all. Luckily, not everyone is susceptible to this sort of thing – only those already likely to try to take on the world in one go. Cyberware would be just one more weapon in their arsenal, but psychologists should be taught to keep an eye out for it. There are also some problems that have little to do with the actual computational side of cyberware. How strange would it be to communicate with a computer? It is hard enough communicating with human beings who have similar experiences, but a computer is completely alien. Typing on a keyboard is one thing; having your brain wired into one might be something entirely different. Most people will be perfectly capable of adapting (humans are remarkable in this respect), but there might be those who just cannot adjust. Consider the current-day problems of people learning computer interfaces. A well-known example is the older generation attempting to use an ATM for the first time. It is not difficult for most people to learn, but some simply cannot adapt to the new technology – which becomes quite distressing when ATMs are the only easy way of interfacing with the bank. Cyberware presents the same problem: a new technology that people will need to learn. If it begins to permeate every level of society, those who cannot adapt will have a hard time fitting in. This also leads into the section (above) on generation gaps and elitism.

Cyberpunk is a game set in the near future where this technology has been readily available for some time and is fully integrated into society. In the game, one of the more serious problems associated with cyberware is a psychological condition known as cyberpsychosis. This is what happens to a person who becomes enhanced to the point of being almost totally machine. It begins as a form of superiority complex in which people perceive themselves to be better than the average human because of their enhancements. This in itself is a big problem and needs to be worked through with psychological counselling. Cyberpsychosis, however, is the extreme case in which the cyborg finds humans so inferior that they should be removed from the world. Such a person often believes they represent the next evolutionary step of humanity and that the previous step should be eliminated or turned into slave labour. Obviously this is a little far-fetched and would not happen in large numbers of cases (as with most forms of psychosis), but anyone undergoing enhancement might need to be educated against the possibility of such superiority complexes. This section is full of speculation. We can guess at problems that might occur in a small number of people, but psychology is not so well understood that we can fully determine the range of problems that might arise. Nor can we predict how many people will have problems, only guess by extrapolating from previous technology changes.

Psychological benefits - Ok, so those are the problems; what benefits could there be? I see the main benefit in the arena of psychological research, which would become much easier when you are directly plugged in. We could plug into various areas of the brain and see directly how it deals with different stimuli and how thoughts are formed. There is a great deal to be learned from the study of the brain, and this technology (and its spin-offs) would greatly aid that research.

What it means to be human - This is not a new concept, but a question that has plagued philosophers for millennia. It is closely tied to the age-old question of where 'we' reside. Does the 'soul' have a seat, or is it ephemerally connected – only existing 'somewhere else'? Do we have a soul at all? Perhaps a better description would be 'consciousness'; I hope in this way to sidestep the nasty mystical arguments about the existence of a soul for the moment and concentrate on the former question. Many years ago there was a thought experiment on this topic that serves to describe the problem. Consider the average human being. If, by some accident or through necessary surgery, some part of that person had to be removed (say a limb), that person would be likely to continue to live. Let's suppose that you continue removing limbs; the person will continue to live (given adequate medical attention etc). Now let's start removing other parts. The theory goes that at whatever point the person dies, that part must be the seat of the soul. It is slightly flawed in that there are a number of organs without which the human body can't survive for very long. But bringing this experiment into the modern and future ages of medicine (where artificial organs would be usable), let us remove pieces but replace them with artificial parts. Most likely the human would survive almost any replacement, as long as the part could fully replicate the function of the organ. At which point can we say that we have found the soul? What if we even find a way of transferring the state of the brain into a sturdier, possibly even enhanced, artificial construct? Are you still the same person?

Masamune Shirow, in "Ghost in the Shell", has his heavily 'enhanced' characters converse on this topic, asking themselves whether they are human anymore. One of the characters goes so far as to suggest that the only reason she still felt human was that she was treated as one. The main theme of the book (and movie) is that we are becoming more like our machines at the same time as our machines are becoming more like us. The central question is: if it is possible for a machine to attain sentience (known in the book as a ghost), what would be so special about being human? Certainly programs might be created that seem to mimic humanity in every way – so are they human? If not, why not? I hear someone at the back shout ‘Because they are not alive’. So, can you prove to me that you are alive? I don’t think you can, not in any way that would rule out the possibility of sentience in a computer. The closest definition I have ever heard is from René Descartes: “I think therefore I am.” But could you not define thinking to include a computer program? So what defines humanity?

Appendix A: Neurophysiology primer - I thought for some while about whether or not to include this section. It is quite long and discusses many things not covered in the project above. I decided, in the end, to include it, as a full understanding of the brain helps in understanding the difficulties of creating cyberware. Above I argue for the need to determine how to communicate with the brain; below I discuss how single nerve cells communicate and what sorts of signals arise through that communication. I present this section not as something that must be known to understand the problems of cyberware, but to provoke further thought on how it could be used to understand and communicate with the most complex computers in the world – our own minds.

Structure of a neuron - A nerve cell consists of three main parts. The largest part is the main body of the cell, called the soma. This contains the nucleus and the structures that keep the cell alive. From the soma come many branching fibres called dendrites. The dendrites are lined with specialised junctions, called synapses, through which a neuron receives information from other neurons. Some dendrites also contain dendritic spines - small growths that seem to play a part in learning and memory. The third main part is the axon. This is a single fibre, thicker and longer than the dendrites. Mature cells have either one axon or none at all, but may have many dendrites. An axon often has many branches at the end farthest from the soma; in this way a neuron carries information to many cells, and also has many neurons connected to itself. The axon ends in small branching structures that attach to other nerve cells.

The action potential - At the simplest level of operation, it can be said that a cell receives its 'inputs' from the dendrites and sends its 'output' down the axon. This output is in the form of an electrochemical impulse: the exchange of charged chemical particles (ions) is used to send the message. The cell sends an electrical impulse along the axon by exchanging ions through the cell membrane. In general, the inside of the neuron is slightly negatively charged with respect to the outside of the cell. This is known as the resting potential. Applying a small positive current to a neuron will depolarise it for a short time (i.e. bring it closer to a neutral charge), after which it quickly returns to the resting potential. If the current is raised very slowly, we eventually reach a level called the threshold. Once past the threshold, gateways open up, allowing a massive, rapid flow of positive ions into the cell; this causes the potential to shoot up to a high positive level before dropping off again. This is known as an action potential. An action potential passes along an axon because the positive charge of an area slightly depolarises adjacent areas of the membrane, setting off an action potential there that depolarises the surrounding areas in turn. Thus the potential propagates down the length of the axon. Once an area has gone through an action potential, it becomes less permeable to the ions involved, which stops the action potential from doubling back or continuing forever. The time it takes an area to recover from an action potential is known as the refractory period, which sets a maximum on the firing frequency of the neuron. If the refractory period is short enough, several action potentials can move down an axon at the same time.
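The threshold, reset and refractory behaviour described above is often sketched in software as a 'leaky integrate-and-fire' model. The following is a deliberate simplification using invented, purely illustrative constants (it says nothing about real membrane chemistry): a steady input current either leaves the cell below threshold forever or drives it to fire at a regular rate.

```python
def simulate_lif(current, steps=200, dt=1.0,
                 v_rest=-70.0, v_threshold=-55.0,
                 v_reset=-70.0, leak=0.1, refractory_steps=5):
    """Leaky integrate-and-fire neuron (all constants illustrative).

    current: input applied at every step (arbitrary units).
    Returns the list of time steps at which the neuron fired.
    """
    v = v_rest
    refractory = 0
    spikes = []
    for t in range(steps):
        if refractory > 0:
            refractory -= 1          # during the refractory period the
            v = v_reset              # membrane cannot fire again
            continue
        # the leak pulls the potential back toward rest; input pushes it up
        v += -leak * (v - v_rest) + current * dt
        if v >= v_threshold:         # crossing the threshold triggers an
            spikes.append(t)         # all-or-nothing action potential
            v = v_reset
            refractory = refractory_steps
    return spikes

weak = simulate_lif(current=0.5)    # sub-threshold: never fires
strong = simulate_lif(current=3.0)  # supra-threshold: fires repeatedly
print(len(weak), len(strong))
```

Note how the refractory period, not the input strength, caps the firing rate: doubling `current` shortens the climb to threshold but cannot shrink the enforced pause after each spike.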

Some (but not all) axons are also sheathed in a substance known as myelin. Myelin covers the length of the axon except for small nodes about 1mm apart. Myelin prevents ions from moving through the membrane, but the nodes have many of the ion gates necessary for action potentials. When an action potential depolarises a node, the charge is strong enough to depolarise the next node, skipping all of the distance in between. In this way the action potential quickly jumps from node to node down the length of the axon, much faster than an action potential usually propagates. This jumping effect is known as saltatory conduction. Action potentials are all-or-nothing: their size is always the same, completely independent of the size of the stimulus that created them. This all-or-none law is where the analogies with computers were formed, seeming so similar to 0/1-only signals. The timing of the action potentials conveys the ‘message’ of a neuron, and there are several ways to do this. A one-time action potential may signal 'something is here'. Another neuron may fire constantly in its resting state, but change the frequency of its action potentials in the presence of (or at higher intensities of) a certain stimulus. Other neurons signal by sending action potentials in clusters instead of regularly spaced.
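Since every action potential is the same size, the 'message' lies entirely in timing. A trivial sketch of reading a rate code (the spike times are invented for illustration; real decoding is far subtler):

```python
def firing_rate(spike_times, window):
    """Spikes per second over a recording window (times in seconds)."""
    return len(spike_times) / window

# All spikes are identical in size, so only their timing carries
# information: here a stronger stimulus shows up purely as a higher rate.
weak_stimulus = [0.10, 0.35, 0.61, 0.88]               # 4 spikes in 1 s
strong_stimulus = [0.05, 0.14, 0.22, 0.31, 0.40, 0.49,
                   0.58, 0.67, 0.75, 0.84, 0.93]       # 11 spikes in 1 s
print(firing_rate(weak_stimulus, 1.0))    # 4.0 Hz
print(firing_rate(strong_stimulus, 1.0))  # 11.0 Hz
```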

The graded potential - So far I've only talked about how the output of a neuron travels. The inputs (from the dendrites, and from there through the soma) propagate in a different way, called a graded potential. As the name implies, the potential differs for different levels of stimulus, and the signal also degrades as it passes along the dendrite. Because of this, an axon attached to a synapse further from the soma will have less effect than one attached closer. The signals gather where the axon leaves the soma, and if the combined signal is stronger than the threshold value, an action potential starts and the neuron fires. If the cumulative electrical charge of the graded potential is below the threshold, the neuron will not fire. In addition, some nerves act to inhibit a neuron, i.e. they lower the cumulative result. This process of adding up the combined contributions from neurons is modeled (though greatly simplified) in neural network software.
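That summing process is exactly what artificial neural network units strip down to. A minimal sketch, with weights and threshold chosen arbitrarily for illustration: positive weights stand in for excitatory synapses, negative weights for inhibitory ones, and a larger weight for a synapse whose signal arrives with less attenuation.

```python
def unit_fires(inputs, weights, threshold):
    """A drastically simplified model neuron: fire if the weighted sum
    of synaptic inputs reaches the threshold.  Positive weights model
    excitatory synapses, negative weights inhibitory ones."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# three synapses: two excitatory, one inhibitory (all values illustrative)
weights = [0.8, 0.5, -0.6]
print(unit_fires([1, 1, 0], weights, threshold=1.0))  # excitation alone: fires
print(unit_fires([1, 1, 1], weights, threshold=1.0))  # inhibition suppresses it
```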

The synapse - But how does a neuron actually affect another neuron? On the ends of the axon's branches are synaptic knobs. These contain vesicles (small cellular containers) of transmitter substance, or neurotransmitter. When the neuron fires, the change in ion levels inside the cell membrane triggers the release of the transmitter substance into the synaptic cleft (the space between the synaptic knob and the dendrite it attaches to). The actual chemical effect of the neurotransmitter differs from neuron to neuron. There are many different types of neurotransmitter, and each affects different neurons in different ways; I won't go into the chemistry. The most important difference is that some synapses are inhibitory (i.e. they reduce the likelihood that the next neuron will fire) and some are excitatory (they make it more likely that the next neuron will fire). When adding up the inputs from various synapses, the excitatory synapses add to the total whereas the inhibitory ones subtract from it.

Structure of the nervous system - The nervous system is generally divided into three sections: the periphery, the spinal cord and the brain itself. The latter two make up the central nervous system (CNS).
The periphery - The periphery is made up of all the nerve endings and sensory nerves that are where information begins and ends in the body. If information is gathered here or this is the destination of a message to do something, it is likely to be a part of the periphery. The many sections of the periphery are complex and widely varied and beyond the scope of this primer.

The spinal cord - The spinal cord communicates between the periphery and the brain. Sensory nerves enter it, bringing information to the brain about our world. Motor nerves exit carrying messages to the muscles and organs informing them how to operate. In cross section, the central part of the spinal cord is vaguely H-shaped. The central H is known as grey matter and is composed mainly of un-myelinated inter-neurons, cell bodies and dendrites all tightly packed. The surrounding area is white matter that is composed mostly of myelinated axons (myelin is white). This is where information travels up to the brain. Each segment of spinal cord communicates with a particular section of the body and with the segments directly above and below it. An important part of the spinal cord is what is known as the reflex arc. Sensory nerves enter the spinal cord and synapse with small inter-neurons within. These synapse with more inter-neurons and with exiting motor neurons. It is here that reflexes are stored, creating a short loop direct from sensation to action completely bypassing the brain.

The brain - The brain itself consists of three major subdivisions: the hindbrain, the midbrain and the forebrain. I'll briefly go through the basic functions associated with each minor structure (the full picture is huge and deserves further reading if you are interested).

The hindbrain - The hindbrain (underneath and behind the larger parts of the brain) consists of the medulla, pons and cerebellum. The medulla, pons, midbrain and certain structures of the forebrain are also collectively known as the brain stem.

Medulla - Controls basic reflexes such as breathing and heart rate, so damage to this area is invariably fatal. Because of the types of functions the medulla controls, large doses of drugs that affect this area can also be extremely harmful; this is what happens when a person overdoses.

Pons - Latin for bridge, this structure serves as the gateway for sensory nerves that cross from the left side of the body to the right side of the brain (and vice versa). It also contains nuclei that act as centres for the integration of sensory information and often regulate motor output.

Cerebellum - This is best known for control of learned movements and classically conditioned responses (when you learn to ride a bike, the processes are stored here). All your programmed behaviour goes here, so when you drive your car 'on autopilot' this is the part you've put in charge.

The midbrain - The midbrain mainly contains the superior and inferior colliculi. These play an important part in the routing of sensory information. The superior colliculus is active in vision and visuo-motor coordination; the inferior colliculus deals with auditory information.

The forebrain - The major part of the forebrain is the cortex. This is the biggest section of the brain and is what most people think of when you say the word 'brain'. Hidden underneath lie other forebrain structures including the thalamus, the basal ganglia, and the limbic system. The cortex is quite complex and deserves its own subsection below.

Thalamus - This is the main source of sensory input to the cerebral cortex. It acts as a way station for sensation, but also does a significant amount of processing on that information before handing it on to the appropriate lobes of the cortex for final evaluation.

Basal ganglia - This contributes information to the cortex regarding movement, including speech and other complex behaviours.

Limbic system - This is a heavily linked set of structures controlling motivated and emotional behaviours (e.g. eating, drinking, sexual behaviour and aggression). Below is a partial list of the limbic system's main structures.

Olfactory bulb - This is where we process the sense of smell. Research has been carried out on the effect of smell on sexual behaviour through pheromones; it occurs in the animal kingdom, so we wonder whether it affects us too.

Hypothalamus - Deals with regulation of motivational behaviours. It also controls the pituitary, both through nerves and through hormones that it releases, and thereby regulates the body's hormones.

Pituitary gland - This is an endocrine gland (hormone producing) attached to the hypothalamus. It receives messages from the hypothalamus then releases hormones into the blood stream. It controls timing and amount of hormone secretion for the rest of the body.

Hippocampus - Plays a vital role in learning and memory.

The cerebral cortex - This is the biggest structure of the brain. It controls a wide variety of complex functions and behaviours, from sensation to action to personality. The brain is split in two down the middle, each half known as a hemisphere. Generally, the right hemisphere deals with sensations and motor instructions for the left side of the body and vice versa. The cortex is further split into four main lobes; each hemisphere contains its own copy of each lobe, usually dealing with that type of information from the opposite side of the body. There are notable exceptions: the language centres, for example, seem to be split into a section dealing with the words themselves and a section dealing mainly with the emotion conveyed by intonation.

The frontal lobe - This is generally thought to be the seat of personality. Movements are planned here and behaviours are modified. A large strip right at the back of the frontal lobe reaches around the head like a headband. This is known as the precentral gyrus, or primary motor cortex. It is the centre for control of fine movements of the body (for example precisely moving one's fingers). The precentral gyrus contains a detailed map of the body stretched over the surface of the lobe (see diagram below).

The parietal lobe - Located just behind the frontal lobe, this area specialises in body information; its expertise includes touch and the muscle and joint receptors. At the very front of this lobe is the postcentral gyrus. This sits directly behind the precentral gyrus and contains a very similar body map. The function of the postcentral gyrus, however, is to receive sensations from each of the body areas.

The occipital lobe - Located right at the back of the head, the main function of this lobe is to process the information from our eyes. It is the largest area devoted to a single sense, as we rely on vision more than any other. Interestingly, if we are damaged here we can no longer see, even if our eyes are still fully functional. A phenomenon that shows this is called blindsight. If you shine a spot of light on a dark wall, a person with blindsight cannot see it. But if you ask them to guess where they think it might be, they will point to it with disturbing accuracy. Obviously, information is still coming into the brain from the eyes; it just isn't being processed as a visual stimulus any more.

The temporal lobe - The temporal lobe is located around the sides of the brain, near the temples. It is the main centre for auditory information but seems to add something to recognition of complex visual images as well (such as recognition of faces). The temporal lobe also houses the two areas that deal with the understanding of language [Kalat].

Communicating with our brain - So, how do we communicate with our brain? What ways does our brain present us with information, and how can we tap into that for greater understanding of how we think? Through the years, scientists have tried many methods, from the downright barbaric to the more modern (but usually incredibly expensive). I'll go through the different methods, explaining what brain signals (or lack thereof) each method attempts to utilise, and listing actual devices that use each method.

Brain signals, Electrical signals - As explained in the previous section, the brain's signals travel via electro-chemical processes. Though no actual 'electricity' (as in a flow of free electrons) is involved, the movement of charged particles causes a similar effect. It has thus been possible to devise methods of measuring and manipulating the electric potentials that occur during brain activity. Electrical signals (as measured directly) can be used to directly control the movements of a cursor onscreen. Devices that detect and analyse the electrical potentials of the brain are: electrodes, EEG (electroencephalograph), MEG (magneto-encephalograph).

Evoked potentials - Any device that measures the electrical potential of the brain can use this technique; it is often used in conjunction with an EEG. When the brain is presented with a stimulus, it exhibits a response approximately 300 milliseconds afterward. If you present a subject with a stimulus, you can record their brain patterns using an EEG and pinpoint where in the brain the response occurred, and thus determine where that stimulus is dealt with. Evoked potentials can be used to determine whether a person is looking at a stimulus presented on a screen, and can therefore be used to create what has been dubbed the visual keyboard. Devices that detect and analyse evoked potentials of the brain are: EEG (electroencephalograph), MEG (magneto-encephalograph), EMG.

EMG - The brain is not the only source of electrical potentials. An electromyogram (EMG) is measured by placing electrodes over the muscles and reading the potentials created by their movements. If the recorded signals are processed by a computer, we can determine which sets of signals relate to certain movements of those muscles, and thus how the muscles (and corresponding limbs) have moved. These signals have obvious usefulness for the control of prosthetic devices, and also for determining the position of limbs for VR.
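An evoked response is tiny compared with the background EEG, so in practice it is usually extracted by averaging many time-locked trials; the random background cancels out while the stimulus-locked response survives. Here is a minimal sketch of that idea, with made-up numbers: an assumed 250 Hz sampling rate and an artificial response placed at 300 ms.

```python
import random

random.seed(1)
FS = 250                    # assumed sampling rate in Hz (illustrative)
EPOCH = FS                  # one second of samples after each stimulus

def simulated_epoch():
    """One second of noisy 'EEG', with an evoked deflection near 300 ms."""
    epoch = []
    for i in range(EPOCH):
        t_ms = i * 1000 / FS
        evoked = 5.0 if 280 <= t_ms <= 320 else 0.0   # the ~300 ms response
        epoch.append(evoked + random.gauss(0, 3.0))   # plus background noise
    return epoch

def average_epochs(n_trials):
    """Average time-locked epochs: noise cancels, the evoked potential remains."""
    total = [0.0] * EPOCH
    for _ in range(n_trials):
        for i, v in enumerate(simulated_epoch()):
            total[i] += v
    return [v / n_trials for v in total]

avg = average_epochs(200)
peak_ms = max(range(EPOCH), key=lambda i: avg[i]) * 1000 / FS
print(round(peak_ms))       # peak latency lands near 300 ms
```

A single epoch here is dominated by noise; only after a few hundred averages does the 300 ms peak become unambiguous, which is why evoked-potential experiments repeat the stimulus so many times.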

EOG - An electrooculogram is similar to an electromyogram, but the sensors are placed on the muscles around the eyes. This way we can determine the direction of a person's gaze. This technology could be used to address many medical problems (see Tonneson et al; this falls outside the scope of this project), but it also has potential benefits for the emerging VR technologies. It can also be used in a similar manner to the visual keyboard (and certainly much more effectively). A company called BioControl Systems Inc has done just that with a device they call the BioMuse.
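As a sketch of how EOG readings might drive something like the visual keyboard, here is a toy mapping from a pair of electrode voltages to a coarse gaze direction. The threshold and microvolt figures are illustrative, not calibrated values from any real device.

```python
def gaze_direction(h_uv, v_uv, threshold=50):
    """Map horizontal/vertical EOG readings (microvolts) to a coarse gaze
    direction. The cornea is positively charged relative to the retina, so
    the voltage across each electrode pair grows with eye rotation.
    (The threshold is an illustrative value, not a calibrated one.)"""
    h = "right" if h_uv > threshold else "left" if h_uv < -threshold else ""
    v = "up" if v_uv > threshold else "down" if v_uv < -threshold else ""
    return (v + " " + h).strip() or "centre"

print(gaze_direction(120, 10))    # right
print(gaze_direction(-5, -90))    # down
print(gaze_direction(0, 0))       # centre
```

A real system would need per-user calibration and drift compensation, but the control logic reduces to a classification of this kind.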

NMR - NMR stands for Nuclear Magnetic Resonance and is the same technique as used in MRI. Atoms have an inherent spin whose axis usually points in a random direction. When exposed to a magnetic field, the axes align. When the magnetic field is turned off, the atoms release a small amount of energy that can be measured. See MRI for a more detailed description of the technique.

CBF - CBF stands for Cerebral Blood Flow and is an ingenious method for determining brain activity. How does it work? The neurons of the brain, like any other cells in the body, require nutrients to work, so when they are active, blood must flow to them. If we can measure the rate of blood flow through a particular region of the brain, we can tell whether it is more active than surrounding areas. The great benefit of CBF is that you can tell which parts of the brain are active during a specific type of activity without having to directly touch the brain. Through CBF methods you can test a person doing various activities and plot the areas that dominate for certain types of related activities. CBF is used by: rCBF (regional Cerebral Blood Flow) and MRI (Magnetic Resonance Imaging).

rCBF - Regional Cerebral Blood Flow is a refinement of the CBF method. It measures blood flow in the brain by further exploiting the way cells live: brain cells require nutrients to operate, and they consume glucose when they are very active. If radioactively labelled glucose is injected into the bloodstream, we can detect where in the brain the radiation is strongest. rCBF is used by: PET (Positron Emission Tomography).

Detection methods, Physically invasive methods - The methods below involve direct physical manipulation of the brain and often involve a lot of guesswork. They can usually give us only a vague idea of what is going on in the brain; to be more specific, these techniques must be paired with more precise methods. However, they have been available to us for centuries and contributed all our early knowledge of brain function. The problem with these methods is that, due to the potential for irrecoverable nerve damage, they should ethically be used only if absolutely necessary. Backyard practitioners are rare, but are known to exist [Branwyn].

Lesions and ablations - The first method is available to anyone, though I wouldn't recommend it - you tend to run into legal problems... something to do with human rights! This is the general class of "lesions and ablations". Though you can't really communicate via this method, people have been using it for centuries to work out which areas of the brain control which functions of the body. Basically, this method involves cutting through axons (a lesion) or cutting out a section of the brain (an ablation) and watching the person's behaviour to see what they can no longer do. This is very damaging and very permanent, and the results are often ambiguous. For example, if a person can no longer recognise someone they once knew well - is it a problem with how they put the visual image together, or did they lose the part of the brain where the image of *this* person is stored?

One of the most famous applications of this method is the good old frontal lobotomy. This involved cutting the paths to the frontal lobe and was performed on people considered to be 'unmanageably insane'. Given that a lot of what we would consider personality is stored here, this operation usually turned the person into a walking vegetable. It may also be interesting to note that alcohol affects us by decreasing the activity of the frontal lobe, much like a temporary lobotomy. It has often been suggested that the effects of long-term alcoholism closely resemble the loss of personality and drive of the typical pre-frontal lobotomy patient.

However grotesque this method may have been in the past, be aware that it is still in use today, though in a slightly altered form. Whether or not there are secret laboratories of evil geniuses performing ablations on unwilling victims I don't know. I do know, however, that there are many stroke, brain cancer and accident victims every day of the year.
Though it is unethical and illegal to be the cause of a brain lesion, you are permitted (with suitable consent) to study the effects on those who have suffered one through accident. Indeed, many famous patients have contributed important information by happening to show an unusual change of function after sustaining a head injury.

Natural development and arrested development - Some structures or cell types develop at a later age. By studying the abilities of a person as they grow, we can determine what these structures do. We can also study people who have a natural problem in development, examining their brains to see which structures are deficient or oversized.

Electrodes - An electrode usually consists of a very thin wire that can be precisely positioned in the brain at almost any depth or position. Either a small current can be applied through its tip, or the electrical potential present at that point can be recorded. Electrodes have been the mainstay of brain research for many years (since before the development of the large brain scanners that I'll talk about later). They have been used in innumerable experiments - usually involving animals, but some with live human subjects. The benefit of electrodes is that they can directly stimulate a very precise area of neurons with a minimum of damage. Through vast amounts of selective stimulation, they have been used to fine-tune our knowledge of brain structures. Most maps of brain activity have been formulated by selectively test-stimulating each area of the brain in turn, asking the subject (awake at the time) what they can sense, and observing any changes in behaviour. The main problem with using electrodes in humans is that they do cause some small amount of nerve damage, and thus it is unethical to use them unnecessarily.

Studies with humans are often conducted just prior to major brain surgery, when a patient has sustained damage to their brain (either a cancer or a blood clot). Obviously, the surgeon will try to minimise the amount of brain tissue that has to be removed, and so will try to determine which tissue is live and which is not. The patient's scalp is anaesthetised locally and a section opened to expose the brain. The surgeon then inserts electrodes into the surrounding areas and runs a small current through them one at a time. The patient is awake and must tell the surgeon whether they notice that something is different when the current is on. This technique is really only useful in brain areas where an electrode can produce an obvious effect, such as the visual cortex or the areas for any other major sense (e.g. a bright pinpoint of light in a specific part of the field of vision). If the damaged tissue is in an area that does not produce such an obvious effect (e.g. areas generally associated with memory storage), this technique is much less useful.

Electrodes are also used to passively measure the electrical potential of a neuron. This has been used to measure the effects of sub-threshold currents on a neuron - how currents can add to or subtract from the electrical potential, and how this affects the chances of an action potential. Electrodes can also be used to monitor whether a particular brain area is being used for a certain activity. For example, experiments have been performed on animals that had their heads fixed in place while a tiny light was shone onto a particular part of their field of vision; an electrode in the brain can determine which neurons are activated by this stimulus. As a method of finely detailed research, electrodes can be very useful. As a method of stimulating a very specific neuron, they are the best. As a method of communicating with the brain, however, they are still not so good: for that, we would have to attach an electrode to a very large set of neurons - for example, to every axon in a nerve bundle (like the thousands of fibres in the optic nerve). Regardless, electrodes are the main possibility for the application of cyberware. They are the main avenue in which we are progressing, even though I consider us to be still in the barbarism stage (current methods requiring a severed nerve ending and a spiked plate covered in microelectrodes). I believe that we must find a less invasive, but still very precise, method for reading the information coming from our nerves.
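The sub-threshold summation described above can be sketched with a toy "leaky integrate-and-fire" model, a standard simplification of a neuron (all constants here are illustrative): inputs add to the membrane potential, the potential leaks away over time, and only when the sum crosses a threshold does an action potential fire.

```python
THRESHOLD = 1.0     # firing threshold (illustrative units)
LEAK = 0.9          # fraction of potential retained each time step

def run(inputs):
    """Return the time steps at which the model neuron fires."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(inputs):
        v = v * LEAK + i_in          # leak, then add the incoming current
        if v >= THRESHOLD:
            spikes.append(t)
            v = 0.0                  # reset after an action potential
    return spikes

# A single sub-threshold input decays away without firing...
print(run([0.5] + [0.0] * 10))   # []
# ...but two inputs arriving close together sum past threshold.
print(run([0.5, 0.6]))           # [1] - fires at t=1
```

This is exactly the behaviour passive electrode recordings revealed: individual sub-threshold currents are not enough, but their timing-dependent sum determines whether the neuron fires.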

Non-invasive methods - Really these should be labelled 'less-invasive', as all methods affect the brain in some way, but these are considered less damaging to a human and are generally performed outside the human body. The problem with using these methods is that the equipment is usually prohibitively expensive for the average person. Research in this field is restricted to those who have access to the equipment, or who can persuade those who do to let them use it.

EEG - This device works by attaching several electrodes (not the pin-shaped ones described above, but ones that lie flat against the skin) to the scalp. These measure the brain's electrical activity during thought processing. The output of the electrodes is amplified and recorded, and the researcher can then analyse the data and usually tell the overall state of brain activity - such things as whether the subject is asleep, dreaming or awake, and whether they are problem-solving or just daydreaming. Abnormalities in the EEG can be detected when the subject has severe problems such as epilepsy or a tumour. The EEG measures the average activity of a very large number of neurons under each electrode. Using a large number of electrodes can better pinpoint the location of the neural activity, but only to a certain extent; this is a large-scale, very generalised technique for measuring brain activity. EEGs can be used to control the movements of a cursor onscreen. They have also been used for some very heavy experimentation on direct brain stimulation using the so-called montage amplifier.
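One standard way such states are told apart is by comparing power in different frequency bands: relaxed wakefulness shows strong alpha rhythms (8-12 Hz), while alertness shifts power toward beta (13-30 Hz). Here is a toy sketch of that comparison; the signals are synthetic sine waves, not real EEG, and the sampling rate is an assumption for the example.

```python
import math

FS = 128            # assumed sampling rate in Hz
N = FS              # one second of samples

def band_power(samples, f_lo, f_hi):
    """Crude DFT: total power in the [f_lo, f_hi] Hz band over one second."""
    power = 0.0
    for f in range(f_lo, f_hi + 1):
        re = sum(s * math.cos(2 * math.pi * f * n / FS) for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * n / FS) for n, s in enumerate(samples))
        power += re * re + im * im
    return power

def classify(samples):
    """More alpha (8-12 Hz) than beta (13-30 Hz) suggests a relaxed state."""
    return "relaxed" if band_power(samples, 8, 12) > band_power(samples, 13, 30) else "alert"

alpha_wave = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]   # 10 Hz
beta_wave = [math.sin(2 * math.pi * 20 * n / FS) for n in range(N)]    # 20 Hz
print(classify(alpha_wave), classify(beta_wave))   # relaxed alert
```

Real EEG software uses the FFT and many electrodes at once, but the underlying decision - comparing band powers - is the same.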

MEG - stands for Magneto-encephalography and is very similar to the EEG. Instead of measuring the electrical potential, it measures the magnetic fields caused by the electrical potentials. Apparently, this method allows a more precise localisation of regional activity. [Coren et al, p645]

CAT scanners - CAT stands for Computerised Axial Tomography. To take a CAT scan, the physician starts by injecting dye into the bloodstream. The patient's head is then inserted into a large x-ray machine, which takes an x-ray, rotates around the head by 1 degree, takes another, and so on until 180 x-rays have been taken. A computer analyses the data and creates a composite image of the brain. This technique is useful for determining brain structure without having to open up the skull. Unfortunately, it is like taking a still picture: you get an idea of where everything is, but cannot see it in action.
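The way a computer builds the composite image from those 180 views can be sketched with a toy unfiltered back-projection: each one-degree view is "smeared" back across the image grid, and the point where all the views agree stands out. The grid size and the position of the dense spot below are arbitrary choices for the example (real scanners use filtered back-projection for sharper images).

```python
import math

SIZE = 21                                 # toy image grid (illustrative)
TRUE = (5, 14)                            # dense spot we want to recover

def project(angle):
    """One 'x-ray view': bin each pixel's density by t = x*cos + y*sin."""
    bins = {}
    for x in range(SIZE):
        for y in range(SIZE):
            density = 1.0 if (x, y) == TRUE else 0.0
            t = round(x * math.cos(angle) + y * math.sin(angle))
            bins[t] = bins.get(t, 0.0) + density
    return bins

# Take 180 views, one per degree, then smear each back across the grid.
recon = [[0.0] * SIZE for _ in range(SIZE)]
for deg in range(180):
    angle = math.radians(deg)
    bins = project(angle)
    for x in range(SIZE):
        for y in range(SIZE):
            t = round(x * math.cos(angle) + y * math.sin(angle))
            recon[x][y] += bins.get(t, 0.0)

best = max(((x, y) for x in range(SIZE) for y in range(SIZE)),
           key=lambda p: recon[p[0]][p[1]])
print(best)   # the dense spot stands out at (5, 14)
```

Each single view only says "something dense lies somewhere along this line"; it is the agreement of all 180 views that pins the structure down, which is why the machine must rotate around the whole head.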

PET - PET is Positron Emission Tomography, and it relies on regional Cerebral Blood Flow (rCBF) to determine the active parts of the brain. The substance injected into the bloodstream decays in a known manner, ejecting positrons at a statistically reliable rate. Glucose is often used, as it congregates in active neurons. When a positron hits an electron, the particles annihilate one another, releasing some of their pent-up energy as two gamma rays that travel in opposite directions simultaneously. A PET scanner consists of gamma ray detectors. When two simultaneous gamma rays are detected, the scanner determines the position they originated from by finding where the two paths intersect. When many of these events are recorded, a computer can plot the activity of that brain region.
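Here is a toy sketch of that intersection idea: each simulated decay defines one "line of response" through an assumed source position, every grid cell a line crosses is marked, and the cell crossed by every line - the intersection point - is the one that lights up. The grid size, source position and decay count are all arbitrary values for the example.

```python
import math
import random

random.seed(7)
SIZE = 21
SOURCE = (10, 6)                 # assumed location of the active region

grid = [[0] * SIZE for _ in range(SIZE)]

# Each decay produces two gamma rays travelling in opposite directions;
# together they define one line of response through the source.
for _ in range(500):
    angle = random.uniform(0.0, math.pi)
    dx, dy = math.cos(angle), math.sin(angle)
    cells = set()
    for step in range(-200, 201):         # walk the line in 0.1-cell steps
        ix = round(SOURCE[0] + dx * step * 0.1)
        iy = round(SOURCE[1] + dy * step * 0.1)
        if 0 <= ix < SIZE and 0 <= iy < SIZE:
            cells.add((ix, iy))
    for ix, iy in cells:                  # mark each crossed cell once
        grid[ix][iy] += 1

hot = max(((x, y) for x in range(SIZE) for y in range(SIZE)),
          key=lambda p: grid[p[0]][p[1]])
print(hot)   # all 500 lines intersect at the source: (10, 6)
```

A single coincidence only narrows the source down to a line; it takes many recorded pairs, intersecting from different angles, before the active region emerges - which is why PET needs a statistically reliable decay rate.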

MRI - (Magnetic Resonance Imaging) is the same technique as NMR. Unlike a PET scan, it can produce very detailed images of the brain without exposing it to radioactivity of any sort. MRI utilises a very interesting property of the atoms that make up our brain. Each atom has a certain spin; usually the axes of rotation are randomly oriented, but a strong magnetic field will align them in the same orientation. The hydrogen atoms in the blood are usually targeted, as they are easier to align. When a radio-frequency electromagnetic field is then applied to the aligned hydrogen atoms, they spin like tiny gyroscopes. When the field is turned off, they all relax into their previous positions, but simultaneously emit a very small amount of magnetic energy. By measuring this energy, we can deduce the concentration of hydrogen atoms in the region being monitored, which tells us which brain regions are active at present (see CBF for why). MRI is slow, however, taking around 15 minutes per scan. A newer, faster form called echo-planar MRI can form images in less than a tenth of a second - fast enough to watch blood flowing. This device will greatly help us to study the structure of activity in the brain. [BioControl Systems Inc] [Coren et al] [Kalat] [Lusted, 1996]
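The final deduction step can be sketched very simply: the emitted signal's initial amplitude scales with the hydrogen concentration in the region and then decays as the atoms relax, so a blood-rich (active) region reads stronger than a quiet one. The constants below are illustrative, not real tissue values.

```python
import math

T2 = 0.1                               # relaxation time constant in seconds (assumed)

def emitted_signal(h_concentration, t):
    """Signal at time t after the RF field is switched off: initial amplitude
    scales with hydrogen concentration, then decays as the atoms relax."""
    return h_concentration * math.exp(-t / T2)

active, quiet = 1.0, 0.4               # relative proton concentrations (illustrative)
stronger = emitted_signal(active, 0.05) > emitted_signal(quiet, 0.05)
print(stronger)   # True: the active region emits the stronger signal
```

Comparing these readings region by region is what turns raw relaxation signals into a map of concentration, and hence of activity.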

Appendix B: Glossary

cyborg - CYBernetic ORGanism – Generally speaking, any person or creature that is partly machine and partly organic. This definition is wide-ranging, covering anything from a person with a pacemaker to someone who has replaced large sections of their body with artificial limbs and other enhancements. It can also be used as a term for machines that have been enhanced with organic parts (for example, ‘The Terminator’ (in the movie of the same name) was a machine with a skin grown from living skin cells).

phosphene - A phosphene is a bright light that is perceived when the visual cortex is stimulated. The eyes have not actually created this effect, but the brain thinks it has seen the light due to an emulation of the processes that are usually in place when a light is seen. These are the ‘stars’ that are seen when you rub your eyes or get hit on the head.

wetware - A slang term for the body’s own neural processing systems. It comes as an extension from the words hardware and software.

wetwired - A slang term from literature, generally meaning the wiring of something to the body.

References - The list of references below consists of the articles and books that contributed to my knowledge of this field. Not all of them are directly referred to in the text, but I considered each to be important and interesting in its own right. You may notice that some of these references are works of fiction. Don't be put off by this; much of our best science was first conceived in fiction. William Gibson (to take a more modern example of this effect) has had a profound effect on the science of computer communications: the creation of the Internet has been at least partly attributed to his ideas of 'cyberspace'. Many are still working toward the full realisation of his dream of a 'consensual hallucination'. It is this dream that I myself strive toward in this project, and I hope that I can help it someday to become a fully developed technology.

    Beardsley, Tim The Machinery of Thought. In Scientific American Trends in Neuroscience, August 1997
    BioControl Systems Inc, Neural interface technology-The future of Human Computer Interaction. In: WebPages belonging to BioControl Systems Inc
    Branwyn, Gareth The desire to be wired. In Wired 1.04, October 1993
    Carder-Russell, Roderick A Personal Reasons for Seeking a Brain-Computer Interface In Human/Brain-Computer Interface WebPages, 1996.
    Coren S, Ward L, Enns J, Sensation and Perception [4th Ed], Harcourt Brace college publishers, Fort Worth, 1994
    Cyberpunk: the role-playing game of the dark future [2nd Ed]. By R Talsorian Games Inc, Berkeley CA, 1993
    Fleischer Brain Chip In Stepback: The Fleischer Files. Episode 101
    Fromherz, P Neuron-Silicon Junction or Brain-Computer Junction? In: Ars Electronica Festival. Eds.: G. Stocker, C. Schöpf. Springer, Wien 1997, pp.158-161
    Gibbs, W. Wayt Artificial Muscles. In Scientific American: Explorations (Smart Materials) May 1996
    Gibbs, W. Wayt Mind Readings. In Scientific American: Analysis (Neuroscience), June 1996
    Gibbs, W. Wayt Taking computers to task. In Scientific American: Trends in computing, July 1997
    Gibson, William Neuromancer
    Greengard, Samuel Head start In: Wired 5.02, Feb 1997
    Johnny Mnemonic I need a ref for this.
    Kalat, JW Biological psychology [5th Ed], Brookes/Cole Publishing Co., Pacific Grove, California, USA, 1995
    Kalcher J, Flotzinger D, Neuper Ch, Goelly S, Petz, Pfurtscheller G Brain-Computer Interface Prototype BCI2 for online classification of 3 types of movements, Department of Medical Informatics and Ludwig Boltzmann Institute of Medical Informatics and Neuroinformatics
    Katz Pictures Seeing is believing. In New Scientist, 24th April 1999
    Lusted, HS and Knapp, RB Controlling Computers with Neural Signals. In Scientific American, October 1996
    Macauley, William R., From Rubber Catsuits to Silicon Wetware: Transforming the Human Body and Polymorphic Desire(s) in Synthetic Media
    Margulis, Zachary Going mental: let your neurons do the typing. In Wired 1.04, October 1993
    Nadis, Steve We can rebuild you. In Trends, October 1997
    Newquist, H P The Brain Makers: Genius, Ego, and Greed in the Quest for Machines that Think. C 1994, Sam's Publishing (a division of Prentice Hall), Indianapolis.
    Ridley, Kimberly Artificial sensations In Technology Review, 1994
    Shirow, Masamune Ghost in the shell
    Spence, Kristin Updata: When wet meets dry. In: Wired 4.08, August 1996
    Thomas, Peter Planet science: Thought control In: New scientist
    Tonneson, Cindy and Withrow, Gary BioSensors
    Wu, Karl Shadowtech: A Shadowrun book, FASA Corporation, 1992
    Zacks, Rebecca Spinal cord repair. In Scientific American: Explorations, 18 August 1997
    T. G. Zimmerman Personal Area Networks: Near-field intrabody communication In IBM Systems Journal - 35-3&4, Vol. 35, Nos. 3; 4 – MIT
