It is the third century B.C., and Herophilus, the father of anatomy, raises his scalpel to glint beneath the Egyptian sun. Around him, physicians from across the Mediterranean crowd in to witness his incision, straining to see as he bends his instrument to the task at hand. They have already learned much about the physiology of the eye and plumbed the mysteries of the human viscera.
Once more the master parts the flesh and enters the bloody labyrinth of arteries and muscle. With each cut, the body beneath him writhes in pain, straining against the ropes that bind it to the operating table, for this is no mere cadaver but a living, breathing test subject. Herophilus continues his examination, oblivious to the muffled screams of the condemned prisoner. The scholars discuss the study of the exposed poroi, or nerves, even as they sizzle with unimaginable torment.
It was a time of great learning in the ancient world, and Alexandria was its very center. The Ptolemies had established a museum, or "house of the muses," for the advancement of science and literature. The family's acquisitions helped fill the legendary Library of Alexandria, and for a time the city served as a stronghold against a world shadowed by ignorance. The ban on dissecting corpses was lifted.
For a period of roughly 50 years, even the taboo practice of human vivisection became commonplace. After all, there was only so much scholars could learn from studying the dead. In an age when blood vessels were still thought to carry air, they needed to open living bodies to their scientific scrutiny. Why shouldn't the forfeited lives of the condemned benefit the generations to come?
Herophilus reportedly dissected as many as 600 living prisoners, earning his place in medical history with his various discoveries. Even at the time, however, many critics voiced their unease over vivisection, whatever its rewards. His writings were lost forever in A.D. 272, when the city's great library was destroyed by fire.
As we look back across the centuries, Herophilus hardly stands as some distant flicker of moral quandary. Rather, history is replete with countless disturbing examples of human experimentation.
Modern society is only a few decades removed from some of the worst examples of unethical experimentation. Even today, medical science continues to advance on the backs of human test subjects.
The problem with human experimentation generally comes down to one basic fact: when science deals directly with humans, you eventually have to study humans. It's that simple. Whether you're looking to cure ailments and injuries, build a safer car or design a deadlier weapon, you may need to gauge human thresholds for disease, stress and injury.
A few other options (and ethical hurdles) tend to present themselves before you follow Herophilus' lead and start rounding up convicted criminals. If your experiment absolutely demands a human subject, you can always turn to the dead variety. Even without function, you have form. The famed father of anatomy used cadavers for much of his research.
To break new ground, medical pioneers often had to depend on the bodies of executed criminals or steal corpses from graves and gallows. Body snatching became a growth industry in the 19th century, as medical schools required fresh bodies for young surgeons to practice on. Today, researchers and students have far easier access to legally obtained medical cadavers.
Sometimes scientists require a living subject -- a working model. In many cases, they turn to the rest of the animal kingdom. In the past century alone, chimpanzees, rabbits and other animals have aided everything from polio research and space exploration to cosmetics and biological weapons testing.
Moral and ethical quandaries aside, there are two key problems with animal research. First, a rabbit can only provide physiological and behavioral feedback. Even the brightest primate can't sit through a question-and-answer session. Second, you're working with a nonhuman species, which makes it difficult or impossible to study certain diseases, ailments and scenarios specific to humans.
Self-experimentation has often proved a highly successful (and reckless) means of scientific inquiry. Numerous scientists have deliberately infected themselves with diseases or parasites to gain firsthand insight into their subjects. Pierre and Marie Curie won the 1903 Nobel Prize in physics for their research into radiation, which involved strapping dangerous radium salts to their skin.
Self-experimentation has its limits, however. Herophilus could hardly have performed his own vivisection, much less 600 separate procedures on himself. After all, an experiment is only one phase of the scientific method: there's no sense in dying halfway through. Besides, how do you retain functionality and objectivity if you're suffering from the very plague you hope to cure?
That leaves human experimentation and all the negative connotations that come with it.
Human experimentation doesn't have to be an exercise in cruelty, yet we're not even a century removed from some of the more deplorable acts -- cases that could easily rival and even surpass the alleged crimes of Herophilus and his colleagues.
As in ancient Alexandria, scientists and doctors have often turned to the disenfranchised when the need for test subjects arises. After all, we experiment on animals by telling ourselves that the greater good outweighs the desires of a few lesser creatures. History has shown us where this line of thinking can lead when we see other humans as the lesser creatures in question.
J. Marion Sims is widely considered the father of gynecology and even became president of the American Medical Association in 1876. Yet Sims developed his experimental surgeries by testing them on enslaved African women, often without anesthesia.
The United States often turned to prisoners for medical tests, such as the 1906 cholera experiments in the Philippines and the 1915 pellagra experiments in Mississippi. Poor and orphaned children suffered similar fates. In 1908, three Philadelphia physicians infected several orphans with tuberculosis, permanently blinding some of them. Between 1919 and 1922, Dr. Leo Stanley injected 656 San Quentin Prison inmates with animal testicular material in an attempt to slow or reverse aging [source: Lunenfeld]. Before the 1970s, roughly 90 percent of all pharmaceutical products were tested on prisoners [source: Proquest].
Yet when it comes to experiments on captives and prisoners, few examples resonate as strongly as those conducted by the Nazis during World War II on Jews, Roma and other targeted groups. Nazi doctors carried out cruel and often lethal experiments on wartime injury treatment, generally by first inflicting the injury in question on a captive patient. They froze victims to research hypothermia and sealed them in low-pressure chambers to test the effects of high-altitude flight. They also conducted brutal sterilization experiments in the name of racial superiority.
Similarly, Japan's infamous Unit 731 reportedly killed more than 10,000 Chinese, Korean and Russian prisoners of war to research and develop biological weapons. They infected inmates and performed vivisections on them, all in an attempt to craft even deadlier weapons out of disease.
How are we supposed to process such atrocities? And what do we do with the data gleaned from degradation and torture?
Humanity continues to march on through the years, bound to the linear procession of time. Any sober, honest glance back at the path we've traveled is often as horrifying as it is wondrous. Our collective history bristles with atrocity. We can't separate ourselves from it without also draping ourselves again in the cloak of ignorance.
Much the same can be said of science. While it continues to light the way to the future, portions of its momentum were purchased through horrible deeds. But what can we do? No one is arguing that we throw out everything we know about human anatomy in penance for Herophilus' vivisections.
Allied scientists faced such a dilemma at the end of World War II. What were they to do with Unit 731's medical findings on disease? Regardless of the methods used to obtain it, the information was valuable. It was like pondering a gold coin fetched from a vat of boiling oil. Were the U.S. scientists simply to flip it back into the searing pitch? Rather than see the information fall into the hands of the Russians, the U.S. bargained with the Japanese officers responsible: immunity from war crime prosecution in exchange for the ill-gained data [source: McNaught]. The officers were even given stipends.
Similar concerns surround Nazi documents left over from the Holocaust. With no regard for the welfare of their test subjects, German scientists at Dachau subjected victims to extremely low temperatures -- often resulting in death. Yet the Nazis also used these experiments to determine the best method of reviving hypothermia patients in tubs of hot water. Hypothermia researchers later insisted that the data was valuable, no matter how deplorable the methods used to obtain it.
Many critics argue that by using the data, we validate the crimes. Yet purely logistical criticisms arise as well. Can we trust Nazi doctors who sought politically motivated results, such as those that "proved" German racial superiority? In many cases, the experiments lacked proper methods and protocol. The subjects themselves, selected from the death camps, tended to be imperfect specimens to begin with, already suffering from malnourishment and psychological trauma.
Sometimes, however, the victims of atrocity have managed to obtain useful data from the conditions brought on by their tormenters. Jewish doctors documented starvation in the Warsaw ghetto -- notes that later aided the study of hunger-associated disease [source: NOVA].
We tend to place a great deal of trust in our physicians. With that trust comes the understanding that they won't secretly experiment on us. But there was a time when entering a teaching hospital as a patient might result in not only the procedure you needed, but also whatever the students needed to practice that week [source: Roach].
While the prospect of receiving an unwarranted appendectomy at no extra cost may seem horrifying enough, more shameful experiments litter medical history. In 1932, the U.S. Public Health Service began its 40-year Tuskegee study of syphilis. During this time, African-American men who sought treatment for the disease were deceived; instead of administering proper medical care, doctors allowed their condition to worsen in order to better study the illness. The U.S. military also tested unsuspecting citizens, exposing them to germ warfare agents and LSD in the '50s and '60s. Various U.S. radiological tests likewise used unknowing subjects, injecting hospital patients with plutonium and feeding radioactive cereal to mentally disabled children [source: Proquest].
Human trials became increasingly essential to the pharmaceutical industry as the U.S. Food and Drug Administration began requiring stricter testing of new drugs in the 1930s. In 1966, the National Institutes of Health (NIH) established the NIH Policy for Protection of Research Subjects, a measure that created review boards to monitor human experimentation. To this day, U.S. colleges and universities that perform any kind of human experimentation also maintain institutional review boards, or IRBs.
Subsequently, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (later the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research) entered the picture to refine the laws and practices surrounding human experimentation.
Throughout the 20th century, medical laws raced to keep pace with human experimentation -- often falling troublingly behind. For instance, 1964's Helsinki Declaration by the World Medical Association allowed for experimentation on incapacitated and incompetent individuals so long as legal guardians gave written consent.
Yet, even shielded by the protection of signed written consent, human testing continues to raise ethical concerns.
Walk into a Phase I U.S. clinical trial for a new drug and you'll likely find something between a frat house rec room and a hospital. Far from suffering, most of the test subjects would likely be busy watching TV or blasting through a few video games. As it turns out, catheters, needles and the occasional invasive surgical procedure don't really spoil a good time, especially when you're getting paid for it.
These mini-vacations for science can last anywhere from days to weeks and can pay in the thousands of dollars. Sometimes they're even held in hotel rooms. Phase I clinical trials typically involve otherwise healthy (though generally not gainfully employed) individuals. In this stage, researchers try to pinpoint dangerous side effects and potential complications. Phase II trials deal with dosing and efficacy, while Phase III trials enlist the help of actual patients to compare the experimental treatment to conventional ones, placebos or both.
A great deal of money and time go into these tests because pharmaceutical companies have a limited window to get a new drug out on the market and profit from it. U.S. drug patents only last 20 years; if a new medication is tied up in testing for a decade, then it will only have 10 moneymaking years left in it. While the companies themselves are frequently criticized for their commercialism, it does take a considerable financial investment to see a medical discovery all the way to the point where it can help patients -- even with limited or no human testing. For financial or logistical reasons, many potential medical breakthroughs never even make it through the experimental period, which may be why researchers dub it "the valley of death."
Pharmaceutical companies used to rely more on university research facilities or teaching hospitals -- which, in turn, gave them access to students who might appreciate a spring break full of experimental psychotropic drugs and repeat viewings of "The Wall." The downside to this, however, was that it introduced academic bureaucracy into an already highly regulated process. The FDA required the tests to be supervised by an institutional review board, and these were generally staffed by university faculty.
To streamline this ordeal, pharmaceutical companies now deal largely with commercial contract research organizations, which handle all the testing. In a 2008 article in The New Yorker, Carl Elliott described the resulting situation as a subculture of guinea pigs -- they even have their own publications, detailing which studies have the best pay and perks.
Paying human test subjects is a reality many view as unavoidable. While afflicted individuals might line up for the possible benefits of Phase III testing, Phase I testing tends to require healthy specimens. Like it or not, most of these individuals aren't going to volunteer without financial compensation.
This dilemma has existed for more than a century. In 1903, a New York physician stirred up ethical debate when he offered $5,000 to any man or woman willing to cut an ear off for his studies. Is it ethical to purchase a slice of another individual's health, even with the greater good in mind? Despite all the cozy accoutrements and oversight, paid test subjects put their health and even their lives on the line in sometimes painful or degrading clinical trials. After all, Phase I tests exist to help identify harmful side effects. If your bottle of medication says that it might result in bowel control problems or suicidal thoughts, you can bet that someone received a paycheck for experiencing them at some point.
Plus, there's the concern that the resulting guinea pig lifestyle winds up appealing to a particular segment of the general population. In some cases, the individuals are professional test subjects, moving from one study to another like a jobless Phish fan trailing a never-ending tour. In other cases, the test ranks are occupied by some of the very underprivileged classes preyed upon in less ethical times: the poor, the mentally handicapped and even undocumented immigrants.
Contract research organizations have also come under fire for their oversight and staffing. University institutional review boards placed more emphasis on curtailing overzealous academics, in addition to the ethics of testing. For-profit review boards, however, answer to the market's need for speed, which has resulted in overlooked ethical concerns such as unsafe testing environments and unlicensed medical staff members. The FDA, for its part, places more emphasis on data inspections and reportedly inspects only 1 percent of clinical trials [source: Elliott].
Meanwhile, headlines continue to document the role of human embryonic stem cells in experimentation. While these highly versatile cells may play a huge role in the development of cures for various diseases and even aging, many oppose the destruction of human embryos required to obtain them.
In one form or another, human experimentation is as old as human curiosity. We learned that fire burned because we tested it. As scientific research continues, the challenge is to keep the flames in check.
Explore the links on the next page to learn even more about scientific experimentation.