Thursday, April 21, 2011

Copy and original revisited

I realized there is one possible objection to my definition of a copy as

Object A is a copy iff its creation was intended as a copy of some object B (or maybe even an idea of such an object) and A does not (numerically) equal B.



Imagine that an entrepreneur makes a supercomputer that is designed to make him lots of money. All he needs to do is supply it with lots of information about the world, and the computer will do whatever it takes to best ensure he gets money. The computer decides the best way is to make counterfeit paintings and sell them. The entrepreneur does not know what his computer is up to and just collects the cash it makes him. Since only a machine was involved in manufacturing the counterfeits, does that mean that copies of the paintings were made without any intention?

I thought of this objection while reading Michael Rea's paper on pornography. He seems to think that the objection is a good one against certain definitions of porn.

However, I have my doubts, because it seems that once machines as complex as the computer in the thought experiment are brought about, we can ascribe intentions to them. We already ascribe intentions to certain (complex) animals like dogs, apes, dolphins, etc. A dog, for example, brings you his frisbee; his intention seems to be that he wants to play. No problems there: I have no problem with intentions in many animals.

The issue is whether a computer can do the things in the thought experiment without what can properly be called an intention on its part. It seems able to choose among the available options, compute the best one, and form and carry out a plan of action, despite the fact that it may lack the phenomenal mental content that people and animals who perform similar tasks have. Would that be enough to constitute intention? Can "intention" be defined loosely (so that it lacks, for example, phenomenal content) while still maintaining its close association with the phenomenon we attribute to our understanding of our own intentions? I'm not sure, but it seems plausible to me that if the computer can do all that, it has what can properly be called an intention.



Puzzle of the self-torturer revisited

One other thing I want to mention about the "puzzle" is that it assumes that pain is continuous and not noticeably discrete. However, pain may have various threshold "steps": as pain gradually increases, the experience of it may jump to the next step once it reaches certain thresholds. This would also need to be taken into account, and it may be one objection to the possible rejoinder to my original response that the increments instead be increases in pain levels (as opposed to the external electric current in the example).
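To make the threshold idea a bit more concrete, here is a minimal sketch in Python. The particular dial settings, the thresholds, and the perceived_pain function are purely illustrative assumptions of mine, not part of the original puzzle; the point is only that if perception is step-like rather than continuous, some single increments are noticeable.

```python
# A purely illustrative model (my own assumptions, not from the puzzle itself):
# the dial increases in tiny, continuous-seeming units, but *perceived* pain
# jumps in discrete steps at certain thresholds.

THRESHOLDS = [200, 400, 600, 800]  # hypothetical dial settings where perception jumps

def perceived_pain(dial_setting):
    """Map a continuous dial setting to a discrete perceived pain level."""
    return sum(1 for t in THRESHOLDS if dial_setting >= t)

# Walk the dial up one unit at a time, as the self-torturer does.
previous = perceived_pain(0)
for setting in range(1, 1001):
    current = perceived_pain(setting)
    if current > previous:
        # On this model some single increments ARE noticeable:
        # this step crosses a threshold, so it is felt as a jump.
        print(f"Noticeable jump at dial setting {setting}: level {previous} -> {current}")
    previous = current
```

Where the thresholds lie (and whether they are fixed or shift with attention and expectation, as the passage quoted below suggests) is an empirical matter; the sketch only shows that on a step-like model it is no longer true that no single increment makes a noticeable difference, which gives the self-torturer a non-arbitrary place to stop.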

Also see here for evidence of what I alluded to earlier in that blog post about the psychological factors that come into the evaluation of pain.

Dimensions

In 1968 Ronald Melzack and Kenneth Casey described pain in terms of its three dimensions: "Sensory-discriminative" (sense of the intensity, location, quality and duration of the pain), "Affective-motivational" (unpleasantness and urge to escape the unpleasantness), and "Cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion).[40] They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but “higher” cognitive activities (the cognitive-evaluative dimension) can influence perceived intensity and unpleasantness. Cognitive activities "may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both dimensions of pain, while suggestion and placebos may modulate the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed." (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)



The old children's story of the frog in the pot that is slowly being boiled alive reminds me of this self-torturer puzzle. The frog may suddenly feel a sharp rise in pain once some threshold (psychological or otherwise) is reached and jump right out instead of being boiled alive. That would seem realistic and explanatory, and it would dissolve the puzzle of the self-torturer as well.

Indeed, there is some evidence from animal experiments that supports the threshold answer.

In 2002 Dr. Victor H. Hutchison, Professor Emeritus of Zoology at the University of Oklahoma, with a research interest in thermal relations of amphibians, said that "The legend is entirely incorrect!". He described how the critical thermal maximum for many frog species has been determined by contemporary research experiments: as the water is heated by about 2 °F, or 1.1 °C, per minute, the frog becomes increasingly active as it tries to escape, and eventually jumps out if the container allows it.[3][21]