I apologize for the length, but this is a somewhat complicated issue. I did read the forum's posting rules, so I have tried to simplify this enough not to break the "no more than two questions per post" rule.

So I am currently analyzing data from an experiment we ran over the summer. The subjects were patients with a very rare neurological condition known as pure word deafness. I won't go into the clinical details, but the data consist of two sets of words: target words and response words. Each target has one response word associated with it (though in some cases the same response word is associated with multiple targets). The subjects were told to repeat after the experimenter; the target words are the "input" given to the subject, and the response words are the "output" (i.e., what they actually said). Due to the nature of their condition, the subjects almost never repeat the target word back exactly.

We are interested in the nature of the relationship between the target and response words; more precisely, we want to estimate the probability of a particular word being produced as the response to a given target. Are the responses essentially random, or is there some definite relationship?

This type of data analysis is a bit new to me (this is my first job in a psychology department; I previously worked in biology/physiology). In any case, after discussing it with our lab's PI, here is the method we used:

The target-response data were rearranged so that every target word was paired with every response word (the full Cartesian product). That is,
hotel potato
hotel television
hotel ...
airplane potato
airplane television
airplane ...
(The actual data sets are much larger)
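
In case it helps, a minimal sketch of that rearrangement in Python (the word lists here are just placeholders):

```python
from itertools import product

# Placeholder word lists; the real data sets are much larger.
targets = ["hotel", "airplane"]
responses = ["potato", "television"]

# Pair every target with every response (Cartesian product).
all_pairs = list(product(targets, responses))

for target, response in all_pairs:
    print(target, response)
```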

A numerical value is assigned to each pairwise target-response relationship. These are calculated using a lexical database called WordNet. I won't go into the details because the WordNet system is a bit complicated, but for our purposes we chose three different measures of semantic relatedness: Jiang-Conrath (information content), lesk (gloss overlap), and vectors. Each uses a series of algorithms to determine the relatedness of two words as a function of their "distance" to other words.
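
For illustration, here is how one of the three measures (Jiang-Conrath) can be computed from Python using NLTK's WordNet interface. This is just a sketch; we used precomputed scores rather than this exact code, and the synsets chosen here are examples:

```python
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Information content from the Brown corpus (requires the
# 'wordnet' and 'wordnet_ic' NLTK data packages).
brown_ic = wordnet_ic.ic('ic-brown.dat')

hotel = wn.synset('hotel.n.01')
potato = wn.synset('potato.n.01')

# Higher score = more semantically related under Jiang-Conrath.
print(hotel.jcn_similarity(potato, brown_ic))
```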

We then performed a Monte Carlo simulation on each data set and calculated the distribution and various descriptive statistics for each of the three measures (sketched below). First of all, is a Monte Carlo simulation the most sound method for this type of data analysis? What about bootstrapping? Something else entirely?
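
In case my description is unclear, here is roughly what the simulation does, as a sketch in Python (the names are mine, and `pair_scores` stands for the precomputed WordNet values for every pairwise combination):

```python
import random

def monte_carlo_null(pair_scores, targets, responses, n_sims=10000, seed=0):
    """Null distribution of mean relatedness under random re-pairing.

    pair_scores: dict mapping (target, response) to its WordNet
    relatedness score, precomputed for every pairwise combination.
    """
    rng = random.Random(seed)
    shuffled = list(responses)
    null_means = []
    for _ in range(n_sims):
        rng.shuffle(shuffled)  # break the observed target-response pairing
        mean_score = sum(pair_scores[(t, r)]
                         for t, r in zip(targets, shuffled)) / len(targets)
        null_means.append(mean_score)
    return null_means
```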

If Monte Carlo is the right way to go, then how do I analyze the results? I have three distributions. The Jiang-Conrath distribution is approximately normal (almost no skew, but negative kurtosis). However, the lesk and vector distributions aren't even close to normal (I forget the word for this type of distribution, but it is shaped like a rectangle with a jagged top; approximately uniform, maybe?). What does this mean about the data? I am not entirely sure.

Essentially, all I want to do is compare the observed target-response scores with the Monte Carlo distributions. With the Jiang-Conrath data set this seems easier because the distribution is approximately normal(ish). But what about the other two?
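
For what it's worth, the comparison I had in mind is something like the empirical tail proportion below; it makes no normality assumption, so perhaps it would apply to all three measures, but I am not sure it is sound (hence the question):

```python
def empirical_p(observed_mean, null_means):
    """One-sided empirical p-value: the fraction of simulated means
    at least as extreme as the observed mean (with a +1 correction
    so the estimate is never exactly zero)."""
    as_extreme = sum(1 for m in null_means if m >= observed_mean)
    return (as_extreme + 1) / (len(null_means) + 1)
```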

Anyway, I appreciate any help, and thanks in advance.

Just to make sure everyone knows, I only have two questions:
1) Is Monte Carlo simulation the right way to look at this type of data?
2) If so, how do I go about analyzing the distributions?


Also, I apologize if the above isn't a good explanation. If you need me to be more specific, or even to show you what my data look like, I would be glad to.