Language and information
Lecture 1. A formal theory of syntax
1.4. The likelihood constraint
Now, we're going to the second constraint. The constraint [just described] can give various sentences, but it is still necessary to describe how particular words are chosen for a sentence, and why certain combinations are more likely than others. This is done by the second constraint, which is on word likelihood. Whereas the first constraint created sentence structures, the second specifies word meaning. It does not necessarily create meanings, since many words with their meanings must have existed singly before they were used in sentences, but it specifies meaning to any detail desired. The first constraint says that frequency equals zero for words outside the required class in the argument position. This leaves room for any frequency larger than zero for words which are in the required class. Nothing says, however, that all words must have equal frequency with respect to their operator or argument, or that a frequency must be random, or must fluctuate. In fact, we find in a language that each word has a particular and roughly stable likelihood of occurring as argument or operator with a given other word, though there are many cases of uncertainty, and so on.
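The relation between the two constraints can be sketched in code. This is a minimal illustration, not anything from the lecture itself: the word lists and the likelihood figures are invented, and only the shape of the idea is the lecturer's. The first constraint assigns frequency zero to any word outside the required argument class; the second assigns each word inside the class its own particular, unequal, roughly stable likelihood.

```python
# Hypothetical sketch of the first two constraints. The word lists and
# the numbers below are invented for illustration only.

# First constraint: each operator requires arguments from a given class.
ARGUMENT_CLASS = {"sleep": {"man", "child", "tree", "earth", "stone"}}

# Second constraint: within the class, each word has its own roughly
# stable likelihood as argument of the operator (they are not equiprobable).
LIKELIHOOD = {"sleep": {"man": 0.35, "child": 0.30, "tree": 0.20,
                        "earth": 0.01, "stone": 0.01}}

def likelihood(operator: str, argument: str) -> float:
    """Frequency of `argument` in the argument position of `operator`.

    Zero for words outside the required class (first constraint);
    a particular, unequal value inside it (second constraint).
    """
    if argument not in ARGUMENT_CLASS.get(operator, set()):
        return 0.0
    return LIKELIHOOD[operator][argument]

print(likelihood("sleep", "man"))      # in the class, high
print(likelihood("sleep", "stone"))    # in the class, close to zero
print(likelihood("sleep", "because"))  # outside the class: 0.0
```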
These roughly stable likelihoods, and especially the selection frequency, which will be mentioned in a moment, conform to and fix the meanings of the words, as we will see next week. Each word has a somewhat fuzzy selection of other words which are more likely than average to appear in the position of its argument. In the case of sleep, for instance, there are hundreds of words, such as man and even tree, in contrast with words such as earth or stone, under sleep. The set of words having this higher-than-average likelihood is called the selection, in this case the selection under sleep. The central meaning of a word is given by the selection of arguments under it, or of operators over it. In addition to the selection frequency, there exist words with exceptionally high likelihood; a word may have exceptionally high likelihood as a total of many ordinary likelihoods if it is in the selection of many operators. For example, the indefinite nouns or pronouns something and someone are in the selection of virtually every operator in the language, because of almost anything you can say someone came, or fell, or went, and so forth, whatever it may be. As a result, their total frequency is high; we'll see in a moment what the importance of a high likelihood is. Or the high likelihood may be relative to a particular operator: word repetition, for instance. Word repetition in certain positions is especially frequent under certain conjunctions. Take, for instance, Schnabel played Beethoven, but nevertheless Schnabel composed modern music. That was a ... or those sentences [xxx xxx xxx]. It is more likely than Schnabel played Beethoven, but Ethelworth Belge [?] composed modern music, which is a more peculiar sentence.
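The notion of a selection, the fuzzy set of words with higher-than-average likelihood under an operator, can be sketched as a simple threshold over a likelihood table. A toy illustration (the figures are invented; a real selection is a large, fuzzy set drawn from usage, not a hand-written table):

```python
# Sketch of "selection": the arguments whose likelihood under an
# operator exceeds the average over the argument class. The numbers
# are invented for illustration only.

def selection(likelihoods: dict[str, float]) -> set[str]:
    """Words with higher-than-average likelihood under an operator."""
    average = sum(likelihoods.values()) / len(likelihoods)
    return {word for word, p in likelihoods.items() if p > average}

# Invented likelihoods for arguments under the operator "sleep":
under_sleep = {"man": 0.35, "child": 0.30, "tree": 0.20,
               "earth": 0.01, "stone": 0.01}

print(sorted(selection(under_sleep)))  # ['child', 'man', 'tree']
```

On this toy table, man, child, and even tree fall in the selection under sleep, while earth and stone, with likelihoods close to zero, fall outside it, mirroring the contrast drawn above.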
That is, the consideration was that under but, as also under and or or, there is a greater likelihood that some word in the second sentence will be a repetition of the word in the corresponding position of the first sentence than that it will be any other particular one word. After Schnabel played Beethoven, it was more likely that the next sentence would have Schnabel as its subject rather than any other one person, composer or non-composer.
We will see in the third constraint that high-likelihood words may be reduced, even to zero; and indeed, these repetitions in the corresponding positions can be zeroed, yielding, in this case, Schnabel played Beethoven, but nevertheless composed modern music.
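The zeroing of a repeated word in the corresponding position can be sketched as a toy string transformation. This is a deliberately naive illustration: it takes the first word of each conjunct to be its subject and works on word strings, whereas the theory works on operator-argument structures.

```python
# Toy sketch of zeroing under a conjunction: if the subject of the
# second conjunct repeats the subject of the first, the repetition,
# being of high likelihood, can be reduced to zero. Deliberately
# naive: the "subject" is just the first word of each conjunct.

def zero_repeated_subject(sentence: str, conjunction: str) -> str:
    first, _, second = sentence.partition(f" {conjunction} ")
    if not second:
        return sentence  # conjunction not found; nothing to zero
    first_subject = first.split()[0]
    second_words = second.split()
    if second_words and second_words[0] == first_subject:
        second = " ".join(second_words[1:])  # zero the repetition
    return f"{first} {conjunction} {second}"

s = ("Schnabel played Beethoven but nevertheless "
     "Schnabel composed modern music")
print(zero_repeated_subject(s, "but nevertheless"))
# Schnabel played Beethoven but nevertheless composed modern music
```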
Now ... I want to say that exceptionally low likelihood also occurs. There are some words that have exceptionally low likelihood, and this also plays a role; of this I'll give an example in a moment. I first want to say that this constraint, namely of different likelihoods for different arguments, restricts the equiprobability of words. It specifies that, of the words with frequency larger than zero in the argument position of a given operator, some have high or very high frequency, and some have frequency close to zero. So these are also constraints on the combinations ... of words that we see.