Language and information

Lecture 4: The Nature of Language

4.5. Language as an evolving system


Now, a few general considerations emerge from this. Language evolved, and is evolving; we may still, for that matter, be at an early stage of it. That it evolved, we see from these stages of development, and also from its accretional structure: the fact, for instance, that expanded sentences contain the elementary sentences; they are not a fresh start. In general, each further relation that we have seen is defined on the preceding ones, and it retains them. It does not supplant them, and when something new is formed it is not some completely new invention. When complex sentences were needed, it was not that a new kind of thing was done. When reduced sentences were wanted, it was not that a new kind of shorthand talking was done. They were always done on the basis of what had been done before, by an additional operation, and they added their meaning, or their semantic effect, to the meaning of the sentence that was formed before.
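To make the accretional point concrete, here is a minimal sketch in Python (my own illustration, not from the lecture; the operator names and words are hypothetical examples) of how an expanded sentence can be built by applying a further operator to an elementary sentence that is retained inside it rather than replaced.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Sentence:
    operator: str
    arguments: Tuple  # each argument is a word (str) or another Sentence

    def render(self) -> str:
        parts = [a.render() if isinstance(a, Sentence) else a for a in self.arguments]
        return f"{self.operator}({', '.join(parts)})"

# Elementary sentence: a first-level operator acting on word arguments.
elementary = Sentence("sleep", ("John",))

# Expanded sentence: a further operator takes the elementary sentence as one
# of its arguments; the earlier sentence is retained, not supplanted.
expanded = Sentence("know", ("Mary", elementary))

print(elementary.render())  # sleep(John)
print(expanded.render())    # know(Mary, sleep(John))
```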

This explains, among other things, many side matters which perhaps I needn't go into. Why, for instance, not all possibilities in language have indeed developed. Why there is no addressing system in language, though the partial ordering and the linear ordering make it possible. But it wasn't developed. There are things that simply did not happen. There are things which are treated in the simplest way possible, because the development stopped once an adequate stage was reached, if I may sort of speak teleologically for a moment. For example, very few operators in any language have more than two arguments. One can get the effect of an operator on three arguments by having a combination of two operators on two arguments. In general, in languages there are only two levels of operator, no more. One can show that the effect of a third level of operator, defined the way the others were (by the dependence properties of its arguments in turn), could be obtained by combining second-level operators in the requisite way, and so on. So there are various things which language has not done: either it hasn't gotten around to them, or it's too much trouble, and so forth.
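As a minimal sketch of the claim about argument count (my own Python illustration, not the lecturer's formalism; the operator names are hypothetical), the effect of a three-argument operator can be assembled from two two-argument operators, the second taking the result of the first as one of its own arguments.

```python
# A hypothetical three-argument operator: "X gives Y to Z".
def give3(giver, gift, recipient):
    return f"give({giver}, {gift}, {recipient})"

# Two two-argument operators that, combined, carry the same information:
# the second operator takes the first operator's result as an argument.
def transfer(giver, gift):
    return f"transfer({giver}, {gift})"

def to(event, recipient):
    return f"to({event}, {recipient})"

print(give3("John", "a book", "Mary"))         # give(John, a book, Mary)
print(to(transfer("John", "a book"), "Mary"))  # to(transfer(John, a book), Mary)
```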

Now in all this, we see the hallmark of a system that evolves, not a system with some kind of fixed structure a priori.

Furthermore, something about the uniqueness of language. Language is undoubtedly unique as an object, but the individual processes which we saw create language, as they were surveyed here, are not unique. The various things that I mentioned are things that exist elsewhere in the world; they are relations which are known elsewhere. A grammatical relation, [for example] to be a subject or an object, is not something that is known anywhere else. But to have things depend on other things, and appear only if the other things appear, is a dependence that you can talk about, a relationship that you can talk about without it being connected to language. Language is a demanding structure. There are some things that are in it, and are therefore right, and some things that are not in it, and are therefore wrong. But these demands, as we have just seen, can be understood as institutionalizations of a less demanding and more naturally occurring use in the combinings of words. In other words, there are demands in language which are unique to language, but we have just seen that one can reach these demands by a process of institutionalization of custom, of convenience, of what makes sense, and so forth. This does not mean that you can make language simply whatever makes sense, because language is an institutionalization. But it is important to know that the demands of language, the rules of the grammar, are reachable as the end product of a process of institutionalization of something that is not unique, and of course the process of institutionalization itself is by no means unique; it is very widely known in culture.

We can even, in principle, just to see what there is in language, what is unique, what one has to know about it (and this is sort of half a joke, but not really), count the demands in language. We can count the demands that are needed in the language to enable a person to speak that language. The reason is as follows: each constraint, as it has been described here, deals with phonemic shapes, which are themselves combinations of phonemic distinctions, and one can count what distinctions are needed in order to establish all the phonemes; and each constraint deals with those phonemic shapes and with the likelihoods of their occurrence with respect to each other. The likelihoods of their occurrence can also be counted. The count would be very, very large. You would have to count, so to speak, certain likelihoods in, I don't know, ten million sentences, constructed only out of a vocabulary of 2000 or 4000 words, as a sample. But it is not an impossibility. Now, this is one point: the things are countable. The applications of the constraints are therefore countable, in the form in which they have been presented here.
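As a minimal sketch of the counting idea (my own Python illustration, not a procedure from the lecture; the sample pairs are hypothetical), the pairwise likelihoods of words with respect to their operators can be tabulated from a finite sample, so the table to be learned is large but countable.

```python
from collections import Counter

# Toy sample of (operator, argument) occurrences drawn from analyzed sentences.
sample = [
    ("sleep", "John"), ("sleep", "Mary"), ("sleep", "John"),
    ("read", "John"), ("read", "book"),
]

pair_counts = Counter(sample)
operator_totals = Counter(op for op, _ in sample)

# Relative likelihood of each argument word with respect to its operator.
likelihood = {
    (op, arg): pair_counts[(op, arg)] / operator_totals[op]
    for (op, arg) in pair_counts
}

for (op, arg), p in sorted(likelihood.items()):
    print(f"P({arg} | {op}) = {p:.2f}")
```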

Furthermore, because each constraint is defined on the resultants of other constraints, we can arrange the counting activity so as to be sure that we are not counting anything twice, which is also very important. And this depends on the fact that the constraints are not independent things but are each defined on the others, so we can know whether we have counted a thing and have not counted it a second time.

Now, what does this mean? It means that in principle it is possible to see just how much a person has to know in order to speak a language within a given vocabulary limit. Nobody will do this counting, but it shows that there is nothing magical in what one has to know in order to speak.

Furthermore, and this is perhaps more important, we can see what kind of knowing is involved. We can see what is involved in knowing each such thing: in knowing the phonemic distinctions and the phonemic composition of words, a lot of rote learning; in understanding and knowing (well, really in knowing) the dependence of word-occurrences on other word-occurrences. One doesn't have to understand it, but one has to know the classification of words with respect to this partial ordering, that is all. And, of course, also in knowing the reductions (well, we also have to know the likelihoods of words, which is by far the largest job, but again only pairwise, only of the words with respect to their operators), and the rough meaning that is attached to such likelihoods; and the reduction in phonemic shape on certain fuzzy domains of low information. This is not so unique, and not so impossible.
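As a minimal schematization of these kinds of knowing (my own Python sketch; all entries and class labels are hypothetical illustrations, not data from the lecture), each component is just a finite table to be learned.

```python
# Each piece of knowledge named above, represented as a finite table.
knowledge = {
    # Rote learning: the phonemic distinctions and the phonemic shape of words.
    "phonemes": {"p", "b", "t", "d", "s", "l", "i", "a"},
    "word_shapes": {"sleep": ("s", "l", "i", "p")},

    # Classification of words with respect to the partial ordering:
    # zero-level arguments vs. operators, and what each operator requires.
    "word_classes": {"John": "argument", "sleep": "operator on one argument",
                     "know": "operator on an argument and a sentence"},

    # Pairwise likelihoods of words with respect to their operators
    # (by far the largest table, but still only pairwise).
    "likelihoods": {("sleep", "John"): 0.6, ("sleep", "book"): 0.01},

    # Reductions: shortenings in phonemic shape on low-information material
    # (hypothetical example).
    "reductions": {"I expect that John will come": "I expect John to come"},
}

for component, table in knowledge.items():
    print(f"{component}: {len(table)} entries to learn")
```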

Now, once one has a stock of words (meaning a stock of set, fixed sound-sequences referring to things and events), the overall picture that we see of language is that of a self-organizing system growing out of understandable combinings of words, and one doesn't need more.