Wednesday, December 31, 2008

Limitations of Computers as Translation Tools

Source: Translation Directory Online

By Alex Gross
http://language.home.sprynet.com

alexilen@sprynet.com


As should be more than evident from other contributions to this volume, the field of computer translation is alive and well—if anything, it is now entering what may prove to be its truly golden era. But there would be no need to point this out if certain problems from an earlier time had not raised lingering doubts about the overall feasibility of the field. Just as other authors have stressed the positive side of various systems and approaches, this chapter will attempt to deal with some of these doubts and questions, both as they may apply here and now to those planning to work with computer translation systems and also in a larger sense as they may be connected to some faulty notions about language held by the general public and perhaps some system developers as well. Explaining such doubts and limitations forthrightly can only help all concerned by making clear what is likely—and what is less likely—to work for each individual user. It can also clarify what the underlying principles and problems in this field have been and to some extent remain.

To begin with, the notion of computer translation is not new. Shortly after World War II, at a time when no one dreamt that word processors, spreadsheets, or drawing programs would be widely available, some of the computer's prime movers, Turing, Weaver and Booth among them, were already beginning to think about translation. (1) They saw this application mainly as a natural outgrowth of their wartime code-breaking work, which had helped to defeat the enemy, and it never occurred to them to doubt that computer translation was a useful and realizable goal.

The growing need to translate large bodies of technical information, heightened by an apparent shortage of translators, was one factor in their quest. But perhaps just as influential was a coupling of linguistic and cultural idealism, the belief that removing `language barriers' was a good thing, something that would promote international understanding and ensure world peace. Two related notions were surely that deep down all human beings must be basically similar and that piercing the superstratum of language divisions could only be beneficial by helping people to break through their superficial differences. (2) Underlying this idealism was a further assumption that languages were essentially some kind of code that could be cracked, that words in one tongue could readily be replaced by words saying the same thing in another. Just as the key to breaking the Axis code had been found, so some sort of linguistic key capable of unlocking the mysteries of language would soon be discovered. All these assumptions would be sorely tested in the decades ahead.

Some Basic Terms

A review of some of the most frequently used terms in this field, though they are also defined elsewhere in the book, will help the reader in dealing with our subject. It will quickly become evident that merely by providing these definitions, we will also have touched upon some of the field's major problems and limitations, which can then be explained in greater detail. For example, a distinction is frequently made between Machine Translation (usually systems that produce rough text for a human translator to revise) and Computer Assisted Translation devices (usually but not invariably software designed to help translators do their work in an enhanced manner). These are often abbreviated as MT and CAT respectively. So far both approaches require, to one extent or another, the assistance or active collaboration of a live human translator. Under Machine Translation one finds a further distinction between Batch, Interactive, and Interlingual Approaches. A Batch method has rules and definitions which help it `decide' on the best translation for each word as it goes along. It prints or displays the entire text thus created with no help from the translator (who need not even be present but who nonetheless may often end up revising it). An Interactive system pauses to consult with the translator on various words or asks for further clarification. This distinction is blurred by the fact that some systems can operate in either batch or interactive mode. The so-called Interlingual approach operates on the theory that one can devise an intermediate `language'—in at least one case a form of Esperanto—that can encode sufficient linguistic information to serve as a universal intermediate stage—or pivot point—enabling translation back and forth between numerous pairs of languages, despite linguistic or cultural differences. Some skepticism has been voiced about this approach, and to date no viable Interlingual system has been unveiled.

Batch and Interactive systems are sometimes also referred to as Transfer methods to differentiate them from Interlingual theories, because they concentrate on a trade or transfer of meaning based on an analysis of one language pair alone. I have tried to make these distinctions as clear as possible, and they do apply to a fair extent to the emerging PC-based scene. At the higher end on mini and mainframe computers, there is however a certain degree of overlap between these categories, frequently making it difficult to say where CAT ends and MT begins.

Another distinction is between pre-editing (limiting the extent of vocabulary beforehand so as to help the computer) and post-editing (cleaning up its errors afterwards). Usually only one is necessary, though this will depend on how perfect a translation is sought by a specific client. "Pre-editing" is also used to mean simply checking the text to be translated beforehand so as to add new words and expressions to the system's dictionary. The work devoted to this type of pre-editing can save time in post-editing later. A more extreme form of pre-editing is known as Controlled Language, whose severely limited vocabulary is used by a few companies to make MT as foolproof as possible.

Advocates of MT often point out that many texts do not require perfect translations, which leads us to our next distinction, between output intended for Information-Only Skimming by experts able to visualize the context and discount errors, and `Full-Dress' Translations, for those unable to do either. One term that keeps showing up is FAHQT for Fully Automatic High Quality Translation, which most in the field now concede is not possible (though the idea keeps creeping in again through the back door in claims made for some MT products and even some research projects). (3) Closer to current reality would be such descriptions as FALQT (Fully Automatic Low Quality Translation) and PAMQT (Partly Automatic Medium Quality Translation). Together, these three terms cover much of the spectrum offered by these systems.

Also often encountered in the literature are percentage claims purportedly grading the efficiency of computer translation systems. Thus, one language pair may be described as `90% accurate' or `95% accurate' or occasionally only `80% accurate.' The highest claim I have seen so far is `98% accurate.' Such ratings may have more to do with what one author has termed spreading `innumeracy' than with any meaningful standards of measurement. (4) On a shallow level of criticism, even if we accepted a claim of 98% accuracy at face value (and even if it could be substantiated), this would still mean that every standard double-spaced typed page would contain five errors—potentially deep substantive errors, since computers, barring a glitch, never make simple mistakes in spelling or punctuation.

It is for the reader to decide whether such an error level is tolerable in texts that may shape the cars we drive, the medicines and chemicals we take and use, the peace treaties that bind our nations. As for 95% accuracy, this would mean one error on every other line of a typical page, while with 90% accuracy we are down to one error in every line. Translators who have had to post-edit such texts tend to agree that with percentage claims of 90% or less it is easiest to have a human translator start all over again from the original text.
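For readers who want to check this arithmetic themselves, here is a minimal sketch in Python. It assumes, as the figures above imply, a standard double-spaced page of roughly 250 words laid out in about 25 lines; these page dimensions are conventions of my own, not measurements taken from any particular system.

```python
# Back-of-the-envelope arithmetic for the accuracy claims discussed above.
# Assumed page layout: ~250 words spread over ~25 lines (double-spaced).
WORDS_PER_PAGE = 250
LINES_PER_PAGE = 25

def errors_per_page(claimed_accuracy: float) -> float:
    """Expected word-level errors per page, reading 'accuracy' as the
    fraction of words translated correctly."""
    return WORDS_PER_PAGE * (1.0 - claimed_accuracy)

for accuracy in (0.98, 0.95, 0.90, 0.80):
    errors = errors_per_page(accuracy)
    print(f"{accuracy:.0%} accurate: about {errors:.0f} errors per page, "
          f"one every {LINES_PER_PAGE / errors:.1f} lines")
```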

On a deeper level, claims of 98% accuracy may be even more misleading—does such a claim in fact mean that the computer has mastered 98% of perfectly written English or rather 98% of minimally acceptable English? Is it possible that 98% of the latter could turn out to be 49% of the former? There is a great difference between the two, and so far these questions have not been addressed. Thus, we can see how our brief summary of terms has already given us a bird's eye view of our subject.

Practical Limitations

There are six important variables in any decision to use a computer for translation: speed, subject matter, desired level of accuracy, consistency of vocabulary, volume, and expense. These six determinants can in some cases be merged harmoniously in a single task, but they will at least as frequently tend to clash. Let's take a brief look at each:

1. Speed. This is an area where the computer simply excels—one mainframe system boasts 700 pages of raw output per night (while translators are sleeping), and other systems are equally prodigious. How raw the output actually is—and how much post-editing will be required, another factor of speed—will depend on how well the computer has been primed to deal with the technical vocabulary of the text being translated. Which brings us to our second category:

2. Subject matter. Here too the computer has an enormous advantage, provided a great deal of work has already gone into codifying the vocabulary of the technical field and entering it into the computer's dictionary. Thus, translations of aeronautical material from Russian to English can be not only speedy but may even graze the "98% accurate" target, because intensive work over several decades has gone into building up this vocabulary. If you are translating from a field whose computer vocabulary has not yet been developed, you may have to devote some time to bringing its dictionaries up to a more advanced level. Closely related to this factor is:

3. Desired level of accuracy. We have already touched on this in referring to the difference between Full-Dress Translations and work needed on an Information-Only basis. If the latter is sufficient, only slight post-editing—or none at all—may be required, and considerable cash savings can be the result. If a Full-Dress Translation is required, however, then much post-editing may be in order and there may turn out to be—depending once again on the quality of the dictionaries—no appreciable savings.

4. Consistency of vocabulary. Here the computer rules supreme, always assuming that correct prerequisite dictionary building has been done. Before computer translation was readily available, large commercial jobs with a deadline would inevitably be farmed out in pieces to numerous translators with perhaps something resembling a technical glossary distributed among them. Sometimes the task of "standardizing" the final version could be placed in the hands of a single person of dubious technical attainments. Even without the added problem of a highly technical vocabulary, it should be obvious that no two translators can be absolutely depended upon to translate the same text in precisely the same way. The computer can fully exorcize this demon and ensure that a specific technical term has only one translation, provided that the correct translation has been placed in its dictionary (and provided of course that only one term with only one translation is used for this process or entity). A brief sketch of this kind of glossary enforcement appears just after these six points.

5. Volume. From the foregoing, it should be obvious that some translation tasks are best left to human beings. Any work of high or even medium literary value is likely to fall into this category. But volume, along with subject matter and accuracy, can also play a role. Many years ago a friend of mine considered moving to Australia, where he heard that sheep farming was quite profitable on either a very small or a very large scale. Then he learned that a very small scale meant from 10,000 to 20,000 head of sheep, while a very large one meant over 100,000. Anything else was a poor prospect, and so he ended up staying at home. The numbers are different for translation, of course, and vary from task to task and system to system, but the principle is related. In general, there will be—all other factors being almost equal—a point at which the physical size of a translation will play a role in reaching a decision. Would-be users should carefully consider how all the factors we have touched upon may affect their own needs and intentions. Thus, the size and scope of a job can also determine whether or not you may be better off using a computer alone, some computer-human combination, or having human translators handle it for you from the start. One author proposes 8,000 pages per year in a single technical specialty with a fairly standardized vocabulary as the minimum requirement for translating text on a mainframe system. (6)

6. Expense. Given the computer's enormous speed and its virtually foolproof vocabulary safeguards, one would expect it to be a clear winner in this area. But for all the reasons we have already mentioned, this is by no means true in all cases. The last word is far from having been written here, and one of the oldest French companies in this field has just recently gotten around to ordering exhaustive tests comparing the expenses of computer and human translation, taking all factors into account. (5)
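To make point 4 a little more concrete, here is a minimal sketch of glossary enforcement in Python. The three term pairs are invented for illustration; a real MT or CAT dictionary would be vastly larger and more structured, but the principle is the same: one approved target term per source term, applied without fail.

```python
# A minimal, invented sketch of terminology enforcement (point 4 above).
# Real MT/CAT dictionaries are far larger and carry grammatical information.
import re

GLOSSARY = {
    "hard disk": "disque dur",
    "motherboard": "carte mère",
    "power supply": "bloc d'alimentation",
}

def apply_glossary(text: str, glossary: dict) -> str:
    """Replace each source term with its single approved target term,
    longest entries first so that multiword terms win over their parts."""
    for term in sorted(glossary, key=len, reverse=True):
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(glossary[term], text)
    return text

print(apply_glossary("Check the hard disk before replacing the power supply.",
                     GLOSSARY))
# -> Check the disque dur before replacing the bloc d'alimentation.
```

Trivial as it is, the sketch shows why the computer's advantage here is unconditional in a way a team of human translators' is not: the substitution never varies, for better and, when the dictionary entry is wrong, for worse.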

As we can see quite plainly, a number of complications and limitations are already evident. Speed, wordage, expense, subject matter, and accuracy/consistency of vocabulary may quickly become mutually clashing vectors affecting your plans. If you can make allowances for all of them, then computer translation can be of great use to you. If the decision-making process involved seems prolonged and tortuous, it perhaps merely reflects the true state of the art not only of computer translation but of our overall knowledge of how language really works. At least some of the apparent confusion about this field may be caused by a gap between what many people believe a computer should be able to do in this area and what it actually can do at present. What many still believe (and have, as we shall see, continued to believe over several decades, despite ample evidence to the contrary) is that a computer should function as a simple black box: you enter a text in Language A on one side, and it slides out written perfectly in Language B on the other. Or better still you read it aloud, and it prints or even speaks it aloud in any other language you might desire.

This has not happened and, barring extremely unlikely developments, will not happen in the near future, assuming our goal is an unerringly correct and fluent translation. If we are willing to compromise on that goal and accept less than perfect translations, or wish to translate texts within a very limited subject area or otherwise restrict the vocabulary we use, then extremely useful results are possible. Some hidden expenses may also be encountered—these can involve retraining translators to cooperate with mainframe and mini computers and setting up electronic dictionaries to contain the precise vocabulary used by a company or institution. Less expensive systems running on a PC with built-in glossaries also require a considerable degree of customizing to work most efficiently, since such smaller systems are far more limited in both vocabulary and semantic resolving power than their mainframe counterparts.

Furthermore, not all translators are at present prepared to make the adjustments in their work habits needed for such systems to work at their maximum efficiency. And even those able to handle the transition may not be temperamentally suited to make such systems function at their most powerful level. All attempts to introduce computer translation systems into the work routine depend on some degree of adjustment by all concerned, and in many cases such adjustment is not easy. Savings in time or money are usually achieved only at the end of such an adjustment period. Sometimes everyone in a company, from executives down to stock clerks, will be obliged to change their accustomed vocabularies to some extent to accommodate the new system. (6) Such a process can on occasion actually lead, however, to enhanced communication within a company.

Deeper Limitations

NOTE: This section explains how changing standards in the study of linguistics may be related to the limitations in Machine Translation we see today and perhaps prefigure certain lines of development in this field. Those only interested in the practical side may safely skip this section.

Some practical limitations of MT and even of CAT should already be clear enough. Less evident are the limitations in some of the linguistic theories which have sired much of the work in this field. On the whole Westerners are not accustomed to believing that problems may be insoluble, and after four decades of labor, readers might suppose that more progress had been made in this field than appears to be the case. To provide several examples at once, I can remember standing for some time by the display booth of a prominent European computer translation firm during a science conference at M.I.T. and listening to the comments of passers-by. I found it dismaying to overhear the same attitudes voiced over and over again by quite sane and reasonable representatives from government, business and education. Most of what I heard could be summed up as 1) Language can't really be that complex since we all speak it; 2) Language, like nature, is an alien environment which must be conquered and tamed; 3) There has to be some simple way to cut through all the nonsense about linguistics, syntax, and semantics and achieve instant high quality translation; and 4) Why wasn't it all done yesterday?

To understand the reasons behind these comments and why they were phrased in this particular way—and also to understand the deeper reasons behind the limitations of computer translation—it may be helpful to go back to the year 1944, when the first stirrings of current activity were little evident and another school of linguistics ruled all but supreme. In that year Leonard Bloomfield—one of the three deans of American Linguistics along with Edward Sapir and Benjamin Lee Whorf (7)—was struggling to explain a problem that greatly perturbed him.

Bloomfield was concerned with what he called `Secondary Responses to Language.' By these he meant the things people say and seem to believe about language, often in an uninformed way. He called such opinions about language `secondary' to differentiate them from the use of language in communication, which he saw as `primary.' People delivering such statements, he observed, are often remarkably alert and enthusiastic: their eyes grow bright, they tend to repeat these opinions over and over again to anyone who will hear, and they simply will not listen—even those who, like the ones I met at M.I.T., are highly trained and familiar with scientific procedures—to informed points of view differing from their own. They are overcome by how obvious or interesting their own ideas seem to be. (8)

I would add here that what Bloomfield seems to be describing is a set of symptoms clinically similar to some forms of hysteria. As he put it:

`It is only in recent years that I have learned to observe these secondary ... responses in anything like a systematic manner, and I confess that I cannot explain them—that is, correlate them with anything else. The explanation will doubtless be a matter of psychology and sociology.' (9)

If it is indeed hysteria, as Bloomfield seems to suggest, I wonder if it might not be triggered because some people, when their ideas about language are questioned or merely held up for discussion, feel themselves under attack at the very frontier of their knowledge about reality. For many people language is so close to what they believe that they are no longer able to tell the difference between reality and the language they use to describe it. It is an unsettling experience for them, one they cannot totally handle, somewhat like tottering on the edge of their recognized universe. The relationship between one's language habits and one's grasp of reality has not been adequately explored, perhaps because society does not yet train a sufficient number of bilingual, multilingual or linguistically oriented people qualified to undertake such investigations. (10)

Bloomfield went even further to define `tertiary responses to language' as innately hostile, angry, or contemptuous comments from those whose Secondary Responses are questioned in any serious way. They would be simply rote answers or rote repetitions of people's `secondary' statements whenever they were challenged on them, as though they were not capable of reasoning any further about them. Here he came close to identifying these responses with irrational or quasi-hysterical behavior.

What was it that Bloomfield found so worrisome about such opinions on language? Essentially he—along with Whorf and Sapir—had spent all his life building what most people regarded as the `science of linguistics.' It was a study which required extended field work and painstaking analysis of both exotic and familiar languages before one was permitted to make any large generalizations even about a single language, much less about languages in general. Closely allied to the anthropology of Boas and Malinowski, it insisted on careful and thoughtful observations and a non-judgmental view of different cultures and their languages. It was based on extremely high standards of training and scholarship and could not immediately be embraced by society at large. In some ways he and his colleagues had gone off on their own paths, and not everyone was able to follow them. Whorf and Sapir had in fact both died only a few years earlier, and Bloomfield himself would be gone five years later. Here are a few of the `secondary' statements that deeply pained Bloomfield and his generation of linguists:

Language A is more _____ than Language B. (`logical,' `profound,' `poetic,' `efficient,' etc.; fill in the blank yourself)

The structure of Language C proves that it is a universal language, and everyone should learn it as a basis for studying other languages.

Language D and Language E are so closely related that all their speakers can always easily understand each other.

Language F is extremely primitive and can only have a few hundred words in it.

Language G is demonstrably `better' than Languages H, J, and L.

The word for `________' (choose almost any word) in Language M proves scientifically that it is a worse—better, more `primitive' or `evolved,' etc.—language than Language N.

Any language is easy to master, once you learn the basic structure all languages are built on.

Summarized from Bloomfield, 1944, pp. 413-21

All of these statements are almost always demonstrably false upon closer knowledge of language and linguistics, yet such opinions are still quite commonly voiced. In this same piece Bloomfield also voiced his sadness over continual claims that `pure Elizabethan English' was spoken in this or that region of the American South (a social and historical impossibility—at best such dialects contain a few archaic phrases) or boasts that the Sequoyan Indian language was so perfect and easy to learn that all citizens of the State of Oklahoma should study it in school. (11) What he found particularly disturbing was that this sort of linguistic folklore never seemed to die out, never yielded to scientific knowledge, simply went on and on repropagating itself with a life of its own. Traces of it could even be found in the work of other scholars writing about language and linguistics.

Bloomfield's views were very much a reflection of his time. They stressed a relativistic view of language and culture and the notion that languages spoken by small indigenous groups of people had a significance comparable to that of languages spoken by much larger populations. They willingly embraced the notion that language, like reality itself, is a complex matrix of factors and tended to reject simplistic generalizations of any sort about either language or culture. Moreover, Bloomfield certainly saw his approach as being a crucial minimum stage for building any kind of true linguistic science.

Less than ten years after his death these ideas were replaced, also in the name of science, by a set of different notions, which Bloomfield would almost certainly have dismissed as `Secondary Responses to Language.' These new observations, which shared a certain philosophical groundwork with computational linguistics, constitute the credo of the Chomskian approach, now accepted as the dominant scientific view. They include the following notions:

All languages are related by a `universal grammar.'

It is possible to delineate the meaning of any sentence in any language through knowledge of its deep structure and thereby replicate it in another language.

A diagram of any sentence will reveal this deep structure.

Any surface level sentence in any language can easily be related to its deep structure, and this in turn can be related to universal grammar in a relatively straightforward manner through a set of rules.

These and related statements are sufficient to describe not only the structure of language but the entire linguistic process of development and acculturation of infants and young children everywhere and can thus serve as a guide to all aspects of human language, including speech, foreign language training, and translation.

The similarity of these deep and surface level diagrams to the structure of computer languages, along with the purported similarity of the human mind to a computer, may be profoundly significant. (12)

These ideas are clearly not ones Bloomfield could have approved of. They are not relativistic or cautious but universalist and all-embracing; they do not emphasize the study of individual languages and cultures but leap ahead into stunning generalizations. As such, he would have considered them examples of `Secondary Responses' to language. In many ways they reflect the America of the late 'Fifties, a nation proud of its own new-found dominance and convinced that its values must be more substantial than those of `lesser' peoples. Such ideas also coincide nicely with a perennial need academia feels for theories offering a seemingly scientific approach, suggestive diagrams, learned jargon, and a grandiose vision.

We all know that science progresses by odd fits and starts and that the supreme doctrines of one period may become the abandoned follies of a later one. But the turnabout we have described is surely among the most extreme on record. It should also be stressed that the outlook of Bloomfield, Whorf and Sapir has never truly been disproved or rejected and still has followers today. (13) Moreover, there is little viable proof that these newer ideas, while they may have been useful in describing the way children learn to speak, have ever helped a single teacher to teach languages better or a single translator to translate more effectively. Nor has anyone ever succeeded in truly defining `deep structure' or `universal grammar.'

No one can of course place the whole responsibility for the state of machine translation today on Noam Chomsky's theories about language—certainly his disciples and followers (14) have also played a role, as has the overall welcome this entire complex of ideas has received. Furthermore, their advent has also coincided with the re-emergence of many other `Secondary Responses', including most of the comments I mentioned overhearing at M.I.T. Much of the literature on Machine Translation has owed—and continues to owe—a fair amount to this general approach to linguistic theory. Overall understanding of language has certainly not flourished in recent times, and the old wives' tale of a single magical language providing the key to the understanding of all other tongues now flourishes again as a tribute both to Esperanto and the Indian Aymara language of Peru. (15) Disappointment with computer translation projects has also been widespread throughout this time, and at one point even Chomsky seemingly washed his hands of the matter, stating that `as for machine translation and related enterprises, they seemed to me pointless as well as probably quite hopeless.' (16)

Even such lofty notions as those favored by Turing and Weaver, that removing `language barriers' would necessarily be a good thing, or that different languages prevent people from realizing that they are `really all the same deep down,' could turn out to be `Secondary Responses.' It may also be that language barriers and differences have their uses and virtues, and that enhanced linguistic skills may better promote world peace than a campaign to destroy such differences. But popular reseeding of such notions is, as Bloomfield foresaw, quite insidious, and most of these ideas are still very much with us, right along with the evidence that they may be unattainable. This is scarcely to claim that the end is near for computers as translation tools, though it may mean that further progress along certain lines of enquiry is unlikely.

There are probably two compelling sets of reasons why computers can never claim the upper hand over language in all its complexity, one rooted in the cultural side of language, the other in considerations related to mathematics. Even if the computer were suddenly able to communicate meaning flawlessly, it would still fall short of what humans do with language in a number of ways. This is because linguists have long been aware that communication of meaning is only one among many functions of language. Others are:

Demonstrating one's class status to the person one is speaking or writing to.

Simply venting one's emotions, with no real communication intended.

Establishing non-hostile intent with strangers, or simply passing time with them.

Telling jokes.

Engaging in non-communication by intentional or accidental ambiguity, sometimes also called `telling lies.'

Two or more of the above (including communication) at once.

Under these circumstances it becomes very difficult to explain how a computer can be programmed merely to recognize and distinguish these functions in Language A, much less make all the adjustments necessary to translate them into Language B. As we have seen, computers have problems simply with the communications side, not to mention all these other undeniable aspects of language. This would be hard enough with written texts, but with spoken or `live' language, the problems become all but insurmountable.

Closely related here is a growing awareness among writers and editors that it is virtually impossible to separate the formulation of even the simplest sentence in any language from the audience to whom it is addressed. Said another way, when the audience changes, the sentence changes. Phrased even more extremely, there is no such thing as a `neutral' or `typical' or `standard' sentence—even the most seemingly innocuous examples will be seen on closer examination to be directed towards one audience or another, whether by age, education, class, profession, size of vocabulary, etc. While those within the target audience for any given sentence will assume its meaning is obvious to all, those on its fringes must often make a conscious effort to absorb it, and those outside its bounds may understand nothing at all. This is such an everyday occurrence that it is easy to forget how common it really is. And this too adds a further set of perplexities for translators to unravel, for they must duplicate not only the `meaning' but also the specialized `angling' to an analogous audience in the new language. Perhaps the most ironic proof of this phenomenon lies in the nature of the `model' sentences chosen by transformational and computational linguists to prove their points. Such sentences rarely reflect general usage—they are often simply the kinds of sentences used by such specialists to impress other specialists in the same field.

Further proof is provided here by those forms of translation often described as `impossible,' even when performed by humans—stageplays, song lyrics, advertising, newspaper headlines, titles of books or other original works, and poetry. Here it is generally conceded that some degree of adaptation may be merged with translation. Theatre dialogue in particular demands a special level of `fidelity.' Sentences must be pronounceable by actors as well as literally correct, and the emotional impact of the play must be recreated as fully as possible. A joke in Language A must also become a joke in Language B, even if it isn't. A constantly maintained dramatic build-up must seek its relief or `punch-lines' at the right moments. This may seem far from the concerns of a publication manager anxious to translate product documentation quickly and correctly. But in a real sense all use of words is dependent on building towards specific points and delivering `punch-lines' about how a product or process works. The difference is one of degree, not of quality. It is difficult to imagine how computers can begin to cope with this aspect of translation.

Cross-cultural concerns add further levels of complexity, and no miraculous `universal structure' (17) exists for handling them. Languages are simply not orderly restructurings of each other's ideas and processes, and a story I have told elsewhere (18) may perhaps best illustrate this. It relates to a real episode in my life when my wife and I were living in Italy. At that time she did most of the shopping to help her learn Italian, and she repeatedly came home complaining that she couldn't find certain cuts of meat at the butcher's. I told her that if she concentrated on speaking better Italian, she would certainly find them. But she still couldn't locate the cuts of meat she wanted. Finally, I was forced to abandon my male presumption of bella figura and go with her to the market place, where I patiently explained in Italian what it was we were looking for to one butcher after the next. But even together we were still not successful. What we wanted actually turned out not to exist.

The Italians cut their meat differently than we do. There are not only different names for the cuts but actually different cuts as well. Their whole system is built around it—they feed and breed their cattle differently so as to produce these cuts. So one might argue that the Italian steer itself is different—technically and anatomically, it might just qualify as a different subspecies.

This notion of `cutting the animal differently' or of `slicing reality differently' can turn out to be a factor in many translation problems. It is altogether possible for whole sets of distinctions, indeed whole ranges of psychological—or even tangible—realities to vanish when going from one language to another. Those which do not vanish may still be mangled beyond recognition. It is this factor which poses one of the greatest challenges even for experienced translators. It may also place an insurmountable stumbling block in the path of computer translation projects, which are based on the assumption that simple conversions of obvious meanings between languages are readily possible.

Another cross-cultural example concerns a well-known wager AI pioneer Marvin Minsky has made with his M.I.T. students. Minsky has challenged them to create a program or device that can unfailingly tell the difference, as humans supposedly can, between a cat and a dog. Minsky has made many intriguing remarks on the relation between language and reality, (19) but he shows in this instance that he has unwittingly been manipulated by language-imposed categories. The difference between a cat and a dog is by no means obvious, and even `scientific' Linnaean taxonomy may not provide the last word. The Tzeltal Indians of Mexico's Chiapas State in fact classify some of our `cats' in the `dog' category, rabbits and squirrels as `monkeys,' and a more doglike tapir as a `cat,' thus proving in this case that whole systems of animals can be sliced differently. Qualified linguistic anthropologists have concluded that the Tzeltal system of naming animals—making allowance for the fact that they know only the creatures of their region—is ultimately just as useful and informative as Linnaean latinisms and even includes information that the latter may omit. (20) Comparable examples from other cultures are on record. (21)

An especially dramatic cross-cultural example suggests that at least part of the raging battle as to whether acupuncture and the several other branches of Chinese Medicine can qualify as `scientific' springs from the linguistic shortcomings of Western observers. The relationships concerning illness the Chinese observe and measure are not the ones we observe, their measurements and distinctions are not the same as ours, their interpretation of such distinctions is quite different from ours, the diagnosis suggested by these procedures is not the same, and the treatment and interpretation of a patient's progress can also radically diverge from our own. Yet the whole process is perfectly logical and consistent in its own terms and is grounded in an empirical procedure. (18) The vocabulary is fiendishly difficult to explain to non-specialists in this highly developed branch of the Chinese language. No one knows how many other such instances of large and small discontinuities between languages and their meanings may exist, even among more closely related tongues like French and English, and no one can judge how great an effect such discontinuities may have on larger relationships between the two societies or even on ordinary conversations between their all too human representatives.

Just as the idea that the earth might be round went against the grain for the contemporaries of Columbus, so the notion that whole ranges of knowledge and experience may be inexpressible as one moves from one language to another seems equally outrageous to many today. Such a notion, that Language A cannot easily and perfectly replicate what is said in Language B, simply goes against what most people regard as `common sense.' But is such insistence truly commonsensical or merely another instance of Bloomfield's `Secondary Responses?' Something like this question lies at the root of the long-continuing and never fully resolved debate among linguists concerning the so-called Whorf-Sapir hypothesis. (7)

Mathematical evidence suggesting that computers can never fully overtake language is quite persuasive. It is also in part fairly simple and lies in a not terribly intricate consideration of the theory of sets. No subset can be larger than the set of which it is a part. Yet all of mathematics—and in fact all of science and technology, as members of a Linguistics school known as Glossematics (22) have argued—can be satisfactorily identified as a subcategory—and possibly a subset—of language. According to this reasoning, no set of its components can ever be great enough to serve as a representation of the superset they belong to, namely language. Allowing for the difficulties involved in determining the members of such sets, this argument by analogy alone would tend to place language and translation outside the limits of solvable problems and consign them to the realm of the intractable and undecidable. (23)
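For those who like to see such arguments written out, here is a loose formalization in LaTeX. It should be taken as a sketch of the analogy only: it assumes, quite debatably, that a natural language and its formal sublanguages can be treated as sets in the first place.

```latex
% A loose formalization of the Glossematic analogy above. It assumes,
% debatably, that a natural language and its formal sublanguages can be
% treated as sets in the first place.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $L$ be the set of everything expressible in a natural language, and let
$F \subseteq L$ be the portion occupied by mathematics, science, and
technology. If the inclusion is proper, $F \subsetneq L$, then there exists
some $x \in L \setminus F$: something sayable in the language that the formal
apparatus does not contain. On this analogy, any representational scheme
whose resources are drawn entirely from $F$ cannot account for all of $L$;
yet a complete mechanical treatment of translation would require exactly that.
\end{document}
```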

The theory of sets has further light to shed. Let us imagine all the words of Language A as comprising a single set, within which each word is assigned a number. Now let us imagine all the words of Language B as comprising a single set, with numbers once again assigned to each word. We'll call them Set A and Set B. If each numbered word within Set A meant exactly the same thing as each word with the same number in Set B, translation would be no problem at all, and no professional translators would be needed. Absolutely anyone able to read would be able to translate any text between these two languages by looking up the numbers for the words in the first language and then substituting the words with the same numbers in the second language. It would not even be necessary to know either language. And computer translation in such a case would be incredibly easy, a mere exercise in `search and replace,' immediately putting all the people searching through books of words and numbers out of business.

But the sad reality of the matter—and the real truth behind Machine Translation efforts—is that Word # 152 in Language A does not mean exactly what Word # 152 in Language B means. In fact, you may have to choose between Words 152, 157, 478, and 1,027 to obtain a valid translation. It may further turn out that Word 152 in Language B can be translated back into Language A not only as 152 but also 149, 462, and 876. In fact, Word # 152 in Language B may turn out to have no relation to Word # 152 in Language A at all. This is because 47 words with lower numbers in Language B had meanings that spilled over into further numbered listings. It could still be argued that all these difficulties could be sorted out by complex trees of search and goto commands. But such altogether typical examples are only the beginning of the problems faced by computational linguists, since words are rarely used singly or in a vacuum but are strung together in thick, clammy strings of beads according to different rules for different languages. Each bead one uses influences the number, shape, and size of subsequent beads, so that each new word in a Language A sentence compounds the problems of translation into Language B by an extremely non-trivial factor, with a possible final total exceeding by several orders of magnitude the problems confronted by those who program computers for the game of chess.
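A minimal sketch in Python may make the contrast plainer. The entries are invented for illustration; the point is simply that the `search and replace' picture assumes a one-to-one table, while real bilingual dictionaries fan out one-to-many and leave the choice to context.

```python
# An invented illustration of the "numbered word list" thought experiment.

# If translation really were code-breaking, one entry per number would do:
LANG_A = {152: "lead"}      # but which "lead"? the metal? a leash? a clue?
LANG_B = {152: "plomb"}     # French: the metal only

def naive_translate(word_number: int) -> str:
    """The 'search and replace' translator: same number, same word."""
    return LANG_B[word_number]

# The reality: one entry in Language A fans out to several candidates in
# Language B, and only context can decide among them.
LANG_A_TO_B = {
    152: ["plomb",            # lead, the metal
          "laisse",           # a dog's lead or leash
          "piste",            # a lead in an investigation
          "rôle principal"],  # the lead in a play or film
}

print(naive_translate(152))   # always "plomb", right or wrong
print(LANG_A_TO_B[152])       # the choice a translator actually faces
```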

There are of course some real technical experts, the linguistic equivalents of Chess Grand Masters, who can easily determine most of the time what the words mean in Language A and how to render them most correctly in Language B. These experts are called translators, though thus far no one has attributed to them the power or standing of Chess Masters. Another large irony: so far the only people who have proved capable of manipulating the extremely complex systems originally aimed at replacing translators have been, in fact... translators.

Translators and MT Developers: Mutual Criticisms

None of the preceding necessarily makes the outlook for Machine Translation or Computer Aided Translation all that gloomy or unpromising. This is because most developers in this field long ago accepted the limitations of having to produce systems that can perform specific tasks under specific conditions. What prospective users must determine, as I have sought to explain, is whether those conditions are also their conditions. Though there have been a few complaints of misrepresentation, this is a situation most MT and CAT developers are prepared to live with. What they are not ready to deal with (and here let's consider their viewpoint) is the persistence of certain old wives' tales about the flaws of computer translation.

The most famous of these, they will point out with some ire, are the ones about the expressions `the spirit is willing, but the flesh is weak' or `out of sight, out of mind' being run through the computer and coming out `the vodka is good, but the meat is rotten' and `invisible idiot' respectively. There is no evidence for either anecdote, they will protest, and they may well be right. Similar stories circulate about `hydraulic rams' becoming `water goats' or the headline `Company Posts Sizeable Growth' turning into `Guests Mail Large Tumor.' Yet such resentment may be somewhat misplaced. The point is not whether such and such a specific mistranslation ever occurred but simply that the general public—the same public equally prepared to believe that `all languages share a universal structure'—is also ready to believe that such mistranslations are likely to occur. In any case, these are at worst only slightly edited versions of fairly typical MT errors—for instance, I recently watched a highly regarded PC-based system render a `dead key' on a keyboard (touche morte) as `death touch.' I should stress that there are perfectly valid logical and human reasons why such errors occur, and that they are at least as often connected to human as to computer error. There are also perfectly reasonable human ways of dealing with the computer to avoid many of these errors.
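For the curious, here is a minimal sketch of how a rendering like `death touch' can come about, and how a single multiword dictionary entry, added by a human, removes it. The sense lists are invented for illustration and are far cruder than anything a real system would use.

```python
# An invented illustration of a word-by-word sense error and its cure.

WORD_SENSES = {                  # first sense = the system's default guess
    "touche": ["touch", "key (on a keyboard)"],
    "morte":  ["death", "dead"],
}

PHRASE_DICTIONARY = {            # a multiword entry added by a post-editor
    "touche morte": "dead key",
}

def word_by_word(noun: str, adjective: str) -> str:
    """Invert the French noun-adjective order, but pick each word's
    default sense with no regard for its neighbour."""
    return f"{WORD_SENSES[adjective][0]} {WORD_SENSES[noun][0]}"

def translate(phrase: str) -> str:
    if phrase in PHRASE_DICTIONARY:        # whole-phrase entries win
        return PHRASE_DICTIONARY[phrase]
    noun, adjective = phrase.split()
    return word_by_word(noun, adjective)

print(word_by_word("touche", "morte"))     # -> "death touch"
print(translate("touche morte"))           # -> "dead key"
```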

The point is that the public is really quite ambivalent—even fickle—not just about computer translation but about computers in general, indeed about much of technology. Lacking Roman gladiators to cheer, they will gladly applaud at the announcement that computers have now vanquished all translation problems but just as readily turn thumbs down on hearing tales of blatant mistranslations. This whole ambivalence is perhaps best demonstrated by a recent popular film where an early model of a fully robotized policeman is brought into a posh boardroom to be approved by captains of industry. The Board Chairman instructs an impeccably clad flunky to test the robot by pointing a pistol towards it. Immediately the robot intones `If you do not drop your weapon within twenty seconds, I will take punitive measures.' Naturally the flunky drops his gun, only to hear `If you do not drop your weapon within ten seconds, I will take punitive measures.' Some minutes later they manage to usher the robot out and clean up what is left of the flunky. Such attitudes towards all computerized products are widespread and coexist with the knowledge of how useful computers can be. Developers of computer translation systems should not feel that they are being singled out for criticism.

These same developers are also quite ready to voice their own criticisms of human translators, some of them justified. Humans who translate, they will claim, are too inconsistent, too slow, or too idealistic and perfectionist in their goals. It is of course perfectly correct that translators are often inconsistent in the words they choose to translate a given expression. Sometimes this is inadvertent, sometimes it is a matter of conscious choice. In many Western languages we have been taught not to repeat the same word too often: thus, if we say the European problem in one sentence, we are encouraged to say the European question or issue elsewhere. This troubles some MT people, though computers could be programmed easily enough to emulate this mannerism. We also have many fairly similar ways of saying quite close to the same thing, and this also impresses some MT people as a fault, mainly because it is difficult to program for.

This whole question could lead to a prolonged and somewhat technical discussion of "disambiguation," or how and when to determine which of several meanings a word or phrase may have—or for that matter of how a computer can determine when several different ways of saying something may add up to much the same thing. Though the computer can handle the latter more readily than the former, it is perhaps best to assume that authors of texts will avoid these two extreme shoals of "polysemy" and "polygraphy" (or perhaps "polyepeia") and seek out the smoother sailing of more standardized usage.
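The easier half of that bargain, pulling several phrasings back toward one standardized expression, really is close to table lookup, as the small invented sketch below suggests; it is the other direction, a single form with several competing meanings, that causes the genuine trouble.

```python
# An invented sketch of "polygraphy" handling: several phrasings, one
# standardized equivalent. The variant lists are illustrative only.

CANONICAL_FORMS = {
    "the European problem": [
        "the European question",
        "the European issue",
        "Europe's problem",
    ],
}

# Build a reverse index once: variant phrasing -> canonical form.
NORMALIZE = {
    variant: canonical
    for canonical, variants in CANONICAL_FORMS.items()
    for variant in variants
}

def normalize(phrase: str) -> str:
    """Map a known variant onto its standardized equivalent, if any."""
    return NORMALIZE.get(phrase, phrase)

for phrase in ("the European question", "the European issue",
               "the Balkan question"):
    print(phrase, "->", normalize(phrase))
```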
Perhaps the most impressive experiments on how imperfect translation can become were carried out by the French several decades ago. A group of competent French and English translators and writers gathered together and translated various brief literary passages back and forth between the two languages a number of times. The final results of such a process bore almost no resemblance to the original, much like the game played by children sitting in a circle, each one whispering words just heard to the neighbor on the right. (24) Here too the final result bears little resemblance to the original words.

The criticisms of slowness and perfectionism/idealism are related to some extent. While the giant computers used by the C.I.A. and N.S.A. can of course spew out raw translation at a prodigious rate, this is our old friend Fully Automatic Low Quality output and must be edited to be clear to any but an expert in that specialty. There is at present no evidence suggesting that a computer can turn out High Quality text at a rate faster than a human—indeed, humans may in some cases be faster than a computer, if FAHQT is the goal. The claim is heard in some MT circles that human translators can only handle 200 to 500 words per hour, which is often true, but some fully trained translators can do far better. I know of many translators who can handle from 800 to 1,000 words per hour (something I can manage under certain circumstances with certain texts) and have personally witnessed one such translator use a dictating machine to produce between 3,000 and 4,000 words per hour (which of course then had to be fed to typists).

Human ignorance—not just about computers but about how languages really work—creeps in here again. Many translators report that their non-translating colleagues believe it should be perfectly possible for a translator to simply look at a document in Language A and `just type it out' in flawless Language B as quickly as though it were the first language. If human beings could do this, then there might be some hope for computers to do it too. Here again we have an example of Bloomfield's Secondary Responses to Language, the absolute certainty that any text in one language is exactly the same in another, give or take some minimal word juggling. There will be no general clarity about computer translation until there is also a greatly enhanced general clarity about what languages are and how they work.

In all of this the translator is rarely perceived as a real person with specific professional problems, as a writer who happens to specialize in foreign languages. When MT systems are introduced, the impetus is most often to retrain and/or totally reorganize the work habits of translators or replace them with younger staff whose work habits have not yet been formed, a practice likely to have mixed results in terms of staff morale and competence. Another problem, in common with word processing, is that no two translating systems are entirely alike, and a translator trained on one system cannot fully apply that experience to another. Furthermore, very little effort is made to persuade translators to become a factor in their own self-improvement. Of any three translators trained on a given system, only one at best will work to use the system to its fullest extent and maximize what it has to offer. Doing so requires a high degree of self-motivation and a willingness to improvise glossary entries and macros that can speed up work. Employees clever enough to do such things are also likely to be upwardly mobile, which may mean soon starting the training process all over again, possibly with someone less able. Such training also forces translators to recognize that they are virtually wedded to creating a system that will improve and grow over time. This is a great deal to ask in either America's fast-food job market or Europe's increasingly mobile work environment. Some may feel it is a bit like singling out translators and asking them to willingly declare their life-long serfdom to a machine.

And the Future?

Computer translation developers prefer to ignore many of the limitations I have suggested, and they may yet turn out to be right. What MT proponents never stop emphasizing is the three-fold promise awaiting us in the not so distant future: increasing computer power, rapidly dwindling size, and plummeting prices. Here they are undoubtedly correct, and they are also probably correct in pointing out the vast increase in computer power that advanced multi-processing and parallel processing can bring. Equally impressive are potential improvements in the field of Artificial Intelligence, allowing for the construction of far larger rule-based systems likely to be able to make complicated choices between words and expressions. (25) Neural Nets (26), along with their Hidden Markov Model cousins (27), also loom on the horizon with their much publicized ability to improvise decisions in the face of incomplete or inaccurate data. And beyond that stretches the prospect of nanotechnology, (28) an approach that will so miniaturize computer pathways as to single out individual atoms to perform tasks now requiring an entire circuit. All but the last are already with us, either now in use or under study by computer companies or university research projects. We also keep hearing early warnings of the imminent Japanese wave, ready to take over at any moment and overwhelm us with all manner of `voice-writers,' telephone-translators, and simultaneous computer-interpreters.

How much of this is simply more of the same old computer hype, with a generous helping of Bloomfield's Secondary Responses thrown in? Perhaps the case of the `voice-writer' can help us to decide. This device, while not strictly a translation tool, has always been the audio version of the translator's black box: you say things into the computer, and it immediately and flawlessly transcribes your words into live on-screen sentences. In most people's minds, it would take just one small adjustment to turn this into a translating device as well.

In any case, the voice-writer has never materialized (and perhaps never will), but the quest for it has now produced a new generation of what might best be described as speaker-assisted speech processing systems. Though no voice-writers, these systems are quite useful and miraculous enough in their own way. As you speak into them at a reasonable pace, they place on the screen their best guess for each word you say, along with a menu showing the next best guesses for that word. If the system makes a mistake, you can simply tell it to choose another number on the menu. If none of the words shown is yours, you still have the option of spelling it out or keying it in. This ingenious but relatively humble device, I predict, will soon take its place as a useful tool for some translators. This is because it is user-controlled rather than user-supplanting and can help those translators who already use dictation as their means of transcribing text. Those who lose jobs because of it will not be translators but typists and secretaries.
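For readers curious about the mechanics, here is a minimal sketch of that correction loop in Python. The recognizer itself is faked with a fixed list of guesses, since producing real hypotheses from audio is precisely the hard part; only the speaker-assisted selection step is shown.

```python
# An invented sketch of the speaker-assisted correction loop described above.
# The "recognizer" is a canned list of best guesses plus runners-up.

FAKE_N_BEST = [
    ("recognise", ["recognize", "reorganise", "wreck a nice"]),
    ("beach",     ["speech", "beech", "breach"]),
]

def confirm(best: str, alternatives: list) -> str:
    """Show the top guess and a numbered menu of runners-up; the speaker
    accepts the guess, picks a number, or keys the word in directly."""
    print(f"best guess: {best}")
    for i, alt in enumerate(alternatives, start=1):
        print(f"  {i}. {alt}")
    choice = input("Enter = accept, number = pick, or type the word: ").strip()
    if not choice:
        return best
    if choice.isdigit() and 1 <= int(choice) <= len(alternatives):
        return alternatives[int(choice) - 1]
    return choice            # the speaker typed the word in directly

if __name__ == "__main__":":
    accepted = [confirm(best, alts) for best, alts in FAKE_N_BEST]
    print("transcribed:", " ".join(accepted))
```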

Whenever one discovers such a remarkable breakthrough as these voice systems, one is forced to wonder if just such a breakthrough may be in store for translation itself, whether all one's reasons to the contrary may not be simply so much rationalization against the inevitable. After due consideration, however, it still seems to me that such a breakthrough is unlikely for two further reasons beyond those already given. First, the very nature of this voice device shows that translators cannot be replaced, simply because it is the speaker who must constantly be on hand to determine if the computer has chosen the correct word, in this case in the speaker's native language.

How much more necessary does it then become to have someone authoritative nearby, in this case a translator, to ensure that the computer chooses correctly amidst all the additional choices imposed where two languages are concerned? And second, really a more generalized way of expressing my first point, whenever the suspicion arises that a translation of a word, paragraph, or book may be substandard, there is only one arbiter who can decide whether this is or is not the case: another translator. There are no data bases, no foreign language matching programs, no knowledge-engineered expert systems sufficiently supple and grounded in real world knowledge to take on this job. Writers who have tried out any of the so-called "style-checking" and "grammar-checking" programs for their own languages have some idea of how much useless wheel-spinning such programs can generate for a single tongue and so can perhaps imagine what an equivalent program for "translation-checking" would be like.

Perhaps such a program could work with a severely limited vocabulary, but there would be little point to it, since it would only be measuring the accuracy of those texts computers could already translate. Based on current standards, such programs would at best produce verbose quantities of speculations which might exonerate a translation from error but could not be trusted to separate good from bad translators except in the most extreme cases. It could end up proclaiming as many false negatives as false positives and become enshrined as the linguistic equivalent of the lie detector. And if a computer cannot reliably check the fidelity of an existing translation, how can it create a faithful translation in the first place?

Which brings me almost to my final point: no matter what gargantuan stores of raw computer power may lie before us, no matter how many memory chips or AI rules or neural nets or Hidden Markov Models or self-programming atoms we may lay end to end in vast arrays or stack up in whatever conceivable architecture the human mind may devise, our ultimate problem remains 1) to represent, adequately and accurately, the vast interconnections between the words of a single language on the one hand and reality on the other, 2) to perform the equivalent task with a second language, and 3) to completely and correctly map out all the interconnections between them. This is ultimately a linguistic problem and not an electronic one at all, and most people who take linguistics seriously have been racking their brains over it for years without coming anywhere near a solution.

Computers with limitless power will be able to do many things today's computers cannot do. They can provide terminologists with virtually complete lists of all possible terms to use, they can branch out into an encyclopedia of all related terms, they can provide spot logic checking of their own reasoning processes, they can even list the rules which guide them and cite the names of those who devised the rules and the full text of the rules themselves, along with extended scholarly citations proving why they are good rules. But they cannot reliably make the correct choice between competing terms in the great majority of cases. In programming terms, there is no shortage of ways to input various aspects of language nor of theories on how this should be done—what is lacking is a coherent notion of what must be output and to whom, of what should be the ideal `front-end' for a computer translation system. Phrased more impressionistically, all these looming new approaches to computing may promise endless universes of artificial spider's webs in which to embed knowledge about language, but will the real live spiders of language—words, meaning, trust, conflict, emotion—actually be willing to come and live in them?

And yet Bloomfieldian responses are heard again: there must be some way around all these difficulties. Throughout the world, industry must go on producing and selling—no sooner is one model of a machine on the market than its successor is on the way, urgently requiring translations of owners' manuals, repair manuals, factory manuals into a growing number of languages. This is the driving engine behind computer translation that will not stop, the belief that there must be a way to bypass, accelerate or outwit the translation stage. If only enough studies were made and enough money spent, the reasoning goes, perhaps on a full-scale program like those mounted to conquer space, the electron, DNA, cancer, the oceans, volcanoes and earthquakes, surely the conquest of something as seemingly puny as language cannot be beyond us. But at least one computational linguist has taken a radically opposite stance:

A Manhattan project could produce an atomic bomb, and the heroic efforts of the 'Sixties could put a man on the moon, but even an all-out effort on the scale of these would probably not solve the translation problem.

—Kay, 1982, p. 74

He goes on to argue that its solution will have to be reached incrementally if at all and specifies his own reasons for thinking this can perhaps one day happen in at least some sense:

The only hope for a thoroughgoing solution seems to lie with technology. But this is not to say that there is only one solution, namely machine translation, in the classic sense of a fully automatic procedure that carries a text from one language to another with human intervention only in the final revision. There is in fact a continuum of ways in which technology could be brought to bear, with fully automatic translation at one extreme, and word-processing equipment and dictating machines at the other.

—Ibid.

The real truth may be far more sobering. As Bloomfield and his contemporaries foresaw, language may be no puny afterthought of culture, no mere envelope of experience but a major functioning part of knowledge, culture and reality, their processes so interpenetrating and mutually generating as to be inseparable. In a sense humans may live in not one but two jungles, the first being the tangible and allegedly real one with all its trials and travails. But the second jungle is language itself, perhaps just as difficult to deal with in its way as the first.

At this point I would like to make it abundantly clear that I am no enemy either of computers or computer translation. I spend endless hours at the keyboard, am addicted to downloading all manner of strange software from bulletin boards, and have even ventured into producing some software of my own. Since I also love translation, it is natural that one of my main interests would lie at the intersection of these two fields. Perhaps I risk hyperbole, but it seems to me that computer translation ought to rank as one of the noblest of human undertakings, since in its broadest aspects it attempts to understand, systematize, and predict not just one aspect of life but all of human understanding itself. Measured against such a goal, even its shortcomings have a great deal to tell us. Perhaps one day it will succeed in such a quest and lead us all out of the jungle of language and into some better place. Until that day comes, I will be more than happy to witness what advances will next be made.

Despite having expressed a certain pessimism, I foresee in fact a very optimistic future for those computer projects which respect some of the reservations I have mentioned and seek limited, reasonable goals in the service of translation. These will include computer-aided systems with genuinely user-friendly interfaces, batch systems which best deal with the problem of making corrections, and—for those translators who dictate their work—the new voice processing systems I have mentioned. There also seems to be considerable scope for using AI to resolve ambiguities in technical translation with a relatively limited vocabulary. Beyond this, I am naturally describing my reactions based on a specific moment in the development of computers and could of course turn out to be quite mistaken. In a field where so many developments move with such remarkable speed, no one can lay claim to any real omniscience, and so I will settle at present for guarded optimism over specific improvements, which will not be long in overtaking us.


Alex Gross served as a literary advisor to the Royal Shakespeare Company during the 1960s, and his translations of Dürrenmatt and Peter Weiss have been produced in London and elsewhere. He was awarded a two-year fellowship as writer-in-residence by the Berliner Künstler-Programm, and one of his plays has been produced in several German cities. He has spent twelve years in Europe and is fluent in French, German, Italian and Spanish. He has published works related to the translation of traditional Chinese medicine and is planning further work in this field. Two more recent play translations were commissioned and produced by UBU Repertory Company in New York, one of them as part of the official American celebration of the French Revolutionary Bicentennial in 1989. Published play translations are The Investigation (Peter Weiss, London, 1966, Calder & Boyars) and Enough Is Enough (Protais Asseng, NYC, 1985, Ubu Repertory Co.). His experience with translation has also encompassed journalistic, diplomatic and commercial texts, and he has taught translation as part of NYU's Translation Certificate Program. In the last few years a number of his articles on computers, translation, and linguistics have appeared in the United Kingdom, Holland, and the US. He is the Chairperson of the Machine Translation Committee of the New York Circle of Translators, is also an active member of the American Translators Association, and has been involved in the presentations and publications of both groups.

NOTES:

(1) In 1947 Alan Turing began work on his paper Intelligent Machinery, published the following year. Based on his wartime experience in decoding German Naval and General Staff messages, this work foresaw the use of `television cameras, microphones, loudspeakers, wheels and "handling servo-mechanisms"' as well as some sort of "electric brain."' It would be capable of:

`(i) Various games...
`(ii) The learning of languages
`(iii) Translation of languages (my emphasis)
`(iv) Cryptography
`(v) Mathematics'

Further details on Turing's role are found in Hodges. The best overview of this entire period, as well as of the entire history of translating computers, is of course provided by Hutchins.

(2) See especially Weaver.

(3) Typical among these have been advertisements for Netherlands-based Distributed Language Technology, which read in part: `DLT represents the safe route to truly automatic translation: without assistance from bilinguals, polyglots, or post-editors. But meeting the quality standards of professional translators—no less... The aim is a translation machine that understands, that knows how to tell sense from nonsense... In this way, DLT will surpass the limitations of formal grammar or man-made dictionaries...' At various times during its long development, this system has boasted the use of pre-editing, gigantic bilingual knowledge banks, an Esperanto Interlingual architecture, Artificial Intelligence, and the ability to handle `a vast range of texts on general and special subjects.' (Source: ads in Language Technology/Electric Word, in most 1989 issues) On the research side, Jaime G. Carbonell and Masaru Tomita announced in 1987 that Carnegie-Mellon University `has begun a project for the development of a new generation of MT systems whose capabilities range far beyond the current technology.' They further specified that with these systems, `... unlike current MT systems, no human translator should be required to check and correct the translated text.' (Carbonell & Tomita) This treatment is found in Sergei Nirenburg's excellent though somewhat technical anthology (Nirenburg).

(4) According to an influential book in the United States, `innumeracy' is as great a threat to human understanding as illiteracy (Paulos).

(5) `In the testing phase, some 5000 pages of documentation, in three types of text, will be processed, and the results compared with human translation of the same text in terms of quantity, time taken, deadlines met, and cost.' (Kingscott) This piece describes the B'Vital/Ariane method now being used by the French documentation giant SITE. To my knowledge, this is the first reasonably thorough test proposed comparing human and machine translation. Yet it is limited to one system in one country under conditions which, after the fact, will most probably be challenged by one party or another. Human translators will certainly demand to know if full setup costs, on-the-job training courses, and software maintenance expenses have been fully amortized. For their part, machine translation advocates might conceivably ask how human translators were chosen for the test and/or what level of training was provided. These are all questions which merit further consideration if a fair discussion comparing computer and human translation is to take place.

(6) Newman, as included in Vasconcellos 1988a. In addition to this excellent piece, those obtaining this volume will also want to read Jean Datta's candid advice on why computer translation techniques should be introduced into a business or institution slowly and carefully (Datta), Muriel Vasconcellos' own practical thoughts on where the field is headed (Vasconcellos, 1988b), and Fred Klein's dose of healthy skepticism (Klein).

(7) Both Sapir and Whorf carried out extensive study of American Indian languages and together evolved what has come to be called the Whorf-Sapir Hypothesis. Briefly stated, this theory holds that what humans see, do and know is to a greater or lesser extent based on the structure of their language and the categories of thought it encourages or excludes. The prolonged and spirited debate around this hypothesis has largely centered on the meaning of the phrase to a greater or lesser extent. Even the theory's most outright opponents concede it may have validity in some cases, though they see something resembling strict determinism in applying it too broadly and point out that translation between languages would not be possible if the Whorf-Sapir Hypothesis were true. Defenders of the theory charge that its critics may not have learned any one language thoroughly enough to become fully aware of how it can hobble and limit human thinking and further reply that some translation tasks are far more difficult than others, sometimes bordering on the impossible.

(8) Bloomfield, Secondary and Tertiary Responses to Language, in Hockett 1970, pp. 412-29. This piece originally appeared in Language 20.45-55 and was reprinted in Hockett 1970 and elsewhere. The author's major work in the field of linguistics was Bloomfield 1933/1984.

(9) Bloomfield, in Hockett 1970, page 420.

(10) Since so many people in so many countries speak two or more languages, it might be imagined that there is a broad, widely-shared body of accurate knowledge about such people. In point of fact there is not, and the first reasonably accessible book-length account of this subject is Grosjean. Some of this book's major points, still poorly appreciated by society at large:

Relatively few bilingual people are able to translate between their two languages with ease. Some who try complain of headaches, many cannot do it at all, many others do it badly but are not aware of this. Thus, bilingualism and translation skills are two quite different abilities, perhaps related to different neurological processes.

No bilinguals possess perfectly equal skills in both their languages. All favor the one or the other at least slightly, whether in reading, writing, or speaking. Thus, the notion of being brought up perfectly bilingual is a myth—much of bilingualism must be actively achieved in both languages.

One does not have to be born bilingual to qualify as such. Those who learn a second language later, even as adults, can be considered bilingual to some extent, provided they actively or passively use a second language in some area of their lives.

(11) Bloomfield, in Hockett 1970, pp. 414-16.

(12) Though presented here in summarized form, these ideas all form part of the well-known Chomskian process and can be found elaborated in various stages of complexity in many works by Chomsky and his followers. See Chomsky, 1957, 1965, & 1975.

(13) The bloodied battlefields of past scholarly warfare waged over these issues are easily enough uncovered. In 1968 Charles Hockett, a noted follower of Bloomfield, launched a full-scale attack on Chomsky (Hockett, 1968). Those who wish to follow this line of debate further can use his bibliography as a starting point. Hostilities even spilled over into a New Yorker piece and a book of the same name (Mehta). Other starting points are the works of Chomsky's teacher (Harris) or a unique point of view related to computer translation by Lehmann. Throughout this debate, there have been those who questioned why these transformational linguists, who claim so much knowledge of language, should write such dense and unclear English. When questioned on this, Mehta relates Chomsky's reply as follows: `"I assume that the writing in linguistics is no worse than the writing in any other academic field," Chomsky says. "The ability to use language well is very different from the ability to study it. Once the Slavic Department at Harvard was thinking of offering Vladimir Nabokov an appointment. Roman Jakobson, the linguist, who was in the department then, said that he didn't have anything against elephants but he wouldn't appoint one a professor of zoology." Chomsky laughs.'

(14) See for example Fodor or Chisholm.

(15) See Note 5 for reference to Esperanto. The South American Indian language Aymara has been proposed and partially implemented as a basis for multilingual Machine Translation by the Bolivian mathematician Ivan Guzman de Rojas, who claims that its special syntactic and logical structures make it an ideal vehicle for such a purpose. On a surface analysis, such a notion sounds remarkably close to Bloomfieldian secondary responses about the ideal characteristics of the Sequoyan language, long before computers entered the picture. (Guzman de Rojas)

(16) See Chomsky, 1975, p. 40.

(17) The principal work encouraging a search for `universal' aspects of language is Greenberg. Its findings are suggestive but inconclusive.

(18) This section first appeared in a different form as a discussion between Sandra Celt and the author (Celt & Gross).

(19) Most of Marvin Minsky's thoughts on language follow a strictly Chomskian framework—thus, we can perhaps refer to the overall outlook of his school as a Minskian-Chomskian one. For further details see Sections 19-26 of Minsky.

(20) See Hunn for a considerably expanded treatment.

(21) A rich literature expanding on this theme can be found in the bibliography of the book mentioned in the preceding note.

(22) Glossematics is in the U.S. a relatively obscure school of linguistics, founded by two Danes, Louis Hjelmslev and Hans Jørgen Uldall, earlier in the century. Its basic thesis has much in common with thinking about computers and their possible architectures. It starts from the premise that any theory about language must take into account all possible languages that have ever existed or can exist, that this is the absolute minimum requirement for creating a science of linguistics. To objections that this is unknowable and impossible, its proponents reply that mathematicians regularly deal with comparable unknowables and are still able to make meaningful generalizations about them. From this foundation emerges the interesting speculation that linguistics as a whole may be even larger than mathematics as a whole, and that `Linguistics' may not be that science which deals with language but that the various so-called sciences with their imperfect boundaries and distinctions may in fact be those branches of linguistics that deal for the time being with various domains of linguistics. Out of this emerges the corollary that taxonomy is the primary science, and that only by naming things correctly can one hope to understand them more fully. Concomitant with these notions also arises an idea that ought to have attracted computer translation researchers, that a glossematic approach could lay down the basis for creating culture-independent maps of words and realities through various languages, assigning precise addresses for each `word' and `meaning,' though it would require a truly vast system for its completion and even then would probably only provide lists of possible translations rather than final translated versions. The major theoretical text of Glossematics, somewhat difficult to follow like many linguistic source books, is Hjelmslev. One excellent brief summary in English is Whitfield, another available only in Spanish or Swedish is Malmberg.

(23) Different strands of this argument may be pursued in Nagel and Newman, Harel, and Goedel.

(24) Vinay & Darbelnet, pp. 195-96.

(25) In correct academic terms, Artificial Intelligence is not some lesser topic related to Machine Translation, rather Machine Translation is a branch of Artificial Intelligence. Some other branches are natural language understanding, voice recognition, machine vision, and robotics. The successes and failures of AI constitute a very different story and a well-publicized one at that—it can be followed in the bibliography provided by Minsky. On AI and translation, see Wilks.

(26) Neural Nets are once again being promoted as a means of capturing knowledge in electronic form, especially where language is concerned. The source book most often cited is Rumelhart and McClelland.

(27) Hidden Markov Models, considered by some merely a different form of Neural Nets but by others as a new technology in its own right, are also being mentioned as having possibilities for Machine Translation. They have, as noted, proved quite effective in facilitating computer-assisted voice transcription techniques.

(28) The theory of nanotechnology visualizes a further miniaturization in computers, similar to what took place during the movement from tubes to chips, but in this case actually using internal parts of molecules and even atoms to store and process information. Regarded with skepticism by some, this theory also has its fervent advocates (Drexler).

NOTE OF ACKNOWLEDGEMENT: I wish to express my gratitude to the following individuals, who read this piece in an earlier version and assisted me with their comments and criticisms: John Baez, Professor of Mathematics, Wellesley College; Alan Brody, computer consultant and journalist; Sandra Celt, translator and editor; Andre Chassigneux, translator and Maitre de Conferences at the Sorbonne's Ecole Superieure des Interpretes et des Traducteurs (L'ESIT); Harald Hille, English Terminologist, United Nations; Joseph Murphy, Director, Bergen Language Institute; Lisa Raphals, computer consultant and linguist; Laurie Treuhaft, English Translation Department, United Nations; Vieri Tucci, computer consultant and translator; Peter Wheeler, Director, Antler Translation Services; Apollo Wu, Revisor, Chinese Department, United Nations. I would also like to extend my warmest thanks to John Newton, the editor of this volume, for his many helpful comments.

SELECT BIBLIOGRAPHY:

As noted above, this bibliography has not gone through the comprehensive checking of the published edition, which the reader may also wish to consult.

Bloomfield, Leonard (1933) Language, New York (reprinted in great part in 1984, University of Chicago).

Bloomfield, Leonard (1944) Secondary and Tertiary Responses to Language. This piece originally appeared in Language 20.45-55, and has been reprinted in Hockett 1970 and elsewhere. This particular citation appears on page 420 of the 1970 reprint.

Booth, Andrew Donald, editor (1967) Machine Translation, Amsterdam.

Brower, R.A. editor (1959) On Translation, Harvard University Press.

Carbonell, Jaime G. & Tomita, Masaru (1987) Knowledge-Based Machine Translation and the CMU Approach, in Nirenburg.

Celt, Sandra & Gross, Alex (1987) The Challenge of Translating Chinese Medicine, Language Monthly, April.

Chisholm, William S., Jr. (1981) Elements of English Linguistics, Longman.

Chomsky, Noam (1957) Syntactic Structures, Mouton, The Hague.

Chomsky, Noam (1965) Aspects of the Theory of Syntax, MIT Press.

Chomsky, Noam (1975) The Logical Structure of Linguistic Theory, p. 40, University of Chicago Press.

Coughlin, Josette (1988) Artificial Intelligence and Machine Translation, Present Developments and Future Prospects, in Babel 34:1, pp. 3-9.

Datta, Jean (1988) MT in Large Organizations, Revolution in the Workplace, in Vasconcellos 1988a.

Drexler, Eric K. (1986) Engines of Creation, Foreword by Marvin Minsky, Anchor Press, New York.

Fodor, Jerry A. & Katz, Jerrold J. (1964) The Structure of Language, Prentice-Hall, N.Y.

Goedel, Kurt (1931) Ueber formal unentscheidbare Saetze der Principia Mathematica und verwandter Systeme I, Monatshefte fuer Mathematik und Physik, vol. 38, pp. 173-198.

Greenberg, Joseph (1963) Universals of Language, M.I.T. Press.

Grosjean, Francois (1982) Life With Two Languages: An Introduction to Bilingualism, Harvard University Press.

Guzman de Rojas, Ivan (1985) Logical and Linguistic Problems of Social Communication with the Aymara People, International Development Research Center, Ottawa.

Harel, David (1987) Algorithmics: The Spirit of Computing, Addison-Wesley.

Harris, Zellig (1951) Structural Linguistics, Univ. of Chicago Press.

Hjelmslev, Louis (1961) Prolegomena to a Theory of Language, translated by Francis Whitfield, University of Wisconsin Press, (Danish title: Omkring sprogteoriens grundlaeggelse, Copenhagen, 1943)

Hockett, Charles F. (1968) The State of the Art, Mouton, The Hague.

Hockett, Charles F. (1970) A Leonard Bloomfield Anthology, Bloomington (contains Bloomfield 1944).

Hodges, Andrew (1983) Alan Turing: The Enigma, Simon & Schuster, New York.

Hunn, Eugene S. (1977) Tzeltal Folk Zoology: The Classification of Discontinuities in Nature, Academic Press, New York.

Hutchins, W.J. (1986) Machine Translation: Past, Present, Future, John Wiley & Sons.

Jakobson, Roman (1959) On Linguistic Aspects of Translation, in Brower.

Kay, Martin (1982) Machine Translation, from American Journal of Computational Linguistics, April-June, pp. 74-78.
Kingscott, Geoffrey (1990) SITE Buys B'Vital, Relaunch of French National MT Project, Language International, April.

Klein, Fred (1988) Factors in the Evaluation of MT: A Pragmatic Approach, in Vasconcellos 1988a.

Lehmann, Winfred P. (1987) The Context of Machine Translation, Computers and Translation 2.

Malmberg, Bertil (1967) Los Nuevos Caminos de la Linguistica, Siglo Veintiuno, Mexico, pp. 154-74 (in Swedish: Nya Vagar inom Sprakforskningen, 1959).

Mehta, Ved (1971) John is Easy to Please, Farrar, Straus & Giroux, New York (originally a New Yorker article, reprinted in abridged form in Fremantle, Anne (1974) A Primer of Linguistics, St. Martin's Press, New York).

Minsky, Marvin (1986) The Society of Mind, Simon & Schuster, New York, especially Sections 19-26.

Nagel, Ernest and Newman, James R. (1989) Goedel's Proof, New York University Press.

Newman, Pat (1988) Information-Only Machine Translation: A Feasibility Study, in Vasconcellos 1988a.

Nirenburg, Sergei (1987) Machine Translation, Theoretical and Methodological Issues, Cambridge University Press.

Paulos, John A. (1989) Innumeracy, Mathematical Illiteracy and its Consequences, Hill & Wang, New York.

Rumelhart, David E. and McClelland, James L. (1987) Parallel Distributed Processing, M.I.T. Press.

Sapir, Edward (1921) Language: An Introduction to the Study of Speech, Harcourt and Brace.

Saussure, Ferdinand de (1913) Cours de Linguistique Generale, Paris (translated by W. Baskin as Course in General Linguistics, 1959, New York).

Slocum, Jonathan, editor (1988) Machine Translation Systems, Cambridge University Press.

Vasconcellos, Muriel, editor (1988a) Technology as Translation Strategy, American Translators Association Scholarly Monograph Series, Vol. II, SUNY at Binghamton.

Vasconcellos, Muriel (1988b) Factors in the Evaluation of MT, Formal vs. Functional Approaches, in Vasconcellos 1988a.

Vinay, J.-P. and Darbelnet, J. (1963) Stylistique Comparee du Francais et de l'Anglais, Methode de Traduction, Didier, Paris.

Weaver, Warren (1955) Translation, in Locke, William N. & Booth, A. Donald: Machine Translation of Languages, pp. 15-23, Wiley, New York.

Whitfield, Francis (1969) Glossematics, Chapter 23 of Linguistics, edited by Archibald A. Hill, Voice of America Forum Lectures.

Whorf, Benjamin Lee (1956) Language, Thought and Reality, (collected papers) M.I.T. Press.

Wilks, Yorick (1984?) Machine Translation and the Artificial Intelligence Paradigm of Language Processes, in Computers in Language Research 2.

Tuesday, December 30, 2008

Language Ambiguity: A Curse and a Blessing

By Cecilia Quiroga-Clare
E-mail: cecilia89@comcast.net

Introduction

Although ambiguity is an essential part of language, it is often treated as an obstacle to be ignored or a problem to be solved if people are to understand each other. I will examine this fact and attempt to show that, even when perceived as a problem, ambiguity provides value. In any case, language ambiguity can be understood as an illustration of the complexity of language itself.

As a start, I will define some terms to clarify what we mean by "ambiguity." By defining "lexical and structural ambiguity," "connotation, denotation and implication" and tropes such as metaphor and allegory, I will try to construct a base upon which language ambiguity takes on extra meaning.

Following this, I will use three major accomplishments of human creativity: literature, psychoanalysis and computational linguistics, as examples of where language ambiguity has an important place. I will briefly comment on the consequences of the different interpretations of one of the most controversial works of literature in history, if not the most controversial: the Holy Bible.

What does Language Ambiguity Mean?

Something is ambiguous when it can be understood in two or more possible senses or ways. If the ambiguity is in a single word, it is called lexical ambiguity; in a sentence or clause, structural ambiguity.

Examples of lexical ambiguity are everywhere. In fact, almost any word has more than one meaning. "Note" = "A musical tone" or "A short written record." "Lie" = "A statement that you know is not true" or "To be or put yourself in a flat position." We can also take the word "ambiguity" itself. It can mean an indecision as to what you mean, an intention to mean several things, a probability that one or other or both of two things has been meant, and the fact that a statement has several meanings. Ambiguity tends to increase with frequency of usage.

Some examples of structural ambiguity: "John enjoys painting his models nude." Who is nude? "Visiting relatives can be so boring." Who is doing the visiting? "Mary had a little lamb." With mint sauce? (7)

In normal speech, ambiguity can sometimes be understood as something witty or deceitful. Harry Rusche (15) proposes that ambiguity should be extended to any verbal nuance which leaves room for alternative reactions to the same linguistic element.

Polysemy (or polysemia) is a compound noun for a basic linguistic feature. The name comes from Greek poly (many) and sema (sign or meaning, as in semantics). Polysemy is also called radiation or multiplication. This happens when a word acquires a wider range of meanings. For example, "paper" comes from Greek papyrus. Originally it referred to writing material made from the papyrus reeds of the Nile, later to other writing materials, and now it refers to things such as government documents, scientific reports, family archives or newspapers. (11)

There is a category, called "complementary polysemy" wherein a single verb has multiple senses, which are related to one another in some predictable way. An example is "bake," which can be interpreted as a change-of-state verb or as a creation verb in different circumstances. "John baked the potato." (change-of-state) "John baked a cake." (creation) (9)

Denotation, Connotation, Implication.

Denotation: This is the central meaning of a word, as far as it can be described in a dictionary. It is therefore sometimes known as the cognitive or referential meaning. It is possible to think of lexical items that have a more or less fixed denotation ("sun," denoting the nearest star) but this is rare. Most are subject to change over time. The denotation of "silly" today is not what it was in the 16th century. (11) At that time the word meant "happy" or "innocent."

Connotation: This refers to the psychological or cultural aspects, the personal or emotional associations aroused by words. When these associations are widespread and become established by common usage, a new denotation is recorded in dictionaries. A possible example of such a change is the word vicious. Originally derived from vice, it meant "extremely wicked." In modern British usage, however, it is commonly used to mean "fierce," as in the brown rat is a vicious animal. (11)

Implication: What the speaker intends to mean but does not communicate directly. The listener can deduce or infer the intended meaning from what has been uttered. Example from David Crystal:

Utterance: "A bus!" ? Implicature (implicit meaning): "We must run." (11)

Tropes: Metaphor, Metonym, Allegory, Homonym, Homophone, Homograph, Paradox

These are only a few of the language figures or "tropes," providing concepts useful to understanding ambiguity in language.

Metaphor: This refers to the non-literal meaning of a word, a clause or sentence. Metaphors are very common; in fact all abstract vocabulary is metaphorical. A metaphor compares things. (Examples: "blanket of stars"; "out of the blue")

A metaphor established by usage and convention becomes a symbol. Thus crown suggests the power of the state, press = the print news media and chair = the control (or controller) of a meeting. (11)

Metonym: A word used in place of another word or expression to convey the same meaning. (Example: the use of brass to refer to military officers) (6)

Allegory: The expression by means of symbolic fictional figures and actions of truths or generalizations about human existence; an instance (as in a story or painting) of such expression. (10) "Moby Dick" by Herman Melville is a clear example of allegory, where the great white whale is more than a very large aquatic mammal: it becomes a symbol for eternity, evil, dread, mortality, and even death, something so great and powerful that we humans cannot even agree on what it might mean.

Homonym: When different words are pronounced, and possibly spelled, the same way (examples: to, too, two; or bat the animal, bat the stick, and bat as in to bat the eyelashes) (6)

Homophone: Where the pronunciation is the same (or close, allowing for such phonological variation as comes from accent) but standard spelling differs, as in flew (from fly), flu ("influenza") and flue (of a chimney).

Homograph: When different words are spelled identically, and possibly pronounced the same (examples: lead the metal and lead, what leaders do) (6)

Paradox: A statement that is seemingly contradictory or opposed to common sense and yet is perhaps true; a self-contradictory statement that at first seems true; an argument that apparently derives self-contradictory conclusions by valid deduction from acceptable premises. (10) Example:

"I do not love you except because I love you;
"I go from loving to not loving you,
"From waiting to not waiting for you
"My heart moves from cold to fire."

Pablo Neruda

Having defined terms, I would say that language ambiguity is a phenomenon we can include as an illustration of the Paradigm of Complexity. Complexity is a weave constituted by diverse events, interactions, and randomness; it is disorganized and unpredictable. To cope with it we need to impose order, discard what is uncertain, distinguish, clarify, and classify. But all those operations, necessary for language to become intelligible, put us at risk of blindness.

I could say that ambiguity in language is the uncertainty within the very core of the organized system of language.

Working with Words and their Meanings

Ambiguity and Literature

We tend to think of language as a clear and literal vehicle for accurately communicating ideas. But even when we use language literally, misunderstandings arise and meanings shift. People can be intentionally or unintentionally ambiguous. Nevertheless, when someone uses a potentially ambiguous sentence or expression, usually the intention is to express only one meaning. As we know, most words can have denotations, apparent meanings, connotations and implied or hidden meanings. Also, we often use words in a figurative way. Even though figurative language is more often used in poetry and fiction, it is still very common in ordinary speech.

Ambiguity is a poetic vehicle. It is human nature to try to find meaning within an exchange. A text is given to us and in return we give our interpretation. Our own associations shape our understanding of what is presented to us.

A characteristic of the late twentieth century, as well as of postmodern literature, is that certainties are continuously called into question, and thus allegory becomes a suitable form for expression. Allegory is a classic example of double discourse that avoids establishing a center within the text, because in allegory the unity of the work is provided by something that is not explicitly there. (16)

In contrast to symbols, which are generally taken to transcend the sign itself and express universal truths, allegories and metaphors divide the sign, exposing its arbitrariness. (I use "sign" here in the sense of the direct intended meaning - see below) Thus the allegorical impulse in contemporary literature can be seen as a reflection of the postmodern emphasis on the reader as co-producer, since it invites the reader's active participation in making meaning. (16)

Metaphors are indeed highly appropriate postmodern devices, because they are obvious vehicles for ambiguity. A living metaphor always carries dual meanings, the literal or sentence meaning and the conveyed or utterance meaning.

A metaphor induces comparison, but since the grounds of similarity are not always given, metaphors serve to emphasize the freedom of the reader as opposed to the authority of the writer. (16)

Historically we can point to Saussure as initiating the discussion of the arbitrariness of the sign, as described in his Course in General Linguistics. The signifier may stay the same but the signified will shift in relation to context. In terms of change over time, Saussure states "whatever the factors involved in [the] change, whether they act in isolation or in combination, they always result in a shift in the relationship between the sign and the signification." (Saussure, 1983, p. 75)

Taking into consideration why all the aforementioned could be considered a curse, no example from literature serves better than the Bible. This special book, because of its central place at the heart of three of the world's most important religions, has been subject to enormously detailed scrutiny over the centuries in an attempt to glean meaning and to determine "once and for all" the proper way of living and worshipping.

Persecution and oppression have resulted from these interpretations, whether carried out in true belief in the heretics' evil nature or by cynically using the Bible for political purposes, as Hitler did in his attempted annihilation of the Jews.

Where are the Cathars? Where are the Huguenots now? There is no doubt that these people, were any still surviving, would view the ambiguity of language as a curse, for their interpretations of the Bible were viewed as heresy, and they were extinguished because the same Bible was read in different ways by different men.

Ambiguity and Psychoanalysis

When Sigmund Freud refers to the difficulties of the patient's narrative, the "Neurotic Family Novel," it is in relation to the value of historical truth as it emerges through discursive expression. Memory is thus contrasted with a way of forgetting; the objective of the cure is to rewrite one's history, much as archeological work begins with hieroglyphics in order to decode an epoch. (17)

The interpretation interposes meaningful words that allow the meaning to shift. The operability of psychoanalysis relies on a semantic base, that is to say, on the attribution of significance and its verbalization.

The Freudian concept of symptoms as symbols, his consideration of dreams as hieroglyphic writing, and the cure based on the spoken word, immediately established a link between psychoanalysis and linguistics. Freud presents words as bridges between unconscious and conscious thoughts. Similarly, neurosis presents a peculiar bond between disease and language, representing a usage dysfunction or a symbolization process that failed, or the existence of an archive that contains pathogenic memories. (18)

The study of oral or written slips of the tongue, the forgetting of names, the importance of polysemy and homophony for the Unconscious, the psychic mechanisms like condensation and displacement (metaphor and metonym), is a substantial part of the psychoanalytic discovery-invention-theory.

And the most important aspect is the use and significance of the language in the therapeutic discourse, that is to say, speech as a working tool.

For discourse analysis, who is talking and how, why and when something is said are essential. Speech is not simple vocalization in the abstract but speech about something, for someone, or about someone. It is also important how significance and coherence are reached and how mental processes and representations are involved in comprehension. All these issues are basic to the psychoanalyst's interpretative work. (17)

Therefore, homophones, mistakes provoked by polysemy, metaphors, and metonyms are considered primary characteristics of the constitutive heterogeneity of discourse, rather than errors.

If everything we know is viewed as a transition from something else, as Freud said in The Antithetical Meaning of Primal Words (4), then every experience must have a double meaning, or for every meaning there must be two aspects. All meaning is only meaningful in reference to, and in distinction from, other meanings; there is no meaning in any stable or absolute sense. Meanings are multiple, changing, and contextual. (8)

Ambiguity and Computational Linguistics

Computational linguistics has two aims: To enable computers to be used as aids in analyzing and processing natural language, and to understand, by analogy with computers, more about how people process natural language.

One of the most significant problems in processing natural language is the problem of ambiguity. Most ambiguities escape our notice because we are very good at resolving them using context and our knowledge of the world. But computer systems do not have this knowledge, and consequently do not do a good job of making use of the context. (16)

The problem of ambiguity arises wherever computers try to cope with human language, as when a computer on the Internet retrieves information about alternative meanings of the search terms, meanings that we had no interest in. In machine translation, it is almost impossible for a computer to distinguish between the different meanings of an English word that may be expressed by very different words in the target language. Therefore all attempts to use computers alone to process human language have been frustrated by the computer's limited ability to deal with polysemy.

Efforts to solve the problem of ambiguity have focused on two potential solutions: knowledge-based and statistical systems. In the knowledge-based approach, the system developers must encode a great deal of knowledge about the world and develop procedures to use it in determining the sense of the text.
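As a rough illustration of the knowledge-based route, here is a minimal Python sketch in the spirit of the simplified Lesk algorithm: hand-written sense definitions stand in for the encoded world knowledge, and the sense whose definition shares the most words with the surrounding context is chosen. The two-sense inventory for "note" is invented for the example and is, of course, vastly smaller than anything a working system would require.

# Minimal knowledge-based word sense disambiguation, Lesk-style:
# the sense whose hand-written gloss overlaps most with the context wins.
# The tiny sense inventory below is invented purely for illustration.

SENSES = {
    "note": {
        "music": "a musical tone or pitch produced by an instrument or voice",
        "memo":  "a short written record or reminder on paper",
    }
}

def lesk_like(word, context_sentence):
    """Pick the sense of `word` whose gloss shares the most words with the context."""
    context = set(context_sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(lesk_like("note", "she played a wrong musical note on the instrument"))  # music
print(lesk_like("note", "he left a short written note on the paper pad"))      # memo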

In the statistical approach, a large corpus of annotated data is required. The system developers then write procedures that compute the most likely resolutions of the ambiguities, given the words or word classes and other easily determined conditions.
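Again purely as an illustration, the statistical route might look like the following naive Bayes sketch. A tiny hand-annotated "corpus" (invented here, and far too small to be realistic) supplies the sense frequencies and word co-occurrence counts, and the procedure returns the sense that makes the observed context most probable.

import math
from collections import Counter, defaultdict

# Toy "annotated corpus": each context sentence is labelled with the sense
# of the ambiguous word "note". All examples are invented for illustration.
annotated = [
    ("played a high note on the piano", "music"),
    ("the singer held the last note", "music"),
    ("left a note on the kitchen table", "memo"),
    ("wrote a quick note to remind himself", "memo"),
]

sense_counts = Counter()
word_counts = defaultdict(Counter)
vocab = set()
for sentence, sense in annotated:
    sense_counts[sense] += 1
    for w in sentence.split():
        word_counts[sense][w] += 1
        vocab.add(w)

def most_likely_sense(context_sentence):
    """Return the sense with the highest naive Bayes score for the context."""
    best, best_score = None, float("-inf")
    for sense in sense_counts:
        # log P(sense) + sum of log P(word | sense), with add-one smoothing
        score = math.log(sense_counts[sense] / sum(sense_counts.values()))
        total = sum(word_counts[sense].values())
        for w in context_sentence.split():
            score += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = sense, score
    return best

print(most_likely_sense("she sang every note with the piano"))  # expected: music
print(most_likely_sense("he left a note on the table"))         # expected: memo

Real systems differ enormously in scale and sophistication, but the division of labor is the same: the annotated data supplies the evidence, and a simple probabilistic procedure resolves the ambiguity.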

The reality is that no operational computer system capable of determining the intended meanings of words in discourse exists today. Nevertheless, solving the polysemy problem is so important that all efforts will continue. I believe that when we achieve this goal, we will be close to attaining the holy grail of computer science, artificial intelligence. In the meantime, there is a lot more to teach computers about context, especially linguistic context.

Conclusion

Language cannot exist without ambiguity, which has represented both a curse and a blessing through the ages.

Since there is no one "truth" and no absolutes, we can only rely on relative truths arising from groups of people who, within their particular cultural systems, attempt to answer their own questions and meet their needs for survival.

Language is a very complex phenomenon. Meanings that can be taken for granted are in fact only the tip of a huge iceberg. Psychological, social and cultural events provide a moving ground on which those meanings take root and expand their branches.

Signification is always "spilling over," as John Lye says, "especially in texts which are designed to release signifying power, as texts which we call 'literature'." The overlapping meanings emerge from the tropes, ways of saying something by always saying something else. In this sense, ambiguity in literature has a very dark side, when important documents are interpreted in different ways, resulting in persecution, oppression, and death.

Giving meaning to human behavior is one of the challenges for Psychoanalysis and Psychology in general, a risk to be taken during a psychoanalytic session. After Ferdinand de Saussure proposed that there is no mutual correspondence between a word and a thing, to ascribe significance becomes much more complicated. The meaning in each situation appears as an effect of the underlying structure of signs. These signs themselves do not have a fixed significance; the significance exists only in the individual. "Sign is only what it represents for someone." The sign appears as pure reference, as a simple trace, says Peirce. (18)

"Disambiguation" is a key concept in Computational Linguistitics. The paradox of how we tolerate semantic ambiguity and yet we seem to thrive on it, is a major question for this discipline. (3)

Computational Linguists created "Word Sense Disambiguation" with the objective of processing the different meanings of a word and selecting the meaning appropriate to the use of the word in a particular context. Over 40 years of research has not solved this problem.

At this time, there is no computer capable of storing enough knowledge to process what human knowledge has accumulated.

It can be seen, therefore, that ambiguity in language is both a blessing and a curse. I would like to say, together with Pablo Neruda, "Ambiguity, I love you because I don't love you."

References

(1) Clare, Richard Fraser. (Historian) Informal conversations about historical consequences of different interpretations of the Bible.

(2) Engel, S. Morris. "Fallacies & Pitfalls of Language" from Fallacies & Pitfalls of Language: The Language Trap. Ed Paperback Nov.1994.

(3) Fortier, Paul A. "Semantic Fields and Polysemy: A correspondence analysis approach" University of Manitoba. Paper.

(4) Frath, Pierre "Metaphor, polysemy and usage" Université Marc Bloch, Département d'anglais. France.

(5) Freud, Sigmund "El sentido antitético de las palabras primitivas" Obras Completas Ed. Biblioteca Nueva.

(6) Fromkin, Victoria/Rodman, Robert. "An introduction to language" Ed. Harcourt.

(7) Hobbs, Jerry R. "Computers & Language" SRI International, Menlo Park, CA.

(8) Lye, John "Some characteristics of Contemporary Theory" (Lacan) Department of English, Brock University 1997/2000.

(9) Long, David "Polysemy" Article on the Internet.

(10) Merriam-Webster English Dictionary Online

(11) Miller, George "Ambiguous words" iMP Magazine. March 22, 2001.

(12) Misa, Luis. Web pages: "La complejidad," "El paradigma de la complejidad."

(13) Moore, Andrew. "Semantics, meanings, etymology and the lexicon" Web Site.

(14) Portner, Paul "Semantic Issues for Computational Linguistics" Department of Linguistics, Georgetown University, Washington. Fall 1998.

(15) Rusche, Harry "Ambiguity" English Department, Emory University.

(16) Traugott, Elisabeth Gloss. "'Conventional' and 'Dead' Metaphors Revisited." The Ubiquity of Metaphor: Metaphor in Language and Thought. Ed. Wolf Paprotte and Rena Dirven. Amsterdam: Benjamins, 1985. 17-56.

(17) Vinocur de Fischbein, Susana "Formas de inscripción psíquica: el lugar del lenguaje y la expresión de los afectos en el campo psicoanalítico" Revista de Psicoanálisis, Argentina. Nov.1999 No.3.

(18) Zoroastro, Gastón A. "Problemas epistemológicos de la interpretación" Paper.

This article was originally published at Translation Journal (http://accurapid.com/journal).