The geometry of word embeddings in semantic projections


How does the mental lexicon store knowledge of word meaning? Computer models now infer word meanings from lexical co-occurrence patterns: they learn vector representations of words in which words used in more similar linguistic contexts, and hence more semantically related, sit closer together. But inter-word proximity captures only general relatedness, while human judgments are often context-dependent. Dolphins and alligators, for instance, are comparable in size but differ greatly in how dangerous they are.

Researchers are examining whether machines can emulate human thought processes and comprehend language the way humans do. A recent study by scientists from UCLA, MIT and the National Institutes of Health takes up that question.

According to the study, published in the journal Nature Human Behaviour, artificial intelligence systems can in fact acquire highly sophisticated word meanings, and the researchers found a straightforward method for extracting that sophisticated information.

They discovered that the AI system they examined represents word meanings in a way that correlates with human judgment. The type of AI system the authors studied has been widely used in recent years to research word meaning: it learns to decipher word meanings by "reading" tens of billions of words of text from the internet. When two words frequently occur together, like "table" and "chair," the system learns that their meanings are related. And when two words occur together only rarely, like "table" and "planet," it learns that their meanings are quite distinct.
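The co-occurrence idea the paragraph describes can be sketched in a few lines. This is a deliberately tiny illustration, not the study's actual training procedure (real embedding models are trained on billions of words and compress co-occurrence statistics into dense vectors); the corpus and counting scheme here are made up for demonstration:

```python
from collections import Counter
from itertools import combinations

# Toy corpus (illustrative only; real systems train on billions of words).
sentences = [
    "the table and chair are in the room",
    "a chair stood by the table",
    "the planet orbits a star",
]

# Count how often each pair of distinct words appears in the same sentence.
cooc = Counter()
for s in sentences:
    for a, b in combinations(set(s.split()), 2):
        cooc[frozenset((a, b))] += 1

# "table" and "chair" co-occur often; "table" and "planet" never do.
print(cooc[frozenset(("table", "chair"))])   # frequent pair
print(cooc[frozenset(("table", "planet"))])  # never together
```

Embedding models go one step further: rather than storing raw counts, they learn a dense vector per word such that words with similar co-occurrence profiles end up close together.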

The goal of the study, according to co-lead author Idan Blank, a UCLA associate professor of psychology and linguistics, was to determine what the system knows about the words it learns and what kind of "common sense" it possesses. Before the investigation, Blank said, the system appeared to have one significant limitation: "As far as the system is concerned, every two words have only a single numerical value that represents how similar they are." Human knowledge, by contrast, is far more intricate and nuanced.

"Consider our knowledge of dolphins and alligators," Blank said. "When we compare the two on a scale of size, from 'small' to 'big,' they are relatively similar. In terms of their intelligence, they are somewhat different. In terms of the danger they pose to us, on a scale from 'safe' to 'dangerous,' they differ greatly. So a word's meaning depends on context. We wanted to ask whether this system actually knows these subtle differences — whether its idea of similarity is flexible in the same way it is for humans."

To find out, the authors developed a method they call "semantic projection." The representations of various animals, for instance, can be compared by projecting them onto a line drawn between the model's representations of the words "small" and "big." The scientists tested this strategy on 52 groups of words to see whether the system could rank meanings along such scales — for example, ordering U.S. states by their weather or by their overall wealth, or animals by their size or how dangerous they are to humans.
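The projection step described above can be sketched with standard vector arithmetic. The two-dimensional vectors below are made up for illustration (real word embeddings have hundreds of dimensions, and the word list here is hypothetical); the projection formula itself is the ordinary scalar projection of a point onto the line between two anchor words:

```python
import numpy as np

# Toy word vectors (illustrative only; real embeddings are high-dimensional).
vecs = {
    "small":    np.array([0.1, 0.2]),
    "big":      np.array([0.9, 0.8]),
    "mouse":    np.array([0.15, 0.25]),
    "elephant": np.array([0.85, 0.75]),
}

def semantic_projection(word, neg="small", pos="big"):
    """Project a word vector onto the line from `neg` to `pos`.

    Returns a scalar: values near 0 mean the word lies near `neg`,
    values near 1 mean it lies near `pos`.
    """
    line = vecs[pos] - vecs[neg]
    return float(np.dot(vecs[word] - vecs[neg], line) / np.dot(line, line))

print(semantic_projection("mouse"))     # near 0: close to 'small'
print(semantic_projection("elephant"))  # near 1: close to 'big'
```

Ranking a whole category (animals, states, names) then amounts to sorting the words by this one scalar per feature line.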

Other word groups included first names, occupations, sports, mythical creatures and clothing. Each group was paired with several contexts, or dimensions — for example, size, danger, intelligence, age and speed.

A depiction of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar certain animals are based on their size. © Idan Blank/UCLA

We use a domain-general method to extract context-dependent relationships from word embeddings: ‘semantic projection’ of word-vectors onto lines that represent features such as size (the line connecting the words ‘small’ and ‘big’) or danger (‘safe’ to ‘dangerous’), analogous to ‘mental scales’. This method recovers human judgements across various object categories and properties. Thus, the geometry of word embeddings explicitly represents a wealth of context-dependent world knowledge.
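The context-dependence the abstract describes — the same pair of words being similar along one feature line but dissimilar along another — can be shown with the dolphin/alligator example from earlier. The vectors below are hypothetical, chosen so the geometry matches that example; only the projection arithmetic reflects the paper's method:

```python
import numpy as np

# Hypothetical 2-D vectors arranged so dolphin and alligator agree on the
# size axis but disagree on the danger axis (illustrative only).
vecs = {
    "small": np.array([0.0, 0.0]), "big":       np.array([1.0, 0.0]),
    "safe":  np.array([0.0, 0.0]), "dangerous": np.array([0.0, 1.0]),
    "dolphin":   np.array([0.7, 0.1]),
    "alligator": np.array([0.7, 0.9]),
}

def project(word, neg, pos):
    """Scalar projection of `word` onto the line from `neg` to `pos`."""
    line = vecs[pos] - vecs[neg]
    return float(np.dot(vecs[word] - vecs[neg], line) / np.dot(line, line))

for animal in ("dolphin", "alligator"):
    size = project(animal, "small", "big")
    danger = project(animal, "safe", "dangerous")
    print(f"{animal}: size={size:.2f}, danger={danger:.2f}")
```

The two animals land at the same point on the small–big line but far apart on the safe–dangerous line: proximity becomes context-dependent once it is measured along a feature line rather than as a single distance.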


The researchers found that their approach and human intuition were highly consistent across a wide range of objects and contexts. (To make that comparison, they also asked cohorts of 25 human participants each to make similar assessments about each of the 52 word groups.)

Surprisingly, the system learned that the names "Betty" and "George" are similar in being relatively "old," yet represent different genders; and that "fencing" and "weightlifting" both typically take place indoors, but differ in how much intelligence they demand.

"It is such a beautifully simple method and completely intuitive," Blank said. "The line between 'big' and 'small' is like a mental scale, and we put animals on that scale. [..] It turns out that this machine learning system is much smarter than we thought; it contains very complex forms of knowledge, and this knowledge is organized in a very intuitive structure. [..] Just by keeping track of which words co-occur with one another in language, you can learn a lot about the world."

Idan Blank

Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Gabriel Grand, Idan Asher Blank, Francisco Pereira & Evelina Fedorenko

Published: April 2022
DOI: 10.1038/s41562-022-01316-8

© 2022 GEOMETRY MATTERS. ALL RIGHTS RESERVED.