Imageability
Imageability is a measure of how easily a physical object, word, or environment evokes a clear mental image in the mind of an observer.
(Kevin Lynch, The Image of the City, MIT Press, 1960. ISBN 9780262120043.)
It is used in architecture and city planning, in psycholinguistics, and in automated computer vision research.
(Association for Computing Machinery, 2020. ISBN 9781450369367.)
In automated image recognition, training models to connect images with concepts that have low imageability can lead to biased and harmful results.


History and components
Kevin A. Lynch first introduced the term "imageability" in his 1960 book The Image of the City. In the book, Lynch argues that cities contain a key set of physical elements that people use to understand the environment, orient themselves inside of it, and assign it meaning.
(ISBN 9780203094235.)

Lynch argues the five key elements that impact the imageability of a city are Paths, Edges, Districts, Nodes, and Landmarks.

  • Paths: channels in which people travel. Examples: streets, sidewalks, trails, canals, and railroads.
  • Edges: objects that form boundaries around space. Examples: walls, buildings, shorelines, streets, and overpasses.
  • Districts: medium to large areas people can enter into and out of that share a common set of identifiable characteristics.
  • Nodes: large areas people can enter that serve as the foci of the city, neighborhood, district, etc.
  • Landmarks: memorable points of reference people cannot enter. Examples: signs, mountains, and public art.

In 1914, half a century before The Image of the City was published, Paul Stern discussed a concept similar to imageability in the context of art. In Reflections on Art, Stern names the attribute that describes how vividly and intensely an artistic object can be experienced "apparency".

(Arno Press, 1979. ISBN 9780405106118.)


In computer vision
Automated image recognition was developed by using machine learning to find patterns in large, annotated datasets of photographs, such as ImageNet. Images in ImageNet are labelled using concepts from WordNet. Concepts that are easily expressed verbally, like "early", are seen as less "imageable" than nouns referring to physical objects, like "leaf". Training AI models to associate concepts with low imageability with specific images can introduce problematic bias into image recognition algorithms. This has been critiqued particularly as it relates to the "person" category of WordNet, and therefore also of ImageNet. Kate Crawford and Trevor Paglen demonstrated in their essay "Excavating AI" and their art project ImageNet Roulette how this leads to photos of ordinary people being labelled by AI systems as "terrorists" or "sex offenders".

Images in datasets are often labelled as having a certain level of imageability. As described by Kaiyu Yang and co-authors, this is often done following criteria from Allan Paivio and collaborators' 1968 psycholinguistic study of nouns. Yang et al. write that dataset annotators tasked with labelling imageability "see a list of words and rate each word on a 1-7 scale from 'low imagery' to 'high imagery'".

To avoid biased or harmful image recognition and image generation, Yang et al. recommend not training vision recognition models on concepts with low imageability, especially when the concepts are offensive (such as sexual or racial slurs) or sensitive (their examples for this category include "orphan", "separatist", "Anglo-Saxon" and "crossover voter"). Even "safe" concepts with low imageability, like "great-niece" or "vegetarian", can lead to misleading results and should be avoided.
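The curation step described above can be sketched in a few lines of Python. This is an illustrative sketch, not Yang et al.'s actual data or code: the words, ratings, flagged set, threshold, and the `trainable_concepts` helper are all assumptions. It averages per-word annotator ratings on the 1-7 imagery scale and drops concepts that are flagged as unsafe or fall below an imageability cutoff before training:

```python
# Hypothetical sketch of filtering a concept vocabulary by imageability.
# All data and the threshold below are illustrative assumptions.
from statistics import mean

# Per-concept annotator ratings on the 1-7 "low imagery" to "high imagery" scale
ratings = {
    "leaf":        [7, 6, 7, 6],   # concrete noun, highly imageable
    "early":       [2, 1, 3, 2],   # easily said, hard to picture
    "great-niece": [3, 2, 2, 3],   # "safe" but low-imageability
}

# Concepts excluded outright as offensive or sensitive, regardless of score
flagged = {"separatist", "crossover voter"}

IMAGEABILITY_THRESHOLD = 4.0  # illustrative cutoff on the 1-7 scale

def trainable_concepts(ratings, flagged, threshold=IMAGEABILITY_THRESHOLD):
    """Keep only concepts that are not flagged and are sufficiently imageable."""
    return [
        word
        for word, scores in ratings.items()
        if word not in flagged and mean(scores) >= threshold
    ]

print(trainable_concepts(ratings, flagged))  # only "leaf" survives the filter
```

On this toy data, "early" and "great-niece" fall below the cutoff and are excluded from training even though only explicitly flagged concepts are removed unconditionally.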

