Magnocellular Deficit

A magnocellular (“magno” for short) deficit has been linked to dyslexia. Magno cells detect and process the movement of stimuli coming in through the retina. Autopsies comparing dyslexics and non-dyslexics found that the former have a smaller cluster of magno cells, the cells that bring in rapidly changing information. As a result, images tend to clump together, and an activity like reading becomes extremely difficult; the brain simply cannot parse the many images (of text) entering the eyes. Without clean breaks between one word and the next, the words seem to shimmer and jump on the page. It is not surprising, then, that people with dyslexia often dislike crowds or places with lots of movement: city streets, for example, with their many moving cars and people.

Parallel pathways of magno (fast-processing) and parvo (slow-processing) cells

In The User’s Guide to the Brain (page 105) by John J. Ratey, a researcher recounts how hard it was for her to believe that her mother is dyslexic.

No, it couldn’t be, I thought to myself. My mother couldn’t possibly be dyslexic. She had graduated at the top of her class, she’s a perfectionist, and she absolutely loves to learn. How could she of all people be dyslexic?

We often do not realize that dyslexia can happen to anyone, and that being smart and motivated does not make reading easy. Reading, after all, is not an innate ability: humans are not born knowing how to read, though we are certainly capable of learning to. Dyslexia does not equate to a lack of intelligence.

This Is Your Brain on Silence – Issue 16: Nothingness – Nautilus

“Silence is a resource,” it said. It could be marketed just like clean water or wild mushrooms. “In the future, people will be prepared to pay for the experience of silence.”

Finland has begun a campaign to rebrand itself for tourism’s sake. The idea? SILENCE as a valuable commodity. Silence cannot be found everywhere, and that scarcity is precisely what makes it worth acquiring.

Stop and think about this. We are surrounded by noise 24/7, yes, even at home. Silence is intangible and limitless. New hip meditation “shops” sell silence to help folks relax. Noise-cancelling headphones sell for god-knows-too-much.

The article then turns to the underlying physiological explanation for why our brains are always on alert to detect sound in our environments. Even when it is silent, our brains are noisy and in search of external stimuli. This article was a great read.

“Yet to her great surprise, Kirste found that two hours of silence per day prompted cell development in the hippocampus, the brain region related to the formation of memory, involving the senses. This was deeply puzzling: The total absence of input was having a more pronounced effect than any sort of input tested.”

Source: This Is Your Brain on Silence – Issue 16: Nothingness – Nautilus.

Placebos – Is it all in the mind? – A comparison essay

Scientists have long known that the mind plays a role in medicinal treatment. They have found that the psychosocial context of a treatment is central to the strength of the placebo effect, but the underlying neurological basis for this psychological phenomenon has only recently gained momentum in the field as a topic worthy of study. This paper will discuss the experimental procedures and results of two independent experiments, and then examine how effectively these experiments answer the broad question: what factors create the placebo effect?


Viewing the Big Picture

The fusiform gyrus, an area of the temporal lobe, specializes in face recognition. It is highly active when we look at faces and other objects with which we are very familiar. Much research has been conducted to understand how we are able to differentiate one face from another. How do I recognize who is Bob and who is Paul? Do I look at the entire face or at individual features, e.g., the eyes or mouth? There are currently two competing hypotheses. The holistic hypothesis holds that the face is recognized as a whole rather than by its internal features, e.g., the eyes; it should therefore be easier to recognize a part of a face when it is presented within the whole face than to recognize that part by itself. The part-based hypothesis, on the other hand, suggests that face recognition relies on piecing together the internal features of the face. Several studies have shown that face recognition relies on both holistic and part-based processing, but more so on the former. The focus of this paper is to present some research in support of the holistic hypothesis.

One landmark study was conducted by Tanaka and Farah (1993). They investigated whether subjects would be more accurate at identifying parts of a face when those parts were presented in a whole face or in isolation. The study consisted of three separate experiments. In the first, subjects were assigned to either an intact-face or a scrambled-face condition and had to learn and memorize the names of the faces. An intact face is unaltered in any way; in a scrambled face, the mouth is placed on the forehead and the nose where the right eye would normally be. Subjects studied the faces in blocked trials. After this learning phase, they were given a forced-choice recognition test on the faces they had studied: one face was the original and the other a foil, in which one feature was swapped with a feature from another face. Subjects then took another forced-choice test with isolated parts only. For example, shown two noses, they were asked, “Which is Bob’s nose?” In the second experiment, subjects followed the same procedure, but instead of scrambled faces they studied faces inverted 180 degrees; the learning and test phases were otherwise identical. From these experiments, Tanaka and Farah found that subjects were more accurate at recognizing intact faces than scrambled faces, and less accurate still with isolated face parts. These results are consistent with the interpretation that faces are stored holistically in memory rather than in terms of their parts. The third experiment used houses as stimuli to rule out the possibility that the earlier results arose simply because subjects were viewing upright stimuli. Subjects first served as their own controls: they memorized human faces using the same procedure as before and, consistent with the previous results, recognized faces better when the whole face was presented than when only isolated features were shown. They then memorized the names and appearances of different houses and took forced-choice recognition tests of isolated parts and whole objects. Recognition accuracy for houses showed little difference: whether features were shown in isolation or as part of a whole, subjects identified the house with similar accuracy across conditions. This finding supports the hypothesis that whole-object recognition is used to a greater extent for faces and that face recognition differs from the recognition of other objects, such as houses.

In another study, Tanaka and Sengco (1997) looked into the role of facial configuration in face recognition. It has been speculated that because the arrangement of features is the same on every face, that is, the eyes are above the nose and the nose above the mouth, recognizing a specific face requires remembering the configural information (the spatial distances between features) it contains. Tanaka and Sengco reasoned that if one were to disrupt the configural information of a given face, this would greatly impair the retrieval of featural information. Their study included four experiments, using upright faces, inverted faces, upright houses, or inverted houses as visual stimuli. Subjects in each experiment first memorized faces or houses and then completed forced-choice recognition tests, one for isolated parts and one for the whole face or house, in which they compared target items with foils; their percentage correct was measured. The manipulation, or independent variable, was the spacing between the eyes (Experiments 1 and 2) or the windows (Experiments 3 and 4). In the old-configuration condition, spacing was not manipulated; in the new-configuration condition, the distance between the key features was altered, with some moved closer together and some farther apart.

The results of this series of experiments showed that subjects recognized features best when they were presented in the old configuration, moderately well in new configurations, and poorly as isolated parts. Interestingly, changes to facial configuration did not disrupt holistic processing of inverted faces or non-face objects: configuration affected the holistic recognition of features on upright faces but had no effect on inverted faces or houses. This study showed that featural and configural information are intertwined in holistic face representations. When faces were intact and upright, subjects relied on configural information more than featural information; when presented with inverted faces or houses, they relied more on the features specific to the object. Tanaka and Sengco thus concluded that configuration is an important factor in the holistic recognition of faces.

Another group of researchers aimed to understand more about the interaction of the configural and part-based systems in face recognition. Rivest, Moscovitch and Black (2009) conducted various experiments on two subjects with impairments in either object or face recognition and compared their results to those of healthy participants. The first subject, CK, has object agnosia (inability to recognize objects) and alexia (inability to understand written or printed language), but his recognition of upright faces is normal as long as there is sufficient configural information. The second subject, DC, has prosopagnosia (inability to recognize faces), whether based on configural or part-based processing, but his object recognition is normal. Rivest et al. had DC and CK perform a series of recognition tests: recognizing inverted faces of famous people and of cartoons, recognizing faces with their internal structures inverted, and recognizing disguised faces and faces with their external features inverted. Comparing DC, CK, and healthy controls led to the general conclusion that, because there is a clear dissociation in CK between part-based and configural face recognition and no evidence of such a dissociation in DC, face recognition is greatly impaired when whole-face configurations are altered. CK was normal at recognizing upright whole faces, but he performed poorly when distinguishing faces that were inverted, fractured, or modified in a way that altered the gestalt. This study further supports the holistic-representation hypothesis that face recognition relies more heavily on whole faces. Although configural face perception can proceed without part-based processing, the reverse is not true: without configural information (the face viewed as a whole), it is difficult for people with or without prosopagnosia to identify faces from their parts alone.

Work by Schiltz and Rossion (2006) found further evidence that faces are represented holistically in the brain. They designed their study around the composite face effect, in which two identical top halves of faces are perceived as different when their respective bottom halves belong to different faces. Using functional magnetic resonance imaging, the research team scanned subjects’ brains while the subjects viewed images of faces, objects, or scrambled faces to find where the brain was most activated by each type of visual stimulus. The face images were separated into top and bottom halves; in some conditions the halves were aligned, and in others they were misaligned. When subjects looked at aligned faces, the middle fusiform gyrus (MFG), which contains the neurons most sensitive to whole-face stimuli, was most activated. The MFG response was larger when faces differed from each other than when they were identical, but only when the top and bottom halves were aligned. When the halves were misaligned, there was greater activation in the inferior occipital gyrus (IOG), the area more sensitive to individual facial features. Removing a part of the face or scrambling its features caused a marked reduction in the neuronal response of the MFG. This suggests that faces are represented holistically in the fusiform and occipital areas when we first see a new face and try to recognize it.

As the studies by Tanaka and Farah (1993), Tanaka and Sengco (1997), Rivest et al. (2009), and Schiltz and Rossion (2006) show, there is good evidence that face recognition depends on holistic processing more than on featural processing. It is generally agreed that faces are recognized not on the basis of their individual features, but in terms of the whole that emerges from those features. These findings suggest that while the brain processes featural details and uses them for face recognition, it relies more on a holistic approach.

References:

Rivest, J., Moscovitch, M. & Black, S. (2009). A comparative case study of face recognition: The contribution of configural and part-based recognition systems, and their interaction. Neuropsychologia, 47, 2798-2811.

Schiltz, C. & Rossion, B. (2006). Faces are represented holistically in the human occipito-temporal cortex. NeuroImage, 32, 1385-1394.

Tanaka, J. W. & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46A, 225-245.

Tanaka, J. W. & Sengco, J. A. (1997). Features and their configuration in face recognition. Memory & Cognition, 25, 583-592.