Categories
Academia anxiety count definition Environment health knowledge neuroscience origin research Researcher Special Guest stress struggle

(44) Research on BRAIN (extended): Misophonia

The quest into the unknown land of ‘misophonia’ continues. It is not included in any diagnostic manuals, and it is not widely acknowledged by the medical community. Yet people who suffer from misophonia exist, and here is what they are confronted with, in the words of Dr. Jennifer Jo Brout, the founder of the International Misophonia Research Network, a New York State Certified School Psychologist and a Connecticut Licensed Professional Counselor with a Doctorate in School/Clinical-Child Psychology, based in Connecticut, United States of America.

Dr. Jennifer Jo Brout, International Misophonia Research Network.

Differentiating Disorders: Misophonia and Sensory Over-Responsivity

As all researchers know almost comically well, uncovering new scientific knowledge is no easy task. Whether you are engaged in investigating a well-trod topic or, like me, you are forging relatively new territory, there are often no simple solutions to the complex problems we encounter. Perhaps you have recently read about the disorder I study and advocate for, misophonia, on this blog. Misophonia is a neurologically based disorder in which auditory, and sometimes visual, stimuli are misinterpreted within the central nervous system, leading sufferers to have unpleasant reactions to sounds others would consider barely noticeable.

research1
Source: internet.

When misophonia sufferers are exposed to particular “trigger sounds,” the fight/flight response is set off within the body. For these individuals, hearing a noxious noise can feel akin to being confronted with a wild animal, as their hearts race and muscles tense.

Because misophonia (which does not appear in diagnostic manuals such as the DSM-5 or ICD-10) has only recently begun gaining wider recognition in the public and scientific communities, studying this disorder presents a unique set of challenges.

afda.png

Though there is only a scant amount of research on misophonia at this point, fortunately there is a large body of research, developed over the past 15 years, on a similar disorder: Sensory Over-Responsivity (a subtype of Sensory Processing Disorder). Individuals suffering from Sensory Over-Responsivity react to all types of sensory information as though it were dangerous, and their fight/flight systems can be activated by seemingly inoffensive sights, smells, tastes, touches, or sounds. In both misophonia and Sensory Over-Responsivity, certain sounds can leave sufferers feeling angry, fearful, disgusted, and “out of control.”

research2
Source: Internet.

Though it may seem natural that the research on Sensory Over-Responsivity be used to inform our understanding of misophonia, this has, largely, not taken place. We may ask ourselves, why are these two highly similar disorders rarely compared in misophonia academic articles, or articles in the popular press? My answer to this question is an unfortunate one: for the most part, researchers are not used to working within a cross-disciplinary model.

While psychology researchers, audiology researchers, and occupational therapy researchers may be competent and successful within their own fields, they are often not accustomed to reaching beyond them to integrate other types of research into their own work. There is a long pragmatic and political history behind the lack of cross-disciplinary research that is not necessarily the fault of academic researchers or clinicians. However, in the “age of information” we are living in, sharing valuable knowledge between researchers from different disciplines should now be as quick and easy as doing a Google search, and as common. As it is, this lack of information sharing trickles down to the public, and often leads misophonia and Sensory Over-Responsivity sufferers to find inaccurate information about their own conditions.

research 4.png

Unfortunately, another important problem facing both misophonia and Sensory Over-Responsivity is that neither disorder has been accepted into the diagnostic manuals (the DSM-5 or the ICD-10). It is difficult to understand the logic behind this, as studies have estimated that up to 20% of children are affected by sensory-based disorders. Likewise, tens of thousands of people have gathered on social media platforms to form support groups for misophonia, helping one another fill the gaps left by a large portion of the mental health community. There is a long political history involving how a disorder gains entry into diagnostic manuals, and though the National Institutes of Health has recently taken steps to try to change this process, this change comes long after the damage has been done. Therefore, what we are left with is two disorders that “don’t exist,” that are not reimbursable by insurance, and for which research funding is extraordinarily difficult to come by.

Sensory Over-Responsivity and misophonia share more than symptoms. They share neglect from the medical and psychiatric communities, which has resulted in the dissemination of more than enough inaccurate and confusing information to damage sufferers’ lives. My hope is that, going forward, receptive practitioners and researchers from all facets of the healthcare community can work cooperatively to study and treat these disorders, discovering important knowledge and improving sufferers’ quality of life.

This post is written by Dr. Jennifer Jo Brout (who is also the mother of adult triplets, and is a misophonia sufferer herself) and Miss Madeline Appelbaum, a recent alumna of Reed College (Oregon, USA), with a particular interest in educational psychology. Madeline wrote an undergraduate thesis on the effects of autonomous and controlled motivation to learn on college students.

14202695_10154452811021753_8423034886095690784_n.png
Madeline Appelbaum, Intern at International Misophonia Research Network

International Misophonia Research Network (Amsterdam)

With love for Research,

signature

Categories
Academia anxiety health knowledge neuroscience research Researcher Special Guest stress

(41) Research on brain: hearing.

And here is Laurien back again with a crash intro on what is happening in our brain when we hear something! Did you hear that? 😉

LaurienNC.png
Laurien Nagels-Coune, PhD student

A ringing in your ear?

1212.png

Source: http://well.blogs.nytimes.com/2012/12/03/living-with-a-sound-you-can%E2%80%99t-turn-off/

The first post in this BRAIN research series was about language. Next to spoken words, there are plenty of other sounds in our daily life. They are a source of joy and comfort, but what if a certain sound drives you mad? Tinnitus is the fancy term for ‘having a ringing in your ear’. It is, in fact, the perception of sound in the absence of any actual sound.

Now, before I go on, I have to emphasize that I am no expert in this field. My PhD is focused on muscle-independent communication for locked-in patients. These are patients who have lost most motor capacities and are in essence ‘locked in’ their own bodies, but let me tell you more about that another time 😉 . I am writing about tinnitus now because it is a scientific side project of mine, and I will soon collaborate on a clinical investigation of it. As a clinician, I have always found it fascinating how such a seemingly insignificant disorder can drive one mad; try listening to a few of these 11 tinnitus sounds from the British Tinnitus Association. Personally, I can imagine going mad if forced to listen to sound 8 or 11 for even a day.

11111.png

In April I went to a studium generale lecture here in Maastricht by Prof. dr. Robert Stokroos and Dr. Iris Nowak-Maes. Perhaps some of you were there as well? I remember that extra chairs were brought in to accommodate the immense turnout that evening. Prof. dr. Stokroos confirmed the immense scale of this seemingly insignificant disorder:

Source:http://www.geeksandbeats.com/wp-content/uploads/2014/04/shutterstock_24666676.jpg

“About a million people in the Netherlands have to deal with tinnitus, and about 60,000 of those are seriously hindered in their daily lives. Tinnitus costs around 2.3 percent of the yearly care budget.”

Ok, so now we know what tinnitus is, and we know how severe its consequences are for our society. So let’s cut to the chase.

What causes tinnitus? The most common cause is exposure to noise, such as a noisy work environment. People who have been in warfare, for example, often develop tinnitus. What happens is that the cochlea, the ‘snail house’ of the ear, gets damaged.

123

Source: http://www.webmd.com/a-to-z-guides/inner-ear

Specifically, there are tiny hair cells in this snail house that get damaged. But where does neuroscience come in? Well, in most cases, damage to these little hair cells causes hearing loss in a specific frequency range. This is because the hair cells are grouped per frequency. What is interesting is that the tinnitus frequency often falls exactly in this frequency range! So what might be happening? Animal models suggest that when the hair cells are damaged, the nerves going from the cochlea to the brain lose their input, and the auditory part of our brain starts to show increased spontaneous activity. So what begins as a disease of the ear soon becomes a disease of the brain.
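As a small aside from me (not part of the lecture): the “grouped per frequency” layout of the hair cells, called tonotopy, can even be approximated with a formula. The Greenwood function is a commonly cited fit mapping a position along the human cochlea to the frequency its hair cells respond to; here is a minimal sketch, assuming the standard human constants:

```python
# Greenwood function: a classic fit mapping relative position along the
# basilar membrane (x = 0 at the apex, x = 1 at the base) to the
# characteristic frequency of the hair cells at that position.
# Constants are the commonly cited human values: A = 165.4, a = 2.1, k = 0.88.

def greenwood_frequency(x: float) -> float:
    """Approximate characteristic frequency in Hz at relative position x."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Hair cells at the apex respond to low frequencies, those at the base to
# high ones -- roughly spanning the human hearing range of ~20 Hz to ~20 kHz.
print(round(greenwood_frequency(0.0)))  # ~20 Hz
print(round(greenwood_frequency(1.0)))  # ~20677 Hz
```

So damage to hair cells at one cochlear position knocks out one specific frequency band, which is exactly the band where the tinnitus tone tends to appear.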

1235.png

Image adopted from Adjamian, P., Sereda, M., & Hall, D. A. (2009). The mechanisms of tinnitus: perspectives from human functional neuroimaging. Hearing research, 253(1), 15-31.

What is often seen in animal models is that there is some reorganization of the auditory cortex (part C of the figure above). You can see that the top red regions stop responding to high frequencies and start responding to the lower frequencies that were close to them. You can see how damage to a specific part of the ear can change the workings of the brain.

The above is just a common way of thinking about tinnitus. However, be careful, dear readers: little is yet known about this fascinating topic. Notably, one in four tinnitus patients does not have hearing loss, and reorganization of the auditory cortex has not been confirmed as a cause of tinnitus in humans. Still, motivated neuroscientists keep working to understand this disease better and better. Once the mechanisms are unraveled, the way is open to treatments and interventions. My take-home message to those readers who haven’t developed tinnitus yet is: protect your ears 😉 As always, prevention is better than treatment!

12345555.png

Source: http://www.oaklandaudiology.com/wp-content/uploads/2014/03/Pixmac000088050972.jpg

Tinnitus remains a hot topic in the field of neuroscience; we don’t fully understand it yet. There is still a lot more to discover about auditory perception. For example, another strange disorder involves the hatred of certain specific sounds… but our next guest will unravel the neural correlates of this phenomenon in next week’s post.

by Laurien Nagels-Coune, PhD student in Cognitive Neuroscience at FPN, Maastricht University

With love for Researchers,

signature

Categories
Academia knowledge neuroscience research Special Guest writing

(40) Research on Brain: reading.

Research on BRAIN month continues with another indispensable gift that the human brain offers us, amongst other skills – and that is READING. Dr. Gojko Žarić, Experienced Researcher at Maastricht University, the 4th top young university in the world, will explain to us how it is that we end up re-a-ding… Finally, I get to understand what these machines on a Researcher’s head are doing :):)

15135525_10154017010043244_837274749_n.png
Dr. Gojko Žarić

Can you read? How do you do that?

Humans have been reading and writing for about 5,000 years. It appears that reading relies on brain areas that serve other functions, such as vision, hearing, and language, rather than on areas for reading alone. And things get more complicated with other higher cognitive functions, such as attention – a crucial factor in successful reading.

 

15086333_10154017011358244_905572498_n
Figure 1. Possibly the earliest known writing. Sumerian pictographic writing on a limestone tablet from Kish, dated ~3500 BC (Source: Wikipedia).

This means that a large number of brain areas have to cooperate to allow us to read (one example of these areas is in Figure 2). Thus, as reading is a complex cognitive function, it is not surprising that 1-2 in every 20 children have trouble mastering this skill. In other words, in every classroom there is at least one child struggling with reading due to a specific learning disability with a neurobiological root. These children suffer from developmental dyslexia. In my research I look at the brain responses of children and adults to reading-related materials and tasks.

15145071_10154017011568244_767492315_o.png
Figure 2. One modern day view of the brain areas involved in reading (Source: Dehaene, Reading in the Brain, 2009).

Written language consists of arbitrary visual forms, i.e. letters, that a given society relates to the speech sounds of its language. Speech sounds are distinct units of the spoken language that differentiate between words (e.g. mug, bug, rug). If the letters represent distinct speech sounds, we call the script alphabetic (Latin, Greek, Cyrillic, Hangul, Armenian, and Georgian). A looser definition of alphabetic scripts also includes abjads, in which commonly only the consonants are written (Arabic and Hebrew), syllabic scripts (the Japanese Katakana script), and abugida scripts, in which the vowel does not have its own symbol but is represented by changing the letter symbol of the consonant (e.g. Indic, Ethiopic, Canadian Aboriginal). Another group of scripts are logographic scripts, such as Chinese, in which visual symbols directly represent meaningful units of language (Figure 3).

hgkkhjg.png
Figure 3. Examples of different scripts: 1) Alphabet – Dutch 2) Syllabic – Japanese Katakana 3) Abjad – Hebrew 4) Abugida – Ethiopic 5) Alphabet – Serbian Cyrillic and Latin 6) Logographic – Chinese standard and modern simplified (examples).

Thus, a child learning to read an alphabetic script first has to learn to connect letters and the corresponding speech sounds, e.g. the letter “m” to the speech sound /m/ in the words “mug”, “drum”… My research topic (remember the “Research question” part of do-your-own-little Research?) is how a child’s brain builds up letter-speech sound connections and how it automatizes them to allow children to move on to the next stages of reading development. My research also concerns the reading stage in which these connections are in place and children can recognize words as units, without having to read them letter by letter. Furthermore, I am interested in how brain responses differ between children with and without reading problems. In other words, can I find which brain areas or which cognitive functions are not cooperating as they are supposed to?
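To make the idea of letter-speech sound connections concrete, here is a toy sketch of my own (an illustration, not Dr. Žarić’s actual model): a beginning reader of an alphabetic script effectively performs a letter-to-sound lookup, sounding a word out unit by unit, whereas a fluent reader retrieves the whole word at once.

```python
# Toy model of alphabetic decoding. Real orthographies are messier
# (silent letters, digraphs like "th"), so this mapping is hypothetical.
LETTER_TO_SOUND = {"m": "/m/", "u": "/ʌ/", "g": "/g/",
                   "b": "/b/", "r": "/r/", "d": "/d/"}

def sound_out(word):
    """What a beginning reader does: convert each letter in turn."""
    return [LETTER_TO_SOUND[letter] for letter in word]

print(sound_out("mug"))  # ['/m/', '/ʌ/', '/g/']
print(sound_out("rug"))  # ['/r/', '/ʌ/', '/g/']
```

Automatizing these connections is what eventually frees the brain to treat “mug” as one unit rather than three lookups.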

To investigate these questions we can use different neuro-scientific methods such as electroencephalography, functional and structural magnetic resonance imaging.

Electroencephalography (EEG) tells us, with millisecond precision, when the brain responds to a certain stimulus. For example, we can present readers with letters and speech sounds that match or mismatch (Figure 4, middle row) and investigate how the brain responds in these two conditions, and whether the brains of dyslexic children produce different responses. In another task, we can measure their responses while they read words or meaningless letter-like false-font strings (Figure 4, bottom row). We can then analyze, for example, the amplitudes and latencies of the brain responses. We can also analyze how the signal measured at the back of the head relates to the signal measured at the front of the head, to see whether these signals come from brain areas that cooperate or not. And many more possibilities for analyzing EEG data are available…

gdfghdh.png
Figure 4. Examples of an EEG measurement during letter-speech sound integration and word reading task.
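The amplitude-and-latency analysis mentioned above can be sketched in a few lines. This is an illustrative toy of mine (the numbers are made up, not real EEG data): given an averaged response sampled once per millisecond after stimulus onset, find the largest deflection and when it occurs.

```python
# Toy ERP analysis: locate the peak of an averaged EEG response.
# The trace below is invented for illustration; real data are noisy
# and averaged over many trials and electrodes.

def peak_amplitude_and_latency(erp_uv, sample_ms=1.0):
    """Return (peak amplitude in microvolts, peak latency in ms)."""
    peak_index = max(range(len(erp_uv)), key=lambda i: abs(erp_uv[i]))
    return erp_uv[peak_index], peak_index * sample_ms

# A made-up response with a positive deflection peaking ~150 ms post-stimulus:
erp = [0.0] * 150 + [1.0, 3.0, 6.0, 9.0, 6.0, 3.0, 1.0] + [0.0] * 150
amplitude, latency = peak_amplitude_and_latency(erp)
print(amplitude, latency)  # 9.0 153.0
```

Comparing such amplitudes and latencies between matching and mismatching letter-sound pairs, or between dyslexic and typical readers, is one basic form these EEG comparisons can take.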

We can use functional magnetic resonance imaging (fMRI) to see where in the brain the areas are that become more or less activated during letter-speech sound integration or word reading (the example in Figure 2 is based on multiple studies with various tasks). With this method we can see that children and adult readers activate the same brain network during reading, but regions involved in letter-speech sound integration are activated more in children, while regions involved in the fast recognition of words as units are activated more in adults.

We can also look at the white matter of the brain using magnetic resonance imaging, by employing a different imaging technique, diffusion-weighted imaging (Figure 5). White matter contains the highways of the brain: large bundles of neuronal axons through which signals travel between different brain areas. The integrity of the white matter can develop differently over time in dyslexic and typical readers. Conversely, reading can influence white matter development, i.e. the more certain neuronal bundles are used, the more structured they become.

15129923_10154017016043244_117396544_n
Figure 5. Diffusion weighted image of the white matter of the human brain (Source: Wikipedia).

Thus neuroscience offers multiple methods to look at typical and atypical reading development. These methods can be combined with reading training to examine the benefits of the training at both the behavioral and the neural level. The coupling of behavioral and neural changes with reading training is not only scientifically important, as it informs us which brain areas serve which functions, but is foremost important for children with reading problems, as it may be a sign that the changes are long-lasting.

If this short introduction made you interested in research on reading and dyslexia, you can check my ResearchGate page or the webpage http://gorka.science/, made by my collaborator, Dr. Gorka Fraga González from the University of Amsterdam, where you can find our scientific papers on these topics. You can also visit our Maastricht University-based research group page to find out more about different reading, speech, and language related research.

Post written by Gojko Žarić, M-BIC, Maastricht University

With love for Research,

signature

Categories
Academia count enthusiasm health knowledge neuroscience research Researcher Special Guest

(39) Research on Brain: language.

This is Research on Brain month on Researchista, and “this is our guest of the week”, I would normally say, but this is not just a usual introduction. Joao is such a genuinely nice person and friend that I wish to transmit at least a little bit of the inspiration and huge support he has been giving to research communication. I would like to thank him for accepting to break the ice on Research and BRAIN month – with its related topics that belong to one field, called ‘Neuroscience’. It starts with how the brain helps us express ourselves clearly and use language to solve our problems and grow. Welcome to our Special Guest Dr. Joao Correia, originally from Portugal, Experienced Researcher at Maastricht University.

LANGUAGE: YOUR KEY TO THE WORLD.

14997257_10153912455656846_1580675135_n
Dr. Joao Correia

We all have the impression that the brain is vast, and that vastness allows us to perform a long list of human functions. One of the most distinctive functions humans have is by far the ability to communicate. Human communication is direct and self-motivated. We do not only express ourselves to others; we do it with the intention of changing the behavior and knowledge of others.

My research dives into the unknown neural circuits of the communicating brains via speech and language. I try to understand how we speak and how we understand the speech of others, and, in addition, how these seemingly natural capacities serve memory and thought and, above all, shape the advanced societies of our world. Imagine a car crash test.

fig1

A car at high speed drives into a brick wall. This is – figuratively – what happens at the tympanic membrane of our ears when we hear something. Sound waves (travelling at 340 meters per second) bring auditory information into our ears, which transform this mechanical energy into electric signals that can be interpreted by our brains.

Without this basic physical and neural capacity to receive sound information, for example from speech, infants wouldn’t develop normal speech and linguistic capabilities. Our ability to speak or to read owes much to this initial training in speech sound perception, such as hearing our parents’ voices.

fig2

As the auditory cortices in the left and right hemispheres receive signals from spoken language, they start to link to other brain areas that are being coherently stimulated. For example, we hear different melodic tunes (also called ‘signatures’ or ‘prosodies’) when our parents want to give us positive or negative feedback, or we hear the word ‘water’ coherently together with the experience of drinking water. In sum, our senses start becoming linked, originating richer memory representations (auditory, visual, tactile, olfactory, or emotional). How exactly these links are created and used in everyday life remains largely unknown.

Another linguistic faculty that is poorly understood is how we speak. Remember how swimming is a super exercise because it uses so many muscles of our body? Well, speaking uses more than 100 muscles: from the diaphragm and costal muscles, which create air flow, to the multiple muscles of the larynx, which create the pressure necessary to transform air flow into sound waves, and finally the muscles of the vocal tract, like the lips and tongue, which shape those sound waves into concrete speech sounds. Due to our highly linked brain, we are capable of developing speaking abilities purely from hearing other people speak, as well as from experiencing our own attempts to speak.

This link between auditory and motoric brain systems is often referred to as sensorimotor integration, because it provides a platform to integrate sensory and motor components. Sensorimotor integration is a key aspect of speech development, everyday speaking and comprehension. In a nutshell, we speak in a certain way because of how we hear and we hear in a certain way because of how we speak.

fig3.png
Source: internet.

I am deeply in love with the versatility and complexity of sensorimotor integration, as it has the potential to explain multiple mysteries of the communicating brains: how comprehension and speaking develop normally and abnormally, or how the brain learns to read.

Until recently, asking these questions would necessarily lead to difficult philosophical and psychological discussions for which my engineering background wouldn’t be ready. However, beyond these critical points in science, today we can image the human brain safely and with unprecedented detail, which allows us to directly test and create hypotheses about how humans communicate…

Functional MRI (magnetic resonance imaging) allows us to take magnetic pictures of the brain as people perform scientific experiments, including speaking or listening to speech. The pictures,

fig4.png

reflect oxygen consumption within each small 3D pixel (or voxel) and are extremely rich in detail. However, such a complex capability as language is not present in one or two voxels; it is distributed across the vast neural circuitry of the brain. Thousands of voxels per second must be analyzed during a single act of hearing, speaking, or reading.

This screams for computational tools able to handle such large amounts of data. In my work, I use tools that have been developed for statistical learning – like those used for predicting the weather – to learn how voxels behave for language. By investigating how voxels encode linguistic units, I hope to help formulate models of spoken communication that can have a direct impact on understanding the neural circuitry for speech and language, and help unravel how these circuits fail in speech and language disorders. There is a long road to walk, but with the help of parallel technological development, this road may now be driven in a fast sports car rather than walked on foot. In 2010, I counted on voxels of 42 cubic millimeters, in 2014 on voxels of 8 cubic millimeters, and now, in 2016, on voxels of 1 cubic millimeter. This increase in spatial resolution, which goes hand in hand with technological innovation, has a huge impact on our research. Step by step, the vastness of the human brain is becoming increasingly understood.
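The jump from 42 to 1 cubic-millimeter voxels is easy to put in numbers. A back-of-the-envelope sketch, assuming a roughly 1.2-liter brain (a rough round figure for illustration; actual brain volumes vary from person to person):

```python
# How many voxels cover a whole brain at each of the cited resolutions?
# The brain volume is an assumed round number, not a measured one.
BRAIN_VOLUME_MM3 = 1_200_000  # ~1.2 liters

for year, voxel_mm3 in [(2010, 42), (2014, 8), (2016, 1)]:
    n_voxels = BRAIN_VOLUME_MM3 // voxel_mm3
    print(f"{year}: {voxel_mm3} mm^3 voxels -> ~{n_voxels:,} voxels")

# 2010: 42 mm^3 voxels -> ~28,571 voxels
# 2014: 8 mm^3 voxels -> ~150,000 voxels
# 2016: 1 mm^3 voxels -> ~1,200,000 voxels
```

Roughly forty times more voxels per scan means roughly forty times more data for the statistical learning tools to digest, which is why the computational side matters so much.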

Post written by Joao Correia, M-BIC, Maastricht University

With love for Research,

signature