lederr

Archive for September, 2012|Monthly archive page

Can Parents’ Divorce Boost Son’s Risk for Stroke?

In Fitness/Health on Sunday, 30 September 2012 at 12:20

Researchers suspect stress hormone may play a role

WEDNESDAY, Sept. 26 (HealthDay News)

New research suggests a strong association between parental divorce and boys’ risk for stroke later in life.

Researchers from the University of Toronto found that boys whose parents divorce before they turn 18 years old are three times more likely to suffer a stroke as adults than men who grow up in intact families. They noted this greater risk of stroke was not the result of other contributing factors such as family violence or parental addiction.

“The strong association we found for males between parental divorce and stroke is extremely concerning,” study lead author Esme Fuller-Thomson, chair of the university’s Factor-Inwentash Faculty of Social Work, said in a university news release.

Even after adjusting for factors such as race, income and education, and adult health behaviors such as smoking, parental divorce was still associated with a threefold risk of stroke among males, said Fuller-Thomson.

Although the reason why these men seem at greater risk for stroke remains unclear, the study authors suggested it may have something to do with their levels of the stress hormone cortisol.

“It is possible that exposure to the stress of parental divorce may have biological implications that change the way these boys react to stress for the rest of their lives,” noted Fuller-Thomson.

Women from divorced families do not face the same increased risk, the researchers found.

Although an association was noted between parental divorce and sons’ later stroke risk, the research did not establish a cause-and-effect relationship. More research is needed to confirm the findings, the researchers said.

“If these findings are replicated in other studies, then perhaps health professionals will include information on a patient’s parental divorce status to improve targeting of stroke prevention education,” said Fuller-Thomson.

The findings are published in the September issue of the International Journal of Stroke.

More information

The U.S. National Library of Medicine has more about stress and health.

SOURCE: University of Toronto, news release, September 2012

Last Updated: Sept. 26, 2012

Copyright © 2012 HealthDay. All rights reserved.

Retrieved from: http://consumer.healthday.com/Article.asp?AID=668686

 


Big Brother and Boobies…

In Education on Sunday, 30 September 2012 at 12:15

East Haddam Student Wins Right To Wear ‘Boobies’ Bracelet In School

By KATHLEEN MEGAN, kmegan@courant.com, The Hartford Courant

September 27, 2012

School Had Forbidden Her From Wearing Breast Cancer Awareness Bracelet, But ACLU Of CT Stepped In

After two years of struggle with East Haddam school officials, Sara Dickinson, a high school senior, is finally free to wear a breast cancer awareness bracelet that says “I (heart) Boobies” and “Keep A Breast” without fear of getting into trouble.

Contending that the bracelet was disruptive, administrators at Nathan Hale-Ray High School banned it. When Sara continued to wear it, they confiscated it once, threatened Sara with detention and later suggested they would get an injunction to force her to stop wearing it.

But after the American Civil Liberties Union of Connecticut interceded on her behalf last month, the school district had a change of heart.

“It’s nice to know somebody is going to let me exercise my rights,” said Dickinson, who said the bracelet was designed to interest students her age. “The whole point was to get everybody talking, to raise awareness of breast cancer.”

Sandra Staub, legal director for the American Civil Liberties Union of Connecticut, said, “Sara isn’t the only student whose speech rights have been questioned by school officials around Connecticut. We want to make sure that other students and school officials are aware that students do have free speech rights unless their speech fits within very limited categories.”

Dickinson, an honors student and a member of the high school debate team, began wearing the bracelet as a sophomore because her grandmother died of breast cancer and other relatives have had the disease.

Along with other students, she purchased the bracelet for $4 through The Keep A Breast Foundation, which claims on its website to be “the leading youth-focused, global, nonprofit breast cancer organization.”

For a few months she and others wore the bracelets without an issue, but then later in the fall, high school Principal Eric Spencer announced a ban on the bracelets, Dickinson said. East Haddam Superintendent Mary Beth Iacobelli, who was hired by the district in July, said she was told that the bracelets were banned because they were “a disruption, a distraction from teaching and learning.”

Believing the ban was unjust, Dickinson continued to wear the bracelet.

“I could see how [the bracelet] could be distracting but my rationale was that kids in my school walk around with hair that is absolutely neon pink,” Dickinson said. “Well, I really feel that neon pink hair is much more distracting than a white bracelet.”

An English teacher confiscated the bracelet, but it was eventually returned. Over the course of two years, Dickinson said, she wore the bracelet but was always worried it might be taken away. She was threatened with detention at one point. In her sophomore year, she presented her case to the school board, but she says she never heard back from them.

Last June, the school administration approached her with a compromise, Dickinson said. They proposed that East Haddam students come up with their own design for a breast cancer bracelet to replace the “boobies” model. They suggested, she said, that if she didn’t agree they would get an injunction to force her to stop wearing the “boobies” bracelet to school.

“They wanted kids to design a bracelet that would not be offensive to anyone,” Dickinson said, but she said the bracelet didn’t offend kids. “I think adults are a lot more offended by most things in general. I feel like the older generation can be set in traditional ways.”

Dickinson turned the offer down and called the American Civil Liberties Union of Connecticut for help.

In an August letter to Eric Spencer, the ACLU’s Staub quoted from two U.S. Supreme Court rulings that said school officials may forbid speech only when it threatens to “materially or substantially interfere with … the operations of the school” or if it contains “vulgar and lewd speech” that would “undermine the school’s basic educational mission.”

“Sara’s bracelet does neither,” Staub’s letter said.

Iacobelli said the administration respects “kids’ right to support this cause or any cause. … We have many of our kids affected by family members who have cancer.”

She said students will be allowed to wear the bracelet, but will be accountable for their behavior. If the bracelets lead to a disruption or interference with teaching, she said, “We would address it on an individual basis, as opposed to banning bracelets across the board.”

Copyright © 2012, The Hartford Courant

Retrieved from: http://www.ctnow.com/news/hc-boobie-bracelet-0928-20120927,0,284886.story?hpt=us_bn7

exemplary…

In Inspiration, Mindfulness on Saturday, 29 September 2012 at 16:34

i am completely humbled.

http://shine.yahoo.com/women-who-shine/sikh-woman-balpreet-kaur-turns-cyber-bullying-incident-203500244.html

“My attitude and thoughts and actions have more value in them than my body… by not focusing on the physical beauty, I have time to cultivate those inner virtues and hopefully, focus my life on creating change and progress for this world in any way I can.” ~Balpreet Kaur

 

marriage equality…

In LGBTQI on Friday, 28 September 2012 at 14:31

just sayin’!

“Unintended Consequences…”

In Education, Education advocacy, School reform on Friday, 28 September 2012 at 14:28

one thing i have noticed this year, more than others, is the overwhelming feelings of tension, anxiety, fear, etc. that are pervasive and hang thick in the air every time i enter one of the schools i work at.  a feeling that you must ALWAYS watch your back because there is someone waiting around the corner to document any misstep (the idea that if they can point out how poorly one teacher, counselor, administrator, etc. is, they will curry favor with the higher ups and/or show their loyalty and possibly take the spotlight off of themselves).  the tension is thick and no doubt felt by the kids as well.

i believe much of this stems from the fact that those working in education feel threatened regarding the arbitrary nature of the “new and improved” evaluation system.  and the fact that they do not feel valued as an educator and can easily be replaced.  or the tension stemming from public perception as to what a ‘piece of cake’ this government job is.  or the fact that the students they are supposed to be teaching, disciplining, and testing are now going to have a say in whether or not they keep their jobs.  part of their evaluations include not only how the students do on those all-important standardized tests, but also on how the students themselves rate their teachers.  “give me an “f” and make my parents take away my new iphone…now you’re going to be sorry.  i’m going to give you a poor evaluation.”  we all know kids (grades 3-12, since they were ‘kind’ enough to leave out k-2) can see things from their own myopic viewpoint and, when angry, may not thoughtfully and truthfully assess a teacher.  say it isn’t so!  yes, it is.

i can’t say exactly why it is but, lately, when i walk into my schools i feel such a heaviness in the air.  anxiety stemming from uncertainty?  some of us haven’t even seen or been told about the very tool we are going to be assessed with nor have any clue as to what the actual assessment measures are.  how would that go in the real world?
“i am going to be looking at a number of factors related to how well you do your job in order to better determine whether or not you will be keeping your job.  you will have to collect data and show quantitative support as to how you achieved these objectives.  oh! you want to know what data to be preparing and what you are actually being evaluated on???  we haven’t been trained on that yet.  sorry.  we’ll let you know…to be determined.”  why should that cause anxiety?  i guess it should make a professional no more anxious than a student who has been told he or she has a test on “math” and never being told anything else.  ok, you know it’s on “math” but is it geometry?  pre-calculus? algebra?  just wait until the test and you’ll find out.  you also might fail the test, but hey, that’s life and then we can replace you with a student who can pass…

i realize my example is a bit far-fetched, but not that much.  all i can say is the feelings being expressed by my colleagues are those of anxiety, stress, and tension.  NOT a fun place to be and i can’t think that it’s good for the kids.

and i am not sure we needed a paper to tell us about the rising tensions, but the latest one is below.  teacher evaluation system or not…public education is not a fun place to be right now.  to me, the “unintended consequences” be they a result of the evaluation system or the system, in general, negatively impact the kids.  is that really the desired result?

***

Teacher Evaluation Systems Hold Inherent Tensions, Require Refining: American Enterprise Institute Report

With No Child Left Behind waiver applications and related legislation ushering in new teacher evaluation systems in upwards of 20 states, a report out of the American Enterprise Institute highlights four key tensions policymakers and educators must consider in refining such policies.

The paper, titled “The Hangover: Thinking About The Unintended Consequences Of The Nation’s Teacher Evaluation Binge,” first calls for evaluation requirements to allow for flexibility. There is a tendency to make policies overly prescriptive, which in turn could limit school autonomy and hinder innovation that could lead to the development of better evaluations.

Many of the evaluation proposals being circulated call for decreased attention on details like teachers’ training and other characteristics, and greater focus on the bigger picture — results they elicit in the classroom. On the other hand, mandates that teacher evaluations include specific design elements could be seen as overly prescriptive. According to the paper, this is already the case in several states that now require school districts to adopt teacher evaluations that employ state-defined value-added models or specific teacher evaluation rubrics. In addition, while NCLB waiver criteria require only that states design guidelines for teacher evaluation systems and ensure local districts implement systems that meet those guidelines, some states — including Delaware and South Carolina — have elected to adopt a single statewide teacher evaluation system, in which all the state’s districts must take part.

The paper’s authors point out that poorly designed evaluation requirements could also hinder other innovative models. Some schools have begun to incorporate learning-based software in their classrooms and other blended learning models; these technologies vary in design, approach, costs and teacher role. Student groups in these models are more flexible and fluid, and students receive instruction and tutoring from a variety of teachers and programs. This makes it difficult or impossible to attribute student learning gains in a particular subject to a particular teacher, and complicates teacher evaluation systems that rely on linking teachers to their students’ academic results.

The third tension the paper highlights is the purpose of evaluations; new evaluation systems have been marketed as a means of identifying and dismissing underperforming teachers, while providing all teachers with useful feedback to help improve their performance. That said, state efforts to create new evaluation systems have focused much more on what happens to teachers at the bottom of the spectrum, versus those in the middle or at the top.

Several states’ new teacher evaluation laws mandate the creation of a professional development plan only for low-performing teachers, and chiefly as a means of allowing them an opportunity to improve before dismissing them. Current design efforts have not focused on incorporating features that would ensure evaluations actually help teachers improve. According to the report’s authors, evaluation systems need to be designed with a mind to allowing for face-to-face discussion time between the teacher and his or her evaluator.

Lastly, there is a prevailing sentiment that holding teachers accountable for their performance will more closely align teaching with norms in other professions. However, most professional fields rely on a combination of data and managerial judgment when conducting evaluations and making subsequent personnel decisions. This is in stark contrast to the teaching profession, in which new evaluation systems have aimed to eliminate subjective judgments entirely, instead focusing solely on student performance.

According to the paper, the best protection against biased managerial judgment is to ensure that the managers themselves are also held accountable for performance. Furthermore, in designing value-added systems, policymakers should consider whether the elements they are adding move education away from or closer to professional norms in other fields.

The report’s authors offer several policy recommendations for designing new teacher evaluation systems moving forward:

• Be clear about the problems new evaluation systems are intended to solve.

• Do not mistake processes and systems as substitutes for cultural change.

• Look at the entire education ecosystem, including broader labor-market impacts, pre- and in-service preparation, standards and assessments, charter schools, and growth of early childhood education and innovative school models.

• Focus on improvement, not just deselection.

• Encourage and respect innovation.

• Think carefully about waivers versus umbrellas.

• Do not expect legislation to do regulation’s job.

• Create innovation zones for pilots—and fund them.

Retrieved from: http://www.huffingtonpost.com/2012/09/28/american-enterprise-insti_n_1921088.html

Is it a Pretty Ugly World?

In Inspiration, Mindfulness on Thursday, 27 September 2012 at 12:05

Is it a Pretty Ugly World?

be happy, do nothing…

In Fitness/Health, Inspiration, Mindfulness on Thursday, 27 September 2012 at 11:42

To be Mentally Sound, DO NOTHING! Says Psychologist!

Practically the whole planet is on the move to find ways to maintain sound psychological health. According to one psychologist, this can prove counterproductive. In a recent report, Jamie Gruman says the key to great mental well-being is actually to do nothing at all.

“Health: Social Psychologist Proposes Science of Positive Thinking”

Canadian social psychologist Jamie Gruman is proposing a new way of achieving nirvana: Do nothing.

Instead, live in the moment and embrace the “serene and contented acceptance of life as it is, with no ambitions of acquisition, accomplishment or progress toward goals,” said Gruman, co-founder of the newly created Canadian Positive Psychology Association, a network of scholars and academics studying human well-being and happiness.

Psychology has long focused on our inner torment: understanding why people get depressed or anxious, and how to alleviate it. The emphasis has been on “disorders,” “deficits,” “neuroses” and the need for “therapy.”

Positive psychology emphasizes strengths more than illness. It focuses on happiness, well-being, resilience, empathy, gratitude and forgiveness — how to “flourish” as a human. One idea, said Frank Farley, an Edmonton native and a past president of the American Psychological Association who studies heroism and personality, is that maybe it can inoculate people against mental distress.

More than a decade after its founding, the field is undergoing something of a revival. The neuroscience behind it is advancing. Researchers are finding links between positive emotions and a longer, healthier life span.

At the same time, the notion of a healthy national psyche is being embraced more openly by economists, politicians and political scientists around the globe, including in Canada, where, for example, Green Party leader Elizabeth May recently introduced a private member’s bill in the House of Commons meant to develop a set of indicators to measure “the real health and well-being of people.” A United Nations expert panel earlier this year called for nations around the globe to track the happiness of their people, arguing that economic wealth doesn’t equal psychological health.

Except for those living below the poverty line, “the correlation between money and happiness is almost non-existent,” said Gruman, an associate professor of organizational behaviour at the University of Guelph.

“We’re trying to find out what makes people happy,” Gruman said, “because we’ve learned it isn’t money.”

Science is searching for prescriptions for happiness at a time when North American adults increasingly are being medicated with anti-depressants.

According to new figures released exclusively to Postmedia News by market research firm IMS Brogan, Canadian pharmacists dispensed 40.2 million prescriptions worth $1.7 billion for anti-depressants in 2011 — a 7.5 per cent increase over 2010.

Over the last five years, the use of anti-depressants has increased on a per-person basis in every province except Prince Edward Island. Of the 40.2 million prescriptions dispensed across the nation last year, Quebec had the largest share (14.2 million) followed by Ontario (13.8 million) and B.C. (4.1 million).

In all, Canadians made 7.9 million visits to a doctor for symptoms of depression in 2011, according to IMS Brogan.

Gruman said positive-psych isn’t the Pollyannaish, “lollipops-and-rainbows” approach to living that some critics dismiss it as.

“It’s about living the best possible life. I don’t think that only understanding pathology and misery leads us to knowing how to live the best possible life we can.”

Humans have an innate tendency to focus on the negative, he said, and there’s an evolutionary reason for that.

“When you’re feeling good, that’s the body’s signal that everything is hunky-dory. When you’re feeling upset or anxious or scared, that’s your body’s way of telling you something is wrong. So it’s evolutionarily adaptive for us to be drawn to the negative — it helps us survive.

“When there’s a sabre-toothed tiger running after you, it’s healthy to be scared. You’re going to run away and you’re going to live.”

A healthy dose of pessimism is appropriate at times, he said, adding that life “necessarily requires admitting the negative and recognizing the negative and respecting the negative.”

“But it also involves trying to understand, when you’re not dying of cancer, when you’re not suffering your heart attack, when you’re not suffering depression, when you have a positive moment, how do you make the most of those moments?”

Dr. Adam Anderson is Canada Research Chair in Affective Neuroscience at the University of Toronto. Anderson said a part of the brain called the medial prefrontal cortex is activated in response to positive emotions. “You find it in jazz musicians improvising,” he said.

If that is the brain’s “positivity muscle,” can we cultivate it? In randomized, controlled trials, his team has found that mindfulness meditation alters the brain; it changes the activity in the prefrontal cortex.

“Some people are lucky and have the right genes, we think, to be able to live the good life. And, if you don’t, you have to exercise in some way to try to boost that,” Anderson said.

Some equate the good life with constantly seeking the next pleasure, which Anderson said is like an addiction. “That’s like saying a cocaine addict has a really good model for living the good life because they’re trying to maximize the number of pleasures they have.”

Anderson said it’s not about seeking out or wanting things, “but to explore. To be creative, to play.”

The function of happiness isn’t to be happy, said Anderson, who isn’t a positive psychologist but who will be a featured speaker at the Canadian Positive Psychology Association’s inaugural conference this month in Toronto. “It’s evolution’s way of saying, go out and discover new things. Go play, go explore.”

Not everyone is enthused by the rush to “positivity.” All of us struggle with a tension “between our own dark feelings and the grating call of the bright, shiny, happy world,” said Eric Wilson, author of Against Happiness. Self-help books can further guilt us into thinking, “I’m not happy enough.”

But Anderson said the word “happy” seems “so loaded and confused.”

Our economy is built on selling happiness through consumption, he argued — and that increasing depression could, paradoxically, be a fallout of seeking happiness.

“If you go out seeking happiness and you don’t find it — you desire something, you assume that’s going to make you happy, you get it and you’re not happier, or you’re happier for a little bit of time, ultimately, that will make you depressed,” he said.

This news came from Vancouversun.com.

When you think about it, people indeed have the tendency to be very busy on things to the point that it becomes unhealthy. Slowing down to smell the roses, although a cliché, should be put to practice especially in a world that moves so fast that it leaves everyone behind scrambling and stressed out to keep up.


Retrieved from: http://psychologyonlinecourses.net/to-be-mentally-sound-do-nothing-says-psychologist/

 

Aspartame and Neurotoxicity

In Fitness/Health on Thursday, 27 September 2012 at 10:02

New Study Shows Aspartame Is Neurotoxic, Damages the Brain, and Is Still Approved for Use in Over 90 Countries

Submitted by Michelle on 26 September 2012 – 9:48am

 

A new study on aspartame has the potential to reignite the decades-old controversy behind this artificial sweetener’s safety, or lack thereof. As far back as 1996, folks were writing about the potential link between aspartame and increasing brain tumor rates.

Indeed, its intrinsic neurotoxicity and carcinogenicity have been confirmed in the biomedical literature. And yet, aspartame has been approved for use in thousands of consumer products in over 90 countries, and is still being consumed by millions worldwide on a daily basis – despite the fact that over 40 adverse health effects of aspartame have been documented.

The new study, published in the September edition of the Journal of Bioscience and titled, “Effect of chronic exposure to aspartame on oxidative stress in the brain of albino rats,” aimed to test the hypothesis that chronic consumption of aspartame may be causing neurological damage in exposed populations. They found that chronic (90 day) administration of aspartame to rats, at ranges only 50% above what the FDA considers safe for human consumption, resulted in blood and brain tissue changes consistent with brain damage.

Aspartame is metabolized into three distinct components: aspartic acid, methanol and phenylalanine. While aspartic acid is a well-known excitotoxin, phenylalanine only presents a serious health concern to those with a genetic disorder known as phenylketonuria. Methanol, on the other hand, is far more problematic, as it is not naturally found in significant quantities in the human diet.

According to a recent review, until 200 years ago methanol was an extremely rare component of the human diet, and it is still rarely consumed in contemporary hunter-gatherer cultures. With the invention of canning in the 1800s, canned and bottled fruits and vegetables, whose methanol content greatly exceeds that of their fresh counterparts, became far more prevalent. The recent dietary introduction of aspartame, an artificial sweetener 11% methanol by weight, has also greatly increased methanol consumption.

Moreover, the aspartame metabolite methanol (also known as wood alcohol) is highly toxic and is metabolized into the known human carcinogen formaldehyde and into formic acid, which is known to be highly toxic to the central nervous system. Considering the fact that normal human body temperature is approximately 98.6 degrees Fahrenheit, and that aspartame will convert to its toxic metabolites at temperatures as low as 86 degrees Fahrenheit, the finding that aspartame is neurotoxic to animals is not a surprise.

The authors of the new study surmised that the observed adverse brain changes were due to the generation of oxidative stress in brain regions.

Aspartame, of course, is a proprietary synthetic chemical not found in nature, and exists primarily because plants like stevia, which have significant, clinically-substantiated healing properties, can be grown in your back yard for free and are therefore not profitable commodities that can be produced and controlled only by a few.

But, aspartame is not the only toxic sweetener on the market. A growing body of research now shows that sucralose, known by the brand-name Splenda, is also capable of suppressing the immune system, causing inflammatory bowel conditions such as Crohn’s and ulcerative colitis, migraine headaches, and DNA damage.

The trick is to stick with naturally occurring compounds whose sweetness is not associated with adverse health effects, such as honey, xylitol, erythritol, and stevia.

Article by Sayer Ji of GreenMedInfo

Retrieved from: http://theallergymenu.com/blog/new-study-shows-aspartame-neurotoxic-damages-brain-still-approved-use-over-90-countries

Schools and ADHD…an “F” grade

In ADHD, ADHD Adult, ADHD child/adolescent, Education, School Psychology, Special Education on Thursday, 27 September 2012 at 07:25

How schools (even great ones) fail kids with ADHD

By Valerie Strauss

There’s a group of students struggling through a school system that is hard to navigate, and they get little attention in the media or in the debate about how to fix schools: children with ADHD.

ADHD, or Attention Deficit Hyperactivity Disorder, is a brain condition that makes it especially hard for children to focus and concentrate in school, and it has a number of other symptoms. It is too often misunderstood by teachers, parents and even the students themselves. According to the Centers for Disease Control and Prevention, about 9.5% of children 4 to 17 years of age, or 5.4 million children, had been diagnosed with ADHD as of 2007. Many others who have the disorder haven’t had the benefit of a diagnosis.

Here is a powerful post by David Bernstein, a nonprofit executive who lives in Gaithersburg, Md., writing about the difficulties that his two sons, ages 7 and 15, have confronted in school as a result of ADHD.

By David Bernstein

When I was in fourth grade in the mid-1970s, my teacher pronounced that I was going to be an artist. The truth was that she didn’t think I had any academic talent to speak of. I was an “ADHD” boy who couldn’t follow directions, figure out what page we were on in the book, or turn my work in on time. With a severely limited understanding of the mind, the teacher simultaneously overestimated my artistic talent and underestimated my intellectual gifts.

School, particularly elementary school, was not for boys like me. And, 25 years later, even the very best schools have only changed slightly.

Like so many others who deviated from the norm, I learned much more from exploring my passions than I ever did from a structured school setting. With the help of numerous mentors, I taught myself how to write op-eds, lead teams, speak, and advocate. I actually cared about ideas, not primarily because of school, but despite it. The Washington area, alive with political discourse, was the perfect place to give expression to my passions, and I moved here in my early twenties to take a job in the world of advocacy.

Now I have two boys of my own, neither of whom possesses a normative learning style. My teenage son goes to what is widely considered an excellent private school in the area with numerous wonderful, committed teachers. But like nearly every other educational institution in America, it’s built on an outmoded educational model.

Ironically, I first began to question the current model of education when the headmaster of my son’s school showed a video clip at a graduation ceremony of creativity guru Ken Robinson discussing how education kills creativity. Robinson maintains that we are using a model of education left over from the industrial revolution, where schools are organized along factory lines, complete with ringing bells and separate facilities.  “We educate kids in batches, as if the most important thing about them is their date of manufacture,” he states in another video on the topic.

Influenced by Robinson, best-selling author Seth Godin recently published a manifesto, “Stop Stealing Dreams,” on the need for radical education reform. He lays out the need for a post-industrial educational model that caters to diverse learning styles, passion for ideas and what individual students really care about. In such a school, teachers are coaches who help students in a journey of self-discovery. Students have a great deal of choice to determine what they study and how they study it, in stark contrast to the one-size-fits-all system of today.

Your child is right that he or she will never use trigonometry (unless so inclined). Exposing children to a variety of subjects is one thing, but forcing the same subjects on them for 13 years is another. In the modern marketplace, depth is just as important as breadth, if not more so. Schools are all about breadth.

In today’s schools, the “good” students end up conforming, diminishing their own prospects for greatness, and the rest end up in an excruciating battle with themselves, their parents (trust me on this), their teachers and the endless tutors. My job as a parent, I’m reminded over and over again by the school, is to enforce the absurdity of the current system — make them turn everything in on time — which I do faithfully because there seems to be no other choice.

My youngest child, a rising second grader, rambunctious and restless as any you’ll find, has “fallen behind” in reading. He is “not sufficiently available for learning,” we are told. The teachers and guidance counselors, loving and well-meaning though they are, insist on ADHD meds so he can amp up his reading and catch up with his classmates. He’s a creative, bright, independent child, who will, there’s not a doubt in my mind, learn to read well and become highly successful. But he’s just not on their timetable for reading.

We are forced, in the words of Ken Robinson, to “anesthetize” him so he can function in today’s antiquated classroom setting. The Ritalin will do nothing to make him a more successful human being, a better thinker, or a more productive member of society. It will simply help him keep up with the masses and potentially drain him of some of his creative juices. By forcing him and so many other children like him to take these powerful drugs, schools deprive the future economy and society of precisely the creative talent they will need the most.

Greg Selkoe, the 36-year-old CEO of Karmaloop, a growing hipster media company with revenue of more than $130 million a year, stated in a recent interview in Inc.: “I was diagnosed with ADHD in elementary school and actually got kicked out of several schools before landing in one for kids with learning issues. What made me not do well in school has actually been very beneficial in business, because I can focus on something very intensely for a short while and then move on to the next thing.”

Yet today’s schools insist that we prescribe our kids drugs to rid them of their hyper-focus.

I’ve talked with a number of educators who see the writing on the wall for the current education system. They know that the economic reality of the day demands that schools change. But they also know that college-obsessed parents would balk at such changes, fearful that they might detract from their kids’ chances to go to the best college possible.
It will take monumental leadership to change the current educational mindset and model. In the meantime, my kids will struggle through school, battered along the way, and, like their father, be forced to discover most of their talents and passions on their own, outside of school.

Retrieved from: http://www.washingtonpost.com/blogs/answer-sheet/post/how-schools-even-great-ones-fail-kids-with-adhd/2012/09/23/8e81c83c-f828-11e1-8253-3f495ae70650_blog.html

a people’s education platform

In Education, Education advocacy, School reform on Thursday, 27 September 2012 at 07:23

A people’s education platform

By Valerie Strauss

With the presidential election approaching and the recent Chicago teachers strike, it seems like a good time for the following post, a “people’s platform” for education, or what Americans really want from their public schools. It was written by Nancy Flanagan, an education consultant and blogger at Education Week Teacher, and Don Bartalo, a retired superintendent who now works as an instructional coach and is also an author.

By Nancy Flanagan and Don Bartalo

What if education policy guidelines and political platforms were shaped by rank-and-file citizens?

What if the real education experts — parents, teachers, students and school leaders — got to fashion a platform of policy goals for education and determine which ideas provide maximum opportunity for public school students, our future citizens?

We need a national, nonpartisan effort to figure out how to strengthen public education, a kind of people’s platform. Such an effort — given the upcoming election and the recent Chicago Teachers Union strike — could not be timelier. Says Karen Lewis, president of the Chicago Teachers Union:

[Billionaire education reformers] have been putting money and money and money into education. And all they come up with is, ‘Let’s just get rid of all the teachers. Let’s have a national curriculum. Let’s test people to death.’ None of this stuff works. Not only does it not work, it exacerbates the problem. Standardized tests have been disguised as merit when they’re just ranking and sorting, and they’re disguising race and class privilege. We don’t have honest discussions about education in this country because we don’t want to have honest discussions about race and class.

 Amen. Let’s start talking.

Humorist Joel Stein says: A platform is supposed to be boring. It’s one of the few documents that people write hoping no one reads them.

An education platform written by educators and stakeholders would not be boring! It could be a lively and changing framework of ideas and principles. We have invested more than a century in funding the uniquely American concept of a free, high-quality, fully public education for every child. Why not lay down a clear template of what this means, in terms of federal responsibilities?

We might begin with a preamble:

There comes a time when a truly great nation must either choose to have the best public schools possible or stop talking about the importance of education.  Structures for publicly supported education must:

  • Help students learn

  • Help teachers teach

  • Let school leaders and elected boards lead

  • Relentlessly pursue genuine excellence and equity

As we see it, there are only three fundamental parts to the federal role in education: Excellence, equity, and economies of scale.

Excellence comes before equity — because you can’t have equity unless you know what top quality looks like. It’s in all of our interests to provide a clear and explicit vision of excellence: Rich curriculum. Creative instruction, tailored for unique student populations. Ample resources. Community input. Future-focused, joyful learning.

What do we want students to experience, learn, understand, and be able to do? What about citizenship? Defining the mission of public education ought to come first in a platform.

Equity is the most fundamental federal role in education policy. The national government’s central priority must be assuring that children in the Mississippi Delta have the same basic opportunities as those who grow up on Long Island or a ranch in Wyoming. Besides great teaching and sufficient tools, what are those opportunities? A free, first-rate pre-school education program? Tutoring to ensure literacy for all? Let’s make a list.

Economies of Scale. What can the federal government provide, cheaper and better than smaller-scale commercial vendors? Lots of technological infrastructure. Free materials — both digital and hard copy. Travel and internships. Scholarships and low-cost loans. Ideas and research, instituted without the taint of corporate funding.

It is ironic that the federal Department of Education is now developing computer-based tests for schools with inadequate hardware to administer them. It should be the other way around, with the department using its purchasing power to provide or reduce costs of infrastructure, rather than shaping curricular and instructional work that belongs in districts and classrooms. We need a Race to Improve All Schools, not a state-against-state competition for federal grants aligned with questionable ideas that should be decided at local levels.

The Education Department needs to return to its original purpose.

Congress established the U.S. Department of Education in 1979. ED’s stated mission is to:

Strengthen federal commitment to assuring access to equal educational opportunity;

Supplement and complement the efforts of education stakeholders;

Encourage the increased involvement of the public, parents, and students in federal education programs;

Promote improvements in the quality and usefulness of education, to share research information;

Improve the coordination and management of federal programs and accountability to the public.

Note: ED was not established to make teachers accountable to the public, but to make the federal government accountable to the people. When was this guiding principle lost?

Public schools should be the foundational hope of the American people — the children of the poor have no other hope. It’s time for ordinary citizens to initiate those uncomfortable conversations about race and class, and share our national hopes and dreams for public education with policy-makers.

Retrieved from: http://www.washingtonpost.com/blogs/answer-sheet/post/a-peoples-education-platform/2012/09/25/ecd63dbc-072f-11e2-afff-d6c7f20a83bf_blog.html#pagebreak

Tensions and Pitfalls in Teacher Evaluation Policies

In Education, Pedagogy, Special Education on Thursday, 27 September 2012 at 04:40

Getting Honest About the Tensions and Pitfalls in Teacher Evaluation Policies

By Sara Mead on September 26, 2012 10:48 AM

Today the American Enterprise Institute is releasing a new paper that I wrote with my colleagues Andrew Rotherham and Rachael Brown looking at some of the tensions in the current policy shift towards new teacher evaluation systems–and advising policymakers on how to avoid some potential pitfalls implicit in those tensions.

Obviously, Andy, Rachael and I are no foes of the move towards new systems of teacher evaluation: We believe the previous system–which ignored student learning completely, failed to recognize excellence or give teachers meaningful feedback to improve, and rated 99+% of teachers satisfactory or better–was clearly a broken one. We also believe that new evaluation systems, when done well, have the potential not only to identify ineffective teachers who would be better suited to other careers, but also to give due credit to excellent teachers who should be rewarded and retained, and to help all teachers improve their performance.

But we’re also very cognizant of the pitfalls here. In the rush to gain public and political support for new evaluation systems, proponents of these systems have too often over-promised or ignored real limitations, tensions, and trade-offs in both the design of these systems and the technologies (including value-added metrics, data systems, and observational rubrics) that underlie them. As I wrote recently, there’s a temptation among some reformers to treat value-added measures and evaluation systems as a sort of “magical black box” that, if we just use it, will tell us the real, honest truth about teacher performance. But the reality is a lot more complicated than that. And in failing to acknowledge that, reformers run the risk of jeopardizing the sustainability and success of the very systems they seek to promote. We need to move forward with new teacher evaluation systems–but we need to do so with humility, the recognition that no one knows all the answers, and plenty of room for flexibility and revision over time as we learn from the successes and challenges of various models.

Andy, Rachael, and I outline four key tensions that have been overlooked in current debates over teacher evaluation: 1) Tensions between centralized control and flexibility, 2) Tensions about the role of teacher evaluation in an evolving overall ecosystem where an increasing number of teachers cannot be directly linked to the test scores of a specific group of students in a specific subject, 3) Tensions about how to prioritize different purposes (accountability, personnel decisions, professional development) for which evaluation results may be used, and 4) Tensions about what it really means to evaluate teachers as professionals. We also offer recommendations for policymakers seeking to negotiate and balance these tensions in evolving teacher evaluation systems. Check out the whole thing here.

Retrieved from: http://blogs.edweek.org/edweek/sarameads_policy_notebook/2012/09/avoiding_a_teacher_evaluation_hangover.html?print=1

The hangover: Thinking about the unintended consequences of the nation’s teacher evaluation binge

Sara Mead, Andrew J. Rotherham, Rachael Brown | American Enterprise Institute

September 26, 2012

Over the past three years, more than twenty US states have passed legislation establishing new teacher evaluation requirements and systems, and even more have committed to do so in Race to the Top or Elementary and Secondary Education Act Flexibility Waiver applications. These new evaluation systems have real potential to foster a more performance-oriented public education culture that gives teachers meaningful feedback about the quality and impact of their work. But there are pitfalls in states’ rush to legislate new systems, and there are real tensions and trade-offs in their design.

Unfortunately, much of the current policy debate has been framed in stark ideological terms that leave little room for adult discussion of these tensions. This paper seeks to move the debate beyond ideology and technical issues by highlighting four key tensions that policymakers, advocates, and educators must consider in the development of new teacher evaluations:

  • Flexibility versus control: There is a temptation to prescribe and legislate details of evaluations to ensure rigor and prevent evaluations from being watered down in implementation. But overly prescriptive policies may also limit school autonomy and stifle innovation that could lead to the development of better evaluations.
  • Evaluation in an evolving system: Poorly designed evaluation requirements could pose an obstacle to blended learning and other innovative models in which it is difficult or impossible to attribute student learning gains in a particular subject to a particular teacher.
  • Purposes of evaluations: New evaluation systems have been sold as a way both to identify and dismiss underperforming teachers and to provide all teachers with useful feedback to help them improve their performance. But there are strong tensions between these purposes that create trade-offs in evaluation system design.
  • Evaluating teachers as professionals: Advocates argue that holding teachers responsible for their performance will bring teaching more in line with norms in other fields, but most professional fields rely on a combination of data and managerial judgment when making evaluation and personnel decisions, and subsequently hold managers accountable for those decisions, rather than trying to eliminate subjective judgments as some new teacher evaluation systems seek to do.

Recognizing these tensions and trade-offs, this paper offers several policy recommendations:

  • Be clear about the problems new evaluation systems are intended to solve.
  • Do not mistake processes and systems as substitutes for cultural change.
  • Look at the entire education ecosystem, including broader labor-market impacts, pre- and in-service preparation, standards and assessments, charter schools, and growth of early childhood education and innovative school models.
  • Focus on improvement, not just deselection.
  • Encourage and respect innovation.
  • Think carefully about waivers versus umbrellas.
  • Do not expect legislation to do regulation’s job.
  • Create innovation zones for pilots—and fund them.

Retrieved from: http://www.aei.org/papers/education/k-12/teacher-policies/the-hangover-thinking-about-the-unintended-consequences-of-the-nations-teacher-evaluation-binge/

 

an interesting look at adhd…

In ADHD, ADHD Adult, ADHD child/adolescent, ADHD stimulant treatment, Mindfulness on Thursday, 27 September 2012 at 04:06

Of ADHD and Lord Ganesha (A Tale of the Differently-Headed)

By ZOË KESSLER, BA, B.ED.

I was a happy little thing as a child. Then, I started feeling like a freak. People called me names. I didn’t know why.

Sometimes, they’d push me, or punch me. Once, I ended up face down in gravel in the schoolyard.

I looked into the mirror to see if I was bleeding. Tears streamed from my eyes, clouding my vision. Finally, my tears dried. That’s when I saw it.

My eyes opened wide. I jumped back in shock. My head was so very strange. I didn’t look anything like any of my friends.

No wonder they didn’t like me. No wonder they all made fun of me. No wonder they wouldn’t let me play games with them, instead calling me names as I walked away.

“Freak!”

“Weirdo!”

“Loser!”

I was grotesque.

I ran home crying. Mom! Mom!

But mom couldn’t do anything. Reluctantly, she told me that this was my head and I would have to live with it.

No! I cried, running out of the house. I ran and ran and ran. I couldn’t believe it. Why would my mom lie to me? Of course we could fix my head. Of course we could. She just didn’t know how, so she was lying to me.

I ran to my school’s library. I sat tucked away in a corner where no one could find me. Mom was right! I would have this head for ever and ever. It was true: no one could save me. I hung my head.

My teardrops formed dark circles and spread on the page. The wet paper bubbled, each drop coming alive, the words rising up to mock me. I cried and cried. I would always have this head.

Always.


This is the story of many with ADHD. This is also what vividly came to my imagination as I meditated on Lord Ganesha, the Hindu elephant God, one of Hinduism’s major deities.

There are several versions of Lord Ganesha’s story, but he’s generally accepted as the son of Shiva and Parvati, themselves Hindu deities. Still a babe, Lord Ganesha suffers a terrible tragedy: through a misunderstanding, his head is lopped off by dad.

Mom Parvati is of course grief-stricken, so dad thinks fast and replaces the babe’s head with that of a young elephant.

And the little trooper turns it all around, conquering adversity to become Lord Ganesha, inspiring millions of followers. Metaphorically speaking, his many qualities can also inspire those of us with ADHD.

Symbols as sources of inspiration

The symbolism of Lord Ganesha is singularly relevant to those of us with ADHD. For example, his large ears remind us to listen; his small eyes, to focus and concentrate; and his tiny mouth, to speak less.

Lord Ganesha rose above his misfortunes, becoming revered as the Remover of Obstacles. You think you’ve got challenges? Think about what Lord Ganesha had to overcome with his strange, unusual head.

Now think about what you’ve had to overcome with your unusual head. Lord Ganesha can inspire us to keep fighting to overcome our own obstacles, which, let’s face it, are as small as mice when compared to having the head of an elephant.

Count your blessings.

Annual celebration

As India celebrates its annual Ganesh Chaturthi festival in honor of Lord Ganesha (September 19 – 29, 2012), I’m offering gratitude for Lord Ganesha as a source of inspiration and insight in my daily meditations.

What inspires you?

I’d like to invite you to contemplate your own sources of strength and inspiration. Look closely: you might find more than initially meets the eye. I began to pray to Lord Ganesha to remove obstacles, and found myself relating to having a very different head.

When you look deeply into the face of the Divine, you too may find yourself reflected back.

Namaste.

Retrieved from: http://blogs.psychcentral.com/adhd-zoe/2012/09/of-adhd-and-lord-ganesha-a-tale-of-the-differently-headed/

Information regarding the upcoming DSM V

In Neuropsychology, Psychiatry, School Psychology on Wednesday, 26 September 2012 at 08:00

DSM-5: Finding a Middle Ground

Nassir Ghaemi, MD

DSM-5: Validity vs Reliability

This year’s American Psychiatric Association (APA) annual meeting was probably the last before the publication of the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), scheduled for May of next year. Hence, there was a sense of tense uncertainty in the many sessions addressing potential DSM-5 revisions.

DSM-5 Task Force Vice Chair Darrel Regier headed a symposium reviewing results of field trials on the reliability of proposed DSM-5 criteria. The trials were designed to assess whether clinicians can apply the proposed criteria consistently, and they yielded kappa values for the individual proposals.

Kappa values reflect the agreement in a rating by 2 different persons, after correction for chance agreement. From a statistical perspective, kappa values greater than 0.5 are generally considered good. As an example, 70% agreement between raters translates to a kappa value of 0.4.
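The kappa arithmetic is simple enough to sketch. A minimal illustration in Python (the function name is ours; the 50% chance-agreement figure assumes two raters choosing between two equally likely ratings, which is the scenario under which the article's "70% agreement equals kappa 0.4" example holds):

```python
def cohens_kappa(p_observed: float, p_chance: float) -> float:
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    return (p_observed - p_chance) / (1.0 - p_chance)

# With 50% chance agreement, 70% raw agreement corrects down to kappa = 0.4,
# matching the example in the text; perfect agreement gives kappa = 1.0.
print(round(cohens_kappa(0.70, 0.50), 2))
print(cohens_kappa(1.00, 0.50))
```

The correction is the point: a seemingly respectable raw agreement shrinks once the agreement expected by pure chance is subtracted out.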

Results of the field trials showed good agreement for such disorders as major neurocognitive disorder, autism spectrum disorders, and post-traumatic stress disorder, with kappa values of 0.78, 0.69, and 0.67, respectively. However, poor kappa values, in the range of 0.20-0.40, were reported for commonly diagnosed conditions, such as generalized anxiety disorder and major depressive disorder. All of the observed kappa values in the DSM-5 field trials translate to agreement between clinicians of around 50%.

Is this good or bad? A recent editorial[1] by DSM-5 leaders makes comparisons with other medical settings, and the claim is that most medical diagnoses involve diagnostic kappa values similar to those in the DSM-5 field trials. I spoke with prominent psychiatrists at this year’s meeting who were involved in some of these DSM studies and discussions; they expressed unhappiness with the kappa values in DSM-5 field trials, and some pointed out that kappa values in the DSM-III were higher.

So, the reliability of DSM-5 criteria seems to have declined compared to DSM-III. Is this a problem? It might be, but it might not be.

Reliability only means that we agree. It doesn’t mean that we agree on what is right. Validity is a separate issue. It could be that criteria are changed so that they are more valid — that is, actually true — but this could increase unreliability; raters might have to use, for instance, some criteria that are less objective and hence less replicable.

We will see. DSM-5 might be more valid but less reliable than DSM-IV and DSM-III. If so, that’s progress, in a way.

It is also important to think about other medical studies with low reliability. We should be careful about criticizing certain diagnoses, such as bipolar disorder (as some have[2]), without an awareness that this is the case for almost all our diagnoses. The problem of reliability is a general one, not a problem about claimed “overdiagnosis” of some conditions.

In my view, it is definitely time for a new edition of DSM; we can’t pretend that something written almost 2 decades ago is anywhere near up to date, with a generation of new research. Some of the proposed changes in DSM-5 — for example, the inclusion of antidepressant-induced mania as part of bipolar disorder; the inclusion of dimensions for axis II personality conditions; and the removal of nosologically nonspecific axis II diagnoses, such as “histrionic” personality — are consistent with an update based on convincing new research. But other changes, such as the wish to discourage the diagnosis of childhood bipolar disorder by making up a new category based on limited data (temper dysregulation disorder), merely repeat the mistakes of DSM-IV. Making up diagnoses because we don’t like others is not a scientifically sound way to revise a profession’s diagnostic system, and it won’t serve us well for the next 20 years.

But DSM-IV Has Limitations, Too

Also at this year’s APA meeting, Steven Hyman, a psychiatrist and neurobiology researcher who is former head of the National Institute of Mental Health, gave a plenary lecture on DSM-5 that was refreshingly honest in its appreciation of the limitations that the DSM-IV has placed on research. Rewinding to DSM-III, from the 1980s, he made the point that although that edition was a major advance, it is now out of date, and that DSM-IV, which merely continued the basic DSM-III structure, needs major changes. “The DSM-III was a brilliant document that could not have foreseen the science. It’s time to move on scientifically,” said Hyman.

Hyman noted that DSM-III actually hinders science. Researchers have difficulty getting funding from the National Institutes of Health or publishing papers that go outside DSM criteria: “For example, it was very hard to get a grant to test the hypothesis that maybe the apparent comorbidity of multiple anxiety disorder and mood disorders was just that there was a single underlying process or single disorder that got expressed with different symptom complexes in different times in life.”

There was a name for that condition — neurotic depression — and Sir Martin Roth, the great British psychopathologist, warned repeatedly in the 1970s and 1980s that it would be a mistake for DSM-III to remove it. DSM-III made that mistake, and the field has since acted like it would be a sin to study the matter any further.

There are many examples of this ilk in DSM-III and DSM-IV. Some who are upset with proposed changes in DSM-5 are diagnostic conservatives who seem to think that all our questions were answered in 1980 and 1994.

Dr. Hyman has been influential in designing the new Research Domain Criteria (RDOC), an attempt to create a DSM for research that begins with biological, rather than clinical, terms. I agree with the need for a DSM for research, but I don’t think our biological knowledge is advanced enough yet — despite all the advances that have been made — to build a diagnostic system from them, even for research purposes.

I think we should have a new DSM just for research: a system of Research Diagnostic Criteria (RDC), like what was created in the 1970s that led to DSM-III to begin with. I’ve started that process with my colleagues in the world of bipolar disorder research. We will publish a new RDC for bipolar disorder within the coming year — before DSM-5, I hope. If we do so, I hope that colleagues in other specialties in psychiatry will produce similar RDCs.

With these new publications, psychiatry may then be in a position for real advance. We will then have 3 nosologies, all complementary to each other and able to improve the others:

  1. DSM-5: a nosology based on a mix of research, economic concerns, social preferences, and professional consensus that is used for basic practice, insurance reimbursement, and short-term consensus.
  2. RDOC: a nosology based solely on biological research that is used for research.
  3. RDC: a nosology based solely on clinical research that is used for research.

In summary, DSM-5 is on its way, and May 2013 is as good a date as any for its publication. In some places, it will be a much-needed advance over the now-outdated DSM-IV. But in other places, it keeps old categories that are not as well proven as they should be, and it even adds a few new categories that are mainly based on professional, economic, and social concerns rather than on sufficient scientific evidence.

References

  1. Kraemer HC, Kupfer DJ, Clarke DE, Narrow WE, Regier DA. DSM-5: how reliable is reliable enough? Am J Psychiatry. 2012;169:13-15. http://ajp.psychiatryonline.org/article.aspx?articleid=181221. Accessed May 15, 2012.
  2. Zimmerman M, Ruggero CJ, Chelminski I, Young D. Is bipolar disorder overdiagnosed? J Clin Psychiatry. 2008;69:935-940.

Retrieved from: http://www.medscape.com/viewarticle/764740?src=ptalk

Poor Sleep and Hypertension

In Fitness/Health, Insomnia on Wednesday, 26 September 2012 at 07:36

Poor Sleep Related to Resistant Hypertension

Sue Hughes

September 24, 2012 (Washington, DC) — Poor sleep quality is associated with a doubling in the risk of resistant hypertension in women, with the mechanism possibly mediated by depression, a new study shows [1].

The study was presented here at last week’s American Heart Association High Blood Pressure Research 2012 Scientific Sessions by Dr Rosa Maria Bruno (University of Pisa, Italy).

“I would say that treating insomnia may improve resistant hypertension, although we need further data before we make firm clinical recommendations on this,” Bruno told heartwire.

She commented: “There is lots of evidence that sleep disorders are related to cardiovascular events, but most relate to sleep-disordered breathing such as sleep apnea. Also, there have been many studies showing an association between short sleep duration and the incidence of cardiovascular events or hypertension. But we looked at whether insomnia was linked to the severity of hypertension, and we found poor sleep quality was significantly more prevalent in patients with resistant hypertension.”

Quality Rather Than Quantity

The researchers reported that it was the quality of sleep rather than the duration of sleep that seemed to be the important factor in the relationship with resistant hypertension. They also found a large difference between men and women.

Bruno noted: “In women, poor sleep quality was strongly related to anxiety and depression and resistant hypertension, but this was not the case for men. This difference remained after accounting for other confounding factors. In women, we found that poor sleep quality was associated with a fivefold increase in the probability of having resistant hypertension, even after adjustment.”

She cautioned that as this was only a cross-sectional study, they can conclude there is an independent association between poor sleep quality and resistant hypertension, but they cannot deduce that this is a causal effect. “This needs to be confirmed in a prospective study. It could also be that the hypertension is causing the insomnia, but we believe that the insomnia is making the hypertension worse.”

Experimental evidence supports this view. It is known that interrupted sleep stimulates the sympathetic nervous system and increases cortisol levels, both of which cause an increase in blood pressure.

For the study, data on sleep quality, anxiety/depression, and cardiovascular risk factors were collected for 270 patients from a hypertension outpatient unit. Sleep quality was measured by the Pittsburgh Sleep Quality Index (PSQI), and anxiety and depression with the Beck Depression Inventory (BDI). Poor sleep quality was defined as PSQI >5, mild to severe depressive symptoms as BDI score >10. Patients with obstructive sleep apnea were excluded. Resistant hypertension was defined as a failure to control hypertension with three or more drugs.
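The study's operational definitions can be encoded directly. A minimal sketch (the function and its signature are ours for illustration; the cutoffs are the ones stated above):

```python
def classify_patient(psqi: int, bdi: int, n_drugs: int, bp_controlled: bool) -> dict:
    """Apply the study's stated cutoffs to one patient's data."""
    return {
        "poor_sleep_quality": psqi > 5,           # PSQI > 5
        "depressive_symptoms": bdi > 10,          # mild to severe: BDI > 10
        "resistant_hypertension": (not bp_controlled) and n_drugs >= 3,
    }

# A patient with PSQI 7 and BDI 12 who remains uncontrolled on 3 drugs
# meets all three of the study's definitions.
print(classify_patient(psqi=7, bdi=12, n_drugs=3, bp_controlled=False))
```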

Complete data were available for 234 patients, half of whom were women. Mean sleep duration was 6.4 hours, and 49% of participants had short sleep duration (less than six hours), which was similar in both sexes.

However, women had higher PSQI scores and a higher prevalence of poor sleep quality. Women also showed higher depression scores and a higher prevalence of depressive symptoms than men.

Sleep and Depression Scores in Women vs Men

Score                                   Women   Men    p
PSQI score                              5.2     3.6    0.03
Poor sleep quality (%)                  46      30     0.01
Depression score                        4.5     1.8    0.006
Prevalence of depressive symptoms (%)   20      7      0.003

Resistant hypertension was present in 15% of patients, and these individuals had higher PSQI scores than those without resistant hypertension, a difference shown in women but not in men. The association between depression score and resistant hypertension showed a similar trend.

Sleep Scores (PSQI) Related to Resistant Hypertension

Group               Resistant hypertension   No resistant hypertension   p
Entire population   5.8                      4.1                         0.03
Women               6.8                      4.8                         0.04
Men                 4.7                      3.5                         0.37

Depression Scores (BDI) Related to Resistant Hypertension

Group               Resistant hypertension   No resistant hypertension   p
Entire population   3.6                      2.8                         0.02
Women               5.1                      3.7                         0.03
Men                 2.0                      1.9                         0.53

In a multiple logistic regression analysis (including age, sex, obesity, diabetes, previous cardiovascular events, sleep duration, and use of hypnotic drugs), poor sleep quality was independently associated with resistant hypertension (OR, 2.2). However, this relationship lost significance when depressive symptoms were added to the model.
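The OR of 2.2 above comes from a multivariable model, but the underlying idea can be illustrated with the simpler crude odds ratio from a 2x2 table, with a Woolf (log-based) confidence interval. The counts below are hypothetical, chosen only to show the arithmetic; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    # Woolf method: SE of log(OR) is sqrt of summed reciprocal cell counts.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not from the study):
or_, (lo, hi) = odds_ratio_ci(20, 80, 15, 119)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An adjusted OR from logistic regression differs from this crude value precisely because covariates such as age, obesity, and depressive symptoms are held constant, which is why adding depression to the model could erase the association.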

References

  1. Bruno RM, Palagini L, Di Giulio A, et al. Relation between poor sleep quality and resistant hypertension. American Heart Association High Blood Pressure Research Scientific Sessions; September 21, 2012; Washington, DC. Abstract 63.

Retrieved from: http://www.medscape.com/viewarticle/771457?src=nl_topic

Noninvasive Prenatal Diagnosis: Can Ethics and Science Meet?

In Genes, Genomic Medicine, Neuropsychology, Neuroscience on Wednesday, 26 September 2012 at 07:30

posting this as an addition to my recent post on genomic medicine.  the growing field of genomic medicine and its research raise some interesting ethical issues.

Noninvasive Prenatal Diagnosis: Can Ethics and Science Meet?

Elizabeth H. Dorfman; Mildred Cho, PhD

Editor’s Note:
Technological advances have enabled researchers to sequence an entire fetal genome noninvasively by extracting cell-free fetal DNA from maternal plasma.[1,2] This use of noninvasive prenatal diagnosis (NIPD) shifts the focus away from screening for known or suspected anomalies and inherited conditions to potentially discovering a wide array of information about the fetus that patients and clinicians might not be prepared to address.

On behalf of Medscape, Elizabeth H. Dorfman, a graduate student at the University of Washington Institute for Public Health Genetics, Seattle, Washington, interviewed Mildred Cho, PhD, Professor at the Stanford Center for Biomedical Ethics, Stanford, California, about the ethical and social implications of NIPD and how advances in these techniques might affect clinical practice.

Ms. Dorfman: Let’s start with a few background questions to set the stage. Can you briefly describe the technique behind NIPD using cell-free fetal DNA?

Dr. Cho: NIPD allows prenatal testing to be done from a sample of maternal blood instead of having to take a sample through invasive techniques, such as amniocentesis or chorionic villus sampling. There are a lot of different ways of analyzing fetal DNA in maternal serum; this technology, which is more recently developed, enables one to look at fragments of cell-free fetal DNA as opposed to fetal cells in maternal blood.

Ms. Dorfman: In regard to the timing of testing, risk to the fetus or the pregnancy, or potential for incidental findings — are they substantively different for NIPD compared with existing tests, or are they similar?

Dr. Cho: NIPD could potentially be used earlier in gestation, so that would give people more time to think about what to do with the results. Right now, I don’t think it’s being used very early because the ability to get enough DNA in the sample hasn’t been worked out fully, but that is the hope. Obviously, because it’s noninvasive, that makes a big difference to the person who is giving the sample: Not only is it not uncomfortable or painful, but there is virtually no risk to the fetus from taking the sample.

Ms. Dorfman: A team at the University of Washington recently announced that they had used noninvasive cell-free fetal DNA methods to sequence the entire genome of a developing fetus.[1] Could you go into a little bit of detail about how this changes the scope of NIPD?

Dr. Cho: Currently, fetal testing is done either to screen for one of a small number of conditions, or as follow-up to a prior screening, such as a genetic screening or fetal ultrasonography. In these cases, the fetal diagnostic test will be used to focus on any conditions or anomalies that turned up positive in the prior screen, or to detect a condition that is of particular concern that may have been identified through a family history. So, diagnostic testing will be just that: diagnostic, trying to come up with a genetic cause for an observed or suspected anomaly.

When or if it becomes possible to do whole-genome analysis in a clinical setting routinely, it will open up the possibility that people can get information about the fetus that is well beyond a handful of known fetal conditions, such as a trisomy. This raises the concern that people will be faced with a huge amount of information about which there might be a lot of uncertainty and will have very little time to consider what to do with the information.

Thinking even further into the future, one of the concerns is that it could potentially change the way people think about pregnancy because it might be perceived that they have a lot of choices to make about what kind of children they want to have. Moving from a limited set of conditions to potentially any kind of human trait that has a major genetic component could really change the way people think about pregnancy and prenatal testing.

Ms. Dorfman: How does this potentially expanded capability reconcile with current practice guidelines and policy statements related to genetic testing in children? For example, the American Academy of Pediatrics Committee on Genetics’ recommendations on ethical issues with genetic testing in pediatrics,[3] or the National Society of Genetic Counselors’ position statements on prenatal and childhood testing for adult-onset disorders.[4]

Dr. Cho: There is going to have to be some further thought about how this kind of fetal testing might be used by clinicians. The current guidelines don’t really speak to whether there are professional limits on what clinicians will and will not use genetic testing for, so the clinical communities will have to ask themselves whether there are any genetic traits for which they won’t offer testing, or whether there are any limits on information that they will provide to patients.

Ms. Dorfman: As a follow-up to that, the editor’s summary of the University of Washington study that was published in Science Translational Medicine [1] stated, “An ideal prenatal genetic diagnostic would noninvasively screen for all Mendelian disorders early in pregnancy.” I was wondering whether you agreed with or had any comments about that statement.

Dr. Cho: We have to think about what “ideal” means to different people. We can currently test for a lot of mendelian conditions, and yet a lot of people don’t opt to get those tests. For a lot of people, that kind of information might be unwanted; some of it may be the kind of information that won’t have any bearing on how people treat their pregnancies, or it may not be relevant until after the child is born. I think that’s something that can be debated, whether that’s an ideal situation or not.

Ms. Dorfman: NIPD requires a blood sample from the pregnant woman, and as you have described, carries no risk for miscarriage or direct fetal harm. Of note, this has raised concerns about inadequate informed consent, and I was hoping that you can comment on where this concern came from.

Dr. Cho: People familiar with prenatal screening tests that analyze maternal serum know that women sometimes do not realize that one of the blood samples taken during pregnancy was used not to check their blood glucose but for a prenatal screening test. So I think the concern is that if there isn’t a specific and unique procedure that is part of the prenatal testing process, the testing could go almost unnoticed until the results come back — and then be a shock to people who get the results. They might not understand the implications of this type of testing.

Ms. Dorfman: Is there consensus about the information and risks that should be disclosed in the informed consent process before NIPD?

Dr. Cho: I don’t think there is consensus on how to deal with information that should be disclosed in almost any clinical situation, and no, I don’t think that there is consensus for how to deal with genomic results and NIPD.

Ms. Dorfman: What risks do you think should be disclosed before testing?

Dr. Cho: People should understand that a prenatal test is being done and that the information they might receive from that test could be very broad and potentially have a major impact on decision-making. And if they have a choice to not get all that information, they should understand that as well.

The consent process should note the risk for getting information that the person might not want, and also that the information might affect family members as well, who may not be interested in getting genomic information.

Ms. Dorfman: Noninvasive testing using cell-free fetal DNA can be used to determine fetal gender as early as 7 weeks’ gestation. Is there any reason to think that this will promote prenatal sex selection in regions where this has not been a problem or exacerbate the practice in regions where this is already a concern?

Dr. Cho: There might be reason to be concerned about the use of cell-free fetal DNA testing for sex selection, especially in areas where gender imbalance is already widespread. Even if there are laws against sex selection, it would be relatively easy to get a blood sample and also relatively easy to send it out of the country, and to get a result back.

It’s something to be aware of and keep tabs on. Companies that offer testing will have to think about how they’re going to determine whether the samples are being used for things that are actually illegal in other countries; it may be their obligation to ensure that they’re not contributing to illegal behavior.

Ms. Dorfman: The American College of Obstetricians and Gynecologists has published a position statement that this new technology should not be used for the purposes of sex selection.[5] Do you have any recommendations on what, if anything, should be done proactively to prevent that from becoming an issue in such countries as the United States, where we don’t see this as an issue but where we also don’t have laws banning it?

Dr. Cho: There is a professional stance against sex testing in the United States. But in places where sex testing is not necessarily against professional guidelines or is illegal, there needs to be more thought about what responsibility the testing companies have and what practical measures laboratories can take to ensure that they’re not potentially violating the law.

Ms. Dorfman: There is significant interest in whether and when to return genetic results to patients. How does this take shape in NIPD?

Dr. Cho: This question of returning results of genomic findings may be even more important in prenatal testing than in other clinical situations. In prenatal settings, patients typically have very broad autonomy to make decisions about what kind of information they seek and about what kind of information they have access to. It’s a little different from returning results in, say, adult medicine where you could argue that genomic results shouldn’t be treated any differently from other kinds of medical testing. But in the prenatal setting, there is usually such a premium put on autonomy of the patient to make decisions about her pregnancy that it puts the issue of returning results in a bit of a different light.

Ms. Dorfman: Noninvasive methods that require both a maternal and a paternal sample to determine which of the DNA segments are from the fetus could introduce additional opportunities for incidental findings. Could you comment on that?

Dr. Cho: I agree; when you’re getting samples from the mother and the father, you definitely have a much greater potential for incidental findings. It should be part of the consent process and their understanding of what kind of results they may potentially get back.

Ms. Dorfman: What efforts are currently under way to characterize how NIPD is affecting clinical practice and reproductive decision-making, if any?

Dr. Cho: Some people are studying the clinical implementation of NIPD, which is currently limited to aneuploidy detection. I don’t know that it’s being studied broadly for applications other than aneuploidy at this point, but I imagine that will happen in the near future.

A side issue that might become influential in the application of cell-free fetal DNA research to clinical practice is the question of intellectual property and whether patents for cell-free fetal DNA testing might affect how clinicians can or cannot use the test. The ethical side of this is how or whether intellectual property policy should be allowed to dictate how clinical tests are or are not available to clinicians and patients.

Ms. Dorfman: Looking ahead, how do you think we can best maximize the benefits of cell-free fetal DNA testing capabilities while minimizing the potential harm? Are there regulations or policies that can be implemented that you think would yield a favorable balance of risks and benefits?

Dr. Cho: That’s a good question, but I don’t have a very good answer. Some of the concerns about potentially eugenic uses of cell-free fetal DNA in a prenatal setting are very difficult to address at the policy level, and we haven’t done a very good job of that so far with other kinds of prenatal testing. A lot will depend on such things as informed consent, which has not proven very effective right now for other types of prenatal testing, so it is likely going to be a difficult problem to tackle.

The US Food and Drug Administration might be more willing to regulate this kind of genetic testing than other kinds of genetic testing simply because the nature of the decisions made in the prenatal setting are so much more ethically fraught and important. More specific scrutiny of prenatal genetic testing, putting into play some kind of mechanism for quality control, quality assessment, and accuracy at an analytic level would at least help to minimize some of the risks from having inaccurate results.

But the large social and ethical issues are going to be very difficult to address through policy, and clinicians are going to have a hard time dealing with them. Up to this point, we’ve been very reluctant to interfere with prenatal decision-making. Much of this will probably end up being left to public education efforts, which may be of limited effectiveness.

References

  1. Kitzman JO, Snyder MW, Ventura M, et al. Noninvasive whole-genome sequencing of a human fetus. Sci Transl Med. 2012;4:137ra76.
  2. Fan HC, Gu W, Wang J, Blumenfeld YJ, El-Sayed YY, Quake SR. Non-invasive prenatal measurement of the fetal genome. Nature. 2012;487:320-324. Abstract
  3. Committee on Bioethics. Ethical issues with genetic testing in pediatrics. Pediatrics. 2001;107:1451-1455. Abstract
  4. National Society of Genetic Counselors. Position Statement: Prenatal and Childhood Testing for Adult-onset Disorders. 1995. http://www.nsgc.org/Advocacy/PositionStatements/tabid/107/Default.aspx#PrenatalChildTestingAdultOnset. Accessed July 12, 2012.
  5. American College of Obstetricians and Gynecologists. ACOG Committee Opinion: Sex Selection; February 2007 (reaffirmed 2011). http://www.acog.org/Resources_And_Publications/Committee_Opinions/Committee_on_Ethics/Sex_Selection. Accessed July 12, 2012.

Medscape Genomic Medicine © 2012 WebMD, LLC

Retrieved from: http://www.medscape.com/viewarticle/771190?src=nl_topic

 

ADHD medication and cardiovascular risk

In ADHD, ADHD Adult, ADHD child/adolescent, ADHD stimulant treatment, Medication, Psychiatry, Psychopharmacology on Wednesday, 26 September 2012 at 07:20

i believe many people may hold some misconceptions related to stimulant medication in treating ADHD.  in fact, i have a personal story related to that.  a friend of mine needed to go to the emergency room for a cut that needed stitches.  while in triage, the nurse took her blood pressure and it was quite elevated.  the nurse questioned her about her bp and asked if she had been diagnosed with high blood pressure (this person is an avid athlete and has never had issues with high bp).  once the nurse saw on her intake form that she took 10 mg of adderall a day for adult ADHD, she told my friend that the medicine was “toxic” and that she needed to stop it “right away” and go to her doctor immediately for a cardiac assessment.  she repeatedly stated that the adderall she was taking was going to do her harm and she MUST stop taking it right away!  my friend was somewhat startled at the nurse’s vehement opinions regarding the adderall.  what i do know is that, when i am hurt or in a tense situation (i.e., the emergency room and in pain), my blood pressure might temporarily go up.  i also know that i DO NOT have high blood pressure.  no mention of being anxious or in pain was made in relation to my friend’s high bp at that time.  on a side note, once my friend was out of the ER and we had gone to the pharmacy to get a prescription, she took her blood pressure again and it was well within the normal range, showing she was just anxious/worried/in pain and her higher bp was a temporary effect of that.  but…this does illustrate that there are times people believe that something is fact because of popular opinion, their own biases, etc., even when the literature may not support their beliefs.  so, in light of that, i wanted to share a post on stimulant medication and cardiovascular risk.  as you can see, it is not as clear-cut as our opinionated nurse thought it was.

ADHD Medications in Adults Yield Mixed Cardiovascular Risk Results

Deborah Brauser & Hien T. Nghiem, MD

In the United States, roughly 1.5 million adults use medications for attention-deficit/hyperactivity disorder (ADHD). These medications include amphetamines, atomoxetine, and methylphenidate. ADHD medications are known to increase both blood pressure (by <5 mm Hg) and heart rate (by <7 bpm). Given these effects, there are concerns regarding serious cardiovascular events related to taking ADHD medications.

The aim of this study by Hennessy and colleagues was to determine whether use of methylphenidate in adults is associated with elevated rates of serious cardiovascular events compared with rates in nonusers.

Study Synopsis and Perspective

Although adults prescribed the ADHD medication methylphenidate may be at increased risk for adverse cardiovascular events, this association may not be causal, new research suggests.

In a cohort study of almost 220,000 individuals, new users of methylphenidate had almost twice the risk for sudden death or ventricular arrhythmia compared with age-matched control participants. They also had a significantly higher risk for all-cause death.

However, the medication dosage “was inversely associated with risk,” meaning it lacked a dose-response relationship, report the investigators.

“We were surprised by the risk findings. But the inverse associations lead us to be somewhat skeptical,” coinvestigator Sean Hennessy, PharmD, PhD, associate professor of epidemiology and pharmacology at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia, told Medscape Medical News.

“Ordinarily, if a drug increases the risk of adverse outcomes, that increase is going to be dose-dependent. We didn’t see that, and in fact, found an inverse relationship for death and other outcomes,” he explained.

Dr. Hennessy said that this could be due to “frail, elderly patients who have other things going on” and who are prescribed low-dose methylphenidate.

“Maybe baseline differences in those patients that aren’t captured in the medical claims data are responsible for the elevated risk of adverse outcomes we were seeing rather than it being a causal effect of the methylphenidate itself,” he opined.

“So I would say to wait for these findings to be replicated and clarified in other research before they are acted on clinically.”

The study is published in the February issue of the American Journal of Psychiatry.

Mixed Findings

According to the investigators, methylphenidate and other ADHD medications are used by almost 1.5 million adults in the United States — even though these medications have been shown to raise blood pressure and heart rate.

“Given these effects, case reports of sudden death, stroke, and myocardial infarction have led to regulatory and public concern about the cardiovascular safety of these drugs,” write the researchers.

However, in May 2011, as reported by Medscape Medical News at the time, the same group of researchers published a study in Pediatrics showing no increased risk for cardiovascular events in children treated with ADHD medications.

In addition, researchers from Kaiser Permanente Northern California published a study in December 2011 in the Journal of the American Medical Association that examined risks in adults younger than age 65 years who were taking methylphenidate, amphetamine, atomoxetine, or pemoline.

The combined group of ADHD medication users showed no increased risk for serious cardiovascular events, including myocardial infarction, sudden cardiac death, or stroke, compared with the group of nonusers.

For this analysis, investigators examined records from Medicaid and commercial databases, representing 19 states, for adults in a broader age range. Included were 43,999 new users of methylphenidate and 175,955 individuals who did not use methylphenidate, amphetamines, or atomoxetine (for both groups, 55.4% were women).

In each group, 67.3% of the participants were between the ages of 18 and 47 years, 23.2% were between the ages of 48 and 64 years, and 9.5% were aged 65 years or older.

Primary cardiac events assessed included sudden death or ventricular arrhythmia, myocardial infarction, stroke, and a combination of stroke/myocardial infarction. All-cause death was a secondary measure.

Unexpected Results

Results showed that the adjusted hazard ratio (HR) for sudden death/ventricular arrhythmia for the methylphenidate users compared with the nonusers was 1.84 (95% confidence interval [CI], 1.33 – 2.55). For all-cause death, the HR was 1.74 (95% CI, 1.60 – 1.89).

Adjusted HRs for myocardial infarction and stroke (alone or in combination) were not statistically different between the 2 treatment groups.

For the participants who experienced a cardiovascular event, the median treatment dosage was 20 mg/day. No significant association was found for sudden death/ventricular arrhythmia between the patients who took more or less than 20 mg/day of methylphenidate.

“However, there were unexpected inverse associations” between high methylphenidate dosage and stroke, myocardial infarction, stroke/myocardial infarction, and all-cause death compared with low dosage, report the researchers. They add that this lack of a dose-response association discredits a causal relationship.

“Furthermore, the inverse relationships…may suggest that lower dosages were prescribed to the frailest patients, who might have had a greater risk of all-cause death and sudden death — that is, the results may have been affected by unmeasured confounding,” write the investigators.

Other limitations cited included the fact that the study was not randomized and that administrative databases do not include potential confounders such as smoking, blood pressure, substance use, and physical activity.

Dr. Hennessy reported that the investigators also assessed cardiovascular risks in their study participants who were also taking amphetamines or atomoxetine. They will be publishing those results soon.

Findings “Generally Reassuring”

Christopher J. Kratochvil, MD, from the University of Nebraska Medical Center in Omaha, writes in an accompanying editorial that this and other studies are “generally reassuring and demonstrate movement in the right direction, with systematic retrospective analyses better informing us of issues related to cardiovascular safety with ADHD pharmacotherapy.”

“While gaps persist in the methodical and comprehensive assessments of the safety of ADHD medications, these studies add valuable information to our already large repository of safety and efficacy data…and better inform the risk-benefit analysis of their use,” writes Dr. Kratochvil, who was not involved with this research.

He adds that establishing a “robust” national electronic health records system containing detailed data elements will also offer considerable help to clinicians.

These large and more accessible databases “will allow us to improve our identification and understanding of rare but serious adverse effects and better address these questions of public health significance,” he concludes.

The study was funded through a sponsored research agreement with Shire Development, Inc., and by a Clinical and Translational Science Award from the National Institutes of Health. The study authors all receive salary support from Shire through their employers. All financial disclosures for the study authors and Dr. Kratochvil are listed in the original article.

Am J Psychiatry. 2012;169:112-114;178-185. Abstract, Editorial

Study Highlights

■This study was a nonrandomized cohort study of new users of methylphenidate based on administrative data from a 5-state Medicaid database (1999-2003) and a 14-state commercial insurance database (2001-2006).

■All new methylphenidate users with at least 180 days of prior enrollment were identified.

■Users were matched on data source, state, sex, and age to as many as 4 comparison participants who did not use methylphenidate, amphetamines, or atomoxetine.

■A total of 43,999 new methylphenidate users were identified and were matched to 175,955 nonusers.

■The main outcome measures were (1) sudden death or ventricular arrhythmia; (2) stroke; (3) myocardial infarction; and (4) a composite endpoint of stroke or myocardial infarction.

■Secondary outcomes included all-cause death and nonsuicide death.

■Results demonstrated that the age-standardized incidence rate per 1000 person-years of sudden death or ventricular arrhythmia was 2.17 (95% CI, 1.63 – 2.83) in methylphenidate users and 0.98 (95% CI, 0.89 – 1.08) in nonusers, for an adjusted HR of 1.84 (95% CI, 1.33 – 2.55).

■Dosage was inversely associated with the risks for stroke, myocardial infarction, stroke/myocardial infarction, and all-cause death.

■Adjusted HRs for stroke, myocardial infarction, and the composite endpoint of stroke or myocardial infarction did not differ statistically between methylphenidate users and nonusers.

■For the secondary outcome of all-cause death, methylphenidate demonstrated a positive association (adjusted HR, 1.74; 95% CI, 1.60 – 1.89). Results for nonsuicide death were nearly identical.

■Limitations of this study include the potential for unmeasured confounders (eg, smoking, blood pressure, nonprescribed aspirin use, substance misuse, and level of physical activity) because the study was not randomized.
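The incidence figures in the highlights above follow from simple person-time arithmetic. The sketch below recomputes a rate per 1000 person-years and the crude rate ratio; the event counts and person-year totals are invented for illustration (the study reports only the standardized rates and the adjusted HR).

```python
def rate_per_1000_py(events, person_years):
    """Incidence rate per 1000 person-years of follow-up."""
    return 1000 * events / person_years

# Hypothetical counts chosen only to illustrate the arithmetic:
users_rate = rate_per_1000_py(65, 30_000)       # ~2.17 per 1000 person-years
nonusers_rate = rate_per_1000_py(245, 250_000)  # 0.98 per 1000 person-years

# The crude rate ratio is not the adjusted HR: the published HR of 1.84
# additionally accounts for covariates and the matching factors.
crude_ratio = users_rate / nonusers_rate
print(f"users {users_rate:.2f}, nonusers {nonusers_rate:.2f}, crude ratio {crude_ratio:.2f}")
```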

Clinical Implications

■ADHD medications raise blood pressure by less than 5 mm Hg and heart rate by less than 7 bpm.

■Although initiation of methylphenidate was associated with a 1.8-fold increase in the risk for sudden death or ventricular arrhythmia, the lack of a dose-response relationship suggests that this association may not be a causal one.

Retrieved from: http://www.medscape.org/viewarticle/759069

The root of chocolate cravings…

In Fitness/Health on Wednesday, 26 September 2012 at 06:56

Brain Study Reveals the Roots of Chocolate Temptations

ScienceDaily (Sep. 20, 2012) — Researchers have new evidence in rats to explain how it is that chocolate candies can be so completely irresistible. The urge to overeat such deliciously sweet and fatty treats traces to an unexpected part of the brain and its production of a natural, opium-like chemical, according to a report published online on September 20th in Current Biology, a Cell Press publication.

“This means that the brain has more extensive systems to make individuals want to overconsume rewards than previously thought,” said Alexandra DiFeliceantonio of the University of Michigan, Ann Arbor. “It may be one reason why overconsumption is a problem today.”

DiFeliceantonio’s team made the discovery by giving rats an artificial boost with a drug delivered straight to a brain region called the neostriatum. Those animals gorged themselves on more than twice the number of M&M chocolates that they would otherwise have eaten. The researchers also found that enkephalin, the natural drug-like chemical produced in that same brain region, surged when the rats began to eat the candy-coated morsels.

It’s not that enkephalins or similar drugs make the rats like the chocolates more, the researchers say, but rather that the brain chemicals increase their desire and impulse to eat them.

The findings reveal a surprising extension of the neostriatum’s role, as DiFeliceantonio notes that the brain region had primarily been linked to movement. And there is reason to expect that the findings in rats can tell us a lot about our own binge-eating tendencies.

“The same brain area we tested here is active when obese people see foods and when drug addicts see drug scenes,” she says. “It seems likely that our enkephalin findings in rats mean that this neurotransmitter may drive some forms of overconsumption and addiction in people.”

The researchers now hope to unravel a related phenomenon that some of us might wish we could do more to control: what happens in our brains when we pass by our favorite fast food restaurant and feel that sudden desire to stop.

Journal Reference:

  1. Alexandra G. DiFeliceantonio, Omar S. Mabrouk, Robert T. Kennedy, Kent C. Berridge. Enkephalin Surges in Dorsal Neostriatum as a Signal to Eat. Current Biology, 2012; DOI: 10.1016/j.cub.2012.08.014

Cell Press (2012, September 20). Brain study reveals the roots of chocolate temptations. ScienceDaily. Retrieved September 23, 2012, from http://www.sciencedaily.com/releases/2012/09/120920135605.htm?goback=%2Egde_2514160_member_166874293

Retrieved from: http://www.sciencedaily.com/releases/2012/09/120920135605.htm?goback=%2Egde_2514160_member_166874293

Early detection of ASDs

In Autism Spectrum Disorders, Neuropsychology, School Psychology on Wednesday, 26 September 2012 at 06:54

ASDs Can Be Diagnosed in Patients as Young as 12 Months

Fran Lowry & Hien T. Nghiem, MD

Clinical Context

Autism-spectrum disorders (ASDs) are neurodevelopmental disorders diagnosed by clinical observation of core behavioral symptoms. The prevalence of ASDs is estimated to be approximately 1% of the general population and is typically diagnosed in the preschool years. However, it has been reported that behavioral risk signs of ASDs may be evident before 12 months of age.

By 9 to 12 months of age, infants who will eventually receive a diagnosis of ASD may demonstrate the absence of social communicative features, such as shared affective engagement, imitation, social orienting, and joint attention, and present with unusual sensory features such as repetitive play, sensory preoccupations, emotional dysregulation, hyporesponsiveness to novel stimuli, and atypical motor behaviors. The First Year Inventory (FYI) is a parent-report measure designed to identify 12-month-old infants at risk for ASD. FYI taps behaviors that indicate risk in the developmental domains of sensory–regulatory and social–communication functioning.

The aim of this study was to determine an effective FYI scoring cutoff for most accurately identifying infants who are at risk for a later diagnosis of ASD. The aim was met by conducting a follow-up, at 3 years of age, of 699 children from a community sample whose parents had completed the FYI when the children were 12 months old.
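Choosing a screening cutoff of this kind is, conceptually, a trade-off between sensitivity and specificity. A minimal sketch of one common approach (maximizing Youden's J) is below; the scores and outcome labels are invented, and this is not the FYI's actual scoring algorithm.

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score >= cutoff means at risk'.
    labels: True if the child later received an ASD-related diagnosis."""
    tp = sum(s >= cutoff and l for s, l in zip(scores, labels))
    fn = sum(s < cutoff and l for s, l in zip(scores, labels))
    tn = sum(s < cutoff and not l for s, l in zip(scores, labels))
    fp = sum(s >= cutoff and not l for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    candidates = sorted(set(scores))
    return max(candidates, key=lambda c: sum(sens_spec(scores, labels, c)) - 1)

# Invented example data: higher scores loosely track later diagnosis.
scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
labels = [False, False, False, False, False, True, False, True, True, True]
print(best_cutoff(scores, labels))  # → 6
```

In practice, a screening instrument would weigh false negatives (missed children) more heavily than this symmetric criterion does, since the cost of a missed early diagnosis is high.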

Study Synopsis and Perspective

A questionnaire for parents is a promising tool for identifying 12-month-old infants who are at risk for an eventual diagnosis of ASD, new research shows.

A longitudinal follow-up study showed that 31% of children identified by the inventory as being at risk for ASD at 12 months had a confirmed diagnosis by age 3 years.

In addition, 85% of the children identified at 12 months had a developmental disability or concern by age 3 years, coauthor Grace Baranek, PhD, from the University of North Carolina School of Medicine, Chapel Hill, told Medscape Medical News.

“These children have the advantage of being enrolled in an intervention sooner and being tracked sooner than they would normally be, because most of the screenings that are recommended by the American Academy of Pediatrics happen at 18 or 24 months of age,” Dr. Baranek said.

Led by Lauren M. Turner-Brown, PhD, who is also from the University of North Carolina School of Medicine, the study was published online July 10 in Autism: The International Journal of Research & Practice.

Critical Changes

The FYI was developed specifically for 12-month-old infants because this age seems to map onto a period of critical developmental and neurobiological changes that are occurring in many infants who will eventually be diagnosed with ASD, she explained.

The current study was carried out to determine the effectiveness of the inventory in identifying infants at risk for a later diagnosis of ASD. In it, the parents of the 699 children who had completed the FYI when their child was 12 months old completed the additional screening questionnaires when their child reached the age of 3 years.

The parents and children were recruited through a community mailing that was based on North Carolina birth records.

In addition to the FYI, parents received the Social Responsiveness Scale–Preschool Version and the Developmental Concerns Questionnaire, which asked specific questions about parent concerns and child diagnoses. They also received $5.00 to encourage participation in the study.

The inventory identified 6 children with ASD and 3 children with pervasive developmental disorder–not otherwise specified.

Sooner Is Better

A high score in the sensory regulatory domain, which looked at such things as unusual behaviors with play, repetitive behaviors, unusual responses to sensory things such as light and sounds, and day-to-day regulatory patterns such as feeding, sleeping, and eating, was an important predictor of a future diagnosis of ASD, Dr. Baranek said.

A high score in the social communication domain, especially when accompanied by a high score in the sensory regulatory domain, was also predictive, she said. “What we are finding is that although we can identify a lot of children who go on to have autism through their lack of social communicative abilities, the sensory regulatory items help us to more specifically identify the kids with autism so we’re not overidentifying just children with language delay.”

Once the FYI tool is refined, Dr. Baranek said, she and her team would like to see it used in primary care settings at the 12-month baby check, where physicians, nurse practitioners, and early interventionists could screen the child and use the inventory as a basis for progressive surveillance.

“The sooner we can identify any child who has a concern, the sooner they can be referred for more comprehensive evaluation and be connected with support services,” she said.

Significant Impact

Autism Society board chairman Jim Ball agreed. Commenting on this work for Medscape Medical News, Ball said: “Early screening and diagnosis can have a significant impact in an individual’s life, leading to improved educational and social outcomes, as well as employment and independent living in adulthood.”

He added that it is a priority of the Autism Society “to ensure all families know the signs of autism, have access to expert diagnosticians, receive appropriate services, and transition effectively into adulthood.”

The study was funded in part by the National Institutes of Health, Autism Speaks, and the Ireland Family Foundation. Dr. Turner-Brown, Dr. Baranek, and Ball have disclosed no relevant financial relationships.

Autism. Published online July 10, 2012.

Study highlights

  • Families who participated in the FYI normative study and who gave consent to be recontacted were invited to participate in this longitudinal follow-up.
  • There were 2 phases: the initial FYI screening mailing at 12 months of age and the subsequent follow-up mailing at age 3 years.
  • At 3 years, parents of 699 children completed the Social Responsiveness Scale–Preschool version and the Developmental Concerns Questionnaire to determine developmental outcomes.
  • In addition, children deemed at risk for ASD on the basis of liberal cut points on the FYI, Social Responsiveness Scale–Preschool, and/or Developmental Concerns Questionnaire were invited for in-person diagnostic evaluations.
  • 38 families participated in the in-person diagnostic assessments. In addition to the FYI, Social Responsiveness Scale–Preschool, and Developmental Concerns Questionnaire, the 38 children who received further in-person diagnostic evaluation also completed the Mullen Scales of Early Learning, the Vineland Adaptive Behavior Scale, and the Autism Diagnostic Observation Schedule.
  • A “best estimate” diagnostic outcome was determined and divided into 1 of 4 categories: diagnosis of ASD; diagnosis of other developmental disability; no professional diagnosis, but developmental concerns noted or observed; and no developmental concerns.
  • 9 children had a confirmed diagnosis of ASD from the sample of 699 children, representing 1.3% of this sample.
  • A total of 43 children (6%) were in the diagnosed or treated group for non-ASD developmental problems.
  • An additional 82 (12%) children were in the developmental concerns group.
  • Finally, 574 (82%) of 699 children were in the no concerns group.
  • According to the receiver operating characteristic (ROC) analyses, “a total risk score…of 19.2, which is at or above the 96th percentile, was chosen as the best cutoff score.”
  • A second ROC analysis was performed to calculate the optimal cutoffs for each of the 2 FYI domains.
  • For the social communication domain, “a domain score of 22.5, which is at the 94th percentile, yielded the optimal classification of children with ASD at age 3.”
  • “For the sensory-regulatory domain, a score of 14.75, which is at the 88th percentile, yielded optimal classification of children with an ASD diagnosis at age 3.”
  • The ROC analyses determined that a 2-domain cutoff score yielded optimal classification of children: 31% of those meeting algorithm cutoffs had ASD and 85% had a developmental disability or concern by age 3 years.
  • Limitations of the study included the following:
    • the study was not designed as an epidemiological study,
    • limited generalizability, because the participating families tended to be more educated and less racially diverse,
    • the likelihood that some affected children were missed by the current measures, and
    • the feasibility of such large-scale diagnostic protocols.
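The ROC-based cutoff selection described in the highlights above (choosing the risk score that best separates later-diagnosed children from the rest) can be sketched in a few lines of Python. The data below are invented for illustration, not the study's; the function maximizes Youden's J (sensitivity + specificity − 1), one common criterion for choosing a "best cutoff score" from an ROC analysis.

```python
# Illustrative only: invented scores/labels, not the study's data.
def best_cutoff(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical FYI-style risk scores for 4 later-diagnosed (1) and 6 typical (0) children.
scores = [22.0, 20.5, 19.2, 18.0, 21.0, 10.0, 8.5, 12.0, 9.0, 7.5]
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
cutoff, youden_j = best_cutoff(scores, labels)
```

In practice a screening tool may trade specificity for sensitivity (preferring to over-refer rather than miss cases), so the published cutoffs need not coincide with the Youden-optimal point.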

Clinical Implications

  • By 9 to 12 months of age, infants who will eventually receive a diagnosis of ASD may demonstrate the absence of social communicative features and the presence of unusual sensory features.
  • These results suggest that the FYI is a promising tool for identifying 12-month-old infants who are at risk for an eventual diagnosis of ASD.

Retrieved from: http://www.medscape.org/viewarticle/769367

 

affordable health care, korean-style.

In Uncategorized on Tuesday, 25 September 2012 at 18:44

kamsa hamnida! (“thank you!”)

http://www.washingtonpost.com/lifestyle/food/kimchi-koreas-affordable-health-care/2012/09/17/ecb7614e-003b-11e2-b257-e1c2b3548a4a_story.html

First Direct Genetic Evidence for ADHD Discovered-2010

In ADHD, ADHD Adult, ADHD child/adolescent, Genes, Genomic Medicine, Neuropsychology, Psychiatry, School Psychology on Tuesday, 25 September 2012 at 06:20

an older article, but one i thought worthy of posting.

First Direct Genetic Evidence for ADHD Discovered

Caroline Cassels

September 29, 2010 — New research provides the first direct evidence that attention-deficit/hyperactivity disorder (ADHD) is genetic.

In a study published online September 30 in The Lancet, investigators from the University of Cardiff in the United Kingdom say their findings, which show that ADHD has a genetic basis, suggest it should be classified as a neurodevelopmental and not a behavioral disorder.

“We’ve known for many years that ADHD may well be genetic because it tends to run in families in many instances. What is really exciting now is that we’ve found the first direct genetic link to ADHD,” principal investigator Anita Thapar, MD, told reporters attending a press conference to unveil the study results.

The genomewide analysis included 366 children aged 5 to 17 years who met diagnostic criteria for ADHD, but not schizophrenia or autism, and 1047 matched controls without the condition. The researchers found that, compared with the control group, children with ADHD were twice as likely (approximately 15% vs 7%) to have copy number variants (CNVs).

CNVs, explained study investigator Nigel M. Williams, PhD, are sections of the genome in which there are variations from the usual 2 copies of each chromosome, such that some individuals will carry just 1 (a deletion) and others will have 3 or more (duplications).

“If a gene is included in one of these copy number variants, it can have deleterious consequences,” said Dr. Williams.

Shared Biological Link

The study authors note that the increased rate of CNVs was particularly high among children with a combination of ADHD and learning disabilities but “there was also a significant excess in cases with no such disability.”

The researchers also found that CNVs overlap with chromosomal regions that have previously been linked to autism and schizophrenia. Although these disorders are thought to be completely separate, there is some overlap between ADHD and autism in terms of symptoms and learning difficulties.

This finding suggests there may be a shared biological basis for the 2 conditions and, according to investigators, provides the first direct evidence that ADHD is a neurodevelopmental condition.

“We found that the most significant excess of these types of copy number variants was on a specific region of chromosome 16. This chromosomal region includes a number of genes, including one that affects brain development,” said Dr. Thapar.

The team’s research marks the start of the “unraveling of the genetics” of ADHD, according to Dr. Thapar.

“We’ve looked at only 1 class of variation, but it’s an important one because it has been linked to other brain disorders,” she said.

Implications for DSM-5?

Dr. Thapar added that the study results also have direct implications for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which is currently under development by the American Psychiatric Association.

A “huge debate” about whether ADHD should be classified as a behavioral or neurodevelopmental disorder is ongoing. However, she said, these findings should help put this controversy to rest.

“Our results clearly show that ADHD should be considered a neurodevelopmental disorder,” she said.

In fact, Dr. Thapar noted that the study findings have been submitted to one of the DSM-5 work groups for consideration in the development of the new manual.

The investigators note that despite epidemiologic evidence derived from twin studies showing high heritability and the fact that ADHD is often accompanied by learning disabilities, there is still a great deal of public misunderstanding about the disorder.

“Some people say this is not a real disorder, that it is the result of bad parenting. Children and parents can encounter much stigma because of this. So this finding of a direct genetic link to ADHD should help clear this misunderstanding and help address the issue of stigma,” said Dr. Thapar.

Although there are no immediate treatment implications, Dr. Thapar said she hopes the research will have an immediate impact in terms of shifting public perception about ADHD and fuel further research into the biological basis of the disorder with a view to developing better, more effective therapies for affected individuals.

In an accompanying editorial, Peter H. Burbach, PhD, from the Rudolf Magnus Institute of Neuroscience, University Medical Center Utrecht, the Netherlands, writes, “The first gains beyond today’s study might be initial insights into the pathogenesis and neurobiology of brain development as influenced by these genetic variants. This knowledge will eventually enter the clinic and might affect the way people think about and treat neurodevelopmental disorders by accounting for the biological consequence of the specific patient’s genotype.”

Lancet. Published online September 30, 2010.

Retrieved from: http://www.medscape.com/viewarticle/729652

coming soon to a bookstore near you!

In ADHD, ADHD Adult, ADHD child/adolescent, Neuropsychology, School Psychology, Uncategorized on Monday, 24 September 2012 at 16:47

Psychometric Analysis of the New ADHD DSM-V Derived Symptoms

Ahmad Ghanizadeh

BMC Psychiatry. 2012;12(21) © 2012 BioMed Central, Ltd.

Abstract and Introduction

Background: Following agreement on the need to reformulate and revise the ADHD diagnostic criteria, the proposed DSM-V revision added 4 new symptoms to the hyperactivity/impulsivity domain. This study investigates the psychometric properties of the proposed ADHD diagnostic criteria.
Methods: ADHD diagnoses were made according to DSM-IV. Parents completed the ADHD screening checklist of the Child Symptom Inventory-4 along with the 4 items describing the newly proposed DSM-V symptoms.
Results: Confirmatory factor analysis of the DSM-V derived ADHD items supports a two-factor loading of inattentiveness and hyperactivity/impulsivity, and the items show sufficient reliability. However, a three-factor model fit better than the two-factor model, and exploratory analysis raised concerns about the factor loadings of the four new items.
Conclusions: The current results support the two-factor model of the DSM-V ADHD diagnostic criteria, comprising inattentiveness and hyperactivity/impulsivity; however, the four new items can be considered a third factor.

Background

Attention-deficit/hyperactivity disorder (ADHD) is one of the most common behavioral disorders in children and adolescents. Its reported rate in community samples varies; one study reported a prevalence of 5.29%.[1] Meanwhile, the rate of its screening symptoms is much higher, reaching up to 10.1% in school-age children.[2] This high prevalence underscores the need for accurate identification and diagnosis of ADHD.[3]

There has been significant recent controversy regarding the need to reformulate and revise the ADHD criteria.[1,4,5] For example, the inclusion of age-specific ADHD criteria in DSM-V has been suggested,[6] and the current ADHD subtypes are frequently criticized.[3] Some researchers are interested in reclassifying the ADHD-inattentive type as a learning disorder.[7] Furthermore, there is debate over whether oppositional defiant disorder should be considered a type of ADHD.[8,9] Girls with ADHD are underdiagnosed in the community.[6] Moreover, the impact of changing the age of onset has been investigated.[10]

Given that the proposed DSM-V criteria for ADHD are available and will be implemented in the near future,[11] their psychometric properties and possible modifications should be studied before their clinical application. To the best of the author’s knowledge, no published studies have investigated the psychometric properties of the proposed DSM-V diagnostic criteria for ADHD.

DSM-IV defines ADHD as a cluster of symptoms; the patient must have six or more of the 9 symptoms of inattention and/or six or more of the 9 symptoms of hyperactivity/impulsivity.[12] The proposed revision of ADHD by the American Psychiatric Association added 4 new symptoms to the hyperactivity/impulsivity domain in DSM-V. These four symptoms are: “Tends to act without thinking, such as starting tasks without adequate preparation or avoiding reading or listening to instructions, may speak out without considering consequences or make important decisions on the spur of the moment, such as impulsively buying items, suddenly quitting a job, or breaking up with a friend”; “Is often impatient, as shown by feeling restless when waiting for others and wanting to move faster than others, wanting people to get to the point, speeding while driving, and cutting into traffic to go faster than others”; “Is uncomfortable doing things slowly and systematically and often rushes through activities or tasks”; and “Finds it difficult to resist temptations or opportunities, even if it means taking risks (A child may grab toys off a store shelf or play with dangerous objects; adults may commit to a relationship after only a brief acquaintance or take a job or enter into a business arrangement without doing due diligence)”.[11]

The aim of this study was to investigate the psychometric properties of the proposed ADHD symptoms in DSM-V. In the first step, factor analyses were conducted to assess the loadings for the symptoms. Then, the convergent and discriminative validity of the inattentiveness and hyperactivity-impulsivity categories of the DSM-V ADHD symptoms were assessed. Finally, the internal reliability of the inattentiveness and hyperactivity-impulsivity categories was calculated.

Methods

106 children, consecutive referrals to a university-affiliated child and adolescent psychiatry clinic in Shiraz, Iran, participated in this study. All of the children and adolescents were interviewed face to face by a board-certified child and adolescent psychiatrist. In addition, at least one parent or caregiver was interviewed face to face as a collateral source of information.

The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria were used to make psychiatric diagnoses.[12] Interviews were conducted according to the Farsi version of the Schedule for Affective Disorders and Schizophrenia for School-Age Children.[13]

Parents reported ADHD symptoms by completing the ADHD checklist of the Child Symptom Inventory-4.[14–16] The checklist includes 18 symptoms, categorized into two groups: inattentiveness symptoms and hyperactivity/impulsivity symptoms. Each category consists of the 9 corresponding DSM-IV symptoms; in fact, the items are the DSM-IV diagnostic criteria. Each symptom is rated on a 4-point Likert scale: “never,” “sometimes,” “often,” and “almost always,” scored 0, 1, 2, and 3, respectively. The score for each of the inattentiveness and hyperactivity-impulsivity categories ranges from 0 to 9. The Farsi version of this checklist has adequate reliability and convergent and discriminative validity[15] and has been used in many studies.[17–19] Its internal reliability for the ADHD-inattentive, ADHD-hyperactive/impulsive, and combined types is 0.81, 0.85, and 0.83, respectively.[14]
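The scoring description above maps each item to 0–3 but gives each category a 0–9 range, which only works if the category score is a symptom count. On that assumption (this is one plausible reading, not the authors' published algorithm), a symptom counts toward the tally when rated "often" or "almost always":

```python
# One plausible reading of the CSI-4-style scoring described above (an assumption:
# a symptom counts toward the 0-9 category tally when its item score is >= 2,
# i.e. rated "often" or "almost always"). Ratings below are invented.
RESPONSES = {"never": 0, "sometimes": 1, "often": 2, "almost always": 3}

def symptom_count(ratings):
    """Number of symptoms rated 'often' or 'almost always' (item score >= 2)."""
    return sum(1 for r in ratings if RESPONSES[r] >= 2)

# Hypothetical parent ratings for the 9 inattentiveness items:
inattention_ratings = ["often", "never", "almost always", "sometimes", "often",
                       "never", "sometimes", "often", "almost always"]
count = symptom_count(inattention_ratings)  # 5 of 9 symptoms counted present
```

Under DSM-IV's six-or-more rule, this hypothetical child would fall below the inattention threshold.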

The four new items proposed by DSM-V to be added to ADHD diagnostic criteria were translated into Farsi and back translated into English by a bilingual child and adolescent psychiatrist and a psychologist. Every effort was made to preserve the concept of each symptom. After a pilot study on children referred to the clinic, the final version was used in the current study. The responses to these symptoms were in the Likert scale ranging from “never,” “sometimes,” “often,” to “almost always”.

The children and parents or caregivers gave their assent or informed written consent for voluntary participation in this study. This study was approved by the Ethics Committee of Shiraz University of Medical Sciences.

Analysis

SPSS statistical software was used to analyze the data. A factor analysis with varimax rotation was conducted to examine the factor structure of the DSM-V ADHD symptoms. The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity were computed. Internal consistency was examined using Cronbach’s alpha.
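Cronbach's alpha, the internal-consistency statistic used here, follows the standard formula alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal stdlib-only sketch, applied to made-up ratings (not the study's data):

```python
# Minimal sketch of Cronbach's alpha; the ratings below are invented for
# illustration and are not the study's data.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var_sum = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three hypothetical items rated 0-3 by four respondents:
items = [[2, 3, 1, 0],
         [2, 2, 1, 1],
         [3, 3, 0, 1]]
alpha = cronbach_alpha(items)  # roughly 0.88 for this toy data
```

Population variance is used throughout for consistency; sample variance gives the same alpha as long as one convention is applied to both numerator and denominator.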

One-, two-, three-factor models of confirmatory factor analysis were also conducted using LISREL 8.54 software. The convergent and discriminative validity of ADHD symptoms were analyzed using Pearson’s r correlation coefficient.

Another factor analysis was conducted on the 13 DSM-V derived hyperactivity-impulsivity symptoms (the 9 DSM-IV items plus the four newly proposed items), excluding the inattentiveness symptoms, to examine whether these 13 items could be divided into two categories of hyperactivity and impulsivity.

Another factor analysis was conducted including the DSM-IV derived inattentiveness symptoms and the four new symptoms proposed in DSM-V. The symptoms of hyperactivity-impulsivity of DSM-IV were not included.

Results

The sample included 84 (79.2%) boys and 22 (20.8%) girls. The age range of the children and adolescents was 5.5 to 17 years. Their mean age was 9.1 (SD = 2.5) years.

The Kaiser-Meyer-Olkin measure was 0.76, indicating adequate sampling. The P value for Bartlett’s test of sphericity was less than 0.001. These results indicate that the data are suitable for factor analysis. The factor loadings from the principal component analysis are shown in Table 1. The hyperactivity-impulsivity factor explained 30.4% of the total variance (eigenvalue = 6.7), and the inattentiveness factor accounted for 12.1% (eigenvalue = 2.6). Nearly all of the inattentiveness symptoms loaded on one factor, and all of the hyperactivity-impulsivity symptoms loaded on the other. Three of the four newly proposed symptoms loaded on the factor containing the inattentiveness symptoms.

Table 1. Principal component analysis of the ADHD DSM-V checklist by rotated method of varimax

DSM-V symptom | Hyperactivity-Impulsivity | Inattentiveness
ADHD- item 1- makes careless mistakes −.049 .600
ADHD- item 2- sustaining attention .032 .731
ADHD- item 3- listening when spoken to .323 .319
ADHD- item 4- follows instructions .354 .515
ADHD- Item 5- organizing tasks .164 .775
ADHD-Item 6 – sustained mental effort −.097 .784
ADHD- item 7- loses things .185 .527
ADHD- item 8- distracted by extraneous stimuli .223 .536
ADHD- item 9- forgetful in daily activities .157 .486
ADHD- item10- fidgets with hands .532 .227
ADHD- item11- leaves seat in classroom .657 .206
ADHD- item 12- runs about .638 .178
ADHD- item 13- playing or leisure activities .864 −.013
ADHD- item 14- often “on the go” .800 −.008
ADHD- item 15- talks excessively .726 .152
ADHD- item 16- blurts out answers .663 .134
ADHD- item 17- awaiting turn .625 .335
ADHD- item 18- interrupts or intrudes on others .713 .086
ADHD- item 19- act without thinking .272 .412
ADHD- item 20- impatient .431 .330
ADHD- item 21- uncomfortable doing things slowly and systematically .358 .430
ADHD- item 22- difficult to resist temptations or opportunities .236 .399

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.

In order to test which model best fit the data, three confirmatory factor analyses were conducted. A one-factor model was not a good fit (chi-square = 384.65, df = 209, P < 0.0001, root mean square error of approximation [RMSEA] = 0.098, non-normed fit index [NNFI] = 0.96, comparative fit index = 0.96).

A two-factor model fit well; the results of the two-factor confirmatory factor analysis are displayed in Table 2. The correlation between the inattentiveness and hyperactivity/impulsivity factors was .56.

Table 2. The two-factor model of Confirmatory Factor Analysis of the ADHD DSM-V Checklist

DSM-V symptom | factor loading (items 1–9 load on Inattentiveness; items 10–22 on Hyperactivity-Impulsivity)
ADHD- item 1- makes careless mistakes .49
ADHD- item 2- sustaining attention .71
ADHD- item 3- listening when spoken to .53
ADHD- item 4- follows instructions .73
ADHD- Item 5- organizing tasks .81
ADHD-Item 6 – sustained mental effort .66
ADHD- item 7- loses things .57
ADHD- item 8- distracted by extraneous stimuli .63
ADHD- item 9- forgetful in daily activities .56
ADHD- item10- fidgets with hands .64
ADHD- item11- leaves seat in classroom .72
ADHD- item 12- runs about .71
ADHD- item 13- playing or leisure activities .84
ADHD- item 14- often “on the go” .81
ADHD- item 15- talks excessively .76
ADHD- item 16- blurts out answers .69
ADHD- item 17- awaiting turn .76
ADHD- item 18- interrupts or intrudes on others .72
ADHD- item 19- act without thinking .49
ADHD- item 20- impatient .62
ADHD- item 21- uncomfortable doing things slowly and systematically .58
ADHD- item 22- difficult to resist temptations or opportunities .51

Chi-square = 384.65, df = 209, P value < 0.0001, Root Mean Square Error of Approximation (RMSEA) = 0.098, Non-normed Fit index (NNFI) = 0.96, Comparative Fit index = 0.96.

However, a three-factor model of confirmatory factor analysis also fit well and it was better than the two-factor model (Table 3).

Table 3. The three-factor model of Confirmatory Factor Analysis of the ADHD DSM-V Checklist

DSM-V symptom | factor loading (items 1–9 load on Inattentiveness; items 10–18 on Hyperactivity-Impulsivity; items 19–22 on the newly added factor)
ADHD- item 1- makes careless mistakes .49
ADHD- item 2- sustaining attention .71
ADHD- item 3- listening when spoken to .52
ADHD- item 4- follows instructions .72
ADHD- Item 5- organizing tasks .82
ADHD-Item 6 – sustained mental effort .67
ADHD- item 7- loses things .57
ADHD- item 8- distracted by extraneous stimuli .63
ADHD- item 9- forgetful in daily activities .56
ADHD- item10- fidgets with hands .65
ADHD- item11- leaves seat in classroom .74
ADHD- item 12- runs about .73
ADHD- item 13- playing or leisure activities .86
ADHD- item 14- often “on the go” .83
ADHD- item 15- talks excessively .78
ADHD- item 16- blurts out answers .71
ADHD- item 17- awaiting turn .78
ADHD- item 18- interrupts or intrudes on others .74
ADHD- item 19- act without thinking .63
ADHD- item 20- impatient .80
ADHD- item 21- uncomfortable doing things slowly and systematically .78
ADHD- item 22- difficult to resist temptations or opportunities .66

Chi-square = 31.84, df = 206, P value < 0.0001, Root Mean Square Error of Approximation (RMSEA) = 0.077, Non-normed Fit index (NNFI) = 0.99, Comparative Fit index = 0.99.

The factor loadings from the second principal component analysis, which included only the DSM-V hyperactivity-impulsivity symptoms, are displayed in Table 4. The Kaiser-Meyer-Olkin measure was 0.83, and the P value for Bartlett’s test of sphericity was less than 0.001. Table 4 shows that all of the DSM-IV derived symptoms loaded on one factor, while the four new DSM-V symptoms loaded on another.

Table 4. Principal components analysis of the hyperactivity-impulsivity symptoms of ADHD DSM-V Checklist

Hyperactivity-impulsivity symptom | Component 1 | Component 2
ADHD- item10- fidgets with hands .566 .123
ADHD- item11- leaves seat in classroom .666 .214
ADHD- item 12- runs about .629 .225
ADHD- item 13- playing or leisure activities .834 .111
ADHD- item 14- often “on the go” .771 .157
ADHD- item 15- talks excessively .753 .154
ADHD- item 16- blurts out answers .682 .112
ADHD- item 17- awaiting turn .574 .396
ADHD- item 18- interrupts or intrudes on others .717 .132
ADHD- item 19- act without thinking .207 .496
ADHD- item 20- impatient .208 .794
ADHD- item 21- uncomfortable doing things slowly and systematically .187 .755
ADHD- item 22- difficult to resist temptations or opportunities .022 .781

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.

The principal component analysis including the DSM-IV derived inattentiveness symptoms and the four new DSM-V symptoms yielded a two-factor solution (Table 5): all of the inattentiveness symptoms loaded on one factor, and the newly proposed DSM-V symptoms loaded on the other.

Table 5. Principal component analysis including the DSM-IV derived inattentiveness symptoms and the four new symptoms proposed in DSM-V

Symptom (DSM-IV inattentiveness items and newly proposed DSM-V items) | Component 1 | Component 2
ADHD- item 1- makes careless mistakes .676 −.045
ADHD- item 2- sustaining attention .780 .079
ADHD- item 3- listening when spoken to .486 .122
ADHD- item 4- follows instructions .551 .322
ADHD- Item 5- organizing tasks .686 .386
ADHD-Item 6 – sustained mental effort .667 .192
ADHD- item 7- loses things .414 .341
ADHD- item 8- distracted by extraneous stimuli .450 .367
ADHD- item 9- forgetful in daily activities .579 .057
ADHD- item 19- act without thinking .275 .565
ADHD- item 20- impatient .069 .785
ADHD- item 21- uncomfortable doing things slowly and systematically .136 .793
ADHD- item 22- difficult to resist temptations or opportunities .058 .750

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.

The convergent and discriminative validity of all 22 symptoms proposed for ADHD in DSM-V were calculated. Convergent validity for the inattentiveness symptoms ranged from 0.504 to 0.772, and discriminative validity ranged from 0.017 to 0.427. Convergent validity for the hyperactivity-impulsivity symptoms ranged from 0.42 to 0.770, and discriminative validity ranged from 0.12 to 0.39.
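Validity figures of this kind are typically item-total Pearson correlations: each item is correlated with the total of its own scale (convergent) and with the total of the other scale (discriminative). The exact procedure used in the paper is an assumption on my part; this sketch uses invented data:

```python
# Hedged sketch of item-total validity correlations; all numbers are invented,
# and the item-total approach is assumed rather than stated by the paper.
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson's r using population moments (consistent numerator/denominator)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# One hypothetical inattention item vs. two scale totals for five children:
item = [2, 3, 0, 1, 2]
inattention_total = [14, 18, 3, 6, 12]
hyperimpulsivity_total = [5, 9, 8, 4, 7]
convergent = pearson_r(item, inattention_total)           # item vs. own scale: high
discriminative = pearson_r(item, hyperimpulsivity_total)  # item vs. other scale: lower
```

An item with good psychometric behavior shows a convergent correlation well above its discriminative one, which is the pattern the ranges above describe for the DSM-IV derived items.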

The alpha coefficient for all 22 DSM-V ADHD symptoms was 0.88. The alpha for the DSM-V hyperactivity-impulsivity category was 0.87, and for DSM-IV inattention it was 0.80.

Discussion

To the best of the author’s knowledge, this is the first study investigating the psychometric properties and factor structure of the DSM-V derived ADHD symptoms, so the current results cannot be compared with those of other studies. Confirmatory factor analysis confirmed the proposed two-factor loading of inattentiveness and hyperactivity/impulsivity for the new DSM-V ADHD criteria. However, a three-factor confirmatory factor analysis showed that the four new items can be considered a third factor.

The results indicate that convergent and discriminative validity are sufficient for the DSM-V derived inattention symptoms. Although the hyperactivity-impulsivity symptoms are discriminated from the inattentiveness symptoms, the convergent validity of the four newly proposed DSM-V symptoms is not as high as that of the 9 symptoms derived from DSM-IV. Moreover, three of the four new hyperactivity/impulsivity criteria loaded on the inattentiveness factor rather than the hyperactivity-impulsivity factor. These findings do not clearly support the claim that the 4 proposed symptoms describe hyperactivity-impulsivity. However, the internal consistency and reliability of the inattentiveness and hyperactivity/impulsivity symptoms are high.

Considering the factor loadings of the four symptoms newly added to DSM-V, there is a concern that inattentiveness symptoms may falsely increase diagnoses of the hyperactive/impulsive or combined types of ADHD. That is, symptoms that load as inattentive may leave a child at subthreshold for the ADHD hyperactive/impulsive type under DSM-IV while fulfilling the criteria for that type under DSM-V.

Given that better diagnosis and classification of children with ADHD could lead to better treatment, more discussion and justification of the new items are required. Future studies should probably investigate the neuropsychological functioning of children with ADHD to inform the classification of ADHD subtypes. The current results indicate that continued research is required to arrive at diagnostic criteria that support accurate ADHD diagnoses.

There is some overlap between ADHD and ODD symptoms in DSM-IV.[20] ODD symptoms are properly differentiated from ADHD symptoms; however, two ADHD items, “Often has trouble organizing activities” and “Often runs about or climbs when and where it is not appropriate”, load on the oppositional defiant disorder component rather than the ADHD component.[20] Another concern is whether the symptoms newly added in DSM-V are well differentiated from ODD symptoms; this requires further study.

There are some limitations of this study which need to be considered. It was conducted on a clinical sample of children and adolescents with ADHD; further studies with larger sample sizes, including community samples with a wider age range, are recommended. The children and their parents were the sources of information; including other informants, such as teachers, is also recommended. This study is based on one sample in a specific geographical area. In addition, the use of a translation instead of the actual questionnaire is another limitation. A multi-site approach with a more limited age range would be required to appropriately assess the psychometric properties of the proposed items of a classification used worldwide.

Despite the above-mentioned limitations, this is the first study to assess the psychometric properties of the ADHD DSM-V derived symptoms. In addition, the children, adolescents and parents were interviewed face to face using a well-known semi-structured interview, and all interviews were conducted by a board-certified child and adolescent psychiatrist.

Conclusion

The findings of the present study support the two-factor model of the DSM-V ADHD diagnostic criteria, comprising inattentiveness and hyperactivity/impulsivity. Nevertheless, the four new items can be considered a third factor.

References

  1. Rohde LA: Is there a need to reformulate attention deficit hyperactivity disorder criteria in future nosologic classifications? Child Adolesc Psychiatr Clin N Am 2008, 17(2):405–420.
  2. Ghanizadeh A: Distribution of symptoms of attention deficit-hyperactivity disorder in schoolchildren of Shiraz, south of Iran. Arch Iran Med 2008, 11(6):618–624.
  3. Bell AS: A Critical Review of ADHD Diagnostic Criteria: What to Address in the DSM-V. J Atten Disord 2010.
  4. Ghanizadeh A: Is it time to revise the definition of attention deficit hyperactivity disorder? Ann Acad Med Singapore 2010, 39(2):155–156.
  5. Swanson JM, Wigal T, Lakes K: DSM-V and the future diagnosis of attention- deficit/hyperactivity disorder. Curr Psychiatry Rep 2009, 11(5):399–406.
  6. Ramtekkar UP, Reiersen AM, Todorov AA, Todd RD: Sex and age differences in attention- deficit/hyperactivity disorder symptoms and diagnoses: implications for DSM-V and ICD-11. J Am Acad Child Adolesc Psychiatry 2010, 49(3):217–228. e211–213
  7. Milich R, Balentine AC, Lynam DR: ADHD combined type and ADHD predominantly inattentive type are distinct and unrelated disorders. Clinical Psychology: Science and Practice 2001, 8:463–488.
  8. Poulton AS: Time to redefine the diagnosis of oppositional defiant disorder. J Paediatr Child Health 2010.
  9. Ghanizadeh A: Should ADHD broaden diagnostic classification to include oppositional defiant disorder? Journal of Paediatrics and Child Health 2011, 47(6):396–397.
  10. Polanczyk G, Caspi A, Houts R, Kollins SH, Rohde LA, Moffitt TE: Implications of extending the ADHD age-of-onset criterion to age 12: results from a prospectively studied birth cohort. J Am Acad Child Adolesc Psychiatry 2010, 49(3):210–216.
  11. DSM-5 development [http://www.dsm5.org/ProposedRevisions/Pages/proposedrevision.aspx?rid=383]
  12. American Psychiatric Association. Diagnostic and statistical manual of mental disorders Fourth edition. Washington, DC: American Psychiatric Association; 1994.
  13. Ghanizadeh A, Mohammadi MR, Yazdanshenas A: Psychometric properties of the Farsi translation of the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. BMC Psychiatry 2006, 6:10.
  14. Ghanizadeh A, Jafari P: Cultural structures of the Persian parents’ ratings of ADHD. J Atten Disord 2010, 13(4):369–373.
  15. Alipour A, Esmaile EM (Eds): Studying of Validity, Reliability, and Cutoff points of CSI-14 in the School Children Aged 6 to 14 in Tehran In Tehran Exceptional students’ Research Center 2004.
  16. Sprafkin J, Gadow KD, Salisbury H, Schneider J, Loney J: Further evidence of reliability and validity of the Child Symptom Inventory-4: parent checklist in clinically referred boys. J Clin Child Adolesc Psychol 2002, 31(4):513–524.
  17. Ghanizadeh A, Mohammadi MR, Moini R: Comorbidity of psychiatric disorders and parental psychiatric disorders in a sample of Iranian children with ADHD. J Atten Disord 2008, 12(2):149–155.
  18. Ghanizadeh A: Psychiatric comorbidity differences in clinic-referred children and adolescents with ADHD according to the subtypes and gender. J Child Neurol 2009, 24(6):679–684.
  19. Ghanizadeh A, Khajavian S, Ashkani H: Prevalence of psychiatric disorders, depression, and suicidal behavior in child and adolescent with thalassemia major. J Pediatr Hematol Oncol 2006, 28(12):781–784.
  20. Ghanizadeh A: Overlap of ADHD and oppositional defiant disorder DSM-IV derived criteria. Arch Iran Med 2011, 14(3):179–182.

Retrieved from: http://www.medscape.com/viewarticle/764516

Genomic Medicine

In Genes, Genomic Medicine on Monday, 24 September 2012 at 16:20

how positively fascinating…

Genomic Medicine: Evolving Science, Evolving Ethics

Sarah E Soden; Emily G Farrow; Carol J Saunders; John D Lantos

Personalized Medicine. 2012;9(5):523-528. © 2012 Future Medicine Ltd.

Abstract and Introduction

Abstract

Genomic medicine is rapidly evolving. Next-generation sequencing is changing the diagnostic paradigm by allowing genetic testing to be carried out more quickly, less expensively and with much higher resolution, pushing the envelope on existing moral norms and legal regulations. Early experience with the implementation of next-generation sequencing to diagnose rare genetic conditions in symptomatic children suggests ways that genomic medicine might come to be used and some of the ethical issues that arise, which affect test design, patient selection, consent, sequencing analysis and communication of results. The ethical issues that arise from the use of new technologies cannot be satisfactorily analyzed until they are understood, and they cannot be understood until the technologies are deployed in the real world.

Introduction

Genomic medicine is rapidly evolving and changing the ways in which we think about the ethical, legal and economic regulation of this powerful biotechnology.

Public perceptions of genetic testing are complex and ambivalent. They have been shaped by some of the more unsavory uses to which genetics has been put in the past. It is difficult to separate our thoughts about current genetic screening for medical care from practices such as the eugenics movement, racial profiling based upon faulty understandings of genetics and compulsory sterilization programs throughout Europe and north America.[1]

Past controversies about the appropriate use of genetic testing have led to many of the ethical and regulatory safeguards that surround genetic testing today. Genetic testing created a backlash because it was used against people, rather than for them. It came to be seen as fundamentally different from other forms of testing, one in need of more rigorous and explicit policies regarding informed consent and voluntariness.[2]

This genetic exceptionalism continues today; however, it is taking new forms. We still tend to treat genetic testing as if it were ethically and legally distinct from other sorts of testing. However, technological advances, particularly those that allow testing to be done more quickly, less expensively and with much higher resolution, are pushing the envelope on existing moral norms and legal regulations. Genetic testing is one of the first types of testing to be offered directly to consumers. Today, one can send a sample of saliva to a direct-to-consumer genetic testing company and receive results about one’s risk factors for a variety of medical conditions. Often, the information is difficult to interpret, probabilistic and based on algorithms that are proprietary and thus somewhat mysterious. Still, genetic testing joins a relatively small group of other diagnostic tests, such as home pregnancy tests, blood pressure monitors and glucometers, in its ready availability to the consumer without a physician intermediary.

The day may be coming, and quite soon, when whole-genome or -exome sequencing will be readily available. It is hard to know whether to think of this as a good or a bad thing, whether people who undergo such testing – whether they are patients, research subjects or consumers – will be helped or harmed by it. In this article, we will speculate about the near future of genetic testing by analyzing the way such testing is used as a new and inexpensive way of diagnosing rare genetic conditions in symptomatic children.

Next-generation Sequencing for the Diagnosis of Rare Mendelian Conditions

One of the most difficult challenges facing pediatricians is that of diagnosing rare genetic conditions in children who present with signs and symptoms that suggest an underlying genetic cause, but for whom the etiology remains elusive despite costly and often lengthy etiologic investigations. Such cases arise commonly in clinics that evaluate children for, among other things, cognitive impairment, neuromuscular disorders and congenital anomalies.[3] Such diagnostic odysseys often include serial molecular testing of one or a few genes; a process that can be emotionally taxing to families and frustrating to physicians, who must decide together how long to pursue the quest for diagnosis.[4]

The advent of next-generation sequencing (NGS) coupled with advanced bioinformatic processing is changing this diagnostic paradigm. At Children’s Mercy Hospital in Kansas City (MO, USA), we have introduced a highly multiplexed molecular diagnostic test that enables simultaneous interrogation of genes associated with more than 500 X-linked, autosomal-recessive and mitochondrial pediatric diseases, including some genes for which no commercially available molecular test exists.[5] Currently available on a research basis, this test is being offered to complex pediatric patients by subspecialists in our institution. As we move toward clinical implementation, it is projected that the cost will initially be less than that of a single conventional molecular test (<US$1000), and will continue to fall rapidly.[101] Furthermore, turnaround time will rival that of conventional molecular tests, which commonly have a time-to-result of 2–8 weeks. Thus, NGS-based diagnostic testing will lower the threshold for physicians to pursue gene sequencing, and ultimately advance knowledge of the biologic underpinnings of challenging pediatric diseases.

Ethical considerations affect all aspects of the implementation of this program, including test design, patient selection, consent, sequencing analysis of patient DNA and delivery of results to the patient and family. Potential unintended consequences of multiplexed genetic testing in children include the detection of carrier status for recessive diseases and the discovery of predisposition to adult-onset disease. Conservative estimates suggest that each human carries alleles for at least ten recessive Mendelian diseases.[6] With the possible exceptions of cystic fibrosis and sickle cell disease, it is rare for individuals to have knowledge of the single recessive alleles residing in their genome. The American Society of Human Genetics (ASHG; MD, USA)/American College of Medical Genetics (ACMG; MD, USA) guidelines for genetic testing of minors prohibit predictive genetic testing for adult-onset diseases and discourage reporting of carrier status to minors.[7] In all such testing, physicians and scientists should be concerned about violating the child’s right to what has been called ‘an open future’, that is, a future not involuntarily shaped by information of uncertain accuracy that the child did not ask for or necessarily want to know.

Informed Consent

Informed consent for NGS testing is complex. The 2012 ACMG policy statement on clinical application of genomic sequencing recommends pretest counseling with a medical geneticist or genetic counselor, and that formal consent should be a part of the process.[102] Specifically, patients must be informed of the expected results from testing, including the likelihood of incidental findings. A significant challenge to this process is the lack of a clear consensus in the genetics community on how incidental findings should be handled, especially in the pediatric population. Currently, each individual laboratory is responsible for determining how incidental findings will be handled and reported. Further complicating the process is the recommendation that patients be offered the option of not receiving results from secondary or incidental findings.

We have also encountered an additional layer of complexity in the consent process at our institution. In our experience, some families with children suffering potentially fatal conditions and/or progressive disease ask very few questions when we explain gene sequencing and seek their consent. For parents, the urgency to help an ill child is paramount and overshadows potentially thorny issues such as discovery of their child’s (and potentially their own) carrier status or future disease risk. Our challenge, then, is to build protections into the testing process. In our institution, one such protection has been to develop a team of core users: 20 medical geneticists, pediatric subspecialists and genetic counselors who participated in test design and implementation policies. This process required an unusually broad understanding of genomic medicine, including, but not limited to, knowledge of the ACMG/ASHG pediatric guidelines, the Genetic Information Nondiscrimination Act,[8] and the potential benefits and limitations of NGS. The goal is for these clinicians to become uniquely suited to offer this test to the appropriate patients and, in collaboration with knowledgeable genetic counselors, to engage in a meaningful informed consent process with families.

Test Design & Interpretation

A unique patient protection strategy developed in our institution is Sign and Symptom Assisted Gene Analysis, a component of our bioinformatic pipeline that reduces the risk of violating ASHG guidelines by not identifying genetic findings unrelated to the patient’s presenting condition [Saunders CJ et al., Rapid whole genome sequencing for genetic disease diagnosis in neonatal intensive care units (2012), Manuscript in preparation]. Physicians who order our NGS diagnostic panel are required to select up to ten clinical signs and symptoms from a controlled medical ontology of 225 terms. Sign and Symptom Assisted Gene Analysis then generates a candidate gene list specific to that patient. Genes not on the candidate list are not analyzed computationally, thereby greatly reducing the likelihood of unintended findings such as carrier status or future disease risk unrelated to the child’s clinical condition.
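As a rough illustration of this design, the filtering step can be pictured as a union of gene sets keyed by the selected clinical terms. Everything below (the terms, the gene names, the three-entry ontology) is invented for illustration; the actual pipeline is far larger:

```python
# Hypothetical sketch of symptom-driven gene filtering: only genes linked
# to the clinician-selected terms are analyzed; all others are masked.
from typing import Dict, List, Set

# Toy ontology: clinical term -> associated disease genes (illustrative only)
ONTOLOGY: Dict[str, Set[str]] = {
    "ataxia": {"GENE_A", "GENE_B"},
    "cerebellar atrophy": {"GENE_B", "GENE_C"},
    "ambiguous genitalia": {"GENE_D"},
}

MAX_TERMS = 10  # physicians select up to ten terms from the ontology

def candidate_genes(selected_terms: List[str]) -> Set[str]:
    if not 0 < len(selected_terms) <= MAX_TERMS:
        raise ValueError("select between 1 and %d terms" % MAX_TERMS)
    genes: Set[str] = set()
    for term in selected_terms:
        genes |= ONTOLOGY[term]  # union of genes linked to each term
    return genes

# Genes outside the candidate list are never analyzed computationally,
# so carrier status or adult-onset risk in unrelated genes stays unseen.
print(sorted(candidate_genes(["ataxia", "cerebellar atrophy"])))
# prints ['GENE_A', 'GENE_B', 'GENE_C']
```

Because unselected genes are never analyzed, findings such as carrier status in an unrelated gene simply never surface.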

Even with such protections, expert interpretation is necessary. For example, two very different conclusions may be drawn when a single heterozygous pathogenic variant at an autosomal-recessive locus is detected. Such a finding may be inadvertent detection of carrier status in a child. However, it may also be diagnostic in the case of a compound heterozygote; such individuals may carry a second, undetectable variant in the same gene, derived from the other parent, that lies outside the sequenced region. A careful review of the literature, knowledge of the gene in question, phenotypic plausibility and genotyping of the parents must all factor into the interpretation. The size of the candidate gene list also affects the interpretation of such a case. A single pathogenic variant is more likely to be considered disease-causing when interrogating 20–30 genes for a child with ambiguous genitalia than for a patient with intellectual disability, which is associated with more than 300 genes on our multiplexed test.

Interpretation of nucleotide variants is further complicated by the ubiquity of variants of unknown significance and the lack of comprehensive clinical-grade reference databases. In a verification study, Bell et al. reported that 122 out of 460 literature-annotated disease mutations are either erroneous or benign polymorphisms, as evidenced by a frequency of >5% in the samples tested and/or homozygosity in unaffected individuals.[9] This highlights the need for cautious and informed interpretation, which in some cases will require functional studies, confirmatory testing and sequencing of family members.

Communication of Results to Physicians

The communication of NGS results to clinicians poses many challenges. Great care must be taken to educate physicians, particularly about variants of unknown significance (VUS), in the face of enthusiasm for diagnosis and gene discovery. Previous guidelines, developed in the context of serial single-gene testing, called for reporting of all VUS, a practice that would overwhelm physicians and patients with data that may be anxiety-provoking and susceptible to misinterpretation. In our institution, whole-exome sequencing of a single individual reveals 130,000–140,000 variants. Approximately 98% of these are category 4 variants (unlikely to be disease-causing) for reasons such as high allele frequency.[10] When faced with a variant in a gene of interest, physicians unfamiliar with NGS and the ubiquity of nucleotide variation may be skeptical when told that a variant is unlikely to be disease-causing. A challenge when drafting NGS result reports is to accurately characterize pertinent findings in a format that is meaningful to a variety of subspecialists.[11] An electronic report containing the key findings and interpretation is in development. We envision ‘point-of-care’ educational resources, such as hyperlinks to disease information or brief education modules, that physicians would have the opportunity to view when pertinent to the case at hand. As McGuire and Burke noted: “If genomic research is to achieve its promise, investments in health outcomes research, health technology assessment, clinical practice guidelines and information tools will need to increase”.[12]
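The triage described above, in which roughly 98% of variants are set aside as category 4 before clinical review, can be sketched as a simple allele-frequency filter. The 5% cutoff echoes the Bell et al. criterion cited earlier but is an assumption here, as are the variant records:

```python
# Hypothetical sketch of variant triage: common variants (high population
# allele frequency) are set aside as "category 4" (unlikely to be
# disease-causing) before any clinical review.
COMMON_AF_THRESHOLD = 0.05  # assumed cutoff for illustration

def triage(variants):
    """Split variants into (review_queue, category4) by allele frequency."""
    review, category4 = [], []
    for v in variants:
        if v["allele_freq"] >= COMMON_AF_THRESHOLD:
            category4.append(v)  # common polymorphism, filtered out
        else:
            review.append(v)     # rare variant, kept for expert review
    return review, category4

exome = [
    {"id": "var1", "allele_freq": 0.32},    # common polymorphism
    {"id": "var2", "allele_freq": 0.0001},  # rare, kept for review
    {"id": "var3", "allele_freq": 0.12},    # common polymorphism
]
review, cat4 = triage(exome)
print([v["id"] for v in review])  # prints ['var2']
```

Only the small rare-variant remainder ever reaches a clinician, which is what makes reporting at exome scale tractable.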

Communication of Results to Patients & Families

For patients, post-test counseling with a geneticist or genetic counselor is important. In addition to diagnostic findings, an institution may determine that secondary findings will be conveyed to patients who have chosen to receive them following meaningful pretest counseling. As defined by the ACMG, secondary findings are “gene variants known to be associated with a phenotype, but not believed to be related to the condition that led to the testing”.[102] To facilitate the discussion of results, some institutions have adopted a staged release in which a primary report contains the diagnostic results, along with any incidental findings for which clear medical interventions exist. An optional full report including all variants may also be requested. Our current approach for pediatric patients is to report only variants predicted to be causative of the child’s symptoms. Confirmatory Sanger sequencing of research results is performed such that all results are clinically actionable.

In cutting-edge genomic medicine, novel variants and genes are routinely identified that may be associated with unknowns such as: pleiotropy (the effect of a single gene on multiple phenotypic traits), epistasis (gene–gene interactions), phenotypic heterogeneity, incomplete penetrance and epigenetic processes. An additional complication is that variant interpretation may change. Some of today’s VUS will become interpretable as genomic reference databases improve. There is a lack of consensus about whether there is a duty to recontact patients and families whose interpretation has changed, and if so, who is responsible for contacting families.[13] Furthermore, the practicalities of such reanalysis have scarcely been considered.

A vital member of our research team is a genetic counselor who is well versed in the complexities of molecular medicine and NGS, and who guides and supports families. The need for highly specialized patient and family support is exemplified by the case of a family whose first-born child suffered a rapid, painful and life-threatening neonatal disease [Saunders CJ et al., Rapid whole genome sequencing for genetic disease diagnosis in neonatal intensive care units (2012), Manuscript in preparation]. Clinical testing was nondiagnostic and the parents consented to NGS for themselves and their child. Shortly after consent, the patient passed away, and the family was faced with the decision of whether to seek a postmortem molecular diagnosis. Their decision-making process necessitated contemplation of complex scientific concepts, including de novo dominant mutations versus rare recessive disorders, and the implications of such findings for family planning. In a case such as this one, a misinterpreted variant could have serious unintended consequences for future pregnancies.

A recognized psychosocial benefit of genetic testing in symptomatic children is the reduction of uncertainty. For both treatable and untreatable conditions, patients and family members may derive benefit from the identification of a definitive etiology, or from the elimination of specific genetic diseases from the differential diagnosis. The power of NGS to change pediatric healthcare, and most importantly to affect the lives of children today, was seen among the first patients enrolled at the Children’s Mercy Hospital Center for Pediatric Genomic Medicine [Soden SE et al., A systematic approach to implementing monogenic genomic medicine (2012), Manuscript in preparation]. We enrolled siblings with progressive neurologic symptoms who, despite a 5-year etiologic investigation costing more than US$20,000, had not received a causal diagnosis. Using NGS, a diagnosis was made and confirmed in our clinical laboratory within 6 weeks, bringing a diagnostic odyssey to an end for the family and their healthcare team. At the time of diagnosis the younger sister was 5 years of age, the same age at which her older sister’s symptoms, particularly cerebellar atrophy and ataxia, had accelerated. The younger child had only very mild ataxia, whereas her sister’s condition had progressed to the point that a wheelchair was needed for ambulation, speech was dysarthric, and upper-extremity dysmetria and chorea were prominent. Faced with uncertainty about both daughters’ prognoses, and a differential that included fatal neurodegenerative diseases, the family’s quest for diagnosis had intensified. Following molecular diagnosis with NGS, the literature could be drawn upon and the family reassured that individuals with this genetic diagnosis commonly live into adulthood with intact cognitive abilities. Furthermore, reports of coenzyme Q10 deficiency in individuals with this diagnosis who responded to coenzyme Q10 administration prompted cautious optimism that a treatment that might slow disease progression had been identified.

Conclusion

This early report from the frontlines of genomic medicine suggests some of the ways that genomic medicine might come to be used and some of the ethical issues that might arise. The technologies are changing rapidly. The uses to which those technologies can be put are also rapidly changing. The ethical issues that arise from new uses of new technologies cannot be satisfactorily analyzed until they are understood, and they cannot be understood until the technologies are deployed in the real world. Uncertainty is inherent in these projects. One response to uncertainty is to assume the worst, become risk-averse, and put roadblocks in the way of innovation until innovation has been proven safe. But innovation cannot be proven safe in a risk-averse environment. The only way to assess the risks of a new technology is to use it – cautiously, carefully, with an open mind and a willingness to collect data that will allow an assessment of the risks and benefits. We should strive, as hard as we can, to minimize risks to the early adopters, even if doing so means we slow progress, prohibit certain seemingly desirable and potentially beneficial activities, and restrict the range of human choice. But we should not become so risk-averse as to abjure progress because of fear of unlikely, unproven, and even unnamed risks.

Such an approach would allow experimentation even as we remain exquisitely attentive to the ethical issues that arise as we innovate, and respond to those issues in a tentative but cautious way. Many of the fears that swirled around the early use of genomic technology have not been realized. Instead, adults have shown themselves to be more capable of dealing with troubling information, potentially bad news and uncertainty than they were once thought to be.[103] They have also shown themselves capable of deciding for themselves what they do and do not want to know.[14]

Testing children, of course, raises different concerns – but not so different. As with adults, genetic testing can either provide a precise diagnosis or it can provide probabilistic information about the risks of developing particular diseases. Both sorts of information may be useful, even crucial, to parents. Current guidelines dictate that testing should not be done for conditions that do not have any health implications during childhood. Such rules are probably wise in most circumstances. In some situations, as we learn more about the natural history of genetic conditions, or as we develop interventions that might prevent the onset of such conditions, the conventional wisdom about testing children for such conditions might change.

It may be, then, that the proper question to ask is not whether genetic information should be treated like other medical information – but instead, why other medical information should not be treated like genetic information. That is, why is all medical information not the property of the patient, rather than the property of the doctor? Why do patients not have the right to see all of their test results, rather than having those results reported to their doctors?

Genetic information may transform other medical information in part because it is becoming available at a time and in a form that makes it similar to other information that is widely and publicly available. Steven Pinker, who was one of the first people in the world to have his whole genome sequenced, wrote: “People who have grown up with the democratization of information will not tolerate paternalistic regulations that keep them from their own genomes”.[15]

We should continue to innovate, analyze the implications of innovation and allow the technology to shape the questions to which ethics offers responses.

Future Perspective

Inexpensive and accurate whole-genome sequencing will soon be available to all doctors and to all citizens. The availability of this technology will challenge prevailing ethical and regulatory paradigms not just in genetics but in all of medicine. Massive amounts of complex data will require doctors to learn more about genetics, information scientists to develop ways of making sense of the data and patients (including parents of patients) to make decisions about what they want to know, when they want to know it, and how they want to access the information. Today’s projects are the pilot projects in which we must carefully explore the risks and benefits of new approaches to testing and to talking about test results.

Sidebar

Executive Summary

  • Current regulation of genomics reflects past eugenic abuses.
  • New technologies, which allow rapid and inexpensive whole-exome or -genome sequencing, raise questions about whether restrictive regulation is either feasible or appropriate.
  • We describe a program that we are developing at Children’s Mercy Hospital in Kansas City (MO, USA) to use next-generation sequencing techniques to test symptomatic children for hundreds of known autosomal recessive conditions.
  • We have developed an approach to informed consent and to disclosure of results that tries to balance clinicians’ desires to make the correct diagnosis, parents’ rights to make medical decisions for their children, and the child’s right to an open future.

References

  1. Kevles DJ. Eugenics, the genome, and human rights. Medicine Studies1,85–93 (2009).
  2. Greely H. Legal, ethical and social issues in human genome research. Annu. Rev. Anthropol.437–502 (1998).
  3. Roesser J. Diagnostic yield of genetic testing in children diagnosed with autism spectrum disorders at a regional referral center. Clin. Pediatr.50(9),834–843 (2011).
  4. Fierman AH. Advances in whole-genome genetic testing: from chromosomes to microarrays; solving the puzzle: case examples of array comparative genomic hybridization as a tool to end the diagnostic odyssey. Foreword. Curr. Probl. Pediatr. Adolesc. Health Care42(3),45–46 (2012).
  5. Kingsmore SF, Dinwiddie DL, Miller NA. Adopting orphans: comprehensive genetic testing of Mendelian diseases of childhood by next-generation sequencing. Expert Rev. Mol. Diagn.11(8),855–868 (2011).
  6. Nussbaum RL. Thompson and Thompson Genetics in Medicine. Nussbaum RL, McInnes RR, Willard HF (Eds). Saunders Elsevier, PA, USA (2007).
  7. Points to consider: ethical, legal, and psychosocial implications of genetic testing in children and adolescents. Am. J. Hum. Genet.57,1233–1241 (1995).
  8. Rothstein MA. Currents in contemporary ethics. GINA, the ADA, and genetic discrimination in employment. J. Law Med. Ethics36(4),837–840 (2008).
  9. Bell CJ, Dinwiddie DL, Miller NA et al. Carrier testing for severe childhood recessive diseases by next-generation sequencing. Sci. Transl. Med.3,65ra4 (2011).
  10. Cooper GM, Shendure J. Needles in stacks of needles: finding disease-causal variants in a wealth of genomic data. Nat. Rev. Genet.12(9),628–640 (2011).
  11. Kingsmore SF, Saunders CJ. Deep sequencing of patient genomes for disease diagnosis: when will it become routine? Sci. Transl. Med.3,87ps23 (2011).
  12. McGuire A, Burke W. An unwelcome side effect of direct-to-consumer personal genome testing. JAMA300,2669–2671 (2008).
  13. Pyeritz RE. The coming explosion in genetic testing – is there a duty to recontact? N. Engl. J. Med.365,1367–1369 (2011).
  14. Angrist M. Here is a Human Being. At the Dawn of Personal Genomics. HarperCollins, NY, USA (2010)
  15. Pinker S. My genome, my self. New York Times, 7 January (2009).

    Websites
    101. Wetterstrand KA. DNA Sequencing costs: data from the NHGRI large-scale genome sequencing program. http://www.genome.gov/sequencingcosts (Accessed 3 May 2012)
    102. American College of Medical Genetics and Genomics (ACMG) policy statement: points to consider in the clinical application of genomic sequencing. http://www.acmg.net/StaticContent/PPG/Clinical_Application_of_Genomic_Sequencing.pdf (Accessed 8 June 2012)
    103. Powers R. The book of me. GQ, October, 2008. http://www.gq.com/news-politics/big-issues/200810/richard-powers-genome-sequence (Accessed 6 June 2012)

Retrieved from: http://www.medscape.com/viewarticle/768487?src=ptalk

Drug Therapy for Autism

In Autism Spectrum Disorders, Medication, Neuropsychology, Psychiatry, Psychopharmacology on Monday, 24 September 2012 at 16:10

Autism Patients Might Benefit from Drug Therapy

By SYDNEY LUPKIN | ABC News – Wed, Sep 19, 2012 2:37 PM EDT

Researchers have found a drug that can help patients with Fragile X syndrome, the most common cause of inherited intellectual impairment (formerly known as mental retardation), stay calm in social situations by treating their anxiety.

Dr. Elizabeth Berry-Kravis and her team found that a drug called Arbaclofen reduced social avoidance and repetitive behavior in Fragile X patients, especially those with autism, by treating their anxiety. The drug increases GABA, a brain chemical that regulates the excitatory system; Fragile X patients typically have too little GABA for this job, so their excitatory systems “signal out of control” and make them anxious.

Such patients have been known to cover their ears or run away at their own birthdays because they are overwhelmed by the attention, but one trial participant said he was able to enjoy his birthday party for the first time in his life while he was on Arbaclofen, she said.

“I feel like it’s kind of the beginning of chemotherapy when people first realized you could use chemotherapy to treat cancer patients instead of just letting them die,” said Berry-Kravis, a professor of neurology and biochemistry at Rush University Medical Center in Chicago who has studied Fragile X for more than 20 years.

She said people used to think Fragile X patients couldn’t be helped either, but she and her team have proven that by using knowledge from existing brain mechanism studies, doctors can select medications to target specific problems in Fragile X patients’ brains.

Fragile X syndrome is caused by a mutation in the FMR1 gene, which makes a protein necessary for brain growth, and studies indicate it causes autism in up to one-third of patients diagnosed with it. Unlike Fragile X syndrome, which is genetic, autism is a behavioral diagnosis characterized by an inability to relate to other people or read social cues. Autism and Fragile X are linked, but not mutually exclusive. A core symptom of both is social withdrawal.

Sixty-three patients with Fragile X participated in Berry-Kravis’s placebo-controlled, double-blind clinical trial from December 2008 through March 2010. Of those, the patients with autism showed the biggest improvements in social behavior, Berry-Kravis said.

To psychologist Lori Warner, who directs the HOPE Center at Beaumont Children’s Hospital, the study is exciting because when her autistic patients are anxious, they often have a harder time learning the social cues they can’t read on their own.

“Reducing anxiety opens up your brain to be able to take in what’s happening in an environment and be able to learn from and understand social cues because you’re no longer frightened of the situation,” Warner said.

She works mostly with autism patients, and although some do have Fragile X as well, most do not.

Fragile X affects one in 4,000 men and one in 6,000 to 8,000 women, according to the Centers for Disease Control and Prevention.

Although Arbaclofen worked best on autistic Fragile X patients, further studies will be needed to prove whether it can help all autism patients, not just those with autism caused by Fragile X.

“There’s a difference between one person’s brain and another in how it’s set up,” Berry-Kravis said. “This is not a magic cure. It’s a step.”

Retrieved from: http://gma.yahoo.com/autism-patients-might-benefit-drug-therapy-183744169–abc-news-health.html

Perinatal Risk Factors for ADHD Confirmed

In ADHD, ADHD Adult, Psychiatry, School Psychology on Monday, 24 September 2012 at 16:06

important info!

Perinatal Risk Factors for ADHD Confirmed

Megan Brooks

September 13, 2012 — The combination of maternal gestational diabetes mellitus (GDM) and low socioeconomic status (SES) is a strong risk factor for childhood attention-deficit/hyperactivity disorder (ADHD), a study from Germany confirms.

Perinatal health problems, maternal smoking during pregnancy, and atopic eczema also raise the risk for ADHD, whereas fully breastfeeding appears to protect against ADHD, regardless of the duration of breastfeeding, the study showed.

“Modification of these environmental risk factors by evidence-based prevention programs may help to decrease the burden of ADHD,” write coinvestigators Jochen Schmitt, MD, MPH, of Technical University Dresden, and Marcel Romanos, MD, from the University Hospital of Würzburg, in Germany.

The study was published online September 10 in the Archives of Pediatrics & Adolescent Medicine.

It follows a study published in the same journal earlier this year by Yoko Nomura, PhD, MPH, from the Department of Psychology, Queens College, City University of New York, and colleagues. That study, which included 212 preschool-age children, linked maternal GDM and low SES, especially in combination, to a heightened risk for childhood ADHD.

Nationwide Study

These latest findings from Dr. Schmitt and Dr. Romanos replicate this finding in a large nationwide representative sample of 3- to 17-year-olds who participated in the German Health Interview and Examination Survey for Children and Adolescents (n = 13,488).

The outcome of interest was childhood ADHD, and the primary exposures of interest were self-reported physician-diagnosed GDM (absent or present) and SES, classified as low, medium, or high on the basis of parental education, professional qualification, professional status, and family income.

The authors also considered age, sex, and a broad set of environmental exposures in the prenatal and perinatal period and in infancy as competing risk factors in multivariate analysis.

A total of 660 children (4.9%) had ADHD; the prevalence of GDM and low SES was 2.3% (n = 280) and 25.5% (n = 3420), respectively, the authors report.

Both maternal GDM and low SES were significantly related to ADHD. In multivariate regression modeling (based on 11,222 observations without any missing data), GDM and low SES were independent risk factors for childhood ADHD. The same was true for perinatal health problems, maternal smoking during pregnancy, and atopic eczema, whereas breastfeeding was protective.

Table: Risk for ADHD by Exposure

Characteristic/Exposure      aOR (95% CI)
Maternal GDM                 1.91 (1.21 – 3.01)
Low SES                      2.04 (1.56 – 2.68)
Maternal smoking             1.48 (1.19 – 1.84)
Perinatal health problems    1.69 (1.40 – 2.03)
Atopic eczema                1.62 (1.30 – 2.02)
Breastfeeding                0.83 (0.69 – 0.996)

aOR = adjusted odds ratio; CI = confidence interval
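The adjusted odds ratios above come from multivariate logistic regression, where an aOR is the exponential of a model coefficient. The basic arithmetic behind an (unadjusted) odds ratio and its 95% confidence interval can be sketched in a few lines; the 2×2 counts below are hypothetical, for illustration only:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not the study's data:
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f} - {hi:.2f})")
```

A crude OR like this differs from the adjusted values reported in the table, which control for the competing risk factors in the model; a confidence interval that excludes 1.0 (as with each exposure above) indicates statistical significance at the 5% level.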

The investigators note that their findings confirm those of Dr. Nomura and colleagues by showing an association between low SES, maternal GDM, and ADHD “and their additive interaction as risk factors for ADHD in a large population-based sample.”

The researchers say their study also extends previous research by showing that fully breastfeeding may have protective effects on childhood ADHD.

Fetus a “Captive Audience”

Dr. Nomura told Medscape Medical News that “being able to duplicate our findings in a different sample and a much larger sample is important.”

“I’m not sure if most doctors know that GDM is a risk factor for ADHD; biological and environmental risk factors for ADHD is a fairly new scientific field,” she added.

“ADHD is a highly hereditary illness, but it’s not only hereditary; we are beginning to gather information about environmental or biological causes and beginning to focus on perinatal risk factors for ADHD,” said Dr. Nomura.

“The fetus is a captive audience,” she noted, “and it seems like in utero exposure to a variety of things like excessive insulin, smoking, plastic materials, food dyes, toxic chemicals may cause epigenetic changes in brain development that may show up later in life.”

Arch Pediatr Adolesc Med. Published online September 10, 2012. Abstract

 

friendship is like a box of chocolates…

In Humor, Insomnia on Monday, 24 September 2012 at 07:30

Tailoring Antidepressant Treatment

In ADHD, ADHD Adult, ADHD child/adolescent, Anxiety, Medication, Psychiatry, Psychopharmacology on Monday, 24 September 2012 at 07:13

Tailoring Antidepressant Treatment: Factors to Individualize Medication Selection

Thomas L. Schwartz, MD; Daniel Uderitz, MD

In the realm of psychopharmacology, we often declare medications within their respective therapeutic classes to be equal. This is a byproduct of the way medications achieve their indications for specific psychiatric disorders. In the case of antidepressants, the US Food and Drug Administration (FDA) may approve a drug as an antidepressant when controlled trials show it to be superior to placebo, with response commonly defined as a 50% improvement in symptoms. There are no standards for differentiating antidepressants beyond this. Clinicians often note that all antidepressants are not created equal, especially when applied to clinical situations and to patients who are often complex and have comorbid conditions. The goal of this article is to sort out regimens that may convey certain advantages in an individualized manner. This involves conceptualizing and utilizing monotherapies, combination therapies, and adjunctive treatments.

Monotherapies

The first-line treatment of patients with major depressive disorder (MDD) should start with an aggressive monotherapy. This occurs in clinical practice and is supported by many guidelines and reviews. The various antidepressant medications have unique properties that can be used to individualize treatments. Most psychiatrists can easily name their “favorite” antidepressant to use in certain situations. This is sometimes based on a simple bias, but often has evidence to back up clinical practice. Let us start with the mechanistically simple and move toward more complex ways to think about these medications. This includes thinking about FDA approvals, available guidelines, comorbidities, side effects, and more complex pharmacodynamic receptor-based neuropsychiatry.

A patient rarely comes to a psychiatrist without a combination of psychiatric symptoms. Typically, clinicians screen patients and find that they meet criteria for more than 1 Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnosis.[1] At a minimum, the individual patient raises suspicion for various problem areas, even if he or she does not meet criteria for a specific disorder. By reviewing FDA-approved indications, clinicians may quickly make simple treatment decisions that are more individualized on the basis of these comorbidities and predominant symptoms. Of note, lack of FDA approval for a given indication does not necessarily mean that evidence does not support efficacy for that disorder. For example, the manufacturer may not have pursued FDA approval for other indications, or may have decided not to support randomized controlled trials to study another indication.

Single Indication

The first group of antidepressants approved by the FDA for the single indication of MDD includes amitriptyline, citalopram, desipramine, desvenlafaxine, mirtazapine, nortriptyline, protriptyline, trazodone, trimipramine, vilazodone, and the monoamine oxidase inhibitor (MAOI) class.[2-4] Clinicians should know that these medications have only the 1 indication, which clearly supports their use in MDD. However, many practitioners recognize multiple other factors that allow these medications to be used off-label for various individuals. In a pure model, these antidepressants have regulatory data supporting use only in patients with MDD but, as discussed, a lack of approval for other indications does not necessarily indicate a lack of supportive data or efficacy.

Multiple Indications

Unlike those listed above, many antidepressants have other labeled or approved indications. These span a variety of comorbidities including anxiety disorders, seasonal affective disorder, sleep disorders, pain disorders, premenstrual dysphoric disorder, bulimia nervosa, and other miscellaneous indications. Given that MDD is often complicated by comorbidity, let us evaluate a few comorbidities where data-driven decisions may help individualize treatment in patients who are depressed and simultaneously experience other psychiatric conditions.

Posttraumatic Stress Disorder

Patients with posttraumatic stress disorder often have comorbid depression. Only 2 antidepressants, the selective serotonin reuptake inhibitors (SSRIs) sertraline and paroxetine, are approved for this indication.[2] Multiple other medications, however, have been recognized as effective off-label treatments for posttraumatic stress disorder; these include amitriptyline, fluoxetine, fluvoxamine, imipramine, and venlafaxine.[5-7] If a patient presents with MDD and posttraumatic stress disorder, these antidepressants may be considered if necessary to achieve efficacy for both conditions.

Obsessive-Compulsive Disorder

Several medications are approved for obsessive-compulsive disorder, including the tricyclic antidepressant (TCA) clomipramine and the SSRIs fluoxetine, fluvoxamine, paroxetine, and sertraline.[2] Venlafaxine, a serotonin norepinephrine reuptake inhibitor (SNRI),[8] and the SSRI citalopram have shown some promise in obsessive-compulsive disorder,[9] but have not yet received that indication from the FDA.

Panic Disorder

The SSRIs fluoxetine, paroxetine, and sertraline are approved for treatment of panic disorder, as is the SNRI venlafaxine.[2,10] Other antidepressants with an evidence base but no approved indication include the TCAs clomipramine and imipramine and the SSRI fluvoxamine.[6]

Anxiety Disorders

Social anxiety disorder. The SSRIs paroxetine and sertraline and the SNRI venlafaxine extended-release (ER) are approved for the treatment of social anxiety disorder.[2] The SSRI fluoxetine is sometimes used off-label for social anxiety disorder as well.

Generalized anxiety disorder. Four antidepressants are indicated for the treatment of generalized anxiety disorder: the SSRIs escitalopram and paroxetine and the SNRIs duloxetine and venlafaxine ER.[2,11]

Insomnia

Although sleep difficulties are a nearly universal symptom of depression, few antidepressants have an official indication for insomnia. Doxepin, a TCA, is the sole antidepressant labeled with this indication, when it is used at subtherapeutic antidepressant doses of 3 to 6 mg per day.[12] However, clinicians often use sedating antidepressants to induce sleep in those patients with MDD and insomnia (Schwartz TL. Novel hypnotics: moving beyond positive allosteric modulation of the GABA-A receptor. Manuscript submitted). These medications include the TCA amitriptyline, the tetracyclic mirtazapine, and the serotonin modulator trazodone.

Pain Syndromes

Duloxetine, an SNRI, is the only antidepressant with official indications for the treatment of pain syndromes.[2,10] These include chronic musculoskeletal pain, neuropathic pain (diabetic neuropathy in particular), and fibromyalgia. Many of the TCAs, as well as other SNRIs, have also been studied for the treatment of pain syndromes, primarily neuropathic or chronic pain conditions.[13,14] Amitriptyline is also often used for migraine headaches. Unfortunately, these other medications have not received official indications for these psychosomatic comorbidities.

Attention-Deficit/Hyperactivity Disorder

Some antidepressants have shown promise for the treatment of attention-deficit/hyperactivity disorder, but not enough to warrant a specific FDA indication. Nonetheless, these medications are used for the treatment of attention-deficit/hyperactivity disorder, particularly in patients with substance use disorder. Bupropion, desipramine, imipramine, nortriptyline, and venlafaxine have some evidence base to support their use.[15-19]

Other Comorbid Considerations

Premenstrual dysphoric disorder. The SSRIs fluoxetine, paroxetine, and sertraline have been FDA approved for the treatment of premenstrual dysphoric disorder.[2]

Smoking cessation. Many patients who receive mental health treatment are also addicted to nicotine. Bupropion SR has received the indication for nicotine addiction.[2] Nortriptyline also has been shown to be helpful for smoking cessation efforts, but has not received an official indication.[20]

Miscellaneous. Bupropion XL carries a specific indication for prophylaxis of seasonal affective disorder and is often used off-label for the treatment of bipolar depression.[19,21,22] Fluoxetine is indicated for treatment of bulimia nervosa and is sometimes used for the treatment of Raynaud’s phenomenon.[2,19,23] Venlafaxine and paroxetine have data supporting use for the treatment of vasomotor hot flashes.[24,25] Finally, imipramine may be used in the treatment of enuresis.[26]

Take-Home Point

Clinicians should be aware of FDA approvals and the evidence base supporting the use of antidepressants in patients with MDD, who are often complex and suffering with other medical and psychiatric comorbidities. Choosing agents with indications that match the patient’s comorbid symptoms is one way to tailor and individualize treatment to each patient.

Beyond the simple but labor-intensive task of delineating specific comorbidities and matching antidepressant indications lies the imperative to develop a more complex, individualized antidepressant treatment plan. If decisions were as simple as following FDA labels and basic algorithms, much psychiatric education could be eliminated. A review of antidepressant mechanisms of action will allow us to further distinguish these medications, allowing more individualized treatment of MDD.

SSRI Class

The first and most commonly prescribed class of antidepressant is the SSRI. At the most basic level, these medications increase serotonin in the synapse and ultimately down-regulate serotonin receptors. However, as the science behind these medications is further explored, there is much more to these agents. When looking at the SSRI class as a whole, and in comparison with other antidepressant classes, a few general characteristics can be considered. The SSRIs as a group are thought to have fewer side effects than most other classes of antidepressants, particularly the older drug classes. The most common and clinically relevant considerations for these medications are gastrointestinal upset, sexual side effects, and weight gain.[27] The following delineates some of the subtle differences among the medications in this class and describes the benefits and drawbacks of each to help refine treatment selection.

Citalopram. Citalopram is one of the most widely used antidepressants today and has a few properties that make it desirable. The medication has a long half-life of 23-45 hours, second only to fluoxetine,[2] and it is typically well tolerated in medically ill patients and the elderly.[19,28] Citalopram has weak H1 receptor antihistamine properties, which may provide anxiolytic and mildly sedating effects.[27] Citalopram is made up of 2 mirror-image enantiomers, each of which has different properties[27] that may lead to some inconsistency in the medication’s effects at lower doses. Citalopram is a weak inhibitor of CYP 2D6, with minimal drug-drug interactions.[30] Finally, recent FDA warnings have changed prescribing practices of this medication because of potential QTc prolongation at daily doses higher than 40 mg[29]; daily doses of 60 mg should no longer be used.

Benefits. Citalopram is a well-tolerated medication with mild antihistamine effects that may help with insomnia or mild anxiety. Its longer half-life results in fewer withdrawal or discontinuation side effects.[31]

Drawbacks. Its structural enantiomers make the medication’s effects less predictable at lower doses, and higher doses run contrary to FDA recommendations because of the potential for QTc prolongation. It has fewer FDA approvals for comorbid psychiatric disorders than other drugs in the SSRI class; as discussed earlier, this may simply reflect the manufacturer’s failure to seek approval for other indications.

Escitalopram. In contrast to the parent drug citalopram, escitalopram contains only the left-handed (S) enantiomer.[27] This removes much of the antihistamine and CYP 2D6 inhibitory properties[19,27] and results in more effective and predictable dose responses at lower doses.

Benefits. Escitalopram has the benefit of better tolerability with fewer drug interactions. It may be less sedating, and it is approved for generalized anxiety disorder as well as MDD.[2]

Drawbacks. Currently this is the only SSRI still on patent, and it is thus more expensive than the other, generic SSRIs.

Fluoxetine. The first member of the SSRI class, fluoxetine has a few characteristics that make it desirable. Fluoxetine has mild serotonin 2C receptor antagonist actions. This may result in the disinhibition of dopamine and norepinephrine release in the prefrontal cortex, which likely helps to improve concentration, energy, and executive functioning.[19,27] The serotonin 2C effects may also contribute to the drug’s initial anorexic and ongoing anti-bulimic effects.[27] More recently, fluoxetine’s effects on the serotonin system have been combined with those of olanzapine, a second-generation antipsychotic, for the treatment of depression in patients with bipolar disorder and for treatment-resistant unipolar depression.[19,27] Fluoxetine also may be a mild norepinephrine reuptake inhibitor, particularly at higher doses.

Fluoxetine significantly affects CYP 2D6 and 3A4 inhibition, and thus is highly likely to interact with other medications.[19,27] Finally, this medication has the longest half-life of the SSRIs, at 2-3 days, with an active metabolite that exists for 2 weeks.[2]

Benefits. Fluoxetine has action at the serotonin 2C receptor and may affect norepinephrine levels at higher doses. The drug has the longest half-life among the SSRIs, making it the least likely to cause withdrawal. It is available in a once-weekly dosing formulation and is approved for MDD, panic disorder, premenstrual dysphoric disorder, obsessive-compulsive disorder, and bulimia nervosa.[2] It also combines well with the second-generation antipsychotic olanzapine, and a combination formulation has been approved by the FDA for treating treatment-resistant and bipolar depression.*[19]

Drawbacks. The medication is likely to be activating in some patients, making it a more difficult option for those with insomnia, agitation, or intense anxiety.[19,27] Slower dose titration is warranted in these cases. Fluoxetine has a high degree of CYP 2D6 inhibition, resulting in significant drug-drug interactions.[19]

*Multiple trials of other second generation antipsychotics combined with various antidepressants including SSRI and SNRI have shown antidepressant efficacy for these combinations in patients with refractory depression.[32]
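The link this article repeatedly draws between half-life and withdrawal follows from first-order elimination: the drug level halves once per half-life after the last dose. A minimal sketch, assuming approximate half-lives of roughly 21 hours for paroxetine and roughly 60 hours (2-3 days) for fluoxetine, and ignoring fluoxetine’s long-lived active metabolite:

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of the steady-state drug level left t hours after the
    last dose, assuming simple first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# Illustrative, approximate half-lives (assumptions, not label values):
for name, t_half in [("paroxetine", 21), ("fluoxetine", 60)]:
    left = fraction_remaining(72, t_half)  # 3 days after the last dose
    print(f"{name}: {left:.0%} of steady-state level remaining")
```

Three days after stopping, roughly 9% of a paroxetine level remains versus about 44% for fluoxetine, consistent with the gentler discontinuation profile of the longer half-life drug.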

Paroxetine. The action of paroxetine is more complex than that of the previously described SSRIs. In addition to serotonin reuptake inhibition, paroxetine has mild anticholinergic properties, mild norepinephrine reuptake inhibition (NRI), inhibition of nitric oxide synthetase, and potent inhibition of CYP 2D6 (similar to fluoxetine).[19,27] Its anticholinergic and antihistaminergic properties may lend it calming and sedating effects, but may also cause dry mouth, blurred vision, and short-term memory problems.[19,27] The NRI effects may contribute to clinical effectiveness, while the effects on nitric oxide synthetase may cause sexual dysfunction.

Benefits. In addition to major depression, paroxetine is approved for various anxiety disorders and has possible calming/sedating effects. It is available in immediate- and slow-release preparations.

Drawbacks. Paroxetine has the potential for anticholinergic side effects.[31] Its shorter half-life may result in more frequent and more severe withdrawal side effects than other SSRIs, and paroxetine is the SSRI most strongly associated with weight changes.[2] This medication also has a higher probability of drug-drug interactions.

Sertraline. This SSRI may have dual mechanisms that distinguish it from other SSRIs. At higher doses, it acts as both a dopamine transporter inhibitor and a sigma-1 receptor binder.[27] Dopamine transporter inhibition may result in improved energy, motivation, and concentration. The implications of sigma-1 binding are not yet well understood, but it is hypothesized to contribute mild anxiolytic effects in psychotic and delusional depression.[27]

Benefits. Sertraline is approved for MDD, many anxiety disorders, eating disorders, and premenstrual dysphoric disorder.[2] This medication has very little CYP 2D6 inhibition and therefore few drug-drug interactions.[19]

Drawbacks. Sertraline has a moderate half-life and thus the possibility of some withdrawal symptoms. It can be activating in patients with anxiety disorders, which may require slow dose titration, and it is often associated with gastrointestinal distress.

Take-Home Point

The SSRI class is considered a homogeneous class of antidepressants because all are held to the same standard of passing FDA regulatory norms. However, a pharmacodynamic look into their wider mechanisms of action may suggest that each drug is actually different in ways that may foster unique advantages or disadvantages for any given patient. This type of finding would not be apparent in a typical 300-subject regulatory trial, but is often noted in clinical practice, where the sample size comprises the one unique subject that the clinician is treating.

SNRI Class

The next most common class of medications used for the treatment of MDD is the SNRI. This group has a dual mechanism of action, increasing synaptic norepinephrine as well as serotonin.[19,27] In addition to raising norepinephrine and serotonin levels throughout the brain, these medications may also boost dopamine in the prefrontal cortex, yielding additional benefits.[27] The prefrontal cortex has few dopamine transporters to recycle dopamine out of the synapse; norepinephrine transporters normally remove dopamine in these areas, so inhibiting them makes the dopamine effect in the dorsolateral prefrontal cortex more robust.[27] This activation has been correlated with antidepressant effects.

The additional norepinephrine boost, however, is not confined to the brain. Norepinephrine effects are seen throughout the body, including the spinal cord, peripheral autonomic nervous system, heart, and bladder.[19,27] In the spinal cord this may reduce pain, but it may also lead to side effects such as tremor, motor activation, and increased blood pressure and heart rate.[27] These effects may also produce a pseudo-anticholinergic picture of dry mouth, constipation, and urinary retention. However, these norepinephrine-related side effects do not rival those of the tricyclic antidepressant class.[31] Generally, the SNRIs are well tolerated, but the subtle increase in side-effect burden needs to be considered.

Venlafaxine. Venlafaxine was the first SNRI and was initially approved in an immediate-release preparation. This medication is a substrate of CYP 2D6 and is converted into desvenlafaxine, an SNRI that was developed subsequently.[19,27] Unfortunately, the absorption of immediate-release venlafaxine is rapid, producing notable side effects; this has been mitigated with an extended-release formulation that appears to be much better tolerated in practice. The medication also has a unique, dose-dependent ratio of serotonin to norepinephrine effects.[19,27] At low doses, serotonin reuptake inhibition predominates; only at higher doses do the norepinephrine transporter inhibition properties increase more robustly.

Benefits. Compared with the SSRIs, this medication has effects at both the serotonin and norepinephrine transporters, contributing to its antidepressant effectiveness. The medication is very effective in the treatment of anxiety disorders, with multiple approved uses, likely comparable to sertraline and paroxetine.[2]

Drawbacks. The norepinephrine effects of the medication become robust only at higher doses, which must be titrated. The medication has a short half-life, resulting in many withdrawal side effects, and there may be higher rates of nausea and dry mouth in comparison with some other antidepressants.[31] This medication may cause hypertension in some patients, so blood pressure should be monitored.[19]

Desvenlafaxine. Desvenlafaxine is the active metabolite of venlafaxine,[19] and has the added benefit of a greater effect on norepinephrine transporter inhibition than its precursor at initial dose levels, although its effects on norepinephrine remain less than those on serotonin.[27] Because it is the active metabolite of venlafaxine, it is less subject to genetic and drug-induced variability in CYP 2D6 activity, which allows more consistent plasma levels of the medication.[27] It may be one of the “cleanest” antidepressant medications, given its very low vulnerability to cytochrome P450 metabolism, its renal excretion, and its low protein binding. The role of desvenlafaxine in the regulation of vasomotor symptoms (night sweats, hot flashes, insomnia, and related depression) in perimenopausal women is being investigated.[27]

Benefits. Although similar to extended-release venlafaxine, desvenlafaxine has a more balanced ratio of norepinephrine to serotonin properties, and it has one of the most favorable drug-drug interaction profiles.

Drawbacks. This medication has a short half-life and significant withdrawal side effects.[31]

Duloxetine. Duloxetine is unique among the SNRI class of drugs because, in addition to MDD, it is approved for treating a variety of pain syndromes.[2] This is related to the SNRI effect on the descending spinal norepinephrine pathways that reduce afferent pain fiber activity.[27] The increase in norepinephrine activity in spinal areas results in less thalamic input to the sensory cortex and therefore less perceived pain. The norepinephrine-facilitating effects in the prefrontal cortex also may show some benefit in treatment of cognitive symptoms prevalent in geriatric depression.[27] Compared with venlafaxine, duloxetine has a lower incidence of treatment-related hypertension and milder withdrawal reactions. It is approved for MDD, generalized anxiety disorder, musculoskeletal pain, neuropathic pain, and fibromyalgia-related pain.[2]

Benefits.One of the only antidepressants approved for management of pain syndromes, duloxetine also has a more balanced norepinephrine to serotonin ratio at its initial doses.[28]

Drawbacks. Duloxetine is a mild to moderate CYP 2D6 inhibitor, which results in some drug-drug interactions.[19] In addition, it should not be used in patients who abuse alcohol or in those with renal and/or hepatic impairment.

Take-Home Point

The SNRI class is considered a homogeneous class of antidepressants because all of its members have passed the same FDA regulatory standard. As with the SSRIs, a pharmacodynamic look into their wider mechanisms of action suggests that each drug differs in ways that may confer unique advantages or disadvantages for a given patient. This is clear when one considers the diverse FDA approvals for each agent and their different potencies in facilitating distinct ratios of serotonin to norepinephrine transporter inhibition. Again, this type of finding would not be apparent in a typical 300-subject regulatory trial, but it is often noted in clinical practice, where the sample size comprises the one unique subject the clinician is treating.

TCA Class

This class is one of the oldest in the history of psychopharmacology and is still widely used; it includes amitriptyline, imipramine, clomipramine, desipramine, trimipramine, and nortriptyline. The TCAs are often overlooked because of their relatively high level of side effects compared with other antidepressant classes, and because of their high lethality in overdose. The TCAs have significant effects on norepinephrine, serotonin, and, to some extent, dopamine activity in the brain.[19,27] The higher incidence of side effects is likely mediated through blockade of muscarinic cholinergic receptors (M1/M3), histamine receptors (H1), alpha-1 adrenergic receptors, and voltage-sensitive sodium channels.[19,27] Histamine blockade causes sedation and weight gain. Muscarinic blockade causes dry mouth, blurred vision, urinary retention, and constipation. Alpha-1 blockade causes orthostatic hypotension and dizziness. Sodium channel blockade affects the heart significantly, resulting in arrhythmias and conduction changes at higher doses.[27] This latter side effect creates a significant risk of death with overdose and renders TCAs difficult to use in medically comorbid patients.

Benefits. Overall, TCAs are very effective antidepressants. Indeed, early studies comparing TCAs with SSRIs found significantly higher remission rates with TCAs in inpatient samples and in patients with endogenous depression.[33-36] In less severely depressed patients, however, there is no conclusive evidence of the benefit of either class over the other. Off-label use of TCAs in the treatment of pain, enuresis, and insomnia is widespread.[19] The availability of plasma level monitoring helps to ensure adequate therapeutic trials while minimizing toxicity.

Drawbacks. The significant adverse event profile causes an array of side effects that are often poorly tolerated and lead to medication noncompliance. Because of cardiac side effects, TCAs carry significant risk of death with overdose.

MAOI Class

This class of antidepressants has its own unique mechanism of action. MAOIs have fallen into the realm of rarely used antidepressants in modern-day psychopharmacology because of the risks and side effects inherent to their use. On the other hand, MAOIs are among the most clinically powerful classes of antidepressant treatments. This class inhibits MAO enzyme subtypes A and B. The inhibition of these enzymes results in higher levels of serotonin and norepinephrine due to reduced catabolism of these neurotransmitters.[27] Moreover, by lowering MAO-B activity specifically, dopamine levels in the brain increase as well. Thus, all 3 monoamine neurotransmitter levels are robustly increased, which, in turn, affects a broad array of depressive symptoms.

These benefits, however, come at the cost of considerable difficulty of use. The most well-known drawback is that patients must maintain a diet free of high-tyramine foods or risk a hypertensive crisis related to the acute elevation of systemic norepinephrine, which may also result in stroke.[19,27] Foods to be avoided include tap beers, smoked meat or fish, fava beans, aged cheeses, sauerkraut, and soy. However, certain beers, wines, and cheeses are not contraindicated. These items need to be researched and discussed before starting a patient on the medication.

Drug-drug interactions are plentiful; combining an MAOI with other norepinephrine medications may increase blood pressure, and combining with a serotonin-based medication can cause serotonin syndrome.[19,27] Patients are also advised to avoid decongestants, stimulants, antidepressants, certain opioids, and appetite suppressants.[19,27]

The MAOI tranylcypromine may act similarly to an amphetamine in the frontal cortex, affording it some additional benefits.[27] Likewise, selegiline is broken down into an amphetamine metabolite. Selegiline is more often used for Parkinson disease than for depression.

Benefits. MAOIs are recognized as among the most potent antidepressants in monotherapy, with effects on serotonin, dopamine, and norepinephrine. This class of antidepressant is often used for the patient who is refractory to other antidepressant trials.

Drawbacks. The MAOIs are associated with risks of hypertensive crisis and serotonin syndrome. Patients must maintain a tyramine-free diet except when using low-dose transdermal selegiline. Because of the potential for drug-drug interactions, careful, ongoing monitoring of all additional medications (including over-the-counter medications) is essential.

Miscellaneous Antidepressants

Several other well-known antidepressant medications do not fit discretely into the 4 main antidepressant classes. The unique mechanisms of each are discussed below.

Bupropion. This norepinephrine-dopamine reuptake inhibitor (NDRI) is of particular use in a few subsets of patients. As the class name indicates, bupropion blocks norepinephrine transporter and dopamine transporter activity at a moderate level, likely in the frontal cortex.[27] The unique properties of bupropion as an antidepressant may be related to its lack of serotonin activity. It is approved for smoking cessation and is used off-label to reduce craving for substances of abuse. Clinicians contend that the dopamine actions of this medication help to improve the loss of positive affect in MDD; thus, it effectively increases joy, interest, pleasure, energy, enthusiasm, alertness, and self-confidence.[27] The norepinephrine and dopamine facilitation also helps patients with attention-deficit/hyperactivity disorder.[19]

Several cases of psychosis and paranoia have been reported in patients taking bupropion, likely related to the dopamine effects of the drug.[37] Limited data suggest that this medication, like all antidepressants, may activate depressed patients with bipolar disorder, causing manic episodes. However, it is widely accepted that bupropion and the SSRI class may be less likely to activate mania compared with the TCA class of medications. Because it does not act on serotonin, this is one of the few antidepressants that does not cause sexual side effects or weight gain.[19,27] The medication is uniquely approved for the treatment of seasonal affective disorder.[2]

Benefits. Bupropion is indicated for the treatment of MDD, seasonal affective disorder, and nicotine dependence. It has very low sexual and weight gain side effect liability.

Drawbacks. Bupropion has limited serotonin activity and less evidence for the treatment of anxiety. It also lowers the seizure threshold in predisposed patients (including those with eating disorders and those with epilepsy).

Trazodone. Trazodone is a serotonin antagonist/reuptake inhibitor (SARI). It blocks serotonin 2A and 2C receptors and also acts as a mild serotonin reuptake inhibitor.[19,27] This medication typically is used at lower doses because it is also a strong antihistamine (H1) and alpha-1 adrenergic blocker. The blockade of these receptors causes significant sedation, which may help with insomnia but may cause excessive somnolence and dizziness in the daytime. The blockade of serotonin receptors also may explain trazodone’s properties as a hypnotic, providing more efficient sleep.[27] Although higher doses provide excellent benefit related to the synergistic effects of serotonin 2A and 2C blockade combined with serotonin reuptake inhibition, this medication is not typically given in full divided doses because of excessive side effects.[19,27] A new slow-release preparation has been approved to allow a better-tolerated, full dose range.

Benefits. Trazodone is often called a sedating antidepressant. It helps insomnia, improves sleep efficiency, and is effective even at low doses. Sexual and activating side effects are low.[19,27]

Drawbacks. Significant sedation may limit its use.

Mirtazapine. This medication is also considered sedating and is typically either avoided or sought out because of its side effect profile. Side effects include sedation/hypnotic effects and appetite stimulation, but not sexual side effects. The lack of sexual side effects is again related to serotonin: mirtazapine is not a serotonin reuptake inhibitor but instead acts as a serotonin 2A/2C receptor antagonist.[19,27] The blockade of these receptors may result in more dopamine and norepinephrine release in the prefrontal cortex. Histamine (H1) blockade results in sedation, anxiolytic/hypnotic effects, and weight gain.[19,27] Mirtazapine also acts as a 5HT3 receptor antagonist, reducing gastrointestinal problems.[19,27] The primary mechanism of antidepressant action is antagonism of alpha-2 norepinephrine autoreceptors; blockade of these autoreceptors disinhibits norepinephrine release, with downstream effects on several pathways that may increase overall release of serotonin and norepinephrine. Mirtazapine is often combined with an SNRI to obtain synergistic effects.[27]

Benefits. Mirtazapine has many unique mechanisms of action that make it beneficial in particular populations. It lacks sexual side effects, reduces gastrointestinal upset, and is not activating. Its sedating qualities are typically used to the patient’s benefit.

Drawbacks. Mirtazapine has significant weight gain/appetite stimulation effects, which could lead to metabolic disorders.

This review is both practical and factual. Clinicians ideally should be aware of regulatory approvals and their appropriate use in certain patient populations. When medications are used this way, clinicians may expect results comparable to those noted in the evidence base of regulatory trials. However, those who treat patients understand that not all patients are identical to those enrolled in research trials. What follows provides some practical clinical approaches for when responses do not meet expectations.

As noted, only one third of patients will fully remit on their first antidepressant trial.[38] These numbers hold true for patients treated with moderate- to high-dose SSRIs for as long as 12 weeks. In clinical practice, patients may not receive such rigorous dosing, and failure rates are likely higher. What approaches should be taken when a patient is not responding to treatment?

Adherence and Dosing

First, ask about and attempt to ensure adherence to the antidepressant treatment. This questioning should be nonjudgmental and empathic, as most patients will likely say they are compliant even when they are not. Suggesting that most people naturally miss a few doses, and that you as the clinician are just checking up, will often defuse the situation. As dosing becomes divided throughout the day and polypharmacy increases, compliance usually diminishes, making assessment of adherence to the medical regimen even more important.

Tolerability

An important area to address to improve adherence to a regimen relates to side effects and antidepressant tolerability. Sometimes patients stop taking their antidepressant or fail to escalate the dose as advised when adverse effects are not well tolerated. Many mild side effects dissipate over time, and this should be discussed directly with the patient.[39] Patients should be instructed to inform prescribers of any moderate to severe side effects so that the drug can be safely stopped. Patients should also be told that there are many antidepressants with different side effect profiles.[2,39] For example, SSRIs, SNRIs, and NDRIs may be activating, and thus cause insomnia or nervousness upon initiation of treatment. Such patients may be switched to a SARI or a noradrenergic antagonist-selective serotonin antagonist product, as these tend to be less activating and more sedating.[2]

Some patients may experience drug-drug interactions, depending upon their genetic make-up.[2] Switching away from hepatically inhibiting medications toward medications less likely to interact with other drugs may be warranted. Typical side effects such as headaches, stomachaches, or insomnia often can be treated very effectively with over-the-counter or prescription medications. Later-onset side effects such as weight gain or sexual dysfunction may be more difficult to mitigate or treat. Open discussions with patients about these longer-term risks are warranted because patients often have to stay on their antidepressants for a year or more to maintain remission and avoid a depressive relapse.[38] Because certain antidepressants have more or less favorable weight or sexual side effect profiles, they should be chosen based on a discussion of patient preference when possible.

Assuming adherence is adequate, the next step is to confirm that the antidepressant dose is at the moderate to high end of the approved range and has been taken for at least 4 to 6 weeks. If dosing is confirmed to be reasonable, consider a final dose maximization or a switch to a new antidepressant monotherapy.[39]

Switching Monotherapies

If it is necessary to switch monotherapies, no clear benefit has been attributed to any particular strategy.[38] Many experts agree, however, that a switch away from an SSRI is warranted if fully dosed SSRI therapy has failed to improve the patient’s symptoms.[27,39] The theoretical implication is that the patient’s depressive symptoms have already been treated with aggressive serotonergic facilitation and that repeating this mechanism may not be fruitful; pharmacodynamically, the depression may not be entirely serotonin-based in its etiology.[27,39] Given this, a cross-titration onto an SNRI such as venlafaxine XR or duloxetine, an NDRI such as bupropion XL, a noradrenergic antagonist-selective serotonin antagonist such as mirtazapine, or a more aggressive serotonergic facilitating agent such as a SARI (trazodone ER) or a serotonin partial agonist-reuptake inhibitor (vilazodone) theoretically may be warranted.[2]

One final concern regarding switching involves the use of generic vs brand-name drugs. The FDA ensures that the bioavailability of a generic is between approximately 20% weaker and 20% stronger than that of its brand-name counterpart.[40,41] Most generics are highly comparable, but when a patient changes from one generic to another, bioavailability could shift from 20% stronger to 20% weaker, and symptom relapse may occur. By contrast, going from a weaker to a stronger generic might actually improve depression outcomes, but it may also create new-onset side effects after months of stable treatment because the newer, more potent preparation raises blood levels higher than before. These events should be monitored and dosing adjusted as needed.
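The ±20% window described above implies a sizable worst-case swing when a pharmacy substitutes one generic for another. As a rough illustration of the arithmetic only (the helper function below is hypothetical, not drawn from any cited source, and this is not the FDA’s formal bioequivalence statistic):

```python
# Illustrative arithmetic for the ~±20% generic bioavailability window
# described above. Fractions are exposure relative to the brand (1.0 = 100%).

def relative_exposure_change(old_fraction: float, new_fraction: float) -> float:
    """Percent change in drug exposure when switching products."""
    return (new_fraction - old_fraction) / old_fraction * 100.0

# Worst case: a 20%-stronger generic swapped for a 20%-weaker one
drop = relative_exposure_change(1.20, 0.80)   # roughly a one-third drop
# Reverse switch: 20% weaker -> 20% stronger
rise = relative_exposure_change(0.80, 1.20)   # a 50% increase

print(f"strong->weak: {drop:.1f}%, weak->strong: {rise:+.1f}%")
```

This back-of-the-envelope calculation shows why a patient stable on one generic can relapse (exposure falls by roughly a third) or develop new-onset side effects (exposure rises by half) after a generic-to-generic switch, even though each product individually met the equivalence standard against the brand.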

Finally, a generic drug may possess a different slow-release mechanism than the parent brand-name drug. Oftentimes the generic, despite being a slow-release preparation itself, releases active drug more quickly than the original brand’s slow-release technology. There may be no evidence of a clinical problem; however, some patients may develop side effects with the faster-release preparation. In this case, the dose may need to be lowered while monitoring for relapse, or a switch back to the brand-name slow-release product may be warranted.

In conclusion, this article seeks to identify treatments that match patients with MDD and their common comorbidities as a first-line approach to MDD management. Secondarily, and more theoretically, patients’ MDD symptoms may be treated more effectively if clinicians are aware of the neurotransmitters and receptors that each antidepressant modulates. Finally, patients may face problems of nonefficacy, noncompliance, and poor tolerability; each patient is unique, and these clinical situations may interfere with optimal depression outcomes. Each patient must be educated and given informed consent about the myriad effective antidepressant treatment options available.

Supported by an independent educational grant from Valeant Pharmaceuticals.

References:

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Text revision. Washington, DC: American Psychiatric Association; 2000.
  2. Stahl SM. Essential Psychopharmacology: The Prescriber’s Guide. Cambridge, Mass: Cambridge University Press; 2005.
  3. FDA Package Insert. Pristiq. Pfizer Inc. 2011.
  4. FDA Package Insert. Viibryd. Forest Laboratories, Inc. 2011.
  5. Davidson J, Baldwin D, Stein DJ, et al. Treatment of posttraumatic stress disorder with venlafaxine extended release: a 6-month randomized controlled trial. Arch Gen Psychiatry. 2006;63:1158-1165. Abstract
  6. Bandelow B, Zohar J, Hollander E, et al. World federation of societies of biological psychiatry (WFSBP) guidelines for the pharmacological treatment of anxiety, obsessive-compulsive and post-traumatic stress disorders — first revision. World J Biol Psychiatry. 2008;9:248-312. Abstract
  7. Benedek DM, Friedman MJ, Zatzick D, et al. Guideline watch (March 2009): practice guideline for the treatment of patients with acute stress disorder and posttraumatic stress disorder.
  8. Phelps NJ, Cates ME. The role of venlafaxine in the treatment of obsessive-compulsive disorder. Ann Pharmacother. 2005;39:136-140. Abstract
  9. Gartlehner G, Hansen RA, Reichenpfader U, et al. Drug class review: second-generation antidepressants: final update 5 report [Internet]. Portland, Ore: Oregon Health & Science University; March 2011.
  10. Effexor Prescribing Information. http://labeling.pfizer.com/showlabeling.aspx?id=100
  11. FDA Package Insert. Cymbalta. Lilly USA LLC. 2004/2011.
  12. FDA Package Insert. Silenor. Somaxon Pharmaceuticals Inc. 1969.
  13. Hsu ES. Acute and chronic pain management in fibromyalgia: updates on pharmacotherapy. Am J Ther. 2011;18:487-509. Abstract
  14. Verdu B, Decosterd I, Buclin T, et al. Antidepressants for the treatment of chronic pain. Drugs. 2008;68:2611-2632. Abstract
  15. Prince JB, Wilens TE, Biederman J, et al. A controlled study of nortriptyline in children and adolescents with attention deficit hyperactivity disorder. J Child Adolesc Psychopharmacol. 2000;10:193-204. Abstract
  16. Pliszka SR. Non-stimulant treatment of attention-deficit/hyperactivity disorder. CNS Spectr. 2003;8:253-258. Abstract
  17. Wilens TE, Prince JB, Spencer T, et al. An open trial of bupropion for the treatment of adults with attention-deficit/hyperactivity disorder and bipolar disorder. Biol Psychiatry. 2003;54:9-16. Abstract
  18. Olvera RL, Pliszka SR, Luh J, et al. An open trial of venlafaxine in the treatment of attention-deficit/hyperactivity disorder in children and adolescents. J Child Adolesc Psychopharmacol. 1996;6:241-250. Abstract
  19. Sadock BJ, Sadock VA. Kaplan and Sadock’s Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry. 10th ed. Philadelphia, Pa: Lippincott Williams and Wilkins; 2007:977-1126.
  20. Prochazka AV, Kick S, Steinbrunn C, et al. A randomized trial of nortriptyline combined with transdermal nicotine for smoking cessation. Arch Intern Med. 2004;164:2229-2233. Abstract
  21. FDA Package Insert. Wellbutrin XL. GlaxoSmithKline. 2008.
  22. McIntyre RS, Mancini DA, McCann S, et al. Topiramate versus bupropion SR when added to mood stabilizer therapy for the depressive phase of bipolar disorder: a preliminary single-blind study. Bipolar Disord. 2002;4:207-213. Abstract
  23. Coleiro B, Marshall SE, Denton CP, et al. Treatment of Raynaud’s phenomenon with the selective serotonin reuptake inhibitor fluoxetine. Rheumatology. 2001;40:1038-1043. Abstract
  24. Stearns V, Beebe KL, Iyengar M, et al. Paroxetine controlled release in the treatment of menopausal hot flashes: a randomized controlled trial. JAMA. 2003;289:2827-2834. Abstract
  25. Evans ML, Pritts E, Vittinghoff E, et al. Management of postmenopausal hot flushes with venlafaxine hydrochloride: a randomized, controlled trial. Obstet Gynecol. 2005;105:161-166. Abstract
  26. Muller D, Roehr CC, Eggert P. Comparative tolerability of drug treatment for nocturnal enuresis in children. Drug Saf. 2004;27:717-727. Abstract
  27. Stahl SM. Stahl’s Essential Psychopharmacology: Neuroscientific Basis and Practical Applications. 3rd ed. Cambridge, Mass: Cambridge University Press; 2008:511-666.
  28. Spina E, Scordo MG. Clinically significant drug interactions with antidepressants in the elderly. Drugs Aging. 2002;19:299-320. Abstract
  29. FDA Package Insert. Celexa. Forest Laboratories, Inc. 2010/2011.
  30. Spina E, Santoro V, D’Arrigo C. Clinically relevant pharmacokinetic drug interactions with second-generation antidepressants: an update. Clin Ther. 2008;30:1206-1227. Abstract
  31. Cipriani A, Furukawa TA, Salanti G, et al. Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis. Lancet. 2009;373:746-758. Abstract
  32. Nelson JC, Papakostas GI. Atypical antipsychotic augmentation in major depressive disorder: a meta-analysis of placebo-controlled randomized trials. Am J Psychiatry. 2009;166:980-991. Abstract
  33. Danish University Antidepressant Group. Citalopram: clinical effect profile in comparison with clomipramine. A controlled multicenter study. Psychopharmacology (Berl). 1986;90:131-138. Abstract
  34. Danish University Antidepressant Group. Paroxetine: a selective serotonin reuptake inhibitor showing better tolerance, but weaker antidepressant effect than clomipramine in a controlled multicenter study. J Affect Disord. 1990;18:289-299. Abstract
  35. Roose SP, Glassman AH, Attia E, Woodring S. Comparative efficacy of selective serotonin reuptake inhibitors and tricyclics in the treatment of melancholia. Am J Psychiatry. 1994;151:1735-1739. Abstract
  36. Beasley CM Jr, Holman SL, Potvin JH. Fluoxetine compared with imipramine in the treatment of inpatient depression. A multicenter trial. Ann Clin Psychiatry. 1993;5:199-207. Abstract
  37. Bailey J. Acute psychosis after bupropion treatment in a healthy 28-year-old woman. J Am Board Fam Med. 2008;21:244.
  38. Rush AJ, Trivedi MH, Wisniewski SR, et al. Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: a STAR*D report. Am J Psychiatry. 2006;163:1905-1917. Abstract
  39. Zajecka JM, Goldstein C. Combining medication to achieve remission. In: Schwartz T, Petersen T, eds. Depression: Treatment Strategies and Management. 2nd ed. New York: Informa; 2009.
  40. Park K, ed. Controlled Drug Delivery: Challenges and Strategies. Washington, DC: American Chemical Society; 1997.
  41. Orange book annual preface, statistical criteria for bioequivalence. In: Approved Drug Products with Therapeutic Equivalence Evaluations. 29th ed. US Food and Drug Administration Center for Drug Evaluation and Research; 2009-06-18, update 3-01-11. http://www.fda.gov/Drugs/DevelopmentApprovalProcess/ucm079068.htm

Retrieved from:

Tailoring Antidepressant Treatment: Factors to Individualize Medication Selection Thomas L. Schwartz, MD; Daniel Uderitz, MD

In the realm of psychopharmacology, we often declare medications within their respective therapeutic classes to be equal. This is a byproduct of the way medications achieve their indications for treatment of specific psychiatric disorders. In the case of antidepressant treatments, the US Food and Drug Administration (FDA) essentially holds that if a study shows a majority of patients improving by 50% compared with placebo, then a drug may become an approved antidepressant treatment. There are no standards for differentiating antidepressant treatments beyond this. Clinicians often note that all antidepressant treatments are not created equal, especially when applied to clinical situations and to patients who are often complex and have comorbid conditions. The goal of this article is to sort out regimens that may convey certain advantages in an individualized manner. This involves conceptualizing and utilizing monotherapies, combination therapies, and adjunctive treatments.

Monotherapies

The first-line treatment of patients with major depressive disorder (MDD) should start with an aggressive monotherapy. This occurs in clinical practice and is supported by many guidelines and reviews. The various antidepressant medications have unique properties that can be used to individualize treatments. Most psychiatrists can easily name their “favorite” antidepressant to use in certain situations. This is sometimes based on a simple bias, but often has evidence to back up clinical practice. Let us start with the mechanistically simple and move toward more complex ways to think about these medications. This includes thinking about FDA approvals, available guidelines, comorbidities, side effects, and more complex pharmacodynamic receptor-based neuropsychiatry.

A patient rarely comes to a psychiatrist without having a combination of psychiatric symptoms. Typically, clinicians screen patients and find that they often meet criteria for more than 1 disorder in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR).[1] At a minimum, the individual patient raises suspicion for various problem areas, even without meeting criteria for a specific disorder. In reviewing FDA guidelines, clinicians may quickly make simple decisions regarding treatment regimens that are more individualized based on these comorbidities and predominant symptoms. Of note, a lack of FDA approval for various indications does not necessarily mean that evidence does not support efficacy for other disorders. For example, the manufacturer may not have pursued FDA approval for other indications, or may have decided not to support randomized controlled trials to study another indication.

Single Indication

The first group of antidepressants approved by the FDA for the single indication of MDD includes amitriptyline, citalopram, desipramine, desvenlafaxine, mirtazapine, nortriptyline, protriptyline, trazodone, trimipramine, vilazodone, and the monoamine oxidase inhibitor (MAOI) class.[2-4] Clinicians should know that these medications have only the 1 indication, and this clearly supports their use in MDD. However, many practitioners recognize that there are multiple other factors that allow these medications to be used in an off-label manner for various individuals. In a pure model, these antidepressants have regulatory data suggesting use only in patients with MDD but, as discussed, a lack of approval for other indications does not necessarily indicate a lack of supportive data or lack of efficacy.

Multiple Indications

Unlike those listed above, many antidepressants have other labeled or approved indications. These span a variety of comorbidities including anxiety disorders, seasonal affective disorder, sleep disorders, pain disorders, premenstrual dysphoric disorder, bulimia nervosa, and other miscellaneous indications. Given this, and assuming MDD is often complicated by comorbidity, let us evaluate a few comorbidities where data-driven decisions may help in individualizing treatments in patients who are depressed and simultaneously experience other psychiatric conditions.

Posttraumatic Stress Disorder

Patients with posttraumatic stress disorder often have comorbid depression. Only 2 antidepressants, the selective serotonin reuptake inhibitors (SSRIs) sertraline and paroxetine, are approved for this indication.[2] Multiple other medications, however, have been recognized as effective off-label treatments for posttraumatic stress disorder; these include amitriptyline, fluoxetine, fluvoxamine, imipramine, and venlafaxine.[5-7] If a patient presents with both MDD and posttraumatic stress disorder, these antidepressants may be considered if necessary to achieve efficacy for both conditions.

Obsessive-Compulsive Disorder

Several medications are approved for obsessive-compulsive disorder, including the tricyclic antidepressant (TCA) clomipramine and the SSRIs fluoxetine, fluvoxamine, paroxetine, and sertraline.[2] Venlafaxine, a serotonin norepinephrine reuptake inhibitor (SNRI),[8] and the SSRI citalopram have shown some promise in obsessive-compulsive disorder,[9] but have not yet received that indication from the FDA.

Panic Disorder

The SSRIs fluoxetine, paroxetine, and sertraline are approved for treatment of panic disorder, as is the SNRI venlafaxine.[2,10] Other antidepressants with an evidence base for use, but without FDA approval, include the TCAs clomipramine and imipramine, and the SSRI fluvoxamine.[6]

Anxiety Disorders

Social anxiety disorder. The SSRIs paroxetine and sertraline and the SNRI venlafaxine extended-release (ER) have been approved for the treatment of social anxiety disorder.[2] The SSRI fluoxetine is sometimes used off-label for this indication.

Generalized anxiety disorder. Four antidepressants are indicated for the treatment of generalized anxiety disorder: the SSRIs escitalopram and paroxetine, and the SNRIs duloxetine and venlafaxine ER.[2,11]

Insomnia

Although sleep difficulties are a nearly universal symptom of depression, few antidepressants have an official indication for insomnia. Doxepin, a TCA, is the sole antidepressant labeled with this indication, when it is used at subtherapeutic antidepressant doses of 3 to 6 mg per day.[12] However, clinicians often use sedating antidepressants to induce sleep in those patients with MDD and insomnia (Schwartz TL. Novel hypnotics: moving beyond positive allosteric modulation of the GABA-A receptor. Manuscript submitted). These medications include the TCA amitriptyline, the tetracyclic mirtazapine, and the serotonin modulator trazodone.

Pain Syndromes

Duloxetine, an SNRI, is the only antidepressant with official indications for treatment of pain syndromes.[2,10] These include chronic musculoskeletal pain, neuropathic pain (diabetic neuropathy in particular), and fibromyalgia. Many of the TCAs, as well as other SNRIs, have also been studied for the treatment of pain syndromes, primarily neuropathic or chronic pain conditions.[13,14] Amitriptyline is also often used for migraine headaches. Unfortunately, these other medications have not received official indications for these psychosomatic comorbidities.

Attention-Deficit/Hyperactivity Disorder

Some antidepressants have shown promise for the treatment of attention-deficit/hyperactivity disorder, but not enough to warrant a specific FDA indication. Nonetheless, these medications are used for the treatment of attention-deficit/hyperactivity disorder, particularly in patients with substance use disorder. Bupropion, desipramine, imipramine, nortriptyline, and venlafaxine have some evidence base to support their use.[15-19]

Other Comorbid Considerations

Premenstrual dysphoric disorder. The SSRIs fluoxetine, paroxetine, and sertraline have been FDA approved for the treatment of premenstrual dysphoric disorder.[2]

Smoking cessation. Many patients who receive mental health treatment are also addicted to nicotine. Bupropion SR has received the indication for nicotine addiction.[2] Nortriptyline also has been shown to be helpful for smoking cessation efforts, but has not received an official indication.[20]

Miscellaneous. Bupropion XL carries a specific indication for prophylaxis of seasonal affective disorder and often is used off-label for the treatment of bipolar depression.[19,21,22] Fluoxetine is indicated for treatment of bulimia nervosa and sometimes is used for the treatment of Raynaud’s phenomenon.[2,19,23] Venlafaxine and paroxetine have data supporting use for the treatment of vasomotor hot flashes.[24,25] Finally, imipramine may be used in the treatment of enuresis.[26]

Take-Home Point

Clinicians should be aware of FDA approvals and the evidence base supporting the use of antidepressants in patients with MDD, who are often complex and suffering with other medical and psychiatric comorbidities. Choosing agents with indications that match the patient’s comorbid symptoms is one way to tailor and individualize treatment to each patient.

Beyond the simplistic but labor-intensive task of delineating specific comorbidities and matching them to antidepressant indications lies the imperative to develop a more complex, individualized antidepressant treatment plan. If it were as simple as following FDA labels and simple algorithms, much psychiatric education could be eliminated. A review of antidepressant mechanisms of action will allow us to further distinguish these medications, allowing more individualized treatment of MDD.

SSRI Class

The first and most commonly prescribed class of antidepressants is the SSRI class. At the most basic level, these medications increase serotonin in the synapse and ultimately down-regulate serotonin receptors. However, as the science behind these medications is further explored, there is much more to these agents. When looking at the SSRI class as a whole, and in comparison with other antidepressant classes, a few general characteristics can be considered. The SSRIs as a group are thought of as having fewer side effects than most other classes of antidepressants, particularly the older drug classes. The most common and clinically relevant considerations for these medications are gastrointestinal upset, sexual side effects, and weight gain.[27] The following delineates some of the subtle differences for each medication in this class and describes the benefits and drawbacks of treatment with each to help refine treatment selection.

Citalopram. Citalopram is one of the most widely used antidepressants today, and has a few properties that make it desirable. The medication has a long half-life of 23-45 hours, second only to fluoxetine,[2] and it is typically well tolerated in medically ill patients and the elderly.[19,28] Citalopram has weak H1 receptor antihistamine properties, which provide anxiolytic and beneficially sedating effects.[27] Citalopram comprises 2 mirror-image enantiomers, each of which has different properties[27] that may lead to some inconsistency in the medication’s effects at lower doses. Citalopram is a weak inhibitor of CYP 2D6, with minimal drug-drug interactions.[30] Finally, recent FDA warnings have changed prescribing practices for this medication because of potential QTc prolongation at daily doses higher than 40 mg[29]; daily doses of 60 mg should no longer be used.

Benefits. Citalopram is a well-tolerated medication with mild antihistamine effects that may help with insomnia or mild anxiety. The longer half-life results in fewer withdrawal or discontinuation side effects.[31]

Drawbacks. Its mixed enantiomers give this medication less predictable effects at lower doses, and higher doses are contrary to FDA recommendations because of the potential for QTc prolongation. It has fewer FDA approvals for comorbid psychiatric disorders than other drugs in the SSRI class; as discussed earlier, this may simply reflect the manufacturer’s failure to seek approval for other indications.

Escitalopram. In contrast to the parent drug citalopram, escitalopram contains only the S-enantiomer.[27] This removes much of the antihistamine and CYP 2D6 inhibitory properties[19,27] and results in more effective and predictable dose responses at lower doses.

Benefits. Escitalopram has the benefit of better tolerability with fewer drug interactions. It may be less sedating, and is approved for generalized anxiety disorder as well as MDD.[2]

Drawbacks. Currently this is the only SSRI still on patent, and it is thus more expensive than the other, generic SSRIs.

Fluoxetine. The first member of the SSRI class, fluoxetine has a few characteristics that make it desirable. Fluoxetine has mild serotonin 2C receptor antagonistic actions. This may result in the disinhibition of dopamine and norepinephrine release to the prefrontal cortex, which likely helps to improve concentration, energy, and executive functioning.[19,27] Furthermore, the serotonin 2C effects of this medication may contribute to its initial anorexic and ongoing anti-bulimic effects.[27] More recently, the effects of fluoxetine on the serotonin system have been combined with those of olanzapine, a second-generation antipsychotic, for the treatment of depression in patients with bipolar disorder and for treatment-resistant unipolar depression.[19,27] Fluoxetine also may be a mild norepinephrine reuptake inhibitor, particularly at higher doses.

Fluoxetine significantly inhibits CYP 2D6 and 3A4, and thus is highly likely to interact with other medications.[19,27] Finally, this medication has the longest half-life of the SSRIs, at 2-3 days, with an active metabolite that persists for 2 weeks.[2]

Benefits. Fluoxetine has action at the serotonin 2C receptor, and may affect norepinephrine levels at higher doses. The drug has the longest half-life among the SSRIs, making it the least likely to cause withdrawal. It is available in a once-weekly formulation and is approved for MDD, panic disorder, premenstrual dysphoric disorder, obsessive-compulsive disorder, and bulimia nervosa.[2] It also combines well with the second-generation antipsychotic olanzapine, and a combination formulation has been approved by the FDA for treatment-resistant and bipolar depression.*[19]

Drawbacks. The medication is likely to be activating in some patients, making it a more difficult option for those with insomnia, agitation, and intense anxiety.[19,27] Slower dose titration is warranted in these cases. Fluoxetine has a high degree of CYP 2D6 inhibition, resulting in significant drug-drug interactions.[19]

*Multiple trials of other second generation antipsychotics combined with various antidepressants including SSRI and SNRI have shown antidepressant efficacy for these combinations in patients with refractory depression.[32]

Paroxetine. The action of paroxetine is more complex than that of the previously described SSRIs. In addition to serotonin reuptake inhibition, paroxetine has mild anticholinergic properties, mild norepinephrine reuptake inhibition (NRI), inhibition of nitric oxide synthetase, and potent inhibition of CYP 2D6 (similar to fluoxetine).[19,27] Its anticholinergic and antihistaminergic properties may lend it calming and sedating effects, but may also cause dry mouth, blurred vision, and short-term memory problems.[19,27] The NRI effects of the medication may contribute to clinical effectiveness. The effects on nitric oxide synthetase may cause sexual dysfunction.

Benefits. In addition to major depression, paroxetine is approved for various anxiety disorders, with possible calming/sedating effects. It is available in immediate- and slow-release preparations.

Drawbacks. Paroxetine has the potential for anticholinergic side effects.[31] Its shorter half-life may result in more frequent and more severe withdrawal effects than other SSRIs, and paroxetine is the SSRI most strongly associated with weight changes.[2] This medication also has a higher probability of drug-drug interactions.

Sertraline. This SSRI may have dual mechanisms that distinguish it from other SSRIs. At higher doses, it acts as both a dopamine transporter inhibitor and a sigma-1 receptor binder.[27] The effects of dopamine transporter inhibition may result in improved energy, motivation, and concentration. The implications of sigma-1 binding are not yet well understood, but it is hypothesized to confer mild anxiolytic benefit and possible utility in psychotic and delusional depression.[27]

Benefits. Sertraline is approved for MDD, many anxiety disorders, eating disorders, and premenstrual dysphoric disorder.[2] This medication has very little CYP 2D6 inhibition and therefore few drug-drug interactions.[19] It has a moderate half-life, with some possibility of withdrawal symptoms.

Drawbacks. Sertraline can be activating in patients with anxiety disorders, which may require slowly titrating doses; it is often associated with gastrointestinal distress.

Take-Home Point

The SSRI class is considered a homogeneous class of antidepressants because all are held to the same standard of passing FDA regulatory norms. However, a pharmacodynamic look into their wider mechanisms of action may suggest that each drug is actually different in ways that may foster unique advantages or disadvantages for any given patient. This type of finding would not be apparent in a typical 300-subject regulatory trial, but is often noted in clinical practice, where the sample size comprises the one unique subject that the clinician is treating.

SNRI Class

The next most common class of medications used for the treatment of MDD is the SNRI class. This group of medications has a dual mechanism of action, increasing synaptic norepinephrine as well as serotonin.[19,27] In addition to increasing norepinephrine and serotonin levels throughout the brain, these medications may also boost dopamine in the prefrontal cortex, resulting in additional benefits.[27] The prefrontal cortex has few dopamine transporters to recycle dopamine out of the synapse; there, dopamine is typically removed by norepinephrine transporters. When these transporters are inhibited, the dopamine effect in the dorsolateral prefrontal cortex is more robust.[27] This activation in the brain has been correlated with antidepressant effects.

On the other hand, the additional norepinephrine boost is not confined to the brain. Norepinephrine effects are seen throughout the body, including the spinal cord, peripheral autonomic nervous system, heart, and bladder.[19,27] In the spinal cord this may reduce pain, but it may also lead to side effects such as tremor, motor activation, and increased blood pressure and heart rate.[27] These effects may also produce pseudo-anticholinergic symptoms such as dry mouth, constipation, and urinary retention. However, these norepinephrine-related side effects do not rival those of the tricyclic antidepressant class.[31] Generally, the SNRIs are well tolerated, but the subtle increase in side effect burden needs to be considered.

Venlafaxine. Venlafaxine was the first SNRI and was initially approved in an immediate-release preparation. This medication is a substrate of CYP 2D6 and is converted into desvenlafaxine, an SNRI that was developed subsequently.[19,27] Unfortunately, absorption of immediate-release venlafaxine is rapid, giving it notable side effects; this has been mitigated with an extended-release formulation that appears to be much better tolerated in practice. The medication also has a unique, dose-dependent ratio of serotonin to norepinephrine effects.[19,27] At low doses it has more SRI than NRI properties; only at higher doses does norepinephrine transporter inhibition become more robust.

Benefits. Compared with the SSRIs, this medication acts at both serotonin and norepinephrine transporters, contributing to its antidepressant effectiveness. The medication is very effective in the treatment of anxiety disorders, with multiple approved uses, likely comparable to sertraline and paroxetine.[2]

Drawbacks. The norepinephrine effects of the medication are much more robust only at higher doses, which must be reached by titration. The medication has a short half-life, resulting in significant withdrawal side effects. There may be higher rates of nausea and dry mouth compared with some other antidepressants.[31] This medication may cause hypertension in some patients, so blood pressure should be monitored.[19]

Desvenlafaxine. Desvenlafaxine is the active metabolite of venlafaxine,[19] and has the added benefit of greater norepinephrine transporter inhibition than its precursor at initial dose levels, although its effects on norepinephrine remain less than those on serotonin.[27] Because it is the active metabolite of venlafaxine, it is less subject to the genetic and drug-induced variability of CYP 2D6, which allows more consistent plasma levels of the medication.[27] It may be one of the “cleanest” antidepressant medications, given its minimal vulnerability to cytochrome P450 metabolism, primarily renal excretion, and low protein binding. The role of desvenlafaxine in the regulation of vasomotor symptoms (night sweats, hot flashes, insomnia, and related depression) in perimenopausal women is being investigated.[27]

Benefits. Although similar to extended-release venlafaxine, desvenlafaxine has a more balanced ratio of norepinephrine/serotonin properties, and it has one of the most favorable drug-drug interaction profiles.

Drawbacks. This medication has a short half-life and significant withdrawal side effects.[31]

Duloxetine. Duloxetine is unique among the SNRI class of drugs because, in addition to MDD, it is approved for treating a variety of pain syndromes.[2] This is related to the SNRI effect on the descending spinal norepinephrine pathways that reduce afferent pain fiber activity.[27] The increase in norepinephrine activity in spinal areas results in less thalamic input to the sensory cortex and therefore less perceived pain. The norepinephrine-facilitating effects in the prefrontal cortex also may show some benefit in treatment of cognitive symptoms prevalent in geriatric depression.[27] Compared with venlafaxine, duloxetine has a lower incidence of treatment-related hypertension and milder withdrawal reactions. It is approved for MDD, generalized anxiety disorder, musculoskeletal pain, neuropathic pain, and fibromyalgia-related pain.[2]

Benefits. One of the only antidepressants approved for management of pain syndromes, duloxetine also has a more balanced norepinephrine to serotonin ratio at its initial doses.[28]

Drawbacks. Duloxetine is a mild to moderate CYP 2D6 inhibitor, which results in some drug-drug interactions.[19] In addition, it should not be used in patients who abuse alcohol or in those with renal and/or hepatic impairment.

Take-Home Point

The SNRI class is considered a homogeneous class of antidepressants because all are held to the same standard of passing FDA regulatory norms. As with the SSRI, a pharmacodynamic look into their wider mechanisms of action suggests that each drug is actually different in ways that may foster unique advantages or disadvantages for any given patient. This is clear when one considers the diverse FDA approvals for each and different potencies related to facilitating distinct ratios of serotonin to norepinephrine transporter inhibition. Again, this type of finding would not be apparent in a typical 300-subject regulatory trial, but is often noted in clinical practice, where the sample size comprises the one unique subject that the clinician is treating.

TCA Class

This class is one of the oldest and still most highly utilized classes of antidepressants in the history of psychopharmacology, and includes amitriptyline, imipramine, clomipramine, desipramine, trimipramine, and nortriptyline. The TCAs are often overlooked because of their relatively high level of side effects compared with other antidepressant classes, and because of high lethality in overdose. The TCAs have significant effects on norepinephrine, serotonin, and, to some extent, dopamine activity in the brain.[19,27] The higher incidence of side effects is likely mediated through blockade of muscarinic cholinergic receptors (M1/M3), histamine receptors (H1), alpha 1 adrenergic receptors, and voltage-sensitive sodium channels.[19,27] Histamine blockade causes sedation and weight gain. Muscarinic blockade causes dry mouth, blurred vision, urinary retention, and constipation. Alpha 1 blockade causes orthostatic hypotension and dizziness. Sodium channel blockade affects the heart significantly, resulting in arrhythmias and conduction changes at higher doses.[27] This last effect creates a significant risk of death in overdose and renders TCAs difficult to use in patients with medical comorbidities.

Benefits. Overall, TCAs are very effective antidepressants. Indeed, early studies comparing TCAs with SSRIs found significantly higher remission rates with TCAs in endogenously depressed and inpatient samples.[33-36] In less severely depressed patients, however, there is no conclusive evidence of benefit of either class over the other. Off-label use of TCAs for pain, enuresis, and insomnia is widespread.[19] Availability of plasma level monitoring helps ensure adequate therapeutic trials while minimizing toxicity.

Drawbacks. The significant adverse event profile causes an array of side effects that are often poorly tolerated and lead to medication noncompliance. Because of cardiac side effects, TCAs carry significant risk of death with overdose.

MAOI Class

This class of antidepressants has its own unique mechanism of action. The MAOIs have fallen into the realm of rarely used antidepressants in modern psychopharmacology because of the risks and side effects inherent to their use. On the other hand, MAOIs are among the most clinically powerful classes of antidepressant treatments. This class inhibits MAO enzyme subtypes A and B. The inhibition of these enzymes results in higher levels of serotonin and norepinephrine due to reduced catabolism of these neurotransmitters.[27] Moreover, inhibition of MAO-B also increases dopamine levels in the brain. Thus, all 3 monoamine neurotransmitter levels are robustly increased, which, in turn, affects a broad array of depressive symptoms.

These medications, however, can be difficult to use. The best-known drawback is that patients must maintain a diet free of high-tyramine foods or risk a hypertensive crisis related to acute elevation of systemic norepinephrine, which may also result in stroke.[19,27] Foods to be avoided include tap beers, smoked meat or fish, fava beans, aged cheeses, sauerkraut, and soy. However, certain beers, wines, and cheeses are not contraindicated. These items need to be researched and discussed before starting a patient on the medication.

Drug-drug interactions are plentiful; combining an MAOI with other norepinephrine medications may increase blood pressure, and combining with a serotonin-based medication can cause serotonin syndrome.[19,27] Patients are also advised to avoid decongestants, stimulants, antidepressants, certain opioids, and appetite suppressants.[19,27]

The MAOI tranylcypromine may act similarly to an amphetamine in the frontal cortex, affording it some additional benefits.[27] Likewise, selegiline is partly metabolized to amphetamine derivatives. Selegiline is more often used for Parkinson disease than for depression.

Benefits. MAOIs are recognized as among the most potent of antidepressants in monotherapy, with effects on serotonin, dopamine, and norepinephrine. This class of antidepressant is often used for the patient who is refractory to other antidepressant trials.

Drawbacks. The MAOIs are associated with risks of hypertensive crisis and serotonin syndrome. Patients must maintain a tyramine-free diet, except when using low-dose transdermal selegiline. Because of the potential for drug-drug interactions, careful, ongoing monitoring of all additional medications (including over-the-counter medications) is essential.

Miscellaneous Antidepressants

Several other well-known antidepressant medications do not fit discretely into the 4 main antidepressant classes. Each has unique mechanisms that will be discussed similarly below.

Bupropion. This norepinephrine-dopamine reuptake inhibitor (NDRI) is of particular use in a few subsets of patients. As the class name indicates, bupropion facilitates norepinephrine and dopamine effects by blocking norepinephrine transporter and dopamine transporter activity at a moderate level, likely in the frontal cortex.[27] The unique properties of bupropion as an antidepressant may be related to its lack of serotonin activity. It is approved for smoking cessation and is used off-label to reduce craving for substances of abuse. Clinicians contend that the dopamine actions of this medication help to improve the loss of positive affect in MDD. Thus, it effectively increases joy, interest, pleasure, energy, enthusiasm, alertness, and self-confidence.[27] The norepinephrine and dopamine facilitation helps patients with attention-deficit/hyperactivity disorder as well.[19]

Several cases of psychosis and paranoia have been reported in patients taking bupropion, likely related to the dopamine effects of the drug.[37] Limited data suggest that this medication, like all antidepressants, may activate depressed patients with bipolar disorder, causing manic episodes. However, it is widely accepted that bupropion and the SSRI class may be less likely to activate mania compared with the TCA class of medications. Because it does not act on serotonin, this is one of the few antidepressants that does not cause sexual side effects or weight gain.[19,27] The medication is uniquely approved for the treatment of seasonal affective disorder.[2]

Benefits. Bupropion is indicated for the treatment of MDD, seasonal affective disorder, and nicotine dependence. It has very low sexual and weight gain side effect liability.

Drawbacks. Bupropion has limited serotonin activity and less evidence for the treatment of anxiety. It lowers the seizure threshold in predisposed patients (including patients with eating disorders and those with epilepsy).

Trazodone. Trazodone is a serotonin antagonist/reuptake inhibitor (SARI). It blocks serotonin 2A and 2C receptors and also acts as a mild serotonin reuptake inhibitor.[19,27] This medication typically is used at lower doses because of its properties as a strong antihistamine (H1) and alpha-1 adrenergic blocking medication. The blockade of these receptors causes significant sedation, which may help with insomnia, but may cause excessive somnolence and dizziness in the daytime. The blockade of serotonin also may explain trazodone’s properties as a hypnotic, providing more efficient sleep.[27] Although higher doses of this medication provide excellent benefit related to the synergistic effects of blocking serotonin 2A and 2C and by acting as a serotonin reuptake inhibitor, this medication is not typically given in full divided doses because of excessive side effects.[19,27] A new slow-release preparation has been approved to allow a better tolerated, full dose range.

Benefits. Trazodone is often called a sedating antidepressant. It helps insomnia, improves sleep efficiency, and is effective even at low doses. Sexual side effects and activating side effects are low.[19,27]

Drawbacks. Significant sedation may limit its use.

Mirtazapine. This medication is also considered to be sedating and is typically either avoided or sought out because of its side effect profile. Side effects include sedation/hypnotic effects and appetite stimulation, but not sexual side effects. The lack of sexual side effects is again related to serotonin: mirtazapine is not a serotonin reuptake inhibitor, but instead acts as a serotonin 2A/2C receptor antagonist.[19,27] The blockade of these receptors may result in more dopamine and norepinephrine release in the prefrontal cortex. The histamine blockade (H1) results in sedation, anxiolytic/hypnotic effects, and weight gain.[19,27] Mirtazapine also acts as a 5HT3 receptor antagonist, resulting in reduction of gastrointestinal problems.[19,27] The primary mechanism of antidepressant action is alpha 2 adrenergic autoreceptor antagonism: blockade of these autoreceptors disinhibits norepinephrine release. This allows downstream effects on several pathways and may result in overall release of serotonin and norepinephrine. Mirtazapine can often be combined with an SNRI to obtain synergistic effects.[27]

Benefits. Mirtazapine has many unique mechanisms of action that make it beneficial in particular populations. It lacks sexual side effects, reduces gastrointestinal upset, and is not activating. The sedating qualities of this medication are typically used to the medication’s and the patient’s benefit.

Drawbacks. Mirtazapine has significant weight gain/appetite stimulation effects, which could lead to metabolic disorders.

This review is both practical and factual. Clinicians ideally should be aware of regulatory approvals and appropriate use of them in certain patient populations. When used this way, clinicians may expect results comparable to those noted in the evidence base of regulatory trials. However, those who treat patients understand that not all are identical to those enrolled in research trials. What follows will provide some practical clinical approaches when responses do not meet expectations.

As noted, only one third of patients will fully remit on their first antidepressant trial.[38] These numbers hold true for patients who are fully treated with moderate to high dose SSRI for as long as 12 weeks. In clinical practice, patients may not even have such a rigorous dosing profile and failure rates are likely higher. What approaches should be taken when a patient is not responding to treatment?

Adherence and Dosing

First, ask about and attempt to ensure adherence to the antidepressant treatment. This questioning should be nonjudgmental and empathic, as most patients will likely say they are compliant even when they are not. Oftentimes, suggesting that most people naturally miss a few doses and that you as the clinician are just checking in will defuse the situation. As dosing becomes divided throughout the day and polypharmacy increases, compliance usually diminishes, making assessment of adherence to the medical regimen even more important.

Tolerability

An important way to improve adherence to a regimen is to address side effects and antidepressant tolerability. Sometimes patients stop taking their antidepressant, or fail to escalate the dose as advised, when adverse effects are not well tolerated. Many mild side effects will dissipate over time, and this should be discussed directly with the patient.[39] Patients should be instructed to inform prescribers of any moderate to severe side effects so the drug can be safely stopped. Patients should also be told that there are many antidepressants with different side effect profiles.[2,39] For example, the SSRIs, SNRIs, and NDRI may be activating, and thus cause insomnia or nervousness upon initiation of treatment. Such patients may be switched to a SARI or a noradrenergic antagonist-selective serotonin antagonist product, as these tend to be less activating and more sedating.[2]

Some patients may experience drug-drug interactions depending upon their genetic make-up.[2] Switching away from hepatic enzyme-inhibiting medications toward medications that are less likely to interact with other drugs may be warranted. Typical side effects such as headaches, stomachaches, or even insomnia often can be treated very effectively with over-the-counter or prescription medications. Later-onset side effects such as weight gain or sexual dysfunction may be more difficult to mitigate or treat. Open discussions with patients about these longer-term risks are warranted because patients often have to stay on their antidepressants for a year or more to maintain remission and avoid a depressive relapse.[38] Because certain antidepressants may have a more or less favorable weight or sexual side effect profile, they should be chosen based on a discussion of patient preference when possible.

Assuming adherence is adequate, the next step is to confirm that the antidepressant dose was at the moderate to high end of the approved range and has been taken for at least 4 to 6 weeks. If dosing is confirmed to be reasonable, consider a final maximization of dose or switch to a new antidepressant monotherapy.[39]

Switching Monotherapies

If it is necessary to consider switching monotherapies, no clear benefit has been attributed to any particular strategy.[38] Many experts agree, however, that a switch away from an SSRI is warranted if the fully dosed SSRI therapy has failed to improve the patient’s symptoms.[27,39] The theoretical implication is that the patient’s current depressive symptoms have been treated with aggressive serotonergic facilitation and that repeating this mechanism may not be fruitful. This suggests that, pharmacodynamically, the depression may not be entirely serotonin-based in regards to its etiology.[27,39] Given this, a cross titration on to an SNRI such as venlafaxine XR or duloxetine, a NDRI such as bupropion XL, a noradrenergic antagonist-selective serotonin antagonist such as mirtazapine, or a more aggressive serotonergic facilitating agent like a SARI such as trazodone ER or a serotonin partial agonist-reuptake inhibitor such as vilazodone theoretically may be warranted.[2]

One final concern regarding switching involves the use of generic vs brand-name drugs. Under FDA bioequivalence standards, a generic’s bioavailability must fall within roughly 80% to 125% of the brand-name drug’s.[40,41] Most generics are highly comparable, but occasionally when a patient changes from one generic to another, bioavailability could shift from the stronger end of that range to the weaker end, and symptom relapse may occur. By contrast, going from a weaker to a stronger generic might actually improve depression outcomes, but may also create new-onset side effects after months of stable treatment because the newer preparation is more potent, raising blood levels higher than previously. These types of events should be monitored and dosing adjusted as needed.
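The arithmetic behind that worst-case generic-to-generic switch is easy to sketch. The following is a back-of-the-envelope illustration only, not a clinical tool; the 1.20 and 0.80 figures are hypothetical extremes of the bioequivalence range, not measured values for any product.

```python
# Hypothetical illustration of exposure change when switching generic preparations.
# Bioavailability is expressed relative to the brand-name drug (1.0 = brand).

def relative_change(old_bioavailability: float, new_bioavailability: float) -> float:
    """Fractional change in drug exposure after switching preparations."""
    return (new_bioavailability - old_bioavailability) / old_bioavailability

# Worst case described in the text: a generic ~20% stronger than brand
# swapped for one ~20% weaker. Exposure falls by about one third.
drop = relative_change(1.20, 0.80)

# The reverse switch raises exposure by about half, which may explain
# new-onset side effects after months of stable treatment.
rise = relative_change(0.80, 1.20)
```

The asymmetry (roughly a one-third drop vs a one-half rise) is simply a consequence of measuring change relative to the starting preparation.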

Finally, a generic drug may possess a different slow-release mechanism compared with the parent brand-name drug. Oftentimes the generic, despite being a slow-release drug itself may actually release active drug more quickly than the original brand’s slow-release technology. There may be no evidence of a clinical problem; however, some patients may develop side effects when taking the faster release preparation. In this case, the dose may need to be lowered while monitoring for relapse or a switch back to the brand-name slow-release product may be warranted.

In conclusion, this article seeks to identify treatments that match patients with MDD and their common comorbidities as a first-line approach to MDD management. Secondarily, and more theoretically, patients’ MDD symptoms may be treated more effectively if clinicians are aware of the neurotransmitters and receptors that each antidepressant modulates. Finally, any patient may encounter problems with efficacy, adherence, or tolerability; each patient is unique, and these clinical situations may interfere with optimal depression outcomes. Each patient must be educated and given informed consent about the myriad effective antidepressant treatment options available.

Supported by an independent educational grant from Valeant Pharmaceuticals.

References:

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Text revision. Washington, DC: American Psychiatric Association; 2000.
  2. Stahl SM. Essential Psychopharmacology: The Prescriber’s Guide. Cambridge, Mass: Cambridge University Press; 2005.
  3. FDA Package Insert. Pristiq. Pfizer Inc. 2011.
  4. FDA Package Insert. Viibryd. Forest Laboratories, Inc. 2011.
  5. Davidson J, Baldwin D, Stein DJ, et al. Treatment of posttraumatic stress disorder with venlafaxine extended release: a 6-month randomized controlled trial. Arch Gen Psychiatry. 2006;63:1158-1165. Abstract
  6. Bandelow B, Zohar J, Hollander E, et al. World Federation of Societies of Biological Psychiatry (WFSBP) guidelines for the pharmacological treatment of anxiety, obsessive-compulsive and post-traumatic stress disorders — first revision. World J Biol Psychiatry. 2008;9:248-312. Abstract
  7. Benedek DM, Friedman MJ, Zatzick D, et al. Guideline watch (March 2009): practice guideline for the treatment of patients with acute stress disorder and posttraumatic stress disorder.
  8. Phelps NJ, Cates ME. The role of venlafaxine in the treatment of obsessive-compulsive disorder. Ann Pharmacother. 2005;39:136-140. Abstract
  9. Gartlehner G, Hansen RA, Reichenpfader U, et al. Drug class review: second-generation antidepressants: final update 5 report [Internet]. Portland, Ore: Oregon Health & Science University; March 2011.
  10. Effexor Prescribing Information. http://labeling.pfizer.com/showlabeling.aspx?id=100
  11. FDA Package Insert. Cymbalta. Lilly USA LLC. 2004/2011.
  12. FDA Package Insert. Silenor. Somaxon Pharmaceuticals Inc. 1969.
  13. Hsu, ES. Acute and chronic pain management in fibromyalgia: updates on pharmacotherapy. Am J Ther. 2011;18:487-509. Abstract
  14. Verdu B, Decosterd I, Buclin T, et al. Antidepressants for the treatment of chronic pain. Drugs. 2008;68:2611-2632. Abstract
  15. Prince JB, Wilens TE, Biederman J, et al. A controlled study of nortriptyline in children and adolescents with attention deficit hyperactivity disorder. J Child Adolesc Psychopharmacol. 2000;10:193-204. Abstract
  16. Pliszka SR. Non-stimulant treatment of attention-deficit/hyperactivity disorder. CNS Spectr. 2003;8:253-258. Abstract
  17. Wilens TE, Prince JB, Spencer T, et al. An open trial of bupropion for the treatment of adults with attention-deficit/hyperactivity disorder and bipolar disorder. Biol Psychiatry. 2003;54:9-16. Abstract
  18. Olvera RL, Pliszka SR, Luh J, et al. An open trial of venlafaxine in the treatment of attention-deficit/hyperactivity disorder in children and adolescents. J Child Adolesc Psychopharmacol. 1996;6:241-250. Abstract
  19. Sadock BJ, Sadock VA. Kaplan and Sadock’s Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry. 10th ed. Philadelphia, Pa: Lippincott Williams and Wilkins; 2007:977-1126.
  20. Prochazka AV, Kick S, Steinbrunn C, et al. A randomized trial of nortriptyline combined with transdermal nicotine for smoking cessation. Arch Intern Med. 2004;164:2229-2233. Abstract
  21. FDA Package Insert. Wellbutrin XL. GlaxoSmithKline. 2008.
  22. McIntyre RS, Mancini DA, McCann S, et al. Topiramate versus bupropion SR when added to mood stabilizer therapy for the depressive phase of bipolar disorder: a preliminary single-blind study. Bipolar Disord. 2002;4:207-213. Abstract
  23. Coleiro B, Marshall SE, Denton CP, et al. Treatment of Raynaud’s phenomenon with the selective serotonin reuptake inhibitor fluoxetine. Rheumatology. 2001;40:1038-1043. Abstract
  24. Stearns V, Beebe KL, Iyengar M, et al. Paroxetine controlled release in the treatment of menopausal hot flashes: a randomized controlled trial. JAMA. 2003;289:2827-2834. Abstract
  25. Evans ML, Pritts E, Vittinghoff E, et al. Management of postmenopausal hot flushes with venlafaxine hydrochloride: a randomized, controlled trial. Obstet Gynecol. 2005;105:161-166. Abstract
  26. Muller D, Roehr CC, Eggert P. Comparative tolerability of drug treatment for nocturnal enuresis in children. Drug Saf. 2004;27:717-727. Abstract
  27. Stahl SM. Stahl’s Essential Psychopharmacology: Neuroscientific Basis and Practical Applications. 3rd ed. Cambridge, Mass: Cambridge University Press; 2008:511-666.
  28. Spina E, Scordo MG. Clinically significant drug interactions with antidepressants in the elderly. Drugs Aging. 2002;19:299-320. Abstract
  29. FDA Package Insert. Celexa. Forest Laboratories, Inc. 2010/2011.
  30. Spina E, Santoro V, D’Arrigo C. Clinically relevant pharmacokinetic drug interactions with second-generation antidepressants: an update. Clin Ther. 2008;30:1206-1227. Abstract
  31. Cipriani A, Furukawa TA, Salanti G, et al. Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis. Lancet. 2009;373:746-758. Abstract
  32. Nelson JC, Papakostas GI. Atypical antipsychotic augmentation in major depressive disorder: a meta-analysis of placebo-controlled randomized trials. Am J Psychiatry. 2009;166:980-991. Abstract
  33. Danish University Antidepressant Group. Citalopram: clinical effect profile in comparison with clomipramine. A controlled multicenter study. Psychopharmacology (Berl). 1986;90:131-138. Abstract
  34. Danish University Antidepressant Group. Paroxetine: a selective serotonin reuptake inhibitor showing better tolerance, but weaker antidepressant effect than clomipramine in a controlled multicenter study. J Affect Disord. 1990;18:289-299. Abstract
  35. Roose SP, Glassman AH, Attia E, Woodring S. Comparative efficacy of selective serotonin reuptake inhibitors and tricyclics in the treatment of melancholia. Am J Psychiatry. 1994;151:1735-1739. Abstract
  36. Beasley CM Jr, Holman SL, Potvin JH. Fluoxetine compared with imipramine in the treatment of inpatient depression. A multicenter trial. Ann Clin Psychiatry. 1993;5:199-207. Abstract
  37. Bailey J. Acute psychosis after bupropion treatment in a healthy 28-year-old woman. J Am Board Fam Med. 2008;21:244.
  38. Rush AJ, Trivedi MH, Wisniewski SR, et al. Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: a STAR*D report. Am J Psychiatry. 2006;163:1905-1917. Abstract
  39. Zajecka JM, Goldstein C. Combining medication to achieve remission. In: Schwartz T, Petersen T, eds. Depression: Treatment Strategies and Management. 2nd ed. New York: Informa; 2009.
  40. Park K, ed. Controlled Drug Delivery: Challenges and Strategies. Washington, DC: American Chemical Society; 1997.
  41. Orange book annual preface, statistical criteria for bioequivalence. In: Approved Drug Products with Therapeutic Equivalence Evaluations. 29th ed. US Food and Drug Administration Center for Drug Evaluation and Research; 2009-06-18, update 3-01-11. http://www.fda.gov/Drugs/DevelopmentApprovalProcess/ucm079068.htm

Retrieved from: http://www.medscape.org/viewarticle/755180

listen willya

In Education, Pedagogy, Uncategorized on Sunday, 23 September 2012 at 14:20

nicely done.

How Bad Teachers Nearly Ruined My Life

In Education, Humor, Pedagogy on Sunday, 23 September 2012 at 13:30

How Bad Teachers Nearly Ruined My Life.

Marxism…or, “Whose turn is it to buy the smokes?”

In Philosophy on Sunday, 23 September 2012 at 13:12

“The philosophers have only interpreted the world, in various ways; the point is to change it.” -Karl Marx

Marxists’ Apartment A Microcosm Of Why Marxism Doesn’t Work

November 13, 2002 | ISSUE 38•42

AMHERST, MA—The filthy, disorganized apartment shared by three members of the Amherst College Marxist Society is a microcosm of why the social and economic utopia described in the writings of Karl Marx will never come to fruition, sources reported Monday.  “The history of society is the inexorable history of class struggle,” said sixth-year undergraduate Kirk Dorff, 23, resting his feet on a coffee table cluttered with unpaid bills, crusted cereal bowls, and bongwater-stained socialist pamphlets. “The stage is set for the final struggle between the bourgeoisie and the proletariat, the true productive class. We’re well aware of that here at 514 W. Elm Street, unlike other apartments on this supposedly intellectual campus.”

Upon moving in together at the beginning of the fall 2001 semester, Dorff, Josh Foyle, and Tom Eaves sat down and devised an egalitarian system for harmonious living. Each individual roommate would be assigned a task, which he would be required to carry out on a predetermined day of the week. A bulletin board in the kitchen was chosen as the spot for household announcements, and to track reimbursements for common goods like toothpaste and toilet paper.

“We were creating an exciting new model for living,” said Dorff, stubbing his cigarette into an ashtray that had not been emptied in six days. “It was like we were dismantling the apparatus of the state right within our own living space.”

Despite the roommates’ optimism, the system began to break down soon after its establishment. To settle disputes, the roommates held weekly meetings of the “Committee of Three.”

“I brought up that I thought it was total bullshit that I’m, like, the only one who ever cooks around here, yet I have to do the dishes, too,” said Foyle, unaware of just how much the apartment underscores the infeasibility of scientific socialism as outlined in Das Kapital. “So we decided that if I cook, someone else has to do the dishes. We were going to rotate bathroom-cleaning duty, but then Kirk kept skipping his week, so we had to give him the duty of taking out the garbage instead. But now he has a class on Tuesday nights, so we switched that with the mopping.”

After weeks of complaining that he was the only one who knew how to clean “halfway decent,” Foyle began scaling back his efforts, mirroring the sort of production problems experienced in the USSR and other Soviet bloc nations.

At an Oct. 7 meeting of the Committee of Three, more duties and a point system were added. Two months later, however, the duty chart is all but forgotten and the shopping list is several pages long.

The roommates have also tried to implement a food-sharing system, with similarly poor results. The dream of equal distribution of shared goods quickly gave way to pilferage, misallocation, and hoarding.

“I bought the peanut butter the first four times, and this Organic Farms shit isn’t cheap,” Eaves said. “So ever since, I’ve been keeping it in my dresser drawer. If Kirk wants to make himself a sandwich, he can run to the corner store and buy some Jif.”

Another failed experiment involves the cigarettes bought collectively. Disagreements constantly arose over who smoked more than his fair share of the group’s supply of American Spirit Blues, and the roommates now hide individually purchased packs from each other—especially late at night when shortages are frequent.

The situation is familiar to Donald Browning, author of Das Kouch: A History Of College Marxism, 1970-1998.

“When workers willfully become less productive, the economy of the household suffers,” Browning said. “But in a society where a range of ability naturally exists, someone is bound to object to picking up the slack for others and end up getting all pissy, like Josh does.”

According to Browning, the group’s lack of productivity pervades their lives, with roommates encouraging each other to skip class or work to sit on the couch smoking pot and talking politics.

“A spirit of free-market competition in the house would likely result in better incomes or better grades,” Browning said. “Then, instead of being hated and ostracized by the world at large as socialist countries usually are, they could maintain effective diplomacy with their landlord, their parents, and Kirk’s boss who cut back his hours at Shaman Drum Books.”

The lack of funds and the resulting scarcity breeds not only discontent but also corruption. Although collectivism only works when all parties contribute to the fullest extent, Foyle hid the existence of a $245 paycheck from roommates so he would not have to pay his back rent, in essence refusing to participate in the forced voluntary taxation that is key to socialism. Even worse, Dorff, who is entrusted with bill collection and payment, recently pocketed $30, a theft he claimed was “for the heating bill” but was put toward buying drinks later that night.

“As is human nature, power tends to corrupt even the noblest of men,” Browning said. “The more power the collective has over the lives of the individuals, as is the case in this household, the more he who is in charge of distribution has to gain by being unscrupulous. These Marxists will soon realize they overestimated how much control they would like 514 W. Elm as an entity to have.”

Retrieved from: http://www.theonion.com/articles/marxists-apartment-a-microcosm-of-why-marxism-does,1382/

Autism Patients Might Benefit from Drug Therapy

In Medication, Psychiatry, School Psychology on Sunday, 23 September 2012 at 09:01

Autism Patients Might Benefit from Drug Therapy

By SYDNEY LUPKIN | ABC News – Wed, Sep 19, 2012 2:37 PM EDT

Researchers have found a drug that can help patients with Fragile X syndrome, the most common cause of inherited intellectual impairment (formerly known as mental retardation), stay calm in social situations by treating their anxiety.

Dr. Elizabeth Berry-Kravis and her team found that a drug called Arbaclofen reduced social avoidance and repetitive behavior in Fragile X patients, especially those with autism, by treating their anxiety. The drug increases GABA, a brain chemical that regulates the excitatory system; Fragile X patients typically have too little GABA to do that job, causing their excitatory systems to “signal out of control” and make them anxious.

Such patients have been known to cover their ears or run away at their own birthday parties because they are overwhelmed by the attention, but one trial participant was able to enjoy his birthday party for the first time in his life while on Arbaclofen, Berry-Kravis said.

“I feel like it’s kind of the beginning of chemotherapy when people first realized you could use chemotherapy to treat cancer patients instead of just letting them die,” said Berry-Kravis, a professor of neurology and biochemistry at Rush University Medical Center in Chicago who has studied Fragile X for more than 20 years.

She said people used to think Fragile X patients couldn’t be helped either, but she and her team have proven that by using knowledge from existing brain mechanism studies, doctors can select medications to target specific problems in Fragile X patients’ brains.

Fragile X syndrome is caused by a change in the FMR1 gene, which makes a protein necessary for brain growth, and studies indicate it causes autism in up to one-third of patients diagnosed with it. Unlike Fragile X syndrome, which is genetic, autism is a behavioral diagnosis characterized by an inability to relate to other people or read social cues. The two are linked but distinct, and a core symptom of both is social withdrawal.

Sixty-three patients with Fragile X participated in Berry-Kravis’s placebo-controlled, double-blind clinical trial from December 2008 through March 2010. Of those, the patients with autism showed the biggest improvements in social behavior, Berry-Kravis said.

To psychologist Lori Warner, who directs the HOPE Center at Beaumont Children’s Hospital, the study is exciting because when her autistic patients are anxious, they often have a harder time learning the social cues they can’t read on their own.

“Reducing anxiety opens up your brain to be able to take in what’s happening in an environment and be able to learn from and understand social cues because you’re no longer frightened of the situation,” Warner said.

She works mostly with autism patients, and although some do have Fragile X as well, most do not.

Fragile X affects one in 4,000 men and one in 6,000 to 8,000 women, according to the Centers for Disease Control and Prevention.

Although Arbaclofen worked best on autistic Fragile X patients, further studies will be needed to prove whether it can help all autism patients, not just those with autism caused by Fragile X.

“There’s a difference between one person’s brain and another in how it’s set up,” Berry-Kravis said. “This is not a magic cure. It’s a step.”

Retrieved from: http://gma.yahoo.com/autism-patients-might-benefit-drug-therapy-183744169–abc-news-health.html

Parenting Style and Its Correlates

In Education on Sunday, 23 September 2012 at 08:42

Parenting Style and Its Correlates

Developmental psychologists have been interested in how parents influence the development of children’s social and instrumental competence since at least the 1920s. One of the most robust approaches to this area is the study of what has been called “parenting style.” This Digest defines parenting style, explores four types, and discusses the consequences of the different styles for children.

Parenting Style Defined

Parenting is a complex activity that includes many specific behaviors that work individually and together to influence child outcomes. Although specific parenting behaviors, such as spanking or reading aloud, may influence child development, looking at any specific behavior in isolation may be misleading. Many writers have noted that specific parenting practices are less important in predicting child well-being than is the broad pattern of parenting. Most researchers who attempt to describe this broad parental milieu rely on Diana Baumrind’s concept of parenting style. The construct of parenting style is used to capture normal variations in parents’ attempts to control and socialize their children (Baumrind, 1991). Two points are critical in understanding this definition. First, parenting style is meant to describe normal variations in parenting. In other words, the parenting style typology Baumrind developed should not be understood to include deviant parenting, such as might be observed in abusive or neglectful homes. Second, Baumrind assumes that normal parenting revolves around issues of control. Although parents may differ in how they try to control or socialize their children and the extent to which they do so, it is assumed that the primary role of all parents is to influence, teach, and control their children.

Parenting style captures two important elements of parenting: parental responsiveness and parental demandingness (Maccoby & Martin, 1983). Parental responsiveness (also referred to as parental warmth or supportiveness) refers to “the extent to which parents intentionally foster individuality, self-regulation, and self-assertion by being attuned, supportive, and acquiescent to children’s special needs and demands” (Baumrind, 1991, p. 62). Parental demandingness (also referred to as behavioral control) refers to “the claims parents make on children to become integrated into the family whole, by their maturity demands, supervision, disciplinary efforts and willingness to confront the child who disobeys” (Baumrind, 1991, pp. 61-62).

Four Parenting Styles 

Categorizing parents according to whether they are high or low on parental demandingness and responsiveness creates a typology of four parenting styles: indulgent, authoritarian, authoritative, and uninvolved (Maccoby & Martin, 1983). Each of these parenting styles reflects different naturally occurring patterns of parental values, practices, and behaviors (Baumrind, 1991) and a distinct balance of responsiveness and demandingness.

  • Indulgent parents (also referred to as “permissive” or “nondirective”) “are more responsive than they are demanding. They are nontraditional and lenient, do not require mature behavior, allow considerable self-regulation, and avoid confrontation” (Baumrind, 1991, p. 62). Indulgent parents may be further divided into two types: democratic parents, who, though lenient, are more conscientious, engaged, and committed to the child, and nondirective parents.
  • Authoritarian parents are highly demanding and directive, but not responsive. “They are obedience- and status-oriented, and expect their orders to be obeyed without explanation” (Baumrind, 1991, p. 62). These parents provide well-ordered and structured environments with clearly stated rules. Authoritarian parents can be divided into two types: nonauthoritarian-directive, who are directive, but not intrusive or autocratic in their use of power, and authoritarian-directive, who are highly intrusive.
  • Authoritative parents are both demanding and responsive. “They monitor and impart clear standards for their children’s conduct. They are assertive, but not intrusive and restrictive. Their disciplinary methods are supportive, rather than punitive. They want their children to be assertive as well as socially responsible, and self-regulated as well as cooperative” (Baumrind, 1991, p. 62).
  • Uninvolved parents are low in both responsiveness and demandingness. In extreme cases, this parenting style might encompass both rejecting–neglecting and neglectful parents, although most parents of this type fall within the normal range.
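The high/low grid behind these four styles can be summarized as a small lookup table. The sketch below is purely illustrative of the Maccoby and Martin typology; the names and structure are mine, not anything from the Digest:

```python
# Fourfold typology: responsiveness x demandingness, each high (True) or low (False).
STYLES = {
    (True,  True):  "authoritative",   # responsive and demanding
    (True,  False): "indulgent",       # responsive but undemanding
    (False, True):  "authoritarian",   # demanding but unresponsive
    (False, False): "uninvolved",      # low on both dimensions
}

def parenting_style(responsive, demanding):
    """Map high/low responsiveness and demandingness to a style label."""
    return STYLES[(responsive, demanding)]

print(parenting_style(True, True))   # authoritative
```

The table makes the Digest’s later point visible: the typology is a categorical cross of two dimensions, not a single scale, so each style is a distinct combination rather than a point on a continuum.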

Because parenting style is a typology, rather than a linear combination of responsiveness and demandingness, each parenting style is more than and different from the sum of its parts (Baumrind, 1991). In addition to differing on responsiveness and demandingness, the parenting styles also differ in the extent to which they are characterized by a third dimension: psychological control. Psychological control “refers to control attempts that intrude into the psychological and emotional development of the child” (Barber, 1996, p. 3296) through use of parenting practices such as guilt induction, withdrawal of love, or shaming. One key difference between authoritarian and authoritative parenting is in the dimension of psychological control. Both authoritarian and authoritative parents place high demands on their children and expect their children to behave appropriately and obey parental rules. Authoritarian parents, however, also expect their children to accept their judgments, values, and goals without questioning. In contrast, authoritative parents are more open to give and take with their children and make greater use of explanations. Thus, although authoritative and authoritarian parents are equally high in behavioral control, authoritative parents tend to be low in psychological control, while authoritarian parents tend to be high.

Consequences for Children

Parenting style has been found to predict child well-being in the domains of social competence, academic performance, psychosocial development, and problem behavior. Research based on parent interviews, child reports, and parent observations consistently finds:

  • Children and adolescents whose parents are authoritative rate themselves and are rated by objective measures as more socially and instrumentally competent than those whose parents are nonauthoritative (Baumrind, 1991; Weiss & Schwarz, 1996; Miller et al., 1993).
  • Children and adolescents whose parents are uninvolved perform most poorly in all domains.

In general, parental responsiveness predicts social competence and psychosocial functioning, while parental demandingness is associated with instrumental competence and behavioral control (i.e., academic performance and deviance). These findings indicate:

  • Children and adolescents from authoritarian families (high in demandingness, but low in responsiveness) tend to perform moderately well in school and be uninvolved in problem behavior, but they have poorer social skills, lower self-esteem, and higher levels of depression.
  • Children and adolescents from indulgent homes (high in responsiveness, low in demandingness) are more likely to be involved in problem behavior and perform less well in school, but they have higher self-esteem, better social skills, and lower levels of depression.

In reviewing the literature on parenting style, one is struck by the consistency with which authoritative upbringing is associated with both instrumental and social competence and lower levels of problem behavior in both boys and girls at all developmental stages. The benefits of authoritative parenting and the detrimental effects of uninvolved parenting are evident as early as the preschool years and continue throughout adolescence and into early adulthood. Although specific differences can be found in the competence evidenced by each group, the largest differences are found between children whose parents are unengaged and their peers with more involved parents. Differences between children from authoritative homes and their peers are equally consistent, but somewhat smaller (Weiss & Schwarz, 1996). Just as authoritative parents appear to be able to balance their conformity demands with their respect for their children’s individuality, so children from authoritative homes appear to be able to balance the claims of external conformity and achievement demands with their need for individuation and autonomy.

Sidebar: Children with ADHD, ODD, and other behavioral disorders are particularly vulnerable to low self-esteem. They frequently experience school problems, have difficulty making friends, and lag behind their peers in psychosocial development. They are more likely than other children to bully and to be bullied. Parents of children with behavior problems experience highly elevated levels of child-rearing stress, and this may make it more difficult for them to respond to their children in positive, consistent, and supportive ways.

Influence of Sex, Ethnicity, or Family Type

It is important to distinguish between differences in the distribution and the correlates of parenting style in different subpopulations. Although in the United States authoritative parenting is most common among intact, middle-class families of European descent, the relationship between authoritativeness and child outcomes is quite similar across groups. There are some exceptions to this general statement, however: (1) demandingness appears to be less critical to girls’ than to boys’ well-being (Weiss & Schwarz, 1996), and (2) authoritative parenting predicts good psychosocial outcomes and problem behaviors for adolescents in all ethnic groups studied (African-, Asian-, European-, and Hispanic Americans), but it is associated with academic performance only among European Americans and, to a lesser extent, Hispanic Americans (Steinberg, Dornbusch, & Brown, 1992; Steinberg, Darling, & Fletcher, 1995). Chao (1994) and others (Darling & Steinberg, 1993) have argued that observed ethnic differences in the association of parenting style with child outcomes may be due to differences in social context, parenting practices, or the cultural meaning of specific dimensions of parenting style.

Conclusion

Parenting style provides a robust indicator of parenting functioning that predicts child well-being across a wide spectrum of environments and across diverse communities of children. Both parental responsiveness and parental demandingness are important components of good parenting. Authoritative parenting, which balances clear, high parental demands with emotional responsiveness and recognition of child autonomy, is one of the most consistent family predictors of competence from early childhood through adolescence. However, despite the long and robust tradition of research into parenting style, a number of issues remain outstanding. Foremost among these are issues of definition, developmental change in the manifestation and correlates of parenting styles, and the processes underlying the benefits of authoritative parenting (see Schwarz et al., 1985; Darling & Steinberg, 1993; Baumrind, 1991; and Barber, 1996).

For More Information

Barber, B. K. (1996). Parental psychological control: Revisiting a neglected construct. Child Development, 67(6), 3296-3319.

Baumrind, D. (1989). Rearing competent children. In W. Damon (Ed.), Child development today and tomorrow (pp. 349-378). San Francisco: Jossey-Bass.

Baumrind, D. (1991). The influence of parenting style on adolescent competence and substance use. Journal of Early Adolescence, 11(1), 56-95.

Chao, R. K. (1994). Beyond parental control and authoritarian parenting style: Understanding Chinese parenting through the cultural notion of training. Child Development, 65(4), 1111-1119.

Darling, N., & Steinberg, L. (1993). Parenting style as context: An integrative model. Psychological Bulletin, 113(3), 487-496.

Maccoby, E. E., & Martin, J. A. (1983). Socialization in the context of the family: Parent–child interaction. In P. H. Mussen (Ed.) & E. M. Hetherington (Vol. Ed.), Handbook of child psychology: Vol. 4. Socialization, personality, and social development (4th ed., pp. 1-101). New York: Wiley.

Miller, N. B., Cowan, P. A., Cowan, C. P., & Hetherington, E. M. (1993). Externalizing in preschoolers and early adolescents: A cross-study replication of a family model. Developmental Psychology, 29(1), 3-18.

Schwarz, J. C., Barton-Henry, M. L., & Pruzinsky, T. (1985). Assessing child-rearing behaviors: A comparison of ratings made by mother, father, child, and sibling on the CRPBI. Child Development, 56(2), 462-479.

Steinberg, L., Darling, N., & Fletcher, A. C. (1995). Authoritative parenting and adolescent adjustment: An ecological journey. In P. Moen, G. H. Elder, Jr., & K. Luscher (Eds.), Examining lives in context: Perspectives on the ecology of human development (pp. 423-466). Washington, DC: American Psychological Assn.

Steinberg, L., Dornbusch, S. M., & Brown, B. B. (1992). Ethnic differences in adolescent achievement: An ecological perspective. American Psychologist, 47(6), 723-729.

Weiss, L. H., & Schwarz, J. C. (1996). The relationship between parenting types and older adolescents’ personality, academic achievement, adjustment, and substance use. Child Development, 67(5), 2101-2114.


Source: Clearinghouse on Elementary and Early Childhood Education
Author: Nancy Darling, PhD, MS
EDO-PS-99-3, March 1999


Page last modified or reviewed by AH on March 28, 2012

Retrieved from: http://www.athealth.com/practitioner/ceduc/parentingstyles.html

“you are responsible, forever, for what you have tamed…”

In Education, Well-being on Sunday, 23 September 2012 at 08:30

reply to gpicone…thanks for giving me things to blog about!

gpicone:  “Thanks for your support and for this post. I haven’t written about it yet but in my final year I was sent to work in what was basically a prison compound for students that the district no longer wanted to pay to have sent to outside programs where they might receive help for their emotional and behavioral problems. 2 teachers were set on fire (their hair lit with lighters) and both were fired (pun intended? How could it not be!) for being flammable I suppose! And with our big fat popular governor of NJ (ironic that a man who indulges himself to his own detriment should be the leader of austerity for everyone else and no one notices) bad mouthing us and cutting our benefits I had no choice but to leave after 33 years in teaching. I didn’t want to go but…”

***

i hear so many stories just like yours.  teachers who were forced to retire because of the lack of support in the system.  teachers who are overworked and underpaid.  teachers who have just been beaten down by the system.  more and more, i see great teachers who dreamed of teaching children and spent their schooling studying to do so (and love it) being forced out of teaching because they have become nothing more than paper pushers and glorified babysitters.  it’s tragic.

i am assigned to two “traditional” schools (a middle and high school) as well as our alternative program (i work with both the middle and high school there).  ironic that you mention the ‘hair on fire’ incident because we have a kid who set a girl’s hair on fire and was sent to us (this is NOT a program for severe emotional and behavioral issues…we have a different program for that).  so, we end up with many kids who make/made poor choices.  i say that because many of the kids we get really DID just make a bad choice (you brought alcohol for you and your friends to drink during lunch…bad choice) but we also have kids with felony records (from rape to firearm charges and everything in between), probation officers, babies at 12 years old (sometimes second babies at 12, first at 10 or 11), etc.  some of these kids have one brush with the law and realize that is NOT where they want to be and end up going back to their home schools after they are with us (they are given a length of time they are to be with us, usually a semester) and are successful.  and some…well, let’s just say i have seen quite a few former students on the news as well (carjacking with a baby in the car, breaking and entering, name it, i’ve probably seen/heard it).  what i do notice is that the great majority of these kids aren’t “bad” kids but have little guidance or support from home and are just following in their brother/sister/mother/father’s footsteps.  if dad is in a gang, you are going to be in a gang.  that’s just life to them and it’s all they know.  and i will also say that i have met some kids whom, and i hate to say/feel this, we just don’t know what to do with.  you can’t reach them and they have little regard or empathy for others.  but, even for those few kids, there are moments when you can see them trying to care or trying to do the right thing; they just don’t get it.  as a whole, though, these are the kids who will likely wind up in the penal system or worse.
it is rare that you can’t reach these kids on some level and see the person they could become, but it happens.

the overarching difference i notice between the kids who end up “flying right” and realizing that their behavior isn’t going to get them anywhere and those who appear to just not care and continually get into trouble, have failing grades, brushes with the law, etc. is the family environment.  i am generalizing here, but the kids who have parents who actually show up when you call or bother to come to a meeting, those are the kids who usually end up going back to their home schools and graduating or at least not coming back to the alternative school again.  the kids who have parents you never see (and who frankly appear bothered when you actually do get in touch with them and ask them to come to a meeting/pick up their sick child/check on homework), those kids get the message that they don’t matter.  if their own parents don’t care enough about them, why should they care about anyone and why should anyone care about them?  those are the kids i see on the news.  and they are the kids who are “repeaters” in and out of our school because they continue to do things that get them sent back to the alternative program.

look, i’m not going to sit here and say that all kids just make poor choices and, in the end, turn out to be successful and contributing members of our society.  it’s a bell curve just like most things…there are the outliers.  but, when a kid has to go to the hospital because of an injury (or a drug overdose, a result of a fight, etc.) and you call and call the parent/emergency contact/guardian and they either refuse to come or don’t answer their phone (even when calling for 5 hours)…what message does that send to that kid?!  that your own parents don’t even care about you, so why should you care about anyone?  you’re not worth it.  

the point of this (wordy, i know) response is NOT to make excuses for these kids.  ALL BEHAVIOR HAS CONSEQUENCES, positive or negative, and i don’t believe you get a free pass if you have a crap home life (it is my experience that the apple and the tree are never far apart), but kids usually do what they see.  from my observations, parent involvement is KEY to a successful outcome for children (however “success” is defined).  i am not going to say that parental involvement is a panacea that “cures all ills” but it is a huge factor in a child’s outcome.  so, ask yourself, HOW INVOLVED ARE YOU IN YOUR CHILD’S LIFE?  do you know their friends?  if you work, do you know what your child does when he/she gets home from school and you are not there?  do you know where your child is at any given time?  are you parenting or trying to be their friend?  while having a good relationship with your child is important, i have also seen those who try so hard to appear “cool” and be their child’s friend that they are not respected by their kids or seen as any kind of authority.  their children do not see them as someone they need to listen to and, thus, have no regard for any kind of rule their parents try to implement (if, on that rare occasion, they do try to set a boundary or make a rule) and this filters down to others in positions of authority, i.e. teachers, administrators, police.  so, be your child’s friend, someone they feel they can come to and share openly with, but…MAINTAIN THE PARENTAL ROLE.  these kids need guidance and many of their behaviors appear to be a cry for attention…the attention they are not getting elsewhere.  and, remember, negative attention is still attention.

as antoine de saint-exupery said in “the little prince” (if you have not read this gem of a book, do!  such a beautiful story with so many life-lessons), “You become responsible, forever, for what you have tamed” in relation to the prince and his beautiful rose (really, if you haven’t, go read it).  be it the tiny puppy who was SO adorable (that has now become older, maybe less adorable, and possibly wreaking havoc in the house), the cute kitten you rescued from the gutter who now needs care, or the baby you chose to have.  you should not take that dog to the pound to be killed (see: http://www.aspca.org/about-us/faq/pet-statistics.aspx) and you certainly can’t take your child to the pound.  but…disengaged parents do greater harm, in my opinion.  you have just increased the likelihood that your child will be less successful academically, have more behavior problems, and have a greater chance of being involved negatively with the law; the list goes on and on.  just as that now grown puppy that relied on you for feeding, walking, and care can’t take care of itself, neither can your child.  they need you and you have a responsibility to them.  that puppy had no say in whether or not you took him home and that child did not have a say when you became their parent.  you chose this; TAKE CARE OF YOUR RESPONSIBILITIES.  be FOREVER responsible.  even if it means breaking the cycle and raising your child in a way different from the way you were raised (the old, “i got beat as a kid, so i beat my kid” cycle).

become an ACTIVE participant in your child’s education and school life.  an ACTIVE participant in your child’s social life.  an ACTIVE participant in your child’s spiritual/moral/ethical life.  demand to know who your child’s friends are (up until i graduated high school, my mother had to meet every single person i went out with, was friends with, dated…while i may have been embarrassed at my mom calling my prospective date’s mother and insisting on meeting the person before we went out, i knew that this was only because she cared about me.  and, as far as i knew, this was normal because all my friends had to do the same), be involved in their schooling (demand to see homework, keep up with their grades and attendance, have some sort of presence at school, even if it’s just emailing teachers to let them know you are involved and supportive and will be there if needed).  believe that these things have a direct impact on your child’s future.  the child you chose to bring into this world.  the child you “tamed.”

moreover, know that should you decide that this is not your priority (you have your own life to live and you send your child to school to learn life lessons you should be teaching…yes, there are many parents who expect the schools/teachers/coaches to “raise” their children)…you WILL have to deal with them at some point.  be it via a phone call from jail or the morgue.  gruesome, i realize, but a fact.  at the very least, the less involved you are in their education, the more likely it is that they will not be as successful in school and may not have as many options open to them after.  one way or another, you will have to deal with your child.  do you want to do it now and help mold them into who they could become, or when it might be too late and they (and you) are paying for their mistakes later (i.e. not able to get a job, not able to go to college, have a police record, are in jail…you can imagine the host of outcomes)?  am i saying that parental involvement is a panacea and the end-all-be-all?  absolutely not…not at all.  but, don’t you want to give your child every opportunity to be successful in their life and happy?  to be a contributor to society and not a burden?  this is one way to do so.
please…be it a child, animal, or rose…be responsible for what you have tamed.

***

for information on parental involvement and student outcome, please see:

http://www.hfrp.org/content/download/1340/48835/file/fi_adolescent.pdf 

for information on parenting styles and outcomes, please see:

http://www.athealth.com/practitioner/ceduc/parentingstyles.html

“You become responsible, forever, for what you have tamed.”

musings on…teachers

In Education, Pedagogy on Saturday, 22 September 2012 at 07:09

in response to a FABULOUS post by gpicone…please go read it!  such fabulous points! http://ipledgeafallegiance.wordpress.com/2012/09/21/teachers-and-tenure/#respond

i wanted to post my response here as it is a subject i have wanted to talk about in relation to what i see, as a non-teaching public school employee.

***

EXCELLENT points and fabulous way to get them across.  i am in georgia where we are a “right-to-work” state and have no unions and we still have a whole mess of problems.  so, those who say it is the teachers’ unions that cause all the ills in education can come to a non-union state and see the EXACT same issues.  ironically, in my early years of working in the schools (probably about 9 or so years ago) a kid put “something” in a teacher’s coffee.  i don’t recall what he put in there (i believe it was some sort of chemical…definitely poisonous; charges were filed by the teacher, not the school or the system).  the particulars escape me now.  things i have seen in the schools: the kid who ruined a principal’s eyesight in one eye permanently by flashing a laser directly into his eye; my dear friend who had a kid take photos of her UNDER HER SKIRT (?!?!) while she was standing with another student helping him work a problem (did i mention that she was wearing a past-the-knee-length skirt?); teachers who receive written or verbal death threats (there have been many of those); the kid who threw a teacher into a wall; or the 19-year-old freshman who molested a 15-year-old girl with serious intellectual disabilities IN FRONT OF OTHER STUDENTS IN THE CAFETERIA (there was supervision, but the cafeteria is huge and a few adults can’t be everywhere)…i will spare the gory details as to what he made her do.  these are just examples that i can readily think of; there are so very many, unfortunately.  and, i am in a very “wealthy” area (i.e. these are generally kids from families with a high SES).  the stories get more violent as SES decreases or in the less wealthy areas.
i am NOT a teacher (i don’t think i could ever do that job effectively and i admire teachers who teach for the love of teaching and believe THEY are the true heroes), but i do work in the public schools (i am a school psychologist).  teaching appears to have become about babysitting, behavior management, raising test scores however you can (see the atlanta public schools cheating scandal for one example: http://www.ajc.com/news/news/local/investigation-into-aps-cheating-finds-unethical-be/nQJHG/), trying to make sure you don’t get attacked or sued, etc.  i am not sure how teachers are able to really teach (i am talking about teaching via true dialectic and experiential methods), and if they do, how they do it with the overtime demands (being right-to-work, it says in our contracts that we are to work the 8-hour school day “AND any other hours/days you are asked by your principal or direct supervisor” WITHOUT compensation).  we are not allowed to take a lunch break unless we are DIRECTLY supervising kids (i.e. eating with the kids) or we must make up whatever time we take to eat.  even eating while, say, writing a report or entering grades is not considered “direct supervision” and that time must be made up that day (to all those who complain that teachers get so much time off…what if you were told you could not take a lunch break without having to “make up” the time?  no “business lunches.”  heck, my teachers can’t even go to the bathroom when they have to because they can’t leave the kids or it is considered “abandonment of duties.”  i don’t think that happens in the corporate world).  that’s one of the reasons i don’t eat until i am off work; it’s just too much hassle.  i think people really need to realize we don’t have these cushy government jobs…especially in those states in which there are no unions to attempt to at least get fair compensation (i.e. yes, we will work those extra hours like open house and such, but let us get some sort of compensation.  on open house evenings, my teachers come in at regular time in the morning and must stay until 8 or 9 at night for the open house…that’s a long day and it is required or you lose your job).

as far as compensation goes (and i can only speak for myself in regard to this): since i started working in the public schools 11 years ago, my salary has DROPPED by about $14,000.  i am making LESS now than when i signed my first contract 11 years ago!  not to mention that higher degree that i worked so hard for…our “degree credit” (really about $3000 a year) was cut in half for anyone who does not work in the classroom.  that, along with decreasing my work days from 205 to 190, furloughs, increases in the cost of benefits, etc., has all contributed to the huge decrease in pay.  what other job is like that, where in addition to not receiving any added monies (i.e. a raise of any kind) in 6 years, your salary DECREASES each year?  and, did i mention that our department was cut by 25% a few years ago, but more schools and responsibilities were added on (in essence, we are now responsible for doing 125%…i know…try and figure out how that works).  yet, i can honestly say i don’t do this for the money and nothing is better than the feeling i get when i have really helped a student and/or their family and made a connection.  my soul is fed by “my kids” but it does get crushed each time i get a ‘message’ from the higher-ups that i am nothing more than just another employee/number and easily replaceable, or that the higher degree you got to make yourself more effective at your job just doesn’t mean anything (i.e. why further your skill set and knowledge base when all you get from it is mounting student debt and the message that higher education is not important?  i.e. loss of degree credit).  it’s just…sad.  kids are the future, yet we treat those who help to mold them into who they are going to be awfully and with little respect.  instead, it sometimes feels like glorified babysitting.

i would challenge anyone who thinks teachers make too much money (laughable), have too much time off (not when you add all the extra hours from taking work home and various mandatory after-school functions), or have total job security via tenure, thus allowing teachers to not teach and not be held accountable, to come see the reality firsthand.  in my state, the KIDS’ comments and impressions are now part of a teacher’s evaluation…thus, a student who is angry with a teacher for failing/disciplining/assigning homework/etc. is going to be part of what determines if a teacher keeps their job or not (not to mention that that teacher BETTER show an improvement in academic achievement based on standardized testing).  we all know kids can be reactionary and, if angry or if they just don’t like someone, will not hesitate to rate that teacher as “unfit” and probably not rate them accurately or objectively.  please see http://www.washingtonpost.com/blogs/answer-sheet/post/georgia-professors-blast-teacher-evaluation-system/2012/07/09/gJQAFhSbZW_blog.html for a very salient case made regarding teacher evaluations in georgia.

i could go on (and on and on) but there’s no point.  many people will believe what they want to believe, even when shown data to the contrary.  i just wish those who think teaching is “cake” would go try it for even a day and have to competently fulfill all duties (endless meetings, data collection (RTI), accommodations and modifications, individualized and differentiated instruction, paperwork, keeping class blogs updated, grading homework and classwork, administering the many assessments that are required, thus eating into instruction time BUT being expected to cover ALL material in the curriculum so that progress (i.e. higher scores) can be quantified, the list goes on).  whoever thinks all teachers do is stand up in front of a class and actually teach is sorely misled.  that’s why i TRULY believe, as jonathan kozol says: “Teachers are my heroes; they are the most courageous people in this country.”  we need to start treating them as such instead of bashing the profession.  after all, if you have a physician who is negligent, do you then go and bash ALL physicians based on one or two bad examples?  no.  while the unscrupulous teachers get the press, there are ordinary heroes in our classrooms every day.  those who teach for the love of teaching and for the love of kids.  there are many more of those than bad ones…you just don’t hear about them.  while we revere physicians for being able to heal, why don’t we share some of that same reverence for teachers?  THEY are the ones who spend 8 hours a day, approximately 180 days a year, preparing YOUR children for the future (often, more cumulative time than parents spend with their own children).  maybe not performing a life-saving surgery, but helping to prepare these kids for the future.  THEY should be our heroes and celebrities.  and, unlike celebrities, they are CERTAINLY not in it for the money or accolades…

just my opinion…

Your brain on books…

In Brain imaging, Brain studies, Education, Neuroscience on Friday, 21 September 2012 at 04:30

today’s technology has led to SO many amazing breakthroughs and allows for new and more precise understanding of so many things: possible genetic markers for autism (and a possible vaccine), neurogenesis, new understandings of ADHD and treatment implications, just to name a few i have posted about.  following is an article on reading and our brains.  while fMRIs have been around a while, it is through such instruments that we are really changing our understanding of the brain and its related functions.  i remember that when i first read yvette sheline’s article* on hippocampal volume in 2003, i realized we were at a turning point in brain research and understanding how the brain works.  from that point on, my interest in all things ‘neuro’ became much stronger.  i can’t conceptualize what i would do if i didn’t have reading as an outlet.  i know, for some, reading is a “necessary evil” and, while those people might read all day at work (work-related), they are unlikely to read for pleasure.  maybe this research will help to convince people that reading has SO many benefits besides being able to get completely lost in a book.  happy reading, everyone!

*Sheline’s article: http://www.ncbi.nlm.nih.gov/pubmed/12900317 (the one that hooked me)

MRI reveals brain’s response to reading

Posted By Stanford On September 10, 2012 @ 4:13 pm In Science & Technology

STANFORD (US) — Researchers asked people to read Jane Austen in an MRI machine, and say the surprising results suggest reading closely could be “training” for our brains.

Neurobiological experts, radiologists, and humanities scholars are working together to explore the relationship between reading, attention, and distraction—by reading Jane Austen.

Surprising preliminary results reveal a dramatic and unexpected increase in blood flow to regions of the brain beyond those responsible for “executive function,” areas which would normally be associated with paying close attention to a task, such as reading, says Natalie Phillips, the literary scholar leading the project.

During a series of ongoing experiments, functional magnetic resonance images track blood flow in the brains of subjects as they read excerpts of a Jane Austen novel. Experiment participants are first asked to skim a passage leisurely as they might do in a bookstore, and then to read more closely, as they would while studying for an exam.

Phillips says the global increase in blood flow during close reading suggests that “paying attention to literary texts requires the coordination of multiple complex cognitive functions.” Blood flow also increased during pleasure reading, but in different areas of the brain. Phillips suggests that each style of reading may create distinct patterns in the brain that are “far more complex than just work and play.”

The experiment focuses on literary attention, or more specifically, the cognitive dynamics of the different kinds of focus we bring to reading. This experiment grew out of Phillips’ ongoing research about Enlightenment writers who were concerned about issues of attention span, or what they called “wandering attention.”

Phillips, who received her PhD in English literature at Stanford in 2010, is now an assistant professor of English at Michigan State University. She says one of the primary goals of the research is to investigate the value of studying literature.

Beyond producing good writers and thinkers, she is interested in “how this training engages the brain.”

The research is “one of the first fMRI experiments to study how our brains respond to literature,” Phillips says, as well as the first to consider “how cognition is shaped not just by what we read, but how we read it.”

Print overload

Critical reading of humanities-oriented texts is recognized for fostering analytical thought, but if such results hold across subjects, Phillips says it would suggest “it’s not only what we read—but thinking rigorously about it that’s of value, and that literary study provides a truly valuable exercise of people’s brains.”

Though modern life’s cascade of beeps and buzzes certainly prompts a new kind of distraction, Phillips warns against “adopting a kind of historical nostalgia, or assuming those of the 18th century were less distracted than we are today.”

Many Enlightenment writers, Phillips notes, were concerned about how distracted readers were becoming “amidst the print-overload of 18th-century England.”

Rather than seeing the change from the 18th century to today as a historical progression toward increasing distraction, Phillips likes to think of attention in terms of “changing environmental, cultural, and cognitive contexts: what someone’s used to, what they’re trying to pay attention to, where, how, when, for how long, etc.”

Ironically, the project was born out of a moment of distraction. While sitting on a discussion panel (which happened to be one of the first on cognitive approaches to literature), Phillips found herself distracted from the talk by the audience’s varieties of inattention: “One man was chatting to his neighbor; another person was editing their talk; one guy was looking vaguely out the window; a final one had fallen asleep.”

The talk inspired Phillips to consider connections between her traditional study of 18th-century literature and a neuroscientific approach to literary analysis.

Phillips was especially intrigued by the concept of cognitive flexibility, which she defines as “the ability to focus deeply on one’s disciplinary specialty, while also having the capacity to pay attention to many things at once,” such as connections between literature, history of mind, philosophy, neuroscience, and so on.

Samantha Holdsworth, a research scientist specializing in MRI techniques, recalls an early conversation about the project when two scientists were trying to communicate with three literary scholars: “We were all interested, but working at the edge of our capacity just to understand even 10 percent of what each other were saying.”

Heightened attention

After working through the challenges of disciplinary lingo, the team devised a truly interdisciplinary experiment. Participants read a full chapter from Mansfield Park, which is projected onto a mirror inside an MRI scanner. Together with a verbal cue, color-coding on the text signals participants to move between two styles of attention: reading for pleasure or reading with a heightened attention to literary form.

The use of the fMRI allows for a dynamic picture of blood flow in the brain, “basically, where neurons are firing, and when,” says Phillips. Eye-tracking compatible with fMRI shows how people’s eyes move as they read. As Phillips explains, the micro-jumps of the eyes “can be aligned with the temporal blood flow to different regions in the brain.”

When participants are done with a chapter, they leave the scanner and write a short literary essay on the sections they analyzed closely. The test subjects, all literary PhD candidates from the Bay Area, were chosen because Phillips felt they could easily alternate between close reading and pleasure reading.

After reviewing early scans, neuroscientist Bob Doherty, director of the Stanford Center for Cognitive and Neurobiological Imaging (CNI), says he was impressed by “how the right patterns of ink on a page can create vivid mental imagery and instill powerful emotions.”

Doherty was also surprised to see how “a simple request to the participants to change their literary attention can have such a big impact on the pattern of activity during reading.”

The researchers expected to see pleasure centers activating for the relaxed reading and hypothesized that close reading, as a form of heightened attention, would create more neural activity than pleasure reading.

If the ongoing analysis continues to support the initial theory, Phillips says, teaching close reading (i.e., attention to literary form) “could serve—quite literally—as a kind of cognitive training, teaching us to modulate our concentration and use new brain regions as we move flexibly between modes of focus.”

With the field of literary neuroscience in its infancy, Phillips says this project is helping to demonstrate the potential that neuroscientific tools have to “give us a bigger, richer picture of how our minds engage with art—or, in our case, of the complex experience we know as literary reading.”

Source: Stanford University

Article printed from Futurity.org: http://www.futurity.org

URL to article: http://www.futurity.org/science-technology/mri-reveals-brain%e2%80%99s-response-to-reading/

For more information on Dr. Sheline and her work: http://wuphysicians.wustl.edu/physician2.aspx?PhysNum=1319

More on insomnia…

In Medication, Neuropsychology, Psychiatry on Thursday, 20 September 2012 at 06:30

Expert Interview – Emerging Concepts and Therapies in Insomnia: An Expert Interview With Daniel Buysse, MD

Daniel J. Buysse, MD, 2006

http://www.medscape.org/viewarticle/519857

Editor’s Note:

Marni Kelman, MSc, Medscape Neurology & Neurosurgery Editorial Director, discussed emerging concepts and therapies in insomnia with Daniel Buysse, MD, Professor of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania. Insomnia in older adults, comorbid insomnia, new treatments on the horizon for insomnia, and new endpoints for therapeutic effectiveness of insomnia treatments were discussed.

Medscape: This issue of Current Perspectives in Insomnia includes a column on sleep disorders in older adults, with a focus on insomnia. What would you say is the impact of insomnia in older patients?

Dr. Buysse: The impact of insomnia in general is pretty wide-ranging, and some of those impairments are even greater in older people. Insomnia can have negative effects on a person’s mood the next day, on their concentration, on their energy level, or can cause fatigue or even sleepiness. Since older adults might experience these things for other reasons, insomnia tends to make them even worse.

Medscape: Are there any particular things that you take into consideration when you diagnose insomnia in an older patient?

Dr. Buysse: Because older adults will so commonly have medical or psychiatric comorbidity, it’s very important to look for those things. Older adults can have medical conditions that can cause pain, difficulty breathing, or impaired mobility, and all of those things can worsen insomnia. Older adults are also at risk for depression, which is the most common comorbid condition seen with insomnia. In addition, older adults are typically the ones who are taking the most medications, and because many medications can have effects on sleep, including insomnia, it’s very important to assess the effects of medications as well.

There are also a number of behavioral factors that can contribute to insomnia — things like going to bed early or spending too much time in bed — and those things, too, affect older adults disproportionately, since in older adults, limitations in daytime activities may leave them with fewer alternatives to going to bed. So, from both a medical perspective and a behavioral perspective, older adults are at risk.

There are a couple of sleep disorders that are more common in older adults that may be associated with insomnia symptoms, and those include restless legs syndrome and periodic limb movement disorder. There is almost certainly an increase in periodic limb movements with age, and again, this can lead to, or be associated with insomnia complaints. Older adults also have an increased incidence of sleep apnea, and compared to younger adults, sleep apnea may less commonly be associated with obesity, and less commonly associated with daytime sleepiness as the primary presenting complaint. The combination of sleep apnea with insomnia seems to be something that is disproportionately common in older adults.

The final sleep disorder that is common in older patients and that can cause insomnia is advanced sleep-phase syndrome. An individual with this syndrome feels very sleepy and goes to sleep early in the evening but then has insomnia characterized by early-morning awakening and an inability to return to sleep. This condition may be related to certain circadian changes that accompany aging.

Medscape: What special considerations do you take into account when you treat older patients with insomnia?

Dr. Buysse: With regard to medications, one needs to proceed a bit more cautiously for 2 reasons. First, older adults may have changes in drug-metabolizing enzymes, so they may metabolize drugs more slowly, or store drugs disproportionately longer because of an increase in the relative amount of body fat. This means that the same drug may have a longer than expected action in older adults. Second, older adults are typically on multiple medications, and some may have additive effects with some of the medications that we give for sleep.

With regard to behavioral treatments, I think the main thing to keep in mind is that older adults can and do benefit from those kinds of treatments as well. So the main message there is to not assume that older adults can’t learn these techniques; they can, and several studies have shown that they can be very effective.

Medscape: Are there particular types of medications that you use in the elderly and/or avoid?

Dr. Buysse: Generally, the approved hypnotic medications are appropriate for older adults, but you do need to be cautious, so it’s often wise to begin with a lower dose than you would use in younger and middle-aged adults. Because of the sensitivity that older people may have to the cognitive side effects of hypnotic drugs, in general, you would want to use a short-acting drug whenever possible to avoid the daytime cognitive and sedative consequences of hypnotic medications. The new hypnotic medication, ramelteon, may be particularly useful in older adults because it has very, very few — actually no — demonstrated cognitive side effects. So that may be a useful drug. The question there is whether it’s actually long enough acting to help with some of the sleep-maintenance problems that older adults might have.

Sedating antidepressants are pretty commonly used, as are antihistamines for the treatment of insomnia. Antihistaminic drugs should be approached with particular caution in the elderly because they often have anticholinergic effects that can worsen cognition and even lead to adverse consequences, such as delirium and urinary retention. One also needs to be careful when using sedating antidepressants in older adults.

Medscape: Another topic that we have discussed in this newsletter is insomnia associated with psychiatric and medical disorders. Are there particular considerations that you take into account when diagnosing those types of patients as well as treating them?

Dr. Buysse: The previous assumption was that if insomnia is associated with another condition, one would be best off just treating that other condition, and then the insomnia should get better. While there is clearly some evidence that treating comorbid conditions does lead to some improvement in insomnia, in many individuals insomnia may persist, even when the other disorder is adequately or optimally treated. In those cases, it may be useful to think of insomnia as a comorbid condition rather than as, strictly speaking, a symptom of that other disorder. If you think of insomnia as a comorbid condition, then in many cases it’s appropriate to direct treatment at the insomnia itself.

There is certainly emerging evidence that treating insomnia specifically does lead to improvement in sleep among patients with either medical conditions or psychiatric conditions. However, there is also a small, but growing, body of evidence that treating insomnia may actually lead to better outcomes of the comorbid medical or psychiatric condition itself.

Medscape: I would also like to ask for your feedback on emerging treatments for insomnia. Are there particular new therapeutic targets for therapy that you think are most promising?

Dr. Buysse: There are a lot of different therapeutic targets that are being examined, and I think the first general thing to say is that this is great because it’s unlikely that insomnia in all people results from the same problem. Therefore, having different ways to impact sleep just makes sense. The other point is that the regulation of sleep itself is very complex and involves multiple neurotransmitter systems. So again, having drugs that target different neurotransmitter systems just makes good sense.

Having said that, there are new agents under investigation that affect the GABA-benzodiazepine receptors and have modified-release preparations so that you can combine a reasonably long duration of action with a short half-life. That means that there is the possibility of providing adequate coverage of insomnia for the entire night, but rapid metabolism of the medication occurs toward the end of the night so that there are fewer daytime consequences.

That’s one strategy. Another strategy is to look at GABA reuptake or extrasynaptic GABA receptors. Other neurotransmitter systems are also being investigated, including serotonin 5HT2A receptors. Antagonists at that receptor have different effects on sleep, so that will be interesting to investigate. Different companies are looking into medications that interact with hypocretin or orexin receptors. That, too, promises, I think, to be a pretty exciting development.

Medscape: There has been some discussion about using new therapeutic endpoints for insomnia, for example, alertness, decreased depression, or decreased daytime napping. How do you feel about this, and what do you think are the most promising new endpoints that should be considered when looking at therapeutic effectiveness?

Dr. Buysse: I think that this is a very important area because patients with insomnia complain not only because their nighttime sleep is disturbed, but because that disturbance is associated with daytime consequences. Therefore, I think that the most interesting areas to look at are those that assess the daytime complaints presented by people with insomnia. One area is the routine assessment of mood symptoms and problems. We’ve been working on some data that show that it may be important not only to assess the person’s mood, but to evaluate how mood changes during the course of the day. So, looking at time-of-day effects may be very important. The second area to assess is fatigue, which is so commonly reported by people with insomnia and can be reliably measured with a number of rating scales. That should certainly be a focus of increased attention.

An area that has been somewhat perplexing, but very important, is the measurement of cognitive difficulties in people with insomnia. There have not been a lot of positive studies in this regard, so despite the fact that people complain of difficulty concentrating or problems with alertness, actually demonstrating impairments has generally not met with success. This may be due to the fact that the tools we have used have been of the wrong type or are not sensitive enough. So, I think trying to identify and develop tests that objectively measure daytime performance as related to the insomnia complaints would be very beneficial as well.

Medscape: What would you consider to be the biggest challenges in insomnia today?

Dr. Buysse: For behavioral and psychological treatments, the big challenge is making those treatments more widely available. We have several techniques that have demonstrated efficacy, but trying to really position them in the community so they have a wide impact is the challenge.

For medications, one of the biggest challenges is developing strategies for longer-term management of insomnia. We know that insomnia tends to be a chronic or recurring condition, and there is still uncertainty about the optimal way to manage chronic insomnia with medications.

The more general thing that I would say pertains to both behavioral and pharmacologic treatment: We really are in very substantial need of empirically supported treatment guidelines or treatment algorithms. We know that we have several efficacious treatments, but we don’t know how best to sequence them, how to target them to specific patients, and how to change from one to the other when the first treatment does not meet with success.

 

how to look lovely

In Inspiration, Mindfulness, Well-being on Thursday, 20 September 2012 at 05:58

ASD interventions

In Autism Spectrum Disorders, School Psychology on Thursday, 20 September 2012 at 04:29

Interventions for Adolescents and Young Adults

By: Lee Wilkinson, Ph. D

Although it would seem obvious that children with ASD will eventually transition to adolescence and adulthood, there is a paucity of information about effective interventions for these age groups compared to data for younger children. Even though the core symptoms of ASD (impairments in communication and social interaction and restricted/repetitive behaviors and interests) may improve over time with intervention for many individuals, some degree of impairment typically remains throughout the lifespan. Consequently, the focus of intervention/treatment must shift from remediating core deficits in childhood to promoting adaptive behaviors that can facilitate and enhance ultimate functional independence and quality of life in adulthood. This includes new developmental challenges such as independent living, vocational engagement, post-secondary education, and family support. Unfortunately, there is evidence to suggest that improvements in symptoms and problem behaviors may decrease or end once youth with ASD transition from school-based programs. This is likely due, at least in part, to the termination of services received through the secondary school system upon exiting from high school, as well as the lack of adult services. The lack of services available to help young adults with ASD transition to greater independence has been noted by researchers for a number of years and has become an increasingly important issue as the prevalence of ASD continues to grow and as children identified with ASD reach adolescence and adulthood.

Comparative Effectiveness Review

What are the effects of currently available interventions/treatments on adolescents and young adults with ASD? To answer this question, researchers at the Vanderbilt Evidence-based Practice Center systematically reviewed evidence on therapies for adolescents and young adults (ages 13 to 30) with autism spectrum disorders (ASD). Their review focused on the outcomes, including harms and adverse effects, of interventions addressing the core symptoms of ASD; common medical and mental health comorbidities occurring with ASD; the attainment of goals toward functional/adult independence; educational and occupational/vocational attainment; quality of life; access to health and other services; and the transitioning process (e.g., process of transitioning to greater independent functioning). Researchers also addressed the effects of interventions on family outcomes, including parent distress and satisfaction with interventions.

Of more than 4,500 studies on autism interventions published between 1980 and 2011, only 32 focused on interventions/therapies for individuals ages 13 to 30. Most of the studies available were of poor quality, which may reflect the relative recency of the field. Five studies, primarily of medical interventions, had fair quality. Behavioral, educational, and adaptive/life skills studies were typically small and short term and suggested some potential improvements in social skills and functional behavior. Small studies suggested that vocational programs may increase employment success for some individuals. Few data are available to support the use of medical or allied health interventions in the adolescent and young adult population. The medical studies that have been conducted focused on the use of medications to address specific challenging behaviors, including irritability and aggression, for which effectiveness in this age group is largely unknown and inferred from studies including mostly younger children. However, antipsychotic medications and serotonin reuptake inhibitors were associated with improvements in specific challenging behaviors. Similarly, little evidence supports the use of allied health interventions including facilitated communication.

Conclusion

Despite an increasing population of adolescents and young adults identified with an ASD and the need for effective intervention across the lifespan, very few studies have been conducted to assess treatment approaches for adolescents and young adults with ASD. Moreover, the available research is lacking in scientific rigor. As a result, there is little evidence available for specific treatment approaches in this population, especially evidence-based approaches to support the transition of youth with ASD to adulthood. Families, in particular, have few proven interventions to draw on for optimizing the transition of teens with autism into adulthood. Research is needed across all intervention types, including research on which outcomes to use in future studies. “Overall, there is very little evidence in all areas of care for adolescents and young adults with autism, and it is urgent that more rigorous studies be developed and conducted,” commented Melissa McPheeters, director of Vanderbilt’s Evidence-Based Practice Center and senior author of the report. “There are growing numbers of adolescents and adults with autism in need of substantial support. Without a stronger evidence base, it is very hard to know which interventions will yield the most meaningful outcomes for individuals with autism and their families,” said Zachary Warren of Vanderbilt, who also contributed to the report.

Lounds Taylor J, Dove D, Veenstra-VanderWeele J, Sathe NA, McPheeters ML, Jerome RN, Warren Z. Interventions for Adolescents and Young Adults With Autism Spectrum Disorders. Comparative Effectiveness Review No. 65. (Prepared by the Vanderbilt Evidence-based Practice Center under Contract No. 290-2007-10065-I.) AHRQ Publication No. 12-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality. August 2012. http://www.effectivehealthcare.ahrq.gov/reports/final.cfm

The complete report is available at: http://effectivehealthcare.ahrq.gov/ehc/products/271/1196/CER65_Autism-Young-Adults_20120723.pdf

Lee A. Wilkinson, PhD, CCBT, NCSP is author of the award-winning book, A Best Practice Guide to Assessment and Intervention for Autism and Asperger Syndrome in Schools, published by Jessica Kingsley Publishers.

Dr. Wilkinson can be reached at: http://bestpracticeautism.com

Retrieved from: http://www.examiner.com/article/interventions-for-adolescents-and-young-adults-with-asd?goback=.gde_58284_member_165763295

 

Cyberbullying in the schools

In Education, School Psychology, Special Education on Wednesday, 19 September 2012 at 06:19

in this age of computers, smartphones, twitter, facebook, etc., it has become easier and easier to ‘broadcast’ anything, even massive negativity.  i have seen an upswing in the amount of cyberbullying of students by other students year after year.  i have many examples, and will share one from my middle school.  we had a child with asperger’s who ‘liked’ a girl.  while in the cafeteria, some girls convinced him that he should “ask her out.”  now, the girl he liked was in on the joke.  so, he walked up to the girl in the cafeteria and did what they told him…asked her out.  she pretended to be flattered and accepted.  well, my student got very excited and started “flapping” and very obviously (and in his very asperger’s way) showed his excitement.  as you can imagine, this boy, who didn’t really get attention from his peers, and especially girls, put on a bit of a ‘show.’  that night, the whole episode was on youtube.  all i can be thankful for is that my student did not know of it.  but…when trying to find some disciplinary action to take via the school system and anti-bullying policy, we could not, as the incident happened at home (via their home computers) and not at school.  while we did call the parents and have it removed, that was about all we could do.  and this is a MILD story.  i have so many more in which the student being cyberbullied DID know what was being posted/written and there was little we could do about it.  kids who send sexually explicit photos to others, kids who post death threats to other kids, kids who arrange bullying ‘events’ via social media and get others involved…the list goes on and on.

on another note, i have also had teacher friends of mine videoed in class; the videos were then carefully edited for maximum effect and posted on youtube.

as a side note, our district does have some leeway now to deal with cyberbullying, but in my opinion, it is not enough.

so, the following article holds promise for cyberbullying.

***

Teachers Fight Online Slams

Amid Free-Speech Concerns, Law Targets Comments That ‘Torment’ Faculty

By STEVE EDER

After years spent trying to shield students from online bullying by their peers, schools are beginning to crack down on Internet postings that disparage teachers.

Schools elsewhere in the U.S. have punished the occasional tweeter who hurls an insult at a teacher, but North Carolina has taken it a step further, making it a crime for students to post statements via the Internet that “intimidate or torment” faculty. Students convicted under the law could be guilty of a misdemeanor and punished with fines of as much as $1,000 and/or probation.

The move is one of the most aggressive yet by states to police students’ online activities. While officials have long had the ability to regulate student speech at school, the threat of cyberbullying teachers, which typically occurs off-campus, has prompted efforts to restrain students’ use of the Internet on their own time.

Judy Kidd, a Charlotte, N.C., teacher, said teachers needed a law for ‘protection’ from online comments.

School officials in North Carolina and elsewhere say the moves are necessary to protect teachers in an age when comments posted online—sometimes by students pretending to be the teachers they are mocking—can spread quickly and damage reputations.

The North Carolina law makes it a crime for a student to “build a fake profile or web site” with the “intent to intimidate or torment a school employee.”

Critics, however, argue the law risks trampling on mere venting and other less inflammatory forms of expression.

“Our concern is that we don’t throw the First Amendment out the window in our haste to get the kid who is calling the principal bad names on Facebook,” said Frank LoMonte, executive director of the Student Press Law Center in Arlington, Va., a national group that advocates for students’ free-speech rights.

Traditional issues of free speech on public-school grounds are largely settled, thanks to a 1969 Supreme Court ruling in Tinker v. Des Moines. That ruling held that students’ First Amendment rights are generally protected on campus, but that administrators can punish them for speech on school grounds when they can clearly show it caused significant disruption to school activities or violated others’ rights.

But while past off-campus insults about a school employee were largely undetected and unpunished, cyberinsults are digitally preserved and on display for many to see.

The wide use of social media, particularly among teens, makes such platforms the go-to place for such incendiary comments.

While nearly every U.S. state has now passed measures to curb student-on-student cyberbullying, North Carolina is apparently the first to pass a law aimed at students bullying teachers online.

Courts have been mixed on the issue. Last year, the Third U.S. Circuit Court of Appeals, in two separate decisions, said two schools, both in Pennsylvania, had encroached on students’ free-speech rights by punishing them for creating social media profiles mocking their school principals. The court held that the students’ parodies, which were created off-campus, didn’t significantly disrupt the schools.

School Rules

Under a new law, North Carolina students face a fine of as much as $1,000 and/or probation if they:

  • Build a fake profile of…
  • Post a real or fake image of…
  • Post information about…
  • Or repeatedly contact…

…school employees, including teachers

In one case, Justin Layshock, a high-school student, mocked his principal in a Myspace profile parody, writing, among other things, that the principal was “too drunk to remember” his own birthday. In the other case, a middle-school student, identified in court documents only by the initials J.S., created a Myspace page to make fun of her school principal, using his photo and including among his general interests: “hitting on students and their parents.”

Yet in a separate case in Connecticut last year, the Second U.S. Circuit Court of Appeals found administrators were within the law when they disciplined Avery Doninger, a high-school student, for posting a message to her blog encouraging people to call school officials a profanity in order to protest the school’s “jamfest” being canceled.

Even though Ms. Doninger wrote the post off campus, the court held that it created a substantial enough disturbance at school to warrant punishment. Mr. Layshock and Ms. Doninger, whose cases garnered national attention, have gone on to graduate from college, their attorneys said.

In the past year, the U.S. Supreme Court has turned down opportunities to hear those three cases, as well as a fourth about student speech, which might have brought some clarity. In the fourth case, the Fourth U.S. Circuit Court of Appeals found it permissible for administrators in West Virginia to suspend a student who had created a Myspace page ridiculing another student.

The Classroom Teachers Association of North Carolina, based in Charlotte, lobbied for the teacher-bullying provisions to be included in the state’s School Violence Prevention Act of 2012 after fielding complaints about students using social media sites and email to make false accusations about school employees, said Judy Kidd, the group’s president. In one case Ms. Kidd cited, a sixth-grader sent sexually explicit emails about a teacher to other students. In another, a high-school student posted false allegations on Facebook that an instructor for the Reserve Officers’ Training Corps had groped her while fitting her for a uniform.

“It became apparent that we had to get some kind of protection,” said Ms. Kidd, a high-school science teacher in the Charlotte-Mecklenburg Schools.

Some free-speech advocates say the North Carolina law gives administrators wide latitude to go after students and possibly infringe on free speech. They say the law, which was passed in July, could be enforced against students who are making truthful statements or posting undoctored photos of staff.

Thomas Wheeler, an Indiana lawyer who represents school districts, said he hoped a case will be heard by the Supreme Court and result in clear guidance from the justices on how far schools can go to police what students say online and on social media sites. “The times have changed and we are trying to get caught up,” he said.

Write to Steve Eder at steve.eder@wsj.com

A version of this article appeared September 18, 2012, on page A3 in the U.S. edition of The Wall Street Journal, with the headline: Teachers Fight Online Slams.

Retrieved from: http://online.wsj.com/article/SB10000872396390443779404577644032386310506.html?KEYWORDS=student+online+postings&goback=.gde_159675_member_165295745

Get moving!

In Fitness/Health, Psychiatry, Well-being on Tuesday, 18 September 2012 at 16:29

The exercise effect

Evidence is mounting for the benefits of exercise, yet psychologists don’t often use exercise as part of their treatment arsenal. Here’s more research on why they should.

By Kirsten Weir

December 2011, Vol 42, No. 11

When Jennifer Carter, PhD, counsels patients, she often suggests they walk as they talk. “I work on a beautiful wooded campus,” says the counseling and sport psychologist at the Center for Balanced Living in Ohio.

Strolling through a therapy session often helps patients relax and open up, she finds. But that’s not the only benefit. As immediate past president of APA’s Div. 47 (Exercise and Sport Psychology), she’s well aware of the mental health benefits of moving your muscles. “I often recommend exercise for my psychotherapy clients, particularly for those who are anxious or depressed,” she says.

Unfortunately, graduate training programs rarely teach students how to help patients modify their exercise behavior, Carter says, and many psychologists aren’t taking the reins on their own. “I think clinical and counseling psychologists could do a better job of incorporating exercise into treatment,” she says.

“Exercise is something that psychologists have been very slow to attend to,” agrees Michael Otto, PhD, a professor of psychology at Boston University. “People know that exercise helps physical outcomes. There is much less awareness of mental health outcomes — and much, much less ability to translate this awareness into exercise action.”

Researchers are still working out the details of that action: how much exercise is needed, what mechanisms are behind the boost exercise brings, and why — despite all the benefits of physical activity — it’s so hard to go for that morning jog. But as evidence piles up, the exercise-mental health connection is becoming impossible to ignore.

Mood enhancement

If you’ve ever gone for a run after a stressful day, chances are you felt better afterward. “The link between exercise and mood is pretty strong,” Otto says. “Usually within five minutes after moderate exercise you get a mood-enhancement effect.”

But the effects of physical activity extend beyond the short-term. Research shows that exercise can also help alleviate long-term depression.

Some of the evidence for that comes from broad, population-based correlation studies. “There’s good epidemiological data to suggest that active people are less depressed than inactive people. And people who were active and stopped tend to be more depressed than those who maintain or initiate an exercise program,” says James Blumenthal, PhD, a clinical psychologist at Duke University.

The evidence comes from experimental studies as well. Blumenthal has explored the mood-exercise connection through a series of randomized controlled trials. In one such study, he and his colleagues assigned sedentary adults with major depressive disorder to one of four groups: supervised exercise, home-based exercise, antidepressant therapy or a placebo pill. After four months of treatment, Blumenthal found, patients in the exercise and antidepressant groups had higher rates of remission than did the patients on the placebo. Exercise, he concluded, was generally comparable to antidepressants for patients with major depressive disorder (Psychosomatic Medicine, 2007).

Blumenthal followed up with the patients one year later. The type of treatment they received during the four-month trial didn’t predict remission a year later, he found. However, subjects who reported regular exercise at the one-year follow-up had lower depression scores than did their less active counterparts (Psychosomatic Medicine, 2010). “Exercise seems not only important for treating depression, but also in preventing relapse,” he says.

Certainly, there are methodological challenges to researching the effects of exercise, from the identification of appropriate comparison groups to the limitations of self-reporting. Despite these challenges, a compelling body of evidence has emerged. In 2006, Otto and colleagues reviewed 11 studies investigating the effects of exercise on mental health. They determined that exercise could be a powerful intervention for clinical depression (Clinical Psychology: Science and Practice, 2006). Based on those findings, they concluded, clinicians should consider adding exercise to the treatment plans for their depressed patients.

Mary de Groot, PhD, a psychologist in the department of medicine at Indiana University, is taking the research one step further, investigating the role exercise can play in a particular subset of depressed patients: those with diabetes. It’s a significant problem, she says. “Rates of clinically significant depressive symptoms and diagnoses of major depressive disorder are higher among adults with diabetes than in the general population,” she says. And among diabetics, she adds, depression is often harder to treat and more likely to recur. The association runs both ways. People with diabetes are more likely to develop depression, and people with depression are also more likely to develop diabetes. “A number of studies show people with both disorders are at greater risk for mortality than are people with either disorder alone,” she says.

Since diabetes and obesity go hand-in-hand, it seemed logical to de Groot that exercise could effectively treat both conditions. When she reviewed the literature, she was surprised to find the topic hadn’t been researched. So, she launched a pilot project in which adults with diabetes and depression undertook a 12-week exercise and cognitive-behavioral therapy (CBT) intervention program (Diabetes, 2009). Immediately following the program, the participants who exercised showed improvements both in depression and in levels of A1C, a blood marker that reflects blood-sugar control, compared with those in a control group. She’s now undertaking a larger study to further explore exercise and CBT, both alone and in combination, for treating diabetes-related depression.

Fight-or-flight

Researchers have also explored exercise as a tool for treating — and perhaps preventing — anxiety. When we’re spooked or threatened, our nervous systems jump into action, setting off a cascade of reactions such as sweating, dizziness, and a racing heart. People with heightened sensitivity to anxiety respond to those sensations with fear. They’re also more likely to develop panic disorder down the road, says Jasper Smits, PhD, Co-Director of the Anxiety Research and Treatment Program at Southern Methodist University in Dallas and co-author, with Otto, of the 2011 book “Exercise for Mood and Anxiety: Proven Strategies for Overcoming Depression and Enhancing Well-being.”

Smits and Otto reasoned that regular workouts might help people prone to anxiety become less likely to panic when they experience those fight-or-flight sensations. After all, the body produces many of the same physical reactions — heavy perspiration, increased heart rate — in response to exercise. They tested their theory among 60 volunteers with heightened sensitivity to anxiety. Subjects who participated in a two-week exercise program showed significant improvements in anxiety sensitivity compared with a control group (Depression and Anxiety, 2008). “Exercise in many ways is like exposure treatment,” says Smits. “People learn to associate the symptoms with safety instead of danger.”

In another study, Smits and his colleagues asked volunteers with varying levels of anxiety sensitivity to undergo a carbon-dioxide challenge test, in which they breathed CO2-enriched air. The test often triggers the same symptoms one might experience during a panic attack: increased heart and respiratory rates, dry mouth and dizziness. Unsurprisingly, people with high anxiety sensitivity were more likely to panic in response to the test. But Smits discovered that people with high anxiety sensitivity who also reported high activity levels were less likely to panic than subjects who exercised infrequently (Psychosomatic Medicine, 2011). The findings suggest that physical exercise could help to ward off panic attacks. “Activity may be especially important for people at risk of developing anxiety disorder,” he says.

Smits is now investigating exercise for smoking cessation. The work builds on previous research by Bess Marcus, PhD, a psychology researcher now at the University of California San Diego, who found that vigorous exercise helped women quit smoking when it was combined with cognitive-behavioral therapy (Archives of Internal Medicine, 1999). However, a more recent study by Marcus found that the effect on smoking cessation was more limited when women engaged in only moderate exercise (Nicotine & Tobacco Research, 2005).

Therein lies the problem with prescribing exercise for mental health. Researchers don’t yet have a handle on which types of exercise are most effective, how much is necessary, or even whether exercise works best in conjunction with other therapies.

“Mental health professionals might think exercise may be a good complement [to other therapies], and that may be true,” says Blumenthal. “But there’s very limited data that suggests combining exercise with another treatment is better than the treatment or the exercise alone.”

Researchers are starting to address this question, however. Recently, Madhukar Trivedi, MD, a psychiatrist at the University of Texas Southwestern Medical Center, and colleagues studied exercise as a secondary treatment for patients with major depressive disorder who hadn’t achieved remission through drugs alone. They evaluated two exercise doses: One group of patients burned four kilocalories per kilogram each week, while another burned 16 kilocalories per kilogram weekly. They found both exercise protocols led to significant improvements, though the higher-dose exercise program was more effective for most patients (Journal of Clinical Psychiatry, 2011).
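
To put those doses in concrete terms, here is a back-of-the-envelope illustration; the 70 kg body weight is an assumed example for illustration, not a figure from the study:

```python
# Translate the two exercise doses from the Trivedi study into weekly
# calorie targets for an assumed 70 kg patient (illustrative weight only).

def weekly_kcal(dose_kcal_per_kg: float, body_mass_kg: float) -> float:
    """Weekly energy expenditure implied by a kcal/kg/week exercise dose."""
    return dose_kcal_per_kg * body_mass_kg

low = weekly_kcal(4, 70)    # low-dose protocol: 280 kcal per week
high = weekly_kcal(16, 70)  # high-dose protocol: 1,120 kcal per week
print(low, high)
```

At a typical moderate-exercise burn rate, that difference is roughly the gap between one short walk and several sustained workouts per week.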

The study also raised some intriguing questions, however. In men and women without family history of mental illness, as well as men with family history of mental illness, the higher-dose exercise treatment proved more effective. But among women with a family history of mental illness, the lower exercise dose actually appeared more beneficial. Family history and gender are moderating factors that need to be further explored, the researchers concluded.

Questions also remain about which type of exercise is most helpful. Most studies have focused on aerobic exercise, though some research suggests weight training might also be effective, Smits says. Then there’s the realm of mind-body exercises like yoga, which have been practiced for centuries but have yet to be thoroughly studied. “There’s potential there, but it’s too early to get excited,” he says.

Buffering the brain

It’s also unclear exactly how moving your muscles can have such a significant effect on mental health. “Biochemically, there are many things that can impact mood. There are so many good, open questions about which mechanisms contribute the most to changes in depression,” says de Groot.

Some researchers suspect exercise alleviates chronic depression by increasing serotonin (the neurotransmitter targeted by antidepressants) or brain-derived neurotrophic factor (which supports the growth of neurons). Another theory suggests exercise helps by normalizing sleep, which is known to have protective effects on the brain.

There are psychological explanations, too. Exercise may boost a depressed person’s outlook by helping him return to meaningful activity and providing a sense of accomplishment. Then there’s the fact that a person’s responsiveness to stress is moderated by activity. “Exercise may be a way of biologically toughening up the brain so stress has less of a central impact,” Otto says.

It’s likely that multiple factors are at play. “Exercise has such broad effects that my guess is that there are going to be multiple mechanisms at multiple levels,” Smits says.

So far, little work has been done to unravel those mechanisms. Michael Lehmann, PhD, a research fellow at the National Institute of Mental Health, is taking a stab at the problem by studying mice — animals that, like humans, are vulnerable to social stress.

Lehmann and his colleagues subjected some of their animals to “social defeat” by pairing small, submissive mice with larger, more aggressive mice. The alpha mice regularly tried to intimidate the submissive rodents through the clear partition that separated them. And when the partition was removed for a few minutes each day, the bully mice had to be restrained from harming the submissive mice. After two weeks of regular social defeat, the smaller mice explored less, hid in the shadows, and otherwise exhibited symptoms of depression and anxiety.

One group of mice, however, proved resilient to the stress. For three weeks before the social defeat treatment, all of the mice were subjected to two dramatically different living conditions. Some were confined to spartan cages, while others were treated to enriched environments with running wheels and tubes to explore. Unlike the mice in the bare-bones cages, bullied mice that had been housed in enriched environments showed no signs of rodent depression or anxiety after social defeat (Journal of Neuroscience, 2011). “Exercise and mental enrichment are buffering how the brain is going to respond to future stressors,” Lehmann says.

Lehmann can’t say how much of the effect was due to exercise and how much stemmed from other aspects of the stimulating environment. But the mice ran a lot — close to 10 kilometers a night. And other experiments hint that running may be the most integral part of the enriched environment, he says.

Looking deeper, Lehmann and his colleagues examined the mice’s brains. In the stimulated mice, they found evidence of increased activity in a region called the infralimbic cortex, part of the brain’s emotional processing circuit. Bullied mice that had been housed in spartan conditions had much less activity in that region. The infralimbic cortex appears to be a crucial component of the exercise effect. When Lehmann surgically cut off the region from the rest of the brain, the protective effects of exercise disappeared. Without a functioning infralimbic cortex, the environmentally enriched mice showed brain patterns and behavior similar to those of the mice who had been living in barebones cages.

Humans don’t have an infralimbic cortex, but we do have a homologous region, known as cingulate area 25 or Brodmann area 25. And in fact, this region has been previously implicated in depression. Helen Mayberg, MD, a neurologist at Emory University, and colleagues successfully alleviated depression in several treatment-resistant patients by using deep-brain stimulation to send steady, low-voltage current into their area 25 regions (Neuron, 2005). Lehmann’s studies hint that exercise may ease depression by acting on this same bit of brain.

Getting the payoff

Of all the questions that remain to be answered, perhaps the most perplexing is this: If exercise makes us feel so good, why is it so hard to do it? According to the Centers for Disease Control and Prevention, in 2008 (the most recent year for which data are available), some 25 percent of the U.S. population reported zero leisure-time physical activity.

Starting out too hard in a new exercise program may be one of the reasons people disdain physical activity. When people exercise above their respiratory threshold — that is, above the point when it gets hard to talk — they postpone exercise’s immediate mood boost by about 30 minutes, Otto says. For novices, that delay could turn them off of the treadmill for good. Given that, he recommends that workout neophytes start slowly, with a moderate exercise plan.

Otto also blames an emphasis on the physical effects of exercise for our national apathy to activity. Physicians frequently tell patients to work out to lose weight, lower cholesterol or prevent diabetes. Unfortunately, it takes months before any physical results of your hard work in the gym are apparent. “Attending to the outcomes of fitness is a recipe for failure,” he says.

The exercise mood boost, on the other hand, offers near-instant gratification. Therapists would do well to encourage their patients to tune into their mental state after exercise, Otto says — especially when they’re feeling down.

“Many people skip the workout at the very time it has the greatest payoff. That prevents you from noticing just how much better you feel when you exercise,” he says. “Failing to exercise when you feel bad is like explicitly not taking an aspirin when your head hurts. That’s the time you get the payoff.”

It may take a longer course of exercise to alleviate mood disorders such as anxiety or depression, Smits adds. But the immediate effects are tangible — and psychologists are in a unique position to help people get moving. “We’re experts in behavior change,” he says. “We can help people become motivated to exercise.”


Kirsten Weir is a writer in Minneapolis.

Retrieved from http://www.apa.org/monitor/2011/12/exercise.aspx

 

New autism research (with promising findings…)

In Autism Spectrum Disorders, Neuroscience, Psychiatry on Tuesday, 18 September 2012 at 16:24

some promising new studies on autism.  the first discusses the development of a genetic test that may be able to predict the risk of developing an asd.  while gene studies raise both ethical and moral concerns for some, the findings and possible implications cannot be dismissed.  the next study discusses the possibility of a drug involving glutamate receptor antagonists as a treatment for asd.  good stuff…ENJOY!

Genetic Test Predicts Risk for Autism Spectrum Disorder

Australian researchers have developed a genetic test that is able to predict the risk of developing autism spectrum disorder (ASD). (Credit: © Lucian Milasan / Fotolia)

ScienceDaily (Sep. 11, 2012) — A team of Australian researchers, led by the University of Melbourne, has developed a genetic test that is able to predict the risk of developing autism spectrum disorder (ASD).

Lead researcher Professor Stan Skafidas, Director of the Centre for Neural Engineering at the University of Melbourne said the test could be used to assess the risk for developing the disorder. “This test could assist in the early detection of the condition in babies and children and help in the early management of those who become diagnosed,” he said. “It would be particularly relevant for families who have a history of autism or related conditions such as Asperger’s syndrome,” he said.

Autism affects around one in 150 births and is characterized by abnormal social interaction, impaired communication and repetitive behaviours. The test correctly predicted ASD with more than 70 per cent accuracy in people of central European descent. Validation is ongoing, including the development of accurate testing for other ethnic groups.

Clinical neuropsychologist, Dr Renee Testa from the University of Melbourne and Monash University, said the test would allow clinicians to provide early interventions that may reduce behavioural and cognitive difficulties that children and adults with ASD experience. “Early identification of risk means we can provide interventions to improve overall functioning for those affected, including families,” she said.

A genetic cause has been long sought with many genes implicated in the condition, but no single gene has been adequate for determining risk. Using US data from 3,346 individuals with ASD and 4,165 of their relatives from Autism Genetic Resource Exchange (AGRE) and Simons Foundation Autism Research Initiative (SFARI), the researchers identified 237 genetic markers (SNPs) in 146 genes and related cellular pathways that either contribute to or protect an individual from developing ASD.

Senior author Professor Christos Pantelis of the Melbourne Neuropsychiatry Centre at the University of Melbourne and Melbourne Health said the discovery of the combination of contributing and protective gene markers and their interaction had helped to develop a very promising predictive ASD test.

The test is based on measuring both genetic markers of risk and protection for ASD. The risk markers increase the score on the genetic test, while the protective markers decrease the score. The higher the overall score, the higher the individual risk.
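
The scoring scheme described here amounts to a signed additive model. A minimal sketch in Python follows; the marker names and weights are invented for illustration, since the study's actual 237 SNP markers and their weights are not given in this article:

```python
# Sketch of a signed additive genetic risk score of the kind described:
# risk markers carry positive weights, protective markers negative weights,
# and a higher total score indicates higher risk. All names/weights invented.

def genetic_risk_score(genotype, weights):
    """Sum signed weights over the markers present in a genotype.

    genotype: dict of SNP id -> risk-allele count (0, 1, or 2)
    weights:  dict of SNP id -> signed weight
              (positive = risk marker, negative = protective marker)
    """
    return sum(weights[snp] * count
               for snp, count in genotype.items()
               if snp in weights)

# Invented example: two risk markers, one protective marker.
weights = {"rsRISK1": 0.8, "rsRISK2": 0.5, "rsPROT1": -0.6}

high = genetic_risk_score({"rsRISK1": 2, "rsRISK2": 1, "rsPROT1": 0}, weights)
low = genetic_risk_score({"rsRISK1": 0, "rsRISK2": 1, "rsPROT1": 2}, weights)
assert high > low  # risk alleles raise the score, protective alleles lower it
```

A real classifier would calibrate such weights against case-control data; this sketch only shows how contributing and protective markers can pull a single score in opposite directions.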

“This has been a multidisciplinary team effort with expertise across fields providing new ways of investigating this complex condition,” Professor Pantelis said.

The study was undertaken in collaboration with Professor Ian Everall, Cato Chair in Psychiatry and Dr Gursharan Chana from the University of Melbourne and Melbourne Health, and Dr Daniela Zantomio from Austin Health.

The next step is to further assess the accuracy of the test in an extended study monitoring children who have not yet been diagnosed. The study has been published today in the journal Molecular Psychiatry.


Story Source:

The above story is reprinted from materials provided by University of Melbourne.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

University of Melbourne (2012, September 11). Genetic test predicts risk for autism spectrum disorder. ScienceDaily. Retrieved September 18, 2012, from http://www.sciencedaily.com/releases/2012/09/120912093827.htm?goback=.gde_2514160_member_163245605

Retrieved from: http://www.sciencedaily.com/releases/2012/09/120912093827.htm?goback=.gde_2514160_member_163245605

***

Disorder of Neuronal Circuits in Autism is Reversible

14 September 2012 08:43 Universität Basel

People with autism suffer from a pervasive developmental disorder of the brain that becomes evident in early childhood. Peter Scheiffele and Kaspar Vogt, Professors at the Biozentrum of the University of Basel, have identified a specific dysfunction in neuronal circuits associated with autism. In the respected journal “Science”, the scientists also report their success in reversing these neuronal changes. These findings are an important step in drug development for the treatment of autism.

According to current estimates, about one percent of all children develop an autism spectrum disorder. Individuals with autism may exhibit impaired social behavior, rigid patterns of behavior and limited speech development. Autism is a hereditary developmental disorder of the brain. Among the central risk factors are mutations in more than 300 genes identified to date, including neuroligin-3, a gene involved in the formation of synapses, the contact junctions between nerve cells.

Loss of neuroligin-3 interferes with neuronal signal transmission

The consequences of neuroligin-3 loss can be studied in animal models. Mice lacking the gene for neuroligin-3 develop behavioral patterns reflecting important aspects observed in autism. In collaboration with Roche, the research groups from the Biozentrum at the University of Basel have now identified a defect in synaptic signal transmission that interferes with the function and plasticity of neuronal circuits. These negative effects are associated with increased production of a specific neuronal glutamate receptor, which modulates signal transmission between neurons. An excess of these receptors inhibits the adaptation of synaptic signal transmission during learning, thus disrupting the development and function of the brain in the long term.

Of major importance is the finding that the impaired development of the neuronal circuit in the brain is reversible. When the scientists reactivated the production of neuroligin-3 in the mice, the nerve cells scaled down the production of the glutamate receptors to a normal level, and the structural defects in the brain typical of autism disappeared. Hence, these glutamate receptors could be a suitable pharmacological target for halting, or even reversing, the developmental disorder.

Vision for the future: Medication for autism

Autism currently cannot be cured. At present, only the symptoms of the disorder can be alleviated through behavioral therapy and other treatments. The results of this study, however, have uncovered a new approach to treatment. In the EU-supported project EU-AIMS, the research groups from the Biozentrum are working in collaboration with Roche and other industry partners on applying glutamate receptor antagonists to the treatment of autism, and hope that, in the future, this disorder can be treated successfully in both children and adults.

http://www.unibas.ch/index.cfm?uuid=BF9F46B0ADFE4138B2556373D7286FD5&type=search&show_long=1

  • Full bibliographic information: Baudouin S. J., Gaudias J., Gerharz S., Hatstatt L., Zhou K., Punnakkal P., Tanaka K. F., Spooren W., Hen R., De Zeeuw C.I., Vogt K., Scheiffele P.
    Shared Synaptic Pathophysiology in Syndromic and Non-syndromic Rodent Models of Autism
    Science; Published online September 13 (2012) | doi: 10.1126/science.1224159

 

worry about yourself

In Gay rights, Humor, LGBTQI on Tuesday, 18 September 2012 at 06:08

let’s hope we are not old and gray and STILL fighting for the right of all consenting adults to marry regardless of whom it is they are marrying.  i mean, really, how does that affect YOU???

and another…

Hypnotic use and side effects

In Insomnia, Medication, Neuropsychology, Psychiatry on Tuesday, 18 September 2012 at 05:25

chronic insomnia and sleep deprivation are major issues affecting over 30% of the u.s. population.  approximately 10 million people are prescribed hypnotics to treat insomnia.  while the concurrent effects of untreated insomnia are vast, and hypnotic use may be the only viable option, it is suggested that one educate oneself on the effects of untreated insomnia (see previous post titled “the state of sleep in the u.s.”) and options for treatment, as well as recent research regarding the side effects of some treatments.

Hypnotic Use Linked With Increased Risk for Early Death

Megan Brooks & Laurie Barclay, MD

http://www.medscape.org/viewarticle/759730

Clinical Context

In 2010, approximately 6% to 10% of US adults used a hypnotic drug for sleep problems. Earlier studies have suggested an association between hypnotic use and excess mortality rates.

The objectives of this study by Kripke and colleagues were to estimate the mortality risks and cancer risks associated with specific, currently popular hypnotics, using a matched cohort design and proportional hazards regression models. In addition, the investigators examined what degree of risk associated with hypnotic use could be explained by confounders and comorbid conditions.

Study Synopsis and Perspective

Adults who use hypnotics to help them sleep have a greater than 3-fold increased risk for early death, according to results of a large matched cohort survival analysis.

Hazard ratios were elevated in separate analyses for several commonly prescribed hypnotics and for newer shorter-acting drugs, the researchers say. The drugs included benzodiazepines, such as temazepam; nonbenzodiazepines, such as zolpidem, eszopiclone, and zaleplon; barbiturates; and sedative antihistamines.

“The take-home from the article is that the risks associated with hypnotics are very high, and certainly these possible risks outweigh any benefits of hypnotics,” first author Daniel F. Kripke, MD, co-director of research at the Scripps Clinic Viterbi Family Sleep Center in La Jolla, California, told Medscape Medical News.

“Our study is the 19th epidemiological study showing that hypnotics are significantly associated with excess mortality,” Dr. Kripke added, noting it is also the first to specify the drugs and the first to show dose-response. “Even considering that the epidemiologic studies show association and do not prove causality, the risks look much larger than the benefits,” Dr. Kripke added.

Their analysis also showed a 35% overall increased risk for cancer in hypnotics users. “The risks of hypnotics are similar to the risks of cigarettes,” Dr. Kripke said.

The associations were evident in every age group but were greatest among those aged 18 to 55 years, the investigators note. “Rough order-of-magnitude estimates…suggest that in 2010, hypnotics may have been associated with 320,000 to 507,000 excess deaths in the USA alone,” they report.

The new report was published February 28 in BMJ Open.

Dr. Kripke, a long-time critic of hypnotics, emphasized that the data “apply only to the particular hypnotics studied when used as sleeping pills. They do not apply to drugs which were not tested.” Moreover, he said, they may not apply when the drugs are used for other purposes, “in which they might be life-saving. Oddly enough, the data for use of benzodiazepines for anxiety may not be similar,” Dr. Kripke noted.

“Risks Outweigh Any Benefits”

In 2010, an estimated 6% to 10% of adults in the United States took a hypnotic drug to help them sleep, with the percentages probably higher in Europe, Dr. Kripke and colleagues note in their report.

Data for their analysis were derived from the electronic medical records of the Geisinger Health System, the largest rural integrated health system in the United States, serving a 41-county area of Pennsylvania with roughly 2.5 million people.

Study participants included 10,529 adults (mean age, 54 years) who received hypnotic prescriptions and 23,676 matched controls with no hypnotic prescriptions, followed for an average of 2.5 years between 2002 and 2007.

“As predicted,” report the researchers, patients prescribed any hypnotic, even fewer than 18 pills per year, were significantly more likely to die during follow-up compared with those prescribed no hypnotics. A dose-response effect was evident, and the findings “were robust with adjustment for multiple potential confounders and consistent using multiple strategies to address confounding by health status,” they report.

Table 1. Risk for Death by Level of Hypnotic Use

Any Hypnotic              Hazard Ratio (95% Confidence Interval)   P Value
Up to 18 pills per year   3.60 (2.92 – 4.44)                       <.001
18 – 132 pills per year   4.43 (3.67 – 5.36)                       <.001
> 132 pills per year      5.32 (4.50 – 6.30)                       <.001

Zolpidem was the most commonly prescribed hypnotic during the study interval, followed by temazepam; both were associated with significantly elevated risks for death, again in a dose-response fashion.

Table 2. Risk for Death with Zolpidem and Temazepam

Agent (mg/y)   Hazard Ratio (95% Confidence Interval)   P Value
Zolpidem
  5 – 130      3.93 (2.98 – 5.17)                       <.001
  130 – 800    4.54 (3.46 – 5.95)                       <.001
  > 800        5.69 (4.58 – 7.07)                       <.001
Temazepam
  10 – 240     3.71 (2.55 – 5.38)                       <.001
  240 – 1640   4.15 (2.88 – 5.99)                       <.001
  > 1640       6.56 (5.03 – 8.55)                       <.001

“The death hazard ratios (HRs) associated with prescriptions for less commonly prescribed hypnotic drugs were likewise elevated, and the confidence limits of death hazards for each other hypnotic overlapped that for zolpidem, with the exception of eszopiclone, which was associated with higher mortality,” the investigators report.

Any hypnotic use in the upper third (>132 pills per year) was also associated with a modest but statistically significant increased risk for incident cancer (HR, 1.35; 95% CI, 1.18 – 1.55). The cancer risk was nearly 2-fold higher with temazepam (>1640 mg per year; HR, 1.99; 95% CI, 1.57 – 2.52).

Study Raises “Important Concerns”

Prior studies have shown multiple causal pathways by which hypnotics might raise the risk for death. For example, controlled trials have shown that hypnotics impair motor and cognitive skills, such as driving. Use of hypnotics has been linked to an increase in automobile crashes and an increase in falls due to hangover sedation. In some patients, hypnotics may increase or prolong sleep apneas and suppress respiratory drive. They may also increase incident depression.

“The meagre benefits of hypnotics, as critically reviewed by groups without financial interest… would not justify substantial risks,” the investigators write. They say a “consensus is developing that cognitive-behavioural therapy of chronic insomnia may be more successful than hypnotics.”

In a prepared statement, Trish Groves, MBBS, MRCPsych, editor-in-chief of BMJ Open, comments: “Although the authors have not been able to prove that sleeping pills cause premature death, their analyses have ruled out a wide range of other possible causative factors. So these findings raise important concerns and questions about the safety of sedatives and sleeping pills.”

American Academy of Sleep Medicine Urges Caution

In a statement, Nancy Collop, MD, president of the American Academy of Sleep Medicine (AASM) urged caution in interpreting these data.

“Although the study found that the use of hypnotic medication, or sleeping pills, was associated with an increased risk of mortality, a cause-and-effect relationship could not be established because the study only analyzed an insurance database,” Dr. Collop notes in the statement. “The authors also noted several other limitations to their study. For example, it was impossible for them to control for psychiatric conditions and anxiety, which is an area of significant concern to this study population.” In addition, she adds, those taking hypnotics had a “markedly greater rate of several comorbid health problems than the control group, suggesting they were a sicker population.”

AASM guidelines say that hypnotic medication prescribed appropriately and monitored carefully is a “reasonably” safe therapy that provides some improvement in people with insomnia, Dr. Collop notes in the statement. When possible, behavioral and cognitive therapies should be used and, if needed, supplemented with short-term use of hypnotics, the guidelines recommend. “Patients taking hypnotics should schedule regular follow-up visits with their physician, and efforts should be made to prescribe the lowest effective dose of medication and to reduce the medication’s usage when conditions allow,” the statement adds.

Effective treatment of insomnia is important because it is associated with a “host” of comorbid conditions, including major depression and other psychiatric disorders, as well as increased risk for suicide, motor vehicle accidents, and possibly cardiovascular disease, Dr. Collop points out. Other research has shown widespread changes in physiology and the central nervous system associated with insomnia, and the “marked dysfunction and diminished quality of life” reported by some of those with insomnia are similar to those seen with major psychiatric or medical illnesses.

“We commend Drs. Kripke, Langer and Kline for contributing new scientific information to the study of sleep medicine,” Dr. Collop notes in the AASM statement. “We believe it is important for patients and physicians to be aware of how sleep issues impact health. But we caution physicians and patients to consider the years of research in support of limited hypnotics use, under the clinical guidelines of the AASM, before making any drastic changes in therapy.”

The AASM recommends that individuals with ongoing sleep problems should seek help from a board-certified sleep physician, “at one of 2,400 AASM-accredited sleep centers across the US.” A sleep center listing is found at the AASM’s site, www.sleepcenters.org.

In a competing interests statement, Dr. Kripke reports long-term criticism of hypnotic drugs at his nonprofit Web site. He also discloses a family interest in an investment corporation that has a small percentage of its assets in stock of sanofi-aventis and Johnson & Johnson. His 2 coauthors have disclosed no relevant financial relationships. Dr. Collop has disclosed no relevant financial relationships.

BMJ Open. Published online February 28, 2012. Abstract

Study Highlights

  • This matched cohort study took place at a large, integrated US health system.
  • The investigators extracted longitudinal electronic medical records for a 1-to-2 matched cohort survival analysis.
  • Patients who received hypnotic prescriptions (n = 10,529) were matched with 23,676 control participants with no hypnotic prescriptions.
  • Mean age was 54 years, and average duration of follow-up (between January 2002 and January 2007) was 2.5 years.
  • Data were adjusted for age, sex, smoking, body mass index, ethnicity, marital status, alcohol use, and history of cancer.
  • Cox proportional hazards models allowed calculation of HRs for death.
  • The Cox models were controlled for risk factors and used up to 116 strata, which exactly matched case patients and control participants by 12 classes of comorbidity.
  • Compared with patients who were prescribed no hypnotics, those who were prescribed any hypnotic had markedly increased hazards of dying.
  • There was a dose-response association. The HR was 3.60 (95% CI, 2.92 – 4.44) for 0.4 – 18 doses per year, 4.43 (95% CI, 3.67 – 5.36) for 18 to 132 doses per year, and 5.32 (95% CI, 4.50 – 6.30) for more than 132 doses per year.
  • In separate analyses, HRs were increased for several widely used hypnotics and for newer shorter-acting drugs, including zolpidem, temazepam, eszopiclone, zaleplon, other benzodiazepines, barbiturates, and sedative antihistamines.
  • Among users in the highest tertiles of doses per year, the HRs for death were 5.3 for all hypnotics, 5.7 for zolpidem alone, and 6.6 for temazepam alone.
  • Patients in the highest tertile of hypnotic use had a significant (35%) increased risk for incident cancer (HR, 1.35; 95% CI, 1.18 – 1.55).
  • These findings were robust within groups having a comorbid condition, suggesting that the risks for death and cancer associated with hypnotic drugs were not explained by preexisting disease.
  • Hypnotic prescriptions were associated with increased diagnoses of esophageal regurgitation and peptic ulcer disease; the investigators note that increased regurgitation could cause esophageal damage and cancer.
  • On the basis of these findings, the investigators concluded that hypnotic prescriptions were associated with more than a 3-fold increased risk for death, even when the prescription was for fewer than 18 pills per year.
  • This association was also observed in separate analyses for several commonly used hypnotics and for newer, shorter-acting drugs. Control of selective prescription of hypnotics for patients in poor health did not explain the observed excess mortality rates.
  • Limitations of this study include possible residual confounding, lack of data on compliance with prescriptions, and inability to determine causality or to control for depression and other psychiatric symptoms.
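
As a sanity check on figures like these, the standard error of a log hazard ratio can be recovered from its 95% confidence interval, since the interval is symmetric on the log scale. The following is a small Python sketch using the lowest-dose tier reported above (HR 3.60; 95% CI, 2.92 – 4.44); this is a standard back-calculation, not a method from the paper:

```python
import math

def z_from_hr_ci(hr, lower, upper):
    """Approximate z-statistic for a hazard ratio, back-calculated from its
    95% confidence interval. A 95% CI is symmetric on the log scale, so
    se(log HR) ~= ln(upper / lower) / (2 * 1.96)."""
    se = math.log(upper / lower) / (2 * 1.96)
    return math.log(hr) / se

# Lowest-dose tier reported above: HR 3.60 (95% CI, 2.92 - 4.44)
z = z_from_hr_ci(3.60, 2.92, 4.44)
print(round(z, 1))  # about 12; a z-statistic this large corresponds to p < .001
```

The same back-calculation applied to the other tiers yields similarly large z-statistics, consistent with the uniformly reported P values of <.001.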

Clinical Implications

  • Compared with control participants not receiving hypnotic prescriptions, patients receiving prescriptions for zolpidem, temazepam, and other commonly used hypnotics had a more than 3-fold greater risk for mortality in this matched cohort study. There appeared to be a dose-response relationship, but even patients prescribed fewer than 18 hypnotic doses per year had increased mortality rates.
  • Among patients prescribed hypnotics, the incidence of cancer was increased for several specific types of cancer, and those prescribed high doses had an increased overall rate of cancer of 35%.

The state of sleep in the U.S.

In ADHD, ADHD Adult, ADHD child/adolescent, ADHD stimulant treatment, Anxiety, Fitness/Health, Medication, Well-being on Tuesday, 18 September 2012 at 05:04

stress, anxiety, and depression are but three related etiologies for insomnia.  people with ADHD also suffer from insomnia, either as a side-effect of psychostimulants or because of the ADHD itself.  insomnia can have significant effects on quality of life, work/school life, and health.  statistics show that insomnia is a growing problem in the U.S. today and sleep aids are being prescribed at an increasing rate.  the following are some statistics related to insomnia as well as a case-study/research article on insomnia. 

to be followed by an article about hypnotic use and associated risk-factors.

***

General Insomnia Statistics

  • People today sleep 20% less than they did 100 years ago.
  • More than 30% of the population suffers from insomnia.
  • One in three people suffer from some form of insomnia during their lifetime.
  • More than half of Americans lose sleep due to stress and/or anxiety.
  • Between 40% and 60% of people over the age of 60 suffer from insomnia.
  • Women are up to twice as likely as men to suffer from insomnia.
  • Approximately 35% of insomniacs have a family history of insomnia.
  • 90% of people who suffer from depression also experience insomnia.
  • Approximately 10 million people in the U.S. use prescription sleep aids.
  • People who suffer from sleep deprivation are 27% more likely to become overweight or obese. There is also a link between weight gain and sleep apnea.
  • A National Sleep Foundation Poll shows that 60% of people have driven while feeling sleepy (and 37% admit to having fallen asleep at the wheel) in the past year.
  • A recent Consumer Reports survey showed the top reason couples gave for avoiding sex was “too tired or need sleep.”

Financial Implications of Insomnia

Insomnia statistics aren’t confined to the relationship between insomnia and health. This sleep disorder costs government and industry billions of dollars a year.

  • The Institute of Medicine estimates that hundreds of billions of dollars are spent annually on medical costs that are directly related to sleep disorders.
  • The National Highway Traffic Safety Administration statistics show that 100,000 vehicle accidents occur annually as a result of drowsy driving. An estimated 1,500 people die each year in these collisions.
  • Employers spend approximately $3,200 more in health care costs on employees with sleep problems than for those who sleep well.
  • According to the US Surgeon General, insomnia costs the U.S. Government more than $15 billion per year in health care costs.
  • Statistics also show that US industry loses about $150 billion each year because of sleep-deprived workers. This takes into account absenteeism and lost productivity.

These sobering insomnia statistics underscore the importance of enhancing sleep disorder awareness and why individuals need to seek immediate treatment, both for their own health and for the well-being of others.

Sources: National Sleep Foundation, Better Sleep Council, Gallup Polls, Institute of Medicine, National Highway Traffic Safety Administration, US Surgeon General’s Office

http://www.better-sleep-better-life.com/insomnia-statistics.html

Manifestations and Management of Chronic Insomnia: NIH State-of-the-Science Conference Findings and Implications

Authors: William T. Riley, PhD; Carl E. Hunt, MD

http://www.medscape.org/viewarticle/517618

Introduction

The Problem of the Inadequate Identification and Treatment of Chronic Insomnia

Despite considerable advances in the understanding of and treatments for chronic insomnia, this condition remains inadequately identified and treated. Approximately one third of US adults report difficulty sleeping, and 10% to 15% have the clinical disorder of insomnia.[1] Among primary care patients, approximately half have sleep difficulties, but these difficulties often are undetected by health professionals.[2,3] Even if detected and appropriately diagnosed, these patients are more likely to receive treatments of questionable safety and efficacy rather than treatments with substantial, evidence-based support for safety and efficacy.

The inadequate identification and treatment of chronic insomnia has serious medical and public health implications. Chronic insomnia results in impaired occupational performance and diminished quality of life.[4,5] Insomnia is associated with higher healthcare usage and costs, including a 2-fold increase in hospitalizations and physician visits.[6] Insomnia is also a risk factor for a number of other disorders, particularly psychiatric disorders, such as depression, and an important sign or symptom for a range of medical and other psychiatric disorders.[7]

In a recent review, Benca[8] identified the following 5 barriers to the recognition, diagnosis, and treatment of insomnia in primary care settings:

  • Inadequate knowledge base: In the 1990s, about one third of medical schools had no formal sleep medicine training. A majority of practitioners rate their knowledge of sleep medicine as only “fair.”
  • Office visit time constraints: Unless sleep difficulties are the presenting complaint, visit time may be inadequate for sleep difficulties to be addressed.
  • Lack of discussion about sleep: Less than half of patients with insomnia have discussed this problem with their physicians, and most of these discussions were patient-initiated.
  • Misperceptions regarding treatment: Health professionals may have greater concerns than warranted about the safety and efficacy of pharmacologic treatments, and they may not be aware of or have access to effective nonpharmacologic approaches.
  • Lack of evidence for functional outcomes: Although treatments for insomnia reduce symptoms in the short term, there is inadequate evidence for long-term efficacy, improvements in daytime functioning, or the impact on comorbid disorders.

Addressing these barriers could lead to improved recognition and treatment of chronic insomnia and may substantially reduce the personal and public health burden of this disorder.

The Importance of Appropriate Recognition and Treatment of Chronic Insomnia: NIH State-of-the-Science Conference Statement

The purpose of this Clinical Update is to emphasize the importance of appropriate recognition of and treatment for chronic insomnia based on the recently published statement from the National Institutes of Health (NIH) State-of-the-Science Conference on the Manifestations and Management of Chronic Insomnia in Adults.[9] An independent panel of health professionals convened in June 2005 to evaluate the evidence from (1) systematic literature reviews prepared by the Agency for Health Research and Quality, (2) presentations by insomnia researchers over a 2-day public session, (3) questions and comments by conference attendees during the public sessions, and (4) closed deliberations by the panel. This process resulted in a State-of-the-Science (SOS) Conference Statement on chronic insomnia, including implications for clinical and research efforts.

The SOS Conference proceedings and statement were organized around the following 5 questions, which serve as the outline for this Clinical Update:

  • How is chronic insomnia defined, diagnosed, and classified, and what is known about its etiology?
  • What are the prevalence, natural history, incidence, and risk factors for chronic insomnia?
  • What are the consequences, morbidities, comorbidities, and public health burden associated with chronic insomnia?
  • What treatments are used for the management of chronic insomnia, and what is the evidence regarding their safety, efficacy, and effectiveness?
  • What are important future directions for insomnia-related research?

The SOS Conference focused on adults with chronic insomnia, not acute or episodic manifestations, which typically resolve in a few weeks, often without intervention. Although secondary or comorbid insomnia (insomnia associated with other conditions) was considered with respect to diagnosis and classification, the conference focused on the treatment of primary insomnia, not on any existing comorbid conditions. This Clinical Update, therefore, follows the scope of the SOS Conference and focuses on chronic primary insomnia in adults. Information in the SOS Conference Statement is augmented by the research literature, including a number of excellent, recent reviews on the clinical management of insomnia.[8,10-14]

How Is Chronic Insomnia Defined, Diagnosed, and Classified, and What Is Known About Its Etiology?

Case Study: Part 1

A 56-year-old woman presents for routine monitoring of postmenopausal symptoms and bone density, following a 2-year course of hormone replacement therapy that was initiated 5 years ago when she began experiencing hot flashes and depressive symptoms. During the visit, she is asked about her sleep and reveals that she has difficulty falling asleep most nights and sometimes awakens in the middle of the night, and is unable to go back to sleep. She notes frustration at her inability to get a good night’s sleep, particularly because she often feels tired and has difficulty concentrating at work. She reports that her insomnia began about the time of her menopausal symptoms, but has continued even though her other menopausal symptoms have resolved.

What steps should be taken to diagnose her condition?

Detecting Sleep Difficulties

The patient in the case above has a distinct advantage over many patients who suffer with insomnia because her healthcare professional specifically asked about her sleep. As early as Hippocrates, sleep has been an important indicator of patient health. “Disease exists, if either sleep or watchfulness be excessive”: Hippocrates, Aphorism LXXI.[12] In a recent study of adult primary care patients with insomnia, only about half reported discussing insomnia with their physicians.[15] Other studies have found that only 10% to 30% of those with insomnia discussed this problem with their physicians,[16] and most healthcare providers fail to ask about sleep.[2] Asking a simple question, such as “How have you been sleeping?” can lead to the detection of insomnia and a range of other sleep-related conditions.[17]

Definitions and Diagnostic Criteria for Chronic Insomnia. Insomnia is a sleep disturbance that most often manifests as difficulty initiating sleep, but also manifests as difficulty maintaining sleep or experiencing early-morning awakenings.

How much sleep disruption is sufficient for the diagnosis of insomnia? Normal sleep needs vary greatly from individual to individual. Moreover, the degree of sleep disturbance in those with insomnia can be quite variable from night to night, including nights without any sleep disturbance. Although quantitative indices for sleep-onset latency (≥ 30 minutes) and for sleep efficiency (percentage of total time asleep over total time in bed ≤ 85%) have been used for research purposes,[18] these indices do not correlate well with the patient’s experience of insomnia.[19] Therefore, the subjective experience of inadequate sleep is frequently more important than quantitative sleep indices in diagnosing insomnia.
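The quantitative research indices mentioned above are simple ratios, and can be sketched in a few lines of code. This is only an illustration of the cited research thresholds (sleep-onset latency ≥ 30 minutes, sleep efficiency ≤ 85%); the function name and diary fields are hypothetical, not from the article:

```python
def sleep_indices(time_in_bed_min, time_asleep_min, sleep_onset_min):
    """Compute research-style sleep indices from one night's diary estimates."""
    # Sleep efficiency: percentage of total time in bed actually spent asleep
    efficiency = 100.0 * time_asleep_min / time_in_bed_min
    return {
        "sleep_onset_latency_min": sleep_onset_min,
        "sleep_efficiency_pct": round(efficiency, 1),
        # Research criteria cited above: latency >= 30 min or efficiency <= 85%
        "meets_quantitative_criteria": sleep_onset_min >= 30 or efficiency <= 85.0,
    }

# Example night: 480 minutes in bed, 390 minutes asleep, 45 minutes to fall asleep
print(sleep_indices(480, 390, 45))
```

As the surrounding text notes, meeting (or failing) these numeric cutoffs does not settle the diagnosis; the patient's subjective experience of inadequate sleep carries more weight clinically.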

The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) defines primary insomnia as a difficulty initiating or maintaining sleep or experiencing nonrestorative sleep that results in clinically significant distress or impairment in functioning.[20] Based on these criteria, someone who does not appear to have objective manifestations of sleep disturbance but whose sleep is sufficiently inadequate or nonrestorative to produce distress or dysfunction would meet the criteria for insomnia. In contrast, someone who gets only a few hours of sleep each night but feels rested and without associated distress or dysfunction does not meet the criteria for insomnia. Therefore, subjective impressions of nonrestorative sleep with associated distress or dysfunction are important symptoms of insomnia.

These complaints of disturbed sleep also must occur in the context of adequate opportunity and circumstances for sleep. Although disruption of sleep from environmental perturbations may place someone at risk for insomnia, sleep disruption is not classified as insomnia unless there is adequate opportunity to sleep. Resident physicians on call or mothers of newborns commonly experience sleep disturbances, nonrestorative sleep, and daytime distress or impairment from inadequate sleep, but these problems are not diagnosed as insomnia because they are the result of having an inadequate opportunity to sleep.

Many people experience insomnia on occasion, but most of these “acute” or “episodic” forms of insomnia are transient and typically resolve without treatment. The duration required for insomnia to be “chronic” has varied from as little as 1 month to as long as 6 months. Based primarily on recent Research and Diagnostic Criteria (RDC) for insomnia,[21] the SOS Conference Statement concluded that insomnia lasting 1 month or more is clinically important and indicates the need for professional attention.

The SOS Conference Statement concluded that insomnia lasting 1 month or more is clinically important and indicates the need for professional attention.

RDC for insomnia. The American Academy of Sleep Medicine recently developed RDC for insomnia[21] in an effort to merge different nosologies and improve the diagnostic reliability of insomnia. The RDC criteria also provide 3 subclassifications of primary insomnia: Psychophysiologic Insomnia, Paradoxical Insomnia (Sleep State Misperception), and Idiopathic Insomnia, which may facilitate research on potential etiologies of this disorder. These diagnostic criteria will be included in the second edition of the International Classification of Sleep Disorders (ICSD-2) and will likely be adopted in the next International Classification of Diseases (ICD) version. The RDC diagnostic scheme first delineates the criteria for an insomnia disorder and then specifies the exclusion criteria for primary insomnia. Compared with the DSM-IV criteria, the RDC insomnia criteria specify the requirement for adequate opportunity or circumstances for sleep and provide greater detail of the distress or functional impairment criteria. The RDC criteria for primary insomnia also clarify that the presence of a comorbid disorder does not exclude the diagnosis of primary insomnia unless the insomnia can be attributed exclusively to the comorbid disorder.

Comorbid insomnia. Primary insomnia is a diagnosis of exclusion. Numerous other conditions can contribute to the onset or maintenance of insomnia, including psychiatric disorders, substance abuse, other sleep disorders, or medical conditions/treatments. In the past, insomnia was considered “secondary” if it appeared due to another condition, but this was often difficult to determine clinically.[22] In addition, the relationship between insomnia and various comorbid disorders is complex and multidirectional. For example, insomnia may be a symptom of comorbid depression, but it may also be a separate and predisposing condition for depression.[23]

Given these complexities, the SOS Conference Panel recommended that “comorbid insomnia” replace the term “secondary insomnia.” The practical implication of this terminology for clinicians is that insomnia should not be relegated to secondary status whenever a comorbid disorder exists. The presence of comorbid disorders needs to be evaluated, and temporal relationships between the course of the comorbid disorder and the insomnia may shed light on possible etiologic relationships between them,[7] but it cannot be assumed that treating only the comorbid disorder will result in resolution of the insomnia as well.

The SOS Conference Panel recommended that “comorbid insomnia” replace the term “secondary insomnia.”

Clinical assessment of insomnia. The diagnosis of insomnia is based primarily on the patient’s history. Reports by family members, particularly the bed partner, can augment the assessment of sleep behavior and daytime functioning. Medical history and physical examination are also useful for determining the presence of possible comorbid syndromes.[7]

Sleep diaries are frequently used to document sleep-and-wake behaviors. In addition to providing data to support a diagnosis, these data are often used to devise treatment plans and to monitor treatment outcomes. Patients are typically instructed to complete the diary each morning after waking and provide their best estimates of variables, such as time in bed, time of sleep onset, awakenings, and wake time. These data are only estimates by patients and tend to underestimate actual sleep time, but they are useful for assessing individual sleep patterns, possible factors associated with poor sleep, and changes in sleep over time. There are also a number of self-report instruments, a few of which have been adequately standardized and validated for monitoring outcomes.[24]

To provide more objective measures of sleep behavior, actigraphs or accelerometers have been used in research trials to infer sleep-and-wake behaviors from changes in the amount of movement. Although useful, actigraphs have not been fully validated and may underestimate sleep time if sleep is restless or fitful (eg, with comorbid restless legs syndrome). Actigraphs and other automated measures of sleep behavior have not typically been used in routine practice, but can provide more objective measures of sleep patterns, especially when the patient’s report is in question (eg, sleep-state misperception).[25]

Polysomnography remains the gold standard for measuring sleep-wake states; however, the American Academy of Sleep Medicine does not recommend polysomnography for the assessment of insomnia except when needed to rule out a comorbid disorder, such as sleep apnea.[26] In addition to expense, polysomnography is unlikely to provide an accurate representation of an insomnia patient’s sleep difficulties given the night-to-night variability of sleep behavior and influence of the sleep environment on insomnia symptoms.

The American Academy of Sleep Medicine does not recommend polysomnography for the assessment of insomnia except when needed to rule out a comorbid disorder, such as sleep apnea.

Etiology of insomnia. Although there is growing consensus about the appropriate diagnostic criteria and procedures for insomnia, the possible etiologic factors for insomnia remain poorly understood. Spielman’s 3 Ps — predisposing, precipitating, and perpetuating factors — is a useful model for organizing various etiologic factors.[27]

Very little is known about possible predisposing factors for insomnia. Other than some limited research suggesting familial aggregation,[28,29] there are no data on genetic predisposition for insomnia. There is considerable research on the neurobiology of sleep-wake states, including the inhibitory feedback loop involving the GABA and galanin neurons in the ventrolateral preoptic nucleus of the hypothalamus and the orexin or hypocretin neurons in the posterior hypothalamus, which serve as a “flip-flop” switch of major cortical arousal systems.[30,31] It remains unclear, however, how these systems are dysfunctional in insomnia. Deficiencies in endogenous melatonin or benzodiazepine receptors and hyperactivity of corticotropin-releasing factor neurons are possible etiologic factors, but further research is needed to better understand these potential etiologies for insomnia.[32]

The possible etiologic factors for insomnia remain poorly understood, and little is known about possible predisposing factors for insomnia.

Hyperarousal appears to be an important mechanism for insomnia. Research has shown increased brain glucose metabolism when awake or asleep, increased beta and decreased theta and delta during sleep, and increased adrenocorticotropic hormone activity.[33,34] Results from recent functional imaging studies provide additional support for the central nervous system hyperarousal hypothesis.[35]

Potential precipitating factors for insomnia are numerous and include many of the possible disorders that are comorbid with insomnia, such as psychiatric disturbance, sleep-wake schedule changes, medical conditions and their treatments, other sleep disorders, and substance use. Substances, including caffeine, theophylline and other stimulants, steroids, antihypertensives, and antidepressants, can also precipitate insomnia.[12] A recent study found that family, health, and work-school-related events were the most common precipitating factors for insomnia, and that even positive events can precipitate insomnia.[36]

There is general agreement that insomnia, regardless of how it is precipitated, is perpetuated by cognitive and behavioral mechanisms. Cognitive factors involved in perpetuating insomnia include misconceptions about normal sleep needs and stability, misattributions about the causes of sleep disturbance, and catastrophic worry about the daytime effects of inadequate sleep.[18,37] These dysfunctional beliefs often promote behaviors that are intended to improve sleep but are disruptive to sleep homeostasis and a consistent sleep-wake cycle (eg, taking naps and sleeping in late to “catch up” on sleep). These sleep-disruptive behaviors are further perpetuated by behavioral conditioning, which produces conditioned arousal to stimuli that would normally be associated with sleep.[38] It is important to recognize that these cognitive and behavioral perpetuating factors may be present in both comorbid and primary insomnia.

What Are the Prevalence, Course, Incidence, and Risk Factors for Chronic Insomnia?

Prevalence of Chronic Insomnia

Estimates of the prevalence of insomnia vary depending on the definition used. Approximately one third of the general population complains of sleep disruption, and 9% to 15% of the population report associated daytime impairment consistent with the diagnosis of insomnia.[1] However, the proportion of those reporting sleep disturbance with daytime impairment who would meet the diagnostic criteria for insomnia is unclear. Among patients in primary care, the prevalence rates for insomnia are much higher, as high as 50%.[4] In a large survey of managed care participants, over one third experienced symptoms of insomnia, although less than 1% presented with an insomnia complaint.[39]

Incidence, Natural Course, and Duration of Chronic Insomnia

The SOS Conference Statement noted that there is very little known about the incidence, natural course, and duration of insomnia. Limited evidence suggests that insomnia is a chronic and persisting condition with low rates of spontaneous remission and possible recurrence after a period of remission, but these processes are poorly understood.

There is very little known about the incidence, natural course, and duration of insomnia.

Risk Factors for Chronic Insomnia

Given that most research on risk factors for insomnia is cross-sectional, not longitudinal, it is difficult to know whether potential risk factors are causal or correlational. The prevalence of insomnia is higher in divorced, separated, or widowed adults, and in those with lower education and income levels.[1] Insomnia is also more likely to occur in women, especially postmenopausal women.[1] There is an increased prevalence of insomnia in older adults, but it remains unclear to what extent this is independent of declining health and comorbid influences. Sleep patterns, however, do change with age. Older people experience more awakenings during the night, lower sleep efficiency, less sleep, more variable sleep, and lighter sleep than younger adults.[40]

Several psychiatric and medical disorders are associated with insomnia. As noted earlier, however, these relationships are complex and multidirectional. For example, research on the relationship between insomnia and depression indicates that it is more likely that insomnia is a risk factor for depression than that depression is a risk factor for insomnia. Insomnia appears to be predictive of a number of disorders, including depression, anxiety, alcohol abuse/dependence, drug abuse/dependence, and suicide.[41] Medical and sleep disorders that potentially disrupt sleep (eg, chronic pain conditions, such as arthritis, or sleep apnea) may be precipitants of or risk factors for insomnia. Substance abuse and the use of prescribed medications that can disturb sleep also can be risk factors for insomnia.

It is difficult to know whether potential risk factors are causal or correlational. Several psychiatric and medical disorders are associated with insomnia, but these associations are complex and multidirectional.

What Are the Consequences, Morbidities, Comorbidities, and Public Health Burden Associated With Chronic Insomnia?

Economic Costs of Insomnia

Insomnia is associated with high healthcare utilization. Walsh and Ustun[42] estimated annual direct total costs for insomnia at about $12 billion for healthcare services and $2 billion for sleep-promoting agents. People with insomnia have more medical problems and use more medications than those without insomnia, and they have double the number of office visits and hospitalizations as those without insomnia.[6,43]

The relative contribution of insomnia and comorbid conditions to these costs remains unclear. Indirect costs of insomnia are even less clear. In 1994, the economic costs of insomnia were estimated at $80 billion annually.[44,45] These indirect cost estimates are higher than those for other chronic conditions, such as rheumatoid arthritis,[46] but there are limited data available to reliably estimate the indirect costs of insomnia.

Effects of Insomnia on Functioning and Quality of Life

Sleep loss does result in impaired psychomotor and cognitive functioning, but these impairments are less pronounced for insomnia.[47] Despite the equivocal impact of insomnia on memory and cognitive functioning, insomnia is related to occupational role dysfunction, including increased absenteeism and decreased work performance.[4,43] These daytime impairments, however, may be more related to the chronic hyperarousal state[48] or to perceptions of sleep deprivation[49] than to actual sleep loss from insomnia.

In considering the consequences of insomnia, it is important to differentiate being sleepy from being tired or fatigued. Sleepiness involves recurrent episodes of being drowsy and involuntarily falling asleep in nonstimulating environments (ie, dozing off). Sleepiness is more often associated with other primary sleep disorders, such as narcolepsy, sleep apnea, and periodic limb movement disorder. In contrast, those with insomnia are often tired or fatigued but not sleepy.[48,50]

Insomnia is associated with substantial impairments in quality of life. Although insomnia is often considered more benign than most other chronic medical and psychiatric disorders, the impairments in quality of life in insomnia are comparable to those observed in diabetes, arthritis, and heart disease.[5] Quality of life also improves with treatment for insomnia, although not to the level of the normal population.[51]

Insomnia is associated with substantial impairments in quality of life that are comparable to the impairments observed in other chronic medical disorders.

Comorbidities and Morbidities

Approximately 40% of adults with insomnia also have a diagnosable psychiatric disorder.[16] In addition, approximately three quarters of people presenting to sleep clinics or general medical practices with insomnia have a comorbid psychiatric disorder.[52] Although there are a number of psychiatric disorders that are comorbid with insomnia (eg, generalized anxiety disorder, attention-deficit/hyperactivity disorder, and schizophrenia), depression has received the most attention. Insomnia was once considered only a symptom of depression or secondary to depression. Recent research, however, has consistently shown that insomnia is a predisposing factor for depression. Insomnia often occurs prior to the onset of depression,[53] and often precedes depression relapses.[54,55] Those with persistent insomnia are also much more likely to develop depression at a later time.[16,56] In addition to depression, insomnia is associated with an increased risk for suicide[57] and is a precipitant of manic episodes in those with bipolar disorder.[58]

Insomnia is common in other primary sleep disorders, such as sleep apnea (sleep-disordered breathing [SDB]), restless legs syndrome, and periodic limb movement disorder. In these cases, insomnia may be secondary or fully attributable to the underlying sleep disorder, but often is a comorbid disorder precipitated by the other primary sleep disorder but perpetuated by cognitive and conditioning factors.[59] SDB typically presents clinically with nonrestorative sleep complaints and disturbed sleep maintenance with normal sleep onset. Snoring and/or apnea episodes are often reported by the bed partners, but patients are typically unaware of their sleep-related symptoms. If positive indications of SDB are found during a clinical interview, then overnight sleep recording is typically performed to establish the diagnosis and determine its severity.[7,59] SDB may be exacerbated by benzodiazepines, so it is important to rule out this condition before proceeding with insomnia treatment.

A number of chronic medical conditions are associated with insomnia, including chronic pain syndromes, coronary heart disease, asthma, gastrointestinal disorders, vascular disorders, chronic fatigue, and endocrine and metabolic disorders.[7] In addition, substances, including caffeine, theophylline and other stimulants, steroids, antihypertensives, and antidepressants, can precipitate insomnia.[12]

Although many of the disorders comorbid with insomnia are associated with increased mortality rates, insomnia itself does not appear to be associated with higher mortality. In a recent longitudinal study, neither insomnia nor the use of hypnotics for insomnia increased the risk for mortality over a 6-year period.[60] Higher mortality has been associated with either too much or too little sleep, but not with insomnia disorder per se.[61,62]

Insomnia is frequently comorbid with psychiatric disorders, other primary sleep disorders, and chronic medical conditions.

What Treatments Are Used for the Management of Chronic Insomnia, and What Is the Evidence Regarding Their Safety, Efficacy, and Effectiveness?

Case Study: Part 2

The patient’s medical history reveals menopausal symptoms that were controlled on hormone replacement therapy and did not recur following discontinuation 3 years ago. Her insomnia symptoms, however, have continued and worsened in the past 5 years. The patient is otherwise healthy. She does not report pain at night, snoring or gasping for air during sleep, or restless legs. She does report awakening at least once a night to urinate, but indicates that she is sometimes unable to return to sleep after awakening.

The clinical interview reveals no other psychiatric disorder. She has no history of substance abuse or dependence, but does indicate that she has begun drinking a glass or 2 of wine at night to help her fall asleep. She describes primarily being unable to fall asleep, and says it takes her an hour or 2 to fall asleep most nights. She also describes awakening during the night, sometimes being unable to go back to sleep, and that these sleep-maintenance symptoms have worsened in the past 6 months. She reports hearing that older people can get by on less sleep, but that she feels tired and irritable after nights of inadequate sleep. She is beginning to believe that she is not functioning as well at work because of her sleep difficulties. She reports feeling particularly distressed in the evening as her bedtime approaches and worries whether she will get enough sleep to perform well the next day.

The patient is provided with general information about sleep and insomnia and reassured that her sleep difficulties can be managed. She is provided with a sleep diary and asked to record her sleep-wake patterns for 2 weeks and then to return with her husband to complete the evaluation.

At the second visit, her husband confirms that she does not snore loudly or excessively and does not appear to experience short bouts of not breathing while asleep. He reports that she does have difficulty going to sleep and will toss and turn for an hour or so before falling asleep. On 2-3 mornings each week, he wakes up and finds that she is not in bed but that she got up during the night and later fell asleep while watching television downstairs. On weekends, he usually lets her sleep in late. He reports that she is sometimes so tired after a bad night that she will come home from work and take a nap before dinner. Her sleep diary reveals an average sleep-onset latency of about 45 minutes each night, that she is awake for over an hour during the night on about half the nights, a mean total sleep time of 6 hours and 30 minutes per night, and a mean sleep efficiency of 82%.

Based on this assessment, what treatment approaches should be considered?

Cognitive Behavioral Therapy

Cognitive behavioral therapy for insomnia (CBTI) addresses the hyperarousal, cognitive, and conditioning factors that appear to perpetuate the disorder. CBTI typically consists of 5 major components:[38]

  • Sleep-hygiene strategies to establish a sleep environment and daily routine that are conducive to sleep.
  • Relaxation therapy (progressive muscle relaxation, visual imagery, etc) to reduce physiologic arousal.
  • Cognitive restructuring to change dysfunctional attitudes about sleep (eg, attempting to will oneself to sleep or excessive worrying about the effects of not sleeping).
  • Stimulus control to reassociate the bed and bedroom with going to sleep instead of staying awake. These instructions include (1) going to bed only when sleepy, (2) establishing a standard wake-up time, (3) getting out of bed whenever awake in bed for 15 minutes or more, (4) avoiding doing sleep-incompatible behaviors (reading or watching television) while in bed, and (5) refraining from daytime napping.
  • Sleep restriction to condense time in bed to the average time typically asleep. For this component, the time to bed is set based on the average time asleep but not less than 5 hours, and then it is gradually increased as sleep efficiencies improve.
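
The sleep-restriction component lends itself to a simple titration rule. The sketch below, with illustrative function names, combines the 5-hour floor described above with the common adjustment rule of extending time in bed by 15 minutes when weekly sleep efficiency exceeds 90% (the thresholds are assumptions based on the protocol described in this article, not a standardized algorithm):

```python
def initial_time_in_bed(avg_sleep_hours):
    """Start the sleep window at average sleep time, but never below 5 hours."""
    return max(avg_sleep_hours, 5.0)

def adjust_time_in_bed(time_in_bed_hours, weekly_efficiency_pct,
                       threshold_pct=90.0, increment_hours=0.25):
    """Extend the window by 15 minutes once weekly efficiency exceeds threshold."""
    if weekly_efficiency_pct > threshold_pct:
        return time_in_bed_hours + increment_hours
    return time_in_bed_hours

# A patient averaging 6.5 hours of sleep starts with a 6.5-hour window;
# a week at 92% efficiency would extend it to 6.75 hours.
window = initial_time_in_bed(6.5)
window = adjust_time_in_bed(window, 92.0)
print(window)  # → 6.75
```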

The American Academy of Sleep Medicine Task Force on nondrug alternatives for primary chronic insomnia[63] found that CBTI produced reliable and durable improvement in chronic insomnia. Nearly 80% of those treated with CBTI show measurable benefit, but the magnitude of the benefit varies. CBTI produces objective improvements as well as subjective improvements in sleep and appears to improve homeostatic sleep regulation.[64] Although most of the research on CBTI is with primary insomnia, CBTI has been shown to produce benefits for the comorbid condition as well as for the insomnia.[65]

Sleep hygiene is the component of CBTI that is most often provided by healthcare providers,[66] and patients tend to like and adhere to sleep-hygiene strategies.[67] Unfortunately, sleep hygiene appears to be the least effective CBTI component. Stimulus control and sleep restriction are the most effective CBTI components,[68] but patients have the most difficulty adhering to these components.[67]

When CBTI is compared with medications, sedative hypnotics appear to produce more rapid improvements, but the long-term safety and efficacy of sedative hypnotics are less well established than CBTI.[69,70] The efficacy of CBTI, particularly long-term, and the minimal apparent adverse effects of this treatment have resulted in it being considered a first-line treatment for primary insomnia.[70]

Challenges with CBTI. Although CBTI is clearly efficacious, accessibility to this treatment has been severely limited by a general lack of knowledge regarding efficacy, inadequate coverage of this treatment by insurance carriers, and a lack of professionals trained in CBTI, even at certified sleep disorder centers.[38] The treatment is generally well accepted by patients when they are provided this option,[71] and the treatment is relatively short. Although session dosage remains unclear, Edinger and Means[38] have suggested that 4 sessions at 2-week intervals may be optimal based on their review of this treatment approach.

To increase availability, researchers have experimented with alternative methods of CBTI treatment delivery. Treatment delivery in individual, group, or phone-based sessions appears to be equally helpful.[72] Although self-help interventions appear less effective than professional assistance, self-help versions of CBTI still provide modest benefit over controls.[73] Delivery of CBTI via the Internet and other technologies is a promising new approach for improving the accessibility of this efficacious treatment for insomnia.[74]

Although CBTI is not typically provided by primary care health professionals, recent efforts show this to be another potential strategy for providing this treatment to those with insomnia. Indeed, allied healthcare providers have been trained to deliver CBTI with some success.[75] Recently, Edinger and Sampson[76] devised a “primary care friendly” form of CBTI. This abbreviated form of CBTI involves two 25-minute sessions 2 weeks apart. Session 1 consists of reviewing sleep logs and providing sleep education, stimulus-control, and sleep-restriction instructions, such as eliminating activities that are incompatible with sleep, avoiding daytime naps, and setting up a consistent sleep-wake schedule. Session 2 consists of reviewing progress, addressing adherence difficulties, and modifying sleep strategies accordingly. This abbreviated treatment was significantly better than sleep-hygiene instructions alone for most insomnia measures and resulted in reductions of insomnia symptoms to normal levels in over half of patients.[76]

Although CBTI is efficacious, accessibility to this treatment has been severely limited by a general lack of knowledge regarding efficacy, inadequate coverage of this treatment by insurance carriers, and a lack of professionals trained in CBTI, even at certified sleep disorder centers.

US Food and Drug Administration-Approved Medications

Benzodiazepine and nonbenzodiazepine hypnotics. Both benzodiazepine and nonbenzodiazepine hypnotics have been approved for the treatment of insomnia.

Benzodiazepine hypnotics. The benzodiazepine hypnotics approved by the US Food and Drug Administration (FDA) for the treatment of insomnia are estazolam, flurazepam, quazepam, temazepam, and triazolam. These medications have been found effective in a number of double-blind, placebo-controlled trials, but these trials have typically been short-term (4-6 weeks).[77] Even with longer term use, there is a reduced effect after 4-8 weeks.[78] Except for triazolam, these benzodiazepine hypnotics have long half-lives, which contribute to their efficacy for maintaining sleep, but also result in higher rates of next-day impairments, such as morning sedation, cognitive impairment, and motor incoordination.[79] Temazepam is the most commonly prescribed benzodiazepine hypnotic,[80] but, despite its long half-life, it appears to have minimal impact on number of awakenings, and produces tolerance, morning sedation, and cognitive impairment.[8] Triazolam, the only short half-life agent in this group, has more of an impact on sleep onset than maintenance, but possible amnestic effects have been a concern.[81,82]

Except in those with a history of substance abuse, abuse liability from these benzodiazepine hypnotics appears to be minimal.[83] However, due to concerns about abuse liability, the FDA has indicated that these medications should be limited to 7-10 days of use with reevaluation if used for more than 2-3 weeks. Some have argued that these limitations were based on now obsolete guidelines,[84] and that longer term use may not increase the risk for abuse liability,[85] but the long-term effects of these medications on tolerance and abuse liability require further study.

Nonbenzodiazepine hypnotics. Nonbenzodiazepine hypnotics are a newer class of hypnotics that act on specific benzodiazepine receptor subtypes but have a nonbenzodiazepine structure. Three nonbenzodiazepine hypnotics — zaleplon, zolpidem, and eszopiclone — have been approved by the FDA for the treatment of insomnia. As a class, these medications generally have shorter half-lives than their benzodiazepine predecessors, which results in greater effects on sleep onset than sleep maintenance and minimal morning sedation and other daytime impairments. Nonbenzodiazepine hypnotics also may have less abuse liability than benzodiazepine hypnotics, although further research is needed.[86]

Zolpidem is the most commonly prescribed agent for insomnia,[80] and due to its rapid onset and short half-life (1.5-4 hours), it has more of an effect on sleep onset than sleep maintenance.[87] Modified-release formulations may provide better sleep-maintenance effects, but data on these formulations are still needed.[88] Efficacy data do not extend beyond 1-2 months, so the effects of longer term use are unknown.[89]

Zaleplon has a very short half-life of only about 1 hour and, therefore, affects primarily sleep onset.[90] Higher doses may affect sleep maintenance but may increase the risk for side effects.[91] Although studies of zaleplon have been of longer duration than those of zolpidem, long-term safety and efficacy beyond 1-3 months have not been established.[92,93]

Eszopiclone is the newest medication in this group, and it has the longest half-life (5-6 hours). Studies show that this half-life appears adequate to produce effects on sleep maintenance as well as sleep onset while also resulting in minimal morning sedation.[94,95] Eszopiclone does not have a limitation on duration of use, and recent findings have shown efficacy and safety with minimal tolerance or abuse liability over 12 months of use.[96]

As a group, these medications appear to produce minimal sedation effects or psychomotor impairment.[97,98] These reduced side effects relative to benzodiazepine hypnotics appear to be due to their short half-lives more so than to their selective receptor agonist effects.[99] Nonbenzodiazepine hypnotics also may produce fewer or less severe drug interactions than many of the benzodiazepine hypnotics because they rely less exclusively on CYP3A4 metabolism.[100] Substantial proportions of these medications, however, are still metabolized through CYP3A4, so these medications, as is the case with most traditional benzodiazepine hypnotics, should be carefully monitored if CYP3A4 inducers (eg, rifampicin) or CYP3A4 inhibitors (eg, ketoconazole, erythromycin, and cimetidine) are also being prescribed.[100] Alcohol also potentiates the effects of all hypnotics, so patients should be instructed not to drink and, if they do, to understand that they will feel more sedated the next morning, potentially affecting their ability to drive.

Medications for insomnia are typically taken every night on a prophylactic basis to manage insomnia. Due to the rapid onset and minimal abuse liability of nonbenzodiazepine hypnotics, nonnightly or as-needed use has been considered and appears safe and efficacious in preliminary trials.[101] Further trials, however, are needed to substantiate the safety and efficacy of long-term, nonnightly administration.

Nonbenzodiazepine hypnotics have shorter half-lives, which result in greater effects on sleep onset than sleep maintenance and minimal morning sedation and other daytime impairments. They may also be associated with fewer or less severe drug interactions, and may have less abuse liability than benzodiazepine hypnotics.

Discontinuation of hypnotics. Little research has been conducted on the persistence or reappearance of symptoms after prescription therapy is discontinued. Discontinuation of hypnotics, whether benzodiazepine or nonbenzodiazepine, generally results in relapse of symptoms. Many of the benzodiazepines also produce rebound insomnia (insomnia that is worse than pretreatment levels) for a few days. Rebound insomnia may be reduced with the newer nonbenzodiazepine hypnotics, although further research is needed.[78] CBTI has been used to reduce relapse rates after benzodiazepine discontinuation.[102]

Melatonin receptor agonists. The FDA recently approved ramelteon for the treatment of chronic insomnia. Ramelteon is a selective melatonin receptor agonist (MT1, MT2) that is rapidly absorbed (< 1 hour) and has a relatively short half-life (2-5 hours). Initial studies of ramelteon have shown reduced sleep-onset latency compared with placebo, with a low rate of side effects and adverse events.[103] Abuse liability also appears to be minimal. Ramelteon should not be prescribed concomitantly with strong CYP1A2 inhibitors, such as fluvoxamine. Although ramelteon is a promising alternative to sedative-hypnotics, further research on its safety and efficacy, particularly long-term, is needed.

Prescription Drugs Without FDA Approval for Insomnia

Trazodone is one of the most commonly prescribed medications for the treatment of insomnia, comparable to zolpidem.[80] The low cost of antidepressant medications along with unrestricted long-term use and minimal abuse liability may be factors leading to the increased use of these medications for insomnia.

Trazodone is sedating, but there is a paucity of data on its effects on insomnia. Research has usually been performed with small samples of patients with comorbid depression and has shown short-lived and equivocal effects on sleep.[104,105] Trazodone can have significant side effects, including orthostatic hypotension, blurred vision, nausea, dry mouth, constipation, drowsiness, headache, and (rarely) priapism. These side effects also increase the risk for falls and accidents, which can have serious consequences in the elderly. Although these risks are less pronounced at the lower doses typically used for insomnia, the risk-benefit ratio may be too great in some situations to use trazodone for insomnia.[106] There are also limited data on the short-term effects of doxepin[107] for insomnia. The potential adverse effects of trazodone, doxepin, and other antidepressants overshadow the limited efficacy data on these medications. Dose-response relationships of antidepressants for insomnia also are poorly understood.[108,109]

The SOS Conference Statement notes that various other medications have been used in the treatment of insomnia, including barbiturates (phenobarbital) and antipsychotics (quetiapine and olanzapine). These medications, however, have serious side effects and risks of adverse events, with little to no data supporting their efficacy. Therefore, these medications are not recommended for the treatment of insomnia.

According to the SOS Conference Statement, the risk-benefit ratio may be too great in some situations to use trazodone or other antidepressants for the treatment of insomnia. In addition, barbiturates (phenobarbital) and antipsychotics are not recommended for the treatment of insomnia.

Over-the-Counter Medications

Over-the-counter (OTC) medications are frequently used for insomnia. About one fourth of US adults with sleep difficulties use OTC sleep aids.[110]

Antihistamines (H1 receptor antagonists, such as diphenhydramine) are the most commonly used OTC medications for insomnia. There is, however, no systematic evidence of efficacy for insomnia, and there are significant side effects, including dry mouth, blurred vision, urinary retention, constipation, and a risk for increased intraocular pressure in patients with narrow-angle glaucoma.[111]

Alcohol is often used to reduce sleep-onset latency. Although alcohol does reduce sleep latency, it also results in poorer quality sleep and nighttime awakening. Alcohol also is clearly not appropriate for someone with a risk for substance use. Therefore, alcohol cannot be recommended as a sleep aid.[112]

Melatonin is a natural hormone, produced by the pineal gland, that has a role in circadian rhythm control. Melatonin may be helpful for reducing symptoms of jet lag, but there is minimal evidence of efficacy for insomnia. Melatonin appears to be safe for short-term use, but long-term safety is unknown. Except for the recently FDA-approved melatonin receptor agonist ramelteon, melatonin compounds are unregulated, and preparations may vary.[113]

L-tryptophan is an endogenous amino acid sometimes used as a hypnotic. Evidence of efficacy for insomnia, however, is extremely limited and there are possible toxic interaction effects with some psychiatric medications.[114]

Valerian is derived from the root of the valerian plant and is thought to promote sleep, but there is no proven benefit for insomnia. Valerian is unregulated and possibly associated with hepatotoxicity. Other herbal products are sometimes used for insomnia, but there are no data supporting their efficacy, and there are similar concerns about safety and drug interactions.[115]

Other alternative treatments, such as tai chi, yoga, acupuncture, and light therapy, have been used to treat insomnia, but they have not been adequately evaluated.[114,116]

OTC products, alternative treatments, and complementary therapies are often used to treat insomnia. These therapies, however, have not been systematically evaluated; efficacy data are lacking; and there are concerns about side effects.

Case Study: Part 3

Following the clinical assessment, the patient is advised regarding treatment approaches. Although menopausal symptoms appear to have been a precipitant of the insomnia, these symptoms have resolved and no longer appear to be related to the insomnia. The patient is counseled about cognitive behavioral and sedative-hypnotic approaches for insomnia. Given the minimal risks, she would prefer to try CBTI first, but the nearest specialist with expertise in CBTI is 2 hours away. Therefore, she agrees to try one of the newer sedative-hypnotics and to obtain an abbreviated form of CBTI from the nurse practitioner who has some limited training in this approach.

Because she presents with both sleep-onset and sleep-maintenance difficulties, and may require long-term medication use to control her insomnia, she is started on an agent appropriate for long-term administration, taken immediately before bed each night, and advised that it may be necessary to increase her dose if her sleep difficulties, particularly sleep-maintenance difficulties, persist.

The patient meets with the nurse practitioner, who provides information about sleep hygiene and instructs her to refrain from using alcohol to fall asleep, particularly in combination with her medication. A consistent wake time of 7:00 am is agreed to, and a time to bed of 12:30 am is determined based on her average time asleep from her sleep diaries. The patient is concerned that she may be more tired than usual if she goes to bed this late, but is reassured that she will be getting the same amount of sleep as she usually does, just more consolidated. She is also instructed to get out of bed if she does not fall asleep within 15 minutes, to do something restful, and then to return to bed when she feels sleepy again. She is assured that she can function adequately the next day if she does not get much sleep, as she has been doing for years, and reminded that she can only control getting in and out of bed, not if and when she falls asleep while in bed. She is encouraged not to take naps and to maintain her regular wake time even if she did not sleep well the night before or has the opportunity to sleep later that morning.

After 2 weeks, the patient’s sleep diary shows that she has generally adhered to her new sleep schedule and that her sleep efficiencies are above 90% as a result of her bedtime restrictions. She is instructed to move her bedtime 15 minutes earlier and to continue advancing it each week as long as her sleep efficiencies average above 90%. She is encouraged to continue the strategies that appear to be working, particularly maintaining a consistent bedtime, not taking naps, and getting out of bed if she is unable to fall asleep.

At a follow-up visit 1 month later, the patient reports sleeping well and feeling rested, although her total sleep time is only 7.5 hours, less than she thought was adequate. She is reassured that sleep needs change over time and that her sense of feeling rested and restored is more important than how much sleep she gets. She is encouraged to continue the CBTI strategies that she has found helpful thus far. She wonders whether the medication is still needed to control her sleep. She is instructed to shift from taking it every night to taking it as needed after getting out of bed if she is unable to fall asleep within 15 minutes.

At a follow-up visit 3 months later, the patient reports that she no longer takes the medication for sleep, that she continues to get about 7.5 hours of sleep per night with little to no difficulty initiating or maintaining sleep, and that she feels rested and refreshed most mornings.

What Are Important Directions for Insomnia-Related Research?

Based on what is known about the manifestations and management of insomnia, the SOS Conference Panel made a number of recommendations for future research needs:[9]

  1. Developing and validating instruments to assess chronic insomnia, particularly measures of outcome and diurnal consequences;
  2. Conducting more research on possible genetic and neural mechanisms of insomnia;
  3. Conducting longitudinal observational studies to better understand the incidence, course, and correlates of insomnia, including the adoption of sleep-disturbance items in national health survey research;
  4. Obtaining more information on the impact of insomnia on quality of life and the indirect and direct impact on individuals, caregivers, and society as a whole;
  5. Providing better estimates of the cost of illness to determine cost-effectiveness of treatments;
  6. Obtaining more long-term outcome data, particularly following discontinuation of treatment;
  7. Performing large-scale, multisite comparative treatment trials, including studies of the efficacy of combined or sequenced administration of medications and CBTI;
  8. Conducting more research on OTC and alternative remedies for insomnia;
  9. Conducting efficacy trials in subpopulations, such as children, nursing home residents, and postmenopausal women, and in those with comorbid as well as primary chronic insomnia; and
  10. Assessing clinician decision making with insomnia patients; although much is known that can inform clinical decision making, much more research is needed in this area.

Conclusions

Insomnia is a major public health problem affecting millions of individuals, their families, and their communities. Little is known about etiologic mechanisms, but hyperarousal, cognitive processes, and behavioral conditioning have some support as possible factors. Current evidence supports the efficacy of CBTI and sedative-hypnotics for the treatment of insomnia. Despite widespread use, there is very little evidence supporting the use of other treatments, such as antidepressants and OTC agents, for the treatment of insomnia.

Although there are a number of efficacious medications for insomnia, the SOS Conference Panel noted concern about the mismatch between the chronic, long-term nature of the disorder and the short duration of most clinical trials. Only eszopiclone has been evaluated in trials lasting 6-12 months. Newer medications not yet approved, such as indiplon (a short-acting nonbenzodiazepine hypnotic), provide additional options for the treatment of chronic insomnia, but there remains a clear need for new and more targeted drug therapies that can be used safely and effectively long-term. CBTI shows promising long-term effects with minimal safety concerns, and accessibility to this treatment option should be expanded.

References

  1. Ohayon MM. Epidemiology of insomnia: what we know and what we still need to learn. Sleep Med Rev. 2002;6:97-111.
  2. Haponik EF, Frye AW, Richards B, et al. Sleep history is neglected diagnostic information. Challenges for primary care physicians. J Gen Intern Med. 1996;11:759-761.
  3. Allaert FA, Urbinelli R. Sociodemographic profile of insomniac patients across national surveys. CNS Drugs. 2004;18(suppl1):3-7.
  4. Simon GE, VonKorff M. Prevalence, burden, and treatment of insomnia in primary care. Am J Psychiatry. 1997;154:1417-1423.
  5. Chevalier H, Los F, Boichut D, et al. Evaluation of severe insomnia in the general population: results of a European multinational survey. J Psychopharmacol. 1999;13(suppl1):S21-24.
  6. Leger D, Guilleminault C, Bader G, Levy E, Paillard M. Medical and socio-professional impact of insomnia. Sleep. 2002;25:625-629.
  7. Thase ME. Correlates and consequences of chronic insomnia. Gen Hosp Psychiatry. 2005;27:100-112.
  8. Benca RM. Diagnosis and treatment of chronic insomnia: a review. Psychiatr Serv. 2005;56:332-343.
  9. The National Institutes of Health Consensus Development Program. NIH State-of-the-Science Conference Statement on Manifestations and Management of Chronic Insomnia in Adults. Available at: http://consensus.nih.gov/2005/2005InsomniaSOS026html.htm. Accessed November 17, 2005.
  10. Wilson S, Nutt D. Assessment and management of insomnia. Clin Med. 2005;5:101-104.
  11. Silber MH. Chronic insomnia. N Engl J Med. 2005;353:803-810.
  12. Sateia MJ, Nowell PD. Insomnia. Lancet. 2004;364:1959-1973.
  13. Yang CM, Spielman AJ, Huang YS. Insomnia. Curr Treat Options Neurol. 2005;7:373-386.
  14. Neubauer DN. Insomnia. Prim Care. 2005;32:375-388.
  15. Aikens JE, Rouse ME. Help-seeking for insomnia among adult patients in primary care. J Am Board Fam Pract. 2005;18:257-261.
  16. Ford DE, Kamerow DB. Epidemiologic study of sleep disturbances and psychiatric disorders. An opportunity for prevention? JAMA. 1989;262:1479-1484.
  17. Doghramji PP. Recognizing sleep disorders in a primary care setting. J Clin Psychiatry. 2004;65(suppl16):23-26.
  18. Espie CA. Insomnia: conceptual issues in the development, persistence, and treatment of sleep disorder in adults. Annu Rev Psychol. 2002;53:215-243.
  19. Erman MK. Sleep architecture and its relationship to insomnia. J Clin Psychiatry. 2001;62(suppl10):9-17.
  20. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders — Text Revision. 4th ed. Washington, DC: American Psychiatric Publishing; 2000.
  21. Edinger JD, Bonnet MH, Bootzin RR, et al. Derivation of Research Diagnostic Criteria for Insomnia: Report of an American Academy of Sleep Medicine Workgroup. Sleep. 2004;27:1567-1596.
  22. Edinger JD. Classifying insomnia in a clinically useful way. J Clin Psychiatry. 2004;65(suppl8):36-43.
  23. Billiard M, Bentley A. Is insomnia best categorized as a symptom or a disease? Sleep Med. 2004;5(suppl1):S35-S40.
  24. Devine EB, Hakim Z, Green J. A systematic review of patient-reported outcome instruments measuring sleep dysfunction in adults. Pharmacoeconomics. 2005;23:889-912.
  25. Ancoli-Israel S, Cole R, Alessi C, Chambers M, Moorcroft W, Pollak CP. The role of actigraphy in the study of sleep and circadian rhythms. Sleep. 2003;26:342-392.
  26. Chesson A, Hartse K, Anderson WM, et al. Practice parameters for the evaluation of chronic insomnia. An American Academy of Sleep Medicine report. Standards of Practice Committee of the American Academy of Sleep Medicine. Sleep. 2000;23:237-241.
  27. Spielman AJ, Caruso LS, Glovinsky PB. A behavioral perspective on insomnia treatment. Psychiatr Clin North Am. 1987;10:541-553.
  28. Yves E, Morin C, Cervena K, Carlander R, Beset A, Billard M. Family studies in insomnia. Sleep. 2003;26:A304.
  29. Dauvilliers Y, Morin C, Cervena K, et al. Family studies in insomnia. J Psychosom Res. 2005;58:271-278.
  30. Roth T. Characteristics and determinants of normal sleep. J Clin Psychiatry. 2004;65(suppl16):8-11.
  31. Siegel JM. The neurotransmitters of sleep. J Clin Psychiatry. 2004;65(suppl16):4-7.
  32. Richardson GS, Roth T. Future directions in the management of insomnia. J Clin Psychiatry. 2001;62(suppl10):39-45.
  33. Perlis ML, Smith MT, Pigeon WR. Etiology and pathophysiology of insomnia. In: Kryger MH, Roth T, Dement WC, eds. Principles and Practice of Sleep Medicine. 4th ed. Philadelphia, Pa: Elsevier; 2005:714-725.
  34. Bonnet MH, Arand DL. Hyperarousal and insomnia. Sleep Med Rev. 1997;1:97-108.
  35. Drummond SPA, Smith MT, Orff HJ, Chengazi V, Perlis ML. Functional imaging of the sleeping brain: review of findings and implications for the study of insomnia. Sleep Med Rev. 2004;8:227-242.
  36. Bastien CH, Vallieres A, Morin CM. Precipitating factors of insomnia. Behav Sleep Med. 2004;2:50-62.
  37. Harvey AG, Tang NKY, Browning L. Cognitive approaches to insomnia. Clin Psychol Rev. 2005;25:593-611.
  38. Edinger JD, Means MK. Cognitive-behavioral therapy for primary insomnia. Clin Psychol Rev. 2005;25:539-558.
  39. Hatoum HT, Kania CM, Kong SX, et al. A survey of enrollees at five managed care organizations. Am J Manag Care. 1998;4:79-86.
  40. Nau SD, McCrae CS, Cook KG, Lichstein KL. Treatment of insomnia in older adults. Clin Psychol Rev. 2005;25:645-672.
  41. Taylor DJ, Lichstein KL, Durrence HH. Insomnia as a health risk factor. Behav Sleep Med. 2003;1:227-247.
  42. Walsh J, Ustun B. Prevalence and health consequences of insomnia. Sleep. 1999;22:S427-S436.
  43. Leger D, Guilleminault C, Bader G, Levy E, Paillard M. Medical and socio-professional impact of insomnia. Sleep. 2002;25:625-629.
  44. Stoller MK. Economic effects of insomnia. Clin Ther. 1994;16:873-897.
  45. Martin SA, Aikens JE, Chervin RD. Toward cost-effectiveness analysis in the diagnosis and treatment of insomnia. Sleep Med Rev. 2004;8:63-72.
  46. Yelin E, Callahan LF. The economic cost and social and psychological impact of musculoskeletal conditions. National Arthritis Data Work Groups. Arthritis Rheumatol. 1995;38:1351-1362.
  47. Sateia MJ, Doghramji K, Hauri PJ, Morin CM. Evaluation of chronic insomnia: an American Academy of Sleep Medicine review. Sleep. 2000;23:243-308.
  48. Bonnet MH, Arand DL. The consequences of a week of insomnia. Sleep. 1996;19:452-461.
  49. Semler CN, Harvey AG. Misperception of sleep can adversely affect daytime functioning in insomnia. Behav Res Ther. 2005;43:843-856.
  50. Reidel BW, Lichstein KL. Insomnia and daytime functioning. Sleep Med Rev. 2000;4:277-298.
  51. Reimer MA, Flemons WW. Quality of life in sleep disorders. Sleep Med Rev. 2003:7:335-349.
  52. Katz DA, McHorney CA. The relationship between insomnia and health related quality of life in patients with chronic illness. J Fam Pract. 2002;51:229-235.

The Cost of Dissatisfaction at Work

In Mindfulness, Well-being on Monday, 17 September 2012 at 07:36

Happy Employees Are Critical For An Organization’s Success, Study Shows 

ScienceDaily (Feb. 4, 2009)

One’s happiness might seem like a personal matter, but a Kansas State University researcher says employers should be concerned about the well-being of their employees because it could be an underlying factor in organizational success.

 Thomas Wright, Jon Wefald Leadership Chair in Business Administration and professor of management at K-State, has found that when employees have high levels of psychological well-being and job satisfaction, they perform better and are less likely to leave their job — making happiness a valuable tool for maximizing organizational outcomes.

“The benefits of a psychologically well work force are quite consequential to employers, especially so in our highly troubled economic environment,” Wright said. “Simply put, psychologically well employees are better performers. Since higher employee performance is inextricably tied to an organization’s bottom line, employee well-being can play a key role in establishing a competitive advantage.”

Happiness is a broad and subjective word, but a person’s well-being includes the presence of positive emotions, like joy and interest, and the absence of negative emotions, like apathy and sadness, Wright said.

An excessive negative focus in the workplace could be harmful, such as in performance evaluations where negatives like what an employee failed to do are the focus of concentration, he said. When properly implemented in the workplace environment, positive emotions can enhance employee perceptions of finding meaning in their work.

In addition, studies have shown that being psychologically well has many benefits for the individual, Wright said. Employees with high well-being tend to be superior decision makers, demonstrate better interpersonal behaviors and receive higher pay, he said. His recent research also indicates that psychologically well individuals are more likely to demonstrate better cardiovascular health.

Wright said happiness is not only a responsibility to ourselves but also to our co-workers, who often rely on us to be steadfast and supportive. In addition, employee well-being affects the organization overall: studies have shown that even after controlling for age, gender, ethnicity, job tenure and educational attainment, psychological well-being is still significantly related to job performance, according to Wright.

Wright said psychologically well employees consistently exhibit higher job performance, with significant correlations in the 0.30 to 0.50 range. Not only are these findings statistically significant, they are practically relevant as well, he said. A correlation of 0.30 between well-being and performance indicates that roughly 9 percent of the variance in job performance is associated with differences in well-being, while a correlation of 0.50 points to a substantial 25 percent of the variance.
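The arithmetic behind those percentages is simply the coefficient of determination: squaring a correlation gives the share of variance two measures have in common. A minimal sketch, illustrative only, using the correlations quoted above:

```python
# Coefficient of determination: the squared correlation is the fraction
# of variance in one variable associated with the other.
def variance_explained(r: float) -> float:
    return r ** 2

# Correlations reported in the study:
print(round(variance_explained(0.30), 2))  # 0.09, i.e. roughly 9-10 percent
print(variance_explained(0.50))            # 0.25, i.e. 25 percent
```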

In some of Wright’s academic and consulting work, he has used a form of utility analysis to determine the level of actual savings tied to employee well-being. For example, in a sample of management personnel with average salaries in the $65,000 range, he found that being psychologically distressed could cost the organization roughly $75 a week per person in lost productivity. With 10 employees that translates to $750 per week in performance variance; for 100 employees the numbers are $7,500 per week or $390,000 per year.
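The utility-analysis figures above reduce to simple multiplication. A sketch using only the numbers quoted in the example (the $75-per-week loss is the study’s estimate; the function names are mine):

```python
# Utility-analysis arithmetic from the example above: a psychologically
# distressed employee costs roughly $75 per week in lost productivity.
WEEKLY_LOSS_PER_PERSON = 75  # dollars, per distressed employee

def weekly_cost(n_employees: int) -> int:
    # Total weekly productivity loss across n distressed employees.
    return n_employees * WEEKLY_LOSS_PER_PERSON

def annual_cost(n_employees: int, weeks_per_year: int = 52) -> int:
    # Annualized loss, assuming a 52-week year.
    return weekly_cost(n_employees) * weeks_per_year

print(weekly_cost(10))   # 750
print(weekly_cost(100))  # 7500
print(annual_cost(100))  # 390000
```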

When employees have low levels of well-being and job satisfaction, they are more likely to quit. Wright said employee turnover can be extremely costly for an organization, particularly one losing a disproportionate share of its better employees. In one study, Wright found that the odds of turnover shrank by a factor of 0.57 for each one-unit increase in well-being. As with job performance, knowledge of an employee’s well-being can be highly useful in helping human resource personnel determine cost-effective retention strategies, he said.
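Read as an odds ratio, the 0.57 figure compounds multiplicatively: each one-unit gain in well-being multiplies the odds of turnover by 0.57. A hedged sketch (the odds-ratio interpretation is my assumption; only the 0.57 comes from the text):

```python
# Each one-unit increase in well-being multiplies the odds of turnover
# by 0.57, so the effect compounds across units.
TURNOVER_ODDS_RATIO = 0.57

def turnover_odds(baseline_odds: float, wellbeing_gain: float) -> float:
    return baseline_odds * TURNOVER_ODDS_RATIO ** wellbeing_gain

# A two-unit gain shrinks the odds to about a third of baseline:
print(round(turnover_odds(1.0, 2), 4))  # 0.3249
```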

Well-being has been shown to be stable over time, though it can be influenced by situational circumstances through psychologically based interventions, Wright said. Methods to improve well-being include assisting workers so they fit their jobs more closely, providing social support to help reduce the negative impact of stressful jobs, and teaching optimism to emphasize positive thought patterns.

 Wright said one controversial approach to improving well-being in the workplace is by seeking and hiring employees who have high levels of well-being.

Wright’s findings on psychological well-being and job satisfaction have appeared in several publications, including the Journal of Management, Organizational Dynamics, the Journal of Occupational Health Psychology, the Journal of Applied Psychology and the Journal of Organizational Behavior.

Reference: Kansas State University (2009, February 4). Happy Employees Are Critical For An Organization’s Success, Study Shows. ScienceDaily. Retrieved September 17, 2012, from http://www.sciencedaily.com/releases/2009/02/090203142512.htm

Autism and other disorders may not be linked to the age of the mother…

In Uncategorized on Sunday, 16 September 2012 at 11:35

Dad’s age, not mom’s, may drive autism, schizophrenia, other disorders

Posted on August 22, 2012 by Stone Hearth News

REYKJAVIK, Iceland–(BUSINESS WIRE)–deCODE Genetics, a global leader in analyzing and understanding the human genome, in collaboration with Illumina, a global leader in the making of instruments to analyze the genome, reported today in the journal Nature that a father’s age, not a mother’s, at the time a child is conceived is the single largest contributor to the passing of new hereditary mutations to offspring. The findings come from the largest whole genome sequencing project to examine associations of diseases with rare variants in the genome.

“Strikingly, this study found that a father’s age at the time a child is conceived explains nearly all of the population diversity in new hereditary mutations found in the offspring,” said study lead author Kari Stefansson, M.D., Dr. Med., CEO of deCODE Genetics. “With the results here, it is now clear that demographic transitions that affect the age at which males reproduce can have a considerable impact on the rate of certain diseases linked to new mutation.”

To better understand the cause of new hereditary mutations, the deCODE team sequenced the genomes of 78 Icelandic families with offspring who had a diagnosis of autism or schizophrenia. The team also sequenced the genomes of an additional 1,859 Icelanders, providing a larger comparative population.

On average, the investigators found about two additional mutations in offspring for each one-year increase in the father’s age. The average age of the fathers in the study was 29.7 years. Also, when specifically examining the genomes of families with autism and schizophrenia, the authors identified mutations in offspring in genes previously implicated in these diseases. They also identified two genes, CUL3 and EPHB2, with mutations in an autism patient subgroup.
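The per-year figure can be read as a simple linear model. The sketch below is purely illustrative: the slope (about two mutations per year) and the average paternal age (29.7) come from the text; everything else, including the function name, is an assumption:

```python
# Illustrative linear model: about two additional de novo mutations in
# offspring per extra year of paternal age, relative to the study's
# average paternal age of 29.7 years.
MUTATIONS_PER_YEAR = 2.0
AVERAGE_PATERNAL_AGE = 29.7

def extra_mutations(paternal_age: float) -> float:
    # Expected de novo mutations relative to a father of average age.
    return MUTATIONS_PER_YEAR * (paternal_age - AVERAGE_PATERNAL_AGE)

print(round(extra_mutations(36.7), 1))  # 14.0 more than an average-age father
```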

“Our results all point to the possibility that as a man ages, the number of hereditary mutations in his sperm increases, and the chance that a child would carry a deleterious mutation that could lead to diseases such as autism and schizophrenia increases proportionally,” said Dr. Stefansson. “It is of interest here that conventional wisdom has been to blame developmental disorders of children on the age of mothers, whereas the only problems that come with advancing age of mothers is a risk of Down syndrome and other rare chromosomal abnormalities. It is the age of fathers that appears to be the real culprit.”

Epidemiological studies in Iceland show the risk of both schizophrenia and autism spectrum disorders increases significantly with the father’s age at conception, and that the average age of fathers in Iceland (now 33 years) at the time a child is conceived is rising. The authors noted that demographic change of this kind and magnitude is not unique to Iceland, and it raises the question of whether the reported increase in autism spectrum disorder diagnoses is at least partially due to an increase in the average age of fathers at conception.

About deCODE

Headquartered in Reykjavik, Iceland, deCODE genetics is a global leader in analyzing and understanding the human genome. Using its unique expertise and population resources, deCODE has discovered genetic risk factors for dozens of common diseases ranging from cardiovascular disease to cancer.

In order to most rapidly realize the value of genetics for human health, deCODE partners with life sciences companies to accelerate their target discovery, validation, and prioritization efforts, yielding improved patient stratification for clinical trials and essential companion diagnostics. In addition, through its CLIA- and CAP-certified laboratory, deCODE offers DNA-based tests for gauging risk and empowering prevention of common diseases. deCODE also licenses its tests, intellectual property, and analytical tools to partner organizations. deCODE’s corporate information can be found at http://www.decode.com with information about our genetic testing services at http://www.decodehealth.com and www.decodeme.com.

Some thoughts on education…

In Education, Pedagogy, Philosophy on Sunday, 16 September 2012 at 11:26

there is a need for teaching to involve true socratic dialectic.  below is an article i found on how it can be done in today’s public schools.  following it is an article on paulo freire.  i believe his books should be required reading for those going into education (along with jonathan kozol’s books on education…all of them!).  i urge anyone in education who has not read kozol’s books to read at least one along with freire’s work.  i believe a better understanding of epistemology and pedagogy, in general, is crucial when trying to make a difference in the education of our nation’s children.

***

Making the Leap to Socratic Seminars

By Elizabeth Ely

Premium article access courtesy of TeacherMagazine.org.

Over the past few years, I’ve attended summer workshop after summer workshop that touted the merits of Socratic seminars. The discussions revolved around open-ended questions facilitated not by teachers, as I’d previously understood such seminars, but by students. Perhaps it is appropriate that I often left these workshops with more questions than answers.

I just couldn’t picture how this would work in my 6th grade English/language arts classroom. How would I guide my students to discuss topics in a civil way and connect their ideas to their academic learning? How would I ensure each student was engaged? How would I assess students? What if no one had anything to say?

But this past year, I pushed aside my own desire for control and gave more agency to my students. It was risky, especially when facing high-stakes testing and a new evaluation system. But my students were more engaged in learning than ever. And I knew that I didn’t need to worry about the evaluation rubric if my students could sustain this kind of growth.

Listen, Discuss, Collaborate

Let’s face it—most middle school students don’t walk into our classrooms in August ready and able to participate in a Socratic seminar … or any type of academic discussion. But here’s how I got them ready.

From week one, I began to set expectations about three major skills: active listening, academic discussion, and collaborative teamwork. I also worked to create a climate in which it was safe for students to speak their minds—where it’s okay to take risks (and sometimes fail).

The first thing we talked about was active listening. Students need to understand that a discussion involves constant feedback and participation from all involved—and that even a listener’s body language can affect the tone and focus of the discussion. I solicited and recorded students’ ideas about what active listening is, creating a sort of “how-to” poster as they discussed.

During the first week, I built in lots of discussion activities in pairs, small groups, and whole-class arrangements. Students got to know one another, built a sense of community, and practiced their active-listening skills.

One of our first activities was “What’s My Lie?” Students wrote three statements about themselves—two true statements and one that was a lie. I did the same, then modeled the activity: I shared my statements with a student volunteer, who then guessed which statement was a lie and explained why. I confirmed or shared the correct answers.

Next, students mingled and performed the same activity, changing partners when prompted. I reminded students about active listening and encouraged them to thank their partners in the activity.

During the early weeks of the school year, our activities were very structured, gradually becoming less so. Students learned to initiate questions or engage in discourse without my dictating the order of responses.

Early on, I introduced strategies for responding to others in a civil way that sustains the discussion: “I agree, Bobby, but I would also add … ” or “I disagree, Sally, because the text says … ” or “That reminds me of the article/text/novel that … ”

As last year got going, I realized that my new role was to facilitate learning rather than deliver it. As I moved from table to table, I modeled active listening and academic discussion for my students while at the same time getting to know them and assessing their learning.

Next Step: Introducing Socratic Seminars

I now know you should never spring a Socratic seminar on students without introducing the concept. Period. For a seminar to be truly effective, I’ve found my students need to know what it is, why they’re doing it, what’s expected of them, and how they’ll be graded. They need time to prepare.

Two or three days before our first seminar, I took an entire period to introduce what Socratic seminars are like—and why we would be doing them.

Students need to understand the roles of the seminars in my classroom—and their importance. A seminar can be a discussion of articles or a novel students have read, or it can be the culminating point of an entire unit. Seminars can help students with pre-writing or serve as performance-based assessments.

It is also important for students to perceive seminar participation as an exciting privilege—a chance to be responsible for their own learning. I want them to see that I am interested in their insights. The more I stress the value of the activity, the more value students place on their personal performance.

I began the first introductory session by giving students background on who Socrates was and what “Socratic” means. I introduced Socrates as an ancient Greek philosopher and teacher who valued the power of asking questions, engaging in inquiry, and discussing rather than debating.

Then we talked about the seminar’s structure. I’ve found an inner-outer circle most effective with my 6th graders. I arrange student desks in two concentric circles. During the seminar, the inner circle discusses while the outer circle observes and assesses their inner partners. Halfway through the seminar, the groups switch roles.

I explained the seminar responsibilities of students: to be prepared with their handouts and texts, to take part in discussion when in the inner circle, and to evaluate the discussion when in the outer circle.

My favorite part was explaining my role as teacher, which is to open a Diet Coke and relax. They laughed, but by the end of the year, they realized how accurate this description had been.

This year, this introductory lesson will be followed by a class session in which we watch and analyze a video clip of a Socratic seminar in action.

Deciding What Matters: Student-Generated Rubrics

I took another risk this past year as I committed to a student-centered classroom: I decided students should play a role in designing a rubric for seminar participation.

I had initiated this process at the start of the year, when I first asked students to identify the characteristics of an active listener. Continued reflection on the “how” of our classroom activities led students to become much more aware of my expectations—and their own.

The day after I introduced the basic concept of Socratic seminars, I asked students to consider how the seminars should be evaluated.

I distributed a template with categories (participation, quality of discussion, and behavior/attitude) and scoring columns (exemplary, proficient, and emerging). I left the contents of the rubric blank, and asked student groups to generate indicators for each of the possible scores for the categories.

I recorded student contributions and solicited revisions along the way, encouraging as much specificity as possible. And I found that, given the opportunity, my students set high expectations for themselves—in part because they were so excited and honored to be able to take part in the seminars.

The rubric-building activity helps students become even more aware of what’s expected on seminar day. Just to make sure we were all on the same page, I posted the rubric to my class wiki, requiring students to review it for homework and “sign” their names on the wiki page.

Many teachers have practiced this kind of student-centered instruction. But it was revolutionary for me, a teacher who once felt more comfortable with a tightly scripted plan for each lesson.

Here’s what my principal said after observing a Socratic seminar in my 6th grade ELA classroom: “The only thing that could have made it more impressive was if you had just turned around and left the room.”

On that day, in that moment, I became obsolete and loved it. It was then that I knew I truly had a student-centered classroom—my students were motivated and engaged enough to learn from one another without me.

Elizabeth Ely is a 6th grade ELA and world history teacher at Walker Middle Magnet for International Studies in Tampa, Fla. She is a member of CTQ’s Teacher Leaders Network and plans to continue taking risks this year in the classroom.

***

food for thought…

Reference: Smith, M. K. (1997, 2002) ‘Paulo Freire and informal education’, the encyclopaedia of informal education. [www.infed.org/thinkers/et-freir.htm. Last update: May 29, 2012]

http://www.infed.org/thinkers/et-freir.htm

paulo freire

Perhaps the most influential thinker about education in the late twentieth century, Paulo Freire has been particularly popular with informal educators with his emphasis on dialogue and his concern for the oppressed.

Paulo Freire (1921 – 1997), the Brazilian educationalist, has left a significant mark on thinking about progressive practice. His Pedagogy of the Oppressed is currently one of the most quoted educational texts (especially in Latin America, Africa and Asia). Freire was able to draw upon, and weave together, a number of strands of thinking about educational practice and liberation. Sometimes some rather excessive claims are made for his work, e.g. ‘the most significant educational thinker of the twentieth century’. He wasn’t – John Dewey would probably take that honour – but Freire certainly made a number of important theoretical innovations that have had a considerable impact on the development of educational practice – and on informal education and popular education in particular. In this piece we assess these – and briefly examine some of the critiques that can be made of his work.

Contribution

Five aspects of Paulo Freire’s work have a particular significance for our purposes here. First, his emphasis on dialogue has struck a very strong chord with those concerned with popular and informal education. Given that informal education is a dialogical (or conversational) rather than a curricular form, this is hardly surprising. However, Paulo Freire was able to take the discussion on several steps with his insistence that dialogue involves respect. It should not involve one person acting on another, but rather people working with each other. Too much education, Paulo Freire argues, involves ‘banking’ – the educator making ‘deposits’ in the educatee.

Second, Paulo Freire was concerned with praxis – action that is informed (and linked to certain values). Dialogue wasn’t just about deepening understanding – it was part of making a difference in the world. Dialogue in itself is a co-operative activity involving respect. The process is important and can be seen as enhancing community, building social capital and leading us to act in ways that make for justice and human flourishing. Informal and popular educators have had a long-standing orientation to action – so the emphasis on change in the world was welcome. But there was a sting in the tail. Paulo Freire argued for informed action and as such provided a useful counter-balance to those who want to diminish theory.

Third, Freire’s attention to naming the world has been of great significance to those educators who have traditionally worked with those who do not have a voice, and who are oppressed. The idea of building a ‘pedagogy of the oppressed’ or a ‘pedagogy of hope’ and how this may be carried forward has formed a significant impetus to work. An important element of this was his concern with conscientization – developing consciousness, but consciousness that is understood to ‘have the power to transform reality’ (Taylor 1993: 52).

Fourth, Paulo Freire’s insistence on situating educational activity in the lived experience of participants has opened up a series of possibilities for the way informal educators can approach practice. His concern to look for words that have the possibility of generating new ways of naming and acting in the world when working with people around literacies is a good example of this.

Fifth, a number of informal educators have connected with Paulo Freire’s use of metaphors drawn from Christian sources. An example of this is the way in which the divide between teachers and learners can be transcended. In part this is to occur as learners develop their consciousness, but mainly it comes through the ‘class suicide’ or ‘Easter experience’ of the teacher.

The educator for liberation has to die as the unilateral educator of the educatees, in order to be born again as the educator-educatee of the educatees-educators. An educator is a person who has to live in the deep significance of Easter. Quoted by Paul Taylor (1993: 53)

Critique

Inevitably, there are various points of criticism. First, many are put off by Paulo Freire’s language and his appeal to mystical concerns. The former was a concern of Freire himself in later life – and his work after Pedagogy of the Oppressed was usually written within a more conversational or accessible framework.

Second, Paulo Freire tends to argue in an either/or way. We are either with the oppressed or against them. This may be an interesting starting point for teaching, but taken too literally it can make for rather simplistic (political) analysis.

Third, there is a tendency in Freire to overturn everyday situations so that they become pedagogical. Paulo Freire’s approach was largely constructed around structured educational situations. While his initial point of reference might be non-formal, the educational encounters he explores remain formal (Torres 1993: 127). In other words, his approach is still curriculum-based and entails transforming settings into a particular type of pedagogical space. This can rather work against the notion of dialogue (in that curriculum implies a predefined set of concerns and activities). Educators need to look for ‘teachable moments’ – but when we concentrate on this we can easily overlook the simple power of being in conversation with others.

Fourth, what is claimed as liberatory practice may, on close inspection, be rather closer to banking than we would wish. In other words, the practice of Freirian education can involve smuggling in all sorts of ideas and values under the guise of problem-posing. Taylor’s analysis of Freire’s literacy programme shows that:

… the rhetoric which announced the importance of dialogue, engagement, and equality, and denounced silence, massification and oppression, did not match in practice the subliminal messages and modes of a Banking System of education. Albeit benign, Freire’s approach differs only in degree, but not in kind, from the system which he so eloquently criticizes. (Taylor 1993: 148)

Educators have to teach. They have to transform transfers of information into a ‘real act of knowing’ (op cit: 43).

Fifth, there are problems regarding Freire’s model of literacy. While it may be taken as a challenge to the political projects of northern states, his analysis remains rooted in assumptions about cognitive development and the relation of literacy to rationality that are suspect (Street 1984: 14). His work has not ‘entirely shrugged off the assumptions of the “autonomous model”‘ (ibid.: 14).

Last, there are questions concerning the originality of Freire’s contribution. As Taylor has put it, to say, as many commentators do, that Freire’s thinking is ‘eclectic’ is ‘to underestimate the degree to which he borrowed directly from other sources’ (Taylor 1993: 34). Taylor (1993: 34-51) brings out a number of these influences and ‘absorptions’ – perhaps most interestingly the extent to which the structure of Pedagogy of the Oppressed parallels Kosik’s Dialectic of the Concrete (published in Spanish in the mid-1960s). Here we would simply invite you to compare Freire’s interests with those of Martin Buber. His concerns with conversation, encounter, being and ethical education have strong echoes in Freirian thought.

Further reading and references

Key texts:

Paulo Freire’s central work remains:

Freire, P. (1972) Pedagogy of the Oppressed, Harmondsworth: Penguin. Important exploration of dialogue and the possibilities for liberatory practice. Freire provides a rationale for a pedagogy of the oppressed; introduces the highly influential notion of banking education; highlights the contrasts between education forms that treat people as objects rather than subjects; and explores education as cultural action. See, also:

Freire, P. (1995) Pedagogy of Hope. Reliving Pedagogy of the Oppressed, New York: Continuum. This book began as a new preface to his classic work, but grew into a book. Its importance lies in Freire’s reflection on the text and how it was received, and on the development of policy and practice subsequently. Written in a direct and engaging way.

Biographical material: There are two useful English language starting points:

Freire, P. (1996) Letters to Cristina. Reflections on my life and work, London: Routledge. Retrospective on Freire’s work and life, in the form of letters to his niece. He looks back at his childhood experiences, to his youth, and his life as an educator and policymaker.

Gadotti, M. (1994) Reading Paulo Freire. His life and work, New York: SUNY Press. Clear presentation of Freire’s thinking set in historical context written by a close collaborator.

For my money the best critical exploration of his work is:

Taylor, P. (1993) The Texts of Paulo Freire, Buckingham: Open University Press.

Other references

Kosik, K. (1988) La dialectique du concret, Paris: Plon.

Street, B. V. (1984) Literacy in Theory and Practice, Cambridge: Cambridge University Press.

Torres, C. A. (1993) ‘From the “Pedagogy of the Oppressed” to “A Luta Continua”: the political pedagogy of Paulo Freire’ in P. McLaren and P. Leonard (eds.) Freire: A critical encounter, London: Routledge.

Links

Lesley Bentley – Paulo Freire. Brief biography plus lots of useful links.

Catedra Paulo Freire (Pontificia Universidad Catolica de Sao Paulo) – click for English version.

Blanca Facundo’s critique of Freire’s ideas, and reactions to Facundo’s critique – interesting collection of pieces.

Paulo Freire Institute – a wide range of material available about current work in the Freirian tradition. Click for the English version.

Daniel Schugurensky on Freire – a collection of reviews of his books and links to other pages.

Q&A: The Freirian Approach to Adult Literacy Education,  David Spener’s review for ERIC.

The Uses of Poverty: The Poor Pay All

In Education, Philosophy on Sunday, 16 September 2012 at 10:54

As promised, I am posting this for a reader. While it is an old article, I think it is still pertinent.

The Uses of Poverty: The Poor Pay All

Herbert J. Gans

Social Policy July/August 1971: pp. 20-24

Some twenty years ago Robert K. Merton applied the notion of functional analysis to explain the continuing though maligned existence of the urban political machine: if it continued to exist, perhaps it fulfilled latent – unintended or unrecognized – positive functions. Clearly it did. Merton pointed out how the political machine provided central authority to get things done when a decentralized local government could not act, humanized the services of the impersonal bureaucracy for fearful citizens, offered concrete help (rather than abstract law or justice) to the poor, and otherwise performed services needed or demanded by many people but considered unconventional or even illegal by formal public agencies.

Today, poverty is more maligned than the political machine ever was; yet it, too, is a persistent social phenomenon. Consequently, there may be some merit in applying functional analysis to poverty, in asking whether it also has positive functions that explain its persistence.

Merton defined functions as “those observed consequences [of a phenomenon] which make for the adaptation or adjustment of a given [social] system.” I shall use a slightly different definition; instead of identifying functions for an entire social system, I shall identify them for the interest groups, socio-economic classes, and other population aggregates with shared values that ‘inhabit’ a social system. I suspect that in a modern heterogeneous society, few phenomena are functional or dysfunctional for the society as a whole, and that most result in benefits to some groups and costs to others. Nor are any phenomena indispensable; in most instances, one can suggest what Merton calls “functional alternatives” or equivalents for them, i.e., other social patterns or policies that achieve the same positive functions but avoid the dysfunctions.

Associating poverty with positive functions seems at first glance to be unimaginable. Of course, the slumlord and the loan shark are commonly known to profit from the existence of poverty, but they are viewed as evil men, so their activities are classified among the dysfunctions of poverty. However, what is less often recognized, at least by the conventional wisdom, is that poverty also makes possible the existence or expansion of respectable professions and occupations, for example, penology, criminology, social work, and public health. More recently, the poor have provided jobs for professional and para-professional “poverty warriors,” and for journalists and social scientists, this author included, who have supplied the information demanded by the revival of public interest in poverty.

Clearly, then, poverty and the poor may well satisfy a number of positive functions for many non-poor groups in American society. I shall describe thirteen such functions – economic, social and political – that seem to me most significant.

The Functions of Poverty

First, the existence of poverty ensures that society’s “dirty work” will be done. Every society has such work: physically dirty or dangerous, temporary, dead-end and underpaid, undignified and menial jobs. Society can fill these jobs by paying higher wages than for “clean” work, or it can force people who have no other choice to do the dirty work – and at low wages. In America, poverty functions to provide a low-wage labor pool that is willing – or rather, unable to be unwilling – to perform dirty work at low cost. Indeed, this function of the poor is so important that in some Southern states, welfare payments have been cut off during the summer months when the poor are needed to work in the fields. Moreover, much of the debate about the Negative Income Tax and the Family Assistance Plan [welfare programs] has concerned their impact on the work incentive, by which is actually meant the incentive of the poor to do the needed dirty work if the wages therefrom are no larger than the income grant. Many economic activities that involve dirty work depend on the poor for their existence: restaurants, hospitals, parts of the garment industry, and “truck farming,” among others, could not persist in their present form without the poor.

Second, because the poor are required to work at low wages, they subsidize a variety of economic activities that benefit the affluent. For example, domestics subsidize the upper middle and upper classes, making life easier for their employers and freeing affluent women for a variety of professional, cultural, civic and partying activities. Similarly, because the poor pay a higher proportion of their income in property and sales taxes, among others, they subsidize many state and local governmental services that benefit more affluent groups. In addition, the poor support innovation in medical practice as patients in teaching and research hospitals and as guinea pigs in medical experiments.

Third, poverty creates jobs for a number of occupations and professions that serve or “service” the poor, or protect the rest of society from them. As already noted, penology would be minuscule without the poor, as would the police. Other activities and groups that flourish because of the existence of poverty are the numbers game, the sale of heroin and cheap wines and liquors, Pentecostal ministers, faith healers, prostitutes, pawn shops, and the peacetime army, which recruits its enlisted men mainly from among the poor.

Fourth, the poor buy goods others do not want and thus prolong the economic usefulness of such goods – day-old bread, fruit and vegetables that otherwise would have to be thrown out, secondhand clothes, and deteriorating automobiles and buildings. They also provide incomes for doctors, lawyers, teachers, and others who are too old, poorly trained or incompetent to attract more affluent clients.

In addition to economic functions, the poor perform a number of social functions:

Fifth, the poor can be identified and punished as alleged or real deviants in order to uphold the legitimacy of conventional norms. To justify the desirability of hard work, thrift, honesty, and monogamy, for example, the defenders of these norms must be able to find people who can be accused of being lazy, spendthrift, dishonest, and promiscuous. Although there is some evidence that the poor are about as moral and law-abiding as anyone else, they are more likely than middle-class transgressors to be caught and punished when they participate in deviant acts. Moreover, they lack the political and cultural power to correct the stereotypes that other people hold of them and thus continue to be thought of as lazy, spendthrift, etc., by those who need living proof that moral deviance does not pay.

Sixth, and conversely, the poor offer vicarious participation to the rest of the population in the uninhibited sexual, alcoholic, and narcotic behavior in which they are alleged to participate and which, being freed from the constraints of affluence, they are often thought to enjoy more than the middle classes. Thus many people, some social scientists included, believe that the poor not only are more given to uninhibited behavior (which may be true, although it is often motivated by despair more than by lack of inhibition) but derive more pleasure from it than affluent people (which research by Lee Rainwater, Walter Miller and others shows to be patently untrue). However, whether the poor actually have more sex and enjoy it more is irrelevant; so long as middle-class people believe this to be true, they can participate in it vicariously when instances are reported in factual or fictional form.

Seventh, the poor also serve a direct cultural function when culture created by or for them is adopted by the more affluent. The rich often collect artifacts from extinct folk cultures of poor people; and almost all Americans listen to the blues, Negro spirituals, and country music, which originated among the Southern poor. Recently they have enjoyed the rock styles that were born, like the Beatles, in the slums, and in the last year, poetry written by ghetto children has become popular in literary circles. The poor also serve as culture heroes, particularly, of course, to the Left; but the hobo, the cowboy, the hipster, and the mythical prostitute with a heart of gold have performed this function for a variety of groups.

Eighth, poverty helps to guarantee the status of those who are not poor. In every hierarchical society, someone has to be at the bottom; but in American society, in which social mobility is an important goal for many and people need to know where they stand, the poor function as a reliable and relatively permanent measuring rod for status comparisons. This is particularly true for the working class, whose politics is influenced by the need to maintain status distinctions between themselves and the poor, much as the aristocracy must find ways of distinguishing itself from the nouveaux riches.

Ninth, the poor also aid the upward mobility of groups just above them in the class hierarchy. Thus a goodly number of Americans have entered the middle class through the profits earned from the provision of goods and services in the slums, including illegal or nonrespectable ones that upper-class and upper-middle-class businessmen shun because of their low prestige. As a result, members of almost every immigrant group have financed their upward mobility by providing slum housing, entertainment, gambling, narcotics, etc., to later arrivals – most recently to Blacks and Puerto Ricans.

Tenth, the poor help to keep the aristocracy busy, thus justifying its continued existence. “Society” uses the poor as clients of settlement houses and beneficiaries of charity affairs; indeed, the aristocracy must have the poor to demonstrate its superiority over other elites who devote themselves to earning money.

Eleventh, the poor, being powerless, can be made to absorb the costs of change and growth in American society. During the nineteenth century, they did the backbreaking work that built the cities; today, they are pushed out of their neighborhoods to make room for “progress.” Urban renewal projects to hold middle-class taxpayers in the city and expressways to enable suburbanites to commute downtown have typically been located in poor neighborhoods, since no other group will allow itself to be displaced. For the same reason, universities, hospitals, and civic centers also expand into land occupied by the poor. The major costs of the industrialization of agriculture have been borne by the poor, who are pushed off the land without recompense; and they have paid a large share of the human cost of the growth of American power overseas, for they have provided many of the foot soldiers for Vietnam and other wars.

Twelfth, the poor facilitate and stabilize the American political process. Because they vote and participate in politics less than other groups, the political system is often free to ignore them. Moreover, since they can rarely support Republicans, they often provide the Democrats with a captive constituency that has no other place to go. As a result, the Democrats can count on their votes, and be more responsive to voters – for example, the white working class – who might otherwise switch to the Republicans.

Thirteenth, the role of the poor in upholding conventional norms (see the fifth point, above) also has a significant political function. An economy based on the ideology of laissez faire requires a deprived population that is allegedly unwilling to work or that can be considered inferior because it must accept charity or welfare in order to survive. Not only does the alleged moral deviancy of the poor reduce the moral pressure on the present political economy to eliminate poverty but socialist alternatives can be made to look quite unattractive if those who will benefit most from them can be described as lazy, spendthrift, dishonest and promiscuous.

The Alternatives

I have described thirteen of the more important functions poverty and the poor satisfy in American society, enough to support the functionalist thesis that poverty, like any other social phenomenon, survives in part because it is useful to society or some of its parts. This analysis is not intended to suggest that because it is often functional, poverty should exist, or that it must exist. For one thing, poverty has many more dysfunctions than functions; for another, it is possible to suggest functional alternatives.

For example, society’s dirty work could be done without poverty, either by automation or by paying “dirty workers” decent wages. Nor is it necessary for the poor to subsidize the many activities they support through their low-wage jobs. This would, however, drive up the costs of these activities, which would result in higher prices to their customers and clients. Similarly, many of the professionals who flourish because of the poor could be given other roles. Social workers could provide counseling to the affluent, as they prefer to do anyway; and the police could devote themselves to traffic and organized crime. Other roles would have to be found for badly trained or incompetent professionals now relegated to serving the poor, and someone else would have to pay their salaries. Fewer penologists would be employable, however. And Pentecostal religion probably could not survive without the poor – nor would parts of the second- and third-hand goods market. And in many cities, “used” housing that no one else wants would then have to be torn down at public expense.

Alternatives for the cultural functions of the poor could be found more easily and cheaply. Indeed, entertainers, hippies, and adolescents are already serving as the deviants needed to uphold traditional morality and as devotees of orgies to “staff” the fantasies of vicarious participation.

The status functions of the poor are another matter. In a hierarchical society, some people must be defined as inferior to everyone else with respect to a variety of attributes, but they need not be poor in the absolute sense. One could conceive of a society in which the “lower class,” though last in the pecking order, received 75 percent of the median income, rather than 15-40 percent, as is now the case. Needless to say, this would require considerable income redistribution.

The contribution the poor make to the upward mobility of the groups that provide them with goods and services could also be maintained without the poor having such low incomes. However, it is true that if the poor were more affluent, they would have access to enough capital to take over the provider role, thus competing with and perhaps rejecting the “outsiders.” (Indeed, owing in part to antipoverty programs, this is already happening in a number of ghettos, where white store owners are being replaced by Blacks.) Similarly, if the poor were more affluent, they would make less willing clients for upper-class philanthropy, although some would still use settlement houses to achieve upward mobility, as they do now. Thus “Society” could continue to run its philanthropic activities.

The political functions of the poor would be more difficult to replace. With increased affluence the poor would probably obtain more political power and be more active politically. With higher incomes and more political power, the poor would be likely to resist paying the costs of growth and change. Of course, it is possible to imagine urban renewal and highway projects that properly reimbursed the displaced people, but such projects would then become considerably more expensive, and many might never be built. This, in turn, would reduce the comfort and convenience of those who now benefit from urban renewal and expressways. Finally, hippies could also serve as deviants to justify the existing political economy – as they already do. Presumably, however, if poverty were eliminated, there would be fewer attacks on that economy.

In sum, then, many of the functions served by the poor could be replaced if poverty were eliminated, but almost always at higher costs to others, particularly more affluent others. Consequently, a functional analysis must conclude that poverty persists not only because it fulfills a number of positive functions but also because many of the functional alternatives to poverty would be quite dysfunctional for the affluent members of society. A functional analysis thus ultimately arrives at much the same conclusion as radical sociology, except that radical thinkers treat as manifest what I describe as latent: that social phenomena that are functional for affluent or powerful groups and dysfunctional for poor or powerless ones persist; that when the elimination of such phenomena through functional alternatives would generate dysfunctions for the affluent or powerful, they will continue to persist; and that phenomena like poverty can be eliminated only when they become dysfunctional for the affluent or powerful, or when the powerless can obtain enough power to change society.

Managing Adverse Effects to Optimize Treatment for ADHD

In ADHD, ADHD Adult, ADHD child/adolescent, ADHD stimulant treatment, Medication, Psychiatry, School Psychology on Sunday, 16 September 2012 at 10:35

Managing Adverse Effects to Optimize Treatment for ADHD

http://www.medscape.org/viewarticle/583252

Introduction

Attention-deficit/hyperactivity disorder (ADHD) begins in early childhood, and at least 50% of children will go on to have symptoms and impairment in adulthood.[1] Treatment requires a combination of medication and counseling, and adherence to medication therapy is essential for good outcomes. Managing adverse effects is a key component of effective treatment. Diagnosis and treatment of psychiatric comorbidity, which is common, is another essential aspect of care. This review will examine common adverse effects, prescribing medication successfully, deciding when to switch to an alternative medication, and some aspects of using concomitant medication.

Initiating Treatment

Diagnosis

According to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR), the diagnosis of ADHD requires symptom onset before age 7 years. When evaluating children, parent and teacher input is essential and easy to obtain. Although some investigators have suggested that adult-onset ADHD is possible,[2] a full evaluation of an adult involves attempts to document symptoms and impairment in childhood. Interviews with parents and examination of school or medical records are often helpful.

Monitoring treatment success requires documentation of baseline functional impairment. In adults, collateral interviews with partners or even coworkers, with the patient’s permission, may be illuminating. Adults with ADHD experience important consequences from their impaired functioning. In a case-control study of 500 adults, those with ADHD had lower educational attainment, less job stability, lower incomes, and less successful relationships.[3] The evaluating clinician should investigate all of these areas.

The other essential aspect of evaluation is screening for comorbidity. In many cases, ADHD is not the chief complaint but comes to light during evaluation of another symptom. The most prevalent comorbid conditions are depression, bipolar disorder, and anxiety disorders.[4] Substance-use disorders including nicotine dependence are also more common in people with ADHD than in the general population.

Patient Education

Once the diagnosis is established, the physician should explain the implications and the proposed treatment plan. Educating patients and families about both the therapeutic and adverse effects of pharmacotherapy will help them know what to expect. Describing the benefits of treatment, including possible improvements in psychosocial outcomes, will allow a fully informed decision.

After learning about the side-effect profile of psychostimulants, a few patients who are ambivalent about medication may reject that treatment option. Nonstimulants should also be discussed to provide the full range of options, but the clinician should mention the trade-off of lower efficacy of nonstimulants compared with psychostimulants.[5] Once a patient has consented to a specific medication, the physician should explain the minimum trial duration necessary to determine a response and the dose-adjustment schedule. Clearly worded written information about the medication is usually appreciated by patients and their families. The informed-consent process should be documented.

Managing Adverse Effects

The common adverse effects of treatment are inherent in the pharmacodynamics of stimulant medication. Enhanced catecholamine neurotransmission in the central and autonomic nervous systems can cause insomnia, anorexia, and increased heart rate and blood pressure. These effects are most noticeable at the outset of treatment and after increases in dose. Patients often adjust to them during the ensuing weeks but may require encouragement during that interval.

Insomnia

Studies show that adults and children with untreated ADHD experience sleep anomalies compared with control subjects. A review of sleep studies of unmedicated children found evidence of more nocturnal motor activity and daytime somnolence compared with controls.[6] An actigraphic study of 33 adults with ADHD and 39 control subjects found similar differences between the groups at baseline, and sleep latency was prolonged in the ADHD subjects.[7] After treatment with methylphenidate, the adult patients continued to experience prolonged sleep latency and less total sleep duration, but sleep efficiency improved.

In a study that used the most comprehensive method of evaluating sleep, polysomnography in a sleep lab, 34 untreated adults with ADHD had increased nocturnal activity, reduced sleep efficiency, more awakenings, and reduced REM sleep compared with control subjects.[8] For 10 patients who were treated with open-label methylphenidate, repeat polysomnography showed better sleep efficiency, and the patients also reported improved restorative value of sleep.

Clinicians can conclude from these studies that the effect of medication on sleep may be beneficial in at least some patients, but further research with more subjects and with a variety of medications is needed. The fact remains that many patients treated with psychostimulants complain of initial insomnia, so an approach to manage this problem is necessary. Clinicians should document sleep patterns and complaints before treatment to help interpret problems that may arise after medication is prescribed.

Sleep hygiene, consisting of simple behavioral approaches to promote sound sleep (eg, creating a restful environment and avoiding caffeine), is an inexpensive intervention for all patients with insomnia. In a study of initial insomnia in 27 children aged 6-14 years treated for ADHD with psychostimulants, the researchers provided a sleep hygiene intervention to which 5 of the children responded.[9] They randomly assigned the nonresponders to either 5 mg of melatonin or placebo. Adverse effects of placebo and melatonin were not significantly different. The investigators found the combination of sleep hygiene and melatonin to be safe and effective, with an effect size of 1.7.

Although comparable randomized, controlled trial data do not exist for adults, mirtazapine has been reported as safe and effective for adults taking psychostimulants.[10]

Atomoxetine may have an effect on sleep that is different from that of psychostimulants, including reduced sleep latency but less efficiency. In a randomized, double-blinded, crossover trial, methylphenidate treatment for children with ADHD caused more initial insomnia but fewer awakenings compared with atomoxetine treatment.[11] Switching to atomoxetine may be considered for patients who prefer it or who do not respond to adjunctive interventions for stimulant-associated insomnia.

Appetite and Growth

Appetite reduction is common with psychostimulants and also can occur with nonstimulants, including atomoxetine and bupropion. This may be accompanied by nausea and abdominal pain in some patients. Some adults treated with psychostimulants may regard appetite suppression with resultant weight loss as beneficial. With long-acting stimulants, appetite returns later in the day.

Simple approaches to this problem include eating breakfast before taking medication. Having food in the stomach may also help reduce abdominal symptoms. Children in particular should have a nutritious, high-calorie snack in the evening if their food intake has been low since breakfast. However, parents should be warned to monitor evening intake of empty calories, such as candy and chips.

Weight loss or a downward shift of weight percentile is typical in children treated with psychostimulants. Short-term reduction in height growth rate during the initial 1-3 years of treatment with psychostimulants is well documented. In a literature review article, Poulton[12] concluded that a mean 1 cm/year deficit in height occurs during that interval. Less conclusive findings included a possible negative correlation between dose and growth, greater growth effect from dextroamphetamine than from methylphenidate, and rebound in growth of height and weight after discontinuation of stimulants.

More controversial is the effect on final stature. According to Poulton, “It would appear that most children achieve a satisfactory adult height, but there may be an important subgroup whose growth is permanently attenuated.”[12] Clinicians must discuss this with parents, many of whom will already have some concerns about the issue, and monitor children’s height and weight, ideally at each visit.

Research on atomoxetine is less comprehensive, but available evidence suggests a short-term downward shift in height and weight percentile. The effect on height may be minimal,[13] but longer-term studies are needed.

In a child or adult with worrisome weight loss, or if a child’s parents are anxious about growth deceleration, switching to another medication should be considered. Substituting methylphenidate for amphetamine would be more rational than substituting amphetamine for methylphenidate, but a nonstimulant is more likely to be ameliorative.

Affective Symptoms

Irritability, dysphoria, and (rarely) suicidal ideation can occur during treatment of ADHD.[14] Atomoxetine carries an FDA warning of a 0.4% incidence of suicidal ideation that has occurred in children during the first month of therapy.[15] No completed suicides have been reported, but discontinuation of atomoxetine is indicated if suicidal thoughts emerge. Minor mood changes and irritability occur with both psychostimulants and atomoxetine. Little evidence is available to guide intervention, but if the symptom is severe, the clinician may consider dose reduction, switching to an alternative psychostimulant, or trying an antidepressant nonstimulant such as bupropion or nortriptyline.

Psychosis and Mania

As dopamine transmission agonists, psychostimulants at excessive and prolonged doses would be expected to provoke psychotic symptoms or mania. These are well-reported but uncommon adverse effects during treatment in children, with an incidence estimated at 0.25%.[16] Emergent delusions, hallucinations, mania, or disorganized behavior require treatment discontinuation. Most such symptoms resolve, but in a few cases, a bipolar disorder may be unmasked, which takes treatment priority.

Cardiovascular Effects

Psychostimulants cause increased heart rate and blood pressure in adults and children. The effect is mild in most cases, but in adults, some patients with borderline baseline blood pressure may develop frank hypertension. In a 24-month study of 223 adults treated with mixed amphetamine salts, 5 subjects developed hypertension and 2 experienced palpitations or tachycardia that required medication discontinuation.[17]

In a manufacturer-sponsored review of clinical-trial data, atomoxetine was found to cause small but clinically insignificant effects on blood pressure and heart rate in children, adolescents, and adults.[18] Treatment discontinuation for these effects was necessary only in a few adults. In managing any patient on psychostimulants or atomoxetine, clinicians should document pulse rate and blood pressure at baseline and every 6 months, with more frequent monitoring of patients with elevated risk for hypertension.

A more controversial aspect of ADHD medications is the effect on cardiac conduction and the rare occurrence of sudden death. An unpublished review of documented cases of sudden death in children and adults treated with stimulants or atomoxetine through 2005 found that many of these patients had an underlying cardiac anomaly discovered on autopsy or were taking other medications.[19] Furthermore, psychostimulants have little effect on the QTc interval. Data on atomoxetine are conflicting, with US trials suggesting no QTc effect.[14] A Europe-wide postmarketing surveillance study, however, found a small number of cases of QTc prolongation that resolved with medication discontinuation.[20]

Whether a baseline electrocardiogram (ECG) is necessary for every patient is a matter of debate among specialists. Dr. David Goodman, an ADHD researcher and clinician, recommends specific screening for cardiac risk.[21] The 5 items he inquires about are history of spontaneous syncope, exercise-induced syncope, exercise-induced chest pain, sudden death in family members age 30 years and younger, and a family history of structural or electrical abnormalities. An ECG — and in ambiguous situations, specialist consultation — would be appropriate before initiating medication in older adults or any patient with risk factors.

Complex Psychopharmacology

Because comorbidity is common with ADHD, clinicians may prescribe psychostimulants with other medications, such as antidepressants, mood stabilizers, or antipsychotics. In fact, experienced psychopharmacologists often prescribe psychostimulants adjunctively for adults with treatment-resistant depression. Atomoxetine metabolism and a small portion of amphetamine metabolism involve CYP2D6, so caution is appropriate when combining these medications with fluoxetine, paroxetine, or fluvoxamine, which inhibit the enzyme.

Tricyclic antidepressants have been safely prescribed with psychostimulants, although several case reports exist of increased adverse effects with the combination of imipramine and methylphenidate.[22] Psychostimulants combined with monoamine oxidase inhibitors may cause a hypertensive crisis; coadministration is contraindicated.

The comorbidity of bipolar disorder and ADHD remains an area of active research and controversy. In a recent randomized, controlled trial, 40 children 6-17 years old with bipolar mania or hypomania and ADHD received divalproex for 8 weeks.[23] The 30 whose mood stabilized but who had active ADHD symptoms received mixed amphetamine salts. The researchers reported no significant adverse effects or worsening of mania. Similar controlled trials in adults are lacking, but in a retrospective study of 16 adult patients with bipolar disorder who were receiving methylphenidate, 5 patients had comorbid ADHD.[24] The others received a stimulant for depression. The patients were also taking various mood stabilizers, including divalproex, lithium, carbamazepine, lamotrigine, and second-generation antipsychotics. The investigators concluded that the practice was safe and effective, although “mild to moderate side effects” occurred, the single most common of which was irritability.

Conclusion

Initiating treatment with psychostimulants is no different from initiating other psychiatric medications. The key steps are:

  • Obtaining baseline data and, in exceptional cases, specialist consultation;
  • Educating patients and families about risks and benefits;
  • Documenting informed consent; and
  • Monitoring adverse effects and intervening as needed.

Rare adverse effects, such as jaundice, skin reactions, vasculitis, and thrombocytopenia, are idiosyncratic, and routine testing for them is not cost-effective.[14] Any unusual complaints should prompt further investigation. Regular documentation of pulse and blood pressure (and growth in children) is mandatory. Most adverse effects can be managed by reassurance or dose reduction, but switching to a different agent may at times be necessary. Combining medications for comorbidities is justifiable and often safe if diagnoses and rationale are well documented, but evidence of efficacy is not well established.

 

ADHD and Sensory Defensiveness

In ADHD, ADHD Adult, ADHD child/adolescent, School Psychology on Sunday, 16 September 2012 at 10:30

ADD and Hypersensitivity:
Is There A Connection?

Follow Up Report by Mary Jane Johnson

from http://www.oneaddplace.com

It has been several months now since I reported on ADHD and hypersensitivity. Since that time I have heard from several ADD adults who suffer some of the same symptoms. One person sent me an article entitled “Social and Emotional Issues of Adults with Sensory Defensiveness” from the Sensory Integration Newsletter published by The American Occupational Therapy Assoc.

Many of these same hypersensitivities are mentioned in this article and I will quote from the article as well as what was shared by the readers who wrote to me. Sensory Integration Newsletter states, “Adults with tactile defensiveness commonly report strong clothing preferences and avoidances, and aversions to clothes with tags, jewelry… may also feel uncomfortable with wool or synthetic materials against the skin… and may be bothered by these aversions to an extreme degree.”

Along those same lines, K. wrote in that, “I have to keep my shoes tied tight on my feet… If they are not tight I get frustrated… I find that I constantly re-tie my shoes as tight as possible, during the day.” And D. relates, “I agree completely about the elastic… I also do not like sleeves, high collars, knee socks that fall down, tags on the inside of shirts, anything touching my skin that isn’t soft or cottony, slacks too tight in the crotch… I hate panty hose… I don’t wear my coat in the car, I have a nice thin vest with lots of pockets that I wear while shopping.”

Regarding sensitivity to food textures, M. shares, “My dad as a child couldn’t stand different foods to touch, so my grandmother bought him a compartmentalized plate… I had to do the same for my son… He stopped picking up wet finger foods or food that made his hands sticky… He wanted a different spoon or fork for each food… He wanted only bland soft foods and to this day there are very few foods he likes… My taste is more sensitive than the others in my family.” D. says, “I am also a picky eater, but I love spicy food. I can’t stand browned scrambled eggs and my fried eggs must be perfect.”

When it comes to heat and cold sensitivity, M. writes, “If it gets around 70 degrees I’m cold… That’s why we live in the desert… my hands and feet seem to always have had poor circulation… My hands get cold inside good leather gloves.” K. states, “…especially cold… I need to dress and keep the house warm as soon as cool weather moves in… If I didn’t love New England so much I would probably live in a warm climate year round.”

The remarks about hearing sensitivity include: M., “…clock in the living room because he could hear it ticking all the way in his room… My son can sleep through noises but certain frequencies hurt or upset him… I travel with a Sears ‘sleepmate’ white noise machine. I can’t sleep without masking the noise. I annoy the heck out of my husband by my ability to hear the TV at the other end of the house… I can’t have a ticking clock in the room where I sleep… My dad also has a noise machine.”

K. adds, “…when trying to focus on things I can’t filter out noises… While typing this letter I can hear water dripping in the next room, the refrigerator turning on and off, and a car engine idling outside.” And D., “I enjoy loud music, but only when I feel like it. I think that’s why people think we are selfish at times.”

Sensory Integration Newsletter reports, “Social events… put the person with defensiveness in an uncomfortable situation… Almost all subjects described the discomfort experienced when someone’s touch takes them by surprise… Many subjects describe shaking hands as unpleasant… When the touch or hug comes from behind, its effect is multiplied because of the element of surprise… and many need to exert self-control to avoid striking out at the person who touched them.”

M. shares, “I don’t like being touched… even shaking hands is difficult… It’s taken my husband years to learn how to touch me without provoking a negative response… Touching my head or hair is a no no!” A twist on this particular hypersensitivity comes from D., who says, “I am happy to say the hypersensitivity to touch, in the romantic sense, is more often a plus than a minus.” And K. adds, “I don’t mind shaking someone’s hand, but forget hugging… Whenever someone hugs me, I tense up and my stomach gets tied up in knots… Being married, my wife loves to be touched… I get all tied up when she hugs me or wants to be hugged or held… It tears me apart because I love my wife and yet for some reason want my space.”

“Most subjects described feeling uncomfortable in crowded places such as crowded elevators, buses, or subways, restaurants, stores, malls… Shopping is difficult for them,” states Sensory Integration Newsletter. K. agrees, saying, “I have difficulty going into elevators, and detest having to go to malls, food stores, sporting events, etc… I get very over-stimulated, overwhelmed and irritable until I’m free from crowded areas.” D. states, “I notice that claustrophobia is more evident when I am somewhere I don’t like to be, such as in a car on a trip of more than an hour.” In some individuals there seems to be a connection between having ADD and being hypersensitive, as these cases indicate.

ADULT SENSORY QUESTIONNAIRE (ASQ)

(Kinnealey and Oliver, © 2002)

Circle T (true) or F (false) for each item as it applies to you.

1.  T   F I am sensitive and get bothered by smells that don’t seem to bother other people.

2.  T   F I am sensitive or bothered by sounds that don’t seem to bother other people.
3.  T   F I am bothered by looking down a long flight of stairs or going down an escalator.
4.  T   F I get car sick.
5.  T   F I am sensitive to movement.  I get dizzy very easily.
6.  T   F I am sensitive to and bothered by lights/contrasts/reflections or objects close to my face (that don’t seem to bother others).
7.  T   F I am bothered by some food textures in my mouth (or I avoid them).
8.  T   F It bothers me to be barefoot on grass or sand.
9.  T   F I am bothered by tags and labels in my clothes (or I remove them).
10. T   F I am bothered by turtleneck shirts, tight fitting clothes, elastic, nylons, or synthetic material in clothes (any of the above).
11. T   F I am bothered by the feeling of jewelry (or I never wear it because of this).
12. T   F I am very aware that certain parts of my body are very sensitive.
13. T   F I avoid putting creams and lotions on my skin because of how it feels.
14. T   F I have a sensitive scalp.
15. T   F I do not like being in crowded areas such as elevators, malls, subways, crowded shops or bars (or I never put myself in these situations).
16. T   F Growing up, I did not like to be hugged (except by my mother).
17. T   F I am often uncomfortable with physical intimacy because touching bothers me.
18. T   F I feel bothered when someone touches me from behind or unexpectedly, or stands too close.
19. T   F I was very active as a child (or I am now).
20. T   F I have mood swings more than other people.
21. T   F I do not go to sleep easily and wake up easily and/or I don’t sleep between 6 and 8 hours each night.
22. T   F I consider myself to be anxious.
23. T   F I feel I must mentally prepare myself for situations in which people are apt to touch me.
24. T   F It is important for me to be in control and know what to expect.
25. T   F I am perfectionistic, or compulsive.
26. T   F I avoid if at all possible, situations in which my senses will be stressed.

____________ Total Score (count up the number of “Trues”)

Scoring:           

> 10   = definite sensory defensiveness

6 – 10 = moderate sensory defensiveness

< 6      = not sensory defensive
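The scoring above is a simple count-and-threshold rule. As an illustration only, here is a minimal sketch in Python; the function name and the list-of-booleans representation are my own assumptions, not part of the published questionnaire:

```python
def score_asq(answers):
    """Score the Adult Sensory Questionnaire (Kinnealey & Oliver, 2002).

    `answers` is a list of 26 booleans, one per item, True where "T" was circled.
    Returns the total number of "True" responses and the interpretation band
    from the scoring key above.
    """
    total = sum(answers)  # count the items marked True
    if total > 10:
        band = "definite sensory defensiveness"
    elif total >= 6:
        band = "moderate sensory defensiveness"
    else:
        band = "not sensory defensive"
    return total, band

# Example: a respondent who circled "T" on 12 of the 26 items
total, band = score_asq([True] * 12 + [False] * 14)
# total == 12, band == "definite sensory defensiveness"
```

Note that the published key covers every possible total: 0-5 is "not sensory defensive," 6-10 is "moderate," and 11-26 is "definite."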

 

some days you’re the pigeon…

In Fitness/Health, Humor, Inspiration, Mindfulness, Well-being on Sunday, 16 September 2012 at 10:23

i can’t recall where i found this but i really like it!

***

A lecturer, when explaining stress management to an audience, raised a glass of water and asked, “How heavy is this glass of water?”

Answers called out ranged from 20g to 500g.

The lecturer replied, “The absolute weight doesn’t matter. It depends on how long you try to hold it. If I hold it for a minute, that’s not a problem. If I hold it for an hour, I’ll have an ache in my right arm. If I hold it for a day, you’ll have to call an ambulance. In each case, it’s the same weight, but the longer I hold it, the heavier it becomes. And that’s the way it is with stress management. If we carry our burdens all the time, sooner or later, as the burden becomes increasingly heavy, we won’t be able to carry on. As with the glass of water, you have to put it down for a while and rest before holding it again. When we’re refreshed, we can carry on with the burden.

“So, before you return home tonight, put the burden of work down. Don’t carry it home. You can pick it up tomorrow. Whatever burdens you’re carrying now, let them down for a moment if you can. Relax; pick them up later after you’ve rested. Life is short. Enjoy it!”

And then he shared some ways of dealing with the burdens of life:

  • Accept that some days you’re the pigeon, and some days you’re the statue.
  • Always keep your words soft and sweet, just in case you have to eat them.
  • Always read stuff that will make you look good if you die in the middle of it.
  • Drive carefully. It’s not only cars that can be recalled by their maker.
  • If you can’t be kind, at least have the decency to be vague.
  • If you lend someone $20 and never see that person again, it was probably worth it.
  • It may be that your sole purpose in life is simply to serve as a warning to others.
  • Never put both feet in your mouth at the same time, because then you won’t have a leg to stand on.
  • Nobody cares if you can’t dance well. Just get up and dance.
  • Since it’s the early worm that gets eaten by the bird…sleep late.
  • The second mouse gets the cheese. (so, don’t always be in such a hurry)
  • You may be only one person in the world, but you may also be the world to one person.
  • We could learn a lot from crayons. Some are sharp, some are pretty and some are dull. Some have weird names, and all are different colors, but they all have to live in the same box.
  • A truly happy person is one who can enjoy the scenery on a detour.

skepticism is healthy…

In Philosophy on Sunday, 16 September 2012 at 07:14

always view everything with a healthy dose of skepticism. make your own conclusions.

Healthy Diet and ADHD

In ADHD, ADHD Adult, ADHD child/adolescent, Alternative Health, School Psychology on Sunday, 16 September 2012 at 05:16

Healthy vs Western Diet Linked to Better Outcomes in ADHD

Megan Brooks & Penny Murata, MD

http://www.medscape.org/viewarticle/757166

Clinical Context

In children with attention-deficit/hyperactivity disorder (ADHD), the effectiveness of diet and dietary supplements is not clear. Dietary measures that have been proposed include sugar restriction; the additive- and salicylate-free Feingold diet; the oligoantigenic or elimination diet; the ketogenic diet; and megavitamin and polyunsaturated fatty acid (PUFA) supplements. In the July 2011 issue of the Journal of Attention Disorders, Howard and colleagues reported a link between ADHD and a “Western” diet high in fat, refined sugars, and sodium.

This review of the literature assesses the evidence for dietary treatment in children with ADHD.

Study Synopsis and Perspective

When drug therapy fails to control ADHD or is unacceptable, adopting a “healthy” diet, eliminating items known to predispose to ADHD, and adding omega-3 fatty acid supplementation may be worth trying, new research suggests.

“The recent increase of interest in this form of therapy for ADHD, and especially in the use of omega supplements, significance of iron deficiency, and the avoidance of the ‘Western pattern’ diet, make the discussion timely,” the authors write.

Many parents and physicians continue to be interested in the role of diet and dietary changes, particularly parents wanting to find an alternative to stimulant medication or a complementary therapy. Nevertheless, it remains a “controversial” topic, the authors note.

For their review, J. Gordon Millichap, MD, and Michelle M. Yee, CPNP, from Children’s Memorial Hospital in Chicago, Illinois, searched PubMed for relevant studies on the role of diet and dietary supplements for the treatment of children with ADHD.

They note that their recommendations on diet and dietary supplements are based on a critical review of the data and their own experience in a neurology clinic for children and adolescents with ADHD.

The study was published online on January 9 in Pediatrics.

Elimination Diets Not Advisable

Perhaps the “most promising and practical” complementary or alternative treatment, write Dr. Millichap and Ms. Yee, is adopting a “healthy” dietary pattern, omitting items shown to predispose to ADHD or to make the condition worse. These items include fast foods, red meat, processed meat, potato chips, high-fat dairy foods, and soft drinks.

They point to a “provocative” study published last year, which found a link between ADHD in adolescents and a “Western-style” dietary pattern that was high in fat, refined sugars, and sodium and low in fiber, folate, and omega-3 fatty acids (Howard et al, J Atten Disord. 2011;15:403-411). ADHD was not associated with a “healthy” dietary pattern rich in fish, vegetables, fruit, legumes, and whole-grain foods.

Adopting a healthy dietary pattern “may offer an alternative method of treatment of ADHD and less reliance on medications,” the authors of the current study write.

They also note that although many parents report worsening of hyperactivity symptoms after consumption of foods and drinks containing sugar or aspartame (and isolated reports support the parents’ observations), most controlled studies have failed to find a significant harmful effect of either.

Additionally, they say that eliminating sugar and aspartame and adopting additive-free diets are complicated, disruptive, and often impractical; such measures are indicated only in select cases.

Fatty Acid Supplements May Be Helpful

Low levels of long-chain PUFAs have been reported in the plasma and red cells of children with ADHD in comparison with their ADHD-free peers, Dr. Millichap and Ms. Yee note. Some studies have demonstrated a reduction in ADHD symptoms with PUFA supplementation, although no definitive conclusions can be drawn.

However, the authors note that “on the basis of reports of efficacy and safety, we use doses of 300 to 600 mg/day of omega-3, and 30 to 60 mg/day of omega-6 fatty acids, continued for 2 or 3 months, or longer if indicated.”

“As initial or add-on therapy, we have occasional reports of improved school grades and lessening of symptoms of ADHD, without occurrence of adverse effects. Most parents are enthusiastic about trying the diet supplements, despite our explanation of only possible benefit and lack of proof of efficacy,” they note.

They also note that iron and zinc supplementation is advisable when there is a known deficiency in these minerals, and this may “enhance the effectiveness” of stimulant therapy.

Pediatrics. Published online January 9, 2012.

Related Link
The National Institute of Mental Health’s Attention Deficit Hyperactivity Disorder (ADHD) site offers a wide range of information helpful for parent education including a downloadable booklet discussing the condition and its management.

Study Highlights

  • This review provides an overview of the role of diet in children with ADHD. According to several studies, the following supplements, foods, and diets affect children’s health outcomes in various ways.
  • Omega-3 and omega-6 fatty acid supplements
    • Low long-chain PUFA levels were reported in children with ADHD vs control patients.
    • Some studies showed that PUFA reduced ADHD symptoms, but other studies did not.
    • Doses of omega-3, 300 to 600 mg/day, and omega-6, 30 to 60 mg/day, for 2 to 3 months or longer have been used.
    • Concurrent ADHD medication is almost always needed.
  • Additive and salicylate-free (Feingold) diet
    • Adherence to the diet is complicated and may be disruptive or impractical.
    • Foods to be avoided are apples, grapes, luncheon meats, sausage, hot dogs, and cold drinks with artificial flavors and coloring agents.
    • Permitted foods are grapefruit, pears, pineapple, bananas, beef, lamb, plain bread, certain cereals, milk, eggs, and color-free vitamins.
    • Controlled trials found a small subgroup of preschool children had an adverse response to challenges of additives and preservatives.
    • Children with ADHD and atopy vs no atopy have a higher response to elimination of foods, artificial colorings, and preservatives.
  • Oligoantigenic (hypoallergenic/elimination) diet
    • Adherence to the diet is complicated and may be disruptive or impractical.
    • The oligoantigenic diet eliminates sensitizing food antigens or allergens, including cow’s milk, cheese, wheat cereals, egg, chocolate, nuts, and citrus fruits.
    • Elimination of some foods appeared to decrease some ADHD symptoms, but plays an uncertain role in ADHD treatment.
    • A 2- to 3-week period of elimination diet is followed by the reintroduction of single items each week until the food sensitivity is identified.
    • Behavior improvements might not occur for up to 2 weeks.
    • Enzyme-potentiated desensitization might enable children to become tolerant of provoking foods.
  • Sugar and aspartame
    • Sugar does not affect behavior or cognitive performance, but might affect a subset.
    • In preschool boys, daily sucrose and total sugar intake correlated with duration of aggression.
    • Reactive hypoglycemia after sugar load might reduce cognitive function.
    • Hypoglycemia is linked with impaired electrical activity of the cerebral cortex and slow rhythms on electroencephalogram.
  • Ketogenic diet
    • A ketogenic diet high in fats and low in carbohydrates for children with intractable seizures helped to control seizures and improve behavior, attention, and social functioning.
  • Iron deficiency
    • Iron deficiency is not consistently linked with ADHD severity or frequency.
    • 1 study showed that low serum ferritin correlated with baseline inattention, hyperactivity, impulsivity, and effective amphetamine dose needed.
  • Zinc deficiency
    • Low zinc levels were found in the serum, red cells, hair, urine, and nails of children with ADHD, but mostly in countries with endemic zinc deficiency.
    • In the United States, low serum zinc was linked with inattention, but not with hyperactivity or impulsivity.
    • Zinc supplements might enhance the effect of d-amphetamine, but are not routinely recommended.
  • Other alternative dietary therapies
    • Orthomolecular medicine and megavitamin therapy refer to combinations of minerals and nutrients.
    • A study of megavitamin therapy in children with ADHD showed no improvement in behavior, but 42% had elevated serum transaminase levels.
  • “Healthy” vs “Western” diet pattern
    • A cohort study of children from birth to age 14 years found a “Western” dietary pattern associated with ADHD diagnosis and a “Healthy” diet pattern not associated with ADHD diagnosis.
    • The Western dietary pattern includes fast foods, red and processed meats, potato chips, high-fat dairy products, and soft drinks.
    • The Healthy dietary pattern includes fish, vegetables, tomatoes, fresh fruit, whole grains, and low-fat dairy products.

Clinical Implications

  • Indications for dietary therapy in children with ADHD include medication failure or adverse reactions, patient or parental preference, mineral deficiency, and need for change from an ADHD-linked Western diet to an ADHD-free Healthy diet.
  • In children with ADHD, additive-free and elimination diets are time-consuming and disruptive, but might be indicated in selected patients; iron and zinc are indicated for deficiencies; omega-3 supplements have inconsistent effects; and a Healthy diet rich in fish, vegetables, fruit, legumes, and whole grains might be beneficial vs a Western diet of fast foods, red or processed meats, high-fat dairy products, soft drinks, and potato chips.

Pet overpopulation…an Illustration.

In Animal Rescue, Animal Welfare, Humane Education, Pets on Saturday, 15 September 2012 at 10:41

It really adds up!

please spay/neuter

 

for my fellow animal rescuers…

In Animal Rescue, Animal Welfare, Humane Education, Pets on Saturday, 15 September 2012 at 10:35

animal rescue is one of the most rewarding things i do, but also one of the most gut-wrenching, heart-breaking, and difficult.  but…the rewards are immeasurable.  still, some advice for my fellow rescuers:

“Rescuers Need Rescue, too.”

 By Chandra Moira Beal

 Animal rescue is deeply rewarding yet extremely difficult work.  To survive in this realm, one must find healthy ways to cope with the emotional challenges.

Here are 10 points to ponder:

1. You can’t save them all.  Even if you spent every hour of every day working to save animals, you still wouldn’t be able to save them all.  Take comfort in knowing that you are not alone in your efforts.

2. Work smarter, not harder.  Manage your rescue efforts like a business.  Organize tasks to make the best use of time.  For example, time spent recruiting more volunteers may make more sense in the long run than trying to do more yourself.  If you find yourself pulled in many directions, you might be more effective if you focus on one rescue facility, one geographic locale, or one species or breed.

3. Just say no.  Many people feel guilty when they can’t take care of everything that comes up.  Be realistic about how much you can handle!  If you’re feeling overwhelmed, it’s okay to say, “I can’t right now.”  Delegate to others when possible, and ask for help when you need it.

4. You are making a difference.  Whenever you question whether you’re  helping very much, remember the old parable about the man walking on the  beach, picking up starfish who have washed ashore and tossing them gently,  one by one, back into the ocean.  Another man approaches, notices that there are starfish on the beach for as far as the eye can see, and asks, “What difference can you possibly make when there are so many?”  Looking at the creature in his hand, the first man replies, “I can make all the difference in the world to THIS starfish.”

5. Celebrate victories.  There are happy endings to many rescue stories.  Rejoice in what is working.  Of course, seeing an animal go home with a loving family is the greatest reward of all.

6. Small kindnesses do count.  It’s common to think that small efforts don’t mean as much as large victories, but stopping to pet an animal, even for just one minute is worth doing.  Your touch may be the only friendly attention he or she receives that day.  Grooming, holding and comforting, or intoning softly that you care, are activities that many shelters don’t have time for.

7. Find outlets for emotional release.  Rescue work can be physically exhausting, emotionally draining and spiritually challenging.  Don’t dismiss your feelings or think you’re a wimp for being affected by it all.  Talk to someone you trust about what you’re experiencing. Cry when you need to.  Write your feelings in a journal.  Channel your emotions into action by writing to the editor of your newspaper or your local representatives about the need for animal protection legislation.

8. Take care of yourself.  Make time to do whatever makes you feel good.  Take a relaxing bath, or go out to dinner and let someone else do the cooking.  You need to recharge your batteries in order to maintain mental and physical health.

9. Don’t downplay your compassion.  When people ask me why I rescue animals,  often I’m tempted to say, “Oh, it’s not big deal” or “Somebody’s got to do  it,” when in reality I rescue animals because I care so deeply about them.  Compassion is healthy, normal and necessary for this work.  Let people know how important this cause is to you.  You just might inspire others to become involved.

10. Never give up.  When you get discouraged, it is tempting to throw in the towel.  Despite all your hard work, you may not see real change in your lifetime.  Still, giving up won’t make it any better.  Take a break, and come back fighting.  And remember the man and the starfish.

***

and for those who may not be able to actively rescue but want to help:

What Can One Person Do?

Plenty!

Here are some suggestions about items to donate and ways to volunteer. The suggestions are based on what many local animal organizations and animal control officers typically need.

Please note: It’s always a good idea to start by checking with your local rescue group or shelter to see what kind of help they really need. Some groups may be desperately in need of materials, like dog beds, that you’d be willing to provide. Another group may benefit more from help with publicity. Checking with the staff first ensures that your donation or service will genuinely be of help to the organization.

For example, if you’d like to help get publicity, ask in advance whether the organization would like your assistance; after all, you want to be sure that your donation or service is genuinely of help.

Things you may be able to give:

Basic things many shelters can use:

•               Bedding: towels, sheets, blankets, a cat or dog bed, carpet squares

•               Cleaning supplies

•               Cat and dog food, cat litter, toys, collars, leashes

•               Scratching posts, metal bowls, dog crates, grooming supplies

You don’t have to spend a lot of money: Perhaps you are no longer using some of these items around the house, or you may spot them at a yard sale or thrift store.

Doghouses: If you have an old doghouse that isn’t being used, you can clean it up and pass it along for a dog in your neighborhood who could use it. Or give it to your local animal control agency and ask that it be given to a needy dog. Sometimes feral cat groups can refurbish and use doghouses.

Office stuff: office supplies, computers, office furniture, or equipment. The next time your office is upgrading equipment ask about donating it to the local shelter.

Coupons: Some shelters can use free or discount coupons for animal food or cat litter.

Medical supplies: Many spay/neuter clinics and some shelters can use medical supplies.

Humane traps.

Use of a photocopier: Many groups cannot afford a copy machine and would appreciate an opportunity to duplicate flyers and forms.

Prizes for fundraising auctions or raffles.

Things you may want to do:

Be a foster home. Open your home to an animal that needs a place to live and learn until he/she can find a home.

Set up a donation coin can or food program. Create donation cans and place them in area businesses, or set up pet food donation collection bins at local supermarkets.

Fix an animal. Help a friend or acquaintance fix their pet. To find a local low-cost spay/neuter program, call 1-800-248-PETS or 1-888-PETS911, or visit: http://www.1888pets911.org

Donate your special skills and talents:

•               Computer skills: Create or manage a website for a local group, or help create a mailing list database.

•               Desktop publishing skills: Create a brochure, newsletters, or posters.

•               Sewing, knitting, or crocheting talent: Offer to make pet beds or catnip mice.

•               Building/Construction skills: Make repairs around the shelter, or build doghouses or feral cat shelters and feeding stations.

•               Writing talent: Offer to write their newsletter or an article for the local paper.

•               Organizational skills: Help out with administrative tasks or event planning.

•               Gardening skills: Ask if you can help beautify the landscaping around the shelter.

Provide care for shelter animals. Volunteer to clean cages, feed, groom, or walk the animals in a local shelter.

Feed a feral, or two, or three. . . Many organizations practice trap/neuter/return and can use help with feeding cats. Offering to help with feeding once or twice a week can provide a nice break for a busy caregiver.

Promote animal adoptions:

•               List homeless animals on an adoption website (contact us for a list of sites).

•               Photograph shelter animals.

•               Create adoption posters and hang them around the community.

Tell your friends and neighbors. Don’t underestimate the value of word of mouth. Tell others what you are doing and why. Invite them to help out too.

Larger projects you could help to organize:

Plan a fundraising event. This could be as simple as holding a yard sale and donating the proceeds to a shelter, or as involved as planning a benefit auction or walk-a-thon. We have helpful information on planning some types of events.

Organize an adoption event. We have a manual on planning Super Adoption events and off-site adoption programs.

Coordinate a local feral cat spay/neuter program or one-day event.  We can offer advice on how to do this.

Start a local organization or program. Create a community animal welfare group or volunteer brigade to help other local groups. We can send you information on starting a local program to help the animals.

Start a community e-group to help unite like-minded people, spread the word about animals in need of homes, and promote local events and volunteer opportunities. An excellent model is the Austin Pets Alive No-Kill Handbill. You can see a sample at: http://www.io.com/~mvb/ARCHIVE/ or subscribe at: http://www.austinpetsalive.org

Create a local event: You could plan a local observance of: National Homeless Animals’ Day (www.isaronline.org), National Feral Cat Day (www.alleycat.org), or Spay Day USA (www.ddaf.org).

Start a Week for the Animals. We have a manual to help you create a Week for the Animals in your town, city or state.

Retrieved from: http://www.bestfriends.org/nomorehomelesspets/pdf/WhatOnePerson.pdf

100 Ways To Help A Rescue Without Adopting or Fostering:

Can you:

1. Transport a cat/dog?

2. Donate a dog/cat bed or towels or other *bedding* type items?

3. Donate MONEY?

4. Donate a Kong? A Nylabone? A Hercules? Cat toys?

5. Donate a crate?

6. Donate an x-pen or baby gates?

7. Donate a food dish or a stainless bucket for a crate?

8. Donate a leash?

9. Donate a collar?

10. Donate some treats or a bag of food?

11. Donate a Halti, Promise collar, or Gentle Leader?

12. Walk a dog?

13. Groom a dog?

14. Donate some grooming supplies (shampoos, combs, brushes, etc.)?

15. Go to the local shelter and see if that dog is the breed the shelter says it is, or go with rescue to be a second opinion on the dog?

16. Make a few phone calls?

17. Mail out applications to people who’ve requested them?

18. Provide local vet clinics with contact information for educational materials on responsible pet ownership?

19. Drive a dog to and from vet appointments?

20. Donate long distance calling cards?

21. Donate the use of your scanner or digital camera?

22. Donate the use of a photocopier?

23. Attend public education days and try to educate people on responsible pet ownership?

24. Donate a gift certificate to a pet store?

25. Donate a raffle item if your club is holding a fund raiser?

26. Donate flea treatments (Advantage, etc.)?

27. Donate heartworm pills?

28. Donate a canine/feline first aid kit?

29. Provide a shoulder to cry on when the rescue person is overwhelmed?

30. Pay the boarding fees to board a dog for a week? Two weeks?

31. Be a Santa Paws foster to give the foster a break for a few hours or days?

32. Clip coupons for dog/cat food or treats?

33. Bake some homemade doggie biscuits?

34. Make book purchases through Amazon via a web site that contributes commissions earned to a rescue group?

35. Host rescue photos with an information link on your website?

36. Donate time to take good photos of foster dogs for adoption flyers, etc.?

37. Conduct a home visit or accompany a rescue person on the home visit?

38. Go with rescue person to the vet to help if there is more than one dog?

39. Have a yard sale and donate the money to rescue?

40. Volunteer to do rescue in your area?

41. Take advantage of a promotion on the web or store offering a free ID tag and instead of getting it for your own dog, have the tag inscribed with your Club’s name and phone # to contact?

42. Talk to all your friends about adopting and fostering rescue dogs?

43. Donate vet services or can you help by donating a spay or neuter each year or some vaccinations?

44. Interview vets to encourage them to offer discounts to rescues?

45. Write a column for your local newspaper or club newsletter on dogs currently looking for homes or ways to help rescue?

46. Take photos of dogs available for adoption for use by the Club?

47. Maintain web sites listing/showing dogs available?

48. Help organize and run fundraising events?

49. Help maintain the paperwork files associated with each dog or enter the information into a database?

50. Tattoo a rescued dog?

51. Microchip a rescued dog?

52. Loan your carpet steam cleaner to someone who has fostered a dog that was sick or marked in the house?

53. Donate a bottle of bleach or other cleaning products?

54. Donate or loan a portable dog run to someone who doesn’t have a quarantine area for quarantining a dog that has an unknown vaccination history and has been in a shelter?

55. Drive the foster’s children to an activity so that the foster can take the dog to obedience class?

56. Use your video camera to film a rescue dog in action?

57. Pay the cost of taking a dog to obedience class?

58. Be the one to take the dog to its obedience class?

59. Go to the foster home once a week with your children and dogs to help socialize the dog?

60. Help the foster clean up the yard (yes, we also have to scoop what those foster dogs poop)?

61. Offer to test the foster dog with cats?

62. Pay for the dog to be groomed or take the dog to a *Do It Yourself* Grooming Place?

63. Bring the foster takeout so the foster doesn’t have to cook dinner?

64. Pay a house-cleaning service to do the spring cleaning for someone who fosters dogs all the time?

65. Lend your artistic talents to your club’s newsletter, fundraising ideas, t-shirt designs?

66. Donate printer paper, envelopes and stamps to your club?

67. Go with a rescue person to the vet if a foster dog needs to be euthanized?

68. Go to local shelters and meet with shelter staff about how to identify your breed, or provide photos and breed information showing the different types and color combinations that breed may come in?

69. Go to local businesses and solicit donations for a club’s fundraising event?

70. Offer to try and help owners be better pet owners by holding a grooming seminar?

71. Help pet owners be better pet owners by being available to answer training questions?

72. Loan a crate if a dog needs to travel by air?

73. Put together an *Owner’s Manual* for those who adopt rescued dogs of your breed?

74. Provide post-adoption follow up or support?

75. Donate a coupon for a free car wash or gas or inside cleaning of a vehicle?

76. Pay for an ad in your local/metropolitan paper to help place rescue dogs?

77. Volunteer to screen calls for that ad?

78. Get some friends together to build/repair pens for a foster home?

79. Microchip your own pups if you are a breeder, and register the chips, so if your dogs ever come into rescue, you can be contacted to take responsibility for your pup?

80. Donate a small percentage of the sale of each pup to rescue if you are a breeder?

81. Buy two of those really neat dog-items you “have to have” and donate one to Rescue?

82. Make financial arrangements in your will to cover the cost of caring for your dogs after you are gone -so Rescue won’t have to?

83. Make a bequest in your will to your local or national Rescue?

84. Donate your professional services as an accountant or lawyer?

85. Donate other services if you run your own business?

86. Donate the use of a vehicle if you own a car dealership?

87. Loan your cell phone (and cover costs for any calls) to someone driving a rescued dog?

88. Donate your *used* dog dryer when you get a new one?

89. Let rescue know when you’ll be flying and that you’d be willing to be a rescued dog’s escort?

90. Do something not listed above to help rescue?

91. Donate a doggy seatbelt?

92. Donate a grid for a van or other vehicle?

93. Organize a rescued dog picnic or other event to reunite the rescued dogs that have been placed?

94. Donate other types of doggy/kitty toys that might be safe for rescued animals?

95. Donate a roll-a-treat or Buster cube?

96. Donate clickers or a video on clicker training?

97. Donate materials for a quarantine area at a foster’s home?

98. Donate sheets of linoleum or other flooring materials to put under crates to protect the foster’s floor?

99. Donate an engraving tool to make ID tags for each of the rescued dogs?

100. Remember that rescuing a dog involves the effort and time of many people and make yourself available on an emergency basis to do *whatever* is needed?

 

An Important Part of the Anti-Social Triad… Animal Abuse

In Animal Rescue, Animal Welfare, Humane Education, Life with dogs on Saturday, 15 September 2012 at 09:59

Animal Abuse and Human Abuse

Violent acts toward animals have long been recognized as indicators of a dangerous psychopathy that does not confine itself to animals. “Anyone who has accustomed himself to regard the life of any living creature as worthless is in danger of arriving also at the idea of worthless human lives,” wrote humanitarian Dr. Albert Schweitzer. “Murderers … very often start out by killing and torturing animals as kids,” according to Robert K. Ressler, who developed profiles of serial killers for the Federal Bureau of Investigation (FBI). Studies have now convinced sociologists, lawmakers, and the courts that acts of cruelty toward animals deserve our attention. They can be the first sign of a violent pathology that includes human victims.

A Long Road of Violence

Animal abuse is not just the result of a minor personality flaw in the abuser, but a symptom of a deep mental disturbance. Research in psychology and criminology shows that people who commit acts of cruelty against animals don’t stop there; many of them move on to their fellow humans.

The FBI has found that a history of cruelty to animals is one of the traits that regularly appear in its computer records of serial rapists and murderers, and the standard diagnostic and treatment manual for psychiatric and emotional disorders lists cruelty to animals as a diagnostic criterion for conduct disorders. (1)

Studies have shown that violent and aggressive criminals are more likely to have abused animals as children than criminals considered non-aggressive. (2) A survey of psychiatric patients who had repeatedly tortured dogs and cats found that all of them had high levels of aggression toward people as well, including one patient who had murdered a boy. (3) To researchers, a fascination with cruelty to animals is a red flag in the lives of serial rapists and killers. (4)

Says Robert Ressler, founder of the FBI’s behavioral sciences unit, “These are the kids who never learned it’s wrong to poke out a puppy’s eyes.” (5)

Notorious Killers

History is replete with notorious examples: Patrick Sherrill, who killed 14 coworkers at a post office and then shot himself, had a history of stealing local pets and allowing his own dog to attack and mutilate them.(6) Earl Kenneth Shriner, who raped, stabbed, and mutilated a 7-year-old boy, had been widely known in his neighborhood as the man who put firecrackers in dogs’ rectums and strung up cats.(7) Brenda Spencer, who opened fire at a San Diego school, killing two children and injuring nine others, had repeatedly abused cats and dogs, often by setting their tails on fire.(8) Albert DeSalvo, the “Boston Strangler” who killed 13 women, trapped dogs and cats in orange crates and shot arrows through the boxes in his youth.(9) Carroll Edward Cole, executed for five of the 35 murders of which he was accused, said his first act of violence as a child was to strangle a puppy.(10) In 1987, three Missouri high school students were charged with the beating death of a classmate. They had histories of repeated acts of animal mutilation starting several years earlier. One confessed that he had killed so many cats he’d lost count. (11) Two brothers who murdered their parents had previously told classmates that they had decapitated a cat.(12) Serial killer Jeffrey Dahmer had impaled dogs’ heads, frogs, and cats on sticks.(13)

More recently, high school killers such as 15-year-old Kip Kinkel in Springfield, Ore., and Luke Woodham, 16, in Pearl, Miss., tortured animals before embarking on shooting sprees.(14) Columbine High School students Eric Harris and Dylan Klebold, who shot and killed 12 classmates before turning their guns on themselves, bragged about mutilating animals to their friends.(15)

“There is a common theme to all of the shootings of recent years,” says Dr. Harold S. Koplewicz, director of the Child Study Center at New York University. “You have a child who has symptoms of aggression toward his peers, an interest in fire, cruelty to animals, social isolation, and many warning signs that the school has ignored.”(16)

Sadly, many of these criminals’ childhood violence went unexamined—until it was directed toward humans. As anthropologist Margaret Mead noted, “One of the most dangerous things that can happen to a child is to kill or torture an animal and get away with it.”(17)

Animal Cruelty and Family Violence

Because domestic abuse is directed toward the powerless, animal abuse and child abuse often go hand in hand. Parents who neglect an animal’s need for proper care or abuse animals may also abuse or neglect their own children. Some abusive adults who know better than to abuse a child in public have no such qualms about abusing an animal publicly.

In 88 percent of 57 New Jersey families being treated for child abuse, animals in the home had been abused.(18) Of 23 British families with a history of animal neglect, 83 percent had been identified by experts as having children at risk of abuse or neglect.(19) In one study of battered women, 57 percent of those with pets said their partners had harmed or killed the animals. One in four said that she stayed with the batterer because she feared leaving the pet behind.(20)

While animal abuse is an important sign of child abuse, the parent isn’t always the one harming the animal. Children who abuse animals may be repeating a lesson learned at home; like their parents, they are reacting to anger or frustration with violence. Their violence is directed at the only individual in the family more vulnerable than themselves: an animal. One expert says, “Children in violent homes are characterized by … frequently participating in pecking-order battering,” in which they may maim or kill an animal. Indeed, domestic violence is the most common background for childhood cruelty to animals.(21)

Stopping the Cycle of Abuse

There is “a consensus of belief among psychologists … that cruelty to animals is one of the best examples of the continuity of psychological disturbances from childhood to adulthood. In short, a case for the prognostic value of childhood animal cruelty has been well documented,” according to the Cornell University College of Veterinary Medicine.(22)

Schools, parents, communities, and courts who shrug off animal abuse as a “minor” crime are ignoring a time bomb. Instead, communities should be aggressively penalizing animal abusers, examining families for other signs of violence, and requiring intensive counseling for perpetrators. Communities must recognize that abuse to ANY living individual is unacceptable and endangers everyone.

In 1993, California became the first state to pass a law requiring animal control officers to report child abuse. Voluntary abuse-reporting measures are also on the books in Ohio, Connecticut, and Washington, D.C. Similar legislation has been introduced in Florida. “Pet abuse is a warning sign of abuse to the two-legged members of the family,” says the bill’s sponsor, Representative Steve Effman. “We can’t afford to ignore the connection any longer.”(23)

Additionally, children should be taught to care for and respect animals in their own right. After extensive study of the links between animal abuse and human abuse, two experts concluded, “The evolution of a more gentle and benign relationship in human society might, thus, be enhanced by our promotion of a more positive and nurturing ethic between children and animals.”(24)

What You Can Do

• Urge your local school and judicial systems to take cruelty to animals seriously. Laws must send a strong message that violence against any feeling creature—human or other-than-human—is unacceptable.

• Be aware of signs of neglect or abuse in children and animals. Take children seriously if they report animals’ being neglected or mistreated. Some children won’t talk about their own suffering but will talk about an animal’s.

• Don’t ignore even minor acts of cruelty to animals by children. Talk to the child and the child’s parents. If necessary, call a social worker.

References

1. Daniel Goleman, “Child’s Love of Cruelty May Hint at the Future Killer,” The New York Times, 7 Aug. 1991.
2. “Animal Abuse Forecast of Violence,” New Orleans Times-Picayune, 1 Jan. 1987.
3. Alan R. Felthous, “Aggression Against Cats, Dogs, and People,” Child Psychiatry and Human Development, 10 (1980), 169-177.
4. Goleman.
5. Robert Ressler, quoted in “Animal Cruelty May Be a Warning,” Washington Times, 23 June 1998.
6. International Association of Chiefs of Police, The Training Key, No. 392, 1989.
7. The Animals’ Voice, Fall 1990.
8. The Humane Society News, Summer 1986.
9. International Association of Chiefs of Police.
10. Ibid.
11. Ibid.
12. Lorraine Adams, “Too Close for Comfort,” The Washington Post, 4 Apr. 1995.
13. Goleman.
14. Deborah Sharp, “Animal Abuse Will Often Cross Species Lines,” USA Today, 28 Apr. 2000.
15. Mitchell Zuckoff, “Loners Drew Little Notice,” Boston Globe, 22 Apr. 1999.
16. Ethan Bronner, “Experts Urge Swift Action to Fight Depression and Aggression,” The New York Times, p. A21.
17. Margaret Mead, Ph.D., “Cultural Factors in the Cause and Prevention of Pathological Homicide,” Bulletin of the Menninger Clinic, No. 28 (1964), pp. 11-22.
18. Elizabeth DeViney, Jeffrey Dickert, and Randall Lockwood, “The Care of Pets Within Child-Abusing Families,” International Journal for the Study of Animal Problems, 4 (1983), 321-329.
19. “Child Abuse and Cruelty to Animals,” Washington Humane Society.
20. Sharp.
21. Cornell University College of Veterinary Medicine, Animal Health Newsletter, Nov. 1994.
22. Ibid.
23. Sharp.
24. Stephen R. Kellert, Ph.D., and Alan R. Felthous, M.D., “Childhood Cruelty Toward Animals Among Criminals and Noncriminals,” Archives of General Psychiatry, Nov. 1983.

Retrieved from: http://www.wilbargerhumanesociety.org/abuse.php

Can You Speak Dog?

In Animal Welfare, Life with dogs, Pets on Saturday, 15 September 2012 at 09:48

Can you speak dog?

 

http://www.peta.org/living/companion-animals/can-you-speak-dog.aspx

 

Take this quiz to find out if you’re a communicator or a dictator.

You and your dog speak different languages. Dogs have millions of years of evolutionary baggage telling them that digging in the flower bed is the proper way to store food and that barking is a vital form of communication. Your job is to explain that here in the land of naked apes, certain behaviors don’t always go over well, while others, like darting out into the road, are downright dangerous. The question is, are you educating your dog like a kindergarten teacher or a drill sergeant? Take this quiz to find out.

1) The Western you’re watching on TV has just gotten to the big shootout scene when Rover starts whining at the door. You:

a. promise yourself that SOMEDAY you’ll see this movie all the way through as you hop up and let Rover out.

b. ignore Rover until the next commercial, then let him out. Getting up now just rewards him for whining, which you are trying to teach him to stop doing anyway.

c. tell Rover “No.” He needs to learn he can’t go in and out every five minutes.

Answer: a. HELLO! Rover is VERY politely telling you he needs to go outside. (So what if he just went out five minutes ago, he obviously forgot to do something important!) Ignoring Rover’s whine is like ignoring someone’s “please” and forces him to move on to something “rude” like scratching the door or having an “accident.” If you ABSOLUTELY can’t let Rover out right away, at least acknowledge him: “I hear you, buddy, I’m coming.” Telling him “no” is the cruelest of all: imagine telling someone that you have to go to the bathroom and they say “no!” Tell it to your bladder!

2) Maggie is a confirmed “chow hound.” Every night at dinner, she hovers at your chair, drools on your knee, and tries to “steal” food off the table. What should you do?

a. Slip her some tidbits every few minutes-she’s so pathetic!

b. Never give her scraps; this only encourages her and makes her want “people” food instead of dog chow.

c. Tell her to lie down and stay until dinner is over, then reward her with scraps.

d. None of the above.

Answer: c. Go ahead and give poor Maggie some variety in her life and feed her nutritious table scraps, just make sure you feed scraps at the RIGHT time. First, always feed Maggie her dinner BEFORE yours. If she still comes begging, ask her to lie down and stay. Teaching Maggie to wait for her tidbits calmly is really kinder than keeping her anxious by sporadically slipping her food. (Often, dogs doze off on a down/stay, which is as relaxing as it gets!) Slipping Maggie morsels during dinner TEACHES her to beg, unfairly setting her up for a scold when you decide that paw-swipes at your arm are no longer cute, or when you’re entertaining dinner guests.

3) Fido knocks the wind out of everybody he meets with an enthusiastic pair of paws planted firmly on the chest. How can you stop him from jumping?

a. Step on Fido’s back feet so he learns to associate discomfort with jumping up.

b. Give visitors food treats and instruct them to tell Fido to sit when he greets them.

c. Put your knee up as Fido jumps, so he hits the knee instead of you.

d. None of the above.

Answer: b. Stepping on Fido’s back feet is unnecessary and painful and could cause injury. Same goes for kneeing. Fido is jumping on people because he is happy to see them; do you really want him to associate being friendly with pain? Why hurt and confuse Fido when asking him to express his greeting in a different way, such as sitting, gets the message across?

4) Princess is busy chewing on a tasty sofa cushion. You walk into the room and wail “Princess!” She looks up, drops the cushion, and bounds over to you, joyfully wagging her tail. You:

a. tell her she’s a bad dog and give her a stern lecture on the high cost of sofa cushions.

b. turn around and ignore her.

c. bite your tongue and give her a pat and a hug.

Answer: c. This is perhaps the most important rule you can learn about communicating with your dog: NEVER, EVER, EVER SCOLD A DOG WHO COMES TO YOU WILLINGLY-no matter how long she dawdled, no matter how bad she was mere seconds before. If Princess had ignored you and kept right on chewing, then saying, “No! Chew on THIS,” as you took away the pillow and handed her a toy would have been in order; but she didn’t—she stopped her “bad” behavior and came to you instead. Coming to you should ALWAYS be a thrilling experience; scold her and she learns—not to stop chewing cushions—but that coming to you isn’t always such a great idea.

5) Benji is the Joan Rivers of dogdom. He barks at EVERYTHING: the moon, the sun, dogs, cats, squirrels, cockroaches, dust mites. How can you get him to quit that incessant yapping?

a. Give Benji a biscuit to distract him.

b. Sneak up behind Benji and startle him with a swat on the rump as you yell, “No!”

c. Get one of those nifty electronic collars that zaps Benji whenever he utters a peep.

d. None of the above.

Answer: d. For starters, how come people can talk all day, but one peep out of Benji gets a “shut up” from you? Benji is barking because he’s trying to tell you something-“Look out, here comes that guy in the noisy truck!” or, “Hey, I’m lonely out here by myself,” or, “I’m terribly bored; can we go for a walk now?”

But what if you’re not feeling particularly interested in what Benji has to say about the trash truck at 7 a.m.? Hitting Benji and shouting at him is cruel and unfair: you’re punishing him for something he thinks is very important, namely alerting you to intruders (an instinct you’ll thank him for if a burglar shows up!).

Electronic shock collars are no better: They punish Benji indiscriminately (and painfully), plus they have a number of other drawbacks. Dogs trained with shock collars and “invisible fences” may develop fears or aggression aimed at what they BELIEVE is the source of that pesky shock-kids riding by on bikes-whom Benji starts to chase and bark at until he gets an unpleasant surprise-or the dog next door, who “administers” a painful jolt every time Benji runs up to play (two confused and frustrated dogs once killed a neighboring dog when he crossed the boundary to play). Dogs have also been known to run heedlessly through invisible barriers in hot pursuit of a squirrel or fleeing scary fireworks, then become terrified to cross back through it.

So what can you do? Ask Benji to do something else! Start making 7 a.m. on trash day practice-lying-down time until Benji gets the idea that lying down is the thing to do when the garbage truck comes. (Give Benji a treat only AFTER he does what you ask, not before, otherwise you will be TRAINING him to bark!) You also may try teaching Benji the meaning of the word “quiet” by GENTLY closing his mouth with your hands (no rough treatment, you’re simply showing him what “quiet” means) as you say the word. Remember, don’t lose your temper, holler, or otherwise abuse or over-use the “quiet” command-let Benji talk sometimes!

6) While you were at work, Fluffy emptied the trash can and created a lovely “mixed media artwork” of soda cans, melon rinds, and shredded plastic wrap on the living room floor. What should you do when you come home?

a. Bring Fluffy over to her “masterpiece,” rub her nose in it, and tell her “bad dog!”

b. Lock Fluffy in the garage every day until she learns her lesson.

c. Act like Fluffy’s redecorating is no big deal and figure out where to put the trash can so she can’t get into it.

Answer: c. Naughty human! What Fluffy did was YOUR FAULT for failing to supervise her! (OK, so you can’t quit your job and watch her all day but that’s not HER fault!) Corrections work only as a warning IMMEDIATELY beforehand (“Na-aah-aah, don’t even think of touching that trash can!”) or while Fluffy is “in the act.” If you wait until hours (or even minutes) later, Fluffy will think she’s being scolded for what she’s doing RIGHT NOW, such as being happy to see you!

The solution lies, as always, in prevention. Your best bet is to stash the trash in a pantry or “kid-safe” cabinet. (Confining Fluffy works for YOU, but it doesn’t solve HER basic problems-boredom, loneliness, and lots of energy.) Make sure Fluff has a variety of toys (and/or companions) to keep her occupied and that she gets plenty of exercise, particularly in the morning: A tired dog wants to sleep, not redecorate!

How’d You Score?

Give yourself a point for each correct answer.

0-2: Hey, Mussolini, lighten up! How about we yell at you for getting sick on the carpet, smack you for talking to your friends, lock you in the basement for raiding the refrigerator, and see how YOU like it!

3-4: You’re not quite fluent in “dog-ese” yet, but you’re getting there. Brush up on your communicating skills by reading a book like Dogs Behaving Badly by Dr. Nicholas Dodman or Don’t Shoot the Dog by Karen Pryor, and sign yourself up for a good training class.

5-6: Gandhi would be proud! Now go spend some quality time with that lucky pooch of yours!

Win bonus points if you do the following:

· Stop and smell the roses-and the fire hydrants! Imagine the frustration of sitting around the house all day waiting for a walk and then, when you get one, being hauled around the block without ever getting a chance to explore! Give your dog a break with a retractable leash (available at most pet supply stores): It gives him or her room to run ahead or linger over fascinating trees and can be shortened up for safety when crossing busy intersections. Also, play it safe by walking your dog on a harness.

· Many dogs, especially smaller, energetic breeds like beagles and poodles or larger, delicate-boned breeds like greyhounds, are prone to neck injuries, which can be extremely painful and debilitating. For serious pullers, try a neck-saving “no-pull” harness, which creates a slight “pushing” feeling on dogs’ chests when they tug, causing them to slow down.

· Don’t give your dog orders all the time. Try to make suggestions and ask questions, too. Learning the meanings of words and phrases like “cookies,” “outside?” “water?” “all done,” and “wanna-go-for-a-walk?” can make your dog’s life a lot easier.

· Make a housebreaking schedule and stick to it. Take puppies out at least once every two hours (or within a half hour after eating or drinking), and guide them to the same spot where they can smell having gone before. Until their bladders get bigger, they can spend the night in a crate by your bed so they can wake you up when they need to go. If you use a crate, be careful not to abuse it. Don’t leave dogs in crates for more than three hours at a time during the day and never use the crate as punishment. The dog should view the crate as a safe, secure den, not a dungeon.

· Let your dog be a dog! The idea behind training is to set up boundaries within which dogs are free to be themselves, not control their every movement. For enthusiastic diggers, for example, don’t flat-out prohibit digging-give them their very own special places to dig. Teach them to use a “sandbox” by burying favorite toys in it.

· Be considerate! Think about how many times you go to the restroom during the day. Now imagine what it must be like for dogs to have to “cross their legs” all that time! Take your dog out at least four times a day: in the morning, in the afternoon, right when you get home, and before you go to bed. If you can’t come home at lunchtime, arrange for a neighbor or professional “petsitter” to take your dog out. Another option is a “doggy door”; however, this is safe only if your yard is fenced and locked against intruders.

 

ABC’s

In Well-being on Saturday, 15 September 2012 at 09:28

I found this among my notes from grad school.  Good advice!

ABC’s:

  1. All behavior is purposeful
  2. Thoughts cause feelings

The “A-B-C” approach to helping yourself:

A = the situation, person, or event

B = the beliefs or self-talk about A

C = feelings and behavior (the consequence of the self-talk)

B causes C, but most people believe that A causes C.
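The A-B-C chain can be sketched in a few lines of code. This is purely illustrative (the event and beliefs are hypothetical, not from the note): the same activating event A yields different consequences C depending on the belief B, which is the point that B, not A, causes C.

```python
# Illustrative sketch of the A-B-C model: C is the product of
# applying the belief/self-talk (B) to the activating event (A).

def consequence(event, belief):
    """C: the feeling/behavior produced by self-talk (B) about the event (A)."""
    return belief(event)

event = "a friend cancels dinner"                  # A: the situation
catastrophizing = lambda a: "hurt and resentful"   # B: "they don't care about me"
realistic = lambda a: "disappointed but calm"      # B: "something probably came up"

# Same A, different B, different C:
print(consequence(event, catastrophizing))  # hurt and resentful
print(consequence(event, realistic))        # disappointed but calm
```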

  1. Get your expectations in line with reality
  2. You teach people how to treat you
  3. Don’t guarantee anyone’s behavior other than your own
  4. Life is a series of choices

A Discussion of Language Acquisition Theories (2002)

In Education, School Psychology on Saturday, 15 September 2012 at 09:19

A Discussion of Language Acquisition Theories

by Vedat Kiymazarslan, 2002

I. INTRODUCTION

A great many theories regarding language development in human beings have been proposed in the past and are still being proposed today. Such theories have generally arisen out of major disciplines such as psychology and linguistics. Psychological and linguistic thinking have profoundly influenced one another, and both have shaped language acquisition theories. This article aims to discuss language acquisition theories and assess their implications for applied linguistics and for a possible theory of foreign/second language teaching.

Language acquisition theories have basically centered around the “nurture”/“nature” distinction, or around “empiricism” and “nativism”. The doctrine of empiricism holds that all knowledge comes from experience, ultimately from our interaction with the environment through our reasoning or senses. Empiricism, in this sense, can be contrasted with nativism, which holds that at least some knowledge is not acquired through interaction with the environment, but is genetically transmitted and innate. To put it another way, some theoreticians have based their theories on environmental factors, while others believe that it is the innate factors that determine the acquisition of language. It is, however, important to note that nurturists (environmentalists) do not disagree thoroughly with nativist ideas, nor do nativists with nurturist ideas; they differ only in the relative weight they assign to environmental and innate factors. Before sifting through language acquisition theories, therefore, making a distinction between these two types of perspectives will be beneficial for a better understanding of the various theories and their implications for the field of applied linguistics. In the following paragraphs, the claims posed by the proponents of the two separate doctrines will be explained, and the reason why such a distinction has been made in this article will be clarified.

Environmentalist theories of language acquisition hold that an organism’s nurture, or experience, is of more significance to development than its nature, or inborn contributions. Yet they do not completely reject the innate factors. Behaviorist and neo-behaviorist stimulus-response learning theories (S-R for simplicity) are the best-known examples. Even though such theories lost influence partly because of Chomsky’s incisive review of Skinner’s Verbal Behavior (Chomsky, 1959), their effect has not been negligible if we consider the present cognitive approach an offshoot of behaviorism.

The nativist theories, on the other hand, assert that much of the capacity for language learning in humans is ‘innate’. It is part of the genetic makeup of the human species and is nearly independent of any particular experience which may occur after birth. Thus, the nativists claim that language acquisition is innately determined and that we are born with a built-in device which predisposes us to acquire language. This mechanism predisposes us to a systematic perception of the language around us. Eric Lenneberg (cited in Brown, 1987:19), in his attempt to explain language development in the child, assumed that language is a species-specific behavior and is ‘biologically determined’. Another important point as regards the innatist account is that nativists do not deny the importance of environmental stimuli; they say, rather, that language acquisition cannot be accounted for on the basis of environmental factors alone. There must be some innate guide to achieve this end. In Table 1 below, a classification around the nurture/nature distinction has been made.

THEORIES OF LANGUAGE ACQUISITION (BOTH L1 AND L2)
AND SOME OF THE RESULTING FOREIGN/SECOND LANGUAGE TEACHING METHODS

THEORIES BASED ON “NURTURE” (environmental factors are believed to be more dominant in language acquisition):

– Bakhtin’s Theory of Polyphony or Dialogics

– Vygotsky’s Zone of Proximal Development

– Skinner’s Verbal Behavior

– Piaget’s View of Language Acquisition

– The Competition Model

– Cognitive Theory: Language Acquisition View

– Discourse Theory

– The Speech Act Theory

– The Acculturation Model

– Accommodation Theory

– The Variable Competence Model

– The Interactionist View of Language Acquisition

– The Connectionist Model

Resulting teaching methods: the Audiolingual Method, Community Language Learning, the Communicative Approach, and others.

THEORIES BASED ON “NATURE” (innate factors are believed to be more dominant in language acquisition):

– A Neurofunctional Theory of Language Acquisition

– The Universal Grammar Theory

– Fodor’s Modular Approach

– The Monitor Model

Resulting teaching methods: Winitz’s Comprehension Approach and the Natural Approach.

Table 1. Classification of Language Acquisition Theories Around the “Nurture/Nature” Distinction
The particular reason why such a distinction between environmentalist and nativist theories has been made in this study is to create a clear-cut picture of the current status of language acquisition theories and of present and former studies in the fields of language acquisition and language teaching methodology. In the following part, the most important of the language acquisition theories resulting from the two opposing views mentioned above will be discussed.

II. THEORIES OF LANGUAGE ACQUISITION

In this part of the article, eight different views of language acquisition will be discussed. Most of the theories may be considered in both L1 (mother tongue) and L2 (second or foreign language) acquisition, even though certain theories to be discussed here have resulted from second language acquisition (SLA) studies. It is important to note once again that language acquisition theories have been influenced especially by linguistic and psychological schools of thought. Thus they have assigned relatively different weights to different factors in approaching the acquisition process, as can be seen in the following subsections.

2.1 Vygotsky’s Zone of Proximal Development

Vygotsky was a psychologist but his studies on conscious human behavior led him to investigate the role that language plays in human behavior. Vygotsky’s point of view is simply that social interaction plays an important role in the learning process. He places an emphasis on the role of “shared language” in the development of thought and language. The term “shared language” refers to social interaction and can be best elucidated through the notion of “zone of proximal development”.

According to Vygotsky (1962:10), two developmental levels determine the learning process: egocentricity and interaction. We can look at what children do on their own and at what they can do while working with others. Children mostly choose to remain silent or to speak less (less egocentric speech) when they are alone, but prefer to speak to other children when playing games with them (more egocentric speech). The difference between these two developmental levels has been called the “Zone of Proximal Development”. This zone refers to the distance between the actual developmental level, as determined by independent problem solving, and the level of potential development, as determined through problem solving under adult guidance or in cooperation with the child’s more capable peers. Children first develop concepts by talking to adults and then solve the problems they face on their own. In other words, children first need to be exposed to the social interaction that will eventually enable them to build their inner resources.

As for the drawbacks of the views proposed by Vygotsky, it is not clear what he meant by inner resources. His emphasis on the significance of egocentric speech in the development of thought and language is also worth discussing. He suggests that egocentric speech is social and helps children interact with others: when a child is alone, he uses less egocentric language than when playing games with other children, which implies that speech is influenced by the presence of other people. It seems that Vygotsky overemphasizes the function of egocentric speech in the development of language. It is true that society and other people are important factors helping children to acquire language. However, Vygotsky fails to account for the role of the self itself in this process, even though he stresses the importance of egocentric speech (which is not actually the self), and he fails to see the relative role of the inner linguistic and psycholinguistic mechanisms that promote language acquisition.

In conclusion, Vygotsky contends that language is the key to all development and words play a central part not only in the development of thought but in the growth of cognition as a whole. Within this framework, child language development, thus acquisition, can be viewed as the result of social interaction.

2.2. Skinner’s Verbal Behavior

The behavioristic view of language acquisition simply claims that language development is the result of a set of habits. This view was influenced by the general theory of learning described by the psychologist John B. Watson in 1923 and termed behaviorism. Behaviorism denies nativist accounts of innate knowledge, which are viewed as inherently irrational and thus unscientific. Knowledge is the product of interaction with the environment through stimulus-response conditioning.

Broadly speaking, stimulus (ST) – response (RE) learning works as follows. An event in the environment (the unconditioned stimulus, or UST) brings out an unconditioned response (URE) from an organism capable of learning. That response is then followed by another event appealing to the organism; that is, the organism’s response is positively reinforced (PRE). If the sequence UST –> URE –> PRE recurs a sufficient number of times, the organism learns to associate its response with the stimulus, which thereby becomes a conditioned stimulus (CST). This will consequently cause the organism to give the same response whenever it is confronted with that stimulus. In this way, the response becomes a conditioned response (CRE).
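The conditioning sequence above can be sketched as a toy loop. This is a minimal illustration, not a claim from the article: the threshold and trial counts are assumptions, and the point is only that repeated UST –> URE –> PRE cycles strengthen the stimulus-response link until the conditioned response appears.

```python
# Illustrative sketch of S-R conditioning: each reinforced cycle
# strengthens the association until the stimulus evokes a
# conditioned response (CRE) rather than the unconditioned one (URE).

CONDITIONING_THRESHOLD = 5  # assumed number of reinforced cycles required

def respond(association_strength):
    """Give the conditioned response once the association is strong enough."""
    return "CRE" if association_strength >= CONDITIONING_THRESHOLD else "URE"

strength = 0
for trial in range(8):
    # UST evokes URE; URE is followed by positive reinforcement (PRE),
    # which strengthens the stimulus-response link by one step.
    strength += 1

print(respond(0))         # before conditioning: only the unconditioned response
print(respond(strength))  # after 8 reinforced cycles: a conditioned response
```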

The riskiest part of the behavioristic view is perhaps the idea that all learning, whether verbal (language) or non-verbal (general learning), takes place by means of the same underlying process, that is, via forming habits. In 1957, the psychologist B.F. Skinner produced a behaviorist account of language acquisition in which linguistic utterances served as CSTs and CREs.

When language acquisition is taken into consideration, the theory claims that both L1 and L2 acquirers receive linguistic input from speakers in their environment, and positive reinforcement for their correct repetitions and imitations. As mentioned above, when language learners’ responses are reinforced positively, they acquire the language relatively easily.

These claims are strictly criticized in Chomsky’s “A Review of B.F. Skinner’s Verbal Behavior”. Chomsky (1959) asserts that there is “neither empirical evidence nor any known argument to support any specific claim about the relative importance of feedback from the environment”. Therefore, it would be unwise to claim that the sequence UST –> URE –> PRE and imitation can account for the process of language acquisition. What is more, the theory overlooks the speaker (internal) factors in this process.

The behaviorists see errors as first language habits interfering with the acquisition of second language habits. If there are similarities between the two languages, the language learners will acquire the target structures easily. If there are differences, acquisition will be more difficult. This approach is known as the contrastive analysis hypothesis (CAH). According to the hypothesis, the differences between languages can be used to reveal and predict all errors and the data obtained can be used in foreign/second language teaching for promoting a better acquisition environment. Lightbown and Spada (1993: 25) note that:

“… there is little doubt that a learner’s first language influences the acquisition of second language. [But] … the influence is not simply a matter of habits, but rather a systematic attempt by the learner to use knowledge already acquired in learning a new language.”

This is another way of saying that mother tongue interference cannot entirely explain the difficulties that an L2 learner may face. It is true that there might be some influences resulting from L1, but research (Ellis, 1985:29) has shown that not all errors predicted by CAH are actually made. For example, Turkish learners of English produce utterances such as “No understand” even though the corresponding structure of Turkish (“Anlamiyorum”, literally “UNDERSTAND-NO-ME”) is thoroughly different.

In brief, Skinner’s view of language acquisition is a popular example of the nurturist ideas. Behaviorism, as most of us know, was readily accepted by the influential Bloomfieldian structuralist school of linguistics and produced some well-known applications in the field of foreign/second language teaching – for instance, the Audiolingual Method, or Army Method. The theory sees the language learner as a tabula rasa with no built-in knowledge. The theory and the resulting teaching methods failed because imitation and simple S-R connections alone cannot explain acquisition or provide a sound basis for language teaching methodology.

2.3. Piaget’s View of Language Acquisition

Even though Piaget was a biologist and a psychologist, his ideas have been influential in the field of first and second language acquisition studies. In fact he studied the overall behavioral development in the human infant. But his theory of development in children has striking implications as regards language acquisition.

Ellidokuzoglu (1999:16) notes that “many scientists, especially the psychologists are hesitant to attribute a domain-specific built-in linguistic knowledge to the human infant.” Accordingly, they view the human brain as a homogeneous computational system that examines different types of data via general information-processing principles. Piaget was one of those psychologists who view language acquisition as a case of general human learning. He did not suggest, however, that development is not innate, but only that there is no specific language module. Piaget’s view was thus that development (i.e., language acquisition) results mainly from external factors or social interactions. Piaget (cited in Brown, 1987:47; Eysenck, 1990:51) outlined the course of intellectual development as follows:

– The sensorimotor stage from ages 0 to 2 (understanding the environment)
– The preoperational stage from ages 2 to 7 (understanding the symbols)
– The concrete operational stage from ages 7 to 11 (mental tasks and language use)
– The formal operational stage from the age 11 onwards (dealing with abstraction)

Piaget observes, for instance, that the pre-linguistic stage (birth to one year) is a determining period in the development of sensory-motor intelligence, when children are forming a sense of their physical identity in relation to the environment. Piaget, unlike Vygotsky, believes that egocentric speech on its own serves no function in language development.

2.4. The Universal Grammar Theory

Among theories of language acquisition, Universal Grammar (UG) has recently gained wider acceptance and popularity. Though UG is noted among L2 acquisition theories, its defenders were not originally motivated to account for L2 acquisition, nor even for first language (L1) acquisition. Nevertheless, UG is more of an L1 acquisition theory than an L2 one. It attempts to explain the relatively quick acquisition of L1s on the basis of ‘minimum exposure’ to external input. The ‘logical problem’ of language acquisition, according to UG proponents, is that language learning would be impossible without ‘universal language-specific knowledge’ (Cook, 1991:153; Bloor & Bloor: 244). The main reason behind this argument is the input data:

“…[L]anguage input is the evidence out of which the learner constructs knowledge of language – what goes into the [brain]. Such evidence can be either positive or negative. … The positive evidence of the position of words in a few sentences [the learner] hear[s] is sufficient to show [him] the rules of [a language].” (Cook, 1991: 154)

This view supports the idea that the external input per se may not account for language acquisition (Ellidokuzoglu, 1999:20). Similarly, the Chomskyan view holds that the input is poor and deficient in two ways. First, the input is claimed to be ‘degenerate’ because it is damaged by performance features such as slips, hesitations, or false starts. Accordingly, it is suggested that the input is not an adequate base for language learning. Second, the input is devoid of grammar corrections. This means that the input does not normally contain ‘negative evidence’, the knowledge from which the learner could work out what is ‘not’ possible in a given language.

As for L2 acquisition, however, the above question is not usually asked, largely because of the frequent failure of L2 learners, who are generally cognitively mature adults, to attain native-like proficiency. But why can’t adults who have already acquired an L1 acquire an L2 thoroughly? Don’t they have any help from UG? Or if they do, then how much of UG is accessible in SLA? These and similar questions have divided researchers into three basic camps with respect to their approach to the problem:

Direct access – L2 acquisition is just like L1 acquisition. The language acquisition device (LAD) is involved.

No access – L2 learners use their general learning capacity.

Indirect access – Only that part of UG which has been used in L1 acquisition is used in L2 acquisition.

Proponents of UG, for example, believe that both children and adults utilize similar universal principles when acquiring a language; and LAD is still involved in the acquisition process. This view can be better understood in the following quote.

[A]dvocates of [UG] approach working on second-language learning… argue that there is no reason to assume that language faculty atrophies with age. Most second-language researchers who adopt the [UG] perspective assume that the principles and parameters of [UG] are still accessible to the adult learner. (McLaughlin, 1987:96)

To support the view above, the acquisition of the third person “-s” can be given as an example. According to research (Cook, 1996:21), both child L1 and adult L2 learners (e.g., Turkish learners of English) acquire the third person “-s” morpheme at a later stage of their overall acquisition process and have great difficulty acquiring it when compared to other morphemes such as the plural morpheme “-s” or the progressive morpheme “-ing”. This shows that such learners are somewhat affected by UG-based knowledge. In foreign/second language teaching, however, it is very well known that the third person “-s” is taught at the very beginning of a second language learning program and presented in a great majority of textbooks as the first grammatical item.

Relatedly, Fodor’s views have some parallels with the UG Theory. Jerry Fodor studied the relationship between language and mind, and his view that language is a modular process has important implications for a theory of language acquisition. The term modular indicates that, unlike in older views such as the behavioristic view of learning and language learning, the brain is seen to be organized into many modules of cells, each for a particular ability (for instance, the visual module). These modules, according to Fodor (1983:47), operate in isolation from other modules to which they are not directly connected. The language module, if we are to follow Fodor’s ideas, is one such module. This modular separateness has been termed “informational encapsulation” by Fodor. To put it simply, each module is open only to a specific type of data; in other words, modules are domain-specific. This is another way of saying that conscious knowledge cannot penetrate your visual module, your language module, or any other subconscious module.

Basically, Fodor’s arguments are somewhat similar to those of Chomsky and the proponents of UG Theory in that the external input per se may not account for language acquisition and that language acquisition is genetically predetermined. In addition, such a modular approach to language acquisition is totally different from the views of Piaget and Vygotsky, who laid the primary emphasis on the role of social or environmental factors in language development.

In the case of foreign/second language teaching, the common view is that inductive learning (teaching a language through hidden grammar, or discovery learning) leads to acquisition. However, dwelling on Fodor’s views as discussed above, it appears that inductive learning is being confused with acquisition: by learning something via discovery learning, students merely improve their problem-solving skills; they do not acquire a language.

As for the problems with Universal Grammar, it can be said that UG’s particular aim is to account for how language works; its proponents had to deal with acquisition in order to account for the language itself, so the “acquisition part” is of secondary importance. A second drawback is that Chomsky studied only the core grammar of the English language (syntax) and investigated a limited number of linguistic universals, while neglecting the peripheral grammar, that is, language-specific rules (i.e., rules of specific languages which cannot be generalized). Thirdly, the primary function of language is communication, but it is discarded. The final and most significant problem is a methodological one: because Chomsky is concerned only with describing and explaining ‘competence’, there is little likelihood of SLA researchers carrying out empirical research.

In summary, UG has generated valuable predictions about the course of interlanguage and the influence of the first language. It has also provided invaluable information for L2 teaching as to how L2 teachers (or educational linguists) should present vocabulary items and how they should view grammar. As Cook (1991:158) puts it, UG shows us that language teaching should treat vocabulary not as tokens with isolated meanings but as items that play a part in the sentence, indicating which structures and words they may go with. The evidence in support of UG, on the other hand, is not conclusive. If the language module that determines success in L1 acquisition is proved to be accessible in L2 acquisition, L2 teaching methodologists should study how to trigger this language module and redesign their methodologies accordingly. The UG theory should, therefore, be studied in detail so as to endow us with a more educational and pedagogical basis for mother tongue and foreign language teaching.

2.5. A Neurofunctional Theory (based on the environmentalist view):

Ellis (1985:273) notes that this theory is based on two systems: the communication hierarchy and the cognitive hierarchy. “The communication hierarchy” refers to language and other forms of interpersonal communication. “The cognitive hierarchy”, on the other hand, refers to a number of cognitive information-processing activities possibly related to “conscious” processes. The theory also makes a sharp distinction between Primary Language Acquisition (PLA) and Secondary Language Acquisition (SELA). PLA is seen in the child’s acquisition of one or more languages from the age of two to five. SELA is found in both adults and children and is divided into two parts: (a) foreign language learning, that is, formal classroom language learning, and (b) second language acquisition, that is, the natural acquisition of a second language after the age of five. The theory claims that PLA and (b) are marked by use of the communication hierarchy, while (a) is marked by use of the cognitive hierarchy only. If we are to accept the existence of some innate and subconscious linguistic properties, which is what the nativists have claimed, we then have the right to ask why (a) is treated only as a cognitive process.

2.6. The Monitor Model

(1) The Acquisition-Learning Hypothesis

Krashen (1985), in his theory of second language acquisition (SLA) suggested that adults have two different ways of developing competence in second languages: Acquisition and learning. “There are two independent ways of developing ability in second languages. ‘Acquisition’ is a subconscious process identical in all important ways to the process children utilize in acquiring their first language, … [and] ‘learning’…, [which is] a conscious process that results in ‘knowing about’ [the rules of] language” (Krashen 1985:1).

Krashen (1983) believes that the result of learning, learned competence (LC), functions as a monitor or editor. That is, while acquired competence (AC) is responsible for our fluent production of sentences, LC makes corrections to these sentences either before or after their production. This kind of conscious grammar correction, ‘monitoring’, occurs most typically in a grammar exam, where the learner has enough time to focus on form and to make use of his conscious knowledge of grammar rules (LC) as an aid to ‘acquired competence’. The way to develop learned competence is fairly easy: analyzing grammar rules consciously and practising them through exercises. But what the Acquisition/Learning Distinction Hypothesis predicts is that learning the grammar rules of a foreign/second language does not result in subconscious acquisition.

The implication of the acquisition-learning hypothesis is that we should balance class time between acquisition activities and learning exercises.

(2) The Natural Order Hypothesis

According to the hypothesis, the acquisition of grammatical structures proceeds in a predicted progression. Certain grammatical structures or morphemes are acquired before others in first language acquisition and there is a similar natural order in SLA. The implication of natural order is not that second or foreign language teaching materials should be arranged in accordance with this sequence but that acquisition is subconscious and free from conscious intervention.

(3) The Input Hypothesis

This hypothesis relates to acquisition, not to learning. Krashen (1985:3) claims that people acquire language best by understanding input that is a little beyond their present level of competence. Consequently, Krashen believes that ‘comprehensible input’ (that is, i + 1) should be provided. The ‘input’ should be relevant and ‘not grammatically sequenced’. The foreign/second language teacher should always send meaningful messages, which are roughly tuned, and ‘must’ create opportunities for students to access i+1 structures to understand and express meaning. For instance, the teacher can lay more emphasis on listening and reading comprehension activities.
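The “i + 1” idea can be sketched as a toy selection rule. The levels and materials below are entirely hypothetical (not from Krashen or this article); the sketch only illustrates picking input one step beyond the learner’s current level i.

```python
# Toy sketch of "comprehensible input" (i + 1): given a learner's
# current level i, choose material one level beyond it.

MATERIALS = {
    1: "picture stories",
    2: "graded readers",
    3: "short news articles",
    4: "unabridged fiction",
}

def comprehensible_input(i):
    """Return material at level i + 1, capped at the highest level available."""
    return MATERIALS[min(i + 1, max(MATERIALS))]

print(comprehensible_input(1))  # learner at i = 1 gets level-2 material
print(comprehensible_input(4))  # already at the top: stays at the highest level
```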

(4) The Monitor Hypothesis

As mentioned before, adult second language learners have two means for internalizing the target language. The first is ‘acquisition’ which is a subconscious and intuitive process of constructing the system of a language. The second means is a conscious learning process in which learners attend to form, figure out rules and are generally aware of their own process. The ‘monitor’ is an aspect of this second process. It edits and makes alterations or corrections as they are consciously perceived. Krashen (1985:5) believes that ‘fluency’ in second language performance is due to ‘what we have acquired’, not ‘what we have learned’: Adults should do as much acquiring as possible for the purpose of achieving communicative fluency. Therefore, the monitor should have only a minor role in the process of gaining communicative competence. Similarly, Krashen suggests three conditions for its use: (1) there must be enough time; (2) the focus must be on form and not on meaning; (3) the learner must know the rule. Students may monitor during written tasks (e.g., homework assignments) and preplanned speech, or to some extent during speech. Learned knowledge enables students to read and listen more so they acquire more.

(5) The Affective Filter Hypothesis

The learner’s emotional state, according to Krashen (1985:7), is just like an adjustable filter which freely passes or hinders the input necessary for acquisition. In other words, input must be received in low-anxiety contexts, since acquirers with a low affective filter receive more input and interact with confidence. The filter is ‘affective’ because several factors regulate its strength: self-confidence, motivation, and anxiety state. The pedagogical goal in a foreign/second language class should thus be not only to provide comprehensible input but also to create an atmosphere that fosters a low affective filter.

The Monitor Model has been criticized by some linguists and methodologists. McLaughlin (1987:56) claims that the model fails at every juncture, arguing that none of the hypotheses is clear in its predictions. For example, he notes that the acquisition-learning distinction is not properly defined and that the distinction between these two processes cannot be tested empirically. Although it is true that some parts of the theory need more clarification, it would be harsh to suggest that the Model is pseudo-scientific. Hasanbey (personal communication) define acquisition as follows:

“Any systematic linguistic behavior, the rules of which cannot be verbalized by its performer is the outcome of acquisition. So if one uses a specific language rule in proper contexts and if the same person cannot articulate the underlying language rule which determines its proper context, then that person is said to have acquired the rule in question. On the other hand, if a person can verbalize a language rule, with or without its proper implementation during performance then that person is said to have conscious knowledge of that rule. So one might have acquired and learned the same rule in theory.”

While writing these very sentences, I have displayed a curious example of committing an error which proves the acquisition-learning distinction. In the statement “Hasanbey (personal communication) define acquisition as follows” the verb define should have an “-s” attached to it. I, as an EFL learner/teacher of English for about 20 years, “consciously” know when to attach that suffix to the verbs. But when it comes to fluent writing and speaking during which only subconsciously acquired rules have a say, I frequently miss that third person singular –s. So I and many other L2 learners who commit this error in spite of knowing the underlying rule at a conscious level, are the irrefutable evidence proving the distinction between acquisition and learning. The on-going interest in Krashen’s theory and the emergence of articles supporting his theory in recent journals also proves that his theory is far from being pseudo-scientific. Here is a typical example:

“Krashen’s ‘acquisition-learning’ distinction has met harsh criticism but the theory he put forward deserves a more sympathetic reappraisal. First of all, the theory is not insulated against falsification. The results of the studies examining the effects of explicit positive and/or negative evidence in formal learning are not inconsistent with it. Recent studies on the acquisition of functional categories lend support to the existence of the natural order in English L2. It is also possible to single out major dimensions on which processes and products of the ‘acquired’ and ‘learned’ systems differ using the principles of markedness and differences in computational complexity.” (Zobl, 1995: 35)

So far eight theories of language acquisition have been discussed (see Appendix for a brief account of other theories and a classification of theories based on the distinction made here). It can be seen that none of the theories is complete and most of them need further development. Each theory, however, is important for its implications and provides invaluable information as to how a language is acquired and how language teaching should take place.

III. CONCLUSION

The most important implication of language acquisition theories is obviously the fact that applied linguists, methodologists and language teachers should view the acquisition of a language not only as a matter of nurture but also as an instance of nature. In addition, only when we distinguish between a general theory of learning and language learning can we ameliorate the conditions of L2 education. To do so, applied linguists must be aware of the nature of both L1 and L2 acquisition and must consider the distinction proposed in this study.

Ridgway (2000: 13) notes that the educational linguist (not the applied linguist) is a practitioner who applies and adapts the policies of others in the classroom creatively. If the educational linguist is to adapt language models proposed by others (applied linguists) for classroom practice, it becomes all the more important “how” he or she will adapt them. How, for instance, should s/he utilize the findings of SLA studies conducted on syntax or natural order and apply them in his or her particular classroom settings? How should grammar points be handled? Should they be taught inductively or deductively? Or should there be a balance between grammar lessons and acquisition lessons, just as proposed by the proponents of the Monitor Model? What should vocabulary teaching be like, and how should a syllabus be designed? How will the results of language planning proposed by the government be implemented? Most of these “how” questions can be answered properly only through a detailed analysis and a thorough understanding of language acquisition theories.

Here, on the shoulders of the methodologists lies quite a heavy responsibility. As we often see, linguistics and TEFL/TESL are largely based on the nurturist facet of language acquisition, emphasizing discourse and ethnolinguistic studies. It would, of course, be unwise to deemphasize such studies and their role in accounting for language acquisition and in reaching a possible theory of educational linguistics. However, in this article it has been shown that language acquisition is also, to a considerable degree, a matter of innate factors. What, then, is the role of the “nature” part of theories in the overall sketch of language acquisition and methodology?

In addition, the author wishes to emphasize the necessity of the subfield “educational psycholinguistics”. From Stubbs’ point of view (1986: 283), a thorough description of language in use, language variation, levels of language such as phonology, morphology and syntax, semantics and discourse will form the basis of a complete educational theory of language. If such a theory is expected to be beneficial to foreign and second language teaching, then it should not only include these environmentalist components but also include the subfield “educational psycholinguistics”, which would mainly focus on “naturist” accounts as discussed in previous parts of this article. The inclusion of educational psycholinguistics in this sense will make the current position of applied linguistics and language teaching far stronger. No longer should mind and innateness be treated as dirty words (Pinker, 1994: 22). This will most probably lead to innovative proposals for syllabus development and the design of instructional systems, practices, techniques, procedures in the language classroom, and finally a sound theory of L2 teaching and learning.

Retrieved from:

http://chomikuj.pl/natalia06_09/J.+ANGIELSKI/Akwizycja/A+Discussion+of+Language+Acquisition+Theories,1864493093.doc

Multi-Vitamin Supplements and Brain Function

In Alternative Health, Brain studies, Fitness/Health on Saturday, 15 September 2012 at 08:24

Multivitamin supplements boost brain function, say UK researchers

Taking a multivitamin supplement daily can improve cognitive performance in both children and adults, say UK researchers.

Seclusion and Restraint in the Public Schools

In Education, School Psychology, Special Education on Saturday, 15 September 2012 at 08:20
September 8, 2012
A Terrifying Way to Discipline Children
By Bill Lichtenstein
Editors’ note appended
In my public school 40 years ago, teachers didn’t lay their hands on students for bad behavior. They sent them to the principal’s office. But in today’s often overcrowded and underfunded schools, where one in eight students receives help for special learning needs, the use of physical restraints and seclusion rooms has become a common way to maintain order.

It’s a dangerous development, as I know from my daughter’s experience. At the age of 5, she was kept in a seclusion room for up to an hour at a time over the course of three months, until we discovered what was happening. The trauma was severe.

According to national Department of Education data, most of the nearly 40,000 students who were restrained or isolated in seclusion rooms during the 2009-10 school year had learning, behavioral, physical or developmental needs, even though students with those issues represented just 12 percent of the student population. African-American and Hispanic students were also disproportionately isolated or restrained.

Joseph Ryan, an expert on the use of restraints who teaches at Clemson University, told me that the practice of isolating and restraining problematic children originated in schools for children with special needs. It migrated to public schools in the 1970s as federal laws mainstreamed special education students, but without the necessary oversight or staff training. “It’s a quick way to respond but it’s not effective in changing behaviors,” he said.

State laws on disciplining students vary widely, and there are no federal laws restricting these practices, although earlier this year Education Secretary Arne Duncan wrote, in a federal guide for schools, that there was “no evidence that using restraint or seclusion is effective.” He recommended evidence-based behavioral interventions and de-escalation techniques instead.

The use of restraints and seclusion has become far more routine than it should be. “They’re the last resort too often being used as the first resort,” said Jessica Butler, a lawyer in Washington who has written about seclusion in public schools.

Among the recent instances that have attracted attention: Children in Middletown, Conn., told their parents that there was a “scream room” in their school where they could hear other children who had been locked away; last December, Sandra Baker of Harrodsburg, Ky., found her fourth-grade son, Christopher, who had misbehaved, stuffed inside a duffel bag, its drawstrings pulled tight, and left outside his classroom. He was “thrown in the hall like trash,” she told me. And in April, Corey Foster, a 16-year-old with learning disabilities, died on a school basketball court in Yonkers, N.Y., as four staff members restrained him following a confrontation during a game. The medical examiner ruled early last month that the death was from cardiac arrest resulting from the student’s having an enlarged heart, and no charges were filed.

I saw firsthand the impact of these practices six years ago when my daughter, Rose, started kindergarten in Lexington, Mass. Rose had speech and language delays. Although she sometimes became overwhelmed more quickly than other children, she was called “a model of age-appropriate behavior” by her preschool. One evaluation said Rose was “happy, loves school, is social.” She could, however, “get fidgety and restless when she is unsure as to what is expected of her. When comfortable, Rose is a very participatory and appropriate class member with a great deal to contribute to her world.”

Once in kindergarten, Rose began throwing violent tantrums at home. She repeatedly watched a scene from the film “Finding Nemo” in which a shark batters its way into a tiny room, attempting to eat the main characters. The school provided no explanation or solution. Finally, on Jan. 6, 2006, a school aide called saying that Rose had taken off her clothes. We needed to come get her.

At school, her mother and I found Rose standing alone on the cement floor of a basement mop closet, illuminated by a single light bulb. There was nothing in the closet for a child — no chair, no books, no crayons, nothing but our daughter standing naked in a pool of urine, looking frightened as she tried to cover herself with her hands. On the floor lay her favorite purple-striped Hanna Andersson outfit and panties.

Rose got dressed and we removed her from the school. We later learned that Rose had been locked in the closet five times that morning. She said that during the last confinement, she needed to use the restroom but didn’t want to wet her outfit. So she disrobed. Rather than help her, the school called us and then covered the narrow door’s small window with a file folder, on which someone had written “Don’t touch!”

We were told that Rose had been in the closet almost daily for three months, for up to an hour at a time. At first, it was for behavior issues, but later for not following directions. Once in the closet, Rose would pound on the door, or scream for help, staff members said, and once her hand was slammed in the doorjamb while being locked inside.

At the time, I notified the Lexington Public Schools, the Massachusetts Department of Children and Families and the Department of Mental Health about Rose and other children in her class whom school staff members indicated had been secluded. If any of these agencies conducted a formal investigation, I was not made aware of it.

Rose still has nightmares and other symptoms of severe stress. We brought an action against the Lexington Public Schools, which we settled when the school system agreed to pay for the treatment Rose needed to recover from this trauma.

The physical and psychological injuries to children as a consequence of this disciplinary system are an issue that has found its way to Congress. Legislation to ban these practices has been introduced in the House and the Senate, but no vote is expected this year.

Meanwhile, Rose is back in public school and has found it within her to forgive those involved in her case. “They weren’t bad people,” she told me. “They just didn’t know about working with children.”

Bill Lichtenstein is an investigative journalist and filmmaker.

Editors’ Note: September 16, 2012
An opinion essay on Sept. 9 criticizing the use of seclusion and restraint to discipline students described an episode on Jan. 6, 2006, in which the writer’s daughter, then a kindergartner, was kept in an isolation room at her school in Lexington, Mass. Several details of that episode have since been disputed.

The girl wet herself while being confined in a closet for misbehaving. But school officials, and a 2008 deposition by the girl’s mother, state that she was then cleaned up and dressed while her parents were notified — and that it was not the case that the parents found her standing alone, unclothed, in her urine.

The article incorrectly described the closet where the girl was confined. It was on a mezzanine between two classroom levels, not in the basement.

While the girl’s parents sued the Lexington school district in 2007, and obtained a settlement in 2008, the writer did not notify two Massachusetts state agencies — the Department of Children and Families and the Department of Mental Health — “at the time” of the episode, according to state records.

The girl’s parents divorced in 2007. If The Times had known before the article was published that the writer’s ex-wife was now the girl’s custodial parent, it would have contacted her.

ADHD into Adolescence

In ADHD, ADHD child/adolescent, ADHD stimulant treatment, Medication, Neuropsychology, School Psychology on Friday, 14 September 2012 at 05:26

Adolescent ADHD: Diagnosis and Initial Treatment

Scott H. Kollins, PhD

http://www.medscape.org/viewarticle/749104_2

ADHD Into Adolescence

Longitudinal studies demonstrate that ADHD is a disorder that children do not simply outgrow as they reach adolescence.[1-5] Follow-up studies of children with ADHD estimate that the diagnosis persists in 50% to 80% of cases.[1,6-10] Studies of clinically referred adolescents with ADHD also indicate that the disorder continues into adolescence and is associated with functional impairments in areas including social competence, behavioral and emotional adjustment, school performance, and general quality of life, particularly when compared with nondiagnosed peers.[11,12]

Although ADHD as a disorder is continuous from childhood into adolescence,[13] the persistence of ADHD into adolescence needs to be considered in the context of adolescence as a period of development in which there are many changes at multiple levels, including physical, psychological, and social changes. During this developmental period, adolescents typically experience a growing influence of peers and independence from family members.[14] For adolescents with a disorder like ADHD in which social and emotional impairment is common,[15] this transitional period may be particularly difficult. Cognitive demands increase along with greater independence from adult supervision (eg, multiple teachers with different teaching styles, amount and scope of homework) as children enter into middle and high school,[11] which requires greater self-regulation, a quality that is often impaired in those with ADHD.

Neuronal and hormonal developmental changes during adolescence can further influence how symptoms are expressed.[14] Related to these biologically based changes, adolescence also is a critical period neurobiologically, with more risk-taking behavior and drug and alcohol use, which correspond with notable changes in motivational and reward-related brain regions. Such behaviors can be problematic because adolescents are naturally more sensitive to the positive rewarding properties of various drugs and natural stimuli and less sensitive to the aversive properties of these stimuli.[16] These behavioral and neurobiological developmental changes in concert with social, hormonal, and physiological changes place adolescents at high risk for substance use.[17,18] ADHD is an additional risk factor for such substance use behavior (reviewed in greater detail below) and thus places adolescents with ADHD at greater risk during this critical developmental period.

Given such developmental changes, the presentation of ADHD changes in adolescence as well, including symptom presentation; although inattentive symptoms continue to be involved in the clinical characteristics of most patients, hyperactive symptoms decline in severity for many.[7,19-21] This symptom presentation continues to cause functional impairment in domains typically impaired in childhood, including academics.[22]


ADHD and Comorbid Conditions in Adolescence

Comorbidity within populations of adolescents with ADHD is typically the norm rather than the exception. For example, in one clinical sample of patients 6 to 18 years old, more than half met the criteria for at least one comorbid disorder.[23] Disruptive behavior disorders, including ODD and CD, are particularly common.[24] In general population studies, ADHD increases the odds of ODD or CD by 10.7-fold.[25] Some studies have estimated that 25% to 75% of adolescents with ADHD meet the diagnostic criteria for ODD or CD.[14] In another study, ODD was comorbid among 54% to 67% of clinically referred 7- to 15-year-old children with ADHD.[23] In this study, differences in subtypes also emerged. ODD was significantly more common among those with combined and hyperactive-impulsive ADHD subtypes (50.7% and 41.9%, respectively) than with inattentive subtype (20.8%). Such rates are concerning not only because of the characteristics of these comorbid disruptive behavior disorders (eg, delinquency) that are dealt with in adolescence, but also because CD is a precursor to antisocial personality disorder in adulthood. Given that CD is commonly seen in children with ADHD and is a precursor to antisocial personality disorder, it is not surprising that rates of antisocial personality disorder (among additional forms of Axis II psychopathology) are elevated in adults with ADHD.[4,5,10,26,27]

SUDs are also common in adolescents with ADHD. In longitudinal studies of hyperactive children, the risk for SUDs ranges from 12% to 24% into adulthood.[8,10,26] Because adolescence is a time when initial exposure to substances occurs and because adolescence is also a developmental period during which susceptibility to the reinforcing effects of substances is heightened,[16-18] substance use in adolescence is a concern both as an outcome of current use and of continued risk for future use. This risk is further elevated among adolescents with ADHD. Individuals with ADHD engage in experimentation earlier than children without ADHD.[28,29] Although such findings indicate that the relationship between ADHD and SUDs is independent of comorbidity, CD is a strong predictor of risk for SUDs among children with ADHD when they reach adolescence and adulthood.[30-32] In addition, prospective studies indicate that children with ADHD and co-occurring CD or bipolar disorder are at a higher risk for SUDs during adolescence.[33-35]

Adolescents with ADHD smoke at significantly higher rates than peers without ADHD. Prevalence rates range from 10% to 46% for adolescents with ADHD vs 10% to 24% for adolescents without ADHD.[34,36,37] Even among nonclinical patient samples, there is a linear relationship between number of ADHD symptoms, lifetime risk of smoking, and age of onset of regular smoking.[38] Additional studies have demonstrated that youth with ADHD initiate smoking earlier, exhibit a higher level of nicotine dependence, have greater difficulty quitting than youth without ADHD, and are at an increased risk for becoming a regular cigarette smoker.[37,39] In addition, the relationship between ADHD and tobacco use has remained significant as an independent risk factor after accounting for comorbidity, including CD.[40,41]

Mood disorders are also common among adolescents with ADHD.[42] For example, in one study, 21.6% of children 6 to 18 years old who had ADHD also had a depressive disorder.[23] The combination of a major depressive disorder and a comorbid disruptive behavior disorder is a risk factor for suicidal behavior,[43] and both major depressive disorder and disruptive behavior disorder are common comorbidities in those with ADHD. One longitudinal study assessing childhood ADHD reported that the diagnosis of ADHD in children predicted adolescent depression and/or suicide attempts. In addition, female sex, maternal depression, and concurrent symptoms in childhood predicted which children with ADHD were at greatest risk for these outcomes.[44]

Bipolar disorder is another disorder commonly seen in children with ADHD. Studies have estimated that bipolar disorder co-occurs among 10% to 20% of children and adolescents with ADHD.[45-47] Longitudinal studies of hyperactive children indicate a similar prevalence in adulthood,[5,10,26] although another longitudinal study of children with ADHD reported higher rates into adolescence (12%).[48] In some cases, ADHD may be evidence of more severe bipolar disorder. For example, ADHD is more common in cases of childhood-onset bipolar disorder, which suggests that in some cases ADHD may signal an earlier onset, more chronic bipolar disorder.[48] Regarding anxiety disorders, longitudinal studies of hyperactive children do not report significant elevations in comorbid anxiety disorders.[5,10,26] However, anxiety disorders have been reported in 10% to 40% of clinic-referred children and adolescents with ADHD.[23,49-51] Overall, these studies demonstrate that comorbidity is typical among adolescents with ADHD and further complicates its clinical presentation in adolescence. In addition to concerns about prognosis, such comorbidities can easily complicate issues related to assessment.

Assessment of Adolescents With ADHD

An empirically-based assessment of ADHD typically includes structured clinical interviews, standardized questionnaires, and a review of records, all in the context of diagnostic criteria.[14,52] Cognitive test performance may provide additional value when differentiating ADHD subtypes.[53] Although there is diagnostic continuity of ADHD from childhood into adolescence,[13] assessing ADHD during adolescence needs to be considered in the context of complicating factors. One such factor involves comorbidity. Comorbidity is common in adolescents with ADHD, and conditions can co-occur with ADHD or can mimic ADHD symptoms. Regarding the latter, a diminished ability to concentrate can also be a symptom of a major depressive episode, distractibility and being overly talkative can also be symptoms of a manic or hypomanic episode, and restlessness and difficulty concentrating can be symptoms of generalized anxiety disorder or post-traumatic stress disorder.[54] Further, substance use can confound the assessment for ADHD, as alcohol and illicit drug use can create cognitive impairments that are also common in youth with ADHD.[55-57]

An additional factor that emerges in assessments of adolescent ADHD involves reporting source. In childhood ADHD assessments, parents and teachers are the typical reporters.[14] However, adolescents spend more time with peers and less time with parents. Further, in contrast to elementary school, adolescents have multiple teachers who spend less time with them during the school day and thus have fewer opportunities to observe their students’ behavior. Self-report methods can be incorporated into adolescent ADHD assessments as well; however, adolescents with ADHD have a tendency to underreport the severity of their symptoms,[7,58] which should be considered in any assessment. In adolescents with ADHD, concerns about the accuracy of self-report involve not only their account of ADHD symptoms, but of past delinquent behaviors as well. In one study, adolescents and young adults with ADHD were less likely than those without ADHD to report accurately on delinquent behaviors they engaged in 1 year earlier.[59] Such inaccurate reporting of behavior in ADHD is consistent with findings that persons with ADHD have a tendency toward a positive illusory bias view of their behavior[60] and with theories of ADHD that argue that problems with self-awareness emerge from working memory impairments.[61]

Developmental changes in the presentation of ADHD symptoms also have implications for self-report in the assessment of adolescents with ADHD. In particular, the decline in overt hyperactive symptoms into adolescence[7,19,21,22] makes inattentive symptoms more prominent. As a clinical observation, inattentive features common in ADHD may be experienced more subjectively (eg, daydreaming) than more overt hyperactive behaviors (eg, getting out of one’s seat at inappropriate times), thus making self-report more relevant in this age group.

Finally, the appropriateness of diagnostic criteria for ADHD complicates adolescent assessment. Specifically, the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision[54] states that symptom onset must have occurred by age 7 to qualify for an ADHD diagnosis. However, studies addressing the empiric basis for this criterion have called it into question and recommend a revision to include childhood onset at or before age 12.[39,62-64] One study assessing the implications of this diagnostic revision in a large longitudinal sample found that the prevalence estimate, correlates, and risk factors of ADHD would not be affected if this new diagnostic criterion were adopted.[65] Thus, although following diagnostic criteria in adolescent ADHD assessments is recommended, incorporating these more recent findings may be crucial in making a diagnosis.

Treatment of ADHD in Adolescence

Relatively less research has been devoted to efficacious treatments for adolescents with ADHD compared with treatments for children with ADHD.[66] Despite diagnostic continuity, given the physical, social, and psychological changes that occur in adolescents with ADHD, it is somewhat difficult to simply extend childhood treatments to this group. ADHD treatments in this age group are likely to require more extensive and costly interventions. Further, treating adolescents is particularly challenging because they are less likely than children to receive mental health services in the first place.[67]

ADHD treatment is focused on symptom management and the reduction of downstream effects of unmanaged ADHD, such as school failure, automobile accidents, and peer rejection.[68] The more complex academic and social demands during adolescence require a management plan that addresses academic needs throughout the school day and into the evening, as well as weekday and weekend activities including driving, athletic and artistic endeavors, and family and peer relationships. Symptom management should be analogous to symptom management for any lifelong condition, such as nearsightedness, diabetes, or asthma. Such comparisons emphasize that ADHD is not the fault of the person with the disorder but rather a neurobiological condition, and making such comparisons may help the teen deal with any stigma associated with a psychiatric disorder.[69]

For children with ADHD, psychoeducation about ADHD, psychopharmacology (primarily stimulants), parent training in behavior management methods, classroom behavioral modification and academic interventions, and special educational placement are the most effective or promising interventions.[68] The empiric literature regarding extending these treatments into adolescence is much less prevalent, however. Thus, although treatment options for adolescent ADHD may be available, not all are equally effective and in many cases well-controlled studies are lacking. However, some treatments for adolescents with ADHD and their families do have empiric support, particularly pharmacotherapy and specific psychosocial treatment approaches.[68,70]

Although the stimulants and nonstimulants used for the treatment of ADHD can cause minor changes in blood pressure and heart rate, most analyses of studies of cardiac events and sudden death in children, youth, and adults with ADHD treated with stimulants have not found a higher incidence of these events in patients without preexisting structural cardiovascular conditions or a family history of sudden death.[71,72] Therefore, only routine assessment of cardiovascular function, similar to screening for participation in school sports, is recommended.

Current guidelines and consensus statements[71,72] do not recommend specialty cardiovascular screening (including routine electrocardiogram) before initiating treatment for ADHD, either with stimulants or nonstimulants. However, because these medications are known to cause small elevations in blood pressure and pulse (in the case of stimulants and atomoxetine) or hypotensive changes (in the case of the alpha-2 agonists), blood pressure and heart rate should be checked before treatment is started and should be monitored regularly at follow-up visits.

Pharmacotherapy

Stimulant medications. Stimulants and noradrenergic agonists are psychotropic treatments approved by the US Food and Drug Administration (FDA) for use in adolescents. Stimulants include methylphenidates and amphetamine compounds; these medications have a long-standing history in the treatment of ADHD and are considered the first-line therapies for ADHD.[73] The 2 classes of stimulants have slightly differing mechanisms of action. Whereas both block the reuptake of dopamine and norepinephrine into the presynaptic neuron and thereby increase neurotransmitter concentrations, amphetamine compounds also increase the release of dopamine from presynaptic cytoplasmic storage vesicles.[74]

Stimulants are effective in approximately 70% of adolescents with ADHD.[75-77] At least 7 randomized controlled trials have been conducted among adolescents with ADHD and all but one support the efficacy of stimulants for ADHD in adolescence.[74] Consistent with findings of diagnostic continuity of ADHD from childhood into adolescence, the efficacy of stimulants (specifically, methylphenidate) is largely equal from childhood into adolescence.[78] In a meta-analysis of children and adolescents comparing the efficacy of the methylphenidates and amphetamine compounds, amphetamine compounds had a small yet statistically significant advantage over a standard-release form of methylphenidate for parent and clinician ratings of ADHD symptoms and global ratings (but not for teacher ratings).[79] Although stimulants are effective in acutely reducing ADHD symptoms, common medication side effects (eg, decreased appetite) have prompted consideration of other pharmacologic interventions.[80]

Nonstimulant medications. Noradrenergic agonists approved by the FDA for use in children and adolescents with ADHD include guanfacine extended release (XR), clonidine modified release (MR),[81] and atomoxetine. Although the precise mechanism of action for treating ADHD is unclear, these medications likely facilitate dopamine and noradrenaline neurotransmission thought to play a role in the pathophysiology of ADHD.[81,82]

In 2009, guanfacine XR was the first alpha-2 agent to be approved by the FDA for use in the treatment of ADHD in children and adolescents. According to one randomized controlled trial in children and adolescents with ADHD, guanfacine XR performed better than placebo in reducing teacher-rated ADHD symptoms but not parent-rated ADHD symptoms.[83] In several double-blind, placebo-controlled trials involving child and adolescent participants, guanfacine XR performed significantly better than placebo in reducing ADHD symptoms.[84,85] A 2-year, open-label, follow-up study of guanfacine XR in children and adolescents, with or without co-administration of stimulants, demonstrated continued efficacy similar to that seen in short-term randomized controlled trials.[86] Such findings emerged in a similar study,[87] although the attrition rate in both studies was greater than 75%, limiting generalizability.

Two randomized, double-blind, placebo-controlled studies have evaluated the efficacy of clonidine MR in children and adolescents with ADHD. One assessed clonidine MR as monotherapy, and the other studied it as an add-on agent in patients on a non-optimal stimulant regimen. In both trials, clonidine MR significantly reduced ADHD symptoms from baseline and was well tolerated.[88,89]

Atomoxetine is a selective norepinephrine reuptake inhibitor approved for use in adolescents with ADHD,[90-92] and it has efficacy comparable to that of methylphenidate in reducing core ADHD symptoms in children and adolescents.[93] In one randomized, placebo-controlled, dose-response study of atomoxetine in children and adolescents with ADHD, atomoxetine was consistently associated with a significant reduction of ADHD symptoms.[94] Social and family functioning also improved among those taking atomoxetine, with statistically significant gains in measures of the ability to meet psychosocial role expectations and of parental impact. In a randomized, placebo-controlled study of atomoxetine among children and adolescents with ADHD, reductions in ADHD symptoms among atomoxetine-treated participants were superior to those in the placebo group as assessed by investigator, parent, and teacher ratings.[95] Additional trials have demonstrated the efficacy and tolerability of this medication in children and adolescents with ADHD.[96-101] In addition, acute atomoxetine treatment appears to be equally effective and equally well tolerated in children and adolescents.[102] Such findings suggest that differences in tolerability or ADHD symptom response between children and adolescents are negligible.

Treatment Discontinuation in Adolescence

When considering pharmacotherapy, one issue relevant to adolescents with ADHD is treatment discontinuation. Prescribing rates among general practitioners drop significantly as patients with ADHD move through adolescence.[103] Further, this decline is greater than the reported age-related decline in symptoms, indicating that treatment is prematurely discontinued in many cases in which symptoms persist.[104] In one longitudinal study,[105] 48% of children between the ages of 9 and 15 years had discontinued ADHD medication, and age was a significant moderator of medication adherence such that adolescents were less likely to continue their medication.[105] Thus, in addition to a need for continued research on effective treatments for adolescents with ADHD,[66] unique barriers to treatment such as premature discontinuation need to be addressed.

Psychosocial Treatments

In terms of psychosocial treatments for adolescents with ADHD, the empiric literature is sparse compared with the literature on pharmacotherapy. In addition, because of the many developmental and environmental changes that occur during the transition into adolescence, childhood treatments do not translate easily to this age group. Developmental changes with implications for treatment include a greater cognitive capacity for abstraction, increased behavioral self-awareness, identity formation and a need for independence, heightened peer influence, variability in daily school routines, and physiologic changes (eg, development of secondary sex characteristics).[66] Thus, recommended treatment approaches include increased involvement of the teenager, behavioral contingencies that offer more opportunities to socialize with peers and exert independence, collaboration with multiple teachers, attention to homework issues (particularly time management and organizational skills), and self-monitoring strategies.[44] Among studies of psychosocial treatments for adolescents with ADHD, family-based and school-based approaches are the most promising.[44,106]

Family-Based Interventions

Three studies have examined family-based interventions. Barkley and colleagues[107] randomly assigned 12- to 18-year-olds to 8 to 10 sessions of behavior management training, problem-solving and communication training, or structural family therapy. All strategies resulted in significant improvement in negative communication, conflict, anger during conflicts, school adjustment, internalizing and externalizing symptoms, and maternal depressive symptoms at post-treatment, and improvements were largely maintained at a 3-month follow-up visit. However, only 5% to 20% in each treatment group demonstrated clinically significant reliable change following treatment.

Another study compared parent behavior management training alone with parent behavior management training combined with problem-solving and communication therapy.[108] Both treatments resulted in significant improvement in parent-teen conflicts but did not differ statistically from each other. Although group-level analyses and normalization rates supported the efficacy of these treatments, reliable change indices were similar to those reported by Barkley and colleagues.[107]

Another study evaluated behavior management, problem solving, and education groups for parents of adolescents with ADHD.[109] Pretreatment and posttreatment comparisons indicated statistically significant reductions in the frequency and intensity of self-reported parent-adolescent conflict and in parent-reported problem behavior and positive effects on parent skills and confidence.

Although these studies are promising, they either produced little clinically significant reliable change or were limited by methodologic design (ie, lack of a control or alternative treatment group). In terms of clinical implications, multimodal long-term treatment may be useful to assist parents in their interactions with their teens and to manage parental and family distress,[110] as opposed to simply reducing ADHD symptom severity.

School-Based Interventions

Academic functioning is one of the most common concerns of parents of adolescents with ADHD.[110] Interventions targeting academic impairment in adolescents with ADHD are promising.[111] One school-based intervention involving directed note taking, taught through group-based didactic instruction and modeling, yielded statistically significant improvements in on-task behavior, material comprehension, and daily assignment scores in a sample of adolescents with ADHD.[112] A more comprehensive treatment, the Challenging Horizons Program,[113] involves after-school academic training that incorporates behavioral strategies in group and individual settings, plus monthly group parent training. Compared with a community care group, this program has yielded moderate to large effect sizes on parent- and teacher-rated academic functioning and classroom disturbance among middle school students with ADHD.[114] Although effect sizes were less promising for social functioning, and methodologic limitations (eg, quasi-experimental design, small sample size) constrained the generalizability of these findings, a 3-year treatment outcome study of this program indicated cumulative long-term benefits for the treatment group relative to a community care control group on parent ratings of ADHD symptoms and social functioning.[115] However, this latter study did not indicate any academic benefit of the treatment. Single-subject design studies also support the benefit of behavioral techniques (eg, self-monitoring and functional analysis) in improving goal-oriented behavior in the classroom while reducing disruptive behavior among adolescents with ADHD.[116,117] These approaches deserve additional consideration in future research.

A variant of the interventions aimed at academic behavior in adolescents with ADHD is also emerging. The Homework Intervention Program is a behavioral-based parent training program targeting homework in middle school students. In a pilot study of a small sample of middle school students diagnosed with ADHD (n = 11), multiple-baseline design analyses indicated an improvement in parent-reported homework problems and ADHD symptoms, overall grade point average, and teacher-reported productivity.[118]

Overall, comprehensive school-based interventions are promising and, like family-based interventions, warrant future research. Psychosocial treatment for adolescents with ADHD is a small yet developing field of research, and current treatments need more thorough assessment. For example, social impairment continues into adolescence,[119] and social impairment in youth with ADHD increases the risk for substance use and related problems,[120] demonstrating the need to also target social functioning in adolescent ADHD interventions. Providers also need to consider how to individualize treatment for adolescents with ADHD and for the various comorbidities that can be present. In addition, treatments that complement existing psychosocial approaches should be considered to target the multidimensional challenges that adolescents with ADHD face.[66] Some potentially complementary treatments have yielded promising results: for example, attention training in cognitive training programs, mindfulness meditation, and physical exercise to reduce disruptive behaviors have shown potential, although more methodologically rigorous trials are required.[121-123]

Driving and ADHD

In North America, motor vehicle accidents are the leading cause of death among adolescents.[124] Drivers with ADHD are at significantly higher risk for poor driving outcomes, including more traffic citations (particularly for speeding), at-fault accidents, repeated and more severe accidents, driving-related morbidity, and license suspensions and revocations.[125] Such findings were not better accounted for by comorbidity or intelligence. Given that substance use is not uncommon in persons with ADHD, the risks associated with drug and alcohol use should also be considered.[126] In terms of clinical implications, stimulant medications have been shown to improve driving performance in drivers with ADHD.[127-129] The method of stimulant delivery is also important: in one study, adolescent drivers with ADHD drove better throughout the day on a driving simulator after taking an extended controlled-release stimulant than after an immediate-release formulation.[126]

ADHD Pharmacotherapy and Growth

The effects of ADHD medication (especially stimulants) on growth have been an area of considerable debate and controversy. Reviews indicate that treatment with stimulant medication does lead to delays in height (approximately 1 cm per year during the first 3 years of treatment) and weight.[130,131] These reviews also indicate that the effect of stimulants on growth declines over time, that growth deficits may be dose dependent, that growth suppression effects may not differ between methylphenidate and amphetamine, that stimulant discontinuation may lead to growth normalization, and that ADHD itself may be associated with dysregulated growth.[130,131]

In one longitudinal study, methylphenidate treatment was associated with small yet significant delays in height, weight, and body mass index.[132] Within the ADHD sample, those who had not received prior stimulant therapy and those who entered the study with above-average height, weight, and body mass index were most likely to experience growth deficits while taking stimulants. Further, the impact on all growth indices was most apparent during the first year of treatment and attenuated over time. In another longitudinal study that evaluated the effect of stimulant medication on physical growth, a newly medicated group exhibited reductions in size after 3 years of treatment relative to a nonmedicated group: the newly medicated group was 2.0 cm shorter and weighed 2.7 kg less.[133]

These findings indicate that in clinical settings, the potential benefits of pharmacotherapy (particularly stimulants) in symptom reduction and daily functioning need to be weighed against its small but significant effects on growth. Growth suppression does not appear to be a clinical concern for most children treated with stimulants.[130] Although future studies are required to clarify the effects of continuous pharmacotherapy into adulthood and to attain a better perspective on the long-term impact on growth, these findings suggest that growth rate should be monitored during treatment for ADHD.

References

  1. Biederman J, Faraone S, Milberger S, et al. A prospective 4-year follow-up study of attention-deficit hyperactivity and related disorders. Arch Gen Psychiatry. 1996;53:437-446.
  2. Fischer M, Barkley RA, Edelbrock CS, Smallish L. The adolescent outcome of hyperactive children diagnosed by research criteria: II. Academic, attentional, and neuropsychological status. J Consult Clin Psychol. 1990;58:580-588.
  3. Mannuzza S, Klein RG, Bessler A, Malloy P, LaPadula M. Adult psychiatric status of hyperactive boys grown up. Am J Psychiatry. 1998;155:493-498.
  4. Rasmussen P, Gillberg C. Natural outcome of ADHD with developmental coordination disorder at age 22 years: a controlled, longitudinal, community-based study. J Am Acad Child Adolesc Psychiatry. 2000;39:1424-1431.
  5. Weiss G, Hechtman LT. Hyperactive Children Grown Up. 2 ed. New York, NY: Guilford Press; 1993.
  6. August GJ, Stewart MA, Holmes CS. A four-year follow-up of hyperactive boys with and without conduct disorder. Br J Psychiatry. 1983;143:192-198.
  7. Barkley RA, Fischer M, Smallish L, Fletcher K. The persistence of attention-deficit/hyperactivity disorder into young adulthood as a function of reporting source and definition of disorder. J Abnorm Psychol. 2002;111:279-289.
  8. Gittelman R, Mannuzza S, Shenker R, Bonagura N. Hyperactive boys almost grown up. I. Psychiatric status. Arch Gen Psychiatry. 1985;42:937-947.
  9. Ingram S, Hechtman L, Morgenstern G. Outcome issues in ADHD: adolescent and adult long-term outcome. Ment Retard Dev Disabil Res Rev. 1999;5:243-250.
  10. Mannuzza S, Klein RG, Bessler A, Malloy P, LaPadula M. Adult outcome of hyperactive boys. Educational achievement, occupational rank, and psychiatric status. Arch Gen Psychiatry. 1993;50:565-576.
  11. Barkley RA, Anastopoulos AD, Guevremont DC, Fletcher KE. Adolescents with ADHD: patterns of behavioral adjustment, academic functioning, and treatment utilization. J Am Acad Child Adolesc Psychiatry. 1991;30:752-761.
  12. Topolski TD, Edwards TC, Patrick DL, Varley P, Way ME, Buesching DP. Quality of life of adolescent males with attention-deficit hyperactivity disorder. J Atten Disord. 2004;7:163-173.
  13. Faraone S, Biederman J, Monuteaux MC. Further evidence for the diagnostic continuity between child and adolescent ADHD. J Atten Disord. 2002;6:5-13.
  14. Barkley RA. Attention Deficit Hyperactivity Disorder: A Handbook for Diagnosis and Treatment. New York, NY: Guilford Press; 2006.
  15. Wehmeier PM, Schacht A, Barkley RA. Social and emotional impairment in children and adolescents with ADHD and the impact on quality of life. J Adolesc Health. 2010;46:209-217.
  16. Doremus-Fitzwater TL, Varlinskaya EI, Spear LP. Motivational systems in adolescence: possible implications for age differences in substance abuse and other risk-taking behaviors. Brain Cogn. 2010;72:114-123.
  17. Masten AS, Faden VB, Zucker RA, Spear LP. Underage drinking: a developmental framework. Pediatrics. 2008;121 Suppl 4:S235-251.
  18. Windle M, Spear LP, Fuligni AJ, et al. Transitions into underage and problem drinking: developmental processes and mechanisms between 10 and 15 years of age. Pediatrics. 2008;121 Suppl 4:S273-289.
  19. Barkley RA, Fischer M, Edelbrock CS, Smallish L. The adolescent outcome of hyperactive children diagnosed by research criteria: I. An 8-year prospective follow-up study. J Am Acad Child Adolesc Psychiatry. 1990;29:546-557.
  20. Hart EL, Lahey BB, Loeber R, Applegate B, Frick PJ. Developmental change in attention-deficit hyperactivity disorder in boys: a four-year longitudinal study. J Abnorm Child Psychol. 1995;23:729-749.
  21. Milich R, Loney J. The role of hyperactivity and aggressive symptomatology in predicting adolescent outcome among hyperactive children. J Pediatr Psychol. 1979;4:93-112.
  22. Mannuzza S, Klein RG, Bessler A, Malloy P, Hynes ME. Educational and occupational outcome of hyperactive boys grown up. J Am Acad Child Adolesc Psychiatry. 1997;36:1222-1227.
  23. Elia J, Ambrosini P, Berrettini W. ADHD characteristics: I. Concurrent co-morbidity patterns in children & adolescents. Child Adolesc Psychiatry Ment Health. 2008;2:15.
  24. Loeber R, Burke JD, Lahey BB, Winters A, Zera M. Oppositional defiant and conduct disorder: a review of the past 10 years, part I. J Am Acad Child Adolesc Psychiatry. 2000;39:1468-1484.
  25. Angold A, Costello EJ, Erkanli A. Comorbidity. J Child Psychol Psychiatry. 1999;40:57-87.
  26. Fischer M, Barkley RA, Smallish L, Fletcher K. Young adult follow-up of hyperactive children: self-reported psychiatric disorders, comorbidity, and the role of childhood conduct problems and teen CD. J Abnorm Child Psychol. 2002;30:463-475.
  27. Miller TW, Nigg JT, Faraone SV. Axis I and II comorbidity in adults with ADHD. J Abnorm Psychol. 2007;116:519-528.
  28. Carroll KM, Rounsaville BJ. History and significance of childhood attention deficit disorder in treatment-seeking cocaine abusers. Compr Psychiatry. 1993;34:75-82.
  29. Wilens TE, Biederman J, Mick E, Faraone SV, Spencer T. Attention deficit hyperactivity disorder (ADHD) is associated with early onset substance use disorders. J Nerv Ment Dis. 1997;185:475-482.
  30. Burke JD, Loeber R, Lahey BB. Which aspects of ADHD are associated with tobacco use in early adolescence? J Child Psychol Psychiatry. 2001;42:493-502.
  31. Molina BSG, Smith BH, Pelham WE. Interactive effects of attention deficit hyperactivity disorder and conduct disorder on early adolescent substance use. Psychol Addict Behav. 1999;13:348-358.
  32. White HR, Xie M, Thompson W, Loeber R, Stouthamer-Loeber M. Psychopathology as a predictor of adolescent drug use trajectories. Psychol Addict Behav. 2001;15:210-218.
  33. Biederman J, Wilens T, Mick E, et al. Is ADHD a risk factor for psychoactive substance use disorders? Findings from a four-year prospective follow-up study. J Am Acad Child Adolesc Psychiatry. 1997;36:21-29.
  34. Molina BS, Pelham WE Jr. Childhood predictors of adolescent substance use in a longitudinal study of children with ADHD. J Abnorm Psychol. 2003;112:497-507.
  35. Satterfield JH, Hoppe CM, Schell AM. A prospective study of delinquency in 110 adolescent boys with attention deficit disorder and 88 normal adolescent boys. Am J Psychiatry. 1982;139:795-798.
  36. Lambert NM, Hartsough CS. Prospective study of tobacco smoking and substance dependencies among samples of ADHD and non-ADHD participants. J Learn Disabil. 1998;31:533-544.
  37. Milberger S, Biederman J, Faraone SV, Chen L, Jones J. ADHD is associated with early initiation of cigarette smoking in children and adolescents. J Am Acad Child Adolesc Psychiatry. 1997;36:37-44.
  38. Tercyak KP, Lerman C, Audrain J. Association of attention-deficit/hyperactivity disorder symptoms with levels of cigarette smoking in a community sample of adolescents. J Am Acad Child Adolesc Psychiatry. 2002;41:799-805.
  39. Rohde LA. Is there a need to reformulate attention deficit hyperactivity disorder criteria in future nosologic classifications? Child Adolesc Psychiatr Clin N Am. 2008;17:405-420.
  40. Glass K, Flory K. Why does ADHD confer risk for cigarette smoking? A review of psychosocial mechanisms. Clin Child Fam Psychol Rev. 2010;13:291-313.
  41. McClernon FJ, Kollins SH. ADHD and smoking: from genes to brain to behavior. Ann N Y Acad Sci. 2008;1141:131-147.
  42. Cuffe SP, McKeown RE, Jackson KL, Addy CL, Abramson R, Garrison CZ. Prevalence of attention-deficit/hyperactivity disorder in a community sample of older adolescents. J Am Acad Child Adolesc Psychiatry. 2001;40:1037-1044.
  43. Lewinsohn PM, Rohde P, Seeley JR. Adolescent psychopathology: III. The clinical consequences of comorbidity. J Am Acad Child Adolesc Psychiatry. 1995;34:510-519.
  44. Chronis AM, Jones HA, Raggi VL. Evidence-based psychosocial treatments for children and adolescents with attention-deficit/hyperactivity disorder. Clin Psychol Rev. 2006;26:486-502.
  45. Carlson GA. Child and adolescent mania–diagnostic considerations. J Child Psychol Psychiatry. 1990;31:331-341.
  46. Wozniak J, Biederman J, Kiely K, et al. Mania-like symptoms suggestive of childhood-onset bipolar disorder in clinically referred children. J Am Acad Child Adolesc Psychiatry. 1995;34:867-876.
  47. Wozniak J, Biederman J. Prepubertal mania exists (and co-exists with ADHD). The ADHD Report. 1995;2:5-6.
  48. Biederman J, Faraone SV, Mick E, et al. Clinical correlates of ADHD in females: findings from a large group of girls ascertained from pediatric and psychiatric referral sources. J Am Acad Child Adolesc Psychiatry. 1999;38:966-975.
  49. Biederman J, Newcorn J, Sprich S. Comorbidity of attention deficit hyperactivity disorder with conduct, depressive, anxiety, and other disorders. Am J Psychiatry. 1991;148:564-577.
  50. Pliszka SR. Comorbidity of attention-deficit hyperactivity disorder and overanxious disorder. J Am Acad Child Adolesc Psychiatry. 1992;31:197-203.
  51. Tannock R. Attention-deficit/hyperactivity disorder with anxious disorders. In: Brown TE, ed. Attention Deficit Disorders and Comorbidities in Children, Adolescents, and Adults. Washington, DC: American Psychiatric Association; 2000.
  52. Anastopoulos AD, Shelton TL. Assessing Attention-Deficit/Hyperactivity Disorder. New York, NY: Kluwer Academic/Plenum Publishers; 2001.
  53. Clarke SD, Kohn MR, Hermens DF, et al. Distinguishing symptom profiles in adolescent ADHD using an objective cognitive test battery. Int J Adolesc Med Health. 2007;19:355-367.
  54. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision. Washington, DC: American Psychiatric Association; 2000.
  55. Fried P, Watkinson B, James D, Gray R. Current and former marijuana use: preliminary findings of a longitudinal study of effects on IQ in young adults. CMAJ. 2002;166:887-891.
  56. Kempel P, Lampe K, Parnefjord R, Hennig J, Kunert HJ. Auditory-evoked potentials and selective attention: different ways of information processing in cannabis users and controls. Neuropsychobiology. 2003;48:95-101.
  57. Parrott A. Cognitive deficits and cognitive normality in recreational cannabis and ecstasy/MDMA users. Hum Psychopharmacol. 2003;18:89-90.
  58. Willoughby MT. Developmental course of ADHD symptomatology during the transition from childhood to adolescence: a review with recommendations. J Child Psychol Psychiatry. 2003;44:88-106.
  59. Sibley MH, Pelham WE, Molina BS, et al. Inconsistent self-report of delinquency by adolescents and young adults with ADHD. J Abnorm Child Psychol. 2010;38:645-656.
  60. Hoza B, Pelham WEJ, Dobbs J, Owens JS, Pillow DR. Do boys with attention-deficit/hyperactivity disorder have positive illusory self-concepts? J Abnorm Psychol. 2002;111:268-278.
  61. Barkley RA. Behavioral inhibition, sustained attention, and executive functions: constructing a unifying theory of ADHD. Psychol Bull. 1997;121:65-94.
  62. Applegate B, Lahey BB, Hart EL, et al. Validity of the age-of-onset criterion for ADHD: a report from the DSM-IV field trials. J Am Acad Child Adolesc Psychiatry. 1997;36:1211-1221.
  63. Barkley RA, Biederman J. Toward a broader definition of the age-of-onset criterion for attention-deficit hyperactivity disorder. J Am Acad Child Adolesc Psychiatry. 1997;36:1204-1210.
  64. Todd RD, Huang H, Henderson CA. Poor utility of the age of onset criterion for DSM-IV attention deficit/hyperactivity disorder: recommendations for DSM-V and ICD-11. J Child Psychol Psychiatry. 2008;49:942-949.
  65. Polanczyk G, Caspi A, Houts R, Kollins SH, Rohde LA, Moffitt TE. Implications of extending the ADHD age-of-onset criterion to age 12: results from a prospectively studied birth cohort. J Am Acad Child Adolesc Psychiatry. 2010;49:210-216.
  66. Smith BH, Waschbusch DA, Willoughby MT, Evans S. The efficacy, safety, and practicality of treatments for adolescents with attention-deficit/hyperactivity disorder (ADHD). Clin Child Fam Psychol Rev. 2000;3:243-267.
  67. Jensen PS, Martin D, Cantwell DP. Comorbidity in ADHD: implications for research, practice, and DSM-V. J Am Acad Child Adolesc Psychiatry. 1997;36:1065-1079.
  68. Barkley RA. Adolescents with attention-deficit/hyperactivity disorder: an overview of empirically based treatments. J Psychiatr Pract. 2004;10:39-56.
  69. Wolraich ML, Wibbelsman CJ, Brown TE, et al. Attention-deficit/hyperactivity disorder among adolescents: a review of the diagnosis, treatment, and clinical implications. Pediatrics. 2005;115:1734-1746.
  70. Ingersoll BD, Goldstein S. Attention Deficit Disorder and Learning Disabilities: Realities, Myths, and Controversial Treatments. New York, NY: Doubleday; 1993.
  71. Vitiello B. Understanding the risk of using medications for attention deficit hyperactivity disorder with respect to physical growth and cardiovascular function. Child Adolesc Psychiatr Clin N Am. 2008;17:459-474.
  72. Hamilton R, Gray C, Bélanger SA, et al. Cardiac risk assessment before the use of stimulant medications in children and youth: A joint position statement by the Canadian Paediatric Society, the Canadian Cardiovascular Society and the Canadian Academy of Child and Adolescent Psychiatry. J Can Acad Child Adolesc Psychiatry. 2009;18:349-355.
  73. Conners CK. Forty years of methylphenidate treatment in attention-deficit/hyperactivity disorder. J Atten Disord. 2002;6 Suppl 1:S17-30.
  74. Connor DF. Stimulants. In: Barkley RA, ed. Attention-Deficit Hyperactivity Disorder: A Handbook for Diagnosis and Treatment. New York, NY: Guilford Press; 2006:608-647.
  75. Biederman J, Spencer T, Wilens T. Evidence-based pharmacotherapy for attention-deficit hyperactivity disorder. Int J Neuropsychopharmacol. 2004;7:77-97.
  76. Evans SW, Pelham WE, Smith BH, et al. Dose-response effects of methylphenidate on ecologically valid measures of academic performance and classroom behavior in adolescents with ADHD. Exp Clin Psychopharmacol. 2001;9:163-175.
  77. Wilens TE, Spencer TJ. The stimulants revisited. Child Adolesc Psychiatr Clin N Am. 2000;9:573-603.
  78. Smith BH, Pelham WE, Gnagy E, Yudell RS. Equivalent effects of stimulant treatment for attention-deficit hyperactivity disorder during childhood and adolescence. J Am Acad Child Adolesc Psychiatry. 1998;37:314-321.
  79. Faraone SV, Biederman J, Roe C. Comparative efficacy of Adderall and methylphenidate in attention-deficit/hyperactivity disorder: a meta-analysis. J Clin Psychopharmacol. 2002;22:468-473.
  80. Schachter HM, Pham B, King J, Langford S, Moher D. How efficacious and safe is short-acting methylphenidate for the treatment of attention-deficit disorder in children and adolescents? A meta-analysis. CMAJ. 2001;165:1475-1488.
  81. Bidwell CL, Dew RE, Kollins SH. Alpha-2 adrenergic receptors and attention-deficit/hyperactivity disorder. Curr Psychiatry Rep. 2010;12:366-373.
  82. Arnsten AF. Toward a new understanding of attention-deficit hyperactivity disorder pathophysiology: an important role for prefrontal cortex dysfunction. CNS Drugs. 2009;23 Suppl 1:33-41.
  83. Scahill L, Chappell PB, Kim YS, et al. A placebo-controlled study of guanfacine in the treatment of children with tic disorders and attention deficit hyperactivity disorder. Am J Psychiatry. 2001;158:1067-1074.
  84. Biederman J, Melmed RD, Patel A, et al. A randomized, double-blind, placebo-controlled study of guanfacine extended release in children and adolescents with attention-deficit/hyperactivity disorder. Pediatrics. 2008;121:e73-84.
  85. Sallee FR, McGough J, Wigal T, Donahue J, Lyne A, Biederman J. Guanfacine extended release in children and adolescents with attention-deficit/hyperactivity disorder: a placebo-controlled trial. J Am Acad Child Adolesc Psychiatry. 2009;48:155-165.
  86. Sallee FR, Lyne A, Wigal T, McGough JJ. Long-term safety and efficacy of guanfacine extended release in children and adolescents with attention-deficit/hyperactivity disorder. J Child Adolesc Psychopharmacol. 2009;19:215-226.
  87. Biederman J, Melmed RD, Patel A, McBurnett K, Donahue J, Lyne A. Long-term, open-label extension study of guanfacine extended release in children and adolescents with ADHD. CNS Spectr. 2008;13:1047-1055.
  88. Jain R, Segal S, Kollins SH, Khayrallah M. Clonidine extended-release tablets for pediatric patients with attention-deficit/hyperactivity disorder. J Am Acad Child Adolesc Psychiatry. 2011;50:171-179.
  89. Kollins SH, Jain R, Brams M, et al. Clonidine extended-release tablets as add-on therapy to psychostimulants in children and adolescents with ADHD. Pediatrics. 2011;127:e1406-1413.
  90. Cheng JY, Chen RY, Ko JS, Ng EM. Efficacy and safety of atomoxetine for attention-deficit/hyperactivity disorder in children and adolescents-meta-analysis and meta-regression analysis. Psychopharmacology (Berl). 2007;194:197-209.
  91. Thomason C, Michelson D. Atomoxetine — treatment of attention deficit hyperactivity disorder: beyond stimulants. Drugs Today (Barc). 2004;40:465-473.
  92. Wilens TE, Newcorn JH, Kratochvil CJ, et al. Long-term atomoxetine treatment in adolescents with attention-deficit/hyperactivity disorder. J Pediatr. 2006;149:112-119.
  93. Hazell PL, Kohn MR, Dickson R, Walton RJ, Granger RE, van Wyk GW. Core ADHD symptom improvement with atomoxetine versus methylphenidate: a direct comparison meta-analysis. J Atten Disord. 2010 Sep 28. [Epub ahead of print]
Predicting the diagnosis of autism spectrum disorder using gene pathway analysis

In Neuroscience, Psychiatry, School Psychology on Thursday, 13 September 2012 at 13:03

Molecular Psychiatry advance online publication 11 September 2012; doi: 10.1038/mp.2012.126

http://www.nature.com/mp/journal/vaop/ncurrent/full/mp2012126a.html

Predicting the diagnosis of autism spectrum disorder using gene pathway analysis
E Skafidas1, R Testa2,3, D Zantomio4, G Chana5, I P Everall5 and C Pantelis2,5

  1. Centre for Neural Engineering, The University of Melbourne, Parkville, VIC, Australia
  2. Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne & Melbourne Health, Parkville, VIC, Australia
  3. Department of Psychology, Monash University, Clayton, VIC, Australia
  4. Department of Haematology, Austin Health, Heidelberg, VIC, Australia
  5. Department of Psychiatry, The University of Melbourne, Parkville, VIC, Australia

Correspondence: Professor C Pantelis, National Neuroscience Facility (NNF), Level 3, 161 Barry Street, Carlton South, VIC 3053, Australia. E-mail: cpant@unimelb.edu.au

Received 6 July 2012; Accepted 9 July 2012
Advance online publication 11 September 2012

Abstract

Diagnosis of autism spectrum disorder (ASD) currently depends on a clinical interview, with no biomarkers available to aid diagnosis. The current investigation interrogated single-nucleotide polymorphisms (SNPs) of individuals with ASD from the Autism Genetic Resource Exchange (AGRE) database. SNPs were mapped to Kyoto Encyclopedia of Genes and Genomes (KEGG)-derived pathways to identify affected cellular processes and develop a diagnostic test. This test was then applied to two independent samples from the Simons Foundation Autism Research Initiative (SFARI) and Wellcome Trust 1958 normal birth cohort (WTBC) for validation. Using AGRE SNP data from a Central European (CEU) cohort, we created a genetic diagnostic classifier consisting of 237 SNPs in 146 genes that correctly predicted ASD diagnosis in 85.6% of CEU cases. This classifier also predicted 84.3% of cases in an ethnically related Tuscan cohort; however, prediction was less accurate (56.4%) in a genetically dissimilar Han Chinese cohort (HAN). Eight SNPs in three genes (KCNMB4, GNAO1, GRM5) had the largest effect in the classifier, with some acting as vulnerability SNPs, whereas others were protective. Prediction accuracy diminished as the number of SNPs analyzed in the model was decreased. Our diagnostic classifier correctly predicted ASD diagnosis with an accuracy of 71.7% in CEU individuals from the SFARI (ASD) and WTBC (controls) validation data sets. In conclusion, we have developed an accurate diagnostic test for a genetically homogeneous group to aid in early detection of ASD. While SNPs differ across ethnic groups, our pathway approach identified cellular processes common to ASD across ethnicities. Our results have wide implications for detection, intervention and prevention of ASD.

Introduction

Autism spectrum disorders (ASDs) are a complex group of sporadic and familial developmental disorders affecting 1 in 150 births1 and characterized by abnormal social interaction, impaired communication and stereotypic behaviors.2 The etiology of ASD is poorly understood; however, a genetic basis is evidenced by the greater than 70% concordance in monozygotic twins and elevated risk in siblings compared with the population.3, 4, 5 The search for genetic loci in ASD, including linkage and genome-wide association screens (GWAS), has identified a number of candidate genes and loci on almost every chromosome,6, 7, 8, 9, 10, 11 with multiple hotspots on several chromosomes (for example, CNTNAP2, NGLNX4, NRXN1, IMMP2L, DOCK4, SEMA5A, SYNGAP1, DLGAP2, SHANK2 and SHANK3),7, 12, 13, 14, 15 and copy number variations.9, 13, 16, 17, 18, 19, 20, 21 However, none of these has provided adequate specificity or accuracy for use in ASD diagnosis. Novel approaches are required22 to examine multiple genetic variants and their additive contribution,19, 23, 24 taking into account genetic differences between ethnicities and distinguishing protective from vulnerability single-nucleotide polymorphisms (SNPs).

The present study interrogated the Autism Genetics Resource Exchange (AGRE)25 SNP data with two aims: (1) to identify groups of SNPs that populate known cellular pathways that may be pathogenic or protective for ASD, and (2) to apply machine learning to identified SNPs to generate a predictive classifier for ASD diagnosis.26 The results were validated in two independent samples: the US Simons Foundation Autism Research Initiative (SFARI) and UK Wellcome Trust 1958 normal birth cohort (WTBC). This novel and strategic approach assessed the contribution of various SNPs within an additive SNP-based predictive test for ASD.

Materials and methods

The University of Melbourne Human Research Ethics Committee approved the study (Approval Numbers 0932503.1, 0932503.2).

Subjects

(i) Index sample: subject data from 2609 probands with ASD (including Autism, Asperger’s or Pervasive Developmental Disorder-not otherwise specified, but excluding RETT syndrome and Fragile X), and 4165 relatives of probands, was available from AGRE (http://www.agre.org); 1862 probands and 2587 first-degree relatives had SNP data from the Illumina 550 platform relevant to analyses (Figure 1a). Diagnosis of ASD was made by a specialist clinician and confirmed using the Autism Diagnostic Interview Revised (ADI-R27). Control training data was obtained from HapMap28 instead of relatives, as the latter may possess SNPs that predispose to ASD and skew analysis (Figures 1a and b).

Figure 1.

(a and b) Flow charts show the subjects used in the analyses. Key: AGRE, Autism Genetic Research Exchange; SFARI, Simons Foundation Autism Research Initiative; WTBC, Wellcome Trust 1958 normal birth cohort; CEU, of Central (Western and Northern) European origin; HAN, of Han Chinese origin; TSI, of Tuscan Italian origin; For panels 1a and b: ‘red boxes’—samples used in developing the predictive algorithm; ‘blue boxes’—samples used to investigate different ethnic groups; ‘green boxes’—validation sets; ‘light green boxes’—relatives assessed, including parents and unaffected siblings. Numbers in brackets represent numbers of males/females.


(ii) Independent validation samples: 737 probands with ASD (ADI-R diagnosed) derived from SFARI; 2930 control subjects from WTBC (Figure 1b).

As SNP incidence rates vary according to ancestral heritage, HapMap data (Phase 3, NCBI build 36) were utilized to allocate individuals to their closest ethnicity. Individuals of mixed ethnicity were excluded; HapMap data include 1 403 896 SNPs from 11 ethnicities. Any SNPs not included in the AGRE data measured on the Illumina 550 platform were discarded, resulting in 407 420 SNPs. Mitochondrial SNPs reported in AGRE but not available in HapMap were excluded. The 30 most prevalent (>95%) SNPs within each ethnicity were identified, and each ASD individual was assigned to the group with which they shared the highest number of ethnically specific SNPs. HapMap groups were determined to be appropriate for analysis, as prevalence rates of the 30 SNPs relevant to each ethnicity were similar for each AGRE group assigned to that ethnicity (P<0.05).
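The ethnicity-assignment step described above can be sketched as follows; this is a minimal illustration, and the marker sets shown are hypothetical toys rather than the actual 30-SNP HapMap marker lists:

```python
def assign_ethnicity(individual_snps, ethnic_marker_snps):
    """Assign an individual to the ethnicity whose marker SNPs
    (the 30 most prevalent, >95%, SNPs in each HapMap group)
    they share the most."""
    shared = {eth: len(individual_snps & markers)
              for eth, markers in ethnic_marker_snps.items()}
    # Pick the group with the highest number of shared marker SNPs.
    return max(shared, key=shared.get)

# Hypothetical toy marker sets (real sets have 30 SNPs per ethnicity).
markers = {"CEU": {"rs1", "rs2", "rs3"}, "HAN": {"rs4", "rs5", "rs6"}}
print(assign_ethnicity({"rs1", "rs2", "rs9"}, markers))  # prints CEU
```

Individuals tying across groups or matching no group would, per the paper's exclusion of mixed-ethnicity subjects, be dropped rather than force-assigned.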

Gene set enrichment analysis (GSEA)

Pathway analysis was selected because it depicts how groups of genes may contribute to ASD etiology (Supplementary S1) and mitigates the statistical problem of conducting the large number of multiple comparisons required in GWAS studies. The current pathway analysis differs from previous ASD analyses in three ways: (1) the cohort was divided into ethnically homogeneous samples with similar SNP rates; (2) both protective and contributory SNPs were accounted for in the analysis; and (3) the pathway test statistic was calculated using permutation analysis. Although this is computationally expensive, benefits include taking account of rare alleles, small sample sizes and familial effects. It also relaxes the Hardy–Weinberg equilibrium assumption that allele and genotype frequencies remain constant within a population over generations. Pathways were obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG), and SNP-to-gene data were obtained from the National Center for Biotechnology Information (NCBI). Intronic and exonic SNPs were included. AGRE individuals most closely matching the genetics of Utah residents of Western and Northern European (CEU), Tuscan Italian (TSI) and Han Chinese origin were used in the analysis. CEU individuals (975 affected individuals and 165 controls) were chosen as the index sample, representing the largest group affected in AGRE (Figure 1a). The CEU and Han Chinese cohorts had 116 753 SNPs differing in allelic prevalence at P<1 × 10−5, whereas the CEU and TSI had 627. The pathway test statistic was calculated for CEU and Han individuals using a ‘set-based test’ in the PLINK29 software package, with P=0.05, r2=0.5 and permutations set to at least 2 000 000. The significance threshold was set conservatively, taking into account the number of pathways examined (200): the Bonferroni-corrected level of 0.05/200 = 2.5 × 10−4 was tightened to P<1 × 10−5 (see Supplementary S1).
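The permutation idea behind the set-based test can be illustrated as follows. This is a generic label-permutation sketch with a hypothetical pathway statistic (absolute difference in group means), not a reimplementation of PLINK's set-based test:

```python
import random

def permutation_pvalue(stat_fn, cases, controls, n_perm=2000, seed=1):
    """Empirical p-value for a pathway statistic by case/control label
    permutation: shuffle the pooled sample, re-split it into groups of
    the original sizes, and count how often the permuted statistic
    meets or exceeds the observed one."""
    observed = stat_fn(cases, controls)
    pooled = list(cases) + list(controls)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat_fn(pooled[:len(cases)], pooled[len(cases):]) >= observed:
            hits += 1
    # The +1 correction keeps the estimate away from an impossible p = 0.
    return (hits + 1) / (n_perm + 1)

# Hypothetical pathway statistic: absolute difference in group means.
def mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

p = permutation_pvalue(mean_diff, cases=[5.0] * 10, controls=[0.0] * 10,
                       n_perm=999)
print(p < 0.05)  # True: clearly separated groups give a small p
```

Because the null distribution is built from the observed genotypes themselves, this scheme makes no Hardy–Weinberg assumption, which is the relaxation the paragraph above refers to.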

Predicting ASD phenotype based upon candidate SNPs

For each individual, a 775-dimensional vector was constructed, corresponding to the 775 unique SNPs identified in the GSEA. To examine whether SNPs could predict an individual’s clinical status (ASD versus non-ASD), two-tailed unpaired t-tests were used to identify which of the 775 SNPs had statistically significant differences in mean SNP value (P<0.005). This significance level provided low classification error while maintaining acceptable variance in estimation of regression coefficients for each SNP’s contribution status, and provided the set of SNPs that maximized the classifier output between the populations (Figure 2 and Supplementary S2). This resulted in 237 SNPs selected for regression analysis. Each dimension of the vector was assigned a value of 0, 1 or 3, depending on whether the individual carried two copies of the dominant allele, was heterozygous, or carried two copies of the minor allele. The ‘0, 1, 3’ weighting provided greater classification accuracy than ‘0, 1, 2’. Such superadditive models have been used previously to understand genetic interactions.30 The formula for the classifier and classifier performance are presented in Supplementary S3.
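The 0/1/3 genotype encoding can be sketched like this; the rs-to-allele assignments below are hypothetical, chosen only for illustration:

```python
def encode_genotype(genotype, dominant, minor):
    """Map a biallelic genotype to the paper's superadditive 0/1/3 code:
    0 = two copies of the dominant allele, 1 = heterozygous,
    3 = two copies of the minor allele."""
    n_minor = sum(1 for allele in genotype if allele == minor)
    return {0: 0, 1: 1, 2: 3}[n_minor]

def encode_individual(genotypes, alleles):
    """Build the SNP vector for one individual.
    genotypes: dict snp_id -> (allele, allele);
    alleles:   dict snp_id -> (dominant, minor)."""
    return [encode_genotype(genotypes[snp], *alleles[snp])
            for snp in sorted(alleles)]

# Hypothetical allele assignments for two SNPs named in the paper.
alleles = {"rs876619": ("C", "T"), "rs8053370": ("G", "A")}
person = {"rs876619": ("C", "T"), "rs8053370": ("A", "A")}
print(encode_individual(person, alleles))  # [3, 1]
```

The jump from 1 to 3 (rather than 2) for minor-allele homozygotes is what makes the weighting superadditive: a second copy of the minor allele contributes more than the first.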

Figure 2.

Cumulative coefficient estimation error and percentage classification error as a function of P-value; P=0.005 provides good trade-off between classification performance and cumulative regression coefficient error.


The CEU sample was divided into a training set (732 ASD individuals and 123 controls); the remainder comprised the validation set. An affected individual was given a value of 10 and an unaffected individual a value of −10, providing a sufficiently large separation to maximize the distance between means (see Supplementary S3). Least squares regression analysis of the training set determined coefficients such that the sum of each coefficient multiplied by its corresponding SNP value mapped an individual’s genotype to clinical status. A Kolmogorov–Smirnov goodness-of-fit test assessed the distribution of classifier scores in each group; at P=0.05, the distributions were accepted as normal, allowing determination of positive and negative predictive values (see ROC, Supplementary S4). The Durbin–Watson test was used to investigate the residual errors of the training set to determine whether further correlations existed; at P=0.05, the residuals were uncorrelated. Regression coefficients were used to assess each SNP’s individual contribution to clinical status.
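A minimal sketch of the training step, assuming the ±10 labels and 0/1/3 codes described above. The paper fits 237 SNPs jointly by least squares; this dependency-free toy fits a single SNP with the closed-form simple-regression solution, which is the same criterion in one dimension:

```python
def fit_simple(snp_values, labels):
    """Least-squares fit of labels (+10 ASD, -10 control) on a single
    encoded SNP (0/1/3), returning (slope, intercept)."""
    n = len(snp_values)
    mx = sum(snp_values) / n
    my = sum(labels) / n
    sxx = sum((x - mx) ** 2 for x in snp_values)
    sxy = sum((x - mx) * (y - my) for x, y in zip(snp_values, labels))
    slope = sxy / sxx
    return slope, my - slope * mx

def classify(score, threshold=0.0):
    """Call ASD when the classifier score (slope * SNP code + intercept)
    exceeds the decision threshold."""
    return "ASD" if score > threshold else "control"

# Hypothetical training data: the risk genotype is enriched in cases.
snps = [3, 3, 1, 0, 0, 1]
labels = [10, 10, 10, -10, -10, -10]
slope, intercept = fit_simple(snps, labels)
print(classify(slope * 3 + intercept))  # ASD
print(classify(slope * 0 + intercept))  # control
```

In the full model the score is the sum of 237 such coefficient-times-code terms, and the threshold is set from the fitted score distributions rather than at zero.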

AGRE validation

After analyzing the CEU training cohort, three cohorts were used for validation: 285 CEU individuals (243 probands, 42 controls); a genetically similar TSI sample (65 patients, 88 controls); and a genetically dissimilar Han Chinese population (33 patients, 169 controls). To illustrate overlap in SNPs in first-degree relatives of individuals with ASD (n=1512), we mapped the SNPs of parents (n=1219; 581 male) and unaffected siblings (n=293; 98 male) of CEU origin who did not meet criteria for ASD. Finally, the predictive model was modified to test its accuracy using only the 10, 30 and 60 SNPs with the greatest weightings.

Independent validation

Samples included 507 CEU and 18 TSI subjects with ASD from SFARI, and 2557 CEU and 63 TSI from WTBC (Figure 1b).

Results

Identification of affected pathways

Analyses focused on 975 CEU ASD individuals, in which 13 KEGG pathways were significantly affected (P<1 × 10−5). The pathway analysis identified 775 significant SNPs perturbed in ASD. A number of the pathways were populated by the same genes and had inter-related functions (Table 1).

Table 1 – Statistically significant pathways for the CEU and Han Chinese.


The most significant pathways were: calcium signaling, gap junction, long-term depression (LTD), long-term potentiation (LTP), olfactory transduction and mitogen-activated protein kinase signaling. GSEA on the genetically distinct Han Chinese identified six pathways that overlapped with the 13 pathways identified in the CEU cohort (estimated probability of this overlap occurring by chance, P=0.05), including: purine metabolism, calcium signaling, phosphatidylinositol signaling, gap junction, long-term potentiation and long-term depression. Among these pathways, the SNPs statistically significant in both populations were rs3790095 within GNAO1, rs1869901 within PLCB2, rs6806529 within ADCY5 and rs9313203 in ADCY2.

Diagnostic prediction of ASD

From the 775 SNPs identified within the CEU cohort, accurate genetic classification of ASD versus non-ASD was possible using the 237 SNPs determined to be highly significant (P<0.005). Figure 3a shows the distribution of ASD and non-ASD individuals based on genetic classification. An individual’s clinical status was set to ASD if their score exceeded the threshold of 3.93, the intersection point of the two fitted normal curves. The theoretical classification error was 8.55%, and the positive (ASD) and negative (control) predictive values were 96.72% and 94.74%, respectively. Classification accuracy was 85.6% for the 285 CEU AGRE validation individuals and 84.3% for the TSI, while accuracy for the Han Chinese population was only 56.4%. Using the same classifier with the identical set of SNPs, accuracy of prediction of ASD in the independent data sets was 71.6%; positive and negative predictive accuracies were 70.8% and 71.8%, respectively.
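The threshold of 3.93 is the point where the two fitted normal score densities cross. A generic sketch of computing such a crossing point follows; the means and standard deviations used in the examples are hypothetical, not the paper's fitted parameters:

```python
import math

def gaussian_intersection(mu1, sd1, mu2, sd2):
    """Return the point(s) where two normal densities are equal, i.e.
    the decision threshold(s) between two fitted class distributions.
    Setting the log-densities equal yields a quadratic in x."""
    if math.isclose(sd1, sd2):
        return [(mu1 + mu2) / 2.0]  # equal variances: midpoint of means
    a = 1 / (2 * sd1 ** 2) - 1 / (2 * sd2 ** 2)
    b = mu2 / sd2 ** 2 - mu1 / sd1 ** 2
    c = (mu1 ** 2 / (2 * sd1 ** 2) - mu2 ** 2 / (2 * sd2 ** 2)
         + math.log(sd1 / sd2))
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# Hypothetical fitted score distributions for controls and cases.
print(gaussian_intersection(0.0, 1.0, 8.0, 1.0))  # [4.0]
```

With unequal variances there are two crossing points; only the one lying between the two means is useful as a classification threshold, and scores above it are called ASD.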

Figure 3.

(a) Genetic-based classification of CEU population (AGRE and Controls) for ASD and non-ASD individuals, showing Gaussian approximation of distribution of individuals. As both the mapped ASD and control populations were well approximated by normal distributions, the asymptotic Test Positive Predictive Value (PPV) and Negative Predictive Value (NPV) was determined. For individuals with CEU ancestry, the PPV and NPV were 96.72% and 94.74%, respectively. (Note the test was substantially less predictive on individuals with different ancestry, that is, Han Chinese). (b) Genetic-based classification of CEU population, including first-degree relatives (parents and siblings of ASD children). Note that the distribution of relatives of ASD children maps between the ASD and the control groups, with no difference found between mothers and fathers (see Supplementary material S5). Key: ASD, autism spectrum disorder; relatives, first-degree relatives (parents and siblings); Siblings, siblings of ASD cases not meeting criteria for ASD; Autism Classifier Score, scores for each individual derived from the predictive algorithm, with greater values representing greater risk for autism.


Relatives’ classifier scores were compared with those of affected and unaffected individuals. Figure 3b shows that relatives (parents and unaffected siblings combined) fall between the two distributions, with a mean score of 2.68 (s.d.=2.27). The percentage overlap between the relatives and affected individuals was 30.4%. The mean scores of mothers and fathers did not differ (at P=0.05), at 2.83 (s.d.=2.17) and 2.93 (s.d.=2.34), respectively (see Supplementary S5), whereas unaffected siblings (not meeting diagnostic criteria for ASD) fell between parents and cases (mean=4.74, s.d.=3.80). In testing the robustness of the predictive model, using fewer SNPs monotonically decreased accuracy in the AGRE-CEU analyses to 72% for 60 SNPs, 58% for 30 SNPs and 53.5% for 10 SNPs, with the distribution of parents becoming indistinguishable from controls.

Of the 237 SNPs within our classifier, the presence of some contributed to vulnerability to ASD (Table 2a), whereas others were protective (Table 2b). Eight SNPs in three genes, GRM5, GNAO1 and KCNMB4, were highly discriminatory in determining an individual’s classification as ASD or non-ASD. For KCNMB4, rs968122 contributed strongly to a clinical diagnosis of ASD, whereas rs12317962 was protective; for GNAO1, rs876619 was contributory, whereas rs8053370 was protective; for GRM5, rs11020772 was contributory, whereas rs905646 and rs6483362 were protective.

Table 2 – List of 15 most contributory (Table 2a) and 15 most protective (Table 2b) SNPs for ASD diagnosis in the CEU Cohort.


Discussion

Using pathway analysis, we have generated a genetic diagnostic classifier based on a linear function of 237 SNPs that accurately distinguished ASD cases from controls within a CEU cohort. This diagnostic classifier correctly identified ASD individuals with accuracies of 85.6% and 84.3% in the unseen CEU and TSI cohorts, respectively. Our classifier was then able to predict ASD group membership in subjects derived from two independent data sets with an accuracy of 71.6%, greatly strengthening our original finding. However, the classifier was sub-optimal at predicting ASD in the genetically distinct Han Chinese cohort, which may be explained by differences in allelic prevalence. Although only 627 SNPs significantly differed between the TSI and CEU cohorts, this figure increased to 116 753 SNPs between the CEU and Han Chinese. It is likely that an additional set of SNPs may be predictive of ASD diagnosis in Han Chinese and that the methods used for our classifier could be applicable to other ethnicities. Interestingly, parents and siblings of ASD-CEU individuals fell as distinct groups between the ASD and control groups, reinforcing a genetic basis for ASD; neurobehavioral abnormalities reported in parents of ASD individuals also support our findings.31 When we altered the classifier by reducing the number of SNPs, not only did the predictive accuracy suffer but the relatives also merged into the control group. This suggests that the use of relatives as controls in SNP GWAS studies is only valid when examining small numbers of SNPs and may not be appropriate when assessing genetic interactions.

There was considerable overlap in the pathways implicated in both the CEU and Han Chinese populations. The analysis demonstrated that SNPs in the Wnt signaling pathway contributed to a diagnosis of ASD in the CEU cohort, but not in the Han Chinese population. Although of interest, a firm conclusion regarding these differences and similarities will require replication in a larger Han Chinese population. Completion of diagnostic classification studies for other ethnic groups will invariably aid in identification of common pathological mechanisms for ASD.

The SNPs contributing most to diagnosis in our classifier corresponded to the genes KCNMB4, GNAO1, GRM5, INPP5D and ADCY8. The three SNPs that most markedly skewed an individual towards ASD were related to the genes coding for KCNMB4, GNAO1 and GRM5. Homozygosity for the KCNMB4 SNP carries a higher risk of ASD than the SNPs related to GNAO1 and GRM5. By contrast, a number of SNPs protected against ASD, including rs8053370 (GNAO1), rs12317962 (KCNMB4), rs6483362 and rs905646 (GRM5). KCNMB4 encodes a potassium channel subunit that is important in neuronal excitability and has been implicated in epilepsy and dyskinesia.32, 33 It is highly expressed within the fusiform gyrus, as well as in superior temporal, cingulate and orbitofrontal regions (Allen Human Brain Atlas, http://human.brain-map.org/), areas implicated in the face identification and emotional face processing deficits seen in ASD.34 The GNAO1 protein belongs to the Gα(o) subgroup of G proteins, which couple with many neurotransmitter receptors. Gα(o) knockout mice exhibit ‘autism-like’ features, including impaired social interaction, poor motor skills, anxiety and stereotypic turning behavior.35 GNAO1 has also been shown to have a role in nervous system development, co-localizing with GRIN1 at neuronal dendrites and synapses36 and interacting with GAP-43 at neuronal growth cones,37 with increased levels of GAP-43 demonstrated in the white matter adjacent to the anterior cingulate cortex in brains from ASD patients.38

In our findings, GRM5 SNPs have both a contributory (rs11020772) and protective (rs905646, rs6483362) effect on ASD. GRM5 is highly expressed in hippocampus, inferior temporal gyrus, inferior frontal gyrus and putamen (Allen Human Brain Atlas), regions implicated in ASD brain MRI studies.39 GRM5 has a role in synaptic plasticity, modulation of synaptic excitation, innate immune function and microglial activation.40, 41, 42, 43 GRM5-positive allosteric modulators can reverse the negative behavioral effects of NMDA receptor antagonists, including stereotypies, sensory motor gating deficits and deficits in working, spatial and recognition memory,44 features described in ASD.45, 46 With regard to GRM5’s involvement with neuroimmune function, this receptor is expressed on microglia,40, 47 with microglial activation demonstrated by us and others in frontal cortex in ASD.48, 49

Further, as GRM5 signaling is mediated through G-protein-coupled receptors, an interaction between GNAO1 and GRM5 is plausible. Genes such as PLCB2, ADCY2, ADCY5 and ADCY8 encode proteins involved in G-protein signaling. Given this association, GRM5 may represent a pivotal etiological target for ASD; however, further work is needed to demonstrate these potential interactions and their contribution to glutamatergic dysregulation in ASD.

In conclusion, within genetically homogeneous populations, our predictive genetic classifier obtained a high level of diagnostic accuracy. This demonstrates that genetic biomarkers can correctly classify ASD from non-ASD individuals. Further, our approach of identifying groups of SNPs that populate known KEGG pathways has identified potential cellular processes that are perturbed in ASD, which are common across ethnic groups. Finally, we identified a small number of genes with various SNPs of influential weighting that strongly determined whether a subject fell within the control or ASD group. Overall these findings indicate that a SNP-based test may allow for early identification of ASD. Further studies to validate the specificity and sensitivity of this model within other ethnic groups are required. A predictive classifier as described here may provide a tool for screening at birth or during infancy to provide an index of ‘at-risk status’, including probability estimates of ASD-likelihood. Identifying clinical and brain-based developmental trajectories within such a group would provide the opportunity to investigate potential psychological, social and/or pharmacological interventions to prevent or ameliorate the disorder. A similar approach has been adopted in psychosis research, which has improved our understanding of the disorder and prognosis for affected individuals.50

Conflict of interest

The authors declare no conflict of interest.

References

  1. Autism and Developmental Disabilities Monitoring Network Surveillance Year 2002 Principal Investigators. Prevalence of autism spectrum disorders—autism and developmental disabilities monitoring network, 14 sites, United States, 2002. MMWR Surveill Summ 2007; 56: 12–28.
  2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. Revised 4th edn. Washington, DC, 2000.
  3. Bailey A, Le Couteur A, Gottesman I, Bolton P, Simonoff E, Yuzda E et al. Autism as a strongly genetic disorder: evidence from a British twin study. Psychol Med 1995; 25: 63–77.
  4. Zhao X, Leotta A, Kustanovich V, Lajonchere C, Geschwind DH, Law K et al. A unified genetic theory for sporadic and inherited autism. Proc Natl Acad Sci USA 2007; 104: 12831–12836.
  5. Cichon S, Craddock N, Daly M, Faraone SV, Gejman PV, Kelsoe J et al. Genomewide association studies: history, rationale, and prospects for psychiatric disorders. Am J Psychiatry 2009; 166: 540–556.
  6. Alarcon M, Abrahams BS, Stone JL, Duvall JA, Perederiy JV, Bomar JM et al. Linkage, association, and gene-expression analyses identify CNTNAP2 as an autism-susceptibility gene. Am J Hum Genet 2008; 82: 150–159.
  7. Weiss LA, Arking DE, Daly MJ, Chakravarti A. A genome-wide linkage and association scan reveals novel loci for autism. Nature 2009; 461: 802–808.
  8. Sykes NH, Toma C, Wilson N, Volpi EV, Sousa I, Pagnamenta AT et al. Copy number variation and association analysis of SHANK3 as a candidate gene for autism in the IMGSAC collection. Eur J Hum Genet 2009; 17: 1347–1353.
  9. Maestrini E, Pagnamenta AT, Lamb JA, Bacchelli E, Sykes NH, Sousa I et al. High-density SNP association study and copy number variation analysis of the AUTS1 and AUTS5 loci implicate the IMMP2L-DOCK4 gene region in autism susceptibility. Mol Psychiatry 2010; 15: 954–968.
  10. Pinto D, Pagnamenta AT, Klei L, Anney R, Merico D, Regan R et al. Functional impact of global rare copy number variation in autism spectrum disorders. Nature 2010; 466: 368–372.
  11. State MW. Another piece of the autism puzzle. Nat Genet 2010; 42: 478–479.
  12. Klauck SM. Genetics of autism spectrum disorder. Eur J Hum Genet 2006; 14: 714–720.
  13. Szatmari P, Paterson AD, Zwaigenbaum L, Roberts W, Brian J, Liu XQ et al. Mapping autism risk loci using genetic linkage and chromosomal rearrangements. Nat Genet 2007; 39: 319–328.
  14. Weiss LA, Shen Y, Korn JM, Arking DE, Miller DT, Fossdal R et al. Association between microdeletion and microduplication at 16p11.2 and autism. N Engl J Med 2008; 358: 667–675.
  15. Sousa I, Clark TG, Toma C, Kobayashi K, Choma M, Holt R et al. MET and autism susceptibility: family and case-control studies. Eur J Hum Genet 2009; 17: 749–758.
  16. Sebat J, Lakshmi B, Malhotra D, Troge J, Lese-Martin C, Walsh T et al. Strong association of de novo copy number mutations with autism. Science 2007; 316: 445–449.
  17. Kusenda M, Sebat J. The role of rare structural variants in the genetics of autism spectrum disorders. Cytogenet Genome Res 2008; 123: 36–43.
  18. Losh M, Sullivan PF, Trembath D, Piven J. Current developments in the genetics of autism: from phenome to genome. J Neuropathol Exp Neurol 2008; 67: 829–837.
  19. Buizer-Voskamp JE, Franke L, Staal WG, van Daalen E, Kemner C, Ophoff RA et al. Systematic genotype-phenotype analysis of autism susceptibility loci implicates additional symptoms to co-occur with autism. Eur J Hum Genet 2010; 18: 588–595.
  20. Gilman SR, Iossifov I, Levy D, Ronemus M, Wigler M, Vitkup D. Rare de novo variants associated with autism implicate a large functional network of genes involved in formation and function of synapses. Neuron 2011; 70: 898–907.
  21. Sanders SJ, Ercan-Sencicek AG, Hus V, Luo R, Murtha MT, Moreno-De-Luca D et al. Multiple recurrent de novo CNVs, including duplications of the 7q11.23 Williams syndrome region, are strongly associated with autism. Neuron 2011; 70: 863–885. | Article | PubMed | ISI | CAS |
  22. Geschwind DH. Autism: many genes, common pathways? Cell 2008; 135: 391–395. | Article | PubMed | ISI | CAS |
  23. Freitag CM. The genetics of autistic disorders and its clinical relevance: a review of the literature. Mol Psychiatry 2007; 12: 2–22. | Article | PubMed | ISI | CAS |
  24. Neale BM, Kou Y, Liu L, Ma’ayan A, Samocha KE, Sabo A et al. Patterns and rates of exonic de novo mutations in autism spectrum disorders. Nature 2012; 485: 242–245. | Article | PubMed | CAS |
  25. Geschwind DH, Sowinski J, Lord C, Iversen P, Shestack J, Jones P et al. The autism genetic resource exchange: a resource for the study of autism and related neuropsychiatric conditions. Am J Hum Genet 2001; 69: 463–466. | Article | PubMed | ISI | CAS |
  26. Guzzetta G, Jurman G, Furlanello C. A machine learning pipeline for quantitative phenotype prediction from genotype data. BMC Bioinformatics 2010; 11(Suppl 8): S3. | Article | PubMed |
  27. Lord C, Rutter M, Le Couteur A. Autism Diagnostic interview-revised: a revised version of a diagnostic interview for caregivers of individuals with possible pervasive developmental disorders. J Autism Dev Disord 1994; 24: 659–685. | Article | PubMed | ISI | CAS |
  28. International HapMap Consortium. The International HapMap Project. Nature 2003; 426: 789–796. | Article | PubMed | ISI | CAS |
  29. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet 2007; 81: 559–575. | Article | PubMed | ISI | CAS |
  30. Perez-Perez JM, Candela H, Micol JL. Understanding synergy in genetic interactions. Trends Genet 2009; 25: 368–376. | Article | PubMed | CAS |
  31. Mosconi MW, Kay M, D’Cruz AM, Guter S, Kapur K, Macmillan C et al. Neurobehavioral abnormalities in first-degree relatives of individuals with autism. Arch Gen Psychiatry 2010; 67: 830–840. | Article | PubMed |
  32. Cavalleri GL, Weale ME, Shianna KV, Singh R, Lynch JM, Grinton B et al. Multicentre search for genetic susceptibility loci in sporadic epilepsy syndrome and seizure types: a case-control study. Lancet Neurol 2007; 6: 970–980. | Article | PubMed | ISI | CAS |
  33. Lee US, Cui J. {beta} subunit-specific modulations of BK channel function by a mutation associated with epilepsy and dyskinesia. J Physiol 2009; 587: 1481–1498. | Article | PubMed |
  34. Monk CS, Weng SJ, Wiggins JL, Kurapati N, Louro HM, Carrasco M et al. Neural circuitry of emotional face processing in autism spectrum disorders. J Psychiatry Neurosci 2010; 35: 105–114. | Article | PubMed |
  35. Jiang M, Gold MS, Boulay G, Spicher K, Peyton M, Brabet P et al. Multiple neurological abnormalities in mice deficient in the G protein Go. Proc Natl Acad Sci USA 1998; 95: 3269–3274. | Article | PubMed | CAS |
  36. Masuho I, Mototani Y, Sahara Y, Asami J, Nakamura S, Kozasa T et al. Dynamic expression patterns of G protein-regulated inducer of neurite outgrowth 1 (GRIN1) and its colocalization with Galphao implicate significant roles of Galphao-GRIN1 signaling in nervous system. Dev Dyn 2008; 237: 2415–2429. | Article | PubMed |
  37. Yang H, Wan L, Song F, Wang M, Huang Y. Palmitoylation modification of Galpha(o) depresses its susceptibility to GAP-43 activation. Int J Biochem Cell Biol 2009; 41: 1495–1501. | Article | PubMed |
  38. Zikopoulos B, Barbas H. Changes in prefrontal axons may disrupt the network in autism. J Neurosci 2010; 30: 14595–14609. | Article | PubMed | ISI |