Friday, August 14, 2009

Terrorism-Related Fear and Avoidance Behavior in a Multiethnic Urban Population

One public health definition of terrorism proposes that the effects of terrorism, "real or threatened," may include "adverse health effects in those immediately affected and their community, ranging from a loss of well-being or security to injury, illness, or death."1 The events of September 11, 2001, influenced well-being and security beyond the regions directly attacked.2–4 Many people throughout the United States felt they were at risk from terrorism. Risk perceptions, along with antiterrorism programs, laws, and policies (e.g., airport security regulations, visa restrictions, and warrantless surveillance), affected Americans' lifestyles and behaviors. In the months following the attacks, 40% to 50% of US adults still feared for their safety4,5 and 11% reported changed behaviors such as avoiding public gatherings.6,7

Risk perception theories and research posit that individuals assess risks based on a balance of many factors, including the probability of a hazard or risk personally affecting them, the severity of the personal consequences of risk exposure, feelings of personal control, the perceived inequality of risk distribution across society, and trust in the institutions managing risks.8,9 For instance, a national survey conducted 2 months after the attacks of September 11 found that the distance between one's home and the World Trade Center was inversely correlated with perceptions of terrorism risk among non-Hispanic Whites.9 By contrast, Latinos' and African Americans' judgments of future terrorism risk were not affected by how far they lived from New York City. These results are consistent with findings of lower risk perceptions among politically conservative White males, who feel greater control over their environment and greater trust in the institutions protecting them.10 As noted by Fischhoff, the estimation of personal risk and vulnerability to terrorism may act as a key motivator of behavioral adaptations, including avoidance of usual activities or increased adoption of protective behaviors.11–14 Those who believe they are particularly vulnerable to a risk may be motivated to take risk-reducing action.

Studies document that vulnerable populations, such as the chronically ill, the physically disabled, non-White racial/ethnic minorities, and immigrants, bear a disproportionate burden of harm from natural disasters15–18 and that there are racial/ethnic differences in perceived risks of natural disasters.15 Similarly, research finds specifically that African Americans and Latinos perceive they are at greater risk from terrorism than do non-Latino Whites.9,19 A survey conducted less than a year after September 11, 2001, reported that African Americans were most likely to limit their outside activities and change their mode of transportation in response to fears of terrorism.5 Also, a national survey found that persons with disabilities were more anxious about their personal risk from terrorism than were persons without disabilities, even when equally prepared.20 Another study reported that persons who increased their disaster preparations in response to the possibility of terrorist attacks included African Americans, Latinos, persons with disabilities or household dependents, and non–US-born populations.21 As with health and disasters generally, these populations may experience disparities in the effects of terrorism and terrorism policies, including in their risk perceptions and avoidance behavior. An Israeli survey found that large social groups, including women, had adapted their daily behaviors to minimize the impact of terrorism risks.

FREE DOWNLOAD JOURNAL HERE

Psychiatric Treatment Received by Primary Care Patients With Panic Disorder With and Without Agoraphobia

Panic disorder is fairly common, with a 12-month prevalence rate of 2.7% and a lifetime prevalence rate of 4.7% (1,2). The course of panic disorder tends to be chronic, with high rates of recurrence after remission, particularly for panic disorder with agoraphobia (3–5). Furthermore, individuals with panic disorder experience considerable impairment and disability, including occupational difficulties (6–9), impaired well-being (10–12), and reduced quality of life (9–14). They also have higher rates of health care use, with a greater number of outpatient visits, emergency room visits, and hospitalizations compared with those without the disorder (8,10,15).

Individuals with panic disorder typically present to the primary care setting, with estimates suggesting that as many as 80% of cases first present to primary care (16). Thus, the rate of the disorder is higher in primary care settings, with a reported median prevalence of 4% to 6% (8). Furthermore, the majority of individuals with panic disorder obtain their mental health treatment in the primary care setting (17,18). Despite these findings, research suggests that panic disorder often goes unrecognized (19,20) and is inadequately treated in both primary care (8,21–23) and psychiatric settings (24–26). A number of effective pharmacologic treatments for panic disorder exist, including tricyclic antidepressants (TCAs), selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), and benzodiazepines (27–30). Likewise, psychosocial treatments, namely cognitive-behavioral therapy (30,31) and possibly a specific form of psychoanalytic treatment (32), have been found to be effective. Despite this, estimates suggest that over 40% of individuals with panic disorder go untreated (33). Certain demographic characteristics (for example, gender, education, and race) and clinical variables (for example, comorbid diagnoses) appear to be related to mental health service use in general (34–36). Additionally, there may be other factors that have an impact on service use, such as not perceiving oneself as in need of treatment (37). For individuals with panic disorder who do receive treatment, little is known about the treatment typically received, and no studies have examined whether there are differences in treatment between persons with panic disorder with agoraphobia and those with panic disorder without agoraphobia.

Brook A. Marcks, Ph.D., Risa B. Weisberg, Ph.D., and Martin B. Keller, M.D.
FREE DOWNLOAD JOURNAL HERE

Tuesday, February 10, 2009

THE IMPACT OF CORRUPTION ON DEFORESTATION: A CROSS-COUNTRY EVIDENCE
Cuneyt Koyuncu, Rasim Yilmaz. The Journal of Developing Areas. Nashville: Spring 2

We hypothesized that corruption could contribute to deforestation, and the present study therefore tries to identify such a relation between corruption and deforestation. Using three different corruption indices, we found a statistically significant, strong positive relation between corruption and deforestation for different periods across different countries. This finding remains valid in both univariate and multivariate models. The models also take into account the potential heteroscedasticity problem common in cross-sectional studies and apply a correction where necessary. To the best of our knowledge, this study is the first cross-country study addressing the issue by utilizing all available corruption indices, namely the Corruption Perception Index (CPI), the International Country Risk Guide (ICRG) index, and the Business Intelligence (BI) index. Policies and measures taken towards reducing corruption, therefore, may help to decrease illegal forest activities (e.g., illegal logging and timbering, smuggling of forest products, etc.) and, in turn, the depletion of forests.
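To make the estimation strategy concrete, the sketch below shows how a cross-country regression of deforestation on a corruption index might be run with a heteroscedasticity check and robust standard errors. It is only an illustration: the data are simulated, the variable names (corruption_index, deforestation_rate, and the controls) are hypothetical, and it is not the authors' actual specification or dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Simulated stand-in for a cross-country dataset (hypothetical variables and values).
rng = np.random.default_rng(0)
n = 90
data = pd.DataFrame({
    "corruption_index": rng.uniform(0, 10, n),        # higher = more corrupt (hypothetical scale)
    "log_gdp_per_capita": rng.normal(8.5, 1.0, n),
    "rural_population_share": rng.uniform(10, 80, n),
})
data["deforestation_rate"] = (
    0.15 * data["corruption_index"]
    - 0.05 * data["log_gdp_per_capita"]
    + 0.01 * data["rural_population_share"]
    + rng.normal(0, 0.5, n)
)

# Multivariate OLS: deforestation regressed on corruption plus controls.
X = sm.add_constant(data[["corruption_index", "log_gdp_per_capita", "rural_population_share"]])
y = data["deforestation_rate"]
ols = sm.OLS(y, X).fit()

# White test for heteroscedasticity; if it is detected, report
# heteroscedasticity-robust (HC1) standard errors instead.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(ols.resid, X)
fit = ols if lm_pvalue >= 0.05 else sm.OLS(y, X).fit(cov_type="HC1")
print(f"White test p-value: {lm_pvalue:.3f}")
print(fit.summary())

A univariate version would simply drop the control columns from X before fitting.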

DOWNLOAD HERE

The reliability of marking on a psychology degree
Christopher Dracup. British Journal of Psychology. London: Nov 1997. Vol. 88, Part 4, pg. 691, 18 pgs

The reliability of marking for the final cohort of students to graduate from the psychology degree scheme in place at the University of Northumbria at Newcastle between 1985 and 1993 was investigated. Inter-marker correlations for some course components were low, but the correlation between students' overall first marks and their overall second marks was .93, a value in keeping with those typically reported for national school examinations. The reliability of a student's overall agreed mark was estimated to be .96 and the standard error of measurement to be about 1 per cent. Further analyses went on to consider the influence of question and option choice on reliability, the representativeness of the cohort studied and the effects of agreeing marks rather than simply averaging first and second marks. Cronbach's alpha was proposed as a means of estimating reliability in the absence of second marking and was used to compare the reliability of first and second markers. The possibility of second marking the work only of those students who were classified as borderline on the basis of their first marks was discussed. The paper concludes with a reminder that reliability does not guarantee validity.

Reliability is a fundamental requirement of any assessment procedure. The greater the reliability of an assessment, the more certain we can be that observed differences between individuals on the assessment are the result of real differences between the individuals on whatever the assessment is measuring, rather than the result of random error. Reliability does not guarantee validity: the fact that differences on an assessment do result from real differences between individuals on whatever the assessment is measuring does not guarantee that what it is measuring is what we want it to measure. However, the reliability of a test does set an upper bound on its possible validity. Classical test theory tells us that the correlation between individuals' observed scores on a measurement and their true scores on the variable underlying that measurement is equal to the square root of the reliability coefficient. It follows that the correlation between the observed scores and any criterion variable cannot be greater than this value. Hence the criterion validity of a measurement cannot exceed the square root of the reliability coefficient (Gulliksen, 1987, p. 33).
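As a sketch of that classical test theory bound (with X the observed mark, T the underlying true score, X' a parallel marking, and Y any criterion), the relation can be written as

\[
\rho_{XY} \;\le\; \rho_{XT} \;=\; \sqrt{\rho_{XX'}}\,,
\]

so, taking the reliability of .96 estimated for the overall agreed marks in this study, the criterion validity of those marks could not exceed \(\sqrt{0.96} \approx 0.98\).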

Many factors can contribute to the unreliability of an assessment: the particular sample of questions asked, the timing of the assessment, and so on. One contributor, of particular concern to those interested in measuring educational attainment, is marker unreliability. If a marker is inconsistent in the way in which he or she allocates marks to examination answers, then some of the observed differences in the scores of those sitting the examination will be due not to real differences in the quality of their answers, but to the marker's inconsistencies. These issues become particularly important at grade boundaries, where a small change in an examinee's score could lead to the award of a lower or higher grade (see Cresswell, 1986a, 1988). If such changes are likely to occur as a result of marker unreliability then we have cause for concern.
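To illustrate how unreliability feeds into that grade-boundary concern, the short sketch below applies the classical formula SEM = SD * sqrt(1 - reliability), using the reliability of .96 and the roughly 1 per cent standard error of measurement reported in the abstract. The assumed standard deviation of overall marks (5 percentage points) and the example of a mark of 59 against a 60% boundary are hypothetical values chosen only for illustration.

import math

reliability = 0.96   # overall agreed-mark reliability reported in the abstract
sd_marks = 5.0       # hypothetical spread of overall marks, in percentage points

# Classical test theory: standard error of measurement.
sem = sd_marks * math.sqrt(1.0 - reliability)
print(f"Standard error of measurement: {sem:.2f} percentage points")

# A candidate marked at 59 sits just below a hypothetical 60% class boundary.
# A 95% band of +/- 1.96 SEM shows how easily measurement error alone could
# move such a candidate across the boundary.
observed_mark, boundary = 59.0, 60.0
low, high = observed_mark - 1.96 * sem, observed_mark + 1.96 * sem
print(f"95% band around a mark of {observed_mark:.0f}: {low:.1f} to {high:.1f}")
print(f"The boundary of {boundary:.0f} lies inside that band: {low <= boundary <= high}")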

The issue of marker reliability has prompted a good deal of research into the marking of national school examinations (e.g. Murphy, 1978, 1979, 1982). Murphy (1982) reported inter-marker correlations as high as 1.00 for O-level mathematics and physics examinations, but as low as .80 for O-level English literature. The median inter-marker correlation of the 24 O- and A-level examinations studied was .93. Rather little research has been carried out by psychologists into the reliability of marking on degree schemes. Two notable exceptions are Laming (1990) and Newstead & Dennis (1994). Laming estimated the reliability of the assessment of the overall performance of two cohorts of students on 'a certain university degree' from inter-marker reliabilities calculated for each of five pairs of markers (each pair assessed one of the five sections into which the scheme was divided). His analysis used the methods of classical test theory and drew comparisons with the findings of research into the precision of absolute judgments. Newstead & Dennis attacked the issue from a different direction. Rather than studying an actual degree scheme, they asked a number of examiners to assess the answers of six students to a single question: 'Is there a language module in the mind?' From the range of the scores awarded to each student's answer, they were able to estimate the standard error of measurement for that question and extrapolate that estimate to the overall performance of students on a degree scheme. The two studies came to rather different conclusions. Laming, whose data relate more closely to an actual scheme, concluded that for one of the two years he considered:
DOWNLOAD HERE

Thursday, January 8, 2009

The relationship of dementia prevalence in older

It has often been assumed that dementia occurs more commonly in the intellectual disability (ID) population than in the general population (Torr, 2005). Although it is now accepted that those with Down syndrome (DS) have a genetic predisposition for dementia related to the APP gene on chromosome 21, dementia may also be more common in the ID population who do not have DS (Cooper, 1997). Furthermore, it has been proposed that dementia in the ID population should occur at a younger age than is usual. Tredgold, a London physician during the first half of the previous century, asserted that 'as would be expected, in most cases of primary amentia, [the] senile form of dementia sets in at an earlier age than the normal. It often begins to show itself in the fourth decade […], and the majority of aments who live much after this usually show definite and progressive mental deterioration' (Tredgold, 1952). Thompson (1951) believed the earlier age of decline to be related to arrested brain development. More recently, the cognitive reserve hypothesis has been proposed to explain how adults with similar brain insults may present with differing clinical pictures. It proposes that intelligence, education and occupational level can influence the occurrence and course of many central nervous system disorders (Whalley et al. 2004). Stern (2002) proposed two components to cognitive reserve. The first comprises passive components such as brain size and synapse count, the 'hardware' of the brain, which differs between individuals. Proxies for it include measurements such as brain volume and pre-morbid intelligence (Staff et al. 2004). Active components, the 'software' of the brain, are developed through educational, leisure and occupational activities that develop the use of different neuronal pathways (Stern, 2003). The hypothesis assumes that there is a critical threshold of reserve capacity that needs to be breached by pathological processes before clinical or functional symptoms will develop. Those with more reserve have been found to be less likely to develop dementia or cognitive decline (Whalley et al. 2000; Verghese et al. 2003; Valenzuela & Sachdev, 2006). Although these studies are consistent with the theory of cognitive reserve, none specifically studied participants in the ID (mental retardation) range of ability.
DOWNLOAD HERE
