RT Journal Article
SR Electronic
T1 Evidence Levels for Neuroradiology Articles: Low Agreement among Raters
JF American Journal of Neuroradiology
JO Am. J. Neuroradiol.
FD American Society of Neuroradiology
SP 1039
OP 1042
DO 10.3174/ajnr.A4242
VO 36
IS 6
A1 Ramalho, J.N.
A1 Tedesqui, G.
A1 Ramalho, M.
A1 Azevedo, R.S.
A1 Castillo, M.
YR 2015
UL http://www.ajnr.org/content/36/6/1039.abstract
AB BACKGROUND AND PURPOSE: Because evidence-based articles are difficult to recognize among the large volume of publications available, some journals have adopted evidence-based medicine criteria to classify their articles. Our purpose was to determine whether an evidence-based medicine classification used by a subspecialty-imaging journal allowed consistent categorization of levels of evidence among different raters.
MATERIALS AND METHODS: One hundred consecutive articles in the American Journal of Neuroradiology were classified as to their level of evidence by the 2 original manuscript reviewers, and their interobserver agreement was calculated. After publication, abstracts and titles were reprinted and independently ranked by 3 different radiologists at 2 different time points. Interobserver and intraobserver agreement was calculated for these radiologists.
RESULTS: The interobserver agreement between the original manuscript reviewers was −0.2283 (standard error = 0.0000; 95% CI, −0.2283 to −0.2283); among the 3 postpublication reviewers for the first evaluation, it was 0.1899 (standard error = 0.0383; 95% CI, 0.1149–0.2649); and for the second evaluation, performed 3 months later, it was 0.1145 (standard error = 0.0350; 95% CI, 0.0460–0.1831). The intraobserver agreement was 0.2344 (standard error = 0.0660; 95% CI, 0.1050–0.3639), 0.3826 (standard error = 0.0738; 95% CI, 0.2379–0.5272), and 0.6611 (standard error = 0.0656; 95% CI, 0.5325–0.7898) for the 3 postpublication evaluators, respectively. These results show no-to-fair interreviewer agreement and a tendency to slight intrareviewer agreement.
CONCLUSIONS: Inconsistent use of evidence-based criteria by different raters limits their utility when attempting to classify neuroradiology-related articles.
Abbreviations: AJNR = American Journal of Neuroradiology; EBM = evidence-based medicine; R = reviewer; SE = standard error
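The abstract reports agreement coefficients with standard errors and 95% CIs but does not name the statistic; a kappa-type coefficient is consistent with the reported range (negative through 0.66). Below is a minimal sketch of how one such pairwise figure might be computed, assuming Cohen's kappa (via scikit-learn) and a bootstrap standard error with a normal-approximation CI. The rater data are hypothetical, not the study's.

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical evidence levels (1-5) assigned to 100 articles by two raters.
rater_a = rng.integers(1, 6, size=100)
rater_b = rng.integers(1, 6, size=100)

# Point estimate of chance-corrected agreement between the two raters.
kappa = cohen_kappa_score(rater_a, rater_b)

# Bootstrap the standard error by resampling the 100 articles with replacement
# and recomputing kappa on each resample.
boot = [
    cohen_kappa_score(rater_a[idx], rater_b[idx])
    for idx in (rng.integers(0, 100, size=100) for _ in range(2000))
]
se = np.std(boot, ddof=1)

# Normal-approximation 95% confidence interval, as reported in the abstract.
lo, hi = kappa - 1.96 * se, kappa + 1.96 * se
print(f"kappa = {kappa:.4f} (SE = {se:.4f}; 95% CI, {lo:.4f} to {hi:.4f})")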