A revised Bloom's taxonomy evaluation of formal written language test items


Abstract

This paper evaluates the appropriateness of test items in formal written language tests against the revised Bloom's Taxonomy. Thirty written language tests created by EFL lecturers were analyzed; document analysis was applied, and the data were categorized and examined. Remembering-level tests posed crucial questions, while understanding-level (comprehension) tests required finding specific examples or data, identifying general concepts or ideas, and abstracting themes. Applying-level tests involved completing particular projects or solving problems; analyzing-level tests called for a SWOT analysis; evaluating-level tests required demonstrating a strategic plan; and creating-level tests asked students to create new things or ideas, generalize, and draw conclusions. The findings showed that 66% of test items operated at the remembering level, 16% at understanding, 2% at applying, 9% at analyzing, 2% at evaluating, and 5% at creating. This points to a disparity between the use of lower-order thinking skills (LOTS) and higher-order thinking skills (HOTS); hence, Bloom's Taxonomy levels were not well distributed across the language tests.
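As an illustration only (not part of the study), the following minimal Python sketch tallies the item percentages reported in the abstract by Bloom's level and aggregates them into the conventional LOTS/HOTS split, making the imbalance explicit:

# Percentages of test items per revised Bloom's Taxonomy level,
# as reported in the abstract.
distribution = {
    "remembering": 66,
    "understanding": 16,
    "applying": 2,
    "analyzing": 9,
    "evaluating": 2,
    "creating": 5,
}

# Conventional split: the first three levels are lower-order thinking
# skills (LOTS), the last three are higher-order thinking skills (HOTS).
LOTS_LEVELS = ("remembering", "understanding", "applying")

lots = sum(v for k, v in distribution.items() if k in LOTS_LEVELS)
hots = sum(v for k, v in distribution.items() if k not in LOTS_LEVELS)

print(f"LOTS: {lots}%  HOTS: {hots}%")  # LOTS: 84%  HOTS: 16%

By this split, 84% of the analyzed items target lower-order skills and only 16% target higher-order skills, which is the disparity the paper reports.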

Citation (APA)

Setyowati, Y., Susanto, S., & Munir, A. (2022). A revised Bloom's taxonomy evaluation of formal written language test items. World Journal on Educational Technology: Current Issues, 14(5), 1317–1331. https://doi.org/10.18844/wjet.v14i5.7296
