{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,23]],"date-time":"2025-08-23T05:18:39Z","timestamp":1755926319671,"version":"3.41.2"},"reference-count":27,"publisher":"Emerald","issue":"4","license":[{"start":{"date-parts":[[2019,8,5]],"date-time":"2019-08-05T00:00:00Z","timestamp":1564963200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emeraldinsight.com\/page\/tdm"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IJILT"],"published-print":{"date-parts":[[2019,8,5]]},"abstract":"<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title>\n<jats:p>The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) to obtain an assessment method, as fair as possible, for the students. The authors intend to ascertain if it is possible to control the quality of the MCQ contained in a bank of questions, implemented in Moodle, presenting some evidence with Item Response Theory (IRT) and Classical Test Theory (CTT). The used techniques can be considered a type of Descriptive Learning Analytics since they allow the measurement, collection, analysis and reporting of data generated from students\u2019 assessment.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title>\n<jats:p>A representative data set of students\u2019 grades from tests, randomly generated with a bank of questions implemented in Moodle, was used for analysis. The data were extracted from a Moodle database using MySQL with an ODBC connector, and collected in MS Excel<jats:sup>TM<\/jats:sup> worksheets, and appropriate macros programmed with VBA were used. 
The CTT analysis was performed with appropriate MS Excel<jats:sup>TM<\/jats:sup> formulas, and the IRT analysis was carried out with an MS Excel<jats:sup>TM<\/jats:sup> add-in.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Findings<\/jats:title>\n<jats:p>The Difficulty and Discrimination Indexes were calculated for all the questions with enough answers. It was found that the majority of the questions presented adequate values for these indexes, which leads to the conclusion that they are of good quality. The analysis also showed that the bank of questions presents some internal consistency and, consequently, some reliability. Groups of questions with similar features were obtained, which is very important for teachers who wish to develop tests that are as fair as possible.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title>\n<jats:p>The main contribution and originality of this research is the definition of groups of questions with similar features regarding their difficulty and discrimination properties. These groups allow the identification of difficulty levels of the questions in the bank, thus allowing teachers to build tests, randomly generated with Moodle, that include questions with several difficulty levels, as should be done. 
To the best of the authors\u2019 knowledge, there are no similar results in the literature.<\/jats:p>\n<\/jats:sec>","DOI":"10.1108\/ijilt-02-2019-0023","type":"journal-article","created":{"date-parts":[[2019,6,4]],"date-time":"2019-06-04T04:44:08Z","timestamp":1559623448000},"page":"322-341","source":"Crossref","is-referenced-by-count":11,"title":["Using Learning Analytics to evaluate the quality of multiple-choice questions"],"prefix":"10.1108","volume":"36","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6951-4278","authenticated-orcid":false,"given":"Jose Manuel","family":"Azevedo","sequence":"first","affiliation":[]},{"given":"Ema P.","family":"Oliveira","sequence":"additional","affiliation":[]},{"given":"Patr\u00edcia Damas","family":"Beites","sequence":"additional","affiliation":[]}],"member":"140","reference":[{"first-page":"260","article-title":"e-Assessment in mathematics courses with multiple-choice questions tests","year":"2015","key":"key2019072310420551800_ref001"},{"first-page":"137","article-title":"How do mathematics teachers in higher education look at e-assessment with multiple-choice questions","year":"2017","key":"key2019072310420551800_ref002"},{"first-page":"641","article-title":"Learning analytics: a way to monitoring and improving students\u2019 learning","year":"2017","key":"key2019072310420551800_ref003"},{"first-page":"565","article-title":"Evaluating multiple choice items in determining quality of test","year":"2013","key":"key2019072310420551800_ref004"},{"volume-title":"The Basics of Item Response Theory","year":"2001","key":"key2019072310420551800_ref005"},{"year":"2008","key":"key2019072310420551800_ref006","article-title":"Os testes de escolha m\u00faltipla (Tem)"},{"issue":"5\/6","key":"key2019072310420551800_ref007","doi-asserted-by":"crossref","first-page":"304","DOI":"10.1504\/IJTEL.2012.051816","article-title":"Learning analytics: drivers, developments and challenges","volume":"4","year":"2012","journal-title":"International 
Journal of Technology Enhanced Learning"},{"issue":"7","key":"key2019072310420551800_ref008","doi-asserted-by":"crossref","first-page":"819","DOI":"10.1080\/02602930903060990","article-title":"E-assessment within the bologna paradigm: evidence from Portugal","volume":"35","year":"2010","journal-title":"Assessment & Evaluation in Higher Education"},{"key":"key2019072310420551800_ref009","unstructured":"Greller, W. and Drachsler, H. (2012), \u201cTranslating learning into numbers: a generic framework for learning analytics multimodal learning analytics for collaborative learning understanding and support view project\u201d, available at: http:\/\/groups.google.com\/group\/learninganalytics (accessed June 15, 2014)."},{"volume-title":"Estat\u00edstica","year":"2007","key":"key2019072310420551800_ref010"},{"edition":"3rd ed.","volume-title":"Developing and Validating Multiple-choice Test Items","year":"2004","key":"key2019072310420551800_ref011"},{"first-page":"4432","article-title":"Comprehensive statistical analysis of a mathematics placement test","year":"2012","key":"key2019072310420551800_ref012"},{"issue":"3","key":"key2019072310420551800_ref013","first-page":"39","article-title":"Comparison of Classical Test Theory and Item Response Theory and their applications to test development","volume":"12","year":"1993","journal-title":"Educational Measurement Issues and Practice"},{"volume-title":"Fundamentals of Item Response Theory","year":"1991","key":"key2019072310420551800_ref014"},{"key":"key2019072310420551800_ref015","unstructured":"JISC (2006), \u201cE-assessment glossary (extended)\u201d, Joint Information Systems Committee, available at: www.jisc.ac.uk\/media\/documents\/themes\/elearning\/eassess_glossary_extendedv101.pdf (accessed September 15, 2014)."},{"issue":"3","key":"key2019072310420551800_ref016","doi-asserted-by":"crossref","first-page":"310","DOI":"10.1108\/JEA-10-2012-0110","article-title":"Development and validity of the ethical leadership 
questionnaire","volume":"52","year":"2014","journal-title":"Journal of Educational Administration"},{"issue":"2","key":"key2019072310420551800_ref017","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1080\/08957347.2011.554604","article-title":"Validating measurement of knowledge integration in science using multiple-choice and explanation items","volume":"24","year":"2011","journal-title":"Applied Measurement in Education"},{"issue":"3","key":"key2019072310420551800_ref018","doi-asserted-by":"crossref","first-page":"164","DOI":"10.1080\/10627197.2011.611702","article-title":"An investigation of explanation multiple-choice items in science assessment","volume":"16","year":"2011","journal-title":"Educational Assessment"},{"year":"2002","key":"key2019072310420551800_ref019","article-title":"A summary of methods of item analysis"},{"year":"2002","key":"key2019072310420551800_ref020","article-title":"Design requirements of a databank"},{"year":"2002","key":"key2019072310420551800_ref021","article-title":"Principles of assessment"},{"issue":"1","key":"key2019072310420551800_ref022","first-page":"65","article-title":"Qual a fiabilidade do alfa de Cronbach? 
Quest\u00f5es antigas e solu\u00e7\u00f5es modernas?","volume":"4","year":"2006","journal-title":"Laborat\u00f3rio de Psicologia"},{"issue":"2","key":"key2019072310420551800_ref023","doi-asserted-by":"crossref","first-page":"186","DOI":"10.1177\/0894845310384593","article-title":"Using the self-directed search in research: selecting a representative pool of items to measure vocational interests","volume":"39","year":"2012","journal-title":"Journal of Career Development"},{"issue":"6","key":"key2019072310420551800_ref024","doi-asserted-by":"crossref","first-page":"601","DOI":"10.1109\/TSMCC.2010.2053532","article-title":"Educational Data Mining: a review of the state of the art","volume":"40","year":"2010","journal-title":"IEEE Transactions on Systems, Man, and Cybernetics"},{"edition":"3rd ed.","volume-title":"Business Intelligence: A Managerial Perspective on Analytics","year":"2014","key":"key2019072310420551800_ref025"},{"key":"key2019072310420551800_ref026","doi-asserted-by":"crossref","first-page":"552","DOI":"10.1016\/j.sbspro.2011.03.140","article-title":"An open source tool to verify the psychometric properties of an evaluation instrument","volume":"15","year":"2011","journal-title":"Procedia \u2013 Social and Behavioral Sciences"},{"key":"key2019072310420551800_ref027","unstructured":"Zickar, M.J. and Broadfoot, A.A. (2009), \u201cThe partial revival of a dead horse? Comparing Classical Test Theory and Item Response Theory\u201d, in Vandenberg, C.E.L.R.J. (Ed.), Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences, Routledge\/Taylor & Francis Group, New York, NY, pp. 
37-59."}],"container-title":["The International Journal of Information and Learning Technology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emeraldinsight.com\/doi\/full-xml\/10.1108\/IJILT-02-2019-0023","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emeraldinsight.com\/doi\/full\/10.1108\/IJILT-02-2019-0023","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T22:55:25Z","timestamp":1753397725000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/ijilt\/article\/36\/4\/322-341\/136457"}},"subtitle":["A perspective with Classical Test Theory and Item Response Theory"],"short-title":[],"issued":{"date-parts":[[2019,8,5]]},"references-count":27,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2019,8,5]]}},"alternative-id":["10.1108\/IJILT-02-2019-0023"],"URL":"https:\/\/doi.org\/10.1108\/ijilt-02-2019-0023","relation":{},"ISSN":["2056-4880"],"issn-type":[{"type":"print","value":"2056-4880"}],"subject":[],"published":{"date-parts":[[2019,8,5]]}}}