When Dual Screening, the system allows me to adjudicate individual decisions, but I am not able to determine inter-rater reliability (IRR) within the software. IRR would help me track the accuracy of the underlying screeners, and it could also be published as part of the screening results.
Could an IRR / Kappa statistic be added so that adjudicators/administrators can track the accuracy of the underlying screeners in Dual Screening?
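For context, here is a minimal sketch of the kind of statistic I have in mind: Cohen's kappa computed from two screeners' include/exclude decisions on the same records. This is just an illustration of the calculation, not the product's implementation; the screener names and decision lists are hypothetical.

```python
# Hypothetical sketch: Cohen's kappa for two screeners' decisions on the same records.
from collections import Counter

def cohens_kappa(decisions_a, decisions_b):
    """Cohen's kappa for two raters labeling the same set of items."""
    assert len(decisions_a) == len(decisions_b)
    n = len(decisions_a)

    # Observed agreement: fraction of records both screeners labeled the same.
    p_observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n

    # Expected chance agreement, from each screener's marginal label frequencies.
    counts_a = Counter(decisions_a)
    counts_b = Counter(decisions_b)
    labels = set(counts_a) | set(counts_b)
    p_expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)

    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical dual-screening decisions on eight records:
screener_1 = ["include", "exclude", "include", "include", "exclude", "exclude", "include", "exclude"]
screener_2 = ["include", "exclude", "exclude", "include", "exclude", "include", "include", "exclude"]
print(f"kappa = {cohens_kappa(screener_1, screener_2):.2f}")  # kappa = 0.50
```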
This feature was added a while back on Dashboard, under “Screening Agreement”. Enjoy!