Well, we’ve run another two sets of interview tests for candidates using the OMR (FormReturn) system. From my perspective this was much better than the first iteration … good news 🙂
- I thought the answer sheets were better laid out
- it included negative marking (which worked)
- candidates entered their applicant number rather than writing their name. This meant I could automatically join the results to the right person, instead of cutting and pasting
- the processes and responsibilities are much clearer (nothing to do with the software, but important for scalability). The discipline team have taken responsibility for managing and printing the question booklets
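The auto-join mentioned above is straightforward once every sheet carries an applicant number. A minimal sketch of the idea, assuming the OMR results and the applicant list are available as simple records (the field names here are illustrative, not the actual FormReturn export format):

```python
# Join OMR results to applicants by applicant number.
# Field names ("applicant_number", "name", "score") are assumptions
# for illustration, not the real FormReturn export columns.

def join_results(applicants, results):
    """Match each result row to its applicant via applicant number."""
    by_number = {a["applicant_number"]: a for a in applicants}
    joined, unmatched = [], []
    for r in results:
        person = by_number.get(r["applicant_number"])
        if person:
            joined.append({**person, "score": r["score"]})
        else:
            # e.g. a misread applicant number -> needs a manual check
            unmatched.append(r)
    return joined, unmatched

applicants = [{"applicant_number": "A001", "name": "J. Smith"}]
results = [
    {"applicant_number": "A001", "score": 42},
    {"applicant_number": "A9??", "score": 17},  # unreadable number
]
joined, unmatched = join_results(applicants, results)
```

The `unmatched` list is also a useful by-product: it flags exactly the sheets whose applicant number couldn't be read, which is the same manual-correction problem described below.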
Interestingly, this time we had a significant increase in the number of answer sheets that weren't processed correctly (the software couldn't read either an answer or the applicant number). So I spent more time working through the unprocessed sheets in the software, keying in answers from the paper copies. This isn't a pain for me given the numbers (30 candidates), but it wouldn't scale particularly well. I assume we could reduce this time by asking candidates to fill in the bubbles with a pen at the end of the exam, since the contrast would be better than pencil. I also think there is a setting for the read sensitivity; I'll look into this before next time.
Another need for next time is to auto-generate the final score as a percentage as well as an absolute figure.
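Deriving the percentage from the marked answers is simple to automate alongside the negative marking. A hedged sketch, assuming a scheme of +1 per correct answer and a 0.25 deduction per wrong answer (the actual weights used in the tests may differ), with blanks scoring zero:

```python
# Illustrative scoring helper: negative marking plus a percentage.
# The weights (+1 correct, -0.25 wrong) are assumptions, not the
# actual marking scheme from the interview tests.

def score_sheet(answers, key, wrong_penalty=0.25):
    """Return (absolute score, percentage) for one answer sheet.

    `answers` uses None for a blank, which scores zero either way.
    """
    correct = sum(1 for a, k in zip(answers, key) if a == k)
    wrong = sum(1 for a, k in zip(answers, key) if a is not None and a != k)
    absolute = correct - wrong_penalty * wrong
    percentage = 100.0 * absolute / len(key)
    return absolute, percentage

# 2 correct, 1 wrong, 1 blank out of 4 questions
absolute, pct = score_sheet(["A", "C", None, "B"], ["A", "B", "C", "B"])
```

With the example above, the absolute score is 2 − 0.25 = 1.75, giving 43.75%.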
The final enhancement for next time: I'd like to send the answer sheet as a PDF to the discipline team so they can print it themselves. I'll then be able to design a workflow process around that.