Integrate a drag & drop based grading system into the current system of pairwise book comparison. Something like this, with books in the same row marked as the same difficulty and books in higher rows marked as more difficult:
This discussion originally started in another thread.
Some potential benefits/downsides that were brought up there:
Want to change the grading for a book? Just drag it around instead of having to delete all gradings and do them again.
Should drastically reduce the time needed to grade if the system is ever updated to incorporate an unlimited number of comparisons from a single user.
Could potentially even extract more information, like roughly how much more difficult you perceived a book to be, instead of just “harder”.
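As a rough illustration of that last point, here is a minimal sketch of how a row layout could be turned into pairwise comparisons, with the row distance as a crude “how much harder” signal. All the names (GradingRow, Comparison, toComparisons) are made up for this example, not anything from the actual site.

```typescript
// Hypothetical data model: each row holds the book IDs the user considers
// equally difficult; rows are ordered from easiest (index 0) to hardest.
interface GradingRow {
  bookIds: string[];
}

interface Comparison {
  harder: string;       // book judged more difficult
  easier: string;       // book judged less difficult
  rowDistance: number;  // crude "how much harder" signal (1 = adjacent rows)
}

// Derive every pairwise comparison implied by a drag & drop layout.
function toComparisons(rows: GradingRow[]): Comparison[] {
  const result: Comparison[] = [];
  for (let hi = 0; hi < rows.length; hi++) {
    for (let lo = 0; lo < hi; lo++) {
      for (const harder of rows[hi].bookIds) {
        for (const easier of rows[lo].bookIds) {
          result.push({ harder, easier, rowDistance: hi - lo });
        }
      }
    }
  }
  return result;
}
```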
Yeah, it would certainly need something to make the view more condensed for people with lots of books.
Perhaps it could show only the first three entries for every row, require clicking a row to see which books it contains in detail, and have a search function to find individual books?
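To make that concrete, a small sketch of what the condensed view logic could look like; the helper names are purely hypothetical and only illustrate the “first three entries plus search” idea.

```typescript
// Hypothetical helpers for a condensed view: show only the first three
// titles per collapsed row, and jump to the row that contains a searched book.
interface RowView {
  titles: string[];
}

function collapsedPreview(row: RowView, limit = 3): string {
  const shown = row.titles.slice(0, limit).join(", ");
  const hidden = row.titles.length - limit;
  return hidden > 0 ? `${shown} (+${hidden} more)` : shown;
}

// Index of the first row whose titles match the search query, or -1.
function findRow(rows: RowView[], query: string): number {
  const q = query.toLowerCase();
  return rows.findIndex(r => r.titles.some(t => t.toLowerCase().includes(q)));
}
```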
I feel like I’d be fine with the view of all rows as long as it was scrollable.
Personally I’d say let the user grade however they like, placing a rating-50 book all the way at the bottom if they want, and just decide what to do with that info in the background (e.g. ignoring some of the individual comparisons if they’re too far apart).
Couldn’t the actual grading system stay the way it is for now, with only the user-facing interface changing?
I.e. have some algorithm pick only a limited number of the pairwise book comparisons to actually take into account, and handle the rest as extra gradings?
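One possible reading of that, sketched out with hypothetical names: keep only the comparisons between nearby rows for the existing algorithm and set the distant ones aside, along the lines of the “ignore comparisons that are too far apart” idea above.

```typescript
// Hypothetical filter: feed only comparisons between nearby rows into the
// existing pairwise grading algorithm and keep the rest as extra gradings.
interface Comparison {
  harder: string;
  easier: string;
  rowDistance: number; // 1 = adjacent rows in the drag & drop layout
}

function splitComparisons(all: Comparison[], maxDistance = 1) {
  const counted = all.filter(c => c.rowDistance <= maxDistance);
  const extra = all.filter(c => c.rowDistance > maxDistance);
  return { counted, extra };
}
```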
It might sometimes be appropriate for a book originally graded at Level 25 to end up higher than Level 30 (i.e. it was mis-graded as JLPT N3). The current grading system probably doesn’t allow this.
I guess that might work, I’m not sure. The nice thing about the current system is that users can somewhat self-select comparisons they feel confident about via the skip mechanism, rather than being required to rank everything and then having comparisons generated at random. It’s a thought though.
TBH I think the first step would be to generate the personal ranked list just for viewing purposes; that would probably give us a lot of insight, like how often people contradict themselves.
I think personal ranked lists, even if we only generate them from your current gradings, would look really cool on our profiles.
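For what it’s worth, a very naive sketch of how such a list could be derived from existing gradings, and how self-contradictions could be counted against it. The names are hypothetical and the wins-minus-losses scoring is much simpler than whatever the real grading algorithm does.

```typescript
// Hypothetical sketch: derive a personal ranked list from existing pairwise
// gradings and count the gradings that contradict it.
interface Grading {
  harder: string; // book the user marked as more difficult
  easier: string;
}

// Rank books hardest-first by a simple wins-minus-losses score.
function rankBooks(gradings: Grading[]): string[] {
  const score = new Map<string, number>();
  const bump = (id: string, by: number) =>
    score.set(id, (score.get(id) ?? 0) + by);
  for (const g of gradings) {
    bump(g.harder, 1);   // a "win" nudges the book up
    bump(g.easier, -1);  // a "loss" nudges it down
  }
  return [...score.keys()].sort((a, b) => score.get(b)! - score.get(a)!);
}

// Gradings that disagree with the derived ranking, i.e. self-contradictions.
function contradictions(gradings: Grading[], ranking: string[]): Grading[] {
  const pos = new Map<string, number>();
  ranking.forEach((id, i) => pos.set(id, i));
  return gradings.filter(g => pos.get(g.harder)! > pos.get(g.easier)!);
}
```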
I can only speak for myself, but doesn’t that confusion come from the current system?
Sometimes when grading I’m not really sure exactly how difficult a particular book I am being asked about was. What I end up doing is going back to my old gradings, looking at how I graded that book, finding reference points for how difficult I thought it was back then, and then grading based on that. Personally, I only struggle with grading precisely because I am being asked to grade single books in isolation.
For me, if I have an entire row of books to grade it against, that problem just disappears. It becomes much clearer where I want to put it. (This is actually how I have been doing it with a custom document for a while now.)
EDIT: In retrospect, I realized that you might have been talking about this from a grading accuracy perspective, not a usability one, so I might have talked past your point. I have a bit of a tendency to think of this as a user interface change, because that’s where I think it has the largest improvement over the current system.