That sounds like something made up for people who like light novels but are too insecure to admit that they read light novels.
“You read light novels. I read light literature. We are not the same.”
Why does search show the 8th volume, rather than the 1st volume or the series?
I’ve seen that happen to a couple of series; @brandon’s been able to fix it so far, but I guess he hasn’t been able to fix the root cause yet?
Yes, as @eefara mentioned, this does happen every once in a while for a variety of reasons. In this particular case, the alternative names for this series were applied to the 8th volume only and not the entire series… which is now fixed.
So another question re:how do I classify this book for @brandon or anyone else who can answer. The topic this time is graded readers: what’s the minimum a book needs to be counted as a graded reader? Just Japanese/English parallel text? Notes on the Japanese? Word glossary? Etc?
I think the key feature of graded readers is that they are explicitly written with learners in mind, not natives. Everything else is just icing
I suppose this question brings to mind that some books written for Japanese readers include notes on the language (one of my 乱歩 books does), but I’m not coming up with any other good examples that might fall outside that category. Do you have examples of ones that feel unclear?
This exactly. Parallel texts may seem like the strange inclusion, but they are still published with the learner in mind.
I think where it can get confusing is the line between textbooks & graded readers (some textbooks try to teach through stories… which blurs that line). Generally, I’ve just thought if the vast majority of the content is story rather than notes, then it can fall in graded readers. Also, if the thrust of the book is to understand stories rather than to teach grammar or something like that, then it probably falls in graded readers.
Thank you both for the help!
Nothing in particular I can’t sort out with your answers! The book advertised parallel text for learners, but I just wanted to double-check before I submitted anything. I feel like it’d be really disappointing to find a new textbook/graded reader, buy it, open it up, and find out the person who submitted it mis-categorized it. So just being overly cautious, I think!
I’m sorry, I’ve got another question: story type “Short Story”. I’m guessing this is for short story collections and not literally a short story, since those aren’t typically published stand-alone (that I’m aware of)?
Most of the Aozora short stories are actually stand-alone short stories available by themselves, but collections of short stories also go in that category. You can usually tell them apart by page length.
The blurry bit comes when some things are lightly connected short stories vs wholly unconnected. I personally put connected short stories into the novel category and unconnected into short story.
Ah, I forgot about digital editions; my mind was on physical. Thanks for the help!
Just noticed that my selection for the activity feed - e.g. “Following” - always gets reset back to “Global” when I open a book page and go back to the dashboard. Doesn’t seem to happen with other parts of natively. Is that just me? Is that intended?
It happens here as well.
It might be worth opening a bug thread on #natively:product-requests , doesn’t look like something that should be intended as it feels arbitrary
Guessing this has probably been discussed before, but I couldn’t find it in a quick search.
Why doesn’t natively automatically grade your books once it has enough information?
If you have three books - A, B and C - and you grade C > B and B > A, why does it still ask you for the relation between C and A? Shouldn’t it just grade it as C > A automatically?
Seems like it would make grading a lot easier for people that have a lot of books of similar difficulties
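For what it’s worth, the C>B, B>A ⇒ C>A idea is just transitive closure over the pairwise gradings. A rough sketch of what that inference could look like (this is purely illustrative; the function and data shapes here are my assumptions, not anything from Natively’s actual code):

```python
# Hypothetical sketch: infer every "X is harder than Y" comparison that
# follows by transitivity from the pairwise gradings a user has entered.
# (Assumed representation: a grading is a (harder, easier) pair of book IDs.)

def implied_gradings(gradings):
    """Return the set of all (harder, easier) pairs implied by transitivity,
    including the pairs that were entered directly."""
    # Adjacency map: book -> set of books it is directly graded harder than
    harder_than = {}
    for hi, lo in gradings:
        harder_than.setdefault(hi, set()).add(lo)

    implied = set()
    for start in harder_than:
        # Depth-first walk: everything reachable is easier than `start`
        stack = list(harder_than[start])
        seen = set()
        while stack:
            book = stack.pop()
            if book in seen:
                continue
            seen.add(book)
            implied.add((start, book))
            stack.extend(harder_than.get(book, ()))
    return implied


# The example from the post: grade C > B and B > A, and C > A falls out
pairs = implied_gradings([("C", "B"), ("B", "A")])
# pairs contains ("C", "A") without the user ever being asked
```

So the site would only need to prompt for comparisons that aren’t already in the closure.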
That’s a good question and potentially something to think about in the future. Granted, you could make the case that grades are inherently a little noisy, so leveraging them this way could exacerbate that noise. However, I think you’re probably right… I think it makes sense to do it at some point.
You should find out how often gradings are inconsistent, where the user has contradicted themselves. I bet it happens fairly often. And if it does happen, getting that information could be useful since the contradictory gradings will cancel out (at least somewhat I would expect), negating the user’s (apparent) lack of ability to grade those particular books accurately.
Yes, if I went ahead and did this, I’d have to build an ‘implied grading’ calculator, which you could probably use to view inconsistent gradings and see if it truly makes sense to use this leveraging.
TBH, since people usually finish their available gradings, you could make the case too that it’s not worth it to implement yet. It’s a good thought though and makes intuitive sense.
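The inconsistent-gradings side of an ‘implied grading’ calculator could be sketched like this (again a hypothetical illustration under my own assumptions about representation, not Natively’s implementation): a contradiction is any pair of books where the entered gradings imply both X > Y and Y > X, i.e. the books sit on a cycle.

```python
# Hypothetical sketch: flag book pairs a user has contradicted, i.e. cases
# where the entered (harder, easier) pairs imply both X > Y and Y > X.

def find_contradictions(gradings):
    """Return a set of frozenset({x, y}) pairs that appear on both sides
    of a comparison once transitivity is applied."""
    harder_than = {}
    for hi, lo in gradings:
        harder_than.setdefault(hi, set()).add(lo)

    def reachable(start):
        # All books transitively graded easier than `start`
        seen, stack = set(), list(harder_than.get(start, ()))
        while stack:
            book = stack.pop()
            if book not in seen:
                seen.add(book)
                stack.extend(harder_than.get(book, ()))
        return seen

    contradictions = set()
    for x in harder_than:
        for y in reachable(x):
            # x > y holds; if y > x also holds, the two gradings conflict
            if y != x and x in reachable(y):
                contradictions.add(frozenset((x, y)))
    return contradictions


# User graded C > B and B > A, but also A > C: every pair now conflicts
bad = find_contradictions([("C", "B"), ("B", "A"), ("A", "C")])
```

Surfacing those pairs to the user, as suggested below, would be cheap once this closure exists.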
Yeah, it makes sense. I just know I’ve contradicted my own gradings before, which made me wonder how often that happens and what impacts it might have. (Also, as you know, I love grading the books, so I’m happy to do extra comparisons, even if they are redundant.)
Doesn’t that depend on the accuracy and precision of the data, how far apart the automatically graded books are in perceived grade and how big the contradiction is?
E.g. a user has five books they perceived as A<B<C<D<E, and the user has filled in everything but A<E. Since the distance in grade should be fairly large relative to the noise, automatically grading A<E shouldn’t propagate error, but mistakenly grading A ≈ E would introduce new error.
I wonder if the same could be said about the C>B>A example, since having an entire book between C and A might already be enough that noise doesn’t impact C>A. Would be interesting to see the math on this
It does seem intuitive though that for books that are fairly close in perceived grade having redundant information would reduce the impact of noise.
Potentially! Maybe longer links could resolve things, but the longer the links you require, the more complicated it would be to calculate (I think) and also the less beneficial it’d be.
TBH, I’d probably be more inclined to surface inconsistencies to users first or try to make a personal difficulty ordering, purely for personal evaluation before having it impact actual gradings generated. Prioritizing the display of inconsistencies first would make the situation a lot more clear
But, yeah, like I said, users seem happy to finish their gradings, so I probably won’t do this for a while… as the current setup is simpler both for evaluation and for the algorithm.