Some recently posted questions are rather obviously homework assignments, which have been copy-pasted from the test sheet. Some of these questions are more or less valid ones, some are clearly far too vague, and all of them show a lack of effort on the part of the OP.
What do you think is the proper way to deal with this situation? The questions have been downvoted already, which I think is a proper response, and several have received comments saying the OP should put in a little more effort. Should we do something more, like close/delete these questions? Or maybe we should introduce a 'homework' tag, like Stack Overflow has?
Note that I am not against people asking homework assignment questions, per se: as far as I'm concerned, using this site is a valid source of information for learning something. But explicitly tagging such questions as homework may give people wanting to provide answers a hint that it might be better to help the OP figure out the answer him/herself rather than give a complete and comprehensive answer.
Or am I making things too bloody difficult here? ;)
If the student has made some effort and is using the site intelligently, then I would answer their (specific, concrete) questions. Helping with specific parts of homework is one thing, but it has been a recurring event on the Jena user lists that questions are a cut-and-paste from the homework. It's usually easy to spot: broad, "what do you think" questions. I will not answer those; indeed, I think there should be some push-back to discourage them. The course organizer probably does not expect questions to be answered by someone else, especially if the work contributes to the final evaluation, so by helping the student I would be undermining the course organizer.
To add to the link collection:
"How to ask questions the smart way": http://www.catb.org/~esr/faqs/smart-questions.html
A homework tag is a good start; in general, though, I'd like to see people be a little more honest than just adding a homework tag.
As to whether to close/delete, that is somewhat hard to say. Some questions may be irrevocably bad and should just be closed/deleted, while others could be rescued with some work on the part of the OP.
Generally, unless the question is pure spam, I would go for closing rather than deleting, since closing allows us to say why the question was closed, whereas deleting just makes the question disappear and some users may simply re-post it.
Is it too mean to point people to WhatHaveYouTried.com which is an excellent blog post about lack of effort on the part of people asking questions?
The difference between homework, work and plain curiosity is not really important to me if answering. We're all here to learn.
The main problem I see is "lazy" questions: I have little patience for questions that have answers one simple and obvious Google search away. In fact, I'm not so sure that they are lazy since it would probably take more effort to ask the question than Google the answer. (This is similar to the WhatHaveYouTried.com post.)
I think we should close these questions with a quick pointer to the relevant documentation as a comment, nothing further.
However, the main symptom of lazy questions is noise cluttering the questions page, which is not solved by just closing them. We have a low throughput of questions, so they tend to fester, especially when asked in batches. Perhaps we could set the site up to not show [closed] questions on the main list, so that they can only be found through search.
It would be nice to see a technical answer here.
Is there any way we can rate-limit questions from new users? Since it takes us some time to come up with answers, and presumably time for the questioner to digest them, 2-3 questions a day seems like an upper limit on what a practitioner could ask in good faith.
This kind of rate limit would reduce the impact a questioner could have, without requiring us to make case-by-case decisions.
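To illustrate, such a limit could be as simple as a per-user daily counter. This is purely a sketch of the idea, not an actual feature of this site's software; the class and names below are hypothetical, and the daily limit of 3 is just the figure suggested above:

```python
from collections import defaultdict
from datetime import date

class QuestionRateLimiter:
    """Hypothetical per-user daily cap on new questions."""

    def __init__(self, limit=3):
        self.limit = limit
        # (user_id, date) -> number of questions asked that day
        self.counts = defaultdict(int)

    def try_ask(self, user_id, on_day=None):
        """Return True (and record the question) if the user is under
        the daily limit, False if they must wait until tomorrow."""
        key = (user_id, on_day or date.today())
        if self.counts[key] >= self.limit:
            return False
        self.counts[key] += 1
        return True

limiter = QuestionRateLimiter()
results = [limiter.try_ask("new_user", date(2011, 1, 1)) for _ in range(4)]
# First three attempts are allowed, the fourth is blocked.
```

The counter resets naturally each day because the date is part of the key, so no scheduled cleanup pass is strictly needed for correctness.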
Hi, I hope you don't mean the 2 very short questions that I posted (1 last week and 1 today)!
For the past couple of weeks I have been researching methodologies for ontology evaluation, in conjunction with a project of my own design (I've been using databases and going through scholarly material). While I'm aware of the various methodologies used for metrics (although many of the previously used open-access tools are no longer supported or available) and of usability testing with domain experts, there seems to be a lack of scholarly writing on simpler evaluation techniques done at the "pre-flight" stage, at the point when an ontology is composed. In particular in the area of LIS.
I was curious as to what (if anything) the community was using at this point. I only posted the second question at the suggestion of another member; I assure you that they are not copy-and-paste homework questions but stem from my own curiosity about the absence of literature on the topic. I should have contextualized my question better. Sorry about that.