One of the most common patterns I see in questions here is "I made an ontology in Protege and now I need to put instance data in... HELP!"
I've got a sneaking suspicion that many of these people aren't going to do any inference at all with their schemas... The only role the schema plays is human-readable documentation.
Maybe in some cases, but then I figure they're not using the standards appropriately. There are far more effective and aesthetically pleasing ways to document vocabularies that don't involve RDFS or OWL: simple class/property-domain diagrams, say, or HTML with nice-looking pictures. Further, RDFS and OWL are far too widely misunderstood to form the basis of human-readable documentation.
In a lot of ways I think their perception of RDFS is distorted by experience with relational databases and XML Schema.
On the other hand, what's unique about RDFS and OWL is that they can be used to define ruleboxes that connect different vocabularies. All of the projects where I've used RDFS and OWL have involved either (i) making sense of large A-Boxes such as DBpedia and Freebase, or (ii) making it possible for people to query a private vocabulary with well-known predicates such as foaf:maker and dcterms:creator. Is anybody else doing this?
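To make case (ii) concrete, here's a minimal sketch in plain Python (no real triple store or reasoner) of how an rdfs:subPropertyOf "rulebox" lets a query over a well-known predicate reach data published with a private vocabulary. The FOAF and DCTerms predicates are real; the ex: terms and the sample data are made up for illustration.

```python
# RDFS-style "rulebox" sketch: map a private predicate onto well-known ones
# via rdfs:subPropertyOf, then forward-chain the rdfs7 entailment rule.
# ex:* terms are hypothetical; foaf:maker / dcterms:creator are real predicates.

SUB_PROPERTY_OF = "rdfs:subPropertyOf"

# A-Box: instance data using a private predicate.
triples = {
    ("ex:report42", "ex:authoredBy", "ex:alice"),
}

# T-Box / rulebox: connect the private predicate to well-known vocabularies.
triples |= {
    ("ex:authoredBy", SUB_PROPERTY_OF, "dcterms:creator"),
    ("dcterms:creator", SUB_PROPERTY_OF, "foaf:maker"),
}

def infer_subproperties(triples):
    """Forward-chain RDFS rule rdfs7: (s p o) + (p subPropertyOf q) => (s q o)."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s, q, o)
            for (s, p, o) in closure
            for (p2, rel, q) in closure
            if rel == SUB_PROPERTY_OF and p2 == p
        }
        if not new <= closure:
            closure |= new
            changed = True
    return closure

closure = infer_subproperties(triples)

# A query over the well-known predicate now finds the privately-modelled fact.
makers = {(s, o) for (s, p, o) in closure if p == "foaf:maker"}
```

After inference, `makers` contains `("ex:report42", "ex:alice")` even though the raw data never mentions foaf:maker; that's the whole value of the rulebox.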
If I rephrase it as "do people often start with data and then worry about the model?", then yes! Such a methodology is particularly common for projects centred around legacy data (of whatever kind). For example, the DBpedia ontology/property-model is a direct result of the raw Wikipedia infoboxes and structure.
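As a toy sketch of that data-first approach: the "model" (the set of predicates) falls straight out of the raw key/value pairs, much as DBpedia's raw property namespace is generated from infobox keys. The infobox content and the dbp:/dbr: prefix usage below are illustrative assumptions, not DBpedia's actual extraction code.

```python
# Data-first sketch: mint one predicate per infobox key, with no schema
# designed up front. The infobox values and prefixes are illustrative.

raw_infobox = {
    "name": "Tim Berners-Lee",
    "birth_place": "London",
    "known_for": "World Wide Web",
}

def infobox_to_triples(subject, infobox):
    """The property-model is simply whatever keys the legacy data contains."""
    return {
        (subject, "dbp:" + key, value)
        for key, value in infobox.items()
    }

triples = infobox_to_triples("dbr:Tim_Berners-Lee", raw_infobox)
```

Any ontology engineering then happens afterwards, by cleaning up and mapping the predicates the data has already dictated.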
Where legacy data does not exist, of course it makes sense to create a model to encourage the creation of such data; for example, FOAF, voiD, etc. go down this path. Such folks might have an idea of what kind of data they want to model, but the (hopefully generic and flexible) model comes first.
Of course, this is a very Linked-Data-esque perspective. People doing more traditional ontology modelling may be representing most of their knowledge-base as T-Box. This is where a tool like Protege excels. Genuine use-cases for full-fledged OWL reasoning are, however, currently far rarer than the volume of proposed use-cases would suggest. Genuine use-cases involve complex, rigorously defined legacy models: the kinds of models that are only prevalent in a few areas such as health care and life sciences, law, and perhaps manufacturing.