How to keep track of "imported" RDF resources

I am developing an editing tool where users can import resources from "the Semantic Web" and SPARQL endpoints. When importing, all triples where the resource is the subject will be copied into the edited graph. This makes it easy to link the edited model with external resources without resorting to "all-or-nothing" approaches such as owl:imports.
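To make the idea concrete, here is a minimal sketch of that import step, assuming Python with rdflib and SPARQLWrapper (the function name and the DBpedia endpoint are only illustrative):

```python
from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper

def import_resource(edited_graph, resource_uri, endpoint_url):
    """Copy every triple that has resource_uri as its subject from a
    SPARQL endpoint into the edited graph."""
    sparql = SPARQLWrapper(endpoint_url)
    sparql.setQuery(
        "CONSTRUCT { <%s> ?p ?o } WHERE { <%s> ?p ?o }"
        % (resource_uri, resource_uri)
    )
    # For CONSTRUCT queries, SPARQLWrapper's convert() typically returns
    # an rdflib Graph parsed from the endpoint's RDF response.
    fetched = sparql.query().convert()
    for triple in fetched:
        edited_graph.add(triple)
    return len(fetched)

# e.g. import_resource(Graph(), "http://dbpedia.org/resource/Berlin",
#                      "http://dbpedia.org/sparql")
```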

Now I also want to distinguish those imported resources and make it convenient to refresh/update their triples, and possibly to mark the imported resources as read-only. I guess one way of implementing that would be to store additional triples for the resource: one for the timestamp when the resource was last imported, and another for the source, e.g. the URL of the SPARQL endpoint. For Linked Data resources, I probably just need a timestamp to indicate that it has been imported.

Does anyone know of an established RDF vocabulary to track such information? What other approaches are used for the scenario above?

I guess one way of implementing that would be to store additional triples for the resource: one for the timestamp when the resource was last imported, and another for the source, e.g. the URL of the SPARQL endpoint. For Linked Data resources, I probably just need a timestamp to indicate that it has been imported. ... Does anyone know of an established RDF vocabulary to track such information?

DC Terms springs to mind, particularly (but not only) dct:modified and dct:source. But there are plenty more related terms in there.
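For illustration, a sketch of how dct:modified and dct:source could carry that import metadata, again assuming rdflib (the helper name is made up):

```python
from datetime import datetime, timezone
from rdflib import URIRef, Literal
from rdflib.namespace import DCTERMS, XSD

def mark_imported(edited_graph, resource_uri, source_url):
    """Record when the resource was last imported and where it came from."""
    resource = URIRef(resource_uri)
    # Drop any previous import metadata so the values do not accumulate
    edited_graph.remove((resource, DCTERMS.modified, None))
    edited_graph.remove((resource, DCTERMS.source, None))
    edited_graph.add((resource, DCTERMS.modified,
                      Literal(datetime.now(timezone.utc).isoformat(),
                              datatype=XSD.dateTime)))
    edited_graph.add((resource, DCTERMS.source, URIRef(source_url)))
```

Note that reading dct:modified as "last imported into my graph" rather than "last changed at the source" is an interpretation you would want to document.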

What other approaches are used for the scenario above?

Named graphs would also work, as per @morphyn's answer. At resource-level granularity, I would still probably favour the approach you describe. For individual triples, you would be in reification territory.

I don't have an answer, and I'm curious to see how this can be handled.

In a similar case where I use Sesame+OWLIM, I make heavy use of contexts (i.e. named graphs) to keep track of data provenance. When updating the data, I clear the context corresponding to the source and import the data again. This works quite well if you are loading chunks of data from various sources and then using them in a read-only fashion, but it becomes more complicated if external actors can modify the imported data and you still want to be able to update it from the original sources.
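As a rough sketch of that clear-and-reload cycle, translated to rdflib named graphs rather than Sesame contexts (the graph URI, helper name, and URLs are placeholders):

```python
from rdflib import Dataset, URIRef

def refresh_source(dataset, graph_uri, source_url):
    """Clear the named graph (context) that holds one source's data
    and re-import that source from scratch."""
    context = dataset.graph(URIRef(graph_uri))
    context.remove((None, None, None))   # wipe only this source's triples
    context.parse(source_url)            # re-fetch the data into the same context
    return context

# e.g. refresh_source(Dataset(),
#                     "http://example.org/graphs/some-source",
#                     "http://example.org/data/some-source.ttl")
```

Because each source lives in its own context, clearing and reloading one source leaves everything else in the store untouched.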