[LINK] RFI: How Does the Do-Not-Cache Instruction Work?
Roger Clarke
Roger.Clarke at xamax.com.au
Sun Aug 10 20:05:36 AEST 2008
A question expressed from the user perspective:
With some web-pages, I can't save the page, nor the images within
it. If I want a copy, I have to do a screen-scrape.
So, if I come back to the web-page later, even a short time later,
the page and the images are fetched all the way from the web-server
again.
How does this work?
Presumably something in the HTTP response that travels from the server
via the ISPs tells the proxy-servers along the way, and my browser,
not to cache anything?
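If it helps anyone frame an answer, here's a minimal Python sketch
(using nothing beyond the standard library's urllib) that fetches a
page and dumps the response headers which, as I understand it, carry
the do-not-cache instruction. The URL is just a stand-in, and
Cache-Control, Pragma and Expires are my guesses at the relevant
fields:

    import urllib.request

    # Fetch a page and print the response headers that, as I understand
    # it, tell caches what they may and may not store. The URL is a
    # stand-in for whichever page is at issue.
    with urllib.request.urlopen("http://www.example.com/") as resp:
        for name in ("Cache-Control", "Pragma", "Expires"):
            print(name + ":", resp.headers.get(name, "(not sent)"))

A page that must not be cached would presumably come back with
something like "Cache-Control: no-store" or "no-cache", whereas a
cacheable one would carry a max-age or an Expires date.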
Most, maybe all, search-engine bots do appear to respect robots.txt
directives. So maybe proxy-servers and browsers likewise respect the
requests that travel with web-pages?
Or do some browsers break the rules and cache the contents on my machine?
And do some ISPs' proxy-servers break the rules and cache the
contents on their machines?
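One crude way to probe that, I imagine: HTTP/1.1 says a cache that
serves a stored copy is supposed to add an Age header, so fetching
the same page twice and watching for one might show whether something
along the way is caching regardless. Again only a sketch, with the
same stand-in URL as above:

    import urllib.request

    # Request the same URL twice in a row. If an intermediary answers
    # the second request from its cache, HTTP/1.1 says it should add an
    # Age header giving the stored copy's age in seconds.
    url = "http://www.example.com/"
    for attempt in (1, 2):
        with urllib.request.urlopen(url) as resp:
            age = resp.headers.get("Age")
            print("attempt %d: Age = %s" % (attempt, age or "(not sent)"))

(Absence of an Age header doesn't prove absence of a cache, of course;
a misbehaving cache might simply omit it.)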
[A pointer to an explanation that's not heavily technical would be great.
[A pointer to the documentation and a suggestion that I RT(R)FM -
read the (right) f------ manual - would be fine too.
[Thanks Link Institute!
[Yes, I have an actual and urgent need. And it's not a consultancy gig.
--
Roger Clarke http://www.anu.edu.au/people/Roger.Clarke/
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Tel: +61 2 6288 1472, and 6288 6916
mailto:Roger.Clarke at xamax.com.au http://www.xamax.com.au/
Visiting Professor in Info Science & Eng Australian National University
Visiting Professor in the eCommerce Program University of Hong Kong
Visiting Professor in the Cyberspace Law & Policy Centre Uni of NSW