[LINK] > If Sir Tim Berners-Lee had his time again he'd probably leave // out.
Craig Sanders
cas at taz.net.au
Tue Oct 20 11:06:36 AEDT 2009
On Tue, Oct 20, 2009 at 10:01:12AM +1100, Marghanita da Cruz wrote:
> The reverse slashes on Windows used to drive me mad when I tried to
> test updates for the ramin communications website on a Unix/Apache
> server. Now with Linux on the desktop and on the server the world is a
> wonderful place.
Microsoft copied the idea of hierarchical directory trees from unix for
MS-DOS 2.0, and deliberately chose to use back-slashes(*) in order to be
different. they also retained the idiotic drive-letter thing that MS-DOS
1.0 had copied from CP/M.
back-slashes have mostly just been annoying since then; they've created
a few extra hassles for users switching between unix and MS-DOS/Win, as
well as for programmers porting software between the two environments.
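here's a quick python sketch of the two conventions, just for
illustration (the paths are made up, and python's pathlib is obviously a
modern convenience, not something from the DOS era):

    from pathlib import PurePosixPath, PureWindowsPath

    w = PureWindowsPath(r"C:\Users\cas\mail\inbox")
    p = PurePosixPath("/home/cas/mail/inbox")

    print(w.parts)      # ('C:\\', 'Users', 'cas', 'mail', 'inbox')
    print(p.parts)      # ('/', 'home', 'cas', 'mail', 'inbox')

    # the Windows APIs generally accept either separator, so portable code
    # can usually just emit forward slashes everywhere:
    print(w.as_posix()) # 'C:/Users/cas/mail/inbox'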
neither forward- nor back-slashes are inherently the "right choice";
either works as well as the other. (IMO, forward-slashes have
a slight intuitiveness/flow/naturalness advantage, and a huge
"first-mover" standard-setting advantage. even back then MS were into
embracing-and-extending-and-buggering-up existing standards).
the drive-letters, however, were a fundamentally stupid design decision
that crippled MS-DOS/Win/NT's ability to make flexible use of disk
drives. On a unix box, if
you're running out of disk space, just add a new disk and mount it as
the directory where you need it (e.g. "/home", or "/export" or whatever)
- everything just keeps on working because nothing except the kernel
cares or even notices whether a subdirectory is just a subdirectory or a
mount-point for a different disk/partition.
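a tiny python illustration of that point (the paths are just examples):

    import os

    # to a program, /home is simply a directory. whether it also happens
    # to be the mount-point for a separate disk is invisible unless you
    # go out of your way to ask:
    print(os.path.ismount("/home"))   # True or False - the code below works either way

    # so paths like this need no reconfiguration when /home moves to a new disk
    data_dir = os.path.join("/home", "cas", "mail")
    print(data_dir)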
on MS-DOS etc, you can add a new drive but it can't just be added into
the directory tree at any point, because directory trees aren't global to
the system, they are local to each drive/partition...so the new drive
will get a new drive letter. Worse, programs typically expect to be on
the C: drive and to find all their configuration and data files there.
Reconfiguring software so that it can exist on another drive can be
anything from no-hassle to impossible, but is typically a major PITA.
even worse than that, there is a serious risk that adding a drive will
cause the drive letters of existing drives in the system to change,
breaking software that is no longer on the drive letter it used to be
on, or that can no longer find its data on E: because E: has been
renamed to G:.
(*) could be worse... Apple used ":" as the path separator in the
original Lisa and Mac and kept it until they switched to unix with Mac
OS X.
OK, that digression was a lot longer than i expected it to be. i'll get
back to the topic now.
> The // is preceded by a http: and the subsequent structure www.domain
> relates to sub-domains not directories.
it's a fair bit more involved than that. the basic format of a URI is:
<scheme name> : <hierarchical part> [ ? <query> ] [ # <fragment> ]
"scheme name" can mostly be thought of as the application protocol. e.g.
http, ftp, mailto, ldap, and many others.
"hierarchical part" is the location of the resource. most protocols
start with a "//" (e.g. http://, ftp://), but some don't (e.g. mailto:).
see http://en.wikipedia.org/wiki/URI_scheme for more details.
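as a concrete illustration, python's standard urllib.parse module splits
a URI into those parts (the URL here is made up):

    from urllib.parse import urlsplit

    u = urlsplit("http://www.example.com/docs/index.html?lang=en#top")

    print(u.scheme)    # 'http'              <scheme name>
    print(u.netloc)    # 'www.example.com'   the authority/host introduced by "//"
    print(u.path)      # '/docs/index.html'  rest of <hierarchical part>
    print(u.query)     # 'lang=en'           <query>
    print(u.fragment)  # 'top'               <fragment>

    # a scheme without the "//" has no authority part at all:
    print(urlsplit("mailto:cas@taz.net.au").netloc)   # ''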
> Today, browsers, and apparently thunderbird, assume www is a synonym
> for the "http://www" and has become a defacto standard, which is
> naturally called evolution.
actually, most browsers just use http as the default protocol,
regardless of whether the address begins with "www" or not. if you don't
specify a scheme, the browser will assume http and prefix "http://" to
whatever you typed.
many browsers will also try adding ".com" to the end of the URL if a DNS
search can't resolve the domain, and some will subsequently try adding
"www." to the front. e.g. type in "example" into a browser's window and
it will try, in sequence, "http://example", "http://example.com", and
"http://www.example.com"
some browsers will treat whatever you type as a search string for your
default search engine, either as a last resort after the above DNS-based
attempts or instead of them.
none of this is set in stone - it's entirely up to each browser what it
does with incorrectly-formed user input.
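a rough python sketch of that sort of guessing - purely illustrative,
no real browser is obliged to work this way, and the function name is
invented:

    import socket

    def guess_url(typed):
        # candidate hostnames, in roughly the order described above
        candidates = [typed, typed + ".com", "www." + typed + ".com"]
        for host in candidates:
            try:
                socket.getaddrinfo(host, 80)   # does the name resolve in DNS?
                return "http://" + host
            except socket.gaierror:
                continue
        return None   # a real browser might hand the string to a search engine here

    print(guess_url("example"))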
> The www rather than the http:// has become an indicator of a website.
that's because the internet and computers and pretty much every other
object or concept related to technology are magic black boxes that you
don't need to understand.
on a semi-related note, it really used to bug me how the ABC (and
others, but the ABC in particular) pronounced their domain. they'd say
"abc-dot net-dot au", rather than "abc dot-net dot-au". i guess someone
told them how stupid they were making themselves sound, because they
stopped doing that a few years ago.
> From a parallel discussion on SLUG I just learnt about file:///home/
file:// URLs are useful, and they also provide ample evidence that
most people see technology as magic black boxes and don't even try to
understand it.
they'll create a web page in FrontPage or Dreamweaver or some similar
piece of crap, upload it to their server, and then be completely
incapable of understanding that the site is broken for everyone but
them, because all of the IMG SRC URLs and many of the A HREF links
refer to something like "file:///C|/Documents%20and%20Settings/User%20Name/Desktop/Images/picture.gif"
- it works OK when they look at it on their desktop, so there can't possibly
be anything wrong with it.
this is related to the typical end-user's complete inability to
understand the difference between relative and absolute paths, or even
that they exist as concepts that need to be understood.
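for anyone who wants to see the difference concretely, a small python
sketch (the paths and the page URL are invented; older browsers wrote
the drive letter as "C|" instead of "C:"):

    from urllib.parse import quote, urljoin

    # the sort of absolute file URL a WYSIWYG editor leaves behind - it
    # only names a file on the author's own machine
    local = "C:/Documents and Settings/User Name/Desktop/Images/picture.gif"
    print("file:///" + quote(local, safe="/:"))
    # file:///C:/Documents%20and%20Settings/User%20Name/Desktop/Images/picture.gif

    # a relative reference is resolved against the page's own URL, so it
    # still works after the page and the image are uploaded together
    print(urljoin("http://www.example.com/pages/index.html", "Images/picture.gif"))
    # http://www.example.com/pages/Images/picture.gif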
craig
--
craig sanders <cas at taz.net.au>