[LINK] Microsoft explains how the ANI bug got baked into Vista

Adrian Chadd adrian at creative.net.au
Mon Apr 30 12:09:18 AEST 2007


(Warning: lots of ranty material ahead.)

On Mon, Apr 30, 2007, Bernard Robertson-Dunn wrote:
> Adrian Chadd wrote:
> 
> >On Mon, Apr 30, 2007, Bernard Robertson-Dunn wrote:
> >
> >><brd>
> >>When I used to do assembly programming on Univac mainframes in the mid 
> >>1970s, they had a concept of privileged Operating System mode. This 
> >>meant that the Operating System had its own set of registers and some OS 
> >>only instructions. The consequences of this were two fold:
> >
> >
> >Current Intel CPU architecture has that too. In fact, it has -four-
> >protection rings to seperate privileged and non-priviliged processes.
> >The whole point of having protected mode is being able to setup a process
> >in its own "virtual machine" with limited interfaces to the rest of
> >the system.
> 
> I didn't think that the Intel architecture had a set of registers that 
> only the OS could use.  I thought that all registers were used by both 
> OS and application code, which is why buffer overflow hacks are possible.

It's not that at all. It's easy (and slow!) to save complete register state
between user space and the kernel, but I doubt that's the specific entry point
for bugs. I know the FreeBSD kernel, at least, doesn't use the extended
registers (SSE/SSE2/MMX/etc.) for anything in-kernel, so it doesn't have to
save those registers on entry to and exit from the kernel.

Oh, of course - some OSes (e.g. Linux) use registers for some or all of the
syscall arguments. That's still not the problem, though - the problem is getting
unchecked crap into the kernel and exploiting bugs in the software, not the fact
that the CPU doesn't have shadow registers for each protection ring.

There's just too much unaudited and poorly written code designed by humans
and implemented in languages with no native bounds checking. There's little
to no code layering, and most open source software practice doesn't mirror any
formal engineering process. "Managed code" has started popping up, which may
help, but:

* the OS isn't written in it;
* by all benchmarks the OS would crawl if all the existing features were
  written in it, and;
* some of the exploit vectors are language neutral.

Did you know, for instance, that the simplest way to script actions into a
Windows program was to record the messages travelling between it and the
keyboard/mouse (all your mouse movements and keypresses were just messages),
and if you knew the sequence of events you could simply replay them?

Did you then know that for quite some time the easiest way to get IE to do
"stuff" for you was to run a program at a low privilege level and send messages
to a program at a higher privilege level, getting it to do things like load and
run other code (the so-called "shatter attack")? That's part of a flawed design
that would break applications if fixed, since many existing applications assumed
they could exchange messages on a local machine without authentication or trust.

That's only the beginning of how horrid that particular environment is.
Add in the modern UNIX/Windows desktop's propensity for letting the user get
away, unchecked, with almost bloody murder (arbitrary connections to arbitrary
hosts? binding to local sockets? random filesystem access? unchecked local RPC
access?), which gives malicious software unfettered access to the box in the
first place. Imagine you're a virus writer: your first task is getting code onto
the box. Once you've done that, you can read the user's email, check the user's
network access, bind to a local port or connect out to an IRC botnet to await
commands, and begin sniffing for ways to escalate. All of this is in software.
None of it is because of hardware architectural limits.

> In the Univac architecture, there was no such thing as a buffer overflow 
> hack in the sense of application data being executed as either 
> application code or OS code.

So there wasn't any badly written OS code that let you overrun a user
buffer and inject nasty instructions? Or no way to run some code that
read the admin password out of a carelessly unprivileged operator's file store?
Etc., etc.

Computer scientists (which I'm -not- one of, just for the record) know how all
of this can be fixed. Take a look at the OLPC security model for starters. Take
a look at the kind of thing Vista is at least trying (though, since they have to
maintain some backward compatibility in places, they've still left themselves
open to various kinds of attack). There are capabilities in modern free and
commercial operating systems that require specific ACLs before you're able to
bind to service ports or connect out. Heck, 30-year-old operating systems such
as MULTICS would, IIRC, wrap privileged access (such as network access) in
libraries which filtered it before a program could do what it wanted.

Keeping the kernel small and the system modular should be the design goal of any
modern software engineer. It's a shame that what we're using isn't written that
way. (Or heck, most coders aren't approaching "engineer" by any stretch of the
imagination.)



Adrian




More information about the Link mailing list