[LINK] AI, consciousness & perception (was Machine Learning)

Jim Birch planetjim at gmail.com
Wed Aug 3 15:22:57 AEST 2016


On 2 August 2016 at 23:01, David Lochrin <dlochrin at key.net.au> wrote:

> Why not make the same claim about other people?  After all, they are just
> physical stuff - wet logic circuits - they couldn't possibly have conscious
> sensation.  I mean, how could it work?
>
> In this case we propose to construct an analogue of the brain (a neural
> network) using familiar electronic components, and of course ignoring
> actual feasibility. Then what do you suppose would happen?  There would
> certainly be lots of electrical activity, the network would go from one
> state to another, but would it become conscious?  If so, how?  You tell
> me...  And no appeal to "magic happens here" is allowed, we're looking for
> an explanation.
>

You haven't answered the question of why you can't make exactly the same
argument about other humans.  Your argument applies to them equally, and
unless you can answer that, it has a massive hole.

We can't "see" other people's subjective consciousness. That's the way it
is.  Their subjective self-consciousness is them seeing or sensing
themselves sensing.  It is a privileged position, almost by definition
unobservable.  We cannot physically place an observer where they will see
another person's consciousness directly from their point of view.  So we
can't "prove" it exists in that way.

However, despite this practical issue, no one seriously doubts other
people's consciousness.  (We can't see black holes directly either. We
infer them.)   It is actually pretty easy to determine that other people
are conscious.  We talk to them.  We model them.  We can check whether they
might be lying.  If you ask someone whether they see a red ball, the ball
is actually there, and they say they see it, then they are functionally
conscious of the red ball.  End of story.  Obviously, they might be blind
and lying, but we can check that out.  It's not that hard: move the ball
and they notice.  If you want to know whether they are self-conscious, you
ask them self-referential questions: do you only see the red ball, or do
you see the ball and know that you are seeing it?  All this is relatively
straightforward, at least compared to determining the existence, size and
location of black holes; just about anyone could do it.
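
To make "functionally conscious" concrete, here is a toy sketch of that
checking loop in Python.  Everything in it (the World and SightedAgent
names, the question matching, the scoring) is a hypothetical stand-in
for whatever channel you actually question the subject over; the point
is only that the test is an ordinary black-box procedure.

import random

class World:
    """The tester's ground truth: where the red ball actually is."""
    def __init__(self):
        self.ball_position = "left"

    def move_ball(self):
        self.ball_position = random.choice(["left", "right", "centre"])


class SightedAgent:
    """A stand-in subject that genuinely 'sees' the world state."""
    def __init__(self, world):
        self.world = world

    def ask(self, question):
        if "where" in question.lower():
            return self.world.ball_position
        if "seeing" in question.lower():
            return "yes"
        return "I don't understand"


def probe(agent, world, rounds=5):
    """Ask about the ball, perturb the world, and ask again; finish with
    a self-referential question.  Returns a rough confidence in [0, 1]."""
    consistent = 0
    for _ in range(rounds):
        world.move_ball()                    # tester perturbs the world
        if agent.ask("Where is the red ball?") == world.ball_position:
            consistent += 1                  # report matches ground truth
    # Self-reference: does it report on its own seeing, not just the ball?
    reflective = agent.ask("Do you know that you are seeing it?") == "yes"
    return (consistent / rounds) * (1.0 if reflective else 0.5)


world = World()
print(probe(SightedAgent(world), world))     # 1.0 for this toy subject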

Obviously, if you can do this with a human, you can do it with an alien or
an AI.  The language might be different and you might have to rephrase the
questions, but it is fundamentally the same process.

Where we appear to disagree is over this intangible extra component, which
you see as fundamental and I see as basically structural detail.  You seem
to think that there is a way for a human/AI/alien/whatever to be looking at a
red ball, be quasi-continuously aware of themselves as a separate distinct
entity looking at a red ball, to be able to report it in some fashion -
both to themselves and to others - and to be able to answer questions about
their experience, and yet not be conscious.  I see that as totally absurd.
That is consciousness.

Consciousness is not a yes/no condition but particular accumulations of
capabilities.  An AI will have a radically different sensing apparatus and
brain structure from yours, but (if and when it exists) if it can honestly
answer simple questions about its experience, it is conscious.  Is this a
trivial design problem? No, right now we don't know how to make such a
beast.   Will it see the same red as you do? No, different sensors,
different circuits.  Does it have an identical sense of being to yours?  Of
course not.  Could it be a much dumber program that has been designed just
to fool you? Sure, but keep asking questions and you can reach a higher
level of confidence.

This picture doesn't rely on magic as you are suggesting.  You might not
know how a fridge works, but you can easily tell that it cools things
down.  The hard problem of refrigeration is how it makes the radically
different property of coldness out of pipes, wires, motors and
compressors.  Fridges are understood; brains are not.  And fridges don't
do self-reference.  Yet.

Jim


