Is it possible for AI to be conscious? - Page 3

Rapoza   Brasil. Apr 18 2016 00:17. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Liquid`Drone   Norway. Apr 18 2016 00:38. Posts 3093

basically, to what degree are humans conscious in the way we think computers cannot be?

lol POKER 

Expiate   Bulgaria. Apr 18 2016 12:11. Posts 236


  On April 17 2016 23:17 Rapoza wrote: My point is that if you can emulate ALL the characteristics of a human being, and make all those simulations work together, I think it is possible for a computer to simulate the thinking process the same way we do.

That is the idea behind strong AI, and nobody atm knows if it is possible.

Just a quick quote from wiki on functionalism for the people not familiar with the term:

  An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states are the corresponding functional role, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles.


Also, an illustration of multiple realizability:

  Putnam’s argument can be paraphrased as follows: (1) according to the Mind-Brain Type Identity theorist (at least post-Armstrong), for every mental state there is a unique physical-chemical state of the brain such that a life-form can be in that mental state if and only if it is in that physical state. (2) It seems quite plausible to hold, as an empirical hypothesis, that physically possible life-forms can be in the same mental state without having brains in the same unique physical-chemical state. (3) Therefore, it is highly unlikely that the Mind-Brain Type Identity theorist is correct.

In support of the second premise above—the so-called "multiple realizability" hypothesis—Putnam raised the following point: we have good reason to suppose that somewhere in the universe—perhaps on earth, perhaps only in scientific theory (or fiction)—there is a physically possible life-form capable of being in mental state X (e.g., capable of feeling pain) without being in physical-chemical brain state Y (that is, without being in the same physical-chemical brain state correlated with pain in mammals).
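
To put multiple realizability in programmer's terms (a loose analogy of my own, not part of the quotes above): it is the same idea as one interface with many implementations. The functional role is the interface; the physical medium is the implementation.

from abc import ABC, abstractmethod

# The functional role: on this view, anything that responds to damage
# in the right way counts as "being in pain", whatever it is made of.
class PainRole(ABC):
    @abstractmethod
    def on_damage(self) -> str: ...

class MammalBrain(PainRole):
    # carbon-based realization of the role
    def on_damage(self) -> str:
        return "withdraw, avoid, remember the stimulus"

class SiliconController(PainRole):
    # silicon-based realization of the same role
    def on_damage(self) -> str:
        return "withdraw, avoid, remember the stimulus"

# Different physical media, same functional role, hence (on functionalism)
# the same mental state:
for system in (MammalBrain(), SiliconController()):
    print(type(system).__name__, "->", system.on_damage())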


Now everything looks very fine for strong AI until Jaegwon Kim's famous Supervenience Argument appears: Kim's exclusion problem. Understanding the argument takes some time, but in short: most philosophers nowadays consider it sound. Its conclusion is that non-reductive physicalism entails epiphenomenalism, and epiphenomenalism is considered incoherent by the majority of philosophers. From this it follows that a strong AI won't have mental states similar to ours, which makes the life of the scientists trying to create strong AI even harder.
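
For anyone who doesn't want to dig through Kim himself, here is the skeleton of the exclusion argument as I would compress it (a paraphrase, not Kim's exact wording):

1) Supervenience: every mental property M has a physical base P; whenever P is instantiated, M is too.
2) Causal closure: every physical effect has a sufficient physical cause.
3) Exclusion: no physical effect has two distinct sufficient causes (no systematic overdetermination).
4) Suppose a mental state M causes a physical effect E. By 2), some physical P is already a sufficient cause of E; by 3), either M = P or M does not cause E after all.
5) Non-reductive physicalism insists M != P, so M causes nothing physical. And that is epiphenomenalism.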


  On April 17 2016 23:17 Rapoza wrote: I also strongly believe someone's mental states (beliefs, desires, propositional attitudes, qualitative experience) can all be predicted given enough information, and thus copied and simulated.

This raises the question of why they exist if there is nothing special about them. From an evolutionary point of view they should not have arisen in the first place if the brain can work just as well without them, because to predict something with 100% certainty you have to have complete information, and if you have that, then qualia are not important at all: with or without them the person/brain/AI will function the same way. So why do they exist? From a materialistic point of view it doesn't make any sense; from an idealistic one it does, because there is an agent experiencing them.

Note that I am neither for nor against the possibility of strong AI; I am just trying to look at the subject from all possible angles.

 Last edit: 18/04/2016 12:16

Rapoza   Brasil. Apr 18 2016 23:25. Posts 1612

--- Nuked ---

Pouncer Style 4 the win
 Last edit: 18/04/2016 23:30

Expiate   Bulgaria. Apr 18 2016 23:57. Posts 236

Well, I think you can share your explanation here, because it's related to the topic, and if needed later it could always be transferred to a new thread by a mod. Also, don't forget there might be people who'll find this topic some day through the search engines, and this info could help them just as the OP needs it for his research now, so this is better than a PM for sure.


Baalim   Mexico. Apr 19 2016 00:04. Posts 34246


  On April 17 2016 23:38 Liquid`Drone wrote:
basically, to what degree are humans conscious in the way we think computers cannot be?



this is a better way to formulate the question

Ex-PokerStars Team Pro Online 

FMLuser   Canada. Apr 19 2016 06:54. Posts 45


  This raises the question of why they exist if there is nothing special about them. From an evolutionary point of view they should not have arisen in the first place if the brain can work just as well without them, because to predict something with 100% certainty you have to have complete information, and if you have that, then qualia are not important at all: with or without them the person/brain/AI will function the same way. So why do they exist? From a materialistic point of view it doesn't make any sense; from an idealistic one it does, because there is an agent experiencing them.



The easy way out of dealing with qualia is to simply deny that they exist; Daniel Dennett's paper Quining Qualia does exactly that: http://www.fflch.usp.br/df/opessoa/Dennett-Quining-Qualia.pdf However, no matter how hard I try, I cannot deny qualia's existence, despite Dennett's coherent argument.

 Last edit: 19/04/2016 06:59

Rapoza   Brasil. Apr 19 2016 08:50. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Expiate   Bulgaria. Apr 19 2016 10:16. Posts 236


  On April 19 2016 07:50 Rapoza wrote:
I am convinced qualia do not exist...

Also explains why we don't enjoy eating the same food over extended periods of time.

Since you know how food tastes, you have experienced qualia.

In your second sentence you are saying that qualia are not absolute. A lot of the same substance (take chili, for example) changes your taste for it in a very short time. This shows that qualia are relative, which is exactly right and makes complete sense from an evolutionary point of view. But it doesn't mean they don't exist.

Our brain knows about qualia and can 'measure' them, just as we know the subjective experience (otherwise absurd things would happen). I don't think there are many philosophers atm denying the existence of qualia. Whether there are people who don't experience qualia at all is another topic and not the main idea here. The main idea is that we know about qualia from introspection; denying that makes no sense.

If qualia are epiphenomenal, as Dennett argues, we are faced with two problems. The first is the knowledge paradox:

  Given that we are capable of making knowledge claims about consciousness, we need to understand how consciousness could be relevant to the production of those claims. To connect consciousness to the production of our claims about it, somewhere in our explanation of our knowledge we will need to appeal to the effects of consciousness on brain states. Now these brain states are solidly physical, and we are assuming the causal closure of the physical, meaning that nothing nonphysical can make a causal difference. But if consciousness cannot affect brain states, it cannot play any part in producing our claims about it, and so it seems that we could not really know about consciousness. Yet we do know about it.

The second is this: if the brain evaluated the taste of, say, shit differently than you do (than what the agent experiences), we would end up with a case where the brain makes you eat shit even though you don't enjoy the taste and nothing else justifies it (you don't believe eating it will make you stronger; there's no pain-for-gain scenario).

 Last edit: 19/04/2016 12:03

Stroggoz   New Zealand. Apr 19 2016 14:04. Posts 5296

I've studied this subject a little bit and I think I can contribute some.

The terminology that philosophers and some scientists use, like 'physicalism' and 'functionalism', is simply useless for understanding the mind; in my view it leads philosophers off in the wrong direction. Here is why:

Physicalism is just whatever makes sense to us. If you go back to pre-Newtonian physics, philosophers of mind had the mind-body problem because they thought mechanics was confined to contact; there was no spooky 'action at a distance' in physics. But then Newton came along, Cartesian materialism became physicalism, and 'the physical' has simply meant whatever becomes part of the scientific understanding of the world ever since. So whatever we discover about the world from this point on is 'physical'. Therefore the doctrine of physicalism is useless for understanding the mind; it's equivalent to the doctrine that 'the mind is understood by understanding the mind.'

As for functionalism, there have been criticisms by philosophers like Ned Block, Putnam and so on which completely miss the point, because they take the doctrine of functionalism seriously. It's just another piece of terminology that doesn't help us understand anything about the mind. The mind is functional? It computes? OK, that really doesn't tell us anything.

As for multiple realizability, I don't find the particular thought experiment very persuasive, since we don't know enough about physiology to confirm it. I will wait for someone to find an organism which has a different physiology with the same mental states. It has yet to be done, and I don't see any good reason for supposing that it can be done.

Humans are conscious in every way that machines can't be. That's the one question here that is simple to answer: even a vague definition of consciousness allows us to say this is something machines don't have and humans do.


One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings 

Baalim   Mexico. Apr 19 2016 20:50. Posts 34246


  On April 19 2016 13:04 Stroggoz wrote:

Humans are conscious in every way that machines can't be. That's the one question here that is simple to answer





Yet you didn't answer it, so please let us know: in what ways are humans conscious that machines can never be?

Ex-PokerStars Team Pro Online 

Expiate   Bulgaria. Apr 19 2016 21:40. Posts 236

@Baalim: In the way that current mainstream philosophy of mind predicts: machines won't have mental states similar to our own (the explanation is in post #43), if they have any at all. This is the mainstream prediction, based on logic, by the people who work in the field of philosophy of mind.

 Last edit: 19/04/2016 21:42

Baalim   Mexico. Apr 19 2016 22:37. Posts 34246

I don't see any numbered post; care to post the quotes directly here?

Ex-PokerStars Team Pro Online 

Expiate   Bulgaria. Apr 19 2016 23:27. Posts 236


  On April 18 2016 11:11 Expiate wrote: Now everything looks very fine for strong AI until Jaegwon Kim's famous Supervenience Argument appears: Kim's exclusion problem. Understanding the argument takes some time, but in short: most philosophers nowadays consider it sound. Its conclusion is that non-reductive physicalism entails epiphenomenalism, and epiphenomenalism is considered incoherent by the majority of philosophers. From this it follows that a strong AI won't have mental states similar to ours, which makes the life of the scientists trying to create strong AI even harder.

This part, and what's above it, from that post. The logic will be hard to follow if you are not familiar with the concepts.

 Last edit: 19/04/2016 23:27

Baalim   Mexico. Apr 19 2016 23:40. Posts 34246

Yup, it doesn't say anything to me; care to explain?

Ex-PokerStars Team Pro Online 

asdf2000   United States. Apr 19 2016 23:54. Posts 7690

Since the discussion is still going, here is another question:

Why would we want to create AI that has a consciousness like ours?

Grindin so hard, Im smashin pussies left and right. 

FMLuser   Canada. Apr 20 2016 00:29. Posts 45


  On April 19 2016 22:40 Baalim wrote:
Yup, it doesn't say anything to me; care to explain?



Supervenience is a bit tough to explain, but it's where the upper level of a system is determined by its lower levels: society to person to cell to chemistry to physics. That's the general idea, but there is a lot more to it, as well as logical problems about how the top can change the lower level and the lower level can change the top. The other problem is that I can describe a society using only physics, but that doesn't give me a meaningful explanation of the society.
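
A toy way to picture the "determined by the lower levels" part in code (my own sketch, nothing more): the macro description is a pure function of the micro state, so two systems that agree at the bottom cannot differ at the top.

# Toy model of supervenience: the higher-level description is a pure
# function of the lower-level state.
def macro_state(micro_state: tuple) -> str:
    # e.g. classify a gas as hot or cold from its particle energies
    avg = sum(micro_state) / len(micro_state)
    return "hot" if avg > 50.0 else "cold"

a = (90.0, 80.0, 70.0)
b = (90.0, 80.0, 70.0)  # a micro-level duplicate of a
# Supervenience: no macro difference without a micro difference.
assert macro_state(a) == macro_state(b)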


Rapoza   Brasil. Apr 20 2016 00:39. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Expiate   Bulgaria. Apr 20 2016 00:43. Posts 236

I'll try to involve as few concepts as possible, but I have no idea what will come out of this.

There are 3 possible cases, if the creation of strong AI is possible at all:
1) P = M: consciousness cannot be implemented without neurons. If we have P1 = M1 for person 1 and P2 = M2 for person 2, where P1 != P2, it follows that M1 != M2.
2) P -> M: consciousness can be implemented in any physical medium, including computers. If we have P1 -> M1 for person 1 and P2 -> M2 for person 2, where P1 != P2, we can still have M1 = M2 (the multiple realizability scenario).
3) P ~ B = M: consciousness might be implemented in any physical medium, but it differs heavily depending on the type of brain/system.

P = physical activity, M = mental activity, B = whole brain/system activity

Case 1) means strong AI is possible only if you consider advanced bio-humans with chips in their heads to be strong AI, and I personally don't. Case 2) is denied by Kim's exclusion problem (which I can't explain without involving philosophical terms). So what's left for strong AI is case 3), where, as I wrote, the strong AI won't have mental states similar to ours. That doesn't automatically mean it would have zero consciousness, though; it might be less conscious, it might be more, no one knows, but its mental life will be quite different from ours.
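
If it helps, here is the trichotomy as a toy sketch in code (my own illustration; role_of is a made-up mapping from physical states to functional roles):

# P = physical state, M = mental state, B = type of brain/system.

def role_of(p: str) -> str:
    # Hypothetical mapping from a physical state to its functional role.
    return "pain-role" if p.startswith("damage") else "neutral-role"

def case1_identity(p: str):
    # 1) M just is P: different physical states, different mental states.
    return p

def case2_functionalism(p: str):
    # 2) M is fixed by functional role alone, so different P can share one M.
    return role_of(p)

def case3_medium_dependent(p: str, b: str):
    # 3) M depends on the role AND the medium, so silicon minds differ.
    return (role_of(p), b)

neural, silicon = "damage:c-fibers", "damage:error-register"
print(case1_identity(neural) == case1_identity(silicon))             # False
print(case2_functionalism(neural) == case2_functionalism(silicon))   # True
print(case3_medium_dependent(neural, "bio") ==
      case3_medium_dependent(silicon, "si"))                         # False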


Expiate   Bulgaria. Apr 20 2016 00:50. Posts 236

@asdf2000: Because it seems like the easier job to do.

@Rapoza: Yep, that is so. Also, don't forget that qualia (Q) are not detachable from cognition (say M): I can think and at the same time experience qualia. Together these construct consciousness (C), or C = M ~ Q.

 Last edit: 20/04/2016 00:56

 