https://www.liquidpoker.net/



Is it possible for AI to be conscious?

FMLuser   Canada. Apr 14 2016 22:26. Posts 45

So I shared a paper I was writing for a philosophy class with a couple of LPers, about whether it is possible to create a conscious AI, and got some mixed responses. I am not talking about computational intelligence, but rather an AI that has a mind similar to ours, or possibly that of an animal; an AI where we would have to say "okay, you're conscious, so we should give you some rights". So what does the rest of LP think: is strong AI possible? In the future, will we live in an age with computers that are conscious, or are they just machines that behave as if they are conscious?


Baalim   Mexico. Apr 14 2016 23:06. Posts 34246

Of course they can be conscious; consciousness is a byproduct of intelligence, and we can see it in varying degrees in different animals

Ex-PokerStars Team Pro Online 

FMLuser   Canada. Apr 14 2016 23:25. Posts 45


  On April 14 2016 22:06 Baalim wrote:
Of course they can be conscious; consciousness is a byproduct of intelligence, and we can see it in varying degrees in different animals



Can you clarify? My calculator can perform some pretty complex math that, if a person could perform it, we would consider them highly intelligent; so is my calculator conscious? Are you saying that only the intelligent animals are conscious, or are ants conscious too? Are microchips really analogous to living flesh?


gawdawaful   Canada. Apr 15 2016 00:06. Posts 9012

You should probably define what your view of consciousness is first, so people don't end up arguing from different ideas of what consciousness is or should be. For example, Pavlov demonstrated positive and negative reinforcement with dogs, so to me at least, a dog is conscious in knowing that certain actions have certain results or consequences. But do dogs know, when looking in a mirror, that it is their reflection? Do they need to in order for you to consider dogs conscious?

Im only good at poker when I run good 

Smuft   Canada. Apr 15 2016 00:10. Posts 633


  On April 14 2016 23:06 gawdawaful wrote:
You should probably define what your view of consciousness is first, so people don't end up arguing from different ideas of what consciousness is or should be. For example, Pavlov demonstrated positive and negative reinforcement with dogs, so to me at least, a dog is conscious in knowing that certain actions have certain results or consequences. But do dogs know, when looking in a mirror, that it is their reflection? Do they need to in order for you to consider dogs conscious?



+1


Santafairy   Korea (South). Apr 15 2016 00:37. Posts 2225

it's philosophy; you can't define what you mean, because then you'll figure out it's untestable

It seems to be not very profitable in the long run to play those kind of hands. - Gus Hansen 

FMLuser   Canada. Apr 15 2016 00:46. Posts 45

By conscious I mean: does the AI have a mind, or does it only simulate or appear to have a mind? Is strong AI possible, or only weak AI? If I say to the AI "I feel sad", does it understand the subjective mental state I have?


FMLuser   Canada. Apr 15 2016 00:55. Posts 45


  On April 14 2016 23:37 Santafairy wrote:
it's philosophy you can't define what you mean because then you'll figure out it's untestable



We can use necessary and sufficient conditions to define things like triangles or squares. It is conceivable that we could define necessary and sufficient conditions for what makes something conscious. However, there are very strong arguments put forward by some very smart people as to why an AI system does or does not have a mind.



It should also be something we work towards solving, because pretty soon AIs will be driving our cars and doing other things; if these AIs are conscious and have some sort of inner subjective mental life, then ethically speaking they should have rights


Big_Rob_isback   United States. Apr 15 2016 01:24. Posts 211

I dont think we are at that point with a.i. Just go watch the movie a.i. by Spielberg... the whole premise is a.i. consciousness and ethics. I really liked the movie, but I guess most people dont, idk why. I guess im a weird dude : )

just playing live poker for fun 

traxamillion   United States. Apr 15 2016 01:43. Posts 10468


  On April 14 2016 22:06 Baalim wrote:
Of course they can be conscious; consciousness is a byproduct of intelligence, and we can see it in varying degrees in different animals



Wrong. Consciousness has certainly not been quantified yet.


Baalim   Mexico. Apr 15 2016 02:02. Posts 34246


  On April 14 2016 22:25 FMLuser wrote:



Can you clarify? My calculator can perform some pretty complex math that, if a person could perform it, we would consider them highly intelligent; so is my calculator conscious? Are you saying that only the intelligent animals are conscious, or are ants conscious too? Are microchips really analogous to living flesh?


So can an abacus, or many methods of arithmetic; doing operations quickly isnt A.I. I dont think you could call ants conscious, but thats a matter of semantics, and Im not sure if thats what you are discussing; maybe you are, since this was exposed to philosophers, who have no fucking clue about A.I. and will probably only debate consciousness itself.

Ex-PokerStars Team Pro Online 

Baalim   Mexico. Apr 15 2016 02:04. Posts 34246


  On April 15 2016 00:43 traxamillion wrote:



Wrong. Consciousness has certainly not been quantified yet.


Not as a number (and neither is intelligence), but we can safely say that we are more aware than a cat, and a cat is more aware than a beetle, etc.

Ex-PokerStars Team Pro Online 

Smuft   Canada. Apr 15 2016 02:06. Posts 633


  On April 14 2016 23:46 FMLuser wrote:
By conscious I mean: does the AI have a mind, or does it only simulate or appear to have a mind? Is strong AI possible, or only weak AI? If I say to the AI "I feel sad", does it understand the subjective mental state I have?



this still seems way too vague a definition

any other examples besides "understanding sadness"?



Svenman87   United States. Apr 15 2016 02:22. Posts 4636

I think the definition changes depending on the subject you're talking about; an AI consciousness would be very different from a human consciousness imo. Maybe they could eventually reach a level where humans would not be classified as conscious.

I'm just hoping I can get plugged into the internets.


FMLuser   Canada. Apr 15 2016 02:32. Posts 45

Maybe a better way to frame the question is in terms of the Turing Test. One of the major goals is to have an AI pass the Turing Test:

https://en.wikipedia.org/wiki/Turing_test

So if an AI were to pass the Turing Test, would you consider it to have a mind?
Some people have objected to this, saying that computers only have an understanding of syntax and not semantics:

https://en.wikipedia.org/wiki/Chinese_room


Baalim   Mexico. Apr 15 2016 05:43. Posts 34246

The Turing test? Come on, that will probably take one or two decades at most.


I dont see how this is even a discussion; where do people think consciousness comes from? Magic? A soul?

Consciousness derives from intelligence and brain complexity; when A.I. has software and hardware sufficient for complex intelligence, it will become self-aware. The fact that this was discussed with philosophers and not computer scientists makes me lose hope; it will be an endless discussion about what consciousness is, with nothing actually to do with AI

Ex-PokerStars Team Pro Online  Last edit: 15/04/2016 06:19

Smuft   Canada. Apr 15 2016 05:56. Posts 633


  On April 15 2016 04:43 Baalim wrote:
The Turing test? Come on, that will probably take one or two decades at most.


I dont see how this is even a discussion; where do people think consciousness comes from? Magic? A soul?

Consciousness derives from intelligence and brain complexity; when A.I. has software and hardware sufficient for complex intelligence, it will become self-aware. The fact that this was discussed with philosophers and not computer scientists makes me lose hope.



i generally lean towards the points you are making but with much less confidence

considering the thread is struggling to even define the key word in the initial question, the answer to that question seems very open to interpretation

what definition of "consciousness" are you working with that gives you such a confident view on the subject?


FMLuser   Canada. Apr 15 2016 06:10. Posts 45


  On April 15 2016 04:43 Baalim wrote:
The Turing test? Come on, that will probably take one or two decades at most.


I dont see how this is even a discussion; where do people think consciousness comes from? Magic? A soul?

Consciousness derives from intelligence and brain complexity; when A.I. has software and hardware sufficient for complex intelligence, it will become self-aware. The fact that this was discussed with philosophers and not computer scientists makes me lose hope.



Fail to see how a computer scientist is more qualified to decide whether or not something has consciousness than a philosopher. And actually, in regards to the Turing test, it will likely be passed within the next couple of years. But does passing the Turing test show that something is actually conscious? Here is a thought experiment from John Searle, a professor at Berkeley:

"Searle argues that a good way to test a theory of mind, say a theory that holds that understanding can be created by doing such and such, is to imagine what it would be like to do what the theory says would create understanding. Searle (1999) summarized the Chinese Room argument concisely:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese."


auffenpuffer   Finland. Apr 15 2016 08:39. Posts 1429

i just came to cite the Chinese room argument, but I see that's taken care of.

As for the question of where consciousness comes from, I'd say that I don't know, but it most certainly won't come from a computer that is essentially a Turing machine (such as all the digital computers we possess). A Turing machine is a model of computation where execution proceeds in steps, each of which is determined by a symbol stored in memory and an input symbol. This is exactly the case the Chinese room is concerned with. Digital computers have main memory and a list of input symbols (i.e. program code is machine language stored in main memory), and execution proceeds by manipulating symbols in main memory according to set instructions (e.g. "if register 1 holds 001101010, move the contents of the operand register to the computing unit and set the negation flag in the operation register, then store the results"). These instructions are so lengthy and detailed that they can give the impression of something profound going on, while in actual fact everything is strictly determined by the program code. The Chinese room argument serves to make obvious that there is nothing more to it than following set rules very quickly.
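That "each step determined by the current state and the symbol under the head" picture is easy to make concrete. The sketch below runs a toy Turing machine purely by table lookup (the machine, which just flips bits and halts at the first blank, is invented for the example):

```python
# Minimal Turing machine: (state, symbol) -> (write, move, next_state).
# Every step is strictly determined by the transition table.
def run_tm(table, tape, state="start", steps=100):
    tape = dict(enumerate(tape))  # sparse tape; missing cells are blank "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = table[(state, tape.get(head, "_"))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy machine that flips every bit, then halts at the first blank.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(FLIP, "1011"))  # -> 0100_
```

Nothing in the loop is anything other than rule lookup, which is the sense in which a digital computer "only follows set rules very quickly".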

We have other models of computation too, some of which are not equivalent to a Turing machine. I'd say that to model consciousness we need a fundamentally different model, something that can escape Searle's counter-argument. Afaik there has not been much effort dedicated to devising such a model, because the theory of computation is mostly concerned with, well, modeling the computation of the machines we have, and not with speculating on the fundamentals of the human mind.

Anyway, I don't see the relevance of the Turing test at all, even if we set the Chinese room aside. A pocket calculator can tell us what 2+2 is, but we are not tempted to say it's conscious because of that. If the pocket calculator were able to tell us what its favorite food is, how the weather has been lately, what it thinks about the news of the day, etc., it would pass the Turing test, but I still would not be tempted to say it's conscious in any sense.

 Last edit: 15/04/2016 08:50

Baalim   Mexico. Apr 15 2016 09:17. Posts 34246


  On April 15 2016 04:56 Smuft wrote:



i generally lean towards the points you are making but with much less confidence

considering the thread is struggling to even define the key word in the initial question, the answer to that question seems very open to interpretation

what definition of "consciousness" are you working with that gives you such a confident view on the subject?



Well, if we dont assume a simple pragmatic meaning of "consciousness", then this thread is useless and should be renamed to "what is consciousness", which would be a pretty useless thread.

I guess I would broadly define it as self-awareness and self-improvement

Ex-PokerStars Team Pro Online  Last edit: 15/04/2016 09:34

Baalim   Mexico. Apr 15 2016 09:24. Posts 34246


  On April 15 2016 05:10 FMLuser wrote:



Fail to see how a computer scientist is more qualified to decide whether or not something has consciousness than a philosopher. And actually, in regards to the Turing test, it will likely be passed within the next couple of years. But does passing the Turing test show that something is actually conscious?


If you want to know what consciousness is, ask a philosopher (and he wouldnt be able to answer it in the first place); if you want to know the potential of A.I., self-awareness, singularity etc., you would ask a scientist, not a random philosopher who has no clue whatsoever about AI

And of course passing the Turing test doesnt mean anything; its a freaking test designed in the 1950s, and nobody thinks thats current. Googles Cleverbot can probably sustain a better conversation than some very ignorant people

Ex-PokerStars Team Pro Online  Last edit: 15/04/2016 09:30

Expiate   Bulgaria. Apr 15 2016 13:39. Posts 236

It's interesting to see that all posters in this thread automatically consider all human beings conscious. But is it really so? Does every single human being experience qualia? I don't think so.

@OP: Since you are interested in strong AI, computationalism and consciousness, check this thread out. You can find much q&a there. Also, this link.

Nobody atm knows if strong AI is possible. There are people who research this. I personally hope we won't ever reach a stage where we could design and create conscious entities. That would be too much power for our species. And like every power, it could be abused.


Joe   Czech Republic. Apr 15 2016 19:39. Posts 5987

Finding out how our brain produces consciousness is ongoing research, and likely not something that will be solved soon. It does not directly relate to brain complexity, however; an elephant has way more synapses than we do, for example, but its consciousness is most likely nowhere near our level.

It is believed that the part of the brain primarily responsible for it is the pre-frontal cortex, a part of the brain that helps us with analyzing and thinking about things before we make a decision.

As far as AI consciousness goes: I am currently studying artificial intelligence at a computer science university, and the more I know about it, the less I can think of a way that AI consciousness could be achieved, at least if we want it to mean the same thing as in humans. I believe we certainly can (and will) come up with complex programs that can learn what and how to learn by themselves, and probably come up with better decision making in just about anything than humans, but at this point I fail to see how such a program could come up with self-awareness.

If you check what the learning algorithms really are, you see they are simply sophisticated mathematical models that take some data, create a model (a decision tree, a vector of weights, probabilistic tables, ...) and then use it to classify new data or approximate some values.

Take for example one popular algorithm, the SVM (support vector machine). What it does (in simple terms) is take a lot of datapoints and try to find a hyperplane (a line in 2-D, a plane in 3-D, a hyperplane in X-D) that separates them into two parts (in the case of binary classification). Doing this you can solve a lot of decision-making problems, and theoretically in the future you could have the same algorithm learning inside a complex system that learns what to learn itself, etc. But how it is going to get consciousness, I just dont see.
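The hyperplane idea above can be made concrete with a stripped-down sketch (plain Python; the data points and all names are invented for the example). It minimizes the SVM hinge loss by sub-gradient descent, a toy stand-in for a real SVM solver rather than the full quadratic-programming formulation:

```python
# Toy linear "SVM": minimize hinge loss max(0, 1 - y*(w.x + b)) plus a
# small L2 penalty, by sub-gradient descent. Labels are +1 / -1.
def train(points, labels, lr=0.1, epochs=200, lam=0.01):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            w = [wi - lr * lam * wi for wi in w]  # regularization shrink
            if margin < 1:  # point inside the margin: push the plane
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

# Linearly separable toy data: class +1 top-right, class -1 bottom-left.
pts = [(2, 2), (3, 3), (-2, -2), (-3, -1)]
ys = [1, 1, -1, -1]
w, b = train(pts, ys)
print(predict(w, b, 4, 4), predict(w, b, -4, -4))
```

As Joe says, it is just arithmetic on weights; nothing here suggests a route from "separating hyperplane" to self-awareness.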

But certainly I am looking to the future to see whats possible :-)

there is a light at the end of the tunnel... (but sometimes the tunnel is long and deep as hell) 

Baalim   Mexico. Apr 15 2016 21:31. Posts 34246


  On April 15 2016 18:39 Joe wrote:
Finding out how our brain produces consciousness is an ongoing research and likely not one to be solved soon. It does not directly relate to the brain complexity however - an elephant has way more synapses than we do for example, but its consiousness is most likely nowhere near our level.

It is believed that the part of brain primarily responsible for it is the Pre-Frontal Cortex - a part of brain that helps us with thinking about things before we make a decision, analyzing.

As far as AI consciousness goes - I am currently studying Artificial intelligence at a computer science university and the more I know about it the less I can think of a way how AI consciousness can be achieved, at least if we want it to mean the same thing as in humans. I believe we certainly can (and will) come up with complex programs that can learn what and how to learn itself and probably come up with better decision making in just about anything than humans, but at this point I fail to see how the program could somehow come up with self-awareness.

If you check what the learning algorithms really are, then you see they are simply sophisticated mathematical models that take some data, create a model (a decision tree, a vector of weights, a probabilistic table(s), ...) and then use it to classify new data or aproximate some values.

Take for example one popular algorithm, SVM (Support vector machines). What it does (in simple terms) is that it takes a lot of datapoints and tries to find a hyperplane (a line in 2-D, a plane in 3-D, a hyperplane in X-D) that can separate them into 2 parts (in case of binary classification). Doing this you can solve a lot of decision making problems and theoretically in the future you could have the same algorithm learn in a complex system that learns what to learn itself etc. But how is it going to get consciousness I just dont see.

But certainly I am looking to the future to see whats possible :-)



Arent you greatly underestimating the word "possible"? Or maybe you are thinking in a short timeframe.

Like, if we ask whether we are going to be able to leave fossil fuels and have vast amounts of energy in the future: well yeah, its pretty much a certainty, and we will probably eventually get endless energy from the sun with a Dyson-sphere type thing, unless we find an even easier source of infinite energy. Its pretty much unimaginable, logistically, how it can be achieved, but thats pretty much how advancement works; people in the early 1900s wouldnt have been able to conceptualize most of our trivial technology

Ex-PokerStars Team Pro Online 

Smuft   Canada. Apr 15 2016 21:56. Posts 633



Seems like a very good talk on the subject, though probably too dense for a casual watch unless you're already very familiar with the vocabulary used in such discussions.

Kurzweil is the first questioner in the Q&A period, and I rarely see him give as much respect as he gave Mr. Searle here


asdf2000   United States. Apr 15 2016 23:19. Posts 7690


  On April 15 2016 01:04 Baalim wrote:



Not as a number (and neither is intelligence), but we can safely say that we are more aware than a cat, and a cat is more aware than a beetle, etc.



I don't think that's true. We can say we are more intelligent, but not more aware.

Is one person more aware than another person? How would we know?

How aware are you? You only base it on short term memory, right? So what if you are aware of everything at all moments but it isn't in your memory?


consciousness *certainly* has nothing to do with self awareness. one experiences things regardless of any sort of feeling of perspective.


Personally I am not so much a materialist, I don't believe consciousness "magically appears" at a certain level of complexity. I believe consciousness has a correlation with energy(matter) but that it doesn't arise from it.


To answer the original question, I would say that it is likely that AI already is conscious, because everything is conscious.

Grindin so hard, Im smashin pussies left and right.  Last edit: 15/04/2016 23:24

ClouD87   Italy. Apr 16 2016 00:08. Posts 524

piranhas are conscious as well, although I'm not very sure how they work


Baalim   Mexico. Apr 16 2016 00:24. Posts 34246


  On April 15 2016 23:08 ClouD87 wrote:
piranhas are conscious as well, although I'm not very sure how they work



Ex-PokerStars Team Pro Online 

Baalim   Mexico. Apr 16 2016 00:33. Posts 34246


  On April 15 2016 22:19 asdf2000 wrote:



I don't think that's true. We can say we are more intelligent, but not more aware.

Is one person more aware than another person? How would we know?

How aware are you? You only base it on short term memory, right? So what if you are aware of everything at all moments but it isn't in your memory?


consciousness *certainly* has nothing to do with self awareness. one experiences things regardless of any sort of feeling of perspective.


Personally I am not so much a materialist, I don't believe consciousness "magically appears" at a certain level of complexity. I believe consciousness has a correlation with energy(matter) but that it doesn't arise from it.


To answer the original question, I would say that it is likely that AI already is conscious, because everything is conscious.



We cant know for sure who is more conscious, in the same way we cant know for sure who is smarter, Stephen Hawking or you; first you will argue about what intelligence really is, maybe Hawking cant do X or Y that you can, and you will argue that IQ tests really dont tell shit... however, if we leave the bullshit aside, we can be pretty sure he is smarter than you. You can make the same arguments about consciousness, but bullshit aside its pretty obvious we are more conscious than a bug.

Thats why I said earlier, this thread is either a boring philosophical discussion about what consciousness is, or an actual discussion of the capabilities and future of artificial intelligence

Ex-PokerStars Team Pro Online  Last edit: 16/04/2016 01:00

Baalim   Mexico. Apr 16 2016 01:00. Posts 34246

BTW this made me think of a horror game im playing, SOMA, which kind of deals with uploaded consciousness. Has anyone played it?

Ex-PokerStars Team Pro Online 

traxamillion   United States. Apr 16 2016 07:04. Posts 10468

What if the brain is just a conduit for our consciousness to interact with our bodies


Rapoza   Brasil. Apr 16 2016 08:35. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

FMLuser   Canada. Apr 16 2016 18:15. Posts 45


  On April 16 2016 07:35 Rapoza wrote:
What a great video Smuft, thanks for sharing!

Consciousness, to me, is how someone perceives itself relative to the world around it.

I think machines will eventually be able to simulate consciousness in an observer-relative way, and at that point it won't matter anymore whether that is an intrinsic characteristic or not.
Our brain is made of water and other chemical reactions, which ultimately simulate consciousness; it isn't something that exists by itself. I see no reason why it can't also be reproduced through electronics.



I think the point that Searle is making in the video is that our brain produces consciousness using "water and other chemical reactions", which is different from simulating consciousness. He makes a good point during the question period, at around 56 min: we could create a machine that simulates digestion, but we don't then attempt to feed it a pizza.


uiCk   Canada. Apr 16 2016 19:54. Posts 3521


  On April 16 2016 06:04 traxamillion wrote:
What if the brain is just a conduit for our consciousness to interact with our bodies


That would mean that consciousness is an entity in itself.

I wish one of your guys had children if I could kick them in the fucking head or stomp on their testicles so you can feel my pain because thats the pain I have waking up everyday -- Mike Tyson 

traxamillion   United States. Apr 16 2016 20:49. Posts 10468


  On April 16 2016 18:54 uiCk wrote:


That would mean that consciousness is an entity in itself.



Yea I wouldn't rule it out


lebowski   Greece. Apr 16 2016 21:32. Posts 9205

^that's like talking about the soul in a pseudo-scientific way
it's a very weird way to look at it; it's not like we can imagine conscious or unconscious mental states without the brain

fascinating subject though
thread subject reminded me of this
http://onlinelibrary.wiley.com/doi/10.1111/j.0966-8373.2005.00220.x/pdf

"all becoming conscious involves a great and thorough corruption, falsification, reduction to superficialities, and generalization"
Nietzsche makes this claim after proposing that the main difference between conscious and unconscious processes of the brain is the act of conceptualizing the content of our experiences.
The main idea is that consciousness arises through the need for communication, and as the ability to conceptualize the world gets stronger, so does the degree of development of a conscious mind; if there were only one person on earth, consciousness wouldn't really be needed.
He then goes on about how the conscious interacts with the unconscious and eventually alters it.
Anyways, read it. My conceptualizing abilities (and/or consciousness?) were weaker before it

new shit has come to light... a-and... shit! man...  Last edit: 16/04/2016 23:45

uiCk   Canada. Apr 16 2016 22:43. Posts 3521


  On April 16 2016 19:49 traxamillion wrote:



Yea I wouldn't rule it out


Rule out the existence of "spirits" ? Wrong discussion bro.

I wish one of your guys had children if I could kick them in the fucking head or stomp on their testicles so you can feel my pain because thats the pain I have waking up everyday -- Mike Tyson 

Rapoza   Brasil. Apr 17 2016 00:35. Posts 1612

--- Nuked ---

Pouncer Style 4 the win  Last edit: 17/04/2016 00:36

FMLuser   Canada. Apr 17 2016 04:22. Posts 45


  On April 16 2016 23:35 Rapoza wrote:


Even if it is different, then what? If it can simulate "consciousness", it is impossible to tell the simulated and the "real" one apart.

To explain my point, let's imagine a better example: life.
Would you say a computer that can only answer simple questions with "Yes" or "No" is alive?
And what about the same machine, but one that imitates life EXACTLY like ours?
Now let's assume humanity finds a race similar to robots, with a society as complex as ours; could you say for sure they are not alive? If so, how do you do that?

I don't think consciousness is any different.



Thats the point that Searle is making: relative to the observer, the robot's behavior appears to be conscious. When we talk about an AI performing some action, I believe we are making a mistake by using "mental language" to describe its behavior, and noticing that might in some ways clear things up.

Consider a man who walks into a room, walks around in circles for several seconds, and then picks up an object and walks out of the room. When asked to describe the man's behavior, we would typically say something like "oh, he FORGOT (mental language) something and was trying to remember (more mental language) where he put it". If a robot AI were to perform the same set of actions as the man, we would likely say something similar, but we are making a linguistic mistake in describing what is occurring. Computers can simply access stored information. We call the stored information memory, but the way it operates is completely different from the way our memory operates. We also say things of our computers like "it's thinking" when the computer is taking a while to load something. We are attributing mental states to something that has no mental states (beliefs, desires, propositional attitudes, qualitative experience). The mental states are not an intrinsic property of the AI; it's only when we observe their behavior that we attribute mental states to them.


Expiate   Bulgaria. Apr 17 2016 11:47. Posts 236


  On April 17 2016 03:22 FMLuser wrote:
The mental states are not an intrinsic property of the AI, its only when we observe their behavior that we are giving them mental states.

Exactly. Just as most people, observing other people, attribute mental states to all of them. That is why I wrote in my previous post that I believe there are people who do NOT experience qualia. I can't prove it, because we don't have the tools yet, but if consciousness is one day solved, I predict that 10% of people will turn out to completely lack mental states and qualia, although their behavior will appear completely normal to the rest.


Rapoza   Brasil. Apr 18 2016 00:17. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Liquid`Drone   Norway. Apr 18 2016 00:38. Posts 3093

basically, to what degree are humans conscious in the way we think computers cannot be?

lol POKER 

Expiate   Bulgaria. Apr 18 2016 12:11. Posts 236


  On April 17 2016 23:17 Rapoza wrote: My point is that if you can emulate ALL characteristics of a human being, and make all those simulations work together, i think it is possible for a computer to simulate the thinking process in the same way we do.

That is the idea behind strong AI, and nobody atm knows if this is possible.

Just a quick quote from wiki on functionalism for the people not familiar with the term:

  An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states are the corresponding functional role, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles.


Also, an illustration of multiple realizability:

  Putnam’s argument can be paraphrased as follows: (1) according to the Mind-Brain Type Identity theorist (at least post-Armstrong), for every mental state there is a unique physical-chemical state of the brain such that a life-form can be in that mental state if and only if it is in that physical state. (2) It seems quite plausible to hold, as an empirical hypothesis, that physically possible life-forms can be in the same mental state without having brains in the same unique physical-chemical state. (3) Therefore, it is highly unlikely that the Mind-Brain Type Identity theorist is correct.

In support of the second premise above—the so-called "multiple realizability" hypothesis—Putnam raised the following point: we have good reason to suppose that somewhere in the universe—perhaps on earth, perhaps only in scientific theory (or fiction)—there is a physically possible life-form capable of being in mental state X (e.g., capable of feeling pain) without being in physical-chemical brain state Y (that is, without being in the same physical-chemical brain state correlated with pain in mammals).


Now everything looks very fine for strong AI until the famous Supervenience Argument from Jaegwon Kim appears: Kim's exclusion problem. To understand the argument one needs to put some time into it, but in short: most philosophers nowadays consider it sound. The conclusion from Kim's exclusion problem is that non-reductive physicalism entails epiphenomenalism. And epiphenomenalism is considered incoherent by the majority of philosophers. So it follows that strong AI won't have mental states similar to our species', making the life of the scientists trying to create strong AI even harder.


  On April 17 2016 23:17 Rapoza wrote: I also strongly believe someone's mental states like beliefs, desires, propositional attitudes, qualitative experience can all be predicted given enough information, and thus copied and simulated.

This raises the question of why they exist, if there is nothing special about them. From an evolutionary point of view they should not have arisen in the first place if the brain can work just as well without them, because to predict something with 100% certainty you have to have complete information. If you have it, this means qualia are not important at all: with or without them the person/brain/AI will function the same way. So why do they exist? From a materialistic point of view it doesn't make any sense. From an idealistic one it does, because there is an agent experiencing them.

Note that I am neither for nor against the possibility of strong AI. I am just trying to look at the subject from all possible angles.

 Last edit: 18/04/2016 12:16

Rapoza   Brasil. Apr 18 2016 23:25. Posts 1612

--- Nuked ---

Pouncer Style 4 the winLast edit: 18/04/2016 23:30

Expiate   Bulgaria. Apr 18 2016 23:57. Posts 236

Well, I think you can share your explanation here, because it's related to the topic, and if needed later it could always be transferred to a new thread by a mod. Also, don't forget there might be people who'll find this topic some day through the search engines, and this info could help them just as the OP needs it for his research now, so this is better than a PM for sure.


Baalim   Mexico. Apr 19 2016 00:04. Posts 34246


  On April 17 2016 23:38 Liquid`Drone wrote:
basically, to what degree are humans conscious in the way we think computers cannot be?



this is a better way to formulate the question

Ex-PokerStars Team Pro Online 

FMLuser   Canada. Apr 19 2016 06:54. Posts 45


  This raises the question of why they exist, if there is nothing special about them. From an evolutionary point of view they should not have arisen in the first place if the brain can work just as well without them, because to predict something with 100% certainty you have to have complete information. If you have it, this means qualia are not important at all: with or without them the person/brain/AI will function the same way. So why do they exist? From a materialistic point of view it doesn't make any sense. From an idealistic one it does, because there is an agent experiencing them.



The easy way out of dealing with qualia is simply to deny that they exist; Daniel Dennett's paper Quining Qualia does exactly that: http://www.fflch.usp.br/df/opessoa/Dennett-Quining-Qualia.pdf . However, no matter how hard I try, I cannot deny qualia's existence, despite Dennett's coherent argument.

 Last edit: 19/04/2016 06:59

Rapoza   Brasil. Apr 19 2016 08:50. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Expiate   Bulgaria. Apr 19 2016 10:16. Posts 236


  On April 19 2016 07:50 Rapoza wrote:
I am convinced qualia do not exist...

It also explains why we don't enjoy eating the same food over extended periods of time.

Since you know how food tastes you have experienced qualia.

In the second sentence you say that qualia are not absolute. A lot of the same substance, take chili for example, changes your taste for it in a very short time. This shows that qualia are relative, and that is exactly so; from an evolutionary point of view it makes complete sense. But it doesn't mean they don't exist.

Our brain knows about qualia and can 'measure' them, just as we know the subjective experience (otherwise absurd things would happen). I don't think there are many philosophers at the moment denying the existence of qualia. Whether there are people who don't experience qualia at all is another topic and not the main idea here. The main idea is that we know about qualia from introspection; denying that makes no sense.

If qualia are epiphenomenal as Dennett argues, we are faced with two problems. The first one is the knowledge paradox:

  Given that we are capable of making knowledge claims about consciousness, we need to understand how consciousness could be relevant to the production of those claims. To connect consciousness to the production of our claims about it, somewhere in our explanation of our knowledge we will need to appeal to the effects of consciousness on brain states. Now these brain states are solidly physical, and we are assuming the causal closure of the physical meaning that nothing nonphysical can make a causal difference. But if consciousness cannot affect brain states, it cannot play any part in producing our claims about it, and so it seems that we could not really know about consciousness. Yet we do know about it.

The second one is that if the brain evaluated the taste of shit, for example, differently than you do (than what the agent experiences), we would end up with a case where the brain makes you eat shit even though you don't enjoy the taste, and it is not justified in any other way (you don't believe eating shit will make you stronger; there is no pain-gain scenario).

 Last edit: 19/04/2016 12:03

Stroggoz   New Zealand. Apr 19 2016 14:04. Posts 5296

I've studied this subject a little bit and I think I can contribute some.

The terminology that philosophers and some scientists use, like 'physicalism' and 'functionalism', is simply useless for understanding the mind; it leads philosophers off in the wrong direction, in my view. Here is why:

Physicalism is just whatever makes sense to us. If you go back to pre-Newtonian physics, philosophers of mind had the mind-body problem because they thought that mechanics was confined to contact; there was no spooky 'action at a distance' in physics. But then Newton came along, Cartesian materialism became physicalism, and physicalism has simply been whatever becomes part of the scientific understanding of the world ever since. So whatever we discover about the world from this point on, it's 'physical'. Therefore the doctrine of physicalism is useless for understanding the mind. It's equivalent to agreeing with the doctrine that 'the mind is understood by understanding the mind.'

As for functionalism, there have been criticisms by philosophers like Ned Block, Putnam and so on, which completely miss the point because they take the doctrine of functionalism seriously. It's just another terminology that doesn't help us understand anything about the mind. The mind is functional? It computes? OK, that really doesn't tell us anything.

As for multiple realizability, I don't find the particular thought experiment very persuasive, since we don't know enough about physiology to confirm it. I will wait for someone to find an organism that has a different physiology with the same mental states. It has yet to be done, and I don't see any good reason for supposing that it can be done.

Humans are conscious in every way that machines can't be. That's the one question here that is simple to answer: even a vague definition of consciousness allows us to say this is something machines don't have and humans do.


One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings 

Baalim   Mexico. Apr 19 2016 20:50. Posts 34246


  On April 19 2016 13:04 Stroggoz wrote:

Humans are conscious in every way that machines can't be. That's the one question here that is simple to answer





Yet you didn't answer it, so please let us know: in what ways are humans conscious that machines can never be?

Ex-PokerStars Team Pro Online 

Expiate   Bulgaria. Apr 19 2016 21:40. Posts 236

@Baalim: In the sense that current mainstream philosophy of mind predicts machines won't have mental states similar to our own (the explanation is in post #43), if they have any at all. This is the mainstream prediction, based on logic, by the people who work in the field of philosophy of mind.

 Last edit: 19/04/2016 21:42

Baalim   Mexico. Apr 19 2016 22:37. Posts 34246

I don't see any numbered post, care to post the quotes directly here?

Ex-PokerStars Team Pro Online 

Expiate   Bulgaria. Apr 19 2016 23:27. Posts 236


  On April 18 2016 11:11 Expiate wrote: Now everything looks very fine for strong AI until the famous Supervenience Argument from Jaegwon Kim appears: Kim's exclusion problem. To understand the argument one needs to put some time into it, but in short: most philosophers nowadays consider it sound. The conclusion from Kim's exclusion problem is that non-reductive physicalism entails epiphenomenalism. And epiphenomenalism is considered incoherent by the majority of philosophers. So it follows that strong AI won't have mental states similar to our species', making the life of the scientists trying to create strong AI even harder.

This part, and the text above it in that post. The logic is hard to follow if you are not familiar with the concepts.

 Last edit: 19/04/2016 23:27

Baalim   Mexico. Apr 19 2016 23:40. Posts 34246

Yup, it doesn't say anything to me; care to explain?

Ex-PokerStars Team Pro Online 

asdf2000   United States. Apr 19 2016 23:54. Posts 7690

Since the discussion is still going, here is another question:

Why would we want to create AI that has a consciousness like ours?

Grindin so hard, Im smashin pussies left and right. 

FMLuser   Canada. Apr 20 2016 00:29. Posts 45


  On April 19 2016 22:40 Baalim wrote:
yup it doesnt say anything to me, care to explain?



Supervenience is a bit tough to explain, but it's where the upper level of a system is influenced by its lower levels: society to person to cell to chemistry to physics. That's the general idea, but there is a lot more to it, as well as logical problems about how the top can change the lower level and the lower level can change the top. The other problem is that I can describe a society in terms of physics alone, but that doesn't give me a meaningful explanation of the society.


Rapoza   Brasil. Apr 20 2016 00:39. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

Expiate   Bulgaria. Apr 20 2016 00:43. Posts 236

I'll try to involve as few concepts as possible, but I have no idea what will come out of this.

There are 3 possible cases, if the creation of strong AI is possible at all:
1) P = M: consciousness cannot be implemented without neurons. If we have P1 = M1 for person 1 and P2 = M2 for person 2, where P1 != P2, it follows that M1 != M2.
2) P -> M: consciousness can be implemented in any physical medium, including computers. If we have P1 -> M1 for person 1 and P2 -> M2 for person 2, where P1 != P2, we can still have M1 = M2 (the multiple realizability scenario).
3) P ~ B = M: consciousness might be implemented in any physical medium, but it differs heavily depending on the type of brain/system.

P = physical activity, M = mental activity, B = whole brain/system activity

Case 1) means strong AI is possible only if you consider advanced bio-humans with chips in their heads to be strong AI; I personally don't. Case 2) is denied by Kim's exclusion problem (I can't explain that without involving philosophical terms). So what's left for strong AI is case 3), where, as I wrote, the strong AI won't have mental states similar to ours; but this doesn't automatically mean it would have zero consciousness either. It might be less conscious, it might be more, no one knows; but its mental life will be quite different from ours.
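For case 2), the functionalist picture maps loosely onto a familiar programming idea. A minimal sketch (the classes are invented purely for illustration): the same functional role, here a simple counter, realized in two different 'media' that are indistinguishable at the functional level.

```python
# Two different "realizations" of the same functional role (a counter).
# Both classes are made up purely to illustrate multiple realizability.
class ListCounter:
    """Counter realized as a growing list."""
    def __init__(self):
        self._items = []
    def tick(self):
        self._items.append(None)
    def value(self):
        return len(self._items)

class IntCounter:
    """Counter realized as a plain integer."""
    def __init__(self):
        self._n = 0
    def tick(self):
        self._n += 1
    def value(self):
        return self._n

# Functionally identical: same inputs, same outputs, different substrate.
a, b = ListCounter(), IntCounter()
for _ in range(3):
    a.tick()
    b.tick()
assert a.value() == b.value() == 3
```

Of course, whether mental states are like counters is exactly what is in dispute; the analogy only illustrates what 'same function, different medium' means.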


Expiate   Bulgaria. Apr 20 2016 00:50. Posts 236

@asdf2000: Because it seems like the easier job to do.

@Rapoza: Yep, that is so. Also, don't forget qualia (Q) are not detachable from cognition (say M): I can think and at the same time experience qualia. Together these construct consciousness (C), or C = M ~ Q.

 Last edit: 20/04/2016 00:56

Rapoza   Brasil. Apr 20 2016 00:52. Posts 1612

--- Nuked ---

Pouncer Style 4 the winLast edit: 20/04/2016 00:59

asdf2000   United States. Apr 20 2016 01:22. Posts 7690

But it doesn't seem to require consciousness to do anything. If anything, the ego that is associated with human consciousness tends to get in the way of accomplishing the things that we want deep down.

Grindin so hard, Im smashin pussies left and right. 

Rapoza   Brasil. Apr 20 2016 01:35. Posts 1612

--- Nuked ---

Pouncer Style 4 the winLast edit: 20/04/2016 01:53

Rapoza   Brasil. Apr 20 2016 01:48. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

FMLuser   Canada. Apr 20 2016 03:22. Posts 45

Just because an AI is able to accelerate our evolution or solve problems in a clever way doesn't mean it has a mind or is conscious. AIs use formal symbol systems: the AI, or CPU, or whatever, is syntactic and does not use semantics to arrive at an output.
This may seem off topic, but it gets at the root of what I am talking about.

What a word refers to is not the same as its meaning. The most obvious example is with proper names, but it is true of more abstract things as well... Barack Obama, the President of the United States, and the tall dark-skinned man wearing a suit all pick out and refer to the same person, but they have different meanings. It's easy to see that 'Barack Obama' picks out the specific person, since it is his proper name; but when I say 'POTUS did X today' you understand who I am talking about because you understand the meaning of 'POTUS'. Now consider if you didn't understand the meaning of any words at all and looked in a dictionary to find out what word Y means: under the definition, Y = ABCD. To understand Y you then have to look up A, B, C and D, which leads you into an infinite loop. So why does this matter?
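The dictionary regress can be sketched in code. A toy example (the words and definitions are made up): every word is defined only by other words in the same dictionary, so a purely symbolic lookup just walks in circles and never bottoms out in meaning.

```python
# Toy dictionary: each word's definition is just other words in the
# same dictionary, so following definitions can only loop.
dictionary = {
    "Y": ["A", "B"],
    "A": ["C"],
    "B": ["D"],
    "C": ["Y"],  # ...and here the definitions circle back to Y
    "D": ["A"],
}

def lookup(word, chain=()):
    """Follow the first defining word until we revisit one (the loop)."""
    chain = chain + (word,)
    nxt = dictionary[word][0]
    if nxt in chain:
        return list(chain) + [nxt]
    return lookup(nxt, chain)

print(lookup("Y"))  # ['Y', 'A', 'C', 'Y'] - the chain never leaves the symbols
```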

When a computer or an AI does something, it uses a formal symbol system to get an output. Symbol systems take a symbol (which has no meaning, for example X) and manipulate it with other symbols and syntactic rules to produce a new symbol as output (x^2+y^2=z^2). The rules of symbol systems are based on the shape of the symbol, not its meaning. Now, I am just starting my cognitive science degree, but from my current understanding of neural networks, they use a complex mathematical function to arrive at an output. Even for classification problems like "is this a tree or a rock", the class is translated into a numerical value. So while this method is valuable and will likely improve life, it doesn't mean that the computer performing it is conscious or has a mind. The AI may be able to correctly identify what the word POTUS refers to, but it does not understand the meaning of the word, since it is only capable of syntactic manipulation. So while it may be possible to create an AI that simulates our behavior, we shouldn't be confused into thinking that this simulation is a duplication, since the process by which it arrives at the behavior is so different from ours.
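To make the 'syntax, not semantics' point concrete, here is a toy sketch of a one-neuron classifier (weights, inputs and labels are invented for illustration): the machine only ever computes with numbers, and the label 'tree' or 'rock' is attached by us after the fact.

```python
import math

def classify(features, weights, bias):
    """Weighted sum plus a squashing function: pure arithmetic, no meaning."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: a number in (0, 1)

# The mapping from numbers to words lives outside the computation.
labels = {0: "rock", 1: "tree"}

score = classify([0.9, 0.2], weights=[2.0, -1.0], bias=-0.5)  # about 0.75
label = labels[round(score)]  # we, not the machine, read this as "tree"
```

Nothing in `classify` touches what 'tree' means; the same arithmetic would run unchanged if the labels were swapped.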

 Last edit: 20/04/2016 10:23

uiCk   Canada. Apr 20 2016 06:03. Posts 3521

So would you (or whoever) say that we could not replicate the "process" that is consciousness (at any level of complexity, assuming other organisms have consciousness) in what we would call a robot?


Would you also say that humans have a ceiling as to what they can replicate synthetically? After all, we have been synthetically replicating our own biological systems step by step over time, in what we call "machines", or more vaguely, technology as a whole.
And at that point, would we say that consciousness is a part of our biological process, or are consciousness and biology separate?

I think the question is not "if" , but "how".

IMO

I wish one of your guys had children if I could kick them in the fucking head or stomp on their testicles so you can feel my pain because thats the pain I have waking up everyday -- Mike Tyson 

Baalim   Mexico. Apr 20 2016 06:20. Posts 34246

I think these answers are extremely narrow-sighted, and in reality are only answering "could my laptop be conscious?"

Why would you assume that in the distant future AI can only work through input A - response B and could in no way remotely understand semantics? I mean, we have evidence that RNA evolved into conscious beings through a very simple survival-of-the-fittest process, and yet you can't imagine a process like that ever happening in any other form or organism that isn't protein-based? Come on.

Ex-PokerStars Team Pro Online 

Expiate   Bulgaria. Apr 20 2016 09:42. Posts 236

Very nice explanation from FMLuser of how current AI works, thanks.

Baalim, nobody is narrow-sighted; it's just that our understanding of consciousness is too limited as of 2016. I can imagine a future world in which we are all only energy. What we are lacking is the process of how to reach that world.


  On April 20 2016 00:35 Rapoza wrote: I can make an analogy that 2 different persons cannot have the exact same brain state, yet they might share the exact same consciousness about something, though their qualia will be different.

Yeah, I knew you were going in that direction, and that is why I told you that you can't separate M and Q.

J. Kim believes that we can have scenario like the one you suggest:

  Are mental properties physically reducible? Yes and no: intentional/cognitive properties are reducible, but qualitative properties of consciousness, or 'qualia,' are not.

The problem is that he was strongly criticized for that, because such a variant would break the unity of consciousness. Here is the explanation in short:

1) qualia (Q) are multiply realizable and epiphenomenal
2) the rest (M) are reducible and genuine properties

Let's consider the following case:
P1 = M1 [~ Q] -> P2 = M2

If we want M1 to be a subset of Q we have a paradox:
P1 and Q are different properties (from 1 and 2), P1 is identical with M1 and both share the same domain (from 2), so M1 can't be a subset of Q. Thus at time t we have multiple mental states.

The other case is the current M state to be interrupted every time a Q state appears or vice versa:
P1 [~ Q] -> P2 = M2 -> P3 [~ Q] -> P4 = M4 -> ...

So if we want to have epiphenomenal qualia in reductionist view, we are faced with two incoherent options - multiple mental states or some kind of interruption.


Baalim   Mexico. Apr 21 2016 00:04. Posts 34246


  On April 20 2016 08:42 Expiate wrote:

Baalim, nobody is narrow-sighted; it's just that our understanding of consciousness is too limited as of 2016



Then it doesn't make any sense to say it cannot happen.

Ex-PokerStars Team Pro Online 

asdf2000   United States. Apr 21 2016 01:41. Posts 7690


  On April 20 2016 00:48 Rapoza wrote:
Show nested quote +


The consciousness, alongside creativity and many other things, is why we want things in the first place; otherwise we would be as active as a rock.
For example, we need creative solutions to our current society's problems. The AI as we know it would just answer: "Recycle your stuff".
If an AI could think, it could reach a reasonable and accurate conclusion much, much faster, because it can process and simulate data better than our limited chemical brain.



I'm not sure consciousness is why we want things; rather, it is a manifestation of our wanting things.

I see what you are saying, but I am still not convinced consciousness has a role to play in AI.

Actually, I feel like it kind of takes the problems with the complexity involved in programming a strong AI and tries to bypass them by going "eh, we'll just figure out how to make it conscious". I don't think that's relevant to the strengths of AI. I believe that strong AI will be about writing an extremely complex, elegant program that is recursive: it will be able to evolve its own code at an exponential rate. Whether or not it's conscious will be irrelevant (and probably unknowable).

Grindin so hard, Im smashin pussies left and right. 

Baalim   Mexico. Apr 21 2016 08:44. Posts 34246


  On April 21 2016 00:41 asdf2000 wrote:



I'm not sure consciousness is why we want things; rather, it is a manifestation of our wanting things.

I see what you are saying, but I am still not convinced consciousness has a role to play in AI.

Actually, I feel like it kind of takes the problems with the complexity involved in programming a strong AI and tries to bypass them by going "eh, we'll just figure out how to make it conscious". I don't think that's relevant to the strengths of AI. I believe that strong AI will be about writing an extremely complex, elegant program that is recursive: it will be able to evolve its own code at an exponential rate. Whether or not it's conscious will be irrelevant (and probably unknowable).



Well, if the computer becomes an all-knowing demi-god, wouldn't it perfectly know whether it is conscious or not under our own narrow definition of the word?

Ex-PokerStars Team Pro Online 

auffenpuffer   Finland. Apr 21 2016 09:34. Posts 1429


  I think these answers are extremely narrow-sighted, and in reality are only answering "could my laptop be conscious?"

Why would you assume that in the distant future AI can only work through input A - response B and could in no way remotely understand semantics? I mean, we have evidence that RNA evolved into conscious beings through a very simple survival-of-the-fittest process, and yet you can't imagine a process like that ever happening in any other form or organism that isn't protein-based? Come on



Well sure, but it's important to keep in mind that we have made exactly zero progress towards this over the last 100 years of computer science. When people talk about AI and its potential, they often assume that we are discussing something at least remotely relevant to the body of knowledge we call AI research, and to the algorithms this research program has generated over the years. For strong AI we need something fundamentally different from the computers we have. While it is reasonable to speculate that in the future someone will come up with a technology literally no one can imagine the details of today, it is a distant dream, something like colonizing Alpha Centauri.

 Last edit: 21/04/2016 09:35

Stroggoz   New Zealand. Apr 22 2016 00:24. Posts 5296


  On April 19 2016 19:50 Baalim wrote:



Yet you didn't answer it, so please let us know: in what ways are humans conscious that machines can never be?


Machines are just objects which take inputs and produce outputs. There is no reason to think they can be conscious like a human being, especially when science hardly understands how human beings are conscious.

I agree with the mode of reasoning that the more unlikely a hypothesis is, the more reason you need to believe it.


  On April 20 2016 23:04 Baalim wrote:



Then it doesn't make any sense to say it cannot happen


Yeah it does, because there's no evidence to show that it can happen, and it's simply ridiculous to suggest we could create strong AI with such a limited understanding.

One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings 

Baalim   Mexico. Apr 30 2016 09:29. Posts 34246


  On April 21 2016 23:24 Stroggoz wrote:



Machines are just objects which take inputs and produce outputs. There is no reason to think they can be conscious like a human being, especially when science hardly understands how human beings are conscious.

I agree with the mode of reasoning that the more unlikely a hypothesis is, the more reason you need to believe it.


  On April 20 2016 23:04 Baalim wrote:



Then it doesn't make any sense to say it cannot happen


Yeah it does, because there's no evidence to show that it can happen, and it's simply ridiculous to suggest we could create strong AI with such a limited understanding.



No it doesn't, unless we were talking about a specific time frame, like "in the next 100 years".

Ex-PokerStars Team Pro Online 

maryn   Poland. May 01 2016 01:36. Posts 1208


 Last edit: 01/05/2016 01:43

MARSHALL28   United States. May 01 2016 23:26. Posts 1897

Does a computer have consciousness? Obviously not ...

But just wait til the singularity. There's a good chance it happens in your lifetime if you are under 40.


Rapoza   Brasil. May 17 2016 05:14. Posts 1612

--- Nuked ---

Pouncer Style 4 the win 

lebowski   Greece. May 02 2017 21:57. Posts 9205

bump
http://www.iflscience.com/brain/man-missing-most-of-his-brain-challenges-everything-we-thought-we-knew-about-consciousness/all/
interesting article

new shit has come to light... a-and... shit! man... 

whamm!   Albania. May 03 2017 00:54. Posts 11625

If the AI starts beating FlaSh's T as Zerg then I would agree that it can be conscious


 



Copyright © 2024. LiquidPoker.net All Rights Reserved