RiKD   United States. Mar 28 2023 03:55. Posts 9031
This can be a general topic.
It feels like AI is saturating the world faster and faster. I feel like I am falling behind. I am already attached to my AI DJ on Spotify. I would love self-driving cars. In fact, I would probably love AI taking care of most things if it didn't mean dystopia. What do you think?
Last edit: 28/03/2023 04:08
1
lostaccount   Canada. Mar 28 2023 11:54. Posts 6209
AI love acceleration is best
but Elon has been right all along about AI, too bad not many listen to him and people just do whatever they want.
here is some love ai
as hiems said, I mess up his AI videos
run good play good GL
Last edit: 28/03/2023 16:01
1
Stroggoz   New Zealand. Mar 28 2023 18:54. Posts 5329
The field needs its name changed from "A.I" to "brute force statistical machines" so that people can stop making hyperbolic claims about it.
One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings
4
Baalim   Mexico. Mar 29 2023 03:26. Posts 34262
On March 28 2023 17:54 Stroggoz wrote:
The field needs its name changed from "A.I" to "brute force statistical machines" so that people can stop making hyperbolic claims about it.
So experts raising concerns about the dangers of climate change are right, but experts raising concerns about the dangers of A.I. are being hyperbolic?
While climate change catastrophes are more likely, the A.I. threat is at an extinction level. At the moment the digital and physical worlds are too disconnected, and our robotics too rudimentary, for an A.I. to wage physical war on mankind, but we are quickly transitioning into fully digital warfare machines, so this threat will grow exponentially as we digitalize more and more.
Anyway, if A.I. is built with proper safeguards it sounds awesome. Can't wait for A.I. vision to be functional; the leap in robotics will be crazy.
Ex-PokerStars Team Pro Online
1
Stroggoz   New Zealand. Mar 29 2023 07:32. Posts 5329
Nah, I agree the concerns about ChatGPT and A.I and such are real. Personally, I think it could be harnessed for mass misinformation and social control more than anything else. A.I already does a lot of that through recommender systems, but it's getting far worse. There are always unforeseeable threats that A.I poses in the future as well. It will all depend on how it's used and the politics behind it.
I mean the hyperbolic claims that we are "close" to human-like intelligence, which very few experts agree with.
One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings
Last edit: 29/03/2023 07:40
1
lostaccount   Canada. Mar 29 2023 18:40. Posts 6209
I think it's real, since they deleted the evidence though.
run good play good GL
1
Loco   Canada. May 10 2023 18:19.
A.I. has no chance of replacing genuine human creativity. Its only real use is in improving corporate profits, surveillance, those kinds of things. It will never be "more intelligent than humans" because it is still determined by computing laws, which is not how human brains work. It's old, outdated thinking that still has people believing that the mind is essentially a computer and that all that matters is how much bandwidth it has and how many things it can compute at the same time. It's very naive and misinformed.
The kind of thing that Elon talks about is a dystopian fantasy. Only people who are really disconnected from reality can entertain the thought that a computer is more dangerous than nukes and climate change. A computer cannot do shit and be "let free" on the world. It can't rebuild itself; it can't even plug itself into an outlet when it's out of power (or nearly out of power). It does not have that kind of intelligence; it constantly needs support from human beings for its very limited existence. A robot can be sent to autocharge, but it doesn't know that it's going to charge itself; it cannot sense this in its environment unless it is programmed to interact with platforms that can charge it. In this dystopian world there would need to be a lot of nearly indestructible charging platforms everywhere for the robots to recharge while they wage war on us; the mental image of that is absolutely ludicrous.
Part of me wants to give them credit and say "they know that they are full of shit and they're just distracting the populace from how much power they have for as long as they can", but that'd be granting them too much intelligence and self-awareness, which they likely don't have.
fuck I should just sell some of my Pokemon cards, if no one stakes that is what I will have to do - lostaccount
Last edit: 10/05/2023 19:23
1
RiKD   United States. May 10 2023 19:34. Posts 9031
On May 09 2023 23:16 RiKD wrote:
Not really and especially no with your asinine YouTube videos and bullshit.
not to defend the retarded shaman lion music but you are kinda the king of asinine youtube music videos lol
My music-video-to-blog-post ratio is nothing spectacular, and "asinine" is a strong word for posting music that I like. If the site were still active, it would be interesting to run a poll on whether people liked, disliked, or were indifferent to the music I posted.
1
Stroggoz   New Zealand. May 11 2023 04:39. Posts 5329
On May 10 2023 18:19 Loco wrote:
A.I. has no chance of replacing genuine human creativity. Its only real use is in improving corporate profits, surveillance, those kinds of things. It will never be "more intelligent than humans" because it is still determined by computing laws, which is not how human brains work. It's old, outdated thinking that still has people believing that the mind is essentially a computer and that all that matters is how much bandwidth it has and how many things it can compute at the same time. It's very naive and misinformed.
The kind of thing that Elon talks about is a dystopian fantasy. Only people who are really disconnected from reality can entertain the thought that a computer is more dangerous than nukes and climate change. A computer cannot do shit and be "let free" on the world. It can't rebuild itself; it can't even plug itself into an outlet when it's out of power (or nearly out of power). It does not have that kind of intelligence; it constantly needs support from human beings for its very limited existence. A robot can be sent to autocharge, but it doesn't know that it's going to charge itself; it cannot sense this in its environment unless it is programmed to interact with platforms that can charge it. In this dystopian world there would need to be a lot of nearly indestructible charging platforms everywhere for the robots to recharge while they wage war on us; the mental image of that is absolutely ludicrous.
Part of me wants to give them credit and say "they know that they are full of shit and they're just distracting the populace from how much power they have for as long as they can", but that'd be granting them too much intelligence and self-awareness, which they likely don't have.
I agree that A.I will never really replace humans or true human creativity. In fact, I think A.I is the exact opposite of what is generally considered to be "intelligence". It solves problems using brute force, rather than using a few principles to solve a problem efficiently. It uses methods that scientists and ordinary people alike would generally consider the stupidest available. I personally do not think AGI will ever be achieved, and things like ChatGPT do not bring us any closer, because they use methods that don't draw on any insight into how human psychology works. Like I've said earlier, the field of research could easily be called "Brute Force Inference Algorithms", or something dull like that, and it would be much harder to convince people that "Brute Force Inference Algorithms" are going to be as intelligent as humans in the future. This could be an effective hype killer.
However, I think A.I has more uses than just improving corporate profits and surveillance. Think of email spam classification. That is a basic innovation by A.I that is clearly a huge benefit to human civilization. There are no corporate profits to be made off it, because it's something that can easily be solved in a few lines of code by anyone with a basic understanding of machine learning. Yet it saves people billions of hours of work. It's a fine example of automation. There are lots of great things that A.I can do, and whether they further corporate profits or further us towards fully automated luxury communism is a political choice. There's also a lot of A.I that doesn't involve surveillance. I'm currently making a program that automatically suggests academic citations to people writing in a text editor. I have not needed to use surveillance for it, nor reinforcement learning. I think surveillance is justifiable if people agree to it. In cases where it's just being used to improve a product, many would agree that it's ok and click ok on the agreement thingy, lol. But the things that Facebook does are pure evil. Everyone on this forum has had their political views shaped by A.I to some extent.
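To give a sense of how little code that spam example needs, here is a rough sketch with scikit-learn. The toy emails and labels are invented for illustration; a real filter would train on thousands of labeled messages:

# Minimal spam classifier sketch (toy data invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "You have won the Nigerian lottery, claim your prize now",
    "Meeting moved to 3pm, see the agenda attached",
    "Cheap pills, click this link immediately",
    "Can you review my draft before Friday?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()        # bag-of-words features
X = vectorizer.fit_transform(emails)
clf = MultinomialNB().fit(X, labels)  # naive Bayes over word counts

test = vectorizer.transform(["Congratulations, you won a prize, click here"])
print(clf.predict(test))              # expected: [1], i.e. flagged as spam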
Elon's views on these bizarre dystopian fantasies are not really accepted by the scientific community, or by most of the engineering community. I'm guessing many don't publicly criticize his wacky ideas because he is a potential source of funding. I think it's generally younger or less educated people who buy into this cult of AGI or cult of Elon. It's kind of strange that people like Ray Kurzweil buy into it, because he clearly knows a lot about A.I, so he should be able to see it for what it is. Though perhaps it's not so strange when you look at the long history of powerful people thinking they can achieve immortality. I'm guessing most engineers think Elon is a massive douchenozzle: https://www.reddit.com/r/AskEngineers...3j0yz/what_do_you_guys_think_of_elon/ https://www.reddit.com/r/AskEngineers...consider_elon_musk_to_be_an_engineer/
I'm very skeptical that A.I could be used to make fully self-driving cars (at the level of no steering wheel). Generally, A.I achieves accuracy in the 95-99% range on most prediction tasks. Humans don't crash when driving around 99.999% of the time, which is an enormous difference from 99% when it comes to prediction. A.I therefore only has real-world applications when the consequences of failure don't matter that much. So I don't really understand why Google and Tesla and other companies are putting so much money into self-driving cars when it could be put elsewhere. The same goes for any other task with deadly consequences when prediction fails. This is common sense, really.
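To put rough numbers on that gap (assuming, hypothetically, one safety-critical prediction per second of driving; the rate is made up for illustration):

# Why 99% accurate is nowhere near 99.999% for driving.
decisions_per_hour = 60 * 60  # one safety-critical prediction per second

for accuracy in (0.99, 0.99999):
    errors = decisions_per_hour * (1 - accuracy)
    print(f"{accuracy}: ~{errors:.2f} expected errors per hour of driving")

# 0.99:    ~36.00 expected errors per hour
# 0.99999: ~0.04 expected errors per hour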
I will say this about the threat of things like chatgpt. Currently it is very easy for a software engineer to make their own chatgpt bot. There are lots of guides on youtube. What openA.I has done, it's not actually using anything new. They are using algorithms developed by google around 2016~ (The famous paper is called "attention is all you need", it's on Arxiv). A lone software engineer cannot afford the kind of database and data extraction that openA.I can, so they cannot train their bot to be as good as the one Open A.I uses. I see this not being the case in a decade or two. Individuals will be able to essentially download the whole 2021 database of the internet onto their own cloud or server and it will cost say $100 a month to run. This is probably not a great development for making a reality-based internet where people trust each other. We have websites like kaggle and common-crawl that sell or even freely give away enormous amounts of cleaned data, and google now has their own search engine for datasets.
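For anyone curious, the core operation from that paper, scaled dot-product attention, fits in a few lines of numpy. A bare sketch with toy shapes, no batching, masking, or multiple heads:

# Scaled dot-product attention, the building block from "Attention Is All You Need".
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query matches each key
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8)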
Also, the people saying A.I will replace people's jobs are wrong. Unemployment rates are literally just a political choice. But even if we agree that capitalism is some law of nature (it isn't), it's kind of bizarre that over and over again we hear about automation being a threat to employment, when the unemployment rate has basically hovered between 0-20% for the last 200 years while the share of people working in agriculture has gone from 80% of the population down to about 1%.
One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings
Last edit: 11/05/2023 07:36
1
whammbot   Belarus. May 11 2023 15:25. Posts 523
Digital marketing is going to take a huge dick up the ass. It already is, btw; writers, copywriters, and content hubs have been massively disrupted by this. I use OpenAI directly and pay for output, and it's ridiculous how fast everything is changing. Art, music, video, and writing jobs will be greatly affected before the year ends.
1
Loco   Canada. May 11 2023 16:50.
I'm not savvy on the technology behind spam filtering, but I'd assume it's necessary to surveil all your emails in order to do a good job, so I'd toss it into the "improves surveillance" pile. But yes, more surveillance is not always a bad thing; the problem is when it's top-down and we think we are free to accept it when we really aren't.
We need some good spam filtering on LP so that 99% of lostaccount's posts are nuked. : )
fuck I should just sell some of my Pokemon cards, if no one stakes that is what I will have to do - lostaccount
1
Stroggoz   New Zealand. May 12 2023 03:22. Posts 5329
If it's using a basic ML algorithm (not deep learning), it's a machine reading the text in an email, looking for various grammatical and semantic patterns: dodgy hyperlinks, specific words like "Nigerian", "prince", and "lottery", in order to figure out whether it's spam or not. Is that much different from a thermometer reading the temperature of a room? I wouldn't really call it surveillance, at least by Shoshana Zuboff's definition.
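Roughly, the hand-written features would look something like this. The keyword list and signals are invented examples, not what any real filter uses:

# Hand-rolled feature extraction: no learning, just reading the text for
# suspicious signals. Keywords and signals invented for illustration.
import re

SPAM_WORDS = {"nigerian", "prince", "lottery", "winner"}

def extract_features(email_text):
    words = re.findall(r"[a-z]+", email_text.lower())
    return {
        "spam_word_hits": sum(w in SPAM_WORDS for w in words),
        "link_count": len(re.findall(r"https?://", email_text)),  # dodgy hyperlinks
        "shouting_words": sum(w.isupper() and len(w) > 2 for w in email_text.split()),
    }

print(extract_features("Dear WINNER, a Nigerian prince left you a lottery prize: http://totally.legit"))
# {'spam_word_hits': 4, 'link_count': 1, 'shouting_words': 1}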
The machine would be trained on data from many spam and non-spam emails, labeled separately. That data doesn't have to be extracted from unknowing victims; the software engineer could simply train it on their own email inbox plus their friends', if the inboxes are large enough. Or people could voluntarily contribute to an existing dataset and share it online for research. How the actual datasets were gathered, I'm honestly not sure.
What Facebook or Google does is different, I don't need to explain it because it's easy to see what it's doing from personal experience.
I don't think it would be too difficult to build a lostaccount spam-post detecting device that is accurate around 95% of the time. I guess you could programme it to temp-ban him when it detects a recent post that was edited and contains no text, or a string of 4+ posts in a row.
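Tongue in cheek, but those two rules are trivially codable; a sketch where the Post structure and thresholds are made up:

# Rule-based detector for the posting patterns described above.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    was_edited: bool

def should_temp_ban(recent_posts, author="lostaccount", streak_limit=4):
    # Rule 1: a recent post that was edited and now contains no text.
    if any(p.author == author and p.was_edited and not p.text.strip()
           for p in recent_posts):
        return True
    # Rule 2: a string of streak_limit or more posts in a row.
    streak = 0
    for p in recent_posts:
        streak = streak + 1 if p.author == author else 0
        if streak >= streak_limit:
            return True
    return False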
Yeah, my view is that the big tech companies should be democratized/socialized. They aren't even legal entities. Since when did monopolies become legal? Or does society just want to keep pretending there's no such thing as the law? Aside from that, there need to be highly restrictive laws on what can be surveilled. Would the public vote for that in a democracy? Polls indicate that they would.
One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings
Last edit: 12/05/2023 05:04
1
whammbot   Belarus. May 12 2023 04:02. Posts 523
People who use this thing love it but are terrified by it at the same time. I'm telling you, it's dangerously close to disrupting things to no end. Every day it's something different. The only concern I have with over-regulating this thing is that other grey-area countries are also developing their own, so it forces the West to keep it running just to stay ahead. I'm not saying it's going to be catastrophic, but it sure feels creepy how fast development is going.
1
Stroggoz   New Zealand. May 12 2023 04:58. Posts 5329
On May 12 2023 03:02 whammbot wrote:
People who use this thing love it but are terrified by it at the same time. I'm telling you, it's dangerously close to disrupting things to no end. Every day it's something different. The only concern I have with over-regulating this thing is that other grey-area countries are also developing their own, so it forces the West to keep it running just to stay ahead. I'm not saying it's going to be catastrophic, but it sure feels creepy how fast development is going.
Forget governments or mega corporations: individuals will be able to do this in 10-20 years. Individuals can already make a ChatGPT-style bot in Python; there are lots of guides on YouTube. The more impressive thing OpenAI did was extract and clean data from the entire internet. That's way harder than using the sophisticated algorithms that have already been developed. But a lot of this cleaned data is already publicly available. So yeah, in the near future it's gonna be harder to regulate than gun control.
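To be concrete about what an individual can already run: a (much smaller) GPT-family model is a few lines with the Hugging Face transformers library. This sketch uses the public GPT-2 weights, which are tiny next to ChatGPT's:

# Run a GPT-style model locally: GPT-2 via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # downloads public weights
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of A.I. regulation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))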
I've heard the Chinese ChatGPT is a lot better at math. I think the only use of ChatGPT I've found is that it's very good at automating a lot of coding tasks. You can ask ChatGPT to make ChatGPT for you, and it seems to give you the backend part of the code. This is an example: https://i.imgur.com/ouwe89S.png
One of 3 non decent human beings on a site of 5 people with between 2-3 decent human beings