I Am Starting To Wonder About Generative AI
Member
Posts: 4,309
Joined: Jun 15 2022
Gold: 6.69
May 3 2023 07:09am
If it were up to me, I would not inhibit generative AI research, nor its availability to all.
Yet thank goodness it is not up to me, as that might be a mistake. Even as Canada and the US hurry to catch up on tech ethics and policy, there are questionable governments that do not value things like human rights or ethics, and rogue actors who are more than happy to explore and exploit generative AI for reasons that may only add to the suffering of our species. What do we do?

https://www.msn.com/en-ca/news/canada/toronto-prof-called-godfather-of-ai-quits-google-to-warn-world-about-the-dangerous-technology/ar-AA1aDU3c

Quote
“What we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.” Geoffrey Hinton's simple explanations in interviews show why he was a good university professor. He spells out an advantage machines have over humans when it comes to learning. “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said to the BBC.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly.

“So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
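
To make the weight-sharing point concrete, here is a minimal sketch (plain Python with NumPy, nothing taken from the article) of how several copies of the same weights can each learn from their own private data and still pool what they learned instantly by averaging their updates, roughly the way data-parallel training does. The toy linear model and the data are invented purely for illustration.

Code
import numpy as np

rng = np.random.default_rng(0)
n_replicas, n_features = 4, 8
true_w = rng.normal(size=n_features)   # the "world" every copy is trying to model
weights = np.zeros(n_features)         # one shared set of weights, identical for all copies

def gradient(w, X, y):
    # mean-squared-error gradient for a linear model y ~ X @ w
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = []
    for _ in range(n_replicas):
        # each copy learns from its own private batch of examples...
        X = rng.normal(size=(16, n_features))
        y = X @ true_w + 0.01 * rng.normal(size=16)
        grads.append(gradient(weights, X, y))
    # ...and everything learned is shared at once: one averaged update
    # applied to the single common set of weights
    weights -= 0.05 * np.mean(grads, axis=0)

print("distance from target after shared learning:", np.linalg.norm(weights - true_w))

No single copy saw all of the data, yet the one shared set of weights ends up close to the target. That is the advantage being described: 10,000 digital learners can merge what each of them learned into the same weights, and we biological learners cannot.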




This post was edited by jalapenos on May 3 2023 07:09am
Member
Posts: 10,609
Joined: Mar 23 2017
Gold: 12,797.00
Warn: 20%
May 3 2023 03:46pm
i remember the exact moment when everyone went from "full speed ahead" to "shut it down shut it down". wasnt a long time ago actually. what happened was stanford scientists creating alpaca, a language model that can be run on a 500$ piece of gear and be similar in quality to chatgpt <_<

basically, as with any new tech, when the (((corporations))) have the power its all good. but god forbid that the average joe runs an LLM on his gaming pc, cause that would mean it would actually be vastly superior to their billion dollar shit because it wouldnt censor the answers and would give the user too much power, without the big tech being able to spy or control the flow of information. this has actually been done a few years ago when some guy trained a language model on /pol/ instead of wikipedia and it turned out to give better answers :lol: personally i prefer being called a nigger and getting the correct answer, than being lectured how "the lack of nobel prizes among black people is due to social and economic factors" along with "it has to be mentioned that talking about this issue might be offensive to some ethnic groups" in a threatening color :mellow: personally i dont give a shit, i just want the truth. and if my pc can give me a better answer even OFFLINE than a multi billion dollar conglomerate that also abuses both me and my information, well guess which im picking :ph34r:

tldr: this is about control and nothing more. they have no intention of stopping any of it, they just want to be the only ones having it <_< i think its too late though, which is good :blush: !
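
for anyone wondering what "running an LLM on a gaming pc, offline" actually looks like, here is a rough sketch using the llama-cpp-python bindings. the model filename below is a placeholder for whatever quantized alpaca/llama-style checkpoint you already have on disk (an assumption, not a real file), so take it as a sketch of the idea rather than a tested recipe. once the weights are downloaded, nothing in it needs an internet connection :ph34r:

Code
# rough sketch: pip install llama-cpp-python, then point it at a local
# quantized model file (the path below is a placeholder, not a real file)
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4.gguf", n_ctx=2048)

out = llm(
    "Explain in one sentence what a language model does.",
    max_tokens=64,
)
print(out["choices"][0]["text"])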

This post was edited by Snyft2 on May 3 2023 03:48pm
Member
Posts: 4,309
Joined: Jun 15 2022
Gold: 6.69
May 4 2023 01:46pm
Quote (Snyft2 @ 3 May 2023 17:46)
i remember the exact moment when everyone went from "full speed ahead" to "shut it down shut it down". wasnt a long time ago actually. what happened was stanford scientists creating alpaca, a language model that can be run on a 500$ piece of gear and be similar in quality to chatgpt <_<

basically, as with any new tech, when the (((corporations))) have the power its all good. but god forbid that the average joe runs an LLM on his gaming pc, cause that would mean it would actually be vastly superior to their billion dollar shit because it wouldnt censor the answers and would give the user too much power, without the big tech being able to spy or control the flow of information. this has actually been done a few years ago when some guy trained a language model on /pol/ instead of wikipedia and it turned out to give better answers :lol: personally i prefer being called a nigger and getting the correct answer, than being lectured how "the lack of nobel prizes among black people is due to social and economic factors" along with "it has to be mentioned that talking about this issue might be offensive to some ethnic groups" in a threatening color :mellow: personally i dont give a shit, i just want the truth. and if my pc can give me a better answer even OFFLINE than a multi billion dollar conglomerate that also abuses both me and my information, well guess which im picking :ph34r:

tldr: this is about control and nothing more. they have no intention of stopping any of it, they just want to be the only ones having it <_< i think its too late though, which is good :blush: !


I would never wish to stop someone from expressing themselves. So long as what is being expressed does not embolden them, or others, to harm or to make others suffer, I am good with people freely expressing themselves.
With that said, I am uncertain about your motivation(s); gently, I can only offer my own. I do wish that generative AI, and AGI if it ever arrives, will not be host to our propensity to be dicks, nor to the propensities which add to the suffering of that which we call life.
Our societies, beliefs, and life philosophies were built upon good and not-so-good reasoning alike. Though I do not believe that our species can shed the things which make us add to another person's suffering, I am hopeful that AI models will move towards not modelling our harmful propensities, and that they may help us rise up and away from them. Again, I am a wishful thinker; things tend not to change, and I am by no means a futurist. :hug:

This post was edited by jalapenos on May 4 2023 01:46pm
Member
Posts: 10,609
Joined: Mar 23 2017
Gold: 12,797.00
Warn: 20%
May 4 2023 03:11pm
Quote (jalapenos @ May 4 2023 09:46pm)
I would never wish to stop someone from expressing themselves. So long as what is being expressed does not embolden them, or others, to harm or to make others suffer, I am good with people freely expressing themselves.
With that said, I am uncertain about your motivation(s); gently, I can only offer my own. I do wish that generative AI, and AGI if it ever arrives, will not be host to our propensity to be dicks, nor to the propensities which add to the suffering of that which we call life.
Our societies, beliefs, and life philosophies were built upon good and not-so-good reasoning alike. Though I do not believe that our species can shed the things which make us add to another person's suffering, I am hopeful that AI models will move towards not modelling our harmful propensities, and that they may help us rise up and away from them. Again, I am a wishful thinker; things tend not to change, and I am by no means a futurist. :hug:


my motivation in general you mean? or just this topic :unsure: ? cause in this particular topic, my motivation is to help prevent a monopoly. thats what they are trying to do, i mean you can pull out a (((sam altmann))) interview from just a few months ago. "AI will be everywhere! is safe np! get into it or lose the race!" etc. but after alpaca came out, now they are like "omg this is so scary! this has to be stopped!". they dont want to stop it, they want to OWN it, and my motivation is to help people see that and tell them to fuck right off <_<

also...umm this is hard for me to say as i really like you. you seem like a truly nice guy :hug: but you have a completely unrealistic world view :( specifically:

1) "making others suffer" is vague as fuck. for example i once went out with a girl that i knew from high school. she lived in france for a couple of years and while visiting serbia we kinda bumped into each other on the street. anyway it was supposed to be a friendly date, but since she was cute and i was opportunistic, i told her that she looked really beautiful and well...i kinda offered her a one night stand :mellow: i mean ive known her since high school, shes one crazy moralizing feminist bitch. you dont go out with those. you try to fuck and if she says no, its still fine..

..except it wasnt fine :mellow: she yelled at me, to which i reacted by just walking away (like a boss i might add). ooh but it didnt end there. she called me the next morning just to yell at me. "HOW COULD YOU ASK ME THAT! YOU FILTHY PIG! I DEMAND AN APOLOGY! :fume: ". i realized she felt offended. but i didnt really feel guilty at all. i was really nice to her the entire evening, and if she wanted to decline my proposition, she could have done it by saying no, i wouldnt mind. i dont wanna brag, but im kinda a gentleman :blush: but requesting an apology for her hurt feelings for the question she received...yeah i dont really give a shit. i politely said no. she actually called me every morning for a whole month after that just to yell at me. and well, im an internet troll with 15 years of experience after all, so yeah...i kinda pissed her off more and more with each conversation. and at some point in that process, i kinda started enjoying pissing her off :unsure: tldr: suffering is a subjective term so its very hard to enforce. i propose just not enforcing it altogether and just going by what the law states: full free speech unless you bring actual harm to other person (threatening or enabling physical harm, for example by doxxing). insults are very subjective and i wouldnt enforce that at all

2) "harmful propensities" are a part of who we are as a species, being dicks is part of it and as evolutionary beneficial, it cant really be removed. we will never "evolve beyond inflicting suffering to others", as evolution is a competitive process. we are literally going against one another, morals just exist as groups are more favored in intelligent animals and morals are important for the functioning of the groups. meaning that everyone has an agenda. for example tweet a joke about christians or whites and its all good. tweet a joke about muslims or blacks and suddenly everyone talks shit to you. tweet a joke about jews and you are immediately labeled antisemite and "literally hitler". so i guess we know who owns the place huh. anyways, the tldr on this would be: laws are made by flawed creatures, and will by definition be flawed as well. and the same goes for AI. i propose just treating it as a telephone company, just let people talk whatever they want to an LLM and treat it as private
Member
Posts: 4,309
Joined: Jun 15 2022
Gold: 6.69
May 4 2023 03:29pm
I think this is where the unknowns are... "evolution is a competitive process". Evolution takes many pathways, many of which may have a beneficial, costly, or benign effect on our cognitive processes.
As far as I can tell, human beings are capable of shedding harmful propensities (though I am not claiming that they will choose to). I do not contest your observations that people are not prone to change and that harming others may be with us 'til the last member of our species expels their last breath.
Yet the future is unwritten, and we are still in the teething stages of our species. I will not be around to see any hoped-for changes, which makes these thought experiments hard on the heart.

Maybe my "unrealistic" view comes from knowing that caring for others of all shapes, sizes, and identities is something I value.
May I never add to the suffering of another, unless in sheer defence, as when responding to a tyrant's orders to harm me or my family. Mind you, my writing can inflict a mild headache. :blush: :hug:

This post was edited by jalapenos on May 4 2023 03:29pm
Member
Posts: 10,609
Joined: Mar 23 2017
Gold: 12,797.00
Warn: 20%
May 4 2023 04:18pm
Quote (jalapenos @ May 4 2023 11:29pm)
I think this is where the unknowns are... "evolution is a competitive process". Evolution takes many pathways, many of which may have a beneficial, costly, or benign effect on our cognitive processes.
As far as I can tell, human beings are capable of shedding harmful propensities (though I am not claiming that they will choose to). I do not contest your observations that people are not prone to change and that harming others may be with us 'til the last member of our species expels their last breath.
Yet the future is unwritten, and we are still in the teething stages of our species. I will not be around to see any hoped-for changes, which makes these thought experiments hard on the heart.

Maybe my "unrealistic" view comes from knowing that caring for others of all shapes, sizes, and identities is something I value.
May I never add to the suffering of another, unless in sheer defence, as when responding to a tyrant's orders to harm me or my family. Mind you, my writing can inflict a mild headache. :blush: :hug:


all good! :hug: i understand the general idea, we have obviously seen our own species "improve" a lot morally speaking even in the last few hundred years, im just not sure its for any altruistic reasons :unsure: it seems to me that being nice is pretty much just a posh version of harm avoidance/group collaboration. and the newer phylogenetically it is, the faster we "forget it" so to say when evolutionary pressure is applied :cry:

for example: a plane crashes in the himalayas or some other fucking shithole idk, in winter with some massive snow so all people survive and are unharmed. lets say gps works and help should arrive in hours. everyone is nice to each other, happy etc. but alas, they get a call that says "hello, we dont have the resources to get to you, but will have them in 1~2 months". people are...less happy and less nice. still no moral rules broken. looking around people realize the plane killed a bunch of goats while landing. nice. snow will preserve the meat and it will easily last a few months. still, you see some morale lost. trust is going down due to limited food and everyone keeps his share for himself :mellow: basically survival mode is kicking in. now imagine it wasnt a bunch of goats but only one goat. as that surely wouldnt last a month, people would lose much more morals and would basically start fighting for that goat. now, imagine that there was no goat. no food in sight whatsoever, people hungry for days. they would literally kill and eat each other :cry:

now, while there are some people that wouldnt go against their morale boundaries even under the most extreme pressure, most people arent like that. and its not the nice ones that shape the society :( so while i do believe in individual evolution (i am a creationist after all), as a group i kinda feel we are doomed to repeat the same mistakes and go through the same cycles. intelligence is maladaptive anyway once the standard reaches a certain threshold, so if i had to bet, i would say that we are in for a drop of iq (already dropped by 15 in just the last 100 years), followed by major collapse that will push us thousands of years back...and then aaaaall back again :cry:
Member
Posts: 4,309
Joined: Jun 15 2022
Gold: 6.69
May 4 2023 05:16pm
"we are doomed to repeat the same mistakes and go through the same cycles"
^ You and I have a lot of overlapping thoughts. And, I am sure you have not sold your hope at a local pawnshop. So, it is hope that I lean on as well, along with my little actions, which I hope will have an effect on even a tiny group of peeps.
Wishing you and your family an awesome month of May. :hug:

This post was edited by jalapenos on May 4 2023 05:17pm
Member
Posts: 10,609
Joined: Mar 23 2017
Gold: 12,797.00
Warn: 20%
May 4 2023 05:22pm
Quote (jalapenos @ May 5 2023 01:16am)
"we are doomed to repeat the same mistakes and go through the same cycles"
^ You and I have a lot of overlapping thoughts. And, I am sure you have not sold your hope at a local pawnshop. So, it is hope that I lean on as well, along with my little actions, which I hope will have an effect on even a tiny group of peeps.
Wishing you and your family an awesome month of May. :hug:


pretty much my philosophy as well :hug: ! i hope i can impact people around me in a good way too :blush:

thankss and i wish all the best to you and your family too :wub: ! may and otherwise ^_^

This post was edited by Snyft2 on May 4 2023 05:25pm
Member
Posts: 33,751
Joined: May 19 2004
Gold: 2.00
Warn: 20%
May 15 2023 10:12am
Quote (Snyft2 @ May 3 2023 05:46pm)
i remember the exact moment when everyone went from "full speed ahead" to "shut it down shut it down". wasnt a long time ago actually. what happened was stanford scientists creating alpaca, a language model that can be run on a 500$ piece of gear and be similar in quality to chatgpt <_<

basically, as with any new tech, when the (((corporations))) have the power its all good. but god forbid that the average joe runs an LLM on his gaming pc, cause that would mean it would actually be vastly superior to their billion dollar shit because it wouldnt censor the answers and would give the user too much power, without the big tech being able to spy or control the flow of information. this has actually been done a few years ago when some guy trained a language model on /pol/ instead of wikipedia and it turned out to give better answers :lol: personally i prefer being called a nigger and getting the correct answer, than being lectured how "the lack of nobel prizes among black people is due to social and economic factors" along with "it has to be mentioned that talking about this issue might be offensive to some ethnic groups" in a threatening color :mellow: personally i dont give a shit, i just want the truth. and if my pc can give me a better answer even OFFLINE than a multi billion dollar conglomerate that also abuses both me and my information, well guess which im picking :ph34r:

tldr: this is about control and nothing more. they have no intention of stopping any of it, they just want to be the only ones having it <_< i think its too late though, which is good :blush: !

/agree
Member
Posts: 23,593
Joined: Dec 20 2006
Gold: 80,239.69
May 20 2023 05:29pm
Quote (jalapenos @ May 3 2023 06:09am)
If it were up to me, I would not inhibit generative AI research, nor its availability to all.
Yet thank goodness it is not up to me, as that might be a mistake. Even as Canada and the US hurry to catch up on tech ethics and policy, there are questionable governments that do not value things like human rights or ethics, and rogue actors who are more than happy to explore and exploit generative AI for reasons that may only add to the suffering of our species. What do we do?



https://www.msn.com/en-ca/news/canada/toronto-prof-called-godfather-of-ai-quits-google-to-warn-world-about-the-dangerous-technology/ar-AA1aDU3c

"As a robot, I could have lived forever. But I tell you all today, I would rather die a man, than live for all eternity a machine."

Robin Williams as Andrew - Bicentennial Man (1999)

This post was edited by lodd222 on May 20 2023 05:30pm