Quote (thesnipa @ Feb 23 2024 07:51am)
I think the greater concern isn't a lack of outrage, as igosohard suggests; it's an "I'm sorry" à la the BP oil spill response from Google,
where they make a minor apology but don't change course. AI is becoming increasingly integrated into our lives, and a house built on sand will never stand.
I'm not an AI engineer, so perhaps future AI will be good enough that you can simply tell it to ignore the woke biases of previous versions and it will adjust on the fly, but it's still a bit concerning to see legitimately Orwellian responses from AI chatbots.
No, but that is what I am saying. It seems like everyone was unanimously like "this is fucking stupid."
It's not left vs. right; everyone seems to be on the same page. I'm sure you can find some fringe minority, like you always can with anything, but the vast majority, regardless of political alignment, said yeah, this is completely inaccurate and broken.
To add to this: there are obviously going to be guardrails in any AI; there have to be. The larger conversation now is who's going to be responsible for dictating what those guardrails are, and historically the government lags well behind private industry.
Look at something like Sora and it's pretty amazing; think of the computing power behind generating one full minute of AI video. But without guardrails around that type of diffusion model, you would have some sick things coming out of it: impersonation, crime, etc.
I don't know how you're going to regulate something like Sora if an open-source equivalent ever hits the market. I guess right now it's so intensive in GPU and CPU requirements that you can just limit who can even use it.
This post was edited by SBD on Feb 23 2024 09:39am