Certainly the Russians won’t want to be left behind. Perhaps the AI bots will become like the news, and we will just choose the one that most reflects our personal bias.


Note to Admin: It’s OK to kill this thread; while interesting to me, it doesn’t further the plot. :grin:


The journals are absolutely right to worry; ChatGPT and, presumably, its A.I. successors yet to come represent a potential existential threat to the peer review process—a fundamental mechanism that governs how modern science is done. But the nature of that challenge isn’t fundamentally about the recent, rapid improvement in A.I. mimicry as much as it is about a much slower, more insidious disease at the heart of our scientific process—the same problem that makes A.I. such a threat to university teaching and to journalism.

ChatGPT isn’t the first research-paper-writing machine to drive journal editors to distraction. For nearly two decades, computer science journals have been plagued with fake papers created by a computer program written by MIT grad students. To use this program, named SCIgen, all you have to do is enter one or more names and, voilà, the program automatically spits out a computer science research paper worthy of submission to a peer-reviewed journal or conference. Worthy, that is, if none of the peer reviewers bothered to actually read the paper. SCIgen-written articles were so transparently nonsense that anyone with the slightest expertise in computer science should have spotted a hoax before finishing the first paragraph. Yet not only were SCIgen papers regularly getting past the peer review process and into the pages of scientific journals, it was happening so regularly that, in the mid-2010s, journals deployed an automated detector to try to stem the tide. Nowadays, unretracted SCIgen papers are harder to find, but you can still spot them in bottom-feeder journals every so often.

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process
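For context on how shallow the trick is: SCIgen works by expanding a hand-written context-free grammar until only words remain. Here is a toy sketch of the idea in Python (the rules and vocabulary below are invented; the real grammar has vastly more productions):

```python
import random

# Toy version of SCIgen's approach: recursively expand a hand-written
# context-free grammar. The output is grammatical but meaningless, which
# is why any reviewer who actually reads it should catch the hoax.
GRAMMAR = {
    "TITLE": [["ADJ", "TOPIC", "Considered", "Harmful"],
              ["Towards", "ADJ", "TOPIC"]],
    "SENTENCE": [["We", "VERB", "that", "TOPIC", "can", "be", "made", "ADJ", "."],
                 ["Our", "ADJ", "evaluation", "VERB_PAST", "TOPIC", "."]],
    "ADJ": [["scalable"], ["stochastic"], ["homogeneous"], ["Bayesian"]],
    "TOPIC": [["lambda calculus"], ["DHTs"], ["write-back caches"], ["IPv7"]],
    "VERB": [["argue"], ["confirm"], ["demonstrate"]],
    "VERB_PAST": [["disproved"], ["validated"]],
}

def expand(symbol: str) -> str:
    """Expand a symbol recursively; anything not in GRAMMAR is a terminal."""
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

print(expand("TITLE"))
print(expand("SENTENCE").replace(" .", "."))
```

A generator this simple cannot survive an attentive reader, which is the article’s point: the papers got through because nobody was reading.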

Occasionally ChatGPT churns out complete nonsense too (binary on a spectrum, wtf?) and is just completely wrong.


Google joined the party (they released an AI version) today, the 2nd of February, 2023.

Others are sure to follow: Meta and several more.


Sam has some interesting ideas…

Exclusive Interview: OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’


The pillars of the establishment are trembling in fear.

Joseph… I know lots of professional / white-collar workers who are also trembling…

Also, in the area of good news, this looks interesting…

ChatGPT Personalized health recommendations


Meanwhile, the FDA actually seems to be moving relatively quickly now on AI:


The party is getting hotter!

China poised to launch ChatGPT rival

Btw, I got insider access to Google’s Bard, based on Google’s LaMDA LLM. Not particularly impressive, but it doesn’t feel like as much of a desperation play. As mentioned, it is possible they took more time to greenlight Bard because of misinformation and reputational risks; I can say Bard is better at giving “safe” answers than ChatGPT, with fewer safety loopholes. Still, I’m not particularly impressed. Unfortunately, I’m not at liberty to share screenshots, but it’ll be obvious soon enough once it’s public.

While I am aware of Chinese “AI” competitors with some advantages in parameter counts and other structural benefits, semiconductor access is somewhat of a bottleneck; China is still at least a decade behind on the chip side. Based on what I’m seeing, I currently believe Alibaba’s M6 is probably going to be slightly more impressive than Baidu’s offering. It’s easy to share sensational articles and get excited about any hint of progress (as seen in some of these overnight buzzword AI stock prices, which have little to substantiate the increases); it’s not as easy to parse out risks and actual trajectory.

I will also say that this thread is overall tending toward overexuberance about something most here don’t even have access to, and there is a lack of actual context.

If I had to go bullish on a subfield of “AI”, it’s adversarial ML. It’s currently easy to “manipulate” these systems to do what one wants, which includes demonstrably guiding “AI”-enabled missiles back to the creators who launched them. The US military seems asleep to these risks when buying into “AI”, which shows how far the exuberance goes.
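To make “manipulating these systems” concrete: the canonical adversarial-ML trick is the fast gradient sign method (FGSM). Here is a minimal sketch against a toy linear classifier; the weights, input, and epsilon are all invented for illustration, and real attacks target deep networks, but the mechanics are the same.

```python
import numpy as np

# Toy linear classifier standing in for a big model: p(y=1|x) = sigmoid(w.x + b).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps=0.25):
    """Fast gradient sign method: nudge each feature by +/-eps in the
    direction that increases the loss on the true label. For logistic
    loss, the gradient w.r.t. x is (p - y) * w."""
    grad_x = (predict(x) - y_true) * w
    return x + eps * np.sign(grad_x)

x = 0.5 * w / np.linalg.norm(w)  # an input the model scores confidently as class 1
x_adv = fgsm(x, y_true=1.0)

print(f"clean score:       {predict(x):.3f}")      # high, around 0.9
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed well toward class 0
```

A small, targeted perturbation flips the prediction even though the input barely changes; that fragility is the whole subfield.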

FWIW

Join us in the AI Test Kitchen

People Tend To Overestimate What Can Be Done In One Year And To Underestimate What Can Be Done In Five Or Ten Years

https://quoteinvestigator.com/2019/01/03/estimate/

While it may often be true (though clearly not always), it’s also fair to say that accurately predicting 7-10 years out takes a lot more effort than it seems.

I’d like to see actual evidence that “AI” is compounding “intelligence” first. There have been plenty of waves of “AI” since the 1950s, and there are no indications I’m aware of that the current methods are trending toward such an explosive compounding rate.


I’ll give you a personal example as an analogy. “Blockchain” and crypto were all the rage, and crypto to some extent still is (and yes, there are some real use cases).

In fact, I made a good amount off that gold rush by “selling the picks and axes,” so to speak, and I’m quite aware of the general possibilities of that industry. But I was deeply aware of the security risks in, say, smart contracts, on top of the counterparty risks that abound. I made a good amount even on simple, poorly designed pseudorandom number generators, by recovering the seed to beat Ethereum lotteries; they were just that poorly designed. I’ve watched people lose their shirts time and time again, despite urging them to simply read the fine print and fully understand what they’re investing in, because evaluating risks (usually counterparty risk first) is the first step. Most exuberant people refuse to do even the bare minimum of due diligence and make predictions not grounded in an informational advantage.
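For the curious, here is a toy Python model of that lottery flaw; it is not any specific contract’s Solidity, and the players, timestamp, and block number are invented. The point is just that a “random” draw derived entirely from public chain data can be replayed by anyone before they bet.

```python
import random

def toy_lottery_winner(block_timestamp: int, block_number: int, players: list[str]) -> str:
    """Model of a flawed on-chain lottery: the 'random' draw is seeded
    entirely from public block data, so anyone can recompute it."""
    rng = random.Random(block_timestamp ^ block_number)  # fully determined by public inputs
    return players[rng.randrange(len(players))]

players = ["alice", "bob", "carol", "attacker"]

# The attacker sees the same chain state as the contract, so they can
# compute the winner in advance and only enter rounds they would win.
predicted = toy_lottery_winner(1675300000, 16_500_000, players)
actual = toy_lottery_winner(1675300000, 16_500_000, players)
assert predicted == actual  # nothing in the seed is secret, so the draw is predictable
print(predicted)
```

Real exploits were somewhat more involved (miners can also bias timestamps and blockhashes), but the root cause was the same: no secret entropy.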

I’ve written elsewhere that the banks that formed their own blockchain consortium to pretend they were “innovating” were pretty useless; they also quietly disappeared. Nobody who was exuberant talks much about the failures, and few recall all the poor predictions made over the last 10 years, despite several waves of busts in ICOs, blockchain buzzword stocks, and NFTs.

It’s not that all of these are completely useless instruments (it’s easy to be overly skeptical and pessimistic without actually understanding what’s going on), but most people talk out of their butt based on exuberance, without even learning Solidity, for example, or at the very least taking some time to read the Bitcoin whitepaper and code. Most also don’t discuss based on context; they tend toward sharing buzzwordy articles chasing extra clicks. On average, the ones who did take the time to learn what’s actually going on, with an open mind, made much better and more accurate predictions, being neither overly exuberant nor overly pessimistic.

The hard part of foresight, it seems, is not feeling vindicated by a dramatic drop in interest when things go down, erroneously expecting “innovation” to disappear, but separating the true value from public perception and buzzword marketing/sensationalism, while watching for possibilities that both sides miss in a complex system. Many are too easily fooled into thinking they can predict what will happen in a complex system without ever reading up or getting any experience, assuming they have an informational advantage when they clearly do not. One can be directionally correct with luck, but for the wrong reasons and on the wrong timeframe.

I could go further back in history, but I figured it might be helpful to show the pattern with a more recent example.


Unfortunately, it is not an oracle. It cannot predict events, even hypothetically. It always points out that it does not have enough data for this.

Welcome, Andriy, to the forums!

Perhaps you are right, but it’s not a bad thing to consider the possible downsides of AI. That is all I’m suggesting (by posting that story).
