With inflation that (at least in certain parts of the world) seems set to keep increasing, and with growing wealth inequality in many societies, I’m not sure the capital gains of most individuals will be sufficient to pass wealth on to the next generation.

I’m also not sure UBI will really become a thing, as it would have to be instituted by a ‘select few’ who would certainly be discontented with the ‘lack of (economic) value of the masses’.

Right. We have this fear that our creations will kill us, as with Frankenstein, for example. An LLM is no Frankenstein, unless you consider statistical analysis of human-created content a Frankenstein. All this fear just because the outcome isn’t explicitly programmed but derived with statistics and calculus instead?


“How, exactly, could AI take over by 2027?”

Full Details:

The scenario: http://ai-2027.com
PDF version: https://ai-2027.com/scenario.pdf


I read maybe half of it, kept skipping ahead. They’re wildly optimistic and assume the curve will keep climbing at an ever-steeper angle. It could just as easily level off and stay there a while. Maybe some of the remaining problems will be hard. Something this smart will be power-hungry; I doubt we can keep it fed.

Cool though that they think it can happen. Maybe before I’m dead.

AI 2027

Yesterday, a new PDF dropped. The report was authored by five Manifold users (some of whom are better known for other things): Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean.

This report, called “AI 2027,” was released primarily as an interactive website by the AI Futures Project, and contains a hybrid forecast and fictionalized scenario for the next few years of AI developments.


Graphical summary of AI-2027’s forecast. Lines going up.

This report got press coverage from the NYT and resulted in Scott Alexander’s (perhaps) first podcast appearance. While the report is best interpreted as a single potential scenario (that represents the authors’ “best guess about what [the near-term impact of AI] might look like”), Manifold users are skeptical that these forecasts will be borne out as described.


The above market resolves based on consensus from Manifold moderators at the end of 2026 as to whether the AI-2027 report’s forecast has been more or less correct to that point:

Resolution will be via a poll of Manifold moderators. If they’re split on the issue, with anywhere from 30% to 70% YES votes, it’ll resolve to the proportion of YES votes. Otherwise it resolves YES/NO.

Part of this skepticism is natural, given that AI-2027 has given a timeline for something resembling AGI that is much faster than Manifold’s own market estimate. The report forecasts AGI (or at least several intersecting technological developments that equivalent Manifold markets would almost certainly characterize as such) by the end of 2027. Manifold does agree that 2027 is the modal year by which we could expect AGI, but (1) most of the probability mass lies beyond that, with a median date of around 2029, and (2) the criterion for this market is a high-quality Turing test, a lower bar than the ones AI-2027 thinks AI will clear.


Hopefully Manifold users will, as usual, do a great job of operationalizing the falsifiable predictions from the AI-2027 report into well-defined individual markets.