Altman's Law: OpenAI boss sets out three rules of AI economics

"The future will come at us in a way that is impossible to ignore and the long-term changes will be huge."

Grok imagines OpenAI's Sam Altman dressed like another famous rule proposer: Isaac Newton

Moore had two. Newton proposed three. Now OpenAI boss Sam Altman has joined the rule-makers by setting out no fewer than three "observations" governing the economics of AI.

In a new blog, Altman said that artificial general intelligence (AGI) is "coming into view", defining it as "a system that can tackle increasingly complex problems, at the human level, in many fields".

He placed AGI in a lineage of world-changing inventions along with electricity, the transistor, the computer, and the internet.

Neatly dodging the fact that many of us are going to lose our jobs, Altman said that AGI could generate "astonishing" economic growth, cure diseases, give humans more time to spend with our families and let us "realise our creative potential".

"In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today," Altman claimed.

He also set out three "observations," which we're calling Altman's Laws. We have quoted them below without edits.

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
  2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
  3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

In other words, AI intelligence scales logarithmically with resources, costs drop 10x per year, and the socioeconomic value of AI increases super-exponentially with intelligence.
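The figures in Altman's second observation can be sanity-checked with a little arithmetic. The sketch below (our illustration, not anything from Altman's post) converts each claimed improvement into a per-12-month rate, so the 150x GPT-4 to GPT-4o drop over roughly 18 months can be compared directly with the stated 10x-per-year trend and with Moore's law:

```python
# Annualizing the improvement rates quoted in Altman's blog post.
# This is an illustrative sketch; the month counts are approximations.

def annualized_factor(total_factor: float, months: float) -> float:
    """Convert an improvement factor achieved over `months` into a
    per-12-month rate, assuming steady exponential improvement."""
    return total_factor ** (12.0 / months)

# Altman's second observation: 10x cheaper every 12 months.
altman_rate = annualized_factor(10, 12)    # exactly 10x/year

# GPT-4 (early 2023) to GPT-4o (mid-2024): ~150x over roughly 18 months.
gpt4_to_4o = annualized_factor(150, 18)    # ~28x/year

# Moore's law, as Altman states it: 2x every 18 months.
moore_rate = annualized_factor(2, 18)      # ~1.59x/year

print(f"Altman's stated rate:          {altman_rate:.1f}x per year")
print(f"GPT-4 -> GPT-4o implied rate:  {gpt4_to_4o:.1f}x per year")
print(f"Moore's law:                   {moore_rate:.2f}x per year")
```

On these numbers the GPT-4 to GPT-4o price drop actually ran well ahead of the 10x-per-year trend, which is why Altman calls the comparison with Moore's law "unbelievably stronger".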

"If these three observations continue to hold true, the impacts on society will be significant," Altman added.


How will AGI change human society and will it be better or worse?

Another Grok image of Altman contemplating his laws of AI economics

We still have the rest of the year to prepare to be replaced by machines, Altman said.

He wrote: "The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.

"But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today."

What will this future look like? Altman said "agency, willfulness, and determination will likely be extremely valuable".

This means that people who calmly figure out what to do when everyone else is panicking about unemployment will probably be in a good position to succeed.

The impact of AGI will be "uneven", with some industries changing very little and others accelerating dramatically. He didn't explicitly say that some industries would cease to employ humans at all, but that seems inevitable.

The price of goods will fall dramatically as both energy and intelligence become cheaper, while luxury goods and "a few inherently limited resources" like land may "rise even more dramatically," Altman forecast.

He admitted that authoritarian governments are likely to use AI to control their populations, remove their citizens' autonomy and carry out mass surveillance - without saying which countries were likely to do this.

"In particular, it does seem like the balance of power between capital and labour could easily get messed up, and this may require early intervention," Altman added. "We are open to strange-sounding ideas like giving some 'compute budget' to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect."

Read the full blog here.

Have you got a story or insights to share? Get in touch and let us know. 

Follow Machine on X, BlueSky and LinkedIn