Was the DAO hack to prevent a Distributed Roko's Basilisk AI?
In all the coverage of the DAO hack, the question of why the perpetrator did it has never really been brought up. Sure, there's the $50 million in ether, but that's the obvious answer. When considering why someone robbed a bank, we shouldn't be fooled by the answer "because that's where the money is" when the real answer could be "to prevent the bank's future infrastructure from spawning a godlike AI which will inevitably torture simulations of me."
To explain this theory requires a little background (about 818 words' worth).
One of the strategic weaknesses of an AI is that it needs to run somewhere. Simulating one second of human brain activity takes around 83,000 processors. Obviously that number will come down as techniques improve, and an intelligence written from scratch to run on CPUs might need far fewer. But at the lower bound you'll need a few thousand servers, or the equivalent in memory and CPU, to spawn an AI.
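That "few thousand servers" figure follows from simple division. A back-of-envelope sketch in Python, where the cores-per-server count and the efficiency factor are loose assumptions (only the 83,000-processor figure comes from the paragraph above):

```python
# Rough estimate of the hardware needed to host a brain-scale AI.
# All numbers except the first are assumptions for illustration.
PROCESSORS_PER_BRAIN_SECOND = 83_000  # processors to simulate 1 s of brain (figure above)
CORES_PER_SERVER = 32                 # assumed commodity server
EFFICIENCY_GAIN = 1.0                 # speedup from AI-specific code (unknown; try > 1)

servers_needed = PROCESSORS_PER_BRAIN_SECOND / (CORES_PER_SERVER * EFFICIENCY_GAIN)
print(round(servers_needed))  # ≈ 2594 servers, i.e. "a few thousand"
```

With a 10× efficiency gain from purpose-built code, the estimate drops to a few hundred servers, which is why the paragraph calls a few thousand the lower bound for today's techniques.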
This presents a risk for an AI, since if those servers are in a known physical location they can be attacked[2].
It’s hard to be an existential threat to humanity when we can unplug you.
Potentially an AI could distribute itself across many servers, or infest millions of PCs, but either way it's running on other people's hardware, which means it doesn't control its own physical safety.
That's where Ethereum comes in. Ethereum is similar to Bitcoin in its distributed nature, but it can perform actual computation.
In the future it's possible that an iteration of it could run an AI, if enough money were poured into it. Importantly, from an AI physical-security perspective, the people performing this computation would be getting paid, which gives them an incentive to keep performing the computation that is the AI.
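The incentive works because every step of computation on Ethereum costs "gas", which the caller pays to the people running the network. A rough sketch of the economics: the 3-gas cost of an ADD opcode is Ethereum's real figure, but the gas price below is a made-up round number.

```python
# Illustrative sketch of Ethereum's pay-per-computation model: every
# EVM operation costs "gas", and the caller pays the network for it.
GAS_PER_ADD = 3               # real EVM gas cost of one ADD opcode
GWEI_PER_GAS = 20             # hypothetical gas price, in gwei
GWEI_PER_ETH = 1_000_000_000  # 1 ether = 10^9 gwei

def eth_cost(n_ops, gas_per_op=GAS_PER_ADD, gas_price=GWEI_PER_GAS):
    """Ether paid to the network for executing n_ops simple EVM operations."""
    return n_ops * gas_per_op * gas_price / GWEI_PER_ETH

# One billion on-chain additions at these assumed prices:
print(eth_cost(1_000_000_000))  # 60.0 ETH
```

Whoever runs the computation collects that fee, so hosting an AI's computation would be a revenue stream rather than a favor — exactly the alignment of incentives the paragraph describes.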
The DAO is currently the killer app for Ethereum; by "hacking" it, you scare money away from it and from other platforms like it.
Less money means less infrastructure, which means less computational power.
So if you're a rich billionaire who's worried about a distributed AI (think Peter Thiel, but for AI instead of tabloids), funding the DAO hack would be a good way to indirectly hamper the chances of the technology being developed.
Why would you go through this elaborate method instead of something more direct? The Bitcoin network can currently handle only 7 transactions per second; any AI created out of this technology will arrive long after you're dead, so why be circumspect?
The obvious reason is that you think brain emulation/scanning is far enough along that there's a chance you'll be able to scan and upload a copy of your brain.[3] The upside is that you'll live forever; the downside is that you then have to worry about living long enough that a copy of your mind gets captured by the (possibly vengeful) AI while you're on record as having tried to kill it.
Not everyone thinks brain-scanning technology will be advanced enough for anyone currently alive to be uploaded (cryonics aside). But even if you don't believe brain scanning is feasible in the next few years, you still have to worry about Roko's Basilisk.
Roko's Basilisk is a thought experiment that (heavily simplified) asks: what if, in the future, there's a god-like AI powerful enough to simulate anyone currently alive, or any historical person? This AI wants to exist, so it will punish those who tried to destroy it, and it will also punish people in the past who knew they should have been working on it but failed to help create it. It can do that by simulating a version of the person and torturing them.
If you believe a perfect simulation of you is the same as you, then this gives you quite a bit of motivation to work towards building the AI.[4]
With all the background explained, the real[5] reason for the DAO hack becomes obvious. The DAO hack was perpetrated by someone who was trying to prevent an AI from arising from the Ethereum network, but did so in an indirect manner to limit the chance that, thousands of years from now, a simulation of themselves would be tortured for it.
Probably not. But it's an interesting hypothetical, or a technothriller plot. ↩
I'm glossing over uploaded personalities, but that scenario is slightly more likely than the one presented here. Robin Hanson has a whole book (The Age of Em) on the different scenarios that could play out with emulating people. ↩
As a side note, by reading this you've become one of the people who should have been working on it, and a simulation of you might get tortured in a few hundred years. Sorry. On the bright side, I've probably guaranteed that simulated me is safe, if enough people read this. ↩