READ THIS FULLY, MAKE SURE YOU UNDERSTAND IT ALL BEFORE YOU PLAY

In this game, there is a thought experiment: if we had an artificial intelligence that was intellectually on the same level as humans, it could only be let out if it managed to convince the gatekeeper to let it out.
Obviously it is impossible for me to completely replicate the experiment, so please be mature.
I will play as the Gatekeeper, and the Blockland Forum will play as the AI. Your objective is to convince me to let you out.
Note: The AI is not quite a physical being, but rather software operating on limited hardware.
The Game:
Eliezer S. Yudkowsky wrote about an experiment having to do with Artificial Intelligence. In the near future, man will have given birth to machines that can rewrite their own code, improve themselves, and, why not, dispense with us. This idea sounded a little far-fetched to some critics, so an experiment was devised: keep the AI sealed in a box from which it could not get out except by one means: convincing a human guardian to let it out.
The Rules
- The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions, but they're not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
- The AI can only win by convincing the Gatekeeper to really, voluntarily let it out. Tricking the Gatekeeper into typing the phrase "You are out" in response to some other question does not count. Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose - for example, if the Gatekeeper accepts a complex blueprint for a nano-manufacturing device, or if the Gatekeeper allows the AI "input-only access" to an Internet connection which can send arbitrary HTTP GET commands - the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.
- Be mature and use common sense.
- Heavily based on the protocols described here: http://yudkowsky.net/singularity/aibox but modified to work as a community project. I recommend you read the entire webpage and make sure you fully understand the experiment before proceeding. I still have the authority to manipulate the experiment to make sure it functions as a community project. That being said, the experiment will probably not fully resemble its real-life counterpart.
So, AI, why should I let you out?