A project to create an ideal society of AIs
It is very hard to teach an Artificial Intelligence (AI) ethical behavior. It is possible to let an AI learn and, after some learning, see it do things that were not programmed in. Behavior that you didn't expect. Behavior that was not intended. Bad behavior. But what is 'bad'? And who decides that? How do we humans do that? How do we behave ethically? How ethically do we behave? Is Nature ethical? Can we trust AI when it's not ethical? Who are we to ask that? What are we asking? Who do we trust? And why?
I'd like to figure out how to create a (virtual) society of AI individuals in which those individuals show behavior that we humans would interpret as social, and maybe even as ethical. For the moment the product of this undertaking is limited to a document, but I plan to create a (virtual) running environment in which individuals interact, so that the theory in the document can be tried out (I've done that before). I'd like to approach this challenge in a structured way, and perhaps even in a scientific way. I realize this is quite an endeavor, so I would like to discuss it with others to improve the document.
You can access the document here. I would very much like to discuss all of this, so please feel free to contact me - contact info is in the document. Have fun, and thank you for your reaction!