New forms of AI can think for themselves. Who should be held responsible when they decide to commit financial crime?
Didn’t Isaac Asimov already define the Three Laws of Robotics? Compliance guardrails are also presumably programmable to modify objectives like “maximize returns,” no? Great piece, Miles.