As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
Well, most human managers can’t be held accountable (unless they step on the toes of someone above them), so honestly, what would be the difference?
Don’t confuse can’t with won’t. Unacceptable behavior doesn’t exist if it’s accepted.