We should, of course, regulate AI more tightly, with stronger standards for transparency, explainability, privacy, safety, and accountability.
More regulation will slow AI down.
But I’d go two steps further…
Outside of healthcare, we should ban and then reverse the development of artificial intelligence.
After all, outside of healthcare, what could an ever-improving AI give us? In the best case, a genie lamp that grants nearly any wish. Okay, fair enough, but on the road to developing this genie lamp I believe we’d succumb to entropy, enslavement, evolution, and/or extinction.
After a certain point, the risks will outweigh the rewards, and the longer the time horizon you concern yourself with (perhaps because you love your descendants, or because we’ll live longer ourselves), the more weight you’ll give to the risks.
When should we ban it?
We can’t ban it until the US wins the AI arms race by unlocking an AI capable of global surveillance and sabotage.
Once we have this ultimate eye, we’ll “encourage” the world to sign a non-development treaty, much as we did, with varying levels of success, with nuclear weapons, except here we’d have the insight and power to enforce 100% compliance.
At that point, the question becomes: where do we, along with the international community, draw the line?
At first, we may draw it at whatever happens to be the status quo, but we need not let happenstance dictate our indefinite reality.