Xavier Frank | American Journal of Law & Medicine
Artificial intelligence (AI) machines hold the world’s curiosity captive. Futuristic television shows like Westworld are set in desert lands against pink sunsets, where sleek, autonomous AI fulfill every human need, desire, and kink. But I, Robot, a movie in which robots turn against the humans they serve, reminds us that AI is precarious. Academics who study how AI interacts with tort law, such as Jessica Allain, David Vladeck, and Sjur Dyrkolbotn, claim that the current legal regime is incapable of addressing the liability issues AI presents. Allain and Vladeck both focus their research on whether tort law can accommodate claims against fully autonomous AI machines, while Dyrkolbotn explores how AI can be leveraged to help plaintiffs identify the genesis of their injuries. The solution this article presents is not exclusively tailored to fully autonomous AI, nor does it address how technology can be used to litigate tort claims. It instead demonstrates that the current tort law regime can provide relief to plaintiffs who are injured by AI machines. In particular, this article argues that the design of Watson for Oncology presents a new context in which courts should adopt a per se rule of liability that favors plaintiffs bringing damage claims against AI machines, by expanding the definition of what it means for a device to be unreasonably dangerous.