AI Safety: A Poisoned Chalice?


This article is free to access.

Abstract

We hear a lot about the awesome potential of AI: the achievements of reinforcement learning, the astonishing power of foundation models and generative AI. Amplifying the hype, AI Safety has emerged as its counterpoint. AI Safety, when I first encountered it, brought to mind autonomous vehicle crashes, nuclear meltdowns, killer drones, and robots gone haywire. Nowadays, I see a different, more aggressive intention, as AI Safety has come to dominate the public agenda around AI, beyond the purely technical and economic.

Citation (APA)

Nissenbaum, H. (2024, March 1). AI Safety: A poisoned chalice? IEEE Security & Privacy. Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/MSEC.2024.3356848
