Latest News
February 14, 2024
Microsoft says US rivals are beginning to use generative AI in offensive cyber operations

NYU MS CRS professor and former AT&T Chief Security Officer Edward Amoroso said that while the use of AI and large language models may not pose an immediately obvious threat, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”
February 2, 2024
The Just Security Podcast: How Should the World Regulate Artificial Intelligence?

From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems. Policymakers have taken steps to […]
December 15, 2023
Can AI Streamline Washington, D.C.?

Law professor Catherine Sharkey explains how artificial intelligence is being used to tackle the arduous work of keeping our federal agencies in check.
December 13, 2023
Impact of the SEC Position on Cyber Security for the CISO (Panel discussion)

MS CRS faculty and guest speakers Randy Milch, Ed Amoroso, Joe Sullivan, and Joel Caminer discuss the impact of the SEC position on cyber security for the CISO.
November 2, 2023
Generative AI Legal Explainer

This explainer is an evolving project to provide everyone with the types of answers that legal experts might informally provide to each other. Each question includes a short response to help you understand the most likely answer in the most likely cases. That answer is then given a confidence score on a scale of 1-5, […]
September 5, 2023
Knowing Legal Machines

Many of the social questions raised by artificial intelligence are mediated through the legal system. Policymakers explore new rules to govern the technology, courts work to apply existing legal frameworks to new situations, and advocates propose entirely new approaches to deal with novel problems (or old problems with new prominence).
July 21, 2023
NYU Law Professor Catherine Sharkey provides guidance for how federal agencies can use AI to review regulations

Segal Family Professor of Regulatory Law and Policy Catherine Sharkey examined those questions in a report she prepared in May for the Administrative Conference of the United States (ACUS), an independent executive branch agency charged with issuing nonbinding recommendations to improve administrative and regulatory processes. Drawing on Sharkey’s report, on July 3, the ACUS published a recommendation […]
July 18, 2023
Bugs in the Software Liability Debate

The Biden administration’s National Cybersecurity Strategy, released earlier this year, calls for shifting liability for insecure software, via legislation and agency action, onto software producers that fail to take “reasonable precautions.” It would impose the cost of security flaws onto the party best positioned to avoid them while rejecting industry’s attempt to shift liability downstream. While not without critics, […]
June 20, 2023
Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence

Generative AI has great commercial promise but also poses immediate dangers. A new report from the NYU Stern Center for Business and Human Rights argues that the best way to prepare for potential existential risks in the future is to begin now to regulate the AI harms right in front of us.
May 18, 2023
What AI Regulations Should Go on the Napkin?

MS CRS Professor Ed Amoroso outlines a simple framework (suitable for we-humans to sketch on a napkin) based on the acronym PILOT. The framework suggests how the US should begin to regulate artificial intelligence using an oversight board within NIST.