Matthew Watkins works in AI safety research, including a stint at SERI-MATS in Berkeley and London studying language models. His posts for the tech-rationalist website LessWrong, discussing glitches he discovered in foundational AI models, were among the most upvoted in the site's history. The leading AI critic Eliezer Yudkowsky called his research 'one of the more hopeful processes happening on Earth right now - because it may give rise to a culture of people with something like security mindset, who try to break things, instead of imagining how wonderfully they'll work'.