Eliezer Yudkowsky

Eliezer Yudkowsky at the 2006 Singularity Summit

Eliezer Yudkowsky is a co-founder of the Singularity Institute and a key researcher in the field of AI ethics.

At the age of sixteen, Yudkowsky first read about the 'intelligence explosion' - a future scenario in which an artificial intelligence improves its own intelligence until it is vastly smarter than humans. He described this discovery as causing "a vast, calm feeling that now I know how I'll be spending the rest of my life". He left school and taught himself computer science, mathematics, programming, physics, and cognitive science. In 2000, five years after that first revelation, he co-founded the Singularity Institute to promote "points of leverage for increasing the probability that the advent of AI turns out positive".

Yudkowsky envisions the near-future creation of self-modifying artificial intelligences with the capacity to recursively self-improve at an explosive rate, transforming from human-level intelligence to superintelligence too quickly for humans to predict or react. To address this challenge, he has pioneered the field of "Friendly AI", which seeks to design stable goal systems that can guide such intelligences and keep them benevolently inclined toward humans.

He has since written several foundational articles on the subject, including 'Creating Friendly AI' (2001), 'Coherent Extrapolated Volition' (2004), 'Artificial Intelligence as a Positive and Negative Factor in Global Risk' (2008), and 'Timeless Decision Theory' (2010).

Through his work with the Institute, he co-founded the Singularity Summit, an annual TED-style event featuring prominent scientists, philosophers, and visionaries who discuss Singularity-related issues.

Yudkowsky maintains a keen interest in raising awareness of Singularity-related topics, and often writes popular articles on probability theory, decision theory, cognitive science, and artificial intelligence at LessWrong.com.

