Nick Bostrom


2019-03-30  

Nick Bostrom is a Swedish philosopher known for arguing that if superintelligence is introduced before the control problem is solved, the future development of artificial intelligence could pose a significant threat to human survival. Bostrom warned that beyond scenarios in which an AI takes over the world and deliberately exterminates humanity, even a superintelligence assigned a seemingly harmless task could, through relentless optimization, bring about human extinction. Although artificial intelligence offers many benefits, he argues, solving the control problem remains a top priority. Some critics in the AI field counter that AI does not actually possess such capabilities, and that these scenarios arise only over very long time horizons and under extreme assumptions.

Bostrom has published over 200 works, including the New York Times bestseller Superintelligence: Paths, Dangers, Strategies (2014) and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015 he was named to Foreign Policy's Top 100 Global Thinkers list. Bostrom's work on superintelligence, and his concern that it could threaten humanity in the coming centuries, has drawn similar expressions of concern from Bill Gates and Elon Musk.

Bostrom was born in Helsingborg, Sweden, in 1973. Bored by school, he dropped out in his final year of high school to study at home, teaching himself across a wide range of disciplines, including anthropology, art, literature, and science. Despite his reputation as a serious person, he also performed on London's stand-up comedy circuit. He holds a bachelor's degree in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, master's degrees in philosophy and physics from Stockholm University, and a master's degree in computational neuroscience from King's College London.
During his studies at Stockholm University, he examined the relationship between language and reality through the work of the analytic philosopher W. V. Quine. In 2000 he received a PhD in philosophy from the London School of Economics. He taught at Yale University from 2000 to 2002, and was a British Academy Postdoctoral Fellow at the University of Oxford from 2002 to 2005.

Bostrom's research focuses on the future of humanity and its long-term outcomes. He introduced the concept of existential risk, defining it as one in which an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. In the 2008 volume Global Catastrophic Risks, Bostrom and Milan Ćirković characterized the relationship between existential risk and the broader class of global catastrophic risks, linking existential risk to observation selection effects and the Fermi paradox. In 2005 Bostrom founded the Future of Humanity Institute to study the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom argued that the birth of a superintelligence could mean human extinction, underscoring humanity's fragility in the face of advances in artificial intelligence. He cites the Fermi paradox to suggest that extraterrestrial intelligent life may have fallen victim to its own technology. A machine equivalent to human intelligence