AI 'godfather' Yoshua Bengio wins Canada's top science award
Yoshua Bengio recognized for 'remarkable discoveries and breakthroughs' in artificial intelligence
Canada's most prestigious science prize was awarded this week to Yoshua Bengio, a pioneer in artificial intelligence who's got some honest doubts about the future of his field.
Bengio, the scientific director of the Montreal Institute for Learning Algorithms and Université de Montréal professor, is this year's recipient of the Herzberg Canada Gold Medal for Science and Engineering, the Natural Sciences and Engineering Research Council of Canada (NSERC) announced Wednesday. The award is presented annually to Canadians whose work has shown "persistent excellence and influence" in the fields of natural sciences or engineering.
Bengio's breakthrough work in artificial neural networks and deep learning earned him the nickname of "godfather of AI," which he shares with Yann LeCun and fellow Canadian Geoffrey Hinton.
With neural networks, "the idea was that we might be able to build intelligent machines by taking inspiration from neuroscience, from the brain," Bengio told Quirks & Quarks. Deep learning underlies much of the recent advancement in AI technology, from image and speech recognition to generative AI and natural language processing behind tools like ChatGPT.
In recent years, Bengio has also been among the AI researchers who have spoken out about the potential risks of this technology, and called for prompt and rigorous regulation of the field.
Bengio spoke with Quirks & Quarks host Bob McDonald about the recent developments in the field of AI and his hopes and concerns about the future.
Here is part of their conversation.
What are some of the achievements that you are particularly proud of?
Well, I'll start with something that's very relevant today. In 2000, I published a paper at the main neural net conference called NeurIPS, and it was about neural networks for modelling language, sequences of words. And a recipe very similar to this is actually what is used right now to train those huge language models and chatbots.
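The paper Bengio refers to introduced a neural probabilistic language model: embed each of the previous words as a vector, pass the concatenated embeddings through a hidden layer, and score every word in the vocabulary with a softmax to predict the next word. The toy sketch below is illustrative only, not the paper's exact architecture; the vocabulary, dimensions and variable names are arbitrary choices for the example.

```python
# Illustrative sketch (not the exact 2000 model): a tiny feed-forward
# neural language model that predicts the next word from the previous two.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V, d, h, context = len(vocab), 8, 16, 2   # vocab size, embedding dim, hidden dim, context length

# Parameters: word embeddings, hidden layer, output (softmax) layer.
C = rng.normal(scale=0.1, size=(V, d))            # embedding table
W1 = rng.normal(scale=0.1, size=(context * d, h))
W2 = rng.normal(scale=0.1, size=(h, V))

def next_word_probs(prev_word_ids):
    """Return a probability distribution over the whole vocabulary."""
    x = C[prev_word_ids].reshape(-1)              # concatenate the context embeddings
    hidden = np.tanh(x @ W1)                      # hidden representation
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())           # softmax over all words
    return exp / exp.sum()

# "the cat" -> distribution over the next word (untrained, so roughly uniform).
probs = next_word_probs([vocab.index("the"), vocab.index("cat")])
print(dict(zip(vocab, probs.round(3))))
```

Scaled up by many orders of magnitude and trained on vast text corpora, this same recipe of embeddings, learned layers and a softmax over the vocabulary is the core of today's large language models.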
Another discovery of mine in 2014 introduced something inspired by the brain, which is attention mechanisms — something that allows us to focus on a few elements, like a few words, in the calculations, much as our brain does. So we put something like this into these artificial neural nets and it turned out to be extremely useful. It gave rise first to much better machine translation and then to much better language models, and these kinds of attention mechanisms are used more and more in today's state-of-the-art systems.
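To make "focusing on a few elements" concrete, here is a minimal sketch of a content-based attention step. The 2014 work with Bahdanau and Cho used an additive variant for machine translation; this simplified dot-product form is for illustration only. A query scores each input element, the scores are normalized with a softmax, and the output is the resulting weighted average, so most of the weight lands on the few most relevant elements.

```python
# Minimal illustration of attention (simplified; not the exact 2014 formulation):
# a query attends over a sequence of vectors and returns their weighted average.
import numpy as np

def attention(query, keys, values):
    scores = keys @ query / np.sqrt(query.size)   # relevance of each element to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax: weights sum to 1
    return weights @ values, weights              # weighted average, plus the focus pattern

rng = np.random.default_rng(1)
seq = rng.normal(size=(5, 4))                     # 5 input elements ("words"), 4-dim each
query = seq[2] + 0.1 * rng.normal(size=4)         # a query resembling the third element
context, weights = attention(query, seq, seq)
print(weights.round(2))                           # most of the weight falls on the matching element
```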
When did you start to be concerned that the field of artificial intelligence is moving too fast?
I've been concerned about social impact for many years already. A decade ago, when large companies started using machine learning, neural nets, deep learning for advertising, I was a bit worried that it would end up being used to manipulate people. But it's really this year with ChatGPT that my concerns have increased by a whole notch.
Essentially, the question that I've been worried about is: we are on a trajectory to build machines that may eventually surpass us in many areas, and potentially on everything. And what's going to happen along that trajectory? Is the power of the tool going to become something dangerous in the wrong hands? Or could we even lose control of these systems if they are smarter than us? These are all important questions to which, unfortunately, we don't have the answers.
And the answers are both scientific — like how do we make sure any AI system does what we want, and we don't have the answer to that — and they are political, or about governance. What sort of regulation and laws and international treaties should we put in place to make sure that such a powerful and potentially extremely useful tool is not harming people and society?
WATCH | Bengio on the pitfalls of AI:
Montreal-based AI godfather warns about dangers of artificial intelligence
Now, you're one of the people who brought us this technology. Why didn't you anticipate the dangers that it might pose?
I should have. Well, it sounded like science fiction before I saw the incredible abilities of these modern systems in 2022-2023. I thought that, well, it would be decades, if not centuries, before we got to human-level performance.
But I think there are other reasons that are psychological. You know, researchers are human beings. We may reason in ways that are aligned with what motivates us, what makes us feel good about our work. It's hard to suddenly consider your work as something that could be dangerous for society, and you may look the other way. So I think there are many factors here that explain also even why now, it's difficult for many in the community to take these risks seriously.
So what do you think the recipe for regulating the technology is?
The first thing is not to be discouraged, and to think about the little things that we can do as quickly as possible that can move the needle. So we first need to get governments to understand that this is very powerful technology, like any other scientific output, that can change and transform society. It needs to be done carefully. And Canada has been moving fairly well, preparing a law that would already do a good job.
But we need to do more: we need to work at the international level to make sure that as many countries as possible work together to harmonize their legislation, to make sure, for example, that all of these potentially dangerous systems are registered, and that the companies or organizations working on them are taking the right precautions.
We want to make sure that there is also democratic oversight. So what I mean by this is, well, yes, regulators need to know what is going on, but also media and academics and civil society. Because we are building tools that will be more and more powerful, and power concentration is sort of the opposite of democracy. We need to make sure that there are checks and balances, so that this power is used for good.
What is your optimistic vision for the future of artificial intelligence?
For many years now, and especially since the beginning of the pandemic, I've been interested in how machine learning and deep learning could be used to help scientific discovery in many fields. And in particular, I think that it's very likely that we'll see a revolution in some of these fields. I think of biology especially, because we are now generating huge quantities of data, for example, about what is going on in your cells. You know, your cells are incredibly complex machines. But we now have ways of peeking and poking and measuring huge quantities of what is going on. And that provides information that the human brain cannot digest directly. But AI can really help us form theories and models that could help us understand that on a scientific level, but also cure disease. Once we understand how something works, we can design the drugs.
This Q&A has been edited for length and clarity. Interview with Yoshua Bengio produced by Jim Lebans.
Corruption of the information ecosystem:
It is a troubling reality that the trustworthiness of information is being undermined by content that is fraudulent, dishonest and fake.
Content moderation by social media corporations is ineffective.
Shut down the factory of lies with government regulation to ensure the integrity of facts and hold corporations accountable.
Runaway AI needs to be reined in!
https://www.cbc.ca/listen/live-radio/1-63-the-current/clip/16092371-does-yoshua-bengio-regret-helping-create-ai
https://www.cbc.ca/player/play/video/9.6512470
Historian Yuval Noah Harari says AI is the first technology that is not just a tool, but “an active agent” doing things we didn’t anticipate and might lose control over. He spoke to Matt Galloway in front of a live audience in Toronto about the threat of unknown unknowns, and why he prefers the term "alien intelligence." You can hear the second part of this conversation here.
Aired: Sep. 17, 2024
Part 2:
https://www.cbc.ca/listen/live-radio/1-63-the-current/clip/16095307-why-yuval-noah-harari-recommends-information-diet
The 'godfather of AI' says he's worried about 'the end of people'
A pioneering British-Canadian computer scientist (Geoffrey Hinton) has quit his job at Google and is issuing dire warnings about the rapid advancement of artificial intelligence. But some experts in the AI field caution against such dystopian visions.