A ‘godfather’ of AI is raising a red flag over AI agents

  • Talk of AI agents is everywhere in Davos. AI pioneer Yoshua Bengio warned against them.
  • Bengio said agents powered by AGI could lead to “catastrophic scenarios”.
  • Bengio is researching how to build non-agentic systems to keep agents in check.

Artificial intelligence pioneer Yoshua Bengio has been at the World Economic Forum in Davos this week with a message: AI agents could end badly.

The topic of AI agents – artificial intelligence that can act independently of human input – has been one of the loudest at this year’s meeting in snowy Switzerland. The event has drawn a host of AI pioneers to debate where AI goes next, how it should be governed, and when we might see signs of machines that can reason like humans – a milestone known as artificial general intelligence (AGI).

“All of the catastrophic scenarios with AGI or superintelligence happen if we have agents,” Bengio told BI in an interview. He said he believes it is possible to achieve AGI without building agentic systems.

“All the AI for science and medicine, all the things people care about, is not agentic,” Bengio said. “And we can keep building more powerful systems that are not agents.”

Bengio, a Canadian research scientist whose early work on deep learning and neural networks laid the groundwork for the modern AI boom, is considered one of the “godfathers of AI” alongside Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned about the potential harms of AI and called for collective action to mitigate the risks.

After two years of experimentation, businesses are recognizing the tangible return on investment that AI agents can deliver, and agents could enter the workforce in a meaningful way this year. OpenAI, which does not have a presence in Davos this year, this week revealed an AI agent that can browse the web for you and perform tasks such as booking restaurant reservations or adding groceries to your basket. Google has shown a similar tool of its own.

The problem, as Bengio sees it, is that people will keep building agents no matter what, especially as rival companies and countries worry that others will get to agentic AI before them.

“The good news is that if we build non-agentic systems, they can be used to control agentic systems,” he told BI.

One way to do that would be to build more sophisticated “monitors,” though this would require significant investment, Bengio said.

He also called for national regulation that would prevent companies from building agentic models without first proving that the system would be safe.

“We can advance our science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where they come from, and then make the technological investment to make it happen before it’s too late, and we build things that can destroy us,” he said.

“I want to raise a red flag”

Before talking to BI, Bengio spoke on a panel about AI safety with Google DeepMind’s CEO, Demis Hassabis.

“I want to raise a red flag. This is the most dangerous path,” Bengio told the audience when asked about AI agents. He pointed to the ways AI can be used for scientific discovery, such as DeepMind’s breakthrough in protein folding, as examples of how AI can still be profound without being agentic. Bengio said he believes it is possible to get to AGI without giving AI agency.

“It’s a bet, I agree,” he said, “but I think it’s a worthwhile bet.”

Hassabis agreed with Bengio that steps must be taken to mitigate the risks, such as cybersecurity protections or testing agents in simulations before releasing them. This would only work if everyone agreed to build them the same way, he added.

“Unfortunately, I think there is an economic gradient, beyond the science and the workers, where people want their systems to be agentic,” Hassabis said. “When you say ‘recommend a restaurant,’ why wouldn’t you want the next step, which is to book the table?”
