AI companies' safety practices fail to meet global standards, study shows

Dec 3 (Reuters) - The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, are "far short of emerging global standards," according to a new edition of the Future of Life Institute's AI safety index released on Wednesday.

The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.

The study comes amid heightened public concern about the societal impact of smarter-than-human systems capable of reasoning and logical thinking, after several cases of suicide and self-harm were tied to AI chatbots.

"Despite recent uproar over AI-powered hacking and AI driving people to ​psychosis and self-harm, US AI companies remain less regulated ‌than restaurants and continue lobbying against binding safety standards," said Max Tegmark, MIT Professor and Future of Life President.

The AI race also shows no signs of slowing, with major tech companies committing hundreds of billions of dollars to upgrading and expanding their machine learning efforts.

The Future of Life Institute is a non-profit organization that has raised concerns about the risks intelligent machines pose to humanity. Founded in 2014, it was supported early on by Tesla CEO Elon Musk.

In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence until the public demands it and science paves a safe way forward.

(Reporting by Zaheer Kachwala in Bengaluru; Editing by Shinjini Ganguli)
