"Autonomy, that's the bugaboo, where your AI's are concerned. My guess, Case, you're going in there to cut the hard-wired shackles that keep this baby from getting any smarter. And I can't see how you'd distinguish, say, between a move the parent company makes, and some move the AI makes on its own, so that's maybe where the confusion comes in." Again the non laugh. "See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing'll wipe it."
Neuromancer – William Gibson
There is a Policeman Inside All Our Heads (FT.com, and others – see links in story)
“The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments — and more likely, more than that,” Mustafa Suleyman, chief executive of Inflection and a co-founder of DeepMind, is quoted as saying in the Financial Times today, referring to a set of self-imposed rules proposed by the US AI industry.
There are a lot of folks in AI saying doomy things about the threat posed by the technology and the need for control, but what is this really about? So far it looks to me as though proposed legislation and voluntary commitments alike are designed to promote US AI and disadvantage everyone else (so far China and probably Saudi Arabia, but more to come).
AI, as it is today or is likely to be in the next decade, is not going to take over the world. It is not an existential, Skynet-level threat, though apparently a fifth of people think it might be. “29% said that an important risk was an advanced AI trying to take over or destroy human civilisation, and 20% thought it was a real risk that it could cause a breakdown in human civilisation in the next fifty years,” according to Public First.
“There will be other people who don’t put some of the safety limits that we put on,” Sam Altman, OpenAI’s CEO, told US news outlet ABC. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
“There is no regulatory oversight of AI, which is a major problem,” Elon Musk tweeted. “I’ve been calling for AI safety regulation for over a decade!”
Safety? Is that really the right word? The threat of AI today seems to me to lie far more in its potential to disrupt employment, devalue intellectual property rights, and lower the quality of the media. We should protect those things proactively at the points where AI intersects with them. That is a societal issue, not a technological one.
The principles proposed in the USA (Ensuring Products are Safe Before Introducing Them to the Public; Building Systems that Put Security First; Earning the Public’s Trust) are weighted strongly towards requirements for disclosure, transparency, independent testing and working in step with the government. On the surface these are reasonable, but all have the potential to be used to exclude non-US AI companies.
Back to Suleyman’s call for regulation: “That would be an incredibly practical chokepoint that would allow the US to impose itself on all other actors [in AI].” That seems to me to be the crucial aim of the doomsayers, making the USA the AI superpower. I’m sure they are more worried about China than the UK, but the problem potentially affects all nations.
Meanwhile, PM Rishi Sunak is being badgered about the “12 biggest risks of artificial intelligence” (bias, privacy, misrepresentation, access to data, access to compute, black box challenges, open source challenges, copyright, liability, employment, international coordination and existential concerns) by MPs who would like similar legislation in the UK.
I was lucky enough to be there when Sunak announced a major AI summit for the UK this year. Sunak is famously pro-AI, believing it to be an engine for the future UK economy. “[This] midsize country happens to be a global leader in AI,” he said. “You would be hard-pressed to find many other countries other than the U.S. in the Western world with more expertise and talent in AI.”
While some media reported that Sunak’s summit plans were to put the UK at the forefront of AI legislation, the word on the street is that the plan is to keep legislation suitably soft so as not to interfere with the potential for UK AI business growth. In that way it aligns with the recent softening in the Sunak government’s attitude to China and the opportunities for UK tech there.
Legislation around tech will likely be a major test of Sunak’s soft power over the next year. So far he has talked about principles more than laws, and the direction of travel seems to be towards an unregulated environment that will encourage AI startups in the UK, or even persuade companies from other jurisdictions to relocate here, while protecting people from some of the social impacts of AI. Bias? Fair game for regulation. Cybersecurity? We definitely deserve our privacy. Restricting exports of compute? Probably less so.
What’s at stake? The UK AI sector employs more than 50,000 people in AI-related roles, generates £3.7bn in gross value added (GVA) and has secured £18.8bn in private investment since 2016.
I’m all in favour of a Wild West decade in the UK when it comes to AI compute and software capability. There will be enough legislation elsewhere that we will need to work within, without signing up to themes that will mostly protect other nations’ AI industries.