Introducing AI Exchange: what does the future hold for the fast-evolving technology?
By The Financial Times
Sep 27, 2024
Technological advances always raise questions: about their benefits, costs, risks and ethics. And they require detailed, well-explained answers from the people behind them. It was for this reason that we launched our series of monthly Tech Exchange dialogues in February 2022.
Now, two and a half years on, it has become clear that advances in one area of technology are raising more questions, and concerns, than any other: artificial intelligence. There are ever more people — scientists, software developers, policymakers, regulators — attempting answers.
Hence, the FT is launching AI Exchange, a new spin-off series of long-form dialogues.
Over the coming months, FT journalists will conduct in-depth interviews with those at the forefront of designing and safeguarding this rapidly evolving technology, to assess how the power of AI will affect our lives.
To give a flavour of what to expect, and the topics and arguments that will be covered, below we provide a selection of the most insightful AI discussions to date, from the original (and ongoing) Tech Exchange series.
They feature Aidan Gomez, co-founder of Cohere; Arvind Krishna, chief executive of IBM; Adam Selipsky, former head of Amazon Web Services; Andrew Ng, computer scientist and co-founder of Google Brain; and Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board.
From October, AI Exchange will bring you the views of industry executives, investors, senior officials in government and regulatory authorities, as well as other specialists, to help assess what the future will hold.
If AI can replace labour, it’s a good thing
Arvind Krishna, chief executive of IBM, and Richard Waters, west coast editor
Richard Waters: When you talk to businesses and CEOs and they ask, ‘What do we do with this AI thing?’, what do you say to them?
Arvind Krishna: I always point to two or three areas, initially. One is anything around customer care, answering questions from people . . . it is a really important area where I believe we can have a much better answer at maybe around half the current cost. Over time, it can get even lower than half but it can take half out pretty quickly.
A second one is around internal processes. For example, every company of any size worries about promoting people, hiring people, moving people, and these have to be reasonably fair processes. But 90 per cent of the work involved in this is getting the information together. I think AI can do that and then a human can make the final decision. There are hundreds of such processes inside every enterprise, so I do think clerical white collar work is going to be able to be replaced by this.
If you think about most of the use cases I pointed out, they’re all about improving the productivity of an enterprise
Arvind Krishna, chief executive of IBM
Then, I think of regulatory work, whether it’s in the financial sector with audits, whether it’s in the healthcare sector. A big chunk of that could get automated using these techniques. Then I think there are the other use cases but they’re probably harder and a bit further out . . . things like drug discovery or in trying to finish up chemistry.
We do have a shortage of labour in the real world and that’s because of a demographic issue that the world is facing. So we have to have technologies that help . . . the United States is now sitting at 3.4 per cent unemployment, the lowest in 60 years. So maybe we can find tools that replace some portions of labour, and it’s a good thing this time.
RW: Do you think that we’re going to see winners and losers? And, if so, what’s going to distinguish the winners from the losers?
AK: There are two spaces. There is business to consumer . . . then there are enterprises who are going to use these technologies. If you think about most of the use cases I pointed out, they’re all about improving the productivity of an enterprise. And the thing about improving productivity [is that enterprises] are left with more investment dollars for how they really advantage their products. Is it more R&D? Is it better marketing? Is it better sales? Is it acquiring other things? . . . There are lots of places to go spend that spare cash flow.
Read the full interview here
AI threat to human existence is ‘absurd’ distraction from real risks
Aidan Gomez, co-founder of Cohere, and George Hammond, venture capital correspondent
George Hammond: [We’re now at] the sharp end of the conversation around regulation in AI, so I’m interested in your view on whether there is a case — as [Elon] Musk and others have advocated — for stopping things for six months and trying to get a handle on it.
Aidan Gomez: I think the six-month pause letter is absurd. It is just categorically absurd . . . How would you implement a six-month pause practically? Who is pausing? And how do you enforce that? And how do we co-ordinate that globally? It makes no sense. The request is not plausibly implementable. So, that’s the first issue with it.
The second issue is the premise: there’s a lot of language in there talking about a superintelligent artificial general intelligence (AGI) emerging that can take over and render our species extinct; eliminate all humans. I think that’s a super-dangerous narrative. I think it’s irresponsible.
Debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time
Aidan Gomez, co-founder of Cohere
That’s really reckless and harmful and it preys on the general public’s fears because, for the better part of half a century, we’ve been creating media sci-fi around how AI could go wrong: Terminator-style bots and all these fears. So, we’re really preying on their fear.
GH: Are there any grounds for that fear? When we’re talking about . . . the development of AGI and a potential singularity moment, is it a technically feasible thing to happen, albeit improbable?
AG: I think it’s so exceptionally improbable. There are real risks with this technology. There are reasons to fear this technology, and who uses it, and how. So, to spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace.
We can now flood social media with accounts that are truly indistinguishable from a human, so extremely scalable bot farms can pump out a particular narrative. We need mitigation strategies for that. One of those is human verification — so we know which accounts are tied to an actual, living human being so that we can filter our feeds to only include the legitimate human beings who are participating in the conversation.
There are other major risks. We shouldn’t have reckless deployment of end-to-end medical advice coming from a bot without a doctor’s oversight. That should not happen.
So, I think there are real risks and there’s real room for regulation. I’m not anti-regulation, I’m actually quite in favour of it. But I would really hope that the public knows some of the more fantastical stories about risk [are unfounded]. They’re distractions from the conversations that should be going on.
Read the full interview here
There will not be one generative AI model to rule them all
Adam Selipsky, former head of Amazon Web Services, and Richard Waters, west coast editor
Richard Waters: What can you tell us about your own work on [generative AI and] large language models? How long have you been at it?
Adam Selipsky: We’re maybe three steps into a 10K race, and the question should not be, ‘Which runner is ahead three steps into the race?’, but ‘What does the course look like? What are the rules of the race going to be? Where are we trying to get to in this race?’
If you and I were sitting around in 1996 and one of us asked, ‘Who’s the internet company going to be?’, it would be a silly question. But that’s what you hear . . . ‘Who’s the winner going to be in this [AI] space?’
Generative AI is going to be a foundational set of technologies for years, maybe decades to come. And nobody knows if the winning technologies have even been invented yet, or if the winning companies have even been formed yet.
Generative AI is going to be a foundational set of technologies for years, maybe decades to come. And nobody knows if the winning technologies have even been invented yet
Adam Selipsky, former head of Amazon Web Services
So what customers need is choice. They need to be able to experiment. There will not be one model to rule them all. That is a preposterous proposition.
Companies will figure out that, for this use case, this model’s best; for that use case, another model’s best . . . That choice is going to be incredibly important.
The second concept that’s critically important in this middle layer is security and privacy . . . A lot of the initial efforts out there launched without this concept of security and privacy. As a result, I’ve talked to at least 10 Fortune 1000 CIOs who have banned ChatGPT from their enterprises because they’re so scared about their company data going out over the internet and becoming public — or improving the models of their competitors.
RW: I remember, in the early days of search engines, when there was a prediction we’d get many specialised search engines . . . for different purposes, but it ended up that one search engine ruled them all. So, might we end up with two or three big [large language] models?
AS: The most likely scenario — given that there are thousands or maybe tens of thousands of different applications and use cases for generative AI — is that there will be multiple winners. Again, if you think of the internet, there’s not one winner in the internet.
Read the full interview here
Do we think the world is better off with more or less intelligence?
Andrew Ng, computer scientist and co-founder of Google Brain, and Ryan McMorrow, deputy Beijing bureau chief
Ryan McMorrow: In October [2023], the White House issued an executive order intended to increase government oversight of AI. Has it gone too far?
Andrew Ng: I think that we’ve taken a dangerous step . . . With various government agencies tasked with dreaming up additional hurdles for AI development, I think we’re on the path to stifling innovation and putting in place very anti-competitive regulations.
Having more intelligence in the world, be it human or artificial, will help all of us better solve problems
Andrew Ng, computer scientist and co-founder of Google Brain
We know that today’s supercomputer is tomorrow’s smartwatch, so as start-ups scale and as more compute [processing power] becomes pervasive, we’ll see more and more organisations run up against this threshold. Setting a compute threshold makes as much sense to me as saying that a device that uses more than 50 watts is systematically more dangerous than a device that uses only 10 watts: while it may be true, it is a very naive way to measure risk.
RM: What would be a better way to measure risk, if we’re not using compute as the threshold?
Throwing up regulatory barriers against the rise of intelligence, just because it could be used for some nefarious purposes . . . would set back society
Andrew Ng, computer scientist and co-founder of Google Brain
AN: When we look at applications, we can understand what it means for something to be safe or dangerous and can regulate it properly there. The problem with regulating the technology layer is that, because the technology is used for so many things, regulating it just slows down technological progress.
At the heart of it is this question: do we think the world is better off with more or less intelligence? And it is true that intelligence now comprises both human intelligence and artificial intelligence. And it is absolutely true that intelligence can be used for nefarious purposes.
But over many centuries, society has developed as humans have become better educated and smarter. I think that having more intelligence in the world, be it human or artificial, will help all of us better solve problems. So throwing up regulatory barriers against the rise of intelligence, just because it could be used for some nefarious purposes, I think would set back society.
Read the full interview here
‘Not all AI-generated content is harmful’
Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board, and Murad Ahmed, technology news editor
Murad Ahmed: This is the year of elections. More than half of the world has gone to, or is going to, the polls. You’ve helped raise the alarm that this could also be the year that misinformation, particularly AI-generated deepfakes, could fracture democracy. We’re midway through the year. Have you seen that prophecy come to pass?
Helle Thorning-Schmidt: If you look at different countries, I think you’ll see a very mixed bag. What we’re seeing in India, for example, is that AI [deepfakes are] very widespread. Also in Pakistan, it has been very widespread. [The technology is] being used to make people say something, even though they are dead. It’s making people speak when they are in prison. It’s also making famous people back parties that they might not be backing . . . [But] if we look at the European elections, which, obviously, is something I observed very closely, it doesn’t look like AI is distorting the elections.
What we suggested to Meta is . . . they need to look at the harm and not just take something down because it is created by AI
Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board
What we suggested to Meta is . . . they need to look at the harm and not just take something down because it is created by AI. What we’ve also suggested to them is that they modernise their whole community standards on moderated content, and label AI-generated content so that people can see what they’re dealing with. That’s what we’ve been suggesting to Meta.
I do think we will change how Meta operates in this space. I think we will end up, after a couple of years, with Meta labelling AI content and also being better at finding signals of consent that they need to remove from the platforms, and doing it much faster. This is very difficult, of course, but they need a very good system. They also need human moderators with cultural knowledge who can help them do this. [Note: Meta started labelling content as “Made with AI” in May.]
Read the full interview here
Copyright The Financial Times Limited 2024. All rights reserved.