Artificial Intelligence – Davos World Economic Forum Jan 15-19

Traditionally, each month’s “Reference Materials” section includes, inter alia, book reviews from –

The New York Times
The Wall Street Journal
The Washington Post

Last month’s Davos WEF focused on four main areas, one of which was Artificial Intelligence (AI) as a driving force for the economy and society.

What follows is the portion of the notes of an HLS classmate who attended the WEF that deals with AI.

*********************************************************

GENERATIVE ARTIFICIAL INTELLIGENCE (AI)

In various sessions, it was stressed that generative AI is not simply another technological advance but will be as transformative for our future as the Gutenberg printing press.

Sam Altman, CEO of OpenAI and creator of ChatGPT, noted that all people must be involved in the development of AI technology, because AI will not replace our understanding of each other: “Humans know what other humans want. Humans are going to have better tools. We’ve had better tools before, but we’re still very focused on each other.” He believes the world needs more AI infrastructure (fab capacity, energy, and data centers) than companies are currently planning to build, and that building massive-scale AI infrastructure and a resilient supply chain is crucial to economic competitiveness.

Ursula von der Leyen, President of the European Commission, stated that “Our future competitiveness depends on AI adoption in our daily businesses, and Europe must up its game and show the way to responsible use of AI, which can enhance human capabilities, improve productivity and serve society.”

Generative AI will require special investment in jobs, skills, and people. But Jeff Maggioncalda, CEO of Coursera, noted that AI technology also has the potential to help people gain new skills through increasingly personalized learning. Alexi Robichaux, CEO of BetterUp, a human transformation platform and virtual coaching company, stressed that managers also need help to support their teams through a skills transition, and that partnerships between the public and private sectors are vital.


At one AI session I attended, entitled Generative AI: Steam Engine of the Fourth Industrial Revolution, Julie Sweet, CEO of Accenture, urged that workforces of the future must be prepared and able to take advantage of all that AI offers. She said that dealing with AI is “very difficult” and that Accenture has some 700 AI-related projects. AI will affect “every part of an enterprise,” and companies that are not ready to deploy it will be at a major competitive disadvantage. She noted that it was critical to “get regulation right,” and that there was a danger of trying to regulate a technology the regulators and legislators do not understand. The impact of AI will require reskilling the workforce. AI will create good-paying jobs, but the current workforce cannot fill them unless it is reskilled. She noted that Accenture had done a large survey and found that people liked learning and gaining digital literacy. AI will require a change in basic education.

At that same session, Senator Mike Rounds (R-SD), who serves on the Senate Intelligence and Armed Services Committees, discussed the impact of AI on national security across air, land, sea, and cyberspace. It will speed everything up, and countries with the capacity to use AI will have a “leg-up.” It will be a question of how many months ahead of its adversaries the U.S. will be. Restricting exports of technology to China is not a long-term fix, but it helps the U.S. stay a few months ahead. He noted that Senate Majority Leader Chuck Schumer (D-NY) has held a number of bipartisan informational briefing sessions to help Senators understand the implications of AI. The technology opens questions about patents and copyrights, and about how to regulate without killing innovation and advances in health care, where it can help find cures for cancer and Parkinson’s disease. Having skilled immigrants can help. South Dakota has only 800,000 residents, but South Dakota University is trying to become a leader in AI technology.

On that panel, UAE Minister for Artificial Intelligence Omar Sultan Al Olama (whom I had gotten to know during a visit to the UAE in 2019) stressed that AI is a truly “revolutionary technology.” Companies must scale up on AI, and if they do not embrace it, they will not succeed. Just as smartphones give people a computer in their hands, so too will AI be embedded in phones, cars, manufacturing equipment, and the power sector. For example, your car will be able to tell you when and where there is a problem in the car and set up an appointment to get it fixed. AI will increase productivity, schedule your meetings, and engage in conversations.

He noted there was a “negative stigma” about AI, connecting it with thinking robots and fears that it will “destroy humanity.” But he insisted AI will have a positive impact in areas like health care. At the same time, it will be important to combat its negative consequences. It is imperative to “combat ignorance in governments so they understand AI.” The UAE sets aside October 29 each year for a “technology day,” with seminars nationwide on AI, so people see “how easy” it is to use.

Arvind Krishna, CEO of IBM, said that by the end of this decade, $4 trillion will be invested in AI. Companies that embrace it can see a 30%-40% increase in productivity. IBM is already using it for customer service, accounts receivable, human resources, finance, and other parts of the company. It should not be seen as displacing jobs but as increasing their productivity. But companies that do not embrace it will fail. AI is moving at 10 times the pace of just a year or so ago. There are regulatory challenges, including the misuse of data. But AI is an “open eco-system,” and you “can’t keep digital technology within national boundaries…digital technology is hard to control.” AI will particularly impact white-collar workers, who will require “critical thinking skills.” Workers can expect to change jobs as many as five times during their careers and will require continual upgrading of their skills due to the impact of AI, so they are not fearful of change. IBM is doing this with its 160,000 employees.

Cristiano Amon, CEO of Qualcomm, said there is a focus on building AI capability in cars, phones, and data centers. Respect for intellectual property and patent protection is key to protecting the value of innovation. Another key requirement is to keep the AI platform open, so there is an “open eco-system.” He stressed how digitization will transform our lives, from health care and retail to energy and the future of work. Leaders must decide which disruptive ideas to take forward.

One of the most interesting AI sessions was a private lunch hosted by Sandbox AQ, led by its CEO Jack Hidary, the author of “AI or Die,” which was handed out at the lunch. Sandbox AQ combines AI with quantum computing to solve hard problems facing society. The other participants were Paul Daugherty, Chief Technology Officer at Accenture; Maurice Levy, Chair of the Publicis Groupe; and Andy Cohen, Executive Chair of JP Morgan Private Bank.

Paul Daugherty noted that artificial intelligence was 20 years old, but “generative artificial intelligence,” as with ChatGPT, is new and will impact every job in every company. Its impact will make the concerns of Y2K at the turn of the century “seem like a teacup.” It provides human-like content. It poses many risks, from hacking to invasion of privacy in the passing of data, and phishing. AI could even steal encrypted data. He noted that 80% of Accenture’s workforce was using AI, and a survey indicated 95% felt it helped them. But on the downside, 50% of their employees felt the company “will use AI to get rid of them.” In fact, some estimate AI could replace 40% of jobs.

Maurice Levy stressed the need for a consensus of the 27 EU member states on how to regulate AI without stifling innovation and putting the EU at a competitive disadvantage with the U.S.

Andy Cohen said that JP Morgan was already widely using AI and that it was driving $1.5 billion in value for the firm.

During the discussion, it was mentioned that if we get AI wrong, “it will cripple us,” but that it offers great opportunities as well as risks. Paul Alivisatos, President of the University of Chicago, noted that AI will “create knowledge we have never had before,” including discoveries about the immune system that can help find cures for MS and diabetes. It will “change the life of every scientist.”

John Tuttle, Vice Chairman of the New York Stock Exchange and President of the New York Stock Exchange Institute, noted that the NYSE manages the flow of $1.7 trillion per day, and that AI-related companies will be the next wave of companies to go public.

Others noted that the median age in India is 28 and that almost 200 million Indians have no banking relationship. AI can help them. Indeed, there are three billion unbanked people in the world who could benefit from AI. It can help develop everything from renewable energy to payment systems.

Jack Hidary closed the lunch discussion with a chilling prediction. He noted that within five to seven years, AI would be combined with robotics to give robots human-like qualities. This could be especially helpful in delivering health care in staff-short hospitals. He felt that while AI would create new jobs in the long term, in the short term it could negatively impact many jobs.

One of the most informative AI sessions was Technology in a Turbulent World, moderated by Fareed Zakaria, with Sam Altman, CEO of OpenAI; Marc Benioff, CEO of Salesforce, Inc.; Julie Sweet, CEO of Accenture; Jeremy Hunt, UK Chancellor of the Exchequer; and Albert Bourla, CEO of Pfizer, Inc.

Sam Altman, CEO of OpenAI, one of the creators of generative AI and the creator of ChatGPT, said that “a very good sign about this new tool is that even with its very limited current capability and its very deep flaws, people are finding ways to use it for great productivity gains or other gains and understand the limitations…people have found ways to make ChatGPT super useful to them and understand what not to use it for.” AI has been “somewhat demystified, because people really use it now.” He mused that humans are “pretty forgiving of other humans making mistakes, but not really at all forgiving of computers making mistakes…the hardest part is when it’s right 99.99% of the time, and you let your guard down.” As it develops, AI will help us make decisions after examining various options. “We will be able to do more to X-ray the brain of an AI than to X-ray the brain of you and I and understand what those connections are.”

Generative AI will not replace the role of humans; there “will be human roles where you want another human,” who “know what other humans want very well.” But “I admit it does feel different this time. General purpose cognition feels so close to what we all treasure about humanity, that it does feel different.” Humans will have access to a lot more capability and will still make decisions about what should happen in the world.

Altman said that “this is a technology that is clearly very powerful and that we cannot say with certainty exactly what’s going to happen. And that’s the case with all new major technological revolutions…But it’s easy to imagine…that it could go very wrong.” He explained that “We believe in iterative deployment, so we put this technology out into the world along the way, so people get used to it, so we have time as a society or institutions have time” to understand and adapt to it. “If you look at the progress of GPT-3 and GPT-4, about how well it can align itself to a set of values, we’ve made massive progress there. Now there’s a harder question than the technical one, which is who gets to decide what those values are? But from the technological approach, there’s room for optimism.”

He felt “it’s good that people are afraid of the downsides of this technology; it’s good that we’re talking about it; that we and others are being held to a high standard.” He said frankly, “I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us…it is on us to figure out a way to get the input from society about how we’re going to make these decisions…what the safety thresholds are, and what kind of global coordination we need to ensure that stuff that happens in one country does not super negatively impact another—to show the picture. So I like that people are nervous about it…the only way to do that is to put the technology in the hands of people and let society and the technology co-evolve step-by-step, with a very tight feedback loop and course correction; build those systems that deliver tremendous value while meeting the safety requirements.” Frankly, “no one knows what happens next,” which is the sign above his desk.

He was asked about the lawsuit brought by the New York Times against his company and other AI companies for using Times articles as an input to make language predictions without compensating the paper. He sharply rebutted the Times, saying OpenAI had wanted to pay the paper “a lot of money to display their content” and was surprised to be sued. If someone wants to ask ChatGPT what happened at Davos today, OpenAI would like to display that content: here is the real-time information, “and then we’d like to pay for that…But it’s displaying that information when the user queries, not using it to train the model.”

He added that these models “will be able to take massive amounts of higher-quality data during their training process and think harder about it and learn more. You don’t need to read 2,000 biology textbooks to understand high-school-level biology; you may need to read only one.” But “what we want is to find new economic models that work for the whole world, including content owners…There’s a great need for new economic models…what it means to train these models is going to change a lot in the next few years.”

Altman said that “as the world gets closer to AGI, the stakes, the stress, the level of tension, is all going to go up…every one step we take closer to very powerful AI, everybody’s characters get like, plus 10 crazy points. It’s a very stressful thing…we need more resilience, more time thinking about all of the strange ways things can go wrong; that’s really important.”

Marc Benioff, CEO of Salesforce, Inc., said that trust will be the key element as digital doctors and digital people emerge. He went to the UK Safety Summit to help “cross the bridge of trust…it was the first time that technology leaders showed up and every government technology minister from every country…I’ve never really seen anything like it.” We realized “we are at this threshold moment. We’re not totally there yet.” Salesforce is also using Sam’s product, “and we’re having this incredible experience with AI. We really have not quite had this level of interactivity before. But we don’t trust it quite yet.” Yet already his radiologist is using AI to help read his CT scan and his MRI. We are just about to get to the breakthrough where “we’re going to ask ourselves: do we trust it?”

He expressed the hope that regulators of the AI industry would do a better job than those who dealt with social media: “We want a good healthy partnership with these regulators.” He noted that today AI is not at the point of replacing human beings but of “augmenting them.” Customers are coming to Salesforce and saying they want to use AI to get more margin and more productivity; he is not sure whether they want to replace or add people. But Salesforce’s own service professionals are using AI not just to provide service for their products but to sell products and add value to their customers: “it’s a miraculous thing; their morale went way up. They can’t believe what they’ve been able to achieve.”

But “it could really go wrong, which we don’t want. We just want to make sure that people don’t get hurt. That’s why we’re going to the Safety Summit. We don’t want to see an AI Hiroshima. We want to make sure that we’ve got our head around this now.”

Julie Sweet, CEO of Accenture, compared this moment to the period when emails took the place of faxes, and then when attachments were added to emails. But because AI is such an advance, it is important to “implement it with the right safeguards.” She felt it was a “huge opportunity…it’s also doing a lot of great things for our people who don’t want to spend their time reading and trying to figure out things and would love to spend time with clients, with customers.”

In 2019 Accenture had 500,000 people; it now has 740,000 and has “introduced technology training to everyone.” They want “responsible AI, where if someone uses AI at Accenture it’s automatically routed, and assessed for risk and then mitigations are pointed out…That will be ubiquitous in 12 to 14 months across responsible companies,” she predicted.

She noted that we had all learned the lessons of having different regulations in data privacy, so we need to “find common grounds…let’s have common standards…let’s use this as a way of collaboration, taking out some of the geopolitics of it, because there is really good sense in it.” She stressed that we have to have “a good sense of humility around this. Make sure we’re talking to each other. That is why Davos is so important.”

Albert Bourla, CEO of Pfizer, Inc., said that AI in different forms has existed for many years. The best example is the oral pill for COVID, which was developed in four months, when new drugs usually take four years; millions of lives were saved because of that. With AI we are now moving to drug design instead of drug discovery: instead of making 3 million molecules, Pfizer makes 600, using tremendous computational power and algorithms that help design the molecules most likely to be successful, and then looks for the best among them.

He said he was certain “the benefits clearly outweigh the risks, but I think we need regulations. Now some countries are more focused on how to protect against the bad players, and some are more focused on how to enable scientists to do great things. I think we need to find the right balance.” But the combination of technology and science offers great benefits.

Jeremy Hunt, Chancellor of the Exchequer of the UK, felt that regulation needs to come with a “light touch, because it’s in such an emerging stage, you can kill the golden goose before it had a chance to grow.” He asserted that London is the second largest hub for AI after San Francisco, and that the UK has just become the world’s third largest tech economy after the U.S. and China. If AI can shrink the time it takes to get a vaccine to deal with the next pandemic, “then that’s a massive step forward for humanity.” Moreover, if AI can transform the way our public services are delivered and lead to more productive public services with lower tax levels, “that is a very big win.”

At the same time, we have to be sure a “rogue actor isn’t going to be able to use AI to build nuclear weapons.” That is why Prime Minister Rishi Sunak organized the AI Safety Summit at the end of last year.

He felt that “when it comes to setting global AI standards, it’s very important that they reflect liberal democratic values.” There are choices, and “the choice we need to make is to harness it so that it is a force for good…that means talking to countries like China, because one of the ways it will be a force for bad is if it just became a tool in a new geostrategic superpower race, with much of the energy put into weapons, rather than things that could actually transform our daily lives.” He stressed that this is why it is important to have a dialogue with countries like China over common ground. We have control over the laws and regulations and therefore the “ability to shape this journey.”

“With AI, the challenge is to make sure the benefits are spread throughout the world, North and South, developing world and developed world, and not just concentrated in advanced economies. Because otherwise, that will deepen some of the fractures that are already taking us in the wrong direction.”

Hard Power of AI was a session moderated by Andrew Sorkin of CNBC, with Nick Clegg, President of Global Affairs at Facebook’s Meta Platforms, Inc. and former UK Deputy Prime Minister; Mustafa Suleyman, co-founder of Inflection AI, Inc.; Leo Varadkar, Prime Minister of Ireland; Karoline Edtstadler, Austrian Federal Minister for the EU and Constitutional Affairs; and Dmytro Kuleba, Ukraine’s Minister of Foreign Affairs.

Nick Clegg noted that it was only 15 years ago that social media took off. It was important with AI not to assume only a few companies will use it; all industries will end up running AI, and therefore there was a need for common standards. But because it is a new technology, there is no clear idea yet of where it will be used. He said it was very hard to regulate something that cannot be detected, and he advocated for platforms to develop a system of “invisible watermarking” for AI-generated content. There needed to be a balance between technological evolution and political regulation.

He noted, in response to Mustafa Suleyman’s statement below, that while Meta’s business model is advertising-based, that did not mean its platforms do not strive to serve their users. He also questioned the notion that people get a richer menu of ideological and political inputs from TV news reports and newspapers than from online sources of news: “We sometimes over-romanticize the non-online world as if it’s one which is replete with lots of diverse opinions.”

Mustafa Suleyman co-founded DeepMind, which Google acquired in 2014, and worked for Google as its vice-president for AI products and policy. He stressed that AI was the most important transformational technology in his lifetime. As worrying as misuse of AI might be, he highlighted its inherent benefits. It can be used on everything and can reduce costs and inflation. He mentioned that as applications of AI technology become more useful, they will get cheaper and easier to use. It can be “used for good or bad.” He added that AI models must not be allowed to make it easier for anyone to develop or build something that is “illegal and terrible for the world.” AI is still in its early stages, but eventually there will be several examples of the technology available, so users will have to be aware of and question the core business models of the developers of the systems they are interacting with: “If the business model of the organization providing AI is to sell ads, then the primary customer of that piece of software is in fact the advertiser and not the user.”

But he also spoke of its disruptive potential: “This is going to be the most important transformational moment, not just in technology, but in culture and politics of all of our lifetimes. AI is really the ability to absorb vast amounts of information, generate new kinds of information, and take actions on that information.”

He stressed the need for regulatory frameworks that can adapt to AI’s rapid advancements. He said that ultimately the technology “will be widely available to everybody, potentially in open source and in other forms, and that is going to be massively destabilizing. So whichever way you look at it, there are incredible upsides.”

Irish Prime Minister Leo Varadkar said he initially saw a “false image” of himself in a deepfake video appearing to show him promoting investment in cryptocurrency, and thought the video needed to be taken down immediately. But he learned more about AI’s potential benefits and predicted, “It will change the world like the printing press.” Health care would be a particular beneficiary. It will require “lifelong education and reeducation” for workers and may shrink the work week.

Still, he is concerned about misuse of AI technology, which will only get more sophisticated and effective: “I think it’s going to change our world as much as the internet has.” He agreed with Nick Clegg that it is important to establish ways to detect the use of AI, but also said that people and societies will have to adapt to the use of the technology.

Karoline Edtstadler of Austria said that unlike traditional TV and print journalism, the online space is dominated by “echo chambers” and the use of algorithms to promote content: “You can find every opinion on the internet if you’re searching for it, but you shouldn’t underestimate algorithms, and we often find people on the internet in echo chambers where they get their opinion reflected again and again.”

Ukraine’s Minister of Foreign Affairs, Dmytro Kuleba, gave one of the most compelling presentations, discussing how AI is being used in real time on the battlefield in the war launched by Russia, and therefore its effect on modern warfare. He said that “AI can be used in the context of war, not just on the battlefield itself, but on the battlefield of information and misinformation.” He highlighted how AI-powered drones and surveillance technologies have revolutionized battlefield tactics in Ukraine, bringing a new dimension to geopolitical conflicts. He explained how a drone linked to an AI-powered platform drastically increases precision: “You usually need up to 10 artillery rounds to hit a target, but if you have a drone connected to an AI-powered platform, you need just one shot,” eliminating the need for multiple corrections.

He revealed that Ukraine’s counteroffensive during the past summer faced major challenges, since both Ukraine and Russia extensively employed surveillance drones linked to attack drones. This high drone activity made it nearly impossible for soldiers to move safely, since any movement triggered detection by surveillance drones, leading to immediate alerts for the attack drones. “AI will have even bigger consequences to the way we think of global security,” he said: a nation does not require a distant naval fleet when it possesses advanced AI-guided weapons within its borders. The development of nuclear weapons “completely changed the way humanity understands security,” but AI will “have even bigger consequences.”

He also noted that “People have access to more information but still make stupid choices.”


At a private dinner hosted by Covington client Shafik Gabr, AI was again a focus of discussion, along with geopolitical challenges. Gabr noted that the recent writers’ and actors’ strike in Hollywood was the first labor dispute directly tied to AI, whose output cannot be copyrighted. He felt AI would make things worse in the near term, as people would have difficulty separating fact from fiction, fake images and statements from real ones, but that it could improve things in the long run.

Dr. John Rico of The Mayo Clinic said that AI has already improved its practice of medicine, improving therapies and diagnoses. It will help in the future with breakthroughs in the use of immunotherapies against cancer.
