The everyday ethics of AI

As artificial intelligence assumes larger roles, ethical concerns are rising, says Juliette Powell at UND’s Olafson Ethics Symposium

Juliette Powell, featured speaker at UND’s 17th Annual Olafson Ethics Symposium, speaks to an audience at Nistler Hall on November 3 on “The AI Dilemma.” Photo by Tom Dennis/UND Today.

Editor’s note: A video of the Olafson Ethics Symposium can be found at the end of this story.

Call it the artificial intelligence dilemma, and recognize it as one of the most important technological challenges of the 21st century.

But don’t get it wrong, said Juliette Powell, a featured speaker at the 17th Annual Olafson Ethics Symposium, an event organized by the UND Nistler College of Business and Public Administration.

“Because when I talk about dilemmas, it’s not like robots are going to come and steal our jobs,” Powell told the audience. “It’s not that they are going to become our robot overlords.”

Instead, the AI dilemma is less exotic than those scenarios, but at the same time, deeper. “This is about how we will deal with the technology that we use every day of our lives,” she said, because AI is already giving us banner ads that analyze us even as we stare at them, algorithms that predict criminal recidivism and inform sentencing, and voice-generation software so precise that it will make even the person being “faked” wonder, “Why the hell did I say that?”

As the saying goes, with great power comes great responsibility, Powell noted. But now that much of humanity carries around smartphones of nearly unlimited power, the saying applies to all of us, not just kings and queens.

“Increasingly, modern life is being driven by artificial intelligence,” she said. “So even if you’re not into technology, even if AI isn’t something you’ve really thought about, this could be interesting if you’re interested in being part of the human race.”

Amy Henley, dean of the Nistler College of Business & Public Administration at UND, introduces Juliette Powell Nov. 3 at the College’s 17th Annual Olafson Ethics Symposium. Photo by Tom Dennis/UND Today.

Author, analyst and commentator

Powell is an author and consultant at the intersection of technology, business, and ethics, having advised organizations large and small on how to deal with AI-enabled technological innovation.

The winner of the 1989 Miss Canada pageant, Powell has worked on television as a host, business reporter and analyst. She has provided live commentary on Bloomberg, BNN, NBC, CNN, ABC, and BBC, and presented at institutions such as The Economist, Harvard, and MIT on topics that focus on digital literacy and the responsible deployment of AI.

The annual Olafson Ethics Symposium is intended to give students and the business community an opportunity to explore the importance of ethics both personally and professionally, said Amy Henley, dean of the Nistler College of Business and Public Administration. The event is funded through the support of Robert Olafson, a UND mathematics and business graduate, in recognition of his dedication to ethical business practices and to the University. SEI Investments Company has provided additional support.

This year’s Symposium was the first to be held in Nistler Hall, the new building for Nistler College, Henley noted. Plus, Henley said, she was thrilled to finally welcome Powell as a keynote speaker.

“We’ve been talking to Juliette for over two years, through all the challenges of COVID,” Henley told the Barry Auditorium audience of about 200 people on Nov. 3. “We couldn’t bring her here as soon as we would have liked, so we are thrilled to finally have her on campus.”

Powell said that for her, the visit was worth the wait. In addition to seeing the “fantastic, fantastic” new Nistler Hall, “everyone I’ve talked to at the school so far has made me feel at home and welcome,” she said.

Emails from the university were full of that sentiment, and when roadblocks arose, “there was always someone here to make me feel like everything was going to be okay,” she said.

“And that is a great gift. I have spoken all over the world and rarely have I met with such a warm and sincere welcome.”

Author, analyst, and commentator Juliette Powell outlined the vexing problems that artificial intelligence poses and will continue to pose for society. Photo by Tom Dennis/UND Today.

The four logics of power

In her talk, Powell outlined some of the most prominent proposals that governments are considering to regulate AI. And the best way to understand them, she said, is from the perspective of risks and benefits, not necessarily good and bad.

First, consider the “four logics of power,” four approaches to decision making that tend to vary depending on a person’s position in society. For example, corporate logic is the logic of markets and competitive advantage. It prioritizes profit, growth, expansion and new business, all in the name of shareholder value, Powell said.

Engineering logic is the logic that technologists use. It prioritizes efficiency and fluidity, and values technology as a way to solve human problems.

The logic of government is the vision of authority. It prioritizes law and order, and values technology as a way to track, serve, and protect people and institutions.

And last but not least, the logic of social justice prioritizes humanity. From this point of view, people are more important than profit or efficiency, Powell said. This vision values people as a way to solve human and technological problems.

“The key here is not to focus on a specific logic, but to try to take all of them into account when making decisions,” she said.


In particular, the European Union’s AI Act is an attempt to do just that.

The AI Act is a proposed European law on artificial intelligence. Although not yet in force, it is the first such law on AI proposed by a major regulator anywhere, and it is being scrutinized around the world because many tech companies do extensive business in the EU.

The law assigns AI applications to four risk categories, Powell said. First, there is “minimal risk”: benign applications that do not harm people. Think of AI-enabled video games or spam filters, for example, and understand that the EU proposal allows unlimited use of those applications.

Then there are “limited risk” systems, such as chatbots, where, the AI Act states, the user must be made aware that they are interacting with a machine. That meets the EU’s goal of letting users decide for themselves whether to continue the interaction or step back.

“High risk” systems can cause real damage, and not just physical damage, as can happen with self-driving cars. These systems can also hurt job prospects (by sorting resumes, for example, or tracking productivity on a warehouse floor). They can deny people credit or loans, or the ability to cross an international border. And they can influence criminal justice outcomes through AI-enhanced sentencing and investigative programs.

According to the EU, “any producer of this type of technology will have to provide not only justifications for the technology and its potential harms, but also business justifications for why the world needs this type of technology,” Powell said.

“This is the first time in history, to my knowledge, that companies are responsible for their products to the point of having to explain the business logic of their code.”

Then there is the fourth level: “unacceptable risk.” Under the AI Act, all systems that pose a clear threat to people’s safety, livelihoods and rights will be banned, plain and simple.

“Again, I’m not here to tell you what’s right and what’s wrong,” Powell said. “The question is, can we decide as a society, and not just for ourselves, but for our children and future generations?”

That is the AI dilemma, and solving it will be a challenge for all of society, she said. “But that’s the exciting part, because we’re actually living in a time in history where we can decide the future. … For that to happen, it means that we must step up when the call comes. And I’m making that call to all of you tonight.”
