Ethics of the Digital Transformation

URS GASSER: All right. Hello, everyone and welcome. How are you? Let’s try again. How are you this morning? [APPLAUSE] A very warm welcome to
the Berkman Klein Center, to Harvard Law
School on this sunny but windy and
slightly chilly day. We’re really delighted
to have you here. My name is Urs Gasser. I serve as the
executive director of the Berkman Klein Center. I’m also on the
Harvard Law School faculty with my colleagues, whom
I will introduce just in a bit. And of course, I am
very, very pleased to moderate this special
conversation about the ethics of digital transformation. And I’m of course, even
more pleased and actually honored to welcome the
federal president of Germany, Frank-Walter Steinmeier. [APPLAUSE] FRANK-WALTER STEINMEIER: Hello. URS GASSER: [SPEAKING GERMAN] I am joined
by a wonderful group of colleagues and experts. And I will– if you
are not mad at me, I will just briefly
introduce you, and we will get to know each
other a bit better as we go along and talk about
your work, and of course as we engage in the
opening conversations. So this should be interactive
as much as we can. Eva Weber-Gurska is an ethicist,
a philosopher currently at Ruhr University in Bochum. She’s doing amazing work, and
I’m already looking forward to learning from you today. Quite often, these
debates about ethics happen without having
philosophers around us. I’m grateful that you’re here. Matthew Liao is at NYU, and
is a professor of bioethics. He also runs a center
on the same topic, and I’m particularly
curious to hear also some of the lessons
learned from past cycles of technological
innovation, as we now talk about digital things
and AI, IT, and the like. Jeanette Hofmann, welcome back. Great to have you here. Jeanette is a professor of
internet policy at the Free University in Berlin. She’s also the director of
the Alexander von Humboldt Institute for Internet
and Society in Berlin. I’ve introduced the
president already, and he doesn’t need
an introduction. So next we have
Dean Melissa Nobles, who is the Dean of the
School of Humanities, Arts, and Social Sciences at MIT. Really great to have you here,
as we will hear more about how MIT is building a new
school of computing. MELISSA NOBLES: Correct. URS GASSER: The
Schwarzman College. MELISSA NOBLES: That’s right. URS GASSER: Lots of
interesting things happening there at
the intersection of engineering and ethics. So we’re looking forward to your
thoughts on this conversation. Wolfgang Schulz,
professor of media and public law at the
University of Hamburg. He’s also the director–
and now I have to read that, because I still
cannot remember it– the director of the Leibniz
Institute for Media Research, which is known to me as
the Hans-Bredow-Institut, but I learned that it’s
important to emphasize the Leibniz part. Crystal Yang, really great
faculty colleague here, professor of law at
Harvard Law School. Does wonderful work, important
work on criminal justice and the use of algorithms
and data in that area. We’ll talk more about that. So as you can see,
a fantastic lineup. And of course, I’m so grateful
to you, Mr. President, that you joined this
group as a participant, and I get a sense already
that you’re ready to jump in and will take over
the moderation function in due course,
which is totally fine and will make my job easier. So one or two logistical notes. First, we will end
at roughly 11:30. That’s the plan. Some segments of the
conversation may be in German. You should have translation. If it’s OK with
you, I will continue to moderate in
English, and I think the reason is straightforward,
because my Swiss accent is so strong when I speak
German, that it’s easier for the Germans
when I talk English, so just to make that clear. [APPLAUSE] So with that, Herr
Bundespräsident, here’s the question for
you to start us off. We met the last time in 2012 in
Berlin, and had a conversation about what does it mean
to make good policies for the internet age? And I googled this
morning actually, and tried to remember what
happened in 2012, right? It seems like in internet
times it’s more like 70 years ago than seven years ago. FRANK-WALTER STEINMEIER: Yeah. URS GASSER: And when I googled
things that happened in the technology space:
the Google Glass project was kicked off. The iPad Mini was introduced. Facebook went public. And a bill was
signed in California that self-driving cars are
now allowed and regulated. So I’m wondering, that seems
to be a very different stage in our digital
transformation process, if you look back only a few
years and now fast forward to 2019. Where have we arrived? What are you thinking about? What are your concerns? What are your hopes and
how does that connect with the topic of today? Mr. President. FRANK-WALTER STEINMEIER:
Thank you very much indeed for these kind words of welcome. Ladies and gentlemen,
dear students, I think it is fantastic. You know, you’ve been
given the alternative to enjoy a sunshiny day, though
somewhat windy, late autumn, but you’ve nevertheless
decided in favor of the alternative– coming
here into an enclosed room to listen to us. Thank you very much for that. And thank you also for
reminding me of the year 2012, which I remember well for
quite different reasons, and I’ll get back
to that in a second. You know, my visit to
Boston in 2012. But allow me to begin by
saying that this is not my first time
here, and I’m always happy to be back in this
academic, scientific center, a center not only with regard
to the United States of America but also in a much
broader sense, because it is a center that is exemplary
in bringing together researchers and academics from all
parts of the world, from all countries of the
world, to make them work on subjects of common concern. And when I remember that visit
in 2012, or earlier visits too and later visits,
you know, no matter whether we talked about foreign
policy issues or other issues, either here in the
hall or in other places in Harvard, whether we
talked about questions to do with climate policy,
about the state of affairs of transatlantic
relations, rest assured, every time that I came
here, I returned home having benefited
to a large extent from the discussions
I had at Harvard. There was one exception, and
that brings me back to 2012, really. Just once did I risk
myself, my whole existence here in Harvard,
because I allowed myself to be talked into throwing
the first pitch in a baseball match. And I was extremely naive. Never before had I
attended a baseball match, held a baseball in my hands,
nor been in a baseball stadium. And my then colleague, secretary
of state Condoleezza Rice, we all remember her,
was aghast when she heard what I was about to do. And the only comment she gave
to me was, “don’t do it.” But you know, that is
typical of us Germans. I had accepted, and I didn’t
want to go back on my promise. So once we entered the stadium
on the afternoon of that day, I got an inkling of why
my colleague Condoleezza Rice was so aghast, because
the stadium was filled to the very last seat, 40,000 or
50,000 people in the audience. And I had a certain feeling
that they hadn’t come just because of me, but
they’d come because they wanted to watch the
match, a match that was the match really. Here in the United
States, the Red Sox were playing the
New York Yankees. And I realized all of a
sudden that this is not just any match. It’s about religious
issues, really. Still, it worked out somehow. I survived it. And having survived
that experience, I was happy to come
back every time. I wasn’t shy of returning
to Boston, to Harvard. But today, it’s a
different topic, really, that brings me here, different
from the topics we focused on in the past years. We’re no longer on the
threshold of digitization,
of the digital age. But we have already
entered that age. I’ve come here because the
topic we will be talking about directly refers
back to topics I’m focusing on in my
presidency, the future of liberal democracy, that is. How does the internet, how do
Facebook, Twitter, algorithms, anonymity in the internet,
how do all these things change the democratic culture
of debate, which is of such great importance
to us in Germany, just as it does to you in
the United States of America? Despite the daily
waves of outrage that you have to
live with, how can we make sure that we keep
a general overview? How can we distinguish
what is important from what is unimportant? And does this
culture of thinking in simple opposites, yes
or no, black or white, harsh approaches, take
away from us our ability to see the nuances
between black and white? Are we capable of doing that? Do we continue to be capable of
entering into compromise, which I believe to be vital
for any democracy, if we no longer have the time to
differentiate, to see things in nuances, to carefully
weigh the pros and cons, because it’s no longer popular? We talked about this
yesterday in Boston with American and German
academics in great depth. Today, though, we
are again talking about digital
transformation, how that has changed our lives
and daily experiences. But as Mr. Gasser
kindly indicated, we will be focusing on
a different priority. We’re not really
focusing on the question, whether we need
digital technologies. They’re there, anyway. No one is denying the
fact that they open up enormous opportunities
for all of us, when it comes to fighting
poverty, for example, when it comes to tackling the
impact of climate change, when it comes to combating
diseases and their effects. Undoubtedly,
Germany is a country that has no resources
of its own, apart from the humanities
and human resources. We want to be a country that
has technology to offer, and we want to participate in
the developments they entail. And that is a kind of
introductory remark on my part. As regards the
topic we intend to talk about today, the
ethics– a code of ethics for the digital
transformation, I would like to just
briefly focus on why this topic is
so important to me. I actually have come
from two visits, and I refer back
to those visits. I visited Stanford last
year, focusing again on the future of digitization. When we traveled there, a few
days before we left we read in the papers that Elon Musk
had bought up a company that was engaging in the research
in brain implants, and that was doing very
well in that regard, that this might help
tackle diseases like Parkinson’s and Alzheimer’s. I learned a lot about the
imagination of researchers during my encounters
there, how one can influence brain
activities with the help of implants and algorithms. This has undeniable and
obvious consequences. But at the end of the
discussion, it wasn’t I, but it was someone who is
very well known in the United States, George Shultz, the former Secretary of
State under Ronald Reagan, who was at the time a
member of the board of Stanford University. He said at the end of the
discussion, guys, really, I am fascinated by the scenario
you have been painting, drawing of the future. But let’s not forget, we
are living in a democracy. And democracy relies on
independent, self-determined, confident human beings
if it is to survive. And he addressed himself to the
researchers and the academics. So when developing
these technologies, don’t forget to think of the
consequences of your inventions and how that fits into
democracy and its principles. In my second trip,
and I’m going to be brief about this, with
my visit to China, again, we also focused
on this topic. And we also talked about social
scoring, the opportunities, the perspectives that result
for the members of a society. The debates we had were not
easy, because at the beginning the Chinese didn’t
understand why we were asking these
questions at all, and why we would find some
of these things complicated that come up in the
context of social scoring. Because they said, we have 80,
90% of support, popular support for these topics. Why are you against it? You know, we live under
different political circumstances, and are
scared and shocked by the idea of having to
submit to a total surveillance of all aspects of our lives,
that no matter what we do, this might be linked
up to a system that assesses our performance
in a negative or a positive way, and that this, of course, has an
effect on the way we develop as human beings,
that hopes, wishes, and dreams are
becoming externalized, that they are stored
in software I no longer have any influence over. For our concept of
individual responsibility and of personal freedom is
being called into question by such an approach. We, however, know that
this is not a problem that is exclusive to the Chinese. German companies,
American companies that invest in
the United States, that employ people
in the United States, they will be working under
the very same conditions. And thus we have
to have an interest in what is happening here. But let me close by saying
that the debate in China hasn’t come to an end yet. It is still ongoing. We don’t know what will be
the outcome, the result of all those tests and
experiments that are being carried out in China right now. But the obvious question
is on the table. Is there something
like a minimum of morals for the digital age? Shouldn’t we work to have
something like that, a common expression
of the limits of the digital future in the
decades or centuries to come? Which brings me down to the
question of whether we do not really need a much more
intensive exchange of thoughts between the tech community
and political scientists about the philosophy
of the individual than is happening at this point in
time, at least as I see it. Well, you know, if I were to
choose, I would very much like to be in a position where
I could leave Boston today, having received the
confirmation from all of you that I need not be afraid,
that I need not be concerned, that the debate is taking
place in the very intensity that I would wish to
see attributed to it. But whether that
is the case or not, we will have to hear
and see from you. I very much look
forward to this debate. [APPLAUSE] URS GASSER: Thank you
so much for setting the stage so beautifully. And I realized while
you were speaking that my American colleagues
didn’t have translation, but you followed it very nicely. CRYSTAL YANG: And I very
much appreciated the baseball reference. MELISSA NOBLES: Yeah,
I got that part. There were a couple of words
in there I kind of got. URS GASSER: Yes, but you
really set the stage so well. Yes, that may be helpful. FRANK-WALTER STEINMEIER:
Should I repeat it? MELISSA NOBLES: Yeah, thank you. Bit of a summary. I heard “Stanford.” I heard– URS GASSER: Right, right. And of course you picked up
on that, which is exactly the segue to my question. MELISSA NOBLES: Sure. URS GASSER: So the
president was putting sort of the societal change
that we’re going through, where technologies
of different sorts play such a vital
role in the larger context of the
future of democracy, and the question of how do
we want to live our lives and interact with each
other and shape our future. And within that, he
also referred to, as you picked up on,
a trip to Stanford, and pointed out
already, and that’s a theme I want to follow
up on for a few minutes– MELISSA NOBLES: Sure. URS GASSER: –that there are
tremendous opportunities, although currently
the focus is really on the risks of new
technologies essentially in public discourse, for
sure, particularly in Europe. But before we go into
risk mode and talk about all the pitfalls of
these new technologies, I would like to
pause and really zoom in a little bit on this
question, what can technology do for climate change
and other areas that the president mentioned. FRANK-WALTER STEINMEIER:
Against climate change. URS GASSER: Against,
yes, against. MELISSA NOBLES:
Right, right, exactly. URS GASSER: To address some
of the big, big challenges of our time. MELISSA NOBLES: Sure. URS GASSER: And there is this
other place closer to home, MIT– MELISSA NOBLES: Right. URS GASSER: –where many
of these technologies are developed in the lab, and I was
wondering whether you would be willing to share maybe two,
three examples from also your humanities perspective– MELISSA NOBLES: Sure. URS GASSER: –that give you
hope and optimism, maybe. MELISSA NOBLES: Sure,
I’m glad to do that. Good morning, everyone. Well, you know, one of
the things about MIT, I kind of hesitate
in a certain way to be able to say two or
three, since the Institute is connected to
technological innovation. So I think I’d rather
say a bit about what has made MIT such a leader
in thinking innovatively. And a big part of that has been
the commitment to collaboration across all five schools. So it’s in recognition that many
of the problems that the world faces are obviously
global in nature, and they require knowledge
from all domains. It isn’t just a
scientific problem. It isn’t just an
engineering problem. It isn’t just an economic
problem or a social problem. It is all of these
things together. And part of our strength has
been putting together research programs to deal with these. So we have, for example,
the MIT Energy Initiative, which brings together professors
from engineering, science, humanities, also social
sciences, the Sloan School, to look at the economics
and the business models of what is sustainable
and what not, as well as architecture and
planning to look at the ways in which climate
change and the way we use energy are changing
how we structure cities. So it is the scope
of the problems and a commitment to putting
intellectual energies that are commensurate with
them that, I think, has set MIT in a better way
for thinking about the future. So I hesitate to
say any particular one, except to say that the
problems are so massive, there is no way that technology
cannot be a part of it. And the issue is, how
do we think creatively about technology to make
sure that’s happening. And that’s a big part of
what education has to do, to connect students
to understand that the technology
is an expression, is a human endeavor, right? We create the technology. The technology
doesn’t create us, and we have to start with
some basic commitments. So that’s where we are now. And I look forward to
saying a bit more later on about the College
of Computing. URS GASSER: Fabulous. Thank you so much. That’s very helpful. Some sort of an
iteration on this theme, and taking your point,
where you argue, well, there’s no future without
getting technology right in a way that helps
us to address some of these big challenges
we face as humanity, but also to embrace
the opportunities. And I was wondering
whether you would be willing to share your
thinking around this topic. Much of the ethical
debates of these days are focused on ethics
in the sense of telling us what not to do, right? What lines not to cross. And we will definitely
return to that, and this will be the
key part of the panel. But before we go
there, I was wondering, is there some sort of
an ethical obligation for the good use of technology? And basically a moral
imperative that would almost be in contrast to the
precautionary principle that’s so popular in Europe
these days, and say, no, we have to double down
on developing technologies for the social good and
in the public interest. How does a philosopher or
ethicist think about that? EVA WEBER-GURSKA: Yeah,
thank you for that question. I’m happy to answer. So I think there are
at least two ways to understand your question. First, we may ask if there
is a moral obligation to generally use
digital technology now that it has been
invented and developed up to a point where so many
concrete applications are possible. But my answer to
this would be no. There is no general
moral obligation to do what can be done. Because digitization
is just a means, and moral obligation refers to
ends, to purposes, not the way we get there. And so it is an open
question if digitization is the best way to get
there, where we want to go, to our moral purpose,
which is, as you already pointed out, good democracy,
human flourishing, and so on. And we have just to see exactly
where digitization is helpful and where not. But on the other
hand, if you ask if we theorists, theorists
like us here on the panel, should point out possible
positive uses of technology more often, I would say yes,
and that’s important, too. Because otherwise,
the development of digital technology
mostly is driven by interests for
financial profit, and that is not the best
premise for the best outcome in a moral perspective,
from a moral perspective. So it would be surely good to
have more people pointing out the positive uses, but I
think there are already quite a few examples of that, too. And also where reflection and
realization goes together. For example, at the
Weizenbaum Institute in Berlin there was a fellow
this summer, a young colleague
who developed an app which enables
people from different parts of the political spectrum
to chat and discuss with each other
online. And yeah, I mean, there
are a lot of opportunities, and we should point
them out. But I also want to add that
these projects all have to be chosen carefully,
because, as you already mentioned with climate change, we can
do good things against climate change with digitalization. But on the other hand, we also
have to be aware of the fact that digitization, all
digital technologies themselves are consuming
masses of energy. So it would be best to choose
only those projects which have a really urgent reason. There has to be something
important at stake when we invent and
apply new technologies. And I remember the British
philosopher Derek Parfit saying that all people,
all humans with two healthy legs should
use the stairs instead of the elevator in
order to save energy. Because he said elevators are
just made for people who cannot walk. And a bit in a similar
way, we should always watch out for the
urgent reasons for which we invent digital technologies. And what is urgent,
what is important always depends on the domain. It’s different in every domain. In medicine, for example, it’s
the diminishing of suffering. In law, it’s justice. In a democracy, it’s
participation and the well-founded formation
of political opinion. And only then, when we have
identified precise moral purposes and we see that
we cannot attain them but by digital technologies,
then I think we might be seen as obliged to use them. URS GASSER: Wonderful. Great segue. You pointed out sort
of the big questions, but also that these
questions can only be answered or worked through
in a particular application context. You mentioned already a few. And if I may, to get a
little bit more specific and put the
conversation– you know, go from 30,000 feet
a bit lower to 10,000 feet, maybe, and take
two examples that illustrate some sort
of the struggle, how we embrace opportunities, but
also protect against risks. And Matthew and
Crystal, as I already mentioned in the
introduction, you have interesting work
that sort of serves as a case study in our context. Matthew, focusing on health
and public health and the role of technology, whether
it’s AI or IoT, how are some of these
questions that you’ve identified crystallizing, and
where do you see things going? What are some of the concerns? What’s the state of play? MATTHEW LIAO: Yeah, so
good morning, everybody. So as Professor Gasser
has said, I’m a philosopher. And I have a book
coming out next March called The Ethics of Artificial
Intelligence. And we cover a number of
these different issues in the ethics of AI. And one of the applications
of the ethics of AI is in the realm of health care. There are actually a lot of
really exciting opportunities and a lot of development,
a lot of things being done in the area of health care. So for example, machine
learning is being deployed to screen cancer cells. It’s been found to be almost
as effective as radiologists. It’s also been used
in ophthalmology. It’s been used to screen, to
figure out whether an embryo is going to be viable or not. Natural language
processing is being used to figure
out whether people are having suicidal thoughts. So there are a lot of
really exciting developments that are currently underway. And what that means
for us is that it can really, for example, reduce
health care costs in the US. I think we spend about $3 to
$4 trillion in health care each year. And so one of the things
that machine learning can do is reduce administrative costs
in health care, for example. It can also assist
facilitating drug discovery. And finally, another
example is, it can really realize the
vision of precision medicine. So for example,
Fitbits, wearables, to figure out
healthy lifestyles, what you should be
eating, your calorie intakes and so on and so forth. So all those are really,
really exciting developments. I’m an ethicist, so I
also think about some of the ethical problems. And I just want to
very quickly share some of the ethical
concerns with you as well. So one of the biggest
challenges with machine learning is that it requires
a lot of data. And so what that
means is, someone’s got to go out there and
collect all these data. And then you get into
issues about privacy, especially in health care. It’s personal data that
we’re talking about. So one obvious example
is Facebook and Cambridge Analytica collecting
a lot of information from Facebook users. Another example is
GlaxoSmithKline. They just recently bought
this company 23andMe, which is an ancestry-type service: you
upload your information, and it gets your
genetic information. So now they have
all the database. And so one of the things we
need to really be worried about is whether they’re collecting
the data appropriately, are they violating rights,
what are the implications for the individuals? Another issue is going
to be the garbage in, garbage out problem. So the algorithms
that we’re using today are going to only be as
good as the data themselves. And so what we’re
finding is that sometimes the data sets that
we’re collecting are not– they don’t have
accurate representations of the subjects. So take for example
self-driving cars. It turns out that
self-driving cars, they’re not so good at
detecting people of color, because the training sets, the
training data that they use, they don’t have enough of a set
of people of color in the data set. And so that’s a problem when we
deploy those sorts of algorithms in the wild. And I’ll just say
one more thing. The biggest concern I have
with machine learning right now is there’s something
called deep learning. And deep learning is
actually a technical term. It just means that it’s using
a big network to figure out how a machine should act. And it’s powered a lot of the
recent developments since 2012. It’s powered a lot of
the new breakthroughs. But one of the problems
with deep learning is that it just doesn’t
capture the causality, the causal relations. It doesn’t really
understand what it’s doing. And so it’s linear regression. It’s a lot of math. But here’s one problem. So there’s something called
an adversarial attack. One type of
attack is something called a single-pixel attack. So machine learning is very
good at image classification. They can take images, and they
classify them very accurately. But
researchers have found that if you just take an
image, say an image of a car, and you just take one
pixel and you change it from black to white,
the machine learning will completely screw it up. So for example, with
the image of the car, it will now classify that image
as a dog with 99% confidence. And just imagine deploying
that type of machine learning in the context of health
care, when people’s lives are at stake, or in the context
of self-driving cars, right? And so I think
we’re going to get into more of these
discussions later, but I think that’s where we have
to be careful about rolling out these technologies. URS GASSER: Crystal,
does that sound familiar, listening to these
stories from health when you look at your work on
the use of algorithms and data in the criminal justice system
or where are differences? CRYSTAL YANG:
Yeah, I think there are a lot of similarities. And I think as some
of the other panelists have pointed out, while
algorithms are now basically used in so many
parts of society, one of the areas
where they have had a very dramatic
increase in usage is the United States
criminal justice system. And the algorithms here we
often call risk assessment instruments, because what the
algorithms are trying to do is predict somebody’s
future criminality. And these instruments
now are used at various stages of the
criminal justice system, things from policing, to pretrial
and bail decisions, to sentencing, to probation
and parole as well. And just some examples,
take predictive policing. One of the common
technologies is called PredPol, which is
used by the Los Angeles Police Department and over 60
other police departments across the United States. It uses historical
data on crime types, where crimes have
happened, to predict future
criminal incidents. In sentencing, now
many states allow judges to consider
risk scores that are meant to predict the
future risk of committing new criminal behavior. One common algorithm here, which
has been in the news a lot, you may have heard of,
is the COMPAS algorithm. It’s a proprietary
algorithm, so we actually don’t know exactly the
underlying algorithmic structure, that
classifies individuals on a scale of 1 to 10 in terms
of their predictive likelihood of recidivism, using how
that person answers questions on a 137 question survey. So these are just
some of the examples, and I think they raise a huge
host of issues and challenges. One that I think requires
a lot of understanding from philosophers and ethicists
is, do these risk assessment tools have a role to play, even,
in the criminal justice system? I think some view the endeavor
of predicting future risk as wrongheaded, and believe
that because of this input data garbage in, garbage out type of
problem, that using algorithms to predict future risk will
only entrench or potentially exacerbate inequalities
and inequities that we see in society at large. On the other hand, and I place
myself more in this camp, while there is acknowledgment
that the algorithms are often imperfect, I think it’s
also important to consider the relevant counterfactual. The counterfactual is not
a world free of inequality, of inequity. It’s a counterfactual in which
we have human decision makers. And guess what? There’s lots of evidence
that human decision makers have a big role to play in
perpetuating inequalities through bias and inconsistency. So there’s a need, I
think, to consider what we are comparing the algorithms to. It’s not a perfect world. It’s humans. I think another set
of design questions that Matthew has gotten at is,
in the criminal justice system, there are lots of open,
unresolved questions about how do we
design an algorithm? If we’re going to
predict risk, can we consider individual
characteristics like somebody’s race or
ethnicity, what we often call protected characteristics? If you can’t, can you consider
non-protected characteristics, things like education,
where somebody lives, which can effectively proxy for
a person’s race or ethnicity? There are also
complicated questions about how to evaluate
if an algorithm is doing what we want it to do. What does it mean, for instance,
for an algorithm to be fair? It turns out here the
law has not so much to say so far about how to
define or measure fairness. And even outside the law,
there’s a very lively computer science debate about algorithmic
fairness, where there are very different definitions of fairness. In many circumstances we could all say that one sounds great
or that one sounds great, but it’s actually been
shown mathematically that in many instances it is
impossible to simultaneously satisfy all those notions
of algorithmic fairness. And so then that requires
a normative choice by us as a society or a
legal system to choose which of those definitions
of algorithmic fairness should dominate. So I think those are just
a couple of the key issues and challenges that I see in
the criminal justice system. URS GASSER: No
shortage of challenges. CRYSTAL YANG: No
shortage, absolutely. URS GASSER: It was
clear from both stories. I’d love to build on this and ask Wolfgang. Crystal made the
point that part of it is a story about
technology, but part of it also seems to be a story
about society at large, about the institutions
we already have in place. Part of it seems to
be about human nature, with our own biases. So how do you think about that
as we have this intense debate, debates about AI decision making
versus human decision making? And should we replace
judges by AIs or not? How much is it about
technology, really? WOLFGANG SCHULZ: I think
to respond to that, I have to go to flight
level 10,000 again. I would say that I
am descending later. URS GASSER: Only in a good way. Don’t worry. WOLFGANG SCHULZ:
In a good way, yes. I hope. So when we talk about
technology, in expert circles but also in society at large, we very often have a distinction: here is society, here is technology. And that is a dangerous
thing, because then we frame technology as a
kind of natural disaster that is coming, and we have to
build walls to cope with that. And we are not in
the mode of creating the technology as a
society and together with the different disciplines. So I think we have
to be very careful how we talk about
these things and where we talk about tensions. And I can, I think, build on
what Crystal said, because we are doing some research on
the criminal justice system as well, and have done recently. And what I find
interesting is that when we talk about technology
coming into processes, then we start thinking
about what our quality measures are as a society here. And I had a discussion
with German judges a couple of months ago, and we
are talking about sentencing. And we are talking about
AI supporting that, and then I raised the
question of explainability, which is one of
the issues in AI, where people say we cannot really see and explain what happens there. And then one of the judges
said, wait a moment. Ask me, can I explain
what I am doing when I come to this decision? And I’m not sure that
I really can do that. I can give a reason that is
valid in the legal system, but I cannot really explain
what my motives were here. And then we had a debate on
what are the factors there. And in the German legal
system and the criminal law, it’s not very well elaborated
what the criteria are. So it’s very, very vague. And so we had a
very fruitful debate on what the values actually are. And you can have the same
in other fields of society. We have next week
or the week after, a workshop with
computer scientists and people from communication
science and law, talking about how to understand
diversity in recommender systems for the media. And we want to come up with
ideas of what that actually is. And then you have to go back and ask: what do you actually want as a society when you talk about diversity? So I think that’s a good
thing that technology forces us to ask those hard questions
about societal values and to better understand what
makes human decision making so special. We are talking a lot about
things like tested knowledge and tested norms, things that
we all understand because we are part of the society. And we cannot really
explain why we do that this way or that way, because it’s
tacit knowledge or tacit norms. And that is something that
you cannot really, now, I would say, build
into technology. That would require technology
to be part of society and learn in interaction, and
I think we are far from that so far. So I believe that this
is a twist of the debate that very often we
do not really include in our conversation, when we talk about this “society here, technology there” aspect. URS GASSER: So if
Wolfgang is right, and he is most often
right, as we know– JEANETTE HOFMANN:
I don’t deny that. URS GASSER: And technology is
deeply embedded in society, and as we heard in the president’s opening remarks, we as societies
are in a learning process ourselves, how to
cope with massive challenges and transformations
of all sorts. Based on the work you’ve
been doing, following early debates around
internet regulation and approaches to
governance, what’s currently happening in this
societal learning process as we try to identify
and agree and regulate good uses versus bad uses
across different contexts, and we only highlighted
two examples and could add many more. What sorts of norms are emerging, and what are some of the dynamics around these norms, as you observe them? JEANETTE HOFMANN:
Thank you, Urs. I could talk for hours
on this question. I really like it. Let me go one step back. At the time when the internet
and digital technologies– URS GASSER: Switch on the mic. FRANK-WALTER STEINMEIER:
Well, that’s better. JEANETTE HOFMANN: –Digital
technologies really became more present
in our societies, Western societies went
through a long period of privatization and liberation from old state monopolies. And we thought of the rise of the internet as a force of liberalization, and we thought that the idea of self-regulation and “let the markets determine the future” was a very good alternative. And this we have driven
to a point where we now regard digital
technologies nearly as a self-driving
autonomous force. We ascribe a lot of power and
agency to digital technologies themselves and the
companies who develop them. I would say that the
debate we see now about AI and ethical
frameworks is an echo of that, the idea
that ethical principles might be good enough to give us an
orientation for the future of artificial intelligence. But we need to ask
ourselves whether we get enough accountability
out of ethical guidelines and frameworks. I just came back
from the West Coast, where you see really
a change of wind. Companies now begin
to wonder whether they do not need a legal framework
for future development. Such a legal framework
could be, for example, anchored in human rights,
and legislation could build on fundamental rights. They could set limits
to future developments, also to make us see
that finally it is society that shapes technology. It’s not that technology
sets its own rules. But we are not really aware
of it, I think, at the moment. We have nearly lost the capability to see and to recognize
how we change technologies as societies. So we need to perhaps
turn around a bit, give up this idea of
complete self-regulation, and come to new models
that sit somewhere in between a market approach
and a pure government approach. We need new
regulatory frameworks that need to work across
national boundaries, even though we can,
I think, not hope for multilateral approaches. We need something below, and
the GDPR, the General Data Protection Regulation that the
European Commission introduced, is often mentioned
as a gold standard for that kind of approach. Perhaps some countries
can get together, build a legal framework and
export it via trade agreements. [APPLAUSE] URS GASSER: That is
some good advice. Thank you, Jeanette. So a couple of things that I
would like to follow up on. One is this role of
the ethics principles. You mentioned there
is a flourishing of ethical principles
around AI in particular. I think 130 or
something are out there. We tried to map some of them,
but it’s getting quite a task. But on the other hand, given also Wolfgang’s remarks and the opening statement by Herr Bundespräsident, there is
value to these ethical debates, nonetheless, right? And you also make
this point, of course, that we need all different
approaches and tools, probably including law but also ethics. And if I may ask
you, how do you think about these ethical principles? What’s the value in
these ethical norms crystallizing into guidelines and things like that, whether by companies, by international organizations like the OECD, or even by nation states? What’s the promise,
but also what are the limitations
of ethical approaches of this sort when we deal with
these complex, messy problems? EVA WEBER-GURSKA: Yeah, so
ethics and law of course have to be distinguished,
although they are connected. Ethics is, I would say,
the explicit formulation of implicit norms
that guide or should guide our everyday actions in
our life, our living together. And law, as the core of the organization of a state or a nation, transforms some of these norms into concrete rules, the infringement of which is then bound up with sanctions by the state. So this is something different. And not all moral norms are
legal norms and vice versa, of course. But ethical guidelines now for
new topics like digitization can be, I think,
helpful first steps to show something that then can
be transformed into law, too. URS GASSER: Speaking
of law, what’s your hope that, looking
at your area of research, the law will evolve in this
dynamic situation, where maybe ethical principles
may lead the way? Where do you see
the promise of law in these debates,
where we’re facing this shift from the human
towards the machine? CRYSTAL YANG: Yeah, I think
law has a very important role to play here. I think I share
Jeanette’s general sense that self-regulation
is probably not going to be a sufficient
solution, that there have to be legal interventions. And the law is both
instrumental in that it will undoubtedly,
by deciding what to permit and what to prohibit, shape
the behavior of governments, private companies,
in terms of how they design algorithms, how they
implement them on the ground. The law, I think, also has
important expressive principles maybe related to ethics,
where if the law allows for something, then
citizens, members of society, will view something as maybe
more socially acceptable. So I think the law here
has a big role to play. Coming back to the criminal
justice system, though, I think there are many ways
in which the current law, certainly in the
United States, falls short for a lot of the
new challenges that might come with algorithms. So to give you some
examples, many people are troubled by the
use of disparities that can emerge when you use
an algorithm to make decisions. And that could be because
of the data or the structure of the algorithm. Now, it turns out
there’s probably pretty limited legal
remedies for addressing those disparities. Under current US law, a
finding of discrimination under the Equal Protection
Clause of the US Constitution would require a showing
of discriminatory intent or purpose. And that’s hard, because when
an algorithmic designer chooses to use a variable or
certain types of data, there’s probably often
no discriminatory intent or purpose. And yet because
so many variables can be proxies for
things we’re troubled by, there’s often maybe no
direct legal remedy. And so this
traditional requirement we’ve had in the US
Constitution and case law of requiring
intent and motive is often ill suited
to addressing the new types of problems that
the algorithms can introduce. Moreover, it’s actually
been the case in the US that many have interpreted
the case law on discrimination as requiring or prohibiting
the use of characteristics like race or ethnicity. You cannot use them in
any way, shape, or form. But the reality is that because
of the complex statistical relationships underlying many
variables, I and other computer scientists and economists have written and shown that those proxy effects
that we may be worried about are often created because of
the prohibition on the use of those characteristics. And that once you take
statistics into account, you may actually want to use
protected characteristics in certain forms to actually
remedy those disparities. And so it’s actually
this problem right now, where I think the law is
pushing companies, governments to develop versions of
algorithms that may actually be counterproductive to
our larger societal goal of equality and opportunity. And I think, to the earlier
point about human decision making, the law often does
not consider counterfactuals in a very easy way. It often seems to require
perfection for algorithms, explainability. But as you point out,
what is more a black box than what is in a judge’s mind? Perhaps the judge’s mind
is more of a black box than a neural network or other
forms of machine learning. And so I worry that the law, by
sometimes requiring perfection and not considering
the counterfactual, will often chill and deter what
may be innovative and good uses of algorithmic decision making. URS GASSER: So also
the relationship between technology and
law and law and ethics is very complicated
and bi-directional, with unintended
consequences included. FRANK-WALTER STEINMEIER:
If I may: some of you have put these on a par in a way that does not convince me that every decision taken by an algorithm is plausible. You know, the fact
that we don’t always understand algorithms
and software that is guided by algorithms, that
leaves me very concerned. We say, and here the situation in America is slightly different from the situation in Germany: when a judge passes a sentence or a judgment, he or she has to justify that decision. Not every ruling or
decision has to be accepted. People may have a
different opinion. But as a rule, as far as
the tradition in Germany is concerned, you do have
a very extensive duty to justify your
sentence or your ruling. And that is what is lacking when we talk about algorithms. That is one of the questions
that we need to discuss, I believe. Is it conceivable at
all that algorithms, that the control
of our algorithms can become or be made
more transparent? Of course, not towards each
and every individual person, but perhaps with
regard to those who consider themselves
the representative of the government. You know, the body in question
responsible for protecting the rights and the
freedoms of the individual. Can you hear me? And a second remark
that I’d like to make against the
backdrop of what has just been said, it is
good that we have a debate up here on the
rostrum, so to speak, about the ethical principles
for digital transformation. But what struck
me, and I’ve tried to refer to that in my
introductory remarks, when I bring together a group
of experts in my office, I have experts briefing me about
the technological potential of AI. But, you know, I have an idea
after these talks of what is doable, what is conceivable. But when I have talks about the
ethical limits of digitization, it brings together a wholly
different group of people, because as a rule, I do not
meet IT experts or engineers, but I meet social
scientists, philosophers, political scientists, which is
an indication, to some extent, of something that keeps
me deeply troubled. And that is that we
have a debate, but that that debate takes place within closed circles, closed communities. That is to say, we have a debate about the ethical limits of a digital age, limits that we have to bear in mind and that we should not surpass or overstep. We have a similar debate about
the function of democracy, but it is not carried beyond
the respective communities. Please tell me if I’m wrong. I’m happy to hear you
point that out to me. But as I see it, and
as I wish to see it, the two communities that
I’ve been mentioning, the tech community
on the one hand and the more
philosophical community bringing together social
scientists and philosophers, we don’t have a discussion that
brings both groups together. We haven’t been able to
link up that discussion. Is that impression
that I have correct? Would you agree with that? Is it limited to
Germany, or would you say that this is also
transferable to the debate in the United States? URS GASSER: You’re trying to change that, right, over at MIT? Can you share your thoughts? And then I would like to open
up for a number of questions. So be ready with your questions. MELISSA NOBLES: Sure. So the Schwarzman College of Computing, which was announced
last year, is intended to get at
just this issue, that much of the technology
that is obviously being created at MIT, we recognize
that there has to be a bridge between
technology and the humanities and the social sciences in an
intentional, deliberate way. And part of why the
school was established is, tons of students are
interested in computing. They’re doing it. They’re coming in, wanting
to major in computer science, but many of them don’t want to
only be computer scientists. They want to apply that
knowledge to something else, but they want it to be guided
by some domain knowledge outside of computer science. And so the goal of the college
is to eventually– we’re beginning to see joint, blended
degrees between computer science and economics, computer
science and urban studies, computer science and music. Not all students are doing
this, but the interest is great, and it’s intended to
allow for this connection in a more organic way
from the beginning, such that students will
have the type of skills, so that we won’t be talking
about disparate communities, but that students
will have enough of an openness, at least an exposure, an understanding that this is what it means
to be a computer scientist. And conversely, for
my own discipline, I’m a political
scientist, this is what it means to be a
political scientist, is to know something about this. So all of us are going to
have to learn more and be open to learning more, if we’re
going to successfully deal with this issue. One other thing I’d
like to say about it is, we’re starting to have
these conversations on campus. They are not easy
conversations to have. As much as we try
to be collaborative, we’ve really had to work. We may be using the
same terms, but we speak a different language. And it requires
patience to do this. So part of what
we’re doing is also learning some other principles
of generosity and patience as we deal with one another. Because if we want to
solve this problem, deal with technology in a
way that we all want to see, then that’s what it’s
going to require, some other human qualities
that we have to bring to bear for this to happen. And I think especially
just inclusion is really important for
us for our undergraduates, since many of them will be
going into leadership positions. They know the technologies. We need them working on
the congressional staff. If you all saw the hearings
with Mark Zuckerberg in the Congress, people didn’t
know what Google was, right? Or they didn’t
know the difference between Samsung and Apple. I mean, they didn’t
know anything about the technologies. If they don’t understand
how to open their phones, how can you imagine
that they could be responsible and entrusted
to do the kinds of things that you all are describing? Some of what we also need is for our students to be able to play those kinds of roles, precisely because– but we don’t want them to do
it only knowing the technology. They will also have to
understand economics, also have to understand
political science, and such. So that is the task
of the college. And we’re just getting
started, so stay tuned. URS GASSER: That’s exciting. That’s exciting. Thank you. OK, let’s open up
for a few questions. [? Becca, ?] our mic-runner
is ready and fast on her legs. Who has a question? And please end with
a question mark. That would be good. [LAUGHTER] AUDIENCE: Me? First thank you so much for
all the interesting insights you shared today. My question is, regarding
the fact that a lot of you mentioned today that
there’s an urgency to craft tech-specific
ethics regulations as soon as possible. So in a way, this is
really a moral discussion with a deadline. When would you say
is this deadline? When do we have to formalize our
thoughts and put it into law? EVA WEBER-GURSKA: Oh, was
that a question for me? URS GASSER: Now it is, yes. [LAUGHTER] EVA WEBER-GURSKA: No, of course
I think the deadline is– it’s not far ahead, but
it’s just right now. But there are already a lot of ethics guidelines being written right now. So we really have, on the national level, a different one in Germany. Then there are
international levels, like, for example,
the High-Level Expert Group on AI that was tasked by the European Commission. And they wrote
something, and parts of these ethics guidelines have already been extracted, for example, and the G20 group signed them. And so there are
already guidelines, but I think the guidelines
are only the first step. And then, of course, the next step is to transform them into law, as we heard already with the [GERMAN]. And so yeah, it’s right now,
and we should go further ahead. But there is already something happening, I think. URS GASSER: Wolfgang? WOLFGANG SCHULZ: Yeah, maybe
I can add a legal perspective to that, because one of
the problems of the law is that the function of
law is to have stability. And what we need here is
some flexibility as well. And so what we are struggling
with, as legal scholars, is to find ways to
make law more flexible, to have constant evaluation,
to have sunset clauses and things like that, so
that we do not have to wait. I think it was Susan Crawford who said the marvelous sentence: we have to regulate things
that we don’t understand. And I think we can’t wait until all of the lawmakers have understood what it actually is. We have to act before then. But then we need different
instruments, especially when you take into account
what you said, that when you are talking with
lawmakers, even if they really try hard, the problems
are really high tech. Only a couple of people really
understands what’s happening there, and even the best
member of parliament cannot be an expert in this field. So we need constant evaluation and some mechanisms to deal with that. Every day, even
we as researchers, every week I have a new
understanding of how algorithms interact with society. Every meeting, I have an
interaction with some software engineer and oh, OK, it’s
a little bit different than I thought before. And on this basis to create
law, that’s, I would say, a fundamentally new challenge. FRANK-WALTER STEINMEIER:
If I may add to what you’ve just said: the situation is changing. When you take a look at
the German legal culture, you know, it’s based
on the assumption that a law that is passed is there until kingdom come, for eternity, you know? When we now look at the
area of internet law, something interesting
is happening that is quite difficult, complicated. When you look at the relationship between the one passing the law, making the law, and the public, you know, the public comments about the Network Enforcement Act in Germany, for example, are an indication of that. In some areas of legislation,
we have already reached a point where we can no longer
provide an eternal guarantee for the legislation
that is being passed. We are taking one hesitant step after the other, carefully trying to see how that intervention is going to affect the
reality in the future. This careful and cautious,
tentative approach in legislation that is
taking place right now, it’s not something that is
being greeted with enthusiasm. And I do understand, but
there might be no alternative to that approach, you know. Trying, time and again, to refer back from the instruments to the
technology and vice versa and to amend things
when necessary. URS GASSER: One
of the dark sides of having such an
amazing group of people is, we could discuss 20
minutes just one question. So you want to jump in quickly
with comments, Jeanette and Matthew? JEANETTE HOFMANN: I thought
perhaps one way forward could be to pursue a
more procedural approach to these problems. For example, think of
ways of holding companies accountable for the kinds of technology they try to bring to the market, introducing auditing requirements for certain types of algorithms. Make it mandatory to only
use in certain areas machine learning systems that
are self-explainable, that explain, in at
least basic ways, how they come to
certain recommendations and predictions. That seems to be a
way forward, rather than relying just on rules. MATTHEW LIAO: Maybe I
can also jump in here. So I think Professor Gasser mentioned that there are about 130
ethical principles that are being presented
by different companies and so on and so forth. I think what we really
also need is a rationale, or philosophical– this is a plug
for philosophers– a philosophical justification
for some of these principles. So for example, they
talk about– a lot of these principles say things
like “we need explainability.” Why do we need explainability? I mean, we’ve heard some of the
panelists asking this question and various other things. And so here in that line,
I’m very sympathetic to what Professor Hofmann is saying, which is this idea of the human rights framework, which says that we need to look towards the goal. What are these algorithms for? Fundamentally, they are about
promoting human well-being, right? We want to make sure that we
have a harmonious society, one that works towards all of us. And so a human rights
framework, I think, can really move
towards that goal. And there’s a rich tradition. There’s a rich
literature on these philosophical justifications
of the different rights. And they go beyond just
discrimination, right? They say– they’re
positive rights. They’re rights where
we just– it’s not just about making sure that you don’t discriminate, but also about making sure that your technologies work to help people and so on and so forth. And the other thing
about human rights is that it’s an
obligation on everybody. So it’s not just an
obligation on the engineers. It’s not just an
obligation on the company. It’s not just an obligation
on the government. It’s an obligation on all of us. We need to
collectively make sure that we’re working
towards making sure that these technologies
work well for everybody. URS GASSER: Crystal. CRYSTAL YANG: I promise
to be very brief. I know there are
a lot of hands up. This is such a
fascinating question. I think, I wholeheartedly
endorse the perspectives others have raised, especially
Mr. President, that the laws must be adaptive. They must be flexible, because
we are still learning how they work, how the algorithms work. And so we have to also study
when a new regulation goes into effect. What does that mean
about the types of algorithms we’re seeing
that flourish after that? What types of algorithms are
now disappearing as a result? To that point about
explainability, I think we all have this
desire to understand what the algorithm is doing. And so we often,
then, might shift towards regulation or
principles requiring that the algorithm be explainable. The complication
there is, now there’s emerging work coming from
computer science and economics, showing that when you force an
algorithm to be explainable, you generally will choose
an algorithm that’s simpler, because it has to
be easier to understand. But a simpler algorithm, as
it turns out in some contexts, actually can lead to both
less efficient results and less equitable results,
which again raises a conundrum that I think no field
alone can address, but just reveals that there
are inherent trade-offs every time we make a
choice, like explainability. And we have to confront
those trade-offs and decide how do we weigh
competing values, which are inevitably going
to be at stake. URS GASSER: Great. So let’s collect
three questions, and then we’ll respond. One question here,
and then maybe one from over here, this area. AUDIENCE: Yeah, I’d like
to add an observation. I think it’s also
observable on the podium that we are missing
economists, and we are missing behavioral scientists. And it seems to me that
these two components are crucial in understanding
the impact that AI has had and will have on our
society and each of us. Why do I say this? Because AI has enormous
economic potency. In this country, the majority of productivity comes from AI. And why is it that Facebook and Google and other companies have been doing, undaunted, what they have been doing? It is exactly because of that. And so that is one reality
that we have to face. And this reality is deeply
immersed in research as well. Where is most of
our funding going? It is going to computer science,
to computer engineering. And then we have
some alibi, excuse the term, addition
of social sciences, and if we are lucky,
behavioral sciences. There’s no– URS GASSER: Thank you. Sorry, we have to stop there. AUDIENCE: It’s just–
it’s very– well, we’ve heard a lot
from the panel. [LAUGHTER] URS GASSER: Fair enough. AUDIENCE: I’m sorry, but
there is no level playing field between the
behavioral sciences and all the psychological
dynamics that are opened up by AI
and computer scientists and computer engineering. Unless we change these
funding structures– and we’ve heard a lot about the
necessity for other regulations for companies, but
these funding structures have enormous consequences. President Steinmeier
was asking, is there any example about,
at the beginning of such an enterprise, to
bring disciplines together? I would say, yes. And actually at the
Ruhr-Universität Bochum, there is one
competence cluster that focuses on cybersecurity,
which tries to give level-headed, equal importance to social and behavioral science on the one hand, and economics, computer science, and computer engineering on the other. So I just hope
that in the future, such discussions move beyond
very important contributions from philosophers,
ethicists, and lawyers to also have a broader view. [APPLAUSE] URS GASSER: Not
really a question. AUDIENCE: I promise to be short. URS GASSER: Back up. We collect there, please. Go ahead, please. AUDIENCE: I’ll
keep myself short. URS GASSER: Yup. AUDIENCE: As we are at a
German-American conference right now, I just want to ask,
how does the transatlantic relationship help us in
solving all these challenges? What is needed for an effective
transatlantic relationship, especially the
German-American one, to solve all these
challenges together, as one world and not as separate states? URS GASSER: OK. Maybe one or two more? Yes, please. AUDIENCE: I have a question. [LAUGHTER] URS GASSER: I appreciate that. AUDIENCE: Do you think,
talking about social media, that the time will
come that there will be a reliable algorithm
to identify hate speech? URS GASSER: OK, over there. Last question. AUDIENCE: I promised
a short question. AUDIENCE: I’m one of those
computer scientists writing those messy
algorithms, and I know that there are
people in my field who think very critically about this. And I know there’s a lot of discussion. So I can assure you there are people behind the curtain talking about these things. How can we reach
out to other people who are thinking about this? AUDIENCE: So my
question is short. Thank you all. This was delightful. I would like to understand: is the human race being empowered by technology? Or are we humans powering technology? URS GASSER: Great. All right. We have 12 minutes left. And I’m Swiss, as I said. So I want to end on time. So what I would suggest is that
we actually do a closing round and pick the question that
you would like to address, but put it into the
context also of your work and what we’ve discussed here. So we have the question of
transatlantic relationships. We have the question
around social media and the role of
technology in creating a safer environment using
the example of hate speech. And we have this
ultimate question, is technology empowering
people, or are people here somehow to empower technology? So these are a
few of the themes. Perhaps we start with Eva. EVA WEBER-GURSKA: Mm-hm. Yeah, maybe to the
last two questions. Of course, I think that the
technology should empower people, but for that
human machine interaction, it is important that we
understand each other, as we already talked about. And maybe just two aspects that philosophy can contribute here. The first is that it’s not only about explainability, as you said, and not only about the ethical or moral justification of why it’s so important to explain, but also about seeing that morality, for example, is about reason-giving. So the whole validity
of moral norms depends, I think, on
the fact that they exist between
beings that can give reasons and understand reasons. And so one question would be, do
we want algorithmic structures that cannot give reasons in an
empathic sense, for example. And another topic would
be the topic of trust, because the ethics
guidelines often highlight trustworthy AI as a claim. I would also be skeptical
of this as the best aim, because trustworthiness
presupposes also being a moral
subject, because trust means to believe that someone
will hold to his or her commitment to do something. And this is also
something that is only possible for a moral subject. So AI systems cannot be
trustworthy agents or subjects. URS GASSER: Thank you. Matthew? MATTHEW LIAO: Yes, I’ll take
the question on hate speech. So I mean, there are some
attempts using machine learning algorithms to detect things like
fake news and things like that, but I actually want to give
you a very grim picture. This is like election
2.0, since we’re coming up to another election cycle. So there’s some evidence. There’s something
called a deepfake, which makes it possible to produce videos that superimpose your photo, or you, onto another video. And then they can get you to
talk and do various things. And what people are
finding is that– so there’s the theory
that you tend to vote for people who look like you. OK, and so now they’re creating
deepfake videos, where they superimpose your picture
onto a candidate’s picture, so it looks like you. And now supposedly that’s
going to influence your voting behavior, because you’re
more likely to vote for people who look like you. And so that’s going to be
very worrying in the future. And then the question is,
we’re going to get to a point where it’s going to be
very hard for human eyes to be able to detect
those differences, and that’s going to
be very worrying. URS GASSER: Jeanette. JEANETTE HOFMANN: I’d
also like to pick up the question on hate speech. What I find really good
about this question is that we have so many examples that show how deeply ambiguous such wording can be. Facebook once told me the
example of the term “bitch.” “Bitch” can be
really dismissive, when you call a woman a bitch. But nowadays, in
some circles, “bitch” can also be appreciative. Women might refer to
each other as bitches. How is Facebook
supposed to regulate– [? AUDIENCE: ?] Especially
on [? Halloween. ?] JEANETTE HOFMANN: I
mean, how is Facebook– URS GASSER: I did
not see that coming. JEANETTE HOFMANN: –supposed
to regulate wordings that have such different meanings? And that, I think,
shows also the limit, the limits of technical
filtering of language. Language is changing
all the time, and it differs across
cultures also very much. So there are really limits. Another point, if I may,
the question of empowering versus disempowering. I really like this question,
because it implicitly refers to the autonomy of human beings.
think autonomy needs to be defended against technology. Technology, in many ways,
enhances our autonomy. Think of flying around. Think of your watch. I mean, we coordinate
as societies through these technologies. And at the same time, they
are disciplining us. So it’s not an either/or. And technologies and human
beings are not opposites. It’s a matter of how
we structure and shape the relationship
between the two. URS GASSER: Dean Nobles, is
it OK if we go last with you? Mr. President, Dean Nobles. MELISSA NOBLES: Sure, I’ll
just take the question of the computer
scientist who said these kind of conversations
are also happening among computer scientists. But we need to get better
connections with others who are thinking critically about it. I think that obviously education
plays a hugely important role in this. That is, early on
getting students of different disciplines
to work together and to learn together
in a way that addresses these questions exactly. The challenge of all
knowledge is making sure that it doesn’t stay
siloed, that we work in a truly collaborative way. And it seems to me that’s the
challenge for the 21st century. WOLFGANG SCHULZ:
Yeah, I think I’ll pick the comment, if
I may, on the funding and interdisciplinary research. I think we talk a lot about
interdisciplinary research, and we need it to
solve problems, but the academic
system is not really designed to cater to that need. We still have
problems with that, and I constantly get phone
calls from colleagues that want to apply for
project funding next week. And they said, oh, we have just
seen we need some ethics in it, and we need a lawyer
or something like that. Would you be available? And normally I’d
say no, because it has to be part of
the project question and not just icing on a cake that has already been baked. That makes no sense.
here in the academic system. And maybe 30 seconds on
the transatlantic issue. I think it’s really helpful
and good for these questions that there are really
stable research relationships between
our American colleagues and researchers in Germany. It’s really great,
and that survives even if there is a
political winter or autumn, that we have this relationship. And to solve these
kinds of problems, I think that’s
extremely helpful. CRYSTAL YANG: Yeah,
I’ll also just follow up on the research question, the excellent question over here. I failed to mention, I
am actually an economist as well as a lawyer. And I would welcome many more
economists studying this area, and I hope that the
funding structures as well as the incentives do promote
that greater collaboration. I think the computer
science community is doing amazing work. It’s often siloed from what
the economics community is thinking about, what
the legal community is thinking about. And so I think initiatives
like what Dean Nobles is doing are probably a really great way
of bringing people together. And to education,
there is such a need for infusing this type of
learning into legal education. And I don’t think that
US law schools, at least, have really been at
the forefront of this. In fact, many of the
decisions you read from state supreme court judges who are
ruling on the use of algorithms and making important case law
have explicit acknowledgments. I’m paraphrasing here, but not by much: the judges in this case were
limited in their decision making, because they
didn’t understand how the algorithm worked. Well, that’s a
really big problem. And so we need to
train the lawyers who will be deciding these cases,
working on behalf of clients who are both creators
of algorithms and individuals
adversely affected by algorithms to understand
how algorithms work. URS GASSER: [SPEAKING GERMAN] INTERPRETER: Mr. President,
you have the final word. Thank you. Thank you, indeed. I’m not even attempting to respond to all the questions that were put to us, but allow me to begin with the following remark. The debate that we have
just been witnessing with the participation
of the audience would undoubtedly be
easier in the future if we were to keep it clear
from any misunderstandings. If I may come back
to the question that you put at
the beginning, why is there no economist
amongst the people here? You know, the economic potential of IT and artificial intelligence is sufficiently recognized, I believe. So if you take a
look at the group of experts brought together here, you will undoubtedly confirm that everyone here is aware
of the economic potential. Everyone is aware of the
technological potential. Everyone is aware of
the potential that exists when it comes to
fighting poverty, fighting disease, fighting the
impact of climate change. If we want to be
successful in those areas, we need experts
at the top level. And we in Germany intend to
participate in that development just as much as you do. But that is a kind
of preliminary remark. I want to be very
clear: what I’ve said doesn’t mean that we should end up in an age of unbridled regulation, a craze for regulation. When you look at
new technologies and AI on the one hand, and what is the constituent element of our societies, that is, democratic decision-making processes in Western societies, on the other, there is a field of tension. Shouldn’t we make that also
a topic of the discussion every once in a while? And that is why I suggested
making that the topic of our discussion today. So no one should assume or be
afraid that this inherently entails a secret wish to in some
way influence the development or to slow down the
developments in the field of AI and technologies
of digitization. That is not my intention, really. But there is this field
of tension I mentioned, and we have to focus on it. We have to deal
with it, and this is equally true
for all those who participate in the process
of technological development of these means of communication. This should not be
left to philosophers or individual groups. It has to be viewed as
a topic for all of us. And if we pursue such an
approach, we will, I believe, reach a point, and that
has become obvious here as a consequence
of the discussion, where we don’t leave
it to appeals to the morals of each and every individual and his or her responsibility. But we need to have a debate
across borders, whether there should be limits to
technological progress that we should not
surpass, because this, at the end of the day,
is what it is all about. It’s difficult enough when you
look at Germany and the United States of America, but it will
become even more difficult when you think about
those countries that have a completely different
social system or approach. But we need to have that debate. We need to have it with
a country like China. And in saying that, I’m
not cherishing any illusion about us having in five or ten
years a kind of UN charter on artificial intelligence. We won’t get that. But nevertheless,
we should engage in that kind of a
debate, just as much as we have a debate
with China, although we have different views on
the issues of bioethics and genetic engineering. We are not in agreement
on these issues, but nevertheless
we have succeeded in defining some limits or
ceilings or restrictions. Thus I am not discouraged,
you know, in any way, when I look to the
possibility of such a debate, although it’s going to
be a complicated one. But you know, this
was mentioned. What we need is a transatlantic
debate on the subject matter, too. Apart from all the topics of the
day, the conflicts of the day, and I don’t want to
downplay their importance, we have to tackle the
question of the importance of the freedom of
the individual, of the democratic culture in
the states of the Western world. And we count ourselves
amongst those, just as well as the United States
of America, and we need to have that debate amongst
the West first and foremost. This is what I would wish: that we have the opportunity, time and again, as I have been trying to seek during my visit here, to engage in discussions and debates that do not solely focus on the
present conflicts, trade conflicts being just
one case in point, but to have a transatlantic
dialogue about the issues that are really at the essence
of what links us and affects us in the years to come and will be
affecting us in the future too. I very much look
forward to my next visit to Boston and to Harvard, and
thank you for having come here. [APPLAUSE]
