Wednesday, February 4, 2026

David DeSteno: Teams, Trust, & Creativity

David: I think it'll come as no surprise to
you that the complexities of the problems
we're facing as organizations and as societies
are growing.
That means the complexities of the solutions
needed are growing too.
To find those solutions, we're going to have
to be creative.
That means we're going to have to bring together
people from different fields to work together.
We're going to have to trust and rely on each
other's expertise.
The days of one person having all the answers
are long gone.
For people to feel enabled and empowered to
be creative, they've got to be able to trust
their colleagues.
The problem there is trust can often be something
of a double-edged sword.
The reason humans trust each other is because
we can accomplish more working together than
we ever could alone.
When we do choose to trust someone, there's
a risk.
We're making ourselves vulnerable.
One way we do that is by relying on other people's competencies.
Do these people have the skills and the knowledge
needed to accomplish the task?
If they don't, the team is in trouble.
In some ways, that's an easier nut to crack
because we're pretty good at assessing knowledge
and competence.
The other way trust matters is when it comes
to integrity.
Whenever two people work together, there's
always the chance that one may choose to act
selfishly for advantage.
Here, that might mean keeping information to yourself so that you can dole it out later and raise your own status. It might mean condemning another person because their potentially creative solution turned out not to work; you try to raise yourself up by selling them out.
Whether you realize it or not, your mind is
always making these calculations.
The question is which is the better way to
go?
Should we trust and cooperate or be selfish?
Martin Nowak, who's an evolutionary biologist at Harvard, has some wonderful simulations that show success depends on time scale.
In the short term, acting selfishly can allow
you to get ahead.
In the long term, and that's what most of
us care about, it's teams that share, cooperate,
support each other that have the best outcomes.
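To make that time-scale point concrete, here is a minimal sketch in Python of playing against a partner who simply reciprocates. The payoff numbers and strategy names are illustrative assumptions, not Nowak's actual models.

    # Toy repeated Prisoner's Dilemma against a reciprocating (tit-for-tat) partner.
    # Payoff numbers are illustrative only.
    PAYOFF = {  # (my move, partner's move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def total_payoff(my_move_for, rounds):
        """Partner cooperates first, then mirrors my previous move."""
        total, partner = 0, "C"
        for _ in range(rounds):
            me = my_move_for(partner)
            total += PAYOFF[(me, partner)]
            partner = me  # the partner reciprocates on the next round
        return total

    defect = lambda partner: "D"
    cooperate = lambda partner: "C"

    print(total_payoff(defect, 1), total_payoff(cooperate, 1))      # 5 vs 3: selfishness wins one round
    print(total_payoff(defect, 100), total_payoff(cooperate, 100))  # 104 vs 300: cooperation wins over time

Over a single round the defector comes out ahead, but over many rounds with a reciprocating partner the defector gets locked into the low mutual-defection payoff while the cooperator keeps collecting the mutual-cooperation payoff.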
That, of course, raises the question of what
can we do to increase trust in teams so people
can feel free and comfortable to be creative,
to fail initially and go on?
To answer that question, you have to be willing
to accept the idea that trustworthiness isn't
a stable trait.
We all have this idea growing up, this common motif that there's an angel on one shoulder and a devil on the other.
If you listen to the angel, you're going to
grow up and be a good person.
You'll be trustworthy, everything will be
great.
There's only one problem with that, and that
is the data.
If you look at scientific data on people's
cooperation, moral behavior, etc., what you
see is people's behavior is a lot more variable
than any of us might have expected.
That raises the importance of figuring out
what are the situational cues and situational
nudges we can institute to make people want
to trust each other more and work together.
Let me give you an example of just how situationally dependent trustworthiness might be.
In my lab, we often do experiments.
We don't ask people what they would do because, if I ask you, "Are you going to be trustworthy?" you'll say yes, even though some of you know you might not be. What I think is more likely is that you'll say yes because you think you will.
When push comes to shove and real costs and
benefits are on the line, things change.
We bring people into the lab and we say, "There are two tasks that need to be done: a really short, fun one and a really long, onerous one. Here's a virtual coin flipper. You can flip the coin to decide which one you're going to get. The photo hunt is fun; the logic problems are long and tedious. The problem is, the person sitting next to you in the hall before you came in is getting whichever one you don't do."
We leave them alone, or at least they think so; they're on hidden video.
What do you think people do?
I will tell you, some people simply don't flip the coin. Others do what I think is more interesting: they flip the virtual coin, which we always set to come up tails so they get the bad task. They don't like that, so they flip it again,
and they flip it ... It's kind of like when
you're a kid and you roll the dice.
You get the wrong roll and you're like, "Wait,
wait, wait, no.
I need a do-over."
What percentage of people do you think cheated
on this task?
Ninety percent. We've done it many times, so it's not like I was running this experiment outside a prison or something.
Sometimes it's 87%, sometimes it's 92%.
These are all good people.
If we ask our subject pool ahead of time, "Is this the right thing to do, to flip the coin?" it's the only time I get unanimous data. Everybody says, "If you don't flip the coin, you're cheating, you're doing something wrong." Yet 90% of them do it.
These are good, upstanding people.
How does that happen?
Afterward, we asked them, "How fairly did
you behave?
How trustworthy did you behave?"
Higher numbers mean more trustworthy.
When people are judging themselves, they say,
"I did okay."
3.5 is the midpoint, so they're above the
midpoint.
"I did okay."
If you run the same experiment again but you
have people watch somebody else go through
it ... We have an actor who doesn't flip the
coin and looks like he's cheating.
"How fairly did he act?"
No, he didn't do as well.
This is the essence of hypocrisy. It's the same behavior, yet we cut ourselves slack.
The problem is how do we deal with this?
How do we get rid of this problem?
To answer that, we need to know where it comes
from.
What gives people the self-control to be trustworthy?
Is it going to be a cognitive mechanism?
Is it going to be an emotional, intuitive
mechanism?
System 1 or system 2, if you're Danny Kahneman.
To answer that question again ... Sorry, we
ran the experiment again.
This time, at the end, we asked people, "How
fairly did you act?"
We put them under a cognitive load, which means you have to remember a string of random digits while you're answering the questions.
What that does is it ties up your memory,
your executive function, so you can't really
rationalize and think about things.
How fairly did I act?
7865 ... I acted ... Boom.
Then you have to report the numbers.
What happens here?
Here's the data from before.
Under cognitive load, hypocrisy goes away.
The only bar that's different from any of
the others is this.
If we don't allow you to engage in rationalization,
what's happening is you know what you did
is wrong.
If we give you a few minutes to think about
it, your conscious mind comes in and it overrides
that pang of guilt.
It says, "Oh, there was a reason why I did
this."
It whitewashes it away.
That's why most people will create a story.
They'll say things like, "Well, I normally wouldn't have done it, but I had an appointment I couldn't be late for." Whatever it is.
In washing it away, we actually don't realize
how untrustworthy we can be.
That causes a problem.
By giving people anonymity here, we made cheating tempting. Why not take the short-term benefit? There's no long-term cost and no one will know.
Because we think guilt might have been pushing
the other way, we're thinking about what might
increase people's willingness to be trustworthy.
In my lab, we study a lot of moral emotions.
One emotion we look at a lot is gratitude.
We decided to run a study where we would see,
if we could make people feel grateful, would
they actually be more trustworthy?
We brought people into the lab, because I
like to do things in real time.
We had them sit down at a computer and do this god-awful, onerous task.
As they worked on it, the computer was rigged
to crash.
They're like, "Ugh."
You have to start all over from the beginning
and they don't really want to do this.
At that point, somebody comes over to them, another person who's an actor, and says, "Oh, that didn't happen to me. Let me see if I can help you." He hits a key and, surreptitiously, the computer starts to come back on. Lo and behold, said subject is very grateful that they don't have to do this god-awful task again.
We compared that to a condition where the
computer doesn't break and they're just feeling
rather neutral.
Then we have them play this trust game, which
is kind of like a Prisoner's Dilemma, for
those of you who know Prisoner's Dilemma.
Except, it allows you to cheat a little bit
instead of just completely defecting or cooperating.
The way the game works is each person has
four tokens.
Tokens are worth $1 to you but $2 to someone
else.
You can exchange them.
You have to decide how many you want to exchange.
If you want to be untrustworthy, what you're
trying to do is convince the other person
to give you all of their tokens and then don't
give them any.
Which means you now have $12 and they have
none.
The best cooperative, trustworthy solution is that we each exchange everything we have. Now we're each making $8. We can share the profit, or I can screw you over, make a lot more, and leave you with nothing.
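To make that arithmetic concrete, here is a minimal sketch of the payoff rule just described. The numbers come straight from the description above; the function and variable names are just for illustration, not the study's actual materials.

    # Each player starts with 4 tokens; a token kept is worth $1 to its owner,
    # and a token received from the partner is worth $2.
    TOKENS = 4
    KEEP_VALUE = 1
    GIVE_VALUE = 2

    def my_payoff(i_give, they_give):
        """Dollars I end up with after the exchange."""
        return (TOKENS - i_give) * KEEP_VALUE + they_give * GIVE_VALUE

    print(my_payoff(0, 4))  # exploit: keep mine, take theirs -> $12 (partner ends with $0)
    print(my_payoff(4, 4))  # full mutual exchange            -> $8 each
    print(my_payoff(0, 0))  # neither side trusts             -> $4 each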
What did people do?
Here, they're either feeling grateful because the person helped them, or the computer didn't break and they're feeling neutral, and then they're going to play this economic game either with that person or with a complete stranger. The more they give, the more trustworthy they're behaving.
What you find is when they're feeling grateful,
they act more trustworthy than when they're
feeling neutral.
You might say, "That's not surprising.
This person just helped them.
Of course they're going to do that.
They think this person is trustworthy or they
owe them something."
What happens when they play a stranger that
they've never met and are told they never
will meet?
Same thing.
How much they give is also directly predicted, in a linear way, by how grateful they feel. Subjects who felt more grateful for the help paid it forward and gave the stranger more.
What this suggests is that, if you can induce
feelings of gratitude in people, it automatically
makes them want to behave in a more trustworthy,
more empathic way to build a team.
We've done it with helping behavior and lots of other things as well, not just money. What this suggests is that if we can increase moral emotions within a group (we find similar things with compassion and empathy as well), it's going to nudge people automatically to want to be more trustworthy, to be more cooperative, to support each other, and to have those long-term gains.
The problem now though is people are working
remotely, asynchronously, they're not working
face to face.
How do we have that emotional contagion going
back and forth?
People use emoticons.
Why?
Because it's the world's simplest and worst way of trying to indicate some emotion in an email that you're sending.
We need to do it better.
The question is how people can figure out whether they can trust someone using new technologies. There's a lot of work out there looking at how to tell if someone is trustworthy. People have been looking at this forever; the TSA, you name it.
Is it the smile?
Is that the golden cue?
Is it the eyes?
Is it whatever?
It's none of those things.
TSA spent $40 million on a program devised
by Paul Ekman for microexpressions.
Didn't work.
The problem is, to understand expressions, you have to realize they're going to be subtle and dynamic.
That is, you're not going to broadcast if
you're trustworthy just like that.
Then you could be taken advantage of.
They're going to be very context-dependent.
This is the problem with a lot of technologies
right now.
They don't pay attention to context.
Features are going to occur in sets.
The only way you're going to understand what a single cue means ... If I'm touching my face, does that mean I'm nervous or does that mean I have an itch?
If all you're looking at is touching my face,
you can't tell.
You need to look at it in a context-sensitive, constraint-satisfaction way.
Let me give you an example of why all this
software that's designed to read people's
faces is problematic.
What is this person feeling?
Pain, fear, sorrow, victory.
If you just look at the face, in terms of
everything we know about facial features based
on the basic emotions, that ain't happy.
This is.
We can do it with lots of things.
What we're seeing in science now ... One of
my day jobs is editor of the journal Emotion, so I'm trying to resolve all these debates about papers coming in. What we're learning is that the face is not very good. Except that, if people are smiling, you can tell that they're happy.
Beyond that, it's not very good.
There are no expressions for gratitude, or
compassion, or empathy.
To understand what a person is feeling, the
body becomes very important.
The context behind it becomes hugely important.
All of the software out there that's based
on faces alone, especially for more complex
states, is not going to work very well.
We decided to see whether we could figure out if people are trustworthy.
We threw out everything we thought we knew
and started from the ground up.
We wanted to identify cues and we wanted to
demonstrate that there's some accuracy.
Then, to show that we were right, we wanted
to manipulate those cues to see if we could
push people's judgments around about what
is trustworthy and what isn't.
You can think of it as an exploratory and
confirmatory phase.
We brought people into the lab for phase one.
What are the candidates for trust related
signals?
The way it worked is you come in ... 86 folks came in. The only requirement was that they couldn't know each other. We broke them into 43 dyads.
They were going to talk for 5 minutes to get
to know each other.
Then they were going to play this game for
money.
That same give-some game I showed you before. During the conversation, they could talk about anything they wanted. We gave them a list of topics to get going.
After that, we separate them. Sorry, while they're doing this, we have three cameras on them, time-syncing everything they do so we can lock it to the video and look at their expressions. Some people are doing it face to face; some are having their get-to-know-you conversation over the net.
The reason why is that the semantic information is the same; in one case you have nonverbal exposure, in the other you don't. We separate them and they go play that give-some game to see how trustworthy they're going to be.
We're also asking them to predict what they think their partner is going to do, that person they were just talking with.
What we find is that actual giving was the
same in both conditions.
The average level of trustworthiness was the
same whether or not you had talked face to
face or remotely.
Predictions for what the other person was going to do (these are absolute values of errors) were significantly better when you had access to their nonverbals than when you didn't.
Which means people were picking up on something
that allowed them to predict ground truth.
What?
We built all kinds of models with all different
cues that we could measure and think of.
What came out as the best set of predictors
were 4 cues taken together.
On their own, they didn't predict.
Taken together, they did.
Crossing your arms and leaning away, what
do those signal?
Usually it means I don't want to affiliate
with you, I don't like you.
Hand touching, face touching? Touching usually goes along with being nervous.
What's the gestalt here?
I don't like you and I'm kind of nervous because
I'm going to screw you over in a minute.
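As a rough illustration of that "together, not alone" idea, here is a toy sketch with fabricated data; the cue names, sample size, and outcome variable are assumptions for illustration, not the study's actual model.

    # Toy data: counts of four nonverbal cues per person, plus a fabricated
    # "tokens kept" outcome that depends weakly on all four cues combined.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500                                  # purely illustrative sample size
    cues = rng.poisson(1.0, size=(n, 4))     # arm-cross, lean-away, hand-touch, face-touch
    tokens_kept = 0.5 * cues.sum(axis=1) + rng.normal(0, 2, n)

    def corr(x, y):
        """Pearson correlation between one predictor and the outcome."""
        return np.corrcoef(x, y)[0, 1]

    names = ["arm-cross", "lean-away", "hand-touch", "face-touch"]
    for j, name in enumerate(names):         # each cue alone: a weak predictor
        print(f"{name:10s} alone: r = {corr(cues[:, j], tokens_kept):.2f}")
    print(f"all four combined: r = {corr(cues.sum(axis=1), tokens_kept):.2f}")

In this toy setup each cue on its own correlates only weakly with the outcome, while the summed frequency of all four cues tracks it roughly twice as strongly, which is the spirit of "on their own, they didn't predict; taken together, they did."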
What you see here is people were asked to
judge what their partner did, or how untrustworthy
their partner acted.
The higher the number, the more trustworthy people were rated.
The more often you saw your partner emit these
cues, the less trustworthy you predicted that
person was going to be.
In fact, the more any person did emit these
cues, the more untrustworthy they were.
We actually have some level of ground truth
here, being able to predict what people were
going to do.
The interesting thing, though, is that, of course, nobody had any idea what their mind was doing. People couldn't verbalize this, yet their intuition was picking it up.
How do I know that it's actually those cues and not, say, that when I touch my face my left pupil is dilating and that's the magic cue?
You need to control exactly what people are
doing.
The problem is you can't do that with a human.
Imagine if you were trying to have a conversation
and I told you, "Now cross your arms, now
do this" through an earphone.
There's no way you can do that and carry on
a normal conversation.
You can do it with a robot.
This is Nexi the robot, who was designed by
my collaborator, Cynthia Breazeal, at MIT's
Media Lab.
What we did was train this system with human
biological motion.
If you're an engineer and you just say, "Robot, pick up something," it's going to go like this: it's going to spin its hand and do something really weird. That doesn't work.
We needed to train it based on true human biological motion, because that's what the brain is going to use.
We ran the same experiment except we replaced
the person with a robot.
Now you're talking to the robot.
It's called the Wizard of Oz paradigm because
we're controlling the robot behind the curtain,
in another room.
The way it works is with the following setup. This person is the voice of the robot. Here you can see the person you're going to talk to, through the cameras in the robot's eyes. There's a camera on her head, so as she moves her head, the robot's head moves in real time. As she speaks, it picks up the [inaudible 00:16:50], and the robot's mouth moves in real time.
The only thing she doesn't control is whether the robot is giving you those untrustworthy cues or something else; we didn't want any unconscious bias to come in. This second person controls whether or not Nexi gives those cues, like crossing its arms or touching its face: either the untrustworthy cues or similar cues that weren't related to trustworthiness. This person also controls when the robot freaks out, because 10% of the time something goes wrong and it does something weird.
It's the problem of cutting-edge technology,
right?
We brought in 65 people.
31 of them were assigned to the condition where you get the untrustworthy cues; the others weren't.
Here's Nexi crossing its arms.
Here it is touching its face.
First, you have to get people used to the
fact that they're talking to a robot.
There's this minute of, "Oh my god, I'm talking
to a robot."
There is this part where they are getting
to know each other.
Go.
Nexi: My name's Nexi.
What's your name?
Ken: My name's Ken.
Nexi: Ken, it's very nice to meet you.
Ken: You too.
Nexi: To get us started today, why don't I
tell you a little bit about myself.
Ken: Okay.
David: We had to put this bar here because
people were worried the robot was going to
roll over to them.
Nexi: In robot years, that's more like being
20.
David: You can see, she's a little nervous.
As time goes on, people have a normal conversation.
They start self-disclosing.
Female: I don't have a lot of time.
Nexi: Did you grow up in upstate New York?
Female: Yeah, I did.
Until I was 18 when I moved out here.
Nexi: It seems like that must have been a
big transition for you.
Female: It was.
It was a really good thing.
David: In case you want to see it head on,
here's what it looks like.
Nexi: Cords and gadgets.
It's probably not like your house, but it's
home for me.
Why don't you tell me about where you're from.
Speaker 5: I was born in Massachusetts.
I have a residency [inaudible 00:19:01] last
four months.
David: Then they're told, now you're going to play this game with the robot. The robot has an artificial intelligence algorithm, and it's going to make predictions based on its conversation with you. No, it doesn't have that. But they believe that it does, and they had to decide what they wanted to do in this trust game with the robot and answer some questions about it.
What we see, for those of you who are quant folks, is kind of a regression model.
If people saw the cues, the untrustworthy
cues, they rated the robot as less trustworthy.
They didn't rate it as less likable, which
is important to me because we all have friends
who we like but wouldn't trust with our money.
It lets me know that it's actually targeting
the trust.
The more untrustworthy you felt the robot
was, the less you predicted she would give
you and the less you gave to the robot itself.
What this shows is that the human mind is
willing to ascribe moral intent and emotional
responses to technological entities, if they're
human enough.
It doesn't have to be perfectly human, just human enough.
What that suggests is, you may not get it
here, but you will get it here.
WALL-E's not humanoid, but it has enough of the features that it can evoke emotional responses.
The question for us is, if you're in a situation
where the boss is a robotic avatar, this is
useful, if it's not, it's not.
What I suggest we ask is: how do we take these cues into the platforms we have? What's going to be important is figuring out how to present them in a way that the mind is used to and can make use of, in its normal currency and its normal systems.
I think that's one of the things we're going
to need to talk about and think about, going
forward.
Thank you.
