Psychology of Computing: Crash Course Computer Science #38

Hi, I’m Carrie Anne, and welcome to Crash
Course Computer Science! So, over the course of this series, we’ve
focused almost exclusively on computers – the circuits and algorithms that make them tick. Because…this is Crash Course Computer Science. But ultimately, computers are tools employed
by people. And humans are… well… messy. We haven’t been designed by human engineers
from the ground up with known performance specifications. We can be logical one moment and irrational
the next. Have you ever gotten angry at your navigation
system? Surfed Wikipedia aimlessly? Begged your internet browser to load faster? Nicknamed your Roomba? These behaviors are quintessentially human! To build computer systems that are useful,
usable and enjoyable, we need to understand the strengths and weaknesses of both computers
and humans. And for this reason, when good system designers
are creating software, they employ social, cognitive, behavioral, and perceptual psychology
principles.

INTRO

No doubt you’ve encountered a physical or
computer interface that was frustrating to use, impeding your progress. Maybe it was so badly designed that you couldn’t
figure it out and just gave up. That interface had poor usability. Usability is the degree to which a human-made
artifact – like software – can be used to achieve an objective effectively and efficiently. To facilitate human work, we need to understand
humans – from how they see and think, to how they react and interact. For instance, the human visual system has
been well studied by psychologists. Like, we know that people are good at ordering
intensities of colors. Here are three. Can you arrange these from lightest to darkest? You probably don’t have to think too much
about it. Because of this innate ability, color intensity
is a great choice for displaying data with continuous values. On the other hand, humans are terrible at
ordering colors. Here’s another example for you to put in
order… is orange before blue, or after blue? Where does green go? You might be thinking we could order this
by wavelength of light, like a rainbow, but that’s a lot more to think about. Most people are going to be much slower and
more error-prone at ordering. Because of this innate ineptitude of your
visual system, displaying continuous data using colors can be a disastrous design choice. You’ll find yourself constantly referring
back to a color legend to compare items. However, colors are perfect for when the data
is discrete with no ordering, like categorical data. This might seem obvious, but you’d be amazed
at how many interfaces get basic things like this wrong.
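To make this concrete, here’s a rough sketch of the two guidelines, in Python with matplotlib (my choice of tools, not something from the episode): a single-hue intensity ramp for a continuous value, and distinct hues for unordered categories.

```python
# A minimal sketch (not from the episode) of the two color guidelines, using matplotlib.
import matplotlib.pyplot as plt
import numpy as np

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Continuous data: map values to a single-hue intensity ramp.
# Viewers can rank light-to-dark without consulting a legend.
values = np.random.rand(30)
ax1.scatter(np.arange(30), values, c=values, cmap="Blues")
ax1.set_title("Continuous value -> color intensity")

# Categorical data: give each unordered category its own hue.
colors = {"cat": "tab:orange", "dog": "tab:blue", "bird": "tab:green"}
for category, color in colors.items():
    xs = np.random.rand(10) * 10
    ys = np.random.rand(10) * 10
    ax2.scatter(xs, ys, color=color, label=category)
ax2.legend()
ax2.set_title("Categories -> distinct hues")

plt.show()
```

Swapping these two choices, hues for the continuous value and an intensity ramp for the categories, is exactly the kind of mistake that sends users hunting for the legend.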
Beyond visual perception, understanding human cognition helps us design interfaces that align with how the mind works. Like, humans can read, remember and process
information more effectively when it’s chunked – that is, when items are put together into
small, meaningful groups. Humans can generally juggle seven items, plus-or-minus
two, in short-term memory. To be conservative, we typically see groupings
of five or less. That’s why telephone numbers are broken
into chunks, like 317, 555, 3897. Instead of being ten individual digits that
we’d likely forget, it’s three chunks, which we can handle better.
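As a tiny illustration (a made-up Python helper, not anything from the episode), here’s how an interface might chunk those ten digits purely for human benefit:

```python
# A toy sketch (the helper name is made up) of chunking a phone number for display.
def chunk_phone_number(digits: str) -> str:
    """Format a 10-digit string as three human-friendly chunks: 3-3-4."""
    assert len(digits) == 10 and digits.isdigit()
    return f"{digits[0:3]} {digits[3:6]} {digits[6:10]}"

print(chunk_phone_number("3175553897"))  # -> "317 555 3897"
```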
From a computer’s standpoint, this needlessly takes more time and space, so it’s less efficient. But, it’s way more efficient for us humans
– a tradeoff we almost always make in our favor, since we’re the ones running the
show…for now. Chunking has been applied to computer interfaces
for things like drop-down menu items and menu bars with buttons. It’d be more efficient for computers to
just pack all those items together, edge to edge – from the computer’s perspective, the grouping wastes memory and screen real estate. But designing interfaces with chunking makes
them much easier to visually scan, remember and access. Another central concept used in interface
design is affordances. According to Don Norman, who popularized the
term in computing, “affordances provide strong clues to the operations of things. Plates are for pushing. Knobs are for turning. Slots are for inserting things into. […] When affordances are taken advantage
of, the user knows what to do just by looking: no picture, label, or instruction needed.” If you’ve ever tried to pull a door handle,
only to realize that you have to push it open, you’ve discovered a broken affordance. On the other hand, a door plate is a better
design because it only gives you the option to push. Doors are pretty straightforward – if you
need to put written instructions on them, you should probably go back to the drawing
board. Affordances are used extensively in graphical
user interfaces, which we discussed in episode 26. It’s one of the reasons why computers became
so much easier to use than they were with command lines. You don’t have to guess what things on-screen
are clickable, because they look like buttons. They pop out, just waiting for you to press
them! One of my favorite affordances, which suggests
to users that an on-screen element is draggable, is knurling – that texture added to objects
to improve grip and show you where to best grab them. This idea and pattern was borrowed from real
world physical tools. Related to the concept of affordances is the
psychology of recognition vs recall. You know this effect well from tests – it’s
why multiple choice questions are easier than fill-in-the-blank ones. In general, human memory is much better when
it’s triggered by a sensory cue, like a word, picture or sound. That’s why interfaces use icons – pictorial
representations of functions – like a trash can for where files go to be deleted. We don’t have to recall what that icon does,
we just have to recognise the icon. This was also a huge improvement over command
line interfaces, where you had to rely on your memory for what commands to use. Do I have to type “delete”, or “remove”,
or… “trash”, or… shoot, it could be anything! It’s actually “rm” in Linux, but anyway,
making everything easy to discover and learn sometimes means slow to access, which conflicts
with another psychology concept: expertise. As you gain experience with interfaces, you
get faster, building mental models of how to do things efficiently. So, good interfaces should offer multiple
paths to accomplish goals. A great example of this is copy and paste,
which can be found in the edit dropdown menu of word processors, and is also triggered
with keyboard shortcuts. One approach caters to novices, while the
other caters to experts, slowing down neither. So, you can have your cake and eat it too!
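Here’s a toy sketch of that idea using Python’s built-in Tkinter (my example, not the episode’s): one copy function reachable both from a discoverable Edit menu and from a fast Ctrl+C shortcut.

```python
# A toy Tkinter sketch (my example, not the episode's) of offering two paths
# to the same action: a menu item for novices and a keyboard shortcut for experts.
import tkinter as tk

def copy_selection(event=None):
    print("Copy invoked")  # stand-in for the real copy logic

root = tk.Tk()

# Path 1: a discoverable menu item, labeled with its shortcut.
menubar = tk.Menu(root)
edit_menu = tk.Menu(menubar, tearoff=0)
edit_menu.add_command(label="Copy", accelerator="Ctrl+C", command=copy_selection)
menubar.add_cascade(label="Edit", menu=edit_menu)
root.config(menu=menubar)

# Path 2: a fast keyboard shortcut that triggers the same function.
root.bind_all("<Control-c>", copy_selection)

root.mainloop()
```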
In addition to making humans more efficient, we’d also like computers to be emotionally intelligent – adapting their behavior to
respond appropriately to their users’ emotional state – also called affect. That could make experiences more empathetic,
enjoyable, or even delightful. This vision was articulated by Rosalind Picard
in her 1995 paper on Affective Computing, which kickstarted an interdisciplinary field
combining aspects of psychology, social and computer sciences. It spurred work on computing systems that
could recognize, interpret, simulate and alter human affect. This was a huge deal, because we know emotion
influences cognition and perception in everyday tasks like learning, communication, and decision
making. Affect-aware systems use sensors, sometimes
worn, that capture things like speech and video of the face, as well as biometrics,
like sweatiness and heart rate. This multimodal sensor data is used in conjunction
with computational models that represent how people develop and express affective states,
like happiness and frustration, and social states, like friendship and trust. These models estimate the likelihood of a
user being in a particular state, and figure out how to best respond to that state, in
order to achieve the goals of the system. This might be to calm the user down, build
trust, or help them get their homework done.
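As a very rough sketch of that pipeline (the features, weights, and thresholds here are invented for illustration), an affect-aware system might squash a few noisy signals into an estimate of frustration and pick a response accordingly:

```python
# A highly simplified sketch (feature names, weights, and thresholds are invented)
# of combining multimodal signals into an estimate of user frustration.
import math

def estimate_frustration(heart_rate_bpm, speech_pitch_hz, typing_errors_per_min):
    # Weighted sum of crude features, squashed to a 0..1 "probability" with a sigmoid.
    score = (0.03 * (heart_rate_bpm - 70)
             + 0.01 * (speech_pitch_hz - 120)
             + 0.2 * typing_errors_per_min)
    return 1 / (1 + math.exp(-score))

def choose_response(p_frustrated):
    # Adapt the system's tone to the estimated affective state.
    if p_frustrated > 0.7:
        return "Offer to slow down and walk through the task step by step."
    elif p_frustrated > 0.4:
        return "Acknowledge the difficulty and suggest a hint."
    return "Carry on normally."

p = estimate_frustration(heart_rate_bpm=95, speech_pitch_hz=180, typing_errors_per_min=4)
print(f"P(frustrated) = {p:.2f} -> {choose_response(p)}")
```

A real system would use trained models over much richer multimodal data, but the overall shape – estimate the state, then adapt the response – is the same.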
A study looking at user affect was conducted by Facebook in 2012. For one week, data scientists altered the
content on hundreds of thousands of users’ feeds. Some people were shown more items with positive
content, while others were presented with more negative content. The researchers analyzed people’s posts during
that week, and found that users who were shown more positive content, tended to also post
more positive content. On the other hand, users who saw more negative
content, tended to have more negative posts. Clearly, what Facebook and other services
show you can absolutely have an effect on you. As gatekeepers of content, that’s a huge
opportunity and responsibility. Which is why this study ended up being pretty
controversial. Also, it raises some interesting questions
about how computer programs should respond to human communication. If the user is being negative, maybe the computer shouldn’t be annoying by responding in a cheery, upbeat manner. Or, maybe the computer should attempt to evoke
a positive response, even if it’s a bit awkward. The “correct” behavior is very much an
open research question. Speaking of Facebook, it’s a great example
of computer-mediated communication, or CMC, another large field of research. This includes synchronous communication – like
video calls, where all participants are online simultaneously – as well as asynchronous
communication – like tweets, emails, and text messages, where people respond whenever
they can or want. Researchers study things like the use of emoticons,
rules such as turn-taking, and language used in different communication channels. One interesting finding is that people exhibit
higher levels of self-disclosure – that is, reveal personal information – in computer-mediated
conversations, as opposed to face-to-face interactions. So if you want to build a system that knows
how many hours a user truly spent watching The Great British Bakeoff, it might be better
to build a chatbot than a virtual agent with a face. Psychology research has also demonstrated
that eye gaze is extremely important in persuading, teaching and getting people’s attention. Looking at others while talking is called
mutual gaze. This has been shown to boost engagement and
help achieve the goals of a conversation, whether that’s learning, making a friend,
or closing a business deal. In settings like a videotaped lecture, the
instructor rarely, if ever, looks into the camera, and instead generally looks at the
students who are physically present. That’s ok for them, but it means people
who watch the lectures online have reduced engagement. In response, researchers have developed computer
vision and graphics software that can warp the head and eyes, making it appear as though
the instructor is looking into the camera – right at the remote viewer. This technique is called augmented gaze. Similar techniques have also been applied
to video conference calls, to correct for the placement of webcams, which are almost
always located above screens. Since you’re typically looking at the video
of your conversation partner, rather than directly into the webcam, you’ll always
appear to them as though you’re looking downwards – breaking mutual gaze – which
can create all kinds of unfortunate social side effects, like a power imbalance. Fortunately, this can be corrected digitally,
and appear to participants as though you’re lovingly gazing into their eyes. Humans also love anthropomorphizing objects,
and computers are no exception, especially if they move, like our robots from last episode. Beyond industrial uses that prevailed over
the last century, robots are used increasingly in medical, education, and entertainment settings,
where they frequently interact with humans. Human-Robot Interaction – or HRI – is
a field dedicated to studying these interactions, like how people perceive different robots’
behaviors and forms, or how robots can interpret human social cues to blend in and not be super
awkward. As we discussed last episode, there’s an
ongoing quest to make robots as human-like in their appearance and interactions as possible. When engineers first made robots in the 1940s and 50s, they didn’t look very human at all. They were almost exclusively industrial machines
with no human-likeness. Over time, engineers got better and better
at making human-like robots – they gained heads and walked around on two legs, but…
they couldn’t exactly go to restaurants and masquerade as humans. As people pushed closer and closer to human
likeness, replacing cameras with artificial eyeballs, and covering metal chassis with
synthetic flesh, things started to get a bit… uncanny… eliciting an eerie and unsettling
feeling. This dip in our comfort with robots that are almost, but not quite, human became known as the uncanny valley. There’s debate over whether robots should
act like humans too. Lots of evidence already suggests that even
if robots don’t act like us, people will treat them as though they know our social
conventions. And when they violate these rules – such
as not apologizing if they cut in front of you or roll over your foot – people get
really mad! Without a doubt, psychology and computer science
are a potent combination, and have tremendous potential to affect our everyday lives. Which leaves us with a lot of questions, like:
you might lie to your laptop, but should your laptop lie to you? What if it makes you more efficient or happy? Or should social media companies curate the
content they show you to keep you on their site longer and get you to buy more products? They do, by the way. These types of ethical considerations aren’t
easy to answer, but psychology can at least help us understand the effects and implications
of design choices in our computing systems. But, on the positive side, understanding the
psychology behind design might lead to increased accessibility. A greater number of people can understand
and use computers now that they’re more intuitive than ever. Conference calls and virtual classrooms are
becoming more agreeable experiences. As robot technology continues to improve,
people will grow more comfortable interacting with them. Plus, thanks to psychology, we can all bond
over our love of knurling. I’ll see you next week.