Angela Colter podcast interview: testing content, low literacy & continuous improvement

Angela Colter

In Episode 7 of the Together London Podcast, I talk to Angela Colter about testing content, considering users with low literacy skills, and continuous improvement.

Check out Angela’s blog and publications and follow her on Twitter @angelacolter.

Listen to the podcast

Download MP3 file or subscribe in iTunes.

Read the transcript

Jonathan Kahn: I’m talking to Angela Colter, who’s joining me from Philadelphia today. She’s a user researcher and usability consultant, and she’s been designing information for people, both online and in print, since 1997. Angela’s Principal of Design Research at Electronic Ink in Philadelphia, and she’s presented worldwide at conferences like UPA, Confab, STC, IA Summit, and the Plain Language Association. So, Angela, thanks so much for taking the time to join me today.

Angela Colter: My pleasure.

Jonathan: According to your website, you’ve been working in information design since ’97, originally as a print designer. So what I want to know is, how did you end up in the usability field, and how did that lead you to content?

Angela: So I started working as a graphic designer after I finished my master’s degree, at a college outside of Baltimore, Maryland. And what I found, while I was doing print design work, was that in many cases the clients, the internal folks at the college that I was doing work for, were very interested in getting their message out: “This is the message that we want to get to our students, prospective students, and so forth.” But I found it a little frustrating that the conversation was all about the information that the department wanted to communicate out, with very little acknowledgement of what kind of information their audience was looking for or needed from them.

So, it was a frustration that grew over the course of my career, until I sort of stumbled onto the field of, initially, information architecture: this idea of cataloging information, of organizing it in such a way that makes it easy for people to find. So I thought, “Ah, that’s pretty interesting. I’ll go back and take a course on information architecture.” But it just so happened that the course was only offered once a year, and it wasn’t offered [laughs] the semester that I had intended to take it. Instead, there was a research-methods class available, where you learned the basics of usability testing and user research.

So I had intended to do one thing, after becoming sort of disillusioned with the career that I had chosen, and by accident, I suppose, hit on this other sort of career interest. That was about 10 years ago, and I’ve been doing that ever since.

Jonathan: OK. I think you’re best known as a usability person. So how did that lead you to presenting and writing and talking so much about content?

Angela: Yeah, that’s a very interesting question. Early in my career, while doing usability testing, mostly for websites, I worked on a project with my graduate adviser on creating guidelines, print guidelines in this case, for communicating with people with low literacy skills. And that project sort of expanded into, “Well, now that we’ve got these guidelines established for how to communicate health-care information to people who don’t read easily, how would you translate that to a website?” So if you’ve got print guidelines, what are the corresponding web guidelines? Very early in my experience with usability, I was exposed to this idea that different audiences have different needs from content, and to the question of how you satisfy those needs. So the focus on content, and on how we communicate with our audiences, just sort of happened, I suppose, organically, as part of the beginnings of doing this type of work.

Jonathan: OK. And so, I think I first came across your work when you wrote an article for “A List Apart Magazine” in December 2010, which was called “Testing Content.” And I thought this was a big eye-opener for me, at least, because you called out a common usability practice, which is testing whether users can find content rather than whether they actually understand it. When I saw that, I thought, “Wow, that is one of the problems with usability testing as a practice.” So why does this happen?

Angela: I think it happens partly from the client’s point of view, and really, in the usability field, we tend to be somewhat complicit in it, because that’s what we’re comfortable with. It’s a very on-off sort of switch: “Are you able to find it, or are you not able to find it?” That’s a very binary sort of issue. And I think that, naturally, we’re drawn to answering the simpler questions. “Can you find it or can you not find it?” is a very simple question that can be answered, and so you tend to gravitate towards the things that are a little simpler to answer.

The question of whether somebody understands what it is that you’ve set out for them, that’s a much more complex issue. And it gets into not only what you have control over, which is what you’ve written, but it also involves things that you really have no control over, which is the domain knowledge of your audience, the reading skills of your audience and so forth.

I mean, that’s the excuse that I will use for myself. [laughs] When I’ve gravitated towards “Can you find it or not?” it’s because that’s an easier question to answer.

Jonathan: Yeah. So it’s like a quick win. The other part is that findability is easier to fix: maybe you just need to label it more clearly.

Angela: Mm-hmm. Yeah.

Jonathan: There’s a great reality check in that piece, I think. When you say that if you ask the user, “Do you like this information? Do you understand this information?” that isn’t really the useful question to ask. Why is that the case?

Angela: If you’ve ever observed a usability test: at the end, you’ve maybe spent half an hour or an hour or longer with a person, you’ve asked them to complete a bunch of different tasks, maybe they’ve struggled with some of those tasks, and maybe, in some cases, they’ve even failed to complete them. In my entire career, every single time I’ve asked somebody at the end of a test, “So, how did that site work for you?” I never get any answer other than, “It was great! Thumbs-up! It was really easy!” And I’ve never understood why somebody would tell me, “Oh, yeah, that was easy,” when my reaction is, “That’s funny, because you actually failed every single task that I gave you.” I think there’s something similar going on when you ask somebody, “Did you understand that?” Part of it is that a usability test is already an artificial environment. Despite the fact that you want to make it very clear that you’re testing the interface or the content, not the person, there’s no way the participant isn’t going to feel like you’re testing them. And people don’t want to be perceived as dumb, as having failed.

So I think that folks who maybe did have a hard time with some tasks in a usability test are going to say, “Yeah, it was great. It was easy,” because they either don’t want to admit otherwise and admit that they had a difficult time, or I think what’s also likely going on is, “Well, it wasn’t easy, but it’s no more difficult than any other thing that I’ve ever used on the web.”

Jonathan: Right, right, right. Usability itself is this very relative term. If you can complete the task, you may regard that as usable. [laughs]

Angela: Yeah, yeah. Say you call customer service for some company. You didn’t really get the answer that you wanted from the first person you talked to, and you got shuttled around, but eventually you got the answer you came for, and it took 45 minutes. Well, how did that go? “About as well as I expected.” Because if your experience with something is always bad, then the fact that it’s bad when you test it just means it’s no worse than anything you’ve encountered before. If something matches up with a person’s expectations, they’re going to say, “Yeah, I guess it was fine.” That, going back to your original question, is why asking people “How did this content work for you?” often won’t get you a very accurate answer. “Well, it worked well enough,” or “I had to read it two or three times, but I finally made sense of it.” The expectation is, that’s usually what I have to do, so it’s not outside the realm of people’s experience.

Jonathan: So, in this article you go through three practical ways that people can start testing their content right now: moderated usability tests, like you were describing (although you haven’t yet told us how to do them), unmoderated tests, or something called a cloze test. Can you talk me through those techniques?

Angela: Yes, so real quickly: a moderated usability test is where you have a moderator who’s actually talking to the participant. If you’re testing a website, for example, you’ll usually have a list of tasks that you want the participant to attempt. You watch them as they attempt those tasks, and you might ask them to think aloud, giving you the running commentary of what’s going on in their head while they try to complete each task.

And what that does is it allows you to see what they are actually doing. The running commentary also gives you some insight into what they’re thinking while they’re trying to complete the tasks. But what’s also very interesting is that how people describe what they’re doing and what they’re actually doing do not necessarily overlap 100 percent.

In other words: “Oh, yeah, I thought that task was pretty easy,” when, in fact, the participant may not realize that they failed to do what you asked them to do. So you can’t rely 100 percent on how people describe what they’re doing. You always have to match that up with what they actually did, and pay attention to both of those things.

So that’s a moderated usability test. It can happen either with the participant in the same room with the moderator, or remotely, where you’re talking with somebody perhaps over Skype and doing screen sharing so that, when testing a website, you can see what they’re clicking on, where they’re going, and so forth. Just because it’s moderated doesn’t necessarily mean you’re in the same room with the participant.

Jonathan: Specifically when you said testing content in this way, how are you determining whether they’ve understood the content in this test?

Angela: One good way of doing that is to ask somebody to paraphrase what they’ve read. So if you give somebody a piece of content, maybe an article about how to address a particular healthcare situation, you ask them to read it and then ask them to paraphrase it, and you can look at whether they got the gist of what the article was saying or completely missed the main point of what you asked them to read. So paraphrasing is one thing you can do. Another thing you can do is ask them questions about what they just read. Ideally you would ask questions where they really wouldn’t know the answer unless they had read what you just gave them.

That can be difficult, because people also bring domain knowledge to bear, so getting a sense of what people already know before asking them to read a piece of content is always useful. But asking them questions to see whether they were able to pick out that information in the content is another way to do it.

And then you mentioned the cloze test, which I mentioned in the article. That’s another really interesting technique you can use, which is kind of a funny little thing. If you know what a Mad Lib is, it kind of looks like a Mad Lib: you’ve got a piece of text, you remove every fifth word or so and put a blank in its place, and then you give people that text and ask them to fill in the original word that they think the author would have used.

And what that does is it uses the idea of closure from Gestalt theory: the idea that the brain will fill in the blanks of incomplete information. The brain will do the same thing with words. You take a look at the rest of the words that are present, and from the context of the remaining words, are you able to intuit what’s missing?
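[Editor’s note: the mechanics Angela describes, blanking every fifth word and keeping the removed words as an answer key, are simple enough to sketch in code. This is an illustration, not part of the interview; the function name and the fixed-interval convention are assumptions.]

```python
def make_cloze(text, nth=5):
    """Build a cloze test from a passage: replace every nth word
    with a blank, keeping the removed words as the answer key."""
    words = text.split()
    answer_key = []
    for i in range(nth - 1, len(words), nth):
        answer_key.append(words[i])  # remember the original word
        words[i] = "_____"           # blank it out for the participant
    return " ".join(words), answer_key

passage = ("The cloze test removes every fifth word from a passage "
           "and asks readers to guess what the missing words were.")
cloze_text, key = make_cloze(passage)
# cloze_text begins: "The cloze test removes _____ fifth word from a _____ ..."
```

Participants see `cloze_text` and try to supply each missing word; `key` is what the scorer checks their answers against.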

Jonathan: It’s one of those things where, until you actually do it yourself, it doesn’t seem to make any sense. You have a sample of it in this article, and then you gave a talk at Confab in Minneapolis where you got us all to do this test, as I remember. And people around the room were just like, “Wow.” I’d never seen this before, and everyone around the room was like, “This is so useful. How does it work?” And it does work. You gave us one that we could do, and then you gave us one that was financial, which we could nowhere near do.

Angela: [laughs]

Jonathan: And we were all just going, “But we’re all supposed to be clever people, and we can’t do this. What’s going on?” And I think you explained afterwards.

Angela: Yeah, so what’s really interesting is, the cloze test has been in use for, I don’t know, the past 60 years or so. It’s often used in classrooms, particularly by folks who are teaching English as a second language. So it’s an established method, used mainly in education up to this point, in areas outside of usability, content, and websites. But when I first encountered it, I remember thinking, “This is complete voodoo. There’s no way this actually makes any sense. It just doesn’t make any sense.”

But then I worked on a project where the client said, “We want you to use a cloze test,” and they explained what it was and how to use it to test, in this case, two different types of articles.

It was for a government agency. The agency was trying to come up with a readability formula that could be applied to healthcare information specifically, and this was a method of testing whether the tool they had come up with was actually working.

They gave us a series of articles. Half of them their experts had judged to be easy and the other half the experts had judged to be difficult. We were to take these, turn them into a cloze test, and then give them to participants.

Sure enough, the method bore that out. The way you score a cloze test is to figure out the percentage of fill-in-the-blank answers that the participant got correct. The benchmark is this: if they were able to guess 60 percent of the words or more, your article is appropriate for that audience. If you’re between 40 and 60 percent, it might be a little difficult for this particular audience, but if you’ve got supplementary materials you might be OK.

If they got 40 percent or below, you really need to rewrite it. It’s not appropriate for the audience you’re presenting it to. I think at Confab, when I had you guys do the exercise, I had an easy article and a difficult article, and sure enough, that was borne out by the folks doing each of the cloze tests. The folks that had the “easy” article had an easy time filling in the blanks, and they got above 60 percent, if memory serves.

The poor folks that got the more difficult article, I could even see it in your faces. People were struggling to come up with what could possibly go in this blank. It really was a struggle.
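[Editor’s note: the scoring and the 60/40 percent benchmarks Angela describes can be sketched as follows. The function names are hypothetical, and exact-match scoring is only one convention; some practitioners also accept synonyms, which gives higher scores and different benchmark values.]

```python
def score_cloze(responses, answer_key):
    """Return the percentage of blanks the participant filled with
    the original word (exact match, ignoring case and surrounding
    punctuation: the strict scoring convention)."""
    strip_chars = ".,;:!?\"'"
    correct = sum(
        1 for given, expected in zip(responses, answer_key)
        if given.strip().strip(strip_chars).lower()
           == expected.strip().strip(strip_chars).lower()
    )
    return 100.0 * correct / len(answer_key)

def interpret(percent):
    """Apply the benchmarks described in the interview."""
    if percent >= 60:
        return "appropriate for this audience"
    if percent >= 40:
        return "difficult; supplementary material may be needed"
    return "rewrite; not appropriate for this audience"

# One participant's answers against a four-blank key: 3 of 4 match.
score = score_cloze(["every", "Passage", "guess", "wrong"],
                    ["every", "passage", "guess", "were."])
# score is 75.0, which interpret() rates as appropriate
```

In practice you would average scores across several participants drawn from the real audience before applying the benchmarks.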

Jonathan: It was a real document. It was some kind of financial, legal terms and conditions that you found. That’s why it was so powerful, I think.

Angela: Yeah. I think I pulled it from a credit card disclosure document or something like that, so it was an actual thing, actual content that exists out there in the world. A room full of highly educated, very smart people had trouble with it.

Jonathan: I think that’s why this is so interesting on a number of levels. That actual content was, in theory, supposed to be informing a party to a contract about their side of it, and yet, by this evidence, highly educated readers whose first language is English could not understand what it meant. Before we move on to the next thing, I want to say that the great thing about everything you’ve talked about so far is that these are things we can all use to start testing our content now and, maybe more importantly, to demonstrate to the people who should care about this in our organizations that this is a problem worth caring about.

I can imagine you could sit down with someone in your organization who’s a blocker on content quality, get them to do a cloze test on their own content, and say, “Look at this. This is what this means.” It’s a very practical way of getting on with it: these are techniques you can use now.

If anyone’s listening who would like to create an agenda for better quality content in their organizations, I think Angela’s work is fantastic for just getting into that stuff.

The other thing that came out of that for me, in terms of the terms and conditions for a person who’s supposed to be on one side of a contract, is the other assumption we tend to make: we assume that everyone’s like us, with the same educational background and the same language skills. The other article you wrote that blew everyone’s mind was for Contents Magazine, called “The Audience You Didn’t Know You Had,” which is about low-literacy users.

You also gave a fantastic talk just a few months ago at the Content Strategy Forum in Cape Town about this same topic. Why is this an important topic for you?

Angela: I think it’s an important topic for me because low literacy affects an astounding number of people, not just in the US but in so many different countries. It’s such a universal issue, and one that people just aren’t really aware of. I’ll throw some statistics at it: something like 46 percent of the adult population in the United States has low literacy skills, commonly defined as reading at or below an eighth-grade level. The folks who actually run the National Adult Literacy Survey in the US would resist the “eighth grade level or below” label, but other places that I’ve read make that connection, and it’s an easy thing for my brain to grab onto, so I tend to use it even though the survey’s authors would resist that definition.

Nearly half of the population has low literacy skills, and about a quarter, 21 to 23 percent, something like that, have very low literacy skills, often defined as fifth grade or below. That is not an insignificant number of people. That’s one of the reasons I have an interest in low literacy: if that’s the number of folks we’re talking about in many countries, we’ve got to be conscious of it when we’re building sites and writing content for a general audience, because that general audience is going to include people who will have difficulty reading, understanding, and acting on that content.

Jonathan: What are the behaviors that you’ve observed among people with low literacy? What type of techniques do you recommend to try to make stuff useful and usable and effective for this audience?

Angela: One thing you tend to notice with people with low literacy skills is that nobody will ever admit to having difficulty reading. That’s just very rare, and it makes recruiting for a usability test very difficult, because you can’t just ask somebody, “Do you have difficulty reading?” or “What grade level do you read at?” Nobody’s going to know the answer to that. In addition, even if somebody does know that they have difficulty reading, they will likely not acknowledge it to people. I think in the Contents article I’ve got one citation that says a significant percentage of folks who have difficulty reading have never told their spouse. This is not something that people are necessarily even talking to their own family about.

You will see folks who don’t acknowledge the fact that they have difficulty reading. In fact, referring back to the National Adult Literacy Survey, when you ask folks who do have difficulty reading how their reading skills are, 90 percent of the time they’re going to answer, “My reading skills are good,” or “They’re very good.”

Whether they simply don’t have an accurate perception of their reading skills, or that’s just their normal, I don’t know. I can’t really speak to that. They may not even be aware of the fact that they have difficulty reading. One of the other behaviors that you’ll see, particularly with usability testing, and really in testing content as well, is the tendency to satisfice.

“Satisfice” is a made-up word, a combination of “satisfy” and “suffice.” What it means is that folks with low literacy skills will often stop searching when they find the first plausible answer to their question. They’re looking for a piece of information, and if they see something that’s plausible, not necessarily the best answer or even the right answer, but something plausible, they’ll stop looking. You will see that satisficing behavior with folks with low literacy skills.

What that means is that when we’re designing information, we need to be very conscious of what we’re communicating. If you’re not really clear about what you’re saying, somebody may take incomplete information away from what you’ve written. Or if you throw a bunch of extraneous information into what you’re trying to communicate, somebody may pick out the wrong thing and stop there instead of pushing a little more to get the best answer, or even the right answer.

Jonathan: Any other things you can share about what we should be bearing in mind when we try to make our stuff accessible to people with low literacy?

Angela: One other thing I would point out is that literacy isn’t just about word recognition. A lot of times when I talk about low literacy, I’ve gotten some push-back from people saying, “I don’t understand this. People reading at a fifth grade level or an eighth grade level? My eighth grader can read just fine.” They’re really thinking about word-recognition skills. Literacy actually has to do with a lot more than that. It’s also understanding the structure of sentences, and understanding how one sentence relates to another. It gets into issues of being able to find a piece of content within a paragraph. It’s being able to identify when you need to do calculations, when you need to do math to figure something out, and actually doing that math.

Literacy is a whole host of things. It’s not just word recognition, although word recognition is certainly a very important component to that. It really has to do more with once you’ve recognized all these words in a sentence are you able to make sense of the sentence? Are you able to make sense of the paragraph that you just read? That’s also something to keep in mind. It’s not just about word recognition.

Jonathan: When I’m listening to you talking about this stuff, it’s really making me think about some of the social changes we’re seeing as a result of the web. For example, there’s this whole new expectation on governments that they should be communicating information to their citizens online in a clear way. There are a number of ways we can see this, but one is that there are certain types of situations where, in the past, you can imagine people having an intermediary working with them to, for example, figure out what their rights are.

So, for example, you might go to what we have here, the Citizens Advice Bureau, where they’ll actually give you free legal advice about your rights, and perhaps if you have low literacy that may have been one of the main things you would do in the past.

If you’re now making that same decision without the help of a professional, using a government website that’s written in formal, archaic legal language, you’re much more likely to misunderstand what the website’s telling you.

Angela: Right.

Jonathan: And so, we talk about accessibility and inclusion a lot, and I think it’s very easy to reduce that to, “Oh, yeah, I need to make sure that these websites work in screen readers for people who can’t see the screen.” That is one aspect of accessibility, and the technical aspect of whether it reaches the device the person uses is necessary, but it’s not sufficient if we’re not asking whether the language we’re using is appropriate for our stated aim. For example, everyone is supposed to be equal when it comes to the law, but they’re obviously not going to be on an equal footing if they can’t understand the law because of the way it’s written.

Angela: Right. I think a big part of that is realizing who legal information, the terms and conditions for a product or a website, is really written for. Even the folks who are writing it will tell you that the audience really is the lawyers, and maybe the judges; the actual people, the customers or consumers of a particular product, are just an afterthought. What we’re really doing is writing it in such a way that lawyers can understand and make sense of it. I think it’s really important, when you’re writing something, to be honest with yourself about who the audience actually is. Is it the lawyers who are going to hash things out later if something goes wrong with the product you’ve purchased, or what have you? Or, if you’re arrested and asked to sign something, are you able to understand it?

The lawyers can fend for themselves; who is the primary audience for this information? I think that in many cases we just gloss over the fact that actual human beings without law degrees are using this and really should be the primary audience for this information. But in many cases, with legal language in particular, that’s not the case. It’s really written for the lawyers.

Jonathan: When I was watching your talk in Cape Town on this topic, what came across to me, what I was hearing at least, is that so often we design stuff under this delusion about who our audience is: people like me and you, people in our studio, people like us. Actually, that’s so rarely the case, which is why we go out there and use this evidence-based approach of finding people who actually fit the audience and seeing whether they’re using this stuff in the way we intended. I see a connection between that audience delusion and content strategy. So many of our digital products and services are delusional in that same way: we throw all this stuff out there, we leave it to rot, we don’t really maintain it, we don’t really look at it anymore, and we just pretend that it’s working and hope the problem will go away. We don’t really find out whether it’s effective for our customers or our businesses, or whether it’s sustainable.

My question is how do you see your work in this area, or just in general, relating to content strategy?

Angela: I think there’s a very close relationship there. Really, in my practice I see it almost on a daily basis. It can’t just be, “Here is what I am putting out into the world; now it’s out there; my work is done.” There really needs to be a reality check: here was my intent, now how does that actually manifest itself? How are people using this? How are people acting on this information? Now that I’m sensitive to this, I see it all the time. Does the outcome match up with what I intended to happen for the people who are reading or using or otherwise consuming this content? Is it having the intended effect? If not, then I’ve got more work to do.

Jonathan: It never really stops. I think that’s part of what I’m trying to get at here. This is an ongoing commitment to something.

Angela: It is. It really is. It’s an iterative process. Even once you’ve launched something and you’ve tested it and things are going well, does the user landscape change? Does people’s domain knowledge change over time? Do their needs change over time? That’s something worth attending to even after launch. “Oh, we’ve tested it. Yeah, things are going pretty well.” But how about when your users have changed from novices encountering the site for the first time to customers using your product or your site on a daily basis? How have their needs changed, and are you still meeting those needs?

Jonathan: So what we actually need to do is work things like usability testing into our processes, keep iterating on what we’re designing and publishing, and continuously measure how well we’re doing.

Angela: I do believe that.

Jonathan: Angela, thank you so much for your time today. It’s been a fantastic podcast and super interesting stuff. I think people are going to have lots to practically use out of this. Thank you so much.

Angela: You’re very welcome. Thank you for having me.