W3C Workshop on Web and Machine Learning

Cognitive Accessibility and Machine Learning - by Lisa Seeman, Joshue O’Connor

Hi, welcome to our cognitive accessibility and machine learning presentation for the W3C machine-learning workshop.

I'm very, very happy to be here.

My name is Josh O'Connor.

I'm an emerging technology specialist with the W3C Web Accessibility Initiative.

I'm here with my colleague Lisa Seeman, who is the facilitator of COGA, the Cognitive Accessibility Task Force.

And we are going to be talking about cognitive accessibility and machine learning - beyond rule-based testing environments.

So great, Lisa, over to you.

Let's hear a little bit about what we're gonna be discussing.

And these are really research topics, or things at early prototype stages.

The first is that machine learning can offer potential solutions for people with cognitive disabilities: bridging the gap between what people can comprehend and what they're presented with, by putting things into a format that they can understand better, like simplified language, as one example. Another is usability testing for people with cognitive disabilities, offered as software as a service or some other compatible mechanism that would enable automatic testing.

And also, there's some movement in the other direction.

Because I think there's the potential for what we're doing to support people with cognitive disabilities to actually help machine-learning algorithms perform better, by giving them an anchor, or more information about what's going on in the page, so that if they're adapting it for other applications, they will perform a bit better.

So we're going to be talking about how machine learning can support people with cognitive disabilities, how this could potentially enable the better testing of that support.

And, just generally, to understand how inclusive design can support even machine learning, by developing data sets that capture what works, and what does not work, for particular diverse user groups.

Is that right?

Yeah.

It kind of goes in the other direction as well.

There are tons of things where machine learning can help us, as people trying to provide support for people with Cognitive and Learning Disabilities, and create new tools.

But actually, we could be helping other machine learning applications by what we're doing.

Absolutely.

So, going on to the next section, on slide two: user testing, cognitive accessibility and machine learning.

Yeah, so right now, a lot of the conformance testing for accessibility regulations and guidelines is based on pattern-matching rules.

So does this image have an alt tag?

If not, it might not conform to something.

You know, that kind of pattern matching.

And that really didn't work well for us, trying to incorporate the user needs of people with Cognitive and Learning Disabilities.
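
To make the contrast concrete, here is a minimal illustrative sketch of the kind of pattern-matching rule described above: a check that flags images with no alt attribute. It is a sketch only, not any particular checker's implementation.

```python
# A minimal sketch of a pattern-matching conformance rule of the kind
# described above: flag <img> elements that have no alt attribute at all.
# Illustrative only; real accessibility checkers are considerably more nuanced.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Counts images that lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.failures = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.failures += 1

checker = MissingAltChecker()
checker.feed('<img src="logo.png"><img src="kite.jpg" alt="A red kite">')
print(checker.failures)  # 1: the first image fails this simple rule
```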

For instance, moving to slide two, an example of a user need or pattern that we would be introducing for Cognitive and Learning Disabilities is: can we identify the controls on the page?

Does a button look like a button?

Or do they think it's a heading?

Or just text?

Is it recognizable?

Well, this is something that's very hard to do with straightforward rules.

You know, if this is missing, if there's a button that doesn't have this, then it's a fail.

Those kinds of unit-test rules.

It's very hard.

It's a bit...

it's subjective, and it moves.

What's familiar depends on which group.

So we would recommend user testing with a bunch of different groups, and recording what people find usable.

So if you're using a link, and it's blue and underlined, let's assume that it's recognizable.

But if you've got other ways of doing a link, well, it might be very different for an 18-year-old with Down syndrome than for a 70-year-old with age-appropriate forgetfulness who's having trouble learning new design patterns.

Putting that into a standard has been really, really challenging.

People will often know if it's recognizable or not, but how do you make that something that you could test?

So there are issues, Lisa, with mapping affordances in a way that's consistent, depending on a user's preferences or requirements, particularly if they have diverse abilities.

Yes, that's one thing...

that's one way we're trying to look at it.

So the advantage of machine learning here is that the algorithm could be trained on what people have found usable and recognized straight away, and learn what is considered recognizable, and what isn't, for this user base.

And we could introduce a certainty component, such as: if we say something is likely recognizable with over 90% certainty, assume that it's accessible.

And that could kind of bridge the gap.

As we've got on slide five, between the need to say, perform user testing, that's how you're gonna know, and having some kind of automated solution that's less scary and easier.

So, the machine-learning approach would have a way of giving us a certain statistical confidence that something is likely to work or not, based on pattern matching across many different kinds of interactions.
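
As an illustration of that certainty component, here is a hedged sketch: a small classifier stands in for a model trained on recorded user-testing results, and only predictions above the 90% threshold mentioned earlier are treated as automatic passes. The features, the toy data, and the choice of model are assumptions made for the example.

```python
# A sketch of the "certainty component": a classifier stands in for a model
# trained on recorded user-testing outcomes, and only predictions above a
# 90% confidence threshold count as automatic passes; everything else is
# referred back to human user testing. Features and data are invented.
from sklearn.linear_model import LogisticRegression

# Each row describes a control: [styled_like_button, underlined, high_contrast];
# the label records whether test participants recognized it.
X = [[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

def assess(control, threshold=0.9):
    p = model.predict_proba([control])[0][1]  # estimated P(recognizable)
    if p >= threshold:
        return f"automatic pass ({p:.2f})"
    return f"refer to user testing ({p:.2f})"

# Outcomes depend on the fitted probabilities for this toy data set.
print(assess([1, 0, 1]))
print(assess([0, 1, 0]))
```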

So how can inclusive design support machine learning?

If we move on then to slide six?

Well, we have some new technologies that we've made.

We've got Personalization Semantics at wide review, going to CR, where we put in a mapping to concept notes so that symbols can be added for different use cases, or symbols can be adapted if people have learned a specific symbol set and that's what is familiar to them, and other support can be integrated into the page, such as instructions or help scaffolding.

So this kind of information actually tells machine learning the right answer: what is this button, what does it do?

What concept did I mean when I said this?

So that would then have almost 100% certainty, or close to it.

You can then use that for giving context, and that could upgrade the performance of things like automatic translation, and other transformations or analysis of web content.
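
As a rough sketch of that idea, the snippet below shows how an explicit concept annotation travelling with the text could remove the guesswork for a downstream tool such as a machine translator. The attribute-style keys, concept references, and glossary are illustrative assumptions, not the Personalization Semantics specification itself.

```python
# A rough sketch of the idea, not the Personalization Semantics API itself:
# an explicit concept annotation travels with the text so a downstream tool,
# such as a machine translator, no longer has to guess what "bank" means.
# All keys, concept references, and glossary entries below are invented.

# Tokens as they might be extracted from annotated markup: (text, concept ref).
tokens = [("Go", None), ("to", None), ("the", None), ("bank", "13621"), ("today", None)]

# A hypothetical concept glossary; a real one would come from whatever
# concept or symbol registry the annotation points at.
CONCEPTS = {"13621": "financial institution", "24507": "edge of a river"}

def context_for(token_text, concept_ref):
    """Build a disambiguating hint string for a downstream translator."""
    if concept_ref and concept_ref in CONCEPTS:
        return f"{token_text} (meaning: {CONCEPTS[concept_ref]})"
    return token_text

print(" ".join(context_for(t, c) for t, c in tokens))
# Go to the bank (meaning: financial institution) today
```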

Great.

That sounds great.

So we're talking about, on slide seven, semantic tokens allowing the association of these various preferred symbols with content elements.

How do you see machine learning even supporting the creation of these symbols?

Do you think that's because, if you have things described in a way that's semantically accurate, it allows levels of abstraction that machine learning could support in a large, aggregated manner?

Is that something that you think…?

Yeah, it can go back and forth Josh.

Because if you've got, say, an authoring tool that adds the concept notes, if you like, into the code.

And if it does that with 80% accuracy, which is currently about the rate.

It makes mistakes, but the author will understand the symbol and will understand what they intended.

So if you said bank and you meant the financial institution, but it puts in the bank of a river, you'll know this is wrong and you can switch it.

If you're then using that as an anchor for machine translation to other languages, maybe, and I leave this to the audience who know more, that could provide the context for better machine translation.

A bit of context can trigger more certainty down the line, and we're all fighting the same ambiguities when we make mistakes.

So if we're solving it to be able to translate across different symbol languages within one language, then it could also potentially help to map across different languages.
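
Here is a minimal sketch of that shared-concept idea: once content is tagged with a concept reference, the same reference can be looked up in whichever symbol set a particular user has learned. All identifiers and file names are invented for illustration.

```python
# A minimal sketch of the shared-concept idea: once content is tagged with a
# concept reference, the same reference can be looked up in whichever symbol
# set a particular user has learned. Identifiers and file names are invented.
from typing import Optional

CONCEPT_TO_SYMBOL = {
    "13621": {  # hypothetical concept reference for "bank (financial)"
        "symbol-set-a": "bank_building.svg",
        "symbol-set-b": "money_house.svg",
    },
}

def symbol_for(concept_ref: str, user_symbol_set: str) -> Optional[str]:
    """Return the symbol familiar to this user for a given concept, if known."""
    return CONCEPT_TO_SYMBOL.get(concept_ref, {}).get(user_symbol_set)

print(symbol_for("13621", "symbol-set-b"))  # money_house.svg
print(symbol_for("13621", "symbol-set-c"))  # None: fall back to text or a default
```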

Great stuff.

So, in terms of new solutions for inclusion, why do you feel that machine learning can be more similar to how humans actually think? That's a very interesting insight.

Yeah, well, machine learning, and neural networks for instance, is more similar to how most people think; most of us don't think so much in the sequential way of these rule-based algorithms.

So in a way machine learning is based on how people think.

And so, therefore, it could fill the gaps where there's some kind of problem.

So, for instance, for people with dyscalculia, the area of the brain which processes numerical concepts is impaired.

So they struggle very much with money and trying to figure out what's more, what's less, let alone what's expensive.

So being able to make a judgment call on whether this item is expensive or not, that would be quite a long sequential algorithm with traditional computing.

Something like machine learning might be...

I mean, I know of some bots that are working on that, so for that kind of decision making, it could be supportive.
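
As one hedged illustration of that kind of judgement call, the sketch below compares a price against typical prices for the same kind of item. The sample prices stand in for whatever data a real assistant would learn from, and the top-quartile threshold is an arbitrary choice.

```python
# A sketch of that kind of judgement call: is this price expensive for this
# kind of item? The sample prices stand in for whatever data a real assistant
# would learn from, and the top-quartile threshold is an arbitrary choice.
from statistics import quantiles

def is_expensive(price, typical_prices):
    """Flag a price as expensive if it sits in the top quarter of typical prices."""
    _, _, upper_quartile = quantiles(typical_prices, n=4)
    return price > upper_quartile

coffee_prices = [2.50, 2.80, 3.00, 3.20, 3.50, 4.00, 4.20]
print(is_expensive(6.00, coffee_prices))  # True: well above the usual range
print(is_expensive(2.75, coffee_prices))  # False: on the cheap side
```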

Another example is vision loss.

You have an image, and not all images have alternative text.

What would be a good description of the image?

And what's the image trying to convey with its red and yellow signage?

That might be a warning, which is more than OCR could do.

So that's more like the instant recognition of the mood of the image, of what it's portraying.
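
To illustrate the point that a useful description is more than OCR, here is a deliberately skeletal sketch: both helper functions are hypothetical placeholders for whatever OCR engine and image classifier would actually be used; the point is only how their outputs combine into a description that conveys intent.

```python
# A deliberately skeletal sketch of "more than OCR": combine the text that can
# be read off an image with a coarse judgement about what the image is doing
# (warning signage, for instance). Both helpers are hypothetical placeholders
# for whatever OCR engine and image classifier are actually used.
def extract_text(image_bytes):
    """Placeholder for an OCR step."""
    raise NotImplementedError("plug in a real OCR engine here")

def classify_signage(image_bytes):
    """Placeholder returning e.g. 'warning', 'informational' or 'decorative'."""
    raise NotImplementedError("plug in a real image classifier here")

def describe_image(image_bytes):
    """Build a description that conveys intent, not just the raw characters."""
    kind = classify_signage(image_bytes)
    text = extract_text(image_bytes)
    if kind == "warning":
        return f"Warning sign: {text}"
    return f"{kind.capitalize()} image: {text}"
```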

Right.

So that's sophisticated image recognition, which can support different disability types.

I know one of the things that you are quite concerned about is the inclusion of people with cognitive disabilities, and their related data, within the data sets that are used to power machine learning or any kind of algorithm-based application, such as smart cities or whatnot.

So could you speak a little bit about just the importance of having diverse data sets?

Yeah.

And this is one of the big fears, I think.

And it happens already that people with disabilities, or other groups, maybe people who can't afford a smartphone, aren't using the apps and aren't using the websites, and some of the tools, like phone calls, they can't manage, because the phone menu systems can be very difficult for people to manage.

And so then they don't call, because they know they can't reach a human, and then they're missed out of the data-gathering process and not in those data sets.

And therefore they're kind of invisible in the whole decision making process that's based on these data sets, whether it's machine learning, or other algorithms.

And that then will result in solutions that don't consider them.

But as this becomes more pervasive, with smart cities managing all these critical services, people again think, “Oh yes, our services are good,” et cetera.

And it's just because the people who can't manage the service, who aren't getting the critical help they need, are missing from the data sets.

And that's going to be a very big fear.

Well, I think that's a very prescient and sobering assessment, and it shows that when we have generic data sets being used to drive algorithmic decisions, it can have a very negative impact on people in ways that we can't initially perceive.

So Lisa, just finally, thanks very much.

This is really, really great.

We do focus, in APA, the Accessible Platform Architectures group, and the Research Questions Task Force, on where there are various nexuses between different technologies, different platforms, and accessibility.

You know, we do invite everyone who's at this workshop to contribute to this work, and we welcome feedback on the presentation that we've given and the research that Lisa is doing in the Cognitive Accessibility Task Force.

And we hope that this discussion is useful and beneficial, and everyone has enjoyed it, and found it useful.

Lisa, thank you very much.

Thank you, Josh.

