These researchers aim to improve accessibility with augmented reality.

RASSAR is an app that scans a home, highlights accessibility and safety issues, and lets users click on them to learn more. CREDIT Su et al./ASSETS ‘23

Big Tech’s race into augmented reality (AR) grows more competitive daily. This month, Meta released the latest iteration of its headset, the Quest 3. Early next year, Apple plans to release its first headset, the Vision Pro. The announcements for each platform emphasize games and entertainment that merge the virtual and physical worlds: a digital board game superimposed on a coffee table, a movie screen projected above airplane seats.

Some researchers, though, are more curious about other uses for AR. The University of Washington’s Makeability Lab is applying these budding technologies to assist people with disabilities. This month, researchers from the lab will introduce multiple projects that deploy AR — through headsets and phone apps — to make the world more accessible.

Researchers from the lab will first present RASSAR, an app that can scan homes to highlight accessibility and safety issues, on Oct. 23 at the ASSETS ‘23 conference in New York.

Shortly after, on Oct. 30, other teams in the lab will present early-stage research at the UIST ‘23 conference in San Francisco. One app helps AR headsets better understand natural language, and the other project aims to make tennis and other ball sports accessible to low-vision players.

UW News spoke with the three studies’ lead authors, Xia Su and Jae (Jaewook) Lee, both UW doctoral students in the Paul G. Allen School of Computer Science & Engineering, about their work and the future of AR for accessibility.

What is AR and how is it typically used right now?

Jae Lee: One commonly accepted answer is that you use a wearable headset or a phone to superimpose virtual objects in a physical environment. Many people probably know AR from “Pokémon Go,” where you’re superimposing these Pokémon into the physical world. Now Apple and Meta are introducing “mixed reality” or passthrough AR, which further blends the physical and virtual worlds through cameras.

Xia Su: I have also been observing lately that people are trying to expand the definition beyond goggles and phone screens. There could be AR audio, which is manipulating your hearing, or devices trying to manipulate your smell or touch.

A lot of people associate AR with virtual reality, and it gets wrapped up in discussion of the metaverse and gaming. How is it being applied for accessibility?

JL: AR as a concept has been around for several decades. But in Jon Froehlich’s lab, we’re combining AR with accessibility research. A headset or a phone can be capable of knowing how many people are in front of us, for example. For people who are blind or have low vision, that information could be critical to how they perceive the world.

XS: There are two different routes for AR accessibility research. The more prevalent one is trying to make AR devices more accessible to people. The other, less common approach is asking: How can we use AR or VR to improve the accessibility of the real world? That’s what we’re focused on.

JL: As AR glasses become less bulky and cheaper, and as AI and computer vision advance, this research will become increasingly important. But widespread AR, even for accessibility, brings up a lot of questions. How do you deal with bystander privacy? As a society, we understand that vision technology can benefit blind and low-vision people. But we also might not want to include facial recognition technology in apps for privacy reasons, even if that helps someone recognize their friends.

Let’s talk about the papers you have coming out. First, can you explain your app RASSAR?

XS: It’s an app that people can use to scan their indoor spaces and detect possible accessibility and safety issues in their homes. It’s possible because some iPhones now have lidar (light detection and ranging) scanners that sense the depth of a space, so we can reconstruct the space in 3D. We combined this with computer vision models to highlight ways to improve safety and accessibility. To use it, someone — perhaps a parent who’s childproofing a home, or a caregiver — scans a room with their smartphone and RASSAR spots accessibility problems. For example, if a desk is too high, a red button will pop up on the desk. If the user clicks the button, there will be more information about why that desk’s height is an accessibility issue and possible fixes.
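
To illustrate the kind of rule-based check Su describes, here is a minimal sketch in Python. The object labels, height thresholds and data structures are illustrative assumptions, not RASSAR’s actual code or guideline values.

```python
# Hypothetical sketch: flag accessibility and safety issues from a 3D scan.
# Object labels, thresholds and the DetectedObject structure are illustrative,
# not RASSAR's actual implementation or guideline values.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str       # e.g., "desk", "light_switch"
    height_m: float  # height above the floor, in meters

# Each rule pairs an explanation with a test over the object's height.
RULES = {
    "desk": ("Desk surface may be too high for a seated user", lambda h: h > 0.86),
    "light_switch": ("Switch may be above a comfortable reach range", lambda h: h > 1.2),
}

def find_issues(objects):
    """Return (object, explanation) pairs for anything that violates a rule."""
    issues = []
    for obj in objects:
        if obj.label in RULES:
            message, violates = RULES[obj.label]
            if violates(obj.height_m):
                issues.append((obj, message))
    return issues

scan = [DetectedObject("desk", 0.95), DetectedObject("light_switch", 1.35)]
for obj, message in find_issues(scan):
    print(f"{obj.label} at {obj.height_m:.2f} m: {message}")
```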

JL: Ten years ago, you would have needed to go through 60 pages of PDFs to fully check a house for accessibility. We boiled that information down into an app.

And this is something that anyone can download to their phones and use.

XS: That’s the eventual goal. We already have a demo. This version relies on lidar, which is only on certain iPhone models. But if you have such a device, it’s very straightforward.

JL: This is an example of the hardware and software advancements that let us create apps quickly. When Apple added the lidar sensor, it also announced RoomPlan, which creates a 3D floor plan of a room. We’re using that in RASSAR to understand the general layout. Building on it lets us come up with a prototype very quickly.

So RASSAR is nearly deployable now. The other areas of research you’re presenting are earlier in their development. Can you tell me about GazePointAR?

JL: It’s an app deployed on an AR headset that enables people to speak more naturally with voice assistants like Siri or Alexa. All the pronouns we use when we speak are difficult for computers to understand without visual context. I can ask, “Where’d you buy it from?” But what is “it”? A voice assistant has no idea what I’m talking about. With GazePointAR, the goggles are looking at the environment around the user and the app is tracking the user’s gaze and hand movements. The model then tries to make sense of all these inputs — the words, the hand movements, the user’s gaze. Then, using a large language model, GPT, it attempts to answer the question.
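
A minimal sketch of the general idea, fusing what the user said, looked at and pointed at into a single language-model prompt, might look like the following. The function names and prompt format are assumptions for illustration; this is not GazePointAR’s implementation.

```python
# Hypothetical sketch of multimodal pronoun resolution, in the spirit of GazePointAR.
# ask_llm() is a stand-in for a call to a large language model such as GPT; the
# gazed/pointed object descriptions would come from the headset's trackers.

def ask_llm(prompt: str) -> str:
    # Placeholder so the sketch runs without a model; a real system would query GPT here.
    return f"[An LLM would answer based on this prompt]\n{prompt}"

def resolve_and_answer(utterance: str, gazed_object: str, pointed_object: str) -> str:
    """Build a context-enriched prompt so the model can resolve pronouns like 'it'."""
    prompt = (
        "The user is wearing an AR headset.\n"
        f"They are looking at: {gazed_object}\n"
        f"They are pointing at: {pointed_object}\n"
        f'They said: "{utterance}"\n'
        "Resolve any pronouns using the visual context, then answer the question."
    )
    return ask_llm(prompt)

print(resolve_and_answer("Where'd you buy it from?", "a ceramic coffee mug", "the same mug"))
```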

How does it sense what the motions are?

JL: We’re using the HoloLens 2, a headset developed by Microsoft. It has a gaze tracker that watches your eyes and tries to guess what you’re looking at, and it has hand-tracking capability as well. In a paper we submitted building on this work, we noted that the approach still has problems. For example, people don’t use just one pronoun at a time — we use several. We’ll say, “What’s more expensive, this or this?” To answer that, we need information over time. But, again, you can run into privacy issues if you want to track someone’s gaze or visual field of view over time: What information are you storing and where is it being stored? As technology improves, we must watch out for these privacy concerns, especially in computer vision.
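
One way to handle several demonstratives in a row, while storing as little as possible, is to keep only a short, bounded history of recent gaze targets. The sketch below is a hypothetical illustration of that idea, not the lab’s approach.

```python
# Hypothetical sketch: retain only a short, bounded history of recent gaze targets so
# that "What's more expensive, this or this?" can map each "this" to something the
# user recently looked at, while limiting how much data is ever stored.
from collections import deque
import re

recent_gaze_targets = deque(maxlen=5)  # only the last few targets are retained

def on_gaze(target: str) -> None:
    recent_gaze_targets.append(target)

def resolve_demonstratives(utterance: str) -> list:
    """Pair each 'this'/'that' in the utterance with the most recent gaze targets."""
    count = len(re.findall(r"\b(this|that)\b", utterance, flags=re.IGNORECASE))
    return list(recent_gaze_targets)[-count:] if count else []

on_gaze("a blue ceramic mug")
on_gaze("a stainless steel travel mug")
print(resolve_demonstratives("What's more expensive, this or this?"))
# -> ['a blue ceramic mug', 'a stainless steel travel mug']
```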

This is difficult even for humans. I can ask, “Can you explain that?” while pointing at several equations on a whiteboard and you won’t know which I’m referring to. What applications do you see for this?

JL: Being able to use natural language would be major. But if you expand this to accessibility, there’s the potential for a blind or low-vision person to use this to describe what’s around them. The question “Is anything dangerous in front of me?” is also ambiguous for a voice assistant. But with GazePointAR, ideally, the system could say, “There are possibly dangerous objects, such as knives and scissors.” Or low-vision people might make out a shape, point at it, then ask the system what “it” is more specifically.

And finally, you’re working on a system called ARTennis. What is it and what prompted this research?

JL: This is looking even further into the future than GazePointAR. ARTennis is a prototype that uses an AR headset to make tennis balls more salient for low-vision players: the ball in play is marked with a red dot and surrounded by a crosshair of green arrows. Professor Jon Froehlich has a family member who wants to play sports with his children but doesn’t have the residual vision necessary to do so. We thought that if it works for tennis, it will work for many other sports, since a tennis ball is small and appears even smaller as it moves farther away. If we can track a tennis ball in real time, we can do the same with a bigger, slower basketball.

One of the co-authors on the paper is low vision himself and plays a lot of squash, so he wanted to try the application and give us feedback. We did a lot of brainstorming sessions with him, and he tested the system. The red dot and green crosshair is the design he came up with to improve the sense of depth perception.
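
As a rough sketch of that overlay geometry, the snippet below places a red dot on the tracked ball and four green arrows around it. The coordinates, offsets and data structures are illustrative assumptions; the real system renders these cues on a HoloLens 2.

```python
# Hypothetical sketch of the visual overlay: a red dot on the tracked ball plus four
# green arrows forming a crosshair around it. Pixel coordinates and the 40-pixel
# offset are arbitrary choices for illustration.
from dataclasses import dataclass

@dataclass
class OverlayElement:
    kind: str   # "dot" or "arrow"
    color: str
    x: float
    y: float

def build_overlay(ball_x: float, ball_y: float, offset: float = 40.0):
    """Return the overlay elements to draw for a ball at (ball_x, ball_y)."""
    elements = [OverlayElement("dot", "red", ball_x, ball_y)]
    for dx, dy in [(-offset, 0), (offset, 0), (0, -offset), (0, offset)]:
        elements.append(OverlayElement("arrow", "green", ball_x + dx, ball_y + dy))
    return elements

for element in build_overlay(640, 360):
    print(element)
```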

What’s keeping this from being something people can use right away?

JL: Well, like GazePointAR, it relies on a HoloLens 2 headset that costs $3,500, so that’s a different kind of accessibility issue. It also runs at roughly 25 frames per second, and for humans to perceive it as real time it needs to be about 30 frames per second, so sometimes we can’t capture the speed of the tennis ball. We’re going to expand the paper and include basketball to see if there are different designs people prefer for different sports. The technology will certainly get faster. So our question is: What will the best design be for the people using it?
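
For a sense of what those frame rates mean in practice, a quick back-of-the-envelope calculation shows how far a ball travels between display updates. The 20 m/s rally speed is an assumed example, not a measurement from the study.

```python
# Back-of-the-envelope: how far a ball travels between frames at different frame rates.
# The 20 m/s rally speed is an illustrative assumption, not a figure from the paper.
ball_speed_m_per_s = 20.0

for fps in (25, 30):
    frame_time_ms = 1000.0 / fps
    travel_m = ball_speed_m_per_s * (frame_time_ms / 1000.0)
    print(f"{fps} fps: {frame_time_ms:.0f} ms per frame, "
          f"ball moves ~{travel_m:.2f} m between updates")
```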

For more information, contact Jon Froehlich at jonf@cs.washington.edu.

New wearable sensor makes continuous analysis of sweat possible, researchers say

Penn State researchers developed a new wearable sensor to monitor glucose levels in sweat over multiple weeks. CREDIT Kate Myers/Penn State

Continuous monitoring of sweat can reveal valuable information about human health, such as the body’s glucose levels. However, according to Penn State researchers, wearable sensors previously developed for this purpose have fallen short, unable to withstand the rigors of continuous monitoring or achieve the specificity it requires. Now, the research team has created a novel wearable patch that may be up to the task.

Made with a laser-modified graphene nanocomposite material, the device can detect specific glucose levels in sweat for three weeks while simultaneously monitoring body temperature and pH levels, the researchers reported in Advanced Functional Materials.

“Sweat is ideal for real-time, continuous and noninvasive biomarker detection,” said principal investigator Huanyu “Larry” Cheng, the James L. Henderson, Jr. Memorial Associate Professor of Engineering Science and Mechanics (ESM) at Penn State. “But low biomarker concentration levels in sweat and variability of other factors such as pH, salinity and temperature have pushed previous sweat biosensors past the limits of their detection and accuracy. This device is able to account for this variability while measuring glucose with needed specificity for weeks at a time.”

Cheng and his colleagues recognized from their previous sensor studies and work conducted by other researchers that laser-induced graphene (LIG) electrodes — electrodes fabricated with a nanomaterial constructed in a single step with laser scribing — could offer a promising starting point to develop a more effective wearable sweat sensor. Despite limitations due to low sensitivity to glucose and a limited surface area for the necessary electrochemistry, Cheng said, LIG electrodes are simple to fabricate, affordable and flexible. 

Working at the nanoscale, the researchers reported using a simple laser treatment to create a stable, 3D network of highly conductive noble metal alloys — gold and silver in this case — and carbon-based nanocomposite materials on the porous LIG electrode. Noble metals are not only highly conductive but are also resistant to oxidation, Cheng said.

Heating the gold and silver alloy nanocomposite material with a simple laser treatment also makes it resistant to agglomeration, Cheng said. Agglomeration is a common phenomenon in which nanoparticles coalesce into clusters, limiting the material’s surface area.

“Glucose on the surface of the modified LIG electrode oxidizes at lower potential,” said first author Farnaz Lorestani, ESM postdoctoral scholar. “This oxidation generates a measurable current or potential change that is directly proportional to the overall glucose concentration in the solution. We also see far greater stability over time, with the laser-treated sensor losing only 9% of its sensitivity over three weeks compared to 20% sensitivity loss for a sensor without laser treatment.” 

In addition to measuring glucose, the modified LIG electrode responded to changes in pH levels, too, according to the researchers. To fabricate the wearable device, they combined the dual glucose and pH sensor with another LIG-based temperature sensor and a stretchable layer with coil-shaped microfluidic channels to continuously collect and route sweat for sampling.

The device allows for the calibration of glucose measurements based on fluctuations in sweat pH and body temperature from activities such as exercise and eating, Cheng said. Worn as a patch roughly twice the width of a postage stamp and affixed to the skin with adhesive tape, it can wirelessly communicate its collected data to a computer or mobile device for real-time monitoring and analysis.
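
As a toy illustration of that kind of software calibration, the sketch below converts a measured current to a glucose estimate and then applies simple linear corrections for pH and temperature. The linear form and every coefficient are made-up assumptions, not the device’s actual model.

```python
# Hypothetical sketch: convert a measured sensor current to a glucose estimate, then
# apply simple linear corrections for sweat pH and skin temperature. Every coefficient
# and the linear form itself are made up for illustration; they are not the device's model.

def glucose_from_current(current_uA: float, sensitivity_uA_per_mM: float = 2.0) -> float:
    """Current is treated as roughly proportional to glucose concentration (in mM)."""
    return current_uA / sensitivity_uA_per_mM

def compensate(glucose_mM: float, ph: float, temp_c: float,
               ref_ph: float = 6.3, ref_temp_c: float = 33.0,
               k_ph: float = 0.05, k_temp: float = 0.02) -> float:
    """Adjust the raw estimate toward its value at reference pH and temperature."""
    correction = 1.0 + k_ph * (ph - ref_ph) + k_temp * (temp_c - ref_temp_c)
    return glucose_mM / correction

raw = glucose_from_current(current_uA=0.9)
print(f"raw: {raw:.2f} mM, compensated: {compensate(raw, ph=5.8, temp_c=35.0):.2f} mM")
```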

“The result of our work is a sensor with the notable sensitivity and stability to monitor glucose levels over multiple weeks,” Cheng said. “It is a low-cost platform offering convenient, accurate and continual analysis of sweat in diverse conditions, which has great potential for individual and population health, personalized medicine and precision nutrition.” 

Do chatbot avatars prompt bias in health care?

Matthew DeCamp, MD, Ph.D., and other University of Colorado School of Medicine researchers are shining a light on artificial intelligence’s role — and appearance — in health care.

Chatbots are increasingly becoming a part of health care around the world, but do they encourage bias? That’s what University of Colorado School of Medicine researchers are asking as they dig into patients’ experiences with the artificial intelligence (AI) programs that simulate conversation.

“Sometimes overlooked is what a chatbot looks like – its avatar,” the researchers write in a new paper published in Annals of Internal Medicine. “Current chatbot avatars vary from faceless health system logos to cartoon characters or human-like caricatures. Chatbots could one day be digitized versions of a patient’s physician, with that physician’s likeness and voice. Far from an innocuous design decision, chatbot avatars raise novel ethical questions about nudging and bias.”

The paper, titled “More than just a pretty face? Nudging and bias in chatbots,” challenges researchers and health care professionals to closely examine chatbots through a health equity lens and investigate whether the technology truly improves patient outcomes.

In 2021, the Greenwall Foundation granted CU Division of General Internal Medicine Associate Professor Matthew DeCamp, MD, PhD, and his team of researchers in the CU School of Medicine funds to investigate ethical questions surrounding chatbots. The research team also included internal medicine professor Annie Moore, MD, MBA, the Joyce and Dick Brown Endowed Professor in Compassion in the Patient Experience; incoming medical student Marlee Akerson; and UCHealth Experience and Innovation Manager Matt Andazola.

“If chatbots are patients’ so-called ‘first touch’ with the health care system, we really need to understand how they experience them and what the effects could be on trust and compassion,” Moore says.

So far, the team has surveyed more than 300 people and interviewed 30 others about their interactions with health care-related chatbots. For Akerson, who led the survey efforts, it’s been her first experience with bioethics research.

“I am thrilled that I had the chance to work at the Center for Bioethics and Humanities, and even more thrilled that I can continue this while a medical student here at CU,” she says.

The face of health care

The researchers observed that chatbots were becoming especially common around the COVID-19 pandemic.

“Many health systems created chatbots as symptom-checkers,” DeCamp explains. “You can go online and type in symptoms such as cough and fever and it would tell you what to do. As a result, we became interested in the ethics around the broader use of this technology.”

Oftentimes, DeCamp says, chatbot avatars are thought of as a marketing tool, but their appearance can have a much deeper meaning.

“One of the things we noticed early on was this question of how people perceive the race or ethnicity of the chatbot and what effect that might have on their experience,” he says. “It could be that you share more with the chatbot if you perceive the chatbot to be the same race as you.”

For DeCamp and the team of researchers, it prompted many ethical questions, like how health care systems should be designing chatbots and whether a design decision could unintentionally manipulate patients.

“There does seem to be evidence that people may share more information with chatbots than they do with humans, and that’s where the ethics tension comes in: We can manipulate avatars to make the chatbot more effective, but should we? Does it cross a line around overly influencing a person’s health decisions?” DeCamp says.

A chatbot’s avatar might also reinforce social stereotypes. Chatbots that exhibit feminine features, for example, may reinforce biases about women’s roles in health care.

On the other hand, an avatar may also increase trust among some patient groups, especially those that have been historically underserved and underrepresented in health care, if those patients are able to choose the avatar they interact with.

“That’s more demonstrative of respect,” DeCamp explains. “And that’s good because it creates more trust and more engagement. That person now feels like the health system cared more about them.”

Marketing or nudging?

While there’s little evidence currently, there is a hypothesis emerging that a chatbot’s perceived race or ethnicity can impact patient disclosure, experience, and willingness to follow health care recommendations.

“This is not surprising,” the CU researchers write in the Annals paper. “Decades of research highlight how patient-physician concordance according to gender, race, or ethnicity in traditional, face-to-face care supports health care quality, patient trust, and satisfaction. Patient-chatbot concordance may be next.”

That’s enough reason to scrutinize the avatars as “nudges,” they say. Nudges are typically defined as low-cost changes in a design that influence behavior without limiting choice. Just as a cafeteria putting fruit near the entrance might “nudge” patrons to pick up a healthier option first, a chatbot could have a similar effect.

“A patient’s choice can’t actually be restricted,” DeCamp emphasizes. “And the information presented must be accurate. It wouldn’t be a nudge if you presented misleading information.”

In that way, the avatar can make a difference in the health care setting, even if the nudges aren’t harmful.

DeCamp and his team urge the medical community to use chatbots to promote health equity and recognize the implications they may have so that the artificial intelligence tools can best serve patients.

“Addressing biases in chatbots will do more than help their performance,” the researchers write. “If and when chatbots become a first touch for many patients’ health care, intentional design can promote greater trust in clinicians and health systems broadly.”