Researchers Develop AI-Boosted Virtual Reality System to Help Autistic Students Improve Social Skills

Extended reality-capable iKNOW system to broaden benefits already validated by research

Over the past decade, researchers at the University of Kansas (KU) have developed an innovative virtual reality (VR) system aimed at helping students with disabilities, particularly those with autism, enhance their social skills. Now the project is set to expand with the addition of artificial intelligence (AI) components, thanks to a five-year, $2.5 million grant from the U.S. Office of Special Education Programs.

The project, titled Increasing Knowledge and Natural Opportunities With Social Emotional Competence (iKNOW), aims to create an extended reality (XR) experience that offers students more natural and immersive social interactions. Building on the success of the existing VR system known as Virtual reality Opportunity to Integrate Social Skills (VOISS), which has already proven effective in improving social skills for students with disabilities, iKNOW will integrate advanced AI technologies to enhance these capabilities further.

The original VOISS system includes 140 unique learning scenarios designed to teach 183 social skills in virtual school environments, such as classrooms, hallways, cafeterias, and buses. These scenarios are accessible through various platforms, including iPads, Chromebooks, and Oculus VR headsets, allowing students to practice social skills across multiple settings.

Introducing AI to Enhance Realism and Interaction

The new iKNOW system will leverage large language models and AI technologies to enable more natural user interactions. Instead of responding to pre-recorded narratives by pushing buttons, students can engage in real-time conversations, with AI accurately transcribing spoken language and generating appropriate video responses from avatars. This approach aims to make social skill practice more relatable and effective.
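The pipeline described here — speech in, transcription, an in-character reply out — can be sketched at a high level. The Python below is a minimal, hypothetical outline: the stub functions, persona fields, and prompt wording are invented for illustration and are not the iKNOW team’s actual architecture or vendor stack.

```python
# Hypothetical sketch of the interaction loop described above: transcribe the
# student's speech, then have an LLM answer in character as the avatar.

from dataclasses import dataclass, field

@dataclass
class AvatarPersona:
    """Constraints that keep the avatar modeling the target behavior."""
    name: str
    scenario: str      # e.g., "cafeteria: joining a lunch table"
    target_skill: str  # e.g., "initiating a conversation"
    history: list = field(default_factory=list)

def transcribe(audio: bytes) -> str:
    """Stand-in for any speech-to-text service (e.g., a Whisper-class model)."""
    raise NotImplementedError("wire up an ASR provider here")

def llm_complete(system_prompt: str, messages: list) -> str:
    """Stand-in for any chat-style LLM API."""
    raise NotImplementedError("wire up an LLM provider here")

def avatar_turn(persona: AvatarPersona, audio: bytes) -> str:
    """One conversational turn: student audio in, avatar reply text out."""
    persona.history.append({"role": "user", "content": transcribe(audio)})
    system_prompt = (
        f"You are {persona.name}, a middle-school peer in this scene: "
        f"{persona.scenario}. Model the skill '{persona.target_skill}' "
        "in short, natural, age-appropriate replies."
    )
    reply = llm_complete(system_prompt, persona.history)
    persona.history.append({"role": "assistant", "content": reply})
    return reply  # downstream, this text would drive the avatar's voice and video
```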

Amber Rowland, an assistant research professor at KU’s Center for Research on Learning and one of the grant’s co-principal investigators, explained, “Avatars in iKNOW can have certain reactions and behaviors based on what we want them to do. They can model the practices we want students to see. The system will harness AI to ensure students have more natural interactions and put them in the role of the ‘human in the loop’ by allowing them to speak, and it will respond like a normal conversation.”

Monitoring Progress and Ensuring Realism

iKNOW will also include a real-time student progress monitoring system, providing valuable feedback to students, educators, and families. This system will track metrics such as the length and frequency of spoken responses, the number of keywords used, and areas where students may have struggled, helping to enhance overall understanding and progress.
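As a rough illustration of the kind of metrics such a monitor might compute, here is a minimal Python sketch; the metric names, keyword list, and transcript format are invented for the example rather than taken from the iKNOW specification.

```python
# Toy per-session summary of the metrics named above: number of responses,
# response length, and target-keyword usage. All inputs are hypothetical.

import re
from statistics import mean

def session_metrics(turns: list[str], keywords: list[str]) -> dict:
    """Summarize a student's transcribed spoken turns from one session."""
    word_counts = [len(t.split()) for t in turns]
    lowered = " ".join(turns).lower()
    keyword_hits = {k: len(re.findall(rf"\b{re.escape(k)}\b", lowered))
                    for k in keywords}
    return {
        "responses": len(turns),                       # frequency of responses
        "avg_words_per_response": mean(word_counts) if turns else 0,
        "keyword_hits": keyword_hits,                  # target vocabulary used
    }

print(session_metrics(
    ["Hi, can I sit here?", "Thanks! How was your weekend?"],
    keywords=["thanks", "please"],
))
# {'responses': 2, 'avg_words_per_response': 5, 'keyword_hits': {'thanks': 1, 'please': 0}}
```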

To ensure a realistic experience, all avatar voices in iKNOW are provided by real middle school students, educators, and administrators. This approach spares students the potential discomfort of practicing social skills with classmates in supervised sessions, allowing them to practice until they are confident enough to transfer these skills to real-life situations.

Benefits for Teachers and Students

Maggie Mosher, an assistant research professor at KU’s Achievement & Assessment Institute and co-principal investigator for the grant, emphasized the system’s potential to lighten teachers’ workloads and provide practical student tools. “It will leverage our ability to take something off of teachers’ plates and provide tools for students to learn these skills in multiple environments,” Mosher said.

Mosher’s doctoral dissertation compared VOISS with other social skills interventions and found that it produced statistically significant improvements in social skills and knowledge across multiple domains. This research, published in high-impact journals, underscores the system’s effectiveness.

Future Prospects and Collaboration

The iKNOW project, supported by one of four OSEP Innovation and Development grants, will be presented at the annual I/ITSEC conference, the world’s largest modeling, simulation, and training event. The research team, led by principal investigator Sean Smith, professor of special education, will showcase its work at the conference.

The team has successfully implemented VOISS in schools nationwide, and iKNOW is set to expand its reach. Interested parties can learn more about the system, access demonstrations and videos, and contact developers through the iKNOW website.

Additionally, iKNOW will provide resources for teachers and families through a dedicated website called iKNOW TOOLS (Teaching Occasions and Opportunities for Learning Supports), supporting the generalization of social skills across real-world settings.

“By combining our research-based social emotional virtual reality work (VOISS) with the increasing power and flexibility of AI, iKNOW will further personalize the learning experience for individuals with disabilities along with struggling classmates,” said Sean Smith. “Our hope and expectation is that iKNOW will further engage students to develop the essential social emotional skills to then apply in the real world to improve their overall learning outcomes.”

How AI Could Transform ADHD Care Through Medical Chart Review

Stanford Medicine researchers uncover how AI can streamline ADHD treatment tracking.

In a new study, Stanford Medicine researchers used artificial intelligence (AI) to comb through thousands of doctors’ notes in electronic medical records (EMRs), revealing trends that could improve care for children with attention deficit hyperactivity disorder (ADHD). The approach promises to relieve researchers and clinicians of tedious medical chart reviews, enabling them to focus on improving patient outcomes.

The study, published on December 19 in Pediatrics, demonstrates how large language models (LLMs) can efficiently detect gaps in ADHD management and suggest improvements. “This model enables us to identify some gaps in ADHD management,” said lead author Yair Bannett, MD, assistant professor of pediatrics at Stanford Medicine. Senior author Heidi Feldman, MD, added that the insights gained could be applied broadly across healthcare.

AI Revolutionizing ADHD Follow-Up Care

Medical records often contain critical information buried in freeform notes, making it challenging for researchers to identify patterns. For children with ADHD, proper follow-up care after starting medication is essential to monitor side effects and adjust dosages. However, manually reviewing thousands of notes is time-consuming and prone to human error.

The Stanford team trained an AI tool to analyze 15,628 notes from the medical records of 1,201 children aged 6 to 11. These children, treated across 11 pediatric practices, had been prescribed ADHD medications that can cause side effects like appetite suppression. The AI was tasked with identifying whether follow-up inquiries about side effects occurred within the first three months of medication use.

By training the model on 501 human-reviewed notes, the researchers achieved 90% accuracy in classifying follow-up mentions. This AI-driven approach let them complete a review that would otherwise have required more than seven months of full-time manual work.
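The article does not specify the model internals, but the task shape is a familiar one: binary classification of notes (documents a side-effect follow-up, or not), trained on human-labeled examples and checked for held-out accuracy. As a purely illustrative stand-in — the Stanford team used a large language model, not this pipeline — here is a classical baseline on invented toy data:

```python
# Illustrative baseline for the task shape described above: classify clinical
# notes as mentioning a medication side-effect follow-up (1) or not (0).
# The notes below are invented; real input would be de-identified EMR text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

notes = [
    "Phone call with parent: reports decreased appetite since starting stimulant.",
    "Well-child visit. Immunizations up to date. No medication discussion.",
    "Follow-up for ADHD med: sleep okay, appetite reduced, dose unchanged.",
    "Sports physical completed. Cleared for participation.",
] * 25
labels = [1, 0, 1, 0] * 25  # 1 = note documents a side-effect follow-up

train_x, test_x, train_y, test_y = train_test_split(
    notes, labels, test_size=0.2, random_state=0, stratify=labels
)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_x, train_y)
print(f"held-out accuracy: {accuracy_score(test_y, clf.predict(test_x)):.2f}")
```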

Key Findings: Insights Beyond Human Reach

The AI model uncovered patterns that manual reviews could not. For instance, it revealed that some pediatric practices frequently conducted follow-ups via phone calls, while others did not. It also showed that doctors were less likely to ask about side effects for non-stimulant ADHD medications compared to stimulants.

“These are insights you wouldn’t detect without deploying AI across thousands of notes,” Bannett said. However, the model also highlighted its own limitations. While it identified patterns, it couldn’t explain the reasons behind them — a task that required input from pediatricians.

Challenges and Ethical Considerations

The researchers noted some limitations. The AI might have missed follow-ups recorded outside the EMRs or misclassified notes on medications unrelated to ADHD. Despite these challenges, the study highlights the importance of guiding AI tools with human expertise.

“AI is ideal for sorting through vast amounts of medical data, but ethical considerations and disparities in healthcare must remain front and center,” Bannett said. In a recent editorial, he and his colleagues emphasized the need to address biases in AI models trained on existing healthcare data.

A Vision for Personalized ADHD Care

Looking ahead, AI could help doctors make more personalized decisions for ADHD management. By analyzing large populations, AI might predict which patients are at risk of specific side effects based on age, race, genetic profile, and other factors. This capability could transform ADHD care, making it more precise and patient-centered.

“Each patient has their own experience, and with AI, we can complement that with the knowledge of large populations,” Bannett said. While the potential is immense, ensuring responsible AI deployment will be key to unlocking its full benefits for ADHD care and beyond.

This study marks a significant step toward integrating AI into routine medical care, offering a glimpse of a future where technology enhances clinicians’ ability to provide better, more tailored care.