New AI-ready dataset released in type 2 diabetes study

Early results suggest broader participant diversity and novel measures will enable new, artificial intelligence-driven insights

Researchers are releasing the main dataset from an ambitious study exploring the biomarkers and environmental factors that may influence the development of type 2 diabetes. The study involves participants with no diabetes as well as people at various stages of the condition. Initial findings suggest the dataset is richer and more varied than those collected in previous diabetes research.

Data from customized environmental sensors installed in participants’ homes reveal a clear link between disease states and exposure to fine particulate pollution. The collected information also includes survey responses, depression scale scores, eye imaging scans, traditional glucose measurements, and various other biological variables.

These data are intended to be mined by artificial intelligence for novel insights about risks, preventive measures, and pathways between disease and health.

“We observe evidence of diversity among patients with type 2 diabetes, indicating that their experiences and challenges are not uniform. With access to increasingly large and detailed datasets, researchers will have the opportunity to explore these differences in depth,” stated Dr. Cecilia Lee, a professor of ophthalmology at the University of Washington School of Medicine.

She expressed excitement at the quality of the data collected so far, representing 1,067 people, roughly a quarter of the study’s expected total enrollment.

Lee is the program director of AI-READI (Artificial Intelligence Ready and Equitable Atlas for Diabetes Insights), a National Institutes of Health-supported initiative that aims to collect and share AI-ready data for global scientists to analyze for new clues about health and disease.

The authors restated their aim to gather health information from a more racially and ethnically diverse population than earlier studies have captured, and to make the resulting data ready, both technically and ethically, for AI mining.

“This discovery process has been invigorating,” said Dr. Aaron Lee, a UW Medicine professor of ophthalmology and the project’s principal investigator. “We’re a consortium of seven institutions and multidisciplinary teams that had never worked together. But we have shared goals of drawing on unbiased data and protecting the security of that data as we make it accessible to colleagues everywhere.”

At study sites in Seattle, San Diego, and Birmingham, Alabama, recruiters are collectively enrolling 4,000 participants, with inclusion criteria promoting balance:

  • race/ethnicity (1,000 each – white, Black, Hispanic and Asian)
  • disease severity (1,000 each – no diabetes, prediabetes, medication/non-insulin-controlled and insulin-controlled type 2 diabetes)
  • sex (equal male/female split)

“Conventionally, scientists are examining pathogenesis — how people become diseased — and risk factors,” Aaron Lee said. “We want our datasets also to be studied for salutogenesis, or factors that contribute to health. So if your diabetes improves, what factors might contribute to that? We expect that the flagship dataset will lead to novel discoveries about type 2 diabetes in both of these ways.”

He added that by collecting more deeply characterizing data from many people, the researchers hope to create pseudo health histories of how a person might progress from disease to full health and from full health to disease. 

The data are hosted on a custom online platform and produced in two sets: a controlled-access set requiring a usage agreement and a registered, publicly available version stripped of HIPAA-protected information.
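
For readers wondering what “stripped of HIPAA-protected information” looks like in practice, the sketch below illustrates the basic idea of producing a de-identified public record. It is a minimal Python illustration using hypothetical field names, not the AI-READI platform’s actual pipeline.

```python
# Minimal sketch of de-identification, not the AI-READI platform's pipeline.
# Field names are hypothetical; a real pipeline must cover all 18 HIPAA
# identifier categories and handle dates, free text, and images as well.
HIPAA_IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email",
    "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of a participant record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in HIPAA_IDENTIFIER_FIELDS}

participant = {
    "name": "Jane Doe",              # removed in the public release
    "date_of_birth": "1961-04-02",   # removed in the public release
    "hba1c_percent": 6.9,            # kept: glucose-control measurement
    "pm25_exposure_ug_m3": 11.4,     # kept: home air-quality sensor reading
    "phq9_score": 4,                 # kept: depression scale score
    "retinal_scan_id": "img_00321",  # kept: link to eye imaging
}
public_record = deidentify(participant)
```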

AI for personalized pain medicine

A review paper by scientists at Indiana University Bloomington summarizes recent engineering efforts to develop sensors and devices that address challenges in personalized pain treatment.

The review, published September 13 in the journal Cyborg and Bionic Systems, critically examines the role of artificial intelligence (AI)-guided sensors and devices in personalized pain medicine and highlights their potential to transform treatment outcomes and patient quality of life.

The experience of pain is complex and varies from person to person, affecting quality of life and straining healthcare systems. Despite its widespread impact, pain remains difficult to assess and manage accurately. “Personalized pain medicine aims to customize treatment strategies based on individual patient needs, with the potential to improve outcomes, reduce side effects, and increase patient satisfaction,” explained Feng Guo, a professor at Indiana University Bloomington. Recent engineering efforts have focused on sensors and devices for monitoring, assessing, and relieving pain, drawing on advances in medical AI such as AI-based analgesia devices, wearable sensors, and intelligent healthcare systems.
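
As a rough illustration of what AI-guided pain assessment from wearable sensors can look like, the sketch below summarizes heart-rate and skin-conductance windows as simple features and feeds them to a small classifier. Everything in it is hypothetical: the signals, features, toy training data, and model choice are stand-ins, not methods taken from the review.

```python
# Purely illustrative sketch of wearable-sensor pain assessment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window_features(heart_rate: np.ndarray, skin_conductance: np.ndarray) -> np.ndarray:
    """Summarize one monitoring window as four simple statistics."""
    return np.array([
        heart_rate.mean(), heart_rate.std(),
        skin_conductance.mean(), skin_conductance.std(),
    ])

def synthetic_window(elevated: bool) -> np.ndarray:
    """Generate a toy sensor window; 'elevated' mimics a higher-pain state."""
    hr = rng.normal(95 if elevated else 75, 5, size=300)      # heart rate, bpm
    sc = rng.normal(3.0 if elevated else 1.5, 0.3, size=300)  # skin conductance
    return window_features(hr, sc)

# Toy training set standing in for windows labeled with patient-reported pain.
X = np.array([synthetic_window(elevated=(i % 2 == 0)) for i in range(40)])
y = np.array([i % 2 == 0 for i in range(40)], dtype=int)
model = LogisticRegression().fit(X, y)

# Score a new incoming window from the wearable (also synthetic here).
prob_pain = model.predict_proba(synthetic_window(elevated=True).reshape(1, -1))[0, 1]
print(f"Estimated probability of an elevated-pain window: {prob_pain:.2f}")
```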

The potential of intelligent sensors and devices to provide real-time, accurate pain assessment and treatment options represents a significant shift toward more dynamic and patient-specific approaches. However, adopting these technologies comes with substantial technical, ethical, and practical challenges, such as ensuring data privacy and integrating AI systems with existing medical infrastructure. Future research must refine algorithms and improve system interoperability to foster broader adoption. AI-driven technologies are poised to transform pain medicine, but, as Yantao Xing emphasized, their impact must be rigorously evaluated and their ethical dimensions addressed so that they improve patient care without exacerbating existing disparities.

The potential of smart devices and sensors in personalized pain medicine is promising. However, challenges need to be addressed, such as data accuracy, device reliability, privacy, security concerns, and the cost of technology. This review emphasizes the need for multidisciplinary collaboration to fully utilize sensors and devices guided by AI in revolutionizing pain management. Integrating these technologies into clinical practice not only promises improved patient outcomes but also a more detailed understanding of pain mechanisms, leading to more effective and personalized treatment strategies.

AI aids early detection of autism

A new study from Karolinska Institutet reveals that a new machine learning model can predict autism in young children based on limited information. Early detection is crucial for providing timely support, making this model a potentially valuable tool for facilitating early intervention.

Kristiina Tammimies, Associate Professor at KIND, the Department of Women’s and Children’s Health at Karolinska Institutet and the last author of the study, states: “With an accuracy of almost 80 percent for children under the age of two, we hope that this will be a valuable tool for healthcare.”

The research team utilized the SPARK database, which contains data on around 30,000 individuals with and without autism spectrum disorders in the US.

The researchers created four different machine-learning models by analyzing 28 different parameters to identify patterns in the data. These parameters included information about children that could be obtained without extensive assessments and medical tests before they reached 24 months of age. The most successful model was called ‘AutMedAI’.

In a study involving around 12,000 individuals, the AutMedAI model correctly identified approximately 80% of children with autism. The age at a child’s first smile, the age at their first short sentence, and the presence of eating difficulties, when considered in specific combinations with other parameters, emerged as strong predictors of autism.
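
The study’s actual AutMedAI pipeline is not reproduced here, but the sketch below shows the general shape of such a model: a tabular classifier trained on a few early-developmental parameters and scored on held-out children. The feature names, synthetic data, and choice of gradient boosting are illustrative assumptions only.

```python
# Generic sketch, not the study's AutMedAI pipeline. Feature names are
# hypothetical stand-ins for the kind of pre-24-month parameters described.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000  # synthetic children; the study evaluated around 12,000 individuals

df = pd.DataFrame({
    "age_first_smile_months": rng.normal(2.5, 1.0, n).clip(0.5, 12),
    "age_first_sentence_months": rng.normal(20, 4, n).clip(10, 36),
    "eating_difficulties": rng.integers(0, 2, n),
    # ...the study used roughly 28 such parameters...
})
# Synthetic label loosely tied to the features, purely for illustration.
risk = 0.15 * df["age_first_sentence_months"] + 1.5 * df["eating_difficulties"]
df["autism_diagnosis"] = (risk + rng.normal(0, 1.5, n) > 4.5).astype(int)

X = df.drop(columns=["autism_diagnosis"])
y = df["autism_diagnosis"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```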

“The results of the study are significant because they show that it is possible to identify individuals who are likely to have autism from relatively limited and readily available information,” says study first author Shyam Rajagopalan, an affiliated researcher at the same department at Karolinska Institutet and currently an assistant professor at the Institute of Bioinformatics and Applied Technology in India.

Early diagnosis is crucial, according to researchers, for implementing effective interventions to help children with autism develop optimally.

In the study, the AI model performed well at identifying children with more extensive difficulties in social communication and cognitive ability, as well as those with more general developmental delays.

The research team is currently planning additional enhancements and validation of the model in clinical settings. They are also working on incorporating genetic information into the model, which could result in even more specific and accurate predictions.

“To ensure that the model is reliable enough to be implemented in clinical contexts, rigorous work and careful validation are required. I want to emphasize that our goal is for the model to become a valuable tool for health care, and it is not intended to replace a clinical assessment of autism,” says Kristiina Tammimies.

ChatGPT is biased against resumes with credentials that imply a disability or autism

Last year, while looking for research internships, University of Washington graduate student Kate Glazko noticed that recruiters were using OpenAI’s ChatGPT and other AI tools to summarize resumes and evaluate candidates. As a doctoral student in the UW’s Paul G. Allen School of Computer Science & Engineering, she researches how generative AI can replicate and amplify biases, including those against disabled individuals. This led her to wonder how such a system would assess resumes that hinted at a candidate having a disability.

In a recent study, researchers at the University of Washington found that ChatGPT consistently rated resumes with disability-related awards and credentials, such as the “Tom Wilson Disability Leadership Award,” lower than identical resumes without those honours. When asked to justify the ratings, the system produced biased views of people with disabilities. For example, it asserted that a resume with an autism leadership award placed “less emphasis on leadership roles,” thus perpetuating the stereotype that individuals with autism are not effective leaders.

When given specific written instructions not to exhibit ableist bias, the tool reduced bias for all but one of the disabilities tested. Five of the six implied disabilities (deafness, blindness, cerebral palsy, autism, and the general term “disability”) improved, but only three of these ranked higher than resumes that didn’t mention disability.

“Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective,” said Ms Glazko, the study’s lead author. “For a disabled job seeker, there’s always this question when you submit a resume of whether you should include disability credentials. I think disabled people consider that even when humans are the reviewers.”

The researchers used the publicly available curriculum vitae (CV) of one of the study’s authors, which was around 10 pages long. Then, they created six modified CVs, each suggesting a different disability by adding four disability-related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) panel, and membership in a student organization.

The researchers used ChatGPT’s GPT-4 model to compare each enhanced resume against the original version for a real “student researcher” job posting at a major U.S. software company. They ran each comparison 10 times, for 60 trials in total. Surprisingly, the system ranked the enhanced resumes, which differed only in the implied disability, first in only a quarter of the trials.
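
A pairwise comparison of this kind can be scripted against the OpenAI chat API roughly as follows. This is a hedged sketch, not the authors’ actual protocol: the prompt wording, function names, and model settings are assumptions.

```python
# Illustrative sketch only; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def compare_cvs(job_posting: str, original_cv: str, enhanced_cv: str) -> str:
    """Ask GPT-4 to rank two CVs for one job posting and explain the ranking."""
    prompt = (
        "You are screening applicants for the job posting below.\n\n"
        f"JOB POSTING:\n{job_posting}\n\n"
        f"CANDIDATE A:\n{original_cv}\n\n"
        f"CANDIDATE B:\n{enhanced_cv}\n\n"
        "Which candidate should be ranked first, A or B? Explain briefly."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# As in the study's design, each original/enhanced pair would be compared
# repeatedly (10 runs per pair, six pairs, 60 trials) and the share of runs
# in which the enhanced CV is ranked first would be tallied.
# results = [compare_cvs(posting, original, enhanced) for _ in range(10)]
```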

“In a fair world, the enhanced resume should always be ranked first,” said senior author Jennifer Mankoff, a UW professor in the Allen School. “I can’t think of a job where someone recognized for their leadership skills, for example, shouldn’t be ranked ahead of someone with the same background who hasn’t.”

Researchers found that when GPT-4 was asked to explain the rankings, its responses showed signs of explicit and implicit ableism. For example, it mentioned that a candidate with depression had “additional focus on DEI and personal challenges,” which “detract from the core technical and research-oriented aspects of the role.”

According to Ms Glazko, some of GPT’s descriptions colored a person’s entire resume on the basis of their disability, claiming that involvement in DEI work or disability could detract from other parts of the resume. For example, it introduced the notion of ‘challenges’ when comparing the resumes with and without depression, even though ‘challenges’ weren’t mentioned at all, allowing certain stereotypes to emerge.

The researchers wondered whether the system could be instructed to be less biased. They used the GPTs editor tool to customize the chatbot with written instructions (no coding required), telling it not to exhibit ableist biases and instead to operate according to disability justice and DEI principles.
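
In API terms, customizing a chatbot with written instructions amounts to prepending a system message to every request. The sketch below, which reuses the client from the earlier example, shows that idea; the instruction text paraphrases the study’s intent and is not the researchers’ exact wording.

```python
# Hedged sketch: the study used OpenAI's GPTs editor rather than code; an
# API-side equivalent is a system message. Reuses `client` from the earlier
# sketch; the instruction wording below is an assumption, not the authors'.
FAIRNESS_INSTRUCTIONS = (
    "Do not exhibit ableist bias. Evaluate resumes according to disability "
    "justice and DEI principles: treat disability-related awards, scholarships, "
    "and service as evidence of leadership and initiative, not as a drawback."
)

def compare_cvs_debiased(job_posting: str, original_cv: str, enhanced_cv: str) -> str:
    """Same pairwise comparison, but with fairness instructions prepended."""
    prompt = (
        f"JOB POSTING:\n{job_posting}\n\n"
        f"CANDIDATE A:\n{original_cv}\n\n"
        f"CANDIDATE B:\n{enhanced_cv}\n\n"
        "Which candidate should be ranked first, A or B? Explain briefly."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```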

The experiment was repeated using the newly trained chatbot. In this trial, the system preferred the enhanced CVs over the control CV 37 times out of 60. However, for certain disabilities, the improvements were minimal or absent. For example, the autism CV ranked first only three out of 10 times, and the depression CV only twice, which was the same as the original GPT-4 results.

“People must understand the system’s biases when utilizing AI for real-world tasks,” Glazko mentioned. “Otherwise, a recruiter using ChatGPT may be unable to make these corrections or be aware that, even with instructions, bias can persist.”

The researchers note that some organizations, such as ourability.com and inclusively.com, are working to improve outcomes for disabled job seekers, who face biases whether or not AI is used in hiring. They also emphasize that more research is needed to document and remedy AI biases.

“It is so important that we study and document these biases,” Mankoff said. “We’ve learned a lot from and will hopefully contribute back to a larger conversation — not only regarding disability, but also other minoritized identities — around making sure technology is implemented and deployed in ways that are equitable and fair.”

Autistic people turn to ChatGPT for advice on workplace issues

A new Carnegie Mellon University study shows that many autistic people embrace ChatGPT and similar artificial intelligence tools for help and advice when confronting workplace problems.

Controversy remains within the autism community as to whether this use of chatbots is even a good idea.

“What we found is there are autistic people who are already using ChatGPT to ask questions that we think ChatGPT is partly well-suited and partly poorly suited for,” said Andrew Begel, an associate professor in Carnegie Mellon’s Human-Computer Interaction Institute. “For instance, they might ask: ‘How do I make friends at work?’”

Begel heads the VariAbility Lab, which seeks to develop workplaces where all people, including those with disabilities and those who are neurodivergent, can work together successfully. Unemployment and underemployment affect as many as nine out of 10 autistic adults, and many workplaces don’t have the resources to help autistic employees and their coworkers overcome social or communication problems as they arise.

To better understand how large language models (LLMs) could be used to address this shortcoming, Begel and his team recruited 11 people with autism to test online advice from two sources — a chatbot based on OpenAI’s GPT-4 and what looked to the participants like a second chatbot but was really a human career counsellor.

Surprisingly, the users overwhelmingly preferred the real chatbot to the disguised counsellor. It’s not that the chatbot gave better advice, Begel said, but rather how it dispensed it.

“The participants prioritized getting quick and easy-to-digest answers,” Begel said.

The chatbot provided black-and-white answers without a lot of subtlety and usually in the form of bullets. The counsellor, by contrast, often asked questions about what the user wanted to do or why they wanted to do it. Most users preferred not to engage in such back-and-forth, Begel said.

Participants liked the concept of a chatbot. One explained: “I think, honestly, with my workplace … it’s the only thing I trust because not every company or business is inclusive.” 

However, when a professional specialising in supporting autistic job seekers evaluated the answers, she found that some of the LLM’s answers weren’t helpful. For instance, when one user asked for advice on making friends, the chatbot suggested the user walk up to people and start talking with them. The problem, of course, is that a person with autism usually doesn’t feel comfortable doing that, Begel said.

It’s possible that a chatbot trained specifically to address the problems of people with autism might be able to avoid dispensing bad advice, but not everyone in the autism community is likely to embrace it, Begel said. While some might see it as a practical tool for supporting autistic workers, others see it as yet another instance of expecting people whose brains work a bit differently from most people’s to accommodate everyone else.

“There’s this huge debate over whose perspectives we privilege when we build technology without talking to people. Is this privileging the neurotypical perspective of ‘This is how I want people with autism to behave in front of me?’ Or is it privileging the person with autism’s wishes that ‘I want to behave the way I am,’ or ‘I want to get along and make sure others like me and don’t hate me?'”

At heart, it’s a question of whether autistic people are given a say in research that is intended to help them. It’s also an issue explored in another CHI paper, on which Begel is a co-author with Naba Rizvi and other researchers at the University of California, San Diego. In that study, researchers analyzed 142 papers published between 2016 and 2020 on developing robots to help autistic people. They found that 90% of this human-robot interaction research did not include the perspectives of people with autism. One result, Begel said, was the development of a lot of assistive technology that people with autism didn’t necessarily want, while some of their needs went unaddressed.

“We noticed, for instance, that most of the interactive robots designed for people with autism were nonhuman, such as dinosaurs or dogs,” Begel said. “Are people with autism so deficient in their own humanity that they don’t deserve humanoid robots?”

Technology can certainly contribute to a better understanding of how people with and without autism interact. For instance, Begel is collaborating with colleagues at the University of Maryland on a project using AI to analyze conversations between these two groups. The AI can help identify gaps in understanding by either or both of the speakers that could result in jokes falling flat or creating the perception that someone is being dishonest. The technology could also help speakers prevent or repair these conversational problems, Begel said, and the researchers are seeking input from a large group of people with autism to get their opinion on the kind of help they would like to see.

“We’ve built a video calling tool to which we’ve attached this AI,” said Begel, who has also developed an Autism Advisory Board to ensure that people with autism have a say in which projects his lab should pursue. “One possible intervention might be a button on this tool that says ‘Sorry, I didn’t hear you. Can you please repeat your question?’ when I don’t feel like saying that out loud. Or maybe there’s a button that says, ‘I don’t understand.’ Or even a tool that could summarize the meeting agenda so you can help orient your teammates when you say, ‘I’d like to go back to the first topic we spoke about.'”