A study published in the New England Journal of Medicine found that ten years after undergoing bariatric surgery during their teenage years, more than half of the participants maintained significant weight loss. Additionally, many of these individuals showed improvements in obesity-related conditions, including type 2 diabetes, high blood pressure, and high cholesterol.
“Our study demonstrates remarkable results from the longest follow-up of weight loss surgery during adolescence, confirming that bariatric surgery is a safe and effective long-term strategy for managing obesity,” stated lead author Justin Ryder, PhD. He is the Vice Chair of Research in the Department of Surgery at Ann & Robert H. Lurie Children’s Hospital of Chicago and an Associate Professor of Surgery and Pediatrics at Northwestern University Feinberg School of Medicine.
Bariatric surgery is significantly under-utilized in the U.S., with only one out of every 2,500 teens with severe obesity undergoing the procedure. Based on existing recommendations, nearly five million adolescents qualify for effective weight loss interventions, such as bariatric surgery.
Hillary Fisher, now 31 years old, is glad she decided to undergo surgery at the age of 16. She was one of 260 adolescents who participated in the long-term Teen-LABS study.
“I felt overwhelmed by the daily struggles I faced due to my weight, health issues, and bullying in high school,” Ms. Fisher said. “After several unsuccessful attempts to lose weight, I weighed 260 pounds, and we decided that bariatric surgery was the solution. It changed my life; the improved health and self-esteem that came with losing 100 pounds were significant for me, and I would absolutely do it again.”
Notably, the study found that 55 per cent of the participants who had type 2 diabetes as teenagers and underwent surgery were still in remission from their diabetes at 10 years.
“This is considerably better than the outcomes reported in people who underwent bariatric surgery as adults, a major reason why treating obesity seriously in adolescents is so important,” added Dr. Ryder.
Indeed, a recent multi-centre randomized controlled trial found type 2 diabetes remission in adults to be 12 to 18 per cent at seven to 12 years after bariatric surgery.
Diabetes is a growing global issue, currently affecting over 500 million adults. As there is still no cure for either type 1 or type 2 diabetes, patients need to regularly monitor their blood glucose levels (BGLs) to manage their condition. While traditional BGL-measuring devices that require painful finger pricks have been the standard for many years, modern technology is beginning to offer better alternatives.
Many researchers have proposed noninvasive methods to monitor BGLs using commonly available wearable devices, such as smartwatches. For instance, by positioning the LEDs and photodetectors found in certain smartwatches against the skin, it is possible to measure the pulse signals of oxyhaemoglobin and haemoglobin. These signals can then be used to calculate a metabolic index, which in turn can be used to estimate BGLs. However, because smartwatches and similar wearables are small and have limited power, the quality of the measured signals is often low. Additionally, daily movements can introduce measurement errors because these devices are typically worn on the extremities. These issues hinder the accuracy and clinical applicability of wearables for managing diabetes.
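To make the idea concrete, here is a minimal Python sketch of how a metabolic index might be derived from two pulse signals. The FFT-based phase estimate and the definition of the index as a phase lag are illustrative assumptions; the article does not give the exact formula used in the smartwatch prototype.

```python
import numpy as np

def phase_at_pulse(signal, fs, pulse_hz):
    """Phase of a pulse waveform at the heart-rate frequency, estimated via FFT."""
    spectrum = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.angle(spectrum[np.argmin(np.abs(freqs - pulse_hz))])

def metabolic_index(oxy_hb, hb, fs, pulse_hz):
    """Illustrative metabolic index: the phase lag between the oxyhaemoglobin
    and haemoglobin pulse signals (a stand-in for the published definition)."""
    return phase_at_pulse(hb, fs, pulse_hz) - phase_at_pulse(oxy_hb, fs, pulse_hz)

# Synthetic 60-bpm pulse signals sampled at 100 Hz, with a small phase lag
fs, pulse_hz = 100.0, 1.0
t = np.arange(0, 60, 1.0 / fs)
oxy_hb = np.sin(2 * np.pi * pulse_hz * t)
hb = np.sin(2 * np.pi * pulse_hz * t - 0.2)
print(metabolic_index(oxy_hb, hb, fs, pulse_hz))  # close to -0.2 rad
```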
A team from Hamamatsu Photonics K.K. in Japan has been actively researching solutions to these problems. In a recent study led by Research and Development Engineer Tomoya Nakazawa and published in the Journal of Biomedical Optics (JBO), the team conducted a thorough theoretical analysis of the errors associated with the metabolic index-based method. Based on their findings, they developed a novel signal quality index to filter out low-quality data as a preprocessing step, enhancing the accuracy of the estimated BGLs.
“As smartwatches are widely adopted across different regions and age groups, and with the global rise in diabetes cases, a signal quality enhancing method that is easy to implement and apply regardless of personal and individual differences is essential for meeting the increasing worldwide demand for noninvasive glucose monitoring devices,” remarks Nakazawa, explaining the motivation behind the study.
First, the researchers mathematically showed that the discrepancy between the phase delays of the oxyhaemoglobin and haemoglobin pulse signals, as calculated by two different methods, provides a good measure of the influence of noise. They then considered two primary sources of phase error: background noise and the estimation errors introduced by sampling at discrete intervals. After formalizing these error sources, they calculated their effect on the estimated metabolic index.
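As a rough illustration of this quality check, the sketch below estimates the phase delay between two pulse signals in two different ways and treats their disagreement as a noise indicator. The particular pair of estimators (cross-spectrum phase versus cross-correlation lag) is an assumption chosen for illustration, not the estimators used in the paper.

```python
import numpy as np

def phase_delay_fft(x, y, fs, pulse_hz):
    """Phase delay of x relative to y at the pulse frequency, from the cross-spectrum."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - pulse_hz))
    cross = np.fft.rfft(x - x.mean()) * np.conj(np.fft.rfft(y - y.mean()))
    return np.angle(cross[idx])

def phase_delay_xcorr(x, y, fs, pulse_hz):
    """Phase delay from the lag of the cross-correlation peak (same sign convention)."""
    x, y = x - x.mean(), y - y.mean()
    lag = np.argmax(np.correlate(x, y, mode="full")) - (len(x) - 1)
    return -2 * np.pi * pulse_hz * lag / fs

def phase_discrepancy(x, y, fs, pulse_hz):
    """Disagreement between the two estimates; large values flag noisy data chunks."""
    return abs(phase_delay_fft(x, y, fs, pulse_hz) - phase_delay_xcorr(x, y, fs, pulse_hz))
```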
The proposed screening approach involves implementing thresholds for the phase estimation and metabolic index errors. Data chunks that exceed the set thresholds are discarded, and the missing values are approximated using other means based on the rest of the data.
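A minimal sketch of such a screening step, assuming a per-chunk quality metric, a fixed threshold, and linear interpolation as the gap-filling strategy (the article only says discarded values are approximated from the remaining data, so the interpolation choice is an assumption):

```python
import numpy as np

def screen_and_fill(values, quality, threshold):
    """Discard chunks whose quality metric exceeds the threshold and fill the
    resulting gaps by linear interpolation from the surviving data."""
    values = np.asarray(values, dtype=float)
    bad = np.asarray(quality) > threshold
    good_idx = np.flatnonzero(~bad)
    if good_idx.size == 0:
        raise ValueError("no data passed the quality screen")
    filled = values.copy()
    filled[bad] = np.interp(np.flatnonzero(bad), good_idx, values[good_idx])
    return filled

# Example: metabolic-index estimates with two noisy chunks screened out
mi = [1.00, 1.02, 2.50, 1.05, 1.07, 0.20, 1.10]
q = [0.1, 0.1, 0.9, 0.1, 0.1, 0.8, 0.1]  # hypothetical per-chunk error metric
print(screen_and_fill(mi, q, threshold=0.5))
```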
To test this strategy, the researchers conducted a long-term experiment in which the sensors in a commercial smartwatch were used to monitor the BGLs of a healthy individual during “oral challenges.” In each of the 30 tests conducted over four months, the subject would fast for two hours before consuming high-glucose foods. Their BGLs were measured using the smartwatch and a commercial continuous glucose monitoring sensor, which was used to capture the reference values.
Preprocessing the data with the proposed screening method led to a marked increase in accuracy. Using the Parkes error grid technique to categorize measurement errors, a substantially higher percentage of data points ended up in Zone A when screening was applied; Zone A corresponds to clinically accurate values that would lead to correct treatment decisions. “Adopting the screening process improved BGL estimation accuracy in our smartwatch-based prototype,” remarks Nakazawa. “Our technique could facilitate the integration of wearable and continuous BGL monitoring into devices such as smartwatches and smart rings, which are typically constrained in terms of size and signal quality,” he adds, highlighting the impact of the research work.
The research team also noted some of the current limitations of smartwatches that lead to inferior performance compared to smartphone camera-based techniques. Though the proposed method could certainly help enhance the former’s performance, hardware improvements in the photodetector and amplifier circuits could go a long way toward making wearable electronics a more attractive and clinically acceptable option for monitoring BGLs.
Many of the statistical models and algorithms scientists use can be imagined as “black boxes”: powerful models that give accurate predictions but whose internal workings are not easily understood. In an era dominated by deep learning, where an ever-increasing amount of data can be processed, Natália Ružičková, a physicist and PhD student at the Institute of Science and Technology Austria (ISTA), chose to take a step back, at least in the context of genomic data analysis.
Ružičková, along with recent ISTA graduate Michal Hledík and Professor Gašper Tkačik, has proposed a model to analyze polygenic diseases—conditions where multiple regions of the genome contribute to dysfunction. This model also aids in understanding the role of these identified genomic regions in developing these diseases. Their research provides valuable findings by integrating advanced genome analysis with fundamental biological insights. The results have been published in the Proceedings of the National Academy of Sciences (PNAS).
Decoding the human genome
In 1990, the Human Genome Project was launched to decode human DNA fully—the genetic blueprint that defines humanity. By 2003, the project was completed, leading to numerous scientific, medical, and technological breakthroughs. By deciphering the human genetic code, scientists aimed to learn more about diseases linked to specific mutations and variations in this genetic map. The human genome comprises approximately 20,000 genes and roughly three billion base pairs, the letters of the blueprint. This complexity made ample statistical power essential, resulting in the development of “genome-wide association studies” (GWAS).
GWAS tackle the problem by identifying genetic variants potentially linked to organismal traits such as height, including the propensity for various diseases. The underlying statistical principle is relatively straightforward: participants are divided into two groups—healthy and sick individuals. Their DNA is then analyzed to detect variants—changes in their genome—that are more common in those affected by the disease.
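The core of that comparison can be illustrated with a toy association test. The allele counts below are invented; a real GWAS runs millions of such tests and applies stringent multiple-testing corrections, but the healthy-versus-sick logic is the same.

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts for one genetic variant. Each person carries two
# copies, so 1,000 cases and 1,000 controls contribute 2,000 alleles each.
cases_alt, cases_ref = 620, 1380        # sick individuals
controls_alt, controls_ref = 500, 1500  # healthy individuals

table = [[cases_alt, cases_ref], [controls_alt, controls_ref]]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.1e}")
# A small p-value flags the variant as more common in affected individuals.
```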
An interplay of genes
When genome-wide association studies emerged, scientists expected to find just a few mutations in known genes linked to a disease that would explain the difference between healthy and sick individuals. The truth, however, is much more complicated. “Sometimes, hundreds or thousands of mutations are linked to a specific disease,” says Ružičková. “It was a surprising revelation and conflicted with our understanding of biology.”
Each individual mutation contributes only minimally to the risk of developing a disease. However, when combined, these mutations can provide a better—though not complete—understanding of why some individuals develop the disease. Such diseases are known as “polygenic.” For instance, type 2 diabetes is considered polygenic because it cannot be attributed to a single gene; rather, it involves hundreds of mutations. Some of these mutations influence insulin production, insulin action, or glucose metabolism, while many others are found in genomic regions that have not been previously linked to diabetes or have unknown biological functions.
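One way to picture how many tiny contributions add up is a polygenic risk score: the effect-weighted sum of risk alleles a person carries. The variant IDs and effect sizes below are hypothetical and serve only to illustrate the arithmetic.

```python
# Hypothetical per-variant effect sizes (as a GWAS might estimate them) and one
# person's genotype: 0, 1, or 2 copies of each risk allele.
effect_sizes = {"rs001": 0.04, "rs002": 0.02, "rs003": 0.07, "rs004": 0.01}
genotype = {"rs001": 2, "rs002": 0, "rs003": 1, "rs004": 1}

# The score is the sum of (effect size x allele count) over all variants.
risk_score = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)
print(round(risk_score, 3))  # 0.16; no single variant dominates the total
```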
The omnigenic model
In 2017, Evan A. Boyle and colleagues from Stanford University proposed a new conceptual framework called the “omnigenic model.” They proposed an explanation for why so many genes contribute to diseases: cells possess regulatory networks that link genes with diverse functions.
“Since genes are interconnected, a mutation in one gene can impact others, as the mutational effect spreads through the regulatory network,” Ružičková explains. Due to these networks, many genes in the regulatory system contribute to a disease. However, until now, this model has not been formulated mathematically and has remained a conceptual hypothesis that was difficult to test. In their latest paper, Ružičková and her colleagues introduce a new mathematical formalization based on the omnigenic model named the “quantitative omnigenic model” (QOM).
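The QOM’s equations are not reproduced in this article, but the intuition of a mutation’s effect spreading through a regulatory network can be sketched with a toy linear model. The network weights, the linearity, and the steady-state formula below are illustrative assumptions, not the authors’ actual formalism.

```python
import numpy as np

# W[i, j] is a hypothetical regulatory influence of gene j on gene i.
W = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.0],
    [0.0, 0.0, 0.0, 0.2],
    [0.0, 0.0, 0.0, 0.0],
])
direct_effect = np.array([0.0, 0.0, 0.0, 1.0])  # a mutation hits only the last gene

# Steady-state expression change: the direct effect plus everything that
# propagates through the network, x = direct + W x, i.e. x = (I - W)^-1 direct.
total_effect = np.linalg.solve(np.eye(4) - W, direct_effect)
print(total_effect.round(3))  # [0.03 0.06 0.2 1.]; distant genes shift a little too
```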
Combining statistics and biology
To demonstrate the new model’s potential, they needed to apply the framework to a well-characterized biological system. They chose the typical lab yeast model Saccharomyces cerevisiae, better known as brewer’s or baker’s yeast. It is a single-cell eukaryote, meaning its cell structure is similar to that of complex organisms such as humans. “In yeast, we have a fairly good understanding of how regulatory networks that interconnect genes are structured,” Ružičková says.
Using their model, the scientists predicted gene expression levels—the intensity of gene activity, indicating how much information from the DNA is actively utilized—and how mutations spread through the yeast’s regulatory network. The predictions proved highly accurate: The model identified the relevant genes and could clearly pinpoint which mutation most likely contributed to a specific outcome.
The puzzle pieces of polygenic diseases
The scientists’ goal was not to outdo the standard GWAS in prediction performance but rather to go in a different direction by making the model interpretable. Whereas a standard GWAS model works as a “black box,” offering a statistical account of how frequently a particular mutation is linked to a disease, the new model also provides a chain-of-events causal mechanism for how that mutation may lead to disease.
In medicine, understanding the biological context and such causal pathways has huge implications for finding new therapeutic options. Although the model is far from any medical application, it shows potential, especially for learning more about polygenic diseases. “If you have enough knowledge about the regulatory networks, you could also build similar models for other organisms. We looked at the gene expression in yeast, which is just the first step and proof of principle. Now that we understand what is possible, one can start thinking about applications to human genetics,” says Ružičková.
A team of researchers in nutritional sciences, kinesiology, and health education at the University of Texas at Austin has found that eating more ultra-processed foods—from diet sodas to packaged crackers to certain cereals and yoghurts—is closely linked with higher blood sugar levels in people with Type 2 diabetes.
In a recent paper published in the Journal of the Academy of Nutrition and Dietetics, the team describes how, more than just the presence of sugar and salt in the diet, consuming more ultra-processed foods loaded with additives can lead to higher average blood glucose levels over several months, as measured by HbA1c.
“We wanted to understand the impact of different types of foods on blood sugar control in people with Type 2 diabetes,” said Marissa Burgermaster, assistant professor of nutritional sciences at UT and the senior author of the study. “Our findings showed that individuals who consumed more ultra-processed foods had poorer blood sugar control, while those who included more minimally processed or unprocessed foods in their diet had better control.”
The researchers examined participants’ dietary recalls and scored them against three widely used indexes of overall diet quality and nutrition, but none of those tools was associated with blood glucose control. Instead, the number of grams of ultra-processed food participants ate or drank was linked to worse control, while participants who ate more whole foods, or foods and drinks with minimal processing, had correspondingly better control.
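The association being described (grams of ultra-processed food versus HbA1c) is, at its simplest, a dose-response relationship. The sketch below fits one on entirely hypothetical numbers; the study’s actual analysis was more involved.

```python
import numpy as np

# Hypothetical data: daily grams of ultra-processed food and HbA1c (%) for a
# handful of participants, to illustrate the direction of the association only.
upf_grams = np.array([150, 300, 450, 600, 800, 950])
hba1c = np.array([6.8, 7.0, 7.4, 7.6, 8.1, 8.3])

slope, intercept = np.polyfit(upf_grams, hba1c, 1)
r = np.corrcoef(upf_grams, hba1c)[0, 1]
print(f"HbA1c ~ {intercept:.2f} + {slope:.4f} * grams of UPF (r = {r:.2f})")
```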
Recent studies have indicated that eating more ultra-processed foods is linked to higher rates of cardiovascular disease, obesity, sleep disorders, anxiety, depression and early death. Ultra-processed foods are typically higher in added sugars and sodium, but the researchers concluded that the HbA1c increases were not simply a matter of added sugar and sodium; otherwise, they would have correlated with the tools that measure overall nutritional quality in the diet. Synthetic flavours, added colours, emulsifiers, artificial sweeteners and other artificial ingredients may be partly to blame, hypothesized Erin Hudson, a graduate student author of the paper, and this would suggest that dietary guidelines may need to place more emphasis on ultra-processed foods.