
A Short Comment on “1 Case Clinical Report of Cranium Aggressive Fibromatosis and Literature Review”

DOI: 10.31038/JCRM.2023632

Short Commentary

Aggressive fibromatosis (AF) [1], a benign fibrous neoplasm arising from the fascia, periosteum, and musculoaponeurotic structures of the body, rarely occurs in the head and neck region. It tends to be locally aggressive, with a propensity to invade and erode the skull and soft tissues, making excision difficult. The presence of vital neurovascular structures in the head and neck complicates treatment further. Owing to these features, it is wise not to compromise the vital structures, considering the benign nature of the disease.

This single-case clinical report described the clinical characteristics and appropriate therapeutic approach for aggressive fibromatosis (AF) of the skull. It also recommended reasonable strategies for prevention and noted the relatively favorable prognosis of AF, which occurs in the skull only rarely in neurosurgical practice.

Findings and Significance of the Characteristics in Skull AF from this Work

The report indicated that the main manifestations of skull AF were headache and a skull tumor. Plain X-ray films showed prominent osteolytic destruction, and CT scanning showed disorganized, inhomogeneous skull sclerotin with a widened diploë. The skull fibromatosis was composed of fibroblasts and myofibroblasts, mainly spindle-shaped and without pleomorphism. Immunohistochemistry showed positive expression of β-catenin and vimentin in these cells. An enlarged incision was adopted as the operative strategy in this patient with skull AF. During follow-up, no recurrence of AF was discovered.

Taken together, this study demonstrates that skull AF is very rare in neurosurgical practice. Because its clinical manifestations and imaging findings lack specificity, skull AF is hard to diagnose preoperatively; definitive diagnosis depends mainly on histopathologic examination. As for treatment, surgery remains the optimal method to date, with a good therapeutic effect.

This work offers insight that may improve the clinical management of AF in the near future. From this case report it is clear that skull AF ought to be evaluated by neuroimaging, such as CT and MRI. As for treatment, complete surgical excision of aggressive fibromatosis is considered the only definitively effective treatment. Chemotherapy may play a significant role, considering the possible hormonal etiopathogenesis of the condition. Radiotherapy and other methods, discussed below, are partially effective, as previously confirmed clinically.

Current Therapeutic Strategies and Future Prospect of Head and Neck AF

Head and neck fibromatosis is a rare condition with heterogeneity in presentation, proximity to vital structures, and a locally aggressive nature. AF of the skull is much rarer still than AF at other head and neck locations. These features make its treatment extremely challenging. Because of the rarity, variable behavior, and characteristics of these tumors, a standard treatment protocol has not yet been established. Although retrospective in nature and limited in patient number, this study sheds new light on various aspects of the management of this rare entity. In our experience, surgery followed by multimodality management offers the best control, if not cure, for fibromatosis of the head and neck region, and may represent the best clinical strategy for AF available to date. Importantly, for special cases, the therapeutic method should be tailored to the type and location of the AF.

Apart from existing therapeutic strategies [2-4], such as complete surgical excision, postoperative chemotherapy, and antioestrogen therapy [5], therapies using non-steroidal anti-inflammatory drugs (NSAIDs), interferon (IFN)-α, and the tyrosine kinase inhibitor imatinib [6-8] have recently come to the fore. Other novel and more effective treatment methods are under study.

References

  1. Raghunath Prabhu, Arjun Natarajan, Rajgopal Shenoy, Kuldeep Vaidya (2013) Aggressive fibromatosis (desmoid tumour) of the head and neck: a benign neoplasm with high recurrence. BMJ Case Rep 2013: bcr2013200156. [crossref]
  2. Kruse AL, Luebbers HT, Grätz KW, Obwegeser JA (2010) Aggressive fibromatosis of the head and neck: a new classification based on a literature review over 40 years (1968–2008). Oral Maxillofac Surg 14: 227-232. [crossref]
  3. Kinzbrunner B, Ritter S, Domingo J, Rosenthal CJ (1983) Remission of rapidly growing desmoid tumors after tamoxifen therapy. Cancer 52: 2201-2204. [crossref]
  4. Mukherjee A, Malcolm A, De la Hunt M, Neal DE (1995) Pelvic fibromatosis (desmoid)-treatment with steroids and tamoxifen. Br J Urol 75: 559-560. [crossref]
  5. Weiss AJ, Lackman RD (1989) Low-dose chemotherapy of desmoid tumors. Cancer 64: 1192-1194. [crossref]
  6. Fowler CB, Hartman KS, Brannon RB (1994) Fibromatosis of the oral and paraoral region. Oral Surg Oral Med Oral Pathol 77: 373-386. [crossref]
  7. Enzinger FM, Shiraki M (1967) Musculoaponeurotic fibromatosis of the shoulder girdle. Cancer 20: 1131-1140. [crossref]
  8. Goellner JR, Soule EH (1980) Desmoid tumors: an ultrastructural study of eight cases. Hum Pathol 11: 43-50. [crossref]

Not from Concentrate – An Exploration of the Minds of Consumers by Combining Experimental Design of Ideas (Mind Genomics) with Artificial Intelligence (AI)

DOI: 10.31038/PSYJ.2023571

Abstract

AI (artificial intelligence) was used in the Mind Genomics platform (BimiLeap) to generate sets of messages about orange juice that is not from concentrate. These messages, called 'elements', were edited slightly and combined into vignettes comprising 2-4 elements, with the combinations dictated by an underlying experimental design. Each of 100 respondents evaluated a unique set of 24 vignettes, created to allow statistical analysis by OLS regression and then clustering. Respondents rated each vignette on a two-dimensional scale: desire to drink, and belief in the information. The key equation related the presence/absence of the 16 elements to the desire to drink. Two clear mind-sets emerged: MS1, stress better ingredients; MS2, stress better functionality. AI was then used to summarize the results emerging from the two mind-sets, providing summaries based on themes, points of view, and opportunities for new products and services. The paper demonstrates the synergies, viz., speed, simplicity, learning, and commercial opportunities, currently available when one merges an information-generating tool (AI) with rapid, hard-to-game evaluations by 'real people.'

Introduction

The topic of NFC, not from concentrate, is an interesting case study in how the desirability of a product description changes over time. Where NFC was once the hallmark of quality for orange juice, changes in the world of citrus, especially in Florida, have eroded the value of NFC. Typical studies on such topics focus on limited aspects, such as the change in the dollar value of the slogan (viz., what people are willing to pay), or perhaps a historical retrospective of the business literature dealing with aspects of NFC.

The study reported here moves beyond a focused investigation of the topic into what might be more appropriately called an AI-enhanced exploration of the topic, coupled with the responses of people. The approach used here, Mind Genomics, allows the researcher to explore how a person responds to various aspects of a topic, doing so in a way that moves towards the world of induction and so-called grounded theory research [1]. Rather than developing a hypothesis from a thorough understanding of past published literature, the emerging science of Mind Genomics encourages the exploration of a topic in a structured, templated fashion. The exploration is fast, inexpensive, and disciplined, and almost always generates powerful new insights, because it encourages the researcher to explore and discover rather than to confirm or falsify a hypothesis.

The origin of this specific study can be traced to the International Citrus & Beverage Conference, held in September 2023, in Clearwater, Florida. The conference brings together the various individuals involved in the world of citrus and allied products and services. The specific origin of the study was the conference presentation given by author Moskowitz to demonstrate the Mind Genomics method as a new technology. Discussions with authors Plotto and Sims about the best way to talk about Mind Genomics led to the notion that one could best explain the method by a live demonstration, from start to (almost) finish, in the allotted time of 50 minutes. Author Sims suggested the topic of 'NFC' and acted as the technical expert to introduce the problem. Author Schneider, in turn, ran the computer, typing in the ideas or selecting them when the ideas were presented as a group from which messages (elements, ideas) were to be chosen for testing. The output of the demonstration comprised both a book of information about NFC from different points of view (the Idea Book, produced by the embedded AI, called Idea Coach) and the results from the Mind Genomics 'experiment' after the 100 respondents participated.

Background to NFC

In the mid 1940s, in order to get a natural source of vitamin C to those in war-torn Europe, frozen concentrated orange juice was developed and became a leading seller once it hit the retail market. Evolving processing and storage techniques later allowed for the use of orange juice concentrate to develop refrigerated ‘Ready to Serve’ orange juice, creating a new segment in the market. Soon, this reconstituted ready to serve orange juice became the preferred choice of consumers over the frozen concentrate [2].

The reconstituted refrigerated 100% orange juice led the market until the introduction of not from concentrate, or NFC, orange juice to the market in the 1980s. Not from concentrate 100% orange juice is essentially made by extracting the juice, pasteurizing, and packaging. Though this juice was more costly to produce, store, and distribute, it was marketed as a ‘premium’ experience with superior flavor as it is not subject to the heat of evaporation.

NFC orange juice grew in popularity as consumers shifted from juice from concentrate to NFC, looking for more natural and healthier products [3]. Today, over 90% of Florida's oranges are processed for NFC juice. However, due to challenges such as the state-wide devastation of citrus greening disease, juice production is declining [4]. Citrus greening has led to a decrease in Florida's juice quality, as infected oranges are described as bitter and sour and lacking in sugar and orange flavor [5]. The lower sugar content and off attributes associated with infected oranges are making it more challenging for Florida orange juice producers to make 100% NFC orange juice with only Florida oranges. For example, companies such as Florida's Natural are now adding Mexican Valencia orange juice concentrate to their NFC juice to increase the sugar content of the juice and meet consumer demand. No studies have been done to assess what consumers think about not from concentrate orange juice versus from concentrate orange juice.

Background to Mind Genomics

Mind Genomics emerged from the confluence of three disciplines: psychophysics, statistics, and consumer research.

Psychophysics – provided Mind Genomics with the goal of measuring the strength of ideas. The origins of psychophysics lie in the work of pioneers such as G.T. Fechner and S.S. Stevens, both focused on measuring the strength of sensations [6,7]. This early work, in what Harvard's S.S. Stevens called 'Outer Psychophysics', focused on the relation between the physical measurement of stimulus magnitude and the perceived magnitude. Stevens' 'Inner Psychophysics' aimed to measure the strength of the percept itself. Mind Genomics adopted Stevens' notion of the magnitude of an idea as the basis for its effort to measure the strength of perceptions.

Statistics – provided Mind Genomics with a way to organize the ‘test stimuli’ into combinations, so that these combinations or vignettes could somewhat approximate the nature of information coming to respondents in the form of text information from which ideas would be generated in the mind of the person. The specific approach contributed by statistics is known as ‘experimental design.’ The contribution comprises the precise combinations needed to test, so that one can deconstruct the response to the combination (called vignettes henceforth in this paper) to the presence/absence of specific phrases. In this way it would be possible to create known combinations of test stimuli, present them to people, get the reactions, and finally use statistics to estimate the contribution of each component in the vignette to the response.

Consumer research – provided Mind Genomics with the recognition of the importance of the everyday. Rather than putting the respondent into an unusual situation and then running the experiment with the respondent treated as a 'test subject', consumer research focused on the quotidian, the ordinary. The goal was no longer to prove or disprove a hypothesis by experimentation, but rather to focus on the normal world, albeit from the eyes of someone who wants to know that world in a quantitative fashion. Could numbers be put on the features of the ordinary world, to express the magnitude of its different features as they are perceived by people?

During the past three decades, the ‘emerging’ science of Mind Genomics has evolved to the point where it has become a DIY, do-it-yourself, research system, almost fully templated. The approach has evolved from the user creating one set of test vignettes (combinations of elements, viz., messages) to small, automatically created sets of vignettes, different for each respondent (study participant).  In the most current version of Mind Genomics, each respondent evaluates sets of 24 vignettes, each vignette comprising a minimum of two and a maximum of four elements (messages). The underlying experimental design works with four topics, viz., questions, and with each topic generating four different elements, viz., answers. The experimental design puts together the answers into small, easy to read vignettes, the aforementioned combinations. The respondent reads each of 24 vignettes, and for each vignette assigns a response from a rating scale.
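As an illustrative sketch (not the actual BimiLeap code), the logic described above can be simulated in a few lines of Python: build vignettes of 2-4 elements drawn from a 4×4 grid of hypothetical elements, score them with invented element contributions plus noise, and then recover those contributions by OLS regression on the presence/absence design matrix. All numbers here are assumptions chosen only to mirror the study's structure (16 elements, 100 respondents, 24 vignettes each).

```python
import numpy as np

# Sketch: simulate vignettes built from 16 elements (4 questions x 4 answers),
# 2-4 elements per vignette with at most one answer per question, then
# deconstruct ratings by OLS regression (hypothetical data, not study data).
rng = np.random.default_rng(0)
n_elements = 16
true_effects = rng.normal(0, 10, n_elements)   # invented element contributions
baseline = 30.0                                # invented additive constant

rows, ratings = [], []
for _ in range(100 * 24):                      # 100 respondents x 24 vignettes
    n_active = rng.integers(2, 5)              # 2-4 elements in this vignette
    questions = rng.choice(4, size=n_active, replace=False)  # one answer max per question
    answers = rng.integers(0, 4, size=n_active)
    x = np.zeros(n_elements)
    x[questions * 4 + answers] = 1             # mark present elements
    rows.append(x)
    ratings.append(baseline + x @ true_effects + rng.normal(0, 5))

# OLS: rating ~ intercept + presence/absence of each of the 16 elements
X = np.column_stack([np.ones(len(rows)), np.array(rows)])
coef, *_ = np.linalg.lstsq(X, np.array(ratings), rcond=None)
print("estimated additive constant:", round(coef[0], 1))
```

With roughly 2,400 vignettes, the estimated coefficients track the simulated contributions closely, which is the property the underlying experimental design is built to provide.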

The objective of the Mind Genomics study is to make the effort easier, so that anyone can become a researcher. Indeed, elementary school students ages eight and above have found this templated approach to be fun, investigating topics such as the nature of third grade mathematics in ten years [8].

The Mind Genomics study continues to be enhanced. Current efforts, presented in this paper, include the use of AI (artificial intelligence) to help the researcher come up with the elements by suggesting questions and then answers to those questions, once the researcher describes the issue in the ‘Idea Coach’. Thus, it becomes far easier to investigate new topics, even with virtually no knowledge, because the embedded AI provides a true coach.  Additional enhancements using AI include the summarization of results by AI using a number of queries to bring together the strong performing results in a user-friendly way.

Running a Mind Genomics Study on the Topic of NFC, not from Concentrate

The Mind Genomics study begins at the website (www.BimiLeap.com). After the researcher has created an account, the researcher begins a study by naming the study, selecting the language (currently only a few languages are implemented beyond English), and then agreeing to respondent privacy.

The next step requires the researcher to develop four questions, and for each question develop four answers. Figure 1 Panel A shows the request for the four questions. Figure 1 Panel B shows the request for four answers to one question, the question having been developed already by the researcher. Both Panel A and Panel B are shown ‘filled in’. For the researcher beginning the study, these screens are empty, requesting the researcher to fill them in, the researcher first creating the four questions and then afterwards filling four Panel B’s, one for each question.


Figure 1: Panels showing the input of the researcher. Panel A shows the request for four questions. Panel B shows the request for four answers to one of the four questions. Panel C shows the instructions to create a self-profiling classification question. Panel D shows the anchored five-point rating scale.

It is at this point that many researchers feel nervous. Idea Coach, embodying AI, was developed to decrease this nervousness and push the study towards creation and completion. Table 1 shows the first iteration of the Idea Coach, used to provide questions. The researcher need only press the red oval in the formatted BimiLeap program to be taken to a screen which instructs the researcher to describe the topic in a box provided. The researcher then submits the request and receives a set of 15 questions. The researcher can select one or more questions, paste those questions into the appropriate screen (Figure 1 Panel A), edit a question if desired, add one's own question, or run the request again for a mostly new set of questions. This process of requesting questions, selecting, and pasting can go on for a while, but usually within four or five requests, and thus 60-75 mostly different questions, the researcher will have selected and edited the best questions. The same process is followed when requesting four answers to each question.

While the researcher is using Idea Coach, both for questions and for answers, the program stores in the background the guiding 'squib' for the development of questions and the guiding question for the development of answers. When the researcher asks Idea Coach four times for questions, and three times for answers to each of four questions, the Idea Coach will produce 4 + 3 + 3 + 3 + 3 = 16 'pages' of questions or answers. These are recorded for the researcher, along with a detailed analysis of the patterns uncovered on each particular page, whether questions or answers. The material is returned to the researcher in the form of an Excel book, the Idea Book, with each page or tab corresponding to one of the different requests. Table 1 shows an example of what is returned by a request to Idea Coach for questions.

Table 1: Results from the Idea Book, showing the 15 questions and AI summarization of those questions. Table 1 shows the results from the third time the Idea Coach was requested to provide 15 questions to address the topic.


The depth of information in Table 1 deserves a comment. One of the benefits of current AI is that the AI technology can be queried, as it was to develop the 15 questions, using the short paragraph, here really a sentence: Topic: Why would from concentrate orange juice be any less acceptable then NFC orange juice, please expand on this because I’m less than 12 years old. This single statement became a query, which generated the 15 Topic Questions listed. The AI then stored these 15 questions, while the Mind Genomics program, BimiLeap, continued to interact with the researcher, in order to select the four questions. After the four questions and four answers to each question were selected, BimiLeap used AI to ‘interrogate’ each set of 15 questions (and later each set of 15 answers). The results of the interrogation, viz., the summarization by AI, appears in Table 1, which shows the results from the third iteration, viz., the third time the researcher asked for the 15 questions to address the topic.

Once the researcher has selected the elements (the four sets of four answers), the next step is to add self-profiling classification questions. The now-standard version of BimiLeap automatically asks the respondent for gender and age, and allows the researcher to add up to eight additional questions, each with up to eight possible answers. Figure 1 Panel C shows the self-profiling classification question asked in this study. With up to eight classification questions, it is possible to use the first portion of the study, self-profiling, as a complete study in itself.

When the researcher has completed the setup of questions and answers (topics and elements), and then completed the self-profiling classification, it is time to create the rating scale. The rating scale for this study is unique in that it contains two parts, wanting to drink orange juice versus not wanting to drink orange juice, as well as believing versus not believing the material. Figure 1 Panel D shows the rating scale and the five answers.

How do you feel about orange juice when you read this?
1=I don't want to drink orange juice and I don't believe what I just read
2=I don't want to drink orange juice but I do believe what I just read
3=I can't answer
4=I do want to drink orange juice but I don't believe what I just read
5=I do want to drink orange juice and I do believe what I just read
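Because each rating point encodes two judgments at once, analysis typically splits the scale into its two dimensions. A minimal sketch of such a decoding follows; the function and variable names are invented for illustration, not taken from the BimiLeap platform.

```python
# Decode the 5-point scale into two binary dimensions: desire to drink, belief.
# Rating 3 ("I can't answer") carries no information on either dimension.
def decode_rating(rating: int):
    """Return (wants_to_drink, believes) as booleans, or None for 'can't answer'."""
    mapping = {
        1: (False, False),  # no drink, no belief
        2: (False, True),   # no drink, belief
        4: (True, False),   # drink, no belief
        5: (True, True),    # drink, belief
    }
    return mapping.get(rating)  # None for rating 3

# Hypothetical ratings decoded into the two dimensions:
decoded = [decode_rating(r) for r in [1, 5, 3, 4]]
print(decoded)
```

Splitting the scale this way lets the same set of vignette responses drive two separate analyses, one for desire and one for belief.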


The Evolution of Type 2 Diabetes Mellitus and Insulin Resistance

DOI: 10.31038/EDMJ.2023732

Abstract

Over the past 25 years of studying type 2 diabetes mellitus, a working hypothesis has emerged to move the development of precision medicine for type 2 diabetes mellitus forward. Earlier studies using amplified genomic DNAs for genome-wide searches of human genes led many investigators astray. However, a recent study has taken a different approach, using next-generation RNA sequencing, revealing an essential down-regulation of two genes, TPD52L3 and NKX2-1. The current compendium describes the important principles needed to clarify the hypothesis from the beginning: insulin sensitivity and glucose effectiveness, genetics, free fatty acids, cell membranes, atomistic glucose and glucose transport, β-cell functions, membrane flexibility, and (pre-)diabetes type 2.

Furthermore, this study sheds light on the importance of considering membrane flexibility in the context of type 2 diabetes and questions the potential risk associated with using the term ‘insulin resistance’.

Keywords

Type 2 diabetes mellitus; Insulin sensitivity; Glucose effectiveness; Genetics; Free fatty acid; Cell membrane; Glucose transport; β-cell function; Membrane flexibility

Introduction

Falta and Boller introduced the concept of "insulin resistance" in their seminal work published in 1931, yet as early as 1933 MacBryde noted that scholars had not reached a consensus on its definition, leading to gaps in research and clinical care. Despite remarkable advancements in medicine, these gaps continue to exist, even after nine long decades [1,2]. Over the years, accumulating data on insulin resistance have been published, enabling reconsideration of its meaning. Another unresolved problem relates to the diabetes susceptibility loci in and around the CDKAL1, CDKN2A/CDKN2B, HHEX, KCNJ11, SLC30A8, and TCF7L2 genes, suggesting that the single-nucleotide polymorphisms within or near these genes most likely do not alter their expression or function [3]. A recent study reported that common variant studies of type 2 diabetes mellitus have identified more than 700 risk loci, half of which have been discovered in the past three years [4]. The question is: Are we on the right track? Indeed, genes are normally copied exactly during chromosome duplication. Rarely, however, mutations occur in genes to give rise to altered forms, most, but not all, of which function less well than the wild-type alleles. One study, based on next-generation RNA sequencing, found that an inherited mitochondrial defect reducing respiratory chain complex activity, as well as a defect associated with alterations in lipid storage, played a critical role in the onset of type 2 diabetes mellitus. Against this background, this type 2 diabetes mellitus compendium furnishes an overview of recent advances in biochemistry and molecular biology in the context of the disease. The compendium will help physicians and students of medicine gain an in-depth understanding of its molecular mechanisms.

Insulin Sensitivity and Glucose Effectiveness

Computational modeling of glucose and insulin kinetics following intravenous glucose challenge has demonstrated that individuals with type 2 diabetes mellitus show significant reductions in insulin sensitivity (SI) and insulin-independent glucose removal rate (SG) compared with normoglycemic individuals (Table 1) [5]. Insulin sensitivity essentially reflects the ability of insulin to enhance the effect of glucose to normalize its own concentration, and glucose effectiveness refers to the ability of glucose, independent of a dynamic insulin response, to enhance net glucose disappearance. Thus, these individuals exhibit reduced responsiveness to circulating insulin as well as reduced glucose effectiveness.

Table 1: Measures of glucose effectiveness and insulin sensitivity for a two-compartment minimal model

Parameter | Units | Control subjects | Type 2 diabetes individuals | P value | Δ (%) | Tracer
SG | h⁻¹ | 0.41 ± 0.04 | 0.33 ± 0.02 | < 0.001 | 19.5 | ¹³C
SG | h⁻¹ | 0.52 ± 0.05 | 0.37 ± 0.02 | < 0.001 | 28.8 | ²H
SG average | | | | | 24.1 |
SI | pmol × L⁻¹ × h⁻¹ | 0.0082 ± 0.0012 | 0.0036 ± 0.0006 | < 0.001 | 56.1 | ¹³C
SI | pmol × L⁻¹ × h⁻¹ | 0.0098 ± 0.0013 | 0.0042 ± 0.0008 | < 0.001 | 57.1 | ²H
SI average | | | | | 56.6 |

Data are based on the reference data listed by Weijers [5]. SG: glucose effectiveness; SI: insulin sensitivity.

In the latter condition, glucose, independent of changes in the insulin concentration, is less able to facilitate its own uptake through a mass-action effect and to suppress its own release. A prospective study investigated the development of type 2 diabetes in normoglycemic offspring of parents who had type 2 diabetes. The study revealed that the offspring exhibited significant defects in both glucose effectiveness and insulin sensitivity more than a decade before disease development [6]. Moreover, a key feature of type 2 diabetes is a substantially larger defect in insulin sensitivity (56.6%) than in glucose effectiveness (24.1%). What do these findings imply?
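The Δ(%) values in Table 1 are simple percent reductions relative to the control values, which can be verified directly; the short check below uses the Table 1 numbers.

```python
# Check the Δ(%) column of Table 1: percent reduction relative to controls.
def pct_reduction(control, diabetic):
    return round((control - diabetic) / control * 100, 1)

# SG (glucose effectiveness), 13C and 2H tracers:
sg = [pct_reduction(0.41, 0.33), pct_reduction(0.52, 0.37)]
# SI (insulin sensitivity), 13C and 2H tracers:
si = [pct_reduction(0.0082, 0.0036), pct_reduction(0.0098, 0.0042)]
print(sg, si)  # [19.5, 28.8] [56.1, 57.1], matching the table
```

Averaging the two tracer values per parameter reproduces the roughly 24% defect in glucose effectiveness versus the roughly 57% defect in insulin sensitivity discussed in the text.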

Genetics

In the ongoing research on the genetic basis of type 2 diabetes, earlier studies using amplified genomic DNAs for genome-wide searches of human genes led many investigators astray. However, a recent study has taken a different approach, using next-generation RNA sequencing to examine genome-wide changes in gene expression in the skin of patients with type 2 diabetes compared with non-diabetic patients [7]. This new study identified two previously unknown genes that were significantly downregulated in these individuals.

These findings revealed that tumor protein D52-like 3 (TPD52L3), a gene in the metabolism category, exhibited the most significant downregulation, with a value of 3.7 × 10⁻⁹, in the studied group of individuals with type 2 diabetes. There is no established link between TPD52L3 and type 2 diabetes or wound healing. However, a study involving exogenous expression of human TPD52 in cultured cells demonstrated a notable increase in lipid droplets [8]. Lipid droplets serve as storage organelles for excess fatty acids within adipocytes (fat cells). Tumor protein D52 is the founding member of the TPD52-like protein family, which comprises four paralogous mammalian genes: TPD52, TPD52L1, TPD52L2, and TPD52L3 [8,9]. When aligning the TPD52 and TPD52L3 genes, researchers found that the two sequences shared 63 identical positions and 42 similar positions, resulting in an overall homology of 67.9% (Figure 1) [10]. Based on this high sequence homology with TPD52, it appears plausible that the primary function of TPD52L3 is lipid storage in adipose cells. A reduction in TPD52L3 expression could therefore increase the release of free fatty acids into the bloodstream. The difference in the unsaturation index (number of cis carbon-carbon double bonds per 100 fatty acyl chains) between the free fatty acids released from human white fat cells and the serum free fatty acids in healthy controls (85.5 versus 191.9, respectively) is substantial [5]. The release of these predominantly saturated free fatty acids leads to a considerable decrease in the unsaturation index of erythrocyte and vascular membranes. As a result, the membrane flexibility of these cells is reduced, which in turn lowers the rate of glucose transport across the cell membrane, initiating the onset of type 2 diabetes.
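To make the unsaturation index concrete, here is a toy calculation following the definition above; the fatty-acid mixture is invented for illustration and is not data from [5].

```python
# Unsaturation index (UI): number of cis C=C double bonds per 100 fatty
# acyl chains, i.e. the mole-fraction-weighted double-bond count scaled by 100.
def unsaturation_index(composition):
    """composition: {fatty_acid_name: (mole_fraction, n_double_bonds)}"""
    total = sum(frac for frac, _ in composition.values())
    return 100 * sum(frac * db for frac, db in composition.values()) / total

# A hypothetical acyl-chain mixture (fractions and species are made up):
mixture = {
    "palmitic (C16:0)":    (0.30, 0),
    "oleic (C18:1)":       (0.40, 1),
    "linoleic (C18:2)":    (0.20, 2),
    "arachidonic (C20:4)": (0.10, 4),
}
print(round(unsaturation_index(mixture), 1))  # 120.0 for this mixture
```

A shift of mole fraction from the polyunsaturated species toward the saturated ones lowers the index, which is exactly the shift the text describes for released free fatty acids.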


Figure 1: Alignment of the human TPD52 (upper row) and the human TPD52L3 (lower row) protein sequences. Amino acid residues are indicated by single letters. Vertical lines indicate identical residues and colons/dots indicate highly/weakly conserved residues.

In the second scenario, the most downregulated gene in the gene regulation category was NKX2-1, with a downregulation value of 3.7 × 10⁻⁹ [7]. NKX2-1, a transcription factor, is associated with reduced mitochondrial respiratory chain complex activity, resulting in decreased ATP production, among other cellular effects [11]. This idea is supported by data proposing that the dysregulation of intramyocellular fatty acid metabolism in the offspring of individuals with type 2 diabetes is linked to an inherited defect in mitochondrial oxidative phosphorylation [12]. The β-oxidation of fatty acids plays a crucial role in compensating for and restoring ATP production by increasing plasma free fatty acids through hydrolysis. However, this increase in free fatty acids results in a significant decrease in the unsaturation index of serum free fatty acids. Consequently, the reduced unsaturation index leads to decreased membrane flexibility and lowers the rate of glucose transport across the cell membrane, ultimately initiating the onset of type 2 diabetes. Notably, reduced mitochondrial activity is recognized as one of the key characteristics of type 2 diabetes [13].

Free Fatty Acids

Over four billion years of evolution, the earliest protocells evolved into 'modern' cells enclosed by membranes consisting of phospholipids, the chief constituents of biological membranes. Glycerol-based phospholipids are the major class of naturally occurring phospholipids. Typically, a phospholipid consists of glycerol-3-phosphate, with a saturated fatty acid at position 1 and an unsaturated fatty acid at position 2 of the glycerol. Saturated fatty acids possess essentially linear alkyl chains with no double bonds. On the other hand, the double bonds in unsaturated fatty acids are nearly always in the cis configuration, which creates a bend in the fatty-acid chain. Molecules such as palmitoleic acid (C16:1) and oleic acid (C18:1) are bent at the cis double bond, and the two chain parts form an angle of 133 degrees [14,15]. This bend has important consequences for the structure and functionality of biological membranes because, while saturated fatty acids are able to pack closely together, unsaturated fatty acids prevent such close packing.

Type 2 diabetes mellitus, gestational diabetes mellitus, and impaired glucose tolerance are characterized by elevated plasma free fatty acid levels [16,17]. This is confirmed by findings that the percentages of docosahexaenoic acid (C22:6 n-3) and arachidonic acid (C20:4 n-6), released from white adipocytes, are decreased by approximately 110-fold and 9-fold, respectively, compared with the human serum pool, and the unsaturation index of released free fatty acids from human white adipocytes is markedly lower than the unsaturation index of serum free fatty acids in healthy controls (85.5 and 191.9, respectively) [5]. Therefore, an increased release of free fatty acids from adipose tissue into the blood circulation elevates the plasma concentration of saturated fatty acids. Hence, a shift from unsaturated to saturated fatty-acyl chains in phospholipids of erythrocyte membrane and vascular endothelium is a hallmark of type 2 diabetes mellitus [18]. Borkman et al. in 1993 suggested that decreased insulin sensitivity is associated with decreased concentration of polyunsaturated fatty acids in skeletal-muscle phospholipids, raising the possibility that changes in the fatty-acid composition of muscle modulate the action of insulin [19].

Cell Membranes

Phospholipid bilayers form spontaneously and rapidly when phospholipids are added to water. As evident in Figure 2, the two acyl chains (the hydrocarbon chain region) give the phospholipid a roughly cylindrical shape with cross-sectional area A. These cylinders can pack in parallel arrays to form extended membrane sheets composed of a mosaic of proteins and phospholipids in a fluid phospholipid matrix [20].

FIG 2

Figure 2: The most basic structural result obtained from x-ray scattering from oriented bilayers in model phospholipid membrane systems is the area (A) per lipid molecule (the cross-sectional area of the cylindrical part of the phospholipid). DHH represents the membrane bilayer thickness (the head-to-head distance).

The driving force behind this aggregation phenomenon is the weak, noncovalent van der Waals bond between pairs of carbon atoms, which can be modeled with the Lennard-Jones potential [21]. The interaction energy (U) is related to the distance (r) between two carbon atoms, as illustrated graphically in Figure 3. The graph shows that the minimum-energy principle favors a carbon-carbon distance of about 4 Å, the most stable distance between the centers of two carbon atoms, with a minimum interaction energy of -0.77 kJ/mol. Furthermore, when the carbon atoms of two acyl chains move apart, the magnitude of their attraction falls off approximately with the sixth power of the distance r; when they approach closer than the equilibrium distance, the repulsive energy rises approximately with the twelfth power of r. Thus, the flexibility of a lipid bilayer is largely determined by the strength of these weak noncovalent carbon-carbon interactions, i.e., by the number of -C=C- double bonds along the unsaturated acyl chains of the phospholipids.
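As a check on the numbers quoted above, a short numerical sketch (not part of the original study; parameter values taken from the Figure 3 caption) locates the minimum of the Lennard-Jones potential:

```python
# Lennard-Jones potential for a carbon-carbon pair, U(r) = B/r^12 - A/r^6,
# with the parameter values quoted in the Figure 3 caption.
B = 11.5e-6   # kJ·nm^12/mol (repulsive term)
A = 5.96e-3   # kJ·nm^6/mol  (attractive term)

def interaction_energy(r_nm):
    """Interaction energy U (kJ/mol) at carbon-carbon separation r (nm)."""
    return B / r_nm**12 - A / r_nm**6

# Setting dU/dr = 0 gives the equilibrium separation r_min = (2B/A)^(1/6).
r_min = (2 * B / A) ** (1 / 6)

print(f"r_min = {r_min * 10:.2f} Å")                        # 3.96 Å, i.e. about 4 Å
print(f"U(r_min) = {interaction_energy(r_min):.2f} kJ/mol")  # -0.77 kJ/mol
```

Both results match the values stated in the text: the most stable carbon-carbon distance is about 4 Å, with a minimum interaction energy of -0.77 kJ/mol.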

FIG 3

Figure 3: The van der Waals interaction energy profile as a function of the distance (r) between the centers of two carbon atoms. The energy is calculated using the empirical equation U = B/r^12 - A/r^6, with parameter values B = 11.5 × 10^-6 kJ·nm^12/mol and A = 5.96 × 10^-3 kJ·nm^6/mol for the interaction between two carbon atoms.

The unsaturation index is widely recognized as a useful parameter for describing the flexibility of a biological membrane. It is calculated by multiplying the mean number of cis double bonds per lipid acyl chain by 100 [22]. Therefore, an increase in the saturated fatty acids of membrane phospholipids, as observed in erythrocytes, reduces membrane flexibility and is marked by a decrease in the unsaturation index. A number of studies of fully hydrated, fluid-phase, model phosphatidylcholine bilayers have shown that introducing one or more cis carbon-carbon double bonds into the saturated acyl chains increases the cross-sectional area A by about 18% and decreases the attraction energy by about 34% (Table 2) [23].
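The definition of the unsaturation index can be illustrated with a minimal sketch (an illustration by the editor, using the acyl-chain compositions listed in Table 2):

```python
def unsaturation_index(double_bonds_per_chain):
    """UI = mean number of cis C=C double bonds per acyl chain, times 100 [22]."""
    chains = double_bonds_per_chain
    return 100 * sum(chains) / len(chains)

# Double bonds per acyl chain for three lipids from Table 2:
print(unsaturation_index([0, 0]))  # DPPC, [C16:0]2      -> 0.0
print(unsaturation_index([1, 1]))  # DOPC, [C18:1]2      -> 100.0
print(unsaturation_index([0, 6]))  # PDPC, C16:0 + C22:6 -> 300.0
```

These values reproduce the UI row of Table 2.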

Table 2: Experimental data of fully hydrated fluid phase phosphatidylcholine lipid bilayers

                                     DLPC        DMPC        DPPC        DOPC        PDPC
Fatty acid structure               [C12:0]2    [C14:0]2    [C16:0]2    [C18:1]2    C16:0,C22:6
Temperature (°C)                      30          30          50          30          30
Area A per lipid molecule (Å²)       63.2        60.6        64.0        72.5        74.8
Mean area A (Å²)                         62.6 (saturated)          73.6 (unsaturated)
Mean -C=C- interchain dist. (Å)          4.48 (saturated)          4.84 (unsaturated)
Mean interaction energy U (kJ/mol)      -0.59 (saturated)         -0.39 (unsaturated)
UI                                     0           0           0          100         300

Data are based on the original data listed by Weijers [23].
DLPC: Dilauroylphosphatidylcholine; DMPC: Dimyristoylphosphatidylcholine; DPPC: Dipalmitoylphosphatidylcholine; DOPC: Dioleoylphosphatidylcholine; PDPC: Palmitoyl-Docosahexaenoyl-Phosphatidylcholine; -C=C-: carbon-carbon double bond; UI: Unsaturation Index.
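The roughly 18% and 34% figures quoted in the text follow directly from the mean values in Table 2; a small sketch of the arithmetic:

```python
# Mean values for fully hydrated, fluid-phase PC bilayers (Table 2).
area_sat, area_unsat = 62.6, 73.6   # mean area A per lipid (Å²)
u_sat, u_unsat = -0.59, -0.39       # mean interaction energy U (kJ/mol)

area_increase = (area_unsat - area_sat) / area_sat * 100
energy_decrease = (abs(u_sat) - abs(u_unsat)) / abs(u_sat) * 100

print(f"area increase: {area_increase:.0f}%")                 # 18%
print(f"attraction-energy decrease: {energy_decrease:.0f}%")  # 34%
```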

An important method for quantifying the mechanical properties of a single cell, quite different from the x-ray studies of fully hydrated, fluid-phase model phosphatidylcholine bilayers, has recently been developed, in which a spherical cell is aspirated into a micropipette aspiration channel with a controlled suction pressure [24]. Micropipette pressurization of giant diacylphosphatidylcholine bilayers demonstrated that poly-cis unsaturated chain bilayers are thinner and more flexible than saturated/monounsaturated chain bilayers. The most striking result, however, was the major increase in bending flexibility that occurred when two or more cis double bonds were present in one or both chains of the lipid [25].

Atomistic Glucose and Glucose Transport

Glucose uptake into tissues is performed by different specific glucose transporters. Glucose transporter proteins are integral membrane proteins containing 12 membrane-spanning helices. The glucose channel of a glucose transporter comprises eight helices that are immersed in a box formed by the remaining four helices [26]. The cross-section of this box has a mean surface area of 1,100 Å², which corresponds to the area of about 17 lipid molecules in a phosphatidylcholine bilayer with saturated fatty-acyl chains. Thus, the insertion of a glucose transporter molecule across a phospholipid cell membrane requires flexibility of the bilayer membrane.
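The figure of about 17 lipid molecules is a simple ratio of areas; a back-of-the-envelope sketch (using the DLPC area from Table 2 as a representative saturated-chain lipid):

```python
transporter_area = 1100.0    # Å², mean cross-section of the helix box [26]
lipid_area = 63.2            # Å² per saturated-chain PC lipid (DLPC, Table 2)

print(f"~{transporter_area / lipid_area:.0f} lipid molecules")  # ~17 lipid molecules
```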

β-Cell Functions

Variations in the lipid composition of cell membranes can profoundly impact the function of proteins embedded within them. Even small changes in the lateral pressure of a bilayer membrane can lead to significant alterations in the conformational distribution of the embedded proteins [27]. In type 2 diabetes mellitus, a redistribution process in membrane phospholipids can be triggered by a deficiency in cis carbon-carbon double bonds compared to healthy individuals. A reduction in the area A of lipid molecules in the cell membrane leads to decreased flexibility, which can hinder the movement and conformational changes of membrane-embedded proteins such as glucose transporters.

The collected information enables an exploration of ‘insulin resistance’ in the pathophysiology of type 2 diabetes. During the prediabetic phase, a crucial aspect of type 2 diabetes etiology is a decrease in the unsaturation index of membrane phospholipids, observed in erythrocytes, compared to healthy controls. This reduction in the unsaturation index lowers the flexibility of β-cell membranes, leading to slower transmembrane glucose transport via GLUT2 in the β-cells. Consequently, the insulin granules inside the β-cells contain less insulin. Thus, instead of attributing the condition solely to ‘insulin resistance’, the primary factor behind the decreased glucose levels within the β-cells, and the subsequent lower insulin production, is the reduction in membrane flexibility.

Following the synthesis of monomer insulin within the β-cell, six insulin monomers come together to form stable hexamers with a molecular weight of 36,000 [28]. These hexamers are then enclosed within mature intracellular vesicles and transported on demand to the β-cellʼs plasma membrane. A fusion pore is created upon the fusion of the intracellular vesicle membrane with the β-cell plasma membrane, releasing monomer insulin molecules into the bloodstream. Given the relatively large size of the insulin monomer (30 Å wide and 35 Å high) compared with the glucose molecule (overall size 10 Å), both the vesicle membrane and the β-cell plasma membrane require significant flexibility [29,30]. In individuals with type 2 diabetes, a reduction in the unsaturation index results in decreased membrane flexibility, leading to a slower rate of transmembrane insulin transport into the bloodstream. This phenomenon is in accordance with the data of individuals with type 2 diabetes, which show a reduction in insulin sensitivity of 57.9% and a much smaller reduction in glucose effectiveness of 26.2% (Table 1). Thus, instead of using the term ‘insulin resistance’, the key issue is the reduction in flexibility of the vesicle and β-cell plasma membranes, which impacts insulin release.

Membrane Flexibility and (Pre)diabetes Type 2

Table 3 records the biochemical outcomes of control individuals and individuals with prediabetes or type 2 diabetes mellitus. A study demonstrated that, compared with healthy controls, individuals with gestational diabetes mellitus showed reductions in total polyunsaturated fatty acids (31.9 vs. 37.5; Δ = -14.9%), arachidonic acid (11.0 vs. 12.8; Δ = -14.1%), and unsaturation index (137 vs. 163; Δ = -15.8%) [31]. Another study revealed that, compared with healthy controls, individuals with impaired glucose tolerance showed reductions in total polyunsaturated fatty acids (25.8 vs. 30.7; Δ = -16.0%), arachidonic acid (11.1 vs. 12.5; Δ = -11.2%), and unsaturation index (113 vs. 130; Δ = -13.1%) [32]. Furthermore, a study found that, compared with healthy controls, individuals with type 2 diabetes without retinopathy showed reductions in total polyunsaturated fatty acids (31.9 vs. 38.0; Δ = -16.1%), arachidonic acid (11.3 vs. 13.0; Δ = -13.1%), and unsaturation index (134 vs. 155; Δ = -13.6%) [33], whereas individuals with type 2 diabetes with retinopathy showed even lower values, compared with healthy controls, in total polyunsaturated fatty acids (29.5 vs. 38.0; Δ = -22.4%), arachidonic acid (9.7 vs. 13.0; Δ = -25.4%), and unsaturation index (128 vs. 155; Δ = -17.4%) [33].

Table 3: Erythrocyte acyl composition in phospholipids of control individuals, individuals with gestational diabetes, with impaired glucose tolerance, with type 2 diabetes without and with retinopathy, and type 2 diabetic men (study group).

                                Total        Total        Total        C20:4
                                SFAs (%)     MUFAs (%)    PUFAs (%)    n-6 (%)     UI
GDM
  Controls (n=61)                33.6         15.9         37.5         12.8       163
  Patients (n=53)                37.7         18.0         31.9         11.0       137
  Δ (%)                         +12.2        +13.2        -14.9        -14.1      -15.8
IGT
  Controls (n=42)                42.4         22.6         30.7         12.5       130
  Patients (n=28)                44.8         24.6         25.8         11.1       113
  Δ (%)                          +5.6         +8.8        -16.0        -11.2      -13.1
Type 2 diabetes ret.(-)
  Controls (n=18)                42.1         18.8         38.0         13.0       155
  Patients (n=14)                44.2         21.7         31.9         11.3       134
  Δ (%)                          +5.0        +15.4        -16.1        -13.1      -13.6
Type 2 diabetes ret.(+)
  Controls (n=18)                42.1         18.8         38.0         13.0       155
  Patients (n=46)                46.9         21.3         29.5          9.7       128
  Δ (%)                         +11.4        +13.3        -22.4        -25.4      -17.4
Type 2 diabetes (study group)
  Controls (n=14)                44.0         20.2         28.8         11.1       126
  Patients (n=21)                42.9         20.6         31.6         14.3       141
  Δ (%)                          -2.5         +2.0         +9.7        +28.8      +11.9

Ex-post calculations performed by the author are based on the original data listed by Min et al. [31,32], Koehrer et al. [33], and Pelikánová et al. [40]. SFA: saturated fatty acid; MUFA: monounsaturated fatty acid; PUFA: polyunsaturated fatty acid; UI: unsaturation index; C20:4 n-6: arachidonic acid. The values for gestational diabetes mellitus and impaired glucose tolerance are the means of the phosphatidylcholine and phosphatidylethanolamine values. The values for type 2 diabetes mellitus with retinopathy are the means of the mild, moderate, and severe diabetic retinopathy values [33].
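The Δ values in Table 3 are plain percentage changes of the patient means relative to the control means; a minimal sketch (values taken from the study-group rows of Table 3) reproduces the ex-post calculations:

```python
def delta_pct(control, patient):
    """Percentage change of the patient mean relative to the control mean."""
    return round((patient - control) / control * 100, 1)

# Study-group rows of Table 3 (Pelikánová et al. [40]):
controls = {"Total PUFAs": 28.8, "C20:4 n-6": 11.1, "UI": 126}
patients = {"Total PUFAs": 31.6, "C20:4 n-6": 14.3, "UI": 141}

for key in controls:
    print(key, delta_pct(controls[key], patients[key]))
# Total PUFAs 9.7, C20:4 n-6 28.8, UI 11.9
```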

These results imply that a decrease in the unsaturation index in individuals with prediabetes or type 2 diabetes mellitus has the potential to translate into an increase in the attractive forces between neighboring membrane phospholipid acyl chains, which redistributes the lateral pressure profile of the cell membrane [27]. This redistribution alters the mechanical and biochemical properties of the glucose transport channel proteins, i.e., it reduces the pore diameter, which in turn reduces the rate of glucose transport across the cell membrane and therefore causes the onset of type 2 diabetes mellitus [10].

These findings are substantiated by two striking facts. First, Shulman et al. studied muscle glycogen synthesis in subjects with type 2 diabetes mellitus and matched controls using in vivo carbon-13 nuclear magnetic resonance spectroscopy [34-36]. They noted that muscle glycogen synthesis rates in subjects with type 2 diabetes were about 50% of the rates observed in controls. The same group of researchers investigated, under hyperglycemic-hyperinsulinemic conditions, the pathway: transmembrane glucose transport into the muscle cell, conversion of intracellular glucose into glucose-6-phosphate, and then, after two more intermediates, the addition of the latter through glycogen synthase to the glycogen polymer. They concluded that their experimental results are consistent with the hypothesis that transmembrane glucose transport into the muscle cell is the rate-controlling step in insulin-stimulated muscle glycogen synthesis in patients with type 2 diabetes, and that the delivery of insulin is not responsible for the insulin resistance. This idea is in agreement with a constriction of the glucose channel within the three-dimensional structure of glucose transporter-4, which reduces the rate of transmembrane glucose transport in type 2 diabetes [37]. Second, epidemiological evidence suggests that human plasma free fatty acid levels are generally elevated during the course of a pregnancy, probably with vital functions in fetal energy metabolism [38,39]. It is not unlikely that this increase also leads to an additional reduction in membrane flexibility and increases the amount of maternal glucose available as an additional source of energy for the fetus.

A weakness of the hypothesis is the lack of evidence that an increase in the unsaturation index improves type 2 diabetes. There is, however, an overlooked study with a surprising result. In 1991, Pelikánová et al. described, among others, a study group of 21 men with mild-to-moderate type 2 diabetes and 14 control men matched for age, sex, body weight, and dietary intake [40]. In this study, type 2 diabetes was defined by the criteria of the National Diabetes Data Group. The individuals were enrolled within 1 year after diagnosis, were treated only with a diet, were less than 45 years of age, were free of signs of atherosclerotic complications, and had a BMI < 30 kg/m² [41]. The amount of saturated fatty acids in the diet was lower for the type 2 diabetes individuals than for the healthy controls (35.4 ± 12.2 g/d vs. 47.7 ± 10.8 g/d). Table 3 presents the ex-post calculations based on the original data on erythrocyte phospholipids described by Pelikánová et al. [40]. The salient points in the data of the study group are essential increases, compared with healthy controls, in total polyunsaturated fatty acids (31.6 vs. 28.8; Δ = +9.7%), arachidonic acid (14.3 vs. 11.1; Δ = +28.8%), and unsaturation index (141 vs. 126; Δ = +11.9%).

For the first time in the type 2 diabetes literature, individuals with mild-to-moderate type 2 diabetes have been treated successfully with diet alone in combination with a rise in the arachidonic acid level. The authors of the Pelikánová study suggested that the increase in arachidonic acid level could be diet-induced or could represent an increased incidence of atherosclerotic complications. A third option, however, arises from the use of arachidonic acid as a supplement for training adaptations in resistance-trained males [42]. The crucial biochemical outcome of this study, independent of the provenance of the arachidonic acid, is the experimental finding of an increase, compared with healthy controls, in the number of carbon-carbon double bonds in the cell membranes of the 21 men of the study group. This increased the unsaturation index, which is intricately linked to a rise in membrane flexibility, and therefore led to an additional increase in the rate of glucose transport across the cell membrane. In addition, the increase in membrane flexibility improved transmembrane insulin transport from the pancreatic β-cells into the blood circulation. Arachidonic acid, with its four carbon-carbon double bonds, is particularly effective at increasing membrane flexibility.

The most important steps of the hypothesis are diagrammatically illustrated in Figure 4. It predicts that the downregulation, identified by RNA sequencing, of the most strongly downregulated regulation-related gene, NKX2-1, and the most strongly downregulated metabolism-related gene, TPD52L3, causes an increase in plasma saturated free fatty acid levels, which leads to a decrease in the membrane unsaturation index and thereby reduces the rate of transmembrane glucose transport, resulting in type 2 diabetes mellitus. The reliability of these ideas is supported by three observations: first, the reduced glycogen synthesis rate in subjects with type 2 diabetes mellitus was about 50% of the rate observed in healthy controls; second, the normoglycemic offspring of parents with type 2 diabetes exhibited significant defects in both insulin sensitivity and the insulin-independent glucose removal rate; and third, the key feature of type 2 diabetes mellitus is an essentially larger defect in insulin sensitivity than in glucose effectiveness.

FIG 4

Figure 4: Hypothetical pathway of the development of type 2 diabetes

Attention to this hypothesis was very tastefully articulated by the late Denis McGarry in his article “What if Minkowski had been ageusic…” [43]. He argued that hyperglycemia and insulin resistance might be better explained when viewed in the context of underlying abnormalities of lipid metabolism.

Obesity and Type 2 Diabetes

Obesity is a chronic metabolic disease that has become the main risk factor for various non-communicable diseases, in particular type 2 diabetes. Obesity has been reported to account for 80-85% of the risk of developing type 2 diabetes, while recent research suggests that obese people are up to 80 times more likely to develop type 2 diabetes than those with a BMI of less than 22 [44]. The concentration of free fatty acids is elevated in obese individuals [45]. The increased release of free fatty acids from adipose tissue into the blood circulation elevates the plasma concentration of saturated fatty acids, because the unsaturation index of free fatty acids released from human white adipocytes is markedly lower than the unsaturation index of serum free fatty acids in healthy controls (85.5 versus 191.9, respectively). This phenomenon causes a shift from unsaturated to saturated fatty-acyl chains in the phospholipids of the erythrocyte membrane and the vascular endothelium, with all the associated consequences (Figure 4). This is not to suggest that free fatty acids cause insulin resistance, as proposed by Boden; rather, a decrease in the unsaturation index reduces the pore diameter of the glucose transporter protein, which in turn reduces the rate of glucose transport across the cell membrane [45].

Insulin Resistance

The findings presented above offer a potential explanation for the role of ‘insulin resistance’ in the pathogenesis of type 2 diabetes. Specifically, the significant downregulation of NKX2-1 and TPD52L3 appears to be the main cause of the reduction in cis carbon-carbon double bonds of membrane phospholipids. This reduction decreases the cross-sectional area A of the cylindrical part of the phospholipid molecule. As a result, the attractive forces between the phospholipid acyl chains increase, causing a redistribution of lateral pressure in cell membranes. This, in turn, induces a cross-sectional contraction of all Class 1 GLUT proteins, ultimately leading to a lower rate of transmembrane glucose transport. The idea aligns with observations from biophysical and structural studies highlighting the critical role of interactions between membrane proteins and lipid molecules in their folding and stability [46-48]. Clinically, the results of a study by the Diabetes Prevention Program Research Group are quite exciting. They indicate that among high-risk individuals, lifestyle intervention resulted in a 58% reduction in the incidence of type 2 diabetes, while metformin led to a 31% reduction, compared with placebo [49]. This valuable insight into treating type 2 diabetes according to the Diabetes Prevention Program Research Group deserves wider recognition and attention.

The consistent findings from various studies strongly support the conclusion that lifestyle-change treatment can effectively compensate for the loss of membrane flexibility by inducing an increase in the membrane unsaturation index. Therefore, it is advisable to incorporate assessment of the unsaturation index into the treatment protocol. By doing so, we can better address the needs of individuals with type 2 diabetes and work towards normalizing membrane flexibility. Furthermore, it is essential to reconsider the notion of ‘insulin resistance’. The original interpretation, that cells do not respond to insulin, is inaccurate. Instead, the correct understanding is that the number of carbon-carbon double bonds in the cell membrane regulates the rate of glucose transport. Thus, the concept of ‘insulin resistance’ loses relevance, especially considering the significant role of membrane flexibility in glucose transport.

Conclusions and Future Recommendations

The proposals of the Diabetes Prevention Program Research Group remain the method of choice for the treatment of type 2 diabetes. The key to lifestyle modification consists of increasing the patient’s unsaturation index, which would promote a phospholipid shift from saturation to unsaturation and thus reduce the incidence of type 2 diabetes. Clinical evidence has demonstrated that all 21 men in the study group managed their disease by increasing their unsaturation index without taking metformin. Based on the considerations described above, one could conclude that considerable clinical benefit would accrue from the essential concept: saturated fatty acids make the human cell membrane more rigid, while unsaturated fatty acids increase its flexibility, which ultimately postpones the onset of type 2 diabetes mellitus.

Two different techniques for quantifying the mechanical properties of cell membranes came to the same conclusion: an increase in carbon-carbon double bonds, as in poly-cis unsaturated chain bilayers, creates more membrane flexibility than is found in saturated/monounsaturated chain bilayers.

A substantial improvement in the quality of type 2 diabetes mellitus treatment can be made by introducing assessment of the unsaturation index, because the regulation of membrane flexibility represents a great step forward in the development of precision medicine for type 2 diabetes mellitus.

The investigation of the genetic origin of type 2 diabetes mellitus must be moved from the chemistry of DNA to the chemistry of RNA.

Physical activity, as a part of standard lifestyle intervention, needs to be accompanied by the following official instructions throughout the disease period:

– Increase the rate of glucose transport and insulin transport across cell membranes through caloric restriction and walking at 3 to 3.5 miles per hour for at least 150 minutes per week.

– Replace “insulin resistance” by “reduction in membrane flexibility”.

Conflict-of-Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Falta W, Boller R (1931) Insulärer und Insulinresistenter Diabetes. Klin Wochenschr 10: 438-443.
  2. Fernandez ML, Murillo AG (2022) Dietary treatments to reduce insulin resistance and inflammation in type-2 diabetic patients. Med Res Arch 10.
  3. Weijers RNM (2009) Risk loci for type 2 diabetes – Quo vadis? Clin Chem Lab Med 47: 383-386. [crossref]
  4. DeForest N, Majithia AR (2022) Genetics of type 2 diabetes: implications from large-scale studies. Curr Diab Rep 22: 227-235. [crossref]
  5. Weijers RNM (2016) Membrane flexibility, free fatty acids, and the onset of vascular and neurological lesions in type 2 diabetes. J Diabetes Metab Disord 15: 13. [crossref]
  6. Martin BC, Warram JH, Krolewski AS, Bergman RN, Soeldner JS, et al. (1992) Role of glucose and insulin resistance in development of type 2 diabetes mellitus: results of a 25-year follow-up study. Lancet 340: 925-929. [crossref]
  7. Takematsu E, Spencer A, Auster J, Chen PC, Graham A, et al. (2020) Genome wide analysis of gene expression changes in skin from patients with type 2 diabetes. Plos One 15: e0225267. [crossref]
  8. Chen Y, Frost S, Byrne JA (2016) Dropping in on the lipid droplet-tumor protein D52 (TPD52) as a new regulator and resident protein. Adipocyte 5: 326-332. [crossref]
  9. Cao Q, Chen J, Zhu L, et al. (2006) A testis-specific and testis developmentally regulated tumor protein D52 (TPD52)-like protein TPD52L3/hD55 interacts with TPD52 family proteins. Biochem Biophys Res Commun 344: 798-806.
  10. Weijers RNM (2020) Identification of the downregulation of TPD52-Like3 gene and NKX2-1 gene in type 2 diabetes mellitus via RNA sequencing. Arch Diab & Obes 3: 277-281.
  11. Coon EA, Ahlskog JE, Patterson MC, Niu Z, Milone M (2016) Expanding phenotypic spectrum of NKX2-1-related disorders: mitochondrial and immunologic dysfunction. JAMA Neurol 73: 237-238. [crossref]
  12. Petersen KF, Dufour S, Befroy D, Garcia R,Shulman GI (2004) Impaired mitochondrial activity in the insulin-resistant offspring of patients with type 2 diabetes. N Engl J Med 350: 664-671. [crossref]
  13. Sivitz WI, Yorek MA (2010) Mitochondrial dysfunction in diabetes: from molecular mechanisms to functional significance and therapeutic opportunities. Antioxid Redox Signal 12: 537-577. [crossref]
  14. PubChem Compound Database; CID=445638. Available from: (https://pubchem.ncbi.nlm.nih.gov/compound/palmitoleic_acid/)
  15. PubChem Compound Database; CID=445639. Available from: (https://pubchem.ncbi.nlm.nih.gov/compound/oleic_acid/)
  16. Bergman RN, Ader M (2000) Free fatty acids and pathogenesis of type 2 diabetes mellitus. Trends Endocrinol Metab 11: 351-356. [crossref]
  17. Villafan-Bernal JR, Acevedo-Alba M, Reyes-Pavon R, Diaz-Parra GA, Lip-Sosa DL, et al. (2019) Plasma levels of free fatty acids in women with gestational diabetes and its intrinsic and extrinsic determinants: systematic review and meta-analysis. J Diabetes Res 2019: 7098470. [crossref]
  18. Weijers RNM (2012) Lipid composition of cell membranes and its relevance in type 2 diabetes mellitus. Curr Diabetes Rev 8: 390-400. [crossref]
  19. Borkman M, Storlien LH, Pan DA, Jenkins AB, Chisholm DJ, et al. (1993) The relation between insulin sensitivity and the fatty-acid composition of skeletal-muscle phospholipids. N Engl J Med 328: 238-244. [crossref]
  20. Singer SJ, Nicolson GL (1972) The fluid mosaic model of the structure of cell membranes. Science 175: 720-731. [crossref]
  21. Levitt M (1974) Energy refinement of hen egg-white lysozyme. J Mol Biol 82: 393-420. [crossref]
  22. Baur LA, O’Conner J, Pan DA, et al. (1999) Relationships between maternal risk of insulin resistance and the child’s muscle membrane fatty acid composition. Diabetes 48: 112-116.
  23. Weijers RNM (2020) Fundamentals about onset and progressive disease character of type 2 diabetes mellitus. World J Diabetes 11: 165-181. [crossref]
  24. Li Y-J, Yang Y-N, Zhang H-J (2019) A microfluidic micropipette aspiration device to study single-cell mechanisms inspired by the principle of Wheatstone Bridge. Micromachines 10: 131. [crossref]
  25. Rawicz W, Olbrich KC, McIntosh T, Needham D, Evans E (2000) Effect of chain length and unsaturation on elasticity of lipid bilayers. Biophys J 79: 328-339. [crossref]
  26. Salas-Burgos A, Iserovich P, Zuniga F, Vera JC, Fischbarg J (2004) Predicting the three-dimensional structure of the human facilitative glucose transporter Glut1 by a novel evolutionary homology strategy: insights on molecular mechanisms of substrate migration, and binding sites of glucose and inhibitory molecules. Biophys J 87: 2990-2999. [crossref]
  27. Cantor RC (1997) Lateral pressures in cell membranes: a mechanism for modulating of protein function. J Phys Chem 101: 1723-1725.
  28. Blundell TL, Cutfield JF, Cutfield SM,Dodson EJ,Dodson GG, et al. (1972) Three-dimensional atomic structure of insulin and its relationship to activity. Diabetes 21: 492-505. [crossref]
  29. Weiss M, Steiner DF, Philipson LH, Feingold KR,Anawalt B, et al. (2014) Insulin biosynthesis, secretion, structure, and structure-activity. Bookshelf ID: NBK279029. [crossref]
  30. LaMorte WW (2016) From molecules to man, a perspective on size; Boston University School of Public Health.
  31. Min Y, Ghebremeskel K, Lowy C, Thomas B,Crawford MA (2004) Adverse effect of obesity on red cell membrane arachidonic and docosahexaenoic acids in gestational diabetes. Diabetologia 47: 75-81. [crossref]
  32. Min Y, Lowy C, Islam S, Khan FS, Swaminathan R (2011) Relationship between red cell membrane fatty acids and adipokines in individuals with varying insulin sensitivity. Eur J Clin Nutr 65: 690-695. [crossref]
  33. Koehrer P, Saab S, Berdeaux O, Isaïco R, Grégoire S, et al. (2014) Erythrocyte phospholipid and polyunsaturated fatty acid composition in diabetic retinopathy. PLoS One 9: e106912. [crossref]
  34. Shulman GI, Rothman DL, Jue T,Stein P,DeFronzo RA, et al. (1990) Quantitation of muscle glycogen synthesis in normal subjects and subjects with non-insulin-dependent diabetes by 13C nuclear magnetic resonance spectroscopy. N Engl J Med 322: 223-228. [crossref]
  35. Shulman RG, Bloch G, Rothman DL (1995) In vivo regulation of muscle glycogen synthase and the control of glycogen synthesis. Proc Natl Acad Sci USA 92: 8535-8542. [crossref]
  36. Roden M, Price TB, Perseghin G,Petersen KF,Rothman DL, et al. (1996) Mechanism of free fatty acid-induced insulin resistance in humans. J Clin Invest 97: 2859-2865. [crossref]
  37. Weijers RNM (2014) Membrane flexibility and cellular energy management in type 2 diabetes, gestational diabetes, and obesity. EMJ Diabet 2: 65-72. [crossref]
  38. Duttaroy AK, Basak S (2021) Maternal fatty acid metabolism in pregnancy and its consequences in the feto-placental development. Front Physiol 12: 787848. [crossref]
  39. Sivan E, Boden G (2003) Free fatty acids, insulin resistance, and pregnancy. Curr Diab Rep 3: 319-322. [crossref]
  40. Pelikánová T, Kohout M, Válek J, Base J, Stefka Z (1991) Fatty acid composition of serum lipids and erythrocyte membranes in type 2 (non-insulin-dependent) diabetic men. Metabolism 40: 175-180. [crossref]
  41. National Diabetes Data Group (1979) Classification and diagnosis of diabetes mellitus and other categories of glucose intolerance. Diabetes 28: 1039-1057. [crossref]
  42. Roberts MD, Iosia M, Kerksick CM,Taylor LW,Campbell B, et al. (2007) Effects of arachidonic acid supplementation on training adaptations in resistance-trained males. J Int Soc Sports Nutr 4: 21. [crossref]
  43. McGarry JD (1992) What if Minkowski had been ageusic? An alternative angle on diabetes. Science 258: 766-770. [crossref]
  44. Leitner DR, Frühbeck G, Yumuk V,Schindler K,Micic D, et al. (2017) Obesity and type 2 diabetes: two diseases with a need for combined treatment strategies – EASO can lead the way. Obes Facts 10: 483-492. [crossref]
  45. Boden G (2008) Obesity and free fatty acids. Endocrinol Metab Clin North Am 37: 635-646. [crossref]
  46. Bond PJ, Sansom MS (2006) Insertion and assembly of membrane proteins via simulation. J Am Chem Soc 128: 2697-2704. [crossref]
  47. Lee AG (2003) Lipid-protein interactions in biological membranes: a structural perspective. Biochim Biophys Acta 1612: 1-40. [crossref]
  48. Lee AG (2004) How lipids affect the activities of integral membrane proteins. Biochim Biophys Acta 1666: 62-87.
  49. Knowler WC, Barrett-Connor E, Fowler SE, Hamman RF, Lachin JM, et al.; Diabetes Prevention Program Research Group (2002) Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med 346: 393-403. [crossref]

Initial Processes on Replication of DNA by Interactions of Helical Structured Molecules – Origin of Life in the Water of the Earth (II)

DOI: 10.31038/GEMS.2023572

Abstract

This paper describes a hypothesis about phenomena that may have occurred early in the evolution of life, or pre-life, to explain why every creature on Earth has a strong preference for a specific chirality. A thread-like helical molecule cannot be replicated all at once by using a casting mold. Replication in living organisms is carried out by DNA. The interactions among DNA molecules can be inferred from the stereostructure of DNA. In a DNA duplex there are a D-type chiral strand and an L-type chiral strand linked by bases. These strands are generated simultaneously in the vicinity of each other, but the two do not coalesce, owing to the different chirality types of their helices. The paired strands of DNA can represent a series of amino acids for a protein; i.e., DNA can store the information needed to replicate a protein. If amino acid molecules were bonded to other amino acids at random times, the alignment of amino acids in the protein would be randomized. In such a case the replication is not accurate, and the result would be eliminated by natural selection. The current amino acid alignment is stored by means of the genetic code in DNA, and the replication of a protein is constructed by using the genetic code sequences sequentially. In the double helix of DNA there are leading strands and lagging strands, and the two strands proceed in rotationally opposite directions to find matched base pairs; the protein is formed when two genetic codes are in a key-keyhole relationship. The processing is carried out in amino acid units. The nucleobase ciphers of the codon and anticodon represent amino acids during the matching process. The chirality of biomolecules plays one of the most important roles in the activities of creatures on Earth.

Keywords

Protein, Chirality, Helix, DNA, Leading strand, Lagging strand, mRNA, tRNA

Introduction

Although we have much knowledge in the field of molecular biology, the reason for biology's strong preference for a specific chirality of amino acids, sugars, and other molecules remains an unanswered question in the field of life research [1]. S. Karasawa described that the origin of life can be explained by intermolecular interactions among molecules via a helical structure of water [2]. After that, the study was advanced by focusing on the helical structure of DNA; this paper presents the results of those studies. C. R. Cantor and P. R. Schimmel showed from mathematical requirements that asymmetric molecules can only have a helical structure as a polymer structure [3]. L. E. MacKenzie and P. Stachelek reported that twisted chiral molecular structures will interlock and bind only if their chirality allows them to; the chirality of a molecule interacts selectively with other molecules [4]. A. Shimada proposed that the thing that separates life from non-life is enantioselectivity, which escapes racemization [5]. N. Nemoto described that the chirality of biopolymers might depend on the synthetic process of the biopolymer units [6]. The formation of biochemical molecules depends on the surrounding environment, and the homochirality of creatures was established through natural selection, which allowed the creatures fit to live to survive. K. Tamura pointed out that biological homochirality could have been determined through the process of coevolution between nucleotides and amino acids. He reported that aminoacylation of tRNA could be the key step in the origin of amino acid homochirality: once L-amino acids had been selected, the elongation of L-amino acids by the ribosome would have synthesized proteins composed of L-amino acids [7,8]. The current building blocks of amino acids are left-handed (L-type) chiral molecules, and ribose has right-handed chirality (D-type). L-type amino acids never mingle with nucleic acids made of D-type sugars in a cell. K. Tamura and P. Schimmel showed that an RNA minihelix was aminoacylated by an aminoacyl phosphate D-oligonucleotide with a clear preference for L- as opposed to D-amino acids [9]. As a result of evolution, the current processing of the D-type lagging strand in the replication of DNA includes stops, turns, and intermittent synthesis, as shown by the Okazaki fragment [10]. The double helix of DNA cannot form without selectivity of chirality. The chirality of molecules supports the self-assembly of molecules and the formation of proteins using complex enzymes, which are themselves made of proteins. However, the replication of DNA has not been fully discussed from the viewpoint of interaction among helical structures up to now. S. Karasawa described that if amino acid molecules were related to mRNA physically and amino acids bound to mRNA at random timing, accurate replication would be impossible [2]. The current alignment of amino acids for the formation of a protein is realized by the genetic codes acting merely as representatives of amino acids. When codon and anticodon are in a key-keyhole relationship in the matching process, the amino acid is settled in the correct position, which makes it possible to reproduce the accurate protein.

The Environment for Formation of Cell Membrane

Movements of Liquid Water Molecules

The biochemical reactions are carried out in an environment of liquid water. Even when ice melts into liquid water, many hydrogen bonds remain. The enthalpy of a hydrogen bond of water is about 21 kJ/mol, and the change of thermal energy from ice to water is 6.0 kJ/mol, so more than 90% of the hydrogen bonds among molecules persist in liquid water. The hydrogen bonds generate many clusters in the liquid water. Since the tetrahedral molecule of water has an electric polarization, it assists in forming a cluster with a helical structure similar to α-quartz, which is the minimum-size, minimum-energy stereo structure made of asymmetric tetrahedrons [11]. The structure of liquid water is constantly being generated and extinguished, and its average lifespan is about 10⁻¹² seconds. The phase transition of quartz from β to α can be explained as a shrinkage of size by alternate rotation of the SiO4 units around the electrical axis, which changes the symmetry from hexagonal to trigonal and the projection of the tetrahedral unit onto the X-Y plane from square to trapezoidal. The unit structure of the helix is aligned with the central shafts standing vertically on a planar boundary, and it has three rotationally symmetric electrical axes. When the pair of hydrogen atoms located at the ends of the long and short sides vibrates around an electrical axis, the molecular pair at the inner vertices of the tetrahedron and the vertices of the outer side move up and down in opposite directions. These movements of asymmetric tetrahedrons make it possible to form the helical structure of DNA. Since the helix structure is of minimum size with minimum energy, it tends to expand over a wider area, so clusters of helixes are formed. Even though the lifetime of a cluster is very short, the coupling among the helical areas yields rapid biochemical interactions. As evidence of connection by the helical state of water, linked bubbles are observed at the thawing of ice made from carbonated water.
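
The enthalpy figures above can be turned into a rough order-of-magnitude check of the claim that most hydrogen bonds survive melting. This is a sketch under assumptions not stated in the paper: each water molecule shares about two hydrogen bonds (four bonds, each shared by two molecules), and the whole enthalpy of fusion goes into breaking hydrogen bonds.

```python
# Rough consistency check of the hydrogen-bond survival claim.
H_BOND = 21.0        # kJ/mol, enthalpy of one hydrogen bond (from the text)
FUSION = 6.0         # kJ/mol, enthalpy change from ice to liquid water (from the text)
BONDS_PER_MOL = 2.0  # assumed hydrogen bonds per molecule (4 bonds, each shared by 2)

# Upper bound on the fraction of bonds broken on melting.
broken_fraction = FUSION / (H_BOND * BONDS_PER_MOL)
remaining = 1.0 - broken_fraction
print(f"fraction of hydrogen bonds remaining: {remaining:.0%}")  # prints 86%
```

This crude estimate (~86% of bonds remaining) falls somewhat below the ">90%" figure quoted above, but it supports the qualitative point that the large majority of hydrogen bonds persist in liquid water.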

Catalytic Effects due to Helical Structure of Water Molecules

Proteins are made up of amino acids, whereas nucleic acids are made up of nucleotides. Proteins and nucleic acids are different types of biomolecules with different functions in the body. The control function relies on a difference in chirality between the molecules to be controlled and the controlling molecules. Waves on the sea surface are happening all the time. Fluctuations of pressure in water cause the helical structure of water to repeatedly expand and contract via rotation of the tetrahedrons around their electrical axes. Such repeated expansion and contraction of helical molecules contributes to the activities of a life. When a filamentous series of L-type amino acids invades the holes of the helical structure, the L-type structure of water will have a long lifetime owing to the same chirality. The cluster of helixes couples with neighboring similar helical structures. The linkage takes the role of a catalyst in the wet process. There are two types of helical molecular movements of liquid water, as shown in Figure 1.

fig 1

Figure 1: Helical structure of liquid water. Here, the electric polarization in a water molecule is expressed by arrows. An oxygen atom is expressed by a large circle, and a hydrogen atom by a small circle.

Largely Linked Helical Movements of Molecules in a Membrane

Tamura reported that a clear preference for L-amino acids as opposed to D-amino acids was noted in the efficient nonenzymatic aminoacylation of an RNA minihelix (progenitor of the modern tRNA) by an aminoacyl phosphate oligonucleotide [7]. There exists a linkage of L-type helical movement caused by aligned L-type molecules. Interlinks among helical movements will induce large helical movements. The existence of large systematic helical movements supports the formation of large helical arrangements of molecules such as DNA. L-type helical movements in the vicinity of L-type amino acids can assist in synthesizing L-type helical filamentous molecules of protein. Current DNA consists of an enormous number of molecules. When DNA was first created, every element that makes up the mechanism of DNA must have been incorporated at the same time. The first DNA formed through interactions among molecules in the vicinity of plural organized molecules. Since asymmetric tetrahedral molecules can only have a helical structure as a polymer structure [3] and D-type sugars exist in the cell, we can assume that D-type helical movements on the surface of the cell membrane can assist in synthesizing D-type helical filamentous molecules of sugars. The rotational thermal oscillations of helical structures of asymmetric molecules will induce surrounding thermal motions. It is considered that when two kinds of chiral molecules rotate, molecules of different chirality may rotate in opposite directions through a gear mechanism, as shown in Figure 2.

fig 2

Figure 2: The illustration shows only the mechanism by which D-type helical movements are induced by interactions among L-type helical molecules. There exist large systematic helical movements.

Evolution of Cell Membrane

Origin of Main Component in Cell Membrane

The first life forms were born depending on the environment around them. The current cell membrane contains hydrophobic long chains of C16H34 or C18H38 as a main component. These molecules would have been in the liquid state on the early Earth because they are liquid between about 20°C and 300°C. Since they are hydrophobic long chains and their specific gravity is lower than that of water, these molecules stay for a long time as an oil film on the surface of water. Such hydrocarbon molecules had been synthesized from CO2 in the early atmosphere of the Earth by collisions with the H+ of the solar wind. The smaller hydrocarbon molecules produced in the upper sky remain in the atmosphere and undergo repeated synthesis reactions [12]. In the helical structure of liquid water, there are vacant shafts surrounded by tetrahedrons of water molecules, as shown in Figure 1. The vacant shafts would have taken up a C16H34 or C18H38 molecule, and those molecules aligned vertically on the water surface.

The processes of evolution of cell membrane are as follows.

  1. Fatty acids produced by oxidation of the end terminals of hydrocarbons stayed in the hydrophobic layer.
  2. The membrane became robust when the heads of the fatty acids were linked laterally by glycerol.
  3. Long-chain polymeric carbohydrates (Cx(H2O)y) such as sugars tended to adhere to the hydrophobic layer.
  4. The adhered sugars became a component of nucleic acids.

The Structure of Phospholipid Membrane

The current cell membrane has a hydrophilic “head” containing a phosphate group and two hydrophobic “tails” derived from fatty acids, joined by a glycerol molecule as shown in Figure 3.

fig 3

Figure 3: The structure of the phospholipid membrane. The phospholipid arrangement in cell membranes contains a chiral center at the C2 position of the glyceryl moiety.

A phospholipid contains a chiral center at the C2 position of the glyceryl moiety. The twisted phospholipids will interlock, and the interlocked molecules induce systematic motions of chiral molecules of the same kind due to systematic thermal vibrations of the atoms in the molecule.

Screw Movement in a Phospholipid Bilayer

If one of the layers of a bilayer is rotated by 180°, the progress of the helix changes from the output side to the input side at the center of the bilayer, as illustrated in Figure 4. Despite this bilayer having two chirality centers, it provides a one-directional screw movement under one-directional rotation.

fig 4

Figure 4: The bilayer in which one of the layers is rotated by 180°, where the progress of the screw movement continues from the output side to the input side at the center of the bilayer.

Here, there are two kinds of helical structures concerned with the rotational direction of screw movements. When L-type amino acids are inserted into the phospholipid bilayer, an L-type bilayer will be formed, and D-type sugar molecules proceed in the emitting direction along the induced L-type bilayer. The chirality of the molecule synthesized by insertion and that of the molecule emitted by synthesis are different.

How DNA Structure was Formed

How mRNA and tRNA were Formed

The evolution of repetitive production of useful proteins added the process of regenerating proteins using RNA precursors. RNA, the tool for replication, decomposes quickly, which is favorable for life; therefore DNA, which is suitable for memory, needed to be synthesized. The tool for the replication of protein was replicated from DNA. The evolved process of protein replication is carried out by mRNA and tRNA with the support of plural complicated enzymes, where an amino acid is carried by tRNA and those amino acids are arranged into a protein by the information of mRNA. Each step of matching between codon and anticodon is checked as if at the contact point of a gear mechanism. The lagging strand that adhered to an amino acid links to the corresponding portion of mRNA by using the genetic codes. Since the chirality of the amino acid adhered to the leading strand is L-type, the new leading strand fits with the L-type surrounding molecules and will become a long strand. On the other hand, the chirality of the new lagging strand is D-type; it easily detaches from the surrounding molecules and will become short strands.

Structure of Double Helix of DNA

DNA is made of chemical building blocks called nucleotides. Each block is made of three parts: a phosphate group, a sugar group, and a nitrogen base. The phosphate and sugar groups are linked alternately into chains, and a pair of strands is coupled by nitrogen bases [13,14]. There are rotational thermal vibrations of atoms in the helical structure of molecules. Sugar molecules can adhere to the surface of the membrane, and those will be linked by phosphate. Strands of DNA will be formed along the rotating helical structures of the surface, as shown in Figure 2. The synthesis of a strand is facilitated by the nucleophilic action of the 3′ hydroxy group (-OH) of the terminal residue on the phosphate group (PO4). Then carbon 3′ is attached to carbon 5′ by a phosphate-mediated ester bond. The growth of the leading strand proceeds continuously from the 5′ end to the 3′ end. L-type helical strand structures in a cell are induced in the vicinity of the protein.

How Genetic Codes were Formed

The two strands in a DNA are linked by bases. Since the strands of DNA are coupled by nitrogen bases through hydrogen bonds at each synthesis step, naturally generated nitrogen base pairs would have been a mixture of various combinations. As the result of natural selection, the linkage of strands favorable for survival remained. The bases of current DNA are adenine (A), cytosine (C), guanine (G), and thymine (T). There are two kinds of base pairs that form complementary stable pairs: one pair is A (adenine) and T (thymine), and the other is C (cytosine) and G (guanine). One amino acid is assigned by a set of three bases chosen from the four bases. Since tRNA must link with both the anticodon and the amino acid, the structure of tRNA is complicated. “Aminoacyl tRNA synthetase” binds the amino acid to the specific tRNA that identifies its corresponding codon, and the “ribosome” ensures that tRNA correctly identifies the mRNA. Although the mechanism of tRNA is complicated, the resultant role of tRNA is simple. An illustration of the stepping progress of the activated portions of the leading strand and lagging strand is shown in Figure 5.
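
The triplet assignment described above can be illustrated with a minimal sketch of standard genetic-code combinatorics (the helper names here are illustrative, not from the paper; mRNA bases are used, with uracil in place of thymine):

```python
from itertools import product

BASES = "ACGU"  # mRNA alphabet: uracil (U) replaces thymine (T)
codons = ["".join(p) for p in product(BASES, repeat=3)]
print(len(codons))  # 4**3 = 64 possible triplets, encoding ~20 amino acids

# Watson-Crick pairing gives the anticodon; tRNA anticodons are read
# antiparallel to the codon, hence the reversal.
PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}

def anticodon(codon):
    return "".join(PAIR[b] for b in reversed(codon))

print(anticodon("AUG"))  # start codon -> prints CAU
```

The key-keyhole matching discussed in the text corresponds to this complementarity: applying the pairing twice returns the original codon.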

fig 5

Figure 5: Proceeding of activated paired bases in a double helix DNA.

Enzymes “Helicase” and “Polymerase” for Replication of DNA

The current replication of DNA is processed by using existing DNA. Depending on the twisting direction, the joint of nitrogen bases of DNA can be separated or brought closer. The twisting motion cannot be achieved without some change in the surrounding molecules. Since the system of molecules that drives the twisting motion includes amino acids, and amino acids make protein, the enzyme for the twisting function is reproduced by the protein. The current DNA system makes use of helicase to separate the two strands of a DNA and polymerase to join two strands. Helicase continues to move toward the unreplicated double-stranded region and leaves two single-stranded DNA templates that induce the formation of double-stranded daughter DNA duplexes. Since a separated strand can only be synthesized in the direction that extends the 3′ end, only the leading strand can replicate continuously as helicase moves.

The processes of current replication of DNA are as follows.

  1. Helicase unwinds the DNA, and the two strands are used for replication.
  2. New bases are added to the complementary parental strands.
  3. The leading strand is made continuously, while the lagging strand is made piece by piece.
  4. When the nucleotides (bases) are matched, two double helices are synthesized.

The representation of the replication of DNA is shown in Figure 6. The active portion of the lagging strand rotates in the opposite direction to that of the leading strand, and the synthesizing DNA waits for parts of the strand to be exposed and continues until it meets the 5′ end of the previously synthesized lagging strand. A short DNA fragment formed on the lagging strand is called an “Okazaki fragment” [10]. The Okazaki fragment is connected to the next Okazaki fragment shortly after synthesis to form a continuous new DNA. The different chirality of the strands is necessary because the two reproduced DNAs must be divided into two equal parts at the beginning of cell division.

fig 6

Figure 6: Replication of DNA by using the original DNA

Conclusion

This paper described a hypothesis about how DNA may have evolved in the water environment of the Earth. Hints about the interactions among the molecules in DNA can be obtained from the stereo structure of DNA. There are a leading strand and a lagging strand in a DNA, and the lagging strand proceeds in a rotationally opposite direction to the leading strand. The strands with different chirality in a DNA are coupled by nitrogen bases through hydrogen bonds at each synthesis step. When DNA was first created, every element that makes up the mechanism must have been incorporated at the same time. It is known that the L-type chirality of amino acids had been selected, and L-type helical movements of molecules can assist in synthesizing L-type helical filamentous molecules. It can be considered that D-type helical movements are induced through interactions among L-type helixes via a gear mechanism. Based on such estimations, it is possible to explain the generation of the D-type sugar of DNA on the surface of the cell membrane. The first formation must have been achieved through a tremendous number of trials and errors. The current process of replicating DNA uses existing DNA. After separation of the paired strands of DNA, the leading strand proceeds in a rotationally opposite direction to the lagging strand, which makes it possible to check the key-lock relationship between codon and anticodon. Replication of a protein involves multiple reactions. As a further prospect, an enzyme is made by protein; however, our understanding is based on causal laws, and we cannot express the precise progress of concurrent different reactions with a single logical expression. Overlapping multiple causal laws can be expressed by using a pattern. Although unknown mechanisms exist, revealing various aspects of the surroundings contributes to improving the understanding of the complicated mechanism of protein replication. It can be concluded that the chirality of biomolecules can take the role of a bridge between the origin of life and molecular biology.

Acknowledgement

The author expresses his sincere thanks to Professor Koji Tamura from Tokyo University of Science for helpful discussions and comments on the manuscript.

References

  1. Lee C, Jessica MW, Laura ER, Rachel YS, Laura MB, et al. (2022) Chirality in organic and mineral systems: a review of reactivity and alteration processes relevant to prebiotic chemistry and life detection missions. Symmetry 14.
  2. Karasawa S (2023) Origin of life in the water of the Earth. Geology, Earth & Marine Sciences 5: 1-7.
  3. Cantor CR, Schimmel PR (1980) Biophysical Chemistry: The Conformation of Biological Macromolecules. P7, W H Freeman & Co., USA.
  4. MacKenzie LE, Stachelek P (2021) The twists and turns of chiral chemistry. Nature Chemistry 13: 521-522.
  5. Shimada A (2016) Thing that separates life from non-life – it’s enantioselectivity. Special Feature: The 41st Symposium, Viva Origino 44, 3.
  6. Nemoto N (2016) A chirality of biopolymer from the perspective of evolutionary molecular engineering. Special Feature: The 41st Symposium, Viva Origino 44, 5.
  7. Tamura K (2008) Origin of amino acid homochirality: relationship with the RNA world and origin of tRNA aminoacylation. Biosystems 92: 91-98.
  8. Tamura K (2016) Interaction of sugars and amino acids in determining the origin of chirality. Special Feature: The 41st Symposium, Viva Origino 44, 4.
  9. Tamura K, Schimmel P (2004) Chiral-selective aminoacylation of an RNA minihelix. Science 305.
  10. Okazaki R, Okazaki T, Sakabe K, Sugimoto K, Sugino A (1968) Mechanism of DNA chain growth. I: possible discontinuity and unusual secondary structure of newly synthesized chains. Proc Natl Acad Sci USA 59: 598-605.
  11. Karasawa S (1974) Origin of piezoelectricity in an α-quartz. Japanese Journal of Applied Physics 13: 799-803.
  12. Karasawa S (2022) Earliest BIF and life produced via submarine volcanism in carbonated seawater. Geology, Earth & Marine Sciences 4: 1-5.
  13. Watson JD, Hopkins NH, Roberts JW, Steitz JA, Weiner AM (1987) Molecular Biology of the Gene [4th ed] Chapt. 9, p. 241, The Benjamin/Cummings Publishing Company, Inc., USA.
  14. McKee T, McKee JR (2016) Biochemistry: The Molecular Basis of Life, 6th ed, Sec. 18.1, Oxford Univ. Press, UK.

Extreme Enrichment of Cs during the Crystallization of the Ehrenfriedersdorf Pegmatite Melt Related to the Variscan Tin Mineralization

DOI: 10.31038/GEMS.2023571

Abstract

This short paper shows the excessive enrichment of Cs in a volatile-rich evolved pegmatitic silicate melt. This water-rich melt (plus B and F) with about 30% bulk water was, with high probability, created by an input of a supercritical fluid from the earth’s mantle. The water content of a granitic melt at ~900°C and crustal pressures (<5 kbar) is too low to create such a volatile-rich evolved melt. Furthermore, the quartz contains nanodiamonds and graphite. These are strong hints of the participation of supercritical fluids.

Keywords

H2O-B2O3-F-rich melt, Cs-rich melt inclusions, Multistage liquid-liquid immiscibility, Supercritical fluids

Introduction

During the study of melt inclusions in quartz of a pegmatite body related to the Variscan Ehrenfriedersdorf tin deposit in the Central Erzgebirge/Germany, we often found high concentrations of Cs in different inclusion types: water-poor melt inclusions rich in Cs, water-rich melt inclusions with a moderate Cs content, and extremely Cs-pentaborate-rich inclusions trapped near the crest of the main solvus. The primary solvus curve (B vs. H2O) results from the simultaneous enrichment of water, boron, and fluorine and forms a pseudo-binary solvus (H2O + B2O3 + F + Silicate melt) versus temperature (Figure 1a and 1b).

FIG 1A

Figure 1a: Boron versus water concentration in conjugate type-A (blue) and type-B melt inclusions (red) in the Ehrenfriedersdorf pegmatite quartz. Both compounds portray a solvus curve (melt-water). Included are the corresponding isotherms (500, 600, and 650°C) – see Thomas et al. (2003) [1].

From these curves (Figures 1a and 1b), we see that at the critical point (solvus crest), high concentrations of H2O, B2O3, and F are present: about 27.5, 4.2, and 9.0%, respectively. The data are from published and unpublished work of Veksler and Thomas (2002) [1-4].

FIG 1B

Figure 1b: Fluorine versus water (H2O) concentration obtained from melt inclusions from the same Ehrenfriedersdorf pegmatite quartz.

From unpublished hydrothermal diamond anvil cell (HDAC) experiments performed in 2002 together with Ilya Veksler and Christian Schmidt, we know that in synthetic pegmatite melts similar to the Ehrenfriedersdorf pegmatite with about 50% (vol/vol) water, multistage liquid-liquid immiscibility processes happen in the temperature range from 840 down to 300°C. Each main phase ever formed tends toward liquid immiscibility. Such compartments are very contrasting and show extreme enrichment of rare elements like boron, fluorine, cesium, beryllium, tin, and others (e.g., Thomas et al. 2019 and 2022) [5,6]. This specific experiment, unfortunately, ended shortly before total homogenization at about 900°C with the crash of a diamond of the HDAC. The experiment shows clearly, however, that the formation of such a water-rich melt is only possible at very high temperatures and that, with cooling, multi-phase separation happens steadily down to low temperatures around 160°C (Figures 2 and 3). If we look at Figure D in Schröcke’s contribution (1954) [7], it follows that primary formation of this pegmatite type in situ is impossible. Much water and energy are needed to develop the mass of pegmatite bodies shown there (see also Johannes and Holtz, 1996) [8]. In a couple of publications, Thomas [9-14] and Thomas et al. [15,16] have shown, by the finding of typical mantle minerals in granites and pegmatites, that the supplier of the necessary water (energy and other components) can be supercritical fluids coming directly from the mantle region. From the unfinished HDAC experiment (see above), we have learned that during heating and cooling many phase changes happen, primarily by liquid-liquid immiscibility (see exemplary Figures 2 and 3).

FIG 2

Figure 2: Look through the microscope at the sample chamber formed by an Ir-gasket with a hole of 300 µm between two diamonds of the HDAC at two different temperatures. Conspicuous are the different phases formed by liquid-liquid immiscibility.

FIG 3

Figure 3: The same HDAC experiment at lower temperatures (390 and 160°C). V: Vapor, XXX: late-formed crystal. The “crystals” marked area stands for the melt 1+2, now wholly solidified.

Sample Material

The sample material (Qu8) comes from a miarolitic pegmatite body in the Sauberg tin mine near Ehrenfriedersdorf, Germany. A description of the locality is given in Webster et al., 1997 [4]. The quartz sample was as big as a fist. Many 500 µm thick slices, polished on both sides, were produced from this sample. The vapor phase of some melt inclusions in this quartz contains high hydrogen, methane, and CO2 concentrations: XH2=0.58, XCH4=0.26, XCO2=0.16 (Thomas and Webster, 2000) [17]. In a quartz crystal (Qu8-45) from the same pegmatite, not studied here, we have found large aggregates of graphite and nanodiamond.

Methodology

The generally used methods are described in Webster et al. 1997 [4] and Thomas [13], and references therein. For the microprobe study (main and trace elements), we mainly used the CAMECA SX50 microprobe. We used Raman spectroscopy (see Thomas 2023e and references) [13] to determine the water content as the basis component for constructing the pseudo-binary solvus curves (see also Thomas and Davidson, 2016) [18].

Results

This contribution is restricted to the cesium (Cs) results. Cesium, with a Clarke value of about 5.0 ppm in granitic rocks (Rösler and Lange, 1975) [19], is enriched to extremely high values of about 160000 ppm. That corresponds to a 32000-fold enrichment, an incredible value. Further, we see relationships between Cs and H2O, Cs and B2O3, and B2O3 and H2O. Figure 4a demonstrates the enrichment of Cs (as Cs2O) with water. The Cs shows here a good Lorentzian distribution. The data come from typical melt inclusions in the pegmatite quartz from the Sauberg mine. The same distribution type results for Cs and B in Figure 4b. This figure shows a strong enrichment of Cs at a more or less constant B2O3 concentration of about 2.3% B2O3. According to He et al. (2020) [20], B2O3 reduces pollucite’s [(Cs, Na)(AlSi2)O6 • nH2O] crystallization temperature (maybe under 700-600°C) and improves immobilization through an encapsulation effect, here by Al-silicates. The melt inclusions are relatively water-poor (~5%) and represent the heavy residue. Figure 4c shows the simplified solvus curve for the system melt-H2O versus B2O3 (Figure 1a). Figure 4d displays the extreme Cs enrichment near the solvus crest of the melt-H2O-B2O3 system as Cs-pentaborate (Ramanite-(Cs)). This Cs-pentaborate was first found in pegmatite material from the Isle of Elba (see Thomas et al., 2008) [21] – later also in Malkhan (Thomas et al., 2012) [22] and Ehrenfriedersdorf (Sauberg mine). The Lorentzian distribution of Cs vs. B is untypical, whereas that of Cs vs. H2O is typical for some elements (Be, Sn, and others) [5,6]. Table 1 gives the fit data for the three Cs distributions.
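
The enrichment factor quoted above is simple arithmetic and can be checked directly (variable names are illustrative):

```python
clarke_cs = 5.0         # ppm, Clarke value of Cs in granitic rocks (from the text)
measured_cs = 160000.0  # ppm, maximum Cs measured in the melt inclusions

enrichment = measured_cs / clarke_cs
print(f"enrichment factor: {enrichment:.0f}x")  # prints: enrichment factor: 32000x
```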

Table 1: Lorentzian fit data

Distribution    Area    Center (%)  Width (%)  Offset (%)  Height (%)  R²
Cs2O-B2O3       5.226   2.264       0.553      1.621       6.012       0.90983
Cs2O-H2O        26.592  25.991      5.631      0.134       3.006       0.96752
Ramanite-(Cs)   13.309  29.992      0.900      5.676       9.419       0.99436

In comparison to Figure 4a, the distribution of Cs2O vs. H2O shown in Figure 4d is shifted a little to higher water concentrations, perhaps marking the real critical point of the system, or shifted by the crystal water in the formula of Ramanite-(Cs) [CsB5O8 • 4H2O] (see Thomas et al. 2008) [21].
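
The Height values in Table 1 are consistent with the common area-form Lorentzian peak function, an assumption on our part since the paper does not state which parameterization was fitted: y(x) = y0 + (2A/π)·w/(4(x−xc)² + w²), whose amplitude above the offset is 2A/(πw). A minimal sketch with the Cs2O-H2O row:

```python
import math

def lorentzian(x, y0, xc, w, A):
    """Area-form Lorentzian peak: y0 = offset, xc = center,
    w = full width at half maximum, A = area under the peak."""
    return y0 + (2.0 * A / math.pi) * w / (4.0 * (x - xc) ** 2 + w ** 2)

# Fit parameters from Table 1, Cs2O vs. H2O distribution.
y0, xc, w, A = 0.134, 25.991, 5.631, 26.592

peak_height = lorentzian(xc, y0, xc, w, A) - y0  # amplitude above offset = 2A/(pi*w)
print(round(peak_height, 3))  # prints 3.006, matching the Height column
```

The other two rows reproduce their Height values to within a few thousandths as well, which supports this reading of the table.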

FIG 4A

Figure 4: (a) Distribution of Cs with H2O, (b) Cs with B, (c) H2O and B, (d) Cs as Ramanite-(Cs) with H2O. The microprobe data for (b) are from Thomas et al. 2019 [5].

Discussion

The enrichment of Cs shown here during the crystallization of a pegmatite-forming melt related to the Variscan tin deposit Ehrenfriedersdorf is unexpectedly high. The origin of such distributions is at the moment not clear; more research is indispensable. Are such Lorentzian-type distributions of elements with water a characteristic feature of the participation of supercritical fluids in the tin mineralization here? Some elements (Be) are Gaussian distributed (see Thomas 2023d) [12]. What are the reasons for the different distribution types? Which physicochemical processes are the determining steps? Figure 4 shows that the behavior of Cs in water-rich high-temperature melt-water systems is very complicated. Therefore, storing the radioactive 137Cs as boron-stabilized pollucite is doubtful (see Yokomori et al., 2014) [23,24]. Because each element in the supercritical state can be enriched to high values, we assume, in principle, that universal cooperative interactions between the particles work under these conditions. The opposite has, up to now, not been proven.

Acknowledgment

The first author thanks I-Ming Chou and William (Bill) A. Bassett for their professional introduction to the HDAC technique during his visit to the Argonne National Laboratory in the summer of 2001. Ilya Veksler and Christian Schmidt are thanked for the joint HDAC experiments on synthetic pegmatite melts.

References

  1. Thomas R, Förster HJ, Heinrich W (2003) Behaviour of Boron in a peraluminous granite-pegmatite system and associated hydrothermal solutions; a melt and fluid-inclusion study. Contributions to Mineralogy and Petrology 144: 457-472.
  2. Veksler IV, Thomas R (2002) An experimental study of B-, P- and F-rich synthetic granite pegmatite at 0.1 and 0.2 GPa. Contributions to Mineralogy and Petrology 143: 673-683.
  3. Thomas R, Schmidt C, Veksler I, Davidson P, Beurlen H (2006) The formation of peralkaline pegmatitic melt fractions: evidence from melt and fluid inclusion studies. Extended abstract. XLIII. Congresso Brasileiro de Geologia, 2006, published 2008, 757-761.
  4. Webster JD, Thomas R, Rhede D, Förster HJ, Seltmann R (1997) Melt inclusion in quartz from an evolved peraluminous pegmatite: Geochemical evidence for strong tin enrichment in fluorine-rich and phosphorus-rich residual liquids. Geochimica et Cosmochimica Acta 61: 2589-2604.
  5. Thomas R, Davidson P, Appel K (2019) The enhanced element enrichment in the supercritical states of granite-pegmatite systems. Acta Geochim 38: 335-349.
  6. Thomas R, Davidson P, Rericha A, Voznyak DK (2022) Water-rich melt inclusions as “frozen” samples of the supercritical state in granites and pegmatites reveal extreme element enrichment resulting under non-equilibrium conditions. Mineralogical Journal (Ukraine) 44: 3-15.
  7. Schröcke H (1954) Zur Paragenese erzgebirgischer Zinnlagerstätten. Neues Jahrbuch Mineral. 87: 33-109.
  8. Johannes W, Holtz F (1996) Petrogenesis and experimental petrology of granitic rocks. Springer.
  9. Thomas R (2023a) Unusual cassiterite mineralization, related to the Variscan tin-mineralization of the Ehrenfriedersdorf deposit, Germany. Aspects in Mining & Mineral Sciences 11: 1233-1236.
  10. Thomas R (2023b) Ultrahigh-pressure and temperature mineral inclusions in more crustal mineralizations: The role of supercritical fluids. Geology, Earth and Marine Sciences 5: 1-2.
  11. Thomas R (2023c) A new fluid inclusion type in hydrothermal-grown beryl. Geology, Earth and Marine Sciences 5: 1-3.
  12. Thomas R (2023d) To the geochemistry of beryllium: The other side of the coin. Geology, Earth and Marine Sciences 5: 1-4.
  13. Thomas R (2023e) Raman spectroscopic determination of water in glasses and melt inclusions: 25 years after the beginning. Geology, Earth and Marine Sciences 5: 1-2.
  14. Thomas R (2023f) The Königshain granite: Diamond inclusions in Zircon. Geology, Earth and Marine Sciences 5: 1-4.
  15. Thomas R, Davidson P, Rericha A, Recknagel U (2023a) Ultrahigh-pressure mineral inclusions in a crustal granite: Evidence for a novel transcrustal transport mechanism. Geosciences 13: 94: 1-13.
  16. Thomas R, Recknagel U, Rericha A (2023b) A moissanite-diamond-graphite paragenesis in a small beryl-quartz vein related to the Variscan tin-mineralization of the Ehrenfriedersdorf deposit, Germany. Aspects in Mining & Mineral Sciences 11: 1310-1319.
  17. Thomas R, Webster JD (2000) Strong tin enrichment in a pegmatite-forming melt. Mineralium Deposita 35: 570-582.
  18. Thomas R, Davidson P (2016) Revisiting complete miscibility between silicate melts and hydrous fluids, and the extreme enrichment of some elements in the supercritical state – Consequences for the formation of pegmatites and ore deposits. Ore Geology Reviews 72: 1088-1101.
  19. Rösler HJ, Lange H (1975) Geochemische Tabellen. VEB Deutscher Verlag für Grundstoffindustrie, Leipzig. 675 p.
  20. He P, Fu S, Wang M, Duan X, Wang O, et al. (2020) B2O3-assisted low-temperature crystallization of pollucite structures and their potential applications in Cs+ immobilization. Journal of Nuclear Materials 540.
  21. Thomas R, Davidson P, Hahn A (2008) Ramanite-(Cs) and ramanite-(Rb): New cesium and rubidium pentaborate tetrahydrate minerals identified with Raman spectroscopy. American Mineralogist 93: 1034-1042.
  22. Thomas R, Davidson P, Badanina E (2012) Water- and boron-rich melt inclusions in quartz from the Malkhan pegmatite, Transbaikalia, Russia. Minerals 2: 435-458.
  23. Thomas R, Davidson P (2007) The formation of granitic pegmatites from the viewpoint of melt and fluid inclusions and new experimental work. Granitic Pegmatites: the state of the art – International Symposium, Porto, Portugal, 13-16.
  24. Yokomori Y, Asazuki K, Kamiya N, Yano Y, Akamatsu K, et al. (2014) Final storage of radioactive cesium by pollucite hydrothermal synthesis, Scientific Reports 4, 4195: 1-4.

Prostate Cancer Prediction and Detection Using Hybrid CNN-Deep Learning Techniques

DOI: 10.31038/CST.2023833

Abstract

Prostate cancer is a widely recognized form of cancer characterized by the proliferation of malignant cells in the prostate gland, a small organ responsible for producing seminal fluid in men. The progression of prostate cancer is typically slow and initially confined to the prostate gland, often causing minimal harm. Common symptoms include frequent urination, weak urine flow, blood in the urine or seminal fluid, and pain during urination. However, current methods for detecting prostate cancer from diffusion-weighted imaging (DWI) suffer from several limitations, such as low accuracy, complexity, high computational requirements, and the need for extensive training data. To address these challenges, researchers in the medical imaging field are exploring different Convolutional Neural Network (CNN) models and techniques for object detection and segmentation. In this study, a modified CNN system is proposed to develop an automated algorithm capable of detecting clinically significant prostate cancer using DWI images of patients. The study employed a clinical database consisting of 970 DWI images from individuals, with 17 cases diagnosed with prostate cancer (PCa) and 14 cases considered healthy. The performance of the proposed system was evaluated using a training database containing 940 patients, while the remaining 20 patients were reserved for testing. The results demonstrated that the proposed system exhibited improved sensitivity, reduced computational requirements, high performance, and lower time complexity compared to the current prototype system.

Keywords

Detection, Prediction, Neural networks, CNN, Prostate cancer, Deep learning, DWI, CAD, Image processing and classification

Introduction

In the year 2020, prostate cancer ranked as the third highest cause of cancer-related fatalities in men in the United States. During this period, there were an estimated 161,461 new cases of prostate cancer [1,2], constituting 19% of all newly diagnosed cancer cases. Additionally, prostate cancer was responsible for 26,831 deaths, accounting for 8% of all cancer-related deaths. While prostate cancer is the most frequently occurring cancer among men, the prognosis for treatment is considerably favorable when the disease is detected in its early stages. Consequently, the implementation of effective monitoring and early detection methods plays a crucial role in enhancing the survival rates of patients. Machine Learning (ML) is an integral component of Artificial Intelligence (AI) that employs statistical, probabilistic, and historical tools to make informed decisions or predictions based on new data. Within the realm of clinical imaging, the fusion of imaging and ML techniques has given rise to computer-assisted detection and diagnosis. This innovative approach has demonstrated substantial potential in assisting radiologists with precise diagnoses, reducing diagnosis time, and enhancing diagnostic accuracy [3,4]. The conventional method for constructing ML models entails extracting quantitative imaging features like shape, volume, intensity, and other data attributes from the image data. These features are subsequently utilized in conjunction with ML classifiers such as Support Vector Machines (SVM) or Decision Trees (DT). Deep learning techniques have demonstrated their efficacy in various computer vision tasks, encompassing segmentation, detection, and classification. These techniques leverage convolutional layers to extract distinctive features, progressing from low-level local patterns to high-level global patterns within input images.
The incorporation of a fully connected layer at the end of the convolutional layers enables the conversion of intricate patterns into probabilities assigned to specific labels [5,6]. The performance of deep learning-based methods can be further enhanced by employing different types of layers, such as batch normalization layers, which normalize layer inputs to have zero mean and unit variance, and dropout layers, which randomly exclude selected nodes. Nevertheless, achieving optimal performance necessitates identifying the ideal combination and configuration of these layers, as well as precise tuning of hyperparameters. This challenge persists as one of the primary obstacles in the application of deep learning techniques across diverse domains, including medical imaging. In the context of clinical imagery, each MRI slice contains valuable information pertaining to the location and size of prostate cancer. Ishioka et al. conducted a slice-level analysis involving 318 patients, utilizing U-Net and ResNet models. Remarkably, they achieved an impressive AUC (Area Under the Curve) value of 0.78 on the test set, utilizing only 16 separate slices, without any additional training or validation steps [7,8].
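The batch-normalization and dropout layers described above can be sketched in a few lines of plain Python (a minimal illustration only, not the paper's implementation; the epsilon constant, dropout rate, and seed are assumed values):

```python
import math
import random

def batch_norm(values, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + eps) for v in values]

def dropout(values, rate=0.5, seed=0):
    """Randomly zero a fraction `rate` of nodes, rescaling the survivors
    (inverted dropout, applied at training time only)."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < rate else v / (1.0 - rate) for v in values]

activations = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(activations)  # mean ~0, variance ~1
```

In a real network these operations are applied per channel over a mini-batch, with learnable scale and shift parameters; the sketch shows only the normalization itself.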

The research paper focuses on the following areas: Our study aims to predict the recurrence of prostate cancer using H&E stained tissue images without relying on manually designed features. Instead, we utilize deep learning techniques to automatically learn a hierarchical representation of features, enabling the differentiation between recurring and non-recurring morphological patterns [9,10]. We propose a two-stage deep learning-based approach to classify H&E stained tissue images and estimate the probability of prostate cancer recurrence. This approach can also be extended to predict other specific tissue classes, such as cancer grades, types, and molecular subtypes, across different organs to facilitate precise treatment planning. In the first stage, we employ a convolutional neural network (CNN) to accurately identify the locations of nuclear centers within a given tissue image. Our nucleus-localization algorithm is trained to detect both epithelial and stromal nuclei within tumor areas as well as non-tumor regions of the tissue images. For the second stage, we utilize another modified CNN that takes patches centered on the identified nuclear centers as input, producing patch-wise predictions of cancer recurrence likelihood. This stage can be easily adapted and retrained to estimate the likelihood of different cancer subtypes in diverse types of tissue images [11,12]. The final estimation of subtype likelihood for a given patient (indicating the likelihood of recurrence) is determined by aggregating the patch-wise probabilities obtained from the modified CNN. To address unwanted variations in tissue images caused by staining and scanning discrepancies across medical clinics, we incorporate a color standardization step as a preprocessing technique. This helps correct the effects of such variations.
Additionally, we surveyed the literature; for example, Jemal [1] presented, in a cancer journal targeting clinicians, information on the number of reported deaths in 2012 categorized by age for the top 10 leading causes of death, including the 5 leading causes of cancer-related deaths [13,14].
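The patch-wise aggregation step described above can be sketched in plain Python (a simplified illustration; mean aggregation and the 0.5 threshold are assumptions for the sketch, not the paper's exact rule):

```python
def patient_recurrence_score(patch_probs):
    """Aggregate patch-wise recurrence probabilities into one patient-level score."""
    return sum(patch_probs) / len(patch_probs)

def classify(patch_probs, threshold=0.5):
    """Label a patient as recurrent if the aggregated score exceeds the threshold."""
    return "recurrent" if patient_recurrence_score(patch_probs) > threshold else "non-recurrent"

probs = [0.9, 0.8, 0.7, 0.4]  # hypothetical CNN outputs for four patches
label = classify(probs)
```

Other aggregation rules (e.g., the median or a trimmed mean) could be substituted without changing the overall two-stage structure.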

Literature Survey

Jemal [1] introduced a publication called the “Cancer Journal for Clinicians,” which presents data on the actual number of deaths in 2012 categorized by age for the top 10 leading causes of death, including the top 5 leading causes specifically related to cancer. The NAACCR (North American Association of Central Cancer Registries) indicated that all cancer cases were classified using the International Classification of Diseases for Oncology, with the exception of cancers occurring in children and adolescents. Causes of death were documented based on the International Classification of Diseases [15,16]. To account for potential delays in data collection, the disease recurrence rate provided in the current study was adjusted whenever possible. These delays might arise due to the flexibility in data capture or the process of reviewing and updating the data. The likelihood of receiving a diagnosis of invasive cancer is higher in men (42%) compared to women (38%). In 2016, it was estimated that approximately 10,380 children (from birth to 14 years) would be diagnosed with cancer, excluding benign/borderline brain tumors. Among those diagnosed, 1,250 children were projected to succumb to the disease. The calculation method used for 2016 does not include case estimates for benign and borderline brain tumors, as reporting data on these tumors was not mandatory until 2004 [17,18]. From 1975 to 2012, the incidence rates of cancer in children and adolescents exhibited an annual increase of 0.6%. A publication in Cancer Inst. Monogr. [2] proposed that overdiagnosis of prostate cancer can be attributed to two main factors. Firstly, there is a relatively large, asymptomatic population affected by this disease, as indicated by postmortem studies and histologic examination of prostates removed for other reasons.
Secondly, the widespread use of screening methods such as prostate-specific antigen (PSA) tests and digital rectal examinations, which have been implemented among various groups of men in the United States over the past two decades, contributes to the over-diagnosis of prostate cancer. Cancer overdiagnosis refers to the detection of cancers that would not have become clinically evident during a patient’s lifetime or would not have led to cancer-related death. Importantly, these studies identified instances of cancer even in men in their twenties, with a prevalence ranging from 8% to 11%, highlighting the long period of latency between the development of prostate cancer and the appearance of symptoms in some individuals [19,20]. The incidence of prostate cancer observed in the group undergoing screening showed a 58% higher rate compared to what would be expected in a contemporary US population, as determined using Surveillance, Epidemiology, and End Results (SEER) data. In addition, there was a substantial 200% increase in incidence when compared to men from the pre-PSA era. These findings suggest the potential for significant overdiagnosis when annual screening is conducted over many years, as practiced in the PLCO (Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial). Furthermore, assessments of overdiagnosis can be influenced by racial disparities. Etzioni et al., using SEER registry data from 1988 to 1998, demonstrated how racial differences impact the over-diagnosis of prostate cancer [21,22]. The majority of prostate tumors detected through screening are asymptomatic and localized within the clinical setting. According to SEER data from 2004 to 2005, approximately 94% of newly diagnosed men exhibit T1 or T2 disease classification. While the existence of overdiagnosis poses challenges, an even greater concern is the subsequent overtreatment of tumors detected through screening. 
In the United States, a significant number of men with such diseases receive aggressive treatment. Efforts have been made to improve the identification of aggressive prostate cancer by utilizing prostate-specific antigen (PSA) testing. One approach to mitigate over diagnosis involves selectively implementing screening and biopsy procedures solely for individuals at a higher risk of prostate cancer-related mortality. Various research groups have assessed the effectiveness of serum PSA as a predictor of future aggressive disease. Recently, Williams et al. developed a prediction model based on the Early Detection Research Network (EDRN) and validated it using the placebo group of the Prostate Cancer Prevention Trial. This model incorporates clinical factors to selectively identify only those cases exhibiting aggressive cancer [23,24]. As per the article titled “Prostate Magnetic Resonance Imaging Interpretation Varies Substantially Across Radiation Oncologists,” the proposed system underwent evaluation using diffusion-weighted imaging (DWI) datasets from 40 patients. Among these patients, 20 cases were benign while the other 20 were malignant, and the datasets were collected at seven different b-values. The evaluation revealed that the proposed system achieved an impressive area under the curve (AUC) of 0.99 after the second stage of classification. This indicates that the performance of the proposed system surpasses that of other systems that do not involve segmentation [25,26]. In recent statistics, it was reported that approximately 164,690 new cases of prostate cancer were diagnosed in 2018, resulting in about 29,430 deaths. The utilization of prostate-specific antigen (PSA) screening has played a role in reducing the mortality rate associated with prostate cancer by more than 20%. Magnetic resonance imaging (MRI) techniques have demonstrated the potential for detecting and localizing prostate cancer without the drawbacks associated with invasive procedures. 
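As context for the DWI-based CAD systems discussed in this survey, the apparent diffusion coefficient (ADC) they rely on follows the standard mono-exponential model S_b = S_0 · exp(−b · ADC), so ADC can be recovered from signals at two b-values. A minimal sketch (the signal intensities below are made up for illustration):

```python
import math

def adc(s0, sb, b):
    """Apparent diffusion coefficient from the mono-exponential DWI model
    S_b = S_0 * exp(-b * ADC), i.e. ADC = ln(S_0 / S_b) / b."""
    return math.log(s0 / sb) / b

# Hypothetical signal intensities at b = 0 and b = 800 s/mm^2.
s0, s800 = 1000.0, 400.0
value = adc(s0, s800, 800.0)  # in mm^2/s
```

In practice a least-squares fit over several b-values is used per voxel to build the ADC map; the two-point formula above is the simplest special case.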
Computer-aided detection (CAD) systems have been developed utilizing various features, such as dynamic, wavelet, and co-occurrence features, derived from T2-weighted MRI, dynamic contrast-enhanced MRI (DCE-MRI), and apparent diffusion coefficient (ADC) maps. One particular CAD system was assessed using datasets obtained from three different institutions, and the highest AUC achieved was 0.71, with an accuracy of 88%. A novel Computer-Aided Detection (CAD) system is proposed for the non-invasive identification of prostate cancer through the utilization of Diffusion-Weighted Imaging (DWI). The system employs a two-stage classification approach, where refined Apparent Diffusion Coefficient (ADC) volumes at different b-values are utilized for training. What sets this proposed system apart is its ability to detect prostate cancer using DWI without the requirement for precise prostate segmentation. The accuracy of the system is assessed by evaluating DWI datasets acquired from two distinct scanners with different magnetic field strengths. In the paper titled “Prostate Imaging Reporting and Data System Version 2 (PI-RADS v2): A Pictorial Review” authored by Hassanzadeh et al., a suggestion is made to employ kernel methods in image analysis. These methods leverage Mercer kernel functions, which enable the mapping of input vectors into a higher-dimensional feature space. This mapping facilitates the efficient computation of inner products without the need to explicitly calculate the feature vectors. This technique, commonly known as the “kernel trick,” has found widespread application in various domains, including sequence analysis and regression analysis [27]. The recursive least-squares (RLS) algorithm is widely employed in the fields of signal processing, communications, and control. It serves as an efficient online method for finding the least-squares linear predictor. In the work by Rosenkrantz et al.
[5], titled “Interobserver reproducibility of the PI-RADS version 2 lexicon: a multicenter study of six experienced prostate radiologists,” a method is proposed to enhance the reliability of the PI-RADS version 2 lexicon among multiple observers. The proposed approach aims to eliminate the dictionary entry with the least impact on the overall system when a new sample is added to the dictionary. The objective of the paper is to develop a pruning technique for Kernel Least Mean Square (KLMS) that is online, computationally simple, and capable of addressing the three key issues mentioned earlier. Although the computational complexity of the critical condition remains high, scaling as O(K^2), the use of a recursive approach is motivated. The proposed pruning method, in conjunction with QKLMS (Quantized Kernel Least Mean Square), yields a fixed-budget KLMS algorithm referred to as QKLMS-FB [1]. The QKLMS algorithm incorporates a basic vector quantization (VQ) technique to quantize the feature space and control the size of the adaptive filter within the system. However, many existing VQ algorithms are unsuitable for online learning due to offline training of the codebook and heavy computational burden. In contrast, the proposed method distinguishes itself from other approaches by updating the coefficients with each new data sample, enabling real-time monitoring of dictionary growth. The computational complexity of Ek(i) is substantial as it involves all the learned observations. In the case of uniform input data distribution and fixed input statistics, the critical value is directly proportional to the magnitude of the coefficient data. Hence, our pruning rules focus on eliminating points with the smallest magnitudes. The Gaussian function is well-suited for kernel methods because of its general, differentiable, and continuous characteristics.
It offers the advantage that when two Gaussian functions are convolved, the resulting function is another Gaussian with modified mean and variance, which simplifies the integration process. By exploiting recursion, the computational complexity in the feature space can be significantly reduced from O(K^2) to O(K), which is sufficient for most kernel methods [2].
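The quantized KLMS idea summarized above can be sketched in plain Python for one-dimensional inputs (a toy illustration only: the Gaussian kernel width, learning rate, and quantization threshold are assumed values, and this is the basic QKLMS update, not the QKLMS-FB pruning variant):

```python
import math

def gauss_kernel(x, c, sigma=0.5):
    """Gaussian (RBF) kernel between a sample x and a dictionary center c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

class QKLMS:
    """Quantized KLMS: a new sample either updates its nearest center
    (if within the quantization radius eps) or grows the dictionary."""
    def __init__(self, eta=0.5, eps=0.1, sigma=0.5):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss_kernel(x, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)  # prediction error on the new sample
        if self.centers:
            j = min(range(len(self.centers)), key=lambda i: abs(x - self.centers[i]))
            if abs(x - self.centers[j]) <= self.eps:
                # quantization: merge the update into the nearest center
                self.coeffs[j] += self.eta * e
                return e
        self.centers.append(x)   # otherwise the dictionary grows by one center
        self.coeffs.append(self.eta * e)
        return e

f = QKLMS(eta=0.5, eps=0.1, sigma=0.5)
for _ in range(30):
    f.update(0.3, 1.0)        # repeated sample: coefficient converges
p = f.predict(0.3)            # close to the target 1.0
f.update(0.35, 1.0)           # within eps of 0.3: merged, no new center
f.update(0.8, -1.0)           # far from all centers: dictionary grows
```

The quantization step is what keeps the filter size bounded; without it, plain KLMS adds one center per sample and the O(K) prediction cost grows without limit.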

Related Work

Existing System

In recent times, there have been multiple endeavors to create techniques utilizing quantitative image processing and artificial intelligence to achieve precise predictions of prostate cancer recurrence. The ability to accurately predict recurrence during the initial diagnosis stage assists healthcare professionals in determining suitable treatment options, including radical prostatectomy, radiation therapy, chemotherapy, hormone therapy, and active surveillance for less aggressive tumors. By accurately forecasting the tumor’s response to treatment, long-term treatment outcomes can be improved, and the risks associated with unnecessary treatment can be minimized [3]. There is a growing demand for recurrence prediction models based on image processing techniques. Previous attempts using clinical, pathological, and demographic factors have not been successful in accurately stratifying the majority of patients with intermediate cancer grades, such as Gleason scores of 3+4 or 4+3. Additionally, these models tend to overestimate the probability of recurrence for low-risk patients [5]. It is worth mentioning that in many image analysis-based approaches, the available image data from multiple patients is divided into case-control pairs. The cases consist of patients who experienced biochemical recurrence within a specified follow-up period (typically five years), while the controls are patients who did not experience recurrence. The case-control pairs are usually matched based on clinical and demographic factors such as age, Gleason score, pathological stage, and race, which help in the development of new biomarkers. In this paper, we will outline some of the previous methods used to predict prostate cancer recurrence solely based on H&E-stained images of the prostate tissue [6]. 
In recent times, various endeavors have been made to develop methods based on quantitative image processing and artificial intelligence (AI) that can effectively predict the recurrence of prostate cancer. Precisely predicting recurrence during the initial diagnosis stage enables healthcare professionals to make informed decisions regarding treatment options. Additionally, accurate prediction of tumor response to treatment contributes to improved long-term treatment outcomes and minimizes the risks associated with unnecessary treatment. Previous attempts utilizing clinical, pathological, and demographic factors have struggled to accurately stratify a significant portion of patients with intermediate cancer grades, such as Gleason scores of 3+4 or 4+3. Furthermore, these models generally tend to overestimate the likelihood of recurrence for patients at low risk. In previous studies, several approaches have been proposed to predict prostate cancer recurrence using H&E-stained tissue images. Jafari-Khouzani et al. were among the early contributors in this field, introducing a method that utilized second-order image intensity surface features extracted from co-occurrence matrices of H&E stained tissue images. Teverovskiy et al. conducted a comparative analysis to assess the predictive power of image-based features, such as nuclear shape and texture, along with clinical features like Gleason score, for prostate cancer recurrence prediction. Lee et al. presented a data integration scheme that incorporated both imaging and non-imaging data, such as proteomics, to enhance prediction accuracy [4]. Expanding on this data integration approach, Golugula et al. proposed an extension that included histological images in combination with proteomics data. Additionally, methods utilizing graph-based features, including co-occurring organ tensors, co-occurring organ angularity, and cell cluster graphs, have been suggested for distinguishing between recurrent and non-recurrent cases. 
Texture-based features derived from image analysis, such as first-order statistical intensity and steerable orientation channels (e.g., Gabor channels), have also been explored for prostate cancer recurrence prediction. In a recent development, Lee et al. introduced a novel data integration approach known as supervised multi-view canonical correlation analysis (SMVCCA). This method combines data from histology images and proteomic tests to predict prostate cancer recurrence more accurately [7]. While many of the aforementioned methods employ supervised learning using H&E stained tissue images to assess the probability of prostate cancer recurrence in a patient, they heavily rely on hand-crafted features. These features often necessitate extensive human effort for selection and are typically dependent on expert knowledge. Moreover, these features often struggle to generalize effectively across diverse patient populations, and some may require manual parameter tuning (e.g., edge orientations in surface-based features) before application to new images. Consequently, it is crucial to develop methods for automatic feature extraction to enhance the predictive capabilities of prostate cancer recurrence algorithms [8]. With this motivation, we propose a deep learning-based approach that exclusively utilizes tissue images, without relying on any additional data sources, to distinguish between recurrent and non-recurrent patients who have undergone radical prostatectomy. We believe that through thorough training, the proposed model will demonstrate robust generalization capabilities on independent external validation datasets. Before introducing our proposed approach for predicting prostate cancer recurrence, we provide a concise overview of the compelling field of deep learning [9]. Convolutional neural networks (CNNs) have demonstrated outstanding performance in image classification tasks, particularly when a large amount of labeled data is available for training. 
Traditional machine learning approaches often required substantial effort to preprocess raw data, such as image pixels, into suitable feature vectors, often involving manual feature engineering. In contrast, deep learning architectures, such as CNNs, learn data representations automatically, capturing multiple levels of abstraction within hierarchical layers of artificial neurons. These advancements, combined with improved training algorithms and hardware capabilities, have significantly enhanced accuracy across various domains, including speech recognition, visual object recognition, drug discovery, and economics [10]. To harness the capabilities of deep CNNs in extracting meaningful representations from raw image data, the proposed approach aims to predict prostate cancer (PCa) recurrence using tissue images from patients who have undergone radical prostatectomy. By leveraging the power of deep CNNs, the algorithm automatically learns and extracts relevant features from the provided tissue images. The subsequent sections will provide a detailed explanation of the proposed algorithm [11].

Disadvantages of the Existing System:

  • It has lower accuracy.
  • It has high time complexity.
  • The existing system has lower performance.
  • The existing system has a high computational cost.
  • The existing system requires a lot of training data.

Applied Prototype (System)

The applied system is a hybrid CNN-deep learning technique (CNN-RNN) with the following advantages:

Pros of the Applied Approach (System): The advantages of the applied system are listed below:

  • The proposed system has high accuracy.
  • The proposed system has low time complexity.
  • The proposed system has high performance.
  • The proposed system reduces computational costs.
  • The proposed system performs extremely well even with a smaller training database (Figure 1).


Figure 1: Architecture of modified CNN

Implementation Prototype

The term “System Design” refers to the process of defining the architecture, components, interfaces, and data of a system in order to meet specified requirements. It can be thought of as the application of systems theory to product development and often involves collaboration between the fields of systems analysis, systems design, and systems engineering. The success of a system is measured by its performance or effectiveness in providing the desired output. Requirement analysis is a critical step in the development of a system, as accurate and complete requirement information is necessary to design a system that will function appropriately in the intended environment. It is the responsibility of the users of the system to provide this information, as they are the individuals who will ultimately use the system.

Project Implementation Details

System Modules

There are three modules

  • Collecting Data Sources
  • Processing data sets
  • Feature Learning

Collecting Data Sources: Here, two types of data sets are used to perform the risk assessment.

♦ Structured Data. Structured data refers to records with a high degree of organization, such that they can be incorporated into a relational database and searched directly by simple algorithms or other search operations. Such data is typically stored in .csv format.

The data set related to hypertension, shown in the following table, lists the attributes used to predict either high or low risk (Table 1).

Table 1: Structured data

Name  Data type

Age  int
Sex  int
cp (chest pain)  int
trestbps (resting blood pressure)  int
chol (cholesterol)  int
smoke  int
Physical inactivity  int
thalach (max heart rate)  int
ckd (chronic kidney disease)  int
pt (potassium level in mg/mol)  int
Stress level  int
ca (blood vessels)  int
thal (sugar levels)  int
target  int

♦ Unstructured Data. The text is typically composed of words and sentences, but it may also include information such as dates, numbers, and facts. As a result, it often contains inconsistencies and ambiguities, which make it difficult to analyze using standard methods compared to data stored in structured form in databases or semantically labeled documents.

This looks as in the following manner:

Preprocessing data sets:

  • Attributes, i.e., patient features, are extracted from the datasets.
  • Duplicate values are removed and missing values are filled in.
  • The importance of each feature in affecting the patient's risk can be determined using correlation analysis or the maximal information coefficient.
  • Unstructured data is converted into structured data with specified targets.
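The correlation analysis mentioned in the preprocessing steps can be sketched with a plain-Python Pearson correlation (the attribute values below are hypothetical, chosen only for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two attribute columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical columns: resting blood pressure vs. target risk label.
trestbps = [120, 130, 140, 150, 160]
target = [0, 0, 1, 1, 1]
r = pearson(trestbps, target)  # strongly positive for these made-up values
```

Computing this coefficient for every attribute against the target produces exactly the correlation matrix later visualized as a heat map.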

Feature learning:

  • The extracted features are passed into modified CNN layers to train the neural network structure.
  • Then extract the high-level features from the modified CNN.
  • The modified CNN consists of the following layers in this model:
  1. Input layer
  2. Hybrid hidden layers
  3. Output layer
  • This neural network has parameters (W, b) = {W1, W2, W3, b1, b2, b3}
  • The prediction accuracy is then computed.
  • The results demonstrate that our method has the advantage of inferring the risk of such fatal diseases over the other three methods (KNN, Decision Tree, and Gaussian Naive Bayes).

Here, the implementation proceeds with structured data and unstructured data, and the obtained results are compared using performance metrics. Through these metrics, the best method for predicting the risk of hypertension can be identified.

Implementing Using Structured Data

This is done using three machine learning algorithms (MLAs) [8]: DT (Decision Tree), KNN (K-Nearest Neighbor), and NB (Naive Bayes). Python can be used for the implementation, which starts with:

  • Load the data set in .csv format.
  • Pre-process the data to remove any noisy data.
  • Perform correlation analysis, visualized as a heat map of the correlation matrix with a diverging palette, to see how the attributes relate to the target class.
  • Divide the data set into training and test portions to build the model.
  • The cardinality of the data set here is 303*14, so the data is split by giving a test size and a random state. With a test size of 33%, 203 rows train the model and the remaining 100 rows test it; the rows are divided randomly according to the given random state.
  • While splitting, the target class must not be included, because the model is developed using only the attributes.
  • Always scale the features after splitting the data set, to ensure that the validation data stays isolated: the validation data acts as new, unseen data, and any transformation fitted on it would reduce its validity.
  • Now we are ready to create a model and train it. Remember that this is a two-class classification [15] problem, so we need a classifier, not a regression.
  • We analyze three models: the Decision Tree Classifier, the Gaussian Naive Bayes Classifier, and the KNN Classifier.
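The split-and-scale pipeline just described can be sketched as follows. The helper names are hypothetical, the 303-row dataset is simulated, and min-max scaling stands in for whatever scaler the authors used; scaling statistics are fit on the training rows only, so the held-out rows stay unseen, as the text requires.

```python
# Minimal sketch of the split (33% test) and post-split scaling pipeline.
# Illustrative stdlib-only code, not the authors' implementation.
import random

def train_test_split(rows, labels, test_size=0.33, seed=42):
    """Randomly partition rows/labels, seeded like a random_state."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = round(len(rows) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    pick = lambda seq, ids: [seq[i] for i in ids]
    return (pick(rows, train_idx), pick(rows, test_idx),
            pick(labels, train_idx), pick(labels, test_idx))

def fit_scaler(train_rows):
    """Per-feature min/max, computed on training data only."""
    cols = list(zip(*train_rows))
    return [(min(c), max(c)) for c in cols]

def scale(rows, stats):
    """Min-max scale each feature into [0, 1] using training statistics."""
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, (lo, hi) in zip(row, stats)] for row in rows]
```

With 303 rows and a 33% test size, this yields the 203-row training set and 100-row test set mentioned above.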

Decision Tree

Decision Tree builds a prediction model that forecasts the output based on an attribute-splitting criterion; here the GINI index is used, and the tree is constructed from the computed index values using the following algorithm:

Input: a data partition of training records and their associated class labels; an attribute list of candidate splitting attributes; an attribute selection method, a procedure to determine the splitting criterion, which consists of a splitting attribute and, possibly, a split point or splitting subset.


Output: A DT (decision tree).

Approach:
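As an illustration of the GINI splitting criterion named above, here is a minimal stdlib-only sketch (illustrative, not the authors' implementation) of Gini impurity and the choice of a best binary split threshold on one attribute:

```python
# Gini impurity and threshold selection for one numeric attribute.
# Hypothetical helper functions illustrating the GINI criterion.
def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Pick the threshold minimising the weighted child impurity."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best
```

The tree-building algorithm applies this split selection recursively to each partition until a stopping condition is met.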

K-Nearest Neighbor (KNN)

The K-Nearest Neighbor (KNN) classifier [15] is given the training dataset, and the K nearest cases in the training dataset are found. For KNN it is essential to decide the distance measure and the optimal K value. The data is standardized first; then the Euclidean distance is used to measure the separation between cases. Regarding the choice of K, it has been found that the model is most efficient when k=5, so we pick k=5. The algorithm is as follows:

Data Intake

Consider D, a dataset of training-related records and their groups;

K, the assumed value of K;

Data Result: the KNN classification;

Approach:

  1. Begin
  2. Initialize the K value and the similarity function simu();
  3. Train with the provided dataset (D);
  4. Di is the training set and y is the instance being evaluated.
  5. Calculate simu(Di, y);
  6. Take the K biggest scores of simu(Di, y);
  7. Calculate the average of simu() over these K nearest neighbors;
  8. If the average of simu() is larger than the threshold,
  9. the output for patient y is 'cancer patient';
  10. Else, not a cancer patient;
  11. End.
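A minimal sketch of the KNN prediction step, using the Euclidean distance and k=5 recommended in the text. This uses the standard majority-vote variant; the pseudocode's simu() threshold ("edge") variant would instead compare the average similarity of the k neighbours against a fixed value. Illustrative code, not the authors' implementation.

```python
# KNN by majority vote among the k nearest training rows (Euclidean
# distance, k=5 as in the text). Hypothetical helper functions.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train_rows, train_labels, query, k=5):
    """Classify query by majority vote among its k nearest neighbours."""
    neighbours = sorted(zip(train_rows, train_labels),
                        key=lambda rl: euclidean(rl[0], query))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)
```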

Naive Bayes

Naive Bayes classification [15] is a basic probabilistic classifier. It requires ascertaining the likelihood of the feature attributes. The algorithm is as follows:

Input: D, a set of tuples. Output: the probability with respect to every member of the target class.


Method: these models are now capable of predicting whether a patient is at high or low risk of hypertension.
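A Gaussian Naive Bayes sketch matching the description above: it fits per-class feature likelihoods and returns a probability for every member of the target class. Stdlib-only, illustrative code, not the authors' implementation.

```python
# Gaussian Naive Bayes: per-class feature means/stds and priors, then
# per-class posterior probabilities for a query tuple. Hypothetical code.
import math
from statistics import mean, pstdev

def fit_nb(rows, labels):
    """Learn per-class Gaussian likelihoods and class priors."""
    model = {}
    for c in set(labels):
        members = [r for r, l in zip(rows, labels) if l == c]
        cols = list(zip(*members))
        stats = [(mean(col), pstdev(col) or 1e-9) for col in cols]
        model[c] = (stats, len(members) / len(rows))
    return model

def predict_proba(model, x):
    """Probability for every member of the target class."""
    log_scores = {}
    for c, (stats, prior) in model.items():
        lp = math.log(prior)
        for v, (mu, sd) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * sd * sd) - (v - mu) ** 2 / (2 * sd * sd)
        log_scores[c] = lp
    z = max(log_scores.values())
    exp = {c: math.exp(s - z) for c, s in log_scores.items()}
    total = sum(exp.values())
    return {c: e / total for c, e in exp.items()}
```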

Implementing Using Unstructured Data

Implementation using unstructured data is trained using Convolutional Neural Network [20] techniques.

Neural Networks

  • NNs (Neural Networks) are built from mathematical computational components known as ANs (Artificial Neurons).
  • Inspired by the neural activity of the brain, they try to replicate similar behavior mathematically (though they are far simpler than neurons in the brain).
  • Neural networks outperform traditional machine learning algorithms [8] on huge data sets,
  • because a traditional machine learning algorithm's performance saturates as the data set grows.
  • The input layer is where data is provided.
  • Each neuron in the input layer corresponds to the value of one feature.
  • The hidden layer is where computation takes place.
  • Simple neural networks generally contain 1 or 2 hidden layers.

The output Layer generates the result of the neural network (Figure 2).


Figure 2: Neural Network Showing Layers

Neuron

There are weights associated with each neuron. Consider the figure below, with three features X1, X2, and X3 (Figure 3).


Figure 3: Neuron Computation with Weights

Why not TRADITIONAL machine learning?

  • Data from medical [14] data centers is usually huge.
  • As shown in the figure, a traditional machine learning algorithm's performance saturates as the data set grows.
  • There is an under-utilization of data in decision-making; Deep Learning [19] systems are built for exactly that purpose (Figure 4).


Figure 4: Performance Comparison

Procedure

The modified CNN is implemented using unstructured data because it predicts with high accuracy. The entire computation takes place in the hidden layers, and it is done in five stages.

  • First, unstructured data consisting of patient data is taken to create vector values.
  • Then the data is organized into 2 columns, one with the text and the other with the target class, to train the NN (Neural Network) model and test predictions using the vector values.

Step 1: Representation of Text Data

  1. Each word can be represented as a vector of numerical values (a column matrix).
  2. All words are represented as d-dimensional vectors; with d = 50, each word is a column vector (column matrix) containing 50 rows.
  3. The text can then be represented by appending the column vectors, i.e., stacking them side by side to make a matrix of dimension d x n (the words are simply stacked next to each other as in the sentence).
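The d x n representation above can be sketched as follows. Deterministic toy embeddings stand in for the trained word2vec vectors (the embedding scheme is an assumption for illustration only):

```python
# Step 1 sketch: a sentence becomes a d x n matrix whose column j is the
# 50-row vector of word j. Toy seeded-RNG embeddings replace word2vec.
import random

D = 50  # rows per word vector, as in the text (d = 50)

def word_vector(word, d=D):
    """Deterministic toy embedding: seed an RNG with the word itself."""
    rng = random.Random(word)
    return [rng.uniform(-1, 1) for _ in range(d)]

def text_matrix(words, d=D):
    """d x n matrix: stack the word column vectors side by side."""
    cols = [word_vector(w, d) for w in words]
    return [[cols[j][i] for j in range(len(words))] for i in range(d)]
```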

Step 2: Convolution Layer of Text MCNN

  1. Start a pointer at position 1 (the 1st word).
  2. Assuming the pointer is at position i, take the words at positions i-2, i-1, i, i+1, i+2.
  3. Transpose each of them to form row matrices of 50 columns and append them side by side, converting them into a single row vector of size 1x250.
  4. Increment the pointer and move to a new row.
  5. For the 1st, 2nd, (n-1)th, and nth words there are gaps, e.g., the first word has no two preceding words. In such cases fill the gaps with zero vectors.
  6. At the end of the above procedure we get an n x 250 matrix, which is our convolved matrix.
  7. The weight matrix W1∈R100×250 is of size 100x250. This means we expect the neural network to extract 100 features for us.
  8. We then carry out the following computation:
    h1i,j = f(W1[i]·sj + b1)
  9. This is the dot product of matrices; b1 is a column matrix of 100 rows. The bias is used to shift the learning process.
  10. Without it, the result is a simple weighted sum of features and there is no learning process.
  11. We get a 100 x n feature map h1. f is an activation function used to obtain non-linearity; we used the Tanh activation function.
    h1 = (h1i,j)100×n

Step 3: POOL Layer of Text Modified CNN

  1. From the feature map h1, which is 100 x n dimensional, we pick the maximum element in each row of the matrix, obtaining 100 maximum values, one from each row.
  2. From these 100 values we construct a 100x1 matrix h2 (a column vector).
  3. The reason for choosing the max-pooling operation is that the role of each word in the text is not equal; by max pooling we can pick the elements which play key roles in the text.
  4. By the end of Step 3 we have extracted 100 features from the unstructured data.

Step 4: Full Connection Layer of Text Modified CNN

  1. This matrix is then given as input to a neural network that carries out the following calculation, similar to that in Step 2 (a dot product of matrices).
  2. W3 is the weight matrix of the full connection layer and b3 is the bias.
    h3 = W3h2 + b3

Step 5: Modified CNN Classifier

  1. A softmax classifier is used as the output classifier, predicting the risk of the disease (high or low).
  2. This calculation uses the sigmoid formula and computes a probabilistic value to predict (Figure 5).


Figure 5: Modified CNN Step-wise Implementation
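The five steps can be sketched end to end with the dimensions given in the text: 5-word windows of 50-dimensional vectors giving 250-vectors, W1 of size 100x250 with tanh activation, row-wise max pooling to h2 (100x1), a full-connection layer, and a softmax over two risk classes. The weights here are random placeholders, so the output probabilities are illustrative only, not a trained model.

```python
# End-to-end forward pass of the modified text CNN (Steps 2-5).
# Random placeholder weights; illustrative, not the authors' code.
import math, random

rng = random.Random(0)
d, win, feats, classes = 50, 5, 100, 2
W1 = [[rng.gauss(0, 0.1) for _ in range(win * d)] for _ in range(feats)]
b1 = [0.0] * feats
W3 = [[rng.gauss(0, 0.1) for _ in range(feats)] for _ in range(classes)]
b3 = [0.0] * classes

def forward(word_vectors):
    n = len(word_vectors)
    pad = [[0.0] * d] * 2                       # zero vectors fill the edge gaps
    padded = pad + word_vectors + pad
    # Step 2: convolution over 5-word windows, tanh activation
    h1 = []
    for i in range(n):
        s = [v for w in padded[i:i + win] for v in w]   # 250-vector
        h1.append([math.tanh(sum(a * b for a, b in zip(row, s)) + bb)
                   for row, bb in zip(W1, b1)])
    # Step 3: max pooling per feature across all n positions -> h2 (100x1)
    h2 = [max(col[f] for col in h1) for f in range(feats)]
    # Step 4: full connection h3 = W3 h2 + b3
    h3 = [sum(a * b for a, b in zip(row, h2)) + bb for row, bb in zip(W3, b3)]
    # Step 5: softmax over the two risk classes
    z = max(h3)
    e = [math.exp(v - z) for v in h3]
    return [v / sum(e) for v in e]
```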

The algorithm or step-by-step procedure is as follows:

# At first unstructured data is taken and sent to the word2Vec algorithm to create vector values.

  1. train = pd.read_csv(fname)
  2. train = train["TEXT"]
  3. train = train.str.lower()
  4. corpus = train.str.split()
  5. patientcorpus = Word2Vec(corpus, size=50, min_count=3)
  6. patientcorpus.save('patientwordvec')

# Then the data is organized so that one column contains the text and the other the target class.

  1. X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
  2. parameters = initialize_parameters()
  3. Z2 = forward_propagation(X, parameters)
  4. cost = compute_cost(Z2, Y)
  5. optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
  6. init = tf.global_variables_initializer()
  7. with tf.Session() as session:
  8. 8.1 session.run(init)
    8.2 for epoch in range(num_epochs):
    8.2.1 epoch_cost = 0.
    8.2.2 for j in range(0, m):
    8.2.2.1 d = j + 1
    8.2.2.2 _, temp_cost = session.run([optimizer, cost], feed_dict={X: X_training[j:d][:][:][:], Y: Y_training[j:d][:]})
    8.2.2.3 epoch_cost += temp_cost
  9. if print_cost == True:
  10. print("Cost after epoch %i: %f" % (epoch, epoch_cost))
  11. if epoch % 1 == 0 and print_cost == True:
    costs.append(epoch_cost)
  12. correct_prediction = tf.equal(Z2, Y)
  13. predict_op = Z2
  14. Calculate accuracy using the training dataset and the evaluation dataset.

NN (Neural Network) Logical flow:

Here we can observe that the input value is first sent to the neural network; then, based on the weights, the convolved matrix is built. Predictions are made and adjusted by calculating the mean squared error, i.e., the loss function. The loss score is minimized using a gradient descent optimizer, the weights are updated, and prediction continues (Figure 6).


Figure 6: Logical Flow of Neural Network

Results

Analysis of Structured Data

The structured hypertension data set used for analysis is shown in Figures 7 and 8.


Figure 7: Hypertension structured data


Figure 8: Correlation Analysis

Here is the view of the correlation analysis of the attributes against the target, identifying the significant attributes with the help of a diverging palette. Before predicting, scaling needs to be done for the training data set and the test data set, as seen in Figure 9.


Figure 9: Scaled Matrices for Training and Test data sets

The results, i.e., the predictions obtained by the three algorithms [8], are shown in Figures 10-12:


Figure 10: Decision Tree Results

Results of Decision Tree: Figure 10

Results of KNN: Figure 11


Figure 11: KNN Results

Results of Naive Bayes: Figure 12.


Figure 12: Naive Bayes Results

Analysis of Unstructured Data

The unstructured hypertension data set used for analysis, converted to numerical vector values, is shown in Figure 13. The data then needs to be organized in the following way to train the NN (Neural Network), test it, and predict (Figure 14). After training the neural network, the built model is applied to the test set for predictions (Figure 15).


Figure 13: Unstructured Data Sent to Word2Vec to Create Numerical Values


Figure 14: Unstructured Data Processed to Structured Format


Figure 15: Modified CNN Results

Validation Metrics

For the performance evaluation in the experiment, we first calculate accuracy, precision, and recall.

  • Accuracy of Decision Tree: 71.0%
  • The precision of Decision Tree: 67.24137931034483%
  • The recall of Decision Tree: 79.59183673469387%
  • The F1-score of Decision Tree: 72.89719626168225%
  • The accuracy of KNN: 68.0%
  • The precision of KNN: 68.08510638297872%
  • The recall of KNN: 65.3061224489796%
  • The F1-score of KNN: 66.66666666666666%
  • The accuracy of NaiveBayes: 84.0%
  • The precision of NaiveBayes: 83.6734693877551%
  • The recall of NaiveBayes: 83.6734693877551%
  • The F1-score of NaiveBayes: 83.6734693877551%.
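The scores above follow the standard confusion-matrix formulas. A minimal sketch of computing them (illustrative, not the authors' evaluation script):

```python
# Accuracy, precision, recall, and F1 from true and predicted labels
# (positive class = 1). Hypothetical helper, standard definitions.
def metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```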

On organized information, i.e., structured data, the NB classification [15] is the best in the test. However, it is also seen that we cannot precisely predict whether the patient is at high risk from the patient's age, sex, clinical laboratory results, and other structured information alone. In other words, because cerebral infarction is an ailment with complex symptoms, we cannot foresee whether the patient is in a high-risk group solely in light of these simple features.

  • The Train Accuracy of modified CNN: 100%
  • The Test Accuracy of modified CNN: 90.90909090909091%
  • The precision of Modified CNN: 83.6734693877551%
  • The recall of Modified CNN: 83.6734693877551%
  • The F1-score of Modified CNN: 83.6734693877551% (Figure 16).


Figure 16: Iteration vs. Cost

Taking everything into account, for disease-risk modeling the precision of risk prediction depends on the feature richness of the medical [14] institution's data, i.e., the better the feature description of the disease, the better the accuracy will be. We find that the accuracy rate can reach 90.00%, allowing a better assessment of the risk. The following bar plot shows the comparison: Figure 17.


Figure 17: Accuracy Analysis of Different Algorithms

Research Implementation Time-Period

Execution takes a few seconds. Execution time also depends on the system's overall performance, which is based on the system software, the system hardware, and the storage space available in the system (Figures 18-21 and Table 2).


Figure 18: Prostate Cancer Data set


Figure 19: Patients with and without Prostate Cancer


Figure 20: ROC Curve for Accuracy


Figure 21: Prostate Accuracy vs. Data Size

Table 2: Comparison of different models vs different sizes of the dataset

Model           Size   Top-1/Top-5 error   Layers   Model Description
Decision Tree   218    53/90                8       5 Conv + 3 fc layers
Naive Bayes     440    38.66/7.2454        13       13 Conv + 3 fc layers
KNN             562    27.3312/11.3214     16       16 Conv + 3 fc layers
CNN             668    19.456/10.767       19       19 Conv + 3 fc layers
MCNN            740    12.7878/8.357       21       21 Conv + 3 fc layers

Conclusion

Current research in the area of clinical big data analysis has not focused on both structured and unstructured types of information. The proposed framework outperforms several typical prediction algorithms in terms of accuracy, performance, and convergence speed. This is achieved through a hybrid Convolutional Neural Network approach that utilizes both structured and unstructured data from hospitals. Data mining and deep learning play critical roles in fields including machine learning, artificial intelligence, and database systems, with the main aim of improving the performance of prediction models. The performance of the proposed algorithm was statistically compared with various approaches, including linear regression and SVM, and was found to be superior. Furthermore, several advanced strategies can be used to further improve the accuracy of the model. Compared to other common prediction algorithms, our proposed algorithm achieved an accuracy of 91% with a faster convergence speed.

Conflicts of Interest

All authors involved in this research affirm that this article has no conflict of interest and that they have collectively contributed towards its goals and objectives.

Data Availability

The data used to support the findings of this study is available from the corresponding author upon request (head.research@bluecrest.edu.lr).

Funding

This research work was done independently by the authors, who received no funds for it from the Government of Liberia, as the country is in crisis and poor, nor from the University.

Authors Contribution

Asadi Srinivasulu contributed to the conceptualization, reference collection, data curation, formal analysis, methodology, software selection, investigation, resources, writing the original draft, supervision, review and editing, project administration, and visualization. He also contributed to proofreading of the research article, to data collection and the environment, and to the plagiarism check and correction process. Saad Ali Alahmari contributed to the formal analysis, methodology, software, data curation, and investigation, and contributed to the visualization, data collection, plagiarism checking, paraphrasing, and diagrams.

References

  1. Kumar N, Verma R, Arora A, Kumar A, Gupta S, Sethi A, Gann PH (2017) Convolutional neural networks for prostate cancer recurrence prediction. Proc SPIE 10140, Medical Imaging 2017: Digital Pathology, 101400H. https://doi.org/10.1117/12.2255774
  2. Siegel, RL, Miller, KD, Jemal A (2017) Cancer statistics, 2017. CA: a cancer journal for clinicians 67: 7-30.
  3. Sandhu GS, Andriole GL (2012) Overdiagnosis of prostate cancer. J Natl Cancer Inst Monogr 2012: 146-151.
  4. Sonn GA, et al. (2017) Prostate magnetic resonance imaging interpretation varies substantially across radiologists. Eur Urology Focus. [crossref]
  5. Hassanzadeh E, et al. Prostate Imaging Reporting and Data System Version 2 (PIRADS v2): A pictorial review. Abdom Radiol 42: 278-289. [crossref]
  6. Rosenkrantz AB, et al. (2016) Inter observer reproducibility of the pi-rads version 2 lexicon: a multicenter study of six experienced prostate radiologists. Radiology 280: 793-804. [crossref]
  7. Nasrabadi, NM (2007) Pattern recognition and machine learning. J. electronic imaging 16: 049901.
  8. Goldberg DE, Holland, JH (1988) Genetic algorithms and machine learning. Mach. learning 3: 95-99.
  9. Michalski RS, Carbonell JG, Mitchell TM (2013) Machine learning: An artificial intelligence approach (Springer Science, Business Media, 2013)
  10. Cameron A, Khalvati F, Haider MA, Wong A (2016) Maps: a quantitative radiomics approach for prostate cancer detection. IEEE Transactions on Biomed Eng 63: 1145-1156. [crossref]
  11. Litjens G, Debats O, Barentsz J, Karssemeijer N, Huisman H (2014) Computer-aided detection of prostate cancer in MRI. IEEE Transactions on Medical Imaging 33: 1083-1092.
  12. Wang S, Burtt K, Turkbey B, Choyke P, Summers, RM (2014) Computer aided-diagnosis of prostate canceron multi-parametric mri: a technical review of current research. BioMed Research International 2014.
  13. Fehr D, et al. (2015) Automatic classification of prostate cancer Gleason scores from multipara metric magnetic resonance images. Proc Natl Acad Sci 112, E6265-E6273. [crossref]
  14. Erickson BJ, Korfiatis P, Akkus Z, Kline, TL (2017). Machine learning for medical imaging. Radiographics 37: 505-515.
  15. Orru G, Pettersson-Yeo W, Marquand AF, Sartori G, Mechelli A (2012) Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neurosci Biobehav Rev 36: 1140-1152. [crossref]
  16. Krizhevsky A, Sutskever I, Hinton, GE (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097-1105.
  17. Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3431-3440. [crossref]
  18. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770-778.
  19. Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv: 1502.03167.
  20. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521: 436.
  21. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv: 1409.1556.
  22. Srinivasulu A, Chanakya GM (2017) Health monitoring system using integration of cloud and data mining techniques. Helix 7(5): 2047-2052.
  23. Litjens G, Kooi T, Bejnordi BE, Setio, AAA, Ciompi F, Ghafoorian M, Sánchez CI (2017) A survey on deep learning in medical image analysis. Medical Image Analysis 42: 60-88. [crossref]
  24. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Webster DR (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22): 2402-2410. [crossref]
  25. Abrol E, Khanna P, Jain A (2018) Prostate cancer detection from histopathology images using machine learning techniques: A review. Journal of Healthcare Engineering.
  26. Litjens G, Debats O, Barentsz J, Karssemeijer N (2014) Computer-aided detection of prostate cancer in MRI. IEEE Transactions on Medical Imaging 33(5): 1083-1092.
  27. Varghese BA, Chen F, Hwang DH, Stephens GM, Yoon SW, Fenster A (2019) Deep learning for automatic gleason pattern classification for grade group determination of prostate biopsies. Medical Physics 46(10): 4565-4575.

Mind-Sets Regarding Responses to the Possibility of Ceasing to Consume Ultra-Processed Food

DOI: 10.31038/NRFSJ.2023621

Abstract

200 respondents, half from the United States and half from the United Kingdom, each evaluated a unique set of 24 vignettes about their attitudes towards ultra-processed foods. The vignettes were constructed from 16 different elements (messages), with 2-4 of the 16 messages incorporated into each vignette, as specified by an underlying experimental design. The 16 elements were generated through AI (artificial intelligence, Idea Coach), dealing with different aspects of ultra-processed food, messages meaningful to a consumer. Each respondent rated her or his specific, unique set of 24 vignettes using a two-dimensional, five-point scale, one dimension dealing with wanting to quit eating vs. not wanting to quit eating the food (the key dependent variable), and the second dimension dealing with belief versus non-belief that the food is addictive. No strong messages emerged for the total panel, or by country; a few strong messages emerged by gender and age. Strong messages did emerge when the 200 respondents were clustered by their 200 patterns of coefficients, one pattern per respondent. The 200 patterns emerged from the respondent-level equation relating the 16 elements to the rating of 'want to stop eating the food.' After performing k-means clustering on the patterns, three mind-sets emerged: Mind-Set 1 of 3 – focus on the negative effects, especially eating patterns and effects on the body; Mind-Set 2 of 3 – focus on scare tactics, using well-known issues; Mind-Set 3 of 3 – focus on food as giving pleasure and then causing addiction. The approach shows that topics like social attitudes can harness the power of AI to generate ideas in a directed fashion through question and answer (Idea Coach), followed by empirical testing of the ideas with real people, and final summarization of the results by the researcher, aided once again by AI.

Introduction – The Evolution of Food Design and Creation

The world of food product development as a serious effort is approximately 100 years old, although there are no seminal events that can really be said to usher in this period. Historians might look at the development of canning by Nicolas Appert as a key event in food preservation, one demarcated by the creation of a specific piece of technology. When we look for such a key event for the commercial design and development of food products, we might look at the origin of various historically well-known processed foods, such as condiments, cereals, and the like. Those, however, are simply single commercial events, which show the gradual evolution of processed foods.

A better marker of processed foods may come from the efforts by companies to determine the perceptions and preferences of their customers, subjective aspects which could guide the creation and manipulation of foods. With this revised viewpoint regarding processed food, it would be reasonable to point to the 1930s and 1940s, to the seminal work of the Arthur D. Little, Inc. group in Acorn Park, MA, in a westerly part of Cambridge, and in so doing to the work of Stanley Cairncross, Loren B. Sjostrom, and others [1]. Their creation of the Flavor Profile system reflects the effort to guide product development by creating foods which possessed specific and presumably desired sensory characteristics. Indeed, one of their papers was proudly titled 'What makes flavor leadership?' [2].

The history of food product design and creation, moving from available foods to foodstuffs created by science and technology, moved into the world of statistics, design, and experiments. Early statistical work by Joan Gordon called 'response surface design' [3] was expanded upon by author Howard Moskowitz to create many products from the 1970s through the 1990s, such as variations of Prego [4], Tropicana Grovestand Orange Juice [5], and even optimized mixtures of sweeteners incorporating Aspartame [6]. In those early experiments with mixtures of ingredients, the ingredients were primarily complete foods, such as the amount of tomato sauce or the size of tomato chunks in the case of Prego (and a year earlier for Ragu as well, but with different ingredients). When it was time to optimize Vlasic pickles, producing the line of 1-4 and culminating in Zesty, which became a best-selling pickle in the United States, the ingredients were again natural condiments, such as salt, acid, and pepper. And finally, for Tropicana Grovestand Orange Juice, the variables were the amounts of different types of orange juice, the amount of natural pulp, etc. The common theme here is that in the period from the 1970s to the 1990s, the notion of 'processed food' amounted to the combination of normal ingredients. The driving force was the demand by the president of the company to create a good-tasting, normal product [7].

In those halcyon years, the 1970s to around 2020, the issue of a 'processed' food was a non-event. There may have been some issues, but the number of publications devoted to the topic of 'ultra-processed food' was stable. Table 1 shows the number of hits for the term 'ultra-processed food', and then 'food as addiction'. Many of the papers emerging from this focus deal with the linkage of ultra-processed foods to emerging worldwide health issues, such as obesity [8-12]. Other papers simply focus on responses to ultra-processed foods as part of the evolution of the way people eat, including eating as an addiction, or at least some foods as addicting [13]. Beyond the emotionally tinged word 'addicting' lies the full gamut of different ways that people eat, the nature of knowledge about food wants and desires, as well as the desire to use such information to educate the consuming public, which comprises just about everyone [14,15].

It is not the focus of this paper to delve into the reasons for the emergence of ultra-processed food, other than to remark on the 'processization' of product design and development. From the author's experience since 1975, one change has been that the call for new products is no longer the job of the person at the top, but rather of middle managers, often MBAs, and often working by the numbers. Whereas in the 1970s and 1980s senior management would be at meetings, and the product developers and consumer researchers were encouraged to explore ways of getting 'better tasting food to the consumer', more recent experience suggests 'management by objectives.' The goal is to work with ingredient and flavor suppliers to get cost-effective ingredients that can be blended into one's product, to achieve an adequate score with consumers as reported by market researchers who hand off the task to approved suppliers, and the measurement of success. There is precious little profound knowledge about how products are formulated to appeal to different tastes, and a lack of understanding of the precise messaging which entices the consumer.

The unhappy outcome of this situation, evolving from producing good-tasting products at a tasting to blending according to nutrition and stability, has been to discard the exciting times when products were tasted, and when product developers were pushed to make many products and test them to find out just what wins. The process has become routinized, less exploratory, far less able to break out of its bounds, and simply hamstrung to produce a modest change so that a brand manager can quickly add another product to the resume while planning for the next job. And there is a decrease in systematic knowledge, information that could be put into a 'book' or computer program to simulate new products.

Delving into the Mind of the Person

This paper moves away from the issues of ultra-processed foods as discussed by experts examining their ramifications. Rather, it focuses on the mind of a typical individual when presented with various messages about ultra-processed food. How do people respond to these messages? The objective is to move away from the surface response that would be obtained in a simple survey, and instead dive below that surface to find out how people truly feel when they are unable to 'game' the survey by choosing the politically correct response, or at least the response that they perceive to be the 'right one.'

The approach used is known as Mind Genomics, an emerging branch of experimental psychology with the objective of measuring the subjective impact of messages presented to the respondent, and by so doing metricize thought [16-19].

The process of Mind Genomics is simple and direct. The respondent is presented with structured combinations of elements, viz., messages, in short, easy-to-read vignettes (combinations) comprising 2-4 messages each. The combinations are created by an underlying experimental design, a plan which specifies the components of each vignette. For this particular study, each respondent evaluates a set of 24 vignettes, with each vignette different from every other vignette. In the end, the respondents each evaluate unique sets of vignettes. From the pattern of their ratings, it is straightforward to estimate how each element or message drove the response for that individual respondent. Mind Genomics as a science enables the researcher to explore a topic, as long as the topic can be dimensionalized into different elements, or phrases.

Mind Genomics emerged from the confluence of three disciplines. The first discipline is psychophysics, the oldest branch of experiential psychology, whose focus is on relating the stimulus properties to how they are perceived. The second discipline is statistics, and more specifically experimental design, with the focus of laying out the proper combinations of stimuli to create the aforementioned ‘vignettes.’ The correct layout enables the researcher to use statistics such as OLS (ordinary least squares) regression and cluster analysis to identify meaningful patterns. The third discipline is consumer research, which focuses on the world of the everyday. The Mind Genomics effort is, primarily, to understand the mind of the person facing everyday situations.
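As a toy illustration of the OLS deconstruction step just mentioned, element contributions can be recovered by solving the normal equations for a small design matrix whose rows mark which elements appear in each vignette. All data and helper names here are hypothetical; this is a sketch without an intercept term, not the authors' code.

```python
# OLS via the normal equations (X^T X) beta = X^T y, solved by
# Gauss-Jordan elimination. Toy deconstruction of vignette ratings
# into per-element contributions; illustrative only.
def ols(X, y):
    """Return the coefficient vector beta for design matrix X and ratings y."""
    k = len(X[0])
    # Build the augmented system [X^T X | X^T y]
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] +
         [sum(r[i] * yy for r, yy in zip(X, y))] for i in range(k)]
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))  # partial pivoting
        A[i], A[piv] = A[piv], A[i]
        for r in range(k):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][k] / A[i][i] for i in range(k)]
```

For example, with vignettes X = [[1,0],[0,1],[1,1]] (element present/absent) and ratings y = [2,3,5], ols(X, y) recovers contributions of 2 and 3 for the two elements.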

A Worked Mind Genomics Cartography about Ultra-processed Foods across Two Countries

The paper presented here grew out of the interest of a broadcasting company in the topic of ultra-processed foods. During the middle of 2023, the topic of ultra-processed foods and their health and addiction ramifications came into increasing public focus. The reasons for the development of the interest are not important. What emerged as important was the reaction of ordinary people, not necessarily connected with the world of health and food, to the types of language used when dealing with aspects of ultra-processed foods.

The Mind Genomics process works through a templated system (www.BimiLeap.com). The objective is to enable anyone in the world to use research to ask questions (initial learning) and then to answer these questions through an experiment (experiential learning). The templated system permits the researcher to ‘fill in the empty spaces’ with information, this information comprising questions pertaining to a topic, and then answers to those questions. The actual research, described in more detail below, combines the answers (messages) about a topic, presents these combinations to respondents, obtains ratings, and then deconstructs the ratings of the vignettes into the contributions of the individual elements. In that way, the manner in which a person ‘looks at the topic’ often becomes obvious, as will be seen below.

The first step is to consider what faces the researcher at the start of the effort. Panel A of Figure 1 shows the request for four questions pertaining to a topic, questions which in loose terms ‘tell a story’, or in more practical terms are simply different aspects or facets of a single topic. The left part of the panel shows the request for four questions, the right part of the panel shows four questions filled in, or more correctly the first few words of the four questions. Panel B below shows the same type of template, this time for four answers to the first question, the left being blank, the right to be completed by the researcher.

fig 1

Figure 1: Panel A shows the template for the four questions, blank (left) and with the first parts of the questions filled in (right). Panel B shows the same thing for the first question, requesting four answers to that first question.

During the 15 years that the BimiLeap system was being developed, the issue of not knowing much about a topic continued to arise, an issue occasionally resulting in some hard and deep thinking, but an issue more often leading to frustration, anger, and abandonment of the Mind Genomics approach entirely. The reality is that the education system around the world is good at teaching people to answer direct questions, questions with specific types of information or opinion, but not good at teaching people to think in a creative manner. It is the inability to think in a constructive manner which continued to emerge as the ‘pain point’ of the effort.

Rather than looking at the various papers and press releases dealing with ultra-processed foods, the authors began the experiment by using artificial intelligence to explore different questions that one could ask about ultra-processed foods. The tool used was the Idea Coach feature of Mind Genomics (www.BimiLeap.com). The Idea Coach enables the researcher to specify a topic as a paragraph and then returns sets of 15 questions, from which the researcher can choose up to four for the subsequent Mind Genomics experiment. The questions which emerge can be edited, as can the topic paragraph used to generate them, and the Idea Coach can be run as often as required to produce fresh sets of 15 questions. In the end, the researcher generates four questions through the use of Idea Coach. These sets of questions are later returned to the researcher in the ‘Idea Book.’ The Idea Book features one page for each set of 15 questions generated, along with AI summarization of different aspects of the 15 questions, such as themes, innovations, etc., providing a unique, focused book on the topic.

Table 2 presents the first set of questions emerging from the topic paragraph. The table provides both the actual questions emerging from the AI, as well as a summarization of the questions using pre-set queries presented to the AI in Idea Coach. The queries summarize the patterns resident within the set of questions.

Table 1: Google Scholar ‘hits’ for ‘ultra processed food’ and for ‘food as addiction’, respectively

tab 1

Executing the Study

The actual study itself comprises four questions, and for each question, four answers. The underlying experimental design generates the structure of the individual vignettes, with each respondent evaluating precisely 24 vignettes. The vignettes are set up so that the 16 elements are presented in a way which defeats attempts to discover the pattern. Furthermore, using the MRI (magnetic resonance imaging) as a metaphor, the set of 24 vignettes differs from person to person, permuted so that the mathematical structure remains the same, but the specific combinations differ. Thus, the experimental design covers a great deal of the so-called design space, allowing the researcher the liberty of exploring the unknown, rather than confirming the correctness or incorrectness of a guess based on prior knowledge. As noted above, the researcher need not know anything in advance. The research itself covers a great deal of the design space, not just 24 of the many hundreds of possible combinations. The approach is called a permuted design [20].
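The core idea of the permutation can be sketched as follows. This is an assumed mechanic, not the published algorithm of reference [20]: one base 24 × 16 presence/absence design is reused for every respondent, but its element columns are shuffled with a per-respondent permutation, so the mathematical structure (row sums, balance) is preserved while the concrete combinations differ across respondents.

```python
import random

# Sketch of a permuted design: shuffle the element columns of a fixed
# base design independently per respondent (toy base design below).
def permute_design(base, seed):
    rng = random.Random(seed)
    perm = list(range(len(base[0])))
    rng.shuffle(perm)                         # per-respondent column order
    return [[row[j] for j in perm] for row in base]

base = [[(r + c) % 2 for c in range(16)] for r in range(24)]  # toy 24 x 16
respondent_1 = permute_design(base, seed=1)
respondent_2 = permute_design(base, seed=2)
```

Each respondent's matrix contains the same rows element-count-wise, so the statistical properties of the design carry over unchanged.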

The particulars of the study are shown in Table 3. Each respondent, whether in the US or the UK, was invited to participate by Luc.Id, an online panel provider. The respondent was oriented in the study, then completed a set of self-profiling questions, including age, gender, and statements about food. At the bottom of Table 3 is shown the five-point rating question. The focus here is on the combination of ratings 5 and 4. That combination (R54) identifies the ratings pertaining to the respondent wanting to stop eating the food, based upon the message in the vignette.

Table 2: Questions emerging from the topic paragraph, along with additional commentary and interpretation of these questions by AI embedded in the Idea Coach. Each repetition of the Idea Coach generates a separate page of questions and AI summarization.

tab 2(1)

tab 2(2)

tab 2(3)

The actual study requires approximately 3-5 minutes, from orientation to completion. Most respondents who are not professionals simply read the instructions and respond to the vignettes using the scale. Exit interviews over the years of doing Mind Genomics studies reveal that the vast majority of conventional respondents feel that they did not do the study correctly, that they guessed, that they stopped ‘caring’ and simply rated based upon their first impression. Nonetheless, as will emerge below, the data are consistent. Despite the consistency, most professionals doing the same study find that they are irritated; they cannot ‘discern’ the pattern, they feel that they are simply guessing, and they report feeling frustrated. Many quit in irritation, some in simple anger. This is the same reality which ended up producing the Idea Coach embodiment of AI. For respondents, the irritation simply means that there is more hope for data from regular people than from professionals who may pride themselves on their intellectual acumen, even in the case of simple topics such as ultra-processed foods.

Analysis by Data Transformation, Followed by OLS (Ordinary Least Squares) Regression

The analysis of these data begins by transforming the five-point rating (bottom of Table 3) into a binary scale, based upon the rule that ratings of 5 and 4 are transformed to 100, and ratings of 1, 2, or 3 are transformed to 0. The new binary variable, R54, shows us the desire to stop eating the food suggested by the text in the vignette. When the rating was 5 or 4, the vignette was given the aforementioned new value ‘100’, to which was added a vanishingly small random number (<10⁻⁵). The purpose of the random number is purely prophylactic, viz., to ensure that there would be some level of variation in this newly created variable, R54.

For the initial analysis of the total panel, a second binary transformed variable was created, R52. This corresponds to the perception that what is read in the vignette describes something addictive. R52 takes on the value 100 when the rating is 5 or 2, and the value 0 when the rating is 1, 3, or 4. Again, a vanishingly small random number is added to the newly created binary variable.
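The two recoding rules can be captured in one small function. This is a minimal sketch of the rule as described in the text: ratings in the target set become 100, all others become 0, and a vanishingly small random number (<10⁻⁵) is added so the new variable never has zero variance.

```python
import random

# Recode a five-point rating to a 0/100 binary, plus prophylactic jitter.
def recode(rating, top_codes, rng):
    base = 100.0 if rating in top_codes else 0.0
    return base + rng.random() * 1e-5       # vanishingly small random number

rng = random.Random(7)
ratings = [5, 4, 3, 2, 1]
r54 = [recode(r, {5, 4}, rng) for r in ratings]   # want to stop eating
r52 = [recode(r, {5, 2}, rng) for r in ratings]   # perceived as addictive
```

The jitter is far below any meaningful difference in the data, so it changes nothing substantively while guaranteeing the regression step below always sees a non-constant dependent variable.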

After the creation of the two binary variables, R54 and R52, respectively, it is straightforward to relate the presence/absence of the 16 elements to the newly transformed binary variables. The analysis is known as OLS (ordinary least squares) ‘dummy’ regression [21]. The independent variables are ‘dummy variables’, being either present or absent in a vignette; no other information, such as magnitude, is incorporated in these variables. The 16 independent variables, one per element, are simply present or absent in each of the 4800 vignettes generated by the 200 respondents, who evaluated 24 unique vignettes each.

The equation is written as: Binary Dependent Variable = k1(A1) + k2(A2) + … + k16(A16)

The foregoing equations summarize the relation between the messages or elements under the researcher’s control and the respondent’s rating, after that rating has been transformed to the binary scale. The coefficient shows how many points out of 100 can be traced to the appearance of the element in the vignette. Again, it is virtually impossible for respondents to discern the underlying pattern, since each person tests a unique set of 24 vignettes.
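The dummy regression step can be sketched on synthetic data. This assumes the no-intercept model as stated in the equation above; the 4800 × 16 matrix mimics the presence/absence of the 16 elements across the 4800 vignettes (200 respondents × 24 vignettes), and the planted coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(4800, 16)).astype(float)   # dummy variables
true_k = rng.uniform(0, 30, size=16)                    # planted coefficients
y = X @ true_k + rng.normal(0, 5, size=4800)            # noisy synthetic DV
k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)           # estimated k1..k16
```

With thousands of vignettes against only 16 unknowns, the least-squares estimates track the planted coefficients closely even under substantial rating noise, which is why individual-level models from only 24 vignettes (below) remain feasible.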

Statistical analyses suggest that coefficients around 15-16 are statistically significant. More important than significance is relevance. Coefficients of 21 or higher are deemed relevant, at least operationally, showing a strong connection to the scores. Table 4 suggests that no elements for R54 (want to quit eating) or R52 (believe it is addictive) perform strongly. It may well be that the poor performance of these elements emerges because the research has thrown together a random 200 people of ‘all types.’ It is always open to the researcher to break the respondents into more meaningful groups, e.g., by who they are, by what they do, or by what they believe in general about food and health, in the hope that these low coefficients will increase dramatically.

Table 3: The study particulars

tab 3

Before moving to a deeper understanding of R54 (quit eating), it is instructive to look at the response times, estimated by the coefficients assigned to the 16 elements. The Mind Genomics platform records the number of seconds between the presentation of the vignette and the rating assigned by the respondent, doing so to the nearest 100th of a second. For many respondents who multi-task, the response times can stretch out for many seconds. As a prophylactic step, all response times of 9 seconds or longer are brought to the arbitrary value of 9 seconds. Table 4 suggests that all the elements exhibit approximately the same response time, estimated to be between a low of 0.9 seconds and a high of 1.2 seconds. These are the estimated contributions of each of the elements to response time. Once again we see some, but no dramatic, differences among elements.
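The prophylactic capping rule is a one-liner; the response times below are hypothetical examples, not study data.

```python
# Cap multi-tasking outliers at the stated arbitrary ceiling of 9 seconds.
def cap_response_time(seconds, ceiling=9.0):
    return min(seconds, ceiling)

raw = [0.84, 1.13, 22.50, 3.07]          # hypothetical response times (sec)
capped = [cap_response_time(t) for t in raw]
```

The capped times then feed the same dummy-regression machinery used for R54 and R52, yielding the per-element response-time contributions reported in the table.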

What Drives ‘I Want to Stop Eating’ across Conventional Subgroups Defined by Who the Person Is?

The systematic permutation of the experimental design across the 200 respondents ensures that a valid statistical analysis by OLS regression can be done for any combination of respondents. One standard analysis is done according to conventional geo-demographics. Table 5 shows the total panel broken out by gender, age, and country. To allow the patterns to emerge more clearly, only the very strong coefficients, those of 21 or higher, are shown, and appear in shaded cells. When the granularity is deepened in terms of ‘who the respondents are’, we end up with stronger results, but unfortunately with no clear patterns defined by the meaning of the elements. That is, elements stand out in terms of their power to make a respondent say, ‘I would like to stop eating this food’, but there is no clear interpretation of ‘why’.

Table 4: Additive model showing the coefficients relating the presence/absence of the 16 elements to three responses; want to stop eating (R54), believe it is addictive (R52) and response time to the element in seconds. All coefficients were estimated from the Total Panel.

tab 4

Emergent Mind-sets Revealed by Clustering the Coefficients

A hallmark of Mind Genomics is the focus on emergent mind-sets at the granular level of the everyday. Many researchers focus on mind-sets, the topic being well explored in the history of market segmentation (Reference). The traditional focus of clustering into mind-sets has begun from the top down, seeking large groups of what are believed to be first-order segments. The effort, expense, and challenges of interpreting results have pushed so-called segmentation studies into the realm of the very expensive, and into the realm of what might be considered ‘hallmark studies.’ Doing segmentation studies to uncover emergent mind-sets for small topics of importance to the everyday is simply not in the purview of conventional research.

The ease and low expense, as well as the availability of simple DIY (do it yourself) software, change the focus, allowing research such as that presented here to reveal these mind-sets at the level of the granular, of the everyday. The simple precaution of having every respondent evaluate a unique set of 24 vignettes set up by experimental design means that it is straightforward to create a model for each individual. With 200 respondents, each evaluating different combinations of vignettes, the statistics end up with 200 individual-level equations. The coefficients of each equation show how each of the 16 elements ‘drives’ the desire to stop eating, for one respondent.

With 200 respondents, each generating a valid set of 16 coefficients, comparable across respondents and across elements, it is straightforward to use k-means clustering to put the respondents into two, three, or even more groups, such that the people within a group show similar patterns of coefficients, whereas the centroids of the groups are quite different from each other [22]. The measure of distance used is (1 − Pearson correlation, R). The Pearson correlation, R, shows the strength of the linear relation between two sets of objects (e.g., the distance between two people based upon the pattern of their 16 coefficients). The clustering is totally mathematical, with no effort made to interpret the clusters.
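The clustering step can be sketched on synthetic coefficient profiles. A caveat on the sketch: the paper names k-means, but textbook k-means assumes Euclidean distance, so this sketch runs a k-means-style assign-and-average loop directly on the stated 1 − R distance, seeded with the two most anti-correlated respondents; the two planted profiles are hypothetical.

```python
import numpy as np

# Two-mind-set clustering on d = 1 - Pearson R between 16-coefficient rows.
def two_mindsets(coeffs, iters=10):
    d0 = 1 - np.corrcoef(coeffs)                  # pairwise 1 - R
    i, j = np.unravel_index(d0.argmax(), d0.shape)
    centroids = coeffs[[i, j]]                    # farthest pair as seeds
    for _ in range(iters):
        c = np.corrcoef(np.vstack([coeffs, centroids]))
        d = 1 - c[:len(coeffs), len(coeffs):]     # respondent-to-centroid
        labels = d.argmin(axis=1)                 # nearest centroid wins
        centroids = np.vstack([coeffs[labels == g].mean(axis=0)
                               if np.any(labels == g) else centroids[g]
                               for g in range(2)])
    return labels

rng = np.random.default_rng(3)
pattern_a = np.array([20.0] * 8 + [0.0] * 8)      # hypothetical profile A
pattern_b = np.array([0.0] * 8 + [20.0] * 8)      # hypothetical profile B
coeffs = np.vstack([pattern_a + rng.normal(0, 2, (100, 16)),
                    pattern_b + rng.normal(0, 2, (100, 16))])
labels = two_mindsets(coeffs)
```

Because 1 − R compares the shape of the coefficient profile rather than its overall level, two respondents who agree on which elements matter end up in the same group even if one rates uniformly higher than the other.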

The ideal is to create two or perhaps three clusters, mutually exclusive and exhaustive, based upon the coefficients. These groups then become mind-sets. Table 6 presents the data from the two-mind-set solution emerging from clustering, and then from the three-mind-set solution. When the clustering operates on the data without attempting to ‘tell a story’, at first the data are not particularly impressive. The job of clustering is simply to put together like-minded people, based solely upon the patterns of the coefficients; we should expect nothing more than that, rather than immediately seeing strong-performing elements.

Table 5: Additive models for key demographic groups, showing the coefficients relating the presence/absence of the 16 elements to the key transformed binary variable, R54 (want to stop eating). Only coefficients equal to or higher than 21 are shown, to allow patterns to emerge more clearly.

tab 5

The results become far more remarkable when the strong elements for each mind-set are put together.

Mind Set 1 of 2 – Focus on the composition of ingredients, and how the combinations affect our brain.

  1. Enhanced flavors: Unique combinations of artificial flavors, additives, and high levels of sugar, salt, and fat make these foods highly palatable.
  2. Convenience and timesaving: Ultra-processed foods are quick and easy to prepare, saving valuable time in our busy lifestyles.
  3. Extended shelf life: Numerous preservatives and additives used in ultra-processed foods help them last longer without spoiling.
  4. Consistency: The uniformity in taste, texture, and appearance of these foods provides a predictable and familiar eating experience.
  5. Our brain associates the intense flavors with pleasure and happiness.
  6. These foods are engineered to stimulate the pleasure centers in our brain, making us want to eat more.

Mind Set 2 of 2 – Focus on the negative effect of ultra-processed food on personal health.

  1. Over time, reliance on these foods can lead to nutrient deficiencies and poor overall nutrition.
  2. These foods are often highly palatable, leading to overeating and a diminished ability to recognize true hunger cues.
  3. They can lead to chronic inflammation in the body, which is associated with several health issues.
  4. Ultra-processed foods tend to be highly addictive, leading to overconsumption and weight gain.

Mind Set 1 of 3 – Focus on the negative effects, especially eating patterns and effects on the body

  1. These foods are often highly palatable, leading to overeating and a diminished ability to recognize true hunger cues.
  2. Ultra-processed foods can negatively impact our gut health by altering the balance of healthy gut bacteria.
  3. They can lead to chronic inflammation in the body, which is associated with several health issues.
  4. Over time, reliance on these foods can lead to nutrient deficiencies and poor overall nutrition.

Mind Set 2 of 3 – Focus on scare tactics, using well-known issues.

  1. Ultra-processed foods tend to be highly addictive, leading to overconsumption and weight gain.
  2. Ultra-processed foods are often high in empty calories, leading to weight gain without providing adequate nutrition.
  3. Ultra-processed foods often contain unhealthy fats and excessive sodium, contributing to obesity and weight-related issues.

Mind Set 3 of 3 – Focus on food as giving pleasure and then causing addiction.

  1. Enhanced flavors: Unique combinations of artificial flavors, additives, and high levels of sugar, salt, and fat make these foods highly palatable.
  2. Convenience and timesaving: Ultra-processed foods are quick and easy to prepare, saving valuable time in our busy lifestyles.
  3. Our brain associates the intense flavors with pleasure and happiness.
  4. Extended shelf life: Numerous preservatives and additives used in ultra-processed foods help them last longer without spoiling.
  5. Consistency: The uniformity in taste, texture, and appearance of these foods provides a predictable and familiar eating experience.
  6. These foods are engineered to stimulate the pleasure centers in our brain, making us want to eat more.
  7. Ultra-processed food hijacks our brain’s reward system.
  8. The convenience and availability of ultra-processed foods can make them more tempting, overshadowing healthier alternatives.

Measuring the Strengths of the Ideas Evaluated in a Mind Genomics Study

A continuing issue in the world of Mind Genomics is how one can measure the ‘strength’ of the ideas, and by doing so create a way to assess one’s own efforts in a particular project such as this study on ultra-processed food, or even to measure one’s growth in thinking about and understanding a specific topic. Rather than having performance be a matter of opinion, always an option in science, it might be productive to create an index of the strength of the ideas. The operationally defined index, named the IDT (Index of Divergent Thought), is presented below; the calculations are shown in Table 7. Simply described, the IDT takes the positive coefficients for the Total Panel, then for the two-mind-set solution, and then for the three-mind-set solution, squares each, and weights each sum of squared coefficients by the relative number of respondents. Since there are six groups, totaling 600 respondents, each column has a weight. For example, Mind-Set (MS) 1 of 2 comprises 107 respondents, thus having a weight of 0.18. The sum of squares for MS 1 of 2 is 5387, so the contribution is 0.18 x 5387, or 970. Summing the weighted sums of squares produces 4989. The square root is 71. That number, 71, is the measure of the goodness of the thinking, the Index of Divergent Thought. The IDT gives a metric which takes on meaning over time as the researcher explores the topic and related topics.
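The IDT arithmetic described above can be sketched as a short function. Only the MS 1 of 2 figures quoted in the text (n = 107 of 600, sum of squares = 5387) are taken from the study; everything else in the sketch is hypothetical.

```python
import math

# IDT sketch: square the positive coefficients per group, weight each
# group's sum of squares by its share of the pooled respondent base,
# sum the weighted values, and take the square root.
def idt(groups, total_n):
    """groups: list of (n_respondents, list_of_coefficients) per column."""
    weighted = sum((n / total_n) * sum(c * c for c in coefs if c > 0)
                   for n, coefs in groups)
    return math.sqrt(weighted)

# Reproduce the worked MS 1 of 2 figures quoted in the text.
weight = round(107 / 600, 2)             # weight of MS 1 of 2: 0.18
contribution = weight * 5387             # 0.18 x 5387, about 970
```

Squaring before summing rewards a few very strong coefficients over many weak ones, which is exactly the ‘divergent thought’ the index is meant to reward.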

Table 6: Additive models for mind-sets (clusters), showing the coefficients relating the presence/absence of the 16 elements to the key transformed binary variable, R54 (want to stop eating). Only coefficients equal to or higher than 21 are shown, to allow patterns to emerge more clearly.

tab 6

Table 7: Index of Divergent Thought

tab 7

Using AI to Interpret the Themes Emerging from the Strong Scoring Elements for Each Mind-set

AI Interpretation of themes which stop a person from eating ultra-processed foods (coefficients >= 21 for R54)

Mind Set 1 of 3 – Focus on the negative effects, especially eating patterns and effects on the body

Interested in:

Convenience and time-saving: Ultra-processed foods are quick and easy to prepare, saving valuable time in our busy lifestyles; Ultra-processed foods can negatively impact our gut health by altering the balance of healthy gut bacteria; They can lead to chronic inflammation in the body, which is associated with several health issues; Over time, reliance on these foods can lead to nutrient deficiencies and poor overall nutrition; These foods are often highly palatable, leading to overeating and a diminished ability to recognize true hunger cues.

Create a label for this segment:

Time-conscious and Health-conscious Consumers

Describe this segment:

This segment consists of individuals who prioritize convenience and timesaving in their daily lives. They are attracted to ultra-processed foods because they are quick and easy to prepare, allowing them to save valuable time. However, their reliance on these foods comes with several negative consequences. Firstly, ultra-processed foods can negatively affect their gut health by disrupting the balance of healthy gut bacteria. This can lead to various digestive issues and potentially affect their overall well-being. Secondly, consuming these foods can result in chronic inflammation in the body, which is associated with a range of health problems. Additionally, individuals in this segment may face nutrient deficiencies and poor overall nutrition due to the limited nutritional content of ultra-processed foods. Moreover, these foods tend to be highly palatable, leading to overeating and a diminished ability to recognize true hunger cues. Overall, this segment is characterized by a preference for convenience and timesaving, but their reliance on ultra-processed foods can negatively impact their health and overall nutrition.

Describe the attractiveness of this segment as a target audience:

The target audience for the attributes described in this research study would be individuals who value convenience and time-saving methods when it comes to their food choices. This segment would likely consist of busy individuals or those with hectic lifestyles who prioritize efficiency and ease in their daily routines. They would be interested in ultra-processed foods that require minimal preparation time, allowing them to save valuable time in their packed schedules.

However, it is important to note that while this segment may find ultra-processed foods attractive for their convenience, there are several negative health implications associated with these types of foods. The study suggests that these foods can negatively impact gut health, lead to chronic inflammation in the body, and result in nutrient deficiencies and poor overall nutrition.

Thus, the attractiveness of this segment as a target audience is somewhat compromised by these health concerns. While they may be drawn to the convenience and time-saving aspects of ultra-processed foods, it is crucial to educate this audience about the potential risks and encourage them to prioritize healthier alternatives that provide both convenience and nutritional value.

Explain why this segment might not be attractive as a target audience:

This segment might not be attractive as a target audience because they are likely to prioritize convenience and timesaving over their health. They may not be motivated to make dietary changes or prioritize their overall well-being. Additionally, their reliance on ultra-processed foods can lead to negative health consequences such as gut health issues, chronic inflammation, nutrient deficiencies, and poor nutrition. This suggests that they may not have a strong interest in maintaining a healthy lifestyle or may not be receptive to messages promoting healthier food choices. Moreover, the fact that these foods are highly palatable and can lead to overeating indicates that this segment may have difficulty practicing portion control or recognizing true hunger cues.

List what is missing or should be known about this segment, in question form:

  1. How do ultra-processed foods save time and provide convenience compared to other food options?
  2. What specific alterations do ultra-processed foods cause in the balance of healthy gut bacteria?
  3. Which health issues are specifically associated with chronic inflammation caused by consuming ultra-processed foods?
  4. What are the specific nutrient deficiencies that can arise from relying on ultra-processed foods?
  5. How does the high palatability of ultra-processed foods result in overeating?
  6. What factors contribute to a diminished ability to recognize true hunger cues when consuming ultra-processed foods?

List and briefly describe attractive new or innovative products, services, experiences, or policies for this segment:

  1. Healthy convenience meal kits: These meal kits provide the convenience of ultra-processed foods but with a focus on healthier ingredients and preparation methods. They could include pre-packaged, pre-portioned, and pre-prepared ingredients that are minimally processed and rich in nutrients.
  2. Gut-friendly ultra-processed alternatives: Companies could develop ultra-processed foods that are specifically designed to support gut health. These products could contain ingredients that promote the growth of beneficial gut bacteria, such as prebiotics and probiotics.
  3. Anti-inflammatory food options: An innovative approach would be to create ultra-processed foods that are formulated to have anti-inflammatory properties. These products could contain ingredients known for their anti-inflammatory effects, such as certain spices, herbs, and plant-based oils.
  4. Nutrient-fortified ultra-processed foods: To address the issue of nutrient deficiencies, companies could develop ultra-processed foods that are enriched with essential vitamins, minerals, and other nutrients. This could help individuals maintain adequate nutrient intake while consuming these convenient food options.
  5. Mindful eating apps and tools: Innovative services could include smartphone apps or other tools that help individuals develop mindfulness around their eating habits. These apps could provide reminders to eat slowly, track hunger and fullness levels, and offer tips for recognizing true hunger cues.
  6. Personalized nutritional guidance: Services could be developed that provide personalized nutritional guidance for individuals who rely on ultra-processed foods. This could involve online platforms or apps that offer tailored meal plans or recommendations based on an individual’s specific dietary needs and goals.
  7. Industry regulations and labeling: Policies could be implemented that enforce stricter regulations on the labeling of ultra-processed foods. This could help consumers make more informed choices by providing clear information about the nutritional content and potential health impacts of these products.

Which messages will interest this segment?

This segment would likely be interested in messages related to the negative impacts of ultra-processed foods on their health. They may be particularly interested in messages about how these foods can negatively impact gut health, contribute to chronic inflammation, and lead to nutrient deficiencies and poor overall nutrition. They may also be interested in messages about the potential negative effects on weight and appetite control, such as overeating and diminished ability to recognize true hunger cues.

Mind-Set 2 of 3 – Focus on scare tactics, using well-known issues

Interested in:

Ultra-processed foods often contain unhealthy fats and excessive sodium, contributing to obesity and weight-related issues; Ultra-processed foods tend to be highly addictive, leading to overconsumption and weight gain; Ultra-processed foods are often high in empty calories, leading to weight gain without providing adequate nutrition.

Create a label for this segment:

“Processed Food Detractors”

Describe this segment:

This segment of the research study is discussing the attributes of ultra-processed foods. The description highlights several negative characteristics of these foods. Firstly, it states that ultra-processed foods often have unhealthy fats and excessive sodium, which can contribute to obesity and weight-related issues. Secondly, it suggests that these foods tend to be highly addictive, leading to overconsumption and weight gain. Lastly, it mentions that ultra-processed foods are often high in empty calories, meaning they provide little to no nutritional value and can result in weight gain without meeting the body’s nutritional needs. Overall, this segment focuses on the negative impacts of ultra-processed foods on weight and overall health.

Describe the attractiveness of this segment as a target audience:

The segment of individuals who consume ultra-processed foods may not be considered an attractive target audience from a health-focused standpoint. This is because these individuals are prone to obesity and weight-related issues due to the unhealthy fats and excessive sodium present in such foods. Moreover, ultra-processed foods tend to be highly addictive, which can lead to overconsumption and further weight gain.

Additionally, these foods are often high in empty calories, meaning they provide little to no nutritional value while contributing to weight gain. Hence, this segment may not be seen as attractive from a nutritional standpoint since they are likely to have inadequate nutrition despite gaining weight.

However, it is important to note that from a marketing perspective, this segment might still have potential as a target audience due to their consumption patterns. Ultra-processed foods are commonly accessible and widely consumed, therefore indicating a potential market size.

Explain why this segment might not be attractive as a target audience:

This segment might not be attractive as a target audience for a few reasons.

Firstly, this segment’s consumption of ultra-processed foods suggests a lack of awareness or concern about healthy eating habits. They may prioritize convenience or taste over the nutritional value of their food choices. This preference for unhealthy options may indicate a resistance to change or a lack of motivation to adopt healthier behaviors. As a result, targeting this segment may require significant effort and resources to overcome their existing habits and attitudes towards food.

Additionally, this segment’s tendency to overconsume ultra-processed foods due to their addictive nature poses a challenge. Their pattern of overeating makes it harder to promote balanced eating habits or portion control, and it may be difficult to convince them to reduce their consumption or change their eating preferences.

Moreover, the fact that ultra-processed foods high in empty calories contribute to weight gain without providing adequate nutrition is another reason why this segment may not be an attractive target audience. Convincing them to prioritize nutrition and the long-term health effects of their food choices may be difficult, as their current habits suggest a focus on immediate gratification rather than the overall quality of their diet.

Overall, the combination of their preference for ultra-processed foods, overconsumption, and disregard for adequate nutrition makes this segment less attractive as a target audience. Effectively influencing their behaviors would require significant effort, resources, and potentially a major shift in their attitudes towards food and health.

List what is missing or should be known about this segment, in question form:

  1. What specific types of ultra-processed foods were included in the study’s sample?
  2. Did the research study investigate the relationship between the consumption of ultra-processed foods and specific weight-related issues, such as diabetes or cardiovascular diseases?
  3. What criteria were used to classify the foods as ultra-processed?
  4. Were there any specific demographic characteristics (age, gender, socioeconomic status) of the participants that could have influenced the findings?
  5. How was the overconsumption of ultra-processed foods assessed in the study?
  6. Were there any other factors, aside from the addictive properties of ultra-processed foods, which could have contributed to weight gain in the participants?
  7. Did the study consider any potential confounding variables that could affect the relationship between ultra-processed foods and weight-related issues?
  8. Were the participants’ dietary patterns or overall diets considered while analyzing the association between ultra-processed foods and weight gain?
  9. How long was the study conducted, and did it examine any long-term effects of consuming ultra-processed foods?
  10. Did the research study compare the effects of ultra-processed foods to those of minimally processed or unprocessed foods on weight-related issues?

List and briefly describe attractive new or innovative products, services, experiences, or policies for this segment:

  1. “Clean Label” Ultra-Processed Foods: These are new products that aim to provide ultra-processed foods while minimizing the use of unhealthy fats and excessive sodium. These foods prioritize natural ingredients and are free from artificial additives, appealing to health-conscious consumers who still desire the convenience of processed foods.
  2. Nutrient-Enriched Ultra-Processed Foods: These innovative products add essential nutrients, such as vitamins, minerals, and fiber, to ultra-processed foods. By fortifying these products, they provide added nutritional value, making them a more attractive option for individuals concerned about weight gain and inadequate nutrition.
  3. Personalized Nutrition Apps: With the rise of smartphone technology, personalized nutrition apps can provide tailored recommendations and tracking for individuals consuming ultra-processed foods. These apps can help users make healthier choices within the constraints of their diet preferences, providing guidance on portion sizes, healthier alternatives, and overall nutritional balance.
  4. Reduced-Sodium Ultra-Processed Foods: In response to concerns about excessive sodium consumption, innovative products are minimizing sodium content in ultra-processed foods without sacrificing taste. These new options cater to individuals who are concerned about their weight and health but still crave the convenience and flavors of processed foods.
  5. Regulation of the Ultra-Processed Food Industry: Policy changes that aim to regulate and control the production and marketing of ultra-processed foods can have a significant impact. Introducing stricter labeling requirements, limiting advertising to children, and promoting healthier alternatives through taxation or subsidies can help steer consumers towards more nutritious options and improve overall public health.

Overall, these new and innovative products, services, experiences, or policies offer ways to address the negative attributes associated with ultra-processed foods. By improving the nutritional profile, personalizing choices, and regulating the industry, these options aim to promote healthier eating habits and combat the obesity and weight-related issues often associated with ultra-processed foods.

Which messages will interest this segment?

This segment may be interested in messages that highlight the negative impact of ultra-processed foods on health and weight-related issues. They might be particularly interested in messages that emphasize the unhealthy fats, excessive sodium, and empty calories found in these foods, which can contribute to obesity and lack of proper nutrition. They may also respond to messages that mention the addictive nature of ultra-processed foods, leading to overconsumption and weight gain.

Mindset 3 of 3: Focus on food as giving pleasure and then causing addiction

Interested in:

  1. Ultra-processed food hijacks our brain’s reward system.
  2. Our brain associates the intense flavors with pleasure and happiness.
  3. These foods are engineered to stimulate the pleasure centers in our brain, making us want to eat more.
  4. Convenience and time-saving: Ultra-processed foods are quick and easy to prepare, saving valuable time in our busy lifestyles.
  5. Extended shelf life: Numerous preservatives and additives used in ultra-processed foods help them last longer without spoiling.
  6. Consistency: The uniformity in taste, texture, and appearance of these foods provides a predictable and familiar eating experience.
  7. Enhanced flavors: Unique combinations of artificial flavors, additives, and high levels of sugar, salt, and fat make these foods highly palatable.

Create a label for this segment:

“Ultra-Processed Food Lovers”

Describe this segment:

This segment of the research study focuses on the various attributes of ultra-processed foods. It highlights how these foods can hijack our brain’s reward system by associating intense flavors with pleasure and happiness. The engineered nature of ultra-processed foods is emphasized, as they are specifically designed to stimulate the pleasure centers in our brain, leading to a desire to consume more of these foods.

The convenience and time-saving aspect of ultra-processed foods is discussed, as they are quick and easy to prepare, catering to busy lifestyles. Additionally, the use of numerous preservatives and additives in these foods allows for an extended shelf life, preventing them from spoiling quickly.

Consistency is another attribute mentioned, pointing out that ultra-processed foods deliver a predictable and familiar eating experience in terms of taste, texture, and appearance. This ensures a consistent consumer experience with these foods.

Furthermore, the enhanced flavors of ultra-processed foods are elaborated upon, with the inclusion of artificial flavors, additives, and high levels of sugar, salt, and fat. These combinations make these foods highly palatable, further contributing to their appeal.

Overall, this segment of the research study provides a comprehensive description of the attributes associated with ultra-processed foods, highlighting their impact on the brain’s reward system, convenience, shelf life, consistency, and their palatability due to enhanced flavors.

Describe the attractiveness of this segment as a target audience:

The attractiveness of this segment as a target audience is likely to be high. The description highlights several factors that make ultra-processed foods appealing to consumers. The fact that these foods hijack the brain’s reward system and create associations with pleasure and happiness suggests that individuals are likely to have a strong desire for these products.

The convenience and time-saving aspect of ultra-processed foods also makes them attractive to busy individuals who are looking for quick and easy meal options. This aligns with the busy lifestyles of many people today and can be highly appealing for those who are pressed for time.

Additionally, the extended shelf life provided by preservatives and additives makes these foods appealing to those who want their meals to last longer without spoiling. This can be particularly beneficial for individuals who prioritize budgeting or minimizing grocery shopping trips.

The consistency in taste, texture, and appearance of ultra-processed foods provides a predictable and familiar eating experience, which can be attractive to those who seek comfort or are resistant to change. Similarly, the unique combinations of flavors and high levels of sugar, salt, and fat enhance the palatability of these foods, appealing to individuals who crave rich and indulgent tastes.

Overall, the attractiveness of this segment lies in the combination of factors that make ultra-processed foods highly desirable. From the pleasure-inducing effects on the brain to the convenience and consistency they offer, these attributes make them an appealing target audience for marketers.

Explain why this segment might not be attractive as a target audience:

This segment might not be attractive as a target audience because the attributes mentioned are associated with negative health effects. The fact that ultra-processed foods hijack our brain’s reward system and stimulate the pleasure centers in our brain may lead to overeating and contribute to unhealthy eating habits. The extended shelf life and convenience of these foods may promote a sedentary lifestyle and discourage the consumption of fresh, nutritious food. Additionally, the high levels of artificial flavors, additives, sugar, salt, and fat in ultra-processed foods can be detrimental to one’s health and contribute to obesity, diabetes, heart disease, and other chronic conditions. Therefore, targeting this segment may not align with promoting a healthy and balanced lifestyle.

List what is missing or should be known about this segment, in question form:

  1. What are the potential negative health impacts of consuming ultra-processed foods?
  2. Are there any regulations in place regarding the use of preservatives and additives in ultra-processed foods?
  3. How do ultra-processed foods compare in nutritional value to unprocessed or minimally processed foods?
  4. Are there any specific ultra-processed foods that have been linked to particular health concerns?
  5. What are the long-term effects of regularly consuming ultra-processed foods?
  6. Are there any alternative options or healthier alternatives to ultra-processed foods?
  7. Are there any differences in the effects of ultra-processed foods on various age groups or populations?
  8. How do personal preferences and cultural factors influence the consumption of ultra-processed foods?
  9. Is there a correlation between the increasing availability and consumption of ultra-processed foods and rising rates of certain health conditions?
  10. What strategies can be implemented to reduce the consumption of ultra-processed foods in society?

List and briefly describe attractive new or innovative products, services, experiences, or policies for this segment:

  1. Healthier alternatives to ultra-processed foods: Companies can develop innovative products that offer a healthier alternative to ultra-processed foods. This can include ready-made meals or snacks that are made with natural ingredients, lower in sugar, salt, and fat, and free from artificial additives.
  2. Personalized meal planning services: Services that provide personalized meal plans based on individual preferences and nutritional needs can help people make healthier food choices. These services can also include pre-prepared meals that are made with fresh, whole foods and are delivered to the customer’s doorstep.
  3. Mindful eating apps: Mobile applications that promote mindful eating can help individuals overcome the intense cravings for ultra-processed foods by providing tools and techniques to manage emotional eating and develop healthier eating habits. These apps can include features such as guided meditation, food diaries, and tips for healthier snacking.
  4. Education and awareness campaigns: Policies that focus on educating the public about the negative effects of consuming ultra-processed foods and raise awareness about healthier alternatives can be effective in encouraging people to make healthier food choices. This can include campaigns through various media channels, public health programs, and collaboration with schools and community organizations.
  5. Strategic placement of healthy options: Policy measures that ensure healthy food options are readily available and prominently displayed in supermarkets, cafeterias, and other food retail establishments can make it easier for individuals to choose healthier alternatives instead of ultra-processed foods. This can involve collaborations with food retailers to promote and highlight nutritious options.
  6. Promotion of home cooking: Encouraging and supporting home cooking through policies that provide incentives, such as tax breaks or subsidies for purchasing fresh ingredients or kitchen equipment, can empower individuals to prepare their meals using whole and unprocessed ingredients. This can also be complemented by educational programs that teach cooking skills and provide recipe ideas for quick and healthy meals.
  7. Labeling and transparency: Policies that mandate clear and easy-to-understand labeling on food packaging can help individuals make informed choices by providing information about the level of processing, artificial additives, and nutritional content. This can empower consumers to select healthier options and avoid ultra-processed foods.
  8. Community gardens and urban farming initiatives: Creating community gardens or supporting urban farming initiatives can promote access to fresh and local produce, making it easier for individuals to incorporate healthier foods into their diets. These initiatives can also foster community engagement and social connections around food.

Which messages will interest this segment?

  1. “Discover how ultra-processed foods can hijack your brain’s reward system.”
  2. “Learn how the intense flavors of ultra-processed foods can bring pleasure and happiness.”
  3. “Uncover the science behind how ultra-processed foods stimulate the pleasure centers in your brain, making you crave more.”
  4. “Save valuable time with quick and easy-to-prepare ultra-processed foods.”
  5. “Enjoy the extended shelf life of ultra-processed foods, thanks to preservatives and additives.”
  6. “Experience the consistency in taste, texture, and appearance of ultra-processed foods for a predictably satisfying meal.”
  7. “Indulge in the enhanced flavors of ultra-processed foods, with unique combinations of artificial flavors, additives, and high levels of sugar, salt, and fat.”

Discussion and Conclusions

The history of social attitudes is a long and complex one. Only in the last century has the field of research on social attitudes emerged with new ideas. Time-honored approaches such as focus groups, topical surveys, public opinion polls, and now text mining of the available literature and press have provided much of what we know about how people feel. One need only visit the library to get a sense of the depth of knowledge acquired by the researchers of the time. And for those projects that did not warrant a book, there are always the shorter papers in the journals of then and now.

At the same time, however, during the past fifty years the harder sciences, including psychology, have migrated away from books toward journals, with their style of short, focused reports. These papers are less readable and more stylized, representing a new way to present information. This new way is considered professional because it focuses on a topic, presents a relevant literature review, an appropriate experiment, statistical analysis, and then discussion and conclusions.

When we compare the traditional studies of social and psychological issues to the shorter, focused reports of experimental studies, we find far more flexibility in the harder sciences. The flexibility lies not so much in the way the data are reported as in the ability to carve out a piece of the world and study it, manipulating that piece of the world deeply and then measuring the result. Social and traditional psychological research does not have that desirable capability.

The introduction of Mind Genomics adds a new dimension to social and attitudinal research. The underlying worldview of Mind Genomics is a combination of focus on the everyday, along with experimentation. Rather than asking the respondent to give an opinion, e.g., about the desire to stop eating certain foods or the belief that a food is addictive, Mind Genomics actually runs an experiment to identify which particular messages, if any, drive the perception. As such, Mind Genomics moves social research in a new direction, where the topics of the everyday can be explored more deeply, with a richness of information. Rather than having to infer much of that richness by pulling together a picture from disparate sources, the Mind Genomics ‘program’, in its scientific meaning, creates a matrix of direct knowledge from the responses of people to relevant stimuli. Indeed, the strong-performing elements from Mind-Sets 1 of 3, 2 of 3, and 3 of 3 present information to us in a direct format, information that would otherwise have to be distilled from far more effortful, less direct sources.
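The experiment described above can be sketched numerically. The following toy example is a hypothetical illustration, not the authors’ actual analysis: each test vignette is coded with 0/1 dummy variables for the presence or absence of each message element, and ordinary least squares recovers each element’s additive impact on the respondent’s rating. All numbers (four elements, 200 vignettes, the “true” impacts) are assumptions for the demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 200 vignettes, each coded 1 if an element
# (message) was present in the vignette, 0 if absent.
n_vignettes, n_elements = 200, 4
X = rng.integers(0, 2, size=(n_vignettes, n_elements)).astype(float)

# Assumed "true" additive impacts of the four elements, plus a baseline
# rating, used only to simulate respondent ratings with some noise.
true_impacts = np.array([12.0, -3.0, 7.0, 0.5])
intercept = 40.0
y = intercept + X @ true_impacts + rng.normal(0.0, 2.0, n_vignettes)

# Dummy-variable regression: prepend a constant column and solve the
# least-squares problem; the coefficients estimate each element's impact.
A = np.column_stack([np.ones(n_vignettes), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated baseline:", round(float(coef[0]), 1))
print("estimated impacts: ", np.round(coef[1:], 1))
```

Clustering respondents by their individual coefficient vectors (e.g., with k-means) is what would then yield mind-sets such as the three reported here.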

To conclude, therefore, this paper suggests that issues of attitude can be investigated straightforwardly by experiments, using cognitively rich stimuli, in a manner that cannot be gamed, and in a fashion that immediately reveals the nature of people and the different types of thinking about the same problem, distinguishing not those who are measurably different, but those who think differently about the same micro-topic.


Gender Responsive Programming: Towards the Promotion of Women’s Mental Well-being

DOI: 10.31038/AWHC.2023641

Short Commentary

The world of health policy and public health considers the health of women an important topic of focus, and in most cases links the well-being of women to that of children and the family and, legitimately, to the health of society overall. Moreover, the emphasis falls mostly on maternal and child health. This perspective is well founded, given that women’s health is well documented as promoting the general health of the family and of everyone in society. The researcher, however, notes the limitation of this perspective in promoting the general well-being of women across all divides in society.

Women offenders fall among the special population groups in our society in dire need of attention to their overall well-being. Unfortunately, very few studies have focused on who they are and the reasons for their incarceration. In fact, the Risk-Need-Responsivity (RNR) model of offender rehabilitation [1], which has dominated rehabilitation programs globally over the last two decades, gives the impression that the reasons for criminality in women offenders are similar to those of their male counterparts, but the limited studies on the incarceration of women have shown this assumption to be erroneous. Thus, globally, the focus of rehabilitation programs for women offenders has often mirrored that of male offenders [2,3].

In the study “Gender Responsive Programming in Kenya: Time is Ripe!” the researchers concluded that serious attention is needed in the rehabilitation of women in correctional facilities, not only to help them reform but also to promote their mental well-being. The study noted that women have distinct psychological needs associated with their criminal behaviors that rehabilitation programs must address. However, most governments and policy makers across the criminal justice system globally, Kenya included, tend to focus more on reducing recidivism in women offenders through punishment, with little focus on treating the specific psychological needs that contribute to women’s offending behaviors.

Feminist psychologists working within the criminal justice system, e.g. Van Voorhis and Salisbury (2014) [4], propose that rehabilitation within correctional facilities should focus on restoring offenders to useful lives through therapy, education and/or training, with the aim of promoting the psychological well-being of the individual being rehabilitated. Their argument implies that informed rehabilitation practice is more important than the “punishment” of offenders that tends to dominate most correctional programs.

The study “Gender Responsive Programming in Kenya: Time is Ripe!” was informed by two theories. The first was the RNR model of offender rehabilitation [5], which emphasized the critical need to assess offenders’ needs. Thus, for women offenders, the psychological issues that contributed to offending need to be assessed through validated instruments, to ensure that treatment can be matched to their actual needs and thereby promote their overall well-being. This is well demonstrated in the second theory the study employed, the Relational Theory of Women’s Psychological Development, originated by Miller (1976) [6-10]. In summary, the theory acknowledges the difference in the moral and psychological development of men and women. Feminist criminologists have adapted this theory to the understanding of women’s pathways to criminality, noting that “connection,” often disturbed in the lives of women offenders, is a critical developmental need that has to be addressed in rehabilitation for women’s mental well-being and for healing from the past experiences that relate closely to their offending tendencies.

Through mixed-methods data collection and analysis, the above study confirmed that women’s reasons for incarceration were in most cases different from those of their male counterparts. The study addressed specific psychological needs of women offenders in the women’s correctional facilities in Kenya, notably: victimization or histories of abuse that were traumatic in nature, parental distress, dysfunctional relationships, low self-esteem and reduced self-efficacy.

To explain briefly, the victimization or histories of abuse in women offenders were mostly physical and sexual in nature. The abuse happened mostly in childhood and in their spousal relationships. For lack of a comprehensive model for treating the impact of these experiences, most victims ended up suffering adverse psychological reactions, including posttraumatic stress disorder (PTSD), clinical depression, clinical anxiety and toxic stress. In their disturbed state of mind, and unable to reason out better ways of dealing with their issues, they in turn acted out in ways associated with criminal behaviors such as murder, child neglect and drug use.

Most women offenders suffered parental distress associated with a lack of parenting skills, lack of support from their spousal relationships, and poverty. Most were unable to adequately provide basic needs for their families. Moreover, the majority were uneducated, which contributed to their poor financial status, since they could neither acquire jobs nor afford the capital to engage in any form of self-employment. As sole breadwinners for their families, their struggle to meet their obligations to provide for themselves and their families sometimes pushed them into criminal activities such as theft, fraud and child neglect that led to their incarceration.

A significant number of women offenders were likewise incarcerated for offenses associated with dysfunctional relationships. A high number of these cases involved spousal fights, followed by disagreements within the family of origin. These ended in murder and manslaughter cases. The study revealed the psychological distress of struggling with dysfunctional relationships as a contributory factor in the choice to offend.

The study found that, given the many challenges women experienced in their day-to-day living, many of the women offenders had low self-esteem and greatly reduced self-efficacy. Low self-esteem in women offenders was linked to crimes such as fraud, manslaughter and prostitution. It was noted that low self-esteem and self-efficacy seem to push women to crime in the same way that elevated self-esteem and self-efficacy pose problems.

The mental well-being of women offenders therefore cannot be ignored in rehabilitation practice. The World Health Organization (WHO, 2022) [11] defines mental health as a state of well-being in which an individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and is able to make a contribution to his or her community. The findings regarding the psychological needs of women offenders that contribute to criminal activity point to mental health as a major focus for women’s rehabilitation and for the criminal justice system as a whole.

Most recently, the researcher found it necessary to develop this study further by employing the biopsychosocial-spiritual model of illness and treatment in assessing the mental well-being of women offenders. The biopsychosocial model was developed by George Engel and John Romano in the 1970s, holding that the treatment of mental health requires a multi-systems lens. The model recognizes the interaction of psychological, biological and social factors in illness; treatment must therefore address the different parts of the model. That is to say, an individual experiencing a psychological illness typically suffers some biological illness as well, and the person’s social well-being is equally affected. Other scholars have suggested the need for a fourth, “spiritual” component in the model, yielding the biopsychosocial-spiritual model. The study found the spiritual component quite relevant, as a majority of offenders observed that a spiritual life was important in developing resilience (Figure 1).


Figure 1: An illustration of a biopsychosocial-spiritual model of treatment in the mental well-being of women offenders. Source: Researcher-adaptation.

A critical review of the biopsychosocial-spiritual model of treatment supports the principles of gender responsive programming in women’s correctional facilities, and the model is thus applicable to ensuring the well-being of women prisoners. The recent investigation employed four focus group discussions within two maximum-security women’s correctional facilities in Kenya. The findings confirmed the need to incorporate the biopsychosocial-spiritual model of treatment, in view of gender responsive programming, in women’s correctional facilities to promote the mental well-being of the offenders.

The researcher therefore recommends, based on the RNR model of offender rehabilitation, that upon admission to a correctional facility, women offenders be screened and/or assessed to establish the unique psychological needs associated with their offenses. This will ensure placement into the correct rehabilitation and treatment programs. It is important that governments ensure that gender responsive programs are established. In view of the further development of the original study, it is also important that the treatment of women offenders be informed by the principles of the biopsychosocial-spiritual model, to ensure that the broad issues of the offender are addressed in rehabilitation. As already observed, such an approach will improve the mental well-being of the offenders alongside effective rehabilitation.


SARS-CoV-2 Infection: Associated Disorders and Post- COVID-19 Clinical Sequelae

DOI: 10.31038/IDT.2023414

Abstract

Three years after the beginning of the COVID-19 pandemic, SARS-CoV-2 infection still poses a multifaceted health problem in terms of acute infections or reinfections and associated disorders, including thrombotic complications, kidney diseases and endocrine disorders. Another important aspect of COVID-19 is the emergence of viral variants, each having unique and overlapping amino acid substitutions that affect transmissibility, disease severity and susceptibility to natural or vaccine-induced immune responses.

After COVID-19, 5-15% of patients suffer from postinfectious sequelae, involving multiple organ systems, grouped together as ‘long COVID’ or ‘postacute sequelae of SARS-CoV-2 (PASC)’. Long COVID is defined by one or multiple signs or symptoms persisting or occurring more than 4 weeks after the onset of acute SARS-CoV-2 infection, including shortness of breath, fatigue with or without exertion, postural orthostatic tachycardia, myalgia, peripheral neuropathy, endocrine and kidney disorders, thrombotic complications, multisystem inflammatory syndrome and others.

The pathophysiology of acute COVID-19, of viral variants as well as of long COVID remains elusive. In addition, to date there are few or no treatment options available that have been rigorously evaluated in clinical trials. Last but not least, patients with long COVID are subject to stigmatization because of perceived simulation or psychosomatization of symptoms, in view of the lack of specific diagnostic parameters.

Keywords

Anticoagulants, Dialysis, Endocrine system, Kidney diseases, Long COVID, Obesity, Post-COVID-19 condition, Thrombosis, Viral variants

Abbreviations

BMI: Body Mass Index; COVID-19: Coronavirus Disease 2019; ESKD: End-Stage Kidney Disease; mAB: Monoclonal Antibody; RCT: Randomized Controlled Trial; SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus-2; VITT: Vaccine-induced Thrombotic Thrombocytopenia

Introduction

The COVID-19 pandemic is caused by the RNA virus SARS-CoV-2 and has claimed more than six million lives globally. Basic research has addressed the mechanisms of viral entry and viral spreading, involving among others the angiotensin converting enzyme 2 (ACE2) receptor and the transmembrane protease/serine subfamily 2 (TMPRSS2). Further, host immune responses and pathological effects of SARS-CoV-2 infection in different tissues and organs have been described, and risk factors for adverse outcomes of COVID-19 have been identified [1].

Shortly after the emergence of COVID-19 in late 2019, clinicians recognized an apparent association between SARS-CoV-2 and both arterial and venous thrombosis [2,3]. Further, it was apparent that COVID-19 can cause acute kidney injury and may cause or exacerbate chronic renal diseases [4-6]. Last but not least, the interaction between SARS-CoV-2 infection and the endocrine system has been an area of recent scientific and clinical research. Endocrine disorders, including obesity and diabetes mellitus were recognized as risk factors for poor COVID-19 outcomes [7-10].

Another important aspect of COVID-19 was the emergence of several viral variants, each having unique and overlapping amino acid substitutions. The major variants have been designated by the WHO and CDC as alpha, beta, gamma, delta and Omicron (BA.1 and BA.2). They vary in terms of transmission efficiency, pathogenicity/disease severity and susceptibility to natural or vaccine-induced immune responses, as well as resistance to monoclonal antibodies [11].

Finally, more recently postinfectious sequelae of SARS-CoV-2 infection have been recognized and have been termed, among others, post-acute COVID-19 syndrome, long COVID or long-haul COVID [12-14].

In the following, some basic and clinical aspects of SARS-CoV-2 infection and COVID-19 will be presented in more detail to add to the understanding of the virological characteristics and the clinical sequelae of this infection.

SARS-CoV-2 Infection and Associated Clinical Diseases

Thrombotic Complications

In late 2019, shortly after the emergence of COVID-19, clinicians recognized an association between the infection and both arterial (myocardial infarction, stroke, acute limb and mesenteric ischemia, coronary stent thrombosis) and venous thromboses. Cohort studies revealed thrombotic events in 17-47% of critically ill patients and in 3-11% of noncritically ill patients. Later, multicenter prospective studies failed to confirm this very high burden of thrombotic complications [15-17].

The risk of thromboembolic complications in patients with COVID-19 is highest in hospitalized critically ill patients, lower in hospitalized non-critically ill patients, moderate in discharged post-COVID-19 patients and low in stable outpatients [17]. From this it follows that hospitalized critically ill patients should be treated with prophylactic-dose heparin or low molecular weight heparin (LMWH), i.e., daily 40 mg enoxaparin, 4,500 units tinzaparin, 5,000 units dalteparin or 2 x daily 5,000 units heparin. Hospitalized non-critically ill patients should receive treatment-dose heparin or LMWH, i.e., 2 x daily enoxaparin 1 mg/kg, daily 175 units/kg tinzaparin, 2 x daily 100 units/kg dalteparin or continuous i.v. heparin. Discharged post-COVID-19 patients should be treated with a prophylactic-dose oral anticoagulant, e.g., 10 mg rivaroxaban daily for 35 days. In view of the low risk of thromboembolic complications in stable COVID-19 outpatients, no anticoagulation is recommended. As ongoing RCTs are completed, emerging new data need to be incorporated into updated evidence-based guidelines.
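The risk tiering above amounts to a simple lookup; the following is an illustrative sketch of that logic only (the tier keys and regimen strings are invented for this example), not a clinical decision tool:

```python
# Illustrative mapping of the risk tiers described in the text to the
# anticoagulation strategies listed there. NOT a clinical decision tool.
RECOMMENDATIONS = {
    "critically_ill_inpatient":
        "prophylactic-dose heparin or LMWH (e.g., enoxaparin 40 mg daily)",
    "noncritically_ill_inpatient":
        "treatment-dose heparin or LMWH (e.g., enoxaparin 1 mg/kg twice daily)",
    "discharged_post_covid":
        "prophylactic-dose oral anticoagulant (e.g., rivaroxaban 10 mg daily for 35 days)",
    "stable_outpatient": "no anticoagulation recommended",
}

def anticoagulation_for(tier: str) -> str:
    """Look up the anticoagulation strategy for a patient risk tier."""
    if tier not in RECOMMENDATIONS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RECOMMENDATIONS[tier]
```

The point of the lookup structure is that the recommendation depends only on the risk tier, mirroring the tier-by-tier presentation in the text.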

Kidney Injury

COVID-19 can cause acute kidney injury and may cause or exacerbate chronic renal diseases [4-6]. The causes of renal impairment in patients with COVID-19 are thought to be impaired renal perfusion and immune dysregulation [18]. Renal parenchymal cells, in particular proximal tubular cells, express high levels of ACE2 and TMPRSS2 as well as other proteases, suggesting that the kidney may be susceptible to SARS-CoV-2 infection [19]. To date, however, it remains controversial whether the kidney is a direct target of SARS-CoV-2 infection [20].

Numerous glomerular diseases were found to be associated with COVID-19 and/or with vaccination against SARS-CoV-2: podocytopathies, focal segmental glomerulosclerosis, minimal change disease and membranous nephropathy. Several antiviral medications, including molnupiravir (an inhibitor of viral RNA replication), ritonavir-boosted nirmatrelvir (an inhibitor of the SARS-CoV-2 MPRO protease) and remdesivir (an inhibitor of the viral RNA polymerase), have been approved for the treatment of patients with COVID-19 and kidney disease. In addition, several anti-Spike monoclonal antibody formulations and immunomodulatory medications, including tocilizumab (an inhibitor of the interleukin-6 receptor) and baricitinib (an oral JAK1/JAK2 inhibitor), are recommended in selected patients with COVID-19 [21]. Further studies are needed, however, to define the optimal strategies for prevention and treatment of COVID-19 in patients with kidney diseases.

Endocrine Disorders

The interaction between COVID-19 and the endocrine system has recently become a major area of scientific and clinical interest. In this context, several endocrine disorders, including obesity and diabetes mellitus, have been recognized among the significant risk factors for COVID-19 severity [1,7-10]. In addition to body mass index, increased visceral adipose tissue appears to predict COVID-19 severity but not mortality [22]. In the course of the pandemic, and adjusting for comorbidities, obesity was recognized as an independent risk factor for COVID-19 severity and mortality in women and men [7]. In addition, obesity-related comorbidities were shown to be associated with poor COVID-19 outcomes [8,9]. Male sex appears to be associated with greater COVID-19 severity and mortality, but not susceptibility [23].

The available data regarding the long-term effects of SARS-CoV-2 infection on thyroid, adrenal and male gonadal functions are too limited to draw scientific or clinical conclusions.

SARS-CoV-2 Variants

An important aspect of COVID-19 is the emergence of viral variants, each having unique and overlapping amino acid substitutions. The five major variants, designated by WHO and CDC as alpha, beta, gamma, delta and Omicron BA.1 and BA.2 vary in terms of transmission efficiency, pathogenicity/disease severity and susceptibility to natural or vaccine-induced immune responses and monoclonal antibodies. In general, RNA viruses display significant plasticity based on the low fidelity of the RNA-dependent RNA polymerases and the lack of genomic repair mechanisms such as proofreading and mismatch repair [24]. The phylogenetic evidence suggests that this plasticity may have allowed SARS-CoV-2 to jump from bats to humans [25].

The ancestral strain of SARS-CoV-2 was sequenced in Wuhan, China in December 2019 (GenBank accession no. MN908947). The first significant mutation was detected in March 2020: a D614G substitution in the viral spike glycoprotein S [11,26-28].

Alpha Variant

The alpha or B.1.1.7 variant was identified in September 2020 in the UK and was shown to have increased transmission efficiency and pathogenicity, but no resistance to monoclonal antibodies, no loss of protection from a previous infection and no change in vaccine efficacy [11].

Beta Variant

The beta or B.1.351 variant was identified in May 2020 in South Africa. It had increased transmission efficiency and pathogenicity, resistance to the mAb combination bamlanivimab-etesevimab, reduced protection from a previous infection and reduced vaccine efficacy [11].

Gamma Variant

The gamma variant was identified in November 2020 in Brazil and was shown to have similar characteristics as the beta variant [11].

Delta Variant

The delta or B.1.617.2 variant was first identified in India in October 2020 and displayed similar properties as the beta and gamma variants, respectively [11].

Omicron Variants

The omicron or B.1.1.529 variant first emerged in November 2021 and was shown to have increased transmission efficiency, a lower pathogenicity, a reduced protection from a previous infection and lower vaccine efficiency. Two significant sublineages BA.1 and BA.2 have been described with predominant substitutions in the receptor binding domain of the Spike glycoprotein and in the receptor binding motif as compared to the ancestral Wu-Hu-1 strain [11].

Variants of SARS-CoV-2 continue to be a challenge for the diagnosis, treatment and prevention of COVID-19. Tracking known variants, early identification of new variants and updating COVID-19 vaccine composition for new SARS-CoV-2 variants are of paramount importance for effective control of SARS-CoV-2 infection and the COVID-19 pandemic. Examples of newer Omicron variants are XBB.1.5, XBB.1.16.6, EG.5, FL.1.5.1 and FE.1.

Post-COVID Conditions

Postinfectious sequelae of SARS-CoV-2 infection have been recognized and are termed, among others, post-acute COVID-19 syndrome, long COVID or post-COVID-19 [12-14]. Common signs and symptoms of post-COVID-19 include, among others, pulmonary, cardiovascular and neuropsychiatric manifestations. Common manifestations are fatigue, shortness of breath, memory or cognitive disturbances, headache, smell or taste disturbances, autonomic dysfunction, anxiety and depression, and a decreased functional capacity. While age, comorbidities such as obesity and psychiatric illnesses, and the severity of the acute COVID-19 are risk factors for long COVID, long-lasting sequelae of COVID-19 can also affect young and previously healthy individuals with mild COVID-19. Symptoms may be new, may persist after acute COVID-19, or may be relapsing and remitting. The duration of symptoms is variable; most patients have a significant reduction of symptoms by one year postdiagnosis. In a cohort study from China, the proportion of patients with at least one symptom decreased from about 70% at 6 months to about 50% at 12 months, and the proportion with fatigue or muscle weakness from about 50% at 6 months to about 20% at 12 months. In addition, about 90% of subjects returned to their original work after 12 months [29]. Risk factors for long COVID include female sex and preexisting respiratory and psychiatric diseases. The mainstay of treatment is individualized, including supportive care to mitigate symptoms as well as physical rehabilitation and mental health support.

Summary and Conclusions

Since the emergence of COVID-19 three years ago, SARS-CoV-2 is still with us. Much has been learned about the sequelae of SARS-CoV-2 infection in terms of acute infections or reinfections, the recognition of associated complications (arterial and venous thromboses, kidney diseases and endocrine disorders), the identification and characterization of emerging viral variants, and the post-COVID-19 conditions. A major scientific achievement was the rapid development of safe and effective COVID-19 vaccines and therapies [30]. Since SARS-CoV-2 infection and its sequelae will continue to evolve and pose a challenge to our health care systems, innovative vaccines and therapies will be central to preventing or controlling future pandemics by SARS-CoV-2, seasonal influenza, respiratory syncytial virus and other pathogens.

References

  1. Hu B, Guo H, Zhou P, Shi ZL (2020) Characteristics of SARS-Cov-2 and COVID-19. Nat Rev Microbiol 19: 141-154. [crossref]
  2. Cui S, Chen S, Li X, et al. (2020) Prevalence of venous thromboembolism in patients with severe novel coronavirus pneumonia. J Thromb Haemost 18: 1421-1424. [crossref]
  3. Piazza G, Campia U, Hurwitz S, et al. (2020) Registry of arterial and venous complications in patients with COVID-19. J Am Coll Cardiol 76: 2060-2072. [crossref]
  4. Braun F, Lutgehetmann M, Pfefferle S, et al. (2020) SARS-CoV-2 renal tropism associates with acute kidney injury. Lancet 396: 597-598. [crossref]
  5. Puelles VG, Lutgehetmann M, Lindenmeyer MT, et al. (2020) Multiorgan and renal tropism of SARS-CoV-2. N Engl J Med 383: 590-592. [crossref]
  6. Diao B, Wang C, Wang R, et al. (2021) Human kidney is a target for novel severe acute respiratory syndrome coronavirus 2 infection. Nat Commun 12: 2506. [crossref]
  7. Hendren NS, de Lemos JA, Ayers C, et al. (2021) Association of body mass index and age with morbidity and mortality in patients hospitalized with COVID-19: results from the American Heart Association COVID-19 Cardiovascular Disease Registry. Circulation 143: 135-144. [crossref]
  8. Williamson EJ, Walker AJ, Bhaskaran K, et al. (2020) Factors associated with COVID-19-related death using OpenSAFELY. Nature 584: 430-436. [crossref]
  9. Schlesinger S, Neuenschwander M, Lang A, et al. (2021) Risk phenotypes of diabetes and association with COVID-19 severity and death: a living systematic review and meta-analysis. Diabetologia 64: 1480-1491. [crossref]
  10. Singh AK, Khunti K (2022) COVID-19 and diabetes. Annu Rev Med 73: 129-147. [crossref]
  11. Jacobs JL, Haidar G, Mellors JW (2023) COVID-19: challenges of viral variants. Annu Rev Med 74: 31-53. [crossref]
  12. Nalbandian A, Sehgal K, Gupta A, et al. (2021) Post-acute COVID-19 syndrome. Nat Med 27: 601-615. [crossref]
  13. Soriano JB, Murthy S, Marshall JC, et al. (2022) A clinical case definition of post-COVID condition by a Delphi consensus. Lancet Infect Dis 22: e102-107. [crossref]
  14. Nalbandian A, Desai AD, Wan EY (2023) Post-COVID-19 condition. Annu Rev Med 74: 55-64. [crossref]
  15. Al-Samkari H, Gupta S, Karp Leaf R, et al. (2021) Thrombosis, bleeding and the observational effect of early therapeutic anticoagulation on survival in critically ill patients with COVID-19. Ann Intern Med 174: 622-632. [crossref]
  16. Smilowitz NR, Subashchandran V, Yuriditsky E, et al. (2021) Thrombosis in hospitalized patients with viral respiratory infections versus COVID-19. Am Heart J 231: 93-95. [crossref]
  17. Fanaroff AC, Lopes RD (2023) COVID-19 thrombotic complications and therapeutic strategies. Annu Rev Med 74: 15-30. [crossref]
  18. Brogan M, Ross MJ (2023) COVID-19 and kidney disease. Annu Rev Med 74: 1-13. [crossref]
  19. Jackson CB, Farzan M, Chen B, Choe H (2022) Mechanisms of SARS-CoV-2 entry into cells. Nat Rev Mol Cell Biol 23: 3-20. [crossref]
  20. Hassler I, Reyes F, Sparks M (2021) Evidence for and against direct kidney infection by SARS-CoV-2 in patients with COVID-19. Clin J Am Soc Nephrol 16: 1755-1765. [crossref]
  21. COVID-19 Treatment Guidelines Panel (2022)
  22. Foldi M, Farkas N, Kiss S, et al. (2021) Visceral adiposity elevates the risk of critical condition in COVID-19: a systematic review and meta-analysis. Obesity 29: 521-528. [crossref]
  23. Peckham H, de Gruijter BN, Raine C, et al. (2020) Male sex identified by global COVID-19 meta-analysis as a risk factor for death and ITU admission. Nat Commun 11: 6317. [crossref]
  24. Dolan PT, Whitfield CJ, Andino R (2018) Mechanisms and concepts in RNA virus populations and evolution. Annu Rev Virol 5: 69-72. [crossref]
  25. Holmes EC, Goldstein SA, Rasmussen AL, et al. (2021) The origins of SARS-CoV-2: a critical review. Cell 184: 4848-4856. [crossref]
  26. Korber B, Fischer WM, Gnanakaran S, et al. (2020) Tracking changes of SARS-CoV-2 spike: evidence that D614G increases infectivity of COVID-19 virus. Cell 182: 812-827.e19. [crossref]
  27. Plante JA, Liu Y, Liu J, et al. (2021) Spike mutation D614G alters SARS-CoV-2 fitness. Nature 592: 116-121. [crossref]
  28. Yurkovetskiy L, Wang X, Pascal KE, et al. (2020) Structural and functional analysis of the D614G SARS-CoV-2 spike protein variant. Cell 183: 739-751.e8. [crossref]
  29. Huang L, Yao Q, Gu X, et al. (2021) 1-year outcomes in hospital survivors with COVID-19: a longitudinal cohort study. Lancet 398: 747-758. [crossref]
  30. Becerra X, Asish J (2023) Project NextGen-defeating SARS-CoV-2 and preparing for the next pandemic. N Engl J Med 389: 773-775. [crossref]

Evaluation of the Positivity Rate of the Thick Drops and the Physical Integrity of the Nets Used by the Populations of Zogbo in the Cotonou Commune

DOI: 10.31038/IDT.2023413

Abstract

Background: Malaria is a potentially fatal disease caused by parasites transmitted by the bites of infected female Anopheles mosquitoes. It is a preventable and curable disease, and the use of insecticide-treated mosquito nets is a means of combating it. In this context, this study assessed the positivity rate of thick drops (thick blood smears) at the St Vincent de Paul hospital in Zogbo and the physical integrity of the mosquito nets used by the population of the same locality.

Methods: In order to determine the prevalence of malaria at the Saint Vincent de Paul hospital in Zogbo, we took venous and capillary samples from patients for thick drop examinations from 31 May to 19 August 2022. We then interviewed 216 households, including those of 100 positive patients, in the Zogbo district of southern Benin from 8 July to 19 August about the physical condition of their nets (torn, sewn, tied, frequency of washing, drying method, etc.) in order to assess their physical integrity.

Results: Out of 863 thick drops performed, 686 were positive, a positivity rate of 79.49%. The household surveys revealed that the majority of respondents were uneducated married women. All households owned at least one long-lasting insecticidal net of one of three brands, namely DAWA net, OLYSET net and Permanet (2 and 3). However, all the nets observed either had holes, were tied in several places, were loosely hung or were sewn in several places. In addition, most households (66.20%) washed their nets three times in three months, and 100% of households dried their nets in the sun.

Conclusion: This study showed a high prevalence of malaria at St Vincent de Paul Hospital, attributable in part to the loss of physical integrity of the nets. Following the manufacturers' recommendations for net use and maintenance therefore remains important to reduce malaria mortality and morbidity in endemic areas.

Keywords

Prevalence, Malaria, Physical integrity, Mosquito nets, Zogbo

Introduction

Malaria is a potentially fatal disease caused by parasites transmitted by the bites of infected female Anopheles mosquitoes. It is a preventable disease that can be cured. According to the WHO, in 2020 there were an estimated 241 million cases of malaria worldwide. The WHO African Region bears a large and disproportionate share of the global malaria burden: in 2020, 95% of malaria cases and 96% of malaria deaths were recorded in this region (WHO: Annual Malaria Report 2020) [1]. In Benin, malaria is the leading cause of hospitalization and treatment. According to the Sanitary Statistics Yearbook generated by the National Health Management Information System of the Ministry of Health for the year 2020 [2], malaria accounts for 44.2% and 49.5% of the reasons for consultation in the general population and in children under five years of age, respectively. The parasite responsible for malaria infection in Benin is a protozoan of the genus Plasmodium, mainly transmitted by the Anopheles gambiae s.l. complex and the Anopheles funestus and Anopheles nili groups [3]. In Benin, vector control relies essentially on the use of Long-Lasting Insecticidal Nets (LLINs). Much effort has been made over the last decade in many African countries to increase the accessibility of LLINs to the population, particularly to children under five years of age and pregnant women (Yadouleton et al.). Unfortunately, significant challenges remain that jeopardize the goals and the sustainability of the achievements (Raphael Kelani et al. International Journal of) [4]. Indeed, many of the LLINs distributed become ineffective after two months of use due to the appearance of large holes resulting in the total tearing of the net (Toe et al. 2009). In addition, the effectiveness of LLINs decreases with repeated washes, resulting in a progressive loss of the chemical barrier role of these LLINs (Curtis C.F 2008) [4].

However, these LLINs are supposed to maintain their effective biological action without further treatment after at least 20 washes and over three years of use (Seck et al. 2008) (Curtis C.F 2003) [5]. This is the context of the present study, which evaluates the positivity rate of thick drops at the St Vincent de Paul clinic in Zogbo and the physical integrity of the nets used by the population.

Materials and Methods

Framework of the Study

Our study was located in the Zogbo quarter, situated in the 9th arrondissement of Cotonou in the Littoral department, near Fifadji and northwest of Houenoussou-Sainte Rita. The present study is a descriptive cross-sectional study based on a qualitative questionnaire, carried out during June, July and August 2022, and on direct observations of the thick drop slides of patients at the Saint Vincent de Paul clinic.

Sampling

Our study population consisted of 155 persons suspected of having malaria (fever with or without other symptoms) with no distinction of age or sex. Thus, patients who visited the laboratory of the St Vincent de Paul clinic for a thick drop/blood smear examination during the study period, residing in Zogbo and having at least one mosquito net used as a means of malaria control were included in the study. The size of the representative sample was calculated using the following formula for frequency calculation.

N = Z² × P(1 − P) / I², where:

P: prevalence of malaria in the general population; P=17%.

Z: target confidence level; Z=1.96.

I: acceptable margin of error or precision; I=5%.

N: size of the representative sample.

Using this formula, our sample size was N = 216 patients.
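The calculation can be reproduced directly; this sketch assumes the standard frequency-based (Cochran) sample-size formula N = Z²P(1 − P)/I², which matches the variable definitions given above:

```python
def cochran_sample_size(p: float, z: float = 1.96, i: float = 0.05) -> float:
    """Frequency-based sample-size formula: N = Z^2 * P * (1 - P) / I^2."""
    return z ** 2 * p * (1 - p) / i ** 2

# Study parameters: P = 17% prevalence, Z = 1.96, I = 5% precision.
n = cochran_sample_size(0.17)
print(round(n))  # 217 (the paper reports N = 216, presumably truncating)
```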

In total, 116 households were randomly selected in Zogbo, and 100 households of patients at the St Vincent de Paul clinic who tested positive or negative for EW/DP were also considered.

Biological Materials

The biological material used was blood. An Olympus® CX31 light microscope (Olympus, Grenchen, Switzerland) was used to read Giemsa-stained slides (Cypress Diagnostics, Hulshout, Belgium).

Data Collection

In the Zogbo neighborhood, households were randomly selected and nets were examined in each household. A total of 216 nets were collected from the households. After the consent of the head of the household, a questionnaire on the origin, date of obtaining, use and maintenance of the nets was given to the head of the household or to a member of the household who was at least 18 years old. The data were collected using a questionnaire designed for this purpose. The questionnaire collected information on socio-demographic characteristics (age, sex, marital status, occupation, religion and place of residence), frequency of use of LLINs and brand.

Physical Integrity Assessment

The number, size and location of tears on each side of the net and on the roof were determined for each sampled net. Three categories of holes (0.5 cm ≤ size 1 ≤ 2 cm; 2 cm < size 2 ≤ 10 cm; 10 cm < size 3), previously defined by Kilian et al., were retained for this study.
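As a sketch, the quoted size bands can be coded directly (the results tables also report a T4 category, whose boundary is not given in the text, so it is omitted here):

```python
def hole_category(size_cm: float) -> str:
    """Classify a net hole by its size in cm, using the three Kilian et al.
    categories quoted in the text. Holes under 0.5 cm are not counted."""
    if size_cm < 0.5:
        return "not counted"
    if size_cm <= 2:
        return "T1"   # 0.5 cm <= size <= 2 cm
    if size_cm <= 10:
        return "T2"   # 2 cm < size <= 10 cm
    return "T3"       # size > 10 cm
```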

Analysis of Blood Samples

Blood was collected into an EDTA tube, and a thick drop and a blood smear were then prepared on a slide. The slides were dried, fixed and stained with Giemsa (Cypress Diagnostics, Hulshout, Belgium) diluted 1:10 for 10 minutes. Reading was done with an Olympus® CX31 light microscope (Olympus, Grenchen, Switzerland) at the 100X objective with a drop of immersion oil. If a parasite was identified in a microscopic field, the result was positive; if, on the contrary, no parasite was found after 100 microscopic fields, the examination was considered negative.

Statistical Analysis

The questionnaire form was designed in the ODK Collect software. Data entry, processing and analysis were done in Excel 2019. The frequency of the modalities of each variable was computed.

Results

Prevalence of Malaria at the Saint Vincent de Paul Clinic during the Months of June, July and August

Figure 1 presents the prevalence of malaria at the Saint Vincent de Paul clinic during June, July and August. In total, out of 863 thick drops performed, 686 (79.49%) were positive and 177 (20.51%) were negative, giving a malaria prevalence of 79.49%.
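The reported rates follow directly from the counts; a minimal check:

```python
def positivity_rate(positives: int, total: int) -> float:
    """Positivity rate as a percentage, rounded to two decimal places."""
    return round(100 * positives / total, 2)

# Thick drop counts reported for the Saint Vincent de Paul clinic:
print(positivity_rate(686, 863))  # 79.49
print(positivity_rate(177, 863))  # 20.51
```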


Figure 1: Prevalence of malaria at the Saint Vincent de Paul clinic in Zogbo

Socio-demographic Characteristics

Our study population was 75% female, with a sex ratio (F/M) of 3. Married respondents (75.93%) predominated over singles. The vast majority of respondents were uneducated (63.89%) and of Christian faith (81.02%). Table 1 presents the frequencies of the socio-demographic characteristics of the respondents.

Table 1: Socio-demographic parameters of patients

| Variable | Modality | n | Frequency (%) |
|---|---|---|---|
| Sex | Female | 162 | 75.00 |
| | Male | 54 | 25.00 |
| Marital status | Single | 52 | 24.07 |
| | Married | 164 | 75.93 |
| Religion | Christian | 175 | 81.02 |
| | Muslim | 41 | 18.98 |
| Education level | Primary | 30 | 13.89 |
| | Secondary | 22 | 10.19 |
| | Higher | 26 | 12.03 |
| | None | 138 | 63.89 |
| Total | | 216 | 100 |

Prevalence of Malaria in the Surveyed Population

Figure 2 shows the prevalence of malaria in the survey population. A total of  80 patients tested positive versus 20 patients who tested negative, for a malaria prevalence of 80%.


Figure 2: Prevalence of malaria in the surveyed population

Prevalence of Malaria in the Surveyed Population by Age and Sex

Table 2 presents the prevalence of malaria according to age and sex. Its analysis reveals that the age group with the highest positivity was 0-5 years (90.6%) and that positivity was higher among females (88.9%).

Table 2: Prevalence of malaria by sex and age

| | n | Number positive | Percentage (%) |
|---|---|---|---|
| Age [0-5] | 32 | 29 | 90.6 |
| Age [5-15] | 21 | 18 | 85.7 |
| Age [15-25] | 23 | 17 | 74.0 |
| Age [25-35] | 14 | 10 | 71.4 |
| Age ≥35 | 10 | 6 | 60.0 |
| Female | 69 | 61 | 88.9 |
| Male | 31 | 19 | 61.2 |

Evaluation of Parasite Densities

Table 3 shows the parasite densities of the patients. Patients with a parasite density of 5,000 parasites/µL of blood or less were the most represented (70%), while 8% and 2% of patients had parasite densities of up to 10,000 and above 10,000 parasites/µL, respectively.

Table 3: Evaluation of parasite densities

| Parasite density (parasites/µL) | n | Percentage (%) |
|---|---|---|
| ≤5000 | 70 | 70 |
| ≤10000 | 08 | 8 |
| >10000 | 02 | 2 |

Frequency of Use of LLINs

Figure 3 presents the frequency of LLIN use by the population of Zogbo. Of the 216 respondents, 116 (53%) sleep under an LLIN quite often, compared with 101 (47%) who rarely do.


Figure 3: Frequency of LLIN use by the population of Zogbo

Brands of Nets Used in the Zogbo Area

Figure 4 below shows the different brands of nets used in the Zogbo neighborhood. The DAWA net brand is the most used by the population (56.94%), followed by OLYSET net (29.17%); the least used are Permanet 2 (13.43%) and Permanet 3 (0.46%).


Figure 4: Different types of LLINs used by households in Zogbo

Frequency of Net Washing in Zogbo (Washing in 03 Months)

Figure 5 presents the number of times nets are washed in three months by households. The analysis shows that out of 216 households, 47 wash their nets twice in three months, 143 wash their nets three times in three months, 9 wash their nets four times in three months, and 17 wash their nets six times in three months.


Figure 5: Frequency of net washing in three months

Physical Conditions of LLINs in Use in Surveyed Households

Table 4 presents the physical condition of the LLINs in use in the 216 households surveyed. Its analysis reveals that 90.74% of the LLINs in use have holes of sizes T1, T2, T3 or T4.

Table 4: Physical conditions of LLINs in use in surveyed households

| LLINs without holes | Holes T1 | Holes T2 | Holes T3 | Holes T4 | Total LLINs with holes | % LLINs with holes |
|---|---|---|---|---|---|---|
| 20 | 80 | 74 | 33 | 09 | 196 | 90.74% |
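The percentage reported in Table 4 follows from simple arithmetic on the table's own counts (20 intact nets, plus hole counts for sizes T1 to T4), which can be checked as follows:

```python
# Recomputing the Table 4 summary: share of surveyed LLINs that have holes.
# All counts are taken directly from Table 4.
holes_by_size = {"T1": 80, "T2": 74, "T3": 33, "T4": 9}
without_holes = 20

total_with_holes = sum(holes_by_size.values())        # 196 nets with holes
total_nets = total_with_holes + without_holes         # 216 nets surveyed
pct_with_holes = 100 * total_with_holes / total_nets  # share with holes
print(f"{total_with_holes} of {total_nets} nets have holes "
      f"({pct_with_holes:.2f}%)")
```

Running this reproduces the 90.74% figure in the table.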

Table 5 presents the physical condition of the LLINs used by the surveyed patients who underwent thick blood smears (GE). Analysis reveals that the 80% of LLINs with holes of sizes T1 to T4 are used by patients whose thick smear is positive, while the 20% of LLINs without holes are used by patients whose thick smear is negative. It can therefore be inferred that the thick-smear positivity rate is closely related to the physical condition of the nets used.

Table 5: Physical conditions of LLINs in use in surveyed patients who performed GE

| | LLINs without holes | Holes T1 | Holes T2 | Holes T3 | Holes T4 | Total LLINs with holes | % LLINs with holes |
|---|---|---|---|---|---|---|---|
| Positive patients | 00 | 55 | 38 | 10 | 07 | 80 | 80% |
| Negative patients | 20 | 00 | 00 | 00 | 00 | 00 | 00% |

Discussion

In the context of malaria control, preventing mosquito bites is a crucial step, both to protect people and to control the vectors of the disease. The general objective of this study was to evaluate the positivity rate of thick blood smears at the St. Vincent de Paul clinic in Zogbo and the physical integrity of the nets used by the population. The study focused on an urban area of Benin. Over the three months, we observed a 79.49% positivity rate for thick blood smears. Of the 100 blood samples taken from surveyed patients, 80% were positive, and the most represented age group was 0 to 5 years (90.6%). These results are similar to those of Tokponnon et al. [6], who found a prevalence of 32% in children under 5 years of age in four hospitals in the Plateau department in 2022, taking the study population into account.

The effectiveness of LLINs as a vector control measure against malaria has been widely demonstrated over the past two decades (Lindblade et al. 2005 [7]; Kilian et al. 2008), and is made possible by the physical and chemical barriers they provide. However, brand, conditions of use and maintenance are factors that can affect the physical integrity and biological efficacy of LLINs. During our study, three brands of LLINs (DAWA net, OLYSET net, and Permanet 2 or 3) were identified in the Zogbo area. DAWA net and OLYSET net were the most frequent in the surveyed households, at 56.94% and 29.17% respectively. Our study suggests that this difference was due to chance, as the population itself could not tell the brands apart. However, the work of Diouf et al. (2018) in Senegal showed that differences between brands lie in their acceptability: households tend to keep the LLINs they prefer, while less appreciated nets are either redistributed to other people or used for other purposes.
The integrity assessment showed that, regardless of brand, the majority of nets found in the Zogbo neighborhood were damaged or torn. The same finding was reported in studies in Uganda and Chad, which showed respectively that 45% to 78% of nets in operational use were damaged and that only 30% were still in good condition after one year of use. The washing of LLINs three times in three months by the majority of the surveyed population of the Zogbo district, with all kinds of products including corrosive detergents, may contribute to the ineffectiveness of the LLINs. The results of Kelani et al. (2018) follow the same logic, explaining that the low washing frequency (once every three months) recommended by manufacturers to preserve the insecticide in the LLIN fibers does not seem feasible in community settings. Indeed, frequent washing of LLINs in community settings has often been reported by breastfeeding mothers, who have to wash their nets every day because of children's urine. Manufacturers should take this into account when designing LLINs. Insecticide-treated nets are very important in the fight against malaria: they reduce mortality and morbidity. However, they become ineffective once their physical integrity is degraded, which explains the high prevalence of malaria despite the use of nets [8-16].

Conclusion

The positivity rate of thick blood smears at St Vincent de Paul Hospital during June, July and August was 79.49%. This rate is closely related to the actual use of the nets and above all to their effectiveness, which depends mainly on their physical integrity, an important indicator of their durability in households. Our results demonstrate that the physical barrier provided by LLINs can be significantly degraded during normal use. The national malaria control program should therefore consider physical-integrity performance under local conditions when selecting the types of LLINs to distribute.

References

  1. WHO: Malaria Annual Report 2020.
  2. Ministry of Health: Health Statistics Yearbook 2020.
  3. Rock Aikpon, Razaki Ossè, Renaud Govoetchan, Arthur Sovi, Frédéric Oké-agbo, et al. (2013) Entomological baseline data on malaria transmission and susceptibility of Anopheles gambiae to insecticides in preparation for indoor residual spraying (IRS) in Atacora, Benin. Academic Journals 5: 102-111.
  4. Tokponnon Filémon, Aholoupke Bruno, Denon Eric, Gnanguenon Virgile, Bokossa Alexis, et al. (2013) Evaluation of the coverage and effective use rate of long lasting insecticide treated nets after the national scale up of their distribution in Benin. Parasites & Vectors 6. [crossref]
  5. Curtis CF (2003) Improving and scaling up vector control: the impact of insecticide resistance and possible means of resistance management. In: WHO/TDR Report of the Scientific Working Group on Malaria, Annex 7, Geneva, 71-82.
  6. Tokponnon TF, Ossè R, Arthur S, Wakpo A, Hounto OA, et al. (2022) Evaluation, at the level of health facilities, of the protective efficacy of LLINs for children under 5 years old in localities of low and high resistance of vectors to pyrethroid insecticides in Benin, West Africa, 12: 57-73.
  7. Kim A Lindblade, Ellen Dotson, William A Hawley, Nabie Bayoh, John Williamson, et al. (2005) Evaluation of long-lasting insecticidal nets after 2 years of household use. Tropical Medicine & International Health.
  8. WHO: Malaria, December 6, 2021.
  9. Mansiangi P, Kalonji A, Izale B, Nsiala A, Phanzu F, et al. Evaluation of the sustainability of LLINs distributed in the Democratic Republic of Congo through mass campaigns from 2011 to 2014.
  10. Ahogni Idelphonse (2020) Durability and impact on Plasmodium falciparum infection and anemia in children under 5 years of age of eight types of nets in eight communes in Benin, West Africa. PhD thesis, University of Abomey-Calavi, December 2020.
  11. Pryce J, Richardson M, Lengeler C (2018) Insecticide-treated nets for malaria prevention, December 6.
  12. Nopono FN, Enama L Offono, Tsila HG, Mbida A, Tonga C, et al. (2020) Evaluation of the efficacy of impregnated mosquito nets 36 months after their distribution in Southern Cameroon.
  13. Lafont Françoise (2016) Determining the sample size, October 28.
  14. Bambenongama Norbert Mandana (2022) Evaluation of the use of long-lasting insecticide-treated nets among pygmies in the Democratic Republic of Congo, April 30, 2022.
  15. Djegui (2006) Distribution of impregnated mosquito nets throughout Benin-Partners and initiators of the project consult each other, Benin, September 2006.
  16. Albert Kilian, Wilson Byamukama, Olivier Pigeon, Francis Atieli, Stephan Duchon, et al. Long-term field performance of a polyester-based long-lasting insecticidal mosquito net in rural Uganda. Malaria Journal 7.