
Social and Cultural Representation of Autism as Experienced by Mothers in Cameroon

DOI: 10.31038/PSYJ.2023554

Abstract

This study addresses the socio-cultural significance of autism spectrum disorders in the Cameroonian cosmogonic universe. Our objective is to understand the representations and attributions given to this developmental psychopathology, which appears in the individual from early childhood. Based on work carried out in the African anthropo-sociocultural context, etiological explanations of autism are drawn largely from transgenerational relationships and cosmic events that the ego undergoes. There is a generalized paranormal attribution, from both the nosological and the etiopathogenic point of view, which always crystallizes in a fatalistic prognosis. Accounts were collected with an interview grid from four (04) mothers of autistic children living in the city of Yaoundé, Cameroon. Content analysis shows that these children are still considered mad, mentally disordered or “snake children”. In many Cameroonian cultures, children with ASD are considered born to die: for some, they do not want to stay in the world of the living; for others, they disrupt the family line by being born and dying multiple times, are inhabited by an evil spirit which threatens the family, and cannot be sent to school because they are deemed useless. This plunges their parents and neighbors into feelings of shame, guilt and hostility that affect their affective exchanges and the children’s socio-cognitive development. These findings imply the need for better awareness of this disorder and of the possibilities of care and schooling for the children who present with it.

Keywords

Autism, Mother, Social representation, Child

Introduction

In each culture, mental illness is interpreted by giving it a specific meaning: people seek to shed light on the origin of the illness in order to manage it culturally. This work concerns the social and cultural representations of the autistic child in Cameroon. Childhood autism is defined by Kanner [1] as an inability of children to establish normal relationships with people and to react normally to situations, accompanied by a disorder of affective contact and appearing from the beginning of life. It is a global and early developmental disorder, which appears before the age of 3 years. The criteria that describe it still pose problems, however, owing on the one hand to its categorization among psychiatric disorders and on the other to the fact that, in the majority of African cultures, it is attributed to a curse and/or the presence of impure spirits. It can therefore be seen that in Africa, and in Cameroon in particular, the problem does not lie in the existence of autistic disorders but rather in the mastery of their clinical description as developed by the American Psychiatric Association [APA] [2] and by the World Health Organization [WHO] [3].

According to the traditional approach, mental illness is of the magico-religious type: divine punishment, attack by evil spirits, sorcery, transgression by the parents (especially the mother) of a prohibition, and the malevolence or jealousy of a co-wife are widespread explanations. Thus in Mali, in the popular consciousness, the mentally retarded person is considered to be animated by evil spirits [4]. Mental illness represents the sign of a curse or of a deterioration of social and family relations due to a transgression of ancestral laws. In Senegal, among the Wolof and Lebou, psychopathological behavior stems from aggression from the outside world, either from a human enemy or from an angry spirit. According to Boyer [5], the child can be the object of alienating parental projections: faced with relational difficulties with a child, parents can set up this type of projection. She thus speaks of a particular psychopathological entity described by Zempleni [6] in Senegal, the “nit-ku-bon” child, who could be an example of alienating parental projections.

The set of traits presented by the “nit-ku-bon” child (a refusal of verbal and visual communication, singular beauty, excessive wisdom) evokes early childhood autism [4]. In Africa, people with autism are sometimes perceived as idiots, victims of faults committed by their parents or other family members [7]. Some are considered wizards or lucky charms, or on the contrary a curse. The Yoruba of Benin also call children with autism Akibus, which means “to be born and die”. They suspect these children of communicating with spirits and of wanting to harm their families. In Cameroon, Lolo [8] observed that autistic children are often considered “children born to die”. According to her, they do not want to stay in the world of the living and disrupt family dynamics by being born and dying multiple times. Parents perceive them as inhabited by an evil, and therefore threatening, spirit. These studies give a brief overview of the autistic child in Africa, but what is the situation in Cameroon specifically?

The prevalence of autism and ASD in Cameroon is not precisely known due to the lack of a national registry; it is estimated at 1/165 [9]. Autism statistics in Cameroon, as given by the Ministry of Public Health in 2013, amount to 100,000 children. According to the WHO, tens of millions of people are affected by autism in Africa [10]. The poor recognition of the syndrome can be explained by the fact that a majority of countries on this continent are under-informed and do not have appropriate structures for its management. This prompts us to question the social and cultural representation of autism in Cameroon. The objective of this article is to grasp the socio-cultural significance of autism in the anthropo-socio-cosmogonic context of Cameroon.

Methodology

This study is qualitative in nature. The qualitative approach is a research method that makes it possible to analyze and understand phenomena, behaviors, facts or subjects [11]. The aim here is to apprehend the social and cultural representation of the autistic child in Cameroon. To do so, the clinical approach was used. Its purpose is to describe phenomena, observable facts and events as they present themselves to us. It made it possible to explore the participants’ perceptions, feelings and the subjective reality that each has of the autistic child. Through its attention to singularity and totality, the clinical method made it possible to grasp the social and cultural representation of the autistic child in Cameroon. We based ourselves mainly on case study, with the aim of understanding the phenomenon in depth as experienced in the specific context of Cameroon. The study was conducted during a camp for parents and children with autism in Yaoundé, Cameroon.

The sampling technique adopted in this work is non-probability sampling. Participants were chosen based on their ability to provide relevant information about how others perceive their child. They are four mothers of autistic children, of Cameroonian nationality. Data were collected through semi-structured interviews, which made it possible to focus the discussion on the participants’ comments and on others’ perceptions of their children. These voluntary participants, after signing the informed consent form, were free to end the interviews at any time. For data analysis, we used content analysis based on verbatim statements.

Results

This section presents the results of the interviews.

Presentation of the Case

The characteristics of the participants are presented in Table 1.

Table 1: The characteristics of the participants

Characteristic                  Mother 1    Mother 2    Mother 3    Mother 4
Age                             49 years    31 years    39 years    45 years
Ethnic group                    Bamiléké    Yambassa    Bamiléké    Bamiléké
Marital status                  Married     Married     Married     Single
Child’s age                     4 years     7 years     10 years    11 years
Age at diagnosis                3 years     2 years     3 years     2 years
Family history of disability    Yes         Yes         No          Yes

Thematic Analysis of Interview Content

Clinical interviews with each participant highlight a number of factors that explain the representation of the child’s disability.

The Autistic Child Seen as Crazy

Mother 1 addresses the issue of representation through family members’ questions about the origin and condition of her child. She says: “The family was wondering what went wrong. We wondered if he was crazy; if he is mad; what have we not done; and we really wanted to know what autism is”. But this mother believes that the family members who call the child crazy or mad know nothing about autism. The words of Mother 3 go in the same direction as those of Mother 1: her child is also described as crazy or mentally disordered. “Sometimes we say he acts like a madman, we say he acts like a mental disorder”.

Child Seen as a Snake Child

As for Mother 2, her statements on the social representation of autism differ from those of Mother 1. In her environment, people speak of a snake child: “This kind of child is a snake child. It was a nurse who told me to go throw my child in the river, she is a nurse. My daughter, if there is any advice I can give you, this kind of child, eh, is the kind of child that should be sent back to the ancestors. You have to look for a river, you throw it there and you let it go to see the ancestors”. These words from a nurse show the extent to which the Cameroonian medical profession has limits with regard to the diagnosis of autism. The representation of the disorder as presented by Mother 4 is similar to that of Mother 2. She relates: “People always have something to say. Either the mother is a witch, or she tried to have an abortion, or she gave birth to a snake child, or she gave birth to a Mongolian child”. With these words, she describes the meaning that society and culture give to the situation of her autistic child.

In view of the above, it can be said that autism is still poorly understood in Cameroonian culture: society considers children with autism to be mad, mentally retarded, or “snake” children.

Discussion

This research was devoted to the conception that Cameroonians have of autism spectrum disorders. In Africa, people with autism are sometimes perceived as idiots, victims of faults committed by their parents or other members of their family. We find that autistic children in Cameroon are perceived as mad or as “snake” children. This finding is consistent with the study by Ebwel et al. [7] on social representations of autism in Africa, which draws on cultural semantics in the Democratic Republic of Congo to show that autistic children are assimilated to those with mental retardation and/or deafness. According to Lolo [8], autistic children in Cameroon are considered “children born to die”. For her, they do not want to remain in the world of the living and disturb the family line by being born and dying several times; parents perceive them as inhabited by an evil spirit that threatens the family. The Yoruba of Benin also call children with autism Akibus, which means “to be born and die”, and suspect them of communicating with spirits and of wanting to harm their family. Children with autism are often hidden away, largely because of the stigma associated with having a child with a disability [12]. Indeed, for some parents it is shameful and unacceptable to have an autistic child. The guilt they feel is amplified by the family and the neighborhood, who attribute the child’s condition to the consequence of a parental fault.

Conclusion

The aim of this study was to grasp the social and cultural representation of autism as experienced by mothers in Cameroon. Autism is a severe developmental disorder that requires a rigorous approach to the assessment of its severity. It is characterized by the inability of children to establish normal relationships with people and to react normally to situations. Within Cameroonian communities, children with autism are assimilated to those with mental retardation. The interviews with the mothers who participated in this study show that autistic children are represented as mad, mentally disordered, “snake” children. It is therefore important to raise public awareness of childhood autism in the Cameroonian context.

References

  1. Kanner L (1943) Autistic disturbances of affective contact. Nervous Child 2: 217-250.
  2. American Psychiatric Association (2000) Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR).
  3. World Health Organization (1993) International classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. Masson.
  4. Mbassa Menick MD (2013) Impact de la culture dans la prise en charge de l’enfant en pratiques éducative, familiale et sociale. Communication présentée aux conférences sur la semaine de l’enfance, organisée par l’Institut Français du Congo, Pointe-Noire (Congo).
  5. Boyer F (2002) Reflection on the possible vulnerability of “first-born” migrant children in France. 17-19.
  6. Zempleni A (1985) The child Nit Ku Bon: a traditional psychopathological picture among the Wolof and the Lebou of Senegal. L’enfant ancêtre 4: 9-42.
  7. Ebwel JM, Roeyers H, Devlieger P (2010) Approaches to social representations of autism in Africa. Childhood Psy 4: 121-129.
  8. Lolo B (1991) The dyad of the mother-child relationship or the care of the African child. Transition 31.
  9. Awa HDM, Um SN, Dongmo F, Chelo D, Manyinga HPN, et al. (2017) Evaluation of the knowledge, attitudes and practices of health professionals on autism in three pediatric health facilities in Cameroon. Health Sciences and Disease 18: 1.
  10. Jarekji A (2010) Autism, a reality poorly understood by Africans.
  11. Claude G (2019) Qualitative study: definition, techniques, steps and analysis. Scribbr.
  12. Grinker RR (2008) Unstrange Minds: Remapping the World of Autism. Basic Books.

Assessing Preinjury Frailty in the Elderly Hip Fracture Patient to Promote Palliative Care Referral in Those at Risk for High Morbidity and Mortality

DOI: 10.31038/IJNM.2023423

Abstract

Objective: To assess preinjury frailty in elderly hip fracture patients as a predictor for postsurgical morbidity and mortality, prompting early referral to palliative care services in patients deemed high-risk for postoperative complications. Including palliative care in the multidisciplinary care of the high-risk patient has been shown to improve quality of life (QOL), increase patient and caregiver satisfaction, and reduce healthcare costs.

Design: The design is a quality improvement initiative.

Setting: The setting is an academic medical center, serving as the region’s Level 1 Trauma Center. There is no current process for measuring frailty as a predictor of postsurgical morbidity and mortality.

Participants: The project’s participants are elderly adults aged 65 and older presenting to the emergency room for treatment following a hip fracture.

Interventions/Measurements: Using PDSA (plan-do-study-act) cycles, a frailty measurement tool was selected. Next, a clinical decision-making algorithm for risk assessment and palliative care referral was designed and implemented for the project participants. Pre- and post-implementation referral rates were measured, along with post-implementation risk identification and compliance with utilization of the risk assessment tool. This initiative aimed to begin preoperative frailty assessment with 50% compliance in the target population, with palliative care referral occurring per the algorithm’s recommendations.

Results: Patients in the post-implementation group were more likely to have their frailty risk evaluated and to receive a palliative care referral than the pre-implementation group. Rates of risk identification and palliative care referral increased by 68% and 85%, respectively, which surpassed the goals of this initiative.

Conclusion: Identifying patients with higher preinjury frailty can predict those at risk for mortality and morbidity, thus indicating those patients for whom palliative care referral may be beneficial. Using a standardized process for preinjury frailty screening and referral increased risk assessment and palliative care referral for elderly hip fracture patients.

Keywords

Hip fracture, Frailty, Frailty screening, Elderly, Palliative care

Introduction

In the United States, there are an estimated two million bone fractures annually [1]. These fractures account for over 432,000 hospital admissions and around 180,000 nursing home admissions [1]. Hip fractures account for 14% of these bone fractures [2], accounting for over 300,000 hospital admissions annually in the United States [3]. Hip fractures represent 72% of fracture-related medical expenses [2], with the estimated cost of hip fracture in the United States being $12-15 billion annually [4]. A low-impact trauma, such as a fall from standing, can result in a fragility fracture, with one of the most common fracture sites being the hip [5]. Fragility fractures result from a force that would not ordinarily result in a fracture [6]. In elderly patients aged 65 and older, a hip fracture is associated with high mortality [7]. Approximately 8-10% of elderly hip fracture patients die within 30 days of surgery [8]. About 20% of older women and 37% of older men die in the year following injury [9]. Hip fracture in this elderly population also increases morbidity [7]. According to Johnston et al. [9], approximately 42% of elderly hip fracture patients will fail to return to their pre-fracture mobility, and 35% will become dependent on personal assistance or an assistive device for ambulation. These patients are four times more likely to need long-term care [9]. This population is more likely to suffer complications such as deep venous thrombosis (DVT), pulmonary embolism (PE), pneumonia (PNA), infection, bleeding, nonunion/malunion, and anesthesia-related complications [10]. Hip fractures are associated with high healthcare costs, with the total annual cost estimated at $50,508 per patient in the United States [2]. Pre-fracture comorbidities are associated with even higher costs [11]. These estimates correspond to $5.96 billion yearly in healthcare spending [2].

According to Alexiou et al. [8], a hip fracture in the elderly can severely impact physical, mental, and psychological health and diminish quality of life (QOL). Due to the high morbidity and mortality associated with a hip fracture in the elderly patient, as well as the economic and caregiver burdens of the injury, early referral to palliative care should be considered to meet the holistic needs of the patient, families and the healthcare system [7]. According to Archibald et al. [12], early palliative care referral is not routinely occurring, thus missing an opportunity to improve the quality of care. Frailty is a state of increased vulnerability to illnesses or health conditions following a stressor event such as a hip fracture, thus increasing the incidence of disability, hospitalization, long-term care, and premature mortality [12]. Frailty is characterized by increased deficits and decreased strength, endurance, and physiological function [13]. These frail, elderly patients are at increased risk of adverse events such as infection, anemia, delirium, and falls [5,10]. Frailty is associated with a 29% increase in hospital costs [3]. Frailty is also associated with increased postoperative mortality [14]. Frail patients who undergo an emergent surgical procedure are 23 times more likely than robust patients to expire on postoperative day one [14]. In the elderly hip fracture population, there is a positive correlation between frailty score and incidence of 1-year mortality [15].

Problem

Many elderly hip fracture patients experience a downward health trajectory despite being without a life-threatening diagnosis [13]. Others have multiple medical diagnoses and comorbidities [13]. A severe illness or injury, such as a hip fracture, can negatively affect QOL due to the burden of symptoms, treatment, or caregiver stress [16]. “Clinical vulnerability of older adults after hip fracture is a consequence of pre-existing frailty that is worsened as a consequence of fracture-fragility, exacerbating disability and driving poorer clinical outcomes over time” [17]. According to Archibald et al. [12], a higher level of frailty in the elderly patient is associated with increased intra-operative resource and postoperative care requirements, thus increasing the length of stay (LOS) and the likelihood of being institutionalized in a long-term care facility following discharge. Even in low-risk procedures, frail patients have a greater than three times incidence of serious complications, including sepsis, pneumonia, and delirium [14,18]. The American College of Surgeons and the American Geriatrics Society recommend that frailty screening be performed as a routine preoperative assessment on patients ≥65 years of age [12]. “The ability of acute care providers to adequately prepare for, recognize and respond to the needs of frail older adults is paramount to aiding prognosis and care plan optimization” [12]. In elective surgery, frailty evaluation can be utilized to optimize preoperative function in the individual [19,20]. Conversely, for emergency or non-elective surgery, such as hip fracture repair, frailty evaluation can trigger early discussion regarding “ceilings of care…and the futility of escalating interventions after complications…” [19]. Preoperative frailty assessment can also ensure appropriate resources are available pending surgical or postsurgical complications [20]. Despite these recommendations, providers often overlook this screening [12].

Clinical Significance

The organization participating in the project is a Magnet-recognized hospital and Level 1 Trauma Center serving as the area’s academic medical center. In the project’s setting, the hospitalist group routinely admits patients who experience a hip fracture with orthopedic consultation. These patients, specifically those 65 years and older, are not routinely screened for frailty by the hospitalist or the orthopedic group. The hospitalist or orthopedic provider can assess a patient’s perioperative risk and individualized needs by incorporating routine frailty screening [14]. The Clinical Frailty Scale (CFS) is a risk stratification tool that evaluates frailty based on comorbidity, function, and cognition to assess a numerical frailty score ranging from very fit to terminally ill [21]. By incorporating a routine frailty screening, the provider can identify patients who would benefit from early palliative care consultation.

Including palliative care in the multidisciplinary care of frail, elderly hip fracture patients is appropriate as these injuries can pose a risk to QOL [22]. Palliative care providers assist with symptom management, QOL, and advanced care planning [23,24]. The palliative care team helps patients determine the best management or treatment options considering the patient’s prognosis and can assist in providing safe and effective pain management to elderly patients [23,24]. “Recent models of optimal palliative care integration emphasize referral at diagnosis, increasing presence as time progresses, and a shift in focus toward rehabilitation and survivorship care if a patient’s illness trajectory improves or toward end-of-life care and hospice referral if their trajectory declines” [24]. Palliative care is associated with lower healthcare utilization and cost savings by honoring patients’ wishes and decreasing the number of medical procedures performed [25]. Palliative care-associated savings average $2,642 per admission for patients discharged alive and $6,896 for patients who pass away during their hospitalization [26]. Despite the benefits of palliative care, this service is often underutilized in this patient population [26]. Patients not diagnosed with cancer are less likely to receive a timely referral [24]. Barriers to the utilization of palliative care occur due to a knowledge deficit on the purpose and benefits of these services [27]. Many patients and providers are uncomfortable discussing advanced directives, leaving patients open to potentially unwanted invasive procedures in an emergency [7]. Providers may be reluctant to consult palliative care to prevent loss of hope or increased fear [27]. Additionally, palliative care is frequently mistaken for end-of-life care. 
According to the World Health Organization (WHO), palliative care is an approach that seeks to improve the QOL of patients and their families facing life-threatening illnesses “through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial, and spiritual” [16]. Radbruch et al. [16] state that palliative care is not intended to expedite or postpone death but rather to manage symptoms. Palliative care has been shown to reduce pain symptoms and psycho-emotional stress, which correlates with higher patient satisfaction [28]. Palliative care assessment of every hip fracture patient is unrealistic due to limited resources; however, palliative care evaluation for those elderly hip fracture patients who score higher on a risk stratification scale, such as the CFS, is a practical approach [27].

Materials and Methods

This quality improvement (QI) project focuses on the principles of patient-centered care, which includes “respect for patient values, preferences, and expressed needs, coordination and integration of care, and providing emotional support alongside the alleviation of fear and anxiety associated with clinical care” [29]. Therefore, this project aims to enhance QOL by incorporating palliative care into holistic care through symptom management and patient and caregiver satisfaction [30]. These aims are accomplished by promoting open discussions regarding the goals of care and patient preferences [7].

Development of PICOT (Patient, Intervention, Comparison, Outcomes, Time) Question

The population of interest included patients aged 65 and older who had sustained a hip fracture. The primary intervention of interest was utilizing the CFS screening tool on each of these patients on admission, with a goal of at least 50% compliance with this risk assessment by the admitting provider. This intervention was compared to the current practice of not evaluating preoperative frailty in the target population, which fails to identify those at increased risk for poor outcomes. The desired outcome included considering palliative care referrals for those who scored moderately frail or above. This project aimed to improve QOL and patient satisfaction in the target population. The project was implemented from November to December 2022, and the results were compared to the same period in 2021.
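As an illustration only (the project used a paper-based clinical algorithm, not software), the screening rule described above can be sketched in a few lines of Python. The sketch assumes the standard 9-point Clinical Frailty Scale, on which a score of 6 corresponds to “Moderately Frail”; the function name and threshold constant are hypothetical, not part of the project.

```python
# Hypothetical sketch of the referral rule: patients aged 65+ whose CFS
# score is "Moderately Frail" (6) or above are flagged for a palliative
# care referral. The 9 labels below are the standard CFS categories.
CFS_LABELS = {
    1: "Very Fit", 2: "Well", 3: "Managing Well", 4: "Vulnerable",
    5: "Mildly Frail", 6: "Moderately Frail", 7: "Severely Frail",
    8: "Very Severely Frail", 9: "Terminally Ill",
}

MODERATELY_FRAIL = 6  # referral threshold assumed in this sketch


def recommend_referral(age: int, cfs_score: int) -> bool:
    """Return True when the sketched rule would flag the patient for a
    palliative care referral: aged 65 or older with a CFS score at
    'Moderately Frail' or above."""
    if cfs_score not in CFS_LABELS:
        raise ValueError("CFS score must be between 1 and 9")
    return age >= 65 and cfs_score >= MODERATELY_FRAIL
```

A 72-year-old scoring 6 ("Moderately Frail") would be flagged, while a 60-year-old falls outside the target population regardless of score.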

PICOT Question

In the elderly (≥65 years of age) hospitalized patient who experiences an acute fragility hip fracture (P), how does the implementation of the Clinical Frailty Scale (CFS) tool on hospital admission (I), compared to no frailty screening (C), increase the incidence of palliative care referral in the target population (O) during the two-month implementation period (T)?

Evidence: Review of Literature/Literature Search

A literature search was conducted with the previously mentioned PICOT question as the focus. The databases searched included PubMed and CINAHL; the search engine Google Scholar was also utilized. A PRISMA diagram (Figure 1) describes the literature search. Two studies were excluded from the databases and three from the search engine due to duplication. PubMed was searched using the keywords (hip fracture AND frailty scale) and (hip fracture AND frailty). MeSH terms included the following: aged, conservative treatment, femoral fractures/therapy, femoral fractures/psychology, femoral fractures/rehabilitation, frailty/diagnosis, frailty/psychology, life expectancy, quality of life, activities of daily living, comorbidity, mobility limitation, recovery of function, walking, hip fractures/therapy, frail elderly, hip fractures/mortality, long-term care, frail elderly/statistics and numerical data, decision making, hip fractures/complications, multimorbidity, and patient acceptance of health care. Boolean connectors included “hip fracture AND frail AND mortality” and “hip fracture AND frail AND palliative care.” When limited to publications from the past five years, PubMed returned thirty-four studies, all of whose abstracts were reviewed. Thirty-two articles were eliminated after abstract evaluation because they lacked either a hip fracture diagnosis or utilization of a frailty scale. Two studies meeting topic relevance were retained for appraisal.


Figure 1: PRISMA diagram
From: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021; 372: n71. doi: 10.1136/bmj.n71. For more information, visit: http://www.prisma-statement.org/

The CINAHL search was performed using the keywords (hip fract* AND frailty scale), (femoral neck fract* AND frailty scale), and (femoral neck fract* AND frail*). Limits included studies from the last five years and the English language. The search revealed sixteen studies, all of whose abstracts were reviewed; fifteen articles were eliminated as not meeting topic relevance, and one was retained for appraisal. A search was also conducted in Google Scholar with the keywords “hip fracture” AND “palliative care” AND “elderly” AND “frail” AND “frailty scale.” Narrowing the results to review articles from the last five years revealed 23 results, three of which were duplicates of a previous search. Ten articles were retained for review; four were eliminated based on a review of the abstract, and five did not contain information regarding hip fracture. One study was included for appraisal.

Evidence Synthesis

The four articles were appraised using the Johns Hopkins Evidence-Based Practice Model for Nursing and Healthcare Professionals. The Research Evidence Appraisal Tool (Appendix E) was utilized for article evaluation. Each article was assigned a level and grade of evidence, as seen in the Evaluation Table (Table 1) and the Study Level and Quality Table (Table 2).

Table 1: Evaluation Table

Article Citation: Braude, P., Carter, B., Parry, F., Ibitoye, S., Rickard, F., Walton, B., Short, R., Thompson, J., & Shipway, D. (2021). Predicting 1-year mortality after traumatic injury using the clinical frailty scale. Journal of the American Geriatrics Society, 70(1), 158-167. https://doi.org/10.1111/jgs.17472

Conceptual Framework and Purpose: No conceptual framework described. Aim: to determine the effect of frailty on 1-year mortality in older adults admitted following trauma.

Design/Method: Observational study. Level I Evidence, Quality Grade A (High Quality).

Sample/Setting: Severn Major Trauma Network’s major trauma center based in South West England. Patients ≥ 65 years of age admitted between Nov. 2018 and Sept. 2019 with traumatic injuries (N = 585).

Major Variables Studied: DV: mortality at 1 year. IV: level of frailty as measured by the CFS.

Measurement: Frailty was measured by the CFS; data collected included age, sex, comorbidities, injury type, and injury severity score.

Data Analysis: Number deceased at 1-year follow-up, compared by CFS score.

Findings: Median age: 81 years; 55.7% female, 44.3% male; 50.8% living with frailty (CFS ≥ 5). At 1-year follow-up, 29.6% had died.

Appraisal/Worth to Practice: Strengths: large sample size, easily replicated. Limitations: did not include hip fracture patients; CFS scores prior to March 2019 were retrospectively assessed. Conclusion: association between increasing severity of frailty and 1-year mortality (the chance of dying increased with a higher frailty score).

Note: DV: Dependent Variable, IV: Independent Variable, CFS: Clinical Frailty Scale, f/u: Follow-Up

Article Citation: Chan, S., Wong, E. K., Ward, S. E., Kuan, D., & Wong, C. L. (2019). The predictive value of the clinical frailty scale on discharge destination and complications in older hip fracture patients. Journal of Orthopaedic Trauma, 33(10), 497-502. https://doi.org/10.1097/bot.0000000000001518

Conceptual Framework and Purpose: No conceptual framework described. Aim: to determine if the CFS is associated with discharge destination, in-hospital complications, and length of stay following hip fracture.

Design/Method: Retrospective cohort study. Level I Evidence, Quality Grade A (High Quality).

Sample/Setting: Setting: an unnamed academic level 1 trauma center in Canada. Sample: all patients ≥ 65 years of age admitted with an isolated hip fracture (N = 423).

Major Variables Studied: DV1: discharge destination. DV2: in-hospital complications. DV3: length of stay. IV: level of frailty as measured by the CFS.

Measurement: Frailty was measured by the CFS. DV1 measured as either death or discharge to a long-term care facility; DV2 measured as presence or absence of in-hospital complications; DV3 measured in days of hospital admission.

Data Analysis: Data were evaluated by comparing DVs to frailty score.

Findings: Median age: 82.5 years; 63.3% female, 36.7% male. 15.9% died or were discharged to a long-term care facility; 81.8% developed at least one complication; median LOS was 7 days.

Appraisal/Worth to Practice: Strengths: first study to examine the use of the CFS to predict adverse outcomes. Limitations: small percentage of CFS scores determined in retrospect; universal health care, which may affect discharge destination. Conclusion: frailty is associated with adverse discharge destination, in-hospital complications, and increased LOS.

Note: DV: Dependent Variable, IV: Independent Variable, CFS: Clinical Frailty Scale, d/c’d: Discharged

Article Citation: Chen, C., Chen, C., Wang, C., Ko, P., Chen, C., Hsieh, C., & Chiu, H. (2019). Frailty is associated with an increased risk of major adverse outcomes in elderly patients following surgical treatment of hip fracture. Scientific Reports, 9(1), 1-9. https://doi.org/10.1038/s41598-019-55459-2

Conceptual Framework and Purpose: No conceptual framework described. Aim: to determine the effect of the level of frailty on post-operative emergency room visits, readmission, and mortality.

Design/Method: Observational cohort study. Level I Evidence, Quality Grade A (High Quality).

Sample/Setting: Setting: an orthopedic ward in a medical center and a district hospital in Changhua County, Taiwan. Sample: patients ≥ 50 years of age treated for a hip fracture (N = 245).

Major Variables Studied: DV1: 1-, 3-, and 6-month emergency department visits. DV2: readmission rates. DV3: mortality rates. IV: level of frailty on the CFS.

Measurement: Frailty was measured by the CFS. DV1 measured as the number of emergency department visits to participating hospitals; DV2 measured as readmissions to participating hospitals due to postoperative complications; DV3 measured as the number of all-cause mortalities.

Data Analysis: Data were evaluated by comparing DVs to frailty score at three points in time during the study.

Findings: Prevalence of pre-frailty and frailty were markedly higher in women. Frail patients were typically older, had lower BMI, and had worse cognitive function.

Appraisal/Worth to Practice: Strengths: the study examined relationships adjusted for covariates. Limitations: based on subjective data; may not represent all geographical areas. Conclusion: frailty is associated with more short-term mortality; pre-frailty was more strongly associated with early ED visits and hospital readmissions.

Note: DV: Dependent Variable, IV: Independent Variable, CFS: Clinical Frailty Scale, BMI: Body Mass Index

Article Citation: Thorne, G., & Hodgson, L. (2021). Performance of the Nottingham hip fracture score and clinical frailty scale as predictors of short and long-term outcomes: A dual-centre 3-year observational study of hip fracture patients. Journal of Bone and Mineral Metabolism, 39(3), 494-500.

Conceptual Framework and Purpose: No conceptual framework described. Aim: to report outcomes for patients with a hip fracture and compare the performance of the NHFS with the CFS.

Design/Method: Observational cohort study. Level I Evidence, Quality Grade A (High Quality).

Sample/Setting: Setting: two non-specialist hospitals on the South Coast of England over a 3-year period from Jan. 2016 to Dec. 2018. Sample: any patient admitted during this time frame who suffered a hip fracture (N = 2,422).

Major Variables Studied: DV1: inpatient mortality. DV2: 30-day mortality. DV3: LOS. IV1: NHFS score. IV2: CFS score.

Measurement: 30-day mortality after hip fracture predicted with the NHFS; frailty measured by the CFS. Inpatient mortality and 30-day mortality were measured as percentages; LOS measured in days.

Data Analysis: Data were evaluated by comparing inpatient mortality, 30-day mortality, and LOS based on CFS scoring and the NHFS.

Findings: Median age: 85 years; 70.6% female, 29.4% male. 30-day mortality: 5.8%. 1-year mortality: 23.5%. Average LOS: 18.0 days.

Appraisal/Worth to Practice: Strengths: large sample population; the only study to compare the NHFS and CFS in predicting mortality and hospital stay. Limitations: 28% of patients did not have an NHFS; 42% did not have a CFS. Conclusions: both the CFS and NHFS are useful to predict survival rates for 1 year following injury; neither score predicted LOS.

Note: DV: Dependent Variable, IV: Independent Variable, LOS: Length of Stay, NHFS: Nottingham Hip Fracture Score, CFS: Clinical Frailty Scale.

Table 2: Study Level and Quality

Level I: experimental study (RCT); systematic review of RCTs; explanatory mixed-method design that includes a Level I quantitative study. Article 1: Xa; Article 2: Xa; Article 3: Xa; Article 4: Xa.

Level II: quasi-experimental study; systematic review with a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only; explanatory mixed-method design that includes only a Level II quantitative study.

Level III: non-experimental study; systematic review with a combination of experimental/non-experimental studies; qualitative study or meta-synthesis; exploratory, convergent, or multiphasic mixed methods; explanatory mixed-method design that includes only a Level III quantitative study.

Level IV: opinion of respected authorities/expert committees, or consensus panels; clinical practice guidelines; consensus panels; position statements.

Level V: integrative/scoping/literature review; QI, program, or financial evaluation; case reports; expert opinion.

Note: a: High Quality; b: Good Quality; c: Low Quality or Major Flaws; Article 1: Braude et al. (2021); Article 2: Chan et al. (2019); Article 3: Chen et al. (2019); Article 4: Thorne and Hodgson (2021).

Table 3 includes the synthesis of outcomes for each study appraised. The synthesis reveals relationships between frailty and 1-year mortality, short-term mortality, adverse discharge destinations (including long-term institutionalization and death), in-hospital complications, LOS, early emergency department visits, and hospital readmissions following the initial injury/hospitalization. Recommendations for practice change include evaluating acute hip fracture patients ≥ 65 years of age with a frailty scale as a predictor tool (Table 4), thus assisting the provider in identifying patients who may benefit from palliative care consultation.

Table 3: Table of Recommendation(s) for Practice Change

Recommendation 1: Patients ≥ 65 years of age experiencing an acute hip fracture should be screened on a CFS as a predictor for mortality.
References in Support: Braude et al. (2021); Chen et al. (2019); Thorne & Hodgson (2021).
Rationale: To identify those at risk for 1-year or early mortality following a hip fracture, as there is a positive correlation between severity of frailty and mortality.
Level of Evidence: I. Quality Rating: A.

Recommendation 2: Patients ≥ 65 years of age experiencing an acute hip fracture should be screened on a frailty scale as a predictor for adverse discharge destinations, in-hospital complications, and increased LOS.
References in Support: Chan et al. (2019).
Rationale: To identify those at risk for adverse discharge destinations, such as death or long-term institutionalization, in-hospital complications, and prolonged LOS.
Level of Evidence: I. Quality Rating: A.

Note: CFS: Clinical Frailty Scale, LOS: Length of Stay

Table 4: Table of Strength of Recommendation(s)

Recommendation 1: Patients ≥ 65 years of age experiencing an acute hip fracture should be screened on a CFS as a predictor for mortality.
Strength of Evidence: Strong evidence = strongly recommend. Based on the JHEBP level of evidence and quality ratings, strong and compelling evidence with consistent results was found to support organizational translation (Dang et al., 2022).
References in Support: Braude et al. (2021); Chen et al. (2019); Thorne & Hodgson (2021).

Recommendation 2: Patients ≥ 65 years of age experiencing an acute hip fracture should be screened on a frailty scale as a predictor for adverse discharge destinations, in-hospital complications, and increased LOS.
Strength of Evidence: Strong evidence = strongly recommend. Based on the JHEBP level of evidence and quality ratings, strong and compelling evidence with consistent results was found to support organizational translation (Dang et al., 2022).
References in Support: Chan et al. (2019).

Note: CFS: Clinical Frailty Scale, LOS: Length of Stay

Theoretical/Project Framework

The Model for Improvement guides this QI project using PDSA (Plan-Do-Study-Act) cycles. Initially, project planning included researching the evidence to determine the effectiveness of the proposed intervention. The literature demonstrates that frailty screening is recommended preoperatively for patients aged 65 and older [12]. The benefits of palliative care in frail, elderly patients, regardless of diagnosis, have been established, with improved QOL, greater patient and family satisfaction, and reduced healthcare costs. Secondly, the plan was formulated. The instructions regarding implementation were widely disseminated among the hospitalist APRNs. This information detailed the scope of the project and project goals, the CFS, and the benefits of including palliative care in the multidisciplinary team caring for the frail, elderly hip fracture patient. SMART (specific, measurable, achievable, relevant, and time-bound) goals describe the project’s aim. The project took place at a university teaching hospital and included the hospitalist APRNs responsible for evaluating frailty in each elderly hip fracture patient utilizing the CFS. The APRNs were then prompted to consider palliative care consultation for patients identified as moderately frail or above. The initial goal for this project was 50% or greater compliance with the use of the CFS and palliative care consultation in the specified population. Progress was measured by reviewing the electronic health record (EHR) of patients in the target population for use of the CFS, followed by the recommended palliative care consultation when appropriate. Results were assessed throughout project implementation to guide further education and project revisions to promote compliance.

Project Design

The project was initiated on a small scale with the hospitalist APRNs performing the CFS, with tentative plans to include all hospitalist providers pending project results. Data was collected and documented. The daily hospitalist patient logs were checked for the inclusion criteria. Once these patients were identified, EHRs were reviewed for the utilization of a CFS by the hospitalist APRNs and the subsequent palliative care referral in those deemed moderately frail and above. The data results were then compared to the same patient population and time frame from one year prior. Data were evaluated to determine the effectiveness of the project.

Implementation

The patients participating in the described project were identified by age and diagnosis, including those aged 65 years and older who sustained an acute hip or femoral neck fracture and who were admitted to the medical center by a hospitalist APRN. On admission, these patients were evaluated for frailty utilizing the CFS. Palliative care consultation was recommended for those scoring moderately frail (6) or above. A SWOT (strengths, weaknesses, opportunities, and threats) analysis was conducted on the proposed project. Strengths identified included the support of the palliative care team and the hospitalist group. An additional strength was the recommendation of the American College of Surgeons and the American Geriatrics Society to perform frailty screening routinely preoperatively on patients ≥ 65 years [12]. Project weaknesses included resistance to change by providers within the hospitalist group and misconceptions regarding palliative care. There was concern among providers that frailty screening would be time-consuming and burdensome. Additionally, providers often deferred or refrained from initiating palliative care referrals for fear that their patients would give up hope in their recovery [27], and some providers misconstrued palliative care as end-of-life care [16]. These weaknesses were mitigated by incorporating education regarding the benefits of palliative care and frailty screening.

By including palliative care providers in the care planning of these patients within the target population, this project provided opportunities for improvement in the patient’s QOL, patient and caregiver satisfaction [31], and healthcare costs [25]. The interdisciplinary care promoted by this project encouraged patient-centered care through the holistic shared management of healthcare challenges [32]. The concern about eliminating potential operative cases from the orthopedic service was a potential threat. This threat was reduced by communicating with the orthopedic team the goals of care, including promoting patient-centered care with optimal surgical recovery based on the patient’s and family’s personal preferences. Barriers identified included increased time and workload, negative attitudes towards change, and the potential for ineffective communication regarding project goals and implementation. Mitigating actions included acknowledging concerns and reinforcing project goals, benefits, patient-centeredness, and cost-effectiveness. The project’s facilitators included multidisciplinary collaborations, communication, and teamwork. The project’s hospitalist group is a large medical group within a university medical center with various expert specialties and consultants. There is excellent teamwork between the hospitalist group and consulting services, such as orthopedics and palliative care, with open communication. Team leaders from the hospitalist service supported the project.

Stakeholders and Project Team

The project team included the Doctoral of Nursing Practice (DNP) student, hospitalist APRNs, palliative care providers, the medical center’s nursing and ancillary staff, the DNP project chair, the DNP project committee member, and the statistician. This multidisciplinary team worked together to provide patient-centered and cost-effective quality care. The CFS was disseminated among the hospitalist APRNs. Instructions regarding implementation were distributed via email and in person to all hospitalist APRNs detailing the project’s scope, project goals, and the benefits of including palliative care in the multidisciplinary team caring for the frail, elderly hip fracture patient. Implementation of the project began in November 2022, with data collection and evaluation from November 1, 2022 – December 31, 2022. Pre-implementation data was also obtained from November 1, 2021 – December 31, 2021. Pre- and post-implementation data included age in years, gender, race, and time of visit. Additional post-implementation data included utilization of CFS, ranking on CFS, risk identified, and referral to palliative care if appropriate. Data was collected via the hospitalist’s daily census reports and EHR chart review. No patient identifiers were required, collected, or saved; therefore, Institutional Review Board (IRB) approval was unnecessary.

Results and Discussion

The frailty assessment was evaluated on the CFS, with frailty measured numerically from 1 (very fit) to 9 (terminally ill) [21]. A study by Rockwood et al. [33] shows a high correlation between the judgment-based CFS and the mathematically based Frailty Index (FI), with a Pearson coefficient of 0.80 and p < 0.01. The CFS also shows excellent consistency with an experienced geriatric medicine specialist’s opinion (Cohen’s kappa: 0.80, p < 0.0001), strong inter-rater reliability (Cohen’s kappa: 0.811, p < 0.001), and strong test-retest reliability (Cohen’s kappa: 1.0, p < 0.001) [34]. Data collected for this project included the patient’s age, gender, race, CFS score, month of admission, eligibility, CFS used (yes/no), risk identified (yes/no), and palliative care referral (yes/no) based on findings. The total palliative care referral numbers were compared to the same data from one year prior during the same period. The goal outcome was palliative care referral for those elderly frail hip fracture patients who scored moderately frail or above (CFS ≥ 6). Meeting this goal outcome represents QI, with the expected results being improved patient and family satisfaction and reduced healthcare costs.
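The referral rule described above — palliative care consultation recommended for target-population patients scoring moderately frail or above (CFS ≥ 6) — can be expressed as a small decision helper. The sketch below is illustrative only; the function name and structure are hypothetical and not part of the project’s actual EHR workflow.

```python
# Hypothetical sketch of the project's referral rule, not actual EHR tooling.
# The CFS ranges from 1 (very fit) to 9 (terminally ill); a palliative care
# referral is recommended for patients in the target population (>= 65 years
# with an acute hip fracture) who score 6 (moderately frail) or above.

REFERRAL_THRESHOLD = 6  # CFS score for "moderately frail"
TARGET_MIN_AGE = 65     # target population: patients >= 65 years of age

def recommend_palliative_referral(age: int, cfs_score: int) -> bool:
    """Return True when a palliative care referral should be recommended."""
    if not 1 <= cfs_score <= 9:
        raise ValueError("CFS scores range from 1 (very fit) to 9 (terminally ill)")
    return age >= TARGET_MIN_AGE and cfs_score >= REFERRAL_THRESHOLD

print(recommend_palliative_referral(82, 7))  # True: moderately frail or above
print(recommend_palliative_referral(82, 4))  # False: low frailty, no referral
```

Scores of 5 or below correspond to patients for whom referral was not recommended in this project.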

Findings

During the pre-implementation period from November – December 2021, 24 patients met the criteria with admission by the hospitalist APRNs. Of those 24 patients, only one received a palliative care referral during their hospitalization. In comparison, during project implementation from November – December 2022, 19 patients met the same specified criteria. The CFS risk assessment was performed on 13 of these 19 patients, equating to 68.4% compliance with utilization of the risk assessment tool and surpassing the goal of 50%. Seven of these 13 assessed patients were deemed less than moderately frail, scoring ≤ 5 on the CFS assessment performed by the admitting APRN; therefore, palliative care referral was not recommended for these seven low-frailty patients. Six of the 13 patients were moderately frail or above (CFS ≥ 6), and four of these six received the recommended palliative care referral. Based on these data, there was 84.6% compliance with appropriately placed palliative care referrals. Of the 13 patients assessed for frailty, the APRNs performing the assessment appropriately followed the referral recommendations for 11 patients (Table 5).
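The compliance percentages above follow directly from the reported counts; the short sketch below simply re-derives them. The variable names are ours; the numbers are taken from the project data.

```python
# Re-deriving the reported compliance figures from the 2022 implementation counts.
eligible = 19        # patients meeting inclusion criteria, Nov.-Dec. 2022
assessed = 13        # patients on whom the CFS risk assessment was performed
low_frailty = 7      # CFS <= 5: referral correctly not recommended
referred = 4         # of the 6 patients scoring CFS >= 6, those actually referred

tool_compliance = assessed / eligible  # utilization of the risk assessment tool
# Referral recommendations were followed appropriately for the 7 low-frailty
# patients (no referral) plus the 4 referred frail patients: 11 of 13.
referral_compliance = (low_frailty + referred) / assessed

print(f"CFS utilization: {tool_compliance:.1%}")                    # 68.4%
print(f"Appropriate referral handling: {referral_compliance:.1%}")  # 84.6%
```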

Table 5: Results

2021 (pre-implementation): 24 patients in the specified population admitted by hospitalist APRNs; CFS utilized: N/A; number scoring ≥ 6 (moderately frail or above) on the CFS: N/A; palliative care referrals: 1.

2022 (implementation): 19 patients in the specified population admitted by hospitalist APRNs; CFS utilized: 13; number scoring ≥ 6 (moderately frail or above) on the CFS: 6; palliative care referrals: 4.

Note: APRN: Advanced Practice Registered Nurse, CFS: Clinical Frailty Scale

Implications for Practice/Policy

The goal of this project was to identify frail patients by utilizing the CFS on all hip fracture patients within the target population admitted to the medical center by the hospitalist service, as recommended by the American College of Surgeons and the American Geriatrics Society [12]. These frail patients are considered at high risk for complications and mortality [5,7], which may affect the patient’s or caregiver’s QOL due to symptom burden, caregiver stress, and complex treatment options [16]. Palliative care referral is recommended for those patients in the target population who score moderately frail and above (CFS ≥ 6). This QI project is intended to improve patient and caregiver QOL and reduce healthcare costs. Palliative care assists with symptom management and advance care planning, promoting QOL by identifying and respecting the patient’s personal goals of care [23]. Palliative care is also associated with lowered healthcare utilization and costs, saving an average of $2,642 – $6,896 per patient by respecting the individual’s wishes regarding the plan of care [26]. The project’s strengths included the excellent collaboration between the hospitalist group and the palliative care team. Numerous studies also show the superiority of the CFS over other frailty assessments and a positive correlation between a higher frailty score and morbidity and mortality. Limitations include the small sample size and the provider subjectivity of CFS scoring. Additionally, the project only evaluated elderly patients who had sustained an acute hip fracture and did not address additional types of injuries or surgical procedures. Another limitation is the lack of evaluation of long-term outcomes, including the patient’s perceived QOL or patient and caregiver satisfaction following palliative care consultation.

References

  1. Lewiecki ME, Wright NC, Curtis JR, Siris E, Gagel RF, et al. (2017) Hip fracture trends in the United States, 2002 to 2015. Osteoporos Int 29: 717-722.
  2. Adeyemi A, Delhougne G (2019) Incidence and economic burden of intertrochanteric fracture. JBJS Open Access 4: 1-6.
  3. Kwak M, Digbeu B, Des Bordes J, Rianon N (2022) The association of frailty with clinical and economic outcomes among hospitalized older adults with hip fracture surgery. Osteoporos Int 33: 1477-1484.
  4. Arshi A, Rezzadeh K, Stavrakis AI, Bukata SV, Zeegen EN (2019) Standardized hospital-based care programs improve geriatric hip fracture outcomes: An analysis of the ACS NSQIP targeted hip fracture series. J Orthop Trauma 33: e223-e228.
  5. Pioli G, Bendini C, Pignedoli P, Giusti A, Marsh D (2018) Orthogeriatric co-management: Managing frailty as well as fragility. Injury 49: 1398-1402.
  6. Kani KK, Porrino JA, Mulcahy H, Chew FS (2018) Fragility fractures of the proximal femur: Review and update for radiologists. Skeletal Radiol 48: 29-45.
  7. Koso RE, Sheets C, Richardson WJ, Galanos AN (2018) Hip fracture in the elderly patients: A sentinel event. Am J Hosp Palliat Care 35: 612-619.
  8. Alexiou K, Roushias A, Varitimidis S, Malizos K (2018) Quality of life and psychological consequences in elderly patients after a hip fracture: A review. Clin Interv Aging 13: 143-150.
  9. Johnston CB, Holleran A, Ong T, McVeigh U, Ames E (2018) Hip fracture in the setting of limited life expectancy: The importance of considering goals of care and prognosis. J Palliat Med 21: 1069-1073.
  10. Cannada LK, Mears SC, Quatman C (2020) Clinical faceoff: When should patients 65 years of age and older have surgery for hip fractures, and when is it a bad idea? Clin Orthop Relat Res 479: 24-27.
  11. Allsop S, Morphet J, Lee S, Cook O (2020) Exploring the roles of advanced practice nurses in the care of patients following fragility hip fracture: A systematic review. J Adv Nurs 77: 2166-2184.
  12. Archibald MM, Lawless M, Gill TK, Chehade MJ (2020) Orthopaedic surgeons’ perceptions of frailty and frailty screening. BMC Geriatr 20: 1-11.
  13. Stow D, Spiers G, Matthews FE, Hanratty B (2019) What is the evidence that people with frailty have needs for palliative care at the end of life? A systematic review and narrative synthesis. Palliat Med 33: 399-414.
  14. Nidadavolu LS, Ehrlich AL, Sieber FE, Oh ES (2020) Preoperative evaluation of the frail patient. Anesth Analg 130: 1493-1503.
  15. Traven SA, Reeves RA, Althoff AD, Slone HS, Walton ZJ (2019) New five-factor modified frailty index predicts morbidity and mortality in geriatric hip fractures. J Orthop Trauma 33: 319-323.
  16. Radbruch L, De Lima L, Knaul F, Wenk R, Ali Z, et al. (2020) Redefining palliative care: A new consensus-based definition. J Pain Symptom Manage 60: 754-764.
  17. Pizzonia M, Giannotti C, Carmisciano L, Signori A, Rosa G, Santolini F, et al. (2020) Frailty assessment, hip fracture and long-term clinical outcomes in older adults. Eur J Clin Invest 51: 1-9.
  18. Zhang X, Jiao J, Xie X, Wu X (2021) The association between frailty and delirium among hospitalized patients: An updated meta-analysis. J Am Med Dir Assoc 22: 527-534.
  19. Dhesi JK, Lees NP, Partridge JS (2019) Frailty in the perioperative setting. Clin Med 19: 485-489.
  20. Lee S, Nam J, Kim Y, Kim M, Choi J, et al. (2021) Predictive model for the assessment of preoperative frailty risk in the elderly. J Clin Med 10: 4612.
  21. Church S, Rogers E, Rockwood K, Theou O (2020) A scoping review of the clinical frailty scale. BMC Geriatr 20: 1-18.
  22. Sullivan NM, Blake LE, George M, Mears SC (2019) Palliative care in the hip fracture patient. Geriatr Orthop Surg Rehabil 10: 1-7.
  23. Harries L, Moore A, Kendall C, Stanger S, Stringfellow TD, et al. (2020) Attitudes to palliative care in patients with neck-of-femur fracture: A multicenter survey. Geriatr Orthop Surg Rehabil 11: 1-7.
  24. Santivasi WL, Partain DK, Whitford KJ (2019) The role of geriatric palliative care in hospitalized older adults. Hosp Pract 48: 37-47.
  25. Sampaio SG, Motta LB, Caldas CP (2019) Value-based medicine and palliative care: How do they converge? Expert Rev Pharmacoecon Outcomes Res 19: 509-515.
  26. Konda SR, Lott A, Egol KA (2020) Development of a value-based algorithm for inpatient triage of elderly hip fracture patients. J Am Acad Orthop Surg 28: e566-e572.
  27. Davies A, Tilston T, Walsh K, Kelly M (2018) Is there a role for early palliative intervention in frail older patients with a neck of femur fracture? Geriatr Orthop Surg Rehabil 9: 1-6.
  28. Flöther L, Pötzsch B, Jung M, Jung R, Bucher M, et al. (2021) Treatment effects of palliative care consultation and patient contentment. Medicine 100: 1-6.
  29. Finkelman A (2022) Quality improvement: A guide for integration in nursing (2nd ed.). Jones & Bartlett Learning, Burlington, MA.
  30. Porter AS, Harman S, Lakin JR (2020) Power and perils of prediction in palliative care. Lancet 395: 680-681.
  31. Flaherty C, Fox K, McDonah D, Murphy J (2018) Palliative care screening: Appraisal of a tool to identify patients’ symptom management and advance care planning needs. Palliat Care 22: E92-E96.
  32. Poitras M, Maltais M, Bestard-Denommé L, Stewart M, Fortin M (2018) What are the effective elements in patient-centered and multimorbidity care? A scoping review. BMC Health Serv Res 18: 1-9.
  33. Rockwood K, Song X, MacKnight C, Bergman H, Hogan D, et al. (2005) A global clinical measure of fitness and frailty in elderly people. CMAJ 173: 489-495.
  34. Özsürekci C, Balcı C, Kızılarslanoğlu MC, Çalışkan H, Tuna Doğrul R, et al. (2019) An important problem in an aging country: Identifying the frailty via 9-point clinical frailty scale. Acta Clin Belg 75: 200-204.

In the Elderly (≥ 65 Years of Age) Hospitalized Patient Who Experiences an Acute Fragility Hip Fracture, How Does Implementation of the Clinical Frailty Scale (CFS) Tool, Compared to No Frailty Screening, Increase the Incidence of Palliative Care Referral in the Target Population?

DOI: 10.31038/IJNM.2023422

Introduction

Frailty is a state of increased vulnerability to illnesses or health conditions following a stressor event such as a hip fracture, increasing the incidence of disability, hospitalization, long-term care, and premature mortality [1]. Hip fracture is associated with high morbidity and mortality in the frail, elderly patient [2]. A hip fracture in the elderly can also severely impact physical, mental, and psychological health and diminish quality of life (QOL) [3]. Palliative care has been shown to mitigate these impacts by managing symptoms, thus improving patient QOL and patient and caregiver satisfaction [4,5].

The American College of Surgeons and the American Geriatrics Society recommend that frailty screening be performed as a routine preoperative assessment on patients ≥ 65 years of age [1]. A standardized assessment tool can be used to measure frailty in this patient population as a predictor of those at risk for high morbidity and mortality. The Clinical Frailty Scale (CFS) is a standardized assessment tool that measures frailty based on comorbidity, function, and cognition, assigning a numerical frailty score ranging from very fit to terminally ill [6]. The CFS was used in this quality improvement study to measure frailty in this patient population, and palliative care consultation was recommended for those who scored moderately frail or above.

Including palliative care in the multidisciplinary care of frail, elderly hip fracture patients is appropriate as these injuries can pose a risk to QOL [7]. Palliative care providers assist with symptom management, QOL, and advanced care planning [4,5]. The palliative care team helps patients determine the best management or treatment options considering the patient’s prognosis and can assist in providing safe and effective pain management to elderly patients [4,5]. This quality improvement initiative demonstrated the correlation between implementation of a frailty assessment on this patient population and the increase in palliative care consultations. Further studies are needed to evaluate the impact of frailty screening and subsequent palliative care inclusion on symptom management, QOL, and patient and caregiver satisfaction.

References

  1. Archibald MM, Lawless M, Gill TK, Chehade MJ (2020) Orthopaedic surgeons’ perceptions of frailty and frailty screening. BMC Geriatr 20: 1-11.
  2. Koso RE, Sheets C, Richardson WJ, Galanos AN (2018) Hip fracture in the elderly patients: A sentinel event. Am J Hosp Palliat Care 35: 612-619.
  3. Alexiou K, Roushias A, Varitimidis S, Malizos K (2018) Quality of life and psychological consequences in elderly patients after a hip fracture: A review. Clin Interv Aging 13: 143-150.
  4. Harries L, Moore A, Kendall C, Stanger S, Stringfellow TD, Davies A, et al. (2020) Attitudes to palliative care in patients with neck-of-femur fracture: A multicenter survey. Geriatr Orthop Surg Rehabil 11: 1-7.
  5. Santivasi WL, Partain DK, Whitford KJ (2019) The role of geriatric palliative care in hospitalized older adults. Hosp Pract 48: 37-47.
  6. Church S, Rogers E, Rockwood K, Theou O (2020) A scoping review of the clinical frailty scale. BMC Geriatr 20: 1-18.
  7. Sullivan NM, Blake LE, George M, Mears SC (2019) Palliative care in the hip fracture patient. Geriatr Orthop Surg Rehabil 10: 1-7.

Big Data vs. Big Mind: Who People ARE vs. How People THINK

DOI: 10.31038/IMROJ.2023814

Abstract

This paper presents a new approach to understanding Big Data. Big Data analysis makes it possible to hypothesize about what people think regarding certain issues by extracting information on how they move around, what interests them, what the context is, and what they do. We believe, however, that by also allowing users to answer simple questions, their interests can be captured more accurately, as the new area of Mind Genomics tries to do. The paper introduces the emerging science of Mind Genomics as a way to profoundly understand people, not so much by their mind as by the pattern of their reactions to messages. Understanding the way nature is, however, does not suffice. It is vital to bring that knowledge into action, to use the information about a person's mind in order to drive behavior, i.e., to put the knowledge into action in a way that can be measured. The paper introduces the Personal Viewpoint Identifier as that tool and shows how the Viewpoint Identifier can be used to evaluate entire databases. The paper closes with the vision of a new web, Big Mind, analyzing huge amounts of data, in which the networks developed show both surface behavior that can be observed and deep, profound information about the way each individual thinks about a variety of topics. The paper presents a detailed comparison with the text mining approach to Big Data in order to show the advantages of understanding the 'mind' beneath the observed behavior in combination with the observed behavior. The potential ranges from creating personalized advertisements to discovering profound linkages between the aspects of a person and the mind of the person.

Introduction

When we look at networks, seeking patterns, we infer from the behaviors and the underlying structure what might be going on in the various nodes. We don’t actually communicate with the nodes; they’re represented geometrically as points of connection. Analytically, we can look at behavior, imposing structural analysis on the network, looking at the different connections—the nodes, the nature of what’s being transacted, and the number and type of connections. By doing so, we infer the significance of the node. But, what about that mind in the node? What do we know? What can we know? And more deeply, is that mind invariant, unchangeable, reacting the same way no matter what new configurations of externalities are imposed?

These are indeed tough questions. The scientific method teaches us to recognize patterns, regularities, and from those patterns to infer what might be going on, both at the location and by the object being measured, the object lying within the web of the connections. Mathematics unveils these networks, different patterns, in wonderful new ways, highlighting deeper structures, often revealing hitherto unexpected relations. Those lucky enough to have programs with false colors see the patterns revealed in marvelous reds, yellows, blues, and the other rainbow colors, colors which can become dazzling to the uninitiated, suggesting clarity and insight which are not really the case. The underlying patterns are clearly not in color, and the universe does not appear to us so comely and well-colored. It is technology which colors and delights us, technology which reveals the secrets.

Now for the deeper question, what lies beyond the network, the edges, inside the nodes, inside the mind lying in the center of a particular connection? Can we ever interrogate a node? Can we ever ask a point on a network to tell us about itself? Does the point remain the same when we shift topics, so the representation is no longer how the nodes interact on one day, but rather interact on another day, or in another situation?

Understanding the environment where a business occurs requires collecting and analyzing massive amounts of data related to potential clients: what they think about the offered products and their level of satisfaction with the offered services/products. The problem of understanding the mind of potential clients is not new; it has been a focus of marketing researchers for some time. One of the most prominent tools used for this purpose is text mining, defined as a process of extracting interesting and significant patterns in order to explore knowledge from textual data sources. Usually, the collected data are unstructured, i.e., collected from blogs, social media, etc. As the amount of unstructured data collected by companies constantly increases, text mining is gaining a lot of relevance [1-7].

Text mining plays a significant role in business intelligence, helping organizations and enterprises to analyze their customers and competitors in order to make better decisions. It also helps in the telecommunication industry, in business and commerce applications, and in customer chain management systems.

Usually, text mining combines discovery, mining techniques, decoding, and natural language processing. The most important elements of this approach are powerful mining techniques, visualization technologies, and an interactive analysis environment for analyzing massive sets of data so as to discover information of marketing relevance [8,9].

In the world of today, a number of studies suggest that the efforts to create a technology of text mining have as yet fallen short. Today's (2020) reality is that text mining has performed not as well as was hoped, neither in terms of explicit hopes and predictions nor in terms of the vaster implicit ones. Companies which have applied automated analysis of textual feedback, or text mining, have failed to reach their expectations, which emphasizes just how hard text mining can be. Research in the area of natural language processing (NLP) encounters a number of difficulties due to the complex nature of human language. Thus, this approach has performed below expectations in terms of depth of analysis of customer experience feedback and accuracy [10].

There are specific areas of disappointment. For example, major obstacles have been encountered in predicting with accuracy the sentiment (positive/negative/neutral) of customers. Despite what one might read in the literature of consumer researchers and others employing text mining for sentiment analysis, the inability to successfully address these issues has disillusioned some. Some of the disillusionment is to be expected, because sentiment analysis must be sensitive to the nuances of many languages. Feelings expressed by words in one language may not translate naturally when the words are translated. Only a few tools are available that support multiple languages. It may be that better feedback might actually be obtained with structured systems, such as surveys.

In this paper we propose a new approach to understanding Big Data from the point of view of understanding the mind of the person who is a possible 'node.' We operationally define the world as a series of experiences that might be captured in Big Data, and for each experience we create a way of understanding the different viewpoints or mind-sets of the persons undergoing that experience. The effect is to add a deeper level to Big Data, moving beyond the patterns of what is observed to the mind-sets of the people who undergo the experience. In effect, the approach provides a deeper, two-dimensional matrix of information: the first dimension is the structure of what is being done (traditional Big Data), and the second is the mind-set of the person(s) reacting to this structure. In essence, therefore, a WHAT and the MIND(s) behind the WHAT. We conclude with the prospect of creating that understanding of the MIND through straightforward, affordable experiments, and a tool (the Personal Viewpoint Identifier) which allows one to understand the mind of any person in terms of the relevant action being displayed.

Moving from Analysis of an Object to Interrogating It

We move now from analysis of an object in a network to actually interrogating the object in order to understand it from the inside, to get a sense of its internal composition. The notion here is that once we understand the network as externalities and understand deep mind properties of the nodes in the network, the people, we have qualitatively increased the value of the network by an order of magnitude. We not only know how the points in the network, the people, react, but we know correlates of that reaction, the minds and motivations of these points which are reacting and interacting.

Just how do we do that when we recognize that this mind may have opinions, that the mind may have a desire to be perceived as politically correct, and that, in fact, this mind in the object may not be able to tell us really what’s important? How do we work with this mind to find out what’s going on inside?

It is at this juncture that we introduce the notion of Mind Genomics, a metaphor for an approach to systematically explore and then quantitatively understand how things are perceived by person(s) using the system. The output of that understanding comprises content (the components of this mind), numbers (a way to measure the components of the mind), and linkages (the assignment of the content and its numbers to specific points, nodes, people in the network) [11,12].

A Typical Problem – What Should the Financial Analyst Say to Convince a Prospect to Commit?

Lest the foregoing seem to be too abstract, too esoteric, too impractical, let’s put a tangible aspect onto the idea. What happens when the point or node corresponds to a person walking in to buy a financial retirement product from a broker whom the person has never met? How does this new broker understand what to say to the person at the initial sales interaction, that first ‘moment of truth’ when there is a chance for a meaningful purchase to occur? And what happens when the interaction occurs in an environment where the financial consultant or salesperson never even meets the prospective buyer, but rather relies upon a Web site, or a simple outward-bound call-center manned by non-professionals?

The foregoing paragraph lays out the problem. We have our network, nodes connected by the sales activity. By understanding the mind of the prospective customer, the financial analyst has a much greater chance of making the sale, in contrast to simply knowing the age, gender, family situation, income, and previous Web searching behavior of the prospect, all available from Big Data and grist for the analytic mill. We want to go deeper, into the mind of that prospect.

Psychologists and marketers have long been interested in understanding what drives a person to do something, the former (along with some philosophers) to create a theory of the mind, the latter to create products and services, and sell them. We know that people can articulate what they want, describe to an interviewer the characteristics of a product or service that they would like, often just sketchily, but occasionally in agonizing detail. And all too often this description leads the manufacturer or the service supplier on a wild-goose-chase, running after features that people really don’t want, or features which are so expensive as to make the exercise simply one of wish description rather than preparation for design.

A more practical way runs an experiment presenting the person, this node in the system, with different ideas, different descriptions about a product, obtains ratings of the description, and then through statistical modelling, discovers those specific elements in the description which link to a positive response. In other words, run an experiment treating this node, this point in a network, as a sentient being, not just as something whose behavior or connections are to be observed as objective, measurable quantities. Looking at the network as an array of connected minds, not connected points, minds with feelings, desires, and opinions, will enrich us dramatically in theory and in practice.

The experiment, or better the paradigm of Mind Genomics, is rather simple. We use a paradigm known as Empathy and Experiment, empathy to identify the ‘what,’ the content, and experiment to identify the values, the ‘important’ [13].

Our strategy is simple. We want to add a new dimension to the network by revealing the mind of each nodal point. To do so requires empathy, understanding the ‘what,’ and experiment, quantifying the amount, revealing the structure. Putting the foregoing into operational terms, we will identify a topic area relevant to the node, the person, uncover elements or ideas appropriate to the topic, and then quantify the importance of each element. After Empathy uncovers the raw materials, the elements, Experiment mixes and matches these elements into different combinations, obtains ratings of the combinations, and then estimates how the individual elements in the combination drive the response.

The foregoing paragraph described an experiment, not a questionnaire. We infer what the person, the node, wants from the pattern of responses, and from behavior we determine which elements produce positive responses and which elements produce negative responses [14].

Putting the Emerging Science of Mind Genomics into Action – Setting Up a Study and Computing Results

The best way to understand the concepts of Mind Genomics, its application to knowledge and to networks, is through an illustration. This paper presents the application of Mind Genomics to create a micro-science about choosing a financial advisor for one's retirement planning. The case history shows the input and practical output of Mind Genomics: how a financial advisor can understand the mind and needs of a customer, identifying the psychological mind-set and relevant points from the very beginning of the interaction. A sense of the process can be obtained from Figure 1. The paper will explicate the various steps, using actual data from a Mind Genomics experiment.

fig 1

Figure 1: The process of Mind Genomics, from setup to analysis and application. Figure courtesy of Barry Sideroff, Direct Ventures, LLC.

To create and to apply the micro-science we follow the steps below. Although the case history is particularized to selecting a financial advisor, the steps themselves would be followed for most applications. Only the topic area varies.

  1. We begin by defining the topic. We also specify the qualifications for the consumer respondents, those who will be part of what might initially look like a Web-based survey, but in reality, will participate in what constitutes a systematic experiment. For our study, the focus is on the interaction of the financial advisor with the consumer, with the specific topic being the sales of retirement instruments such as annuities. The key words here are focus and granularity. Specificity makes all the difference, elevating the study from general knowledge to particulars. Granularity means that the data provide results that can be immediately applied in practice.
  2. Since our focus here is on the inside of the mind, what motivates the person to listen to the introductory sales message of the financial planner, we will use simple phrases, statements that a prospective client of the financial analyst is likely to hear from the analyst himself or read in an advertisement. Table 1 presents the set of 36 elements divided into four questions (silos, categories), each question comprising exactly nine answers (elements). The silos are presented as questions to be answered. This study used a so-called 4×9 matrix (four questions, nine answers per question). The elements are short, designed to paint a word-picture, and are 'stand-alone.'
  3. A set of 36 elements covers a great deal of ground and typically suffices to teach us a lot about the particular minds of the participants, our respondents, the nodes in a web. The particular arrangement of four silos and nine elements is only one popular arrangement of silos and their associated elements. An equally popular arrangement is 6×6, six silos with six elements each. Recent advances have shown good results with a much smaller set of 16 elements, emerging from four questions, each with four answers (four silos, four elements).
  4. Create vignettes, systematically varied vignettes (combinations). The 4×9 design requires 60 different vignettes. Each respondent will evaluate a completely different set of vignettes, enabling Mind Genomics to test a great deal of the possible ‘design’ space of potential combinations. Rather than testing the same 60 vignettes with many respondents, the strategy of testing different combinations tests more of the possible combinations. The pattern emerges with less error, even though each combination is tested by one, at most two respondents.
  5. The combinations, vignettes (called profiles or concepts in other published work), comprise 2-4 elements, with each element appearing five times. The elements appear against different backgrounds, since all the elements vary from one vignette to another. The underlying experimental design, a 'recipe book,' controls which particular elements appear in each vignette. Although to the untutored eye the 60 different vignettes appear to be simply a random, haphazard collection of elements with no real structure, nothing could be further from the truth. The experimental design is a well-thought-out mathematical structure ensuring that each element appears independently of every other element and is repeated the same number of times. This allows us to deconstruct the response to the 60 test vignettes into the individual contribution of each element. Statistical analysis by OLS (ordinary least-squares regression) will immediately reveal which elements are responsible for the rating and which simply go along, not contributing anything.
  6. We see an example of a vignette in Figure 2. A program sets up the vignettes remotely on the respondent's computer, presents each vignette, and acquires the rating. The bottom of the vignette shows the rating scale for the vignette. The respondent reads the vignette in its entirety and rates the vignette on the scale. The interview is relatively quick, requiring about 12 minutes for the presentation of the vignettes followed by a short classification questionnaire. The process is standardized, easy, disciplined, and quite productive in terms of well-behaved, tractable data that can be readily interpreted by most people, technical or non-technical alike. As long as the respondent is at least a bit interested and participates, the field execution of the study with respondents is straightforward. The process is automatic from the start of the experiment to the data analysis, making the system scalable. The experiment is designed to create a corpus of knowledge in many different areas, ranging from marketing to food to the law, education, and government. It is worth noting that whereas the 60 vignettes require about 12 minutes to complete, the shorter variation, the 4×4 with 24 vignettes, requires only about 3 minutes.
  7. The original rating scale that we see at the bottom of the vignette in Figure 2 is a Likert scale, or category scale, an ordered set of categories representing the psychological range from 1 (not at all interested) to 9 (very interested). For our analysis we simplify the results, focusing on two parts of this 9-point scale, with the lower part (ratings 1-6) corresponding to not interested and the upper part (ratings 7-9) corresponding to interested. We re-code ratings of 1-6 to the number 0 and ratings of 7-9 to the number 100. The recoding loses some of the granular information, but the results are more easily interpreted. Although the 9-point scale provides more granular information, the reality is that managers focus on the yes/no aspect of the results.
  8. The Mind Genomics program also adds a vanishingly small random number to each newly created binary value, in order to ensure that the OLS (ordinary least-squares) regression does not crash in the event that a respondent assigns all vignettes ratings of 1-6, or all ratings of 7-9. In that case, the transformed binary variables are all 0 or all 100, respectively, and the random number adds the needed variability to prevent a 'crash.' The 60 vignettes allow the researcher to create an equation for each respondent. Building the model at the level of the individual is a powerful form of control, known to statisticians as the strategy of 'within-subjects design.'
  9. Some of the particulars underlying the modelling are:

a. The models are created at the level of the individual respondent, using the well-accepted procedure of OLS, ordinary least squares regression.

b. The experimental design ensures that the 36 elements are statistically independent of each other, so that the coefficients, the impact values of the elements, have absolute value. The inputs are 0/1: 0 when the element is absent from a vignette, 1 when the element is present in the vignette.

c. OLS uses the 60 sets of elements/ratings, one per vignette, as the cases. There are 36 independent variables and 60 cases, allowing sufficient degrees of freedom for OLS to emerge with robust estimates.

d. We express the equation or model as: Binary Rating = k0 + k1(A1) + k2(A2) + … + k36(D9). For the current iteration of Mind Genomics, we estimate the additive constant k0, the baseline. Future plans are to estimate the coefficients while 'forcing the regression through the origin,' viz., assuming that the additive constant is 0.

e. The equation says that the rating is the combination of an additive constant, k0, and weights on the elements. The elements appear either as 0 (absent) or as 1 (present), so the weights, k1 – k36, show the driving force of the different elements.
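The per-respondent modelling described in steps 5-8 and points a-e can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the design matrix here is randomized for simplicity, whereas a real Mind Genomics study uses a balanced experimental design that guarantees element independence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vignettes, n_elements = 60, 36

# Hypothetical design matrix: one row per vignette, 1 = element present.
# Each vignette carries 2-4 elements, per the text.
X = np.zeros((n_vignettes, n_elements))
for row in X:
    row[rng.choice(n_elements, size=rng.integers(2, 5), replace=False)] = 1

# Simulated 9-point ratings for one respondent.
ratings = rng.integers(1, 10, size=n_vignettes)

# Re-code: 1-6 -> 0 ("not interested"), 7-9 -> 100 ("interested"), plus a
# vanishingly small random number so OLS never sees zero variance (step 8).
y = np.where(ratings >= 7, 100.0, 0.0) + rng.normal(0, 1e-5, n_vignettes)

# OLS with an additive constant k0: solve for [k0, k1..k36].
A = np.hstack([np.ones((n_vignettes, 1)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
k0, k = coeffs[0], coeffs[1:]   # baseline and the 36 element impact values
```

With 60 cases and 37 unknowns, the within-subjects model is estimable for every respondent, which is exactly why each person rates 60 vignettes.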

Table 1: The raw material of Mind Genomics, elements arranged into four silos, each silo comprising nine elements.

tab 1

Understanding the Result Through the Additive Constant and the Coefficients

We now look at the strongest performing elements from the equation or model which relates the presence/absence of the elements to the transformed binary rating of 0 (not interested) or 100 (interested). The strongest performing elements appear in Table 2. The table shows all elements which generate an impact value or coefficient of 8 or higher for any key subgroup, whether total sample, gender, age, or income.

  1. The total panel comprises 241 respondents. We can break out the total panel in self-defined subgroups, e.g., gender, age, and income. That information is available from the self-profiling classification, a set of questions answered by the respondent after the respondent rated the set of 60 vignettes.
  2. The additive constant tells us the conditional probability of a person saying interested in what the financial advisor has to say, i.e., assigning a rating of 7-9, when reading a vignette which has no elements (the baseline). Of course, by design all vignettes comprise elements, so the additive constant is an estimated parameter. We can use the additive constant as a baseline. For the total panel it is 35, meaning that 35% of the respondents would rate a vignette 7-9. Males are less likely to be positive whereas females are more likely to be positive (additive constants of 28 vs. 36). Those under 40 are far less likely to be positive, those over 40 are more likely to be positive (additive constants of 29 vs. 40). Income makes no difference.
  3. Beyond the baseline are the elements, which contribute to the total. We add up to four elements to the baseline to get an estimated total value, i.e., the percent of respondents who say that they would be interested in the vignette about the financial consultant were the elements to be part of the advertising.
  4. To allow patterns to emerge, the tables of coefficients show only those positive coefficients of +2 or higher, drivers of interest. Negative coefficients are not shown.
  5. The coefficients for the 36 elements are low. Table 2 shows the strongest elements only, and only elements which generate a coefficient or impact value of +8 for at least one subgroup. We interpret that +8 to mean that when the element is incorporated into the advertising vignette, at least 8% more people will rate the vignette 7-9, i.e., say 'I'm interested.' The value +8 has been observed in many other studies to signal that the element is 'important' in terms of co-varying with a relevant behavior. Thus, the value +8 is used here as an operationally defined value for 'important.'
  6. Our first look into the results suggests nothing particularly strong emerges from the total sample. We do see six elements scoring well in at least one subgroup. However, we see no general pattern. That is, we don’t see an element working very well across the different groups. Furthermore, reading the different elements only confuses us. There are no simple patterns.
  7. Our first conclusion, therefore, is that the experiment worked at the simple level of discovering what is important and what is not. We are able to develop elements, test combinations, deconstruct the combinations, and identify winning elements. The experiment, at least thus far, does not reveal deeper information about the mind(s) of the respondents. We will find that deeper information when we use clustering in the next section to identify mind-sets.
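The arithmetic in points 2-5 can be made concrete with a small sketch. The coefficient values below are invented for illustration, not the study's data; only the total-panel baseline of 35 and the +8 'importance' threshold come from the text.

```python
import numpy as np

additive_constant = 35                       # total-panel baseline from the text
coeffs = np.array([7, 3, 9, -2, 8, 1, 12])   # hypothetical element impact values

# Operational 'importance' filter: keep only coefficients of +8 or higher.
strong = coeffs[coeffs >= 8]

# Estimated total interest: baseline plus the best four elements, i.e. the
# percent of respondents expected to rate such a vignette 7-9.
top_four = np.sort(coeffs)[::-1][:4]
estimated_interest = additive_constant + top_four.sum()
```

So with these made-up numbers the best four elements (12, 9, 8, 7) lift the baseline of 35 to an estimated 71% interested.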

Table 2: Strong performing elements for the Total Sample and for key subgroups defined by how the respondent classifies himself or herself. The table presents only those strong-performing elements with average impacts of 8 or higher in at least one self-defined subgroup.

tab 2

Deeper, Possibly More Fundamental Structures of the Mind by Clustering

Up to now we have looked at people as individuals, perhaps falling into convenient groups defined by easy-to-measure variables such as gender, age, income. We could multiply our easy-to-measure variables by asking our respondents lots of questions about themselves, about their attitudes towards financial investors, about their feelings towards risk versus safety, and so forth. Then, we could classify the respondents by the different groups to which they belong, searching for a possible co-variation between group membership and response pattern to elements (Table 3).

Table 3: Performance of the strongest elements in the three mind-sets emerging from the cluster analysis. People in MS1 appear to be the target group to be identified as the promising clients for the financial advisor.

tab 3

The just-described approach typifies the conventional way of thinking about people. We define people as belonging to groups and then search out the linkage between such groups and some defined behavior. Scientists call this strategy the hypothetico-deductive method, beginning first with a sense of ‘how the world might work,’ and then running an experiment to confirm, or just as likely, to falsify that hypothesis. We work from the top down, thinking about what might happen and proceeding merrily to validate or reject that thinking.

Let’s proceed in a different manner, without hypothesizing about how the world works. Let’s proceed with the data we have, looking instead for basic groups who show radically different, interpretable patterns. In the world of color this is analogous to looking for the basic colors of the spectrum, red, yellow, blue, which must emerge out of the measured thousands of colors of flowers. Let’s work from the bottom up, in a more pointillistic, empirical fashion, emulating Francis Bacon in his Novum Organum.

How then do we do this? How do we find naturally occurring groups of people in a specific population who show different patterns of behavior or at least responses for the micro, limited area? That is, we are working with a small corner of reality, one’s responses to messages about choosing a financial advisor. It’s a limited aspect of reality. How is that reality constituted? Are there different groups of minds out there, groups wanting different features? Are these groups of minds interpretable? To continue with the aforementioned metaphor, can we find the basic colors for this aspect of reality, the red/blue/yellow, not of the whole world, but the red/blue/yellow of choosing a financial advisor?

That we have limited our focus to the limited, micro area of messaging for client acquisition by a financial advisor makes our job easier:

  1. We are working in a corner, nook, a little region of reality. That small region is, however, quite granular. We already have rich material produced by our study. Our study with 36 elements and 241 profiles of impact values tells us how 241 individuals value the individual elements.
  2. Focusing only on that small wedge of reality, let us see whether there is a deeper structure, focusing only on the reality of choosing a financial advisor and using only the mind of the consumer as a way to organize reality. Continuing our metaphor of colors, we have come upon a new, limited aspect of reality.
  3. What are the basic dimensions of that new, limited aspect of reality? We have only two ground rules: Parsimony and Interpretability. Ground Rule 1, Parsimony: we should be looking for primaries, the fewer the better, for this new aspect of reality, our mind of selecting the investment advisor. Ground Rule 2, Interpretability: we must be able to interpret these primaries in a simple way. They must make sense, must tell a story.
  4. The foregoing introduction leads us naturally to our data, our 241 rows (one per respondent), and our 36 columns (one per element). The numbers in the 36 columns are the 36 coefficients from the model relating the presence/absence of the 36 elements to the binary transformed rating. We apply the method of cluster analysis to our 241 rows x 36 columns. We do not incorporate the additive constant into our cluster analysis, because it doesn’t give us information about the response to particular elements, the focus of the cluster analysis.
  5. Cluster analysis puts our 241 respondents first into two groups, then into three groups, then into four groups, and so forth. These are clusters, which we can call mind-sets or viewpoints because they represent different viewpoints that people have about what is important in the interaction with a financial advisor. Furthermore, the word ‘viewpoint’ emphasizes the psychological nature of the cluster, that we are dealing with the mind here, the mind as it organizes one small corner of reality, the interaction with a financial advisor.
  6. We end up with a solution suggesting three different viewpoints, as Table 3 shows. These three viewpoints are shown and named by virtue of the strongest performing elements in each viewpoint. The additive constants, our baselines, lie in a small range and are fairly low in magnitude, 30-40. There is no mind-set just ready to spring to attention, willing to buy the services of the financial advisor. That ready-to-act mind-set would be identified by a high additive constant.
  7. The total sample shows no strong elements. This means that without any knowledge of the mind of the prospect it’s unlikely that someone will know what to say, or the right thing to say. Perhaps the strongest message, with a coefficient of +7 (an additional 7% interested in working with the advisor) is the phrase: Tell us when you want to retire, and we will develop an action plan to get you there.
  8. The real differences come from the elements as responded to by the individuals in the different mind-sets. Our most promising group is Mind-Set 1, comprising 70 of our 241 respondents, or 28%. Use the six strong performing elements and one is likely to win over these respondents.
  9. If nothing else but the data in Table 3 are known, how might the salesperson 'know' that she or he is dealing with a prospect from Mind-Set 1, versus knowing that the person is in Mind-Set 2 or Mind-Set 3, the less promising mind-sets, the ones harder to convince? Table 3 simply tells us what to say, precisely, once we find the people, a major advance over the knowledge that we began with, but not the whole story. It will be our job to assign a new person with some confidence to one of the three mind-sets, in order to proceed with the sales effort. Hopefully, most of the prospects will belong to Mind-Set 1.
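The clustering step in points 4-6 can be sketched as follows. The paper does not name the exact clustering algorithm, so this illustration uses a plain k-means written with NumPy over a stand-in coefficient matrix of the stated dimensions (241 respondents × 36 element coefficients); the data here are simulated, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0, 5, size=(241, 36))   # stand-in for the coefficient matrix

def kmeans(X, k, iters=50):
    """Partition rows of X into k clusters (mind-sets) by plain k-means."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each respondent to the nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its members (keep it if empty)
        new_c = []
        for j in range(k):
            members = X[labels == j]
            new_c.append(members.mean(axis=0) if len(members) else centroids[j])
        centroids = np.array(new_c)
    return labels, centroids

mind_set, centers = kmeans(data, k=3)
# Each mind-set is then named from its strongest-performing elements.
```

Note that, as in point 4, the additive constant is excluded; only the 36 element coefficients enter the clustering.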


Figure 2: An example of a test vignette. The elements appear in a centered format with no effort to connect the elements, a format which enhances ‘information grazing.’ The vignette shows the ratings scale at the bottom, and the progress in the experiment at the top right (screen 15 out of 60).

Finding Viewpoints (Minds) in a Population

The foregoing results suggest that we might have significantly more success focusing on the group of people who are most ready to work with the financial advisor. But how do we find these people in the population? The natural tool is data analytics, but exactly what should be done? And, in light of the enormous opportunities available to those who can consistently identify these mind-sets and then act on the knowledge, how can we create an affordable, scalable, ‘living’ mind-set assignment technology?

We walk around with lots of numbers attached to us. Data scientists can extract information about us from our tracks, whether these tracks are left by our behavior (e.g., websites that we have visited), by forms that we have filled out and are commercially purchasable (e.g., through Experian, Trans Union, or any of the other commercial data providers), by loyalty programs, or even by questionnaires that respondents complete in the course of their business transactions, medical transactions, and so forth.

All of the available data, properly mined, collated, analyzed, and reported, might well tell us when a person is ready to hire a financial advisor, e.g., upon the occasion of a marriage, a child, a promotion, a job change, a move to another city, and so forth. But just what do we say to this particular prospect, the person standing before us in person, or interacting with our website, or even sitting at home destined to be sent a semi-impersonal phone message, email, or letter? In other words, and more directly: what are the precise words to say to this person?

Those in sales know that an experienced salesperson can intuit what to say to the prospect. Perhaps the answer is to hire only experienced, competent salespeople, with 20 years of experience. After the first 100 of them are hired, what should be done with the millions of salespeople who need a job, but lack the experience, the intuition, and the track record of successes, and who are perhaps new to the workforce? In other words, how do we scale this knowledge of the minds of people, so that everyone can be sent the proper message at the right time, whether by a salesperson or perhaps even by e-commerce methods, by websites instead of salespeople?

The foregoing results in Table 3 show us what to say and to whom, especially to Mind-Set 1. The problem now becomes one of discovering the mind-set to which a specific person belongs. Unfortunately, people do not come with brass plates across their foreheads telling us the viewpoint to which they belong. And there are many viewpoints to discover for a person, as many sets of viewpoints as there are topic areas for Mind Genomics. The bottom line here is that data scientists working with so-called Big Data might be able to infer that a person is likely to be ready for a financial advisor, but as currently constituted, the same Big Data is unlikely to reveal the mind-set to which the individual person belongs. We have petabytes of data and reams of insights, but not the knowledge, the specificity about the way the mind works for any particular, limited, operationally defined topic in the reality of our experience.

We move now to the second phase of our work reported here, discovering the viewpoint to which any person belongs. We have already established the micro-science for the financial planner, the set of phrases to use for each of the three mind-sets uncovered and explicated in a short experiment. We know from our 241 respondents the mind-set to which each person belongs, having established the mind-sets and each individual’s mind-set membership by cluster analysis. How then do we identify any new person, anywhere, as belonging to one of our three mind-sets, and thus know just what to say to that person?

In today’s computation-heavy world one might think that the best strategy is to ‘mine’ the data with an armory of analytic tools, spending hours, days, weeks, months attempting to figure out the relation between who a person is, and what to say, in this small, specific, virtually micro-world. Once that computation is exhausted, there may be some modest covariation between a formula encompassing all that is known about a person and membership in the mind-set. A simpler way, developed by authors Gere and Moskowitz, called the PVI (personal viewpoint identifier), does the same task in minutes, at the micro-level, with modest computer resources, and with the same granularity as the original Mind Genomics study from which the mind-sets emerged.

In simple terms, the PVI works with the data from the Mind Genomics study, viz., the specific information from which the mind-sets emerged. The PVI system perturbs the data, using a Monte-Carlo system, and over 20,000+ runs identifies the combinations of elements which best differentiate among the segments. The PVI emerges with six elements, all taken from the original study, and with a two-point rating scale. The pattern of responses to the six questions assigns a new person to one of the three (or two) mind-sets.
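The assignment step can be illustrated with a minimal sketch. The Monte-Carlo element selection itself is not reproduced here; the sketch assumes the six elements have already been chosen and that each mind-set has a known expected answer pattern on the six two-point questions (the patterns below are invented), so a new person is assigned to the mind-set whose pattern their answers match most closely.

```python
# Hedged sketch of PVI-style assignment: six binary answers are matched
# against per-mind-set answer signatures. Signatures here are hypothetical.

MINDSET_PATTERNS = {
    "Mind-Set 1": [1, 1, 1, 0, 0, 1],
    "Mind-Set 2": [0, 1, 0, 1, 1, 0],
    "Mind-Set 3": [1, 0, 0, 0, 1, 1],
}

def assign_mindset(answers):
    """Return the mind-set whose signature has the fewest mismatches."""
    def mismatches(name):
        return sum(a != p for a, p in zip(answers, MINDSET_PATTERNS[name]))
    return min(MINDSET_PATTERNS, key=mismatches)

print(assign_mindset([1, 1, 1, 0, 0, 0]))  # closest to Mind-Set 1
```

The point of the design is speed: six two-point questions take roughly 15 seconds, yet they carry enough signal to place the respondent into one of the mind-sets established by the original experiment.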

Figure 3 shows an example of the introduction to the PVI, which asks for information from the respondent. It will be this information which allows the user of the PVI to create a database of ‘mind-sets’ of people for future research and marketing efforts. Furthermore, the introduction to the PVI has information about the time when the PVI is being completed (important for future work on best contact times), age, gender, etc. The specific questions can be included or suppressed, depending upon the type of information that will be necessary when the PVI is used (viz., research on the time-of-day dependence of mind-sets, if it actually exists). As of this writing (2023) the PVI can be accessed at: https://www.pvi360.com/TypingToolPage.aspx?projectid=213&userid=2018.

Figure 4 shows the actual PVI portion, comprising three questions about one’s current life-stage (what one is thinking about in terms of retirement planning), and then six questions designed to assign the new person to one of the three mind-sets. It is important to realize that instead of requiring weeks and heavy computation, the entire process, from the set-up of the PVI to deployment, takes approximately 20 minutes. Like the work to set up a Mind Genomics experiment, the system to create a PVI for that study is ‘templated’, making it appropriate for ‘industrial strength’ data acquisition. Several studies can be incorporated into one PVI, with studies randomized and questions randomized, each study or project requiring only six questions, developed from the elements. The process is automatic and can be deployed immediately with thousands of participants within the hour.


Figure 3: Introductory page to the PVI (personal viewpoint identifier).

Figure 5 shows the feedback emerging immediately from the PVI. The shaded cell shows the mind-set to which the respondent belongs. The PVI stores the respondent’s background information (Figure 3) and mind-set information (Figure 5) in a database. Furthermore, the PVI is set up to send the respondent immediately to a website, or to show the respondent a video relevant to the mind-set to which the respondent has been assigned by the PVI (see Figure 6). Thus, the Mind Genomics system, comprising knowledge acquisition by a small, affordable experiment, coupled with the PVI, expands the scope of Mind Genomics so that the knowledge of mind-set membership can be deployed among a far greater population, those who have been assigned to a mind-set by the PVI.


Figure 4: The actual PVI for the study, showing three up-front ‘questions’ about one’s general attitude, and then six questions and a 2-point response scale for each, used to assign the person to one of the three mind-sets.


Figure 5: Immediate feedback about mind-set membership.

Evolving into BIG MIND – The Natural Marriage of PVI-enhanced Mind Genomics with Big Data

Up to now we have been dealing with small groups of individuals whose specific mind-sets or viewpoints in a specific, limited topic area we can discover, and then act upon. But what are we to do when we want to deal with thousands, millions, and even billions of new people? Consider, for example, the points in Figure 7, top panel. Consider these points as individuals. Measurement of behaviors shows how these individuals connect with each other at a superficial level, at the phenotypical level. There are many visualization techniques which create the interconnections based upon one or another criterion. And from these visualizations we can ascribe something to the network. We can deduce something about the network and the nodes, although not much, perhaps. We are like psychologists studying the rat. If only the rat could talk, how much it would say about what it is doing and why. Alas, it is a rat, or perhaps a pigeon, the favorite test subjects of those who follow strict behaviorism, of the type suggested by B.F. Skinner and his Behaviorist colleagues and students at Harvard University. (Full disclosure: author Moskowitz was a graduate student in some of Skinner’s seminars and colloquia at Harvard, 1965-1968.)


Figure 6: Set up template for the PVI, showing the ability to show the respondent a video or send the respondent to a landing page, depending upon the mind-set to which the respondent has been assigned by the PVI.

What happens, however, when we know the mind of each person, or at least the membership in, say, four or ten or perhaps 100 or perhaps 1000 different topic areas relevant to the granular richness of DAILY EXPERIENCE? What deep, profound understanding would emerge if we were to know the network itself, the WHO and BEHAVIOR of people, coupled with the structure of their MIND, viz., the ‘MIND OF EACH POINT IN THE NETWORK’!

Consider Figure 7. The top panel shows the aggregate of people. We know WHO the people are. The bottom panel shows the network, WHAT the people do, how they link to each other. What if we now knew the WHY for each point, how each point thinks about a set of topics? We would create a web of interconnected points and discover some of the commonalities of the points, not based on who the points are or what the points did, but rather on how the points think about many relevant topics.

How do we move from the Mind Genomics of one topic, say our choice of financial advisor, to many topics in a common space, say the space of ‘personal finances’, and then to typing people around the world on a continuing basis, as life progresses and events unfold: thousands, not hundreds, and finally millions, tens of millions of people? In essence this ‘project’ creates a true ‘wiki of the mind and society’, empirically sound, extensive, actionable, and archival for decades. In essence, how do we go from a map of nodes to a map of connected minds in every-day life, across the span of countries and time? (Figure 7)


Figure 7: Example of nodes (i.e., people), perhaps connected by a network. The top panel shows the network of people as points. The bottom panel shows the potential of knowing the mind of each person, i.e., each point in the network.

To reiterate, our goal is to understand the specific mind-set memberships of each point in the network, where the point corresponds to a person. The big picture is thus millions, perhaps hundreds of millions of points, people, observed two ways, and even expanded a third way to billions of people who have not completed the PVI, but who may be ‘imputed’ to belong to a mind-set through look-alikes. This is the DVI, the Digital Viewpoint Identifier, explicated in step 4 below:

1. Granular Mental Information about Each Node

The minds, or at least the patterns of mind-set membership, of many people are determined through Mind Genomics and the PVI, for a set of different topic areas. There may be as few as one topic area, or several dozen, or even 100 or more topics. This information can be obtained through small-scale Mind Genomics studies, executed and analyzed within 1-2 hours (www.BimiLeap.com), and followed by an easy-to-deploy PVI (www.PVI360.com).

2. Correlate Behavior Observed Externally with the Underlying Mind-sets

The interactions of nodes with each other are measured objectively, either by who the people are or by how they behave, such as what they view on the Web, what they order, and with whom they interact in conversations. This information is readily available today from various sources, known collectively as Big Data.

3. Expand the PVI (Personal Viewpoint Identifier)

The goal here is to work with 1000 respondents, each of whom provides 5 minutes of her or his time to complete a set of PVIs, each on a topic. Let’s choose a number of PVIs, say 12. Each PVI of six questions takes about 15 seconds to complete. In three minutes, a person can do 12 PVIs, comprising 72 questions.

4. Augment the Data

Let’s purchase publicly available information about these 1000 known respondents. The goal now is to predict the viewpoints of the 1,000 people on the 12 topics from purchasable data about those people. Once that is done, one has developed a simple predictive model which uses readily obtainable, purchasable data to estimate the mind-set membership of a person in each of the 12 topic areas. This simple predictive model is the aforementioned DVI, the Digital Viewpoint Identifier. It has now become straightforward to create a ‘scoring system’ which moves systematically through the data already available, and ‘scores’ each respondent on 12, 120, or even 1200 different granular topics, to create a true Wiki of the Mind and Society.
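The predictive step in this section can be sketched as follows. The actual modeling technique behind the digital identifier is not described in the text; a simple nearest-centroid classifier stands in here, with invented features (age, income decile) and invented mind-set labels, to show the shape of the idea: fit on the known respondents, then ‘score’ anyone for whom the same purchasable features exist.

```python
# Hedged sketch: predict mind-set membership from purchasable features.
# The features, labels, and choice of classifier are illustrative assumptions.

def fit_centroids(features, labels):
    """Average the feature vectors of the respondents in each mind-set."""
    groups = {}
    for f, lab in zip(features, labels):
        groups.setdefault(lab, []).append(f)
    return {lab: [sum(col) / len(rows) for col in zip(*rows)]
            for lab, rows in groups.items()}

def score(person, centroids):
    """Assign a new person to the mind-set with the nearest centroid."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(person, centroids[lab]))
    return min(centroids, key=dist)

# Invented training data: [age, income_decile] for respondents whose
# mind-sets were measured directly by the PVI.
features = [[28, 3], [31, 4], [58, 8], [62, 9]]
labels = ["MS2", "MS2", "MS1", "MS1"]
centroids = fit_centroids(features, labels)
print(score([60, 7], centroids))  # → "MS1"
```

Repeating this fit once per topic area yields the ‘scoring system’ the text describes: one small model per topic, applied to the same purchasable record.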

5. Fast Time Frame, Low Cost

Let’s consider a simple scenario, the creation of this mass of data for the financial trajectory of a person, from early adulthood to late adulthood, through all the relevant financial aspects. Let’s assume 300 different identifiable activities involved in decision-making. The foregoing steps mean that within a period of six months to one year, and some concerted effort, it will become possible, and indeed quite straightforward, to move from, say, 300 topic studies to 300 micro-sciences and viewpoints, to the creation of 300 digital viewpoint identifiers, and to the application of those identifiers, i.e., scoring systems, to the purchasable data of 1-2 billion people. Within the Big Data the data scientist and entrepreneur will have an associated Big Mind, a vector of perhaps 300 numbers underneath each node, each person, each number corresponding to one of those 300 activities. The analytic possibilities emerging from knowing both the behavior and the mind-set of the behaving organism on 300 (or more) topics can only be surmised. One would not be far off to think that the possibilities are enormous for new understanding of behavior, a possibly new engineering of society.

Acknowledgments

Attila Gere thanks the support of the Premium Postdoctoral Researcher Program of the Hungarian Academy of Sciences.


Developing an Inner Psychophysics for Social Issues: Reflections, Futures, and Experiments

DOI: 10.31038/IMROJ.2023813

Abstract

This paper introduces Inner Psychophysics, a new approach to measuring the values of ideas, applying the approach to the study of responses to 28 different types of social problems. The objective of Inner Psychophysics is to provide a number, a metric for ideas, with the number showing the magnitude of the idea on a specific dimension of meaning. The approach to create this Inner Psychophysics comes from the research system known as Mind Genomics. Mind Genomics presents the respondent with the social problem, and a unique set of 24 vignettes presenting solutions to the problem. The pattern of responses to the vignettes is deconstructed into the contribution of each ‘answer’, through OLS (ordinary least squares) regression. The approach opens up the potential of a ‘metric for the social consensus,’ measuring the value of ideas relevant to society as a whole, and to the person in particular.

Introduction

Psychophysics is the oldest branch of experimental psychology, dealing with the relation between the physical world (thus ‘physics’) and the subjective world of our own consciousness (thus ‘psycho’). The question might well be asked: what does this presumably arcane psychological science have to do with up-to-date, indeed new, approaches to science? The question is relevant, as the paper and data will show. The evolution of ‘inner psychophysics’ provides today’s researcher with a new set of tools to think about the problems of the world. The founder of today’s ‘modern psychophysics,’ the late S.S. Stevens (1906-1973), encapsulated the opportunity in his posthumous book, ‘Psychophysics: An Introduction to its Perceptual, Neural and Social Prospects.’ Stevens also introduced the phrase ‘a metric for the social consensus’ in his discussions about the prospects of psychophysics in the world of social issues. This paper presents the application of psychophysical thinking and disciplined rigor to the study of how people ‘think’ about large-scale societal problems [1,2].

The original efforts in psychophysics began about 200 years ago, with the work of physiologists and with the effort to understand how people distinguish different levels of the same stimulus, for example, different levels of sugar in water, or today, different levels of sweetener in cola. Just how small a difference can we perceive? Or, to push things even more, what is the lowest physical level that we can detect? [3] These are the difference and the detection thresholds, respectively, both of interest to scientists, but of relatively little interest to the social scientist and researcher.

The important thing to come out of psychophysics is the notion of ‘man as a measuring instrument,’ the notion that there is a metric of perception. Is there a way to assign numbers to objects, or better, to experiences of objects? In simpler terms, think of a cup of coffee. If we can measure the subjective perception of aspects of that coffee, such as its ‘coffeeness’, then what happens when we add milk? Or add sugar? Or change the coffee roast, and so forth? At a mundane level, can we measure how much perceived ‘coffeeness’ changes? With that in mind, can we do this type of measurement for social issues?

Stevens’ ‘Outer’ and ‘Inner’ Psychophysics

By way of full disclosure, author HRM was one of the last PhD students of S.S. Stevens, receiving his PhD in the early days of 1969. Some 16 months before, Stevens had suggested that HRM ‘try his hand’ at something such as taste or political scaling, rather than pursuing research dealing with topics requiring sophistication in electronics, such as hearing and seeing. That suggestion would become a guide through a 54-year future, now a 54-year history. The notion of measuring taste forced thinking about the mind, the way people say things taste versus how much they like what they taste. This first suggestion, studying taste, focused attention on the inner world of the mind, one focused on what things taste like, why people differ in what they like, whether there are basic taste preference groups, and so forth. The well-behaved and delightfully simple regularities, ‘change this, you get that,’ working so well in loudness, seem to break down in taste.

If taste was the jumping-off point from this outer psychophysics to the measurement of feelings such as liking, then the next efforts would be even more divergent. How does one deal with social problems which have many aspects to them? We are no longer dealing with simple ingredients, which when mixed create a food, and whose mixtures can be evaluated by a ‘taster’. We are dealing now with the desire to measure the perception of a compound, complex situation, the resultant of many interacting factors. Can the spirit of psychophysics add something, or do we stop at sugar in coffee, or salt in pickles?

Some years later, through ongoing studies of perception, it became obvious that one could deal with the inner world, using man as a measuring instrument. The slavish adherence to systematic change of the stimulus in measured degrees had to be discarded. It would be nice to say that a murder is six times more serious than a bank robbery with two people injured, but that type of slavish adherence would not create this new inner psychophysics. It would simply be adapting and changing the hallowed methods of psychophysics (systematically change, and then measure), moving from tones and lights to sugar and coffee, and now to statements about crimes. There would be some major efforts, such as the utility of money [4], efforts to maintain the numerical foundations of psychophysics because money has an intrinsic numerical feature. Another would be the relation between the perceived seriousness of crime and the measurable magnitude of punishment. But there had to be a profound re-working of the problem statement.

Enter Mathematics: The Contribution of Conjoint Measurement, and Axiomatic Measurement Theory

If psychophysics provided a strong link to the empirical world, indeed a link which presupposed real stimuli, then mathematical psychology provided a link to the world of philosophy and mathematics. The 1950’s saw the rise of interest in mathematics and psychology [5]. The goal of mathematical psychology in the 1950’s and 1960’s was to put psychology on firm theoretical footing. Eugene Galanter became an active participant in this newly emerging field, working at first with Stevens in psychophysics at Harvard, and later with famed mathematical psychologist R. Duncan Luce. Luce and his colleagues were interested in ‘fundamental measurement’ of psychological quantities, seeking to measure psychology with the same mathematical rigor with which physicists measured the real world. That effort would bring to fruition the Handbook of Mathematical Psychology [6], and the work of Luce and Tukey [7], as well as the efforts of psychologist Norman Anderson [8], who coined the term ‘functional measurement.’

The simple idea which is relevant to us is that one could mix test stimuli, ideas, not only food ingredients, instruct the respondent to evaluate these mixtures, and estimate the contribution of each component to the response assigned to the mixture. Luce and Tukey suggested deeply mathematical, axiomatic approaches to do that. Anderson suggested simpler approaches, using regression. Finally, the pioneering academics at Wharton Business School, Paul Green and Yoram (Jerry) Wind showed how the regression approach could be used to deal with simple business problems [9,10].

The history of psychophysics and the history of mathematical psychology met in the systematics delivered by Mind Genomics. The mathematical foundations had been laid down by axiomatic measurement theory. The objective, systematized measurement of experience, had been laid down by psychophysics at first, and afterwards by applied psychology and consumer research. What remained was to create a ‘system’ which could quantify experience in a systematic way, building databases, virtually ‘wikis of the mind’, rather than simply providing one or two papers on a topic which solved a problem with an interesting mathematics. It was time for the creation of a corpus of psychophysically motivated knowledge, an inner psychophysics of thought, rather than the traditional psychophysics of perception.

Reflections on the Journey from the Outer Psychophysics to an Inner Psychophysics

New thinking is difficult, not so much because of the problems themselves as because of the necessity to break out of the paradigms which one ‘knows’ to work, even though the paradigm may no longer serve its purpose in an optimal fashion. Inertia seems to be a universal law, whether the issue be science and knowledge, or business. This is not the place to discuss the business aspect, but it is the place to shine a light on the subtle tendency to stay within the paradigms that one learned as a student, the tried and true, those paradigms which get one published.

The beginning of the journey to inner psychophysics occurred with a resounding NO from S.S. Stevens, in 1967, when author HRM asked permission to combine studies of how sweet an item tasted and how much the item was liked. This effort was a direct step away from simple psychophysics, with its implicit notion of a ‘right answer’. This notion of a ‘right answer’ summarizes the worldview of Stevens and associates: psychophysics was searching for invariance, for ‘rules’ of perception. Departures from the invariances would be seen as the irritating contribution of random noise, such as the ‘regression effect’ [11], wherein research tends to underestimate the pattern of the relation between physical stimulus and subjective, judged response. “Hedonics” was a complicating, ‘secondary factor’, which could only muddle the orderliness of nature, and not teach anything, at least to those imbued with the exciting Harvard psychophysics of the 1950’s and 1960’s.

The notion of cognition, hedonics, experience as factors driving the perception of a stimulus, could not be handled easily in this outer psychophysics except parametrically. That is, one could measure the relation between the physical stimulus and the subjective response, create an equation with parameters, and see how these parameters changed when the respondent was given different instructions, and so forth. An example would be judging the apparent size of a circle of known diameter versus judging the actual size. It would be this limitation, this refusal to accept ideas as subject to psychophysics, that author HRM would end up attempting to overcome during the course of the 54-year journey.

The course of the 54-year journey would be marked by a variety of signal events, events leading to what is called in today’s business ‘pivoting.’ The early work on the journey dealt with judgments of likes and dislikes, as well as sensory intensity [12]. The spirit guiding the work was the same: search for lawful relations, change one parameter, and measure the change in a parameter of that lawful relation. The limited, disciplined approach of the outer psychophysics was too constraining. It was clear at the very beginning that the rigorous scientific approaches to measuring perceptual magnitudes using ‘ratio-scaling’ would be a ‘non-starter.’ The effort of the 1950’s and 1960’s to create a valid scale of magnitude was relevant, but not productive in a world where the application of the method would drown in methodological differences and minor issues. In other words, squabbles about whether the ratings possessed ‘ratio scale’ properties might be interesting, but not particularly productive in a world begging for measurement, for a yet-to-be sketched out inner psychophysics.

The movement away from simple studies of perceptual magnitudes was further occasioned by the effort to apply the psychophysical thinking to business issues, and the difficulties ensuing in the application of ratio scaling methods, such as magnitude estimation. The focus was no longer on measurement, but on creating sufficient understanding about the stimulus, the food or cosmetic product, so that the effort would generate a winner in the marketplace.

The path to understanding comprises experiments with mixtures, first mixtures of ingredients, and then mixtures of ideas, steps needed to define the product, to optimize the product itself, and then to sell the product. Over time, the focus turned mainly to ideas, and the realization that one could mix ideas (statements, messages), present these combinations to respondents, get the responses to the combinations, and then, using statistics such as OLS (ordinary least-squares) regression, estimate the contribution of each idea in the mixture to the total response.
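The deconstruction step can be made concrete with a small sketch: each vignette is coded as a 0/1 row (element absent or present), the rating is the dependent variable, and OLS regression recovers the additive constant plus one coefficient per element. The design matrix and ratings below are invented; a real study uses a formal experimental design with many more elements and vignettes.

```python
# Hedged sketch: deconstructing vignette ratings into element contributions
# via OLS, solved here by the normal equations with Gaussian elimination.

def ols(X, y):
    """Solve (X'X)b = X'y for the coefficient vector b."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

# Columns: intercept (the additive constant), element A, element B.
# Each row codes one vignette; y holds invented transformed ratings.
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0]]
y = [40, 35, 45, 30]
constant, coef_a, coef_b = ols(X, y)
# With this toy data: constant = 30, element A adds 10, element B adds 5.
```

The coefficients are read exactly as in Table 3 of the first study: the constant is the baseline interest, and each element’s coefficient is the percentage-point lift it contributes when present in a vignette.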

Inner Psychophysics Propelled by the Vision of Industrial-scale Knowledge Creation

A great deal of what the author calls the “Inner Psychophysics” came about because of the desire to create knowledge at a far more rapid pace than was being done, and especially the dream that the inevitable tedium of a psychophysical experiment could simply be eliminated. During the 20th century, especially until the 1980’s, researchers were content to work with one subject at a time, the subject being called the ‘O’, for observer, corresponding to the German term Beobachter. The fact that the respondent is an observer suggests a slow, well-disciplined process, during which the experimenter presents one stimulus to one observer and measures the response, whether the response is to say when the stimulus is detected as ‘being there,’ when the stimulus quality is recognized, or what number is assigned to report the stimulus’s perceived intensity.

The psychophysics of the last century, especially the middle of the 20th century, focused on precision of stimulus, and precision of measurement, with the goal of discovering the relations between variables, viz., physical stimuli versus perception of those stimuli by the person. It is important to keep in mind the dramatic pivot or change in thinking that would ensue when reality and opportunity presented themselves as disturbances. Whereas psychophysics of the Harvard format searched for lawful relations between variables (physical stimulus levels; ratings of perceived magnitude), the application of the same thinking to food and to ideas was to search for usable relations. The experiments need not reveal an ‘ultimate truth’, but rather needed to be ‘good enough,’ to identify a better pickle, salad dressing, orange juice or even features of a cash-back credit card.

The industrial-scale creation would be facilitated by two things. The first was a change in direction. Rather than focusing one’s effort on the laws relating physical stimulus and subjective response (outer psychophysics), the new, and far less explored, area would focus on measuring ideas, not actual physical things (inner psychophysics).

The second would focus on method, on working not with single ideas but deliberately with mixtures of ideas, presented to, and evaluated by, the respondent in a controlled situation. These mixtures of ideas, called vignettes, would be created by experimental design, a systematic prescription of the composition of each mixture, viz., which phrases or elements would appear in each vignette. The experimental design ensured that the researcher could link a measure of the respondent’s thinking to the specific elements. The rationale for vignettes was the realization that single ideas were not the typical ‘product’ of experience. We think in mixtures because our world comprises compound stimuli, mixtures of physical stimuli, and our thinking in turn comprises different impressions, different thoughts. Forcing the individual to focus on one thought, one impression, one message or idea is more akin to meditation, whose goal is to shunt the mind away from the blooming, buzzing confusion of the typically disordered mind, filled with ideas flitting about.

The world view was thus psychophysics, the search for relations and for laws. The world view was also controlled complexity, with the compound stimulus taking up the attention of the respondent and being judged. The structure of the mixtures appeared to be a ‘blooming, buzzing confusion,’ in the words of Harvard psychologist William James. To create the Inner Psychophysics meant to prevent the respondent from taking active psychological control of the situation. Rather, the design forced the respondent to pay attention to combinations of meaningful messages (vignettes), albeit messages somewhat garbled in structure, which avoided revealing the underlying structure and thus prevented the respondent from ‘gaming’ the system.

As will be shown in the remainder of this paper, the output of this mechanized approach to research produced an understanding of how we think and make decisions, in the spirit of psychophysics, at a pace and scope that can only be described as industrial scale.

The Mind Genomics ‘Process’ for Creating an Experiment

The study presented here comes from a developing effort to understand the mind of ordinary people in terms of what can solve well-known social problems. At a quite simple level, one can either ask respondents to tell the researcher what might solve the problems, or present solutions to the respondent, and ask the respondent to scale each solution in terms of expected ability to solve the problem. The solutions are concrete, simple, relevant. The pattern of responses gives a sense of what the respondent may be thinking with respect to solving a problem.

The study highlighted here went several stages beyond that simple, straightforward approach. The stimulus for the underlying thinking came from traditional personality theory and from cognitive psychology. In personality theory, the psychologist Rorschach, among many others, believed that people are not often able to paint a picture of their own mind at the deepest levels. Rorschach developed a set of ambiguous pictures and required the respondent to describe them, to tell a story. The pattern of what the respondent saw could tell the researcher how the respondent organized her or his perceptions of the world. Could such an approach be generalized, so that the pictures would be replaced by metaphoric words, rich with meaning? And so was born the current study. The study combines a desire to understand the mind of the individual, the use of Mind Genomics to do the experiment, and the acceleration of knowledge development through a novel set of approaches to the underlying experimental design (see also Goertz & Mahoney [13]).

Let us first look at the process itself.

  1. The structure of the experimental design begins with a single topic (e.g., a social problem), continues with four questions dealing with the problem, and in turn four specific answers to each question. Thus, there are three stages, easy to create and amenable to being implemented through a template. Good practice suggests that the 16 answers (henceforth elements) be simple declarative statements, 14 words or fewer, with no conjunctions. These declarative statements should be easy to scan quickly, with as little attention, as little ‘friction,’ as possible.
  2. A basic experiment specified 24 unique combinations or vignettes, each vignette comprising 2, 3 or 4 elements. No effort was made to connect these elements. Rather, each element was placed atop the other.
  3. The experimental design ensured that each element appeared exactly five times across the 24 vignettes, and that the pattern of appearances made each element statistically independent of the other 15 elements.
  4. The experimental design was set up to allow the 24 vignettes to be subject to OLS (ordinary least-squares) regression, at the level of the individual, or the level of the group, respectively.
  5. A key problem in experimental design is the underlying structure of what is tested, which is a single set of combinations. The quality of knowledge suffers because only a set of combinations is tested, one small region of the design space. There is much more to the design space. The researcher’s resources are wasted suppressing the noise in this region, either by eliminating noise (impossible in an Inner Psychophysics), or by averaging out the noise in this region by replication (a waste of resources).
  6. The solution of Mind Genomics is to permute the experimental design [14]. The permutation strategy maintains the structure of the experimental design but changes the specific combinations. The task of permuting requires that the four questions be treated separately, and that the elements within a question be juggled around but remain with the question. In this way, no element is left out; rather, its identification number changes. For example, A1 might become A3, A2 might become A4, A3 might become A1, and A4 might become A2. At the initial creation of the permuted designs, each new design was tested to ensure that it ran with the OLS (ordinary least-squares) regression package.
  7. Each respondent would test a different set of 24 combinations. What was critical was to create a scientific experiment in which the experimenter need not know anything about the topic in order to explore the full range of the topic as represented by the 16 elements. The data from the full range of combinations tested would quickly reveal which elements performed well, and which elements performed poorly.
  8. The benefit to research was that research could become once again exploratory as well as confirmatory, due to the wide variation in the combinations. It was no longer a situation of knowing the answer or guessing at the answer ahead of time. The answer would emerge quickly.
  9. Continuing and finishing with an overview of the permuted design of Mind Genomics, it quickly became obvious that studies need be neither large nor expensive. The ability to create equations or models with as few as 5-10 respondents, because of the ability to cover the design space, meant that one could get reasonable indications with so-called ‘demo studies’, virtually automatic studies set up and implemented at low cost. The setup takes about 20 minutes once the ideas are concretized in the mind of the researcher. The time from launch (using a credit card to pay) to delivery of the finalized results in tabulated form, ready for presentation, is approximately 15-30 minutes.
  10. It was important to create rapid summarizations of the results. Along with the vision of ‘industrial strength research’ was the vision of ‘industrial scale insights.’ These would be provided by simple templated outputs, along with AI interpretations of the strong performing elements for each key group in the population. The latter would develop into the AI ‘summarizer’.
  11. The final step, as of this writing, is to make the above-mentioned system work simultaneously with a series of different studies, e.g., 25-30 studies, in an effort to create powerful databases across topics, people, cultures, and over time. In the spirit of accelerated knowledge development, each study is a carbon copy of every other study, except for one item: the specific topic being addressed in the study. That is, the orientation, rating scale, and elements are identical. What differs is the problem being addressed.
  12. When everything else is held constant, only the topic being varied, we have then the makings of the database of the mind, done at industrial scale.
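The permutation logic of step 6 above can be sketched as follows. This is a hypothetical illustration, not the actual Mind Genomics code: element labels are shuffled within their own question, never across questions, so every permuted design keeps the same statistical structure while each respondent sees different combinations.

```python
import random

def permute_design(vignettes, n_questions=4, n_per_question=4, seed=0):
    """Relabel the elements of each question; elements never cross questions."""
    rng = random.Random(seed)
    # One independent relabelling (permutation) per question.
    mappings = [rng.sample(range(n_per_question), n_per_question)
                for _ in range(n_questions)]
    # Apply the relabelling: element e of question q becomes mappings[q][e].
    return [[(q, mappings[q][e]) for (q, e) in vignette]
            for vignette in vignettes]

# Each element is a (question, answer) pair; (0, 0) is A1, (1, 2) is B3, etc.
base_design = [[(0, 0), (1, 2)], [(0, 1), (2, 3), (3, 0)]]
permuted = permute_design(base_design, seed=42)

# The structure survives: every element stays within its original question.
assert all(q_old == q_new
           for old_vig, new_vig in zip(base_design, permuted)
           for (q_old, _), (q_new, _) in zip(old_vig, new_vig))
```

Re-running with a different seed per respondent gives each respondent a structurally equivalent design that covers a different region of the design space.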

Applying the Approach to the ‘Solution’ of Social Problems

We begin with a set of 28 social problems, and a set of 16 ‘messages’ as tentative solutions to a problem. The problems are simple to describe and are not further elaborated. In turn the 16 elements or solutions are general approaches, such as the involvement of business, rather than more focused solutions comprising specific steps. These 28 problems are shown in Table 1 and the 16 solutions are shown in Table 2.

Table 1: The 28 problems


The 28 problems enumerated in Table 1 represent a small number of the many possible problems one can encounter, and Table 2 shows a few of the many solutions that might be applied. The number of possible problems is unlimited. For this introductory study, using the Mind Genomics template, we are limited to four types of solutions for a problem, and four specific solutions of each type.

Table 2: The 16 solutions (four silos, each silo with four solutions)


The actual process follows these steps, which give a sense of the total effort needed for the project.

  1. Develop the base study (orientation page, rating scale, questions, answers); Figures 1a and 1b show some relevant screen shots. Each problem is represented by a single phrase describing the problem. That phrase is called ‘the SLUG’. It is the SLUG which changes in the various steps, one SLUG for each study (Figure 2).
  2. Create a copy of the base study, changing the nature of the problem in the introduction and in the rating scale. This activity requires about 3-5 minutes for each study due to its repetitive, simple nature. Then launch each study in rapid succession with the same panel requirements (50 respondents), and let each study amass the data from the 50 respondents. The field time is about 30 minutes when the studies are launched during the daytime, and when the respondents have been invited by an on-line panel provider specializing in this type of research. The expected time for Step 2 for 28 studies is about 3-4 hours, to acquire all of the data.
  3. Create the large-scale datafile, comprising one set of 24 rows for each respondent. This ends up being simply a ‘cut and paste’ effort, with slight editing. The 24 rows of data per respondent end up generating 1,200 rows of data for each of the 28 studies. The final database comprises the information about the study, about the respondent, and then the set of 16 columns showing the presence/absence of the 16 elements (answers to the questions), as well as a 17th column showing the rating assigned to the particular vignette, and an 18th column showing the ‘response time’ for the vignette, defined as the time between the appearance of the vignette on the respondent’s screen and the assignment of the rating.
  4. Pre-process the ratings by converting the 5-point rating scale to a new, binary scale. Ratings of 1-3 are converted to 0 to denote that the respondent does not feel that the combination of offered actions presented in the vignette will ‘solve’ the problem. In turn, ratings of 4-5 are converted to 100 to denote that the respondent does feel that the combination of offered actions will solve the problem. The binary transformation is generally more intuitive to users of the data, who want a simple ‘no or yes.’ To these users the intermediate scale values are hard to interpret, even though those scale values are tractable for statistical analysis.
  5. Since the 24 vignettes evaluated by a respondent are created according to an underlying experimental design, we know that the 16 independent variables (viz., the 16 solutions) are statistically independent of each other. Thus, the program creates an equation or model relating the presence/absence of the 16 elements to the newly created binary variable ‘will work.’ We express the equation as: Work (0/100) = k1(Solution A1) + k2(Solution A2) + … + k16(Solution D4). To make the results instantly comparable from study to study, the equation is estimated without an additive constant, forcing all the information about the pattern to emerge from the coefficients.
  6. Each respondent thus generates 16 coefficients, the ‘model’ for that respondent. The coefficient shows the number of points on a 100-point scale for ‘working’ contributed by each of the 16 solutions. Array all the coefficients in a data matrix, each row corresponding to a respondent, and each column corresponding to one of the 16 solutions or elements.
  7. Cluster all respondents in the 28 studies into three groups independent of the problem topic, based simply on the pattern of the 16 coefficients for each respondent. The clustering method is called k-means [15]. The researcher has a choice of the measure of distance or dissimilarity. For these data we cluster using the so-called Pearson model, where the distance between two respondents is the quantity (1 - R), with R = the Pearson correlation coefficient. The Pearson correlation coefficient for two respondents is computed across the 16 pairs of coefficients. Note again that the clustering program ‘does not know’ that there are 28 studies. The structure of the data is the same from one study to another, from one respondent to another.
  8. Each respondent is assigned to one of the three clusters (now called mind-sets). Afterwards, the researcher creates summary models or equations, first for each study independent of mind-set, second for each mind-set independent of study, and finally for each combination of study and the three mind-sets. These summary models generate four tables of coefficients, first for the total panel, and then for mind-set 1, mind-set 2, and mind-set 3, respectively. Each vignette clearly belongs to one of the respondents, and therefore belongs both to one specific study of the 28 and to one of the three emergent mind-sets. For these final summary models, the (arbitrary) decision was made to discard all vignettes that were assigned the rating ‘3’ (cannot decide). This decision sharpens the data by considering only the vignettes where a respondent felt that the problem would or would not be solved.
  9. Build three large models or equations relating the presence/absence of the 16 elements (specific solutions) to the binary rating of ‘can solve the problem’, incorporating all respondents in a mind-set. Then build the three sets of models, for each problem, by respondents in the appropriate mind-set. This creates 28 (problems) x 3 (mind-sets) = 84 separate models. We look at the patterns across the tables to get a sense of the different mind-sets, how they differ from the Total Panel, and what seems to be the defining aspects for each mind-set.
  10. The effort for one database, for one country, is easily multiplied, either to the same database for different countries, or to different topic databases for the same country. From the point of view of cost in today’s dollars (Spring, 2023), each database of 28 studies with 50 respondents per study can be created for about $15,000, assuming that the respondents are easy to locate. That effort comes to about $500 per study.
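Steps 4 through 7 above can be sketched in miniature. All data below are synthetic, and the tiny k-means using the Pearson (1 - R) dissimilarity is a simplified stand-in for the clustering package the authors used:

```python
# Sketch of: binary recoding, per-respondent OLS without a constant,
# and k-means clustering of coefficient rows with the (1 - R) distance.
import numpy as np

rng = np.random.default_rng(0)

def binarize(ratings):
    """Ratings 1-3 -> 0 ('will not solve'); ratings 4-5 -> 100 ('will solve')."""
    return np.where(ratings >= 4, 100, 0)

def respondent_coefficients(design, ratings):
    """OLS without an additive constant: one model per respondent."""
    coefs, *_ = np.linalg.lstsq(design, binarize(ratings), rcond=None)
    return coefs

def pearson_distance(a, b):
    return 1.0 - np.corrcoef(a, b)[0, 1]

def kmeans_pearson(rows, k=3, iters=20):
    """Tiny k-means using (1 - Pearson R) as the dissimilarity."""
    centers = rows[rng.choice(len(rows), size=k, replace=False)]
    for _ in range(iters):
        labels = np.array([np.argmin([pearson_distance(r, c) for c in centers])
                           for r in rows])
        centers = np.array([rows[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Hypothetical respondent: 24 vignettes x 16 elements, with 5-point ratings.
design = (rng.random((24, 16)) < 0.25).astype(float)
ratings = rng.integers(1, 6, size=24)
coefs = respondent_coefficients(design, ratings)  # 16 coefficients

# Hypothetical coefficient matrix: 30 respondents x 16 coefficients.
coef_matrix = rng.normal(size=(30, 16))
labels = kmeans_pearson(coef_matrix, k=3)
```

Because (1 - R) ignores each respondent's overall level and scale, the clusters group people by the *pattern* of which solutions they weight heavily, which is exactly the intent of the mind-set analysis.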


Figure 1: Study name (left panel), four questions (middle panel), and four answers to one question (right panel)


Figure 2: Self profiling question (left panel), and rating scale (right panel)

What Patterns Emerge from Problem-Solution Linkages – Total Panel

Let us now look at the data from the total panel. Table 3 shows us 16 columns, one per solution, and 28 rows, one per problem. Models were estimated after excluding all vignettes assigned the rating 3 (cannot decide). The table is sorted in descending order by median coefficient, from top to bottom for problems and from left to right for solutions:

  1. The rows (problems) are sorted in descending order by the median coefficient for the problem across the 16 solutions. This means that the problems at the top of the table are those with the highest median coefficients, viz., the most likely to be solved by the solutions proposed in the study. The problems at the bottom of the table are those least likely to be solved by the solutions proposed in the study.
  2. The columns (solutions) are sorted in descending order by the median coefficient for the solution across all 28 problems. This means that the solutions to the left, those with the highest median coefficients, are the most likely to solve problems. The solutions to the right, those with the lowest median coefficients, are the least likely to solve problems.
  3. The medians are calculated for all coefficients, those shown and those not shown. The table shows only the strong performing combinations, those with coefficients of +20 or higher.
  4. Table 3 is extraordinarily rich. There are several strong-performing elements. The interesting observation, however, emerges from the pattern of darkened cells, those with strong coefficients. These tend to be solutions from group B (social action) and from group C (business). Initiatives from education and government do work, but without any additional information there seems to be little belief in the efficacy of the public domain to produce a solution.
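The sorting rule in points 1 and 2 can be sketched with hypothetical numbers, here a small 5 x 4 coefficient table rather than the full 28 x 16 one:

```python
# Sketch of the table layout: sort rows and columns by median coefficient,
# descending, so the strongest patterns gather toward the top left.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical coefficients: 5 problems (rows) x 4 solutions (columns).
table = rng.integers(-10, 35, size=(5, 4)).astype(float)

row_order = np.argsort(-np.median(table, axis=1))   # problems by row median
col_order = np.argsort(-np.median(table, axis=0))   # solutions by column median
sorted_table = table[row_order][:, col_order]

# Show only the strong performers (coefficient of 20 or higher), as in Table 3.
strong = np.where(sorted_table >= 20, sorted_table, np.nan)
```

The medians are computed over all coefficients, while the final display masks everything below the +20 threshold.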

Table 3: Summary table of coefficients for model relating presence/absence of 16 solutions (column) to the expected ability to solve the specific problem.


The Lure of Mind-sets

We finish this investigation by looking at mind-sets, one of the key features of Mind Genomics. The notion of mind-sets is that for each topic area one can discover different patterns of ‘weights’ applied by the respondent to the information. The analysis to create these mind-sets will use the 16 coefficients for each respondent, independent of the problem presented to the respondent.

The notion of combining all respondents, independent of the problem, may sound strange at first, but there is a spark of reason. We are simply looking at the way the person deals with a problem. We are more focused on general patterns, even if these end up being ‘weak signals.’ The fact that there are 28 different problems dealt with in the project is not relevant for the creation of the mind-set, but will become important afterwards, for the deeper understanding of each mind-set.

The rationale for combining problems and solutions (viz., coefficients) into one database comes from the well-accepted fact that consumers differ when they think about purchasing a product. Studies of the type presented here, but on commercial products, again and again show that when it comes to purchasing a food product, one pattern of weights suggests that the respondent pays attention to product features, whereas another pattern of weights applied to the same elements suggests that the respondent pays attention to the experience of consuming the product, or the health benefits of the product, rather than paying attention to the features [16]. Rarely do we go any deeper in our initial thinking about the individual differences.

    1. The coefficients for the three emergent mind-sets appear in Tables 4-6. Again, the tables are sorted by the median, and all coefficients of 20 or higher are shaded to allow the patterns to emerge. Our task here is to point out some of these general patterns.
    2. The range of coefficients is much larger for the mind-sets than for the total panel. Table 3 shows us many modest-size coefficients of 10-20 and a number of larger coefficients, 20 or higher. Tables 4-6 show us a much greater range of coefficients. We attribute the increased range to the hypothesis that people may differ deeply from each other in their mental criteria. Inner Psychophysics reveals that difference, doing so dramatically, and in a way that could not have been done before.
    3. The pattern of coefficients seems somewhat more defined, as if the respondents in a mind-set more frequently rely on the same set of solutions for the problems, although not always.

a. The mind-sets do not believe that the key solutions will work everywhere, but just in some areas. The mind-sets do not line up in an orderly fashion. That is, we do not have a simplistic set of psychophysical functions for the inner psychophysics. We do have patterns, and metrics for the social consensus.

b. Mind-Set 1 (Table 4) appears to feel that business and education solutions will work most effectively. Mind-Set 1 does not believe strongly in the public sector as able to provide workable solutions to many problems.

c. Mind-Set 2 (Table 5) appears to feel that education and the law will work most effectively.

d. Mind-Set 3 (Table 6) appears to feel that law and business will work most effectively.

Table 4: Summary table of coefficients for model relating presence/absence of 16 solutions (column) to the expected ability to solve the specific problem (row). The data come from Mind-Set 1, which appears to focus on business as the preferred solution to problems.


Discussion and Conclusion

The focus of this paper began with the desire to extend the notion of psychophysics to the measurement of internal ideas. As noted in the first part of this paper, the traditional focus of psychophysics has been the measurement of sensory magnitudes, and later lawful relations between the sensory magnitude as perceived and the physical magnitude as measured by standard instruments.

The early work in psychophysics focused on measurement, the assignment of numbers to perceptions. The search for lawful relations between these measured intensities of sensation and their physical correlates came to the fore even during the early days of psychophysics, in the 1860s, with its founder, Gustav Theodor Fechner [17]. It was Fechner who would trumpet the logarithmic ‘law of perception,’ such ‘laws’ being far more attractive than the very tedious effort of measuring the just-noticeable differences, the underlying units of so-called sensory magnitude. Almost a century later, Harvard psychophysicist S.S. Stevens [1] would spend decades arguing that this law of perception followed a power function of defined exponent, rather than a logarithmic function.
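In standard notation, with S the perceived magnitude, I the physical intensity, I_0 the threshold intensity, and k and n constants fitted per sense modality, the two competing ‘laws’ can be written side by side:

```latex
S = k \,\log\!\left(\frac{I}{I_0}\right) \quad \text{(Fechner's logarithmic law)}
```

```latex
S = k \, I^{\,n} \quad \text{(Stevens' power law)}
```

The exponent n in Stevens' formulation varies by modality, which is why the power law could accommodate both compressive and expansive sensory continua where the logarithm could not.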

This paper moves psychophysics inward, away from the search for lawful ‘equations’ relating one set of variables to another, viz., magnitudes of physical stimuli versus magnitudes of the co-varying subjective responses. The focus here is to measure ideas. The objective is to put numbers onto ideas, not by having the respondent introspect and rate the ideas, but rather by measuring the magnitude of the linkage in the mind between ideas. The method is experimentation, the results are numbers (coefficients of the equations), and the scope is to create this new iteration of psychophysics in a way consonant with the way we think about issues. The outcome comprises a set of relatively theory-independent methods which produce the raw material of this psychophysics, for the consideration of other researchers and for practical applications in the many areas of human endeavor.

References

      1. Stevens SS (1975) Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects. New York, John Wiley.
      2. Stevens SS (1966) A metric for the social consensus. Science 151: 530-541.
      3. Boring EG (1942) Sensation & Perception in the History of Experimental Psychology. Appleton-Century.
      4. Galanter E (1962) The direct measurement of utility and subjective probability. The American Journal of Psychology 75: 208-220.
      5. Miller GA (1964) Mathematics and Psychology, John Wiley, New York.
      6. Luce RD, Bush RR, Galanter E (Eds.) (1963) Handbook of Mathematical Psychology: Volume I. John Wiley.
      7. Luce RD, Tukey JW (1964) Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology 1: 1-27.
      8. Anderson NH (1976) How functional measurement can yield validated interval scales of mental quantities. Journal of Applied Psychology 61: 677-692.
      9. Green PE, Wind Y (1975) New way to measure consumers’ judgments. Harvard Business Review 53: 107-117.
      10. Wind Y (1978) Issues and advances in segmentation research. Journal of Marketing Research 15: 317-337.
      11. Stevens SS, Greenbaum HB (1966) Regression effect in psychophysical judgment. Perception & Psychophysics 1: 439-446.
      12. Moskowitz HR, Kluter RA, Westerling J, Jacobs HL (1974) Sugar sweetness and pleasantness: Evidence for different psychological laws. Science 184: 583-585.
      13. Goertz G, Mahoney J (2013) Methodological Rorschach tests: Contrasting interpretations in qualitative and quantitative research. Comparative Political Studies 46: 236-251.
      14. Gofman A, Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
      15. Dubes R, Jain AK (1980) Clustering methodologies in exploratory data analysis. Advances in Computers 19: 113-228.
      16. Green PE, Srinivasan V (1978) Conjoint analysis in consumer research: issues and outlook. Journal of Consumer Research 5: 103-123.
      17. Fechner GT (1860) Elements of psychophysics (translated by H.E. Adler, 1966) Leipzig, Germany, Breitkopf and Hartel (Holt, Rinehart, and Winston).

Menace of Substance Abuse in Today’s Society: Psychosocial Support to Addicts and Those with Substance Use Disorder

DOI: 10.31038/IJNM.2023421

Abstract

Substance abuse among youths has been a problem to society in general. The continuous use of psychoactive substances among adolescents and youths has become a public concern worldwide because it potentially causes deliberate or unintended harm or injury. The consequences of drug abuse fall not only on the individual user but also on his or her offspring, family and society. This seminar topic discussed some drugs that are commonly abused by adolescents and youths, such as cannabis, cocaine, amphetamine, heroin, codeine, cough syrup and tramadol. It also discussed the sources from which abusers obtain drugs, as well as possible effects in physical, psychological and social terms. The risk factors and reasons for substance abuse were discussed, along with how substance abuse disrupts the brain, and ways of curbing the menace of substance abuse by creating awareness about drug abuse and its adverse consequences through appropriate mass media tools. This write-up also discussed methods of delivering customized information, suitable to target audiences such as families, schools, workplaces, religious organizations and homes, in a sensitive manner. Also discussed are strategies to use in collaboration with international agencies to monitor the sale of over-the-counter drugs and to enforce stricter penalties for individuals involved in the trade of illicit drugs, among others. Recommendations are made calling on all categories of people, including government, family, community and the National Agency for Food and Drug Administration and Control (NAFDAC), to contribute to preventing the menace of substance abuse. If Nigerian youths stop drug abuse, they will be useful to themselves, their families and society in general.

Keywords

Substance abuse, Psychoactive substance, Society

Introduction

Substance abuse has been a cause of many debilitating conditions such as schizophrenia and psychosis, leading to psychiatric admissions. Substance abuse is emerging as a global public health issue. The recent World Drug Report 2019 of the United Nations Office on Drugs and Crime (UNODC) estimated that 271 million people (5.5%) of the global population (aged between 15 and 64 years) had used drugs in the previous year. Also, it has been projected that 35 million individuals will experience drug use disorders. Furthermore, the Global Burden of Disease Study (2017) estimated that there were 585,000 deaths due to drug use globally. The burden of drug abuse (usage, abuse, and trafficking) has also been related to four areas of international concern, viz., organized crime, illicit financial flows, corruption, and terrorism or insurgency. Therefore, global interventions for preventing drug abuse, including its impact on health, governance, and security, require a widespread understanding of the prevalence, the frequently implicated drugs, the commonly involved populations, the sources of the drugs, and the risk factors associated with drug abuse. In Nigeria, the burden of drug abuse is on the rise and becoming a public health concern. Nigeria, the most populous country in Africa, has developed a reputation as a center for drug trafficking and usage, mostly among the youth population, and the menace is giving birth to a generation of drug addicts. Oftentimes, young men are seen with bottles of carbonated drinks (soft drinks) laced with all kinds of intoxicating content. They move about with the soft drink bottles and sip slowly for hours, while unsuspecting members of the public would easily believe that it is a mere harmless soft drink.

Ladipo, a consultant psychiatrist at the Lagos University Teaching Hospital (LUTH), said that he had handled many mental cases in his career as fallouts of drug abuse, which often leads to mental disorder. He also stated that the effects of drug abuse and wrong use take a toll not only on individuals and their families but on society at large. According to the UNODC report on drug use in Nigeria (the first large-scale, nationwide drug use survey in Nigeria), one in seven persons (aged 15-64 years) had used a drug in the past year. Also, one in five individuals who had used drugs in the past year suffers from drug-related disorders. Drug abuse has been a cause of many criminal offences such as theft, burglary, sex work, and shoplifting. A prevalence of drug abuse of 20-40% and 20.9% was reported among students and youths, respectively. Commonly abused drugs include cannabis, cocaine, amphetamine, heroin, diazepam, codeine, cough syrup and tramadol. The sources from which abusers obtained drugs were pharmacies/patent medicine shops, open drug markets, drug hawkers, fellow drug abusers, friends, and drug pushers. Drug abuse was common among undergraduates and secondary school students, youths, commercial bus drivers, farmers, and sex workers. Reasons stated for use include, but are not limited to, increasing physical performance, relieving stress and deriving pleasure. Poor socioeconomic conditions and low educational background were the common risk factors associated with drug abuse [1-10].

Objectives of the Seminar

  1. To identify the reasons and perceived benefits for substance abuse
  2. To identify psychological and social effects of substance abuse.
  3. To examine psychosocial supports rendered to substance users and addicts.
  4. To stimulate further discussions and research thoughts in an attempt to find solutions to the menace.

Clarification of Concepts

i. A Drug

It is any substance other than food that influences motor, sensory, cognitive or other bodily processes (APA, 2022).

ii. Drug Misuse

It is the use of a substance for a purpose not consistent with legal or medical guidelines (WHO, 2006).

iii. Psycho-Active Substance

These are substances that, when taken in or administered into the system, affect mental processes, e.g., perception, consciousness, cognition or mood and emotions (WHO, 2022).

iv. Substance Abuse

This, according to the International Classification of Diseases (ICD-10), is a pattern of psychoactive substance use that is capable of causing damage to physical or mental health. According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), it is a maladaptive pattern of substance use leading to clinically significant social, legal or occupational distress or mental ill-health within the last 12 months.

Substance abuse can also be defined as;

  • Use of drugs without a physician’s prescription.
  • Use of illicit or legally banned drugs.

v. Addiction

This is a compulsive, chronic, physiological or psychological need for a habit-forming substance, behaviour or activity that has harmful effects and typically causes well-defined symptoms, such as irritability, anxiety and tremors, upon withdrawal (NIH, 2019).

vi. Psychosocial

These are structured psychological or social interventions used to address substance-related problems (APA, 2022).

Literature Review

The International Classification of Diseases, ICD-10 (2022), defines substance abuse as a pattern of psychoactive substance use that is capable of causing damage to physical or mental health. Substance abuse is emerging as a global public health issue that needs to be addressed. A drug’s effects on the person taking it may be beneficial or harmful, whether physically, psychologically or physiologically; when the effects are beneficial, the drug is serving its purpose, but otherwise a problem exists. According to Abiodun et al., 1 in 7 persons aged 15-64 years in Nigeria had used a drug (other than tobacco and alcohol) in the past year. The past-year prevalence of any drug use is estimated at 14.4% (range 14.0%-14.8%), corresponding to 14.3 million people aged 15-64 years who had used at least one psychoactive substance in the past year for non-medical purposes. Among every 4 drug users in Nigeria, 1 is a woman: more men (annual prevalence of 21.8%, or 10.8 million men) than women (annual prevalence of 7.0%, or 3.4 million women) reported past-year drug use. The highest levels of past-year drug use were among those aged 25-39 years. One in 5 persons who had used drugs in the past year suffers from a drug use disorder. Cannabis is the most commonly used drug: an estimated 10.8% of the population, or 10.6 million people, had used cannabis in the past year, and the average age of initiation of cannabis use among the general population was 19 years. Geographically, the highest past-year prevalence of drug use was found in the southern geopolitical zones (past-year prevalence between 13.8% and 22.4%) compared with the northern geopolitical zones (between 10% and 13.6%).
Two-thirds of people who used drugs reported having serious problems as a result of their drug use, such as missing school or work, doing a poor job at work or school, or neglecting their family or children.
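The survey figures quoted above can be cross-checked against one another. A minimal sketch of that arithmetic, assuming a base population back-calculated from the stated 14.4% prevalence and 14.3 million users (the base population is not a figure given in the text):

```python
# Cross-checking the quoted UNODC survey figures.
# Assumption: the base population aged 15-64 is back-calculated from the
# stated prevalence (14.4%) and user count (14.3 million); it is not stated
# in the text itself.

users_total_m = 14.3      # million past-year drug users (stated)
prevalence = 0.144        # past-year prevalence of any drug use (stated)

base_population_m = users_total_m / prevalence  # implied population aged 15-64

men_m, women_m = 10.8, 3.4                      # stated users by sex (millions)
women_share = women_m / (men_m + women_m)       # fraction of users who are women

cannabis_users_m = 10.6                         # stated cannabis users (millions)
cannabis_prevalence = cannabis_users_m / base_population_m

print(f"Implied base population: {base_population_m:.1f} million")   # ~99.3
print(f"Women among past-year users: {women_share:.0%}")             # ~24%, i.e. ~1 in 4
print(f"Implied cannabis prevalence: {cannabis_prevalence:.1%}")     # ~10.7%, vs stated 10.8%
```

The figures are mutually consistent to within rounding: 10.8 million men plus 3.4 million women gives 14.2 million, matching the stated 14.3 million total, and women make up roughly one in four users as the report states.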

Classification of Substance of Abuse

Classification of Substance of Abuse is given in Table 1.

Table 1: Classification according to the Diagnostic and Statistical Manual (DSM-IV) and the International Classification of Diseases (ICD-10).

S/N   DSM-IV                                                ICD-10
1.    Alcohol                                               Alcohol
2.    Stimulants (cocaine, amphetamines)                    Other stimulants, including caffeine
3.    Caffeine                                              ____
4.    Cannabis                                              Cannabinoids
5.    Hallucinogens (lysergic acid, ecstasy, ketamine)      Hallucinogens
6.    Inhalants (fumes from petrol, glue, adhesives)        Volatile solvents
7.    Tobacco                                               Tobacco
8.    Opioids (morphine, pentazocine, pethidine, tramadol)  Opioids
9.    CNS depressants (sedatives, hypnotics, anxiolytics)   Sedatives, hypnotics
10.   Unknown/other substances (e.g., faeces, cow dung)     Unknown/other substances

Substance Abuse Stages

In discussing substance abuse, it is generally agreed that substance abuse is not a one-stage process. According to Brookdale, it progresses through the following stages:

Stage 1: Initiation
Stage 2: Experimentation
Stage 3: Occasional use
Stage 4: Regular use
Stage 5: Risky use
Stage 6: Dependence
Stage 7: Addiction
Stage 8: Crisis/treatment

1. Initiation Stage

This is the first stage, during which the individual tries a substance for the first time. This can happen at almost any time in a person’s life, but according to the National Institute on Drug Abuse, the majority of people with an addiction tried their drug of choice before 18 and had a substance use disorder by 20. The reasons a teenager experiments with drugs vary widely, but two common ones are curiosity and peer pressure; the latter choice is made with the intent of fitting in better with a particular group of peers. Another reason teenagers are more likely than most age groups to try a new drug is that the prefrontal cortex of the brain is not yet completely developed. This affects the decision-making process, and as a result many teenagers make their choice without effectively considering the long-term consequences of their actions.

2. Experimental Stage

At the experimentation stage, the user has moved past simply trying the drug and is now taking it in different contexts to see how it affects their life. Generally, in this stage, the drug is connected to social activities, such as experiencing pleasure or relaxing after a long day. Teenagers use it to enhance party atmospheres or manage stress from schoolwork; adults mainly experiment either for pleasure or to combat stress. In this stage, there are few or no cravings for the drug, and the individual still makes a conscious choice of whether to use or not. They may use it impulsively or in a controlled manner, depending mainly on their nature and their reason for using the drug. There is no dependency at this point, and the individual can still quit the drug easily if they decide to. Some youths, repelled by an unpleasant first experiment, never use the substance again; others, reassured by more seasoned users, become occasional users.

3. Occasional User Stage

The new user is passive, accepting drugs if and when they are offered rather than seeking them out, and believes he or she can handle the situation.

4. Regular User Stage

As a person continues to experiment with a substance, its use becomes normalized and grows from periodic to regular use. This does not mean that they use it every day, but rather that there is some pattern associated with it. The pattern varies from person to person: they might take it every weekend, or during periods of emotional unrest such as loneliness, boredom or stress. At this point, social users may begin taking their chosen drug alone, in turn taking the social element out of their decision. The drug’s use can also become problematic and have a negative impact on the person’s life; for example, the individual might begin showing up to work hung-over or high after a night of drinking alcohol or smoking marijuana. There is still no addiction at this point, but the individual is likely to think of the substance more often and may have begun developing a mental reliance on it. When this happens, quitting becomes harder, but it is still a manageable goal without outside help. At this stage, users actively seek out drugs and maintain their own supply, showing high motivation to use.

5. Risky User Stage

The individual’s regular use has continued to grow and is now frequently having a negative impact on their life. While a periodic hangover at work or an event may pass unnoticed during the regular use stage, at this stage such instances become a regular occurrence and their effects become noticeable. Many drinkers are arrested for a DUI (driving under the influence) at this point, and all users will likely see their work or school performance suffer notably. The frequent use may also lead to financial difficulties where there were none before. Although the user may not personally realize it, people on the outside will almost certainly notice a shift in their behavior. Some of the common changes to watch out for in a drug user include:

  • Borrowing or stealing money
  • Neglecting responsibilities such as work or family
  • Attempting to hide their drug use
  • Hiding drugs in easily accessible places (like mint tins)
  • Changing peer groups

6. Dependent Stage

At this stage, the person’s drug use is no longer recreational or medical but is driven by reliance on the substance of choice. This stage is sometimes viewed broadly as including the formation of both tolerance and dependence, but by now the individual should already have developed a tolerance. As a result, this stage is marked by dependence, which can be physical, psychological, or both.

For a physical dependence, the individual has abused their chosen drug long enough that their body has adapted to its presence and learned to rely on it. If use abruptly stops, the body reacts by entering withdrawal, a negative rebound marked by uncomfortable and sometimes dangerous symptoms that should be managed by medical professionals. In most cases, individuals choose to continue their use rather than seek help, because it is the easiest and quickest way to escape withdrawal.

7. Addictive Stage

At this stage, the drug becomes a major part of the user’s life. The user becomes obsessed with drugs, obtaining them at all costs without consideration for food, work or family. Individuals at this stage feel as though they can no longer deal with life without access to their chosen drug and, as a result, lose complete control of their choices and actions. The behavioral shifts that began during the risky use stage grow to extremes, with the user likely giving up old hobbies and actively avoiding friends and family. They may compulsively lie about their drug use when questioned and become quickly agitated if their lifestyle is threatened in any way. Users at this point can also be so out of touch with their old life that they do not recognize how detrimental their behaviors are or the effects these have had on their relationships.

8. Crisis/Treatment Stage

The final stage of addiction is the breaking point in a person’s life. Once here, the individual’s addiction has grown far out of their control and now presents a serious danger to their well-being. It is sometimes referred to as the crisis stage, because at this point the addict is at the highest risk of suffering a fatal overdose or another dramatic life event.

Of course, while crisis is the worst-case scenario for this stage, there is also a positive alternative. Either on their own or as a result of a crisis, this is when many individuals first find help from a rehab center and begin receiving treatment. As a result, this stage can mark the end of their addiction as well as the start of a new life without drugs and alcohol, filled with hope for the future.

Drug/Substance Dependence

According to the DSM-IV (2018), dependence is defined as a maladaptive pattern of substance use leading to clinically significant impairment or distress, occurring at any time in the same 12-month period, as manifested by three or more of the following:

1. Tolerance

The individual needs a higher dose of the substance to achieve the usual initial satisfactory effect or the current dose doesn’t give the usual initial satisfactory effect.

2. Primacy

The substance of abuse becomes the priority in the abuser’s hierarchy of needs.

3. Withdrawal

This occurs when an abuser stops taking the substance and the body begins to react negatively; for example, an individual who abruptly stops abusing Valium (diazepam) can experience seizures and insomnia.

Opioid withdrawal symptoms include excessive yawning, tearing, diarrhea, diaphoresis, joint pain and vomiting.

4. Harmful Use

The abuser continually engages in substance use despite knowledge of its detrimental effects.

5. Inability to Cut Down

An individual who has voluntarily stopped abusing a substance finds himself or herself engaging in it again.

6. Excessive Craving

The individual finds the substance pleasurable and seeks it out at all costs.

Risk Factors Associated With Substance Abuse

  1. Age (15-24 yrs)
  2. Male Gender
  3. Siblings or parental exposure
  4. Parental deprivation (divorce, separation, death of spouse)
  5. Exposure to high-risk job (breweries, bar, tobacco companies)
  6. Advertisement
  7. Poor economic status
  8. Experimental curiosity: Curiosity to experience the unknown effects of drugs motivates adolescents into drug use. The first experience produces a state of arousal, such as happiness and pleasure, which in turn motivates them to continue.
  9. Peer group influence: Peer pressure plays a major role in drawing many adolescents into drug abuse, because peer pressure is a fact of teenage and youth life. As adolescents try to depend less on their parents, they depend more on their friends.
  10. Lack of parental supervision: Many parents have no time to supervise their sons and daughters. Some parents have little or no interaction with family members, while others put pressure on their children to pass exams or perform better in their studies. These conditions initiate and increase drug abuse.
  11. Personality problems due to socio-economic conditions: Adolescents with personality problems arising from social conditions have been found to abuse drugs. The social and economic status of most Nigerians is below average; poverty is widespread, and broken homes and unemployment are on the increase, so many youths roam the streets looking for employment or resort to begging. These situations are aggravated by a lack of skills, of opportunities for training and re-training, and of committed action to promote job creation by private and community entrepreneurs. The frustration arising from these problems leads to recourse to drug abuse as a temporary escape from the tension and the problems behind it.
  12. The need for energy to work long hours: Increasing economic deterioration, leading to poverty and disempowerment, has driven many parents to send their children out in search of a means of contributing to family income. These children engage in hawking, bus conducting, head loading, scavenging, serving in food canteens and similar work, and are prone to drug taking so as to gain more energy to work long hours.
  13. Availability of the drugs: In many countries, drug prices have dropped as supplies have increased.

Theories of Drug Addiction

There are several theories that model addiction which are genetic theories, exposure theories (both biological and conditioning), and adaptation theories.

1. Genetic Theory

According to Danielle, genetic influences affect substance use and substance use disorders, but largely are not specific to substance use outcomes. The genetic theory of addiction, known as addictive inheritance, attempts to separate the genetic and environmental factors of addictive behavior. Numerous large-scale twin studies have documented the importance of genetic influences on how much people use substances (alcohol, tobacco, other drugs) and the likelihood that users will develop problems. However, twin studies also robustly demonstrate that genetic influences affect multiple forms of substance use (alcohol, illicit drugs) as well as externalizing behaviors such as adult antisocial behavior and childhood conduct disorder. Accordingly, the majority of genetic influence on substance use outcomes appears to operate through a general predisposition that broadly influences a variety of externalizing disorders and is likely related to behavioral undercontrol and impulsivity, itself a heterogeneous construct.

2a. Exposure Theories: Biological Models

The exposure model is based on the assumption that the introduction of a substance into the body on a regular basis will inevitably lead to addiction. These theories suggest that brain chemistry, brain structure and genetic abnormalities cause human behavior. Biological models, as opposed to conditioning models, hold that addiction is a consequence of biology. Underlying the exposure model is the assumption that the introduction of a narcotic into the body causes metabolic adjustments requiring continued and increasing dosages of the drug in order to avoid withdrawal. Although changes in cell metabolism have been demonstrated, they have not yet been linked with addiction. Some theorize that drugs which mimic endorphins (naturally occurring painkillers), if used on a regular basis, will reduce the body’s natural endorphin production and bring about a reliance on the external chemical agent for ordinary pain relief. The neurological basis of substance abuse is an example of the biological models, as shown below (Figure 1).

Figure 1: Neuro-Biological Basis of Drug Dependence

Dependence results from a complex interaction between the psychological effects of substances on brain areas associated with motivation and emotion, combined with learning. Certain brain areas mediate pleasure through the release of dopamine; dopamine levels rise, for example, after sexual intercourse or eating a favorite meal. In drug abusers, the drug becomes a substitute for the natural activities that raise dopamine levels: it increases dopamine directly, producing the pleasurable effect that is desired, and the brain learns to reinforce drug-taking in place of those natural activities.

Anatomical Areas Involved in Drug Dependence

  1. Nucleus accumbens
  2. Mesolimbic pathway in the midbrain
  3. Ventral tegmental area

2b. Exposure Theories: Conditioning Models

The basis of conditioning theories is that addiction is the cumulative result of the reinforcement of drug administration. The substance acts as a powerful reinforcer and gains control over the user’s behavior. In contrast to the biological models of the exposure theories, these conditioning models suggest that anyone can be driven to exhibit addictive behavior given the necessary reinforcements, regardless of their biology. The advantage of this theory is that it offers the potential for considering all excessive activities along with drug abuse within a single framework: those of highly rewarding behavior. Many reinforcement models have been defined, including the opponent-process model of motivation and the well-known classical conditioning model. Both of these models define addiction as a behavior that is refined because of the pleasure associated with its reinforcement.

3. Adaptation Theories

The adaptation theories include the psychological, environmental and social factors that influence addiction. Advocates of these theories have analyzed how expectations and beliefs about what a drug will do for the user influence the rewards and behaviors associated with its use. They recognize that any number of factors, including internal and external cues, as well as subjective emotional experiences, will contribute to addictive potential. They support the views that addiction involves cognitive and emotional regulation to which past conditioning contributes.

The adaptation theory has also broadened the scope of addiction into psychological realms. Investigators have noted that drug users rely on drugs to adapt to internal needs and external pressures.

Common Signs of Drug Abuse

According to Williams, the common signs include:

A. Physical Warning Signs of Substance Abuse

These include

  • Bloodshot eyes, pupils larger or smaller than usual.
  • Changes in appetite or sleep patterns.
  • Sudden weight loss or gain.
  • Deterioration of physical appearance, personal grooming habits.
  • Unusual smells on breath, body, or clothing.
  • Tremors, slurred speech, or impaired coordination.

B. Behavioral Signs of Substance Abuse

These include:

  • Drop in attendance and performance at work or school.
  • Unexplained need for money or financial problems. May borrow or steal to get it.
  • Engaging in secretive or suspicious behaviors.
  • Sudden change in friends, favorite hangouts, and hobbies.
  • Frequently getting into trouble (fights, accidents, illegal activities).

C. Psychological Warning Signs of Substance Abuse

These include:

  • Unexplained change in personality or attitude.
  • Sudden mood swings, irritability, or angry outbursts.
  • Periods of unusual hyperactivity, agitation, or giddiness.
  • Lack of motivation; appears lethargic
  • Appears fearful, anxious, or paranoid, with no reason.

Reasons for Substance Abuse in Nigeria

The commonly reported reasons include the following:

  1. To increase physical performance
  2. To derive pleasure
  3. Desire to relax/sleep
  4. To keep awake
  5. To relieve stress
  6. To relieve anxiety
  7. Unemployment
  8. Frustration
  9. Easy access

Effects of Substance Abuse

The implications of substance abuse for the life of an individual are enormous and can be categorized as physical, social and psychological.

A. Physical Impact

There are also a number of issues affecting the physical health of the individual who is abusing drugs over a sustained period of time. According to the National Institute on Drug Abuse (2019), long-term drug abuse can affect:

  • The Kidneys. The human kidney can be damaged both directly and indirectly by habitual drug use over a period of many years. Abusing certain substances can cause dehydration, muscle breakdown, and increased body temperature, all of which contribute to kidney damage over time. Examples include heroin, cocaine and marijuana.
  • The Liver. Liver failure is a well-known consequence of alcoholism, but it can also occur in individuals using opioids, steroids or inhalants habitually over many years. The liver is important for clearing toxins from the bloodstream, and chronic substance abuse can overwork this vital organ, leading to damage from chronic inflammation, scarring, tissue necrosis and, in some instances, even cancer. The liver may be even more at risk when multiple substances are used in combination.
  • The Heart. Many drugs have the potential to cause cardiovascular issues, which can range from increased heart rate and blood pressure to aberrant cardiac rhythms and myocardial infarction (i.e., heart attack). Injection drug users are also at risk of collapsed veins and bacterial infections in the bloodstream or heart.
  • The Lungs. The respiratory system can suffer damage related to smoking or inhaling drugs such as marijuana and crack cocaine. In addition to this kind of direct damage, drugs that slow a person’s breathing, such as heroin or prescription opioids, can cause serious complications for the user.

Physical Signs Include

  • Insomnia
  • Tremor
  • Thought disturbance
  • Drowsiness
  • Weakness
  • Coma
  • Respiratory depression (depression of the central nervous system)
  • Sexually transmitted diseases (e.g. HIV/AIDS, hepatitis)
  • Death

B. Social Impact

Addiction creates social issues and public health concerns that extend beyond the home, school, and workplace to negatively impact larger groups of individuals.

  • Substance Abuse and the Home: Unfortunately, families all throughout society know the impact of addiction. If a person’s spouse or parent is abusing drugs, the results can be life-altering. It can result in financial hardships (due to job loss or money being diverted to fuel the habit). It may also cause reckless behavior that puts the family at risk. Addiction affects the entire family unit when one member is suffering.

Many cases of domestic violence within relationships are related to substance abuse. Addiction can happen on both sides of the conflict, not only by the abuser but also by the victim who uses drugs to cope. Drug use in the family is not limited to spouses or parents. Adolescents, especially during times of transition, may find themselves struggling with substance use. Children may experience maltreatment (including physical and sexual abuse and neglect), which may require the involvement of child welfare. Watching their parents suffer from substance use disorders may result in long-term mental and emotional disorders and delayed development. Children whose parents abuse drugs are more likely to end up using drugs or alcohol, as well.

  • Substance Abuse and the Workplace: Drug abuse also creates social issues in the workplace, where the substance use of employees can cause problems. An individual’s drug use will likely impact their work performance, or may even stop them from going to work entirely. Substance abuse can lead to:
  • Decreased work productivity
  • Increased lateness and absences
  • Inappropriate behaviors at work, such as selling drugs to co-workers

These could lead to disciplinary actions and dismissal. Further, drug and alcohol abuse can lead to impaired judgment, alertness, and motor coordination, creating unsafe workplace conditions especially in an environment with heavy machinery.

C. Social Vices

One of the social effects of drug abuse is its direct link to criminal acts, murders and other offences that affect society at large.

D. Psychological Impacts

Substance abuse and mental health are linked because the psychological effects of drug addiction, including alcohol addiction, cause changes in the body and brain. A careful balance of chemicals keeps the cogs turning inside the body, and even the smallest change can cause negative symptoms.

  • Anxiety. There are a lot of similarities between anxiety and the effects of stimulants such as cocaine and methamphetamine. Conversely, using central nervous system depressants can also increase the risk of a person developing anxiety. A person could have a long-standing pattern of drug abuse and consequently develop anxiety problems. Many substances, particularly stimulants like cocaine, can cause anxiety as a dose-dependent side effect. Other drugs, like benzodiazepines, can bring about increased anxiety as part of their withdrawal syndromes.

Anxiety is best described as a disorder of the fight-or-flight response, where someone perceives danger that isn’t there. It includes the following physical and mental symptoms:

  • Rapid heart rate
  • Excessive worrying
  • Sweating
  • An impending sense of doom
  • Mood swings
  • Restlessness and agitation
  • Tension
  • Insomnia

Additionally, many addicts experience anxiety around trying to hide their habits from other people. In a lot of cases, it’s difficult to tell whether anxious people are more likely to abuse substances or if drugs and alcohol cause anxiety.

  • Depression. There is a clear association between substance abuse and depression. This relationship could be attributed to preexisting depression that led to drug abuse or it could be that substance use caused changes in the brain that increased depressive symptoms. Some people use drugs to self-medicate symptoms of depression, but this only alleviates the symptoms while the user is high. It may even make depression symptoms worse when the user is working through withdrawal. Many drugs have a withdrawal syndrome that includes depression or other mood disturbances, which can complicate recovery. The main symptoms associated with depression are:
  • Hopelessness
  • Lack of motivation
  • Dysregulated emotion
  • Loss of interest
  • Sleep disturbances
  • Irritability
  • Weight gain or loss
  • Suicidal ideation
  • Paranoia. Some drugs, like cocaine and marijuana, can cause feelings of paranoia that may amplify with long-term abuse. On top of this, people struggling with addiction may feel that they need to hide or lie about their substance use, indicating a fear of being caught. The fact that many substances of abuse are illegal can also contribute to mounting feelings of paranoia among long-term substance users.
  • Shame and Guilt. There is a stigma attached to addiction in society, and there’s a lot of guilt and shame for the individuals who struggle with the condition. Often, this is adding fuel to a fire that was already burning strong. People with substance use disorders tend to evaluate themselves negatively on a regular basis, which is a habit that has its roots in childhood experiences. Continual negative self-talk adds to feelings of shame and guilt. When you constantly feel as if you’ve done something wrong, it’s tempting to try to cover up these challenging emotions with drugs and alcohol. These unhelpful emotions contribute to the negative feedback loop that sends people spiraling into addiction.
  • A Negative Feedback Loop. From an outside perspective, someone with an addiction looks like they’re repeatedly making bad choices and ignoring reason. However, the truth is far more complicated and nuanced, so much so that it can be very difficult for people to overcome a substance use disorder without inpatient or outpatient treatment. This is partly due to a negative feedback loop that occurs in the mind. When someone is addicted to drugs or alcohol, they feel a sense of comfort they haven’t been able to get elsewhere. Inevitably, this feeling is replaced by guilt and shame as they sober up and face the consequences of their actions. The weight of these feelings then drives them to seek comfort in substances again.
  • Loss of Interest. Loss of interest in activities you used to enjoy is a key symptom of both addiction and depression, but overcoming the former makes it much easier to gain control over the latter. It’s such a destructive symptom because of how demotivating it is to feel there’s no joy in the world. Everyone has passions and interests, but getting back to finding them isn’t easy for someone with these conditions [11-20].

Management of Substance Abuse

According to the APA (2018), the management includes:

Pharmacologic Management

Pharmacologic management in substance abuse has two main purposes:

  • To permit safe withdrawal from substance of abuse and
  • To prevent relapse.

The drugs that make up the pharmacologic intervention include:

  • A benzodiazepine anxiolytic: Alcohol withdrawal is usually managed with a benzodiazepine anxiolytic agent, used to suppress the symptoms of abstinence.
  • Disulfiram (Antabuse): This may be prescribed to help deter clients from drinking.
  • Acamprosate (Campral): This may be prescribed for clients recovering from alcohol abuse or dependence to help reduce cravings for alcohol and decrease the physical and emotional discomfort that occurs especially in the first few months of recovery.
  • Methadone: A potent synthetic opiate used as a substitute for heroin in some maintenance programs.
  • Levomethadyl: A narcotic analgesic whose only purpose is the treatment of opiate dependence.
  • Naltrexone: An opioid antagonist often used in the treatment of overdose.

1. Public Health Approach: This includes

Primary Level Management/Prevention

  • Creating awareness of substance abuse and its adverse consequences through appropriate mass media tools that deliver customized information, in a sensitive manner, to target audiences such as families, schools, workers, religious organizations and homes, owing to the impact on all age groups of society.
  • Provision of recreational activities for youths in urban areas.
  • Moral realignment for a derailed person.
  • Educational approaches targeting parents improving family lifestyle.
  • Drug education as part of school curriculum.
  • Screening ( drug screening for undergraduates)

Secondary Level Management

  • Laboratory tests, such as blood tests (including mean corpuscular volume), urine drug tests, and urinalysis
  • Detoxification
  • Treatment of associated mental and physical disorders
  • Psychotherapy, such as cognitive behavioral therapy (CBT) and family therapy
  • Maintenance of drug-free behavior, such as use of anti-craving drugs

Tertiary Level Management

  • Occupational rehabilitation
  • Educational rehabilitation and counseling
  • Social rehabilitation
  • Provision of legal aid for abusers in legal dilemmas
  • Social support

2. Psychosocial Supports for Substance Use Disorders

Psychosocial interventions are structured psychological or social interventions used to address substance-related problems (APA, 2022). They can be used at different stages of drug treatment to identify the problem, treat it, and assist with social reintegration. The psychological aspects of development refer to an individual’s thoughts, emotions, behaviors, memories, perceptions, and understanding. The social aspects of development refer to the interaction and relationships among the individual, family, peers, and community (UNRWA, 2017). Psychosocial interventions can be used in a variety of treatment settings, either as stand-alone treatments or in combination with pharmacological intervention. They can be implemented individually or in groups and delivered by a range of health workers. They are also considered the foundation of drug and alcohol treatment, especially for substances for which pharmacological treatments have not been sufficiently evaluated. They involve the following:

Psychological Supports for Substance Abuse Disorders and Addicts

A. Individual Therapy Interventions. The effectiveness of these interventions has been established primarily for alcohol use problems, although they have also been applied to patients using other substances. The aim of the intervention is to help patients understand that their substance use is putting them at risk and to encourage them to reduce or give up their substance use. It can range from 5 minutes of brief advice to 15-30 minutes of brief counseling. Intensive counseling is especially effective, and there is a strong dose-response relationship between counseling intensity and quitting success: in general, the more intense the treatment intervention, the greater the rate of abstinence.

B. Motivational Interviewing. Motivational interviewing is a collaborative conversation style for strengthening a person’s own motivation and commitment to change. It is used to help people with different types of drug problems. Frequently, individuals are not fully aware of their drug problems, or they may be ambivalent about them. Often referred to as a conversation about change, it is used to help drug users identify their need for change. It is characterized by an empathic approach in which the therapist helps to motivate the patient by asking about the pros and cons of specific behaviors, exploring the patient’s goals and associated ambivalence about reaching those goals, and listening reflectively to the patient’s responses.

It seeks to address an individual’s ambivalence about their drug problems, as this is considered the main barrier to change.

It follows five stages:

  1. Expressing empathy for the client
  2. Helping the client to identify discrepancies between their behavior and their goals
  3. Avoiding arguments with the patient about their motivations and behaviors
  4. Rolling with the resistance of the patient to talk about some issues
  5. Supporting the patient’s sense of self-efficacy

C. Cognitive Behavioral Therapy. Cognitive behavioral therapy (CBT) is an umbrella term that encompasses cognitive therapy on its own and in conjunction with different behavioral strategies. Cognitive therapy is based on the principle that the way individuals perceive and process reality influences the way they feel and behave. As part of drug treatment, cognitive therapy helps clients build self-confidence and address the thoughts believed to be at the root of their problems. Clients are helped to recognize the triggers for substance use and learn strategies to handle those triggers. Treatment providers work to help patients identify alternative thoughts to those that lead to their drug use, and thus facilitate their recovery. Generally, cognitive therapy is provided after a client has been diagnosed as having drug dependence problems.

CBT treatment usually involves efforts to change thinking patterns. These strategies might include:

  • Learning to recognize one’s distortions in thinking that are creating problems, and then to reevaluate them in light of reality.
  • Gaining a better understanding of the behavior and motivation of others.
  • Learning to develop a greater sense of confidence in one’s own abilities.
  • Using role playing to prepare for potentially problematic interactions with others.
  • Learning to calm one’s mind and relax one’s body.

D. Contingency Management. Contingency management refers to a set of interventions involving concrete rewards for clients who achieve target behaviors. This approach is based around recognizing and controlling the relationship between behaviors and their consequences. It can be applied to drug users with different types of problems in a variety of settings. It has been used, for example, with opioid and cocaine users, and with homeless clients. Contingency management is used to maintain abstinence by reinforcing and rewarding alternative behaviors to drug use with the aim of making abstinence a more positive experience. Contingency management programs can, for example, be used during drug treatment to reward a user remaining abstinent or to incentivize a user’s presence at work in a social reintegration programme.

E. Social Skills Training (SST). Social skills are defined as the ability to express positive and negative feelings in the interpersonal context without suffering loss of interpersonal reinforcement. Social skills training (SST) is a type of behavioral therapy used to improve social skills in people with mental disorders or developmental disabilities. Social skills can be taught, practiced, and learned. The main purpose of social skills training is teaching persons, who may or may not have emotional problems, the verbal as well as nonverbal behaviors involved in social interactions.

Another goal of social skills training is improving a patient’s ability to function in everyday social situations.

SST Techniques

  • Behavioral Rehearsal. Role play which involves practicing new skills during therapy in simulated situations
  • Corrective Feedback. Used to help improve social skills during practice
  • Modeling. The educational component of SST that involves the modeling of appropriate social behaviors
  • Positive Reinforcement. Used to reward improvements in social skills
  • Weekly Homework Assignments. Provide the chance to practice new social skills outside of therapy

F. Family Behavior Therapy (FBT). FBT focuses on how the behaviors of the person with the SUD affect the family as a whole and works to change those behaviors with the involvement of the entire family. Goals of family therapy include obtaining information about the patient and the factors that contribute to substance abuse, including the patient’s attitude toward substance abuse, treatment adherence, social and vocational adjustment, level of contact with substance-using peers, and degree of abstinence. Family support for abstinence and the maintenance of marital and family relationships are encouraged. Even brief involvement of family members in the treatment program can enhance treatment engagement and retention.

G. Self-Help Groups. Self-help groups are voluntary not-for-profit organizations where people meet to discuss and address shared problems, such as alcohol, drug, or other addictions. Participants seek to provide support for each other, with senior members often mentoring or sponsoring new ones. Prominent examples include Alcoholics Anonymous and Narcotics Anonymous, and there is a range of other groups with similar purposes. As well as helping drug users, some self-help groups exist to support the family members of people with alcohol- and drug-related problems. Self-help groups can help people recognize their drug-related problems, can be a support during drug treatment, and can help users maintain abstinence and prevent relapse.

The groups aim to create a drug-free supportive network around the individual during the recovery process and provide opportunities to share experiences and feelings.

H. Therapeutic Communities. Residential rehabilitation programs (sometimes called therapeutic communities) are usually long-term programs in which people live and work in a community of other substance users, ex-users, and professional staff. Programs can last anywhere between 1 and 24 months (or more). The aim of residential rehabilitation programs is to help people develop the skills and attitudes needed to make long-term changes toward an alcohol- and drug-free lifestyle. Programs usually include activities such as employment, education and skills training, life skills training (such as budgeting and cooking), counseling, and group work.

Implications

Nursing Education and Practice

  • Advocacy focused on strengthening family support systems and optimizing self-help and peer groups.
  • Creating awareness about substance abuse and its adverse consequences through appropriate mass media tools, delivering customized information suitable to the target audience (family, schools, workers, religious organizations, homes) in a sensitive manner, owing to its impact on all age groups of society.
  • It is of prime importance to design and formulate an effective, community-based, holistic strategy to address the needs of drug abusers and their families comprehensively, through multiple measures such as identifying the psychosocial determinants of illicit drug use and developing family prevention programs in the form of multi-dimensional family therapy and individual cognitive behavioral therapy.
  • Sensitizing clinicians to identify patients at risk for nonprescription drug abuse, strengthening preclinical assessment to predict substance abuse liability, encouraging exercise as a potential treatment for drug abuse, and building mechanisms for tracking and monitoring prescription drug abuse.
  • Formulating strategies in collaboration with international agencies to monitor the sale of over-the-counter drugs and enforcing stricter penalties for individuals involved in the trade of illicit drugs.
  • Nurses also have an important role to play in screening adolescents and youths for drug use during routine medical checkups.

Nursing Research

  • Collaborate with other health personnel in research studies relating to substance abuse, thus providing new information on the psychological care of clients with substance abuse [21-28].

Conclusion

Substance abuse is still a menace and has grown into a global subculture whose effects are cataclysmic and cut across every society, creed, and race. However, no individual is born an abuser; multifarious human activities have, through learning, interaction, and curiosity, led people to develop this habit. Evidence shows that substance abuse is most common among the youth, especially in Nigeria. The habit may develop, for instance, as an attempt to satisfy curiosity in daily interactions, man being a gregarious animal.

To the individual, its effects can be physiological and psychological, which gradually penetrates the society and affects all productive endeavors both socially and economically. As a menace, substance abuse has habitually become a means to an end which calls for individuals, families, groups, communities, societies and the Nigerian government to collaboratively join hands in curbing the menace. Psychosocial support is presented here as a way out of the menace. Mental health nurses are central to providing the support.

Recommendations

In an attempt to proffer meaningful solutions to curb the menace of substance abuse, the following recommendations are presented to both the government and society at large.

(a) Government policies targeted at developing the society are more often than not mere paperwork. Thus, the government should ensure that through its policies, jobs are created and social services are rendered, and, above all, that its policies are feasible and capable of implementation.

(b) Hospitals and clinics should be well stocked with genuine drugs and trained physicians put in place to ensure proper prescription of drugs while monitoring how the patients take such drugs to avoid over or under dosage tendencies which will lead to drug abuse.

(c) There should be a proper scrutiny and licensing of patent medicine stores, and such should be operated by well-trained Pharmacists. Alongside this, street drug hawking should be discouraged since this can promote accessibility to drug abusers.

(d) Individuals, families, communities, and the entire society should ensure that moral values are inculcated in the youths, by joining the government’s fight against the menace.

(e) Implementing a policy of asking patients about their needs and wishes concerning psychosocial supports, as well as routinely assessing their levels of psychosocial functioning, may bring about meaningful progress in psychosocial care.

(f) Rehabilitation centers such as therapeutic and penal institutions should be equipped, employ trained staff as well as involve in proper guidance and counseling.

(g) Institutions like the National Drug Law Enforcement Agency (NDLEA) and the National Agency for Food and Drug Administration and Control (NAFDAC) should be empowered to squarely deal with “Drug Barons” as well as their traffickers, peddlers, and conduits. This is because, at times, their performance is undermined by the threats they receive as well as the purported connections such barons and traffickers have with people in higher authority.

(h) Government should encourage even development at all levels by providing the required skills, social services, and recreational facilities to reduce rural-urban migration, as it was also found that many youths migrate from rural to urban areas in search of the greener pastures and facilities lacking in rural areas.

(i) Non-Governmental Organizations (NGOs) and Community Based Organizations (CBOs) should encourage the sensitization campaigns against drug abuse as well as engage in rehabilitation programs.

(j) Educational Institutions at all levels whether public or private should organize workshops, lectures/ symposiums to enlighten the people on the dangers of drugs and substance abuse.

References

  1. Abubakar IJ, Abubakar SK, Abubakar G, Zayyanu S, Garba Mohammed K, et al. (2021) The Burden of Drug Abuse in Nigeria: A Scoping Review of Epidemiological Studies and Drug Laws. National Library of Medicine. [crossref]
  2. American Psychological Association (2022) Breaking Free From Addiction.
  3. Abiodun O (2021) Drug abuse and its clinical implications with special reference to Nigeria. Central.
  4. Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH (2017) Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study.
  5. Yunusa U, Bello UL, Idris M, Haddad MM, Adamu D (2017) Determinants of substance abuse among commercial bus drivers in Kano Metropolis, Kano State, Nigeria. American Journal of Nursing Science.
  6. Brookdale Premier Addiction Recovery (2022) Seven Stages of Addiction.
  7. Daniel M (2016) The Genetics of Addiction. Journal of Studies on Alcohol and Drug 77: 673-675.
  8. Drugs, Brains, and Behavior: The Science of Addiction (2014). National Institute on Drug Abuse.
  9. Adamson TA, Onifade PO, Ogunwale A (2010) Trends in socio demographic and drug abuse variables in Patients with alcohol and drug use disorders in a Nigerian treatment facility. West Afr J Med 29: 12-18. [crossref]
  10. Arli C (2020) Overview of Social Skills Training.
  11. Benjamin A, Chidi N (2014) Drug abuse, addiction and dependence, pharmacology and therapeutics. Swiss School of Public Health Journals.
  12. Behavioural Health Resources and Services Directory For Carrol Country (2020): Signs And Symptoms Of Drug Abuse.
  13. Dankani I (2017) Abuse of cough syrups: a new trend in drug abuse in north western Nigerian states of kano, Sokoto, Katsina, Zamfara and Kebbi. International Journal of Physical and Social Science 2: 199-213.
  14. Essien CF (2010) Drug use and abuse among students in tertiary institutions: the case of Federal University of Technology, Minna. Journal of Research in National Development.
  15. Erah F, Omaseye A (2017) Drug and alcohol abuse among secondary school students in a rural community in southsouth Nigeria. Annals of Medical and Surgical Practice Journal 2.
  16. Famuyiwa O, Aina OF, Bankole-Oki OM (2011) Epidemiology of psychoactive drug use amongst adolescents in metropolitan Lagos, Nigeria. European Child & Adolescent Psychiatry Journal 20: 351-359. [crossref]
  17. Gobir A, Sambo M, Bashir S, Olorukoba A, Ezeh O, et al. (2017) Prevalence and determinants of drug abuse among youths in a rural community in north western Nigeria. Tropical Journal of Health Sciences.
  18. Gureje O, Olley D (1992) Alcohol and drug abuse in Nigeria: view of the literature Contemporary Drug Problems.
  19. Makanjuola BA, Sabitua O, Tanimola M (2007) National Drug Laws Enforcement Agency (2020).
  20. Namadi M (2016) Drug abuse among adolescents in Kano metropolis, Nigeria. Ilimi Journal of Art and Social Sciences 2.
  21. Nigeria, Federal Ministry of Health, National Policy for Controlled Medicines, 2017.
  22. Pela OA, Ebie C (1982) Drug abuse in Nigeria: a review of epidemiological studies. National Library of Medicine, PubMed.
  23. Pharmacists Council of Nigeria (2020).
  24. Abubakar IJ, Abubakar SK, Abubakar G, Zayyanu S, Garba Mohammed K, et al. (2019) The Burden of Drug Abuse in Nigeria: A Scoping Review of Epidemiological Studies and Drug Laws. National Library of Medicine. [crossref]
  25. Ladipo A (2021) Menace Of Drug Abuse. The Sun Journals.
  26. Lauren B (2022) Long Term Drug Addiction Effects. American Addiction Centers Drug Abuse.Com.
  27. Mohd F (2022) Social Skills Among Psychiatric Patient.
  28. Theories Of Substance Abuse (2022).

Numerical Simulation of Surface and Internal Wave Excitation due to an Air Pressure Wave

DOI: 10.31038/GEMS.2023533

Abstract

The excitation of surface and internal water waves by an air pressure wave has been numerically simulated in several model cases, using a nonlinear shallow water model of velocity potential. Water waves were excited when the air pressure wave speed was close to the water wave speed in the surface mode or internal mode. The surface mode waves traveling as free waves after being excited by an air pressure wave were also amplified by the shallowing on a sloping seabed. When the air pressure wave with a speed close to that of the internal mode stopped, free surface waves in the internal mode hardly appeared, unlike the free internal waves.

Keywords

Surface wave, Internal wave, Air pressure wave, Proudman resonance, Nonlinear shallow water

Introduction

Internal waves in various waters, such as the East China Sea, e.g., [1,2], and Lake Biwa, e.g., [3,4], may attain large wave heights because the density ratio between layers in water is not as large as that between air and water for surface waves. Although various sources of internal waves—tidal currents [5], wind-driven near-inertial waves [6], etc.—have been revealed, the causes of internal waves remain unknown in many actual waters.

In the present study, we consider surface/internal wave excitation due to an air pressure wave. Regarding surface waves, air pressure waves of a few hectopascals often generate meteotsunamis around the world, e.g., [7,8,9]. For example, at the west coasts of Kyushu, Japan, meteotsunamis called “Abiki” are observed, e.g., [10,11]. Conversely, internal waves are also generated and amplified by air pressure waves due to meteorological factors including typhoons [12,13]. The excitation mechanism underlying these phenomena is the Proudman resonance [14], which is also known as the cause of other transient waves, e.g., [15,16,17,18,19]. Moreover, the resonance triggered by air pressure waves from a volcanic eruption may generate global tsunamis, e.g., [20,21]. Artificial waves can also be created by the resonance when an airplane moves on a very large floating airport [22].

In this basic research, numerical simulations of surface and internal wave excitation due to an air pressure wave have been performed in several model cases, using a nonlinear shallow water model of velocity potential. Although wave dispersion and the Coriolis force are not considered, the proposed simple model will provide an easy-to-use tool for predicting long-wave excitation from air pressure changes estimated in weather forecasts. We consider the cases in which the air pressure wave speed is close to the surface or internal mode speed.

Method

We consider the irrotational motion of inviscid and incompressible fluids in two layers, as illustrated in Figure 1.


Figure 1: Two-layer water

The still water depths of the upper and lower layers are h1(x) and h2(x), respectively, and h(x) = h1(x) + h2(x). We assume that the densities of the upper and lower layers, ρ1 and ρ2, respectively, are uniform and constant, and that the fluids do not mix even in motion. The water surface displacement, interface displacement, and seabed position are denoted by ζ(x, t), η(x, t), and b(x), respectively. Friction is ignored everywhere for simplicity. The velocity potentials of the upper and lower layers are ϕ1(x, t) and ϕ2(x, t), respectively.

The nonlinear shallow water equations of velocity potential considering the pressure on the water surface, p0(x, t), are

Upper Layer

∂η/∂t = ∂ζ/∂t + ∇·[(ζ − η)∇ϕ1],      (1)

∂ϕ1/∂t = −[gζ + p0/ρ1 + (∇ϕ1)²/2],      (2)

Lower Layer

∂η/∂t = −∇·[(η − b)∇ϕ2],       (3)

∂ϕ2/∂t = −[gη + (p1 + P2)/ρ2 + (∇ϕ2)²/2],      (4)

where ∇ = (∂/∂x, ∂/∂y) is the horizontal partial differential operator. The gravitational acceleration g is 9.8 m/s², p1(x, t) is the pressure at the interface, and P2 = (ρ2 − ρ1)gh1. Equations (1)–(4) can be derived by reducing the nonlinear equations based on the variational principle [23].

Substituting Equation (3) into Equation (1), we obtain

∂ζ/∂t = −{∇·[(ζ − η)∇ϕ1] + ∇·[(η − b)∇ϕ2]}.       (5)

In the upper layer, reversing the direction of the integration with respect to z gives the following auxiliary equation:

∂ϕ1/∂t + gη + p1/ρ1 + (∇ϕ1)²/2 = 0,      (6)

which corresponds to the Bernoulli equation on z = η.

By substituting Equation (2) into Equation (6), we obtain

p1 = p0 + ρ1g(ζ − η),       (7)

which expresses the hydrostatic pressure distribution. By substituting Equation (7) into Equation (4), we obtain

∂ϕ2/∂t = −[gη + p0/ρ2 + r⁻¹g(ζ − η) + (1 − r⁻¹)gh1 + (∇ϕ2)²/2], (8)

where r = ρ2/ρ1 > 1.

By eliminating p1 from Equations (4) and (6), we obtain

∂ϕ1/∂t − r∂ϕ2/∂t = (r − 1)g(η + h1) − [(∇ϕ1)² − r(∇ϕ2)²]/2.                   (9)

We explicitly solve the above equations using a finite difference method, with the central difference in space and the forward difference in time. When the pressure at the water surface, p0, is known and the water surface displacement ζ is unknown, the procedure shown in Figure 2 is repeated, starting from the initial still water state, to obtain new time-step values one after another.


Figure 2: Procedure for obtaining the surface displacement ζ, interface displacement η, and velocity potentials in the upper and lower layers, ϕ1 and ϕ2, respectively, when the pressure at the water surface, p0, is given.

Conversely, when the pressure at the water surface, p0, is unknown and the water surface displacement ζ is known, we adopt the procedure shown in Figure 3, which was not used in the present calculations.


Figure 3: Procedure for obtaining the interface displacement η and velocity potentials in the upper and lower layers, ϕ1 and ϕ2, respectively, when the surface displacement ζ is given.
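The stepping procedure can be sketched in code. The following is a minimal one-dimensional illustration of one forward-difference time step of Equations (2), (3), (5), and (8), not the author's implementation; the function names, the boundary treatment, and the convention that ζ, η, and b are stored as elevations (z-coordinates) are assumptions:

```python
import numpy as np

g = 9.8                      # gravitational acceleration [m/s^2]
rho1, rho2 = 1000.0, 1025.0  # upper/lower layer densities [kg/m^3]
r = rho2 / rho1              # density ratio r > 1

def d_dx(f, dx):
    """Central difference in space; zero-gradient ends for simplicity."""
    df = np.zeros_like(f)
    df[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
    return df

def step(zeta, eta, phi1, phi2, p0, b, h1_still, dx, dt):
    """One forward-Euler step; the still interface lies at z = -h1_still."""
    u1, u2 = d_dx(phi1, dx), d_dx(phi2, dx)       # layer velocities grad(phi)
    flux1 = d_dx((zeta - eta) * u1, dx)           # div[(zeta - eta) grad(phi1)]
    flux2 = d_dx((eta - b) * u2, dx)              # div[(eta - b) grad(phi2)]
    eta_new = eta - dt * flux2                                        # Eq. (3)
    zeta_new = zeta - dt * (flux1 + flux2)                            # Eq. (5)
    phi1_new = phi1 - dt * (g * zeta + p0 / rho1 + 0.5 * u1**2)       # Eq. (2)
    phi2_new = phi2 - dt * (g * eta + p0 / rho2 + g * (zeta - eta) / r
                            + (1.0 - 1.0 / r) * g * h1_still
                            + 0.5 * u2**2)                            # Eq. (8)
    return zeta_new, eta_new, phi1_new, phi2_new
```

Starting from still water (ζ = 0, η = −h1, ϕ1 = ϕ2 = 0) with p0 = 0, one step leaves the state unchanged, which checks that the hydrostatic terms of Equation (8) balance.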

Conditions

Focusing on one-dimensional wave propagation in the x-axis direction, we assumed that a steady air pressure wave W, as sketched in Figure 4, traveled in the positive direction of the x-axis with a constant speed vP. The waveform of the air pressure wave was an isosceles triangle whose base length, i.e., the wavelength λ, was 10 km or 20 km. The maximum and minimum pressures pm of the positive and negative air pressure waves were 2 hPa and −2 hPa, respectively, with reference to the values in the meteotsunami and eruption cases [11,21]. The position of the air pressure wave center at the initial time, i.e., t = 0 s, was x0 = 50 km.


Figure 4: Waveform of the steady air pressure wave W at the initial time, i.e., t = 0 s. The air pressure wave traveled in the positive direction of the x-axis with constant speed vp.
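The isosceles-triangle waveform of Figure 4 can be written compactly; below is a sketch under the stated parameters (the function name and argument defaults are assumptions, with pm in pascals):

```python
import numpy as np

def air_pressure(x, t, pm=200.0, lam=10e3, x0=50e3, vp=207.0):
    """Surface pressure p0(x, t) [Pa] of the moving triangular wave W.

    pm: peak pressure (2 hPa = 200 Pa), lam: base length (wavelength),
    x0: initial center position [m], vp: propagation speed [m/s].
    """
    xc = x0 + vp * t                   # center of the wave at time t
    s = np.abs(x - xc) / (lam / 2.0)   # 0 at the peak, 1 at the base ends
    return pm * np.clip(1.0 - s, 0.0, None)
```

A negative pm gives the negative air pressure wave considered later.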

The densities of the upper and lower layers were ρ1 = 1000 kg/m3 and ρ2 = 1025 kg/m3, respectively. Both the initial velocity potentials ϕ1(x, 0 s) and ϕ2(x, 0 s) were 0 m2/s. The grid width Δx was 250 m and the time step interval Δt was 1 s.

Excitation of the Surface Mode

In Figure 4, the wavelength λ and the maximum pressure pm of the air pressure wave were 10 km and 2 hPa, respectively. In the initial still water state, the total water depth h was 5000 m and the upper layer depth h1 was 1000 m, in Figure 1. For linear shallow water waves, the phase velocity of the surface mode, Cs, is √(gh) ≃ 220 m/s. When the traveling velocity of the air pressure wave, vp, was 207 m/s, which is close to Cs, the time variations of the air pressure distribution and both the surface and interface profiles are as depicted in Figure 5, in which the results for 100 s ≤ t ≤ 1000 s are displayed every 100 s.


Figure 5: Time variations of the air pressure distribution, surface profile, and interface profile every 100 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 207 m/s, respectively.

Figure 5 indicates that the crests and troughs in the surface mode were excited by the Proudman resonance not only at the surface but also at the interface, because the positions of the surface and interface were relatively close. At t = 100 s, water wave crests had been generated where the air pressure rose, and troughs where it fell. The length of the crests and troughs was approximately half the wavelength of the air pressure wave. Thereafter, the crests gradually pulled away from the air pressure wave because the surface mode speed was greater than the air pressure wave speed. By t = 1000 s, the crests were propagating as free waves, whereas the troughs remained constrained by the air pressure wave, and the wavelength of each crest and trough was approximately the same as that of the air pressure wave.

When the seabed is partially sloping, Figure 6 depicts the numerical results for the same conditions as in the case above, except for the topography, with the seabed position b described as

b = −5000 m for 0 ≤ x < 150 km,

b = −3500 m − 1500 m × cos[π(x/150 km − 1)] for 150 km ≤ x ≤ 300 km.       (10)
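As a quick consistency check on Equation (10), the two segments join continuously at x = 150 km with b = −5000 m, and the slope ends with b = −2000 m at x = 300 km; a sketch (the function name is an assumption):

```python
import numpy as np

def seabed(x):
    """Seabed elevation b(x) [m] of Eq. (10); x in meters."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    b = np.full_like(x, -5000.0)                 # flat segment, 0 <= x < 150 km
    s = x >= 150e3                               # sloping segment
    b[s] = -3500.0 - 1500.0 * np.cos(np.pi * (x[s] / 150e3 - 1.0))
    return b
```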


Figure 6: Time variations of the air pressure distribution, surface profile, and interface profile every 100 s. The seabed profile is also depicted, where the seabed position b is described by Equation (10). The initial water depth in the upper layer, h1, was 1000 m. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 207 m/s, respectively.

As indicated in Figure 6, second peaks of water wave crests were generated when the air pressure wave speed approached the surface mode speed on the slope. Moreover, both the water wave crests and troughs were amplified by shallowing on the slope after they moved away from the air pressure wave. It should be noted that the shallowing effect requires water waves that are traveling as free waves apart from the air pressure waves that excited them. When an eruption creates air pressure waves with different speeds, as in the case of the 2022 Hunga Tonga–Hunga Haʻapai volcanic eruption, the air pressure waves excite tsunamis at water depths corresponding to the air pressure wave speeds [24], and each tsunami traveling apart from the air pressure wave that excited it can be amplified by shallowing on a ridge, shelf slope, continental shelf, etc. Tsunamis traveling as free waves after being excited by air pressure waves may also be amplified by being passed by subsequent air pressure waves over topography [21], as indicated in the water wave crests at t = 1000 s in Figure 6. Moreover, bay oscillations, currents, and horizontally two-dimensional changes in topography may amplify tsunamis, similar to submarine earthquake tsunamis.

Excitation of the Internal Mode

The wavelength λ and the maximum pressure pm of the air pressure wave were 10 km and 2 hPa, respectively, in Figure 4. The still water depth h was uniformly 5000 m, and the still water depth ratio h1/h was 0.2, in Figure 1. The internal mode speed for linear shallow water waves without surface waves is

Ci = √[(ρ2 − ρ1)gh1h2 / (ρ1h2 + ρ2h1)],      (11)

so Ci ≃ 14 m/s in the present case. We assumed that, for 0 s ≤ t < 1000 s, the air pressure wave speed vp was 14 m/s, which was almost equal to Ci; the air pressure wave then stopped at t = 1000 s, and the air pressure distribution remained stagnant for t ≥ 1000 s. The time variations of the air pressure distribution and both the surface and interface profiles are depicted in Figure 7, in which the results for 200 s ≤ t ≤ 2000 s are displayed every 200 s.
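Numerically, the two mode speeds for the stated parameters work out as follows (a sketch; the rigid-lid two-layer long-wave formula is assumed for Ci, consistent with Ci ≃ 14 m/s):

```python
import math

g = 9.8                       # gravitational acceleration [m/s^2]
rho1, rho2 = 1000.0, 1025.0   # layer densities [kg/m^3]
h1, h2 = 1000.0, 4000.0       # still water layer depths [m]

# Surface mode: Cs = sqrt(g h) with h = h1 + h2
Cs = math.sqrt(g * (h1 + h2))

# Internal mode: rigid-lid two-layer long-wave speed (an assumed form)
Ci = math.sqrt(g * (rho2 - rho1) * h1 * h2 / (rho1 * h2 + rho2 * h1))

print(Cs, Ci)  # roughly 221 m/s and 14 m/s
```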


Figure 7: Time variations of the air pressure distribution, surface profile, and interface profile every 200 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 14 m/s, respectively.

Based on Figure 7, the internal waves in the internal mode were excited by the Proudman resonance, and the crest in particular was amplified remarkably. Conversely, free surface waves in the internal mode hardly appeared because the surface wave crest was constrained by the stagnant air pressure distribution.

When the wavelength λ and the minimum pressure pm of the air pressure wave were 20 km and −2 hPa, respectively, Figure 8 presents the numerical results under otherwise the same conditions as in the above case.


Figure 8: Time variations of the air pressure distribution, surface profile, and interface profile every 200 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, minimum pressure pm, and speed vp of the air pressure wave were 20 km, −2 hPa, and 14 m/s, respectively.

In Figure 8, the waveform of the generated internal waves propagating as free waves differs from the vertically inverted waveform of the above-mentioned internal waves generated by the positive-pressure air wave, disregarding the difference in wavelength. Therefore, future work is required to investigate the stability of upward and downward convex internal waves due to an air pressure wave, considering higher-order terms of the velocity potential.

Conclusion

The excitation of surface and internal water waves by an air pressure wave was numerically simulated using the nonlinear shallow water model of velocity potential. The water waves were excited when the air pressure wave speed was close to the water wave speed in each mode. The surface mode waves traveling as free waves after being excited by an air pressure wave were also amplified by the shallowing on the sloping seabed. When the air pressure wave, the speed of which was close to the internal mode speed, stopped, free surface waves in the internal mode hardly appeared, unlike the free internal waves.

In the present model, wave dispersion is ignored, so in the future, the excitation of relatively shorter water waves by air pressure waves should be investigated using a numerical model with higher-order terms of velocity potential.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hsu MK, Liu AK, Liu C (2000) A study of internal waves in the China Seas and Yellow Sea using SAR. Continental Shelf Research 20: 389-410.
  2. Nam S, Kim DJ, Lee SW, Kim BG, Kang KM, Cho YK (2018) Nonlinear internal wave spirals in the northern East China Sea. Scientific Reports 8.
  3. Kanari S (1973) Internal waves in Lake Biwa (II)—numerical experiments with a two layer model. Bulletin of the Disaster Prevention Research Institute 22: 70-96.
  4. Jiao C, Kumagai M, Okubo K (1993) Solitary internal waves in Lake Biwa. Bulletin of the Disaster Prevention Research Institute 43: 61-72.
  5. Hibiya T (1988) The generation of internal waves by tidal flow over Stellwagen Bank. Journal of Geophysical Research 93: 533-542.
  6. Le Boyer A, Alford MH (2021) Variability and sources of the internal wave continuum examined from global moored velocity records. Journal of Physical Oceanography 51: 2807-2823.
  7. Vilibić I, Monserrat S, Rabinovich A, Mihanović H (2008) Numerical modelling of the destructive meteotsunami of 15 June, 2006 on the coast of the Balearic Islands. Pure and Applied Geophysics, 165 : 2169-2195.
  8. Bailey K, DiVeglio C, Welty, A (2014) An examination of the June 2013 East Coast meteotsunami captured by NOAA observing systems. NOAA Technical Report, NOS CO-OPS 079.
  9. Niu X, Zhou H (2015) Wave pattern induced by a moving atmospheric pressure disturbance. Applied Ocean Research 52 : 37-42.
  10. Hibiya T, Kajiura K (1982) Origin of the Abiki phenomenon (a kind of seiche) in Nagasaki Bay. Journal of the Oceanographical Society of Japan 38: 172-182.
  11. Kakinuma T (2019) Long-wave generation due to atmospheric-pressure variation and harbor oscillation in harbors of various shapes and countermeasures against meteotsunamis. In Natural Hazards—Risk, Exposure, Response, and Resilience; Tiefenbacher JP, Ed : IntechOpen: London, Pg : 81-109.
  12. Geisler JE (1970) Linear theory of the response of a two layer ocean to a moving hurricane. Geophysical and Astrophysical Fluid Dynamics 1 : 249-272.
  13. Dotsenko SF (1991) Generation of long internal waves in the ocean by a moving pressure zone. Soviet Journal of Physical Oceanography 2 : 163-170.
  14. Proudman J (1929) The effects on the sea of changes in atmospheric pressure. Geophysical Journal International 2: 197-209.
  15. Whitham GB (1974) Linear and Nonlinear Waves, John Wiley & Sons, Inc.: New York, NY, Pg : 511-532.
  16. Lee S, Yates G, Wu T (1989) Experiments and analyses of upstream-advancing solitary waves generated by moving disturbances. Journal of Fluid Mechanics 199 : 569-593.
  17. Kakinuma T, Akiyama M (2007) Numerical analysis of tsunami generation due to seabed deformation. In Coastal Engineering 2006; Smith JM, Ed.; World Scientific Publishing Co. Pte. Ltd.: Singapore. Pg: 1490-1502.
  18. Dalphin J, Barros R (2018) Optimal shape of an underwater moving bottom generating surface waves ruled by a forced Korteweg-de Vries equation. Journal of Optimization Theory and Applications 180: 574-607.
  19. Michele S, Renzi E, Borthwick A, Whittaker C, Raby A (2022) Weakly nonlinear theory for dispersive waves generated by moving seabed deformation. Journal of Fluid Mechanics 937.
  20. Garrett CJR (1970) A theory of the Krakatoa tide gauge disturbances. Tellus 22 : 43- 52.
  21. Kakinuma T (2022) Tsunamis generated and amplified by atmospheric pressure waves due to an eruption over seabed topography. Geosciences 12.
  22. Kakinuma T, Hisada M (2023) A numerical study on the response of a very large floating airport to airplane movement. Eng 4: 1236-1264.
  23. Kakinuma T (2003) A nonlinear numerical model for surface and internal waves shoaling on a permeable beach. In Coastal Engineering VI; Brebbia CA, Lopez-Aguayo F, Almorza D, Eds.; Wessex Tech. Press, Pg: 227-236.
  24. Yamashita K, Kakinuma T (2022) Interpretation of global tsunami height distribution due to the 2022 Hunga Tonga-Hunga Ha’apai volcanic eruption. Preprint available at Research Square.

Double Cote’s Spiral in the Galaxies M83 and NGC 1566, and a Cyclone in the South Georgia and South Sandwich Islands

DOI: 10.31038/GEMS.2023532


This is a comparative analysis of the shape of spiral galaxies and of the subtropical cyclone that formed north of South Georgia Island and passed north of the South Sandwich Islands, in the South Atlantic Ocean. Subtropical cyclones with double spirals appear to be common in these areas of the South Atlantic. A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone; they can form between the equator and the 50th parallel. In mathematics, a spiral is a curve that emanates from a point, moving farther away as it revolves around the point. The characteristic shape of hurricanes, cyclones, and typhoons is a spiral. The cyclone’s double spiral shape, whose mathematical equation has already been identified as a Cote’s spiral by Gobato et al. (2022), is discussed here alongside the double spiral shape of galaxies shown by Lindblad (1964) and studied by others [44].

The South Georgia Group lies about 1,390 km (860 mi; 750 nmi) east-southeast of the Falkland Islands, at 54°-55°S, 36°-38°W. It comprises South Georgia Island itself, by far the largest island in the territory, the islands that immediately surround it, and some remote and isolated islets to the west and east-southeast. It has a total land area of 3,756 square kilometers (1,450 sq mi), including satellite islands but excluding the South Sandwich Islands, which form a separate island group [53,56]. A cyclone is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone) [14,27,29]. A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone; they can form between the equator and the 50th parallel [19,26,27].

These storms usually have a radius of maximum winds that is larger than what is observed in purely tropical systems, and their maximum sustained winds have not been observed to exceed about 32 m/s (64 knots). Subtropical cyclones sometimes become true tropical cyclones, and likewise, tropical cyclones occasionally become subtropical storms. Subtropical cyclones in the Atlantic basin are classified by their maximum sustained surface winds: subtropical depressions have surface winds less than 18 m/s (35 knots), while subtropical storms have surface winds greater than or equal to 18 m/s [9-21,26,27,29].

In mathematics, a spiral is a curve that emanates from a point, moving farther away as it revolves around the point [23-25]. The characteristic shape of hurricanes, cyclones, and typhoons is a spiral [26,27,29,34-41]. There are several types of spirals, and determining the characteristic equation of the spiral that the cyclone bomb (CB) [28] fits is the goal of this work. Spiral galaxies form a class of galaxy originally described by Edwin Hubble in his 1936 work The Realm of the Nebulae and, as such, form part of the Hubble sequence. Most spiral galaxies consist of a flat, rotating disk containing stars, gas and dust, and a central concentration of stars known as the bulge. These are often surrounded by a much fainter halo of stars, many of which reside in globular clusters [54].

The core of the cyclone presents the form of a double spiral (Figure 1), in the same way as the galactic spirals studied by Lindblad (1964) [32]. This spiral is identified as a Cotes spiral by Gobato et al. (2022) [7-11,18-20,22-25,44].
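As a sketch of the curve family invoked here: Cotes’s spirals are the trajectories of a particle under an inverse-cube central force, and the reciprocal (hyperbolic) spiral 1/r = Aθ is one member of the family. The parameter value below is illustrative only; it is not fitted to the cyclone imagery:

```python
import math

def cotes_reciprocal_spiral(A, theta):
    """One member of the Cotes's spiral family: 1/r = A*theta
    (the reciprocal, or hyperbolic, spiral). Returns (x, y)."""
    r = 1.0 / (A * theta)
    return r * math.cos(theta), r * math.sin(theta)

# Sample points along one arm; mirroring each point through the origin
# gives the second arm, i.e. the double spiral compared with the cyclone core.
arm = [cotes_reciprocal_spiral(A=1.0, theta=t / 10.0) for t in range(5, 200)]
second_arm = [(-x, -y) for (x, y) in arm]
print(f"{len(arm)} points per arm")
```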


Figure 1: Image of Georgia, scale 1:200, on April 11, 2023, PM, with the nucleus at the coordinates given in the image [46] [Authors].

The very fine image quality of this camera, coupled with the huge light-collecting power of the VLT, reveals vast numbers of stars within the galaxy. The images were taken in three different parts of the infrared spectrum and the total exposure time was eight and a half hours, split into more than five hundred exposures of one minute each. The field of view is about 13 arcminutes across [49,55].

Figure 2 shows a Hubble image capturing hundreds of thousands of individual stars, thousands of star clusters, and hundreds of supernova remnants in the spiral galaxy M83. Also known as the Southern Pinwheel, this galaxy is located 15 million light-years away from Earth in the constellation Hydra. It was discovered in 1752 by the French astronomer Nicolas Louis de Lacaille. With an apparent magnitude of 7.5, M83 is one of the brightest spiral galaxies in the night sky. It can be observed using a pair of binoculars, most easily in May [49,50].


Figure 2: The spectacular spiral galaxy M83 imaged using the impressive power of HAWK-I [49,50].

NGC 1566, sometimes known as the Spanish Dancer, is an intermediate spiral galaxy in the constellation Dorado, positioned about 3.5° to the south of the star Gamma Doradus (Figure 3). It was discovered on May 28, 1826 by the Scottish astronomer James Dunlop. At 10th magnitude, it requires a telescope to view. The distance to this galaxy remains elusive, with measurements ranging from 6 Mpc up to 21 Mpc [50,51]. The small but extremely bright nucleus of NGC 1566 is clearly visible in this image, a telltale sign of its membership of the Seyfert class of galaxies. The centers of such galaxies are very active and luminous, emitting strong bursts of radiation and potentially harboring supermassive black holes that are many millions of times the mass of the Sun [50,51].


Figure 3: Hubble image shows NGC 1566, a beautiful galaxy located approximately 40 million light-years away in the constellation of Dorado (The Dolphinfish). NGC 1566 is an intermediate spiral galaxy, meaning that while it does not have a well-defined bar-shaped region of stars at its center like barred spirals it is not quite an unbarred spiral either [50,51].

NGC 1566 is not just any Seyfert galaxy; it is the second brightest Seyfert galaxy known. It is also the brightest and most dominant member of the Dorado Group, a loose concentration of galaxies that together comprise one of the richest galaxy groups of the southern hemisphere. This image highlights the beauty and awe-inspiring nature of this unique galaxy group, with NGC 1566 glittering and glowing, its bright nucleus framed by swirling and symmetrical lavender arms [50,51].

Figure 1 shows the image of Georgia, scale 1:200, on April 11, 2023, PM, with the nucleus at the coordinates given in the image. In the atmospheric pressure gradient model generated by the Zoom Earth system on April 11, 2023, at 12:30, the core registered 951 mbar at the approximate coordinates shown in the image. In the surface wind model generated by the Zoom Earth system on April 11, 2023, at 12:00, the winds were 5 km/h WSW, with the nucleus at the coordinates given in the image.

The model of wind currents for the displacement of air masses observed in the images is consistent with observations, which show great turbulence in the vortex. The highlighted cyclone vortex, still in turbulent formation, presents two linear containment barriers in an L shape. The subtropical cyclone that formed northwest of South Georgia and the South Sandwich Islands is here called Georgia. It moved 237 km in 12 h towards the west, from 589 km to 809 km from the center of the coast of South Georgia Island. During this time interval, it maintained an atmospheric pressure at sea level at its vortex close to 951 hPa. It presented rotational winds of 5 km/h approximately 8 km from the central vortex (Figure 4).


Figure 4: Image of Georgia, scale 1:100, in the surface wind model generated by the Zoom Earth system, on April 11, 2023, 12:00, with winds of 5 km/h WSW, and the nucleus at the coordinates given in the image.

The analogous shape of Georgia and the galaxies Messier 83 and NGC 1566, studied here, is clear. These present a double spiral, as studied by Lindblad [47], but with the Cote’s spiral form, Gobato et al. (2022) [8,9,11] (Table 1).

Table 1: Subtropical Cyclone Georgia: Location/Pressure (April 11, 2023)

Time | Coordinates             | Pressure (hPa)
AM   | 53°13'09"S 27°45'05"W   | 951
PM   | 53°16'42"S 24°00'38"W   | 951

With an approximate dimension of 1,000,000 km2 and an area of direct influence of 3,500,000 km2, the subtropical cyclone Georgia, which moved 237 km in 12 h, travelled at an average speed of 19.75 km/h.
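The average translation speed quoted above follows directly from the displacement and the elapsed time:

```python
displacement_km = 237.0  # displacement over the observation window (from the text)
elapsed_h = 12.0
speed_kmh = displacement_km / elapsed_h
print(f"average translation speed: {speed_kmh:.2f} km/h")  # 19.75 km/h
```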

The mathematical model for the atmospheric pressure gradient used by Zoom Earth [43] matches the correct way to scale the atmospheric pressure, as can be seen in the comparison with the satellite images. The model of wind currents for the displacement of air masses observed in the images is consistent with observations, which show great turbulence in the vortex. The highlighted cyclone vortex, still in turbulent formation, presents two linear containment barriers in an L shape and has Georgia’s double spiral Cote’s shape. The analogous shape of Georgia and the galaxies Messier 83 and NGC 1566 studied here is clear: they present a double spiral, as studied by Lindblad (1964) [47], but with the Cote’s spiral form of Gobato et al. (2022) [8,9,11,44].

References

  1. (2023) Creative CC BY-SA. Cyclone.
  2. American Meteorological Society (2020) Glossary of Meteorology:
  3. Landsea C (2009) Subject: (A6) What is a sub-tropical cyclone?
  4. Atlantic Oceanographic and Meteorological Laboratory.
  5. Armentrout D and Armentrout P (2007) Rourke Publishing (FL) Tornadoes, Series: Earth’s Power.
  6. Edwards R (2006) Storm Prediction National Oceanic and Atmospheric Administration. The Online Tornado.
  7. Gobato R, Mitra A and Valverde L (2022) Tornadoes analysis in Concordia, Santa Catarina, Southern Brazil, 2022 season. Aeronautics and Aerospace Open Access Journal.
  8. Gobato R, Mitra A, Gobato MRR and Heidari A (2022) Cote’s Double Spiral of Extra Tropical Cyclones. Journal of Climatology & Weather Forecasting.
  9. Gobato R, Mitra A, Heidari A and Gobato MRR (2022) Spiral galaxies and powerful extratropical cyclone in the Falklands Islands. Physics & Astronomy International Journal.
  10. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Spiral Galaxies and Powerful Extratropical Cyclone in the Falklands Islands.
  11. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Extratropical Cyclone in the Falklands Islands and the Spiral. Sumerianz Journal of Scientific Research.
  12. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Spiral Galaxies and Extratropical Cyclone.
  13. Gobato R, Mitra A (2022) Vortex Storms in the West of Santa Catarina. Biomedicine and Chemical Sciences.
  14. Gobato R, Heidari A and Mitra A (2021) Mathematics of the Extra-Tropical Cyclone Vortex in the Southern Atlantic Ocean. Journal of Climatology & Weather Forecasting.
  15. Bluestein, HB (2013) Severe Convective Storms and Tornadoes: Observations and Dynamics, Series: Springer Praxis Books Springer-Verlag Berlin Heidelberg.
  16. Gobato R, Gobato MRR and Heidari A (2018) Evidence of Tornadoes Reaching the Countries of Rio Branco do Ivai and Rosario de Ivai, Southern Brazil on June 6, 2017. Climatol Weather Forecasting.
  17. Gobato R, Gobato MRR and Heidari A (2019) Evidence of Tornadoes Reaching the Countries of Rio Branco do Ivai and Rosario de Ivai, Southern Brazil on June 6, 2017.
  18. Gobato R, Gobato MRR and Heidari A (2019) Storm Vortex in the Center of Paraná State on June 6, 2017: A Case Study. Sumerianz Journal of Scientific Research.
  19. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Vortex Cote’s Spiral in an Extratropical Cyclone in the Southern Coast of Brazil. Archives in Biomedical Engineering and Biotechnology.
  20. Gobato R and Heidari A (2020) Vortex Cote’s Spiral in an Extratropical Cyclone in the Southern Coast of Brazil. J Cur Tre Phy Res App.
  21. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Cotes’s Spiral Vortex in Extratropical Cyclone Bomb South Atlantic Oceans. Aswan University Journal of Environmental Studies (AUJES)
  22. Gobato R, Gobato A and Fedrigo DFG (2016) Study of tornadoes that have reached the state of Parana. Parana J Sci Educ.
  23. Vossler DL (1999) Exploring Analytical Geometry with Mathematica. Academic Press.
  24. Casey J (2001) A treatise on the analytical geometry of the point, line, circle, and conic sections, containing an account of its most recent extensions, with numerous examples. University of Michigan Library.
  25. Sharipov R (?) Course of Analytical Geometry. Bashkir State University (Russian Federation).
  26. de León M and Rodrigues PR (1989) Methods of Differential Geometry in Analytical Mechanics, Series: Mathematics Studies. Elsevier Science.
  27. Vasquez T (2002) Weather Forecasting Handbook (5th Edition) Weather Graphics
  28. Bluestein HB, Bosart LF (Eds.) Synoptic-Dynamic Meteorology and Weather Analysis and Forecasting: A Tribute to Fred Sanders, Series: Meteorological Monographs 3(55). American Meteorological Society.
  29. Gobato R, and Heidari A (2020) Cyclone Bomb Hits Southern Brazil in 2020. Journal of Atmospheric Science Research.
  30. Rafferty JP (2010) Storms, Violent Winds, and Earth’s Atmosphere. Series: Dynamic Earth. Britannica Educational
  31. Krasny R (1986) A study of singularity formation in a vortex sheet by the point vortex approximation. Fluid Mech.
  32. Saffman PG (1992) Vortex Dynamics. Series: Cambridge monographs on mechanics and applied mathematics. Cambridge University Press.
  33. Sokolovskiy MA and Verron J (2000) Four-vortex motion in the two layer approximation – integrable case.
  34. Whittaker ET and McCrae Sir W (1989) Treatise on analytical dynamics of particles and rigid bodies. Cambridge Mathematical Library, Cambridge University Press.
  35. George JJ (1960) Weather Forecasting for Aeronautics. Elsevier Inc.
  36. Yorke S (2010) Weather Forecasting Made Simple. Countryside Books Countryside Books.
  37. Anderson JD (1984) Fundamentals of Aerodynamics. McGraw-Hill
  38. Weisstein EW (2023) Cotes’s Spiral. Wolfram MathWorld.
  39. Whittaker ET (2022) A Treatise on the Analytical Dynamics of Particles and Rigid Bodies: With an Introduction to the Problem of Three Bodies.
  40. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Cotes’s Spiral Vortex in Extratropical Cyclone bomb South Atlantic Oceans.
  41. Fischer R (1993) Fibonacci Applications and Strategies for Traders: Unveiling the Secret of the Logarithmic Spiral.
  42. Toomre A (?) Theories of Spiral Structure. Annual Review of Astronomy and Astrophysics.
  43. Oort JH (1970) The Spiral Structure of Our Galaxy, Series: International Astronomical Union 38; Becker W, Contopoulos G (Eds.). Springer Netherlands.
  44. Nezlin MV and Snezhkin EN (1993) Rossby Vortices, Spiral Structures, Solitons: Astrophysics and Plasma Physics in Shallow Water Experiments, Series: Springer Series in Nonlinear Dynamics. Springer Verlag Berlin Heidelberg.
  45. Gobato R, Mitra A and Mullick P (2023) Double Spiral Galaxies and the Extratropical Cyclone in South Georgia and the South Sandwich Islands.Climate Research.
  46. Brazil’s Navy. Synoptic Letters (2023) Brazil’s navy. Synoptic Letters.
  47. (2023) Zoom Earth. NOAA/NESDIS/STAR, GOES-East, zoom.earth
  48. Lindblad B (1964) On the circulation theory of spiral structure. Astrophysica Norvegica (12). Stockholms Observatorium, Saltsjobaden.
  49. Gobato R and Heidari A (2020) Vortex hits southern Brazil in 2020.
  50. NASA gov (2017) Messier 83 (The Southern Pinwheel)
  51. ESA/Hubble & NASA (2020) NGC 1566. European Space Agency.
  52. (2023) NGC 1566. Creative Commons.
  53. Jeynes C (2019) MaxEnt double spirals in space-time: Maximum Entropy (Most Likely) Double Helical and Double Logarithmic Spiral Trajectories in Space-Time. Scientific Reports.
  54. (2023) South Georgia and the South Sandwich Islands. Creative Commons. CC BY-SA 3.0. https://en.wikipedia.org/wiki/South_Georgia_ and_the_South_Sandwich_Islands
  55. (2023) Spiral galaxy. Creative Commons.
  56. Heyer HH (2020) The classic spiral Messier 83 seen in the infrared with HAWK-I. ESO. https://www.eso.org/public/images/eso1020a/

Significance of Molecular Genotyping over Serological Phenotyping Techniques in the Determination of Blood Group Systems among Multiply Transfused Patients and Blood Donors to Prevent Alloimmunization: A Review Article

DOI: 10.31038/CST.2023821

Summary

Erythrocyte serological phenotyping is very important in determining the identity of suspected alloantibodies and in facilitating the identification of antibodies that may be formed in the future. Serological phenotyping is a conventional method based on the presence of visible haemagglutination or haemolysis. The technique has limitations: the presence of donor red blood cells in the circulation of recently multiply transfused patients, certain medications, or disease conditions that alter erythrocyte composition can make accurate determination of the blood group of such patients time consuming and difficult to interpret. Determination is often more complicated still when the direct antiglobulin test of such patients is positive and there is no directly agglutinating antibody. Molecular genotyping of blood group systems has led to an understanding of the molecular basis of many blood group antigens; many blood group polymorphisms are associated with a single point mutation in the gene encoding the protein carrying the blood group antigen. This knowledge allows the use of molecular testing to predict the blood group antigen profile of an individual and to overcome the limitations of conventional serological blood group phenotyping. Determination of blood group polymorphisms at the genomic level facilitates the resolution of clinical problems that cannot be addressed by serological techniques.
Applications of blood group genotyping for red cell antigens affect several areas of medicine. They include identifying fetuses at risk for haemolytic disease of the newborn and candidates for Rh immune globulin; determining antigen types for which currently available antibodies are weakly reactive; determining the blood group of patients who have had recent multiple transfusions; increasing the reliability of repositories of antigen-negative RBCs for transfusion; selecting appropriate donors for bone marrow transplantation; providing transfusion support for highly alloimmunized patients; resolving ABO and Rh discrepancies; confirming the A2 subgroup status of kidney donors; and providing comprehensive typing for patients with haematological diseases requiring chronic transfusion and for oncology patients receiving monoclonal antibody therapies that interfere with pretransfusion testing.

Keywords

Molecular, Serological, Red cell antigens, Alloantibodies and transfusion

Introduction

Blood group systems are characterized by the presence or absence of antigens on the surface of erythrocytes. The specificity of these antigens is controlled by a series of genes, which can be allelic or linked very closely on the same chromosome, and they persist throughout life and serve as identity markers. Presently, the International Society of Blood Transfusion (ISBT) has acknowledged about 36 blood group systems, and more than 420 blood group antigens have been discovered on the surface of the human red cell (Storry et al., 2016). The clinical importance of RBC antigens is associated with their ability to induce alloantibodies capable of reacting at 37°C (body temperature); these antibodies have the ability to cause destruction of erythrocytes. The major clinically significant antibodies are directed against ABO, Rh, Kell, Kidd and Duffy antigens (Karafin et al., 2018). ABO antibodies are naturally occurring, while Rh and Kell antibodies arise from immunization. The immune system produces alloantibodies when it is exposed to foreign antigens (incompatible erythrocytes); these antibodies form complexes with donor cells, causing haemolytic transfusion reactions. Patients with Rh and Kell alloantibodies should be transfused with blood lacking the corresponding antigens, because these antibodies are capable of causing severe haemolytic anaemia and haemolytic disease of the newborn (Singhal et al., 2017). Hence, it is important to transfuse females of child-bearing age with compatible blood in order to minimize the possibility of sensitizing their immune systems to clinically important antigens (Guelsin et al., 2015).
Unexpected incompatibility reactions are, apart from transfusion-transmissible infections, the major risk of transfusing blood and blood products. Clinically significant alloantibodies play a critical role in transfusion medicine by causing either acute or delayed haemolytic transfusion reactions (HTRs) and haemolytic disease of the fetus and newborn (HDFN) ranging from mild to severe grades. The rate of production of alloantibodies capable of destroying foreign or donor red cells is higher among multi-transfused patients than in the general population (Karafin et al., 2017). Serological phenotyping of blood group systems is the classical, conventional method of detecting erythrocyte antigens by haemagglutination or haemolysis. Accurate phenotyping of multi-transfused patients is a very complex process due to the presence of donor blood cells in the patient’s circulation, unless serological phenotyping is performed before the initiation of transfusion. Blood group genotyping has recently been developed to determine the blood group antigen profile of an individual, with the goal of reducing risk or identifying a fetus at risk of haemolytic disease of the newborn (HDN). Blood group genotyping improves the accuracy of blood typing where serology alone cannot resolve the red cell phenotype, especially in individuals with weak antigen expression due to genetic variants, in rare phenotypes for which antisera are unavailable, after recent multiple transfusions of blood or blood products, or in patients whose RBCs are coated with immunoglobulin (Ye et al., 2016).
Genotyping also helps to determine which phenotypically antigen-negative patients can receive antigen-positive RBCs, to type donors for antibody identification panels, to type patients who have an antigen that is expressed weakly on RBCs, to determine RhD zygosity, to mass-screen for antigen-negative donors, and to routinely select donor units antigen-matched to recipients beyond ABO and RhD, which reduces complications of blood and blood product transfusion (Guelsin et al., 2010). The growth of whole-genome sequencing in chronic disease and for general health will provide patients’ comprehensive extended blood group profiles as part of their medical records, to be used to inform selection of the optimal transfusion therapy (Connie and Westhoff, 2019). DNA-based genotyping is being used as an alternative to serological antibody-based methods to determine blood groups for matching donor to recipient, because most antigenic polymorphisms are due to single nucleotide changes in the respective genes. Importantly, the ability to test by genetic techniques for antigens for which there are no serologic reagents is a major medical advance, making it possible to identify antibodies and find compatible donor units, which can be lifesaving. Molecular genotyping of blood group antigens is thus an important tool being introduced successfully in transfusion medicine. Genotyping has been shown to be effective and advantageous for predicting the phenotype from genomic DNA with a high degree of precision (da Costa et al., 2013). A notable advantage of molecular testing is its ability to identify variant alleles associated with antigens that are expressed weakly or that have missing or altered epitopes, thus helping to resolve discrepant or incomplete blood group phenotyping. The disadvantages of molecular testing are mainly its longer turnaround time and higher cost compared with serologic typing (Marilia et al., 2019).
The molecular basis of most erythrocyte antigens is known, and numerous DNA analysis methodologies have been developed, all based on PCR, which can detect several alleles simultaneously as long as the alleles studied yield products of different sizes (Swati et al., 2018). The detection of blood group antigens is essential in transfusion practice in order to prevent alloimmunization, especially in multiply transfused patients. Erythrocyte antibodies that are clinically significant in transfusion medicine can lead to acute or delayed transfusion reactions and to haemolytic disease of the fetus and newborn, which increase patient morbidity and mortality. In addition, alloimmunization may delay the location of a compatible blood bag. The probability of an individual producing one or more anti-erythrocyte antibodies is approximately 1% per unit of blood transfused, and in chronically multiply transfused patients the alloimmunization rate may reach 50%. Both blood donors and recipients can be genotyped for all the clinically significant blood group antigens, and antigen-matched blood can be provided to the recipient (Guelsin et al., 2010). This approach could significantly reduce the rate of alloimmunization.
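Assuming, as a simplification, that the roughly 1% per-unit risk quoted above acts independently for each unit transfused (an assumption made for illustration, not a claim from the cited studies), the cumulative probability of alloimmunization after n units is 1 − 0.99^n, which rises toward the ~50% reported in chronically transfused patients:

```python
def cumulative_alloimmunization_risk(n_units, per_unit_risk=0.01):
    """P(at least one alloantibody) after n units, assuming each unit
    independently carries the same per-unit immunization risk."""
    return 1.0 - (1.0 - per_unit_risk) ** n_units

for n in (1, 10, 50, 100):
    p = cumulative_alloimmunization_risk(n)
    print(f"{n:>3} units: {100 * p:.1f}%")
```

Under this model the risk after 50 units is already about 40%, illustrating why antigen matching matters most for chronically transfused patients.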

Serological Phenotyping

Knowledge of the role of blood groups, with their antigens and variants, in alloimmunization was pivotal for the development of transfusion practices and of medical interventions that require blood transfusion, such as trauma care, organ transplantation, cancer treatment, and the management of haematological diseases (like sickle cell disease, thalassaemia, and aplastic anaemia). Serology has long been considered the gold standard technique for blood group typing (Yazdanbakhsh et al., 2014). Serological methods detect the antigens expressed on the red cell using specific antibodies and can be carried out manually or on automated platforms. Typing blood group antigens by this method is easy, fast, reliable, and accurate for most antigens. However, serology has limitations, some of which cannot be overcome when it is used as a standalone testing platform (Das et al., 2020). The scarcity of serological reagents for blood group systems for which no monoclonal antibody is available is a major limitation of the serological technique. In addition, human serum samples from different donors vary in reactivity, which is an issue when a nearly exhausted batch of reagent needs to be replaced. This is especially problematic when an alloantibody against that antigen is suspected of causing adverse events after transfusion. In those circumstances, molecular methods can be used as an alternative or complementary test for identification of the genes associated with blood group antigen expression and for prediction of the antigenic profile.

Molecular Genotyping

The identification of genes that encode proteins carrying blood group antigens, and of the molecular polymorphisms that result in the distinct antigenicity of these proteins, is possible using molecular typing methods, which facilitate blood typing resolution in complex cases and overcome the limitations of serological techniques when dealing with allo-immunized and multi-transfused patients (da Costa et al., 2013). In addition, molecular techniques have allowed identification of genes encoding clinically relevant antigens for which serological reagents are not available. In those instances, genotyping is critical to resolving clinical challenges. Blood group genotyping is performed to predict blood group antigens by identifying specific polymorphisms associated with the expression of an antigen (Connie and Westhoff, 2019). Most variations in blood group antigens are linked to point mutations, but for some, other molecular mechanisms are responsible, such as deletion or insertion of a gene, an exon or a nucleotide sequence (for example the ABO, RH, and DO blood group systems), sequence duplication (for example the RHD gene and the GE blood group system), nonsense mutations (for example the RHD gene), and hybrid genes (for example the RH, MNS, ABO, and CH/RG blood group systems) (Bakanay et al., 2013). In contrast to serological techniques, molecular genotyping tests are performed on DNA obtained from nucleated cells and are not affected by the presence of donor red cells in the patient's sample, a common occurrence in samples from patients with recent multiple transfusions of blood and blood products. Thus, erythrocyte genotyping can resolve blood group typing discrepancies in multi-transfused patients presenting with mixed-field reactions, alloantibodies, or autoantibodies. Blood group genotyping can also substantially help patients who were not previously phenotyped and need regular transfusions, by facilitating their management and preventing alloimmunization (Guelsin et al., 2015).
Studies comparing serology and genotyping in multi-transfused population such as patients with thalassaemia and sickle cell disease have shown that genotyping is superior to serology for resolving discrepancies. Use of genotyped matched units has been shown to decrease alloimmunization rates, increase haemoglobin levels and in vivo erythrocyte survival, and diminish frequency of transfusions.
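Conceptually, the genotyping step reduces to mapping detected alleles onto a predicted antigen phenotype. The sketch below illustrates that lookup for two hypothetical loci; the allele names follow ISBT-style conventions, but the table is illustrative only and far smaller than the allele tables real assays interrogate.

```python
# Illustrative mapping from a pair of detected alleles to a predicted
# phenotype. Real assays use curated ISBT allele tables covering many
# more polymorphisms and silencing mutations.
PREDICTED_PHENOTYPE = {
    ("FY*A", "FY*A"): "Fy(a+b-)",
    ("FY*A", "FY*B"): "Fy(a+b+)",
    ("FY*B", "FY*B"): "Fy(a-b+)",
    ("JK*A", "JK*B"): "Jk(a+b+)",
}

def predict_phenotype(allele1: str, allele2: str) -> str:
    """Predict the red cell phenotype from two detected alleles;
    unknown combinations are flagged for further investigation."""
    key = tuple(sorted((allele1, allele2)))
    return PREDICTED_PHENOTYPE.get(key, "indeterminate: refer for sequencing")

print(predict_phenotype("FY*B", "FY*A"))  # heterozygote predicts Fy(a+b+)
```

The fall-through branch mirrors the clinical reality described above: when a genotype is not in the reference table, the prediction cannot be made and the sample needs further molecular workup.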

Erythrocyte Antigen Disparity and its Significance in Transfusion Medicine

Patients who develop alloantibodies have typically received multiple transfusions or been exposed through pregnancy, making alloimmunization a particular problem for patients requiring chronic RBC transfusion support as a result of haematological diseases (Ngoma et al., 2016). The incidence of alloimmunization varies widely with the individual patient's health condition, rate of exposure to foreign antigens, ethnicity and geographical area. Knowledge of the genotypes of both patients and donors has led to a greater understanding of potential mechanisms for persistent alloimmunization despite serologic antigen matching for transfusion, and extended matching to include the Duffy, Kidd, and MNS systems has been shown to reduce the rate of alloimmunization (Jenna and Meghan, 2018). It is therefore clear that serologic phenotyping is inadequate to capture allelic diversity in minority populations. Without accurate characterization of the patient and donor genotypes, true antigen matching to prevent alloimmunization is not possible (Chou et al., 2013). In addition, there is still an inadequate understanding of the risk of alloimmunization associated with specific blood group gene haplotypes, particularly for RHD. Large, multi-institutional studies with genotyping of both patients and donors, and better characterization of the specificity of the antibodies formed, are needed to clarify the clinical significance and immunogenic risks of variant alleles (Putzulu et al., 2017).

Blood Transfusion and Risk of Erythrocyte Alloimmunization

Erythrocyte alloimmunization is a serious adverse event of transfusion of blood and blood products which can cause further clinical problems in the recipient, including worsening of anaemia, development of autoantibodies, acute or delayed haemolytic transfusion reactions, bystander haemolysis, organ failure, and serious complications during pregnancy (Singhal et al., 2017). Frequent transfusions can lead to the production of multiple alloantibodies, often associated with autoantibodies, requiring extensive serological workups and additional transfusions for proper treatment, and increasing the time and resources needed to find compatible RBC units (Yazdanbakhsh et al., 2014). Reported erythrocyte alloimmunization rates vary considerably depending on the population and disease studied. The rates are estimated at between 1% and 3% in patients who receive episodic transfusions, while for patients who receive chronic blood transfusions, such as patients with sickle cell disease, rates vary between 8% and 76% (Chou et al., 2013). The development of RBC antibodies is influenced by many factors, including the recipient's gender, age, and underlying disease. The diversity of blood group antigen expression among the donor and patient populations contributes substantially to the high alloimmunization rates (Ryder et al., 2014). Studies in sickle cell disease patients have reported that inflammation is associated with a higher likelihood of alloimmunization, and it is suggested that the extent of the alloimmune response is greater when RBCs are transfused in the presence of an inflammatory signal. Several studies have suggested that genetic variation in immune-related genes and human leukocyte antigens might be associated with susceptibility to, or protection from, alloimmunization (Zimring and Hendrickson, 2008).

Consequences of Alloimmunization in Transfusion Medicine

Depending on the antigen and the clinical significance of the antibody formed, patients can suffer morbidity and mortality due to an acute or delayed haemolytic transfusion reaction if incompatible blood or blood products are transfused (Jenna and Meghan, 2018). A rare but life-threatening consequence of recurrent transfusions is a hyperhaemolytic reaction, which occurs in patients with haemoglobinopathies, especially sickle cell disease; the mechanism of the hyperhaemolytic reaction in SCD may be a complication of alloimmunization, with possible contribution from an underlying genetic predisposition (Putzulu et al., 2017). The development of RBC alloantibodies also impacts patient care by increasing the cost and time required to find compatible RBC units. Once an RBC antibody is identified, all subsequent transfusions must be negative for that antigen to prevent a delayed haemolytic transfusion reaction from a robust secondary immune response (Ngoma et al., 2016). An additional risk for previously sensitized patients is the inability to detect evanesced RBC antibodies at future transfusion events; failure to identify pre-existing antibodies is a significant contributor to haemolytic transfusion reactions. Minority patients may be at greater risk of complications from alloimmunization because the presence of antibodies may not be accurately characterized. One reason is that they are more likely to be negative for high-prevalence antigens (Jenna and Meghan, 2018). Antibodies to high-prevalence RBC antigens will react with all reagent RBCs. This is further complicated if the patient also has a positive direct antiglobulin test (DAT), as patients with SCD frequently do; the antibody to a high-prevalence antigen can then easily be confused with a warm autoantibody (Jain et al., 2016).
In addition, because most reagent RBCs are not from minority populations, there is a risk that immunogenic Rh variants and other low-prevalence antigens are not expressed on the reagent RBCs, rendering antibody detection tests false-negative. Genotyping can be particularly useful to clarify antibody specificity, identify if there is a lack of a high-prevalence antigen, and identify appropriate donors.

Prevention of Alloimmunization and Improvement of Transfusion Therapy

Prevention of alloimmunization is desirable for any transfusion of blood or blood products. For patients not previously transfused, or receiving only episodic transfusions, matching for all clinically significant antigens is not of great concern, but omitting it can result in alloimmunization against non-matched antigens (Agrawal et al., 2016). For previously transfused patients, particularly transfusion-dependent patients, the alloimmunization risk is higher and the management of alloimmunized patients is of greater concern. Their alloimmunization status, including antigens of low clinical significance, is a critical part of their clinical history that may enable health care providers to take measures to prevent further alloimmunization (Singhal et al., 2017). Antigens have variable immunogenicity, and not all blood group antigens are involved in the production of clinically significant antibodies after blood transfusion or pregnancy. Ideally, every blood transfusion should be compatible for the most clinically significant antigens to prevent alloimmunization (Swati et al., 2018). However, standard pre-transfusion cross-matching is only performed for the ABO blood group and the Rh (D) antigen: ABO matching is performed to avoid acute haemolytic transfusion reactions caused by naturally occurring IgM antibodies against ABO antigens, and Rh (D) matching is performed because of the high immunogenicity of Rh (D), which is implicated in delayed haemolytic transfusion reactions and haemolytic disease of the foetus and newborn (Chou et al., 2013). Currently, recommendations for partial and extended donor unit or patient matching are limited to specific groups, including patients on long-term transfusion protocols (sickle cell disease, thalassaemia, and aplastic anaemia), patients who have developed alloantibodies, and patients with warm autoimmune haemolytic anaemia (Kulkarni et al., 2018).
Verification of compatibility for Rh (D, E, C, c, e) and K, which are the most frequent antigens involved in alloimmunization, is considered partial matching. Extended matching should include at least RH (D, C, E, c, e), KEL (K), FY (Fya, Fyb), JK (Jka, Jkb), MNS (S, s) and, if available, additional antigens (Osman et al., 2017). Prevention of an initial alloimmunization event may be even more important than previously appreciated to prevent the development of subsequent antibodies. For patients with a tendency toward forming RBC antibodies, and also having a RBC phenotype with either multiple negative antigens and/or lacking high-prevalence antigens, compatible units may become so rare as to make transfusion support virtually impossible (Wilkinson et al., 2012)
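The partial and extended matching rules above amount to a set check: a unit is acceptable when it lacks every panel antigen that the patient is negative for. The sketch below encodes that rule; the patient and donor records are hypothetical, and real matching logic must also handle variant alleles and antibodies already formed.

```python
# Antigen panels from the text: partial matching covers Rh (D, C, E, c, e)
# and K; extended matching adds Duffy, Kidd and MNS (S, s) antigens.
PARTIAL_MATCH = ["D", "C", "E", "c", "e", "K"]
EXTENDED_MATCH = PARTIAL_MATCH + ["Fya", "Fyb", "Jka", "Jkb", "S", "s"]

def is_compatible(patient_negative: set, donor_antigens: set, panel: list) -> bool:
    """A donor unit is acceptable when, for every antigen in the matching
    panel, the donor does not carry an antigen the patient lacks."""
    return all(
        antigen not in donor_antigens
        for antigen in panel
        if antigen in patient_negative
    )

# Hypothetical patient typed K-negative and Fy(a-): a K+ unit fails
# even partial matching, while a K-negative unit passes extended matching.
patient_negative = {"K", "Fya"}
print(is_compatible(patient_negative, {"D", "c", "e", "K"}, PARTIAL_MATCH))
print(is_compatible(patient_negative, {"D", "c", "e"}, EXTENDED_MATCH))
```

Antigens outside the chosen panel are deliberately ignored, which mirrors why even "extended" matching cannot prevent all alloimmunization events.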

Screening for Clinically Significant Alloantibodies

Alloantibodies are antibodies produced in a patient as a result of exposure to foreign red cell antigens through transfusion of blood or blood products, pregnancy or transplantation (Agrawal et al., 2016). In countries such as Nigeria, there are multiple ethnic groups and considerable racial or genetic heterogeneity in the population, which can be associated with a wide variety of alloantibodies. Other common factors that facilitate alloantibody formation in the recipient include immune competence, the dose of antigen the recipient is exposed to, the route of exposure, and how immunogenic the foreign antigen is (Erhabor et al., 2015). Development of alloantibodies can make it difficult to find compatible blood for transfusion, or it can result in a severe delayed haemolytic transfusion reaction if the antibody titre is low, the antibody goes undetected or is missed, and antigen-positive units are transfused. Evidence-based best practice in the developing world requires that alloantibody testing is carried out as part of the pre-transfusion testing of patients who require a red cell transfusion, as well as of pregnant women presenting to the antenatal clinic at booking (Guelsin et al., 2015). The purpose of this test is to detect the presence of unexpected red cell antibodies in the patient's serum. Once such antibodies are detected during alloantibody screening, every effort must be made to identify the specificity of the alloantibody by performing a panel test. The aim of identifying the specificity of the alloantibody in a patient who requires a red cell transfusion is to enable the Medical Laboratory or Biomedical Scientist to select an antigen-negative donor unit for an appropriate crossmatch (indirect antiglobulin test) for that patient (Agrawal et al., 2016).
Panel testing for a pregnant woman presenting for antenatal booking aims to identify the alloantibody, determine whether it can potentially cause HDFN, and allow monitoring of the antibody titre or quantification every 4 weeks from booking until 28 weeks' gestation, and every 2 weeks thereafter until delivery. This information is important to determine the extent to which the developing foetus is affected by HDFN, to decide whether to monitor the baby for anaemia using Doppler ultrasound, to determine whether the baby will require intrauterine transfusion, and to make an informed decision about possibly delivering the baby early. These evidence-based best practices are not being implemented in many settings in Nigeria (Erhabor et al., 2015). Testing of donor units for clinically relevant red cell antigens other than ABO and Rh D is not routinely carried out (Singhal et al., 2017). This is a complete failure in stewardship by the Nigerian government and can compromise transfusion service delivery to pregnant women and patients who require red cell transfusion. Settings should also implement a policy to routinely test all group O donor units for haemolysins, in order to identify group O donors with high titres of IgG anti-A and/or anti-B, whose blood should be reserved for transfusion to group O recipients, while units that test negative can be transfused to A, B or AB individuals as a way of maximizing the use of the limited allogeneic stock (Obisesan et al., 2015).
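The monitoring cadence described above (every 4 weeks from booking until 28 weeks' gestation, then every 2 weeks until delivery) can be laid out as a simple schedule. The sketch below is a minimal illustration assuming gestational ages in completed weeks and delivery at term; it is not a clinical protocol and real schedules are adjusted to titre results and ultrasound findings.

```python
def titre_schedule(booking_week: int, delivery_week: int = 40) -> list:
    """Gestational weeks at which antibody titres are checked:
    every 4 weeks from booking until 28 weeks, then every 2 weeks
    until delivery (both endpoints assumed, for illustration only)."""
    # 4-weekly visits strictly before 28 weeks
    weeks = list(range(booking_week, min(28, delivery_week), 4))
    # 2-weekly visits from 28 weeks (or from a late booking) to delivery
    start = max(28, booking_week)
    weeks += list(range(start, delivery_week + 1, 2))
    return weeks

print(titre_schedule(12))  # booking at 12 weeks, delivery assumed at 40
```

A booking at 12 weeks yields visits at 12, 16, 20, 24, then 28, 30, ..., 40 weeks, matching the two cadences in the text.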

Applications of Molecular Genotyping Over Serological Phenotyping in Transfusion Medicine

Multiply-transfused Patients

The ability to determine a patient's antigen profile by DNA analysis when haemagglutination tests cannot be used is a useful adjunct to a serologic investigation. Blood group genotyping in the transfusion setting is recommended for multiply transfused patients, such as those with sickle cell disease (SCD), as part of the antibody identification process (Castilho et al., 2018). Determination of a patient's blood type by DNA analysis is particularly useful when a transfusion-dependent patient has produced alloantibodies, as it helps in the selection of antigen-negative RBCs for transfusion. It also assists in the selection of compatible units for patients with discrepancies between genotype and phenotype, leading to increased cell survival and a reduction in transfusion frequency (Bakanay et al., 2013). In addition to its contribution to the general accuracy of red blood cell antigen identification, genotyping of transfusion-dependent SCD patients allows assessment of the risk of alloimmunization against antigens.

Patients Whose RBCs are Coated with IgG

Patients with autoimmune haemolytic anaemia (AIHA), whose RBCs are coated with IgG, cannot be accurately typed for RBC antigens, particularly when directly agglutinating antibodies are not available or IgG removal by chemical treatment of the RBCs is insufficient. Blood group genotyping is very important for determining the true blood group antigens of these patients (Jain et al., 2016). Providing such patients with antigen-matched RBCs typed by blood group genotyping increases erythrocyte in vivo survival, as assessed by rises in haemoglobin levels and a diminished frequency of transfusions.

Blood Donors

DNA-based typing can also be used to antigen-type blood donors both for transfusion and for antibody identification reagent panels. This is particularly useful when antibodies are not available or are weakly reactive (Huang et al., 2019). The molecular analysis of a variant gene can also assist in resolving a serologic investigation.

Resolution of Weak A, B, and D Typing Discrepancies

A proportion of blood donors and patients who historically have been typed as group O are now being recognized as group A or group B with the use of monoclonal antibodies capable of detecting small amounts of the immuno-dominant carbohydrate responsible for A or B specificity (Das et al., 2020). A typing result that differs from the historical record often triggers time-consuming analyses. Since many of the weak subgroups of A and B are associated with altered transferase genes, PCR-based assays can be used to define the transferase gene and thus the ABO group (Nair et al., 2019). Similarly, for the D antigen of the Rh blood group system, a proportion of blood donors who were historically typed as D-negative are now reclassified as D-positive, owing to monoclonal reagents that detect small and specific parts of the D antigen. The known molecular basis of numerous D variants can be used to identify the genes encoding the altered Rh D protein in these individuals (Huang et al., 2019).

Applications to Maternal-fetal Medicine

Alloimmunization against the Rh D antigen during pregnancy is the most frequent cause of haemolytic disease of the newborn (HDN). Immunization occurs when fetal cells, carrying antigens inherited from the father, enter the mother's circulation following fetal-maternal bleeding. The mother, when not expressing the same antigen(s), may produce IgG antibodies against the fetal antigen, and these antibodies can pass through the placenta, causing a range of outcomes from mild anaemia to death of the foetus (Erhabor et al., 2015). Apart from antibodies to the Rh D blood group antigen, other specificities within the Rh system and several other blood group antigens can give rise to HDN, but Rh D is by far the most immunogenic. Prenatal determination of fetal Rh D status is desirable to prevent sensitization and possible hydrops foetalis in foetuses of Rh D negative mothers with Rh D positive fathers. Fetal DNA has been detected in amniotic cells, chorionic villus samples, and, as more recently reported, in maternal plasma. It is now well accepted that a minute number of copies (as low as 35 copies/mL) of cell-free fetal RHD DNA in maternal plasma can be utilized as a target for non-invasive genotyping of the foetus (Swati et al., 2018). Unlike fetal DNA isolated from the cellular fraction of maternal blood samples, free fetal DNA isolated from maternal plasma has been shown to be specific for the current foetus and is completely cleared from the mother's circulation after delivery. It has been reported that fetal RHD can be determined by PCR in DNA extracted from the maternal plasma of pregnant women carrying Rh D positive foetuses, in a non-invasive procedure. PCR amplification of RHD in maternal plasma may be useful for the management of Rh D negative mothers of Rh D positive foetuses and for the study of feto-maternal cell trafficking (Legler et al., 1999).

Conclusions

Determination of blood group polymorphisms at the genomic level facilitates the resolution of clinical problems that cannot be addressed by serological techniques. Genomic methods are useful for determining antigen types for which currently available antibodies are weakly reactive; for typing patients who have been recently transfused; for identifying fetuses at risk of haemolytic disease of the newborn; and for increasing the reliability of repositories of antigen-negative RBCs for transfusion. Mass-scale genotyping, if applied to the routine blood grouping of patients and blood donors, would significantly change the management of blood provision. Better matching of donor blood to patients would be the most significant benefit, primarily because a large number of low-frequency antigens (or the absence of high-frequency antigens) are not routinely tested for, and donor-patient mismatches are only detected by serological cross-matching (and only if an antibody has been generated) immediately prior to transfusion. This review has surveyed the current situation in this area and attempted to predict how blood group genotyping will evolve in the future.

Limitations

It is important to note that PCR based assays are prone to different types of errors than those observed with serological assays. For instance, contamination with amplified products may lead to false positive test results. In addition, the identification of a particular genotype does not necessarily mean that the antigen will be expressed on the RBC membrane.

Recommendation

As a word of caution, we should emphasize that the interpretation of molecular blood group genotyping results must take into account the potential for contamination of PCR-based amplification assays and the observation that the presence of a particular genotype does not guarantee expression of the corresponding antigen on the RBC membrane. An alternative to serological tests for determining the patient's antigen profile should be considered for multiply transfused patients and for patients with autoimmune haemolytic anaemia (AIHA), as it allows determination of the true blood group genotype and assists in the identification of suspected alloantibodies and in the selection of antigen-negative RBCs for transfusion. This ensures a more accurate selection of compatible donor units and is likely to prevent alloimmunization and reduce the potential for haemolytic reactions. As automated procedures attain higher and faster throughput at lower cost, blood group genotyping is likely to become more widespread. We believe that PCR technology may be adopted by transfusion services in the next few years to overcome the limitations of serological techniques.

References

  1. Agrawal A, Mathur A, Dontula S, Jagannathan L (2016) Red blood cell alloimmunization in multi-transfused patients: A Bicentric study in India. Global Journal of Transfusion Medicine 12(1): 12-17. [crossref]
  2. Bakanay SM, Ozturk A, Ileri T, Ince E, Yavasoglu S, Akar N (2013) Blood group genotyping in multi-transfused patients. Transfusion and Apheresis Science 48(2): 257-261. [crossref]
  3. Castilho L, Dinardo CL (2018) Optimized antigen-matched in sickle cell disease patients: Chances and challenges in molecular times—The Brazilian way. Transfusion Medicine and Hemotherapy 45(4): 258-262. [crossref]
  4. Chou ST, Jackson T, Vege S, Smith-Whitley K, Friedman DF, Westhoff CM (2013) High prevalence of red blood cell alloimmunization in sickle cell disease despite transfusion from Rh-matched minority donors. Blood 122(6): 1062-1071. [crossref]
  5. Connie M, Westhoff SB (2019) Blood group genotyping. Transfusion Medicine 133(17): 1814-1820. [crossref]
  6. da Costa DC, Pellegrino J, Guelsin GA, Ribeiro KA, Gilli SC, Castilho L (2013) Molecular matching of red blood cells is superior to serological matching in sickle cell disease patients. Revista Brasileira de Hematologia and Hemoterapia 35(1): 35-38. [crossref]
  7. Das SS, Biswas RN, Safi M, Zaman RU (2020) Serological evaluation and differentiation of subgroups of “A” and “AB” in healthy blood donor population in Eastern India. Global Journal Transfusion Medicine 20(5): 192-196. [crossref]
  8. Erhabor O, Malami AL, Isaac Z, Yakubu A, Hassan M (2015) Distribution of Kell phenotype among pregnant women in Sokoto, North Western Nigeria. Pan African Medical Journal 301(21): 1-9. [crossref]
  9. Guelsin GA, Rodrigues C, Visentainer JE, De-Melo, Campos. P, Traina F, Gilli SC (2015) Molecular matching for Rh and K reduces red blood cell alloimmunisation in patients with myelodysplastic syndrome. Blood Transfusion 13(1): 53-58. [crossref]
  10. Guelsin GA, Sell AM, Castilho L, Masaki VL, Melo FC, Hashimoto MN (2010) Benefits of blood group genotyping in multi-transfused patients from the south of Brazil. Journal of Clinical Laboratory Analysis 24(5): 311-316. [crossref]
  11. Huang H, Jin S, Liu X, Wang Z, Lu Q, Fan L, (2019) Molecular genetic analysis of weak ABO subgroups in the Chinese population reveals ten novel ABO subgroup alleles. Blood Transfusion 17(1): 217-222. [crossref]
  12. Jain A, Agnihotri A, Marwaha N, Sharma RR (2016) Direct antiglobulin test positivity in multi-transfused thalassemics. Asian Journal of Transfusion Science 10(1): 161-163. [crossref]
  13. Jenna Khan, Meghan Delaney (2018) Transfusion Support of Minority Patients: Extended Antigen Donor Typing and Recruitment of Minority Blood Donors. Transfusion Medicine and Hemotherapy 45(4): 271-276. [crossref]
  14. Karafin MS, Westlake M, Hauser RG, Tormey CA, Norris PJ, Roubinian NH (2018) Risk factors for red blood cell alloimmunization in the recipient epidemiology and donor evaluation study (REDS-III) database. British Journal Haematology 181(5): 672-681. [crossref]
  15. Kulkarni S, Choudhary B, Gogri H, Patil S, Manglani M, Sharma R (2018) Molecular genotyping of clinically important blood group antigens in patients with thalassaemia. The Indian Journal of Medical Research 148(6): 713-720. [crossref]
  16. Legler TJ, Eber SW, Lakomek M, Lynen R, Maas JH, Pekrun A (1999) Application of RHD and RHCE genotyping for correct blood group determination in chronically transfused patients. Transfusion 39(8): 852-855. [crossref]
  17. Marilia GQ, Cristiane MC, Luciana CM, Ana MS, Jeane EL (2019) Methods for blood group antigen detection; cost-effectiveness analysis of phenotyping and genotyping. Haematology Transfusion and cell therapy 41(1): 44-49. [crossref]
  18. Nair R, Gogri H, Kulkarni S, Gupta D (2019) Detection of a rare subgroup of A phenotype while resolving ABO discrepancy. Asian Journal of Transfusion Science 13(1): 129-131. [crossref]
  19. Ngoma AM, Mutombo PB, Ikeda K, Nollet KE, Natukunda B, Ohto H (2016) Red blood cell alloimmunization in transfused patients in sub-Saharan Africa: a systematic review and meta-analysis. Transfusion Apheresis Science 54: 296-302. [crossref]
  20. Obisesan OA, Ogundeko TO, Iheanacho CU, Abdulrazak T, Idyu VC, Idyu II, Isa AH (2015) Evaluation of Alpha (α) and Beta (β) Haemolysin Antibodies Incidence among Blood Group ‘O’ Donors in ATBUTH Bauchi Nigeria. American Journal of Clinical Medicine Research 3(3): 42-44. [crossref]
  21. Osman NH, Sathar J, Leong CF, Zulkifli NF, Raja Sabudin RA, Othman A (2017) Importance of extended blood group genotyping in multiply transfused patients. Transfusion Apheresis Science 56(3): 410-416. [crossref]
  22. Putzulu R, Piccirillo N, Orlando N, Massini G, Maresca M, Scavone F (2017) The role of molecular typing and perfect match transfusion in sickle cell disease and thalassaemia: an innovative transfusion strategy. Transfusion Apheresis Science 56(1): 234-237. [crossref]
  23. Ryder AB, Zimring JC, Hendrickson JE (2014) Factors influencing RBC alloimmunization: Lessons learned from murine models. Transfusion Medicine and Hemotherapy 41(6): 406-419. [crossref]
  24. Singhal D, Kutyna MM, Chhetri R, Wee LYA, Hague S, Nath L (2017) Red cell alloimmunization is associated with development of autoantibodies and increased red cell transfusion requirements in myelodysplastic syndrome. Haematologica 102(12): 2021-2029. [crossref]
  25. Storry JR, Castilho L, Chen Q, Daniels G, Denomme G, Flegel WA, Gassner C, de Haas M (2016) International society of blood transfusion working party on red cell immunogenetics and terminology: report of the Seoul and London meetings. International Society of Blood Transfusion Science 11(2): 118–122. [crossref]
  26. Swati Kulkarni, Bhavika Choudhary, Harita Gogri, Shashikant Patil, Mamta Manglani, Ratna Sharma, Manisha Madkaikar (2018) Molecular genotyping of clinically important blood group antigens in patients with thalassaemia. Indian Journal of Medical Research 148(6): 713-720. [crossref]
  27. Wilkinson K, Harris S, Gaur P, Haile A, Armour R, Teramura G (2012) Molecular blood typing augments serologic testing and allows for enhanced matching of red blood cells for transfusion in patients with sickle cell disease. Transfusion 52(1): 381-388. [crossref]
  28. Yazdanbakhsh K, Ware RE, Noizat-Pirenne F (2014) Red blood cell alloimmunization in sickle cell disease: Pathophysiology, risk factors, in individuals with single and multiple clinically relevant red blood cell antibodies. Transfusion 54(8): 1971-1980. [crossref]
  29. Ye Z, Zhang D, Boral L, Liz C, May J (2016) Comparison of blood group molecular genotyping to traditional serological phenotyping in patients with chronic or recent blood transfusion. Journal of Biomedical Science 4(1): 1-4. [crossref]
  30. Zimring JC, Hendrickson JE (2008) The role of inflammation in alloimmunization to antigens on transfused red blood cells. Current Opinion in Hematology 15(6): 631-635. [crossref]

Climate Summit and the Egyptian Vision

DOI: 10.31038/GEMS.2023531

 

“We are meeting today and the environmental clock is ticking, marking the end of the planet if we do not do our best to preserve it”.

“Although not responsible for the climate crisis, the African continent faces the most negative consequences of the phenomenon and its economic, social, security and political implications. However, the continent is a model of serious climate action as far as its capabilities and available support allow”. With these words, British Prime Minister Boris Johnson and His Excellency President Abdel Fattah El-Sisi opened their speeches before the Climate Summit in Glasgow, Scotland, held under the auspices of the United Nations and formally known as the 26th Conference of the Parties to the Framework Convention on Climate Change, abbreviated “COP26”. The summit began on Sunday, October 31, 2021 and continued until November 12, amid high expectations for dealing with the problems of climate change besetting our planet. For the first time, delegations representing 200 countries participated in the summit to discuss ways to reduce emissions by 2030 and help improve life on the planet.

The summit was honored by the presence of the Arab Republic of Egypt, with an official delegation headed by His Excellency President Abdel Fattah El-Sisi, President of the Republic, who gave an important speech at the summit attended by world leaders, evidence of Egypt's standing at the level of the continent and the whole world. The Earth's climate depends mainly on the sun: about 30 percent of incoming sunlight is scattered back into space, some is absorbed by the atmosphere, and the rest is absorbed by the Earth's surface. The Earth's surface also re-emits part of this energy as radiant energy called infrared radiation. This infrared radiation is retained by “greenhouse gases” such as water vapor, carbon dioxide, ozone and methane, which reflect it back, raising the temperature of the lower atmosphere and the Earth's surface.

Although greenhouse gases make up only about one percent of the atmosphere, they form a blanket around the Earth, like a glass roof, which traps heat and keeps the Earth's temperature about 30 degrees higher than it would otherwise be. However, human activities make this cover “thicker”: natural levels of these gases are augmented by carbon dioxide emissions from the combustion of coal, oil and natural gas, by the emission of more methane and nitrous oxide from agricultural activities and land-use changes, and by long-lived industrial gases that are not produced naturally. People often use the terms global warming and climate change interchangeably, assuming they mean the same thing. But there is a difference between the two: global warming refers to rising average temperatures near the Earth's surface, while climate change refers to changes in climate measures such as temperature, rainfall and other variables, measured over decades or longer periods.

What is Climate Change?

Climate change refers to long-term shifts in temperature and weather patterns. These shifts may be natural, for example through changes in solar activity, slow changes in the Earth’s orbit around the sun, or natural processes within the climate system (such as changes in ocean circulation and the water cycle). Since the nineteenth century, however, human activities have become the main cause of climate change on the planet, chiefly the burning of fossil fuels such as coal, oil and gas for industry, transport and other human activities, along with deforestation, urbanization and desertification. Burning fossil fuels emits gases, above all carbon dioxide and methane, that act as a cover wrapped around the globe, capturing the sun’s heat and raising the Earth’s temperature. Clearing forests also releases carbon dioxide, and landfills are a major source of methane emissions. Energy production and consumption, industry, transport, buildings, agriculture and land use are the main sources of emissions. Recent studies have shown that concentrations of these gases are now at their highest levels in two million years, and emissions continue to rise. As a result, the globe is now about 1.1 degrees Celsius warmer than it was in the late nineteenth century.

What are the Expected Effects of Climate Change?

The phenomenon of climate change is distinguished from most other environmental problems by being global in nature: it transcends national borders to pose a danger to the whole world. A steady increase in surface air temperatures across the globe has been confirmed, with the global average rising by 0.3 to 0.6 degrees over the past 100 years. Studies by the Intergovernmental Panel on Climate Change (IPCC) indicate that the continuing rise in global average temperature will lead to many serious problems, such as sea level rise threatening to submerge some areas of the world, impacts on water resources and crop production, and the spread of certain diseases. Climate change will affect our health and our ability to grow food, as well as our housing, safety and work. Its consequences include severe drought, water scarcity, severe fires, rising sea levels, saltwater intrusion into adjacent lands, floods, melting polar ice and loss of biodiversity. In a 2018 United Nations report, scientists concluded that limiting global warming to no more than 1.5°C would help us avoid the worst climate impacts and maintain a livable climate. Conversely, the current trajectory of carbon dioxide emissions could raise global temperatures by up to 4.4°C by the end of the century.

Everyone Asks: Can We Stop the Phenomenon of Climate Change?

The honest scientific answer: we can only slow the pace of global warming, not stop it completely, thereby delaying and reducing the scale of the damage through the end of the current century, in the hope that we can coexist as a human race with the changes we ourselves have caused. Climate change poses a great challenge to humanity, so do we have solutions to this phenomenon? The countries of the world have become aware of the danger of remaining silent about climate change and of the need to confront it effectively. Its effects have existed for some time, but some countries were not dealing with the crisis adequately, especially the industrialized countries that cause most climate change yet fail developing countries: they neither take sufficient measures to protect the world from climate change nor provide adequate funding. Despite this, there are measures that can be taken to reduce this phenomenon and its catastrophic effects, the most important of which are:

Emission Reduction

Emissions can be reduced by shifting existing energy systems from fossil fuels to new and renewable energy sources, such as solar or wind. A growing coalition of countries has committed to bringing emissions to net zero by 2050. However, current emissions must be cut by about half by 2030 to keep global warming below 1.5°C, and fossil fuel production must fall by about 6 percent per year during the decade 2020-2030.
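The two figures above fit together: a constant cut of about 6 percent per year compounds over the decade to roughly a halving of fossil fuel production. A minimal sketch of that arithmetic, assuming the cut is applied uniformly each year from 2020 levels:

```python
# Compound annual reduction: each year retains (1 - cut) of the previous year.
annual_cut = 0.06   # ~6 percent per year, as cited above
years = 10          # the decade 2020-2030

remaining = (1 - annual_cut) ** years  # fraction of 2020 production left in 2030

print(f"Remaining after {years} years of {annual_cut:.0%} annual cuts: {remaining:.0%}")
```

About 54 percent of the starting level remains, i.e. close to the “cut by about half by 2030” target.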

Adaptation to Climate Impacts

Humanity must also adapt to the consequences of climate change that can no longer be avoided. Priority must be given to the most vulnerable people with the fewest resources to face climate risks, especially in developing countries, which contributed least to the phenomenon yet are most affected by it.

Financing the Required Adjustments

Climate adaptation and coping with climate effects require significant financial investment, but inaction comes at a far higher price. An important step is for the industrialized countries, the main cause of the phenomenon, to fulfill their commitment to provide financial allocations to developing countries so that they can adapt and move towards greener economies.

Is July 2021 Really the Hottest Month in Recorded History?

It has been claimed that July 2021 was the hottest month ever recorded on the surface of the Earth.

What is the Truth of That?

This claim comes from a report by the US federal agency concerned with monitoring the atmosphere and oceans (NOAA), which announced in August 2021 that July 2021 was the hottest month since the world’s temperature recording system began 142 years ago. The recorded data show that the average temperature during that month, over land and ocean combined, was about 0.93 degrees Celsius above the twentieth-century average of 15.8 degrees Celsius, and scientists attribute this to the long-term effects of climate change.

Has the Number of Days of Extreme Heat Really Doubled Globally Since the Eighties of the Last Century?

Research conducted by the BBC found that the number of very hot days, in which temperatures somewhere in the world exceed 50°C, has doubled since the 1980s, and that the total number of such days has increased in each of the past four decades. Between 1980 and 2009, temperatures exceeded 50°C on an average of about 14 days a year, rising to about 26 days a year between 2010 and 2019. This extreme heat is also occurring over increasing areas of the globe, presenting humanity with new challenges, especially for health and livelihoods.
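Both sets of figures above are easy to verify arithmetically. A minimal sketch using only the numbers cited in the two reports:

```python
# NOAA's July 2021 figure: 0.93 C above the 20th-century land+ocean baseline.
baseline_c = 15.8
anomaly_c = 0.93
july_2021_c = baseline_c + anomaly_c
print(f"July 2021 global land+ocean mean: {july_2021_c:.2f} C")  # -> 16.73 C

# BBC analysis: annual days above 50 C, averaged per decade band.
days_per_year_1980_2009 = 14
days_per_year_2010_2019 = 26
increase = days_per_year_2010_2019 / days_per_year_1980_2009
print(f"Increase factor: {increase:.2f}x")
```

The ratio of 26 to 14 days is about 1.86, which is the basis of the “doubled” claim.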

What are the Groups Most Affected by the Phenomenon of Climate Change?

Although all groups are affected by the negative results of climate change, children bear the brunt of its effects, even though they are the group least responsible for the phenomenon. Climate change poses a direct threat to a child’s ability to survive, develop and prosper.

In terms of:

  • Severe weather phenomena such as hurricanes and heat waves threaten children’s lives and destroy infrastructure vital to their well-being.
  • Floods destroy and damage water and sanitation facilities, leading to the spread of various diseases that pose an imminent danger to humans in general and to children in particular.
  • Drought and global changes in rainfall disrupt crop productivity and raise food prices, which means food insecurity and deprivation for poor people, including, of course, children.

Children are the group most vulnerable to diseases that will become more prevalent as a result of climate change and drought, such as malaria, fever and pneumonia; pneumonia alone kills 2,400 children a day globally and is closely linked to undernutrition, lack of safe drinking water and air pollution, all of which are exacerbated by climate change.

The frightening effects of climate change on humanity were summarized in a report broadcast by the AFP news agency:

  • Some 166 million people in Africa and Central America needed assistance between 2015 and 2019 due to food emergencies linked to climate change.
  • Between 15 and 75 million people are at risk of famine by 2050.
  • Some 1.4 million children in Africa will be severely stunted by 2050 due to climate change.
  • Agricultural yields have declined by 4-10% globally over the past 30 years.
  • Fish catches in the tropics have declined by 40-70% amid rising emissions.
  • Climate-driven internal migration between 2020 and 2050 is projected to reach six times the current rate.
  • Global warming will also have severe effects on “water stress”, with 122 million people in Central America, 28 million in Brazil, and 31 million in the rest of South America affected by shortages of water allocations.

Climate Change in Egypt and Its Negative Effects

Egypt is one of the countries most affected by the negative effects of climate change, which can be summarized as follows:

  1. Impact on food security
  2. Impact on water resources
  3. Impact on the ecosystem
  4. Impact on public health
  5. Impact on urban areas
  6. Impact on energy
  7. Impact on the economy

What about the Egyptian Strategy to Confront Climate Change?

President Abdel Fattah El-Sisi participated in the climate summit held under the auspices of the United Nations in Glasgow, the twenty-sixth Conference of the Parties to the Framework Convention on Climate Change, from October 31 to November 12. During the closing session of COP26, it was announced that Egypt had been chosen to host the 27th session of the conference, COP27, on November 7 and 8 in Sharm El Sheikh. The African continent will thus be represented at the heart of the next climate summit, and the whole world will see the efforts of Egypt and the African continent in confronting climate change.

The Egyptian strategy to confront climate change is represented in many points, the most important of which are:

  • Establishing the National Council for Climate Change to formulate the state’s general policies on climate change, to develop and update sectoral climate strategies and plans in light of international agreements and national interests, and to link these plans to the 2030 sustainable development strategy.
  • Egypt protects its Mediterranean and Red Sea coasts from the impact of sea level rise through clear plans carried out in cooperation between the ministries and scientific authorities concerned.
  • Research bodies in Egypt are working to develop drought-resistant crops and crops that reduce emissions.
  • Egypt is working to protect agricultural land adjacent to the coasts from deterioration through mega projects.
  • Providing climate finance for the implementation of the adaptation component of the NDCs (nationally determined contributions).
  • Egypt is implementing a huge desalination program (the Ain Sokhna desalination plant, at a cost of 2.3 billion pounds) and tertiary treatment of wastewater (the Bahr Al-Baqar water treatment plant, at a cost of 20 billion pounds), and is updating its strategy for low-emission development while implementing a huge renewable energy program (the wind power generation project on the west coast of the Gulf of Suez, at a cost of 4.3 billion pounds).
  • Implementation of huge projects in the villages of the Egyptian countryside, such as the Decent Life project at a cost of 700 billion pounds.
  • Implementation of projects to preserve available water resources, such as the canal-lining project at a cost of 6 billion pounds.
  • Egypt was the first country in the region to issue green bonds, worth $750 million, last year.
  • Expansion of projects to establish greenhouses with the aim of adapting to climate change (the target for the next five years is about one million greenhouses).
  • Expanding sustainable transport projects and developing the transport and communications network (at a total cost of 377 billion pounds through 2024), converting cars to run on electricity or natural gas, and operating trains on electricity to reduce pollution.
  • Expanding health initiatives to protect citizens’ health from various diseases.
  • Production of new rice varieties and hybrids, such as short-duration varieties, which reduce methane emissions.