In the Elderly (≥65 Years of Age) Hospitalized Patient Who Experiences an Acute Fragility Hip Fracture, How Does the Implementation of the Clinical Frailty Scale (CFS) Tool, Compared to No Frailty Screening, Increase the Incidence of Palliative Care Referral in the Target Population?

DOI: 10.31038/IJNM.2023422

Introduction

Frailty is a state of increased vulnerability to illnesses or health conditions following a stressor event such as a hip fracture, thus increasing the incidence of disability, hospitalization, long-term care, and premature mortality [1]. Hip fracture is associated with high morbidity and mortality in the frail, elderly patient [2]. A hip fracture in the elderly can also severely impact physical, mental, and psychological health and diminish quality of life (QOL) [3]. Palliative care has been shown to mitigate these impacts by managing symptoms, thus improving patient QOL and patient and caregiver satisfaction [4,5].

The American College of Surgeons and the American Geriatrics Society recommend that frailty screening be performed as a routine preoperative assessment on patients ≥65 years of age [1]. A standardized assessment tool can be used to measure frailty in this patient population as a predictor of those at risk for high morbidity and mortality. The Clinical Frailty Scale (CFS) is a standardized assessment tool that measures frailty based on comorbidity, function, and cognition to assign a numerical frailty score ranging from very fit to terminally ill [6]. The CFS was used in a quality improvement study to measure frailty in this patient population. The study recommended palliative care consultation for those who scored moderately frail or above.

Including palliative care in the multidisciplinary care of frail, elderly hip fracture patients is appropriate as these injuries can pose a risk to QOL [7]. Palliative care providers assist with symptom management, QOL, and advance care planning [4,5]. The palliative care team helps patients determine the best management or treatment options considering the patient’s prognosis and can assist in providing safe and effective pain management to elderly patients [4,5]. This quality improvement initiative demonstrated a correlation between implementation of a frailty assessment in this patient population and an increase in palliative care consultations. Further studies are needed to evaluate the impact of frailty screening and subsequent palliative care inclusion on symptom management, QOL, and patient and caregiver satisfaction.

References

  1. Archibald MM, Lawless M, Gill TK, Chehade MJ (2020) Orthopaedic surgeons’ perceptions of frailty and frailty screening. BMC Geriatr 20: 1-11. [crossref]
  2. Koso RE, Sheets C, Richardson WJ, Galanos AN (2018) Hip fracture in the elderly patients: A sentinel event. Am J Hosp Palliat Care 35: 612-619. [crossref]
  3. Alexiou K, Roushias A, Varitimidis S, Malizos K (2018) Quality of life and psychological consequences in elderly patients after a hip fracture: A review. Clin Interv Aging 13: 143-150. [crossref]
  4. Harries L, Moore A, Kendall C, Stanger S, Stringfellow TD, Davies A, et al. (2020) Attitudes to palliative care in patients with neck-of-femur fracture—A multicenter survey. Geriatr Orthop Surg Rehabil 11: 1-7. [crossref]
  5. Santivasi WL, Partain DK, Whitford KJ (2019) The role of geriatric palliative care in hospitalized older adults. Hosp Pract 48: 37-47. [crossref]
  6. Church S, Rogers E, Rockwood K, Theou O (2020) A scoping review of the clinical frailty scale. BMC Geriatr 20: 1-18. [crossref]
  7. Sullivan NM, Blake LE, George M, Mears SC (2019) Palliative care in the hip fracture patient. Geriatr Orthop Surg Rehabil 10: 1-7. [crossref]

Big Data vs. Big Mind: Who People ARE vs. How People THINK

DOI: 10.31038/IMROJ.2023814

Abstract

This paper presents a new approach to understanding Big Data. Big Data analysis allows researchers to hypothesize about what people think regarding certain issues by extracting information on how people move around, what interests them, what the context is, and what they do. We believe, however, that by also allowing users to answer simple questions, their interests can be captured more accurately, as the new area of Mind Genomics tries to do. The paper introduces the emerging science of Mind Genomics as a way to profoundly understand people, not so much by who they are as by the pattern of their reactions to messages. Understanding the way nature is, however, does not suffice. It is vital to bring that knowledge into action, to use the information about a person’s mind to drive behavior, i.e., to put the knowledge into action in a way that can be measured. The paper introduces the Personal Viewpoint Identifier as that tool and shows how the Viewpoint Identifier can be used to evaluate entire databases. The paper closes with the vision of a new web, Big Mind, analyzing huge amounts of data, where the networks developed show both surface behavior that can be observed and deep, profound information about the way each individual thinks about a variety of topics. The paper presents a detailed comparison with the Text Mining approach to Big Data in order to show the advantage of understanding the ‘mind’ beneath the observed behavior in combination with the observed behavior itself. The potential ranges from creating personalized advertisements to discovering profound linkages between the aspects of a person and the mind of the person.

Introduction

When we look at networks, seeking patterns, we infer from the behaviors and the underlying structure what might be going on in the various nodes. We don’t actually communicate with the nodes; they’re represented geometrically as points of connection. Analytically, we can look at behavior, imposing structural analysis on the network, looking at the different connections—the nodes, the nature of what’s being transacted, and the number and type of connections. By doing so, we infer the significance of the node. But, what about that mind in the node? What do we know? What can we know? And more deeply, is that mind invariant, unchangeable, reacting the same way no matter what new configurations of externalities are imposed?

These are indeed tough questions. The scientific method teaches us to recognize patterns, regularities, and from those patterns to infer what might be going on, both at the location and by the object being measured, the object lying within the web of the connections. Mathematics unveils these networks, different patterns, in wonderful new ways, highlighting deeper structures, often revealing hitherto unexpected relations. Those lucky enough to have programs with false colors see the patterns revealed in marvelous reds, yellows, blues, and the other rainbow colors, colors which can become dazzling to the uninitiated, suggesting clarity and insight which are not really the case. The underlying patterns are clearly not in color, and the universe does not appear to us so comely and well-colored. It is technology which colors and delights us, technology which reveals the secrets.

Now for the deeper question, what lies beyond the network, the edges, inside the nodes, inside the mind lying in the center of a particular connection? Can we ever interrogate a node? Can we ever ask a point on a network to tell us about itself? Does the point remain the same when we shift topics, so the representation is no longer how the nodes interact on one day, but rather interact on another day, or in another situation?

Understanding the environment where a business operates requires collecting and analyzing massive amounts of data related to potential clients: what they think about the offered products and their level of satisfaction with the offered services/products. The problem of understanding the mind of potential clients is not new; it has been a focus of marketing researchers for some time. One of the most prominent tools used for this purpose is text mining, defined as a process to extract interesting and significant patterns in order to explore knowledge from textual data sources. Usually, the collected data are unstructured, i.e., collected from blogs, social media, etc. As the amount of unstructured data collected by companies is constantly increasing, text mining is gaining relevance [1-7].

Text mining plays a significant role in business intelligence, helping organizations and enterprises analyze their customers and competitors in order to make better decisions. It also helps in the telecommunications industry, in business and commerce applications, and in customer chain management systems.

Usually, text mining combines discovery, mining techniques, decoding, and natural language processing. The most important elements of this approach are powerful mining techniques, visualization technologies, and an interactive analysis environment for analyzing massive sets of data so as to discover information of marketing relevance [8,9].

In the world of today, a number of studies suggest that the efforts to create a technology of text mining have as yet fallen short. The reality of text mining as of 2020 suggests that the effort has performed less well than was hoped, both in terms of the explicit hopes and predictions and the vaster implicit ones. Companies which have applied automated analysis of textual feedback, or text mining, have often failed to meet their expectations, a result which emphasizes just how hard text mining can be. Research in the area of natural language processing (NLP) encounters a number of difficulties due to the complex nature of human language. Thus, this approach has performed below expectations in terms of depth of analysis of customer experience feedback and accuracy [10].

There are specific areas of disappointment. For example, major obstacles have been encountered in predicting with accuracy the sentiment (positive/negative/neutral) of customers. Despite what one might read in the literature of consumer researchers and others employing text mining for sentiment analysis, the inability to successfully address these issues has disillusioned some. Some of the disillusionment is to be expected, because sentiment analysis must be sensitive to the nuances of many languages. Feelings expressed by words in one language may not translate naturally when the words are translated. Only a few tools are available that support multiple languages. It may be that better feedback might actually be obtained with structured systems, such as surveys.

In this paper we propose a new approach to understanding Big Data from the point of view of understanding the mind of the person who is a possible ‘node.’ We operationally define the world as a series of experiences that might be captured in Big Data, and for each experience create a way of understanding the different viewpoints or mind-sets of the persons undergoing that experience. The effect is to add a deeper level to Big Data, moving beyond the patterns of what is observed to the mind-sets of the people who undergo the experience. In effect, the approach provides a deeper, two-dimensional matrix of information, the first dimension being the structure of what is being done (traditional Big Data), and the second being the mind-set of the person(s) reacting to that structure. In essence, therefore, a WHAT and the MIND(s) behind the WHAT. We conclude with the prospect of creating that understanding of the MIND through straightforward, affordable experiments, and a tool (the Personal Viewpoint Identifier) which allows one to understand the mind of any person in terms of the relevant action being displayed.

Moving from Analysis of an Object to Interrogating It

We move now from analysis of an object in a network to actually interrogating the object in order to understand it from the inside, to get a sense of its internal composition. The notion here is that once we understand the network as externalities and understand deep mind properties of the nodes in the network, the people, we have qualitatively increased the value of the network by an order of magnitude. We not only know how the points in the network, the people, react, but we know correlates of that reaction, the minds and motivations of these points which are reacting and interacting.

Just how do we do that when we recognize that this mind may have opinions, that the mind may have a desire to be perceived as politically correct, and that, in fact, this mind in the object may not be able to tell us really what’s important? How do we work with this mind to find out what’s going on inside?

It is at this juncture that we introduce the notion of Mind Genomics, a metaphor for an approach to systematically explore and then quantitatively understand how things are perceived by person(s) using the system. The output of that understanding comprises content (the components of this mind), numbers (a way to measure the components of the mind), and linkages (the assignment of the content and its numbers to specific points, nodes, people in the network) [11,12].

A Typical Problem – What Should the Financial Analyst Say to Convince a Prospect to Commit?

Lest the foregoing seem to be too abstract, too esoteric, too impractical, let’s put a tangible aspect onto the idea. What happens when the point or node corresponds to a person walking in to buy a financial retirement product from a broker whom the person has never met? How does this new broker understand what to say to the person at the initial sales interaction, that first ‘moment of truth’ when there is a chance for a meaningful purchase to occur? And what happens when the interaction occurs in an environment where the financial consultant or salesperson never even meets the prospective buyer, but rather relies upon a Web site, or a simple outward-bound call-center manned by non-professionals?

The foregoing paragraph lays out the problem. We have our network, nodes connected by the sales activity. By understanding the mind of the prospective customer, the financial analyst has a much greater chance of making the sale, in contrast to simply knowing the age, gender, family situation, income, and previous Web-searching behavior of the prospect, all available from Big Data and grist for the analytic mill. We want to go deeper, into the mind of that prospect.

Psychologists and marketers have long been interested in understanding what drives a person to do something, the former (along with some philosophers) to create a theory of the mind, the latter to create products and services, and sell them. We know that people can articulate what they want, describe to an interviewer the characteristics of a product or service that they would like, often just sketchily, but occasionally in agonizing detail. And all too often this description leads the manufacturer or the service supplier on a wild-goose-chase, running after features that people really don’t want, or features which are so expensive as to make the exercise simply one of wish description rather than preparation for design.

A more practical way runs an experiment presenting the person, this node in the system, with different ideas, different descriptions about a product, obtains ratings of the description, and then through statistical modelling, discovers those specific elements in the description which link to a positive response. In other words, run an experiment treating this node, this point in a network, as a sentient being, not just as something whose behavior or connections are to be observed as objective, measurable quantities. Looking at the network as an array of connected minds, not connected points, minds with feelings, desires, and opinions, will enrich us dramatically in theory and in practice.

The experiment, or better the paradigm of Mind Genomics, is rather simple. We use a paradigm known as Empathy and Experiment, empathy to identify the ‘what,’ the content, and experiment to identify the values, the ‘important’ [13].

Our strategy is simple. We want to add a new dimension to the network by revealing the mind of each nodal point. To do so requires empathy, understanding the ‘what,’ and experiment, quantifying the amount, revealing the structure. Putting the foregoing into operational terms, we will identify a topic area relevant to the node, the person, uncover elements or ideas appropriate to the topic, and then quantify the importance of each element. After Empathy uncovers the raw materials, the elements, Experiment mixes and matches these elements into different combinations, obtains ratings of the combinations, and then estimates how the individual elements in the combination drive the response.

The foregoing paragraph described an experiment, not a questionnaire. Rather than asking directly, we infer what the person, the node, wants from the pattern of responses, and from that behavior we determine which elements produce positive responses and which produce negative responses [14].

Putting the Emerging Science of Mind Genomics into Action – Setting Up a Study and Computing Results

The best way to understand the concepts of Mind Genomics, its application to knowledge and to networks, is through an illustration. This paper presents the application of Mind Genomics to create a micro-science about choosing a financial advisor for one’s retirement planning. The case history shows the input and practical output of Mind Genomics, how a financial advisor can understand the mind and needs of a customer, identifying the psychological mind-set and relevant points from the very beginning of the interaction. A sense of the process can be obtained from Figure 1. The paper will explicate the various steps, using actual data from a Mind Genomics experiment.


Figure 1: The process of Mind Genomics, from setup to analysis and application. Figure courtesy of Barry Sideroff, Direct Ventures, LLC.

To create and to apply the micro-science we follow the steps below. Although the case history is particularized to selecting a financial advisor, the steps themselves would be followed for most applications. Only the topic area varies.

  1. We begin by defining the topic. We also specify the qualifications for the consumer respondents, those who will be part of what might initially look like a Web-based survey, but in reality, will participate in what constitutes a systematic experiment. For our study, the focus is on the interaction of the financial advisor with the consumer, with the specific topic being the sales of retirement instruments such as annuities. The key words here are focus and granularity. Specificity makes all the difference, elevating the study from general knowledge to particulars. Granularity means that the data provide results that can be immediately applied in practice.
  2. Since our focus here is on the inside of the mind, what motivates the person to listen to the introductory sales message of the financial planner, we will use simple phrases, statements that a prospective client of the financial analyst is likely to hear from the analyst himself or read in an advertisement. Table 1 presents the set of 36 elements divided into four questions (silos, categories), each question comprising exactly nine answers (elements). The silos are presented as questions to be answered. This study used a so-called 4×9 matrix (four questions, nine answers per question). The elements are short, designed to paint a word-picture, and are ‘stand-alone.’
  3. A set of 36 elements covers a great deal of ground and typically suffices to teach us a lot about the particular minds of the participants, our respondents, or nodes in a web. The particular arrangement of four silos and nine elements is only one popular arrangement of silos and their associated elements. An equally popular arrangement is 6×6, six silos with six elements in each. Recent advances have shown good results with a much smaller set of 16 elements, emerging from four questions, each with four answers (four silos, four elements).
  4. Create vignettes, systematically varied combinations of elements. The 4×9 design requires 60 different vignettes. Each respondent evaluates a completely different set of vignettes, enabling Mind Genomics to test a great deal of the possible ‘design space’ of potential combinations. Rather than testing the same 60 vignettes with many respondents, the strategy of testing different combinations covers more of the possible combinations. The pattern emerges with less error, even though each combination is tested by one, at most two, respondents.
  5. The combinations, vignettes (called profiles or concepts in other published work), comprise 2-4 elements, each element appearing five times. The elements appear against different backgrounds, since the elements vary from one vignette to another. The underlying experimental design, a ‘recipe book,’ controls which particular elements appear in each vignette. Although to the untutored eye the 60 different vignettes appear to be simply a random, haphazard collection of elements with no real structure, nothing could be further from the truth. The experimental design is a well-thought-out mathematical structure ensuring that each element appears independently of every other element, repeated the same number of times. This allows us to deconstruct the response to the 60 test vignettes into the individual contribution of each element. Statistical analysis by OLS (ordinary least-squares) regression will immediately reveal which elements are responsible for the rating and which simply go along, not contributing anything.
  6. We see an example of a vignette in Figure 2. A program sets up the vignettes remotely on the respondent’s computer, presents each vignette, and acquires the rating. The bottom of the vignette shows the rating scale for the vignette. The respondent reads the vignette in its entirety and rates it on the scale. The interview is relatively quick, requiring about 12 minutes for the presentation of the vignettes followed by a short classification questionnaire. The process is standardized, easy, disciplined, and quite productive in terms of well-behaved, tractable data that can be readily interpreted by most people, technical or non-technical alike. As long as the respondent is at least a bit interested and participates, the field execution of the study is straightforward. The process is automatic from the start of the experiment to the data analysis, making the system scalable. The experiment is designed to create a corpus of knowledge in many different areas, ranging from marketing to food to the law, education, and government. It is worth noting that whereas the 60 vignettes require about 12 minutes to complete, the shorter variation, the 4×4 with 24 vignettes, requires only about 3 minutes.
  7. The original rating scale that we see at the bottom of the vignette in Figure 2 is a Likert scale, or category scale, an ordered set of categories representing the psychological range from 1 (not at all interested) to 9 (very interested). For our analysis we simplify the results, focusing on two parts of this 9-point scale, with the lower part (ratings 1-6) corresponding to ‘not interested’ and the upper part (ratings 7-9) corresponding to ‘interested.’ We re-code ratings of 1-6 to the number 0 and ratings of 7-9 to the number 100. The recoding loses some of the granular information, but the results are more easily interpreted. Although the 9-point scale provides more granular information, the reality is that managers focus on the yes/no aspect of the results.
  8. The Mind Genomics program also adds a vanishingly small random number to each newly created binary value, in order to ensure that the OLS (ordinary least-squares) regression does not crash in the event that a respondent assigns all vignettes ratings of 1-6, or all ratings of 7-9. In that case the transformed binary values would all be 0 or all be 100, respectively, and the random number adds the needed variability to prevent a ‘crash.’ The 60 vignettes allow the researcher to create an equation for each respondent. Building the model at the level of the individual is a powerful form of control, known to statisticians as the strategy of ‘within-subjects design.’
  9. Some of the particulars underlying the modelling are listed below; a minimal computational sketch follows the list:

a. The models are created at the level of the individual respondent, using the well-accepted procedure of OLS, ordinary least squares regression.

b. The experimental design ensures that the 36 elements are statistically independent of each other, so that the coefficients, the impact values of the elements, can be interpreted in absolute terms. The inputs are 0/1: 0 when the element is absent from a vignette, 1 when the element is present in the vignette.

c. OLS uses the 60 sets of elements/ratings, one per vignette, as the cases. There are 36 independent variables and 60 cases, allowing sufficient degrees of freedom for OLS to emerge with robust estimates.

d. We express the equation or model as: Binary Rating = k0 + k1(A1) + k2(A2) + … + k36(D9). For the current iteration of Mind Genomics, we estimate the additive constant k0, the baseline. Future plans are to move to the estimation of the coefficients while ‘forcing the regression through the origin,’ viz., assuming that the additive constant is 0.

e. The equation says that the rating is the combination of an additive constant, k0, and weights on the elements. The elements appear either as 0 (absent) or as 1 (present), so the weights, k1 – k36, show the driving force of the different elements.
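
To make steps 4-9 concrete, here is a minimal sketch in Python of the analysis for a single respondent. All numbers are simulated rather than study data, and the random assignment of elements to vignettes is only a stand-in for the structured experimental design described above; the sketch shows the binary recoding, the tiny jitter, and the per-respondent OLS model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 60 vignettes x 36 elements for one respondent.
# X[v, e] = 1 if element e appears in vignette v, else 0. A real study
# uses a structured, balanced design; random picks are a stand-in.
X = np.zeros((60, 36))
for v in range(60):
    picked = rng.choice(36, size=rng.integers(2, 5), replace=False)  # 2-4 elements
    X[v, picked] = 1.0

ratings = rng.integers(1, 10, size=60)  # simulated 9-point ratings

# Re-code: ratings 1-6 -> 0, ratings 7-9 -> 100, plus a vanishingly
# small jitter so OLS does not degenerate if a respondent uses only
# one side of the scale.
y = np.where(ratings >= 7, 100.0, 0.0) + rng.normal(0, 1e-5, size=60)

# Per-respondent OLS: Binary Rating = k0 + k1(A1) + ... + k36(D9)
X1 = np.hstack([np.ones((60, 1)), X])           # prepend intercept column
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)
k0, impacts = coefs[0], coefs[1:]               # additive constant, impacts
print(f"additive constant: {k0:.1f}")
print(f"strongest element: {int(np.argmax(impacts))}, impact {impacts.max():.1f}")
```

In the real system this model is estimated once per respondent, and the resulting rows of 36 impact values become the input to the clustering described later.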

Table 1: The raw material of Mind Genomics, elements arranged into four silos, each silo comprising nine elements.


Understanding the Result Through the Additive Constant and the Coefficients

We now look at the strongest performing elements from the equation or model which relates the presence/absence of the elements to the transformed binary rating of 0 (not interested) or 100 (interested). The strongest performing elements appear in Table 2. The table shows all elements which generate an impact value or coefficient of 8 or higher for any key subgroup, whether total sample, gender, age, or income.

  1. The total panel comprises 241 respondents. We can break out the total panel into self-defined subgroups, e.g., gender, age, and income. That information is available from the self-profiling classification, a set of questions answered by each respondent after rating the set of 60 vignettes.
  2. The additive constant tells us the conditional probability of a person saying they are interested in what the financial advisor has to say, i.e., assigning a rating of 7-9, when reading a vignette with no elements (the baseline). Of course, by design all vignettes comprise elements, so the additive constant is an estimated parameter. We can use the additive constant as a baseline. For the total panel it is 35, meaning that an estimated 35% of the respondents would rate such a vignette 7-9. Males are less likely to be positive, whereas females are more likely to be positive (additive constants of 28 vs. 36). Those under 40 are far less likely to be positive, those over 40 more likely (additive constants of 29 vs. 40). Income makes no difference.
  3. Beyond the baseline are the elements, which contribute to the total. We add up to four elements to the baseline to get an estimated total value, i.e., the percent of respondents who say that they would be interested in the vignette about the financial consultation were the elements to be part of the advertising (a worked example follows this list).
  4. To allow patterns to emerge, the tables of coefficients show only positive coefficients of +2 or higher, the drivers of interest. Negative coefficients are not shown.
  5. The coefficients for the 36 elements are low. Table 2 shows the strongest elements only: those which generate a coefficient or impact value of +8 for at least one subgroup. We interpret +8 to mean that when the element is incorporated into the advertising vignette, at least 8% more people will rate the vignette 7-9, i.e., say ‘I’m interested.’ The value +8 has been observed in many other studies to signal that an element is ‘important’ in terms of co-varying with a relevant behavior. Thus, the value +8 is used here as an operationally defined threshold for ‘important.’
  6. Our first look into the results suggests that nothing particularly strong emerges from the total sample. We do see six elements scoring well in at least one subgroup. However, we see no general pattern; that is, we do not see an element working well across the different groups. Furthermore, reading the different elements only confuses us. There are no simple patterns.
  7. Our first conclusion, therefore, is that the experiment worked at the simple level of discovering what is important and what is not. We are able to develop elements, test combinations, deconstruct the combinations, and identify winning elements. The experiment, at least thus far, does not reveal deeper information about the mind(s) of the respondents. We will find that deeper information when we use clustering in the next section to identify mind-sets.
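
As a worked example, with hypothetical numbers invented for clarity rather than taken from Table 2: a vignette whose four elements carry coefficients of +8, +6, +4, and +2, added to the total-panel baseline of 35, gives an estimated total of 35 + 8 + 6 + 4 + 2 = 55, i.e., an estimated 55% of respondents would rate that particular vignette 7-9.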

Table 2: Strong performing elements for the Total Sample and for key subgroups defined by how the respondent classifies himself or herself. The table presents only those strong-performing elements with average impacts of 8 or higher in at least one self-defined subgroup.


Deeper, Possibly More Fundamental Structures of the Mind by Clustering

Up to now we have looked at people as individuals, perhaps falling into convenient groups defined by easy-to-measure variables such as gender, age, income. We could multiply our easy-to-measure variables by asking our respondents lots of questions about themselves, about their attitudes towards financial investors, about their feelings towards risk versus safety, and so forth. Then, we could classify the respondents by the different groups to which they belong, searching for a possible co-variation between group membership and response pattern to elements (Table 3).

Table 3: Performance of the strongest elements in the three mind-sets emerging from the cluster analysis. People in Mind-Set 1 appear to be the target group, the most promising clients for the financial advisor.


The just-described approach typifies the conventional way of thinking about people. We define people as belonging to groups and then search out the linkage between such groups and some defined behavior. Scientists call this strategy the hypothetico-deductive method, beginning first with a sense of ‘how the world might work,’ and then running an experiment to confirm, or just as likely, to falsify that hypothesis. We work from the top down, thinking about what might happen and proceeding merrily to validate or reject that thinking.

Let’s proceed in a different manner, without hypothesizing about how the world works. Let’s proceed with the data we have, looking instead for basic groups who show radically different, interpretable patterns. In the world of color this is analogous to looking for the basic colors of the spectrum, red, yellow, blue, which must emerge out of the measured thousands of colors of flowers. Let’s work from the bottom up, in a more pointillistic, empirical fashion, emulating Francis Bacon in his Novum Organum.

How then do we do this? How do we find naturally occurring groups of people in a specific population who show different patterns of behavior or at least responses for the micro, limited area? That is, we are working with a small corner of reality, one’s responses to messages about choosing a financial advisor. It’s a limited aspect of reality. How is that reality constituted? Are there different groups of minds out there, groups wanting different features? Are these groups of minds interpretable? To continue with the aforementioned metaphor, can we find the basic colors for this aspect of reality, the red/blue/yellow, not of the whole world, but the red/blue/yellow of choosing a financial advisor?

That we have limited our focus to the limited, micro area of messaging for client acquisition by a financial advisor makes our job easier:

  1. We are working in a corner, a nook, a little region of reality. That small region is, however, quite granular. We already have rich material produced by our study: 36 elements and 241 profiles of impact values, telling us how 241 individuals value the individual elements.
  2. Focusing only on that small wedge of reality, let us see whether there is a deeper structure, focusing only on the reality of choosing a financial advisor and using only the mind of the consumer as a way to organize reality. Continuing our metaphor of colors, we have come upon a new, limited aspect of reality.
  3. What are the basic dimensions of that new, limited aspect of reality? We have only two ground rules, Parsimony and Interpretability. Ground Rule 1, Parsimony: we should be looking for primaries, the fewer the better, for this new aspect of reality, our mind of selecting the investment advisor. Ground Rule 2, Interpretability: we must be able to interpret these primaries in a simple way. They must make sense, must tell a story.
  4. The foregoing introduction leads us naturally to our data, our 241 rows (one per respondent) and our 36 columns (one per element). The numbers in the 36 columns are the 36 coefficients from the model relating the presence/absence of the 36 elements to the binary transformed rating. We apply the method of cluster analysis to this 241 x 36 matrix (see the sketch after this list). We do not incorporate the additive constant into the cluster analysis, because it gives no information about the response to particular elements, the focus of the cluster analysis.
  5. Cluster analysis puts our 241 respondents first into two groups, then into three groups, then into four groups, and so forth. These are clusters, which we can call mind-sets or viewpoints because they represent different viewpoints that people have about what is important in the interaction with a financial advisor. Furthermore, the word ‘viewpoint’ emphasizes the psychological nature of the cluster, that we are dealing with the mind here, the mind as it organizes one small corner of reality, the interaction with a financial advisor.
  6. We end up with a solution suggesting three different viewpoints, as Table 3 shows. These three viewpoints are named by virtue of the strongest performing elements in each viewpoint. The additive constants, our baselines, lie in a small range and are fairly low in magnitude, 30-40. There is no mind-set just ready to spring to attention, willing to buy the services of the financial advisor; that ready-to-act mind-set would be identified by a high additive constant.
  7. The total sample shows no strong elements. This means that without any knowledge of the mind of the prospect it’s unlikely that someone will know what to say, or the right thing to say. Perhaps the strongest message, with a coefficient of +7 (an additional 7% interested in working with the advisor) is the phrase: Tell us when you want to retire, and we will develop an action plan to get you there.
  8. The real differences come from the elements as responded to by the individuals in the different mind-sets. Our most promising group is Mind-Set 1, comprising 70 of our 241 respondents, or 29%. Use the six strong performing elements and one is likely to win over these respondents.
  9. If nothing else but the data in Table 3 were known, how might the salesperson ‘know’ that she or he is dealing with a prospect from Mind-Set 1, versus Mind-Set 2 or Mind-Set 3, the less promising mind-sets, the ones harder to convince? Table 3 tells us precisely what to say once we find the people, a major advance over the knowledge that we began with, but not the whole story. It will be our job to assign a new person, with some confidence, to one of the three mind-sets, in order to proceed with the sales effort. Hopefully, most prospects will belong to Mind-Set 1.
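
Below is a minimal sketch of the clustering step. The paper specifies only ‘cluster analysis’; k-means is used here as one common choice, and the coefficient matrix is simulated rather than taken from the actual study data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the real input: 241 respondents x 36 element coefficients
# from the per-respondent OLS models (additive constant excluded).
rng = np.random.default_rng(1)
coef_matrix = rng.normal(0, 5, size=(241, 36))

# Parsimony: try a few small numbers of clusters (mind-sets).
for n_mindsets in (2, 3, 4):
    km = KMeans(n_clusters=n_mindsets, n_init=10, random_state=0)
    labels = km.fit_predict(coef_matrix)
    print(n_mindsets, "mind-sets, sizes:", np.bincount(labels))

# Interpretability: for the chosen 3-cluster solution, the strongest
# elements of each mind-set are the largest centroid entries.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(coef_matrix)
for ms, centroid in enumerate(km.cluster_centers_, start=1):
    top3 = np.argsort(centroid)[::-1][:3]
    print(f"Mind-Set {ms}: strongest elements {top3.tolist()}")
```

The two ground rules appear directly in the code: parsimony as the small range of cluster counts tried, and interpretability as the inspection of each centroid’s strongest elements.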


Figure 2: An example of a test vignette. The elements appear in a centered format with no effort to connect the elements, a format which enhances ‘information grazing.’ The vignette shows the ratings scale at the bottom, and the progress in the experiment at the top right (screen 15 out of 60).

Finding Viewpoints (Minds) in a Population

The foregoing results suggest that we might have significantly more success focusing on the group of people who are most ready to work with the financial advisor. But how do we find these people in the population? The analysis is data analytics, but exactly what should be done? And, in light of the enormous opportunities available to those who can consistently identify these mind-sets and then act on the knowledge, how can we create an affordable, scalable, ‘living’ mind-set assignment technology?

We walk around with lots of numbers attached to us. Data scientists can extract information about us from our tracks, whether these tracks are left by our behavior (e.g., websites that we have visited), by forms that we have filled out and that are commercially purchasable (e.g., through Experian or Trans Union or any of the other commercial data providers, or through loyalty programs), or even by questionnaires that respondents complete in the course of their business transactions, medical transactions, and so forth.

All of the available data, properly mined, collated, analyzed, and reported, might well tell us when a person is ready to hire a financial advisor, e.g., upon the occasion of marriage, a child, a promotion, a job change, a move to another city, and so forth. But just what do we say to this particular prospect, the person standing before us in person, or interacting with our website, or even sitting at home destined to be sent a semi-impersonal phone message, email, or letter? In other words, and more directly: what are the precise words to say to this person?

Those in sales know that an experienced salesperson can intuit what to say to the prospect. Perhaps the answer is to hire only experienced, competent salespeople with 20 years of experience. After the first 100 of them are hired, what should be done with the millions of salespeople who need a job but lack the experience, the intuition, and the track record of successes, and who are perhaps new to the workforce? In other words, how do we scale this knowledge of the mind of people, so that everyone can be sent the proper message at the right time, whether by a salesperson or perhaps even by e-commerce methods, by websites instead of salespeople?

The foregoing results in Table 3 show us what to say and to whom, especially to Mind-Set 1. The problem now becomes one of discovering the mind-set to which a specific person belongs. Unfortunately, people do not come with brass plates across their foreheads telling us the viewpoints to which they belong. And there are many viewpoints to discover for a person, as many sets of viewpoints as there are topic areas for Mind Genomics. The bottom line here is that data scientists working with so-called Big Data might be able to infer that a person is likely to be ready for a financial advisor, but as currently constituted, the same Big Data is unlikely to reveal the mind-set to which the individual person belongs. We have petabytes of data, reams of insights, but not the knowledge, the specificity, about the way the mind works for any particular, limited, operationally defined topic in the reality of our experience.

We move now to the second phase of our work reported here, discovering the viewpoint to which any person belongs. We have already established the micro-science for the financial planner, the set of phrases to use for each of the three mind-sets uncovered and explicated in a short experiment. We know from our 241 respondents the mind-set to which each person belongs, having established the mind-sets and each individual’s mind-set membership by cluster analysis. How then do we identify any new person, anywhere, as belonging to one of our three mind-sets, and thus know just what to say to that person?

In today’s computation-heavy world one might think that the best strategy is to ‘mine’ the data with an armory of analytic tools, spending hours, days, weeks, or months attempting to figure out the relation between who a person is and what to say in this small, specific, virtually micro-world. Once that computation is exhausted, there may be some modest covariation between a formula encompassing all that is known about a person and membership in the mind-set. A simpler way, developed by authors Gere and Moskowitz, called the PVI (personal viewpoint identifier), does the same task in minutes, at the micro-level, with modest computer resources, and with the same granularity as the original Mind Genomics study from which the mind-sets emerged.

In simple terms, the PVI works with the data from the Mind Genomics study, viz., the specific information from which the mind-sets emerged. The PVI system perturbs the data using a Monte-Carlo procedure and, over 20,000+ runs, identifies the combination of elements which best differentiates among the segments. The PVI emerges with six elements, all taken from the original study, each rated on a two-point scale. The pattern of responses to the six questions assigns a new person to one of the three (or two) mind-sets.
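
The published description says only that the PVI perturbs the study data over many Monte-Carlo runs and selects the six most differentiating elements; the sketch below invents a concrete selection criterion (nearest-centroid assignment accuracy under noise) as a stand-in, again with simulated data rather than the study’s own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the clustering outputs: per-respondent coefficients
# and mind-set labels.
coefs = rng.normal(0, 5, size=(241, 36))
labels = rng.integers(0, 3, size=241)

def assignment_accuracy(subset, data, labels):
    """How well a nearest-centroid rule on these elements recovers the mind-sets."""
    sub = data[:, subset]
    centroids = np.stack([sub[labels == m].mean(axis=0) for m in range(3)])
    dist = ((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return (dist.argmin(axis=1) == labels).mean()

# Monte-Carlo search: on each run, perturb the data with noise and score
# a random six-element subset; keep the subset that separates best.
best_subset, best_score = None, -1.0
for _ in range(20000):
    subset = rng.choice(36, size=6, replace=False)
    noisy = coefs + rng.normal(0, 1.0, size=coefs.shape)  # perturbation
    score = assignment_accuracy(subset, noisy, labels)
    if score > best_score:
        best_subset, best_score = subset, score

print("six differentiating elements:", sorted(best_subset.tolist()))
print(f"assignment accuracy under noise: {best_score:.2f}")
```

A new person’s answers to the six corresponding two-point questions are then matched against the mind-set profiles to produce the assignment.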

Figure 3 shows an example of the introduction to the PVI, which asks for background information from the respondent. It will be this information which allows the user of the PVI to create a database of ‘mind-sets’ of people for future research and marketing efforts. Furthermore, the introduction to the PVI records information about the time when the PVI is being completed (important for future work on best contact times), age, gender, etc. The specific questions can be included or suppressed, depending upon the type of information that will be necessary when the PVI is used (viz., research on the time-of-day dependence of mind-sets, if it actually exists). As of this writing (2023) the PVI can be accessed at: https://www.pvi360.com/TypingToolPage.aspx?projectid=213&userid=2018.

Figure 4 shows the actual PVI portion, comprising three questions about one’s current life-stage (what one is thinking about in terms of retirement planning), and then six questions designed to assign the new person to one of the three mind-sets. It is important to realize that instead of requiring weeks of heavy computation, the entire process, from the set-up of the PVI to deployment, takes approximately 20 minutes. Like the work to set up a Mind Genomics experiment, the system to create a PVI for a study is ‘templated,’ making it appropriate for ‘industrial strength’ data acquisition. Several studies can be incorporated into one PVI, with studies randomized and questions randomized, each study or project requiring only six questions developed from the elements. The process is automatic and can be deployed immediately with thousands of participants within the hour.


Figure 3: Introductory page to the PVI (personal viewpoint identifier).

Figure 5 shows the feedback emerging immediately from the PVI. The shaded cell shows the mind-set to which the respondent belongs. The PVI stores the respondent’s background information (Figure 3) and mind-set information (Figure 5) in a database. Furthermore, the PVI is set up to send the respondent immediately to a website, or to show the respondent a video relevant to the mind-set to which the respondent has been assigned by the PVI (see Figure 6). Thus, the Mind Genomics system, comprising knowledge acquisition by a small, affordable experiment coupled with the PVI, expands the scope of Mind Genomics so that the knowledge of mind-set membership can be deployed among a far greater population, those who have been assigned to a mind-set by the PVI.


Figure 4: The actual PVI for the study, showing three up-front ‘questions’ about one’s general attitude, and then six questions and a 2-point response scale for each, used to assign the person to one of the three mind-sets.


Figure 5: Immediate feedback about mind-set membership.

Evolving into BIG MIND – The Natural Marriage of PVI-enhanced Mind Genomics with Big Data

Up to now we have been dealing with small groups of individuals whose specific mind-sets or viewpoints in a specific, limited topic area we can discover and then act upon. But what are we to do when we want to deal with thousands, millions, even billions of new people? Consider, for example, the points in Figure 7, top panel. Consider these points as individuals. Measurement of behaviors shows how these individuals connect with each other at a superficial level, at the phenotypical level. There are many visualization techniques which create the interconnections based upon one or another criterion. And from these visualizations we can ascribe something to the network. We can deduce something about the network and the nodes, although perhaps not much. We are like psychologists studying the rat. If only the rat could talk, how much it would say about what it is doing and why. Alas, it is a rat, or perhaps a pigeon, the favorite test subjects of those who follow strict behaviorism of the type suggested by B.F. Skinner and his Behaviorist colleagues and students at Harvard University. (Full disclosure: author Moskowitz was a graduate student in some of Skinner’s seminars and colloquia at Harvard, 1965-1968.)


Figure 6: Set up template for the PVI, showing the ability to show the respondent a video or send the respondent to a landing page, depending upon the mind-set to which the respondent has been assigned by the PVI.

What happens, however, when we know the mind of each person, or at least their membership in, say, four or ten or perhaps 100 or perhaps 1000 different topic areas relevant to the granular richness of DAILY EXPERIENCE? What deep, profound understanding would emerge were we to know the network itself, the WHO and BEHAVIOR of the people, coupled with the structure of their MINDS, viz., the MIND within each node of the network!

Consider Figure 7. The top panel shows the aggregate of people. We know WHO the people are. The bottom panel shows the network, WHAT the people do, how they link to each other. What if we now knew the WHY for each point, how each point thinks about a set of topics? We would create a web of interconnected points and discover some of the commonalities of the points, based not on who the points are or what the points did, but rather on how the points think about many relevant topics.

How do we move from the Mind Genomics of one topic, say our choice of financial advisor, to many topics in a common space, say the space of ‘personal finances,’ and then to typing people around the world on a continuing basis, as life progresses and events progress: thousands, not hundreds, and finally millions, tens of millions of people? In essence this ‘project’ creates a true ‘wiki of the mind and society,’ empirically sound, extensive, actionable, and archival for decades. In essence, how do we go from a map of nodes to a map of connected minds in every-day life, across the span of countries and time? (Figure 7)


Figure 7: Example of nodes (i.e., people), perhaps connected by a network. The top panel shows the network of people as points. The bottom panel shows the potential of knowing the mind of each person, i.e., each point in the network.

To reiterate, our goal is to understand the specific mind-set memberships of each point in the network, where the point corresponds to a person. The big picture is thus millions, perhaps hundreds of millions of points, people, observed in two ways, and even expanded in a third way to billions of people who have not completed the PVI but who may be ‘imputed’ to belong to a mind-set through look-alikes. This is the DVI, the Digital Viewpoint Identifier, explicated in step 4 below:

1. Granular Mental Information about Each Node

The minds, or at least the patterns of mind-set membership, of many people are determined through Mind Genomics and the PVI, for a set of different topic areas. There may be as few as one topic area, or several dozen, or even 100 or more topics. This information can be obtained through small-scale Mind Genomics studies, executed and analyzed within 1-2 hours (www.BimiLeap.com), followed by an easy-to-deploy PVI (www.PVI360.com).

2. Correlate Behavior Observed Externally with the Underlying Mind-sets

The interactions of nodes with each other are measured objectively, either by who the people are or by how they behave: what they view on the Web, what they order, with whom they interact in conversations. This information is readily available today from various sources, known collectively as Big Data.

3. Expand the PVI (Personal Viewpoint Identifier)

The goal here is to work with 1000 respondents, each of whom provides 5 minutes of her or his time to complete a set of PVI’s. Let’s choose a number of PVI’s, say 12. Each PVI of six questions takes about 15 seconds to complete. In three minutes, a person can do 12 PVI’s, comprising 72 questions.

4. Augment the Data

Let’s purchase publicly available information about these 1000 known respondents. The goal now is to predict the viewpoints of the 1,000 people on the 12 topics from purchasable data about those people. Once that is done, one has developed a simple predictive model which uses readily obtainable, purchasable data to estimate the mind-set membership of a person in each of the 12 topic areas. This simple predictive model is the aforementioned DVI, the Digital Viewpoint Identifier. It now becomes straightforward to create a ‘scoring system’ which moves systematically through the data already available and ‘scores’ each respondent on 12, 120, or even 1200 different granular topics, to create a true Wiki of the Mind and Society.
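
A minimal sketch of this step follows, again with simulated stand-ins: a logistic-regression classifier plays the role of the DVI for a single topic (one model per topic would be trained the same way), and the link between profile data and mind-set is artificially injected here so the model has something to learn; in practice the PVI supplies the labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Hypothetical purchasable profile data for the 1000 PVI respondents.
profiles = rng.normal(size=(1000, 20))            # 20 purchasable attributes

# Simulated link between profiles and PVI-measured mind-set (0, 1, or 2).
mindset = np.digitize(profiles[:, 0] + 0.5 * profiles[:, 1], [-0.5, 0.5])

# Train the digital identifier: purchasable data -> mind-set.
X_tr, X_te, y_tr, y_te = train_test_split(profiles, mindset, random_state=0)
dvi = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"holdout accuracy: {dvi.score(X_te, y_te):.2f}")

# Scoring system: impute a mind-set for people who never took the PVI,
# using only their purchasable records.
new_people = rng.normal(size=(5, 20))
print("imputed mind-sets:", dvi.predict(new_people).tolist())
```

Repeating this for each of the 12 topics yields the 12-number ‘mind vector’ attached to every scored record.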

5. Fast time frame, low cost: Let’s consider a simple scenario, the creation of this mass of data for the financial trajectory of a person, from early adulthood to late adulthood, through all the relevant financial aspects. Let’s assume 300 different identifiable activities involved in decision-making. The foregoing steps mean that within a period of six months to one year, and with some concerted effort, it becomes possible, indeed quite straightforward, to move from 300 topic studies to 300 micro-sciences and viewpoints, to the creation of 300 digital viewpoint identifiers, and then to the application of those identifiers, i.e., scoring systems, to the purchasable data of 1-2 billion people. Within the Big Data the data scientist and entrepreneur will have an associated Big Mind, a vector of perhaps 300 numbers underneath each node, each person, each number corresponding to one of those 300 activities. The analytic possibilities emerging from knowing both the behavior and the mind-set of the behaving organism on 300 (or more) topics can only be surmised. One would not be far off to think that the possibilities are enormous for a new understanding of behavior, possibly a new engineering of society.

Acknowledgments

Attila Gere thanks the support of the Premium Postdoctoral Researcher Program of the Hungarian Academy of Sciences.

References

  1. Ordenes FV, Theodoulidis B, Burton J, Gruber T, Zaki M (2014) Analyzing Customer Experience Feedback Using Text Mining: A Linguistics-Based Approach. Journal of Service Research 17: 278-295.
  2. Aciar S (2010) Mining Context Information from Consumer’s Reviews. 2nd Workshop on Context-Aware Recommender Systems (CARS-2010).
  3. Bucur C (2015) Using Opinion Mining Techniques in Tourism. Procedia Economics and Finance 23(Supplement C): 1666-1673.
  4. Ritbumroong T (2015) Analyzing Consumer Behavior Using Online Analytical Mining. In Marketing and Consumer Behavior: Concepts, Methodologies, Tools and Applications (1st ed, 894-910) IGI Global.
  5. Roll I, Baker RS, Aleven V, McLaren BM, Koedinger KR (2005) An analysis of differences between preferences in real and supposed contexts. In: User Modeling, Springer Berlin Heidelberg, 367-376.
  6. Talib R, Hanif MK, Ayesha S, Fatima F (2016) Text Mining: Techniques, Applications and Issues. International Journal of Advanced Computer Science and Applications 7: 414-418.
  7. Zhong N, Li Y, Wu ST (2012) Effective Pattern Discovery for Text Mining. IEEE Transactions on Knowledge and Data Engineering 24: 30-44.
  8. Auinger A, Fischer M (2008) Mining consumers’ opinions on the Web. In: FH Science Day, 410-419, Linz, Österreich.
  9. Fan W, Wallace L, Rich S, Zhang Z (2006) Tapping the power of text mining. Communications of the ACM 49: 76-82; Fatima F, Islam W, Zafar F, Ayesha S. Impact and usage of internet in education in Pakistan. European Journal of Scientific Research 47: 256-264.
  10. Fenn J, LeHong H (2012) Hype Cycle for Emerging Technologies. Gartner.
  11. Gofman A, Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
  12. Jung CG (1976) Psychological Types (The Collected Works of C.G. Jung). Princeton, NJ: Princeton University Press.
  13. Moskowitz HR, Batalvi B, Lieberman E (2012) Empathy and Experiment: Applying Consumer Science to Whole Grains as Foods. In Whole Grains Summit (2012) (pp. 1-7) Minneapolis, MN: AACC International.
  14. Gofman A (2012) Putting RDE on the R&D Map: A Survey of Approaches to Consumer-Driven New Product Development. In A. Gofman, H. R. Moskowitz (Eds.), Rule Developing Experimentation: A Systematic Approach to Understand & Engineer the Consumer Mind (pp. 72-89) Bentham Books.

Developing an Inner Psychophysics for Social Issues: Reflections, Futures, and Experiments

DOI: 10.31038/IMROJ.2023813

Abstract

This paper introduces Inner Psychophysics, a new approach to measuring the values of ideas, applying the approach to the study of responses to 28 different types of social problems. The objective of Inner Psychophysics is to provide a number, a metric for ideas, with the number showing the magnitude of the idea on a specific dimension of meaning. The approach to create this Inner Psychophysics comes from the research system known as Mind Genomics. Mind Genomics presents the respondent with the social problem, and a unique set of 24 vignettes presenting solutions to the problem. The pattern of responses to the vignettes is deconstructed into the contribution of each ‘answer’, through OLS (ordinary least squares) regression. The approach opens up the potential of a ‘metric for the social consensus,’ measuring the value of ideas relevant to society as a whole, and to the person in particular.

Introduction

Psychophysics is the oldest branch of experimental psychology, dealing with the relation between the physical world (thus ‘physics’) and the subjective world of our own consciousness (thus ‘psycho’). The question might well be asked: what does this presumably arcane psychological science have to do with up-to-date, indeed new, approaches to science? The question is relevant, as the paper and data will show. The evolution of ‘inner psychophysics’ provides today’s researcher with a new set of tools to think about the problems of the world. The founder of today’s ‘modern psychophysics,’ the late S.S. Stevens (1906-1973), encapsulated the opportunity in his posthumous book, ‘Psychophysics: An Introduction to its Perceptual, Neural and Social Prospects.’ Stevens also introduced the phrase ‘a metric for the social consensus’ in his discussions about the prospects of psychophysics in the world of social issues. This paper presents the application of psychophysical thinking and disciplined rigor to the study of how people ‘think’ about large-scale societal problems [1,2].

The original efforts in psychophysics began about 200 years ago, with the work of physiologists and with the effort to understand how people distinguish different levels of the same stimulus, for example, different levels of sugar in water, or today, different levels of sweetener in cola. Just how small a difference can we perceive? Or, to push things even further, what is the lowest physical level that we can detect? [3] These are the difference threshold and the detection threshold, respectively, both of interest to scientists, but of relatively little interest to the social scientist and researcher.

The important thing to come out of psychophysics is the notion of ‘man as a measuring instrument,’ the notion that there is a metric of perception. Is there a way to assign numbers to objects, or better, to experiences of objects? In simpler terms, think of a cup of coffee. If we can measure the subjective perception of aspects of that coffee, such as its ‘coffeeness,’ then what happens when we add milk? Or add sugar? Or change the coffee roast, and so forth? At a mundane level, can we measure how much perceived ‘coffeeness’ changes? With that in mind, can we do this type of measurement for social issues?

Stevens’ ‘Outer’ and ‘Inner’ Psychophysics

By way of full disclosure, author HRM was one of the last PhD students of S.S. Stevens, receiving his PhD in the early days of 1969. Some 16 months before, Stevens had suggested that HRM ‘try his hand’ at something such as taste or political scaling, rather than pursuing research dealing with topics requiring sophistication in electronics, such as hearing and seeing. That suggestion would become a guide through a 54-year future, now a 54-year history. The notion of measuring taste forced thinking about the mind: the way people say things taste versus how much they like what they taste. This first suggestion, studying taste, focused attention on the inner world of the mind: what things taste like, why people differ in what they like, whether there are basic taste preference groups, and so forth. The well-behaved and delightfully simple regularities, ‘change this, you get that,’ working so well in loudness, seem to break down in taste.

If taste was the jumping-off point from this outer psychophysics to the measurement of feelings such as liking, then the next efforts would be even more divergent. How does one deal with social problems, which have many aspects to them? We are no longer dealing with simple ingredients, which when mixed create a food, and whose mixtures can be evaluated by a ‘taster’. We are dealing now with the desire to measure the perception of a compound, complex situation, the resultant of many interacting factors. Can the spirit of psychophysics add something, or do we stop at sugar in coffee, or salt in pickles?

Some years later, through ongoing studies of perception, it became obvious that one could deal with the inner world, using man as a measuring instrument. The slavish adherence to systematic change of the stimulus in fixed degrees, followed by measurement, had to be discarded. It would be nice to say that a murder is six times more serious than a bank robbery with two people injured, but that type of slavish adherence would not create this new inner psychophysics. It would simply be adapting the hallowed methods of psychophysics (systematically change, then measure), moving from tones and lights to sugar and coffee, and now to statements about crimes. There would be some major efforts, such as the measurement of the utility of money [4], an effort to maintain the numerical foundations of psychophysics because money has an intrinsic numerical feature. Another would be the relation between the perceived seriousness of a crime and the measurable magnitude of punishment. But there had to be a profound re-working of the problem statement.

Enter Mathematics: The Contribution of Conjoint Measurement, and Axiomatic Measurement Theory

If psychophysics provided a strong link to the empirical world, indeed a link which presupposed real stimuli, then mathematical psychology provided a link to the world of philosophy and mathematics. The 1950’s saw the rise of interest in mathematics and psychology [5]. The goal of mathematical psychology in the 1950’s and 1960’s was to put psychology on a firm theoretical footing. Eugene Galanter became an active participant in this newly emerging field, working at first with Stevens in psychophysics at Harvard, and later with the famed mathematical psychologist R. Duncan Luce. Luce and his colleagues were interested in the ‘fundamental measurement’ of psychological quantities, seeking to measure psychology with the same mathematical rigor with which physicists measured the real world. That effort would bring to fruition the Handbook of Mathematical Psychology [6], the work of Luce and Tukey [7], as well as the efforts of psychologist Norman Anderson [8], who coined the term ‘functional measurement.’

The simple idea which is relevant to us is that one could mix test stimuli, ideas, not only food ingredients, instruct the respondent to evaluate these mixtures, and estimate the contribution of each component to the response assigned to the mixture. Luce and Tukey suggested deeply mathematical, axiomatic approaches to do that. Anderson suggested simpler approaches, using regression. Finally, the pioneering academics at the Wharton Business School, Paul Green and Yoram (Jerry) Wind, showed how the regression approach could be used to deal with simple business problems [9,10].

The history of psychophysics and the history of mathematical psychology met in the systematics delivered by Mind Genomics. The mathematical foundations had been laid down by axiomatic measurement theory. The objective, systematized measurement of experience had been laid down first by psychophysics, and afterwards by applied psychology and consumer research. What remained was to create a ‘system’ which could quantify experience in a systematic way, building databases, virtually ‘wikis of the mind,’ rather than simply providing one or two papers on a topic which solved a problem with interesting mathematics. It was time for the creation of a corpus of psychophysically motivated knowledge, an inner psychophysics of thought, rather than the traditional psychophysics of perception.

Reflections on the Journey from the Outer Psychophysics to an Inner Psychophysics

New thinking is difficult, not so much because of the problems themselves as because of the necessity to break out of the paradigms which one ‘knows’ to work, even though the paradigm may no longer serve its purpose in an optimal fashion. Inertia seems to be a universal law, whether the issue be science and knowledge, or business. This is not the place to discuss the business aspect, but it is the place to shine a light on the subtle tendency to stay within the paradigms that one learned as a student, the tried and true, those paradigms which get one published.

The beginning of the journey to inner psychophysics occurred with a resounding NO from S.S. Stevens, in 1967, when author HRM asked permission to combine studies of how sweet an item tasted with studies of how much the item was liked. This effort was a direct step away from simple psychophysics, with its implicit notion of a ‘right answer’. This notion of a ‘right answer’ summarizes the worldview of Stevens and associates: that psychophysics was searching for invariance, for ‘rules’ of perception. Departures from the invariances would be seen as the irritating contribution of random noise, such as the ‘regression effect’ [11], the tendency of research to underestimate the strength of the relation between physical stimulus and subjective, judged response. ‘Hedonics’ was a complicating, ‘secondary factor,’ which could only muddle the orderliness of nature, and teach nothing, at least to those imbued with the exciting Harvard psychophysics of the 1950’s and 1960’s.

The notion of cognition, hedonics, and experience as factors driving the perception of a stimulus could not be handled easily in this outer psychophysics except parametrically. That is, one could measure the relation between the physical stimulus and the subjective response, create an equation with parameters, and see how these parameters changed when the respondent was given different instructions, and so forth. An example would be judging the apparent size of a circle of known diameter versus judging its actual size. It would be this limitation, this refusal to accept ideas as subject to psychophysics, that author HRM would end up attempting to overcome during the course of the 54-year journey.

The course of the 54-year journey would be marked by a variety of signal events, events leading to what is called in today’s business world ‘pivoting.’ The early work on the journey dealt with judgments of likes and dislikes, as well as sensory intensity [12]. The spirit guiding the work was the same: search for lawful relations; change one parameter, and measure the change in a parameter of that lawful relation. The limited, disciplined approach of the outer psychophysics was, however, too constraining. It was clear at the very beginning that the rigorous scientific approaches to measuring perceptual magnitudes using ‘ratio-scaling’ would be a ‘non-starter.’ The effort of the 1950’s and 1960’s to create a valid scale of magnitude was relevant, but not productive in a world where the application of the method would be drowned out by methodological differences and minor issues. In other words, squabbles about whether the ratings possessed ‘ratio scale’ properties might be interesting, but not particularly productive in a world begging for measurement, for a yet-to-be sketched-out inner psychophysics.

The movement away from simple studies of perceptual magnitudes was further occasioned by the effort to apply the psychophysical thinking to business issues, and the difficulties ensuing in the application of ratio scaling methods, such as magnitude estimation. The focus was no longer on measurement, but on creating sufficient understanding about the stimulus, the food or cosmetic product, so that the effort would generate a winner in the marketplace.

The path to understanding comprised experiments with mixtures: first mixtures of ingredients, then mixtures of ideas, steps needed to define the product, to optimize the product itself, and then to sell the product. Over time, the focus turned mainly to ideas, and to the realization that one could mix ideas (statements, messages), present these combinations to respondents, get the responses to the combinations, and then, using statistics such as OLS (ordinary least-squares) regression, estimate the contribution of each idea in the mixture to the total response.
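As a minimal illustration of that deconstruction, consider the toy sketch below. The numbers and idea names are invented for exposition, not taken from any study reported here; the point is only that coding each vignette as 0/1 presence indicators lets OLS recover each idea’s contribution to the mixture’s rating.

```python
import numpy as np

# Toy sketch: 6 vignettes built from 3 ideas (all values invented).
# Each row of X marks which ideas appeared in a vignette; y holds the
# ratings assigned to those mixtures.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
y = np.array([70, 55, 45, 50, 30, 15], dtype=float)

# OLS with no additive constant, as in the models described later in
# this paper: each coefficient is the rating contribution of one idea.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, k in zip(["idea A", "idea B", "idea C"], coef):
    print(f"{name}: {k:+.1f} points")
```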

Inner Psychophysics Propelled by the Vision of Industrial-scale Knowledge Creation

A great deal of what the author calls the “Inner Psychophysics” came about because of the desire to create knowledge at a far more rapid rate than was typical, and especially the dream that the inevitable tedium of a psychophysical experiment could simply be eliminated. During the 20th century, especially until the 1980’s, researchers were content to work with one subject at a time, the subject being called the ‘O,’ an abbreviation for ‘observer’ (German Beobachter). The fact that the respondent is an observer suggests a slow, well-disciplined process, during which the experimenter presents one stimulus to one observer and measures the response, whether the response is to say when the stimulus is detected as ‘being there,’ when the stimulus quality is recognized, or when the stimulus intensity is assigned a number to report its perceived magnitude.

The psychophysics of the last century, especially the middle of the 20th century, focused on precision of stimulus, and precision of measurement, with the goal of discovering the relations between variables, viz., physical stimuli versus perception of those stimuli by the person. It is important to keep in mind the dramatic pivot or change in thinking that would ensue when reality and opportunity presented themselves as disturbances. Whereas psychophysics of the Harvard format searched for lawful relations between variables (physical stimulus levels; ratings of perceived magnitude), the application of the same thinking to food and to ideas was to search for usable relations. The experiments need not reveal an ‘ultimate truth’, but rather needed to be ‘good enough,’ to identify a better pickle, salad dressing, orange juice or even features of a cash-back credit card.

The industrial-scale creation would be facilitated by two things. The first was a change in direction. Rather than focusing one’s effort on the laws relating physical stimulus and subjective response (outer psychophysics), the new, and far less explored, area would focus on measuring ideas, not actual physical things (inner psychophysics).

The second would focus on method: working not with single ideas, but deliberately with mixtures of ideas, presented to, and evaluated by, the respondent in a controlled situation. These mixtures of ideas, called vignettes, would be created by experimental design, a systematic prescription of the composition of each mixture, viz., which phrases or elements would appear in each vignette. The experimental design ensured that the researcher could link a measure of the respondent’s thinking to the specific elements. The rationale for vignettes was the realization that single ideas are not the typical ‘product’ of experience. We think in mixtures because our world comprises compound stimuli, mixtures of physical stimuli, and our thinking in turn comprises different impressions, different thoughts. Forcing the individual to focus on one thought, one impression, one message or idea is more akin to meditation, whose goal is to shunt the mind away from the blooming, buzzing confusion of the typically disordered mind, filled with ideas flitting about.

The world view was thus psychophysics, the search for relations and for laws. The world view was also controlled complexity, with the compound stimulus taking up the attention of the respondent and being judged. The structure of the mixtures appeared to be a ‘blooming, buzzing confusion,’ in the words of Harvard psychologist William James. To create the Inner Psychophysics meant to prevent the respondent from taking active psychological control of the situation. Rather, the design forced the respondent to pay attention to combinations of meaningful messages (vignettes), albeit messages somewhat garbled in structure, which avoided revealing the underlying structure and thus prevented the respondent from ‘gaming’ the system.

As will be shown in the remainder of this paper, the output of this mechanized approach to research produced an understanding of how we think and make decisions, in the spirit of psychophysics, at a pace and scope that can only be described as industrial scale.

The Mind Genomics ‘Process’ for Creating an Experiment

The study presented here comes from a developing effort to understand the mind of ordinary people in terms of what can solve well-known social problems. At a quite simple level, one can either ask respondents to tell the researcher what might solve the problems, or present solutions to the respondent, and ask the respondent to scale each solution in terms of expected ability to solve the problem. The solutions are concrete, simple, relevant. The pattern of responses gives a sense of what the respondent may be thinking with respect to solving a problem.

The study highlighted here went several stages beyond that simple, straightforward approach. The stimulus for the underlying thinking came from traditional personality theory and from cognitive psychology. In personality theory, the psychologist Rorschach, among many others, believed that people are not often able to paint a picture of their own mind at the deepest levels. Rorschach developed a set of ambiguous pictures and required the respondent to describe them, to tell a story. The pattern of what the respondent saw could tell the researcher how the respondent organized her or his perceptions of the world. Could such an approach be generalized, so that the pictures would be replaced by metaphoric words, rich with meaning? And so was born the current study. The study combines a desire to understand the mind of the individual, the use of Mind Genomics to do the experiment, and the acceleration of knowledge development through a novel set of approaches to the underlying experimental design (see also Goertz & Mahoney [13]).

Let us first look at the process itself.

  1. The structure of the experimental design begins with a single topic (e.g., a social problem), continues with four questions dealing with the problem, and in turn four specific answers to each question. Thus, there are three stages, easy to create, amenable to being implemented through a template. Good practice suggests that the 16 answers (henceforth elements) be simple declarative statements, 14 words or fewer, with no conjunctives. These declarative statements should be easily and quickly scanned, with as little attention, as little ‘friction’ as possible.
  2. A basic experiment specified 24 unique combinations or vignettes, each vignette comprising 2, 3 or 4 elements. No effort was made to connect these elements. Rather, the elements were simply stacked one atop the other.
  3. The experimental design ensured that each element appeared exactly five times across the 24 vignettes, and that the pattern of appearances made each element statistically independent of the other 15 elements.
  4. The experimental design was set up to allow the 24 vignettes to be subject to OLS (ordinary least-squares) regression, at the level of the individual, or the level of the group, respectively.
  5. A key problem in conventional experimental design is that a single, fixed set of combinations is tested. The quality of knowledge suffers because only that one set of combinations, one small region of the design space, is tested; there is much more to the design space. The researcher’s resources are wasted suppressing the noise in that region, either by eliminating noise (impossible in an Inner Psychophysics) or by averaging out the noise through replication (a waste of resources).
  6. The solution of Mind Genomics is to permute the experimental design [14]. The permutation strategy maintains the structure of the experimental design but changes the specific combinations. The task of permuting requires that the four questions be treated separately, and that the elements within a question be juggled around but remain with the question. In this way, no element is left out; rather, its identification number changes. For example, A1 might become A3, A3 become A1, A2 become A4, and A4 become A2 (see the sketch after this list). At the initial creation of the permuted designs, each new design was tested to ensure that it ran with the OLS (ordinary least-squares) regression package.
  7. Each respondent would test a different set of 24 combinations. What was critical was to create a scientific experiment in which the experimenter need not know anything about the topic in order to explore the full range of the topic as represented by the 16 elements. The data from the full range of combinations tested would quickly reveal which elements performed well, and which performed poorly.
  8. The benefit was that research could once again be exploratory as well as confirmatory, due to the wide variation in the combinations. It was no longer a situation of knowing the answer or guessing at the answer ahead of time. The answer would emerge quickly.
  9. Continuing and finishing with an overview of the permuted design of Mind Genomics: it quickly became obvious that studies need be neither large nor expensive. The ability to create equations or models with as few as 5-10 respondents, because of the ability to cover the design space, meant that one could get reasonable indications from so-called ‘demo studies,’ virtually automatic studies set up and implemented at low cost. The setup takes about 20 minutes once the ideas are concretized in the mind of the researcher. The time from launch (using a credit card to pay) to delivery of the finalized results in tabulated form, ready for presentation, is approximately 15-30 minutes.
  10. It was important to create rapid summarizations of the results. Along with the vision of ‘industrial strength research’ was the vision of ‘industrial scale insights.’ These would be provided by simple templated outputs, along with AI interpretations of the strong performing elements for each key group in the population. The latter would develop into the AI ‘summarizer’.
  11. The final step, as of this writing, is to make the above-mentioned system work simultaneously with a series of different studies, e.g., 25-30 studies, in an effort to create powerful databases across topics, people, cultures, and time. In the spirit of accelerated knowledge development, each study is a carbon copy of every other study, except for one item: the specific topic being addressed. That is, the orientation, rating scale, and elements are identical. What differs is the problem being addressed.
  12. When everything else is held constant, only the topic being varied, we have then the makings of the database of the mind, done at industrial scale.
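To make steps 2 through 6 concrete, here is a simplified, runnable sketch. It is a toy construction under stated assumptions, not the proprietary Mind Genomics design engine: it builds 24 vignettes of three or four elements (the basic design also allows two-element vignettes), with at most one element per question per vignette and each of the 16 elements appearing exactly five times, and then permutes element labels within each question for a new respondent.

```python
import random
from collections import Counter

QUESTIONS, ELEMENTS_PER_Q, VIGNETTES = 4, 4, 24

def base_design():
    """Toy design: 24 vignettes, at most one element per question per
    vignette, each of the 16 elements appearing exactly five times."""
    design = [dict() for _ in range(VIGNETTES)]
    for q in range(QUESTIONS):
        # Question q is absent from 4 vignettes, present in the other 20.
        absent = set(range(4 * q, 4 * q + 4))
        present = [v for v in range(VIGNETTES) if v not in absent]
        for i, v in enumerate(present):
            design[v][q] = i % ELEMENTS_PER_Q   # cycle: each element 5 times
    return design

def permute(design, seed):
    """Step 6: relabel elements within each question; structure unchanged."""
    rng = random.Random(seed)
    relabel = [rng.sample(range(ELEMENTS_PER_Q), ELEMENTS_PER_Q)
               for _ in range(QUESTIONS)]
    return [{q: relabel[q][e] for q, e in v.items()} for v in design]

d = base_design()
counts = Counter((q, e) for v in d for q, e in v.items())
assert all(c == 5 for c in counts.values())      # every element used 5 times
print(permute(d, seed=1)[0])                     # respondent 1, vignette 0
```

Each respondent would receive a different seed, so each tests a different set of 24 combinations while the counting structure of the design is preserved.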

Applying the Approach to the ‘Solution’ of Social Problems

We begin with a set of 28 social problems, and a set of 16 ‘messages’ as tentative solutions to a problem. The problems are simple to describe and are not further elaborated. In turn the 16 elements or solutions are general approaches, such as the involvement of business, rather than more focused solutions comprising specific steps. These 28 problems are shown in Table 1 and the 16 solutions are shown in Table 2.

Table 1: The 28 problems

tab 1

The 28 problems enumerated in Table 1 represent a small number of the many possible problems one can encounter, and Table 2 shows a few of the many solutions that might be applied. The number of possible problems is unlimited. For this introductory study, using the Mind Genomics template, we are limited to four types of solutions for a problem, and four specific solutions of each type.

Table 2: The 16 solutions (four silos, each silo with four solutions)

tab 2

The actual process follows these steps, which give a sense of the total effort needed for the project.

  1. Develop the base study (orientation page, rating scale, questions, answers); Figures 1a and 1b show some relevant screen shots. Each problem is represented by a single phrase describing the problem. That phrase is called ‘the SLUG’. It is the SLUG which changes across the studies, one SLUG for each study (Figure 2).
  2. Create a copy of the base study, changing the nature of the problem in the introduction and in the rating scale. This activity requires about 3-5 minutes for each study due to its repetitive, simple nature. Then launch each study in rapid succession with the same panel requirements (50 respondents), and let each study amass the data from the 50 respondents. The field time is about 30 minutes when the studies are launched during the daytime, and when the respondents have been invited by an on-line panel provider specializing in this type of research. The expected time for Step 2 for 28 studies is about 3-4 hours, to acquire all of the data.
  3. Create the large-scale datafile, comprising one set of 24 rows for each respondent. This ends up being simply a ‘cut and paste’ effort, with slight editing. The 24 rows of data per respondent end up generating 1,200 rows of data for each of the 28 studies. The final database comprises the information about the study, the information about the respondent, a set of 16 columns showing the presence/absence of the 16 elements (answers to the questions), a 17th column showing the rating assigned to the particular vignette, and an 18th column showing the ‘response time’ for the vignette, defined as the time between the appearance of the vignette on the respondent’s screen and the assignment of the rating.
  4. Pre-process the ratings by converting the 5-point rating scale to a new, binary scale. Ratings of 1-3 are converted to 0, to denote that the respondent does not feel that the combination of offered actions presented in the vignette will ‘solve’ the problem. In turn, ratings of 4-5 are converted to 100, to denote that the respondent does feel that the combination will solve the problem. The binary transformation is generally more intuitive to users of the data, who simply want a ‘no or yes.’ To these users the intermediate scale values are hard to interpret, even though those values are tractable for statistical analysis.
  5. Since the 24 vignettes evaluated by a respondent are created according to an underlying experimental design, we know that the 16 independent variables (viz., the 16 solutions) are statistically independent of each other. Thus, the program creates an equation or model relating the presence/absence of the 16 elements to the newly created binary variable ‘will work.’ We express the equation as: Work (0/100) = k1(Solution A1) + k2(Solution A2) + … + k16(Solution D4). To make the results comparable from study to study, the equation is estimated without an additive constant, forcing all the information about the pattern to emerge from the coefficients.
  6. Each respondent thus generates 16 coefficients, the ‘model’ for that respondent. Each coefficient shows the number of points on a 100-point scale for ‘working’ contributed by one of the 16 solutions. Array all the coefficients in a data matrix, each row corresponding to a respondent, and each column corresponding to one of the 16 solutions or elements.
  7. Cluster all respondents in the 28 studies into three groups, independent of the problem topic, based simply on the pattern of the 16 coefficients for each respondent. The clustering method is k-means [15]. The researcher has a choice of the measure of distance or dissimilarity. For these data we cluster using the Pearson model, where the distance between two respondents is the quantity (1-R), with R the Pearson correlation coefficient computed across the 16 pairs of coefficients (a sketch of this step appears after this list). Note again that the clustering program ‘does not know’ that there are 28 studies. The structure of the data is the same from one study to another, from one respondent to another.
  8. Each respondent is assigned to one of the three clusters (now called mind-sets). Afterwards, the researcher creates summary models or equations: first for each study independent of mind-set, second for each mind-set independent of study, and finally for each combination of study and mind-set. These summary models generate four tables of coefficients: for the total panel, and then for Mind-Set 1, Mind-Set 2, and Mind-Set 3, respectively. Each vignette belongs to one respondent, and therefore belongs both to one specific study of the 28 and to one of the three emergent mind-sets. For these final summary models, the (arbitrary) decision was made to discard all vignettes assigned the rating ‘3’ (cannot decide). This decision sharpens the data by considering only the vignettes where a respondent felt that the problem would be solved or would not be solved.
  9. Build three large models or equations relating the presence/absence of the 16 elements (specific solutions) to the binary rating of ‘can solve the problem,’ each model incorporating all respondents in one mind-set. Then build the models for each problem, using the respondents in the appropriate mind-set. This creates 28 (problems) x 3 (mind-sets) = 84 separate models. We look at the patterns across the tables to get a sense of the different mind-sets, how they differ from the Total Panel, and what seem to be the defining aspects of each mind-set.
  10. The effort for one database, for one country, is easily multiplied, either to the same database for different countries, or to different topic databases for the same country. From the point of view of cost in today’s dollars (Spring, 2023), each database of 28 studies with 50 respondents per study can be created for about $15,000, assuming that the respondents are easy to locate. That comes to about $500 per study.
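A compact sketch of steps 4 through 7 follows, using synthetic stand-in data rather than actual study files. One implementation detail is our assumption, not the paper’s: scikit-learn’s k-means is Euclidean, so the sketch z-scores each respondent’s 16 coefficients first, which makes the Euclidean distance between rows a monotone function of the Pearson distance (1-R).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N_RESP, N_VIGNETTES, N_ELEMENTS = 50, 24, 16

def respondent_coefficients(presence, ratings):
    """Steps 4-5: binarize ratings (1-3 -> 0, 4-5 -> 100), then fit a
    per-respondent OLS model with no additive constant."""
    y = np.where(ratings >= 4, 100.0, 0.0)
    coef, *_ = np.linalg.lstsq(presence, y, rcond=None)
    return coef  # 16 coefficients, the 'model' for this respondent

# Synthetic stand-in for one study's data: random presence/absence
# matrices and random 1-5 ratings, one pair per respondent.
coefs = np.vstack([
    respondent_coefficients(
        rng.integers(0, 2, size=(N_VIGNETTES, N_ELEMENTS)).astype(float),
        rng.integers(1, 6, size=N_VIGNETTES))
    for _ in range(N_RESP)])

# Step 7: three mind-sets via k-means. Rows are z-scored first, so that
# Euclidean distance between rows is monotone in the Pearson distance (1 - R).
mu = coefs.mean(axis=1, keepdims=True)
sd = coefs.std(axis=1, keepdims=True)
z = (coefs - mu) / np.where(sd == 0, 1.0, sd)  # guard against flat rows
mindsets = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(np.bincount(mindsets))  # number of respondents in each mind-set
```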

fig 1

Figure 1: Study name (left panel), four questions (middle panel), and four answers to one question (right panel)

fig 2

Figure 2: Self profiling question (left panel), and rating scale (right panel)

What Patterns Emerge from Problem-Solution Linkages – Total Panel

Let us now look at the data from the total panel. Table 3 shows 16 columns, one per solution, and 28 rows, one per problem. Models were estimated after excluding all vignettes assigned the rating 3 (cannot decide). The table is sorted from top to bottom by the median coefficient of each problem, and from left to right by the median coefficient of each solution:

  1. The rows (problems) are sorted in descending order by the median coefficient for the problem across the 16 solutions. This means that the problems at the top of the table are those with the highest median coefficients, viz., those most likely to be solved by the solutions proposed in the study. The problems at the bottom of the table are those least likely to be solved by the solutions proposed in the study.
  2. The columns (solutions) are sorted in descending order by the median coefficient for the solution across all 28 problems. This means that the solutions to the left, those with the highest median coefficients, are the most likely to solve problems. The solutions to the right, those with the lowest median coefficients, are the least likely to solve problems.
  3. The medians are calculated for all coefficients, those shown and those not shown. The table shows only the strong performing combinations, those with coefficients of +20 or higher.
  4. Table 3 is extraordinarily rich. There are several strong-performing elements. The interesting observation, however, emerges from the pattern of darkened cells, those with strong coefficients. These tend to be solutions from group B (social action) and from group C (business). Initiatives from education and government do work, but without additional information there seems to be little belief in the efficacy of the public domain to produce a solution. (A small sketch of this sorting convention follows the list.)
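For readers who want to reproduce the presentation, the sorting convention is easy to sketch. The snippet below uses invented coefficients and hypothetical labels, not the study’s actual numbers; it sorts rows and columns by their medians and masks everything below the +20 cutoff.

```python
import numpy as np
import pandas as pd

# Toy coefficient table (invented numbers): 28 problems x 16 solutions.
rng = np.random.default_rng(1)
table = pd.DataFrame(rng.integers(-10, 35, size=(28, 16)),
                     index=[f"problem {i+1}" for i in range(28)],
                     columns=[f"{s}{j+1}" for s in "ABCD" for j in range(4)])

# Sort rows and columns by their median coefficients, descending.
table = table.loc[table.median(axis=1).sort_values(ascending=False).index,
                  table.median(axis=0).sort_values(ascending=False).index]

# Show only the strong performers (coefficients of +20 or higher).
print(table.where(table >= 20).head())
```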

Table 3: Summary table of coefficients for model relating presence/absence of 16 solutions (column) to the expected ability to solve the specific problem.

tab 3

The Lure of Mind-sets

We finish this investigation by looking at mind-sets, one of the key features of Mind Genomics. The notion of mind-sets is that for each topic area one can discover different patterns of ‘weights’ applied by the respondent to the information. The analysis to create these mind-sets will use the 16 coefficients for each respondent, independent of the problem presented to the respondent.

The notion of combining all respondents, independent of the problem, may sound strange at first, but there is a spark of reason. We are simply looking at the way the person deals with a problem. We are more focused on general patterns, even if these end up being ‘weak signals.’ The fact that there are 28 different problems dealt with in the project is not relevant for the creation of the mind-set, but will become important afterwards, for the deeper understanding of each mind-set.

The rationale for combining problems and solutions (viz., coefficients) into one database comes from the well-accepted fact that consumers differ when they think about purchasing a product. Studies of the type presented here, but on commercial products, again and again show that when it comes to purchasing a food product, one pattern of weights suggests that the respondent pays attention to product features, whereas another pattern of weights applied to the same elements suggests that the respondent pays attention to the experience of consuming the product, or the health benefits of the product, rather than paying attention to the features [16]. Rarely do we go any deeper in our initial thinking about the individual differences.

    1. The coefficients for the three emergent mind-sets appear in Tables 4-6. Again, the tables are sorted by the median, and all coefficients of 20 or higher are shaded to allow the patterns to emerge. Our task here is to point out some of these general patterns.
    2. The range of coefficients is much larger for the mind-sets than for the total panel. Table 3 shows many modest-size coefficients of 10-20 and a number of larger coefficients, 20 or higher. Tables 4-6 show a much greater range of coefficients. We attribute the increased range to the hypothesis that people may deeply differ from each other in their mental criteria. Inner Psychophysics reveals that difference, doing so dramatically, and in a way that could not have been done before.
    3. The pattern of coefficients seems somewhat more defined, as if the respondents in a mind-set more frequently rely on the same set of solutions for the problems, although not always.

a. The mind-sets do not believe that the key solutions will work everywhere, but just in some areas. The mind-sets do not line up in an orderly fashion. That is, we do not have a simplistic set of psychophysical functions for the inner psychophysics. We do have patterns, and metrics for the social consensus.

b. Mind-Set 1 (Table 4) appears to feel that business and education solutions will work most effectively. Mind-Set 1 does not believe strongly in the public sector as able to provide workable solutions to many problems.

c. Mind-Set 2 (Table 5) appears to feel that education and the law will work most effectively.

d. Mind-Set 3 (Table 6) appears to feel that law and business will work most effectively.

Table 4: Summary table of coefficients for model relating presence/absence of 16 solutions (column) to the expected ability to solve the specific problem (row). The data come from Mind-Set 1, which appears to focus on business as the preferred solution to problems.

tab 4

Discussion and Conclusion

The focus of this paper began with the desire to extend the notion of psychophysics to the measurement of internal ideas. As noted in the first part of this paper, the traditional focus of psychophysics has been the measurement of sensory magnitudes, and later lawful relations between the sensory magnitude as perceived and the physical magnitude as measured by standard instruments.

The early work in psychophysics focused on measurement, the assignment of numbers to perceptions. The search for lawful relations between these measured intensities of sensation and their physical correlates would come to the fore even during the early days of psychophysics, in the 1860’s, with founder Gustav Theodor Fechner [17]. It was Fechner who would trumpet the logarithmic ‘law of perception,’ such ‘laws’ being far more attractive than the very tedious effort of measuring the just-noticeable differences, the underlying units of so-called sensory magnitude. Almost a century later, the Harvard psychophysicist S.S. Stevens (1975) would spend decades arguing that this law of perception followed a power function of defined exponent, rather than a logarithmic function.
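For orientation, the two competing ‘laws’ can be written in their standard textbook forms (notation ours, not taken from this paper):

```latex
\psi = k\,\log\!\left(\frac{\phi}{\phi_{0}}\right)
\quad\text{(Fechner's logarithmic law)}
\qquad
\psi = k\,\phi^{\beta}
\quad\text{(Stevens' power law)}
```

where ψ is the perceived magnitude, φ the physical intensity, φ₀ the threshold intensity, and β an exponent characteristic of the sense modality.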

This paper moves psychophysics inward, away from the search for lawful ‘equations’ relating one set of variables to another, viz., magnitudes of physical stimuli versus magnitudes of the co-varying subjective responses. The focus here is to measure ideas. The objective is to put numbers onto ideas, not by having the respondent introspect and rate the ideas, but rather by showing the magnitude of the linkage in the mind between ideas. The method is experimentation, the results are numbers (coefficients of the equations), and the scope is to create this new iteration of psychophysics in a way consonant with the way we think about issues. The outcome comprises a set of relatively theory-independent methods which produce the raw material of this psychophysics, for the consideration of other researchers and for practical applications in the many areas of human endeavor.

References

      1. Stevens SS (1975) Psychophysics: Introduction to Its Perceptual, Neural, and Social Prospects. John Wiley, New York.
      2. Stevens SS (1966) A metric for the social consensus. Science 151: 530-541.
      3. Boring EG (1942) Sensation & Perception in the History of Experimental Psychology. Appleton-Century.
      4. Galanter E (1962) The direct measurement of utility and subjective probability. The American Journal of Psychology 75: 208-220.
      5. Miller GA (1964) Mathematics and Psychology, John Wiley, New York.
      6. Luce RD, Bush RR, Galanter E (Eds.) (1963) Handbook of Mathematical Psychology: Volume I. John Wiley.
      7. Luce RD, Tukey JW (1964) Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology 1: 1-27.
      8. Anderson NH (1976) How functional measurement can yield validated interval scales of mental quantities. Journal of Applied Psychology 61: 677-692.
      9. Green PE, Wind Y (1975) New way to measure consumers’ judgments. Harvard Business Review 53: 107-117.
      10. Wind Y (1978) Issues and advances in segmentation research. Journal of Marketing Research 15: 317-337.
      11. Stevens SS, Greenbaum HB (1966) Regression effect in psychophysical judgment. Perception & Psychophysics 1: 439-446.
      12. Moskowitz HR, Kluter RA, Westerling J, Jacobs HL (1974) Sugar sweetness and pleasantness: Evidence for different psychological laws. Science 184: 583-585. [crossref]
      13. Goertz G, Mahoney J (2013) Methodological Rorschach tests: Contrasting interpretations in qualitative and quantitative research. Comparative Political Studies 46: 236-251.
      14. Gofman A, Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
      15. Dubes R, Jain AK (1980) Clustering methodologies in exploratory data analysis. Advances in Computers 19: 113-228.
      16. Green PE, Srinivasan V (1978) Conjoint analysis in consumer research: issues and outlook. Journal of Consumer Research 5: 103-123.
      17. Fechner GT (1860) Elements of Psychophysics (translated by H.E. Adler, 1966). Breitkopf and Hartel, Leipzig; Holt, Rinehart and Winston.

Menace of Substance Abuse in Today’s Society: Psychosocial Support to Addicts and Those with Substance Use Disorder

DOI: 10.31038/IJNM.2023421

Abstract

Substance abuse among youths has been a problem to society in general. The continuous use of psychoactive substances among adolescents and youths has become a public concern worldwide because it potentially causes deliberate or unintended harm or injury. The consequences of drug abuse fall not only on the individual user but also on his or her offspring, family, and society. This seminar topic discusses some drugs that are commonly abused by adolescents and youths, such as cannabis, cocaine, amphetamine, heroin, codeine, cough syrup, and tramadol. It also discusses the sources from which abusers obtain drugs, as well as the possible effects in physical, psychological, and social terms. The risk factors and the reasons for substance abuse are discussed, along with how substance abuse disrupts the brain, and the ways of curbing the menace of substance abuse by creating awareness about drug abuse and its adverse consequences through appropriate mass media tools. This write-up also discusses methods of delivering customized information, suited to target audiences such as families, schools, workers, religious organizations, and homes, in a sensitive manner. Also discussed are strategies to use in collaboration with international agencies to monitor the sale of over-the-counter drugs and to enforce stricter penalties for individuals involved in the trade of illicit drugs, among others. Recommendations are made calling on all categories of people, including government, family, community, and the National Agency for Food and Drug Administration and Control (NAFDAC), to contribute to preventing the menace of substance abuse. If Nigerian youths stop drug abuse, they will be useful to themselves, their families, and society in general.

Keywords

Substance abuse, Psychoactive substance, Society

Introduction

Substance abuse has been a cause of many debilitating conditions, such as schizophrenia and psychosis, leading to psychiatric admissions. Substance abuse is emerging as a global public health issue. The recent World Drug Report 2019 of the United Nations Office on Drugs and Crime (UNODC) estimated that 271 million people (5.5% of the global population aged between 15 and 64 years) had used drugs in the previous year. It has also been projected that 35 million individuals will experience drug use disorders. Furthermore, the Global Burden of Disease Study (2017) estimated that there were 585,000 deaths due to drug use globally. The burden of drug abuse (usage, abuse, and trafficking) has also been related to four areas of international concern, viz. organized crime, illicit financial flows, corruption, and terrorism or insurgency. Therefore, global interventions for preventing drug abuse, including its impact on health, governance, and security, require a widespread understanding of the prevalence, the frequently implicated drugs, the commonly involved populations, the sources of the drugs, and the risk factors associated with drug abuse. In Nigeria, the burden of drug abuse is on the rise and becoming a public health concern. Nigeria, the most populous country in Africa, has developed a reputation as a center for drug trafficking and usage, mostly among the youth population, and the menace is giving birth to a generation of drug addicts. Oftentimes, young men are seen with bottles of carbonated drinks (soft drinks) laced with all kinds of intoxicating content. They move about with the soft drink bottles and sip slowly for hours, while unsuspecting members of the public would easily believe that it is a mere harmless soft drink.

Ladipo, a consultant psychiatrist at the Lagos University Teaching Hospital (LUTH), said that he had handled a lot of mental cases in his career as fallouts of drug abuse, which often leads to mental disorder. He also stated that the effects of drug abuse and wrong use take a toll not only on the individuals and their families but on society at large. According to the UNODC report on drug use in Nigeria (the first large-scale national drug use survey in Nigeria), one in seven persons aged 15-64 years had used a drug in the past year. Also, one in five individuals who had used drugs in the past year suffers from drug-related disorders. Drug abuse has been a cause of many criminal offences, such as theft, burglary, sex work, and shoplifting. Prevalences of 20-40% and 20.9% of drug abuse were reported among students and youths, respectively. Commonly abused drugs include cannabis, cocaine, amphetamine, heroin, diazepam, codeine, cough syrup, and tramadol. The sources from which abusers obtained drugs were pharmacies/patent medicine shops, open drug markets, drug hawkers, fellow drug abusers, friends, and drug pushers. Drug abuse was common among undergraduates and secondary school students, youths, commercial bus drivers, farmers, and sex workers. Reasons stated for use include, but are not limited to, increasing physical performance, coping with stress, and deriving pleasure. Poor socioeconomic factors and low educational background were the common risk factors associated with drug abuse [1-10].

Objectives of the Seminar

  1. To identify the reasons for and perceived benefits of substance abuse.
  2. To identify psychological and social effects of substance abuse.
  3. To examine psychosocial supports rendered to substance users and addicts.
  4. To stimulate further discussions and research thoughts in an attempt to find solutions to the menace.

Clarification of Concepts

i. A Drug

It is any substance other than food that influences motor, sensory, cognitive or other bodily processes (APA, 2022).

ii. Drug Misuse

It is the use of a substance for a purpose not consistent with legal or medical guidelines (WHO, 2006).

iii. Psycho-Active Substance

These are substances that, when taken in or administered into the system, affect mental processes, e.g., perception, consciousness, cognition, or mood and emotions (WHO, 2022).

iv. Substance Abuse

This, according to the International Classification of Diseases (ICD-10), is a pattern of psychoactive substance use that is capable of causing damage to physical or mental health. According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), it is a maladaptive pattern of substance use leading to significant clinical/social/legal/occupational distress or mental ill-health in the last 12 months.

Substance abuse can also be defined as:

  • Use of drugs without physician’s prescription.
  • Use of illicit drugs or legally banned drugs.

v. Addiction

This is a compulsive, chronic, physiological or psychological need for a habit-forming substance, behaviour, or activity, having harmful effects and typically causing well-defined symptoms such as irritability, anxiety, and tremors upon withdrawal (NIH, 2019).

vi. Psychosocial

These are structured psychological or social interventions used to address substance-related problems (APA, 2022).

Literature Review

The International Classification of Diseases, ICD-10 (2022), defines substance abuse as a pattern of psychoactive substance use that is capable of causing damage to physical or mental health. Substance abuse is emerging as a global public health issue which needs to be addressed. The effects of drugs arise in the organism taking them, and these effects could be beneficial or harmful, whether physically, psychologically, or physiologically. When the effects of a drug are beneficial, the drug is said to be serving its purpose; if otherwise, then a problem exists. According to Abiodun et al., 1 in 7 persons aged 15-64 years in Nigeria had used a drug (other than tobacco and alcohol) in the past year. The past-year prevalence of any drug use is estimated at 14.4% (range 14.0%-14.8%), corresponding to 14.3 million people aged 15-64 years who had used at least one psychoactive substance in the past year for non-medical purposes. Among every 4 drug users in Nigeria, 1 is a woman. More men (annual prevalence of 21.8%, or 10.8 million men) than women (annual prevalence of 7.0%, or 3.4 million women) reported past-year drug use in Nigeria. The highest levels of any past-year drug use were among those aged 25-39 years. One in 5 persons who had used drugs in the past year suffers from a drug use disorder. Cannabis is the most commonly used drug: an estimated 10.8% of the population, or 10.6 million people, had used cannabis in the past year. The average age of initiation of cannabis use among the general population was 19 years. Geographically, the highest past-year prevalence of drug use was found in the southern geopolitical zones (past-year prevalence ranging between 13.8 percent and 22.4 percent) compared to the northern geopolitical zones (past-year prevalence ranging between 10 percent and 13.6 percent). Two-thirds of people who used drugs reported having serious problems as a result of their drug use, such as missing school or work, doing a poor job at work/school, or neglecting their family or children.

Classification of Substance of Abuse

Classification of Substance of Abuse is given in Table 1.

Table 1: Classification according to the Diagnostic and Statistical Manual IV and the International Classification of Diseases 10.

S/N | DSM-IV | ICD-10
1 | Alcohol | Alcohol
2 | Stimulants (cocaine, amphetamines) | Other substances, including caffeine
3 | Caffeine | —
4 | Cannabis | Cannabinoids
5 | Hallucinogens (lysergic acid, ecstasy, ketamine) | Hallucinogens
6 | Inhalants (fumes from petrol, glue, adhesive) | Volatile solvents
7 | Tobacco | Tobacco
8 | Opioids (morphine, pentazocine, pethidine, tramadol) | Opioids
9 | CNS depressants (sedatives, hypnotics, anxiolytics) | Sedatives, hypnotics
10 | Unknown substances/others (fecal matter, cow dung) | Unknown substances/others

Substance Abuse Stages

In discussing substance abuse, it is generally agreed that substance abuse is not a one-stage process. According to Brookdale, there are seven stages of substance abuse, namely:

Stage 1: Initiation
Stage 2: Experimentation
Stage 3: Occasional user
Stage 4: Regular user
Stage 5: Risky user
Stage 6: Dependent
Stage 7: Addiction

1. Initiation Stage

This is the first stage, during which the individual tries a substance for the first time. This can happen at almost any time in a person’s life, but according to the National Institute on Drug Abuse, the majority of people with an addiction tried their drug of choice before 18 and had a substance use disorder by 20. The reasons a teenager experiments with drugs vary widely, but two common reasons are curiosity and peer pressure. The latter choice is made with the intent of fitting in better with a particular group of peers. Another reason teenagers are more likely than most age groups to try a new drug is that the prefrontal cortex in their brain is not yet completely developed. This affects their decision-making process, and as a result many teenagers make their choice without effectively considering the long-term consequences of their actions.

2. Experimental Stage

At the experimentation stage, the user has moved past simply trying the drug and is now taking the drug in different contexts to see how it impacts their life. Generally, in this stage, the drug is connected to social actions, such as experiencing pleasure or relaxing after a long day. For teenagers, it is used to enhance party atmospheres or manage stress from schoolwork. Adults mainly enter experimentation either for pleasure or to combat stress. In this stage, there are little to no cravings for the drug, and the individual is still making a conscious choice of whether to use or not. They may use it impulsively or in a controlled manner, and the frequency of both options mainly depends on a person’s nature and reason for using the drug. There is no dependency at this point, and the individual can still quit the drug easily if they decide to. Some youths, repulsed by a first unpleasant experiment, never use the drug again. Others, however, reassured by more seasoned users, become occasional users.

3. Occasional User Stage

The new user tends to be passive, accepting drugs if and when offered rather than seeking them out; such a person believes he or she can handle the situation.

4. Regular User Stage

As a person continues to experiment with a substance, its use becomes normalized and grows from periodic to regular use. This does not mean that they use it every day, but rather that there is some sort of pattern associated with it. The pattern varies based on the person, but a few instances could be taking it every weekend or during periods of emotional unrest such as loneliness, boredom, or stress. At this point, social users may begin taking their chosen drug alone, in turn taking the social element out of their decision. The drug’s use can also become problematic at this point and have a negative impact on the person’s life. For example, the individual might begin showing up to work hung-over or high after a night of drinking alcohol or smoking marijuana. There is still no addiction at this point, but the individual is likely to think of their chosen substance more often and may have begun developing a mental reliance on it. When this happens, quitting becomes harder, but is still a manageable goal without outside help. At this stage, users actively seek out the drugs and maintain their own supply; they show high motivation to obtain drugs.

5. Risky User Stage

The individual’s regular use has continued to grow and now frequently has a negative impact on their life. While a periodic hangover at work or an event might pass during the regular user stage, at the risky user stage such instances become a regular occurrence and their effects become noticeable. Many drinkers are arrested for a DUI (Driving Under the Influence) at this point, and all users will likely see their work or school performance suffer notably. The frequent use may also lead to financial difficulties where there were none before. Although the user may not personally realize it, people on the outside will almost certainly notice a shift in their behavior at this point. Some of the common changes to watch out for in a drug user include:

  • Borrowing or stealing money
  • Neglecting responsibilities such as work or family
  • Attempting to hide their drug use
  • Hiding drugs in easily accessible places (like mint tins)
  • Changing peer groups

6. Dependent Stage

At this stage, the person’s drug use is no longer recreational or medical; rather, it reflects reliance on the substance of choice. This is sometimes viewed as a broad stage that includes forming both a tolerance and a dependence, but by now the individual should already have developed a tolerance. As a result, this stage is marked by dependence, which can be physical, psychological, or both.

For a physical dependence, the individual has abused their chosen drug long enough that their body has adapted to its presence and learned to rely on it. If use abruptly stops, the body reacts by entering withdrawal, a negative rebound filled with uncomfortable and sometimes dangerous symptoms that should be managed by medical professionals. In most cases, individuals choose to continue their use rather than seek help, because it is the easiest and quickest way to escape withdrawal.

7. Addictive Stage

At this stage, the drug becomes a major part of the user’s life. The user becomes obsessed with drugs, obtaining them at all costs without consideration for food, job, family, etc. Individuals at this stage feel as though they can no longer deal with life without access to their chosen drug, and as a result lose complete control of their choices and actions. The behavioral shifts that began during the risky user stage grow to extremes, with the user likely giving up their old hobbies and actively avoiding friends and family. They may compulsively lie about their drug use when questioned and become quickly agitated if their lifestyle is threatened in any way. Users at this point can also be so out of touch with their old life that they do not recognize how detrimental their behaviors are and the effects these have had on their relationships.

8. Crisis/Treatment Stage

The final stage of addiction is the breaking point in a person’s life. Once here, the individual’s addiction has grown far out of their control and now presents a serious danger to their well-being. It is sometimes referred to as the crisis stage because, at this point, the addict is at the highest risk of suffering a fatal overdose or another dramatic life event.

Of course, while crisis is the worst-case scenario for this stage, there is also a positive alternative that fits here instead. Either on their own or as a result of a crisis, this is when many individuals first find help from a rehab center and begin receiving treatment. As a result, this stage can mark the end of their addiction, as well as the start of a new life without drugs and alcohol, one filled with hope for the future.

Drug/Substance Dependence

According to the DSM-IV (2018), drug/substance dependence is defined as a maladaptive pattern of substance use leading to clinically significant impairment or distress, occurring at any time in the same 12-month period, as manifested by 3 or more of the following:

1. Tolerance

The individual needs a higher dose of the substance to achieve the usual initial satisfactory effect or the current dose doesn’t give the usual initial satisfactory effect.

2. Primacy

The substance of abuse becomes the priority in the abuser’s hierarchy of needs.

3. Withdrawal

This occurs once an abuser stops ingesting the substance and the body begins to react negatively; e.g., an individual abusing Valium (diazepam) who stops suddenly can experience seizures and insomnia.

Opioid withdrawal symptoms include excessive yawning, tearing, diarrhea, diaphoresis, joint pain, and vomiting.

4. Harmful Use

The abuser continually engages in the abuse regardless of its negative effects, even with full knowledge of its detrimental consequences.

5. Inability to Cut Down

An individual who voluntarily stopped abusing a substance finds himself or herself engaging in it again.

6. Excessive Craving

The individual finds the substance pleasurable and will seek it out at all costs.

Risk Factors Associated With Substance Abuse

  1. Age (15-24 yrs)
  2. Male Gender
  3. Siblings or parental exposure
  4. Parental deprivation (divorce, separation, death of spouse)
  5. Exposure to high-risk job (breweries, bar, tobacco companies)
  6. Advertisement
  7. Poor economic status
  8. Experimental curiosity: Curiosity to experience the unknown facts about drugs motivates adolescents into drug use. The first experience in drug abuse produces a state of arousal, such as happiness and pleasure, which in turn motivates them to continue.
  9. Peer group influence: Peer pressure plays a major role in influencing many adolescents into drug abuse, because peer pressure is a fact of teenage and youth life. As adolescents try to depend less on their parents, they show more dependence on their friends.
  10. Lack of parental supervision: Many parents have no time to supervise their sons and daughters. Some parents have little or no interaction with family members, while others put pressure on their children to pass exams or perform better in their studies. These phenomena initialize and increases drug abuse.
  11. Personality Problems due to socio-economic Conditions: Adolescents with personality problems arising from social conditions have been found to abuse drugs. The social and economic status of most Nigerians is below average. Poverty is widespread, broken homes and unemployment is on the increase, therefore our youths roam the streets looking for employment or resort to begging. These situations have been aggravated by lack of skills, opportunities for training and re-training and lack of committed action to promote job creation by private and community entrepreneurs. Frustration arising from these problems lead to recourse in drug abuse for temporarily removing the tension and problems arising from it.
  12. The Need for Energy to Work for Long Hours: The increasing economic deterioration that leads to poverty and disempowerment of the people has driven many parents to send their children out in search of a means of earning something for contribution to family income.These children engage in hawking, bus conducting, head loading, scavenging, serving in food canteens etc. and are prone to drug taking so as to gain more energy to work for long hours.
  13. Availability of the Drugs: In many countries, drugs have dropped in prices as supplies have increased.

Theories of Drug Addiction

Several theories model addiction: genetic theories, exposure theories (both biological and conditioning), and adaptation theories.

1. Genetic Theory

According to Daniel (2016), genetic influences affect substance use and substance use disorders but are largely not specific to substance use outcomes. The genetic theory of addiction, known as addictive inheritance, attempts to separate the genetic and environmental factors of addictive behavior. Numerous large-scale twin studies have documented the importance of genetic influences on how much people use substances (alcohol, tobacco, other drugs) and the likelihood that users will develop problems. However, twin studies also robustly demonstrate that genetic influences affect multiple forms of substance use (alcohol, illicit drugs) as well as externalizing behaviors such as adult antisocial behavior and childhood conduct disorder. Accordingly, the majority of genetic influence on substance use outcomes appears to operate through a general predisposition that broadly influences a variety of externalizing disorders and is likely related to behavioral undercontrol and impulsivity, which is a heterogeneous construct in itself.

2a. Exposure Theories: Biological Models

The exposure model is based on the assumption that the introduction of a substance into the body on a regular basis will inevitably lead to addiction. These theories suggest that brain chemistry, brain structure, and genetic abnormalities cause human behavior. The biological models, as opposed to the conditioning models, hold that addiction is a consequence of biology. Underlying the exposure model is the assumption that the introduction of a narcotic into the body causes metabolic adjustments requiring continued and increasing dosages of the drug in order to avoid withdrawal. Although changes in cell metabolism have been demonstrated, they have not yet been linked with addiction. Some theorize that drugs that mimic endorphins (naturally occurring painkillers), if used on a regular basis, will reduce the body’s natural endorphin production and bring about a reliance on the external chemical agent for ordinary pain relief. The neurological basis of substance abuse is an example of the biological models, as shown below (Figure 1).


Figure 1: Neuro-Biological Basis of Drug Dependence

Dependence results from a complex interaction of the psychological effects of substances on brain areas associated with motivation and emotion, combined with learning. Some areas of the brain are responsible for pleasure and trigger the release of dopamine; for example, dopamine levels increase after sexual intercourse or the intake of a favorite meal. For drug abusers, drugs become substituted for the natural activities that increase dopamine levels: the brain learns to obtain the desired increase in dopamine, and the pleasurable effect it produces, from the drug rather than from those natural activities.

Anatomical Areas Involved in Drug Dependence

  1. Nucleus accumbens
  2. Mesolimbic pathway in the midbrain
  3. Ventral tegmental area

2b. Exposure Theories: Conditioning Models

The basis of conditioning theories is that addiction is the cumulative result of the reinforcement of drug administration. The substance acts as a powerful reinforcer and gains control over the user’s behavior. In contrast to the biological models of the exposure theories, these conditioning models suggest that anyone can be driven to exhibit addictive behavior given the necessary reinforcements, regardless of their biology. The advantage of this theory is that it offers the potential for considering all excessive activities, along with drug abuse, within a single framework: that of highly rewarding behavior. Many reinforcement models have been defined, including the opponent-process model of motivation and the well-known classical conditioning model. Both of these models define addiction as a behavior that is reinforced because of the pleasure associated with it.

3. Adaptation Theories

The adaptation theories include the psychological, environmental and social factors that influence addiction. Advocates of these theories have analyzed how expectations and beliefs about what a drug will do for the user influence the rewards and behaviors associated with its use. They recognize that any number of factors, including internal and external cues, as well as subjective emotional experiences, will contribute to addictive potential. They support the views that addiction involves cognitive and emotional regulation to which past conditioning contributes.

The adaptation theory has also broadened the scope of addiction into psychological realms. Investigators have noted that drug users rely on drugs to adapt to internal needs and external pressures.

Common Signs of Drug Abuse

According to Williams, the common signs include:

A. Physical Warning Signs of Substance Abuse

These include

  • Bloodshot eyes, pupils larger or smaller than usual.
  • Changes in appetite or sleep patterns.
  • Sudden weight loss or gain.
  • Deterioration of physical appearance, personal grooming habits.
  • Unusual smells on breath, body, or clothing.
  • Tremors, slurred speech, or impaired coordination.

B. Behavioral Signs Of Substance Abuse

These include:

  • Drop in attendance and performance at work or school.
  • Unexplained need for money or financial problems. May borrow or steal to get it.
  • Engaging in secretive or suspicious behaviors.
  • Sudden change in friends, favorite hangouts, and hobbies.
  • Frequently getting into trouble (fights, accidents, illegal activities).

C. Psychological Warning Signs Of Substance Abuse

These include:

  • Unexplained change in personality or attitude.
  • Sudden mood swings, irritability, or angry outbursts.
  • Periods of unusual hyperactivity, agitation, or giddiness.
  • Lack of motivation; appears lethargic
  • Appears fearful, anxious, or paranoid, with no reason.

Reasons for Substance Abuse in Nigeria

The commonly reported reasons include the following:

  1. To increase physical performance
  2. To derive pleasure
  3. Desire to relax/sleep
  4. To keep awake
  5. To relieve stress
  6. To relieve anxiety
  7. Unemployment
  8. Frustration
  9. Easy access

Effects of Substance Abuse

The implications of substance abuse for the life of an individual are enormous and can be categorized as physical, social, and psychological.

A. Physical Impact

There are also a number of issues affecting the physical health of the individual who is abusing drugs over a sustained period of time. According to the National Institute on Drug Abuse (2019), long-term drug abuse can affect:

  • The Kidneys. The human kidney can be damaged both directly and indirectly by habitual drug use over a period of many years. Abusing certain substances can cause dehydration, muscle breakdown, and increased body temperature—all of which contribute to kidney damage over time. Examples include heroin, cocaine, and marijuana.
  • The Liver. Liver failure is a well-known consequence of alcoholism, but it can also occur in individuals using opioids, steroids, or inhalants habitually over many years. The liver is important for clearing toxins from the bloodstream, and chronic substance abuse can overwork this vital organ, leading to damage from chronic inflammation, scarring, tissue necrosis, and even cancer in some instances. The liver may be even more at risk when multiple substances are used in combination.
  • The Heart. Many drugs have the potential to cause cardiovascular issues, which can range from increased heart rate and blood pressure to aberrant cardiac rhythms and myocardial infarction (i.e., heart attack). Injection drug users are also at risk of collapsed veins and bacterial infections in the bloodstream or heart.
  • The Lungs. The respiratory system can suffer damage related to smoking or inhaling drugs, such as marijuana and crack cocaine. In addition to this kind of direct damage, drugs that slow a person’s breathing, such as heroin or prescription opioids, can cause serious complications for the user.

Physical Signs Include

  • Insomnia
  • Tremor
  • Thought disturbance
  • Drowsiness
  • Weakness
  • Coma
  • Respiratory depression (depression of the central nervous system)
  • Sexually transmitted diseases (e.g., HIV/AIDS, hepatitis)
  • Death

B. Social Impact

Addiction creates social issues and public health concerns that extend beyond the home, school, and workplace to negatively impact larger groups of individuals.

  • Substance Abuse and the Home: Unfortunately, families all throughout society know the impact of addiction. If a person’s spouse or parent is abusing drugs, the results can be life-altering. It can result in financial hardships (due to job loss or money being diverted to fuel the habit). It may also cause reckless behavior that puts the family at risk. Addiction affects the entire family unit when one member is suffering.

Many cases of domestic violence within relationships are related to substance abuse. Addiction can happen on both sides of the conflict, not only by the abuser but also by the victim who uses drugs to cope. Drug use in the family is not limited to spouses or parents. Adolescents, especially during times of transition, may find themselves struggling with substance use. Children may experience maltreatment (including physical and sexual abuse and neglect), which may require the involvement of child welfare. Watching their parents suffer from substance use disorders may result in long-term mental and emotional disorders and delayed development. Children whose parents abuse drugs are more likely to end up using drugs or alcohol, as well.

  • Substance Abuse and the Workplace: Drug abuse also creates social issues in the workplace, where the substance use of employees can cause problems. An individual’s drug use will likely impact their work performance, or it may even stop them from going to work entirely. Substance abuse can lead to:
  • Decreased work productivity
  • Increased lateness and absences
  • Inappropriate behaviors at work, such as selling drugs to co-workers

These could lead to disciplinary actions and dismissal. Further, drug and alcohol abuse can lead to impaired judgment, alertness, and motor coordination, creating unsafe workplace conditions, especially in an environment with heavy machinery.

C. Social Vices

One of the social effects of drug abuse on society is its direct link to criminal acts, murder, and other offenses that affect society at large.

D. Psychological Impacts

Substance abuse and mental health are linked because the psychological effects of drug addiction, including alcohol addiction, cause changes in the body and brain. A careful balance of chemicals keeps the cogs turning inside the body, and even the smallest change can cause a person to experience negative symptoms.

  • Anxiety. There are many similarities between anxiety and the effects of stimulants such as cocaine and methamphetamine. Conversely, using central nervous system depressants can also increase the risk of a person developing anxiety. A person could have a long-standing pattern of drug abuse and consequently develop anxiety problems. Many substances, particularly stimulants like cocaine, can cause anxiety as a dose-dependent side effect. Other drugs, like benzodiazepines, can bring about increased anxiety as part of their withdrawal syndromes.

Anxiety is best described as a disorder of the fight-or-flight response, where someone perceives danger that isn’t there. It includes the following physical and mental symptoms:

  • Rapid heart rate
  • Excessive worrying
  • Sweating
  • An impending sense of doom
  • Mood swings
  • Restlessness and agitation
  • Tension
  • Insomnia

Additionally, many addicts experience anxiety around trying to hide their habits from other people. In a lot of cases, it’s difficult to tell whether anxious people are more likely to abuse substances or if drugs and alcohol cause anxiety.

  • Depression. There is a clear association between substance abuse and depression. This relationship could be attributed to preexisting depression that led to drug abuse or it could be that substance use caused changes in the brain that increased depressive symptoms. Some people use drugs to self-medicate symptoms of depression, but this only alleviates the symptoms while the user is high. It may even make depression symptoms worse when the user is working through withdrawal. Many drugs have a withdrawal syndrome that includes depression or other mood disturbances, which can complicate recovery. The main symptoms associated with depression are:
  • Hopelessness
  • Lack of motivation
  • Dysregulated emotion
  • Loss of interest
  • Sleep disturbances
  • Irritability
  • Weight gain or loss
  • Suicidal ideation
  • Paranoia. Some drugs, like cocaine and marijuana, can cause feelings of paranoia that may amplify with long-term abuse. On top of this, people struggling with addiction may feel that they need to hide or lie about their substance use, indicating a fear of being caught. The fact that many substances of abuse are illegal can also contribute to mounting feelings of paranoia among long-term substance users.
  • Shame and Guilt. There is a stigma attached to addiction in society, and there’s a lot of guilt and shame for the individuals who struggle with the condition. Often, this is adding fuel to a fire that was already burning strong. People with substance use disorders tend to evaluate themselves negatively on a regular basis, which is a habit that has its roots in childhood experiences. Continual negative self-talk adds to feelings of shame and guilt. When you constantly feel as if you’ve done something wrong, it’s tempting to try to cover up these challenging emotions with drugs and alcohol. These unhelpful emotions contribute to the negative feedback loop that sends people spiraling into addiction.
  • A Negative Feedback Loop. From an outside perspective, someone with an addiction looks like they’re repeatedly making bad choices and ignoring reason. However, the truth is far more complicated and nuanced, so much so that it can be very difficult for people to overcome a substance use disorder without inpatient or outpatient treatment. This is partly due to a negative feedback loop that occurs in the mind. When someone is addicted to drugs or alcohol, they feel a sense of comfort they haven’t been able to get elsewhere. Inevitably, this feeling is replaced by guilt and shame. They sober up and face the consequences of their actions. However, the weight of these feelings forces them to seek comfort in substances again.
  • Loss of Interest. Loss of interest in activities you used to enjoy is a key symptom of both addiction and depression, but overcoming the former makes it much easier to gain control over the latter. It’s such a destructive symptom because of how demotivating it is to feel there’s no joy in the world. Everyone has passions and interests, but getting back to finding them isn’t easy for someone with these conditions [11-20].

Management of Substance Abuse

According to the APA (2018), the management includes:

Pharmacologic Management

Pharmacologic management in substance abuse has two main purposes:

  • To permit safe withdrawal from substance of abuse and
  • To prevent relapse.

The drugs that constitute the pharmacologic intervention include:

  • Benzodiazepines: Alcohol withdrawal is usually managed with a benzodiazepine anxiolytic agent, which is used to suppress the symptoms of abstinence.
  • Disulfiram (Antabuse): May be prescribed to help deter clients from drinking.
  • Acamprosate (Campral): May be prescribed for clients recovering from alcohol abuse or dependence to help reduce cravings for alcohol and decrease the physical and emotional discomfort that occurs especially in the first few months of recovery.
  • Methadone: A potent synthetic opiate used as a substitute for heroin in some maintenance programs.
  • Levomethadyl (LAAM): A narcotic analgesic whose only purpose is the treatment of opiate dependence.
  • Naltrexone: An opioid antagonist used in the treatment of opioid and alcohol dependence.

1. Public Health Approach: This includes

Primary Level Management/Prevention

  • Creating awareness of substance abuse and its adverse consequences through appropriate mass media tools that deliver customized information, suitable to target audiences such as families, schools, workers, religious organizations, and homes, in a sensitive manner, owing to the impact on all age groups of society.
  • Provision of recreational activities for youths in urban areas.
  • Moral realignment for a derailed person.
  • Educational approaches targeting parents improving family lifestyle.
  • Drug education as part of school curriculum.
  • Screening (e.g., drug screening for undergraduates)

Secondary Level Management

  • Laboratory tests, such as blood tests, mean corpuscular volume, urine drug tests, and urinalysis
  • Detoxification
  • Treatment of associated mental and physical disorders
  • Psychotherapy
  • Cognitive behavioral therapy (CBT)
  • Family therapy
  • Maintenance of drug-free behavior, such as the use of anti-craving drugs

Tertiary Level Management

  • Occupational rehabilitation
  • Educational rehabilitation and counseling
  • Social rehabilitation
  • Provision of legal aid for abuser in legal dilemma
  • Social support

2. Psychosocial Supports To Substance Use Disorders

Psychosocial interventions are structured psychological or social interventions used to address substance-related problems (APA, 2022). They can be used at different stages of drug treatment to identify the problem, treat it, and assist with social reintegration. The psychological aspects of development refer to an individual’s thoughts, emotions, behaviors, memories, perceptions, and understanding; the social aspects refer to the interaction and relationships among the individual, family, peers, and community (UNRWA, 2017). Psychosocial interventions can be used in a variety of treatment settings, either as stand-alone treatments or in combination with pharmacological intervention. They can be implemented individually or in groups and delivered by a range of health workers. They are also considered the foundation of drug and alcohol treatment, especially for substances for which pharmacological treatments have not been sufficiently evaluated. They involve the following:

Psychological Supports for Substance Abuse Disorders and Addicts

A. Individual Therapy Interventions. The effectiveness of these interventions has been established primarily for alcohol use problems, although they have been applied to patients using other substances as well. The aim of the intervention is to help patients understand that their substance use is putting them at risk and to encourage them to reduce or give up their substance use. It can range from 5 minutes of brief advice to 15-30 minutes of brief counseling. Intensive counseling is especially effective, and there is a strong dose-response relationship between counseling intensity and quitting success. In general, the more intense the treatment intervention, the greater the rate of abstinence.

B. Motivational Interviewing. Motivational interviewing is a collaborative conversation style for strengthening a person’s own motivation and commitment to change. It is used to help people with different types of drug problems. Frequently, individuals are not fully aware of their drug problems, or they can be ambivalent about them. It is often referred to as a conversation about change and is used to help drug users identify their need for change. It is characterized by an empathic approach in which the therapist helps to motivate the patient by asking about the pros and cons of specific behaviors, exploring the patient’s goals and associated ambivalence about reaching those goals, and listening reflectively to the patient’s responses.

It seeks to address an individual’s ambivalence about their drug problems, as this is considered the main barrier to change.

It follows five stages:

  1. Expressing empathy for the client
  2. Helping the client to identify discrepancies between their behavior and their goals
  3. Avoiding arguments with the patient about their motivations and behaviors
  4. Rolling with the resistance of the patient to talk about some issues
  5. Supporting the patient’s sense of self-efficacy

C. Cognitive Behavioural Therapy. Cognitive behavioral therapy (CBT) is an umbrella term that encompasses cognitive therapy on its own and in conjunction with different behavioral strategies. Cognitive therapy is based on the principle that the way individuals perceive and process reality influences the way they feel and behave. As part of drug treatment, cognitive therapy helps clients to build self-confidence and address the thoughts that are believed to be at the root of their problems. Clients are helped to recognize the triggers for substance use and learn strategies to handle those triggers. Treatment providers work to help patients identify alternative thoughts to those that lead to their drug use, and thus facilitate their recovery. Generally, cognitive therapy is provided after a client has been diagnosed as having drug dependence problems.

CBT treatment usually involves efforts to change thinking patterns. These strategies might include:

  • Learning to recognize one’s distortions in thinking that are creating problems, and then to reevaluate them in light of reality.
  • Gaining a better understanding of the behavior and motivation of others.
  • Learning to develop a greater sense of confidence in one’s own abilities.
  • Using role playing to prepare for potentially problematic interactions with others.
  • Learning to calm one’s mind and relax one’s body.

D. Contingency Management. Contingency management refers to a set of interventions involving concrete rewards for clients who achieve target behaviors. This approach is based around recognizing and controlling the relationship between behaviors and their consequences. It can be applied to drug users with different types of problems in a variety of settings. It has been used, for example, with opioid and cocaine users, and with homeless clients. Contingency management is used to maintain abstinence by reinforcing and rewarding alternative behaviors to drug use with the aim of making abstinence a more positive experience. Contingency management programs can, for example, be used during drug treatment to reward a user remaining abstinent or to incentivize a user’s presence at work in a social reintegration programme.

E. Social Skills Therapy. Social skills are defined as the ability to express positive and negative feelings in an interpersonal context without suffering loss of interpersonal reinforcement. Social skills training (SST) is a type of behavioral therapy used to improve social skills in people with mental disorders or developmental disabilities. Social skills can be taught, practiced, and learned. The main purpose of social skills training is to teach persons who may or may not have emotional problems about the verbal as well as nonverbal behaviors involved in social interactions.

Another goal of social skills training is improving a patient’s ability to function in everyday social situations.

SST Techniques

  • Behavioral Rehearsal. Role play, which involves practicing new skills during therapy in simulated situations
  • Corrective Feedback. Used to help improve social skills during practice
  • Modeling. The educational component of SST that involves the modeling of appropriate social behaviors
  • Positive Reinforcement. Used to reward improvements in social skills
  • Weekly Homework Assignments. Provide the chance to practice new social skills outside of therapy

F. Family Behavior Therapy (FBT). FBT focuses on how the behaviors of the person with the SUD affect the family as a whole and works to change those behaviors with the involvement of the entire family. Goals of family therapy include obtaining information about the patient and the factors that contribute to substance abuse, including the patient’s attitude toward substance abuse, treatment adherence, social and vocational adjustment, level of contact with substance-using peers, and degree of abstinence. Family support for abstinence and the maintenance of marital and family relationships are encouraged. Even brief involvement of family members in the treatment program can enhance treatment engagement and retention.

G. Self-Help Groups. Self-help groups are voluntary not-for-profit organizations where people meet to discuss and address shared problems, such as alcohol, drug, or other addictions. Participants seek to provide support for each other, with senior members often mentoring or sponsoring new ones. Prominent examples include Alcoholics Anonymous and Narcotics Anonymous, and there is a range of other groups with similar purposes. As well as helping drug users, some self-help groups exist to support the family members of people with alcohol- and drug-related problems. Self-help groups can help people to recognize their drug-related problems, provide support during drug treatment, and help users to maintain abstinence and prevent relapse.

The groups aim to create a drug-free supportive network around the individual during the recovery process and provide opportunities to share experiences and feelings.

H. Therapeutic Communities. Residential rehabilitation programs (sometimes called therapeutic communities) are usually long-term programs where people live and work in a community of other substance users, ex-users, and professional staff. Programs can last anywhere between 1 and 24 months (or more). The aim of residential rehabilitation programs is to help people develop the skills and attitudes needed to make long-term changes toward an alcohol- and drug-free lifestyle. Programs usually include activities such as employment, education and skills training, life skills training (such as budgeting and cooking), counseling, and group work.

Implications

Nursing Education and Practice

  • Advocacy focused on strengthening family support systems, self-help, and peer group optimization.
  • Creating awareness of substance abuse and its adverse consequences through appropriate mass media tools that deliver customized information, suitable to target audiences such as families, schools, workers, religious organizations, and homes, in a sensitive manner, owing to the impact on all age groups of society.
  • It is of prime importance to design and formulate an effective, community-based, holistic strategy to address the needs of drug abusers and their families comprehensively. Multiple measures are needed, such as identifying the psychosocial determinants of illicit drug use and developing family prevention programs in the form of multidimensional family therapy and individual cognitive behavioral therapy.
  • Sensitizing clinicians to identify patients at risk for nonprescription drug abuse, strengthening preclinical assessment to predict substance abuse liability, encouraging exercise as a potential treatment for drug abuse, and building mechanisms for tracking and monitoring prescription drug abuse.
  • Formulating strategies in collaboration with international agencies to monitor the sale of over-the-counter drugs and enforcing stricter penalties for individuals involved in the trade of illicit drugs.
  • Nurses also have an important role to play in screening adolescents and youths for drug use during routine medical checkups.

Nursing Research

  • Collaborate with other health personnel in research studies relating to substance abuse, thus providing new information on the psychological care of clients with substance abuse [21-28].

Conclusion

Substance abuse is still a menace and has grown to become a global subculture whose effects are cataclysmic and cut across every society, creed, or race. However, no individual is born an abuser; rather, multifarious human activities have, through learning, interaction, and curiosity, led people to develop this habit. Evidence shows that substance abuse is most common among the youth, especially in Nigeria. The habit may develop, for instance, from an attempt to satisfy curiosity in daily interactions, as man is a gregarious animal.

To the individual, its effects can be physiological and psychological, which gradually penetrates the society and affects all productive endeavors both socially and economically. As a menace, substance abuse has habitually become a means to an end which calls for individuals, families, groups, communities, societies and the Nigerian government to collaboratively join hands in curbing the menace. Psychosocial support is presented here as a way out of the menace. Mental health nurses are central to providing the support.

Recommendations

In an attempt to proffer some meaningful solutions to curb the menace of substance abuse, the following recommendations are presented to both government and the society at large.

(a) Government policies targeted at developing the society are more often than not mere paper work. Thus, the government should ensure that through its policies, jobs are created, social services are rendered, and above all, its policies should be feasible and capable of implementation.

(b) Hospitals and clinics should be well stocked with genuine drugs and trained physicians put in place to ensure proper prescription of drugs while monitoring how the patients take such drugs to avoid over or under dosage tendencies which will lead to drug abuse.

(c) There should be a proper scrutiny and licensing of patent medicine stores, and such should be operated by well-trained Pharmacists. Alongside this, street drug hawking should be discouraged since this can promote accessibility to drug abusers.

(d) Individuals, families, communities, and the entire society should ensure that moral values are inculcated in the youths, by joining the government’s fight against the menace.

(e) Implementing a policy of asking patients about their needs and wishes concerning psychosocial supports, as well as routinely assessing their levels of psychosocial functioning, may bring about meaningful progress in psychosocial care.

(f) Rehabilitation centers such as therapeutic and penal institutions should be equipped, employ trained staff as well as involve in proper guidance and counseling.

(g) Institutions like the National Drug Law Enforcement Agency (NDLEA) and the National Agency for Food and Drug Administration and Control (NAFDAC) should be empowered to deal squarely with “drug barons” as well as their traffickers, peddlers, and conduits. This is because, at times, their performance is undermined by the threats they receive, as well as the purported connections such barons and traffickers have with people in higher authority.

(h) Government should encourage even development at all levels by providing the required skills, social services, and recreational facilities to reduce rural-urban migration, as it was also found that many youths migrate from rural areas to urban areas in search of greener pastures and facilities lacking in rural areas.

(i) Non-Governmental Organizations (NGOs) and Community Based Organizations (CBOs) should encourage the sensitization campaigns against drug abuse as well as engage in rehabilitation programs.

(j) Educational Institutions at all levels whether public or private should organize workshops, lectures/ symposiums to enlighten the people on the dangers of drugs and substance abuse.

References

  1. Abubakar IJ, Abubakar SK, Abubakar G, Zayyanu S, Garba Mohammed K, et al. (2021) The Burden of Drug Abuse in Nigeria: A Scoping Review of Epidemiological Studies and Drug Laws. National Library of Medicine. [crossref]
  2. American Psychological Association (2022) Breaking Free From Addiction.
  3. Abiodun O (2021) Drug abuse and its clinical implications with special reference to Nigeria. Central African Journal of Medicine.
  4. Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH (2017) Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Systematic Reviews.
  5. Yunusa U, Bello UL, Idris M, Haddad MM, Adamu D (2017) Determinants of substance abuse among commercial bus drivers in Kano Metropolis, Kano State, Nigeria. American Journal of Nursing Science.
  6. Brookdale Premier Addiction Recovery (2022) Seven Stages of Addiction.
  7. Daniel M (2016) The Genetics of Addiction. Journal of Studies on Alcohol and Drugs 77: 673-675.
  8. Drugs, Brains, and Behavior: The Science of Addiction (2014) National Institute on Drug Abuse.
  9. Adamson TA, Onifade PO, Ogunwale A (2010) Trends in sociodemographic and drug abuse variables in patients with alcohol and drug use disorders in a Nigerian treatment facility. West Afr J Med 29: 12-18. [crossref]
  10. Arli C (2020) Overview of Social Skills Training.
  11. Benjamin A, Chidi N (2014) Drug abuse, addiction and dependence: pharmacology and therapeutics. Swiss School of Public Health Journals.
  12. Behavioural Health Resources and Services Directory for Carroll County (2020) Signs and Symptoms of Drug Abuse.
  13. Dankani I (2017) Abuse of cough syrups: a new trend in drug abuse in northwestern Nigerian states of Kano, Sokoto, Katsina, Zamfara and Kebbi. International Journal of Physical and Social Science 2: 199-213.
  14. Essien CF (2010) Drug use and abuse among students in tertiary institutions: the case of Federal University of Technology, Minna. Journal of Research in National Development.
  15. Erah F, Omaseye A (2017) Drug and alcohol abuse among secondary school students in a rural community in south-south Nigeria. Annals of Medical and Surgical Practice Journal 2.
  16. Famuyiwa O, Aina OF, Bankole-Oki OM (2011) Epidemiology of psychoactive drug use amongst adolescents in metropolitan Lagos, Nigeria. European Child & Adolescent Psychiatry 20: 351-359. [crossref]
  17. Gobir A, Sambo M, Bashir S, Olorukoba A, Ezeh O, et al. (2017) Prevalence and determinants of drug abuse among youths in a rural community in north western Nigeria. Tropical Journal of Health Sciences.
  18. Gureje O, Olley D (1992) Alcohol and drug abuse in Nigeria: a review of the literature. Contemporary Drug Problems.
  19. Makanjuola BA, Sabitua O, Tanimola M (2007); National Drug Law Enforcement Agency (2020).
  20. Namadi M (2016) Drug abuse among adolescents in Kano metropolis, Nigeria. Ilimi Journal of Art and Social Sciences 2.
  21. Nigeria, Federal Ministry of Health (2017) National Policy for Controlled Medicines.
  22. Pela OA, Ebie C (1982) Drug abuse in Nigeria: a review of epidemiological studies. National Library of Medicine, PubMed.
  23. Pharmacists Council of Nigeria (2020).
  24. Abubakar IJ, Abubakar SK, Abubakar G, Zayyanu S, Garba Mohammed K, et al. (2019) The Burden of Drug Abuse in Nigeria: A Scoping Review of Epidemiological Studies and Drug Laws. National Library of Medicine. [crossref]
  25. Ladipo A (2021) Menace of Drug Abuse. The Sun.
  26. Lauren B (2022) Long-Term Drug Addiction Effects. American Addiction Centers, DrugAbuse.com.
  27. Mohd F (2022) Social Skills Among Psychiatric Patients.
  28. Theories of Substance Abuse (2022).

Numerical Simulation of Surface and Internal Wave Excitation due to an Air Pressure Wave

DOI: 10.31038/GEMS.2023533

Abstract

The excitation of surface and internal water waves by an air pressure wave has been numerically simulated in several model cases, using a nonlinear shallow water model of velocity potential. Water waves were excited when the air pressure wave speed was close to the water wave speed in the surface mode or internal mode. The surface mode waves traveling as free waves after being excited by an air pressure wave were also amplified by the shallowing on a sloping seabed. When the air pressure wave with a speed close to that of the internal mode stopped, free surface waves in the internal mode hardly appeared, unlike the free internal waves.

Keywords

Surface wave, Internal wave, Air pressure wave, Proudman resonance, Nonlinear shallow water

Introduction

Internal waves in various waters, such as the East China Sea, e.g., [1,2], and Lake Biwa, e.g., [3,4], may attain large wave heights because the density contrast within the water column is not as large as that at the free surface. Although various sources of internal waves—tidal currents [5], wind-driven near-inertial waves [6], etc.—have been revealed, the causes of internal waves are unknown in many actual waters.

In the present study, we consider surface/internal wave excitation due to an air pressure wave. Regarding surface waves, air pressure waves of a few hectopascals often generate meteotsunamis around the world, e.g., [7,8,9]. For example, at the west coasts of Kyushu, Japan, meteotsunamis called “Abiki” are observed, e.g., [10,11]. Conversely, internal waves are also generated and amplified by air pressure waves due to meteorological factors including typhoons [12,13]. The excitation mechanism underlying these phenomena is the Proudman resonance [14], which is also known as the cause of other transient waves, e.g., [15,16,17,18,19]. Moreover, the resonance triggered by air pressure waves from a volcanic eruption may generate global tsunamis, e.g., [20,21]. Artificial waves can also be created by the resonance when an airplane moves on a very large floating airport [22].

In this basic research, numerical simulations of surface and internal wave excitation due to an air pressure wave have been performed in several model cases, using a nonlinear shallow water model of velocity potential. Although wave dispersion and the Coriolis force are not considered, the proposed simple model will provide an easy-to-use tool for predicting long-wave excitation from air pressure changes estimated in weather forecasts. We consider cases in which the air pressure wave speed is close to the surface or internal mode speed.

Method

We consider the irrotational motion of inviscid and incompressible fluids in two layers, as illustrated in Figure 1.


Figure 1: Two-layer water

The still water depths of the upper and lower layers are h1(x) and h2(x), respectively, and h(x) = h1(x) + h2(x). We assume that the densities of the upper and lower layers, ρ1 and ρ2, respectively, are uniform and constant, and that the fluids do not mix even in motion. The water surface displacement, interface displacement, and seabed position are denoted by ζ(x, t), η(x, t), and b(x), respectively. Friction is ignored everywhere for simplicity. The velocity potentials of the upper and lower layers are ϕ1(x, t) and ϕ2(x, t), respectively.

The nonlinear shallow water equations of velocity potential considering the pressure on the water surface, p0(x, t), are

Upper Layer

∂η/∂t = ∂ζ/∂t + ∇·[(ζ − η)∇ϕ1],      (1)

∂ϕ1/∂t = −[gζ + p0/ρ1 + (∇ϕ1)²/2],      (2)

Lower Layer

∂η/∂t = −∇·[(η − b)∇ϕ2],      (3)

∂ϕ2/∂t = −[gη + (p1 + P2)/ρ2 + (∇ϕ2)²/2],      (4)

where ∇ = (∂/∂x, ∂/∂y) is a horizontal partial differential operator. The gravitational acceleration g is 9.8 m/s², p1(x, t) is the pressure at the interface, and P2 = (ρ2 − ρ1)gh1. Equations (1)–(4) can be derived by reducing the nonlinear equations based on the variational principle [23].

Substituting Equation (3) into Equation (1), we obtain

∂ζ/∂t = −{∇·[(ζ − η)∇ϕ1] + ∇·[(η − b)∇ϕ2]}.      (5)

In the upper layer, reversing the direction of the integration with respect to z gives the following auxiliary equation:

∂ϕ1/∂t + gη + p1/ρ1 + (∇ϕ1)²/2 = 0,      (6)

which corresponds to the Bernoulli equation on z = η.

By substituting Equation (2) into Equation (6), we obtain

p1 = p0 + ρ1g(ζ − η),      (7)

which expresses the hydrostatic pressure distribution. By substituting Equation (7) into Equation (4), we obtain

∂ϕ2/∂t = −[gη + p0/ρ2 + r⁻¹g(ζ − η) + (1 − r⁻¹)gh1 + (∇ϕ2)²/2],      (8)

where r = ρ2/ρ1 > 1.

By eliminating p1 from Equations (4) and (6), we obtain

∂ϕ1/∂t − r∂ϕ2/∂t = (r − 1)g(η + h1) − [(∇ϕ1)² − r(∇ϕ2)²]/2.      (9)

We explicitly solve the above equations using a finite difference method with the central difference in space and the forward difference in time. When the pressure at the water surface, p0, is known and the water surface displacement ζ is unknown, the procedure shown in Figure 2 is repeated, starting from the initial still water state, to obtain new time-step values one after another.


Figure 2: Procedure for obtaining the surface displacement ζ, interface displacement η, and velocity potentials in the upper and lower layers, ϕ1 and ϕ2, respectively, when the pressure at the water surface, p0, is given.
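As a rough illustration of this cycle, the following Python sketch advances the one-dimensional forms of Equations (5), (3), (2), and (8) with the forward difference in time and the central difference in space. It is not the author's code: the grid, the crude boundary treatment, and all variable names are assumptions made for illustration only.

```python
import numpy as np

# Illustrative 1-D version of the explicit scheme of Figure 2 (assumed setup).
g, rho1, rho2 = 9.8, 1000.0, 1025.0
r = rho2 / rho1
dx, dt, nx = 250.0, 1.0, 1201        # grid width, time step, number of nodes
h1 = 1000.0                          # still water depth of the upper layer
b = np.full(nx, -5000.0)             # seabed position (flat-bottom case)
zeta = np.zeros(nx)                  # water surface displacement
eta = np.full(nx, -h1)               # interface position (-h1 at rest)
phi1 = np.zeros(nx)                  # velocity potential of the upper layer
phi2 = np.zeros(nx)                  # velocity potential of the lower layer

def ddx(f):
    """Central difference in space (end points left at zero)."""
    d = np.zeros_like(f)
    d[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)
    return d

def step(p0):
    """One forward-in-time update, given the surface pressure p0(x) in Pa."""
    global zeta, eta, phi1, phi2
    u1, u2 = ddx(phi1), ddx(phi2)
    zeta_n = zeta - dt * (ddx((zeta - eta) * u1) + ddx((eta - b) * u2))  # Eq. (5)
    eta_n = eta - dt * ddx((eta - b) * u2)                               # Eq. (3)
    phi1_n = phi1 - dt * (g * zeta + p0 / rho1 + 0.5 * u1 ** 2)          # Eq. (2)
    phi2_n = phi2 - dt * (g * eta + p0 / rho2 + g * (zeta - eta) / r
                          + (1.0 - 1.0 / r) * g * h1 + 0.5 * u2 ** 2)    # Eq. (8)
    zeta, eta, phi1, phi2 = zeta_n, eta_n, phi1_n, phi2_n
```

A production code would also need proper boundary conditions and, for a sloping seabed, a spatially varying b.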

Conversely, when the pressure at the water surface, p0, is unknown and the water surface displacement ζ is known, we adopt the procedure shown in Figure 3, which was not used in the present calculations.


Figure 3: Procedure for obtaining the interface displacement η and velocity potentials in the upper and lower layers, ϕ1 and ϕ2, respectively, when the surface displacement ζ is given.

Conditions

Focusing on one-dimensional wave propagation in the x-axis direction, we assumed that a steady air pressure wave W, as sketched in Figure 4, traveled in the positive direction of the x-axis at a constant speed vp. The waveform of the air pressure wave was an isosceles triangle, where the length of its base, i.e., the wavelength λ, was 10 km or 20 km. The maximum and minimum pressures pm of the positive and negative air pressure waves were 2 hPa and −2 hPa, respectively, referring to the values in the meteotsunami and eruption cases [11,21]. The position of the air pressure wave center at the initial time, i.e., t = 0 s, was x0 = 50 km.


Figure 4: Waveform of the steady air pressure wave W at the initial time, i.e., t = 0 s. The air pressure wave traveled in the positive direction of the x-axis with constant speed vp.
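The triangular pressure pulse itself is simple to construct. A possible Python sketch, with parameter names and defaults that are assumptions matching the stated values, is:

```python
import numpy as np

def pressure_wave(x, t, pm=200.0, lam=10.0e3, vp=207.0, x0=50.0e3):
    """Isosceles-triangle air pressure wave p0(x, t) in Pa (2 hPa = 200 Pa),
    with base length lam, traveling at speed vp from the initial center x0."""
    xc = x0 + vp * t                        # current position of the peak
    s = 1.0 - np.abs(x - xc) / (lam / 2.0)  # 1 at the peak, 0 at the base ends
    return pm * np.clip(s, 0.0, None)       # zero outside the base
```

Passing pm = -200.0 yields the negative pressure wave, and lam = 20.0e3 the 20 km case.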

The densities of the upper and lower layers were ρ1 = 1000 kg/m3 and ρ2 = 1025 kg/m3, respectively. Both the initial velocity potentials ϕ1(x, 0 s) and ϕ2(x, 0 s) were 0 m2/s. The grid width Δx was 250 m and the time step interval Δt was 1 s.

Excitation of the Surface Mode

In Figure 4, the wavelength λ and the maximum pressure pm of the air pressure wave were 10 km and 2 hPa, respectively. In the initial still water state, the total water depth h was 5000 m and the upper layer depth h1 was 1000 m, in Figure 1. For linear shallow water waves, the phase velocity of the surface mode, Cs, is √(gh) ≃ 220 m/s. When the traveling velocity of the air pressure wave, vp, is 207 m/s, which is close to Cs, the time variations of the air pressure distribution and both the surface and interface profiles are depicted in Figure 5, in which the results for 100 s ≤ t ≤ 1000 s are displayed every 100 s.


Figure 5: Time variations of the air pressure distribution, surface profile, and interface profile every 100 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 207 m/s, respectively.

Figure 5 indicates that crests and troughs in the surface mode were excited by the Proudman resonance not only at the surface but also at the interface, because the positions of the surface and interface were relatively close. At t = 100 s, water wave crests had been generated at the air pressure rise, whereas water wave troughs had been generated at the air pressure fall. The length of the water wave crests and troughs was approximately half the wavelength of the air pressure wave. Thereafter, the water wave crests gradually pulled ahead of the air pressure wave because the surface mode speed was greater than the air pressure wave speed. At t = 1000 s, the water wave crests were propagating as free waves, whereas the water wave troughs remained constrained by the air pressure wave, and the wavelength of each crest and trough was approximately the same as that of the air pressure wave.

When the seabed is partially sloping, Figure 6 depicts the numerical results for the same conditions as in the case above, except for the topography, where the seabed position b is described as

b = −5000 m for 0 ≤ x < 150 km,

b = −3500 m − 1500 m × cos[π(x/150 km − 1)] for 150 km ≤ x ≤ 300 km.      (10)


Figure 6: Time variations of the air pressure distribution, surface profile, and interface profile every 100 s. The seabed profile is also depicted, where the seabed position b is described by Equation (10). The initial water depth in the upper layer, h1, was 1000 m. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 207 m/s, respectively.

As indicated in Figure 6, second peaks of the water wave crests were generated when the air pressure wave speed approached the surface mode speed on the slope. Moreover, both the water wave crests and troughs were amplified by shallowing on the slope after they moved away from the air pressure wave. It should be noted that the shallowing effect requires water waves that are traveling as free waves apart from the air pressure waves that excited them. When an eruption creates air pressure waves with different speeds, as in the case of the 2022 Hunga Tonga-Hunga Ha'apai volcanic eruption, the air pressure waves excite tsunamis at water depths corresponding to the air pressure wave speeds [24], and each tsunami traveling apart from the air pressure wave that excited it can be amplified by shallowing on a ridge, shelf slope, continental shelf, etc. Tsunamis traveling as free waves after being excited by air pressure waves may also be amplified by being passed by subsequent air pressure waves over topography [21], as indicated in the water wave crests at t = 1000 s in Figure 6. Moreover, bay oscillations, currents, and horizontally two-dimensional changes in topography may amplify tsunamis, similar to submarine earthquake tsunamis.

Excitation of the Internal Mode

The wavelength λ and the maximum pressure pm of the air pressure wave were 10 km and 2 hPa, respectively, in Figure 4. The still water depth h was uniformly 5000 m, and the still water depth ratio h1/h was 0.2, in Figure 1. The internal mode speed for linear shallow water waves without surface waves is

Ci = √[(1 − ρ1/ρ2)gh1h2/(h1 + h2)],      (11)

so Ci ≃ 14 m/s in the present case. We assumed that while 0 s ≤ t < 1000 s, the air pressure wave speed vp was 14 m/s, which was almost equal to Ci; the air pressure wave then stopped at t = 1000 s, and the air pressure distribution remained stagnant for t ≥ 1000 s. The time variations of the air pressure distribution and both the surface and interface profiles are depicted in Figure 7, in which the results for 200 s ≤ t ≤ 2000 s are displayed every 200 s.
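As a quick numerical check of the two phase speeds used here (a sketch; the variable names are mine):

```python
import numpy as np

g, rho1, rho2 = 9.8, 1000.0, 1025.0    # gravity and layer densities
h, h1 = 5000.0, 1000.0                 # total and upper-layer still water depths
h2 = h - h1

Cs = np.sqrt(g * h)                                  # surface mode
Ci = np.sqrt((1.0 - rho1 / rho2) * g * h1 * h2 / h)  # internal mode, Eq. (11)
print(f"Cs = {Cs:.0f} m/s, Ci = {Ci:.1f} m/s")       # about 221 m/s and 13.8 m/s
```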


Figure 7: Time variations of the air pressure distribution, surface profile, and interface profile every 200 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, maximum pressure pm, and speed vp of the air pressure wave were 10 km, 2 hPa, and 14 m/s, respectively.

Figure 7 shows that internal waves in the internal mode were excited by the Proudman resonance; the crest, in particular, was amplified remarkably. Conversely, free surface waves in the internal mode hardly appeared because the surface wave crest was constrained by the stagnant air pressure distribution.

When the wavelength λ and the minimum pressure pm of the air pressure wave are 20 km and −2 hPa, respectively, Figure 8 presents the numerical results for otherwise the same conditions as in the above case.


Figure 8: Time variations of the air pressure distribution, surface profile, and interface profile every 200 s. The still water depth h was 5000 m and the still water depth ratio h1/h was 0.2. The wavelength λ, minimum pressure pm, and speed vp of the air pressure wave were 20 km, −2 hPa, and 14 m/s, respectively.

In Figure 8, the waveform of the generated internal waves propagating as free waves differs from the vertically inverted waveform of the internal waves generated by the positive air pressure wave discussed above, disregarding the difference in wavelength. Therefore, future work is required to investigate the stability of upward and downward convex internal waves due to an air pressure wave, considering higher-order terms of the velocity potential.

Conclusion

The excitation of surface and internal water waves by an air pressure wave was numerically simulated using the nonlinear shallow water model of velocity potential. The water waves were excited when the air pressure wave speed was close to the water wave speed in each mode. The surface mode waves traveling as free waves after being excited by an air pressure wave were also amplified by the shallowing on the sloping seabed. When the air pressure wave, the speed of which was close to the internal mode speed, stopped, free surface waves in the internal mode hardly appeared, unlike the free internal waves.

In the present model, wave dispersion is ignored, so in the future, the excitation of relatively shorter water waves by air pressure waves should be investigated using a numerical model with higher-order terms of velocity potential.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hsu MK, Liu AK, Liu C (2000) A study of internal waves in the China Seas and Yellow Sea using SAR. Continental Shelf Research 20: 389-410.
  2. Nam S, Kim DJ, Lee SW, Kim BG, Kang KM, Cho YK (2018) Nonlinear internal wave spirals in the northern East China Sea. Scientific Reports 8.
  3. Kanari S (1973) Internal waves in Lake Biwa (II)—numerical experiments with a two-layer model. Bulletin of the Disaster Prevention Research Institute 22: 70-96.
  4. Jiao C, Kumagai M, Okubo K (1993) Solitary internal waves in Lake Biwa. Bulletin of the Disaster Prevention Research Institute 43: 61-72.
  5. Hibiya T (1988) The generation of internal waves by tidal flow over Stellwagen Bank. Journal of Geophysical Research 93: 533-542.
  6. Le Boyer A, Alford MH (2021) Variability and sources of the internal wave continuum examined from global moored velocity records. Journal of Physical Oceanography 51: 2807-2823.
  7. Vilibić I, Monserrat S, Rabinovich A, Mihanović H (2008) Numerical modelling of the destructive meteotsunami of 15 June, 2006 on the coast of the Balearic Islands. Pure and Applied Geophysics 165: 2169-2195.
  8. Bailey K, DiVeglio C, Welty A (2014) An examination of the June 2013 East Coast meteotsunami captured by NOAA observing systems. NOAA Technical Report, NOS CO-OPS 079.
  9. Niu X, Zhou H (2015) Wave pattern induced by a moving atmospheric pressure disturbance. Applied Ocean Research 52: 37-42.
  10. Hibiya T, Kajiura K (1982) Origin of the Abiki phenomenon (a kind of seiche) in Nagasaki Bay. Journal of the Oceanographical Society of Japan 38: 172-182.
  11. Kakinuma T (2019) Long-wave generation due to atmospheric-pressure variation and harbor oscillation in harbors of various shapes and countermeasures against meteotsunamis. In Natural Hazards—Risk, Exposure, Response, and Resilience; Tiefenbacher JP, Ed.; IntechOpen: London, pp. 81-109.
  12. Geisler JE (1970) Linear theory of the response of a two layer ocean to a moving hurricane. Geophysical and Astrophysical Fluid Dynamics 1: 249-272.
  13. Dotsenko SF (1991) Generation of long internal waves in the ocean by a moving pressure zone. Soviet Journal of Physical Oceanography 2: 163-170.
  14. Proudman J (1929) The effects on the sea of changes in atmospheric pressure. Geophysical Journal International 2: 197-209.
  15. Whitham GB (1974) Linear and Nonlinear Waves; John Wiley & Sons, Inc.: New York, NY, pp. 511-532.
  16. Lee S, Yates G, Wu T (1989) Experiments and analyses of upstream-advancing solitary waves generated by moving disturbances. Journal of Fluid Mechanics 199: 569-593.
  17. Kakinuma T, Akiyama M (2007) Numerical analysis of tsunami generation due to seabed deformation. In Coastal Engineering 2006; Smith JM, Ed.; World Scientific Publishing Co. Pte. Ltd.: Singapore, pp. 1490-1502.
  18. Dalphin J, Barros R (2018) Optimal shape of an underwater moving bottom generating surface waves ruled by a forced Korteweg-de Vries equation. Journal of Optimization Theory and Applications 180: 574-607.
  19. Michele S, Renzi E, Borthwick A, Whittaker C, Raby A (2022) Weakly nonlinear theory for dispersive waves generated by moving seabed deformation. Journal of Fluid Mechanics 937.
  20. Garrett CJR (1970) A theory of the Krakatoa tide gauge disturbances. Tellus 22: 43-52.
  21. Kakinuma T (2022) Tsunamis generated and amplified by atmospheric pressure waves due to an eruption over seabed topography. Geosciences 12.
  22. Kakinuma T, Hisada M (2023) A numerical study on the response of a very large floating airport to airplane movement. Eng 4: 1236-1264.
  23. Kakinuma T (2003) A nonlinear numerical model for surface and internal waves shoaling on a permeable beach. In Coastal Engineering VI; Brebbia CA, Lopez-Aguayo F, Almorza D, Eds.; Wessex Tech. Press, pp. 227-236.
  24. Yamashita K, Kakinuma T (2022) Interpretation of global tsunami height distribution due to the 2022 Hunga Tonga-Hunga Ha'apai volcanic eruption. Preprint available at Research Square.

Double Cote’s Spiral in M83 Galaxies, NGC 1566 and Cyclone in the South Georgia and South Sandwich Islands

DOI: 10.31038/GEMS.2023532


A comparative analysis is presented of the shape of spiral galaxies and of the subtropical cyclone that formed north of South Georgia Island and passed north of the South Sandwich Islands in the South Atlantic Ocean. Subtropical cyclones with double spirals appear to be common in these areas of the South Atlantic. The cyclone’s double spiral shape, whose mathematical equation has already been identified as a Cotes spiral by Gobato et al. (2022), and which parallels the double-spiral shape of galaxies described by Lindblad (1964), among others, is discussed here [44].

The South Georgia Group lies about 1,390 km (860 mi; 750 nmi) east-southeast of the Falkland Islands, at 54°-55°S, 36°-38°W. It comprises South Georgia Island itself, by far the largest island in the territory, the islands that immediately surround it, and some remote and isolated islets to the west and east-southeast. It has a total land area of 3,756 square kilometers (1,450 sq. mi), including satellite islands but excluding the South Sandwich Islands, which form a separate island group [53,56]. A cyclone is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone) [14,27,29]. A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel [19,26,27].

These storms usually have a radius of maximum winds that is larger than what is observed in purely tropical systems, and their maximum sustained winds have not been observed to exceed about 32 m/s (64 knots). Subtropical cyclones sometimes become true tropical cyclones, and likewise, tropical cyclones occasionally become subtropical storms. Subtropical cyclones in the Atlantic basin are classified by their maximum sustained surface winds: subtropical depressions have surface winds less than 18 m/s (35 knots), while subtropical storms have surface winds greater than or equal to 18 m/s [9-21,26,27,29].

In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point [23-25]. The characteristic shape of hurricanes, cyclones and typhoons is a spiral [26,27,29,34-41]. There are several types of spirals, and determining the characteristic equation of the spiral that a cyclone, such as the cyclone bomb (CB) [28], fits into is the goal of the work. Spiral galaxies form a class of galaxy originally described by Edwin Hubble in his 1936 work The Realm of the Nebulae and, as such, form part of the Hubble sequence. Most spiral galaxies consist of a flat, rotating disk containing stars, gas and dust, and a central concentration of stars known as the bulge. These are often surrounded by a much fainter halo of stars, many of which reside in globular clusters [54].

The core of the cyclone presents the form of a double spiral (Figure 1), in the same way as the spirals of the galaxies studied by Lindblad (1964) [32]. This spiral has been identified as a Cote's spiral, Gobato et al. (2022) [7-11,18-20,22-25,44].
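
For reference, the Cotes spiral is a standard family of curves from analytical dynamics: the orbits of a particle under an inverse-cube central force (see the cited treatments [38,39]). A minimal statement of the three standard forms, in polar coordinates $(r, \theta)$ with constants $A$, $B$, $k$ and $\varepsilon$, is:

\[
\frac{1}{r(\theta)} =
\begin{cases}
A\cos(k\theta + \varepsilon) & \text{(epispiral)}\\
A\theta + B & \text{(hyperbolic spiral)}\\
A\cosh(k\theta) + B\sinh(k\theta) & \text{(Poinsot's spirals)}
\end{cases}
\]

Which branch applies depends on the sign of the constant $c$ in the orbit equation $u'' + c\,u = 0$, where $u = 1/r$.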


Figure 1: Image of Georgia, scale 1:200, on April 11, 2023, PM, with the nucleus at the coordinates given in the image [46] [Authors].

The very fine image quality of the HAWK-I camera, coupled with the huge light-collecting power of the VLT, reveals vast numbers of stars within the galaxy. The images were taken in three different parts of the infrared spectrum, and the total exposure time was eight and a half hours, split into more than five hundred exposures of one minute each. The field of view is about 13 arcminutes across [49,55].

Figure 2 shows a Hubble image that captures hundreds of thousands of individual stars, thousands of star clusters and hundreds of supernova remnants in the spiral galaxy M83. Also known as the Southern Pinwheel, this galaxy is located 15 million light-years away from Earth in the constellation Hydra. It was discovered in 1752 by the French astronomer Nicolas Louis de Lacaille. With an apparent magnitude of 7.5, M83 is one of the brightest spiral galaxies in the night sky. It is most easily observed with a pair of binoculars in May [49,50].


Figure 2: The spectacular spiral galaxy M83, imaged using the impressive power of HAWK-I [49,50].

NGC 1566, sometimes known as the Spanish Dancer, is an intermediate spiral galaxy in the constellation Dorado, positioned about 3.5° to the south of the star Gamma Doradus (Figure 3). It was discovered on May 28, 1826 by the Scottish astronomer James Dunlop. At 10th magnitude, it requires a telescope to view. The distance to this galaxy remains elusive, with measurements ranging from 6 Mpc up to 21 Mpc [50,51]. The small but extremely bright nucleus of NGC 1566 is clearly visible in this image, a telltale sign of its membership of the Seyfert class of galaxies. The centers of such galaxies are very active and luminous, emitting strong bursts of radiation and potentially harboring supermassive black holes that are many millions of times the mass of the sun [50,51].


Figure 3: Hubble image shows NGC 1566, a beautiful galaxy located approximately 40 million light-years away in the constellation of Dorado (The Dolphinfish). NGC 1566 is an intermediate spiral galaxy, meaning that while it does not have a well-defined bar-shaped region of stars at its center like barred spirals it is not quite an unbarred spiral either [50,51].

NGC 1566 is not just any Seyfert galaxy; it is the second brightest Seyfert galaxy known. It is also the brightest and most dominant member of the Dorado Group, a loose concentration of galaxies that together comprise one of the richest galaxy groups of the southern hemisphere. This image highlights the beauty and awe-inspiring nature of this unique galaxy group, with NGC 1566 glittering and glowing, its bright nucleus framed by swirling and symmetrical lavender arms [50,51].

Figure 1 shows the image of Georgia, at a scale of 1:200, on April 11, 2023, PM, with the nucleus at the coordinates given in the image. In the atmospheric pressure gradient model generated by the Zoom Earth system on April 11, 2023, at 12:30, the cyclone registered 951 mbar, with its core located at the approximate coordinates shown in the image. In the surface wind model generated by the Zoom Earth system on April 11, 2023, at 12:00, it presented winds of 5 km/h WSW, with the nucleus at the coordinates given in the image.

The wind-current model for the displacement of air masses observed in the images is consistent with observation, which shows great turbulence in the vortex. The highlighted cyclone vortex, still in turbulent formation, presents two linear containment barriers in an L shape. The subtropical cyclone that formed northwest of South Georgia and the South Sandwich Islands is here called Georgia. It moved 237 km towards the west in 12 h, from 589 km to 809 km from the center of the coast of South Georgia Island. During this time interval, it maintained an atmospheric pressure at sea level at its vortex close to 951 hPa. It presented rotational winds of 5 km/h approximately 8 km from the central vortex (Figure 4).


Figure 4: Image of Georgia, scale 1:100, in the surface wind model generated by the Zoom Earth system, on April 11, 2023, 12:00, with winds of 5 km/h WSW, and the nucleus at the coordinates given in the image.

The analogous shape of Georgia and of the galaxies Messier 83 and NGC 1566 studied here is clear. Both present a double spiral, as studied by Lindblad [47], but with the Cote's spiral form, Gobato et al. (2022) [8,9,11] (Table 1).

Table 1: Subtropical Cyclone Georgia: Location/Pressure (April 11, 2023)

Time    Coordinates                  Pressure (hPa)
AM      53°13'09"S 27°45'05"W        951
PM      53°16'42"S 24°00'38"W        951

As noted above, the subtropical cyclone Georgia moved 237 km towards the west in 12 h while maintaining a sea-level atmospheric pressure close to 951 hPa at its vortex, with rotational winds of 5 km/h approximately 8 km from the central vortex. With an approximate dimension of 1,000,000 km2 and an area of direct influence of 3,500,000 km2, Georgia moved at an average speed of 19.75 km/h.
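
As a quick arithmetic check on the figures quoted above, the average translation speed follows directly from the displacement and the elapsed time:

\[
v = \frac{237\ \text{km}}{12\ \text{h}} \approx 19.75\ \text{km/h}
\]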

The mathematical model for the atmospheric pressure gradient used by Zoom Earth [43] matches the correct way to scale the atmospheric pressure, as can be seen in the comparison with the satellite images. The wind-current model for the displacement of air masses is likewise consistent with observation, which shows great turbulence in the vortex. The highlighted cyclone vortex, still in turbulent formation, presents two linear containment barriers in an L shape, and Georgia presents the double spiral Cote's shape. The analogous shape of Georgia and of the galaxies Messier 83 and NGC 1566 studied here is clear: both present a double spiral, as studied by Lindblad (1964) [47], but with the Cote's spiral form, Gobato et al. (2022) [8,9,11,44].

References

  1. (2023) Cyclone. Creative Commons CC BY-SA.
  2. American Meteorological Society (2020) Glossary of Meteorology.
  3. Landsea C (2009) Subject: (A6) What is a sub-tropical cyclone?
  4. Atlantic Oceanographic and Meteorological Laboratory.
  5. Armentrout D and Armentrout P (2007) Rourke Publishing (FL) Tornadoes, Series: Earth’s Power.
  6. Edwards R (2006) The Online Tornado FAQ. Storm Prediction Center, National Oceanic and Atmospheric Administration.
  7. Gobato R, Mitra A and Valverde L (2022) Tornadoes analysis Concordia, Santa Catarina, Southern Brazil, 2022 season. Aeronautics and Aerospace Open Access Journal.
  8. Gobato R, Mitra A, Gobato MRR and Heidari A (2022) Cote’s Double Spiral of Extra Tropical Cyclones. Journal of Climatology & Weather Forecasting.
  9. Gobato R, Mitra A, Heidari A and Gobato MRR (2022) Spiral galaxies and powerful extratropical cyclone in the Falklands Islands. Physics & Astronomy International Journal.
  10. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Spiral Galaxies and Powerful Extratropical Cyclone in the Falklands Islands.
  11. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Extratropical Cyclone in the Falklands Islands and the Spiral Galaxies. Sumerianz Journal of Scientific Research.
  12. Gobato R, Heidari A, Mitra A and Gobato MRR (2022) Spiral Galaxies and Extratropical Cyclone.
  13. Gobato R, Mitra A (2022) Vortex Storms in the West of Santa Catarina. Biomedicine and Chemical Sciences.
  14. Gobato R, Heidari A and Mitra A (2021) Mathematics of the Extra-Tropical Cyclone Vortex in the Southern Atlantic Ocean. Journal of Climatology & Weather Forecasting.
  15. Bluestein, HB (2013) Severe Convective Storms and Tornadoes: Observations and Dynamics, Series: Springer Praxis Books Springer-Verlag Berlin Heidelberg.
  16. Gobato R, Gobato MRR and Heidari A (2018) Evidence of Tornadoes Reaching the Countries of Rio Branco do Ivai and Rosario de Ivai, Southern Brazil on June 6, 2017. Climatol Weather Forecasting.
  17. Gobato R, Gobato MRR and Heidari A (2019) Evidence of Tornadoes Reaching the Countries of Rio Branco do Ivai and Rosario de Ivai, Southern Brazil on June 6, 2017.
  18. Gobato R, Gobato MRR and Heidari A (2019) Storm Vortex in the Center of Paraná State on June 6, 2017: A Case Study. Sumerianz Journal of Scientific Research.
  19. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Vortex Cote’s Spiral in an Extratropical Cyclone in the Southern Coast of Brazil. Archives in Biomedical Engineering and Biotechnology.
  20. Gobato R and Heidari A (2020) Vortex Cote’s Spiral in an Extratropical Cyclone in the Southern Coast of Brazil. J Cur Tre Phy Res App.
  21. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Cotes’s Spiral Vortex in Extratropical Cyclone Bomb South Atlantic Oceans. Aswan University Journal of Environmental Studies (AUJES)
  22. Gobato R, Gobato A and Fedrigo DFG (2016) Study of tornadoes that have reached the state of Parana. Parana J Sci Educ.
  23. Vossler DL (1999) Exploring Analytic Geometry with Mathematica. Academic Press.
  24. Casey J (2001) A treatise on the analytical geometry of the point, line, circle, and conic sections, containing an account of its most recent extensions, with numerous examples. University of Michigan Library.
  25. Sharipov R (?) Course of Analytical Geometry. Bashkir State University (Russian Federation).
  26. de León M and Rodrigues PR (1989) Methods of Differential Geometry in Analytical Mechanics, Series: Mathematics Studies. Elsevier Science.
  27. Vasquez T (2002) Weather Forecasting Handbook (5th Edition). Weather Graphics.
  28. Bluestein HB, Bosart LF, Eds. Synoptic-Dynamic Meteorology and Weather Analysis and Forecasting: A Tribute to Fred Sanders, Series: Meteorological Monographs 3(55). American Meteorological Society.
  29. Gobato R, and Heidari A (2020) Cyclone Bomb Hits Southern Brazil in 2020. Journal of Atmospheric Science Research.
  30. Rafferty JP (2010) Storms, Violent Winds, and Earth’s Atmosphere. Series: Dynamic Earth. Britannica Educational
  31. Krasny R (1986) A study of singularity formation in a vortex sheet by the point vortex approximation. Fluid Mech.
  32. Saffman PG (1992) Vortex Dynamics. Series: Cambridge Monographs on Mechanics and Applied Mathematics. Cambridge University Press.
  33. Sokolovskiy MA and Verron J (2000) Four-vortex motion in the two-layer approximation: integrable case.
  34. Whittaker ET and McCrea Sir W (1989) Treatise on Analytical Dynamics of Particles and Rigid Bodies. Cambridge Mathematical Library, Cambridge University Press.
  35. George JJ (1960) Weather Forecasting for Aeronautics. Elsevier Inc.
  36. Yorke S (2010) Weather Forecasting Made Simple. Countryside Books.
  37. Anderson JD (1984) Fundamentals of Aerodynamics. McGraw-Hill
  38. Weisstein EW (2023) Cotes's Spiral. Wolfram MathWorld.
  39. Whittaker ET (2022) A Treatise on the Analytical Dynamics of Particles and Rigid Bodies: With an Introduction to the Problem of Three Bodies.
  40. Gobato R, Heidari A, Mitra A and Gobato MRR (2020) Cotes’s Spiral Vortex in Extratropical Cyclone bomb South Atlantic Oceans.
  41. Fischer R (1993) Fibonacci Applications and Strategies for Traders: Unveiling the Secret of the Logarithmic Spiral.
  42. Toomre A (?) Theories of Spiral Structure. Annual Review of Astronomy and Astrophysics.
  43. Oort JH (1970) The Spiral Structure of Our Galaxy, Series: International Astronomical Union 38. Becker W, Contopoulos G (Eds.). Springer Netherlands.
  44. Nezlin MV and Snezhkin EN (1993) Rossby Vortices, Spiral Structures, Solitons: Astrophysics and Plasma Physics in Shallow Water Experiments, Series: Springer Series in Nonlinear Dynamics. Springer Verlag Berlin Heidelberg.
  45. Gobato R, Mitra A and Mullick P (2023) Double Spiral Galaxies and the Extratropical Cyclone in South Georgia and the South Sandwich Islands. Climate Research.
  46. Brazil’s Navy. Synoptic Letters (2023) Brazil’s navy. Synoptic Letters.
  47. (2023) Zoom Earth. NOAA/NESDIS/STAR, GOES-East, zoom.earth
  48. Lindblad B (1964) On the circulation theory of spiral structure. Astrophysica Norvegica (12). Stockholms Observatorium, Saltsjöbaden.
  49. Gobato R and Heidari A (2020) Vortex hits southern Brazil in 2020.
  50. NASA gov (2017) Messier 83 (The Southern Pinwheel)
  51. ESA/Hubble & NASA (2020) NGC 1566. European Space Agency.
  52. (2023) NGC 1566. Creative Commons.
  53. Jeynes C (2019) Maximum Entropy (Most Likely) Double Helical and Double Logarithmic Spiral Trajectories in Space-Time. Scientific Reports.
  54. (2023) South Georgia and the South Sandwich Islands. Creative Commons. CC BY-SA 3.0. https://en.wikipedia.org/wiki/South_Georgia_ and_the_South_Sandwich_Islands
  55. (2023) Spiral galaxy. Creative Commons.
  56. Heyer HH (2020) The classic spiral Messier 83 seen in the infrared with HAWK-I. ESO. https://www.eso.org/public/images/eso1020a/

Significance of Molecular Genotyping over Serological Phenotyping Techniques in the Determination of Blood Group Systems among Multiply Transfused Patients and Blood Donors to Prevent Alloimmunization: A Review Article

DOI: 10.31038/CST.2023821

Summary

Erythrocyte serological phenotyping is very important in determining the identity of suspected alloantibodies and in facilitating the identification of antibodies that may be formed in the future. Serological phenotyping is a conventional method based on the presence of visible haemagglutination or haemolysis. This technique has some limitations in the successful determination of blood groups: the presence of donor red blood cells in the circulation of recently multiply transfused patients, certain medications, or some disease conditions may alter the erythrocyte composition, making accurate determination of the blood group of such patients time-consuming and difficult to interpret. It is often more complicated to determine the blood group if the direct antiglobulin test of such patients is positive and there is no directly agglutinating antibody. Molecular genotyping of blood group systems has led to an understanding of the molecular basis of many blood group antigens; many blood group polymorphisms are associated with a single point mutation in the gene encoding the protein carrying the blood group antigen. This knowledge allows the use of molecular testing to predict the blood group antigen profile of an individual and to overcome the limitations of conventional serological blood group phenotyping. Determination of blood group polymorphisms at the genomic level facilitates the resolution of clinical problems that cannot be addressed by serological techniques. Applications of blood group genotyping for red cell antigens affect several areas of medicine, including: identification of fetuses at risk for haemolytic disease of the newborn and of candidates for Rh-immune globulin; determination of antigen types for which currently available antibodies are weakly reactive; determination of the blood group of patients who have had recent multiple transfusions; increasing the reliability of repositories of antigen-negative RBCs for transfusion; selection of appropriate donors for bone marrow transplantation; provision of transfusion support for highly alloimmunized patients; resolution of ABO and Rh discrepancies; confirmation of A2 subgroup status of kidney donors; and comprehensive typing for patients with haematological diseases requiring chronic transfusion and for oncology patients receiving monoclonal antibody therapies that interfere with pretransfusion testing.

Keywords

Molecular, Serological, Red cell antigens, Alloantibodies, Transfusion

Introduction

Blood group systems are characterized by the presence or absence of antigens on the surface of erythrocytes. The specificity of these antigens is controlled by a series of genes, which can be allelic or linked very closely on the same chromosome, and which persist throughout life and serve as identity markers. Presently, the International Society of Blood Transfusion (ISBT) has acknowledged about 36 blood group systems, and more than 420 blood group antigens have been discovered on the surface of the human red cell (Storry et al., 2016). The clinical importance of RBC antigens is associated with their ability to induce alloantibodies that are capable of reacting at 37°C (body temperature); these antibodies can cause destruction of erythrocytes. The major clinically significant antibodies are those against ABO, Rh, Kell, Kidd and Duffy antigens (Karafin et al., 2018). ABO antibodies are naturally occurring, while Rh and Kell antibodies arise from immunization against these highly immunogenic antigens. The immune system produces alloantibodies when it is exposed to foreign antigens (incompatible erythrocytes); these antibodies form a complex with donor cells, causing haemolytic transfusion reactions. Patients with Rh and Kell alloantibodies should be transfused with blood lacking the corresponding antigens, because these antibodies are capable of causing severe haemolytic anaemia and haemolytic disease of the newborn (Singhal et al., 2017). Hence, it is important to transfuse females of child-bearing age with compatible blood in order to reduce or minimize the possibility of sensitizing their immune system to clinically important antigens (Guelsin et al., 2015). Unexpected incompatibility reactions are the major risk of blood and blood product transfusion apart from transfusion-transmissible infections; clinically significant alloantibodies play a critical role in transfusion medicine by causing either acute or delayed haemolytic transfusion reactions (HTRs) or haemolytic disease of the fetus and newborn (HDFN), ranging from mild to severe grades. The degree of production of alloantibodies capable of destroying foreign or donor red cells is higher amongst multi-transfused patients compared with the general population (Karafin et al., 2017). Serological phenotyping of blood group systems is a classical and conventional method of detecting erythrocyte antigens by haemagglutination or haemolysis; accurate phenotyping among multi-transfused patients is a very complex process due to the presence of the donor's blood cells in the patient's circulation, unless serological phenotyping is performed before the initiation of transfusion. Blood group genotyping has recently been developed to determine the blood group antigen profile of an individual, with the goal of reducing risk or identifying a fetus at risk of haemolytic disease of the newborn (HDN). Blood group genotyping improves the accuracy of blood typing where serology alone is unable to resolve the red cell phenotype, especially in individuals with weak antigen expression due to genetic variants, in cases of rare phenotypes where antisera are unavailable, in cases of recent multiple blood or blood product transfusions, or in patients whose RBCs are coated with immunoglobulin (Ye et al., 2016).
Genotyping techniques also help to determine which phenotypically antigen-negative patients can receive antigen-positive RBCs, to type donors for antibody identification panels, to type patients who have an antigen that is expressed weakly on RBCs, to determine Rh D zygosity, to mass-screen for antigen-negative donors, and to routinely select donor units antigen-matched to recipients beyond ABO and Rh D, which reduces complications in blood and blood product transfusion (Guelsin et al., 2010). The growth of whole-genome sequencing in chronic disease and for general health will provide patients' comprehensive extended blood group profiles as part of their medical records, to be used to inform selection of the optimal transfusion therapy (Westhoff, 2019). DNA-based genotyping is being used as an alternative to serological antibody-based methods to determine blood groups for matching donor to recipient, because most antigenic polymorphisms are due to single nucleotide changes in the respective genes. Importantly, the ability to test for antigens by genetic techniques where there are no serologic reagents is a major medical advance and breakthrough for identifying antibodies and finding compatible donor units, which can be lifesaving. The molecular genotyping of blood group antigens is an important development and is being introduced successfully in transfusion medicine. Genotyping has been shown to be effective and advantageous in predicting the phenotype from genomic DNA with a high degree of precision (da Costa et al., 2013). A notable advantage of molecular testing is its ability to identify variant alleles associated with antigens that are expressed weakly or have missing or altered epitopes, thus helping to resolve discrepant or incomplete blood group phenotyping. The disadvantages of molecular testing are mainly the longer turnaround time and higher cost compared with serologic typing (Marilia et al., 2019). The molecular basis for most erythrocyte antigens is known, and numerous DNA analysis methodologies have been developed, all based on PCR, which can detect several alleles simultaneously as long as the alleles studied have products of different sizes (Kulkarni et al., 2018). The detection of blood group antigens is essential in transfusion practice in order to prevent alloimmunization, especially in multiply transfused patients. Erythrocyte antibodies that are clinically significant in transfusion medicine can lead to acute or delayed blood transfusion reactions and haemolytic disease of the fetus and newborn, which increase the morbidity and mortality of patients. In addition, alloimmunization may delay the localization of a compatible blood bag. The probability of an individual producing one or more anti-erythrocyte antibodies is approximately 1% per unit of blood transfused, and in chronically multiply transfused patients the alloimmunization rate may reach 50%. Both blood donors and recipients can be genetically typed for all the clinically significant blood group antigens, and antigen-matched blood can be provided to the recipient (Guelsin et al., 2010). This approach could significantly reduce the rate of alloimmunization.
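
As a rough illustration of how a per-unit rate compounds across repeated transfusions (a back-of-envelope sketch that assumes, purely for illustration, an independent 1% risk per unit):

\[
P(\text{alloimmunized after } n \text{ units}) = 1 - (1 - 0.01)^n, \qquad 1 - 0.99^{69} \approx 0.50
\]

Under this naive model, roughly 69 units would bring the cumulative risk to about 50%, the upper figure quoted above for chronically transfused patients. The real risk is not independent per unit (some patients are strong responders, others never alloimmunize), which is precisely why identifying and matching high-risk patients matters.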

Serological Phenotyping

Knowledge of the role of blood groups, with their antigens and variants, in alloimmunization was pivotal for the development of transfusion practices and of medical interventions that require blood transfusion, such as trauma care, organ transplantation, cancer treatment, and the management of haematological diseases (such as sickle cell disease, thalassaemia, and aplastic anaemia). Serology has long been considered the gold standard technique for blood group typing (Yazdanbakhsh et al., 2014). Serological methods detect the antigens expressed on the red cell using specific antibodies and can be carried out manually or on automated platforms. Typing blood group antigens using this method is easy, fast, reliable, and accurate for most antigens. However, serology has limitations, some of which cannot be overcome when it is used as a standalone testing platform (Das et al., 2020). The scarcity of serological reagents for some blood group systems, for which no monoclonal antibody is available, is a major limitation of the serological technique. In addition, human serum samples from different donors vary in reactivity, which is an issue when a nearly exhausted batch of reagent needs to be replaced. This is especially problematic when an alloantibody against that antigen is suspected of causing adverse events after transfusion. In those circumstances, molecular methods can be used as an alternative or as a complementary test for identification of the genes associated with blood group antigen expression and prediction of the antigenic profile.

Molecular Genotyping

The identification of the genes that encode proteins carrying blood group antigens, and of the molecular polymorphisms that result in the distinct antigenicity of these proteins, is possible using molecular typing methods, which facilitate blood typing resolution in complex cases and overcome the limitations of serological techniques when dealing with alloimmunized and multi-transfused patients (da Costa et al., 2013). In addition, molecular techniques have allowed identification of genes encoding clinically relevant antigens for which serological reagents are not available. In those instances, genotyping is critical to resolve clinical challenges. Blood group genotyping is performed to predict blood group antigens by identifying specific polymorphisms associated with the expression of an antigen (Westhoff, 2019). Most variations in blood group antigens are linked to point mutations, but for some, other molecular mechanisms are responsible, such as deletion or insertion of a gene, an exon or a nucleotide sequence (for example in the ABO, RH, and DO blood group systems), sequence duplication (for example the RHD gene and the GE blood group system), nonsense mutation (for example the RHD gene), and hybrid genes (for example the RH, MNS, ABO, and CH/RG blood group systems) (Bakanay et al., 2013). In contrast to serological techniques, molecular genotyping tests are performed on DNA obtained from nucleated cells and are not affected by the presence of donor red cells in the patient's sample, which is a common occurrence in samples from patients with recent multiple blood and blood product transfusions. Thus, erythrocyte genotyping can resolve blood group typing discrepancies in multi-transfused patients presenting with mixed-field reactions, alloantibodies, or autoantibodies. Also, blood group genotyping can substantially help patients who were not previously phenotyped and need regular transfusions, by facilitating the management of these patients and preventing alloimmunization (Guelsin et al., 2015). Studies comparing serology and genotyping in multi-transfused populations, such as patients with thalassaemia and sickle cell disease, have shown that genotyping is superior to serology for resolving discrepancies. Use of genotype-matched units has been shown to decrease alloimmunization rates, increase haemoglobin levels and in vivo erythrocyte survival, and diminish the frequency of transfusions.
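
To make the prediction step concrete, the sketch below (illustrative only; not from the original article) maps biallelic SNP genotypes to predicted antigen phenotypes for three systems whose antigen-defining polymorphisms are well described in the literature (KEL c.578C>T for k/K, FY c.125G>A for Fya/Fyb, JK c.838G>A for Jka/Jkb). A real assay would need a validated panel that also handles gene deletions, hybrid alleles, and silencing variants such as the GATA-box mutation that abolishes Fyb expression on red cells:

```python
# Minimal sketch: predicting red cell antigen phenotypes from biallelic
# SNP genotypes. Panel content is illustrative; clinical genotyping must
# also cover deletions, hybrid alleles, and silencing variants.

# For each SNP, map the observed base to the antigen that allele encodes.
PANEL = {
    "KEL_c.578C>T": {"C": "k",   "T": "K"},    # Thr193Met
    "FY_c.125G>A":  {"G": "Fya", "A": "Fyb"},  # Gly42Asp
    "JK_c.838G>A":  {"G": "Jka", "A": "Jkb"},  # Asp280Asn
}

def predict_phenotype(genotype):
    """Predict antigen positivity from diploid genotypes.

    genotype maps SNP name -> the two observed bases, e.g. {"JK_c.838G>A": "GA"}.
    Returns antigen -> "+" or "-". An antigen is called positive if at least
    one of the two alleles encodes it.
    """
    result = {}
    for snp, bases in genotype.items():
        allele_map = PANEL[snp]
        encoded = {allele_map[base] for base in bases}
        for antigen in allele_map.values():
            result[antigen] = "+" if antigen in encoded else "-"
    return result

if __name__ == "__main__":
    # A hypothetical multiply transfused patient, typed from leukocyte DNA
    # so the call is unaffected by circulating donor red cells:
    patient = {"KEL_c.578C>T": "CC", "FY_c.125G>A": "GA", "JK_c.838G>A": "AA"}
    print(predict_phenotype(patient))
    # {'k': '+', 'K': '-', 'Fya': '+', 'Fyb': '+', 'Jka': '-', 'Jkb': '+'}
```

The design point mirrors the text: because the template is nucleated-cell DNA rather than the red cells themselves, a recent transfusion does not contaminate the result.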

Erythrocyte Antigen Disparity and its Significance in Transfusion Medicine

Patients who develop alloantibodies may have received multiple transfusions or been exposed through pregnancy, making alloimmunization a particular problem for patients requiring chronic RBC transfusion support as a result of haematological diseases (Ngoma et al., 2016). The incidence of alloimmunization varies considerably with the individual patient's health condition, rate of exposure to foreign antigens, ethnicity, and geographical area. Knowledge of the genotypes of both patients and donors has led to a greater understanding of potential mechanisms for persistent alloimmunization despite serologic antigen matching for transfusion; extended matching to include the Duffy, Kidd, and MNS systems has been shown to reduce the rate of alloimmunization (Khan and Delaney, 2018). It is therefore clear that serologic phenotyping is inadequate to capture allelic diversity in minority populations. Without accurate characterization of the patient and donor genotypes, true antigen matching to prevent alloimmunization is not possible (Chou et al., 2013). In addition, there is still an inadequate understanding of the risk of alloimmunization with specific blood group gene haplotypes, particularly for RHD. Large, multi-institutional studies with genotyping of both patient and donor, and better characterization of the specificity of antibodies formed, are needed to clarify the clinical significance and immunogenic risks of variant alleles (Putzulu et al., 2017).

Blood Transfusion and Risk of Erythrocyte Alloimmunization

Erythrocyte alloimmunization is a serious adverse event of blood and blood product transfusion which can cause further clinical problems in recipient patients, including worsening of anaemia, development of autoantibodies, acute or delayed haemolytic transfusion reactions, bystander haemolysis, organ failure, and serious complications during pregnancy (Singhal et al., 2017). Frequent transfusions can lead to the production of multiple alloantibodies, often associated with autoantibodies, requiring extensive serological workups and additional transfusions for proper treatment, increasing the time and resources needed to find compatible RBC units (Yazdanbakhsh et al., 2014). Reported erythrocyte alloimmunization rates vary considerably depending on the population and disease studied. The rates are estimated at between 1 and 3% in patients who receive episodic transfusions, while for patients who receive chronic blood transfusions, such as patients with sickle cell disease, rates vary between 8 and 76% (Chou et al., 2013). The development of RBC antibodies is influenced by many factors, including the recipient's gender, age, and underlying disease. The diversity of blood group antigen expression among the donor and patient populations contributes substantially to the high alloimmunization rates (Ryder et al., 2014). Studies in sickle cell disease patients have reported that inflammation is associated with a higher likelihood of alloimmunization, and it is suggested that the extent of the alloimmune response is greater when RBCs are transfused in the presence of an inflammatory signal. Several studies have suggested that genetic variation in immune-related genes and human leukocyte antigens might be associated with susceptibility to, or protection from, alloimmunization (Zimring and Hendrickson, 2008).

Consequences of Alloimmunization in Transfusion Medicine

Depending on the antigen and the clinical significance of the antibody formed, patients can suffer morbidity and mortality due to an acute or delayed haemolytic transfusion reaction if incompatible blood or blood products are transfused (Khan and Delaney, 2018). A rare but life-threatening consequence of recurrent transfusions is a hyperhaemolytic reaction, which occurs in patients with haemoglobinopathies, especially sickle cell disease (SCD) patients; the mechanism of the hyperhaemolytic reaction in SCD could be a complication of alloimmunization, with a possible contribution of an underlying genetic predisposition (Putzulu et al., 2017). The development of RBC alloantibodies also impacts patient care by increasing the cost and time required to find compatible RBC units. Once an RBC antibody is identified, all subsequent transfusions must be negative for that antigen to prevent a delayed haemolytic transfusion reaction from a robust secondary immune response (Ngoma et al., 2016). An additional risk for previously sensitized patients is the inability to detect evanesced RBC antibodies at future transfusion events; failure to identify pre-existing antibodies is a significant contributor to haemolytic transfusion reactions. Minority patients may be at greater risk of complications from alloimmunization because the presence of antibodies may not be accurately characterized. One reason is that they are more likely to be negative for high-prevalence antigens (Khan and Delaney, 2018). Antibodies to high-prevalence RBC antigens will react with all reagent RBCs. This is further complicated if the patient also has a positive direct antiglobulin test (DAT), as patients with SCD frequently do; the antibody to a high-prevalence antigen can then easily be confused with a warm autoantibody (Jain et al., 2016). In addition, because most reagent RBCs are not from minority populations, there is a risk that immunogenic Rh variants and other low-prevalence antigens are not expressed on the reagent RBCs, rendering antibody detection tests false-negative. Genotyping can be particularly useful to clarify antibody specificity, to identify the lack of a high-prevalence antigen, and to identify appropriate donors.

Prevention of Alloimmunization and Improvement of Transfusion Therapy

Prevention of alloimmunization is desirable for any blood and blood product transfusion. For patients not previously transfused, or those having only episodic blood transfusions, matching for all clinically significant antigens is not of great concern, but transfusion can still result in alloimmunization against non-matched antigens (Agrawal et al., 2016). For patients previously transfused, particularly transfusion-dependent patients, the alloimmunization risk is higher, and the management of alloimmunized patients is of greater concern. Their alloimmunization status, including antibodies of low clinical significance, is a critical part of their clinical history that may enable health care providers to take measures to prevent further alloimmunization (Singhal et al., 2017). Antigens have variable immunogenicity, and not all blood group antigens are involved in the production of clinically significant antibodies after blood transfusion or pregnancy. Ideally, every blood transfusion should be compatible for the most clinically significant antigens to prevent alloimmunization (Kulkarni et al., 2018). However, standard pre-transfusion cross-matching is only performed for the ABO blood group and the Rh (D) antigen; ABO matching is performed to avoid acute haemolytic transfusion reactions caused by naturally occurring IgM antibodies against ABO antigens, and Rh (D) matching is performed because of the high immunogenicity of Rh (D), which is implicated in delayed haemolytic transfusion reactions and haemolytic disease of the foetus and newborn (Chou et al., 2013). Currently, recommendations for partial and extended donor unit or patient matching are limited to specific groups, including patients on long-term transfusion protocols (sickle cell disease, thalassaemia, and aplastic anaemia), patients who have developed alloantibodies, and patients with warm autoimmune haemolytic anaemia (Kulkarni et al., 2018). Verification of compatibility for Rh (D, E, C, c, e) and K, which are the most frequent antigens involved in alloimmunization, is considered partial matching. Extended matching should include at least RH (D, C, E, c, e), KEL (K), FY (Fya, Fyb), JK (Jka, Jkb), MNS (S, s) and, if available, additional antigens (Osman et al., 2017). Prevention of an initial alloimmunization event may be even more important than previously appreciated in preventing the development of subsequent antibodies. For patients with a tendency toward forming RBC antibodies who also have an RBC phenotype with multiple negative antigens and/or lacking high-prevalence antigens, compatible units may become so rare as to make transfusion support virtually impossible (Wilkinson et al., 2012).

Screening for Clinically Significant Alloantibodies

Alloantibodies are antibodies produced in a patient as a result of exposure to foreign red cell antigens through transfusion of blood or blood products, pregnancy, or transplantation (Agrawal et al., 2016). In countries such as Nigeria, there are multiple ethnic groups and considerable racial or genetic heterogeneity among the population, which can be associated with a wide variation of alloantibodies. Other common factors that facilitate alloantibody formation in the recipient include the recipient's immune competence, the dose of antigen the recipient is exposed to, the route of exposure, and how immunogenic the foreign antigen is (Erhabor et al., 2015). Development of alloantibodies can lead to difficulty in finding compatible blood for transfusion, or it can result in a severe delayed haemolytic transfusion reaction if the antibody titre is low, undetected, or missed and an antigen-positive unit is transfused. Evidence-based best practice in the developing world requires that alloantibody testing is carried out as part of the pre-transfusion testing of patients who require a red cell transfusion, as well as of pregnant women presenting to the antenatal clinic at booking (Guelsin et al., 2015). The purpose of this test is to detect the presence of unexpected red cell antibodies in the patient's serum. Once such antibodies are detected during alloantibody screening, every effort must be made to identify the specificity of the alloantibody by performing a panel test. The aim of identifying the specificity of the alloantibody in a patient who requires a red cell transfusion is to enable the Medical Laboratory or Biomedical Scientist to select an antigen-negative donor unit for an appropriate crossmatch (indirect antiglobulin test) for that patient (Agrawal et al., 2016). A panel test in the case of a pregnant woman coming for antenatal booking serves to identify the alloantibody, to determine whether the antibody can potentially cause HDFN, and to allow monitoring of the titre or quantification of the antibody every 4 weeks from booking until 28 weeks' gestation and every 2 weeks thereafter until delivery. This information is important to determine the extent to which the developing foetus is affected by HDFN, to decide whether to monitor the baby for anaemia using Doppler ultrasound, to determine whether the baby will require intrauterine transfusion, and to make an informed decision about possibly delivering the baby earlier. These evidence-based best practices are not being implemented in many settings in Nigeria (Erhabor et al., 2015). Testing of donor units for clinically relevant red cell antigens other than ABO and Rh D is not routinely carried out (Singhal et al., 2017). This is a complete failure in stewardship by the Nigerian government and can compromise transfusion service delivery to pregnant women and patients who require red cell transfusion. Settings should also implement a policy to routinely test all group O donor units for haemolysins, in order to identify group O donors with high titres of IgG anti-A and/or anti-B whose blood should be reserved only for transfusion to group O recipients, while units that test negative can be transfused to A, B or AB individuals as a way of maximizing the use of the limited allogeneic stock (Obisesan et al., 2015).

Applications of Molecular Genotyping Over Serological Phenotyping in Transfusion Medicine

Multiply-transfused Patients

The ability to determine a patient's antigen profile by DNA analysis when haemagglutination tests cannot be used is a useful adjunct to a serologic investigation. Blood group genotyping in the transfusion setting is recommended for multiply transfused patients, such as those with sickle cell disease (SCD), as part of the antibody identification process (Castilho et al., 2018). Determination of a patient's blood type by analysis of DNA is particularly useful when a transfusion-dependent patient has produced alloantibodies; this helps in the selection of antigen-negative RBCs for transfusion. It also assists in the selection of compatible units for patients with discrepancies between genotype and phenotype, leading to increased cell survival and a reduction in transfusion frequency (Bakanay et al., 2013). In addition to its contribution to the general accuracy of identification of red blood cell antigens, genotyping of transfusion-dependent SCD patients allows assessment of the risk of alloimmunization against antigens.

Patients Whose RBCs are Coated with IgG

Patients with autoimmune haemolytic anaemia (AIHA), whose RBCs are coated with IgG, cannot be accurately typed for RBC antigens, particularly when directly agglutinating antibodies are not available or IgG removal by chemical treatment of RBCs is insufficient. Blood group genotyping is very important for determining the true blood group antigens of these patients (Jain et al., 2016). Providing such patients with antigen-matched RBCs typed by blood group genotyping increases erythrocyte in vivo survival, as assessed by rises in haemoglobin levels and a diminished frequency of transfusions.

Blood Donors

DNA-based typing can also be used to antigen-type blood donors both for transfusion and for antibody identification reagent panels. This is particularly useful when antibodies are not available or are weakly reactive (Huang et al., 2019). The molecular analysis of a variant gene can also assist in resolving a serologic investigation.

Resolution of Weak A, B, and D Typing Discrepancies

A proportion of blood donors and patients who historically have been typed as group O are now being recognized as group A or group B with the use of monoclonal antibodies capable of detecting small amounts of the immunodominant carbohydrate responsible for A or B specificity (Das et al., 2020). A typing result that differs from the historical record often results in time-consuming analyses. Since many of the weak subgroups of A and B are associated with altered transferase genes, PCR-based assays can be used to define the transferase gene and thus the ABO group (Nair et al., 2019). Similarly, for the D antigen of the Rh blood group system, a proportion of blood donors who historically have been typed as D-negative are now reclassified as D-positive, owing to monoclonal reagents that detect small and specific parts of the D antigen. The molecular basis of numerous D variants can be used to identify the genes encoding altered Rh D proteins in these individuals (Huang et al., 2019).

Applications to Maternal-fetal Medicine

Alloimmunization against the Rh D antigen during pregnancy is the most frequent cause of haemolytic disease of the newborn (HDN). Immunization occurs when fetal cells carrying antigens inherited from the father enter the mother's circulation following fetal-maternal bleeding. The mother, when not expressing the same antigen(s), may produce IgG antibodies towards the fetal antigen, and these antibodies can pass through the placenta, causing a diversity of symptoms ranging from mild anaemia to death of the foetus (Erhabor et al., 2015). Apart from antibodies to the Rh D blood group antigen, other specificities within the Rh system and several other blood group antigens can give rise to HDN, but Rh D is by far the most immunogenic. Prenatal determination of fetal Rh D status is desirable in pregnancies of Rh D negative mothers with Rh D positive fathers, to prevent sensitization and possible hydrops foetalis. Fetal DNA has been detected in amniotic cells, chorionic villus samples and, as more recently reported, in maternal plasma. It is now well accepted that a minute number of copies (as low as 35 copies/mL) of cell-free fetal RHD DNA in the maternal plasma can be utilized as a target for non-invasive genotyping of the foetus (Kulkarni et al., 2018). Unlike fetal DNA isolated from the cellular fraction of maternal blood samples, free fetal DNA isolated from maternal plasma has been shown to be specific to the current foetus and is completely cleared from the mother's circulation postpartum. It has been reported that fetal RHD can be determined by PCR in DNA extracted from the maternal plasma of pregnant women carrying Rh D positive foetuses, in a non-invasive procedure. PCR amplification of RHD in maternal plasma may be useful for the management of Rh D negative mothers of Rh D positive foetuses and for the study of foetal-maternal cell trafficking (Legler et al., 1999).
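
A minimal sketch of the kind of calling logic such non-invasive assays use is shown below (illustrative only: the exon set, the Ct cutoff, and the decision rule are assumptions for the sketch, not the protocol of any cited study; laboratories typically target several RHD exons so that pseudogene variants do not produce false positives, and validate their own thresholds):

```python
# Illustrative sketch: calling fetal RHD status from multi-exon real-time
# PCR of cell-free DNA in maternal plasma. Exon set, cutoff, and decision
# rule are assumed for illustration and would need clinical validation.

CT_CUTOFF = 40.0  # hypothetical cycle threshold; above this, "not detected"

def call_fetal_rhd(ct_values):
    """ct_values maps RHD exon name -> observed Ct (float('inf') if no signal).

    Requires concordant amplification in at least two exons for a positive
    call, guarding against single-target artefacts.
    """
    detected = sorted(exon for exon, ct in ct_values.items() if ct < CT_CUTOFF)
    if len(detected) >= 2:
        return "fetal RHD positive"
    if not detected:
        return "fetal RHD negative"
    return "inconclusive: signal only in " + detected[0]

# Example runs on hypothetical Ct values for three targeted exons:
print(call_fetal_rhd({"exon5": 33.2, "exon7": 34.0, "exon10": 33.8}))              # positive
print(call_fetal_rhd({"exon5": float("inf"), "exon7": float("inf"), "exon10": 41.5}))  # negative
```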

Conclusions

Determination of blood group polymorphisms at the genomic level facilitates the resolution of clinical problems that cannot be addressed by serological techniques. They are useful for determining antigen types for which currently available antibodies are weakly reactive; for typing patients who have been recently transfused; for identifying fetuses at risk for haemolytic disease of the newborn; and for increasing the reliability of repositories of antigen-negative RBCs for transfusion. Mass-scale genotyping, if applied to the routine blood grouping of patients and blood donors, would significantly change the management of blood provision. Better matching of donor blood to patient would be the most significant benefit. This is primarily because a large number of low-frequency antigens (or the absence of high-frequency antigens) are not routinely tested for, and donor-patient mismatches are detected only by serological cross-matching (and only if an antibody has been generated) immediately prior to transfusion. This review gives an overview of the current situation in this area and attempts to predict how blood group genotyping will evolve in the future.

Limitations

It is important to note that PCR-based assays are prone to different types of errors from those observed with serological assays. For instance, contamination with amplified products may lead to false-positive test results. In addition, the identification of a particular genotype does not necessarily mean that the antigen will be expressed on the RBC membrane.

Recommendation

As a word of caution, we should emphasize that the interpretation of molecular blood group genotyping results must take into account the potential contamination of PCR-based amplification assays and the observation that the presence of a particular genotype does not guarantee expression of the corresponding antigen on the RBC membrane. The possibility of having an alternative to serological tests to determine the patient's antigen profile should be considered for multiply transfused patients and for patients with autoimmune haemolytic anaemia (AIHA), by allowing the determination of the true blood group genotype and by assisting in the identification of suspected alloantibodies and in the selection of antigen-negative RBCs for transfusion. This ensures a more accurate selection of compatible donor units and is likely to prevent alloimmunization and reduce the potential for haemolytic reactions. As automated procedures attain higher and faster throughput at lower cost, blood group genotyping is likely to become more widespread. We believe that PCR technology may be used in transfusion services in the next few years to overcome the limitations of serological techniques.

References

  1. Agrawal A, Mathur A, Dontula S, Jagannathan L (2016) Red blood cell alloimmunization in multi-transfused patients: A Bicentric study in India. Global Journal of Transfusion Medicine 12(1): 12-17. [crossref]
  2. Bakanay SM, Ozturk A, Ileri T, Ince E, Yavasoglu S, Akar N (2013) Blood group genotyping in multi-transfused patients. Transfusion and Apheresis Science 48(2): 257-261. [crossref]
  3. Castilho L, Dinardo CL (2018) Optimized antigen-matched in sickle cell disease patients: Chances and challenges in molecular times—The Brazilian way. Transfusion Medicine and Hemotherapy 45(4): 258-262. [crossref]
  4. Chou ST, Jackson T, Vege S, Smith-Whitley K, Friedman DF, Westhoff CM (2013) High prevalence of red blood cell alloimmunization in sickle cell disease despite transfusion from Rh-matched minority donors. Blood 122(6): 1062-1071. [crossref]
  5. Westhoff CM (2019) Blood group genotyping. Blood 133(17): 1814-1820. [crossref]
  6. da Costa DC, Pellegrino J, Guelsin GA, Ribeiro KA, Gilli SC, Castilho L (2013) Molecular matching of red blood cells is superior to serological matching in sickle cell disease patients. Revista Brasileira de Hematologia and Hemoterapia 35(1): 35-38. [crossref]
  7. Das SS, Biswas RN, Safi M, Zaman RU (2020) Serological evaluation and differentiation of subgroups of “A” and “AB” in healthy blood donor population in Eastern India. Global Journal Transfusion Medicine 20(5): 192-196. [crossref]
  8. Erhabor O, Malami AL, Isaac Z, Yakubu A, Hassan M (2015) Distribution of Kell phenotype among pregnant women in Sokoto, North Western Nigeria. Pan African Medical Journal 301(21): 1-9. [crossref]
  9. Guelsin GA, Rodrigues C, Visentainer JE, De Melo Campos P, Traina F, Gilli SC (2015) Molecular matching for Rh and K reduces red blood cell alloimmunisation in patients with myelodysplastic syndrome. Blood Transfusion 13(1): 53-58. [crossref]
  10. Guelsin GA, Sell AM, Castilho L, Masaki VL, Melo FC, Hashimoto MN (2010) Benefits of blood group genotyping in multi-transfused patients from the south of Brazil. Journal of Clinical Laboratory Analysis 24(5): 311-316. [crossref]
  11. Huang H, Jin S, Liu X, Wang Z, Lu Q, Fan L, (2019) Molecular genetic analysis of weak ABO subgroups in the Chinese population reveals ten novel ABO subgroup alleles. Blood Transfusion 17(1): 217-222. [crossref]
  12. Jain A, Agnihotri A, Marwaha N, Sharma RR (2016) Direct antiglobulin test positivity in multi-transfused thalassemics. Asian Journal of Transfusion Science 10(1): 161-163. [crossref]
  13. Khan J, Delaney M (2018) Transfusion Support of Minority Patients: Extended Antigen Donor Typing and Recruitment of Minority Blood Donors. Transfusion Medicine and Hemotherapy 45(4): 271-276. [crossref]
  14. Karafin MS, Westlake M, Hauser RG, Tormey CA, Norris PJ, Roubinian NH (2018) Risk factors for red blood cell alloimmunization in the recipient epidemiology and donor evaluation study (REDS-III) database. British Journal Haematology 181(5): 672-681. [crossref]
  15. Kulkarni S, Choudhary B, Gogri H, Patil S, Manglani M, Sharma R (2018) Molecular genotyping of clinically important blood group antigens in patients with thalassaemia. The Indian Journal of Medical Research 148(6): 713-720. [crossref]
  16. Legler TJ, Eber SW, Lakomek M, Lynen R, Maas JH, Pekrun A (1999) Application of RHD and RHCE genotyping for correct blood group determination in chronically transfused patients. Transfusion 39(8): 852-855. [crossref]
  17. Marilia GQ, Cristiane MC, Luciana CM, Ana MS, Jeane EL (2019) Methods for blood group antigen detection; cost-effectiveness analysis of phenotyping and genotyping. Haematology Transfusion and cell therapy 41(1): 44-49. [crossref]
  18. Nair R, Gogri H, Kulkarni S, Gupta D (2019) Detection of a rare subgroup of A phenotype while resolving ABO discrepancy. Asian Journal of Transfusion Science 13(1): 129-131. [crossref]
  19. Ngoma AM, Mutombo PB, Ikeda K, Nollet KE, Natukunda B, Ohto H (2016) Red blood cell alloimmunization in transfused patients in sub-Saharan Africa: a systematic review and meta-analysis. Transfusion Apheresis Science 54: 296-302. [crossref]
  20. Obisesan OA, Ogundeko TO, Iheanacho CU, Abdulrazak T, Idyu VC, Idyu II, Isa AH (2015) Evaluation of Alpha (α) and Beta (β) Haemolysin Antibodies Incidence among Blood Group ‘O’ Donors in ATBUTH Bauchi Nigeria. American Journal of Clinical Medicine Research 3(3): 42-44. [crossref]
  21. Osman NH, Sathar J, Leong CF, Zulkifli NF, Raja, Sabudin RA, Othman A (2017) Importance of extended blood group genotyping in multiply transfused patients. Transfusion Apheresis Science 56(3): 410-416. [crossref]
  22. Putzulu R, Piccirillo N, Orlando N, Massini G, Maresca M, Scavone F (2017) The role of molecular typing and perfect match transfusion in sickle cell disease and thalassaemia: an innovative transfusion strategy. Transfusion Apheresis Science 56(1): 234-237. [crossref]
  23. Ryder AB, Zimring JC, Hendrickson JE (2014) Factors influencing RBC alloimmunization: Lessons learned from murine models. Transfusion Medicine and Hemotherapy 41(6): 406-419. [crossref]
  24. Singhal D, Kutyna MM, Chhetri R, Wee LYA, Hague S, Nath L (2017) Red cell alloimmunization is associated with development of autoantibodies and increased red cell transfusion requirements in myelodysplastic syndrome. Haematologica 102(12): 2021-2029. [crossref]
  25. Storry JR, Castilho L, Chen Q, Daniels G, Denomme G, Flegel WA, Gassner C, de Haas M (2016) International society of blood transfusion working party on red cell immunogenetics and terminology: report of the Seoul and London meetings. International Society of Blood Transfusion Science 11(2): 118–122. [crossref]
  26. Kulkarni S, Choudhary B, Gogri H, Patil S, Manglani M, Sharma R, Madkaikar M (2018) Molecular genotyping of clinically important blood group antigens in patients with thalassaemia. Indian Journal of Medical Research 148: 713-720. [crossref]
  27. Wilkinson K, Harris S, Gaur P, Haile A, Armour R, Teramura G (2012) Molecular blood typing augments serologic testing and allows for enhanced matching of red blood cells for transfusion in patients with sickle cell disease. Transfusion 52(1): 381-388. [crossref]
  28. Yazdanbakhsh K, Ware RE, Noizat-Pirenne F (2014) Red blood cell alloimmunization in sickle cell disease: Pathophysiology, risk factors, in individuals with single and multiple clinically relevant red blood cell antibodies. Transfusion 54(8): 1971-1980. [crossref]
  29. Ye Z, Zhang D, Boral L, Liz C, May J (2016) Comparison of blood group molecular genotyping to traditional serological phenotyping in patients with chronic or recent blood transfusion. Journal of Biomedical Science 4(1): 1-4. [crossref]
  30. Zimring JC, Hendrickson JE (2008) The role of inflammation in alloimmunization to antigens on transfused red blood cells. Current Opinion in Hematology 15(6): 631-635. [crossref]

Climate Summit and the Egyptian Vision

DOI: 10.31038/GEMS.2023531

 

“We are meeting today and the environmental clock is ticking, marking the end of the planet if we do not do our best to preserve it”.

“Although not responsible for the climate crisis, the African continent faces the most negative consequences of the phenomenon and its economic, social, security and political implications. However, the continent is a model of serious climate action as far as its capabilities and available support allow”. With these words, the British Prime Minister, Boris Johnson, and His Excellency President Abdel Fattah El-Sisi began their speeches before the Climate Summit in Glasgow, Scotland, which opened on Sunday, October 31, 2021 under the auspices of the United Nations. Known as the twenty-sixth Conference of the Parties to the Framework Convention on Climate Change, the summit continued until November 12, amid high expectations for dealing with the problems of climate change besetting our planet. The event, which lasted two weeks, is abbreviated as “COP26”, short for “26th Conference of the Parties to the Framework Convention on Climate Change”. For the first time, delegations representing 200 countries participated in the summit to discuss ways to reduce emissions by 2030 and help improve life on the planet.

The summit was honored by the presence of the Arab Republic of Egypt, with an official delegation headed by His Excellency President Abdel Fattah El-Sisi, President of the Republic, who gave an important speech at the summit attended by world leaders, evidence of Egypt’s standing at the level of the continent and the whole world. Earth’s climate depends mainly on the sun: about 30 percent of incoming sunlight is scattered back into space, some is absorbed by the atmosphere, and the rest is absorbed by the Earth’s surface. The Earth’s surface in turn re-emits part of this energy in the form of radiant energy known as infrared radiation. This infrared radiation is retained by “greenhouse gases” such as water vapor, carbon dioxide, ozone and methane, which cause it to bounce back, raising the temperature of the lower atmosphere and the Earth’s surface.

Although greenhouse gases make up only about one percent of the atmosphere, they form a blanket around the Earth, like a glass roof, which traps heat and keeps the Earth’s temperature about 30 degrees higher than it would otherwise be. Human activities, however, are making this cover “thicker”: natural levels of these gases are augmented by carbon dioxide emissions from the combustion of coal, oil and natural gas, by the emission of more methane and nitrous oxide from agricultural activities and land-use changes, and by long-lived industrial gases that are not produced naturally. People usually use the terms global warming and climate change interchangeably, assuming they mean the same thing, but there is a difference between the two: global warming refers to rising average temperatures near the Earth’s surface, while climate change refers to changes in atmospheric conditions such as temperature, rainfall and other variables measured over decades or longer periods.

What is Climate Change?

Climate change refers to long-term shifts in temperature and weather patterns. These shifts may be natural – for example, through changes in the Sun’s intensity, slow changes in the Earth’s orbit around the Sun, or natural processes within the climate system (such as changes in the water cycle in the oceans) – but since the nineteenth century, human activities have become the main cause of climate change on the planet. This is mainly due to the burning of fossil fuels such as coal, oil and gas in various industries and human activities, including the use of fuel in cars, together with deforestation, urbanization, and desertification. Burning fossil fuels emits gases that act as a cover wrapped around the globe, especially carbon dioxide and methane, trapping the sun’s heat and raising the Earth’s temperatures. Deforestation releases carbon dioxide, and landfills are a major source of methane emissions. Energy production and consumption, industry, transport, buildings, agriculture and land use are the main sources of emissions. Recent studies have shown that concentrations of these gases are now at their highest levels in two million years, and emissions continue to rise from these sources. As a result, the globe is now 1.1 degrees Celsius warmer than it was in the late nineteenth century.

What are the Expected Effects of Climate Change?

The phenomenon of climate change differs from most other environmental problems in being global in nature: it transcends the borders of countries to pose a danger to the whole world. The steady increase in surface air temperatures across the globe has been confirmed, with the global average rising at a rate of 0.3 to 0.6 degrees over the past 100 years. Studies by the Intergovernmental Panel on Climate Change (IPCC) have indicated that the continuous rise in the global average temperature will lead to many serious problems, such as sea level rise threatening to submerge some areas of the world, impacts on water resources and crop production, and the spread of some diseases. Climate change will certainly affect our health, safety, and ability to farm, live and work, as its consequences include severe drought, water scarcity, severe fires, rising sea levels, saltwater intrusion into adjacent lands, floods, melting polar ice and degradation of biodiversity. In a 2018 United Nations report, scientists acknowledged that limiting global warming to no more than 1.5°C would help us avoid the worst climate impacts and maintain a livable climate. Conversely, the current trajectory of carbon dioxide emissions could increase global temperatures by up to 4.4°C by the end of the century.

Everyone Asks: Can We Stop the Phenomenon of Climate Change?

The honest scientific answer: it is only possible to slow the pace of global warming, not stop it completely, thus delaying and reducing the scale of the damage through the end of the current century, in the hope that we can coexist as a human race with the changes we have caused. Climate change poses a great challenge to humanity, so do we have solutions to this phenomenon? The countries of the world have become aware of the danger of silence on climate change and the need to confront it effectively. Its effects have existed for a while, but some countries were not dealing with the crisis effectively and adequately – especially the industrialized countries that cause climate change yet neglect the rights of developing countries, failing to take measures that protect the world from climate change and to provide adequate funding. Despite this, there are some measures that can be taken to reduce this phenomenon and its catastrophic effects, the most important of which are:

Emission Reduction

This can be done by shifting existing energy systems from fossil fuels to new and renewable energy sources, such as solar or wind, thereby reducing climate-changing emissions. Here, a growing coalition of countries is committed to bringing emissions to net zero by 2050. However, current emissions must be cut by about half by 2030 to keep global warming below 1.5°C, and fossil fuel production must be reduced by about 6 percent per year during the decade 2020-2030.
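As a quick arithmetic check on these figures (a minimal sketch; the function name is ours, for illustration), a 6 percent cut compounded annually over the decade 2020-2030 does leave roughly half of the starting production, so the two targets are mutually consistent:

def remaining_fraction(annual_cut: float, years: int) -> float:
    """Fraction of initial production left after compounding annual cuts."""
    return (1 - annual_cut) ** years

# Ten years of 6% annual cuts leave about 54% of the starting level,
# i.e. close to the "cut by about half by 2030" target quoted above.
print(f"{remaining_fraction(annual_cut=0.06, years=10):.1%}")  # -> 53.9%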

Adaptation to Climate Impacts

Humanity must also adapt to the potential future consequences of climate change. Priority must be given to the most vulnerable people with the least resources to face climate risks, especially in developing countries that are least involved in and most affected by the phenomenon.

Financing the Required Adjustments and Procedures

Climate adaptation and coping with its effects require significant financial investments, but inaction on climate comes at a higher price. An important step is for the industrialized countries, the main cause of the phenomenon, to fulfill their commitment to provide financial allocations to developing countries so that they can adjust and move towards greener economies.

Is July 2021 really the hottest month in recorded history?

Some say that July 2021 was the hottest month ever recorded on the surface of the Earth.

What is the Truth of That?

This was shown in a report by one of the US federal agencies concerned with monitoring the atmosphere and oceans, which announced in August 2021 that July 2021 was the hottest month since the world’s temperature recording system began 142 years ago. The recorded data reveal that the average temperature during that month over land and the oceans together was about 0.93 degrees Celsius above the twentieth-century average of 15.8 degrees Celsius, and scientists believe this is due to the long-term effects of climate change.

Has the Number of Days of Extreme Heat Really Doubled Globally Since the Eighties of the Last Century?

Research conducted by the BBC found that the number of very hot days, in which temperatures exceed 50°C, witnessed in different parts of the world annually has doubled since the eighties of the last century. The total number of days when temperatures exceeded 50°C has increased in each of the past four decades: between 1980 and 2009, temperatures exceeded 50°C on an average of about 14 days a year, rising to 26 days a year between 2010 and 2019. This has happened in increasing areas of our globe, presenting humanity with new challenges, especially for health and livelihoods in general.

What are the Groups Most Affected by the Phenomenon of Climate Change?

Although all groups are affected by the negative results of climate change, children bear the brunt of its effects despite being the group least responsible for the phenomenon, as climate change poses a direct threat to a child’s ability to survive, develop and prosper.

In terms of:

  • The severity of weather phenomena such as hurricanes and heat waves threatens children’s lives and destroys infrastructure vital to their well-being.
  • Floods destroy and damage water and sanitation facilities, leading to the spread of various diseases, which represent an imminent danger to humans in general and children in particular.
  • Drought and global changes in rainfall disrupt crop productivity and increase food prices, which means food insecurity and food deprivation for the poor, including, of course, children.

Children are the group most vulnerable to diseases that will become more prevalent as a result of climate change and drought, such as malaria, fever and pneumonia – the last of which alone kills 2,400 children a day globally and is closely linked to undernutrition, lack of safe drinking water and air pollution, problems exacerbated by climate change.

The frightening and terrifying effects of climate change: a report broadcast by the agency AFP on the impact of climate change on humanity makes clear that:

  • Some 166 million people in Africa and Central America needed assistance between 2015 and 2019 due to food emergencies linked to climate change.
  • Between 15 and 75 million people are at risk of famine by 2050.
  • Some 1.4 million children in Africa will be severely stunted by 2050 due to climate change.
  • Agricultural yields have declined by 4-10% globally over the past 30 years.
  • Catches in the tropics have declined by 40-70%, with rising emissions.
  • As for the impact of climate change on internal migration, between 2020 and 2050 the rate will increase to six times the current rate.
  • Global warming will also have terrifying effects on “water stress”, with 122 million people in Central America, 28 million in Brazil, and 31 million in the rest of South America affected by a shortage of water allocations.

Climate Change in Egypt and Its Negative Effects

Egypt is one of the countries most affected by the negative effects of climate change. These effects are summarized as follows:

  1. Impact on food security
  2. Impact on water resources
  3. Impact on the ecosystem
  4. Impact on public health
  5. Impact on urban areas
  6. Impact on energy
  7. Impact on the economy

What about the Egyptian Strategy to Confront Climate Change?

President Abdel Fattah El-Sisi participated in the Climate Change Summit held under the auspices of the United Nations – the Twenty-sixth Conference of the Parties to the Framework Convention on Climate Change – in the Scottish city of Glasgow, which began on Sunday, October 31 and continued until November 12. During the closing session of the Glasgow conference “COP26”, it was announced that Egypt had been chosen to host the 27th session, COP27, in Sharm El Sheikh in November 2022, making Egypt the first African country to host the next climate summit. The entire African continent will thus be represented at the conference, and the whole world will see the efforts of Egypt and the African continent in confronting climate change.

The Egyptian strategy to confront climate change is represented in many points, the most important of which are:

  • Establishing the National Council for Climate Change to formulate the state’s general policies for dealing with climate change, to develop and update sectoral strategies and plans for climate change in light of international agreements and national interests, and to link these plans to the 2030 sustainable development strategy.
  • Egypt protects its Mediterranean and Red Sea coasts from the impact of sea level rise through clear plans carried out in cooperation between ministries and the concerned scientific authorities.
  • Research bodies in Egypt are working to develop drought-resistant agricultural crops and crops that reduce emissions.
  • Egypt is working to protect the agricultural area adjacent to the beaches from deterioration through mega projects.
  • Provide climate finance for the implementation of the adaptation component of the NDCs.
  • Egypt is implementing a huge desalination program (the Ain Sokhna desalination plant, at a cost of 2.3 billion pounds) and tertiary treatment of wastewater (the Bahr Al-Baqar water treatment plant, at a cost of 20 billion pounds); Egypt is also updating its strategy for low-emission development and implementing a huge renewable energy program (the wind power generation project on the west coast of the Gulf of Suez, at a cost of 4.3 billion pounds).
  • Implementation of huge projects in the villages of the Egyptian countryside, such as the Decent Life project at a cost of 700 billion pounds.
  • Implementation of projects to preserve available water resources, such as the canal-lining project at a cost of 6 billion pounds. Egypt was also the first country in the region to issue $750 million worth of green bonds last year.
  • Expansion of projects to establish greenhouses with the aim of adapting to climate change (the target over the next five years is about one million greenhouses).
  • Expanding sustainable transport projects, developing the transport and communications network (at a total cost of 377 billion pounds until 2024), converting cars to run on electricity or natural gas, and operating trains on electricity to eliminate pollution.
  • Expanding health initiatives to protect the health of citizens from various diseases.
  • Production of new varieties and hybrids of rice, such as short-duration varieties, which reduce methane emissions.

Facile Synthesis Process, Characterization Study and Determination of Thermoluminescence Kinetic Parameters of Combustion-Synthesized Nano Phosphor for Dosimetry and Long Persistent Applications

DOI: 10.31038/NAMS.2023634

Abstract

In this work, we describe the facile urea-assisted combustion synthesis technique used to synthesize the Dy3+ activated Ca2MgSi2O7 phosphor at a muffle furnace temperature maintained at 600°C. In addition, characterization studies of the synthesized powder samples are reported on the basis of structural, morphological, elemental, and thermal analysis. The synthesized Ca2MgSi2O7:Dy3+ nanophosphor was characterized using XRD, FESEM, EDX, and TL analysis. The obtained XRD pattern indicates a tetragonal crystal structure compatible with JCPDS card number #79-2425 and confirms the formation of the desired Ca2MgSi2O7 host without any trace of impurity, establishing phase purity. The average crystallite size was estimated using the Debye-Scherrer formula and found to be ~27 nm. A FESEM study was performed to examine the exterior morphology, and EDX spectra were employed to establish the sintered phosphor’s elemental composition. The acquired TL glow curve was used to determine the thermal characteristics of the as-synthesized phosphor. Dy3+ doped samples exposed to UV for 15 min show optimum TL intensity at 112.21°C. With alterations to the UV exposure time, the temperature corresponding to the TL peak remains constant. For further study of characteristics including activation energy, order of kinetics, and frequency factor, samples with 4 mol% Dy3+ exposed for varied UV exposure times were chosen, and all of these parameters were assessed using the peak shape method. These results suggest that the combustion-synthesized Dy3+-doped Ca2MgSi2O7 nanophosphor is an attractive alternative for thermoluminescence dosimetry (TLD) and long-persistence applications.

Keywords

Combustion, XRD, FESEM, EDX, Tetragonal, Thermoluminescence (TL), Ca2MgSi2O7:Dy3+

Introduction

Researchers and material scientists have lately been paying special attention to compounds of nanosized luminescent materials of the melilite group doped with rare earth (RE) ions. Alkaline earth silicate phosphors have excellent qualities like high quantum efficiency, abundance, resistance to weathering, affordability, and environmental friendliness. Silicates are also considered among the best host materials for luminescence centers due to their chemical and thermal stability and long persistence times [1]. The reduction of particle size can result in remarkable modifications of some bulk properties; nanosized phosphors usually exhibit novel capabilities, such as higher luminescent efficiency [2], and remarkable application potential. The melilites are a large group of compounds characterized by the general formula M2T1(T2)2X7, where M is a large monovalent or divalent cation, T1 is a small divalent or trivalent cation in tetrahedral coordination, T2 is also a small cation in the other tetrahedral site, and X is an anion. Afterglow in melilite has already been well documented [3]. In the field of luminescence, the potential utility of lanthanide ions as activators has recently become widely recognized [4]. The afterglow properties of phosphors can be tailored to last from a few seconds to several hours using various activators [5]. Dy3+ is a very effective luminescent centre when utilized as an activator, according to several experimental results of luminescence in some inorganic systems. Furthermore, Dy3+ doped Ca2MgSi2O7 phosphor has been extensively explored for its thermoluminescence properties as a long-lasting phosphor. The persistent emission appears stronger when Dy3+ is added to the host because it is very likely that these ions are involved in electron trapping. Given that two Dy3+ ions can substitute for three divalent ions of the host, it is possible that the Dy3+ ions play a role in the creation of defects that act as oxygen vacancies or electron traps [6,7]. Properties like linearity, dose range, energy response, repeatability, stability of stored information, and isotropy are taken into consideration when evaluating the performance of TLDs [8]. The prepared TLDs had good chemical and moisture stability as a result of the addition of SiO2.xH2O and the prudent choice of the chemical form of activators. TL is one of the most effective techniques for analyzing the trap centers and trap levels in an insulator or semiconductor excited by any radiation source [9]. Traps produced by lattice imperfections play a significant role in the TL characteristics of phosphors [10].

Accordingly, traps play an essential role in TL research, and understanding the composition of the charge carriers’ trapping states is also important. The trapped electrons release energy when heated because they move back to their normal, lower-energy positions. By comprehensively analyzing the TL glow curve, one can learn about the trap states and recombination centers [11]. Nowadays, TL materials are widely studied as an effective tool for various applications in the fields of material characterization, archaeological and geological dating, radiation dosimetry, biological applications, age determination, geology and solid-state defect structure analysis, and personnel and environmental monitoring. Therefore, we suggest that the sintered Dy3+ doped Ca2MgSi2O7 phosphor is a preferable long-persistent phosphor and novel TL material, because rare earth dysprosium [Dy3+] ions mainly act as trap centers. In our present investigation, the TL intensity is highly dependent on the concentration of the dopant (Dy3+) ions: the TL intensity was maximal at 4 mol% Dy3+ and optimal for 15 min of UV exposure, decreasing with further UV exposure.

In this article, we describe the synthesis, characterization, and thermoluminescence characteristics of the calcium magnesium silicate phosphor (Ca2MgSi2O7:Dy3+) prepared by combustion synthesis utilizing urea (NH2CONH2) as fuel and boric acid (H3BO3) as flux. The prepared phosphors were characterized and investigated using X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX) analysis and thermoluminescence (TL) analysis in order to determine the structural, morphological, elemental-composition, and thermal properties of the synthesized powder samples. The aim of this paper is to present the kinetic parameters of the main glow peak (385.21 K) of Ca2MgSi2O7:Dy3+, which are important for the general description of the physical characteristics of TL materials, using the peak shape (PS) method; namely, the order of kinetics (b), the symmetry factor (μg), the activation energy E (in eV), and the frequency factor S (in s−1).

Experimental Analysis

Combustion Synthesis

In order to meet the demands of Materials Science and Engineering to create inorganic materials with the appropriate composition, structure, and properties, combustion synthesis (CS) has been developed as a standard approach. To sustain a self-propagating high reaction temperature, CS employs extremely exothermic (∆H ≤ 170 kJ/mol) redox (reduction-oxidation) chemicals and mixtures, as well as explosive reactions. Metal nitrates serve as the oxidant while urea serves as the fuel in the combustion process. Additionally, this method has the ability to enhance materials, save energy, and protect the environment [12]. The key benefits of the combustion process include rapid heating rates, shorter processing times, energy efficiency, and the capacity to yield superfine, homogeneous, nanocrystalline powders from the combustion products. A vital feature of this simple method, which removes the requirement for an expensive high-temperature furnace, is the use of the enthalpy of combustion to produce and crystallize powders at low calcination temperatures [13]. This technology can effectively replace time-consuming traditional solid-state reaction and sol-gel processing techniques [14]. Numerous refractory materials, such as borides, nitrides, oxides, silicides, intermetallics, and ceramics, have been prepared using this technique.

Powder Sample Preparation

The combustion technique (Figure 1) was successfully employed for the preparation of the M2MgSi2O7:Dy3+ (M=Ca) nanophosphor. The starting materials used for the preparation were of Analar grade with high purity (i.e., 99.99%) and included calcium nitrate [Ca (NO3)2.6H2O], magnesium nitrate [Mg (NO3)2.6H2O], dysprosium nitrate [Dy (NO3)3.5H2O], and fumed silica (SiO2.xH2O). All metal nitrates served as oxidizers, boric acid [H3BO3] as flux, and urea [NH2-CO-NH2] as fuel. Stoichiometric quantities of the mixture were stirred thoroughly using a magnetic stirrer to obtain a clear solution. The resulting solution was placed in a preheated muffle furnace maintained at 600°C for 5 min. Initially, the solution was thermally dehydrated and later ignited with the liberation of a large amount of gases (N2, O2, etc.). Once ignited, the combustion propagates on its own without the need for any external heat. The silicate was finally obtained in a foamy form.

FIG 1

Figure 1: Synthesis of Ca2MgSi2O7: Dy3+ nanopowder using the combustion synthesis technique

After completion of the process, the product was ground well using an agate mortar and pestle to convert it into fine powder form. Further, the sample was post-annealed at 900°C for 2 h under an air atmosphere and then cooled to room temperature to obtain a white powder. The resulting sample was stored in an airtight bottle for additional characterization investigations such as XRD, FESEM, EDX, and TL analysis. Assuming complete combustion of the redox mixture, the synthesis of Ca2MgSi2O7 can be written as:

Ca (NO3)2.6H2O + Mg (NO3)2.6H2O + SiO2.xH2O + NH2CONH2 + H3BO3 → Ca2MgSi2O7 + H2O (↑) + CO2 (↑) + N2 (↑)                   (1)

Ca (NO3)2.6H2O + Mg (NO3)2.6H2O + SiO2.xH2O + Dy (NO3)3.5H2O + NH2CONH2 + H3BO3 → Ca2 MgSi2O7: Dy3+ + H2O (↑) + CO2 (↑) + N2 (↑)                                           (2)

Powder Sample Characterization

The phase structure and composition of the synthesized samples were characterized by X-ray diffraction using a Bruker D8 Advance X-ray diffractometer with Cu-Kα radiation (wavelength 1.5405 Å) at 40 kV and 40 mA. The XRD data were measured over a scattering angle range of 10° to 80°. Surface morphology and EDX analysis were performed with a FESEM (ZEISS EVO series EVO 18 microscope) fitted with an EDX spectrometer. Thermoluminescence (TL) glow curves of the UV-irradiated (254 nm) samples were plotted as emitted TL intensity versus temperature using a routine Nucleonix TLD reader (1009I) with a constant heating rate of 5°C s−1.

Fuel and Oxidizers

Fuel and oxidizers must be utilized during the combustion synthesis process. All metal nitrates are employed as oxidizers, together with boric acid (H3BO3) as a flux and urea (NH2CONH2) as a fuel for combustion. The stoichiometric proportions of all metal nitrates and fuel are calculated using propellant chemistry. For calculation of the oxidizer-to-fuel ratio, the elements were assigned formal valences as follows: Ca=+2, Mg=+2, Dy=+3, Si=+4, B=+3, C=+4, H=+1, O=−2, and N=0 [15].

Oxidizer and Fuel Ratio

Determining the oxidant-to-fuel ratio is the most important step, since it is the deciding factor that affects the characteristics of the nanomaterials to be created. The oxidizer-to-fuel ratio is termed “Ψ” and is defined by the following relation [16]:

Ψ = Σ (coefficients of oxidizing elements × valency) / [(−1) × Σ (coefficients of reducing elements × valency)]                   (3)

This ratio is very crucial in determining parameters such as the reaction temperature and numerous characteristics of nanosized materials, including electrochemical behavior, crystallinity, phase purity and morphology. It has also been observed that the particle size of the nanomaterial is influenced by the oxidant-fuel ratio. The oxidizer/fuel ratio at which the heat generated by combustion is maximum is 1 [17].
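To make the propellant-chemistry calculation concrete, the sketch below applies Eq. (3) to the redox mixture of Eqs. (1)-(2), using the formal valences listed above and the 2Ca : 1Mg : 2Si stoichiometry of the Ca2MgSi2O7 host. The per-molecule valence sums and the derived urea requirement are our worked assumptions, shown for illustration only:

# Formal valences per molecule, from the assignments above
# (water of hydration is valence-neutral and omitted):
VALENCE = {
    "Ca(NO3)2": 2 + 2 * 0 + 6 * (-2),     # = -10 (oxidizer)
    "Mg(NO3)2": 2 + 2 * 0 + 6 * (-2),     # = -10 (oxidizer)
    "SiO2":     4 + 2 * (-2),             # =   0 (neutral)
    "H3BO3":    3 * (+1) + 3 + 3 * (-2),  # =   0 (neutral flux)
    "urea":     4 + 4 * (+1) + 2 * 0 - 2, # =  +6 (fuel, reducer)
}

mixture = {"Ca(NO3)2": 2, "Mg(NO3)2": 1, "SiO2": 2, "H3BO3": 1}

# Total oxidizing valency of the mixture (sign-flipped for the ratio):
oxidizing = -sum(n * VALENCE[s] for s, n in mixture.items() if VALENCE[s] < 0)
urea_moles = oxidizing / VALENCE["urea"]   # moles of fuel giving Psi = 1
print(f"Total oxidizing valency: {oxidizing}")                       # 30
print(f"Urea for a stoichiometric mixture (Psi = 1): {urea_moles}")  # 5.0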

Effect of Fluxes

Some unique fluxes, including Li2CO3 (lithium carbonate), NH4F (ammonium fluoride), NH4Cl (ammonium chloride), BaF2 (barium fluoride), YF3 (yttrium fluoride), AlF3 (aluminum fluoride), H3BO3 (boric acid) [18], KCl (potassium chloride), LiF (lithium fluoride), and CaCl2 (calcium chloride) [19] are additionally included with the initial precursors to enhance the formation of crystal structures and the properties of the materials, speed up the reaction, and decrease the reaction temperature. The morphological properties of silicate materials are influenced by flux, which additionally enhances the luminous efficiency of the powders. The potential benefits and rate of the reaction are influenced by the reactants’ structural characteristics and the reaction conditions. The development of the crystal structure of any nano- and micro-phosphor depends significantly on fluxes, and formation proceeds more quickly because of them. The final outcome is the synthesis of phosphors with the intended chemical structures [20].

Results and Discussion

Powder X-Ray Diffraction Analysis (XRD)

An efficient, necessary, analytical, and nondestructive method for material characterization is powder X-ray diffraction analysis [21]. Diffraction is an X-ray-based technique that provides information on a unit cell’s chemical composition, phase structure, crystallinity, microstructure, and stress state, as well as data on percentage phase composition, inter-planar spacing, and lattice parameters. Figure 2 shows the crystal phase of the phosphor. Comparison of the recorded XRD patterns with the standard JCPDS card number #79-2425 showed good agreement [22]. The crystalline structure of the synthesized phosphor is of the akermanite type, a member of the tetragonal crystal system with cell parameters a=b=7.8071 Å, c=4.9821 Å and α=β=γ=90°; the space group is P-421m (No. 113, Schoenflies D2d3) and the point group is -42m. The Ca2MgSi2O7 crystal structure has a cell volume of 303.663 Å3 and a density of 2.944 g/cm3 [7,23]. The XRD pattern (Figure 2) shows that the sample is single phased, consistent with JCPDS file no. 79-2425.

FIG 2

Figure 2: XRD patterns of host & Dy3+ activated Ca2MgSi2O7 phosphor

Estimation of Crystallite Size (D)

The XRD measurement displays a slight shift towards the larger angle in contrast to the standard reference data. From the XRD diffraction peaks, the crystallite size was determined with the help of the Debye-Scherrer empirical formula [24,25].

The Debye-Scherrer expression is as follows:

D = kλ / (β cos θ)                   (4)

where k is the Scherrer constant, D is the crystallite size for the (hkl) plane, 𝜆 is the wavelength of the incident X-ray radiation [Cu-Kα (1.5405 Å)], β is the full width at half maximum (FWHM) in radians, and 𝜃 is the corresponding Bragg diffraction angle. Based on the Debye-Scherrer formula, the average crystallite size obtained from the XRD measurement is ~27 nm, confirming that the material is in nano form.
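As an illustration of Eq. (4), the sketch below converts an X-ray line width into a crystallite size. The FWHM and peak position used here are hypothetical values, chosen only to show how a size of roughly 27 nm arises from sub-degree broadening at the Cu-Kα wavelength:

import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     k: float = 0.9, wavelength_nm: float = 0.15405) -> float:
    """D = k * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative peak: FWHM of 0.31 degrees at 2-theta = 31 degrees
print(f"D = {scherrer_size_nm(0.31, 31.0):.1f} nm")  # ~26.6 nm, of the order of ~27 nm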

Analysis of Surface Morphology (FESEM) Micrographs

The combustion-synthesized phosphor’s morphology at a 2-micron scale can be seen in the FESEM micrograph (Figure 3), which suggests that the synthesized crystal is in nano form. The particles’ highly agglomerated crystallite shape gives them a frothy appearance. The precursor particles had a spherical shape and were microscopic in size. The image also shows aggregated grains, which might be a result of the powder’s extended stay within the combustion furnace; throughout the combustion reaction, the particles cluster and grow larger. In the present case, we determined the mean particle size with ImageJ software to be about 20.492 nm.

FIG 3

Figure 3: FESEM morphological Image of Dy3+ activated Ca2MgSi2O7 Phosphor with 20.00 K X magnification

When heated rapidly to 600°C, a sample containing stoichiometric proportions of the redox mixture boils, undergoes dehydration, and then decomposes, generating combustible gases (oxides of nitrogen, water vapor, and nascent oxygen). These volatile combustible gases ignite and burn with a flame, creating the ideal environment for the doped phosphor lattice to develop. In addition, this procedure makes it possible to dope the rare-earth impurity ions uniformly (homogeneously) in one step. We also anticipate that the particle surface morphology may influence the thermal characteristics of these phosphor materials.

Figure 4 shows the electron image of the synthesized Ca2MgSi2O7: Dy3+ phosphor. It is clearly evident from the picture that the dysprosium ions are trapped deep in the host crystal lattice sites; that is, they indicate deeper traps. Dy3+ ions serve as hole traps (Dy3+ + hole → Dy4+). Between the lower-energy (ground) state and the higher-energy (excited) state, Dy3+ ions serve as deep hole trap levels. Dy3+ ions may trap holes or electrons, or simply create/modify defects as a result of charge compensation. The deeper traps are largely responsible for the persistent luminescence.

FIG 4

Figure 4: Electron image of synthesized Ca2MgSi2O7: Dy3+ phosphor

Analysis of Energy Dispersive X-ray (EDX) Spectroscopy

Using EDX spectra, the chemical constituents of the powder sample have been determined. Identification and measurement of the elemental composition of sample areas as small as a few nanometers are conventional procedures [26]. In the EDX spectra (Figure 5), intense peaks of Ca, Mg, Si, O and Dy are present, which preliminarily indicates the formation of the Ca2MgSi2O7: Dy3+ phosphor.

FIG 5

Figure 5: EDX Spectrum of Dy3+ activated Ca2MgSi2O7 phosphor

The existence of dysprosium is likewise clear in the corresponding EDX spectra. No emission other than calcium (Ca), magnesium (Mg), silicon (Si), oxygen (O), and dysprosium (Dy) appeared in the EDX spectra of the Ca2MgSi2O7: Dy3+ phosphor. The Weight% and Atomic% of the elements present were also determined and are given in Table 1.

Table 1: Chemical composition of synthesized Ca2MgSi2O7: Dy3+ phosphor

Sr. | Element | Weight% | Atomic%
1 | O K | 54.98 | 71.64
2 | Mg K | 6.57 | 5.63
3 | Si K | 13.98 | 10.38
4 | Ca K | 23.50 | 12.22
5 | Dy L | 0.97 | 0.12
6 | Totals | 100.00 | 100.00

Thermoluminescence (TL) Analysis

TL Spectra

Thermoluminescence (TL) occurs when some of the energy absorbed by an irradiated material is used to move electrons into traps. This stored energy is released when the material’s temperature rises, and the resulting luminescence is the TL emission [27]. The TL technique is extensively employed in thermoluminescence dosimetry (TLD) of ionizing radiation and in dating applications, and it also provides insight into trap levels [28]. The basic goal of TL experiments is to acquire data from an experimental glow curve, or from a series of experimental glow curves, and to analyze those data in order to determine values for all of the parameters of the relevant luminescence mechanisms. The single, strong TL glow curve of UV-irradiated Dy3+ (4 mol%) activated Ca2MgSi2O7, recorded at a constant heating rate (5°C/sec) for different UV irradiation times (5, 10, 15, 20, and 25 min), is shown in Figure 6. Since the population of trapped electrons in the metastable state reaches a maximum value at a specific time, the TL intensity increases with irradiation time up to 15 min and then decreases. From the TL glow curve of Ca2MgSi2O7:Dy3+, a single broad fitted peak was observed, centered at 112.21°C. The trap corresponding to the 112.21°C peak should be relatively deep (high temperature), and deeper traps are highly helpful for increasing the persistence duration and the afterglow process [7]. The TL data therefore reveal the presence of a single trapping level in the Ca2MgSi2O7:Dy3+ phosphor.

FIG 6

Figure 6: TL glow curve of synthesized Ca2MgSi2O7:Dy3+ phosphor
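To connect the glow curve to the kinetic analysis that follows, here is a minimal numerical sketch of a general-order (May-Partridge) glow peak. The activation energy (0.77 eV), kinetic order (b = 2) and heating rate (5 K/s) follow this work, while the frequency factor used here is a hypothetical illustrative value, chosen so that the simulated peak lands near the observed 112.21°C; it is not the fitted value reported in Table 2 below:

import numpy as np

K_B = 8.617e-5           # Boltzmann constant, eV/K
E, b = 0.77, 2.0         # trap depth (eV) and kinetic order from this work
beta, s = 5.0, 3.3e9     # heating rate (K/s); illustrative frequency factor (1/s)

T = np.linspace(300.0, 480.0, 2000)     # linear temperature ramp (K)
boltz = np.exp(-E / (K_B * T))          # thermal escape factor exp(-E/kT)
# Cumulative trapezoid integral of exp(-E/kT') dT' along the ramp:
J = np.concatenate(([0.0], np.cumsum((boltz[1:] + boltz[:-1]) / 2 * np.diff(T))))
# General-order TL intensity with the initial trapped population normalized to 1:
I = s * boltz * (1 + (b - 1) * (s / beta) * J) ** (-b / (b - 1))

print(f"Simulated peak at {T[np.argmax(I)] - 273:.0f} °C")  # ~113 °C, near 112.21 °C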

Concentration Effect of Dy3+ ions

It is evident that the TL intensity rises with increasing Dy3+ concentration, reaches a maximum value at 4 mol%, and then falls as the Dy3+ ion concentration increases further. The distance between the activator ions decreases as the activator concentration rises, so the ions interact more frequently and energy is transferred between them; consequently, the energy stored per ion decreases as the activator concentration rises [7,26]. There is therefore an optimum concentration of activator ions, and the favorable concentration of Dy3+ in the Ca2MgSi2O7 nanophosphor is about 4 mol% (relative to Ca2+). The ionic radii of Ca2+, Mg2+, Si4+ and Dy3+ are 1.12 Å, 0.58 Å, 0.26 Å, and 0.97 Å, respectively [29]. When Dy3+ ions are doped into the Ca2MgSi2O7 host lattice, they prefer to occupy the Ca2+ crystallographic site rather than the Mg2+ or Si4+ sites, because the radius of Dy3+ is much closer to that of the Ca2+ lattice site [30]. A positive centre (hole) is formed when a trivalent metallic ion (such as Dy3+) substitutes for a divalent metallic ion in a host lattice. Thus, Dy3+ ions hardly incorporate into the tetrahedral [MgO4] and [SiO4] anion complexes and incorporate only into the [CaO8] complexes of the host lattice [31]. The traps are released when the phosphor is heated, and the thermoluminescence intensity is raised by radiative recombination at the Dy3+ ions.

Peak Shape Method

The kinetic parameters of the most significant glow curve peak are found using the peak shape approach (Figure 7), also referred to as Chen’s empirical method [32]. Kinetic/trapping parameters such as the trap depth or activation energy (E), the order of kinetics (b), and the frequency factor (s) have a significant impact on the TL characteristics. Analytical techniques based on the area under the curve, the heating rate, and the shape of the glow curve have all been developed to obtain TL parameters. The next sections provide a brief overview of the peak shape approach and the findings it produced for the current investigation.

FIG 7

Figure 7: An illustration of a typical thermoluminescence glow curve analyzed using the peak shape approach

Calculation of Kinetic/Trapping Parameters

[a] Order of kinetics (b)

The order of kinetics (b) describes the recombination of de-trapped charge carriers with their counterparts, and in the peak shape approach it is determined from the shape of the TL peak. The order of kinetics of the Ca2MgSi2O7:Dy3+ TL glow peak was calculated using Chen’s empirical formula.

The geometrical factor μg was calculated as:

μg = δ / ω = (T2 − Tm) / (T2 − T1)                   (5)

Here, Tm is the temperature corresponding to the maximum peak intensity, whereas T1 and T2 are the temperatures on the ascending and descending parts of the peak corresponding to half of the maximum intensity (i.e., the full width at half maximum, FWHM). The geometric factor distinguishes between first- and second-order kinetics: μg=0.39-0.42 indicates first-order kinetics, μg=0.49-0.52 second-order kinetics, and μg=0.43-0.48 mixed-order kinetics [33].

The calculated symmetry factor (μg) for the single peak was 0.47-0.51, which is close to the value for second-order kinetics. This demonstrates that the peaks of the single band are of second order. The outcome shows that, compared with the first-order case, the probability of retrapping carriers is higher after the carriers are freed from the traps corresponding to the single band [34].
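A small helper applying Eq. (5) and the classification ranges above to the 15 min row of Table 2 below (a sketch; the range handling is ours):

def symmetry_factor(t1: float, tm: float, t2: float) -> float:
    """mu_g = delta / omega = (T2 - Tm) / (T2 - T1)."""
    return (t2 - tm) / (t2 - t1)

mu_g = symmetry_factor(t1=87.33, tm=112.21, t2=136.10)
if 0.39 <= mu_g <= 0.42:
    order = "first order"
elif 0.43 <= mu_g <= 0.48:
    order = "mixed order"
else:
    order = "second order"
print(f"mu_g = {mu_g:.2f} -> {order}")  # mu_g = 0.49 -> second order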

[b] Activation energy (E)

We used the following equation to estimate the depth of the traps (E). It is calculated by the general formula, which is valid for any order of kinetics:

Eα = cα (kTm² / α) − bα (2kTm),   α = τ, δ, ω                   (6)

For general-order kinetics, cα and bα (α = τ, δ, ω, where τ = Tm − T1, δ = T2 − Tm, and ω = T2 − T1) are calculated by the following expressions:

cτ = 1.51 + 3.0(μg − 0.42),   bτ = 1.58 + 4.2(μg − 0.42)
cδ = 0.976 + 7.3(μg − 0.42),   bδ = 0
cω = 2.52 + 10.2(μg − 0.42),   bω = 1.0

[c] Frequency factor (S)

After exposure to ionizing radiation, the frequency factor expresses the probability of escape of electrons from the traps. Once the order of kinetics (b) and the activation energy (E) had been determined, the frequency factor (s) was obtained from the following equation by substituting the values of b and E:

βE / (kTm²) = s [1 + (b − 1)(2kTm / E)] exp(−E / kTm)                   (7)

Here, β is the heating rate, k is the Boltzmann constant, and b is the order of kinetics, which is 2 in this case. The TL glow curves were recorded with a TLD reader (Nucleonix Model 1009I) at a linear heating rate of 5°C s−1 [35].

Table 2 shows the effect of various UV exposure times (5, 10, 15, 20, and 25 minutes) on the TL kinetic parameters of the Ca2MgSi2O7:Dy3+ phosphor, including the trap energy, symmetry factor, and frequency factor.

Table 2: Values of different kinetic parameters of the main TL glow peak of the Ca2MgSi2O7:Dy3+ phosphor, calculated using the peak shape method.

UV Radiation Time | T1 (°C) | Tm (°C) | T2 (°C) | τ (°C) | δ (°C) | ω (°C) | μg | E (eV) | Frequency Factor s (s−1)
5 min | 86.43 | 112.21 | 139.36 | 25.78 | 27.15 | 52.93 | 0.51 | 0.72 | 1.2 × 10^7
10 min | 87.33 | 112.21 | 136.10 | 24.88 | 23.89 | 48.77 | 0.49 | 0.77 | 2.87 × 10^7
15 min | 87.33 | 112.21 | 136.10 | 24.88 | 23.89 | 48.77 | 0.49 | 0.77 | 2.87 × 10^7
20 min | 85.69 | 112.21 | 139.36 | 26.52 | 27.15 | 53.67 | 0.51 | 0.71 | 1.2 × 10^7
25 min | 85.69 | 112.21 | 136.10 | 26.52 | 23.89 | 50.41 | 0.47 | 0.66 | 1.1 × 10^7

This method was applied to the cleaned main peak, and the trap energy (E), frequency factor (s) and symmetry factor (μg) were calculated using Eqs. (5)-(7). The symmetry factor (μg), trap energy and frequency factor were found in the ranges 0.47-0.51, 0.66-0.77 eV and 1.1 × 10^7 to 2.87 × 10^7 s−1, respectively. In our case, the maximum thermoluminescence (TL) was obtained for 15 min of UV exposure, and the symmetry factor (μg) lies between 0.47 and 0.51, which indicates second-order kinetics, associated with deeper trap depths. According to the reports of Sakai et al. and Mashangva et al., a trap depth between 0.65 and 0.75 eV is very appropriate for long afterglow properties [36,37].
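As a worked example, the sketch below applies the symmetry factor of Eq. (5) and Chen's coefficients of Eq. (6) to the 15 min row of Table 2 (T1 = 87.33°C, Tm = 112.21°C, T2 = 136.10°C); the three estimates cluster around the 0.77 eV reported above:

K_B = 8.617e-5  # Boltzmann constant, eV/K

T1, Tm, T2 = 87.33 + 273, 112.21 + 273, 136.10 + 273   # peak temperatures in kelvin
tau, delta, omega = Tm - T1, T2 - Tm, T2 - T1
mu_g = delta / omega                                    # = 0.49, second order

# Chen's empirical coefficients (alpha, c_alpha, b_alpha) for general order:
params = {
    "tau":   (tau,   1.51 + 3.0 * (mu_g - 0.42),  1.58 + 4.2 * (mu_g - 0.42)),
    "delta": (delta, 0.976 + 7.3 * (mu_g - 0.42), 0.0),
    "omega": (omega, 2.52 + 10.2 * (mu_g - 0.42), 1.0),
}
for name, (alpha, c, b_coef) in params.items():
    E = c * (K_B * Tm ** 2 / alpha) - b_coef * (2 * K_B * Tm)
    print(f"E_{name} = {E:.2f} eV")   # 0.76, 0.80 and 0.78 eV, i.e. ~0.77 eV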

It is evident that the dysprosium ion, because of its inherent nature, is largely responsible for the hole trap level; that is, the dysprosium ion’s inherent properties enable it to produce a hole trap level as soon as it enters the host crystal lattice. In the host lattice, these hole trap levels lie very deep, and the material owes its long-lasting properties to these deeper traps.

Conclusion

Tetragonal samples of Ca2MgSi2O7 with a 4 mol% dopant concentration of Dy3+ ions have been successfully synthesized at 600°C by the combustion synthesis technique, which appears to be the most feasible method for their production. Several characterization approaches were applied to better understand the spectroscopic and luminescent characteristics. The tetragonal crystal structure and nanoscale particle size were determined using XRD and FESEM analyses, with crystallite/particle sizes of 27 nm and 20.492 nm, respectively; the crystallites are in the nano range with considerably good uniformity. The FESEM data show that the particle surface morphology has a flake-like, uniform, homogeneous, superfine crystal structure with firm aggregation. The EDX spectra confirmed the presence of Ca, Mg, Si, O and Dy elements in the Ca2MgSi2O7:Dy3+ phosphor. The TL glow curves are centered at 112.21°C, with 15 min of UV exposure giving the optimum TL response. The TL study shows that the optimum Dy3+ concentration is 4 mol%. The second-order kinetics support a high probability of recapturing released charge carriers before recombination, and the long afterglow process is considerably enhanced by the Dy3+ ions. The activation energy of Ca2MgSi2O7:Dy3+ was found to be 0.66-0.77 eV. On the basis of the activation energy, we suggest that the combustion-synthesized Ca2MgSi2O7:Dy3+ phosphor is an excellent thermoluminescent material and an efficient long-persistent material, highly applicable to TL dosimetry and long-persistence applications.

Future Scope of This Work

The advancement of energy-efficient transmission, storage, and generation technologies using nanomaterials has significantly enhanced the effectiveness of both conventional and renewable energy sources. The thermoluminescence properties of nanomaterials are highly applicable in archaeological dating, forensic science, geology, medical dosimetry, environmental radiation monitoring, radiation oncology, biological science, radiation physics, medicine, neutron dosimetry, UV radiation monitoring, and high-level photon dosimetry. Radiation therapy (radiotherapy) is used in cancer treatment to kill cancer cells. The as-synthesized Ca2MgSi2O7: Dy3+ phosphor is an excellent thermoluminescent material and an efficient long-persistent material.

It is highly applicable in TL radiation dosimetry for personnel and environmental monitoring. The luminescent features of biomaterials have prospective applications in different areas, such as DNA transplantation of tumors in biological science; signal processing and image recognition in computer science and information technology; drug delivery in pharmaceutical science; and nutrition therapy, chemotherapy, and tissue and bone-tissue engineering.

Acknowledgement

We gratefully acknowledge the kind support of the Dept. of Metallurgical Engineering, NIT Raipur (C.G.), for the XRD, FESEM, and EDX analysis facilities. The authors are also thankful to the Dept. of Physics, Pt. Ravishankar Shukla University, Raipur (C.G.), for providing the thermoluminescence (TL) analysis facility. We are also heartily grateful to the Dept. of Physics, Dr. Radha Bai Govt. Navin Girls College, Mathpara, Raipur (C.G.), for providing the muffle furnace and other essential research instruments.

Competing Interests

Authors have declared that no competing interests exist in this present research investigation.

Authors Contribution

Both authors contributed to the completion of this work. Dr. Shashank Sharma designed the manuscript, conducted the entire set of experiments and characterization studies, collected and analyzed the research data, prepared the entire manuscript draft, and supervised the results and discussion. Dr. Sanjay Kumar Dubey checked spelling, punctuation and grammar, contributed to conceptualization, writing, review and editing, and helped in sample preparation. Both authors read and approved the final manuscript.

References

  1. Prasannakumar JB, Vidya YS, Anantharaju KS, Ramgopal G, Nagabhushana H, et al. (2015) Bio-mediated route for the synthesis of shape tunable Y2O3: Tb3+ nanoparticles: photoluminescence and antibacterial properties. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 151: 131-140.
2. Hong DS, Meltzer RS, Bihari B, Williams DK, Tissue BM (1998) Spectral hole burning in crystalline Eu2O3 and Y2O3: Eu3+. Journal of Luminescence 76: 234-237.
  3. Gong Y, Wang Y, Jiang Z, Xu X, Li Y (2009) Luminescent properties of long-lasting phosphor Ca2MgSi2O7: Eu2+. Materials Research Bulletin 44: 1916-1919.
  4. Cai J, Pan H, Wang Y (2011) Luminescence properties of red-emitting Ca2Al2SiO7: Eu3+ nanoparticles prepared by sol-gel method. Rare Metals 30: 374-380.
5. Talwar GJ, Joshi CP, Moharil SV, Dhopte SM, Muthal PL, et al. (2009) Combustion synthesis of Sr3MgSi2O8: Eu2+ and Sr2MgSi2O7: Eu2+ phosphors. Journal of Luminescence 129: 1239-1241.
  6. Dutczak D, Milbrat A, Katelnikovas A, Meijerink A, Ronda C, et al. (2012) Yellow persistent luminescence of Sr2SiO4: Eu2+, Dy3+. Journal of Luminescence 132: 2398-2403.
  7. Sharma S, Dubey SK (2022) Significant Contribution of Deeper Traps for Long Afterglow Process in Synthesized Thermoluminescence Material. Journal of Mineral and Material Science 3: 1-6.
  8. Mckeever SWS (1985) Thermoluminescence of Solids, Cambridge University Press. London New York.
  9. Vij DR (1993) Thermoluminescence materials, PTR Prentice-Hall, Inc. A Simon a Schuster Company, Englewood Cliffs, New Jersey 7632.
10. Yuan ZX, Chang CK, Mao DL, Ying W (2004) Effect of composition on the luminescent properties of Sr4Al14O25: Eu2+, Dy3+. Journal of Alloys and Compounds 377: 268-271.
  11. Kiisk V (2013) Deconvolution and simulation of thermoluminescence glow curves with Mathcad. Radiation Protection Dosimetry 156: 261-267.[crossref]
  12. Aruna ST, Mukasyan AS (2008) Combustion synthesis and nanomaterials. Current opinion in Solid State and Materials Science 12: 44-50.
13. Ekambaram S, Patil KC (1995) Synthesis and properties of rare earth doped lamp phosphors. Bulletin of Materials Science 18: 921-930.
  14. Toniolo JC, Lima MD, Takimi AS, Bergmann CP (2005) Synthesis of alumina powders by the glycine-nitrate combustion process. Materials Research Bulletin 40: 561-571.
  15. Dubey SK, Sharma S, Diwakar AK, Pandey S (2021) Synthesization of Monoclinic (Ba2MgSi2O7: Dy3+) Structure by Combustion Route. Journal of Materials Science Research and Reviews 8: 172-179.
  16. Liu G, Li J, Chen K (2010) Combustion synthesis. Handbook of Combustion: Online 1-62.
  17. Kingsley JJ, Patil KC (1988) A novel combustion process for the synthesis of fine particle α-alumina and related oxide materials. Materials Letters 6: 427-432.
18. Park B, Lee S, Kang J, Byeon S (2007) Single-step solid-state synthesis of CeMgAl11O19: Tb phosphor. Bulletin of the Korean Chemical Society 28: 1467.
19. Morais VRD, Leme DDR, Yamagata C (2018) Preparation of Dy3+-doped calcium magnesium silicate phosphors by a new synthesis method and its luminescence characterization.
  20. Sharma S, Dubey SK (2021) The significant properties of silicate based luminescent nanomaterials in various fields of applications: a review. International Journal of Scientific Research in Physics and Applied Sciences 9: 37-41.
  21. Bates S, Zografi G, Engers D, Morris K, Crowley K, et al. (2006) Analysis of amorphous and nanocrystalline solids from their X-ray diffraction patterns. Pharmaceutical Research 23: 2333-2349.[crossref]
  22. JCPDS (Joint Committee on Powder Diffraction Standard) PDF File No. #79-2425.
  23. Sharma S, Dubey SK (2023) Enhanced Luminescence Studies of Synthesized Ca2MgSi2O7: Ce3+ Phosphor. COJ Biomedical Science and Research 2: 1-9.
  24. Birks LS, Friedman H (1946) Particle size determination from X‐ray line broadening. Journal of Applied Physics 17: 687-692.
  25. Scherrer P (1918) Bestimmung der Grösse und der inneren Struktur von Kolloidteilchen mittels Röntgenstrahlen. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, mathematisch-physikalische Klasse 98-100.
  26. Sharma S, Dubey SK (2022) “Specific Role of Novel TL Material in Various Favorable Applications.” Insights in Mining Science & Technology 3: 1-9.
  27. Vij DR (1998) Luminescence of solids, New York.
  28. Jüstel T, Lade H, Mayr W, Meijerink A, Wiechert DU (2003) Thermoluminescence spectroscopy of Eu2+ and Mn2+ doped BaMgAl10O17. Journal of Luminescence 101: 195-210.
  29. Shannon RD (1976) Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides. Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography 32: 751-767.
30. Sharma S, Dubey SK (2022) Importance of the Color Temperature in Cold White Light Emission of Ca2MgSi2O7: Dy3+. Journal of Applied Chemical Science International 13: 80-90.
  31. Jiang L, Chang C, Mao D (2003) Luminescent properties of CaMgSi2O6 and Ca2MgSi2O7 phosphors activated by Eu2+, Dy3+ and Nd3+. Journal of Alloys and Compounds 360: 193-197.
  32. Chen R (1969) Thermally stimulated current curves with non-constant recombination lifetime. Journal of Physics D: Applied Physics 2: 371.
  33. Sharma S, Dubey SK (2023) Enhanced Thermoluminescence Properties of Synthesized Monoclinic Crystal Structure. Global J Mater Sci Eng 5: 146.
  34. McKeever SW (1988) Thermoluminescence of solids (Vol. 3) Cambridge University Press.
  35. Chen R (1969) Glow curves with general order kinetics. Journal of the Electrochemical Society 116: 1254-1257.
  36. Mashangva M, Singh MN, Singh TB (2011) Estimation of optimal trapping parameters relevant to persistent luminescence.
  37. Sakai R, Katsumata T, Komuro S, Morikawa T (1999) Effect of composition on the phosphorescence from BaAl2O4: Eu2+, Dy3+ Journal of Luminescence 85: 149-154.

Synthesis of Nano-Gold Particles for Multi-functional Finishing of Cotton Fabrics

DOI: 10.31038/NAMS.2023633

Abstract

Gold nanoparticles (AuNPs) were synthesized in situ on cotton, one of the most popular cellulose materials, to achieve functionalization. The localized surface plasmon resonance of the AuNPs imparted the cotton fabric with colors showing good colorfastness to washing and rubbing. Characterization of the surface morphology and chemical composition of the modified cotton fabric confirmed synthesis and coating of the AuNPs on the cotton fibers. The relationship between the morphology of the AuNPs and the optical properties of the cotton fabric was analyzed. Acidic conditions enabled in situ synthesis of AuNPs on cotton. Cotton with AuNPs exhibited significant catalytic activity for the reduction of 4-nitrophenol by sodium borohydride and could be reused in this reaction. Treatment with AuNPs substantially improved the ultraviolet (UV)-blocking ability of the fabric and resulted in cotton with remarkable antibacterial activity. Traditional reactive dyes were applied to the cotton with AuNPs to enhance its color features; the catalytic properties of the AuNPs on the fabric were not influenced by dyeing with traditional dyes. AuNP-treated cotton fabric used as a flexible active substrate showed improved Raman signals of dyes on the fabric.

Keywords

Gold nanoparticles, Cotton, Coloration, Catalysis, Surface-enhanced Raman Scattering, Antibacterial

Introduction

Modification of cellulose fibers, in particular cotton products, using functional nanomaterials has attracted extensive attention, with the aim of imparting properties such as antibacterial activity, flame retardancy, and hydrophobicity [1-7]. Many strategies have been developed to obtain such combinations of fabric and nanoparticles, including plasma treatment, electrostatic assembly, chelation by active groups, and in situ synthesis [8-12]. Anisotropic silver nanoparticles (AgNPs) have been assembled on cotton fabric via electrostatic interaction between nanoparticles and cotton fibers, endowing the textile with bright colors due to their unique optical property, i.e., localized surface plasmon resonance (LSPR) [13]. Cotton fabric with AgNPs has also been modified by subsequent treatment with fluorinated decyl polyhedral oligomeric silsesquioxane (FPOSS) to fabricate colored fabric with durable antibacterial and self-healing superhydrophobic properties, and AgNPs have likewise been combined with cotton under the binding effect of branched poly(ethylenimine) (PEI) [13]. Poly(butyl acrylate)-grafted carbon nanotubes have been applied to cotton fabric using a common dipping–drying–curing finishing procedure [14]; the modified cotton showed various functions, such as enhanced mechanical properties and extraordinary flame retardancy. Core–shell-structured silicon dioxide@zinc oxide (SiO2@ZnO) nanoparticles have been prepared and adhered to cotton fabric with the assistance of (3-aminopropyl)triethoxysilane (APTES) or vinyltriethoxysilane (VTES) [15], resulting in textiles with antibacterial activity, UV-protection properties, and high durability. Titanium dioxide (TiO2) nanoparticles have also been used for functionalization of cotton to obtain fabric with self-cleaning and UV-blocking properties [16,17]. Among functional nanoparticles, gold nanoparticles (AuNPs) have received considerable attention from researchers owing to their promising optical, electronic, magnetic, catalytic, and biomedical applications [18,19]. As the most stable metal nanoparticles, AuNPs can be prepared using straightforward routes and are resistant to oxidation, facilitating diverse applications. AuNPs present significant catalytic activity in various reaction systems. They exhibit unique optoelectronic features and possess excellent biocompatibility with appropriate ligands, as well as a high surface-to-volume ratio. The properties of AuNPs, including the LSPR optical effect, can be readily tuned by controlling their size, shape, and surroundings. Moreover, AuNPs have been widely used as effective active substrate materials for surface-enhanced Raman scattering (SERS) analysis, as the Raman signals of molecules adsorbed on AuNPs can be greatly enhanced. Modification of textile materials with AuNPs can transfer some of these important properties to the resulting textile products. Cotton is the most widely used natural fibrous material for textile and clothing production, and its modification is driven by growing consumer demand for enhanced functions in conventional textile products. In situ synthesis of nanoparticles on fabric/fibers is a simple and effective route to functional modification of textile materials; for example, a hydrophobic hierarchical structure has been fabricated on cotton fabric by in situ growth of silica nanoparticles [20,21]. AgNPs have been synthesized in situ on cotton fabric by adjusting the pH value at room temperature; the as-synthesized AgNPs imparted vivid colors and strong antibacterial properties to the treated fabric. TiO2 nanoparticles with anatase structure have been synthesized using ultrasonic irradiation at low temperature and loaded onto cotton fabric [22], which then exhibited UV-protection and self-cleaning features. ZnO nanoparticles have been synthesized in situ in the cellulosic pores of cotton fabric by reacting zinc nitrate and sodium hydroxide, giving fabric with antibacterial and UV-protection properties [23]. In previous research, AuNPs were synthesized in situ on silk fabric by heat treatment [24]; the silk fabric treated with AuNPs showed not only vivid colors but also enhanced Raman signals for use as an active SERS substrate for detection of trace analytes [25]. These results inspired us to develop cotton functionalized with AuNPs by in situ synthesis. In this study, cotton fabric was modified with AuNPs synthesized in situ through heat treatment. The optical properties of the resulting colorful fabric were analyzed based on color strength (K/S) curves and ultraviolet–visible (UV–Vis) diffuse reflectance absorption spectroscopy. The surface morphology of the cotton fabric before and after modification with AuNPs was investigated by scanning electron microscopy (SEM). The influence of pH on the in situ synthesis of the AuNPs was investigated. The catalytic activity and antibacterial features of the AuNP-treated cotton fabric were evaluated. Complex coloration of cotton fabric using both AuNPs and traditional dyes was also explored. Furthermore, cotton fabric with AuNPs was used as a flexible active substrate to enhance the Raman signals of dyes on the fabric.

Technical Details

The following chemicals were used: tetrachloroauric(III) acid trihydrate (HAuCl4·3H2O), acetic acid, sodium hydroxide, 4-nitrophenol (4-NP), sodium borohydride (NaBH4), cellulose powder, and the reactive dyes C.I. Reactive Red 3 (R3) and C.I. Reactive Red 195 (R195). The textile material was knitted cotton fabric. Measurements were made with a scanning electron microscope (SEM), an inductively coupled plasma atomic emission spectrometer (ICP-AES), a Raman microscope system, and a Color i7 spectrophotometer. Treatments were carried out at a liquor-to-fabric ratio of 50:1; a worked example of bath make-up at this ratio is sketched below. Standard test procedures were followed for colorfastness to washing, colorfastness to rubbing, catalytic activity, and antibacterial testing against a Gram-negative bacterium (Figure 1).
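To make the 50:1 liquor ratio concrete, the following minimal Python sketch computes the bath volume and the mass of HAuCl4·3H2O to weigh out for a chosen bath concentration. The fabric mass and target concentration in the example are illustrative assumptions, not values reported in this study.

    # Bath make-up at a liquor-to-fabric ratio of 50:1 (50 mL liquor per g fabric).
    # Fabric mass and bath concentration below are illustrative assumptions.
    MW_HAUCL4_3H2O = 393.83  # g/mol, molar mass of HAuCl4.3H2O

    def bath_makeup(fabric_mass_g, conc_mM, liquor_ratio=50.0):
        """Return (bath volume in mL, HAuCl4.3H2O mass in mg)."""
        volume_mL = liquor_ratio * fabric_mass_g        # total liquor volume
        moles = conc_mM * 1e-3 * volume_mL * 1e-3       # mol of Au salt in the bath
        return volume_mL, moles * MW_HAUCL4_3H2O * 1e3  # mg of salt to dissolve

    vol_mL, salt_mg = bath_makeup(fabric_mass_g=2.0, conc_mM=0.10)
    print(f"Bath: {vol_mL:.0f} mL, HAuCl4.3H2O: {salt_mg:.2f} mg")  # 100 mL, 3.94 mg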

Figure 1: Modification of cellulose fibers

Preparation and Characterization of Gold Nanoparticles

Figure 2 shows a photograph of the treated cotton fabric samples. The cotton fabric treated in 0.025 mM HAuCl4 solution (Cot-Au-1) was light red, implying the presence of AuNPs on the cotton fibers. The color of the AuNP-treated fabric deepened from light red to red to dark red as the initial concentration of HAuCl4 was increased from 0.025 mM to 0.125 mM. Based on ICP-AES, the gold content of the treated fabric was measured to be 0.386, 0.725, 0.921, 1.638, and 1.849 mg g⁻¹ for Cot-Au-1, Cot-Au-2, Cot-Au-3, Cot-Au-4, and Cot-Au-5, respectively.

Figure 2: Treated cotton fabrics
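The color strength values discussed below are derived from the measured diffuse reflectance via the Kubelka–Munk relation standard in textile coloration (a general formula, not one specific to this study):

    \[ \frac{K}{S} = \frac{(1 - R)^{2}}{2R} \]

where R is the reflectance of the fabric at a given wavelength, K the absorption coefficient, and S the scattering coefficient; a deeper color therefore corresponds to a larger K/S value.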

The gold content of the samples thus increased with the concentration of HAuCl4. K/S curves were obtained to analyze the color changes. The peak of the K/S curves for the treated cotton fabric remained at 540 nm as the gold content increased; however, the maximum K/S value rose with gold content, consistent with the deepening color. UV–Vis diffuse reflectance absorption spectra of the AuNP-treated cotton fabric were also measured. In the spectrum of Cot-Au-1, a single absorption band appeared at 534 nm, assigned to the characteristic LSPR mode of AuNPs. This LSPR band red-shifted from 534 nm to 547 nm as the initial concentration of HAuCl4 was increased from 0.025 mM to 0.125 mM, along with an increase in band intensity. These changes in the color and LSPR properties of the treated fabric may be related to the gold content and to the morphology and density of the AuNPs on the cotton. SEM was employed to observe the surface morphology of the treated cotton fabric (Figure 3).

Figure 3: SEM images of a Cot-Au-1, b Cot-Au-2, c Cot-Au-3, d Cot-Au-4, and e Cot-Au-5

Numerous nanoparticles were seen over the surface of the fibers (Figure 3), demonstrating that AuNPs were synthesized in situ on the cotton. The size of the AuNPs on cotton was measured to be 8.7 ± 1.2, 8.6 ± 1.3, 14.1 ± 3.0, 17.4 ± 3.0, and 20.5 ± 3.8 nm for Cot-Au-1 to Cot-Au-5, respectively. Although the sizes for Cot-Au-1 and Cot-Au-2 were similar, Cot-Au-2, with its higher LSPR intensity, had a higher density of AuNPs on the fiber surfaces than Cot-Au-1. The size of the AuNPs increased with the gold content of the samples as the Au ion concentration was raised from 0.05 mM to 0.125 mM. Nearly all the nanoparticles on the cotton fibers were spherical at low gold content (Cot-Au-1, Cot-Au-2, and Cot-Au-3), whereas a few anisotropic AuNPs, such as triangular nanoplates, appeared at high gold content (Cot-Au-4 and Cot-Au-5). The anisotropy of the AuNPs accounts for the red-shift of the LSPR band observed in the UV–Vis diffuse reflectance absorption spectra of the treated fabric. As seen from the SEM images, the density of AuNPs decreased when the Au ion concentration was raised from 0.075 to 0.10 and 0.125 mM, owing to the generation of larger, anisotropic AuNPs. Together, these effects on the morphology (shape and size) and density of the AuNPs on cotton account for the changes in the LSPR properties of the different samples.

XPS was used to analyze the cotton fabric treated with AuNPs. Peaks assigned to O 1s and C 1s, the normal components of cellulose, were seen in the XPS spectrum of pristine cotton (Pri-Cot) fabric, while peaks ascribed to elemental Au appeared in the spectra of the treated fabric. The spectrum of Cot-Au-4 displayed two principal bands at 82.2 and 85.8 eV, attributed to the Au 4f7/2 and Au 4f5/2 binding energies of metallic Au, respectively [26]. These XPS results confirm that AuNPs were successfully synthesized on the cotton fabric.
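The particle sizes quoted above (mean ± standard deviation) are the kind of summary statistics typically obtained by measuring many particle diameters in SEM images. A minimal sketch of that calculation follows; the diameters listed are invented for illustration and are not measurements from this study.

    import statistics

    # Hypothetical particle diameters (nm) read off an SEM image; illustrative only.
    diameters_nm = [7.9, 8.4, 8.6, 8.9, 9.1, 8.2, 8.8, 9.5, 8.0, 8.6]

    mean_d = statistics.mean(diameters_nm)
    sd_d = statistics.stdev(diameters_nm)  # sample standard deviation
    print(f"Particle size: {mean_d:.1f} +/- {sd_d:.1f} nm")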

Influence of pH Value

We investigated the influence of the pH value on the in situ synthesis of AuNPs on the cotton fabric. The original pH of the HAuCl4 aqueous solutions at 0.025–0.125 mM was around 4, and the pH of the reaction systems was adjusted by addition of acetic acid or NaOH aqueous solutions. K/S curves and UV–Vis diffuse reflectance absorption spectra were obtained for cotton fabric treated with 0.10 mM HAuCl4 solution at different pH values (3–6). The cotton fabric treated at pH 3 showed the highest K/S value (1.71) among the different samples, and the K/S value of the AuNP-treated fabric decreased as the pH of the reaction system was increased, the maximum K/S falling to 0.28 at pH 6. Vividly colored cotton fabric was obtained using 0.10 mM HAuCl4 in the pH range 3–6, whereas the fabric changed only slightly in color after heat treatment in 0.10 mM HAuCl4 solution at pH 7 or above, implying that almost no AuNPs were produced under those conditions. The UV–Vis diffuse reflectance absorption bands of the fabric treated at different pH values were centered around 540 nm, and their intensity decreased with increasing pH, consistent with the trend in the K/S values. It can be inferred that acidic conditions facilitated the in situ synthesis of AuNPs on cotton fabric, similar to the in situ preparation of AuNPs on ramie fibers [27].

It is well documented in the literature that the pH of the reaction system plays a vital role in the formation of AuNPs through reduction of HAuCl4 [28-33]. The Au ion complexes with chloride and/or hydroxide ligands have been suggested to be AuCl4⁻ (pH 3.3), AuCl3(OH)⁻ (pH 6.2), AuCl2(OH)2⁻ (pH 7.1), AuCl(OH)3⁻ (pH 8.1), and Au(OH)4⁻ (pH 12.9), corresponding to different pH ranges [34,35]. The reduction potential of these complexes depends markedly on the pH, with reactivity decreasing as the pH is increased, in the order AuCl4⁻ > AuCl3(OH)⁻ > AuCl2(OH)2⁻ > AuCl(OH)3⁻ > Au(OH)4⁻. In the present study, AuNPs were synthesized in the presence of cotton under acidic conditions, whereas Au ions were not reduced to AuNPs in neutral or basic solution, consistent with previous analyses of the influence of pH on AuNP formation.

Mechanism of In Situ Synthesis of AuNPs

Cellulose, the dominant component of cotton, consists of long chains of D-glucose units [36], so the in situ synthesis of AuNPs on cotton fabric in this study could result from reduction of Au ions by cellulose. To test this, pure cellulose powder was subjected to the same experimental procedure as the cotton fabric. As shown in Figure S3, purplish-red and grayish-purple cellulose powders were produced after heat treatment in HAuCl4 solution, implying synthesis of AuNPs by cellulose and indicating that the reducing effect of cellulose in cotton led to the in situ formation of AuNPs. Cellulose materials have been reported to act as reducing agents in the synthesis of AuNPs and AgNPs owing to their abundant hydroxyl groups [37,38], and hydroxyl groups of cellulose are suggested to play a pivotal role in the in situ formation of metal nanoparticles: the more reactive primary hydroxyl groups could be oxidized by Au ions during the synthesis, and the reducing ends of the cellulose in cotton could also contribute to the reduction of Au ions [39]. As proposed in previous research, oxygen-containing groups on the surface of cotton, including carboxylate and hydroxyl groups, can serve as active sites that combine with AuNPs through complexation or electrostatic interaction [40]. Zeta potential measurements in our previous work indicated that cotton powder carries a negative charge, so complexation or electrostatic interaction could lead to effective combination of the synthesized AuNPs with the cotton fabric. Cotton thus acted as both a reducing agent and a stabilizing agent in preparing AuNPs on the fiber surface.
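The overall redox step under acidic conditions can be summarized by the textbook half-reaction for tetrachloroaurate (a general relation, not one measured in this work), with the hydroxyl and reducing-end groups of cellulose supplying the electrons:

    \[ \mathrm{AuCl_{4}^{-} + 3e^{-} \rightarrow Au^{0} + 4Cl^{-}} \]

The standard reduction potential of this couple is roughly +1.0 V versus the standard hydrogen electrode, consistent with the fully chlorinated complex, which dominates under acidic conditions, being the most easily reduced species.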
Assessment of colorfastness is an important part of evaluating the properties and performance of textile products. The colorfastness to washing of the AuNP-treated cotton fabric was tested by washing in the presence of ECE reference detergent at 50 °C for 45 min per washing cycle, and the ΔE color-difference values of the treated fabric before and after washing were determined (the ΔE measure is defined below). After the first washing cycle, the ΔE values were 0.7 and 2.6 for Cot-Au-4 and Cot-Au-5, respectively, revealing that some color fading occurred during washing; however, the ΔE of the treated fabric increased only slightly further after the third washing cycle. These results demonstrate that the cotton fabric colored with AuNPs exhibited reasonably good colorfastness to washing. In addition, the colorfastness to rubbing of the treated fabric was tested, with gray scale ratings for the ΔE values of Cot-Au-2, Cot-Au-4, and Cot-Au-5 assessed under dry and wet rubbing conditions. The dry rubbing colorfastness was rated 5, 4–5, and 4–5, and the wet rubbing colorfastness 4–5, 4, and 4, for Cot-Au-2, Cot-Au-4, and Cot-Au-5, respectively. These results show that the AuNP-treated cotton fabric exhibited good colorfastness to both dry and wet rubbing.
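For reference, ΔE here is the usual CIELAB color difference between the fabric before and after washing or rubbing (the standard CIE 1976 definition; the source does not state which formula variant was applied):

    \[ \Delta E = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}} \]

where ΔL*, Δa*, and Δb* are the differences in the lightness and the two chromaticity coordinates measured by the spectrophotometer; a smaller ΔE means a less perceptible color change.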
Investigation of Catalytic Activity

AuNPs have been widely used as catalysts for various reactions [41-43]. In the present research, the AuNPs were bound to the cotton fibers after in situ synthesis; because the fabric acts as a support for the nanoparticles (Scheme 1), they can be readily separated from the reaction system, enabling reuse of the catalyst. Reduction of 4-NP is commonly used as a model reaction to evaluate the catalytic activity of metal nanoparticles [44], so the catalytic activity of the AuNP-treated cotton fabric was assessed by monitoring the UV–Vis absorption spectra of the aqueous solution during reduction of 4-NP by NaBH4. The color of the 4-NP solution changed from light yellow to green-yellow after addition of NaBH4. Nitro compounds are inert to NaBH4 in the absence of a catalyst, whereas metal nanoparticles can act as an electron relay that transfers electrons from NaBH4 to the nitro compound and thereby accelerates the reduction [45]. A new UV–Vis absorption peak appeared at 400 nm after NaBH4 was added, due to the formation of 4-nitrophenolate ions. In the presence of pristine cotton fabric, the intensity of this 400 nm peak decreased only slightly over time, revealing that pristine cotton showed no catalytic activity. Time-resolved UV–Vis absorption spectra of 4-NP solution with NaBH4 were then recorded in the presence of the AuNP-treated fabric samples (Cot-Au-1 to Cot-Au-5).

The 400 nm absorption peak of the 4-NP solution decreased distinctly in intensity after addition of NaBH4 in the presence of all the AuNP-treated fabric samples, while a new absorption peak arose at 300 nm, implying formation of 4-aminophenol (4-AP) [46,47]. The intensity of the 400 nm peak was plotted as a function of time to determine the reduction rate of 4-NP; its rapid decrease indicates that the AuNP-treated cotton fabric exhibited notable catalytic activity, with the system containing Cot-Au-4 showing the highest reaction rate among the samples. Reduction of 4-NP is generally treated as a pseudo-first-order reaction on account of the excess of NaBH4 [48,49]. Plots of ln(At/A0) versus time were therefore constructed, where At and A0 denote the absorption intensity at 400 nm at time t and initially, respectively. The linear correlation between ln(At/A0) and time confirms the pseudo-first-order hypothesis, and the apparent rate constant (Kapp) can be estimated from the slope (a minimal fitting sketch is given below). The Kapp values were found to be 1.89 × 10⁻², 1.76 × 10⁻², 2.29 × 10⁻², 4.32 × 10⁻², and 2.55 × 10⁻² min⁻¹ for Cot-Au-1, Cot-Au-2, Cot-Au-3, Cot-Au-4, and Cot-Au-5, respectively, comparable to literature results for AuNPs [50,51]. Cot-Au-4, with the largest Kapp, thus showed the highest catalytic activity even though its gold content was lower than that of Cot-Au-5; the catalytic activity is believed to depend on the shape, size, and density of the AuNPs on the fiber surface. To evaluate reusability, treated cotton fabric (Cot-Au-4) was separated from the reaction system and reused in repeated reduction reactions of 4-NP, with the 400 nm peak intensity recorded versus reaction time for each complete conversion. The treated fabric still exhibited strong catalytic activity after seven cycles, indicating a durable catalytic effect.
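As a minimal illustration of how Kapp is extracted from such data, the sketch below fits ln(At/A0) against time by least squares. The absorbance values are invented for illustration (chosen to give a Kapp near that of Cot-Au-4) and are not data from this study.

    import numpy as np

    # Hypothetical absorbance of the 400 nm 4-nitrophenolate peak versus time.
    t_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    A = np.array([1.00, 0.92, 0.84, 0.77, 0.71, 0.65])

    # Pseudo-first-order model: ln(At/A0) = -Kapp * t
    y = np.log(A / A[0])
    slope, _ = np.polyfit(t_min, y, 1)  # least-squares line through the points
    k_app = -slope                      # apparent rate constant, min^-1

    print(f"Kapp = {k_app:.3f} min^-1")  # ~0.043 min^-1 for these numbers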
UV-Protection and Antibacterial Properties

Treatment with AuNPs endowed the cotton fabric with additional functions. The UV transmittance and UV protection factor (UPF, defined below) of the different fabric samples were determined. Coating with AuNPs reduced the average transmittance of the cotton fabric, and the transmittance in both the UVA (315–400 nm) and UVB (280–315 nm) regions showed a decreasing trend with increasing gold content. The UPF value of pristine cotton fabric was measured to be 65.1, while in situ synthesis of AuNPs on the fabric increased the UPF to 109.3, indicating that the AuNPs improved the UV-blocking ability of the cotton.
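The UPF is conventionally computed by weighting the fabric's spectral transmittance against the solar spectrum and the erythemal response (the standard definition used in sun-protection testing; the exact test standard applied in this study is not stated):

    \[ \mathrm{UPF} = \frac{\sum_{\lambda=290}^{400} E(\lambda)\,\varepsilon(\lambda)\,\Delta\lambda}{\sum_{\lambda=290}^{400} E(\lambda)\,\varepsilon(\lambda)\,T(\lambda)\,\Delta\lambda} \]

where E(λ) is the solar spectral irradiance, ε(λ) the erythemal action spectrum, T(λ) the measured spectral transmittance of the fabric, and Δλ the wavelength step; lower transmittance across the UVA and UVB bands therefore raises the UPF.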
The antibacterial properties of AuNPs have attracted great interest, and various potential antibacterial applications have been explored [52-54]. The antibacterial activity of the AuNP-treated cotton fabric was evaluated against the Gram-negative bacterium E. coli by comparing bacteria cultured from pristine cotton fabric and from treated cotton fabric (Cot-Au-4). A full lawn of bacteria was seen on the plates corresponding to the pristine cotton fabric, whereas no bacterial colonies were found on the agar medium of the AuNP-treated fabric, revealing that the presence of AuNPs on the fabric inhibited bacterial growth. These results demonstrate that the cotton fabric with AuNPs possessed significant antibacterial activity.

Dyeing of AuNP-Treated Cotton Fabric with Traditional Dyes

To improve the color range and saturation of the fabric, the traditional reactive dyes R3 and R195 were used to dye the AuNP-treated cotton fabric. Dyeing with traditional dyes imparted a deeper color to the cotton than that achieved through the LSPR optical effect of the AuNPs alone, and the K/S values of the fabric increased markedly after dyeing with R3 and R195. Owing to the light color conferred by the AuNPs at low gold content (0.386 mg g⁻¹), the K/S curves of Cot-R3 were almost identical to those of Cot-Au-R3-1, and likewise for Cot-R195 and Cot-Au-R195-1. However, coating with more AuNPs gave rise to higher K/S values for the cotton fabric colored with both AuNPs and the traditional dyes. The AuNPs and dyes thus played a combined role in the color of the fabric, although the traditional dyes dominated its optical properties. Coloration with traditional dyes did not affect the catalytic properties of the AuNP-treated cotton. The UV–Vis absorption spectra of 4-NP solution with Cot-R195 changed very little after addition of NaBH4, demonstrating no evident catalytic activity for Cot-R195, consistent with the case of Pri-Cot. In contrast, the intensity of the 400 nm absorption band of 4-NP solution in the presence of Cot-Au-R195-4 decreased sharply after NaBH4 addition, revealing that the fabric retained strong catalytic activity after complex coloration with AuNPs and traditional dyes. Moreover, the cotton fabric with AuNPs and R195 could be reused in the catalytic reaction, still showing significant activity after seven cycles. These results attest that the catalytic properties of the AuNPs on cotton fabric are retained after coloration with traditional dyes.

SERS Enhancement by AuNP-Treated Cotton Fabric

AuNPs have been widely used as active substrates for enhancing Raman signals owing to their LSPR effect, so Raman scattering spectra were obtained to investigate the SERS enhancement effect of the AuNP-treated cotton fabric. No distinct bands were seen in the Raman scattering spectrum of pristine cotton fabric, whereas clear bands appeared after AuNPs were coated on the cotton. Vibrational bands were evident at 1604, 1378, 1121, 1096, 519, 457, 435, 380, and 257 cm⁻¹ in the Raman scattering spectra of the AuNP-treated fabric, these being characteristic Raman bands of cotton fibers [55,56]; some are assigned to vibrations of the β-1,4-glycosidic linkages between D-glucose units in cellulose. Cot-Au-5 exhibited the strongest Raman signal of cellulose among the AuNP-treated samples, which may reflect the optimal morphology and corresponding LSPR effect of its nanoparticles. Noble-metal nanoparticles on textiles can also enhance the Raman signals of the dyes used for coloration of fibers, leading to promising applications of SERS in identification of cultural heritage, forensic analysis, and textile dyeing [57-61]. In the present study, we investigated the SERS enhancement of R3 on the AuNP-treated cotton fabric. The cotton fabric dyed with R3 without AuNPs showed no clear Raman bands from either the cellulose units or the dye on the fabric. In contrast, enhanced Raman bands were obtained from R3-dyed cotton fabric treated with AuNPs, with the R3-dyed fabric of higher gold content (Cot-Au-R3-4 and Cot-Au-R3-5) showing unambiguous enhanced bands. The Raman scattering spectrum of pure R3 powder was also determined for comparison. Relative to the normal Raman spectrum of R3, the SERS bands from Cot-Au-R3-4 and Cot-Au-R3-5 at 284, 385, 478, 1000, 1032, 1123, 1274, 1473, and 1589 cm⁻¹ can be ascribed to R3 dye on the AuNP-treated cotton fabric, although small wavenumber shifts occurred owing to interactions of the dye with the cotton fibers and the AuNPs. The AuNPs on the fabric thus enhanced the Raman signal of the dye on the fibers, facilitating nondestructive analysis of dyes on textiles and providing insights into the dyeing mechanism of fibers.

Conclusions

Cotton fabric was functionalized with AuNPs synthesized in situ by a heating method. The fabric was colored by the AuNPs by virtue of their LSPR optical effect, and the intensity of the LSPR band of the AuNP-treated fabric increased with the gold content of the samples. The treated fabric showed good colorfastness to washing and rubbing. SEM and XPS investigations confirmed the synthesis of AuNPs on the cotton and their binding to the fibers, and the mechanism of the in situ synthesis was examined. The fabric with AuNPs exhibited notable catalytic activity, as shown by monitoring the reduction of 4-NP to 4-AP, and showed improved UV-protection and excellent antibacterial properties. Traditional dyes were combined with the AuNP-treated cotton, yielding improved color properties, and the fabric with such complex coloration still exhibited prominent catalytic activity. Cotton fabric with AuNPs can also act as a SERS substrate for analysis of dyes on the fabric.

References

  1. El-Shishtawy RM, Asiri AM, Abdelwahed NAM, Al-Otaibi MM (2011) In situ production of silver nanoparticle on cotton fabric and its antimicrobial evaluation. Cellulose 18: 75-82.
  2. Mohamed AL, El-Naggar ME, Shaheen TI, Hassabo AG (2017) Laminating of chemically modified silan based nanosols for advanced functionalization of cotton textiles. Int J Biol Macromol 95: 429-437. [crossref]
  3. Dhineshbabu NR, Arunmetha S, Manivasakan P, Karunakaran G, Rajendran V (2016) Enhanced functional properties of cotton fabrics using TiO2/SiO2 nanocomposites. J Ind Text 45: 674-692.
  4. Shaheen TI, El-Naggar ME, Abdelgawad AM, Hebeish A (2016) Durable antibacterial and UV protections of in situ synthesized zinc oxide nanoparticles onto cotton fabrics. Int J Biol Macromol 83: 426-432. [crossref]
  5. Alongi J, Malucelli G (2015) Cotton flame retardancy: state of the art and future perspectives. RSC Adv 5: 24239-24263.
  6. Alongi J, Carosio F, Malucelli G (2014) Current emerging techniques to impart flame retardancy to fabrics: an overview. Polym Degrad Stab 106: 138-149.
  7. Leng BX, Shao ZZ, de With G, Ming WH (2009) Superoleophobic cotton textiles. Langmuir 25: 2456-2460. [crossref]
  8. Cady NC, Behnke JL, Strickland AD (2011) Copper-based nanostructured coatings on natural cellulose: nanocomposites exhibiting rapid and efficient inhibition of a multidrug resistant wound pathogen, A. baumannii, and mammalian cell biocompatibility in vitro. Adv Funct Mater 21: 2506-2514.
  9. Dong H, Hinestroza JP (2009) Metal nanoparticles on natural cellulose fibers: electrostatic assembly and in situ synthesis. ACS Appl Mater Interfaces 1: 797-803. [crossref]
  10. Gorjanc M, Bukosek V, Gorensek M, Mozetic M (2010) CF4 plasma and silver functionalized cotton. Text Res J 80: 2204-2213.
  11. Tang B, Zhang M, Hou X, Li J, Sun L, Wang X (2012) Coloration of cotton fibers with anisotropic silver nanoparticles. Ind Eng Chem Res 51: 12807-12813.
  12. Tang B, Kaur J, Sun L, Wang X (2013) Multifunctionalization of cotton through in situ green synthesis of silver nanoparticles. Cellulose 20: 3053-3065.
  13. Wu MC, Ma BH, Pan TZ, Chen SS, Sun JQ (2016) Silver-nanoparticle-colored cotton fabrics with tunable colors and durable antibacterial and self-healing superhydrophobic properties. Adv Funct Mater 26: 569-576.
  14. Liu YY, Wang XW, Qi KH, Xin JH (2008) Functionalization of cotton with carbon nanotubes. J Mater Chem 18: 3454-3460.
  15. El-Naggar ME, Hassabo AG, Mohamed AL, Shaheen TI (2017) Surface modification of SiO2 coated ZnO nanoparticles for multifunctional cotton fabrics. J Colloid Interface Sci 498: 413-422. [crossref]
  16. Bozzi A, Yuranova T, Guasaquillo I, Laub D, Kiwi J (2005) Self-cleaning of modified cotton textiles by TiO2 at low temperatures under daylight irradiation. J Photochem Photobiol A Chem 174: 156-164.
  17. El-Naggar ME, Shaheen TI, Zaghloul S, El-Rafie MH, Hebeish A (2016) Antibacterial activities and UV protection of the in situ synthesized titanium oxide nanoparticles on cotton fabrics. Ind Eng Chem Res 55: 2661-2668.
  18. Daniel MC, Astruc D (2004) Gold nanoparticles: assembly, supramolecular chemistry, quantum-size-related properties, and applications toward biology, catalysis, and nanotechnology. Chem Rev 104: 293-346.
  19. Saha K, Agasti SS, Kim C, Li X, Rotello VM (2012) Gold nanoparticles in chemical and biological sensing. Chem Rev 112: 2739-2779. [crossref]
  20. Komeily-Nia Z, Montazer M, Latifi M (2013) Synthesis of nano copper/nylon composite using ascorbic acid and CTAB. Colloids Surf A Physicochem Eng Asp 439: 167-175.
  21. Chen XQ, Liu YY, Lu HF, Yang HR, Zhou XA, Xin JH (2010) In-situ growth of silica nanoparticles on cellulose and application of hierarchical structure in biomimetic hydrophobicity. Cellulose 17: 1103-1113.
  22. Sadr FA, Montazer M (2014) In situ sonosynthesis of nano TiO2 on cotton fabric. Ultrason Sonochem 21: 681-691. [crossref]
  23. Prasad V, Arputharaj A, Bharimalla AK, Patil PG, Vigneshwaran N (2016) Durable multifunctional finishing of cotton fabrics by in situ synthesis of nano-ZnO. Appl Surf Sci 390: 936-940.
  24. Tang B, Sun L, Kaur J, Yu Y, Wang X (2014) In-situ synthesis of gold nanoparticles for multifunctionalization of silk fabrics. Dyes Pigm 103: 183-190.
  25. Liu J et al (2016) Surface enhanced Raman scattering (SERS) fabrics for trace analysis. Appl Surf Sci 386: 296-302.
  26. Medina-Ramirez I, Gonzalez-Garcia M, Palakurthi S, Liu J (2012) Application of nanometals fabricated using green synthesis in cancer diagnosis and therapy. In: Kidwai M (ed) Green chemistry—environmentally benign approaches. InTech, Rijeka.
  27. Tang B et al (2015b) Functional application of noble metal nanoparticles in situ synthesized on ramie fibers. Nanoscale Res Lett. [crossref]
  28. Bastús NG, Comenge J, Puntes V (2011) Kinetically controlled seeded growth synthesis of citrate-stabilized gold nanoparticles of up to 200 nm: size focusing versus Ostwald ripening. Langmuir 27: 11098-11105. [crossref]
  29. Chakraborty A, Chakraborty S, Chaudhuri B, Bhattacharjee S (2016) Process engineering studies on gold nanoparticle formation via dynamic spectroscopic approach. Gold Bull 49: 75-85.
  30. Ji X, Song X, Li J, Bai Y, Yang W, Peng X (2007) Size control of gold nanocrystals in citrate reduction: the third role of citrate. J Am Chem Soc 129: 13939-13948. [crossref]
  31. Kimling J, Maier M, Okenve B, Kotaidis V, Ballot H, Plech A (2006) Turkevich method for gold nanoparticle synthesis revisited. J Phys Chem B 110: 15700-15707. [crossref]
  32. Yu, Chang S-S, Lee C-L, Wang CRC (1997) Gold nanorods: electrochemical synthesis and optical properties. J Phys Chem B 101: 6661-6664.
  33. Zhang P, Li Y, Wang D, Xia H (2016) High-yield production of uniform gold nanoparticles with sizes from 31 to 577 nm via one-pot seeded growth and size-dependent SERS property. Part Part Syst Charact 33: 924-932.
  34. Goia D, Matijević E (1999) Tailoring the particle size of monodispersed colloidal gold. Colloids Surf A 146: 139-152.
  35. Wuithschick M et al (2015) Turkevich in new robes: key questions answered for the most common gold nanoparticle synthesis. ACS Nano 9: 7052-7071. [crossref]
  36. Edwards HGM, Farwell DW, Webster D (1997) FT Raman microscopy of untreated natural plant fibres. Spectrochim Acta A 53: 2383-2392.
  37. Dong H, Hinestroza JP (2009) Metal nanoparticles on natural cellulose fibers: electrostatic assembly and in situ synthesis. ACS Appl Mater Interfaces 1: 797-803. [crossref]
  38. Montazer M, Alimohammadi F, Shamei A, Rahimi MK (2012) In situ synthesis of nano silver on cotton using Tollens’ reagent. Carbohydr Polym 87: 1706-1712.
  39. Pinto RJB, Marques P, Martins MA, Neto CP, Trindade T (2007) Electrostatic assembly and growth of gold nanoparticles in cellulosic fibres. J Colloid Interface Sci 312: 506-512. [crossref]
  40. Velleste R, Teugjas H, Valjamae P (2010) Reducing end-specific fluorescence labeled celluloses for cellulase mode of action. Cellulose 17: 125-138.
  41. Kumar A, Mandal S, Selvakannan PR, Pasricha R, Mandale AB, et al. (2003) Investigation into the interaction between surface-bound alkylamines and gold nanoparticles. Langmuir 19: 6277-6
  42. Corma A, Garcia H (2008) Supported gold nanoparticles as catalysts for organic reactions. Chem Soc Rev 37: 2096-2126. [crossref]
  43. Zhao Y, Huang YC, Zhu H, Zhu QQ, Xia YS (2016) Three-in-one: sensing, self-assembly, and cascade catalysis of cyclodextrin modified gold nanoparticles. J Am Chem Soc 138: 16645-16654. [crossref]
  44. Herves P, Perez-Lorenzo M, Liz-Marzan LM, Dzubiella J, Lu Y, Ballauff M (2012) Catalysis by metallic nanoparticles in aqueous solution: model reactions. Chem Soc Rev 41: 5577-5587. [crossref]
  45. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424: 824-830. [crossref]
  46. Liang M, Su R, Huang R, Qi W, Yu Y, et al. (2014) Facile in situ synthesis of silver nanoparticles on procyanidin-grafted eggshell membrane and their catalytic properties. ACS Appl Mater Interfaces 6: 4638-4649. [crossref]
  47. Tang B, Li JL, Fan LP, Wang XG (2015a) Facile synthesis of silver submicrospheres and their applications. RSC Adv 5: 98293-98298.
  48. Ai L, Yue H, Jiang J (2012) Environmentally friendly light-driven synthesis of Ag nanoparticles in situ grown on magnetically separable biohydrogels as highly active and recyclable catalysts for 4-nitrophenol reduction. J Mater Chem 22: 23447-23453.
  49. Abdel-Fattah TM, Wixtrom A (2014) Catalytic reduction of 4-nitrophenol using gold nanoparticles supported on carbon nanotubes. ECS J Solid State Sci Technol 3: M18-M20.
  50. Panigrahi S et al (2007) Synthesis and size-selective catalysis by supported gold nanoparticles: study on heterogeneous and homogeneous catalytic process. J Phys Chem C 111: 4596-4605.
  51. Cui Y, Zhao Y, Tian Y, Zhang W, Lu X, et al. (2012) The molecular mechanism of action of bactericidal gold nanoparticles on Escherichia coli. Biomaterials 33: 2327-2333. [crossref]
  52. Emam HE, El-Hawary NS, Ahmed HB (2017) Green technology for durable finishing of viscose fibers via self-formation of AuNPs. Int J Biol Macromol 96: 697-705.
  53. Wadhwani P, Heidenreich N, Podeyn B, Burck J, Ulrich AS (2017) Antibiotic gold: tethering of antimicrobial peptides to gold nanoparticles maintains conformational flexibility of peptides and improves trypsin susceptibility. Biomater Sci 5: 817-827. [crossref]
  54. Edwards HGM, Farwell DW, Webster D (1997) FT Raman microscopy of untreated natural plant fibres. Spectrochim Acta A 53: 2383-2392.
  55. Zorko M, Vasiljevic J, Tomsic B, Simoncic B, Gaberscek M, et al (2015) Cotton fiber hot spot in situ growth of Stöber particles. Cellulose 22: 3597-3607.
  56. Fateixa S, Wilhelm M, Nogueira HIS, Trindade T (2016) SERS and Raman imaging as a new tool to monitor dyeing on textile fibres. J Raman Spectrosc 47: 1239-1246.
  57. Leona M, Lombardi JR (2007) Identification of berberine in ancient and historical textiles by surface-enhanced Raman scattering. J Raman Spectrosc 38: 853-858.
  58. Leona M, Stenger J, Ferloni E (2006) Application of surface-enhanced Raman scattering techniques to the ultrasensitive identification of natural dyes in works of art. J Raman Spectrosc 37: 981-992.
  59. Meleiro PP, Garcia-Ruiz C (2016) Spectroscopic techniques for the forensic analysis of textile fibers. ApplSpectrosc Rev 51: 258-281.
  60. Zaffino C, Ngo HT, Register J, Bruni S, Vo-Dinh T (2016) "Dry-state" surface-enhanced Raman scattering (SERS): toward non-destructive analysis of dyes on textile fibers. Appl Phys A Mater Sci Process.
  61. Meleiro PP, Garcia-Ruiz C (2016) Spectroscopic techniques for the forensic analysis of textile fibers. ApplSpectrosc Rev 51: 258-281.