Development of Rapid Pre and Post Mortem On-farm Diagnostic Test Kit for Porcine Cysticercosis (Pork Tapeworm)

DOI: 10.31038/MIP.2022321

Executive Summary

The goal of this project is to enhance the sustainable productivity, value addition and competitiveness of the pig industry in Uganda through easier, more user-friendly and more accurate diagnosis, control and prevention of Taenia solium cysticercosis. Enhanced control and prevention of the infection is also expected to increase pork trade and food safety, prevent human infections and eliminate a health risk with both social and economic implications.

Background

Cysticercosis is caused by the zoonotic tapeworm Taenia solium, which is transmitted among humans and between humans and pigs. Humans acquire taeniosis (tapeworm infection) when they eat raw or undercooked pork contaminated with cysticerci, the larval form of T. solium. Once ingested, the cysticerci establish in the human intestine, develop into adult tapeworms and shed eggs in feces, which can in turn infect other humans and pigs through direct contact or through contamination of water or food.

Epidemiological studies of porcine cysticercosis (pork tapeworm) require identification of pigs harbouring viable Taenia solium cysticerci and estimates of the degree of exposure to the parasite in the pig population destined for human consumption. Stool microscopy for diagnosis of taeniasis is inefficient and is therefore not recommended unless there is a specific indication and no suitable alternative; even with multiple samples and concentration of large stool volumes, its sensitivity does not exceed 60 to 70% (Allan et al. 1993). Detection of Taenia eggs and proglottids in definitive hosts does not distinguish between T. solium and T. saginata, whose intermediate hosts are pigs and cattle respectively. However, given that the prevalence of infection with either species is usually low, parasitologic diagnosis plays a relatively minor role in control programmes. For diagnosis of cysticercosis, histological confirmation of excised cysts is rarely required and not easily undertaken, except in the small proportion of patients with subcutaneous nodules in whom biopsy can provide diagnostic support. Currently, a few copro-PCR techniques and non-commercial copro-Ag-ELISA assays are available. In contrast to PCR, most copro-Ag-ELISA assays are genus rather than species specific and thus cross-react with T. saginata (beef tapeworm). Antibody detection tests require parasitic cysts or tapeworm excretory/secretory material as a source of antigen; assays using recombinant or synthetic antigens, if available, would be more suitable. In the intermediate host, porcine cysticercosis can be diagnosed by tongue inspection, antibody or antigen detection, or postmortem inspection at slaughterhouses. Rapid lingual examination for cysts is inexpensive but insensitive (Willingham 2006), and detection of cysts at slaughter is likewise insensitive. Uganda is ranked among the biggest per-capita consumers of pork in Africa.
Roast pork served with beer is a booming business in Uganda. This half-cooked pork carries a high risk of transmitting T. solium, yet routine deworming is not commonly practiced by Ugandans. Ante-mortem diagnosis of the infection in humans and pigs is not developed in Uganda, so many cases go untreated. We intend to develop and evaluate a recombinant antigen, a lateral flow assay and a LAMP assay for rapid diagnosis of porcine cysticercosis. This will contribute to the control of this zoonotic disease and of its human sequelae in Uganda, such as neurocysticercosis and epilepsy [1-4].

Porcine cysticercosis is a zoonotic disease that is highly prevalent in humans, livestock (12.2-25.7% in pigs) and wild suidae in Uganda. The popularity of half-cooked roast pork served with beer carries a risk of neurocysticercosis in humans, which increases the incidence of epilepsy in Uganda. Controlling porcine cysticercosis therefore improves not only livestock health and productivity but also socio-economics and public health.

Given the magnitude of the problem of porcine cysticercosis in Uganda, and given that the physical tests applied for its diagnosis, carried out at the point of slaughter or postmortem, are insensitive, more sensitive point-of-care and field tests are needed. Field-based diagnostics would direct treatment and control, improving livestock health and production as well as human health. Research in Uganda has focused mostly on the prevalence of porcine cysticercosis; little effort has gone into improving disease diagnosis. Kungu et al. (2015) reported an overall apparent sero-prevalence of 12.2%, while Nsadha et al. (2014) previously reported 25.7% in the Lake Kyoga basin. Effective disease control depends on accurate diagnosis.
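The case for more sensitive and specific field tests can be made concrete with Bayes' rule: at the sero-prevalences quoted above (12.2% and 25.7%), even a fairly specific test yields a substantial share of false positives. The sketch below is a minimal calculation; the 70% sensitivity and 95% specificity values are purely illustrative assumptions, not measured properties of any actual assay.

```python
# Predictive values of a hypothetical diagnostic test via Bayes' rule.
# Sensitivity/specificity below are illustrative assumptions; the
# prevalences (12.2% and 25.7%) are the sero-prevalences cited above.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence                 # true positives
    fp = (1 - specificity) * (1 - prevalence)     # false positives
    fn = (1 - sensitivity) * prevalence           # false negatives
    tn = specificity * (1 - prevalence)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.122, 0.257):
    ppv, npv = predictive_values(0.70, 0.95, prev)
    print(f"prevalence {prev:.1%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At 12.2% prevalence this hypothetical test would leave roughly a third of positive calls false, which is why assay sensitivity and specificity, not just convenience, drive the value of a field test.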

Keywords

Animal health, Animal production, Pigs, Zoonotic diseases, Biotechnology, Uganda

References

  1. Allan JC, Mencos F, Garcia-Noval J, Sarti E, Flisser A, et al. (1993) Dipstick dot ELISA for the detection of Taenia coproantigens in humans. Parasitology 107: 79-85. [crossref]
  2. Willingham AL III, Engels D (2006) Control of Taenia solium Cysticercosis/Taeniosis. Advances in Parasitology 61: 509-566.
  3. Kungu JM, Dione MM, Ejobi F, Ocaido M (2015) Status of Taenia solium cysticercosis and predisposing factors in developing countries involved in pig farming. Int J One Health 1: 6-13.
  4. Nsadha Z, Thomas FL, Fèvre ME, Nasinyama G, Ojok L, Waiswa C (2014) Prevalence of porcine cysticercosis in the Lake Kyoga Basin, Uganda. BMC Veterinary Research 10: 239. [crossref]

How to Nail Down Trace Proteins in Any Sample

DOI: 10.31038/MGJ.2022514

Abstract

Proteins present in most biological mixtures are expressed over a vast concentration range (up to 10-12 orders of magnitude in human sera), the most abundant ones complicating the detection of low-abundance species or trace components. Classical approaches, such as pre-fractionation and immuno-depletion, are frequently used to remove the most abundant species. Unfortunately, these methods not only fail to concentrate trace components, which may remain below the detection limits of analytical approaches, but may also cause non-specific depletion of other components (including low-abundance ones). In the case of immuno-depletion, the situation is hardly better: untargeted proteomic analyses using current LC-MS/MS platforms with immuno-depletion cannot be expected to efficiently discover low-abundance, disease-specific biomarkers in plasma, since the gain in detection of these trace components after such treatment is a meagre 25% increase, accounting for only 5-6% of total protein identifications in depleted plasma. The characterization of minor components in complex protein systems has been revolutionized by the introduction of combinatorial peptide ligand library technology. This methodology is based on the use of hexapeptide baits to capture and normalize the relative concentrations of the components of any proteome under investigation. Its major advantage over other pre-fractionation methods is that it not only diminishes the concentration of the more abundant proteins but also concentrates low-abundance and even trace components, thus providing access to the "invisible" proteome. In addition, it avoids the loss of low-abundance species that may be accidentally eliminated by co-depletion with immuno-subtraction methods.

The Problems with Current Depletion Methods

The elimination of high-abundance proteins is usually achieved by immunodepletion, using specific solid-phase antibodies against the proteins to be suppressed. The method is quite effective; however, it suffers from a vicious circle: the small volumes of expensive immunosorbents accept only small samples, in which the amount of targeted markers is already very low, and the markers are further diluted during the process, rendering their detection even more challenging without post-concentration. Concentration is naturally possible, but it contributes to protein losses. Immunosorbents are also limited by their species specificity; those available for proteomics are essentially restricted to the treatment of human blood plasma. Conversely, enrichment methods based on solid-phase adsorption of targeted species or groups (e.g. glycoproteins, phosphoproteins and other classes), or on solid-phase combinatorial affinity ligands, are by far more effective, since they allow much larger initial biological samples and hence larger quantities of targeted low-abundance proteins. The Combinatorial Peptide Ligand Library (CPLL) is a sample-treatment technology that has repeatedly demonstrated its capability to detect proteins that are most of the time ignored because they lie well below the sensitivity of proteomics equipment and methods. It is additionally of general use for various biological materials and various species. This original procedure, rooted in affinity chromatography mechanisms and used under overloading conditions, contributes not only to improving proteomics knowledge but, more importantly, to detecting dilute proteins expressed at the early stage of metabolic diseases.
After years of application under various conditions and to various samples, low-abundance protein detection by CPLL in early-stage disease is gaining momentum as a discovery route for the design of diagnostic tools. The mechanism of action is explained in the following sections, together with examples of panels of exclusive low-abundance proteins detected in various diseases [1,2].

Biomarker Discovery, a Major Target in Proteomics Investigations

Proteins and their variants are produced in very large numbers, and their individual concentrations span an extremely wide range, covering at least a dozen orders of magnitude if not more. This renders the detection of low- and very low-abundance species very challenging or, in practice, impossible: without any sample treatment, the large majority of proteins cannot be detected, either because their concentration is below detection limits or because their signal is suppressed by the most abundant proteins. As outlined above, enrichment by solid-phase adsorption of targeted species or groups, or by solid-phase combinatorial affinity ligands, accommodates much larger initial samples than depletion and hence captures larger quantities of targeted low-abundance proteins. This is why CPLL-based detection of low-abundance proteins in early-stage disease is gaining momentum as a route towards diagnostic tools, and why markers discovered this way are promising candidates for diagnostic design.

The CPLL Capabilities to Detect Low-abundance Proteins from Early Stage Gene Expression

A combinatorial peptide ligand library (CPLL) is a relatively recent technology, now extensively described, with successful applications in animal and plant proteomics investigations. Many applications have been reported, with a major interest in the discovery of low- and very low-abundance proteins that remain undetectable even after immuno-depletion of major species. In practice, the CPLL procedure compresses the dynamic concentration range of protein components by simultaneously decreasing the concentration of high-abundance species and enriching low- and very low-abundance ones (for reviews see [1,2]). The concept was introduced several years ago and has since retained its interest for many applications, including the discovery of markers of diagnostic and prognostic value. The library is composed of millions of porous spherical gel beads, each covalently carrying many copies of a single hexapeptide structure. It is made by a combinatorial synthesis process in which natural amino acids are grafted one after the other (the split-and-pool procedure). Each bead can be considered an affinity chromatography sorbent addressing a single protein, or a group of proteins from the crude biological sample sharing affinity for the same peptide structure. Since the mixture carries millions of different affinity beads, most if not all proteins are adsorbed. Under large sample overloading, concentrated (high-abundance) proteins rapidly saturate their corresponding affinity beads while the excess remains free in solution; conversely, very dilute (very low-abundance) proteins converge towards their specific beads and are thus concentrated. Upon completion of the binding process, which is dominated not only by adsorption but also by quite intensive displacement effects, the beads are washed and all proteins in solution, mainly the excess of high-abundance proteins, are eliminated.
The adsorbed proteins are then desorbed using dissociating agents such as those adopted in affinity chromatography; the collected sample thus comprises all captured proteins with a much reduced dynamic concentration range. In this sample, low-abundance proteins become detectable, first because they are concentrated by the affinity process and also because their signal is no longer obscured by the high-abundance species, which are now largely diluted. The intense competition among proteins during the adsorption phase results from the numerous molecular interactions, acting singly or collectively, that are generated by the mixed-mode affinity ligand library (the peptide): hydrophobic associations, electrostatic interactions and hydrogen bonding. The interaction forces are governed by the mass action law for systems associating by molecular affinity; the association and dissociation of partners depend on environmental conditions such as pH, the ionic strength of the buffer, the temperature, the presence and concentration of competitors, and the extent of overloading. All these physicochemical parameters must be controlled carefully to obtain maximum reproducibility between samples. The two major success factors are (i) the enrichment of low-abundance species, which depends on the availability of biological sample (the larger the sample, the higher the level of enrichment), and (ii) the ability to desorb all proteins captured by the beads. In comparison with the so-called "depletion" or "immuno-depletion" technologies, CPLLs show markedly distinctive characteristics: while depletion does not concentrate the low-abundance proteins, the main property of CPLLs is to concentrate most very dilute species and bring them to the level of detectability of current analytical methods.
High-abundance proteins are not eliminated, as is the case with depletion methods, but rather maintained at a certain level of concentration, thus conserving their property of carrying other interacting polypeptides that would otherwise be lost. The risk of protein losses through non-specific binding to solid supports is limited with CPLLs, although the adsorbed proteins require complete elution using appropriate dissociation methods. Alternatively, after extensive washing, the protein-loaded beads can be trypsinized directly to produce peptides that are collected and streamlined into LC-MS/MS equipment for protein identification.
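The saturation-plus-capture mechanism described above can be illustrated with a toy numerical sketch. All quantities below are hypothetical, and the model is deliberately simplistic (each protein binds only its own bead population, and dilute proteins are assumed to be captured completely); it shows only how capping abundant species at bead capacity compresses the dynamic range of the eluate.

```python
# Toy model of CPLL dynamic-range compression (all values hypothetical):
# each protein saturates its own bead population at a fixed capacity, so
# abundant proteins are capped while dilute ones are retained.
import math

bead_capacity = 1e-6  # mol bound per bead population (assumed)

# hypothetical sample spanning 10 orders of magnitude (mol)
sample = {"albumin-like": 1e-3, "mid-abundance": 1e-7, "trace": 1e-13}

# amount recovered in the eluate: bound amount, capped at bead capacity
captured = {name: min(amount, bead_capacity) for name, amount in sample.items()}

def dynamic_range(d):
    """Orders of magnitude between the most and least abundant species."""
    return math.log10(max(d.values()) / min(d.values()))

print(f"before: {dynamic_range(sample):.0f} orders of magnitude")
print(f"after:  {dynamic_range(captured):.0f} orders of magnitude")
```

In this sketch the range shrinks from 10 to 7 orders of magnitude purely through saturation of the abundant species; the real procedure additionally concentrates dilute species by drawing them out of a large overloaded sample, which this toy model does not capture.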

Expected Upcoming Developments with CPLL

Recent technological developments aimed at identifying early-stage modifications of protein expression in various critical pathologies hold great promise. Statistical observation of low-abundance proteins expressed during the development of some cancers will improve the reliability of the selection of marker candidates.

Post-translational modifications such as truncations, mis-glycosylations and erroneous phosphorylations, which are also tracked as potential biomarkers, could eventually be circumvented if the enzymes at the origin of such modifications are identified as very low-abundance proteins dependent on faulty or altered regulation of the expression system. Although the performance of CPLLs has largely contributed to the progress of novel discoveries, complementary approaches, associated with modern and more sensitive equipment, will increase the probability of novel, reliable and affordable findings.

For CPLL technology itself, developments are envisioned that could advantageously be combined with specific enrichment technologies. The general enrichment governed by the multi-affinity principle could be enhanced by adding accurately selected adsorbents to the libraries, either to further enrich low-abundance species or to enrich a particular group of proteins; the main principles of this approach were already suggested in 2015. The mode of use of CPLLs could also be progressively standardized as a function of the type of biological sample. For the discovery phase of novel biomarkers, two main general routes are used today: (i) direct comparison of expression differences between fluorescently labelled samples by 2D-DIGE separation analysis and (ii) indirect comparison of tryptic digests of enriched samples, classified as a bottom-up approach.

Both are faster than intricate protein-capture methods involving multiple sequential elutions from beads followed by clean-up and/or fractionation steps, with their additional risk of protein losses. Although to date the return from massive efforts in proteomics is quite scarce in terms of diagnostic tests, the search for early-stage protein expression modifications continues. Accelerating exploitable results, in view of bringing findings to clinical practice, is contingent upon deep collaboration between laboratories with complementary skills and interests, including industrial organizations as well as biobanks and clinicians. In this endeavour, it is believed that CPLLs, as described here or enriched with additional features, may contribute to the discovery of early-stage potential protein biomarkers allowing the differentiation of patient subgroups, in line with current trends in personalized medicine.

Conclusions

In the early years of CPLL applications, we felt the growth of the methodology was in a stage of "Andante moderato", as in the second theme of the third movement of Ludwig van Beethoven's famous Symphony No. 9. Yet, as the years went by, and as witnessed by the graph in Figure 1, it would appear that CPLLs have now reached the stage of "Andante maestoso", as in the fourth movement of the Symphony, which culminates in the setting of Friedrich Schiller's Ode to Joy. It is hoped that more and more scientists will pick up the technique, given its high performance (Figure 2).

Figure 1: Progression of the number of publications over years mentioning the use of CPLL. A: representation of the number of published papers from 2005 to 2016. The last year represents an incomplete count. B: progression of published reports on CPLL evidencing or mentioning their use within the domain of biomarker discovery. Each bar is expressed in % of the total number of published papers shown above in panel A.

Figure 2: Two dimensional polyacrylamide gel electrophoresis of serum from healthy women (left panel) and from epithelial ovarian cancer (right panel). The first dimension separation was performed by using a relatively narrow pH gradient from 4 to 7. Both serum samples were treated by CPLL. Three spots of significantly different density between the groups were found (see arrowed indications) and then identified by MALDI mass spectrometry.

References

  1. Righetti PG, Boschetti E (2013) Combinatorial peptide libraries to overcome the classical affinity-enrichment methods in proteomics. Amino Acids 45: 219-229. [crossref]
  2. Boschetti E, Righetti PG (2013) Low-Abundance Protein Discovery: State of the Art and Protocols. Elsevier, Amsterdam. ISBN 978-0-12-401734-4.

Weight Change in Tertiary Students: Implications for Academic Performance

DOI: 10.31038/NRFSJ.2022524

Abstract

Background: Although adequate nutrition and good health are known to promote academic success, tuition and non-tuition expenses often force students to change their eating patterns after starting tertiary education. The unresolved dilemma is that high rates of obesity exist alongside high rates of hunger on campuses in Jamaica. This study, among students in three tertiary institutions in Jamaica, examined the risk factors for weight gain and weight loss. More importantly, the aim was to determine whether these weight changes affected students' academic performance.

Results: While overall weight gain and weight loss were similar (34-37%), older students experienced more weight gain (39.7%) and males more weight loss (41.7%). Significantly fewer fully employed students (24.6%) lost weight than those partially employed (43.9%) or without a job (43.2%). Disordered eating was common (39.2%) and was associated mainly with weight loss. Lower GPA scores were correlated with weight loss. Key independent factors related to weight change were age, gender, disordered eating, the amount and type of food consumed, depression and anxiety.

Conclusion: The large proportion of students with weight change cannot be ignored by campus administrators. Ongoing programs are clearly not sufficient to halt these trends. For tertiary institutions to meet their education mandate, authorities must provide an enabling environment for students at risk of major weight changes. Policies and programs such as regular screening of students, and education to impart relevant nutritional knowledge and improve practices, are vital to promote student health and ultimately academic success.

Keywords

Weight change, Eating disorders, Risk factors, Policies, Higher education

Introduction

University life is a critical period for cementing healthy and sustainable eating habits [1]. Not only do adequate nutrition and good health promote academic success, but in the long run they also reduce government expenditure on the management and treatment of chronic diseases. In Jamaica, chronic disease prevalence has increased steadily among the adult population, and for the last three decades chronic diseases have been responsible for most illness and deaths. In addition, the total economic burden on individuals has been estimated at over US$600 million [2].

While recent research has focused on “the Freshman 15” – a term used to describe the rapid and dramatic increase in weight of college students [3], there is also evidence that food insecurity significantly affects students as well [4,5]. This study therefore examined the apparent paradox between weight gain and weight loss among tertiary students in Jamaica.

National surveys in Jamaica show adult overweight/obesity rising from 34% in 2000 to 49% in 2008 and 54% in 2016 [6]. This trend is astounding, representing a 59% relative increase in 16 years, and clearly calls for bold and sustained corrective action. Further, other surveys have documented an increase in the major behavioural risk factors and in NCDs such as diabetes, hypertension and obesity among adults [7]. In 2017, PAHO estimated that 78% of all deaths were caused by NCDs, with 30% of NCD deaths occurring prematurely between 30 and 70 years of age [8]. Coupled with such high rates of chronic disease, almost half (46.0%) of the adult population are classified as having low physical activity, and approximately 99.0% of adults consume well below the daily recommended portions of fruits and vegetables [6].
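The 59% figure quoted above is the relative (not absolute) increase over the 16-year period, which can be verified directly:

```python
# Relative increase in adult overweight/obesity prevalence in Jamaica,
# from 34% in 2000 to 54% in 2016 (figures from the text).
start, end = 34.0, 54.0
relative_increase = (end - start) / start * 100
print(f"relative increase: {relative_increase:.0f}%")
```

The absolute increase is 20 percentage points; dividing by the 2000 baseline of 34% gives the roughly 59% relative rise.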

Graduates from tertiary institutions are expected to be the main drivers of sustainable development. These institutions should therefore obtain a better understanding of food insecurity and eating behaviours among students as they not only have the potential to influence academic performance, student retention and graduation rates but also allow such institutions to provide key evidence needed to advocate for and develop policies at the national and regional levels [5].

Methods

A quantitative survey was used to capture the dynamics that can affect student eating behaviour and to gain insight into the mechanisms through which weight change can harm academic performance. Three tertiary institutions participated in this self-reporting study: the University of Technology, Jamaica, the University of the Commonwealth Caribbean and Shortwood Teachers' College. About 300 students from each institution were randomly selected to participate, with efforts made to stratify by faculty. A pilot test of the questionnaire was done on 20 students at different levels to assess clarity and understanding, and the results were used to modify the structure and content of questions where necessary. To encourage honesty and preserve confidentiality, students were not required to give their names, identification numbers, or any information that could be traced to them individually. After ethical clearance and permission from the university authorities, coordinators from each institution were assigned to administer the questionnaire. No payment was given to students for completing it. Responses were scrutinized for completeness and quality. The analysis was planned to reveal several descriptors of weight gain and weight loss, and independent factors were identified using weight gain and weight loss as dependent variables.

Results

Analysis of the combined data from the three institutions showed that the proportion of students who reportedly gained weight (34.0%) was close to the proportion who lost weight (36.9%). Stark differences in weight change were, however, found across subgroups.

Figure 1 shows that among those less than 22 years old, 44.4% lost weight and 31.1% gained weight. In the 22-28-year-old category, more students also reported that they had lost weight (37.9%) compared to those who had gained (33.3%). In the over 28 years old category, however, more reported that they had gained weight (39.7%) and 22.6% lost weight.

Figure 1: Weight change by age of tertiary students

Among males, the greatest percentage (41.7%) reported that they had lost weight while 24.6% had gained weight. Among females, weight gain was found in 36.6% of them and 35.4% lost weight.

Some students with full-time employment reported that they had gained weight (35.5%) while 24.6% reported they had lost weight. Among students who were employed part-time, the greatest percentage (43.9%) had lost weight, while 35.1% had gained weight. A larger proportion of unemployed students (43.2%) reported that they had lost weight while 32.5% had gained weight (Table 1).

Table 1: Relationship between Employment Status and Weight Change

                   Employment status
Weight Change    Full Time   Part Time   No Job
Gained (%)          35.5        35.1       32.5
Lost (%)            24.6        43.9       43.2
No Change (%)       39.9        21.1       24.3
Total (N)            338         171        465

P<.001
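The association reported in Table 1 can be checked by reconstructing approximate counts from the table's percentages and column totals and running a chi-square test of independence (here using SciPy). The counts below are approximations derived from the published figures, not the authors' raw data.

```python
# Chi-square test of independence for Table 1: employment status vs.
# weight change. Counts reconstructed from percentages and column totals.
from scipy.stats import chi2_contingency

# columns: Full Time (N=338), Part Time (N=171), No Job (N=465)
observed = [
    [round(0.355 * 338), round(0.351 * 171), round(0.325 * 465)],  # gained
    [round(0.246 * 338), round(0.439 * 171), round(0.432 * 465)],  # lost
    [round(0.399 * 338), round(0.211 * 171), round(0.243 * 465)],  # no change
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

With a 3x3 table (4 degrees of freedom) the test statistic is large and the p-value falls well below 0.001, consistent with the significance level reported in the table.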

As expected, more students (40.8%) who ate 3 times per day gained weight whereas 18.9% lost weight. In contrast, most students who reported that they ate once per day had lost weight (54.0%) while 27.7% had gained weight (Table 2).

Table 2: Relationship between Meal Frequency and Weight Change

                   Meals per day
Weight Change       3+       2        1
Gained (%)         40.8    33.9     27.7
Lost (%)           18.9    37.0     54.0
No Change (%)      40.3    29.1     18.3
Total (N)           201     570      202

P<.001

Among all students, 39.2% indicated that their eating changes were due to a disorder. Of those reporting a disorder, 41.6% had lost weight, 37.4% had gained weight and 21.1% had no change in weight. Among those whose consumption changes were not due to a disorder, the differences were smaller: 34.6% reported no weight change, 33.7% reportedly lost weight and 31.7% indicated they had gained weight (Table 3).

Table 3: Changes in eating habits due to a disorder and its relation to weight change

                 Eating changed due to disorder
Weight Change       Yes       No
Gained (%)         37.4     31.7
Lost (%)           41.6     33.7
No Change (%)      21.1     34.6
Total (N)           380      590

P<.05

Figure 2 shows that of those who were undernourished or of normal weight, 50.3% said they lost weight. Of those who were overweight, 44% said they gained weight, and of those who were obese, 49.7% said they gained weight.

Figure 2: Weight change according to weight status

Figure 3 shows that when the type of food consumed changed, there was an overall loss in weight. Students were also asked whether anxiety and stress caused changes in their consumption habits; most indicated this was the case (59.9%). Of these, 44.3% reported that they had lost weight and 32.3% that they had gained weight. Of those whose eating had not changed because of stress and anxiety, 36.7% gained weight and 25.6% reportedly lost weight.

Figure 3: Other Causes of weight change

A minority of students (39.3%) indicated that they were involved in planned physical activity. Among these, 42.1% reported that they had lost weight and 30.6% that they had gained weight (Figure 3). The duration of physical activity was categorized as less than 1 hour or 1-2 hours per week; there was no difference in weight change according to duration. Most students (60.7%) reported that they were not involved in planned physical activity; of these, 35.9% reportedly gained weight, 33.6% lost weight and 30.5% experienced no change. Grade point averages (GPAs) were used to denote academic performance. Figure 4 shows that students with lower GPAs (<3.2) experienced more weight loss than the higher achievers (GPA >3.2).

Figure 4: Weight change in relation to academic performance (GPA)

The likelihood ratio test was used to identify the independent factors related to weight gain and weight loss. Table 4 shows that full-time employment was the main factor related to weight gain, whereas eating one meal per day was the main factor related to weight loss.

Table 4: Independent factors related to weight change

Weight gain vs. no change: Full-time employed student (p<.001); Older student (p<.05); Female student (p<.05); Eating disorder (p<.05); Type of food consumed (p<.05)

Weight loss vs. no change: 1 vs. 3 meals per day (p<.001); Depression & anxiety (p<.05); Type of food consumed (p<.05)
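The likelihood ratio test behind Table 4 compares a full model against a reduced model with one factor dropped; twice the difference in log-likelihood is referred to a chi-square distribution. A minimal sketch of the one-degree-of-freedom case follows (the log-likelihood values are hypothetical, not taken from this study):

```python
import math

def lr_test_1df(ll_full, ll_reduced):
    """Likelihood-ratio test for dropping one factor from a nested model.
    The statistic 2 * (ll_full - ll_reduced) follows a chi-square
    distribution with 1 degree of freedom, whose survival function
    is erfc(sqrt(stat / 2))."""
    stat = 2.0 * (ll_full - ll_reduced)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical log-likelihoods: full model vs. model without one factor
stat, p = lr_test_1df(-512.4, -516.1)  # stat = 7.4, p < .05
```

A p-value below .05 flags the dropped factor as independently related to weight change, which is how the entries in Table 4 would be identified.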

Discussion

Previous studies in Jamaica revealed high levels of food insecurity among university students [9]. The natural hypothesis from this observation is that food insecurity would express itself as weight loss. However, with obesity on the rise in Jamaica, that hypothesis needed to be tested. Weight gain among students who are food insecure may be the result of poor food choices, lifestyle habits (harmful use of alcohol or lack of engagement in physical activity) or the body’s response to stressors including finances, academic pressures and work life. While evidence from the Caribbean is sparse and mostly anecdotal, a review of the literature shows that in several countries a double burden of sorts exists – both food insecurity and overweight/obesity, particularly among women and youth [10].

Evidence from other parts of the world suggests that university students consume more unhealthy foods (processed foods with high total and saturated fats) and have lower intakes of fresh fruit and vegetables [10-12]. Such behaviours may carry on to adulthood thereby contributing to the burden of disease seen among middle- and older-aged adults at the population level.

The self-reported weight changes in this study indicate significantly increasing weight gain with age. This is consistent with the observation that the 18-29 year-old age group is often viewed as being at high risk for weight-related behaviour change as the transition from adolescence to adulthood is made and there is more freedom regarding the type and amount of food consumed [11]. Even when students are aware of the health consequences of overconsumption of unhealthy foods, food choices have been shown to be heavily influenced by convenience and taste [12].

Most published studies have focused specifically on weight gain, but this study shows that weight loss is relatively high, particularly among males. Globally, studies examining sex-related changes in weight have shown that both males and females experience weight changes when they enter university with the greatest changes in weight taking place in the first semester [13].

This study found higher weight gain among fully employed students. Working full time has been linked to unhealthy eating and consequently weight gain [14].

Research shows that stress can result in changes in food intake patterns and, among university students, has been linked to maladaptive eating behaviours such as consumption of unhealthy foods and overeating [15]. In fact, there is an apparent link between age and the types of foods viewed as ‘comfort foods’ used to cope with stressors, with younger persons seeming to prefer snack-related foods such as ice cream, candy, and sweet breads [16]. As university enrolment often brings major life changes, for example new living arrangements, new social situations and increased academic and time demands, stress and its consequent effect on dietary habits may be commonplace [16]. As expected in this study, students who ate more meals gained more weight, but this relationship is complex. Meal frequency plays an important role not just in academic achievement but also in long-term health. The omission of one or more meals from the diet has been linked to poorer diet quality, increased risk of abdominal adiposity and increased BMI [17-21]. Reasons for meal skipping among university students include a lack of hunger, depression, lack of time, lack of money, and lack of cooking skills [22,23].

The cost of food has been shown to be a key factor influencing what students purchase [24]. Foods that are relatively cheap also tend to be high in salt, sugar, fat and flavour additives, which have been identified as contributory factors to the obesity epidemic. Disordered eating is associated with weight gain, overweight and obesity among adolescents and young adults [25,26]. The high percentage of disordered eating among Jamaican students, which in this study resulted mainly in weight loss, is worrisome. Studies have shown that disordered eating is more prevalent among those experiencing feelings of anxiety, loneliness and stress, all of which are common among university students [27].

Among university students, research shows that physical activity levels continue to decline while engagement in sedentary activities continues to increase [28]. This study found that only a minority of students were involved in planned physical activity. Several factors have been proposed to explain this decline: university students have greater control of their daily lives and are not mandated to participate in physical activity; residence (on- or off-campus); time demands (including time spent on social media); and access to facilities where physical activities are offered or can be engaged in safely [29]. Data from the United Kingdom suggest that up to 60% of university students are not meeting physical activity recommendations [30].

Campus administrations have an obligation to equip students not only with knowledge for the world of work but also for a healthy lifestyle, which is integrally linked to work performance. Efforts can include: (1) Screening students at the start of each school year for food insecurity to gauge the type and quantity of support required. (2) Collaborating with food manufacturers and supermarkets for food donations to university/college campuses. (3) Conducting an interactive “Cooking on a Budget” program during the semester which teaches students how to cook quick, cheap, and healthy meals. (4) Meal program: provision of a student space which offers coffee/tea and affordable, healthy snacks, sandwiches etc. (5) Pantry program: installation of a student-run pantry that students can access, which includes toiletries and grocery vouchers for students in need.

Acknowledgment

We thank the University of Technology, Jamaica, for providing funding through the Research Development Fund, managed by the University’s Research Management Office, the School of Graduate Studies, Research & Entrepreneurship. Gratitude is expressed to Mr. Kevin Powell (University of the Commonwealth Caribbean) and Ms. Ava-Marie Reid (Shortwood Teachers’ College), who coordinated the data collection at their respective institutions.

References

  1. Kim HS, Ahn J, No JK (2012) Applying the Health Belief Model to college students’ health behavior. Nutrition Research and Practice 6: 551-558. [crossref]
  2. Ministry of Health and Wellness Jamaica (2013). National strategic and action plan for the prevention and control of NCDs in Jamaica 2013-2018. Kingston, Jamaica Retrieved from http://moh.gov.jm/wp-content/uploads/2015/05/National-Strategic-and-Action-Plan-for-the-Prevention-and-Control-Non-Communicable-Diseases-NCDS-in-Jamaica-2013-2018.pdf
  3. Sharif MR, Sayyah M (2018) Assessing physical and demographic conditions of freshman “15” male medical students. International Journal of Sport Studies for Health 1: e67421.
  4. Freudenberg N, Goldrick-Rab S, Poppendieck J (2019) College students and SNAP: The new face of food insecurity in the United States. American Journal of Public Health 109: 1652-1658. [crossref]
  5. Payne-Sturges DC, Tjaden A, Caldeira KM, Vincent KB, Arria AM (2018) Student hunger on campus: Food insecurity among college students and implications for academic institutions. American Journal of Health Promotion 32: 349-354. [crossref]
  6. JHLS (2000). Jamaica Health and Lifestyle Survey. Ministry of Health 2000.
    ______ (2008). Jamaica Health and Lifestyle Survey. Ministry of Health 2008
    ______ (2018). Jamaica Health and Lifestyle Survey. Ministry of Health 2018
  7. Ministry of Health (2013) National Strategic and Action Plan for the Prevention And Control Non-Communicable Diseases (NCDS) in JAMAICA 2013-2018. Retrieved from: https://www.moh.gov.jm/wp-content/uploads/2015/05/National-Strategic-and-Action-Plan-for-the-Prevention-and-Control-Non-Communicable-Diseases-NCDS-in-Jamaica-2013-2018.pdf
  8. PAHO (2017) Regional mortality estimates 2000-2015. PAHO 2017.
  9. Henry FJ, Nelson M, Aarons R (2020) Learning on empty. In University Student Life and learning: Challenges for Change. Ed: Fitzroy J. Henry, University of Technology, Jamaica Press 102-112.
  10. Au L, Zhu S, Ritchie L, Nhan L, Laraia B, et al. (2019) Household Food Insecurity Is Associated with Higher Adiposity Among US Schoolchildren Ages 10–15 Years (OR02-05-19). Current Developments in Nutrition 149: 1642-1650. [crossref]
  11. Nelson MC, Story M, Larson NI, Neumark-Sztainer D, Lytle LA (2008) Emerging adulthood and college-aged youth: an overlooked age for weight-related behavior change. Obesity 16: 2205-2211. [crossref]
  12. Abraham S, Noriega B, Shin J (2018) College students eating habits and knowledge of nutritional requirements. Journal of Nutrition and Human Health 106: 46-53. [crossref]
  13. Lloyd-Richardson EE, Bailey S, Fava JL, Wing R, Network TER (2009) A prospective study of weight gain during the college freshman and sophomore years. Preventive Medicine 48: 256-261. [crossref]
  14. Escoto KH, Laska, MN, Larson N, Neumark-Sztainer D, Hannan PJ (2012) Work hours and perceived time barriers to healthful eating among young adults. American Journal of Health Behavior 36: 786-796. [crossref]
  15. Lyzwinski LN, Caffery L, Bambling M, Edirippulige S (2019) The mindfulness app trial for weight, weight-related behaviors, and stress in university students: randomized controlled trial. JMIR mHealth and uHealth 7: e12210. [crossref]
  16. Kandiah J, Yake M, Jones J, Meyer M (2006) Stress influences appetite and comfort food preferences in college women. Nutrition Research 26: 118-123.
  17. Chung H-Y, Song M-K, Park M-H (2003) A study of the anthropometric indices and eating habits of female college students. Journal of Community Nutrition 5: 21-28.
  18. Kerver JM, Yang EJ, Obayashi S, Bianchi L, Song WO (2006) Meal and snack patterns are associated with dietary intake of energy and nutrients in US adults. Journal of the American Dietetic Association 106: 46-53. [crossref]
  19. Ma Y, Bertone ER, Stanek III EJ, Reed GW, Hebert JR, et al. (2003) Association between eating patterns and obesity in a free-living US adult population. American Journal of Epidemiology 158: 85-92. [crossref]
  20. Musaiger A, Radwan H (1995) Social and dietary factors associated with obesity in university female students in United Arab Emirates. Journal of the Royal Society of Health 115: 96-99. [crossref]
  21. Timlin MT, Pereira MA (2007) Breakfast frequency and quality in the etiology of adult obesity and chronic diseases. Nutrition Reviews 65: 268-281. [crossref]
  22. Afolabi W, Towobola S, Oguntona C, Olayiwola I (2013) Pattern of fast-food consumption and contribution to nutrient intakes of Nigerian University students. Int J Educ Res 1: 1-10.
  23. Lee J-E, Yoon W-Y (2014) A study of dietary habits and eating-out behavior of college students in Cheongju area. Technology and Health Care 22: 435-442. [crossref]
  24. Tam R, Yassa B, Parker H, O’Connor H, Allman-Farinelli M (2017) University students’ on-campus food purchasing behaviors, preferences, and opinions on food availability. Nutrition 37: 7-13. [crossref]
  25. Barrack MT, West J, Christopher M, Pham-Vera A-M (2019) Disordered eating among a diverse sample of first-year college students. Journal of the American College of Nutrition 38: 141-148. [crossref]
  26. Goldschmidt AB, Aspen VP, Sinton MM, Tanofsky-Kraff M, Wilfley DE (2008) Disordered eating attitudes and behaviors in overweight youth. Obesity 16: 257-264. [crossref]
  27. Costarelli V, Patsai A (2012) Academic examination stress increases disordered eating symptomatology in female university students. Eating and Weight Disorders-Studies on Anorexia, Bulimia and Obesity 17: e164-e169. [crossref]
  28. Lauderdale ME, Yli-Piipari S, Irwin CC, Layne TE (2015) Gender differences regarding motivation for physical activity among college students: A self-determination approach. The Physical Educator 72.
  29. Calestine J, Bopp M, Bopp CM, Papalia Z (2017) College student work habits are related to physical activity and fitness. International Journal of Exercise Science 10: 1009. [crossref]
  30. Aceijas C, Bello-Corassa R, Waldhäusl S, Lambert N, Cassar S (2016) Barriers and determinants of physical activity among UK university students. European Journal of Public Health 26.

Bilateral Stress Fracture of the Femoral Neck: A Case Report

DOI: 10.31038/IJOT.2022524

 

Stress fractures of the femoral neck (sFNF) are rare and occur mostly in athletes and military personnel or in the osteoporotic elderly. The incidence of sFNFs has been found to be 100/100,000 person-years in a military population, while bilateral sFNFs are much rarer, with no incidence rate reported in the literature. The available literature largely focuses on athletes and military personnel [1-13] as potential candidates for sFNF. Prospective series show that 8% of stress fractures in competitive track and field athletes are located in the femur and 5% of stress fractures in a military population are in the femoral neck. A more recent registry study found a dominance of stress fractures in younger patients (<60 years) compared to the elderly: 5.8% versus 1.1% of all femoral neck and basocervical fractures. Fullerton described the symptoms and clinical findings in 49 military recruits with sFNFs.

Of these, 87% had anterior groin pain and 19% had night pain; symptoms were preceded by a long run or march in 40% of patients. Tenderness to inguinal palpation was present in 62% of cases and 79% had pain at the extremes of hip range of motion, while heel percussion very rarely elicited pain [1-14].

Early diagnosis is important to prevent progression to a displaced fracture. In a series of 19 military recruits with displaced sFNFs, 6 patients developed necrosis of the femoral head despite surgery and a total of 13 patients eventually developed osteoarthritis of the hip. Another military series describes femoral head necrosis in 23.8% of patients with displaced sFNFs during a 28-month follow-up period despite surgery [15]. Primary x-rays (antero-posterior and axial) may be inconclusive and can delay diagnosis. This was evident in a study where 90% of military recruits with an MRI-confirmed sFNF had a negative plain radiograph prior to the MRI [16]. Another military study found a sensitivity of only 37% for plain radiography in detecting pelvic or hip stress fractures [7], while MRI has a sensitivity of 100%.

Case Report

This case report concerns a 55-year-old female who sequentially developed bilateral sFNFs with osteopenia as the only risk factor. The patient was initially referred for evaluation at our orthopedic outpatient clinic with complaints of right-sided deep groin pain, C-shaped pain and a feeling of the hip ”giving in” for the past 10 weeks. A recent x-ray showed no fracture. An earlier x-ray of the hip and pelvis two years prior had revealed minimal arthrosis and retroversion of the acetabulum; no valgus or varus malalignment was found. A labral injury was suspected and an MRI-arthrography was ordered but failed to visualize the labrum due to extracapsular placement of contrast fluid. Surprisingly, the scan revealed minimal callus formation at the right femoral neck as a sign of a healed fracture. As the fracture had already healed, the right-sided sFNF was treated conservatively. The patient was admitted to the emergency department 2½ years later with identical symptoms, now arising from her left hip. Plain radiographs and a CT scan of the pelvis and hip showed no fracture. MRI revealed an incomplete tension-sided fissure in the femoral neck involving less than 50% of femoral neck width. The patient was in pain and signs of healing were absent on the MRI. The left sFNF was therefore treated surgically to prevent displacement.

The patient was postmenopausal and had a family history of osteoporosis. BMI was 28.4. Prior to the first orthopedic evaluation, 2.5 years earlier, a DEXA scan showed osteopenia with a T-score of -2.0 in the spine and -1.9/-1.7 in the hips. The patient described a normal diet with supplemental calcium and magnesium tablets but had not previously received pharmacological treatment for osteopenia. She had ceased smoking several years prior and rarely consumed alcohol. Because of the right-sided sFNF, she was examined thoroughly by an endocrinologist. Blood tests showed normal levels of vitamin D and calcium and no disturbance of kidney, thyroid or parathyroid function. Indicators of auto-immune disease or myelomatosis were normal. She had a slightly elevated ALAT which was deemed unrelated. On the advice of the endocrinologist, the patient started yearly zoledronic acid treatment.

Between the two hip fractures, she was diagnosed with a stress fracture of the left 2nd metatarsal, a low-energy fracture of a single rib and a right-sided distal radius fracture; all were treated conservatively. The right sFNF was treated conservatively because of the late clinical diagnosis. Sixteen weeks after onset of symptoms, the patient was pain-free and ambulatory. She was advised against sports for another four months. The left sFNF was, in contrast, treated surgically with 3 parallel screws on the day of admittance to the emergency department, and she was allowed full weight-bearing postoperatively. The patient felt an immediate reduction in pain after surgery. After two months the patient still had a pain rating of NRS 3-4 during physical therapy but was able to walk 5 kilometers. Walking distance increased with physical therapy and she was free of pain and started jogging 6 months postoperatively (Figures 1 and 2).

fig 1

Figure 1: MRI of the right hip showing a stress fracture of the femoral neck with callus formation on the medial side. The fracture line is incomplete, and it is visible on the medial side and does not extend to the opposite cortex. A: T1 weighted TSE sequence B: T1 weighted TIRM sequence.

fig 2

Figure 2: Left hip MRI showing bone edema in the medial distal collum femoris. A: Coronal T1 weighted MRI with low signal in the affected area. B: Coronal STIR MRI with high signal. A small fissure is visible and interpreted as an incomplete fracture line.

Discussion

Treatment of undisplaced sFNFs is debated, and surgeons must weigh the risks and benefits of conservative treatment versus surgery. Conservative treatment requires a period of reduced activity and carries a risk of displacement. In a Swedish registry study on undisplaced or minimally displaced sFNFs and basocervical fractures, 3 of 17 patients treated non-operatively later required internal fixation due to fracture displacement. Another 3 patients received late surgical treatment for persistent pain or femoral head necrosis, so in total 35% of patients for whom primary non-operative treatment was chosen were eventually treated surgically. This was higher than the overall rate of reoperation or late surgery of 28%, and higher than the reoperation rate of 10% for patients primarily treated with internal fixation for displaced or undisplaced fractures.

Osteosynthesis seems effective in preventing secondary displacement and allows early weight bearing but carries a risk of surgical complications. Removal of implants at a second operation due to pain occurs in as many as 14% of sFNFs (both displaced and undisplaced). Surgical site infection (SSI) is also a concern. Although the incidence of SSI is difficult to estimate in sFNFs, extrapolation from studies including traumatic femoral neck fractures may indicate the frequency of infection in stress fractures. In patients below 60 years of age with all-cause femoral neck fractures, the risk of surgical site infection in retrospective studies is 5.1% [17]. Incidence rates in traumatic versus stress fractures may differ due to differences in surrounding tissue damage from trauma and in patient characteristics. Furthermore, the risk of avascular necrosis and non-union is not completely eliminated by internal fixation, and fixation failure can occur [18]. Two relatively large retrospective MRI studies have identified prognostic fracture characteristics that may help guide the surgeon’s choice of treatment.

Rohena-Quinquilla et al. [19] reviewed 156 cases of sFNF in a military population and divided fractures into 4 grades based on MRI. Grades I and II had bone marrow edema of less or more than 6 mm, respectively, but no fracture line. Grades III and IV had fracture lines of less or more than 50% of neck width on coronal MRI. Conservative treatment of grade I, II and III sFNFs did not result in displacement or even progression to a higher fracture grade. 3 of 21 patients with a fracture line of >50% of neck width (grade IV) were treated conservatively, while the remaining 18 had surgical fixation.

There was no report of displacement in the conservatively treated fractures indicating this treatment option is viable in the military population.

Steele et al. [20] retrospectively reviewed 305 cases of sFNFs in a military population. In this cohort, patients were treated non-operatively with toe-touch weight bearing on crutches if they had bone marrow edema or a fracture line <50% of femoral neck width on MRI, and surgically if they had a fracture line >50% of femoral neck width. Conservatively treated patients who showed progression on follow-up MRI at 6 weeks underwent surgery. A total of 75 (24.6%) patients required surgery; 48 of these had primary operations and 27 were operated after the 6-week follow-up MRI. No patient who initially had edema without a fracture line later needed surgery per protocol. Of the patients with a fracture line <50% of femoral neck width, 27 of 103 (26%) progressed on follow-up MRI and were treated surgically secondarily. Effusion on primary MRI was correlated with later progression, with a relative risk of 8.02 (CI 2.99-21.5; p<0.0001), and in these patients surgery should be strongly considered. No patient with an undisplaced sFNF progressed to displacement.
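The relative risk reported by Steele et al. is the standard ratio of progression rates between patients with and without the risk factor, with a Wald confidence interval computed on the log scale. A minimal sketch of that calculation follows; the 2×2 counts are hypothetical, chosen only to illustrate the arithmetic, not Steele's actual effusion data:

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk of progression for exposed vs. unexposed patients,
    with a 95% Wald confidence interval computed on the log scale.
    a: exposed who progressed, b: exposed who did not,
    c: unexposed who progressed, d: unexposed who did not."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts: 4 of 10 patients with effusion progressed,
# versus 10 of 200 without effusion.
rr, ci = relative_risk(4, 6, 10, 190)  # rr = 8.0
```

Note how small cell counts widen the interval on the log scale, which is why a point estimate of 8 can carry a confidence interval stretching from roughly 3 to over 20, as in Steele's report.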

In the case presented here, surgical treatment was chosen for the left side even though the fracture line was <50% of femoral neck width. Although the retrospective studies by Steele and Rohena-Quinquilla suggest similar fractures can heal without surgery, 26% required surgery due to progression on follow-up MRI. Furthermore, extrapolating from young military recruits to a postmenopausal woman with repeated stress fractures is uncertain, and avoiding displacement is crucial. Surgery was performed to allow immediate weight-bearing and to prevent displacement.

Conclusion

This is a rare case of bilateral sFNFs in a middle-aged woman with osteopenia as the only risk factor. Stress fractures of the femoral neck may have an insidious onset of symptoms in the absence of trauma; the primary x-ray is often negative, while MRI is the gold standard. Treatment can be conservative or surgical depending on fracture pattern and patient characteristics.

References

  1. Egol KA, Koval KJ, Kummer F, Frankel VH (1998) Stress Fractures of the Femoral Neck. Clinical Orthopaedics and Related Research 348: 72-78. [crossref]
  2. Niva MH, Kiuru MJ, Haataja R, Pihlajamäki HK (2005) Fatigue injuries of the femur. Journal of Bone and Joint Surgery – Series B 87: 1385-1390. [crossref]
  3. Sundkvist J, Möller M, Rogmark C, Wolf O, Mukka S (2022) Stress fractures of the femoral neck in adults: an observational study on epidemiology, treatment, and reoperations from the Swedish Fracture Register. Acta Orthop 93: 413-416. [crossref]
  4. Kuhn KM, Riccio AI, Saldua NS, Cassidy J (2010) Acetabular retroversion in military recruits with femoral neck stress fractures. Clin Orthop Relat Res 468: 846-851. [crossref]
  5. Snyder RA, Koester MC, Dunn WR (2006) Epidemiology of stress fractures. Clin Sports Med 25: 37-52. [crossref]
  6. Pihlajamaki H, Ruohola J, Kiuru M, Visuri T (2006) Fractures in Military Recruits. J Bone Joint Surg Am 88: 1989-1997.
  7. Kiuru MJ, Pihlajamaki HK, Ahovuo JA (2003) Fatigue stress injuries of the pelvic bones and proximal femur: Evaluation with MR imaging. Eur Radiol 13: 605-611.
  8. Kiuru MJ, Pihlajamaki HK, Hietanen HJ, Ahovuo JA (2002) MR imaging, bone scintigraphy, and radiography in bone stress injuries of the pelvis and the lower extremity. Acta Radiol 43: 207-212.
  9. Pihlajamäki HK, Ruohola JP, Weckström M, Kiuru MJ, Visuri TI (2006) Long-term outcome of undisplaced fatigue fractures of the femoral neck in young male adults. Journal of Bone and Joint Surgery – Series B 88: 1574-1579. [crossref]
  10. Fullerton LRJ, Snowdy HA (1988) Femoral Neck Stress Fractures. Am J Sports Med 16(4). [crossref]
  11. May LA, Chen DC, Bui-Mansfield LT, O’Brien SD (2017) Rapid magnetic resonance imaging evaluation of femoral neck stress fractures in a U.S. active duty military population. Mil Med 182: e1619-e1625. [crossref]
  12. Matheson GO, Clement DB, Mckenzie DC, Taunton JE, Lloyd-Smith DR, et al (1987) Stress fractures in athletes: A study of 320 cases. Am J Sports Med 15: 46-58. [crossref]
  13. Neubauer T, Brand J, Lidder S, Krawany M (2016) Stress fractures of the femoral neck in runners: a review. Research in Sports Medicine 24: 185-199. [crossref]
  14. Bennell KL, Malcolm SA, Thomas SA, Wark JD, Brukner PD (1996) The incidence and distribution of stress fractures in competitive track and field athletes: A twelve-month prospective study. American Journal of Sports Medicine 24: 211-217.
  15. Lee CH, Huang GS, Chao KH, Jean JL, Wu SS (2003) Surgical treatment of displaced stress fractures of the femoral neck in military recruits: A report of 42 case. Arch Orthop Trauma Surg 123: 527-533. [crossref]
  16. May LA, Chen DC, Bui-Mansfield LT, O’Brien SD (2017) Rapid magnetic resonance imaging evaluation of femoral neck stress fractures in a U.S. active duty military population. Mil Med 182: e1619-e1625. [crossref]
  17. Slobogean GP, Sprague SA, Scott T, Bhandari M (2015) Complications following young femoral neck fractures. Injury 46: 484-491. [crossref]
  18. Sundkvist J, Möller M, Rogmark C, Wolf O, Mukka S (2022) Stress fractures of the femoral neck in adults: an observational study on epidemiology, treatment, and reoperations from the Swedish Fracture Register. Acta Orthop 93: 413-416. [crossref]
  19. Rohena-Quinquilla IR, Rohena-Quinquilla FJ, Scully WF, Evanson JRL (2018) Femoral neck stress injuries: Analysis of 156 cases in a U.S. military population and proposal of a new mri classification system. American Journal of Roentgenology 210: 601-607. [crossref]
  20. Steele CE, Cochran G, Renninger C, Deafenbaugh B, Kuhn KM (2018) Femoral neck stress fractures: MRI risk factors for progression. Journal of Bone and Joint Surgery – American Volume 100: 1496-1502. [crossref]

Forgotten Right Ventricle Entity: In PASC Patients

DOI: 10.31038/JCCP.2022523

 

The world has just passed through the global pandemic of COVID-19, with recent reports of the disease resurfacing in China. Although it predominantly affects the lungs, involvement of other organs such as the heart, brain and gut has also been seen in the acute phase. PASC (post-acute sequelae of SARS-CoV-2 infection) is a distinct phase of the disease seen among survivors of both mild and severe illness, in which patients continue to suffer from palpitations, dyspnoea on exertion, chest pain and fatigue. Few studies have assessed ongoing cardiac involvement in such patients. Most of these patients show normal left and right ventricular ejection fraction and normal troponin levels, with nonspecific EKG findings of sinus tachycardia. Some of these patients undergo cardiac MR to rule out COVID-19 myocarditis. Here also, most imaging specialists and cardiologists focus on the left ventricle only and look for the Lake Louise criteria to establish or rule out the diagnosis.

In a study by Lan et al. [1] it was shown that the right ventricle is commonly involved in COVID-19 disease, attributable to the proximity of the right ventricle to the pulmonary circulation, increased right ventricular afterload due to COVID-19 lung complications, the increased surface area of the right ventricular free wall and direct involvement of the right ventricular wall by the virus. Similarly, studies by Li et al. [2] and Lee et al. [3] showed the prognostic value of myocardial strain in COVID-19 disease; altered right ventricular strain in acute COVID-19 carried a poor prognosis. In the PASC phase the etiology of myocarditis remains elusive, as does the challenge of establishing the diagnosis. Studies by Puntmann et al. [4] and Huang et al. [5] have shown the use of CMR with multiparametric mapping to diagnose myocarditis in PASC patients. Yet in all these studies the findings for a positive diagnosis were elicited by showing changes in the left ventricular myocardium only, with most patients showing normal left and right ventricle size and function. Hence the entity of the “forgotten right ventricle in PASC”. In a follow-up study of athletes who recovered from COVID-19, Wassenaar et al. [6] showed strain abnormalities of the left ventricle only and were silent about changes in the right ventricle, even though prior studies demonstrated its common involvement. Only a recent study by Kapoor et al. [7], in which multiparametric CMR was performed along with feature tracking of both ventricles, has shown equal and severe involvement of the right ventricular wall, with diffuse increased signal changes on T2 maps even at follow-up of recovered COVID-19 patients. Their study showed reductions of 9.9% and 6% in systolic global circumferential shortening, and of 61.8% and 46.5% in early diastolic strain rate, for the left and right ventricle respectively. They showed that this technique was valuable not only in diagnosing the condition but also in staging the extent of disease, which could impact the management of these patients. In PASC patients it is therefore pertinent to perform a detailed right ventricular evaluation and not to be misled by the forgotten right ventricle entity. Unfortunately, little emphasis is currently given to detailed right ventricular assessment beyond its size and wall motion abnormalities.

In conclusion, the forgotten right ventricle entity in PASC not only deprives the patient of a diagnosis of ongoing myocarditis but can also have a long-term bearing on prognosis, as these patients may ultimately develop cardiomyopathy of the right ventricle. It would therefore be prudent to evaluate these patients using multiparametric cardiac MR techniques with myocardial strain evaluation rather than stopping at routine echocardiograms alone. All patients with severe impairment need follow-up for any progression of disease.

References

  1. Lan Y, Liu W, Zhou Y (2021) Right ventricle Damage in Covid-19: Association between Myocardial Injury and Covid-19. Frontiers in Cardiovascular Medicine 8: 606318. [crossref]
  2. Li Y, Li H, Zhu S, Xie Y, Wang B, et al. (2020) Prognostic value of right ventricular longitudinal strain in patients with COVID-19. JACC: Cardiovasc Imaging 13: 2287-2299. [crossref]
  3. Lee JW, Jeong YJ, Lee G, Lee NK, Lee HW, et al. (2017) Predictive Value of Cardiac Magnetic Reso-nance Imaging-Derived Myocardial Strain for Poor Outcomes in Patients with Acute Myocarditis. Korean J Radiol 18: 643-654. [crossref]
  4. Puntmann VO, Martin S, Shchendrygina A, Hoffmann J, Ka MM, et al. (2022) Long-term cardiac pathology in individuals with mild initial COVID-19 illness. Nature Medicine 28: 2117-2123.
  5. Huang L, Zhao P, Tang D, Zhu T, Han R, et al. (2020) Cardiac involvement in patients recovered from COVID-2019 identified using magnetic resonance imaging. JACC Cardiovasc Imaging 13: 2330-2339.
  6. Wassenaar JW, Clark DE, Dixon D, Durrett KG, Parikh A, et al. (2022) Reduced Circumferential Strain in Athletes with Prior COVID-19 Infection. Radiology: Cardiothoracic Imaging 4: e210310.
  7. Kapoor A, Kapur A. Myocardial strain abnormalities in patients with long Covid after mild to moderate Covid-19 disease. Journal of Cardiology and Cardiovascular Research.

Crimean-Congo Hemorrhagic Fever: An Endemic Sporadic Zoonotic Viral Infection in Uganda

DOI: 10.31038/MIP.2022315

Abstract

Crimean-Congo Hemorrhagic Fever is an arboviral zoonosis responsible for sporadic outbreaks of hemorrhagic fever in endemic areas. The control of CCHFV calls for a multidisciplinary approach involving partners such as WHO and OIE. Multidisciplinary research will allow a better understanding of the epidemiology of CCHF in ticks, domestic livestock and wild animal populations, and will support the identification of human risk factors for infection and the development of better diagnostics, antiviral drugs and vaccines. The identification of an animal model for testing would also facilitate further research, allowing study of the host response to infection and evaluation of intervention and control strategies. Finally, the role of environmental change, including climate change, needs further assessment. Priorities include supporting CCHF surveillance, diagnostic capacity and outbreak response activities, and reducing infection in people by raising awareness of the risk factors and educating people about the measures they can take to reduce exposure to the virus.

Keywords

Crimean-Congo hemorrhagic fever, CCHFV, Zoonosis, Seroprevalence, ELISA, Ixodid ticks, Uganda

Background and Aim

Crimean-Congo hemorrhagic fever (CCHF) is a tick-borne viral zoonotic disease caused by Crimean-Congo hemorrhagic fever virus (CCHFV), a member of the genus Nairovirus in the family Bunyaviridae, order Bunyavirales. CCHF is typically asymptomatic in animals but can be highly fatal in humans, with a case fatality rate approaching 30%. The disease is distributed in many countries of Asia, Africa, the Middle East and south-eastern Europe. The distribution of CCHFV coincides with that of its main vector, Ixodid (hard) ticks of the genus Hyalomma, which serve as both a reservoir and a vector for the virus; the spread of infected ticks into new, unaffected areas therefore facilitates the spread of the virus. Numerous wild and domestic animals, such as cattle, goats, sheep and hares, serve as amplifying hosts for the virus. Transmission to humans occurs through contact with infected ticks or animal blood. CCHF can also be transmitted from one infected human to another by contact with infectious blood or body fluids. Documented spread of CCHF has occurred in hospitals due to improper sterilization of medical equipment, reuse of injection needles, and contamination of medical supplies. Occupational groups with an elevated risk of CCHF include farmers, shepherds, veterinarians, abattoir workers, healthcare personnel and laboratory workers, as well as anyone at elevated risk of exposure to ticks. Seasonality can result from seasonal changes in tick numbers or increased human exposure to slaughtered livestock. The case fatality rate is thought to be approximately 5-30% in most instances, although rates as high as 80% have been reported occasionally in limited outbreaks. Factors such as the availability and quality of healthcare, virus dose, route of exposure, coinfections, and possibly the viral strain are thought to influence mortality.

A person with CCHF can have the following signs and symptoms: sudden onset of high fever, headache, back pain, joint pain, abdominal pain, dizziness (a feeling of losing one's balance and being about to fall), and neck pain and stiffness. Suspicion is raised when the person has been in contact with someone with similar symptoms or with animals infested with ticks, or has had a tick bite. In addition, the person can also have any of the following: nausea, vomiting, diarrhoea, sore throat, sharp mood swings, confusion, and bleeding, bruising or a rash. After 2 to 4 days, the patient may experience sleeplessness and depression. Following a bite from an infected tick, the infection can establish in an animal, causing only brief illness. The Crimean-Congo hemorrhagic fever virus can then be passed on to another tick, which can in turn pass the virus to humans or other animals [1-5].

Uganda is divided into ten agroecological zones: Southern highlands, Southern drylands, Lake Victoria crescent, Eastern, Mid-Northern, Lake Albert crescent, West Nile, Western highlands, South East, and Karamoja drylands. The Mid-Northern zone, comprising Lira, Apac, Kitgum, Gulu and Pader districts, is flat terrain covered by thick savannah grassland. Agriculture remains the major source of livelihood in Uganda. According to the Uganda National Household Survey (UNHS) 2016/17, the larger proportion of the working population is engaged in agriculture, forestry and fishing (65%). Among females in the working population, 70% are engaged in agriculture compared to 58% of males. Furthermore, 38% of persons in employment were in paid employment, with a higher proportion of males (46%) compared to females (28%). The agricultural sector accounted for the largest share of employment (36%). The agriculture sector's contribution to GDP at current prices was 24.9 percent in FY 2016/17, compared to 23.7 percent in FY 2015/16. This indicates that the population at risk of CCHFV in Uganda is large.

Materials and Methods

CCHFV is thought to infect animals with few or no clinical signs, and no illnesses have been attributed to this virus in naturally infected animals; however, the disease is zoonotic. Serum samples are collected from susceptible animals; the host range includes humans and domestic and wild animals. Sera are tested for the presence of CCHFV-specific immunoglobulin G (IgG) antibodies using enzyme-linked immunosorbent assay (ELISA), by virus isolation, or by detecting viral nucleic acids and antigens in blood samples or tissues. Urine, saliva and other secretions and excretions may also contain nucleic acids, but the suitability of these samples for diagnosis has not been fully investigated. At autopsy, CCHFV can be found in a variety of tissues, such as liver, spleen, lung, bone marrow, kidney and brain. Clinical cases are often diagnosed with a combination of reverse transcription-polymerase chain reaction (RT-PCR) tests and serology. CCHFV strains are highly variable, and many RT-PCR tests only recognize local variants or a subset of viruses; however, tests that can detect most or all known variants, including the highly divergent AP92 strain, have also been developed. Other published assays to detect nucleic acids include microarray- and macroarray-based techniques and loop-mediated isothermal amplification. In fatal cases, viral RNA tends to increase as the disease progresses. Immunohistochemistry can be used on tissues collected at autopsy. Inoculation of newborn or immunodeficient mice is more sensitive than cell culture and has been used occasionally in clinical cases, though it is generally discouraged if there are alternatives. For serological diagnosis, either specific IgM or rising antibody titers should be seen. Virus neutralization is rarely employed, due to the hazards of handling live CCHFV. Treatment is mainly supportive, and seriously ill patients require intensive care. The antiviral drug ribavirin has been used to treat CCHF infection with apparent benefit; both oral and intravenous formulations seem to be effective.
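The case-confirmation logic described above (direct RNA detection by RT-PCR, or serology showing specific IgM or a rising titer) can be schematized as follows. This is an illustrative sketch of the decision rules named in the text, not a clinical algorithm; the four-fold titer rise is the conventional serological threshold and is our assumption here:

```python
def cchf_lab_interpretation(rt_pcr_positive, igm_positive,
                            acute_igg_titre, convalescent_igg_titre):
    """Illustrative interpretation of CCHF laboratory results.

    Direct detection of viral RNA confirms infection; otherwise,
    specific IgM or a rising IgG titre between paired acute and
    convalescent sera (conventionally a four-fold rise) is needed.
    """
    if rt_pcr_positive:
        return "confirmed (viral RNA detected)"
    if igm_positive:
        return "confirmed (specific IgM)"
    if acute_igg_titre and convalescent_igg_titre >= 4 * acute_igg_titre:
        return "confirmed (rising IgG titre)"
    return "not confirmed"


# A titre rising from 1:16 to 1:128 exceeds the four-fold threshold.
print(cchf_lab_interpretation(False, False, 16, 128))
```

Note that, as the text observes, fatal and very early cases often never mount a measurable antibody response, which is why the RNA-detection branch comes first.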

There are no vaccines available for use in animals. Although an inactivated, mouse brain-derived vaccine against CCHF has been developed and used on a small scale in eastern Europe, there is currently no safe and effective vaccine widely available for human use. Tests on patient samples present an extreme biohazard risk and should only be conducted under maximum biological containment conditions. However, if samples have been inactivated (e.g. with virucides, gamma rays, formaldehyde, heat, etc.), they can be manipulated in a basic biosafety environment. Patients with fatal disease, as well as patients in the first few days of illness, do not usually develop a measurable antibody response, so diagnosis in these individuals is achieved by virus or RNA detection in blood or tissue samples.

Results

A study by Nurettin et al. (2022) screened domestic animals for IgG prevalence and compared the results with those for wild animals (14.01% vs. 9.84%, respectively), indicating that wild animals and livestock are equally important in circulating the CCHF virus in endemic areas such as Türkiye. Mirembe et al. (2021) identified 14 confirmed cases (64% males) with five deaths (case-fatality rate: 36%) from 11 districts in the western and central regions of Uganda; of these, eight (73%) case-patients resided in Uganda's 'cattle corridor'. Atim et al. (2022) detected CCHFV seropositivity of 221/800 (27.6%) in humans, 612/666 (91.8%) in cattle, 413/549 (75.2%) in goats and 18/32 (56.2%) in dogs. Human seropositivity was associated with livestock farming and collecting/eating engorged ticks. In animals, seropositivity was higher in cattle than in goats, and CCHFV was identified in multiple tick pools of Rhipicephalus appendiculatus. A cross-sectional study was conducted to determine the prevalence of CCHF and to identify the potential risk factors associated with CCHFV seropositivity among one-humped camels (Camelus dromedarius) in Central Sudan. A total of 361 camels selected randomly from six localities were included in the study. Sera were tested for the presence of CCHFV-specific immunoglobulin G (IgG) antibodies using enzyme-linked immunosorbent assay (ELISA). CCHFV seropositivity was recorded in 77 of 361 animals, a prevalence rate of 21.3%. The prevalence of CCHF is significantly high among camels in Khartoum State, Sudan; age, breed, locality and tick control are considered potential risk factors for contracting CCHF (Suliman et al., 2017). The present study aims to provide knowledge and awareness about the disease in order to reduce its impact on the livelihoods of pastoral communities and ultimately to avoid disease spread in humans.
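The seroprevalence figures above are simple binomial proportions. A short sketch recomputes them from the counts reported by Atim et al. (2022); the Wilson 95% confidence intervals are our addition for illustration and are not reported in the cited studies:

```python
import math


def wilson_ci(positives, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half


# Seropositivity counts reported by Atim et al. (2022)
counts = [("humans", 221, 800), ("cattle", 612, 666),
          ("goats", 413, 549), ("dogs", 18, 32)]

for species, pos, n in counts:
    lo, hi = wilson_ci(pos, n)
    print(f"{species}: {100 * pos / n:.1f}% "
          f"(95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```

The point estimates reproduce the percentages quoted in the text (e.g. 221/800 = 27.6% in humans); the intervals give a sense of how precise each estimate is given its sample size.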

Crimean-Congo haemorrhagic fever (CCHF) is the most widespread tick-borne viral disease affecting humans (Al-Abri et al., 2017).

References

  1. Al-Abri SS, Abaidani IA, Fazlalipour M, Mostafavi E, Leblebicioglu H, et al. (2017) Current status of Crimean-Congo haemorrhagic fever in the World Health Organization Eastern Mediterranean Region: issues, challenges, and future directions. International Journal of Infectious Diseases 58: 82-89. [crossref]
  2. Atim SA, Ashraf S, Belij-Rammerstorfer S, Ademun AR, Vudriko P, Nakayiki T, Niebel M, Tweyongyere R, et al. (2022) Risk factors for Crimean-Congo Haemorrhagic Fever (CCHF) virus exposure in farming communities in Uganda. Journal of Infection (In-Press).
  3. European Centre for Disease Prevention and Control (2008) Consultation on Crimean-Congo haemorrhagic fever prevention and control, Stockholm, September 2008. ecdc.europa.eu
  4. Mirembe BB, Musewa A, Kadobera D, Kisaakye E, Birungi D, Eurien D, et al. (2021) Sporadic outbreaks of Crimean-Congo haemorrhagic fever in Uganda. PLoS Negl Trop Dis 15(3). [crossref]
  5. Nurettin C, Engin B, Sukru T, Munir A, Zati V, Aykut O (2022) The Seroprevalence of Crimean-Congo Hemorrhagic Fever in Wild and Domestic Animals: An Epidemiological Update for Domestic Animals and First Seroevidence in Wild Animals from Turkiye. Vet Sci 9: 462. [crossref]
  6. Suliman HM, Adam IA, Saeed SI, Abdelaziz SA, Haroun EM, et al. (2017) Crimean Congo hemorrhagic fever among the one-humped camel (Camelus dromedaries) in Central Sudan. Virology Journal 14: 147.
  7. World Health Organization. Crimean-Congo haemorrhagic fever.