
Generation of Hydrogen along the Mid-Atlantic Ridge: Onshore and Offshore

DOI: 10.31038/GEMS.2021343

Abstract

Since the 1980s, oceanic ridges have been proven to be sites at which diagenetic processes (such as serpentinization) result in the generation of natural hydrogen, which escapes through oceanic vents. The water depths in this setting and the location of ocean ridges far offshore would seem to preclude exploitation of this resource, but similar geological contexts are found onshore. Iceland is located along the axis of the Mid-Atlantic Ridge (MAR) and is also a hot spot. As a result, the emerging ridge allows for the study of hydrogen generation within this specific oceanic extensional context. Geothermal energy is well developed in Iceland; accordingly, the presence of natural hydrogen is known based on data from numerous geothermal wells which allowed us to constrain the hydrogen occurrences and compare them with MAR emissions. The results show that H2 contents are high only in the neo-volcanic zone and very low outside the immediate vicinity of this active axis. Values reaching 198 mmol H2/kg fluid have been recorded in Landmannalaugar. Farther north, the gas mixture in the Námafjall area reaches up to 57 vol% hydrogen. These well data are in the same range as those along the MAR. The oxidation of ferrous minerals, combined with the reduction of water, allows for the formation of hydrogen. In Iceland, H2 concentrations in steam seem to be enhanced by both the low concentrations of NaCl in hydrothermal fluids and the strong fracturing of the upper crust, which provides a rapid and constant supply of meteoric fluids for oxidation reactions.

Keywords

Iceland, Natural hydrogen, Oxidation, Mid-Atlantic ridge, Basalts

Introduction

Dihydrogen, or H2 (also referred to here as hydrogen), is at the center of many plans for a greener planet. Today, hydrogen is essentially a raw material extracted from CH4 and other hydrocarbons by steam reforming or coal gasification; within the new energy mix, it serves as a fuel for green mobility. However, if H2 production continues to generate CO2, it merely displaces pollutant emissions. Thus, the production of H2 without greenhouse gas (GHG) emissions is desirable; this can be achieved via electrolysis or plasma technology, and an alternative is the exploration and production of natural H2 [1]. Natural H2 exploration is now active in various places, particularly in intracratonic contexts, after the fortuitous discovery of an accumulation in Mali [2]. In fact, numerous H2 emanations have been observed above Precambrian basins, including in Russia [3-10]. The geological conditions allowing large accumulations and/or production rates remain open to question [11]. However, the first H2 generation zones discovered were not above such basins but were associated with mid-oceanic smokers [12,13]. Ten years ago, pioneering assessments of the mid-ocean ridges (MORs) were made, but in terms of exploration the MORs have not been targeted, since the water depths and distances from land in these settings appeared to preclude economic production. In addition, assessments of MOR resources have produced results differing by up to three orders of magnitude [14]: for some authors, the potential resource is low in comparison with world H2 consumption, whereas for others it is very large and has the potential to replace manufactured hydrogen. Offshore exploration is clearly more expensive than onshore exploration, but the geological characteristics of MORs are similar to those of the ridges present in Iceland or at the Afar Triple Junction, where the Red Sea Ridge and the Aden Ridge outcrop onshore. Here, we revisit MORs and present an analysis of H2 emanations in Iceland. Many wells have been drilled in this country thanks to the geothermal energy industry, and subsurface data are numerous. We mapped these data and compared H2 emanations in Iceland with those at the Mid-Atlantic Ridge (MAR).

Geology of Iceland

Geological Setting

Iceland is part of the North Atlantic Igneous Province and owes its development during the middle Miocene to interaction between the MAR and a hot spot [15]. The island is crossed by a neo-volcanic zone, which is centered on the hot spot and divided into three rift segments (Figure 1): the North Volcanic Zone (NVZ), East Volcanic Zone (EVZ), and West Volcanic Zone (WVZ). The WVZ is the onshore continuation of the Reykjanes Ridge in the southwest. In the north, the NVZ is connected to the Kolbeinsey Ridge by the Tjörnes Fracture Zone (TFZ), a dextral transform fault typical of oceanic ridges. In the south, the South Iceland Seismic Zone (SISZ) is also a transform fault marked by high seismicity (Figure 1). The TFZ, together with the SISZ, accommodates extension due to the presence of the ridge [16].

The simultaneous presence of the MOR and the hot spot has enhanced magmatic activity since the middle Miocene. The crust has a maximum thickness of 30 km in the northernmost, easternmost, and westernmost parts of the island; in contrast, crustal thickness in the center of the rift is approximately 8–10 km [17,18]. The oldest rocks are located in the northwest of the island and are Middle Miocene in age (15–16 Ma), but the most widespread rocks are Plio–Pleistocene in age (Figure 1); 90% of these are basic rocks, most commonly basalts. There are three groups of basalts: tholeiites (olivine 6.6 vol%), transitional alkali basalts (olivine 0.2 vol%), and alkali olivine basalts (olivine 14.8 vol%). The tholeiites are mostly found along the axis of the ridge, while the others are mostly found on the margins of the volcanic zone [19]. Some of the rocks found are intermediate, such as basaltic andesites or andesites, while some are acidic, such as rhyolite [20]. Plio–Pleistocene rocks are abundant because of increased magmatic activity at that time. The last glaciation in the Northern Hemisphere started at ~100 ka, in the Weichselian, with the last glacial maximum occurring at ~21 ka [21]. This glacial loading/unloading, which during the last glaciation affected an Icelandic lithosphere already weakened by the mantle plume, has been proposed to explain the enhanced magmatic activity during the Plio–Pleistocene [22]. The neo-volcanic zone is composed of en echelon active volcanic systems (Figure 1) [23]. Such systems are composed of a main volcano producing basic to acidic lavas and secondary volcanoes with overwhelmingly basaltic lavas. During subglacial eruptions, these volcanoes can produce hyaloclastites and pillow lavas [24]. The hyaloclastites are breccias consisting of glass fragments formed during subglacial eruptions. All of these volcanoes are intersected by fracture and fault swarms [24].

Geothermal Systems

Icelandic geothermal systems can be classified according to the base temperature of their fluids [25], which corresponds to the highest temperature of fluid that can be produced. As fluid transport within the reservoir is mainly convective, this temperature corresponds to the fluids located at the base of the convective cell. Low-temperature systems (i.e., those below 150°C) do not produce electricity efficiently and are thus typically used for heating; these systems appear to be located both inside and outside the neo-volcanic zone. In contrast, high-temperature (HT) systems, whose steam is used to produce electricity, are systematically located inside the volcanic zone (Figure 1), and their base temperature exceeds 200°C. Some poorly explored areas with base temperatures between 150 and 200°C also exist [26,27].


Figure 1: Geological and structural map of Iceland (data from IINH) [17]

Another way to describe high-temperature (HT) geothermal systems is to focus on their geological features, including their heat source and heat transfer mode, reservoir characteristics, fluids, drainage characteristics, cap rock, and surface manifestations. In Iceland, the heat source is of magmatic origin, and heat transfer is assumed to be achieved primarily by the convection of fluids within the crust (Figure 2). In the upper part, the convective fluid is primarily water that circulates within the brittle and highly fractured upper crust, which lies above the magma chamber. A thin, almost purely conductive layer is present between the magmatic body and the upper part; hydrothermal fluids circulate down to this layer. Its low thickness allows for exchange between the volatile components of the magma and the hydrothermal fluids [28]. The “reservoir” is defined as the layer in which convection of water-based fluids occurs and where production may take place. In Iceland, this reservoir is composed primarily of basalts [23] with some rhyolites, which are formed by the partial melting of basalts. The recharge of the hydrothermal fluids is assumed to be rapid owing to the numerous fracture/fault swarms that have been observed within the Icelandic crust; these structures increase the permeability of the crust. However, there is a strong anisotropy of permeability [29]: vertical permeability is enhanced by fractures, faults, and damage zones, whereas horizontal permeability is lower and roughly equal to the bulk permeability of the basalt. Hence, tectonic characteristics control the downward flow of fluids within hydrothermal systems and allow meteoric fluids or seawater to circulate within the Icelandic crust [30]. The reservoir is covered by layers of hydrothermally altered hyaloclastites [31], in which primary porosity is often infilled by secondary minerals such as smectites. These layers of altered hyaloclastites act as barriers to hydrothermal fluids [31]. However, this seal is not perfect, and leakages are numerous, resulting in surface manifestations including fumaroles, boiling springs, hot or acidic springs, mud pools, sulfide deposits, siliceous sinter deposits above convective cells, CO2 springs, and travertines (particularly at the rims of hydrothermal basins). Geothermal systems are driven by fluid convective cells. In Iceland, the fluids of these geothermal systems are typically divided into two groups: primary fluids (or reservoir fluids) and secondary fluids [28,32]. The primary fluids are formed by the direct mixing of water with the volatile components of magma. Secondary fluids are produced by water/rock interaction during the ascent of the primary fluids. For example, secondary fluids can oxidize rocks and produce hydrogen, as follows.

H2O + 3FeO → Fe3O4 + H2                 (1)


Figure 2: Schematic fluid migration pathway resulting in the geothermal system in Iceland. Within the convective upper zone, the fracture network enhances circulation toward the hot conductive layer; at the contact with this layer, the fluid is warmed. Water recharge is ensured by rain and the ice caps.

Icelandic Hydrothermal Systems

Well Data

We gathered data published from 1950 to 2011 [26,33-40]. Here, we present a summary of these data for a dozen HT areas in Iceland, including gas compositions, liquid characteristics (such as pH), surface temperatures, and isotopic data (including formation temperature). The fluids were sampled either at the surface (fumaroles and springs) or in the subsurface (wells), and their temperatures ranged from those of hot steam vents to those of warm springs. Gas compositions of the vapor phase are listed in Tables 1 and 2 in mmol/kg of fluid and vol%, respectively. In the literature, some data are given in vol%, while others are in mmol/kg H2O. While vol% corresponds to the volume occupied by a chemical species within a mixture, mmol/kg H2O corresponds to the quantity of a species contained in 1000 g of H2O. We tried to convert all of the published values to the same units; however, the available data did not include the corresponding information, such as pressure, temperature, and bulk chemical composition, for each site. Thus, it was impossible for us to convert vol% values into mmol/kg H2O and vice versa. Furthermore, the reported percentage values sometimes refer to proportions within the non-condensable gas fraction of the steam, excluding the H2O itself. A more advanced evaluation and comparison of these fluids, from wells and from fumaroles, may be found in [41]. To evaluate potential hydrogen production quantitatively, we used kg of H2/year. All available data suggest that hydrogen production varies temporally; thus, the currently available data, which are mainly sporadic, allow us to determine only approximate trends.
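To make the unit relationship concrete, the short sketch below (illustrative values only, not measured data, and assuming ideal-gas behaviour of the dry gas) shows the direction of conversion that is always possible: from a complete analysis in mmol/kg of fluid to vol% of the non-condensable gas. The reverse conversion, from vol% back to mmol/kg H2O, additionally requires the gas/steam ratio and the sampling pressure and temperature, which is the missing information described above.

```python
# Illustrative sketch only: hypothetical gas analysis of one steam sample,
# given in mmol per kg of fluid, converted to vol% of the dry (non-condensable) gas.
# Under the ideal-gas assumption, mole fraction equals volume fraction.
gas_mmol_per_kg = {
    "CO2": 150.0,
    "H2S": 20.0,
    "H2": 40.0,
    "CH4": 1.0,
    "N2": 5.0,
}

total = sum(gas_mmol_per_kg.values())
vol_percent = {species: 100.0 * c / total for species, c in gas_mmol_per_kg.items()}

for species, v in vol_percent.items():
    print(f"{species}: {v:.1f} vol% of dry gas")
```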

Table 1: Gas concentrations in mmol/kg of fluid within the steam phase, W is for well and F is for fumarole [26,34,36,39].


Table 2: Gas concentrations, vol% and ppm. G is for non-condensable gas of well discharge [35,37,38,41].


In some areas of note, gas mixtures exhibit remarkable hydrogen contents, reaching 64% and 57% of the total gas volume at Namaskard [38] and Námafjall [37], respectively. At Landmannalaugar, H2 concentrations reached a maximum of 198 mmol/kg H2O, which is seven times higher than the concentrations observed at the Ashadze site along the MAR [42]. As seen in Figure 3, this hydrogen is systematically associated with minor concentrations of CH4 and N2. CO2 is always a major component in the vapor phase (Figures 3 and 4), reaching 93% at Krafla [37]. While hydrogen sulfide is negligible at Krafla [37], H2S concentrations may exceed 10% in other areas studied (Figure 4), reaching 22% at Krisuvik [38]. As shown in Tables 1 and 2, H2S concentrations are mostly similar to or lower than H2 concentrations; when they exceed H2 concentrations, as at Hellisheidi or Krisuvik, they remain within the same order of magnitude.


Figure 3: CO2, H2S, H2, CH4, N2, O2, and Ar concentrations (mmol/kg) for nine high-temperature hydrothermal sites (see data in tables)


Figure 4: Ternary diagram with relative proportions of CO2, H2S, and H2 for nine high-temperature areas in Iceland [32, 35, 37, 38].

Characteristics of the liquid phases are listed in Table 3 for Theistareykir [34], Hveragerdi, Nesjavellir, Námafjall [35,36], and Hellisheidi [39] only, owing to a lack of data for the other sites. The pH of these fluids ranges from neutral to alkaline, and the corresponding surface temperatures do not exceed 25°C. NaCl concentrations for these areas are always lower than 500 ppm.

Table 3: Liquid-phase composition, WS is for warm spring [34-36,39].


Additional Data from Námafjall and Reykjanes Areas

Additional data from these sites mirror the trends exhibited by the published data described above, as seen in Tables 4 and 5. Table 4 contains data from the Námafjall area, which has a basaltic host rock, experiences meteoric water infiltration, and is located inside the neo-volcanic zone. Table 5 contains data from the Reykjanes area, which also has a basaltic host rock and is located inside the neo-volcanic zone but experiences seawater infiltration. Tables 4 and 5 present the gas compositions and liquid characteristics of wells from these areas. The Námafjall site has pH values between 6.6 and 9.7 and surface temperatures between 14 and 25.8°C. The Na and Cl contents for this site are both lower than 500 ppm (Figure 1). The vapor phase is mostly composed of CO2. The H2S and H2 contents here are relatively high, reaching between 12.6 and 121 mmol/kg of fluid. CH4 and N2 contents are non-negligible but never exceed 18 and 115 mmol/kg, respectively. Tables 4 and 5 show that the gas concentration values are variable with time. For instance, in well N°11, H2 has been measured at 31, 48, 55, 80, and 121 mmol/kg H2O over a period of 18 years. These data indicate that this is an active and dynamic system and that monitoring will be necessary before any quantification of flow or annual flux.

Table 4: Námafjall steam phase compositions and liquid characteristics.


Table 5: Reykjanes steam phase compositions and liquid characteristics.


The Reykjanes site contrasts with Námafjall: its pH values are lower, defining the fluids as acidic. Surface temperatures at Reykjanes are between 20.9 and 22.4°C. The fluids from the two sites also differ in their NaCl content; at Reykjanes, NaCl concentrations far exceed 500 ppm. We also found considerable differences in vapor-phase composition. While CO2 concentrations are also high at Reykjanes (between 100 and 1000 mmol/kg), H2S and H2 concentrations are lower, rarely exceeding 10 mmol/kg of fluid for H2S and 1 mmol/kg of fluid for H2.

Isotopic Data

We also collected isotopic data (including δD of H2 and H2O and δ13C of CO2 and CH4) and their corresponding calculated temperatures from the literature. δD(H2) and δD(H2O) values have been calculated according to the following equations and are listed in Table 6:

δD(H2) (‰) = ((D/H)sample/(D/H)standard − 1) × 1000 and δD(H2O) (‰) = ((D/H)sample/(D/H)standard − 1) × 1000,     (2)

δ13C(CO2) (‰) = ((13C/12C)sample/(13C/12C)standard − 1) × 1000 and δ13C(CH4) (‰) = ((13C/12C)sample/(13C/12C)standard − 1) × 1000.    (3)
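As a worked illustration of this delta notation (the sample ratio below is hypothetical; VSMOW, with (D/H) ≈ 1.5576 × 10⁻⁴, is assumed as the reference standard):

```latex
\delta D = \left(\frac{(D/H)_{\mathrm{sample}}}{(D/H)_{\mathrm{VSMOW}}} - 1\right)\times 1000
         = \left(\frac{1.30\times 10^{-4}}{1.5576\times 10^{-4}} - 1\right)\times 1000
         \approx -165\ \text{‰}
```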

Table 6.1 & 6.2: Isotopic data and calculated temperature from them, HS = Hot Spring [33,37].


The H2–H2O equilibrium is a geothermometer used to determine the equilibrium temperature reached by H2 and H2O, which can be taken as the formation temperature of the hydrogen. Arnason [33] used the H2–H2O geothermometer based on Bottinga's work [43] to calculate the hydrogen formation temperature. The fractionation factor used between H2 and H2O is as follows:

α(H2–H2O) = ([HDO]/[H2O]) / ([HD]/[H2]).            (4)

Sano et al. [37] calculated isotopic temperatures from the difference between the δ13C values of CO2 and CH4, following Bottinga's work [43] on the fractionation factor between CO2 and CH4:

α(CO2–CH4) = (13C/12C)CO2 / (13C/12C)CH4.                        (5)

At Námafjall [34], Krisuvik, Hveragerdi, Nesjavellir, Namaskard, Torfajökull, and Reykjanes [33], the δD(H2) values are between –358.0‰ and –631.0‰, yielding isotopic formation temperatures of 385°C and 114°C, respectively. These calculated formation temperatures are plotted as a function of H2 concentration in Figure 5.


Figure 5: Formation temperature as a function of H2 concentration for six HT areas (data Table 6)

The maximum geothermal gradient inside the rift zone in Iceland is 150°C/km [44]. Based on this gradient and the formation temperatures calculated as above, it is possible to determine the likely depth of hydrogen formation, which tends to occur between 0.8 and 2.5 km. Isotopic data relating to H2O can also provide information about the source of hydrothermal fluids. The 𝛿𝐷𝐻2O values were found to be between –50.9 ‰ and –97.7‰; these negative values indicate that the water within the hydrothermal system is not derived from seawater but is mostly of meteoric origin. The specific value for Reykjanes (–22.5‰) can be explained by the mixing of seawater with water with lower deuterium content, such as meteoric water [33].
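As a simple arithmetic check of this depth estimate (assuming a linear gradient and neglecting the surface temperature), dividing the isotopic formation temperatures by the 150°C/km gradient gives:

```latex
z \approx \frac{T}{\mathrm{d}T/\mathrm{d}z}, \qquad
z_{\min} \approx \frac{114\,^{\circ}\mathrm{C}}{150\,^{\circ}\mathrm{C\,km^{-1}}} \approx 0.8\ \mathrm{km}, \qquad
z_{\max} \approx \frac{385\,^{\circ}\mathrm{C}}{150\,^{\circ}\mathrm{C\,km^{-1}}} \approx 2.6\ \mathrm{km}
```

which is in line with the 0.8–2.5 km range quoted above, given rounding and the neglected surface temperature.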

Our comparison of data from the literature has allowed us to highlight the most H2-rich areas in Iceland (Figure 6). We considered twelve sites, as follows:

– Theistareykir, Krafla, Námafjall, Namaskard, and Kverkfjoll in the NVZ;

– Landmannalaugar and Torfajökull in the EVZ;

– Hengill, Nesjavellir, Hellisheidi, Krisuvik, and Hveragerdi in the WVZ.


Figure 6: Map of natural H2 emanations in Iceland. Concentration is high in the active volcanic zone and low outside.

Interpretation

Generation of H2

The literature highlights how H2 production is a function of the host rock and its mineralogical composition [45]. The presence of minerals rich in ferromagnesian elements in the host rock allows for the oxidation reaction at the origin of hydrogen generation. In Iceland, the host rock can be divided into three groups: tholeiites, transitional alkali basalts, and alkali olivine basalts. These rocks can contain as much as 15% olivine [19]. Rhyolite, formed by partial melting of basalts, can be considered a fourth host rock. The presence of olivine, a ferromagnesian mineral, should allow hydrogen production in Icelandic hydrothermal systems via the oxidation of iron. In addition, in these systems, when the Cl concentration in hydrothermal fluids is low (<500 ppm), the prevailing secondary minerals include the following [36]: pyrite (FeS2), pyrrhotite (FeS), epidote (Ca2(Al2,Fe3+)(SiO4)(Si2O7)O(OH)), and prehnite (Ca2Al2Si3O10(OH)2). These minerals are rich in iron and sulfide, which allows for the following reaction:

4FeS + 2Ca2Al2Si3O10(OH)2 + 2H2O → 2FeS2 + 2Ca2FeAl2Si3O12(OH) + 3H2       (6)

For waters with higher Cl concentrations (>500 ppm), the minerals involved include the following: pyrite, epidote, prehnite, magnetite (Fe2+Fe3+2O4), and chlorite. Thus, natural hydrogen is likely produced by the oxidation of ferrous, sulfide-rich minerals, and its concentration is controlled by the mineral–fluid equilibrium, which is itself controlled by fluid–rock interactions, as proposed by [32]. Magmatic degassing and other processes, such as crystallization, may also contribute to hydrogen production.

H2 Transport

Hydrogen is considered a mobile, reactive, and poorly soluble gas. Its solubility increases above 57°C, the temperature at which the minimum solubility of H2 is reached. For P > 30 MPa and 200°C < T < 300°C, hydrogen partitions more easily into the gas phase than into the liquid phase [46]. Pressure also plays a role, as greater pressures (i.e., greater depths) lead to greater hydrogen solubilities. Similarly, salinity plays a major role in hydrogen solubility, as described by the “salting-out” effect [47]; in particular, when the NaCl concentration increases, hydrogen solubility decreases. As a result, in the subsurface at depths greater than a few kilometers, the quantity of H2 dissolved in the associated hydrothermal fluid may be large.
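A common way to quantify the salting-out effect mentioned above is the empirical Setschenow relation; the coefficient and concentration below are assumed, order-of-magnitude values used purely for illustration, not fitted figures from [47]:

```latex
\ln\frac{S_0}{S} = k_S\, c_{\mathrm{NaCl}}, \qquad
k_S \approx 0.2\ \mathrm{kg\,mol^{-1}},\; c_{\mathrm{NaCl}} = 1\ \mathrm{mol\,kg^{-1}}
\;\Rightarrow\; \frac{S_0}{S} = e^{0.2} \approx 1.2
```

where S0 and S are the H2 solubilities in pure water and in the NaCl solution, respectively; under these assumed values, the H2 solubility in the brine would be roughly 20% lower than in pure water.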

Comparison with the Mid-Atlantic Ridge and Its Hydrothermal Systems

Hydrothermal Sites of the MAR

For the last 30 years, hydrothermal smokers along the MAR have been known to be sites of natural gas emission, including CH4 and H2 (Figure 7 and Table 7) [12]. In addition to H2 and CH4, these smokers emit primarily CO2, H2S, and trace quantities of Ar and N2 (Figure 8). The maximum hydrogen content recorded to date is 26.5 mmol/kg fluid at the Ashadze site [42] (see location in Figure 7).


Figure 7: Mapping of gas emanations along the Mid-Atlantic Ridge

Table 7: MAR H2 concentrations and temperatures [42,48-53].



Figure 8: CO2, H2S, H2, CH4, N2, and Ar concentrations for hydrothermal vents along the MAR [42, 48-53].

The hydrogen gas escapes through smokers located on fractured and faulted basic to ultrabasic basement. These rocks are variably enriched in ferromagnesian elements, and hydrogen is produced by their serpentinization. Typically, a magmatic body in this setting develops into an ultramafic outcrop. The heat coming from the magmatic body warms up the fluids present in the upper part of the crust. The optimum temperature for the serpentinization reaction is between 200 and 350°C [13]. Fluids interact with the rocks such that the ferromagnesian minerals present in the rock (e.g., olivine, Mg1.8Fe0.2SiO4) are hydrated and destabilized. Simultaneously, water is reduced and the ferrous minerals are oxidized into ferric minerals, which leads to the formation of secondary minerals (e.g., serpentine, Mg3Si2O5(OH)4; magnetite, Fe3O4; and brucite, Mg(OH)2) and the liberation of hydrogen [42], as shown below:

3Mg1.8Fe0.2SiO4 + 4.1H2O → 1.5Mg3Si2O5(OH)4 + 0.9Mg(OH)2 + 0.2Fe3O4 + 0.2H2  (7)

Within approximately the same temperature range, a process of phase separation takes place in the oceanic crust. The vapor phase, which is lighter, rapidly migrates upward through the crust via the fracture network. In contrast, the liquid phase, which is over-concentrated in chemical elements, remains trapped in pores. As the magmatic body cools, the fluid temperature decreases as well. The newly established pressure and temperature differences generate a convective fluid cell, which allows the fluid phase to be released through the crust and onto the ocean floor [54]. The brine then mixes with colder seawater, and the sulfide elements precipitate to form hydrothermal vents of two types: black smokers and white smokers [55]. The black smokers, such as Rainbow and Logatchev, emit HT (>350°C) anoxic fluids. They are also rich in metallic elements such as iron, manganese, and copper, and their CH4 and H2 concentrations are large. The fluids of white smokers are alkaline and colder, with temperatures as low as 70°C (e.g., Lost City). Even though these smokers are located on the seafloor (i.e., at depths of about 3 km), life is prevalent in this environment (Menez [56] and references therein). The fluids of white smokers are also rich in CaS, CaCO3, and CH4, and their H2 concentrations are higher than those of black smokers.

Geological Commonalities/Differences between Hydrothermal Sites of Iceland and the MAR

This synthesis shows some commonalities between the black smokers along the MAR and Icelandic hydrothermal systems (Figure 9). In particular, their fluid temperatures are similar (Tables 1 and 7), approximately between 200 and 350°C, and their fluids are alkaline. In both cases, the heat source is magmatic, fluid transfer is ensured by convective cells, and permeability can be attributed primarily to fractures and faults. Furthermore, H2 generation is ensured by the oxidation of ferrous iron- and sulfide-bearing minerals during the reduction of H2O. In contrast, the data show some interesting differences between the hydrothermal systems of the MAR and Iceland (Figure 9). In Iceland, all of the H2-rich sites are located in the neo-volcanic zone and, therefore, in an HT area. Landmannalaugar and Torfajökull are located on acidic outcrops of rhyolite, while the other ten sites are located on intermediate to basic outcrops predominantly comprising basalt. Therefore, unlike the H2-releasing sites along the MAR, which are located on ultrabasic outcrops, H2-rich areas in Iceland are mostly situated on basic outcrops. Furthermore, Icelandic H2-releasing areas are always characterized by hydrothermally altered hyaloclastite outcrops, which provide a good cap rock.


Figure 9: Schematic illustration of the Mid-Atlantic Ridge (A) and Iceland (B) illustrating the architecture of the active volcanic zone

Discussion of Key Parameters of Hydrogen Formation in Iceland

Some geological differences exist between the MAR and Icelandic hydrothermal systems; however, we believe that these differences are not the main factors controlling the differences in H2 concentrations between the two settings. The Icelandic context is remarkable in that precipitation rates are high and ice caps are extensive. Most of the water infiltrating the upper crust is meteoric in origin or directly linked to the ice caps [33]. Arnason [33] showed that the residence time of these fluids (i.e., the time since their precipitation) varies significantly, ranging from a few decades to thousands of years (i.e., since the last glaciation). Furthermore, the crust here is highly fractured, with a very strong anisotropy of permeability, and hydrogen formation temperatures are between 200 and 350°C.

This synthesis thereby allows us to propose an explanation for the higher H2 concentrations observed in the HT hydrothermal systems of Iceland relative to the MAR systems. We propose the following.

– First, large quantities of water are available in the Iceland systems, which facilitates a rapid and significant water flux for oxidation reactions in the crust.

– Second, owing to the meteoric character of the water flux, NaCl concentrations in the fluid are very low (mostly <500 ppm), resulting in higher hydrogen solubility in the Icelandic fluids. As seen in Figure 4, H2 concentrations are higher at Námafjall, where Cl concentrations are lowest, than at Reykjanes or along the MAR.

– Third, owing to their formation temperatures, Icelandic fluids containing hydrogen are mostly in gaseous form with a minor liquid phase, with the former phase typically containing more hydrogen than the latter [46].

These factors boost hydrogen production such that Icelandic H2 concentrations are higher in many places than those recorded along the MAR.

Conclusion

Our review of the literature allowed us to map the preferred areas for natural hydrogen emissions within Iceland (Figure 6); all of these areas are located within the neo-volcanic zone of this HT geothermal system. The presence of H2-rich zones within the active axis has also been noted in the Goubhet–Asal area, where the Aden Ridge outcrops within the Republic of Djibouti [57]. In the similar context of the Pacific ridge system, on Socorro Island, H2 values reaching 20% within vent gases have also been described [58]. The authors also noted the influence of rainwater and the presence of abiotic CH4. H2-enriched fluids are alkaline and poor in NaCl. Isotopic data and formation temperatures for these fluids can help constrain the conditions of hydrogen formation. The isotopic data show that hydrogen is typically formed at temperatures between 114 and 385°C, equivalent to depths between 0.8 and 2.5 km; this makes hydrogen formation a relatively shallow process. Under these conditions, hydrogen formation occurs through the oxidation of the ferrous minerals of the acidic to basic host rock. In the basic reservoir rocks of Iceland, the primary minerals oxidized during H2 formation include iron sulfides, epidote, and prehnite. Even if it is not currently considered the main reaction, sulfide oxidation could be particularly important in the formation of H2, particularly when H2S is present. Additionally, H2 can also be produced by the degassing of magma, as suggested by Larin [3] and Zgonnik [4]. The H2-producing areas in Iceland and along the MAR appear to be relatively similar; however, the gas concentrations in Icelandic hydrothermal steam tend to be significantly higher than those along the MAR. We posit that this difference is due to the availability of freshwater in Iceland, which does not affect H2 production directly but does affect the solubility of H2, which is higher in freshwater. Finally, it is important to note the high fracture density of the basalt in Iceland, which allows a rapid and constant supply of meteoric waters for the reactions. These parameters influence hydrogen concentrations. The data presented here also highlight temporal variation in hydrogen concentrations. Although the HT hydrothermal systems considered here appear to be active and dynamic, it would be useful to monitor and quantify the real H2 flux, as has been done in Brazil, where structures in the São Francisco Basin emit hydrogen [6,7]. We would not expect to find in the subsurface (where the wells are monitored) the various periodicities registered in the monitored fairy circles (e.g., 24 h cycles and sporadic pulses), which are recorded directly at the surface or where the soil cover is absent. Nevertheless, further data will be required to characterize changes in H2 flow in the geothermal fluids of Iceland. Finally, the geothermal industry is well established in Iceland, with several important geothermal power plants located in the neo-volcanic zone that allow for electricity production and the heating of farms and other buildings. These power plants release non-condensable gas into the atmosphere, including CO2, H2S, and H2. The Hellisheidi geothermal power plant alone produced 640 tons of H2 in 2011. In the future, the production of natural hydrogen without significant emissions could be possible using classical gas-separation processes.

Acknowledgement

The authors gratefully acknowledge Andry Stefansson for the collaboration and the access to the Reykjanes and Namafjall data set, Dr. Dan. Levy and PhD student Gabriel Pasquet, both from E2S UPPA, for many interesting discussions on natural H2. This work is extracted from the Master’s Thesis of Valentine Combaudon, funded by Engie. We thank Isotope Editing for providing specialist scientific editing services for a draft of this manuscript.

References

  1. Moretti I (2019) H2: energy vector or source? L'Actualité Chimique 442 (July-August): 15-16.
  2. Prinzhofer A, Cissé CST, Diallo AB (2018) Discovery of a large accumulation of natural hydrogen in Bourakebougou (Mali). International Journal of Hydrogen Energy 43: 19315-19326.
  3. Larin N, Zgonnik V, Rodina S, Eric D, Alain P, et al. (2015) Natural molecular hydrogen seepage associated with surficial, rounded depressions on the European craton in Russia. Natural Resources Research. 24: 369-383.
  4. Zgonnik V (2020) The occurrence and geoscience of natural hydrogen: A comprehensive review. Earth-Science Reviews 243.
  5. Zgonnik V, Beaumont V, Deville E, Nikolay L, Daniel P, et al. (2015) Evidence for natural molecular hydrogen seepage associated with Carolina bays (surficial, ovoid depressions on the Atlantic Coastal Plain, Province of the USA). Progress in Earth and Planetary Science 2.
  6. Prinzhofer A, Moretti I, Francolin J, Cleuton P, Angélique D’A, et al. (2019) Natural hydrogen continuous emission from sedimentary basins: The example of a Brazilian H2-emitting structure. International Journal of Hydrogen Energy 44: 5676-5685.
  7. Moretti I, Prinzhofer A, Françolin J, Cleuton P, Maria R, et al. (2021) Long-term monitoring of natural hydrogen superficial emissions in a brazilian cratonic environment. Sporadic large pulses versus daily periodic emissions. International Journal of Hydrogen Energy 46: 3615-3628.
  8. Moretti I, Brouilly E, Loiseau K, et al. (2021) Hydrogen Emanations in Intracratonic Areas: New Guidelines for Early Exploration Basin Screening. Geosciences 11.
  9. Frery E, Langhi L, Maison M, et al. (2021) Natural hydrogen seeps identified in the North Perth Basin, Western Australia. International Journal of Hydrogen Energy 46: 31158-31173.
  10. Boreham CJ, Edwards DS, Czado K, Rollett N, Wang L, et al. (2021) Hydrogen in Australian natural gas: occurrences, sources and resource. Journal of the Australian Production and Petroleum Exploration Association 61.
  11. Klein F, Tarnas J, Bach W (2020) Abiotic Sources of Molecular Hydrogen on Earth. Elements 16: 19-24.
  12. Charlou JL, Donval JP, Fouquet Y, Jean-Baptiste P, Holm N (2002) Geochemistry of high H2 and CH4 vent fluids issuing from ultramafic rocks at the Rainbow hydrothermal field (36°14’N, MAR). Chemical Geology 191: 345-359.
  13. Cannat M, Fontaine F, Escartin J (2010) Serpentinization and associated hydrogen and methane fluxes at slow spreading ridges. In: Diversity of Hydrothermal Systems on Slow Spreading Ocean Ridges, Geophysical Monograph Series 188.
  14. Worman SL, Pratson J, Karson, Schlesinger W (2020) Abiotic hydrogen (H2) sources and sinks near the Mid-Ocean Ridge (MOR) with implications for the subseafloor biosphere. Proceedings of the National Academy of Sciences of the United States of America.
  15. Martin E, Paquette JL, Bosse V, Ruffet G, Tiepolo M, et al. (2011) Geodynamics of rift–plume interaction in Iceland as constrained by new 40Ar/39Ar and in situ U–Pb zircon ages. Earth and Planetary Science Letters 311: 28-38.
  16. Garcia S, Arnaud N, Angelier, J, Françoise B, Catherine H, et al. (2003) Rift jump process in Northern Iceland since 10 Ma from 40Ar/39Ar geochronology. Earth and Planetary Science Letters 214: 529-544.
  17. IINH (2020) Metadata and download: https://en.ni.is/node/27919
  18. Björnsson A (1985) Dynamics of crustal rifting in NE Iceland. Journal of Geophysical Research: Solid Earth 90: 10151-10162.
  19. Jakobsson SP (1972) Chemistry and distribution pattern of recent basaltic rocks in Iceland. Lithos 5: 365-386.
  20. Sigmundsson F, Einarsson P, Hjartardóttir ÁR, Vincent D, Kristín J, et al. (2021) Geodynamics of Iceland and the signatures of plate spreading. Journal of Volcanology and Geothermal Research 391.
  21. Bourgeois O, Dauteuil O, Vliet‐lanoë BV (2000) Geothermal control on flow patterns in the Last Glacial Maximum ice sheet of Iceland. Earth Surface Processes and Landforms: The Journal of the British Geomorphological Research Group 25: 59-76.
  22. Garcia S, Angelier J, Bergerat F, Catherine H, Olivier D, et al. (2008) Influence of rift jump and excess loading on the structural evolution of northern Iceland. Tectonics. 27.
  23. Mortensen A (2013) Geological mapping in volcanic regions: Iceland as an example. Short Course on Conceptual Modelling of Geothermal Systems, organized by UNU-GTP and LaGeo, Santa Tecla, El Salvador.
  24. Gudmundsson A. (2000) Dynamics of volcanic systems in Iceland: example of tectonism and volcanism at juxtaposed hot spot and mid-ocean ridge systems. Annual Review of Earth and Planetary Sciences 28: 107-140.
  25. Bodvarsson G (1961) Physical characteristics of natural heat resources in Iceland. Jökull 11: 29-38.
  26. Ármannsson H, Benjamínsson J, Jeffrey A (1989) Gas changes in the Krafla geothermal system, Iceland. Chemical Geology 76: 175-196.
  27. Ármannsson H (2016) The fluid geochemistry of Icelandic high temperature geothermal areas. Applied Geochemistry 66:14-64.
  28. Arnórsson S, Stefánsson A, Bjarnason, JÖ (2007) Fluid-fluid interactions in geothermal systems. Reviews in Mineralogy and Geochemistry 65: 259-312.
  29. Árnason K (2020) New Conceptual Model for the Magma-Hydrothermal-Tectonic System of Krafla, NE Iceland. Geosciences 10.
  30. Pope EC, Bird DK, Arnorsson S, et al. (2016) Hydrogeology of the Krafla geothermal system, northeast Iceland. Geofluids 16: 175-197.
  31. Thien BMJ, Kosakowski G, Kulik DA (2015) Differential alteration of basaltic lava flows and hyaloclastites in Icelandic hydrothermal systems. Geothermal Energy 3.
  32. Stefánsson A (2017) Gas chemistry of Icelandic thermal fluids. Journal of Volcanology and Geothermal Research 346: 81-94.
  33. Arnason B (1977) The hydrogen-water isotope thermometer applied to geothermal areas in Iceland. Geothermics 5: 75-80.
  34. Ármannsson H, Gíslason G, Torfason H (1986) Surface exploration of the Theistareykir high-temperature geothermal area, Iceland, with special reference to the application of geochemical methods. Applied geochemistry 1: 47-64.
  35. Arnórsson S, Grönvold K, Sigurdsson S (1978) Aquifer chemistry of four high-temperature geothermal systems in Iceland. Geochimica et Cosmochimica Acta 42: 523-536.
  36. Arnórsson S, Gunnlaugsson E (1985) New gas geothermometers for geothermal exploration—calibration and application. Geochimica et Cosmochimica Acta 49: 1307-1325.
  37. Sano Y, Urabe A, Wakita H, Hitoshi C, Hitoshi S (1985) Chemical and isotopic compositions of gases in geothermal fluids in Iceland. Geochemical journal 19: 135-148.
  38. Sigvaldason GE (1966) Chemistry of thermal waters and gases in Iceland. Bulletin Volcanologique 29: 589-604.
  39. Stefánsson A, Arnórsson S, Gunnarsson I, Hanna K, Einar G (2011) The geochemistry and sequestration of H2S into the geothermal system at Hellisheidi, Iceland. Journal of Volcanology and Geothermal Research 202: 179-188.
  40. Arnórsson S (1986) Chemistry of gases associated with geothermal activity and volcanism in Iceland: A review. Journal of Geophysical Research: Solid Earth 91: 12261-12268.
  41. Combaudon V, Moretti I, Kleine B, Stefansson A (2021) Hydrogen emissions from hydrothermal fields in Iceland and comparison with the Mid-Atlantic Ridge, Submitted to International Journal of Hydrogen Energy.
  42. Charlou JL, Donval JP, Konn C, et al. (2010) High production and fluxes of H2 and CH4 and evidence of abiotic hydrocarbon synthesis by serpentinization in ultramafic-hosted hydrothermal systems on the Mid-Atlantic Ridge. Geophysical Monograph Series 188: 265-296.
  43. Bottinga Y (1969) Calculated fractionation factors for carbon and hydrogen isotope exchange in the system calcite-carbon dioxide-graphite-methane-hydrogen-water vapor. Geochimica et Cosmochimica Acta 33: 49-64.
  44. Flóvenz ÓG, Saemundsson K (1993) Heat flow and geothermal processes in Iceland. Tectonophysics 225: 123-138.
  45. Klein F, Bach W, Mccollom T (2013) Compositional controls on hydrogen generation during serpentinization of ultramafic rocks. LITHOS 178: 55-69.
  46. Bazarkina EF, Chou IM, Goncharov AF (2020) The Behavior of H2 in Aqueous Fluids under High Temperature and Pressure. Elements: An International Magazine of Mineralogy, Geochemistry, and Petrology 16: 33-38.
  47. Lopez-Lazaro C, Bachaud P, Moretti I, et al. (2019) Predicting the phase behavior of hydrogen in NaCl brines by molecular simulation for geological applications. Bulletin de la Société Géologique de France 190.
  48. Reeves EP, Mcdermott JM, Seewald JS (2014) The origin of methanethiol in midocean ridge hydrothermal fluids. Proceedings of the National Academy of Sciences 111: 5474-5479.
  49. Proskurowski G, Lilley MD, Kelley DS, et al. (2006) Low temperature volatile production at the Lost City Hydrothermal Field, evidence from a hydrogen stable isotope geothermometer. Chemical Geology 229: 331-343.
  50. Kelley DS, Karson JA, Früh-green GL, et al. (2005) A serpentinite-hosted ecosystem: The Lost City hydrothermal field. Science 307: 1428-1434.
  51. James RH, Elderfield H, Palmer MR (1995) The chemistry of hydrothermal fluids from the Broken Spur site, 29 N Mid-Atlantic Ridge. Geochimica et Cosmochimica Acta 59: 651-659.
  52. Charlou JL, Donval JP, Douville E, et al. (2000) Compared geochemical signatures and the evolution of Menez Gwen (37 50′ N) and Lucky Strike (37 17′ N) hydrothermal fluids, south of the Azores Triple Junction on the Mid-Atlantic Ridge. Chemical geology 171: 49-75.
  53. Campbell AC, Palmer MR, Klinkhammer GP, et al. (1988) Chemistry of hot springs on the Mid-Atlantic Ridge. Nature 335: 514-519.
  54. Coumou D, Driesner T, Weis P, et al. (2009) Phase separation, brine formation, and salinity variation at Black Smoker hydrothermal systems. Journal of Geophysical Research: Solid Earth 1143.
  55. Corliss JB, Dymond J, Gordon LI, et al. (1979) Submarine thermal springs on the Galapagos Rift. Science 203: 1073-1083.
  56. Menez B (2020) Abiotic Hydrogen and Methane: Fuels for Life. Elements 16: 39-40.
  57. Pasquet G, Houssein H, Sissmann O, et al. (2021) An attempt to study natural H2 resources across an oceanic ridge penetrating a continent: The Asal–Ghoubbet Rift (Republic of Djibouti). Submitted to Journal of African Earth Sciences.
  58. Taran Y, Varley N, Inguaggiato S, Cienfuegos E (2010) Geochemistry of H2- and CH4-enriched hydrothermal fluids of Socorro Island, Revillagigedo Archipelago, Mexico: evidence for serpentinization and abiogenic methane. Geofluids 10: 542-555.

Application of Artificial Intelligence in the Diagnosis and Treatment of COVID-19 Lung Disease

DOI: 10.31038/JIPC.2021122

Introduction

COVID-19 spread worldwide from Wuhan, China, in 2019, for reasons that were initially unknown [1,2]. The disease is caused by the SARS-CoV-2 virus and infected many people in different parts of the world in a short time. According to global statistics, about 255 million people have been infected with the illness so far, and 5.12 million people have died from it [3]. The emergency has led health professionals and centers to develop guidelines to break the chain of transmission and to treat people. Common symptoms include fever, cough, tiredness, and loss of smell or taste. In addition, patients may experience sore throat, headache, irritated eyes, and diarrhea. More severe cases of the disease can involve chest pain and shortness of breath, in which case the person should see a doctor or go to a hospital immediately [4]. Since its emergence, the virus has developed mutations that have increased the transmissibility and severity of the disease, which has ultimately increased mortality. Notable variants include Alpha, Beta, Gamma, and Delta, of which the Delta variant has proven the most aggressive. Although universal vaccination has made humans more resistant to the virus, the disease still leads to injury and death [5].

Methods of Diagnosis

One of the most important aspects of dealing with COVID-19 is the process of diagnosis, since diagnosis in the early stages of infection plays an important role in the treatment of patients. A variety of methods have been used for this purpose, the most common of which is the RT-PCR test [6]. Another commonly used solution is medical CT scans and X-rays, which doctors can use to identify areas of lung involvement and their severity [7]. The significant problem with this method is that pulmonary signs do not usually appear in the early stages of the illness; therefore, it leads to delays in the treatment process [8]. Another procedure is the diagnostic approach based on blood tests, which is discussed further in this study. This test has advantages over other methods: it costs less than other approaches, and the result is available in a shorter time [9]. Combining artificial intelligence techniques with diagnostic tests can significantly increase the accuracy and speed of detection [10]. For example, deep convolutional neural networks can categorize and segment medical images based on symptoms and infections in the lungs [11], and various machine learning methods and multilayer perceptron networks can analyze and categorize medical and statistical data [12].

Data Information

The data used in this study include 1104 cases, of which 531 samples are from persons with COVID-19 and 573 samples are from people who do not have the disease. The blood parameters used include CRP, lymphocyte count, platelet count, WBC count, and LDH. The target label for each blood test sample is zero or one, distinguishing the two categories. The data and labels were prepared and provided by specialists and physicians of Masih Daneshvari Hospital in Iran.

Details of Experiment

The classifier is evaluated using K-fold cross-validation (K = 5). Thus, at each stage, 80% of the data is used to train and design the classifier and 20% to test it. The training process is repeated five times, and the final results are obtained as the average over these runs. Various methods were implemented on the blood test data, and the best result was obtained by an ensemble approach combining three methods: K-nearest neighbors, random forest, and a multilayer perceptron. This method determines the final class by voting among the three classifiers. K-nearest neighbors and random forest are common machine learning methods described in [13] and [14], respectively; information about the multilayer perceptron network is available in [15].
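The snippet below is a minimal sketch of the evaluation scheme described above, not the authors' actual code: the data arrays, column layout, and hyperparameters are placeholders, and scikit-learn is assumed as the implementation library.

```python
# Sketch of 5-fold cross-validation of a majority-voting ensemble (KNN + random forest + MLP).
# X stands in for the five blood parameters (CRP, lymphocyte, platelet, WBC, LDH) and
# y for the 0/1 COVID-19 label; both are random placeholders here.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1104, 5))        # placeholder blood-test features
y = rng.integers(0, 2, size=1104)     # placeholder labels (0 = non-COVID, 1 = COVID-19)

# Hard (majority) voting over the three classifiers named in the text.
ensemble = VotingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32, 16),
                                            max_iter=2000, random_state=0))),
    ],
    voting="hard",
)

# K = 5: each fold trains on 80% of the data and tests on the remaining 20%;
# the reported figures are the averages over the five folds.
accs, f1s = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    ensemble.fit(X[train_idx], y[train_idx])
    pred = ensemble.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred))

print(f"mean accuracy = {np.mean(accs):.3f}, mean F1 = {np.mean(f1s):.3f}")
```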

Results

The classification results are based on the confusion matrix and the criteria of accuracy, precision, recall, and F1-score, which are given in Equations (1) to (4). Negative corresponds to people who do not have COVID-19 and Positive to those who do.

Accuracy = (TP + TN) / (TP + TN + FP + FN)     (1)

Precision = TP / (TP + FP)     (2)

Recall = TP / (TP + FN)     (3)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)     (4)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.

The confusion matrices for the training and test sets are shown in Figure 1. Element (1,1) of each matrix corresponds to the number of non-COVID persons that are classified correctly, and element (2,2) to the number of COVID-19 patients who have been correctly diagnosed.
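For illustration, the short sketch below applies Equations (1) to (4) to a 2×2 confusion matrix laid out with the non-COVID class first; the numbers are invented for the example, not the study's actual matrices.

```python
# Minimal sketch: metrics of Equations (1)-(4) from a confusion matrix
# arranged as [[TN, FP], [FN, TP]] (rows = true class, columns = predicted class).
import numpy as np

cm = np.array([[92, 22],    # element (1,1): non-COVID persons classified correctly (TN)
               [17, 90]])   # element (2,2): COVID-19 patients correctly diagnosed (TP)

tn, fp, fn, tp = cm.ravel()
accuracy  = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```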


Figure 1: Confusion matrices of the training set (left) and test set (right).

Conclusion

According to the results, just a few blood parameters can be used to diagnose COVID-19 with, on average, 82.7% accuracy, 84.9% recall, 80.4% precision, and an 82.6% F1-score. The more valid data that are available, the higher the accuracy and sensitivity become. On the other hand, given the diversity of artificial intelligence architectures, the performance of the classifier can be improved with appropriate changes. This method can be used by the public at a lower cost in areas that do not have sufficient facilities.

References

  1. Huang C, Wang Y, Li X, Ren L, et al. (2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet 395: 497-506.
  2. Wu F, Zhao S, Yu B, Chen YM, Wang W, et al. (2020) A new coronavirus associated with human respiratory disease in China. Nature 579: 265-269. [crossref]
  3. Ritchie H, Mathieu E, Rodés-Guirao L, Appel C, et al. (2020) Coronavirus Pandemic (COVID-19). Our World in Data.
  4. Singhal T (2020) A Review of Coronavirus Disease. The Indian Journal of Pediatrics 1-6.
  5. Forchette L, William S, Liu T (2021) A comprehensive review of COVID-19 virology, vaccines, variants, and therapeutics. Curr Med Sci 9: 1-15. [crossref]
  6. Kasteren PBV, Veer BVD, Brink SVD, Wijsman L, Jong JD, et al. (2020) Comparison of seven commercial RT-PCR diagnostic kits for COVID-19. Clinical Virology 128: 104412. [crossref]
  7. Shah FM, Joy SKS, Ahmed F, Hossain T, Humaira M, et al. (2021) A Comprehensive Survey of COVID-19 Detection Using Medical Images. SN comput Sci 2: 434. [crossref]
  8. Bernheim A, Mei X, Huang M, Yang Y, Fayad ZA, et al. (2020) Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection. Radiology 295: 200463.
  9. Mehralian S, Zaferani J, Effat, Shahrzad S, Kashefinishabouri Farnaz, et al. (2021) Rapid COVID-19 Screening Based on the Blood Test using Artificial Intelligence Methods. Journal of Control 14.
  10. Hussain AA, Bouachir O, Turjman FA, Aloqaily M (2020) AI techniques for COVID-19. IEEE Access 8: 128776-128795.
  11. Rahman T, Khandakar A, Qiblawey Y, Tahir A, Kiranyaz S, et al. (2021) Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Computers in Biology and Medicine 132: 104319. [crossref]
  12. Borghi, PH, Zakordonets O, Teixeira JP (2021) A COVID-19 time series forecasting model based on MLP ANN. Procedia Computer Science 181: 940-947. [crossref]
  13. Theodoridis S, Koutroumbas K (2009) Pattern Recognition, Fourth Edition: 56-59.
  14. Breiman L (2001) Random forests. Machine learning 45: 5-32.
  15. Mohammad Teshnehlab, Pourya Jafari (2015) Neural Networks and Advanced Neuro-Controllers. Tehran: KN Toosi University of Technology.

Single Dose Acute Toxicology in a Preclinical Trial: The Basic Step in Drug Discovery and Development

DOI: 10.31038/JPPR.2021443

 

Extended Abstract: 21st European Biotechnology Congress 2018, Moscow, Russia

The dose in a preclinical trial literally refers to the amount of a test compound that has to be administered to a study subject to evaluate its pharmacological suitability. Depending on the objective of the trial, different dose levels have to be prepared and administered separately to the study animals in order to determine the suitable dose of a test compound for the next phase of a trial. There is, however, a scientific misconception about the role of a dose in experimental pharmacology, in which it is considered the fundamental concept of toxicology that avoids the poison of a test compound; this is far from scientific reality, because the nature of a test compound cannot be changed by quantification alone. The natural property of a compound can neither be changed nor eliminated by limiting the amount of the dose administered to a study animal. This contradicts the scientific law of physics which states that “matter can neither be created nor destroyed but rather can be transformed into another form of matter by the use of energy”. The basic principle of toxicology, however, deviates from this scientific reality in that it presumes to create a compound with a different pharmacological property from a single test compound by quantification alone. In other words, it uses a hypothetical concept in experimental pharmacology in which the lower dose is considered safe even when a higher dose of the same test compound is unsafe for life. It is important to note that only one molecule of a test compound binds with the binding domain of a drug receptor to trigger a biological signal. The pharmacologic property of one molecule of a test compound, therefore, cannot differ from the pharmacologic property of multiple molecules of the same compound, despite the difference in the magnitude of the biological response that may manifest in study animals within the shortest possible time in the course of metabolism. This means that one molecule of a test compound can trigger the same physio-pharmacological mechanisms as ten molecules of the same test compound within the biological processes of an organism. However, the magnitude of the biological response against ten molecules of a test compound could be ten times higher than that against one molecule of the same test compound administered to study animals with similar biological functionality and strength of natural immunity. If we administer a higher dose to a study animal, we can notice a response within a shorter period of time than with a lower dose of the same test compound. The adverse effect of a dose of a test compound administered to a study animal is directly proportional to the magnitude of the immunoglobulin immune response against its harmful molecules [1]. Immunoglobulins are cell-signalling proteins embedded in the cell membrane with the ability to detect the harmful molecules of a test compound, to which they respond by activating B lymphocytes to proliferate and produce new immunoglobulin molecules. Immunoglobulins also exist freely in the plasma, but these do not participate in cell signalling and cell activation mechanisms [2].
The amount of newly formed immunoglobulins increases in blood serum as the number of harmful molecules of a test compound interacting with signalling proteins increases, except with a test compound that has a depressant effect on the metabolic system of an organism, which can also depress the amount of immunoglobulin molecules in blood serum, as the immune and metabolic systems are directly interrelated. However, the amount of immunoglobulins in blood serum usually declines when the toxic severity of a dose administered to a study animal has reached its peak, at which point the signalling mechanisms of the immunoglobulins are seemingly desensitised as the number of toxic molecules of the dose interacting with signalling proteins increases [1]. Thus, the immunoglobulin response is crucial in experimental pharmacology and toxicology to determine the toxic severity of a dose, which in turn enables us to assess the safety pharmacology of a test compound. The dose of a test compound is said to be safe when the magnitude of its toxic severity is ≤0.

Previous studies conducted in 2011 and 2019 showed that the pharmacological property of any amount of test material administered to study Balb/c mice remained intact, whether the amount was high or very low [1-3]. The amount of the administered dose, however, changed the magnitude of the biological response and the length of time after which the undesired effect was manifested in Balb/c mice treated orally. The pharmacological effect of a dose starts at the biochemical and molecular level of the exposed organism, which may cause a biological response at the cellular level and eventually lead to a biological response at the organismal level as the reactive dose in the natural processes of the organism increases, all of which has a regulatory mechanism at each level [4]. The biological effect of lower doses is perhaps limited to the molecular level, impacting the health of exposed organisms in the long run as genetic disorders, metabolic disorders, or cancers of different types, depending on the site of damage introduced into the biological system. A disease such as cancer may result from an abnormality in the function or structure of a single cell induced by a dose of noxious chemicals. This implies that the amount of a dose cannot avoid or eliminate the harmful property of a drug administered to a study animal. In the previous experimental studies, all tested chemicals were toxic at any amount, with different intensities, which were computed from biological responses as the toxic reaction rate and the toxic severity during the course of metabolism [1,3]. This biological approach considered one independent and two dependent research variables in order to compute the toxic severity and toxic reaction rate of a dose administered orally to study Balb/c mice. The independent and dependent research variables used were (1) the administered dose, (2) the elapsed time for the manifestation of a recognisable adverse effect in the biological system of the treated Balb/c mice, and (3) the change in the amount of immunoglobulins in blood serum after dosing. The pharmacological properties of the tested chemicals were determined by the computed results of both the toxic reaction rate and the toxic severity rather than by the amount of the test chemical administered to the study Balb/c mice. The toxic reaction rate refers to the number of harmful molecules of the administered dose that have interacted with their receptor and manifested an undesired biological response in a study animal, computed using a mathematical formula (formula 1, in mg/sec), whereas the toxic severity refers to the magnitude of the biological harm or injury caused by the dose of a drug administered to a study animal, also computed using a mathematical formula (formula 2, in %/sec), where r is the toxic reaction rate, s is the toxic severity, d is the administered dose, t is the elapsed time for adverse effect manifestation, and formula 3 is the change in immune response after dosing. The study revealed that the toxic severity of a dose was the reason for the limited lifespan of treated animals, whereas the toxic reaction rate accounted for the safety pharmacology of the tested chemicals. The higher the dose administered to a study animal, the higher the toxic severity that influenced the lifespan of the exposed Balb/c mice. This implies that the dose determines not safety but rather the lifespan of study animals in their natural environment.
This means that the harmful effect of a test chemical within the biological processes of an organism is determined by its chemical nature rather than by the amount of the dose administered. The amount of a dose can, however, shorten the time at which biochemical and physio-pathological changes become manifest in treated animals. Since a higher dose can produce an undesirable biological response within a short period and a lower dose only after a long period, categorising a single test chemical into a median lethal dose (LD50) and a median effective dose (ED50) provides no scientific ground for declaring, at a given point in time, that the lower dose is safe and the higher dose unsafe for life. The undesirable biological effects of lower doses of a test compound are likely to be manifested in the late ages of an organism, which could be why cancer is more prevalent in the elderly population: the etiologic agent is perhaps introduced into our biological processes at an early age and its undesired effect manifested in later life. This means that if the higher dose is lethal to a study animal, there is no scientific reason to declare that the lower dose is safe. A test compound is considered toxic not only when it has caused death but also when it has caused undesirable biological mechanisms in the study animals.

Evolution has shown that all living things inherit desirable and typical genetic material from their predecessors through reproduction, which naturally creates differences among them [5]. Today, however, there are millions of humans and animals with anomalies and disabilities, which may be hereditary or non-hereditary depending on the site of damage introduced into the cell, tissue, or organ system; drugs are among the highest risk factors for such damage. Therapeutic drugs such as valproic acid, thalidomide, and warfarin were proved to be teratogens only after being on the market for many years [4]. Many other diseases are caused by chromosomal abnormalities and gene defects, such as cri du chat syndrome and Down syndrome (chromosomal abnormalities) and achondroplasia and fragile X syndrome (gene defects), respectively [5]. The genetic changes that cause these diseases can be a whole additional chromosome, a whole missing chromosome, or a change of a single base in a gene sequence [5]. However, beyond speculation, no specific cause has been defined for the abnormal chromosomes and defective genes underlying these diseases. Many chemical agents can damage the nucleus of a cell and other cell organelles, including prescribed medications with adverse effects, poisons, environmental pollutants, and recreational drugs such as alcohol, and these are high risk factors for the genetic disorders causing these diseases. In general, a drug's mode of damaging the biological structure of an organism is diverse, depending on the chemical nature of the drug and on the nature of the biological component of the organism that responds to it. The undesirable effect of a drug might be manifested at the biochemical, cellular, or organismal level depending on the amount administered, so categorising a single test material into a median lethal dose (LD50) and a median effective dose (ED50) cannot ensure the safety pharmacology of any test drug.

References

  1. Belay YT (2019) Study of the principles in the first phase of experimental pharmacology: the basic step with assumption hypothesis. BMC Pharmacology and Toxicology.
  2. Schroeder HW, Cavacini L (2010) Structure and function of immunoglobulins. J Allergy Clin Immunol 125: S41-S52. [crossref]
  3. Belay Y (2011) Study of safety and effectiveness of traditional dosage forms of the seed of Aristolochia elegans mast against malaria and laboratory investigation of pharmaco-toxicological properties and chemical constituents of its crude extracts. Ann Trop Med Public Health 4: 33-41.
  4. Belay YT (2019) Misconception about the role of a dose in pharmacology: short review report on the biological and clinical effects. Adv Bioeng Biomed Sci Res (ABBR) 2.
  5. Huelsenbeck JP, Ronquist F, Nielsen R, Bollback JP (2001) Bayesian Inference of Phylogeny and Its Impact on Evolutionary Biology. Science 294: 2310-2314. [crossref]

Accommodating Evidence of Traditional Use for Medicines Within Risk-Based Regulation in Australia

DOI: 10.31038/JPPR.2021442

 

For reasons including affordability, accessibility, cultural heritage and health benefits, traditional medicines are still important contributors to health care. As the regulation of medicines becomes increasingly evidence and risk based, regulators have the challenge of dealing with the role of traditional use evidence in assessing the safety and efficacy of traditional medicines. Regulators must protect the consumer while also respecting the rights of consumers to have access as far as possible to medicines of their choice. Evidence for the safety and efficacy of traditionally used medicines is based largely on observation and experience over extended periods, sometimes gained over centuries of use. If Traditional Chinese Medicine (TCM) is used as an example, the evidence has evolved over millennia of use, and is still evolving, and the information is passed on through documentation such as in treatises and the education and training of practitioners (Figure 1).


Figure 1: Rationale for considering evidence based on traditional use in Chinese medicine.

However, in recent times more reliable scientific methods for establishing the safety and efficacy of medicines and medical treatments have been developed. While industry and researchers are active in using these newer methods to substantiate the safety and efficacy of traditional medicines, evidence of traditional use will still be relied on to justify the supply of many traditional medicines, because of their compositional complexity (for example, when raw herbs are used) and because there is little incentive to conduct expensive clinical studies when the intellectual property arising from studies on natural materials may not be protectable. A dilemma is that traditional use evidence, while an important source of information, ranks quite low on the scale of reliability of evidence (Figure 2).


Figure 2: Levels of evidence – position of traditional use evidence.

The challenge for regulators therefore is how to apply adequate care to protect consumers from unsafe or ineffective medicines without denying access to traditional medicines. While some countries have avoided the issue by classifying such products as foods or unregulated products, Australia has taken a pragmatic approach towards their risk management for both the supply of proprietary medicines and the individualised formulation of prescriptions by practitioners.

Accommodating Traditional Use Evidence in Risk-based Regulation of Proprietary Medicines

Premarket Regulation

In Australia, medicines other than some very low risk classes must be entered onto the Australian Register of Therapeutic Goods (ARTG) on the basis of their acceptable quality, safety and efficacy before they can be supplied to the market. All medicines on the ARTG are expected to be manufactured in compliance with Good Manufacturing Practice by licensed manufacturers. To deal with quality and safety, there are two levels of entry onto the ARTG: ‘listed’ medicines for lower risk products and ‘registered’ medicines for higher risk products. Any medicine whose safety and efficacy are based on traditional use evidence is classified as a listed medicine; it can only contain active ingredients included in a defined list for which safety is well established [1], it is limited to therapeutic claims in a defined list which mainly refer to the treatment of minor, self-limiting conditions [2], and the supplier must be able to provide the traditional use evidence upon which the therapeutic claims are based. The product label must indicate that the intended purpose of the medicine is based on traditional use. If the supplier wishes to make more substantial therapeutic claims outside this framework, the justification must be based on scientific evidence.

Post Market Regulation

The channels of supply of medicines in the marketplace are determined through the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP) [3], which restricts the supply of certain substances to prescription by (primarily western) medical practitioners, to supply through a pharmacy, or to supply directly by a pharmacist. Because of the premarket controls that minimise their risk, most proprietary traditional medicines are unscheduled, allowing unrestricted supply. Advertising of non-prescription medicines to the public is limited to the therapeutic claims included in the ARTG and must indicate that the claims are based on traditional use evidence [4]. The regulator (the Therapeutic Goods Administration) monitors and responds to adverse events to medicines.

Accommodating Traditional Use Evidence in the Regulation of Traditional Medicine Practitioners

TCM practitioners are a nationally regulated, allied health profession under the jurisdiction of the National Registration and Accreditation Scheme (NRAS) [5]. They must be registered to practise, subject to educational and professional practice standards similar to those of other health professions such as medicine, nursing, dentistry and pharmacy. While the level of evidence required to support the professional health services provided by a practitioner to a patient is not defined in law, individualised prescriptions based on traditionally used ingredients cannot contain ingredients restricted by the SUSMP, and the practitioner's registration standards require that appropriate informed consent is given by the patient after receiving information about the intended treatment and any associated risks. While advertising of professional health services to the public can refer to more substantial medical conditions, the advertising must be able to be supported by scientific evidence, not just traditional evidence, because advertising to the public provides information usually without any consultation with the practitioner. Traditional health professions other than TCM come under the regulation of each Australian State or Territory. These practitioners operate under negative licensing, whereby they can practise without being registered but are subject to a national Code of Conduct for Health Care Workers [6], which contains principles for practice and advertising similar to those required for health professions subject to the NRAS. There are strong complaint systems in place whereby anyone can submit concerns about individual practitioners (Figure 3).


Figure 3: Levels of evidence and risk based regulation.

Conclusion

The primary role of health regulators is to apply a regulatory scheme that moderates risks sufficiently to protect consumers without inappropriate hindrance to access or to industry. This paper describes the procedures used in Australia to moderate risks when relying on evidence based on traditional use for the efficacy and safety of traditional medicines.

References

  1. Therapeutic Goods (Permissible Ingredients) Determination: https://www.legislation.gov.au/Details/F2021L01108
  2. Therapeutic Goods (Permissible Indications) Determination, tables 14 and 15: https://www.legislation.gov.au/Details/F2021L00056
  3. Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP): https://www.tga.gov.au/publications/poisons-standard-susmp
  4. Therapeutic Goods Advertising Code: https://www.legislation.gov.au/Details/F2021C00845
  5. National Registration and Accreditation Scheme (NRAS): https://www.coaghealthcouncil.gov.au/NRAS
  6. National Code of Conduct for Health Care Workers: https://www.coaghealthcouncil.gov.au/NationalCodeOfConductForHealthCareWorkers

An Overview of the First Organic Shrimp Model in the Mekong Delta of Vietnam

DOI: 10.31038/AFS.2021344

Abstract

Organic agriculture has become a global trend as the demand for cleaner products increases worldwide. This paper reviews several important aspects and assesses the possibility of further expansion of the first internationally certified organic shrimp model in the coastal part of the Mekong delta of Vietnam. The model seems appropriate in physical terms (quality of water, sediment, and soils; mangrove growth) and in shrimp yields. However, managerial challenges (e.g. the assessment methods for certification, the mechanism of payment, benefit sharing, and social and environmental benefits) still exist and make its efficacy questionable. Accordingly, the model has not been very attractive to the coastal communities. Although strongly favored by the natural conditions and supported by international organizations and the government, the model will only be expanded further in the coastal part of the Mekong delta of Vietnam if these challenges are mitigated.

Organic Shrimp Models for Cleaner Products

Organic agriculture has developed rapidly and has recently become a trend worldwide in the context of increasing demand for cleaner products [1,2]. In the aquaculture sector, organic shrimp models have been introduced in which shrimps and mangroves are raised on the same farms in a near-natural environment [3-5]. These models have been developed in the coastal areas of many tropical countries, such as Thailand, Bangladesh, Indonesia, India, Madagascar, and Vietnam [3,6-8]. In general, shrimps are raised in polyculture systems without the use of antibiotics and chemicals, and with special emphasis on the protection of mangrove forests and mangrove ecosystems [9]. Shrimps harvested from the models are examined and certified as ‘organic shrimp’ by several organizations, such as Ecocert (France), IMO – Institute of Market Ecology (Switzerland), the National Programme for Organic Production (India), and the Japanese Agricultural Organic Standard (Japan). With the rising health and environmental awareness of global consumers, these models are expected to grow faster in the near future [10,11].

Naturland is one of the world’s leading international associations for organic agriculture [9,12]. The principles of Naturland for organic aquaculture are composed of:

  1. Careful selection of sites for aquaculture farms.
  2. Protection of adjacent ecosystems
  3. Active avoidance of conflicts with other users of the aquatic resources (e.g. fishermen)
  4. Prohibition of chemicals (e.g. as anti-fouling agents in net pens)
  5. Natural remedies and treatments in the case of disease
  6. Feedstuff from organic agriculture
  7. Fishmeal and fish oil in feed derived from by-products of fish processed for human consumption (no dedicated feed fishery)
  8. Prohibition of genetically modified organisms (GMOs), either in feedstuff or in the stock itself
  9. Processing according to organic standards [9,13].

The First Organic Shrimp Model in the Mekong Delta of Vietnam

Introduction of the Model

The Mekong delta of Vietnam has a long coastline along which mangrove forests grow. In this coastal part, shrimp aquaculture has a long history and plays a key role in the coastal economy [13-15]. On the basis of the mixed shrimp-mangrove systems developed from the 1980s, the first organic shrimp model was introduced to Tam Giang commune, Nam Can district, Camau province, in the Mekong delta of Vietnam in 1999 and was certified in 2001 by Naturland [9,16] (Figure 1). By 2010, around 1,000 integrated shrimp-mangrove farms in this area had been certified by the German organic certification scheme Naturland and audited by the certification body IMO [17,18]. The shift from non-organic to organic farms in this province does not require large changes in farm infrastructure or management, because these characteristics are similar in the two systems [19].


Figure 1: Mekong delta of Vietnam (left) and location of the first Naturland’s organic farm (right).

In this model, most farms are 4–5 ha in size. Mangroves in the farms are pure stands of replanted Rhizophora (Rhizophora apiculata Blume) with an average density of 10,000 trees ha-1, and the forest ratio must be at least 50% of the whole pond area [16]. Black tiger shrimp (Penaeus monodon Fabricius, 1798) are cultured at low densities in a mixed pattern with the mangroves (Figure 2), often together with marine crab (Scylla serrata Forskal, 1775), blood cockle (Anadara granosa Linnaeus, 1758), and wild shrimps [3,4]. A typical organic shrimp model and its sluice gate are shown in Figure 3.


Figure 2: Layout of the first organic shrimp model in Tam Giang commune, Nam Can district.


Figure 3: A typical organic shrimp model (left) and its sluice gate (right).

Black tiger shrimps harvested from the model are expected to meet current international organic standards (e.g. the EU organic regulations, the Naturland standard, or the Bio Suisse standard) and have been accepted in the Swiss and EU markets [4,16]. After export to the EU, the value of these shrimps increases by 20%, of which the shrimp farmers, traders, and processing factories receive 15%, 2%, and 3%, respectively [20]. Wild shrimps from the model are sold in the local market [4].

Cropping Calendar, Stocking Density, and Farm Management

A new production cycle starts in September and ends in July of the following year. Farm water is taken from the rivers at high tide through a net (1 cm × 1 cm) to keep out undesired objects and aggressive fish. The 15-day postlarvae of black tiger shrimp are screened for subclinical levels of pathogens [21] before stocking. The stocking density at the start of the production cycle is 3-5 postlarvae m-2, and about 50% more postlarvae are added in the following months until February–March. Wild shrimps (Penaeus indicus H. Milne Edwards, 1837, Penaeus merguiensis de Man, 1888 [in de Man, 1887-1888], Metapenaeus ensis (De Haan, 1844 [in De Haan, 1833-1850]) and Metapenaeus lysianassa (de Man, 1888 [in de Man, 1887-1888])), estimated at less than 1 postlarva m-3 of water in 1996 [15], are also introduced to the farms during water intake. Farmers release marine crabs (Scylla serrata Forskal, 1775) into the farms (0.1–0.2 individuals m-2) every 3 months. There is no regular water exchange, no chemical use, and shrimps rely completely on natural food. Four to five months after stocking, farmers harvest market-sized shrimps by draining out part of the farm water twice a month (3-4 consecutive days each at the end/start and the middle of the lunar months). As a result of this continuous stocking and partial harvesting, shrimps of different ages and sizes are present in the farms at any point of time during the production cycle. In August, the sediment accumulated in the channel is dredged and deposited on the dikes, and quicklime (CaO) is usually used to disinfect the farm bottom after sediment removal [4,16].
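As a rough worked example of the stocking numbers described above, the sketch below estimates the postlarvae needed for one production cycle. The farm size, the 3-5 PL m-2 density and the roughly 50% top-up are taken from the text; the assumption that the density applies to the water surface (the roughly half of the farm not under mangrove) is made here for illustration only.

```python
# Back-of-the-envelope stocking estimate for one production cycle.
# Assumptions (not from the source): the 3-5 PL/m^2 density applies to the
# water surface, taken as the ~50% of the farm not covered by mangrove.

FARM_AREA_HA = 4.0            # typical farm size quoted in the text (4-5 ha)
WATER_FRACTION = 0.5          # assumed water surface after the >=50% forest ratio
DENSITY_RANGE = (3, 5)        # postlarvae per m^2 at the start of the cycle
TOP_UP_FACTOR = 1.5           # ~50% more postlarvae added over the following months

water_m2 = FARM_AREA_HA * 10_000 * WATER_FRACTION

for density in DENSITY_RANGE:
    initial = density * water_m2
    total = initial * TOP_UP_FACTOR
    print(f"{density} PL/m^2: initial {initial:,.0f} PL, "
          f"~{total:,.0f} PL over the cycle")
```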

Water Depth and Water Characteristics

The average water depth is 68.8 ± 3.4 cm [22]. Pond water is alkaline (pH 7.59 ± 0.07) and highly buffered [4,11], similar to other shrimp-mangrove systems in the Mekong delta [23,24]. The pH is high in the middle of the dry season (7.68 ± 0.07) but drops at the start of the wet season (7.40 ± 0.06) before stabilizing in the transition between the wet and the dry season (7.70 ± 0.18). In contrast, total iron is lowest in the middle of the dry season (0.41 mg/l) but increases sharply at the start of the wet season (1.06 mg/l) [11,22]. The pH drop at the start of the wet season is due to the reception of acidic components washed down from the dikes, a phenomenon commonly observed in aquaculture ponds on acid sulfate soils in the Mekong delta [25-27]. Although seasonal changes are observed, the pH of farm water remains within the limits (7-9) for shrimp growth [28]. Because the seasonal pH drop is not serious, the effects of toxic components (e.g. Al, Fe, Mn) on aquaculture species would still be low in the model [4,11].

Characteristics of Channel Sediment

Silt (0.063-0.002 mm) and clay (<0.002 mm) are dominant, suggesting that suspended matter from intake water is one of the main sources of the sediment. The annual sediment removal does not significantly influence the particle size distribution, revealing that this practice removes only part of the sediment accumulated during the production cycle [4,29]. As shown in Table 1 [29], the sediment is reduced, with a high Fe2+/Fe3+ ratio, and is almost neutral, with low exchange acidity. Organic matter (OM) and total nitrogen (N) are high, and the C/N ratio varies widely, suggesting a high diversity of organic matter sources [30,31].

Table 1: Basic parameters of channel sediment in the organic shrimp model [29].

Parameter | Min | Max | 95% Confidence interval
Redox potential (mV) | -299.00 | -1.00 | -177.75 ± 14.75
pH of fresh sediment | 6.05 | 7.64 | 7.20 ± 0.07
pHH2O | 6.63 | 7.78 | 7.20 ± 0.06
pHKCl | 6.35 | 7.43 | 6.92 ± 0.07
Exchange acidity (cmolc kg-1) | 0.03 | 0.12 | 0.05 ± 0.00
Fe2+/Fe3+ | 0.55 | 93.30 | 9.89 ± 3.35
OM (%) | 2.41 | 9.30 | 4.20 ± 0.33
Total Nitrogen (%) | 0.18 | 0.51 | 0.30 ± 0.02
C/N | 3.90 | 12.16 | 8.12 ± 0.36
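The 95% confidence intervals in Table 1 are of the usual mean ± t × standard-error form. The sketch below shows how such an interval can be computed; the pH values in it are made up for illustration and are not the study's raw measurements.

```python
# Compute a mean and 95% confidence interval for a set of sediment measurements.
# The values below are fabricated for illustration; they are not the study's data.
import numpy as np
from scipy import stats

ph_fresh = np.array([7.05, 7.21, 7.33, 6.98, 7.40, 7.12, 7.27, 7.19])

mean = ph_fresh.mean()
sem = stats.sem(ph_fresh)                                  # standard error of the mean
half_width = sem * stats.t.ppf(0.975, df=len(ph_fresh) - 1)  # t-based 95% half-width

print(f"pH (fresh sediment): {mean:.2f} +/- {half_width:.2f} (95% CI)")
```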

Characteristics of Mangrove Soils

Mangrove soils to 60 cm depth are heavily reduced with redox potential ranging from -321 mV to -52 mV [29]. According to [32], sulfate reduction (optimal at -100mV) and methanogenesis (optimal at -200mV) are dominant processes in this condition. The soils are acidic (pHH2O 5.63 ± 0.15, pHKCl 5.27 ± 0.18) as a result of pyrite oxidation when exposed to the open air (Eq. 1). The presence of pyritic material in the soils was confirmed by the sign of pyrite oxidation (Figure 4) and the high acidity of soils deposited on the dikes (Table 2). Pyrite oxidation forms precipitated Fe(OH)3, which is harmful to shrimps because it adheres to the gills and retards shrimp respiration [33]. The problem is, however, rather mild because the farms are inundated for most of the time during the production cycle.

4FeS2 + 15O2 + 14H2O → 4Fe(OH)3 + 8SO4^2- + 16H+                   (1)

 

Figure 4: Mangrove soils (with clear signs of pyrite oxidation) on the dikes of the model.

Table 2: Acidity of mangrove soils deposited on the dikes [29].

Parameter | pHH2O | pHKCl | Exchange acidity (cmolc kg-1) | Exchangeable Al3+ (cmolc kg-1)
Range | 1.97-3.21 | 1.81-2.14 | 8.90-13.48 | 4.45-7.49
95% Confidence interval | 2.51 ± 0.72 | 2.03 ± 0.21 | 11.56 ± 2.69 | 6.03 ± 1.72

Soil organic carbon (SOC) (5.19 ± 0.59%) is high in the top sediment as a result of an abundant supply from mangrove debris, but drops sharply below a depth of 80 cm. High exchange acidity is found in mangrove soils rich in SOC [4,29].

Shrimp Yields and Relationships with Physico-Chemical Properties

The total shrimp yield was low (355.4 kg ha-1 year-1). The wild shrimps (Penaeus indicus H. Milne Edwards, 1837, Penaeus merguiensis de Man, 1888 [in de Man, 1887-1888], Metapenaeus ensis (De Haan, 1844 [in De Haan, 1833-1850]) and Metapenaeus lysianassa (de Man, 1888 [in de Man, 1887-1888])) contributed 55% of the total shrimp yield [11]. The shrimp yield of this model is similar to, or even somewhat higher than, those of integrated shrimp-mangrove systems in the Mekong delta of Vietnam [15,17,34,35] and Indonesia [36]. The model is, however, no longer as productive as it was in the recent past (550–600 kg ha-1 year-1) [16]. As there was no marked difference in stocking densities between now and the past, the most probable reason is a decline in the water and sediment quality of the model.

There are positive correlations (p < 0.05) between total shrimp yield/wild shrimp yield and water depth [11], in agreement with previous findings in similar systems in the Mekong delta where water depths ranged between 50-80 cm [15,37]. This finding suggests that the model should be made deeper, to a depth of about 80-90 cm [4]. Positive correlations of total shrimp yield with pHH2O (p < 0.05) and pHKCl (p < 0.001) suggest that shrimps grow well on a neutral or near-neutral pond bottom [11], similar to previous findings in aquaculture ponds [38,39]. Turbidity is positively correlated with wild shrimp yield [11], most probably due to the positive relationship between turbidity and organic matter content in pond water [40,41]. Inverse relationships between total shrimp yield/black tiger shrimp yield and Fe2+ [11] confirm the negative impacts of iron on shrimp growth shown in previous research [33,42]. Thai et al. [5] found that forest ratios have a direct impact on the total shrimp yield and that these ratios should be 50%, well in accordance with the guidelines for this model [16]. In the same model in Rach Goc commune, Ngoc Hien district, Camau province, farmers claimed that the best mangrove coverage on their farms lies between 30-50% for the highest productivity [43].
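The correlations reported here are of the standard Pearson type (an r value with a p-value threshold). A minimal sketch of such a test, run on synthetic water-depth and yield values rather than the study's data, is given below.

```python
# Pearson correlation between shrimp yield and water depth (synthetic data;
# the study's raw measurements are not reproduced here).
import numpy as np
from scipy.stats import pearsonr

water_depth_cm = np.array([55, 60, 62, 65, 68, 70, 72, 75, 78, 80])
yield_kg_ha_yr = np.array([280, 300, 310, 330, 345, 350, 365, 370, 390, 400])

r, p_value = pearsonr(water_depth_cm, yield_kg_ha_yr)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant correlation
```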

Income from the Model

Shrimps provide short-term income, while mangroves provide long-term income for the local shrimp farmers. Currently, data on the income from shrimps are not available. Regarding the forest, farmers are allowed to exploit mature mangroves (≥10 years old) by trimming (up to ≤50% of the forest area) or by complete logging followed by reforestation. Farmers receive all benefits from the mangroves if they invest in and take care of the forests by themselves. If farmers rent the land and receive support (capital, techniques, etc.) from the Board of forest management for reforestation, they receive 30% of the benefits from the mangroves. According to the local shrimp farmers in Camau province, one hectare of mangrove forest in the model (10,000 mature trees on average) was worth about 50,000 USD in 2016 [4].

Can this Organic Shrimp Model be Expanded Further in the Mekong Delta of Vietnam?

Organic agriculture in Vietnam is still at an early stage and has not developed rapidly [44,45]. However, the Vietnamese government has issued new policies to encourage organic agriculture development in the Mekong Delta and the whole country [46-48]. Given the favorable physical and socio-economic conditions and as supported by FAO, the government has planned to expand organic certification to integrated shrimp-mangrove farming systems along the coast of the Mekong delta of Vietnam [19,49].

The physical conditions of the organic shrimp model are in general appropriate for shrimp growth, although several drawbacks (e.g. iron content and turbidity of the water, precipitated Fe(OH)3 from pyrite oxidation, water depth, and forest ratios) might affect shrimp yields [5,11,29]. While the model seems appropriate in physical terms, several managerial challenges still exist. For example, according to the local shrimp farmers, there are still illogicalities in the regulations on forest ratios (calculated for each household, not for a group of households using the same water sources), total farm areas (farms of less than 3 ha are not accepted), the assessment methods for certification used by the IMO, an inappropriate mechanism of payment and benefit sharing, and the sharing of mangrove products (e.g. wood and other forest products) between shrimp farmers and the Board of forest management [4,20]. Accordingly, this model has not been very attractive to the local communities [4]. In Ngoc Hien district (Camau province), certified farms of this model do not show significant differences from non-certified farms in terms of social and environmental benefits [43]. The author suggests that rather than being a tool for improvement, ‘Naturland’ certification for integrated shrimp–mangrove systems in Camau province has become an end in itself. Although strongly supported by the government, this model will only be widely expanded in the coastal part of the Mekong delta of Vietnam if these issues are properly solved.

References

  1. Willer H, Lernoud J (2019) The World of Organic Agriculture – Statistics and Emerging Trends 2019. Research Institute of Organic Agriculture (FiBL), Frick, and IFOAM – Organics International, Bonn, Germany.
  2. FAO (2021) Family Farming Knowledge Platform – The World of Organic Agriculture 2021.
  3. Jonell M, Henriksson PJG (2015) Mangrove-shrimp farms in Vietnam-comparing organic and conventional systems using life cycle assessment. Aquaculture 447: 66-75.
  4. Tho N (2016) Assessing the natural food basis for shrimps in relation to the hydrogeochemical characteristics of the organic shrimp model in Nam Can district, Camau province -Proposing solutions to improve the model. Vietnam Academy of Science and Technology. Project coded VAST.CTG.06/14-16.
  5. Thai TT, Tho N, Yen NTM, Quang NX, Thao NTP, et al. (2021) Effect of Mangrove Cover on Shrimp Yield in Integrated Mangrove-Shrimp Farming. Asian Fisheries Science 34: 269-277.
  6. Willer H, Lernoud J, Kilcher L (2014) The world of organic agriculture – Statistics and emerging trends 2014: Frick. Switzerland: Research Institute of Organic Agriculture (FiBL) & Bonn: International Federation of Organic Agriculture Movements (IFOAM).
  7. iPFES (2015) Study report – Economic valuation of ecosystem services to develop payment for forest environmental service mechanism on aquaculture in Lao Cai, Thua Thien Hue and Ca Mau provinces. Vietnam Forest Protection and Development Fund (Vnff), Vietnam 188.
  8. Ahmed N, Thompson S, Glaser M (2018) Integrated mangrove-shrimp cultivation: Potential for blue carbon sequestration. Ambio 47: 441-452. [crossref]
  9. Naturland (2019) Naturland standards organic aquaculture.
  10. Mukul AZA, Afrin S, Hassan MM (2013) Factors affecting consumers’ perceptions about organic food and their prevalence in Bangladeshi organic preference. Journal of Business and Management Sciences 1: 112-118.
  11. Tho N, Tu TTK, Chi NTH (2019) Shrimp yield in relation to the ecological parameters of an organic shrimp model in the Mekong delta of Vietnam: A case study. Asian Fisheries Science 32: 154-161.
  12. Naturland (2008) Naturland Standards for Organic Aquaculture.
  13. McEwin A, McNally R (2014) Organic Shrimp Certification and Carbon Financing: An Assessment for the Mangroves and Markets Project in Ca Mau Province, Vietnam. International Climate Initiative (IKI), SNV Smart Development Works.
  14. Brennan D, Clayton H, Be TT (2000) Economic characteristics of extensive shrimp farms in the Mekong delta. Aquaculture Economics & Management 4: 127-139.
  15. Johnston D, Trong NV, Tien DV, Xuan TT (2000) Shrimp yields and harvest characteristics of mixed shrimp-mangrove forestry farms in southern Vietnam: factors affecting production. Aquaculture 188: 263-284.
  16. Camimex (2012) Internal control system, Camimex – Ngoc Hien organic project.
  17. Ha TTT, van Dijk H, Bush SR (2012a) Mangrove conservation or shrimp farmer’s livelihood? The devolution of forest management and benefit sharing in the Mekong Delta, Vietnam. Ocean & Coastal Management 69: 185-193.
  18. Omoto R (2012) Small-scale producers and the governance of certified organic seafood production in Vietnam’s Mekong Delta. PhD Thesis, University of Waterloo.
  19. Ha TTT, Bush SR, Mol APJ, van Dijk H (2012b) Organic Coasts? Regulatory Challenges of Certifying Integrated Shrimp-Mangrove Production Systems in Vietnam. Journal of Rural Studies 28: 631-639.
  20. Ha TTT (2015) Naturland organic shrimp certification in protecting mangrove forests in Camau – Prospects and challenges. Journal of Science and Forestry Technology 3: 101-109.
  21. FAO (2007) Improving Penaeus monodon hatchery practices – Manual based on experience in India. FAO Fisheries Technical Paper 446.
  22. Tho N, Khanh DNN (2018) Assessing the hydrochemical characteristics of the organic shrimp model at Tam Giang commune, Nam Can district, Ca Mau province. Journal of Marine Science and Technology, Vietnam Academy of Science and Technology 18: 205-213.
  23. Loc NX, Nga TT, Tinh HQ (2008) Water quality in extensive shrimp ponds (Penaeus monodon) in Tam Giang I forestry–fisheries enterprises, Ngoc Hien district, Camau province. Science Journal of Can Tho University 99: 202-209.
  24. Toan LB (2011) Studying the mixed mangrove-shrimp systems in Ngoc Hien district, Camau province. PhD Thesis, Ho Chi Minh City University of Agriculture and Forestry 144.
  25. Minh LQ, Tuong TP, van Mensvoort MEF, Bouma J (1997) Tillage and water management for riceland productivity in acid sulfate soils of the Mekong delta, Soil and Tillage Research 42: 1-14.
  26. Tuong TP, Minh LQ, Ni DV, van Mensvoort MEF (1998) Reducing acid pollution from reclaimed acid sulphate soils: experiences from the Mekong delta, Vietnam. In: LS. Pereira, JW Gowing (1998) Water and the Environment: Innovation Issues in Irrigation and Drainage. E. and FN. Spon, London 75-83.
  27. Phong ND, Tuong TP, Phu ND, Nang ND, Hoanh CT (2013) Quantifying Source and Dynamics of Acidic Pollution in a Coastal Acid Sulphate Soil Area. Water, Air, & Soil Pollution 224: 1765.
  28. Haws MC, Boyd CE (2001) Methods for improving shrimp farming in Central America. Central American University Press-UCA, 292.
  29. Tho N, Khanh DNN, Tu TTK (2017) Risk of acidification of the organic shrimp model at Tam Giang commune, Nam Can district, Ca Mau province. Science & Technology Development 20.
  30. Meyers PA (1997) Organic geochemical proxies of paleoceanographic, paleolimnologic, and paleoclimatic processes. Org Geochem 27: 213-250.
  31. Lamb AL, Wilson GP, Leng MJ (2006) A review of coastal palaeoclimate and relative sea-level reconstructions using d13C and C/N ratios in organic material. Earth Sci Rev 75: 29-57.
  32. Avnimelech Y, Ritvo G (2003) Shrimp and fish pond soils: processes and management. Aquaculture 220: 549-567.
  33. Boyd CE (2008) Iron Important To Pond Water, Bottom Quality. Global Aquaculture Advocate 59-60.
  34. Binh CT, Phillips MJ, Demaine H (1997) Integrated shrimp-mangrove farming systems in the Mekong delta of Vietnam. Aquaculture Research 28: 599-610.
  35. Bosma RH, Nguyen TH, Siahainenia AJ, Tran HT, Tran HN (2014) Shrimp-based livelihoods in mangrove silvo-aquaculture farming systems. Reviews in Aquaculture 8: 43-60.
  36. Fitzgerald WJ (2000) Integrated mangrove forest and aquaculture systems in Indonesia. In Proceedings of the workshop on mangrove-friendly aquaculture. (eds. Primavera JH., Garcia LMB., Castaños MT., Surtida MB.) 21-34. Iloilo City, Philippines.
  37. Minh TH, Yakupitiyage A, Macintosh DJ (2001) Management of the integrated mangrove-aquaculture farming systems in the Mekong Delta of Vietnam. Food and Agriculture Organization of the United Nations 24.
  38. Banerjea SM (1967) Water quality and soil conditions of fish ponds in some states of India in relation to fish production. Indian Journal of Fisheries 14: 114-115.
  39. Boyd CE (1995) Bottom soils, sediment and pond aquaculture. Chapman and Hall, New York 348.
  40. Azim ME, Verdegem MC, van Dam AA, Beveridge MC (2005) Periphyton: ecology, exploitation and management. CABI Publishing, New York 325.
  41. Shaari AL, Surif M, Latiff FA, Omar WM, Ahmad MN (2011) Monitoring of water quality and microalgae species composition of Penaeus monodon ponds in Pulau Pinang, Malaysia. Tropical Life Sciences Research 22: 51-69. [crossref]
  42. Poernomo A (1990) Technical constraints in shrimp culture and how to overcome them. In Proceedings of the shrimp culture industry workshop. (ed. Yap, W.G.) 59-66. FAO Fisheries and Aquaculture Department, Jepara City, Indonesia.
  43. Baumgartner U, Nguyen TH (2017) Organic certification for shrimp value chains in Ca Mau, Vietnam: a means for improvement or an end in itself? Environ Dev Sustain 19: 987-1002.
  44. British Council (2016) Vietnam Social Enterprise Casebook.
  45. Presilla M (2018) The development of organic farming in Vietnam. Jurnal Kajian Wilayah 9: 20-32.
  46. Toan PV, Minh ND, Thong DV (2019) Organic Fertilizer Production and Application in Vietnam. In: Larramendy M and Soloneski S. (eds): Organic Fertilizers – History, Production and Applications.
  47. Nguyen CT, Van TTT (2021) Development of Organic Agriculture in the Mekong Delta – Opportunities and Challenges. European Journal of Development Studies 1: 29-35.
  48. GIZ (2021) The potential of digital tools to promote sustainable production in the shrimp aquaculture sector: A case study of the Mekong Delta, Viet Nam. Mekong Delta Climate Resilience Programme (MCRP). Hanoi, October 2021.
  49. FAO (2015) Sustainable Shrimp farming project.

Effect of Nutrition Education on Improving Knowledge and Practice Regarding IYCF among Mothers with 6-24 Months Children

DOI: 10.31038/IJNM.2021241

Abstract

Food insecurity and poor infant and young child feeding (IYCF) practices contribute to undernutrition. Nutrition during the early years of life is crucial for children to survive, grow and develop into healthy adults who can lead rewarding lives and contribute productively to their communities. Infant and Young Child Feeding (IYCF) is a critical component of care in childhood and a major determinant of short- and long-term health outcomes in individuals, and hence of the social and economic development of communities and nations. The objective of the study was to assess the effectiveness of a nutritional education intervention on improving knowledge and practice regarding IYCF among mothers of 6-24 months old children. A quantitative research approach and a quasi-experimental design were used; 30 mothers recruited by convenience sampling participated in the study, and data were collected with a structured questionnaire and an observational checklist and analyzed using descriptive and inferential statistics. The findings revealed that the mean post-test knowledge score was higher than the mean pre-test knowledge score, with a mean difference of 11.67, showing that the nutritional education intervention was effective in improving mothers' knowledge. The mean post-test practice score was higher than the mean pre-test practice score, with a mean difference of 16.75, showing that mothers performed correct practices after the nutritional education.

Keywords

Nutrition education, Knowledge, Practice, IYCF, Weaning

Introduction

“Breastfeeding is warmth, nutrition and love all rolled into one. It is a mother's gift to herself, her baby and the earth.” Food insecurity and poor infant and young child feeding (IYCF) practices contribute to undernutrition. Nutrition during the early years of life is crucial for children to survive, grow and develop into healthy adults who can lead rewarding lives and contribute productively to their communities. The period from birth to two years of age is considered a “critical window” of opportunity, as the foundation for healthy growth and development in later years is laid down during this period. Adequate nutrition through this period has therefore been recognized as a national and international priority [1]. Infant and Young Child Feeding (IYCF) is a critical component of care in childhood and a major determinant of short- and long-term health outcomes in individuals, and hence of the social and economic development of communities and nations [2]. Recognizing this need, the World Health Organization (WHO) recommends that optimal nutrition practices for infants and children include early initiation of breastfeeding (within one hour of birth) and exclusive breastfeeding for the first six months of life, followed by the addition of nutritionally adequate, safe, and appropriate complementary foods with continued breastfeeding for one year and longer [3]. However, even though the importance of implementing these recommendations has been constantly emphasized, the nation has failed to elevate the status of infant and young child feeding, which is necessary for attaining a better future. Optimal nutrition and healthy feeding are imperative for the healthy growth and development of infants and young children. Globally, more than one-third of childhood deaths are attributed to undernutrition, which is more prevalent in low- and lower-middle-income countries [1,2]. In India, the third National Family Health Survey [3] indicated that 46% of children below the age of three were underweight, 38% were stunted, and 19% were wasted. India is a country of various cultures and traditions, and many customs and practices affect health, including infant feeding practices. By assessing the knowledge, attitudes and practices of mothers regarding their child's feeding, an overview can be obtained of the areas that need modification, so that specific intervention strategies can be designed to correct them.

Problem Statement

An explorative study to assess the effectiveness of a nutrition educational intervention on improving knowledge and practice regarding IYCF among mothers with 6-24 months old children at New Civil Hospital, Surat.

Objectives

The objectives of the study were:

  1. Assess the existing knowledge and practice regarding IYCF among mothers with 6-24 months old children at New Civil Hospital, Surat
  2. Develop and implement a nutrition educational intervention regarding IYCF among mothers with 6-24 months old children
  3. Determine the correlation between knowledge and practice following the nutritional educational intervention regarding IYCF among mothers with 6-24 months old children at New Civil Hospital, Surat
  4. Find out the association between pretest knowledge and practice regarding IYCF among mothers with 6-24 months old children at New Civil Hospital, Surat and selected socio-demographic variables.

Assumption

  • Mothers do not have enough knowledge regarding IYCF and do not follow correct IYCF practices.
  • A nutritional education intervention helps improve knowledge and practice regarding IYCF, which is highly significant for the growth and development of the child.

Delimitation

  • The study was delimited to mothers who have a 6-24 months old child.
  • The study was delimited to those available and willing to participate at the time of data collection.
  • The study was delimited to the pediatric ward, New Civil Hospital, Surat, Gujarat.

Research Methodology

  • Research approach: A quantitative, evaluative research approach was used to assess the effectiveness of the nutrition educational intervention.
  • Research design: Quasi-experimental design with a one-group pretest-posttest design.
  • Research setting: New Civil Hospital, Surat, Gujarat.
  • Sampling technique: Convenience (non-probability) sampling technique.
  • Sample size: 30 mothers.

Sampling Criteria

Inclusion Criteria

  • Mothers who are able to communicate in Gujarati or Hindi.
  • Mothers who are willing to participate in the study.
  • Mothers of children in the age group of 6-24 months.
  • Mothers who visited New Civil Hospital, Surat.

Exclusion Criteria

  • Mothers who are not willing to participate.
  • Mothers of children above the age group of 6-24 months.
  • Mothers who are not available at the time of data collection.

Description of Data

  • Section I: Demographic variables of the subjects (age, religion, education, qualification, occupation, family income, parity, type of family, and any information received regarding IYCF)
  • Section II: Self-structured knowledge questionnaire with 30 questions related to IYCF
  • Section III: Observational practice checklist for IYCF
  • Section IV: Nutritional educational intervention

Validity of data: Validated by 10 experts in the field of nursing.

Reliability: The reliability of the tools was calculated using the split-half method, and the values were 0.89 and 0.92, respectively.
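A split-half reliability of this kind is usually obtained by correlating two halves of the instrument and applying the Spearman-Brown correction. The sketch below illustrates that computation on fabricated item scores; it does not use the study's data or scoring tool.

```python
# Split-half reliability with Spearman-Brown correction (illustrative data only).
import numpy as np
from scipy.stats import pearsonr

# Rows = respondents, columns = dichotomously scored items (fabricated so that
# item responses share a common latent "knowledge" component).
rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))
items = (ability + rng.normal(scale=1.0, size=(30, 30)) > 0).astype(int)

odd_half = items[:, 0::2].sum(axis=1)    # scores on odd-numbered items
even_half = items[:, 1::2].sum(axis=1)   # scores on even-numbered items

r_half, _ = pearsonr(odd_half, even_half)
reliability = 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula
print(f"Split-half reliability: {reliability:.2f}")
```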

Ethics and consent: Informed consent was obtained from all subjects. Permission was obtained from the Medical Superintendent, New Civil Hospital, Surat, before conducting the study.

Results and Discussion

  • There was a statistically significant, moderate positive correlation between mothers' post-test knowledge scores and post-test practice scores (r=0.52, P≤0.01).

Regarding associations with demographic variables, the post-test knowledge score was associated with age, education and occupational status, while the post-test practice score was associated with age, education, monthly income and parity (Tables 1 and 2).

Table 1: Findings related to the analysis of demographic variables of mothers (n=30).

Sr. No. | Variable | Category | Frequency | Percentage
1 | Age (in years) | 18-23 years | 09 | 30%
  |  | 24-29 years | 14 | 47%
  |  | 30-35 years | 06 | 20%
  |  | Above 35 years | 01 | 03%
2 | Religion | Hindu | 20 | 66%
  |  | Muslim | 10 | 34%
  |  | Christian | 00 | –
  |  | Others | 00 | –
3 | Education | Illiterate | 06 | 20%
  |  | Primary education | 12 | 40%
  |  | Higher secondary | 10 | 34%
  |  | Graduate and above | 02 | 06%
4 | Occupation | Housewife | 25 | 83%
  |  | Government job | 00 | –
  |  | Private job | 01 | 04%
  |  | Other | 04 | 13%
5 | Community | Urban | 17 | 57%
  |  | Rural | 13 | 43%
6 | Monthly income (rupees) | <5000 | 06 | 20%
  |  | 5000-10000 | 18 | 53%
  |  | 10000-15000 | 08 | 27%
  |  | >15000 | 00 | –
7 | Parity | 1st child | 08 | 27%
  |  | 2nd child | 12 | 40%
  |  | >3 children | 10 | 33%
8 | Family | Nuclear family | 16 | 53%
  |  | Joint family | 14 | 47%
9 | Source of information regarding IYCF | Journals and magazines | 8 | 27%
  |  | Social media | 16 | 53%
  |  | TV and radio | 6 | 20%
  |  | Any others | – | –
Table 2: Comparison of pretest and posttest knowledge and practice score regarding IYCF.

Variables | Pretest mean (n=30) | Pretest SD | Posttest mean (n=30) | Posttest SD | Mean difference | Student paired t-test
Knowledge | 15.26 | 2.41 | 26.93 | 2.04 | 11.67 | t=22.46, P=0.01* (S)
Practice | 29.46 | 2.99 | 46.21 | 2.21 | 16.75 | t=52.36, P=0.01* (S)
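The comparison in Table 2 is a paired (dependent-samples) t-test. The sketch below runs the same kind of test on fabricated pre- and post-test knowledge scores for 30 mothers; the numbers are simulated around the reported means and are not the study data.

```python
# Paired t-test comparing pre-test and post-test knowledge scores
# (fabricated scores for 30 mothers; not the study's data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
pre = rng.normal(loc=15.3, scale=2.4, size=30)
post = pre + rng.normal(loc=11.7, scale=1.5, size=30)  # simulate the post-test gain

t_stat, p_value = ttest_rel(post, pre)
print(f"mean difference = {np.mean(post - pre):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```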

This finding coincides with those of a study on the effect of nutrition education on the knowledge, complementary feeding and hygiene practices of mothers of moderately acutely malnourished children in Uganda, in which mean scores for knowledge, dietary diversity, and meal frequency were higher at endline than at baseline (P<0.001); handwashing did not improve significantly (P=0.183), while boiling water to improve water quality did improve (P<0.001).

Recommendation

  1. Train grass-root health workers on the IYCF policies of the WHO and MoHFW (GOI), stressing the benefits of appropriate feeding practices at hospitals, CHCs, PHCs and HWCs, and make these services universally available with IEC.
  2. Health care personnel should encourage mothers to breastfeed by providing knowledge about the benefits of breastfeeding for infants as well as mothers.
  3. Breastfeeding may be affected by religious ideologies; therefore the behavior and attitudes of mothers should be modified through counseling that reinforces cultural and religious practices.
  4. Use of local religious techniques can bring positive changes in the implementation of health programs.
  5. The government and other partners working on sustainable reduction of child undernutrition should focus on nutrition education to improve knowledge and appropriate complementary feeding practices, including in daycare centers [4-8].

Conclusion

The nutrition education intervention was effective in improving knowledge and practice regarding IYCF. It improved mothers' knowledge, and they performed correct IYCF practices after the intervention.

References

  1. UNICEF/WHO/World Bank. Levels and Trends in Child Malnutrition: Key Findings of the 2019 Edition of the Joint Child Malnutrition Estimates. Geneva, Switzerland: World Health Organization; 2019.
  2. WHO (2000) Effect of breastfeeding on infant and child mortality due to infectious diseases in less developed countries: a pooled analysis. Collaborative Study Team on the role of breastfeeding on the prevention of infant mortality. Lancet 355: 451-455. [crossref]
  3. Dewey KG, Vitta BS (2013) Strategies for Ensuring Adequate Nutrient Intake for Infants and Young Children during the Period of Complementary Feeding. Washington, DC, USA: Alive & Thrive.
  4. Beyene S, Willis MS, Mamo M, Belaineh L, Teshome R, Tsegaye T, et al. (2019) Nutritional status of children aged 0–60 months in two drought-prone areas of Ethiopia. South African Journal of Clinical Nutrition.
  5. Jukes M, McGuire J, Method F, Sternberg R Nutrition: a Foundation for Development. Geneva, Switzerland.
  6. Drake L, Maier C, Jukes M, et al. (2002) School-age children: their nutrition and health. Partnership for Child Development 25: 4-30.
  7. Hoddinott J, Alderman H, Behrman JR, Haddad L, Horton S (2013) The economic rationale for investing in stunting reduction. Maternal & Child Nutrition 9: 69-82. [crossref]
  8. Black RE, Morris SS, Bryce J (2003) Where and why are 10 million children dying every year? Lancet 361: 2226-2234. [crossref]

National Heart Institute (NHI) – Acute Cardiovascular Care (ACVC): One-Month Perspective, a Single-Center Experience

DOI: 10.31038/JCCP.2021425

Abstract

Background: National Heart Institute (NHI), Cairo – Egypt is a tertiary care center serving cardiovascular patients nationwide. In this study, patients admitted to our institute’s cardiac care unit (CCU) with critical acute cardiovascular conditions (ACVC) were included to assess the management strategies and the in-hospital outcome.

Aim: Determine the different presenting diagnoses admitted to the NHI-CCU, with documentation of their primary management strategies and their correlated outcomes.

Methodology: A prospective cohort study of all comers to NHI-CCU for one month duration from 15/7/2020 to 15/8/2020.

Results: This study represents a cohort of all comers admitted to our institute with ACVC during the study period. A total of 445 patients were included. In terms of gender, 301 patients were males (67.8%) and 143 were females (32.2%). The mean age of patients in this cohort was 55.8 ± 13.0 years. Patients spent a total of 2250 hospital days, with an average stay per patient of 6 ± 6 days. The overall mortality was 13% (58 patients). Of particular note, patients presenting with cardiogenic shock or pulmonary edema complicating acute coronary syndrome (ACS) had a longer hospital stay and higher mortality.

Conclusion: The current strategy for managing ACS or HF at NHI has markedly improved, with a high success rate and a favorable overall outcome across different age groups. The presence of acute pulmonary edema or cardiogenic shock on admission to the CCU in patients with ACS (STEMI or NSTE-ACS) is strongly correlated with prolonged hospital stay and in-hospital mortality. Reperfusion strategies, either primary PCI (PPCI) or a pharmaco-invasive approach, have a positive impact on short-term outcome and length of hospital stay; this positive impact is blunted in patients complicated by acute pulmonary edema or cardiogenic shock. Device therapy in patients with decompensated HF impacts short-term outcomes and correlates with a shorter hospital stay.

Introduction

ACVCs are a worldwide health issue affecting both industrialized and underdeveloped countries [1]. They are associated with increased mortality, longer hospital stays, and increased healthcare expenditure. The purpose of this study is to evaluate the treatment strategies and in-hospital outcomes of patients admitted to NHI with ACVC over a one-month period.

Objectives

Improve the care of ACVC in NHI and provide optimized management plans.

Aim

Determine different presenting diagnoses admitted to NHI-CCU with documentation of their primary management strategies and their correlated outcomes.

Methodology

Study Design

A prospective cohort study.

Study Setting

Cardiac Care Unit – National Heart Institute

Study Period

One month – Mid-July to Mid-August 2021

Study Population

Inclusion Criteria:

  • All comers with ACVC requiring admission to CCU for at least 24 hours

Exclusion Criteria:

  • Incomplete or missing diagnosis or outcome.

Study Procedures

All consecutive patients with ACVC admitted to the CCU at the NHI from mid-July to mid-August 2021 were included in this study. Patients’ data were collected, including personal information, clinical history, diagnoses, plan of management, and status at discharge. Further stratification for each group of patients with a specific diagnosis was done according to the management plan. According to the diagnosis and management plan, patients’ status at discharge was recorded.

Data Collection

Patients’ data were collected using a pre-designed electronic worksheet. Once a patient’s data is submitted, it is backed up to an online database. A double-check was performed before patient discharge to ensure no data was missing. In addition, senior staff continuously monitored, assessed, and revised the data collection process to ensure data integrity and completeness.

Ethical Aspects

Informed written consent was signed by all patients who accepted enrollment in this study. In critically ill patients and patients who died before signing a consent, a next of kin signed in substitute.

Statistical Analysis

The Statistical Package for Social Sciences, version 26.0 – 2018 (SPSS Inc., Chicago, Illinois, USA), was used for statistical assessment. Demographic data, clinical diagnoses, plan of management, and hospital outcome were compared using Chi-squared (χ2) test for categorical variables and the unpaired t-test for continuous variables. Relationships between management plans and hospital outcomes were assessed using logistic regression analysis. Quantitative data were expressed as mean ± SD. Qualitative data were expressed as frequency and percentage, and a P value less than 0.05 was considered significant.
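As an illustration of the analyses named above (a chi-squared test for categorical variables and logistic regression relating management and presentation to outcome), the sketch below runs both on fabricated patient records; the variable names, prevalences, and coefficients are assumptions and do not represent the NHI dataset.

```python
# Illustrative chi-square test and logistic regression on fabricated CCU records.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 445
df = pd.DataFrame({
    "severe_ahf": rng.integers(0, 2, size=n),    # assumed flag: APE or cardiogenic shock on admission
    "reperfusion": rng.integers(0, 2, size=n),   # assumed flag: PPCI / pharmaco-invasive vs none
})
# Fabricate in-hospital mortality with higher risk when severe AHF is present.
logit = -2.5 + 2.0 * df["severe_ahf"] - 0.5 * df["reperfusion"]
df["died"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Chi-square test: mortality vs presence of severe AHF.
table = pd.crosstab(df["severe_ahf"], df["died"])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}")

# Logistic regression: in-hospital mortality on severe AHF and reperfusion.
X = sm.add_constant(df[["severe_ahf", "reperfusion"]].astype(float))
model = sm.Logit(df["died"].astype(float), X).fit(disp=False)
print(model.summary2().tables[1])
```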

Results

During the study period of 1 month from mid-July to mid-August 2021, a total of 445 patients were admitted to the CCU. Primary diagnoses were ACS, CHF, Pulmonary embolism, aortic dissection, Infective endocarditis, and CHB. Regarding gender, 302 patients were males (67.8%), and 143 patients were females (32.2%). Accordingly, the male-to-female ratio is 2.06:1.

Mortality in this cohort was 13% (i.e., 58 patients). The highest mortality was recorded in patients with aortic dissection (100%).

Patients spent a total of 2250 hospital days, with a mean length of stay per patient of 6 ± 6 days. The maximum hospital stay was 46 days and the minimum was one day.

The mean age of patients in this cohort was 55.8 ± 13.0 years. The youngest patient was 16 years old and the oldest was 82 years old (range: 16-82 years).

Primary diagnosis (total n = 445)

Diagnosis | n | %
ACS | 202 | 45.39%
High degree AV block | 133 | 29.89%
CHF | 78 | 17.53%
Ventricular Tachycardia | 11 | 2.47%
Pulmonary Embolism | 8 | 1.80%
Atrial Fibrillation | 5 | 1.12%
Aortic Dissection | 3 | 0.67%
Infective Endocarditis | 2 | 0.45%
Device related infection | 1 | 0.22%
Left Atrial Myxoma | 1 | 0.22%

Acute Coronary Syndromes

During the study period, a total of 202 cases of ACS were admitted to the CCU. In-hospital mortality was 6.9%. Most of the cases were STEMI (84.5%) vs. 15.5% NSTE-ACS, and 95% of the patients underwent invasive coronary angiography during their admission. The initial strategy for STEMI was PPCI in 67%, while a pharmaco-invasive plan was used in 22.4%; rescue PCI was needed in 8.1%, and delayed PCI was carried out in 2.5%. The success rate for PCI was 96%; PCI failed in 7 patients (4 planned for CABG, two intended for a second attempt).

Further revascularization was needed in 14% of cases (8% required CABG, 6% staged PCI). Only 10% of the patients who needed further revascularization were initially diagnosed with NSTE-ACS; 90% were STEMI (most commonly inferior STEMI). Thrombolytic therapy was used in 22.4% of STEMI cases; only streptokinase (SK) was used, and it was successful in 69% of cases. All patients with failed SK underwent rescue PCI. The length of hospital stay was 2.9 days for PPCI patients, 4.5 days for patients who received thrombolytic therapy, and 2.5 days for NSTE-ACS patients. The initial management strategy was significantly correlated with the length of hospital stay in STEMI cases (P=0.0002), and the use of thrombolysis was significantly related to a longer hospital stay (P=0.0212).
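The exact statistical test behind the length-of-stay comparison is not specified here beyond its P value; one plausible approach is a one-way ANOVA across the strategy groups, sketched below on fabricated lengths of stay (group sizes and values are illustrative only).

```python
# One plausible way to test whether length of stay differs by initial strategy
# (one-way ANOVA on fabricated data; the study's actual test is not specified).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
los_ppci = rng.normal(loc=2.9, scale=1.0, size=120).clip(min=1)   # PPCI group
los_lysis = rng.normal(loc=4.5, scale=1.5, size=40).clip(min=1)   # thrombolysis group
los_nste = rng.normal(loc=2.5, scale=1.0, size=30).clip(min=1)    # NSTE-ACS group

f_stat, p_value = f_oneway(los_ppci, los_lysis, los_nste)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```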


Presenting Diagnosis as AHF among ACS Patients

Patients admitted with cardiogenic shock or pulmonary edema due to an underlying acute coronary syndrome represent a small fraction of the ACS patients admitted to NHI. Fifteen cases showed signs of severe AHF, either APE or cardiogenic shock, at their initial presentation. Analysis of ACS complicated by severe AHF showed that conservative treatment was the most common therapy for these patients; PCI was done in 33% and CABG in 13.3%. Only one case of a mechanical complication of STEMI was documented (a VSR; percutaneous closure was attempted, and the patient died after the procedure). Mortality was 76% overall: 80% for PCI, 75% for the conservatively treated group, and 50% for CABG patients.

The use of hemodynamic support was considered in 2 cases; both had IABP insertion (one during PCI, the other during CABG).


In regression analysis, the presence of severe AHF (APE or cardiogenic shock) was the only variable that showed a strong correlation with in-hospital mortality.

Acute Decompensated Heart Failure (AHF)

A total of 78 cases were admitted with AHF; 66.7% were males. Various age groups were represented, the most common decade being 50-60 years, followed by 60-70 years. The overall hospital stay was six days for more than half of the patients. The initial strategy in all patients was guideline-directed medical therapy, with device therapy considered in eligible patients. The precipitating factor for decompensation was not documented in the admission notes.

Half of the patients (50%) had underlying IHD and half (39 cases) had NICM. The underlying etiology in patients with NICM was identified in 8 cases (20.5%): 3 cases had primary valve disease (2 with rheumatic MS and one with severe AS and MR), one had PPCM, two had HFpEF, and two had isolated right-sided HF. The rhythm was AF in 24.4% of cases. Moderate to severe secondary valve pathology was documented in 8 patients with AHF (10.2%). The most common valve pathology was functional mitral regurgitation (5 cases), followed by combined MR and TR (3 cases) and isolated TR (one patient). Rhythm and valve lesions were not correlated with in-hospital outcomes.

Device therapy in HF correlated with a shorter hospital stay (P=0.0045). Devices used were CRT, CRT-D, and ICD.

IHD in patients with HF was associated with a prolonged hospital stay. AF and valve pathology were not associated with outcome or length of stay.

Non-Ischemic Cardiomyopathy

Fifty-eight cases were admitted with decompensated HF without underlying IHD; 65.5% were males. AF was the rhythm in 44.8%. All patients were managed medically. Mortality occurred in 3 patients: one because of VF and two because of progressive pump failure and multisystem affection. The underlying etiology was not identified in 90% of the cases. The diagnosis of HFpEF was documented in only 2 of the 78 cases with CHF, which indicates a significant challenge in identifying this specific subgroup and possibly missing proper diagnostic criteria for this group of patients. In patients with primary valve disease as the underlying etiology of CHF, rheumatic pathophysiology was more common than degenerative, reflecting the current challenge in developing countries, where RHD is still the most common pathology underlying VHD.


Heart Block

During the study period, 133 patients were admitted to the CCU with high-degree A-V block: eight cases (6%) had second-degree block and the remaining 94% had CHB. 52.4% were males and 47.6% were females. The average age was 61 years (range 1-90 years), and the average hospital stay was 7.2 days. 131 patients received a PPM; one patient was managed with medical therapy, and one patient died before pacemaker insertion. A dual-chamber pacemaker was used in 53.4% of cases and a single-lead pacemaker in 46.6%. In addition, two patients required CRT and eight required an ICD according to the associated condition.


Pulmonary Embolism

Eight cases were admitted with pulmonary embolism. Thrombolytic therapy was the strategy in 62.5% of cases, and medical therapy (anticoagulation) was used in 37.5%. No mechanical therapy was used. Mortality occurred in one patient.


Acute Aortic Dissection

Three patients were admitted with acute aortic dissection; all were Stanford type A and all were planned for surgical repair. In-hospital mortality was 100%; two of them died before surgery.

Arrhythmia

Eleven patients were admitted with VT; the most common presenting symptom was syncope. Ten were males and one was female. The average age was 46 years (range 19-70 years). Device-based therapy was the default management strategy in 9 patients; devices used were DDD-ICD (5 cases), ICD (2 cases), and CRT-D (2 cases). Medical therapy was the strategy in 2 cases. There was one death.

Atrial Fibrillation (AF)

AF was the primary diagnosis in 5 cases; 4 were males. Rate control was used in one patient, electrical cardioversion in another, and three cases underwent ablation for AF. There was one death, which occurred in the ablation group.

As a secondary diagnosis, AF was common among cases admitted with CHF, found in 44.8% of all patients with DCM and 11% of cases with ICM. All of these patients were managed medically.

SVT

One case was admitted with resistant SVT and underwent ablation successfully.

Cardiovascular Related Infections

Three cases were admitted with cardiovascular-related infections. Two had IE (one native tricuspid valve and one prosthetic valve endocarditis); both were treated medically. One infection was device-related (pacemaker) and required device replacement.


Others

One case was admitted with left atrial myxoma and underwent surgery.

Discussion

Our management strategies and outcomes were compared to the internationally available observational data for each subgroup of cases.

ACS

In patients with ACS, the invasive approach was the default method for most patients admitted to NHI. PPCI was the standard treatment for all STEMI patients; however, the pharmaco-invasive plan was an acceptable option in circumstances where ideally timed reperfusion was not achievable. PPCI had a success rate of 96%, which satisfies international standards [2,3]. Thrombolysis was successful in 69% of patients; the only thrombolytic agent utilized was SK, the only agent available, and no adverse effects were reported. Newer agents, such as t-PA and TNK, might produce better results.

In patients with NSTE-ACS, the invasive technique was used in all cases; this could be related to the higher-risk category of NSTE-ACS patients referred to NHI, for whom an invasive strategy should be the standard regimen.

In ACS patients presenting with AHF, an invasive strategy was considered far less often; the most frequent approach was medical treatment. This finding could be related to the interventionist's perception of procedural risk or of the futility of any intervention. In some instances, advanced shock and multisystem affection were the primary drivers of this change in the pattern of care. Survival in the subgroup who underwent invasive treatment was 20% [4,5], while survival with the conservative strategy was 25%. CABG was considered in a few cases; however, the numbers are too small to draw a clear conclusion. It should be noted that IABP was the only hemodynamic support available because advanced MCS (Impella® and LVAD) were not available.

Acute Decompensated Heart Failure

As regards decompensated HF, admission criteria were based only on clinical evaluation and radiological findings. As per study protocol, AHF complicating ACS was included in the ACS subgroup but not in the AHF group. The average hospital stay was similar to the international data [6].

Device therapy, including CRT, CRT-D, and ICD, was associated with a shortened hospital stay. However, we cannot make a causal relation between an acute benefit of device therapy and short-term outcomes in those patients.

Pulmonary Embolism

Among patients with PE, more than half received thrombolytic therapy. The decision to thrombolyse was physician dependent and non-standardized, and initial hemodynamic data were missing. SK was the only thrombolytic agent used, as newer agents were not available. Mechanical thrombectomy was not used in any patient, as mentioned above.

Aortic Dissection

In our snapshot cohort, only 3 cases were admitted over the one-month period. All were Stanford type A, and all were planned for surgical repair. Mortality was 100%. Time to surgery was not recorded. The small number of cases and the limited available data restrict our ability to draw a clear conclusion; more data collected over an extended period are urgently needed to verify patient outcomes.

Atrial Fibrillation

AF is a common condition, either as a primary or a secondary diagnosis, especially in patients with CHF, whether DCM or IHD. The availability of ablation for AF is an outstanding achievement. However, conservative medical therapy was the default strategy for all patients with CHF and AF, device therapy targeting LV dysfunction was considered much less often when AF was present, and rhythm control or ablation for AF was underutilized in CHF.

Heart Block

Our success rate in device therapy meets international standards [7,8]. A dual-chamber device in almost half of the patients is an excellent achievement in a high-volume center with limited resources. Thirty patients stayed in hospital for more than eight days, up to 88 days, with an average of 22 days. There is no clear explanation for their prolonged hospital stay, or whether the delay was due to delayed intervention or was procedure-related. A follow-up registry is needed to evaluate long-term outcomes and reflect our practice on functional class and quality of life [9].

Conclusion and Recommendations

  • Although it is a single-center experience, NHI is a high-volume cardiac center with a CCU admission rate of around 400 patients per month.
  • NHI has established a successful 24/7 PCI service with a high success rate. Accordingly, invasive reperfusion methods, whether PPCI or pharmaco-invasive, have a beneficial influence on short-term results and hospital stay length.
  • The present method for addressing ACS or HF at NHI has significantly improved, with a positive overall result across all age categories.
  • Strategies that result in a shorter hospital stay and reduced readmissions should be seriously explored in such a high-volume institution.
  • Accurate monitoring of timed reperfusion goals for various ACS reperfusion methods is urgently required to tackle all time delay components.
  • Complicated ACS patients with AHF have a very high risk of poor outcomes and are significantly associated with longer hospital stays and in-hospital death. In this group of patients, invasive strategies (in our local experience) provided a muted benefit, indicating the need for further development in treatment measures. Well-established methods like ventricular unloading devices or MCS should be considered urgently to support successful mechanical reperfusion and improve prognosis.
  • Available device therapy (CRT, CRT-D, and ICD) in patients with decompensated HF impacts short-term outcomes and correlates with a shorter hospital stay. Follow-up data reflecting the impact on long-term outcomes are needed.
  • Poor outcomes noticed in specific diagnoses like cardiogenic shock and Aortic dissection mandate the generation of an institute-specific registry for more precise data collection over a more extended period to guide further management optimization.
  • Creating an electronic health record for mega data collections will offer a clearer picture of healthcare quality rather than analysis of snapshot data. As a result, more comprehensive recommendations regarding areas for improvement will be implemented.

References

  1. Savarese G, Lund LH (2017) Global Public Health Burden of Heart Failure. Cardiac Failure Review 3: 7-11. [crossref]
  2. Shehab A, Al-Dabbagh B, Almahmeed W, et al. (2012) Characteristics and in-hospital outcomes of patients with acute coronary syndromes and heart failure in the United Arab Emirates. BMC Res Notes 5: 534. [crossref]
  3. Peter J McCartney, Colin Berry (2019) Redefining successful primary PCI. European Heart Journal – Cardiovascular Imaging 20: 133-135. [crossref]
  4. Nieminen MS, Harjola VP (2005) Definition and epidemiology of acute heart failure syndromes. Am J Cardiol 96: 5G-10G. [crossref]
  5. Cannon CP, Battler A, Brindis RG, Harrington RA, Krumholz HM, et al. (2001) American College of Cardiology key data elements and definitions for measuring the clinical management and outcomes of patients with acute coronary syndromes. A report of the American College of Cardiology Task Force on Clinical Data Standards (Acute Coronary Syndromes Writing Committee). J Am Coll Cardiol 38: 2114-2130. [crossref]
  6. Cowie MR, Anker SD, Cleland JGF, Felker GM, Filippatos G, Jaarsma T, et al. (2014) Improving care for patients with acute heart failure: before, during and after hospitalization. ESC Heart Fail 1: 110-45. [crossref]
  7. Armstrong PW, Pieske B, Anstrom KJ, Ezekowitz J, Hernandez AF, et al. (2020) Vericiguat in Patients with Heart Failure and Reduced Ejection Fraction. The New England Journal of Medicine 382: 1883-1893. [crossref]
  8. Lee TC, Kon Z, Cheema FH, et al. (2018) Contemporary management and outcomes of acute type A aortic dissection: an analysis of the STS adult cardiac surgery database. J Card Surg 33: 7-18. [crossref]
  9. Rajaeefard A, Ghorbani M, Babaee Baigi MA, Tabatabae H (2015) Ten-year Survival and Its Associated Factors in the Patients Undergoing Pacemaker Implantation in Hospitals Affiliated to Shiraz University of Medical Sciences During 2002 – 2012. Iranian Red Crescent Medical Journal 17: e20744. [crossref]

To Mask, or Not to Mask?

DOI: 10.31038/AWHC.2021451

 

Wearing facemasks is recommended as part of personal protective equipment and as a public health measure to prevent the spread of the coronavirus disease 2019 (COVID-19) pandemic. However, new mask guidance suggests that vaccinated people may take their masks off in countries such as the U.S., where more than one third of the population is vaccinated [1]. But after the trauma of the past year, and given that we are still nowhere near the roughly 80 percent needed to reach herd immunity, are we ready to uncover our faces yet?

Over one year into the pandemic, among the variety of public health and hygiene measures that have been gradually adopted worldwide, the most visually noticeable is the wearing of face masks. Different practices, mandatory or voluntary, and contradictory indications about the utility of facemask wearing were introduced across affected countries. Across Europe, face masks have been adopted as one of the measures to reduce COVID-19 spread, despite the fact that wearing masks in Europe is not common or familiar and is often associated only with some Asian or Middle Eastern countries [1], where their use is deeply connected to social and cultural practices, as well as to political, ethical, and health-related concerns and to personal and social meanings [2]. At the beginning of the pandemic, there was a lack of consistency among political leaders and experts, who advised against the use of facemasks by the public due to a sense that their potential risks, such as self-contamination, could outweigh the potential benefits, and that public use would deplete the supply needed for healthcare workers. Experts then shifted their thinking about the potential benefits of masks to include protecting others against infection with SARS-CoV-2 (source control), similar to how surgical masks in the operating room protect patients. However, self-protection is the main reason why infection prevention and control experts recommend that healthcare workers wear a facemask when entering the room of a patient who may have a viral respiratory infection. With COVID-19, facemasks have proven beneficial for the protection of both healthcare workers and the public, and this has since been backed up by empirical observations. Epidemiological evidence from Cochrane [3] and the World Health Organization [4] points out that, for population health measures, we should not generally expect to find controlled trials, for logistical and ethical reasons, and should therefore seek a wider evidence base. We should therefore not be surprised that there are no randomised clinical trials of the impact of masks on community transmission of any respiratory infection in a pandemic.

While there remains much uncertainty around the true effectiveness of face masks, especially when factoring in differences in mask types, levels of adherence, and patterns of human behavior, there is evidence that masks can provide a measure of protection and containment for respiratory viruses [3]. Systematic reviews of facemask use suggest relative risk (RR) reductions for infection ranging from 6% to 80%, including for betacoronavirus infection (e.g., COVID-19, SARS, MERS). For COVID-19, this evidence is of low or very low certainty because it is derived from observational studies with an important risk of various biases, or from indirect evidence from randomised studies of other (non-betacoronavirus) respiratory viruses with methodological limitations. Only one observational study has directly analysed the impact of community mask use on COVID-19 transmission, looking at the reduction of secondary transmission of SARS-CoV-2 in households by facemask use [5]. It found that face masks were 79% effective in preventing transmission if all household members used them before symptoms occurred; the study did not look at the RR of different types of mask. In a systematic review sponsored by the World Health Organization [6], physical distancing, facemasks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 were studied, and it was observed that facemask use could result in a large reduction in the risk of infection. However, the review included only three studies of mask use outside healthcare settings, all of which concerned SARS rather than SARS-CoV-2 and were too underpowered to draw any conclusions [7-9]. Another study found that mask use was strongly protective, with a risk reduction of 70% for those who always wore a mask when going out [11-13], but it did not look at the impact of masks on transmission from the wearer. It is not known to what degree analyses of other coronaviruses can be applied to SARS-CoV-2, and none of the studies looked at the RR of different types of mask [2,12].
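For readers less used to relative-risk language, the following toy calculation (with hypothetical attack rates, not figures taken from the cited household study) illustrates how a "79% effective" result corresponds to a relative risk:

```latex
% Hypothetical attack rates chosen purely for illustration.
\[
\mathrm{RR}=\frac{\text{attack rate with mask use}}{\text{attack rate without mask use}}
           =\frac{0.063}{0.300}=0.21,
\qquad
\text{effectiveness}=1-\mathrm{RR}=0.79\ (79\%).
\]
```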

Laboratory studies have demonstrated the efficacy of masks and other fabrics as a barrier to small particles and microbes. Surgical and N95 masks limit and redirect the projection of airborne particles, and filtration efficiency, which may correlate with containment, has been estimated at 80% for fitted surgical masks against small particles and up to 96% against microbes [3,12]. Surgical masks were three times more effective than homemade masks, though droplet transmission from infected individuals wearing the latter was nevertheless reduced. Generally, however, the theoretical protective effect of masks may be diminished by a number of factors: compliance and effective use may be inadequate, masks may not be replaced frequently enough to prevent contamination, and COVID-19 infection may even occur via alternative routes, such as ocular transmission. Nevertheless, the best evidence for airborne (or aerosol) transmission of COVID-19 comes from outbreaks and from the detection of virus in air samples [10]. Airborne or aerosol transmission means the inhalation of the smallest droplets by exposed individuals, whether the virus is contained in "ballistic" droplets emitted at close range from an infected person or in aerosolized particles that travel longer distances and remain airborne minutes or more after leaving the source. Coughing has been associated with the highest aerosol emissions, with a peak concentration at least 10 times greater than the mean concentration generated by speaking or breathing.

Consequently, the wearing of masks, in addition to vigilant hand hygiene, has been put forth as a means to mitigate disease transmission, especially in healthcare settings [10,11]. Much research has indicated that masks can provide significant protection to the wearer, although proper mask fitting is critical to realizing such benefits [7-9]. In addition, masks can reduce outward transmission by infected individuals, providing protection to others [10].
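As a rough, purely illustrative calculation of why masking by both parties can outperform one-sided masking, suppose, hypothetically, that a mask blocks 50% of outgoing particles (source control) and 50% of incoming particles (wearer protection); the residual exposure when both the infected and the exposed person wear masks is then the product of the two residual fractions:

```latex
% Hypothetical filtration fractions (50% each way), for illustration only.
\[
\text{residual exposure}=(1-0.5)_{\text{source}}\times(1-0.5)_{\text{wearer}}=0.25,
\qquad
\text{combined reduction}=1-0.25=75\%.
\]
```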

From a public health perspective, it is important to emphasise other risk mitigation strategies: measures aimed at reducing the number, proximity, and duration of interpersonal contacts; respiratory and hand hygiene; and engineering measures in built environments. No single intervention, therefore, seems to confer invulnerability to SARS-CoV-2.

Therefore, future steps should include conducting high quality studies, including use of standardised cloth masks, for both the estimates of effects and contextual factors in tandem with ongoing evidence synthesis. Current best evidence includes the possibility of important relative and absolute benefits of wearing a facemask. As no intervention is associated with affording complete protection from infection, a combination of measures will always be required, now and during the next pandemic.

Individual and collective responsibility and trust in the institutions and in the official assessment of recommendations as to the adopted measures are crucial to building up a degree of epistemic agreement [7]. However, this relies on communicating certainty [9], of which very little has been seen during the COVID-19 pandemic. Hence, the acceptance of official advice has varied among countries, cultures, and political contexts, with some degree of contradiction.

This is why the general public cannot help but wonder: with or without a mask? The confusion is exacerbated by changing rules that vary by country, state, province, or even neighborhood, all while the very real threat of infection remains, in some places more than others. However, recent observations directly demonstrate that wearing surgical masks or KN95 respirators, even without fit-testing, substantially reduces the number of particles emitted from breathing, talking, and coughing [10]. The efficacy of cloth and disposable masks is less clear and is confounded by the shedding of mask fibers, and regular changing of disposable masks and washing of homemade masks is mandatory for their correct use; nevertheless, observations indicate that they likely provide some reduction in emitted expiratory particles, in particular the larger particles (>0.5 μm).

In the case of fully vaccinated people, the Centers for Disease Control and Prevention in the U.S. recently recommended that people vaccinated against the coronavirus resume wearing masks in schools and in public indoor spaces in parts of the country where the virus is surging, marking a sharp turnabout from its advice just a few months earlier.

The World Health Organization, on the other hand, recommends wearing masks, especially indoors, and making their use a normal part of being around other people. Vaccines are effective against the worst outcomes of infection, even with variants, and conditions have clearly improved since last year [12]. However, in areas with a high number of new COVID-19 cases, wearing a mask indoors in public, outdoors in crowded areas, or when in close contact with unvaccinated people is highly recommended, especially for people with other conditions [13].

In summary, it is indisputable that mask wearing reduces the emission of virus-laden aerosols and droplets associated with expiratory activities and helps mitigate pandemics of respiratory disease such as COVID-19. Parallel to the principle of herd immunity for vaccines, the greater the extent to which the intervention (mask wearing) is adopted by the community, the larger the benefit to each individual member. The prevalence of mask use may be of greater importance than the type of mask worn. Recovery of countries from the COVID-19 pandemic requires the combined efforts of their populations working together in unified public health action. When masks are worn and combined with other recommended mitigation measures, they protect not only the most vulnerable but the whole community. Recommendations for masks will likely keep changing as more is learned about various mask types and as the pandemic evolves. With the emergence of more transmissible SARS-CoV-2 variants, it is even more important to adopt widespread mask wearing until effective levels of vaccination are achieved.

References

  1. Martinelli Lucia, Kopilaš Vanja, Vidmar Matjaž, Heavin Ciara, Machado Helena, et al. (2021) Face Masks during the COVID-19 Pandemic: A Simple Protection Tool with Many Meanings. Frontiers in Public Health 8: 947. [crossref]
  2. Jeremy Howard, Austin Huang, Zhiyuan Li, Zeynep Tufekci, Vladimir Zdimal, et al. (2021) An evidence review of face masks against COVID-19. Proceedings of the National Academy of Sciences 118: e2014564118. [crossref]
  3. Worby CJ, Chang HH (2020) Face masks use in the general population and optimal resource allocation during the COVID-19 pandemic. Nat Commun 11: 4049. [crossref]
  4. Stutt ROJH, Retkute R, Bradley M, Gilligan CA, Colvin J (2020) A modelling framework to assess the likely effectiveness of facemasks in combination with ‘lock-down’ in managing the COVID-19 pandemic. Proc R Soc A476: 20200376.
  5. Li JO, Lam DSC, Chen Y, Ting DSW (2020) Novel coronavirus disease 2019 (COVID-19): the importance of recognising possible early ocular manifestation and using protective eyewear. Br J Ophthalmol 104: 297-298. [crossref]
  6. Science in Emergencies Tasking – COVID-19 (SET-C). Face masks and coverings for the general public: Behavioural knowledge, effectiveness of cloth coverings and public messaging.
  7. Chu DK, Akl EA, Duda S, Solo K, Yaacoub S, et al. (2020) Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet 395: 1973-1987. [crossref]
  8. Nicola M, Alsafi Z, Sohrabi C, Kerwan A, Al-Jabir A, et al. (2020) The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int J Surg 78: 185-193. [crossref]
  9. Greenhalgh T, Schmid MB, Czypionka T, Bassler D, Gruer L (2020) Face masks for the public during the covid-19 crisis. BMJ 369: m1435. [crossref]
  10. Torjesen I (2021) Covid-19: Risk of aerosol transmission to staff outside of intensive care is likely to be higher than predicted. BMJ 372: n354. [crossref]
  11. Asadi S, Cappa CD, Barreda S, et al. (2020) Efficacy of masks and face coverings in controlling outward aerosol particle emission from expiratory activities. Sci Rep 10: 15665.
  12. Jeremy Howard, Austin Huang, Zhiyuan Li, Zeynep Tufekci, Vladimir Zdimal, et al. (2021) An evidence review of face masks against COVID-19. Proceedings of the National Academy of Sciences 118: e2014564118. [crossref]
  13. Catching A, Capponi S, Yeh MT, Bianco S, Andino R (2021) Examining the interplay between face mask usage, asymptomatic transmission, and social distancing on the spread of COVID-19. Sci Rep 11: 15998. [crossref]

Improving Adherence of Infection Prevention Standards in Health Facilities: The Role of Competition Approach from Four Regions of Tanzania Mainland

DOI: 10.31038/PEP.2021246

Abstract

Introduction: Implementation of infection prevention and control in health facilities faces several barriers. We conducted quality competition activities amongst health facilities as a model for improving compliance with Infection Prevention and Control (IPC) standards in four regions of Tanzania.

Methodology: The quality competition was implemented in sixty (60) health facilities. Before the competition, healthcare workers from the facilities received thorough capacity building on IPC as part of a continued essential health services project after the first wave of COVID-19. The sampled facilities were informed that the competition would be held based on adherence to IPC principles. Two tools were used: the national IPC checklist tool and the star rating assessment tool, both focused on Reproductive, Maternal, Newborn and Child Health. The assessments with the two tools were done independently and the mean score was then computed.

Results: A substantial improvement in adherence to IPC was observed in all participating facilities. The top three health facilities from each region were selected as winners of a non-monetary gift. The gifts were scaled to the level of the health facility: an award worth 13,043.5 USD (Tshs 30,000,000/=) for hospitals, 10,869.6 USD (Tshs 25,000,000/=) for health centers, and 6,521.7 USD (Tshs 15,000,000/=) for dispensaries. The funder was Catholic Relief Services (CRS).

Conclusion: Competition was found to be a facilitator of adherence to infection prevention principles amongst healthcare facilities.

Keywords

Infection prevention control; Quality of care; Maternal and newborn health

Introduction

The World Health Organization (WHO) advocates that infection prevention and control (IPC) is a practical, evidence-based approach which prevents patients and health workers from being harmed by avoidable infection and helps to mitigate antimicrobial resistance [1]. Unfortunately, the implementation of IPC in the health facilities of developing countries has been jeopardized by a number of barriers [2].

Limited awareness of IPC principles continues to hinder efforts to improve IPC implementation in most sub-Saharan African countries. IPC interventions are normally expected to be carried out by all hospital staff as part of their duties. However, limited knowledge of IPC guidelines, lack of formal feedback on performance, lack of resources, and staff hierarchy issues continue to hamper IPC enforcement at the facility level [3]. Some healthcare workers believe that patients pose no health risk, especially when the presentation is asymptomatic. Little awareness of the presence of infection or of IPC recommendations, coupled with inadequate resources, high workload, and time limitations, interferes with providing good patient care [4].

Compliance with IPC guidelines, standards, and standard operating procedures in Tanzania has been shown to be inadequate. Compliance with IPC standards was found to be 32% in 2010, improved to 53% in 2014, and dropped to 34% in 2017 [5]. An assessment of the implementation of IPC principles in primary healthcare facilities through star rating in 2015/2016 and 2017/2018 found that "median adherence to IPC principles increased from 31 percent in 2015/16 to 57 percent in 2017/18" [6]. In outpatient settings in 2018, adherence to hand hygiene was found to be 6.9%, glove use 74.8%, disinfection of reusable equipment 4.8%, and waste management 43.3% [7]. For example, according to a study conducted in Dodoma by Wiedenmayer et al. (2020), only 6.1% and 3.0% of assessed units in intervention and non-intervention facilities, respectively, were able to reach the recommended World Health Organization compliance rate of ≥81% in Water, Sanitation and Hygiene [8].

Conceptually, the integration of Care for Child Development (CCD) into Management of Possible Severe Bacterial Infection (PSBI) and the Neonatal Survival Program [9] is hinged on the Lancet Global Health Commission on High Quality Health Systems in the Sustainable Development Goals (SDG) era report [10]. The report proposed four actions: (i) health system leaders commit to govern for quality of care and continuous learning; (ii) countries redesign service delivery to maximize health outcomes; (iii) the health workforce is transformed by adopting competency-based clinical education and performance; and (iv) governments work with civil society to hold systems accountable and actively seek high-quality care [10].

This study sought to translate the Lancet Global Health Commission on High Quality Health Systems report into health system improvement for IPC in Tanzania and to understand "what works, why, and in what contexts" [10]. Hence, the main aim of this study was to highlight quality competition activities amongst health facilities as a model for improving IPC compliance in four regions of Tanzania.

Methodology

The quality competition was implemented in sixty (60) health facilities in the Iringa (16), Mbeya (15), Njombe (14) and Songwe (15) regions. The facilities included regional referral and district hospitals (19), health centres (36) and dispensaries (5). Before implementation of the quality competition activities, healthcare workers from these facilities received thorough capacity building on IPC under the continued essential health services (CES) project following the first wave of coronavirus disease 2019 (COVID-19), which ended in June 2020. The CES project was designed as a response towards prevention, detection and early containment of outbreaks, developed jointly by the Tanzanian Ministry of Health, Community Development, Gender, Elderly and Children (MoHCDGEC) and the President's Office – Regional Administration and Local Government (PO-RALG) in collaboration with AMREF Health Africa – Tanzania, with financial support from UNICEF. Implementation started in August 2020 and ended in October 2021. The project's goal was to increase the capacity of health facilities to continue providing essential health services by strengthening IPC during the COVID-19 pandemic in 17 regions of Tanzania: Mainland (12) and Zanzibar (5). The regions in Tanzania Mainland were Arusha, Dar es Salaam, Dodoma, Iringa, Kilimanjaro, Manyara, Mbeya, Morogoro, Mwanza, Tanga, Njombe and Songwe.

The CES project deployed a blended cascade model to train healthcare workers on IPC and ensure continuity of essential services even during the pandemic. The project physically trained 40 national-level master trainers, who then virtually trained 237 regional- and district-level trainers. These in turn trained, virtually and physically, 1,172 facility-based trainers ("facility champions"), who then provided in-person training and mentorship to their fellow healthcare workers in their facilities. A total of 5,172 healthcare workers, both medical and non-medical, were reached from 297 health facilities. To ensure good practice of IPC standards by healthcare workers, information, education and communication (IEC) materials on IPC were developed and distributed to facilities on both the Mainland and Zanzibar.

In order to strengthen the implementation of the IPC standards, the MoHCDGEC and PO-RALG, in partnership with Catholic Relief Services (CRS) and with financial support from UNICEF, conducted a quality competition on IPC implementation among the purposefully sampled health facilities reached by the CES project implemented under AMREF Health Africa – Tanzania. The design for improving adherence to IPC in the selected health facilities also fits well with the "behaviour change wheel", aligning with its communication and motivation components and with the "source of behaviour (capability, opportunity and motivation – producing a behaviour [COM-B])" model, involving some aspects of policy and intervention functions [11], as shown in Table 1. The quality competition is conceptualized as a behavioural change technique in the form of a material incentive to the winning facilities, valued at 13,043.5 USD (Tshs 30,000,000/=) for hospitals, 10,869.6 USD (Tshs 25,000,000/=) for health centers, and 6,521.7 USD (Tshs 15,000,000/=) for dispensaries. The expected mechanism of action of the quality competition is to trigger an attitude towards improving IPC practices, which is the intended behaviour change in the project-implementing facilities [12].

Table 1: Behavior change wheel [11] fit with quality competition for improving IPC practices in four regions of Tanzania Mainland.

Behavior Change Wheel (BCW) components, categories and intervention functions, mapped to the interventions planned in the quality competition for improving implementation of IPC in Iringa, Mbeya, Njombe and Songwe:

Policy categories – Guidelines:
  • Planning meetings at national level, both consultative and technical, involving key stakeholders to design the implementation (assessment and mentorship modalities) and the involvement of sub-national levels.

COM-B system: Capability – physical; intervention functions: Training, Enablement:
  • Training of health workers in the selected facilities on the use of the SBM-R tool for self-assessment and improvement.

COM-B system: Capability – psychological; intervention functions: Education, Training, Enablement:
  • Mentorship of healthcare workers in the health facilities by two members of the Council Health Management Team (CHMT), aiming at: promoting self-reflection and quality improvement; data use for individual quality improvement; and encouraging self-monitoring and evaluation of IPC indicators at unit and facility level.

COM-B system: Motivation – reflective; intervention function: Incentivization:
  • Self-assessment by individual facilities and working units, aimed at stimulating accountability towards quality improvement.
  • Benchmarking against peer facilities and working towards a public award.
  • Use of a monitoring system to rate facilities on the implementation of appropriate IPC and WASH.
  • Awarding the highest-ranking facilities with material awards to be used towards further quality improvement in their respective facilities.

COM-B system: Motivation – automatic; intervention function: Incentivization:
  • External assessment by National Quality Assessors, aimed at stimulating accountability towards quality improvement through quality competitions in which facilities compete with other facilities of the same type in each of the four target regions.
  • Benchmarking against peer facilities and working towards a public award.
  • Awarding the highest-ranking facilities with material awards to be used towards further quality improvement in their respective facilities.

The sampled facilities were informed that the competition would be held based on adherence to IPC and quality improvement principles. The exact timing of the competition was not disclosed to the participating facilities, as they might change behavior in response to the knowledge that they were being evaluated; they were informed of the existence of the competition for ethical purposes only. Two assessment tools were used: the IPC Standards Assessment Tool, which uses the Standards-Based Management and Recognition (SBM-R) approach, to check IPC adherence, and the star rating tool (SRT) to check the quality of health service provision. Both tools focused on Reproductive, Maternal, Newborn, Child and Adolescent Health (RMNCAH). The tools are hosted in the Afya Supportive Supervision system, which was developed by the ministry to ease supervision and assessments and can be accessed using phones, tablets, computers, etc. The assessments using the two tools were done independently, scores were autogenerated, and the mean score was computed.

Data Analysis

Scores (%) of the facilities were extracted from the Afya Supportive Supervision System (Afya SS) after assessment using the SRT for RMNCH services and the IPC SBM-R tool for departments/functional areas related to RMNCH services, and were exported to an Excel spreadsheet. The average score of each facility was computed in Excel. The top three performers on the SBM-R/IPC score for RMNCH-related departments (%) and the star rating assessment in the reproductive health departments were selected as winners from each region.
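The paper states that the two tool scores were exported to Excel, averaged, and the top three performers per region selected. As a minimal sketch of that same calculation (facility names and scores below are hypothetical, pandas assumed available), one could do:

```python
# Minimal sketch of the scoring step described above.
# Facility names and scores are hypothetical, not the study data.
import pandas as pd

scores = pd.DataFrame({
    "region":   ["Iringa", "Iringa", "Iringa", "Mbeya", "Mbeya", "Mbeya"],
    "facility": ["HC A", "Hospital B", "HC C", "HC D", "Hospital E", "HC F"],
    "sbmr_ipc": [71.0, 68.1, 64.0, 68.3, 63.1, 54.3],   # SBM-R / IPC score (%)
    "srt":      [93.5, 90.0, 68.0, 96.1, 95.2, 100.0],  # star rating score (%)
})

# Final score = mean of the two independent assessments
scores["final"] = scores[["sbmr_ipc", "srt"]].mean(axis=1)

# Top three facilities per region by final score
winners = (scores.sort_values("final", ascending=False)
                 .groupby("region", sort=False)
                 .head(3))
print(winners[["region", "facility", "final"]])
```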

Results

Average Score (%) of the facilities of Iringa Region after assessment using SRT for RMNCH Services and IPC–SBM-R tool for department/functional areas related to RMNCH Services are shown in Table 2.

Table 2: Average scores for facilities in Iringa Region.

Name of Health Facility | Council | SBMR/IPC Score – RMNCH related Departments (%) | SRT Score (%) | Final Score (%)

A. Hospitals
1. Frelimo District Hospital (CH) | Iringa MC | 71.00 | 93.50 | 82.25
2. Iringa Regional Referral Hospital (RRH) | Iringa MC | 68.10 | 90.00 | 79.05
3. Mafinga Town Hospital (TH) | Mafinga TC | 64.00 | 68.00 | 66.00
4. Kilolo District Hospital | Kilolo DC | 67.00 | 50.50 | 58.75

B. Health Centers
1. Ipogolo HC | Iringa MC | 78.20 | 81.00 | 79.60
2. Kidabaga HC | Kilolo DC | 67.00 | 80.50 | 73.75
3. Mlowa HC | Iringa DC | 61.10 | 86.00 | 73.55
4. Nzihi HC | Iringa DC | 55.60 | 91.00 | 73.30
5. Kiponzelo HC | Iringa DC | 55.13 | 86.00 | 70.57
6. Ngome HC | Iringa MC | 51.70 | 86.00 | 68.85
7. Malangali HC | Mufindi DC | 53.00 | 82.50 | 67.75
8. Ihongole HC | Mafinga DC | 54.00 | 78.00 | 66.00
9. Mgololo HC | Mufindi DC | 60.00 | 70.00 | 65.00
10. Mgama HC | Iringa DC | 43.20 | 86.50 | 64.85
11. Ismani HC | Iringa DC | 35.00 | 89.00 | 62.00
12. Sadani HC | Mufindi DC | 57.00 | 54.50 | 55.75

Average Scores (%) of the facilities selected from Mbeya Region after assessment using SRT for RMNCAH Services and IPC – SBM-R tool for department/functional areas related to RMNCAH Services are shown in Table 3.

Table 3: Average scores for facilities in Mbeya Region.

Name of the Facility | Council | SBMR/IPC Score – RMNCAH related Departments (%) | SRT Score (%) | Average Score (%)

Health Centres
1. Igawilo HC | Mbeya CC | 68.29 | 96.14 | 82.22
2. Utengule Usangu HC | Mbarali DC | 63.06 | 95.18 | 79.12
3. Ipinda HC | Kyela DC | 54.29 | 100 | 77.15
4. Ilembo HC | Mbeya DC | 60.43 | 93.81 | 77.12
5. Chalangwa HC | Chunya DC | 56.31 | 94.32 | 75.32
6. Ntaba HC | Busokelo DC | 54.27 | 87.82 | 71.05
7. Ikuti HC | Rungwe DC | 39.80 | 90.42 | 65.11

Hospitals
1. Chunya Council Hospital (CH) | Chunya DC | 68.84 | 100 | 84.42
2. Mbarali (CH) | Mbarali DC | 53.41 | 100 | 76.71
3. Mbeya Zonal Referral Hospital | Mbeya CC | 56.31 | 95 | 75.66
4. Mbeya (CH) | Mbeya DC | 46.13 | 100 | 73.07
5. Tukuyu (CH) | Rungwe DC | 43.24 | 95.15 | 69.20
6. Kyela (CH) | Kyela DC | 38.14 | 100 | 69.07
7. Busokelo (CH) | Busokelo DC | 48.21 | 88.50 | 68.36
8. Mbeya Regional Referral Hospital (RRH) | Mbeya CC | 32.91 | 95 | 63.96

Average Score (%) of the facilities of Njombe Region after assessment using SRT for RMNCAH Services and IPC–SBM-R tool for department/functional areas related to RMNCAH Services are shown in Table 4.

Table 4: Average scores for facilities in Njombe Region.

Name of the Facility | Council | SBMR/IPC Score – RMNCAH related Departments (%) | SRT Average Score | Overall Score (%)

Hospitals
1. Ludewa CH | Ludewa DC | 50.1 | 100 | 75.1
2. Makete CH | Makete DC | 49.3 | 100 | 74.7
3. Njombe RRH | Njombe TC | 40 | 100 | 70
4. Njombe TCH | Njombe TC | 31.8 | 100 | 65.9

Health Centers
1. Lupembe HC | Njombe DC | 69.7 | 100 | 84.9
2. Njombe HC | Njombe TC | 60.3 | 100 | 80.2
3. Lupila HC | Makete DC | 49.4 | 100 | 74.7
4. Ihalula HC | Njombe TC | 49.7 | 100 | 74.9
5. Matamba HC | Makete DC | 48 | 100 | 74
6. Wanging’ombe HC | Wanging’ombe DC | 46.4 | 100 | 73.2
7. Manda HC | Ludewa DC | 38.5 | 100 | 69.3
8. Ipelele HC | Makete DC | 42.5 | 100 | 71.3
9. Makambako HC | Makambako DC | 36.4 | 100 | 68.2
10. Mlangali HC | Ludewa DC | 30.6 | 100 | 65.3

Average Score (%) of the facilities of Songwe Region after assessment using SRT for RMNCAH Services and IPC – SBM-R tool for department/functional areas related to RMNCAH Services are shown in Table 5.

Table 5: Average scores for facilities in Songwe Region.

Name of the Facility | Council | SBMR/IPC Score – RMNCAH related Departments (%) | SRT Score (%) | Average Score (%)

Hospitals
1. Vwawa Designated RRH | Mbozi DC | 47.8 | 94.5 | 71.1
2. Mwambani CDH | Songwe DC | 44.2 | 90.5 | 67.3
3. Itumba CH | Ileje DC | 20.5 | 88.5 | 54.5

Health Centers
1. Itaka HC | Mbozi DC | 62.4 | 93 | 77.7
2. Tunduma HC | Tunduma TC | 17.9 | 83 | 50.4
3. Ibaba HC | Ileje DC | 12 | 87 | 49.5
4. Kamsamba HC | Momba DC | 18.1 | 79 | 48.5
5. Nanyala HC | Mbozi DC | 19.2 | 72 | 45.6
6. Mbuyuni HC | Songwe DC | 11.9 | 74.5 | 43.2
7. Lubanda HC | Ileje DC | 9.0 | 66 | 37.5

Dispensaries
1. Isongole | Ileje DC | 42.5 | 100 | 71.2
2. Katete | Tunduma TC | 31.2 | 90.5 | 60.8
3. Ngwala | Songwe DC | 22.8 | 90 | 56.4
4. Mlowo | Mbozi DC | 21.4 | 71.5 | 46.4
5. Ivuna | Momba DC | 24.7 | 65.6 | 45.1

Three Winners in Every Region

The top three health facilities from each region were selected as winners of the gift. The gifts were given based on the level of the health facility: 13,043.5 USD (Tshs 30,000,000/=) for hospitals, 10,869.6 USD (Tshs 25,000,000/=) for health centers, and 6,521.7 USD (Tshs 15,000,000/=) for dispensaries. The facilities were not given cash but rather chose an in-kind award of that value; CRS procured the awards as proposed by the winning facilities. The winning facilities are shown in Table 6 below.

Table 6: Facilities which won the competition.

Region | Winning Facility | SRT average score | Average SBMR RMNCAH related Departments score (%) | Overall score (%)

Mbeya | Chunya CH | 100.0 | 68.8 | 84.4
Mbeya | Igawilo HC | 96.1 | 68.3 | 82.2
Mbeya | Utengule Usangu HC | 95.2 | 63.1 | 79.1
Iringa | Frelimo Hospital | 93.5 | 71.0 | 82.3
Iringa | Ipogolo HC | 81.0 | 78.2 | 79.6
Iringa | Iringa RRH | 90.0 | 68.1 | 79.1
Njombe | Ludewa CH | 100.0 | 50.1 | 75.1
Njombe | Lupembe HC | 100.0 | 69.7 | 84.9
Njombe | Njombe HC | 100.0 | 60.3 | 80.2
Songwe | Vwawa Designated RRH | 94.5 | 47.8 | 71.1
Songwe | Itaka HC | 93.0 | 62.4 | 77.7
Songwe | Isongole Dispensary | 100 | 42.5 | 71.2

Discussion

The main aim of our study was to highlight quality competition activities amongst health facilities as a model of improving IPC compliance in Health Facilities in four regions of Tanzania. Our data have shown this model of competition is effective in fostering IPC compliance. The highest average score was 84.4%.

Training of Infection Prevention and Control to Health Care Workers

The WHO recommends that IPC education should be in place for all healthcare workers, utilizing team- and task-based strategies that are participatory, including simulation training, to reduce the risk of healthcare-associated infection and antimicrobial resistance. IPC education and training should be part and parcel of an overall health facility education strategy, including new employee orientation and the provision of continuous educational opportunities for existing staff, regardless of level and position (including, for example, senior administrative and housekeeping staff) [15]. Taking that into account, the training was treated as critical to the facilities. Stakeholders, that is, government officials, partners and the participating facilities, were engaged, and this engagement was reinforced so as to cultivate a culture of ownership of IPC in the facilities and the government. The cascaded training used both physical and virtual approaches to maximize the use of available resources. The national trainers were responsible for designing the training package based on selected topics that captured all standard and transmission-based precautions; developing the package together ensured that all the trainers were conversant with and owned it. The national trainers trained the regional and district trainers virtually. Likewise, the regional and district teams trained the facility-based trainers, who in turn trained the healthcare workers at facility level.

Infection Prevention and Control Mentorship: Facility-Based and External-Based

The health facility-based mentorship was done by facility-based mentors who were also the trainers. Using facility-based mentors ensured ownership, and ownership of any approach facilitates long-term sustainability; sustained compliance with IPC is needed from all health workers in all healthcare settings. We find this approach of giving healthcare workers ownership, letting them take the lead in training and mentoring their fellow healthcare workers, to be more successful than depending on external trainers and mentors.

The external mentors also took part in the mentorship after the internal mentors had finished their mentoring sessions. The external mentors' main obligation was to reinforce what the internal mentors had done, to mentor in areas the internal mentors had not covered, and to further mentor the health facility mentors themselves. The external mentoring also motivated the facility-based mentors and health workers. In addition, it created a smooth means of communication between health facility workers and the upper levels, that is, the district, the region and the MoHCDGEC, as well as the implementing partners.

Internal Assessment Using National IPC Checklist

The health facility-based assessors also assessed the healthcare workers and the facility as a whole to check how far they complied with the IPC standards. The internal assessors assessed themselves to identify gaps and plan interventions to correct them. This approach worked well because the facilities were able to identify gaps based on the IPC checklist and plan the corrective measures by themselves. Again, this promoted ownership of the identified gaps, so the facilities felt the gaps were theirs and that they were responsible for correcting them. This approach was therefore successful in improving IPC compliance.

Competition by External Assessment Using National IPC Checklist

When the process of preparing the facilities in terms of training, mentorship and internal assessment was complete, it was time to conduct the external assessment and compare health facilities. The facilities that scored highest were rewarded, as the facilities had been told beforehand that the highest-scoring facilities would receive the gift. The fact that the facilities knew there would be a gift fostered competition amongst health workers from different health facilities, and the healthcare workers took their own initiative to improve IPC by complying with the standards set by the MoHCDGEC. Though overall improvements were noted across facilities, variations in results were observed during the external assessment; these variations are attributable to the overall performance of the health facilities in the regions.

The implementation of the project applied a multi-pronged strategy in order to improve IPC at the targeted facilities. Accountability was instituted through quality competitions between facilities in the target regions, based on the fact that quality competitions have been used in a variety of settings and have gained recognition as a potential approach for increasing accountability and building a culture of quality in health facilities [10]. Therefore, facilities competed with other facilities of the same type (dispensary vs. health center vs. hospital) in each of the four target regions, namely Iringa, Mbeya, Njombe and Songwe.

The process of benchmarking against peer facilities and working towards a public award has been shown to be motivating, as demonstrated in a recent study of a national quality improvement program in Tanzania [13]. A monitoring system was used to rate facilities on the implementation of appropriate IPC and WASH, and the highest-ranking facilities received public awards to be used towards further quality improvement in their respective facilities. In order to ensure a fair and just competition, this activity involved: formation of a team of judges; orientation of health facilities on the selected indicators; selection and orientation of data collectors external to the health facilities from each region, drawn from the Regional Health Management Teams (RHMT) and/or Council Health Management Teams (CHMT); data collection; judges' spot checks of health facility implementation of IPC/WASH activities; and selection of the winning health facilities and award celebrations per region.

Limitations

There was no control group of health facilities where the CES project was not implemented to compare with the facilities where the CES project was implemented. The scores achieved by these facilities might also have been achieved by facilities that were not implementing CES.

Conclusion

Overall, quality competition on adherence to IPC best principles and standards for maternal and child health was found to be a facilitator of adherence to infection prevention principles amongst healthcare workers. It is envisaged that subsequent efforts in this field will build on this approach to comprehensively address the key obstacles that prevent adherence to IPC best principles. By using competition to trigger improvement in IPC practices in health facilities, the CRS project has shown that it is possible for Tanzania to use this approach to further elevate and incentivize quality of care in health facilities and thus accelerate attainment of what Nimako and colleagues have referred to as "a survival-focused universal health coverage agenda" [14].

Acknowledgement

The team would like to acknowledge the support of the MoHCDGEC of Tanzania. The team acknowledges UNICEF's financial support rendered through the CES and PSBI projects, which enabled implementation of the competition exercise among the health facilities. The team would also like to thank AMREF Health Africa and CRS for their contribution to the development of this publication. Finally, we thank the WHO country office Tanzania for generously funding the costs of publication of this article.

Funding

This publication is part of a three years project titled: “Integrating Care for Child Development (CCD) into Management of Possible Severe Bacterial Infection (PSBI) and Neonatal Survival Program which is funded by the United Nations Children Fund (UNICEF) and implemented by Catholic Relief Services (CRS) in four regions of Tanzania Mainland (Iringa, Mbeya, Songwe and Njombe) as well as in Unguja and Pemba in Zanzibar.

Disclaimer

The contents of this article represent the views of the authors and do not necessarily reflect the views of the organizations where the authors are affiliated.

Author Contribution

All authors contributed to designing the manuscript, oversaw the implementation, conducted the literature review, and wrote the first and final draft. Amref Africa and CRS led the implementation of the program in the country.

Conflict of Interest

There was no conflict of interest amongst authors

Ethical Considerations

This work does not require ethical clearance because IPC is part of the routine patient care. There is therefore no requirement of the formal ethical clearance for publication of these data.

References

  1. World Health Organization (2016) Guidelines on core components of infection prevention and control programmes at the national and acute health care facility level. https://apps.who.int/iris/handle/10665/251730. License: CC BY-NC-SA 3.0 IGO. Accessed on 17th August, 2021.
  2. Houghton C., Meskell P., Delaney H., et al. (2020) Barriers and facilitators to healthcare workers’ adherence with infection prevention and control (IPC) guidelines for respiratory infectious diseases: a rapid qualitative evidence synthesis. Cochrane Database of Systematic Reviews 4. CD013582. [crossref]
  3. Herbeć A, Chimhini G, Rosenberg-Pacareu J, Sithole K, Rickli F, et al. (2020) Barriers and facilitators to infection prevention and control in a neonatal unit in Zimbabwe – a theory-driven qualitative study to inform design of a behaviour change intervention. J Hosp Infect 106(4):804-811. DOI: https://doi.org/10.1016/j.jhin.2020.09.020 [crossref]
  4. Alhumaid S, Al Mutair A, Al Alawi Z, Alsuliman M, Ahmed GY, et al. (2021) Knowledge of infection prevention and control among healthcare workers and factors influencing compliance: a systematic review. Antimicrob Resist Infect Control 3;10(1):86. doi: 10.1186/s13756-021-00957-0 [crossref]
  5. Hokororo J, Eliakimu E, Ngowi R, German C, Bahegwa R, et al. (2021) Report of Trend for Compliance of Infection Prevention and Control Standards in Tanzania from 2010 to 2017 in Tanzania Mainland. Microbiol Infect Dis 5(3): 1-10. Available at: https://scivisionpub.com/pdfs/report-of-trend-for-compliance-of-infection-prevention-and-control-standards-in-tanzania-from-2010-to-2017-in-tanzania-mainland-1598.pdf Accessed 03rd July, 2021.
  6. Kinyenje E, Hokororo J, Eliakimu E, Yahya T, Mbwele B, et al. (2020) Status of Infection Prevention and Control in Tanzanian Primary Health Care Facilities: Learning From Star Rating Assessment. Infection Prevention in Practice 2(3):100071. doi: 10.1016/j.infpip.2020.100071 [crossref]
  7. Powell-Jackson T, King JJC, Makungu C, Spieker N, Woodd S, et al. (2020) Infection prevention and control compliance in Tanzanian outpatient facilities: a cross-sectional study with implications for the control of COVID-19. Lancet Glob Health 8(6): e780–e789. DOI: 1016/S2214-109X(20)30222-9
  8. Wiedenmayer K, Msamba VS, Chilunda F, Kiologwe JC, Seni J (2020) Impact of hand hygiene intervention: a comparative study in health care facilities in Dodoma region, Tanzania using WHO methodology. Antimicrob Resist Infect Control 8;9(1):80. doi: 10.1186/s13756-020-00743-4 [crossref]
  9. Catholic Relief Services. Infection Prevention and Control (IPC and WASH) as Sgnificant Component to Systems Package for Survival at Birth. A concept note introducing the project to the Ministry of Health, Community Development, Gender, Elderly and Children. February, 2021.
  10. Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, et al. (2021) High-quality health systems in the Sustainable Development Goals era: time for a revolution. Lancet Glob Health 6(11):e1196-e1252. doi: 10.1016/S2214-109X(18)30386-3. Epub 2018 Sep 5. Erratum in: Lancet Glob Health. 2018 Sep 18; Erratum in: Lancet Glob Health. 2018 Nov;6(11):e1162. Erratum in: Lancet Glob Health. [crossref]
  11. Michie S, van Stralen MM, and West R (2011) The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science 6:42. http://www.implementationscience.com/content/6/1/42
  12. Carey RN, Connell LE, Johnston M, Rothman AJ, de Bruin M, et al. (2019) Behavior Change Techniques and Their Mechanisms of Action: A Synthesis of Links Described in Published Intervention Literature. Ann Behav Med 17;53(8):693-707. doi: 10.1093/abm/kay078 [crossref]
  13. Yahya, T. (2020) Star Rating Evaluation: A Mixed Methods Analysis of Tanzania’s National Health Facility Quality Assessment System. Presentation to the Development Partners Group in Health (DPG-Health) Available at: shorturl.at/atyAU
  14. Nimako K., Smith J.M. and Akweongo P. (2021) Translating the Lancet Global Health Quality Commission report into action: can we implement a UHC4Survival agenda? IJQHC Communications, lyab006, https://doi.org/10.1093/ijcoms/lyab006
  15. Storr J., Twyman A., Zingg W., et al. (2017) Core components for effective infection prevention and control programmes: new WHO evidence-based recommendations. Antimicrob Resist Infect Control 6, 6  https://doi.org/10.1186/s13756-016-0149-9 [crossref]

Innovative Two-Stage Seamless Adaptive Clinical Trial Designs

DOI: 10.31038/JPPR.2021441

Abstract

In recent years, the use of a two-stage seamless adaptive design in clinical research has become popular; such a design combines two separate clinical studies into a single study that can address the study objectives of the two separate studies. The design can not only reduce the lead time between the two separate trials, and consequently shorten the development process, but also increase the probability of success of the intended clinical trial because critical decisions or adaptations can be made after the review of interim data at the end of the first stage. Depending on the study objectives, study endpoints, and target patient populations at different stages, two-stage seamless adaptive designs can be classified into “k-D” designs (k is the number of different dimensions). A primary assumption that the study endpoint at the first stage is predictive of the study endpoint at the second stage, consideration of using two sets of hypotheses to account for different study objectives at different stages, and an assessment of a sensitivity index for possible population shifts are proposed for valid statistical analyses for a given type of “k-D” design. Examples concerning a hepatitis C virus (HCV) infection clinical study and a non-alcoholic steatohepatitis (NASH) clinical trial are presented.

Keywords

Two-stage phase 1/2 (2/3) seamless adaptive design; The “k-D” design; Population shift; NASH clinical trial

Introduction

In recent years, the use of seamless adaptive designs in clinical trials has become very popular in clinical research and development. A seamless trial design is defined as a design that combines two separate (independent) trials into a single study [1]. The single study is able to address the study objectives that are normally achieved through the conduct of the two trials. An adaptive seamless trial design is referred to as a seamless design that applies adaptations during the conduct of the trial. A seamless adaptive design would use data collected from patients enrolled before and after the adaptation in the final analysis. A typical example is a two-stage phase 2/3 seamless adaptive clinical trial which consists of two stages, namely a learning (or exploratory) stage (e.g., a phase 2 study for dose finding or a drop-the-losers design) and a confirmatory stage (e.g., a phase 3 study for efficacy confirmation). See also EMA (2014) [2] and FDA (2019) [18]. A two-stage seamless adaptive trial design has the following characteristics: (i) it combines two separate and independent trials into a single trial, (ii) the single trial consists of two stages, namely a learning (exploratory) stage and a confirmatory stage, and (iii) it offers opportunities for adaptations based on accrued data at the end of the learning stage [3]. A two-stage seamless adaptive design provides an opportunity for savings because it allows stopping a trial early for safety and/or futility/efficacy. In addition, it can reduce the lead time between the learning stage and the confirmatory stage. Furthermore, data collected at the learning stage can be combined with the data obtained at the confirmatory stage in a final analysis to obtain a more accurate and reliable assessment of the treatment effect under study. However, the use of a two-stage seamless adaptive trial design also suffers from the following limitations (or regulatory concerns): (i) it may introduce operational bias (e.g., when adaptations relate to dose, hypothesis, endpoint, etc.), (ii) it may not be able to control the overall type I error rate, (iii) statistical methods for a combined analysis are not well established, especially when the study objectives and study endpoints are different at different stages, and (iv) the complexity of the two-stage seamless adaptive design depends upon the adaptations applied [4,5].

Depending upon whether the study objectives, study endpoints, and target populations at different stages are the same, two-stage seamless adaptive designs can be classified into several categories. Statistical methods for data analysis, including power calculation for sample size calculation and allocation, are different for seamless adaptive designs in different categories. In the next section, these types of seamless adaptive designs are defined. Section 3 describes the analysis methods for these types of seamless adaptive clinical trials with one or more differences in study objective, endpoint, and/or target patient population. Section 4 discusses primary assumptions and statistical considerations for the analysis of a general “k-D” design. Two examples concerning a hepatitis C virus (HCV) infection clinical study and a non-alcoholic steatohepatitis (NASH) clinical trial are presented to illustrate the application of a “2-D” design and a “3-D” design, respectively. Some concluding remarks are given in the last section of this article.

Types of Two-Stage Seamless Adaptive Design

Generally, a seamless adaptive design has three key dimensions: study objective, study endpoint, and target patient population. As described in Table 1, in practice a seamless adaptive design may combine two separate (independent) trials with similar but different study objectives into a single trial, e.g., a phase 2 trial for dose selection and a phase 3 study for efficacy confirmation. In addition, the study endpoints considered in the two separate trials may be different, e.g., a biomarker or surrogate endpoint versus a regular clinical endpoint. In some cases, such as non-alcoholic steatohepatitis (NASH) clinical trials, the target patient population may shift due to disease progression at different stages (e.g., fibrosis, cirrhosis, and liver transplant). Thus, the three dimensions may be the same or different in a particular two-stage seamless adaptive design. We can classify two-stage seamless adaptive designs into eight categories depending upon whether the study objectives, study endpoints, and target patient populations at different stages are the same (Table 2).

Table 1: Three Key Dimensions of a Seamless Adaptive Design.

Dimension: Example

Study objective: dose selection versus efficacy confirmation
Study endpoint: a biomarker or surrogate endpoint versus a regular clinical endpoint
Target patient population: may be shifted due to disease progression at different stages (e.g., fibrosis, cirrhosis, and liver transplant)

Table 2: Types of Two-Stage Seamless Adaptive Designs (Depending upon Objective, Endpoint, and Target Population)

Study objective Same (S):
  Target patient population Same (S): SSS (same endpoint), SDS (different endpoint)
  Target patient population Different (D): SSD (same endpoint), SDD (different endpoint)

Study objective Different (D):
  Target patient population Same (S): DSS (same endpoint), DDS (different endpoint)
  Target patient population Different (D): DSD (same endpoint), DDD (different endpoint)

Note: the three letters denote study objective, study endpoint, and target patient population, respectively (S = Same, D = Different).

Table 3 indicates that there are one “0-D” design, three “1-D” designs, three “2-D” designs, and one “3-D” design. These “K-D” designs, where K is the number of differences in objective, endpoint, and target patient population, are briefly described below.

The “0-D” design is a two-stage seamless adaptive design with the same study objective and the same study endpoint at different stages under the same target patient population, which is similar to a typical group sequential design with a planned interim analysis.

For the “1-D” designs, there are three different types: (i) the study objective is different at different stages (e.g., dose selection versus efficacy confirmation), (ii) the study endpoint is different at different stages (e.g., biomarker or surrogate endpoint or clinical endpoint with shorter duration versus clinical endpoint), and (iii) the target patient population is different at different stages (e.g., population shift before and after adaptations applied based on the review of interim analysis at the end of the first stage).

For the “2-D” designs, there are three different types: (i) both study objective and endpoint are different at different stages (e.g., dose selection versus efficacy confirmation and biomarker or surrogate endpoint or clinical endpoint with shorter duration versus clinical endpoint), (ii) both study objective and  target  patient  population are different at different stages (e.g., dose selection versus efficacy confirmation and population shift before and after adaptations applied based on the review of interim analysis at the end of the first stage), and (iii) both study endpoint and the target patient population are different at different stages (e.g., biomarker or surrogate endpoint or clinical endpoint with shorter duration versus clinical endpoint and population shift before and after adaptations applied based on the review of interim analysis at the end of the first stage).

For the “3-D” design, in addition to differences in study objective and study endpoint at different stages, the target patient population is also different at different stages. A typical example is a two-stage NASH seamless adaptive clinical trial, which will be further discussed in a later section.
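To make this classification concrete, the following short Python sketch (not part of the original paper) maps the three same/different flags to the corresponding S/D code and “k-D” label; the function name and interface are illustrative only.

    def kd_design_type(objective_same, endpoint_same, population_same):
        """Classify a two-stage seamless adaptive design by its number of differences."""
        flags = (objective_same, endpoint_same, population_same)
        code = "".join("S" if same else "D" for same in flags)
        k = code.count("D")  # number of dimensions that differ between the two stages
        return f"{k}-D", code

    # Example: different objectives and endpoints, same target population -> a "2-D" (DDS) design
    print(kd_design_type(False, False, True))  # ('2-D', 'DDS')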

Table 3: Types of Two-Stage Seamless Adaptive Designs (Depending upon the Number of Differences in Objective, Endpoint, and Target Population). Depending on the number of differences in study objective, endpoint, and target population, Table 2 can be summarized as the following table.

Two-Stage Seamless Design:

The “0-D” design: SSS
The “1-D” design: DSS, SDS, SSD
The “2-D” design: DDS, DSD, SDD
The “3-D” design: DDD

Note: S = Same, D = Different.

Analysis of Seamless Adaptive Trial Design

Analysis for Seamless Design with Different Objectives

In this section, we will focus on statistical inference for the scenario where the study objectives at different stages are different (e.g., dose selection versus efficacy confirmation) and the study endpoints at different stages are different (e.g., biomarker or surrogate endpoint versus regular clinical study endpoint). As indicated earlier, one of the major concerns when applying adaptive design methods in clinical trials is how to control the overall type I error rate at a pre-specified level of significance. It is also a concern how the data collected from both stages should be combined for the final analysis. In addition, it is of interest to know how the sample size calculation/allocation should be done for achieving the individual study objectives originally set for the two stages (separate studies). In this article, a multiple-stage transitional seamless trial design with different study objectives and different study endpoints, with and without adaptations, is proposed. The impact of the adaptive design methods on the control of the overall type I error rate under the proposed trial design is examined. A valid statistical test and the corresponding formulas for sample size calculation/allocation are derived under the proposed trial design. As indicated earlier, a two-stage seamless trial design that combines two independent studies (e.g., a phase 2 study and a phase 3 study) is often considered in clinical research and development. Under such a trial design, the investigator may be interested in having one planned interim analysis at each stage. In this case, the two-stage seamless trial design becomes a 4-stage trial design if we consider the time point at which each planned interim analysis will be conducted as the end of the specific stage. In this article, we will refer to such a trial design as a multiple-stage transitional seamless design to emphasize the importance of smooth transition from stage to stage. In what follows, we will focus on the proposed multiple-stage transitional seamless design with (adaptive version) and without (non-adaptive version) adaptations.

Consider a clinical trial comparing k treatment groups, E1, …, Ek, with a control group C. One early surrogate endpoint and one subsequent primary endpoint are potentially available for assessing the treatment effect. Let θi and ψi be the treatment effects comparing Ei with C measured by the surrogate endpoint and the primary endpoint, respectively. The ultimate hypothesis of interest is

H0,2: ψ1 = ψ2 = … = ψk = 0 (1)

which is formulated in terms of the primary endpoint. However, along the way, the hypothesis

H0,1: θ1 = θ2 = … = θk = 0 (2)

in terms of the short-term surrogate endpoint will also be assessed. Cheng [1,3] assumed that each ψi is a monotone increasing function of the corresponding θi. The trial is conducted as a group sequential trial with the accrued data analyzed at four stages (i.e., Stage 1, Stage 2a, Stage 2b, and Stage 3) with four interim analyses, which are briefly described below. The timeline of the trial is depicted in Figure 1. For simplicity, consider the case where the variances of the surrogate endpoint and the primary endpoint, denoted as σ² and τ², respectively, are known.


Figure 1: Timeline of a Seamless Trial of Different Objectives and Different Endpoints with 4 Interim Analyses.

At Stage 1 of the study, (k + 1)n1 subjects will be randomized equally to receive either one of the k treatments or the control. As a result, there are n1 subjects in each group. At the first interim analysis, the most promising treatment will be selected and used in the subsequent stages based on the surrogate endpoint. Let T1,i, i = 1, …, k, be the pairwise test statistics based on the surrogate endpoint, and let S denote the index of the treatment with the largest T1,i. If T1,S ≤ c1.1 for some prespecified c1.1, then the trial is stopped and H0,1 is accepted. Otherwise, if T1,S > c1.1, then the treatment ES is recommended as the most promising treatment and will be used in all the subsequent stages. Note that only the subjects receiving either the promising treatment or the control will be followed formally for the primary endpoint. The treatment assessment on all other subjects will be terminated; those subjects will receive standard care and undergo necessary safety monitoring.

At Stage 2a, 2n2 additional subjects will be equally randomized to receive either the treatment ES or the control C. The second interim analysis is scheduled when the short-term surrogate measures from these 2n2 Stage 2a subjects and the primary endpoint measures from those 2n1 Stage 1 subjects who received either the treatment ES or the control C become available. Let T1.1 and T1.2 be the pairwise test statistics from Stage 1 based on the surrogate endpoint and the primary endpoint, respectively, and let T2.1 be the test statistic that combines the surrogate data from Stage 1 and Stage 2a. If

T2.1 ≤ C2.1,

then stop the trial and accept H0,1. If T2.1 > C2.1 and T1.2 > C1.2, then stop the trial and reject both H0,1 and H0,2. Otherwise, if T2.1 > C2.1 but T1.2 ≤ C1.2, then we move on to Stage 2b.

At Stage 2b, no additional subjects will be recruited. The third interim analysis will be performed when the subjects in Stage 2a complete their primary endpoints. Let T2.2 be the test statistic that combines the pairwise primary endpoint statistics available at Stage 2b (i.e., from the Stage 1 and Stage 2a subjects). If T2.2 > C2.2, then stop the trial and reject H0,2. Otherwise, we move on to Stage 3.

At Stage 3, the final stage, 2n3 additional subjects will be recruited and followed until their primary endpoints are observed. For the fourth interim analysis, define T3 as the test statistic that combines the pairwise primary endpoint statistic from Stage 3 with the data from the earlier stages. If T3 > C3, then stop the trial and reject H0,2; otherwise, accept H0,2. The parameters in the above design, n1, n2, n3, c1.1, c1.2, c2.1, c2.2, and c3, are determined such that the procedure has a controlled type I error rate of α and a target power of 1 − β. The determination of these parameters is discussed in the next section.
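The decision flow described above can be summarized in the following Python sketch. It only encodes the stopping/continuation logic; the test statistics and critical values are assumed to be supplied externally (e.g., computed from the accrued surrogate and primary endpoint data and calibrated to control the overall type I error rate), and the argument names and placeholder values are illustrative.

    def transitional_design_decision(t1_max, t12, t21, t22, t3, c):
        """Walk through the Stage 1 / 2a / 2b / 3 decision rules described above.

        c is a dict of critical values with keys 'c11', 'c12', 'c21', 'c22', 'c3'.
        """
        # Stage 1: treatment selection on the surrogate endpoint
        if t1_max <= c["c11"]:
            return "stop at Stage 1: accept H0,1"
        # Stage 2a: combined surrogate statistic plus the Stage 1 primary endpoint statistic
        if t21 <= c["c21"]:
            return "stop at Stage 2a: accept H0,1"
        if t12 > c["c12"]:
            return "stop at Stage 2a: reject H0,1 and H0,2"
        # Stage 2b: combined primary endpoint statistic
        if t22 > c["c22"]:
            return "stop at Stage 2b: reject H0,2"
        # Stage 3: final analysis on the primary endpoint
        return "reject H0,2" if t3 > c["c3"] else "accept H0,2"

    crit = {"c11": 1.0, "c12": 2.4, "c21": 1.2, "c22": 2.2, "c3": 2.0}  # placeholder critical values
    print(transitional_design_decision(1.8, 1.5, 1.4, 1.9, 2.3, crit))  # 'reject H0,2'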

Analysis for Seamless Design with Different Endpoints

For illustration purposes, consider a two-stage phase 2/3 seamless adaptive trial design with different (continuous) study endpoints. Let xi be the observation of one study endpoint (e.g., a biomarker) from the ith subject in phase 2, i = 1, …, n, and yj be the observation of another study endpoint (the primary clinical endpoint) from the jth subject in phase 3, j = 1, …, m. Assume that the xi's are independently and identically distributed with E(xi) = ν and Var(xi) = σ², and that the yj's are independently and identically distributed with E(yj) = μ and Var(yj) = τ². Chow, Lu, and Tse (2007) [10] proposed using the established functional relationship to obtain predicted values of the clinical endpoint based on data collected from the biomarker (or surrogate endpoint). These predicted values can then be combined with the data collected at the confirmatory phase to develop a valid statistical inference for the treatment effect under study. Suppose that x and y can be related through the straight-line relationship

y = β0 + β1x + ε (3)

where ε is an error term with zero mean and variance ς². Furthermore, ε is independent of x. In practice, we assume that this relationship is well explored and that the parameters β0 and β1 are known. Based on (3), the observations xi observed in the learning phase would be translated to β0 + β1xi (denoted by ŷi) and combined with the observations yj collected in the confirmatory phase. Therefore, the ŷi's and yj's are combined for the estimation of the treatment mean μ. Consider the following weighted-mean estimator,

μ̂ = ωȳ1 + (1 − ω)ȳ2 (4)

where ȳ1 = (1/n)Σi ŷi and ȳ2 = (1/m)Σj yj are the sample means of the predicted and observed clinical endpoint values, respectively, and 0 ≤ ω ≤ 1. It should be noted that μ̂ is the minimum variance unbiased estimator among all weighted-mean estimators when the weight is given by

ω = nτ² / (nτ² + mβ1²σ²) (5)

if β1, σ², and τ² are known. In practice, σ² and τ² are usually unknown and ω is commonly estimated by

ω̂ = nS2² / (nS2² + mS1²) (6)

where S1² and S2² are the sample variances of the ŷi's and the yj's, respectively. The corresponding estimator of μ, which is denoted by

μ̂GD = (nȳ1/S1² + mȳ2/S2²) / (n/S1² + m/S2²) (7)

is referred to as the Graybill-Deal (GD) estimator of μ. The GD estimator is also known as the weighted mean in metrology. An approximate unbiased estimator of the variance of the GD estimator, which has a bias of order O(n−2 + m−2), is given as

V̂(μ̂GD) = [1 / (n/S1² + m/S2²)] [1 + 4ω̂(1 − ω̂)(1/(n − 1) + 1/(m − 1))].
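As a numerical illustration of equations (4) through (7), the following Python sketch computes the Graybill-Deal estimator from simulated phase 2 predicted values and phase 3 observations; the regression coefficients, sample sizes, and the Meier-type variance approximation used here are assumptions made for the example, not values from the paper.

    import numpy as np

    def graybill_deal(y_pred, y_obs):
        """Graybill-Deal estimator combining predicted (phase 2) and observed (phase 3) endpoints."""
        n, m = len(y_pred), len(y_obs)
        s1, s2 = np.var(y_pred, ddof=1), np.var(y_obs, ddof=1)   # S1^2 and S2^2
        w1, w2 = n / s1, m / s2                                  # inverse estimated variances of the two means
        mu_gd = (w1 * np.mean(y_pred) + w2 * np.mean(y_obs)) / (w1 + w2)
        w_hat = w1 / (w1 + w2)                                   # estimated weight, as in (6)
        # Approximate variance with bias of order O(n^-2 + m^-2) (assumed Meier-type form)
        var_gd = (1.0 / (w1 + w2)) * (1.0 + 4.0 * w_hat * (1.0 - w_hat) * (1.0 / (n - 1) + 1.0 / (m - 1)))
        return mu_gd, var_gd

    rng = np.random.default_rng(0)
    beta0, beta1 = 0.5, 1.2                                 # assumed known regression parameters from (3)
    x = rng.normal(1.0, 1.0, size=60)                       # phase 2 biomarker observations
    y_pred = beta0 + beta1 * x                              # predicted clinical endpoint values
    y_obs = rng.normal(beta0 + beta1 * 1.0, 1.5, size=90)   # phase 3 clinical endpoint observations
    print(graybill_deal(y_pred, y_obs))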

For the comparison of the two treatments, the following hypotheses are considered:

H0: μ1 = μ2 versus Ha: μ1 ≠ μ2 (8)

Let ŷij be the predicted value β0 + β1xij, which is used as the prediction of y for the jth subject under the ith treatment in phase 2. From (7), the Graybill-Deal estimator of μi is given as

μ̂GD,i = (niȳ1i/S1i² + miȳ2i/S2i²) / (ni/S1i² + mi/S2i²) (9)

where ȳ1i = (1/ni)Σj ŷij and ȳ2i = (1/mi)Σj yij, with S1i² and S2i² being the sample variances of the ŷij's and the yij's under the ith treatment, respectively. For hypotheses (8), consider the following test statistic,

T̂1 = (μ̂GD,1 − μ̂GD,2) / √(V̂(μ̂GD,1) + V̂(μ̂GD,2)) (10)

where V̂(μ̂GD,i) is an estimator of Var(μ̂GD,i), i = 1, 2. Using arguments similar to those in Section 2.1, it can be verified that T̂1 has a limiting standard normal distribution under the null hypothesis H0, provided that V̂(μ̂GD,i) is a consistent estimator of Var(μ̂GD,i), i = 1, 2.

Consequently, an approximate 100(1 − α)% confidence interval of μ1 − μ2 is given as

(μ̂GD,1 − μ̂GD,2 − zα/2·v̂T, μ̂GD,1 − μ̂GD,2 + zα/2·v̂T) (11)

where v̂T = √(V̂(μ̂GD,1) + V̂(μ̂GD,2)). Therefore, the hypothesis H0 is rejected if the confidence interval (11) does not contain 0. Thus, under the local alternative hypothesis that μ1 − μ2 = δ ≠ 0, the required sample size to achieve a power of 1 − β satisfies

zα/2 + zβ = |δ| / √(Var(μ̂GD,1) + Var(μ̂GD,2))

Let mi = ρni, i = 1, 2, and n2 = γn1. Then the total sample size for the two treatment groups, denoted by NT, is (1 + ρ)(1 + γ)n1 with n1 given as

n1 = (zα/2 + zβ)²(1 + 1/γ)β1²σ²τ² / [δ²(τ² + ρβ1²σ²)] (12)

where zα/2 and zβ are the upper α/2 and β quantiles of the standard normal distribution, and the biomarker and clinical endpoint variances σ² and τ² are assumed to be common across the two treatment groups.
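Under the assumptions stated above (common variances across the two treatment groups, mi = ρni, and n2 = γn1), the equality-testing sample size can be evaluated with a few lines of Python; all numerical inputs below are hypothetical.

    from math import ceil
    from statistics import NormalDist

    def n1_equality(delta, beta1, sigma2, tau2, rho, gamma, alpha=0.05, power=0.80):
        """Smallest n1 satisfying the equality-testing power requirement sketched above."""
        z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
        z_b = NormalDist().inv_cdf(power)
        # n_i * Var(mu_GD,i) is approximated by beta1^2*sigma^2*tau^2 / (tau^2 + rho*beta1^2*sigma^2)
        unit_var = (beta1**2 * sigma2 * tau2) / (tau2 + rho * beta1**2 * sigma2)
        return ceil((z_a + z_b) ** 2 * unit_var * (1.0 + 1.0 / gamma) / delta**2)

    # Hypothetical inputs: delta = 0.5, beta1 = 1.2, sigma^2 = 1.0, tau^2 = 2.25, rho = 1.5, gamma = 1.0
    print(n1_equality(delta=0.5, beta1=1.2, sigma2=1.0, tau2=2.25, rho=1.5, gamma=1.0))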

For the case of testing for superiority, consider the local alternative hypothesis that

μ1 − μ2 = δ1 > δ,

where δ > 0 is the superiority margin. The required sample size to achieve a power of 1 − β satisfies

zα + zβ = (δ1 − δ) / √(Var(μ̂GD,1) + Var(μ̂GD,2))

Using the notation in the above paragraph, the total sample size for the two treatment groups is (1 + ρ)(1 + γ)n1 with n1 given as

n1 = (zα + zβ)²(1 + 1/γ)β1²σ²τ² / [(δ1 − δ)²(τ² + ρβ1²σ²)] (13)

For the case of testing for equivalence with a significance level α, consider the local alternative hypothesis that

|μ1 − μ2| = δ1 < δ,

where δ > 0 is the equivalence margin. The required sample size to achieve a power of 1 − β satisfies

zα + zβ/2 = (δ − |δ1|) / √(Var(μ̂GD,1) + Var(μ̂GD,2))

Thus, the total sample size for the two treatment groups is (1 + ρ)(1 + γ)n1 with n1 given as

n1 = (zα + zβ/2)²(1 + 1/γ)β1²σ²τ² / [(δ − |δ1|)²(τ² + ρβ1²σ²)] (14)

Note that, following similar ideas as described above, statistical tests and formulas for sample size calculation for testing hypotheses of equality, non-inferiority, superiority, and equivalence can also be obtained for binary response and time-to-event endpoints.

Analysis of Seamless Adaptive Design with Different Target Patient Population

In clinical research, it is often of interest to generalize clinical results obtained from a given target patient population (or a medical center) to a similar but different patient population (or another medical center). Denote the original target patient population by (μ0, σ0), where μ0 and σ0 are the population mean and population standard deviation, respectively. Similarly, denote the similar but different patient population by (μ1, σ1). Since the two populations are similar but different, it is reasonable to assume that μ1 = μ0 + ε and σ1 = Cσ0 (C > 0), where ε is referred to as the shift in the location parameter (population mean) and C is the inflation factor of the scale parameter (population standard deviation). Thus, the (treatment) effect size adjusted for the standard deviation of population (μ1, σ1) can be expressed as follows:

E1 = μ1/σ1 = (μ0 + ε)/(Cσ0) = Δ·E0 (15)

where Δ = (1 + ε/μ0)/C, and E0 and E1 are the effect sizes (of clinically meaningful importance) of the original target patient population and the similar but different patient population, respectively. Δ is referred to as a sensitivity index measuring the change in effect size between patient populations [6].

As can be seen from (15), if ε = 0 and C = 1, then E1 = E0. That is, the effect sizes of the two populations are identical. In this case, we claim that the results observed from the original target patient population (e.g., adults) can be generalized to the similar but different patient population (e.g., pediatrics or elderly). Applying the concept of bioequivalence assessment, we can claim that the effect sizes of the two patient populations are equivalent if the confidence interval of |Δ| is within (80%, 120%) of E0. It should be noted that there is a masking effect between the location shift (ε) and the scale change (C). In other words, a shift in the location parameter could be offset by inflation or deflation of the variability. As a result, the sensitivity index may remain unchanged while the target patient population has been shifted.

As indicated by [7], in many clinical trials the effect sizes of the two populations could be linked by baseline demographics or patient characteristics if there is a relationship between the effect sizes and the baseline demographics and/or patient characteristics (e.g., a covariate vector). In practice, however, such covariates may not exist, or may exist but not be observable. In this case, the sensitivity index may be assessed by simply replacing ε and C with their corresponding estimates [7]. Intuitively, ε and C can be estimated by

ε̂ = μ̂1 − μ̂0 and Ĉ = σ̂1/σ̂0,

where (μ̂0, σ̂0) and (μ̂1, σ̂1) are some estimates of (μ0, σ0) and (μ1, σ1), respectively. Thus, the sensitivity index can be estimated by

Δ̂ = (1 + ε̂/μ̂0)/Ĉ.

In practice, the shift in location parameter (ε) and/or the change in scale parameter (C) could be random. Chang [8] studied possible shift in target patient population. If both ε and C are fixed, the sensitivity index can be assessed based on the sample means and sample variances obtained from the two populations. In real world problems, however, ε and C could be either fixed or random variables. In other words, there are three possible scenarios: (1) the case where ε is random and C is fixed, (2) the case where ε is fixed and C is random, and (3) the case where both ε and C are random.
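A minimal Python sketch of the estimated sensitivity index, following (15) and the plug-in estimates above, is given below; the summary statistics in the example are made up for illustration.

    def sensitivity_index(mu0_hat, sd0_hat, mu1_hat, sd1_hat):
        """Plug-in estimates of the location shift, scale inflation factor, and sensitivity index."""
        eps_hat = mu1_hat - mu0_hat          # estimated shift in the location parameter
        c_hat = sd1_hat / sd0_hat            # estimated inflation factor of the scale parameter
        delta_hat = (1.0 + eps_hat / mu0_hat) / c_hat
        return eps_hat, c_hat, delta_hat

    # Hypothetical example: original population mean 10, SD 4; shifted population mean 11, SD 5
    eps, c, delta = sensitivity_index(10.0, 4.0, 11.0, 5.0)
    print(eps, c, round(delta, 3))           # shift = 1.0, C = 1.25, Delta = 0.88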

Analysis of k-D seamless adaptive design

When there are differences in study objective, endpoint, and/or target patient population in seamless adaptive designs, some primary assumptions and/or statistical considerations must be applied to derive valid statistical methods for the analysis of data collected from a given seamless adaptive design. These assumptions and/or considerations are described below.

Primary Assumption and/or Considerations

The “0-D Design” (SSS Design). As indicated in Table 2, the SSS design is a two-stage seamless adaptive design with the same study objective and the same study endpoint at different stages, which is similar to a typical group sequential design with a planned interim analysis. Thus, standard statistical methods such as MIP (method of individual p-values), MSP (method of sum of p-values), and MPP (method of product of p-values) for group sequential designs can be directly applied [1,9]. It should be noted that if additional adaptations, such as a change in the primary study endpoint or hypotheses after the review of interim data, are applied, the standard methods have to be modified to control the overall type I error rate.
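As an illustration of a two-stage p-value combination rule in the spirit of MSP, the following Python sketch applies hypothetical stopping boundaries and checks the empirical type I error by simulating independent uniform p-values under the null; the boundary values are placeholders and would in practice be chosen so that the overall type I error rate equals the nominal α.

    import random

    def msp_two_stage(p1, p2, alpha1, beta1, alpha2):
        """Two-stage decision rule based on the sum of stage-wise p-values (MSP-style sketch)."""
        if p1 <= alpha1:
            return True                      # stop at stage 1 and reject H0 (efficacy)
        if p1 > beta1:
            return False                     # stop at stage 1 and accept H0 (futility)
        return (p1 + p2) <= alpha2           # stage 2: reject H0 if the sum of p-values is small

    random.seed(1)
    a1, b1, a2 = 0.01, 0.25, 0.22            # hypothetical boundaries
    rejections = sum(msp_two_stage(random.random(), random.random(), a1, b1, a2) for _ in range(200_000))
    print(rejections / 200_000)              # empirical type I error under H0 for these boundaries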

The “1-D Design” (DSS, SDS, or SSD Design). A “1-D” design could be an SD design (same objectives, different endpoints) or a DS design (different objectives, same endpoints), and the statistical analyses for the two are different. To have a valid statistical analysis, some assumptions are necessary. For example, for an SD design (i.e., the study objectives at different stages are the same but the study endpoints are different), it is assumed that the study endpoint (e.g., a biomarker, a surrogate endpoint, or a clinical endpoint with a short duration) at the first stage is predictive of the study endpoint (i.e., the regular clinical endpoint) at the second stage [10]. On the other hand, for a DS design (i.e., the study objectives at different stages are different but the study endpoints are the same), we have to consider testing two sets of hypotheses at different stages [3].

The “2-D Design” (DDS, DSD, or SDD Design). For a “2-D” design in which both the study objectives and the study endpoints at different stages are different, the following primary assumption and consideration are necessary for obtaining a valid statistical test using different endpoints to achieve the study objectives at different stages: (i) the study endpoint at the first stage is predictive of the study endpoint at the second stage, and (ii) two sets of hypotheses are tested at different stages.

Chow and Lin (2015) illustrated the statistical analysis for a DD design using an example concerning a clinical trial for evaluation of the safety, tolerability, and efficacy of a test treatment for patients with hepatitis C virus (HCV) infection. In the HCV study, a two-stage seamless adaptive design was considered. The trial design was to combine two independent studies (one phase 2b study for treatment selection and one phase 3 study for efficacy confirmation) into a single study. Thus, the study objectives at different stages are similar but different. For the study endpoint, the well-established clinical endpoint is the sustained virologic response (SVR) at week 72 (i.e., 48 weeks of treatment plus 24 weeks of follow-up). Since the PI or sponsor was interested in making an early decision for treatment selection at Stage 1, the clinical endpoint of early virologic response (EVR) at week 12 was considered as a surrogate endpoint for treatment selection at Stage 1. Thus, the study endpoints at different stages are different. A statistical test was then derived based on the primary assumption and consideration for addressing the study objectives at different stages [3].

The “3-D Design” (DDD Design). For the “3-D” design (i.e., the study objectives, study endpoints, and target patient populations at different stages are all different), the following primary assumption and considerations are necessary for obtaining a valid statistical test using different endpoints to achieve the study objectives at different stages: (i) the study endpoint at the first stage is predictive of the study endpoint at the second stage, (ii) two sets of hypotheses are tested at different stages, and (iii) the assessment of the sensitivity index indicates that there is no significant shift in the target patient population from stage to stage.

Examples

Hepatitis C Virus (HCV) Study

A pharmaceutical company was interested in conducting a clinical trial for evaluation of the safety, tolerability, and efficacy of a test treatment for patients with hepatitis C virus infection. For this purpose, after consulting with regulatory reviewers, it was decided that a two-stage seamless adaptive design would be used for the intended study. The proposed trial design was to combine two independent studies (one phase 2b study for treatment selection and one phase 3 study for efficacy confirmation) into a single study. Thus, the study consists of two stages: treatment selection (Stage 1) and efficacy confirmation (Stage 2). The study objective at the first stage was treatment selection, while the study objective at Stage 2 was to establish the non-inferiority of the treatment selected from the first stage as compared to a treatment of standard of care (SOC). Thus, the proposed trial design is a typical “2-D” design, i.e., a two-stage adaptive design with different study objectives and different study endpoints at different stages but the same target patient population.

Figure 2 shows the timeline of the “2-D” HCV study. For genotype 1 HCV patients, the treatment duration is usually 48 weeks of treatment followed by a 24-week follow-up. The clinical endpoint is the sustained virologic response (SVR) at week 72. The SVR is defined as an undetectable HCV RNA level (< 10 IU/mL) at week 72. Thus, it takes a long time to observe a response. The pharmaceutical company was interested in considering the same type of clinical endpoint with a much shorter duration to make an early decision for the selection among the four active treatments under study at Stage 1. As a result, the clinical endpoint of early virologic response (EVR) at week 12 was considered as a surrogate endpoint for treatment selection at Stage 1. The resultant “2-D” seamless adaptive design is briefly outlined below (see also Chow and Lin, 2015) [3]:


Figure 2: Timeline of the “2-D” HCV study

Stage 1. At this stage, the design begins with five arms (four active treatment arms and one control arm). Qualified subjects were randomly assigned to receive one of the five treatment arms at a 1:1:1:1:1 ratio. After all Stage 1 subjects had completed Week 12 of the study, an interim analysis was performed based on EVR at week 12 for treatment selection. Treatment selection was made under the assumption that the 12-week EVR is predictive of the 72-week SVR. Under this assumption, the most promising treatment arm was selected using a precision analysis under pre-specified selection criteria, i.e., the treatment arm with the highest confidence level for achieving statistical significance (that the observed difference from the control is not due to chance alone) was selected. Stage 1 subjects who had not yet completed the study protocol continued with their assigned therapies for the remainder of the planned 48 weeks, with final follow-up at Week 72. The selected treatment arm then proceeded to Stage 2.
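One simple way to operationalize the Stage 1 selection rule is sketched below: the active arm with the largest pairwise z statistic for the week-12 EVR rate versus control is carried forward. This is only a stand-in for the precision analysis described above; the response counts and sample size are hypothetical.

    import numpy as np

    def select_most_promising(responders, n_per_arm):
        """responders[0] is the control arm; return the 1-based index of the selected active arm."""
        p = np.asarray(responders, dtype=float) / n_per_arm
        p0 = p[0]
        se = np.sqrt(p[1:] * (1 - p[1:]) / n_per_arm + p0 * (1 - p0) / n_per_arm)
        z = (p[1:] - p0) / se                # pairwise z statistics for EVR rate versus control
        return int(np.argmax(z)) + 1, z

    # Hypothetical week-12 EVR counts: control plus four active arms, 60 subjects per arm
    arm, z_stats = select_most_promising([18, 25, 31, 28, 36], n_per_arm=60)
    print(arm, np.round(z_stats, 2))         # the arm with the largest z statistic is selected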

Stage 2. At Stage 2, the treatment arm selected at Stage 1 was tested for non-inferiority against the control (SOC). A separate cohort of subjects was randomized to receive either the selected treatment from Stage 1 or the control (SOC) at a 1:1 ratio. A second interim analysis was performed when all Stage 2 subjects had completed Week 12 and 50% of the subjects (Stage 1 and Stage 2 combined) had completed 48 weeks of treatment and 24 weeks of follow-up. The purpose of this interim analysis was two-fold. First, it was to validate the assumption that EVR at week 12 is predictive of SVR at week 72. Second, it was to perform a sample size re-estimation to determine whether the trial would achieve the study objective (establishing non-inferiority) with the desired power if the observed treatment effect were preserved until the end of the study. Statistical tests as described in the previous section were used to test the non-inferiority hypotheses at the interim analyses and at the end-of-stage analyses. For the two planned interim analyses, the incidence of EVR at week 12, as well as safety data, was reviewed by an independent data safety monitoring committee (iDMC). The commonly used O'Brien-Fleming type of conservative boundaries was applied for controlling the overall type I error rate at 5%. Adaptations such as stopping the trial early, discontinuing selected treatment arms, and re-estimating the sample size based on the pre-specified criteria were applied as recommended by the iDMC.
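The O'Brien-Fleming-type boundaries mentioned above can be checked numerically. The short Monte Carlo sketch below calibrates boundaries of the O'Brien-Fleming shape (c·√2 at the interim look, c at the final look) for two equally spaced analyses so that the overall two-sided type I error is 5%; the information fractions and simulation settings are assumptions made for illustration.

    import numpy as np

    def obf_two_look_boundaries(alpha=0.05, n_sim=400_000, seed=7):
        """Monte Carlo calibration of O'Brien-Fleming-shaped boundaries for two equally spaced looks."""
        rng = np.random.default_rng(seed)
        x1 = rng.standard_normal(n_sim)
        x2 = rng.standard_normal(n_sim)
        z1 = x1                               # interim statistic at information fraction 1/2
        z2 = (x1 + x2) / np.sqrt(2.0)         # final statistic; corr(z1, z2) = sqrt(1/2)
        lo, hi = 1.5, 3.0
        for _ in range(40):                   # bisection on the final-look critical value c
            c = 0.5 * (lo + hi)
            reject = (np.abs(z1) > c * np.sqrt(2.0)) | (np.abs(z2) > c)
            if reject.mean() > alpha:
                lo = c
            else:
                hi = c
        return c * np.sqrt(2.0), c

    print(obf_two_look_boundaries())          # roughly (2.80, 1.98), the familiar O'Brien-Fleming values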

Non-Alcoholic SteatoHepatitis (NASH) Clinical Trials

For the development of drug products for treating patients with NASH, after having consulted with the regulatory agency, it is suggested that the following clinical trials utilizing seamless adaptive designs may be useful to shorten and speed up the process of NASH drug product development: (i) a proof-of-concept/dose-ranging adaptive trial design, (ii) a phase 3/4 adaptive trial design, and (iii) a phase 2/3/4 adaptive design [11].

Table 4 illustrates the objectives, endpoints, and target patient populations in NASH clinical trials. For illustration purposes, consider a single seamless phase 2/3/4 adaptive trial design that allows adaptations, continuous exposure, and long-term follow-up (Figure 3). Endpoints at the interim analysis are (i) a reduction of at least 2 points in NAS, (ii) resolution of NASH by histology without worsening of fibrosis, and/or (iii) improvement in fibrosis without worsening of NASH [12-15]. One (the most promising) or two doses may continue to the next phase. A post-marketing phase 4 with demonstration of improvement in clinical outcomes will lead to final marketing authorization.

Because only one trial would lead to approval, a very small overall alpha (i.e., < 0.001) is recommended to ensure proper control of the type I error rate.

Although the above seamless phase 2/3/4 design appears to be reasonable, regulatory agencies such as the FDA [16-18] emphasize that such designs must be supported by a sound rationale and be scientifically justifiable with respect to integrity, quality, and validity. The protocol should address the following typical issues:

(i) Provide detailed information regarding how the overall type I error rate is controlled or preserved;

(ii) Provide a detailed strategy or plan for preventing possible operational biases that may incur before and after the adaptations are applied;

(iii) Provide justification regarding the validity of statistical methods used for a combined analysis;

(iv) Provide justification for the chosen alpha spending function (e.g., O’Brien-Fleming) for stopping boundaries;

(v) Provide justification regarding criteria used for critical decision-making at interims;

(vi) Establish an independent data safety monitoring committee (IDMC) and provide IDMC charter;

(vii) Provide justification for power analysis for sample size calculation and sample size allocation especially where the study objectives, endpoints, and populations are different at different stages;

(viii) Provide justification for whether sample size re-estimation is performed in a blinded or unblinded fashion in the seamless adaptive trial design.

Table 4: Objectives, Endpoints and Target Patient Populations in NASH Clinical Trials.

Trials to support a marketing application

Primary endpoint:
• Composite endpoint: complete resolution of steatohepatitis and no worsening of fibrosis
• Composite endpoint: at least a one-point improvement in fibrosis with no worsening of steatohepatitis (no increase in steatosis, ballooning, or inflammation)
• Clinical outcome trial underway by the time of submission, with outcomes including: histopathologic progression to cirrhosis; MELD score change by >2 points or MELD increase to >15 in a population enrolled with MELD ≤ 13; death; transplant; decompensation events (hepatic encephalopathy, West Haven grade ≥ 2; variceal bleeding requiring hospitalization; ascites requiring intervention; spontaneous bacterial peritonitis)

Target patient population: biopsy-confirmed NASH patients with moderate/advanced fibrosis (F2/F3)

Dose ranging/Phase 2

Primary endpoint: improvement in activity (NAS)/ballooning/inflammation without worsening of fibrosis can be acceptable; include a subpopulation with moderate/advanced fibrosis (F2/F3) to inform Phase III

Target patient population: biopsy-proven NASH (NAS ≥ 4); include patients with NASH and liver fibrosis of any stage; include patients with NASH and fibrosis stage ≥ 2 to inform Phase III

Early phase trials/Proof of concept

Primary endpoint: endpoints should be based on the mechanism of the drug; consider using improvement in NAS (ballooning and inflammation) and/or fibrosis, or reduction in liver fat with a sustained improvement in transaminases

Target patient population: ideally patients with biopsy-proven NASH, but patients at high risk for NASH (fatty liver plus type 2 diabetes, the metabolic syndrome, and high transaminases) are acceptable


Figure 3: Phase 2/3/4 Seamless Adaptive Design.

The NASH clinical trial design is a typical “3-D” design. The analysis of a “3-D” seamless adaptive trial design requires (i) a primary assumption that the study endpoint at the first stage is predictive of the clinical endpoint at the second stage to account for different study endpoints at different stages, (ii) a consideration of testing two sets of hypotheses to account for different study objectives at different stages, and (iii) a sensitivity analysis to account for a possible shift in target patient population from stage to stage.

Conclusion

In this article, depending upon whether the study objectives, study endpoints, and target patient populations at different stages are different, two-stage seamless adaptive designs are classified into eight different categories, which are grouped into “0-D”, “1-D”, “2-D”, and “3-D” designs. For a given type of two-stage seamless adaptive trial design, the following proposal is made for a valid statistical analysis. First, a primary assumption that the study endpoint at the first stage is predictive of the study endpoint at the second stage is made to account for different study endpoints at different stages. Second, a consideration of testing two sets of hypotheses is suggested to account for different study objectives at different stages. Third, it is suggested that an assessment of a sensitivity index should be performed for a possible shift in the target patient population from stage to stage. Two examples concerning a hepatitis C virus (HCV) infection clinical study (a typical “2-D” design) and a non-alcoholic steatohepatitis (NASH) clinical trial (a typical “3-D” design) are presented to illustrate the proposed methods. From a regulatory perspective, the innovative seamless adaptive trial designs discussed in this article can not only offer great flexibility for identifying any signal, trend, or optimal benefit of the test treatment under investigation, but also improve relative efficiency (e.g., shorten the development process). However, these benefits come with risks to the control of the overall type I error rate and/or to the validity and integrity of the intended clinical trials. From a statistical perspective, on the other hand, statistical methods are not fully established for most innovative seamless adaptive trial designs. Although clinical trial simulation may provide a solution, it is not “the” solution, because the model used for simulation is difficult, if not impossible, to verify. A wrong model could lead to a biased conclusion and hence may be misleading. Complex seamless adaptive trial designs should therefore not be misused or abused in clinical research and development. From a clinical perspective, it is suggested that an “investigator’s wish list” approach should be considered when applying complex innovative designs in clinical research. In other words, the clinician should always be in the driver’s seat, and the biostatistician should develop statistical tests with optimal statistical properties to accommodate the investigator’s wish list without undermining the validity and integrity of the intended trial.

References

  1. Chow SC, Chang M (2011) Adaptive Design Methods in Clinical Trials. 2nd edition, Chapman and Hall/CRC Press, Taylor & Francis, New York, New York.
  2. EMA (2014) Pilot project on adaptive licensing. European Medicines Agency, London, UK.
  3. Chow SC, Lin M (2015) Analysis of two-stage adaptive seamless trial design. Pharmaceutica Analytica Acta 6.
  4. Chow SC, Corey R (2011) Benefits, Challenges and obstacles of adaptive designs in clinical trials. The Orphanet Journal of Rare Diseases 6. [crossref]
  5. Chow SC (2020) Innovative Methods for Rare Disease Drug Development. Chapman and Hall/CRC Press, Taylor & Francis, New York.
  6. Shao J, Chow SC (2002) Reproducibility probability in clinical trials. Statistics in Medicine, 21: 1727-1742.
  7. Chow SC, Shao J (2005) Inference for clinical trials with some protocol amendments. Journal of Biopharmaceutical Statistics 15: 659-666. [crossref]
  8. Lu Y, Kong YY, Chow SC (2017) Analysis of sensitivity index for assessing generalizability in clinical research. Jacobs Journal of Biostatistics 2.
  9. Chang M (2007) Adaptive design method based on sum of p-values. Statistics in Medicine 26: 2772-2784. [crossref]
  10. Chow SC, Lu Q, Tse SK (2007) Statistical analysis for two-stage adaptive design with different study endpoints. Journal of Biopharmaceutical Statistics 17: 1163-1176. [crossref]
  11. Filozof C, Chow SC, Dimick-Santos L, Chen YF, Williams RN, et al. (2017) Clinical endpoints and adaptive clinical trials in precirrhotic nonalcoholic steatohepatitis: facilitating development approaches for an emerging epidemic. Hepatology Communications 1: 577-585. [crossref]
  12. Argo CK, Northup PG, Al-Osaimi AM, Caldwell SH (2009) Systematic review of risk factors for fibrosis progression in non-alcoholic steatohepatitis. J Hepatol 51: 371-379. [crossref]
  13. Brunt EM, Kleiner DE, Wilson LA, Belt P, Neuschwander-Tetri BA (2011) NASH Clinical Research Network (CRN) Nonalcoholic fatty liver disease (NAFLD) activity score and the histopathologic diagnosis in NAFLD: distinct clinicopathologic meanings. Hepatology 53: 810-820. [crossref]
  14. Ekstedt M, Franzen LE, Mathiesen UL, Thorelius L, Holmqvist M, et al. (2006) Long-term follow-up of patients with NAFLD and elevated liver enzymes. Hepatology 44: 865-873. [crossref]
  15. Angulo P, Kleiner DE, Dam-Larsen S, Adams LA, Bjornsson ES, et al. (2015) Liver fibrosis, but no other histologic features, is associated with long-term outcomes of patients with nonalcoholic fatty liver disease. Gastroenterology 149: 389-39. [crossref]
  16. FDA (2014) Guidance for Industry – Expedited Programs for Serious Conditions – Drugs and Biologics. The United States Food and Drug Administration, Silver Spring, Maryland.
  17. FDA (2018) Guidance for Industry – Noncirrhotic Nonalcoholic Steatohepatitis With Liver Fibrosis: Developing Drugs for Treatment. The United States Food and Drug Administration, Silver Spring, Maryland.
  18. FDA (2019) Guidance for Industry – Adaptive Designs for Clinical Trials of Drugs and Biologics. The United States Food and Drug Administration, Silver Spring, Maryland, November 2019.