
Graphite and Diamond-Rich Pegmatite as a Small Vein in a Gneiss Drill Core from the Annaberg Region/ Erzgebirge, Germany

DOI: 10.31038/GEMS.2024622

Abstract

Diamond and graphite in a vertical pegmatite veinlet in a gneiss drill core from the Annaberg region/Erzgebirge, Germany, demonstrate a more crustal position of such mantle-derived phases and underline the growing importance of the input of supercritical fluids from mantle depths. Evidence for this statement is a high concentration of nanodiamond-bearing graphite occurring as micrometer- to sub-micrometer-sized crystals in quartz and orthoclase.

Keywords

Pegmatite, Nanodiamond, Graphite, Raman spectroscopy, Supercritical fluids

Introduction

At the end of the 1980s, we studied drill cores, mostly granites from the Annaberg district, for melt inclusions to reconstruct the temperature and pressure of formation of granites with cassiterite-bearing vein/veinlet structures. Most samples are from borehole An 10/85 near the Grundteichschenke north of Schlettau (Buchholz region). Some results are used and cited in Hösel et al. (1992) [1]. The primary data are in Thomas (1988) [2]. Most samples are granite drill cores. The solidus, liquidus, and homogenization temperatures and the water content of melt inclusions were determined on these granite samples in 1988. However, sample T6 from drill core An 10/85 (at 71.0 m depth) is a gneiss core with a 2 cm thick pegmatite veinlet parallel to the drill core axis (i.e., vertical). The field geologist had wrongly interpreted the pegmatite veinlet as a dolomite-fluorite vein. A microscopic study showed that dolomite and fluorite are not present; only quartz, feldspars, muscovite, zircon, and apatite are observable. The sample was not studied at the time because of the complete absence of fluid and melt inclusions.

Methods

For all microscopic and Raman spectrometric studies, we used a petrographic polarization microscope with a rotating stage, coupled with the RamMics R532 Raman spectrometer working in the spectral range of 0-4000 cm-1 with a 50 mW single-mode 532 nm laser. Details are in Thomas et al. 2022a and 2022b [3,4]. For the routine Raman measurements, we used the Olympus long-distance LMPLN100x as a 100x objective. To avoid contamination on the sample surface, we studied only mineral grains deep below the surface. Therefore, we generally used the full laser power of 50 mW on the sample and a long counting time of 100 to 200 seconds, sometimes up to 10 minutes.

Sample

Figure 1 shows the sample used, T6 from drill core An 10/85 (at 71.0 m). Besides the typical pegmatite minerals quartz, feldspars, mica, zircon, zircon-reidite, xenotime-(Y), monazite-(Ce), and apatite, the pegmatite is characterized by an unexpectedly large amount of graphite and nanodiamond grains in quartz (Figure 2) and also in orthoclase. Fluid inclusions (≤ 2 µm) are very rare. The 500 µm thick sample is polished on both sides with an Al2O3-H2O suspension and carefully cleaned. Graphite and diamond are the objects of this short study. To avoid possible contamination by diamonds from drilling and preparation, we used only graphite and diamonds lying deeper than 30 µm below the surface (see Thomas et al. 2023).


Figure 1: View of the drill core sample. a) Side view of the gneiss core with the yellow-brown pegmatite veinlet. X marks the sample position. b) Top view of the sample. The red and black crystals are hematite and pyrrhotite, respectively.


Figure 2: Distribution of the graphite grains in pegmatite quartz about 30 µm deep from the sample surface.

Results

During the microscopic study of quartz and orthoclase from the pegmatite, black crystals and aggregates (≥ 5 µm) are conspicuous. The mean is 1.8 × 10^8 black grains per cm3 in quartz. The distinction between graphite and nanodiamond is only possible with Raman spectroscopy. Sporadic graphite-diamond grains are also present in feldspar. Figure 2 shows the distribution of graphite-like crystals and clusters in quartz.
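A number density such as the 1.8 × 10^8 grains per cm3 quoted above can be estimated by counting grains in a known observation volume (the microscope field of view times the depth interval scanned by focusing). A minimal sketch; the count, field-of-view diameter, and depth below are hypothetical illustration values, not the measured ones:

```python
import math

def grain_density_per_cm3(n_grains, fov_diameter_um, depth_um):
    """Number density (grains/cm^3) from a grain count in a cylindrical
    observation volume: field-of-view area times scanned focal depth."""
    radius_cm = (fov_diameter_um / 2.0) * 1e-4   # 1 um = 1e-4 cm
    depth_cm = depth_um * 1e-4
    volume_cm3 = math.pi * radius_cm ** 2 * depth_cm
    return n_grains / volume_cm3

# Hypothetical: 45 grains counted in a 200 um field of view over 80 um depth
density = grain_density_per_cm3(45, 200.0, 80.0)
print(f"{density:.2e} grains/cm^3")
```

Averaging such counts over many fields of view gives the mean density reported in the text.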

The distribution is not homogeneous. Partially hydrothermally remobilized quartz (lighter) is poor in graphite. However, the 1580 cm-1 Raman graphite band is also present at long counting times in transparent quartz regions. That means the sample has a very high density of tiny to nano-scale graphite particles, invisible at the highest optical magnification. Table 1 gives the Raman data for "invisible" diamonds and graphite in transparent quartz regions, and Figure 3 shows the corresponding Raman spectrum.

Table 1: Raman band of diamond and graphite in clear pegmatite quartz of the sample

Mineral phase    Raman band (cm-1)    FWHM (cm-1)
Quartz           1233.2               17.4
Diamond          1333.1               19.1
Graphite D1      1353.6               33.2
Graphite G       1580.7               28.0
Graphite D2      1615.7               37.1

FWHM: Full Width at Half Maximum


Figure 3: Raman spectrum of clear pegmatite quartz without microscopically visible graphite. Qtz: weak quartz band at 1233.2 cm-1. Recording conditions: 50 mW on sample, exposure time of 10 minutes, 100x objective (see Table 1).

The more macroscopic black dots (often spherical or elliptical) are primarily mixtures of diamond and graphite; Figure 4 shows such crystals. The large xenotime-(Y) crystal in Figure 4a is conspicuous. Its Raman spectra match the xenotime-(Y) of the RRUFF database (ID R050178) at 97% [5]. Besides xenotime-(Y), monazite-(Ce) crystals are also present [RRUFF database ID R040106, match 95%], mostly in larger graphite aggregates. Both REE minerals occur primarily in larger graphite-nanodiamond crystal clusters, demonstrating that these minerals are also related to the fast-rising supercritical fluids. Table 2 shows the obtained Raman data (Gaussian fit) of the graphite and nanodiamond studied in Figures 4 and 5.
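The band positions and FWHM values in Tables 1 and 2 come from Gaussian fits to the measured Raman spectra. A minimal sketch of such a fit with SciPy, using a synthetic single-band spectrum standing in for measured data (the peak parameters here only mimic the graphite G band of Table 1 and are not measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, fwhm):
    """Gaussian peak parameterized directly by its FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amp * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

# Synthetic 'spectrum': one band near the graphite G position plus noise
rng = np.random.default_rng(0)
x = np.linspace(1500.0, 1660.0, 400)
y = gaussian(x, 100.0, 1580.7, 28.0) + rng.normal(0.0, 1.0, x.size)

# Fit and report band position and FWHM, as listed in the tables
popt, _ = curve_fit(gaussian, x, y, p0=[80.0, 1575.0, 20.0])
amp, center, fwhm = popt
print(f"band: {center:.1f} cm-1, FWHM: {fwhm:.1f} cm-1")
```

For real spectra with overlapping D1, G, and D2 bands, a sum of several such Gaussians is fitted in the same way.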

Table 2: Raman data for the graphite-diamond aggregate shown in Figure 4a

Mineral phase     Raman band (cm-1)    FWHM (cm-1)
Diamond tip 1)    1267.8               71.6
Diamond           1322.6               46.9
Diamond (bulk)    1333.1               100.0
Graphite D1       1352.7               56.8
Graphite G        1571.8               81.9
Graphite D2       1615.7               39.3

FWHM: Full Width at Half Maximum. 1) According to Zaitsev (2001), this range is typical for isolated crystallites of diamonds – here, nanodiamonds.


Figure 4: Graphite-nanodiamond aggregates in pegmatite quartz from drill core sample T6 from drilling An 10/85. a) The place marked Xtm in the graphite-nanodiamond aggregate is a xenotime-(Y) crystal. The crystal shown in b) is composed only of graphite and diamond.


Figure 5: Raman spectrum of a graphite-diamond aggregate (Figure 4a) in pegmatite quartz (sample T6). Only the principal data are shown. More information is in Table 2.

Table 3 summarizes the Raman data of nanodiamond and graphite in the pegmatite sample T6.

For diamond in quartz, the Raman band ranges from 1327.3 to 1351.1 cm-1. According to Zaitsev (2001) [6], this range is typical for isolated crystallites of diamonds, here nanodiamonds with grain sizes in the range of several nanometers. Besides the spherical to elliptical graphite aggregates, there are also whisker-like graphite needles (see Figure 6); moissanite whiskers, however, were never observed.

Table 3: Raman data for nanodiamond (n=20) and graphite (n=10) in quartz and orthoclase in the pegmatite of sample T6 (20 and 10 different crystals, respectively).

 

                 Diamond                               Graphite
Mineral          Raman band (cm-1)   FWHM (cm-1)       Raman band (cm-1)   FWHM (cm-1)
Quartz           1339.4 ± 12.1       41.8 ± 12.0       1580.3 ± 4.5 (G)    29.2 ± 4.3
                                                       1615.4 ± 3.8 (D2)   39.2 ± 11.9
Orthoclase       1337.5 ± 6.8        49.9 ± 16.3       1571.8 ± 8.0        27.8 ± 4.0
Or-matrix*       1352.4              31.5              1581.7              27.9

*Free of visible graphite (50 mW on sample, 10 minutes recording time)


Figure 6: Graphite needles or whiskers beside a graphite-bearing nanodiamond cluster. Gr: Graphite, nD: Nanodiamond. Note that the needles are real needles and not sections of flat graphite crystals.

Discussion

In the last couple of years, the author and his colleagues have found, in different Variscan granites, pegmatites, and related mineralizations from a more crustal position, minerals like diamond, graphite, moissanite, reidite, coesite, stishovite, and others representing a mantle origin. Given the dominance of spherical forms and the extraneous position of these minerals in their hosts, fast transport via supercritical fluids is almost imperative. Proofs are in Thomas et al. (2023a) [7] and Thomas (2023a and 2023b) [8,9]. The small vertical pegmatite vein in gneiss (sample T6) with graphite and nanodiamond is a further hint that supercritical fluids play a more significant role than assumed. The search for moissanite (isometric crystals or whiskers) in the quartz of the given sample was unsuccessful. Therefore, we can conclude that for the formation of moissanite whiskers and isometric crystals in the beryl-dominant veins of the Sauberg mine near Ehrenfriedersdorf, beryllium and water are essential catalysts [10]. The diamond (nanodiamond) and graphite spectra look like those of the shock-synthesized diamond of Chen et al. (2004) [11], representing strongly nonequilibrium processes during the change of the supercritical state into a critical/undercritical one. Most diamonds/nanodiamonds show a covering of graphite. All spectra differ from those of static-pressure diamonds [11]. The relatively extended stay at high temperatures makes the primary diamond unstable and transforms it into nanodiamond or onion-like carbon (OLC); see Zou et al. 2010 [12]. The finely dispersed distribution of nanographite and nanodiamond in the quartz and orthoclase matrix is conspicuous. Maybe these particles prevent the intense formation of fluid inclusions in quartz and orthoclase during cooling.

Acknowledgment

Günter Hösel (Freiberg) is thanked for providing the drill sample material from the Annaberg region.

References

  1. Hösel G, Kühne R, Zernke B (1992) Zur Zonalität der Zinnmineralisation im Raum Annaberg/Erzgebirge. Geoprofil 4: 49-57.
  2. Thomas R (1988) Ergebnisse der thermobarometrischen Untersuchungen an Granitproben aus dem Gebiet Annaberg. Unpublished Report, Freiberg.
  3. Thomas R, Davidson P, Rericha A, Recknagel U (2022a) Discovery of stishovite in the prismatine-bearing granulite from Waldheim, Germany: A possible role of supercritical fluids of ultrahigh-pressure origin. Geosciences 12: 1-13.
  4. Thomas R, Davidson P, Rericha A, Voznyak DK (2022b) Water-rich melt inclusions as “frozen” samples of the supercritical state in granites and pegmatites reveal extreme element enrichment resulting under nonequilibrium conditions. Min J (Ukraine) 44: 3-15.
  5. Lafuente B, Downs RT, Yang H, Stone N (2016) The power of databases: The RRUFF project. In: Armbruster T, Danisi RM (Eds.), Highlights in Mineralogical Crystallography. De Gruyter, Berlin, München, Boston, Pg: 1-30.
  6. Zaitsev AM (2001) Optical Properties of Diamond. A data Handbook. Springer-Verlag Berlin Heidelberg GmbH 502.
  7. Thomas R (2023a) Ultrahigh-pressure and -temperature mineral inclusions in more crustal mineralizations: The role of supercritical fluids. Geol Earth Mar Sci 5: 1-2.
  8. Thomas R, Davidson P, Rericha A, Recknagel U (2023a) Ultrahigh-pressure mineral inclusions in a crustal granite: Evidence for a novel transcrustal transport mechanism. Geosciences 13: 1-13.
  9. Thomas R, Recknagel U, Rericha A (2023b) A moissanite-diamond-graphite paragenesis in a small beryl-quartz vein related to the Variscan tin-mineralization of the Ehrenfriedersdorf deposit, Germany. Aspects Min Miner Sci 11: 1310-1319.
  10. Thomas R (2023b) The Königshainer granite: Diamond inclusion in zircon. Geol Earth Mar Sci 5: 1-4.
  11. Chen P, Huang F, Yun S (2004) Structural analysis of dynamically synthesized diamonds. Materials Research Bull 39: 1589-1597.
  12. Zou Q, Wang MZ, Li YG, Lv B, Zhao YC (2010) HRTEM and Raman characterization of the onion-like carbon synthesised by annealing detonation nanodiamond at lower temperature and vacuum. J Experim Nanosci 5: 473-487.

REE-rich Fluorite in Granite from Zinnwald/East Erzgebirge/Germany

DOI: 10.31038/GEMS.2024621

Abstract

The REE-rich fluorites in quartz of the topaz-albite granite from Zinnwald/Erzgebirge are often related to nanodiamonds and graphite. The solvus curves (water content of melt inclusions in granite quartz versus temperature) and the Lorentzian element distributions (F, Rb, Cs) together prove the input of supercritical fluids and their influence on element redistribution in the granite. Including the impact of these supercritical fluids, the crystallization history of the topaz-albite granite from Zinnwald is very complex.

Keywords

Topaz-albite granite, REE-rich fluorites, Fluocerite and tveitite trends, Nanodiamonds, Graphite, Raman spectroscopy

Introduction

During the study of melt and fluid inclusions in quartz and topaz from the Zinnwald granite [1], we often found in granite quartz spherical crystals of REE-rich fluorites beside other mineral phases untypical for “normal” granites: magmatic fluorite, cryolite, elpasolite, rubidian leucite with the empirical formula (K0.64Rb0.22Na0.13Cs0.01)(Al0.96Fe0.03)Si2O6, and boromuscovite. According to melt inclusion results, the fluorine concentration in the melt of the evolved granite phases increases to 5.64 ± 0.19% (g/g). It was also essential that, for this granite, we could construct from the analytically determined water concentrations of different melt inclusions a pseudobinary XH2O vs. T plot of re-homogenized type-A and type-B melt inclusions with a solvus crest at 720°C and 28.6% (g/g) H2O (Figure 10 therein). That was the first pseudobinary solvus curve for a natural granite system [1]. In the meantime, we have seen that such pseudobinary solvus curves are mostly connected with the extreme enrichment of some elements. In the case of Zinnwald, we found Lorentzian curves for F, Rb, and Cs [2]. Such curves are strong proof of the participation of supercritical fluids. However, around 2005, nobody had any idea about the role of supercritical fluids in granite formation and mineralization. Using the small example of the REE-rich fluorite globules in quartz, we will show that supercritical fluids play an essential part in granite formation and re-crystallization.

Methods

For the first identification of the REE-rich fluorites, we used a Dilor XY Laser Raman Triple 800 mm spectrometer equipped with an Olympus optical microscope. The spectra were collected with a Peltier-cooled CCD detector using a laser wavelength of 488 nm. For the recent studies, we used for all microscopic and Raman spectrometric work a petrographic polarization microscope with a rotating stage, coupled with the RamMics R532 Raman spectrometer working in the spectral range of 0-4000 cm-1 with a 50 mW single-mode 532 nm laser. Details are in Thomas et al., 2022a and 2022b [2]. For the routine Raman measurements, we used the Olympus long-distance LMPLN100x as a 100x objective. We carefully cleaned the samples to prevent diamond contamination from the preparation. For the Raman determinations, we used only crystals 30 or more µm below the sample surface [3]. One nanodiamond sample (Figure 5) is on the surface; however, its Raman lines are characteristic of nanodiamonds, not of contaminating diamonds [3]. To determine the composition of the minerals in question, we used the CAMECA SX 50 and SX100 microprobes. Details are in Franz et al. (1996) [4-7].

Sample

The samples (TH212) are granite sections about 500 µm thick, polished on both sides (Figure 1). The sample used is a topaz-albite granite collected as a boulder at the Fuchshübel, about 1.3 km northeast of Zinnwald in the East Erzgebirge. A concise description was given by Thomas et al. (2005) [1]. The quartz contains tiny, mostly spherical, colorless crystals of REE-rich fluorite. Other typical and untypical minerals are graphite whiskers, F-rich needle-like topaz crystals, and orthorhombic cassiterite with graphite and nanodiamond inclusions [Thomas (2023) [8]], as well as fluorite, cryolite, elpasolite, rubidian leucite, and boromuscovite (all in quartz). For comparison, we used an emerald-green REE-bearing fluorite from the Sachsenhöhe near Zinnwald [9]. Note that the very smooth surface of the small diamond spheres at the sample surface causes these grains to fall out easily during preparation (polishing).

Another relatively REE-rich fluorite (No. Z 9054) was a fist-sized emerald-green piece from the Sachsenhöhe near Zinnwald [9]. This sample served to calibrate the ICP-AES and the SX50 microprobe instruments of the GFZ in the nineties. The sample contains 0.27% Y and 0.30% REE. Another REE-rich fluorite is from Ehrenfriedersdorf (sample Sn70), with a maximum of 0.90% Y and 1.08% REE.


Figure 1: Thick section ZW-TH212-I (500 µm thick and polished on both sides). The scale is in centimeters. The colorless parts are quartz with REE-rich fluorite globules.

Results

The REE-rich fluorite globules in the Zinnwald granite quartz have diameters of around 20 µm (up to 70 µm) and are only present in the larger quartz crystals of the granite (Figure 1). Figure 2 shows the form of the typical REE-rich fluorites, and Figure 3 a typical Raman spectrum.


Figure 2: Typical REE-rich spherical fluorite crystals in Zinnwald quartz. The scale (below right) is valid for all examples in the Figure. Gr: Graphite and nanodiamonds in graphite.


Figure 3: Typical Raman spectrum of REE-rich fluorite in granite quartz from Zinnwald. Note that the characteristic strong Raman band at 321 cm-1 for REE-free fluorite is missing or has been shifted to 242 cm-1.

The interpretation of REE-rich fluorite Raman spectra is difficult; there are too many variables (e.g., Y vs. the sum of the REE). Only two passable correlations exist: between the sum of the REE (in at%) and the Raman band position at about 650 cm-1, and with the fluorine content, respectively. The position of the second broad Raman band (between 400 and 550 cm-1) correlates roughly with the Y concentration. More work is, however, necessary, primarily because this Raman band is a double band. Often, these fluorite globules contain remnants of nanodiamond-bearing graphite inside, or the graphite forms small rims around the crystals. Besides the spherical REE-rich fluorites, there are also graphite globules (Figure 4a) up to 40 µm in diameter. Smaller graphite spheres with remnants of diamonds (Figure 4b) are rarer. The spherical form of the REE-rich fluorites, nanodiamond, and graphite in quartz indicates clearly that these are foreign crystals in the granite quartz.


Figure 4: Spherical graphite crystal a) with nanodiamonds in Zinnwald quartz. b) graphitized diamond in the same quartz (about 30 µm deep). The Raman band of the diamond is 1320.2 cm-1 with a FWHM of 56.7 cm-1. FWHM: Full Width at Half Maximum.

Besides the REE-rich fluorite globules, fluorite with larger graphite aggregates is also present (Figure 5); this graphite aggregate also contains nanodiamonds. The presence of fluorite-cryolite-topaz aggregates in the quartz of this peraluminous granite rock (see Thomas et al., 2005 [1], Figure 1c therein) clearly shows a second, peralkaline history. Additionally, the nanodiamonds, combined with the solvus and some Lorentzian-distributed elements [10], prove the input of supercritical fluids from the deeper mantle region. In this short contribution, we concentrate only on the REE-rich fluorites. Table 1 shows the results of the microprobe analyses of the REE-rich fluorites. The abbreviation for the REEs in Table 1 is Ln (the sum of the REE). It is a selection of data obtained over more than 15 years; they show, however, the characteristic properties of these fluorites. At first, the author thought the REE-rich fluorites were a tveitite-like mineral. However, tveitite-(Y) [simplified as Ca14Y5F43] is hexagonal, whereas the lattice parameters determined using the TEM technique (unpublished data by R. Wirth, GFZ Potsdam, and Wirth, 2004 [11]) gave identical values of 5.46 Å for a, b, and c, i.e., a cubic mineral. The density is 4.0 g/cm3, and birefringence is missing. In contrast to simple fluorites, the REE-rich ones in no case show the typical 321 cm-1 Raman line of pure fluorite. Figure 6 shows a twin-like REE-rich fluorite crystal. On the right side, there are tiny, very REE-rich microcrystals (no monazite or xenotime!). The Raman spectrum is similar; however, there are also more significant Raman band differences.


Figure 5: Fluorite (Fl) with graphite in Zinnwald granite quartz. The Raman band for fluorite is at 321.7 cm-1, for diamond at 1327.2 cm-1, and for graphite at 1345.3 cm-1 (D1), 1576.2 cm-1 (G), and 1602.4 cm-1 (D2) with the FWHM values of 9.7, 44.9, 71.2, 73.7, 38.8, respectively.

Table 1: Chemical composition (in % (g/g)) of REE-rich fluorites from Zinnwald (reduced number)

Component   ZW-1    ZW-2    ZW-3    ZW-4    ZW-5    ZW-6    ZW-7    ZW-8    ZW-9    ZW-10   ZW-11
Na          0.24    0.23    0.23    0.23    0.31    0.24    0.12    0.00    0.00    0.16    0.14
Ca          36.99   36.75   36.77   36.97   35.45   36.86   32.75   45.97   39.21   26.97   22.26
Y           4.43    4.47    4.44    4.48    4.84    4.57    2.43    5.28    0.86    3.09    2.81
La          1.59    1.65    1.65    1.65    1.78    1.67    5.10    1.79    5.12    5.99    5.89
Ce          4.67    4.76    4.63    4.64    5.13    4.60    13.83   5.06    13.45   16.20   15.97
Pr          0.62    0.66    0.60    0.55    0.87    0.58    1.50    0.65    1.65    1.76    1.74
Nd          1.75    2.03    2.08    2.19    2.56    2.10    4.92    2.26    4.35    5.74    5.69
Sm          0.57    0.58    0.60    0.56    0.61    0.55    1.05    0.61    1.45    1.22    1.22
Eu          0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.01    0.00    0.00    0.00
Gd          0.91    0.94    0.97    0.91    0.92    0.90    0.77    0.99    0.53    0.89    0.89
Tb          0.14    0.12    0.13    0.16    0.00    0.10    0.17    0.14    0.02    0.20    0.00
Dy          0.84    0.88    0.92    0.91    1.03    0.91    0.60    0.95    0.34    0.68    0.69
Ho          0.24    0.15    0.22    0.19    0.00    0.19    0.05    0.21    0.05    0.05    0.05
Er          0.66    0.84    0.75    0.84    0.00    0.71    0.25    0.81    0.16    0.28    0.29
Tm          0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.13    0.00    0.00    0.00
Yb          0.94    0.93    0.95    0.93    0.00    0.96    0.47    1.00    0.07    0.54    0.55
Lu          0.13    0.15    0.16    0.15    0.00    0.14    0.10    0.16    0.02    0.12    0.12
F           45.66   44.28   44.90   44.66   46.49   44.93   36.12   33.99   32.72   36.12   41.72
Total       100.4   99.42   100.0   100.0   99.99   100.0   100.2   100.0   100.0   100.01  100.0

Formula coefficients calculated based on one cation

Na          0.010   0.010   0.010   0.010   0.140   0.010   0.005   0.000   0.000   0.010   0.006
Ca          0.919   0.917   0.917   0.922   0.885   0.920   0.814   1.147   0.978   0.67    0.56
Y           0.050   0.500   0.050   0.050   0.054   0.051   0.027   0.059   0.010   0.03    0.032
La          0.011   0.012   0.012   0.012   0.013   0.012   0.037   0.013   0.037   0.04    0.042
Ce          0.033   0.034   0.033   0.033   0.037   0.033   0.098   0.036   0.096   0.12    0.114
Pr          0.004   0.005   0.004   0.004   0.006   0.004   0.011   0.005   0.012   0.01    0.012
Nd          0.012   0.014   0.014   0.015   0.018   0.015   0.034   0.016   0.030   0.04    0.039
Sm          0.004   0.004   0.004   0.004   0.004   0.004   0.007   0.004   0.010   0.01    0.008
Eu          0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.00    0.000
Gd          0.006   0.006   0.006   0.006   0.006   0.006   0.005   0.006   0.003   0.01    0.006
Tb          0.001   0.001   0.001   0.001   0.000   0.001   0.001   0.001   0.000   0.00    0.000
Dy          0.005   0.005   0.006   0.006   0.006   0.006   0.004   0.006   0.002   0.00    0.004
Ho          0.001   0.001   0.001   0.001   0.000   0.001   0.000   0.001   0.000   0.00    0.000
Er          0.004   0.005   0.004   0.005   0.000   0.004   0.001   0.005   0.001   0.00    0.002
Tm          0.000   0.000   0.000   0.000   0.000   0.000   0.000   0.001   0.000   0.00    0.000
Yb          0.005   0.005   0.005   0.005   0.000   0.006   0.003   0.006   0.000   0.00    0.003
Lu          0.001   0.001   0.001   0.001   0.000   0.001   0.001   0.001   0.000   0.00    0.001
F           2.394   2.331   2.363   2.351   2.447   2.365   1.894   1.789   1.722   1.90    2.196
ΣLn         0.087   0.093   0.09    0.090   0.090   0.090   0.202   0.191   0.191   0.23    0.23
Σ(Y+Ln)     0.137   0.098   0.14    0.140   0.144   0.140   0.229   0.201   0.201   0.26    0.26

Ln: Measured Lanthanides
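The formula coefficients in the lower half of Table 1 follow from the weight percentages: each element's wt% is divided by its atomic mass, and the resulting molar amounts are normalized so that the cations sum to one. A minimal sketch of this conversion, restricted for brevity to Na, Ca, Y, and F of the ZW-1 column (the coefficients therefore differ slightly from the full table, where the lanthanides are included in the cation sum):

```python
# Convert wt% to formula coefficients on a one-cation basis (sketch).
# Restricted to a subset of the ZW-1 analysis; the full calculation
# treats all lanthanides the same way, which lowers the Ca coefficient
# toward the 0.919 of Table 1.

ATOMIC_MASS = {"Na": 22.99, "Ca": 40.08, "Y": 88.91, "F": 19.00}
CATIONS = {"Na", "Ca", "Y"}

def formula_coefficients(wt_percent):
    """Normalize molar amounts so the cations sum to one."""
    moles = {el: w / ATOMIC_MASS[el] for el, w in wt_percent.items()}
    cation_sum = sum(m for el, m in moles.items() if el in CATIONS)
    return {el: m / cation_sum for el, m in moles.items()}

zw1_subset = {"Na": 0.24, "Ca": 36.99, "Y": 4.43, "F": 45.66}
coeff = formula_coefficients(zw1_subset)
for el, c in coeff.items():
    print(f"{el}: {c:.3f}")
```

The fluorine coefficient is computed on the same one-cation basis, which is why it comes out near 2.4 rather than 1.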


Figure 6: REE-rich fluorite – a) optical microphotograph and b) electron microprobe BSE image. D: diamond and graphite. The lower left part in b) shows the immiscibility of heavy-REE-rich fluorites. In Figure 6a, the black points in the right part of the crystal are not graphite, as can be seen in the BSE image (Figure 6b).

The nanodiamond (D) at the upper left corner has its main band at 1332.8 cm-1 with a FWHM of 47.9 cm-1, and the graphite bands are positioned at 1363.1, 1571.4, and 1602.3 cm-1 with FWHM values of 40.5, 44.0, and 40.2 cm-1, respectively.

Figure 7 shows Y + Ln versus Ln (values in at%), with the zero point representing pure, water-clear synthetic fluorite (not shown in the table). The circles are the data from Table 1 plus unpublished data [1], and the triangles represent the data for REE-rich fluorites and tveitite-(Y) from Pekov et al., 2009 [12]. There are two trends: the dashed blue line corresponds to the tveitite-(Y) trend [12], and the dashed black line corresponds to the fluocerite trend for the REE-rich Zinnwald fluorites.


Figure 7: (Y + Ln) versus Ln (all in at%) of REE-rich fluorites. Ln = sum of the REE, (Y + Ln) = sum of Y + REE’s. The black circles (Zinnwald) and the dashed black line correspond to the fluocerite trend (this work), and the blue triangles and dashed blue line equal the tveitite trend, according to Pekov et al. (2009) [12].

The simultaneous crystallization of xenotime-(Y) and monazite-(Ce) together with the REE-rich fluorite prevents the crystallization of fluocerite [12], which stands at the end of the trend (fluocerite trend of Figure 7). That is also true for other REE-rich fluorites in the region and around Ehrenfriedersdorf. The emerald-green fluorite (Z 9054) from the Sachsenhöhe near Zinnwald contains 0.29% (g/g) Y and 0.31% (g/g) REE, and the fluorite from Ehrenfriedersdorf (sample Sn70) has 0.60% (g/g) Y and 1.22% (g/g) REE. Compared with this, the REE-rich Zinnwald fluorite contains up to 5.3% (g/g) Y and 33.1% (g/g) REE. The high REE content of the fluorite here is the result of the supercritical fluids in the Zinnwald region being depleted in phosphorus [12]. Figure 8 shows the chondrite-standardized REE distribution of the two different fluorites from Zinnwald (ZW) and the Sachsenhöhe near Zinnwald [13].

Because more significant amounts of monazite-(Ce) and xenotime-(Y) crystallized simultaneously in the case of the fluorite from the Sachsenhöhe, the concentration of all REE there is significantly lower than in the REE-rich fluorites from Zinnwald.


Figure 8: Chondrite-standardized REE concentrations in fluorites from Zinnwald (ZW) and the Sachsenhöhe (SH). The REE concentrations in fluorite are in ppm. Y is inserted at the position between dysprosium and holmium as a result of the stereochemical behavior of yttrium [14].
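Chondrite standardization, as used in Figure 8, divides each measured REE concentration (in ppm) by the corresponding concentration in CI chondrite. A minimal sketch; the chondrite reference values below are the commonly used approximate CI values after McDonough & Sun (1995), and the sample values are the ZW-1 column of Table 1 converted from wt% to ppm, restricted to four elements for brevity:

```python
# Chondrite normalization of REE concentrations (sketch).
# Reference values (ppm) are approximate CI-chondrite values after
# McDonough & Sun (1995); sample values are ZW-1 wt% x 10000.

CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Yb": 0.161}

def chondrite_normalize(sample_ppm):
    """Return sample/chondrite ratios for the elements supplied."""
    return {el: sample_ppm[el] / CHONDRITE_PPM[el] for el in sample_ppm}

# ZW-1: La 1.59 wt%, Ce 4.67 wt%, Nd 1.75 wt%, Yb 0.94 wt%
sample = {"La": 15900.0, "Ce": 46700.0, "Nd": 17500.0, "Yb": 9400.0}
pattern = chondrite_normalize(sample)
for el, v in pattern.items():
    print(f"{el}: {v:.0f} x chondrite")
```

Plotting these ratios on a log scale against the element sequence La through Lu gives the distribution patterns of Figure 8.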

Discussion

The topaz-zinnwaldite-albite granite from Zinnwald obviously has a very complex history [1]. Besides melt inclusions representing the crystallization of the topaz-zinnwaldite-albite granite, extremely water-rich melt inclusions are present (Table 3 in Thomas et al., 2005) [1], showing a more pegmatite-like state. At the time, the author also found REE-rich fluorite globules in quartz, for which no explanation could be given. Later, we also found Lorentzian-distributed elements (F, Rb, Cs) [10]. Now, during this study, the proven mantle indicators (nanodiamond and graphite globules) tell us a further story: input of supercritical fluids coming from mantle depths. These supercritical fluids are obviously very fluorine-rich, forming the REE-rich fluorite and minerals untypical for a peraluminous granite, like cryolite, elpasolite, and rubidian leucite. The REE distribution patterns (Figure 8, ZW) differ clearly from the typical REE distribution patterns of hydrothermal and remobilized fluorites [14-16]. In a later, more hydrothermal state, the minerals cryolite, cryolithionite, elpasolite, and native sulfur occur as daughter minerals in fluid inclusions in hydrothermal quartz crystals. Up to now, we could prove that supercritical fluids were active in the whole Erzgebirge, the Slavkovský les, the Saxon Granulite Massif, the Lusatian granodiorites and quartz veins, and the Königshain granite massif.

Acknowledgment

Thanks to Dr. V. Grunewald (ZGI Berlin) for the fluorite sample Z 9054. For the microprobe analyses, we thank D. Rhede, H.-J. Förster, and O. Appelt (all GFZ) for their help over the years.

References

  1. Thomas R, Förster HJ, Rickers K, Webster JD (2005) Formation of extremely F-rich hydrous melt fractions and hydrothermal fluids during differentiation of highly evolved tin-granite magmas: a melt/fluid-inclusion study. Contrib Mineral Petrol 148: 582-601.
  2. Thomas R, Davidson P, Rericha A, Voznyak DK (2022) Water-rich melt inclusions as “frozen” samples of the supercritical state in granites and pegmatites reveal extreme element enrichment resulting under non-equilibrium conditions. Min J (Ukraine) 44: 3-15.
  3. Thomas R, Davidson P, Rericha A, Recknagel U (2023) Ultrahigh-pressure mineral inclusions in a crustal granite: Evidence for a novel transcrustal transport mechanism. Geosciences 94: 1-13.
  4. Franz G, Andrehs G, Rhede D (1996) Crystal chemistry of monazite and xenotime from Saxothuringian-Moldanubian metapelites, NE Bavaria, Germany. Eur J Mineral 8: 1097-1118.
  5. Seifert W, Thomas R, Rhede D, Förster HJ (2010) Origin of coexisting wüstite, Mg-Fe and REE phosphate minerals in graphite-bearing fluorapatite from the Rumburk granite. Eur J Mineral 22: 495-507.
  6. Breiter K, Förster HJ (2021) Compositional variability of monazite-cheralite-huttonite solid solutions, xenotime, and uraninite in geochemically distinct granites with special emphasis on the strongly fractionated peraluminous Li-F-P-rich Podlesi granite system (Erzgebirge/Krušné Hory Mts., Central Europe). Minerals 127: 1-21.
  7. Thomas R, Davidson P, Rhede D, Leh M (2009) The miarolitic pegmatites from the Königshain: a contribution to understanding the genesis of pegmatites. Contrib Mineral Petrol 139: 394-401.
  8. Thomas R (2023) Unusual cassiterite mineralization, related to the Variscan tin-mineralization of the Ehrenfriedersdorf deposit, Germany. Aspects in Mining & Mineral Science 11: 1233-1236.
  9. Thomas R (1978) Thermobarometrische und kryometrische Untersuchungen an Fluorit- und Quarzproben aus einem Brekzienkörper der Sachsenhöhe bei Bärenstein/Osterzgebirge. Unpublished report for the ZGI Berlin, HV 623/75,1-14.
  10. Thomas R, Davidson P, Appel K (2019) The enhanced element enrichment in the supercritical states of granite-pegmatite systems. Acta Geochim 38: 335-349.
  11. Wirth R (2004) Focused ion beam (FIB): a novel technology for advanced applications of micro- and nanoanalysis in geosciences and applied mineralogy. Eur J Mineral 16: 863-876.
  12. Pekov IV, Chukanov NV, Kononkova NN, Yakubovich OV, Massa W, et al. (2009) Tveitite-(Y) and REE-enriched fluorite from amazonite pegmatites of the western Keivy, Kola Peninsula, Russia: Genetic crystal chemistry of natural Ca, REE-fluorides. Geology of Ore Deposits 51: 595-607.
  13. Thomas R (1994) Fluid evolution in relation to the emplacement of the Variscan granites in the Erzgebirge region: A review of the melt and fluid inclusion evidence. In: Metallogeny of Collisional Orogens. Eds: Seltmann R, Kämpf H, Möller P. International Association on the Genesis of Ore Deposits (IAGOD), 70-81.
  14. Bommer H (1941) Über die Einordnung des Yttriums in die Reihe der Lanthaniden. Zeitschrift für anorganische und allgemeine Chemie 248: 397-401.
  15. Möller P (1989) REE(Y), Nb, and Ta enrichment in pegmatites and carbonatite-alkalic rock complexes. In: Lanthanides, Tantalum and Niobium. Eds: Möller P, Černý P, Saupé F, 380: 103-144.
  16. Thomas R, Davidson P (2016) Origin of miarolitic pegmatites in the Königshain granite/Lusatia. Lithos 260: 225-241 and ESM.

Geological and Water Resources of Afghanistan

DOI: 10.31038/GEMS.2024614

Abstract

Afghanistan is rich in mineral and water resources but lacks the political leadership and mineral-extraction capacity to fully realize the value and benefits of such commodities, even from several world-class mineral deposits. Afghan leaders fail to acknowledge or intervene in the continued pollution of water resources that will almost certainly be a detriment to future generations as climate change adds drought stress to the country. Many of Afghanistan's resources, except for water, can wait for some future date to be developed. The Afghan people, however, who must rely on some of these resources for survival, are suffering under this incompetence and backwardness.

Keywords

World-class mineral resources, Water resources, Hydro-cognizance, Hydro-hegemony, Climate change

All forms of rock, mineral, and water resources have been assessed in Afghanistan for about the past century, starting mainly with Russian geoscientists from the 1920s through the 1980s [1-5]. By the late 1960s, enough progress had been made to produce detailed maps and reports, which were subsequently reinterpreted in light of plate-tectonic theory and coupled with independent reassessments by Afghan, American, British, French, German, Japanese, and a few other national teams [6-9]. The result has been the recognition of several trillion dollars' worth of discovered natural resources [10], although recurring political instability has so far precluded actual mining much beyond small artisanal efforts to extract coal, gemstones, chromite, quarried stone, and minor other resources. Several world-class deposits of copper, iron, rare earths, uranium, and lithium occur, with the copper and iron deposits being the largest in Asia [11-13].

Difficulties with studying and understanding all forms of water in Afghanistan (weather and climate, glacier ice, river flow, underground water) are plentiful, compounded by increasing pollution, drawdown, natural hazards (landslides, rapid wet debris flows, mudflows), flash floods, and multiple, intensifying droughts [14,15]. At the same time, over-extraction of ground and surface waters is occurring everywhere, particularly now that climate change is well underway across the whole region of South and Central Asia. Furthermore, long-term intransigence by all prior Afghan governments and their bloated and incompetent bureaucracies was set firmly against even talking about water in any context. In fact, most of the water experts and engineers of the prior Ghani regime have long since fled the country or gone underground to protect themselves and their families.

These aversions have compounded and added much to living difficulties, especially with the government now being run by an ineffective and largely illiterate Taliban. Almost no recognition of the Taliban government has been granted by outside countries or the United Nations, except by Pakistan, Saudi Arabia, and the United Arab Emirates. As a result, almost all external financial assistance has dried up in the face of pro-religious and anti-scientific pronouncements by the Taliban, who, for example, have denied reports of water pollution and linked those reports to supposed enemies of the Afghan people. The traditional government arrangements are not working, however, to solve today's problems with over-extraction and pollution [16]. The Taliban are unwilling to accept any such solutions because they seek to use only Sharia laws, which are acceptable only to some fundamentalist Muslims and are not useful to most villagers.

Hydro-cognizance and hydro-hegemony are two concepts about Afghanistan's water that have emerged recently in the Western literature. These need to be understood in terms of scientific approaches to the hydrologic cycle (evaporation, precipitation, glacier, lake, ocean and underground water storage, river flow, etc.), as well as the means to exert hegemonic control over water between Afghanistan and its neighboring countries [17]. Hydro-hegemony has four major pillars: (1) geographic position (top, middle, or bottom of watersheds); (2) material power (demography, infrastructure, literacy, military strength, etc.); (3) bargaining power (water-law awareness, diplomatic skills, etc.); and (4) ideational power (skill with new ideas and new thinking). Afghanistan sits at the top of the watersheds, a very strong position relative to Pakistan and Iran, but it is woefully deficient in all the other pillars, so much so that the country is vulnerable to hydrologic machinations by its neighbors.

In sum, the geology and ores of Afghanistan could become part of the salvation of the sorely beset nation through wise resource extraction. Various transparency measures to reduce individual, corporate, and government corruption were introduced by prior governments, along with ideas on comprehensive extraction, transportation, and refining in various resource corridors, all of which could certainly help jumpstart the rebuilding of the Afghan economy. This would require adoption by the Taliban, who are not known for their ability to comprehend such modernism.

Competing Interests

The authors declare that they have no competing interests.

References

  1. Abdullah S, Chmyriov VM (2008) Geology and mineral resources of Afghanistan, Book 1, Geology: Ministry of Mines and Industries of the Democratic Republic of Afghanistan. Afghanistan Geological Survey, 15, p. 488. British Geological Survey Occasional Publication.
  2. Ali SH, Shroder JF (2011) Afghanistan’s mineral fortune: Multinational influence and development in a post-war economy: Research Paper Series C: 1: 2011 (1): Institute for Environmental Diplomacy and Security; James Jeffords Center for Policy Research, v. 1(1), p. 24. University of Vermont.
  3. Shroder JF (2014) Natural resources in Afghanistan: Geographic and geologic perspectives on centuries of conflict. Elsevier 572.
  4. Shroder JF (2015) Progress with Afghanistan extractive industries: Will the country know resource success or failure evermore? Extractive Industries and Society 2: 264-275.
  5. Shroder JF, Eqrar N, Waizy H, Ahmadi H, Weihs BJ (2022) Review of the Geology of Afghanistan and its water resources. International Geology Review.
  6. Shareq A (1981) Geological observations and geophysical investigations carried out in Afghanistan over the period of 1972-1979, in Gupta HK, and Delany FM, eds. Zagros Hindu Kush Himalaya geodynamic evolution: American Geophysical Union, Geodynamics Series 3, Pg: 75-86.
  7. Siehl A (2015) Structural setting and evolution of the Afghan orogenic segment – A review, in Brunet, MF, McCann T, Sobel RR, eds., Geological Evolution of Central Asian Basins and the Western Tien Shan Range, London: The Geological Society of London, Pg: 427.
  8. Debon F, Afzali H, Le Fort P, Sheppard SMF, Sonet J (1987) Major intrusive stages in Afghanistan: Typology, age and geodynamic setting. Geologische Rundschau 76: 245-264.
  9. Doebrich JL, Wahl RR (2006) Geologic and mineral resource map of Afghanistan, version 2: U.S. Geological Survey OF-2006-1038, scale 1: 850,000, 1 sheet.
  10. Risen J (2010) U.S. identifies vast mineral riches in Afghanistan. The New York Times, June 13.
  11. Peters SG (2011) Summaries and data packages of important areas for mineral investment production opportunities in Afghanistan, U.S. Geological Survey Fact Sheet 2011-3108.
  12. Peters SG, King TVV, Mack TJ, Chornack MP, eds. the U.S. Geological Survey Afghanistan Mineral Assessment Team (2011a) Summaries of important areas for mineral investment and production opportunities of nonfuel minerals in Afghanistan: U.S. Geological Survey Open-File Report 2011-1204.
  13. Peters SG, King TVV, Mack TJ, Chornack MP (2011b) Summaries of important areas for mineral investment and production opportunities of nonfuel minerals in Afghanistan, U.S. Geological Survey Open-File Report 2011-1204.
  14. Shroder J, Ahmadzai S (2016) Transboundary water resources in Afghanistan – Climate change and land-use implications, Amsterdam. Elsevier.
  15. Shroder JF, Ahmadzai SJ (2017) Hydro-cognizance: Water knowledge for Afghanistan: Journal of Afghanistan Water Studies: Afghanistan Transboundary Waters. Perspectives on International Law and Development 1: 25-58.
  16. Mahaqhi A, Mehiqi M, Mohegy MM, Hussainzadah J (2022) Nitrate pollution in Kabul water supplies, Afghanistan; sources and chemical reactions: a review. International Journal of Environmental Science and Technology 19.
  17. Ahmadzai SJ, Shroder JF (2017) Water security: Kabul River basin: Journal of Afghanistan water studies: Afghanistan Transboundary Waters. Perspectives on International Law and Development 1: 91-109.

Value of Ecosystem Conservation versus Local Economy Enhancement in Coastal Sri Lanka

DOI: 10.31038/GEMS.2024613

Abstract

The coastal natural ecosystem is among the world's most sensitive, threatened, and densely populated environmental systems. Economic valuation of coastal ecosystems helps capture this complexity and justify conservation efforts that can redirect local attention toward sustainable coastal management. The quality and abundance of the coastal ecosystem affect marine biological processes, both the primary and secondary production that supports human needs. However, environmental resources carry no market prices, so their actual monetary value goes unappreciated and they are routinely undervalued. Since the importance of coastal resources is undermined by this undervaluation, valuation can improve our knowledge of the true value of ecosystems. The Sri Lankan conservation planning process, however, has yet to consider the economic value of coastal ecosystem conservation. Therefore, this study aims to estimate the monetary value of protecting coastal areas in Sri Lanka using the willingness-to-pay (WTP) approach. It further identifies attributes and measurable variables that reflect the economic value of conserved coastal areas by evaluating public preferences over possible scenarios. The selected case study is the Mirissa coast on the southern coastal belt, which attracts high levels of tourism. The Choice Experiment (CE) method addresses the study's primary objective. First, a questionnaire survey collected data under a random sampling method from a sample of around 250 respondents using face-to-face interviews. The data were then analyzed using the Conditional Logit Model (CLM). According to the results, public preferences ranked three variables at the top: all known coral reef conservation, a WTP of SLR500, and creating more opportunities for locals. In addition, all the parameter variables used in the study were significant at the α < 0.01 level. Finally, the study generates vital information about the values placed on different ranges of coastal resource conservation and the tradeoffs accepted by respondents.

Keywords

Coastal ecosystem, Conservation, Local economy, Tradeoff, Economic valuation

Introduction

Sri Lanka possesses a valuable coastal belt of 1,585 km that encircles the country with countless coastal natural resources. The coastal areas are generally low-lying landscapes with geographic features such as estuaries and lagoons. The total area of 126,989 hectares (ha) includes 6,083 ha of mangroves, 68,000 ha of corals, and 15,576 ha of bays, dunes, and coastal marshes. The country's coastal environment is beautiful and holds rich biodiversity and many kinds of natural resources [1]. The main occupation of the coastal area is the tourism industry, as highly sought-after leisure destinations are found all around Sri Lanka. Coastal tourism delivers economic benefits to both local and national economies; moreover, 80% of tourism infrastructure is based in coastal areas [2]. Growth of the coastal human population, poor environmental planning, and lack of consideration of social and ecological issues have driven the degradation of the coastal environment. Inland development has been closely tied to the country's maritime activities. Hence, coastal ecosystems are the most populated and threatened landscapes in Sri Lanka, as they are worldwide. All people living in coastal areas would most likely be affected by the conservation or conversion of the land for development [3]. Open-access areas such as coastal zones are continuously exploited for economic purposes, driving valuable species toward extinction. The need to manage coastal problems was recognized in the 1920s; however, concrete efforts in the field appeared sometime later. Coastal erosion problems stem mainly from a poor understanding of conservation values, resulting in vast destruction along the Sri Lankan coastline [4]. The relevant authorities lack the capacity and efficiency to manage and maintain coastal resources. Moreover, public participation is significant in overcoming this situation, as the public is directly involved in coastal natural-resource conservation programs; general users thus bear considerable responsibility for such programs.

Moreover, this would be a good start for conserving and managing resources and saving them for the future. Many countries practice the willingness-to-pay (WTP) approach to determine user satisfaction with natural resources based on their wellbeing and their perception of future conservation and management. However, only a few economic valuation studies on coastal resources in the Sri Lankan context are found in the literature. One study investigated stakeholder preferences spatially for conservation and development in unprotected wetland areas of Sri Lanka using WTP and the Analytical Hierarchy Process (AHP) [5]. Economic valuation of coastal ecosystems would be a significant advantage for cost-effective designs to manage ecosystems sustainably. Some studies focus on the coastal belt of Sri Lanka and its valuable natural resources; however, most have addressed the impacts of coastal pollution, coastal conservation, and coastal area management, and there are also studies related to coastal protection in the literature. Studies on the total economic value (TEV) of the welfare from 'use' and 'non-use' values and the conservation of coastal areas remain minimal [3]. This paper therefore focuses on two main aspects: first, identifying attributes and measurable variables that reflect the economic value of conserved coastal areas; and second, estimating the monetary value of conserving the coastal regions of Sri Lanka using CE.

Materials and Methods

Coastal Management Methods

The country's coastal management was initiated in the 1920s, focusing on engineering solutions for coastal erosion. By 1963, a more comprehensive approach to managing coastal resources was required, and the Colombo Commission therefore established a coastal protection unit. The Coast Conservation Division was established under the Ministry of Fisheries in 1978 and upgraded to a department in 1984. The Coast Conservation Act No. 57 of 1981 was enacted in 1981 and came into operation in 1983; this Act and its 1988 amendment form the main legal framework for coastal zone activities. Since the 1930s, the theme of social justification for projects has evolved; for example, the Flood Control Act of 1936 tied federal participation in flood-hazard control to project benefits exceeding estimated costs. Managing coastal resources is essential to planning and developing a sustainable economy. Only a few advanced studies have been carried out in the Sri Lankan context on coastal area management and conservation for sustainable development, whereas numerous studies worldwide analyze public perception and apply it to coastline conservation. Many countries with coastal resources are deciding to conserve coastal areas against development activities, although some small groups still engage in activities that harm them. Coastal protection measures such as conservation can safeguard human health and improve renewable resources such as fisheries [6], mangroves, and coral reefs. As an island country, Sri Lanka also practices coastal conservation and preservation strategies to a certain extent. The environmental movement of the late 1960s raised pollution control and highlighted the role of WTP for this purpose.

Environmental Valuation

Researchers believe that unassessed coastal habitats hold unrecognized value because natural resources lack market prices. Using the WTP approach, however, it is possible to value them through the concept of maximum utility. Economists use environmental valuation techniques to appraise natural resources and resource services as market and non-market goods. The term "value" expresses the highest price a consumer is willing to pay to obtain a good or service; simply put, it is how much the user values the good or service. This value varies from person to person and from good to good. The supply and demand concepts of economics help estimate the WTP to obtain goods and services. In the coastal context, the meaning of "valuing" differs with the valuer's interests: to an ecologist, the value of a salt marsh lies in its significance as a reproductive habitat for certain fish species, yet only some users see it from this view. Economic value measures the maximum amount an individual is willing to pay for a good or service, and this welfare measurement is expressed formally with the concept of WTP. Conversely, if a value loss occurs in an environment degraded by pollution, the lost amount is the maximum an individual is willing to accept (WTA) as compensation for the pollution. Economic valuation uses money as a unit of account, and the resulting value is relative because it measures tradeoffs: goods and services have value only if people value them directly or indirectly. When determining values for the whole society, value is aggregated from individual values [4].

Random Utility Theory

Utility theory, random utility theory, and the theory of value are the main theories relevant to valuing coastal ecosystems. The basic meaning of "utility" can be taken as satisfaction: in general, people make decisions based on their satisfaction. The four types of economic utility are form, place, time, and possession. Utility theory holds that people's WTP depends on income, wealth, status, and mindset [7]. Random utility theory is used to derive behavioral models from the choice dimension; its primary assumptions are rational human behavior and maximization of utility relative to personal choice. For example, human behavior tends, in most cases, to select among the available alternatives in each choice [8]. The theory of value holds that desire and utility are not the only considerations in decision-making: the attributes, or characteristics, of the good or service also matter. Several approaches to this concept examine why, how, and to what extent people value things [7]. The CE method used in this research asks individuals to state their preferred alternative among several available options in order to appraise natural resources.

The Choice Experiment Method

The CE method is an application of the theory of value combined with random utility theory. It estimates the economic values of attributes by measuring people's WTP to achieve the improvements or changes suggested by each option (attribute) [10]. Several methods exist for estimating CE parameters, such as Logit and Probit models. The Multinomial Logit Model (MLM) is widely used when a problem has three or more choice categories and incorporates the respondents' socio-economic characteristics. The Conditional Logit Model (CLM) extends the MLM and is the most appropriate method when the attributes of the alternatives drive the modeling process. Hence, the CLM is used as the modeling method for this research, in which the choice among the selected alternatives is a function of the attributes of the options; the characteristics of the respondents making the choice [11] are less essential to the objectives of this research. The maximum likelihood estimates are obtained by running the Cox regression procedure in SPSS.

Pij = exp(β Xij) / Σk exp(β Xik),   for j = 1, …, J

According to the above equation, the CLM estimates the probability that an individual i chooses alternative j as a function of the attributes, which vary across the alternatives, and the unknown parameters β [12]. In the CLM, Xij is the vector of attributes of site j for individual i, and Pij is the probability that individual i chooses alternative j.
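The conditional logit probability described above can be sketched in a few lines of plain Python. The coefficient values and dummy coding below are hypothetical illustrations, not the study's estimates:

```python
import math

def clm_choice_probabilities(beta, X):
    """Conditional logit: probability that an individual chooses each
    alternative j, given attribute vectors X[j] and coefficients beta.
    Pij = exp(beta . Xij) / sum_k exp(beta . Xik)."""
    utilities = [sum(b * x for b, x in zip(beta, xj)) for xj in X]
    # Subtract the maximum before exponentiating for numerical stability.
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical coefficients and dummy-coded attribute levels for
# three alternatives (the third row is a status quo coded all-zero).
beta = [1.2, 0.8]
X = [[1, 0], [0, 1], [0, 0]]
probs = clm_choice_probabilities(beta, X)
print([round(p, 3) for p in probs])  # probabilities sum to 1
```

The alternative with the largest utility index receives the largest choice probability, which is the mechanism the CLM uses to relate attribute levels to observed choices.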

Choice Questionnaire Survey

The southern coastal areas of Sri Lanka have the highest tourism attraction; consequently, the Mirissa coastline was chosen as the survey location. The area has one of the highest mean coral covers, at 23.97%, and, most significantly, hosts the highest amount of live coral, which should be conserved and protected. Southern coastal belts, including Mirissa, Weligama, Polhena, Hikkaduwa, and Rumassala, show high BOD levels (average = 3.98 mg/L). The case study area is well suited to a study of ecosystem protection because there is clear evidence of threats from human activities: destructive fishing is especially prevalent in the region, threatening the coastal area's natural resources, while tourist arrivals remain high throughout the year. This study therefore focuses on Mirissa's natural coastal resources in order to preserve the ecosystem. Attribute selection focused on what is relevant to the respondent group and the respondents' policy context. Furthermore, attributes should be selected from end-user perspectives, meaning the population of interest comprises the decision-makers [13]. Attribute selection followed three steps: first, identifying essential attributes that represent the good or service; second, determining a suitable framework for the attributes; and finally, identifying levels for each attribute (Table 1).

Table 1: Selected attributes and the levels

Attribute 1: Environmental strategy to protect coral reefs
  Level 1 (Status quo): Identified coral reef conservation
  Level 2: All known coral reef conservation
  Level 3: All known and unknown coral reef conservation

Attribute 2: Local economy enhancement
  Level 1 (Status quo): Benefits captured by well-established businesses
  Level 2: Encourage small-scale local businesses which reflect the Sri Lankan culture
  Level 3: Creating more opportunities for locals to establish with high-income generations

Attribute 3: Management and preservation payment
  Level 1 (Status quo): No payment (SLR0)
  Level 2: SLR500
  Level 3: SLR1000

Levels and attributes were derived from information collected in the literature and from discussions with experts, stakeholders, practitioners, and university professionals. Basic descriptions of each attribute and level follow.

The first attribute considers the environmental strategy to protect coral reefs. Corals are highly susceptible living species adapting to changes in marine ecosystems; they are especially vulnerable to physical damage from ornamental fishing pressure, deep-sea fishing, trawling, Moxy nets, and iron rods. The rapid growth of the calcareous algae Halimeda sp. and Caulerpa sp. has been identified as a leading threat to the area's corals [14]. At the status quo (current stage), steps are taken to conserve the identified coral reefs, but the threat continues to grow. Level 2 therefore comprises level 1 plus the conservation of all known coral reefs, and level 3 adds to level 2 the conservation of all as-yet-unknown coral reef environments for the future. The second attribute is the enhancement of the local economy in the area. The southern coastline has been developed through expansion of the tourism industry, and the fishery industry has been a substantial source of income for the area. On the other hand, modern fishery practices and tourism threaten natural coastal ecosystems, and the benefits of these activities accrue primarily to well-established businesses, leaving poor local people aside. This attribute encourages micro, small, and medium enterprises (MSMEs) to boost the rural economy, a strategy for attracting local tourism and fisheries while promoting the conservation of natural resources. Thus, levels 2 and 3 create more opportunities for locals with higher income-generation potential and promote coastal protection.

Management and preservation payment is the third attribute in the choice set. Competition for limited resources has intensified with human population growth in coastal regions and with the diversion of coastal areas, including wetlands, to economic activities worldwide [15]. Coastal areas are open-access spaces, and free accessibility to public common spaces and resources leads to excessive exploitation. There are no market prices for many characteristics of coastal natural areas with which to estimate the damage, so this study evaluates the economic value of the coastal natural ecosystem based on the perception of the general public, including all stakeholders. The hypothetical payments are no payment, i.e., SLR0 (status quo), SLR500, and SLR1000. An experimental design combines the attributes and levels into choice sets for the choice experiment study. With three attributes of three levels each, there are 3³ = 27 possible combinations; however, it is impractical to show all 27 in a questionnaire. The 27 combinations were therefore reduced to 9 using the orthogonal design procedure for convenience in field data collection. An orthogonal design stores its information in a data file; an active dataset is optional before running the orthogonal design generation procedure, which allows the researcher to create an active dataset with variable names, variable labels, and value labels from the options shown in the dialog boxes, and to replace or save the orthogonal design as a separate data file. Thus, the initial step of a choice experiment creates the combinations of attributes and levels presented as product profiles to the subjects in the field. Even a few attributes with a few levels each lead to an unmanageable number of potential product profiles, so the researcher must generate a representative subset known as an orthogonal array.

Respondents in the field chose 1 out of the 9 profiles shown to them; however, probabilities can be estimated for all 27 combinations at the modeling stage. Because of its complexity, it is better to use a fractional factorial design than a full factorial design when deriving the alternatives of a choice set. A full factorial design consists of combinations of all attributes and their levels; when the number of combinations becomes too large, a fractional factorial design is usually used. This procedure is known as a total design sample that allows estimation of all the effects of interest. The fractional factorial design can be orthogonal, indicating no correlation between attribute levels [16]: if the correlation between two attributes is zero, the design is called orthogonal. The orthogonal plan then selects randomly paired alternatives. After finalizing the options, scenarios can be developed and a choice card created for use in the CE method. The questionnaire was developed around key sections, with an introduction covering the what, why, how, and who of the investigation. The questionnaire gathering individuals' preferences consists of questions about public perception of coastal ecosystem conservation. Example choice cards introduce the task clearly to the respondents, since a clear presentation in the field is essential to obtaining complete answers. Socio-economic and demographic data such as age, gender, household income, and education level of respondents are required for data interpretation and validation.
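The reduction from 27 profiles to an orthogonal fraction of 9 can be illustrated in a few lines of Python. The study used SPSS's orthogonal design procedure; the modular index rule below is only an illustrative stand-in that happens to produce a valid orthogonal array for three three-level attributes:

```python
from itertools import product

# The study's three attributes with three levels each (Table 1).
attributes = {
    "coral_strategy": ["identified reefs", "all known reefs", "known + unknown reefs"],
    "local_economy":  ["established businesses", "small local businesses", "more local opportunities"],
    "payment_SLR":    [0, 500, 1000],
}

# Full factorial design: every combination of levels, 3^3 = 27 profiles.
full_design = list(product(*attributes.values()))
print(len(full_design))  # 27

# Orthogonal fraction of 9 profiles: pick the payment level with index
# (a + b) mod 3, so every pair of attributes covers all nine level
# pairs exactly once (no correlation between attribute levels).
fraction = [
    (attributes["coral_strategy"][a],
     attributes["local_economy"][b],
     attributes["payment_SLR"][(a + b) % 3])
    for a in range(3) for b in range(3)
]
print(len(fraction))  # 9
```

Each respondent then sees the 9 reduced profiles rather than all 27, while the balanced pairing of levels still permits estimation of every attribute's main effect.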

Sample Selection

Simple random sampling is a commonly applied sampling technique in CE studies [17]. This study mainly uses convenience sampling approximating the simple random sampling method. The sampling frame includes all users and visitors of the coastal natural areas within the selected case study area, restricted to people between the ages of 18 and 65. With a 90% confidence level (z ≈ 1.65), the sample size was set at n = 250. The questionnaire survey was conducted as face-to-face interviews, which secure a higher response rate. In the choice card, all coded data take the value 0 or 1; all other data were entered as continuous variables or as 0/1 codes. The output combines information about the attribute levels chosen and not chosen with the respondent's socio-demographic data. In the choice selection, level 1 (status quo) was held constant as the base for the other choices.
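A sample of roughly this size can be motivated with Cochran's formula for proportions. The study does not report its margin of error, so the 5% used below is an assumption for illustration only:

```python
import math

def cochran_sample_size(z, p, e):
    """Cochran's sample-size formula: z is the z-score for the chosen
    confidence level, p the expected proportion, e the margin of error."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 90% confidence (z ~ 1.65, as stated in the study); p = 0.5 is the most
# conservative proportion; the e = 0.05 margin of error is an assumption.
n = cochran_sample_size(1.65, 0.5, 0.05)
print(n)  # 273, the same order of magnitude as the study's n = 250
```

With these (assumed) inputs the formula yields a target of the same order as the study's sample of 250, which is a common way such sample sizes are justified in survey work.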

Results

Socio-Economic Characteristics

The socio-economic characteristics of the respondents are presented in Table 2. The gender balance of the sample is almost equal, with the highest share in the younger, active generation aged 18-40. The highly educated group ("high to postgraduate qualification") makes up 45% of the sample. Sixty percent of the sample are married, while about 33% are unemployed.

Table 2: Socio-economic characteristics of the respondents


Estimation of Conditional Logit Model

We used the choice experiment procedure to estimate the economic value from individuals' preferences over a set of attributes. Respondents compared nine choice alternatives differing in levels and attributes. The CE results generated from the choice-card survey were analyzed with the CLM, and this sub-section presents them. The importance of the selected attributes was explored using the Cox regression procedure for continuous-time survival data in SPSS, whose partial likelihood method allows the CLM to be fitted to the data set. The Likelihood ratio, Score, and Wald tests use chi-square (χ2) statistics to assess the model parameters. As shown in Table 3, the χ2 statistics for the Likelihood ratio, Score, and Wald tests indicate that the model is highly significant: each test's p-value of 0.0001 is smaller than α = 0.01, so all three tests confirm that the model is significant at the 1% level. These results demonstrate a strong interrelationship between the attributes and the choice. The likelihood ratio (chi-square) test rejects the null hypothesis of no relationship between attributes and the choice at the α = 0.01 significance level.

Table 3: Model test statistics (global H0: β = 0)

Test               χ2        DF   Pr>χ2
Likelihood ratio   233.697   6    0.0001
Score              388.343   6    0.0001
Wald               388.343   6    0.0001
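The reported significance can be verified directly from the χ2 statistics. For an even number of degrees of freedom, the chi-square survival function has a closed form, so no statistics library is needed; this is a verification sketch, not part of the study's SPSS workflow:

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function P(X > x) for a chi-square variable with an
    even number of degrees of freedom (closed-form series, no SciPy)."""
    assert df % 2 == 0 and df > 0
    m = df // 2
    # P(X > x) = exp(-x/2) * sum_{j=0}^{m-1} (x/2)^j / j!
    series = sum((x / 2) ** j / math.factorial(j) for j in range(m))
    return math.exp(-x / 2) * series

# Likelihood-ratio statistic from Table 3: chi2 = 233.697 with 6 df.
p = chi2_sf_even_df(233.697, 6)
print(p < 0.01)  # True: the model is significant at the 1% level
```

The resulting p-value is vanishingly small, consistent with the tabulated Pr>χ2 of 0.0001 and with rejection of the global null hypothesis β = 0.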

Estimating the maximum likelihood parameter values is vital in modeling choices. Table 4 shows that the parameter values for the identified attributes and levels are statistically significant at the α = 0.01 level, since 0.000 < 0.01; the whole model is therefore significant at the 1% level. Model parameters displaying zero coefficients indicate the status quo (reference) levels, and the coefficients for the other six levels are recorded relative to these three reference levels.

Table 4: Maximum likelihood estimation analysis for all respondents

Parameter variable                                             Estimate   S.E.   χ2      Pr>χ2

Environmental strategy to protect coral reefs
  L3 - All known and unknown coral reef conservation (AKUCC)   2.162      0.29   53.20   0.000
  L2 - All known coral reef conservation (AKCC)                2.956      0.31   92.93   0.000
  L1 - Identified coral reef conservation (Status quo)         0          .      .       .

Local economy enhancement
  L3 - Creating more opportunities for locals to establish
       with high-income generations                            3.675      0.42   76.32   0.000
  L2 - Encourage small-scale local businesses which reflect
       the Sri Lankan culture                                  3.379      0.44   59.40   0.000
  L1 - Benefits captured by well-established businesses
       (Status quo)                                            0          .      .       .

Management and preservation payment
  L3 - SLR1000                                                 1.595      0.25   41.06   0.000
  L2 - SLR500                                                  2.147      0.27   62.64   0.000
  L1 - SLR0 (Status quo), no payment currently                 0          .      .       .

Factors Affecting Conservation and Economic Activities

The first attribute tested is “environmental strategy to protect coral reefs.” The estimated coefficient of its first level, L1, is zero because this is the status quo (reference) level, representing the current state of the resource. The part-worth utility (estimated coefficient) for “all known coral reef conservation” (AKCC, L2) is +2.956, the highest coefficient within this attribute, while “all known and unknown coral reef conservation” (AKUCC, L3) is +2.162, the second-highest. The study area is well known for coral reef-based tourism and is widely used for research; unknown coral areas may be abstract to the general public, hence the lower preference for unexplored reefs. Both non-reference levels (L2 and L3) of attribute one are significant at the α = 0.01 (1%) level, since Pr > χ2 = 0.000 < 0.01. The next attribute tested in this model was “local economy enhancement.” Benefits captured by well-established businesses (L1) is the current situation (status quo) and is set to zero. The part-worth utility for “encourage small-scale local businesses which reflect the Sri Lankan culture” (L2) is +3.379, while that for “creating more opportunities for locals to establish with high-income generation” (L3) is +3.675. Accordingly, L2 is preferred over the status quo (L1), and L3 is preferred over both L1 and L2, making L3 the most preferred level of the second attribute. This preference points toward sustainable development of coral reefs that couples conservation with uplifting the local economy through the first two attributes. All levels of attribute two are significant at α = 0.01 (0.000 < 0.01), so the parameter values tested under this attribute are highly significant. The third attribute tested under the model is the management and preservation payment (WTP): a monthly contribution from people toward the management and conservation cost of coastal resources. The status quo remains “SLR 0” (no payment). SLR500 (L2), with a part-worth of +2.147, is preferred over both the status quo (no payment) and SLR1000 (L3), whose estimate of +1.595 makes it the less favored payment level. Both non-reference levels of this attribute are likewise significant at the α = 0.01 (1%) level, judging by their Pr > χ2 values. Thus, all the selected attributes considered for obtaining maximum utility from the conservation, management, and preservation of coastal natural resources prove crucial in the choices people prefer.
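Because the fitted model is linear in the part-worths, the systematic utility of any alternative is simply the sum of the coefficients of the levels it contains, with status quo levels contributing zero. A minimal sketch using the Table 4 estimates (the SQ_* short names are ours; the others follow the paper's abbreviations):

```python
# Part-worth utilities (coefficients) from Table 4; status quo levels are 0.
partworth = {
    "AKUCC": 2.162, "AKCC": 2.956, "SQ_conserve": 0.0,   # environmental strategy
    "CMOFL": 3.675, "SMALL_BIZ": 3.379, "SQ_econ": 0.0,  # local economy
    "SLR1000": 1.595, "SLR500": 2.147, "SLR0": 0.0,      # payment (WTP)
}

def utility(levels):
    """Systematic utility of an alternative = sum of its level part-worths."""
    return sum(partworth[level] for level in levels)

status_quo = ("SQ_conserve", "SQ_econ", "SLR0")
best_mix = ("AKCC", "CMOFL", "SLR500")  # highest part-worth within each attribute

print("status quo utility:", utility(status_quo))             # 0.0 by construction
print("preferred mix utility:", round(utility(best_mix), 3))  # 2.956 + 3.675 + 2.147
```

This additivity is what lets the nine coefficients in Table 4 rank every combination of levels without estimating a separate parameter per combination.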

Users’ Perception of Conservation and Economic Enhancement

Users’ perceptions of conservation and economic enhancement in coastal areas can be used to validate the CE results. On average, respondents agreed strongly with the statements favoring better-conserved coastal areas. The following items recorded more than 50% “strongly agree” responses: enjoying the natural beauty; feeling the fresh sea air, waves, and sunshine; relaxing mind and body; gathering with friends and family; escaping the stress and pressure of work; and enhancing the local economy. More than 30% chose “agree” for enjoying fresh sea air, waves and sunshine, exercise and leisure walks, gathering with friends and family, being alone, and safer seafood. Exercise, leisure walks, and meeting new people were rated neutral by more than 30% of respondents, while the remaining categories stayed below 30%. The importance of improving selected features in the case study area is displayed in Figure 1. On average, the public assigned large shares to the “important” categories for the selected features: except for increasing neighborhood property values (near 40%), all other features drew more than 40% of responses. The features most often rated “extremely important” were saving natural resources for the future, reducing pollution of the natural environment, and protecting the flora and fauna species of the coastal area.


Figure 1: Public perceptions of ranking the conservation of coastal areas

The data on “problems faced by people in coastal areas relative to the case study area” may provide essential facts for the future management of these areas. The “strongly disagree” and “disagree” responses stayed below 5% and 15%, respectively, for every problem suggested in this study. More than 50% of respondents strongly agreed that improper garbage disposal is an existing problem, and more than 25% strongly agreed about inadequate parking areas, an unclean environment, and poor sanitary facilities. In the “agree” category, more than 25% cited deficient parking areas, nearly 60% cited an unclean environment and safety issues, more than 40% cited insufficient sanitary facilities, and improper garbage disposal was cited again. Neutral responses exceeded 20% for all the listed problems except improper garbage disposal (less than 10%) (Figure 2).


Figure 2: Public perception of selected attributes

Public perception related to the selected attributes in coastal conservation is displayed in Figure 2. Environmental sustainability and protection of natural coastal life, including corals, recorded the highest percentage of responses in the “important” categories, with no responses at all in the “somewhat important,” “not important,” or “not at all” categories. These facts validate the CE results: the corresponding CE parameters (AKCC and AKUCC) carry the highest estimated coefficients, +2.956 and +2.162 respectively, the top two part-worth utilities. Improvement of the local economy drew 37% of the sample’s responses, and conserving coastal natural resources drew 42% in the “extremely important” category. Environmental sustainability was rated “extremely important” by 59% of respondents, and “protecting natural coastal life, including corals” by 68%. More than 30% of responses fell into the “important” category for every factor. Only 3% of responses rated the charge for conserving coastal natural resources (WTP) as not crucial, while 42% chose “extremely important” and 39% chose “important” for it. This result ratifies the CE findings.

Probability for Conservation and Local Economy Enhancement

A probability test can be defined as the level of marginal significance within a statistical test, representing the probability of occurrence of a given event. The parameters shown in Table 4 are used to estimate the probabilities associated with the nine alternatives of the study. The ranking of the variables suggests that the three best preferences are AKCC (22.1%), WTP500 (19.7%), and creating more opportunities for locals (CMOFL, 18.8%). The results also show the importance of each variable based on respondent preference when all 27 individual variables are considered (Figure 3).
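Under the conditional logit model, these probabilities follow the standard formula P(i) = exp(V_i) / Σ_j exp(V_j) over the alternatives in the choice set. The paper does not spell out how the 27-variable ranking in Figure 3 was computed, so the sketch below simply applies the formula to three illustrative alternatives assembled from the Table 4 part-worths; the resulting numbers are illustrative, not a reproduction of Figure 3:

```python
import math

# Systematic utilities of three illustrative alternatives, each the sum of
# Table 4 part-worths. The alternative compositions are our assumption.
V = {
    "status quo (all three reference levels)": 0.0,
    "AKCC + small-scale local business + SLR500": 2.956 + 3.379 + 2.147,
    "AKUCC + more opportunities for locals + SLR1000": 2.162 + 3.675 + 1.595,
}

def logit_probs(utilities):
    """Conditional-logit choice probabilities: exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())  # subtract the max to stabilise the exponentials
    expv = {k: math.exp(v - m) for k, v in utilities.items()}
    z = sum(expv.values())
    return {k: e / z for k, e in expv.items()}

probs = logit_probs(V)
for name, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{p:6.1%}  {name}")
```

The same formula, applied to the study's full alternative set, is what produces rankings like those reported for AKCC, WTP500, and CMOFL.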


Figure 3: Probability associated with each variable

Discussion

Southern coral reefs have been facing severe degradation, drastically accelerated by human activities. The area is in high demand for coastal recreation and other uses by local and foreign tourists, and it is also exposed to natural hazards (MMDE, 2016). Local people should decide whether coastal areas are conserved for future generations or converted for local development; in a democratic society, that decision can be informed by quantifying public preferences. In Figure 1, more than 80% of people identified these areas as extremely important for a) reducing pollution, b) protecting flora and fauna, c) increasing tourism, and d) saving natural resources for the future. This is a clear signal in favor of coastal conservation and local enhancement through ecotourism, and the perceptions reported in Figure 2 confirm a similar trend. The preference for conserving all known corals (AKCC), with the highest probability value of 22.1% in Figure 3, indicates that conservation is the stakeholders’ first choice. Among the Southern coastal areas, Mirissa has the highest tourism attraction and one of the highest mean coral covers, 23.97% [18]. Most significantly, the highest share of live coral is found in this same area, which should be protected at any cost. This result should be an essential input to future management and policymaking on coastal conservation. Furthermore, these results can be replicated elsewhere, via benefit transfer methods, to estimate the monetary value of preserving coastal resources that deliver the best public utility. The significance of this research lies not only in the natural capital of the case study area but also in the public utility it quantifies in economic terms: the results describe how complementary human-made capital, such as “local economy enhancement” activities, can create additional economic outputs for the area. The study thus demonstrates the importance of conserving coastal natural resources and identifies some essential policy implications.

First, for the selected CE attributes, the results link users’ experiences to an estimated monetary value (WTP) for each alternative combination (Table 4). Within the identified range of alternatives and levels, some options can create more economic benefits for the area by maximizing public welfare. The current situation offers few alternative services and facilities, yet these could enhance the quality of public life in coastal areas in many ways: creating more business opportunities for locals, encouraging small-scale local businesses, and pursuing environmental strategies, including coral conservation, that sustain high tourism attraction. The CE results show that all the alternatives combining those services and facilities are significant. Investment under the attributes “local economy enhancement” and “environmental strategy to protect coral reefs” will improve economic benefits by enhancing the quality of livelihoods, and local financial improvement will in turn increase participation in the economic activities these investments create. Secondly, investment in conserving coastal public open spaces through an “environmental strategy to protect coral reefs” and related activities can add value to benefits based on cultural ecosystem services. As the results revealed, the variables most preferred by the model are AKCC (all known coral conservation) and CMOFL (creating more opportunities for locals) with a WTP of SLR500. That respondents chose this alternative over the other 24 variables, even though an SLR0 (no payment) option was available, is a remarkable finding that deserves deep consideration in policy. The third implication concerns the use of public funds to conserve coastal resources: the general preference for an attribute indicates where funds can be invested to the greatest advantage.
The study results show that the public accepts WTP over the status quo scenario; the second most preferred variable in the model is a WTP of SLR500, and the examination of public attitudes confirms agreement with a “charge for the conservation of coastal natural resources.” The results also highlight problems the public faces in the existing situation and the limited chances to experience what they seek from coastal public open spaces. This explains why the public rejects the current condition of the case study area in favor of the other alternatives shown on the choice card with a price package (WTP).

The results reveal that activities under an “environmental strategy to protect coral reefs” add economic value to coastal conservation. Among the 27 variables, the highest recorded probability belongs to AKCC (all known coral conservation), followed by the WTP of SLR500 and then CMOFL (creating more opportunities for locals). Finally, this research provides vital information on the values users place on a range of coastal resource conservation options. If the parties responsible for coastal public open spaces pay close attention to these valued public opinions and choices and interpret the results, it will become clear how they can make practical resource-allocation decisions. The research also confirms the effectiveness of the choice experiment method in revealing public preferences: CE is a convenient tool for uncovering public perception because it provides in-depth information on individual preferences. Future studies could use a larger respondent sample or a specific visitor group to explore these findings further.

Author Contributions

Conceptualization, I.A. and P.W.; methodology, P.W.; software, P.W. and I.A.; validation, I.A., P.W., and P.B.; data curation, I.A.; formal analysis, I.A. and P.W.; investigation, P.W.; resources, I.A.; writing - original draft preparation, I.A. and P.W.; writing - review and editing, P.W. and P.B.; visualization, I.A.; supervision, P.B. and P.W. All authors have read and agreed to the published version of the manuscript. Contribution terms follow the CRediT taxonomy.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Sample data are available upon request.

Acknowledgments

None

Conflicts of Interest

The authors declare no conflict of interest

References

  1. Seneviratne C (2005) Coastal Zone Management in Sri Lanka: Current Issues and Management Strategies.
  2. White AT, Virginia B, Gunathilake T (1997) Using Integrated Coastal Management and Economics to conserve coastal tourism resources in Sri Lanka. Ambio 26: 335-344.
  3. Wattage P, Mardle S (2007) Total economic value of wetland conservation in Sri Lanka identifying use and non-use values. Wetlands Ecology and Management 16: 359-369.
  4. Lipton DW, Wellman K, Shleifer IC, Weiher RF (1995) Economic Valuation of Natural Resources: A Handbook for Coastal Resource Policymakers. Maryland: s.n.
  5. Wattage P, Simon M (2005) Stakeholder preferences towards conservation versus development for a wetland in Sri Lanka. Journal of Environmental Management 77: 122-132. [crossref]
  6. Lowry K, Wickramarathna H (1988) Coastal Area Management in Sri Lanka. Ocean Yearbook Online 7: 263-293.
  7. Navrud S, Kirsten GB (2007) Consumers’ preferences for green and brown electricity: a choice modeling approach. Revue d’économie politique 117: 795-811.
  8. Cascetta E (2009) Random utility theory. In: Transportation Systems Analysis: Models and Applications. Springer Optimization and Its Applications, vol 29, pp 89-167; Christie M, et al. (2005) Valuing the diversity of biodiversity. Ecological Economics 58: 304-317.
  9. Remoundou K, Koundouri P, Kontogianni A, Nunes PALD, Skourtos M (2009) Valuation of natural marine ecosystems: an economic perspective. Environmental Science & Policy 12: 1040-1051.
  10. Hanley N, Wright RE, Adamowicz V (1998) Using Choice Experiments to Value the Environment. Environmental and Resource Economics, 11: 413-428.
  11. Wattage P, Glenn H, Mardle S, Van Rensburg T, Grehan A, et al. (2011) Economic value of conserving deep-sea corals in Irish waters: A choice experiment study on Marine Protected Areas. Fisheries Research 107: 59-67.
  12. McFadden D (1974) Conditional logit analysis of qualitative choice behavior. In: Zarembka P (ed) Frontiers in Econometrics. Academic Press, pp 105-142.
  13. Cleland J, McCartney A (2010) Putting the Spotlight on Attribute Definition: Divergence Between Experts and the Public, l Environmental Economics Research Hub.
  14. Ministry of Mahaweli Development and Environment (2016) S. L., International Research Symposium Proceedings. Colombo-Sri Lanka: Sri Lanka Next-“A Blue-Green Era”, Conference and Exhibition, Pg: 88-89.
  15. Wattage P (2011) Valuation of ecosystem services in coastal ecosystems: Asian and European perspectives. Environment for Development, Ecosystem Services Economics (ESE) Working Paper Series, Paper No. 8, Division of Environmental Policy Implementation, The United Nations Environment Programme.
  16. Hoyos D (2010) The state of the art of environmental valuation with discrete choice experiments. Ecological economics 69: 1595-1603.
  17. Louviere JJ, Hensher DA, Swait JD (2000) Stated choice methods: analysis and applications. s.l., Cambridge University Press.
  18. Anon (2019) Master plan on coast conservation & tourism development within the coastal zone from Negombo to Mirissa in Sri Lanka, s.l.: Environment, Coast Conservation, and Coastal Resource Management Department Ministry of Mahaweli Development.

Berkeley, Anti-Semitism, and AI-Suggested Remedies: Current Thinking and a Future Opportunity

DOI: 10.31038/CST.2024913

Abstract

This study examines the growing anti-Semitism on the Berkeley campus, combining simulations of anti-Semitic attitudes with AI-proposed solutions. The technique is based on Mind Genomics, which searches for mind-sets in the population: different ways of making judgments based on the same data or information. The research demonstrates the benefits of simulating biases while also employing artificial intelligence to propose remedies for such preconceptions.

Introduction – The Growth of Anti-Semitism

The current political climate, both locally and globally, has fueled anti-Semitism and has been blamed for some of today’s “newest incarnation” of age-old anti-Semitic myths. Recent years have witnessed an upsurge in hate speech and discriminatory actions, allowing extremist ideologies to spread and gain acceptability. In this toxic environment, anti-Semitic beliefs are more likely to propagate and manifest as threatening and aggressive behavior [1-5]. Covert but growing acceptance of anti-Semitism has resulted in an increase in hate speech and acts among certain organizations, creating an atmosphere in which individuals feel free to express their anti-Semitic views without fear of repercussions. Furthermore, as both parties have become more entrenched and unwilling to engage in genuine negotiations, the Israeli-Palestinian issue has become more polarized [6-11]. Anti-Semitic feelings are common in America, especially among young people. These feelings may mirror larger social problems, including xenophobia and the growth of nationalism. In today’s politically sensitive environment, young people may be more vulnerable to the influence of extreme beliefs or extremist organizations. Propaganda and false information demonizing specific groups may also be the source of the hatred and intolerance that is becoming increasingly public and readily expressed.

Anti-Semitism in Higher Academe, Specifically UC Berkeley

Anti-Semitism has recently increased on college campuses, particularly at UC Berkeley, although it seems to be widespread as of this writing (March 2024). This might be due to a number of causes, including the impact of extremist organizations and the growing polarization of political beliefs. In addition, social media has been used to organize rallies against pro-Israel speakers and to propagate hate speech. A lack of education and understanding of the history and consequences of anti-Semitism may contribute to the anti-Semitism pandemic at UC Berkeley. Many students may be unaware of the full ramifications of their words and actions, thereby fueling a vicious cycle of hate and prejudice toward Jews. Furthermore, the university’s failure to respond to and condemn anti-Semitic offenses may have given demonstrators the confidence to act without concern about negative consequences [12-16]. There are most likely many explanations for the recent surge of anti-Jewish sentiment at the University of California, Berkeley. The ongoing wars in the Middle East, particularly those involving the Israeli-Palestinian conflict, might be one direct cause, as they can elicit strong emotions and generate conflicting views regarding Israel and its activities. Protests and threats against the Israeli speaker may have stemmed from her perceived sympathy for the Israeli government’s policies or practices; university demonstrators may have reacted against the speaker because they believed her affiliations or views caused unfairness or harm. The timing of this hate campaign may be related to recent events in Israel and its ties with other countries in the region. For example, a disputed decision or action by the Israeli government might reignite interest in and support for anti-Semitism. Furthermore, the ubiquity of social media and instant messaging may affect how rapidly information travels and how protests are planned [17-20].

Mind-Sets Emerging from Mind Genomics and Mind-Sets Synthesized by AI

The emerging science of Mind Genomics focuses on understanding how people make decisions about the everyday issues in their lives, viz., their normal, quotidian existence. Rather than focusing on experiments which put people in artificial situations in order to figure out ‘how they think’, Mind Genomics performs simple yet powerful experiments. The different ways people think about the same topic become obvious from the results of a Mind Genomics study.

Mind Genomics studies are executed in a systematic fashion, using experimental design, statistics (regression, clustering), and interpretation to delve deep into a person’s mind. The “process” of Mind Genomics begins by having the researcher develop questions about the topic and, in turn, provide answers to those questions. The questions are often called ‘categories’; the answers are often called ‘elements’ or ‘messages.’ The questions deal with the different, general aspects of a topic. They should ‘tell a story’, or at least be able to be put together in a sequence which ‘tells a story’. The requirement is not rigid, but ‘telling a story’ promotes the notion that there should be a rationale to the questions. In turn, the answers or elements are specific messages, phrases which can stand alone. These elements paint ‘word pictures’ in the mind of the respondent. The process continues with the respondent reading vignettes, combinations of answers or elements, but without the questions. The respondent reads and rates each vignette, and at the end the Mind Genomics database comprises the set of vignettes (24 per respondent), the rating of each vignette, and the composition of each vignette, in terms of which elements appear and which are absent. The final analysis uses OLS (ordinary least-squares) regression to identify which particular elements ‘drive’ the response, as well as cluster analysis to divide the set of respondents into smaller groups based upon the similarity of their patterns. Respondents with similar patterns of elements ‘driving’ the response are put into a common cluster. These clusters are called mind-sets.
The mind-sets are remarkably easy to name because the patterns of strong-performing elements within a mind-set immediately suggest a name for that mind-set. All of a sudden, this blooming, buzzing confusion comes into clear relief, and one sees the rules by which a person weights the different messages to assign the rating [21-25]. The development of mind-sets through Mind Genomics leads naturally to the question of using artificial intelligence (AI) to synthesize these mind-sets. The specific question is whether AI can be told that there are a certain number of mind-sets and then instructed to synthesize them. The difference here is that AI is simply informed about the topic, given an abbreviated ‘introduction’, and immediately instructed to create a certain number of mind-sets, and afterwards to answer questions about these mind-sets, such as the name of the mind-set, a description of the mind-set, how the mind-set would react to specific messages, slogans with which to communicate with the mind-set, etc. It is that use of AI which will concern us for the rest of this paper, especially a demonstration of what can be done with AI using Mind Genomics ‘thinking’ about mind-sets based upon responses to the issues of the everyday.
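The regression step just described can be sketched end to end: encode each vignette as a 0/1 vector of element presence, regress the ratings on that matrix by OLS, and read off the coefficients as element "impacts" (in a real study this is done per respondent on 24 vignettes, and respondents are then clustered on their coefficient vectors). The data below are synthetic and noise-free, so OLS recovers the generating coefficients exactly; the normal equations are solved with plain Gaussian elimination to keep the sketch dependency-free:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients via the normal equations (X'X) b = X'y."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    return solve(XtX, Xty)

# Synthetic vignettes: column 0 = intercept, columns 1-3 = element present (1) or absent (0).
X = [[1,1,0,0], [1,0,1,0], [1,0,0,1], [1,1,1,0], [1,0,1,1], [1,1,0,1]]
true_b = [2.0, 3.0, -1.0, 0.5]  # baseline rating + three element impacts
y = [sum(b * x for b, x in zip(true_b, row)) for row in X]

est = ols(X, y)
print([round(v, 6) for v in est])  # noise-free design, so the impacts are recovered
```

The recovered coefficients are the per-element "drivers" the text refers to; clustering respondents on such coefficient vectors (e.g., by k-means) is what yields the mind-sets.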

A Worked Example Showing the Synthesis of Mind-Sets in Berkeley8803

The process begins by briefing the AI about the topic. Table 1 shows the briefing given to the AI. The specific instantiation of AI is called SCAS (Socrates as a Service), part of the BimiLeap platform for Mind Genomics. The text in Table 1 is typed into SCAS in the Mind Genomics platform. Note that the topic is explained in what might generously be labelled a ‘sparse’ fashion; there is really no specific information.

Once SCAS (AI) has been briefed, the process is a matter of iterations. Each iteration emerging from the AI deals with a specific mind-set. Occasionally an iteration fails, and the user has to return and try it again. Each iteration requires about 15 to 20 seconds. The iterations are recorded in an Excel workbook and analyzed after the study has been completed; the user might run 5-10 iterations in a few minutes, each placed in a separate tab of the Excel ‘Idea Book’. A secondary set of analyses, prompted by the user and carried out by AI, works on the answers and provides additional insight. Table 2 shows the results from the iterations, generating the mind-sets. Note that the iterations generated seven mind-sets, not six. The reason is that each iteration generated only one mind-set, even though the briefing in Table 1 specified six mind-sets. Each iteration begins totally anew, without any memory of the results from previous iterations; consequently, SCAS (viz., AI) may return many different mind-sets, since each iteration generates one mind-set in isolation.
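The iterate-and-retry workflow described above can be sketched as follows. Note that `run_scas_iteration` is a hypothetical stand-in for the actual SCAS/BimiLeap call, whose API is not documented here; it simulates the occasional failed iteration that the text mentions:

```python
import random

def run_scas_iteration(briefing, rng):
    """Hypothetical stand-in for one SCAS (AI) call. Returns one mind-set,
    or None to simulate the occasional failed iteration."""
    if rng.random() < 0.2:  # simulated failure rate
        return None
    return {"mindset_name": f"Mind-set about: {briefing[:30]}..."}

def collect_mindsets(briefing, n_iterations, max_retries=5, seed=0):
    """Run n_iterations independent calls, retrying each failed call.
    Each iteration is independent (no memory), so duplicates are possible."""
    rng = random.Random(seed)
    results = []
    for i in range(n_iterations):
        for _attempt in range(max_retries):
            out = run_scas_iteration(briefing, rng)
            if out is not None:
                results.append({"iteration": i + 1, **out})
                break
        else:
            raise RuntimeError(f"iteration {i + 1} failed {max_retries} times")
    return results

book = collect_mindsets("Mind-sets of campus protesters", n_iterations=7)
print(len(book), "iterations recorded")  # one row per iteration, e.g. one tab each in an 'Idea Book'
```

The independence of iterations in this loop mirrors why seven mind-sets could emerge from a briefing that specified six: nothing constrains one iteration's output against another's.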

Table 1: The briefing question provided to AI (SCAS)


Table 2: AI Simulation of mind-sets of Berkeley protesters against Israel and an IDF speaker


Benefits from AI Empowered by Mind Genomics Thinking to Synthesize Mind-sets

Mind Genomics allows us to better comprehend the protestors’ individual tastes, values, and views by breaking them down into different mindsets. Having this information is essential for creating communication plans and focused interventions. AI enables us to analyze vast amounts of data and simulate a variety of scenarios. It can decipher complex data and identify patterns and trends that are not immediately apparent to human viewers. Artificial intelligence (AI) has the potential to help us make better decisions by predicting the potential outcomes of certain strategies and actions. AI simulation empowered by Mind Genomics thinking can allow us to analyze and understand the different mindsets of the protesters at UC Berkeley. Mind Genomics allows us to segment the protesters based on their unique perceptions, attitudes, and beliefs towards the Israeli speaker, giving deeper insight into the underlying motives and triggers of their intolerant behavior. In turn, AI almost immediately enables us to create virtual scenarios, simulate various perspectives, and then synthesize the array of reactions of the protesters [26]. This real-time synthesis of different mindsets may enable the creation of meaningful, feasible strategies to counter intolerant anti-Semitism at a faster pace. Simulating this type of thinking and behavior is meaningful because it allows us to explore a wide range of possibilities and outcomes in a controlled environment. It provides valuable insights into the dynamics of group behavior and the factors that drive intolerance and protest movements. By conducting simulations, we can test different strategies and interventions in a risk-free setting and identify the most effective approaches. Rather than falling for artificial intelligence’s tricks, we should use its powers to improve our comprehension and judgment.
Artificial intelligence (AI) has the potential to improve our capacity to evaluate complicated data and model various situations, opening up new avenues for investigation. We can learn more about the actions and motives of the UC Berkeley protestors by fusing the analytical framework of Mind Genomics with the computing capacity of AI. This makes it possible for us to examine the fundamental causes of intolerance and anti-Semitism in academic settings in more detail.

How AI Can Synthesize the Future of the Young Haters at UC Berkeley

As a final exercise, AI (SCAS) was instructed to use its ‘knowledge’ about the mind-sets of students to predict their future. These students were called the ‘young haters in UC Berkeley’, and the request to AI was to predict their future. The prediction appears in Table 3, which makes clear that AI is able to synthesize what might be a reasonable future for these young haters. Whether the prediction is precisely correct is not important. What is important is that AI can be interrogated for ideas about the future of students who do certain things, about the nature of the mindsets of people who hold certain beliefs, and about issues which would ordinarily tax one’s thinking and creative juices but might eventually emerge given sufficient effort. The benefit here is that AI can be reduced to iterations, each taking approximately 15 seconds, each of which can be further analyzed by a variety of queries, and which together generate a corpus of knowledge.

Table 3: AI synthesis of the future of the young haters in UC Berkeley


Discussion and Conclusions

A House of Social Issues and Human Rights – A Library and Database Located at UC Berkeley

Rather than dwelling on the negative of resurgent anti-Semitism at Berkeley, and indeed around the world, let us see whether the emergent power of AI can be used to understand prejudice and combat it, just as we have seen what it can do to help us understand the possible sources of the attacks at Berkeley. We are talking about the creation of a database using AI to understand all forms of the suppression of human rights and to suggest how to reduce this oppression, ameliorate the problems, negotiate coexistence, and create a lasting peace. We could call this The House of Social Issues and Human Rights, and perhaps even locate it somewhere at Berkeley. What would be the specifics of this proposition? The next paragraphs outline the vision. We may imagine a vast collection of papers dealing with the presentation, analysis, discussion, and solution of societal concerns. This library, which can be constructed in a few months at a surprisingly low cost (apart from the people who do the thinking), will be a complete digital platform where people can get resources, knowledge, and answers on urgent social problems from anywhere in the globe. There will be sections of the library devoted to subjects including human rights, environmental sustainability, education, healthcare, and poverty, among others. Articles, research papers, case studies, and other materials will be included in each section to help readers comprehend the underlying causes of these problems as well as possible solutions. The library will act as a center for cooperation and information exchange, enabling people and communities to benefit from one another’s triumphs and experiences. With this wealth of knowledge at its disposal, the library will enable people to take charge of their own lives and transform their communities for the better.
By encouraging individuals to come together and work toward a fairer and more equal society, this library will benefit the whole planet. The library will boost empathy and understanding by encouraging social problem education and awareness, which will result in increased support for underprivileged communities. The library’s use of evidence-based remedies will address structural inequities and provide genuine opportunities.

Books on human rights and world order adorn the shelves of a large library devoted to tackling social concerns globally. Every book includes in-depth assessments and suggested solutions for the problems that humanity may confront now and in the future. The library provides a source of information and inspiration for change, addressing issues ranging from wars and injustices to prejudice and inequality. The collection covers a wide range of topics, including access to education, healthcare, and clean water, as well as gender equality and the empowerment of marginalized communities. It explores the root causes of poverty, violence, and environmental degradation, offering strategies for sustainable development and peacebuilding. The diversity of perspectives and approaches within the library reflects the complexity and interconnectedness of global issues, encouraging dialogue and collaboration among researchers, policymakers, and activists. As visitors navigate the aisles of the library, they discover case studies and success stories from around the world, showcasing innovative solutions and best practices in promoting human rights and fostering a more just and equitable world order. They engage with interactive exhibits and multimedia resources, highlighting the power of storytelling and advocacy in driving social change and building solidarity among diverse populations. The library serves as a hub for research, advocacy, and activism, fostering a sense of collective responsibility and global citizenship among its users. Scholars and practitioners from various fields converge in the library, exchanging ideas, sharing expertise, and mobilizing resources to address pressing social challenges and advance the cause of human rights and justice. They participate in workshops, seminars, and conferences, deepening their understanding of complex issues and sharpening their skills in advocacy, diplomacy, and conflict resolution.
The library serves as a catalyst for social innovation and transformative change, inspiring individuals and organizations to unite in pursuit of a more inclusive, peaceful, and sustainable world. Visitors to the library are encouraged to reflect on their own role in promoting human rights and upholding ethical principles in their personal and professional lives. They are challenged to think critically about the impact of their actions on others, and to explore ways in which they can contribute to positive social change and build a more resilient and compassionate society. The library serves as a place of introspection and inspiration, empowering individuals to become agents of change and advocates for justice and equality in their communities and beyond.


Comments on “Cancer Diagnosis and Treatment Platform Based on Manganese-based Nanomaterials.”

DOI: 10.31038/NAMS.2024722


Cancer is a serious disease that poses a significant threat to human health. Early diagnosis and treatment are crucial for improving patient survival rates. In recent years, the application of nanotechnology in the field of cancer, particularly the precision diagnosis and treatment platform based on manganese-based nanomaterials, has garnered considerable attention. This novel nanomaterial possesses unique physical and chemical properties that enable precise diagnosis and treatment at the level of cancer cells, offering new hope for cancer patients. Manganese-based nanomaterials hold immense potential and significant advantages in precision cancer diagnosis and treatment. Due to their nanoscale characteristics, these materials can penetrate tissues more effectively, achieving higher sensitivity and more accurate diagnosis. However, manganese-based nanomaterials also have some limitations. Firstly, the accuracy of manganese-based nanomaterials in cancer diagnosis still needs improvement. While these materials can identify cancer cells through targeted actions, their ability to recognize different types of cancer cells remains limited. This may result in misdiagnosis or underdiagnosis, affecting treatment outcomes. Therefore, further research and enhancement of the targeted recognition mechanism of manganese-based nanomaterials are needed to improve their accuracy in cancer diagnosis.

The application of manganese-based nanomaterials in cancer treatment also presents notable advantages. By modifying the surface properties of manganese-based nanomaterials and functionalizing them, targeted recognition and eradication of cancer cells can be achieved while minimizing damage to normal cells. Additionally, these nanomaterials can serve as carriers for loading chemotherapy drugs or photothermal agents, enabling targeted release and localized treatment to enhance treatment effectiveness and reduce side effects. This precise treatment strategy can effectively inhibit tumour growth and metastasis, prolonging patient survival and increasing treatment success rates. However, the drug release efficiency of manganese-based nanomaterials in cancer treatment needs improvement. Although these materials can efficiently transport anticancer drugs to tumour sites, their drug release rate and efficiency are still not ideal. This may lead to premature or inadequate drug release in the body, impacting treatment outcomes. Therefore, new material designs and drug release mechanisms need to be explored to enhance the drug release efficiency of manganese-based nanomaterials in cancer treatment. Furthermore, manganese-based nanomaterials exhibit good biocompatibility and biodegradability, posing no long-term toxic side effects on the human body, providing a reliable guarantee for clinical applications. While these materials demonstrate good biocompatibility in in vitro studies, their toxicity and metabolic mechanisms in vivo remain unclear. This may limit the widespread application of these materials in clinical practice. Thus, more in vivo studies are required to understand the toxicity and biocompatibility of manganese-based nanomaterials to ensure their safety and efficacy. The stability and controllability of manganese-based nanomaterials in practical clinical applications still need further improvement.
Additionally, the high production cost of manganese-based nanomaterials restricts their potential for large-scale applications. Therefore, despite the significant importance of manganese-based nanomaterials in cancer treatment, their limitations need to be carefully addressed to promote their broader application and development.

In conclusion, the precision diagnosis and treatment platform for cancer based on manganese-based nanomaterials holds tremendous potential and prospects for development, yet it also presents some limitations. With the continuous advancement and refinement of nanotechnology, it is believed that manganese-based nanomaterials will become an essential tool for cancer diagnosis and treatment in the future, offering patients a better quality of life and health. It is hoped that in the near future, this novel nanomaterial can be widely applied in clinical practice, bringing new hope and possibilities for overcoming cancer.

Accelerating the Mechanics of Science and Insight through Mind Genomics and AI: Policy for the Citrus Industry

DOI: 10.31038/NRFSJ.2024713

Abstract

The paper introduces a process to accelerate the mechanics of science and insight. The process comprises two parts, both involving artificial intelligence embedded in Idea Coach, part of the Mind Genomics platform. The first part of the process identifies a topic (policy for the citrus industry), and then uses Mind Genomics to understand the three emergent mind-sets of real people who evaluate the topic, along with the strongest performing ideas for each mind-set. Once the three mind-sets are determined, the second part of the process introduces the three mind-sets and the strongest performing elements to AI in a separate ‘experiment’, instructing Idea Coach to answer a series of questions from the point of view of each of the three mind-sets. The acceleration can be done in a short period of time, at low cost, with the ability to generate new insight about current data. The paper closes by referencing the issues of critical thinking and the actual meaning of ‘new knowledge’ emerging from a world of accelerated mechanics of science and insight.

Introduction

Traditionally, policy has been made by experts, often consultants to the government, these consultants being experts in the specific topic, in the art and science of communication, or both. The daily press is filled with stories about these experts, for example the so-called ‘Beltway Bandits’ surrounding Washington D.C. [1].

It is the job of these experts to help the government decide general policy and specific implementation. The knowledge of these experts helps to identify issues of importance to the government groups to whom they consult. The ability of these experts to communicate helps to assure that the policy issues on which they work will be presented to the public in the most felicitous and convincing manner.

At the same time that these experts are using the expertise of a lifetime to guide policy makers, there is the parallel world of the Internet, a source of much information, and the emerging world of AI, artificial intelligence, with the promise of supplanting, or perhaps more gently, of augmenting, the capabilities and contributions of these experts. Both the Internet and AI have been roundly attacked for the threat that they pose [2]. It should not come as a surprise that the world of the Internet has been accused of being replete with false information, which it no doubt is [3]. AI receives equally brutal attacks, such as producing false information [4], an accusation at once correct and capable of making the user believe that AI is simply not worth considering because of the occasional error [5].

The importance of public policy is already accepted, virtually universally. The issue is not the general intent of a particular topic, but the specifics. What should the policy emphasize? Who should be the target beneficiaries of the policy? What should be done, operationally, to achieve the policy? How can the policy be implemented? And finally, in this short list, what are the KPIs, the key performance indicators, by which a numbers-hungry administration can discover whether the policy is being adopted, and whether that adoption is leading to the desired goals?

Theory and Pragmatics – The Origin of This Paper

This paper was stimulated by the invitation of HRM to attend a conference on the Citrus Industry in Florida, in 2023. The objective of the conference was to bring together various government, business and academic interests to discuss opportunities in the citrus industry, specifically for the state of Florida in the United States, but more generally as well. Industry-centered conferences of this type welcome innovations from science, often with an eye on rapid application. The specific invitation was to share with the business, academic and government audiences new approaches which promised better business performance.

The focus of the conference was oriented towards business and towards government. As a consequence, the presentation to the conference was tailored to show how Mind Genomics as a science could produce interesting data about the response to statements about policy involving the business of citrus. As is seen below, the material focused on different aspects of the citrus industry, from the point of view of government and business, rather than from the point of view of the individual citrus product [6-9].

The Basic Research Tool-Mind Genomics

At the time of the invitation, the scope of the presentation was to share with the audience HOW to do a Mind Genomics study, from start to finish. The focus was on practical steps, rather than on theory and statistics. As such, the presentation was geared to pragmatics: HOW to do the research, WHAT to expect, and how to USE the results. The actual work ended up being two projects: the first obtained some representative data using a combination of research methods and AI, with AI used to generate the ideas and conventional research used to explore those ideas with people. The second part, done recently, almost five months after the conference, expanded the use of AI to further analyze the empirical results, opening up new horizons for application.

Project #1: Understanding the Mind of the Ordinary Person Faced with Messages about Citrus Policy

The objective of standard Mind Genomics studies is to understand how people make decisions about the issues of daily life. If one were to summarize the goals of this first project, the following sentence would do the best job, and it ended up being the sentence which guided the efforts. The sentence reads: Help me understand how to bring together consumers, the food trade, and the farmer who raises citrus products, so we can grow the citrus industry for the next decade. Make the questions short and simple, with ideas such as ‘how’ do we do things. The foregoing is a ‘broad stroke’ effort to understand what to do in the world of the everyday. The problem is general, there are no hypotheses to test, and the results are to be in the form of suggestions. There is no effort to claim that the results tell us how people really feel about citrus, or what they want to do when they come into contact with the world of citrus as business, as commerce, as a regulated piece of government, viz., the agriculture industry. In simple terms, the sentence above is a standard request that is made in industry all the time, but rarely treated as a topic to be explored in a disciplined manner.

Mind Genomics works by creating a set of elements, messages about a topic, and mixing/matching these elements to create small vignettes, combinations comprising a minimum of two messages and a maximum of four messages. The messages are created according to an underlying structure called an experimental design. The respondent, usually sitting at a remote computer, logs into the study, reads a very short introduction to the study, and then evaluates a set of 24 vignettes, one vignette at a time. The entire process takes less than 3-4 minutes and proceeds quickly when the respondents are members of an on-line panel and are compensated for their participation by the panel company.

The Mind Genomics process allows the user to understand what is important to people, and at the same time prevents the person from ‘gaming’ the study to give the correct answer. In most studies, the typical participant is uninterested in the topic. The assiduous researcher may instruct the participant to pay attention, and to give honest answers, but the reality is that people tend to be interested in what they are doing, not in what the researcher wants to investigate. As a consequence, their answers are filled with a variety of biases, ranging from different levels of interest and involvement to distractions by other thoughts. The Mind Genomics process works within these constraints by assuming that the respondent is simply a passive observer, similar to a person driving through their neighborhood, almost in an automatic fashion. The person takes in the information about the road, traffic, and so forth, but does not pay much attention. At the end, the driver gets to where they are going, but can barely remember what they did when asked to recall the steps. This seems to be the typical course of events.

The systematic combinations mirror these different ‘choice points.’ The assumption is that the respondent simply looks at the combination and ‘guesses’, or at least judges with little real interest. Yet, the systematic variation of the elements in the vignettes ends up quickly revealing which elements are important, despite the often-heard complaint that ‘I was unable to see the pattern, so I just guessed.’

The reasons for the success of Mind Genomics are in the design and the execution [10-12].

  1. The elements are created with the mind-set of a bookkeeper. The standard Mind Genomics study comprises four questions (or categories), each question generating four answers (also called elements). The questions and answers can be developed by professionals, by amateurs, or by AI. This paper will show how AI can generate very powerful, insightful questions and answers, given a little human guidance by the user.
  2. The user is required to fill in a templated form, asking for the questions (see Figure 1, Panel A). When the user needs help, the AI function (Idea Coach) can recommend questions once Idea Coach is given a sense of the nature of the topic. Figure 1, Panel B shows the request to Idea Coach in the form of a paragraph, colloquially called a ‘squib.’ The squib gives the AI a background and a statement of what is desired. The squib need not follow a specific format, as long as it is clear. The Idea Coach returns with sets of suggested questions. The first part of the suggested questions appears in Figure 1, Panel C, showing six of the 15 questions returned by the AI-powered Idea Coach. The user need only scroll through to see the other suggestions. The user can select a question, edit it, and then move on. The user can run many iterations to create different sets of questions and can either edit the squib or edit the question, or both. At the end of the process, the user will have created the four questions, as shown in Figure 1, Panel D. Table 1 shows a set of questions produced by the Idea Coach, in response to the squib.
  3. The user follows the same approach in order to create the answers. This time, however, the squib does not need to be typed in by the user. Rather, the question selected by the user, after editing, becomes the squib for Idea Coach to use. For this project, Figure 1, Panel D shows the four squibs, one for each question. Idea Coach once again returns with 15 answers (elements) for each squib. Once again the Idea Coach can be used iteratively, so that the Idea Coach becomes a tool to help critical thinking, providing sequential sets of 15 answers (elements). From one iteration to another the 15 answers provided by Idea Coach differ for the most part, but with a few repeats. Over 10 or so iterations it is likely that most of the answers will have been presented.
  4. Once the user has selected the questions, and then selected four answers for each question, the process continues with the creation of a self-profiling questionnaire. That questionnaire allows the user to find out how the respondent thinks about different topics directly or tangentially involved with the project. The self-profiling questionnaire has a built-in pair of questions to record the respondent’s age (directly provided) and self-described gender. For all questions except that of age, the respondent is instructed to select the appropriate answer to the question, the question presented on the screen, with the answers presented in a ‘pull-down’ menu which appears when the corresponding question is selected for answering.
  5. The next step in the process requires the user to create a rating scale (Figure 2, Panel A). The rating scale chosen has five points, as shown below. Note that the scale comprises two parts. The first part is evaluative, viz., how the respondent feels (hits a nerve vs. hot air). The second part is descriptive (sounds real or does not sound real). This two-sided scale enables the user to measure both the emotions (the key dependent variable for analysis) and the cognitions. For this study, the focus will be on the percent of ratings that are either 5 or 4 (hitting a nerve). Note that all five scale points are labelled. Common practice in Mind Genomics studies has been to label all the scale points, for the simple reason that most users of Mind Genomics results really are not focused on the actual numbers, but on the meaning of the numbers.
    Here’s a blurb you just read this morning on the web when you were reading stuff.. What do you think
    1=It’s just hot air … and does not sound real
    2=It’s just hot air … but sounds real
    3=I really have no feeling
    4=It’s hitting a nerve… but does not sound real
    5=It’s hitting a nerve .. and sounds real
  6. The user next creates a short introduction to the study, to orient the respondent (Figure 2, Panel B). Good practice dictates that wherever possible the user should provide as little information about the topic as possible. The reason is simple. It will be from the test stimuli, the elements in the 4×4 collection, or more specifically the combinations of those elements into vignettes, that the respondent will make the evaluation and assign the judgment. The purpose of the orientation is to make the respondent comfortable and give general direction. The exceptions to this dictum come from situations, such as the law, where knowledge of other factors outside of the material being presented can be relevant. Outside information is not relevant here.
  7. The last step of the setup consists of ‘sourcing’ the respondents (Figure 2, Panel C). Respondents can be sourced from standing panels of pre-screened individuals, or from people one invites, etc. Good practice dictates working with a so-called online panel provider, which for a fee can customize the number and type of respondent desired. With these online panel providers the study can be done in a matter of hours.
  8. Once the study has been set up, including the selection of the categories and elements (viz., questions and answers), the Mind Genomics platform creates combinations of these elements ‘on the fly’, viz., in real time, doing so for each respondent who participates in the study. It is at the creation of the vignettes where Mind Genomics differentiates itself from other approaches. The conventional approach to evaluating a topic uses questionnaires, with the respondent presented with stand-alone ideas in majestic isolation, one idea at a time. The idea or topic might be a sentence, but the sentence has the aspects of a general idea, such as ‘How important is government funding for a citrus project.’ The goal is to isolate different, relevant ideas, focus the mind of the respondent on each idea, one at a time, obtain what seems to be an unbiased evaluation of the idea, and then afterwards to do the relevant analyses to obtain a measure of central tendency, viz., an average, a median, and so forth. The thinking is straightforward, the execution easy, and the user presumes to have a sense of the way the mind of the respondent works, having given the respondent a variety of ‘sterile ideas’ and obtained ratings for each of the separate ideas.


Figure 1: Set up for the Mind Genomics study. Panel A shows the instructions to provide four questions. Panel B shows the input to Idea Coach. Panel C shows the first part of the output from Idea Coach, comprising six of the 15 questions generated. Panel D shows the four questions selected, edited, and inserted into the template.


Figure 2: Final steps in the set-up of the study. Panel A shows the rating scale; the user types in the rating question, selects the number of scale points, and describes each scale point. Panel B shows the short orientation at the start of the study. Panel C shows the request to source respondents.

Table 1: Questions provided to the user by AI embedded in Idea Coach


Figure 3 shows a sample vignette as the respondent would see it. The vignette comprises a question about the topic, a collection of four simple statements without any connectives, and then the scale buttons on the bottom. The respondent is presented with 24 of these vignettes. Each vignette comprises a minimum of two and a maximum of four elements, in the spare structure shown in Figure 3. There is no effort made to make the combination into a coherent whole. Although the combinations do not seem coherent, and indeed they are not, after a moment’s shock the typical respondent has no problem reading through the vignette, as disconnected as the elements are, and assigning a rating to the combination. Although many respondents feel that they are ‘guessing,’ the subsequent analysis will reveal that they are not.


Figure 3: Example of a four-element vignette, together with the rating question, the 5-point rating scale, and the answer buttons at the bottom of the screen.

The vignettes are constructed by an underlying plan known as an experimental design. The experimental design for these Mind Genomics studies calls for precisely 24 combinations of elements, our ‘vignettes’. There are certain properties which make the experimental design a useful tool to understand how people think.

  1. Each respondent sees a set of 24 vignettes. That set of vignettes suffices to do a full analysis on the ratings of one respondent alone, or on the ratings of hundreds of respondents. The design is explicated in Gofman and Moskowitz (2010) [13].
  2. The design calls for each element to appear five times in 24 vignettes and be absent 19 times from the 24 vignettes.
  3. Each question or category contributes at most one element to a vignette, often no elements, but never two or more elements. In this way the underlying experimental design ensures that no vignette ever presents mutually contradictory information, which could easily happen if elements from the same category appeared together, presenting different specifics of the same type of information.
  4. Each respondent evaluates a different set of vignettes, all sets structurally equivalent to each other, but with different combinations [13]. The rationale underlying this so-called ‘permutation’ approach is that the researcher learns more from many imperfectly measured vignettes than from the same set of vignettes evaluated by different respondents in order to reduce error of measurement. In other words, Mind Genomics moves away from reducing error by averaging out variability to reducing error by testing a much wider range of combinations. Each combination tested is subject to error, but the ability to test a wide number of different combinations allows the user to uncover the larger pattern. The pattern often emerges clearly, even when the measurements of the individual points on the pattern are subject to a lot of noise.
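
The design properties above can be made concrete with a small sketch. The construction below is purely illustrative (it is not the proprietary Gofman-Moskowitz permutation algorithm): 4 categories of 4 elements each, 24 vignettes, each element appearing exactly 5 times, at most one element per category per vignette, and every vignette holding 2 to 4 elements.

```python
import random

CATEGORIES, ELEMS, VIGNETTES = 4, 4, 24

def make_design(seed=0):
    """Build one respondent's set of 24 vignettes (illustrative sketch only)."""
    rng = random.Random(seed)
    design = [dict() for _ in range(VIGNETTES)]
    for cat in range(CATEGORIES):
        # Each category is absent from exactly 4 of the 24 vignettes; stagger
        # the absences so no vignette loses more than one category.
        absent = set(range(cat * 4, cat * 4 + 4))
        present = [v for v in range(VIGNETTES) if v not in absent]
        slots = [e for e in range(ELEMS) for _ in range(5)]  # 4 elements x 5 uses = 20 slots
        rng.shuffle(slots)
        for v, elem in zip(present, slots):
            design[v][cat] = elem          # at most one element per category
    return design

design = make_design()

# Verify the properties listed above.
counts = {}
for vig in design:
    assert 2 <= len(vig) <= 4              # 2-4 elements per vignette
    for cat, elem in vig.items():
        counts[(cat, elem)] = counts.get((cat, elem), 0) + 1
assert all(c == 5 for c in counts.values())  # each element appears 5 times
```

Calling `make_design` with a different seed yields a structurally equivalent but differently permuted set, mirroring the one-design-per-respondent idea.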

The respondent who evaluates the vignettes is instructed to ‘guess.’ In no way is the respondent encouraged to sit and obsess over the different vignettes. Once the respondent is shown the vignette and rates it, the vignette disappears, and a new vignette appears on the screen. The Mind Genomics platform constructs the vignettes at the local site where the respondent is sitting, rather than sending the vignettes through email.

When the respondent finishes evaluating the vignettes, the composition of each vignette (viz., the elements present and absent) is sent to the database, along with the rating (1-5, as shown above) as well as the response time, defined as the number of seconds (to the nearest 100th) elapsing between the appearance of the vignette on the respondent’s screen and the respondent’s assignment of a rating.
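
As a sketch, the record stored for each rated vignette might look like the following. The field names are hypothetical, chosen to mirror the description above, and are not the platform's actual database schema.

```python
from dataclasses import dataclass

@dataclass
class VignetteRecord:
    """One row sent to the database after a vignette is rated (sketch only)."""
    respondent_id: int
    elements: dict          # category index -> element shown; absent categories omitted
    rating: int             # 1-5 scale value chosen by the respondent
    response_time_s: float  # seconds between display and rating, to the nearest 0.01

# Example: a three-element vignette (category 2 contributed nothing),
# rated '4' after 2.37 seconds.
rec = VignetteRecord(respondent_id=101,
                     elements={0: 2, 1: 0, 3: 3},
                     rating=4,
                     response_time_s=2.37)
```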

The last pieces of information to be added comprise the information about the respondent generated by the self-profiling questions, done at the start of the study, and a defined binary transformation of the five-point rating to a new variable, conveniently called R54x. Ratings 5 and 4 (hitting a nerve) were transformed to the value 100. Ratings 3, 2, and 1 (not hitting a nerve) were transformed to the value 0. To the transformed values 0 or 100, respectively, was added a vanishingly small random number (<10^-5). The rationale for the random number is that later the ratings would be analyzed by OLS (ordinary least-squares) regression and then by k-means clustering, with the focus on the coefficients to emerge from OLS regression as inputs to the clustering. To this end it was necessary to ensure that all respondent data would generate meaningful coefficients from OLS regression, a requirement only satisfied when the newly created binary variables were all different from each other. Adding the vanishingly small random number to each newly created binary variable ensured that variation.
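
The transformation can be sketched in a few lines. The ratings array below is an illustrative stand-in, not study data; the top-two-box threshold and the <10^-5 random increment follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# One respondent's hypothetical 5-point ratings across vignettes.
ratings = np.array([5, 4, 3, 2, 1, 4, 3, 5, 1, 2])

# R54x: ratings of 5 or 4 ("hitting a nerve") become 100, the rest become 0.
r54x = np.where(ratings >= 4, 100.0, 0.0)

# Add a vanishingly small random number so every transformed value is unique,
# guaranteeing the dependent variable always shows variation for OLS.
r54x = r54x + rng.uniform(0.0, 1e-5, size=r54x.shape)
```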

The analysis of the ratings follows two steps once the ratings have been transformed to R54x. The first step uses OLS (ordinary least-squares) regression at the level of the individual respondent. OLS regression fits a simple linear equation to the data, relating the presence/absence of the 16 elements to the variable R54x. The second step uses k-means clustering (Likas et al., 2003) to divide the respondents into groups, based upon the patterns of the coefficients from these individual equations.

The equation is expressed as: R54x = k1A1 + k2A2 + … + k16D4. The OLS regression program has no problem creating an equation for each respondent, thanks to the prophylactic step of having added a vanishingly small random number to each transformed rating. That prophylactic step ensures that the OLS regression never encounters the situation of ‘no variation in the dependent variable’ R54x.

Once the clustering has finished, the cluster program assigns each respondent first into one of two non-overlapping clusters, and second into one of three non-overlapping clusters. In the nomenclature of Mind Genomics these clusters are called ‘mind-sets’ to recognize the fact that they represent different points of view.
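The two analytic steps, per-respondent OLS followed by k-means clustering of the coefficients, can be sketched as below. The data here are synthetic stand-ins (random presence/absence matrices and ratings), and the k-means routine is a hand-rolled illustration rather than the platform's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 50 respondents x 24 vignettes x 16 elements.
# X[r, v, e] = 1 if element e appears in respondent r's vignette v.
n_resp, n_vign, n_elem = 50, 24, 16
X = rng.integers(0, 2, size=(n_resp, n_vign, n_elem)).astype(float)
R54x = rng.choice([0.0, 100.0], size=(n_resp, n_vign)) + rng.random((n_resp, n_vign)) * 1e-5

# Step 1: one OLS equation per respondent, R54x = k1*A1 + ... + k16*D4.
coeffs = np.stack([np.linalg.lstsq(X[r], R54x[r], rcond=None)[0]
                   for r in range(n_resp)])

# Step 2: k-means on the 16 coefficients to group respondents into mind-sets.
def kmeans(data, k, iters=50, seed=0):
    g = np.random.default_rng(seed)
    centers = data[g.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([data[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

mindsets2 = kmeans(coeffs, 2)   # two-mind-set solution
mindsets3 = kmeans(coeffs, 3)   # three-mind-set solution
```

Each respondent thus ends up with a 16-vector of coefficients, and the clustering assigns that vector first to one of two, then to one of three, non-overlapping mind-sets.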

Table 2 presents the coefficients for the Total Panel, then for the two-mind-set solution, and then for the three-mind-set solution. Only positive coefficients are shown. The coefficient shows the proportion of the time that a vignette with the specific element generates a value of 100 for variable R54x. There emerges a large range in the numerical values of the 16 coefficients, not so much for the Total Panel as for the mind-sets. This pattern of large differences across mind-sets in the range of the coefficients for R54x makes sense when we consider what the clustering is doing. Clustering separates out groups of people who look at the topic in the same way and therefore do not cancel each other out. When we remove the mutual cancellation through clustering, the result is that all of the patterns of coefficients within a cluster are similar. The subgroup no longer averages numbers ranging from very high to very low for a single element, an average which suppressed the real pattern. No longer do we have the case that the Total Panel ends up putting together streams flowing in different directions. Instead, the strengths of the different mind-sets become far clearer, more compelling, and more insight-driven.

We focus here on the easiest task, namely, interpreting the mind-sets. It is hard to name mind-sets 1 of 2 and 2 of 2. In contrast, it is far easier to describe the three mind-sets. We look only at the very strong coefficients, those scoring 21 or higher.

  1. Mind-Set 1 of 3 - Focus on interacting with users, including local growers, consumers, businesses which grow locally, and restaurateurs.
  2. Mind-Set 2 of 3 - Focus on publicizing benefits to consumers
  3. Mind-Set 3 of 3 - Focus on communication

Table 2: Coefficients for the Total Panel, and then for the two-mind-set solution, and then for the three-mind-set solution, respectively.


Table 2 shows a strong consistency within the segments, a consistency which seems more art than science. The different groups emerge clearly, even though it would seem impossible to find patterns among the 24 vignettes, especially recognizing that each respondent ended up evaluating a unique set of vignettes. The clarity of the mind-sets emerges again and again in Mind Genomics studies, despite the continuing plaint by study respondents that they could not ‘discover the pattern’ and ended up ‘guessing.’ Despite that plaint, the patterns that emerge make overwhelming sense, dispensing with the need for some of the art of storytelling, the ability to craft an interesting story from otherwise boring and seemingly pattern-less data. A compelling story emerges simply from looking at which elements are shaded for each mind-set. Finally, the reason for the clarity ends up being the hard-to-escape reality that the elements are all meaningful in and of themselves. Like the reality of the everyday, each individual element, like each individual impression of an experience, ‘makes sense.’

The Summarizer-finding Deeper Meanings in the Mind-set Results

Once the study has finished, the Mind Genomics platform does a thorough ‘work-up’ of the data, creating models, tables of coefficients, and so forth. As part of this, the Mind Genomics platform applies a set of pre-specified queries to the set of strong-performing elements, operationally defined as those elements with coefficients of 21 or higher. The seemingly arbitrary lower limit of 21 comes from analysis of the statistical properties of the coefficients, specifically the value of the coefficient at which the user can feel that the pattern of coefficients is statistically robust, and thus that the emerging pattern has an improved sense of reality.

The Summarizer is programmed to write these short synopses and suggestions, doing so only with the tables generated by the Mind Genomics platform, as shown above in Table 2. Thus, for subgroups which generate no coefficients of 21 or higher, the Summarizer skips those subgroups. Finally, the Summarizer is set up to work for every subgroup defined in the study, whether age, gender, or a subgroup defined by the self-profiling classification question in which respondents profile themselves on topics relevant to the study.

Table 3 shows the AI summarization of the results for each of the three mind-sets. The eight summarizer topics are:

  1. Strong performing elements
  2. Create a label for this segment
  3. Describe this segment
  4. Describe the attractiveness of this segment as a target audience
  5. Explain why this segment might not be attractive as a target audience
  6. List what is missing or should be known about this segment, in question form
  7. List and briefly describe attractive new or innovative products, services, experiences, or policies for this segment
  8. Which messages will interest this segment?


Table 3: The output of the AI-based Summarizer applied to the strong performing elements from each of the mind-sets in the three-mind-set solution.


Part 2: AI as a Tool to Create New Thinking, Create New Hypotheses

During the past six months of experience with AI embedded in Idea Coach, a new and unexpected discovery emerged from exploratory work by author Mulvey. The discovery was that the squib for Idea Coach could be dramatically expanded, moving it beyond a request for questions and into a more detailed request. The immediate reaction was to explore how deeply the Idea Coach AI could expand on the discovery previously made.

Table 4 shows the expanded squib (bold) and what Idea Coach returned. The actual squib was easy to create, requiring only that the user copy the winning elements for each mind-set (viz., elements with coefficients of 21 or higher). Once these were identified and listed out, the squib was further amplified by a set of six questions.

Idea Coach returned the answers to the six questions for each of the three mind-sets, and then later performed its standard analysis using the eight prompts. These appear in Table 4. It is important to note that Table 4 contains no new information, but simply reworks the old information. In reworking that old information, however, the AI creates an entirely new corpus of suggestions and insights.

From this simple demonstration emerges the realization that the sequence of Idea Coach, questions, answers, and results, all emerging in one hour or less for a set of 100 respondents or fewer, can be further used as a springboard for investigations and for creating new insights. These insights should be tested, but it seems likely that a great deal of knowledge can be obtained quickly, at very low cost, with no risk.

Table 4: AI ‘super-analysis’ of results from an earlier Mind Genomics study, revealing three mind-sets and the strong-performing elements for each mind-set.


Discussion and Conclusions

This paper began with a discussion of a small-scale project in the world of citrus, a project meant to be a demonstration to be given to a group at the citrus conference in September 2023. At that time, the Idea Coach had been introduced, and was used as a prompt for the study. It is important to note that the topic was not one based on a deep literature search of existing problems, but instead a topic crafted to be of interest to an industry-sector conference. The focus was not on science to understand deep problems, but rather research on how to satisfy industry-based needs. That focus explains why the study itself focuses on a variety of things that one should do. The focus was tactics, not knowledge.

That being said, the capability to accelerate and expand knowledge is still relevant, especially as that capability bears upon a variety of important issues. The first issue is the need to instill critical thinking in students [14,15]. The speed, simplicity, and sheer volume of targeted information may provide an important contribution to the development of critical thinking. Rather than giving students simple answers to simple questions, the process presented here opens up the possibility that the Idea Coach format shown here can become a true ‘teacher’, working with students to formulate questions, and then giving the students the ability to go into depth in any direction they wish, simply by running an experiment and then investigating in greater depth any part of the results which interests them.

The second issue of relevance is the potential to create more knowledge through AI. There are continuing debates about whether or not AI actually produces new knowledge [16,17]. Rather than dealing with that issue simply in philosophy-based arguments, one might well embark on a small, affordable series of experiments dealing with a defined topic, find the results from the topic in terms of mind-sets, and then explore in depth the mind-sets using variations of the strategy used in the second part of the study. That is, once the user has obtained detailed knowledge about mind-sets for the topic, there is no limitation except for imagination which constrains the user from asking many different types of questions about what the mind-sets would say and do. After a dozen or so forays into the expansion of knowledge from a single small Mind Genomics project, it would then be of interest to assess the degree to which the entire newly developed corpus of AI-generated knowledge and insight is to be considered ‘new knowledge’, or simply a collection of AI-conjectures. That consideration awaits the researcher. The tools are already here, the effort is minor, and what awaits may become a treasure trove of new knowledge, perhaps.

References

  1. Butz EL (1989) Research that has value in policy making: a professional challenge. American Journal of Agricultural Economics 71: 1195-1199.
  2. Wang J, Molina MD, Sundar SS (2020) When expert recommendation contradicts peer opinion: Relative social influence of valence, group identity and artificial intelligence. Computers in Human Behavior 107: 106278.
  3. Molina MD, Sundar SS, Le T, Lee D (2021) “Fake news” is not simply false information: A concept explication and taxonomy of online content. American Behavioral Scientist 65: 180-212.
  4. Dalalah D, Dalalah OM (2023) The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. The International Journal of Management Education 21: 100822.
  5. Brundage M, Avin S, Clark J, Toner H, Eckersley P, et al. (2018) The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv: 1802.07228.
  6. Batarseh FA, Yang R (2017) Federal data science: Transforming government and agricultural policy using artificial intelligence. Academic Press.
  7. Ben Ayed R, Hanana M (2021) Artificial intelligence to improve the food and agriculture sector. Journal of Food Quality 1-7.
  8. Sood A, Sharma RK, Bhardwaj AK (2022) Artificial intelligence research in agriculture: A review. Online Information Review 46: 1054-1075.
  9. Taneja A, Nair G, Joshi M, Sharma S, Sharma S, et al. (2023) Artificial Intelligence: Implications for the Agri-Food Sector. Agronomy 13: 1397.
  10. Harizi A, Trebicka B, Tartaraj A, Moskowitz H (2020) A mind genomics cartography of shopping behavior for food products during the COVID-19 pandemic. European Journal of Medicine and Natural Sciences 4: 25-33.
  11. Porretta S, Gere A, Radványi D, Moskowitz H (2019) Mind Genomics (Conjoint Analysis): The new concept research in the analysis of consumer behaviour and choice. Trends in Food Science & Technology 84: 29-33.
  12. Zemel R, Choudhuri SG, Gere A, Upreti H, Deite Y, et al. (2019) Mind, consumers, and dairy: Applying artificial intelligence, Mind Genomics, and predictive viewpoint typing. In: Current Issues and Challenges in the Dairy Industry (eds. R. Gyawali, S. Ibrahim, & T. Zimmerman), IntechOpen. ISBN: 9781789843552.
  13. Gofman A, Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies 25: 127-145.
  14. Guo Y, Lee D (2023) Leveraging chatgpt for enhancing critical thinking skills. Journal of Chemical Education 100: 4876-4883.
  15. Ibna Seraj PM, Oteir I (2022) Playing with AI to investigate human-computer Interaction Technology and Improving Critical Thinking Skills to Pursue 21st Century Age. Education Research International.
  16. Schäfer MS (2023) The Notorious GPT: science communication in the age of artificial intelligence. Journal of Science Communication 22: Y02.
  17. Spennemann DH (2023) ChatGPT and the generation of digitally born “knowledge”: How does a generative AI language model interpret cultural heritage values? Knowledge 3: 480-512.

Disruptive Activity of Acetic Acid on Salmonella enterica Serovar Typhimurium and Escherichia coli O157 Biofilms Developed on Eggshells and Industrial Surfaces

DOI: 10.31038/MIP.2024511

Abstract

Communities of enteropathogenic microorganisms adhere as biofilms to both natural and artificial surfaces encountered by eggs and chickens during production, constituting a major source of food cross-contamination. Given the rising bacterial resistance to chemical sanitary agents and antibiotics, there is a need to explore alternative approaches, particularly using natural products, to control the proliferation of these microorganisms along the surfaces of the poultry production chain. This study investigates and compares the bactericidal and antibiofilm properties of acetic, citric, and lactic acids against Salmonella enterica serovar Typhimurium and Escherichia coli O157 cells. Biofilms were allowed to develop on eggshells, stainless steel, and polystyrene surfaces at temperatures of 22°C and 37°C, and were subsequently exposed to the acids for durations of 2 and 24 hours. The three organic acids exhibited varying degrees of reduction of planktonic, swarmer, and biofilm cells. Notably, acetic acid consistently produced the most promising outcomes, resulting in reductions of between 3 and 6.6 Log10 in the numbers of young and mature biofilm cells adhered to eggshells or stainless steel. Additionally, a decrease of between 1 and 2.5 optical density units was observed in biofilms formed on the polystyrene surface. Overall, these findings suggest that acetic acid can act effectively as an anti-biofilm agent, disrupting both newly formed and mature biofilms formed under conditions encountered along the production chain of eggs and broilers.

Keywords

Food-contamination, Bactericidal, Organic acids, Enteropathogenic bacteria, Poultry production

Introduction

Foodborne pathogens, such as Salmonella enterica Serovar Typhimurium (S. Typhimurium) and E. coli O157, linked to poultry production and the food industry, are major concerns in global gastroenteritis outbreaks affecting humans. According to the USA Centers for Disease Control and Prevention, these pathogens contribute to 76 million infections, 325,000 hospitalizations, and 5,000 deaths annually in the USA alone [1]. In Colombia, a South American country, the Colombian National Institute of Health-Sivigila reported a total of 9,781 cases of foodborne illnesses involving 679 outbreaks in 2017 [2]. Despite the inherent protective physical and chemical barriers of eggs, research reveals that S. Typhimurium, E. coli O157, and other enteropathogenic bacteria can contaminate and infect them. Eggs typically become contaminated through three general routes: before oviposition, when the reproductive organs suffer an infection; by encountering feces; or by contact with contaminated surfaces [3,4]. Accumulating evidence illustrates that S. Typhimurium and E. coli O157, through the formation of biofilms, colonize not only eggs but also surfaces throughout the production chain (Figure 1). This contamination of surfaces may result in the transmission of these pathogens, posing significant risks to public health [4-6].


Figure 1: Areas and surfaces of the Colombian poultry production chain at risk for contamination by biofilms formed by enteropathogenic bacteria. Numbers highlight the different steps at which eggs and chickens can be contaminated by enteropathogenic bacteria. Italic letters indicate the places or utensils that may be made of stainless steel or polystyrene from which cross-contamination of eggs and chickens can occur.

Several studies have demonstrated how Salmonella and E. coli strains that are common causes of human gastroenteritis attach firmly to the eggshell surface and to several types of food and food-production plant surfaces, facilitating the formation of biofilms [7-9]. The formation of a biofilm comprises several distinct steps. First, cells adsorb reversibly onto the surface. Second, production of surface polysaccharides or capsular material occurs, followed by the formation of an extracellular polymeric matrix. At this stage, biofilm cells form a strong attachment to the surface. In the following steps, the biofilm architecture is developed and matured. The process ends with the liberation of single motile cells that disperse into the environment and initiate the process again [10]. Biofilm formation is known to be influenced by several environmental cues, such as the availability and concentration of nutrients and the physicochemical parameters of the surrounding environment, such as temperature and the material composition of the surface [11]. The surface type can influence microbial interactions among pathogens and promote co-biofilm formation, increases in individual pathogen biomass, and cell activity [12]. By nature, the biofilm structure allows microbes to resist chemical or biological sanitizers, while bacterial cells are more vulnerable during the planktonic state, and over a short contact time, than when sequestered and protected in biofilms. Bacterial cells within biofilms are more resistant to environmental stresses, such as desiccation and UV light exposure, as well as to host-mediated responses, such as phagocytosis [13]. Bacterial biofilms are more resistant to antimicrobial agents than are free-living cells, which makes it difficult to eradicate pathogens from surfaces commonly used in the poultry industry [5].

With the rise in the occurrence of foodborne outbreaks associated with poultry production, there is increasing interest in the use of novel biocide applications to prevent or reduce microbial contamination in food industries. The viability of microbes on food contact surfaces varies according to the biofilm state and formation ability, as well as the type of surface. Biofilm formation from the highest to lowest degree follows the order of eggshell > rubber > stainless steel > plastic [14,15]. As reported by Lee [15], rinsing surfaces with water, even extensively, appears to have a limited effect on reducing S. Typhimurium biofilm viability. The regular application of cleaning and disinfecting procedures is a common strategy employed to control pathogen establishment on industrial equipment [16]. Importantly, chemical sanitizer efficacy can depend significantly on surface type, bacterial strain, and relative humidity [17]. Therefore, such procedures may not be fully effective in impeding or disrupting biofilms and can induce the formation and persistence of resistant phenotypes [18].

Novel alternatives, such as natural compounds extracted from bacterial cultures or aromatic plants, as well as organic acids, are currently under evaluation for their potential in eradicating biofilms. These compounds may exhibit high lethality against pathogens, efficiently penetrate the structure of a biofilm, and degrade easily in the environment [16]. Organic acids are generally recognized as safe (GRAS) by the USA Federal Drug Administration (FDA) and have been documented to possess antimicrobial activities against different pathogens [5]. In studies involving antibiotic-resistant bacteria, Clary [19] demonstrated how low concentrations (5%) of acetic acid rapidly killed (30 min) planktonic cells of Mycobacterium abscessus. On the other hand, Bradhan [20] demonstrated that lactic acid can decrease viable cell counts of planktonic as well as biofilm-forming cells of multiple carbapenem-hydrolyzing, multi-drug-resistant Klebsiella pneumoniae strains. Acetic acid demonstrated antimicrobial effectiveness on both smooth and rough cell morphotypes. Besides directly affecting bacterial cell viability, organic acids can also influence the electrochemical properties of the attachment surface, leading to an effective antimicrobial outcome [21].

An antimicrobial mechanism of organic acids, such as citric acid, acetic acid, and lactic acid, involves decreasing the environmental pH, creating unfavorable growth conditions for pathogenic bacteria [22]. Weak acids like acetic acid, when at a pH lower than their pKa and in their undissociated form, have shown the ability to reduce biofilm formation by permeating the biofilm structure and inner cell membrane. Kundukad [23] demonstrated that these weak acids, including acetic acid, could effectively eliminate bacteria without harming human cells if the pH remains close to their pKa. Organic acids in their undissociated form possess lipophilic properties, enabling them to diffuse across bacterial cell membranes, thereby disrupting cell function upon reaching the cell interior [5].
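The role of the pKa described above can be made concrete with the Henderson-Hasselbalch relation, which gives the fraction of a weak acid remaining in its undissociated, membrane-permeant form at a given pH. A minimal sketch (the pKa of acetic acid, approximately 4.76, is a literature value, not a figure from this study):

```python
def undissociated_fraction(ph, pka):
    """Fraction of a weak acid in its undissociated (protonated) form.

    From the Henderson-Hasselbalch relation:
        fraction = 1 / (1 + 10**(pH - pKa))
    Below the pKa the undissociated, lipophilic form dominates.
    """
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Acetic acid (pKa ~ 4.76, literature value): at pH 3, the pH of the
# acidified media used here, nearly all of the acid is undissociated.
f_ph3 = undissociated_fraction(3.0, 4.76)
f_ph7 = undissociated_fraction(7.0, 4.76)
```

At pH 3 the undissociated fraction is above 98%, whereas at pH 7 it falls below 1%, consistent with the text's point that the antimicrobial, membrane-diffusing form dominates only when the pH is below the pKa.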

Research focusing on evaluating alternative treatments and methods to control S. Typhimurium and E. coli O157 biofilm formation on surfaces along the egg and other animal-derived food production chains is crucial to reduce cross-contamination. Accordingly, the present study aimed to assess the efficacy of organic acids in: 1) controlling biofilm formation by S. Typhimurium and E. coli O157 during the initial stages of development, and 2) disrupting mature biofilms. Eggshells, stainless steel, and polystyrene were utilized to simulate potential soiling surfaces encountered by eggs and broilers throughout the production chain. Two temperatures were assessed as key environmental variables: 22°C, representing the mean environmental temperature of the largest broiler-producing region in Colombia, and 37°C, simulating the optimal growth temperature of these pathogens. Additionally, to track the impact of exposure time and the potential development of resistance, the biofilms were subjected to organic acids for 2 and 24 hours.

Materials and Methods

Bacterial Strains and Growth Conditions

Bacterial strains used in this study were S. Typhimurium ATCC 14028 (American Type Culture Collection, Manassas, VA, USA) and E. coli O157 strain AGROSAVIA_CMSABV_Ec-col-B-001-2007 from the Animal Health collection of the AGROSAVIA Microbial Germplasm Bank (Mosquera, Cundinamarca, Colombia). The bacteria were grown on nutrient agar (Merck, Darmstadt, Germany) or Luria Bertani low salt agar (LBL): peptone (ThermoFisher, Waltham, Massachusetts, USA) at 10 g.L-1, yeast extract (Merck) at 5 g.L-1, sodium chloride (Merck) at g.L-1, and agar (Merck) at g.L-1. When required, LBL agar was acidified to pH 3 with 0.3% (v/v) acetic acid (Merck), 0.2% (v/v) citric acid (Merck), or 0.2% (v/v) lactic acid (Merck). For biofilm assays, LBL broth (LBL without agar) was used.

Growth Curves

S. Typhimurium and E. coli O157 were aerobically grown on LBL agar plates at 37°C for 24 h. The inocula were prepared by scraping the surface of the agar plates following the addition of 10 mL of LBL broth at pH 7 or acidified to pH 3 with acetic (0.3% v/v), citric (0.2% v/v), or lactic acid (0.2% v/v). These cell suspensions were adjusted to an OD at 600 nm of 0.1 (2.2 × 10³ colony-forming units (cfu).mL-1) or 1.8 (2.8 × 10⁹ cfu.mL-1). Three bacterial suspensions (n=3) per treatment and the control, at an initial OD of 0.1 or 1.8, were incubated aerobically at 37°C for 48 h with constant shaking at 140 rpm. Every 2 h, 1-mL aliquots of the bacterial cultures were taken, and 10-fold serial dilutions and plating on LBL agar were performed to determine Log10 cfu.mL-1 at each time point.
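The Log10 cfu.mL-1 values reported throughout are back-calculated from plate counts and dilution factors. A minimal sketch, with hypothetical colony counts and plated volume (the 100-µL aliquot and the count of 28 colonies are illustrative, not data from this study):

```python
import math

def log10_cfu_per_ml(colonies, dilution_exponent, plated_volume_ml):
    """Back-calculate Log10 cfu.mL-1 from a serial-dilution plate count.

    colonies: colonies counted on the plate
    dilution_exponent: n for a 10**-n dilution of the culture
    plated_volume_ml: volume plated, e.g. 0.1 for a 100-uL aliquot
    """
    cfu_per_ml = colonies * (10 ** dilution_exponent) / plated_volume_ml
    return math.log10(cfu_per_ml)

# e.g. 28 colonies on the 10**-6 dilution plate from a 100-uL aliquot:
log_n = log10_cfu_per_ml(28, 6, 0.1)   # ~ 8.45 Log10 cfu.mL-1
```

Log reductions, such as those reported in the Results, are then simply the difference between the control and treated Log10 values.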

Surface Spreading Assays

S. Typhimurium and E. coli O157 were grown aerobically in 5 mL of LBL broth at 37°C until reaching an optical density (OD) at 600 nm of 1 (16 h). Then, 1 mL of each culture was concentrated 10-fold by centrifugation at 4,400 × g for 5 min at room temperature. The pellets were suspended in 100 µL of LBL broth. The concentration of the inoculum was 8.25 × 10¹⁰ cfu.mL-1 for S. Typhimurium and 1.05 × 10¹¹ cfu.mL-1 for E. coli O157. Semi-solid agar surface spreading plates were prepared as described by Amaya [24] with 20 mL of LBL broth containing 8% (w/v) glucose and 0.6% (w/v) agar and, if required, acidified with acetic acid (0.3% v/v), citric acid (0.2% v/v), or lactic acid (0.2% v/v). A 5-µL drop of the suspended bacteria was placed in the center of the plates (n=10) and allowed to air-dry for 10 min. The plates were inverted and incubated aerobically for 24 h at 37°C. The areas of the spreading colonies were measured with ImageJ software 1.52a (Wayne Rasband, National Institutes of Health, Bethesda, MD, USA) by delimiting the colony area using the shape and measure tools.

Disruption of Newly Formed Biofilms Developed on Eggshells and Stainless Steel

S. Typhimurium and E. coli O157 were grown aerobically on LBL agar plates at 37°C for 24 h. The inocula of the pathogens were prepared by scraping the cell mass grown on the surface of the plates, washing twice in 2 mL of LBL broth at pH 7 or LBL broth at pH 3 (acidified with acetic acid (0.3% v/v), citric acid (0.2% v/v), or lactic acid (0.2% v/v)), and centrifuging at 4,400 × g for 5 min at room temperature. Washed cells were suspended in 20 mL of the respective media. The OD of each suspension was adjusted to 1.8 at 600 nm (5 × 10⁹ cfu.mL-1). Then, six 1-cm² pieces of eggshell or stainless steel for each treatment, which had been sterilized by autoclaving at 15 lb of pressure and 121°C for 20 min, were weighed and covered with 5 mL of each bacterial suspension in 15-mL Falcon tubes. Negative controls contained each medium without inoculum. Following incubation for 2 or 24 h at 22°C or 37°C, eggshell and stainless-steel pieces were aseptically transferred with sterile forceps to 15-mL Falcon tubes. The eggshell and stainless-steel pieces were rinsed three times with 2 mL of sterile 0.85% NaCl solution to remove unbound cells. To detach the biofilm cells, the eggshell and stainless-steel pieces were sonicated twice in 2 mL of sterile 0.85% NaCl solution for 2 min with a pause of 2 min. Ten-fold serial dilutions were made in sterile 0.85% NaCl solution and plated on nutrient agar using the drop plate technique. Plates were incubated aerobically at 37°C for 20 h and the numbers of colony-forming units were counted. The results were expressed as Log10 cfu.g-1 of eggshell or stainless steel.

Disruption of Mature Biofilms Formed on Eggshells and Stainless Steel

Pathogen biofilms were allowed to develop on the surface materials (n=9) for 2 or 24 h at 22 or 37°C in LBL broth (pH 7), following the procedures described above. Once the biofilms were formed, eggshells and stainless-steel pieces were aseptically transferred to LBL broth at pH 7 or acidified with acetic acid (0.3% v/v) to pH 3. The 2-h-old biofilms were incubated aerobically for 2 h and the 24-h-old biofilms were incubated for 24 h, at 22°C or 37°C. After rinsing three times with 2 mL of sterile 0.85% NaCl solution and sonication in 2 mL of sterile 0.85% NaCl solution, 10-fold serial dilutions were made and plated using drop plate technique on nutrient agar. Results were expressed as Log10 cfu.g-1 of eggshell or stainless steel.

Disruption of Biofilms Formed on Polystyrene

S. Typhimurium and E. coli O157 inocula were prepared as described above for the evaluation of biofilm formation on eggshells and stainless steel. To evaluate the disruption of young biofilms by acetic acid, ninety-six-well polystyrene plates (Becton Dickinson, Franklin Lakes, NY, USA) were inoculated with 180 µL of S. Typhimurium or E. coli O157 inoculum adjusted to an OD of 1.8 (approximately 5.12 × 10⁹ cfu.mL-1) in LBL broth at pH 7 or broth acidified to pH 3 with acetic acid (0.3% v/v) (n=24). The multi-well plates were incubated aerobically for 2 or 24 h at 22°C or 37°C, without shaking and under humid conditions to prevent evaporation. To evaluate the disruption of matured biofilms, the biofilms were first allowed to form aerobically in LBL broth at pH 7 for 2 or 24 h. Subsequently, the culture broth was removed and 150 µL of LBL broth at pH 7, or broth acidified to pH 3 with acetic acid (0.3% v/v), was added to the wells; the number of wells used per treatment was 24. The plates were then incubated aerobically for an additional 2 or 24 h at 22°C or 37°C. Controls consisted of uninoculated broths. At the end of the incubation times, the OD was read at 600 nm using a Sunrise™ microtiter plate reader (Tecan Group Ltd, Männedorf, Switzerland). Subsequently, the liquid contents of each well were gently removed, and the biofilms were stained for 1 h with 180 µL of 0.01% (w/v) crystal violet (Sigma-Aldrich, St. Louis, MO, USA). Excess dye was removed, and the wells were rinsed three times with sterile distilled water. The plates were allowed to air-dry at room temperature before adding 180 µL of ethanol:acetone (80:20) to each well. Crystal violet-stained biofilms were measured at 600 nm using a Sunrise™ microtiter plate reader.

Statistical Analysis

At least three biological replicates of each experiment were carried out to ensure the reproducibility of results. Data of surface spread colony areas, cfu.g-1 of eggshell or stainless steel and crystal-violet stained biofilms were Log10 (x + 1) transformed to homogenize variances between treatments. Linear models (LM) were employed for statistical analyses using R v. 3.6.0 (http://www.R-project.org/) with packages lme4, car, and emmeans. Surface spreading data were analyzed using LM and pairwise comparisons were performed for the interaction between all factors. The cfu.g-1 of eggshell or stainless steel and OD data for multi-well plate assays were analyzed with a negative binomial distribution. The negative binomial theta parameter was established with an alternating iteration procedure using the glm.nb function. Pairwise multiple comparisons were carried out using the false discovery rate (FDR) for P-value corrections.

Results

Impact of Acetic Acid on S. Typhimurium and E. coli O157 Planktonic Cells

Growth curves were generated to monitor the antimicrobial activity of the three organic acids against planktonic cells. Initial low (0.1 OD) and high (1.8 OD) concentrations of cells were employed to simulate the numbers used in the young and mature biofilm inocula, respectively. The results indicated that, irrespective of the initial concentration, all three organic acids exhibited bactericidal activity against S. Typhimurium and E. coli O157 planktonic cells. In both scenarios, a progressive decrease in colony-forming unit (cfu) numbers was observed over time. Compared to cultures at pH 7 with an initial OD of 0.1, cultures in LBL broth acidified with acetic, citric, and lactic acids exhibited reductions of 7.86 Log10 cfu.mL-1 for S. Typhimurium (Figure 2A) and 8.17 Log10 cfu.mL-1 for E. coli O157 (Figure 2B). When initial cell concentrations were high (Figure 2C and 2D), cfu.mL-1 numbers also decreased in cultures acidified with the three organic acids. After 48 hours of incubation, viable Log10 cfu.mL-1 counts of S. Typhimurium and E. coli O157 in acidified cultures revealed reductions of 8.36 and 8.10, respectively.


Figure 2: S. Typhimurium and E. coli O157 growth curves for control (pH 7) and acid (pH 3) broth cultures with an initial optical density of 0.1 (A and B, respectively) and 1.8 (C and D, respectively). Error bars indicate standard error of the mean (n=9).

Interference of Organic Acids with Surface Spreading

Bacterial surface motility is known to be involved at different stages of biofilm formation, especially its initial stages; we therefore evaluated the impact of acetic, citric, and lactic acids on this phenotype. Compared to control conditions, a significant (P < 0.05) decrease in the surface spreading abilities of S. Typhimurium and E. coli O157, ranging from 97 to 98%, was observed on semi-solid agar plates containing any of the three organic acids (Figure 3).


Figure 3: Effect of organic acids on S. Typhimurium (A) and E. coli O157 (C) surface spreading. Error bars indicate the standard error of the means (n=10). Bars with the same letter do not differ significantly (P > 0.05). B and D demonstrate the observed surface spreading patterns of S. Typhimurium and E. coli O157 at 24 h post-inoculation, respectively (bar=1 cm).

Disruption of Newly Formed Biofilms

First, the capacity of acetic, citric, and lactic acids to disrupt biofilms formed at 2 and 24 h post-inoculation (hpi) on eggshells was evaluated. Under the control treatment conditions, the numbers of attached S. Typhimurium and E. coli O157 cells were similar in most comparisons at 2 and 24 hpi (Table 1), although at 24 hpi and 37°C, fewer (P < 0.05) E. coli O157 than S. Typhimurium cells were attached. Of the three acids, acetic acid generated significantly (P < 0.05) greater reductions of the newly formed biofilms developed by both pathogens, with an overall 3 Log10 cfu.g-1 of eggshell decrease at both times and temperatures compared to the controls. An exception occurred at 2 hpi and 37°C, where biofilm formation by S. Typhimurium was controlled to a greater extent by lactic acid than by acetic and citric acids. Compared to the effect achieved by the other two organic acids, acetic acid also yielded the highest (P < 0.05) reduction of E. coli O157 biofilm formation at both temperatures at 2 and 24 hpi.

Table 1: Organic acids inhibition of young Salmonella Typhimurium and Escherichia coli O157 biofilms developed on eggshells and stainless steel surfaces.

Surface1   Treatment   Time (h)   ST (22°C)        ST (37°C)        EC (22°C)        EC (37°C)
ES         Con         2          8.58 ± 0.05aA    8.23 ± 0.12aA    8.31 ± 0.11aA    8.24 ± 0.13aA
           Con         24         8.48 ± 0.06aA    8.40 ± 0.18aA    8.33 ± 0.09abA   8.05 ± 0.10bA
           AA          2          5.12 ± 0.07bC    7.36 ± 0.08aB    5.03 ± 0.03aC    5.12 ± 0.06bC
           AA          24         5.13 ± 0.06aC    5.07 ± 0.04aC    5.03 ± 0.03aC    5.06 ± 0.04aD
           LA          2          5.17 ± 0.08cB    5.08 ± 0.05cC    7.33 ± 0.29bB    8.36 ± 0.06aA
           LA          24         7.56 ± 0.21aB    7.69 ± 0.07aB    7.65 ± 0.06aB    6.59 ± 0.07bC
           CA          2          6.33 ± 0.17bB    8.13 ± 0.13aA    7.52 ± 0.30aB    6.23 ± 0.34bB
           CA          24         7.59 ± 0.08cB    8.14 ± 0.10bA    7.79 ± 0.05cB    8.40 ± 0.11aB
SS         Con         2          10.48 ± 0.01bA   10.72 ± 0.01aA   10.31 ± 0.01cA   10.50 ± 0.01bA
           Con         24         10.63 ± 0.01bA   10.96 ± 0.00aA   10.33 ± 0.01dA   10.50 ± 0.01cA
           AA          2          4.04 ± 0.21aB    4.23 ± 0.01aB    4.20 ± 0.27aB    4.23 ± 0.03aB
           AA          24         4.41 ± 0.05bB    4.45 ± 0.03bB    4.61 ± 0.01aB    4.59 ± 0.02aB

1ES: Egg Shell, SS: Stainless Steel, Con: Control medium at pH 7, AA: Acetic acid medium at pH 3, LA: Lactic acid medium at pH 3, CA: Citric acid medium at pH 3, ST: Salmonella Typhimurium, EC: Escherichia coli O157.
abcdMeans (Log10 cfu/g) ± SE (n=9) in rows and with different letters are significantly different (P < 0.05).
ABCDMeans (Log10 cfu/g) ± SE (n=9) in columns, with the same surface material, and the same time, and with different letters are significantly different (P < 0.05).

The disruptive activity of citric and lactic acids on newly formed biofilms depended on the time of exposure and the incubation temperature. Citric acid was more effective in disrupting the 2-h-old biofilms formed by S. Typhimurium at 22°C and by E. coli O157 at 37°C, causing reductions of 2.25 and 2.01 Log10 cfu.g-1 of eggshell, respectively. Lactic acid, on the other hand, exerted the highest antibiofilm activity against S. Typhimurium biofilms, decreasing the number of cfu attached per gram of eggshell by 3.41 Log10 at 22°C and 3.15 Log10 at 37°C. At the same times and temperatures, E. coli O157 biofilms decreased by approximately 1 and 0 Log10, respectively. Biofilms formed for 24 h and then treated with lactic acid showed an overall decrease of less than 1 Log10 cfu.g-1 of eggshell for both pathogens. Because acetic acid was the most effective organic acid in controlling S. Typhimurium and E. coli O157 biofilm formation on eggshells, it was selected for further studies.
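
The Log10 reductions quoted in this section are simple differences between the control and treated Log10 cfu.g-1 means reported in Table 1. A quick arithmetic check of the lactic and citric acid figures at 2 hpi (values taken from Table 1; the dictionary names are only labels for this sketch):

```python
# Control and treated Log10 cfu/g of eggshell means at 2 hpi (from Table 1)
control = {"ST_22": 8.58, "ST_37": 8.23, "EC_22": 8.31, "EC_37": 8.24}
lactic  = {"ST_22": 5.17, "ST_37": 5.08, "EC_22": 7.33, "EC_37": 8.36}
citric  = {"ST_22": 6.33, "EC_37": 6.23}

def log_reduction(con, trt):
    # Log10 reduction = control mean - treated mean, rounded to 2 decimals
    return round(con - trt, 2)

print(log_reduction(control["ST_22"], lactic["ST_22"]))   # 3.41 (lactic, ST, 22°C)
print(log_reduction(control["ST_37"], lactic["ST_37"]))   # 3.15 (lactic, ST, 37°C)
print(log_reduction(control["ST_22"], citric["ST_22"]))   # 2.25 (citric, ST, 22°C)
print(log_reduction(control["EC_37"], citric["EC_37"]))   # 2.01 (citric, EC, 37°C)
```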

As seen with eggshells, the numbers of S. Typhimurium and E. coli O157 cfu.g-1 attached to stainless steel at 2 and 24 h were similar within the control and acetic acid treatments (Table 1). The numbers of S. Typhimurium cells attached to this surface in the control treatment were higher (P < 0.05) at 37°C at both 2 and 24 hpi, and lower (P < 0.05) for E. coli O157 at 22°C, although the differences were small. At all times and temperatures, acetic acid reduced (P < 0.05) S. Typhimurium and E. coli O157 counts by nearly 6 Log10 cfu.g-1 of stainless steel. All counts for acetic acid-treated biofilms were similar (P > 0.05) at 2 hpi; however, at 24 hpi, E. coli O157 counts at both temperatures were slightly higher (P < 0.05) than those for S. Typhimurium.

The formation of biofilms by both pathogens on multi-well polystyrene plates was also influenced by incubation time and temperature (P < 0.05, Table 2). Lower (P < 0.05) biofilm OD values were found for S. Typhimurium and E. coli O157 at 22°C than at 37°C at 2 and 24 hpi for both the control and acetic acid treatments. Treatment with acetic acid resulted in both pathogens producing less (P < 0.05) biofilm at both temperatures when compared to control OD values at 2 and 24 hpi. However, the decreases in OD values for both acetic acid-treated pathogens at both temperatures were greater at 24 hpi than at 2 hpi: while an overall reduction of nearly 1 OD unit was obtained at 2 hpi for both pathogens, decreases at 24 hpi of 2 and 1.7 OD units were found for S. Typhimurium and E. coli O157, respectively.

Table 2: Acetic acid inhibition of young Salmonella Typhimurium and Escherichia coli O157 biofilms developed on polystyrene surfaces.

Surface1   Treatment   Time (h)   ST (22°C)        ST (37°C)        EC (22°C)        EC (37°C)
PS         Con         2          2.76 ± 0.04bA    3.14 ± 0.03aA    2.43 ± 0.07cA    2.98 ± 0.05aA
           Con         24         3.04 ± 0.02bA    3.62 ± 0.09aA    3.03 ± 0.03bA    3.12 ± 0.04bA
           AA          2          1.68 ± 0.06bB    1.84 ± 0.08abB   1.68 ± 0.05bB    1.91 ± 0.05aB
           AA          24         0.64 ± 0.05cB    1.59 ± 0.10aB    1.22 ± 0.04bB    1.53 ± 0.09aB

1PS: Polystyrene, Con: Control medium at pH 7, AA: Acetic acid medium at pH 3, ST: Salmonella Typhimurium, EC: Escherichia coli O157
abcdMeans (OD) ± SE (n=24) in rows and with different letters are significantly different (P < 0.05).
ABMeans (OD) ± SE (n=24) in columns and with different letters are significantly different (P < 0.05).

Acetic Acid Disruption of Mature Biofilms

Control treatments showed that the numbers of S. Typhimurium and E. coli O157 cfu attached per gram of eggshell did not increase significantly (P > 0.05) from 2 to 24 h at either of the evaluated temperatures. On the other hand, 2 additional hours of incubation were enough to allow higher (P < 0.05) numbers of S. Typhimurium and E. coli O157 cells to attach to stainless steel than to eggshells in both control and acetic acid-treated cultures at both temperatures. As observed in the assays with young biofilms, treatment with acetic acid for 2 and 24 h also generated a significant (P < 0.05) reduction of the already formed, mature S. Typhimurium and E. coli O157 biofilms, regardless of the evaluated surface (Table 3). Compared to control treatments, there was an overall 6.6 Log10 reduction in the number of cfu attached to eggshell and stainless steel surfaces. Exposure to acetic acid for 2 h was enough to disrupt the already formed biofilms; interestingly, prolonging the exposure to 24 h did not incrementally affect these mature biofilms (Table 3). Furthermore, as observed when evaluating the disruption of young biofilms, the antibiofilm activity of acetic acid was higher on the biofilms formed on stainless steel than on those formed on eggshells.

Table 3: Acetic acid disruption of mature Salmonella Typhimurium and Escherichia coli O157 biofilms developed on eggshell and stainless steel surfaces.

Surface1   Treatment   Time (h)   ST (22°C)        ST (37°C)        EC (22°C)        EC (37°C)
ES         Con         2          8.67 ± 0.03aA    8.77 ± 0.03aA    8.31 ± 0.03bA    8.32 ± 0.05bA
           Con         24         8.78 ± 0.08bA    8.94 ± 0.01aA    8.39 ± 0.03cA    8.33 ± 0.03cA
           AA          2          2.24 ± 0.01aB    2.23 ± 0.01aB    2.21 ± 0.02aB    2.22 ± 0.03aB
           AA          24         2.41 ± 0.05bB    2.45 ± 0.03bB    2.59 ± 0.01aB    2.59 ± 0.02aB
SS         Con         2          10.54 ± 0.02bA   10.67 ± 0.02aA   10.11 ± 0.01cA   10.68 ± 0.02aA
           Con         24         10.76 ± 0.04bA   10.92 ± 0.01aA   10.85 ± 0.01aA   10.92 ± 0.00aA
           AA          2          3.46 ± 0.03bB    3.65 ± 0.01aB    3.57 ± 0.03aB    3.62 ± 0.04aB
           AA          24         3.56 ± 0.04cB    3.88 ± 0.02aB    3.55 ± 0.07cB    3.72 ± 0.04bB

1ES: Egg Shell, SS: Stainless Steel, Con: Control medium at pH 7, AA: Acetic acid medium at pH 3, ST: Salmonella Typhimurium, EC: Escherichia coli O157
abcdMeans (Log10 cfu/g) ± SE (n=9) in rows and with different letters are significantly different (P < 0.05).
ABCDMeans (Log10 cfu/g) ± SE (n=9) in columns, with the same surface material, and with different letters are significantly different (P < 0.05).

Incubation of the mature biofilms formed on polystyrene for an additional 2 and 24 h generated significant differences (P < 0.05) in the OD values of S. Typhimurium and E. coli O157 biofilms. Regardless of the time and temperature of incubation, the OD values of E. coli O157 biofilms were higher than those of S. Typhimurium. Additionally, while S. Typhimurium showed higher OD values at 24 hpi than at 2 hpi, E. coli O157 OD values decreased over time. Acetic acid produced an overall reduction in the OD values of the mature biofilms formed by the two pathogens; however, its antibiofilm activity varied with time and temperature (P < 0.05). The highest antibiofilm activity of acetic acid on mature S. Typhimurium biofilms was observed at 24 hpi and 22°C (a 1.08 OD-unit reduction). Similarly, the highest reductions in OD values for E. coli O157 were found at 22°C, at both 2 hpi (2.49) and 24 hpi (2.33). Extending the exposure of mature S. Typhimurium biofilms formed at 22°C to acetic acid led to a higher reduction in OD values at 24 hpi than at 2 hpi. However, this effect of longer exposure was not observed for the mature biofilms formed at 37°C by S. Typhimurium, or by E. coli O157 at either temperature (Table 4).

Table 4: Acetic acid disruption of mature Salmonella Typhimurium and Escherichia coli O157 biofilms developed on polystyrene surfaces.

Surface1   Treatment   Time (h)   ST (22°C)        ST (37°C)        EC (22°C)        EC (37°C)
PS         Con         2          2.60 ± 0.05bA    1.19 ± 0.08cA    4.40 ± 0.06aA    2.77 ± 0.07bA
           Con         24         3.57 ± 0.06bA    2.09 ± 0.06dA    4.20 ± 0.08aA    2.35 ± 0.08cA
           AA          2          2.20 ± 0.15aB    0.46 ± 0.07bB    1.91 ± 0.10aB    1.75 ± 0.09aB
           AA          24         2.49 ± 0.11aB    1.32 ± 0.14cB    1.87 ± 0.08bB    1.53 ± 0.11abB

1PS: Polystyrene, Con: Control medium at pH 7, AA: Acetic acid medium at pH 3, ST: Salmonella Typhimurium, EC: Escherichia coli O157
abcdMeans (OD) ± SE (n=24) in rows for each surface and with different letters are significantly different (P < 0.05).
ABMeans (OD) ± SE (n=9) in columns, with the same time, and with different letters are significantly different (P < 0.05).

Discussion

Complete removal of enteropathogenic bacteria from the poultry production chain environment is essential to ensure overall food safety. Pathogens like S. Typhimurium and E. coli O157 possess the capability to form biofilms, enabling their survival under unfavorable conditions by adhering to abiotic surfaces such as metals, plastic, or glass while creating a protective barrier [25,26]. Despite the implementation of numerous hygienic measures, concerns persist regarding the efficacy of disinfectants due to the emergence of bacterial resistance [27]. Moreover, several chemical sanitizers previously used for human health purposes are now prohibited, leading to a renewed interest in substituting chemical industrial sanitizers with natural antimicrobial agents. Organic acids, which are affordable and considered safe for food animals and humans, stand out as exceptional alternatives in this regard [28].

The results from the current study demonstrate the efficacy of acetic acid as an antibiofilm agent against S. Typhimurium and E. coli O157 biofilms. Halstead [29] similarly revealed the bactericidal actions of this organic acid against pathogens such as E. coli, Staphylococcus aureus, and Acinetobacter baumannii. However, in contrast to these findings, other studies have suggested that acetic acid might not be the most efficient biofilm disruptor when compared to other organic acids. For instance, Ban [30] evaluated the antibiofilm activities of propionic acid, acetic acid, lactic acid, malic acid, and citric acid, and found lactic acid to be the most effective in disrupting 6-day-old S. Typhimurium, E. coli O157: H7, and Listeria monocytogenes biofilms. Moreover, Amrutha [5] reported that, when comparing the activity of acetic, lactic, and citric acids at a 2% concentration, lactic acid achieved maximum inhibition of Salmonella sp. and E. coli biofilms formed on cucumber. The degree of antimicrobial effect might be influenced by the concentration of organic acid and the exposure time [28]. According to Beier [31], acetic, butyric, and propionic acids required lower molar amounts than citric, formic, and lactic acids to significantly inhibit enteropathogens. Furthermore, Bardhan [20] indicated that lactic acid was an effective antimicrobial against clinical carbapenem-hydrolyzing, multi-drug-resistant Klebsiella pneumoniae planktonic and biofilm-forming cells. The authors observed cell membrane damage and high rates of bacteriolysis after treatment with lactic acid at concentrations of 0.15% and 0.225%.

The antibacterial activity of organic acids has been associated with their pKa and the optimal pH for dissociation [28]. The pKa values of acetic, citric (pKa1), and lactic acid are 4.76, 3.13, and 3.86, respectively. Kundukad [23] demonstrated that maintaining the pH close to their pKa enables weak acids like acetic and citric acid to eliminate persistent cells within biofilms of antibiotic-resistant bacteria such as Klebsiella pneumoniae KP1, Pseudomonas putida OUS82, Staphylococcus aureus 15981, Pseudomonas aeruginosa DK1-NH57388A, and P. aeruginosa PA_D25. When provided at a pH lower than their pKa, these compounds remain largely undissociated and can penetrate the biofilm matrix and bacterial cell membranes. While lactic acid is considered a stronger acid than acetic acid based on their pKa values, the efficacy of organic acids also relies on pH levels. The proximity between the pKa value of lactic acid and the pH of 3 used in this study might explain why acetic acid, whose pKa lies much further above pH 3, exhibited better performance against biofilm formation and disruption than lactic acid. Further studies comparing the effectiveness of these organic acids at various pH values are necessary to confirm these observations.
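
The pKa argument can be made quantitative with the Henderson-Hasselbalch relation: the fraction of an acid remaining undissociated (the membrane-permeant form) at a given pH is 1 / (1 + 10^(pH − pKa)). A small sketch, using literature pKa values (acetic 4.76, lactic 3.86, citric pKa1 3.13), which is an illustration of the general chemistry rather than a calculation from the study:

```python
def undissociated_fraction(pKa, pH):
    # Fraction of the acid in its protonated (membrane-permeant) form,
    # from the Henderson-Hasselbalch equation
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# At the pH 3 used in this study:
for name, pKa in [("acetic", 4.76), ("lactic", 3.86), ("citric (pKa1)", 3.13)]:
    print(f"{name}: {undissociated_fraction(pKa, 3.0):.2f} undissociated")
# acetic ≈ 0.98, lactic ≈ 0.88, citric ≈ 0.57: acetic acid supplies the largest
# membrane-permeant fraction at pH 3, consistent with its stronger antibiofilm effect
```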

In general, it has been suggested that increasing the contact time with disinfectants enhances their antibiofilm activities on various material surfaces [15]. In the current study, prolonged exposure of S. Typhimurium and E. coli O157 planktonic cells to the tested organic acids resulted in progressively lower viable counts, as depicted by the growth curves presented above. However, when mature biofilms of these microbes were exposed to acetic acid on polystyrene surfaces, this time-related effect was not observed: the OD values for mature biofilms did not decrease further after exposure to acetic acid for 2 versus 24 hours. Similar resistance over time was noted for biofilm cells attached to eggshells and stainless steel when the biofilm formation and contact time with organic acids extended from 2 to 24 hours. Amrutha [5] reported that exposure to acetic, citric, and lactic acids did not significantly reduce the production of exopolysaccharides in Salmonella sp. biofilms and resulted in reductions of only 10.89%, 6.25%, and 13.42% in E. coli O157:H7 biofilms, respectively. The extracellular matrix developed by biofilm cells acts as a barrier, impeding the penetration or inactivation of antimicrobial compounds [31,32]. Therefore, the limited reduction in S. Typhimurium and E. coli O157 biofilms formed on eggshells, stainless steel, and polystyrene with increased exposure time to acetic acid is likely due to the obstruction the biofilm matrix presents to the passage of organic acids. Research focusing on disrupting the biofilm matrix by alternative methods before exposure to organic acids could lead to complementary approaches that enhance the antimicrobial activity of organic acids.

In addition to the defensive shield of the biofilm matrix, it is conceivable that the cells remaining inside the S. Typhimurium and E. coli O157 biofilms respond to the effects of acetic acid by triggering other protection strategies. Changes in membrane lipids have been described as one such defensive mechanism [33]. Additional protective strategies include the release of ammonia [34], the pumping out of protons, and proton-consuming decarboxylation processes. More recently, Clary [19] demonstrated how bacterial colony diversification (morphotype) can define the outcome of tolerance to a particular stressor during biofilm formation and its persistence against environmental assaults. Amrutha [5] demonstrated that reduced exopolysaccharide (EPS) synthesis, changes in EPS composition and organization, altered swimming and swarming patterns, and a negative impact on quorum sensing all play crucial roles in microbial community architecture as well as in resistance to toxic substances. Further research is required to identify which of these mechanisms are used by the remaining S. Typhimurium and E. coli O157 biofilm cells attached to eggshells and the industrial surfaces evaluated in the current study.

The antibiofilm activity of organic acids, such as acetic acid, might encounter hindrances due to alterations in the biofilm structure caused by temperature shifts and variations in adhesion surface types. Generally, temperature and surface material have been reported to influence the attachment ability of enteropathogenic and other bacteria, consequently affecting the biofilm structure [35]. In the current study, we did not assess the impact of temperature on the biofilm structure on the tested surfaces. However, our findings indicate that at 22°C, acetic acid exhibited less control only over mature biofilms formed by S. Typhimurium on polystyrene, differing from the conditions at 37°C. Similar temperature-related alterations in biofilm capacity were observed by Andersen [36] when evaluating the biofilm-forming capacity of several E. coli K12 clinical isolates. They reported a higher number of attached cells at 30°C compared to 35°C, observing denser and more evenly distributed biofilms on silicone surfaces at the lower temperature. Andersen [36] suggested that the presence of curli fibers, which facilitate cell adhesion, might have influenced the type and creation of the biofilm structure, particularly at lower temperatures where these cell surface adhesins are produced. Furthermore, another study focused on E. coli O157: H7 biofilms formed at 4°C and 15°C on beef processing surfaces concluded that while a slight decrease in the number of attached cells was noted at 4°C, it did not hinder the overall increase in attached cell numbers over time [37].

Conclusion

The efficacy of compounds utilized for sanitation involves multifaceted events associated not only with the morphology and physiology of the target microbial cells but also with factors such as relative surface hydrophobicity, material surface roughness, and the impact of shear stress [38]. Organic acids can influence the internal chemical equilibrium of microbial cells, leading to alterations in cell membrane integrity or cellular activities, ultimately resulting in cell death. Consequently, organic acids represent an important option for sanitizing purposes and may potentially be combined or incorporated into innovative carrier matrices with other established antimicrobial molecules, such as essential oil components, thereby improving molecule stability and extending their biological activity [39]. The results obtained from this study offer new insights into the effectiveness of acetic acid as an antibiofilm agent, which can be utilized to control S. Typhimurium and E. coli O157 biofilms formed under conditions encountered along the poultry production chain. This newfound information may facilitate the integration of this natural compound into hygiene programs aimed at preventing cross-contamination of eggs, broilers, and broiler meat products.

Acknowledgments

This work was supported by the United States Department of Agriculture under grant number 58-3091-7-028-F, and by the Colombian Ministry of Agriculture and Rural Development under grant numbers Tv18 and Tv19. We thank Yessica Muñoz and Xiomara Abella for technical assistance, and Corporación Colombiana de Investigación Agropecuaria – Agrosavia for supporting this research.

Contributions

AGC and CVAG were involved in the experimental design and performed the biofilm experiments. AGC, MEH, FRV, and CVAG participated in data analysis and wrote the manuscript.

Ethics Approval

Not applicable.

Consent to Participate

All authors approved the manuscript.

Consent for Publication

The authors consented for the publication.

Statements and Declarations

Competing Interests

The authors declare no competing interests.

References

  1. Afzal A, Hussain A, Irfan M, and Malik KA (2015) Molecular diagnostics for foodborne pathogens (Salmonella spp.) from poultry. Life Sci 2: 91-97.
  2. Instituto Nacional de Salud de Colombia (2017) Investigación de brote enfermedades transmitidas por alimentos y vehiculizadas por agua, 59(2): 4-16.
  3. Gantois I, Ducatelle R, Pasmans F, Haesebrouck F, et al. (2004) Cross-sectional analysis of clinical and environmental isolates of Pseudomonas aeruginosa: biofilm formation, virulence, and genome diversity. Pharmacol 72: 133-144. [crossref]
  4. Pande VV, McWhorter AR, Chousalkar KK (2016) Salmonella enterica isolates from layer farm environments are able to form biofilm on eggshell surfaces. Biofouling 32: 699-710.
  5. Amrutha B, Sundar K, Halady Shetty PH (2017) Effect of organic acids on biofilm formation and quorum signaling of pathogens from fresh fruits and vegetables. Microb Pathog 111: 156-162. [crossref]
  6. Chowdhury MAH, Ashrafudoulla, Mevo SIU, Mizan MFR, et al. (2023) Current and future interventions for improving poultry health and poultry food safety and security: A comprehensive review. Compr Rev Food Sci Food Saf 22: 1555-1596. [crossref]
  7. Yang X, Tran F, Youssef MK, Gill CO (2015) Determination of sources of Escherichia coli on beef by multiple-locus variable-number tandem repeat analysis. J Food Prot 78: 1296-1302. [crossref]
  8. Silva PL, Goulart LR, Reis TF, Mendonça EP, Melo RT, et al. (2019) Biofilm formation in different Salmonella serotypes isolated from poultry. Curr Microbiol 76: 124-129. [crossref]
  9. Harrell JE, Hahn MM, D’Souza SJ, Vasicek EM, Sandala JL, et al. (2021) Salmonella biofilm formation, chronic infection, and immunity within the intestine and hepatobiliary tract. Front Cell Infect Microbiol 10: 624622. [crossref]
  10. Kim SH, Wei CI (2007) Biofilm formation by multidrug-resistant Salmonella enterica serotype Typhimurium phage type DT104 and other pathogens. J Food Prot 70: 22-29.
  11. Schonewille E, Nesse LL, Hauck R, Windhorst D, Hafez HM, Vestby LK (2012) Biofilm building capacity of Salmonella enterica strains from the poultry farm environment. FEMS Microbiol Immunol 65: 360-365. [crossref]
  12. Maggio F, Rossi C, Chaves-López C, Serio A, Valbonetti L, Pomilio F, et al. (2021) Interactions between Listeria monocytogenes and Pseudomonas fluorescens in dual-species biofilms under simulated dairy processing conditions. Foods 10: 176. [crossref]
  13. Fatemi P, Frank JF (1999) Inactivation of Listeria monocytogenes/Pseudomonas biofilms by peracid sanitizers. J Food Prot 62: 761-765.
  14. Hingston PA, Stea EC, Knøchel S, Hansen T (2013) Role of initial contamination levels, biofilm maturity and presence of salt and fat on desiccation survival of Listeria monocytogenes on stainless steel surfaces. Food Microbiol 36: 46-56. [crossref]
  15. Lee KH, Lee JY, Roy PK, Mizan MFR, Hossain MI, et al. (2020) Viability of Salmonella Typhimurium biofilms on major food-contact surfaces and eggshell treated during 35 days with and without water storage at room temperature. Poult Sci 99: 4558-4565. [crossref]
  16. Bridier A, Sanchez-Vizuete P, Guilbaud M, Piard JC, et al. (2015) Biofilm-associated persistence of food-borne pathogens. Food Microbiol 45: 167-178. [crossref]
  17. Joseph B, Otta SK, Karunasagar I, Karunasagar I (2001) Biofilm formation by Salmonella on food contact surfaces and their sensitivity to sanitizers. Int J Food Microbiol 64: 367-372. [crossref]
  18. Simoes M, Simoes LC, Vieira MJ (2010) A review of current and emergent biofilm control strategies. LWT-Food Sci Technol 43: 573-583.
  19. Clary G, Sasindran SJ, Nesbitt N, Mason L, Cole S, Azad A, McCoy K, Schlesinger LS, Hall-Stoodley L (2018) Mycobacterium abscessus smooth and rough morphotypes form antimicrobial-tolerant biofilm phenotypes but are killed by acetic acid. Antimicrob Agents Chemother 62: e01782-17. [crossref]
  20. Bardhan T, Chakraborty M, Bhattacharjee B (2019) Bactericidal activity of lactic acid against clinical, carbapenem-hydrolyzing, multi-drug-resistant Klebsiella pneumoniae planktonic and biofilm-forming cells. Antibiotics 8: 181. [crossref]
  21. Souza JG, Cordeiro JM, Lima CV, Barão VA (2019) Citric acid reduces oral biofilm and influences the electrochemical behavior of titanium: An in situ and in vitro study. J Periodontol 90(2): 149-158. [crossref]
  22. Canibe N, Steien SH, Øverland M, Jensen BB (2001) Effect of K-diformate in starter diets on acidity, microbiota, and the amount of organic acids in the digestive tract of pig. J Anim Sci 79: 2123-2133. [crossref]
  23. Kundukad B, Schussman M, Yang K, Seviour T, Yang L, et al. (2017) Mechanistic action of weak acid drugs on biofilms. Sci Rep 7: 1-12. [crossref]
  24. Amaya-Gómez CV, Porcel M, Mesa-Garriga L, Gómez-Álvarez MI (2020) A framework for the selection of plant growth-promoting rhizobacteria based on bacterial competence mechanisms. Appl Environ Microbiol 86: e00760-20. [crossref]
  25. Peng D (2016) Biofilm formation of Salmonella. Microbial Biofilms. Biofilms-Importance and Applications. IntechOpen, 231-242.
  26. Yang X, Wang H, Hrycauk S, Holman DB, Ells TC (2023) Microbial dynamics in mixed-culture biofilms of Salmonella Typhimurium and Escherichia coli O157: H7 and bacteria surviving sanitation of conveyor belts of meat processing plants. Microorganisms 11: 421. [crossref]
  27. Yuan L, Sadiq FA, Wang N, Yang Z, He G (2020) Recent advances in understanding the control of disinfectant-resistant biofilms by hurdle technology in the food industry. Crit Rev Food Sci Nutr 1-16. [crossref]
  28. Coban HB (2020) Organic acids as antimicrobial food agents: applications and microbial productions. Bioprocess Biosyst Eng 43: 569-591.
  29. Halstead FD, Rauf M, Moiemen NS, Bamford A, Wearn CM, et al. (2015) The antibacterial activity of acetic acid against biofilm-producing pathogens of relevance to burns patients. PLoS One 10: e0136190. [crossref]
  30. Ban GH, Park SH, Kim SO, Ryu S, Kang DH (2012) Synergistic effect of steam and lactic acid against Escherichia coli O157: H7, Salmonella Typhimurium, and Listeria monocytogenes biofilms on polyvinyl chloride and stainless steel. Int J Food Microbiol 157(2): 218-223. [crossref]
  31. Beier RC, Harvey RB, Hernandez CA, Hume ME, et al. (2018) Interactions of organic acids with Campylobacter coli from swine. PLoS One 13: e0202100.
  32. Kim SH, Wei CI (2007) Biofilm formation by multidrug-resistant Salmonella enterica serotype Typhimurium phage type DT104 and other pathogens. J Food Prot 70: 22-29.
  33. Pienaar JA, Singh A, Barnard TG (2020) Membrane modification as a survival mechanism through gastric fluid in non-acid adapted enteropathogenic Escherichia coli (EPEC). Microb Pathog 144: 104180. [crossref]
  34. Lu P, Ma D, Chen Y, Guo Y, et al. (2013) L-glutamine provides acid resistance for Escherichia coli through enzymatic release of ammonia. Cell Res 23: 635-644. [crossref]
  35. Lund P, Tramonti A, De Biase D (2014) Coping with low pH: Molecular strategies in neutralophilic bacteria. FEMS Microbiol Rev 38(6): 1091-1125.
  36. Andersen TE, Kingshott P, Palarasah Y, Benter M, et al. (2010) A flow chamber assay for quantitative evaluation of bacterial surface colonization used to investigate the influence of temperature and surface hydrophilicity on the biofilm-forming capacity of uropathogenic Escherichia coli. J Microbiol Methods 81: 135-140. [crossref]
  37. Dourou D, Beauchamp CS, Yoon Y, Geornaras I, Belk KE, et al. (2011) Attachment and biofilm formation by Escherichia coli O157: H7 at different temperatures, on various food-contact surfaces encountered in beef processing. Int J Food Microbiol 149: 262-268. [crossref]
  38. Cai S, Phinney DM, Heldman DR, Snyder AB (2020) All treatment parameters affect environmental surface sanitation efficacy, but their relative importance depends on the microbial target. Appl Environ Microbiol 87: e01748-20. [crossref]
  39. Scaffaro R, Lopresti F, Marino A, Nostro A (2018) Antimicrobial additives for poly (lactic acid) materials and their applications: current state and perspectives. Appl Microbiol Biotechnol [crossref]

Attacks on ‘First Responders’ in the United States: Can AI Using Mind Genomics ‘Thinking’ Identify Mindsets and Provide Actionable Insight?

DOI: 10.31038/JCRM.2024714

Abstract

Using generative AI, the paper investigates the nature of individuals who are likely to attack first responders (e.g., police, firefighters, medical professionals). AI suggested five different mind-sets, along with a variety of factors about these mind-sets, including what their members may be thinking and how they can be recognized. The approach of synthesizing mind-sets provides society with a way to understand negative behaviors and to protect against them.

Introduction

In today’s society, the traditional feeling towards first responders such as emergency services, law enforcement and firefighters at the scene of an accident or crime, as well as doctors and nurses providing care in clinics, is usually one of respect and gratitude. These individuals are seen as heroes who put their own lives at risk to help others in need. People typically view first responders as dedicated professionals, essential to maintaining order and providing crucial assistance in emergency situations. Often, their work is so stressful that in some cases they end up suffering from PTSD years after their efforts [1-6].

However, violence against first responders appears to be a growing threat. While such incidents are underreported, studies suggest a concerning rise. A 2019 report by the National Fire Protection Association (NFPA) highlights that a staggering 69% of EMS personnel experienced some form of violence on the job within a year, with a third being physically assaulted (NFPA 2019).

During the past 30 years, however, the United States has experienced significant changes in societal attitudes and behaviors, which culminate in the often-unthinkable behavior of attacking first responders, whether these be public servants like police [7] or doctors and nurses in clinics [8-10]. At first glance this behavior seems irrational, because the first responders are actively helping the public.

Among the key reasons:

Emotional Intensity and Stress: Emergency situations can be highly emotional and stressful for everyone involved. First responders often encounter distressed individuals, family members, or witnesses. The intensity of these situations can lead to aggression directed at responders [11].

Substance Abuse and Mental Health Issues: People under the influence of drugs or alcohol may act irrationally and become aggressive. Additionally, individuals with mental health conditions might not respond well to assistance. This problem is made worse by the fact that mental health services are underfunded and under supported, which increases the likelihood that first responders may face violent incidents [12].

Vocal and Emotionally Charged Skepticism towards Government, Law Enforcement, and the Media: Some scholars suggest that this trend owes its growth to increasingly vocal and emotionally charged skepticism towards authority, eroding traditional deference. The result is a culture where challenging authority is increasingly the norm. Sometimes this erosion is expressed in the simple question, ‘is nothing sacred anymore?’ [13-15].

Economic Disparities and Social Inequalities in the US: Economics and daily struggle cannot help but create pockets of resentment and frustration within marginalized communities. First responders, often interacting with these communities in times of crisis, end up becoming targets for the projected frustration and anger emerging from this economically driven sense of powerlessness and injustice. This was recognized more than a half century ago [16]. Also adding to the distrust and antagonism towards first responders is the militarization of police forces and the general increase in the use of force by police officers. When the public views police as tyrannical or hostile, trust in law enforcement may suffer, and communities may stop working together to keep the peace [17].

AI Predictions about the Future for First Responders

To understand this topic and to offer recommendations capable of minimizing attacks on these essential workers, we used the AI embedded in the Mind Genomics platform to help us understand the mind-sets of people who attack first responders. The approach asked the AI embedded in Mind Genomics (SCAS, Socrates as a Service) what would happen if the current attitude towards the first responders were to go unchecked. Our specific questions asked what would happen in 2026, and then in 2030.

SCAS returned with the following ‘prediction’: ‘People will probably still respect and admire first responders in 2026, but they may be worried about their safety due to the rising frequency of assaults on them. Many people may start to be more cautious of the dangers that come with becoming a doctor, nurse, or police officer in light of the increasing number of occurrences targeting these professions. There may be a rising chorus of voices demanding more funding and assistance to shore up the safety nets that now shield first responders. Looking further ahead to 2030, if the trend of attacks on first responders continues unabated, people’s feelings towards these essential workers may become deeply polarized. There may be a growing divide between those who continue to view first responders as heroes deserving of support and admiration, and those who have lost faith in the system and believe that drastic measures are needed to address the root causes of the problem. The traditional feeling of respect and gratitude towards these individuals may be overshadowed by a sense of resentment and anger at the injustices faced by those who dedicate their lives to helping others.’

Deeper Understanding of the Problem of Attacking First Responders: Mind-sets and the Contribution of Mind Genomics

Based upon the foregoing ‘prediction’ by AI, we move to a deeper understanding of the minds of people who are described as ‘attacking first responders.’ The approach was based upon the work in Mind Genomics, an emerging branch of psychology dealing with how people respond to the world of the everyday [18,19].

How people respond to stimuli is influenced by their cognitive biases, cultural background, childhood, and life experiences. Studying these individual differences, Mind Genomics zeroes down on the minute details of daily life by classifying individuals according to their thoughts on a subject, their motivations for doing something, and even their barriers to action. Mind genomics achieves this by utilizing a combination of controlled experiments, data analysis, and cognitive psychology principles to identify distinct mind-sets and predict corresponding behaviors [20-22].

Recently, attention has shifted to using artificial intelligence to suggest mind-sets [23]. By using AI, it becomes possible to create a situation where the different mind-sets are identified, along with their possible ‘internal conversation before the attack’, as well as things that can be done immediately as well as long term to discourage these behaviors.

Mind Genomics Empowered by AI, to Explore ‘Who’ Attacks and Why

The rest of the paper is devoted to an exploration of different mind-sets, using AI to drive the creation of the mind-sets. The AI is ChatGPT [24], with a series of prompts developed specifically for Mind Genomics. The prompts enable the user to find out specific information about a topic, and later apply AI to further ‘analyze’ the information originally provided by AI. The system is called Socrates as a Service, abbreviated as SCAS. It is SCAS which allows us to interact with AI.

The exploration begins by presenting SCAS, viz., the embedded AI, with background material, or more correctly with a simple prompting statement. This statement, chosen by the user, is simply the statement: There are six radically different mind-sets of individuals who attack first responders. This statement is presented as fact. (Note that AI will return with only five mind-sets). The rest of the information presented to SCAS is a set of six questions, generated by the user. Table 1 shows the information and request provided to AI.

Table 1: The input information provided by the user and the request for additional information. Note that AI ended up returning only five mind-sets.

TAB 1

The simplicity of the system reduces the anxiety of the user. The user ends up setting the scene for AI by stating the number of mind-sets, and then requests that the AI (viz., SCAS) become a tutor, by answering six questions for each mind-set just synthesized by AI.

Once the user has specified the requested information, AI returns quickly with suggestions about the mind-sets. The request has to be made properly: in the effort to create Table 2, it took four iterations to get the request correct, viz., the request shown in Table 1. The iterations are fast, requiring about 15 seconds each, allowing for trial-and-error refinement of instructions so that they end up being clear and without ambiguity. It is important to emphasize that the ‘errors’ in instructing the AI are usually the result of ambiguous instructions and, all too often, instructions which contain impossible-to-satisfy requests.

Table 2: The five mind-sets developed by SCAS as a direct response to the request

TAB 2

Table 2 shows the set of five mind-sets ‘synthesized’ by AI. A second iteration might return many of the same mind-sets, but perhaps replace one or two of them with new ones. Note that although the user can request a certain number of mind-sets, the request ends up being a suggestion. Quite often AI returns fewer mind-sets than requested, but never more than the number requested by the user.

The mind-sets appear with the relevant questions. Whether or not the information is accurate is not as important as the fact that within minutes the user has begun to learn about assaults against first responders. Just the information alone begins to educate, providing insights about what may be going on in the minds of those who do the assaulting, as well as what to say to them in terms of ‘slogans’.
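As a rough illustration, the seeding-and-questioning flow described above can be sketched as a minimal prompt builder. The function name, statement wording, and sample questions below are assumptions for illustration, not the actual SCAS implementation:

```python
# Hypothetical sketch of a SCAS-style seeding prompt (illustrative only;
# build_prompt and the question wording are not the authors' actual code).

def build_prompt(n_mindsets: int, questions: list[str]) -> str:
    """Assemble the seeding statement plus the user's numbered questions."""
    # The seeding statement is presented to the AI as fact.
    statement = (
        f"There are {n_mindsets} radically different mind-sets "
        "of individuals who attack first responders."
    )
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return f"{statement}\nFor each mind-set, answer:\n{numbered}"

prompt = build_prompt(6, [
    "What might this person be thinking before the attack?",
    "How can this mind-set be recognized?",
])
print(prompt)
```

As the text observes, the stated number of mind-sets functions only as a suggestion to the model; the response may contain fewer than requested, and each iteration of the request can be adjusted until the instructions are unambiguous.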

Putting the Ideas into Action after Knowing Mind-sets

AI can help predict and prevent attacks on first responders by modeling threat mind-sets. By analyzing past incidents, AI can identify patterns and support intervention before violence occurs. This knowledge can suggest communication tactics and help de-escalate volatile encounters. With the right tools, first responders can manage unpredictable situations more safely.

A short description of each mind-set was given to AI (SCAS), along with the background shown at the top of Table 3. The different mind-sets were provided to give AI a sense of the range of the different ways people might feel about first responders. The request, however, was to come back with a single strategy. The request was given twice, generating two iterations. These are shown in Table 3.

Table 3: Putting the ideas into action – how to prevent or ameliorate the attacks

TAB 3

Strategies Suggested by AI to Minimize Attacks on First Responders

The final activity in this exploration of attacks against first responders comprises the education of professionals. Here let us assume that we are dealing with police officers in a local precinct. The assumption here is that many of the potential attackers are thought to fall into the grouping of ‘Aggressive Defender.’

The strategy is first to create a briefing document for all officers to read (Table 4), and then to create a set of posters showing how the officers should behave towards the Aggressive Defender (Table 5). The briefing document and posters for police officers can enhance their understanding of Aggressive Defender mindsets. The briefing document provides detailed information on their characteristics, behaviors, and motivations, enabling officers to anticipate, respond to, and de-escalate situations, thereby improving their safety and effectiveness on the job. In turn, the posters for the precinct teach the police officers how to effectively interact with Aggressive Defenders and potential threats.

It is important to note that briefing documents and posters are just one method for communicating the information outlined. Multimedia formats for the same information, such as video generated from prompts or text, are generally available and could be used as an adjunct to or substitute for the poster approach outlined below.

Table 4: The briefing document for police officers, focusing on the AGGRESSIVE DEFENDER mind-set

TAB 4

Table 5: Three types of posters for the police precinct, dealing with the AGGRESSIVE DEFENDER mind-set.

TAB 5

Who Would be Interested in These AI-based Simulations of Potential Attacker Mind-sets?

We close the ‘results section’ (viz., the simulations) with a second-level analysis by SCAS. Once the iterations are complete and delivered to the user, the embedded AI reviews the information, and provides deeper analysis of what was presented in the results immediately delivered to the user. This secondary ‘summarization’ of the information occurs some time later, after the project is closed.

Part of the summarization analysis considers WHO would be the audiences. SCAS is pre-programmed to provide three different groups: those who are interested, those who are opposed, and those who think differently and may bring new viewpoints to the problem. These appear in Table 6.

Table 6: AI summarization of three different types of audiences faced with information and simulation of potential attacker mind-sets.

TAB 6

Discussion and Conclusions

Understanding the roots of violence today is critical to safeguarding our first responders. They are continuously exposed to risky circumstances that might develop into violent assaults. The police are often the most visible targets of these assaults, but physicians at clinics are also at risk. Individual physicians have been targeted in violent assaults because they are blamed for poor medical outcomes.

Using AI to model mindsets may assist first responders in better understanding and anticipating possible violence. Mind Genomics is a helpful tool for better analyzing and communicating with diverse mindsets. Understanding the mindsets of prospective attackers allows first responders to effectively de-escalate situations and protect themselves and others. This may greatly enhance the safety and efficacy of our first responders in high-risk circumstances.

Imagine a future in which all first responders are educated to comprehend and communicate with diverse mindsets utilizing AI technology. This might transform how our essential front-line workers handle perilous circumstances, shield themselves from injury, and maintain public support. The capacity to detect and avoid violence may be the difference between life and death for the first individuals on the scene.

References

  1. Alexander DA, Klein S (2009) First responders after disasters: a review of stress reactions, at-risk, vulnerability, and resilience factors. Prehospital and Disaster Medicine 24: 87-94. [crossref]
  2. Henry VE (2015) Crisis intervention and first responders to events involving terrorism and weapons of mass destruction. Crisis Intervention Handbook: Assessment, Treatment, and Research, pp.214-47.
  3. Holgersson A (2016) Review of on-scene management of mass-casualty attacks. Journal of Human Security 12: 91-111.
  4. Jannussis D, Mpompetsi G, Vassileios K (2021) The role of the first responder in emergency medicine, trauma and disaster management. In: From Prehospital to Hospital Care and Beyond (pp. 11-18). Cham: Springer International Publishing.
  5. Prioux C, Marillier M, Vuillermoz C, Vandentorren S, Rabet G, et al. (2023) PTSD and Partial PTSD among first responders one and five Years after the Paris terror attacks in November 2015. International Journal of Environmental Research and Public Health 20: 4160. [crossref]
  6. Wilson LC (2015) A systematic review of probable posttraumatic stress disorder in first responders following man-made mass violence. Psychiatry Research 229: 21-26. [crossref]
  7. Soltes V, Kubas J, Velas A, Michalík D (2021) Occupational safety of municipal police officers: Assessing the vulnerability and riskiness of police officers’ work. International Journal of Environmental Research and Public Health 18: 5605. [crossref]
  8. Huffman MC, Amman MA (2023) Violence in a place of healing: Weapons-based attacks in health care facilities. Journal of Threat Assessment and Management 10: 151-187.
  9. Gibbs JC (2020) Terrorist attacks targeting police, 1998–2010: Exploring heavily hit countries. International Criminal Justice Review 30: 261-278.
  10. Rohde D (1998) Sniper Attacks on Doctors Create Climate of Fear in Canada. New York Times.
  11. Richter G (2019) Assaults to EMS First Responders are Felonies in Pennsylvania, So Why Do Many Victims Feel They Do Not Receive Justice? Am J Ind Med [crossref]
  12. Coleman TG, Cotton DH (2010) Reducing risk and improving outcomes of police interactions with people with mental illness. Journal of Police Crisis Negotiations 10: 39-57.
  13. Cole J, Walters M, Lynch M (2011) Part of the solution, not the problem: the crowd’s role in emergency response. Contemporary Social Science 6: 361-375.
  14. Gibbs JC (2013) Targeting blue: Why we should study terrorist attacks on police. In: Examining Political Violence, Routledge (pp. 341-358).
  15. Gibbs JC (2018) Terrorist attacks targeting the police: the connection to foreign military presence. Police Practice and Research 19: 222-240.
  16. Ransford HE (1968) Isolation, powerlessness, and violence: A study of attitudes and participation in the Watts riot. American Journal of sociology 73: 581-591. [crossref]
  17. Smith DC (2018) The Blue Perspective: Police Perception of Police-Community Relations. University of Maryland, Baltimore County.
  18. Moskowitz HR, Gofman A, Beckley J, Ashman H (2006) Founding a new science: Mind Genomics. Journal of Sensory Studies 21: 266-307.
  19. Moskowitz HR, Gofman A (2007) Selling Blue Elephants: How to Make Great Products that People Want Before They Even Know They Want Them. Pearson Education.
  20. Ilollari O, Papajorgji P, Gere A, Zemel R, Moskowitz H (2019) Using Mind Genomics to understand the specifics of a customer’s mind. The European Proceedings of Social, Behavioural Sciences EpSBS ISSN: 2357-1330.
  21. Moskowitz H, Wren J, Papajorgji P (2020) Mind Genomics and the Law (1st Edition). LAP LAMBERT Academic Publishing.
  22. Ilollari O, Papajorgji P, Civici A (2020) Understanding client’s feelings about mobile banking in Albania. Interdisciplinary International Conference On Management, Tourism And Development Of Territory 147-154.
  23. Moskowitz HR, Rappaport S, Saharan S, DiLorenzo A (2024) What makes ‘good food’: Using AI to coach people to ask good questions. Food Science, Nutrition Research 7: 1-9. [crossref]
  24. Aher GV, Arriaga RI, Kalai AT (2023) Using large language models to simulate multiple humans and replicate human subject studies. Proceedings of the 40th International Conference on Machine Learning 202: 337-371.

The Financial Incentives Leading to the Overutilization of Cardiac Testing and Invasive Procedures

DOI: 10.31038/JCRM.2024712

 
 

The overutilization of cardiac testing and unnecessary referrals to invasive coronary angiography are significant clinical and health policy concerns. Inappropriate cardiac imaging stress tests are estimated to cost the U.S. healthcare system $500 million annually and expose many patients to unnecessary radiation. The unjustifiable use of diagnostic tests to screen for cardiac disease in asymptomatic and low-risk chest pain patients may lead to further testing and invasive procedures that are costly, potentially harmful, and without clear outcome benefit. The principal trend in the treatment strategy for stable ischemic heart disease (SIHD) over the past two decades has been the utilization of percutaneous coronary intervention (PCI) and diminishing utilization of medical treatment and coronary artery bypass surgery (CABG). Despite these long-term changes in strategy, overall mortality has not improved significantly while costs have risen exponentially. One deleterious consequence has been an increasingly greater dependence on testing and interventional volume to maintain the revenue stream of cardiology practices.

Historical Background

The origins of this dependence are related to the original PCI learning curve. PCI quantity became a surrogate for quality: at an early stage, the standard was that “the more you do, the better you are”. This misconception persisted long after it was demonstrated to not be an accurate measure of quality despite the proposal of better metrics. There were several reasons for this tenacity. First, with the high reimbursement for PCI, cardiology sections and departments of medicine had found a “cash cow” in an era of “cost containment” that financed program expansion and higher compensation. Interventional leaders at first rigorously maintained high evidentiary standards of case selection. But then, as fellows were trained and entered outside practice with their newly minted skills, the potential income to physicians and hospitals became apparent. Teaching hospitals suddenly were in competition with previously small community hospitals, including those that previously were established referral sources. More and more interventionists entered practice, and competition expanded further; maintaining high volume meant moderating standards of case selection.

Another factor was an inherent uncertainty and unpredictability with balloon angioplasty. It was accepted that there was a risk of dissection and acute closure requiring urgent CABG, and thus only those who were surgical candidates could be PCI candidates. Some pioneers pushed that envelope with great success in otherwise hopeless cases. With the introduction of stents, the incidence of acute closure requiring CABG became zero. And with this fantastic tool, there was suddenly no contraindication to any patient with a severe lesion, including those with no symptoms at all.

Impact of Financial Incentives

Thereafter, the volume of procedures increased exponentially, and with it, revenue to hospitals, doctors, and programs at a time of diminishing reimbursements for cognitive skills. Hospital administrators, with the bottom line fully in focus, insisted on even more volume. As hospital systems increasingly acquired practices, non-physician administrators became the physicians’ leaders, and their bottom line was income generation. Any physician who wanted to see the science showing that all of these patients were benefiting was suddenly considered not to have high standards, but rather to be naïve. The cardiology department and cardiac catheterization laboratory directors were expected to increase cath lab volumes.

In parallel, an entire lesion detection infrastructure sprang up, with various forms of high-volume, moderately well-reimbursed stress testing being performed on any patient with even the most atypical symptoms. In a patient with a low pretest probability of coronary artery disease, a positive stress test is more likely to be a false positive than a true positive. Cardiologists developed an entire revenue-generating system to detect CAD, even though evidence that it saved lives or improved quality of life was lacking. Finding disease to prevent sudden death is an attractive concept and was used to justify the liberalization of testing.
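The pretest-probability point can be made concrete with Bayes’ theorem. The sketch below computes the positive predictive value of a test; the sensitivity and specificity figures are assumed round numbers chosen for illustration, not data drawn from this article:

```python
# Illustrative Bayes calculation: why a positive stress test in a
# low-pretest-probability patient is usually a false positive.
# Sensitivity/specificity values here are assumptions for illustration.

def positive_predictive_value(pretest: float, sensitivity: float,
                              specificity: float) -> float:
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = pretest * sensitivity            # diseased and test positive
    false_pos = (1 - pretest) * (1 - specificity)  # healthy but test positive
    return true_pos / (true_pos + false_pos)

# Assume a test with ~85% sensitivity and ~80% specificity.
ppv_low = positive_predictive_value(0.05, 0.85, 0.80)   # low-risk patient
ppv_high = positive_predictive_value(0.60, 0.85, 0.80)  # high-risk patient
print(f"low-risk PPV: {ppv_low:.2f}, high-risk PPV: {ppv_high:.2f}")
```

Under these assumed operating characteristics, a positive result in a patient with 5% pretest probability is far more likely to be false than true, whereas the same result in a high-pretest-probability patient is usually a true positive; this is the asymmetry the paragraph above describes.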

The fact that this testing strategy has led to millions of procedures with no scientific evidence to support it is unwelcome news to many. Science has taken a back seat to dogma in the promotion of procedures designed for a paradigm (obstructive lesion → ischemia → MI → mortality) that is known to be highly simplistic and incorrect. Any suggested harms became controversial and subjects of debate, in particular, whether a “small myocardial infarction” related to microthrombi and embolization during the procedure has long-term prognostic implications.

With academic leaders in interventional cardiology promoting PCI for MI prevention, it should have been no surprise that certain physicians with large practices of SIHD patients were doing unnecessary procedures on non-significant lesions, and sometimes, with no visible stenosis at all. A significant culprit of this time told the media that his 7-figure income was not an influence for placing 30 stents in a day. A few physician reputations were destroyed, but no hospitals went out of business—others, to keep that volume coming in, acquired them. The blame was placed on the “bad apple”, not the tree.

Guidelines

Rather than undertake a serious introspective evaluation of what was transpiring, an indirect evaluation was proposed. The cardiology societies collaborated to develop appropriateness criteria to classify which indications for revascularization were acceptable and which were not. The idea was to self-police and control the destiny of medical practice rather than allow outside agendas, clearly not attuned to the patient, to control the procedure. Hospitals became interested in developing and paying for quality assurance programs as a defense against obvious malfeasance. These criteria were most notable for posing a temporary obstacle for clever interventionists to work around, rather than for assuring that the right procedure is done for the right patient.

The flaws in these criteria were clear to many from the outset. Improved survival is not the only benefit a treatment strategy can offer, just the easiest to measure. Most patients prefer improved quality of life to longer survival alone, especially in regard to symptom status, but such outcomes are less objective to assess. Because subjective improvement in symptoms cannot be generalized into classifications, and can itself be subjectively influenced, it was not included. Nearly all interventionists were displeased with a cookbook approach to case selection without reference to the individual patient. And with every new tweak of devices and technique, there was a disregard for prior studies that failed to show a benefit, even when new studies continued to show almost identical results. It is no coincidence that the most important PCI trials of the last 15 years (COURAGE, BARI2D, and ISCHEMIA) were not led by interventional cardiologists.

Contemporary Practice

Today, cardiologists can no longer compensate for declining reimbursement for their services by increasing the number of services they provide. The volume of coronary interventions performed in most institutions and by most interventional cardiologists is declining, just as the number of heart surgeries has been declining for years. Insurance companies require pre-approval for coronary CT angiograms, nuclear imaging, and other procedures. The pressure for interventional cardiologists to do as many cases as possible is motivated by demand from hospital and practice administration to increase revenue, which seems to conflict with the scientific evidence provided by randomized trials and summarized in practice guidelines.

Intervention has devolved into a commodity, a service provided on order, as if there were no downside risk, only great benefits, and no alternative. Medical therapy remains the implied least attractive treatment modality, resorted to only when PCI or CABG is not favorably viewed from a technical standpoint. Standard management assumes that invasive procedures always yield information that benefits the patient’s outcome. Discordant clinical trials are characterized as flawed in design.

As cardiologists, we see the patients referred to us to consider whether a procedure is indicated; then we do the procedures, for which we are compensated, but receive only the fee for an office visit if we do not advise that the procedure be performed. That is self-referral, and the inherent conflict of interest this business model incorporates has had a substantial influence on modern practice. The pressure to do more cases is constantly applied from the administrative hierarchy: to prove quality, to generate income, to develop new referrals.

The response of third-party payors to the exponential rise in procedures was to suggest non-payment when the physicians’ own guidelines were abrogated. The physicians’ response was to liberalize the criteria, eliminate the term “inappropriate” so that no case could be said to be not scientifically based, and denounce lack of payment for services in a fee-for-service environment. Consequently, the insurance companies now pay decreasing amounts for the procedure, currently at laughably low levels, because they realized that doctors and hospitals have no incentive to become partners in trying to control costs.

The decreased payment per case, of course, adds further pressure to do even more cases and procedures, of even less proven benefit to the patient, to generate more revenue. Hypothermia, ventricular assist devices, multivessel stenting in MI and shock, and specific treatment devices have been advocated in these guidelines despite no studies showing benefit, and even some showing a lack of benefit or outright harm. Cycles of increasing indications for procedures following diminishing reimbursement have resulted.

Can This Be Fixed?

As Deming said, “Every system is perfectly designed to get the result that it does”; so to change the outcome, it would be necessary to change the system and its component parts which derive profit from these circumstances. One place to start is how trainees are taught. It’s not just what is said to fellows and housestaff, but how their teachers actually act. If they see their attendings say one thing and do another, with a wink and a nod, they get it. The practice of today has to reflect the values medicine should optimally follow in the future.

Incorporating the results of the ISCHEMIA Trial into practice guidelines is a significant challenge. The finding that SIHD with moderate-to-severe ischemia treated by revascularization had no benefit beyond optimal medical therapy (OMT) in preventing major cardiovascular events after 4 years challenges all of our preconceived notions. The premise that severely symptomatic SIHD should be treated invasively to improve mortality is incorrect: since worsening severity of ischemia is associated with increased mortality, it would seem to follow logically that procedures that reduce ischemia should improve survival, but this was not the case. Moreover, the traditional teaching that revascularization does not prevent MI in SIHD may be incorrect: the rate of spontaneous MI during 4-year follow-up was lower in the revascularization subgroup (HR 0.67; 95% CI 0.53, 0.83; p<0.01), suggesting that perhaps PCI may reduce type I MIs.

For most patients with SIHD but without left main coronary disease or severely reduced left ventricular function, shared decision‐making about revascularization should be based on discussions of symptom relief and quality of life and not about reduction in mortality.

As better evidence is developed, more definitive appropriateness criteria should be implemented to ensure we deliver effective, valuable care — and contain costs.

This change would have immediate repercussions, as the entire medical payment system would have to re-equilibrate after decades of deception on all sides. It will mean less revenue in an environment in which over-utilized procedures are underpaid. Professional societies must take on the hard battles, showing responsibility and leadership. Mechanisms to self-regulate are needed. Those who repeatedly take advantage of the lack of objectivity in testing, without regard to costs to the patient, have to be discouraged, not rewarded, by their practice pattern.

Hospitals and physicians must agree to allow oversight of quality by outside, objective agencies and methods, and welcome it. The alternative is to continue down the current path, where costs are rising, reimbursement diminishing, income is threatened, and procedures are done with modest reference to clinical trials that determine what really helps the patient. The delivery of optimal clinical benefit requires an ongoing self-assessment structure comparing actual results to accepted benchmarks, with timely modification of practices when deficiencies are identified. The critical quality elements include adhering to evidence-driven case selection, ensuring proficient technical performance, and monitoring clinical outcomes [1-4].

References

  1. Klein LW, Dehmer GV, Anderson HV, Rao SV (2020) Overcoming obstacles in developing and sustaining a cardiovascular procedural quality program. Journal of the American College of Cardiology – Cardiovascular Interventions 13(23): 2806-2810. [crossref]
  2. Klein LW, Anderson HV, Rao SV (2019) Proposed framework for the optimal measurement of quality assessment in percutaneous coronary intervention. Journal of the American Medical Association – Cardiology 4(10): 963-964. [crossref]
  3. Klein LW, Anderson HV, Rao SV (2020) Performance metrics to improve quality in contemporary PCI practice. Journal of the American Medical Association – Cardiology 5(8): 859-860. [crossref]
  4. Anderson HV, Shaw RE, Brindis RG, et al. (2005) Relationship between procedure indications and outcomes of percutaneous coronary interventions by American College of Cardiology/American Heart Association Task Force Guidelines. Circulation 112(18): 2786-2791. [crossref]