This paper is part of a series of papers using generative AI to simulate issues of current importance in the world of nations and their interactions. Through AI and the Mind Genomics platform, BimiLeap.com, one can explore different facets of a situation. The study here, on a potential Chinese move on Taiwan, explores the topic from five viewpoints, each simulated by AI, with the entire process taking less than 24 hours and at low cost. Phase 1 reconstructs the recent past through simulated interviews with government officials. Phase 2 deals with the mind-sets of the Chinese people regarding Taiwan. Phase 3 projects the future history of the conflict by positioning the simulation in 2030 and simulating recall of the events of six years earlier, when the conflict between China and Taiwan took place. Phase 4 simulates a congressional hearing to explore the conflict. Phase 5 presents five simulations of what must be done to avoid the problem. The five phases provide an easy-to-understand briefing document, designed to capture the “human face” of the conflict and to involve the reader in critical thinking about issues and solutions.
Keywords
China-Taiwan conflict, Generative AI, Geopolitical issues, International conflict, Mind Genomics
Introduction
The relationship between China and Taiwan has been a contentious issue for decades, with China viewing Taiwan as a rogue province and Taiwan viewing itself as a sovereign state. The conflict has roots in the Chinese Civil War, where the defeated Nationalist Party retreated to Taiwan, establishing a separate government. Despite growing trade and cultural exchanges, political tensions have not fully dissolved. In 2024, tensions are at extreme levels, with China’s President Xi Jinping making increasingly threatening statements about Taiwan’s autonomy. The Chinese people view this as a rightful step to ensure China’s global standing. On the other hand, Taiwan’s President Tsai Ing-wen faces immense pressure from both citizens and international allies. The U.S. and other international allies have played a central role in maintaining peace in the region, but the stakes have never been higher. Intensifying espionage and propaganda efforts have driven public sentiment further to extremes, with Chinese media portraying Taiwan as dangerously rebellious and Taiwanese media portraying China as an oppressive neighbor. The future hinges on how long Taiwan can hold out and what the international community is willing to do in its defense [1-3].
Phase 1 — Reconstructing the Past Through Simulated Interviews
Simulating history through imaginary interviews offers profound insights beyond mere facts, allowing for a deeper understanding of the intentions, motivations, tensions, and decisions that might have been obscured in official records or documents. This mode of exploration fosters empathy, deeper understanding of complexities, and a recognition that history is more than a collection of dates and events; it is a narrative shaped by the thoughts, emotions, and actions of individuals and institutions. By placing oneself in the shoes of both the interviewer and the interviewee, one can ask pointed questions that reflect contemporary concerns and imagine the answers through the lens of the individuals involved, reconstructing not just their public-facing personas but their personal doubts, ambitions, and limitations. This exercise in empathy allows for a deeper understanding of the uncertainty and messiness of decisions that might seem inevitable or preventable with the benefit of hindsight [4-6].
Simulated interviews also help to test assumptions, uncovering underlying ideologies, competing narratives, and significant ideological blind spots that governed behavior and choices. They also model a different type of dialogue, allowing for a better understanding of the role of personality and individual agency in history. This approach instills analytical rigor and creative empathy, skills crucial for any student of history. Table 1 shows the instructions to the AI to synthesize the interviews with ten government officials.
Table 1: Simulated interviews about the China-Taiwan situation with 10 government officials.
Phase 2 — Mind-Sets of China Regarding Taiwan
Mind Genomics is an emerging science which identifies different “mind-sets” based on cognitive patterns, preferences, and biases. It suggests that people respond to the same issue in different but predictable ways, not because they are irrational or misinformed, but because they weigh the same information through different cognitive lenses. This concept can be applied to geopolitical issues like the China-Taiwan conflict, helping to deconstruct varying viewpoints in China regarding Taiwan’s status and potential actions. Within China, multiple mind-sets exist regarding Taiwan, including nationalistic, historical, economic, and strategic perspectives. Understanding these different mind-sets can help decision-makers craft targeted policies to appeal to specific segments of the population, preventing oversimplification of the complex issue of the China-Taiwan conflict.
Table 2 shows the three mind-sets synthesized by AI. China’s mind-sets regarding Taiwan are influenced by its historical conception of sovereignty and territorial integrity, as well as its long-standing belief in a unified China dating back to imperial dynasties. The Chinese government views Taiwan as an integral yet temporarily estranged part of the modern Chinese nation-state, with the Taiwan question seen as a symptom of a larger historical trajectory. The Chinese leadership is aware of the political repercussions of losing Taiwan, and any deviation could weaken the Chinese Communist Party’s (CCP) grip on the narrative. Taiwan’s strategic role in global geopolitical dynamics, particularly its dominance in advanced semiconductor production, further influences Beijing’s approach. China’s approach to Taiwan is long-term, with strategic patience informed by the Confucian principle that “time will solve all problems.” However, the international context is not overlooked, with Taiwan’s close ties to the United States, alliances with Japan, and its pivotal role in the Indo-Pacific strategy. The prevailing mind-set of the Taiwanese people, who overwhelmingly prefer maintaining the current status quo, conflicts with Beijing’s strategy of eventual reunification. Understanding China’s mind-set can help navigate its decision-making processes and understand its complex emotions and motivations [7-9].
Table 2: Mind-sets of China Regarding Taiwan.
Phase 3 — Looking Forward by Looking Backwards: The Experts Recall What Happened Six Years Ago
Edward Bellamy’s novel “Looking Backward” offers a unique approach to understanding the future by imagining it as if it has already occurred. By placing the reader in the year 2000, looking back at the societal transformations that fixed the problems of 1887, Bellamy provides a structured way of imagining possible trajectories and assessing the decisions that lead to certain outcomes. This technique can be applied to the fraught situation between China and Taiwan, as it allows for better analysis and prevention of repeating mistakes.
Bellamy’s method enhances our ability to learn by structuring our critical analysis, allowing us to mentally walk backward and identify key events or errors that determined the future. The immediacy of the China-Taiwan conflict is complicated by militaristic, economic, and geopolitical uncertainties, but by mentally projecting Taiwan as having already been annexed or as having successfully defended its sovereignty, the outcome can be examined calmly, as if it were settled history.
Storytelling is another aspect of “looking backward,” making complex international relations more graspable for everyone involved in the process. By offering a blueprint in the form of an already-imagined outcome, Bellamy effectively shifts the reader toward structured speculation.
Looking backward frames today’s decisions with the weight of historical responsibility while maintaining the speculative flexibility the future demands. By using Bellamy’s method creatively, we may better navigate the tense and dangerous waters of contemporary geopolitics [10-13].
Table 3 presents the results of ten interviews with individuals who were simulated to be conversant with the issues, and who had opinions about what could have been done better. The approach follows Edward Bellamy’s device of telling the story of a moderately recent past in order to foretell the future in a way which is palatable and interesting.
Table 3: Ten interviews about the Chinese move on Taiwan which occurred six years before.
Phase 4 — Questions and Answers at the Congressional Hearing
Simulating a congressional hearing with unnamed professionals recounting their memories of an event like a Chinese move on Taiwan can be an educational and thought-provoking exercise. It allows readers to explore complex foreign-policy issues within a structured context, encouraging critical thinking, engagement with hypothetical expertise, and scenario analysis. This method focuses on roles and expertise rather than individuals, allowing readers to consider the processes and systems that underpin decisions. Table 4 presents the simulated congressional hearing.
Table 4: Simulated questions and answers at a congressional hearing about the Chinese move on Taiwan.
Simulating a congressional hearing can also deepen understanding of contemporary geopolitics and history by placing students in hypothetical situations where they need to apply historical knowledge, critical analysis, and strategic thinking. It also trains students and participants to ask better questions, identifying gaps in knowledge and anticipating the need for further information.
The interdisciplinary nature of the simulation allows readers to understand how disciplines interact in policy decisions, highlighting the union of various domains of expertise in resolving international conflicts. While some may enjoy the freedom of working within fictive or simulated environments, others may find the exercise challenging due to the added responsibility of dealing with a complex situation that has not “actually” happened but could happen in the future.
Ultimately, employing simulations in history and policy classes can nurture analytical skills, leadership potential, and decision-making acumen. A hearing simulation on an event like a Chinese move on Taiwan helps attendees and readers practice working with complex, nuanced issues, serving as an effective preparatory exercise for those who may enter fields in government, law, international relations, or academia where nuanced and critical decisions will be valued [14-16].
Phase 5 — Five “Faces of Prevention”
In times of uncertainty, questions play a crucial role in national security, foreign policy, and crisis management. The unpredictability of information and insights can create tension, especially when different answers create more ambiguity. Consultations with experts from the cabinet and the Pentagon bring varied experiences, fields of study, and specializations to the table. Receiving different answers does not necessarily signify that the system is failing or confused; rather, it highlights the reality of complexity and the necessity of drawing on diverse perspectives [17-19].
Repetition of questions can signal attention to the critical nature of the issue, revealing nuances in arguments, gaps in logic, or overlooked information. Inconsistency in responses may give a broader, more comprehensive understanding of the nuances faced, prompting deeper thinking. Table 5 shows five different answers to the same question: What steps should be taken to prevent similar acts of aggression in the future?
Table 5: Five answers to the same question: What steps should be taken to prevent similar acts of aggression in the future?
Discussion and Conclusion
China’s intentions and potential military actions towards Taiwan are a major concern for national security and policymakers worldwide. AI-enabled simulations have been used to study and predict China’s strategies, including triggers, diplomatic channels, military postures, and deterrence scenarios. These simulations provide quicker, more adaptable analyses of complex geopolitical scenarios, allowing policymakers to run multiple “what-if” scenarios that take into account economic pressures, diplomatic relationships, and military movements. However, concerns about overemphasis on AI-based simulations exist, as they may not fully grasp cultural, historical, and deeply embedded political factors. To ensure AI does not dominate the decision-making process, traditional simulation techniques, field experience, and diplomatic insight should be used alongside AI-based simulations. Simulation exercises can help decision-makers better prepare for potential real-world conflicts without endangering national security or international stability.
Acknowledgments
The authors delightedly acknowledge the ongoing help of Vanessa Marie B. Arcenas and Isabelle Porat in the preparation of this manuscript and its companions.
References
Amonson K, Egli D (2023) The Ambitious Dragon: Beijing’s Calculus for Invading Taiwan by 2049. Journal of Indo-Pacific Affairs 6: 37-53.
Roy D (2000) Tensions in the Taiwan Strait. Survival 42: 76-96.
Wang TY (2023) Taiwan in 2022: An Eventful Year. Asian Survey 63: 247-257.
Albores P, Shaw D (2008) Government preparedness: Using simulation to prepare for a terrorist attack. Computers & Operations Research 35: 1924-1943.
Borning A, Friedman B, Davis J, Lin P (2005) Informing Public Deliberation: Value Sensitive Design of Indicators for a Large-Scale Urban Simulation. In: Proceedings of the Ninth European Conference on Computer-Supported Cooperative Work 449-468.
DiCicco JM (2014) National Security Council: Simulating Decision-making Dilemmas in Real Time. International Studies Perspectives 15: 438-458.
Dweck CS, Yeager DS (2019) Mindsets: A View From Two Eras. Perspectives on Psychological Science 14: 481-496.
Moskowitz H, Kover A, Papajorgji P (2022) Applying Mind Genomics to Social Sciences. IGI Global.
Wu AX (2014) Ideological polarization over a China-as-superpower mind-set: An exploratory charting of belief systems among Chinese internet users, 2008-2011. International Journal of Communication 8: 2650-2679.
Berridge V (2016) History and the future: Looking back to look forward? International Journal of Drug Policy 37: 117-121.
Franklin JH (1938) Edward Bellamy and the Nationalist Movement. The New England Quarterly 11: 739-772.
Levi AW (1945) Edward Bellamy: Ethics 55: 131-144.
Zhang P (2015) The IS History Initiative: Looking Forward by Looking Back. Communications of the Association for Information Systems 36: 477-514.
Kahn MA, Perez KM (2009) The Game of Politics Simulation: An Exploratory Study. Journal of Political Science Education 5: 332-349.
Mariani M, Glenn BJ (2014) Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation. Journal of Political Science Education 10: 284-301.
Rinfret SR, Pautz MC (2015) Understanding Public Policy Making through the Work of Committees: Utilizing a Student-Led Congressional Hearing Simulation. Journal of Political Science Education 11: 442-454.
Hart P (2002) Preparing Policy Makers for Crisis Management: The Role of Training. Journal of Contingencies and Crisis Management 5: 207-215.
Hetu SN, Gupta S, Vu VA, Tan G (2018) A simulation framework for crisis management: Design and implementation. Simulation Modelling Practice and Theory 85: 15-32.
Rosenthal U, Pijnenburg B (eds) (1991) Crisis Management and Decision Making: Simulation Oriented Scenarios. Springer Science & Business Media.
This paper presents a new approach to understand FMI (foreign malign influences) such as disinformation and propaganda. The paper shows how to combine AI with the emerging science of Mind Genomics to put a “human face” on FMI, and through simulation suggest how to counter FMI efforts. The simulations comprise five phases. Phase 1 simulates a series of interviews from people about FMI and their suggestions about how to counter the effects of FMI. Phase 2 simulates questions and answers about FMI, as well as what to expect six months out, and FMI counterattacks. Phase 3 uses Mind Genomics thinking to suggest three mind-sets of people exposed to FMI. Phase 4 simulates being privy to a strategy meeting of the enemy. Phase 5 presents a simulation of a briefing document about FMI, based upon the synthesis of dozens of AI-generated questions and answers. The entire approach presented in the paper can be done in less than 24 hours, using the Mind Genomics platform, BimiLeap.com, with the embedded AI (ChatGPT 3.5) doing several levels of analysis, and with the output rewritten and summarized by AI (QuillBot). The result is a scalable, affordable system, which creates a database which can become part of the standard defense effort.
Keywords
AI simulations, Disinformation, Mind genomics, Foreign malign influences
Introduction: The Age of Information Meets the Agents of Malfeasance
Information warfare is a powerful tool for adversarial governments and non-state actors—with propaganda, fake news, and social media manipulation being key strategies to undermine democracies, particularly the United States. Foreign actors like Russia and China exploit socio-political divides to spread fake news, amplifying racial tensions and cultural clashes. The U.S. government is increasingly concerned about disinformation and propaganda efforts from foreign adversaries, with agencies like the Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI) warning about evolving tactics. The private sector, particularly social media companies, has a key role in countering propaganda but has been criticized for being insufficient. To combat these threats, the U.S. government, social media companies, and civil society organizations need to collaborate effectively, using innovative techniques to detect and counter malign influences without infringing on civil liberties [1-4].
The war on disinformation continues apace, and sustained efforts are ever more vital to preserve the integrity of democratic systems. Malign influences do their work through the deliberate use of deceptive or manipulative tactics. The actors may be state or non-state actors who spread false information, distort public perception, or undermine trust in democratic institutions. Traditional media plays one of two roles, or sometimes both: it either amplifies misinformation by reporting unverified stories or counteracts it by adhering to journalistic standards of fact-checking and verification. The outcome is a tightrope act, balancing freedom of expression against the structural harm that follows from the uncritical acceptance of potentially injurious information. Balancing freedom of expression with the need to protect citizens from harmful deceit can be difficult [5-7].
Strategies currently in use include increased investment in fact-checking initiatives, algorithms to detect fake accounts and bots, public awareness campaigns about media literacy, and stricter regulations on political ad funding. Nonetheless, challenges remain in detecting and removing disinformation, in part because of the avalanche effect: the sheer volume of content and the adversaries’ evolving tactics. Fact-checking can help reduce the spread of false stories, but it is often limited by reach, speed, and the willingness of individuals to believe corrections. Artificial intelligence may identify patterns in disinformation campaigns, flagging suspicious accounts or content, but may struggle to distinguish among opinion, satire, and deliberately harmful misinformation [2,8].
Misinformation can erode trust in traditional media by making it difficult for the public to discern what is true and what is propaganda. Broad laws targeting online speech often raise concerns about censorship and the infringement of free speech. Media literacy programs give people the tools to critically evaluate sources and identify fake news, but they require widespread implementation and can be hindered by existing biases [9-11].
This paper moves the investigation of malign influences such as fake news toward the analysis of the everyday. The paper attempts to put a human face on malign influences by using AI to simulate interactions with people, the questions that people might ask, and the ways that people deal with information sent out by “actors” inimical to the United States. The paper presents AI “exercises” using the Mind Genomics platform, BimiLeap.com.
Phase 1: Putting a Human Face on the Topic Through Snippets of Stories with Recommendations
The psychological principle of presenting a “human face” to issues like foreign malign influences (FMI) resonates with people, as they are naturally driven by stories. Simulating interviews with individuals recounting personal struggles with misinformation injects warmth, vulnerability, and relatability, making it easier to feel empathy [12-14]. Building trust and emotional connection is essential in addressing the erosion of trust in media, government, and social institutions. To this end, Table 1 presents 22 short, simulated interviews with ordinary people, along with the recommendations that they make.
Table 1: AI simulated snippets of interviews and recommendations about FMI (foreign malign influences).
Phase 2: Simulating Advice
AI can be used to generate specific questions and detailed, actionable answers to counter foreign malign influences (FMI). This approach allows for quick identification of common points of intrusion or manipulation by foreign actors, providing an organized strategy to address key vulnerabilities. AI-driven directives prioritize immediate actions, enabling individuals or institutions to respond swiftly to rapid information warfare. AI’s ability to flesh out complex situations while accounting for multiple variables allows it to present tangible alternatives and outcomes with ease through simulation, providing a “what if” perspective. This actionable level of detail bridges the gap between theory and practice, making recommendations feel natural and embedded in the broader scenario being played out in real-time simulation. The iterative nature of AI allows for constant feedback and improvement, making it better suited to the evolving circumstances of FMIs. AI’s role also provides clarity and simplicity, making it suitable to create directives for targeted messaging campaigns, media outlets, and the general public [15-17].
Table 2 shows questions and answers based on a simple AI “understanding” of the topic, along with additional analyses such as predictions of what might happen six months out, and FMI’s counterstrategy. Information presented in this manner may produce more compelling reading, and a greater likelihood that the issues of FMI end up recognized and then thwarted.
Table 2: Questions, answers, strategies and counterstrategies for FMI efforts.
Phase 3: Mind-sets of People in the United States Exposed to FMI
Mind-sets are stable ways in which individuals react to stimuli or situations, shaped by cognitive processes, personal experiences, emotional predispositions, and sociocultural factors [18-21]. AI-generated mind-sets can be crucial for understanding how different people process misleading material, such as the topic of this paper, foreign malign influence (FMI). Machine learning algorithms use clustering methods, unsupervised learning, and statistical analysis to generate or simulate these mind-sets. Fed real-world data, AI can identify distinct groups of people who respond to information in specific ways. This enables predictions about how these groups will behave when confronted with different types of foreign malign influence, making interventions more effective. Table 3 shows the simulation of three mind-sets of individuals responding to FMI information.
Table 3: AI simulation of three mind-sets, created on the basis of how they respond to misinformation presented by the FMI.
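The paper does not spell out the clustering procedure behind the AI-generated mind-sets, but the idea described above can be illustrated with a minimal sketch. The Python fragment below assumes hypothetical respondent ratings of test messages and uses ordinary k-means clustering; it is not the BimiLeap.com procedure, only an indication of how three response-based groups of the kind shown in Table 3 might be derived from data.

```python
# Hypothetical sketch: deriving "mind-sets" by clustering respondents'
# reactions to FMI-style messages. The data, message set, and choice of
# k-means are illustrative assumptions, not the BimiLeap.com method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Rows = simulated respondents; columns = agreement ratings (1-9) with a set
# of test messages (e.g., conspiratorial framing, official correction, satire).
ratings = rng.integers(1, 10, size=(300, 6)).astype(float)

# Standardize so that no single message dominates the distance metric.
X = StandardScaler().fit_transform(ratings)

# Three clusters, mirroring the three mind-sets reported in Table 3.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for k in range(3):
    members = ratings[model.labels_ == k]
    print(f"Mind-set {k + 1}: n={len(members)}, "
          f"mean rating per message = {members.mean(axis=0).round(2)}")
```

In practice the mean rating profile of each cluster is what gets interpreted and labeled as a mind-set; the cluster count is a modeling choice rather than something the data dictate.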
Exploring mind-sets in the context of FMI provides insights into social resilience and helps design better defense mechanisms against misinformation. Educational platforms can teach people how to recognize manipulation techniques based on their underlying mind-set. Furthermore, governments, social media companies, and other stakeholders can measure the effectiveness of counter-disinformation campaigns by targeting specific mind-sets and adjusting their message based on real-time feedback or simulation predictions from AI.
Phase 4: Predicting the Future by Looking Backwards
The “Looking Backward” strategy is an innovative method for predicting trends and outcomes, inspired by Edward Bellamy’s novel “Looking Backward.” By mentally placing ourselves in 2030 and reviewing the events of 2024, we can distance ourselves from the innate biases, misinformation, anxieties, and uncertainties of the present moment. This mental distance allows for clearer, more holistic insights into the trajectory of ongoing issues, such as foreign malign influences attempting to flood the U.S. with disinformation. Table 4 shows the AI simulation of looking backward from 2030.
Table 4: Predicting the future by looking backward at 2024 from 2030 to see what was done.
By looking back at 2024 from 2030, we can better assess the societal, political, and psychological ramifications of foreign influence operations, especially disinformation campaigns. By identifying the steps taken today that resulted in negative or positive outcomes by 2030, we might adjust our efforts now, fortifying our democratic resilience against foreign ideologies seeking to undermine our stability. This approach also holds potential when shared with the public, as it can help improve resilience and empower the democratic system to remain agile [22-26].
Phase 5: Creating a Briefing Document — Instructing the AI Both to Ask 60 Questions and Then to Summarize Them
In this step, the AI was instructed to create 60 questions and to provide substantive, detailed answers to each. The questions focused on various aspects of the impact of foreign disinformation on public opinion, civic engagement, and stability. These responses were then condensed into a more digestible briefing using summarizing tools like QuillBot [27-29]. This process allows for the inclusion of ideas and hypotheses that might not be immediately apparent to human analysts due to cognitive biases or blind spots [30,31] (Table 5).
Table 5: A simulated “set of five questions briefing document” about FMI, based upon the AI-generated set of 60 questions and answers, followed by an AI summarization of the results.
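As a rough illustration of the generate-then-condense workflow described above, the sketch below calls a chat-completion API to draft the 60 questions and answers and then compress them into a short briefing. It is only a sketch under stated assumptions: the study ran the workflow inside BimiLeap.com with embedded ChatGPT 3.5 and summarized with QuillBot, whereas the model name, prompts, and use of the OpenAI Python client here are illustrative.

```python
# Illustrative sketch only; not the BimiLeap.com / QuillBot pipeline itself.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Step 1: generate 60 question-and-answer pairs on the topic.
qa_text = ask(
    "Write 60 numbered questions about the impact of foreign disinformation "
    "on public opinion, civic engagement, and stability. After each question, "
    "give a substantive, detailed answer."
)

# Step 2: condense the raw Q&A into a short briefing document.
briefing = ask(
    "Summarize the following questions and answers into a five-point "
    "briefing document for policymakers:\n\n" + qa_text
)
print(briefing)
```

The two-step structure is the essential point: a wide, deliberately over-generated question set, followed by an automated condensation into the briefing shown in Table 5.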
In the short term, AI-generated answers are objective and free from emotional bias, allowing analysts to base their next moves on data-driven insights. In the long term, AI technologies can be used for long-term planning and resilience strategies, allowing for rapid adjustment to evolving situations and trend recognition. This AI-driven approach also contributes to international cooperation against FMI, fostering a united front against foreign disinformation.
Discussion and Conclusions
The paper shows how the team developed a system using artificial intelligence, Mind Genomics, and real-time simulation capabilities to identify, counteract, and neutralize foreign malign influences (FMI). The system aims to understand the psychological and tactical mechanisms driving disinformation campaigns, and in turn generate strategic responses to reduce their efficacy.
AI simulations mimic real-world strategic meetings, interpersonal interviews, and situational dynamics, revealing the “human face” of the enemy and transforming large volumes of data into actionable intelligence. Mind Genomics thinking creates mind-sets, allowing for the identification of different tactics employed by adversaries. This allows mapping of a psychological landscape, understanding which messages take root and which defensive strategies resonate best with different audience segments. Real-time insights are crucial for adjusting countermeasures in sync with the adversary’s shifting methods.
The system has potential to influence public perception and bolster civic resilience by simulating the actions of enemy actors and the reactions of different segments of society. It could enable preemptive action, enabling policymakers and national security analysts to deploy specific public information campaigns or strategic maneuvers based on projections. The system’s broader geopolitical implications extend beyond national borders, creating a cooperative defense mechanism against foreign powers which exploit misinformation to sow international discord.
Acknowledgments
The authors gratefully acknowledge the ongoing help of Vanessa Marie B. Arcenas and Isabelle Porat in the creation of this and companion papers.
References
O’Connell E (2022) Navigating the Internet’s Information Cesspool, Fake News and What to Do About It. University of the Pacific Law Review 53(2): 252-269.
Schafer JH (2020) International Information Power and Foreign Malign Influence. In: International Conference on Cyber Warfare and Security; Academic Conferences International Limited.
Weintraub EL, Valdivia CA (2020) Strike and Share: Combatting Foreign Influence Campaigns on Social Media. The Ohio State Technology Law Journal 702-721.
Wood AK (2020) Facilitating Accountability for Online Political Advertising. The Ohio State Technology Law Journal 521-557.
Rasler K, Thompson WR (2007) Malign autocracies and major power warfare: Evil, tragedy, and international relations theory. Security Studies 10(3): 46-79.
Thompson W (2020) Malign Versus Benign. In: Thompson WR (ed.), Power Concentration in World Politics: The Political Economy of Systemic Leadership, Growth, and Conflict. Springer pp. 117-142.
Tromblay DE (2018) Congress and Counterintelligence: Legislative Vulnerability to Foreign Influences. International Journal of Intelligence and Counterintelligence 31(3): 433-450.
Lehmkuhl JS (2024) Countering China’s Malign Influence in Southeast Asia: A Revised Strategy for the United States. Journal of Indo-Pacific Affairs 7(3): 139.
Bennett WL, Livingston S (2018) The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication 33(2): 122-139.
Bennett WL, Lawrence RG, Livingston S (2008) When the Press Fails: Political Power and the News Media from Iraq to Katrina. University of Chicago Press.
Wagnsson C, Hellman M, Hoyle A (2024) Securitising information in European borders: how can democracies balance openness with curtailing Russian malign information influence? European Security 1-21.
Feuston JL, Brubaker JR (2021) Putting Tools in Their Place: The Role of Time and Perspective in Human-AI Collaboration for Qualitative Analysis. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2): 1-25.
Jiang JA, Wade K, Fiesler C, Brubaker JR (2021) Supporting Serendipity: Opportunities and Challenges for Human-AI Collaboration in Qualitative Analysis. In: Proceedings of the ACM on Human-Computer Interaction 5(CSCW1): 1-23.
Rafner J, Gajdacz M, Kragh G, Hjorth A, et al. (2022) Mapping Citizen Science through the Lens of Human-Centered AI. Human Computation 9(1): 66-95.
Aswad EM (2020) In a World of “Fake News,” What’s a Social Media Platform to Do? Utah Law Review 2020(4): 1009.
Garon JM (2022) When AI Goes to War: Corporate Accountability for Virtual Mass Disinformation, Algorithmic Atrocities, and Synthetic Propaganda. Northern Kentucky Law Review 49(2): 181-234.
Hartmann K, Giles K (2020) The Next Generation of Cyber-Enabled Information Warfare. In: 2020 12th International Conference on Cyber Conflict 233-250.
Brownsword R (2018) Law and Technology: Two Modes of Disruption, Three Legal Mind-Sets, and the Big Picture of Regulatory Responsibilities. Indian Journal of Law and Technology 14(1): 30-68.
Dang J, Liu L (2022) Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior 134: 107300.
Moskowitz HR, Gofman A, Beckley J, Ashman H (2006) Founding a New Science: Mind Genomics. Journal of Sensory Studies 21(3): 266-307.
Papajorgji P, Moskowitz H (2023) The ‘Average Person’ Thinking About Radicalization: A Mind Genomics Cartography. Journal of Police and Criminal Psychology 38(2): 369-380.
Levinson MH (2005) Mapping the Causes of World War I to Avoid Armageddon Today. ETC: A Review of General Semantics 62(2): 157-164.
Rapoport A (1980) Verbal Maps and Global Politics. ETC: A Review of General Semantics 37(4): 297-313.
Rapoport A (1986) General Semantics and Prospects for Peace. ETC: A Review of General Semantics 43(1): 4-14.
Sadler E (1944) One Book’s Influence: Edward Bellamy’s “Looking Backward.” The New England Quarterly 17(4): 530-555.
Vincent JE (2011) Dangerous Subjects: US War Narrative, Modern Citizenship, and the Making of National Security 1890-1964. Doctoral dissertation, University of Illinois at Urbana-Champaign.
Bayatmakou F, Mohebi A, Ahmadi A (2022) An interactive query-based approach for summarizing scientific documents. Information Discovery and Delivery 50(2): 176-191.
Fan A, Piktus A, Petroni F, Wenzek G, et al. (2020) Generating Fact Checking Briefs. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing 7147-7161.
Fitria TN (2021) QuillBot as an online tool: Students’ alternative in paraphrasing and rewriting of English writing. Englisia: Journal of Language, Education, and Humanities 9(1): 183-196.
Radev DR, Hovy E, McKeown K (2002) Introduction to the Special Issue on Summarization. Computational Linguistics 28(4): 399-408.
Safaei M, Longo J (2024) The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis. Digital Government: Research and Practice 5(1): 1-35.
The paper introduces the use of AI coupled with Mind Genomics technology (BimiLeap.com) to understand how countries think about the U.S.’ position on nuclear deterrence. The entire exercise is done using AI (ChatGPT 3.5) with the BimiLeap.com platform. The five sections cover a broad range of aspects of the topic, and are set up to be rapid, cost-effective, easy to do, and in some respects virtually automatic. The results presented in this paper required approximately six hours to generate, including the initial and secondary AI analyses. The range of aspects goes from simulated “listening to enemy strategy meetings” to key emerging ideas, and on to AI-suggested innovations, expected responses by different audiences, and finally suggested questions with both optimistic and pessimistic answers. The paper is presented as an approach, with the topics easy to change and the scalability straightforward to demonstrate.
Keywords
AI-generated simulations, Mind Genomics, Nuclear deterrence, Strategic signaling
Introduction
The U.S.’ nuclear arsenal is a crucial part of its strategic defense, but its reluctance to issue open threats has allowed hostile nations to adopt aggressive postures. To revise defense signals without compromising global stability or reputation, U.S. policymakers must evolve their strategic signaling, including bolstering military presence, conducting high-profile exercises, and issuing diplomatic statements. Monitoring hostile nations’ rhetoric and consistently communicating “red lines” is crucial for effective nuclear deterrence. AI simulation and Mind Genomics thinking offer a powerful tool for understanding high-level strategic discussions in the minds of nation-states. AI simulation platforms can analyze geopolitical data and simulate decision-making processes based on historical patterns, key events, and diplomatic or military postures. Mind Genomics, the study of how individuals or groups structure their thinking and interpret the world, can codify the thought processes of leaders and policymakers. By merging these technologies, the U.S. can simulate different nations’ responses to various U.S. actions, such as bolstering military presence, conducting strategic exercises, or issuing diplomatic statements [1,2]. This approach can break down a nation’s strategic rationale into cognitive and cultural predispositions, enabling more accurate forecasts of a nation’s response to U.S. policy changes [3,4]. It can also function as a form of strategic empathy, enabling the U.S. to craft tailored policies.
Phase 1: Simulating Private Strategy Discussions Among Opponents of the USA
AI-driven simulations of enemy conversations can provide valuable insights into American policy and strategy development (Table 1). By role-playing the enemy’s perspective, AI can anticipate potential threats and reveal weaknesses in American policies [2]. This predictive insight can mitigate risks and enhance national defense mechanisms [5]. AI simulations can also expose blind spots within current strategic thinking, revealing perspectives that American strategists may not see naturally. This effort can fuel defensive preparedness and negotiation tactics [6]. AI simulations can compress time and offer predictive outcomes based on potential decisions, allowing agencies to respond in real time to threats while staying ahead of competitors [7,8]. However, there are risks and limitations to AI-assisted simulations. One issue is over-reliance on technology and algorithms instead of human judgment and intuition. AI algorithms do not share human motivations, emotions, or spontaneity, and so may fail to capture irrational or emotional decisions. Additionally, AI simulations may not reflect real discussions because of such human factors. Despite these limitations, AI-assisted simulations can enhance understanding, speed decision-making, and help avoid blind spots.
Table 1: AI simulation of an overheard “enemy” discussion about American nuclear policy.
Phase 2: Key Ideas Emerging
The key ideas in the topic questions revolve around the concept of U.S. nuclear deterrence and strategic signaling — focusing on how the United States can communicate its readiness to use military force, including nuclear weapons, to deter adversaries without explicitly threatening them [9-12]. These key ideas emphasize the delicate balance the United States must maintain in its nuclear deterrence strategy, combining visible military strength with subtle diplomatic moves to ensure that its adversaries perceive its willingness to defend its interests while avoiding global instability. Table 2 shows 12 key ideas emerging from the simulation presented in Table 1 and a subsequent AI-based analysis by the Mind Genomics platform, BimiLeap.com.
Table 2: Key ideas emerging from the AI simulation of an enemy strategy meeting, and then a second and further AI analysis. The analyses were done by the Mind Genomics platform, BimiLeap.com.
Phase 3: Innovations in Products and Services
The themes associated with U.S. nuclear deterrence and strategic signaling offer several conceptual frameworks for developing new products, services, or experiences [13-15]. Table 3 presents products and services suggested by AI, based upon the material presented in Tables 1 and 2; the AI evaluated the information in both tables and generated the suggestions shown in Table 3.
Table 3: Innovation in products and services, suggested by AI, and based upon the information shown in Tables 1 and 2.
Phase 4: The Different Players (Positive Versus Negative Audiences)
Several distinct audiences would have a strong interest in the topic questions, each bringing a unique perspective based on their professional, academic, or geopolitical involvements [16-18]. These audiences are shown in Table 4 and comprise both those who are “interested” and those who are not interested, viz., possibly “hostile.” Once again, the analysis was done after the fact, in a second pass through the data to provide more insight by AI.
Table 4: Positive and negative audiences.
Phase 5: AI-Generated Questions and Answers for Further Thought
The AI was given the situation summarized in Table 2. The AI was then instructed to create questions, and to give both an optimistic answer and a pessimistic answer to each question. Table 5 shows the results. The benefit here is that the AI can generate a great number of questions in a short time and provide answers to them [4,19-21].
Table 5: AI generated questions and two answers for each question; optimistic versus pessimistic, respectively.
Discussion and Conclusions
AI combined with Mind Genomics works as a safe testing ground where various communication scenarios — words, actions, or threats— can be “played forward” to understand precisely how they may backfire, escalate tensions, or bring about desired mediations. Such tools would also enable the U.S. to make faster, informed decisions in unprecedented crises, whether arising from smaller rogue nations or larger superpowers. By anticipating hostile rhetoric, understanding a nation’s internal political conditions, and knowing exactly where the “red lines” fall, AI simulations can precisely calculate the tipping point at which a country might enter an irreversible aggressive stance. Thus, this system works like a blueprint for creating not only stronger deterrence policies but also more effective diplomatic resolutions. Finally, the long-term potential of these simulations lies in their ability to integrate into international consensus-building. For deterrence to be effective, it must not only be unilateral but shared among allies. This enhanced AI and Mind Genomics model could be a framework that multiple democratic governments employ to analyze the decisions of shared adversaries. In doing so, the U.S. would gain not only tactical advantages but help contribute to a shared platform of predictive thinking, ensuring stability and global peace.
Based upon the AI exercise reported here, the key ideas related to U.S. nuclear deterrence and strategic signaling can be grouped into six distinct themes:
Perception and Willingness
Perceptions of U.S. Willingness to Use Nuclear Weapons. Some adversaries doubt the U.S.’ willingness to use nuclear weapons, leading to questions about the effectiveness of its deterrent posture.
Actions and Military Readiness
Actions to Signal Deterrence. The U.S. can engage in military actions such as exercises, missile testing, and tough diplomatic messaging to show its readiness and resolve. These actions not only test U.S. readiness, but they are also key components in strategic signaling to deter adversaries.
Diplomatic Efforts and Communication
Importance of Diplomatic Language. Strategic use of diplomatic language can underscore U.S. seriousness without escalating tensions. Diplomatic support of public signaling is pivotal, both with adversaries and allies, ensuring that messages are clearly communicated through multiple channels. Establishing and clearly communicating red lines helps avoid ambiguity and ensures adversaries are clear about the consequences of crossing thresholds.
Adversary Reactions and Feedback
Adversaries’ denouncements, such as accusing the U.S. of “escalation,” might paradoxically indicate that U.S. signaling is effective and being acknowledged. Visible shifts in rhetoric or behavior toward calls for diplomacy from adversaries can be viewed as signs of effective deterrence.
Geopolitical Assessment and Strategy Adjustment
Strategic signaling must be informed by constant assessments of adversarial military activities and propaganda to ensure proper messaging and deterrent force are applied. Signs of successful deterrence include adversaries scaling down aggressive maneuvers and showing a willingness to negotiate.
Failures, Escalation, and Risk Management
If adversaries respond to U.S. signaling with increased military presence or aggression, it points to a failure in deterrence, necessitating strategic recalibration. The U.S. must strike a balance between projecting sufficient strength to deter adversaries without causing unintended escalation or regional destabilization.
Acknowledgments
The authors gratefully acknowledge the ongoing help of Vanessa Marie B. Arcenas and Isabelle Porat in the preparation of this and companion manuscripts.
References
Cox J, Williams H (2021) The Unavoidable Technology: How Artificial Intelligence Can Strengthen Nuclear The Washington Quarterly 44(1): 69-85.
Davis PK, Bracken P (2022) Artificial intelligence for wargaming and modeling. The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology.
Horowitz M, Kania EB, Allen GC, Scharre P (2018) Strategic Competition in an Era of Artificial Intelligence. Center for a New American Security.
Johnson J (2021) Deterrence in the age of artificial intelligence & autonomy: a paradigm shift in nuclear deterrence theory and practice? Defense & Security Analysis 36: 422-448.
Goldfarb A, Lindsay JR (2022) Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War. International Security 46: 7-50.
Johnson J (2019) Artificial intelligence & future warfare: implications for international security. Defense & Security Analysis 35: 147-169.
Layton P (2021) Fighting Artificial Intelligence Battles: Operational Concepts for Future AI-Enabled Wars. Joint Studies Paper Series, 4.
Turnitsa C, Blais C, Tolk A (2022) Simulation and Wargaming. John Wiley & Sons, Inc.
Borges AF, Laurindo FJ, Spínola MM, Gonçalves RF, et al. (2021) The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. International Journal of Information Management 57: 102225.
Flournoy MA, Lyons RP (2016) Sustaining and Enhancing the US Military’s Technology Edge. Strategic Studies Quarterly 10: 3-14.
Morgan FE, Boudreaux B, Lohn AJ, Ashby M, et al. (2020) Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation.
Stone M, Aravopoulou E, Ekinci Y, Evans G, et al. (2020) Artificial intelligence (AI) in strategic marketing decision-making: a research agenda. The Bottom Line 33: 183-200.
Mühlroth C, Grottke M (2020) Artificial Intelligence in Innovation: How to Spot Emerging Trends and Technologies. IEEE Transactions on Engineering Management 99: 1-18.
Sayler KM (2020) Artificial Intelligence and National Security. Congressional Research Service, R45178.
Scharre P (2023) Four Battlegrounds: Power in the Age of Artificial Intelligence. W.W. Norton & Company.
Ali MB, Wood-Harper T (2022) Artificial Intelligence (AI) as a Decision-Making Tool to Control Crisis Situations. In: Ali M (ed.), Future Role of Sustainable Innovative Technologies in Crisis Management, IGI Global, 71-83.
Johnson J (2021) Artificial Intelligence and the Future of Warfare: The USA, China, and Strategic Stability. Manchester University Press.
Tsotniashvili Z (2024) Silicon Tactics: Unravelling the Role of Artificial Intelligence in the Information Battlefield of the Ukraine War. Asian Journal of Research 9: 54-64.
Aydin Ö, Karaarslan E (2023) Is ChatGPT Leading Generative AI? What is Beyond Expectations? Academic Platform Journal of Engineering and Smart Systems 11: 118-134.
Ehsan U, Wintersberger P, Liao QV, Mara M, et al. (2021) Operationalizing Human-Centered Perspectives in Explainable AI. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1-6.
Rospigliosi P (2023) Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interactive Learning Environments 31: 1-3.
In this paper, we show that quard veins in the Lusatian Massif were primarily generated by supercritical fluids coming from mantle deeps, rising very fast into the crust region. For the substantiation of our conclusions, we use the occurrence of lonsdaleite and microdiamonds in the root zones of quard crystals from these quard veins. Hydrothermal fluids afterward reworked the so primarily formed veins in more than one step. This hydrothermal activity hides the primary origin of the veins. For corroboration of the proofs, we used other examples from the Saxon Granulite Massiv, the Central Erzgebirge, and E-Thuringia.
In a series of publications, the author and co-authors [1-8] have used melt inclusions to characterize the pegmatite-forming melt and the high-temperature quartz veins in the Lusatian Massif. Primarily, these studies focused on single aspects of new minerals in this region and on the formation of the respective geological objects. That also includes the determination of pseudo-binary solvus curves and the enrichment of some elements related to such curves, which often show a Lorentzian distribution [2,3,7,8]. In this contribution, we will show that all studied objects clearly indicate that supercritical fluids or melts triggered the formation of these apparently different objects (granites, pegmatites, and quartz veins). The first indications came from the high intrusion velocity of the Königshainer granite melt [5], about 700 to 1000 m/year. Later, this estimate increased significantly because larger magmatic epidote crystals could be found. Up to this point, a relationship between the granite-forming melt, the formation of quartz veins, and supercritical fluids or melts had not been established. However, the similarities of the solvus curves of all studied objects were a solid hint of a uniform process. In addition, the Lorentzian distribution of some main and trace elements around the solvus crest demands a process of overriding importance. The finding of diamond and water-rich stishovite in a different geological unit, the Saxonian Granulite Massif, drew attention to processes that had not previously been considered important [8,9]. An earlier paper (Thomas et al. 2020) [10] showed, using the example of emeralds from the Habachtal, that new results from melt inclusions in this mineral conflicted with the accepted geological model. By accepting supercritical fluids, this conflict can be resolved. Here, we will show that the formation of the Königshainer granite stock and a large part of the quartz veins in the Lusatian region were influenced or generated by supercritical fluids or melts coming from mantle depths.
Sample Material and Earlier Results
Details of the sample material used are given in the references above. A short explanation is necessary for the quartz samples from Lauba. For the preparation (grinding and polishing) of quartz thick sections (500 µm thick), no diamond was used. For polishing this quartz, we used a suspension of silica in a ten percent KOH solution on the Speed Fam polishing machine of the Danish company Haldor Topsøe. From both sides, 100 µm were removed. Generally, we tried to remove, using an ultrasonic bath, all diamond residues that may have been introduced into the samples by grinding and polishing (see results and discussion in Thomas et al., 2023) [9]. A large number of quartz veins in the Lusatian Massif were described by Bartnik (1969) [11]. Despite extensive effort, we could study only a small number of samples. For a sample to be usable, the root zone must contain melt inclusions, because diamonds, etc., are present only in these zones. The water-clear parts of the quartz crystals do not include such minerals; these parts are formed by later activation and recrystallization.
The following samples were studied:
1. Quartz from the Königshainer granite [4,5].
2. Quartz crystals from Sproitz [12].
3. Quartz crystal from Caminau [13].
4. Quartz crystals from Lauba [13].
5. Quartz crystals from Oppach [2,3,6].
6. Quartz crystals from Steinigtwolmsdorf [7,16].
7. Massive blue quartz with a brownish coating along fissures from Berthelsdorf near Neustadt, W-Lusatia [14].
Unusual in sample 7 is the abundance of graphite (Raman band: 1579.4 ± 3.0 cm-1, FWHM = 21.5 ± 6.4 cm-1; FWHM: full width at half maximum) and the occurrence of tiny crystals of thortveitite (R061065, 97% match; see Lafuente et al., 2016) [15].
In the root zone, there are many melt inclusions. Rehomogenization at different temperatures and a pressure of ~4.5 kbar, followed by determination of the water concentration in the melt inclusions, gives a pseudo-binary solvus curve (Figure 2).
Figure 1: A typical quartz crystal from Steinigtwolmsdorf/Lusatia [16].
Figure 2: Pseudo-binary solvus curve (temperature vs. water concentration) derived from re-homogenized melt inclusions in quartz from Steinigtwolmsdorf/Lusatia. CP is the critical point (740°C, ~4.5 kbar, 27% H2O). It is very typical for such solvus curves that the distribution of some principal and trace elements obeys a Lorentzian curve, as in Figure 3, which shows the distribution of NaCl and CaCl2 versus water concentration.
What we see from Figure 3 is that Na and Ca behave reciprocally. That means that at or near the critical point of a solvus curve, a strong separation of elements, and maybe also of isotopes, is probable. That would be a promising new research theme for the future. As with the NaCl distribution in Figure 3, we found similar distributions in many examples (Thomas et al. 2019a and 2022a) [8]. For Oppach, extremely high sulfate concentrations are typical (for example, 21.3% SO4; see Thomas 2024a) [17]. The opening of a solvus curve around the critical point starts with a singularity [18]. According to these authors, the solvus and the Lorentzian distribution of elements around the critical point are strong proof of the transition from supercritical to critical and subcritical conditions. The evidence of lonsdaleite and diamond, derived from mantle depths, in rocks and minerals of the upper crust underlines the existence of supercritical fluids that transport material from the mantle into the crust. Therefore, the finding of lonsdaleite and diamond as inclusions in rocks and in quartz of the widespread quartz veins in the Lusatian Massif opens a new approach.
Figure 3: Schematic Lorentzian distributions of NaCl and CaCl2 in melt inclusions from Steinigtwolmsdorf/Lusatia as a function of the water content. The plot is highly schematic because the determination of NaCl and CaCl2 in such highly complex fluid systems is not yet adequately established.
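For readers unfamiliar with the term, a standard Lorentzian profile of the kind invoked above can be written as follows; the symbols are generic placeholders and are not fit parameters reported in the cited studies.

```latex
% Standard Lorentzian (Cauchy) profile assumed for the element-vs-water-content
% distributions; symbols are generic, not values from the cited studies.
\[
  c(w) \;=\; c_0 \;+\; \frac{2A}{\pi}\,
  \frac{\Gamma}{4\,(w - w_c)^2 + \Gamma^2}
\]
% c(w): element concentration at water content w
% c_0: baseline concentration, A: peak area
% w_c: position of the solvus crest (critical water content)
% \Gamma: full width at half maximum (FWHM)
```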
Phenotypes of Lonsdaleite and Diamond in More Crustal Rocks and Quartz Veins
Here, we show some representative lonsdaleite and diamond crystals that occur in minerals from atypical crustal positions (Figures 4-6).
Figure 4: Diamond (D) in blue quartz from Berthelsdorf near Neustadt, W-Lusatian. The rounded light brown diamond grain is about 20 µm deep. The fluid inclusion right beside the diamond demonstrates that the hydrothermal recrystallization of the quartz did not affect the diamond. Fl: fluid inclusion
Figure 5: Raman spectrum of the rounded diamond inclusion in blue quartz from Berthelsdorf near Neustadt, W-Lusatian. The diamond inclusion is 20 µm deep under the surface.
Figure 6: Needles in the root zone of quartz from Caminau near Königswartha, Lusatian Massif. The upper photomicrograph shows a jumble of needles in quartz composed of lonsdaleite (Lon), hydroxylbastnäsite-Ce [Ce(CO3)(OH)], and vapor. The lower photomicrograph shows details of such a needle. The needle lies deep enough below the surface (45 µm) to rule out formation by contamination. V: vapor phase
The formula given in Figure 6 is the ideal chemistry. According to Anthony et al. (2003) [19], the more realistic formula is (Ce, La)(CO3)(OH, F) (see Lafuente et al. 2016 [15]: RRUFF R060283). According to Kirillov (1964) [20], hydroxylbastnäsite-Ce is typically a late phase of the hydrothermal stage, formed by the dissolution and reprecipitation of earlier carbonatite minerals. The main Raman band of the lonsdaleite shown in Figure 6 is 1318.7 ± 4.6 cm-1. This example demonstrates a possibly late formation of lonsdaleite. Some needles are even bent. Often, lonsdaleite forms prismatic crystals or whiskers [2,3]. A reference spectrum for natural lonsdaleite (Kumdykol diamond deposit, North Kazakhstan) is given by Shumilova et al., 2011 [21]. According to Németh et al. (2014) [22], lonsdaleite does not exist as discrete crystals. In this contribution, we cannot resolve this question because we use only Raman spectroscopy. However, some observations speak for the existence of lonsdaleite as whisker-like crystals [3; Figure 7].
Figure 7: The photomicrograph (a) shows an older, grayer quartz cluster (marked with “+”) from Sproitz with diamond, lonsdaleite, and graphite (black) in hydrothermal quartz (marked with “*”). The Raman spectra (b) show the differences between the two quartz generations: red for the older quartz and blue for the hydrothermal quartz.
Figure 7 shows an isolated quartz cluster with lonsdaleite, diamond, and graphite in the root zone of a quartz crystal from Sproitz (sample SP3) from the N-slope of the Gemeindberg in the rural district of Görlitz/Lusatian Massif [12]. The quartz cluster shows a different Raman spectrum (red) than the matrix quartz (blue spectrum). This quartz cluster was probably originally a different SiO2 polymorph formed at high pressure (coesite?). For comparison, we have also included the results on different rocks from Middle Saxony and Thuringia/E-Germany (see Table 2 and Figure 8), as well as Thomas and Recknagel 2024 and Thomas and Trinkler 2024 [2,3]. Figure 8 shows an example of diamond-bearing perovskite in the granulite rock from Waldheim/Saxony (see also Thomas et al., 2022b) [23]. The prismatic form of the diamonds is unusual. Perhaps lonsdaleite was the precursor of this diamond.
Figure 8: Diamond (D) in perovskite (Prv) [CaTiO3] embedded in rutile (Rt) as a foreign mineral inclusion in the prismatine rock from Waldheim/Saxony, E-Germany.
The occurrence of perovskite inclusions in diamonds indicates, according to Nestola et al. (2018) [24], the recycling of oceanic crust into the lower mantle. Figure 8 raises the reverse question: does the diamond embedded in perovskite, shielded by rutile, also come from the lower mantle? The earlier finding of H2O-rich stishovite (7.5 GPa at 1000°C, corresponding to a depth of 230 km) in the same rock [8] speaks for it.
Methodology
The techniques used (microscopy, homogenization measurements on melt inclusions, Raman spectroscopy, and electron microprobe analysis) are described in the references cited above. Because Raman spectroscopy is crucial to this study, we give more details here.
Raman Spectroscopy
For the first identification of the mineral inclusions in quartz and for all microscopic and Raman spectrometric studies, we used a petrographic polarization microscope with a rotating stage coupled to a RamMics R532 Raman spectrometer working in the spectral range of 0-4000 cm-1 with a 50 mW single-mode 532 nm laser. Details are given in Thomas et al. (2022, 2023) [18]. For the routine Raman measurements, we used the Olympus long-distance LMPLN100x objective (100x). The samples were carefully cleaned to remove diamond contamination introduced during preparation. For the Raman determinations, we used only crystals lying 20 µm or more below the sample surface [18]. The laser energy on the sample is adjustable down to 0.02 mW. The Raman band positions were calibrated before and after each series of measurements using the Si band of a semiconductor-grade silicon single crystal. The run-to-run repeatability of the band position (based on 20 measurements each) was ±0.3 cm-1 for Si (520.4 ± 0.3 cm-1) and ±0.5 cm-1 for diamond (1332.3 ± 0.5 cm-1) over the range of 50-4000 cm-1. We used a natural diamond crystal as a reference (for more information, see Thomas et al. 2022b) [23]. Only crystals below the surface should be measured to prevent diamond contamination introduced by preparation (grinding, polishing). If the lonsdaleite and diamond crystals have needle- and whisker-like or other characteristic forms (disk-like or spherical crystals with very smooth surfaces, spherical sectors), the risk of such contamination is strongly reduced. We did not heat the quartz samples to homogenize the melt inclusions present, in order to prevent extensive transformation of the lonsdaleite and diamond into graphite.
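The calibration described above amounts to a constant-offset correction of each measurement series against the Si reference band. The following minimal Python sketch illustrates the idea only; the function names and example values are illustrative assumptions, not the authors' acquisition software or measured data.

```python
# Minimal sketch of an offset calibration against the Si reference band.
# Assumptions: a constant-offset model and illustrative example values;
# the names below are hypothetical, not taken from the spectrometer software.

SI_REFERENCE_CM1 = 520.4  # Si band position used for calibration (see text)

def calibration_offset(si_measurements):
    """Mean deviation of the measured Si band positions from the reference."""
    return sum(si_measurements) / len(si_measurements) - SI_REFERENCE_CM1

def correct_band(measured_cm1, offset_cm1):
    """Apply the offset correction to a measured lonsdaleite/diamond band."""
    return measured_cm1 - offset_cm1

# Illustrative series of Si control measurements taken before/after a run
si_runs = [520.3, 520.5, 520.4, 520.6, 520.2, 520.4, 520.5, 520.3]
offset = calibration_offset(si_runs)
print(f"calibration offset: {offset:+.2f} cm-1")
print(f"corrected diamond band: {correct_band(1332.6, offset):.1f} cm-1")
```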
Results
Raman Data
First, we show that diamonds are present in all studied quartz samples that contain a root zone. In the more or less water-clear parts of the quartz crystals, we have never found lonsdaleite or diamond. We have found a general evolutionary sequence: lonsdaleite → lonsdaleite + diamond → diamond → nano-diamond + graphite → graphite (or, more generally, carbonaceous material). Each lonsdaleite or diamond phase shows a more or less strong graphite band. A Raman spectrum of lonsdaleite is shown in Figure 9. Table 1 shows the results of our studies on lonsdaleite and diamond crystals, mostly in quartz from the Lusatian Massif. For comparison, results for the Middle-Saxonian and Thuringian regions are given in Table 2.
Figure 9: Raman spectrum of lonsdaleite in quartz from Steinigtwolmsdorf/Lusatia (Raman band at 1317.8 cm-1, FWHM = 31.9 cm-1, Raman mode: A1g; see Wu 2007 [25]).
Table 1: Raman spectrometrically determined main bands of lonsdaleite and diamond in the Königshainer granite and in some quartz veins of the Lusatian Massif.
Location | Mineral | Host | Raman band (cm-1) ± 1σ | FWHM (cm-1) | Number of grains
Königshain | Lonsdaleite | Feldspar | 1317.6 ± 4.5 | 98.0 ± 2.3 | 5
Königshain | Diamond | Feldspar | 1331.7 ± 3.5 | 97.3 ± 4.4 | 6
Königshain | Diamond | Zircon | 1336.7 ± 4.9 | 75.9 ± 17.7 | 9
Sproitz SP3 | Lonsdaleite | Quartz | 1319.0 ± 2.7 | 43.2 ± 7.2 | 3
Sproitz SP3 | Diamond | Quartz | 1333.7 ± 5.3 | 60.6 ± 24.4 | 8
Caminau | Lonsdaleite | Quartz | 1318.7 ± 4.6 | 36.8 ± 1.5 | 6
Caminau | Diamond | Quartz | 1332.2 ± 3.4 | 53.9 ± 7.1 | 13
Lauba | Lonsdaleite | Quartz | 1316.5 ± 1.1 | 9.6 | 3
Lauba | Diamond | Quartz | 1327.5 ± 5.2 | 45.9 ± 29.3 | 6
Oppach | Lonsdaleite | Quartz | 1316.5 | 56.7 | 1
Oppach | Diamond | Quartz | 1329.6 ± 4.7 | 75.1 ± 9.0 | 10
Steinigtwolmsdorf | Lonsdaleite | Quartz | 1317.2 ± 0.4 | 31.3 ± 2.2 | 5
Steinigtwolmsdorf | Diamond | Quartz | 1331.2 ± 5.0 | 50.9 ± 11.0 | 6
Berthelsdorf bei Neustadt | Diamond | Blue quartz | 1331.8 ± 3.8 | 64.9 ± 14.3 | 14
Table 2: Comparison of lonsdaleite and diamond related to supercritical fluids or melts in Middle-Saxonian and Thuringian occurrences.
Location | Mineral | Host | Raman band (cm-1) ± 1σ | FWHM (cm-1) | Number of grains
Waldheim, Saxony | Lonsdaleite | Zircon, rutile | 1320.4 ± 3.4 | 74.8 ± 8.8 | 7
Waldheim, Saxony | Diamond | Zircon, rutile | 1331.5 ± 3.5 | 78.3 ± 10.7 | 22
Waldheim, Saxony | Diamond | Zircon | 1336.7 ± 4.9 | 75.9 ± 17.7 | 9
Waldheim, Saxony | Diamond | Perovskite | 1331.8 ± 1.2 | 65.6 ± 8.1 | 10
Greifenstein granite | Diamond | Beryl | 1328.6 ± 5.6 | ~60 | 14
Greifenstein granite | Diamond | Quartz | 1333.7 ± 5.3 | 60.6 ± 24.4 | 8
Ehrenfriedersdorf | Lonsdaleite | Quartz | 1318.6 | 100 | 1
Ehrenfriedersdorf | Diamond | Quartz | 1331.5 | 46.0 | 1
Annaberg granite | Diamond | Quartz | 1339.4 ± 12.1 | 41.8 ± 12.0 | 20
Zinnwald | Lonsdaleite | Fluorite | 1318.0 ± 3.8 | 9.6 | 3
Sadisdorf | Diamond | Fluorite | 1331.8 ± 5.2 | 83.1 ± 13.9 | 11
Sadisdorf | Lonsdaleite | Fluorite | 1316.5 | n.d. | 5
Cunsdorf, Thuringia | Lonsdaleite | Quartz | 1322.2 ± 1.31 | 75.6 ± 10.9 | 47
Cunsdorf, Thuringia | Diamond | Quartz | 1329.6 ± 4.7 | 71.6 ± 28.8 | 10
Remark
Some needles in the quartz from Caminau contain long, thin lonsdaleite crystal sections with Raman bands between 1311.8 and 1313.0 cm-1 (Raman mode A1g), corresponding, according to Wu (2007) [25], to the 2H polytype of diamond (data not included in Table 1); their spectra are similar to the Raman spectrum in Figure 9.
From both tables, we obtain three groups for lonsdaleite and diamond (I: lonsdaleite, II: diamond, III: diamond under mechanical stress [26]):
I: 1320.6 ± 2.3 cm-1, FWHM = 65.7 ± 22.8 cm-1, n = 85
II: 1331.0 ± 1.9 cm-1, FWHM = 68.6 ± 13.4 cm-1, n = 121
III: 1338.8 ± 1.2 cm-1, FWHM = 52.4 ± 15.8 cm-1, n = 38
Here, n is the number of measured lonsdaleite and diamond crystals.
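These group values are pooled statistics over the individual grains listed in Tables 1 and 2. As a purely illustrative sketch (not the authors' evaluation routine, and with placeholder values instead of the measured data set), the grouping and the mean ± standard deviation per group can be reproduced as follows; the classification thresholds are assumptions chosen only to separate the three clusters.

```python
# Illustrative grouping of Raman band positions into the three clusters
# discussed above (I: lonsdaleite, II: diamond, III: diamond under stress).
# The measurement list and the thresholds are placeholders/assumptions.
import statistics

bands_cm1 = [1317.6, 1319.0, 1321.5,   # lonsdaleite-like positions
             1330.8, 1331.7, 1332.2,   # diamond-like positions
             1337.9, 1339.4, 1340.1]   # diamond under mechanical stress

def classify(band_cm1):
    if band_cm1 < 1326.0:
        return "I: lonsdaleite"
    if band_cm1 < 1336.0:
        return "II: diamond"
    return "III: diamond under mechanical stress"

groups = {}
for band in bands_cm1:
    groups.setdefault(classify(band), []).append(band)

for name, values in groups.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values) if len(values) > 1 else 0.0
    print(f"{name}: {mean:.1f} ± {sd:.1f} cm-1 (n = {len(values)})")
```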
All studied lonsdaleite and diamond crystals show one or two graphite-like G and D2 bands of solid carbonaceous material [26,27]. Figure 10 shows the frequency distribution of the G-band position for lonsdaleite and diamond.
Figure 10: Frequency distribution of the G-band for lonsdaleite and diamond.
The Gaussian distribution data for the G-band position (Figure 10) of lonsdaleite and diamond are given in Table 3; the corresponding FWHM data are shown in Figure 11 and Table 4. According to Frezzotti (2019) [28], the Raman analyses show clear evidence that the nano-sized diamonds, and obviously also the lonsdaleite crystals, have hybrid structures consisting of nano-diamond and nano-lonsdaleite plus carbon groups, indicated by the almost always present G-bands assigned to the C=C stretching vibrations (E2g) of graphite [29].
Table 3: Gaussian data of the G band for lonsdaleite and diamond (r2=0.91774).
Raman band | Area | Center (cm-1) | Width (cm-1) | Height
1 (green) | 100.29 | 1556.4 | 14.65 | 5.46
2 (blue) | 380.86 | 1580.0 | 17.44 | 17.42
3 (magenta) | 114.11 | 1604.1 | 12.73 | 9.03
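For readers who want to reproduce the composite G-band profile from the fitted components in Table 3, the following short Python sketch simply sums the three Gaussians. It assumes an amplitude-form Gaussian, y = H*exp(-2*(x - xc)^2/w^2), which reproduces the tabulated areas (approximately H*w*sqrt(pi/2)) for bands 1 and 2; the exact convention of the fitting software is not stated in the text and is therefore an assumption.

```python
# Sketch: composite G-band profile from the three Gaussian components of
# Table 3. The Gaussian convention y = H*exp(-2*(x-xc)^2/w^2) is an
# assumption (it matches area ~ H*w*sqrt(pi/2) for bands 1 and 2).
import math

components = [  # (center xc / cm-1, width w / cm-1, height H)
    (1556.4, 14.65, 5.46),   # band 1 (green)
    (1580.0, 17.44, 17.42),  # band 2 (blue)
    (1604.1, 12.73, 9.03),   # band 3 (magenta)
]

def g_band_profile(x_cm1):
    """Summed intensity of the three Gaussian components at x_cm1."""
    return sum(h * math.exp(-2.0 * (x_cm1 - xc) ** 2 / w ** 2)
               for xc, w, h in components)

for x in range(1540, 1621, 10):
    print(f"{x} cm-1: intensity ~ {g_band_profile(x):.2f}")
```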
Table 4: Gaussian distribution data for the FWHM of both components (green and red) for lonsdaleite and diamond found in crustal rocks (granite, granulite) and minerals (fluorite, quartz, perovskite, and zircon).
Raman band | Area | Center (cm-1) | Width (cm-1) | Height
1 (green) | 334.81 | 59.23 | 26.04 | 10.26
2 (red) | 251.25 | 71.35 | 13.16 | 15.23
Figure 11: Frequency distribution of the FWHM for the G-bands for lonsdaleite and diamond (r2 = 0.95066).
Discussion
The proof of lonsdaleite and diamond in crustal surroundings, together with the excellent solvus curves constructed from melt inclusions and the Lorentzian distribution of some elements, provides strong evidence for supercritical fluids rising very rapidly from mantle depths and carrying microcrystals of lonsdaleite and diamond as a load into the crustal region. The finding of these minerals in the root zones of quartz veins in the Lusatian Massif demonstrates straightforwardly that the quartz veins began with the intrusion of supercritical fluids, possibly of Variscan age, carrying lonsdaleite and diamond. Later, these early primary quartz veins were reworked in multiple stages at lower temperatures by intensive hydrothermal activity. Through this activity, much of the evidence of the primary origin was destroyed; only a very intense search can reveal such remnants. Meinel (2022) [30] discusses intensively the genesis of diamonds by very high internal volatile pressure in closed systems at relatively shallow depths. Thomas and co-authors [2,3,7,8,18] have shown that supercritical fluids/melts are detectable in the whole region between Lusatia, the East and Middle Erzgebirge, N-Bohemia [31], and E-Thuringia, and elsewhere (for example, the emerald deposit in the Habachtal, Austria [10]). Sometimes, however, lonsdaleite and diamond occur as needle- and whisker-like crystals instead of smooth spherical microcrystals transported by supercritical fluids or melts. The same result was obtained for moissanite whiskers in beryl from Ehrenfriedersdorf (Thomas 2023b) [32]. Such crystals must have formed in situ in the upper crust. Therefore, the question arises: is the coincidence of supercritical fluids or melts with cooler upper-crustal granites an excellent setting for outstanding processes, namely solvus formation, extraordinary element enrichment in the form of the Lorentzian distribution, and rapid changes of the viscosity and diffusivity of the supercritical fluids toward near- and sub-critical conditions? This question opens new perspectives for future research.
Acknowledgments
We dedicate this paper to Prof. Hans Jürgen Rösler (1920-2009), Prof. Otto Leeder (1933-2014), both from the Mining Academy Freiberg, and Dr. Günter Meinel (1933-2012) from Jena.
References
Thomas R (2023a) The Königshainer granite: Diamond inclusions in zircon. Geol Earth Mar Sci 5: 1-4.
Thomas R, Recknagel U (2024) Lonsdaleite, diamond, and graphite in a lamprophyre: Minette from East-Thuringia/Germany. Geol Earth Mar Sci 6: 1-4.
Thomas R, Trinkler M (2024) Monocrystalline lonsdaleite in REE-rich fluorite from Sadisdorf and Zinnwald/E-Erzgebirge. Geol Earth Mar Sci 6: 1-5.
Thomas R, Davidson P, Rhede D, Leh M (2009) The miarolitic pegmatites from the Königshain: a contribution to understanding the genesis of pegmatites. Contrib Mineral Petrol 157: 505-523.
Thomas R, Davidson P (2016) Origin of miarolitic pegmatites in the Königshain granite/Lusatia. Lithos. 260: 225-241.
Thomas R, Davidson P (2017) Hingganite-(Y) from a small aplite vein in granodiorite from Oppach, Lusatian Mountains, E-Germany. Mineralogy and Petrology 111: 821-826.
Thomas R, Davidson P, Appel K (2019) The enhanced element enrichment in the supercritical states of granite–pegmatite systems. Acta Geochim 38: 335-349.
Thomas R, Davidson P, Rericha A, Voznyak D (2022a) Water-Rich Melt Inclusions as “Frozen” Samples of the Supercritical State in Granites and Pegmatites Reveal Extreme Element Enrichment Resulting Under Non-Equilibrium Conditions. Miner J 44: 3-15.
Thomas R, Davidson P, Rericha A, Recknagel U (2023) Ultrahigh-pressure mineral inclusions in a crustal granite: Evidence for a novel transcrustal transport mechanism. Geosciences. 94: 1-13.
Thomas R, Davidson P, Rericha A (2020) Emerald from the Habachtal: new observations. Mineralogy and Petrology 114: 161-173.
Bartnik D (1969) Die Quarzgänge im Lausitzer Geologie 18: 21-40.
Schwarz D, Tietz O, Rogalla O, Rosch F (2015). Ein Quarzgang am Gemeindeberg von Kollm in der Berichte der Naturwissenschaftlichen Gesellschaft der Oberlausitz. 23: 139-150.
Lange W, Tischendorf G, Krause U (2004) Minerale der Verlag G. Oettel. Pg: 258.
Witzke T, Giesler T (2011). Neufunde und Neubestimmungen aus der Lausitz (Sachsen), Part 3. Aufschluss 62.
Lafuente B, Downs RT, Yang H, Stone N (2016) The power of databases: The RRUFF project. In: Highlights in Mineralogical Crystallography; Armbruster T, Danisi RM, Eds.; De Gruyter: Berlin, Germany; München, Germany; Boston, MA, USA: 1-30. ISBN 9783110417104.
Thomas R, Davidson P, Rericha A, Tietz O (2019b) Eine außergewöhnliche Einschlussparagenese im Quartz von Steinigtwolmsdorf/Oberlausitz. Berichte der Naturwissenschaftlichen Gesellschaft der Oberlausitz. 27: 161-172.
Thomas R (2024a) Melt inclusions in an aplite vein in granodiorite of the Lusatian Massif: Extreme alkali sulfate enrichment. Geol Earth Mar Sci 6: 1-5.
Thomas R, Rericha A (2023) The function of supercritical fluids for the solvus formation and enrichment of critical elements. Geol Earth Mar Sci 5: 1-4.
Anthony JW, Bideaux RA, Bladh KW, Nichols MC (2003) Handbook of Mineralogy, 5. Mineral Data Publishing, Tucson, Arizona. Pg: 813.
Kirillov AS (1964) Hydroxyl bastnäsite, a new variety of bastnäsite. Doklady Akademii Nauk SSSR 159: 93-95 (translation).
Shumilova TG, Mayer E, Isaenko SI (2011) Natural monocrystalline lonsdaleite. Doklady Earth Sciences 441: 1552-1554.
Németh P, Garvie LAJ, Aoki T, Dubrovinskaia N, Dubrovinsky L (2014) Lonsdaleite is faulted and twinned cubic diamond and does not exist as a discrete material. Nature Communications 5: 1-5.
Thomas R, Davidson P, Rericha A, Recknagel U (2022b) Discovery of stishovite in the prismatine-bearing granulite from Waldheim, Germany: A possible role of supercritical fluids of ultrahigh-pressure origin. Geosciences. 12: 1-15.
Nestola F, Korolev N, Kopylova M, Rotiroti N, Pearson DG, et al. (2018) CaSiO3 perovskite in diamond indicates the recycling of oceanic crust into the lower mantle. Nature 555: 237-241.
Wu BR (2007) Structural and vibrational properties of the 6H diamond: First-principles study. Diamond and Related Materials 16: 21-28.
Zaitsev AM (2001) Optical Properties of Diamond: A Data Handbook. Springer.
Beyssac O, Goffé B, Chopin C, Rouzaud JN (2002) Raman spectra of carbonaceous material in metasediments: a new geothermometer. J Metamorphic Geol 20: 859-871.
Frezzotti ML (2019) Diamond growth from organic compounds in hydrous fluids deep within the Earth. Nature Communications 10: 1-8.
Gogotsi YG, Kailer A, Nickel KG (1998) Pressure-induced phase transformations in diamond. Journal of Applied Physics 84: 1299-1304.
Meinel G (2022) Betrachtungen zum irreversiblen Verlauf der Erdgeschichte: Ein Versuch zur Beschränkung des aktualistischen Prinzips in der Geologie auf nicht von der geologischen Entwicklung abhängige Vorgänge. Berlin und Pg: 231.
Thomas R (2024b) Rhombohedral cassiterite as inclusions in tetragonal cassiterite from Slavkovsky les, North Bohemia. Geol Earth Mar Sci 6: 1-6.
Thomas R (2023b) Growth of SiC whiskers in beryl by a natural supercritical VLS process. Aspects in Mining and Mineral Science 11: 1292-1297.
In this paper, a trans-diagnostic approach to the treatment of trauma-related mental disorders is presented. The clinical rationale for the approach is described along with several core principles of the treatment model. These include: the problem of attachment to the perpetrator; the locus of control shift; and the problem is not the problem. Rather than focusing on diagnoses, in this approach the focus is on the underlying conflicts, cognitive errors and maladaptive coping strategies. Psychiatric diagnoses are usually made within what the author calls the single disease model: in that approach there is a primary diagnosis with additional comorbid diagnoses. The assumption of that approach is that a diagnosis determines the treatment plan, and the potential treatment plans are differentiated, distinct and specific to the primary diagnosis. According to the author, however, that is not how much mental health treatment actually operates, in either psychopharmacology or psychotherapy: instead, polypharmacy is the norm, the same medications are used for a variety of different diagnoses, and psychotherapy is often multimodal and not based on any one model. For trauma-related disorders, the author advocates that the ICD-11 concept of complex PTSD should apply to the majority of cases. Rather than a diagnosis of DSM-5 PTSD with comorbid diagnoses, treatment is designed to address a poly-symptomatic trauma response that spans many DSM-5 categories. Rather than focusing on separate diagnoses, trauma-informed psychotherapy should address a set of commonly occurring underlying conflicts, cognitive errors and defenses.
Keywords
Trans-diagnostic approaches, Mental health diagnoses, Treatment planning
Introduction
The purpose of this paper is to describe a trans-diagnostic approach to the treatment of mental disorders and the rationale for it. The clinical rationale for the approach is described along with several core principles of the treatment model. These include: the problem of attachment to the perpetrator; the locus of control shift; and the problem is not the problem. Rather than focusing on diagnoses, in this approach the focus is on the underlying conflicts, cognitive errors and maladaptive coping strategies. No effort will be made to provide a literature review or to support the approach with evidence.
The Single Disease Model: Diagnosis Determines Treatment
What I call the single disease model dominates medicine and psychiatry. For example, a bacterial ear infection, a sprained ankle and pregnancy are biologically distinct, separate problems with different etiologies and treatments. It is possible for a pregnant woman to have a sprained ankle and an ear infection as well, but these are co-occurring diagnoses not variations on a single disorder or condition. For any presenting problem, the task of the physician is to set up a differential diagnosis and then, through history taking, physical examination and laboratory testing (bloodwork, X-rays, sputum or urine samples, etc.) to arrive at a single diagnosis. There are complex cases such as those seen regularly in ICUs in which a person has extensive comorbidity, but these are the exception rather than the rule.
By and large, distinct biological disorders, diseases or conditions have distinct treatments. That is why a single disease diagnosis has to be made by the doctor, either as a confirmed diagnosis or as a working hypothesis. When I finished medical school and started my psychiatry residency, it was evident that psychiatry identified itself as a branch of medicine: psychiatrists made a differential diagnosis then a single diagnosis, and the diagnosis determined the treatment plan. The American Psychiatric Association’s Diagnostic and Statistical Manual (DSM), from DSM-III (1980) [1] to DSM-IV (1994) [2] to DSM-5 (2013) [3], is divided into different sections such as psychotic disorders, eating disorders, substance use, mood disorders and so on. The terminology for the different sections has varied across editions, but the single disease model has dominated the organization of the manual throughout its history.
On the one hand, that makes sense: it is obvious that someone with bulimia is very different from someone with severe schizophrenia and they do not require the same treatment. When there is no extensive trauma history or comorbidity, the treatments of bulimia and schizophrenia are highly differentiated. In outpatient and private practice settings one encounters individuals for whom the single disease model fits fairly well.
During my residency years in Canada (1981-1985), individuals with substance abuse disorders were referred to specialty programs and were not treated within general psychiatry, in part because they did not require psychiatric medications unless they were in acute withdrawal. Then, within a few years, a new term appeared in the psychiatric literature on substance abuse: now we had to grapple with the dual diagnosis patient, which was regarded as a complex, challenging subset of substance abuse patients. In fact, individuals with extensive comorbidity are the norm in substance abuse populations, as I found in research I published in 1992 [4]: among 100 participants in treatment for substance use at an outpatient specialty clinic, 62 met criteria for major depressive disorder, 39 for a dissociative disorder and 36 for borderline personality disorder on a structured interview; 43 reported childhood physical and/or sexual abuse. The structured interview did not diagnose anxiety disorders, eating disorders or a wide range of other DSM-III disorders, so the research identified only a small portion of the comorbidity in the participants.
One of the main reasons for identifying a single or primary psychiatric diagnosis, I was taught in my residency, was to guide the selection of medications: for depression one prescribed antidepressants, for psychosis antipsychotics, for anxiety anxiolytics, for insomnia hypnotic-sedatives and for bipolar disorder mood stabilizers. The classes of medication matched the different sections in DSM-III. It all made sense in theory but not in practice. In practice, psychiatric inpatients were given a single primary diagnosis – even if additional comorbidity was acknowledged, it was viewed as secondary and not the primary focus of treatment.
A very short exposure to psychiatric inpatient units revealed that most patients were on multiple different classes of psychiatric medication for their supposed single, primary disorder. The single disease model did not in fact guide or determine treatment. Theory did not match reality. Polypharmacy was the norm, as it is today. It was, and still is, common for a psychiatric inpatient to be on an antidepressant, an antipsychotic, a mood stabilizer, and a benzodiazepine and to have been prescribed many different medications in each of those categories in the past.
The same thing is true for outpatient psychotherapy. There are distinct types of psychotherapy such as cognitive therapy, psychoanalytic psychotherapy, internal family systems therapy, EMDR and so on and some outpatients do get manualized, distinct forms of psychotherapy. However, none of those therapies are diagnostically specific – a cognitive therapist will do cognitive therapy for depression, anxiety, a personality disorder, PTSD, and numerous other disorders. Most psychotherapists and counselors practice a technically eclectic, multi-modal approach that varies a bit from client to client but is broadly the same. Treatment is not really determined by a single disease diagnosis, which is nevertheless required for insurance billing.
In the United States, the Food and Drug Administration (FDA) will not approve a new medication unless it has been shown to be better than placebo for a single DSM diagnosis such as major depressive disorder. In order to get published in a psychiatry journal, most research has to be about a single DSM disorder. Conferences, books and journals often identify a DSM category in their titles and most speakers identify themselves as experts on a DSM category. Experts on eating disorders, by and large, do not attend schizophrenia conferences, do not talk to schizophrenia experts, do not read schizophrenia journals and do not treat anyone with a primary diagnosis of schizophrenia. The mental health field is a collection of separate silos with minimal cross-talk.
The trans-diagnostic approach outlined in the present paper is based on my Trauma Model [5] and my Trauma Model Therapy [6] which rests on the foundation of the general trauma model.
Predictions of the Trauma Model
The Trauma Model [5] is designed to be scientifically testable and makes a series of testable predictions. For example, assume that the results of a large study in the general population were: women who met lifetime criteria for major depressive disorder were compared to women who did not; the female relatives of the depressed women had higher rates of major depressive disorder than the female relatives of non-depressed women; the male relatives of the depressed women had higher rates of alcohol abuse and antisocial personality disorder than the male relatives of the non-depressed women.
A common interpretation of these results within biological psychiatry would be that the primary cause of the depression in the women and the alcoholism and antisocial personality in the men was genetic: an inherited set of risk genes running in the affected families was expressed phenotypically as depression in the women and as alcoholism and antisocial personality disorder in the men. The Trauma Model makes a different interpretation: it is very depressing to be female and to grow up in an extended family of antisocial alcoholic men. These men will be perpetrators of neglect, family violence and physical and sexual abuse of their children. That’s what’s making the women depressed, not their genes.
These two interpretations of the data need not be mutually exclusive. The Trauma Model predicts that, for this example, and for mental disorders in general, there is a distribution of genetic risk from very low to very high. For the women in these families, the abuse, overall, is contributing much more to their risk for depression than are their genes. However, a few women will be at such high genetic risk that they will become clinically depressed even without severe trauma. It’s a question of the odds of depression; the degree of risk for it will increase with increasing trauma in large samples of women.
This prediction of the Trauma Model could be tested through adoption studies. The prediction is that children adopted at birth out of high-trauma families into low-trauma families will have a much-reduced risk for depression, PTSD, dissociative disorders, borderline personality disorder, anxiety disorders and a wide range of mental health problems. In the opposite direction, women adopted at birth out of non-trauma families into trauma families will have a greatly increased lifetime prevalence of all these disorders.
In a similar fashion, consider a large twin study of schizophrenia in which it was found that identical or monozygotic (MZ) twins had a much higher concordance for schizophrenia than non-identical dizygotic (DZ) twins. Let’s say that when the first MZ twin interviewed has schizophrenia, the other MZ twin has it 40% of the time; when the first DZ twin interviewed has schizophrenia, the other twin has it only 12% of the time. Within biological psychiatry this would be interpreted as evidence that schizophrenia has a strong genetic component.
The Trauma Model makes a different prediction: if severe childhood trauma was measured in a schizophrenia twin study, the results would be: twin concordance is highest in MZ twins concordant for trauma; second highest in DZ twins concordant for trauma; third highest in MZ twins discordant for trauma; and lowest in DZ twins discordant for trauma. Such results would support the hypothesis that the trauma is contributing more to the development of schizophrenia than the genes.
Overall, the model predicts, survivors of severe childhood trauma will resemble each other, and will have similar treatment needs irrespective of their primary diagnosis: the treatment of a woman with a primary diagnosis of bulimia and severe trauma will resemble that for a woman with a diagnosis of schizophrenia and severe trauma, and will be quite different from the treatment needs of a woman with bulimia and no severe trauma – the latter woman will fit the single disease model better than the trauma survivor with bulimia.
My name appears in the back of DSM-IV because I was a member of the DSM-IV dissociative disorders committee: I had an inside view of the process and spoke with a leader of the DSM process in between DSM-IV and DSM-5. The DSM leaders rejected the concept of Complex PTSD (C-PTSD) because it threatened the conceptual foundation of the DSM system, namely the single disease model. C-PTSD was incorporated into ICD-11 in 2019 [7] but does not appear in DSM-5 even though extensive research-supported submissions were made to the committees developing both DSM-IV and DSM-5 to include a category corresponding to C-PTSD, no matter what it was called.
The basic idea behind C-PTSD is that it is a trans-diagnostic disorder that includes features across many domains of symptoms, self-regulation difficulties and interpersonal conflicts. Within this framework, depression, anxiety, substance use, anger problems, personality disorders and PTSD symptoms are all elements of an inclusive trauma response, not of separate single disorders. C-PTSD dismantles the walls between the different DSM-5 silos and threatens the conceptual foundations of the DSM system.
Curiously, even as the DSM committees resisted including the concept of C-PTSD under any official title, the DSM criteria for PTSD have gradually drifted in the direction of C-PTSD without acknowledging it. Compared to DSM-III PTSD, DSM-5 PTSD includes a much greater emphasis on anger, negative cognition and mood, and interpersonal conflicts.
A Focus on Function, Conflicts, Coping Strategies and Symptoms
Within Trauma Model Therapy, the focus is not on DSM-5 disorders as such. Patients/clients do meet criteria for many comorbid DSM-5 disorders but the focus is on the person’s function, conflicts, coping strategies and symptoms. The DSM-5 disorders are not ignored, they just aren’t the focus. The goal is to reduce symptoms and conflicts while improving the person’s overall function and self-regulation skills. This does not mean that medications are irrelevant or disallowed: most people treated within my inpatient and outpatient programs for the last 35 years have been on multiple psychiatric medications at the time of admission and at discharge.
Trauma Model Therapy is evidence-based and supported by a series of prospective cohort studies [8-16]. There have been no randomized controlled trials because those would require millions of dollars in external funding, which has not been available.
Core Principles of Trauma Model Therapy
The core principles of Trauma Model Therapy include: the problem of attachment to the perpetrator; the locus of control shift; the problem is not the problem; just say ‘no’ to drugs; addiction is the opposite of desensitization; and the victim-rescuer-perpetrator triangle [6]. Here I will focus on the first three of these. The therapy is multi-modal and involves cognitive therapy, experiential groups, inner child work, self-regulation skill building, systems approaches and trauma education. Most recently, clients in an outpatient program I owned and ran for four years received a 91-page collection of lesson plans tagged to the group therapy sessions, which took place 20 hours per week. This program was discontinued due to low reimbursement rates by insurance companies combined with endless denials, appeals and administrative tasks.
The Problem of Attachment to the Perpetrator
The problem of attachment to the perpetrator is a core element of the treatment model. It is based on the fact that mammals are dependent for survival on adult caretakers for a period of time after birth that varies from species to species, and in humans lasts for years. Built into mammalian biology is a set of attachment mechanisms and processes: attachment to caretakers is built into mammalian biology and DNA and in humans is not due to race, culture, gender, IQ or personality. It is not optional and happens automatically. The human child loves and needs to be loved by his or her caretakers, who are usually the child’s biological parents but can be adoptive or foster parents. In a stable, healthy family this all works out – the child develops good self-esteem and secure attachment and is able to take risks in the outside world because there is a safe base to return to, home.
In a severe trauma family, there is a varying combination of emotional and physical neglect, physical, sexual and emotional abuse, absent caretakers, family violence and highly disturbed family dynamics. The child must and does attach to mom and dad, which I call mode A. However, another instinctual reaction is also operating – just like a withdrawal reflex when one touches a hot stove, the child fears, avoids and withdraws from the perpetrator(s), who are also the primary attachment figures – I call that mode B.
That is an impossible problem for the child to comprehend or solve: how to attach to people from whom you must run away. The survival imperative is to attach to an adult caretaker: the idea of the model is that there is an over-ride by the attachment systems. In order to survive, mom and dad must be OK and the child must be in mode A. For this to be true, a fundamental dissociation is required, not in order to protect the child’s feelings but to keep the attachment system up and operating. Bad mom and dad must be put out of sight and out of mind, at least enough to maintain attachment.
Sometimes mom and dad are present and not abusive. At other times they are absent, neglectful or abusive and the child activates mode B, but after a while there has to be an over-ride and a return to mode A. The child develops what is called a disorganized attachment style. From my perspective, this is actually a highly organized and tactical survival strategy: it solves the problem of attachment to the perpetrator, which is how to maintain an attachment to people who might literally kill you.
When the person comes into Trauma Model Therapy decades later they are taught about the problem of attachment to the perpetrator in group and individual therapy and in reading assignments. They then make a core realization: I loved the people who hurt me; and I was hurt by the people I loved. When this sinks in it leads to a lot of grief, mourning and loss – mourning the loss of the childhood I never actually had, which was a good, stable childhood. Addictions, acting out, rigid defenses and other survival strategies that worked in childhood but are maladaptive now must be unlearned and healthier coping strategies must be learned and practiced.
A related cognitive error is the belief that I must be weird, sick or mentally ill to love my perpetrators. The corrective cognition is telling yourself that loving your perpetrator proves only one thing: you are a mammal. It seems that no amount of abuse completely extinguishes the positive attachment, no matter how much it is disavowed, dissociated and buried.
The Locus of Control Shift
The locus of control shift is the second core principle of Trauma Model Therapy. Like attachment to the perpetrator, it is not based on race, culture, gender, IQ or personality – it is based on normal childhood cognition, which I call the mind of the magical child: I am at the center of the universe, everything revolves around me, and I cause everything that happens in my world. The child automatically shifts the locus of control – the control point – from inside the perpetrator to inside the self: I am bad, I am causing the abuse, it is my fault, and I deserve to be treated that way. These core negative self-beliefs get reinforced over and over by what the parents do (the abuse) and what they do not do (protecting the child and stopping the abuse), then by bullying at school, a sexually abusive coach, a rape at the frat house and an abusive partner or spouse.
This is the source of the self-blame, self-hatred and self-punishment that is virtually universal in survivors of severe, chronic childhood trauma. The paradox is that it is good to be bad: because the abuse is being caused by badness inside me, I can control it and stop it. All I have to do is decide to be a good little girl or boy, then mom and dad will forgive me and everything will be OK. The locus of control shift confers a developmentally protective illusion of power, control and mastery at the cost of the badness of the self. It also solves the problem of attachment to the perpetrator because it sanitizes mom and dad and creates an illusion that they are safe attachment figures. Thirty years later, the battered wife leaves the battered spouse shelter and returns home, vowing to be a better wife so that he won’t be so stressed and won’t have to hit me anymore. The domestically violent husband forgives her for leaving him temporarily and they enter a short-lived honeymoon phase until he beats her again.
When the client really gets it and it really sinks in that he or she is not bad and deserved to be loved and protected like every other child, that is good and relieves the self-blame and self-hatred. However, it also dismantles the illusion of power, control and mastery and throws the person into an underground reservoir of unresolved grief, loss, powerlessness and helplessness. I always say that no one in their right mind would want to go there, which de-stigmatizes and normalizes the avoidance so that we can look at the cost-benefit in the present of holding onto the locus of control shift.
The Problem Is Not the Problem
The problem is not the problem is adapted from general systems theory and family therapy. Rather than being psychologically meaningless symptoms of brain dysfunction, symptoms are viewed in the context of the person’s life story and are understood as maladaptive coping strategies that helped the person survive their childhood. Sometimes the model does not apply because the individual’s symptoms are endogenous, biologically driven and consistent with the disease model. However, in a substantial majority of cases, the author believes, the principles of Trauma Model Therapy can be applied and be helpful. It is important to avoid all-or-nothing thinking: for one person, psychotherapy is the primary intervention, and medications are adjunctive; for the next person, the opposite is true. Some clients want only medication, some want only psychotherapy, and some want a combination, irrespective of the clinician’s views. In all cases, the approach should be collaborative not dictatorial.
The assumption in Trauma Model Therapy is that the presenting problem – hearing voices, flashbacks, substance use – is a solution to an underlying problem. For example, a person drinks heavily to drown the sorrows arising from complex, chronic abuse and neglect and loss of loved ones. The problem is the grief, self-blame and lack of healthy self-regulation skills: alcohol solves the problem temporarily and is basically an avoidance strategy. The fact that alcohol works temporarily reinforces the addiction, as does the fact that the effect wears off and the person has to drink more.
Once the person makes a serious commitment to abstinence and to doing the work, the therapy can begin: that commitment is an ongoing process with fluctuating hard work and avoidance, often with temporary relapses. Once enough grief work, cognitive therapy and internal family systems tasks have been sufficiently completed, and healthy self-regulation strategies have been practiced and learned, it becomes much easier to say ‘no’ to alcohol. Simply removing the defense, addiction or maladaptive coping strategy does not solve the underlying problems: hence the concept of the ‘dry drunk’ who is still miserable and difficult to tolerate.
Rather than being symptoms of brain disease, voices are understood as arising from dissociated ego states, especially if they speak in sentences and paragraphs and converse with each other – they can be engaged in psychotherapy and participate in the work. They are holding thoughts, feelings and beliefs that have been disowned and disavowed by the person. They aren’t just symptoms to be gotten rid of, rather they are parts of the person and parts of an overall survival strategy that needs to be adjusted: it worked well in the emergency situation of childhood but isn’t working so well now.
Flashbacks are conceptualized in a similar fashion: rather than being symptoms of brain damage or dysfunction, flashbacks are an effort to review the tapes of the trauma. What happened leading up to the trauma? What red flags did I miss? If I can make a list of all the red flags, stay hyper-aroused and scan for danger, I can spot the red flags in the future and take evasive action. It is my own fault that I didn’t do so the first time (locus of control shift).
Conclusions
The author has reviewed some of the principles of Trauma Model Therapy, which is a trans-diagnostic approach to mental health problems and addictions. The assumption is that trauma in many forms is a major driver of symptoms and disorders across the mental health field, in a proportion that varies from case to case. The model provides a rationale for trauma therapy irrespective of diagnosis and provides an extensive set of strategies, techniques and interventions for the therapist [6]. Its effectiveness is supported by a set of prospective treatment outcome studies.
References
Diagnostic and statistical manual of mental disorders, 3rd. ed (1980) Washington, DC, USA: American Psychiatric Association.
Diagnostic and statistical manual of mental disorders, 4th. ed (1994) Washington, DC, USA: American Psychiatric Association.
Diagnostic and statistical manual of mental disorders, 5th. ed (2013) Washington, DC, USA: American Psychiatric Association.
Ross CA, Kronson J, Koensgen S, Barkman K, Clark P, Rockman G (1992) Dissociative comorbidity in 100 chemically dependent patients. Hospital and Community Psychiatry 43: 840-842. [crossref]
Ross CA (2007) The trauma model: A solution to the problem of comorbidity in psychiatry. Richardson, TX: Manitou Communications.
Ross CA, Halpern N (2009) Trauma model therapy: A treatment approach for trauma, dissociation and complex comorbidity. Richardson, TX: Manitou Communications.
World Health Organization (2019). International Classification of Diseases and Related Health Problems. Geneva: World Health Organization.
Ellason JW, Ross CA (1996) Millon Clinical Multiaxial Inventory – II follow-up of patients with dissociative identity disorder. Psychological Reports 78: 707-716. [crossref]
Ellason JW, Ross CA (1997) Two-year follow-up of inpatients with dissociative identity disorder. American Journal of Psychiatry 154: 832-839. [crossref]
Ross CA, Ellason JW (2001) Acute stabilization in a trauma program. Journal of Trauma and Dissociation 2: 83-87.
Ellason JW, Ross CA (2004) SCL-90-R norms for dissociative identity disorder. Journal of Trauma and Dissociation, 5(3).
Ross CA, Haley C (2004) Acute stabilization and three month follow-up in a trauma program. Journal of Trauma and Dissociation 5(1).
Ross CA, Burns S (2007) Acute stabilization in a trauma program: A pilot study. Journal of Psychological Trauma 6(1).
Ross CA, Goode C, Schroeder E (2018) Treatment outcomes across ten months of combined inpatient and outpatient treatment in a traumatized and dissociative inpatient group. Frontiers in the Psychotherapy of Trauma and Dissociation 1: 87-100.
Ross CA, Engle M, Baker B (2018) Reductions in symptomatology at a residential treatment center for substance use disorders. Journal of Aggression, Maltreatment & Trauma 28(10).
Ross CA, Engle M, Edmonson J, Garcia A (2020) Reductions in symptomatology from admission to discharge at a residential treatment center for substance abuse disorders: A replication study. Psychological Disorders and Research 28, Available from: https://shorturl.at/WGdDm
Objective: Current treatment technologies cannot truly solve the problem of the loss of melanocytes in the areas affected by vitiligo, resulting in poor curative effect and a low cure rate; vitiligo has therefore been called an “immortal cancer” (a cancer that does not kill). On this basis, Liu Jingwei’s team proposed “the theory of implanting a melanocyte processing plant in vitiligo-affected areas” to fundamentally solve the worldwide problem of melanocyte loss in vitiligo-affected areas.
Methods: Fifty vitiligo patients in whom various treatments had failed were selected according to the homologous pairing principle. The complete outer hair root sheath containing hair follicle melanocyte stem cells was extracted and isolated using the patented technology, the resting hair follicle melanocyte stem cells in the outer root sheath were activated, and the outer root sheath was prepared as a melanocyte “processing plant” and implanted into the vitiligo-affected areas.
Results: The melanocyte stem cells in the outer root sheath could be continuously transformed into melanocytes and enter the epidermis along the outer root sheath, thus inducing repigmentation of the white patches. After 1 year, the cure rate among the 50 vitiligo patients was as high as 92%. At present, this technology has obtained 1 Chinese invention patent and 11 utility model patents, as well as an international PCT patent, and patent applications have been accepted in the EU, the United States, Japan, South Korea, and Thailand through the PCT route.
Conclusion: Melanocyte “processing plants” based on “the theory of implanting a melanocyte processing plant in vitiligo-affected areas” were successfully transplanted into the affected areas. This breaks through traditional thinking on vitiligo treatment, creates a new theory of vitiligo treatment, and completely solves the problem of the source of melanocytes in the affected areas, raising the cure rate to more than 90%. This patented technology can not only cure vitiligo completely but also makes relapse less likely.
As a clinically refractory disease, vitiligo has a significant impact on the physical and mental health of patients, threatening their marriages, social interactions, and employment. Because the pathogenesis of vitiligo remains unknown, the ineffectiveness rate of the various treatments for vitiligo patients reaches 50% [1]. Therefore, vitiligo has always been regarded as a chronic disease in dermatology. The new method for treating vitiligo invented by the team of Liu JW (Nanhai Renshu International Skin Hospital) has been granted a Chinese invention patent [2] (Technical Method for Treating Leucoderma Based on Hair Follicle Melanocyte Stem Cell Transplantation, Patent No.: ZL201910769979.1) and a Patent Cooperation Treaty (PCT) patent [3] (Patent No.: PCT/CN2021/072340). At present, there are many surgical methods for treating vitiligo that utilize melanocyte (MC) transplantation. However, only the hair follicle MC stem cell (McSC) transplantation technology has been shown to be consistently effective, becoming a massive breakthrough in the treatment of vitiligo.
General Data
A total of 50 vitiligo patients who had been treated in Nanhai Renshu International Skin Hospital using other methods for more than 1 year between June 2020 and March 2022 with unsatisfactory outcomes were selected as the research subjects for the present study. Inclusion criteria were as follows: 1) patients meeting the diagnostic criteria for vitiligo, 2) individuals over 4 years old, 3) those with no contraindications for ultraviolet radiation and no photosensitivity, 4) patients and their guardians who were able to adhere to the medical treatment, 5) patients who had not received any other treatments within 1 week and who had more than two white patches, at least one of which was treated only with the 308-nm excimer laser and served as the control, and 6) those who signed the informed consent form. Exclusion criteria included the following: 1) patients with malignant skin tumors, 2) those with mental disorders, 3) individuals with infected lesions at the white patch site, and 4) pregnant or lactating women. The present study was a key research and development project of Hainan Province in 2021, named Clinical Research and Application of the Transplantation of the Complete Outer Root Sheath of the Hair Follicle in the Treatment of Vitiligo (Project No.: ZDYF2021SHFZ048), which was approved by the Ethics Committee of the hospital on June 1, 2020 [Approval No.: 2020 (Clinical Research) RS002].
The 50 study subjects included 24 males and 26 females aged 4–62 years, with an average age of 34.23 ± 4.14 years. The cases were classified as localized (n=29), generalized (n=3), acrofacial (n=5), and vulgaris (n=14) types; the disease was in the progressive stage in 8 patients and the stable stage in 42. In the control group, a total of 89 white patches were not surgically treated, and each patient had at least one such white patch. These white patches covered an area of 680 cm2 in total, with the largest patch measuring 89 cm2 and the smallest 2 cm2. A total of 126 white patches were surgically treated in the treatment group, covering a total area of 2,517 cm2, with the largest patch measuring 135 cm2 and the smallest 1 cm2.
Instrument
The equipment used in the present study included a Peninsula 308-nm excimer laser system [model: XECL-308C; Shenzhen Peninsula Medical Co., Ltd. (Shenzhen, Guangdong, China); working medium: xenon chloride (XeCl); wavelength: 308 nm].
Therapeutic Dose
Prior to the 308-nm excimer laser therapy, the minimal erythema dose was determined on the abdomen of each patient using the instrument in its minimal erythema dose mode. The minimal erythema dose response was observed in each patient within 24–48 h after irradiation. This dose was taken as the initial dose for the first treatment.
Surgical Procedures
For the treatment group, disinfection and local anesthesia were carried out in a 10,000-level laminar flow operating room. Vitiligo was surgically treated according to the method recorded in the PCT-protected Technical Method for Treating Leucoderma Based on Hair Follicle Melanocyte Stem Cell Transplantation (hereinafter referred to as the invention patent) as follows: 1) the outer root sheath (ORS) containing hair follicle McSCs was extracted, with complete hair follicles containing McSCs obtained using follicular unit extraction technology; 2) the complete ORS containing McSCs was obtained via the hair follicle separation method specified in the invention patent; 3) the obtained hair follicle McSCs were cultured in vitro using the special culture medium described in the invention patent, further activating the stem cells so that they could transform into mature MCs; 4) the hair follicles containing McSCs were inactivated using the utility model patent Novel Vitiligo Hair Follicle Inactivation Needle (Patent No.: ZL201921329885.4) [4] according to the inactivation method in the invention patent, so that dark pigmentation is achieved in the skin of vitiligo patients without hair growth; and 5) hair follicles containing McSCs with a complete ORS were transplanted using two utility model patents, Planting Needle for Vitiligo Treatment (Patent No.: ZL201921450324.X) [5] and A Plant Pilot Pin for Hair Follicle Transplants (Patent No.: ZL201921277579.0) [6].
In both the treatment group (after the operation) and the control group, irradiation was conducted using a Peninsula 308-nm ultraviolet light therapy device 1–2 times/week, with no more than 7 days between two consecutive irradiation sessions. The initial irradiation time was set based on each patient's minimal erythema dose. If erythema persisted for 12–48 h after the treatment, the irradiation dose was considered appropriate. Each white patch was irradiated 30 times as one course of treatment, and clinical observation of all patients lasted for more than half a year. Local patients in Hainan Province received free phototherapy once a week in the hospital. Patients outside the province underwent phototherapy using a home-use Peninsula 308-nm excimer laser therapy device as required, and the therapy status was reported at least once a week.
Evaluation Criteria
The efficacy of the vitiligo treatment was evaluated based on the efficacy evaluation criteria formulated by the Pigmentation Disorder Group of the Dermatology and Venereal Disease Committee of the Chinese Society of Integrated Traditional Chinese and Western Medicine. The therapy was regarded as effective only when patients were cured. Vitiligo was deemed cured after the patches at the treatment site completely disappeared and the skin color basically returned to normal. The cure rate was calculated according to the following formula: cure rate = number of cured cases/total number of cases × 100%. The efficacy was also compared between groups.
Adverse reactions in all patients during the 308-nm excimer laser therapy, such as folliculitis, blisters, skin itching, burning sensation, and pain, were counted and recorded.
Efficacy satisfaction questionnaires were distributed to all patients with a total score of 100 points on the last day of the follow-up. A score lower than 90 points was considered to indicate unsatisfactory efficacy.
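As a minimal illustration of the arithmetic behind the cure-rate formula above and the 90-point satisfaction threshold, the following Python sketch recomputes the percentages from the counts reported in the Results section; it is a sanity check only, not part of the study's statistical analysis.

```python
# Sketch: recompute the reported cure rate and satisfaction rate from the
# counts given in the Results section (46/50 cured; 22+20+5 scores >= 90).
def rate_percent(numerator, denominator):
    return 100.0 * numerator / denominator

total_cases = 50
cured = 46
print(f"cure rate = {rate_percent(cured, total_cases):.0f}%")              # 92%

satisfied = 22 + 20 + 5   # patients scoring 100, 95, and 90 points
print(f"satisfaction rate = {rate_percent(satisfied, total_cases):.0f}%")  # 94%
```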
Results
Therapeutic Results
One patient in the cohort received surgery at two different sites and was recorded as two cases. In the treatment group, 46 cases (92%) were cured and four cases (8%) were not, giving a total cure rate of 92%. None of the 50 cases in the control group were cured (cure rate 0%). Among the uncured patients, two suffered from hypothyroidism and had taken Euthyrox for a long time, and two acrofacial-type patients were over 50 years old.
Adverse Reactions
During the 6-month follow-up after the treatment, the incidence rate of adverse reactions was 10% in the treatment group, with one case of skin itching, four cases of folliculitis, and zero cases of other discomforts. No obvious adverse reactions were detected in the control group.
Satisfaction Degree
Efficacy satisfaction questionnaires were distributed to all patients on the last day of the follow-up. The score was 100 points in 22 patients, 95 points in 20 patients, 90 points in five patients, and below 90 points in three patients, demonstrating an efficacy satisfaction rate of 94%.
A Typical Case
A 35-year-old male patient had had multiple depigmented patches on the right side of his face for 17 years. White patches the size of a small fingernail had appeared on the right side of the patient’s face for no obvious reason 17 years earlier. Various drug therapies, fire acupuncture, and laser therapies had been performed with unsatisfactory results. Over the past year, the white patches on the right side of the patient’s face expanded, gradually affecting the forehead, eyelids, eyebrows, part of the nose, lower lip, and right side of the neck, occupying large areas. Because of the patient’s lack of confidence in stem cell transplantation, white patches in only some areas (marked in Figure 1) were surgically treated in the first operation. Two months after stem cell transplantation combined with 308-nm excimer laser therapy, a large amount of melanin was produced in the treated white patch areas. The white patch areas that had been operated on were repigmented six months after the operation. In particular, the lips and eyelid mucosa, where vitiligo could not be cured in the past, were repigmented with no color difference. White patches in the unoperated areas received only 308-nm excimer laser therapy and did not change (Figure 1).
Figure 1: A case of vitiligo on the face: The first operation
The patient underwent a second operation six months later, combining stem cell transplantation with eyebrow implantation on the remaining white patch areas on the face and neck. Six months after the combined therapy, the white patches in the operated area were completely cured, while those in the unoperated area, which received only the 308-nm excimer laser therapy, remained unchanged (Figure 2). White patches on the ears and scalp of the patient have recently been treated surgically and are now recovering.
Figure 2: A case of facial vitiligo: The second operation
Discussion
Theoretical Basis and Research Progress for Hair Follicle McSCs in Vitiligo Treatment
Because mature MCs in the basal layer of the white patch area are partially or completely deficient, repigmentation of the white patch area is often achieved through the production of melanin granules by MCs migrating from outside this region. In 1959, Staricco et al. [7] confirmed the existence of a large number of immature MCs containing no melanin in the ORS of hair follicles; these cells cannot synthesize melanin, are negative for dihydroxyphenylalanine (DOPA), and are thus regarded as amelanotic melanocytes (AMMCs). In 1979, Ortonne et al. [8] found that after psoralen plus long-wave ultraviolet therapy of vitiligo lesions, DOPA-negative and non-dendritic MCs in hair follicles migrate to the epidermis along the ORS of the hair follicles and differentiate into mature MCs. On this basis, the hypothesis of an MC reservoir in hair follicles was put forward for the first time. In 1991, Cui et al. [9] found that inactivated MCs in the middle or lower part of the hair follicles of skin lesions are activated and proliferate after vitiligo treatment, changing from a non-functional to a functional state, and then migrate to the epidermis along the ORS of the hair follicle, forming pigmented spots at the follicular orifice. Dong et al. [10] discovered that neural crest-derived McSCs located in the hair follicle bulge can effectively differentiate into mature MCs under irradiation with narrow-band ultraviolet B (NB-UVB) rays and gradually migrate along the ORS to repigment the vitiligo epidermis at the follicular orifice. Hair follicle AMMCs can thus serve as a reservoir of skin MCs in the treatment of vitiligo [11-13]. MCs are derived from the embryonic neural crest and begin to migrate to the epidermis and hair follicles 2–5 weeks after the start of embryonic development. MCs migrating to hair follicles can be divided into two types: one type, with melanin synthesis activity, is located in the hair matrix and infundibulum of the hair follicle in the anagen period; the other type consists of inactivated AMMCs located in the ORS in the anagen period, showing no melanin synthesis activity. In recent years, it has been shown that AMMCs can be activated by specific factors to proliferate, migrate, and produce melanin, manifesting some characteristics of stem cells [14,15].
In numerous studies, McSCs and pre-MCs have both been classified as AMMCs [16]. Hair follicle McSCs are located in the bulge region of the hair follicle (at the bottom of its upper third), mostly in a resting state, with slow cycling and the ability to maintain self-renewal; they are typical representatives of regenerative stem cells [17]. However, as research has progressed, it has been confirmed that stem cells in a transitional state, namely pre-MCs, are present in the ORS of hair follicles. These cells do not synthesize melanin but are active in the pigment production cycle. As the direct source of MCs, pre-MCs are the earliest initiators of each pigmented hair cycle [18]. Pre-MCs are transitional cells between McSCs and MCs, formed by the proliferation and differentiation of McSCs in the previous hair growth cycle; they are essentially McSCs. Because mature MCs in the basal layer of the white patch area are partially or completely deficient, repigmentation of the white patch area is often achieved through melanin granules produced by MCs migrating from outside this area. MCs migrating to the epidermis eventually settle on the basement membrane, forming mature MCs that continuously produce melanin [19]. McSCs therefore serve as a melanocyte reservoir for repigmentation of the affected skin in vitiligo patients. Upon activation, McSCs proliferate and migrate upwards to the nearby epidermis, forming pigment islands around hair follicles (Figure 3) [20].
Figure 3: A case of oral vitiligo
Clinical Research on McSC Transplantation for Vitiligo Treatment
In 2002, Nishimura et al. [21] investigated the proliferation of melanoblasts and found that stem cell factor expressed in the epidermis forms a channel between the ORS and the epidermis, along which MCs migrate from the hair follicle to the epidermis. If the ORS containing McSCs is transplanted directly under the epidermis, the McSCs in the ORS can be activated by a 308-nm excimer laser, and the cells migrating along the ORS can develop into mature MCs.
Among laser wavelengths, 308 nm lies close to the absorption peaks of human DNA and proteins. This contributes to the production of pyrimidine dimers, purine dimers, and other photoproducts, thereby triggering the corresponding biological photoimmune response and repigmentation [22]. It has been pointed out that the 308-nm laser changes the microenvironment of hair follicles, facilitates the maturation and differentiation of McSCs, and stimulates the migration of MCs to the epidermis (Figure 4) [23].
Figure 4: A case of vitiligo at the end of the finger
The transplantation of McSCs for treating vitiligo is a technological invention that implants an MC processing plant, providing the basis for a new theory of vitiligo treatment. The PCT-protected technical method used in the present study comprises the following processes: extraction of autologous hair follicles, inactivation of the hair follicles, separation of the complete ORS, culture and activation of McSCs in the ORS, and harvesting and transplantation of functional McSCs. The complete hair follicle ORS supplies melanoblasts for McSCs. After the ORS containing functional McSCs was transplanted under the epidermis, the 308-nm excimer laser activated the McSCs in the ORS to produce MCs in vitro, thereby continuously producing mature MCs and successfully establishing an MC processing plant in the affected skin of vitiligo patients.
Clinical Research and Theoretical Innovation in McSC Transplantation for Vitiligo Treatment
Although there are at present many surgical methods for treating vitiligo, including epidermal transplantation, MC transplantation, skin tissue engineering, ORS suspension transplantation [24], and single hair follicle transplantation [25], what is actually transplanted in these methods is MCs, which become inactivated or apoptotic after completing a life cycle, leading to re-whitening of the skin in vitiligo patients. In particular, surgical methods other than single hair follicle transplantation require microdermabrasion for the transplantation of MCs, resulting in significant damage, uneven repigmentation, and proneness to scarring. Single hair follicle transplantation was adopted for vitiligo more than 20 years ago, but the essence of this method is to implant hair follicles into the dermis and subcutaneous tissue, through which only MCs at the junction of the basement membrane zone and the ORS can enter the epidermis. Because only a small segment of the ORS is transplanted, a large number of hair follicles is needed to repigment whole white patches. The method requires many hair follicles for the treatment of hairless white patches, after which the transplanted hair becomes unmanageable. In addition, this method produces punctiform repigmentation in most cases and is thus ineffective for white patches in the mucosa; it is therefore suitable only for treating vitiligo on hair-bearing skin. In contrast, the hair follicle McSC transplantation method in the present study transplants the complete ORS of the hair follicle to an area between the epidermis and dermis. After the operation, new MCs were continuously generated via in vitro activation of the McSCs in the ORS, thus achieving patchy repigmentation. Since the hair follicles were inactivated before the operation, they fell out naturally after one hair cycle post-operation (Figure 5).
Figure 5: A case of vitiligo on the feet.
The PCT-protected Technical Method for Treating Leucoderma Based on Hair Follicle Melanocyte Stem Cell Transplantation is the first in the world to propose a technique for transplanting hair follicle McSCs to treat vitiligo on the basis of the complete ORS. Using this method, McSCs can be transplanted directly to the epidermis of vitiligo patients, so that large-area vitiligo can be treated with the extraction of only small quantities of hair follicles. In addition, the present invention also shows marked efficacy in hairless areas, which indicates that stem cell transplantation is also applicable to the treatment of white patches on mucous membranes.
This is the first time that transplantation of a skin environment containing McSCs, with the complete hair follicle ORS, has been proposed for treating vitiligo. Additionally, this patented method provides the basis for a new theory of vitiligo treatment by implanting an MC processing plant, which provides a source of MCs for the treatment of vitiligo and lays a foundation for repigmentation of white patches (Figure 6).
Figure 6: A case of vitiligo on the head
This invention patent introduces a new method of transplanting McSCs for vitiligo treatment without dermabrasion. Vitiligo patients were surgically treated without dermabrasion, and repigmentation with no color difference after the operation was achieved via minimally invasive transplantation using the self-developed pilot pin for hair follicle transplants and planting needle for vitiligo treatment.
Conclusion
Liu et al. obtained McSCs in a functional state using a PCT-protected technical method and implanted them, together with melanoblasts, under the epidermis. Continuously activated by a 308-nm excimer laser in vitro, the McSCs in the ORS were transformed into mature MCs and migrated along the ORS to multiple hair follicle orifices in the vitiligo area, or to sebaceous gland openings in hairless areas, achieving central-type repigmentation with no color difference. McSC transplantation addresses the issue of MC sources for patients with vitiligo and provides a new solution for its treatment. With a cure rate of 92%, this method brings new hope of recovery to the 70 million patients with vitiligo worldwide.
Liu J. Technical method for treating leucoderma based on hair follicle melanocyte stem cell transplantation. China patent ZL201910769979.1, 2021.01.12. Publication number: CN110339214A.
Liu Jing-wei. Technical method for treating vitiligo through hair follicle melanocyte stem cell transplantation. China. PCT/CN2021/072340, WO2022/151450.
Liu JW. Novel vitiligo hair follicle inactivation needle. China patent ZL201921329885.4, 2020.07.14. Publication number: CN210990699U.
Liu JW. Planting needle for vitiligo treatment. China patent ZL201921450324.X, 2020.09.08. Publication number: CN211434526U.
Liu JW. A plant pilot pin for hair follicle transplants. China patent ZL201921277579.0, 09.08. Publication number: CN211433043U.
Staricco RG (1959) Amelanotic melanocytes in the outer sheath of the human hair follicle. J Invest Dermatol. [crossref]
Tobin DJ, Bystryn JC (1996) Different populations of melanocytes are present in hair follicles and epidermis. Pigment Cell Res. [crossref]
Cui J, Shen LY, Wang GC (1991) Role of hair follicles in the repigmentation of vitiligo. J Invest Dermatol 97(3): 410-416. [crossref]
Dong D, Jiang M, Xu X, et al. (2012) The effects of NB-UVB on the hair follicle-derived neural crest stem cells differentiating into melanocyte lineage in vitro. J Dermatol Sci. [crossref]
Yu HS (2002) Melanocyte destruction and repigmentation in vitiligo: a model for nerve cell damage and regrowth. J Biomed Sci. [crossref]
Slominski A, Wortsman J, Plonka PM, et al. (2005) Hair follicle pigmentation. J Invest Dermatol. [crossref]
Bernard BA (2003) Hair cycle dynamics: the case of the human hair follicle. J Soc Biol. [crossref]
Ma HJ, Zhu WY, Wang DG, et al. (2006) Endothelin-1 combined with extracellular matrix proteins promotes the adhesion and chemotaxis of amelanotic melanocytes from human hair follicles in vitro. Cell Biol Int. [crossref]
Lei TC, Vieira WD, Hearing VJ (2002) In vitro migration of melanoblasts requires matrix metalloproteinase-2: implications to vitiligo therapy by photochemotherapy. Pigment Cell Res. [crossref]
Takada K, Sugiyama K, Yamamoto I, et al. (1992) Presence of amelanotic melanocytes within the outer root sheath in senile white hair. J Invest Dermatol. [crossref]
Nishimura EK, Granter SR, Fisher DE (2005) Mechanisms of hair graying: incomplete melanocyte stem cell maintenance in the niche. Science. [crossref]
Hsu YC, Li L, Fuchs E (2014) Transit-amplifying cells orchestrate stem cell activity and tissue regeneration. Cell. [crossref]
Slominski A, Wortsman J, Plonka PM, et al. (2005) Hair follicle pigmentation. J Invest Dermatol 124(1): 13-21. [crossref]
Matz H, Tur E (2007) Vitiligo. Curr Probl Dermatol 35: 78-102.
Nishimura EK, Jordan SA, Oshima H, et al. (2002) Dominant role of the niche in melanocyte stem-cell fate determination. Nature. [crossref]
Jmb A, Shk B, Hjj A, et al. (2020) Suberythemic and erythemic doses of a 308-nm excimer laser treatment of stable vitiligo in combination with topical tacrolimus: a randomized controlled trial. Journal of the American Academy of Dermatology. [crossref]
Noborio R, Nomura Y, Nakamura M, et al. (2020) Efficacy of 308-nm excimer laser treatment for refractory vitiligo: a case series of treatment based on the minimal blistering dose. Journal of the European Academy of Dermatology and Venereology. [crossref]
Vinay K, Dogra S, Parsad D, et al. (2014) Clinical and treatment characteristics determining therapeutic outcome in patients undergoing autologous noncultured outer root sheath hair follicle cell suspension for treatment of stable vitiligo. J Eur Acad Dermatol. [crossref]
Na GY, Seo SK, Choi SK (1998) Single hair grafting for the treatment of vitiligo. J Am Acad Dermatol. [crossref]
The article titled “Safety and Efficacy of Bempedoic Acid Among Patients with Statin Intolerance and Those Without” provides a comprehensive meta-analysis and systematic review of randomized controlled trials, addressing a critical gap in the management of hypercholesterolemia. Bempedoic acid emerges as a viable alternative for patients who are intolerant to statins, which have long been the cornerstone of cholesterol-lowering therapy.
The findings from this analysis reveal that bempedoic acid significantly lowers low-density lipoprotein cholesterol (LDL-C) levels compared to placebo, underscoring its efficacy in lipid management. Notably, it appears particularly beneficial for patients without statin intolerance, though results are somewhat mixed for those with such intolerance. This variability prompts further investigation into factors that could influence treatment outcomes, such as concurrent lipid-lowering therapies.
Importantly, the study highlights the safety profile of bempedoic acid. There was no significant increase in serious adverse events compared to placebo; however, certain side effects, such as gout and elevated hepatic enzymes, led to a higher discontinuation rate among users. These findings necessitate a careful assessment of the risk-benefit balance when considering bempedoic acid for cholesterol management, especially in a population known for sensitivity to medication side effects.
Moreover, the article emphasizes the potential of bempedoic acid to enhance patient adherence to therapy by alleviating muscle-related symptoms often associated with statin use. This could represent a crucial factor in improving long-term outcomes for patients struggling with high cholesterol.
Despite its strengths, the meta-analysis is not without limitations. Variations in baseline characteristics and the relatively short duration of follow-up raise questions about the long-term implications of bempedoic acid therapy. Additionally, the issue of publication bias must be taken into account, as it may skew perceptions of the drug’s overall efficacy and safety.
In conclusion, this systematic review presents bempedoic acid as a promising option for patients dealing with statin intolerance and reinforces the importance of individualized treatment strategies in managing hypercholesterolemia. As more data becomes available, healthcare professionals will be better equipped to navigate the complexities of lipid-lowering therapies, ultimately leading to improved patient care and outcomes.
The paper presents the empirical evaluation, in a Mind Genomics format, of five sets of 16 elements each, previously generated entirely by AI and dealing with aspects of a police officer's job focused on a school in a small town in Pennsylvania. The respondents, ages 18-30, read combinations of messages (elements) about the job, with the elements combined by experimental design into vignettes comprising 2-4 elements each. The results from all five studies revealed very strong performance of the elements when the respondents were divided into mind-sets. Three studies each generated three mind-sets; the other two studies each generated two clear mind-sets. The entire process, from the generation of the ideas to the validation with people, required approximately four days and was done affordably with available technology, generating easy-to-understand, immediately actionable messaging. The five studies, along with the rapid generation of the ideas using generative AI, open up the possibility that AI may help us communicate better with people through the combination of LLMs (large language models) and Mind Genomics empirical thinking and experimentation.
Keywords
Generative AI, Mind genomics, Police recruitment, Synthesized mind-sets
Introduction
In the companion paper, “School Crossings and Police Staffing Shortages: How Generative AI Combined with Mind Genomics Thinking Can Become “Colleague,” Collaborating on the Solution of Problems Involved in Recruiting,” we presented four strategies to approach the issue of recruiting for a police officer position in TOWNX. Strategy 3 in that paper dealt with the creation of questions and answers. The answers were to be given by four AI-synthesized mind-sets: Dedicated Public Servant, Compassionate Protector, Community-Focused, and Proactive Problem Solver. Thus, Strategy 3 generated questions about the topic of recruiting and answers to those questions from four simulated mind-sets. There was no guidance of the process by a human being other than the basic question of how to get a person to consider a career in law enforcement. This paper continues that work, looking at these AI-generated, best-guess questions and answers not with artificial intelligence alone but with actual respondents living in the state of Pennsylvania, of the proper age (18 to 30), with a high school diploma, who might be interested in a career in law enforcement. That is, how well do the ideas generated by artificial intelligence perform when given to real respondents on the Mind Genomics platform?
Mind Genomics
Mind Genomics is an emerging science with origins in experimental psychology, statistics, and consumer research. The background to Mind Genomics and its computational approaches has been well documented elsewhere [1-3]. Here are some of the specifics relevant to the data presented in this paper:
The researcher identifies a topic of interest. Here, the topic is what communications are effective to get a young person (ages 18-30) to want to join the police force and be part of the effort to help at school properties, among other tasks.
The researcher creates four questions. Figure 1 shows the requirement to fill in the four questions (Panel A) and the four questions that were filled in (Panel B).
Figure 1: The BimiLeap.com screen guiding the user to provide or create the four questions (Panel A) and then the completed screen as typed in by the user (Panel B).
It is at this point that many prospective researchers “hit a blank wall,” feeling that they are unable to create questions. The Mind Genomics platform has therefore been augmented with generative AI (ChatGPT 3.5) [4-7]. The user accesses the AI through Idea Coach. Strategy 3 in the companion paper shows how AI can generate 21 questions of interest from a simple prompt. This paper uses the 21 questions from Strategy 3 to create the questions needed for five separate experiments using the Mind Genomics platform. For each question, the researcher is instructed to provide four answers, a task that is simpler and less daunting. In the companion paper, we created the questions and, for each question, generated four answers reflecting the ways that different types of people, with different ways of thinking about the problem, would answer. Table 1 shows the questions and the four answers to each. The answers were provided by AI in the companion paper but have been edited here to be more “standalone.”
Table 1: The five questions and the four answers to each question.
Properties of the Vignettes Created by the Underlying Experimental Design
The basic unit of evaluation at the level of the individual respondent is the set of 24 vignettes, presented to and evaluated by the respondent one vignette at a time, in an internet-based interview lasting about three minutes. Each respondent evaluates a different set of 24 vignettes. Rather than having to “know” the best range to test, the approach allows anyone to become an expert simply by testing many elements in this format [8]. A vignette comprises a combination of 2-4 elements, viz., messages (see Figure 2, Panel B for an example). These vignettes are created according to an experimental design. The design prescribes four sets of four statements each; the statements are the “elements” in the language of Mind Genomics. Each vignette comprises a minimum of two and a maximum of four elements, with at most one element from any question. Thus, a vignette can never comprise two mutually exclusive or contradictory elements, viz., different answers or elements from the same question. The experimental design prescribes the specific composition of each of the 24 vignettes. Within the set of 24 vignettes allocated to one respondent, each of the 16 elements appears exactly five times, once in each of five different vignettes, and is absent from the remaining 19 vignettes. The 16 elements are statistically independent of each other, allowing the researcher to use statistical modeling (e.g., ordinary least squares regression, OLS) to estimate the linkage between the presence of the 16 elements and the rating assigned by the respondent [9].
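To make these design properties concrete, here is a minimal Python sketch, not the platform's proprietary design generator, that checks whether a hypothetical 24 x 16 presence/absence matrix satisfies the stated constraints; the random matrix used here is only a placeholder and will generally fail the checks.

```python
import numpy as np

# Hypothetical 24 x 16 binary design matrix for one respondent:
# rows = vignettes, columns = elements A1..A4, B1..B4, C1..C4, D1..D4
# (1 = element present in the vignette, 0 = absent).
# A real design comes from the platform; this random matrix is only a placeholder.
rng = np.random.default_rng(0)
design = rng.integers(0, 2, size=(24, 16))

def check_design(design: np.ndarray) -> dict:
    """Check the structural properties described for a Mind Genomics design."""
    per_vignette = design.sum(axis=1)   # elements per vignette (should be 2-4)
    per_element = design.sum(axis=0)    # appearances per element (should be 5)
    # At most one element per question: group the 16 columns into 4 blocks of 4.
    per_question = design.reshape(24, 4, 4).sum(axis=2)  # 24 vignettes x 4 questions
    return {
        "2_to_4_elements_per_vignette": bool(((per_vignette >= 2) & (per_vignette <= 4)).all()),
        "each_element_appears_5_times": bool((per_element == 5).all()),
        "at_most_one_element_per_question": bool((per_question <= 1).all()),
    }

print(check_design(design))
```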
Figure 2: The respondent experience. Panel A on top shows the self-profiling classification in a pull-down menu. Panel B on the bottom shows one of 24 vignettes that the respondent will evaluate.
The Respondent Experience
These studies are typically run with respondents who have agreed to participate, signing an agreement with an online research panel “provider.” These research panels comprise thousands of individuals from all over the country and all over the world. The panel members are invited to participate, usually by email, and receive some remuneration for each participation, administered by the panel company. The user is guaranteed that these are real people, not bots. The respondents are invited by an email based on the qualifications requested by the researcher. Those who agree to participate press a link and are led to the interview. The interview itself is simple, and its explanation is given in a series of slides at the beginning. The researcher first obtains some additional classification information from the respondent using a pull-down menu (Figure 2, Panel A). Currently, the platform, BimiLeap.com, provides the user with up to 10 self-profiling questions, two of which are fixed: age and gender. That information can be extended dramatically to many more questions. The respondent then reads an orientation and is led to the set of 24 vignettes, presented one at a time. Figure 2, Panel B shows an example of the vignette that the respondent sees. The vignette comprises two to four elements as noted above, along with a short introduction to the project and the rating scale, both present in each vignette. The respondent reads the orientation, usually once, moves to the vignette, reads it, and then assigns a rating. The objective is to get the respondent's immediate impressions, almost a so-called “gut feeling,” where feelings rather than judgment are dominant.
The spare design of the vignette, without any connectives, may seem unpolished. The reality is that this spare profile of the vignette reduces fatigue. The respondent “grazes” for information in a comfortable manner, rather than having to wade through the thickets of text to get to the ideas. The respondent evaluates the vignette, considering the 2-4 elements as one idea, scoring the vignette on the scale. The Mind Genomics platform records the rating, and the response time (RT), defined as the number of seconds elapsing to the nearest 100th of a second from the time the vignette was presented to the time the rating was assigned.
Automated Preparation of the Data for Statistical Analysis
The Mind Genomics platform then creates a database set up to facilitate analysis. The database comprises one record (row) per vignette; since each respondent evaluated 24 vignettes, each respondent generates 24 rows of data. The first set of columns is reserved for information about the respondent, generated from the self-profiling classification. This information includes gender, age, and up to eight additional self-profiling questions. The second set of columns is reserved for information about the 16 elements, each of which has its own column. When the element is present in the vignette, the value in the cell is “1”; when it is absent, the value is “0.” Each rating on the 5-point scale is converted to a binary variable, R54x or “JOIN.” A rating of 5 or 4 is converted to 100 to denote interest in joining; a rating of 3, 2, or 1 is converted to 0 to denote lack of interest. Then, a vanishingly small random number (<10^-5) is added to the newly created binary variable. The rationale is to ensure that even when a respondent rated all 24 vignettes high (5 or 4) or all 24 vignettes low (3, 2, or 1), there will be some minimal variation in the binary variable. That minimal variation is necessary for the data from a single respondent, or indeed from any group of respondents, to be analyzed later using OLS (ordinary least squares) regression.
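As an illustration of the transformation just described, the following minimal Python sketch converts a respondent's 24 ratings into the binary R54x variable and adds the tiny random jitter; the function name and coding choices are assumptions for illustration, not the platform's actual implementation.

```python
import numpy as np

def to_binary_join(ratings: np.ndarray, seed: int = 0) -> np.ndarray:
    """Convert 5-point ratings to the binary R54x ('JOIN') variable.

    Ratings of 5 or 4 become 100 (interested in joining); ratings of 3, 2, or 1
    become 0. A vanishingly small random number (< 1e-5) is added so that OLS
    regression does not fail for respondents whose ratings are all high or all low.
    """
    rng = np.random.default_rng(seed)
    r54x = np.where(ratings >= 4, 100.0, 0.0)
    return r54x + rng.uniform(0.0, 1e-5, size=ratings.shape)

# Example: one respondent's 24 ratings (hypothetical values)
ratings = np.array([5, 4, 3, 2, 1, 5, 4, 4, 3, 5, 2, 1, 4, 5, 3, 3, 4, 5, 2, 4, 5, 3, 1, 4])
print(to_binary_join(ratings)[:5])
```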
Statistical Analysis — OLS Regression to Find Linkages Between Elements and Binary Variable R54x
The Mind Genomics process is now standardized. The experimental design ensures that all of the elements for each respondent are independent of each other. This up-front effort ends up allowing OLS (ordinary least squares) regression to relate the presence/absence of the 16 elements to the binary dependent variable R54x (viz., interested in joining).
The equation is simple: R54x = k1(A1) + k2(A2) + … + k16(D4).
The foregoing equation can be estimated at the level of the individual respondent, at the level of any group of respondents, and of course at the level of the total panel. Note that the equation has no additive constant. The rationale is that in the absence of elements the rating should be 0; there is no reason to “join” when there are no elements to communicate the job. The coefficients show the driving power of the elements as motivators of joining. A coefficient of 20 has twice the driving power of a coefficient of 10, and two-thirds the driving power of a coefficient of 30, and so forth. The coefficients can be thought of as psychological measures of the probability of saying “I will join” when the element is in the mix of messages. We look for coefficients of around 21 or higher.
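A minimal sketch of the no-intercept OLS fit follows, assuming a presence/absence design matrix and the jittered R54x vector prepared as above; the use of numpy's least-squares routine is an illustrative choice, not the platform's own code.

```python
import numpy as np

def element_coefficients(design: np.ndarray, r54x: np.ndarray) -> np.ndarray:
    """Estimate the 16 coefficients of R54x = k1(A1) + ... + k16(D4), with no intercept,
    by ordinary least squares on the presence/absence design matrix."""
    coeffs, *_ = np.linalg.lstsq(design.astype(float), r54x, rcond=None)
    return coeffs

# design: 24 x 16 matrix of 0/1 values for one respondent (or stacked rows for a group)
# r54x:   matching vector of binary JOIN values (0/100 plus tiny jitter)
# k = element_coefficients(design, r54x)
```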
Creating Mind-Sets
A key hallmark of Mind Genomics is the search for mind-sets, defined as groups of respondents with similar patterns of coefficients, people who think the same way about the topic. These individuals are not necessarily like each other in other ways, but they do think similarly about the topic, here the messages which drive the respondent to say they would like to join. The approach used to find these groups, or mind-sets, is clustering. Clustering uses the individual sets of 16 coefficients as inputs and tries to put the respondents into a small, prespecified number of groups (e.g., 2 or 3), so that the patterns of coefficients of the individuals within a cluster are similar, while the average profiles on the 16 coefficients differ across the two or three groups. The clustering program used by Mind Genomics, k-means clustering, works entirely by mathematics; it is only afterwards that we try to interpret the meaning of the clusters [10]. The clusters are called mind-sets.
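The sketch below illustrates the clustering step with scikit-learn's KMeans applied to a hypothetical matrix of individual-level coefficients (100 respondents by 16 coefficients). Note that standard k-means uses Euclidean distance, whereas the dissimilarity measure described later in this paper is based on the Pearson correlation, so this is an illustration of the general idea rather than a reproduction of the platform's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# coeff_matrix: hypothetical array of shape (n_respondents, 16), one row of
# individual-level coefficients per respondent (see the regression sketch above).
coeff_matrix = np.random.default_rng(1).normal(10, 8, size=(100, 16))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(coeff_matrix)
mindset_labels = kmeans.labels_              # mind-set assignment per respondent
mindset_profiles = kmeans.cluster_centers_   # average 16-coefficient profile per mind-set
```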
Interpreting the Data
When we look at Figure 2, Panel B, viz. the sample vignette, we see that the structure of the vignette does not lend itself to “gaming the system.” There are 24 vignettes, so there is no point in expending a great deal of effort; the sheer number of vignettes militates against trying to outguess the researcher. Another aspect is the spare structure of the combinations and the fact that, to the untrained eye, these vignettes seem to be random. Every respondent sees a different set of 24 vignettes, with the elements seeming to be put in or taken out at random. The respondent quickly settles into an almost indifferent state, responding intuitively rather than focusing on being “correct” or on pleasing the interviewer. The respondent participating on a computer simply proceeds through the evaluation. As noted above, the OLS (ordinary least squares) regression analysis shows the driving power of the elements. Table 2, in the column labelled Total Panel, shows the 16 coefficients for the elements. Looking at the coefficients for the total panel, we have coefficients as high as 22 and as low as 14. Only one element moves beyond the pre-set criterion: C1 (What you do: I actively engage with residents and address their concerns). The remaining columns show the other groups, gender and age. Respondents not meeting the secondary requirements (viz., age outside the allowable range) were not considered for the specific analyses, but were included in the Total Panel and in the self-profiling classifications for marital status and children. Once again, we see relatively few elements which score strongly; only element C1 scores consistently strongly. To make interpretation easier, keep in mind that the numbers in the body of the table are coefficients from regression. They can also be interpreted as the incremental percent of people who, reading this element, will say “I will join.” Also keep in mind that we would like strongly performing elements. Looking at the Total Panel, we find that C1 has a coefficient of 22. This means that when element C1 appears in a vignette (What you do: I actively engage with residents and address their concerns), 22% more people say “I would like to join.” On the other hand, when we put in A4 (Advantage: I identify potential safety threats and implement preventive measures), only 14% more say they will join, about two-thirds as many. We would clearly want to put in element C1. To verbalize the results and look for opportunities, read down within a group and across groups. The numbers can all be compared to each other and added together, for up to four elements and no more than one element from a question. The sum provides a sense of the likely percent of respondents who will say they will join. The consequence of this analysis is a powerful tool to understand and to compose messaging, all in a matter of hours.
Table 2: Coefficients for the 16 elements for Study 1, for Total Panel, gender, age, and self-profiling status of marriage and children.
Thinking Differently at the Granular Level of Everyday Life — The Challenge of Mind-Sets
One of the hallmarks of Mind Genomics is the belief that in every area of everyday life, people differ in the way they deal with objectives, goals, and messages. These are not major differences between people, but everyday differences which are systematic, repeatable, and useful for things as different as medical advice and advertising for shopping. The approach to finding these so-called mind-sets, these differences in the way we approach issues, is straightforward. Recall from above that we can run the regression analysis for each of our 100 respondents who evaluated the 24 combinations. So instead of doing the analysis at the level of all 100 people pooled together, we do the regression analysis for each of the 100 respondents and store the 100 sets of 16 coefficients in a database. We end up with 100 individual-level models: 100 rows, each with 16 columns, where each row is a respondent and the numbers are the coefficients estimated from that respondent's regression. The differences among respondents are not based on who the people are, but on how they respond to specific, relevant messages describing a small aspect of daily life; in other words, we are interested not in who people are or what they do, but in how they think in a very local, granular situation. There are a variety of metrics to quantify the dissimilarity between respondents. We use a distance between pairs of respondents based on the correlation of their coefficients: the distance is defined as (1 - Pearson correlation), computed on the corresponding pairs of the 16 coefficients. When the 16 coefficients of one respondent correlate perfectly with the 16 coefficients of another respondent, their distance is 0; when the coefficients of the two respondents describe opposite patterns, their distance is 2. We do not supervise the program; we simply allow it to form groups so that the patterns of the respondents within a cluster are very similar, while the average profiles of the clusters on the 16 elements are very different. When we do the analysis, the strongest result emerges when we ask the k-means clustering program to create three groups. The bottom line is that, even without thinking the study through intellectually, the regression analysis and clustering end up with radically different, interpretable groups, as shown in Table 3. The important thing is that the clusters are interpretable, the coefficients are very high, and the results make sense. What is also important is that the coefficients are high for one group and reasonably low for the other groups; we are really dealing with different mind-sets, responding to different messages as motivators. The important thing for this study is that the elements generated by artificial intelligence (Strategy 3 in the companion paper), with slight editing, end up revealing remarkably different types of people, suggesting the power of artificial intelligence as revealed by human responses in a situation where respondents cannot game the system.
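The correlation-based distance just described can be computed directly. The following minimal sketch, using hypothetical data, builds the full respondent-by-respondent dissimilarity matrix, with 0 for identical coefficient patterns and 2 for perfectly opposite patterns; it is an illustration of the metric, not the platform's own code.

```python
import numpy as np

def pearson_distance_matrix(coeff_matrix: np.ndarray) -> np.ndarray:
    """Pairwise dissimilarity between respondents, defined as 1 - Pearson correlation
    computed over their 16 element coefficients. Identical patterns give distance 0;
    perfectly opposite patterns give distance 2."""
    corr = np.corrcoef(coeff_matrix)   # n_respondents x n_respondents correlation matrix
    return 1.0 - corr

# Example with a hypothetical coefficient matrix (100 respondents x 16 coefficients)
coeffs = np.random.default_rng(2).normal(10, 8, size=(100, 16))
dist = pearson_distance_matrix(coeffs)
print(dist.shape, round(dist[0, 0], 6))   # (100, 100), 0.0 on the diagonal
```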
Table 3: The performance of all elements in Study 1, for Total Panel and for the three mind-sets generated by k-means clustering (MS1, MS2, MS3). Strong performing elements are shown by shaded cells.
How do we know that the clustering produces real mind-sets? This is an important question. The goal in Mind Genomics is to discover truly different ways of thinking about the same topic. Two factors come into play. One is that the data should show elements with high coefficients, with these elements “telling a story.” The other is that the data should also show elements with low coefficients. It is not sufficient to generate high coefficients everywhere; that would show better elements, but not radically different mind-sets. In recent studies, the authors have introduced an index called the IDT, the Index of Divergent Thought. The IDT shows the net effect of the two forces: high coefficients for some sets of interpretable elements, and low coefficients for the other elements. Table 4 shows the computations. Simulations of data sets showing high coefficients for elements relevant to the mind-set and low coefficients elsewhere suggest that an IDT around 70 is best. The data in Study 1 give an IDT of 71, almost perfect.
Table 4: The data for the IDT (Index of Divergent Thought) and the calculations.
Using AI to Summarize the Results, Considering Only the Strong-Performing Elements
The final analysis in this study deals with how AI summarizes the results and the strong elements for each mind-set. These appear in Table 5. The notion here is that AI can act as a second pair of eyes, as a coach, and as an interpreter of the results. The table is laid out as a set of questions answered for each mind-set, based on the pattern of elements scoring 21 or higher for that mind-set. The questions range from a summarization of the mind-set and its strong-performing elements to questions about innovations and messaging.
Table 5: AI summarization of the key findings and opportunities for each mind-set, based upon the patterns generated for strong performing elements for that mind-set.
The questions are answered automatically once the study is completed, with the results provided at the end of the study within 30 minutes. In the interest of standardizing our understanding, the questions are fixed and are answered in every Mind Genomics report for the key groups, including the Total Panel, self-profiled groups (e.g., gender), and mind-sets such as the three mind-sets reported here. Over time, it is straightforward to update the Mind Genomics platform, BimiLeap, so that the platform becomes even more complete, with the understanding that the updated platform will then be used for every report and every key subgroup within the report.
Discussion and Conclusions
The data presented in this paper, in Study 1 above and in Studies 2-5 in the appendices, suggest that we are only beginning to see the fruits of an AI which can help us solve practical problems about recruitment and similar issues in a way never before possible. It is important to note that the study run here, this first study, emerged from questions and answers generated by AI. Mind Genomics began to incorporate AI in 2023, typically to solve the problem of researchers “freezing” at the task of developing questions and then answers to those questions (the so-called elements). The early work was so successful that it led to the incorporation of AI in the form of Idea Coach. It was with the exploration of AI beyond requesting questions and answers that the power of AI emerged even more forcefully. The companion paper demonstrated the possibility of creating questions about a topic, and then different answers to the same question, with those answers provided by AI-synthesized mind-sets. Everything, therefore, was under the control of AI, which moved from being a coach to “unfreeze the researcher” to being a true researcher, one almost independent of the human researcher. If we were to summarize the importance of this paper and of the companion paper, it is that we now have a tool which, in a very short period of time, hours and days, can produce information both from generative AI and from actual people responding to the stimuli that the AI considers relevant. The consequence is the promise of increased expertise for the professional and an increased ability to learn how to think critically for younger students. We are sitting on a cusp, where learning through the computer can be made targeted, fun, quick, easy, and even gamified with the results from the Mind Genomics experiment. The simple fact that all of the material presented here was produced in less than one week (really 5.5 days), starting from absolute zero, is witness to the fact that we are on the cusp of an intellectual revolution, where validated information about issues related to people can be generated quickly, both through “library-type” research with AI and through experiments with humans.
Acknowledgment
The authors would like to thank Vanessa Marie B. Arcenas and Isabelle Porat for their help in producing this manuscript.
Abbreviations
AI: Artificial Intelligence, ChatGPT: Chat Generative Pre-Trained Transformer, IDT: Index of Divergent Thought, LLM: Large Language Model, OLS regression: Ordinary Least Squares regression
References
Jahja E, Papajorgji P, Moskowitz H, Margioukla I, Nasto F, Dedej A, Pina P, Shella M, Collaku M, Kaziu E and Gjoni K (2024). Measuring the perceived wellbeing of hemodialysis patients: A Mind Genomics cartography. Plos One 19(5): e0302526. [crossref]
Porretta S, Gere A, Radványi D and Moskowitz H (2019) Mind Genomics (Conjoint Analysis): The new concept research in the analysis of consumer behaviour and choice. Trends in Food Science & Technology 84: 29-33.
Radványi D, Gere A and Moskowitz HR (2020) The mind of sustainability: a mind genomics cartography. International Journal of R&D Innovation Strategy (IJRDIS) 2(1): 22-43.
Mendoza C, Deitel J, Braun M, Rappaport S and Moskowitz HR (2023a) Empowering young researchers: Exploring and understanding responses to the jobs of home aide for a young child. Pediatric Studies and Care 3(1): 1-9.
Mendoza C, Mendoza C, Deitel Y, Rappaport S, Moskowitz H (2023b) Empowering Young People to become Researchers: What Does It Take to become a Police Officer? Psychology Journal: Research Open 5(3): 1-12.
Mendoza C, Mendoza C, Rappaport S, Deitel J, Moskowitz HR (2023c) Empowering young researchers to think critically: Exploring reactions to the ‘Inspirational Charge to the Newly-Minted Physician’. Psychology Journal: Research Open 5(2): 1-9.
Mendoza C, Mendoza C, Rappaport S, Deitel Y, Moskowitz H (2023) Empowering Young Students to Become Researchers: Thinking of Today’s Gasoline Prices. Mind Genom Stud Psychol Exp 2(2): 1-14.
Gofman A and Moskowitz H (2010) Isomorphic permuted experimental designs and their application in conjoint analysis. Journal of Sensory Studies, 25(1): 127-145.
Messinger S, Cooper T, Cooper R, Moskowitz D, Gere A, et al. (2020) New Medical Technology: A Mind Genomics Cartography of How to Present Ideas to Consumers and to Investors. Psychol J Res Open 3(1): 1-13.
Dubey A and Choubey A (2017) A systematic review on k-means clustering techniques. Int J Sci Res Eng Technol (IJSRET, ISSN 2278–0882) 6(6).
A woman’s transition through menopause is a multifaceted experience that encompasses more than the end of reproductive capacity. It presents unique challenges and opportunities for mental health scholars and practitioners. Importantly, the hormonal fluctuations that occur during menopause affect a woman’s neurological and cognitive functioning. As a result, women may experience a variety of cognitive challenges commonly referred to collectively as brain fog; indeed, brain fog is a hallmark symptom of menopause. The application of neurocounseling to menopausal mental health care presents a novel pathway for holistic, personalized treatment. This article presents current information regarding cognition during menopause and neurocounseling, and concludes with recommendations for applying neurocounseling as a treatment approach for brain fog within menopausal mental health care.
Keywords
Menopause, Brain fog, Cognition, Neurocounseling, Neuroscience, Mental health
Cognition During Menopause
During menopause, many women experience noticeable changes in cognition, often referred to as brain fog. These cognitive changes are largely influenced by hormonal fluctuations, particularly the decline in estrogen, which affects brain function. Estrogen plays a key role in cognitive processes, including memory, attention, and learning. As estrogen levels drop during perimenopause, women may face several cognitive challenges. For example, many women report memory problems such as forgetfulness or difficulty recalling information, especially short-term memory issues. A decline in attention span and focus is common, making it harder to concentrate and complete tasks that require sustained attention. During this time, some women notice that it takes longer to think through tasks or solve problems than before, suggesting that some perimenopausal women have lower processing speed. In addition, perimenopausal women may struggle to find the right words during conversations, leading to feelings of frustration. Overall, many women experience a general sense of mental cloudiness or difficulty thinking clearly, affecting problem-solving and decision-making during menopause, and perimenopause in particular [1-5].
These cognitive changes can affect daily life and work, contributing to emotional distress such as anxiety and frustration. This is further exacerbated by the fact that there is no universal assessment or benchmark for the onset of perimenopause [5]. Thus, brain fog is many women’s first encounter with the symptoms of menopause. As a result, many women struggle to recognize their symptoms as those of menopause rather than of simple aging or stress [6]. While this may be inconsequential for some women, the lack of knowledge and preparedness can create significant psychological distress for others. For most women, cognitive changes are temporary and tend to improve post-menopause. Nevertheless, the impairment can last for years as a woman undergoes perimenopause and can be quite debilitating [5]. Lifestyle interventions such as exercise, a healthy diet, mental stimulation, and stress management can help mitigate cognitive difficulties during this phase. Moreover, menopausal hormone therapy (MHT), such as prescribed estrogen via patches, pills, vaginal creams, combined estrogen-progesterone pills, gel-based applications, and certain intrauterine devices (IUDs), can help address the underlying hormonal cause of cognitive impairment. However, these interventions do not address the neurological aspects of a woman’s hormonal fluctuations that contribute to psychological distress [1,7].
Neurocounseling
Neurocounseling is an interdisciplinary approach that integrates neuroscience with counseling practices to better understand and address mental health issues. It focuses on how brain function and neurological processes influence behavior, emotions, cognition, and overall mental health. By incorporating knowledge of the brain and nervous system, neurocounseling helps mental health practitioners design more effective interventions tailored to the biological underpinnings of a person’s mental health challenges [8,9].
The neurocounseling approach involves using tools like brain imaging studies, neurofeedback, mindfulness practices, and cognitive-behavioral strategies to promote positive changes in brain functioning and emotional regulation. The goal is to help patients and clients improve their mental health by combining traditional therapeutic methods with insights from neuroscience, fostering a deeper understanding of how the brain and nervous system respond to therapy. Neurocounseling is particularly useful in treating conditions such as anxiety, depression, trauma, ADHD, and other disorders where brain function plays a critical role. The use of neurocounseling to support menopausal women is relatively unexamined [10-12].
Implications for Menopausal Mental Health Care
Mental health practitioners can use neurocounseling to effectively treat cognitive challenges such as brain fog during menopause by incorporating neuroscience-based techniques that target both the brain and behavior. Given that cognitive changes during menopause, such as memory issues, difficulty concentrating, and brain fog, are often linked to hormonal fluctuations, neurocounseling offers a holistic and empowering approach to managing these challenges. A summary of how six aspects of neurocounseling can be used to address cognitive aspects of menopausal mental health follows.
Psychoeducation
Psychoeducation is an integral aspect of neurocounseling [11]. Mental health practitioners can educate patients and clients about the neurological basis of cognitive changes during menopause, helping them understand that these difficulties are normal and often temporary. This awareness can reduce anxiety and foster a more compassionate view of their menopausal experience.
Cognitive-behavioral Therapy (CBT)
Neurocounseling can integrate CBT techniques to help clients and patients manage negative thought patterns that may arise from cognitive struggles [10]. For example, women who feel frustrated by forgetfulness can learn strategies to reframe their experiences and reduce the emotional burden associated with menopausal cognitive challenges.
Mindfulness and Relaxation Techniques
Since stress exacerbates cognitive decline, mental health practitioners can teach mindfulness-based stress reduction (MBSR) and relaxation techniques [13]. Mindfulness has been shown to positively affect brain plasticity, promoting cognitive resilience by enhancing focus, attention, and emotional regulation. For women undergoing cognitive changes due to menopause, MBSR can be particularly useful, empowering them to assert greater control over their attention and emotional regulation.
Neurofeedback
This tool allows patients and clients to monitor their brain activity in real time and learn how to regulate their brain waves. Neurofeedback can improve concentration, memory, and mental clarity, which are often impacted by menopause [10].
Memory and Attention Training
Mental health practitioners can use brain-based exercises to strengthen cognitive functions such as working memory and attention. Techniques like brain games, puzzles, and structured mental exercises can improve cognitive flexibility and processing speed [14]. As a tool within the neurocounseling framework, memory and attention training can empower women while creating an entertaining outlet. The latter may be particularly important given that many women report an increase in social isolation and a decreased participation in pleasurable activities during the menopausal transition [15].
Lifestyle Guidance
Neurocounseling emphasizes the connection between brain health and lifestyle choices. Mental health practitioners can encourage physical exercise, proper nutrition, and adequate sleep, all of which are linked to better cognitive function. They may also recommend activities that stimulate the brain, like reading, puzzles, or learning new skills, which promote neuroplasticity and cognitive improvement [13].
Conclusion
By using neurocounseling, mental health practitioners can offer women experiencing cognitive difficulties during menopause a comprehensive treatment plan that addresses both the psychological and neurological aspects of their symptoms, helping them regain confidence and mental clarity. For patients who can and are willing to take MHT, mental health practitioners using a neurocounseling approach can work collaboratively with the woman’s medical healthcare provider, who can prescribe MHT [1]. In this case, a combination of MHT and neurocounseling presents a meaningful clinical pathway for treating the hormonal fluctuations, psychological implications, and neurological aspects of cognitive concerns among menopausal women. Women who are unable or unwilling to take MHT can also benefit from neurocounseling for symptom management and improved quality of life.
References
Maki PM, Jaff NG (2022) Brain fog in menopause: a health-care professional’s guide for decision-making and counseling on cognition. Climacteric 25(6): 570-578. [crossref]
Haver MC (2024) The new menopause: navigating your path through hormonal change with purpose, power, and facts. New York, Rodale Books.
Mosconi L, Berti V, Dyke J, Schelbaum E, Jett S, et al. (2021) Menopause impacts human brain structure, connectivity, energy metabolism, and amyloid-beta deposition. Scientific Reports;11(1): 10867. [crossref]
Mosconi L (2024) The menopause brain: new science empowers women to navigate menopause with knowledge and confidence. New York, Avery, an imprint of Penguin Random House.
Malone S (2024) Grown woman talk: your guide to getting and staying healthy. New York, Crown.
Refaei M, Mardanpour S, Masoumi SZ, Parsa P (2022) Women’s experiences in the transition to menopause: A qualitative research. BMC Women’s Health 22(1): 53-53. [crossref]
Gava G, Orsili I, Alvisi S, Mancini I, Seracchioli R, et al. (2019) Cognition, Mood and Sleep in Menopausal Transition: The Role of Menopause Hormone Therapy. Medicina 55(10): 668. [crossref]
Beeson ET, Field TA (2017) Neurocounseling: A New Section of the Journal of Mental Health Counseling. Journal of Mental Health Counseling 39(1): 71-83. Available from: https://doi.org/10.17744/mehc.39.1.06
Goss D (2016) Integrating Neuroscience into Counseling Psychology: A Systematic Review of Current Literature. The Counseling Psychologist 44(6): 895-920.
Russell-Chapin LA (2016) Integrating Neurocounseling into the Counseling Profession: An Introduction. Journal of Mental Health Counseling 38(2): 93-102.
Lorelle S, Michel R (2017) Neurocounseling: Promoting Human Growth and Development Throughout the Life Span. Adultspan Journal 16(2): 106-119. Available From: https://doi.org/10.1002/adsp.12039
Marzbani H, Marateb HR, Mansourian M (2016) Neurofeedback: A Comprehensive Review on System Design, Methodology and Clinical Applications. Basic Clinical Neuroscience 7(2): 143-58. [crossref]
Hughes M (2023) Neuroplasticity-based multi-modal desensitization as treatment for central sensitization syndrome: A 3-phase experimental pilot group. Journal of Pain Management 16(1): 67-74.
Al-Thaqib A, Al-Sultan F, Al-Zahrani A, Al-Kahtani F, Al-Regaiey K, et al. (2018) Brain Training Games Enhance Cognitive Function in Healthy Subjects. Medical Science Monitor Basic Research 24: 63-69. Available From: https://doi.org/10.12659/MSMBR.909022
Currie H, Moger SJ (2019) Menopause – Understanding the impact on women and their partners. Post Reproductive Health 25(4): 183-190. [crossref]
In the pharmaceutical industry, a major challenge is ensuring consistent quality of finished products as the batch scale shifts from laboratory to pilot to commercial levels. This review article aims to provide insights into the current industry practices and understanding of scale-up calculations and factors involved in the production of oral solid dosage forms. Pharmaceutical manufacturing encompasses various unit operations for oral solid dosage forms, including blending, wet granulation, dry granulation via roller compaction, milling, compression, and coating processes such as Wurster and film coating. Each unit operation’s parameters significantly influence the final product’s quality. As batch sizes increase, it becomes crucial to control various process parameters strategically to maintain product consistency. This article discusses the application of scale-up and scale-down calculations throughout different stages of unit operations, highlights the importance of scale-up factors in technology transfer from pilot to commercial scales, and reviews the current methodologies and industry perspectives on scale-up practices.
Oral solid dosage forms are final drug products designed to be ingested orally. Once swallowed, these forms dissolve in the gastrointestinal tract and the active ingredients are absorbed into the bloodstream. Examples of oral solid dosage forms include powders, granules, tablets, capsules, soft gels, gummies, dispersible films, and pills. These dosage forms are preferred for several reasons: they are relatively easy to administer, they can be clearly distinguished from one another, and their manufacturing processes are well established and understood. Among oral solid dosage forms, tablets and capsules are the most common. Both consist of an active pharmaceutical ingredient (API), also known as the drug substance, along with various excipients. The manufacturing process for these dosage forms involves several unit operations, including blending, wet granulation (using a rapid mixer granulator or fluid bed processor), dry granulation (via roller compaction), milling, compression, Wurster coating, and film coating [1,2].

During the early stages of drug product development, formulations and processes are created using active pharmaceutical ingredients (APIs) and excipients to ensure the quality, safety, and efficacy of the final drug products at the laboratory scale [3]. Once the formulation is established, the process is scaled up from the laboratory to pilot and eventually to commercial scales [4,5]. Throughout this technology transfer, the laboratory-scale formulation is generally finalized and remains unchanged, while the process parameters are adjusted. For instance, as the scale of the granulation container increases, the powder weight, the sizes of components such as the impeller and chopper, and the operational parameters may all need adjustment. These changes can affect the quality of the finished product [6]. Successful scale-up relies on a thorough understanding of the process parameters and the ability to adjust them appropriately to maintain the same quality observed at the laboratory scale.
Successful scale-up of a manufacturing process hinges on a deep understanding of the fundamental principles of each unit operation, derived from mechanistic insight into the process. The Food and Drug Administration (FDA) has introduced the Quality by Design (QbD) approach to facilitate the efficient and timely production of high-quality pharmaceutical products [7,8]. According to the International Conference on Harmonisation (ICH) guidelines, specifically ICH Q8 (Pharmaceutical Development), ICH Q9 (Quality Risk Management), and ICH Q10 (Pharmaceutical Quality System), the scale-up process should be conducted to ensure product quality in alignment with QbD principles. To meet these regulatory requirements, it is essential to establish methods for reducing variability during scale-up through a systematic understanding of the manufacturing process and the application of the QbD approach [9]. This review examines the application of mathematical considerations in scale-up calculations and explores various methodologies used in scaling up different unit operations for oral solid dosage forms. It aims to provide a systematic strategy for ensuring the quality of finished dosage forms in the pharmaceutical industry.
Methods
Basic Understanding of the Scale-Up Process
Using scientific approaches and mathematical calculations for process scale-up or scale-down can significantly reduce the risk of failure, ensure regulatory compliance, and lower the cost of trial batches. These calculations help establish robust and realistic parameters for scaling pharmaceutical formulations up or down [10]. When scaling the parameters of a unit operation, key considerations include equipment size, shape, working principle, and associated parameters. According to process modeling theory, processes are deemed similar if they exhibit geometric, kinematic, or dynamic similarity.
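As a minimal illustration of these similarity criteria (not a prescription from any guideline), the short Python sketch below computes a geometric scale factor, a kinematic quantity (tip speed), and a dynamic criterion (rotational Froude number) for a hypothetical scale pair; all equipment sizes and speeds are assumed for illustration only.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def geometric_scale_factor(v_small_l: float, v_large_l: float) -> float:
    """Linear scale factor for geometrically similar vessels:
    all lengths scale with the cube root of the volume ratio."""
    return (v_large_l / v_small_l) ** (1.0 / 3.0)

def tip_speed(rpm: float, diameter_m: float) -> float:
    """Kinematic quantity often matched across scales (m/s)."""
    return math.pi * diameter_m * rpm / 60.0

def froude(rpm: float, diameter_m: float) -> float:
    """Dynamic similarity criterion for rotating equipment: Fr = omega^2 * R / g."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega ** 2 * (diameter_m / 2.0) / G

# Hypothetical example: 25 L lab bowl scaled to a 600 L production bowl
print(round(geometric_scale_factor(25, 600), 2))  # linear dimensions grow ~2.88x
print(round(tip_speed(300, 0.30), 2))             # lab impeller tip speed, ~4.71 m/s
print(round(froude(300, 0.30), 2))                # lab Froude number, ~15.09
```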
Scale-Up Strategy for Oral Solid Dosage Forms
The manufacturing of oral solid dosage forms (tablets and capsules) involves several key unit operations, such as blending, granulation, milling, tableting, Wurster coating, and film coating. Each of these operations requires a carefully planned scale-up strategy to ensure product quality and process efficiency. A detailed overview of the scale-up strategy for each unit operation is discussed below.
A) Blending/Mixing in Pharmaceutical Manufacturing
Blending is a critical unit operation in the manufacture of oral solid dosage forms (e.g., tablets and capsules). It ensures uniformity of the final product by mixing active pharmaceutical ingredients (APIs) with excipients. Equipment used in pharmaceutical blending operations includes double-cone blenders, bin blenders, octagonal blenders, V-blenders, and cubic blenders [11,12]. The blend must reach an adequate degree of homogeneity during blending to ensure the quality of solid dosage forms such as tablets and capsules [13,14]. Blend homogeneity is influenced by several factors, including material attributes (for example, particle size distribution, particle shape, density, surface properties, and particle cohesive strength) and process parameters (for example, blender design, rotational speed, occupancy level, and blending time) [15]. These factors affect agglomeration and segregation during blending, which in turn affect blend homogeneity. Experiments combined with appropriate scale-up calculations are generally sufficient to confirm how these factors change the agglomeration and segregation behavior of the blend [16]. Scale-up considerations and current industry practices in scale-up calculations for the blending unit operation are presented in Table 1a, and a minimal calculation sketch is given after Table 1b. Different types of blenders (Figure 1), such as mass blenders, ribbon blenders, V cone blenders, double cone blenders, octagonal blenders, drum blenders, bin blenders, and vertical blenders, along with their working principles, key factors, and advantages, are presented in Table 1b.
Table 1a: Process parameters, quality attributes, scale up considerations and industry practices for Blending unit operation.
Figure 1: Different types of Pilot/commercial scale model blenders used in pharmaceutical blending unit operation; 1. Mass Blenders; 2. Ribbon Blender; 3. V Cone Blenders; 4. Double Cone Blender; 5. Octagonal Blender; 6. Drum Blender; 7. Bin Blender; 8. Vertical Blender.
Table 1b: Different types of blenders in pharmaceuticals and its working principles, key factors and advantages.
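As a hedged sketch of two blender scale-up heuristics often cited in practice, the code below keeps the Froude number constant when selecting the larger blender's speed and keeps the total number of revolutions constant when selecting the blending time. The blender diameters, speeds, and times are hypothetical, and these rules are simplifications rather than a validated protocol.

```python
import math

def scaled_blender_speed(n_small_rpm: float, d_small_m: float, d_large_m: float) -> float:
    """Keep the Froude number constant: N2 = N1 * sqrt(D1 / D2)."""
    return n_small_rpm * math.sqrt(d_small_m / d_large_m)

def scaled_blend_time(n_small_rpm: float, t_small_min: float, n_large_rpm: float) -> float:
    """Common heuristic: keep the total number of revolutions (N * t) constant."""
    total_revolutions = n_small_rpm * t_small_min
    return total_revolutions / n_large_rpm

# Hypothetical transfer: 10 L bin (0.25 m) at 15 rpm for 20 min -> 300 L bin (0.78 m)
n2 = scaled_blender_speed(15, 0.25, 0.78)
t2 = scaled_blend_time(15, 20, n2)
print(round(n2, 1), "rpm for", round(t2, 1), "min")  # ~8.5 rpm for ~35.3 min
```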
B) Granulation in Pharmaceutical Manufacturing
Granulation is a crucial process in the pharmaceutical industry, particularly in the manufacture of solid dosage forms such as tablets and capsules. It involves the formation of granules from a mixture of powders, which can improve the properties of the final product. The purposes of granulation are to improve flow properties, enhance compressibility, reduce dust, and improve uniformity [17,18]. The pharmaceutical industry currently uses several types of granulation methods:
i). Dry Granulation
Involves compressing powders into slugs or sheets and then milling them into granules. This method is used when the API is sensitive to moisture or heat. The process includes roller compaction and slugging. Typically roller compactors are used in dry granulation process.
ii). Wet Granulation
Involves adding a liquid binder to the powder mixture, which forms a wet mass that is then dried and sized into granules. This method typically includes preparation of binder solution, granulation, drying and sizing. Typically high shear rapid mixer granulators are used for wet granulation.
iii). Semi-Wet Granulation
A combination of the wet and dry processes is involved in this granulation technique: a small amount of liquid binder is used, and the granules are only partially dried. Typically, low-shear fluid bed granulators are used in the semi-wet granulation process.
i). Dry Granulation – Scale-Up Consideration and Industry Perspectives
Dry granulation is an alternative to both direct compression and wet granulation, particularly suited for active pharmaceutical ingredients (APIs) that are sensitive to moisture, have poor flow properties, or possess other physicochemical characteristics that are incompatible with direct compression or wet granulation. Unlike wet granulation, dry granulation does not involve the use of solvents or additional heating, which can introduce challenges related to physical or chemical stability, especially in formulations with amorphous solid dispersions or those prone to chemical degradation. Dry granulation offers several advantages over wet granulation, including a simpler process that is particularly beneficial for APIs that are sensitive to heat or water [19]. The two most commonly used methods for dry granulation are roller compaction and slugging.
Roller Compaction (RC) is a dry granulation technique that simultaneously densifies and agglomerates the powder blend to achieve increased packing density and granule size. In this process, the blend is compacted into ribbons using rollers, which are then milled into granules. Roller compaction reduces the risk of segregation, minimizes dust formation, and produces ribbons that can be processed into granules with improved flow properties. These granules are suitable for various subsequent processes, such as sachet filling, capsule filling, or tableting. Different scales and a schematic representation of the roller compaction process are depicted in Figure 2.
Figure 2: Different types of roller compactors a) Lab scale model b) Pilot/Commercial scale c) Schematic representation of a roller compactor.
Scaling up roller compaction traditionally relies on large-scale experimental designs to optimize the dry granulation process, an approach that can be time-consuming and resource-intensive. To streamline scale-up and minimize the number of experiments, it is crucial to have a deep understanding of the process parameters and of the attributes of both the ribbons and the granules produced [20]. Key process parameters for roller compaction include roll gap, roll pressure, feed screw speed, roll speed, and roller shape. These parameters must be carefully adjusted to achieve the desired granulation outcomes [21]. The quality of the ribbon, the primary product of the roller compaction process, is assessed through several attributes: ribbon density (an indicator of how compacted the ribbon is), ribbon strength (the mechanical strength of the ribbon), ribbon thickness (which affects granule size and uniformity), Young's modulus (a measure of the ribbon's elasticity and rigidity), ribbon shape (which impacts subsequent granule formation), and moisture content (which determines the ribbon's stability and suitability for further processing). By focusing on these parameters and attributes, the roller compaction process can be optimized effectively while reducing the need for extensive experimentation.
The rolling theory for granular solids developed by Johanson describes the pressure distribution along the rolls, considering the physical characteristics of the powder and the geometry of the equipment. The dimensionless number frequently used in roller compaction is derived from the Johanson theory [22]. Johanson proposed a model that predicts the density of ribbons made by roller compaction using the nip area and the volume between the roll gaps. The model distinguishes two regions between the rolls: (i) a slip region, where the roll surface moves faster than the powder and only particle rearrangement occurs, and (ii) a non-slip region, where the powder is trapped between the rolls and becomes increasingly compacted until it reaches the gap. The transition from the slip to the non-slip region is defined by the so-called nip angle. In the non-slip region, the powder is assumed to behave as a solid body being deformed as the distance between the rolls narrows down to the gap. It is further assumed that the deformation has only one axial component, so that it can be idealized as uniaxial compression. One source of discrepancy between the predictions of Johanson's and Reynolds' models and measured ribbon densities is the different compaction behaviour of the powder in the roller compactor compared with uniaxial compression tests [23]. Reimer and Kleinebudde [24] demonstrated that the roller compactor and a compaction simulator lead to different ribbon densities and built a model to account for that difference.
Rowe et al. extended Johanson's model and proposed a modified Bingham number (Bm*), representing the ratio of the yield stress to the viscous stress, as follows:
Where Cs is the screw speed constant, ρtrue the true density, πD the circumference of the roll, D the roll diameter, W the roll width, SAroll the roll surface area, S the roll gap, NS the feed screw speed, and NR the roll speed; the expression also contains a pre-consolidation factor for the powder. Bm* is easy to determine because its input parameters can generally be measured during the compaction process. The model-predicted values and the actual test results from the WP 120 Pharma and WP 200 Pharma (Alexanderwerk, Remscheid, Germany) roller compactors were compared. By maintaining Bm*, it was possible to obtain a consistent ribbon density between the two operating scales, suggesting that Bm* can be used effectively for the development of roller compaction scale-up [25]. Case studies suggest that dimensionless numbers for the prediction of ribbon density in dry granulation processes can be used successfully during scale-up (Table 2). A minimal calculation sketch follows Table 2.
Table 2: Critical process parameters, quality attributes, scale up considerations and industry practices for Roller compaction unit operation.
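Alongside dimensionless approaches such as Bm*, a frequently used practical heuristic is to hold the roll force per unit roll width (specific roll force) and the roll gap constant while letting throughput scale with roll width and roll speed. The sketch below illustrates these simple relationships; the forces and equipment dimensions are hypothetical, and the heuristic is a simplification rather than this article's prescribed method.

```python
import math

def specific_roll_force(roll_force_kn: float, roll_width_cm: float) -> float:
    """Roll force normalized by roll width (kN/cm); keeping this value constant
    is a common way to target the same ribbon density on a wider roll."""
    return roll_force_kn / roll_width_cm

def scaled_roll_force(force_small_kn: float, width_small_cm: float, width_large_cm: float) -> float:
    """Roll force needed on the larger compactor at the same specific roll force."""
    return specific_roll_force(force_small_kn, width_small_cm) * width_large_cm

def ribbon_throughput_kg_h(ribbon_density_kg_m3: float, roll_gap_m: float,
                           roll_width_m: float, roll_diameter_m: float, roll_rpm: float) -> float:
    """Approximate ribbon mass throughput: density * gap * width * roll surface speed."""
    surface_speed_m_h = math.pi * roll_diameter_m * roll_rpm * 60.0
    return ribbon_density_kg_m3 * roll_gap_m * roll_width_m * surface_speed_m_h

# Hypothetical transfer: 10 kN on a 2.5 cm roll -> 12 cm production roll at the same 4 kN/cm
print(scaled_roll_force(10, 2.5, 12))                                 # 48 kN
print(round(ribbon_throughput_kg_h(1100, 0.003, 0.12, 0.25, 8), 1))   # ~149.3 kg/h
```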
ii). Wet Granulation – Scale-Up Consideration and Industry Perspectives
Wet granulation is a key process in pharmaceutical manufacturing used to produce granules from powders by incorporating a liquid binder (Figure 3).
Figure 3: Different types of wet granulation equipment used in pharmaceutical development a) Lab model rapid mixer granulator b) Pilot/Commercial scale rapid mixer granulator.
This process is crucial for ensuring that the final granules exhibit desirable properties such as uniformity, good flowability, and compressibility [26]. The equipment choices for wet granulation include high-shear rapid mixer granulators (RMG) and low-shear fluid bed granulators (FBG). In an RMG, powders are mixed with the binder in a high-shear environment; the impeller and chopper facilitate granule formation by applying mechanical forces. In an FBG, the binder solution is sprayed onto the powder bed in a fluidized state; the fluidized bed aids in the uniform distribution of the binder and in granule formation. In the granulation process, the dry powders, including the active pharmaceutical ingredient (API) and excipients (e.g., fillers, disintegrants, lubricants), are first blended to ensure a uniform distribution. The blended powders are loaded into the high-shear granulator's mixing bowl, and the liquid binder (e.g., water, ethanol, or a polymer solution) is sprayed onto the powder bed; the binder forms granules by adhering powder particles together. The impeller rotates in a horizontal plane, creating a high-shear environment that facilitates mixing and initial granule formation, while the chopper, rotating either vertically or horizontally, breaks up large lumps and ensures uniform granule size by cutting and mixing [27]. The granulation end point is reached when the granules, which continue to grow as binder is added, attain the desired size and consistency; the process is typically monitored to ensure that the granules are neither over-granulated nor under-granulated. The process is controlled by adjusting parameters such as binder addition rate, impeller speed, and chopper speed, and a predefined end point, based on granule size or moisture content, is set to determine when granulation is complete. Scaling up a rapid mixer granulator involves translating process parameters from a smaller, laboratory-scale unit to a larger, production-scale unit while maintaining the desired granule quality and consistency (Tables 3a and 3b). This requires careful consideration of equipment design, power requirements, and process parameters; common scale-up calculations for the RMG are tabulated below, and a minimal calculation sketch follows Table 3b.
Table 3a: Critical process parameters, quality attributes, scale up considerations for RMG granulation unit operation.
Table 3b: Scale up considerations and industry practices for RMG granulation unit operation.
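Two rules of thumb commonly used for selecting the RMG impeller speed on scale-up are constant tip speed and constant Froude number, and binder addition is often scaled so that the addition time stays constant. The sketch below shows these calculations with hypothetical impeller diameters and binder quantities; it illustrates common heuristics rather than a definitive procedure.

```python
import math

def impeller_speed_constant_tip_speed(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    """Constant tip speed rule: N2 = N1 * (D1 / D2)."""
    return n1_rpm * d1_m / d2_m

def impeller_speed_constant_froude(n1_rpm: float, d1_m: float, d2_m: float) -> float:
    """Constant Froude number rule: N2 = N1 * sqrt(D1 / D2)."""
    return n1_rpm * math.sqrt(d1_m / d2_m)

def binder_spray_rate(binder_mass_kg: float, addition_time_min: float) -> float:
    """Keeping the binder addition time constant means the spray rate
    scales directly with the binder quantity at the larger scale."""
    return binder_mass_kg / addition_time_min

# Hypothetical transfer: 0.30 m lab impeller at 300 rpm -> 0.90 m production impeller
print(round(impeller_speed_constant_tip_speed(300, 0.30, 0.90)))   # 100 rpm
print(round(impeller_speed_constant_froude(300, 0.30, 0.90)))      # ~173 rpm
print(binder_spray_rate(12.0, 10.0), "kg/min")                     # 1.2 kg/min at scale
```

In practice the two speed rules bracket a range; the choice between them is usually settled by confirmatory batches and granule attribute data, as discussed in Tables 3a and 3b.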
iii). Semi-Wet Granulation – Scale-Up Consideration and Industry Perspectives
A fluid bed processor (FBP) for granulation operates by passing hot air at high pressure through a distribution plate located at the bottom of the container, creating a fluidized bed of solid particles. This fluidized state, in which particles are suspended in the air, facilitates drying. Granulating liquids or coating solutions are sprayed onto the fluidized particles through a spray nozzle, followed by drying with hot air. The fluidized bed processor operates on the principle of fluidization, where a gas (typically air) is passed through a bed of solid particles at a velocity sufficient to suspend the particles in the gas stream. Air is introduced through a perforated plate or distributor at the bottom of the bed, and as it flows upward, it lifts the particles so that they behave like a fluid. During fluidization, various processes can be carried out: a binder solution or melt is sprayed onto the particles, causing them to agglomerate; hot air removes moisture from the particles; and a coating solution can be applied and then dried. The air, now carrying moisture or coating material, exits through the top of the bed. Scaling up an FBP in the pharmaceutical industry involves several calculations and considerations to ensure that the process can be effectively transitioned from the laboratory or pilot scale to full-scale production while maintaining product quality, efficiency, and compliance with regulatory standards [28,29]. Scale-up of an FBP involves maintaining fluidization conditions and process outcomes similar to those at smaller scales; key principles include maintaining the same fluidization regime, similar granulation or coating characteristics, and ensuring that drying or granulation efficiency scales proportionally (Table 4). A minimal calculation sketch is given after Table 4.
Table 4: Scale up considerations and industry practices for FBP granulation.
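A common starting point for FBP scale-up is to keep the superficial fluidization velocity constant, so that the airflow scales with the distributor (bowl) cross-sectional area, and to keep the spray rate proportional to the drying-air flow so the bed moisture balance is preserved. The sketch below illustrates this with hypothetical bowl diameters and flow rates; it is a simplified heuristic, not a validated protocol.

```python
def air_flow_for_same_superficial_velocity(q1_m3_h: float, d1_m: float, d2_m: float) -> float:
    """Keep the superficial (fluidization) velocity constant:
    airflow scales with the distributor cross-sectional area, i.e. with D^2."""
    return q1_m3_h * (d2_m / d1_m) ** 2

def scaled_spray_rate(spray1_g_min: float, q1_m3_h: float, q2_m3_h: float) -> float:
    """Common heuristic: keep the spray rate per unit drying-air flow constant."""
    return spray1_g_min * q2_m3_h / q1_m3_h

# Hypothetical transfer: 0.30 m lab bowl at 120 m3/h -> 1.20 m production bowl
q2 = air_flow_for_same_superficial_velocity(120, 0.30, 1.20)
print(q2)                               # 1920 m3/h
print(scaled_spray_rate(40, 120, q2))   # 640 g/min
```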
C) Compression in Pharmaceutical Manufacturing
Tablet compression is a critical process in pharmaceutical manufacturing that involves transforming powdered or granulated substances into solid tablets (Figure 4).
Figure 4: Different types of compression machines used in pharmaceutical development a) Lab model Single Punch Tablet Press and b) Pilot/commercial Scale Single Rotary Tablet Press.
Compression is a critical and challenging step in tablet manufacturing. The way a powder blend is compressed directly impacts tablet hardness and friability, which are crucial for dosage form integrity and bioavailability. While the tablet press is essential for the compression process, the preparation of the powder blend is equally important to ensure it is suitable for compression. Understanding the physics and principles of the compression process is vital for managing these operations effectively. For high-dose or poorly compressible drugs, the study of compression becomes particularly important, especially when the relationship between compression force and tablet tensile strength is non-linear. A thorough grasp of compression dynamics also helps resolve many tableting issues, which often stem from various compression-related factors [30,31].
Compression Cycle
Understanding the different stages of the compression cycle is essential for comprehending how powder materials are compacted into tablets. It also provides valuable insights into the various formulation and compression variables that impact the quality of the finished tablet. Compression cycle is divided into following 4 phases: Pre-compression, Main-compression, Decompression and Ejection.
Pre-compression
As the name implies, pre-compression is the initial stage in which a small force is applied to the powder bed to create partial compacts before the main compression. This is typically achieved using a pre-compression roller that is smaller than the main compression roller. However, the size of the pre-compression roller and the level of pre-compression force can vary based on the properties of the material being compressed. For instance, powders that are prone to brittle fracture may require a pre-compression force higher than the main compression force to achieve increased tablet hardness. In contrast, elastic powders need a gradual application of force to minimize elastic recovery and allow for stress relaxation. Optimal tablet formation is often achieved when the sizes of the main and pre-compression rollers, and the forces applied, are similar.
Main Compression
During the main compression phase, inter particulate bonds are formed through particle rearrangement, which is followed by particle fragmentation and/or deformation. For powders with viscoelastic properties, special attention to compression conditions is necessary, as these conditions significantly influence the material’s compression behavior and the overall tableting process.
Decompression
After the compression phase, the tablet experiences elastic recovery, which introduces various stresses. If these stresses exceed the tablet's ability to withstand them, structural failures can occur. For instance, high rates and degrees of elastic recovery may lead to issues such as capping or lamination, and brittle materials may fracture during decompression. Plastic deformation, which is time-dependent, can relieve some of this stress, and the rate of decompression also influences the potential for structural failure. Therefore, incorporating plastically deforming excipients, such as PVP or MCC, is recommended to enhance the tablet's ability to accommodate these stresses.
Ejection
Ejection is the final stage of the compression cycle, involving the separation of the tablet from the die wall. During this phase, friction and shear forces between the tablet and the die wall generate heat, which can lead to further bond formation. To minimize issues such as capping or lamination, lubrication is often used, as it reduces ejection forces. Powders with smaller particle sizes typically require higher ejection forces to remove the tablets from the die. From an industry perspective, an overall understanding of the theoretical aspects of compression helps in selecting the optimal compression conditions for a given tablet product and, at the same time, helps avoid potential tableting problems, saving significant time and resources.
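Because press settings do not transfer directly between machines, compression behavior is usually compared across scales on the basis of compaction pressure and tablet tensile strength rather than raw force or machine settings. A minimal sketch of the standard calculations for round, flat-faced tooling (compaction pressure = force / punch face area; diametral tensile strength = 2F / (πDt)) is given below; the tablet dimensions and forces are hypothetical.

```python
import math

def compaction_pressure_mpa(force_kn: float, punch_diameter_mm: float) -> float:
    """Compaction pressure = compression force / punch face area (round, flat-faced tooling)."""
    area_mm2 = math.pi * (punch_diameter_mm / 2.0) ** 2
    return force_kn * 1000.0 / area_mm2  # N/mm^2 = MPa

def tensile_strength_mpa(breaking_force_n: float, diameter_mm: float, thickness_mm: float) -> float:
    """Diametral tensile strength of a round tablet: sigma = 2F / (pi * D * t)."""
    return 2.0 * breaking_force_n / (math.pi * diameter_mm * thickness_mm)

# Hypothetical round 10 mm tablet compressed at 10 kN, 4 mm thick, breaking at 120 N
print(round(compaction_pressure_mpa(10, 10), 1))   # ~127.3 MPa
print(round(tensile_strength_mpa(120, 10, 4), 2))  # ~1.91 MPa
```

Comparing tensile strength at matched compaction pressure (and comparable dwell time) is one way to judge whether a blend compresses equivalently on lab and production presses.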
D) Wurster Coating in Pharmaceutical Manufacturing
The Wurster fluid bed coating technique is renowned for its versatility and efficiency in coating applications [32]. The method is distinguished by its rapid heat and mass transfer capabilities and its ability to maintain temperature uniformity. Unlike traditional fluidized bed coating, which uses a more straightforward approach, the Wurster method employs a nozzle located at the bottom of a cylindrical draft tube to spray the coating solution. Particles are circulated through this tube, periodically passing through the spraying zone where they encounter fine droplets of the coating solution. This circulation not only ensures thorough mixing but also provides precise control over particle movement and coating quality. The Wurster coating process is extensively utilized in the pharmaceutical industry for coating powders and pellets, and Wurster systems can handle batch sizes ranging from 100 grams to 800 kilograms. The process is suitable for coating particles as small as 100 µm up to tablets. The Wurster coating chamber is typically slightly conical and features a cylindrical partition about half the diameter of the chamber's bottom. At the base of the chamber, an air distribution plate (ADP), also known as an orifice plate, is installed. The ADP is divided into two areas: the open region beneath the Wurster column, which allows for greater air volume and velocity, and the more restricted region outside it. As air flows upward through the ADP, particles move past a spray nozzle positioned centrally within the up-bed region of the ADP. This nozzle, a binary (two-fluid) type, has two ports: one for the coating liquid and one for atomizing air. The nozzle creates a solid cone spray pattern with a spray angle of approximately 30-50°, which defines the coating zone. The region outside the cylindrical partition is referred to as the down-bed area. The choice of ADP is based on the size and density of the material being coated. The height of the column regulates the horizontal flow rate of the substrate into the coating zone. As the coating process progresses and the mass of the material increases, the column height is adjusted to maintain the desired pellet flow rate.
Scaling up the Wurster coating process involves increasing the equipment size to handle larger batch capacities, ranging from small lab-scale units to industrial-scale machines (Figure 5).
Figure 5: Different types of Wurster coating equipment used in pharmaceutical development a) Lab model b) Pilot/Commercial scale model.
Larger systems require careful design to maintain consistent coating quality and process efficiency. Equipment dimensions, including the height and diameter of the coating chamber and the size of the Air Distribution Plate (ADP), must be scaled proportionally to ensure effective particle fluidization and coating (Table 5).
Table 5: Scale up considerations and industry practices for Wurster coating.
As batch size increases, maintaining optimal airflow dynamics becomes crucial. The airflow rate, velocity, and distribution must be adjusted to ensure uniform coating, and larger systems may require modifications to the ADP to accommodate the increased air volume while maintaining the desired particle circulation and spray pattern. The spray nozzle configuration also needs to be scaled to match the increased batch size; consistent liquid atomization and spray pattern are essential for achieving uniform coating thickness, and in larger systems multiple nozzles may be used to cover the expanded coating zone. Process parameters such as temperature, airflow, and coating solution viscosity must be carefully calibrated. From an industry perspective, because scale-up introduces more variables, precise control of these parameters is necessary to maintain coating uniformity and avoid issues such as over- or under-coating. Scale-up also involves adjustments in material handling to accommodate the larger volume and to ensure smooth transfer and processing of the particles, including considerations for feeding systems, particle flow control, and uniform distribution within the coating chamber.
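As a rough illustration of these considerations (not a validated scale-up protocol), the sketch below scales the total spray rate with the fluidization air volume, divides it across multiple nozzles, and scales the batch size with the product bowl volume at the same relative fill; all equipment figures are hypothetical.

```python
def scaled_wurster_spray_rate(spray1_g_min: float, airflow1_m3_h: float, airflow2_m3_h: float) -> float:
    """Common heuristic: scale the total spray rate with the fluidization/drying
    air volume so droplet drying conditions stay comparable."""
    return spray1_g_min * airflow2_m3_h / airflow1_m3_h

def spray_rate_per_nozzle(total_spray_g_min: float, n_nozzles: int) -> float:
    """Large Wurster units often use several partitions/nozzles; each coating
    zone should see roughly the lab-scale rate."""
    return total_spray_g_min / n_nozzles

def scaled_batch_size(batch1_kg: float, bowl_volume1_l: float, bowl_volume2_l: float) -> float:
    """Keep the same relative bowl fill so up-bed and down-bed hold-up stay proportionate."""
    return batch1_kg * bowl_volume2_l / bowl_volume1_l

# Hypothetical transfer: 15 g/min at 150 m3/h -> 1800 m3/h production unit with 3 partitions
total = scaled_wurster_spray_rate(15, 150, 1800)
print(total, spray_rate_per_nozzle(total, 3))        # 180 g/min total, 60 g/min per nozzle
print(round(scaled_batch_size(2.0, 22, 270), 1))     # ~24.5 kg at the same fill ratio
```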
E) Film Coating in Pharmaceutical Manufacturing
Film coating is a widely used technique in pharmaceutical manufacturing to apply a thin layer of coating material onto tablets and other dosage forms (Figure 6).
Figure 6: Different types of film coating equipment used in pharmaceutical development a) Lab model and b) Pilot/commercial scale film coating equipment.
This coating process enhances the appearance, improves the stability, and controls the release of active ingredients in pharmaceutical products. Different film coating formulations can be used to achieve controlled- or modified-release properties, allowing the gradual release of the drug over time and improving therapeutic outcomes and patient compliance. Film coatings can improve the appearance of dosage forms, making them more appealing to patients, and can mask the taste of unpleasant drugs, making oral administration more acceptable [33]. Choosing the wrong film coating equipment or using subpar technology can lead to significant film coating defects, which can greatly affect the quality, efficacy, and appearance of pharmaceutical products; it is essential to identify and address these issues to maintain product integrity and ensure compliance. An overview of common film coating defects and their potential causes is given in Table 6a. Scaling up film coating processes in pharmaceutical manufacturing involves several important considerations to ensure that the coating process remains effective and consistent as production volumes increase (Table 6b); a minimal calculation sketch is given after Table 6b.
Table 6a: Pharmaceutical film coating defects, root causes, and remedial actions.
Table 6b: Scale up considerations and industry practices for Film coating.
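A minimal sketch of typical pan film-coating scale-up arithmetic, assuming the spray rate is scaled with drying-air flow, the pan speed is set for constant peripheral speed, and the coating time is derived from the target weight gain and the dispersion solids content; the numbers are hypothetical and the rules are common heuristics rather than this article's prescribed method.

```python
def scaled_pan_spray_rate(spray1_g_min: float, air1_m3_h: float, air2_m3_h: float) -> float:
    """Heuristic: keep the spray rate per unit drying-air flow constant so the
    tablet-bed thermodynamics (exhaust temperature/humidity) stay similar."""
    return spray1_g_min * air2_m3_h / air1_m3_h

def scaled_pan_speed(rpm1: float, pan_d1_m: float, pan_d2_m: float) -> float:
    """Keep the pan peripheral speed constant: N2 = N1 * (D1 / D2)."""
    return rpm1 * pan_d1_m / pan_d2_m

def coating_time_min(batch_kg: float, weight_gain_frac: float,
                     solids_frac: float, spray_rate_g_min: float) -> float:
    """Time to reach the target weight gain with a given dispersion solids content."""
    coating_solids_g = batch_kg * 1000.0 * weight_gain_frac
    dispersion_g = coating_solids_g / solids_frac
    return dispersion_g / spray_rate_g_min

# Hypothetical transfer: 60 g/min at 400 m3/h in a 0.6 m pan -> 3500 m3/h, 1.3 m pan, 120 kg batch
spray2 = scaled_pan_spray_rate(60, 400, 3500)
print(round(spray2))                                     # 525 g/min
print(round(scaled_pan_speed(12, 0.6, 1.3), 1))          # ~5.5 rpm
print(round(coating_time_min(120, 0.03, 0.15, spray2)))  # ~46 min for a 3% weight gain
```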
Current Industry Perspectives
Current industry perspectives on scale-up calculations emphasize a comprehensive understanding of both the scientific and operational aspects of production. By leveraging scale-up calculations, advanced methodologies such as Design of Experiments (DoE) and Quality by Design (QbD), and a keen focus on cost, equipment selection, and regulatory compliance, pharmaceutical companies can navigate the complexities of scaling up oral solid dosage forms effectively. Adapting to technological advancements and maintaining a proactive approach to risk management will be crucial for success in an increasingly competitive landscape.
Conclusion
The scale-up of oral solid dosage forms (OSDFs) is a critical phase in pharmaceutical development that directly influences product quality, regulatory compliance, and market success. Successful scale-up of OSDFs is a multifaceted challenge that requires strategic planning and execution. By focusing on critical factors such as integrated processes, quality assurance, economic considerations, regulatory compliance, technological advancements, risk management, and continuous improvement, pharmaceutical companies can enhance their chances of delivering high-quality products to the market. As the industry evolves, maintaining a forward-thinking approach will be essential for navigating complexities and ensuring sustainable success in a competitive landscape.
Conflicts of Interest
The authors declare no conflict of interest.
Acknowledgement
Authors acknowledge Dr. Sudhakar Vidiyala, Managing Director, Ascent Pharmaceuticals Inc. for his support and encouragement in writing this review article.
References
Eun HJ, Yun SP, Min-Soo K, Hyung DC (2020) Model-Based Scale-up Methodologies for Pharmaceutical Granulation. Pharmaceutics 12. [crossref]
Doodipala N, Palem CR, Reddy S, Madhusudan RY (2011) Pharmaceutical development and clinical pharmacokinetic evaluation of gastroretentive floating matrix tablets of levofloxacin. Int J Pharm Sci Nanotech 4: 1461-1467.
Raval N, Tambe V, Maheshwari R, Pran KD, Rakesh KT (2018) Scale-Up Studies in Pharmaceutical Products Development. In Dosage Form Design Considerations; Academic Press: Cambridge, MA, USA.
Amirkia V, Heinrich M (2015) Natural products and drug discovery: A survey of stakeholders in industry and academia. Frontiers in Pharmacology 6. [crossref]
Morten A, Rene H, Per H (2016) Roller compaction scale-up using roll width as scale factor and laser-based determined ribbon porosity as critical material attribute. Eur J Pharm Sci 87: 69-78. [crossref]
Mazor A, Orefice L, Michrafy A, Alain DR, Khinast JG (2018) A combined DEM & FEM approach for modelling roll compaction process. Powder Technol 337: 3-16.
Vladisavljević GT, Khalid N, Neves MA, Kuroiwa T, Nakajima M, et al. (2013) Industrial lab-on-a-chip: Design, applications and scale-up for drug discovery and delivery. Adv Drug Deliv Rev 65(11-12): 1626-1663. [crossref]
U.S. Food and Drug Administration (2003) Pharmaceutical cGMPs for the 21st Century – A Risk-Based Approach: Second Progress Report and Implementation Plan. FDA website, Drugs section.
Mahdi Y, Mouhi L, Guemras N, Daoud K (2016) Coupling the image analysis and the artificial neural networks to predict a mixing time of a pharmaceutical powder. J Fundam Appl Sci 8: 655–670.
Moakher M, Shinbrot T, Muzzio FJ (2000) Experimentally validated computations of flow, mixing and segregation of non-cohesive grains in 3D tumbling blenders. Powder Technol 109: 58-71.
Cleary PW, Sinnott MD (2008) Assessing mixing characteristics of particle-mixing and granulation devices. Particuology 6: 419-444.
Mendez ASL, Carli de G, Garcia CV (2010) Evaluation of powder mixing operation during batch production: Application to operational qualification procedure in the pharmaceutical industry. Powder Technology 198: 310–313.
Arratia PE, Duong NH, Muzzio FJ, Godbole P, Lange A, et al. (2006) Characterizing mixing and lubrication in the Bohle bin blender. Powder Technol 161: 202-208.
Adam S, Suzzi D, Radeke C, Khinast JG (2011) An integrated quality by design (QbD) approach towards design space definition of a blending unit operation by discrete element method (DEM) simulation. Eur J Pharm Sci 42: 106-115. [crossref]
Palem CR, Gannu R, Yamsani SK, Yamsani VV, Yamsani MR (2011) Development of bioadhesive buccal tablets for felodipine and pioglitazone in combined dosage form: in vitro, ex vivo, and in vivo characterization. Drug Delivery 18: 344-352. [crossref]
Teng Y, Qiu Z, Wen H (2009) Systematic approach of formulation and process development using roller compaction. Eur J Pharm Biopharm 73: 219-229. [crossref]
Kleinebudde P (2004) Roll compaction/dry granulation: Pharmaceutical applications. Eur J Pharm Biopharm 58: 317-326.
Gago AP, Reynolds G, Kleinebudde P (2018) Impact of roll compactor scale on ribbon density. Powder Technol 337: 92-103.
Johanson J (1965) A rolling theory for granular solids. J Appl Mech 32(4): 842-848.
Reynolds G, Ingale R, Roberts R, Kothari S, Gururajan B (2010) Practical application of roller compaction process modeling. Comput Chem Eng 34: 1049–1057.
Reimer HL, Kleinebudde P (2018) Hybrid modelling of roll compaction processes with the Styl’One Evolution. Powder Technol 341: 66–74.
Rowe JM, Crison JR, Carragher TJ, Vatsaraj N, Mccann RJ, et al. (2013) Mechanistic Insights into the Scale-Up of the Roller Compaction Process: A Practical and Dimensionless Approach. J Pharm Sci 102: 3586-3595. [crossref]
Rambali B, Baert L, Massart D L (2003) Scaling up of the fluidized bed granulation process. Int J Pharm 252: 197-206. [crossref]
Victor EN, Ivonne K, Maus M, Andrea S, Daniela S (2021) A linear scale-up approach to fluid bed granulation. Int J Pharm 598: 120-209. [crossref]
Patel S, Kaushal AM, Bansal AK (2006) Compression Physics in the Formulation Development of Tablets. Critical Reviews in Therapeutic Drug Carrier Systems 23: 1-65. [crossref]
Mohan S (2012) Compression Physics of Pharmaceutical Powders: A Review. Int J of Pharm Sci and Research 3: 1580-1592. [crossref]
Teunou E, Poncelet D (2002) Batch and continuous fluid bed coating – review and state of the art. Journal of Food Engineering.
Ahmad S (2022) Pharmaceutical Coating and Its Different Approaches, a Review. Polymers 14. [crossref]