Toward a Comprehensive Brain Deal to Harness the Potential of Artificial Intelligence
Harris A. Eyre, et al., "Upgrading Our Societal Operating System: Toward a Comprehensive Brain Deal to Harness the Potential of Artificial Intelligence" (Houston: Rice University’s Baker Institute for Public Policy, May 31, 2023), https://doi.org/10.25613/0GQN-7Q13.
By Harris A. Eyre, M.D., Ph.D.; Walter D. Dawson, Ph.D.; Ryan Abbott, M.D., Ph.D., J.D.; Karen Silverman, J.D.; Shuo Chen, J.D.; Rym Ayadi, Ph.D.; Steve Carnevale; Stephen T. C. Wong, Ph.D., P.E.; Michael Berk, M.D., Ph.D.; Upali Nanda, Ph.D.; Geoffrey F. L. Ling, M.D., Ph.D.; Michelle Tempest, M.D.; Jo-An Occhipinti, Ph.D.; Seema Shah, Ph.D.; Julia Mahfouz, Ph.D.; Ernestine Fu, Ph.D.; Ian MacRae; Gabriel Esquivel; Theo Edmonds, J.D., M.H.A., M.F.A.; Felice Jacka, Ph.D.; William Hynes, D.Phil.; Grace Wickerson; Pawel Swieboda
With the rapid emergence of large language models like ChatGPT, alarm bells are sounding loudly about the risks of advanced artificial intelligence (AI) — both for what we know about its capabilities, and what we do not. Yuval Noah Harari, famed historian, suggested that AI has “hacked the operating system of human civilization.” He, of course, is referring to our brains as our operating systems, influencing how we make decisions and behave — both individually and collectively. Our collective brains are referred to as societal “brain capital.”
As Harari suggests, there are very real concerns about the impact of AI on our brains. AI could also have detrimental effects on our social lives and even our jobs. Designing effective policy to maximize the benefits and minimize the risks of AI therefore requires a better understanding of the human brain. Establishing a comprehensive “Brain Deal” that advances new policies, datasets, and projects is one way to ensure that AI is used for good. This Brain Deal would encompass policymaking across a number of fields — from the economy and education to health care, social services, and digital infrastructure.
This policy brief highlights the need to continue to advance brain science and management tools and translate our new knowledge and findings into effective policymaking and implementation. Establishing an effective Brain Deal is at the core of this effort.
The Risks and Benefits of AI
Many observers have highlighted significant risks of AI that must be addressed. These include the potential for wide-scale job loss and the consequent need for retraining, as well as the rise of deep fakes and compelling disinformation on social media. The highly addictive nature of AI-driven platforms like TikTok is not well understood, and these platforms are suspected of harming teen mental health. These negative effects could be direct, such as distorted perceptions of and expectations regarding life, or indirect, such as the opportunity costs of excessive screen time.
The potential for inappropriate use of AI-supported brain-computer interfaces, which could enable influence operations by hostile governments in future wartime operating environments, is also a real concern. Furthermore, recent brain-reading neurotechnologies suggest there may soon be a major battle for our brains and a need to defend the right to think freely in the age of neurotechnology. AI-empowered bots can even be corralled by organized crime.
All of these rapid and real issues have led some to call for a temporary pause in AI innovation; however, such a pause would be nearly impossible to implement, given that AI automation has already been integrated seamlessly into daily use across many industries, e.g., manufacturing and finance. Regardless, these risks need to be navigated, as they have significant implications for well-being, productivity, and social cohesion.
Conversely, there are many potential applications of AI that are beneficial to society. In the realm of education, AI systems can help to track and optimize the speech and language development of children. Algorithms can also enhance the educational experience and outcomes for students by allowing for greater personalization based on how individual brains learn differently.
In the health care setting, AI chatbot assistants may be able to aid in drafting responses to patient questions. In one study, chatbot responses were even preferred over those of physicians and were rated significantly higher for both quality and empathy. These tools thus have the potential to augment and extend the productive capacity of the existing health care workforce. Furthermore, big data innovations are key for better detection and treatment of health conditions including depression, anxiety, and Alzheimer’s disease. The Economist also recently noted that neuroscience is undergoing a renaissance thanks to new and advanced tools such as CRISPR, optogenetics, omics, brain-computer interfaces, neuroimaging, and stem cells.
AI’s ability to generate complex and diverse reports of rapidly increasing quality has the potential to increase economic efficiency. Certain AI applications can lower costs and waste in supply chains, reduce fraud and abuse in insurance claims and health care delivery, accelerate drug discovery and product development, and even enhance public policy alternatives.
In the workplace setting, a recent report from Goldman Sachs noted, “Worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth. The combination of significant labor cost savings, new job creation, and higher productivity for non-displaced workers raises the possibility of a productivity boom that raises economic growth substantially.” AI has also entered global use much more rapidly than previous transformative technologies such as the PC and internet, demanding equally rapid resets in almost all domains.
It is beyond dispute that AI will change how we do almost everything, how we need to think, and where we need to apply mental energy. In the short term (and possibly longer), this increases the risk of stress and anxiety in the general population. In the long term, however, it is important to implement the right policy infrastructure to ensure that brain science and AI can continue to advance to improve society.
Brain Capital as a Key National Asset
The world is increasingly relying on brain capital, where a premium is put on brain skills and brain health. Building brain capital is fundamental for meeting modern societal challenges and driving innovation. It also provides a framing to understand the negative impacts of AI on our communities and offers insights as to how to mitigate them. The most impactful solutions to build brain capital at scale do not lie at the level of the individual or health services, but in public policy.
The Brain Capital Grand Strategy, published in 2021, articulates the need for a Brain Capital Dashboard, Brain Capital In-All-Policies, and a Brain Capital Investment Plan. The Organisation for Economic Co-operation and Development (OECD) also launched the Neuroscience-inspired Policy Initiative to promote and refine the concept of brain capital. Following the success of that initiative, the Brain Capital Alliance has now been formed as its multi-organizational extension.
What a Brain Deal Would Entail
Five of the six priorities of the U.S. Surgeon General relate to brain health and misinformation, all of which are issues of global relevance. This signals an important moment for governments to commit to a coherent set of economic, social, and health policies and initiatives aimed at fostering brain health through brain health policy entrepreneurship, and specifically through a Brain Deal.
A Brain Deal could start by focusing on a number of core areas:
Ensure Effective Infrastructure for Mental Health and Wellness
- Prioritize mental health and integrity for research and regulation of AI applications. More funding should be allocated to the field of captology — pioneered by Stanford professor BJ Fogg — which creates insight into how computing products can be designed to change what people believe and what they do. Moreover, when social media or other technologies enhance or exploit human vulnerabilities, we should fashion regulation and standards that mitigate those harms. Rules and regulations tend to redress physical and financial harms, or violations of civil rights. We should elevate mental health and integrity to this level. If AI-enhanced social media becomes hyper-effective at manipulating the dopamine reward systems of the brain, safety regulations should address that. Jonathan Haidt, an NYU professor, argues that the age at which social media companies can collect children’s data without parental consent should be raised from 13 to 16, thereby protecting teens during the most vulnerable years of early puberty. That is one approach. Likewise, federal law could require more notifications when an app has been used for too long, automatic turn-offs at night, and more. We are encouraged to see the Biden administration’s new Interagency Task Force on Kids Online Health and Safety, operated jointly by the Department of Health and Human Services and the Department of Commerce, with a mandate to explore and find solutions to these issues.
- Implement clinical brain health technology regulatory reforms. AI-driven clinical tools for brain and mental disorders likely require regulatory reforms. A recent op-ed by Dr. Tom Insel, former director of the National Institute of Mental Health, argued for a new federal digital mental health regulatory agency to differentiate useful mental health tech from digital snake oil. A range of groups are advocating for the establishment of a Neuroscience Center of Excellence at the Food and Drug Administration to solve problems related to the development of new treatments, including digital, small molecule, and device-based options. Optimizing model transparency for advanced AI algorithms in health care and other sensitive industries is also key. This was noted in the recent White House Report on Mental Health Research Priorities.
- Build predictive models to steer mental health policy. There are new opportunities to combine economic, social, environmental, and medical data to forecast need and design services to address the growing mental health crisis. This may include drawing together qualitative and quantitative evidence and data and capturing changes triggered by the pandemic — such as education loss, job loss, domestic violence, social isolation, fear, and uncertainty — and emerging megatrends. Models could forecast demand for community mental-health services and acute care, including emergency-department presentations and psychiatric hospitalizations, as well as outcomes such as suicidal and homicidal behavior. Importantly, these models can be used as decision support tools for policy and planning before large investments are made, saving time, resources, and lives.
- Ensure digital mental health equity. Digital determinants of health, including access to technological tools, digital literacy, and community infrastructure like broadband internet, likely function independently as barriers to and facilitators of health. They also interact with the social determinants of health to impact outcomes. The rapid digitization of health care may widen health disparities if solutions are not developed with these determinants in mind. The digital transformation of health requires leaders and developers to understand how digital determinants impact health equity. Richardson et al. recently detailed a framework for digital health equity that aims to support the work of digital health tool creators in industry, health systems operations, and academia.
- Utilize the concept of data justice to eliminate harmful impacts on historically marginalized communities. Data justice is an approach that redresses ways of collecting and disseminating data that have harmed historically marginalized communities in often invisible ways. For decades, data has been weaponized against different types of communities to reinforce oppressive systems that result in divestment and often inappropriate and harmful policies. There is a real need to interrogate the ethical implications of AI methods across a multitude of cultures and techniques. A common goal should be to explore the current status of the use of AI and machine-learning frameworks and investigate new methods of creation relative to data justice. Data justice has emerged as a key framework for engaging with the intersection of datafication and society in a way that privileges an explicit concern with social justice.
- Invest in the built infrastructure as a means to develop brain infrastructure. Our cities and the buildings we live in are not conducive to physical or mental fitness. Simple principles like daylight, air quality, acoustics, access to nature, accessible sidewalks, walkable/bikeable neighborhoods, opportunities for social connection, and access to housing, health care, arts, and other amenities can go a long way toward developing a healthy built environment. When layered with the right digital infrastructure, this can set communities up for harnessing brain capital. Additionally, building systems have the incredible potential to leverage AI to learn and respond to human needs and balance them against climate objectives. The question is, how do we maximize the value of data, optimize the use of mobile telecommunications and cloud computing, and bring the Internet of Things (IoT) to life? Effectively, IoT represents the systems that will enable sensors deployed across various built environment systems and equipment to speak to one another using AI and machine learning, increasing both the volume and velocity of data movement and creating new opportunities to interconnect physical and brain operations.
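The predictive-modeling idea above can be pictured with a toy sketch. This is purely illustrative: the brief does not prescribe any particular model, and the dataset, the single predictor (unemployment rate), and every number below are invented for demonstration. Real planning models of this kind would combine many economic, social, environmental, and medical indicators, typically via systems modeling rather than a single regression.

```python
from statistics import mean

# Hypothetical quarterly data (invented for illustration only):
# regional unemployment rate (%) and emergency-department mental
# health presentations per 100,000 residents.
unemployment = [3.5, 4.0, 5.2, 6.1, 5.8, 5.0]
ed_presentations = [210, 220, 245, 270, 260, 240]

# Fit a one-variable least-squares line:
#   presentations = a + b * unemployment
x_bar, y_bar = mean(unemployment), mean(ed_presentations)
b = sum((x - x_bar) * (y - y_bar)
        for x, y in zip(unemployment, ed_presentations)) / \
    sum((x - x_bar) ** 2 for x in unemployment)
a = y_bar - b * x_bar

def forecast(unemployment_rate: float) -> float:
    """Project ED mental health presentations for a scenario
    unemployment rate, before resources are committed."""
    return a + b * unemployment_rate

# A planner could stress-test a downturn scenario in advance:
print(f"Forecast at 7% unemployment: {forecast(7.0):.0f} per 100,000")
```

The point of the sketch is the workflow, not the arithmetic: scenarios are run through the fitted model as a decision-support step before large investments are made, exactly as the bullet above describes.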
Drive a Brain Capital-Informed Research Agenda
- Drive precision nutritional brain health. Good food is increasingly understood as vital to brain health. By leveraging AI technology, we can rapidly advance precision nutrition. The application of AI affords the analysis of vast amounts of data, including genetic information (human and microbial); other key biomarkers such as those for immune function, metabolites, lipids, and blood glucose; health conditions; and other behavioral and environmental factors. All of this data can be used to generate personalized nutrition plans tailored to optimize brain health. Machine learning algorithms can identify patterns and correlations between specific nutrients, foods, and phytochemicals and mental and cognitive function, allowing for individually targeted interventions. AI-powered apps and devices can track individual dietary habits, provide real-time feedback, and suggest dietary adjustments. Furthermore, AI can facilitate the discovery of new interventions that incorporate and leverage diet through advanced data mining techniques. These approaches may be considered in the proposed National Institutes of Health Food as Medicine Research Opportunities.
- Design and launch a “Brain Moonshot” via a National Brain Institute. Akin to the “Cancer Moonshot” — a major public-private partnership of the National Cancer Institute to crack the code on cancer — a similar effort is needed for the brain to understand how our brains can best harness the potential of AI. The Association for Mathematical Consciousness Science also suggests the AI and neuroscience fields need to collaborate to advance our understanding of consciousness, given advanced AI may soon achieve this state. This type of Brain Moonshot approach, operated by a National Brain Institute, could occur in tandem with the newly announced Advanced Research Projects Agency for Health, which supports transformative research to drive biomedical and health breakthroughs — ranging from molecular to societal — to provide transformative health solutions for all. This would put further resources and private sector action behind the pre-existing BRAIN Initiative.
Advance National Security Through Brain Investments
- Use data to determine vulnerability to misinformation. Fake news, conspiracy theories, the shuttering of local newspapers, controversies over social media, rumors of bizarre side effects from COVID vaccinations, and the erosion of basic trust in scientific facts increasingly distort our public debates. Research suggests there are strong linkages between vulnerability to misinformation and despair. Given the major risks to our democracy from domestic radicalization driven by misinformation, we urgently require data to track vulnerability and resilience to misinformation, alongside innovative public health and policy approaches. Special care must be taken to protect data privacy. AI will only amplify the brain-hacking effects of misinformation. If a particular region is determined to be vulnerable to misinformation, a Swedish-style psychological defense agency could be established to implement a population-wide campaign on misinformation awareness and education. Alternatively, education on spotting misinformation could be infused into the early education system, as is done in Finland.
- Manage the emergence of dual-use brain-computer interfaces (BCIs). BCIs refer to systems establishing a direct connection pathway between a brain and an external computer such as a PC, a robotic arm, a speech synthesizer, or a wheelchair. Considerable gaps remain in our understanding of the biological processes and mechanisms involved in BCIs that must be closed to make them consistent, reliable, and cost-effective. In particular, the impact on neuroplasticity and how the brain changes in response to BCIs needs to be better understood. AI, in turn, can accelerate the discovery of knowledge in this field. Several countries are advancing BCI innovation for both civilian and military usage. This requires democratic nations to make decisions about how to manage their own investments in military applications of neuroscience research and emerging neurotechnology. Recently, Kosal and Putney put forward an analytical ethical framework that “attempts to predict the dissemination of neurotechnologies to both the commercial and military sectors in the United States and China.”
Prepare the Workforce of Today and of the Future
- Support effective use of advanced AI in the classroom. While generative AI can more easily allow students to plagiarize without being detected, it also has numerous potential classroom benefits. For example, bilingual learners may be able to benefit from learning in any of their preferred languages in real time. New curricula should be developed and deployed to teach students how to leverage AI to optimize the quality of their learning and efficiency. If used to deepen creativity, knowledge, and problem-solving, large validated language models in particular domains can help students understand the context of what they are learning and expand their critical thinking. AI can also help teachers prepare and optimize their coursework, develop lesson plans and exercises quickly, and spend more classroom time with interactive learning.
- Mitigate the transitional uncertainty of AI on jobs. AI could displace certain types of jobs and require workers to learn new skills and adapt to new roles. This could create a period of transition and uncertainty — and higher rates of depression and anxiety — for some workers. Requiring particular attention in this transition are those who are less adaptable and/or who lack access to the necessary training and education to reskill for new roles. This of course includes those living with brain health conditions such as depression, anxiety, and cognitive decline. Therefore, thoughtful strategies must be implemented to care for those working with brain health conditions, and targeted education and training programs will be needed to help workers acquire the skills they need to succeed in an AI-driven economy. It is encouraging to see Microsoft’s framing and integration of AI into its existing products as a “co-pilot.”
- Use AI to optimize creative outputs. Generative AI, such as GPT-3, has a profound impact on human creativity. By analyzing vast amounts of data and learning patterns, these models can generate highly realistic and creative content, including art, music, and writing. This technology serves as a powerful tool for artists, writers, and musicians, augmenting their creative process. It can inspire new ideas, provide unique perspectives, and offer novel solutions to creative challenges. However, concerns arise regarding the role of AI in the creative sphere, as it blurs the lines between human and machine-generated content. Nevertheless, generative AI holds great potential to enhance human creativity, pushing the boundaries of imagination and enabling new artistic expressions.
Establish Governance Infrastructure to Drive the Brain Deal Toward Success
- Reinstate the Presidential Bioethics Commission and orient it to neuroethics. Over the past five decades, U.S. bioethics commissions under both Democratic and Republican administrations have helped guide the development of government policies affecting U.S. citizens in important ways. Such a commission should be reinstated and focused on neuroethics given the above-mentioned issues. Similarly in Europe, the current centrality of AI regulation needs to be completed by creating appropriate frameworks for the governance of neurotechnology, in particular during the next institutional cycle starting with the elections to the European Parliament in 2024.
- Establish a White House Brain Capital Council. We recently proposed a White House Brain Capital Council. This council would take a whole-of-country approach, integrating the federal government with communities at all levels and engaging partners across the spectrum — from small and medium enterprises to patient and caregiver groups, to educators, health care workers, economists, and beyond. This Brain Capital Council would harmonize with existing task forces, councils, and advisory groups with overlapping remits to not duplicate but rather bolster all efforts related to the building of brain capital.
- Leverage the Brain Capital Dashboard. As noted previously, brain capital provides a framing to understand the negative impacts of AI on our communities and offers insights as to how to mitigate them. We must leverage data to determine ways to optimize AI so it builds and doesn’t degrade brain capital. The OECD’s Neuroscience-inspired Policy Initiative (NIPI), the Brain Capital Alliance (BCA), and the Euro-Mediterranean Economists Association have developed a country-by-country Global Brain Capital Dashboard to quantify and track brain capital globally. Datasets have been converged under the banner of three domains: drivers, health, and skills. The drivers domain will involve datasets focused on digitalization, health services, the natural environment, perceptions, social connections, and research and development. The brain health domain will involve datasets focused on the absence of disorders, childhood/adolescence-related issues, aging-related issues, and prenatal-related issues. The brain skills domain will involve datasets focused on cognitive skills, non-cognitive skills, mental flourishing, and mental resilience.
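The dashboard's three-domain design can be pictured as a simple data structure. In this minimal sketch, the domain and dataset-category names follow the brief, but the per-country scores, the 0-1 normalization, and the equal-weight averaging rule are all invented placeholders; the actual dashboard's indicators and weighting are not specified here.

```python
# The three dashboard domains and their dataset categories, as
# described in the brief. Structure only; no real data.
DASHBOARD = {
    "drivers": [
        "digitalization", "health services", "natural environment",
        "perceptions", "social connections", "research and development",
    ],
    "brain health": [
        "absence of disorders", "childhood/adolescence-related issues",
        "aging-related issues", "prenatal-related issues",
    ],
    "brain skills": [
        "cognitive skills", "non-cognitive skills",
        "mental flourishing", "mental resilience",
    ],
}

def domain_score(country_scores: dict[str, float], domain: str) -> float:
    """Equal-weight average of a country's normalized (0-1) indicator
    scores for one domain -- a placeholder aggregation rule."""
    categories = DASHBOARD[domain]
    return sum(country_scores[c] for c in categories) / len(categories)

# Invented placeholder scores for a hypothetical country.
country = {cat: 0.6 for cats in DASHBOARD.values() for cat in cats}
print({d: round(domain_score(country, d), 2) for d in DASHBOARD})
```

Organizing the data this way makes the tracking task concrete: each country becomes a vector of indicator scores that can be compared across domains and over time.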
Conclusion: Moving Forward with a Comprehensive Brain Deal
A Brain Deal should ideally cover all sectors of society as AI can have beneficial or harmful effects across sectors ranging from the economy and education to health care, social services, and digital infrastructure. The Brain Deal would align well with President Biden’s focus on “mental health and well-being,” one of his top four bipartisan priorities, as well as the goals of the Congressional Neuroscience Caucus. It could also be considered by the National Artificial Intelligence Advisory Committee and an equivalent European entity such as the EU AI Agency.
Although implementing a comprehensive Brain Deal is a major undertaking, beginning with a summit like the United Nations General Assembly to unpack the above-mentioned domains would be an ideal place to start. A critical aim of the summit would be to ensure coherence of policies and initiatives across these domains to avoid the silos and inefficiencies that could delay progress.
As Arianna Huffington recently noted, “All technology is about humans and what the technology allows us to do. So does it allow us to be our best selves, and unlock uniquely human qualities? Does it augment our humanity, or diminish it?”
Moving forward with a Brain Deal would allow us to ensure that AI strengthens our minds and brains without diminishing their potential.
Author Affiliations
Harris A. Eyre, M.D., Ph.D., is a fellow at Rice University’s Baker Institute for Public Policy and a senior fellow at the Meadows Mental Health Policy Institute. He is an international advisor to the Latin American Brain Health Institute (BrainLat) and FondaMental Fondation, and an advisor to the Texas Medical Center’s Innovation Institute. Eyre maintains adjunct positions with the Baylor College of Medicine and The University of Texas Health Sciences Center at Houston.
Walter D. Dawson, D.Phil., is an assistant professor of neurology at the School of Medicine at OHSU and at the OHSU-PSU School of Public Health, as well as a senior fellow with the Global Brain Health Institute.
Ryan Abbott, M.D., J.D., Ph.D., is a professor of law and health sciences at the University of Surrey School of Law, an adjunct assistant professor of medicine at the David Geffen School of Medicine at University of California Los Angeles, a partner at Brown, Neri, Smith & Khan, and a mediator and arbitrator at JAMS, Inc.
Karen Silverman, J.D., is CEO and president of The Cantellus Group and a member of the World Economic Forum’s Global AI Council.
Shuo Chen, J.D., is a general partner at IOVC (a future-of-work focused VC fund), a faculty member at The University of California at Berkeley, and a California Mental Health Commissioner.
Rym Ayadi, Ph.D., is the founder and president of the Euro-Mediterranean Economists Association, a senior advisor to the Center for European Policy Studies, a co-founder of the Brain Capital Alliance, and the chair of the advisory board of the Banking Stakeholder Group of the European Banking Authority.
Steve Carnevale is a business executive with an extensive track record of activities in mental health and learning disorder innovation. He is a California Mental Health Commissioner and the founder of the UCSF Dyslexia Center.
Stephen T. C. Wong, Ph.D., P.E., is the John S. Dunn, Sr. Presidential Distinguished Endowed Chair in Biomedical Engineering, the director of T.T. & W. F. Chao Center for BRAIN, and the associate director of Houston Methodist Neal Cancer Center, Houston Methodist Hospital. He is also a professor of radiology, neurosciences, pathology and laboratory medicine at Cornell University.
Michael Berk, M.D., Ph.D., is the director of the Institute for Mental Health and Physical Health and Clinical Translation (IMPACT) at Deakin University. He maintains adjunct roles with ORYGEN Youth Health, The Florey Institute for Mental Health and Neuroscience, The University of Melbourne, and Monash University.
Upali Nanda, Ph.D., is a partner and global director of research at HKS, Inc, as well as an associate professor of practice at the University of Michigan.
Geoffrey F. L. Ling, M.D., Ph.D., is a neurologist, neuroscientist, and technologist. He is an adjunct professor with the Department of Neurology at Johns Hopkins School of Medicine and co-lead of The BrainHealth Project at the Center for BrainHealth at The University of Texas at Dallas.
Michelle Tempest, M.D., is a partner at Candesic and author of “Big Brain Revolution: Artificial Intelligence Spy or Saviour?”
Jo-An Occhipinti, Ph.D., is a professor, the co-director of the Mental Wealth Initiative, and head of Systems Modelling Simulation & Data Science at the Brain and Mind Centre, University of Sydney. She is also managing director of Computer Simulation & Advanced Research Technologies (CSART), an international not-for-profit.
Seema Shah, Ph.D., is head of Democracy Assessment at International IDEA (Institute for Democracy and Electoral Assistance).
Julia Mahfouz, Ph.D., is an associate professor at the School of Education and Human Development at the University of Colorado, Denver and the director of the Prosocial Leader Lab.
Ernestine Fu, Ph.D., is a founding faculty member of Stanford University's Frontier Technology Lab and a commissioner for California 100.
Ian MacRae is director of High Potential Psychology and award-winning author of six books on workplace psychology and technology.
Gabriel Esquivel is AI design research lead and vice president at HKS Inc.
Theo Edmonds, J.D., M.H.A., M.F.A., is director of the University of Colorado Denver’s Imaginator Academy and serves on the national board of directors at Americans for the Arts.
Felice Jacka, Ph.D., is the Alfred Deakin Professor of Nutritional Psychiatry and co-director of the Food and Mood Centre at Deakin University.
William Hynes, D.Phil., is the New Approaches to Economic Challenges coordinator within the Office of the Chief Economist of the OECD. He also holds adjunct positions with the Johns Hopkins School of Advanced International Studies, University College London, and the Santa Fe Institute.
Grace Wickerson is the health equity policy manager with the Federation of American Scientists.
Pawel Swieboda is CEO of EBRAINS, director-general of the EU Human Brain Project, and a member of the steering committee of the OECD Neuroscience-inspired Policy Initiative and the Brain Capital Alliance.
Acknowledgment
The authors would like to acknowledge Jim Hackett for his review of this manuscript and helpful comments.
This material may be quoted or reproduced without prior permission, provided appropriate credit is given to the author and Rice University’s Baker Institute for Public Policy. The views expressed herein are those of the individual author(s), and do not necessarily represent the views of Rice University’s Baker Institute for Public Policy.