AI and Taxes — A Work in Progress: Part 2
Joyce Beebe, "AI and Taxes — A Work in Progress: Part 2" (Houston: Rice University’s Baker Institute for Public Policy, August 24, 2023), https://doi.org/10.25613/VZGT-7H93.
This is the second of two issue briefs by Joyce Beebe on the use of AI in the tax field. Part 1 can be found here.
Recent releases of generative artificial intelligence (AI) tools — such as OpenAI’s ChatGPT, Google’s Bard, and Stability AI’s Stable Diffusion — signal stiff competition to attract users among the companies developing AI tools.[1] In the tax field, large professional service firms are turning to the same tools to automate daily operations, provide more efficient research results, and optimize tax outcomes for clients. Meanwhile, tax authorities around the world are also deploying comparable applications to narrow the tax gap.
Amid the race to develop AI-based tools, where different parties often have conflicting objectives, lawmakers are seeking to regulate the fast-growing industry while still encouraging innovation to ensure U.S. competitiveness. This issue brief reviews recent congressional efforts to regulate AI as well as research into ways to deploy AI in the tax policy area.
AI in Public Policy
In May 2023, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing regarding AI. Subcommittee Chair Richard Blumenthal delivered his opening remarks using an AI voice clone trained on his prior speeches, reading a script written by ChatGPT. The speech accurately pointed out concerns about “when technology outpaces regulation,”[2] leading both Democratic and Republican lawmakers to agree that regulations were necessary.
Many Concerns Raised About AI
The hearing identified some of the most alarming problems associated with the rapid deployment of AI, which include disinformation, bias-based decision-making, workforce impact, and privacy invasion.[3] Because of AI’s public nature and the ease of access to AI tools, people without specialist training can easily use its sophisticated algorithms to create and disseminate fake videos, fake news reports, and the like — all of which lack factual accuracy but look very realistic. (See this video for examples.)[4] In addition, there is limited transparency on how AI operates, the training data on which it is based, and how the algorithms make decisions. Another concern is capability overhang — the possibility that models may have dormant capabilities their developers are not initially aware of but that emerge as the models become more sophisticated.[5]
Proposals for Regulation of AI
During the hearing, a number of ideas were proposed for how to regulate AI:
- Impose an external monitoring mechanism, requiring an independent third party to review future releases of AI models before they are made available to the public.
- Implement risk-based restrictions. Under such a mechanism, different rules would be designed for different levels of risk. For example, high-risk AI applications, such as those involving safety and biomedical risks, would be subject to tighter oversight than low-risk applications.
- Add licensing requirements.
- Establish a new regulatory agency to monitor and regulate AI activities. Advocates of a central regulatory agency believe that it is preferable to sector-specific solutions, which may lead to different, and possibly inconsistent, standards.
Progress Toward Regulation
At least four AI-related hearings took place in the Senate within a three-month period,[6] and some federal and state lawmakers began using ChatGPT to write proposals.[7] While there has been a lot of interest in AI, no regulations are close to being finalized.
In response to a White House request, top tech companies committed to developing responsible AI to ensure safe, secure, and transparent public releases of future products.[8] As part of this voluntary commitment, they jointly created the Frontier Model Forum in late July 2023.[9] This industry-led effort will effectively set the safety standards for AI systems until formal congressional regulations are enacted.
The European AI Act
While U.S. lawmakers have only recently begun to consider the profound impacts of AI and the need for proper regulation, the European Commission (EC) started on this journey a few years ago. In April 2021, the EC proposed the Artificial Intelligence Act, which includes rules and actions to “turn Europe into the global hub for trustworthy AI.”[10] In June 2023, the European Parliament adopted the AI Act. Although the proposal still needs to clear several hurdles before becoming law, it is the most comprehensive attempt to regulate AI to date.[11] Specifically, the act proposes a risk-based regulatory framework for AI and requires companies to notify humans that they are interacting with AI, regardless of the risk level. The AI Act would also create a public European Union register of these technologies to improve transparency and help with enforcement.
The AI Act assigns AI applications to three risk categories, with treatment varying according to category:
- Unacceptable risk — prohibited.
- High risk — regulated.
- Minimal or no risk — applications not explicitly listed are largely unregulated.[12]
Unacceptable Risk. Systems with risks that are deemed unacceptable, which therefore will be banned, include government-run social scoring (i.e., classifying people based on socio-economic status or personal characteristics) and real-time biometric identification systems like facial recognition.
High Risk. High-risk AI systems will be subject to strict premarket conformity assessment and ongoing evaluation. These systems must be secure, transparent, explainable, fair, and accountable. In addition, they need to comply with the EU’s existing General Data Protection Regulation (GDPR) for data privacy and protection purposes. Examples of high-risk AI systems include:
- The technology used in education — scoring of exams.
- Safety components of products — AI applications in robot-assisted surgery.
- Employment — resume sorting software for recruitment.
- Essential public and private services — credit scoring that evaluates applicants’ opportunities to obtain loans.
- Law enforcement — assessing the reliability of evidence.
While the EC’s intent is for the AI Act to promote trust among stakeholders, some are concerned the proposed legislation will increase AI deployment and development costs. Companies will need to go through an application and certification process to demonstrate compliance with the framework, and smaller companies may find the compliance costs prohibitively high.[13]
Work in Progress — AI Policy Tools
Beyond the congressional hearings, rulemaking, and company self-regulation described above, academic and industry researchers have begun developing AI-based tools to help lawmakers design tax policies and assist tax authorities in identifying noncompliant taxpayers. These innovative developments are in the early stages and have some major issues to address. This section reviews two models with slightly different objectives: 1) AI Economist and 2) Shelter Check.
AI Economist
One AI-based model that claims to be able to design optimal income tax policy is the AI Economist. The model uses a two-level “reinforcement learning” process in which a social planner (the government or a policymaker) and economic agents (taxpayers and workers) co-adapt, meaning that each side changes its behavior in response to changes in the other.[14] The system ultimately aims to identify optimal tax policies for a simulated economy.
In the model, several AI-directed workers interact. They have different skill levels and earn money by carrying out different economic activities. At the end of each simulated year, each worker pays taxes at rates devised by an AI-controlled policymaker. In each period, some workers learn to avoid a certain amount of tax by reducing their productivity enough to qualify for a lower tax bracket. If the policymaker subsequently reduces the tax rate to incentivize work, those workers increase their productivity again.
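To make the co-adaptation loop concrete, below is a deliberately minimal sketch of the two-level structure: an inner step in which each simulated worker chooses hours of effort to maximize an assumed utility (post-tax income minus a quadratic effort cost), and an outer step in which the planner adjusts marginal bracket rates toward a simple welfare measure. All of the numbers, functional forms, and the hill-climbing update are illustrative assumptions; the published AI Economist uses deep multi-agent reinforcement learning, not this greedy search.

```python
import numpy as np

rng = np.random.default_rng(0)

BRACKETS = np.array([0.0, 10.0, 20.0])   # bracket lower bounds (toy values)
skills = np.array([0.5, 1.0, 1.5, 2.0])  # heterogeneous worker skill levels

def tax_owed(income, rates):
    """Marginal-bracket tax: each slice of income is taxed at its own rate."""
    uppers = np.append(BRACKETS[1:], np.inf)
    return sum(r * max(0.0, min(income, hi) - lo)
               for lo, hi, r in zip(BRACKETS, uppers, rates))

def best_responses(rates):
    """Inner level: each worker grid-searches over hours of effort to
    maximize an assumed utility (post-tax income minus quadratic effort cost)."""
    grid = np.linspace(0.0, 10.0, 101)
    chosen = []
    for s in skills:
        utils = [s * h - tax_owed(s * h, rates) - 0.5 * h**2 for h in grid]
        chosen.append(grid[int(np.argmax(utils))])
    return np.array(chosen)

def welfare(rates):
    """Outer level's objective: total output minus income spread, a crude
    stand-in for the paper's balance between productivity and equality."""
    incomes = skills * best_responses(rates)
    return incomes.sum() - incomes.std()

rates = np.array([0.10, 0.20, 0.30])  # planner's initial marginal rates
for _ in range(200):                  # planner adapts; workers re-adapt inside
    trial = np.clip(rates + rng.normal(0.0, 0.02, size=3), 0.0, 0.9)
    if welfare(trial) > welfare(rates):
        rates = trial

print("learned marginal rates per bracket:", np.round(rates, 2))
```

Even this toy version reproduces the co-adaptation dynamic described above: whenever the planner perturbs the rates, the workers’ effort choices shift in response, and the planner keeps only those changes that improve its welfare measure given the new behaviors.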
The developers recognize the need to increase the number of workers and add companies in the simulation to model more realistic scenarios.[15] In addition, they acknowledge that if certain communities and segments of the workforce are under-represented in the training data, the AI model will not be representative of the economy — and these under-represented groups may behave or react differently from others when responding to tax rate changes.[16]
Critics have additional concerns:
- First, critics point out that this “data-driven approach” is not rooted in any longstanding economic theory or based on any proven economic model. Whereas established theories and models were first developed analytically and then tested empirically, the agents in the AI model have no prior knowledge of economic theory; instead, they learn how to interact by observing data.[17] Although its developers claim the AI model has the benefit of not being limited to traditional approaches — and is therefore not restricted by analytical tractability[18] — it is not immediately clear that the agents’ trial-and-error approach is superior.
- There are also debates as to whether the traditional analytical model or the AI model better captures the complexity of the real economy.
- Finally, observers claim that, at least in some settings, the tax policy devised by the AI approach is unconventional. The policy resulting from some simulations sets higher tax rates for upper- and lower-income groups and lower rates for middle-income workers. Although the outcome leads to a more even income distribution across workers (the authors refer to this result as “higher social welfare,” defined as a balance between equality and productivity), this is not the type of policy a human policymaker would have designed. The researchers themselves recognize that such a blended schedule of “progressive and regressive tax rates” is less intuitive and harder to explain.
Shelter Check
Another AI-based policy tool, Shelter Check, is more compliance oriented. The developers state their goal is to provide policymakers (including Congress, the IRS, and the courts) with feedback on potentially negative and unintended consequences of proposed changes to tax law.[19]
Essentially, after a proposed or draft piece of legislation is fed into the model, the algorithm returns the ways that the legislation could be used and the associated outcomes. It can also identify unexpected negative consequences or loopholes, such as the creation of a tax shelter when the provision is combined with other parts of the tax law.[20] At this stage, the quality of the AI-generated output is not as good as work prepared by tax or legal professionals. Nevertheless, the developers suggest that because the tool can process information at a much faster speed than humans, it can at least function as a “sanity check” when contemplating tax law changes.
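The underlying idea, searching for interactions between provisions that produce outsized tax savings, can be illustrated with a toy enumeration, sketched below. Every provision name, dollar amount, and the flagging threshold is invented for illustration; the actual Shelter Check research applies AI to statutory text rather than to a hand-coded list of deductions.

```python
from itertools import combinations

BASE_RATE = 0.30   # hypothetical flat statutory rate
INCOME = 100_000   # model taxpayer's income

# Hypothetical provisions and the deduction each generates when invoked.
provisions = {
    "like_kind_exchange": 15_000,
    "loss_carryforward": 20_000,
    "entity_reclassification": 25_000,
    "new_draft_credit": 30_000,  # the proposed change being "sanity checked"
}

def liability(selected):
    """Tax owed after stacking the deductions of the selected provisions."""
    taxable = max(0, INCOME - sum(provisions[p] for p in selected))
    return BASE_RATE * taxable

# Flag combinations that cut liability by more than half: a crude proxy for
# "this interaction looks like a shelter; have a human examine it."
baseline = liability([])
for k in range(2, len(provisions) + 1):
    for combo in combinations(provisions, k):
        owed = liability(combo)
        if owed < 0.5 * baseline:
            print(f"potential shelter: {combo} -> owed ${owed:,.0f}")
```

A real system must, of course, contend with an enormous statutory space rather than four deductions, which is precisely where the developers argue that machine-speed processing becomes valuable.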
Where We Are
Overall, because AI tax policy tools evaluate policies in an entirely different way, they offer interesting new thought experiments and suggest new approaches to policy changes. In the future, they could serve as valuable reference or research tools for policymakers. Currently, however, they are still works in progress as researchers continue to improve the models.
Handle with Care: Human Guidance Needed
AI has experienced unprecedented growth in recent years, and the global trend is expected to continue. Today its reach extends to tax research, tax administration and compliance, and tax policymaking. For tax professionals, the complexity, gray areas, and unique features of tax rules mean that AI still has a steep learning curve ahead before it can catch up with humans. However, human roles in daily tax operations will certainly change, if they have not already. AI is an ideal supplement to human workers in several areas:
- Monitoring the development of new rules and regulations.
- Reviewing long documents.
- Keeping track of routine but essential tasks.
As business operations become more complicated and the amount of taxpayer data increases exponentially over time, tax authorities need advanced analytical tools not only to review tax returns, but also to identify hidden relationships and detect anomalies. As tax authorities become more comfortable with these tools, more AI-based applications will be deployed. However, government agencies must always keep in mind the need to observe privacy laws, ensure transparency, and avoid biased use.
Finally, researchers are developing AI-based tools to assist policymakers in the tax area. These approaches provide “outside-of-the-box” thinking when it comes to policy design. However, humans still need to verify whether these tools are superior to the existing economic models before they can be promoted as mainstream policy tools.
Endnotes
[1] In non-technical terms, generative AI is a type of AI technology that can receive inputs and, in response to various prompts, produce several kinds of outputs, including text, images, video, and synthetic data.
[2] U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law, “Oversight of A.I.: Rules for Artificial Intelligence,” May 16, 2023 (opening speech at 18:30), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.
[3] Darrell M. West, “Senate Hearing Highlights AI Harms and Need for Tougher Regulation,” Brookings, May 17, 2023, https://www.brookings.edu/articles/senate-hearing-highlights-ai-harms-and-need-for-tougher-regulation/.
[4] CBS News, “Lindsay Gorman Says Context is ‘Really Important’ in Differentiating AI-Generated Images,” July 2, 2023, https://www.cbsnews.com/video/lindsay-gorman-says-context-is-really-important-in-differentiating-ai-generated-images/.
[5] Laurie A. Harris, “Generative Artificial Intelligence: Overview, Issues, and Questions for Congress,” IF 12426, Congressional Research Service, June 9, 2023, https://crsreports.congress.gov/product/pdf/IF/IF12426.
[6] Most recently, a July hearing about regulations attracted numerous young attendees: Cristiano Lima, “The Senate’s Hottest Hearing: AI Policy,” Washington Post, July 26, 2023, https://www.washingtonpost.com/politics/2023/07/26/senate-hottest-hearing-ai-policy/.
[7] For example, see 1) Shira Stein, “ChatGPT Wrote California Rep. Ro Khanna’s New AI Bill,” San Francisco Chronicle, July 20, 2023, https://www.sfchronicle.com/politics/article/chatgpt-ai-federal-bill-18200932.php (subscription required), and 2) Commonwealth of Massachusetts, “An Act Drafted With the Help of ChatGPT to Regulate Generative Artificial Intelligence Models Like ChatGPT,” S. 31, February 16, 2023, https://malegislature.gov/Bills/193/S31.
[8] The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
[9] Rebecca Klar, “Top Tech Companies Create Joint AI Safety Forum,” July 26, 2023, The Hill, https://thehill.com/policy/technology/4120594-top-tech-companies-create-joint-ai-safety-forum/.
[10] European Commission, “Europe Fit for the Digital Age: Commission Proposes New Rules and Actions for Excellence and Trust in Artificial Intelligence,” press release, April 21, 2021, https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682.
[11] Ryan Browne, “EU Lawmakers Pass Landmark Artificial Intelligence Regulation,” CNBC, June 14, 2023, https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html.
[12] European Commission, “Artificial Intelligence Act Proposal,” April 21, 2021, https://artificialintelligenceact.eu/the-act/.
[13] Kamales Lardi, “New Rules for AI in the EU Attempt to Control the Technology,” Bloomberg Tax, June 13, 2023, https://news.bloombergtax.com/daily-tax-report-international/new-rules-for-ai-in-the-eu-attempt-to-control-the-technology (subscription required).
[14] Stephan Zheng, Alexander Trott, Sunil Srinivasa, David C. Parkes, and Richard Socher, “The AI Economist: Taxation Policy Design via Two-Level Deep Multiagent Reinforcement Learning,” Science Advances, May 4, 2022, https://www.science.org/doi/10.1126/sciadv.abk2607.
[15] The AI Economist’s developers include academic researchers and Salesforce, a business technology company.
[16] William Resvoll Skaug, “Optimizing Tax Policies with Artificial Intelligence,” Business Review at Berkeley, December 27, 2021, https://businessreview.berkeley.edu/optimizing-tax-policies-with-artificial-intelligence/.
[17] Will Douglas Heaven, “An AI Can Simulate an Economy Millions of Times to Create Fairer Tax Policy,” MIT Technology Review, May 5, 2020, https://www.technologyreview.com/2020/05/05/1001142/ai-reinforcement-learning-simulate-economy-fairer-tax-policy-income-inequality-recession-pandemic/.
[18] An analytically tractable economic model generally means a model that one can solve, i.e., one whose solutions can be derived directly from the theoretical model.
[19] Stephanie Kanowitz, “AI Project Targets Hidden Tax Shelters,” Route Fifty, May 1, 2023, https://www.route-fifty.com/digital-government/2023/05/ai-project-targets-hidden-tax-shelters/385838/.
[20] See 1) Jill Rosen, “Tax Loopholes Abound, but AI Could Shut Them Down,” Johns Hopkins University HUB, April 6, 2023, https://hub.jhu.edu/2023/04/06/ai-tax-loopholes/, and 2) Robert A. Weinberger, “Will Artificial Intelligence Be Able to Prepare Our Tax Returns?” Tax Policy Center, May 18, 2023, https://www.taxpolicycenter.org/taxvox/will-artificial-intelligence-be-able-prepare-our-tax-returns.
This material may be quoted or reproduced without prior permission, provided appropriate credit is given to the author and Rice University’s Baker Institute for Public Policy. The views expressed herein are those of the individual author(s), and do not necessarily represent the views of Rice University’s Baker Institute for Public Policy.