Utah Policy Innovation Lab Explores AI
On June 2, 2023, the Utah Policy Innovation Lab assembled a panel of informed AI practitioners and policy-makers for a discussion on the implications of AI called “Navigating AI Policy in Utah: Opportunities and Risks.”
Moderated by Rep. Jefferson Moss, the panelists, drawn from several industries and disciplines, explored a range of issues and concerns regarding AI, and the kinds of safeguards, policies, and regulations needed to responsibly shape the future of AI in Utah—and potentially serve as a model for other states, and even at the federal level.
The panel included:
- Moderator: Rep. Jefferson Moss—Representative for 51st district (Eagle Mountain) and Majority Whip, Executive Director of the Innovation District
- Chris Bramwell—Chief Privacy Officer, State of Utah
- Sen. Kirk Cullimore Jr.—Senator for 19th district (Salt Lake City), Assistant Majority Whip, Attorney at Law Offices of Kirk A. Cullimore
- Nick Pelikan—CEO, Piste.AI
- Margaret Busse—Executive Director, Utah Department of Commerce
- Barclay Burns, PhD—CEO, GenerativeImpact.AI
- Matthew Poll—CEO, GTF
- Alan Fuller—Chief Information Officer, State of Utah
- Alex Lawrence, PhD—Associate Professor, Weber State University
Nick Pelikan, CEO of Piste.AI, kicked off the discussion with an instructive overview of AI, illustrated with slides. Pelikan says we are at the apex of yet another hype cycle around AI; in the past ten years he has witnessed at least two other similar hype cycles. He urges reason and caution, "and to not believe everything people are saying about AI, such as that we are close to a 'Skynet' scenario," referring to the fictional all-powerful military AI that decides to exterminate the human race in James Cameron’s 1984 dystopian classic, The Terminator.
Pelikan clarifies that the flavor of AI dominating news cycles is actually generative AI, or AI that actually generates content. This form of AI is not new, he says; it has been around since about 2016, citing Google Translate as an example. What changed, he argues, is the quality of generative AI the world has seen with the rapid rise of ChatGPT, developed by San Francisco-based OpenAI and released in November 2022.
Pelikan points out that in Q1 2023 alone, $1.6 billion was invested in AI-based startups—“an absolutely amazing, unforeseen, quantum leap in the funding and interest in this space,” he says.
Compared with previous AI models, the size of ChatGPT is notable, said Pelikan. “GPT-3.5 has 175 billion parameters. Going back to high school algebra, imagine an equation that has 175 billion terms. And now the next generation of ChatGPT, GPT-4, has one trillion parameters,” he says. “These models, not the data underlying them, but just the math of these models, take hundreds of gigabytes to store.”
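Pelikan's storage figure can be sanity-checked with simple arithmetic. The sketch below is not from the talk; it assumes each parameter is stored as a 16-bit (2-byte) or 32-bit (4-byte) floating-point number, which are common precisions for model weights:

```python
def model_size_gb(num_params: int, bytes_per_param: int) -> float:
    """Raw storage for the model weights alone, in gigabytes (10^9 bytes)."""
    return num_params * bytes_per_param / 1e9

# 175 billion parameters, the GPT-3.5 figure cited on the panel
gpt35_params = 175_000_000_000

print(model_size_gb(gpt35_params, 2))  # 16-bit weights: 350.0 GB
print(model_size_gb(gpt35_params, 4))  # 32-bit weights: 700.0 GB
```

Either precision lands in the "hundreds of gigabytes" range Pelikan describes, and a trillion-parameter model at 16-bit precision would need roughly 2 terabytes for its weights alone.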
Beyond size, Pelikan also explained that the refinement of these latest models, using human feedback, has pushed Generative AI models to new levels—"to the point that they seem human."
“Imagine a chatbot personalizing the customer experience to a point that it is impossible for a customer to tell whether it's a human talking to them or a robot. For businesses this could increase customer engagement by orders of magnitude.”
However, just as humans can and do make mistakes, so do current AI models, the panel pointed out, citing examples of grossly inaccurate data and embarrassing mistakes generated by AI, including one whopper: a query that returned a picture of Bill Clinton as Utah’s legislative minority leader, with a fake name under his picture.
Dr. Alex Lawrence, Associate Professor at Weber State University, believes that in time, with billions (if not trillions) of dollars of investment pouring into AI-oriented companies, the mistakes generated by AI will go away, or at least be reduced to a negligible, manageable level. That said, he also sees AI as potentially the greatest cheating tool that has ever been invented in education, “and not just at the collegiate level, but cheating in life,” he adds. He also thinks it has the potential “to make good people bad more easily.”
“If you wanted to do bad things on the internet, a lot of times you needed to have technical skill to do it, and that was enough of a barrier to stop people,” says Lawrence. “But now the barrier to doing things you shouldn’t, or trying to, is so low. It’s going to entice people to do things they shouldn’t do.”
Still, Dr. Lawrence sees AI as having the potential to enhance learning equity, or the ability to give people opportunities to learn faster, better, or in unique ways that they might not have had access to previously. He uses the example of giving students the option to pick one of four different learning styles for a given assignment and using AI to tutor the student in the learning style that best suits that particular student.
Margaret Busse, Executive Director of the Utah Department of Commerce, envisions the potential for AI to democratize information, ushering in an era in which “we see a massive decrease in the asymmetry of information.” Citing the example of the Internet giving rise to medical websites such as WebMD in the 1990s, which provided medical data to consumers in a way that was impossible before, Busse imagines an AI-infused world in which “expertise becomes much less valuable…potentially.”
Utah's Chief Privacy Officer, Chris Bramwell, envisions AI potentially accomplishing the work of many employees within government agencies, thereby reducing staff sizes and making government smaller, a value many Utahns agree with. He cites the endless lists of forms and documents filled out, managed, and stored by thousands of government employees. Much of this could be done with AI, supervised by a handful of highly specialized people, says Bramwell.
Busse agreed that much of the registration and form processing and management could be accomplished using AI.
Similarly, Senator Kirk Cullimore, a practicing attorney, added that the legal profession deals with a multitude of forms and other documents that could be streamlined using AI. He also believes it can have policy implications because of the use of synthetic data.
Dr. Barclay Burns, CEO of GenerativeImpact.AI, elaborated on the value and high-value use cases of synthetic data. He provided examples in healthcare and education in which synthetic data and generative algorithms are having a positive impact on patient outcomes with cancer patients and people struggling with self-harm and suicidal ideation. He cited specific examples of AI enabling educators to anticipate a child’s learning needs.
Matthew Poll, CEO of GTF, agreed that AI can help solve fundamental problems of the human condition such as cancer and other diseases. He also predicted that it could help boost the strength of the US currency by adding significant amounts of GDP to the US economy through AI-driven innovation and increased worker productivity.
Regarding AI in the education context, Dr. Burns expressed concern that students are vulnerable to educational deficits and could be negatively impacted by widespread use of AI in place of going through the effort of the learning process themselves. He is concerned that students will skip all of the steps that go into understanding an issue deeply and answering questions correctly. Using AI, students “won’t have to create and generate meaning, understanding, and an ability to make sense of things. Across the board, my concern is that machines are going to become a lot smarter and people are going to become a lot stupider.”
Inaccurate data generated by AI was an issue repeatedly addressed by the panel. The panel agreed that government documents and statements cannot be generated solely by AI without human supervision. “Transparency and accountability become major issues in this space,” said Bramwell. “One of the things we need to do from a use-case perspective is when using AI, a real human needs to look at the document or statement and be accountable for it before it goes out.”
Matthew Poll agreed with Burns, but indicated that society will need to arrive at a new balance, a new acceptance of the data we will need to memorize and which data we can allow machines to know on our behalf. “I don’t even know my mother’s phone number anymore.”
The panel addressed AI's relationship with privacy, including laws requiring AI producers to notify consumers that their individual data has been indexed by AI, other rules governing businesses that employ AI models, and how federal and state governments should regulate AI with regard to privacy concerns.
Rep. Moss echoed the privacy concerns raised by many panelists. "Some of the concerns I'm hearing from my constituents are about privacy: 'Is my data safe? Is my identity safe? What about deep fakes?' We are hearing all these things from the public," said Moss.
Chris Bramwell said AI and privacy is a complex question. He urges executive teams in companies to learn about the implications of AI for privacy, pointing out that people and organizations need to know that they hold lots of personal information that can be processed by AI models. "You may prompt it and enter data into it that can be used to make decisions and determinations about individuals," he said. "We don't want to stop the industry from growing, but we do need to figure out what the guardrails are and the rules that companies have to play by, so that we can then have AI companies in Utah protect personal information."
Senator Cullimore argued that even though privacy laws have historically been the domain of federal agencies, states can lead on this issue, and Utah especially. “I think there’s a role for states to lead out and deal with it because we can set a good model. States like Utah are primed to deal with it because we have consumer privacy laws, we have technical expertise and entrepreneurship here in Utah to deal with these things. We can set a model that can become ubiquitous.”
To view the full panel discussion, see the video below:
Created through legislation passed in the 2023 state legislative session, the Utah Innovation Lab was announced to the public on April 11, 2023, at a public signing ceremony at the Thomas S. Monson Center, where Governor Spencer Cox signed seven technology- and infrastructure-related bills into law.
The lab serves as an incubator for ideas, a public policy staging area, a catalyst and convener for technology commercialization, and a place of continuity for startups, among other functions supporting new ideas and innovation in service of the core mission of The Point. Currently, the lab is housed in the Thomas S. Monson Center in downtown Salt Lake City; eventually, it will establish a presence at The Point.
Panel sponsors include Utah System of Higher Education, Silicon Slopes, Utah State Legislature, Wilson Sonsini, Governor's Office of Economic Opportunity, Kinect Capital, World Trade Center Utah, and Wasatch Innovation Network.