Howso CEO Mike Capps participates in Senate Majority Leader Chuck Schumer’s most recent AI Insight Forum.
Launched in 2023, the AI Insight Forum brings AI experts and legislators together to educate lawmakers on pressing AI issues. Experts who have previously addressed the forum include technology leaders such as Elon Musk, Bill Gates, Satya Nadella, Sam Altman, and Mark Zuckerberg.
Today’s session focuses on “Transparency, Explainability, Intellectual Property, and Copyright.” Here, leaders from Stability AI, Credo AI, Spotify, Sony Music, The Allen Institute for AI, Brookings, and more will convene to discuss ways to make AI more transparent, trustworthy, and traceable.
Co-founder and CEO Mike Capps will share how Howso’s AI engine, built on instance-based learning (IBL), can provide a fully explainable alternative to black-box systems that often misinform, spread bias, and hallucinate.
Senator Schumer has been a vocal advocate for explainability in AI, stating in June 2023:
“If the user of an AI system cannot determine the source of the sentence or paragraph or idea—and can’t get some explanation of why it was chosen over other possibilities—then we may not be able to accomplish our other goals of accountability, security, or protecting our foundations.
“Explainability is thus perhaps the greatest challenge we face on AI. Even the experts don’t always know why these algorithms produce the answers they do. It’s a black box…But we do need to require companies to develop a system where, in simple and understandable terms, users understand why the system produced a particular answer and where that answer came from.”
At Howso, we applaud the Senate’s attention to the critical need for transparency in AI and are gratified to affirm that it is possible to create AI systems that are fully trustworthy.
Howso’s explainable open-source AI engine already allows developers and data scientists to build fully transparent AI models that users can trust, audit, and explain—without sacrificing performance. With IBL-based AI, users can audit the outcomes generated, interrogate those outcomes to understand why and how the AI made decisions, and then intervene to correct mistakes and bias.
Attribution demonstrates both why a decision was made and why a contrary decision was not made. This is especially important when checking whether a prohibited feature of an individual’s profile (e.g., race, gender, sexual orientation, or political affiliation) was used to make a decision.
One meeting won’t decide whether AI continues down a black-box path or comes out into the open to become fully transparent and trustworthy. But we are thrilled that the U.S. government is focusing on the urgent need for explainable AI. Those tasked with regulating and overseeing AI must demand full transparency whenever algorithms make critical, life-affecting decisions.
Co-founder and CEO Mike Capps’ full statement delivered to the U.S. Senate can be found below.