Heart of the Buddha, hand of the demon.
– David Lee Roth
Artificial Intelligence has come a long way in the past decade. When I first started in comp sci, my AI professor opened every semester with the same story:
Imagine an AI-powered bomb-squad robot. This is the latest and greatest equipment. The robot goes into a shed to defuse a suspicious package that we know will blow up in T-MINUS 10 MINUTES. The task is simple: drive through the open door, find the package, and encapsulate it so the explosion is contained. The robot rolls forward through the doorway and stops to re-plan its route. By the time the robot figures out that the walls in the shed are blue, the bomb blows up.
A decade ago, path planning was slow, object recognition was largely manual, and AI as a whole was much, much further behind than our sci-fi dreams would hope.
Then came cloud computing and deep learning. By throwing pattern-matching algorithms at massive web-scale datasets, we could begin to tackle previously intractable problems. Want to identify cats? Sure, just show a computer a billion cat photos and boom, cats found. Want to understand the English language? No problem, just feed in billions of hours of voice data and computers will figure out when you want to jam out to “Britney Spears”.
And then, the room got dimmer. AI was scary and opaque, with many famous folks warning of a coming AI apocalypse. The end was nigh, and Elon Musk and Sam Altman wanted us to know that everything was not okay (for a fee). I believe the truth is this: much of what humans do boils down to simple pattern recognition, and the early applications of AI are low-hanging fruit that will be picked soon enough. Driving is about recognizing the patterns of lane lines, street signs, and so on. Chess is about making strategic moves within the rules of the game. Pattern recognition is fundamental to how humans operate in the natural world, and with enough data and advanced statistical algorithms, computers can start to mimic humans in an interesting way.
When I sent a colleague an article about AI algorithms being able to bluff in poker, he started to panic. Oh no! They’re acting human. Once I gave him my opinion that most poker players actually bluff predictably and this is just another form of pattern recognition, his mind was put at ease and turned elsewhere: if AI is just advanced pattern recognition, and a lot of human interaction with the natural world revolves around pattern recognition, how do we govern the interaction between AI and humans in the natural world? Should an AI behave differently if it’s screening a mortgage application versus recommending shows on Netflix? If you, like me, think that a lot of the AI fear is overblown and we are just seeing more routine human behavior being classically disrupted by cheap computing power, then you should want to develop rules around how this technology can be harnessed without causing societal damage.
My response to my colleague was that, as a techie, my inclination is that the fewer rules the better for advancing a disruptive innovation at this early stage. But given that AI is beginning to have real impact in tangible sectors like healthcare, real estate, and energy, we should define a governance process based on the following principles:
- Reuse of Existing Legislative Precedent. Existing societal problems have existing (albeit imperfect) solutions. For example, the Fair Housing Act outlaws the racist redlining practices that were so prevalent in the pre-Civil Rights era. Rather than define our values and their implementation at the same time, we should treat existing regulations as a basic set of values and define 21st-century, AI-specific implementations. This would probably lead to regulations such as “input data must be representative of the broader population, or be proven to have no adverse effects on specific protected groups as laid out by the FHA.”
- Tax-Free Innovation. I am a major believer in carve-outs for small companies, or safe harbor for firms that abide by certain rules. Carve-outs have proven problematic when scammers set up shop as small “whack-a-mole” entities or big companies exploit loopholes (as YouTube did to avoid regulation of explicit content on its platform), but the alternative is worse: blanket compliance frameworks weaken investor appetite for startups, thereby curbing innovation.
- Whitelisting of Specific Industries. The web permeates every avenue of our lives. There is no longer such a thing as a “traditional” firm; all firms invest heavily in technology. To avoid broadly dampening harmless activities (such as Netflix’s machine-learning-based recommendation algorithm), the governance should apply to an explicitly chosen set of industries rather than being the default for any algorithm running on “big data”.
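To make the first principle concrete, here is a minimal sketch of what a “representative input data” check might look like in practice. Everything in it is illustrative: the group names, the population shares, and the 5% tolerance are hypothetical placeholders, not thresholds drawn from the FHA or any actual regulation.

```python
# Hypothetical check: does a training set's breakdown by a protected
# attribute roughly match the broader population's? Group names,
# population shares, and the tolerance are all illustrative.

POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_gaps(sample_counts, population=POPULATION_SHARES,
                        tolerance=0.05):
    """Return groups whose share of the training sample deviates from
    their population share by more than `tolerance` (share points)."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, pop_share in population.items():
        sample_share = sample_counts.get(group, 0) / total
        gap = sample_share - pop_share
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps

# A mortgage-screening training set that over-samples group_a:
flags = representation_gaps({"group_a": 800, "group_b": 150, "group_c": 50})
# Every group is flagged: group_a is over-represented by 20 points,
# groups b and c are under-represented by 10 points each.
```

A real compliance check would be far more involved (intersectional groups, statistical significance, proxy variables), but the point is that “representative input data” is an auditable property, not just a slogan.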
Tech firms are coming around to the idea of regulation and there seems to be plenty of precedent for it. Microsoft recently called for advanced regulation on facial recognition, though the skeptic in me wonders if this is to cement their position as a cloud market leader via a regulatory moat. Regardless, tighter governance is a better approach to battling AI’s societal disruptions than trying to put the genie back in the bottle.
In fact, the biggest barrier to wider adoption of fully autonomous vehicles is getting humans (who can be inherently unpredictable) off of the damn road.