
Governing AI and Us

Heart of the Buddha, hand of the demon.

– David Lee Roth

Artificial Intelligence has come a long way in the past decade.  When I first started in comp sci, my AI professor opened every semester with the same story:

Imagine an AI-powered bomb-squad robot.  This is the latest and greatest equipment.  The robot goes into a shed to defuse a suspicious package that we know will blow up in T-MINUS 10 MINUTES.  The task is simple: drive through the open door, find the package, and encapsulate it so the explosion is contained.  The robot rolls forward through the doorway and stops to re-plan its approach.  By the time the robot finishes figuring out that the walls in the shed are blue, the bomb blows up.

A decade ago, path planning was slow, object recognition was largely manual, and AI as a whole lagged much, much further behind than our sci-fi dreams had hoped.

Then came cloud computing and deep learning.  By throwing pattern-matching algorithms at massive web-scale datasets, we could begin to tackle previously intractable problems.  Want to identify cats? Sure, just show a computer a billion cat photos and boom, cats found.  Want to understand the English language? No problem, just feed in billions of hours of voice data and computers will figure out when you want to jam out to “Britney Spears”.
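To make “just show a computer a billion cat photos” concrete, here is a minimal sketch of that pattern-recognition recipe in action: a network pre-trained on web-scale image data classifying a single photo.  The torchvision model and its ImageNet weights are real; cat.jpg is a hypothetical input file.

    # Minimal sketch: classify one image with a network pre-trained on ImageNet.
    # cat.jpg is a hypothetical stand-in for whatever photo you want labeled.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet50(pretrained=True)  # weights learned from ~1.3M labeled photos
    model.eval()

    # Standard ImageNet preprocessing: resize, crop, and normalize the pixels.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # batch of one
    with torch.no_grad():
        probs = torch.nn.functional.softmax(model(image)[0], dim=0)
    print(probs.argmax().item())  # predicted ImageNet class (the cat breeds sit around indices 281-285)

No feature engineering, no hand-written rules: the “understanding” is whatever statistical regularities the network soaked up from its training data.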

[GIF: AI Can Bluff, But Smiling is Still Hard]

And then the room got dimmer.  AI was scary and opaque, with many famous folks warning of a coming AI apocalypse.  The end was nigh, and Elon Musk and Sam Altman wanted us to know that everything was not okay (for a fee).  I believe the truth is this: much of what humans do boils down to simple pattern recognition, and the early applications of AI are low-hanging fruit that will be picked soon enough.  Driving is about recognizing the patterns of lane lines, street signs, and so on.[1]  Chess is about making strategic moves within the rules of the game.  Pattern recognition is fundamental to how humans operate in the natural world, and with enough data and advanced statistical algorithms, computers can start to mimic humans in interesting ways.

When I sent a colleague an article about AI algorithms being able to bluff in poker, he started to panic.  Oh no! They’re acting human.  Once I gave him my opinion that most poker players actually bluff predictably, and that this is just another form of pattern recognition, his mind was put at ease and turned elsewhere: if AI is just advanced pattern recognition, and so much of humans’ interaction with the natural world revolves around pattern recognition, how do we govern the interaction between AI and humans in the natural world? Should an AI behave differently if it’s screening a mortgage application versus recommending shows on Netflix?  If you, like me, think that a lot of the AI fear is overblown and we are just seeing more routine human behavior being classically disrupted by cheap computing power, then you should want to develop rules around how this technology can be harnessed without causing societal damage.

My response to my colleague was that, as a techie, my inclination is that the fewer rules the better for advancing a disruptive innovation at this early stage.  But given that AI is beginning to have real impact in tangible sectors like healthcare, real estate, and energy, we should define a governance process based on the following principles:

  1. Reuse of Existing Legislative Precedent.  Existing societal problems have existing (albeit imperfect) solutions.  For example, the Fair Housing Act sets out boundaries outlawing the racist redlining practices that were so prevalent before the Civil Rights era.  Rather than define our values and implementation at the same time, we should look to existing regulations as a basic set of values, and define 21st-century AI-specific implementations.  This would probably lead to regulations such as “input data must be representative of the broader population or be proven to not have any adverse effects on specific protected groups as laid out by the FHA” (a sketch of what such a check might look like follows this list).
  2. Tax-Free Innovation.  I am a major believer in carve-outs for small companies and safe harbors for firms that abide by certain rules.  Although this has proven problematic when scammers set up shop as small “whack-a-mole” entities or big companies exploit loopholes (like YouTube avoiding regulation of explicit content on its platform), putting compliance frameworks in place often weakens investor appetite for startups, thereby curbing innovation.
  3. Be Whitelisted to Specific Industries.  The web is permeating every avenue of our lives.  There is no longer such a thing as a “traditional” firm; all firms invest heavily in technology.  In order to avoid broadly dampening harmless activities (such as Netflix’s machine-learning-based recommendation algorithm), the governance should apply to an explicitly chosen set of industries rather than being the default for any algorithm running on “big data”.
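To make the first principle concrete, here is a minimal sketch of what an automated “representative input data” check might look like.  Everything in it is hypothetical: the population shares, the tolerance, the column names, and the rule itself are illustrations I made up, not language from the FHA or any actual regulation.

    # Hypothetical sketch: flag protected groups whose share of the training
    # data deviates from their (assumed) population share. None of these
    # numbers or names come from a real regulation.
    from typing import List

    import pandas as pd

    POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.13, "group_c": 0.27}  # assumed census-style figures
    MAX_DEVIATION = 0.05  # hypothetical tolerance

    def check_representativeness(df: pd.DataFrame, column: str) -> List[str]:
        """Return the groups over- or under-represented beyond the tolerance."""
        observed = df[column].value_counts(normalize=True)
        return [
            group
            for group, expected in POPULATION_SHARES.items()
            if abs(observed.get(group, 0.0) - expected) > MAX_DEVIATION
        ]

    applications = pd.read_csv("mortgage_training_data.csv")  # hypothetical dataset
    flagged = check_representativeness(applications, "protected_group")
    if flagged:
        print(f"Dataset fails the representativeness rule for: {flagged}")

The point is less the specific numbers than the shape of the rule: it reuses a value we already legislated (non-discrimination) and gives it a 21st-century, data-level implementation.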

Tech firms are coming around to the idea of regulation, and there seems to be plenty of precedent for it.  Microsoft recently called for regulation of facial recognition, though the skeptic in me wonders whether this is a bid to cement its position as a cloud market leader behind a regulatory moat.  Regardless, tighter governance is a better approach to battling AI’s societal disruptions than trying to put the genie back in the bottle.

[1] In fact, the biggest barrier to wider adoption of fully autonomous vehicles is getting humans (who are inherently unpredictable) off the damn road.


Chopping Down Trees

Strategy is a system of expedients; it is more than a mere scholarly discipline. It is the translation of knowledge to practical life, the improvement of the original leading thought in accordance with continually changing situations.

– Helmuth von Moltke the Elder

Ages ago, when I was an intern at Microsoft, I attended a big open lecture given by Steven Sinofsky, then president of the Windows division.  Sinofsky waxed philosophical about the organization, his vision for the company, and why Windows 8 was going to crush it. Towards the end, he opened it up for questions.  Sitting in the back, my hand shot up and I asked, “Microsoft is famous for Waterfall software development, but these days (2011), all the rage is Agile methodologies.  What gives? Why aren’t we, Microsoft, great inventor of the modern software company, using Agile?”

Sinofsky, without missing a beat, answered: “If software development were building bridges, we’re at the point in history where people are chopping down trees to cross rivers.” He believed it would cost too much to implement this change, and that too much about software engineering was still ad hoc or unknown magic, so he couldn’t be sure the benefits would be worth it.

For years, that view of how we approach software engineering as a discipline has stuck with me.  Chopping down trees.  Looking around the forest, finding something that can get the job done, doing it quickly and cleanly, then bravely walking across the flowing rapids.

[GIF: Me Building SmarterCloud this Summer]

Before getting back into coding this summer, I had decided that I was going to engineer our web application architecture from the ground up.  I was going to make plans for exactly how I wanted our project to be built and deployed.  In fact, it would be so perfectly documented, automated, and repeatable that all you would need was a repository with a README and a one-click AWS CodeStar environment, and SmarterCloud would work.
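For flavor, the idealized version looked something like the sketch below: one buildspec.yml checked into the repository that fully describes the build.  The file name and phase structure follow AWS CodeBuild conventions (which CodeStar projects build on); the npm commands are hypothetical placeholders for whatever the project actually runs.

    # Hypothetical buildspec.yml: the whole build, documented and repeatable,
    # living in the repo instead of in anyone's head.
    version: 0.2
    phases:
      install:
        commands:
          - npm install        # every dependency comes from the manifest, not my laptop
      build:
        commands:
          - npm run build      # one documented, repeatable build step
          - npm test           # the quality gate is baked into the pipeline
    artifacts:
      files:
        - '**/*'
      base-directory: dist     # hand the built output straight to the deploy stage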

Getting back into it, though, I’ve discovered that this is nearly impossible in modern software environments.  My intentions were good.  My start was promising.  But before long, I was adding custom configurations for my load balancers, downloading third-party tools to my machine and invoking them in my build system, and doing all the things I promised I wouldn’t.

Don’t get me wrong, things have gotten much, much better in the last decade (since back when I had to implement my own CSS grid system in college).  But one major irony I learned in my life as a software engineer is that I wasn’t really an engineer.  I mean, let’s be honest: what civil engineer can use multiple hemorrhaging-edge paving techniques to build our bridges? A lot fewer people would survive the daily commute if they did.

I’m encouraged by the open source movement and the stellar projects I’ve been using to help build SmarterCloud.  Projects like Galen and Codacy have great open-source support and can be quickly and easily extended. These products have employed software strategy well and combined it with strong leadership to keep quality high.  But still, with all the tools I’ve interacted with or built, I’m reminded of von Moltke’s other, more famous quote: “No plan survives contact with the enemy.”

So here’s my outlook: if you are building a product, or hiring developers to build it for you, understand that software engineering is still 90% art and only 10% science. Have patience with software, and put more of an emphasis on quality.  That way, if you aim high and provide ample time for work to get done, the ramifications won’t be catastrophic when tradeoffs inevitably have to be made.

EDIT: There’s a great piece from Dr. Dobb’s, which I found via Hacker News, that speaks to this subject in a more scientific way and is worth the read.