Last night I had the opportunity to join several local entrepreneurs and non-profit leaders to talk about how AI is affecting non-profit fundraising. I don’t know much about fundraising myself, but Westmont is building a fairly robust machine learning infrastructure to help guide the campus fundraising efforts, and since I did have something to say about machine learning, I was invited to frame the evening’s discussion in terms of machine learning generally.
MIT Enterprise Forum
The event was sponsored by the MIT Enterprise Forum of the Central Coast, a TED-talk-like organization that hosts high-quality events every month on all kinds of interesting topics. It is a community-building organization focused on keeping the local business ecosystem strong. In attendance was a whole pipeline of people, from students at several local colleges to CEOs.
The talk
For my part I wanted to talk about “machine learning” vs. “artificial intelligence” (AI), because the latter tends to be considerably more amorphous and harder to talk about: it spans a range of topics from computer chips to theories of mind.
I framed machine learning as a way of writing computer programs that depends primarily on data rather than primarily on programmers. Anyone in the field knows this is a bit of a false distinction, but it’s a good place to start for thinking about what is going on. It’s also important to point out that the discipline is as much an art as a science.
Once you have data and have trained a model, I argued, there are three primary tasks one can do with it (a small code sketch after the list illustrates all three):
- Categorize unseen data, or predict an outcome from previously unseen data
- This is what face recognition and a variety of industrial automation applications do
- Complete partial data
- Given a known category and a partial input, complete the input so that it meets the model’s criteria for that category. This is what predictive text on your phone does, for example.
- Generate completely new data
- Given some category, create an input that would be classified as that category. This is how we get computers to create new things, like pictures of cats that don’t exist.
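For readers who want to see what these three tasks look like in code, here is a minimal sketch (not from the talk) using scikit-learn’s GaussianNB on made-up two-feature data. The numbers are invented purely for illustration, and the completion and generation steps reuse the per-class means and variances the model learns (theta_ and var_, assuming a recent scikit-learn).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Toy, made-up data: 100 rows with two invented features, labeled 0 or 1.
X = np.vstack([rng.normal([1.0, 1.0], 0.3, (50, 2)),
               rng.normal([4.0, 5.0], 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = GaussianNB().fit(X, y)

# 1. Categorize: label a point the model has never seen before.
print(model.predict([[3.8, 5.2]]))  # -> [1]

# 2. Complete partial data: given the category (1) and the first feature,
#    fill in the missing second feature with that category's learned mean.
completed = [3.9, model.theta_[1, 1]]
print(completed)

# 3. Generate new data: sample a brand-new point from the distribution
#    the model learned for category 0.
print(rng.normal(model.theta_[0], np.sqrt(model.var_[0])))
```

Real systems use far richer models (deep networks rather than naive Bayes), but the division of labor is the same: learn a model from data, then use it to classify, complete, or generate.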
Going through some examples was fun. I showed the audience Google’s Quick, Draw! and broke down some of its failure cases, like why it didn’t recognize that I had drawn a toilet.
Machine Learning is just us, bigger
Finally, I wanted to make a clear point that machine learning (and AI) is not going to save us from our most pressing problems: climate change, gun violence, partisanship, balancing various civil liberties, and so on. These systems only reproduce patterns we already have data on, so if we give them bad data, produced by a bad culture, they will only amplify that bad culture. The answer to our hardest problems is not machine learning; it is the much harder cultural and political work that needs to be done to create better data. In this way the talk was an extension of my interest in #EthicalCS.
I enjoyed listening to the other speakers: Reed Sheard from Westmont, Charles Schultz from Crescendo Interactive, and Adam Martel from Gravyty. The question-and-answer period was also engaging, although I fear I came across too strongly for a non-academic audience! I’m happy to provide the slide deck to anyone who attended and is interested.