5/21/19

The AI future is near

AI can currently write as well as a high school student. That might not sound like much until you realize that most adults never get past that level — which means AI may already write better than most human adults.


AI is learning to drive, diagnose cancer, target weapons, write code.


The possibilities are staggering.


I know because I’ve built a couple of AI tools myself, and let me tell you, the implications are every bit as big as they sound.


As an operations person, I know that the tooling for getting AI from the lab out into the world is still lacking (it will be a while before AI screens your x-rays, even if initial results are promising). But I can also tell you, as an insider, that we’re working on it and making rapid progress. Having watched operations tooling evolve over the last ten years, I can tell you the gap between concept and reality is closing fast. Soon every AI tool you can imagine will exist.


The question is what we’ll do with it.


If we build the future we want, it will be a utopia.


If we build a future we don’t want, it will be a dystopia.


Sounds easy right?


The problem is we suck at that.


Take, for example, the future we currently live in. The algorithms at Facebook and other social and sharing platforms prioritize engagement, which accidentally prioritizes outrage. This tends to surround each of us with bubbles of like-minded people and to deepen our prejudices and preconceptions. These outcomes are obviously undesired, but here we are. We got a future nobody wanted because nobody bothered to ask whether we were building the future we wanted. Or if they did ask, they didn’t think it through. We’re surrounded by technology that does more harm than good.


Whose fault is that?


Or take the current situation with privacy.


Google has created a terrifying private surveillance industry in which staggering quantities of personal information are routinely gathered and secretly traded with data brokers, data-enrichment services, and the like. Why? In support of some nebulous goal of improved search. But nobody wants the surveillance future we created. Nobody asked whether slightly better search was worth exposing every part of our private lives to the highest bidder. If we had asked that question, we certainly would have answered no.


The problem is that we didn’t choose. We just sort of forgot to ask these important questions. At a fundamental level, failing to choose is itself a choice.


We’re poised to make the same mistakes with the coming AI future.


The decision we’re failing to make is: “What AI future do we want to live in?”


AI that takes our [jobs, agency, lives] is dystopian


AI that helps us be better [people, thinkers, workers] is utopian


Having built an AI tool that fits squarely in the dystopian camp, I can confidently predict that the dystopian future is coming.


It was almost the easier path. We built it in two weeks using off-the-shelf tools. It outperformed me. The AI just did everything. And then it got weird. It went off the rails and I didn’t know how to fix it, because one of the problems with AI is that we didn’t give it inspectable instructions; we gave it data and asked it to find patterns.


So when I asked it to generate a persuasive argument, it did. And part of a persuasive argument is references, right? And references follow a pattern. So the AI dutifully made up a reference for me. Fake author, fake publication, fake article. I don’t even know how to tell it not to do that, because I never told it to do it in the first place.


This is an example of AI going weird. It’s not a huge deal, but it’s part of the AI dystopia.


So we started over.


And it happened again. This time we realized that the tool made humans obsolete. It basically did the work for us, but if AI does all the work, there’s no reason for us to work. And metaphysical arguments aside, the work we do in this world, applying our own unique aesthetic to our environment, is one of the primary reasons for existing. Without work, what are we?


So we started over again, and that one we got more right.


We created a tool to help people do their research, to organize information, to think more clearly and to communicate that thinking through writing.


But the first two were firmly in the dystopian camp. We could easily have ignored the little niggling warnings and shipped a supremely powerful tool of disinformation.


The second time, we could easily have shipped a tool that not only does our research entirely but removes us from the equation altogether. Or rather, not just could have: we built both of them, and it was startlingly easy.


So the dystopian AI future is coming and there’s nothing anyone in this room can do about it.


The only solace I can offer is that we can simultaneously build the utopian AI future.


Everyone will get a little from column A and a little from column B.


The better we do at building the utopian one, the more everyone gets from that column.


So… get to work.


