Rise of the Human-AI
It’s hard to build a startup that uses artificial intelligence. So hard that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get robots to behave like humans.
“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe.
How to start an AI startup:
- Hire a bunch of minimum wage humans to pretend to be AI pretending to be human.
- Wait for AI to be invented.
In a chicken-and-egg situation, startups are using humans to test the viability and requirements of an AI product before development starts. The practice was highlighted by a Wall Street Journal article detailing the hundreds of third-party app developers that Google allows to access people’s inboxes.
The third parties highlighted in the Wall Street Journal article are far from the first to do it. In 2008, a company that converted voicemails into text messages was accused of using humans in overseas call centres, rather than machines, to do its work.
In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.
The practice raises two questions: are investors being misled? And while it creates jobs for humans, do the humans actually want those jobs?
In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.
Even Facebook, which has invested heavily in AI, relied on humans for M, its virtual assistant for Messenger.
In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself.
In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.
Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health.
A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that patients revealed more when they were told Ellie was an AI system than when they were told a human was operating the machine.
But questions over data protection, as well as the ethics of misleading investors and users, remain.