CHATBOTS VS. MEATBOTS

by Geoff Olson

A few weeks back, news outlets and tech blogs erupted with a story that a ‘supercomputer’ had passed the so-called “Turing Test” for the first time.

Named after the 20th-century mathematician and computer scientist Alan Turing, it is a proposed test of a computer program’s capacity to pass as a conscious entity. To pass the Turing Test, a computer must hold a keyboard-and-screen conversation with a human judge without being detected as a machine.

A computer program developed in Russia convinced 30 percent of participants that it was a 13-year-old boy – an age chosen by its programmers to hide gaps in their creation’s information base.

Some of the news stories gave the impression that scientists had cooked up an artificial intelligence worthy of Stanley Kubrick’s HAL. Not so. There is a world of difference between constructing a mainframe that refuses to open the doors on a spacecraft and writing a script that mimics human conversation.

The Russian program was more hack than HAL – a “chatbot,” actually. But even though chatbots have no innate intelligence, they behave otherwise. For example, every once in a while I’ll get a tweet or a message on my blog that appears to be from a reader, but it’s just a bot that has combed through my archive for keywords.

Just as algorithms operating in millionths of a second constitute most of the traffic on the world’s stock exchanges, “bots” engineered to mine data constitute the bulk of Internet traffic. Some researchers estimate that only 35 percent of the average Twitter member’s followers are actual human beings.

I’m not concerned here with automated e-mail scams out of Nigeria, or other varieties of spambots. I’m focusing on the chatbots that pollute social media sites. Like the Russian program, these are scripts engineered to convince the target at the other end that they are dealing with another person.

Many chatbots follow sleep-wake cycles that keep them from being flagged as mindless programs. They travel across cyberspace ‘liking’ posts and collecting friends, while trawling news and marketing databases for keywords that might intrigue their marks. Some chatbots have social network accounts of their own, for the sake of a plausible digital footprint.
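None of this requires anything resembling intelligence. As a purely illustrative sketch – every keyword, post and function name below is invented for the example, not taken from any real bot – a few dozen lines of Python are enough to mimic a sleep-wake cycle and a keyword trawl:

```python
import random
import time
from datetime import datetime

# Hypothetical keyword list and post feed; a real bot would pull these
# from marketing databases and a social network's API.
KEYWORDS = {"dating", "crypto", "diet", "lottery"}
FAKE_FEED = [
    "Thinking about trying a new diet this summer",
    "Photos from my trip to the coast",
    "Anyone else following the crypto markets?",
]

def awake(now: datetime) -> bool:
    """Mimic a human sleep-wake cycle: stay quiet overnight."""
    return 8 <= now.hour < 23

def interesting(post: str) -> bool:
    """Crude keyword trawl: flag posts containing any target word."""
    return bool(set(post.lower().split()) & KEYWORDS)

def run_once() -> None:
    if not awake(datetime.now()):
        return  # 'asleep' -- doing nothing is part of the disguise
    for post in FAKE_FEED:
        if interesting(post):
            print(f"Liking and replying to: {post!r}")
        # Randomized pauses make the activity look less machine-like.
        time.sleep(random.uniform(0.1, 0.5))

if __name__ == "__main__":
    run_once()
```

The quiet hours and randomized pauses are the whole disguise: the script does nothing clever, it simply avoids behaving at a tempo no human could sustain.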

When the dating site OkCupid bought and redesigned another dating site, the programmers observed a sharp decline in bots on the refurbished site, “along with a sudden 15 percent drop in use of the new site by real people,” according to a 2013 report in The New York Times.

This decrease in traffic on the redesigned site occurred because bots had been posting flirtatious messages and automated “likes” to members’ pages, luring them toward pay-for-service pornography sites and other profitable portals. With the redesign, some of the bots apparently got lost.

In mining responses from the lovelorn, the bots “had imbued the former site with a false sense of intimacy and activity,” notes the Times report.

“Love was in the air. Robot love,” said Christian Rudder, a co-founder and general manager of OkCupid. The company’s programmers had a battle plan, he said: create bots of their own to flirt with the invading bots, drawing them to a special forum – “a purgatory of sorts” – where they could conduct endless cycles of lovey-dovey bot-chat.

If that’s not crazy enough, even the dead can now get a piece of the bot action. In a tech development worthy of a Philip K. Dick novel, multiple firms now offer posthumous social media services. With their help, you can keep your online profile active long after you’ve popped your clogs.

The slogan of LivesOn is “When your heart stops beating, you’ll keep tweeting.” The Twitter service, developed by London-based advertising agency Lean Mean Fighting Machine, reportedly mines clients’ past tweets to analyze their syntax and favourite topics. It uses the data to predict what they would tweet about, and how.
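LivesOn has not published how its prediction actually works, but the general idea can be illustrated with a toy Markov chain: learn which words tend to follow which in a client’s archive, then walk those transitions to generate new, plausible-sounding text. Everything in the sketch below – the sample “tweets” and the function names – is hypothetical, a minimal stand-in rather than the company’s method:

```python
import random
from collections import defaultdict

# Toy corpus standing in for a client's tweet archive.
PAST_TWEETS = [
    "coffee first then the news then more coffee",
    "the news today is mostly bots talking to bots",
    "more coffee and fewer meetings please",
]

def build_chain(tweets):
    """Map each word to the words that have followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate(chain, start, limit=140):
    """Walk the chain to produce a plausible-sounding tweet."""
    word, out = start, [start]
    while word in chain and len(" ".join(out)) < limit:
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)[:limit]

if __name__ == "__main__":
    chain = build_chain(PAST_TWEETS)
    print(generate(chain, "coffee"))
```

A commercial service would presumably use far richer models, but the principle is the same: recycle a person’s own word patterns until the output sounds like them.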

So there you go. In our brave new wired world, your friends and relatives won’t need a Ouija board or James Van Praagh to access your spirit. Cloud-based bots can keep yammering on indefinitely on your behalf, in 140-character bursts.

Computer programs may never achieve full consciousness, but given how far they’ve come already, will that even be necessary to convince most of us they are alive – or once were?

The Vancouver, July 11
