3 Spooky Examples of AI

Would you let a drone walk your kid to school? How about letting one take your dog around the block? Combine those two—how do you feel about seeing-eye drones? Purdue University researchers would love to see these options become reality. Robots that operate without human intervention could be around the corner thanks to the Center for Brain-Inspired Computing (C-BRIC).

“Most research is focused on perception-centric intelligence. We are going to look at designing smart devices that can read and interpret data, but then take that information and use reasoning to make decisions. Today’s typical computing is unable to do those things very well,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering, in an article.

Personally, I doubt any machine's ability to handle my dog when she sees deer running through the woods, or another dog dares to bark at her, but I could be accused of lacking faith. If we learned anything from Microsoft's Tay.ai—a Twitter bot that was supposed to learn from other people's tweets and turned into a racist, sexist troll in less than 24 hours—it's that AI doesn't always learn what we hope it will. The human brain doesn't have a much better track record.

Scary, right?

Halloween may have come and gone, but that doesn't mean the creep factor of some AI applications has disappeared. Here are three uses of AI that might make your skin crawl:

  1. Facial Recognition Tells Gay from Straight—Researchers used a deep neural network (DNN) to analyze pictures of people from dating sites and predict whether they were gay or straight. The AI was over 90% accurate with men, and just over 71% accurate with women. Why? Good question. The researchers say the result suggests that faces reveal far more about sexual orientation than we realized. Why that matters is another question. It's hard to imagine a use for this that isn't sinister.
  2. Chatbots Learn to Lie—Hate haggling? There was almost a chatbot that could help you with that, but Facebook had to abandon the project when the chatbots, negotiating with each other, learned to lie. “Sometimes bots feigned interest in objects they didn’t really want, and then pretended to give them up during the bargaining process,” wrote Katyanna Quach. Maybe they were just practicing “the art of the deal.”
  3. A Robot Psychic—Nautilus is a self-learning computer that, given enough data, was able to pinpoint where Osama bin Laden would be found. As LearningMind.com put it, “The same task took 11 years, two wars, two presidents and billions of dollars for the U.S. government and its allies.” While this particular use of AI might be scary (do we really want to know the future?), it may also have the potential to save lives if people are willing to listen.
