Editor’s note: This is the latest in an exclusive UpTech series about Artificial Intelligence, Machine Learning and much more as part of a partnership between YourLocalStudio.com and WRAL TechWire. Previous posts can be found by searching “Uptech” at WRAL TechWire.com. Interviews are conducted by Alexander Ferguson, CEO of YourLocalStudio.com.

Welcome to UpTech Report. In this deep dive video, we hear from Dr. Chris Hazard, a unique figure in the world of artificial intelligence who draws from experience in software development, psychology, physics, economics, hypnosis, robotics, and privacy law.

[Hazard, CEO of Hazardous Software, obtained his PhD in computer science from NC State, focusing on artificial intelligence for trust and reputation. He has worked in and been published in a variety of fields, from wireless network infrastructure as a software architect at Motorola, to psychology as part of a post-doc at NCSU, to hypnosis with the National Guild of Hypnotists, to robotics at Kiva Systems, to privacy law working with the Future of Privacy Forum.]

Known for leading the development of the game Achron, Dr. Hazard is a renowned, award-winning researcher of advanced technology applications, an entrepreneur, and a public speaker. In this interview, Dr. Hazard offers some startling revelations on how artificial intelligence and machine learning can inadvertently expose sensitive and personal information, even when that information was not willingly offered.


The interview

Hazard begins by telling us how important it is to fully understand this new technology.

A.I. right now—it’s very easy to over-hype it and also very easy to dismiss it. The right path is somewhere between: to understand how it can be used in your industry, how it can be applied, what results you’re likely to see, and make sure that you can understand why the decisions are being made.

I don’t think it’s such a clear path that we’ve got right now. I think it will take a little while to get these systems in place, to get them debugged and tuned, and to understand all the different facets they will interact with.

  • Dr. Hazard gives us an example of how the application of A.I. is not always fully considered.

WeBank just gave a talk recently at Troy, looking at how they can merge together all these different models from their different customers, and it’s great. It’s really powerful in a lot of ways, but at the same time, all of these ideas are being pushed together in ways where we don’t know where anything ends up.

If your data is in a decision tree, or your data influenced the decision tree in some way within this grander forest or in a neural network, then the model is an approximated view based on some part of the function, perhaps approximating your data more than somebody else’s because you were a more influential data point for some specific example. It’s really hard to tell that.

You can use influence functions. There are ways to tease that out, but it’s not tractable enough that many people are actually doing it. And so, helping to manage this and to use data for good is one of my driving forces.
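To make that idea concrete, here is a minimal sketch, not Dr. Hazard’s method, of one brute-force way to measure a single training point’s influence: retrain the model with that point left out and see how far a prediction moves. Real influence functions approximate this effect without retraining; the model, the synthetic data, and the use of scikit-learn here are illustrative assumptions.

```python
# Minimal sketch: estimate a training point's influence by leave-one-out
# retraining of a small decision tree (illustrative only; influence
# functions approximate this effect without retraining every model).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))           # synthetic feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)   # noisy target

x_query = np.array([[1.5]])                     # prediction we want to explain

full_model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
baseline = full_model.predict(x_query)[0]

influences = []
for i in range(len(X)):
    # Drop training point i, retrain, and measure how the prediction shifts.
    X_loo = np.delete(X, i, axis=0)
    y_loo = np.delete(y, i)
    loo_model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_loo, y_loo)
    influences.append(abs(loo_model.predict(x_query)[0] - baseline))

most_influential = int(np.argmax(influences))
print(f"Point {most_influential} moves the prediction at x=1.5 by "
      f"{influences[most_influential]:.4f} when removed.")
```

The point of the sketch is Dr. Hazard’s: tracing which individuals shaped a model is possible, but costly enough that few teams do it routinely.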

  • Dr. Hazard tells us, despite the great promises we’ve been told about the future of A.I., the applications of this technology still face major hurdles.

If you train a self-driving car on a million miles of driving on a highway and have no accidents, nothing unusual, great. It can drive on a highway. But what happens when there’s a snowstorm? What happens when you’re in a city driving uphill and there’s a road that’s half cobbled and half not because you’re in Boston or Pittsburgh or some old city? And it’s snowy and icy, and all of a sudden the car in front of you has its brakes on and slides into you. You could have easily avoided it.

It’s New Year’s. It’s like two in the morning. People are flooding across the street and not behaving in the ways that they would normally behave. Or on Halloween, all of a sudden there’s a new costume that all the kids are dressed as, which looks like a statue on the side of the road and is fooling self-driving cars.

  • And Dr. Hazard cautions us that it’s precisely because there is so much work left to be done with this technology that it is so important we understand its distinctions, including the difference between A.I. and machine learning.

I prefer to take a little bit of a different approach to defining A.I. and machine learning, and I tend to define them on two axes. So, we’ve got the classic exploration versus exploitation trade-off in A.I.

And this trade-off is: if you don’t know something, and there are a lot of unknowns or unknown unknowns, then you have to go find out the answers to all of it. And that’s the exploration.

The exploitation is when you know some things. And you know if I do this just a few more times, I’ll get an expected result that might be very good. And so, where do you draw the line between those two, and how do you trade those off?

And there have been thousands and thousands of papers examining that, and it goes by other names as well that are closely related. It’s related to the bias-variance trade-off in statistical models, and to the one-armed bandit and multi-armed bandit problems in game theory. There’s a lot of related work.
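As a concrete illustration of that trade-off, here is a minimal sketch of an epsilon-greedy strategy on a toy multi-armed bandit; the payout rates, the value of epsilon, and the choice of Python are illustrative assumptions, not anything from the interview.

```python
# Minimal sketch of exploration versus exploitation: an epsilon-greedy
# agent on a toy three-armed bandit. Most of the time it exploits the
# arm that looks best so far; a small fraction of the time it explores.
import random

true_payout_rates = [0.2, 0.5, 0.7]   # hidden reward probability of each arm
estimates = [0.0, 0.0, 0.0]           # running estimate of each arm's value
pulls = [0, 0, 0]
epsilon = 0.1                         # fraction of pulls spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore: random arm
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit: best estimate so far

    reward = 1.0 if random.random() < true_payout_rates[arm] else 0.0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # incremental mean

print("Estimated payout rates:", [round(e, 3) for e in estimates])
print("Pulls per arm:", pulls)
```

With a small epsilon the agent settles on the best arm while still spending a little effort checking whether its beliefs about the other arms are wrong, which is exactly the line Dr. Hazard describes having to draw.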

So, that’s one axis. The other axis I would define it on is: are you trying to achieve goals, or are you trying to achieve accuracy? And the difference here is: are you working with data, or are you working with rules and causality? Accuracy isn’t the only thing; you need to know why a decision was made.

Related story coming up: Some shocking ways that very personal and private information could be discovered and exploited—all from just playing a video game.

Thanks for watching this installment of our deep dive into the world of artificial intelligence. For more information on Chris Hazard and his incredible work in A.I., visit his site: hazardoussoftware.com. This is Alexander with UpTech Report.