Editor’s note: This is the latest in the UpTech series focusing on Artificial Intelligence, brought to you in partnership between YourLocalStudio.com and WRAL TechWire. Alexander Ferguson is founder and CEO of YourLocalStudio. Links to some earlier posts in the series are embedded for your convenience and information.

CARY – Welcome back to the UpTech Report series on AI. In this episode we continue our conversation with Alicia Klinefelter, a research scientist at Nvidia. Alicia is an expert in her field. She holds a PhD in Electrical Engineering from the University of Virginia. Since joining Nvidia, her focus has turned toward high-performance hardware, including machine learning circuits and systems.


The interview

  • What is your view of the AI revolution?

I think of it a little bit differently than someone who is more on the algorithms or theoretical side. For me, it is more a revolution of finally having enough compute power to run a lot of the complex algorithms that have been limiting the field for years. The algorithms and the underlying mathematics of machine learning have been around for decades, since the 1950s. What really revolutionized things is the enormous progression of compute power since the mid-to-late 2000s, which has finally enabled a lot of us to implement these algorithms on a larger scale.

  • What key factors do you think played a large part in enabling this revolution?

I think when you talk to people in this space, they'll usually highlight a handful of things that have enabled the AI revolution, particularly in the last five to ten years.

I mean, the first one people will say is compute power, as I already mentioned, which I think is really important. But another one that people will cite is open source: the open sourcing of everything, whether it's the software, the models, or just any infrastructure around machine learning.

Open sourcing is so important because generating a complex model takes such an extensive end-to-end stack. Developing all of that yourself would be so time intensive that, if these open source tools weren't there, no one could do anything very quickly.
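To make her point about the stack concrete, here is a minimal sketch using PyTorch, one of the open-source frameworks of the kind she alludes to. The network shape, data, and hyperparameters are illustrative placeholders, not details from the interview.

```python
# A minimal sketch of how open-source tooling collapses the end-to-end
# stack: defining and training a small neural network takes a few lines
# with PyTorch. (Layer sizes and data are made-up placeholders.)
import torch
import torch.nn as nn

model = nn.Sequential(           # tiny model: one hidden layer
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 16)          # random stand-in training data
y = torch.randn(64, 1)

for _ in range(100):             # a short training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # autograd computes gradients for free
    optimizer.step()
```

Every piece here, from automatic differentiation to the optimizer, would otherwise have to be built from scratch, which is exactly the time cost she describes.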

  • You mention the impact of compute power and open source. Can you give a bit more insight into compute power and its progression?

Usually, a lot of hardware engineers will talk about Moore's Law, the observation Gordon Moore of Intel made more than 50 years ago: you can double the number of transistors on a die roughly every 18 months.

And that held steady and allowed the miniaturization of all of our electronics for the last few decades. But in the last five years or so, we've finally started to see Moore's Law stagnate.
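For readers who want the arithmetic behind that doubling, a quick back-of-the-envelope sketch follows; the starting transistor count is a made-up figure for illustration.

```python
# Rough arithmetic for the doubling described above: if transistor
# counts double every 18 months, a design grows by 2**(years / 1.5)
# over a given span. (Illustrative only; starting count is invented.)
def transistor_growth(start_count: float, years: float,
                      doubling_period_years: float = 1.5) -> float:
    return start_count * 2 ** (years / doubling_period_years)

# e.g. a hypothetical 1-million-transistor chip after 15 years:
print(f"{transistor_growth(1e6, 15):,.0f}")  # ~1,024,000,000 (ten doublings)
```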

You get into specifics with a company like Intel: since they have a foundry, they usually release new technologies every two years. We usually describe those technologies by their feature size, which is indicative of the transistor getting smaller and smaller.

And they have been stuck at one node, as we call it, at ten nanometers, which sounds incredibly small, but they haven't been able to move beyond it for the last few years. We've seen other foundries struggle as well. The other main one, TSMC in Taiwan, has had its own difficulties moving down to the single-digit-nanometer scale.

Computer power opens doors for AI and machine learning – a NVIDIA research scientist explains

So, we're starting to see this massive stagnation in how much we can really scale the technology. And one thing to note: when we scale these technologies smaller and smaller, we also see benefits in other hardware parameters, such as voltage. As voltage goes down, frequency can go up, meaning your performance can increase. So you get more performance for less power, because power is a function of voltage.
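What she is describing is the classic dynamic-power relationship in CMOS, roughly P = C * V^2 * f (capacitance times voltage squared times frequency). A hedged sketch with illustrative numbers, not measurements, shows how a small voltage drop can more than pay for a frequency bump:

```python
# Dynamic CMOS power scales roughly as P = C * V**2 * f.
# The values below are invented for illustration.
def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    return capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 2e9)      # 1 nF switched at 1.0 V, 2 GHz
scaled = dynamic_power(1e-9, 0.8, 2.5e9)  # lower voltage, higher frequency
print(scaled / base)  # ~0.8: 25% more frequency for 20% less power
```

Because voltage enters the equation squared, lowering it has an outsized payoff, which is why the end of voltage scaling hurts so much.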

So when we start to see that slow down, we have to find more creative ways to get where we want to go. We can no longer rely on the technology scaling we've depended on for so long with Moore's Law.

Just what is machine learning? AI experts talk definitions and uses in UpTech series

So that gets a little tricky. Right now, you have to think of it this way: we have what we have in terms of technology and the underlying silicon used to create these chips. So we have to get really creative with different types of on-chip architectures to implement these algorithms, and find creative ways to implement these neural network topologies in hardware, to minimize power or to get more specialized at whatever you're trying to do to keep power down.

Because, as I mentioned before, there's a constant trade-off between hardware flexibility and energy efficiency, and you're always trying to find that sweet spot. So I think what we're going to see in the next couple of years is hardware that's much more dedicated to the function it needs to serve, so you can minimize power for that very specific function.

So, I think we'll see a lot more of that. Edge computing means very dedicated chips that are very energy efficient, and I think we'll go from there.

Coming in Part Two: The power of edge computing for machine learning – what it means

Worried about manipulation by Artificial Intelligence? You should be, author warns