Podcast: Edge Computing: The Next Big Bet?
Achieve data consistency and cost savings with edge data centers.
By Andrew Nelson / 17 Dec 2021
What becomes possible when computation and data storage happen right at the data source? Edge computing addresses the latency issues that plague real-time data while also reducing costs, benefiting both your organization and your customers. In this week’s episode, Insight’s Andrew Nelson and Dell Technologies' Allen Clingerman discuss the future of edge computing.
To experience this week’s episode, listen on the player above, watch the conversation below, or scroll down to read a complete transcript. You can also subscribe to Insight TechTalk on Apple Podcasts, Pandora, and Spotify.
Audio transcript:
Andrew
Hey Allen, it's nice to see you again. For those of you who haven't sat in on one of our joint sessions, I'm Andrew Nelson, a Principal Architect for Insight CDCT. And Allen, I'll let you introduce yourself.
Allen
Yeah, so my name's Allen Clingerman, I'm Chief Technology Strategist for PowerEdge and workloads for the North American Channel at Dell Technologies and thanks for having me back on again, Andy.
Andrew
Thank you. So this is an extension of a conversation that we've had in a couple other venues. We've been talking a little bit about data center technology and compute options in the past. This specific talk is around some of the trends that we see in AI and edge computing, in that AI and IoT space, and it's a hot topic for us at Insight; I know it is for you as well at Dell. We're seeing the market kind of stratify, or tier in some fashion, into what we like to call a micro and a macro edge. The micro edge is the traditional IoT footprint that I think both of us have seen for a while: the small device, single CPU, a little bit of memory, kind of desktop/laptop class. And that's how a lot of our customers initially did IoT, how we were doing a lot of IoT designs: really small, low cost, low horsepower, just enough technology to aggregate and pull back the data, maybe do a little processing at the edge. That's what we're calling the micro edge at Insight.
We're also seeing a macro edge, and we'll dive into both of these topics; I'm kind of teeing it up for you, Allen. But the macro edge is the more interesting one in our industry, in that it generally has a video component, video or audio or a larger data source, or more horsepower and processing. And that footprint looks a lot different. It's got beefy networking, sometimes multiple processors or multiple servers, and frequently we see accelerators. You and I have talked about this in a couple other sessions, Allen, but that accelerator technology, whether it's an FPGA, a GPU, a DPU, all of those are really interesting. And the reason I bring it up to you is you're coming from the Dell compute side of the house. Both of those swim lanes, if you will, are very interesting for different reasons. Maybe you want to talk about some of the challenges in that footprint and what you're seeing on the Dell side.
Allen
So Andy, just so I'm clear, 'cause I like where you went: I think the big challenge that we're seeing with our customers, right, is that executive leader or enterprise architect that's trying to figure out how to take something and put it into production. There have been lots of experiments, and I like where you said micro and macro edge. That's exactly it, and everybody's trying to figure out, how do I take maybe those smaller science projects that I might've had and bring them to scale? Because I might've done one thing at the edge, but then we start seeing all these, you know, these interconnection points between the data for you to make a real-time action decision, right? And that's really the next wave of AI: enterprise and industrial edge. And it's funny, you say micro and macro; we're saying near edge and far edge. The terms mean the same thing, same exact concepts, right? And it's all about where AI is, you know, doing something and automating a process at the point of action. And this is also the area where I see many organizations struggle with really thinking about IT and OT meeting, where some organizations don't even have operational technology people, right?
And they've started to look at sensors and HVAC and other areas that might fall under the facilities team, but they really don't have an OT team for innovation to go drive things. And if I'm a manufacturer in the robotics field, or in retail with cameras and other things, it's about being able to look at it differently, right, than just how they're using the technology today. And we're doing a lot of things in this space, right? Certainly at the edge, and I'll call this out, and this might've been what you teed up there, Andy: there's a lot going on in this category, in the marketplace, and lots of different motions you might want to talk to the viewers about, of what you're seeing, right, from a much broader perspective than just Dell Technologies. But we're building a whole business unit on it. We know this is the next big bet. You might've heard that from Michael: edge, core, cloud. We are creating a whole edge business unit, starting right now with telco, retail, and manufacturing, but you can imagine, just start thinking about what this means for all these devices. And I think what you just said, breaking it down into those three discrete areas, is going to be very important moving forward, and an enterprise architecture leader who can direct their teams from a technology perspective on what they should be looking at for innovation is important.
Andrew
Yeah, and the code isn't very different. What I find fascinating about this micro and macro edge is that a lot of times I'm using a lot of the same containerized solutions, a lot of the same software, but the challenges are very different. And you and I have talked about this offline: we're seeing a bunch of your industry peers exit the micro edge, or part of that industry getting out of it, ending products that were specifically designed for it. What I mean by that, for the viewers who haven't seen it, is that a lot of our vendor partners have had, and Dell does too, hardened boxes that are sealed and fanless. Even though they look like a desktop or laptop on the inside, the outside looks more like a car amplifier in a traditional car; they're meant for hostile environments, because at the micro edge we may be sticking this on a boat, or in a shed out in the middle of nowhere. And that micro edge is very interesting in that concept. The challenges there are your remote management and OT challenges (I have an electric utility background, so you're poking at PTSD for me), but you can't do that with commodity hardware that doesn't have a little bit of secret sauce, and that's where things like your iDRAC intelligence are interesting.
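To make that remote management point concrete: modern out-of-band controllers like Dell's iDRAC expose the standard Redfish REST API, so a central team can check on a sealed box in the field without ever touching its operating system. The sketch below is a minimal, hypothetical example, not a production pattern; the address, credentials, and certificate handling are placeholders.

```python
# Minimal sketch: out-of-band health check of a remote edge node via the
# Redfish REST API, which Dell's iDRAC (and most modern BMCs) expose.
# The address and credentials below are placeholders.
import requests

BMC = "https://192.0.2.10"   # hypothetical iDRAC/BMC address
AUTH = ("root", "changeme")  # use a real secrets store in practice
VERIFY_TLS = False           # self-signed certs are common at the edge; validate properly in production

# Discover the system resource rather than hard-coding a vendor-specific path.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=VERIFY_TLS).json()
system_uri = systems["Members"][0]["@odata.id"]

system = requests.get(f"{BMC}{system_uri}", auth=AUTH, verify=VERIFY_TLS).json()

# Power state and rolled-up health are enough for a central team to triage
# a box on a boat or in a shed without sending anyone out to it.
print(system.get("PowerState"), system.get("Status", {}).get("Health"))
```

Over the same API you can power-cycle the node or mount virtual media, which is what makes lights-out management of the micro edge practical.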
So that's the micro edge and some of those challenges. The flip side, which you and I have talked about before, is how do I do GPUs and accelerators efficiently and get all of that management and OT? That's a part of the industry that everybody is leaning into, but I don't know that you can do one without the other. We've got a lot of IoT deployments that have a little bit of both: I need video and I need analytics, but I might just need simple door sensors, and I might need simple thermostats and on/off kinds of devices where I don't need a GPU. And so we're going to see a blended architecture in a lot of these IoT deployments, where I need two of these and three of these, in these kinds of tiers. And so that was why I brought up the question. I like the fact that, you know, I'm not trying to steal your thunder, but Dell is definitely pivoting into both of those markets from an IoT perspective. Right? I mean, that's something you and I talked about.
Allen
Absolutely and I like what you said there, because that's exactly right. In fact, you know, you kind of woke me up a little bit on the tiny ML conversation.
Andrew
Thanks for that segue; that's the third one we haven't mentioned yet. The curve ball that got thrown at me a bit ago is that there's another kind of opposite pivot going on in the market, this tiny ML, which is all of the stuff we're talking about, but slimmed down so it doesn't even need a Raspberry Pi; for those of you who have seen a Raspberry Pi, it's smaller than that. It's putting it onto an Arduino chip, or embedding it into a camera, or embedding it into a PLC or a sensor. And the interesting thing there is that there's not a huge hardware play; actually, it's kind of an anti-hardware play. But we're seeing the software vendors go downmarket, so, like, TensorFlow has a lite version, TensorFlow Lite, a tiny ML flavor that can play there. It's very underdeveloped and very much a work in progress, but just as much as our industry is pivoting to the right to do the GPU, DPU, FPGA accelerator goodness, this is the opposite. And what I found fascinating about it is that it solves use cases that aren't enterprise use cases, and you and I deal with the enterprise pretty heavily. So it's a sensor that might be on a glacier in Greenland monitoring glacier flow; tiny ML has a use case of meeting low latency, super low power, super low compute, but being able to make decisions onsite, where I may not have good connectivity, or it might be heavily intermittent connectivity. And that's a very interesting IoT use case we're seeing bubble up. The example that I gave in one of these previous talks was the packs that they put on the backs of whales and animals to track them. Like, what if I can do a little bit of AI/ML on the sensor while the whale is underwater and not have to wait for it to come back up? Now, I don't know that Dell is going to have a great play in that tiny ML space, but it's fascinating to see how this thrives.
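For a sense of what going downmarket looks like in practice, here is a minimal sketch that converts a small Keras model into a quantized TensorFlow Lite flatbuffer, the format that TensorFlow Lite for Microcontrollers can then run on Arduino-class hardware. The model architecture, shapes, and file names are illustrative placeholders, not a reference design.

```python
# Minimal sketch: shrinking a small sensor model into a TensorFlow Lite
# flatbuffer suitable for microcontroller-class devices.
# The model, shapes, and filenames are illustrative placeholders.
import tensorflow as tf

# Stand-in for whatever tiny sensor model you actually train
# (e.g., classifying accelerometer readings as "moving" vs. "still").
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) on real sensor data would go here ...

# Convert with default optimizations (weight quantization) to cut the
# model down to a few kilobytes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# This flatbuffer is what gets compiled into the device firmware and run
# by the TensorFlow Lite for Microcontrollers interpreter.
with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device side, the same model runs through a stripped-down interpreter that assumes no operating system or file system, which is what lets it live on an Arduino chip or inside a sensor.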
Allen
It all interconnects, though, like, in the project, right? How do you make this stuff real? Because I was even talking to our Office of the CTO after we spoke, and I did some research to understand how we were using it internally. Like, for example, sensors on our manufacturing floor actually use tiny ML to do certain things at the robotics level that are very repetitive in nature, right? To train it how to do something very quickly, right there, local to the arm. I liked their commentary: very small sample sets, right? So very fixed function. You kind of described that, where it's very low power, and they typically will feed into a model, going into something else that requires a deeper dataset, with GPUs and DPUs that also live at the edge. So those robotic arms take those actions at that specific station, right, to build something, but then they send that sensor data back to get a bigger view of what's happening across the plant floor for efficiency, and a larger trained model with GPUs and DPUs in the compute within the manufacturing floor. So it was just kind of interesting. One that I thought was interesting, if anybody has visited any of the theme parks: if you've been to Universal at all and you've seen Harry Potter, they have a little station where you can wave the wand. That's actually a tiny ML sensor that senses how the end user is doing it and uses IoT to trigger something else to happen, right, so --
Andrew
And why send that data back anywhere else if you can process it right there? It's fascinating.
Allen
All it's trying to do is spit water at the participant, right, or flip a sign to interact with the, you know, members. So I thought those were great, interesting use cases of tiny ML that can then either tie into, or not tie into, a larger framework. And I think that --
Andrew
What you're talking about, this whole AI journey, is probably all of what we're talking about in some fashion, hierarchically, and it's non-trivial. There isn't a recipe for, you know, application X, Y, and Z to do that. So that's why these talks are so fun for me, to exchange ideas. You get visibility into the building blocks and the infrastructure that Dell is coming up with, and I'm trying to navigate those building blocks and infrastructure to help customers map this. So, but no, I appreciate this dialogue. And I think that's about all the time we have today to talk about this, but it was a little bit of a teaser. If anybody's more interested in this, Allen and I love talking about this kind of stuff.
Allen
Thanks, Andy.
Andrew
Thanks, Allen, I appreciate the time.