
Today’s Growing IoT Problem: Embedded Software Security – An Interview with Cambashi’s Peter Thorne

Written by Chris McHale
Published on December 8, 2015

When it comes to the Internet of Things (IoT) and connectivity, embedded software security is a top concern for a growing number of manufacturing companies and their software engineers. The worldwide market for embedded security software is forecast to reach $2.95 billion by 2019, at a growth rate of 17.9%. And in a recent survey of software engineers, security ranked high in importance.

With smarter, more connected products constantly hitting the market, manufacturing companies are likely to see a growing number of product safety and security problems in the future. And according to Peter Thorne, there is no easy fix, because security is extremely difficult to build into today's connected products. Peter is the Director at Cambashi, an independent industry analyst firm based in Cambridge, U.K., which provides market insight on the use of IT in the manufacturing, energy, utilities, and construction industries. Peter has applied information technology to engineering and manufacturing enterprises for more than 30 years, holding development, marketing, and management positions in both user and vendor organizations.

Our co-founder, Chris McHale, spoke with Peter about today's growing safety and security issues surrounding embedded software development. This comprehensive interview covers a lot of ground, including the magnitude of the issue, the factors driving it, and the steps needed to resolve it. It begins with this executive summary presentation on the key takeaways.


Chris: Peter, earlier this year you and I discussed the security challenges growing for manufacturing companies as their products become increasingly smart or connected to the Internet. With the Internet of Things, or IoT as it's being called, software is becoming a more prominent feature of what used to be purely mechanical products. And then, ironically, a few days after we talked, articles were published announcing that "Chryslers can be hacked over the Internet": that hackers can cut the brakes, shut down the engine, or do many other nefarious things that affect the safety of the people inside the cars.

Now, to be fair, we're quite sure Chrysler is not the only manufacturer with security vulnerabilities, and the automotive industry is not alone in this challenge. Indeed, any industry with embedded software products, where the safety of individuals is a concern, is at risk. The medical device industry comes to mind, as do industrial products, aerospace, and so on.

To set the stage here, you pointed me to an academic article published in 2011 that provided a really thorough analysis of these kinds of vulnerabilities. It was titled "Comprehensive Experimental Analyses of Automotive Attack Surfaces," which is a mouthful, and it came out of the University of California San Diego and the University of Washington.

So I found this to be a fascinating article. It identified a problem area that I think would apply to any industry. The paper notes, and I'm going to quote here, that "Inter-bus bridging is critical to many of the attacks we explore since it exposes the attack surface of one set of components to components on a separate bus." Or, in a second quote: "Reflecting upon our discovered vulnerabilities, we noticed interesting similarities in where they occur. In particular, virtually all vulnerabilities emerged at the interface boundaries between code written by distinct organizations."

To begin our discussion, could you explain this a bit further in more layman’s terms? What does this really mean to the product development cycle and how do these vulnerabilities impact the general public’s safety and security?

Peter: Well, it's a big topic. But that second quote you mentioned puts the focus on development methods, since it identifies the interfaces between software components as the weak point. To understand it, the starting point is really just the extent of interconnection of everything. And as you say, it's not just cars. For example, earlier this year there was a report from the U.S. Government Accountability Office, the GAO, saying that even in aircraft there's only a firewall between passenger systems and avionics, which is terrifying. I will say that this was a theoretical report, not an experimental one.

But when reporters found that report and contacted security specialists, their response was to say no firewall guarantees security. The only thing that guarantees security is an air gap between systems, which translates into no communication. In other words, if systems are physically connected, then a hacker might get through.

So the issue now revolves around this feature of connectivity, and obviously we want connectivity. For example, if you call an ambulance, you're pleased that it's connected to the traffic management system and to the emergency room, so the ER will be ready when you arrive. We want that sort of connectivity. And indeed, it's the connectivity that is the source of the new capabilities, the things we never thought could be done with products.

Now Chris, let's consider other industries. Connectivity is how agricultural electronics can manage the spreading of fertilizer, using GPS to know exactly where the machine is, and a cloud-based system holding the farm's yield records, to actually provide new functions. This is where the interface comes in, what the paper calls "inter-bus bridging." It's all driven by our need for connectivity. We don't want these things to be completely separated.

Now, one of the main drivers of this need is functionality, but another driver is cost. Who's going to pay double or triple the cost to avoid connectivity, so that the screen in your car has three separate network controllers: one for entertainment, one for navigation, one for anything to do with the car's status? You just don't want to do that. You want it all connected, so you've got one screen, one way of interacting.

We want those features, but the issue also revolves around cost. Current development methods suit product developers because sharing components is a low-cost way to build. And after considering the security issues driven by functionality and cost, the next thing you have to think about is the design and development workflow. Essentially, every designer and developer in every product category has to use components, whether mechanical or software, that they don't produce themselves. You have to use things that other people have made.

So the reality is that making these third-party, externally-sourced components work correctly is hard work. Development teams spend a vast proportion of their time in this area, which leaves less time for everything else.

And the thing about vulnerabilities is that most of the time, they're not to do with using the interface correctly; they're to do with using the interface incorrectly. In that first paper you quoted, there was the remarkable example of an audio track in the entertainment system: if you put the right sort of waveform on that track, it was possible to crash the entertainment system.

Chris: I think they noted a song in WMA format, where the vulnerability, the code doing the hacking, would be completely unknown to whoever was listening to it. You simply play it on the CD player, and the code gets in there and starts to do its thing.

Peter: Yes, and it can happen during operation. Often the hacks are to do with crashing the system and then being ready, as the system comes back up, which is often a vulnerable time, to do something different: to dump in code that gives the hacker access.
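[Editor's note: to make the interface-misuse idea concrete, here is a minimal sketch in Python of the kind of defensive check a media parser needs. The chunk format, field sizes, and size limit are all invented for illustration; the point is simply that a length field arriving in an untrusted file must be validated before it is trusted.]

```python
import struct

MAX_CHUNK_SIZE = 1 << 20  # hypothetical 1 MiB sanity cap

def read_chunk(data: bytes, offset: int) -> tuple[bytes, int]:
    """Read one length-prefixed chunk from an untrusted media buffer.

    A naive parser trusts the 4-byte length field and copies that many
    bytes -- exactly the kind of flaw a crafted audio file exploits.
    Here, every field is checked before it is used.
    """
    if offset < 0 or offset + 4 > len(data):
        raise ValueError("truncated chunk header")
    (length,) = struct.unpack_from("<I", data, offset)
    if length > MAX_CHUNK_SIZE:
        raise ValueError(f"chunk length {length} exceeds limit")
    end = offset + 4 + length
    if end > len(data):
        raise ValueError("chunk body runs past end of file")
    return data[offset + 4 : end], end

# A malformed file claiming a 4 GiB chunk is rejected, not obeyed:
bad = struct.pack("<I", 0xFFFFFFFF) + b"oops"
try:
    read_chunk(bad, 0)
except ValueError as e:
    print("rejected:", e)
```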

So the key issue for the development team is how to design and test a system to close that vulnerability. Because, as we said, it's hard enough just to make the system work correctly. Think about the scale of the testing problem if you now test for incorrect use of an interface as well as correct use. The number of tests just explodes. You quickly approach the number of atoms in the universe if you try to cover every logical sequence, every timing alternative, every data format, every set of external conditions; all of those things need to be considered.
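[Editor's note: a back-of-the-envelope sketch of the explosion Peter describes, using hypothetical numbers. Even a tiny interface taking eight bytes of input has an input space far too large to enumerate, which is why testing for incorrect use usually means sampling the space, for instance by fuzzing, rather than covering it.]

```python
import random

# Hypothetical interface: a single call taking 8 bytes of input.
total_inputs = 256 ** 8
print(f"possible 8-byte inputs: {total_inputs:.3e}")  # about 1.8e19

# At a million tests per second, exhaustive testing would take:
years = total_inputs / 1_000_000 / (3600 * 24 * 365)
print(f"exhaustive run: roughly {years:,.0f} years")

# So in practice you sample the misuse space instead -- a crude fuzzer:
def target(buf: bytes) -> None:
    # Stand-in for the component under test; a real harness would call
    # the actual interface and watch for crashes, hangs, or bad state.
    if buf[:2] == b"\xde\xad":
        raise RuntimeError("simulated crash on a crafted input")

random.seed(1)
crashes = 0
for _ in range(100_000):
    buf = bytes(random.randrange(256) for _ in range(8))
    try:
        target(buf)
    except RuntimeError:
        crashes += 1
print("crashes found by random sampling:", crashes)
```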

So I suppose more thorough testing is part of the answer, but it's not the whole answer. And I think the bottom line is that right now there are industries, especially regulated ones, that have development procedures, standards, review processes, and sign-offs in place, and these do a good job of making sure everything gets checked by multiple people. But the industry has more to do.

Chris: Let's discuss these vulnerabilities. Here are these growing problems that we're going to be facing in multiple industries. How can manufacturers try to avoid some of them? What could change in their product development and design processes or methods to minimize these vulnerabilities?

One thing I thought was interesting in the article, and again, this article is dated, so it may not be entirely true today, is that the researchers found a number of vulnerable daemons and processes running that on a normal server would have been shut off. I was surprised, for example, that FTP was running, or telnet, things like that. And I wondered if there's a mindset in manufacturing that hasn't yet made the transition the PC world made. PCs used to be that way in the '90s, and people weren't too worried about threats and vulnerabilities. Then suddenly everything started getting compromised, and they went to town trying to protect those PCs.

Well, has that happened in the world of these interconnected devices yet? Is that something that might prevent some kind of major crisis from happening? What is the mindset of the manufacturers? Do you know?

Peter: I’ll try to characterize my view of that by talking about the vision. How should things be?

Chris: There you go.

Peter: And then I'll wind back to where we are now, and what an engineering manager or a member of a development team can actually do about it. I think the vision is an unbroken chain of security from the hardware right up through every level of the software. Now, the interesting thing is that this isn't a new concept. You can go back to the 1960s, when people built hardware architectures, called "capability computers" at the time, that haven't survived to the present day. In these systems, the hardware was involved in ensuring that every software module got only the minimum access and control that it needed.

Now, in those days, it was nothing to do with hackers; it was actually to protect the computer systems from mistakes in the software. The idea was that systems would crash in, what should we say, controlled and graceful ways. They were building checks into the hardware so that, if you made the hardware correctly, you could not write software that gained more privileges than you had allowed, because there was this chain from the hardware through each level of the software.

And as I say, these architectures didn't survive; they were interesting research pieces. There were some commercial examples. In fact, the people really investigating them were doing so for the telephone systems. These were the days of the first computers built to run telephone switching systems, and they needed this to provide the kind of security that telephone engineers had taken as an absolute given in their analog systems.

But they didn't survive. The architecture died because it added too much cost for the level of performance it could deliver; there was a significant overhead. But the hardware level remains important, and if you look at the electronics sector, they have done quite a lot of work, which started with finding robust ways to prevent chips from being cloned. And they have solutions for those things.

More recently, they've moved forward, so there are more and more chips designed and built with hardware checking circuits that monitor the functional circuits. So that platform, that base, is beginning to exist. But as you go up into the software, it's much, much harder. As I've said, the regulated industries have all these procedures and standards, and they pretty much guarantee that the right people look at the right documents and sign off that they believe everything is okay.

But that's not really the whole answer. It helps, it definitely helps: it means you've got more than one set of eyes, so one person can't make one of these mistakes alone; somebody else has to look at it, sign it off, and say it's okay. But for an individual development team, there's no complete answer. There are things you can do, though. Number one: make sure you keep security on the agenda. Don't assume it's just going to be there.

The next thing, I think, is to put together some sort of plan that asks what happens when an interface you are providing, and possibly using, is used incorrectly. Exactly what this means will vary from project to project, but it covers everything from buffer overflow conditions, which got so much publicity in the PC world, to timing errors, to sequence-of-call errors, to data format errors. And this is why it's so difficult: the variation is so enormous that I think you just have to set a goal for the development team. Say, "Okay, you're spending most of your time using these interfaces correctly, thinking about that and about performance and efficiency and all that stuff. Now take a small percentage of that time and think about using these things incorrectly."
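[Editor's note: one concrete way to act on the sequence-of-call point, sketched in Python around an invented device API. If a component enforces its own legal call order, misuse becomes an explicit, testable failure instead of silent corruption.]

```python
class FirmwareUpdater:
    """Hypothetical component whose interface must be used in order:
    begin() -> write() ... -> commit(). Instead of assuming callers
    behave, it enforces its own call sequence."""

    def __init__(self) -> None:
        self._state = "idle"
        self._chunks: list[bytes] = []

    def begin(self) -> None:
        if self._state != "idle":
            raise RuntimeError(f"begin() called in state {self._state!r}")
        self._state = "open"

    def write(self, chunk: bytes) -> None:
        if self._state != "open":
            raise RuntimeError(f"write() called in state {self._state!r}")
        self._chunks.append(chunk)

    def commit(self) -> None:
        if self._state != "open":
            raise RuntimeError(f"commit() called in state {self._state!r}")
        self._state = "idle"
        self._chunks.clear()

# A negative test: misuse must fail loudly, not corrupt state.
u = FirmwareUpdater()
try:
    u.write(b"\x00")  # write() before begin() -- a sequence-of-call error
except RuntimeError as e:
    print("caught misuse:", e)
```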

And then, take the procedures that you've got in a regulated industry, as they are prescribed, and adapt them to gather feedback and suggestions, not only on functionality but also on security. So people put their black hats on for a while and think, "Well, if I were trying to break this, what would I do? Do I know a way to do it?"

Now, it's a problem, because magicians always say that scientists and engineers are the easiest audience to perform for, because they are so trusting. If you are a scientist or an engineer, you have to put that very desirable part of your character to one side and be deeply cynical and suspicious in order to do this.

But if you're a manager, you can help with that by getting your team learning about the issues and methods, which means: what sorts of things will hackers try? And at the other end of the scale, you can look at some of the techniques that are just starting to work in this area, such as formal proof methods. They have existed for years, but they can only cope with very small code fragments: actually proving that a piece of code, when it executes, can only do what its specification says it does. It works for just tiny fragments of code.
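[Editor's note: for readers who haven't met formal proof methods, here is a toy example written in Lean 4, entirely of our own invention. The code fragment and its "specification" are deliberately trivial, which is exactly Peter's point: machine-checked proofs of this kind are practical today only for small pieces of code.]

```lean
-- A tiny code fragment: saturating addition that must never exceed cap.
def satAdd (cap a b : Nat) : Nat :=
  if a + b ≤ cap then a + b else cap

-- The machine-checked proof that the code can only do what its
-- specification says: the result is always bounded by cap.
theorem satAdd_le_cap (cap a b : Nat) : satAdd cap a b ≤ cap := by
  unfold satAdd
  split
  · assumption            -- branch where a + b ≤ cap already holds
  · exact Nat.le_refl cap -- branch that returns cap itself
```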

But if you're an engineering manager, you can start that conversation, maybe start it as a learning process, and build it in by saying, "Look, we're trying to gather feedback when we do these reviews." You're not going to guarantee security. At the moment, as I understand it, and I don't put myself in the expert category here at all, the experts still say, "If you want to guarantee security, don't connect the system." And that's just not good enough. It's simply not a workable answer.

Chris: That's just not an option anymore. But it's interesting; these suggestions you're providing are, I think, very helpful, especially to any manufacturers in the audience who are listening. It actually brings to mind something I noted about a company in the medical device industry, which I think has nowhere near examined this as closely as, say, automotive or aerospace. Here you have devices that are obviously connected, interconnected, and some of them are really based on a Windows operating system environment.

And I noticed, and this is a couple of years back, that these companies are of course regulated by the FDA, but there really were no specific security requirements placed on them; there are general security requirements, nothing specific. This particular company was trying to sell something to the U.S. military and, as such, was put under a completely different set of security requirements. One of them is called a DIACAP examination or DIACAP analysis; it's a DOD-sponsored analysis, and it requires that a certain level of security be met in the operating system of the device. So a whole bunch of tests were run, and a bunch of things were effectively turned off in the operating system to make sure that at least that level of compromise was eliminated.
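[Editor's note: part of the lockdown audit Chris describes can be automated. Below is a minimal, hypothetical sketch in Python that probes a device for legacy plaintext services, such as the telnet and FTP daemons mentioned earlier in the interview, that a hardening pass would be expected to disable. The port numbers are the standard ones; the device address is a placeholder.]

```python
import socket

# Legacy plaintext services a hardening pass should normally disable.
RISKY_PORTS = {21: "ftp", 23: "telnet", 512: "rexec", 513: "rlogin", 514: "rsh"}

def audit_host(host: str, timeout: float = 0.5) -> list[str]:
    """Return the names of risky services accepting TCP connections."""
    open_services = []
    for port, name in RISKY_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_services.append(name)
        except OSError:
            pass  # closed, filtered, or unreachable -- all fine here
    return open_services

# Example usage against a device on the bench (placeholder address):
found = audit_host("192.0.2.10")
print("risky services still listening:", found or "none")
```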

And I was really struck by the fact that that was a requirement, for obvious reasons, in order to sell to one specific government customer, but as far as I know it's not something required of these industries and manufacturers in general. Maybe it exists in automotive and I'm unaware of it. Not to be the one proposing more government regulation, but I wonder whether some basic security requirements like that need to be put out there for manufacturers in this world of increasing interconnectivity. Or will manufacturers rise to the occasion themselves, self-regulate, and start to introduce more of those security requirements? Anyway, your thoughts on that?

Peter: It's an interesting question. Yes, there's no doubt there are a number of standards, and if you look at the military, they've probably got a good grip on what you have to do to make a leak-proof system. But I would guess, and I must admit I haven't talked to military developers for quite a while, that even they would say, "Actually, we don't think we've got all of the answers yet."

Chris: I would imagine that is the case, actually, yes.

Peter: And you mentioned medical devices. I did a research interview with a software developer for medical devices not so long ago. They made the point that they developed the control system for the embedded equipment they were working on one way, but for the final versions, for deployment, they moved, curiously, to a Windows-based environment. That was because of the specifications they had for access control to the equipment, and because the hospital group they were working with used a Windows base, I think for the servers the equipment logged into to register itself on the network, that sort of thing.

And yes, they were asking some of the same questions: "What can we do here to be sure that we're not creating a dangerous environment?" As potential users of the equipment, or as the people on its receiving end, we all want them to be asking those questions.

But I think, ultimately, the thing that's going to make this happen is the fact that there are commercial consequences. And the parallel I see is, strangely, the history of genetic modification technology in Europe. GM technology does some fantastic things. But the truth is that here in Europe, going back maybe five years, after an often enthusiastic initial response, the market situation changed quite rapidly. And the reason was that the sources of the technology appeared to be denying rather than handling the perceived risks.

The result is that here in Europe, it's still true that the regulatory environment for GM is, as the providers of GM technology would put it, a challenging one. Even this year, and it's not law at all, just a politician making a policy intention statement, a minister in Scotland announced that he wanted to make Scotland a GM-free country, because he believed there was market demand for that from both crop growers and consumers. Of course, there has been a huge backlash to this, but the complicated regulatory environment remains.

And I think that's going to be exactly the situation with embedded system security. Everyone loves connected systems doing new things, things that encourage us all to think, "Yes, this growth path, this ability of the world to design better and better technology that allows us to do more, saves us time, and raises the standard of living around the world, is continuing."

But if producers don't take the issue seriously, it's going to be a problem. So I would say: if you're a developer, or if you're handling marketing strategy for these products, keep security on the agenda. Keep pounding out the message that security is a priority, and that it's really not too early to be putting serious effort into solving the problems. Because it won't take too many articles about hacked products, especially consumer products, before people put two and two together and say, "Hold on a moment. I remember Stuxnet from however many years ago that was. I don't want my machines, my stuff, wrecked by viruses from the other side of the world." One of the reports on the car hacking ran a picture of a car in a ditch, probably with a bit of poetic or journalistic license to dramatize it.

Chris: Certainly.

Peter: But it doesn't take too many of those pictures to create quite a backlash. So I don't have the answers, but it's not too early to be pounding out the message that security is a top priority. And if the technologists can stay focused and keep taking incremental steps to solve the problems, we'll get there one day.

Chris: That’s fantastic. Peter, thank you so much for joining us and for sharing your insights and your knowledge. We just really appreciate you taking the time.

Peter: Much appreciated. Great conversation as always, Chris.
