How can you know your computer is secure when you can hardly tell what goes on behind the screen? Operating systems and electronic devices are black boxes for most users and a security nightmare for anyone holding sensitive or volatile data. Compartmentalizing programs and data is a proven but complex way to control what goes in and out of your computer. Spectrum is a new operating system, supported by the Next Generation Internet initiative, that will make isolation of data and applications easier to use. Read more in our interview with free software developer Alyssa Ross.
Computer security for many users is a matter of trust: because my device or operating system is so complex, I have to put my faith in someone else, usually the manufacturing company, to protect it. Do you think users should have more agency over their security, and how should technology help empower them?
Alyssa Ross: Absolutely. Because everything is so complex, nobody can possibly hope to understand it all. I think the best thing we can do in this regard is make sure that users have multiple options to choose from. That way, if you distrust one component of your system, you can replace it with something else, even if you couldn’t possibly create a replacement yourself.
You bring up the manufacturing company: hardware security is probably the most critical area in trustworthy computing, because if you can’t trust the hardware, you also can’t trust any of the software it runs. It’s also one of the most difficult areas for us to make progress in, though, primarily because manufacturing hardware is just so expensive, and you need economies of scale to really be able to do it at all. So, whereas a sufficiently knowledgeable user can often write their own software rather than using somebody else’s, you can’t really do that with hardware, because once you’ve designed it you still have to produce it.
One important way forward here is Open Hardware, where hardware designers publish their designs so they can be reviewed by the community. It’s probably still infeasible for you, as a single person, to make that yourself instead of buying it from them, but if enough people wanted to get together, suddenly it’s possible to have multiple entities selling the same thing, and so you can choose who you trust most not to have inserted a backdoor or something. By putting work into the commons, everybody can benefit, and usually people will still buy from the original designer anyway. Setting up a new manufacturer would be a lot of effort, so people are only going to do it if there’s a really compelling reason.
There are exciting developments in this area — the design for the Librem 5 smartphone is fully open, and it uses entirely free software. The RISC-V and OpenPOWER projects are both processor designs that are completely open, as well. There aren’t many computers using either just yet, but things are moving in a promising direction.
On the software side, things are a little better. Almost any computer can get by running software that is free (as in freedom) almost exclusively. This means that the software can be studied and improved by the whole community. You still have to trust people, but now you can put your trust in a large community that’s trying to make good software rather than a small group of people trying to sell you something.
Your project aims to take a ‘step towards usable secure computing’. Who do you think needs user-friendly secure computing the most: which users are at risk but at the same time unable to adequately secure themselves?
Alyssa Ross: A lot of the people who are most at risk from poor computer security unfortunately have the least time to dedicate to learning how to do it better. I know from my own experience in activist groups that having good computer security is like flossing your teeth or going to the gym. It’s something that everybody knows they should be doing, but it’s always something to sort out in the future, because right now you’re busy with the latest issue and you don’t have time for that! I really like this article by somebody trying to preach computer security to political campaigners who faced the same issues. There’s been a lot of good work helping investigative journalists improve their computer security, too. I think that any environment like that, where people under a huge amount of pressure are working with very sensitive information, is where we most need better computer security.
Unique features of Spectrum are its intended transparency and modularity. Users can keep an overview of all their isolated applications in a global configuration file and change their setup with just a few lines of text, instead of maintaining compartments on their own and struggling to make them work together. How do you balance security and usability in Spectrum and do you consider this to be a trade-off in secure computing?
Alyssa Ross: I think that, with current systems, this is a trade-off. A lot of things we take for granted today only work because programs can communicate with each other on our behalf, but there’s nothing to stop them communicating maliciously, or even in some way that we just don’t intend. Generally, this communication can be done in a better way, with the user’s consent being required, but this is a problem we have to solve individually for every different communication method. Some, like copy/paste and file open and save dialogs, have been solved quite well already. But there are always more, and whenever somebody comes up with a new idea, it’s easier for them to implement it insecurely first and try to add security later than it is to design it with security in mind.
To use Spectrum, people are going to have to learn some new things about how to interact with their computer. There’s just no way right now to deliver a computing experience that is secure but exactly like what people are used to, while still allowing them to install and run whatever programs they like. But I want to make these new concepts as easy to understand as possible. It’s early days, but my hope is that once you understand the fundamental principle of Spectrum – that different tasks should happen in isolation from each other except where absolutely necessary – it should be easy to understand how to apply that and how to actually use the system. I’d love to have, for example, graphical tools for manipulating and explaining the system configuration, so you wouldn’t have to learn a computer language to interact with the system. But that’s quite a bit off. For now, we need to make sure we have a solid foundation for building that stuff in future.
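To make the idea of a single global configuration concrete, here is a purely illustrative sketch of what declaring two isolated environments in one file might look like. The syntax and every name in it are invented for this example; they are not Spectrum’s actual configuration language:

```
# Hypothetical sketch: one isolated environment per task,
# all declared in a single global configuration file.
environments = {
  banking = {
    applications = [ "firefox" ];
    network   = "vpn-only";   # traffic only through the VPN
    clipboard = false;        # no copy/paste with other environments
  };
  personal = {
    applications = [ "firefox" "thunderbird" ];
    network   = "default";
    clipboard = true;         # sharing allowed, with user consent
  };
};
```

The point of such a declarative setup is the one raised in the question above: instead of maintaining each compartment by hand, a user can see and change the whole arrangement in a few lines of text.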
I think that, to get to a point where secure computing is the norm, we will have to make some sacrifices along the way. I think people are willing to make small sacrifices, as long as it doesn’t interfere too much with what they’re used to. But that sacrifice doesn’t have to be a bad thing! Maybe necessity will lead us to find better ways of communicating with computers that are even nicer to use than what we have now, and are secure by design.