FOSS and security

This article examines the implications that open source software may have for security. The defining characteristic of open source is that a program's source code is made available to its users, who can examine, modify and redistribute it. For security, this has two immediate consequences: users are not limited to what a vendor provides, because they can inspect and alter the software themselves; and everyone else – including third-party vendors, researchers and attackers – can do the same. In many cases, the open source model also makes possible distinctive ways of producing and distributing the software – for example, development projects that outsiders can join and contribute to, software that is distributed by a number of vendors, and companies whose business involves participating in this activity. Some of these will also be addressed here, but the only characteristic common to every instance of open source is the free availability of its source code.

The question of what freely available source code implies for security is addressed from two perspectives. First, what one can do for oneself to improve the security of the software one uses: modifying the software, engaging with a development project, commissioning modifications, and so on. Second, what follows from other people being able to take part in these same activities, whether to one's benefit or otherwise.

It is beyond the scope of this article to ask whether open source or proprietary software tends to be more secure; in any case, the evidence needed to answer that question readily does not exist. Even if it could be answered in general, one would still have to evaluate individual cases when choosing software.

What one can do for oneself

If a security need is best addressed by modifying an existing piece of software at the source code level, then the choice is clear: only open source software makes this possible. An example is the SELinux sub-system for the Linux kernel, an access control system developed by the United States National Security Agency (NSA). This system, based on previous work implemented for other operating system kernels, was ported to Linux in order to introduce it to a wider audience. It has since become part of the mainstream version of the Linux kernel, and has also been ported to other open source operating system kernels. Note that this does not mean the NSA has 'guaranteed', or even audited, the security of any of these kernels in any wider sense. This option applies where there is a specific need, together with the resources and expertise to satisfy it.
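
As a small, concrete illustration of interacting with such a sub-system, the Python sketch below checks whether SELinux is present and enforcing on a machine. It is a minimal sketch assuming a Linux system with the standard selinuxfs interface mounted at /sys/fs/selinux; the function name is our own.

    from pathlib import Path

    # Standard selinuxfs location on modern Linux kernels; the path is
    # absent if the kernel was built without SELinux or the filesystem
    # is not mounted.
    ENFORCE_FILE = Path("/sys/fs/selinux/enforce")

    def selinux_status() -> str:
        """Return 'enforcing', 'permissive' or 'absent'."""
        if not ENFORCE_FILE.exists():
            return "absent"
        # The file holds a single flag: '1' = enforcing, '0' = permissive.
        return "enforcing" if ENFORCE_FILE.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print("SELinux:", selinux_status())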

Another way of using access to the source code is to check or audit the code of a piece of software, and perhaps fix the problems that are found. This is not a trivial exercise, especially since identifying security problems requires specialist expertise. Various methods can be used, one of which involves semi-automated code analysis tools that detect certain classes of error. Such tools usually flag errors in general, rather than security vulnerabilities specifically, and they assist human reviewers rather than replace them. An example of an organisation doing this is the United States Department of Homeland Security: its programme, in operation for the last couple of years, pays a company to apply its source code analysis software to selected open source projects. Representatives of a project may register to view the possible flaws identified in their project's code.
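
To make concrete the kind of error such tools look for, consider the Python sketch below. The scenario and function names are invented for illustration; analysis tools for Python, such as bandit, flag patterns like the first function as a class of risk.

    import subprocess

    def compress_log_unsafe(filename: str) -> None:
        # Typically flagged: building a shell command from unchecked
        # input permits command injection (e.g. filename = "x; rm -rf ~").
        subprocess.run("gzip " + filename, shell=True)

    def compress_log_safer(filename: str) -> None:
        # Passing an argument list avoids the shell, so the filename is
        # never interpreted as shell syntax.
        subprocess.run(["gzip", "--", filename], check=True)

Note that the tool points at a pattern, not a proven vulnerability; a human must still judge whether the risk is exploitable in context, which is why such tools assist reviewers rather than replace them.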

Anyone planning to review source code would need a very clear understanding of why they are doing it and what benefit they hope to gain. In most cases, code review is likely to be both irrelevant and beyond the reviewer's capabilities. For most users, it is more important to decide whether the software's developers give due regard to security, and whether the project has a good reputation. If so, the need for users to review code themselves is lessened; if not, it may not be worth undertaking the work oneself to compensate, even where it is feasible.

It is also possible to take on a specific security-related task – ongoing auditing or testing, say – that one feels is not being addressed properly by an open source vendor or project, where doing so requires access to the source code or to the project itself. Whether this is worthwhile depends very much on the circumstances, on the capabilities of the user, and perhaps on the project as well. Patching a specific vulnerability that the vendor or project is failing to address is rarely realistic: details of security vulnerabilities are usually not released publicly until a fix has been released, so a user might not have enough information to fix the issue unless they are the party that discovered it. There are also some important questions to ask. If the main project behind a piece of software continued to fall short in some way, would one be prepared to keep compensating for this, perhaps even becoming a part of the 'ecosystem' around that project that others then take for granted? And, if so, is there sufficient benefit in taking on work from which many others would then benefit, and for which responsibility could reasonably be seen as belonging just as much to any other member of the community, or to the community as a whole?

In general, when considering options that involve contributing to a project, whether code or some other effort, a party would benefit most if someone else did the work. It is worth doing it oneself when the direct benefits – here, specific security improvements – are important enough that one is not willing to wait for others, or when contributing to the overall health of the project is judged important enough to act 'public-spiritedly'. This mutual reliance is, of course, part of the general idea of open source; the point here is simply that security work is subject to the same logic.

Finally, the ability to modify the software makes it possible to continue using an old, unsupported version, patching it oneself to correct security issues as they are discovered. Again, this requires time and expertise, and can raise further problems as the old version falls further and further behind the software it relies on. One's own efforts may also not be enough to compensate once the software no longer receives the same degree of scrutiny from its main developers and community. This is a real possibility, and may be important in some cases.

Benefiting from others' activities

The caveat mentioned earlier – that many characteristics commonly attributed to open source are not necessarily present in every case – applies most strongly to this section. Nothing described here is a necessary part of open source; each practice is either common enough, or asserted often enough as a benefit, to be worth discussing.

We have already established that open source software can be redistributed by anyone, and that anyone may modify it and possibly become part of its development community. This means that a number of different parties may offer security-related services around the same software. Linux distributions are an example: different 'distributions' are put together by third parties, who then assume responsibility for the default configuration of the system and the provision of security updates. One may therefore be able to choose between different approaches, and possibly different levels of competence, with regard to security. This is not entirely unique to open source – a variety of suppliers may offer services based on proprietary systems and software as well – but such suppliers are more limited in what they can do than a party working within open source.

While users of open source software may well be choosing between many 'vendors', it should be stressed that there is often no option but to do so: this is simply how the distribution of open source software operates. Where there is choice, one can lose as well as gain. If a single open source vendor offers an option that others do not, then choosing another vendor means losing that option. On the other hand, open source vendors tend to make their work available to the community, and that work often constitutes much of the work done on the projects themselves; in that case nothing is lost, but there is also little benefit in the choice of vendors if every vendor's customers gain equally. Because each vendor ultimately relies on the core projects and the wider community for the software it distributes, no single vendor's offering tends to differ much from any other's, and the choice is sometimes narrower than it seems. Bearing all this in mind, for security it may make more sense simply to compare what individual pieces of software and systems offer, and decide case by case, rather than to work out in principle whether a single- or multi-vendor approach is likely to offer more benefits.

Another scenario is that the community takes on a security task that is not being fulfilled adequately by an open source vendor or project. In the open source world, community and project are not clearly distinct, so the community around a project does a certain amount of such work as a matter of course. The situation under consideration is therefore of a very specific type, where a gap opens up between the main development effort and the wider community – for example, where a known security vulnerability in a piece of software remains unpatched by the core project, and a third party steps in. In fact, in such a case it would even be possible for members of the community to establish an entirely separate branch of the project if there were real dissatisfaction with the original maintainers. With proprietary software, of course, only the vendor can provide a permanent fix, and ownership of the product is strictly controlled. The question, then, is whether one would choose open source in anticipation of such a situation arising, especially as one cannot predict what the outcome would be.

Finally, and most substantively for security, the ability to freely redistribute open source software means that a vendor can offer a comprehensive service for security updates covering all of the software it distributes. Linux distributions, for example, provide a set of software with each release, for which they undertake to provide updates (including security updates) for an agreed period. Because all software and patches come through the same vendor, every piece of software on a computer can be updated through a single service, reducing the burden on the user or administrator; again, this is how most Linux distributions work. While it might be possible in theory for a proprietary operating system vendor to do the same, it would require either negotiating agreements with the producers of every piece of proprietary software for which they wanted to distribute patches, or allowing third-party software to extend the central updating mechanism. Open source vendors need do neither.
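
As a rough sketch of what this single update channel looks like from a user's side, the Python fragment below asks a Debian-style package manager to simulate an upgrade and lists every package, whatever its origin, that would be updated. It assumes apt-get is available; other distributions offer equivalent tools, and the fragment changes nothing on the system.

    import subprocess

    def pending_upgrades() -> list[str]:
        # '-s' puts apt-get in simulation mode: nothing is downloaded,
        # installed or changed.
        result = subprocess.run(
            ["apt-get", "-s", "upgrade"],
            capture_output=True, text=True, check=True,
        )
        # Each package that would be upgraded appears on an 'Inst' line;
        # the second field is the package name.
        return [line.split()[1]
                for line in result.stdout.splitlines()
                if line.startswith("Inst")]

    if __name__ == "__main__":
        for name in pending_upgrades():
            print(name)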

Are there any clear security risks in open source?

Returning to the central question of what freely available source code implies for security: overall, there is no significant evidence that open source poses additional risks beyond those normally associated with computer systems. There are, in fact, many pieces of open source software that are regarded as among the best in their fields and are widely used in demanding situations.

However, one argument remains somewhat compelling. Open source projects without strong commercial backing may be less likely to have a sufficient understanding of security issues, and may lack procedures for addressing them, whereas developers in a well-resourced company are more likely to have the resources and expertise to do so. Of course, one may also encounter proprietary developers who are inadequate in this regard. The only useful response is to assess, in each instance, whether the procedures put in place by the developers, proprietary or open source, are of an adequate standard – or else be willing to compensate for them, and bear the costs and possible problems that this may entail.

Conclusion

In summary, this article has not attempted to show whether open source or proprietary software is more secure in general; its position is that the security implications of open source flow from its one universal characteristic, the free availability of source code, and from the practices that often, though not always, accompany it. The greatest and clearest benefits arise where a user is willing to do some of the work themselves to fulfil a specific need, knows what benefit they will gain from that work, and understands how it fits into the work of the main project and the wider community. The case for choosing open source in the hope that the community will necessarily produce clear advantages over proprietary software is less convincing, at least when made in general. Equally, there is no clear evidence that would lead one to reject open source software in principle. The field is therefore open to choose the best software, open source or not, on a case-by-case basis.
