
Friday, January 6, 2012

Open Source Software Licences: A Tutorial

With the current legal framework, the licence under which a program is distributed defines exactly the rights its users have over it. For instance, in most proprietary programs the licence withdraws the rights of copying, modification, lending, renting, use on several machines, etc. In fact, licences usually specify that the proprietor of the program is the company which publishes it, and that it merely sells restricted rights to use it. In the world of open source software, the licence under which a program is distributed is also of paramount importance. Usually, the conditions specified in open source licences are the result of a compromise between several goals which are in some sense contradictory. Among them, the following can be cited:



• Guarantee some basic freedoms (redistribution, modification, use) to the users.
• Ensure some conditions imposed by the authors (citation of the author in derived works, for instance).
• Guarantee that derived works are also open source software.

Authors can choose to protect their software with different licences according to the degree to which they want to fulfil these goals, and the details which they want to ensure. In fact, authors can (if they desire) distribute their software under different licences through different channels (and at different prices). Therefore, the author of a program usually chooses the licence under which it will be distributed very carefully. And users, especially those who redistribute or modify the software, have to study its licence carefully.

Fortunately, although each author could use a different licence for her programs, the fact is that almost all open source software uses one of the common licences (GPL, LGPL, Artistic, BSD-like, MPL, etc.), sometimes with slight variations. To simplify things even more, some organizations have appeared which define the characteristics a software licence should have to qualify as an open source software licence. Amongst them, the two most widely known are the Debian Project, which defines the Debian Free Software Guidelines (DFSG, see appendix A.1), and the Open Source Initiative (OSI), whose definition of “open source” licences is based on the DFSG. The GNU Project also provides its own definition of free software.

It is easy to see from the DFSG that neither price nor the mere availability of source code is enough to characterize a product as “open source software”. The significant point lies in the rights given to the community to freely modify and redistribute the code, or modifications of it, with the sole restriction that these rights must be given to all and must be non-revocable. The differences between open source licences usually lie in the importance that the author gives to the following issues:

Protection of openness. Some licences insist that any redistributor maintain the same licence, so that a recipient’s rights are the same whether the software is received directly from the author or from any intermediary party.

Protection of moral rights. In many countries, legislation protects some moral rights, like acknowledgement of authorship. Some licences also provide protection for these matters, making them immune to changes in local legislation.

Protection of some proprietary rights. In some cases, the “first author” (the party that originally wrote the piece of software) has some additional rights, which in some sense are a kind of “proprietary” rights.

Compatibility with proprietary licences. Some licences are designed so that they are completely incompatible with proprietary software. For instance, it can be forbidden to redistribute any software which is a result of a mix of software covered by the licence with any kind of proprietary software.

Compatibility with other open source licences. Some open source licences are not compatible with each other, because the conditions of one cannot be fulfilled if the conditions imposed by the other are satisfied. In this case, it is usually impossible to mix software covered by those licences in the same piece of software.

BSD (Berkeley Software Distribution). The BSD licence covers, among other software, the BSD (Berkeley Software Distribution) releases. It is a good example of a “permissive” licence, which imposes almost no conditions on what a user can do with the software, including charging clients for binary distributions, with no obligation to include source code. In summary, redistributors can do almost anything with the software, including using it for proprietary products. The authors only want their work to be recognized. In some sense, this restriction ensures a certain amount of “free marketing” (in the sense that it does not cost money). It is important to notice that this kind of licence does not include any restriction oriented towards guaranteeing that derived works remain open source. This licence is included verbatim in appendix A.2.

GPL (GNU General Public License). This is the licence under which the software of the GNU project is distributed. However, today we can find a great deal of software unrelated to the GNU project, but nevertheless distributed under the GPL (a notable example is the Linux kernel). The GPL was carefully designed to promote the production of more free software, and because of that it explicitly forbids some actions on the software which could lead to the integration of GPLed software into proprietary programs. The GPL is based on the international legislation on copyright, which ensures its enforceability. The main characteristics of the GPL are the following:

• it allows binary redistribution, but only if source code availability is also guaranteed;
• it allows source redistribution (and enforces it in the case of binary distribution);
• it allows modification without restriction (provided the derived work is also covered by the GPL);
• complete integration with other software is possible only if that other software is also covered by the GPL.

This last condition does not apply to the LGPL (GNU Lesser General Public License), also used in the GNU project, which allows integration with almost any kind of software, including proprietary software. The GPL is included verbatim in appendix A.4. More details about the reasons and implications of the GPL are available in [23].
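The combination rules described above can be sketched as a toy lookup table (an illustration only, not legal advice; the licence names and rules are deliberately simplified and the function is invented for this sketch):

```python
# Toy model of the licence-combination rules discussed above.
# Simplified for illustration; real compatibility analysis is far subtler.

# (component licence, licence of the combined work) -> allowed?
COMPATIBLE = {
    ("BSD", "BSD"): True,
    ("BSD", "GPL"): True,           # permissive code may be absorbed into a GPL work
    ("BSD", "Proprietary"): True,   # permissive code may be used in proprietary products
    ("GPL", "GPL"): True,
    ("GPL", "Proprietary"): False,  # the GPL forbids integration into proprietary programs
    ("LGPL", "Proprietary"): True,  # the LGPL allows linking from almost any software
}

def can_combine(component_licence: str, work_licence: str) -> bool:
    """Return True if a component under component_licence may be
    incorporated into a work distributed under work_licence.
    Unknown pairs default to False (i.e. assume incompatibility)."""
    return COMPATIBLE.get((component_licence, work_licence), False)

print(can_combine("BSD", "Proprietary"))  # True
print(can_combine("GPL", "Proprietary"))  # False
```

Real compatibility analysis involves licence versions, linking modes and special exceptions, which is why mixing differently licensed software in one program needs careful study.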

MPL (Mozilla Public License). This is the licence written by Netscape to distribute the code of Mozilla, the open source version of its Navigator web browser. It is in many respects similar to the GPL, but perhaps more “enterprise oriented”.

Other well-known licences are the Qt licence (written by Troll-Tech, the authors of the Qt library), the Artistic licence (one of the licences under which Perl is distributed), and the X Consortium licence (see appendix A.3).

What Is Open Source Software?

It is not easy to define the term ‘open source software’ in a few words, due to the many categories and variants that exist. But it is not too complicated either, since the idea in itself is simple. Therefore, before using stricter definitions, let us devote a moment to explaining, in a relatively informal way, what we understand as open source software.

General idea of open source software
When we talk, in English, about free software, there is a dangerous ambiguity, because ‘free’ means both ‘having freedom’ and ‘gratis’. Therefore, in this article, we will mainly use the term ‘open source’ when referring to users’ freedom of use, redistribution, etc., and ‘gratis software’ when referring to zero acquisition cost. The Spanish and French word ‘libre’, by the way, has been adopted in many environments to refer to open source software, but will not be used here for the sake of uniformity. Anyway, before going into more detail, it is a good idea to state clearly that open source software does not have to be gratis. Even more, it usually is not, or at least not completely. The main feature that characterizes free (open source) software is the freedom that users have to:

• Use the software as they wish, for whatever they wish, on as many computers as they wish, in any technically appropriate situation.

• Have the software at their disposal to fit it to their needs. Of course, this includes improving it, fixing its bugs, augmenting its functionality, and studying its operation.

• Redistribute the software to other users, who could themselves use it according to their own needs. This redistribution can be done for free, or at a charge not fixed beforehand.


It is important now to make clear that we are talking about freedom, and not obligation. That is, users of an open source program can modify it if they feel it is appropriate, but in any case they are not forced to do so. In the same way, they can redistribute it, but in general they are not forced to. To satisfy the previous conditions, there is a fourth one which is basic, and necessarily derives from them:

- Users of a piece of software must have access to its source code.

The source code of a program, usually written in a high level programming language, is absolutely necessary to be able to understand its functionality, to modify it and to improve it. If programmers have access to the source code of a program, they can study it, learn all its details, and work with it as the original author would. Paradoxically, if this freedom is to be guaranteed for a given piece of software, under current legislation it is necessary to “protect” it with a licence which imposes certain restrictions on the way it can be used and distributed (as will be shown later).
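As a toy illustration of why source access matters (the function and its bug are invented for this sketch): with the source at hand, any user can read the code, spot a defect, and fix it locally, instead of waiting for the original author to act.

```python
# A deliberately buggy function, as it might appear in a program whose
# source code is available (function and bug invented for this sketch).
def average(values):
    return sum(values) / len(values)   # crashes on an empty list

# Because the source is open, a user can inspect the code, spot the
# failure case, and apply their own fix:
def average_fixed(values):
    if not values:
        return 0.0                     # the user's chosen behaviour for the edge case
    return sum(values) / len(values)

print(average_fixed([]))         # 0.0 instead of ZeroDivisionError
print(average_fixed([2, 4, 6]))  # 4.0
```

With only a binary, the same user could observe the crash but would have no practical way to understand or repair it.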

This fact causes some controversy in certain circles, because it is considered that these licences make the software distributed under them “less free”. Another view, more pragmatic, is that software will be made more free by guaranteeing the perpetuation of these freedoms for all its users. Because of that, people holding this view maintain that it is necessary to limit the ways of use and distribution. Depending on the ideas and goals of the authors of a piece of code, they can decide to protect it with several different licences.

Open Source (Free) Software: A History

Although all the stories related to software are obviously short, that of open source software is one of the longest amongst them. In fact, it could be said that in the beginning there was only free (libre) software. Later on, proprietary software was born, and it quickly dominated the software landscape, to the point that it is today considered the only possible model by many (otherwise knowledgeable) people. Only recently has the software industry considered free software as an option again.

When IBM and others sold the first large-scale commercial computers, in the 1960s, they came with some software which was free (libre), in the sense that it could be freely shared among users, it came with source code, and it could be improved and modified. In the late 1960s, the situation changed after the “unbundling” of IBM software, and by the mid-1970s it was usual to find proprietary software, in the sense that users were not allowed to redistribute it, that source code was not available, and that users could not modify the programs. In the late 1970s and early 1980s, two different groups were establishing the roots of the current open source software movement:

* On the US East coast, Richard Stallman, formerly a programmer at the MIT AI Lab, resigned from his post and launched the GNU Project and the Free Software Foundation. The ultimate goal of the GNU Project was to build a free operating system, and Richard started by coding some programming tools (a compiler, an editor, etc.). As a legal tool, the GNU General Public License (GPL) was designed not only to ensure that the software produced by GNU would remain free, but also to promote the production of more and more free software. On the philosophical side, Richard Stallman also wrote the GNU Manifesto, stating that availability of source code and freedom to redistribute and modify software are fundamental rights.

* On the US West coast, the Computer Science Research Group (CSRG) of the University of California at Berkeley was improving the Unix system, and building lots of applications which quickly became “BSD Unix”. These efforts were funded mainly by DARPA contracts, and a dense network of Unix hackers around the world helped to debug, maintain and improve the system. For a long time that software was not redistributed outside the community of holders of an AT&T Unix licence. But in the late 1980s, it was finally distributed under the “BSD licence”, one of the first open source licences. Unfortunately, at that time every user of BSD Unix also needed an AT&T Unix licence, since some parts of the kernel and several important utilities, which were needed for a usable system, were still proprietary.





Another remarkable open source project of that time is TeX (a typesetting system, by Donald Knuth), which formed a strong community around it which still exists today. During the 1980s and early 1990s, open source software continued its development, initially in several relatively isolated groups. USENET and the Internet helped to coordinate transnational efforts, and to build up strong user communities.

Slowly, much of the software already developed was integrated, merging the work of many of these groups. As a result of this integration, complete environments could be built on top of Unix using open source software. In many cases, sysadmins even replaced the standard tools with GNU programs. At that time, many applications were already the best ones in their field (Unix utilities, compilers, etc.). Especially interesting is the case of the X Window System, which was one of the first cases of open source software funded by a consortium of companies.

During 1991-1992, the whole landscape of open source software, and of software development in general, was ready to change. Two very exciting events were taking place, although in different communities:

* In California, Bill Jolitz was implementing the missing portions to complete the Net/2 distribution, until it was ready to run on i386-class machines. Net/2 was the result of the effort of the CSRG to make an unencumbered version of BSD Unix (free of AT&T copyrighted code). Bill called his work 386BSD, and it quickly became appreciated within the BSD and Unix communities. It included not only a kernel, but also many utilities, making a complete operating system. The work was covered by the BSD licence, which also made it a completely free software platform. It also included free software under other licenses (like for instance the GNU compiler).

* In Finland, Linus Torvalds, a student of computer science, unhappy with Tanenbaum’s Minix, was implementing the first versions of the Linux kernel. Soon, many people were collaborating to make that kernel more and more usable, and adding many utilities to complete GNU/Linux, a real operating system. The Linux kernel, and the GNU applications used on top of it, are covered by the GPL.

In 1993, both GNU/Linux and 386BSD were reasonably stable platforms. Since then, 386BSD has evolved into a family of BSD-based operating systems (NetBSD, FreeBSD, and OpenBSD), while the Linux kernel is evolving healthily and being used in many GNU/Linux distributions (Slackware, Debian, Red Hat, SuSE, Mandrake, and many more). During the 1990s, many open source projects have produced a good quantity of useful (and usually high-quality) software. Some of them (chosen with no special reason in mind) are Apache (widely used as a WWW server), Perl (an interpreted language with lots of libraries), XFree86 (the most widely used X11 implementation for PC-based machines), GNOME and KDE (both providing a consistent set of libraries and applications to present the casual user with an easy to use and friendly desktop environment), Mozilla (the free software project funded by Netscape to build a WWW browser), etc.

Of all these projects, GNOME and KDE are especially important, because they address usability by non-technical people. Their results are already visible and of good quality, finally allowing everybody to benefit from open source software. The software being produced by these projects dispels the common myth that open source software is mainly focused on server and developer-oriented systems. In fact, both projects are currently producing lots of desktop personal productivity applications.

The late 1990s are very exciting times with respect to open source software. Open source systems based on GNU/Linux or *BSD are gaining public acceptance, and have become a real alternative to proprietary systems, competing head to head with the market leaders (like Windows NT in servers). In many niches the best choice is already open source (an outstanding case is Apache as a Web server, with a market share consistently over 50%). The announcement of the liberation of the Netscape Communicator source code, in 1998, was the starting point of a rush by many big companies to understand open source software.

Apple, Corel and IBM, for instance, are trying different approaches to use, promotion or development of open source software. Many companies of all sizes (from the small startup composed of a couple of programmers to the recently public Red Hat) are exploring new economic models to succeed in the competitive software market. The media has also started to give attention to the formerly marginal open source software movement, which is now composed not only of individuals and non-profit organizations, but also of small and medium companies.

Looking at all these events, the following question easily arises: are we at the beginning of a new model of the software industry? This is a difficult question to answer, since there is no known and reliable method for looking into the future. However, through this document, we hope to provide readers with some information which they can use to reach their own answer.

Wednesday, January 4, 2012

Open Source Document Management Systems and Document Formats

Open standards

A document’s ‘format’ is the structure used to store it and the data it contains. Historically, the formats used for proprietary systems have often been ‘closed’, so documents created using one piece of proprietary software could not be recognised by another. This made it costly and time consuming to switch to another software product and often resulted in a ‘lock-in’ to one product. However, there is now a trend towards introducing open standards for document formats that can be used by all software developers. Open standards offer a guarantee that the data will be accessible in the future. Industry is taking measures to increase both document interoperability and digital rights management (DRM) interoperability (see below). Advocates of OSS argue that, by making source-code available with the software, the risk of lock-in is avoided because document formats are transparent.
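A small sketch of why open, documented formats avoid lock-in (the element names and content below are invented): a format built on an open standard such as XML can be read back by any program with a standard parser, with no dependence on the vendor that wrote the file.

```python
import xml.etree.ElementTree as ET

# A minimal XML document standing in for an openly specified format
# (the element names are invented for this sketch).
doc = """<document>
  <title>Quarterly report</title>
  <paragraph>First paragraph.</paragraph>
  <paragraph>Second paragraph.</paragraph>
</document>"""

root = ET.fromstring(doc)

# Any program with an XML parser can extract the content; no single
# vendor's software is required to read the data back.
title = root.findtext("title")
paragraphs = [p.text for p in root.findall("paragraph")]

print(title)       # Quarterly report
print(paragraphs)  # ['First paragraph.', 'Second paragraph.']
```

A closed binary format, by contrast, can only be read reliably by the software that produced it, which is exactly the lock-in described above.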

Digital rights management (DRM) technologies
Concerns over the illegal copying and distribution of digital information (music, video, etc.) have led companies to introduce a range of DRM measures. They allow content vendors to control electronic material and restrict its use. Examples include encryption methods used to prevent DVDs from being copied, or to prevent unauthorised access to data in a database. Such systems prevent infringement of IP. However, there are concerns that DRM technologies can act as another layer of proprietary lock-in. Attempting to break or counter a DRM technology is now a criminal offence under the EC Copyright Directive. Critics argue that this could prevent users from extracting data from one system (even if they own it) in order to transfer it to another, especially if this involves bypassing a DRM technology.



Open source culture
The principle of open source can be applied to a variety of other applications as well as software development. Some commentators believe that several sectors of government and industry could benefit from the open source approach (Box 4). The ideas behind it are spreading into pharmaceutical drug production; music; book and journal publishing; television broadcasting and many other cultural areas. The BBC is planning to make some material available in a ‘Creative Archive’ for viewing, copying and reuse but with some rights reserved, such as commercial exploitation.

Key Points
• Acceptance of open source software is increasing in both the public and private sector. The Office of Government Commerce report states that it is a viable and credible alternative to proprietary software for infrastructure and for most desktop users.

• The government’s OSS policy promotes a ‘level playing field’ in which OSS solutions should be
considered alongside proprietary ones in IT procurements.

• It is increasingly acknowledged that there is a role for both open source and proprietary approaches and that a combination of both approaches stimulates creativity and innovation.

Box 4. Open Source and Transparency

Some researchers and think tanks, such as Demos, believe that open source can contribute to a more vibrant democratic culture. Just as laws can be scrutinised by the general public, the ability to see the ‘code’ would mean that governmental processes could be laid open for inspection.

Examples include:
• Tax and benefits: under the Open Government Code and the Freedom of Information Act, the general public may have the right to know how a particular tax or benefit has been calculated. Open source may help achieve this, as having access to the source-code allows calculations to be read and checked;

• E-voting: with the transition to e-voting, political parties or the public might wish to inspect any software used in the process to counter electoral fraud or vote-rigging. Some say that OSS is one possible way of doing this because the source-code is freely available to anyone wishing to scrutinise it.

• Public participation in Parliament: some innovative projects have been developed using OSS. Examples include the websites ‘They Work For You’, which presents Hansard debates and Written Answers, and ‘The Public Whip’, which details MPs’ voting records. These sites search the contents of Hansard and present it in an easy-to-read format for the public.
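The tax-and-benefits example above can be made concrete with a sketch (the allowance, bands and rates below are invented for illustration and are not real tax rules): when the calculation is published as source code, anyone can read exactly how a figure was arrived at and check it by hand.

```python
# Invented, simplified tax scheme for illustration only; real rules differ.
PERSONAL_ALLOWANCE = 10_000
BASIC_RATE = 0.20
BASIC_BAND = 30_000
HIGHER_RATE = 0.40

def tax_due(income: float) -> float:
    """Compute tax under the invented two-band scheme above.

    Because the source is open, the calculation can be read line by
    line and verified against the published rules."""
    taxable = max(0.0, income - PERSONAL_ALLOWANCE)
    basic = min(taxable, BASIC_BAND) * BASIC_RATE
    higher = max(0.0, taxable - BASIC_BAND) * HIGHER_RATE
    return basic + higher

print(tax_due(8_000))   # 0.0  (below the allowance)
print(tax_due(25_000))  # 3000.0
print(tax_due(50_000))  # 10000.0
```

With a closed implementation, a citizen could see only the final figure, not the steps that produced it.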

Use of Open Source Software

The private sector

There is increasing awareness and uptake of OSS within the private sector, with OSS and proprietary software becoming increasingly interwoven. Major corporations such as IBM believe it enables them to make use of a worldwide community of developers to improve their products and services. Some industry commentators suggest that OSS will lead to a more competitive software industry. Currently over 67% of web servers run open source software called Apache. The majority of websites and email systems run on OSS. Worldwide, around 30% of infrastructural computers run GNU/Linux, an open source operating system. However, use of OSS on the desktop is more limited: over 96% of desktop computers still use Microsoft Windows. OSS has inspired new portable device projects, such as the ‘Simputer’. This is a small, inexpensive, handheld computer, intended to bring computing power to India and other emerging economies.


Open source software in government
Governments’ interest in OSS is increasing, due to their reliance on sophisticated software. The UK Office of Government Commerce released a series of case studies in October 2004 outlining how OSS has been used in the public sector (Box 3). However, UK parliamentary responses to questions on the use of OSS in government show that uptake is still limited. The Office of the Deputy Prime Minister is funding the ‘Open Source Academy’ project. This is intended to overcome barriers to the uptake of OSS in local government, such as lack of information, skills and confidence, and a lack of suitable products.

Policy on the use of OSS within government is outlined in the e-Government Unit’s updated policy document released in October 2004. Key points are:
• reaffirmation of the UK Government’s commitment to ‘procurement neutrality’: OSS solutions should be considered alongside proprietary ones in IT procurements;
• contracts will be awarded on a case-by-case basis, based on value for money. The UK Government will seek to avoid ‘lock-in’ to proprietary IT products. 




Research licensing
The updated government OSS policy now includes policy on the exploitation of software arising from government funded research projects. There is growing debate over whether such software should be released under open source licences. Government policy states that the ‘exploitation route’ for such software (that is, whether it is released commercially, within the research community, or as OSS) should be chosen to maximise returns on public investment. Decisions should be made at the discretion of the researchers and institutions involved. Some academics and open source groups have proposed dual-licensing as a means of getting the benefits of both a proprietary and an open source licence.

The UK Particle Physics Grid project is an example of a research project using OSS. Grid computing, seen as part of the next-generation Internet, is a massive new area of information systems development. The UK Particle Physics Grid project relies on internationally developed OSS, ‘Globus’ and ‘Condor’. By building on the work of existing international communities, the project has saved significant amounts of development work and money.

Other usage
Advocates of OSS argue that, in principle, the OSS model allows software to be developed for minority markets; that is, product development can be need-driven rather than market-driven. In practice, the distinction between the two models is not so clear-cut: for example, both GNU/Linux and Windows now have versions in a number of minority languages.

Open Source Software Legal issues

Copyright
Software is protected using the copyright system. Relying on the same protection as books, music or film, the buyer of software is licensed the use of a copy of the product. Proprietary software is normally distributed under an ‘all rights reserved’ licence where the rights to exploit the software are held by the copyright owner. Open source relies on copyright law to give legal backing to the licences under which it is released (see Open Source Software Part I).

Software patents

Whereas copyright protects software code from being copied, patents can be used to prevent the innovative solution or effects of the software from being copied (what it does and how it does it). Government grants the patent holder rights, in return for sharing the information on how the technical result was achieved. The extent to which software should be patentable is controversial. A key issue is whether the software has a ‘technical effect’ (for example controls the function of a robot arm) or is used for a ‘business process’ (no technical effect).

In the US, it is possible to patent software used for business processes. Amazon, for example, has patented the ‘1-click’ process, which gives a monopoly on ‘clicking once’ using a mouse to buy a product from a website. As all websites are built on the idea of clicking links, patent experts have argued that these broad ‘business process’ patents can be destructive by granting a monopoly on standard processes. This affects open source developers because, when writing a piece of software, they may not realise that the software technique is patented.

Currently, ‘business processes’ are not patentable in the EU. There is widespread debate over the ‘EU Computer Implemented Inventions Directive’, awaiting its second reading in the European Parliament. Under this directive, software will be patentable only if it has a technical effect. However, there are concerns that this may lead to widespread granting of patents, because it is hard to make the distinction between whether software is used for a business process or for a technical effect.

Developers and users of OSS, and some small and medium sized enterprises (SMEs), have voiced concerns over the potential negative impact of the directive on the competitiveness of the software industry. Proponents say software patenting is already possible in the EU; the directive will not allow patents in new areas. The UK Patent Office says the directive aims to ‘clarify the situation’ and to ‘prevent a drift towards the more liberal regime of the US’. Moreover, it is pointed out that business processes cannot be patented under the directive. Proponents (including some SMEs) also argue that patent protection is needed to encourage innovation and investment in research and development.

Box 3. Examples of government use of OSS
• Powys County Council, Wales: by replacing existing machines with GNU/Linux servers (a server is a computer that manages network resources), the number of servers has been dramatically reduced. This has led to cost savings on hardware, licensing and support.

• Ministry of Defence (MoD) Defence Academy: OSS was chosen on the basis of functionality (to meet requirements) rather than to reduce costs. However, its use has led to lower licensing costs, lower consultancy rates for developers and faster development times. The software used was security accredited by the MoD.

• Beaumont Hospital, Dublin, Ireland: the hospital has projected savings of €8 million as a result of using OSS. These were mainly due to an elimination of software licensing costs for an x-ray system and the ability to reuse hardware using GNU/Linux.

Open Source Software and OSS

Open Source Software Part II

Desirable software attributes

There is widespread debate over the relative merits of proprietary software and OSS. However, it is difficult to make general comparisons; most analysts say comparisons should be made only on a case-by-case basis. It is generally agreed that whether software is open source or proprietary, the following attributes are of key importance:

• reliability: defined as how long a system can stay in operation without user intervention;
• quality: commonly defined as the number of errors in a fixed number of lines of code;
• security: how resilient the software is to unauthorised actions (e.g. viruses);
• flexibility: how easily the software can be customised to meet specific needs and run on different types of device;
• project management: how well organised the development process is;
• open standards: documents created with one type of software being readable in another. This avoids ‘lock-in’ to a particular document format;
• switching costs: the cost of moving from one system to another;
• total cost of ownership (TCO): the full costs incurred over the lifetime of the software;
• user-friendliness: how easy the software is to use.



Advocates of OSS argue that, because it harnesses a large team of developers, bugs and errors can be rapidly spotted and fixed, thus increasing reliability and security. They also say that having a large team means that OSS is by necessity ‘modular’ (made up of discrete units, each with a specific function). Modularity simplifies software design and can increase the reliability as well as flexibility of software. Advocates also argue that, by making the source-code available with the software, there is no danger of ‘lock-in’ because document formats are transparent. However, critics point out that proprietary software can also have a high degree of reliability, flexibility and security and can also conform to open standards.



Many commentators argue that OSS projects can suffer from weak project management (because of their complex development structure) and that OSS can be difficult to use. The OSS community point out that new project management tools are being introduced and that efforts are being made to increase the ‘user-friendliness’ of OSS desktop applications. There are often concerns that OSS is unsupported, and contains unauthorised intellectual property (IP) belonging to third parties. However, the OSS community say this can also be the case with proprietary software. Moreover, large firms such as IBM and Hewlett-Packard now manage open source projects and indemnify users to give them added insurance.

There is broad acceptance that OSS and proprietary software are comparable in terms of software quality. It is acknowledged that switching costs can be high, whichever software model is used. There are conflicting reports on how total cost of ownership (TCO) varies for the two models. It is widely agreed that TCO should be evaluated only on a case-by-case basis. Many analysts believe that there is increasing symbiosis between the two models. For example, modularity is now seen as an important factor in the development of both proprietary and OSS. New project management tools are being used to manage both types of software projects.

OPEN SOURCE SOFTWARE

Open-source software Part I

Open source software (OSS) is computer software that has its underlying ‘source-code’ made available under a licence. This can allow developers and users to adapt and improve it. Policy on the use of OSS in government was updated in 2004. This briefing explains how OSS works, outlines current and prospective uses and examines recent policy developments. It discusses its advantages and disadvantages and examines factors affecting uptake.

Background
Computer software can be broadly split into two development models (see Box 1 for definitions):
• Proprietary, or ‘closed’ software, owned by a company or individual. Copies of the ‘binary’ are made public; the ‘source-code’ is not usually made public.

• Open source software (OSS), where the source-code is released with the binary. Users and developers can be licensed to use and modify the code, and to distribute any improvements they make.

In practice, software companies often develop both types of software. OSS is developed by an on-going, iterative process where people share the ideas expressed in the source-code. The aim is that a large community of developers and users can contribute to the development of the code, check it for errors and bugs, and make the improved version available to others. Project management software is used to allow developers to keep track of the various versions.


Both OSS and proprietary approaches allow companies to make a profit. Companies developing proprietary software make money by developing software and then selling licences to use it; for example, Microsoft receives a payment for every copy of Windows sold with a personal computer. OSS companies make their money by providing services, such as advising clients on the version that best suits their needs, installing and customising software, and carrying out development and maintenance.

The software itself may be made available at no cost. There are two main types of OSS licences:
• Berkeley Software Distribution (BSD) Licence: this permits a licensee to ‘close’ a version (by withholding the most recent modifications to the source-code) and sell it as a proprietary product;

• GNU General Public Licence (GNU GPL, or GPL, Box 2): under this licence, licensees may not ‘close’ versions. The licensee may modify, copy and redistribute any derivative version, under the same GPL licence. The licensee can either charge a fee for this service or work free of charge.
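As a purely illustrative sketch (the field names below are invented for this example and are not part of either licence text), the key practical difference between the two licence families can be summarised in data:

```python
# Hypothetical summary of the two licence families described above.
# The field names are invented for illustration only.
licences = {
    "BSD-style": {"may_close_derivatives": True,  "derivatives_keep_licence": False},
    "GNU GPL":   {"may_close_derivatives": False, "derivatives_keep_licence": True},
}

def may_sell_closed_version(name):
    """Return True if a licensee may withhold source and sell a closed version."""
    return licences[name]["may_close_derivatives"]

print(may_sell_closed_version("BSD-style"))  # → True
print(may_sell_closed_version("GNU GPL"))    # → False
```

The point of the sketch is only that the GPL attaches a condition (derivatives stay under the GPL) which BSD-style licences do not.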

Box 1. Some Common Definitions
Software is the name given to the programs that run on a computer, e.g. Microsoft Word. Programs are written as text documents, known as ‘source-code’, that contain the readable instructions controlling the program’s operation (written in a programming language, such as C++). Software must be transformed, or ‘compiled’, before it can be used by a computer; the compiled form is known as the ‘binary’. Once compiled into a binary, a computer application may be used, but cannot be modified and improved unless developers have access to the underlying source-code.

An operating system is the basic software required for a computer to work; usually all other applications require an operating system in order to run. The most widely used proprietary and open source operating systems are Microsoft Windows and GNU/Linux respectively. Software is also required for the desktop and for infrastructure, that is, to handle the basic data-processing and connections in a computer network.
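The source-code/binary distinction in Box 1 can be illustrated in miniature using Python's own bytecode compiler as a stand-in for a conventional compiler (the file name and contents below are invented for this sketch):

```python
# Illustration of the source-code / binary distinction from Box 1, using
# Python's bytecode compiler in place of a compiler such as a C compiler.
import pathlib
import py_compile
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
source = workdir / "hello.py"
source.write_text('print("hello")\n')      # human-readable source-code

binary = py_compile.compile(str(source))   # the compiled, machine-oriented form
print(pathlib.Path(binary).suffix)         # → .pyc
```

The `.py` file is readable and modifiable by anyone; the `.pyc` file is what the machine runs, and on its own it is far harder to inspect or improve, which is the nub of the open source argument.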

Box 2. History of open source and free software
Open source first evolved during the 1970s. Richard Stallman, an American software developer who believes that sharing source-code and ideas is fundamental to freedom of speech, developed a ‘free’ version of the widely used ‘Unix’ operating system. The resulting ‘GNU’ program was released under a specially created General Public Licence (‘GNU GPL’). This was designed to ensure that the source-code would remain openly available to all. It was not intended to prevent commercial usage or distribution. This approach was christened ‘free software’. In this context ‘free’ meant that anyone could modify the software. However, the term ‘free’ was often misunderstood to mean ‘no cost’. Hence ‘open source software’ was coined as a less contentious and more ‘business-friendly’ term.

Open Source Security Map Tutorial Tips and Tricks

The Open Source security map is a visual display of the security presence. The security presence is the environment of a security test and comprises six sections, which are the sections of this manual. The sections overlap and contain elements of all the other sections. Proper testing of any one section must include the elements of all the other sections, whether direct or indirect.

The sections in this manual are:
1. Information Security
2. Process Security
3. Internet Technology Security
4. Communications Security
5. Wireless Security
6. Physical Security

Security Map Module List
The module list of the security map contains the primary elements of each section. Each module must in turn include all of the Security Dimensions, integrated into the tasks to be completed. For a test to qualify as an OSSTMM security test of a particular section, all the modules of that section must be tested; any module for which the infrastructure does not exist, and which therefore cannot be verified, is recorded as NOT APPLICABLE in the OSSTMM Data Sheet included with the final report.
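The data-sheet rule above can be sketched as a simple record of per-module outcomes. The module names, status values, and completeness check below are illustrative only; they are not the OSSTMM data-sheet format:

```python
from enum import Enum

class Status(Enum):
    COMPLETED = "tested to completion"
    NOT_APPLICABLE = "not applicable"   # required infrastructure does not exist

# Hypothetical data-sheet entries for one section of a test.
data_sheet = {
    "Posture Review": Status.COMPLETED,
    "Request Testing": Status.COMPLETED,
    "Guided Suggestion Testing": Status.NOT_APPLICABLE,
}

# A section counts as fully tested only if every module is either completed
# or explicitly marked NOT APPLICABLE (with the reason recorded).
full_section_test = all(
    s in (Status.COMPLETED, Status.NOT_APPLICABLE) for s in data_sheet.values()
)
print(full_section_test)  # → True
```

A module left untested for time or budget reasons would carry neither status, and the check would fail, mirroring the rule that such a test cannot be called a full OSSTMM test of the section.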

1. Information Security Testing
a. Posture Assessment
b. Information Integrity Review
c. Intelligence Survey
d. Internet Document Grinding
e. Human Resources Review
f. Competitive Intelligence Scouting
g. Privacy Controls Review
h. Information Controls Review

2. Process Security Testing
a. Posture Review
b. Request Testing
c. Reverse Request Testing
d. Guided Suggestion Testing
e. Trusted Persons Testing

3. Internet Technology Security Testing
1. Logistics and Controls
2. Posture Review
3. Intrusion Detection Review
4. Network Surveying
5. System Services Identification
6. Competitive Intelligence Scouting
7. Privacy Review
8. Document Grinding
9. Internet Application Testing
10. Exploit Research and Verification
11. Routing
12. Trusted Systems Testing
13. Access Control Testing
14. Password Cracking
15. Containment Measures Testing
16. Survivability Review
17. Denial of Service Testing
18. Security Policy Review
19. Alert and Log Review

4. Communications Security Testing
1. Posture Review
2. PBX Review
3. Voicemail Testing
4. FAX Testing
5. Modem Survey
6. Remote Access Control Testing
7. Voice over IP Testing
8. X.25 Packet Switched Networks Testing

5. Wireless Security Testing
1. Posture Review
2. Electromagnetic Radiation (EMR) Testing
3. 802.11 Wireless Networks Testing
4. Bluetooth Networks Testing
5. Wireless Input Device Testing
6. Wireless Handheld Testing
7. Cordless Communications Testing
8. Wireless Surveillance Device Testing
9. Wireless Transaction Device Testing
10. RFID Testing
11. Infrared Testing
12. Privacy Review

6. Physical Security Testing
1. Posture Review
2. Access Controls Testing
3. Perimeter Review
4. Monitoring Review
5. Alarm Response Review
6. Location Review
7. Environment Review

Open Source Security ISECOM Result and Analysis

This is a document of security testing methodology; it is a set of rules and guidelines for which, what, and when events are tested. This methodology only covers external security testing, which is testing security from an unprivileged environment to a privileged environment or location, to circumvent security components, processes, and alarms to gain privileged access. It is also within the scope of this document to provide a standardized approach to a thorough security test of each section of the security presence (e.g. physical security, wireless security, communications security, information security, Internet technology security, and process security) of an organization. Within this open, peer-reviewed approach for a thorough security test we achieve an international standard for security testing to use as a baseline for all security testing methodologies known and unknown.


The limitation to the scope of external security testing is due to the substantial differences between external to internal and internal to internal testing. These differences are fundamentally in the access privileges, goals and deliverables associated with internal to internal testing. The testing towards the discovery of unknown vulnerabilities is not within the scope of this document nor is it within the scope of an OSSTMM security test. The security test described herein is a practical and efficient test of known vulnerabilities, information leaks, and deviations from law, industry standards, and best practices.

ISECOM requires that a security test may only be considered an OSSTMM test if it is:
• Quantifiable.
• Consistent and repeatable.
• Valid beyond the "now" time frame.
• Based on the merit of the tester and analyst not on brands.
• Thorough.
• Compliant with individual and local laws and the human right to privacy.
 

ISECOM does not claim that using the OSSTMM constitutes legal protection in any court of law; however, it does serve as the highest level of appropriate diligence when the results are applied to improve security in a reasonable time frame.

Intended Audience
This manual is written for security testing professionals. Terms, skills, and processes mentioned here may not be clear to those not directly involved in and experienced with security testing. Designers, architects, and developers will find this manual useful for building better defence and testing tools. Many of the tests cannot be automated, and many of the automated tests do not follow a methodology, or follow one in a suboptimal order. This manual addresses these issues.


Accreditation
A security test data sheet must be signed by the tester(s) and accompany all final reports for the test to be submitted as an OSSTMM certified test. This data sheet is available with OSSTMM 2.5. It shows which modules and tasks were tested to completion, which were not tested to completion (and why), and which were not applicable (and why). The checklist must be signed and provided with the final test report to the client. A data sheet indicating that only specific Modules of an OSSTMM Section have been tested, due to time constraints, project problems, or customer refusal, cannot then be said to represent a full OSSTMM test of that Section.

Reasons for the data sheet are:
• Serves as proof of thorough OSSTMM testing.
• Makes a tester(s) responsible for the test.
• Makes a clear statement to the client.
• Provides a convenient overview.
• Provides a clear checklist for the tester.

Use of this manual in conducting a security test is demonstrated by reporting each task and its results, even where not applicable, in the final report. All final reports which include this information and the proper associated checklists can be said to document a test conducted in the most thorough and complete manner, and may include the following statement and a stamp in the report:

This test has been performed in accordance with the Open Source Security Testing Methodology available at http://www.osstmm.org/ and hereby stands within best practices of security testing.

Result
The ultimate goal is to set a standard in security testing methodology which when used results in meeting practical and operational security requirements. The indirect result is creating a discipline that can act as a central point in all security tests regardless of the size of the organization, technology, or defenses.

Analysis
The scope of this document does not include direct analysis of the data collected when using this manual. Such analysis requires an understanding of the appropriate laws, industry regulations, and business needs of the particular client, together with the best practices and regulations for security and privacy in the client’s regions of operation. However, some analysis is implied by the use of “Expected Results” within the methodology, so enough analysis must be done to ensure that at least these expected results are met.

Internet and Network Related Terms
Throughout this manual we refer to words and terms that may be construed with other intents or meanings, especially through international translations. For definitions not included in the table below, see the OUSPG Vulnerability Testing Terminology glossary available at http://www.ee.oulu.fi/research/ouspg/sage/glossary/.

Application Test: the security testing of any application, whether or not it is part of the Internet presence.
Assessment: an overview of the security presence for the estimation of time and man-hours.
Automated Testing: any kind of unattended testing that also provides analysis.
Black Box: the tester has no prior knowledge of the test elements or environment.
Black Hat: a hacker who is chaotic, anarchistic and breaks the law.
Client: a sales recipient with whom confidentiality is enforced through a signed non-disclosure agreement.
Competitive Intelligence: the legal practice of extracting business information from competitors.

Open Source Security Testing Methodology OSSTMM

This manual is a combination of ambition, study, and years of experience. The individual tests themselves are not particularly revolutionary, but the methodology as a whole does represent the benchmark for the security testing profession. And through the thoroughness of its application you will find a revolutionary approach to testing security. This manual is a professional standard for security testing in any environment from the outside to the inside. As a professional standard, it includes the rules of engagement, the ethics for the professional tester, the legalities of security testing, and a comprehensive set of the tests themselves. As security testing continues to evolve into being a valid, respected profession, the OSSTMM intends to be the professional’s handbook.

The objective of this manual is to create one accepted method for performing a thorough security test. Details such as the credentials of the security tester, the size of the security firm, financing, or vendor backing will affect the scale and complexity of a test, but any network or security expert who meets the requirements outlined in this manual will have completed a successful security profile. You will find no recommendation to follow the methodology like a flowchart: it is a series of steps that must be visited and revisited (often) during the making of a thorough test. The methodology chart provided shows the optimal way of addressing this with pairs of testers; however, any number of testers can follow the methodology in tandem. What matters most in this methodology is that the various tests are assessed and performed, where applicable, until the expected results are met within a given time frame. Only then will the tester have addressed the test according to the OSSTMM model, and only then can the report at the very least be called thorough.

Some security testers believe that a security test is simply a “point in time” view of a defensive posture and present the output from their tests as a “security snapshot”. They call it a snapshot because at that time the known vulnerabilities, the known weaknesses, and the known configurations have not changed. Is this snapshot enough? The methodology proposed in this manual will provide more than a snapshot. Risk Assessment Values (RAVs) will enhance these snapshots with the dimensions of frequency and a timing context to the security tests. The snapshot then becomes a profile, encompassing a range of variables over a period of time before degrading below an acceptable risk level. In the 2.5 revision of the OSSTMM we have evolved the definition and application of RAVs to more accurately quantify this risk level. The RAVs provide specific tests with specific time periods that become cyclic in nature and minimize the amount of risk one takes in any defensive posture.
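The manual does not reproduce the RAV calculation here, and the OSSTMM's actual formula is not shown in this text. As a purely hypothetical illustration of the underlying idea, that the assurance a test provides degrades over time until a cyclic retest restores it, one could model the decay exponentially (all names and numbers below are invented):

```python
import math

def assurance(initial, decay_rate, days_since_test):
    """Hypothetical model: assurance from a security test decays
    exponentially with time elapsed since the test."""
    return initial * math.exp(-decay_rate * days_since_test)

def days_until_retest(initial, threshold, decay_rate):
    """Days until assurance falls to the acceptable threshold
    (solves initial * exp(-r * t) = threshold for t)."""
    return math.log(initial / threshold) / decay_rate

# With invented numbers: start at 100, retest when assurance drops below 80,
# with a decay rate of 1% per day.
print(round(days_until_retest(100.0, 80.0, 0.01)))  # → 22
```

The output of such a calculation is the retest cycle length: the point of the RAV idea is precisely that "test once" becomes "test every N days", turning a snapshot into a profile.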



Some may ask: “Is it worth having a standard methodology for testing security?” The quality of the output and results of a security test is hard to gauge without one. Many variables affect the outcome of a test, including the personal style and bias of the tester. Precisely because of all these variables, it is important to define the right way to test, based on best practices and a worldwide consensus. If you can reduce the amount of bias in testing, you will reduce many false assumptions, avoid mediocre results, and reach a correctly balanced judgement of the risk, value, and business justification of the target being tested. Limiting and guiding our biases makes good security testers great and gives novices the proper methodology to conduct the right tests in the right areas.

The end result is that as security testers we participate and form a larger plan. We’re using and contributing to an open-source and standardized methodology that everyone can access. Everyone can open, dissect, add to, suggest and contribute to the OSSTMM, where all constructive criticism will continue to develop and evolve the methodology. It just might be the most valuable contribution anyone can make to professional security testing.