ABSTRACT
A specification of data security in distributed computing systems is discussed in this work. It is based on J. M. Rushby's criterion of "Secure Isolation" but is generalised here to cover non-deterministic processes. The specification has been developed in two different notations: Communicating Sequential Processes (CSP) and the specification language ' Z ' . The specification is applied to and developed with a number of simple examples. Differences between the two approaches are discussed and possible methods of relaxing the specification to allow authorised information flows are mentioned.
ACKNOWLEDGEMENTS
illustration not visible in this excerpt
One hand cannot applaud by itself (Omani proverb)
I owe a large debt of gratitude to Carroll Morgan, my course supervisor, and Jeff Sanders, my project supervisor, for their patience with and help for an experimental chemist who came rather late into computing science and proved to be "a bit slow on the uptake".
I am also very much indebted to two people, one of whom I hardly know and the other I have never met. Dr Andrew Watson of Smith Associates first suggested this project. I have plundered the works of Dr J. M. Rushby, formerly of the University of Newcastle upon Tyne, and his ideas are very much the inspiration and starting-point for this dissertation.
I have had much support and encouragement from others.
Jeremy Jacob, at PRG, and Stanislaw Swierczkowski (S3), at Sultan Qaboos University, have given me advice and made suggestions. Stan's colleagues, S. Sampanthar and Ali Yazici, have answered my enquiries and endured my attempt to "learn by teaching".
Some of my colleagues on the MSc course were a great help to me. I wish particularly to mention David Lightfoot and Sue Waters.
Richard Burnham
1: THE SECURITY PROBLEM IN COMPUTING
1.1 Introduction
To organizations which make use of computers, the importance of security of data files and communications is great: it is clearly desirable for commercial and military secrets to be unavailable to unauthorized persons, for financial accounts to be safeguarded from fraud, and for personal data to be kept confidential as required by law.
Closely connected to security is reliability. Faults in hardware or in software may lead to inadvertent breaches of security. A software design error could show itself in this way in circumstances that were not encountered during testing. It is the purpose of this project to investigate the use of formal methods in the specification and design of secure systems in order to prevent this happening.
1.2 Outline of the problem
A typical data-processing installation consists essentially of a processor, with its main memory, a backing store and terminals to which users have access. There will usually also be a printer and other peripheral devices.
In the course of the business of an organization, different persons will need access to different files held in backing store. The system must correctly ensure that each person has access to read or write only those files authorized, despite the sharing of facilities.
illustration not visible in this excerpt
A simple example [Watson 86] is a system controlling access to two sets of files, called category X and category Y files (Figure 1.1). Within each set there is a number of separate named files. Each file has an associated access control list which may contain some or all of the following descriptors:
R : read access enabled
W : write access enabled
D : delete file enabled
Each of the users, A, B, C etc., has a private access control list, stored by the system, which specifies the access allowed to each file set. For instance, the access list (A,X:R:W,Y:R) shows that user A can read and write files in set X but can only read files in set Y.
The access list embodies an access control policy, determined by authorized persons within the organization. The system itself must have an access control mechanism which implements that policy. The policy, as administered by the mechanism, must allow for the policy itself to be modified when appropriate. A common way of implementing this is to assign an ownership to every object. Privileges can be dispensed or withdrawn only by the owner of a file.
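As a concrete illustration of the access lists just described, the following short Python sketch (my own, not part of any system discussed here; the user and file-set names are those of Figure 1.1) shows how an access control mechanism might consult such a list.

    # Illustrative sketch of the access control lists of Figure 1.1.
    # The table and the check are hypothetical, not an actual implementation.

    ACCESS_LISTS = {
        "A": {"X": {"R", "W"}, "Y": {"R"}},   # (A, X:R:W, Y:R)
        "B": {"X": {"R"}},                    # (B, X:R)
    }

    def access_allowed(user, file_set, mode):
        """True if `user` may perform `mode` ('R', 'W' or 'D') on files in `file_set`."""
        return mode in ACCESS_LISTS.get(user, {}).get(file_set, set())

    assert access_allowed("A", "X", "W")        # A may write files in set X
    assert not access_allowed("B", "Y", "R")    # B has no access to set Y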
Access control is, however, only part of the problem. The system must ensure that information does not flow from one file (or user) to another to the detriment of security. To refer again to Figure 1.1, if the access list were (A,X:R,Y:R; B,X:R) then B is allowed to read files only in set X whilst A has read access to both sets. It is necessary to have an information flow control policy, together with an appropriate mechanism, to ensure that, having gained access to a file in set X, B cannot enable information to "leak" from a file in set Y into the file to which he has legitimate access, and so indirectly to exceed his authorization.
Systems of the type that we have considered are known as multilevel secure systems (MLS). Each can be considered to be made up of a number of security levels (or compartments) representing different combinations of access and information flow authorizations. Each user may have access to one or more of the security levels according to the security policy determined by the appropriate officers of the organization, and implemented by the system.
A system that can be proven always to enforce the security policy is said to be trusted. Most commercial data-processing systems fall far short of this. Even in military installations, where the cost/benefit balance strongly favours the building-in of a high degree of security, the "trust" that can be placed in systems is less than ideal. This is because conventional systems are far too complex to prove completely secure by formal methods. Furthermore, to the extent that systems are made secure, it is often to the detriment of other aspects of the performance of the system, for example, in speed.
It is not possible here, nor is it appropriate, to give a detailed review of all aspects of computer security. Detailed information is available in Denning's book, chapters 4 and 5 [Denning 82].
1.3 Verifiably secure systems
A considerable amount of work has been sponsored by the US Department of Defence on the development of computing systems that can be proven to meet its requirements for security. Most of the conventional systems that have been developed are based on the same set of principles [Denning 82, sect. 5.6].
The basis of security in these systems is the use of a security kernel. This is a trusted program which interfaces with other, not necessarily secure, parts of the system. Its job is to mediate accesses to and information flow within the system. By this means the security-critical parts of the system are concentrated into a relatively small body of code, making verification easier.
The commonly-accepted model of security is that of Bell and La Padula [Bell 73]. This is based on an access-control model known as the access matrix, together with certain principles concerned with the regulation of information flow. The access matrix details the rights of access of each subject (active entity in the system, eg user or program) to each object (passive entity: this class includes subjects as well as files). The information-flow principles are
(1) the tranquillity principle: objects have fixed security classes;
(2) the simple security (ss-) property: a subject may not have read access to an object unless the security class of the subject is at least as high as that of the object;
(3) the *-property: a subject may not have read access to an object X and write access to an object Y unless the security class of Y is at least as high as that of X.
The security classes represent a partial ordering, as for instance between the increasing levels of "unclassified", "confidential", "secret" and "top secret", and mathematical theories of security have been based on treating the flow of information as a lattice [Denning 82, sect 5.1].
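For readers who prefer an executable statement of these two properties, the following Python sketch (mine; the class names and the simple linear ordering are only one example of the lattice mentioned above) checks the ss- and *-properties.

    # Hedged sketch of the Bell-La Padula checks described above.
    LEVELS = ["unclassified", "confidential", "secret", "top secret"]
    rank = {c: i for i, c in enumerate(LEVELS)}

    def ss_property(subject_class, object_class):
        # read access allowed only if the subject's class dominates the object's
        return rank[subject_class] >= rank[object_class]

    def star_property(read_object_class, write_object_class):
        # a subject may read X and write Y only if the class of Y dominates that of X
        return rank[write_object_class] >= rank[read_object_class]

    assert ss_property("secret", "confidential")
    assert not star_property("secret", "confidential")   # writing down would leak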
Lampson [Lampson 73] distinguished between three different kinds of channels through which information may flow in a system.
(1) Legitimate channels are intended for information transfer between processes or programs, for instance, the parameters of a procedure.
(2) Storage channels are objects shared by more than one subject. Examples are global variables and shared files.
(3) Covert channels are not intended for the transfer of information at all. An example is the "timing channel" discussed in the next paragraph.
We shall see that these distinctions are important in our discussion. In principle, information flows in legitimate and storage channels may be made secure. This is more difficult in the case of storage channels, where every object must be protected. Covert channels occur whenever one subject can modify a property of the system that can be observed by another subject. Most commonly, this takes the form of a "timing channel" where the use made by one process of the CPU influences the service obtained by another. This can be overcome to some extent by making every job specify the resources and time that it needs in advance and terminating the job at the allotted time, regardless of the actual time of completion. This, however, degrades system performance, and the cost of closing covert channels can be very high.
Once a specification of security has been arrived at, it is necessary to show that the implementation of the system satisfies it. Rushby [Rushby 81, p.31] points out that, although much effort has been put into the specification of security in such systems as UCLA secure UNIX, KVM/370 and KSOS, the proof of correctness is usually not carried out. Yet many of the threats to the security of a system arise in the complexities of the implementation: "... the security of the whole enterprise rests on the manipulation of some of the nastiest and trickiest aspects of the underlying hardware, notably the memory management facilities, the I/O system and the interrupt mechanism. It is precisely in the secure management of such complex details as these that the guidance and reassurance of the appropriate verification technique is vital" [Rushby 81, p. 32].
The problem of proving a system secure is closely related to that of proving a program correct. In general, it is not possible to prove that an arbitrary system is secure, for much the same reason that it is impossible to prove that any given program will halt. However, it will be possible to build provably secure systems if verification is integrated with the specification, design and implementation [Denning 82]. Denning describes methods both of verification and of structuring a system so as to aid the task of verification. In this thesis we are concerned with methods of verification. However, we base this work on a particular approach to the structuring of a system, which will now be described.
1.4 The distributed systems approach
Whatever the hardware on which a secure system is to be implemented, the first step in achieving security is the separation of the security compartments. Information in different compartments is kept apart and each item is clearly identified as belonging to its particular compartment. Transfers of information between compartments must take place only when the system has explicitly confirmed that this is allowed by the security policy; all other requests to transfer information must be denied. This control over transfers is referred to as mediation.
Separation can be achieved in a combination of any of four ways:
(1) by physical separation: work in different security compartments is performed on different machines;
(2) by temporal separation: work in different compartments is carried out at different times, with volatile memory being purged and other memory changed between uses;
(3) by cryptographical separation: use of different encryption keys to ensure that information is unintelligible if it reaches another security compartment;
(4) by logical separation: the system software is entrusted to keep sets of data from different security compartments separate as they are handled in the machine.
Rushby [Rushby 81, 82] has formulated a specification of security of computing systems which he calls Secure Isolation. It begins with the idea of complete separation between processes or users. The specification, which is given a formal definition, is that a user (or process) should be unable to tell, from the observed behaviour of the system, whether he is using the shared system or an identical one dedicated entirely to his own use. This specification of total isolation can then be relaxed in such a way as to allow only the required information flows.
To prove that a particular system meets the specification Rushby has introduced a formal method called Proof of Separability. In essence, this involves showing that the system satisfies certain conditions that guarantee the existence of a private system which would behave towards the user exactly as does the shared one.
Rushby's ideas have been applied in the development of a Distributed Secure System (DSS) by Rushby and Randell at the University of Newcastle upon Tyne and Barnes and others at the Royal Signals and Radar Establishment, Malvern, Worcestershire [Rushby 83, Barnes 85, Wood 85]. Processes are kept separate by being implemented on separate components of a distributed system. These components need not in themselves be trusted. All information flows are mediated by a trusted security kernel. Other components, whether trusted or not, can be added as required. The distributed systems approach means that the proof of security is narrowed down to the relatively small obligations to prove the security of the trusted components by appropriate methods, and of the system as a whole by Proof of Separability.
We are not concerned here with the details of the implementation. However, we take Rushby's ideas of Secure Isolation and Proof of Separability as our inspiration and starting point, and our aim is to apply new formal methods in generalizing them and extending their applicability.
2. SECURE ISOLATION AND PROOF OF SECURITY
2.1 Introduction
It is not intended here to give a complete account or development of the ideas in Rushby's paper; these can be found in the original [Rushby 82]. This section will be only a summary, but, since this work is intended to generalize Rushby's ideas, we shall point out the differences between Rushby's approach and our own. For reasons to be explained in Chapter 3, our principal development is in terms of the Communicating Sequential Processes (CSP) notation [Hoare 85] and we anticipate this by making some comparisons with CSP.
There are two main ideas to consider. The first is the specification of Secure Isolation. The second is Proof of Separability, the method used to prove that a system shows secure isolation. We shall also be concerned with the notation in which the two ideas are developed.
2.2 Secure Isolation
Rushby models a system or machine as a 6-tuple
M = (S,I,O,NEXTSTATE,INPUT,OUTPUT)
where
S is a finite, non-empty set of states,
I is a finite, non-empty set of inputs,
O is a finite, non-empty set of outputs,
and
NEXTSTATE: S → S,
INPUT: I → S,
OUTPUT: S → O
are total functions.
The machine proceeds from state to state with the output depending on the present state through the OUTPUT function. This determinism imposed by the NEXTSTATE function is undesirable in specifying a distributed system, because of the non-determinism inherent in a system with multiple users and components. Furthermore, Rushby is forced to assume in his development that the only input occurs right at the start of the computation and none subsequently, as inputs are non-deterministic. This restriction has to be removed later. By removing the determinism, we can avoid this problem.
Rushby defines a computation invoked by an input i:I as the infinite sequence
illustration not visible in this excerpt
We note that the result of applying the computation function is a kind of trace of the machine, but in terms of states rather than of events as in CSP. However, Rushby is not directly concerned with a computation (in this sense) but with the sequence of outputs visible to an observer, which he denotes
RESULT(i) = <OUTPUT(s0), OUTPUT(s1), ..., OUTPUT(sn), ...>
This is much closer to a CSP trace, but we wish to include inputs in the trace, since inputs are events which we wish to allow at any time.
The next step in Rushby's development is to treat each input or output as a combination of the private inputs or outputs of several users. Following cryptographic conventions, he labels these users with "colours" (security classes). For clarity, these colours are represented by integers from the set C = {1,2, .. ,c, .. ,m}. He models the sets I and O as Cartesian products of C-indexed families of sets:
illustration not visible in this excerpt
A machine which has input and output sets of this form is referred to as C-shared.
Now a projection function is required which when applied to an input i:I or output o:0 produces only the component belonging to a particular user. This function is called EXTRACT. So, for example,
EXTRACT(c,o)
is the c-coloured component of the output o, and the c-coloured component of the input i is
EXTRACT(c,i).
This is extended to allow EXTRACT to be applied to sequences, so that the component of the output visible to user c is the sequence
EXTRACT(c,RESULT(i)).
The development so far allows Rushby to give a first definition of secure isolation. That is, the results seen by each user should depend only on his own contribution to the input:
∀c:C, ∀i,j:I •
EXTRACT(c,i) = EXTRACT(c,j) ⇒
EXTRACT(c,RESULT(i)) = EXTRACT(c,RESULT(j))   ..(2.1)
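A small executable model may help to fix the definitions so far. The sketch below (my own construction, not Rushby's) represents inputs and outputs as per-colour tuples, EXTRACT as projection, and checks definition (2.1) exhaustively for a toy machine whose output to each colour depends only on that colour's own input.

    # Toy two-colour machine: the state is a tuple of per-colour values, and each
    # step increments every component.  EXTRACT is simply projection onto a colour.
    from itertools import product

    COLOURS = (0, 1)
    VALUES = (0, 1)

    def extract(c, item):
        return item[c]

    def result(i, steps=3):
        s = i                                   # INPUT: the initial state is the input
        outs = []
        for _ in range(steps):
            outs.append(tuple(s))               # OUTPUT(s) = s
            s = tuple(v + 1 for v in s)         # NEXTSTATE increments each component
        return outs

    def satisfies_2_1():
        inputs = list(product(VALUES, repeat=len(COLOURS)))
        for c in COLOURS:
            for i, j in product(inputs, repeat=2):
                if extract(c, i) == extract(c, j):
                    ri = [extract(c, o) for o in result(i)]
                    rj = [extract(c, o) for o in result(j)]
                    if ri != rj:
                        return False
        return True

    assert satisfies_2_1()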
This requirement is in fact too strong, and we must now consider why it is, and how it should be relaxed.
Firstly, we interpret Rushby's definition of a sequence of outputs as representing a sequence of "time slots" over which an output remains constant (e.g. a single scan of a terminal screen). So, for example, a machine that does not change its state after initial input and continually gives the output "5" shows the result <5,5,5,...>. Consider now a time-sharing machine. The length of time that a user has to wait for service depends on the loading of the CPU. Thus, one user, by making heavy use of the machine, can influence the average delay in obtaining service experienced by another user. This represents a means of communication from one user to another, even though it is noisy and has low bandwidth. In Lampson's classification, this is a "covert" channel. The specification of (2.1) states that covert channels (as well as storage channels) should be absent, but there is no known way to guarantee the construction of an implementation free of covert channels. We therefore cannot be at all sure of meeting such a specification. Rushby intends to specify the absence of storage channels, but not of covert channels. (In practice, covert channels are dealt with by techniques to increase their noise and lower their bandwidth further.)
Rushby weakens his specification by introducing a function on sequences called CONDENSE. It replaces a sequence with another in which all subsequences of repeated values are replaced by a single occurrence of that value. In other words, it restricts a sequence to changes in values. For example,
CONDENSE(<1,1,2,2,3,4,4,2,2,2,6,6,5,5>) = <1,2,3,4,2,6,5>
Rushby’s definition of the CONDENSE function is rather long. It can be given a briefer inductive definition
illustration not visible in this excerpt
where
a and b are sequences,
^ is the concatenation operator and <x>^n means a sequence of n occurrences of x, both as defined in CSP [Hoare 85]; the function last is defined by
last(<x>) = x, for all x,
last(<x>^a) = last(a), for all x and for all sequences a,
and the function head is defined by
head(<x>^a) = x, for all x and for all sequences a.
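An executable version of CONDENSE is easy to give; the following Python sketch (mine) reproduces the worked example above.

    def condense(seq):
        """Collapse every run of repeated adjacent values to a single occurrence."""
        out = []
        for x in seq:
            if not out or out[-1] != x:
                out.append(x)
        return out

    assert condense([1, 1, 2, 2, 3, 4, 4, 2, 2, 2, 6, 6, 5, 5]) == [1, 2, 3, 4, 2, 6, 5]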
The important point here is that, after applying the CONDENSE function to a sequence, we now have a sequence which is a trace in the CSP sense, that is, a sequence of events, where an event is the production of a (new) output. Using the CSP notation we can arrive at this stage immediately, because CSP need have no notion of the time intervals between events.
As Rushby's definition of secure isolation now stands, we can write it as
∀c:C, ∀i,j:I •
EXTRACT(c,i) = EXTRACT(c,j) ⇒
CONDENSE(EXTRACT(c,RESULT(i))) =
CONDENSE(EXTRACT(c,RESULT(j)))   ..(2.2)
In Rushby's development this is still, however, an overspecification. It requires that no input will ever cause the machine to crash or to go into an infinite loop, since only under this condition will the equality between the two "condensed" traces hold. This is, of course, a desirable property, but, like the elimination of covert channels, it is beyond the scope of the techniques with which Rushby is concerned, and he further weakens the specification.
Given suitable methods for proving that the code of a particular system does satisfy its specification, it seems reasonable to try to specify absence of deadlock and livelock in distributed systems. In CSP, we can leave the question open for the present. If we wish to use the stronger specification, we can make use of the "failures" and "divergences" models of CSP. If we weaken the specification as it stands, we can express it using the "traces" model of CSP. For this work, we shall concentrate on the traces model, but the question is discussed further in chapter 3.
Rushby introduces a "weaker form of equivalence" on sequences, denoted by = , and defined by
X = Y iff X = Y
or the shorter of X and Y is an initial subsequence of the other.
Since we are now dealing with CSP-type traces, we can express this, when X,Y are traces, as
Abbildung in dieser Leseprobe nicht enthalten
(This relation is not an equivalence in a mathematical sense, because it is not transitive [Swierczkowski 87]. For example, consider the three sequences
Abbildung in dieser Leseprobe nicht enthalten
but the same relation does not hold between a and b.)
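Since the three sequences themselves are not reproduced here, a concrete instance of the failure of transitivity (my own choice of sequences, not those of the original text) may help: with a = <1,2>, b = <1,3> and c = <1>, we have a ≈ c and c ≈ b but not a ≈ b. The small check below makes this executable.

    def weak_equiv(x, y):
        """x is related to y iff x = y or the shorter is a prefix of the other."""
        shorter, longer = (x, y) if len(x) <= len(y) else (y, x)
        return longer[:len(shorter)] == shorter

    a, b, c = [1, 2], [1, 3], [1]
    assert weak_equiv(a, c) and weak_equiv(c, b) and not weak_equiv(a, b)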
Rushby's definition of secure isolation is now modified by replacing the equality of the sequences by the equivalence:
∀c:C, ∀i,j:I •
EXTRACT(c,i) = EXTRACT(c,j) ⇒
CONDENSE(EXTRACT(c,RESULT(i))) ≈
CONDENSE(EXTRACT(c,RESULT(j)))   ..(2.3)
This is Rushby's final definition of secure isolation.
This, however, is still too deterministic for our purposes. (2.3) expresses a requirement that traces resulting from the same input should be the same "as far as they go", but this is only true where each state is a function of the previous one.
To summarize, we wish to develop a specification of "secure isolation" which encapsulates Rushby's idea that "the results seen by each user should depend only on his own contribution to the input" [Rushby 82]. Our intention is to generalize this to allow for the non-deterministic dependence of a state on the previous one, as this is more appropriate for a distributed system.
2.3 Proof of Separability
Starting from (2.3), we now summarize the method proposed by Rushby for proving that a machine shows the property of secure isolation. (2.3) states that it will appear to the user of a shared machine which satisfies the specification that he has the machine to himself. "Proof of Separability" involves showing that a private machine could be constructed which the user cannot distinguish from the shared machine by the behaviour which he observes.
The first step is to define a private machine for a user c:C. Recall that a C-shared machine is given by
M = (S,I,O,NEXTSTATE, INPUT, OUTPUT)
illustration not visible in this excerpt
A private machine for c:C has an input set Ic and output set Oc and can be written
illustration not visible in this excerpt
It has functions COMPUTATIONc and RESULTc. An M-compatible private machine for c is then defined by
illustration not visible in this excerpt
This means roughly that when the c-component of a C-shared input is provided to an M-compatible machine, the result is equivalent to the c-component of the result when the whole input is provided to the C-shared machine.
Rushby's theorem 1 [Rushby 82] states that:
A C-shared machine is secure if for each c:C there exists an M-compatible private machine for c.
The proof of this theorem follows from the definitions (2.3) and (2.4).
In order to avoid the proof of secure isolation for every possible shared machine, Rushby seeks conditions under which he can guarantee that an M-compatible private machine exists for a shared machine M. In order to do this, he restricts his consideration to the class of C-shared machines that "time-share" their activity between users. In other words, each operation of the shared machine simulates an operation of one user's private machine. It is necessary for his theorem 2 that the identity of the user being serviced at any instant is a function of the current state, that is, that each operation is carried out on behalf of only one user.
We wish to reconsider this point later, since our aim is to make the specification of secure isolation as general as possible for distributed systems, and the importance of distributed systems is their ability to carry out more than one operation "simultaneously". For the present, however, we continue our summary of Rushby's development.
We state Rushby's two theorems without reproducing his proofs.
Theorem 2
Let M = (S,I,O,NEXTSTATE,INPUT,OUTPUT) be a C-shared machine,
COLOUR: S → C a total function (taking the current state as argument and returning the colour of the user being serviced),
Mc = (Sc,Ic,Oc,NEXTSTATEc,INPUTc,OUTPUTc) be a private machine for c:C,
and Φc : S → Sc be a total function (an abstraction function from the states of the C-shared machine to those of the private machine)
such that ∀s:S, ∀i:I •
(1) COLOUR(s) = c ⇒ Φc(NEXTSTATE(s)) = NEXTSTATEc(Φc(s))
(2) COLOUR(s) ≠ c ⇒ Φc(NEXTSTATE(s)) = Φc(s)
(3) Φc(INPUT(i)) = INPUTc(EXTRACT(c,i))
and
(4) OUTPUTc(Φc(s)) = EXTRACT(c,OUTPUT(s));
then Mc is M-compatible.
This theorem formalizes the idea that we have already described informally, that Mc behaves towards user c exactly as M does. (1) and (2) together express the requirement that c observes a change in the state of the shared machine M only when an operation is done on his behalf, just as he would with a private machine. (1) says that if the operation is carried out for user c, then the resulting state of M is identical, as far as user c is concerned, to the resulting state of Mc after the same operation. According to (2), if the operation is not for user c, then he will see no change of state in either machine.
(3) and (4) are concerned with ensuring that both M and Mc behave in the same way in input and output towards user c. (3) says that an input i produces the same state observable by user c on machine M that the c-component of i will produce on the private machine. Similarly, if the user receives a particular output in a given state of M, he will also receive that output when Mc is in the corresponding state, as expressed by (4). All the conditions (1) to (4) can be represented by commuting diagrams (Figure 2).
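To make the four conditions concrete, the following Python sketch (my own toy construction, not taken from Rushby) models a two-user round-robin machine whose state is a pair (turn, memory) and checks conditions (1) to (4) exhaustively over a small state space.

    from itertools import product

    COLOURS = (0, 1)
    VALUES = (0, 1, 2)

    def colour(s):           return s[0]
    def output(s):           return s[1]
    def extract(c, item):    return item[c]

    def nextstate(s):
        turn, mem = s
        mem = list(mem)
        mem[turn] += 1                       # one step of work for the serviced user
        return ((turn + 1) % 2, tuple(mem))

    def inp(i):              return (0, i)   # INPUT loads the memory, user 0 served first

    # Private machine for colour c, and the abstraction function phi
    def nextstate_c(v):      return v + 1
    def input_c(x):          return x
    def output_c(v):         return v
    def phi(c, s):           return s[1][c]

    states = [(t, m) for t in COLOURS for m in product(VALUES, repeat=2)]
    inputs = list(product(VALUES, repeat=2))

    for c in COLOURS:
        for s in states:
            if colour(s) == c:
                assert phi(c, nextstate(s)) == nextstate_c(phi(c, s))   # condition (1)
            else:
                assert phi(c, nextstate(s)) == phi(c, s)                # condition (2)
            assert output_c(phi(c, s)) == extract(c, output(s))         # condition (4)
        for i in inputs:
            assert phi(c, inp(i)) == input_c(extract(c, i))             # condition (3)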
Theorem 3 requires a further constraint on the nature of the shared machine in order that conditions can be stated that guarantee the existence of an M-compatible private machine for a user c:C. This is done by providing more detail for the NEXTSTATE function.
The machine is assumed to be equipped with a set OPS of operations, each operation being a total function on states.
illustration not visible in this excerpt
The existence of a total function NEXTOP is supposed, which models the "control mechanism" which selects the operation to be carried out;
illustration not visible in this excerpt
Figure 2: Commuting diagrams representing Rushby's four conditions for "M-compatibility".
NEXTOP : S → OPS
In any state s, NEXTOP(s) is the operation which is applied to s to yield the next state. Therefore
NEXTSTATE(s) = NEXTOP(s)(s).
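A small Python illustration (mine, with entirely arbitrary operations) of this decomposition of NEXTSTATE into a control mechanism and a set of operations:

    OPS = {"inc": lambda s: s + 1, "reset": lambda s: 0}

    def nextop(s):
        return OPS["reset"] if s >= 3 else OPS["inc"]   # the "control mechanism"

    def nextstate(s):
        return nextop(s)(s)                             # NEXTSTATE(s) = NEXTOP(s)(s)

    assert nextstate(0) == 1 and nextstate(3) == 0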
Theorem 3
Let M = (S,I,O,NEXTOP,INPUT,OUTPUT) be a C-shared machine,
and COLOUR : S → C be a total function (as in theorem 2), and suppose for c:C there exist sets
Sc of states, and
OPSc ⊆ Sc → Sc of total operations on Sc
together with total abstraction functions
Φc : S → Sc and
ABOPc : OPS → OPSc
and that
∀c:C, ∀s,s':S, ∀op:OPS, ∀i,i':I •
illustration not visible in this excerpt
Then an M-compatible private machine exists for c.
From the conditions of theorem 3 we can construct a private machine of the form
illustration not visible in this excerpt
which satisfies all the conditions of theorem 2. If, therefore, given a particular shared machine, we can construct the appropriate sets Sc and OPSc with their associated abstraction functions Φc and ABOPc, which satisfy the conditions of theorem 3, then we have proved the machine to show secure isolation.
The final stage in Rushby's development is to modify his model to allow for the possibility of input at each stage in the evolution of the states of the machine. To do this, Rushby modifies the INPUT function, and with it condition (3) of theorem 3, to read
illustration not visible in this excerpt
We now have six conditions required for "Proof of Separability" in Rushby's development, and we quote Rushby's summary of them [Rushby 82].
(1) When an operation is executed on behalf of user c, the effects which that user perceives must be capable of complete description in terms of the objects known to him.
(2) When an operation is executed on behalf of user c, other users should perceive no effects at all.
(3a) Only I/O devices of colour c may affect the state perceived by user c.
(3b) I/O devices must not be able to cause dissimilar behaviour to be exhibited by states which the user c perceives as identical.
(4) I/O devices of colour c must not be able to perceive differences between states which the user c perceives as identical.
(5) The selection of the next operation to be executed on behalf of user c must depend only on the state of his regime.
To summarize, Rushby has given a definition of secure isolation, and has provided a proof method by which, given certain conditions, a system can be proven to have the property of secure isolation. We take these as our starting point, but believe they are not sufficiently general to be of wide applicability to distributed systems. We wish to allow non-determinism in our specification of security, and this will require deviation from Rushby's approach.
2.4 Abstraction and refinement
An important point that arises from theorems 2 and 3 is the use of the abstraction functions Φ and ABOP from the shared machine to the private machine. The shared machine is regarded as an implementation of an abstract or ideal private machine that has the properties that we desire: in this case, specifically, it has secure isolation; but it might also have freedom from deadlock and divergence, and other required properties.
This is a general situation in programming: we have an abstract, idealised specification which we wish to implement in a computable code (or hardware). If we can prove that the implementation meets the specification, it is common to speak of the implementation as being a refinement (or, alternatively, a reification [Jones 86]) of the specification.
Implicit in Rushby's definition of Secure Isolation is the notion that the practical, shared machine is a refinement of a private machine which by its very nature is securely isolated. This refinement must apply between the shared machine and an abstract private machine for every possible user of the shared machine.
We have already seen that CSP notation is particularly suitable for reasoning concerning the traces, or sequences of events, in the lifetime of a machine. In chapter 3, therefore, we adopt the CSP notation and try to find a suitable definition of refinement between the shared and private machines. The aim is to be able to determine in CSP the secure isolation (or lack of it) of particular shared machines.
In other programming contexts, the idea of refinement has been developed from the concept of homomorphism between algebraic structures (e.g. groups) in mathematics, and can be expressed as rules relating data structures, and the operations on them, in the abstract and implemented versions of a program. These rules are conveniently expressed in the Z specification language, and in chapter 4 we explore the application of the Z definition of refinement to the definition of secure isolation.
3. SECURE ISOLATION EXPRESSED IN CSP
3.1 Introduction
In Chapter 2, we discussed the definition and proof of Secure Isolation introduced by Rushby. Here, we develop the same ideas using the formalism of Communicating Sequential Processes (CSP) [Hoare 85] and compare the two approaches. There are several reasons to attempt this work in CSP. One is that, since we are concerned with distributed systems, we wish to allow the possibility of non-determinism in our specifications: full non-determinism was not possible in Rushby's approach, but is readily permitted in CSP.
Another reason is suggested by the fact that much of Rushby's paper is given over to the definition of the traces of a process, although Rushby does not use that term. A trace is a sequence of events (in this work, inputs, outputs and communications) in which the process can engage. The idea of a trace is already defined in CSP [Hoare 85, p.41], and we have explained in Chapter 2 that the CSP definition is simpler and more convenient to use. Thirdly, we are interested in contributing to the development of CSP by applying it to an important problem for which it has not been used before.
3.2 Communicating Sequential Processes
In this chapter we use the CSP notation set out in [Hoare 85]; any other usages are explained as they arise. A full account of CSP notation will not be given here.
In the text we use both the parallel operator || and the interleaving operator |||, and the distinction between them is important. Suppose that P and Q are processes. Then
(||) an event engaged in by (P||Q) (read "P in parallel with Q") requires the participation of both P and Q if it is in the alphabet of both. Otherwise events in the alphabet of one of P and Q occur without the involvement of the other [Hoare 85, p.68];
(|||) an event engaged in by (P|||Q) involves only P or Q without the participation of the other [Hoare 85, p. 119].
Use of the ||| operator "disconnects" two processes so that they can run completely independently. The following relationships apply:
illustration not visible in this excerpt
If P and Q have no events in common, then their combination is the same with either operator.
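The difference between the two operators can be made executable at the level of traces. The sketch below (my own simplification of the trace laws in [Hoare 85]) tests membership of a candidate trace in (P || Q) by projecting it onto the two alphabets, and membership in (P ||| Q) by checking that it interleaves a trace of each; the processes P = a → b → STOP and Q = b → c → STOP are assumptions chosen purely for illustration.

    def restrict(t, alphabet):
        return tuple(e for e in t if e in alphabet)

    def trace_of_parallel(tp, ap, tq, aq, candidate):
        """candidate is a trace of (P || Q) iff its projections are traces of P and Q."""
        return (all(e in ap | aq for e in candidate)
                and restrict(candidate, ap) in tp
                and restrict(candidate, aq) in tq)

    def is_interleaving(s, t, candidate):
        """candidate interleaves the sequences s and t (as in P ||| Q)."""
        if not candidate:
            return not s and not t
        head, rest = candidate[0], candidate[1:]
        return ((len(s) > 0 and s[0] == head and is_interleaving(s[1:], t, rest)) or
                (len(t) > 0 and t[0] == head and is_interleaving(s, t[1:], rest)))

    tp = {(), ("a",), ("a", "b")}; ap = {"a", "b"}     # P = a -> b -> STOP
    tq = {(), ("b",), ("b", "c")}; aq = {"b", "c"}     # Q = b -> c -> STOP
    assert trace_of_parallel(tp, ap, tq, aq, ("a", "b", "c"))    # shared b synchronises
    assert not trace_of_parallel(tp, ap, tq, aq, ("b", "a", "c"))
    assert is_interleaving(("a", "b"), ("b", "c"), ("a", "b", "b", "c"))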
In Rushby's papers the discussion is in terms of the states of a machine, and the operations which change the states. In CSP, the fundamental concepts are processes and the events in which they can engage. A process is the behaviour pattern of an object, restricted to those of its events which we choose to consider as its alphabet [Hoare 85, p.25]. A process (which corresponds to a state) is able to engage in certain events (say, input and output) and transform to a subsequent process (that is, state). The set of events (alphabet) of a process determines the level of abstraction at which it is being observed and described.
3.3 The development in CSP
For simplicity we shall assume a set of only two users {U1, U2}; the discussion can be extended to any number of users later. It is taken that these two users belong to different security compartments ("colours") and the colours are indicated by the superscripts 1 and 2.
A single user U can use a (shareable) machine M, and this can be represented in CSP by a parallel combination of the processes U and M:
(U || M)
and if αU and αM are the alphabets of U and M respectively (that is, the sets of events in which each of these processes can engage) then
illustration not visible in this excerpt
represents the set of events in which U and M can jointly engage, and therefore communicate. Initially, we consider these events to be "atomic" and the nature of these events is not of concern.
A system could consist of two users, each using a separate machine M, and with no interaction between the two combinations. This system clearly shows secure isolation, as each user has a private machine. In CSP this can be represented by
illustration not visible in this excerpt
Figure 3
where the interleaving operator ||| allows each of the two combined processes (U1 || M) and (U2 || M) to proceed without the participation of the other (Figure 3).
A machine shared by two users can be represented as
illustration not visible in this excerpt
if we allow the possibility of communication between the two users, or, if we ensure that U1 and U2 can never interact, then
as in Figure 4.
illustration not visible in this excerpt
Figure 4
In the event that the alphabets of the two users are disjoint, then B and B⊥ are equivalent.
Throughout this chapter, we shall use the symbols A and B to represent the systems defined in (3.3.1) and (3.3.3) respectively, where the processes M and Uc may be instantiated by different actual processes.
What would it mean to say that a system involving U , U and M shows secure isolation? We want the practical system ( В or BJ_) to share the properties of A that make it secure; we can represent this by
illustration not visible in this excerpt
where ⊑ may be read as "is refined by". We take this as our specification of the secure isolation of B or B⊥, but we have yet to define the relation ⊑ precisely.
Firstly, though, we must decide whether it would be more appropriate to take (3.3.4) or (3.3.5) as our specification. If we adopt (3.3.5), we are saying that B⊥ shows secure isolation even though the users may be able to communicate through channels external to M, and their behaviour is then partially synchronised. This is an overspecification, since we cannot design a computer system proof against the possibility that (say) U1 and U2 are able to talk to each other. Clearly, an organization's overall security policy must be concerned with such questions, but they cannot be part of the design of a computer system.
Specification (3.3.4) is weaker than (3.3.5) and is more appropriate to the design of a distributed computer system. It says that secure isolation applies if and only if A is refined by system B from which we have already excluded the possibility of communication external to M. The elimination of this possibility is beyond the scope of our present discussion. (3.3.4) is equivalent to (3.3.5) if users have disjoint alphabets (cannot communicate); this follows from the definitions of the operators || and |||.
Having selected (3.3.4) as our specification, we can now return to the definition of our refinement relation. All traces of A are secure, in the sense that no communication can occur between U1 and U2, and we require that B also shows security in this sense. However, we also wish to allow that some of the possible behaviours open to A may not be allowed for B, so B will be at least as deterministic as A.
This suggests that, following [Hoare 85], we define
illustration not visible in this excerpt
But this is too strong a requirement for our purposes, since each user is only aware of events in its own alphabet. For example, consider A to have the single trace
<1.a,2.a,1.b,2.b>
where αU1 = {1.a,1.b} and αU2 = {2.a,2.b}.
Then U1 is only aware of the trace <1.a,1.b> and U2 similarly is only aware of the trace <2.a,2.b>. So a process B with the single trace
<1.a,1.b,2.a,2.b>
would be an acceptable refinement of A, as far as our definition of security is concerned, because the traces perceived by U1 and U2 are the same with B as with A.
(There may be other reasons for rejecting B as an implementation. For example, there might be a further specification that 2.a must always precede 1.b. This, however, would lead to partial synchronisation of the user processes and would represent a breach of security. Such an addition might be made to the specification as part of the controlled introduction of permitted information flows into a system already made secure. This is beyond the scope of the present discussion.)
Two traces will seem the same to both users if each user sees the same set of events occurring in the same order. We therefore define the local equivalence of traces. This definition is similar to that of the isomorphism of two sequences of events with respect to a process, introduced by Chandy and Misra [Chandy 86]. Two traces s and t are considered to be locally equivalent if they have the same interactions in the same order for each user, i.e.
illustration not visible in this excerpt
≡ is an equivalence relation on traces, and
s ≡ t ⇒ s is a permutation of t,
although the converse clearly does not hold.
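A minimal executable form of local equivalence (my own sketch, using the alphabets of the small example above) simply compares the restriction of each trace to each user's alphabet.

    def restrict(t, alphabet):
        return [e for e in t if e in alphabet]

    def locally_equivalent(s, t, user_alphabets):
        return all(restrict(s, a) == restrict(t, a) for a in user_alphabets)

    alpha1, alpha2 = {"1.a", "1.b"}, {"2.a", "2.b"}
    s = ["1.a", "2.a", "1.b", "2.b"]
    t = ["1.a", "1.b", "2.a", "2.b"]
    assert locally_equivalent(s, t, [alpha1, alpha2])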
We shall now weaken our definition of refinement so that
illustration not visible in this excerpt
With this definition of refinement, our specification of the secure isolation of a system В becomes
illustration not visible in this excerpt
In proving the security of a system B in terms of the traces of the systems A and B, as we have just discussed, we shall have proved what is conventionally called the safety (or partial correctness) of B in terms of its specification A. In other words, B always does what is permitted by A, if it does anything at all.
However, system B could deadlock immediately on switching on, and still satisfy our stated requirements. Supposing that B only ever does this: then its only trace is <> (the empty trace). In that case, there is clearly a trace of A, also the empty trace, which is equivalent to it, since <> ≡ <>. We need also to prove the liveness of B with respect to its specification A.
More precisely, we need to prove that B will not deadlock except in those cases where A would also do so. Furthermore, for the same reasons, we must ensure that B will not diverge except in those cases where A would also do so. For example, B should not go into an infinite loop after a certain trace where A would not.
In order to deal with this in CSP, we make use of the fact that a process is uniquely determined by three sets: its alphabet, its failures and its divergences [Hoare 85, p.129]. We can modify our definition of refinement to cover liveness by finding appropriate relations between the failures of A and B and between the divergences of A and B. Now our proof of security becomes one of total correctness. First, however, we must redefine our local equivalence relation ≡ to apply to failures and divergences.
A failure is a pair (s,X) where s is a trace and X is a refusal set (a set of events). If B has a certain trace s leading to deadlock when the environment offers a set of events X, then we want to ensure that any trace of A which is locally equivalent to s will be followed by deadlock when offered the same set. This means that the set of events which will cause A to deadlock after a particular trace locally equivalent to s must be at least as large as X.
A divergence is simply a trace, and we may adopt the same definition of local equivalence on divergences as we did on traces.
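For finite models, the failure part of this extended refinement can be stated executably. The sketch below is my own reading of the condition just given, with A and B represented by explicit trace and failure sets; it is only an illustration, not a general CSP tool.

    def restrict(t, alphabet):
        return tuple(e for e in t if e in alphabet)

    def locally_equivalent(s, t, user_alphabets):
        return all(restrict(s, a) == restrict(t, a) for a in user_alphabets)

    def failures_refined(traces_a, failures_a, failures_b, user_alphabets):
        """Every failure (s, X) of B must be matched, after every trace of A locally
        equivalent to s, by a refusal of A at least as large as X."""
        for (sb, xb) in failures_b:
            for sa in traces_a:
                if locally_equivalent(sa, sb, user_alphabets):
                    if not any(ta == sa and xb <= xa for (ta, xa) in failures_a):
                        return False
        return True

    alphabets = [{"a"}, {"b"}]
    traces_a = [(), ("a",)]
    failures_a = [(("a",), frozenset({"a"}))]
    failures_b = [(("a",), frozenset({"a"}))]
    assert failures_refined(traces_a, failures_a, failures_b, alphabets)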
Now our definition of refinement, ⊑, becomes
illustration not visible in this excerpt
but we are now applying the new definition of ≡.
The definition of ⊑ we are using is not the same as the usual definition of ⊑ in CSP [Hoare 85, p.132]. We do not use the "conventional" definition here because of our use of local equivalence rather than equality between traces.
3.4 Examples of the CSP approach
The following examples explore the definition of secure isolation developed in the previous section. Some of these examples satisfy our criterion and others do not.
Initially, we consider the alphabets of all processes to consist of events of which the nature or structure is not relevant. Later, we treat the events as communications on channels in the standard way in CSP.
Throughout, we set out proofs using the traces model of CSP. In this way, we can prove the secure isolation (or not) of a system in terms of our specification. Using this model we are unable to prove the absence of deadlock or livelock in our processes. However, the extension of the proofs to the failures model (to guard against deadlock) and the divergences model (against livelock) is readily carried out.
In the following, reference is made to some of the laws of CSP, as given in Hoare's book [Hoare 85]. These are referred to by the number of the law (prefixed by L) followed by the section of the book in which it is stated.
Example 1
When two users, U1 and U2, share a machine, the combination corresponds to Figure 4.
Suppose that the alphabets of the users are disjoint:
illustration not visible in this excerpt
and that they can do nothing other than communicate with M:
illustration not visible in this excerpt
In this case, there is no legitimate channel between the two users, and we expect to be able to prove secure isolation by our method.
For this example, we take
illustration not visible in this excerpt
traces(B) ⊆ traces(A), so that in the usual sense in CSP A is refined by B, and hence the weaker relation A ⊑ B applies also.)
This example illustrates how we are unable to specify the absence of "timing channels", as we have already discussed in Chapter 2. For example, with the above definition of M, U1 could refuse to do event a until (say) a particular time of day, until which U2 is unable to do event b. This could be made the basis of communication between U1 and U2. This is not the case if we replace M by
illustration not visible in this excerpt
In this case, however, there is still the possibility of U1 influencing the timing for U2 by repeatedly doing event a and so heavily loading the CPU. The specification of secure isolation developed here overlooks the existence of such covert channels, as does Rushby's.
Example 2
Now we try an example in which two users can communicate, that is
illustration not visible in this excerpt
In this case we expect the example not to satisfy our specification, because the system is obviously insecure.
illustration not visible in this excerpt
In this case there are traces of B which have no traces in A to which they are locally equivalent, for example,
illustration not visible in this excerpt
because, in A, both (U1 || M) and (U2 || M) must first do event a independently before they can do c. Example 2 therefore does not satisfy our specification of secure isolation.
It may seem strange that we have used the specification
illustration not visible in this excerpt
in this case, rather than
illustration not visible in this excerpt
where [illustration not visible in this excerpt] (as before), since B⊥ represents the system of Figure 5. What we are doing is testing the traces of the system of Figure 5 with U1 and U2 "disconnected". We assume that there is no communication and find that the system is insecure, because, in fact, communication does exist. This illustrates how the specification we have adopted is the more appropriate one. As a first step to ensuring secure isolation, the system designer must ensure that the alphabets of all users are disjoint.
There are some aspects of security that are obviously outside the power of the computer system designer to enforce, as discussed in section 3.3; in example 2, the event c could be replaced by U1 and U2 talking to each other, which illustrates that computer security must be part of a wider scheme of security for the organization as a whole.
Example 3
A similar difficulty arises if we consider an event to be part of the alphabet of three processes, U1, U2 and M. For example, consider M in Example 2 to be modified to make c a part of its alphabet:
illustration not visible in this excerpt
In other words, the system is secure by our criterion, despite the communication of U1 and U2. The reason is that U1 and U2 are locked by M into a pattern of repeated synchronization, so that it is essentially irrelevant that they are separate communicating processes: they behave as one user as far as event c is concerned. This difficulty vanishes when we move closer to a real system and treat events as communications over shared channels. By convention, one channel serves two processes only, and a communication on one channel has a distinct name (by tagging with the name of the channel) from the same communication on a different channel. It is then not possible for an event to be in the alphabet of three processes.
Example 4
In this example we consider two machines communicating with users over channels (Figure 6). There are two separate cases, with different definitions of the machine M. The machine of Example 4a is clearly secure, and our specification is shown to hold. The machine of Example 4b, however, allows a copy of a message bound for U2 to go to U1, and our specification is not satisfied by the system.
illustration not visible in this excerpt
Figure 6
illustration not visible in this excerpt
(Example 4b)
This example is clearly insecure, because, as a result of U2's input, U1 can "overhear" a message intended for U2. We expect to be able to show that there is a trace of B which is not locally equivalent to a trace of A.
illustration not visible in this excerpt
Following the event 2.u.y, A becomes ready to output on channel 1.v of the machine attached to U2, but there is no user on that channel to receive the communication. At that point, this machine becomes deadlocked.
So, for example, the trace <2.u.x, 1.v.x> can occur on system B, but not on system A, and therefore
illustration not visible in this excerpt
By our definition of refinement, then,
illustration not visible in this excerpt
3.5 Conclusion
In this chapter we have introduced a definition of secure isolation which is intended to express the spirit of Rushby's definition (chapter 2). The notation used is that of CSP, which is particularly suitable for the purpose. We have already noted in chapter 2 the appropriateness of the CSP definition of "traces" of a process. A point which arises from the examples is that non-determinism can be introduced into the specification in a straightforward way, using operators already defined in CSP.
The simple examples given here show how the specification accords with the intuitive idea of security. Further work will be required to extend the specification to more complex systems.
Dealing with systems of more than two users is expected to be straightforward, and the specification may be expressed in slightly modified notation as
illustration not visible in this excerpt
To prove the secure isolation of a system of more than two users will involve showing, using the laws of CSP, that for every trace of B there is a trace of A which is locally equivalent to it. To prove the system is insecure, we are required to find (for any one user) a trace of B not locally equivalent to any trace of A.
The method may also be extended to more complex forms of behaviour of each user and of the machine M; these processes may in turn be combinations of simpler processes. The secure isolation, and other properties, of these subsidiary systems may also be proved by the methods of CSP. It should be possible to find laws for the secure isolation of systems combined using CSP operators, although the usefulness of this may be limited by the considerations discussed below.
The subject of this chapter has been the important requirement to prove systems to have the property of secure isolation. In real, useful systems there will be a further requirement for the introduction of controlled, legitimate information flows. In section 3.3 one possible way of introducing these, by modifying the definition of M so that it partially synchronizes the behaviour of users, was touched on. In example 4b, the alternative definition of M is a simple kind of security policy enforced by the machine, allowing U1 to read U2's communications, but not vice versa. Another possible starting-point is to relax the assumption, with which we began, that each user has a different colour. If each user and its associated channel, and each communication, had a colour tag, M could be allowed to distribute the communication to any user with the same colour that was ready to receive it.
The controlled relaxation of secure isolation has not been explored here, but further work on this will be needed before the methods we have developed will become applicable in practice.
4. SECURE ISOLATION EXPRESSED IN 'Z'
4.1 Introduction
In Chapter 3, we specified secure isolation as a refinement relation between our implementation and a system known to be secure. A refinement relation was derived in terms of the traces model of CSP.
The idea of refinement has already been discussed in Chapter 2, section 2.4. It is strongly suggested by Rushby's proofs of his theorems 2 and 3, where he introduces the "abstraction functions" Φc and ABOPc which map from a shared machine to the private machine which the user c cannot distinguish from the shared machine. We can regard the private machine as a specification or abstract machine which is refined by the "concrete" or shared machine. The shared machine must satisfy the specifications for each individual user.
As described in section 2.4, there is already a definition of refinement, developed from the concept of homomorphism between algebraic structures, and founded in independent work by Hoare [Hoare 72] and Milner [Milner 71]. This definition can be expressed by rules which find a convenient formulation in the Z specification language. It seems worthwhile, therefore, to attempt to specify and prove secure refinement in this way.
To summarize this definition, we can show that an implementation is a correct refinement of a specification if we can demonstrate the existence of a relation between the states of the implementation and those of the specification such that three rules are satisfied. Let CS be the set of states of the implementation (concrete states) and AS be the set of states of the specification (abstract states). The relation between them is R ⊆ AS × CS. The set of initial concrete states is INIT_CS ⊆ CS and of initial abstract states is INIT_AS ⊆ AS. For every operation COP on the concrete states, the corresponding operation on the abstract states is AOP.
The first proof obligation for refinement is that of initialization:
illustration not visible in this excerpt
For every initial concrete state there must be a corresponding initial state that is related to it by R.
The second and third proof obligations must be carried out for every operation on the states. The second is of applicability:
illustration not visible in this excerpt
If the abstract operation can be applied to a particular abstract state (that is, a legal state would result from the operation) and that state is related to a concrete state through R, then the concrete operation may be carried out on the concrete state.
The third is the rule of correctness:
illustration not visible in this excerpt
Suppose that the abstract operation is applicable to an abstract state, and that there is a concrete state related to the abstract state by R. Then the state which results from applying the concrete operation to this concrete state is related by R to an abstract state that can result from the abstract operation.
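For finite state spaces these three obligations can be checked mechanically. The Python sketch below is my own rendering of the usual textbook formulation (it is not claimed to be the exact schema calculus used in Z), with the abstract and concrete operations given as relations, i.e. sets of (before, after) pairs.

    def initialization(init_as, init_cs, R):
        """INIT: every initial concrete state is related to some initial abstract state."""
        return all(any((a, c) in R for a in init_as) for c in init_cs)

    def applicability(AS, CS, R, AOP, COP):
        """If AOP is applicable to an abstract state related to a concrete one,
        then COP is applicable to that concrete state."""
        pre_a = {a for (a, _) in AOP}
        pre_c = {c for (c, _) in COP}
        return all(c in pre_c
                   for a in AS if a in pre_a
                   for c in CS if (a, c) in R)

    def correctness(AS, CS, R, AOP, COP):
        """Every concrete step from a related pair is matched by some abstract step."""
        pre_a = {a for (a, _) in AOP}
        return all(any((a, a2) in AOP and (a2, c2) in R for a2 in AS)
                   for a in AS if a in pre_a
                   for c in CS if (a, c) in R
                   for c2 in CS if (c, c2) in COP)

    AS = CS = {0, 1}
    R = {(0, 0), (1, 1)}
    AOP = COP = {(0, 1)}
    assert initialization({0}, {0}, R)
    assert applicability(AS, CS, R, AOP, COP)
    assert correctness(AS, CS, R, AOP, COP)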
To prove the secure isolation of a shared system we wish to be able to prove that it satisfies these rules in relation to a private machine for every user. It may be possible in particular cases to construct such a private machine and prove the rules for the shared machine. However, following Rushby's development [Rushby 82], it may be possible to find conditions satisfied by the states, inputs and outputs of the shared machine that guarantee that a private machine can be constructed which would satisfy the refinement rules.
This chapter is an attempt to explore these points, and we use a simple Z description of a shared machine and a private machine to do so. As before, we shall try to maintain the essential components of Rushby's approach, while making it as general and non-deterministic as possible.
4.2 The development in Z
The state of a machine depends on the inputs which it has received from its users, and on the "control mechanism" which determines its response to a particular input (which may be null) in a particular state. At any stage of the evolution of the machine, there must be some record of the current state of the computation. This may be regarded as a function from variables VAR to values VAL.
[Schema not included in this excerpt]
The state of a private machine for user c may be defined similarly:
[Schema not included in this excerpt]
For both machines, initial states are a subset of all states:
[Schema not included in this excerpt]
We require an abstraction relation which relates states of the shared machine to states of the private machine:
[Schema not included in this excerpt]
RS is the predicate that holds between a state s of the shared machine and a corresponding state sc of the private machine.
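A minimal sketch of what these declarations amount to, using the names from the surrounding text; the choice between total and partial functions, the names INIT and INITc for the sets of initial states, and the subscripted names for the private machine are assumptions, since the declarations themselves are not shown here.

\[
STATE == VAR \rightarrow VAL, \qquad STATE_c == VAR_c \rightarrow VAL_c
\]
\[
INIT \subseteq STATE, \qquad INIT_c \subseteq STATE_c, \qquad RS \subseteq STATE \times STATE_c
\]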
We are not interested in the nature of the inputs and outputs of the system but it seems to be essential to consider the relationship between an input or output and the part of it which is specific to user c. We shall just call the type of inputs and outputs IOITEMS but note that an actual input or output can be null.
Io is the set of inputs and outputs to the shared machine:
[Schema not included in this excerpt]
and Ioc of inputs and outputs to the private machine:
[Schema not included in this excerpt]
RIO is the predicate which holds between inputs and outputs of the shared machine and those of the private machine.
We shall need to distinguish between those data which are allowable inputs and those which are allowable outputs:
For the shared machine:
[Schema not included in this excerpt]
An operation changes the state of the system. It is convenient to adopt the usual Z abbreviations for this:
[Schema not included in this excerpt]
An operation on a state of a shared machine can now be represented by
[Schema not included in this excerpt]
pred expresses the relationship between a state s, the input in?, the output out! and the resulting state s'. We no longer have a function relating s to s'. Instead the relationship is many-to-many on s and s'. Depending on factors beyond the control of user c, a state s and a given input in? may result in a different out! and s'. But we wish to ensure that c cannot deduce anything about other users from this.
Similarly, for an operation on the private machine:
[Schema not included in this excerpt]
predc expresses the relationship between the observable (to c) substates of sc and sc', and his part of the input inc? and of the output outc!.
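As a sketch, the two operation schemas described here might take the following shape; the schema layout and the decorations (ΔSTATE, in?, out!) are the usual Z conventions, and only the predicate parts pred and predc come from the text.

\[
OP \;\mathrel{\widehat{=}}\; [\, \Delta STATE;\; in? : Io;\; out! : Io \mid pred(s, in?, out!, s') \,]
\]
\[
OP_c \;\mathrel{\widehat{=}}\; [\, \Delta STATE_c;\; in_c? : Io_c;\; out_c! : Io_c \mid pred_c(s_c, in_c?, out_c!, s_c') \,]
\]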
Can we express Rushby's six conditions (section 2.3) in this notation, at the same time broadening them to allow for the possible non-determinism expressed by pred and predc, and for the many-to-many relations that may be expressed by RS and RIO?
We wish to avoid the notion of one single colour being "active" at a time, so Rushby's conditions (1) and (2) become one. They both express a commuting relationship; however, we cannot allow the equality. At most, we can insist that, if we have a state of the shared machine and a related state of the private machine, then if an operation can be carried out on the shared machine, there should be a corresponding final state on the private machine related to the final state on the shared machine. This is the CORR criterion for refinement.
However, we note that our predicates pred and predc also relate the input and output, as well as the initial and the final states. We must include input and output in our criterion, and this will replace Rushby's conditions (3a), (3b) and (4):
[Formula not included in this excerpt]
Rushby's condition (5) expresses the determinism which we wish to avoid, and has no counterpart in our development.
Since all Rushby's operations represent total functions from initial to final states [Rushby 82, p.12] there is no need for the APPL criterion in his development. However, in order to maintain generality, we require it:
[Formula not included in this excerpt]
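One plausible rendering of the two conditions, in the notation of this section; since the formulae themselves are not shown, the exact quantifier structure here is an assumption, but it follows the prose: applicability runs from the private (abstract) machine to the shared (concrete) one, and correctness demands, for every shared transition, a related private transition with related input and output.

\[
\begin{aligned}
APPL_{si}:\quad & \forall\, s, s_c \;\bullet\; RS(s, s_c) \wedge s_c \in \operatorname{dom} OP_c \Rightarrow s \in \operatorname{dom} OP \\
CORR_{si}:\quad & \forall\, s, s_c, in?, out!, s' \;\bullet\; RS(s, s_c) \wedge s_c \in \operatorname{dom} OP_c \wedge pred(s, in?, out!, s') \\
& \qquad \Rightarrow \exists\, s_c', in_c?, out_c! \;\bullet\; pred_c(s_c, in_c?, out_c!, s_c') \wedge RS(s', s_c') \\
& \qquad\qquad \wedge\; RIO(in?, in_c?) \wedge RIO(out!, out_c!)
\end{aligned}
\]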
These conditions are not adequate, however, to prove secure isolation, because the possible initial states of the shared and private machines have not been specified. If this is not done, it is possible to describe shared and private systems which satisfy the CORRsi and APPLsi conditions, but have very different behaviours for the user. This problem also arises in Rushby's approach, because his six conditions do not deal with initialization.
As an example, consider the following simple deterministic system. Suppose that we have a shared machine whose states contain a variable vc, which can take any natural-number value, and that this value is accessible to the user c. We ignore the rest of the variables in the store, which user c cannot observe. Thus
[Schema not included in this excerpt]
Similarly, we suppose there is a private machine with the same states:
[Schema not included in this excerpt]
The two machines have the same operation on behalf of user c. If the value of the variable is 0, the operation leaves it unchanged, otherwise the value is incremented by 1.
[Schema not included in this excerpt]
The abstraction relation relates states with the same value of the variable in each machine:
[Schema not included in this excerpt]
We make one difference: the initial state of the shared machine has the value 1 for the variable, and that of the private machine has the value 0.
[Schema not included in this excerpt]
Now, the operation OP and its abstraction OPc satisfy APPLsi and CORRsi, using the fact that the range of STATE and of STATE0 is ℕ:
[Formula not included in this excerpt]
Yet the two machines will behave very differently towards the user c, since the shared machine will increment his variable ad infinitum, whilst the private machine will never change it from zero.
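A few lines of Python make the divergence concrete; this is only an illustration of the behaviour just described, not part of the specification, and the variable names are invented.

    # Both machines have the same operation: leave 0 unchanged, otherwise add 1.
    def op(v):
        return v if v == 0 else v + 1

    shared, private = 1, 0              # the differing initial states are the flaw
    for step in range(3):
        shared, private = op(shared), op(private)
        print(step, shared, private)    # shared grows: 2, 3, 4, ...; private stays at 0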
These machines also satisfy Rushby's six conditions (section 2.3). Note that condition 2 does not apply to this case, since we are only concerned with operations on behalf of user c. Neither do conditions (3) and (4), since our simple example has no explicit input and output. The two machines satisfy condition (1), and condition (5) is trivially true, since the next operation is always the same.
The problem is that the initial state of the private machine is not an abstraction of the initial state of the shared machine. We must add to our conditions the initialization rule, which may be expressed as
[Formula not included in this excerpt]
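In the notation of section 4.2 this amounts, roughly, to requiring that every initial state of the shared machine has a related initial state of the private machine; the names INIT and INITc for the two sets of initial states are illustrative.

\[
INIT_{si}:\quad \forall\, s \in INIT \;\bullet\; \exists\, s_c \in INIT_c \;\bullet\; RS(s, s_c)
\]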
4.3 Application of the Z approach: the "toy kernel"
In this section we test the approach discussed in section 4.2 on a simple example of a security kernel designed to enforce a particular security policy. This example was originally designed by Millen [Millen 79] and adapted by Rushby [Rushby 81], who refers to it as the "toy kernel", as a demonstration of formal techniques of verification. It is not possible here to give a full account of it. It will be described as the Z description is developed, but for the complete account the reader is referred to Rushby's paper.
[Figure not included in this excerpt]
Figure 7
The hardware is a CPU connected to main memory through a memory management unit (MMU) (figure 7). I/O devices are connected directly to the main memory (that is, they are located at fixed addresses in memory, as in the PDP-11). The CPU contains eight general registers named R(0), R(1), ..., R(7), and the MMU has four mapping registers MAP(0), ..., MAP(3). There are two modes of operation, privileged and unprivileged, and the contents of the mapping registers cannot be changed in the unprivileged mode. The main memory consists of 128 blocks each of 1024 words. Word w of block b is denoted MEM(b,w). A block of memory is accessible to the CPU only if the block number is first loaded into one of the mapping registers. Each I/O device is dedicated to one user and all devices attached to addresses in any one memory block are dedicated to the same user. The environment or virtual machine perceived by each user is said to be his regime.
In Z, we shall define the types
[Schema not included in this excerpt]
and take as given the type of values that may be stored in memory
[VAL].
Now we may begin to define the state of the machine (KERNEL). It has three functions, R, which maps the number of a register to the value which it contains, MAP, which gives the value in any mapping register, and MEM, which gives the value in any particular word of any particular block in main memory.
[Schema not included in this excerpt]
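As a rough sketch of what the omitted declarations presumably contain, based on the hardware description above; regno, mapregno and blockno appear later in the text, while the name word for the index of a word within a block is an assumption.

\[
regno == 0..7, \qquad mapregno == 0..3, \qquad blockno == 0..127, \qquad word == 0..1023
\]
\[
R : regno \rightarrow VAL, \qquad MAP : mapregno \rightarrow blockno, \qquad MEM : blockno \times word \rightarrow VAL
\]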
The basis of the protection offered by the machine is that data can be transferred between main memory and the CPU only through the MMU. That is, there are only two operations that directly reference the main memory. These may be specified by:
[Schema not included in this excerpt]
LOAD(i,j,w) loads general register i with word w of the memory block whose name is loaded in the jth mapping register, and STORE(i,j,w) transfers data in the opposite direction. (Throughout, notation will be kept as consistent as possible with Rushby’s, while using Z conventions).
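Written out informally from this description (a sketch, not the schema text itself), the two operations amount to:

\[
\begin{aligned}
LOAD(i?, j?, w?):\quad & R' = R \oplus \{\, i? \mapsto MEM(MAP(j?), w?) \,\}, \qquad MAP' = MAP, \quad MEM' = MEM \\
STORE(i?, j?, w?):\quad & MEM' = MEM \oplus \{\, (MAP(j?), w?) \mapsto R(i?) \,\}, \qquad R' = R, \quad MAP' = MAP
\end{aligned}
\]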
There is a fixed number (n) of user regimes, which can be identified by the integers 0..n-1:
regime = 0..n-1
The security class associated with a regime, as mentioned in chapter 2, is referred to as its "colour". Under a strict policy of isolation, a regime's identity uniquely determines its colour, but it is convenient to introduce a function RCOLOUR which gives the colour of each regime:
RCOLOUR: regime → colour
In this kernel, exactly one regime is active at any instant, and the identity of the active regime is recorded in the variable AR.
AR: regime
When a regime is active it may relinquish the CPU to another regime by executing a SWAP operation, which will be defined later. When a regime executes a swap, the contents of its general and mapping registers are saved, to be restored when the regime next becomes active. The variables SR and SMAP are used as save areas for the general and mapping registers, respectively.
The variable SR is therefore a function relating a regime and each of the register numbers to the values they held for that regime when it was last active:
SR: regime x regno → VAL
and similarly for the mapping registers:
SMAP: regime x mapregno → blockno
Memory blocks are either owned by a particular regime, or are "free" and are considered to be owned by the system. The system has a "colour" of its own, which we shall call SYSTEM. The colour of a block is given by the function
BCOLOUR: blockno → colour ∪ {SYSTEM}
Since a regime can access a memory block owned by itself only if the number is loaded into a mapping register, not all blocks of a given colour may be accessible to the owner. A regime may request one of its inaccessible blocks to be made accessible with an ATTACH operation.
If a block is free, it may become owned by a particular regime through an ACQUIRE operation, while a block which is owned but is inaccessible may be made free again through a RELEASE operation. A block to which an input/output device is connected may not be RELEASEd, however, since another regime could then ACQUIRE that block and gain control of the I/O device. To prevent this possibility there is a boolean state variable FIXED(b) which has a value true whenever b is a block which may not be RELEASEd.
FIXED: blockno → { true, false }
To summarize the state of the kernel, the complete schema is presented:
[Schema not included in this excerpt]
The definitions of the operations SWAP, ATTACH, ACQUIRE and RELEASE may be expressed in Z by the following schemas. For clarity, a variable which is unchanged by a particular operation is usually omitted from the schema for that operation.
In a SWAP, the next regime becomes the active regime, in rotation. The contents of the registers are saved in the save area appropriate to the regime ceasing to be active, and the values in the new regime's save area are restored to the registers.
[Schema not included in this excerpt]
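A hedged sketch of what the SWAP schema expresses, with regimes taken in cyclic order as the text describes; the arithmetic on regime numbers and the treatment of the components left unchanged are assumptions.

\[
\begin{aligned}
& AR' = (AR + 1) \bmod n \\
& \forall\, i : regno \;\bullet\; SR'(AR, i) = R(i) \;\wedge\; R'(i) = SR(AR', i) \\
& \forall\, j : mapregno \;\bullet\; SMAP'(AR, j) = MAP(j) \;\wedge\; MAP'(j) = SMAP(AR', j)
\end{aligned}
\]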
The ATTACH operation requires the number of the block to be ATTACHed and the number of the mapping register through which it will be made accessible. The first two predicates are preconditions: the block to be ATTACHed must have the same colour as the active regime, and the block must not be currently accessible. The other predicates describe the resulting state: the number of the block, b?, is now in the j?th mapping register; that block is now accessible. The block, the number of which was previously in that register, is no longer accessible.
[Schema not included in this excerpt]
The precondition of the ACQUIRE operation is that the block requested, b?, should be free, and the effects are that this block acquires the colour of the active regime, and all words in the block are re-initialized to zero.
[Schema not included in this excerpt]
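Informally, and again as a sketch rather than the omitted schema (it is assumed that 0 is a member of VAL):

\[
\begin{aligned}
& BCOLOUR(b?) = SYSTEM \\
& BCOLOUR' = BCOLOUR \oplus \{\, b? \mapsto RCOLOUR(AR) \,\} \\
& \forall\, w : word \;\bullet\; MEM'(b?, w) = 0, \qquad \text{with } MEM \text{ unchanged on all other blocks}
\end{aligned}
\]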
In a RELEASE operation, the block cannot be relinquished unless it has the colour of the active regime, and is neither fixed nor accessible. After the operation it is free (that is, it has the "colour" SYSTEM).
[Schema not included in this excerpt]
In this simple example, specifications are omitted that would necessarily be considered in a real system, such as an initial state, and program control (that is, how the operations are selected for execution).
The point about this example is that there is a security flaw: Rushby [Rushby 81, p.17] points out that "its insecurity is intended to be neither trivial nor obvious". His purpose in introducing it is to discuss the successes and failures of various verification methods for secure systems, including access control verification, information flow analysis and proof of separability. This is an artificial example in that, in verifying a real system, the abstract specification (the private machine) will normally precede the design of the implementation (the shared machine) and the abstraction relations and proofs will be developed along with the implementation; the proofs should assist in the design.
The insecurity in the system is identified by Rushby by the use of information flow analysis, and arises from the operation ACQUIRE. For example, suppose that the regime which is currently active performs the operations ACQUIRE(0), ATTACH(0,0) and LOAD(0,0,0). That is, it first requests ownership of block number 0, then attaches block 0 to mapping register 0, and loads word 0 of that block into general register 0. If block 0 was initially free then the block will gain the colour of the active regime, and will be made accessible to that regime. Since all words in the block are zeroed when they are ACQUIREd, R(0) will be set to zero. If, however, block 0 is already owned by some other regime, the currently active regime will not be able to ACQUIRE it, the mapping register 0 will remain attached to the block to which it was previously attached, and the final value in R(0) will be the value in word 0 of the block, which need not be zero. Hence, depending on whether or not a different regime has previously acquired a block, the value of R(0) obtained by the active regime will be zero or non-zero. This represents a communication channel (albeit a noisy one) between the two users.
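To make the channel concrete, here is a small Python sketch of the scenario just described. It models only the outcome visible in R(0); the function name and its parameters are invented for illustration and are not part of the kernel specification.

    def probe(block0_is_free, prev_word0):
        # Models the sequence ACQUIRE(0); ATTACH(0,0); LOAD(0,0,0).
        # prev_word0 is word 0 of the block currently attached via MAP(0).
        if block0_is_free:
            # ACQUIRE succeeds: block 0 becomes ours and is zeroed,
            # ATTACH(0,0) succeeds, and LOAD reads word 0 of block 0.
            return 0
        else:
            # ACQUIRE and ATTACH are refused, so LOAD reads word 0 of
            # whatever block was previously attached to mapping register 0.
            return prev_word0

    # By first storing a non-zero value in the attached block, the active
    # regime learns from probe() whether another regime has acquired block 0.
    print(probe(True, 42), probe(False, 42))    # prints: 0 42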
Before discussing this system by our method, we need to consider the specification of ACQUIRE further. The definition above, which is Rushby's definition translated into Z, is incomplete. It is clear that the active regime must make a definite request to ACQUIRE a block, in ignorance of the colour of that block, with two possible outcomes: it may be allowed or denied. Hence this operation should be total on states, and a revised definition should be
[Schema not included in this excerpt]
There will be no change to any part of the system if an active regime makes a request to ACQUIRE a block which is owned by another regime, nor if it makes a request to ACQUIRE a block which it already owns (a possibility that has not already been ruled out elsewhere).
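In outline, then, the total version is a disjunction of the successful case and a "no change" case; it might take a form such as the following, where the ΞKERNEL convention (no state component changes) is the usual Z one and the decomposition into two disjuncts is an assumption.

\[
\big(\, BCOLOUR(b?) = SYSTEM \;\wedge\; \text{acquire and zero } b? \text{ as before} \,\big) \;\vee\; \big(\, BCOLOUR(b?) \neq SYSTEM \;\wedge\; \Xi KERNEL \,\big)
\]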
In order to explore this insecurity using Z, we need now to specify the state and operations of the private machine. For completeness, all the operations are specified, although it is clear that we only need to consider the definition of ACQUIRE to show that the system is insecure. The private machine must show the same behaviour to the user as the shared machine does, but all blocks in the private machine must be owned either by the one regime or by the system.
We call the state of the kernel of the private machine for regime r KERNELr. It has general registers, mapping registers and memory analogous to those of the shared kernel:
[Schema not included in this excerpt]
The following functions of the private machine are subsets of those of the shared machine:
[Schema not included in this excerpt]
The operations of LOAD and STORE for the private machine are the same as for the shared machine but with R, MAP and MEM replaced by the appropriate functions for r:
[Schema not included in this excerpt]
The operation SWAP is simply a no-op for the private system:
[Schema not included in this excerpt]
The operation ACQUIRE can only result in a refusal if the regime already owns the block, since otherwise it must be free:
[Schema not included in this excerpt]
The operation ATTACH on the private machine is analogous to that on the shared machine:
[Schema not included in this excerpt]
as also is
[Schema not included in this excerpt]
We may now summarize the state of the private system:
[Schema not included in this excerpt]
and define the abstraction relation:
[Schema not included in this excerpt]
where we define rcoloured to be the set of all block numbers that have the colour of regime r:
[Schema not included in this excerpt]
The abstraction relation expresses the following points. The active regime in the private kernel is always r. The private machine has the same memory locations as the shared machine, and in a given state of the private machine the blocks which belong to the regime r are the same as those that belong to regime r in the corresponding state of the shared kernel. The contents of the registers of the private machine will be the same as those of the shared machine when r is the active regime, but will be the same as those of r's save areas when r is not active. Outside these conditions, the relation between the memories and registers of the two machines is undefined.
The only colour allowed for a regime on the private machine is that of r, and a block can either be owned by r or be free. The fixed blocks of the private machine are the same as r's fixed blocks on the shared machine.
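Gathering the points just listed into one place, the abstraction relation can be sketched roughly as follows; the subscript r marks components of the private machine, and the handling of any components not mentioned in the text is left open.

\[
\begin{aligned}
& AR_r = r \\
& \forall\, b \in rcoloured;\; w : word \;\bullet\; MEM_r(b, w) = MEM(b, w) \\
& AR = r \;\Rightarrow\; (\forall\, i \;\bullet\; R_r(i) = R(i)) \wedge (\forall\, j \;\bullet\; MAP_r(j) = MAP(j)) \\
& AR \neq r \;\Rightarrow\; (\forall\, i \;\bullet\; R_r(i) = SR(r, i)) \wedge (\forall\, j \;\bullet\; MAP_r(j) = SMAP(r, j)) \\
& \{\, b : blockno \mid BCOLOUR_r(b) \neq SYSTEM \,\} = rcoloured \\
& \forall\, b \in rcoloured \;\bullet\; FIXED_r(b) = FIXED(b)
\end{aligned}
\]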
Now we consider the insecurity inherent in the definition of ACQUIRE.
ACQUIRE satisfies the applicability rule. To prove APPLsi for ACQUIRE, we need to prove
[Formula not included in this excerpt]
Proof
[Proof not included in this excerpt]
This exhausts all cases.
Since the applicability criterion is satisfied by ACQUIRE, we expect to find the insecurity exposed by the correctness criterion, which for ACQUIRE is
[Formula not included in this excerpt]
There are three cases to consider. If the block requested by the active regime on the shared machine is free, that block is ACQUIREd by the regime. There is a corresponding state of the private machine in which the regime has also ACQUIREd the block.
If the block requested by the active regime on the shared machine is already owned by the active regime, there is no change in the resulting state.
If the block requested by the active regime is neither free nor owned by the regime, then there is no change of state. There is, however, no way for the active regime to distinguish this situation from one where the block is free, except by the result of the ACQUIRE operation, so in this case information flows from the regime that owns the block to the active regime. Most important from our point of view is that there is no resulting state of the private machine that is related to this resulting state of the shared machine through ABS_Kr, since the only initial states of the private machine which lead to no change are those in which the active regime already owns the block, and that is a distinct case in the shared machine. Hence the system fails the CORRsi criterion on these grounds:
[Formula not included in this excerpt]
Although this example was prepared in the knowledge of the insecurity of the kernel, as discovered by information flow analysis, it does seem that this method has more readily exposed the insecurity. This is partly due to the more complete specification of ACQUIRE, but the most important factor is the test for secure isolation by the introduction of the private kernel as an abstraction of the shared kernel.
4.4 "Denial of service"
In section 3.3, example 1, we said that we could not use the method described there in CSP to specify the absence of the "denial of service" type of insecurity; that is, where one user can prevent another from carrying out an action by performing, or by not performing, an action himself. However, in the example of the "toy kernel" described in section 4.3, we have discovered such an insecurity: in this case, one user, by ACQUIRing a block, can prevent another user ACQUIRing the same block.
This shows that the two definitions of secure isolation we have used are not equivalent. Informally, the difference between them is in the abstract (or private) system chosen in each of the specifications. In the CSP approach we have chosen an abstract system (the system A) which, if the real system is secure, has all the possible behaviours of the real system, but the proof of security involves hiding all the events of the abstract system which are not part of the alphabet of the user under consideration. However, the abstract system has a state corresponding to each state of the real system.
In the Z approach, we chose abstract systems for each user so that there need be no abstract state to correspond to a particular concrete state. When, in the real system, there are states that differ only in variables that are hidden from a particular user, that user may experience "denial of service". In the abstract machine for that user, these variables are missing, and the result is a failure of the refinement rules.
This point may be illustrated by expressing Example 1 of section 3.3 in Z and attempting to prove its security by the method of this chapter.
We start by expressing the process B of section 3.3 as a Z set of states. Let M be the state in which the shared machine can do event a (corresponding to process M in section 3.3) and let a second state be the one in which it can do event b (corresponding to process M/<a>).
There are two operations: the doing of event a and the doing of event b, which we shall model as outputs.
[output]
[Schema not included in this excerpt]
The machine alternates the output of a! and the output of b!.
Now suppose we choose, for our abstract machines, the following states:
[Schema not included in this excerpt]
Also, as it stands, the machine does not satisfy INITsi, since there is no initial state of private machine 2 which corresponds to the initial state of the shared machine.
4.5 The revised "toy kernel"
To remove the security flaw in the ACQUIRE operation of the kernel, Rushby proposes to allow any regime to own no more than MAX memory blocks, where MAX × n ≤ 128 (n is the number of regimes and 128 is the number of blocks). As there is no dynamic sharing in this system, the operations ACQUIRE and RELEASE and the variables RCOLOUR, BCOLOUR and FIXED are no longer required.
In the ATTACH operation of Rushby's new definition, each regime uses virtual, rather than actual, block numbers. This requires a new function ACTUAL to map virtual to absolute block numbers. This new function introduces complexity without introducing any new principles, and has not been used here, but the function BCOLOUR has been retained.
The state of the new shared kernel may be described by
[Schema not included in this excerpt]
and of the private kernel by
[Schema not included in this excerpt]
The abstraction relation becomes
[Schema not included in this excerpt]
The simplifications here are because the memory of the private machine is simply the same as the "rcoloured" blocks of the shared machine.
Now, if the operations LOAD, STORE, ATTACH and SWAP are secure in the old kernel KERNEL then they will be secure in the new one KERNEL2 also, because the new kernel is essentially a special case of the old one in which all blocks are FIXED, i.e. ran FIXED = { true }. However, the operation SWAP is one which receives particular attention in Rushby's paper [Rushby 81]. The reason for this is that the security of SWAP cannot be demonstrated by the information flow analysis method. A discussion of this method is out of place here, but the root of the problem is the fact that the active regime restores the registers of the incoming regime as it executes the SWAP operation, and is therefore writing a variable of a different colour (that of the incoming regime), even though a value is being transferred only from another variable of the same colour as the incoming regime.
Nevertheless, the SWAP operation is secure, and can be proven so by the method discussed here. For KERNEL2, the swap operation is SWAP2, which is analogous to SWAP:
[Schema not included in this excerpt]
and for KERNEL2r, the corresponding operation is a no-op, as is SWAPr:
[Schema not included in this excerpt]
SWAP2 can be applied to any state, that is, it is total on states. Therefore, pre SWAP2 is true, so that APPLsi is true.
The correctness criterion is
[Formula not included in this excerpt]
SWAP2r is total, because there is always a resulting abstract state identical to the previous state under this operation. There are three cases that need to be considered. The first is that r is the active regime, in which case it will not be active after SWAP2. The second case is that r is not active before, but is active after SWAP2. The third case is that r is active neither before nor after the operation. In each case, the abstraction relation relates the unchanged state of the private kernel to the state of the shared kernel both when the regime is active and when it is not. Therefore, there is always an abstract state which is related to the final state of the shared machine by ABS_K2r, and the correctness criterion is satisfied by SWAP2.
Rushby also discusses an alternative version of SWAP, called NEWSWAP, which, instead of saving and restoring the general registers, zeros them.
[Schema not included in this excerpt]
"Both the system with SWAP and that with NEWSWAP are secure because, in each case, a regime knows, when it gives up control, exactly what values its general registers will contain when next it receives control back again. Now suppose both SWAP and NEWSWAP are available. This system is manifestly insecure since a regime can signal to its successor by choosing whether to zero the registers of the incoming regime (using NEWSWAP) or to restore their previous values (using SWAP). But which is the insecure operation - SWAP or NEWSWAP? Of course it is neither individually: it is the presence of both which is insecure." [Rushby 81, p.30}.
It is a case like this which illustrates the value of specifying each regime's private machine as a part of the design of a shared system. Suppose that the overall specification of the system requires both the operations SWAP and NEWSWAP. A regime may see its general registers restored or zeroed according to which operation may be chosen by the outgoing regime. Therefore, the regime on its abstract machine must also be able to see its registers modified in either way. If this is not the case, then the error will readily be exposed when the attempted proof of CORRsi is carried out, if the abstraction relation has been chosen correctly, because there will be resulting states on the shared machine that have no related state on the private machine. The specification of the private machine, to be complete, will necessarily state the conditions under which the machine will give each possible result. This information will be expressed by predc (section 4.2). In the design of the shared machine, pred must be chosen to express the conditions under which the SWAP and NEWSWAP operations will be selected, and should be such that the refinement rules are satisfied. That having been done, each regime using the shared machine will see no behaviour that it could not possibly see on the private machine, and the system will be secure.
However, we are unable to specify against "timing" channels (as opposed to "denial of service"). For example, although on the private machine we have the possibility of the registers being either restored or zeroed, on the shared machine one regime may, by choosing to do a particular action, cause the following regime to observe predominantly one behaviour rather than another. Other means of overcoming this problem must be found: for example, the order of swapping of regimes might be randomised.
4.6 Conclusion
In this chapter, we have attempted to express Rushby's idea of "secure isolation" in the most general form, using the notation of Z and a proof method derived from the principle of refinement of an abstract system by its implementation.
The specification which we have arrived at in this chapter is different from the one found in chapter 3. The choice of the abstract state determines whether a specification can be used to eliminate at least some insecurities of the "denial of service" kind, and this question seems to merit further investigation.
As discussed in the conclusion of chapter 3 (section 3.5), only the definition of "secure isolation" has been considered in this work. It will be necessary to consider methods of introducing permitted information flows, in order to make these methods generally useful.
In both this chapter and the previous one, the specification of secure isolation has been developed with the aid of simple examples, but it has not yet been applied to more complex systems of interacting processes, which would better resemble real computer systems. Also, the specification of chapter 4 has not yet been applied to systems with input and output, so the abstraction relation ABS_IOc has not yet been used. If this specification is applied to a more realistic type of system, it may be further developed and improved. Above all, the best test of this method will be in its use to develop a system, rather than in its retrospective application to existing ones.
These questions will be receiving attention in further work.
REFERENCES
[Reference list not included in this excerpt]