On Sat, 2005-11-05 at 04:22 +0100, [email protected] wrote:
> It turns out
> that implementing persistence in the "real world", with persistence
> barriers to the non-persistent core and to the outer world, you have to
> deal with many issues you hoped to avoid and that wouldn't exist in a
> perfect all-persistent world.
Hmm. Here are a couple more bits of background on this.
My originally intended PhD was a distributed consistent checkpointing
mechanism. I arrived at one that would operate successfully. My
constraint at the time was that no machine should see an individual
"freeze" exceeding 10ms. That precluded global coordination, which is
why the problem is hard.
After I got the design stable, I realized something embarrassingly
obvious: if you take all the machines in the country and gather them
into a single system, then the failure of a machine in California can
cause a machine in New York to roll back. Not good. Fine at the scale of
a lab or a data center, not good at the scale of a country.
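The rollback cascade can be sketched concretely. This is a hypothetical illustration, not the actual checkpointing protocol from my thesis work: it just computes the transitive closure of machines that must roll back when one fails, given which machines have exchanged messages since their last checkpoints. The machine names and message log are invented.

```python
# Each entry records a message sent after the sender's last checkpoint:
# (sender, receiver). If the sender fails and rolls back, that message
# becomes "orphaned" and the receiver must roll back as well.
messages_since_checkpoint = [
    ("california", "chicago"),
    ("chicago", "new_york"),
]

def rollback_set(failed, messages):
    """Transitive closure of machines forced to roll back by one failure."""
    affected = {failed}
    changed = True
    while changed:
        changed = False
        for sender, receiver in messages:
            if sender in affected and receiver not in affected:
                affected.add(receiver)
                changed = True
    return affected

print(sorted(rollback_set("california", messages_since_checkpoint)))
# ['california', 'chicago', 'new_york']
```

One failure in California drags in Chicago, and through Chicago, New York. Within a single persistence domain the cascade is bounded and acceptable; across a country-sized system it is not.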
What this mainly illustrates is that domains of persistence are
necessary, and an all-persistent world would not be perfect at all.
> I'm not claiming persistence is bad. It may be very well that the total
> amount of problems to solve in a persistent system turns out to be
> smaller than in the opposite case. Yet it seems they must be roughly
> of a similar order.
> Just wanted to point out that persistence is not a magic silver bullet
> solving many hard problems for free.
Indeed. This is why, for a while, Coyotos was not intended to be
persistent.
But let me give my current view of it:
Performance: From a performance perspective, there is no disadvantage to
persistence that we have been able to identify. In many cases there
appear to be performance *advantages*. There are ugly corner cases in
both styles of system where performance can be bad. With proper resource
accounting these are hard to achieve in real scenarios in either type
of system.
Complexity: The EROS-style implementation is much simpler than a
conventional file system, and it is therefore a better basis for a
robust system. One of my concerns about Coyotos is that the persistence
layer will become more complicated.
Code impact: There is very little impact on most user-visible
applications, because the need to save documents and serialize them for
cut&paste seems to remain. There is a small simplification of various
helper components, and there is a significant simplification for
security-critical subsystems and various server subsystems.
Implications of persistence boundaries: every boundary that has been
identified (foreign file systems, network links) exists in conventional
systems too. These appear to be unavoidable in any design. The
difference is that in a conventional system there is no effective
boundary you can draw and say "inside this boundary, everything is
self-consistent".
Consistency impact: taken in combination with IPC, persistence makes
certain kinds of consistency much easier to achieve, because it sits on
a transactional mechanism. I can write a program that either
a) performs five steps in sequence, or
b) performs k of these steps that get saved before a system
failure, and completes the remaining steps on resume, or
c) performs none of these steps.
That is: preservation of control flow across restart does appear to
simplify a bunch of transaction-like design patterns.
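The pattern above can be sketched in a few lines. This is an illustrative model, not an EROS or Coyotos API: a persisted step counter stands in for the checkpointed process image, so that on restart the program resumes at step k rather than repeating or skipping work.

```python
# The dict stands in for state that survives a restart because it is
# part of the persistent image. All names are illustrative.
state = {"next_step": 0}

def run_steps(state, steps):
    # Before anything runs, next_step is 0 (case c); after a crash it
    # is k (case b); after completion it is len(steps) (case a).
    while state["next_step"] < len(steps):
        steps[state["next_step"]]()    # perform one step
        state["next_step"] += 1        # checkpointed with the image

log = []
steps = [lambda i=i: log.append(i) for i in range(5)]
run_steps(state, steps)
print(log)                             # [0, 1, 2, 3, 4]

# Simulated crash after two steps on a fresh run:
state2, log2 = {"next_step": 0}, []
steps2 = [lambda i=i: log2.append(i) for i in range(5)]
for _ in range(2):
    steps2[state2["next_step"]]()
    state2["next_step"] += 1
# crash and restart: the persisted counter lets us resume at step 2
run_steps(state2, steps2)
print(log2)                            # [0, 1, 2, 3, 4], each step once
```

The point is that the "resume at k" behavior comes from the persistence mechanism itself; the application does not need its own write-ahead log to get it.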
Security impact: global persistence sits underneath the ability for a
constructor (or any other process) to hold capabilities that it did not
get from a conventional file system (i.e. from a place where anyone else
might obtain them). The ability to take a large authority, and reliably
wrap it with a program that mediates its use in such a way that the
program and the authority cannot be separated from the outside, is a
very powerful tool.
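The shape of that wrapping can be sketched as follows. To be clear about what is assumed: in EROS the inseparability of program and authority is enforced by the kernel and the constructor; here ordinary Python encapsulation merely illustrates the structure, and both class names are invented.

```python
class RawDisk:
    """Stands in for a large authority: full read/write access."""
    def __init__(self):
        self._blocks = {}
    def read(self, n):
        return self._blocks.get(n, b"")
    def write(self, n, data):
        self._blocks[n] = data

class ReadOnlyDisk:
    """Mediator: clients hold this, never the RawDisk it wraps."""
    def __init__(self, disk):
        # The held capability. Global persistence is what lets this
        # binding survive restarts without ever touching a file system
        # from which someone else could fetch the raw authority.
        self._disk = disk
    def read(self, n):
        return self._disk.read(n)      # write() is simply not offered

disk = RawDisk()
disk.write(0, b"boot")
client_view = ReadOnlyDisk(disk)
print(client_view.read(0))             # b'boot'
print(hasattr(client_view, "write"))   # False: use is mediated
```

Without a persistent store for the wrapper's private capability, the binding would have to be reconstructed at boot from some externally readable place, which is exactly the separation the pattern is meant to prevent.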
My personal opinion is that the first three are not compelling one way
or the other, and given this, the last two cause me to lean in favor of
persistence.
L4-hurd mailing list