The Eternity Service

- a new solution to the data storage and recovery problem?


Abstract: The Eternity Service is a new approach to the way we manage our data. By using many servers around the world, interconnected via the Internet, very high reliability of storage can be achieved. This article gives a very brief overview of our notion of the Eternity Service: only the basic principles and the most important protocols are described, both informally. The purpose of the article is to determine whether our approach to the problem has a chance to succeed.


  1. Introduction

    The risk of data loss is an inherent problem of all data processing and, in fact, can never be avoided completely. Apart from an appropriate policy, the only remedy is to back up critical data objects. Reasonable backups can reduce the risk to an acceptable degree. The main problem with backups is that the backup copies themselves may be damaged, or at least inaccessible, at the very time they are needed.

    We now want to introduce a new way to create, store, and handle copies of highly important data - the Eternity Service - with a very high degree of reliability [And]. The Eternity Service is resistant not only against classical threats such as natural disasters and vandals, but also against political or court decisions, orders of religious leaders, activities of secret services, and so on.

    The main ideas behind the Eternity Service (in the following text, the abbreviation "Service" denotes the Eternity Service) are uncertainty and data diversity. Nobody knows where the data is stored, how many copies were created, or which copies still survive. There is no central authority; the whole system is fully distributed. Data is accessible by anybody, and no ownership is defined. The Service does not even use a notion of a user in the sense of classical operating systems. If we add that all communication is encrypted, and that padding traffic is generated whenever regular communication is missing or not intensive enough, our picture is almost complete. The lack of information just described is essential for the resistance of the Service against intentional attacks.

    We hope the Service can provide reliability that is almost absolute. The quality of an encryption algorithm can be measured by the estimated amount of time needed to break the cipher; for top-level algorithms this time considerably exceeds the age of the Universe. Our problem is much more complicated, but we expect the reliability of the Service to be comparable to the ability of our whole civilization to survive.

    One specific feature of the Service is worth our attention: once the process of data storage is completed, there is no way to change the stored information. If reliable timestamping were added to the Service, it could be used for many more purposes.

  2. Design decisions

    Our goal was to design the Service so that its widespread adoption is probable. We assume that at least the first installations of Service servers will be run by enthusiasts. Even in the following period, when installations will mostly be run by professionals, it is undesirable for groups of machines in the Service to be owned by a single company. In accordance with these assumptions, we prefer to build the system from relatively independent components that can cooperate with any other. Furthermore, we did our best to design the whole system so that no custom hardware devices are needed. Custom hardware tends to be expensive, and it is difficult to replace in the case of a design error or when it becomes obsolete. These features would have a negative impact on the growth of Service usage.

    On the other hand, the optional use of a hardware device could improve some important features of the Service significantly, so we will introduce an appropriate hardware device in a further part of our work. Such a device brings better quality of service and better protection for the server operators themselves. Installations not using the hardware device have to emulate its functions in software.

    There is no hierarchy between the individual parts of the Service; we merely use different types of servers. Anybody can set up a server of a selected type, launch it, and take part in the Service without any approval or certification. This simple design has very important advantages: the Service can grow easily and, in the case of a particular server loss, no negative consequences affect any other server. The flat architecture of the Service brings better resistance against disasters and attacks.

    We believe the lack of information is the best defense against intentional attacks. Random selection of the target servers that will store our data supports this uncertainty. If the total number of Service servers is sufficiently high, random selection also defends the data against natural disasters and unintentional faults. Randomness plays an important part in our design of the Service.

    Recently, many useful standards have been defined, namely X.509 certificates [X509], the resource reservation protocol [RSVP], perfect forward secrecy [PFS], and many others. It seems reasonable to use these standard mechanisms whenever possible, so we will do so.

  3. Basic structure

    The whole Service is assembled from servers of three types: mixes (MX), access certificate servers (ACS), and eternity servers (ES). The various financial institutions that will carry out all payments necessary for the real functioning of the Service can be considered a special part of it. The exact form of these financial institutions depends on local circumstances, but we hope an appropriate solution is possible in the majority of countries.

    1. Eternity servers

      Eternity servers form the core of the Service. They provide capacity for data storage and for retrieval of requested files. In fact, an ES executes only two types of commands. The first is a request to store data for a specified period. If the server agrees, it receives the data and stores it, recording data specifications such as keywords, filename, etc. After the specified period, the data is deleted automatically.

      The second command is to find a particular file and send it to the requester. The server responds in the following manner. First it checks whether it holds the requested file itself. If not, it randomly selects a certain number of other ESs and asks them to send the file. If the request fails, the server increases the number of contacted servers and repeats this step until it receives the file, which it then sends to the requester. If it receives more than one copy of the identical file, it randomly selects one to be sent as the result.
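      The two ES commands can be sketched as a toy model. Everything below is our own illustration (the `EternityServer` class, the flat peer list, and the fan-out numbers are assumptions, not part of the proposal); it shows only the automatic expiry of stored data and the escalating random search.

```python
import random
import time

class EternityServer:
    """Toy eternity server: stores (data, expiry) pairs and searches
    other servers with a randomly chosen, gradually growing fan-out."""

    def __init__(self, peers=None):
        self.files = {}           # name -> (data, expiry timestamp)
        self.peers = peers or []  # other EternityServer instances

    def store(self, name, data, period, now=None):
        """First command: keep the data for the agreed period."""
        now = time.time() if now is None else now
        self.files[name] = (data, now + period)

    def purge(self, now=None):
        """Delete files whose storage period has elapsed."""
        now = time.time() if now is None else now
        self.files = {n: (d, e) for n, (d, e) in self.files.items() if e > now}

    def find(self, name, fanout=3, max_fanout=24):
        """Second command: local check first, then ask a random,
        growing subset of other servers until a copy arrives."""
        if name in self.files:
            return self.files[name][0]
        while fanout <= max_fanout:
            picked = random.sample(self.peers, min(fanout, len(self.peers)))
            copies = [s.files[name][0] for s in picked if name in s.files]
            if copies:
                return random.choice(copies)  # one of the identical copies
            fanout *= 2                       # widen the search and retry
        return None
```

      A real server would forward requests recursively and through mixes; the flat peer list only illustrates the escalation loop.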

      One of the main tasks of an ES is time synchronisation, to obtain reliable information about the current time. This information is crucial for the proper functioning of the Service: bogus time information could lead to premature destruction of important files, as will be shown later.

      Each ES issues an access certificate that allows other servers and users to contact it. The access certificate is multicast to the Access Certificate Servers. When anybody wants to contact an ES, they request its access certificate from an ACS and use it to form the request message.

      Besides this, the servers accomplish various minor tasks. The most interesting, at least for the server owner, is receiving payments for stored data: the server must prove it still possesses the data before the bank transfers money to its account. A second important task is generating padding traffic with the other servers, so that an attacker has no possibility to analyse the communication between servers and guess which of them holds the requested data.
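      The padding idea can be illustrated by a small sketch (our own; the fixed frame length, the flag byte, and the function name are assumptions): when no real message is queued, the server emits a random dummy of identical size, so that after link encryption an observer cannot tell the two apart.

```python
import secrets

FRAME_LEN = 1024  # every frame on the wire has the same fixed length

def next_frame(queue):
    """Return the next frame to transmit: a real queued message when one
    is waiting, otherwise random padding of identical length. The
    leading flag byte lets the legitimate receiver discard padding;
    to an outside observer of the encrypted link both look alike."""
    if queue:
        msg = queue.pop(0)
        return b"\x01" + msg.ljust(FRAME_LEN - 1, b"\x00")
    return b"\x00" + secrets.token_bytes(FRAME_LEN - 1)
```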

      Special attention is deserved by the fact that an ES does not accept any other command, in particular no command to delete or modify a file. This way nobody can manipulate the stored information and thus "change history". Although the server holds the data encrypted with its secret key, and the data is re-encrypted with a transport key before sending, the Service does not guarantee the privacy of the data. The integrity checks computed by the servers are likewise intended only for internal use of the Service. Users should solve these problems themselves.

    2. Mixes

      We believe the strength of the Service rests on the fact that it is very difficult to locate a particular server, and this difficulty grows with the total number of servers. Introducing mixes is a simple and cheap method to achieve this.

      Anybody communicating with any part of the Service never sends messages directly to the recipient. The message is sent through an arbitrary number of intermediate nodes - mixes - similarly as in [Cha] and [Gold]. Each node along the path knows only its predecessor and successor, so an attacker who wants to locate the target server has no possibility other than to follow the path and compromise all the servers on it.

      Mixes are used for the construction of communication paths only. Each mix publishes its public key certificate and its address; it then simply receives incoming traffic and resends messages to the predefined receivers. Mixes adhere to perfect forward secrecy [PFS]. Similarly to an ES, a mix takes part in generating padding traffic to frustrate traffic analysis.
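      As a toy illustration of this behaviour (our own sketch; the XOR keystream derived from SHA-256 is a loud stand-in for a real symmetric cipher), each mix superencrypts the passing data with its own key Si, and the final recipient, holding all the keys, strips every layer:

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter keystream.
    Applying it twice with the same key restores the plaintext."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def relay_through_mixes(payload: bytes, mix_keys) -> bytes:
    """Each mix on the path encrypts the data with its own key Si
    before forwarding; it never sees more than its one layer."""
    for key in mix_keys:
        payload = xor_cipher(payload, key)
    return payload

def strip_layers(payload: bytes, mix_keys) -> bytes:
    """The recipient, knowing all the Si (from the key-box described
    later), removes every layer to recover the original data."""
    for key in reversed(mix_keys):
        payload = xor_cipher(payload, key)
    return payload
```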

      We suppose that running a mix does not bring any further problems to the maintainers of the many Internet routers, so it is possible that a great number of mixes will come into existence very soon. An attacker can collect a number of ES access certificates, each of which reveals the location of the first server on the path to the ES that issued it. With high probability, however, this first server will be some mix, which is nearly worthless to the attacker: destroying such a server does not bring any significant loss to the Service, because it does not endanger any stored data.

    3. Access Certificate Servers

      We attempt to hide any information about the ESs, especially their location, and the mixes are kept in the background as much as possible too. But there must be a way to contact the Service without compromising the security of the ESs and mixes. To overcome this problem, we use Access Certificate Servers.

      ACSs are assumed to be well known and widely advertised; they represent the entry point to the Service. Mixes and ESs contact the ACSs through multicasts [MCST]. The ACSs publish ES access certificates and the public key certificates of mixes. A user who wants anything from the Service has to contact an ACS, from which he receives the access certificate of some ES and the requested number of mix public key certificates. Using this material, the user can construct a request message to the Service.

      An ACS does not keep any logs of its history. It holds certificates, accepts new ones, and discards expired ones. An ACS is not responsible for having the complete set of all available certificates; it rather holds its own subset. When the Service becomes large, it is possible to use several different multicasts to contact the ACSs; each ACS then randomly selects which multicasts it will receive. It is recommended that servers do not contact an ACS via multicast directly, but through some mixes, similarly to the way users contact servers.

      The protocol between the user and the ACS should be designed so that the user can select certificates in such a way that the ACS cannot determine which ones were selected.

    4. Role of financial institutions

    It must be taken into account that the operation of servers, especially ESs, is long-term and costly. The Service will not perform well without a good payment mechanism, and we will need very special financial services. Financial institutions (for the sake of simplicity, the abbreviation "bank" will be used below) must be able to follow new, specific protocols. They must bring a feeling of safety and certainty to users and server operators.

    As with the other parts of the Service, we assume that a large number of different banks in many countries will be involved. This arrangement decreases the threat of freezing the accounts from which payments to server operators are taken.

    A bank must be able to receive a payment to an anonymous account. Along with this payment, the bank signs an engagement to pay a specified value at some given time or later, upon receiving a defined message. The message contains a proof that the server is still holding the data entrusted to it, so the server operator cannot cheat: he receives the money only after the period for which he stored the data. Naturally, the whole period of data storage should be divided into several subperiods, after each of which the operator receives a payment. The signed engagement is sent to the server together with the data to be stored. A principle similar to the one just depicted for the ES will be used by the MX and ACS too.

  4. Building blocks

    We now turn our attention to the logical structure of the individual parts of the Service. We give a brief overview of the modules from which the servers are composed; it may help the reader to better understand the protocols discussed in the following section. Ordered from bottom to top in the protocol stack, the following layers are used:

    The TCP/IP Interface covers minor differences between various implementations, depending on the operating system used.

    The ES Router forwards communication between machines in the Service and cooperates in creating padding traffic.

    The ES Message Constructor is responsible for constructing public-key certificates (mix certificates) and for communication with the ACSs. It contains a cache of pre-fetched certificates and prepares paths for messages to be sent.

    The ES Generic Manager implements the main part of an eternity server, using the ESSM and ESTCB internally. The most important decisions are made here: this module decides whether to store a new file, prepares messages to be sent, transforms data before storage, performs time synchronisation, and manages the creation of access certificates. It takes care of receiving payments for stored data and of deleting expired data. One of its main tasks is searching for requested files. The administrator's interface is implemented here as well.

    The ES Specific Module contains the code that maps the generic functions of the ESGM onto the real functions of the operating system. Among other things, this module contains the local file system interface, the implementation of random number generator calls, the interface to the system console, the system timer, etc. There is also a library for calling ESTCB functions, which depends on whether a hardware or a software emulation of the TCB is used.

    The ES Trusted Computing Base can be realised either in hardware or in software, as mentioned above. It implements all cryptographic functions performed on stored data and the necessary auxiliary functionality. If properly implemented in hardware, nobody, not even the server administrator, is able to determine the content of stored files; a software implementation must guarantee this feature against everybody except the administrator. A hardware implementation should contain a hardware-based random number generator, for example a free-running oscillator, accessible both internally and externally.

    The Access Certificate Server Module contains the whole functionality of an ACS. It receives new certificates from both ESs and mixes, responds to requests from users who want to get some certificate, and destroys expired ones.

    The relationships between the modules and the structure of the ES, MX, and ACS are shown in the picture. It can easily be seen that every ES and ACS has the full functionality needed to perform as a mix. Although this is possible, we discourage all server operators from doing so; it may be admissible only in the first period, when the Service will consist of a low number of servers.

    Figure 1: Basic structure of Service servers

  5. Used protocols

    We describe the most important protocols in this section. All protocols used in the Service share the same basic feature: the active party is always the one requesting some service. This arrangement decreases the threat of an active attacker, who is left very little opportunity to exploit his activity when he is made to wait for a request from somebody before he can start his harmful action.

    In the further description of the protocols, we pay attention mostly to the content of the messages exchanged between the communicating parties. We implicitly assume the whole communication is encrypted with a session key using an appropriate encryption algorithm, although the encryption and the process of establishing the key will not be mentioned explicitly.

    1. Message transfer

      We transfer messages between sender and receiver in a way similar to that described in [Cha] and [Gold]. We have extended the protocol so that neither the sender nor the receiver can determine who its partner is.

      The basic idea behind this protocol is as follows. A message is not transferred directly to the receiver, but through an arbitrary number of intermediate mixes. The sender prepares a description of the whole path through the mixes; the description is concatenated with the data and the whole message is sent to the first mix. After receiving the message, each mix along the path first strips its own address, pads the message to its original length, and passes it to the following node according to the path description.

      The path description is encrypted so that each mix can decrypt only the information about the next node. This way, each mix knows only its predecessor and successor along the path. If we divide the construction of the path description into two parts, so that the sender constructs one part and the potential receiver the other, we achieve a very important feature: the communicating parties do not know each other.

      For this purpose, special access certificates are used. An access certificate describes the second part of the path. When a sender wants to contact a server, he obtains its access certificate and uses it to complete the path description, i.e., he constructs the first part of the path description and adds it to the certificate. Besides the path description, the access certificate contains its expiration date, the maximal time for which the server stores data, and possibly other useful information. The exact form of the access certificate is as follows:

      Here Ki denotes the public key of mix i, Ai its address, and Si the symmetric key the mix will use to encrypt the transferred data before sending it to the next hop. PubKey is the receiver's public key certificate, Expiration is the expiration time of the certificate, MaxTime is the maximal time for which the server stores data, and Misc is a place where additional information can be placed. The whole certificate is signed, and the digital signature Sign is appended to it.

      When a node A (often a user) receives a certificate in the form depicted above, it adds its own part of the path description and prepares its own access certificate in a similar manner. Then it encrypts the transferred data with the recipient's public key.

      There is still a little problem here: the recipient is not able to decrypt the message, because it does not know the keys Sn through Sn/2+1. To overcome this, the sender creates a structure called a key-box. The key-box contains all the symmetric keys Sn through Sn/2+1 concatenated together and encrypted with PubKey. Before it is appended to the message, the key-box is decrypted with the keys Sn, ..., Sn/2+1. The exact form of the key-box is as follows:

      Each mix that receives the key-box with the message encrypts it with the symmetric key it obtains from the path description. Under this arrangement, the recipient receives the key-box encrypted with the keys Sn/2, ..., S1, so it is able to recover all the keys stored in the key-box and to decrypt the message just received.
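      A toy end-to-end run may make the key-box mechanism clearer. Everything below is our own illustration: the XOR keystream stands in for the symmetric ciphers and, even more loosely, for the PubKey encryption (we simply use the same bytes as "public" and "secret" key), and the fixed 8-byte keys are an arbitrary choice.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR with a SHA-256 counter keystream (self-inverse)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

KEY_LEN = 8  # arbitrary toy key size

def sender_prepare(msg, sender_keys, receiver_keys, pub_key):
    """Build the key-box (the sender's keys Sn..Sn/2+1 encrypted with
    PubKey, then pre-decrypted with those same keys) and run message
    and key-box through all n mixes, each applying its Si."""
    box = xor_cipher(b"".join(sender_keys), pub_key)
    for k in sender_keys:                  # pre-decryption by the sender
        box = xor_cipher(box, k)
    payload = msg
    for k in sender_keys + receiver_keys:  # every mix superencrypts both
        box = xor_cipher(box, k)
        payload = xor_cipher(payload, k)
    return box, payload

def receiver_open(box, payload, receiver_keys, sec_key):
    """The recipient strips its half of the layers (it chose those
    mixes), opens the key-box with its secret key, recovers
    Sn..Sn/2+1, and finally decrypts the message."""
    for k in receiver_keys:
        box = xor_cipher(box, k)
    keys = xor_cipher(box, sec_key)
    sender_keys = [keys[i:i + KEY_LEN] for i in range(0, len(keys), KEY_LEN)]
    for k in sender_keys + receiver_keys:
        payload = xor_cipher(payload, k)
    return payload
```

      The pre-decryption by the sender is cancelled exactly by the superencryption of the first half of the mixes, which is why the recipient sees the box under the keys Sn/2, ..., S1 only.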

      The form of every part of the transmitted data, including the message itself, changes at every node along the path. Thanks to this, nobody is able to track the message, even if the attacker can see all traffic over the whole network. There are two exceptions. The first is the sender of the data, who is able to track the message along the first half of the path, because he knows all the symmetric keys used to encrypt it there; this is not a flaw - remember that the sender constructed this part of the path. The same holds for the receiver along the second half of the path.

    2. Proofs of data ownership

      The server must be able to prove to the bank that it is still holding the stored data; only when the server proves it has not deleted or lost the data does the bank make the payment. The arrangement must ensure that the server can construct the proof only if it has its own copy of the data. We must make it impossible, or at least infeasible, for servers to construct the proof on the basis of another copy of the same data stored on some other server - recall that a server sends the data to anybody who requests it. It is important that the server cannot use any number of other copies of the same data to reduce the amount of data it must hold in order to prove it still possesses the copy entrusted to it.

      When a file is submitted to the Service, it is therefore clear that every server must obtain a slightly different copy of the data. On the other hand, it must be feasible to recover the original data from each copy, while still fulfilling the conditions stated in the previous paragraph.

      The solution is surprisingly simple and straightforward. The user creates as many copies as he wants, each arising from the original data in the following way: the user puts a random string before the data, encrypts the result with a randomly selected secret key, and appends the corresponding public key. Everybody can then recover the original data, but each copy is completely different, and nobody except the user who submitted the data can reconstruct another one. The user can even choose whether to create a new key pair for each copy or only different random strings. The exact form of a copy of the data is described below:

      Here KPub and KSec are the corresponding public and secret keys, respectively. The Random_String starts with a Service-defined number of bits in which the length of the string is written.
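      The copy construction can be sketched as follows. This is our own toy: XOR with the appended key imitates the secret-key encryption with public-key recovery (a real implementation uses a genuine asymmetric pair, so the appended key can only open copies, never create them), and the 16-byte random string with a 2-byte length header is an arbitrary choice.

```python
import hashlib
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR with a SHA-256 counter keystream (self-inverse)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

KEY_LEN = 32

def make_copy(data: bytes) -> bytes:
    """Prepend a length-tagged random string, encrypt with a fresh key,
    append the recovery key so anybody can read the copy back."""
    rand = secrets.token_bytes(16)
    plain = len(rand).to_bytes(2, "big") + rand + data
    key = secrets.token_bytes(KEY_LEN)
    return xor_cipher(plain, key) + key

def open_copy(copy: bytes) -> bytes:
    """Recover the original data: decrypt with the appended key and
    skip the random prefix using its length header."""
    body, key = copy[:-KEY_LEN], copy[-KEY_LEN:]
    plain = xor_cipher(body, key)
    rand_len = int.from_bytes(plain[:2], "big")
    return plain[2 + rand_len:]
```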

      When a server wants to obtain the payment for data storage, it contacts the bank and identifies the payment. The bank responds with one half of the secret key for computing a MAC function. The server combines this half with the half it received along with the data, computes the MAC function over the data, and sends the result to the bank. The bank compares the value received from the server with the value it got along with the payment from the user; if the two match, the bank transfers the money.

      It is necessary that the data be encrypted directly with an asymmetric key; a symmetric key cannot be used instead, as is frequently done in hybrid encryption systems.
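      The check itself can be sketched in a few lines (HMAC-SHA256 is our choice of MAC, not prescribed by the proposal; the half-key split is as described above):

```python
import hashlib
import hmac

def ownership_mac(data: bytes, bank_half: bytes, server_half: bytes) -> bytes:
    """MAC over the stored copy, keyed by the half the bank reveals at
    payment time combined with the half delivered with the data.
    Without the actual copy the server cannot produce this value."""
    return hmac.new(bank_half + server_half, data, hashlib.sha256).digest()
```

      The bank recomputes nothing over the data itself: it merely compares the server's answer with the expected MAC it received from the user along with the payment.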

    3. Payments for data storage

Two main goals must be achieved by the payment protocol. We must prevent anyone from requesting data from a server and then masquerading as that server, receiving the payment from the bank using the copy of the data acquired this way. On the other hand, the mechanism must ensure that the server cannot pre-compute the results of future checks done by the bank and then discard the data while still being able to receive the payments.

To meet these conditions, we use two random strings. The first (O-auth) is shared between the bank and the user who deposits the data in the Service; it prevents the server from pre-computing the results and thus from discarding the data prematurely. The second (S-auth) is shared between the bank and the server involved and prevents others, especially the data depositor, from masquerading as the server.

The strings are used in the following manner. The string S-auth is created by the server when a user requests it to store data. The server encrypts S-auth with the public key of the bank it has selected for money operations, creating Sealed S-auth, and sends it to the user as a part of its response to the user's request. The server stores a copy of S-auth for further use.

The string O-auth is created by the user. It consists of two parts: the first is used as an initial vector, and the second as the secret key for computing the MAC code used in the protocol for data ownership proofs. The user creates O-auth, computes the MAC over the data being deposited in the Service, and sends to the bank the (digital) cash, along with O-auth, the result of the MAC computation, and Sealed S-auth. Some mechanism for payment identification has to be implemented; we will not go into its details here, because no cryptography is involved.

We now describe the protocol in more detail. We suppose the data is stored for one time period. If the data is stored for a longer time, the protocol is slightly more complicated, but the extension is straightforward, so we omit it here.

    1. The user acquires the access certificate of an arbitrary ES, forms a request, and sends it to the ES.
    2. The ES decides whether to accept the data. If so, it creates S-auth, encrypts it with the bank's public key, creating Sealed S-auth, and responds to the user. The response contains Sealed S-auth, the price of storage, etc.
    3. The user creates O-auth and a digital payment order, and computes a Control Count. The payment order contains information about when it may be carried out. The Control Count is produced by computing a cryptographically secure hash function over the data with O-auth prepended, as described above. The user sends the bank a message containing O-auth, Sealed S-auth, the Control Count, the payment order, and an order to transfer the appropriate amount of money from his account to the account used to carry Service payments. The exact form of the message depends on the bank's requirements.
    4. The bank stores the message for further use, transfers the money from the user's account, and decrypts Sealed S-auth. Then it computes a digital signature of S-auth and returns it to the user.
    5. The user prepares the data to be stored and sends it, together with the signature of S-auth, to the server.
    6. The server verifies the signature and, if everything holds, stores the data.

When the server wants to receive the payment for data storage, it contacts the bank and identifies the payment. The bank checks whether the payment can already be carried out; if so, it sends the server a challenge containing O-auth. The server computes its own version of the Control Count over O-auth and the data, and then performs one more step of the hash function over the block containing S-auth. The result is sent to the bank as the proof of ownership of the data.

The bank performs the last step too, on the data it possesses, and compares the results. If they are equal, the bank transfers the money to the server's account and deletes all unnecessary information concerning the payment.
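The Control Count chain can be sketched as follows (SHA-256 stands in for the unspecified hash function, and the function names are ours). Note that the bank stores only the Control Count and S-auth, never the data itself:

```python
import hashlib
import hmac

def control_count(o_auth: bytes, data: bytes) -> bytes:
    """Step 3 of the protocol: hash over the data with O-auth
    prepended; this value is deposited at the bank by the user."""
    return hashlib.sha256(o_auth + data).digest()

def server_proof(o_auth: bytes, data: bytes, s_auth: bytes) -> bytes:
    """The server's answer to the bank's challenge: the Control Count
    recomputed from the (still held) data, plus one more hash step
    over the block containing S-auth."""
    return hashlib.sha256(control_count(o_auth, data) + s_auth).digest()

def bank_check(stored_cc: bytes, s_auth: bytes, proof: bytes) -> bool:
    """The bank performs only the last step on the Control Count it
    stored, then compares the results."""
    expected = hashlib.sha256(stored_cc + s_auth).digest()
    return hmac.compare_digest(expected, proof)
```

Because O-auth arrives only with the challenge, the server cannot pre-compute the proof and discard the data; because the proof covers S-auth, a depositor who fetched a copy of the data cannot impersonate the server.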

The mechanism just described is not free of weaknesses. Nothing prevents a server from storing the data on streamer cartridges or similar devices and using it only for receiving payments, without responding to requests for the data. We suppose, however, that the equipment necessary to run a server in such a way is equally suitable for its regular operation, so there is no economic pressure to cheat. On the other hand, a user can intentionally send incorrect Control Counts to the bank, so that the server is not able to prove it is still holding the data. Again, as long as we do not allow the money to be returned to the issuer of a payment that was never collected, the user gains no advantage from such behaviour.

    4. Random number generator

      Many keys used in the Service are long-term keys: with them, data is encrypted or signed before storage on a server. Other keys are used to construct access certificates, which are widely accessible and thus exposed to analysis. We therefore need a source of high-quality keys. As mentioned in [Rand], such keys cannot be generated completely by software means. We recommend a two-stage key generation process. In the first stage, a high-quality random seed is taken from a truly random source; the source could be a hard disk drive, a LAN card, or the like, and in the case of a user station also input devices such as the keyboard. The seed is de-skewed and tested to be of sufficient quality. This process is likely to be very slow and demanding, so to generate the necessary volume of randomness, we expand the seed in the second stage using a cryptographically strong (deterministic) random number generator. An RSA generator, or generators based on linear feedback shift registers, can serve as examples of such algorithmic generators.

      In the case where a hardware TCB is used, we assume a truly random number generator is a part of the TCB; we can expect this, because a hardware implementation of a good generator is simple and cheap. Then we can use the randomness produced by this generator directly.
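      The two-stage process might look as follows (our own sketch: the SHA-256 counter construction stands in for the suggested RSA or LFSR-based generators, and `os.urandom` stands in for the slow, de-skewed physical source):

```python
import hashlib
import os

def expand_seed(seed: bytes, n: int) -> bytes:
    """Second stage: deterministically stretch a short, high-quality
    seed into n bytes of key material."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

# First stage: a truly random seed (here taken from the OS entropy pool).
seed = os.urandom(32)
key_material = expand_seed(seed, 3 * 32)  # e.g. three 256-bit keys
```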

    5. Time synchronization

Time synchronization is a crucial part of the whole Service; only reliable time information allows the ESs to work properly. We need a protocol that is as immune as possible to the intentional introduction of faulty information by specific hosts. We suppose it is impossible to collect time information from all servers, so each host must synchronize based on some local information rather than on knowledge of the state of the whole system. It seems very difficult to eliminate information from an unknown number of malicious hosts without the aid of global information.

A suitable protocol for time synchronization is not available at this moment.

  6. Other ideas

    We briefly describe here some interesting ideas that we have not mentioned previously.

    An access certificate should contain some kind of electronic pseudonym of the issuing ES. The pseudonym can be used for an effective implementation of data requests: if the user stores the pseudonym of the ES on which he deposited the data, he can later ask this server directly (by requesting the ACS to send the access certificate of the server with this pseudonym) when he wishes to retrieve the data from the Service. With reasonable probability, the server is still holding the data and the response will be very quick.

    It seems reasonable to introduce a notion of accessibility of the data. The user could define how fast he wants to receive back the data being stored on the ES. This would allow ES operators to use mass storage devices such as CD jukeboxes, tape archives, etc., which are much slower than disks but offer a far better price per stored byte. The price of storing data on such devices could be many times lower.

    The bank could track the payments that were not collected within some period after their scheduled time. Money from such payments could then be divided among other payments in some following period. For example, the bank could treat the value of a payment order not as an amount in some currency, but in points; the value of a point would then be computed dynamically, depending on the amount of uncollected payments during some previous period.

    As a last security arrangement, we suggest that the server operator should approve the deletion of a file after the acknowledged time has elapsed, or at least confirm that the local time is correct.

  7. Conclusion

    We believe we have introduced a consistent model of the Eternity Service. We have also introduced all the protocols necessary for its operation, except the time synchronisation protocol, and the basic structure of all parts of the system. There are still some problems that will be the subject of our further effort.

    We did not bother with some important details, such as the exact method by which an ES keeps information about stored files, the protocol for finding requested data, and many more. We omitted them intentionally, because we believe a discussion of these problems is not necessary to understand our proposal of the Service, and their solution is a question of the proper use of well-known algorithms in obvious ways. Some problems mentioned in [And], such as the perjury trap, were not discussed either, because they belong to the field of law rather than cryptography or computer science.

  8. References

[And] R. J. Anderson - The Eternity Service, lecture published at Pragocrypt '96

[Cha] D. Chaum - Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms, Communications of the ACM, vol. 24, Feb. 1981

[Gold] D. M. Goldschlag et al. - Hiding Routing Information, Workshop on Information Hiding, Cambridge, UK, May 1996

[MCST] RFC XXX - Multicast transport protocol

[PFS] Perfect forward secrecy RFC

[Rand] RFC xxx - Randomness Recommendations for Cryptography

[RSVP] RFC 2205 - The resource reservation protocol

[X509] CCITT Recommendation X.509, The Directory - Authentication Framework, Blue Book, Melbourne 1988