Facing the Database

Some points you must know before planning your database

A Decalogue for Researchers


A database is not a mere container in which to store information. It also makes it possible to access this information in a variety of ways; to retrieve components according to specific demands; to structure, organize, develop and make explicit raw contents.

When creating a database, the computing side of the business is only part of the story. It is, of course, a necessary step. But this step is conditioned by a prior analysis of the available data, by the purpose assigned to the database, and by the use that will later be made of the stored data – questions which have little to do with computing science proper. The choice of a specific package, a question that potential users tend to put first, must be delayed until such questions have been answered.

I decided to write the present essay after attending a meeting at which some authorities in digital humanities were also present1. Most contributions were nevertheless defective in some way, and made clear the harmful consequences of neglecting to set the intended task in context beforehand.



I. Know and tame your computer

If you plan to build a digital database, first get a good knowledge of your computer. Make sure you understand basic concepts such as field, table, file, record, export, query, sorting, and others of this kind. Programming would be a useful ability, but we'll try to do without it. You'll need, nevertheless, a clear knowledge of the possibilities and limits of the main available technologies. You must know, for instance, that formulating a query in SQL is out of reach of common users, a fact which makes databases based on this language practically unmanageable by researchers. You must also know the strong points and limits of the main applications for data handling. Excel, for instance, is a fantastic spreadsheet but a very, very, very bad database; just as a Ferrari can be a fantastic race car, but quite unsuitable for driving the shopping home from the supermarket. You'll need a working knowledge of the main classes of packages, of the difference between a text processor, a spreadsheet, a database and a mapping package. You'll know, of course – I am not joking, this is a fundamental ability – how to pass data from one class to another by means of tabulated or csv files2.
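To fix ideas, here is a minimal sketch of that last ability in Python – a language chosen purely for illustration; the file names are invented – reading a tabulated export from one package and rewriting it as csv for another:

```python
import csv

# A toy illustration of moving data between classes of packages:
# read a tab-separated export (e.g. from a database) and rewrite it
# as csv (e.g. for a spreadsheet). File names are invented.
with open("export.tab", newline="", encoding="utf-8") as src, \
     open("export.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="\t")
    writer = csv.writer(dst)
    writer.writerows(reader)
```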

Such knowledge is basic, and necessary for fruitful exchange with the engineers whose help you will inevitably need at some point. It will allow you to understand their observations, while preserving your judgment and autonomy in the design and general strategy of your research. This basic knowledge is not hard to learn, but you must dedicate some days to acquiring it. Maybe there are courses at your university or at some academy. But in no way imagine that having been born a "digital native" is enough to give you an innate knowledge of computing. As if having been driven around in the family car as a child were enough to qualify you to race at Indianapolis! Don't go further without mastering these points. You would be wasting your time for an indifferent result.


II. Plan your project and assign it to a class

Every database has a purpose. A database is nothing but a tool to realize something beyond itself. Woe to those researchers who, after spending time and energy building a database, ingenuously ask what to do with it. Clearly formulate what you want to do with your data before taking any step further.

Many fundamental features of the database depend, in fact, on your purpose and objectives. In our view, databases fall into four classes:

  • Databases of raw data. They reproduce a document as it is. They digitize it and provide an image of it, along with the description necessary to locate it among others. They usually add a short summary, or at least a long title. Any further analysis is up to users, in accordance with their own purposes. Such databases usually publish existing series, created before, and independently of, the database. They articulate their content according to the existing articulations of the series, which they closely reproduce. Gallica, a database of digitized books and documents from the Bibliothèque Nationale de France, or PARES, a database of digitized Spanish archival material, are good examples of this class.
  • Databases of results. Such databases store elaborated data. They always form an artificial corpus, selected by their author in view of a clearly delimited purpose. They are sometimes made of raw or semi-raw data which served for a piece of research, recycled as a published database. They may also be documentary collections formed around a specific topic, as books tend to be – art books, rather, because authors tend to take full advantage of digital image-handling tools. They suggest to "readers" one or several routes for reading, in the form of narratives (Virtual Shanghai, by Christian Henriot and Gérald Foliot). Digitization endows them with surprising versatility. Users may change the order of reading, form new groups of objects, or let the computer form them according to calculated indexes of closeness, produced automatically on demand. On the whole, readers may use the database as a starting point for their own creativity, with a suggestive power far above that of printed paper, which forces readers along a single narrative route. Such databases are usually closed or semi-closed objects. They have not been built to receive huge volumes of new data, still less data of a kind other than what they presently store. They cannot break their contents up into constitutive elements, nor reorder them into new objects and classes according to the aim pursued by a specific research project. Slavevoyages.org, a database of the Atlantic slave trade, is a good example of this class.
  • Databases for the storage of one-use-only information3. These are the digital equivalents of the "dossiers" in which our predecessors accumulated data on the topics they were investigating. There you find notes taken from archival documents, graphical reproductions, copies of original documents, more or less elaborated statistical files, results of analyses produced by various packages, etc. Each item is marked as belonging to a class freely chosen by the user, who in that way can retrieve and display any documentary set he chooses. The high classificatory potential of digital devices allows almost instantaneous retrieval and reading. Such databases were not planned to analyze complex data systematically, but rather to bring easily before the user's eye the results of a set of such analyses. We call them one-use-only databases because their content and design are both highly personal. They are very useful working tools. There exist both commercial and freeware packages of this kind (Mydata Keeper, Treepad Lite, etc.). A global evaluation and overview of these, from a strictly practical point of view, would be a useful contribution to research.
  • Cumulative databases for research. These are databases intended and designed to store raw data with a view to using them as future research material. They may be of general use, or restricted to one class of documents. They always atomize the information, that is, they reduce it to items of the smallest possible granularity, in accordance with the basic internal structure of the data and independently of any specific use to which they might be put. Materials elaborated in this way can thus be mobilized in any context and for any possible purpose, among other reasons because users are able to select and sort them freely, down to a minute level of detail. Such databases are by nature collective tools. Each atomized item is related through an explicit link to all the other items to which the original documents link it. Each item carries a unique identifier. The database is incremental in so far as adding new pieces of information, of any kind, is always possible without disarranging the design of the database and without leaving aside any detail of the new data – an all-important point for a research database. Such databases demand, as a first step, an in-depth observation of the nature of the documents on which they are based. Operators must select a degree of granularity finer than the conceptual and physical limits which mark the frontier between one document and another – the frontier which determines the module of databases of raw data. The structure of the database must reflect the most intimate structure of the stored items. This endows the database with a high degree of generality, which allows it to process the most various kinds of objects, not as closed blocks of information, but as collections of individual items which users can individually reach and handle. Such versatility, nevertheless, makes them complex objects which only experts are able to manage. They allow users to create, with absolute flexibility, any subset of the stored data, and to export it to any other package for further analysis. Fichoz belongs to this class4.

Determining beforehand to which class the database you are planning belongs is a necessary step. Consulting will probably be required. Don't go to an engineer at this stage. Rather go to a researcher of your own discipline who has built a database from which published results have been produced5. What matters here is not the technical side of the question, but getting a global overview of the process, including the use to be made of the data in later stages.

From now on, we shall concentrate on the last class: cumulative databases for research.


III. Atomize your data and arrange them into tables

From a formal point of view, any database, whatever the underlying device, distributes its material into blocks which can be individually handled. These blocks can be compared to containers, all formally identical, but each holding a different piece of information. Such containers are equipped with labels describing their content, thus making their selection possible; and with handles by means of which they can be handled, or linked to one another into strings of complex meaning.

Building a database demands as a first step, as we saw before, an analysis of the available data in the light of the purpose users have in mind. The question of the means to be used to implement the database must not even be formulated before such an analysis has been completed, for the simple reason that the implementation will depend on its results; and since scientific requirements must never be conditioned by technical considerations, such considerations are best kept until afterwards.

The first question is that of the nature and size of the containers. What is the smallest atom of information needed? I'll take as an example a piece of research on Western pictures screened in China in the years 1920-19306. On hearing the researcher who carried it out, one immediately understands that the atom of information needed is not the picture, as she believes. She wants to know, in fact, how many times and under what conditions each picture was screened in each city. The atom she is interested in is obviously the session, which must be described by a set of labels: acoustic devices, location of the theater, number and class of the spectators, date, hour, prices, etc. All the containers describing a session will be arranged into a common space which we call "Sessions".

Two of the labels which describe the session must nevertheless themselves be described in more detail to express all the information they carry. A description of each theater will be stored in a special container, and described by a specific set of labels, such as capacity, owner, manager, address and the like. All these containers will be stored in a separate space called "Theaters". In the same way, pictures will be stored in a specific set of containers, with labels such as date, title, genre, duration, main actors, etc.

Pictures? No. The picture is not an adequate degree of granularity. From a single picture, many versions tend to be made, with different titles, in different languages; what spectators see is not the picture in itself, but a specific version of it. As the purpose of the research is to evaluate the impact of these pictures on spectators, and as the version is the only object which interacts with spectators, versions must be the basis of the object "picture". We are thus now combining two objects we did not have in mind when we first posed the problem: sessions and versions. We must then create a new class of containers, and store them in a specific space which we call "Versions".

All the versions of a picture nevertheless derive from a common archetype. Even if this archetype has no material existence, it has a strong conceptual existence as the original picture. We must then create another set of containers to store these archetypes. We shall equip them with the necessary labels, such as the name of the director, the pitch, the date, the main characteristics of the shooting, etc. We create another specific space, called "Pictures", to store these containers.

Let us stop here. At this level we already possess all the data we need for our research (Sketch I). We could link at least one more container to "Pictures": to each picture we could, in fact, add a list of actors and other contributors. And so on, ad infinitum. We stop because adding new dimensions to our database would be meaningless in view of the objective we are pursuing. The available information and the researcher's common sense are the only limits to the burgeoning expansion of this kind of database. It can always be extended if need be. But its core remains unchanged, as a stub to which possible new dimensions can be attached.


Sketch I.
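By way of illustration only – not a package recommendation, given the reservations about SQL expressed in Command I – the four spaces of Sketch I can be transcribed into runnable form. Here is a minimal sketch in Python with the standard sqlite3 module; the field lists are assumptions drawn from the example above, not a prescription.

```python
import sqlite3

# One table ("specific space") per class of containers, one row per
# container, one column per label. Field lists are illustrative.
con = sqlite3.connect("screenings.db")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Pictures (id TEXT PRIMARY KEY, director TEXT, pitch TEXT, date TEXT);
CREATE TABLE Versions (id TEXT PRIMARY KEY, picture_id TEXT, title TEXT, language TEXT);
CREATE TABLE Theaters (id TEXT PRIMARY KEY, name TEXT, city TEXT, capacity TEXT, owner TEXT);
CREATE TABLE Sessions (id TEXT PRIMARY KEY, version_id TEXT, theater_id TEXT,
                       date TEXT, hour TEXT, prices TEXT, spectators TEXT);
""")
con.commit()
```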



Each container has handles, as I said before, by which to handle it and to link it to others. Handles take the form of a unique identifier attached to each record (each container). This identifier carries one piece of information only: the identity of the record. It says nothing about its content. I personally make it an eight-digit number. By reproducing the identifier of record A in some part of record B, one links A to B. In that way you create strings of records belonging to various classes, stored in various specific spaces, which allow you to describe any object as completely, and under as many dimensions, as you choose (Sketch II).


Sketch II.
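Continuing the same illustrative sketch: reproducing identifiers is what links records into strings, and a string can then be followed from a session back to its archetype. All record values below are invented for the example.

```python
import sqlite3

con = sqlite3.connect("screenings.db")  # the file created in the sketch above
cur = con.cursor()

# Invented illustrative records. The Sessions record reproduces the
# eight-digit identifiers of one Versions and one Theaters record:
# that reproduction is the link.
cur.execute("INSERT INTO Pictures VALUES ('00000001', 'D. W. Griffith', 'melodrama', '1920')")
cur.execute("INSERT INTO Versions VALUES ('00000101', '00000001', 'Way Down East', 'English intertitles')")
cur.execute("INSERT INTO Theaters VALUES ('00000201', 'Apollo', 'Shanghai', '700', '')")
cur.execute("INSERT INTO Sessions VALUES ('00000301', '00000101', '00000201', '1922-05-22', '21:00', '', '')")
con.commit()

# Following the string of handles from a session back to the archetype:
for row in cur.execute("""
        SELECT s.date, t.name, v.title, p.director
        FROM Sessions s
        JOIN Versions v ON v.id = s.version_id
        JOIN Theaters t ON t.id = s.theater_id
        JOIN Pictures p ON p.id = v.picture_id"""):
    print(row)
```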


A careful analysis of your raw material in the light of the aims you pursue, prior to the selection of the database package to be used, is fundamental to your enterprise. Sorry for the reiteration: I have seen so many projects fail because this stage was passed over too lightly that I prefer to be tiresome rather than negligent. A correct selection of the module of atomization is of special importance: reversing this decision is practically impossible. Take advice, preferably from a researcher with experience of databases. An engineer would probably not grasp the full implications of your clumsily expressed requirements. The author of the project we are commenting upon is living proof of this. She made the picture the basic atom of her research. An experienced researcher, used to deciphering such ingenuous demands born of inexperience and to forming a real image of their implications, would immediately have understood that this was an error. Such deciphering is not the engineer's job. His own consists in translating a demand into a computing design, no more7.


IV. As a fourth step, choose your tool

Once you get over this difficult hurdle, you'll be in a position to select the best tool for a project whose full dimensions you'll by then appreciate. At this stage, take advice from an engineer. If possible, from several engineers, the better to contrast options. The set of containers I described above can in fact be transcribed to the computer in a variety of ways. I personally have a strong preference for those which strictly transcribe the scheme derived from the internal logic of the information, as set out in Sketch II, in terms of records (containers), fields (labels) and tables (specific spaces) – a similarity which makes the database at the same time far more manageable and far more robust.

You'll have to decide whether you add your data to an already existing database; whether you work with an empty copy of an existing database; or whether you create a fully new system – the option you'll probably like best, but a far more complex task than you imagine. Once this point is decided, you'll be in a position to select a package. Never forget that this package:

  • Must fully meet all the requirements of the class of database you are planning.
  • Must possess a high degree of ergonomics. Making and managing a database demands many manual operations. Saving a few seconds on each of them means, in the end, hours and days. Moreover, a database is only one part of a complex information string of which your eye is always the final link. The capacity of a package to design any kind of visual screen layout with ease and flexibility must therefore be an essential criterion in your selection. Not out of any vain inclination to showiness, but for very earnest reasons of efficiency8.
  • Must be flexible enough to allow changes in the design of the database once work is in progress; and, most of all, NOT to demand hasty and irreversible interpretative decisions from you when inputting your data.

Sustainability will be a major concern. A database is a lasting object. Your basic package must be a commercial standard, so as to guarantee that the day it disappears some solution will be made available, and that a large community of users makes recruiting operators easy. Refuse home-made applications. Use commercial products or standard freeware applications. Above all, make sure your data can be exported to csv or tabulated files, without being mutilated, and in a form an expert can easily understand9. The training alluded to in Command I will be an asset in making such decisions.

Costs will be another major concern. Apart from the obvious limits which they impose upon your creativity, they have a great impact on the database's continuity. Don't use a paid web portal to store or broadcast your material. The monthly rent you can afford to pay now may become an unbearable burden at some point. Remember that nothing is free, except what you do yourself. Remember, especially, that university engineers have a cost for the community, a very high cost, in most cases higher than what any private firm would charge for the same task. Use them for what they are meant for: answering the specific needs of researchers. Remember in particular that replacing a departing key person is easier for a firm than for most public institutions10.


V. Test your database before making full use of it

Once your database has been designed, don't launch it at once into huge research projects. Test it first, with a limited quantity of complex data (he who can do more, can do less). A database is an engineered object. Such objects usually do not work well when first tested. They need tuning. Sometimes they demand a full revision to reach full efficiency. Remember the de Havilland Comet, the pride of the British post-war aircraft industry. It was prone to break up in flight. The fault lay with its square windows, whose corners were so many structural weak points. Once the problem had been corrected, the Comet enjoyed a long career and was considered a rather sturdy plane. Don't let your database, and with it your research project, blow up in flight! All the more so if you receive major funding, which makes you conspicuous. Before climbing to your cruising level, make sure everything works. Test over a long period: many defects only show up in routine use. Then, and only then, make your tool the trusted instrument of your creativity.

I confess, O reader, that as a member of assessing committees I used to reject any application based on a future database to be designed as part of the project. Database design and testing, in my view, have to come before applying for external funding. Databases are tools, not ends (see Command II) – tools on which any future research based on them depends. Designing a database is not expensive, and should be funded from the budget of permanent research teams as part of the design of the research project.


VI. Keep storage and calculation apart

The content of your database is intended to feed downstream tools for data analysis and data processing. Resist the temptation to use the statistical tools that database packages usually include. Not because of poor quality – they tend to be rather good – but because they are part of an architecture which was not planned, and did not have to be planned, to maximize their efficiency and flexibility.

Use your database to store data: that is what it was designed for. Store them in such a way as to be able to select any subset to match any possible question you might ask of the data. To select cleanly and fast. Then export the selection as tabulated, csv, merge, or any other data-exchange standard, to any analytical tool you choose. Process the data independently with such tools. If need be, import the results back into the database. In doing so, you'll preserve your freedom of action and your ability to choose the best-suited tools. Not imposing a limited set of analytical tools on users is, by the way, one of the conditions for any collective use of the database.
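As an illustration of this export step, continuing the sqlite3 sketch from Command III: select a clean subset and hand it, as a plain csv file, to whatever analytical tool you prefer. File and field names are assumptions.

```python
import csv
import sqlite3

con = sqlite3.connect("screenings.db")  # the sketch database from Command III
cur = con.cursor()

# Select a subset matching one question (here: all sessions in one city)
# and export it for processing in an external analytical tool.
cur.execute("""
    SELECT s.date, t.name, v.title
    FROM Sessions s
    JOIN Theaters t ON t.id = s.theater_id
    JOIN Versions v ON v.id = s.version_id
    WHERE t.city = 'Shanghai'
""")
with open("shanghai_sessions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "theater", "version_title"])
    writer.writerows(cur.fetchall())
```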

Such downstream tools must be managed by the researcher himself. This is a most fundamental point. Researchers' demands are by nature unstandardized. Researchers, in the first steps of an analysis, have only a vague idea of what the next step will be; and this second step heavily depends on the interpretation they make of the results of the first operation. That decision involves the full range of research skills. Only researchers can manage it.


VII. Carefully keep data apart from interpretation

Databases of raw data use as their basic atoms the information blocks which they account for, as these are segmented in the source. This is an exception. All other databases change in depth the information they receive from the moment they read it. The basic operations of database-making – dividing an information flow into records (atomization), and selecting within this same flow markers to describe the records (fields) – permanently dismantle the unity and consistency of the object. Making an object part of a database means breaking it up in order to build it back up by putting together, perhaps in another way, the disaggregated components. The universe of rebuilt objects which makes up the database wholly replaces the original reality in the researcher's eye. He no longer sees real objects, but artifacts he himself produced. Not only does he depend on his own creations, but he also loses any capacity to detect errors embodied in them. The problem does not derive from the digital nature of the tool. Our predecessors also had to confront it11.

Hence:

  • The deconstruction and rebuilding of the original object must be done within the limits, and according to the best criteria, of the hermeneutics of the field concerned.
  • Any transformation going beyond such limits must never be done on the fly, while loading data. At that moment operators do not yet possess a global view of the matter; moreover, they must attend to various other tasks and lack concentration. Such an operation must be done under a specific procedure, and the result stored in a specific field, while the original data are preserved for later control.
  • Breaking down data and rebuilding them are tricky operations, for experts only. This means that assistants hired to input data must first receive good training in the field12.

Let us look, for instance, at an entry from a book of the Chamber of Castile, a standing committee in charge of nominating candidates for the higher offices of the kingdom of Castile.

25 January 1674, Juan de la Mesa, alcalde de la cuadra de Sevilla.

Given the context, this is obviously an appointment. This line would consequently become a record in our database. The meaning may nevertheless be more ambiguous:

25 January 1674, Juan de la Mesa alcalde de la cuadra de Sevilla.

28 January 1674, Pablo de Alzaola alcalde del crimen de Sevilla.

15 March 1674, José Fuentes oidor de Sevilla.

28 May 1674, Jerónimo Bastos alcalde mayor segundo de Sevilla


Any expert knows that alcalde de la cuadra and alcalde del crimen both refer to criminal judges of the Audience of Seville; that an oidor is a civil judge of the same; and that an alcalde mayor does not belong to the Audience, but to the corregidor's court. Knowing all that is necessary for a correct understanding of the data. Any conclusion based on the formal words of the source would be erroneous. Should we change, in the database, the way the information was formulated by the Cámara's clerks, to make it more manageable? At first, obviously not. Any error in the first stage of the uploading process would be beyond repair13. Input data as they come. Keep another field ready for a rectified version, but always preserve the original formulation in the first field for control. Calculations will of course be made from the rectified version, which provides more structured, that is more meaningful, information.

This rule also holds for persons, corporations, places and any other actor mentioned in the source. Copy them as they come, among other reasons because the way original sources name them is, in itself, historical information. You'll add a unique identifier which will mark each actor as the same person or a different one, as you decide. These identifiers will obviously be stored in a field different from the one holding the name, so that any erroneous assignment can be corrected by changing the identifier and nothing else.
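A minimal sketch of these two rules, again in illustrative Python/sqlite3 – the table and field names are my assumptions, not the schema of any existing package: the source's wording, the rectified version and the actor's identifier each live in a field of their own.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""
CREATE TABLE Acts (
    id             TEXT PRIMARY KEY,
    date           TEXT,
    name_as_found  TEXT,  -- copied from the source, never altered
    office_found   TEXT,  -- the clerks' own wording, kept for control
    office_rect    TEXT,  -- filled later, under a specific procedure
    actor_id       TEXT   -- correctable without touching the name
)""")
cur.execute("""INSERT INTO Acts VALUES (
    '00000401', '1674-01-25', 'Juan de la Mesa',
    'alcalde de la cuadra de Sevilla',
    'criminal judge, Audiencia of Seville',
    '00000901')""")
```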


VIII. One record/one information, one field/one value

The labels (fields) which describe each container (record) allow one container to be selected from the whole set. To do so, fields make visible the features which describe each record. Ideally, each field should mention only one value. If the nature of the data makes it necessary to mention several, they must belong to the same class. For instance: "color, green and yellow". Never: "color, green" and "position, vertical".

Moreover, and in the most absolute way, each record must store only one piece of information.

For instance, this entry in a book of the Chamber of Castile:

15 March 1674, José Fuentes oidor of Seville and protector of the Macarena confraternity

generates two records:

15 March 1674, José Fuentes oidor of Seville

15 March 1674, José Fuentes, protector of the Macarena confraternity

This rule of information-uniqueness in records has no exception. The capacity of a database to comply with this requirement is one of the main indicators of its quality. If it does not comply, the database must be redesigned.
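To show the mechanics of the splitting – and only the mechanics; deciding where to split is expert work, as Command VII insists – here is a deliberately naive sketch in Python, using the entry above:

```python
import re

# One source entry becomes one record per piece of information.
# The parsing below is illustrative, not a general solution.
entry = ("15 March 1674, José Fuentes oidor of Seville "
         "and protector of the Macarena confraternity")
date, rest = entry.split(", ", 1)
name = "José Fuentes"
functions = re.split(r"\s+and\s+", rest.removeprefix(name).strip())
records = [(date, name, f) for f in functions]
# records == [('15 March 1674', 'José Fuentes', 'oidor of Seville'),
#             ('15 March 1674', 'José Fuentes',
#              'protector of the Macarena confraternity')]
```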


IX. Data integrity: your supreme law

If, to comply with the uniqueness command (VIII), you must choose one piece of information and leave another out, stop and redesign your database. Both pieces of information must be accepted without breaking the principle of uniqueness. This is another absolute law. It has no exception. Mutilating data to fit them into a storage device is not scientific practice. Poets and clerks may do so. Not a researcher, whose job consists precisely in investigating what does not fit. For instance:

"José Huidobro is alderman and ploughman" makes two different pieces of information

The database must provide as many places for labels as there are pieces of information in the source. If the object of the analysis is an actor, as many labels as there are characteristics describing this actor over his whole life course. In the present example, we could create two fields, called "Social position" and "Public function". But what should we do in the following case?

"José Huidobro, alderman, ploughman and weaver, attorney for orphans"

We need two fields for public functions (alderman and attorney for orphans), and another two for social position. A bit complicated. All the more so when the list grows longer. The titles of the Duke of Alba at the end of the XVIIIth century comprise more than fifty items. Adding so many fields is clearly impossible. The only way of managing this multiplicity consists in adding records, not fields; and in aggregating all the records which concern the same actor by means of a unique identifier – exactly like the beads of a rosary, as the sketch below shows. We'll not enter into further details here. We'll nevertheless remember this fundamental rule: any database which makes it necessary to leave part of the information out is radically flawed and invalidated by that fact alone.
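Here is the rosary in sketch form, with invented names and values: one record per characteristic, all strung on the actor's unique identifier.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One record per characteristic; the actor's identifier is the thread.
cur.execute("CREATE TABLE Characteristics (actor_id TEXT, value TEXT)")
huidobro = "00000777"  # the actor's unique identifier (invented)
for value in ("alderman", "ploughman", "weaver", "attorney for orphans"):
    cur.execute("INSERT INTO Characteristics VALUES (?, ?)", (huidobro, value))
# The whole rosary, however long the list grows:
rows = cur.execute(
    "SELECT value FROM Characteristics WHERE actor_id = ?", (huidobro,)
).fetchall()
```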


X. Allow an ample schedule and be modest in your ambitions

Making a database is a long process. One almost always plans too short a schedule for it. Tuning a database design may be highly time- and energy-consuming; unforeseen problems may arise practically until the end of the process and make it necessary to rethink the whole device. Data loading is a long and boring process. Emending errors in the data is more boring and time-consuming still, but absolutely necessary: in some cases, an error in a single datum is enough to distort your results. Your database will probably provide tools which make emendation faster, such as sorting devices and index-creation routines. If handling ancient sources, however, don't trust dictionaries blindly: they may ignore obsolete forms. For obvious reasons, forget autocorrect routines and the like. Once emended, data must be indexed and equipped with markers to make them really manageable. All that, in a research database, is expert work. In some cases, the work of highly trained experts. You must not outsource it. Obviously, you reserved at the beginning of the project several days (maybe weeks) to learn to manage the database package and the analytical packages you'll need to derive conclusions from your data. Analysis is also expert work and cannot be subcontracted. Understanding in depth the way your analytical packages work is absolutely necessary to validate your findings. If you are really prepared, the last phase of the process will seem short, because incoming results and intellectual stimulation will make it look so. But it will be the only moment you'll feel real excitement.

Finally, the total duration of your research will probably be shorter, even far shorter, using a digital database. But computing is based on repetition, and an avalanche of repetitive tasks and data will probably disappoint you and, possibly, induce you to give up. It may also be that, fascinated by the promise of speed and ease you expect from the last step, you impatiently skimp on the long preparatory phase you have to pass through first. When submitting a project, don't be overambitious. If you think you'll deliver two, promise one. In that way you'll be immune to frustration. And so will your assessing committee.




Finally, cherish in your heart this last command, in which are enclosed the Law and the Prophets:


Only efficiency matters


Many will insist on a supposed computing orthodoxy. Others will extol the elegance of a new approach… to be developed in the near future. Forget them. Never lose sight of the only real truth: your aim is to access, in the easiest possible way, your data, all your data, and the links which make them a system. In a matter of seconds. And to move among them with complete agility. Any solution, however brilliant, which does not meet these demands must be ruled out. The price you would have to pay in wasted time and, worse still, in low-quality and distorted results, would be unbearable. Choose, among all the devices which meet your requirements, the most efficient for your purpose. Everything else is meaningless.


Jean Pierre Dedieu
Directeur de recherche CNRS (émérite)
Framespa (Université Toulouse Jean Jaurès) / IAO (ENS Lyon, Université de Lyon)






1Seminar organized by Christian Henriot in Aix-en-Provence in September 2016.

2If there was any word in this paragraph you did not understand, stop and learn it; then, and only then, go on with your project.

3Cecile Armand presented a good example of such a database in the Aix-en-Provence meeting mentioned in n. 1.

4On Fichoz and its underlying principles, see: Dedieu (Jean Pierre), Designing databases for historical research. With special reference to Fichoz, 2016, which forms the second part of the present essay.

5Stay away from false prophets! "Digital humanities" attract them as honey attracts flies. You'll know them by their fruits: sound and fury, but little content. They publish much about digital tools, but little produced with digital tools.

6This case was exposed in the Aix meeting (n. 1) by Anne Kerlan.

7I must make an exception for a small number of elite engineers who were previously trained as researchers. I was lucky enough to work with two of them: Michel Demonet (then EHESS), who taught me much of what I know about computing, and Gérald Foliot (CNRS, Lyon).

8The almost absolute rigidity of spreadsheet layouts is one of the main reasons which, in my view, bar them from any use as databases. The same is true of Access and of most SQL databases.

9This is one of the reasons why I prefer databases in which the computing design fully matches the logical structure.

10Various German universities are presently (2016) confronted with the problem of the survival of huge databases initiated in the 70s and 80s with the help of (then) young engineers who are now retiring, leaving behind devices nobody knows how to manage.

11A famous German historian at the end of the XIXth century, who trusted his young assistants too much, took a story invented by a novelist for a scientific report, and published it as such. His database quickly became a standard source for witchcraft history. For almost a century, the whole of witchcraft research was based on false premises (Cohn (Norman), Europe's Inner Demons. An Enquiry Inspired by the Great Witch-Hunt, 1975).

12Most decision-makers, unfortunately, are not aware of this point. I know of a lavishly funded historical project which outsourced the loading of its data to a private firm that usually worked for industry. They are now frantically looking for volunteers to correct their database. They should have known better from the beginning.

13While copying a manual listing of agents of the XVIIIth-century Spanish Monarchy into a database, I was surprised by the huge number of "ministers" it mentioned. This was the result of an unfortunate correction. What we now call a "minister" was then termed a "Secretary of State for…", followed by the name of the department, which was frequently abbreviated to "Secretary for [Department]". But some high-ranking clerks working for Councils were also known as "Secretaries of the Council of…", which clerical practice shortened to "Secretary of [Council]". One of our operators was misled by the similarity. Undoing his error was no easy task.

