The Open System Approach to Pictorial Information Systems
Wendy Hall and Frank Colson*
There are many reasons to argue that the use of pictorial
information systems in the education of students in the humanities will increase dramatically.
The need to educate students in the critical analysis of images has always been
apparent, but the factors which made this difficult in the past have recently been receding
from view. Indeed, it could be argued that a healthy political and social discourse within
Western European society requires that universities turn increasing attention to the
criticism of the image. The student population of Western Europe has now been exposed
to such systems in both everyday life and the school, on a formidable scale, for some
twenty years. Pictorial information systems are becoming a part of everyday commercial life:
they can be invoked in museums, libraries and even shops, and they play a role in the 'discourse'
of society. Such pictorial information systems can also be mounted far more cheaply than was
the case several years ago.
Part of the attraction of the pictorial information system is the fascinating similarity
between the format and layout of such systems and that of the late-lamented 'Illustrated
London News'. A 'reader' can browse between image, comment, image and comment,
subconsciously retaining the assumptions on which the collection of materials has been
built by its editor. The editor of a system is building up a metaphor upon which
to work, for as Lynette Hunter has recently suggested [1], the builder of a hypermedia
dataset 'selects a potentially enabling set of links and directs the user away from other
selections'. The danger of unstated yet embedded editorial comment has increased as
cheap and practical, if cumbersome, applications allow work presented on these systems
to be made available at an ever more moderate price. Since their impact on culture and
society in the closing decades of the twentieth century may well be comparable to that of
the printed word in Europe during the hegemony of the Habsburgs, it is entirely proper that
historians, concerned as they are to 'argue interpretation', should come to grips with the
issues at the level of software structure, if only to alert their colleagues to the need to pose
alternatives to those 'embedded' by the often anonymous creators of pictorial information
systems.
The important challenge that pictorial information systems pose is that of enabling
the user to exercise systematic criticism of their contents, for there is no such thing as
an objective pictorial information system any more than there is an 'unbiased text'. The
problem is that we have all learned the canons of criticism of the printed word, whereas
those canons which apply to images are often so hallowed, or so inherent in our experience,
that it becomes very difficult to criticise them. At the same time classical hypertext
systems, while pleasantly explicit in their biases, do not easily allow the user to exercise
criticism, since the 'authority of the link' retains a tyranny which cannot easily be gainsaid.
* Acknowledgements: The authors would like to express their thanks to all colleagues
who have helped in the writing of this chapter, in particular Hugh Davis and the rest of
the Microcosm team.
It is our view that this tyranny 'of the link' can perhaps best be overcome by the creation
of an 'open system' which separates authoring from the content of the original material
contained in a pictorial information system.
At one level, that of "great art", the issues of interpretation should be tackled in
a systematic manner, through software which allows issues of contextualization to
be clearly aired. At another level, that of "small art", these issues of contextualization
might conceivably be of greater importance, especially since such 'small art' often consists
of both image and text. Historians normally wish to 'read' the entire corpus of visual
art from the area of the past under their gaze, not least because their interpretations
demand a 'feel' for its overall texture and context. The historian of nineteenth-century
Brazil, for example, would wish to range widely through 'official art' and 'popular
works', whether expressed in newspapers, formal official portraits, architecture, the traveller's
sketch, the icon on the wall of a peasant's hut or a slave's engraving. Such a scholar would
wish to search reproductions of work in a range of media: from feathers, fans and textiles to
photographs, cartoons, street or shop signs, and official architecture. Since it is theoretically
impossible to predict all the questions that historians might wish to ask of an image, it is
vital that software platforms which allow for the systematic and automatic recording of the
re-arrangement which necessarily accompanies re-interpretation be brought into operation
as soon as possible. Only then might historians, systematically and formally, be able to
refine the central propositions which underpin interpretation.
Our concern in this and the following chapter is to move the discussion begun earlier
a little further. We would suggest that it is very important to 'contextualize' the images
to be discussed: by identifying the technologies required for their creation, by isolating the
conjuncture within which they were created, and by examining the uses to which they have
been put. It is also vital that this contextualization be argued out by users.
Perhaps we should stipulate this concern in a more systematic fashion. In the first
place, it is vital that anyone examining a pictorial information system in an academic
environment should be able to verify the information contained in related data which
has already been loaded into the system. It has already been argued that digitized reproductions
are interpretations of objects, and we must be able to document their accession.
At the very least the 'technical identification' [discussion of provenance and composition
of a given object] must be open to 'review'. Such a review implies that the technical
choices made by the originator of the pictorial information system can be assessed. Thus
the 'filters' created at the time when data was set up in the system can be identified and
made the subject of separate comment. Some of this comment may best be 'dealt with' by
techniques which derive their inspiration from certain tenets of artificial intelligence; some
may not.
Whether or not this is achieved through artificial intelligence, authoring links must be
available to be criticised, and cannot be embedded. This is vital because basic information
about objects certainly ranks as interpretation, especially when it contains inferential
data on the artist. For the original authors of a pictorial information system may well have
filtered out data that they thought was not relevant to their purpose, yet which subsequent
scholars felt worthy of inclusion. In this context it is essential that the interpretations
[extended information] inferred by successive authors be preserved in such a way that they
can be identified as separate by third parties and submitted to extensive analysis.
These arguments are central to the canons of our discipline, which are thoroughly
non-Rankean in their disavowal of the 'definitive' interpretation of evidence. They therefore
favour the use of a software platform which works in such a way that it enables 'individual
authorial work' to be separately stored and accessed. But that is only the start: such
software must not only provide access to stored images, as well as systematic and consistent
descriptions of the contents of those images and the ability to manipulate them, but must
also provide for the separate and identifiable storage of the 'comment' which might be
made on them. This allows for the effective argument of 'interpretation', as successive
scholars are able to compare their 'readings' of an image. Even though the presentation
of large numbers of images might provide some means of 'validating' interpretation, the
ability to record and 'structure' evaluation and criticism is vital.
These issues are clearly not at the forefront of commercial manufacturers' concerns,
and conventional multimedia authoring systems do not provide the facilities we
outline above. Neither does the current generation of hypermedia systems. At first sight
this is strange, since the classic notions of hypermedia ([2], [3]) were developed to serve
the need for lateral thinking which is inherent in the scholarly activity which underpins
the creation of a pictorial information system.
In the current generation of hypermedia development environments, such as Apple's
HyperCard, Asymetrix's Toolbook and OWL's Guide, links are typically stored within
each artefact, whether text or image. Such links are integral to the file containing such
data, which is therefore irrevocably altered. Consequently there is the danger that a link
cannot be distinguished from the material in which it is embedded. Furthermore, the
reader cannot easily isolate the argument which led to the link being placed in a particular
piece of information, and has to infer the reasoning behind the author's choice in devising
the link. As has been argued of the impressive work of the Hartlib Papers Project, 'the
process is once more analogous to story telling, finding a good fit between conventions of
communication and material experience'. While this has a certain intellectual charm, it is
hardly to be recommended as scholarly endeavour which can be effectively questioned, and
hardly conducive to the type of argument which should be encouraged. There is constantly
the uneasy awareness that many of the material's significances will not be noticed simply
because we cannot recognise the signs of its different resistance. This seems to us to be
the nub of the argument.
For though, as 'buttons', specific links have something to recommend them, they
are 'overt classification'. As such they have been used to devise 'filecard-like systems'
which have some utility as a means of straightforward classification and 'branching', when
the formal clarity of the systems involved can be understood by another expert. The
HyperCard stack is perhaps the best known of such devices, and has been found effective
in providing initial classification. Whether the stack has great utility in a discipline
such as history, where the interpretation of the argument is as important as the source itself,
is open to question.
A number of authors have suggested that existing hypermedia systems are not satisfactory
for large information sets in any discipline ([4], [5], [6]). Many hypermedia systems are
now being developed with an open integrated architecture in order to address this problem.
The key difference between open and closed hypermedia systems is in their relationship
to the user's working environment [7]. Existing hypermedia systems are typical of closed
systems: they are separate applications that are isolated from the user's other applications
such as word processors, database management systems and spreadsheets. In contrast,
a typical open hypermedia system will act as a virtual linking layer abstracted from the
information bound to various existing applications. Whenever possible it will use existing
applications to present the information, and in consequence transform the user's previously
discrete set of tools into a fully integrated information environment. The hypermedia system
simply acts as a framework that provides the relationships between documents [data]. For
example, in the open hypermedia system Microcosm, described below and in [4], [8] and
[9], no linking information is embedded within the documents; link information is maintained
in external link databases. This has a number of side effects: the link information
is easily analysable, the applications that generated the documents can still be used to
edit and maintain them, and the user is able to configure link sets according to their own
requirements.
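The force of this separation can be shown in outline. The following sketch is ours and not Microcosm code; the file names and the data layout are invented purely for illustration:

```python
# Documents remain untouched in their native formats; all linking lives in
# a free-standing structure that can be inspected, edited or swapped as a
# whole, which is what makes link sets user-configurable.

linkbase_a = {("essay.txt", "Habsburgs"): "habsburg_context.txt"}
linkbase_b = {("essay.txt", "Habsburgs"): "dynastic_portraits.bmp"}

def follow(linkbase, document, selection):
    """Resolve a selection against whichever link set is currently loaded."""
    return linkbase.get((document, selection))

# The same unmodified document yields different destinations under
# different link sets:
assert follow(linkbase_a, "essay.txt", "Habsburgs") == "habsburg_context.txt"
assert follow(linkbase_b, "essay.txt", "Habsburgs") == "dynastic_portraits.bmp"
```

Because nothing in the document itself changes, two scholars may maintain rival link sets over the same corpus and exchange them as ordinary files.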
For historians the open system, defined as one in which authoring is explicitly recognized
and can be subject to separate processing from the materials in the dataset, is
an intellectual imperative. In traditional hypermedia systems, embedded links which cannot
be separately processed have crucial disadvantages for the historian, especially where
'large corpora' [in excess of 100 images with associated data] are concerned. At this level
formidable barriers to navigation and authoring can only be dealt with by rather complex
and cumbrous navigation systems, which often owe more to ingenuity than elegance. The
wealth of links to be invoked could hardly be displayed within a GUI without engulfing
the reader in a 'chaos of stimuli'. The barriers to rapid and distinctive authoring become
formidable.
For once, the cyclopean struggle between software and hardware manufacturers for
dominion of the US business market has meant that open systems are coming within
the range of the historian. The DOS-based system, so long so unfriendly, has, with the
application of various graphical user interfaces such as Microsoft Windows, evolved into a
more powerful platform than was previously offered to the scholarly world. The Windows
environment enables the scholar both to take advantage of the enormous strides which have
recently occurred in the field of image processing, storage and retrieval, and to process the
data [extended information] which have been compiled by other scholars. One particular
platform, Microcosm, was developed in order to overcome the difficulties encountered by
many different scholars, including historians of the moving image, and to handle corpora
of significant size and orthogonal shape.
Microcosm ([4], [7], [8], [9]) is an open hypermedia system which has been developed
in the Department of Electronics and Computer Science at the University of Southampton.
Within Microcosm it is possible to browse through large bodies of multimedia information
by following links from one place to another. In this respect, Microcosm provides all the
services that would be expected in any hypermedia system. However, Microcosm adds
many significant features to this basic model, which place it at a higher level than most
currently available hypermedia systems, and make it a particularly suitable environment
for integrating data and processes. It is currently implemented on Microsoft Windows
3. Versions are under development for Apple Macintosh machines and for Unix machines
running X Windows. In order to understand the facilities that Microcosm provides it is
necessary to examine the underlying model. The following description is taken from the
Microcosm Pre-Release Documentation [10].
[Figure 1: The Microcosm Model. Document viewers (text, video, bitmap, Word for Windows) communicate with the Microcosm Document Control System; Microcosm sends each message through the filter chain; the available links are sent to the dispatcher, where the user may select which of them to follow; Microcosm then loads the selected file in the appropriate viewer.]
Microcosm consists of a number of autonomous processes which communicate with
each other by a message-passing system (currently based on Microsoft Windows Dynamic
Data Exchange). No information about links is held in the document data files in the form
of mark-up. All data files remain in the native format of the application that created them.
Link information is held in link databases, which hold details of the source anchor (if there
is one), the destination anchor and any other attributes such as the link description. This
model has the advantage that it is possible for processes to examine the complete link
database as a separate item, and it is also possible to make link anchors in documents that
are held on read-only media such as CD-ROM and videodisc.
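By way of illustration only, a record in such a link database might be sketched as follows; the field names are our own invention, the text above specifying merely an optional source anchor, a destination anchor and further attributes such as a description:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    source_doc: Optional[str]     # None where the link has no source anchor
    source_anchor: Optional[str]  # a text selection or an image region
    dest_doc: str                 # may sit on read-only media such as CD-ROM
    dest_anchor: str
    description: str = ""         # any further attributes of the link

# Anchors into read-only media are unproblematic because the document
# files themselves are never rewritten:
link = Link("tour.txt", "palacete", "cd:/images/042.bmp", "whole image",
            "facade of a town house in Viana do Castelo")
```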
Microcosm allows a number of different actions to be taken on any selected item of
interest. The user selects the item of interest (e.g. a piece of text or an area of a picture)
and then chooses an action to take. A button in Microcosm is simply a binding of a specific
selection and a particular action; the end effect to the user in this case is the same as
a button in conventional hypermedia. A particular feature of Microcosm is the ability to
generalise source anchors. In most hypertext systems the source anchor of any link is fixed
at a particular point in the text. In Microcosm it is possible for the author to specify three
levels of generality of link sources.
1) The 'generic link'. The user will be able to follow the link after selecting the given
anchor at any point in any document.
2) The 'local link'. The user will be able to follow the link after selecting the given anchor
at any point in the current document.
3) The 'specific link'. The user will be able to follow the link only after selecting the
anchor at a specific location in the current document. Specific links may be made into
buttons.
Generic links are of considerable benefit to the author in that a new document may
be created and immediately have access to all the generic links that have been defined for
the system.
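Read as a resolution rule, the three levels can be sketched as follows; this is our illustration of the principle, with invented field names, and not the Microcosm implementation:

```python
def matching_links(links, doc, selection, position):
    """Collect every authored link the current selection could follow."""
    hits = []
    for link in links:
        if link["anchor"] != selection:
            continue                                   # anchor must match first
        if link["scope"] == "generic":
            hits.append(link)                          # any point in any document
        elif link["scope"] == "local" and link["doc"] == doc:
            hits.append(link)                          # anywhere in this document
        elif (link["scope"] == "specific" and link["doc"] == doc
                and link["pos"] == position):
            hits.append(link)                          # this exact location only
    return hits

links = [{"anchor": "palacete", "scope": "generic", "doc": None, "pos": None,
          "dest": "glossary.txt"}]
# A brand-new document immediately inherits every generic link:
assert matching_links(links, "new_essay.txt", "palacete", 17)
```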
The basic Microcosm processes are viewers and filters. Viewers are programs which
allow the user to view a document in its native format. In the current implementation
there are viewers for text, structured text, images, video, audio, mimics (guided tours),
animations and micons (moving icons). The task of the viewer is to allow the user to
peruse the document, to make selections and to choose actions. Any Windows application
might be used as a viewer, with the proviso that it is possible to select objects and at least
copy them to the clipboard. Current applications that are used as viewers in the Windows
environment include Word for Windows, Guide, Toolbook, Superbase and Excel. A major
strength of Microcosm is its ability to integrate other applications. In fact Microcosm may
be seen as an umbrella environment, allowing the user to make links from documents in
one application package to documents in another application package.
Filters in Microcosm are processes which are responsible for receiving messages, taking
appropriate actions, and then handing the message on to the next filter in the chain.
The actions that filters take are of the nature of changing the message, or adding or removing
messages. Current filters provided by Microcosm include the link databases, show
links, compute links (a full text retrieval mechanism), navigational aid filters and a link
construction filter. The order in which the filters appear in the chain is under user control,
and may be dynamically re-ordered and re-configured.
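Schematically the chain is a pipeline: each filter receives a message, may change, add or remove messages, and hands the result on. A minimal sketch under those assumptions (the names are invented, and the Dynamic Data Exchange machinery is not reproduced):

```python
linkbase = {"Habsburgs": ["habsburg_context.txt"]}

def show_links(msg):
    """A filter: annotate follow-link messages with matching authored links."""
    if msg.get("action") == "follow-link":
        msg = dict(msg, links=linkbase.get(msg["selection"], []))
    return [msg]          # a filter may also return several messages, or none

def run_chain(filters, messages):
    """Hand each message on to the next filter in the chain, in order."""
    for f in filters:     # the order of this list is under user control
        messages = [out for m in messages for out in f(m)]
    return messages

result = run_chain([show_links],
                   [{"action": "follow-link", "selection": "Habsburgs"}])
assert result[0]["links"] == ["habsburg_context.txt"]
```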
The software platform described above has several crucial advantages for the historians
in question. In the first case it meets some of the major objections to traditional
hypermedia systems by providing for the separation of text and authorial comment. It does
this by 'storing' the 'links' created by authors in a free-standing 'linkbase' which can be
accessed from any software or dataset resident within Windows 3.1 or above. This means
that versions of existing interrogation routines such as HiDES, running in C, can be accessed
[11], so that domain files created to provide authorial comment can also be ported, and
earlier authorial efforts and learning are not lost. More important is the ability to exploit
and integrate other application packages running in the same environment which are
powerful enough in their own right to handle very significant datasets. Typically, a
database in excess of 30,000 records could be accessed and brought to bear to study the
occupation of major town houses, 'palacetes', in Viana do Castelo, Portugal. The open system
approach means that years of experience with database management, spreadsheet and
graphics handling packages need not be lost in the adaptation to the new platform and the
creation of pictorial information systems running on more powerful hardware platforms.
This is significant, since authoring a dataset containing tens of thousands of data files (both
text and image) is a non-trivial task.
We have arrived at a definition of a research workstation: one which an investigator
uses so as to examine the assumptions reached by the person who has created a pictorial
information system. Naturally enough, the investigator would be anxious to preserve the
approach of the originator, include their own work, run appropriate applications programs
and include new data. At the same time the investigator might well wish to transfer
the 'interpretation' to a new hardware or software environment. Given that changes in
hardware and software technologies are extremely rapid, this requirement is both non-trivial
and vital.
Perhaps we should phrase this requirement in another way. The platform should
enable the investigator to obtain practical and intellectual support from the system without
overt 'editorializing' from the authors of courseware contained on it. At the same time
the system should contain a full range of aids to navigation, comprising access to specific
interactive help files [devised by the initial author] as well as access to information and tools of
a generic nature. Needless to say, the platform should also contain a range of 'intelligence',
which would operate at both 'specific' and 'generic' levels. As already argued, it must allow
the student to use a range of application programs, so as to produce findings which can
be assessed with reference to questions of historical debate and interpretation.
In other words, the platform should operate as an open system offering seamless integration
of a range of media [text, graphics, still images, video and sound] and applications.
Such integration must allow the investigator to compare data processed through such applications
with other data held in the system. In this way the platform can provide for an
open corpus of data. Since historical data is never complete, the system must allow for
the handling of new data, whether text, sound, video or in processed form. Furthermore,
it should do this in such a way that the integrity of all such data is guaranteed.
There are several tools which may be used within this new platform, and which are integral
to the software. These tools can be sub-divided into two classes: those which can be used
with any multimedia dataset, and those which are used to generate information that is
specific to a particular dataset. Examples of the first class of tools in Microcosm include the
'compute links' facility, which enables investigators to invoke a sophisticated and complex search
algorithm in order to provide them with an understanding of the resources available in
the text files of the dataset. This allows the investigator to gain a sense of the extent
to which 'extended information' provided in the system can be germane to a particular
interpretation and can be supported from the images on which 'comment' has been made
available. The 'compute links' facility is, in other words, a text retrieval system, with
all the virtues and vices of such an instrument. Since its use is hallowed and apparently
understood, it might be regarded as a crucial beginning. A second instrument which a
scholar might invoke is also traditional: a 'history', a device which allows for the tracking
of a particular investigation. This is a listing [in chronological order] of all files used,
and is implemented as one of the navigational aid filters in Microcosm. In the current
implementation it is provided with minimal 'intelligence', to indicate that a file has already
been accessed. The device which allows an investigator to highlight a given text and invoke
the 'show links' facility within Microcosm allows for the display of any links which have
already been authored. This is particularly apposite when an investigator is faced with
the intellectual issues involved in the comparison of descriptions of the visual image, or
the contextualization of a moving image.
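The 'history' device just described reduces to a very small mechanism: an ordered log of the files used, with just enough 'intelligence' to flag a repeat visit. A sketch under those assumptions, ours rather than the Microcosm filter itself:

```python
class HistoryFilter:
    """Navigational aid: a chronological listing of all files used."""

    def __init__(self):
        self.trail = []                 # files in the order they were opened

    def __call__(self, msg):
        if msg.get("action") == "open-file":
            name = msg["file"]
            # minimal 'intelligence': indicate a file already accessed
            msg = dict(msg, seen_before=name in self.trail)
            self.trail.append(name)
        return [msg]

history = HistoryFilter()
history({"action": "open-file", "file": "viana_042.bmp"})
assert history({"action": "open-file", "file": "viana_042.bmp"})[0]["seen_before"]
```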
At the same time such a platform should also provide tools for retrieving images
using 'visual' methods of identification, such as presenting the user with a set of 'thumb-nail
images' or 'visual abstracts' as the result of a query to an image database. In Microcosm
this facility has been extended to incorporate moving icons or 'micons', which are samples
extracted from videotape or videodisc and stored in digital form on the hard disc. Such
tools have a distinct advantage in that they allow an author to 'deconstruct' a particular
sequence of moving images, juxtaposing still images, text or graphics. These should be
accompanied by generic help files which are iconized by the platform.
The second set of tools which should be invoked by an investigator are those which are
unique to that dataset. These 'unique tools', when invoked, provide results which will only
refer to materials held in that pictorial information system. The first of these are specific
links, the classic 'buttons' well known to the users of hypermedia, which only apply at
a specific point in a document. The second, known as 'generic links' in Microcosm, are
arguably far more important. Such links explicitly provide interpretation: they can be
used by one author to group all references to a given object [a type of dictionary] or,
alternatively, they can be a means by which an author suggests a correspondence between
a range of discrete references, as in the discussion of inference in images. The advantage
of such generic links is that their individual nature is apparent; if 'further links' are added
to them then the provenance of such authoring can also be clearly distinguished. In some
senses 'generic links' are the ideal means of distinguishing authoring in a system.
The use of guided tours [mimics in Microcosm] also enables an author to put a
'thumbprint' on the way in which a given series of texts or images might be used as a
means of developing an argument or interpretation. Juxtaposing 'traditional' with 'modern'
in a sequence of photographs, especially if these are coupled to generic links through
extended comment, is actually a way of documenting the construction of images, while
commenting intensively upon their interpretation.
On another plane, the building of individual 'expert mode' interrogation files within
a platform such as that offered by Microcosm allows for various forms of argument to be
adopted, and labelled, by a specific author. In the HiDES programs used at Southampton
we have developed four modes by which authors can question evidence in either images
or text; these are debate, progressive argument, exposition, and reconstruction. All can
exploit Microcosm, but the latter two are particularly effective. This is because Microcosm
also allows an investigator to examine the mode of argument used by a predecessor, by
using 'compute links' or 'show links' on the domain files stored by the system. These
domain files are clearly unique to a particular dataset and type of data, as are traditional
arrangements of buttons into contents or dictionary files.
Though existing technologies can provide a wealth of material for discussion, and
enable us to posit powerful arguments, it might be argued that the potential of new ones is
such that they too cannot be ignored. Research and teaching in modern and contemporary
history pose specific problems for the social and economic historian. Some of the more
traditional of these have been anticipated by providing substantial, systematic, interactive
help files, designed to enable students who are not well versed in statistical method to
exploit complex data, and which can be effectively exploited as iconized help files. Nevertheless
there remains a plethora of material (e.g. film, audio and facsimiles) which presents certain
inherent difficulties, in that it cannot easily be utilised (stored, accessed and queried) on
the systems commonly available to scholars of the humanities. At the same time
historians face two further difficulties: data is often stored and accessed
via a range of very different technologies, some of which have long been incompatible,
while it has proved extremely difficult to document the usage of these data, e.g. which clips
of film or stills were used as evidence to support a particular argument and how they were
engineered. The result of such research in the past has often been unsatisfactory because
it might be labelled impressionistic.
In conclusion, we have argued in this chapter that pictorial information systems for
historians, and for scholars in other disciplines where interpretation and debate are essential
techniques for the classification of pictorial information, must be designed using an open
systems approach. This involves the integration of several different application packages
running under a unifying umbrella platform that provides linking, filtering and data viewing
functionality. At Southampton we are currently using this approach to build a number
of very substantial multimedia datasets, and the results of this work are discussed in the
following chapter.
An obvious and natural extension to the work described above is to integrate a set
of image processing tools into the information system to allow users to interact with and
process pictorial data in the same way as text. For example, generic links are currently only
implemented in Microcosm for text data. Extending the model to apply generic linking
capability to image data (both still and moving) is in principle very straightforward, but
the image processing techniques required to match any selected area of an image to a set of
target images are as yet unavailable. It will however be possible to achieve useful results in
certain well-defined areas, such as cartography and manuscripts, which can be extended as
developments in technology permit. We are moving towards the ideal of an 'image retrieval
system', but this must be part of an open information processing environment if it is to
serve as more than mere illustration.
References
[1] Hunter, L. "Hypermedia Narration: Providing Social Contexts for Methodology". Paper given at the ALLC/ACH'92 Conference, Oxford, April 1992.
[2] Bush, V. "As We May Think". Atlantic Monthly, July 1945, pp 101-108.
[3] Nelson, T.H. "Getting It Out of Our System". In: Schechter, G. (ed.) Information Retrieval: A Critical Review, Thompson Books, Washington D.C., 1967.
[4] Fountain, A., Hall, W., Heath, I. & Davis, H. "Microcosm: An Open Model for Hypermedia with Dynamic Linking". In Hypertext: Concepts, Systems and Applications, Proceedings of ECHT'90 (eds. A. Rizk, N. Streitz & J. André), Cambridge University Press, pp 298-311, 1990.
[5] Malcolm, K., Poltrock, S. & Schuler, D. "Industrial Strength Hypermedia: Requirements for a Large Enterprise". In Proceedings of Hypertext '91, ACM Press, pp 13-24, 1991.
[6] Halasz, F. "Seven Issues Revisited". Keynote talk given at Hypertext '91, San Antonio, Texas, December 1991.
[7] Hill, G., Wilkins, R. & Hall, W. "Open and Reconfigurable Hypermedia Systems: A Filter-Based Model". CSTR 92-12, Department of Electronics and Computer Science, University of Southampton, 1992.
[8] Hall, W., Davis, H., Heath, I., Hill, G. & Wilkins, R. "Microcosm: the State of the Art". Computer Science Technical Report, University of Southampton, 1992.
[9] Davis, H., Hall, W., Heath, I., Hill, G. & Wilkins, R. "Towards an Integrated Information Environment with Open Hypermedia Systems". Computer Science Technical Report, University of Southampton, 1992.
[10] Davis, H. & Rush, D. "Microcosm Pre-Release Documentation". Department of Electronics and Computer Science, University of Southampton, 1992.
[11] Hall, W. & Colson, F. "Multimedia Teaching with Microcosm-HiDES: Viceroy Mountbatten and the Partition of India". History and Computing, Vol. 3, No. 2, pp 89-98, 1991.
Halbgraue Reihe
zur Historischen Fachinformatik
Edited by
Manfred Thaller
Max-Planck-Institut für Geschichte
Series A: Historische Quellenkunden
Volume 14
Published simultaneously as:
MEDIUM AEVUM QUOTIDIANUM
EDITED BY GERHARD JARITZ
26
Manfred Thaller (Ed.)
Images and Manuscripts
in Historical Computing
Max-Planck-Institut für Geschichte
In commission with
SCRIPTA MERCATURAE VERLAG
St. Katharinen, 1992
© Max-Planck-Institut für Geschichte, Göttingen 1992
Printed in Germany
Printing: Konrad Pachnicke, Göttingen
Cover design: Basta Werbeagentur, Göttingen
ISBN: 3-928134-53-1
Table of Contents

Introduction
Manfred Thaller ......... 1

I. Basic Definitions

Image Processing and the (Art) Historical Discipline
Jörgen van den Berg, Hans Brandhorst and Peter van Huisstede ......... 5

II. Methodological Opinions

The Processing of Manuscripts
Manfred Thaller ......... 41

Pictorial Information Systems and the Teaching Imperative
Frank Colson and Wendy Hall ......... 73

The Open System Approach to Pictorial Information Systems
Wendy Hall and Frank Colson ......... 87

III. Projects and Case Studies

The Digital Processing of Images in Archives and Libraries
Pedro González ......... 97

High Resolution Images
Anthony Hamber ......... 123

A Supra-institutional Infrastructure for Image Processing in the Humanities?
Espen S. Ore ......... 135

Describing the Indescribable
Gerhard Jaritz and Barbara Schuh ......... 143

Full Text / Image DBMSs
Robert Rowland ......... 155
Introduction
Manfred Thaller
This book is the product of a workshop held at the International University Institute
in Firenze on November 15th, 1991. The intention of that workshop was to bring
together people from as many different approaches to "image processing" as possible.
The reason for this "collecting" approach to the subject was a feeling that, while image
processing has in many ways been the "hottest" topic in Humanities computing in recent
years, it may be the least well defined. It also seems much harder to say in this area what
is specifically important to historians, as opposed to other people. In that situation it was felt
that a forum would be helpful which could sort out which of the various approaches can
be useful in historical research.
To solve this task, the present volume has been produced: in many ways, it reflects
the discussions which actually took place less than the two companion volumes
on the workshops at Glasgow and Tromsø do. This is intentional. On the one hand,
the participants at the workshop in Firenze felt strongly the need to have projects
represented in the volume which were not actually present at the workshop. On the other,
the discussions were for quite some time engaged in clarifying what the methodological
issues were. That is: what actually are the topics for scholarly discussion beyond the
description of individual projects, when it comes to the processing of images in historical
research?
The situation in the area is made difficult because some of the underlying assumptions
are connected with vigorous research groups, who use fora of scholarly debate which are
only slightly overlapping; so what is tacitly assumed to hold true in one group of research
projects may be considered so obviously wrong in another one that it scarcely deserves
explicit refutation.
We hope that we have been successful in bringing some of these hidden differences
in opinion out into the open. We consider this extremely important, because only that
clarification allows for a fair evaluation of projects which may have started from different
sets of assumptions. So important, indeed, that we would like to catalogue here some of the
basic differences of opinion which exist between image processing projects. The reader will
rediscover them in many of the contributions; as editor I think, however, that summarizing
them at the beginning may make the contributions (which, of course, have been striving
for impartiality) more easily recognizable as parts of one coherent debate.
Three basic differences in opinion seem to exist today:
(1) Is image processing a genuine and independent field of computer-based research in
the Humanities, or is it an auxiliary tool? Many projects assume tacitly, and some do so
quite outspokenly, that images on the computer act as illustrations to more conventional
applications: to retrieval systems, as illustrations in catalogues, and the like. Projects of
this type tend to point out that, with currently easily available equipment and currently
clearly understood data processing technologies, the analysis of images, which can quite
easily be handled as illustrations today, is still costly and of uncertain promise. This is the
reason why they assume that such analytical approaches, if undertaken at all, should be
undertaken only as side effects of projects which focus upon the relatively simple administration of
images. Their opponents think, in a nutshell, that while experiments may be needed, their
overall outcome is so promising that even the simpler techniques of today should be
implemented only if they can later be made useful for the advanced techniques now only
partially feasible.
(2) Connected to this is another conflict, which might be the most constant one
in Humanities data processing during the last decades, but is particularly decisive
when it comes to image processing. Shall we concentrate on levels of sophistication which
are available for many on today's equipment, or shall we try to make use of the most
sophisticated tools of today, trusting that they will become available to an increasingly large
number of projects in the future? This specific battle has been fought since the earliest
years of Humanities computing, and this editor has found himself on both sides at different
stages. A "right" answer does not exist: the debate in image processing is probably one
of the best occasions to understand, mutually, that both positions are full of merit. It is
pointless to take permanently into consideration restrictions which obviously will cease to
exist a few years from now. It discredits all of us if computing in history always promises
results only on next year's equipment and does not deliver here and now. Maybe that is
indeed one of the more important tasks of the Association for History and Computing:
to provide a link between both worlds, lending vision to those of us burdened down by
the next funding deadline, and disciplining the loftier projects by the question of when
something will be affordable for all of us.
(3) The third major underlying difference is inherently connected to the previous ones.
An image as such is beautiful, but not very useful, before it is connected to a description.
Shall such descriptions be arbitrary, formulated in the traditionally clouded language of
a historian, perfectly unsuitable for any sophisticated technique of retrieval, maybe not
even unambiguously understandable to a fellow historian? Or shall they follow a predefined
catalogue of narrow criteria, using a carefully controlled vocabulary, for both of which it is
somewhat unclear how they will remain relevant for future research questions which have
not been asked so far? All the contributors to this volume have been much too polite to
phrase their opinions in this way: scarcely any of them does not have a strong one with
regard to this problem.
More questions than answers. "Image processing", whether applied to images proper
or to digitalized manuscripts, seems indeed to be an area where many methodological
questions remain open. Besides that, interestingly, it seems to be one of the most consequential
ones: a project like the digitalization of the Archivo General de Indias will
continue to influence the conditions of historical work for decades in the next century.
There are not only many open questions; it is worthwhile and necessary to discuss them.
While everybody seems to have encountered image processing in one form or the
other already, precise knowledge about it seems to be relatively scarce. The volume starts,
therefore, with a general introduction into the field by J. v.d. Berg, H. Brandhorst and
P. v. Huisstede. While most of the following contributions have been written to be as
self-supporting as possible, this introduction attempts to give all readers, particularly those
with only a vague notion of the techniques concerned, a common ground upon which the
more specialized discussions may build.
The contributions that follow have been written to introduce specific areas where
the handling of images is useful and can be integrated into a larger context. All authors in
this part have been asked to clearly state their own opinion, to produce clear-cut statements
about their methodological position in the discussions described above. Originally, four
contributions were planned: the first one, discussing whether the more advanced techniques
of image processing can change the way in which images are analysed and handled by art
historians, could unfortunately not be included in this volume due to printing deadlines;
we hope to present it as part of follow-up volumes or in one of the next issues of History
and Computing.
The paper of M. Thaller argues that scanning and presenting corpora of manuscripts
on a workstation can (a) save the originals, (b) introduce new methods for palaeographic
training into university teaching, and (c) provide tools for reading damaged manuscripts,
the comparison of handwriting and general palaeographic studies. He further proposes to
build upon that a new understanding of editorial work. A fairly long technical discussion
of the mechanisms needed to link images and transcriptions of manuscripts in a wider
context follows.
F. Colson and W. Hall discuss the role of images in teaching systems in university
education. They do so by a detailed description of the mechanism by which images are
integrated into Microcosm / HiDES teaching packages. Their considerations include the
treatment of moving images; furthermore they enquire about relationships between image
and text in typical stages in the dialogue between a teaching package and a user.
W. Hall and F. Colson argue, in the final contribution to this part, the general case
for open systems, exemplifying their argument with a discussion of the various degrees to
which control over the choices a user has is ascertained in the ways navigation is
supported in a hypertext-oriented system containing images. In a nutshell the difference
between "open" and closed systems can be understood as follows: in an "open
system" the user can dynamically develop further the behaviour of an image-based or
image-related system. On the contrary, in static "editions" the editor has absolute control,
the user none.
Following these general descriptions of approaches, in the third part several international
projects are presented, which describe in detail the decisions taken in implementing
"real" image processing based applications, some of them of almost frightening magnitude.
The contributors to this part were asked to provide a different kind of introduction to the
subject than those of the previous two: all of them should discuss a relatively small topic,
which, however, should be treated in much greater detail than the relatively broad
overviews of the first two parts.
All the contributions growing out of the workshop came from projects which had
among their aims the immediate applicability of the tools developed within the next 12-24
months. As a result they focus on corpora not much beyond 20,000 (color) and
100,000 (b/w) images, which are supposed to be stored in resolutions manageable within
≤ 5 MB/image (color) and ≤ 0.5 MB/image (b/w). The participants of the workshop
felt strongly that this view should be augmented by a description of the rationale behind
the creation of a large-scale project for the systematic conversion of a complete archive.
The resulting paper, by P. González, describes the considerations which led to the design
of the Archivo General de Indias project and the experiences gained during the completed
stages. That description is enhanced by a discussion of the strategies selected to make the
raw bitmaps accessible via suitable descriptions / transcriptions / keywords. A critical
appraisal of which decisions would be made differently after the developments in hardware
technology in recent years augments the value of the description.
The participants of the workshop felt strongly, furthermore, that the view described
above should be augmented by a description of the techniques used for the handling of
images in extremely high resolution. A. Hamber's contribution, dealing with the Vasari
project, gives a very thorough introduction into the technical problems encountered in
handling images of extremely high quality, and also explains the economic rationale behind
an approach which starts, on purpose, with the highest quality of images available today on
prototypical hardware.
As these huge projects were both related to institutions which traditionally collect
source material for historical studies, it seemed wise to include also a view on the role
images would play in the data archives which traditionally have been of much importance
in the considerations of the AHC. E.S. Ore discusses what implications this type of
machine-readable material should have for the infrastructure of institutions specifically dedicated
to Humanities computing.
Image systems which deal with the archiving of pictorial material and manuscript
systems have so far generally had fairly "shallow" descriptions. At least in art history, moreover,
they rely quite frequently on pre-defined terminologies. G. Jaritz and B. Schuh describe
how far, and why, historical research needs a different approach to grasp as much of the
internal structure and the content of an image as possible.
Last but not least, R. Rowland, who acted as host of the workshop at Firenze, describes the
considerations which are currently preparing the creation of another large-scale archival database,
to contain large amounts of material from the archives of the Inquisition in Portugal. His
contribution tries to explore the way in which the more recent developments of image
processing can be embedded in the general services required for an archival system.
This series of workshop reports shall attempt to provide a broader basis for thorough
discussions of current methodological questions. Their main virtue shall be that they
are produced sufficiently quickly to become available before developments in this field of
extremely rapid development make them obsolete. We hope we have reached that goal;
the editor has to apologize, however, that due to the necessity of bringing this volume out in
time, proofreading has of necessity not been as intensive as it should have been. To this
another shortcoming is added: none of the persons engaged in the final production of this
volume is a native speaker of English; so while we hope to have kept to the standards of
what might be described as "International" or "Continental" English, the native speakers
among the readers can only be asked for their tolerance.
Göttingen, August 1992
Halbgraue Reihe
zur Historischen Fachinformatik
Herausgegeben von
Manfred Thaller
Max-Planck-Institut für Geschichte
Serie A: Historische Quellenkunden
Band 14
Erscheint gleichzeitig als:
MEDIUM AEVUM QUOTIDIANUM
HERAUSGEGEBEN VON GERHARD JARITZ
26
Manfred Thaller (Ed.)
Images and Manuscripts
in Historical Computing
Max-Planck-Institut für Geschichte
In Kommission bei
SCRIPTA MERCATURAE VERLAG
St. Katharinen, 1992
© Max-Planck-Institut für Geschichte, Göttingen 1992
Printed in Cermany
Druck: Konrad Pachnicke, Göttingen
Umschlaggestaltung: Basta Werbeagentur, Göttingen
ISBN: 3-928134-53-1
Table of Contents
lntroduction
Manfred Tb aller . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . 1
I. Basic Definitions
Image Processing and the (Art) Historical Discipline
.Jörgen van den Berg, Hans Brandhorst and Peter van Huisstede ……………. , .. 5
II. Methodological Opinions
The Processing of Manuscripts
Manfred Tballer …….. . ……….. . . . … …. . . .. . . . ……………… . . .. …… 41
Pietonal Information Systems and the Teaching Imperative
Frank Colson and Wendy Hall . . . . . . . . . . . . . . . . . . . . . .. . .. . . .. … . . . . . .. . . . . . . . . . . . 73
The Open System Approach to Pictorial Information Systems
Wendy Hall and Frank Colson . . . . . . . . . . . . . . . . . . . . . . . . . . . . … . …….. . . . . . . .. . . . 87
111. Projects and Case Studies
Tbe Digital Processing of Images in Archives and Libraries
Pedro Gonzi.lez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
High Resolution Images
Anthony Hamber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . 123
A Supra-institutional Infrastructure for Image Processing in the Humanities?
Espen S. Ore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Describing the Indescribable
Gerhard Jaritz and Barbara Schub . . . . . …. . … . . . . . . . . . . . . . . … . . . . . . .. . . . . .. . 143
Full Text / Image DBMSs
Robert Rowland . . . . . .. . …. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
lntrosluctjon
lntroduction
Manfred Thaller
This book is the product of a workshop held at the International University Institute
in Firenze on November 151h, 1991. The intention of that workshop has been to bring
tagether people from as ma.ny different approaches to „ima.ge processing“ as possible.
The reason for this „collecting“ approach to the subject was a feeling, tha.t wbile image
processing in many ways has been the „hattest“ topic in Huma.nities computing 1n recent
years, it may be the least weil defined. It seems also much barder to say in this area., wbat
is specifically important to historia.ns, tha.n to other people. In that situation it was feit,
that a foruin would be helpful, which could sort out what of the various approaches can
be useful in historical resea.rch.
To solve this task, the present volume has been produced: in ma.ny ways, it reflects
the discussions which actually have been going on less, than the two compa.nion volumes
on the workshops at Glasgow a.nd TromS0 do. This is intentional. On the one band,
the pa.rticipa.nts at the workshop in Firenze did strongly feel the need to have projects
represented in the volume, which were not actually present at the workshop. On the other,
the discussions for quite some time were engaged in cla.rifying what the metbodological
issues were. That is: what a.ctua.lly a.re the topics for schola.rly discussion beyond the
description of individual projects, when it comes to the processing of images in historical
resea.rch?
The situation in the a.rea is made difficult, because some of the underlying a.ssumptions
are connected with vigoraus research groups, who use fora of schola.rly debate, which are
only slightly overlapping; so, what is ta.citly a.ssumed to hold true in one group of research
projects may be considered so obviously wrang in a.nother one, that it sca.rcely deserves
explicit refutation.
We hope, that we have been succes:.ful in bringing some of these hidden diJferences
in opinion out into the open. We consider this extremely importa.nt, because only that
cla.rification allows for a fair evaluation of projects which may have sta.rted from different
sets of a.ssumption. So importa.nt, indeed, that we would like to catalogue here some of the
basic differences of opinion which exist between image processing projects. Tbe reader will
rediscover them in many of the contributions; as editor I think however, that suma.rizing
tbem at tbe beginning may make the contributions- which, of course, have been striving
for impartiality – more easily rccognizable as parts of one coherent debate.
Three basic differences in opinion seem to exist today:
(1) Is ima.ge processing a genuine and independent field of Computer ba.sed resea.rcb in
the Humanities, or is it an auxiliary tool“? Many projects a.ssume tacitly – a.nd some do so
quite outspokenly- that imag on the computer act as illustrations to more conventional
applications. To retrieval systems, as illustrations in catalogues and the like. Projects of
this type tend to point out, that with currently easily available equipment a.nd currently
clearly understood data processing technologies, the analysis of images, which can quite
easily be ha.ndled as illustrations today, is still costly and of uncertain promise. Wb ich is the
rea.son why they a.ssume, that such analytical approaches, if at all, should be undertaken
2 Introductjon
as side effects of projects only, which focus upon the relatively simple administration of
images. Their opponents think, in a nutshell, that while experiments may be needed, their
overalJ outcome is so promising, that even the more simple techniques of today should be
implemented only, if they can later be made useful for the advanced techniques now only
partially feasible.
(2) Connected to this is another conflict, which might be the most constant one
in Humanities data processing during the last decades, is particularly decisive, however,
when it comes to image processing . Shall we concentrate on Ievels of sopbistication, which
are available for many on today’s equipment or shall we try to make use of the most
sophisticated tools today, trusting that they will become available to an increa.singly !arge
number of projects in the future? This specific battle has been fought since the earliest
years of Humanities computing, and this editor has found bimself on both sides at different
stages. A „right answer does not exist: the debate in image processing is probably one
of the best occassions to understand mutually, that both positions are full of merit. It is
pointless to take permanently restrictions into consideration, which obviously will cease to
exist a few years from now. It discredits all of us, if computing in history always promises
results only on next years equipment and does not deliver here and now. Maybe, that is
indeed one of the more important tasks of the Association for History and Computing:
to provide a link between both worlds, Jending vision to those of us burdened down by
the next funding deadline and disciplining the loftier projects by the question of when
sometbing will be affordable for all of us.
(3) The third major underlying difference is inherently connected to the previous ones.
An image as such is beautiful, but not very useful, before it is connected to a description.
Shall such descriptions be arbitrary, formulated in the traditionally clouded language of
a historian, perfectly unsuitable for any sophisticated technique of retrieval, maybe not
even unambiguously understandable to a fellow historian? Or shall they follow a predefined
catalogue of narrow criteria, using a carefully controlled vocabulary, for both of which it is
somewhat unclear how they will remain relevant for future research questions which have
not been asked so far? All the contributors to this volume have been much too polite to
phrase their opinions in this way; scarcely any of them, however, lacks a strong one with
regard to this problem.
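To make the contrast concrete, the following sketch juxtaposes the two styles of
description for one and the same image. It is entirely hypothetical: neither the field
names nor the vocabulary are taken from any project in this volume.

    # Two hypothetical descriptions of the same image: free prose versus
    # a controlled vocabulary. All field names and terms are invented.

    free_text = ("A somewhat worn miniature, probably Burgundian, showing "
                 "what may be a donor kneeling before the Virgin.")

    controlled = {
        "object_type": "miniature",                        # from a closed list
        "region": "Burgundy",                              # from a gazetteer
        "subjects": ["donor", "Virgin Mary", "kneeling"],  # fixed keyword list
    }

    # The trade-off in retrieval: the controlled record answers a precise
    # query, whereas the prose can only be searched for substrings.
    def find_by_subject(records, term):
        return [r for r in records if term in r.get("subjects", [])]

    print(find_by_subject([controlled], "donor"))  # finds the controlled record
    print("donor" in free_text)                    # True here, but brittle

The sketch also shows why the question of future research questions is so awkward: a
query the controlled vocabulary did not anticipate simply cannot be asked of it.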
More questions than answers. „Image processing", whether applied to images proper
or to digitized manuscripts, seems indeed to be an area where many methodological
questions remain open. Besides that, interestingly, it seems to be one of the most consequential
ones: a project like the digitization of the Archivo General de Indias will
continue to influence the conditions of historical work for decades in the next century.
There are not only many open questions; it is worthwhile and necessary to discuss them.
While everybody seems to have encountered image processing in one form or another
already, precise knowledge about it seems to be relatively scarce. The volume starts,
therefore, with a general introduction to the field by J. v.d. Berg, H. Brandhorst and
P. v. Huisstede. While most of the following contributions have been written to be as
self-contained as possible, this introduction attempts to give all readers, particularly those
with only a vague notion of the techniques concerned, a common ground upon which the
more specialized discussions may build.
The contributions that follow have been written to introduce specific areas where the
handling of images is useful and can be integrated into a larger context. All authors in
this part have been asked to state their own opinion clearly, and to produce clear-cut
statements about their methodological position in the discussions described above.
Originally, four contributions were planned; the first, discussing whether the more
advanced techniques of image processing can change the way in which images are analysed
and handled by art historians, could unfortunately not be included in this volume due to
printing deadlines. We hope to present it in a follow-up volume or in one of the next
issues of History and Computing.
The paper of M. Thaller argues that scanning and presenting corpora of manuscripts
on a workstation can (a) preserve the originals, (b) introduce new methods for palaeographic
training into university teaching, and (c) provide tools for reading damaged manuscripts,
for the comparison of handwriting, and for general palaeographic studies. He further
proposes to build upon this a new understanding of editorial work. A fairly long technical
discussion of the mechanisms needed to link images and transcriptions of manuscripts in
a wider context follows.
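Merely to fix the idea of such a link for readers unfamiliar with it, a deliberately naive
sketch follows. It is emphatically not Thaller's mechanism, only one conceivable data
structure tying a region of a page image to the text transcribed from it; every name in
it is invented.

    # A deliberately naive illustration (not Thaller's mechanism) of linking
    # regions of a manuscript image to segments of its transcription.

    from dataclasses import dataclass

    @dataclass
    class RegionLink:
        page_image: str    # file name of the scanned page
        bbox: tuple        # (x, y, width, height) in pixel coordinates
        text: str          # transcription of that region

    links = [
        RegionLink("codex_f12r.tif", (140, 220, 610, 40),
                   "In nomine domini amen"),
    ]

    def transcription_at(image, x, y):
        """Return the text of every region of `image` containing (x, y)."""
        hits = []
        for link in links:
            bx, by, bw, bh = link.bbox
            if (link.page_image == image
                    and bx <= x <= bx + bw and by <= y <= by + bh):
                hits.append(link.text)
        return hits

    print(transcription_at("codex_f12r.tif", 200, 240))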
F. Colson and W. Hall discuss the role of images in teaching systems in university
education. They do so by a detailed description of the mechanism by which images are
integrated into Microcosm / HiDES teaching packages. Their considerations include the
treatment of moving images; furthermore, they enquire into the relationships between
image and text at typical stages in the dialogue between a teaching package and a user.
W. Hall and F. Colson argue, in the final contribution to this part, the general case
for open systems, exemplifying their argument with a discussion of the varying degrees
to which control over the choices available to a user is exercised in the ways in which
navigation is supported in a hypertext-oriented system containing images. In a nutshell,
the difference between „open" and „closed" systems can be understood as follows: in an
„open system" the user can dynamically develop the behaviour of an image-based or
image-related system further, whereas in static „editions" the editor has absolute control
and the user none.
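For readers who prefer to see the distinction at the level of software structure, a minimal
sketch follows; the class and method names are hypothetical and do not reproduce the
actual Microcosm interfaces.

    # A minimal sketch of "closed" versus "open" linking; all names are
    # hypothetical and do not reproduce any actual system's interfaces.

    class ClosedEdition:
        """Links are hard-wired by the editor; the user cannot change them."""
        def __init__(self, editorial_links):
            self._links = dict(editorial_links)   # anchor -> target, fixed

        def follow(self, anchor):
            return self._links.get(anchor)

    class OpenSystem:
        """Links live in a store which the user may extend at any time."""
        def __init__(self, editorial_links):
            self._links = {a: [t] for a, t in editorial_links.items()}

        def follow(self, anchor):
            return self._links.get(anchor, [])

        def add_link(self, anchor, target):
            # The user develops the system's behaviour further; new links
            # coexist with, rather than replace, the editor's.
            self._links.setdefault(anchor, []).append(target)

In the closed variant the link table is fixed once and for all by the editor; in the open
variant the user's links stand beside the editorial ones instead of being excluded by them.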
Following these general descriptions of approaches, several international projects are
presented in the third part, describing in detail the decisions taken in implementing
„real" applications based on image processing, some of them of almost frightening
magnitude. The contributors to this part were asked to provide a different kind of
introduction to the subject than those of the previous two: each discusses a relatively
small topic, but in much greater detail than the relatively broad overviews of the first
two parts.
All the contributions growing out of the workshop came from projects which had
among their aims the immediate applicability of the tools developed within the next 12-
24 months. As a result they focus on corpora of not much more than 20,000 (color) and
100,000 (b/w) images, which are supposed to be stored at resolutions manageable within
≤ 5 MB / image (color) and ≤ 0.5 MB / image (b/w); the total volumes implied by these
figures are worked out below.
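As a rough upper bound, assuming full corpora at the stated per-image limits:

    20,000 color images  x  5 MB/image    =  100,000 MB  (about 100 GB)
    100,000 b/w images   x  0.5 MB/image  =   50,000 MB  (about 50 GB)

Volumes of this order were formidable, but at least plannable, quantities at the time
these projects were conceived.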
The participants of the workshop felt strongly that this view should be augmented by a
description of the rationale behind
the creation of a large-scale project for the systematic conversion of a complete archive.
The resulting paper, by P. González, describes the considerations which led to the design
of the Archivo General de Indias project and the experiences gained during the completed
stages. That description is enhanced by a discussion of the strategies selected to make the
raw bitmaps accessible via suitable descriptions / transcriptions / keywords. A critical
appraisal of which decisions would be made differently after the developments in hardware
technology of recent years augments the value of the description.
The participants of the workshop felt strongly, furthermore, that the view described
above should be augmented by a description of the techniques used for the handling of
images in extremely high resolution. A. Hamber's contribution, dealing with the Vasari
project, gives a very thorough introduction to the technical problems encountered in
handling images of extremely high quality, and also explains the economic rationale behind
an approach which starts, on purpose, with the highest quality of images available today
on prototypical hardware.
As both these huge projects were related to institutions which traditionally collect
source material for historical studies, it seemed wise also to include a view on the role
images would play in the data archives which have traditionally been of much importance
in the considerations of the AHC. E. S. Ore discusses what implications this type of
machine-readable material should have for the infrastructure of institutions specifically
dedicated to Humanities computing.
Image systems which deal with the archiving of pictorial material, and manuscript
systems, have so far generally employed fairly „shallow" descriptions. At least in art
history, moreover, they rely quite frequently on pre-defined terminologies. G. Jaritz and
B. Schuh describe how far, and why, historical research needs a different approach in
order to grasp as much of the internal structure and the content of an image as possible.
Last but not least, R. Rowland, who acted as host of the workshop in Firenze, describes
the considerations currently preparing the creation of another large-scale archival database,
which is to contain large amounts of material from the archives of the Inquisition in
Portugal. His contribution explores the way in which the more recent developments of
image processing can be embedded in the general services required by an archival system.
This series of workshop reports attempts to provide a broader basis for thorough
discussions of current methodological questions. Its main virtue shall be that it is
produced quickly enough to become available before developments in this rapidly evolving
field make it obsolete. We hope we have reached that goal; the editor has to apologize,
however, that due to the necessity of bringing this volume out in time, proofreading has
of necessity not been as intensive as it should have been. To this another shortcoming is
added: none of the persons engaged in the final production of this volume is a native
speaker of English; so, while we hope to have kept to the standards of what might be
described as „International" or „Continental" English, the native speakers among the
readers can only be asked for their tolerance.

Göttingen, August 1992