Centre for New OED and Text Research Home Page
In January 1985, the University of Waterloo established the Centre
for the New OED to fulfill its obligations under an agreement with the Oxford University Press to computerize
the OED. The fundamental goal of the Centre remains to
support innovative research through the development of
application-driven text management software.
In 1989, the Oxford University Press published the 20-volume Oxford
English Dictionary, second edition. As a joint venture with the
Press, the University of Waterloo designed an on-line dictionary
database that is suitable for editors charged with maintaining the OED,
lexicographers working on other dictionaries, and researchers who wish
to consult the OED interactively.
The project adopted the philosophy of modern text markup systems
that a computer-processable version of text is well-represented by
interleaving "tags" with the text of the original document, still
leaving the original words in proper sequence. In its simplest
interpretation, by suppressing the tags we are left with the original
text, and by converting the tags to formatting instructions we can
produce a suitable presentation of the text.
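As a minimal sketch of that idea (using SGML-style angle-bracket tags; the tag names below are hypothetical and are not the OED's actual markup), suppressing the tags recovers the original text, while mapping tags to formatting instructions yields a presentation:

    import re

    # A fragment of interleaved tags and text (hypothetical tag names,
    # loosely in the spirit of dictionary markup; not the OED's tag set).
    tagged = "<entry><hw>archive</hw> <pos>n.</pos> <def>a collection of records</def></entry>"

    # Suppressing the tags leaves the original words in proper sequence.
    plain = re.sub(r"<[^>]+>", "", tagged)
    print(plain)  # -> "archive n. a collection of records"

    # Converting tags to formatting instructions produces a presentation;
    # here each tag name maps to a simple terminal style.
    styles = {"hw": "\033[1m{}\033[0m",   # bold headword
              "pos": "\033[3m{}\033[0m",  # italic part of speech
              "def": "{}"}                # definition left unstyled

    def render(text):
        def fmt(m):
            tag, body = m.group(1), m.group(2)
            return styles.get(tag, "{}").format(body)
        # Replace innermost tagged regions until none remain.
        while re.search(r"<(\w+)>([^<]*)</\1>", text):
            text = re.sub(r"<(\w+)>([^<]*)</\1>", fmt, text)
        return text

    print(render(tagged))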
The most visibly successful aspect of our research is embodied in
the flexible and efficient search and display software.
LECTOR is a general-purpose browser that takes as input a stream of
arbitrarily tagged text and formats it to the screen through
user-defined typography reflecting its structure. As a complementary
software component, the PAT text search engine retrieves all
occurrences of words or phrases in the 570-megabyte OED
in less than one second, and allows users to restrict queries and results
to arbitrary structured text fragments. Used together, PAT and LECTOR
form a powerful query facility for text-dominated databases.
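The flavor of such region-restricted retrieval can be sketched as follows (a toy illustration of searching within structured fragments; this is not PAT's actual query syntax or API, and the tags are hypothetical):

    import re

    # A toy corpus with SGML-style structure (hypothetical tags, not OED markup).
    corpus = ("<entry><hw>set sail</hw> <def>to begin a voyage</def>"
              " <quot>they set sail at dawn</quot></entry>")

    def occurrences(text, phrase):
        """Start offsets of every occurrence of a phrase."""
        return [m.start() for m in re.finditer(re.escape(phrase), text)]

    def regions(text, tag):
        """(start, end) spans of each tagged region, used to restrict queries."""
        return [(m.start(), m.end())
                for m in re.finditer(rf"<{tag}>.*?</{tag}>", text)]

    def within(matches, spans):
        """Keep only the matches that fall inside some region."""
        return [m for m in matches if any(s <= m < e for s, e in spans)]

    hits = occurrences(corpus, "set sail")          # two occurrences
    print(within(hits, regions(corpus, "quot")))    # only the one inside <quot>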
While the research continues at the University, the tangible results
are further developed and commercialized by Open Text Corporation, a spin-off
company located in Waterloo.
The Centre was an active member of CSSC, the Canadian Strategic
Software Consortium, formed in 1994 to conduct pre-competitive research
leading to database management systems that integrate structured text
and relational data.
Our prototype distributed
federated database system allowed integrated data access via the
Oracle (Oracle) and DB2 (IBM) relational database systems and the
SearchServer (Fulcrum), PAT (Open Text), and MultiText (UW) text
engines. A fundamental goal of this research is the creation of good standards for
managing and manipulating text. Such standards encourage and facilitate
software interoperability and thus cooperation between software
vendors, developers, and integrators.
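A minimal sketch of the federated pattern (with hypothetical interfaces; the actual prototype spoke each engine's native protocol) dispatches one user query to several heterogeneous engines and merges the answers:

    from abc import ABC, abstractmethod

    class Engine(ABC):
        """Common facade over heterogeneous back ends (hypothetical
        interface; the real system used each product's native client)."""
        @abstractmethod
        def search(self, query: str) -> list[str]: ...

    class RelationalEngine(Engine):
        def __init__(self, name, rows):
            self.name, self.rows = name, rows
        def search(self, query):
            # Stand-in for an SQL query: rows mentioning the query term.
            return [r for r in self.rows if query in r]

    class TextEngine(Engine):
        def __init__(self, name, docs):
            self.name, self.docs = name, docs
        def search(self, query):
            # Stand-in for a PAT/MultiText-style full-text search.
            return [d for d in self.docs if query.lower() in d.lower()]

    def federated_search(query, engines):
        """Fan the query out to every registered engine, merge by source."""
        return {e.name: e.search(query) for e in engines}

    engines = [RelationalEngine("DB2", ["lexicon table: entry 'set'"]),
               TextEngine("PAT", ["The OED entry for 'set' is the longest."])]
    print(federated_search("set", engines))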
Principal research interests include, but are not limited to:
This project explored the use of retrieval engines that combine
fast text search with structured data retrieval to
support resource discovery.
Currently users of the World Wide Web issue queries against one or
more indexing sites and browse through long lists of page references. As
the internet grows in size and as more and more data is not converted to
HTML until requested by users via query forms, monolithic indexing of
the Web becomes less effective in meeting users' needs. Locating
information is better achieved by characterizing resources (that is,
extracting profile templates that describe data sources and indexed
data), distributing and communicating resulting descriptions, and
flexibly searching multiple, diverse template databases.
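For illustration, a resource profile might look like the following (the field names and values are hypothetical; the project's actual template schema is not reproduced here):

    # A hypothetical profile template characterizing one data source.
    # Field names and values are illustrative only, not the project's schema.
    profile = {
        "source": "uw-oed-index",
        "url": "http://example.org/query",   # placeholder endpoint
        "description": "Full-text index of dictionary quotations",
        "query_form": {"params": ["phrase", "region"], "method": "GET"},
    }
    print(profile["description"])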
A simple user query can be transmitted to several sites (chosen by
an appropriate site selection process) by a program that sits behind the
browser. At each site a backend program transforms the query into
several queries specific to the particular index, issues the queries,
and packages the query responses into an appropriately rich structure.
The frontend then provides facilities to the browser for examining the
structured response and preparing subsequent refinements to the queries.
The goal is to help users to find specific query forms to be used in
conjunction with specific data resources to serve their information
needs.
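A minimal sketch of that pipeline, with hypothetical selection and transformation functions standing in for the real components, and assuming the profile structure sketched above:

    def select_sites(query, profiles):
        """Toy site selection: keep sources whose description
        mentions any word of the query."""
        words = query.lower().split()
        return [p for p in profiles
                if any(w in p["description"].lower() for w in words)]

    def transform(query, profile):
        """Rewrite the user query into index-specific queries
        (illustrative only; a real backend knows each index's language)."""
        return [{"param": name, "value": query}
                for name in profile["query_form"]["params"]]

    def dispatch(query, profiles):
        """Fan out to selected sites; package responses in a uniform
        structure for the frontend to examine and refine."""
        results = []
        for p in select_sites(query, profiles):
            results.append({"source": p["source"],
                            "queries": transform(query, p),
                            # A real backend would now issue each query
                            # and collect the site's responses here.
                            "responses": []})
        return results

    print(dispatch("dictionary quotations", [profile]))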
The resulting infrastructure provides scalability in terms of sites,
data volume and heterogeneity of document collections.
For further information about the Centre, the New OED project, the
T/RDBMS project, the ReDWooD project, the software, or any related
activities, please direct inquiries to:
Professor Frank Tompa
School of Computer Science
University of Waterloo
200 University Avenue West
Waterloo, Ontario N2L 3G1
CANADA