Barcodes are dead. Long live the barcode!

I’ve noticed a mindless consensus among the geek literati when talking about QR Codes. Often a blog post will discuss the various uses they are put to, then end with the phrase ‘But I think that NFC will replace them in the near future’.

Bullshit.

I have an RFID system in my library. My long-term experience in libraries is with barcodes. I’ve already done the QR Code vs NFC dance. While both have their advantages, I think the two technologies are complementary, not competitive. And barcodes have the advantage.

The cost of production for barcodes is ink and paper. They have a standard protocol for communication between the reader and the computer. I can plug a barcode reader into a USB port and scan off the raw numbers on the barcodes into Excel.

I can make a barcode whenever I want using standard office products. If I’m enthusiastic I can decode what a barcode means with a Mark 1 Eyeball. This simplicity of communication between printed text and computer systems is the major advantage of barcodes.
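
If you’d rather script it than wrestle with a word processor, a few lines of Python will spit out a printable barcode. This is only a sketch: it assumes the third-party python-barcode package, and the item number is made up.

import barcode   # the third-party python-barcode package (pip install python-barcode)

item_number = "31234000123456"               # a made-up library item number
code = barcode.get("code128", item_number)   # Code 128 holds typical item numbers comfortably
print(code.save("item_barcode"))             # writes item_barcode.svg, ready to print

Print the result on a sticker sheet and the cost really is just ink and paper.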

The major problems with barcodes are bandwidth and data storage. Laser scanning is a little slow, and the few bits you can store in a printed barcode are not really sufficient.

Using barcodes as identifiers, then pulling the rest of the information from a networked system solves the bandwidth and information storage problem.

RFID tags can’t win by simply replicating what barcodes do. If I’m using an NFC tag to point to a URL, what’s the point? The $1 NFC tag sticker pointing to that URL is redundant when a printed code can do the same job as part of the standard design for a few cents in design fees and ink.

RFID will work in a few crucial areas:

  1. You need to store more data offline than a barcode can accommodate.
  2. You need to avoid the one-at-a-time scan bottlenecks of barcodes.
  3. You need to track the location of a tagged item at all times.
  4. You have two networked devices (say a POS terminal and a smartphone) that need to communicate securely only in a specific place (and FaceCash proves that QR codes work just fine for this).

RFID stickers will not replace the barcode.

NFC point-of-sale devices have promise, and are already in use with many tap-to-pay services. Industry and government will continue to use RFID in the places where it works. NFC will probably revolutionize device-to-device communication.

For you and me, the easiest way to get information from print to computer will still be some variation of the barcode.

Related Editions and Associated Information in Integrated Library Systems

This is based on a request from Tejaswi Tenneti in relation to an answer I gave a while ago here: Brett Williams’s answer to Ontologies: Is there an Ontology that models Enterprise products and associated knowledge articles?

This post will walk you through some of the advanced information processing that goes on in large library systems.

The basis of library records is a standard interchange format called the MARC record. The MARC record is an ISO standard that was originally developed by Henriette Avram at the Library of Congress in the 1960s.

MARC is an interchange format: a way to transfer information both offline and online. It was originally developed for tape-based processing of information, long before ARPANET was a twinkle in a CRT screen.

There are several standard MARC formats, as well as several different implementations of MARC, encoded in either MARC-8 or UTF-8. For simplicity, I’m just going to show you what a bibliographic record looks like. A bibliographic record is filled in with the information about a single item.

Here’s the first couple of lines of a MARC record I’ve cracked open with a tool to show you the structure of the record.

=LDR 01437nam 2200373 a 4500
=001 9781849729390
=003 Credo
=005 20110223080808.0
=006 m\\\\\\\\d\\\\\\\\
=007 cr\cn|||||||||
=008 110211r20032011enka\\\\od\\eng\d
=020 \\$a9781849729390 (online)
=020 \\$z9780007165414 (print)
=020 \\$z0007165412 (print)
=035 \\$a(OCoLC)703091350
=035 \\$a(CaBNvSL)slc00226357
=035 \\$a(Credo)hcdquot2003
=040 \\$aCaBNvSL$cCaBNvSL$dCaBNvSL
=050 \4$aPN6081$b.D495 2003eb
=082 04$a080$222
=245 00$aCollins dictionary of quotations$h[electronic resource].

The raw record looks like this. Quora can’t even display some of the critical separation markers used. Like I mentioned, this was developed in the 1960s.

01437nam 2200373 a 45000010014000000030006000140050017000200060019000370070015000560080041000710200027001120200026001390200023001650350021001880350026002090350023002350400030002580500025002880820012003132450061003252500025003862600080004113000063004915060060005545200060006145300037006745380036007115880054007476500039008016550022008407100027008627760046008898560128009359781849729390Credo20110223080808.0m d cr cn|||||||||110211r20032011enka od 000 0 eng d a9781849729390 (online) z9780007165414 (print) z0007165412 (print) a(OCoLC)703091350 a(CaBNvSL)slc00226357 a(Credo)hcdquot2003 aCaBNvSLcCaBNvSLdCaBNvSL 4aPN6081b.D495 2003eb04a08022200aCollins dictionary of quotationsh[electronic resource].

And yes, before you ask, there is an XML version. It’s still catching on, but it’s looking like the future interchange format.
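
If you want to poke at one of these yourself, the pymarc library will happily chew through the raw ISO 2709 blob above. A minimal sketch (the filename is hypothetical):

from pymarc import MARCReader   # pip install pymarc

with open("records.mrc", "rb") as fh:          # a file of raw MARC records
    for record in MARCReader(fh):
        print(record.leader)                   # the =LDR line from above
        for field in record.get_fields("020", "245"):
            print(field.tag, field)            # the ISBNs and the title statement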

So, here’s my workflow. I get a new book in the library. I take a look at it, and then I search for the MARC record in another library’s catalogue, at the Library of Congress, or through a company called OCLC, which has tools that make this process very quick. I download the record to my computer and open it up using tools in my ILS (Integrated Library System). I check over the MARC record. It doesn’t have to be exact, but I do check the ISBN, the title, the author and a few other points. I also have tools that check the author (is it Chester M Arthur or Chester M. Arthur?), the title and some other fields to ensure consistency.
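
The downloading step is usually Z39.50 or its web-service sibling, SRU, under the hood. Here’s a rough sketch of an SRU fetch; the endpoint is a placeholder and the index and schema names vary from server to server.

import urllib.parse, urllib.request

params = urllib.parse.urlencode({
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": 'bath.isbn="9781849729390"',   # the index name depends on the target server
    "maximumRecords": "1",
    "recordSchema": "marcxml",              # ask for the record as MARCXML
})
with urllib.request.urlopen("https://example.org/sru?" + params) as resp:
    open("new_record.xml", "wb").write(resp.read())   # hand this to the ILS import tools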

Then, I tell my ILS to add it to the catalog. At this point, the MARC record is broken up and put into tables in a database. My current ILS runs on Oracle; others use MySQL.
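
What that breaking-up looks like varies by vendor, but the shape is roughly this. A toy illustration only, not any real ILS schema:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE bib       (bib_id INTEGER PRIMARY KEY, leader TEXT);
    CREATE TABLE bib_field (bib_id INTEGER, tag TEXT, ind1 TEXT, ind2 TEXT,
                            subfield TEXT, value TEXT);
""")
db.execute("INSERT INTO bib VALUES (1, '01437nam 2200373 a 4500')")
db.execute("INSERT INTO bib_field VALUES (?, ?, ?, ?, ?, ?)",
           (1, "245", "0", "0", "a", "Collins dictionary of quotations"))
print(db.execute("SELECT value FROM bib_field WHERE tag = '245'").fetchall())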

When the record is put into the database, it’s checked against the current holdings. If I already have this exact edition of the Collins Dictionary of Quotations, it will show up as another copy, identical to the first. If I have an older edition, my new record will show up as a separate item, with a different ISBN and other details, and the ILS will show a unified record when I look up ‘Collins Dictionary of Quotations’, with both editions identified.

Is the item written by Mark Twain, who’s really Samuel Clemens? That decision got made years ago, so whether you search the Harvard Library or the library in Minot, North Dakota, you’ll get the same answer. Were there 15 different editions with different contents? There’s a rule for that.

Bringing this back to industry, say you are taking a product through its lifecycle. In R&D it’s Project Wiffleball. When it moves over to Product Development, it becomes Project Unicorn E456. When you release it to the public it’s called the E-Nebulizer, of which you release versions E999, E678, E469, ET689 and SV368. What a library system could do is collate the information from

Project Wiffleball
Project Unicorn E456
E-Nebulizer E999
E-Nebulizer E678
E-Nebulizer E469
E-Nebulizer ET689
E-Nebulizer SV368

and show it all in the same place, on a single screen, with no duplication, and with clear links to knowledge objects that relate generally to the project and others that relate to specific versions.
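
In data terms that’s not much more than a ‘work’ record that collects every name the product has carried, with knowledge objects hung either off the work as a whole or off a specific version. A toy sketch; the knowledge object names are made up:

# One "work" record collating every name, plus per-version attachments
product_family = {
    "work": "E-Nebulizer",
    "also_known_as": ["Project Wiffleball", "Project Unicorn E456"],
    "knowledge_objects": ["original R&D spec", "general style guide"],   # apply to the whole project
    "versions": {
        "E999":  ["service manual E999"],
        "E678":  [],
        "E469":  [],
        "ET689": [],
        "SV368": ["field notice SV368-01"],
    },
}

# One screen, no duplication: the general objects once, then each version's own
print(product_family["work"], product_family["also_known_as"])
print(product_family["knowledge_objects"])
for version, objects in product_family["versions"].items():
    print(version, objects)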

Now, this is all done within the ILS. I can also pull information from outside sources to enrich the contents of the bibliographic record.

Here’s a record from my catalog.

http://library.cna-qatar.edu.qa/?itemid=|marc-qatar|000060791

The section in the middle is the MARC record from my catalog. On the right is the book cover, drawn from a third-party service (Syndetics). It works by grabbing the ISBN and running it through a collation service called xISBN to identify all of the editions. If Syndetics has a cover for the given ISBN, it serves that cover; if not, I get another edition’s cover.
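
The edition-collation step looks roughly like this. I’m writing the xISBN endpoint and parameters from memory, so treat them as assumptions rather than gospel.

import json, urllib.request

isbn = "9781849729390"
url = ("http://xisbn.worldcat.org/webservices/xid/isbn/" + isbn
       + "?method=getEditions&format=json")
with urllib.request.urlopen(url) as resp:
    editions = json.loads(resp.read().decode("utf-8"))

# Every related edition -- try each ISBN against the cover service until one hits
print(editions.get("list", editions))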

Moving on down, there’s the full table of contents, also drawn from Syndetics. This is indexed along with all of the main record information and is searchable. Further down, reviews of the book are also available, likewise keyword-indexed and searchable.

All right! My library is very basic, so let me show you something a little more complex.

Here’s the University of Southern Queensland Library in Toowoomba, about 3 hours from Brisbane. I had the pleasure of talking quite extensively with their IT team a couple of weeks ago at a conference.

Take a look at this record:

http://library.usq.edu.au/Record/vtls000420789

You can see the Syndetics book covers over on the left-hand side of the screen. Now, check out the Similar Items link.

Using a relevance algorithm, a list of similar works is pulled up and integrated with the individual record. This draws on a database maintained in-house by their IT department; it’s part of the library database that serves up this finding aid.
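
I don’t know exactly what USQ’s algorithm does, so here’s only the general idea: score relatedness by how much two records’ subject headings overlap, then rank. A toy illustration, not their code:

def similarity(headings_a, headings_b):
    """Jaccard overlap between two sets of subject headings."""
    a, b = set(headings_a), set(headings_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

catalogue = {
    "Collins dictionary of quotations": {"Quotations, English", "Reference"},
    "Oxford dictionary of quotations":  {"Quotations, English", "Quotations", "Reference"},
    "Collins English dictionary":       {"English language", "Reference"},
}

target = catalogue["Collins dictionary of quotations"]
ranked = sorted(catalogue.items(),
                key=lambda item: similarity(target, item[1]),
                reverse=True)
for title, headings in ranked[1:]:     # skip the record itself
    print(title, round(similarity(target, headings), 2))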

OK, let’s flip over to USQ’s mobile interface.
http://m.library.usq.edu.au/
Now try that top link again; you’ll have the right cookie to view the mobile interface.

Check out the ‘Show on Map’ link. This will load a graphic, sized for the iPhone, that indicates the location of the book. This is from a database maintained by their IT department, separate from the library database that serves this finding aid.
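
Again, this is a guess at the plumbing rather than USQ’s code, but that separate database is essentially a table mapping call-number ranges to coordinates on a floor-plan image:

# Toy lookup table: call-number range -> (x, y) on the floor-plan graphic
shelf_map = [
    ("PN6000", "PN6999", 120, 340),
    ("QA1",    "QA999",  260, 180),
]

def locate(call_number):
    # Plain string comparison is good enough for this toy; real call-number
    # sorting is fussier than this.
    for start, end, x, y in shelf_map:
        if start <= call_number <= end:
            return (x, y)
    return None

print(locate("PN6081"))   # -> (120, 340), the spot to highlight on the map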

OK, let’s hop back a little closer to home.

The San Diego Public Library has included QR codes in their book records using the Google Charts API.

http://libpac.sdsu.edu/search~S0?/XTesting&SORT=D/XTesting&SORT=D&SUBKEY=Testing/1%2C13291%2C13291%2CB/frameset&FF=XTesting&SORT=D&1%2C1%2C
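
The Google Charts side of this is just a URL: pass the record link in as the chart data and drop the result into an img tag. A minimal sketch, with a made-up record link:

import urllib.parse

record_url = "http://example.org/record/b1234567"   # hypothetical catalogue record link
qr_url = ("https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl="
          + urllib.parse.quote(record_url, safe=""))
print(qr_url)   # use this as the src of an <img> in the record display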

If you’ve made it this far, you’ve got a real interest in displaying information. As I mentioned before in my answer, this is a very rough introduction to a complex topic. If you’re really interested, and are looking for consultants, I have a few contacts in Knowledge Management, Libraries and the systems behind them who can give you much more help.

Cross-posted from Quora.

eBook Distribution – Experiment 2 – Calibre

While sharing a folder via a simple webserver is a bare-bones way of distributing content, much of the work for a clean, well-presented content server for ebooks has already been done.

Kovid Goyal’s Calibre program is the iTunes of eBooks. Calibre can handle just about any ebook format, and can convert that ebook format to any other format with a high degree of accuracy.

Calibre can download metadata about a book, including the cover, and can manage multiple copies of an ebook in different formats.

Calibre also comes with a content server that serves up both HTML and OPDS (an XML ebook catalog format) versions of the catalog.

The Calibre content server delivers a very attractive interface that is easy to update and maintain.
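
If you want to script against it, the OPDS feed is just Atom XML. A small sketch that lists the catalogue titles; the port and the /opds path are the defaults as I remember them, so check your own setup.

import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
with urllib.request.urlopen("http://localhost:8080/opds") as resp:   # a running Calibre content server
    feed = ET.parse(resp)

for entry in feed.iter(ATOM + "entry"):
    print(entry.findtext(ATOM + "title"))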

The advantages

1) Excellent program with automatic news updating

2) The content server solves many security/privacy concerns, and the open-source program minimizes tracking and privacy problems

3) Easy conversion allows future-proof access to library materials

4) Integrates tightly with the widely used Stanza iOS ebook reader, and falls back gracefully to the HTML catalog.

Disadvantages

1) Like the previous experiment, it’s not user-friendly unless directly linked from a familiar page

2) Fiendishly difficult to get the latest versions working on Ubuntu flavors of Linux. The PPAs are usually woefully out of date.

3) Requires a more powerful computer with a GUI unless one is very familiar with the command line

eBook Distribution – Experiment 1

I’ve been experimenting with some different methods of distributing ebooks using existing infrastructure. These experiments are localized (thinking of the library as a place where people go) while at the same time using WiFi, networks, computers, web servers and other tools of the internet.

My first bare-bones experiment uses Mongoose.

Mongoose is a radically simple webserver. Download the executable, put it in the folder you want to share, run it, and you can access it at http://localhost:8080 or at http://[computer IP]:8080.

If you want to run it off of port 80, or add additional functions, you can edit the optional mongoose.conf file.

With a small edit of the mongoose.conf file, enabling directory listing, you can serve up a folder over a network, allowing users to download whatever is inside that folder.
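
For reference, the edit is only a couple of lines. I’m writing the option names from memory of the Mongoose docs, so double-check them against your version:

# mongoose.conf -- option names from memory, check against your Mongoose version
listening_ports          8080
document_root            .
enable_directory_listing yes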

Here are some advantages

1) This is a 2-minute project. You have a folder to share, and a way to share it.

2) This method respects your users’ privacy. With a quick setup like this, you’re not harvesting their usernames or reading preferences, or creating advertising profiles.

3) You can set this system up anywhere, using almost any hardware from the last 10 years.


Disadvantages

1) It’s ugly. You get a basic directory listing with limited functionality.

2) It violates the concept that most people have of the internet. The URL doesn’t end in .com or another top-level domain, and unless you link to it, no one will ever know it exists.

3) While the Mongoose webserver is fairly secure, this should be run on a computer isolated from your regular network. I secured mine by limiting access only to the 192.168.* addresses provided within my test network; it’s also running on a test server outside the regular network I usually work on.

I’ll talk about more attractive methods of serving up eBooks in my next post.


Sustainable Technology

My article on Sustainable Technology for libraries just went live on the ALIA blog.

Take a look!