Minimalistic Text Android Widget

This is just a quick update on a widget I made for my own use using

Minimalistic Text

Tasker

Google Maps Directions API

The time, day of week, and weather information are all built into the Minimalistic Text program. Transit information was a little bit more complex.
(Screenshot: the Minimalistic Text widget, 2016-10-26)

The transit information circled is produced by two API calls to the Google Maps Directions API through Tasker: one for my trip from the train station to my work bus stop, and one for my trip home. They run every 5 minutes and extract variables from the XML response to the API call, including the bus number and its time of departure, the train name and its time of departure, the length of the trip, and the time I will arrive home.

Tasker pulls the full result for the train station to work bus stop trip and caches it in a variable. I then use Tasker to pull each individual variable out of the XML result cached in that full variable.
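If you want to see the shape of those calls outside Tasker, here is a minimal sketch of the same idea in Python: one transit-mode request to the Directions API, then pulling out the fields the widget displays. The origin, destination and API key are placeholders, and the XML element paths are from memory, so they may need adjusting.

# Minimal sketch of one of the two calls: transit directions as XML,
# then the fields the widget shows. Origin/destination/key are placeholders
# and the element paths are assumptions about the XML response layout.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

params = urllib.parse.urlencode({
    "origin": "My train station",        # placeholder
    "destination": "My work bus stop",   # placeholder
    "mode": "transit",
    "departure_time": "now",
    "key": "YOUR_API_KEY",               # placeholder
})
url = "https://maps.googleapis.com/maps/api/directions/xml?" + params

with urllib.request.urlopen(url) as response:
    root = ET.fromstring(response.read())

leg = root.find("./route/leg")
print("Total trip:", leg.findtext("./duration/text"))
print("Arrive:", leg.findtext("./arrival_time/text"))
for step in leg.findall("./step"):
    transit = step.find("./transit_details")
    if transit is None:
        continue  # walking steps carry no transit details
    line = transit.findtext("./line/short_name") or transit.findtext("./line/name")
    print(line, "departs", transit.findtext("./departure_time/text"))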

Then, using Tasker, I format the variables into an easy-to-read (for me) one-glance information screen, with emojis to indicate bus trips, train trips and total time, and icons for home and work.

Some of the additional features I’ve built into this include

  1. A widget that speaks my commute when pressed.
  2. A popup that lists departure times for my commute.
  3. The API call and variable update for this widget are integrated into my 5-minute data widget, which turns my data plan on for 5 minutes.

What I end up with is a very simple, one-glance tool that gives me all the information I need to know for my commute.

Have MLIS, Will Travel – Quick note

I’ve been slowly working out the kinks on my new job-sharing site, Have MLIS, Will Travel. While it is not quite ready for general use right now, it is looking very promising.

In my test phase, since the API calls to the Google Places API return so much data, I’ve been saving and logging the latitude/longitude of the city for each job posting. I’m attaching a quick visualization for a small selection of the data collected over the last month.

https://batchgeo.com/map/d4dbd9868697acca508ca651a0dbdc54
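For the curious, the lookup behind those points is roughly the following. This is a minimal sketch assuming the Places API Text Search endpoint, with the key as a placeholder.

# Rough sketch of how each job posting's city gets a latitude/longitude,
# assuming the Places API Text Search endpoint. The key is a placeholder.
import json
import urllib.parse
import urllib.request

def city_latlng(city, api_key="YOUR_API_KEY"):
    params = urllib.parse.urlencode({"query": city, "key": api_key})
    url = "https://maps.googleapis.com/maps/api/place/textsearch/json?" + params
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    location = data["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]

print(city_latlng("Toronto, ON"))   # e.g. (43.65..., -79.38...)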

 

Advanced Information Management for Job Searching

This post is a collection of tools and a workflow I used for my latest job search.

Sources

My job search was focused on the Greater Toronto Area.

Indeed.ca – a job search tool. It has its good points and its bad points, but the best part of Indeed is… RSS feeds!

My Indeed.ca feeds were searches for –

  • Every college and university in the GTA as the ‘Employer’
  • Librarian
  • Knowledge Management
  • Document Management
  • Records Management
  • Instructional Designer

INALJ.com Ontario – a great job aggregator, but no RSS feeds.

University of Toronto iSchool Job Board – a great job aggregator, and it has an RSS feed.

The Partnership Job Board

Code4Lib Job Board

McGill SIS Job Listserv

IFLA LibJobs Listserv

Tools

Feedly

Instapaper

Google Documents

LinkedIn

Page2RSS

Feed43

Zapier

Gmail

 

Setup

At CNA-Q, I’ve been operating under contract-based employment for all six years I’ve been here, so I’ve kept my ear to the ground about jobs in Canada. I signed up for the McGill SIS Jobs Board early on, and I push all of those emails into a folder in my Gmail labelled ‘Job Listings’.

As I started actively planning for my move from CNA-Q, I went to Indeed and set up all of my RSS feeds for jobs. I knew that I was looking for an academic library job, so I set up alerts for every college and university in the Toronto area. I then created general searches for:

  • Librarian
  • Knowledge Management
  • Document Management
  • Records Management
  • Instructional Designer

I put all of these searches into my RSS reader. I like Feedly, and use it to monitor a lot of news and library blogs. I added all of these searches to a folder called ‘Job Search’.

The U of T job board has an excellent RSS feed, so it was added to Feedly too.

I knew of INALJ, but it has a frustrating interface. More importantly, it has no RSS feeds. I started using the Page2RSS service to grab daily changes to the INALJ Ontario page, but that got annoying after a while and I started looking for an option that would let me identify individual jobs as they are posted. At this point, I came across Feed43. Feed43 setup is a little beyond the scope of this post, but it allows you to identify sections of repeating HTML on any webpage and turn them into a properly formatted RSS feed.
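If you are curious what Feed43 is doing under the hood, here is a rough sketch of the idea in Python. The page URL and link pattern are hypothetical stand-ins; the real INALJ markup will differ, and identifying that repeating block is exactly the part Feed43 lets you do by hand.

# Toy version of the Feed43 idea: find the repeating job links on a page
# and emit them as RSS items. URL and pattern are hypothetical stand-ins.
import re
import urllib.request
from xml.sax.saxutils import escape

PAGE_URL = "https://inalj.com/ontario/"   # hypothetical page address
with urllib.request.urlopen(PAGE_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

job_link = re.compile(r'<a href="(?P<link>[^"]+)"[^>]*>(?P<title>[^<]+)</a>')
items = "".join(
    "<item><title>{}</title><link>{}</link></item>".format(
        escape(match.group("title").strip()), escape(match.group("link")))
    for match in job_link.finditer(html))

print('<rss version="2.0"><channel><title>INALJ Ontario</title>'
      + items + "</channel></rss>")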

I keep both the individual job Feed43 feed and the Page2RSS feed live in Feedly.

I then subscribed by email to the IFLA job board, the Partnership job board and the Code4Lib Job Board.

This put me in a situation where I was checking two locations. I’m always on Feedly, but I don’t check my email as often. I went looking for a tool to turn email into an RSS feed and I found Zapier. If you’re familiar with IFTTT, then you grasp the basic idea of Zapier. Zapier provides a very simple graphical way to connect services that wouldn’t otherwise talk to each other. It allowed me to create an RSS feed for my Gmail jobs folder, and that feed was plugged into Feedly.

Workflow

Every morning, I’d read through the postings from all of these RSS feeds on Feedly. I’d make very quick decisions on whether to look further at a job, and skipped the majority of the posts that came up.

When I decided to look closer at a job posting, I’d open it up and read the post briefly. If it looked reasonable and matched my skill set, I saved it to Instapaper using the share function on my iPad or the plugin in Chrome, depending on where I was reading it.

Once a day, after work was over, I’d review the selection of jobs I had saved to Instapaper. Using this method, I found between two and five reasonable jobs per week that matched my skill set and experience. I applied for about 95% of the jobs I saved to Instapaper. I often printed these out and highlighted keywords in each job application.

Writing Cover Letters

The University of Toronto has a standard job application format where you email a single file containing a cover letter, resume and references. While each institution has its own standards for how to receive documents, the University of Toronto standard application format has some real benefits for the job applicant.

  • It keeps your customization within a single document.
  • It provides a way to collect information in one place if there are any special requirements.
  • It limits the number of files you have to look at.

I do all of my word processing in Google Docs. The U of T format allowed me to have application ‘packages’. If the potential employer was using a particularly stringent format, I could print single pages to PDF and then upload them into the individual Applicant Tracking Systems.

The standard advice for job hunting is to customize your application for every employer. That is wise advice, but if you try to write each application from scratch, you will find yourself with no time for anything else (even at 2-5 applications per week).

I started by very carefully applying for jobs, and re-writing my application for each position. I then approached some very wise friends, including someone who works as a recruiter and a friend at the University of Toronto. These people helped me refine my resume and cover letter to something short, effective and well developed.

After I had a ‘template’ for my cover letters down, with input from working librarians and my recruiter friend, I started customizing my letters slightly less. I would start with a previously written application package for a similar position, then customize each cover letter and resume based on keywords.

Here are some of the things I did to minimize errors.

  • I only used the name of the institution and the name of the position in the first line of the application package.
  • I read through the completed cover letter forwards, then backwards.
  • I minimized customization to my resume to only ‘critical’ requirements for the job. ‘Optional’ requirements were covered in my cover letter.

Any extra documentation, like salary requirements, was placed on sheets after the completed ‘package’. I could tell at a glance whether the application package had this extra documentation because it was more than four pages.

Please contact me @brettlwilliams if you’d like to see examples of my application “package” format, or if you would like to use the RSS feeds I created for INALJ Ontario.


ISBN Title Lookup Google Doc Spreadsheet

We needed to do a quick inventory of some discarded books. While we could pull the majority of the information from our catalog, we had some donations and other books for which we had no quick method of getting title data.

We’ll scan the ISBN barcodes in using a barcode scanner.

This uses the ISBNdb API and a quick bit of importXML.

There is a 25 ISBN/day limit on this API key for testing. Google limits importXML to 50/sheet. Please get your own ISBNdb account to implement this.

https://isbndb.com/

Here’s the bit of code for the spreadsheet. Copy the example below.

=importXML( concatenate( "http://isbndb.com/api/v2/xml/EV31C4LJ/books?q=", A2), "//title")

=importXML( – grabs the XML response from the ISBNdb API

concatenate( – assembles the API call URL

http://isbndb.com/api/v2/xml/EV31C4LJ/books?q= – the initial API call string. It specifies the v2 API, an XML response, the API key and the source we’re pulling from (books)

A2 – the cell we’re pulling the ISBN from

“//title” – the section of the XML response that we want to put in the cell, in this case the title of the book.
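One small refinement I’d suggest: if you fill the formula down a column and some ISBNs come back with no match, importXML leaves an error in the cell. Wrapping the call in IFERROR keeps the sheet readable. A minimal variant, assuming you substitute your own ISBNdb key for YOUR_KEY:

=IFERROR( importXML( concatenate( "http://isbndb.com/api/v2/xml/YOUR_KEY/books?q=", A2), "//title"), "not found")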

The Architecture of the Next Gen ILS

Marshall Breeding’s Systems Librarian column in the October 2010 edition of Computers in Libraries has been on my mind a lot recently. I’ve spent the last two years integrating our ILS with a hosted Aquabrowser instance and Summon, and I was pretty surprised at the amount of ‘hacking’ that had to go on to make things work.

The primary method we are using in the library world to feed these services is MARC 21 export. Why don’t these services support our existing search infrastructure, Z39.50?

Is this a speed issue? A caching issue? It just seems quite silly to depend on a frail script to upload large files on a regular basis when the infrastructure to support queries already exists and has been the standard for years.

Barcodes are dead. Long live the barcode!

I’ve noticed a mindless consensus among the geek literati when talking about QR Codes. Often a blog post will discuss the various uses they are put to, then end with the phrase ‘But I think that NFC will replace them in the near future’.

Bullshit.

I have an RFID system in my library. My long term experience in libraries is with barcodes. I’ve already done the QR Code vs NFC dance. While both have their advantages, I think the two technologies are complementary, not competitive. And barcodes have the advantage.

The cost of production for barcodes is ink and paper. They have a standard protocol for communication between the reader and the computer. I can plug a barcode reader into a USB port and scan off the raw numbers on the barcodes into Excel.

I can make a barcode whenever I want using standard office products. If I’m enthusiastic, I can decode what a barcode means with a Mark 1 Eyeball. This simplicity of communication between printed text and computer systems is the major advantage of barcodes.

The major problems with barcodes are bandwidth and data storage. Laser scanning is a little slow, and the few bits you can store in a printed barcode are not really sufficient.

Using barcodes as identifiers, then pulling the rest of the information from a networked system solves the bandwidth and information storage problem.
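As a toy illustration of that identifier-plus-network model: the scanner just types the number, and everything else comes from a lookup. The catalogue entries below are made up.

# Toy illustration: the barcode is only an identifier; the details come
# from a networked lookup (faked here with a dictionary of made-up items).
catalogue = {
    "31234000123456": {"title": "Collins Dictionary of Quotations", "shelf": "REF PN6081"},
    "31234000987654": {"title": "Computers in Libraries, Oct 2010", "shelf": "PERIODICALS"},
}

barcode = input("Scan item: ").strip()   # the scanner acts like a keyboard
print(catalogue.get(barcode, "Unknown item - check the ILS"))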

RFID tags can’t win by simply replicating what barcodes do. If I’m using an NFC tag to point to a URL, what’s the point? The $1 NFC sticker pointing to that URL is redundant when my printer can duplicate it as part of the standard design for a few cents in design fees and ink.

RFID will work in a few crucial areas.

  1. The offline data requirements are larger than what can be accommodated in a barcode.
  2. You need to avoid the one-at-a-time scan bottlenecks of barcodes.
  3. You need to track the location of a tagged item at all times.
  4. You have two networked devices (say POS and smartphone) that need to communicate securely only in a specific place. (and Facecash proves that QR codes work just fine for this)

RFID stickers will not replace the barcode.

NFC Point of Sale devices have promise, and are actually in use already with many tap-to-pay services. Industry and government will continue to use RFID in the locations where it works. NFC will probably revolutionize device-to-device communication.

For you and me, the easiest way to get information from print to computer will still be some variation of the barcode.

Related Editions and Associated Information in Integrated Library Systems

This is based on a request from Tejaswi Tenneti in relation to an answer I gave a while ago here: Brett Williams’s answer to Ontologies: Is there an Ontology that models Enterprise products and associated knowledge articles?

This post will walk you through some of the advanced information processing that goes on in large library systems.

The basis of library records is a standard interchange format called the MARC record. MARC is an ISO standard that was originally developed by Henriette Avram at the Library of Congress in the 1960s.

MARC is an interchange format: a way to transfer information both offline and online. It was originally developed for tape-based processing of information, long before ARPANET was a twinkle in a CRT screen.

There are several standard MARC formats, as well as several different implementations of MARC, both ASCII and UTF-8 encoded. For simplicity, I’m just going to show you what a bibliographic record looks like. A bibliographic record is filled in with the information about a single item.

Here’s the first couple of lines of a MARC record I’ve cracked open with a tool to show you the structure of the record.

=LDR 01437nam 2200373 a 4500
=001 9781849729390
=003 Credo
=005 20110223080808.0
=006 m\\\\\\\\d\\\\\\\\
=007 cr\cn|||||||||
=008 110211r20032011enka\\\\od\\eng\d
=020 \\$a9781849729390 (online)
=020 \\$z9780007165414 (print)
=020 \\$z0007165412 (print)
=035 \\$a(OCoLC)703091350
=035 \\$a(CaBNvSL)slc00226357
=035 \\$a(Credo)hcdquot2003
=040 \\$aCaBNvSL$cCaBNvSL$dCaBNvSL
=050 \4$aPN6081$b.D495 2003eb
=082 04$a080$222
=245 00$aCollins dictionary of quotations$h[electronic resource].

The raw record looks like this. Quora can’t even display some of the critical separation markers used. Like I mentioned, this was developed in the 1960s.

01437nam 2200373 a 45000010014000000030006000140050017000200060019000370070015000560080041000710200027001120200026001390200023001650350021001880350026002090350023002350400030002580500025002880820012003132450061003252500025003862600080004113000063004915060060005545200060006145300037006745380036007115880054007476500039008016550022008407100027008627760046008898560128009359781849729390Credo20110223080808.0m d cr cn|||||||||110211r20032011enka od 000 0 eng d a9781849729390 (online) z9780007165414 (print) z0007165412 (print) a(OCoLC)703091350 a(CaBNvSL)slc00226357 a(Credo)hcdquot2003 aCaBNvSLcCaBNvSLdCaBNvSL 4aPN6081b.D495 2003eb04a08022200aCollins dictionary of quotationsh[electronic resource].

And yes, before you ask, there is an XML version. It’s still catching on, but it’s looking like the future interchange format.

So, here’s my workflow. I get a new book in the library. I take a look at it, and then I search for the MARC record in another library, at the Library of Congress, or from a company called OCLC that has tools that make this process very quick. I download the record to my computer and open it up using tools in my ILS (Integrated Library System). I check over the MARC record. It doesn’t have to be exact, but I do check the ISBN, the title, the author and a few other points. I also have tools that check the author (is it Chester M Arthur or Chester M. Arthur?), the title and some other fields to ensure consistency.
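As an aside, the same quick checks can be scripted outside the ILS. Here is a rough sketch using the pymarc library (not something my ILS uses, just an illustration); records.mrc is a placeholder file of downloaded records.

# Rough sketch of the same sanity checks with pymarc; records.mrc is a
# placeholder file of MARC records downloaded from LC, OCLC, etc.
from pymarc import MARCReader

with open("records.mrc", "rb") as fh:
    for record in MARCReader(fh):
        titles = record.get_fields("245")
        title = titles[0]["a"] if titles else "(no title)"
        authors = record.get_fields("100")
        author = authors[0]["a"] if authors else "(no author)"
        isbns = []
        for field in record.get_fields("020"):
            isbns.extend(field.get_subfields("a"))
        print(title, "|", author, "|", ", ".join(isbns))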

Then, I tell my ILS to add it to the catalog. At this point, the MARC record is broken up and put into tables in a database. My current ILS runs on Oracle, others use MySQL.

When the record is put into the database, it’s checked against the current holdings. If I already have this exact edition of the Collins Dictionary of Quotations, it will show up as another item, identical to the first. If I have an older edition, my new record will show up as another item, with a different ISBN and other details, and the ILS will show a unified record when I look up ‘Collins Dictionary of Quotations’, with both editions identified.

Is the item written by Mark Twain, who’s really Samuel Clemens? That decision got made years ago, so if you search the Harvard Library or the library in Minot, North Dakota you’ll get the same answer. Were there 15 different editions with different contents? There’s a rule for that.

Bringing this back to industry, say you are taking a product through its lifecycle. In R&D it’s Project Wiffleball. When it moves over to Product Development, it becomes Project Unicorn E456. When you release it to the public it’s called the E-Nebulizer, of which you release versions E999, E678, E469, ET689 and SV368. What a library system could do is collate the information from

Project Wiffleball
Project Unicorn E456
E-Nebulizer E999
E-Nebulizer E678
E-Nebulizer E469
E-Nebulizer ET689
E-Nebulizer SV368

and show it all in the same place, on a single screen, with no duplication, and with clear links to knowledge objects that relate generally to the project and others that relate to specific versions.
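As a toy sketch of that collation, think of one canonical ‘work’ record that every alias points back to. The attached document names here are made up for illustration.

# Toy sketch: every project name and released version points back to one
# canonical work record, so any alias lands on the same unified screen.
# The attached document names are made up for illustration.
works = {
    "W0001": {
        "canonical": "E-Nebulizer",
        "aliases": ["Project Wiffleball", "Project Unicorn E456",
                    "E-Nebulizer E999", "E-Nebulizer E678", "E-Nebulizer E469",
                    "E-Nebulizer ET689", "E-Nebulizer SV368"],
        "general_docs": ["design-history.pdf"],
        "version_docs": {"E999": ["e999-service-manual.pdf"]},
    },
}

alias_index = {alias: work_id
               for work_id, work in works.items()
               for alias in work["aliases"]}

def lookup(name):
    """Return the unified work record for any project alias."""
    return works.get(alias_index.get(name))

print(lookup("Project Wiffleball")["canonical"])   # -> E-Nebulizer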

Now, this is all done within the ILS. I can also pull information from outside sources to enrich the contents of the bibliographic record.

Here’s a record from my catalog.

http://library.cna-qatar.edu.qa/?itemid=|marc-qatar|000060791

That section in the middle is the MARC record from my catalog. On the right is the book cover, drawn from a third-party service (Syndetics). It works by grabbing the ISBN and running it through a collation service called xISBN to identify all of the editions. Then, if there is a cover for the given ISBN, that cover is shown. If Syndetics doesn’t have it, I get another edition’s cover.

Moving on down, the full table of contents is also drawn from Syndetics. It is indexed, along with all of the main record information, and is searchable. Further down, reviews of the book are also available, also keyword-indexed and searchable.
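The enrichment flow, very roughly, is: take the record’s ISBN, ask xISBN for the related editions, and try Syndetics for a cover for each edition until one exists. Here is a hedged sketch: get_related_editions stands in for the xISBN call, the Syndetics URL pattern is an assumption rather than the exact service syntax, and the client ID is a placeholder.

# Hedged sketch of the cover enrichment flow. get_related_editions stands
# in for the xISBN collation call; the Syndetics URL pattern and client ID
# are assumptions, not the exact service syntax.
CLIENT_ID = "mylibrary"   # placeholder Syndetics account name

def get_related_editions(isbn):
    """Stub for the xISBN lookup: returns the ISBNs of related editions."""
    return [isbn, "9780007165414", "0007165412"]   # print ISBNs from the record above

def cover_url(isbn):
    # assumed pattern: small cover (SC.GIF) keyed by ISBN and client account
    return ("https://syndetics.com/index.aspx?isbn=" + isbn
            + "/SC.GIF&client=" + CLIENT_ID)

for candidate in get_related_editions("9781849729390"):
    print(cover_url(candidate))   # the catalog uses the first URL that returns an image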

All right! My library is very basic, so let me show you something a little more complex.

Here’s the University of Southern Queensland Library in Toowoomba, about 3 hours from Brisbane. I had the pleasure of talking quite extensively with their IT team a couple of weeks ago at a conference.

Take a look at this record

http://library.usq.edu.au/Record/vtls000420789

You can see the Syndetics book covers over on the left-hand side of the screen. Now, check out the Similar Items link.

Using a relevance algorithm, a list of similar works is pulled up and integrated with the individual record. This comes from a database maintained in-house by their IT department; it’s part of the library database that serves up this finding aid.

OK, let’s flip over to USQ’s mobile interface.
http://m.library.usq.edu.au/
Now try that top link again; you’ll have the right cookie to view the mobile interface.

Check out the ‘Show on Map’ link. This will load a graphic, sized for the iPhone, that indicates the location of the book. It comes from a database maintained by their IT department, separate from the library database that serves this finding aid.

OK, let’s hop back a little closer to home.

The San Diego Public Library has included QR codes in their book records using the Google Charts API.

http://libpac.sdsu.edu/search~S0?/XTesting&SORT=D/XTesting&SORT=D&SUBKEY=Testing/1%2C13291%2C13291%2CB/frameset&FF=XTesting&SORT=D&1%2C1%2C
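Generating those codes is essentially one image URL per record. Here is a sketch, assuming the Google Charts API QR endpoint (cht=qr); the record link is a placeholder.

# Sketch: build a Google Charts QR image URL for a record page, then embed
# it in the catalogue display. The record link is a placeholder.
import urllib.parse

record_url = "http://libpac.sdsu.edu/record=b1234567"   # placeholder record link
chart_url = ("https://chart.googleapis.com/chart?"
             + urllib.parse.urlencode({"cht": "qr", "chs": "150x150", "chl": record_url}))
print('<img src="' + chart_url + '" alt="QR code for this record">')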

If you’ve made it this far, you’ve got a real interest in displaying information. As I mentioned before in my answer, this is a very rough introduction to a complex topic. If you’re really interested, and are looking for consultants, I have a few contacts in knowledge management, libraries and the systems behind them who can give you much more help.

Cross posted from Quora