Advanced Information Management for Job Searching

This post is a collection of tools and a workflow I used for my latest job search.

Sources

My job search was focused on the Greater Toronto Area.

Indeed.ca – a job search tool. It has its good points and its bad points, but the best part of Indeed is… RSS feeds!

My Indeed.ca feeds were searches for –

  • Every college and university in the GTA as the ‘Employer’
  • Librarian
  • Knowledge Management
  • Document Management
  • Records Management
  • Instructional Designer

INALJ.com Ontario – Great job aggregator tool, but no RSS feeds.

University of Toronto iSchool Job Board – Great job aggregator tool, and it has an RSS feed

The Partnership Job Board

Code4Lib Job Board

McGill SIS Job Listserv

IFLA LibJobs Listserv

Tools

Feedly

Instapaper

Google Documents

LinkedIn

Page2RSS

Feed43

Zapier

Gmail


Setup

At CNA-Q, I’ve been operating under contract-based employment for all six years I’ve been here, so I’ve kept my ear to the ground about jobs in Canada. I signed up for the McGill SIS Jobs Board early on, and I push all of those emails into a folder in my Gmail labelled ‘Job Listings’.

As I started actively planning for my move from CNA-Q, I went to Indeed and set up all of my RSS feeds for jobs. I knew that I was looking for an academic library job, so I set up alerts for every college and university in the Toronto area. I then created general searches for:

  • Librarian
  • Knowledge Management
  • Document Management
  • Records Management
  • Instructional Designer

I put all of these searches into my RSS reader. I like Feedly and use it to monitor a lot of news and library blogs. I added all of these searches to a folder called ‘Job Search’.

The U of T job board has an excellent RSS feed, so it was added to Feedly too.

I knew of INALJ, but it has a frustrating interface. More importantly, it has no RSS feeds. I started using the Page2RSS service to grab daily changes to the INALJ Ontario page, but that got annoying after a while and I started looking for an option that would let me identify individual jobs as they were posted. At this point I came across Feed43. Feed43 setup is a little beyond the scope of this post, but it allows you to identify sections of repeating HTML on any webpage and turn them into a properly formatted RSS feed.
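
If you’re curious what that repeating-HTML-to-RSS idea looks like in practice, here is a rough sketch in Python rather than in Feed43 itself (Feed43 is a hosted service with its own pattern syntax). The page URL and the CSS selectors below are hypothetical placeholders, not INALJ’s actual markup.

import requests
from bs4 import BeautifulSoup
from xml.sax.saxutils import escape

PAGE_URL = "https://example.com/ontario-jobs"  # placeholder for the page being watched

html = requests.get(PAGE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

items = []
for posting in soup.select("div.job-posting"):  # hypothetical repeating HTML block
    link = posting.select_one("a")
    if link is None:
        continue
    items.append(
        "<item><title>%s</title><link>%s</link></item>"
        % (escape(link.get_text(strip=True)), escape(link.get("href", PAGE_URL)))
    )

rss = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<rss version="2.0"><channel>'
    "<title>Job postings</title><link>%s</link>"
    "<description>Individual postings scraped from the page</description>"
    "%s</channel></rss>" % (escape(PAGE_URL), "".join(items))
)

# Serve jobs.xml from any web space and point your RSS reader at it
with open("jobs.xml", "w", encoding="utf-8") as fh:
    fh.write(rss)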

I keep both the individual job Feed43 feed and the Page2RSS feed live in Feedly.

I then subscribed by email to the IFLA job board, the Partnership job board and the Code4Lib Job Board.

This left me checking two locations. I’m always on Feedly, but I don’t check my email as often. I went looking for a tool to turn email into an RSS feed and found Zapier. If you’re familiar with IFTTT, then you grasp the basic idea of Zapier: it provides a very simple graphical way to connect services that wouldn’t otherwise talk to each other. Zapier let me create an RSS feed for my Gmail jobs folder, and that feed was plugged into Feedly.
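
Zapier does all of this through a point-and-click ‘zap’, but the underlying idea is simple enough to sketch: read a Gmail label over IMAP and emit the message subjects as RSS items. This is only an illustration, not how Zapier works internally; the address, app password and label name are placeholders, and Gmail requires an app password (or OAuth) for IMAP access.

import email
import imaplib
from email.header import decode_header, make_header
from xml.sax.saxutils import escape

USER = "me@example.com"        # placeholder account
APP_PASSWORD = "app-password"  # placeholder credential
LABEL = "Job Listings"         # the Gmail label to watch

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(USER, APP_PASSWORD)
imap.select('"%s"' % LABEL, readonly=True)  # Gmail labels show up as mailboxes

_, data = imap.search(None, "ALL")
items = []
for num in data[0].split()[-20:]:  # the 20 most recent messages
    _, msg_data = imap.fetch(num.decode(), "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    subject = str(make_header(decode_header(msg.get("Subject", ""))))
    items.append("<item><title>%s</title></item>" % escape(subject))
imap.logout()

rss = (
    '<?xml version="1.0"?><rss version="2.0"><channel>'
    "<title>%s</title><link>https://mail.google.com/</link>"
    "<description>Messages from a Gmail label</description>"
    "%s</channel></rss>" % (escape(LABEL), "".join(items))
)
print(rss)  # write this out wherever your feed reader can reach it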

Workflow

Every morning, I’d read through the postings from all of these RSS feeds on Feedly. I’d make very quick decisions on whether to look further at a job, and skip the majority of the posts that came up.

When I decided to look closer at a job posting, I’d open it up and read the post briefly. If it looked reasonable and matched my skill set, I saved it to Instapaper using the share function on my iPad or the plugin in Chrome, depending on where I was reading it.

Once a day, after work was over, I’d review the selection of jobs I saved to Instapaper. Using this method, I found between two and five reasonable jobs per week that matched my skill set and experience. I applied for about 95% of the jobs I saved to Instapaper. I often printed these out and highlighted keywords in each job posting.

Writing Cover Letters

The University of Toronto has a standard job application format where you email a single file containing a cover letter, resume and references. While each institution has its own standards for how it receives documents, the University of Toronto’s standard application format has some real benefits for the job applicant.

  • It keeps your customization within a single document.
  • It provides a way to collect information in one place if there are any special requirements.
  • It limits the number of files you have to look at.

I do all of my word processing in Google Docs. The U of T format allowed me to have application ‘packages’. If the potential employer was using a particularly stringent format, I could print single pages to PDF and then upload them into the individual Applicant Tracking Systems.

The standard advice for job hunting is to customize your application for every employer. That is wise advice, but if you try to write each application from scratch, you will find yourself with time for little else (even at two to five applications per week).

I started by very carefully applying for jobs, and re-writing my application for each position. I then approached some very wise friends, including someone who works as a recruiter and a friend at the University of Toronto. These people helped me refine my resume and cover letter to something short, effective and well developed.

After I had a ‘template’ for my cover letters down, with input from working librarians and my recruiter friend, I started customizing my letters slightly less. I would start with a previously written application package for a similar position, then customize each cover letter and resume based on keywords.

Here are some of the things I did to minimize errors.

  • I only used the name of the institution and the name of the position in the first line of the application package.
  • I read through the completed cover letter forwards, then backwards.
  • I minimized customization to my resume to only ‘critical’ requirements for the job. ‘Optional’ requirements were covered in my cover letter.

Any extra documentation, like salary requirements, was placed on sheets after the completed ‘package’. I could tell at a glance whether an application package had this extra documentation because it ran to more than four pages.

Please contact me @brettlwilliams if you’d like to see examples of my application “package” format, or if you would like to use the RSS feeds I created for INALJ Ontario.


Related Editions and Associated Information in Integrated Library Systems

This is based on a request from Tejaswi Tenneti in relation to an answer I gave a while ago here: Brett Williams’s answer to Ontologies: Is there an Ontology that models Enterprise products and associated knowledge articles?

This post will walk you through some of the advanced information processing that goes on in large library systems.

The basis of library records is a standard interchange format called the MARC record. The MARC record is an ISO standard that was originally developed by Henriette Avram at the Library of Congress in the 1960s.

MARC is an interchange format: a way to transfer information both offline and online. It was originally developed for tape-based processing of information, long before ARPANET was a twinkle in a CRT screen.

There are several standard MARC formats, as well as several different implementations of MARC, both ASCII and UTF-8 encoded. For simplicity, I’m just going to show you what a bibliographic record looks like. A bibliographic record holds the information about a single item.

Here are the first couple of lines of a MARC record I’ve cracked open with a tool so you can see the structure of the record.

=LDR 01437nam 2200373 a 4500
=001 9781849729390
=003 Credo
=005 20110223080808.0
=006 m\\\\\\\\d\\\\\\\\
=007 cr\cn|||||||||
=008 110211r20032011enka\\\\od\\eng\d
=020 \\$a9781849729390 (online)
=020 \\$z9780007165414 (print)
=020 \\$z0007165412 (print)
=035 \\$a(OCoLC)703091350
=035 \\$a(CaBNvSL)slc00226357
=035 \\$a(Credo)hcdquot2003
=040 \\$aCaBNvSL$cCaBNvSL$dCaBNvSL
=050 \4$aPN6081$b.D495 2003eb
=082 04$a080$222
=245 00$aCollins dictionary of quotations$h[electronic resource].

The raw record looks like this. Quora can’t even display some of the critical separation markers used. Like I mentioned, this was developed in the 1960s.

01437nam 2200373 a 45000010014000000030006000140050017000200060019000370070015000560080041000710200027001120200026001390200023001650350021001880350026002090350023002350400030002580500025002880820012003132450061003252500025003862600080004113000063004915060060005545200060006145300037006745380036007115880054007476500039008016550022008407100027008627760046008898560128009359781849729390Credo20110223080808.0m d cr cn|||||||||110211r20032011enka od 000 0 eng d a9781849729390 (online) z9780007165414 (print) z0007165412 (print) a(OCoLC)703091350 a(CaBNvSL)slc00226357 a(Credo)hcdquot2003 aCaBNvSLcCaBNvSLdCaBNvSL 4aPN6081b.D495 2003eb04a08022200aCollins dictionary of quotationsh[electronic resource].

And yes, before you ask, there is an XML version. It’s still catching on, but it’s looking like the future interchange format.

So, here’s my workflow. I get a new book in the library. I take a look at it, and then I search for the MARC record in another library, at the Library of Congress, or through a company called OCLC that has tools that make this process very quick. I download the record to my computer and open it up using tools in my ILS (Integrated Library System). I check over the MARC record. It doesn’t have to be exact, but I do check the ISBN, the title, the author and a few other points. I also have tools that check the author (is it Chester M Arthur or Chester M. Arthur?), the title and some other fields to ensure consistency.
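
If you want to see what that checking step looks like in code, here is a minimal sketch using pymarc, a common open-source MARC library, rather than the tools built into my ILS. The file name is a placeholder and the checks are only illustrative.

from pymarc import MARCReader

def first_subfield(record, tag, code):
    """Return the first subfield value for a tag/code pair, or None."""
    for field in record.get_fields(tag):
        values = field.get_subfields(code)
        if values:
            return values[0]
    return None

with open("downloaded_record.mrc", "rb") as fh:  # placeholder file from OCLC or LC
    for record in MARCReader(fh):
        title = first_subfield(record, "245", "a")   # title statement
        author = first_subfield(record, "100", "a")  # main entry, personal name
        isbns = [v for f in record.get_fields("020") for v in f.get_subfields("a")]
        print("Title :", title)
        print("Author:", author)
        print("ISBNs :", ", ".join(isbns))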

Then, I tell my ILS to add it to the catalog. At this point, the MARC record is broken up and put into tables in a database. My current ILS runs on Oracle, others use MySQL.

When the record is put into the database, it’s checked against the current holdings. If I already have this exact edition of the Collins Dictionary of Quotations, it will show up as another item, identical to the first. If I have an older edition, my new record will show up as another item, with a different ISBN and other details, and the ILS will show a unified record when I look up ‘Collins Dictionary of Quotations’, with both editions identified.

Is the item written by Mark Twain, who’s really Samuel Clemens? That decision got made years ago, so if you search the Harvard Library or the library in Minot, North Dakota, you’ll get the same answer. Were there 15 different editions with different contents? There’s a rule for that.

Bringing this back to industry, say you are taking a product through its lifecycle. In R&D it’s Project Wiffleball. When it moves over to Product Development, it becomes Project Unicorn E456. When you release it to the public it’s called the E-Nebulizer, of which you release versions E999, E678, E469, ET689 and SV368. What a library system could do is collate the information from

Project Wiffleball
Project Unicorn E456
E-Nebulizer E999
E-Nebulizer E678
E-Nebulizer E469
E-Nebulizer ET689
E-Nebulizer SV368

and show it all in the same place, on a single screen, with no duplication, and with clear links to knowledge objects that relate generally to the project and others that relate to specific versions.
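
Here is a toy sketch of that collation step in code. The ‘authority’ table plays the role a catalogue’s work-level record plays: every alias the product picks up over its lifecycle resolves to one entry, while the version-level links are preserved. All of the names and knowledge objects are the hypothetical ones from the example above.

from collections import defaultdict

# alias -> canonical product (the "decided once, used everywhere" mapping)
AUTHORITY = {
    "Project Wiffleball": "E-Nebulizer",
    "Project Unicorn E456": "E-Nebulizer",
    "E-Nebulizer E999": "E-Nebulizer",
    "E-Nebulizer E678": "E-Nebulizer",
    "E-Nebulizer E469": "E-Nebulizer",
    "E-Nebulizer ET689": "E-Nebulizer",
    "E-Nebulizer SV368": "E-Nebulizer",
}

# knowledge objects tagged with whatever name was current when they were written
knowledge_objects = [
    ("Project Wiffleball", "R&D lab notebook"),
    ("Project Unicorn E456", "Design specification v3"),
    ("E-Nebulizer E678", "Service bulletin 12"),
    ("E-Nebulizer SV368", "Marketing datasheet"),
]

collated = defaultdict(list)
for name, item in knowledge_objects:
    work = AUTHORITY.get(name, name)     # unrecognised names stand on their own
    collated[work].append((name, item))  # keep the version-specific link

for work, items in collated.items():
    print(work)                              # one unified entry, no duplication
    for name, item in items:
        print("  %-22s %s" % (name, item))   # knowledge tied to a specific version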

Now, this is all done within the ILS. I can also pull information from outside sources to enrich the contents of the bibliographic record.

Here’s a record from my catalog.

http://library.cna-qatar.edu.qa/?itemid=|marc-qatar|000060791

That section in the middle is the MARC record from my catalog. On the right is the book cover, drawn from a third-party service (Syndetics). The display works by grabbing the ISBN and running it through a collation service called xISBN to identify all of the editions. If there is a cover for the given ISBN, that cover is shown; if Syndetics doesn’t have it, I get another edition’s cover.
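
Here is a rough sketch of that cover-lookup logic. The two URLs are placeholders standing in for Syndetics and an xISBN-style edition-collation service; the real services have their own URL formats and require accounts, so treat this purely as an illustration of the fallback behaviour.

import requests

COVER_URL = "https://covers.example.com/%s.jpg"        # placeholder cover service
EDITIONS_URL = "https://editions.example.com/%s.json"  # placeholder xISBN-style service

def cover_for(isbn):
    """Return a cover URL for this ISBN, falling back to another edition of the same work."""
    candidates = [isbn]
    try:
        # ask the collation service for every ISBN that represents an edition of this work
        resp = requests.get(EDITIONS_URL % isbn, timeout=10)
        candidates += resp.json().get("isbns", [])
    except requests.RequestException:
        pass  # no collation service available, so just try the ISBN we have
    for candidate in candidates:
        url = COVER_URL % candidate
        if requests.head(url, timeout=10).status_code == 200:
            return url  # the first edition that actually has a cover wins
    return None

print(cover_for("9781849729390"))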

Moving on down, there’s the full table of contents, also drawn from Syndetics. This is indexed and searchable along with all of the main record information. Further down, reviews of the book are also available, also keyword-indexed and searchable.

All right! My library is very basic, so let me show you something a little more complex.

Here’s the University of Southern Queensland Library in Toowoomba, about 3 hours from Brisbane. I had the pleasure of talking quite extensively with their IT team a couple of weeks ago at a conference.

Take a look at this record

http://library.usq.edu.au/Record/vtls000420789

You can see the Syndetics book covers over on the left hand side of the screen. Now, check out the Similar Items link.

Using a relevance algorithm, a list of similar works is pulled up and integrated with the individual record. This comes from a database maintained in-house by their IT department; it’s part of the library database that serves up this finding aid.

OK, let’s flip over to USQ’s mobile interface.
http://m.library.usq.edu.au/
Now try that record link again; you’ll have the right cookie to view the mobile interface.

Check out the ‘Show on Map’ link. This will load a graphic, sized for the iPhone, that indicates the location of the book. It comes from a database maintained by their IT department, separate from the library database that serves this finding aid.

OK, let’s hop back a little closer to home.

The San Diego Public Library has included QR codes in their book records using the Google Charts API.

http://libpac.sdsu.edu/search~S0?/XTesting&SORT=D/XTesting&SORT=D&SUBKEY=Testing/1%2C13291%2C13291%2CB/frameset&FF=XTesting&SORT=D&1%2C1%2C
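
The trick is simple enough to sketch. The Google Charts API (since retired by Google) returned a QR code image for whatever you put in the chl parameter, so the record display just embeds an image whose URL encodes the record’s own address. The record URL below is a placeholder, not one of the catalogue’s real links.

from urllib.parse import urlencode

record_url = "https://catalog.example.org/record/000060791"  # placeholder record page

qr_img_src = "https://chart.googleapis.com/chart?" + urlencode({
    "cht": "qr",        # chart type: QR code
    "chs": "150x150",   # image size in pixels
    "chl": record_url,  # the data to encode
})

# drop this <img> tag into the record display template
print('<img src="%s" alt="QR code for this record">' % qr_img_src)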

If you’ve made it this far, you’ve got a real interest in displaying information. As I mentioned in my original answer, this is a very rough introduction to a complex topic. If you’re really interested and are looking for consultants, I have a few contacts in knowledge management, libraries and the systems behind them who can give you much more help.

Cross-posted from Quora

Lessons Learned from using an Appreciative Inquiry to define a presentation

I used the interview idea from Appreciative Inquiry (AI) to define the scope of my presentation during CNA-Q’s PD days. The topic was Apps in the Classroom, and you can find the guide here: http://cna-qatar.libguides.com/apps

My intention was to use the interviews and summary sheets to define how much information, and what kind of information, was presented during my session. The activities went like this:

20 minutes for interviews & summary sheets

10 minutes discussion on themes that emerged.

I was prepared for a longer presentation on using Apps in the Classroom, and I was hoping to talk on a higher level about the role of smartphones in education.

What emerged from the interviews and the summary sheets was that the session participants were interested in the following:

1. Demonstrations

2. Where to find applications.

3. How to gain more experience with mobile devices.

The results of this initial inquiry made me shift my presentation. We covered the difference between web apps and native apps, what you could do with a device (as a mouse, as a presentation device, as a dictionary) and how to find free applications.

In the end, I directed everyone to the website for more information.

I learned from this that the people in the room knew what they wanted to get out of the presentation; I’m not the expert in what they need. I poured a lot of my higher-level ideas into the website for the presentation, and it’s seen some significant use already.

Crossposted at Appreciative Inquiry Positive Changes Ning


Cop Shows & the New Business Organization

I watch cop shows with a kind of morbid fascination. Law & Order: SVU, NCIS, Criminal Minds and their offshoots require minimal mental effort, portray the world in simplistic terms and, judging by their proliferation on network television, seem to be doing very well.

I think they are important in another way. I think they are showing an example of the new business organization.

Let me start with an example. I watched the premiere of Criminal Minds: Suspect Behavior. The pilot was not good literature, even if it was good TV. If I were asked to describe the plot, I’d retitle it ‘Missing White Girl FBI Ninjas’.

That being said, while I was chuckling over the inanity of the plot, I also realized that I wanted to work with the people portrayed on the screen.

  • They were an independent team working on clearly defined projects (cases).
  • Their organization was horizontal, with only small concessions made to the structure of the organization to which they belonged.
  • Each member of the team was a professional.
  • Each member of the team had incredibly high ethical standards.
  • The team respected each other, and each other’s abilities.

Step back and take a look at CSI, Law & Order, Law & Order: SVU, NCIS, NCIS: LA, CSI: Miami and CSI: New York. All of these shows have the same pattern!

What shows can you think of that portray a more realistic office structure, something we’re all more accustomed to? The Office is the first that comes to mind.

So why are we being entertained by the professional teams we want to be a part of while living in the dystopian comedy of the traditional organization?

Cross-posted from Quora

On Visualizing Projects Beyond Your Ken – Freemind

Don’t you love that phrase? Beyond your ken? It’s a Scottish idiom meaning beyond your ability to comprehend.

It’s very much how I’m feeling in approaching some of the projects here at CNA-Q. Just doing edits to the OPAC requires editing several files at the same time, then auto-generating the HTML, testing it with (at least) two different browsers and tracking those changes over a number of days. This is in addition to keeping track of admin passwords, login details, IP addresses, and even the structure and location of folders on the server.

I’ve been depending very heavily on Freemind to do a lot of this. It’s a mind-mapping tool, available as a portable app, which allows me to install it when and where I feel like it. I’ve mapped the entire library network, with our library server as the center, as well as two sub-maps of users and employees. Here is just a sample: a map of our online databases.

Online Databases

This is just a screengrab; from the little icons you can see there are notes about each database, as well as hyperlinks to the library admin pages. The color coding is temporary and relates to our implementation of 360 Link.

Green is ‘Not done Yet’. Yellow is ‘Working’. Blue is ‘Full Text and doesn’t need it’.

There is no way I can keep all these connections in my head. Notes are helpful, but being able to map out the different systems within the library makes a huge difference. I keep Freemind open behind my work window; when I need to navigate to a new area of the network to work on another project, I pop it open and click on the links embedded there.

Enterprise Content Management – Magnolia – Introduction & Resources

What is Enterprise Content Management?

It’s a centralized way to identify, capture, tag, control and publish information in your business. That’s also one of the driest sentences I’ve ever written.

ECM, when it’s done well, is a unified way to work with a wild array of different types of information. With enough thought, a good ECM setup acts like the operating system on your desktop computer: it handles almost any program, any video, any audio and any document. All of this content can be accessed through a common interface in which all employees can comment, collaborate on new documents and preserve work for regulatory, legal and strategic reasons.

I’ve decided to chronicle an installation and setup of Magnolia, an open-source Enterprise Content Management system, on my home network as the next direction for this blog. I’ve used KnowledgeTree extensively and have it installed on my home network. I thought it would be of use to the larger KM community to see a from-the-beginning installation of an open-source CMS from the perspective of a library professional. I am an experienced Windows user, but I am not familiar with commercial ECM installations like Microsoft SharePoint. I also don’t have extensive web-server experience. I’ll be documenting my problems, my successes and my problem-solving methods. Magnolia is a mature product with a large user base, and this experience should provide some value to anyone at a small to medium-sized business interested in implementing an ECM system.

Here’s some background on Magnolia.

Magnolia bills itself as easy to use. Its use of Java makes it usable across the Mac, Windows and Linux worlds, a definite plus from my perspective. Java does have a tendency to run slowly on older hardware, but a relatively modern computer should have little trouble handling the demands of Magnolia. Owned by Magnolia International, based in Basel, Switzerland, Magnolia boasts a rather large client list including Maserati and Monsanto.

Look for the install post in the next few days!

Resources:

Magnolia

Magnolia Twitter Feed

Magnolia Manual

Magnolia Wiki

If you can’t open it, you don’t own it

I’m a big fan of Make Magazine, the subtitle of which I have shamelessly stolen for this article. The philosophy behind Make is that consumers have a right to know how the products they buy work and where they come from. In furthering this goal, the magazine features a cornucopia of teardowns, rebuilds, unboxing and tinkering. So how does this apply to Knowledge Management and the business world?

Business and government regularly fall into traps with proprietary software. A software product is purchased, deployed and used within a division without much thought to interoperability, future access or alternatives. Within the Canadian government we have a rat’s nest of software that is unable to work across departments, or even within a single division. After enough divisions and agencies have competing software products, it becomes a budgetary and political issue to move to a standard software platform, even for something as simple as desktop publishing. This should be familiar to anyone who has tried to open a WordPerfect document with Microsoft Word, or even a Word 2007 document with Word 2003.

In KM, we make extensive use of intelligent agents: automatic search programs that monitor the internal network or selected data sources and send new information to our clients. If we are limited to a proprietary application or email output for the data we are requesting, it seriously limits the creative uses we can put that intelligent agent to. With XML output, RSS output or even simple text-file logging, however, we can significantly increase the utility of our intelligent agents.

Let me show you a recent example I put together for my desktop. I’m a news junkie and I enjoy reading CNN’s top stories. I have a rule that I don’t put news in my RSS reader, however, because I get overrun with the constant news cycle, especially during the US presidential campaign. As a solution, I embedded a news feed in the background of my desktop using SeriousSamurize, a free system-monitoring program.

It’s a very simple display: an LCD-style screen font with only two lines of text, so only headlines are shown. This way, I can glance at the news when I feel like it and I don’t feel obligated to read every story in my RSS reader.

If there’s any interest, I’d be happy to put together a how-to.

Without free access to CNN’s Top Stories RSS feed, I would never have been able to do this. It’s a quick solution that does not distract me from other work, and I have instant access to the top five news stories of the day.
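
If you want to try something similar without Samurize, pulling the headlines themselves is only a few lines. Here is a minimal sketch using feedparser, a common Python RSS library; the feed address is the one CNN has historically published for its top stories, so swap in whatever feed you prefer.

import feedparser  # third-party: pip install feedparser

FEED_URL = "http://rss.cnn.com/rss/cnn_topstories.rss"  # CNN's top-stories feed

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:  # keep just the five most recent headlines
    print(entry.title)          # hand these to whatever paints your desktop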