Linkdump for July 25th through September 21st

An automatically generated list of links that caught my eye between July 25th and September 21st.

Sometime between July 25th and September 21st, I thought this stuff was interesting. You might think so too!

Linkdump for June 25th through July 16th

An automatically generated list of links that caught my eye between June 25th and July 16th.

Sometime between June 25th and July 16th, I thought this stuff was interesting. You might think so too!

DVD/Blu-Ray conversion with text soft subs on Mac OS X

How to convert DVDs or Blu-Rays for personal use on OS X, including OCR conversion of subtitles to text-based .srt files suitable for use as ‘soft’ subtitles (rendered by the video player rather than burned into the image).

Saved here for my own reference, and possibly others’ if they should stumble across it: the easiest workflow I’ve found yet for converting DVDs or Blu-Rays (if you have a Blu-Ray reader, of course) for personal use on OS X, including OCR conversion of subtitles in either VOBSUB (DVD) or PGS (Blu-Ray) format to text-based .srt files suitable for use as soft subtitles, either as a sidecar file or included in the final movie file.

Movie Rip Workflow

The flow diagram to the right gives an overview of the process I’ve landed on. Here’s a slightly more detailed breakdown.

  1. Use MakeMKV to rip the DVD or Blu-Ray disc to an .mkv file (if I run into a stubborn DVD, or one with a lot of multiplexing, I’ll use RipIt to create a disk image first, then run that image through MakeMKV). To save space, you can select only the primary audio track for inclusion, or you can select others if you want other languages or commentary tracks archived as well (though this will require more storage space). I also select all available English-language subtitle tracks, as some discs will include both standard subtitles and subtitles for the hearing impaired or closed captions, which include some extra information on who is speaking and background sounds, or occasionally even transcriptions of commentary tracks.
  2. Use Subler to OCR and export the subtitle files. This takes two runs through Subler to complete.
    1. First run: drag the .mkv file onto Subler, and select only the subtitle track(s). Pop that into the export queue, and after a few minutes of processing (this is when the OCR process happens) Subler will output a tiny .m4v file.
    2. Second run: drag that file back onto Subler, click on the subtitle track, and choose File > Export… to save the .srt file(s). The tiny .m4v file can then be deleted.

    Now, the OCR process is not perfect, and the resulting .srt file(s) are virtually guaranteed to have some errors. How many there are, and how intrusive, depends on the source: Blu-Ray subs seem to come out better than DVD subs (likely because the format’s higher resolution gives the OCR process better-quality text to scan), and DVD subs are also affected by the chosen font and whether or not italics were used. For correction, I use one of two methods.

    1. For a quick-and-dirty “good enough for now” run, I use BBEdit (but just about any other text editor would work) to do a quick spellcheck, identifying common errors and using search-and-replace to fix them in batches (see the command-line sketch after this list).
    2. For a real quality fix, I use Aegisub to go through line-by-line, comparing the text to the original audio, adding italics when appropriate, and so on.

    Of course, these two processes can be combined, done at different times, or skipped entirely; right now, I’m just living with the OCR errors, because I can always go back and use Subler to extract the .srt files for cleanup later on when I have more time.

  3. Use HandBrake to re-encode the .mkv file (which at this point will be fairly large, straight off the source media) into a smaller .m4v file. You can either embed the .srt files at this point, under HandBrake’s ‘Subtitles’ tab, or if you prefer…
  4. …you can use Subler to mux the .srt files into the .m4v: drag the .m4v file from HandBrake onto Subler, drag the .srt file(s) into the window that opens, and then drop that into the queue for final remuxing (optionally, before adding the files to the queue, use Subler’s metadata search tools to add the description, artwork, and other metadata). Then run the queue to output the final file. (A command-line sketch of this encoding step also follows the list.)
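A couple of command-line sketches, for reference. First, the quick-and-dirty OCR cleanup from step 2 can be batched with sed instead of a text editor. The pattern here is purely an illustration (the actual errors depend on your source), and note that the BSD sed shipped with OS X needs the empty '' after -i:

    # Fix a common OCR misread (capital "I" for lowercase "l") in
    # every .srt file in the current directory.
    sed -i '' 's/Iike/like/g' *.srt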
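Second, the encoding step can be approximated with HandBrake’s command-line interface, with the subtitle mux folded in. A hedged sketch, assuming the HandBrakeCLI binary is installed alongside the GUI app; the file names are placeholders:

    # Re-encode the MakeMKV rip to a smaller .m4v, muxing the
    # OCR'd subtitles in as a soft text track.
    HandBrakeCLI -i movie.mkv -o movie.m4v --preset="Normal" \
        --srt-file movie.srt --srt-lang eng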

And that’s it. Now, you should have a .m4v file with embedded text-based soft subtitles for programs that support them (VLC, Plex, etc.), or you can just use the .srt file(s) created by Subler earlier as a sidecar file for programs that don’t read the embedded subtitles.

Trust

There is a better than 50% chance that I’ll be ordering an Apple Watch on the day they’re added to the Apple Store.

There has been a tendency to mock people that want to buy products simply because a certain company makes them. Some will say this type of buyer is being guided by marketing, or is just a follower, but in reality it comes down to trust. Many people trust Apple. It is this very important connection with users that will likely get people to at least try the Apple Watch, and for Apple that is the best outcome they can wish for.

There is a better than 50% chance that I’ll be ordering an Apple Watch on the day they’re added to the Apple Store.

My 2015 Resolutions

640×1136, 2048×1536, and 5120×2880. Yes, I make this joke somewhat annually. But…it amuses me, so I’ll probably continue to do so.

  • 640×1136 (iPhone 5s)
  • 2048×1536 (iPad Air 2)
  • 5120×2880 (iMac with Retina Display)

Yes, I make this joke somewhat annually. But…it amuses me, so I’ll probably continue to do so. One of these days I should dig back through prior years to figure out where I’ve posted this (blog, Facebook, Twitter) and see how my resolutions have changed over time.

No such thing as “just metadata”

There’s an MIT project called ‘Immersion’ that gives a good visualization of what can be learned from a relatively limited dataset.

With all the recent news concerning the NSA’s surveillance programs (PRISM et al.), one of the common defenses has been that for at least some of these programs (though not all), the government is “just” collecting metadata. For example, should the government access your email records, they might not have access to the content of the email, merely the associated data — like who you communicate with, when, how often, who else is included in the messages, and so on.

Techdirt has a good overview of why “it’s just metadata” is a foolish argument to make — basically, there is a lot of information that can be derived from “just metadata” — but there’s also an MIT project called “Immersion” (noted in the Techdirt article, though I found it elsewhere) that gives a good visualization of what can be learned from a relatively limited dataset.

Immersion scans your Gmail account (with your explicit permission, of course), and then runs an analysis on the metadata — not the content — of your email history to create a diagram showing who you communicate with and the connections among them.
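You don’t even need Immersion to get a feel for this. As a rough sketch, if you have a local mbox export of your mail (Google Takeout can produce one; the file name below is a placeholder), a short shell pipeline over nothing but the From: headers will surface your most frequent correspondents:

    # Tally senders using only the From: headers; no message
    # bodies are ever read.
    grep '^From: ' mail.mbox \
        | sed -E 's/^From: .*<([^>]+)>.*/\1/' \
        | sort | uniq -c | sort -rn | head -20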

As an example, here’s my result (with names removed). This is an analysis of almost 52 thousand messages over nearly nine years among 201 separate contacts. Each dot is a single contact, the size of the dot is a measure of how often I’ve communicated with them, and the lines between them show existing relationships between those people (based on messages with multiple recipients).

Immersion Contact Map

In that image, there are two obvious constellations: the blue grouping at the top right are my family and long-time friends; the orange/green/red/brown grouping to the left are my Norwescon contacts. The scattering of purples and yellows are contacts that fall outside of those two primary groups. While there’s not much here of great surprise or import for me, I did already learn one thing of interest — apparently one of my old high school friends has had some amount of contact with one of my Norwescon friends (that’s the single line connecting the two constellations). Now, I have no idea what sort of relationship exists between them — it could be nothing more than my sending a group email that included one and accidentally including the other as part of the group — but some sort of relationship does, and that’s information I didn’t have before.

Now, my metadata is fairly innocuous. But for argument’s sake, suppose I was involved not with Norwescon, but with some other group of people that, for whatever reason, I wanted to keep quiet about. Maybe I’m involved in the local kink scene, and could face repercussions at my job or in my personal life if this became known. Maybe I’m having a gender identity crisis that I’m not comfortable publicly discussing, but have a strong internet-based support group. Maybe I’m part of Anonymous or some similar group, discussing ways to cause mischief. Maybe I’m a whistleblower, and these are my contacts. Maybe I’m a news reporter who has guaranteed anonymity for my sources — but suddenly, this metadata exposes not only who I communicate with, but when and how often, and if there’s a sudden ramp in communication between me and certain contacts in the weeks or months before I break a big story with a lot of anonymous sources, suddenly they’re not so anonymous any more. And, yes, of course, because no list like this would be complete without the modern boogeyman that is the government’s excuse for why this surveillance is necessary — maybe I’m a terrorist. (For the record, I’m none of the above-mentioned things.)

However, of that list of possibilities, terrorism (or, less broadly, investigation of known or suspected crimes) is the only one that the government should really have any interest in, and that’s exactly the kind of investigation that they should be getting warrants for. If they suspect someone, get a warrant, analyze their data, and build a case from there. But analyzing everyone’s data, all the time, without specific need, without specific justification, and without warrants? And then holding on to the data indefinitely, allowing them to troll through it at any time for any reason, whether or not a crime is suspected?

There’s a very good reason why terms like “Orwellian”, “Big Brother”, and “1984” keep coming up in these conversations.

Now PGP-enabled

With all the recent concerns about security and privacy in the world of PRISM, I finally decided to carry through on something I’d considered from time to time in the past, and have set myself up to be able to handle PGP encryption for my mail.

With all the recent concerns about security and privacy in the world of PRISM, I finally decided to carry through on something I’d considered from time to time in the past, and have set myself up to be able to handle PGP encryption for my mail. I’m using GPGTools for the OS X Mail client and Mailvelope for Chrome when I need web access to my Gmail account.

To be honest, I don’t know how often I’ll actually use PGP for anything other than signing my messages — I can’t think of a time when I’ve ever been truly concerned about what someone might find if they snooped through my email (they’d probably be pretty bored) — but as long as the option is there, might as well make sure I’m set up to use it in case I ever feel the need.
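For the curious, day-to-day use mostly boils down to two commands. A sketch, assuming the gpg binary that GPGTools installs is on your PATH; the address is a placeholder:

    # Sign a message while leaving it readable.
    gpg --clearsign message.txt

    # Encrypt to a correspondent's public key, ASCII-armored so
    # it can be pasted into a mail client.
    gpg --encrypt --armor --recipient friend@example.com message.txt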

My PGP public key follows:

Continue reading “Now PGP-enabled”

Markdown is the new Word 5.1

Almost all of my writing for many, many years now has been in a text editor using Markdown-formatted text. I’m using Markdown formatting for this blog post (which WordPress then automatically translates into HTML), I’ve written many, many discussion board posts for school in Markdown format before pasting them into BlackBoard, and I use Markdown formatting whenever I’m writing email messages.

From Markdown is the new Word 5.1:

There’s a way out of this loop of bouncing between cluttered word processors and process-centric writing tools, a way to avoid having to cater to Clippy’s every whim while not having to hide your own work from yourself in order to concentrate. People have been saying for years that Word 5.1 needs to be ported to Mac OS X; that having that program running on current hardware would be the ideal solution to all of these problems with writing tools.

The truth is, there’s a solution now that’s most of the way there: Markdown and a good text editor. That’s the new Word 5.1. Think about it: a program like TextMate (I use TextWrangler. –mh) has almost no window chrome, and opens almost instantly. You start typing, and that’s all you have to do. I bring up Gruber because he invented Markdown, which lets you do basic formatting of text without really having to sweat much else. The types of formatting you don’t need aren’t even available to you when writing Markdown in a text editor, so you never have to deal with them.

Markdown will never be unreadable by a program, because it’s just ASCII text. It’s formatted, but if you’re reading the raw text, it’s not obscured the way a raw HTML file is. Any decent editor will give you a word count and can use headings as section and chapter breaks. With MultiMarkdown the options get even crazier: render your text file as a LaTeX document, or straight to PDF, or any number of other things. All from a text file and an editor with a minimal interface.
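That last bit really is about as simple as it sounds in practice — a sketch, assuming the multimarkdown command-line tool is installed (file names are placeholders):

    # Render the same source file to HTML...
    multimarkdown notes.md > notes.html
    # ...or to LaTeX.
    multimarkdown -t latex notes.md > notes.tex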


I’m in that set of people who fondly remember Word 5.1, and miss the days of having a word processor that was actually a word processor, not an overblown attempt to do absolutely everything ever related to desktop publishing all at once (even Apple’s Pages, while far preferable to any post-5.1 version of Word, is far more than just a simple word processor). My senior year of high school, I booted my Mac Classic into System 6 with one 1.44 MB floppy; another 1.44 MB floppy held Word 5.1 and every paper I wrote that year.

Those days will never come again, admittedly. But a simple text editor and Markdown formatting is all that’s really needed.

My first computer: The Osborne 1

This Sunday marks the 30th anniversary of the introduction of one of the first ‘portable’ computers, which also happens to be the first home computer that my family had. This was the machine that first got me into much of the geekery I’ve been into for years.

This Sunday marks the 30th anniversary of the introduction of one of the first “portable” computers, which also happens to be the first home computer that my family had. This was the machine that first got me into much of the geekery I’ve been into for years.

From Osborne!:

The Osborne 1 had a Z-80 processor (like Radio Shack’s TRS-80 and many other early systems) and a generous-for-the-time 64KB of RAM. It had two single-density floppy-disk drives, each of which stored a relatively skimpy 102KB of data, plus a handy pocket for extra disks. And it ran Digital Research’s CP/M, the popular operating system that was very much like Microsoft’s later MS-DOS.

Even by 1981 standards, the Osborne 1’s 5″ monochrome CRT was puny; today, there are smartphones with displays as big. It could display only 52 columns of text at a time–less than the eighty you really wanted for word processing, but more than the Apple II’s forty. The screen size was chosen in part because 5″ displays were readily available, having been engineered for a 55-pound behemoth that IBM had optimistically marketed in 1975 as the IBM 5100 Portable Computer….

Osborne 1 (Image via Wikipedia)

The sewing machine-sized Osborne 1 weighed 24 pounds (slightly more than ten modern-day 11″ MacBook Airs) and sported a handle; it created a class of PC that would forever be known as “luggables.” It was famously touted as fitting under an airplane seat, but you couldn’t actually use it on an airplane–not only because you would have busted your tray table, but also because it had no battery. Just getting it from place to place involved effort. Felsenstein has written that “carrying two of them from my car four blocks to the [West Coast Computer Faire] had nearly pulled my arms out of their sockets.”

The fact that the Osborne 1 was a fully-functioning personal computer in a portable case captured the imagination of techies in 1981. But it was only the second most innovative thing about the system. The most impressive part of the deal was that the computer gave you absolutely everything you needed to be productive for one remarkably low price: $1795 (about $4370 in current dollars).

I spent hours entranced by the machine. I learned to type (with the help of my mom’s vintage typing class book from when she was in school), I figured out the intricacies of the WordStar word processor (which gave me a leg up in learning HTML a decade and a half later, as the printer control codes used to create bold and italicized text in the not-even-close-to-WYSIWYG interface of WordStar mapped very closely to HTML tags), and I used BASIC to translate entire Choose Your Own Adventure books into simple text-based command line video games.

Not only did our family have one of these, but we eventually ended up with three. A few other families that we were friends with had had Osbornes, and as newer, smaller, more powerful computers from competitors like IBM and Compaq came on the market, they upgraded and gave us their old Osbornes as hand-me-downs. Not only did this let us upgrade ours with some goodies that we hadn’t added — like the state-of-the-art 1200 baud modem — but I was able to keep one working for quite a few years by cannibalizing pieces from the other two.

Eventually, of course, the machines either died out or simply got shoved away into storage as the family upgraded. I saved up and got myself my own computer — a Mac Classic, with 1MB RAM and no hard drive, just a single 1.44MB floppy disk drive — in 1991, and though I’ve occasionally pieced together a Frankenstein PC, Macs have always been where I feel most comfortable. Interestingly, the same article excerpted above points out that the Osborne itself may have influenced why the simplicity and “it just works” attitude of the Mac has always appealed to me.

Price was only part of the appeal of the Osborne 1’s all-in-one approach. Thom Hogan, an InfoWorld editor who became Osborne Computer’s director of software, says that the company’s greatest achievement was:

Something that Steve Jobs eventually learned from us, actually: simplicity of customer decision. At the time the Osborne 1 was launched, your choices at that level of capability were basically CP/M based systems from a number of vendors or an Apple II. In both cases, those other choices required you to make a LOT of decisions as a customer. For an Apple II: memory, drives, monitor, sometimes boards to add those things, plus software. A typical customer had to make five or six, sometimes more, decisions just to get the boxes necessary to build a useful system, and then they had to put it all together themselves…So Osborne not only saved the person money, but time and agony on the decision-making. Note how iPads are sold: two decisions: memory and communications. And they work out of the box, nothing needing to be assembled by the user.

The Osborne 1 was the first personal computer product that really did that (even the Radio Shack TRS-80 forced you into a number of decisions). Basically, plop down US$1795, take the box home, unpack it, plug it in, and start using your computer. One of the things that was integral to that was a stupid little <1K program I wrote. Previous to the Osborne, the user had to CONFIGURE CP/M. Even once configured, you’d boot from CP/M, then have to put in your word processing disc and execute from that. When you got an Osborne, you put the WP disk into the computer and you ended up in WordStar. In other words, we booted through the OS to the task the user wanted to do. Again, simplification of both process and pieces. As a result of that the Osborne was a no-brainer in terms of selling it against any other computer that was available in 1981: any sales person could demonstrate “put in the disc, turn it on, start writing” compared to “assemble the computer, configure the software, start the software program, start writing.”

(via /.)

21st Century Television (Part Two)

As promised, here’s a bit more information on the geeky details of how I’ve set up our cable-free TV system.


First off, credit where credit is due: I got a lot of pointers in setting all of this up from this post at Nyquil.org, along with a couple of follow-up email messages with Jer. Thanks!

  1. Set up a Giganews Usenet account. While Usenet, in the pre-web days, was one of the premier methods of communicating across the ‘net and thus was included free with most Internet packages, those days are long gone. Now, Usenet is the best and fastest way to grab those TV episodes we’re looking for, but it costs a few dollars a month to get access (far less than your average cable bill, however). There are other Usenet providers available, but Giganews was recommended to me, is working fine for me, and is reasonably priced, so I’m passing on the recommendation to you.

  2. Set up a (free) NZBs(dot)ORG account. .nzb files are the Usenet equivalent of BitTorrent’s .torrent files: pointers to all the various pieces of each media file. NZBs(dot)ORG lists NZBs in a number of categories; the TV > XviD category is non-HD, which is fine if you still have an old non-HDTV; people with HDTVs may want to use the x264 category for 720p/1080p content.

  3. Install SABnzbd+. This is a free, open-source program that handles all the pain-in-the-butt steps of using .nzb files. Without SABnzbd+…well, I’ll let Jer explain:

    …you…find yourself manually extracting RAR files, applying PAR2 files to regenerate missing chunks, and then disposing of all the compressed/encoded files after extracting your media file. Not to mention seeking out and downloading every episode of everything you want to download. It’s not for the faint of heart.

    With SABnzbd+, you simply toss it the .nzb file, and it takes care of all of that for you. Even better, it supports a “drop folder” system, so you can simply put a downloaded .nzb file into a folder, and moments later it automagically gets slurped into SABnzbd+ and the files start downloading. Even better than that, though, is its support for RSS feeds…and since NZBs(dot)ORG lets you save RSS feeds of particular searches, it’s relatively trivial to automate the downloading process.

    For my setup, I created an “nzb” folder inside my usual “Downloads” folder. Inside that, I have three folders: “new” (my SABnzbd+ drop folder, for adding manually downloaded .nzb files), “incomplete” (where SABnzbd+ stores the in-progress downloads), and “complete” (where SABnzbd+ stores the finished downloads after post-processing). I also have an alias to the media folder that the Roksbox software accesses; this is for my own convenience and not necessary in all setups. (A one-line version of this folder setup appears after this list.)

    SABnzbd+ folder structure

  4. Set up and save searches on NZBs(dot)ORG for the shows you want to track. (NOTE: NZBs(dot)ORG has redesigned since this post was written, so these instructions aren’t quite correct anymore. They should be close enough to point you in the right direction, though.) Click on the “My Searches” link towards the top right of the NZBs(dot)ORG page, then click on “[Add]” next to “Saved Searches” towards the left of the “Add Search” page. Because NZBs(dot)ORG doesn’t allow for a preview of a search, I’ve found it easiest to keep the NZBs(dot)ORG front page open in a separate tab so that I can do a test search for my primary search terms, then look for which terms I want to exclude.

    For example, we want to watch CSI, but aren’t interested in the New York or Miami spinoffs. So, my saved search uses the search term “csi” in the “TV-XviD” category, but filters out anything with “dvdrip” (as I’m not interested in older episodes ripped from DVDs), “ny,” “york,” “miami,” or “geographic” (apparently there’s a National Geographic show that uses the initials CSI in its title).


    Eventually, you’ll build up a list of shows that will automatically populate whenever a new show that matches any of your saved searches appears on Usenet. Here’s a look at how my searches are set up — no snarks on our taste in TV, please, we’re quite aware of our guilty pleasures. ;)


    Now, see that little “RSS” link after each search? Those are going to come in very handy, as we flip back over to SABnzbd+….

  5. Add your saved searches to SABnzbd+. Under the “Config” link in the left hand sidebar of SABnzbd+, click on “RSS”. Copy the RSS feed link for one of your NZBs(dot)ORG saved searches, paste it into the “RSS Configuration” > “New Feed URL” field in SABnzbd+, name the feed something other than “Feed1”, and hit the “Add” button. That’s it!

    (While SABnzbd+ does offer various filtering options for RSS feeds, because you’re taking care of the filtering ahead of time in your NZBs(dot)ORG searches, you shouldn’t need to worry about these fields. If you’re using a different .nzb search site that doesn’t allow customization of RSS feeds, you should be able to use these filters to remove items you’re not interested in.)


    The first time SABnzbd+ scans the RSS feed, it will not download anything — this is intentional, as you probably don’t want to suddenly be downloading all of the items listed in the RSS feed. If there are any recent episodes that you’d like to download, you can click on the “Preview” button next to your newly-entered feed to choose which items you’d like to download.

    Go through and add the rest of the RSS feeds for your saved searches, and you’re all set. From here on out, as long as SABnzbd+ is running, it will keep an eye on your saved searches. Whenever a new episode that matches one of your searches appears, SABnzbd+ will see it in the RSS feed, grab the .nzb file, download everything it needs, assemble and decompress it, and store the finished download in the “complete” folder.
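As promised back in step 3, that folder setup is a one-liner in Terminal (a sketch; adjust the paths to match your own layout):

    # Create the drop, in-progress, and finished folders for SABnzbd+.
    mkdir -p ~/Downloads/nzb/{new,incomplete,complete}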

Now, if all you’re interested in is getting ahold of TV episodes and having them on your computer to watch, you’re set! I copy the downloaded files to a network drive and use the Plex software to pipe the shows over to the Roku player attached to our TV. Good to go!


NOTE: The following information is the original ending to this post, but is deprecated, as the situation is now simpler. However, I’m keeping it here for the sake of completeness.

However, in our case, I also need to convert the downloaded video from .avi to H.264-encoded .mov or .mp4 files, as that’s the only format that the Roku player will accept, and then move the files into their proper place within my computer’s webserver for Roksbox to access. While I haven’t been able to automate all of this, I have managed to use Automator, AppleScript, and the HandBrake video conversion software’s command line interface to automate the .avi to .mp4 conversion.

Now, I’m no Automator or AppleScript guru — this is actually one of my first experiments with either technology — so this may not be the best or most efficient way to handle this particular option. I’m certainly open to suggestions for improvement! However, it’s working for me…so far.

If you’d like, you can download my Automator action (121k .zip file). To install it, decompress the .zip file and add it to your ~/Library/Workflows/Applications/Folder Actions folder. Create a folder named “TV” inside the ~/Downloads/nzb/complete folder (it will be added automatically by SABnzbd+ the first time it downloads a TV episode, but it needs to exist for this to work). Additionally, the HandBrake CLI must be installed in your main Applications directory.

To activate the HandBrake action, right-click on the “TV” folder and choose “Folder Actions Setup…” from the pop-up menu. In the Folder Actions Setup dialog, choose “Handbrake.workflow” and click the “Attach” button. Once that’s done, whenever SABnzbd+ finishes post-processing a download and moves the folder containing all of the files to the “TV” folder, this Automator workflow will automatically be triggered. Here’s what it does:

  1. Get Folder Contents and repeat for each subfolder found. This scans the TV folder and the folder that’s just been added to it to find all the contents.

  2. Filter Finder Items for files with the .avi extension that are larger than 20 MB (this avoids running into a conflict with the small quality sample .avi files that are sometimes included).

  3. Run AppleScript

    on run {input, parameters}
      -- Automator passes a list of Finder items; work with the first one
      set input to POSIX path of (item 1 of input)
      -- "quoted form of" protects paths containing spaces, and the quotes
      -- around the preset name must be escaped inside the string
      set ConvertMovieCmd to "nice /Applications/HandBrakeCLI -i " & quoted form of input & " -o " & quoted form of (input & ".mp4") & " --preset=\"Normal\""
      do shell script ConvertMovieCmd
      return input & ".mp4"
    end run
    

    This simple AppleScript: grabs the file passed to it by step two; converts the file path to use POSIX slashes rather than HFS+ colons as delimiters; creates a terminal command for the HandBrake CLI using the .avi file as input, the “Normal” preset, and simply appending .mp4 to the existing file name on output; and passes the newly created file to the next step in the action.

  4. Move Finder Items moves the new .mp4 file to the “complete” folder, one level up from the “TV” folder.

  5. Show Growl Notification pops up a sticky Growl alert to let me know that a new episode has finished transcoding. Obviously, this step will only work if you have Growl installed.

Eventually, I’d like to figure out how to get the action to move the folder containing the just-processed .avi file to the trash, but I haven’t quite figured out how to do that without possibly also moving any other folders at the same level to the trash (which might interfere with other downloads not yet transcoded), so for now, I’m sticking with manually cleaning up the extra files after the transcoding is finished.
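One possible approach, sketched here but untested: hand the job to the Finder via osascript, so only the named folder is touched and it lands in the Trash rather than being deleted outright. Here $SRC_DIR is a placeholder for the path of the folder that held the just-converted .avi:

    # Finder's "delete" moves the item to the Trash; sibling folders
    # at the same level are never touched.
    osascript -e "tell application \"Finder\" to delete (POSIX file \"$SRC_DIR\" as alias)"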

From there, all that really needs to be done is moving the file from the “complete” folder to its proper place in the Roksbox file structure, and it’s ready to watch on our TV. I do a few other steps manually to “pretty up” the experience — adding “poster art” and XML-based episode descriptions for the Roksbox interface — but those are entirely optional, and many people won’t see the need to bother with those steps.

And that’s it! 80% of the process is now completely automated, and that last 20% that I do manually is entirely optional and basically just feeds my anal-retentive need to present things as slickly as possible whenever I can.

Hopefully all this has been interesting and informative to at least a few people out there. Questions, comments, ideas for improvement? Let me know!