Linkdump for November 12th through December 19th

An automatically generated list of links that caught my eye between November 12th and December 19th.

Sometime between November 12th and December 19th, I thought this stuff was interesting. You might think so too!

  • Toxic Masculinity Is the True Villain of Star Wars: The Last Jedi: SPOILERS: “Poe's character, while not one of the main protagonists, has even more to do in The Last Jedi. However, while he may be filling the role of the dashing pilot that Han did in the Original Trilogy, director Rian Johnson is using the archetype to say something completely different about heroism, leadership, and—perhaps most importantly—masculinity.”
  • Star Wars: The Last Jedi Offers the Harsh Condemnation of Mansplaining We Need in 2017: SPOILERS: “Any female boss in 2017 or any American still nursing the hangover of the 2016 presidential election can tell you that even nice guys often have trouble taking orders from women.”
  • Star Wars, the Generations: SPOILERS: “Great movies reflect an era through the eyes of artists who embody that era. George Lucas embodied the era of Baby Boom ‘destiny’ and self-conceit. Rian Johnson embodies our era of diminished heroism, cynicism and near despair — tempered by the hope, if we can but learn from our heroes’ mistakes, that somehow, some way, some day, we may yet restore balance to the Force.”
  • Rian Johnson Confirms The Dorkiest Reference In ‘The Last Jedi’: SPOILERS: “There is a dorky reference in Star Wars: The Last Jedi that even director Rian Johnson admits that you may have to be of a certain age to get – thanks to a narrow window where you might have been watching premium cable in the very early ‘80s when this bizarre little short film would air in-between feature-length films.”
  • Rian Johnson Says There Are No Twists, Only Honest Choices: SPOILERS: “It seemed completely honest to me. It seems like the most dramatic version of that. And that’s what you’re supposed to do. Find what the honest moment would be, and then find the most dramatic version of it. So, in terms of the big ‘twists’ in the movie, they sprung from a process of trying to follow where these characters would go as honestly as possible.”
  • Star Wars: The Last Jedi humanizes the Force: SPOILERS: This was one of my favorite things about The Last Jedi. To my mind, a very smart direction to take things.
  • Did You Catch the Brazil Reference in Star Wars: The Last Jedi?
  • ‘Star Wars: The Last Jedi’ Redeems the Prequels: SPOILERS: “One of the many reasons I love Star Wars: The Last Jedi is that it redeems the prequels. … It recontextualizes the prequels and reinforces what I loved about them.”
  • Pro-Neutrality, Anti-Title II: Interesting argument that the likely change to ISP regulations — the 'net neutrality' debate — may not be quite the horrid thing it appears to be. Worth thinking over. "The question at hand, though, is what is the best way to achieve net neutrality? To believe that Chairman Pai is right is not to be against net neutrality; rather, it is to believe that the FCC’s 2015 approach was mistaken."
  • Keyboard Maestro 8.0.4: Work Faster with Macros for macOS: Saving for me to remember and look into when I have more time.
  • The Amazons’ New Clothes: “The Wonder Woman designs received acclaim from fans and costume fanatics alike. They were clearly inspired by the Amazons’ origins in the Mediterranean and were feminine but very functional. Why mess with perfection? Oh, right. The all-male team of directors and executive directors wanted women to fight in bikinis.”

Linkdump for July 25th through September 21st

An automatically generated list of links that caught my eye between July 25th and September 21st.

Sometime between July 25th and September 21st, I thought this stuff was interesting. You might think so too!

Linkdump for June 25th through July 16th

An automatically generated list of links that caught my eye between June 25th and July 16th.

Sometime between June 25th and July 16th, I thought this stuff was interesting. You might think so too!

DVD/Blu-Ray conversion with text soft subs on Mac OS X

How to convert DVDs or Blu-Rays for personal use on OS X, including OCR conversion of subtitles to text-based .srt files suitable for use as ‘soft’ subtitles (rendered by the video player rather than burned into the image).

Saved here for my own reference, and possibly others’ if they should stumble across it: the easiest workflow I’ve found yet for converting DVDs or Blu-Rays (if you have a Blu-Ray reader, of course) for personal use on OS X, including OCR conversion of subtitles in either VOBSUB (DVD) or PGS (Blu-Ray) format to text-based .srt files suitable for use as soft subtitles, either as a sidecar file or included in the final movie file.

Movie Rip Workflow

The flow diagram to the right gives an overview of the process I’ve landed on. Here’s a slightly more detailed breakdown.

  1. Use MakeMKV to rip the DVD or Blu-Ray disc to an .mkv file (if I run into a stubborn DVD, or one with a lot of multiplexing, I’ll use RipIt to create a disk image first, then run that image through MakeMKV). To save space, you can select only the primary audio track for inclusion, or you can select others if you want other languages or commentary tracks archived as well (though this will require more storage space). I also select all available English-language subtitle tracks, as some discs will include both standard subtitles and subtitles for the hearing impaired or closed captions, which include some extra information on who is speaking and background sounds, or occasionally even transcriptions of commentary tracks.
  2. Use Subler to OCR and export the subtitle files. This takes two runs through Subler to complete.
    1. First run: drag the .mkv file onto Subler, and select only the subtitle track(s). Pop that into the export queue, and after a few minutes of processing (this is when the OCR happens) Subler will output a tiny .m4v file.
    2. Second run: drag that file back onto Subler, click on the subtitle track, and choose File > Export… to save the .srt file(s). The tiny .m4v file can then be deleted.

    Now, the OCR process is not perfect, and the resulting .srt file(s) are virtually guaranteed to have some errors. How many there are, and how intrusive they are, depends on the source. Blu-Ray subs seem to come out better than DVD subs (likely due to the higher resolution of the format giving better-quality text for the OCR process to scan); DVD subs are also affected by the chosen font and whether or not italics were used. For correction, I use one of two methods.

    1. For a quick-and-dirty “good enough for now” run, I use BBEdit (but just about any other text editor would work) to do a quick spellcheck, identifying common errors and using search-and-replace to fix them in batches (see the sketch just after this list).
    2. For a real quality fix, I use Aegisub to go through line-by-line, comparing the text to the original audio, adding italics when appropriate, and so on.

    Of course, these two processes can be combined, done at different times, or skipped entirely; right now, I’m just living with the OCR errors, because I can always go back and use Subler to extract the .srt files for cleanup later on when I have more time.

  3. Use HandBrake to re-encode and convert the .mkv file (which at this point will be fairly large, straight off the source media) to a smaller .m4v file. You can either embed the .srt files at this point, under HandBrake’s ‘Subtitles’ tab, or if you prefer…
  4. …you can use Subler to mux the .srt file(s) into the .m4v: Drag the .m4v file from HandBrake onto Subler, drag the .srt file(s) into the window that opens, and then drop that into the queue for final remuxing (optionally, before adding the files to the queue, use Subler’s metadata search tools to add the description, artwork, and other metadata). Then run the queue to output the final file.
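Since OCR mistakes tend to repeat throughout a disc’s subtitles, the quick-and-dirty cleanup pass from step 2 is easy to script. Here’s a minimal Python sketch of that batch search-and-replace; the substitution table is just an example of the kind of recurring errors I see (lowercase ‘l’ misread as capital ‘I’, especially), not an exhaustive list.

```python
# Quick-and-dirty batch cleanup of recurring OCR errors in .srt files.
# The table below is illustrative; build up your own list as you spot
# the errors a particular disc or font produces.
import re
import sys

FIXES = [
    (re.compile(r"\bl'm\b"), "I'm"),    # lowercase L misread as capital I
    (re.compile(r"\bl've\b"), "I've"),
    (re.compile(r"\bl'll\b"), "I'll"),
    (re.compile(r"\blt\b"), "It"),
    (re.compile(r"\|"), "I"),           # pipe misread as capital I
]

def clean(path):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for pattern, replacement in FIXES:
        text = pattern.sub(replacement, text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

if __name__ == "__main__":
    for srt in sys.argv[1:]:
        clean(srt)
```

Run it as `python fix_srt.py Movie.srt`, then do the line-by-line pass in Aegisub whenever time allows.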

And that’s it. Now, you should have a .m4v file with embedded text-based soft subtitles for programs that support them (VLC, Plex, etc.), or you can just use the .srt file(s) created by Subler earlier as a sidecar file for programs that don’t read the embedded .srt.
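For what it’s worth, steps 3 and 4 can also be scripted rather than clicked through: HandBrake ships a command-line version that can do the re-encode and mux in the sidecar .srt in one pass. A rough sketch in Python, assuming HandBrakeCLI is installed and on your PATH (the preset name is only an example; `HandBrakeCLI --preset-list` shows what your version actually offers):

```python
# Re-encode the MakeMKV rip and mux in the sidecar .srt in one pass, via
# HandBrake's command-line version. Assumes HandBrakeCLI is on the PATH;
# the preset name is an example (see `HandBrakeCLI --preset-list`).
import subprocess
import sys

def encode(mkv_in, srt_in, m4v_out, preset="Fast 1080p30"):
    subprocess.run(
        [
            "HandBrakeCLI",
            "-i", mkv_in,
            "-o", m4v_out,
            "--preset", preset,
            "--srt-file", srt_in,   # embed the OCR'd subtitles as a soft track
            "--srt-lang", "eng",
        ],
        check=True,
    )

if __name__ == "__main__":
    encode(sys.argv[1], sys.argv[2], sys.argv[3])
```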

Trust

There is a better than 50% chance that I’ll be ordering an Apple Watch on the day they’re added to the Apple Store.

There has been a tendency to mock people that want to buy products simply because a certain company makes them. Some will say this type of buyer is being guided by marketing, or is just a follower, but in reality it comes down to trust. Many people trust Apple. It is this very important connection with users that will likely get people to at least try the Apple Watch, and for Apple that is the best outcome they can wish for.

There is a better than 50% chance that I’ll be ordering an Apple Watch on the day they’re added to the Apple Store.

My 2015 Resolutions

640×1136, 2048×1536, and 5120×2880. Yes, I make this joke somewhat annually. But…it amuses me, so I’ll probably continue to do so.

  • 640×1136 (iPhone 5s)
  • 2048×1536 (iPad Air 2)
  • 5120×2880 (iMac with Retina Display)

Yes, I make this joke somewhat annually. But…it amuses me, so I’ll probably continue to do so. One of these days I should dig back through prior years to figure out where I’ve posted this (blog, Facebook, Twitter) and see how my resolutions have changed over time.

No such thing as “just metadata”

There’s an MIT project called ‘Immersion’ that gives a good visualization of what can be learned from a relatively limited dataset.

With all the recent news concerning the NSA’s surveillance programs (Prism et al.), one of the common defenses has been that for at least some of these programs (though not all), the government is “just” collecting metadata. For example, should the government access your email records, they might not have access to the content of the email, merely the associated data — like who you communicate with, when, how often, who else is included in the messages, and so on.

Techdirt has a good overview of why “it’s just metadata” is a foolish argument to make — basically, there is a lot of information that can be derived from “just metadata” — but there’s also an MIT project called “Immersion” (noted in the Techdirt article, though I found it elsewhere) that gives a good visualization of what can be learned from a relatively limited dataset.

Immersion scans your Gmail account (with your explicit permission, of course), and then runs an analysis on the metadata — not the content — of your email history to create a diagram showing who you communicate with and the connections among them.

As an example, here’s my result (with names removed). This is an analysis of almost 52 thousand messages over nearly nine years among 201 separate contacts. Each dot is a single contact, the size of the dot is a measure of how often I’ve communicated with them, and the lines between them show existing relationships between those people (based on messages with multiple recipients).

Immersion Contact Map

In that image, there are two obvious constellations: the blue grouping at the top right are my family and long-time friends; the orange/green/red/brown grouping to the left are my Norwescon contacts. The scattering of purples and yellows are contacts that fall outside of those two primary groups. While there’s not much here of great surprise or import for me, I did already learn one thing of interest — apparently one of my old high school friends has had some amount of contact with one of my Norwescon friends (that’s the single line connecting the two constellations). Now, I have no idea what sort of relationship exists between them — it could be nothing more than my sending a group email that included one and accidentally including the other as part of the group — but some sort of relationship does, and that’s information I didn’t have before.
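Immersion’s own implementation isn’t shown here, but the core of this kind of analysis is simple enough to sketch. Assuming a local mbox export of your mail (the “mail.mbox” filename below is a placeholder), the following Python reads only the From/To/Cc headers, never the message bodies, and builds the same sort of weighted contact graph:

```python
# Build a contact graph from email *headers only*; no message bodies are read.
# Node weight: how many messages an address appears on. Edge weight: how often
# two addresses were on the same message together.
import mailbox
from collections import Counter
from email.utils import getaddresses
from itertools import combinations

contacts = Counter()
links = Counter()

for msg in mailbox.mbox("mail.mbox"):  # placeholder path to a local mbox export
    headers = []
    for field in ("from", "to", "cc"):
        headers.extend(msg.get_all(field, []))
    addresses = sorted({addr.lower() for _, addr in getaddresses(headers) if addr})
    for addr in addresses:
        contacts[addr] += 1
    for pair in combinations(addresses, 2):
        links[pair] += 1

print("Most-contacted:", contacts.most_common(5))
print("Strongest connections:", links.most_common(5))
```

That’s the whole trick: a frequency count and a co-occurrence count, with no need to read a single word anyone wrote.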

Now, my metadata is fairly innocuous. But for argument’s sake, suppose I was involved not with Norwescon, but with some other group of people that, for whatever reason, I wanted to keep quiet about. Maybe I’m involved in the local kink scene, and could face repercussions at my job or in my personal life if this became known. Maybe I’m having a gender identity crisis that I’m not comfortable publicly discussing, but have a strong internet-based support group. Maybe I’m part of Anonymous or some similar group, discussing ways to cause mischief. Maybe I’m a whistleblower, and these are my contacts. Maybe I’m a news reporter who has guaranteed anonymity for my sources — but suddenly, this metadata exposes not only who I communicate with, but when and how often, and if there’s a sudden ramp in communication between me and certain contacts in the weeks or months before I break a big story with a lot of anonymous sources, suddenly they’re not so anonymous any more. And, yes, of course, because no list like this would be complete without the modern boogeyman that is the government’s excuse for why this surveillance is necessary — maybe I’m a terrorist. (For the record, I’m none of the above-mentioned things.)

However, of that list of possibilities, terrorism (or, less broadly, investigation of known or suspected crimes) is the only one that the government should really have any interest in, and that’s exactly the kind of investigation that they should be getting warrants for. If they suspect someone, get a warrant, analyze their data, and build a case from there. But analyzing everyone’s data, all the time, without specific need, without specific justification, and without warrants? And then holding on to the data indefinitely, allowing them to trawl through it at any time for any reason, whether or not a crime is suspected?

There’s a very good reason why terms like “Orwellian”, “Big Brother”, and “1984” keep coming up in these conversations.

Now PGP-enabled

With all the recent concerns about security and privacy in the world of PRISM, I finally decided to carry through on something I’d considered from time to time in the past, and have set myself up to be able to handle PGP encryption for my mail.

With all the recent concerns about security and privacy in the world of PRISM, I finally decided to carry through on something I’d considered from time to time in the past, and have set myself up to be able to handle PGP encryption for my mail. I’m using GPGTools for the OS X Mail client and Mailvelope for Chrome when I need web access to my Gmail account.

To be honest, I don’t know how often I’ll actually use PGP for anything other than signing my messages — I can’t think of a time when I’ve ever been truly concerned about what someone might find if they snooped through my email (they’d probably be pretty bored) — but as long as the option is there, might as well make sure I’m set up to use it in case I ever feel the need.
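For the curious, the same keyring is usable from scripts as well as from Mail. A minimal sketch using the third-party python-gnupg package (an assumption on my part; it wraps the gpg binary that GPGTools installs, and the key ID below is a placeholder):

```python
# Sign (and optionally encrypt) a message from a script, using the keyring
# that GPGTools manages. python-gnupg is a third-party wrapper around the
# gpg binary: pip install python-gnupg
import gnupg

gpg = gnupg.GPG()  # defaults to the ~/.gnupg keyring

# Clearsign a message; "DEADBEEF" is a placeholder for your own key ID.
signed = gpg.sign("This really is from me.", keyid="DEADBEEF")
print(str(signed))

# Encrypt to someone whose public key is already on your keyring.
encrypted = gpg.encrypt("For your eyes only.", recipients=["friend@example.com"])
if encrypted.ok:
    print(str(encrypted))
```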

My PGP public key follows:


Markdown is the new Word 5.1

Almost all of my writing for many, many years now has been in a text editor using Markdown-formatted text. I’m using Markdown formatting for this blog post (which WordPress then automatically translates into HTML), I’ve written many, many discussion board posts for school in Markdown format before pasting them into BlackBoard, and I use Markdown formatting whenever I’m writing email messages.

From Markdown is the new Word 5.1:

There’s a way out of this loop of bouncing between cluttered word processors and process-centric writing tools, a way to avoid having to cater to Clippy’s every whim while not having to hide your own work from yourself in order to concentrate. People have been saying for years that Word 5.1 needs to be ported to Mac OS X; that having that program running on current hardware would be the ideal solution to all of these problems with writing tools.

The truth is, there’s a solution now that’s most of the way there: Markdown and a good text editor. That’s the new Word 5.1. Think about it: a program like TextMate (I use TextWrangler. –mh) has almost no window chrome, and opens almost instantly. You start typing, and that’s all you have to do. I bring up Gruber because he invented Markdown, which lets you do basic formatting of text without really having to sweat much else. The types of formatting you don’t need aren’t even available to you when writing Markdown in a text editor, so you never have to deal with them.

Markdown will never be unreadable by a program, because it’s just ASCII text. It’s formatted, but if you’re reading the raw text, it’s not obscured the way a raw HTML file is. Any decent editor will give you a word count and can use headings as section and chapter breaks. With MultiMarkdown the options get even crazier: render your text file as a LaTeX document, or straight to PDF, or any number of other things. All from a text file and an editor with a minimal interface.

Almost all of my writing for many, many years now has been in a text editor using Markdown-formatted text. I’m using Markdown formatting for this blog post (which WordPress then automatically translates into HTML), I’ve written many, many discussion board posts for school in Markdown format before pasting them into BlackBoard, and I use Markdown formatting whenever I’m writing email messages.
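That translation step is nothing magic, which is part of Markdown’s appeal. A quick sketch using the third-party Python markdown package (WordPress uses its own converter; this is just to illustrate the idea):

```python
# Markdown in, HTML out: the same kind of translation WordPress performs
# on these posts. Requires the third-party package: pip install markdown
import markdown

source = """# A post title

Some *emphasized* text, a [link](https://example.com/), and:

* a list item
* another one
"""

print(markdown.markdown(source))
# Prints <h1>, <p>, <em>, <a>, and <ul>/<li> markup built from the text above.
```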

I’m in that set of people who fondly remember Word 5.1, and miss the days of having a word processor that was actually a word processor, not an overblown attempt to do absolutely everything ever related to desktop publishing all at once (even Apple’s Pages, while far preferable to any post-5.1 version of Word, is far more than just a simple word processor). My senior year of high school, I booted my Mac Classic into System 6 from one 1.44 MB floppy; another 1.44 MB floppy held Word 5.1 and every paper I wrote that year.

Those days will never come again, admittedly. But a simple text editor and Markdown formatting is all that’s really needed.

My first computer: The Osborne 1

This Sunday marks the 30th anniversary of the introduction of one of the first ‘portable’ computers, which also happens to be the first home computer that my family had. This was the machine that first got me into much of the geekery I’ve been into for years.

This Sunday marks the 30th anniversary of the introduction of one of the first “portable” computers, which also happens to be the first home computer that my family had. This was the machine that first got me into much of the geekery I’ve been into for years.

From Osborne!:

The Osborne 1 had a Z-80 processor (like Radio Shack’s TRS-80 and many other early systems) and a generous-for-the-time 64KB of RAM. It had two single-density floppy-disk drives, each of which stored a relatively skimpy 102KB of data, plus a handy pocket for extra disks. And it ran Digital Research’s CP/M, the popular operating system that was very much like Microsoft’s later MS-DOS.

Even by 1981 standards, the Osborne 1’s 5″ monochrome CRT was puny; today, there are smartphones with displays as big. It could display only 52 columns of text at a time — less than the eighty you really wanted for word processing, but more than the Apple II’s forty. The screen size was chosen in part because 5″ displays were readily available, having been engineered for a 55-pound behemoth that IBM had optimistically marketed in 1975 as the IBM 5100 Portable Computer….

Osborne 1 (Image via Wikipedia)

The sewing machine-sized Osborne 1 weighed 24 pounds (slightly more than ten modern-day 11″ MacBook Airs) and sported a handle; it created a class of PC that would forever be known as “luggables.” It was famously touted as fitting under an airplane seat, but you couldn’t actually use it on an airplane — not only because you would have busted your tray table, but also because it had no battery. Just getting it from place to place involved effort. Felsenstein has written that “carrying two of them from my car four blocks to the [West Coast Computer Faire] had nearly pulled my arms out of their sockets.”

The fact that the Osborne 1 was a fully-functioning personal computer in a portable case captured the imagination of techies in 1981. But it was only the second most innovative thing about the system. The most impressive part of the deal was that the computer gave you absolutely everything you needed to be productive for one remarkably low price: $1795 (about $4370 in current dollars).

I spent hours entranced by the machine. I learned to type (with the help of my mom’s vintage typing class book from when she was in school), I figured out the intricacies of the WordStar word processor (which gave me a leg up in learning HTML a decade and a half later, as the printer control codes used to create bold and italicized text in the not-even-close-to-WYSIWYG interface of WordStar mapped very closely to HTML tags), and I used BASIC to translate entire Choose Your Own Adventure books into simple text-based command line video games.
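Those BASIC conversions were, at heart, nothing more than a branching table of passages, which is a structure you can sketch in a few lines of any language. Here’s the idea in Python (the two-page story is invented for illustration, not from an actual book):

```python
# A Choose Your Own Adventure book as data: each "page" is its text plus
# the numbered choices leading to other pages. Sample story is invented.
pages = {
    1: ("You stand at the mouth of a dark cave.",
        [("Enter the cave", 2), ("Turn around and head home", 3)]),
    2: ("Something growls in the darkness. THE END.", []),
    3: ("You never find out what was inside. THE END.", []),
}

page = 1
while True:
    text, choices = pages[page]
    print(text)
    if not choices:
        break
    for number, (label, _) in enumerate(choices, start=1):
        print(f"{number}. {label}")
    page = choices[int(input("> ")) - 1][1]
```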

Not only did our family have one of these, but we eventually ended up with three. A few other families that we were friends with had had Osbornes, and as newer, smaller, more powerful computers from competitors like IBM and Compaq came on the market, they upgraded and gave us their old Osbornes as hand-me-downs. Not only did this let us upgrade ours with some goodies that we hadn’t added — like the state-of-the-art 1200 baud modem — but I was able to keep one working for quite a few years by cannibalizing pieces from the other two.

Eventually, of course, the machines either died out or simply got shoved away into storage as the family upgraded. I saved up and got myself my own computer — a Mac Classic, with 1MB RAM and no hard drive, just a single 1.44 MB floppy disk drive — in 1991, and though I’ve occasionally pieced together a Frankenstein PC, Macs have always been where I feel most comfortable. Interestingly, the same article excerpted above points out that the Osborne itself may help explain why the simplicity and “it just works” attitude of the Mac has always appealed to me.

Price was only part of the appeal of the Osborne 1’s all-in-one approach. Thom Hogan, an InfoWorld editor who became Osborne Computer’s director of software, says that the company’s greatest achievement was:

Something that Steve Jobs eventually learned from us, actually: simplicity of customer decision. At the time the Osborne 1 was launched, your choices at that level of capability were basically CP/M based systems from a number of vendors or an Apple II. In both cases, those other choices required you to make a LOT of decisions as a customer. For an Apple II: memory, drives, monitor, sometimes boards to add those things, plus software. A typical customer had to make five or six, sometimes more, decisions just to get the boxes necessary to build a useful system, and then they had to put it all together themselves…So Osborne not only saved the person money, but time and agony on the decision-making. Note how iPads are sold: two decisions: memory and communications. And they work out of the box, nothing needing to be assembled by the user.

The Osborne 1 was the first personal computer product that really did that (even the Radio Shack TRS-80 forced you into a number of decisions). Basically, plop down US$1795, take the box home, unpack it, plug it in, and start using your computer. One of the things that was integral to that was a stupid little <1K program I wrote. Previous to the Osborne, the user had to CONFIGURE CP/M. Even once configured, you’d boot from CP/M, then have to put in your word processing disc and execute from that. When you got an Osborne, you put the WP disk into the computer and you ended up in WordStar. In other words, we booted through the OS to the task the user wanted to do. Again, simplification of both process and pieces. As a result of that the Osborne was a no-brainer in terms of selling it against any other computer that was available in 1981: any sales person could demonstrate “put in the disc, turn it on, start writing” compared to “assemble the computer, configure the software, start the software program, start writing.”

(via /.)