Categories
Education Portfolio Technology

Swift Student Challenge

A few days ago, Apple announced the winners of their Swift Student Challenge. I had applied and used my “taking a test” tactic, which was to hit ‘submit’ and then promptly erase the whole thing from my brain. (What’s done is done, and I feel silly worrying about something I have no control over.)

So when I got the email that “my status was updated” it was a bit of a surprise.

And when I clicked through the link (because, of course, they can’t just say it in the email; you have to sign in) I was in for more of a surprise.

My submission had been accepted. I’m one of 350 students around the world whose work sufficiently impressed the judges at Apple.

Screenshot from Apple Developer website. It reads: Congratulations! Your submission has been selected for a WWDC20 Swift Student Challenge award. You'll receive an exclusive WWDC20 jacket and pin set at the mailing address you provided on your submission form. You'll also be able to download pre-release software, request lab appointments, and connect with Apple engineers over WWDC20 content on the forums. In addition, one year of individual membership in the Apple Developer Program will be assigned free of charge to eligible accounts of recipients who have reached the age of majority in their region. For details, see the WWDC20 Swift Student Challenge Terms and Conditions.
Neat!

Now, throughout the whole process of applying, I was my usual secretive self. I think two people knew that I was applying at all, much less what I was working on. Since it’s over with, though, it’s time for the unveiling.

What I made

I wanted to bring back a concept I’ve played with before: cellular automata. A few days before the competition was announced, I’d seen a video that really caught my interest.

Well hey, I thought, I’ve got some code for running cellular automata. I want to learn Swift Playgrounds. And I’ve been having fun with SwiftUI. Let’s combine those things, shall we?

The first big change was a visual history; when a cell dies, I don’t want it to just go out, I want it to fade slowly, leaving behind a trail of where the automata have spread.

The second was rewriting all the visuals in SwiftUI, which was a fun project. Animation timings took me a bit to get right, as did figuring out how to do an automated ‘update n times a second’ in Combine. The biggest issue I had, actually, was performance – I had to do some fun little tricks to get it to run smoothly. (Note the .drawingGroup() in the sketch below – that made a big difference.)
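In broad strokes, the drawing-and-updating part came out something like the sketch below; GameOfLife here is a stand-in for the playground’s actual model type, not the real code:

import SwiftUI
import Combine

struct GridView: View {
    @ObservedObject var game: GameOfLife // stand-in ObservableObject model
    // The Combine piece: a timer publisher firing four times a second.
    let timer = Timer.publish(every: 0.25, on: .main, in: .common).autoconnect()

    var body: some View {
        VStack(spacing: 1) {
            ForEach(0..<game.height, id: \.self) { row in
                HStack(spacing: 1) {
                    ForEach(0..<game.width, id: \.self) { column in
                        // Opacity decays with a cell's time-since-death,
                        // which is what leaves the trail behind.
                        Rectangle()
                            .fill(Color.blue.opacity(game.opacity(row: row, column: column)))
                    }
                }
            }
        }
        // Flatten the whole grid into a single rendered layer; animating
        // hundreds of individual views without this absolutely crawls.
        .drawingGroup()
        .onReceive(timer) { _ in game.step() }
    }
}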

And third, I didn’t want it to just be “here’s some code, look how pretty,” I wanted to actually use the Playground format to show some cool stuff. This turned out to be the most frustrating part of the whole thing – the Swift Playgrounds app doesn’t actually support creating a PlaygroundBook, and the Xcode template wasn’t supported in the then-current version of Xcode.

But the end result? Oh, I’m quite happy with it. PlaygroundBooks are cool once you get past how un-documented they are. You can, to borrow a Jupyter turn of phrase, mix code and prose in a lovely, interactive way.

Screenshot of the 'Grid' page of the playground book.  The full text is at https://github.com/grey280/SwiftLife/blob/master/Swift%20Student%20Submission.playgroundbook/Contents/Chapters/Chapter1.playgroundchapter/Pages/Grid.playgroundpage/main.swift
Don’t worry, the real version (and some videos) are below.

Doing the actual writing was pretty fun. This is a concept I’ve spent a lot of time learning about, just because it captured my interest, and I wanted to share that in a fun way.

Overall, I’m quite happy with the result. If you’d like to see more, I’ve made recordings of the ‘randomized grid’ and ‘Wolfram rule’ pages running, and the actual playground is available on GitHub.

Categories
Technology

Reversi: A Postmortem

Spring quarter consisted of two things: beginning the internship, and an “intro to programming” course. Which, at first glance, seems like it would’ve been a “coast to an easy A” kind of thing for me, but that wasn’t my goal. And, to quote the Dean of UCI’s Graduate Division, “grad school is for you.”

So, at the start of the quarter, I sat down to figure out what my goals for this class would be, and came up with two things. The first, which I won’t be writing about, was to get a bit more teaching experience – in the vein of “guiding people to asking the right questions,” rather than just showing them the answers.

Second, and the topic of this post, was that I wanted to learn Vapor. The professor was kind enough to let me do this – instead of doing the course project (an online game of Reversi) in Node, I did it in Vapor.

As a learning exercise, I’d say it was… okay.

What Went Well

I love Swift as a language. The type system just fits in my head, it aligns incredibly well with how I think.

In this case, that meant representing all the events to the server, and the responses from the server, as enums.

It also meant that I could have a solid Game class that represented the whole game board, with some neat logic, like getters that calculate the current score and if the game has ended. Pair those with a custom Codable implementation, and you’ve moved the majority of the logic to the server.
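A rough sketch of the shape; the case names and payloads are illustrative, not the actual project code. (Swift 5.5 can now synthesize Codable for enums like these; at the time, it took a hand-written implementation, hence the ‘custom Codable’ above.)

enum ClientEvent: Codable {
    case join(name: String)
    case move(row: Int, column: Int)
    case forfeit
}

enum Player: String, Codable {
    case dark, light
}

final class Game: Codable {
    var board: [[Player?]] = Array(repeating: Array(repeating: nil, count: 8), count: 8)
    var currentPlayer: Player = .dark

    // Getters compute derived state on demand, so the server stays the
    // single source of truth for scoring and game-over detection.
    var score: [Player: Int] {
        var counts: [Player: Int] = [.dark: 0, .light: 0]
        for row in board {
            for case let piece? in row {
                counts[piece, default: 0] += 1
            }
        }
        return counts
    }

    var hasEnded: Bool {
        // Simplified: the real rules also end the game when neither player can move.
        board.allSatisfy { row in row.allSatisfy { $0 != nil } }
    }
}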

… and What Didn’t

The fact that I’m representing events to and from the server as enums, instead of using Vapor’s routing system, was a result of tacking on another thing I wanted to learn about, and trying to loosely hew to the nominal course objectives. The official version of the project used WebSockets for all the communication. Vapor supports WebSockets. Great combo, right?

Well, sure, but it meant I did almost nothing with the actual routing. Instead I re-implemented a lot of it by hand, and not in a very clean way. Vapor doesn’t scope things the way I expected – based on some experimentation, it instantiates a single copy of your controller class and reuses it, rather than having one per connection. So instead of having nice class-level storage of variables, and splitting everything up into functions with the main one handling routing, it all wound up crammed into the main function. Just so I could maintain the proper scope on variables. I’m still not happy about it.
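For the curious, the shape of it was roughly this (Vapor 4 syntax; the event handling is compressed down to a stub):

import Vapor

func routes(_ app: Application) throws {
    // One WebSocket route, and everything lives inside it: the closure
    // is the only per-connection scope you get.
    app.webSocket("game") { request, socket in
        var playerName: String? = nil // per-connection state, stuck at this level

        socket.onText { socket, text in
            guard let data = text.data(using: .utf8),
                  let event = try? JSONDecoder().decode(ClientEvent.self, from: data) else { return }
            // Hand-rolled routing: a switch over the event enum.
            switch event {
            case .join(let name):
                playerName = name
                socket.send("welcome, \(name)")
            case .move, .forfeit:
                socket.send("not implemented in this sketch")
            }
        }
    }
}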

What’s Next

I’d like to keep tinkering with Vapor. When I’ve got the time, I have a project in mind where it seems like a good fit.

In the meantime, I hope their documentation improves a lot. The docs they have are good tutorials, and cover their material well; they also, it feels like, leave out the lion’s share of the actual framework. By the end of the project, I’d given up on the docs and was just skimming through the source code on GitHub, trying to find the implementation of whatever I was trying to work with. (This, by the way, doesn’t work with Leaf, the templating engine – the docs are basically nonexistent, and the code is abstracted enough that you can’t really skim it, either.)

Complaints aside, I still like Vapor. I picked up a book on the framework, which seems like a pretty good reference on the topic.

And hey, it was a neat little project. (The JavaScript is a disorganized mess, but it’s also aggressively vanilla – while the rest of the class was learning about NPM, I decided to see how far I could get with no JS dependencies whatsoever. Answer: very.) Check it out, if you’d like:

Categories
Technology

State of the Apps 2019

Inspired by CGP Grey’s post that started a Cortex tradition, here’s the current state of my phone:

This is… a work in progress. I got this phone in September, and while it’s been on my mind to do a full reorganization, I haven’t had time to do a full “tear it all down and start from scratch” process. The top two rows, especially, are very temporary — for the first time since iOS 7 came out, I’ve disabled Reduce Motion, and the parallax makes the fake invisible icons trick look terrible.
So rather than go through things in top-to-bottom, right-to-left order, I’m just going to talk about them in whatever order strikes my fancy.

  • Things remains my task management app of choice. I love it across all platforms, and happily recommend it to anyone who’s looking for something more robust than Reminders or a list in Notes. For me, it strikes the right balance of features without getting too heavy, and while I’ve got one or two things I’d like to see added, I have no great complaints.1
  • FoodNoms has been a very nice addition – it replaced Calory, which had replaced MyFitnessPal, which had replaced Lose It!. I’ve got a long history of tracking food, and while I quite liked Calory, FoodNoms is the first time I’ve gone “ah, never mind, don’t need this” and tossed out my notes on how I would build a food-tracking app. I haven’t yet gone for the subscription, because it just doesn’t have any features that interest me, but based on the rate of development, I’m expecting to make that change within the next few months.
  • Timery is another stellar addition. It’s in that same category as FoodNoms — I had some sketches started of how I’d make an app in this category, and Timery made them completely irrelevant. The last two updates have added some truly excellent Shortcuts integrations — the last one added conversational shortcuts, so I can now just say “Hey Siri, Toggl” and talk through starting a timer with a specific project and description, or kick off a few frequently-used ones with a short phrase. The newest update added some more programmatic stuff, and I’m planning to take some time over Christmas weekend to rebuild my old Toggl shortcuts, based on Federico Viticci’s examples, with Timery instead of custom web API calls.
  • Toolbox Pro – speaking of Shortcuts, Toolbox Pro is a neat little collection of Shortcuts actions. I’m most excited about the Variables feature, which I’m hoping I can use to improve some of my daily automation stuff.
  • Mail has replaced Airmail. I’d been vaguely looking for a replacement for Airmail, because it had a nasty habit of crashing all the time, and then they did a terrible job of switching to a new business model, and I threw my hands up in the air and decided to try the system default. It’s been working perfectly on iOS; on macOS, I’ve got a cobbled-together system using BetterTouchTool that sorta gives it real keyboard shortcuts,2 and a launchd script that relaunches it when it crashes.3
  • Day One remains my stalwart for journaling, but I’ve been slowly increasing the things I use it for. It’s my archive of Instagram, where I store my sketches, and the app I used to record some interviews I did for class.4
  • Ulysses is where I’m writing this article! It’s still my go-to for any long-form writing, and I love it. I haven’t yet made much use of their recent ability to store Ulysses files in Dropbox (or other arbitrary locations on disk), but I do have a collection of plain-old-markdown files that I edit in Ulysses on my Mac and Sublime Text on Windows.5
  • Reeder is a continuation and an addition all at once — I’ve been using it on my Mac for a while, but didn’t have it on iOS. I hit the maximum number of feeds on the free level of Feedly, and was extremely unimpressed with their paid offerings; I considered making a second account to keep syncing, but decided that was sorta rude to them, and instead opted to not have sync at all. That worked for a while, and then I got a Synology, and after setting it up as a Plex server, spent some time looking into RSS server options. At the moment, I’m using TT-RSS with a plugin for Fever support, but if anyone knows of something that’s easy to set up and has Google Reader API support, I’d appreciate it.6
  • Dark Sky’s recent redesign has me pretty happy. If they let me reorder the types of information, I’d be happier, but the clarity of the “when is it going to rain” charts is still excellent.
  • Overcast remains my podcast app of choice. Podcasts have been a great way to help me establish a gym habit — I established a podcast habit, and then decided that podcasts are things I can only listen to while driving, cleaning, or working out. (If you want podcast recommendations: Cortex, Do By Friday, ATP, 99PI, and MBMBAM are my mainstays.)
  • Strong, speaking of a gym habit, is the driving force of my time at the gym. A couple of my friends have been helping me out with designing actual workout programs to do, but Strong is where I put those in. It’s easy to use, remembers all the numbers so I don’t have to, and has instructions, often accompanied by images or GIFs, on a lot of exercises.
  • Streaks is where I track all my habits, from “did you remember to take your meds” to “do some writing for your blog” to “have you gone to the gym enough times this week?” It’s very good at what it does, and I’m still a fan.
  • Fluidics is a bit self-serving to include here, but I use it all the time. I’m planning to update it eventually — I’d like proper Dark Mode support, at the very least — but it’s hard to find the time.
  • Wallet has gotten more and more important over time, though not as fast as I’d like it to. Let me put my driver’s license in there, already. Apple Card is slowly taking over as my main credit card, Apple Cash is even more handy with the cash back in there, and it’s not too hard to make your own pass of your gym membership.
  • Sleep Cycle is possibly on its way out; I’m strongly considering getting a Beddit, although I need to do more research — does it have the ‘smart alarm’ feature? How accurate is it? Is Apple going to kill the app soon? Lots of questions.
  • Dark Noise is a new addition in the past few days; I’ve been switching from a ‘sleep’ playlist to white noise in an effort to get Apple Music’s recommendations to not be entirely useless.7 I tried to use Sleep Cycle’s white noise feature for a while, but it assumes that I want the white noise to stop after a while, which is absolutely not the case. Dark Noise’s actual noise is a bit less interesting overall, which is possibly the point, and the app itself is delightfully well-made.

  1. I was, admittedly, tempted by OmniFocus, because OmniFocus for Web means I could have a single unified system across my Mac, iOS devices, and work PC, but it’s still just too much for my needs. And expensive. 
  2. Listen, Apple: you can either comp me the cost of getting a new keyboard that’s got the Inverted T arrangement for the arrow keys, or you can let me go from message to message using j/k. (And even if they did give me a free keyboard, I’d still complain; I’ve been using j/k to get around for two decades now, and it’s just easier.) 
  3. And on Windows, both Outlook and Windows Mail crash frequently, too; apparently IMAP, despite being 30-something years old, is still an unsolved problem? 
  4. In typing this, I’ve just realized that I think I’m using Day One the way Evernote wants to be used. Huh. 
  5. All synced by Git, because… why not? 
  6. The Fever API neglected any form of subscription management, and needing to pull up the TT-RSS frontend in a web browser whenever I want to add or remove a subscription just feels silly. 
  7. Fun fact: the “use listening history” setting that all Apple Music clients have? Doesn’t appear to do anything. Neither does “stop recommending music like this.” 
Categories
Programming Technology Tools

Automated Playlist Backup With Swift

I mentioned in my post about scripting with Swift that I’d been working on something that inspired this. Well, here’s what it was: a rewrite of my automated playlist backup AppleScript in Swift. That version ran every hour… ish. Partly that’s because launchd doesn’t actually guarantee scheduling, just ‘roughly every n seconds’, and partly it’s because the AppleScript was slow.1
Then I found the iTunesLibrary API docs, such as they are, and thought “well, that’d be a much nicer way to do it.”
And then I remembered that Swift can be used as a scripting language, cracked my knuckles, and got to work. (I also had some lovely reference: I wrote up my very basic intro post, but this post goes further in depth on some of the concepts I touched on.)
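The result is a script along these lines; a condensed sketch of the real thing, with the markdown formatting and error handling trimmed down:

#!/usr/bin/env swift
import Foundation
import iTunesLibrary

// Argument 1: the directory the backups should land in.
// (You may need to pass -F /Library/Frameworks so swift can find the framework.)
let outputDirectory = URL(fileURLWithPath: CommandLine.arguments[1], isDirectory: true)
let library = try ITLibrary(apiVersion: "1.0")

for playlist in library.allPlaylists {
    var lines = ["# \(playlist.name)", ""]
    for item in playlist.items {
        let artist = item.artist?.name ?? "Unknown Artist"
        let album = item.album.title ?? "Unknown Album"
        lines.append("- \(item.title) – \(artist) – \(album)")
    }
    let file = outputDirectory.appendingPathComponent("\(playlist.name).md")
    try lines.joined(separator: "\n").write(to: file, atomically: true, encoding: .utf8)
}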

Not the best API I’ve ever written, but not bad for something I threw together in a few hours. And I had fun doing it, more so than I did with the AppleScript one.
Oh, and it’s much faster than the AppleScript equivalent: this runs through my ~100 playlists in under a minute. So now I have it run every 15 minutes.2
(The configuration for launchd is about the same, you just replace the /usr/bin/osascript with the path to the Swift file, and make the second argument the full path to the directory where you want your backups going. See the original post for the details.)
I’m a bit tempted to turn this into a macOS app, just so I can play around with SwiftUI on macOS, and make it a bit easier to use. Of course, by ‘a bit tempted’ I mean ‘I already started tinkering,’ but I doubt I’ll have anything to show for a while — near as I can tell, SwiftUI has no equivalent to NSOutlineView as of yet, which makes properly showing the list a challenge. Still, it’s been fun to play with.


  1. I was going to cite this lovely resource, but since that website was built by someone who doesn’t understand the concept of a URL, I can’t link to the relevant section. Click ‘Configuration,’ then the ‘Content’ thing that’s inexplicably sideways on the left side of the screen, and ‘StartInterval’ under ‘When to Start’. 
  2. I’m also looking at the FSEvents API to see how hard it would be to set it up to run whenever Music (née iTunes) updates a playlist, but that… probably won’t happen anytime soon. 
Categories
Programming Technology

Swift Scripting

I’m a bit of a fan of Swift, though I don’t get to tinker with it nearly as much as I’d like. Recently, though, I did some tinkering with Swift as a scripting language, and thought it was pretty fun! (I’m planning another blog post about what, exactly, I was trying to do later, but for now just take it as a given.)
The most important step is to have Swift installed on your machine. As a Mac user, the easiest way is probably just to install Xcode, but if you’re looking for a lighter-weight solution, you can install just the Swift toolchain. Swift is also available for Ubuntu, which, again, takes some doing. If you want Swift on Windows… well, it’s an ongoing project. Personally, I’d say you’ll probably have better luck running it in Ubuntu on the WSL.
Alright, got your Swift installation working? Let’s go.
Step 1: Make your Swift file. We’ll call it main.swift, and in true Tech Tutorial fashion, we’ll keep it simple:

print("Hello world!")

Step 2: Insert a Magic Comment in the first line:

#!/usr/bin/env swift
print("Hello world!")

Step 3: In your shell of choice, make it executable:

$ chmod +x ./main.swift

Step 4: Run!

$ ./main.swift
> Hello world!

No, really, it’s that simple. The magic comment there tells your interpreter ‘run this using Swift’, and then Swift just… executes your code from top to bottom. And it doesn’t have to be just function calls — you can define classes, structs, enums, whatever. Which is the real benefit to using Swift instead of just writing your script in Bash; object-oriented programming and type safety are lovely, lovely things.

My next post is going to go into some of the more interesting stuff you can do, with a lovely worked example, but for now I’ll add a couple other things:

  • By default, execution ends when it gets to the end of the file; at that point, it will exit with code 0, so Bash (or whatever) will assume it worked correctly. If you want to exit earlier, call exit(_:) with the code you want. exit(0) means “done successfully,” while any other integer in there will be treated as an error.1
  • print(_:) outputs to stdout, which can be piped using |. If you want to output an error (to be piped with 2>, or similar) you need to import Foundation, and then call FileHandle.standardError.write(_:).2
  • To explicitly write to stdout, it’s FileHandle.standardOutput.write(_:).
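Put together, a tiny script exercising all three points might look like:

#!/usr/bin/env swift
import Foundation

let arguments = CommandLine.arguments
guard arguments.count > 1 else {
    // FileHandle.write(_:) takes Data, not String, hence the conversion.
    FileHandle.standardError.write("usage: \(arguments[0]) <name>\n".data(using: .utf8)!)
    exit(1) // nonzero exit code: the shell sees a failure
}
print("Hello, \(arguments[1])!") // plain print goes to stdout
exit(0) // explicit success; falling off the end does the same thing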

  1. Which is useful if your script is going to be called programmatically. ./main.swift && echo "It worked!" will print “Hello world” and then “It worked!” with exit(0), but just “Hello world” if you add exit(1) to the end of the file. 
  2. And note the types here – this expects Data, not String, so to write a string, you need to convert it by adding .data(using: .utf8)! 
Categories
Technology Tools

Automatic OCR with Hazel: The Easy Way

I have previously written about how to run OCR (Optical Character Recognition) on a PDF using Hazel and… a complicated pile of Python scripts and other software. Since I wrote that post, several of those pieces of software have been updated, and the core component has been, apparently, entirely abandoned.
Recently, while I was waiting for yet another keyboard replacement on my MacBook, I took another look at the OCR thing and found that there’s a much easier way available: OCRmyPDF.
It’s easy to install, assuming you’ve already got brew: brew install ocrmypdf
From there, it’s just a single action in Hazel: “Run embedded shell script”, with the script being ocrmypdf "$1" "$1" (passing the same path twice makes ocrmypdf overwrite the file in place).
Admittedly, you can use some of their many settings to get something a bit nicer than just OCR; personally, I’m using --rotate-pages --deskew --mask-barcodes – the first two to help with variations in the input, because I sometimes use a flatbed scanner, and the third to help Tesseract, which can have issues with barcodes.
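With those flags, the embedded script comes out as:

ocrmypdf --rotate-pages --deskew --mask-barcodes "$1" "$1"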
I’ve also paired it with a couple additional actions, just to keep everything organized:

I also took the time to stop using Dropbox as the go-between for my scanner and the Mac running Hazel; I’d forgotten that the scanner has a USB port. Plug in a cheap flash drive, and it’s available as a (very slow) file server. Mount the drive, add it as a Login Item so it’ll auto-mount on boot, and you can set Hazel automations to run right there. I’m not OCRing them there, though — like I said, it’s a very slow server, so it tags them ‘for OCR’ and moves them to my desktop.1


  1. With iCloud Drive handling my desktop, I’ve found it to be a pretty great ‘intake’ folder for all of my Hazel automations. It’s quite nice to be able to save a PDF from my phone, add a tag, and watch it disappear again as it’s auto-sorted, or throw a PDF on my desktop with a tag and see it pop in and out as the OCR runs. 
Categories
Technology Tools

Automatic Playlist Backup

You may have seen my monthly playlist posts on here; I put those together with a Shortcut that grabs the playlist, runs through all the songs, and makes a spirited attempt to fill in all the links off the iTunes Store Search API without hitting their mysterious rate limits.1
It’s not the be-all end-all, though — I’ve been wanting more and more lately to start making more and smaller playlists, things to match different moods. Y’know, the way normal people do playlists.
But, of course, I’m me, and I want to have the history of my music tastes, because, hey, sometimes you feel like reminiscing.
So, what to do? Well, I’ve done some work with the iTunes Library XML file, and while it’s sorta true that just wrapping that in, like, Git or something for version control could work, there are three problems with that:
1. iTunes is a weird, weird piece of software, and I don’t want to mess with its files too much.
2. The result is not at all human-readable.
3. It isn’t an excuse to learn something new.

So, what else can I do? Well, I’ve done a very light bit of tinkering with AppleScript,2 so I know it can interact with iTunes pretty well; there’s gotta be a way to do it there, right?
There is! I’ll share the script in a moment, but the functionality I wanted was “clear out the folder I give you, replicate my playlist hierarchy as directories, and spit out each playlist as a markdown file listing the title, artist, and album for each track, then commit the changes to a git repository.”
It took a while to get working — I’ve learned that AppleScript’s repeat with x in y loop is hilariously slow, unless you change it to repeat with x in (get y). I’ve also found out that the way it works with paths is super annoying, and that while it can write to a file, it can’t conceptualize creating a directory; the workaround is shelling out, something like do shell script "mkdir -p " & quoted form of thePath.
Now, here’s the script: I’ve left a couple {replace me} type things where you should fill in variables – namely, the path to your home directory (or wherever else you want it), and your own username, to fix some permission issues that can crop up.3

But wait, there’s a caveat: it’ll fail if the folder you gave it isn’t a git repository. Considering that I wanted this as a ‘set it and forget it’ sort of thing, I figured it wouldn’t be worth the effort to write a bunch of conditional code to do the setup. Do it yourself: git init && touch temp.txt && git add temp.txt && git commit -m "Initial commit" takes care of all you need.4
Oh, and if you want it to be pushing the changes somewhere, because you’re paranoid and want everything in someone’s cloud, at least, add the remote and set it as the default upstream: git remote add origin {remote URL} && git push --set-upstream origin master

Set It and Forget It

So that’s pretty neat, but it isn’t really “set it and forget it,” now, is it? You’ve gotta open up Script Editor, pull up the script, and run it every time you want it to back up your playlists. Possibly workable for some people, but I don’t have a home server for nothing. Let’s make this truly automated.
From my prior experience with AppleScript, I know that you can set it off through a shell script by way of /usr/bin/osascript, so my first thought was to add a cron job. After a bit of research, though, I found out that Apple would prefer we use launchd instead, so I set about figuring out how to do that.
Now, if this wasn’t all an excuse to learn how to do something, I’d probably have just bought one of the GUI clients for launchd; Lingon looks pretty nice, and seems to work well.5
The process for writing your own launchd process is actually pretty simple: create a .plist file containing some XML, add it to the launchd queue with launchctl, and you’re off to the races!6
(Hint: if you want an easier way to see if your script runs than waiting and checking git log, you can add a line to the start of the AppleScript: display notification "Running playlist export".)
So, creating the XML: you want it to live in ~/Library/LaunchAgents/, and the convention is the usual reverse-TLD. (You can also use local.{your username}.{your script name}, but I’m so used to using net.twoeighty. in bundle identifiers that I just went with that.)
The important parts are the ProgramArguments array and the StartInterval integer. For ProgramArguments, give it the path to osascript,7 and as your second argument, the full path to the .scpt of the AppleScript.
Then, set the StartInterval to the number of seconds between runs; I’m using 3600, because hourly change tracking seems frequent enough for my purposes.
The result:
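Something like this, with the .scpt path swapped for wherever yours actually lives:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>net.twoeighty.backupPlaylists</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>/Users/{your username}/Scripts/backupPlaylists.scpt</string>
    </array>
    <key>StartInterval</key>
    <integer>3600</integer>
    <key>StandardErrorPath</key>
    <string>/tmp/backupPlaylists.err</string>
    <key>StandardOutPath</key>
    <string>/tmp/backupPlaylists.out</string>
</dict>
</plist>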

(You can skip the StandardErrorPath and StandardOutPath – they help a little with debugging, more so if you’re running a full shell script and not a wrapper on an AppleScript.)
Finally, add it to the queue:

launchctl load ~/Library/LaunchAgents/net.twoeighty.backupPlaylists.plist

And there you go – every hour, your iTunes playlists will get backed up to your Git repo, and you’ll have a nice history of your music tastes over time.


  1. iTunes Search is a really fun API to use, because via Shortcuts you only get a single input to it, and it is really bad at finding anything. Seriously — try to find anything off the top charts. As far as iTunes Search is aware, Billie Eilish doesn’t exist. 
  2. In lieu of Shortcuts having a way to set the volume on a HomePod, I’ve achieved a similar result with “run SSH script: osascript -e tell iTunes ...”. 
  3. Related: don’t put this anywhere with weird macOS access control things. Y’know, places like “Documents”, “Desktop”, anywhere in iCloud Drive or Dropbox, or even “Downloads”, which apparently is a much worse work directory than I thought it was. I eventually configured it to run out of and into my Public directory, because I figured that’d be easier than trying to mess around with the permissions somewhere else. 
  4. Without a file there, the git rm -rf . && git clean -fxd bit at the beginning is unhappy. 
  5. I used the ‘free trial’ version as a viewer for my works in progress; I figured if I’d done something really wrong, it’d complain about it being an invalid file or something. 
  6. He said, glossing over the couple hours of “fight me, macOS, why isn’t this working” 
  7. Probably /usr/bin/osascript, but you can use which osascript in Terminal to check. 
Categories
Programming Technology

Publishing a Private Angular Library via Git

So, you’ve built yourself a nice new Angular library and you’re excited to put it to use in your apps! It’s an exciting time. But wait: the code you used is proprietary, and you can’t upload it to NPM.1
Good news: thanks to some features of Git, NPM, and most Git hosts, there’s a way to bypass NPM, and you don’t even need to set up your own repository. Sound good? Let’s go.
Now, I’m assuming that you’ve already (a) created your Angular library, and (b) have it in Git. If you haven’t done (a), there’s a pretty good guide on Angular’s site; if you haven’t done (b), allow me to evangelize source control to you: Git is awesome, and you should be using it. This book, available online for free, is an excellent getting-started guide.
So, you’ve got a version-controlled library; how do we make it available?
1. Build the library. ng build {YourLibrary} spits out your library as a ready-made NPM package.
2. Track the built files. git add dist/{your-library}. If you’ve got that in your .gitignore, you’ll need to remove it, or git add -f. I’d recommend the former; you’ll need to do this every time you update the library. Wrap it up with git commit and a nice message explaining what you’ve changed.
3. Set up a second repository. This is where the NPM package version of your repository will live. Leave it empty; we’ll push to it in a moment.
4. Push the subtree. This is the magic part: we’re going to treat the built files as their own separate repository. git subtree push --prefix dist/{your-library} {path to the second repository} {branch}2
5. Tag the release with a semantic version number.3 While it’s possible to do this via the command line, it’s not fun, so I’d recommend using the GUI for your Git host of choice.
6. Generate a read-only token for the second repository. In GitLab, this is under Settings > Repository > Deploy Tokens. Configure it as you’d like, but be sure to enable read_repository.
7. Add the library to your app’s package.json. Give it the Git address for your second repository, and follow it up with either a tag or branch name to pull from. (For example: "your-library": "git+https://{GitLab token username}:{GitLab token password}@gitlab.com/{your username}/{your-second-repo}#master would pull the latest version from master.)
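Pulled together, a release cycle looks something like this (using the example names from the footnotes, and master as the branch):

ng build my-own-library
git add -f dist/my-own-library
git commit -m "Build the library"
git subtree push --prefix dist/my-own-library https://github.com/grey280/my-own-library.git master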

Et voilà; you’ve got a private Angular library, ready for use.


  1. Or maybe you’ve got issues with the fact that NPM is a private company, and can remove, un-delete, or hand over your packages without your permission
  2. i.e.: git subtree push --prefix dist/my-own-library https://github.com/grey280/my-own-library.git master 
  3. This isn’t strictly necessary – I’ll explain a bit more in step 7. 
Categories
Programming Technology

Wrapping UserDefaults

UserDefaults, formerly NSUserDefaults, is a pretty handy thing. Simply put, it’s a lightweight way of storing a little bit of data — things on the order of user preferences, though it’s not recommended to throw anything big in there. Think “settings screen,” not “the image cache” or “the database.” It’s all based on the Defaults system built into macOS and iOS,1 and it’s a delightfully efficient thing; from the docs:

UserDefaults caches the information to avoid having to open the user’s defaults database each time you need a default value. When you set a default value, it’s changed synchronously within your process, and asynchronously to persistent storage and other processes.

How handy is that! All the work of writing to disk, abstracted away just like that. Neat!
Now for the downside: it’s got a very limited range of types it accepts.2 Admittedly, one of these is NSData, but it can be a bit annoying to do all that archiving and unarchiving all the time.
One solution I use is writing a wrapper on UserDefaults. Swift’s computed properties are a very neat way to do it, and any code you write elsewhere in your project will feel neater for it.
The basic idea is this, with storedSetting standing in for whatever preference you’re keeping:
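var storedSetting: Bool {
    get {
        return UserDefaults.standard.bool(forKey: "storedSetting")
    }
    set {
        UserDefaults.standard.set(newValue, forKey: "storedSetting")
    }
}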


There you go: you’ve got an easy accessor for your stored setting.
Of course, we can make this a lot neater; we’ll start by wrapping it up in a class, and make a couple tweaks while we do that:
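A sketch of where that lands:

class Settings {
    // Tweak one: a single reference to the defaults database.
    private static let defaults = UserDefaults.standard
    // Tweak two: the key lives in a variable, not a scattered string literal.
    private static let storedSettingKey = "storedSetting"

    static var storedSetting: Bool {
        get {
            return defaults.bool(forKey: storedSettingKey)
        }
        set {
            defaults.set(newValue, forKey: storedSettingKey)
        }
    }
}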

First, we made a variable to point at UserDefaults.standard instead of doing it directly. This isn’t strictly necessary, but it makes things a lot easier to change if you want to switch to a custom UserDefaults suite later.3
Secondly, we pulled the string literal out and put in a variable instead. Again, this is more about code maintainability than anything else, but that’s certainly a good thing to be working for. Personally, I tend to wrap all my keys up in a single struct, so my code looks more like this (with a second, made-up key thrown in to show the shape):
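private struct Keys {
    static let storedSetting = "storedSetting"
    static let anotherSetting = "anotherSetting"
}

static var storedSetting: Bool {
    get {
        return defaults.bool(forKey: Keys.storedSetting)
    }
    set {
        defaults.set(newValue, forKey: Keys.storedSetting)
    }
}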

That’s a matter of personal taste, though.
You might also have noticed that I made both the keys and the UserDefaults.standard private — I’ve set myself a policy that any access of UserDefaults that I do should be via this Settings class, and I make it a rule that I’m not allowed to type UserDefaults anywhere else in the app. As an extension of that policy, anything I want to do through UserDefaults should have a wrapper in my Settings class, and so private it is: any time I need a new setting, I write the wrapper.
There are a few more implementation details you can choose, though; in the example above, I made the accessors static, so you can grab them with Settings.storedSetting. That’s a pretty nice and easy way to do it, but there’s a case to be made for requiring Settings to be initialized: that’s a great place to put in proper default values.4
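That version might look like this; register(defaults:) only supplies fallbacks, and never overwrites anything the user has actually set:

class Settings {
    private let defaults = UserDefaults.standard

    init() {
        defaults.register(defaults: [
            "storedSetting": true
        ])
    }

    var storedSetting: Bool {
        get {
            return defaults.bool(forKey: "storedSetting")
        }
        set {
            defaults.set(newValue, forKey: "storedSetting")
        }
    }
}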

In that case, accessing settings could be Settings().storedSetting, or
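// keep an instance around, then read off of it:
let settings = Settings()
let isEnabled = settings.storedSetting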

You could also give yourself a Settings singleton, if you like:
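class Settings {
    static let shared = Settings()
    private let defaults = UserDefaults.standard

    private init() {
        defaults.register(defaults: ["storedSetting": true])
    }

    var storedSetting: Bool {
        get {
            return defaults.bool(forKey: "storedSetting")
        }
        set {
            defaults.set(newValue, forKey: "storedSetting")
        }
    }
}

// accessed as Settings.shared.storedSetting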

I don’t have a strong feeling either way; singletons can be quite useful, depending on context. Go with whichever works best for your project.
And finally, the nicest thing about writing this wrapper: you can save yourself a great deal of repeated code.
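Take the archiving dance from earlier: wrap it once, and no other file in the project has to know it exists. (A sketch; favoriteColor is a made-up setting, and the default is arbitrary.)

static var favoriteColor: UIColor {
    get {
        if let data = defaults.data(forKey: Keys.favoriteColor),
           let color = try? NSKeyedUnarchiver.unarchivedObject(ofClass: UIColor.self, from: data) {
            return color
        }
        return .blue // the default return
    }
    set {
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: newValue, requiringSecureCoding: true) {
            defaults.set(data, forKey: Keys.favoriteColor)
        }
    }
}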

Or, if you don’t want to have a default return, make it optional; it’s not much of a change:
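static var favoriteColor: UIColor? {
    get {
        guard let data = defaults.data(forKey: Keys.favoriteColor) else { return nil }
        return try? NSKeyedUnarchiver.unarchivedObject(ofClass: UIColor.self, from: data)
    }
    set {
        guard let newValue = newValue,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: newValue, requiringSecureCoding: true) else {
            defaults.removeObject(forKey: Keys.favoriteColor)
            return
        }
        defaults.set(data, forKey: Keys.favoriteColor)
    }
}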

You can also do similar things with constructing custom classes from multiple stored values, or whatever else you need; mix and match to fit your project.
(Thoughts? Leave a comment!)


  1. If you’ve ever run defaults write from the Terminal, that’s what we’re talking about. 
  2. If it matters, it’s also not synced; the defaults database gets backed up via iCloud, but if you want syncing, Apple recommends you take a look at NSUbiquitousKeyValueStore
  3. If you want your preferences shared between your app and its widget(s), or between multiple apps, you need to create a custom suite; each app has its own sandboxed set of defaults, which is what UserDefaults.standard connects to. 
  4. UserDefaults provides default values, depending on types, but they may not be the same defaults that you want. If you want a stored NSNumber to default to something other than 0, you’ll need to do that initial setup somewhere. 
Categories
Technology

iOS Notification Routing

The other day I was thinking about the way iOS handles notifications; the new Do Not Disturb stuff in iOS 12 is a good start, but it’s still rather lacking. It’s a fun thought exercise: say you’re Jony Ive or whoever, and you’re setting out to redesign the way that notifications work, from a user standpoint.1 How do you make something that offers advanced users more power… but doesn’t confuse the heck out of the majority of your user base?
After a while dancing around the problem, I came to the conclusion that you don’t.23
Instead, imagine something more along the lines of Safari Content Blockers. By default, the existing system stays in place, but create an API that developers can use to implement notification routing, and allow users to download and install those applications as they so desire.4
Obviously, this would have some serious privacy implications — an app that can see all your notifications? But hey, we’re Jony Ive, and Apple has absolute control over the App Store. New policy: Notification routing apps can’t touch the network.5 And, to prevent any conflict of interest stuff, let’s just say that the routing apps aren’t allowed to post notifications at all.
Alright, we’ve hand-waved our way past deciding to do this, so let’s take a look at how to do it, shall we?
Let’s start with the way notifications currently work. From UNNotificationContent we can grab the properties of a notification:
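In sketch form:

import UserNotifications

// What a delivered notification already carries (iOS 12-era):
func summarize(_ content: UNNotificationContent) {
    print(content.title, content.subtitle, content.body) // the visible text
    print(content.badge ?? 0)                            // app-icon badge value
    print(content.sound == nil ? "silent" : "has sound")
    print(content.categoryIdentifier)                    // action category
    print(content.threadIdentifier)                      // iOS 12 stack grouping
    print(content.userInfo)                              // arbitrary payload
}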


For proper routing, we’ll probably want to know the app sending the notification, so let’s add the Bundle ID in there, and we’ll also give ourselves a way to see if it’s a remote notification or local. Something like:
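// Hypothetical; per footnote 6, these would have to arrive on a new
// system-provided class rather than on UNNotificationContent itself.
class UNUserNotification: UNNotificationContent {
    var bundleIdentifier: String = "" // e.g. "com.apple.MobileSMS"
    var isRemote: Bool = false        // true for push, false for local
}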

Alright, seems nice enough.6
Next up, what options do we want to have available?
1. Should the notification make a sound?7
2. Should the notification vibrate the phone?
3. Should the notification pop up an alert, banner, or not at all?
4. If the user has an Apple Watch, should the notification go to the Watch, or just the phone?
5. Should the notifications show up on the lock screen, or just notification center?
6. Finally, a new addition, borrowing a bit from Android: which group of notifications should the notification go into?8

Alright, that should be enough to work with, let’s write some code.
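A sketch of the decision object; all the names here are guesses, not real API:

struct NotificationRoutingDecision {
    var playsSound: Bool
    var vibrates: Bool
    var presentation: Presentation
    var deliverToWatch: Bool
    var showOnLockScreen: Bool
    var group: String? // nil means "leave it ungrouped"

    enum Presentation {
        case alert, banner, none
    }
}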


Not a complex object, really, and still communicating a lot of information. I decided to make the ‘group’ aspect an optional string — define your own groupings as you’d like, and the system would put notifications together when the string matches; the string itself could be the notification heading.9
And with that designed, the actual routing could just be handled by a single function that an application provides:
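// Again hypothetical: the single entry point a routing app would implement.
protocol NotificationRouter {
    func route(_ notification: UNUserNotification) -> NotificationRoutingDecision
}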

And with that, I’d be free to make my horrifying spaghetti-graph system for routing notifications, and the rest of the world could make actually sensible systems for it.
Thoughts? There’s a comment box below, I’d love feedback.


  1. I haven’t done much work with the UserNotification framework, so I’m not going to be commenting on that at all. 
  2. I spent a while mentally sketching out a graph-based system, somewhere between Shortcuts and the pseudo-cable-routing stuff out of Max/MSP, but realized pretty quickly that that’d be incredibly confusing to anyone other than me, and would also look very out of place in the Settings app. 
  3. As a side concept, imagine that but implemented in ARKit. “Now where did I put the input from Messages? Oh, shoot, it’s in the other room.” 
  4. Unlike Safari Content Blockers, though, I think this system would work best as a “select one” system, instead of “as many as you like, they work together!” thing. Mostly because the logistics of multiple routing engines put you back in the original mess of trying to design data-flow diagrams, and users don’t want to do that. Usually. 
  5. I’d call this less of an ‘App Store’ policy and more of a specific entitlement type; if you use the ‘NotificationRouting’ entitlement in your app, any attempt to access the network immediately kills the application. 
  6. Of course, those last two additions wouldn’t be things that you’d be able to set while building a UNNotificationContent object yourself, so perhaps we should be writing this as our own class; UNUserNotification perhaps? 
  7. We’ll assume that setting notification sounds is handled somewhere else in the system, not by our new routing setup. 
  8. This would be at a higher level than iOS 12’s new grouped notifications (the stacks), more like the notification channels in Android: categories like ‘Work’, ‘Family’, ‘Health’, and so on. 
  9. Since we’re Jony Ive, and everything has to be beautiful, we’re presumably running it through some sort of text normalization filter so people don’t have stuff going under the heading “WOrk” 
Categories
Technology Tools

Automatic OCR with Hazel

I recently got a copy of Hazel and have been doing a bit of tinkering around with various ways to automate my file management. Because, y’know, I can do it by hand, but why would I when I can make a computer do it for me? That’s the whole point of computers, after all.
I have a great deal of PDFs — something about scanning every paper, handout, receipt, or bit of mail I’ve received in the past six years or so does that. And if you have a commercial-grade scanner, it can be pretty easy to automate that stuff with Hazel, as the scanner will run everything it scans through Optical Character Recognition, and the PDF you’ll get will be nicely searchable.1
Unfortunately, the scanner I’ve got, while a pretty good one, is in a different price tier than the ones that’ll do the automatic OCR, so I needed a way of doing that after the fact.
There are some guides to doing that, such as this one,2 but they tend to require either Acrobat Pro or PDFPen Pro, which both have price tags above the “a couple hours of tinkering and no money” that I was hoping to spend on this project.
Throw a few computer science keywords on what you’re Googling, though, and you’ll find stuff that’s more in that vein.3 So, compiled here after I used Chase as a guinea pig, a guide to putting together automated OCR for free.4

Prerequisites

Before we can automate OCR, we need a few things installed. Open up Terminal, and let’s go.
sudo easy_install pip
(For those of you who didn’t put a few years into classes on computer science, I’ll try to explain as I go along. That first word, sudo, means “super user do”, basically; it’s the Admin Override for terminal commands. Be careful with it, you can make quite a mess tinkering with it. The next bit, easy_install, is part of the version of Python that comes default with macOS. pip is what we’re telling easy_install to install; ironically, pip is the modern version of easy_install.5)
The first time you use sudo in a Terminal session, you’ll be prompted for your password; if you’re not an administrator on the mac you’re using, you’ll need an administrator password. That’s a good opportunity to check with the administrator if this is something you should be doing at all.
Once pip is done installing, we’re going to get another installation helper, Homebrew:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Again, this is just installing a piece of software, Homebrew. (No sudo this time; Homebrew refuses to run as root.)

Components

Now that we’ve got the infrastructure built, we’re going to install the components that the OCR system uses.
brew install tesseract
brew install ghostscript
brew install poppler
brew install imagemagick
(If any of those fail, don’t rerun them with sudo; Homebrew won’t run as root. It’s usually a permissions problem with /usr/local, and brew doctor can help you track it down.)
For reference: Tesseract is the actual OCR engine, Ghostscript makes it easier to interact with the PDF format,6 Poppler is similarly PDF-related, and ImageMagick handles conversion between basically any types of images.
Finally, we’ll use pip to install a specific version of one more library:
sudo pip install reportlab==3.4.0
ReportLab is yet another PDF-related library, but version 3.5.0 has some compatibility issues with the OCR system.

Installation

Finally, we’ll get the actual thing that ties these all together:
sudo pip install pypdfocr
PyPDFOCR is a lovely open-source project that ties all these components together into a single thing. Once it’s installed, you can use it from the terminal:
pypdfocr {filename}, where you replace {filename} with the name of the PDF you want OCR’d.7 It’ll take a bit to run, but once it’s done, you’ll have a file (named {filename}_ocr.pdf) that contains, hopefully, the text of the document you scanned.89
Go ahead and test it; if you get an error about the file not being found, see if the file name or directory structure included a space. If it did, tweak the command a bit: instead of pypdfocr {filename} you’ll need to do pypdfocr "{filename}".
You may also get an error that mentions File "/Library/Python/2.7/site-packages/pypdfocr/pypdfocr_pdf.py", line 190… and a bit more after that. If it’s AttributeError: IndirectObject…, then you’ll need to tweak part of the code.10
cd /Library/Python/2.7/site-packages/pypdfocr
sudo nano pypdfocr_pdf.py
That’ll open up nano, a very lightweight text editor. Press control+W, type in orig_rotation_angle = int(original_page.get and hit return; this will take you to the line we want to edit. It’ll read orig_rotation_angle = int(original_page.get('/Rotate', 0)) — we want to change it to orig_rotation_angle = int(original_page.get('/Rotate', 0).getObject()) by adding .getObject() before the last close-paren.
Once you’ve done that, press control+X to exit, then Y to confirm saving, then hit return to keep the same filename. Try OCRing something again; it should work this time.

Using Hazel

Now all you need to do to have Hazel automatically OCR a PDF is, in the actions, add a “Run shell script” action, use “embedded script”, and in the ‘edit script’ bit, put in pypdfocr "$1".
Keep in mind, this doesn’t replace the PDF in place, it’ll create a copy with _ocr added to the end of the name. If you’d like the original to be deleted once it’s done, rather than having Hazel do it, just add a second line to the embedded script: rm "$1"
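That is, the whole embedded script is just:

pypdfocr "$1"
rm "$1"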
You’ll probably want another rule to move the OCR’d versions somewhere else; while you’re building that, you can also use the ‘rename’ action to remove the _ocr bit, just tell it to replace “_ocr” with “”.
Have fun automating!


  1. And, as a result, useable for Hazel sorting by way of the ‘contents’ filter. 
  2. I was hoping to link to Katie Floyd’s original post about it, but her website is down at the moment, so I guess I won’t be doing that. 
  3. Technically speaking, I think all I added was “site:github.com”, but that did the trick. 
  4. This assumes you have a Mac, since you’re working with Hazel, and that you’re willing to do a bit of tinkering in the terminal, which I also kinda assumed, since you’re working with Hazel. 
  5. I think that’s irony; I was a computer science major, not an English major. 
  6. “the Printable Document Format format” 
  7. Tip: you can type pypdfocr  (including the trailing space) and then drag-and-drop the PDF from Finder into the Terminal, and it’ll automatically fill in the filename. If any part of the path includes a space, though, it’ll fail, so for filenames or folders that contain spaces, do pypdfocr "{filename}" – type pypdfocr ", drop in the file, and then ", and then hit enter. 
  8. Caveat: Tesseract isn’t perfect, especially with regard to the formatting, so don’t expect this to give you a perfectly-formatted version of whatever you scanned. That said, the process is lossless: {filename}_ocr.pdf is built by taking the original PDF file and then adding an invisible text layer over the analyzed text, so you won’t lose any information by doing this, you just might not gain anything useful. 
  9. Note that it’ll spit {filename}_ocr.pdf out not necessarily where the original file was, but wherever the Terminal session currently is; if you’re unsure about where that is, you can use pwd to have it displayed, or just open . to open it in Finder. 
  10. Don’t ask me why this is all “you might have to do this”, because I genuinely don’t know why this problem only pops up some of the time. 
Categories
App Portfolio Technology

Fluidics 1.1: The Animation Update

The first major update to Fluidics is now available on the App Store!1 In all honesty, it was largely a ‘bug fixes and performance improvements’ update, but I’ve always hated when app updates list that, so I made sure to include a couple user-facing features so there’d be something fun to talk about, at least.
In this case, those features were animations. The most notable is the background – rather than being drawn once, the ‘water’ in the background is now animated, which I think makes the visual effect much nicer overall. Swiping between the three main pages of the app is also much smoother now; instead of a single ‘swipe’ animation being triggered by any swipe, it directly responds to your swipe, so you can change your mind about which direction to swipe halfway through, and it feels more like you’re moving things around, rather than switching pages.2
The big changes, though, are largely invisible; a whole lot of work on the internals to allow for future features I’m planning.3 The gist of it is that a lot of the internals of the app are now a separate library, which means I can share code between the widget and the main app without needing to copy-and-paste all the changes I make in one place to the other.
Past that, there were a couple little tweaks — the algorithm that calculates the water goal is a bit less aggressive with the way it handles workout time, and there’s now a little “this isn’t a doctor” disclaimer in the Settings page that I put there because the lawyer I don’t have advised that I do that.
And, the bit that turned into more of a project than I thought: VoiceOver support. VoiceOver, for those that don’t know, is one of the core accessibility features of iOS; when enabled, it basically reads the contents of the screen to the user, making it possible for visually-impaired people to use iOS. By default, any app built on UIKit has some support for VoiceOver, but the further you go from the default controls, the more broken that’ll get. The way Fluidics works, it was super broken; technically useable, but downright painful to do. After a day or two of vigorous swearing and arguing with the Accessibility framework, I’m proud to say that Fluidics is now VoiceOver-compatible.
If you’ve already got Fluidics on your phone, it’s a free update from the App Store.4 If not, the whole app is a free download from the App Store, and I’m hoping that you’ll enjoy using it. Leave a review or whatever; I’m trying not to be pushy about that.
Oh, and I’m in the process of updating the app’s website; I got such a good URL for it that I want it to look good to match.


  1. There was a bugfix update earlier, version 1.0.1, but that’s not at all exciting, so I didn’t bother writing anything about it. 
  2. If you’re curious, this involved rebuilding the entire interface, from three separate pages that’re transitioned between to a single page that’s embedded in a scroll view. 
  3. And no, I won’t be telling anybody what those are just yet; I don’t want to promise anything before I know for sure it’ll be possible. 
  4. In fact, it may have already been automatically updated — the easiest way to tell is to open the app and see if the water is moving or not. 
Categories
App Portfolio Technology

Open-sourcing Variations

Now that the whole concert is over, and I’ve finished going through approximately all of the WWDC sessions, I’ve decided that Variations won’t be receiving any further development — it wasn’t going to be enough of a priority for me to do it any justice, and I’d hate to half-ass it.1 The app will remain on the App Store, for now, though if it breaks in future iOS versions, I’ll probably pull it entirely. Instead, I’m releasing the source code, as-is; if you’d like to look through it, it’s right here.
I had fun building it, and I like to think that it does some interesting things with the implementations under the hood, so hopefully somebody can find some use from it.


  1. This is, hopefully, a hint about some of my other projects that are a higher priority; announcements of those will, of course, show up on this here blog. 
Categories
App Portfolio Technology

Fluidics

I made an app! I’m quite excited about it; this is, after all, the sort of thing I want to spend my career doing.

The app is called Fluidics, and it’s for tracking the amount of water you drink. As I mentioned a while back, I like to do a lot of tracking of what I’m eating and how much I’m drinking. That first part wasn’t too hard; there’s a variety of apps on the App Store for logging food, and after a while I was able to find one that wasn’t too bad.1 For water, though, nothing quite worked – Workflow came closest, but using it to do the sort of goal calculations I wanted was on the line between clunky and painful, and it’s such a general-purpose app that it felt visually lacking.
Eventually I remembered that I’m a computer science major, and why am I sitting around complaining about the dearth of options when I’ve basically got a degree in making the dang thing. Months of sketching, programming, swearing, and repeating the whole thing eventually led to this: what I hope is the easiest water-tracking app on the App Store to use.
It’s been a fascinating process. (Here, by the way, is where I’m going to take advantage of the fact that this is my blog for rambling and start talking about what it was like making it; if you’d like to get more information on the app, I’ve put together a rudimentary website, or you can skip straight to the ‘it’s free on the App Store’ part and give it a whirl.) As it turns out, there’s a whole lot of work involved in making an app; my original sketch was the widget and two screens. Those came together pretty quickly, but I realized that probably nobody would feel comfortable using an app if the first time they opened it it just threw up a message saying “trust me!” and then asked for a bunch of health information, so I wrote up a privacy policy and started building an onboarding flow. Which then ballooned in complexity; looking at the design files, more than half of the app is screens for dealing with something having gone wrong.2
One of the most interesting debates I had with myself during the whole process was deciding what business model to use.3 The App Store has had an unfortunate tendency to be a race to the bottom; while there’s a bit of a market for pro apps, a minimalistic water-tracking app doesn’t fit into that category. There’s also no argument to be made for a subscription, so I’d narrowed it down to ‘free, because I’m turning it in as the capstone project for my computer science major’, ‘free with ads’, or ‘paid up-front’. The first one was the one I was most comfortable with; sure, ‘paid up-front’ would be nice, but I’d also get approximately zero people to download it what with all the free competitors out there. ‘Free with ads’ feels deeply gross, both because I hate online advertising in general, and because I’m doing a lot with health data, and I really don’t want to have any chance of that getting stolen. For a while, I thought it was going to be ‘free forever’, and I’d be justifying it as ‘building a portfolio’.
That wasn’t what I actually settled on, however; instead, I’m going with ‘free with in-app purchase.’ Instead of building in a paywall and locking some features behind it, though, I decided I’d go simpler; the app and all of its features are free. Starting in version 1.1, there’ll be a button in the Settings; a little tip jar.4 I probably won’t make much, but I’ll feel better about it overall, and what’s the harm?
Beyond that debate, most of the challenge of the project as a whole was just building it. I knew going in what I wanted it to look like; what I didn’t know was how to go about doing that. The way the background overlaps the text? That alone took a week of trying different things to get working right.5 A few things I wanted to include in the first version didn’t make it – the widget was originally going to be entirely different, but the way Apple has done the security on health data makes the original design significantly more difficult to do, so I switched it to the current design.6
It was definitely a learning experience, too – I’d done some iOS application design for classes before, but never gone all-in on making something that would be both functional and enjoyable for the end user. If you’re releasing something on the App Store, you can’t just include a note that says “on first run, it’ll ask for a bunch of permissions; just say yes” because nobody will read that. And getting something uploaded to the App Store is itself a whole process – the App Store page doesn’t fill itself out, after all, and copywriting definitely isn’t my strongest suit.7
But it’s done; I’ve made an app and released it to the world.8 By the time you’re reading this, it should be available on the App Store; as I mentioned, it’s free to download, and I’d love it if you’d give it a try.


  1. That said, I’m also doing some design sketches for my own entry into the field; don’t get your hopes up, I make no promises. 
  2. I’m not talking “my code is full of bugs and something crashed” went wrong, either; it’s all “the user originally gave permission to do something, but then changed their mind and used the Health app to take it away” and other such nonsense. Computers may be finite-state machines, but “eleventy hojillion” is still a finite number. 
  3. I also talked about this a lot with my friend Chase, without whom I would’ve long ago given up on technology and disappeared into the woods to be a Bigfoot impersonator. 
  4. Yes, I know, I’m just now releasing version 1.0, and I’m already mentioning plans for 1.1. Don’t worry, I’ve got versions 1.2 and 1.3 mapped out, feature-wise, as well, and have some rough ideas for 1.4. 
  5. For a while I thought I was going to have to write code to draw the numbers ‘by hand’; fortunately, I was able to get the drawing to work by taking advantage of layer masks, but good lord are the Interface Builder files a mess as a result. Behind The Scenes, everybody! 
  6. I do still want to get the original design working, probably as an option in the Settings page of the app; a future version is going to add watchOS support, and I believe that a lot of the work I’ll have to do for that will also apply to making the widget work like I intended, so those two will either be the same or subsequent updates. 
  7. Another shoutout to Chase, who wrote the App Store description and turned my pile of 100 disjointed screenshots into the four that’re currently on display. 
  8. Well, “done”; it’s functional and available to the public, but software, as the saying goes, is never finished, only abandoned. I’ve no plans to abandon this project anytime soon; I use it myself several times a day, so I’m pretty invested in keeping it working and making it better. 
Categories
Technology

Tidbits from Apple’s Machine Learning Journal

A short while ago, Apple launched a journal on machine learning; the general consensus on why they did it is that AI researchers want their work to be public, although as some have pointed out, the articles don’t have a byline. Still, getting the work out at all, even if unattributed, is an improvement over their normal secrecy.
They’ve recently published a few new articles, and I figured I’d grab some interesting tidbits to share.
In one, they talked about their use of deep neural networks to power the speech recognition used by Siri; in expanding to new languages, they’ve been able to decrease training time by transferring trained networks from existing languages over to the new ones.1 Probably my favorite part, though, is this throwaway line:

While we wondered about the role of the linguistic relationship between the source language and the target language, we were unable to draw conclusions.

I’d love to see an entire paper exploring that; hopefully that’ll show up eventually. You can read the full article here.
Another discusses the reverse – the use of machine learning technology for audio synthesis, specifically the voices of Siri. Google has done something similar,2 but as Apple mentions, it’s pretty computationally expensive to do it that way, and they can’t exactly roll out a version of Siri that burns through 2% of your iPhone’s battery every time it has to talk. So, rather than generate the entirety of the audio on-device, the Apple team went with a hybrid approach – traditional speech synthesis, based on playing back chunks of audio recordings, but using machine learning techniques to better select which chunks to play based, basically, on how good they’ll sound when they’re stitched together. The end of the article includes a table of audio samples comparing the Siri voices in iOS 9, 10, and 11; it’s a cool little example to play with.
The last of the three new articles discusses the method by which Siri (or the dictation system) knows to change “twenty seventeen” into “2017,” and the various other differences between spoken and written forms of languages. It’s an interesting look under the hood of some of iOS’ technology, but mostly it just made me wonder about the labelling system that powers the ‘tap a date in a text message to create a calendar event’ type stuff – that part, specifically, is fairly easy pattern recognition, but the system also does a remarkable job of tagging artist names3 and other things. The names of musical groups are a bigger problem, but the one that I wonder about the workings of is map lookups – I noticed recently that the names of local restaurants were being linked to their Maps info sheet, and that has to be doing some kind of on-device search, because I doubt Apple has a master list of every restaurant in the world that’s getting loaded onto every iOS device.
As a whole, it’s very cool to see Apple publishing some of their internal research, especially considering that all three of these were about technologies they’re actually using.


  1. The part in question was specific to narrowband audio, what you get via bluetooth rather than from the device’s onboard microphones, but as they mention, it’s harder to get sample data for bluetooth microphones than for iPhone microphones. 
  2. Entertainingly, the Google post is much better designed than the Apple one; Apple’s is good-looking for a scientific journal article, but Google’s includes some nice animated demonstrations of what they’re talking about that makes it more accessible to the general public. 
  3. Which it opens, oh-so-helpfully, in Apple Music, rather than iTunes these days.