Handy URI Templates 2.1.0 Released

I’ve just published Handy URI Templates 2.1.0, which fixes the following issues:

  • #26 Some issues with custom VarExpander
  • #30 Replace SimpleDateFormat with Joda Time to Improve Performance
  • #34 Seems to be an issue with expanding UUID value
  • #35 Expanding enums

Please note that the changes for Issue #26 removed the use of the java.beans.* package, which has been replaced by a custom implementation. This was necessary in order for this library to function on Android when using POJOs and the explode modifier. As it turns out, the java.beans.* package doesn’t exist on Android.

As a result, behavior may differ slightly. Please file an issue if something isn’t working out for you.
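If you want to sanity-check the UUID and enum fixes against your own templates, the expansion cases in question look roughly like this (a minimal sketch; the OrderStatus enum and the template itself are made up for illustration):

    import com.damnhandy.uri.template.UriTemplate;
    import java.util.UUID;

    public class ExpansionSanityCheck {

        // Made-up enum standing in for whatever enum you're expanding (#35)
        enum OrderStatus { OPEN, SHIPPED }

        public static void main(String[] args) throws Exception {
            String uri = UriTemplate.fromTemplate("/orders/{id}{?status}")
                                    .set("id", UUID.randomUUID())    // UUID values (#34)
                                    .set("status", OrderStatus.OPEN) // enum values (#35)
                                    .expand();
            System.out.println(uri); // e.g. /orders/2f1a...?status=OPEN
        }
    }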

I can’t make a Windows 10 Mobile app on a Mac, and that’s a problem

So now that Apple has shown its hand with the iPhone 6s, and with Google set to unveil new Nexus devices at the end of the month, one may (or may not) start wondering about what’s next for Microsoft, which will be holding its event in October. While Microsoft may release some interesting hardware at the October event, hardware isn’t the major problem for Windows 10 Mobile; it’s the ecosystem. It sucks. The apps that you can get for iOS and Android simply aren’t there for Windows Mobile. I’d love to give Windows 10 Mobile a shot, but I really need access to the MBTA mTicket app, which hasn’t been available for anything other than iOS and Android. Microsoft is desperately trying to woo developers to its mobile platform, but it seems to be missing the mark with its recent moves.

Most Mobile Developers use Macs

If you’re going to develop an iOS app, you need a Mac. Even if you choose to develop on another OS, a Mac comes into the picture eventually, either in the form of a server or a hosted service. If you’re already developing and supporting iOS apps, you’re probably using Mac OS X as your primary environment already. If you’re interested in making an Android version of your app, it’s pretty painless to snag Android Studio and the SDKs and run them alongside XCode. Easy. It also doesn’t hurt that iOS and Android come from familiar UNIX-y underpinnings. You likely don’t even need a Windows PC because you already have everything you need. What you do need are emulators and actual devices – not Windows.

React Native is Mac-only

React Native is one of those tools that I think underscores the situation. React Native is getting a lot of attention lately, but its actual real-world usage is still relatively small. I bring it up because it illustrates where the majority of mobile development is taking place. Right now, React Native for both iOS and Android is only available on Mac OS X. While it is possible for React Native for Android to run on Windows and Linux, it’s currently not supported. At present, React Native has the following dependencies:

Only one of these dependencies is cross-platform. While yes, Homebrew could easily be swapped out for apt-get or OneGet in Windows 10, learning the details of how to distribute this stuff on non-OS X platforms is a sizable time investment.

VisualStudio 2015 for Windows 10 Mobile Development won’t help

Microsoft is investing in tooling that caters to existing Windows developers but does nothing to appeal to existing mobile developers on non-Windows platforms. In fact, adding VisualStudio 2015 to an existing workflow means more work for existing mobile developers. They now need to consider adding Windows infrastructure to support what is currently a niche mobile platform. Adding Windows 10 Mobile to an existing Mac or Linux-based development environment is a pretty big commitment. If you’re already invested in a Mac OS X development environment, you now need a Windows PC or a virtual machine running Windows. You also need a copy of Visual Studio. If you’re running a business, Visual Studio Community isn’t an option and you need to shell out $1,200 per user. A VM is a lousy option given the disk demands of Visual Studio (30+GB) and the fact that the Windows Mobile emulator doesn’t play nice when running in a VM.

The release of VisualStudio 2015 is a pretty big deal. It added a lot of nice new capabilities for mobile developers. The Windows Bridges are a great idea and might help in the long term. While it’s cool that you can get up and running quickly with existing Objective-C and Android Java code, it glosses over the fact that existing mobile devs coming from a Mac/UNIX background probably don’t know the Windows/VisualStudio way of doing things. Coming from a non-Windows development background, learning the VisualStudio developer tool chain is awkward. In fact, it sucks because it feels so weird. That’s a learning curve that those comfortable with UNIX-based build tools aren’t going to enjoy much. It also takes time. If you’re a small shop, you are going to seriously question whether it’s worth going through these shenanigans for roughly 2.7% of the mobile market.

Windows 10 Mobile is a minor player that thinks it’s in the majors

While Microsoft seems to acknowledge that Windows 10 Mobile is a minority platform, the developer tools side of the house doesn’t seem to get this yet. The browser team figured out that a lot of design shops simply weren’t testing against IE: either they didn’t have access to Windows PCs running IE, or they were ignoring it due to cost. This ultimately led to Modern.ie, which provides Windows VMs with various versions of Edge and IE. This is an incredibly helpful resource for all web developers. Mobile developers have no such resources available for Windows 10 Mobile. Given Windows Mobile’s relatively insignificant market share, it’s easier to just ignore the platform outright.

How about Windows 10 Mobile tools that run on Mac OS X and Linux?

VisualStudio Code is a nice, light-weight developer tool. It’s great for working with Node.js and the early builds of ASP.NET 5, but it could be so much more. It would be SUPER great if this were the tool that could help developers get started with Windows Mobile on non-Windows platforms. Toss in a Windows 10 Mobile emulator that runs on non-Windows OSes and now you’re cooking with gas. To be able to debug and test something like a Cordova-based app through a Windows Mobile emulator on Mac OS X would be a huge leap forward. Hell, you might even see a version of React Native for Windows 10 Mobile! Top it off with being able to publish to the Windows Store from a bash shell, and you’re doing much, much better. And these tools also need to be free – just like XCode and Android Studio.

App developers aren’t going to flee from Mac OS X to Windows 10 just because of the new tooling in VisualStudio 2015. XCode users weren’t really pining for Objective-C support in VisualStudio, as that’s not their jam. But add C# and Windows Mobile tooling as XCode plugins or through VS Code, and things get a lot more interesting. That’s not so crazy now given that the .NET 5 beta runs on Mac OS X and should be final by 2016.

Microsoft desperately needs to put out tools for Windows Mobile developers on the platforms they are currently developing on – not just Windows. If Microsoft keeps pretending that the majority of mobile developers want VisualStudio, they’ll still be struggling to crack that 3% market share. These folks don’t do Windows.

Handy URI Templates 2.0.3 Released

I have put out a new release of Handy URI Templates this morning that fixes a few issues and adds some new features such as the ability to perform partial template expansion. A big thanks to Christoph Nagel for the pull request! Please file an issue if you have any problems with this release. Up next, I plan on fixing a few issues that some Android developers have been hitting and then eventually get around to reverse matching.
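Partial expansion, if you haven’t tried it, expands the variables you’ve set and leaves the rest of the template intact so it can be expanded further downstream. A quick sketch of how that looks (the method name and output are from memory, so check them against the release):

    import com.damnhandy.uri.template.UriTemplate;

    public class PartialExpansionExample {
        public static void main(String[] args) throws Exception {
            // Variables that have been set are expanded; unset ones remain
            // in the result so it can be expanded again later.
            String partial = UriTemplate.fromTemplate("/search{?q,lang,page}")
                                        .set("q", "uri templates")
                                        .expandPartial();
            System.out.println(partial); // roughly: /search?q=uri%20templates{&lang,page}
        }
    }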

How putting credentials in Git can cost you at least $6,500 in just a few hours

There’s an interesting post making the rounds on Twitter called How a bug in Visual Studio 2015 exposed my source code on GitHub and cost me $6,500 in a few hours. The short version is that a developer from Humankode attempted to create a private repo on GitHub via the Visual Studio Git extension. Unfortunately for this developer, the Visual Studio Git extension made the repository public rather than private. Complicating matters, the developer had committed his AWS access key and AWS secret access key. The keys were thus compromised, fell into the wrong hands, and were used to run up $6,500 in AWS charges. In the end, the developer had this to say as a lesson learnt:

At face value one might say it’s simple : don’t publish your access keys to a public repository, which is what many before me have done. In my instance, I specifically published to a private repository, but a bug in visual studio meant that the code was published to a public repository. As soon as it was out in the wild, it was too late. Bots scan GitHub repositories and it only takes 2 or 3 minutes for some of them to pick this up.

It’s reasonable advice, but you should do much more.

Don’t ever publish credentials to an SCM, public or private

By credentials, I mean anything that allows someone to gain elevated privileges to your systems, including:

  • Passwords
  • SSH Keys
  • Private keys or certificates
  • OAuth Consumer or Token Secrets
  • AWS Access and Secret Keys (and the equivalents for Azure or any other public cloud)

While stating “don’t publish your access keys to a public repository” is sound advice, I’d expand on that and assert that credentials shouldn’t be published in any repository. Period. Also, don’t put these things in your wiki or on a shared file system. Avoid exchanging credentials via email too. They’re credentials. They give anyone the power to do things on your system. You should control who has access to manage your runtime environment, and that means locking down and entitling your credentials. If credentials reside in source control, you’re asserting that every developer has full admin rights to your deployment environments as well. That’s probably not what you want.

But why is storing them in an SCM a bad idea? Chances are, you put the credentials in your SCM as a means to simplify deployment. As a result, they’re likely being included in your builds too. If you’re producing a package that gets published to a package repository (NuGet, Maven, RPM, etc.), the credentials are going there too. Is your package repository private as well? If you’re using a CI tool like Jenkins, those credentials are also being copied there. Is that locked down and private too? Can you prevent others from seeing your working directory? Another big gotcha, especially with tools like Git, is the possibility that a developer on your project “backs up” the project to another remote repository. There’s nothing stopping anyone from cloning your private repo on GitHub to a public repo on Bitbucket by simply adding a new remote. Your best defense is to not put credentials where your source code lives.
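A low-effort alternative is to resolve credentials from the runtime environment rather than from anything that gets committed. With the AWS SDK for Java, for example, the default provider chain will look at environment variables, JVM system properties, and the EC2 instance profile, so nothing credential-shaped has to live next to the code. A minimal sketch:

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

    public class NoCommittedCredentials {
        public static void main(String[] args) {
            // The provider chain checks env vars, system properties, and the
            // EC2 instance profile at runtime; nothing is read from source control.
            AmazonDynamoDBClient dynamo =
                new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain());
        }
    }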

Where should you put credentials?

If not in an SCM, then where? You probably keep your personal credentials private. You may even use a password utility like LastPass, 1Password, etc., because you can’t remember all of these passwords and need to retrieve them at some point. Basically, you’re keeping these secrets private, as they should be, as opposed to slapping Post-it notes on your desk. Application credentials should be treated no differently. There are tools like HashiCorp’s Vault, Conjur, or CyberArk, to name a few, which are all designed to manage application credentials so that they are protected.

Use a Token Service

If you are using AWS, consider using Amazon’s Secure Token Service (STS). It lets you use temporary credentials that expire after anywhere from 15 minutes to 24 hours. You can authenticate users with ADFS or OpenID Connect, and only after they have been authenticated by your systems will they be able to obtain an STS token. Of course, this all falls apart if you are storing your ADFS or OIDC credentials in your Git repository. Since that’s a crazy idea, you’re not doing that. Right?
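To make that concrete, here’s a rough sketch of obtaining temporary credentials with the AWS SDK for Java; the role ARN and session name are placeholders:

    import com.amazonaws.auth.BasicSessionCredentials;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
    import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
    import com.amazonaws.services.securitytoken.model.Credentials;

    public class StsExample {
        public static void main(String[] args) {
            AWSSecurityTokenServiceClient sts = new AWSSecurityTokenServiceClient();
            // Ask STS for short-lived credentials; the role ARN is a placeholder
            Credentials temp = sts.assumeRole(new AssumeRoleRequest()
                    .withRoleArn("arn:aws:iam::123456789012:role/app-role")
                    .withRoleSessionName("app-session")
                    .withDurationSeconds(3600))
                .getCredentials();
            // These expire on their own, so there's nothing worth committing
            BasicSessionCredentials session = new BasicSessionCredentials(
                temp.getAccessKeyId(), temp.getSecretAccessKey(), temp.getSessionToken());
        }
    }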

Get in the Habit of Rotating your Credentials

It’s usually a good idea to change passwords periodically. And by periodically I mean every few days to weeks, not months. Access keys are no different, and you should rotate them on a regular basis. The AWS blog has an informative post on how to rotate access keys for IAM users. If you look closely, it’ll make more sense why IAM users are allowed a maximum of two access keys: the second key slot is what lets you roll in a new key before retiring the old one. Something like this is pretty easy to automate as well. Keeping an IAM access key valid for more than 2–12 months is not a good idea.
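A rough sketch of automating that rotation with the AWS SDK for Java (the user name and old key ID are placeholders):

    import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient;
    import com.amazonaws.services.identitymanagement.model.*;

    public class RotateAccessKey {
        public static void main(String[] args) {
            AmazonIdentityManagementClient iam = new AmazonIdentityManagementClient();
            // 1. Create the second key (IAM allows at most two per user)
            AccessKey fresh = iam.createAccessKey(
                new CreateAccessKeyRequest().withUserName("app-user")).getAccessKey();
            // 2. Roll the new key out to your runtime environment here...
            // 3. ...then deactivate the old key and, once nothing breaks, delete it
            iam.updateAccessKey(new UpdateAccessKeyRequest()
                .withUserName("app-user")
                .withAccessKeyId("AKIAOLDKEYID") // placeholder
                .withStatus(StatusType.Inactive));
            iam.deleteAccessKey(new DeleteAccessKeyRequest()
                .withUserName("app-user")
                .withAccessKeyId("AKIAOLDKEYID"));
        }
    }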

Entitle your Credentials

Proper use of entitlements, roles, or privileges can help minimize the impact an attacker can have if your credentials are compromised. If an application only needs read/write access to DynamoDB, then it should only have read/write access to DynamoDB. It shouldn’t have the ability to spin up new EC2 instances, call CloudFormation, etc. It’s easy to select “PowerUser” from the IAM console, but you shouldn’t. Yeah, an attacker might be able to read your data, but they’re not going to have the ability to spin up 1,000 EC2 instances to mine bitcoins.
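As a sketch, the DynamoDB-only case might look something like the policy below (the table ARN is a placeholder, and the action list should be trimmed to what the app actually calls):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"
      }]
    }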

Just keep your credentials private, even in private.

Credentials need to be secured, plain and simple. Even if you’re running your app in-house, with a private wiki and a private Git repo, you still should not put credentials of any sort into those systems. They’re among the first places attackers look if your environment is compromised.

Containerizing the W3C Mobile Checker App

I had been looking for some type of tool similar to the Google Mobile-Friendly Test web app. For internal enterprise apps, the Google tool doesn’t help much, as it can’t see them. Eventually, I came across the W3C Mobile-Checker. This does most of what I need, but its deployment is a bit of a challenge. To simplify things, I thought it might be a fine idea to package it up using Docker.

The first thing to note is that the W3C Mobile-Checker has a bunch of dependencies, including:

  • Node.js
  • Google Chrome
  • Xvfb
  • BrowserMob Proxy

The number of processes violates the Docker best practice of running a single process per container, and I am not a big fan of approaches like the one advocated by the Phusion Baseimage-Docker image. But in the interest of simplifying deployment, I figured it might be okay for this type of app. This of course means running something like supervisord.

The first big challenge was figuring out how to get Xvfb running in Docker. Thankfully, Linuxmeerkat has an excellent post on running a GUI application in a Docker container, which was incredibly helpful in setting this up. Interestingly, it seems Chrome has changed quite a bit, as there was no need to run the container in privileged mode. From there, the rest of the setup was pretty straightforward, and I hacked up a supervisord config that looks a bit like this:


; one [program:x] section per process (the names here are illustrative)
[program:browsermob-proxy]
command=/opt/browsermob-proxy-2.1.0-beta-1/bin/browsermob-proxy --use-littleproxy true

[program:xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1920x1080x24

[program:mobile-checker]
command=/usr/bin/node /opt/Mobile-Checker/app.js > /dev/stdout
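Supervisord then becomes the container’s single entrypoint and fans out to those three processes. A very rough sketch of the Dockerfile shape (the package names and paths here are guesses; the real one is in the repo):

    FROM ubuntu:14.04
    # One image, many processes: the multi-process trade-off discussed above.
    # Chrome's install is elided here; it needs Google's apt repo.
    RUN apt-get update && apt-get install -y xvfb supervisor nodejs \
        && rm -rf /var/lib/apt/lists/*
    COPY supervisord.conf /etc/supervisor/conf.d/mobile-checker.conf
    EXPOSE 3000
    # supervisord stays in the foreground as PID 1 and respawns the others
    CMD ["/usr/bin/supervisord", "-n"]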

The full code is up on GitHub here. It’s also published to Docker Hub and you can run it like so:

    docker run -p 3000:3000 damnhandy/mobile-checker-docker

I’m still figuring out how best to package the Node.js app itself. Right now, the build on Docker Hub contains a snapshot of the Mobile-Checker version at the time it was built. The one upside of using supervisord here is that it respawns the Mobile-Checker app each time it crashes, which is quite often. But at any rate, this type of setup gives you an idea of how one could do automated browser testing using Docker.

On Describing Push Notifications in Web APIs

A few weeks back, I attended my first RESTFest “unconference”. This was a really great event put on by some fantastic folks. The “everyone talks” aspect of RESTFest is actually an awesome idea for a conference this size. The format is great because, if you’re like me, you have some things you want feedback on, and this provides the forum to do just that. Everyone gets a chance to do a quick 5-in-5 talk about anything. I was looking to get some feedback on some ideas I’ve been considering about push notifications in Web APIs.

I titled my talk “Real-Time Web APIs,” but I’m not so much interested in Web APIs that follow “real-time computing” constraints. Really, I’m looking to see if there’s a good way to describe a service that streams push notifications from a Web API that follows “RESTful” architectural constraints. There are a lot of good ideas out there, but many of them assume that the client is either a web browser or is running an HTTP server. Additionally, the mechanisms for describing these services need some work.

The Goal

The core building blocks for push notifications are already available, but not so much the means that aid in discovery and the description of such resources. I’d like to be able to organize these pieces so that the following requirements can be met:

  1. Don’t assume that the subscriber is able to receive a callback using HTTP POST. If the subscriber is a web browser, a thick desktop client like a Swing or JavaFX application (yes, people still make these!), or even a native app on iOS, running a web server to receive an HTTP callback isn’t always practical or feasible.
  2. Advertise that a given link is exposing a stream of events via a link relation. This might be similar to the monitor link relation, but not necessarily bound to SIP.
  3. The link relation should be able to indicate the media type that event messages are described in, ideally via the type property.
  4. If there’s a sub-protocol involved, it should also advertise which sub-protocol. If the event stream is a media type that supports embedded content, it should express that as well.
  5. Subscribing to a stream or feed should be simple.

WebSockets and HTML5 Server-Sent Events (SSE) present some interesting opportunities for Web APIs that demand low-latency push notifications while also removing the need for a consumer to run an HTTP server. Keep in mind that I’m not talking about doing something like “REST over WebSockets” (Shay does make some great points though!); I’m simply looking for push notifications without the need for the consumer to run a web server, along with a way to describe the stream via link relations. I think it could be done, but there are some missing bits.


PubSubHubbub

PubSubHubbub ticks a lot of check boxes for me, such as:

  • While the protocol is built around Atom and Atom concepts, it could support a variety of media types. It’s using the Content-Type header to express what is coming over the wire.
  • It sports a discovery model using the rel="hub" link relation, either in a Link header or a link within an Atom feed.
  • The subscriber subscribes to the Topic URL via the Topic URL’s declared hub(s) using the PubSubHubbub subscription protocol (roughly sketched after this list).
  • Publishers ping the hub to notify it of updates; the hub aggregates the content and sends it to the subscriber via an HTTP POST request to the hub.callback URL.
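That subscription step is just a form-encoded POST to the hub naming the topic and the callback. Roughly (the URLs are placeholders, and the verification handshake that follows is omitted):

    POST /hub HTTP/1.1
    Host: hub.example.com
    Content-Type: application/x-www-form-urlencoded

    hub.mode=subscribe&hub.topic=http%3A%2F%2Fexample.com%2Ffeed&hub.callback=http%3A%2F%2Fsubscriber.example.com%2Fnotify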

What I like about it a lot is the use of hypermedia to do discovery and callbacks. Additionally, it uses standard means to describe what’s coming over the wire. The rel="hub" link relation combined with the subscription protocol is super easy, and it works. My challenge with PubSubHubbub is the requirement of an HTTP callback on the part of the subscriber. As stated earlier, this isn’t always possible.

From my perspective, PubSubHubbub has the right foundational model. It’s simply the HTTP callbacks that are the sticking point. So can we do something similar with WebSockets or Server-Sent Events? I think so, but there are some challenges with existing formats in order to make this work.

Server-Sent Events

I REALLY like HTML5 Server-Sent Events (SSE). A lot. I’m also really annoyed by the fact that Internet Explorer still doesn’t support SSE, even in IE11. What I like about SSE is that it’s a media type (text/event-stream), so the subscriber will know that the URL represents an event stream. The event stream model is also dead simple:

event: change
data: 73857293

But it could also look like this:

event: change
data: {"@id" : "15628", "@type" : "ChangeEvent",  "value" : "73857293" }

Or even this:

event: change
data: {
data: "@id" : "15628", 
data: "@type" : "ChangeEvent",  
data: "value" : "73857293" 
data: }

Or like this (if you’re into this sort of thing):

event: change
data: <ChangeEvent><id>15628</id><value>73857293</value></ChangeEvent>

As you can see, the data field can contain a nested media type like XML, JSON, or something else. The problem is that there is no way to indicate that. How does one know that the data field contains structured content such as JSON? The browser gets around the issue by embedding JavaScript in an HTML document that references the stream:

var source = new EventSource('/updates');
source.addEventListener('change', changeHandler, false);

Obviously, the changeHandler function will parse and handle the embedded content. This works great where the subscriber is a web browser (or embedded browser), but for other environments it’s not so easy.
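For everyone else, you end up hand-rolling a client. A minimal Java sketch of what that looks like (no reconnection or Last-Event-ID handling) makes the gap obvious: the parser can split events out of the stream, but nothing tells it what’s inside the data field:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;

    public class SseClientSketch {
        public static void main(String[] args) throws Exception {
            URLConnection conn = new URL("http://example.com/events").openConnection();
            conn.setRequestProperty("Accept", "text/event-stream");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String event = null;
                StringBuilder data = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("event:")) {
                        event = line.substring(6).trim();
                    } else if (line.startsWith("data:")) {
                        data.append(line.substring(5).trim());
                    } else if (line.isEmpty() && data.length() > 0) {
                        // A blank line ends the event. But is data a bare
                        // number, JSON, or XML? The stream itself never says.
                        System.out.println(event + " -> " + data);
                        event = null;
                        data.setLength(0);
                    }
                }
            }
        }
    }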

We could express this via a link, but it’d be missing some details. Let’s assume we have a link relation called stream that informs a client that this link represents a stream of events:

Link: <http://example.com/events>; rel="stream"; type="text/event-stream"

That works: it declares that the link is a stream of events and that it exposes them via SSE. But the subscriber has no hints that the data field contains JSON or XML content. In cases where a browser, or an embedded web browser, is not available, how can a client get more information on how to process the stream? For SSE, one option might be to include a media type parameter, call it data if you will, that would indicate that the nested type is something like JSON-LD:

Link: <http://example.com/events>; rel="stream"; type="text/event-stream"; data="application/ld+json"

It’s just an idea, but it could be workable. I would love feedback on this and would REALLY like to see Microsoft add Server-Sent Events to IE at some point.


WebSockets

WebSockets are neat, but the majority of use cases for streaming notifications really only need data to flow one way. The bi-directional nature of the WebSocket protocol is nice to have, but not entirely necessary for most applications. WebSockets by itself really isn’t that useful, either: a number of the WebSocket examples you’ll see are effectively someone’s home-grown, JSON-based socket protocol. It’s a bit too cowboy for my tastes, but it can get the job done.

Where I do find WebSockets more useful is in being able to leverage a well-defined subprotocol over a WebSocket. At the moment, I’m quite fond of STOMP, particularly STOMP over WebSockets. In a Java shop that is already heavily invested in JMS, STOMP over WebSockets is a reasonable leap, given that tools such as ActiveMQ, RabbitMQ, and others now support it.
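For a feel for the subscriber side, here’s a minimal sketch using Spring’s STOMP-over-WebSocket client (assuming the 4.x-era spring-websocket and spring-messaging modules on the classpath; the endpoint URL and destination are placeholders, and a real client would want error handling and reconnects):

    import java.lang.reflect.Type;
    import org.springframework.messaging.converter.StringMessageConverter;
    import org.springframework.messaging.simp.stomp.*;
    import org.springframework.web.socket.client.standard.StandardWebSocketClient;
    import org.springframework.web.socket.messaging.WebSocketStompClient;

    public class StompSubscriberSketch {
        public static void main(String[] args) throws Exception {
            WebSocketStompClient stomp =
                new WebSocketStompClient(new StandardWebSocketClient());
            stomp.setMessageConverter(new StringMessageConverter());
            stomp.connect("wss://example.com/events", new StompSessionHandlerAdapter() {
                @Override
                public void afterConnected(StompSession session, StompHeaders headers) {
                    // The destination is a placeholder; frames arrive as plain strings
                    session.subscribe("/topic/changes", new StompFrameHandler() {
                        public Type getPayloadType(StompHeaders h) { return String.class; }
                        public void handleFrame(StompHeaders h, Object payload) {
                            System.out.println("change: " + payload);
                        }
                    });
                }
            });
            Thread.sleep(60_000); // keep the JVM alive while frames arrive
        }
    }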

Building on the Server-Sent Event examples earlier, we have some similar problems such as:

  • We still don’t have a good way of indicating what might be coming over the wire
  • We have a new problem: since we’re now dealing with a protocol that supports subprotocols, we also need a means to identify the sub-protocol that the WebSocket will be using

At RESTFest, I crapped up a variation of the link relation like so:

Link: <wss://example.com/events>; rel="stream/v12.stomp"; type="application/ld+json"

Here, we’d overload the rel field to indicate that it’s a stream but also that it’s using STOMP as the sub-protocol, specifically STOMP v1.2 (note I’m using the IANA WebSocket subprotocol IDs here). Because the URI begins with wss://, we know that we’re using WebSockets over SSL. The type property indicates that the messages will be using application/ld+json. The problem with both approaches is that if I want to offer alternate message formats (say, both JSON-LD and XML), then this solution doesn’t really work. But maybe that’s not a problem.

Constrained Application Protocol

One of the great things about attending a workshop like RESTFest is that you’re surrounded by people who are smarter or more experienced than you. After my 5-in-5, Mike Amundsen had a few good questions about what I was trying to do. He then asked if I had considered CoAP, the Constrained Application Protocol. Having never heard of CoAP, I obviously hadn’t taken it into consideration. CoAP more than likely satisfies a number of my needs. Since it’s still in draft form, though, it’s not an easy sell yet. Without a doubt, CoAP is something to keep an eye on.

Wrapping Up

Right now, I’m going down the STOMP over WebSockets route. I’d REALLY prefer Server-Sent Events, but the fact that Microsoft isn’t supporting SSE in IE10 and IE11, which are still the corporate standard in most shops, sadly makes SSE a non-starter. In the coming weeks, I’ll be slapping some code up on GitHub to test out some ideas. I’d love to get feedback to see if I’m going off the rails or if these ideas have some merit.