Hi, this is YOSHIMURA, researching wearable devices at ATL. I have already posted three or four articles about Google Glass, which I happened to get my hands on, and I find the device very interesting. A mobile phone has a UX like: receive a notification → pull the phone out of your pocket and launch an app → check the information → done. Google Glass has a totally different UX. For example, you control the device with voice commands; when a notification arrives, a simple nod lets you check it on a screen faintly projected in the upper right of your field of view, and when you are done the screen turns off by itself. Applications need an equally new UX, as I have written about a few times before.

When I actually wanted to build applications for the device, I faced two difficulties: the cost and availability of real hardware, and the lack of any official emulator. After turning the problem over for a while, I hit on the idea of prototyping Glassware on a Nexus 5, which is what I will write about this time. Please note that my approach may well be off the mark.

I just want you to know that this is nothing more than one of the findings from my research.

The approaches I discuss here are not necessarily the best; they are simply what has worked for me when I have been in these situations.

Nexus 5?

The reason I use a Nexus 5 is that it runs stock Android 4.4 (the version that XE16 and later are based on) and is comparatively easy to work with. "Stock" matters a great deal here, because the prototype has to target a system that, while Android-based, differs from regular Android. On the other hand, since Google Glass is, if anything, closer to the Nexus S in hardware specifications, there are cases where you are better off working with a Nexus S running CyanogenMod 11.


In general, a step-by-step approach works reasonably well: first insert an appropriate emulation layer in between, model the application (mainly its Activity) on the phone, and once that is done, port it to Glass.


We insert a layer that mocks up the Glass-specific APIs, as below.

LiveCard→Ongoing Notification

Although pairing a Service with a LiveCard is the common pattern in Glassware, I think it is better to emulate this with a Service plus a Notification, or with a widget (RemoteViews).

You can see them below.

The emulation layer that drives this through an Android Notification goes as follows:
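As an illustration of what such a layer can look like, here is a sketch of a mock LiveCard exposing part of the GDK surface (publish/unpublish/isPublished and the PublishMode enum), with the Notification side reduced to recorded state so the example stays self-contained. Everything beyond the GDK names is an assumption:

```java
// Minimal mock of the GDK LiveCard surface. In the real emulation layer,
// publish() would post an ongoing Notification via NotificationManager
// (built from the RemoteViews handed to setViews()); here that side
// effect is reduced to a boolean so the sketch stays self-contained.
class LiveCard {
    enum PublishMode { REVEAL, SILENT }  // mirrors the GDK enum

    private final String tag;
    private boolean published;

    LiveCard(String tag) {
        this.tag = tag;
    }

    void publish(PublishMode mode) {
        // e.g. notificationManager.notify(tag, ONGOING_ID, notification);
        published = true;
    }

    void unpublish() {
        // e.g. notificationManager.cancel(tag, ONGOING_ID);
        published = false;
    }

    boolean isPublished() {
        return published;
    }
}
```

Because the mock keeps the GDK method names and semantics, the Service code that drives it does not need to change when it is later pointed at the real LiveCard.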

Mirror API→GCM+Notification?

Similarly, we would like Google Now to stand in for the Mirror API, but unfortunately there is currently no way to make Google Now show a custom card. So the only way to emulate it is to set up a service that lets us drive Notifications remotely via Google Cloud Messaging (GCM).

We can write this service in Go on Google App Engine. (Note: the latest Google Cloud SDK has changed things slightly.)

Start with app.yaml.
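As a sketch, a minimal app.yaml for the classic Go App Engine runtime of that era might look like the following; the application ID is a placeholder, and as noted above the newer Cloud SDK differs slightly:

```yaml
application: your-application-id-goes-here
version: 1
runtime: go
api_version: go1

handlers:
- url: /.*
  script: _go_app
```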

Fortunately there is a Go library called gcm that handles the GCM side, and I use it here. Installing it with goapp is enough to make it work in the development environment, but as it stands it will not be included when deploying to Google App Engine, so copy it into the working directory.

The logic is implemented in reflector/reflector.go as follows. The GCM handling is done in the library, so all this code does is accept device registrations, write them to the Datastore, and pass the parameters we want to broadcast on to the library.

Put them together as below and deploy to Google App Engine.

That completes the server side. Now we can register an Android device by POSTing regid=… to http://your-application-id-goes-here.appspot.com/device, and then push a notification from outside by POSTing message=… to http://your-application-id-goes-here.appspot.com/.

By receiving this on the device and raising a notification, I think we can roughly emulate the Mirror API use case of sending a non-obtrusive notification from outside.

In this example we do not restrict the target of a notification, so every registered device receives every post. That is not a problem as long as this is used for testing within a small group.
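The fan-out described above boils down to "store every registration ID, then send each message to all of them". A plain-Java sketch of that logic, with an in-memory set standing in for the Datastore and a callback standing in for the gcm library's send; all names here are assumptions:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.BiConsumer;

// In-memory model of the reflector service: registerDevice() corresponds
// to POSTing regid=... to /device, broadcast() to POSTing message=... to /.
class Reflector {
    private final Set<String> regIds = new LinkedHashSet<>();
    private final BiConsumer<String, String> sender; // (regId, message) -> GCM send

    Reflector(BiConsumer<String, String> sender) {
        this.sender = sender;
    }

    void registerDevice(String regId) {
        regIds.add(regId);           // a Set, so re-registering is harmless
    }

    void broadcast(String message) {
        for (String id : regIds) {
            sender.accept(id, message);  // the real code calls the gcm library here
        }
    }
}
```

The set-based registry also shows why re-registering the same device is safe: duplicates are simply ignored.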


Next, gestures: Glassware uses a GestureDetector specific to the GDK to detect gestures, while the Android SDK has its own GestureDetector. Say we have the following code. The emulation layer that makes it work looks like this:


I skip the actual recognition logic inside the detector, but at this level of fidelity it is much the same as ordinary gesture detection with the Android SDK.

The Gesture class in the GDK is just an enum, and since, as you might expect, it has no compatible counterpart in the Android SDK, we need to define it ourselves.

Finally, we need to intercept and analyze all MotionEvents at the Activity layer.
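Putting these pieces together, a sketch of the stubs might look like this: a subset of the GDK Gesture enum (redefined, since the Android SDK has no compatible type) and a mock GestureDetector with the GDK's BaseListener shape. The recognition logic is skipped as above, so gestures are injected directly via a hypothetical injectGesture() instead of being classified from MotionEvents:

```java
// Subset of the GDK Gesture enum, redefined because the Android SDK
// has no compatible type.
enum Gesture {
    TAP, TWO_TAP, LONG_PRESS, SWIPE_LEFT, SWIPE_RIGHT, SWIPE_DOWN
}

// Mock of the GDK GestureDetector: same listener shape, but gestures
// are injected directly instead of being recognized from MotionEvents.
class GestureDetector {
    interface BaseListener {
        boolean onGesture(Gesture gesture);
    }

    private BaseListener listener;

    GestureDetector setBaseListener(BaseListener listener) {
        this.listener = listener;
        return this;  // the GDK setter chains, so the mock does too
    }

    // Stand-in for onMotionEvent(MotionEvent): the Activity layer would
    // classify the raw event and call this with the resulting gesture.
    boolean injectGesture(Gesture gesture) {
        return listener != null && listener.onGesture(gesture);
    }
}
```

Application code written against BaseListener then compiles unchanged against both the mock and the real GDK detector.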

With all of this in place, the same code runs on Google Glass.

GCM→Standalone GCM

As Glass does not ship with Google Play Services, we use the standalone edition of GCM there. The standalone edition works on Glass without any trouble, so switching to it gets the app running on Glass.


Card is a helper that takes parameters and produces the standard Glass layout, so it is enough to roughly approximate it by inflating a custom layout with LayoutInflater.


For code like the following, we emulate it on Android as shown. In this example we do not define methods such as addImage; add more of them as needed for more complicated cases.
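A mock of the GDK Card along those lines only needs to store the parameters and hand back a view. In the sketch below, getView() is replaced by a plain render() string so the example stays self-contained, and addImage is left out as noted above; names beyond the GDK setters are assumptions:

```java
// Minimal mock of the GDK Card: stores the main text and footnote and,
// instead of inflating a Glass-style layout via LayoutInflater, renders
// them to a string for illustration.
class Card {
    private String text = "";
    private String footnote = "";

    Card setText(String text) {
        this.text = text;
        return this;  // the GDK setters chain
    }

    Card setFootnote(String footnote) {
        this.footnote = footnote;
        return this;
    }

    // Real layer: inflate a rough custom layout and fill these fields in.
    String render() {
        return text + " | " + footnote;
    }
}
```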


For the menu, rather than struggling with a custom design, the easiest route is to lean on the platform and open a standard options menu from a transparent Activity. The appearance is quite different when you run it on Android this way.


The transparent Activity that launches the menu is as follows:

Voice triggers

The meta-data can stay as it is, but we need to stub out the class it references, as follows:
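For example, a stub of the GDK VoiceTriggers class (which holds the Command enum that the trigger XML and application code reference) can be as small as this; only a couple of commands are listed, as an illustration:

```java
// Stub of the GDK VoiceTriggers class so code referencing its Command
// enum compiles on a phone; only two of the real commands are listed.
class VoiceTriggers {
    enum Command {
        TAKE_A_NOTE,
        POST_AN_UPDATE
    }
}
```

The stub exists purely so the code compiles; as noted below, nothing on the phone actually listens for the voice command.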

Since this alone does not make the phone listen for the voice command, we also need some way to start the voice-triggered Service ourselves, for example from an android.intent.action.MAIN/android.intent.category.LAUNCHER entry.

※ Please refer to the lifehacker article for more detailed instructions.


To port, we remove the emulation layer and fix whatever incompatibilities surface. If the application itself is simple and the layer was designed well, there should be no big problems, but in practice the following points can cause trouble.

OpenGL ES 2.0

As the GPU in Glass is an SGX540, the same as in the Nexus S, it is quite different from the Nexus 5's. For this reason, shader compilation often fails. In that case, make sure to verify behavior on a Nexus S with CyanogenMod 11.

Generation of heat

While the CPU in Glass is the same as in the Nexus S, the form factor is so different that it heats up immediately under heavy use. When the temperature gets high, watch for the message "Glass must cool down to run smoothly." on the "ok, glass" home screen.



Google Glass and Android mobile devices have a lot in common; my mental image is that the further you get from the front end, the more they share. The differences listed above are simply what I stumbled on in the course of my research, and that is most likely the extent of it. As I feared at the start, this entry may have gone off the mark, but I hope it helps improve your development efficiency when building for Google Glass.