Recently we released version 1.03 of the Madefire app. Our initial release of the Madefire app was limited to the iPad 2 and new iPad using the UIRequiredDeviceCapabilities functionality provided by Apple. With the new release we removed one of the restrictions to allow the app to run on the iPad 1 (in addition to a lot of memory and performance tuning).
We released the app via iTunes Connect and waited for it to propagate to the App Store. As soon as it did, we grabbed an iPad 1 and downloaded the app onto the device. But what we got wasn't what we expected.
In the App Store we saw the version listed as 1.03 and the requirements listed as "Compatible with iPad," meaning that we were no longer restricting iPad 1 installs. After running the app on the iPad 1, though, we noticed behavior that should not have been happening on an iPad 1. For a moment we thought that we had messed up somehow, that the version we released wasn't working like the test build. We had a brief moment of panic.
That's when it dawned on us to check the version number in the app. To our surprise, it was the previous release, 1.02. In the small window between when the App Store listing had refreshed and when we downloaded the app onto our iPad 1, we were able to install the old version, whose UIRequiredDeviceCapabilities settings should have prohibited it from being installed at all.
We were a little concerned by this, as we didn't want any customers downloading the old version on an iPad 1, where performance was less than ideal. Returning to the App Store on the device showed that an update was available for Madefire, so our fears were allayed: any users who happened to download the app in that brief window would eventually see the update too.
It's odd that there is a chance for a new user to download an old version of your app right after an update, but what's even more concerning is that the old version installs and runs even though its device-capability restrictions should prevent it. It's something to keep an eye on when dealing with support issues.
Being an effective engineering team is often about making the right trade-offs. When our CEO proposed a 21-iPad video wall for Comic-Con San Diego, everyone was excited at the ridiculousness of it. The initial excitement was tempered a bit by the timeline we were working with: with less than two weeks before the convention and plenty of "normal" work to do, we'd have to go the quick and dirty route. But once the wall was at the convention, the result was going to be on display for thousands of people, ten hours a day, for three or more days. Flaky and unreliable was not an option.
We started with 18 iPads fresh from the Apple Store; 3 more would join the next day. We set up the iPads assembly-line style: we configured the first, backed it up to iCloud, and restored that backup to the rest. There was still a bit of manual configuration to do, but it saved a lot of work.
“Real” solutions would involve tightly synchronized clocks or timecode. NTP is made for that purpose, but in the walled garden of iOS apps it wouldn’t be that simple. We’d have to maintain a clock/time in our app that synchronized to a central server frequently and then instruct the app to play a particular video at a specific time. Another option was to provide a syncing signal via the headphone jacks of the devices, but that involved hardware, something we didn’t really have the time to try. In the end, we decided on a simpler approach.
Our initial implementation used a multi-threaded server that held a connection to each client. The manager had a list of movies to play and the duration of each. It would loop through the videos, quickly sending out commands telling the clients to start playing, wait for the video to finish, and then tell the clients what to play next. If the connections were quick and reliable enough, it should work. Thankfully the human vision system is rather forgiving.
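The manager loop described above can be sketched roughly like this. This is a hypothetical illustration, not the shipped code: the `PLAY` command name, the newline-delimited protocol, and the `(filename, duration)` playlist format are all assumptions.

```python
import time

# Assumed playlist format: (filename, duration in seconds) per clip.
PLAYLIST = [("clip_a.mp4", 12.0), ("clip_b.mp4", 8.5)]

def send_to_all(clients, message):
    """Fire the same command at every connected iPad as quickly as possible."""
    for conn in clients:
        conn.sendall((message + "\n").encode("utf-8"))

def run_manager(clients):
    """Loop through the playlist, telling every client what to play and when."""
    while True:
        for filename, duration in PLAYLIST:
            send_to_all(clients, "PLAY " + filename)
            time.sleep(duration)  # wait out the clip before sending the next command
```

The scheme leans entirely on the commands arriving nearly simultaneously; there is no shared clock, which is why connection latency and reliability matter so much.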
Once the iPads were configured we pushed our proof of concept app to all of them and started it up.
The truth is that it worked pretty well. The one change we ended up needing to make was to allow for the time it takes an iPad to load a video before playing it. We addressed that by making playback a two-step process: we first tell the clients to prepare, and then, after waiting a sufficient amount of time, tell them to play the video. It's not perfect. We'll see a bit of skew on an iPad every once in a while, but you generally have to be looking for it, and even then it's not objectionable.
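The two-step process might look something like the following sketch. The `PREPARE`/`PLAY` command names and the fixed preparation delay are assumptions about the protocol, not the actual implementation.

```python
import time

def play_in_sync(clients, filename, duration, prepare_delay=2.0):
    """Tell clients to preload the clip, wait, then start playback together."""
    for conn in clients:
        # Step 1: every iPad loads the movie into memory ahead of time.
        conn.sendall(("PREPARE %s\n" % filename).encode("utf-8"))
    time.sleep(prepare_delay)     # give the slowest iPad time to buffer
    for conn in clients:
        # Step 2: all devices start (nearly) at once.
        conn.sendall(b"PLAY\n")
    time.sleep(duration)          # wait for the clip to finish
```

Splitting load from play means the variable cost (reading the movie off flash) happens before the synchronization-sensitive moment, so only the cheap "go" command needs to arrive at the same time everywhere.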
Our first content was a set of individually crafted videos with a counter on each screen so that we could detect drift. That's not very interesting, though. Next we needed to cut up real video content to fit the wall. To do this we coded up a Python script that crops a version of the source video for each iPad to play, using avconv/ffmpeg.
There was a lot of drawing and annotating of pictures involved in getting the slicing right. We went back and forth quite a bit trying to decide whether we should fit all of the source pixels into the output videos or account for the borders by dropping the pixels that fall behind them. The iPad has a huge surround, over 20% of the size of the screen itself. In the end we decided to go with dropping the pixels, as there was too much distortion of the content when there's a physical jump of 1¾" between rows.
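The slicing script works out, for each iPad, which rectangle of the source video lands on its screen, skipping the pixels hidden behind the bezels. Here is a minimal sketch of the idea; the grid dimensions, tile size, bezel width, and output filenames are illustrative assumptions, not the numbers from our actual script.

```python
# Hypothetical wall geometry: 1024x768 iPad screens in a 7x3 grid,
# with BEZEL source pixels dropped behind each surround.
COLS, ROWS = 7, 3
TILE_W, TILE_H = 1024, 768
BEZEL = 200

def crop_commands(source):
    """Yield one ffmpeg command per iPad, cropping its tile out of the source."""
    for row in range(ROWS):
        for col in range(COLS):
            # Each tile is offset by a full screen plus a bezel's worth of
            # dropped pixels, so content appears to pass behind the borders.
            x = col * (TILE_W + BEZEL)
            y = row * (TILE_H + BEZEL)
            out = "tile_r%d_c%d.mp4" % (row, col)
            yield ["ffmpeg", "-i", source,
                   "-vf", "crop=%d:%d:%d:%d" % (TILE_W, TILE_H, x, y),
                   out]
```

Each command could then be run with `subprocess.check_call`. Dropping the bezel pixels is what makes content look like it physically passes behind the borders rather than stretching across them.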
At this point we had a working iPad video wall, but it wasn't particularly fault tolerant, at least not enough to stand up to 10-hour days in a crowded (in terms of RF traffic) convention hall. The lack of time took away a lot of our options here. We thought about trying to connect the iPads over USB, maybe even using MIDI, but that route was a non-starter: USB hubs that can power multiple iPads are hard to come by, so we'd lose the ability to charge the iPads while playing.
Once wired connections were ruled out, we were left with the option of making the wireless connections as robust as possible. Stable is one thing, but wireless connections will never be foolproof. To handle hiccups we spent some time making all of the code more robust. If an iPad drops out, it will immediately start trying to reconnect so that it'll be ready for the next round. There were several potential failure cases, and we had to look closely at each of them to ensure the iPad would retry as expected without beating the server to death. It's not perfect, but good enough for a couple days of hack-a-thon style work.
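The "retry without beating the server to death" behavior is essentially a reconnect loop with capped exponential backoff. A minimal sketch of that pattern, in Python for illustration (the app itself is native iOS code, and the host, port, and backoff schedule here are placeholders):

```python
import socket
import time

def next_delay(delay, cap=30.0):
    """Double the wait after each failure, but never exceed the cap."""
    return min(delay * 2, cap)

def connect_with_backoff(host, port, initial_delay=1.0):
    """Keep retrying the server without flooding it after every failure."""
    delay = initial_delay
    while True:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            time.sleep(delay)          # back off so a dead server isn't hammered
            delay = next_delay(delay)
```

The cap matters: with 21 clients all retrying, unbounded doubling would leave dropped iPads dark for minutes, while retrying too aggressively would pile connection attempts onto a server that's already struggling.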
A test run of the 7×3 setup on a table in our office.
We're pretty proud of the results. If you're at Comic-Con, stop by our booth, #4902, and check it out. We have some great signings scheduled, and you'll be able to try our app on our demo iPads. Either way, we plan to open source the results once we've recovered from the event and had a little bit of time to clean up the code. Once we do, the iPad video wall code will live on GitHub.
Footage of the wall in our booth at Comic-Con.