Preamble to building a first game

It’s amazing how fast time flies when you’re having fun.  It’s already time to let the puppy out again.

I’m not actually a big game player, anymore.  That’s partially because there’s so much stuff I want to make and learn that I don’t have time for traditional games (I’d love it if learning were more gamified…).  And it’s partially because of repetitive stress injury from working with computers 12-16 hours a day as it is.  The last thing I want to do is play a game when it hurts to do so.

That said, as part of my Hololens initiative (it amuses me to use the word ‘initiative’), I decided I needed to learn Unity, a 3D environment for making games.

As I was playing around with the Kinect I realized that it really is only a partially realized device (probably meant to be paired with the Hololens from the beginning [and probably why it was included with the XBox from the start {because people would have bitched |but they did anyway <damned if you do and damned if you don’t>|}]).  The things I would want to do with it can’t really be done out of the box.

For instance, if I wanted to create a runtime-definable macro sign language to associate with code expansions in my code editor of choice (Visual Studio), I could not at this time.  It’s probably possible, but not in any direct sort of way.  Just as I described the steps necessary to get my computer ready to develop for the Kinect, there are steps necessary to get the Kinect itself into working order.

First of all, if I were to want to make such a Visual Studio plugin I would have to learn about writing Visual Studio plugins.  That’s a non-problem.  I hear it’s a somewhat tortuous process, but it’s not a real problem, that’s just knowledge acquisition, to be muscled through on a weekend.  I would also have to think of a way to get the Kinect to send messages to this plugin.  One way or another, that data pipeline would have to be established – ideally I could send very succinct messages back and forth between the editor and the Kinect code.
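To make the pipeline idea concrete, here is a minimal sketch of what those succinct messages might look like, assuming a local IPC channel (a named pipe or loopback socket) between the Kinect process and the editor plugin. The message fields and gesture names are entirely invented for illustration:

```python
# Hypothetical message format for a Kinect-to-editor pipeline.
# One recognized gesture becomes one compact JSON line; the editor
# plugin decodes it and decides whether to trigger a code expansion.
import json

def encode_gesture(gesture_id, confidence):
    """Pack a recognized gesture into a one-line JSON message (Kinect side)."""
    return json.dumps({"gesture": gesture_id, "conf": round(confidence, 2)})

def decode_gesture(message):
    """Unpack a message on the editor side."""
    payload = json.loads(message)
    return payload["gesture"], payload["conf"]

# Round-trip example with made-up names:
msg = encode_gesture("expand_foreach", 0.93)
gesture, conf = decode_gesture(msg)
```

The point of keeping the messages this small is that the transport hardly matters; any pipe that can move one line of text per gesture would do.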

The Kinect code is what I’m really interested in (actually, that’s quite subordinate to the real goal and the coolness of a kinetic macro language), and specifically, the gesture recognition stuff.  But the fact is, out of the box, Kinect is not good enough for what I want.  It tracks three joints in the hand, four if you include the wrist.  Furthermore, it tracks them very poorly, IMO, and they jump around like Mexican jumping beans.  I could make something work over the top of that, but it probably wouldn’t help with RSI.  As far as I can see, any reliable complex gesture recognition from the Kinect with the provided SDKs would require larger motions than are available from the fingers.  Larger motions translate into elbow and shoulder, and that gets tiring quickly.

Here’s an interesting article from Microsoft researchers in China titled: Realtime and Robust Hand Tracking from Depth.  Apparently, good hand tracking along the generally recognized 26 degrees of freedom of motion of the human hand is a hard problem.  Nevertheless, they demonstrate that it has been done, including seemingly very accurate finger positioning.  And that is using hardware inferior to the Kinect and my computer.

I have some interesting intuitions of how to improve existing body tracking through a sort of geometric space of possible conformations as well as transitions between them (think finger position states and the changes available to any particular state based on the reality of joint articulation, etc).  Ultimately, a body state would be maintained by a controller that understood likely body states and managed the transitions between states as indicated by the data stream, keeping in mind the likelihood, for instance, that the user’s leg is above their neck line.  I use that as an example because Kinect’s gesture recognizer very commonly puts my knee joints above the crown of my head when I’m sitting in a chair and moving my hand.  A body would be an integration of various body state controllers.  It would all have to be fleshed out (pun entirely intended).
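The controller idea above can be sketched as a small state machine: a graph of plausible conformations, where each incoming observation is either accepted as an anatomically legal transition from the current state or rejected as tracker jitter. The state names and the transition table below are purely illustrative, not anything from the Kinect SDK:

```python
# A toy "body state controller": hold a current hand conformation and
# only move to a new one if the transition is anatomically plausible.
# Implausible observations (e.g. a knee above the head) are treated as
# sensor noise and the previous state is kept. All names are invented.

PLAUSIBLE_TRANSITIONS = {
    "open_palm": {"open_palm", "half_curl", "point"},
    "half_curl": {"open_palm", "half_curl", "fist"},
    "fist":      {"half_curl", "fist"},
    "point":     {"open_palm", "point"},
}

class BodyStateController:
    def __init__(self, initial_state="open_palm"):
        self.state = initial_state

    def observe(self, candidate_state):
        """Accept the observation only if it is a legal transition from
        the current state; otherwise hold the previous state."""
        if candidate_state in PLAUSIBLE_TRANSITIONS[self.state]:
            self.state = candidate_state
        return self.state

controller = BodyStateController()
controller.observe("half_curl")  # legal: open_palm -> half_curl
controller.observe("point")      # implausible from half_curl; state holds
```

A full version would score transitions probabilistically against the data stream rather than accept or reject them outright, and a whole body would compose many such controllers, but the filtering principle is the same.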

Watching the demo in the linked article above got me into 3D modeling which led me to Unity.

Now that I’ve gone through the Unity tutorials, I feel quite prepared to begin making a game.  I have to say that I am taking to Unity’s model of development like a fish to water.  GameObjects and Components are very intuitive to work with.  Seeing how easy it really is, I decided I’d make a game, even if game development in these sorts of traditional terms isn’t something that I intend to do a great deal of.  I’ve got some catching up to do in terms of 3D math and geometric reasoning, but that stuff is fun when it is being learned in relation to problems that you encounter and need to solve to get to where you want to go.  That’s how math is best learned, IMHO. YMMV.

So, with all that, in my next post I’ll describe the initial plans for my first game.
