Light Painting

This video shows something of what I think one corner of our holographic future could actually look like:

The artist, Darren Pearson, makes the moving images in a rather work-intensive manner.  He paints with light sticks in the air while the camera’s lens is open in a dark scene.  It takes 24 paintings to make one second of movie.  Ouch.

Presumably tools are coming along that will make it easier for us to paint in light.  That’s one of my hopes, anyway.

Preamble to building a first game

It’s amazing how fast time flies when you’re having fun.  It’s already time to let the puppy out again.

I’m not actually a big game player, anymore.  That’s partially because there’s so much stuff I want to make and learn that I don’t have time for traditional games (I’d love it if learning were more gamified…).  And it’s partially because of repetitive stress injury from working with computers 12-16 hours a day as it is.  The last thing I want to do is play a game when it hurts to do so.

That said, as part of my Hololens initiative (it amuses me to use the word ‘initiative’), I decided I needed to learn Unity, a 3D environment for making games.

As I was playing around with the Kinect I realized that it really is only a partially realized device (probably meant to be paired with the Hololens from the beginning [and probably why it was included with the XBox from the start {because people would have bitched |but they did anyway <damned if you do and damned if you don’t>|}]).  The things I would want to do with it can’t really be done out of the box.

For instance, if I wanted to create a runtime-definable macro sign language to associate with code expansions for use in my code editor of choice (Visual Studio), I couldn’t at this time.  It’s probably possible, but not in any direct sort of way.  Just as there were steps necessary to get my computer ready to develop for the Kinect, there are steps necessary to get the Kinect itself into working order.

First of all, if I were to want to make such a Visual Studio plugin I would have to learn about writing Visual Studio plugins.  That’s a non-problem.  I hear it’s a somewhat tortuous process, but it’s not a real problem, that’s just knowledge acquisition, to be muscled through on a weekend.  I would also have to think of a way to get the Kinect to send messages to this plugin.  One way or another, that data pipeline would have to be established – ideally I could send very succinct messages back and forth between the editor and the Kinect code.
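To make the pipeline idea concrete, here’s a minimal sketch of the kind of succinct message format I have in mind for the Kinect side to send to an editor plugin.  Everything here is hypothetical — the gesture names, the macro table, and the wire format are all invented for illustration, not anything the Kinect SDK or Visual Studio provides:

```python
import json

# Hypothetical wire format: one compact, newline-delimited JSON message per
# recognized gesture.  Field names ("g", "c") are kept short on purpose.
def encode_gesture_message(gesture: str, confidence: float) -> bytes:
    """Pack a recognized gesture into a message for the editor plugin."""
    return (json.dumps({"g": gesture, "c": round(confidence, 2)}) + "\n").encode("utf-8")

def decode_gesture_message(raw: bytes) -> dict:
    """Unpack a message on the editor-plugin side."""
    return json.loads(raw.decode("utf-8"))

# A macro table the plugin might consult: gesture name -> code expansion.
# Both columns are made up; a real table would be user-definable at runtime.
MACROS = {
    "pinch-spread": "public void NewMethod()\n{\n}\n",
    "two-finger-swipe": "// TODO: ",
}
```

The actual transport (a local socket, a named pipe, whatever) matters less than keeping the messages this small, so the editor side stays dumb and fast.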

The Kinect code is what I’m really interested in (actually, that’s quite subordinate to the real goal and the coolness of a kinetic macro language), and specifically, the gesture recognition stuff.  But the fact is, out of the box, Kinect is not good enough for what I want.  It tracks three joints in the hand, four if you include the wrist.  Furthermore, it tracks them very poorly, IMO, and they jump around like Mexican jumping beans.  I could make something work over the top of that, but it probably wouldn’t help with RSI.  As far as I can see, any reliable complex gesture recognition from the Kinect with the provided SDKs would require larger motions than are available from the fingers.  Larger motions translate into elbow and shoulder, and that gets tiring quickly.
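The jumping-bean problem is at least partly treatable in software.  A common first pass is to run each joint position through an exponential moving average, trading a little lag for a lot less jitter.  This is a generic smoothing sketch, not anything from the Kinect SDK; the 0.3 factor and the sample positions are illustrative:

```python
# Exponential moving average for a 3D joint position.  Lower alpha means
# smoother (and laggier) output; alpha=1.0 would pass raw data through.
def smooth(prev, raw, alpha=0.3):
    """Blend the new raw joint position toward the previous smoothed one."""
    return tuple(p + alpha * (r - p) for p, r in zip(prev, raw))

# Feed in a few noisy frames for one joint.
smoothed = (0.0, 0.0, 0.0)
for raw in [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (1.1, -0.1, 0.0)]:
    smoothed = smooth(smoothed, raw)
```

Smoothing alone won’t rescue three-joints-per-hand tracking, but it would be the bottom layer under anything built on top of it.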

Here’s an interesting article from Microsoft researchers in China titled: Realtime and Robust Hand Tracking from Depth.  Apparently, good hand tracking along the generally recognized 26 degrees of freedom of motion of the human hand is a hard problem.  Nevertheless, they demonstrate that it can be done, including seemingly very accurate finger positioning.  And they did it using hardware inferior to the Kinect and my computer.

I have some interesting intuitions of how to improve existing body tracking through a sort of geometric space of possible conformations as well as transitions between them (think finger position states and the changes available to any particular state based on the reality of joint articulation, etc).  Ultimately, a body state would be maintained by a controller that understood likely body states and managed the transitions between states as indicated by the data stream, keeping in mind the likelihood, for instance, that the user’s leg is above their neck line.  I use that as an example because Kinect’s gesture recognizer very commonly puts my knee joints above the crown of my head when I’m sitting in a chair and moving my hand.  A body would be an integration of various body state controllers.  It would all have to be fleshed out (pun entirely intended).
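The controller idea boils down to a constrained state machine: only accept a proposed state if joint articulation could plausibly get you there from the current one, and otherwise treat the reading as noise.  Here’s a toy version for a single hand — the state names and transition table are invented for illustration, far cruder than the conformation space I have in mind:

```python
# Which hand states are reachable from which, in one sensor frame.
# A hand can't snap from fully open to a fist without passing through
# a half-curl, so an "open" -> "fist" reading is rejected as jitter.
TRANSITIONS = {
    "open": {"open", "half-curl"},
    "half-curl": {"open", "half-curl", "fist"},
    "fist": {"half-curl", "fist"},
}

class HandStateController:
    def __init__(self, state="open"):
        self.state = state

    def observe(self, candidate):
        """Accept the sensor's proposed state only if articulation allows
        the transition; otherwise keep the current state."""
        if candidate in TRANSITIONS[self.state]:
            self.state = candidate
        return self.state
```

A full body would compose many of these controllers, with cross-constraints between them (the knees-above-head check would live at that level).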

Watching the demo in the linked article above got me into 3D modeling which led me to Unity.

Now that I’ve gone through the Unity tutorials, I feel quite prepared to begin making a game.  I have to say that I am taking to Unity’s model of development like a fish to water.  GameObjects and Components are very intuitive to work with.  Seeing how easy it really is, I decided I’d make a game, even if game development in these sorts of traditional terms isn’t something that I intend to do a great deal.  I’ve got some catching up to do in terms of 3D math and geometric reasoning, but that stuff is fun when it is being learned in relation to problems that you encounter and need to solve to get to where you want to go.  That’s how math is best learned, IMHO. YMMV.
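For anyone who hasn’t touched Unity, the GameObject/Component split is just composition over inheritance: a GameObject is a bag of Components, and each Component adds one behavior.  Here’s a stripped-down sketch of the pattern in Python (Unity itself uses C#, and its real API is much richer — the names below beyond GameObject/Component are my own):

```python
class Component:
    """One unit of behavior, attached to a GameObject."""
    def __init__(self):
        self.game_object = None

    def update(self):
        pass  # called once per frame

class Mover(Component):
    """Nudges its GameObject along x each frame."""
    def __init__(self, speed):
        super().__init__()
        self.speed = speed

    def update(self):
        x, y, z = self.game_object.position
        self.game_object.position = (x + self.speed, y, z)

class GameObject:
    """A named thing in the scene; its behavior is the sum of its components."""
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0, 0.0)
        self.components = []

    def add_component(self, component):
        component.game_object = self
        self.components.append(component)
        return component

    def update(self):
        for c in self.components:
            c.update()

player = GameObject("player")
player.add_component(Mover(speed=0.5))
player.update()  # simulate one frame
```

The appeal is that you build behavior by snapping Components together rather than by growing a class hierarchy, which is exactly why it feels so intuitive in the editor.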

So, with all that, in my next post I’ll describe the initial plans for my first game.

My Holodeck

So, I got pretty pumped about Microsoft’s Hololens.  So much so, in fact, that I managed to register for Build 2015.  I’ve known I’d jump on the Virtual Reality/Augmented Reality bandwagon some day (the tune’s pretty impressive if you listen closely), and really, reflecting on the matter, I was waiting for it to mature to the point where I was willing to engage with it.

I’m confident that my impulse towards such things is not unconnected to the nature of my Grandfather (it would be really cool to make his paintings immersive and navigable).  I’ve got lots of very intriguing artworks that I’d like to make, but I could never reconcile myself with paint and canvas.  Too… much… ancient.

Anyway, obviously I can’t get my hands on a Hololens quite yet but I wanted to get ready for when I can.  How?  Start programming for the Kinect, I figured.  I kind of assume that Microsoft is going to use a similar design philosophy between the two since they stated that the Kinect was their road to the Hololens.

In any case, it took some doing.  First of all, I didn’t have a Kinect.  Secondly, the Kinect V2 actually requires Windows 8 and I was running 7.

Blah, it’s a long story of boring tech challenges that included having to literally rip my laptop screen apart (plastic flew and blood flowed and you can see what I’m talking about in the image [this is the sole non-boring detail]) so that I could replace some parts so that I could install Windows 8 so that I could install the SDK so that I could play.

But none of that is the point of this post, which is to create a sort of monument to the newest iteration of my workspace/holodeck lab.  Some people take pictures of their face each day for years.  That’s really interesting and I’ve thought of doing it myself.  On the same note, I’ve been taking pictures of my workspaces for years (not every day, although that would likely be revealing).  It’s interesting to see them/it evolve.  Who even knows if the sky’s the limit for such a pregnant space/concept/role.

Now, mind you, I’ve watched some YouTube videos of people showing their workspaces, practically jerking their electronics off onto their furniture as they went (“And over here you can see my Gold Exclusive Version 15 Flippetywidget, and over there my Platinum Spectral Wank-Wonk…”), and it literally depressed me and threatened to ruin my mood of an evening.  All I’d wanted were good layout ideas.  I felt like I’d made a horrible mistake in a Google image search.

I just want to be clear that although I like my monitors, for instance, it is because they create walls of text in front of me.  I like electronics and stuff-in-general just to the degree that they manage to serve as ice to the figure skates of my creativity.

You can see the Kinect up in the top right corner, peering down on where I sit, waiting for me to tell it how to interpret my gestures.

holodeck

Pavlov’s fretting in the background, concerned about a squirrel that’s one layer too deep for this depiction.  So many layers, foregrounds, backgrounds, Magrittian grounds…

Here’s the (IR depth) view from the other side:

KinectScreenshot-Depth-12-40-41