My Grok Hurts

I’m reading Category Theory for the Sciences.  It’s a wild ride through abstraction, connecting the tools and concepts I use daily to the deep mathematical concepts that underlie them, and then going a step further to explore the even broader abstractions that unite the representations of those mathematical abstractions behind the concreteness.

The goal:

… category theory is incredibly efficient as a language for experimental design patterns, introducing formality while remaining flexible.  It forms a rich and tightly woven conceptual fabric that allows the scientist to maneuver between different perspectives whenever the need arises.  Once she weaves that fabric into her own line of research, she has an ability to think about models in a way that simply would not occur without it.

I’m a Princess Programmer

I’ve been programming professionally for eight years now.  Something I encountered early on was the non-programmer business professional’s appraisal that programmers are “princesses”.  I always bridled at that, but now I identify.

I actually don’t have the time to write a pretty blog post.  I never do anymore.  Nevertheless, I’ll just say that being a “princess programmer” isn’t necessarily a bad thing.  The very thing that makes a programmer a “princess” is the very thing that makes them good.

For instance, I realized, I don’t like to program without 4 monitors, 3 of which must be in portrait mode.  Also, I require a mechanical keyboard and at least two mice, left and right, at least one of which is a Logitech smooth scroller.  I prefer my keyboard to be tenkeyless (no numpad).  I also have expectations about the speed of my computer and my graphics card, and about how up to date the programs I use to program programs are.

I am a princess programmer.

But… I am also absurdly effective.  I’m worth two of me.  I am good at developing workflows, yours and mine.  I have workflows that I use to develop your workflows.  I develop workflows at levels of detail that bring you to tears.  I develop workflows at the level of my workflow-building tools.

Understandably, my tools are important to me.  My various hammers beat out your various hammers.

Call us princesses.  Call us soldiers.  You can’t win your war without us.  Don’t complain about how sharp we’ve become accustomed to our swords being when you expect us to cut through iron.

Preamble to building a first game

It’s amazing how fast time flies when you’re having fun.  It’s already time to let the puppy out again.

I’m not actually a big game player, anymore.  That’s partially because there’s so much stuff I want to make and learn that I don’t have time for traditional games (I’d love it if learning were more gamified…).  And it’s partially because of repetitive stress injury from working with computers 12-16 hours a day as it is.  The last thing I want to do is play a game when it hurts to do so.

That said, as part of my Hololens initiative (it amuses me to use the word ‘initiative’), I decided I needed to learn Unity, a 3D environment for making games.

As I was playing around with the Kinect I realized that it really is only a partially realized device (probably meant to be paired with the Hololens from the beginning [and probably why it was included with the XBox from the start {because people would have bitched |but they did anyway <damned if you do and damned if you don’t>|}]).  The things I would want to do with it can’t really be done out of the box.

For instance, if I wanted to create a runtime-definable macro sign language to associate with code expansions for use in my code editor of choice (Visual Studio), I could not at this time.  It’s probably possible, but not in any direct sort of way.  Just as there were steps necessary to get my computer ready to develop for the Kinect, there are steps necessary to get the Kinect itself into working order.

First of all, if I wanted to make such a Visual Studio plugin I would have to learn about writing Visual Studio plugins.  That’s a non-problem.  I hear it’s a somewhat tortuous process, but it’s not a real problem; that’s just knowledge acquisition, to be muscled through on a weekend.  I would also have to think of a way to get the Kinect to send messages to this plugin.  One way or another, that data pipeline would have to be established – ideally I could send very succinct messages back and forth between the editor and the Kinect code.
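Just to sketch the shape that pipeline might take (everything here is hypothetical – a local socket pair and a made-up message format of my own, not any actual Kinect or Visual Studio API):

```python
import socket

def encode_gesture(name, args=()):
    """Pack a gesture event into a succinct newline-delimited message."""
    return (";".join([name, *map(str, args)]) + "\n").encode()

def decode_gesture(line):
    """Unpack one message back into (name, args)."""
    name, *args = line.decode().rstrip("\n").split(";")
    return name, args

# Loopback demo: the Kinect-side process would write to one end,
# the editor-plugin side would read from the other.
kinect_side, editor_side = socket.socketpair()
kinect_side.sendall(encode_gesture("expand_snippet", ["foreach"]))
name, args = decode_gesture(editor_side.makefile("rb").readline())
```

In real life the two ends would be separate processes talking over a named pipe or localhost TCP, but the point is the same: tiny, cheap-to-parse messages, so the editor side stays dumb and fast.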

The Kinect code is what I’m really interested in (actually, that’s quite subordinate to the real goal and the coolness of a kinetic macro language), and specifically, the gesture recognition stuff.  But the fact is, out of the box, Kinect is not good enough for what I want.  It tracks three joints in the hand, four if you include the wrist.  Furthermore, it tracks them very poorly, IMO, and they jump around like Mexican jumping beans.  I could make something work over the top of that, but it probably wouldn’t help with RSI.  As far as I can see, any reliable complex gesture recognition from the Kinect with the provided SDKs would require larger motions than are available from the fingers.  Larger motions translate into elbow and shoulder, and that gets tiring quickly.
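For what it’s worth, the jumping-bean problem is partly tameable in software.  A dirt-simple sketch of my own (nothing from the Kinect SDK): exponentially smooth each joint coordinate, so a single noisy frame barely moves the track.

```python
def smooth(positions, alpha=0.3):
    """Exponentially smooth a stream of (x, y, z) joint positions.
    Lower alpha = steadier but laggier output."""
    out = []
    current = None
    for p in positions:
        if current is None:
            current = p  # first frame is taken at face value
        else:
            current = tuple(alpha * new + (1 - alpha) * old
                            for new, old in zip(p, current))
        out.append(current)
    return out

# A jumpy wrist: one outlier frame only nudges the smoothed track.
raw = [(0, 0, 0), (0, 0, 0), (5, 5, 5), (0, 0, 0)]
smoothed = smooth(raw)
```

The trade-off, of course, is latency – exactly what you don’t want in a macro language – so it’s a band-aid, not a fix for the underlying tracking.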

Here’s an interesting article from Microsoft researchers in China titled: Realtime and Robust Hand Tracking from Depth.  Apparently, good hand tracking along the generally recognized 26 degrees of freedom of motion of the human hand is a hard problem.  Nevertheless, they demonstrate that it has been done, including seemingly very accurate finger positioning.  And that is using hardware inferior to the Kinect and my computer.

I have some interesting intuitions of how to improve existing body tracking through a sort of geometric space of possible conformations as well as transitions between them (think finger position states and the changes available to any particular state based on the reality of joint articulation, etc).  Ultimately, a body state would be maintained by a controller that understood likely body states and managed the transitions between states as indicated by the data stream, keeping in mind the likelihood, for instance, of the user’s leg being above their neckline.  I use that as an example because Kinect’s gesture recognizer very commonly puts my knee joints above the crown of my head when I’m sitting in a chair and moving my hand.  A body would be an integration of various body state controllers.  It would all have to be fleshed out (pun entirely intended).
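A toy version of that intuition (all the state names and transitions here are hypothetical, invented for illustration): a controller that only accepts transitions its graph deems anatomically plausible in one frame, and otherwise keeps its last believed state.

```python
# Hypothetical conformation states for one hand, each mapped to the
# states real finger joints could plausibly reach in a single frame.
PLAUSIBLE = {
    "open":        {"open", "spread", "half_closed"},
    "spread":      {"spread", "open"},
    "half_closed": {"half_closed", "open", "fist"},
    "fist":        {"fist", "half_closed"},
}

class BodyStateController:
    """Tracks one body part; rejects transitions the joint graph rules out."""
    def __init__(self, initial):
        self.state = initial

    def update(self, observed):
        # A "spread -> fist" jump in one frame is treated as sensor noise
        # (like a knee reported above the crown of the head) and ignored.
        if observed in PLAUSIBLE[self.state]:
            self.state = observed
        return self.state

hand = BodyStateController("open")
states = [hand.update(s)
          for s in ["spread", "fist", "open", "half_closed", "fist"]]
```

A full body would be a bundle of these, one per part, with the transition sets weighted by likelihood rather than being a hard yes/no – but the skeleton of the idea fits in a dictionary.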

Watching the demo in the linked article above got me into 3D modeling which led me to Unity.

Now that I’ve gone through the Unity tutorials, I feel quite prepared to begin making a game.  I have to say that I am taking to Unity’s model of development like a fish to water.  GameObjects and Components are very intuitive to work with.  Seeing how easy it really is, I decided I’d make a game, even if game development in these sorts of traditional terms isn’t something that I intend to do a great deal.  I’ve got some catching up to do in terms of 3D math and geometric reasoning, but that stuff is fun when it is being learned in relation to problems that you encounter and need to solve to get to where you want to go.  That’s how math is best learned, IMHO.  YMMV.

So, with all that, in my next post I’ll describe the initial plans for my first game.

My Holodeck

So, I got pretty pumped about Microsoft’s Hololens.  So much so, in fact, that I managed to register for Build 2015.  I’ve known I’d jump on the Virtual Reality/Augmented Reality bandwagon some day (the tune’s pretty impressive if you listen closely), and really, reflecting on the matter, I was waiting for it to mature to the point where I was willing to engage with it.

I’m confident that my impulse towards such things is not unconnected to the nature of my Grandfather (it would be really cool to make his paintings immersive and navigable).  I’ve got lots of very intriguing artworks that I’d like to make, but I could never reconcile myself with paint and canvas.  Too… much… ancient.

Anyway, obviously I can’t get my hands on a Hololens quite yet but I wanted to get ready for when I can.  How?  Start programming for the Kinect, I figured.  I kind of assume that Microsoft is going to use a similar design philosophy between the two since they stated that the Kinect was their road to the Hololens.

In any case, it took some doing.  First of all, I didn’t have a Kinect.  Secondly, the Kinect V2 actually requires Windows 8 and I was running 7.

Blah, it’s a long story of boring tech challenges that included having to literally rip my laptop screen apart (plastic flew and blood flowed and you can see what I’m talking about in the image [this is the sole non-boring detail]) so that I could replace some parts so that I could install Windows 8 so that I could install the SDK so that I could play.

But none of that is the point of this post, which is to create a sort of monument to the newest iteration of my workspace/holodeck lab.  Some people take pictures of their face each day for years.  That’s really interesting and I’ve thought of doing it myself.  On the same note, I’ve been taking pictures of my workspaces for years (not every day, although that would likely be revealing).  It’s interesting to see them/it evolve.  Who even knows if the sky’s the limit for such a pregnant space/concept/role.

Now, mind you, I’ve watched some YouTube videos of people showing their workspaces, practically jerking their electronics off onto their furniture as they went (“And over here you can see my Gold Exclusive Version 15 Flippetywidget, and over there my Platinum Spectral Wank-Wonk…”), and it literally depressed me and threatened to ruin my mood of an evening.  All I’d wanted were good layout ideas.  I felt like I’d made a horrible mistake in a Google image search.

I just want to be clear that although I like my monitors, for instance, it is because they create walls of text in front of me.  I like electronics and stuff-in-general just to the degree that they manage to serve as ice to the figure skates of my creativity.

You can see the Kinect up in top right corner, peering down on where I sit, waiting for me to tell it how to interpret my gestures.

[image: holodeck]

Pavlov’s fretting in the background, concerned about a squirrel that’s one layer too deep for this depiction.  So many layers, foregrounds, backgrounds, Magrittian grounds…

Here’s the (IR depth) view from the other side:

[image: KinectScreenshot-Depth-12-40-41]

Metaprogramming Skill

There comes a time when programming itself is normal, easy.  You can encounter problems that are difficult to solve, sure, but programming itself is as second-nature as talking or driving.  Looking at problems as systems of decomposable parts becomes second nature too.

Whether at this point or beforehand, a programmer encounters a particular reality over and over again.  The real impediment to coding is not lack of knowledge about a programming language or a functional library, nor again is it the difficulty of reasoning about the problem space.  The difficult thing about programming, once you’re good at it, is learning how to enter that highly efficient flow state at will.  The really difficult thing is being able to pop in and out of it as people distract you without even knowing they’re plunking your cognitive handholds.

I don’t think a lot of people really get good at this (especially that second part).  But it really depends on your flow state, too.  Some flow states are deeper than others.  You may be accustomed to a “2 inch deep” flow state whereas someone else may prefer “10 foot deep” flow states.  My experience is that 10 foot deep flow states are more challenging to get into and stay in than 2 inch deep flow states, but they’re also better.  When you’re flowing 10 feet deep you can not only flow with your fingers over the keyboard, the complex editor key chords, and the nano-functionality you’re working on at the moment, but you can also keep a strategic eye on the overall architecture and even on the long-term goals of the system.  Ideally, you practice settling into your coding time so that, as distraction-free time accumulates, you sink down through the 2 inch flow states to the 10 foot ones, as if pulled by gravity.

Picture it – the flow state is pulling you down.  A little hypnotic suggestion can always help.

But the first thing you have to do is give yourself space to let that settling down commence.  From the outside, that looks like a person sitting at a computer staring at a screen.  From the inside, it feels like disengaging-from/quieting the inner dialog and expanding one’s proprioception (oh yeah, that’s right, I have an abdomen and it has more to say than “I’m hungry”; in fact, it’s talking about how it is straining to help my lower back with my bad posture…).

This is all the same as meditation.

A foundational skill of a programmer, in my book, is the ability to sit in front of a computer and check into oneself to the point where one finds oneself flowing into the flow state, almost as if by habit.  Programming becomes a meditation on the programmer-body-action/problem-object/solution-visualization.

In this state, we are the wizards of a world that is redesigning itself for us through us.


Bonus Material: What is flow state?  Why does “going deeper” allow you to keep track of more that is going on (and there is a hell of a lot going on when a programmer programs)?  I think going deeper is simply the stilling of the personal ego.  Fewer of the mind’s resources are going into propping up an internal context of meaning concerning one’s “personality” and ALL of its desires and complaints.  As these resources of conscious attention free up, they naturally start seeing all the other stuff that is going on in the mind.  They also give the brain’s “user”, the conscious will, a chance to stand back and SELECT which thoughts to give extra time in the spotlight of conscious attention.


USE my_brain

SELECT * FROM BackgroundThoughts WHERE StrategicValue > 100 AND AssociativityThreshold < 5

GO


In this way, the conscious mind can be used in the way it is best utilized – as a strategic overlay to initiate and adjust automatic processes in situ, in relation to a higher-level view of the details of the moment.  This is where masters like to set up base camp.