Light Painting

This video shows something of what I think one corner of our holographic future could actually look like:

The artist, Darren Pearson, makes the moving images in a rather labor-intensive manner.  He paints with light sticks in the air while the camera’s shutter is open in a dark scene.  It takes 24 paintings to make one second of movie.  Ouch.

Presumably tools are coming along that will make it easier for us to paint in light.  That’s one of my hopes, anyway.

Preamble to building a first game

It’s amazing how fast time flies when you’re having fun.  It’s already time to let the puppy out again.

I’m not actually a big game player, anymore.  That’s partially because there’s so much stuff I want to make and learn that I don’t have time for traditional games (I’d love it if learning were more gamified…).  And it’s partially because of repetitive stress injury from working with computers 12-16 hours a day as it is.  The last thing I want to do is play a game when it hurts to do so.

That said, as part of my Hololens initiative (it amuses me to use the word ‘initiative’), I decided I needed to learn Unity, a 3D environment for making games.

As I was playing around with the Kinect I realized that it really is only a partially realized device (probably meant to be paired with the Hololens from the beginning [and probably why it was included with the XBox from the start {because people would have bitched |but they did anyway <damned if you do and damned if you don’t>|}]).  The things I would want to do with it can’t really be done out of the box.

For instance, if I wanted to create a runtime-definable macro sign language to associate with code expansions for use in my code editor of choice (Visual Studio), I couldn’t at this time.  It’s probably possible, but not in any direct sort of way.  Just as I described the steps necessary to get my computer ready to develop for the Kinect, there are steps necessary to get the Kinect itself into working order.

First of all, if I were to want to make such a Visual Studio plugin, I would have to learn about writing Visual Studio plugins.  That’s a non-problem.  I hear it’s a somewhat tortuous process, but it’s not a real problem; that’s just knowledge acquisition, to be muscled through on a weekend.  I would also have to think of a way to get the Kinect to send messages to this plugin.  One way or another, that data pipeline would have to be established – ideally I could send very succinct messages back and forth between the editor and the Kinect code.
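For what it’s worth, here’s a minimal sketch of what those succinct messages might look like.  Everything here (the message shape, the function names, the gesture name) is my own hypothetical invention for illustration – the real plugin would live in C# and nothing below comes from any Kinect or Visual Studio API:

```python
import json

# Hypothetical message format for a Kinect -> editor pipeline.
# One terse JSON line per recognized gesture would be plenty to
# trigger a code expansion on the editor side.

def encode_gesture(name, confidence):
    """Kinect side: pack a recognized gesture into one JSON line."""
    msg = {"gesture": name, "conf": round(confidence, 2)}
    return (json.dumps(msg) + "\n").encode()

def decode_gesture(line):
    """Editor side: unpack the line and decide which macro to expand."""
    msg = json.loads(line)
    return msg["gesture"], msg["conf"]
```

Shipped over a local socket or named pipe, that’s the whole pipeline: a trickle of tiny, self-describing messages from the gesture recognizer to whatever wants to act on them.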

The Kinect code is what I’m really interested in (actually, that’s quite subordinate to the real goal and the coolness of a kinetic macro language), and specifically, the gesture recognition stuff.  But the fact is, out of the box, the Kinect is not good enough for what I want.  It tracks three joints in the hand, four if you include the wrist.  Furthermore, it tracks them very poorly, IMO, and they jump around like Mexican jumping beans.  I could make something work over the top of that, but it probably wouldn’t help with RSI.  As far as I can see, any reliable complex gesture recognition from the Kinect with the provided SDKs would require larger motions than are available from the fingers.  Larger motions translate into elbow and shoulder movement, and that gets tiring quickly.
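To give a flavor of “making something work over the top of that”: one common trick for jumpy sensor data is an exponential moving average over the reported joint positions.  This is my own throwaway sketch, not anything provided by the Kinect SDK:

```python
# Toy jitter filter: blend each new joint reading with the running
# estimate.  alpha near 0 trusts history (smooth but laggy);
# alpha near 1 trusts the sensor (responsive but jumpy).

def smooth(points, alpha=0.3):
    """Exponentially smooth a sequence of (x, y) joint positions."""
    if not points:
        return []
    estimate = points[0]
    out = [estimate]
    for p in points[1:]:
        estimate = tuple(alpha * c + (1 - alpha) * e
                         for c, e in zip(p, estimate))
        out.append(estimate)
    return out
```

It papers over the jumping-bean problem a little, at the cost of lag – which is exactly the tradeoff that makes small, fast finger motions so hard to recover from noisy tracking.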

Here’s an interesting article from Microsoft researchers in China titled: Realtime and Robust Hand Tracking from Depth.  Apparently, good hand tracking along the generally recognized 26 degrees of freedom of motion of the human hand is a hard problem.  Nevertheless, they demonstrate that it has been done, including seemingly very accurate finger positioning.  And that was done using hardware inferior to the Kinect and my computer.

I have some interesting intuitions about how to improve existing body tracking through a sort of geometric space of possible conformations, as well as transitions between them (think finger position states and the changes available to any particular state based on the reality of joint articulation, etc).  Ultimately, a body state would be maintained by a controller that understood likely body states and managed the transitions between states as indicated by the data stream, keeping in mind the likelihood, for instance, that the user’s leg is above their neckline.  I use that as an example because Kinect’s gesture recognizer very commonly puts my knee joints above the crown of my head when I’m sitting in a chair and moving my hand.  A body would be an integration of various body state controllers.  It would all have to be fleshed out (pun entirely intended).
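The core of that intuition can be sketched in a few lines: keep a current pose estimate and only accept sensor-suggested transitions that are anatomically reachable from it.  The states and transition table below are toy placeholders of my own (a real controller would need far richer conformations), not anything from the Kinect SDK:

```python
# Toy "body state controller": which hand poses can directly follow
# which.  E.g. you can't jump from a closed fist to a full finger
# spread without passing through an opening state.

PLAUSIBLE_TRANSITIONS = {
    "fist": {"fist", "opening"},
    "opening": {"fist", "opening", "spread"},
    "spread": {"opening", "spread"},
}

def step(current, observed):
    """Accept the observed pose only if it's reachable from the
    current one; otherwise keep the current estimate (i.e. treat
    the observation as sensor noise)."""
    if observed in PLAUSIBLE_TRANSITIONS.get(current, set()):
        return observed
    return current
```

A knee-above-the-crown reading would simply fail the reachability check and get discarded, instead of teleporting the skeleton.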

Watching the demo in the linked article above got me into 3D modeling which led me to Unity.

Now that I’ve gone through the Unity tutorials, I feel quite prepared to begin making a game.  I have to say that I am taking to Unity’s model of development like a fish to water.  GameObjects and Components are very intuitive to work with.  Seeing how easy it really is, I decided I’d make a game, even if game development in these traditional terms isn’t something I intend to do a great deal of.  I’ve got some catching up to do in terms of 3D math and geometric reasoning, but that stuff is fun when it’s learned in relation to problems you encounter and need to solve to get where you want to go.  That’s how math is best learned, IMHO. YMMV.

So, with all that, in my next post I’ll describe the initial plans for my first game.

My Holodeck

So, I got pretty pumped about Microsoft’s Hololens.  So much so, in fact, that I managed to register for Build 2015.  I’ve known I’d jump on the Virtual Reality/Augmented Reality bandwagon some day (the tune’s pretty impressive if you listen closely), and really, reflecting on the matter, I was waiting for it to mature to the point where I was willing to engage with it.

I’m confident that my impulse towards such things is not unconnected to the nature of my Grandfather (it would be really cool to make his paintings immersive and navigable).  I’ve got lots of very intriguing artworks that I’d like to make, but I could never reconcile myself with paint and canvas.  Too… much… ancient.

Anyway, obviously I can’t get my hands on a Hololens quite yet but I wanted to get ready for when I can.  How?  Start programming for the Kinect, I figured.  I kind of assume that Microsoft is going to use a similar design philosophy between the two since they stated that the Kinect was their road to the Hololens.

In any case, it took some doing.  First of all, I didn’t have a Kinect.  Secondly, the Kinect V2 actually requires Windows 8 and I was running 7.

Blah, it’s a long story of boring tech challenges that included having to literally rip my laptop screen apart (plastic flew and blood flowed and you can see what I’m talking about in the image [this is the sole non-boring detail]) so that I could replace some parts so that I could install Windows 8 so that I could install the SDK so that I could play.

But none of that is the point of this post, which is to create a sort of monument to the newest iteration of my workspace/holodeck lab.  Some people take pictures of their face each day for years.  That’s really interesting and I’ve thought of doing it myself.  On the same note, I’ve been taking pictures of my workspaces for years (not every day, although that would likely be revealing).  It’s interesting to see them/it evolve.  Who even knows if the sky’s the limit for such a pregnant space/concept/role.

Now, mind you, I’ve watched some YouTube videos of people showing their workspaces, practically jerking their electronics off onto their furniture as they went (“And over here you can see my Gold Exclusive Version 15 Flippetywidget, and over there my Platinum Spectral Wank-Wonk…”), and it literally depressed me and threatened to ruin my mood of an evening.  All I’d wanted were good layout ideas.  I felt like I’d made a horrible mistake in a Google image search.

I just want to be clear that although I like my monitors, for instance, it is because they create walls of text in front of me.  I like electronics and stuff-in-general just to the degree that they manage to serve as ice to the figure skates of my creativity.

You can see the Kinect up in the top right corner, peering down on where I sit, waiting for me to tell it how to interpret my gestures.


Pavlov’s fretting in the background, concerned about a squirrel that’s one layer too deep for this depiction.  So many layers, foregrounds, backgrounds, Magrittian grounds…

Here’s the (IR depth) view from the other side:


User Interfacing

User interfaces, as I am understanding them, are akin to dendritic wormholes in the social brain.  In some way, these days, we use our muscles, even if they be finger flicks or eye movements (there’s also the less widespread practice of interfacing directly with the nervous system), to connect our minds to this vast flow of information and meaning that is around us, willy-nilly.

User interfaces facilitate the manipulation and navigation of information by consciousness.  As our user interfaces interface more and more directly with the central nervous system, the metaphors we use to interact with information will crawl and then jump to fly ever further away from the desktop.  We’ll be able to communicate in animated meanings without all the indirection of our technology-limited ways of communicating these past billion years.

That will be a stitching together of mind on a distributed scale.  It puts me in mind of the books Nexus and Crux by Ramez Naam.  Great books.

I already consider the global superorganism to be real and manifestly existing.  But I can see how others are dubious of reifying a distributed process.  But I think of how the atmosphere is solid when you smash into it, even as it’s ethereal when you move through it slowly, or of the body from the perspective of a molecule or a cell.  I consider the resiliency of our economy as it shuttles matter and services about.  I don’t consider telephone poles or electrical wires to be inorganic.  I don’t consider stainless steel counters in restaurants to be inorganic either.  They’re evolving solutions to biological problems utilizing available raw materials.  That’s life to a ‘t’.

The natural/artificial world-split is artificial.

But I do wonder what “self-awareness” looks/feels like for the beast.  I doubt these words constitute it, even as they’re read.  It would probably be in evidence past a certain threshold and density of efficacious social interaction over the “www”.  It would exist as a vast conversation with great inbuilt context that had an onboarding process perhaps even measured in years.  As the internet enables networks of people to specialize and influence other networks stably, using these powerful CNS-integrated “languages”, we’ll see a brain-like vortex of activity form.

Well, we already have to a certain degree, and that conversation already exists, and you are a part of it as you read these words, as I am by writing and maintaining them, but we’ll have better tools to visualize, in real time, this reamalgamerging that our births injected our layered selves within.  Like Google Earth to the fifth power.


Same idea from a different angle to help throw things into sharp relief.

One last thing.  In a certain sense, this is the most important thing going on right now.  This is because no matter what whiz-bang technologies are invented that exploit the laws of nature, if we can’t, as a species, develop a more mature conversation, we’re going to lose.  Efficacious conversation is a law of nature.

User interfaces to save the day!!!  Talk about an underdog.


*Check out Discourse if you’re interested in stepping down an inroad (it has the pedigree of Stack Overflow and Coding Horror: Because Reading is Fundamental [I sincerely encourage you, if you’ve read this far, to give that last link a chance {please comment on the irony of this request below |and if you’ve got the time, reflect on how these hooks play a biological, proteanic role in shoring up that massive self-aware conversation I mentioned earlier <maybe you could drop a few breadcrumbs for someone else? /in such ways the avalanche snowballs! just saying!/>|}]).

1080P Dual Monitor Desktop Background

My mom made this desktop background for a dual monitor setup where each monitor is 1080p resolution.  It’s been my background ever since.

Besides being awesome, it’s nice that it’s big enough to be stretched across both monitors rather than duplicated.  So the left half of the image will appear on the left monitor and the right half on the right.  There are programs you can get to put different images on each monitor, but this is a nice works-out-of-the-box solution.

In Windows 7, I have the “Picture Position” set to “Tile”.  The file itself is around 12 MB, which shouldn’t be any problem for modern computers.  If anyone wants to comment on how to set it up on other systems, feel free.  I don’t have any other OSes set up at the moment, so I’m not going to get into it.  In Windows 7 and Vista, just right-click on the image and save it to your downloads folder (or wherever), then navigate to that folder, right-click, and select “Set as Desktop Background”.  However, you might then need to right-click on your desktop, select “Personalize”, click down below where it shows the image, and eventually select “Tile”.  If anyone requests it I’ll put up better instructions; however, just googling ought to get you there.

Dual Monitor Desktop Background by Sylvie Meyers

helluva lotta html tabs (or an excursion into geometry and number theory [or a mandala of a jungian archetype])

webpage of nested tabs header

I’m afraid of making a webpage of nested tabs (the tabs would be something like jqueryui.tabs).

jqueryui tabs homepage snapshot

It would truly have a lot of tabs, as I imagine it.  Each tab would have as many tabs as every tab in its layer (see below; this is actually probably false, but it’s a good heuristic).  I would label each tab header with its count, too.  I’m afraid to even calculate how many there would be.  It would be a function of the size of the browser window, the size of the tab headers, the tab implementation’s CSS margins and paddings, etc.

However, it would be further complicated by the fact that the width of a tab header is going to change, because each tab is going to be labeled by its position in the actual linear process of creating the tabs.  Thus, as more tabs are added and the string representation of a tab’s count in that overall linear process grows, so too will the width of the tab header, and that will decrease the number of tabs that can fit in a particular tab page.  Luckily, the height of the tabs can be assumed to be constant.  Ooohhhh, no it can’t.  As the tab page <div> width and height decrease due to the layered nesting of the tabs, there is going to be less measure available for the vertical or horizontal tab header, until, when the linear count’s string representation is large enough, the tab header text will wrap and thus double (triple? quadruple? …?) the height of the tab header.  One could probably find their way to an upper bound for the depth of ‘triple’ and ‘quadruple’, etc. (on second thought, you probably couldn’t, because depending on the seed algorithm, later layers may have far more tabs than earlier layers, and thus this is fraught with difficulty, too).

As I imagine this all working, tab pages would cease nesting (in a particular branch) once there was no more room in the parent tab page for another entire header containing the position text.  Furthermore, each layer, as shown in the crude drawing, would alternate looping around between vertical and horizontal tab header directions, going clockwise.

At least, that’s how I imagine this, assuming a browser could even get anywhere near a full representation.  I’m sure a single ‘layer’ could be fully displayed using an algorithm that nested inwards always on the layerth tab position of the next nested layer (but it would still have boatloads of empty tabs to create [or you could always loop in from the outermost layer until that horizontal or vertical linear collection of tab headers would wrap to a second line if another were added]).  In that way, the thing would be sort of spirally flat.  But this would also change how many tabs could possibly be created, due to the fact that the flattened, spirally first layer will have tab header text that has all low-count numbers.  Almost assuredly less than 50,000.  I mean, I’m looking at my 1920×1080 monitor (let’s assume full screen) and referencing the jqueryui.tabs visual above to guess the flattened first layer’s number of tabs using the labeling scheme mentioned.  What is that, like 50px tall and, starting at 1 and counting to 50,000, going to maybe 100px wide?  It would vary as the count grew, and even with each number, subtly, in any particular non-fixed-width font.  With this particular seed algorithm, we’re basically talking about fitting a number of slowly growing squares onto the screen, where each square contains a number increasing by one for each new square on screen.
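That “almost assuredly less than 50,000” guess can at least be sanity-checked.  Here’s a throwaway calculation that fills rows of a screen with numbered headers whose width grows with the label’s digit count; every pixel constant is made up, so treat the output as an order-of-magnitude estimate, nothing more:

```python
# Toy estimate: how many numbered tab headers fit on one flat layer,
# if a header's width depends on the digit count of its label.
# All pixel constants are invented guesses, not jQuery UI measurements.

def headers_that_fit(screen_w=1920, screen_h=1080, row_h=50,
                     char_w=10, padding=30):
    """Count labels 1, 2, 3, ... placed left-to-right, row by row."""
    n = 0
    y = 0
    while y + row_h <= screen_h:
        x = 0
        while True:
            width = len(str(n + 1)) * char_w + padding
            if x + width > screen_w:
                break  # this row is full; start the next one
            n += 1
            x += width
        y += row_h
    return n
```

Under these invented constants the answer comes out in the hundreds, comfortably under 50,000 – and it shrinks further as the labels pick up digits, which is the whole self-referential charm of the thing.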

That algorithm is boring.  It’s just limited to flatness out of concern for overloading the browser rendering engine, or even for exceeding the upper bounds of the underlying numerical type in JavaScript.

It would be interesting to develop an equation that could exactly calculate the total number of tabs, and the number of tabs in a particular browser window environment.  You could create fancier calculations, like how many tabs are visible on a particular tab page given the position of that tab page in the linear count of all tab pages.  But even that ignores the most interesting aspects of the webpage of nested tabs, such as how each of all the hidden tabs reveals a collection of tab headers containing numbers which are initially determined by the seed algorithm.  And that’s the rub: any difference in the seed algorithm is going to totally change any calculation concerning how many tabs there are total and what tabs are visible on any particular tab page.  Furthermore, any particular focused tab will begin to show different content than that initially determined by the algorithm as the user clicks through the nested tabs and moves up and down layers of the webpage.  (Think of how, on each page filled with tab headers, you can click one and see a new nested tab page filled with tab headers containing numbers, and then click again, and again, and mix them up, and then go backwards and forwards.  Now try to predict which tab headers will be visible when a particular tab header is clicked.  It would be difficult even without the mixing up of the tabs.)  But how could that history be represented and serialized (programmer talk for written to some sort of data format to be stored in memory) as it occurred, in such a way as to facilitate an easy calculation of what is visible on any tab page?  Alternatively, what could such a calculation and such a representation be similar to?  Could arbitrary non-browser environments, or “zooming” geometries that don’t restrict nesting at all, be meaningful?
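To make the flavor of such an equation concrete, here’s a toy recursive count under grossly simplified assumptions: one row of headers per page, fixed per-character label width, a page stops nesting when it’s too short to hold a header strip plus content, and every constant below is invented rather than measured:

```python
# Toy total-tab count for one (invented) seed algorithm.  A page fits
# one row of headers; each tab's page recurses, slightly smaller.
# Labels come from a single global counter, so their widths grow
# with the digit count -- the coupling described above.

HEADER_HEIGHT = 30   # assumed px per header strip
CHAR_WIDTH = 8       # assumed px per label character
PADDING = 20         # assumed px of header padding

def count_tabs(width, height, counter=None):
    """Total number of tabs nested inside a width x height page."""
    if counter is None:
        counter = [0]  # shared linear-creation counter
    tabs_here = 0
    x = 0
    while True:
        label = str(counter[0] + 1)
        w = len(label) * CHAR_WIDTH + PADDING
        if x + w > width or height < 2 * HEADER_HEIGHT:
            break
        counter[0] += 1
        tabs_here += 1
        x += w
    total = tabs_here
    # each tab's page loses the header strip and some padding
    for _ in range(tabs_here):
        total += count_tabs(width - PADDING, height - HEADER_HEIGHT, counter)
    return total
```

Change the seed algorithm (the label scheme, the shrink rule, the stop condition) and the total changes completely – which is exactly the rub.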

Scary stuff.  Here’s its genesis:

webpage of nested tabs

(I’ll change this photo as soon as I have access to my normal camera)

Of course, hidden in all those details of a particular implementation of the idea in an html browser using javascript is something a little more general concerning numbers and geometry (each layer could correspond to a prime number, for instance, and all the nested tabs would be composites of that prime number, with each of those revealed nested layers being prime numbers as well and powers of composites for the container’s prime factorizations).   I wonder if there is a correlate physical/energic process/structure?  (totally way out in left field, there [coming back around, I love how I injected that comment since this whole entire post is way out in left field!]) I wonder if you could create connections, or tunnels between tab pages, so as to count through them all efficiently and to learn paths and maybe even ways of walking paths if there could be some sort of meaningful geometry that could be moved through…
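The prime-number aside can actually be made concrete.  Here’s a toy encoding of my own (not part of any tab widget): give each nesting layer a prime, and label a tab by the product of layer-prime powers along the path down to it.  By unique prime factorization, the label then decodes back into the exact path:

```python
# Toy path-as-number encoding for nested tabs: layer i gets the i-th
# prime, and a tab's label is the product of each layer's prime
# raised to the (1-based) tab index taken at that layer.

def primes(n):
    """First n primes, by trial division (fine for small n)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def path_label(path):
    """Encode a path (one 0-based tab index per layer) as an integer."""
    label = 1
    for p, idx in zip(primes(len(path)), path):
        label *= p ** (idx + 1)
    return label
```

So the second tab of the first layer, then the first tab of the next, is 2² × 3¹ = 12, and factoring 12 recovers that path uniquely – a little tunnel system through the tab mandala, numbered by the fundamental theorem of arithmetic.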

And behind this entire gestalt is probably an active archetype, a la Jung.  I especially appreciate the lightning crease that tears down the middle of the page.  I had printed a document earlier, and I remember when the printer made a sharp sound and the creased page was ejected.  But at that time I didn’t know this iridescent archetype was going to paint itself over the top (of my night, too, ultimately, since overall I’ve devoted three to five hours to this flowering mandala from out of the blue).