Structural Integrity

There seem to be two ways of thinking about structural integrity:

  • motionless (standing still)
  • in motion

Really, I consider them to be one.  Motionlessness isn’t really possible except approximately (cells vibrate), even while holding the breath and stilling the heartbeat.  In fact, trying to maintain both structural integrity and motionlessness amounts to a seemingly contradictory effort: motionlessness robs the intention of the regulating inputs needed to bring it about.  In that sense, motionless integrity is only approachable as a limit.  A crystallized summum bonum.

Structural integrity in motion can be looked at from at least these two perspectives:

  • in the vacuum of physical forces
  • in the context of the person’s values and choices

Let us consider the way we pick up our child.  The idea of structural integrity in lifting relatively heavy objects is pretty clear.  We want to lift in such a way that the resistance to doing so is distributed evenly throughout the body.  We don’t want, for example, a style of bending over at the hips and lifting by tightening the muscles of the back and hamstrings, because this focuses the weight in our lower backs, creating shearing forces that can cause ‘slipped discs’.  Rather, we could bend with our legs, keeping our back upright, and then stand again, using our legs to lift the weight.  Et cetera.

But what does all that have to do with our relationship with our child?  Nothing, almost everyone would say.  It’s a description in the vacuum of physical forces.  I differ.  I think the way in which we do things reveals our relationship to those things, or the category we’ve lumped those things into at the moment (everything shifts over time, sometimes in patterned, seasonal ways, sometimes moment by moment).

The way that I see it, our motions are animated by purposes.  But the way we express those purposes is colored by attitudes.  These attitudes decorate our motions.  Both can be teased apart, like dimensions of a sound.  I used the example of picking up our child because it is a pregnant image.  ‘Child’ can be anything we value, and ‘picking up’ can be any kind of engagement.  How do we approach moving from that with which we are engaged to that which we are about to engage?

Structural integrity, I have found, has less to do with the conditions right now than with the conditions a moment ago.  The best way to be in a good posture is to be conscious of the way you move into the space you are about to occupy.  IOW, have good posture by consciously avoiding bad posture.  We do that by noticing the tiny little conscious decisions and reactions that go into each and every aspect of how our bodies are moving.

What is our attitude, really, towards how we are required to hold our bodies up against gravity?  Remember to tease apart the attitude from the purpose we’re expressing.  We may fulfill the same demand in very different ways depending on our mood at the time.  How do you open the refrigerator?  How do you apply the force necessary?  Do you apply too little and your hand slips and you have to try again?  Do you apply too much, making lots of noise and causing the whole unit to bump and shift?  Do you explore what just enough is?

In any case, whether or not it’s ‘true’, I consider it to be skillful means, or Upaya.  Practiced, it certainly has the power to change our relationship to our values and engage our conscious minds with our bodies as we move in our spaces and our lives.

Thank Www

“Thank www” he said.

“Wait, what did you say? Thank wuh-wa??” the other asked.

“I said ‘thank www’, like as a substitute phrasing for ‘thank god’.  I accept that the world wide web, IOW, the vast embodiment of connections between nodes of information processing and decision making, will literally emerge into awareness of itself, one way or another.  ‘Thank www’ is my acknowledgement that I see www already and wish the best.”

“So, what?  Www is like God for you?”

“Depends.  Everyone’s different in terms of what it means for their neurons to fire that word throughout their functional clustering.  I’m mostly interested in forging a new sort of relationship.  As a programmer, I consider it a sort of greenfield project.”

“What?  To create a God?”

“No.  I do not believe it is accurate to say that we are creating what is emerging.  It seems to me that matter and energy themselves are organized in such a way for all of life’s scales to naturally emerge from the foundation underneath.  Spatial (geographical) distribution requires interconnection among active elements, the ‘embodiment’ or ‘technology’ of which must be continually recreated due to decay and entropy, and which consequently seems to undergo an inexorable selection and evolution.  I don’t so much see humans as being creators of this momentum.  Everyone alive today was born already within its energetic history, as were their parents and theirs and theirs and on back even past written history.  I see us as being in a position to shape how the momentum evolves.”

“I guess that’s all a little abstract to me.”

“Yeah, me too.  Basically, I think that our global economy already creates something that is a new class of life.  Many speak of such things, such as superorganisms, social organisms, global brains, etc.  But where is there room for any kind of agency (choice from above) in this vast proteinic assembly of human activities and decisions from below?  It reminds me of an old fable about a king who kept having products stolen from a store he owned.  He hired a guard with x-ray vision to verify that everyone leaving the store was only leaving with products they paid for.  And he also set up a reward for anyone who was able to sneak something past the guard.  Ultimately, a clever boy won the reward by stealing an unpurchased wheelbarrow filled with legitimately purchased goods.”

“Ummm…. was that supposed to make anything clearer?”

“No, it was just to set an image up in your mind.  Where does our own agency come from?  How do we get choice from a brain that is made up of parts moving to a different, seemingly determined rhythm?  IMO, it’s the same question shifted back a layer.  The classic ‘free will’ quandary.  The best it seems we can say is that, whatever is going on, determined or not, control structures can emerge within a system that regulate the system as if the system were itself a whole, independent thing.  The degree to which this regulation extends constitutes the boundary of the system proper in relation to its context or environment.  Its ‘body’.  Or something like that, with a dollop of the subtlety and refinement of language that results from great numbers of experiments and data points.”

“Okay…”

“IOW, there already exists some kind of vast, complex organism.  It regulates itself, too.  Economists and sociologists identify the patterns of this regulation and try to find the roots of it in the behaviors of individuals.  Then others look for maybe the roots of that in DNA.  And what was the environment that selected for this expression?  It’s existed for a long time.  It’s not even human in nature, ultimately, and didn’t begin with us.  It’s Earth-like, DNA-based life.  Or, peering even deeper, the mathematics of energy.

“Humans have been the intelligent-worker-bee-protein-cells in the emergence of a new scale of directed experimentation that is embodied in the artifacts of our efforts, like buildings and cables and electromagnetic waves, and in our Brownian motions around and through those artifacts.  The trend seems to me to be that at some point this vast being will reach a degree of elaboration that will enable it to relate to individual human beings (and, while we’re at it, individual cells) in ways that humans will be capable of ‘personifying’, and in ways that tap into its vast context of the interrelationships of the events of the world.  We will ourselves, at the same time, be transforming ourselves away from what we were, as we always already were.”


This

Ars Technica published this article: For a brighter robotics future, it’s time to offload their brains.

I commented:

This reads like Marvin Minsky’s Society of Mind, mutatis mutandis.

In that vein, I wonder how the human conscious experience and distinction-engine can be integrated into this evolving www cloud API. Human creative perception as a service. The 3rd eye of the robot 😉

Something like a more gamified Mechanical Turk?

I think that’s a really good expression of a really good idea [that could save us all].
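Sketching the speculation in code, here’s what the robot-facing half of “perception as a service” might look like.  Every name here is invented for illustration; no such API exists that I know of.

    using System.Threading.Tasks;

    // The robot ships an image it can't make sense of; somewhere on the
    // other end, a human (gamified Mechanical Turk style) answers.
    public interface IPerceptionService
    {
        Task<string> AskHumanAsync(byte[] jpegFrame, string question);
    }

    // Robot-side usage might read:
    //   var answer = await perception.AskHumanAsync(frame, "Is this door open?");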

Preamble to building a first game

It’s amazing how fast time flies when you’re having fun.  It’s already time to let the puppy out again.

I’m not actually a big game player, anymore.  That’s partially because there’s so much stuff I want to make and learn that I don’t have time for traditional games (I’d love it if learning were more gamified…).  And it’s partially because of repetitive stress injury from working with computers 12-16 hours a day as it is.  The last thing I want to do is play a game when it hurts to do so.

That said, as part of my HoloLens initiative (it amuses me to use the word ‘initiative’), I decided I needed to learn Unity, a 3D environment for making games.

As I was playing around with the Kinect, I realized that it really is only a partially realized device (probably meant to be paired with the HoloLens from the beginning [and probably why it was included with the Xbox from the start {because people would have bitched |but they did anyway <damned if you do and damned if you don’t>|}]).  The things I would want to do with it can’t really be done out of the box.

For instance, if I wanted to create a runtime-definable macro sign language to associate with code expansions for use in my code editor of choice (Visual Studio), I could not at this time.  It’s probably possible, but not in any direct sort of way.  Just as there were steps necessary to get my computer ready to develop for the Kinect, there are steps necessary to get the Kinect itself into working order.

First of all, if I were to want to make such a Visual Studio plugin, I would have to learn about writing Visual Studio plugins.  That’s a non-problem.  I hear it’s a somewhat tortuous process, but it’s not a real problem; that’s just knowledge acquisition, to be muscled through on a weekend.  I would also have to think of a way to get the Kinect to send messages to this plugin.  One way or another, that data pipeline would have to be established; ideally I could send very succinct messages back and forth between the editor and the Kinect code.
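To make that pipeline concrete, here’s a minimal sketch of the Kinect side pushing a macro token to the editor over a named pipe.  The pipe name (“KinectMacroPipe”) and the token format are made up for illustration; the plugin end would need a matching NamedPipeServerStream.

    using System.IO;
    using System.IO.Pipes;

    class GestureSender
    {
        static void Main()
        {
            // Connect to a hypothetical pipe that the Visual Studio
            // plugin would be listening on.
            using (var pipe = new NamedPipeClientStream(".", "KinectMacroPipe", PipeDirection.Out))
            {
                pipe.Connect(5000); // give up after five seconds

                using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                {
                    // In the real thing this token would come from the
                    // gesture recognizer; here it's sent by hand.
                    writer.WriteLine("expand:foreach-snippet");
                }
            }
        }
    }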

The Kinect code is what I’m really interested in (actually, that’s quite subordinate to the real goal and the coolness of a kinetic macro language), and specifically, the gesture recognition stuff.  But the fact is, out of the box, the Kinect is not good enough for what I want.  It tracks three joints in the hand, four if you include the wrist.  Furthermore, it tracks them very poorly, IMO, and they jump around like Mexican jumping beans.  I could make something work over the top of that, but it probably wouldn’t help with RSI.  As far as I can see, any reliable complex gesture recognition from the Kinect with the provided SDKs would require larger motions than are available from the fingers.  Larger motions translate into elbow and shoulder movement, and that gets tiring quickly.
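For what it’s worth, even simple exponential smoothing over the raw joint positions tames a lot of that jitter, at the cost of some lag.  A sketch (the Joint3 struct is a stand-in for whatever the SDK actually hands back):

    struct Joint3 { public float X, Y, Z; }

    class JointSmoother
    {
        private Joint3 _state;
        private bool _primed;
        private readonly float _alpha; // 0..1; smaller = smoother but laggier

        public JointSmoother(float alpha) { _alpha = alpha; }

        public Joint3 Update(Joint3 raw)
        {
            if (!_primed) { _state = raw; _primed = true; return _state; }

            // Move a fraction of the way toward each new reading.
            _state.X += _alpha * (raw.X - _state.X);
            _state.Y += _alpha * (raw.Y - _state.Y);
            _state.Z += _alpha * (raw.Z - _state.Z);
            return _state;
        }
    }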

Here’s an interesting article from Microsoft researchers in China titled: Realtime and Robust Hand Tracking from Depth.  Apparently, good hand tracking along the generally recognized 26 degrees of freedom of motion of the human hand is a hard problem.  Nevertheless, they demonstrate that it can be done, including seemingly very accurate finger positioning.  And that was done using hardware inferior to the Kinect and my computer.

I have some interesting intuitions about how to improve existing body tracking through a sort of geometric space of possible conformations, as well as the transitions between them (think finger-position states and the changes available from any particular state based on the reality of joint articulation, etc.).  Ultimately, a body state would be maintained by a controller that understood likely body states and managed the transitions between states as indicated by the datastream, keeping in mind the likelihood, for instance, that the user’s leg is above their neckline.  I use that as an example because Kinect’s gesture recognizer very commonly puts my knee joints above the crown of my head when I’m sitting in a chair and moving my hand.  A body would be an integration of various body-state controllers, something like the sketch below.  It would all have to be fleshed out (pun entirely intended).
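Here’s a toy version of what I mean, with hand-coded states and transitions standing in for what would really have to be learned from data: the controller only accepts a sensed state if it’s a plausible neighbor of the current one.

    using System;
    using System.Collections.Generic;

    enum HandState { Open, Fist, Point, Pinch }

    class HandStateController
    {
        // Which states are reachable from which, given joint articulation.
        static readonly Dictionary<HandState, HandState[]> Neighbors =
            new Dictionary<HandState, HandState[]>
            {
                { HandState.Open,  new[] { HandState.Fist, HandState.Point, HandState.Pinch } },
                { HandState.Fist,  new[] { HandState.Open } },
                { HandState.Point, new[] { HandState.Open, HandState.Pinch } },
                { HandState.Pinch, new[] { HandState.Open, HandState.Point } },
            };

        public HandState Current { get; private set; } = HandState.Open;

        // Accept the sensor's guess only if the transition is plausible;
        // otherwise hold the current state and wait for better data.
        public HandState Observe(HandState sensed)
        {
            if (sensed == Current || Array.IndexOf(Neighbors[Current], sensed) >= 0)
                Current = sensed;
            return Current;
        }
    }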

Watching the demo in the linked article above got me into 3D modeling which led me to Unity.

Now that I’ve gone through the Unity tutorials, I feel quite prepared to begin making a game.  I have to say that I am taking to Unity’s model of development like a fish to water.  GameObjects and Components are very intuitive to work with.  Seeing how easy it really is, I decided I’d make a game, even if game development in these sorts of traditional terms isn’t something that I intend to do a great deal of.  I’ve got some catching up to do in terms of 3D math and geometric reasoning, but that stuff is fun when it is being learned in relation to problems that you encounter and need to solve to get to where you want to go.  That’s how math is best learned, IMHO. YMMV.
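As a taste of why the model clicks for me: this is a complete Component.  Drop it on any GameObject and it spins, and the speed field shows up in the Inspector for free.  (A trivial example of my own, not from the tutorials.)

    using UnityEngine;

    public class Spinner : MonoBehaviour
    {
        [SerializeField] private float degreesPerSecond = 90f;

        void Update()
        {
            // Rotate around the local Y axis, frame-rate independent.
            transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
        }
    }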

So, with all that, in my next post I’ll describe the initial plans for my first game.

Tat Twam Asi

Traditionally translated as “Thou art that”.  Interestingly, “tat”, Sanskrit for “that”, is a distant cognate of the English word “that”.

I’m not a big fan of the “thou”s and the “art”s, however.

But I get that “tat” has special “spiritual” significance.  And “art” is a good word, generally.  Nevertheless, the traditional translation bothers me.  It probably has to do with the fact that I’ve basically only encountered that archaic language in relation to religious frames of thought that I’d never accepted.  I almost prefer “That is you”.

Perhaps “This is you”.  But then my brain immediately picks up on a bit of redundancy in “…is is…” (and isn’t getting rid of just exactly that redundancy in thought and expression the point??) and wants to shorten the whole thing to “this you”.  But, too, it’s a little flat this way.  Lost some flair.  Forgettable.

So, then, more integrally, “th(is) you”.  I’d pronounce it [thizz-you], since the “th” ends with the existential “is” and not the “-is” ending that traditionally finishes “this”.

I like “th(is) you” because it places the “is” of “you” in “this”.  That sort of syntactically mirrors what I think the statement is trying to point to, which is that the source of the very feeling of existing, the root of consciousness itself, is something that is inherent in and arises out of matter/energy.  Consciousness is rooted in existence.  It is inherent because it could as easily arise in another arrangement of “stuff” in another galaxy, etc.  I reject Earth-dependent notions of Soul as well as all Magical Wands that pop into existence to bootstrap its support.

Obviously, presence is a natural phenomenon, even if various growth narratives of matter/energy display varying degrees of experience.

Th(is) you

My Holodeck

So, I got pretty pumped about Microsoft’s HoloLens.  So much so, in fact, that I managed to register for Build 2015.  I’ve known I’d jump on the Virtual Reality/Augmented Reality bandwagon some day (the tune’s pretty impressive if you listen closely), and really, reflecting on the matter, I was waiting for it to mature to the point where I was willing to engage with it.

I’m confident that my impulse towards such things is not unconnected to the nature of my Grandfather (it would be really cool to make his paintings immersive and navigable).  I’ve got lots of very intriguing artworks that I’d like to make, but I could never reconcile myself with paint and canvas.  Too… much… ancient.

Anyway, obviously I can’t get my hands on a HoloLens quite yet, but I wanted to get ready for when I can.  How?  Start programming for the Kinect, I figured.  I kind of assume that Microsoft is going to use a similar design philosophy between the two, since they stated that the Kinect was their road to the HoloLens.

In any case, it took some doing.  First of all, I didn’t have a Kinect.  Secondly, the Kinect V2 actually requires Windows 8 and I was running 7.

Blah, it’s a long story of boring tech challenges that included having to literally rip my laptop screen apart (plastic flew and blood flowed and you can see what I’m talking about in the image [this is the sole non-boring detail]) so that I could replace some parts so that I could install Windows 8 so that I could install the SDK so that I could play.

But none of that is the point of this post, which is to create a sort of monument to the newest iteration of my workspace/holodeck lab.  Some people take pictures of their face each day for years.  That’s really interesting and I’ve thought of doing it myself.  On the same note, I’ve been taking pictures of my workspaces for years (not every day, although that would likely be revealing).  It’s interesting to see them/it evolve.  Who even knows if the sky’s the limit for such a pregnant space/concept/role.

Now, mind you, I’ve watched some YouTube videos of people showing their workspaces, practically jerking their electronics off onto their furniture as they went (“And over here you can see my Gold Exclusive Version 15 Flippetywidget, and over there my Platinum Spectral Wank-Wonk…”), and it literally depressed me and threatened to ruin my mood of an evening.  All I’d wanted were good layout ideas.  I felt like I’d made a horrible mistake in a Google image search.

I just want to be clear that although I like my monitors, for instance, it is because they create walls of text in front of me.  I like electronics and stuff-in-general just to the degree that they manage to serve as ice to the figure skates of my creativity.

You can see the Kinect up in the top right corner, peering down on where I sit, waiting for me to tell it how to interpret my gestures.

[image: holodeck workspace]

Pavlov’s fretting in the background, concerned about a squirrel that’s one layer too deep for this depiction.  So many layers, foregrounds, backgrounds, Magrittian grounds…

Here’s the (IR depth) view from the other side:

[image: KinectScreenshot-Depth-12-40-41]