It might be built on the Linux kernel. But visually, it would surpass all the present contenders.
An OS is two things: 1) a core of functionality for enabling programs to run on some hardware; 2) an interface enabling that core of functionality to be called upon.
The “core of functionality” is a complex black box to me (not knowing much about such arcane things). But I feel I have some good intuitions about the “interface enabling that core of functionality to be called upon”.
There are two ryū (schools) concerning the interface: textual and visual. Or maybe there are. In any case, there are two primary means of working with a computer: the command line and the “desktop UI”. So, we’re going to go with “textual” and “visual”, although, if you want, you can throw in “kinesic” as well (haptic, keyboard, mouse, et cetera).
Which is better? Which is better? Ahh… Sigh… I love them both. It’s really too bad it’s just not possible to somehow optimally mix them together. Really just too bad…
I really wanna be an input ninja but I feel like I’m trying to fly with a steam engine.
Just for shits and giggles, I wonder what an interface would look like that would enable both peak input bandwidth as well as sustainable input bandwidth? By using more of my body, more intensely, how could I speed up interfacing with information?
It’s an amusing byway of thought to combine the idea of a sequence of yoga asanas with a sequence of commands, each transition between postures modifying the previous like a macro. But that’s fanciful, for the most part.
Let’s get to the point. First of all, there is the keyboard. I like the idea, but I think it can be improved. Combine the standard 104-key keyboard (or whatever) with the idea, employed in cell phones, of stacking letters on a key. Then take a keyboard like Microsoft’s “pressure-response keyboard” and, depending on the pressure and guided by real-time visual feedback, visually indicate the letter being typed. That way you can really just create a type pad, and eventually you can do away with the keyboard altogether, in favor of gesture input that detects the depth of finger movement along the “surface” of a virtual keyboard and visually indicates the key being typed.
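The stacked-key idea above can be sketched in a few lines. This is a hypothetical illustration, not any real device’s API: the key names, the pressure scale, and the even division of the pressure range among a key’s letters are all my assumptions.

```python
# Hypothetical sketch: choosing among letters "stacked" on one key using
# a normalized pressure reading (0.0 to 1.0), as a pressure-response
# keyboard might report it. All names here are invented for illustration.

STACKED_KEYS = {
    "key_2": "abc",   # phone-style stacking: one key, several letters
    "key_3": "def",
    "key_4": "ghi",
}

def letter_for_pressure(key: str, pressure: float) -> str:
    """Map a pressure reading to one of the letters stacked on `key`.

    The key's pressure range is divided evenly among its letters, so a
    light press selects the first letter and a firm press the last. Real
    hardware would pair this with live visual feedback before committing
    the keystroke.
    """
    letters = STACKED_KEYS[key]
    pressure = min(max(pressure, 0.0), 1.0)              # clamp to [0, 1]
    index = min(int(pressure * len(letters)), len(letters) - 1)
    return letters[index]
```

So a light tap on “key_2” yields “a”, a medium press “b”, and a hard press “c”; the same scheme would carry over unchanged to depth-of-finger gesture input.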
You could “clip” the “keyboard” halves, one to each hand (virtually, obviously, since the “keyboard” is a 3D arrangement of touch-sensitive input zones oriented in your visual field so as to appear “in perspective” with whatever you’re seeing around it, as well as with your hand orientation). Then you could move your hands around, maybe doubling the index fingers as “cursors”, carrying the keyboard with your hands, invisibly, as you gestured in the interface, and then immediately “gesture in” the keyboard, wherever your hands are, to start “typing”.
“Typing” becomes activating symbolic representations, as input to programs, by gesturing at touch-sensitive regions that “are owned by” the particular symbol we intend to type. Each app would probably have the option to define its own mappings or fall back on some predefined mapping.
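That region-owns-symbol idea, with per-app overrides layered on a predefined mapping, could look something like this. Everything here (region names, symbol names, the class itself) is invented for the sketch:

```python
# Hypothetical sketch of "typing as activating symbols": a gesture event
# lands in a touch-sensitive region, and the region "owns" a symbol.
# Apps may override the predefined mapping; all names are invented.

DEFAULT_MAPPING = {
    "region_upper_left": "open",
    "region_upper_right": "close",
    "region_lower_left": "undo",
}

class GestureInterpreter:
    def __init__(self, app_mapping=None):
        # App-specific overrides are layered on top of the default map,
        # so an app only redefines the regions it cares about.
        self.mapping = {**DEFAULT_MAPPING, **(app_mapping or {})}

    def symbol_for(self, region):
        """Return the symbol owned by `region`, or None if unmapped."""
        return self.mapping.get(region)

# An app that remaps just one region, inheriting the rest:
editor = GestureInterpreter({"region_lower_left": "repeat-last"})
```

Here `editor.symbol_for("region_lower_left")` yields the app’s override while the other regions keep their default symbols, which is one plausible reading of “define its own mappings or use some predefined mapping”.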
Then there’s the mouse. We’ve already dealt with it, really. The mouse and the keyboard end up combining into a single input stream: gesture. In the end, that’s all the keyboard and the mouse ever were, abstract gestures done over ([relatively] clunky) physical devices designed to register those gestures.
Now we can take advantage of improved input devices. And we can keep improving them. The end game, of course, is interfacing with the computer directly, without needing to employ any sort of muscular contraction. Or, a special muscle could be developed if it were so important.
Once we’ve discussed input, what does that leave? The (visual) metaphor employed by the OS. I like “reality”. By reality, of course, I mean “augmented reality”. I would like a physics-aware OS that injects its representations into my visual field in my immediate environment.
Ultimately, I imagine that the best OS will be an AI with a human layer. Getting there will involve representing in code what comprises the human layer, both in terms of internal dynamics and in terms of social behavior. Ideally that code would be useful in the sense of being descriptive as well as functional. I like code that has a dense core of functionality that expands out a looser description of some particular instantiation of the domain.
In this view, computers and “interfaces” would in many senses vanish as a distinct activity. Digital media would be navigated by being presented information by our OS. This information would be distinctly virtual, yet physically manipulable. Perhaps we would have entire visual fields with which to interact with information.
Blah blah. Now I’m babbling. What’s my point? Modern OS’s suck. I want something better. But input has to grow in tandem with output. I’m waiting on contact-lens monitors (or something similarly non-intrusive) along with very fine gestural input (as well as intelligent APIs for interpreting/describing/referring-to those gestures).
Both of these things are in the pipeline. Hooray. It’ll take a good 10 years to figure that phase out. Then I’ll be clamoring for biophysical integration. And that’ll come, too. And that’ll take a while longer to “figure out”.
And others’ll try to stop it. Of course, it is technologically inevitable, but movement can be arrested, for “the short timesies”. And there are many iterations that are not at all attractive. And if they were imminent, I too would resist.
Of course, there is far more to say about all that. Purpose is the deciding factor. And at some point, humanity, we’re going to have to come to a decision about it, globally.
But, I suppose most haven’t even realized the question yet and so would not hear it if it were posed. What is the human purpose that all this increasingly powerful technology is going to be put to the use of? Implicitly, our answer right now is: “consumer happiness”.
I’m not saying it’s wrong, but I do think it would probably look different if it were written up in a global “Declaration of Human Purpose”.
It’d be interesting for a purpose to unite and drive humanity as a whole. Lots of the entrenched would be against such a thing. And anyway, lots of others too, because nothing’s ever going to be unanimous. What would constitute a global “voice”?
I just bet we’d know it when we heard it. Lennon, you poor bastard.
In any case, can I propose a “worthy” “Human Purpose”? Hey, I got one. Total PR stuff, but it might contain a nugget of truth anyway: “human happiness”.
It’s all in the distinction between “human” and “consumer”. But, if you get down to it, isn’t “consumer” just one half of a coin with “creator” occupying the other side? Does getting rid of the “consumer” simultaneously rid us of the “creator”? Tough one. We all want to get paid. That’s fo’ sho.
I guess we’re just going to have to wait for the “manufacturing” and “service” industries to become trivially replaceable with some sort of IT infrastructure. There’ll still, in this version, be the “smart” jobs. But I don’t know of any population comprised of 80% smarties. I mean, what’s our acceptable unemployment rate? What’s our equation for determining the “unemployable”?
If you want to get ahead of the curve you have to ask yourself “what is important [to humans]?”. The answer is meaning and pleasure. You could even say “nostalgia”. “Nostalgia” will be an important commodity in the future. The possessors of it will be “rich”. “Nostalgia” will be the ambrosia that nourishes a diminishing core all the more hungry for it.
I’ll leave defining “nostalgia” in computer science as an “exercise for the reader”. I’d give the definition, but the answer won’t fit into this type font.
I’d like to end with something substantial. For instance, as we program matter to conform to the shape of the expectations of the functioning of our brains, we in fact bleed mind out into the world and expand the sphere of “mindship”.
Minds will one day hunger for barrier free access to raw information everywhere. They will understand what that means and will ache for it. In such a way minds will connect.
Metadata will be a sort of currency. Metadata will take part in a flow of transformation and elaboration. Metadata is an artifact of interpretation. It is like the calcification of an interpretation. The interpretable calculus formed by some dynamic flow behind the scenes. An interpretation that grew out of and was once connected to a flow of meaning while also informing interpretation-forming flows of meaning.
Generally speaking, it is a system that is expanding outward through the evolution of “technology”, which is just engineered matter used in the service of the aims of life. Which is to say that techno devices and the systems that promote their refinement and manufacturing are part of the bodies of modern human beings.
Antique Perspectives! Beautiful building blocks for use in any future identity. Opensource philosopherstones. Look at how the surface has cracked with age. But still, iridescent meaning sometimes still shines through!