Unix in a keyboard-less age

Admittedly, I'm not well versed in the details of Unix history. But I'm guessing that Unix didn't always have a tty subsystem. That it originally started with paper tape. Then it evolved to have ttys. Then it later evolved to have ptys and mouse support. What's next?

Keyboards/mice may go away in the distant future, and there are already devices that lack them. So, how does a human interface with Unix then? I doubt shells/ttys are up to the task, and I imagine shells will eventually be relegated to interpreting scripts rather than user input.

So how do you imagine human-Unix interaction in the future?

I imagine some sort of hybrid between touch screen and AI (and maybe dictation). Touch screen is horribly inefficient and to make it viable, I think you need smart software interfaces that can predict user intentions.

I especially see it as an opportunity to make things right, particularly with some of those hilarious problems mentioned in the UNIX-Haters Handbook (which is worth a read for laughs). For example: wildcard expansion in the shell (which won't be an issue in a keyboard-less future anymore).
 
The day the keyboard and mouse is no more, is the day I call it quits. I'll find something else to replace the computer.
 
zspider said:
The day the keyboard and mouse is no more, is the day I call it quits. I'll find something else to replace the computer.

But what if there were a general and practical interface that could do everything as easily as (if not more easily than) a keyboard/mouse?
 
Touchscreens are not really there yet; it's just too clumsy to write anything equivalent to a typical UNIX shell pipeline made of multiple commands on them.
 
kpa said:
Touchscreens are not really there yet; it's just too clumsy to write anything equivalent to a typical UNIX shell pipeline made of multiple commands on them.

I don't think touch screens will ever be efficient by themselves. I really think future software will need AI interfaces that predict user intentions for touch screens ever to be efficient. The same goes for dictation.

That said, I never suggested anything like a desktop interface either (as we think of desktop anyway). It must be general enough to supplant the shell and practical enough to do real work reliably.
 
nslay said:
Admittedly, I'm not well versed in the details of Unix history. But I'm guessing that Unix didn't always have a tty subsystem. That it originally started with paper tape. Then it evolved to have ttys. Then it later evolved to have ptys and mouse support. What's next?

The earliest incarnations actually did begin with tape, though the lineage runs back further, through Morse's electrical telegraph and Morse code. Communications can be traced back to the earliest written history, from homing pigeons and human messengers back to Greek times, where encryption schemes like the Caesar cipher (of which rot13 is a modern variant) can be identified.

TTY was actually in use pre-Unix, during the Multics and CTSS years, as people were beginning to explore time-sharing of computing resources. It stood for Teletype: a teletypewriter, or simply teleprinter (i.e. an expensive typewriter).

Early documentation on Thompson's line editor ed() treats UNIX as if a hard-copy (paper-based) terminal would be used. Bill Joy's original vi() took several years to complete, partly because of backward compatibility: he created a terminal driver to take advantage of a "glass" terminal while remaining compatible with "paper" terminals. In fact, the original vi has a so-called fifth mode, "open mode", to do just that. The work Joy put into the termcap library would later influence curses().
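The termcap/terminfo idea survives today in curses. A minimal sketch of what it buys you: instead of hard-coding escape sequences for one terminal model, you ask the terminal database at runtime ("xterm" here is just an assumed terminal type):

```python
import curses

# Query the terminfo database rather than hard-coding escapes.
curses.setupterm("xterm")
clear_seq = curses.tigetstr("clear")  # escape sequence that clears the screen
cols = curses.tigetnum("cols")        # advertised column count
print(clear_seq is not None)
```

The same program works on a "glass" terminal, a terminal emulator, or anything else with a terminfo entry, which is exactly the portability problem Joy's driver was solving.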

Naming conventions for signals also follow early computing experience: SIGHUP stands for "hang up" and is sent when the serial connection has been severed, simulating hanging up a phone.
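A small sketch of that lineage (Unix-only, since SIGHUP doesn't exist elsewhere): SIGHUP originally meant "the line was hung up", but many daemons now reuse it as a cue to reload configuration instead of dying with the connection.

```python
import os
import signal

events = []

def on_hangup(signum, frame):
    # A daemon would reload its config here instead of exiting.
    events.append("hangup")

signal.signal(signal.SIGHUP, on_hangup)
os.kill(os.getpid(), signal.SIGHUP)  # simulate the line dropping
print(events)  # ['hangup']
```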

nslay said:
Keyboards/mice may go away in the distant future and there are already devices that lack these. So, how does a human interface with Unix now? I doubt shells/ttys are up to the task and I imagine shells will eventually be relegated to interpreting scripts than user input.

So how do you imagine human-Unix interaction in the future?

I imagine some sort of hybrid between touch screen and AI (and maybe dictation). Touch screen is horribly inefficient and to make it viable, I think you need smart software interfaces that can predict user intentions.

Someone at one of my local user groups showed me that they had ported editors/vim to the iPad (iOS). As interesting as it was, there was no Esc key, since it was using the system-level keyboard widget. As a workaround, he pointed out that they had made a "shortcut" with another symbol on its keyboard. If such a thing is to be more than a toy to show your friends, someone will have to create a proper UNIX-layout keyboard to use in conjunction with the software.

I highly doubt anyone will be developing on these devices. Imagine writing a program by speaking to Siri. Simple 4th-gen interfaces could be created for end users, such as we have now with "Compose message to Alyssa P. Hacker", but verbose COBOL-style syntax has never really worked well for iron-clad programming.

Even imagine administration (or even security) on a mobile device connected to a server. Something as simple as chmod +x /path/to/file could be spoken as "change mode add execute home username bin program".
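A toy sketch of that translation, assuming a tightly constrained spoken vocabulary (the word lists and path mapping here are hypothetical; a real system would need far richer parsing and disambiguation):

```python
# Hypothetical spoken-phrase tables; a real system would be learned, not hard-coded.
SPOKEN_FLAGS = {"add execute": "+x", "remove write": "-w"}
SPOKEN_PATHS = {"home username bin program": "/home/username/bin/program"}

def speech_to_chmod(utterance: str) -> str:
    """Turn 'change mode add execute home username bin program'
    into 'chmod +x /home/username/bin/program'."""
    prefix = "change mode "
    if not utterance.startswith(prefix):
        raise ValueError("unrecognized utterance")
    rest = utterance[len(prefix):]
    for phrase, flag in SPOKEN_FLAGS.items():
        if rest.startswith(phrase):
            path_words = rest[len(phrase):].strip()
            return f"chmod {flag} {SPOKEN_PATHS[path_words]}"
    raise ValueError("unrecognized mode phrase")

print(speech_to_chmod("change mode add execute home username bin program"))
# -> chmod +x /home/username/bin/program
```

Even this trivial case shows the problem: the spoken form is longer, more ambiguous, and slower than just typing the command.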

Now, building on expansion, pipes, redirection, upper/lower-case sigils, and so forth, a predetermined syntax would need to be created. With that, we are looking at another dialect, another layer of abstraction. Most people who search Google or web-commerce sites don't know SQL. Nor should they have to. The keyboard will never really go away. It's easy to confuse the consumer market with the professional market. The problem is that computers became a commodity, more of a toy than a productivity tool, when they became home devices.

There will always be people who only care about playing games and social connection. You won't find any applications for building a spreadsheet calculator via voice command on the phones or tablets. It's unlikely that we ever will get to that style of "star trek" future of computing.

Also we still live in the day and age where consumers make no distinction between the internet and the web. Then again these people are not concerned with what kernel they are running or even if it's secure or as fast as can be. They are the market for these devices. As long as we have programmers, secretaries and any profession which deals with data input and entry there will always be physical keyboards.

My 2 bits.
 
UNIXgod said:
Someone at one of my local user groups showed me that they had ported editors/vim to the iPad (iOS). As interesting as it was, there was no Esc key, since it was using the system-level keyboard widget. As a workaround, he pointed out that they had made a "shortcut" with another symbol on its keyboard. If such a thing is to be more than a toy to show your friends, someone will have to create a proper UNIX-layout keyboard to use in conjunction with the software.
*yawn* vim on iPad. More of the same old ...

I highly doubt anyone will be developing on these devices. Imagine writing a program by speaking to Siri. Simple 4th-gen interfaces could be created for end users, such as we have now with "Compose message to Alyssa P. Hacker", but verbose COBOL-style syntax has never really worked well for iron-clad programming.
Why not? Machine learning and computer vision can create realistic photographs from simple labeled sketches (here). Why can't an AI assist you in development through otherwise inefficient interfaces like touch screen and dictation?

Even imagine administration (or even security) on a mobile device connected to a server. Something as simple as chmod +x /path/to/file could be spoken as "change mode add execute home username bin program".
More of the same thinking ... How about content-based filesystems? "Show me pictures of my dog I took recently." No hint of file names or directory hierarchies (because no human thinks of file names and directory hierarchies). Nobody said information storage and security was limited to our current understanding of file systems.

Now, building on expansion, pipes, redirection, upper/lower-case sigils, and so forth, a predetermined syntax would need to be created. With that, we are looking at another dialect, another layer of abstraction. Most people who search Google or web-commerce sites don't know SQL. Nor should they have to. The keyboard will never really go away. It's easy to confuse the consumer market with the professional market. The problem is that computers became a commodity, more of a toy than a productivity tool, when they became home devices.

There will always be people who only care about playing games and social connection. You won't find any applications for building a spreadsheet calculator via voice command on the phones or tablets. It's unlikely that we ever will get to that style of "star trek" future of computing.

Also we still live in the day and age where consumers make no distinction between the internet and the web. Then again these people are not concerned with what kernel they are running or even if it's secure or as fast as can be. They are the market for these devices. As long as we have programmers, secretaries and any profession which deals with data input and entry there will always be physical keyboards.

My 2 bits.

We could be using punch cards and switches for data entry still. But it's not very efficient. The keyboard and later the mouse made human-computer communication more efficient. It took us 20 years to master the mouse and GUI. Now we have touch screens and dictation and we are still designing software with the keyboard and mouse in mind.

I refuse to believe the keyboard and mouse are the limit.

I do think Siri-like software is the future ... imagine, an NLP-widget toolkit. Not graphical widgets, but language-based widgets.
 
Although it's talking about Windows 8, this guy writes about the difference in input methods between desktop and mobile devices, which may apply here.

But this only holds if the original premise is correct, that the tablet is the evolution of the laptop, and I just don’t think that’s right. Where the division lies is not at the desktop and the mobile level, or between the laptop and the tablet, but between professional use (i.e. content creation) and light/entertainment use (i.e. content consumption). While tablets are not necessarily used purely for content consumption, their limitations (small screen size and lack of a hardware keyboard) mean that this will always be their main use.

The PC does not die just because there are more mobile devices on the market, it remains to play its own role. There is a clear line between devices you use for things like writing, coding, photo editing, 3D design, and so on, and devices you use for reading, browsing the web, watching videos and playing games. While the latter can be done on both, the tablet and the PC, the former will always require a PC, and because of this, there will always be a need for an operating system tailored specifically for it.
 
drhowarddrfine said:
Although it's talking about Windows 8, this guy writes about the difference in input methods between desktop and mobile devices, which may apply here.

Agreed, tablets are quite the toy presently. But I don't think they have to be and I do think they can be productivity systems with a lot of UI work. Whether their plethora of input devices can be made to be more efficient than keyboards and mice remains to be seen ... I think they can be with some smarter software.

The screen size might increase some day ... if they ever become more than just a novelty.
 
nslay said:
*yawn* vim on iPad. More of the same old ...

Didn't mean to put you to sleep there buddy. Ironic that someone spent time on porting the editor but couldn't implement a proper interface (hmmm... sound familiar?)

nslay said:
Why not? Machine learning and computer vision can create realistic photographs from simple labeled sketches (here). Why can't an AI assist you in development through otherwise inefficient interfaces like touch screen and dictation?

More of the same thinking ... How about content-based filesystems? "Show me pictures of my dog I took recently." No hint of file names or directory hierarchies (because no human thinks of file names and directory hierarchies). Nobody said information storage and security was limited to our current understanding of file systems.

Never said it wasn't possible. No one really works on those types of things unless funding is in place. AI also didn't mature as quickly as was expected back in the '60s and '70s.

If you're passionate about this sort of thing, there is nothing stopping you from building your own interface on top of the preexisting technology.

Though what you're telling me is that you'd like to create a spoken context-search interface, which is still an abstraction over regular expressions and standard queries. Assuming there is an open-source library already available for speech recognition, I would start by building against that with an active-record pattern, so it would be "Display pictures of dog by `date`". Of course, date could be 4.days.ago to now, or "recent" could be a preference based on an integer value of time in a range (int.min, int.hour, int.day, and so on).
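The active-record-style idea above can be sketched in a few lines, assuming the speech layer has already produced a tag and a time preference (the photo records and the "recent = 7 days" default are hypothetical stand-ins):

```python
from datetime import datetime, timedelta

# Hypothetical photo records, as a tagged store might hold them.
PHOTOS = [
    {"tags": {"dog"}, "taken": datetime.now() - timedelta(days=2)},
    {"tags": {"cat"}, "taken": datetime.now() - timedelta(days=3)},
    {"tags": {"dog"}, "taken": datetime.now() - timedelta(days=40)},
]

def pictures_of(tag: str, within_days: int = 7):
    """'Display pictures of <tag>, recent' as a filter over the store."""
    cutoff = datetime.now() - timedelta(days=within_days)
    return [p for p in PHOTOS if tag in p["tags"] and p["taken"] >= cutoff]

print(len(pictures_of("dog")))  # 1 -- only the recent dog photo matches
```

All the hard parts (recognizing speech, tagging the photos, deciding what "recent" means for this user) sit above and below this thin query layer, which is the point: the spoken interface is still an abstraction over an ordinary query.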

nslay said:
We could be using punch cards and switches for data entry still. But it's not very efficient. The keyboard and later the mouse made human-computer communication more efficient. It took us 20 years to master the mouse and GUI. Now we have touch screens and dictation and we are still designing software with the keyboard and mouse in mind.

I refuse to believe the keyboard and mouse are the limit.
GUIs and mice are a bit older than you think. You're off by a decade easily, if not more. It's just another interface. Compatibility comes in layers. Interfaces are always separated from the logic, as the logic is always separated from the data, which can come in many forms, styles, and media types. It's all about being as modular as possible to cope with change, as the world of computing is changing all around us all the time.

nslay said:
I do think Siri-like software is the future ... imagine, an NLP-widget toolkit. Not graphical widgets, but language-based widgets.

Siri is more or less beta. Moore's law may fix that as the low-level implementations become better and closer to real-time processing. As for the spec I explained above, one would also need to build an interface to other engines (for example, whatever Android uses, and BlackBerry, Palm, and M$ where applicable)... Since audio is dealt with on the user's device, transliterations would need to be done to be streamed over to your BSD server network.

As for neural networks and learning machines, we haven't gotten much further than silly Markov-chained pseudo software psychologists. There will always be a market for software which acts smart, though. As for development, we already have programmable programming (i.e. Lisp, Smalltalk, Ruby). It's a far cry to expect us to get to actual programming with any interface other than one which allows text input. No one programs with a mouse. Some may compose, as in the arts, with music, video, or animation, using 4th-generation software tools which may be closer to IDEs.

Though I like the idea. If your future ever does happen where everything is truly automated it may be interesting to voice the command after buffer overflow "Computer, debug fusion impulse pointer while normalizing dilithium crystal memory segment; Number One to the bridge!" =)
 
UNIXgod said:
Siri is more or less beta. Moore's law may fix that as the low-level implementations become better and closer to real-time processing. As for the spec I explained above, one would also need to build an interface to other engines (for example, whatever Android uses, and BlackBerry, Palm, and M$ where applicable)... Since audio is dealt with on the user's device, transliterations would need to be done to be streamed over to your BSD server network.
I don't know. I would hope that PCs and mobile devices continue to become more powerful. With more powerful devices and improved algorithms, perhaps this sort of toolkit could run on the devices themselves. The cloud may stifle that (all in the name of marketing/protecting IP).

As for neural networks and learning machines, we haven't gotten much further than silly Markov-chained pseudo software psychologists. There will always be a market for software which acts smart, though. As for development, we already have programmable programming (i.e. Lisp, Smalltalk, Ruby). It's a far cry to expect us to get to actual programming with any interface other than one which allows text input. No one programs with a mouse. Some may compose, as in the arts, with music, video, or animation, using 4th-generation software tools which may be closer to IDEs.
Machine learning has advanced quite a lot (especially for computer vision tasks). I'm not sure about AI and NLP ... Watson is pretty impressive but I don't know very much about NLP.

With massively parallel computing emerging, I imagine new types of languages that are better suited to massive parallelism will appear. They'll probably be text-based ... but perhaps non-text-based languages are possible (workflows are kind of an example ... not always practical though).

Though I like the idea. If your future ever does happen where everything is truly automated it may be interesting to voice the command after buffer overflow "Computer, debug fusion impulse pointer while normalizing dilithium crystal memory segment; Number One to the bridge!" =)

At some point, I imagine we may even rely on computers to automatically propose hypotheses (i.e. idea machines, computers that read journal/conference papers and propose unexplored connections). Prerequisite knowledge required for research is exploding. On top of that, it can be difficult to make interdisciplinary connections.

I don't care about star trek. A lot of this is already possible if you could design the software to neatly package it all (and that is a formidable task). However, scientists don't typically write end-user software (or write it well, some even use matlab alone). It may be decades before you see any of it in products.
 
nslay said:
Agreed, tablets are quite the toy presently.

The screen size might increase some day ... if they ever become more than just a novelty.
You really think that?

I won't go into how I see the iPad used in a lot of work places nowadays but my iPad goes with me every time I leave the office, go on trips, or just to the living room. There are a few companies that have converted them into cash registers.

Only recently did I get mine back from my son who borrowed it (for 8 months) but I need to find terminal software so I can use it instead of dragging my notebook around as much.
 
drhowarddrfine said:
You really think that?

I won't go into how I see the iPad used in a lot of work places nowadays but my iPad goes with me every time I leave the office, go on trips, or just to the living room. There are a few companies that have converted them into cash registers.

Only recently did I get mine back from my son who borrowed it (for 8 months) but I need to find terminal software so I can use it instead of dragging my notebook around as much.

Almost certainly. It is a burden (for example) to prepare presentations, write papers and program on tablets. Even Swype does not compare to a real keyboard for these tasks. Perhaps if you plug a keyboard into a tablet ...

That said, I do know of someone with no hands who programs in C++ with dictation. It is possible ... but I imagine it's not easy.

I do think smarter software can solve this problem, and perhaps do it more efficiently than with a keyboard or mouse.
 
matoatlantis said:
Thanks for the link! That "god stuff" at the end was weird for me, but the rest is really good reading.

Yeah, it gets a bit hokey at the end. Far be it from me to judge anyone on their personal beliefs; still a pretty epic blog post. Social commentary and all.
 
Until you have a computer that can interpret an ambiguously-worded instruction with the same accuracy as the average human, we're not even close to having an interface that can match a keyboard for efficiency. In other words, we're talking Star Trek-esque advanced AI:

Scotty: Computer! Computer?
[He's handed a mouse, and he speaks into it]
Scotty: Hello, computer.
Dr. Nichols: Just use the keyboard.
Scotty: Keyboard. How quaint.
 
purgatori said:
Until you have a computer that can interpret an ambiguously-worded instruction with the same accuracy as the average human, we're not even close to having an interface that can match a keyboard for efficiency.

Talking to chat bots
Cold and lonely winter night -
Echoing my words
 
nslay said:
So how do you imagine human-Unix interaction in the future?

By walking up to a terminal, saying, "It's a Unix system! I know this!" and using it to fly around the filesystem so you can lock the door before the raptors get through.
Like it has been since 1993.

Except flying around with a tablet should be easier, since you can just use the accelerometers in the tablet to navigate by tilting it.


Passwords will be a 9 letter dictionary word displayed algorithmically as a 3d cube. I guess tilting and touchscreen gestures will help out there, too.

Coding will work the same way as passwords, but it will involve more complicated 3D rotating objects, that you piece together like a jigsaw puzzle. Nobody actually knows how it works, the people that do it for a living pretend to so they can keep their jobs, but they just wave their hands around while playing montage music in the background, and if it finally does something, they ship it.


All other text will be discarded completely in exchange for video recordings of whatever you were going to write.
Wikipedia articles will look like Max Headroom (the character, not the show), as they get revised/redacted.
Open Office will become a non-linear video editor that also makes spreadsheets. Microsoft will sue because they patented that for Office. Apple somehow manages to be the one that invents it, after seeing that everyone else has it.

Ubuntu will require that you be assimilated. You will add your biological and technological distinctiveness to their operating system. Your culture will adapt to service Canonical. Resistance will be futile.
Plus, they've done testing that shows it's easier to use that way.
 
UNIXgod said:
Keep it on topic. We are talking about FreeBSD on these forums not Haiku or any other BeOS related crap.

I was on topic:

purgatori said:
Until you have a computer that can interpret an ambiguously-worded instruction with the same accuracy as the average human, we're not even close to having an interface that can match a keyboard for efficiency.

You just didn't understand it.

UNIXgod said:
Also chat bots don't echo().

Or understand chatbots like Daisy or Billy either, which build a database/mindfile of words input during conversation, thus echoing words. I have the 9th-highest-ranked bot at the PersonalityForge, which also competed in The Loebner Prize in Artificial Intelligence Turing Test; I have programmed ALICE bots at Pandorabots and created mindfiles for both Billy and Daisy that I made available for download to the AI community, so don't try and school me on chatbots.

In fact, you were the first one to bring up the subject:

UNIXgod said:
As for neural networks and learning machines, we haven't gotten much further than silly Markov-chained pseudo software psychologists.

However your knowledge of chatbots seems restricted to Eliza.
 
Trihexagonal said:
Talking to chat bots
Cold and lonely winter night -
Echoing my words

Your quote above is a traditional haiku. My lame attempt at humor was to make a BeOS reference.

Trihexagonal said:
I was on topic:

You just didn't understand it.

Or understand chatbots like Daisy or Billy either, which build a database/mindfile of words input during conversation, thus echoing words. I have the 9th-highest-ranked bot at the PersonalityForge, which also competed in The Loebner Prize in Artificial Intelligence Turing Test; I have programmed ALICE bots at Pandorabots and created mindfiles for both Billy and Daisy that I made available for download to the AI community, so don't try and school me on chatbots.

In fact, you were the first one to bring up the subject:

However your knowledge of chatbots seems restricted to Eliza.

Wasn't attempting to school you. Just a poorly executed attempt at word play. Actually, now I'm interested. Pandorabots and PersonalityForge look like very interesting sites to explore.
 