Unix in a keyboard-less age

UNIXgod said:
Wasn't attempting to school you. Just a poorly executed attempt at word play. Actually, now I'm interested. Pandorabots and PersonalityForge look like very interesting sites to explore.

No problem.
 
drhowarddrfine said:
That doesn't make them toys. They're not necessarily there for doing those things.

Agreed. Tablets are not (in my view) general purpose computers in the sense that we traditionally understand, and it is misguided to think of them as such and further to infer that they signpost the destiny for general computing. I think these and similar devices belong to a new parallel class - 'Content Consumption Devices'. I have one, and it's great - within its problem domain.

Of more relevance to mainstream computing, I think, will be some of the enabling technologies that helped make modern tablet devices workable - ubiquitous networking, data mining, prediction, behaviour analysis, 'cloud' intelligence, and so on. These advanced technologies will still require (more than ever) smart people who can architect and program increasingly sophisticated large-scale systems - the sort of systems that have traditionally had a solid foundation in Unix environments.

sim
 
I don't really see them as consumption-only devices, and I'll give three examples. The doctors at a hospital I do work with use iPads to look at patient history and data, and also to enter information. A restaurant chain I know uses iPads as cash registers. And, just a few days ago, I had maintenance done on my furnace. The tech did all the billing, the credit card payment, and the confirmation email through his iPad.

So they seem to be taking the place of other handheld devices while offering more potential features and functionality. It would seem to offer a lot of freedom in a creative way.
 
nslay said:
Admittedly, I'm not well versed in the details of Unix history. But I'm guessing that Unix didn't always have a TTY subsystem; that it originally started with paper tape. Then it evolved to have TTYs. Then it later evolved to have ptys and mouse support. What's next?

Unfortunately, the whole 'industry' is currently making keyboards less useful and more 'stupid'. Compare the keyboard layout of the Dell Latitude E6410 with the E6420, or of the ThinkPad T420 with the T430, and you will see what I mean.

Maybe that is their point: make it more 'stupid' and less useful so they have an argument that the 'on-screen' one is 'better'.
 
vermaden said:
Unfortunately, the whole 'industry' is currently making keyboards less useful and more 'stupid'. Compare the keyboard layout of the Dell Latitude E6410 with the E6420, or of the ThinkPad T420 with the T430, and you will see what I mean.

Maybe that is their point: make it more 'stupid' and less useful so they have an argument that the 'on-screen' one is 'better'.

They did the same going from the X220 to the X230. ThinkPads are just generic machines now. No more decent laptops for programmers.
 
I personally will never trade a keyboard for a touchscreen or touch based interface. I just couldn't live without the sound of a plastic key press in my life.
 
Until we have software that can do things like voice recognition reliably, keyboards will remain reasonably commonplace.

Voice recognition, gestures, etc. are all nice, but if the daemon/process that provides those services crashes, you're stuck.
 
throAU said:
Until we have software that can do things like voice recognition reliably, keyboards will remain reasonably commonplace.

Voice recognition, gestures, etc. are all nice, but if the daemon/process that provides those services crashes, you're stuck.

Everything else provided by the operating system is also software and prone to crashing. Some of it can bring down the entire machine (or leave the machine hanging). What's the difference?
 
vermaden said:
Unfortunately, the whole 'industry' is currently making keyboards less useful and more 'stupid'. Compare the keyboard layout of the Dell Latitude E6410 with the E6420, or of the ThinkPad T420 with the T430, and you will see what I mean.

Maybe that is their point: make it more 'stupid' and less useful so they have an argument that the 'on-screen' one is 'better'.

Meanwhile, software continues to be dumber than a bag of hammers.

I personally believe smart software can overcome stupid input interfaces while being practical and convenient.

Dictation is about as stupid as it gets. But when NLP and AI advance enough so that you can hold a meaningful conversation with your computer, keyboards and mice will look really stupid and primitive by comparison. I mean, you could practically ask your computer to do some challenging task that would otherwise require some heavy typing (as if you were asking another person to do it).

But for now, I imagine smart software can already make pretty good use of something simpler, like a touch screen, while still being practical and convenient enough for productive work. Maybe even throw some dictation and accelerometers in there.

We only just mastered mice and GUIs. We don't know jack about other types of input yet.
 
nslay said:
Everything else provided by the operating system is also software and prone to crashing. Some of it can bring down the entire machine (or leave the machine hanging). What's the difference?

Add up the lines of code involved in a simple keyboard-based TTY driver and compare that to a voice recognition engine.

Compare RAM utilisation and CPU utilisation.

One of those programs will be a LOT easier to audit, debug and ensure is stable.
 
throAU said:
Add up the lines of code involved in a simple keyboard-based TTY driver and compare that to a voice recognition engine.
Why stop at the keyboard or even TTY? Anything can fail in the kernel and leave you equally stuck ... even unprovoked.

But while we're at it, if such a service is a user space process then that is definitely more stable than anything in kernel space.

Compare RAM utilisation and CPU utilisation.
*yawn*

Besides, what do you base that on? What you think would be required for such a system?

One of those programs will be a LOT easier to audit, debug and ensure is stable.

The types of systems I imagine (and work with) build (train) themselves and are effectively black boxes. All you have to make guarantees is statistical theory (and tried-and-tested maturity). You don't program smarts into software (and you wouldn't want to).
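The "trains itself" point can be illustrated with a toy example (a minimal perceptron in plain Python - purely illustrative, not anything from this thread): the program never contains the AND rule anywhere; it only adjusts weights from labelled examples, and what you can say about the result rests on the training data and on theory, not on hand-written logic.

```python
# Toy "black box": learn the AND function from examples.
# The rule is never written down; only weights are adjusted.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward the observed examples.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Auditing this is already harder than auditing a fixed rule: the behaviour lives in learned numbers, not in readable code - and that is with two weights, not the millions a recognition engine carries.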
 
nslay said:
Why stop at the keyboard or even TTY? Anything can fail in the kernel and leave you equally stuck ... even unprovoked.

The TTY and kernel can't be omitted for administrative purposes. They are the lowest-level software, required to gain any sort of access to the box.

Note: I'm not talking about normal end user operation here.

I am saying that the keyboard will remain for when things go pear-shaped.

But while we're at it, if such a service is a user space process then that is definitely more stable than anything in kernel space.

User vs. kernel mode just means that a crash in user mode won't take the entire system out (a hard system crash).

However, if your only method of interacting with the machine is broken, it really doesn't matter whether it is running in kernel space or user space - you can no longer interact with the machine.

Running stuff in user mode is no magical cure for software bugs, and you'll find that the vast majority of remote exploits and software crashes on your box today are in fact in user space software.
 
drhowarddrfine said:
For a second I thought this was a Windows board. Or maybe Reddit. Can we show some semblance of respect here, please?

No, it's fine.

It gives you a clear indication of noobs who have no idea what they're talking about, when they resort to such posts because they can't actually form a coherent argument.
 
nslay said:
Keyboards/mice may go away in the distant future, and there are already devices that lack them. So, how does a human interface with Unix now?

So how do you imagine human-Unix interaction in the future?

The great prophet Vernor Vinge has already given us the answer in Rainbows End, page 105:

He felt a moment of pure joy the first time he managed to type a query on a phantom keyboard and view the Google response floating in the air before him.

He also talks about kids that could learn to use the fidget interface so they could text each other in class without being noticed. But older people have trouble learning the fidget interface, so they tend to stick with the phantom keyboard.

This, of course, was written as fiction, but it's not as extreme as I thought it was when I read it. Shortly thereafter I read that someone was actually working on a prototype computer display in a contact lens. And as for the phantom keyboard--if computers can identify people from camera images, why couldn't they see what you're typing without the keyboard?
 
nslay said:
Meanwhile, software continues to be dumber than a bag of hammers.

I personally believe smart software can overcome stupid input interfaces while being practical and convenient.

Dictation is about as stupid as it gets. But when NLP and AI advance enough so that you can hold a meaningful conversation with your computer, keyboards and mice will look really stupid and primitive by comparison. I mean, you could practically ask your computer to do some challenging task that would otherwise require some heavy typing (as if you were asking another person to do it).

But for now, I imagine smart software can already make pretty good use of something simpler, like a touch screen, while still being practical and convenient enough for productive work. Maybe even throw some dictation and accelerometers in there.

We only just mastered mice and GUIs. We don't know jack about other types of input yet.

I do not have anything against new and/or better ways of 'telling' the computer what you want from it, but like everything in computing, keyboards should keep getting better as time passes, not become less and less usable and more stupid.
 
drhowarddrfine said:
For a second I thought this was a Windows board. Or maybe Reddit. Can we show some semblance of respect here, please?

I think you read too much into it.
 
throAU said:
No, it's fine.

It gives you a clear indication of noobs who have no idea what they're talking about, when they resort to such posts because they can't actually form a coherent argument.

You also read too much into it. CPU and RAM are more expendable resources than they used to be.

You're the newbie here. You still haven't told me what you base your claim on:

Compare RAM utilisation and CPU utilisation.
 
fonz said:
With datagloves and eyephones. Ask Johnny Mnemonic, he knows.

Fonz

I also think this is going to be the way to go. Not for everything, but for most end users this could be the best option. I recently read "Daemon", and I feel the interface it describes is one kind of end-user interface that could be accepted by most users. Aside from that, I can highly recommend the book and its sequel; they are real thought-provokers.

The interface using a HUD in your glasses has a big advantage: no one can easily look over your shoulder to see what you are browsing ;)

Also, it saves lots of space and material since no big 100" monitors need to be produced and shipped around the globe. Think environment.

What I would see as a disadvantage is the limited input bandwidth when it comes to non-fuzzy data, such as entering numbers or source code. I would not want to be seen or heard intoning the magic "#include <stdio.h>" or chanting to the dark "template of templates of templates".
Chuck Moore's colorForth keyboard has only a few keys, and you could cover the basic input to the system - for numbers, text, whatever - in only a few hundred lines. That would be enough to provide exact input; the more complex gesture recognition could come later.

So I see other types of keyboards and mice in the future, and also other GUIs or command lines. But I see little future for touch screens.
 
nslay said:
You also read too much into it. CPU and RAM are more expendable resources than they used to be.

You're the newbie here. You still haven't told me what you base your claim on.

Despite my join date, I've been looking after production unix systems and getting paid for it since 1996.

OK, the system is running like a pig and swapping like mad, and you need to log in to fix it.

What is more likely to work? A voice control interface, or a terminal?

I'm not saying new UIs are bad. I'm saying that low level troubleshooting is going to require low level tools.

Despite the proliferation of high level languages, some people are still required to write in assembler. This will be similar.

Sure, more than 90% of the future population will never touch a keyboard. Some will, though.
 
throAU said:
Despite my join date, I've been looking after production unix systems and getting paid for it since 1996.

OK, the system is running like a pig and swapping like mad, and you need to log in to fix it.

What is more likely to work? A voice control interface, or a terminal?

I'm not saying new UIs are bad. I'm saying that low level troubleshooting is going to require low level tools.

Despite the proliferation of high level languages, some people are still required to write in assembler. This will be similar.

Sure, more than 90% of the future population will never touch a keyboard. Some will, though.

I couldn't say. Such a system doesn't exist ... the closest thing to it is Siri, which runs on a remote server farm. If your Internet connection is severed, then I'd agree: you'd definitely need something else (like a keyboard) in that case.

Complexity doesn't necessarily mean it's horribly unreliable or inefficient. What's more likely to work, a bicycle or a car? I'd say the bicycle, but the car is pretty reliable too.
 