
Interfaces for creating?

I’ve found this discussion a very interesting one. There’s a lot of conjecture out there that the iPad (and similar devices) shifts use of the Internet away from “creating” and towards “consumption”. To some extent this seems obvious. Activities like listening to music and watching video are clearly consumptive, and they’re often more convenient on portable devices like the iPad. In general, reading is quite easy on an iPad or Kindle, but typing is harder than on a regular laptop or keyboard. I definitely find myself being more of a consumer on the iPad. Even with email, I tend to read messages and mark them for later handling on the iPad, whereas on my desktop I immediately reply to the emails I can knock out in the next minute or two. I might look at pictures on my iPad, but I definitely don’t do any editing (although the Photoshop Mobile app is kind of neat for really simple tweaking).

So while I can agree with the observation that iPads and other small devices are currently used more for consumption than creation, I think this may just be a phase. Computer users have relied on keyboards for a long time: the first keyboard appears to date to the 18th century, and the current QWERTY layout dates to 1873. The mouse, first built in 1963 but not in common use until the 1980s, is also ubiquitous in modern systems. One could argue that it’s a powerful device for manipulating interfaces, but I don’t think it’s the end of the line for human-machine interfaces.

There will be something new. There always is. Touch-based computing has its strengths and weaknesses, and there’s an almost nauseating volume of proposed interfaces that can all be summarized as “sort of like the interface in Minority Report”. With faster processors, better algorithms for processing inputs, and so on, it seems only a matter of time before a new breed of general-purpose input devices becomes standard.

How would you like to write code with this?

Keyboard input (and to a slightly lesser degree mouse input) is currently preferred because it is precise. Learning to type is a relatively easy task and provides a very easy-to-control way of interfacing with systems. Using a mouse is trivial to learn, although it is much slower for many tasks; its strength is that it works very well in graphical environments that involve manipulating elements through hand-eye coordination. The combination of the two in modern systems allows precise control when needed and manipulation of complex interfaces when needed.

Touch input devices provide a more natural feel for the second type of interaction, but not the first: precise input is slow and painful. What you gain is that the iPad and similar devices are instant-on and don’t require you to sit down, position yourself, or even use both hands. A user gains speed, portability, and convenience but loses precision.

Two things really interest me in this area. The first is motion-based systems like (to some extent) the Wii and, more importantly, the Kinect. Both are built around movement: one tracks a controller you hold, the other simply watches the user directly. The second is voice-based systems like Siri. There have been many voice-based systems before, but Siri seems to have reached a natural enough level of interaction to finally make voice control practical.

The interesting thing is that both approaches reduce the precision of the input and instead try to get at its underlying intent. You can ask Siri “What’s the weather like”, “Will it rain today”, or just “Weather” and get the same response; the goal is to map many inputs to the same output. It can handle heavy accents and variations in speed, pitch, and intonation and still produce results that make sense. Kinect-based systems similarly look for standard or typical behavior, averaging noisy inputs to arrive at an approximate value rather than working with precise ones.
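
To make that many-to-one idea concrete, here’s a toy sketch. The phrase table, intent name, and averaging window are mine, not anything Siri or the Kinect actually does internally; real systems use statistical models, not literal string matching:

```python
# Invented phrases and intent names, purely for illustration.
WEATHER_PHRASES = {"what's the weather like", "will it rain today", "weather"}

def classify_intent(utterance: str) -> str:
    """Collapse many surface forms of a request onto a single intent."""
    normalized = utterance.lower().strip(" ?!.")
    return "GET_WEATHER" if normalized in WEATHER_PHRASES else "UNKNOWN"

def smooth(samples: list[float], window: int = 5) -> float:
    """Kinect-style averaging: trust the recent average of noisy position
    readings rather than any single precise value."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

for phrase in ("What's the weather like", "Will it rain today?", "Weather"):
    print(phrase, "->", classify_intent(phrase))   # all -> GET_WEATHER

print(smooth([0.52, 0.49, 0.51, 0.48, 0.50]))      # -> 0.5
```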

These new technologies can be leveraged in interesting ways. It’s clear that games involving more physical activity are fun and interesting, and that speaking to your phone to perform tasks that would take longer via touch input saves time. But will anything ever replace the keyboard?

I don’t have a crystal ball, but I think the important thing is that touch, voice, and motion-based input aren’t really trying to solve that problem. All of these inputs are inherently less precise (just as a mouse is less precise than a keyboard). Although there are some very interesting efforts to use a Kinect to write code in Visual Studio, it seems more likely that, at best, motion technology could replace the mouse, or replace it for specialized types of manipulation. Speech seems better suited to out-of-band or contextual tasks (say, for example, you’re in the middle of a coding task and want to send the current file to a teammate for review without stopping what you’re doing to perform the task manually).
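
As a rough illustration of what such an out-of-band voice command might look like under the hood (the command grammar and the send_for_review helper here are entirely hypothetical; no real IDE exposes this exact API):

```python
import re

def send_for_review(file_path: str, recipient: str) -> None:
    # Stand-in for whatever the editor would actually do (email, chat, etc.).
    print(f"Sending {file_path} to {recipient} for review")

def handle_voice_command(command: str, current_file: str) -> bool:
    """Recognize a spoken request like 'send this file to Alice for review'
    without interrupting the keyboard-driven task in progress."""
    match = re.match(r"send (?:this|the current) file to (\w+)", command.lower())
    if match:
        send_for_review(current_file, match.group(1))
        return True
    return False

handle_voice_command("Send this file to Alice for review", "parser.py")
```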

Rapid but precise input is what’s needed for devices like the iPad to shift the trend from consuming information to creating it. That could come from new types of one-handed keyboards (which have been attempted), though I have a hard time seeing how we achieve real precision with devices not controlled by the human hand. Another option is a radical change in the interfaces themselves. For example, instead of writing code in the complex written syntax of most modern languages, a special language could be developed that captures the structure of the code but represents it in a format that can be more easily parsed and understood audibly. Transitions like this have already happened: LabVIEW, for instance, represents programming logic visually rather than as written syntax. I have a hard time picturing exactly how an audible equivalent would work, but in theory it seems possible. There will be naysayers, but there are naysayers now about high-level languages, which already abstract an enormous amount of “what really happens” away from the user.
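
To sketch what “structure represented audibly” might mean, here’s a toy example that holds an expression as a tree and renders it as a spoken phrase. The tree format and the wording are invented for illustration:

```python
def speak(node) -> str:
    """Render a tiny expression tree as an English phrase."""
    if not isinstance(node, tuple):
        return str(node)          # leaf: a variable name or constant
    op, left, right = node
    words = {"+": "the sum of", "*": "the product of"}
    return f"{words[op]} {speak(left)} and {speak(right)}"

# x + 2 * y, held as structure rather than written syntax:
tree = ("+", "x", ("*", 2, "y"))
print(speak(tree))  # -> the sum of x and the product of 2 and y
```

The point isn’t this particular rendering, but that once code is stored as structure rather than text, the same program can be projected into whatever form (visual, audible, written) the input device handles best, much as LabVIEW projects it visually.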

Any thoughts on input devices and human-computer interaction as they’re currently evolving?
