Here is my talk from FDConf, Minsk about feedback in the user interface:
All screenshots for this post were made by Rakhim Davletkaliev.
File. It’s interesting that it was F2 to save and F3 to open, even though the menu order is New, Open, Save — as on the GUI systems:
Edit. These keyboard shortcuts for clipboard were much better than Ctrl+K,K and Ctrl+K,B from Turbo Pascal 5.0:
Compile. It’s always bothered me that Run, which you always wanted, had a more complex shortcut (Ctrl+F9) than Make (F9), which you never wanted by itself.
Destination: Memory. An interesting menu item where the value is displayed inline.
Environment (we’ll get to the windows behind these items later):
File, Edit, whatever, Window, Help — Borland copied this standard from the GUI OSes even though they didn’t have to. It was nice.
Change directory (in MS-DOS, there was always a current directory):
Notice how the active window has a double border.
No windows. Notice the background:
A simple program:
Arguments (what are parameters, then?):
Compile-time error message:
The full-stop at the end of a program is a nice quirk of Pascal.
Go to line number:
Notice that the main window still has the double border when a dialog is open.
Call stack. Not a dialog box, so the unfocused main window gets the single border:
Evaluate and Modify:
Messages. I never knew what Messages were, and neither did the school teachers. And you couldn’t have just googled it:
The language syntax:
No search. Functions organised alphabetically in strange groups:
Error messages, by number:
There also was Turbo Help, the help system available from dialog boxes. For mysterious reasons it looked very different from the main Help:
Editor options. Editors of 2017 have so many options that you need a search just for them:
Mouse options. This was not system-wide:
Colours. My favourite window:
Previous exhibit: Norton Commander 5.0
Someone has tweeted this and got several retweets:
What they mean is this: when the content is still, tapping the screen is interpreted as a tap, but when the content is in motion, tapping the screen just stops the motion. So, is this behaviour modal? No, and here’s why.
Most people think that an interface is modal when it has modes, i.e. when the same user input produces different output depending on the state of the interface. However, that’s not the definition.
Let’s read Jef Raskin carefully:
A human-machine interface is modal with respect to a given gesture when (1) the current state of the interface is not the user’s locus of attention and (2) the interface will execute one among several different responses to the gesture, depending on the system’s current state.
Most people’s understanding includes only (2), but not (1). But they both matter equally. Perhaps Raskin didn’t name the thing well, but we have what we have.
You unlock your iPhone and tap Messages:
But just as you are tapping it, you notice that it’s actually Shazam:
Oops, you are on the wrong page of your home screen.
In this case, launching Shazam instead of Messages is a mode error: your gesture (a tap in the top left corner) produced the wrong output depending on the current state (the page number), which was not your locus of attention. So the iPhone’s home screen is modal.
Now let’s say you are in Contacts and tap the bottom left corner for Favourites:
Is there any chance you actually meant to go to a previously visited web page?
The gesture is the same (a tap in the bottom left corner), and it produces different outputs depending on the current state (the active app). But here, the app is your locus of attention: you are fully aware whether you are looking for a contact or browsing the web. That’s why a mode error is not possible here, and this interface is not modal.
If we get back to iOS scrolling, it now becomes clear that it is not modal. When the scrolling animation is playing, it is the user’s locus of attention. The user is fully aware of the interface’s state: they are looking at the moving content. So the fact that the tap is interpreted differently during this animation is not a surprise and doesn’t produce a mode error.
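The tap behaviour described above can be sketched as a tiny state machine (my own illustration, not actual iOS code): the same tap gesture gets two different responses, but the state that selects between them is exactly what the user is looking at.

```typescript
// Sketch of the tap behaviour: a tap during scrolling just stops the motion,
// while a tap on still content activates it. Names are illustrative.
type TapResult = "stop-scroll" | "activate";

class ScrollView {
  private scrolling = false;

  startScroll(): void {
    this.scrolling = true;
  }

  finishScroll(): void {
    this.scrolling = false;
  }

  // The content in motion is the user's locus of attention, so this
  // state-dependent response does not produce a mode error.
  tap(): TapResult {
    if (this.scrolling) {
      this.scrolling = false; // the tap just stops the motion
      return "stop-scroll";
    }
    return "activate"; // a plain tap on still content
  }
}
```

The state-dependent branch is there, but since the state is visible on the screen the user is watching, Raskin's condition (1) is not met.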
All screenshots for this post were made by Rakhim Davletkaliev.
The canonical view of two directories in the Left and Right panels:
The right panel is active — you can see the cursor and the highlighted path in the title. Use the arrows to navigate the files, and Tab to change the active panel.
Alt+F1 or Alt+F2 changes the displayed volume in the Left or Right panel respectively:
MS-DOS uses drive letters for volumes, and the letters A and B are reserved for the floppy drives. Most computers had just one, so the main hard drive was C.
Hide the panels with Ctrl+F1 or Ctrl+F2:
In MS-DOS, the filename’s length is limited to 8 characters, and the extension to 3. The dot between the file’s name and extension is not shown. Unlike the command prompt, Norton Commander displays filenames in lowercase letters and directories in all-caps. There is a special halftone pattern separating the extension of system files like Io.sys and Msdos.sys (they also have their first letter capitalised).
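As a sketch, the display rules described above could be reconstructed like this (my own illustration, not Norton Commander’s actual logic):

```typescript
// Reconstructs an 8.3 entry the way the panel shows it: the dot is dropped,
// the name is padded to 8 columns so extensions line up, plain files are
// lowercased and directories are shown in all-caps.
function displayName(entry: string, isDirectory: boolean): string {
  const dot = entry.lastIndexOf(".");
  const name = dot === -1 ? entry : entry.slice(0, dot);
  const ext = dot === -1 ? "" : entry.slice(dot + 1);
  const padded = name.padEnd(8, " ") + ext;
  return isDirectory ? padded.toUpperCase() : padded.toLowerCase();
}
```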
Ctrl+O hides both panels, for work with the MS-DOS prompt:
The main menu is the one displayed on the bottom, with keys F1 to F10 mapped to the most-used functions.
F1 for Help was a standard for many MS-DOS programs. I’m not sure if Norton Commander originated it.
F2, User Menu. The items can be programs for quick access. Here, the User Menu is empty:
The user has to edit a text-based configuration file to add items to this menu (press F4 while in this F2 menu):
The main panels are blue, the menus are cyan and the dialogs are grey. Unless they are error messages:
F3, View. Opens text view of the currently selected file:
Of course, for executables it shows “garbage”. But as far as I remember this version has the ability to set up external file viewers, so you can make it display images as actual images.
See the Mark item in the main menu inside the editor? It has something to do with a clipboard. There was no system-wide clipboard, of course, and the programs that had this feature used very different keyboard shortcuts for it. In Turbo Pascal 5.5, I remember, there were crazy combinations like Ctrl+K,B and Ctrl+K,K.
The Copy command opens the Copy dialog where the user enters the destination path. The input is pre-populated with the name of the directory in the opposite panel, and normally the user doesn’t edit it.
F6, Rename or Move (shortened to RenMov in the menu line):
Why is Rename or Move one command? Because from the system’s point of view it’s the same thing: as long as you stay within the same drive, it just changes the full name (i.e. including the path) of the file. Again, the input is pre-populated with the path of the directory in the opposite panel, so just press Enter to move the file there. Or type a name instead of the path to rename the file in place. Or add a new name to the end of the path to move and rename at the same time.
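The same is true on modern systems: a move within one volume is literally a rename of the full path. A minimal Node sketch (assuming a Node runtime; the file names are made up):

```typescript
// Both "rename" and "move" are the same system call: fs.renameSync just
// changes the full name of the file, including the directory part.
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "renmov-"));
fs.mkdirSync(path.join(dir, "other"));
fs.writeFileSync(path.join(dir, "readme.txt"), "hello");

// Rename in place: only the last path component changes.
fs.renameSync(path.join(dir, "readme.txt"), path.join(dir, "notes.txt"));

// Move: the same call, only the directory part of the full name changes.
fs.renameSync(path.join(dir, "notes.txt"), path.join(dir, "other", "notes.txt"));

console.log(fs.existsSync(path.join(dir, "other", "notes.txt"))); // true
```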
Moving between volumes involves actual copying and then deleting the original, but it was also done with this menu.
F7, Make Directory:
Interestingly, the window has no confirmation button, unlike the previous ones. Just press Enter. Also notice that in the main menu bar it’s called Mkdir, not MkDir. Why? Probably, because that was the name of the MS-DOS system command to make a directory.
F8, Delete. No question mark:
Before invoking file operations like Copy or Delete, you could select multiple files with the Insert key, and they would be displayed in yellow:
When files are selected, the bottom line changes to reflect this state.
The top menu, or the “pull-down menu”, is shown when the user presses F9 (or clicks the top line of the UI, if the computer has a mouse).
The Left menu sets up the Left panel. There are many options for what to display in the panels, how to sort it, and more:
The Files menu lists many elements that have nothing to do with files, including Quit — not dissimilar to how the File menu is used in today’s applications:
Right, same as Left:
As you see in the menu, the panels could display a Brief or a Full view. The Brief view just shows the files, in three columns (as in the previous screenshot). The Full view shows one column of files with details:
In this mode, each line looks exactly like the bottom line in the panel, so why not just remove it to show two more files?
You could also set up file filters:
Use a panel for file preview:
Or for search results:
Or for item information. Directory:
Go to file (in the current directory) by typing a couple of letters of its name:
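A sketch of this type-ahead behaviour (an illustration; the exact matching rules are my assumption): each typed letter extends a prefix, and the cursor jumps to the first entry matching it.

```typescript
// Returns the entry the cursor should jump to for the letters typed so far,
// or undefined if nothing in the current directory matches the prefix.
function goToFile(entries: string[], typed: string): string | undefined {
  const prefix = typed.toLowerCase();
  return entries.find((e) => e.toLowerCase().startsWith(prefix));
}
```

So with "autoexec.bat", "command.com", "config.sys" on screen, typing "co" lands on "command.com", and adding an "n" narrows it down to "config.sys".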
Find File. Dialog windows that have other dialog windows inside are cyan, not grey — for contrast:
The inputs are shown with square brackets and dots between them. The focus is shown with a black background. Advanced search options — this window is grey:
Notice how checkboxes differ from radio buttons.
NCD (Norton Change Directory? I’m not sure), a fancy way to navigate the folders on your disk:
Commander Disk Cleanup helps find files to delete:
Honestly, I don’t remember this feature at all (but I didn’t use Norton Commander at that time because there was DOS Navigator, which is a whole different story).
Found something. Here, files are displayed with the dots in the names:
DOS memory blocks:
Now let’s look into the configuration dialog. Notice how the controls are organised and grouped:
By the fifth version, Norton Commander got colour schemes:
The palette was limited to 16 colours. There was no way to make the colours darker in the shadows, so to simulate darkened cyan-on-blue or white-on-cyan, the applications of the era used grey-on-black. I’m not sure, but I guess Norton Commander borrowed this visual style from Turbo Vision. The earlier versions had no shadows. If you go back to the Main menu section, you’ll notice that the help window has no shadow for some reason.
EGA Lines is a mode where twice as many lines of text fit the screen. Nobody I knew ever used it:
Fun fact: this post has 57 full-screen full-quality screenshots. Their aggregate size is 271 kilobytes.
These are words that usually signal problems with the user interface.
Variants: regular, general, basic, advanced, extended, miscellaneous, more
Examples: main section, basic options, advanced plan, miscellaneous settings
The line between “primary” and “secondary” is not defined. The user is looking for a particular thing and has no idea whether it’s “main” or “advanced”. “More” is usually a graveyard of elements that the designer didn’t find place for.
Variants: important, notable
Examples: useful hints, important information, notable changes
When you nominate some stuff as important, it means all the rest is unimportant. Instead of stating the usefulness, explain the benefit: “How to start snowboarding”.
Variants: post, entry, publication
Examples: add entry, next post
These, again, name the type of the content. The words can be useful among the editorial staff, but meaningless for the reader: there are no newspapers with an “articles” section.
Variants: list, archive
Examples: shoes catalogue, news archive
There’s no need to signal that a list is following. Just put the list with an informative heading: “Shoes”, “News”.
Variants: enter, select, go, follow, open, launch
Examples: click here to open, enter password, select country, follow the link
There is no need to explain how to use buttons, links, input fields and other standard user interface elements. Links should just name the places they lead to: “iPhone 7 review”. Form fields should just name the content: “Password”, “Country”.
Examples: application form, inquiry form
A form is a table of fields to fill. The word “form” just names a type of screen, but the user already sees that it’s a form. Name it with the benefit in mind: “Job application: designer”.
Variants: necessary, must, please
Examples: required field, you must agree, please specify the phone number
The user doesn’t care if a field is required. If the form doesn’t work without it, they will put in some garbage. Instead of demanding or begging, explain the benefit: “We will call to coordinate delivery time”.
Variants: process, transaction, request, step, state, module, function, data
Examples: process is not responding, bad request, step 5 of 12, module not installed, wrong data format
These words are handy to describe how something works technically. But there is no point in using them in the user interface: they just complicate matters for the user. Write as a human being: “Spell check available in paid version”, “Due to an error, the app needs to re-open”.
Variants: authentication, authentification, identification, session, limit
Examples: please authorise, session timeout
Even programmers mix up the autho-whatever-s constantly. Use the verbs “to enter” or “to sign in”, or, even better, name the thing that’s inside: “Shopping history”.
Example: operation completed successfully
If an operation hasn’t completed successfully, it hasn’t completed, period. Write what’s been done: “Money sent”, “Update installed”.
If you know why a particular word from this list is not good, but in your case it makes perfect sense, leave it.
A slider is a simple user interface control where you adjust some value by dragging a handle in a groove:
Most web developers can’t get it right. You may think that there is not much you can fail at with such a simple thing. But most sliders are bad. They don’t respect Fitts’s law and don’t provide decent feedback.
I decided to write a manual. Let’s see what usually goes wrong and how to fix it. If you are reading this via RSS, please go to the browser to see the demos.
A common mistake is requiring the user to grab the handle to drag it. Logically, this makes sense. But a small handle is very hard to grab. Try here:
A tiny bit better is making the groove clickable:
But in this design the groove is so thin that it hardly adds anything — aiming is still a pain, particularly on a touch screen. Speaking of touch screens, some implementations just forget about them and don’t handle touch events at all.
You should be able to grab the handle from any point in the slider area:
Make this area at least as big as a comfortable button would be. Notice how the areas to the left and to the right of the slider also work, moving the slider to the minimal and maximal positions.
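A minimal sketch of this rule (names and signature are my own): the value comes from the pointer position anywhere in the slider’s hit area, clamped to the ends, so clicks left of the groove give the minimum and clicks right of it the maximum.

```typescript
// Maps a pointer's x coordinate to a slider value. The pointer may be
// anywhere in the (large) hit area: the ratio along the groove is clamped
// to [0, 1], so the areas beyond the ends snap to min and max.
function valueFromPointer(
  pointerX: number,
  grooveLeft: number,
  grooveWidth: number,
  min: number,
  max: number
): number {
  const ratio = (pointerX - grooveLeft) / grooveWidth;
  const clamped = Math.min(1, Math.max(0, ratio));
  return min + clamped * (max - min);
}
```

In a real page this would run on every `pointermove` while the pointer is captured, regardless of whether the press started on the handle.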
In all the examples above the mouse cursor was changing to a pointing finger when you hovered over the draggable area of a slider. Sometimes this feedback is absent or inconsistent. Here, you can drag from anywhere, but the mouse cursor changes only over the handle:
Having no sign that the slider would work otherwise, many users will try to grab the handle. An even worse mistake is requiring the handle to be dragged while displaying a pointing finger over the whole slider area.
Use feedback consistently: change the cursor over the whole slider area and make all of it work:
The change of the cursor is the minimal feedback necessary for the user to understand how the slider works. But it’s better to also highlight the slider itself:
Notice how this one just feels nicer to use.
A small detail: the slider gets its red “hover” highlight, and also keeps it while being dragged, even when the mouse is outside.
The slider should move continuously as I move the mouse. Some implementations would only repaint the slider when I release the mouse button:
Others accept clicks in the whole slider area, but only move the handle to the click position instead of letting you grab it (unless you grab the handle itself):
Both feel broken.
The slider is usually there to control something else. In this case, the rectangle to the right of it. In some implementations, the slider would move continuously, but the external changes would happen only on release:
Everything should stay in sync: hover effects, the cursor shape, actual active areas; reaction to click and drag; feedback inside and outside the slider.
Sometimes it may take noticeable time for the changes in the slider to take effect somewhere else. You may need to make complex calculations or get data over the network. You may think you just can’t provide continuous feedback in such cases. But you must still aim to provide as much continuous feedback as possible.
In my examples, the slider controls the numeric value and the background colour. Let’s pretend they are slow to update.
In the worst implementations, the slider would get repainted only when the data is ready:
This is painful to use.
At least the slider itself should repaint continuously during the drag, no matter how long it takes for the change to take effect:
But we can do better. To fake continuous zooming, Google Maps shows at least a blurry map image (see my post on immediate feedback when data is unavailable).
What if only the colour is slow to update, but the number can change instantly?
Again, compare with the previous implementation — this one just feels snappier. So: always consider at least partial continuous updating. This is much better than waiting for a full update before making any change.
Notice that the data updates as fast as it can without the need to release the mouse button. If the data takes even more time to update, some progress indicator may be used, but the slider should never become unresponsive.
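One way to implement this (a sketch; the generation-counter approach is my assumption, not taken from the demos): the cheap value updates on every drag event, while each slow request is tagged so a stale result can never overwrite a newer one.

```typescript
// Tracks a slowly-computed value that depends on the slider. Each drag event
// calls begin() to get a token; when the slow computation finishes, complete()
// applies the result only if no newer drag has superseded it.
class SlowValue<T> {
  private generation = 0;
  current: T | undefined;

  // Call on every slider move; the returned token identifies this request.
  begin(): number {
    return ++this.generation;
  }

  // Call when the slow computation finishes. Returns false for stale results.
  complete(token: number, result: T): boolean {
    if (token !== this.generation) return false; // a newer drag superseded it
    this.current = result;
    return true;
  }
}
```

The slider and the instant number never wait on this class; only the slow part (the colour, in the example above) goes through it.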
There are too many sliders above. Which are the good ones?
This is the best, with full continuous feedback:
This is the second best, with partial continuous feedback when some data needs time to update (colour, in this example):
Please show this to your web development team.
In Sayve 1.3, we added the ability to display a badge with the number of unsorted audio recordings:
This is for those of us who don’t like their audio recordings to pile up. Actually, we wanted this in 1.0, but it didn’t get there. Why? It wasn’t that easy to design.
OK, this must be laughable: what even is there to design? The badge is a system feature that has a fixed look, just take it. But you can’t.
This is going to be a long and boring post, by the way.
Since iOS 10, displaying a badge requires the app to have asked the user’s permission for sending notifications. So if we just tried to show it, the user would see the system dialog asking for permission.
We already ask for two permissions on first open: microphone and speech recognition. For iOS these are two separate things, unfortunately. Before asking, we show this screen:
This prepares the user for the system questions and helps us get a “Yes”. Microphone:
Notice that we can also add some of our own text in these dialogs to explain why we are asking for these permissions.
We could add a “tutorial” screen before asking about notifications: explain that we are going to display a badge and only then ask for permission. But adding another such screen and a third permission question would make the first-launch experience really clumsy. And what’s more important, many users would presumably still press “Don’t Allow”, because at first launch they have no idea of how they would use Sayve and why a badge might be useful to them.
And what should we do if the user answers “Don’t Allow”? You can ask for a permission only once. How would the user even know they could change their mind later? For comparison, when the user answers “Don’t Allow” to either of the two sound-related questions, Sayve just cannot work, and we show this screen until both permissions are given:
We can’t do the same for the badge permission, because the badge is not necessary; it is just a preference. Some users would like it, others would not. So if the user says “Don’t Allow”, we would need to explain how to enable the badge later. But where and when? We don’t even have an About or Settings screen.
We could go and add a Settings screen just for this one setting. See how this is getting complicated?
Adding an in-app Settings screen makes little sense: iOS already has notifications settings, and this is the only “notification” feature we use. So the next idea was to make this setting configurable only in the Settings app:
Let’s just display a badge when the user has enabled notifications. And we could explain this option in the app’s description.
This also didn’t work. It turns out you can’t just quietly put this “Notifications” element on the app’s settings screen — it only gets added there after your app has asked for permission. So we have to ask for permission even to get a “No”! By the way, the question starts with the text “Sayve Would Like to Send You Notifications”, which is not true:
We would not like to send any notifications; we don’t even have them. As for the badge, we just can use it, not would like to. To make things even worse, in this dialog, unlike the two previous ones, you cannot even put your explanatory text.
But this is not something we can change: the question is asked by iOS, not us.
So we added this icon to the top right corner:
The idea was: when the user taps this icon, we would first ask if the user would like to see the badge, and if yes, we’ll ask for the system permission to enable notifications. And we would remove the icon then.
Do you think this worked? Of course not. What would we do if the user declined the first time? There would be no way to add the notification permission setting to the system Settings app then. So we cannot ask the user if they want a badge; we must just tell them that the notification permission dialog will follow and that it is there to display a badge if they want one.
So, in 1.3, when you tap the icon in the top right corner, you’ll see:
And after you press “OK” (which is misspelled as “Ok”, unfortunately), you see the actual, system notification permission dialog where you can decide whether you want the badge or not.
After shipping this, we realised it’s better to say “Continue” instead of “OK” in the box above; otherwise it may seem to the user that they have no choice.
Is this the best design of a feature? I don’t think so. But it’s the best we could come up with given the system limitations.
We hope you like Sayve (we are Mihail Rubanov, the developer, and myself).
It’s about time to flip the mobile user interface vertically.
It was easy to tap “Back” or “Edit” with your thumb on the original iPhone. On iPhone 5, it became harder. Since the 6, it’s almost impossible.
Apple has added two features to make things less bad. The first was a gesture where you slide from the left edge of the screen to go back. The second was Reachability, where you double-tap the Home button to make everything on the screen shift down so you can reach the top row of buttons. This already felt like a duct tape fix.
The question is: why put buttons on top and then add an obscure gesture to reach them when you could just put the buttons at the bottom in the first place?
Windows Phone has the browser’s address bar on the bottom:
Apple, when will you do the same?
It’s interesting that the revolution has already happened in Maps. Before and after:
The search field and results have moved to the bottom. Why did this change only in one app?
We’ve designed Sayve, the smart voice recorder, so that all the important controls are reachable. A couple of intermediary user interface layouts:
At first, the audio controls were on top (the left layout). Then they moved down (the right layout).
Finally, all the user interface changed to gravitate towards the bottom:
I’m hopeful that Apple will do something similar throughout iOS 11.
When you turn things upside-down, you may not like the result aesthetically at first. But our idea of what’s beautiful is largely formed by the technology.
The rule: in the mobile user interface, put the controls on the bottom.
I’ve designed the UI for the panel protection program Securige. It’s a program where operators see if someone’s broken into your apartment and send the rapid response team if they have. That’s what it looks like:
My favourite moment of the project was when I was sitting at the operator’s post quietly (that was the condition for letting me in), and the operators were like: “if you have questions, just ask!”. So I started asking. That’s where I learned why the keyboard is almost never used. Otherwise I wouldn’t have come up with the Fitts-law-optimised search bar on the left.
Also, when I asked what irritated the most in the existing program, everyone said: “it’s fine”. But when I gave examples of possible improvements, they said: “wow, can this be done?”.
Very interesting project. A whole book on user interface can be written just on the examples from this one. Read a more detailed description.
A web application’s front end (what the user sees) and back end (what happens on a remote server) are often developed separately. If the back end of some feature is not ready yet, the front-end developer is very limited in what they can do.
I’ve designed the user interface for Mimic 2.0, a web developer tool for mocking server responses in a browser. With Mimic, you can develop as if the server was alive. It’s very easy to set up a simple mock. Say, you want to pretend the server responds with a line of JSON:
It lets you set up very advanced mocks, adjusting HTTP headers, timeouts and what not:
The great thing about Mimic is that you don’t need to set up a local server and change request URIs in your application. It works with the existing applications as they are, right in the browser. And you don’t even need to install browser extensions: you just link one script to your application and that’s it.
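As an illustration of the general technique (not Mimic’s actual API), in-browser mocking boils down to consulting a table of mocks before any real request goes out:

```typescript
// A tiny mock table. If a request's URL is in the table, a fabricated
// Response is returned; otherwise the real request goes through.
type Mock = { url: string; status: number; body: string };

function matchMock(mocks: Mock[], url: string): Mock | undefined {
  return mocks.find((m) => m.url === url);
}

// Example table: pretend the server responds with a line of JSON.
const mocks: Mock[] = [
  { url: "/api/user", status: 200, body: '{"name":"Ann"}' },
];

// A fetch wrapper checks the table first and falls through otherwise.
async function mockedFetch(url: string): Promise<Response> {
  const hit = matchMock(mocks, url);
  if (hit) return new Response(hit.body, { status: hit.status });
  return fetch(url); // no mock: the real request goes out
}
```

A tool like Mimic wraps this idea in a UI and keeps the application code untouched; the URL and JSON above are made up for the example.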
Read more about the user interface on the project page.