Passivity in the Workplace

In addition to our tendency to empathize inadequately, we commonly assume that others share our past experiences. In our minds, everyone’s history comes to resemble our own, and we don’t customarily imagine, say, a victim of abuse, neglect, or some other heinous criminal act standing or sitting right there beside us, quietly keeping that traumatic experience from becoming known. On the flip side, we don’t customarily imagine the possibility of some deviant individual standing or sitting right there beside us, working hard at gaining our trust, say, to take advantage of it later. Note that this is not an issue of paranoia; this is about possibility. You’re a professional; thinking in terms of possibility is part of your job now. Forget about what is probably true. You must now consider the BIG picture.

…people often distort their thoughts about reality in order to make themselves feel more comfortable or happier.
— (Norman) Stuart Sutherland, Irrationality: Why We Don’t Think Straight!, 1992

People like to kid themselves.
— Detlof von Winterfeldt and Ward Edwards, Decision Analysis and Behavioral Research, 1993.

It is now time to turn your brain on and start recognizing possibility over probability. Billions of personalities in our world, with only one — yours — to comprehend them.

II. Policy and Procedure

Three problems plague the workplace:

• Rules that don’t take into account certain, specific possibilities

• Employees — superiors and subordinates — who either just don’t try imagining or simply choose to disregard alternate possibilities other than those currently addressed

• No means in place by which an alternate possibility or exception could be discreetly suggested and appreciated

A. Rules that don’t take into account certain, specific possibilities 
There is an exception to every rule … almost. You should always be suspicious of any rule written without at least one exception, because it more than likely sanctions or dictates some course of action that should be avoided in some particular situation. What happens if … or in the case of …? Brainstorm to eliminate all possible irrational consequences that could result from a poorly thought out rule before one surprises you some day.

We inhibit learning when we view people as machine-like, suggesting that they follow instructions like a machine, and force them to justify behavior exclusively in terms of previously articulated plans…. People do not simply plan and do. They continuously adjust and invent. Managing this process means managing learning, not managing application of a plan.
— William J. Clancey, “Practice Cannot be Reduced to Theory: Knowledge, Representations, and Change in the Workplace,” in Organizational Learning and Technological Change, S. Bagnara, C. Zucchermaglio, and S. Stucky (eds.), 1995; papers from the NATO Workshop held September 22–26, 1992, in Siena, Italy.

B. Employees — subordinates and superiors — who either just don’t try imagining or simply choose to disregard alternate possibilities other than those currently addressed 
EMPLOYEES! Don’t let your personal mood interfere with your professional outlook. Don’t make excuses and perform according to how you “feel” or according to “what the policy is”; aspire instead to take a step back, analyze the situation, and consider the possibility of an exception to the rule. What is the right thing to do here? …the responsible thing to do? …are there any safety concerns? …quality-of-life issues? Learn to CARE. You may not always be able to empathize, depending on your own prior experiences, but you can still IMAGINE possibilities.

We lead our lives day-to-day, typically, in a relaxed, easy state. Our thoughts revolve around information picked up by our senses and analyzed in a cognitive manner or style shaped by our experiences and influenced by our personal emotional attitudes. This is passive thinking.

Passive thinking finds no safe place among professional decision makers. Professional decision makers have learned how to brainstorm and use their imaginative abilities to the utmost to arrive at unforeseen solutions. They recognize their own personal attitudes and beliefs and keep those attitudes from shaping or influencing the professional decisions that constitute their individual responsibility. Instead, they stick to the tried-and-true scientific method: formulating hypotheses and conducting carefully controlled experiments to test their validity.

Professional decision makers don’t expect good ideas to just “happen.” They search for them, create them, imagine every possible contingency surrounding them and prepare for them before they even practically exist.

Similarly, professional decision makers don’t expect themselves to be immune to bad ideas or erroneous thinking. We’re all human; we all make mistakes. But professional decision makers know how to study their ideas to root out the bad ones so as to safeguard themselves and others from possible unwanted or undesirable consequences. The idea is to rack your mind and use your imaginative and analytical capabilities to the fullest to recognize all the possibilities and discover what other possible approaches there may be so that you can compare and contrast their feasibilities, pros and cons before trying to do something that you may later regret.

There is nothing magical going on here. Just self-discipline and putting your own mind to work even when you don’t want to, doing what you have to do to get a job done … professionally. It centers on CARING about what you do and how well you do it: caring for all those who will be affected by your decision, before you decide to implement it.

C. No means in place by which an alternate possibility or exception could be discreetly suggested and appreciated

Quality never comes easy. Professional decision-making entails a lot of hard work that cannot possibly be accomplished by any single person in every single situation. In most cases, it is a formidable job even for a group of dedicated individuals working over a very lengthy period. Although optimal solutions may take some time to arrive at, good, safe, viable solutions can usually be found and improved upon in time, assuming that we will later possess the ability and motivation to do so.

But how can you ever hope to find that optimal solution if you stifle creativity among your subordinates? Ideas can hide in the most inaccessible places, well beyond your imaginative reach. Listen to and consider other perspectives! If you cannot prove that some particular idea merits no further scrutiny, allow for that possibility, regardless of your own personal beliefs or attitude. Appreciate and encourage all suggestions. You’re going to get an outstanding one some day!

As true as that may be, you must still recognize the possibility of some timidity among the personalities on your team. Even if they all seem to be outspoken individuals, there are certainly times when some things shouldn’t be blared out for everyone to overhear. The means of communicating ideas (and complaints!) should be discreet — and anonymous — if so desired.

Speech recognition heads to portable media players

The structure of applications follows the type of user interface used. The first interactive apps in PC-DOS days were text-based console apps. An application would ask the user a set of questions, one at a time, ending with the ubiquitous “Are you sure? Y/N.” A poor user who mistyped an item would press N and have to fill in the list all over again.

GUIs gave the initiative to the users, who could fill in (or not) fields in any order. Validation could occur on each item as it was entered. A big step forward, if you had a PC handy.

VUIs (voice user interfaces) are a whole different animal. In a VUI you must activate the grammar before you ask the question, and it must contain all the possible answers. This complicates the user interface because some data values are open-ended. Consider getting a mailing address from the user.

The State is easy; there is a fixed set of them. Zip code is more open-ended, but there is still an underlying pattern (a 5-digit number in the US; letter-digit-letter digit-letter-digit in Canada) that can be used to create a grammar.
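Those two patterns can be captured with ordinary regular expressions before any grammar work begins. This is only a sketch; the names are made up, and a real VUI toolkit would express these as recognition grammars rather than regexes.

```python
import re

# Illustrative validators for the two postal patterns described above.
US_ZIP = re.compile(r"^\d{5}$")                                # five digits
CA_POSTAL = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")  # A1A 1A1

def looks_like_postal_code(text):
    """Return True if the text matches either underlying pattern."""
    text = text.strip()
    return bool(US_ZIP.match(text) or CA_POSTAL.match(text))
```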

Street addresses are completely open-ended. The domain has, if you’ll pardon the pun, a large address space.

Here’s a sampling:
1. Dr. Martin Luther King Jr. Ave is the longest street in Albuquerque.
2. Ho Road in Carefree, AZ meets Hum Road at the corner of Ho and Hum.
3. Akaaka Street is on Oahu.
4. Not to mention the dreaded Welsh names, like Gwernymynydd.
The only feasible approach for getting a street address is divide-and-conquer: ask for the zip code first and then, using census data, have a grammar for every zip code. Suddenly, your simple feature of getting the caller’s address requires determining every street name in the country! As discussed before, this is a perfect job for third-party speech objects.
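The divide-and-conquer step can be sketched as a simple lookup keyed by zip code; the zip codes and street names below are illustrative stand-ins for real census data.

```python
# Per-zip street lists standing in for census data (values are illustrative).
STREETS_BY_ZIP = {
    "87102": ["Dr. Martin Luther King Jr. Ave", "Central Ave"],
    "85377": ["Ho Road", "Hum Road"],
}

def street_grammar(zip_code):
    """Return the street names the recognizer should listen for in this zip."""
    return STREETS_BY_ZIP.get(zip_code, [])
```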

The structure of speech application code reflects this issue. Much of the validation code of GUIs becomes grammar-generation code that runs at the start of the dialog. When the speech dialog ends, there’s not much validation to do, since the user was picking from lists that we generated. Of course, dynamic grammar generation creates problems of its own: caching and avoiding unnecessary grammar reloads.
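The caching concern can be sketched with a memoized grammar builder; `lru_cache` here stands in for whatever cache a real speech platform provides, and the counter exists only to show that repeated calls don’t recompile.

```python
from functools import lru_cache

COMPILES = {"count": 0}  # instrumentation only: counts real compilations

@lru_cache(maxsize=64)
def compile_grammar(zip_code):
    """Stand-in for an expensive grammar build (census lookup + compile)."""
    COMPILES["count"] += 1
    return {"85377": ("Ho Road", "Hum Road")}.get(zip_code, ())

compile_grammar("85377")
compile_grammar("85377")  # served from the cache; no second compilation
```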

Apps that do well in an “everything is a listbox” world are ones that already know about the user. Existing customers call in, enter an account number, and the app already knows their phone number, address, and GPS coordinates.

Two US firms have outlined ambitious plans to enable users to talk to their digital media players instructing them what they want to hear next.

Music library firm Gracenote has teamed up with ScanSoft to offer a control system that aims to give people hands-free access to their digital music collections on the move and make the need for thumbs a thing of the past.

“Voice command-and-control unlocks the potential of devices that can store large digital music collections,” said Ross Blanchard, vice president of business development for Gracenote.

“These applications will radically change the car entertainment experience, allowing drivers to enjoy their entire music collections without ever taking their hands off the steering wheel,” he added.

If the Gracenote name sounds familiar, it’s because the company currently provides music library information and ID3 tagging for millions of different albums for music download services such as Apple’s iTunes and Windows Media Player.

“Speech is a natural fit for today’s consumer devices, particularly in mobile environments, and the increasing portability of large libraries of music and video files make speech a necessary interface for safety and convenience for entertainment devices,” stated Alan Schwartz, vice president of SpeechWorks, a division of ScanSoft.
“Pairing our voice technologies with Gracenote’s vast music and video database will bring the benefits of speech technologies to a host of consumer devices and enable people to access their media in ways they’ve never imagined.”

Targeted products include car entertainment, portable media players and home entertainment devices such as media servers. The companies estimate that fully integrated solutions for hardware and software platforms will be available in the fourth quarter of 2005.

However, the companies have not commented on which players will use the new software.

Example UI Spec: Text Instant Messages

Users can send each other Sound Instant Messages (SIMs), which are short sounds that have meanings associated with them. (The sounds are currently being designed.) To send a SIM, the user taps the name of the person they wish to contact, and that person’s Bub screen appears. This page provides some information about this bub as shown, namely their full name, their Sound ID (which users can tap to play), and whatever awareness information is available. In addition, it shows the SIM icons. [Can we have a “Sounds turned off” indicator on this page when the person is muted or blocking sounds from this person?]

To send this bub a SIM, the user taps the associated icon. When they do so, if the sound reaches the server (other person?), it plays back to this user so they know the sound was sent. [Does this mean it may not reach the bub even if this user hears it play back? Did we decide to require an acknowledgement or not?] When the user sends a sound, the recipient hears first the sender’s Sound ID followed by the SIM. (Note that this is the opposite of the awareness sounds, which play before the SID.) In addition, they see a visual alert of the message. Figure P5 shows what it looks like for Walendo when Nancy sends him the SIM “Ready?”. The alert in the header area alternates between the two images for five seconds. Note that Walendo could be on any screen in Hubbub and he would see this alert. [We need to figure out if there are any exceptions to this.]

Possibly for Version 2: If the user taps on the flashing alert, they are taken to that bub’s screen so they can quickly reply to the SIM with a SIM.
If the recipient happens to be muted, the visual alert still flashes but the sound does not play. In addition, an error sound plays on the sender’s device instead of the sound, and the “Sounds turned off” text flashes off and on for [5] seconds. [Maybe we play a very short beep-type sound to tell them just that a message came in so they look at the screen, but don’t play the sound.] If the recipient is blocking all sounds from the sender, then they do not receive the visual alert or the sound. The sender sees the same “Sounds turned off” message flashing. There is no way for the sender to tell the difference between someone who always keeps their device muted and someone who is blocking their sounds. (Over time, we expect users to figure out that the visual indicator still appears if they’re muted but not blocked, so they can have conversations even when one person wants to keep their device quiet. But there is plausible deniability if someone who is blocking a bub doesn’t respond, because it’s easy not to know about the sound if you’re not looking at the device.)
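The muted/blocked rules above boil down to a small decision table. A hypothetical sketch (the real client is not structured this way):

```python
def sim_outcome(recipient_muted, recipient_blocking_sender):
    """What each side experiences when a SIM is sent, per the rules above."""
    if recipient_blocking_sender:
        return {"recipient": "nothing", "sender": "Sounds turned off"}
    if recipient_muted:
        return {"recipient": "visual alert only", "sender": "Sounds turned off"}
    return {"recipient": "sound + visual alert", "sender": "sound plays back"}

# The sender sees the same thing whether the recipient is muted or blocking,
# which is exactly what gives blocking its plausible deniability.
assert sim_outcome(True, False)["sender"] == sim_outcome(False, True)["sender"]
```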

As mentioned in the next section, if a user tries to send a sound or text message to a bub who is offline, a window appears telling them that the bub is offline and asking if they want to send a message through email. See explanation in the TIMs section. In the case of trying to send a sound message, the text area would start out blank, since we can’t send sound messages through email.

Hubbub provides a log of the last 5 SIMs to arrive. This might be useful if the user hears a sound but isn’t able to identify it and doesn’t look at the screen fast enough to see the flashing message. Or they may have muted Hubbub and had their attention elsewhere, but they can still find out the most recent messages to arrive. To see the log, users tap the “Last msgs” button from the main screen. [It would be nice if you could tap one of those items in the list to go to that person’s Bub page so you could easily respond without having to navigate your way to them. Let’s try to get this in if we can.]

We expect that people will want to create new SIMs once they get the hang of using the ones we provide. We are still working out our plans for supporting this activity.

Users can also send each other Text Instant Messages (TIMs), which are equivalent to the instant messages of such programs as AOL Instant Messenger, Yahoo Pager, ICQ, Excite PAL, etc. The difference is that the users can send these messages between wireless Palms or between computer desktops and Palms.

To initiate a text message, the user taps the name of the person on the Hubbub main screen. To start the conversation, the user types a message into the text area at the bottom of the screen and taps “Send.” If the other person is accepting text messages from this person, the Text Message screen appears on the user’s screen.

Each time a user sends a message, that message appears at the bottom of the scrolling area and the rest of the conversation scrolls up. The user’s comments are in regular type and the other person’s comments are in bold. The header of a TIM indicates the time at which the conversation was initiated; it does not update with each new contribution.

Users can also send Sound Instant Messages as part of their text messages.
To do so, they tap the menu button at the bottom left of the screen to bring up the SIM menu, as shown in Figure P8b. If the user has created their own sound messages, those appear in the menu as well. When a user sends a sound message, the sound plays for both of them, and the icon for that message appears in-line in the text, on its own line, followed by the label and preceded by the name of the sender (seen in Figure P8a).

To save on memory, the TIM screen does not retain a history of all the messages in a given conversation. Instead, the last 100 messages (from either person) are available, but if the user tries to scroll back beyond that, the messages are no longer available. To end a conversation, the user taps the Done button. They are returned to the main screen, showing whatever group was last displayed.
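The 100-message cap behaves like a fixed-size ring buffer; a minimal sketch using Python’s `deque`:

```python
from collections import deque

# Once the buffer is full, the oldest message is silently dropped, so
# scrolling back past the cap finds nothing -- as the spec describes.
history = deque(maxlen=100)
for i in range(150):
    history.append(f"msg {i}")

assert len(history) == 100
assert history[0] == "msg 50"  # msgs 0-49 are no longer available
```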

When a user receives a text message, the incoming text message sound plays, followed by the sound of the person sending the message. In addition, the text message window automatically displays on their screen, except in a few cases. Namely, if they are already in a conversation with someone else, the newly created conversation appears in the IM menu and the number next to the menu changes to reflect the new conversation, but the screen does not switch to the new message. To see the new conversation, the user pulls down the menu and switches to it. The other exception is when the user has a blocking popup on their screen. These appear when the user is in the middle of some action and needs to provide input to complete it, e.g. adding a user or changing the name of a group. In this situation, the sounds play when the message arrives, and as soon as the user closes the popup, the incoming message appears. [Might be better to give it something like two seconds so they can see that whatever they were doing took effect before being moved off. We’ll probably need to tweak this behavior, since it’s very annoying to have things happen when you’re in the middle of something else.]

Each time the person sends a new message to an existing conversation, just the TIM sound plays (without the Sound ID of that person). [Maybe you hear the SID if you’re not looking at that window, or maybe SIDs only announce new conversations. We could also do something to show which message has unseen contributions, e.g. make that name in the menu bold, to help you figure out which message to look at next.]

The TIM screen provides information about the other person’s focus and activity in their TIM window with this user. Specifically, an icon to the right of the bub’s name indicates which of three states they are in: (a) typing in this IM window, (b) viewing or has focus in this IM window but is not typing, and (c) not viewing (Palm) or does not have focus (desktop) in their window for this exchange. As the other person switches between different states, the icon updates to reflect this information.  This TIM activity indicator enables people to coordinate their conversation, which is bound to be punctuated by pauses given the speed at which people can write on the Palm. Users can interpret whether the long pause is because the other person is composing a long response or because they’re busy doing something else, and then adjust accordingly.
Users can have more than one conversation active at once. The menu in the upper right shows that this user has three such conversations active. The number to the left of the menu also indicates the number of active IMs. (If there is only one active IM, however, no number appears, since a 1 is more likely to be confusing than helpful.) To switch among the conversations, the user taps the menu and selects another one. Alternatively, the user can put the current conversation “on hold” by tapping the “Hold” button, which brings the user back to the main screen but keeps this conversation “active.” (This is the equivalent of moving focus to another window in the desktop world.)

Being active means that the conversation still appears in the IM menu, the history of messages in that exchange (up to 100) is retained, the IM icon appears with the bub’s entry on the main screen (see the listing for Ellen), and the timestamp for the conversation continues to show the time the exchange began. (The icon next to the user’s name on the main window is the same icon used to indicate the bub’s activity in their TIM window, and it updates on the main screen just as it does on that IM window.) In addition, if there are any active IMs, the main screen has an IM button at the bottom of the window, regardless of which group they’re looking at. Tapping that takes the user back to the IM window showing the last IM they were in. If there are no active IMs, this button does not appear. (This is intended to make it quicker to get back to an IM even if the user is not looking at the group that contains their current IM partner.)

When the user returns to the conversation, everything appears as it did before, with perhaps additional messages at the bottom if the bub has entered any. If the user wants to end the conversation, they tap the “Done” button on the IM window. (This is the equivalent of closing an IM window in the desktop world.) The conversation no longer appears in the menu and there is no way to “get back to it.” However, note that the other person may not have “closed” their end of the IM. If they send a new message, the recipient experiences that as a new conversation being started from that person, with a new initiation time and no history of previous messages.
If the user tries to send a text (or sound) message to someone who is offline, then a window pops up. Since the user would only get this screen after having already typed a message into the text field, that message is provided in the text area so the user can either send it as is, or modify it and then send it by tapping the Send button. When it arrives in the recipient’s inbox, the Subject is “Hubbub message from {Bubname}.” Since the message must be short, the interface allows only 512 characters. If the user tries to write more, the character is not echoed and the interface beeps. [Note: if scrolling doesn’t come for free on this screen, then the message can be only as long as this interface allows, which is about 170 characters. We will not implement scrolling just for this window.] If the user taps Cancel, they are returned to that Bub’s Bub screen.
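The 512-character cap amounts to refusing input past the limit; a sketch of that input rule (the function name is made up):

```python
MAX_EMAIL_CHARS = 512  # limit from the spec above

def accept_char(message, ch):
    """Append a character unless the cap is reached; the second return
    value says whether the UI should beep instead of echoing."""
    if len(message) >= MAX_EMAIL_CHARS:
        return message, True   # beep, don't echo
    return message + ch, False
```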

Example UI Spec: Awareness

Hubbub’s main screen shows a list of people (called Bubs) whom the user has added to Hubbub and has put into the group AT&T (shown in the upper right). Bubs are listed with the user at the top, followed by those bubs who are online right now (either active or idle) in alphabetical order, followed by those who are offline, also in alphabetical order. Each bub is listed either in bold, to indicate that they are currently active, or in regular font, to indicate that they are either idle or offline. A user is considered active if they have used the pen (Palm) or mouse or keyboard (PC) within the last 5 minutes. They are considered idle if the device is reachable but they’re not active. They are offline if the application is not reachable, which in the case of the Palm would happen if the modem was off or could not receive a signal, and in the case of the PC may happen if the person is not running Hubbub, if they logged off, if the computer is turned off, if their network connection is lost, etc. A user may be active or idle on the Palm even if they’re using another application at the time. [Note: We’re not sure we can do this, but this is the goal.]

The numbers next to each bub indicate how long they have been active or idle. If the bub is active, the number indicates how long they’ve been active; if they’re idle, it shows how long they’ve been idle with a minus sign in front. Time is measured in hours:minutes (not seconds). The icons next to the numbers indicate which device the bub is active on if they’re active, or was last active on if they’re idle. The keyboard icon indicates the PC, the Palm icon indicates the Palm, and the phone handset indicates the phone. (Version 1 does not include a phone client, so this would not appear until that client is implemented.) If the person is unavailable, they are listed as “Offline.” [Should we give time since last available??]
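The ordering and time-display rules above can be sketched with a sort key and a formatter (names and the data shape here are illustrative, not from the real client):

```python
def sort_key(bub):
    """Online bubs (active or idle) first, each tier alphabetical.
    The current user is pinned to the top separately."""
    online = bub["state"] in ("active", "idle")
    return (not online, bub["name"].lower())

def format_elapsed(minutes, idle=False):
    """Hours:minutes, with a leading minus sign when the bub is idle."""
    sign = "-" if idle else ""
    return f"{sign}{minutes // 60}:{minutes % 60:02d}"
```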

For bubs who are active, a small indicator to the left of their name indicates how active they are. That is, it indicates the frequency of mouse or keyboard events (PC) or pen events (Palm). This enables users to get some sense of the other person’s activity. If they’re very active, then maybe they’re busy writing; if they’re not very active, maybe they’re surfing the web or reading. This information is another small cue to help people feel connected and to give them context should they try to contact each other. The meter has four states: idle, low activity, medium activity, and high activity. [We will need to define what that means in terms of number of events per unit of time.] Walendo and Libby show high activity, Bonnie shows medium activity, Ellen shows low activity, and Jonathan, Julia and Steve are idle. The group label also has an activity meter, which gives the average activity level of everyone in the group, including those who are idle. This gives a quick measure of how many people are active (and therefore accessible). (The groups menu shows the average activity for each group, which is when this is most useful.)
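Since the spec leaves the event thresholds open, here is one hypothetical way the four meter states and the group average could be computed (all numbers below are placeholders):

```python
def meter_state(events_per_minute):
    """Map input-event frequency to the four meter states (thresholds
    are placeholders; the spec leaves them undefined)."""
    if events_per_minute <= 0:
        return "idle"
    if events_per_minute < 10:
        return "low"
    if events_per_minute < 40:
        return "medium"
    return "high"

def group_average(levels):
    """Average activity for a group, counting idle members as zero."""
    score = {"idle": 0, "low": 1, "medium": 2, "high": 3}
    return sum(score[s] for s in levels) / len(levels)
```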

If a user has awareness sounds turned on for a particular friend, then each time that friend changes from active to idle or the reverse, the user hears an audio indication. If the friend becomes active, the user hears the active sound followed by the Sound Identification (Sound ID or SID) for that person. If the friend goes idle, they hear the idle sound followed by the SID. In addition, the user gets a visual cue to indicate the change: the header bar flashes the message “{Bubname} is active” or “{Bubname} is idle” for [5] seconds.

The speaker icon in the left-hand column shows whether the current user is accepting awareness sounds about this person. (If the speaker is filled in with “sound waves” coming out, then sound is on; if it is hollow, then sound is off.) The user can click the speaker icon for any person to toggle it from on to off or the reverse. In addition, the user can always mute the entire interface by clicking the Mute button at the bottom of this screen (also available from other screens). The Mute button turns off all audio from all people, including incoming Sound Instant Messages, awareness sounds, and alerts of incoming Text Messages. The Mute half of the choice button is selected and all the sound icons become hollow, indicating that no sound is arriving. When the user unmutes by tapping the Sounds half of the choice button, all the sound icons return to the state they were in before the interface was muted. [We might decide to have a “semi-mute” state that plays a single short sound each time a message comes in but does not play the full range of sounds. This would enable people to know when a message came in even in situations where the full sounds weren’t appropriate.]
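The key requirement above is that a global mute must not destroy the per-person speaker settings. A minimal sketch of that state model (the class and names are made up):

```python
class SoundSettings:
    """Global mute that preserves each bub's individual speaker toggle."""

    def __init__(self):
        self.per_bub = {}   # bub name -> speaker on (True) / off (False)
        self.muted = False  # the global Mute button

    def toggle(self, bub):
        self.per_bub[bub] = not self.per_bub.get(bub, True)

    def should_play(self, bub):
        # Unmuting restores exactly the per-bub state saved in per_bub.
        return not self.muted and self.per_bub.get(bub, True)
```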

When sound events arrive when the system is muted, the user sees the visual feedback that ordinarily accompanies these sounds, but they don’t hear the alerts. If someone tries to send a friend a Sound Instant Message when the friend’s sound is muted, they receive an alert telling them so. [Need to show what that looks like.] (This is different from the case where someone is blocking sound messages from another person, in which case the intended recipient gets no visual notification. In either case, the sender is told that the recipient has sounds turned off.)

Users also have the ability to block others’ access to their activity (active/idle) information and/or their location information, in which case their listing would change. Figure P2 shows the case where Chris has blocked Walendo’s (the current user) access to both his location and activity information. As a result, his name appears in regular font with no active/idle time, no location icon, and no speaker (since it’s not possible to enable awareness sounds). If Chris blocked Walendo’s access to his activity information but not his location information, then his listing on Walendo’s screen would not have the time information or the speaker icon, and he would always be listed in regular font, never switching to bold. If Chris blocked Walendo’s access to his location information but not his activity information, then the location icon would not be presented, but the active/idle time and the speaker button would. See the Editing a Bub section for an explanation of how to allow or disallow access for another user.

If the Hubbub client loses its connection to the server for more than 30 seconds, a visual indication alerts the user [need to figure out what this is]. Once Hubbub reconnects, the indication shows that they are connected again.
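The 30-second rule is a simple watchdog; a sketch of the logic (the class is hypothetical, and it takes explicit timestamps so the behavior is easy to check):

```python
class ConnectionMonitor:
    """Flag the client as disconnected when no server traffic has been
    seen for more than 30 seconds (timestamps are passed in explicitly)."""
    TIMEOUT = 30.0

    def __init__(self, now):
        self.last_seen = now

    def heartbeat(self, now):
        self.last_seen = now

    def is_connected(self, now):
        return (now - self.last_seen) <= self.TIMEOUT
```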

Hubbub Overview

Hubbub is a wireless and desktop application that supports awareness and very lightweight communication among people who are distributed and/or on the go. It runs on a Palm V connected to the network via modem and on a PC desktop. Hubbub makes extensive use of sounds to enable people to hear (as well as see) when other people become active or idle on their computers or Palms. This gives them a background level of awareness of who’s doing what and when someone might be available for an interaction. Hubbub also lets people send text instant messages to each other between Palms and/or desktops. And it supports a novel concept of “Sound Instant Messages,” short “earcons,” or strings of notes, with simple meanings that help people coordinate or simply keep them in touch. Examples of such messages are “hi,” “Want to go to lunch?” or “Ready for a video?” Sounds are also used to identify people; everyone chooses a “Sound ID,” which accompanies both awareness information and sound messages. This lets people simply hear that “Bonnie says hello” or “Bonnie just became active on her work computer,” without having to look at the device.

This specification is a detailed description of the user interface design for version 1. Ideas for version 2 are noted, but are meant to indicate a direction and not a full design. This document will continue to be updated as the design evolves; each page of the document indicates the date on which it was most recently updated.