Why don't companies fix troubled concepts?

I watched your 2020 talk "Design by Concept: A New Way to Think About Software" and was struck by the Dropbox example, and by how few MIT undergrads correctly predicted how Dropbox would behave despite being frequent users. If so few users understand the behavior of the apps we use on a day-to-day basis, why aren't the companies that create these apps incentivized to fix them?


Hi Imogen, and welcome to the forum! That’s a great question. In short, I think most companies do understand how important it is for their concepts to be understandable to all their users, and they realize that if even experts don’t understand them, then they need to fix them.

In Dropbox’s case, I think the problem is quite deep, because they’ve adopted the Unix folder concept, and that’s what causes all those nasty problems. Arguably that was a reasonable decision since the concepts in Unix are stable and well understood, and widely implemented. And remember how well Apple did by adopting Unix as the basis for OS X!

But that concept turned out not to be so good for shared file systems, and Dropbox has to deal with that challenge. At this point, it would be pretty hard for them to change such a basic concept in their system, so they don’t have an easy way to fix the kinds of problems I discussed.

I think this is often the case, and a reason to design your concepts really carefully. Over time, concepts become cast in stone—so embedded in the system as a whole that they’re close to impossible to change.

Ironically, for many users, it may not matter that much that they don’t understand the concepts of an app they use, because they know enough to steer clear of cases in which really understanding what’s going on matters. I’ve talked to many computer science researchers, for example, who tell me that they only use apps like Dropbox and Google Drive in the most limited way because they’re nervous of what might go wrong. Of course, this is kind of crazy because it means that all the fancy functionality the companies develop goes unused.


this talk?

Hi Stephen, welcome to the forum! Yes, I think that must be the talk she’s referring to. Unfortunately it’s not the best recording: I was on vacation in the Berkshires, and a storm knocked out the internet in the area, so I ended up giving the talk to hundreds of people while sitting in my car outside the local library, which had the best connection I could find!


Really enjoyed it and was happy to see luminaries from the UX Design field referenced. I’m a User Experience researcher and I agree that concepts are at the root of user interface design problems. Of course, not everyone, especially on large teams, can think about or agree on concepts. Do you know of any creative ways to test concepts with users? I’m thinking that if you allow someone to use an app, and then ask them to predict how task X might be approached, you could determine whether the prediction exposed their (mis)understanding. Open-ended questions for insights, or a closed set of answers for quantitative data, perhaps. I have not tried this. Do you know of any methods, or colleagues who might have come across this ‘concept’?

Hi Ron – Welcome to the forum!

You raise a really interesting question, namely how concepts impact user testing. I haven’t done much research on this yet, but I’d say three things. First, if you modularize your design into concepts, you should be able to run user testing on the concepts independently. Second, you shouldn’t need to retest concepts that are already widely used and known to be effective; you can concentrate your testing efforts on less familiar concepts. So, for example, if you use a completely standard authentication-session concept, you’re probably wasting effort testing it extensively. Third, and perhaps most tricky, if you have some new concepts, you would want to test whether users select the appropriate concept for the task at hand.

This last point reminds me of an interesting example in Discourse itself regarding the difference between the Tag concept and the Category concept, which are subtly different. See:

This put me in mind of a different conceptual issue, around how things are identified. It is about private languages that arise in companies, whether or not they are producers of digital wares.

I have two examples. The first has to do with a system known as “ATF” (not related to arms trafficking or bootlegging) inside a corporation where I was a newcomer to the IT organization. This was the suite of programs and computers that handled sales, billing, and accounts receivable, and that was becoming obsolete in the face of new business requirements. It turns out that “ATF” stood for “Accounting Task Force,” the effort that originally gave rise to the definition and implementation of the systems.

The second might be more familiar, since it arises in a programming-language context. In the early days of commercial computer systems (say 1950–1960), each computer producer provided its own nomenclature for a variety of systems and components. At Remington Rand Univac, tape drives were Uniservos, consoles were Unitypers and Uniscopes, etc. Unitypers recorded magnetic tape, and in this case the magnetic tape was a form of metallic ribbon akin to that of a metal tape measure. There was an additional nomenclature and branding around punched cards (with 90 columns), magnetic drums, and even character codes. There was specialized language around order codes and instruction architectures. Meanwhile, at IBM, there was a different nomenclature and naming that became dominant over time and that also became increasingly parochial (e.g., with the arrival of System/360 and its operating systems) and distinct from the architectural concepts of Unix systems.

An interesting situation arose then, as now, in that participants would tend to view their chosen instruments as standards (e.g., treating a particular COBOL compiler implementation as the archetypal standard without knowledge of what the specified COBOL standard actually said).

An amusing case of this had to do with the 45-column round-holed Hollerith card being forked into the 90-column round-holed cards promoted by Remington Rand and the 80-column rectangular-holed cards promoted by IBM. This was driven by differences in electromechanical technologies and patent thickets. The Hollerith label stuck to the IBM derivative, where the 12-row coding was most similar. The 6-bit codes recorded on 7-channel magnetic tapes also encoded characters differently, and the character sets themselves differed, of course.

All of this was maybe not so troubling for those immersed in it, until worlds collided. Consider, for example, the appropriation of the reserved words “function” and sometimes “integer” and “real” with decidedly non-mathematical significance (where “procedure,” “int,” and “float” serve rather better).

It is a lot of fun on a forum when someone can’t understand a compiler error from a C-like language, we don’t realize that it is not the C-like language we think it is, and we can’t tell from the code fragment. And of course gcc and VC are not the same processors of what is presumably the same language.

We are aswim in a sea of ambiguity, clinging to ephemeral identifiers.