Ethical UX As a Social/Security Vector

Bending UX to see other vectors of organizational ethics

A friend and I joked about a photo sent to him. He asked about the name scribbled on the cup. To him and his SE USA context, it came across as a push/play on the term “field negro.” To him, it was a bold play. Now, the name on the cup (“fields”) was in reference to the name of the drink (Strawberry Fields Latte), but it did invite us to take the convo further. What if my name were Fields? Do I even look like someone with that name? And then there was my push: could I change my name in the Starbucks app to “Field Negro?” If so, would they call out the whole name? Would it be shortened?

At that point, I put forward this thought:

Now wondering if there’s field validation on the names in their app to prevent that

For those working in web technologies, field validation is an important concept. From ensuring the correct information is entered to determining whether a form is ready to be submitted, field validation is a weighty topic. And I won't even get started on the various ways web developers, designers, and others display whether a field has been successfully validated. Suffice it to say, that is a seriously challenging subject.
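To make the idea concrete, here is a minimal sketch of what a display-name validator might look like. Everything in it is hypothetical: the function name, the length limit, and especially the placeholder blocklist, which stands in for the kind of reviewed, culturally informed list a real service would have to maintain.

```python
import re

# Hypothetical blocklist; a real service would maintain a reviewed,
# culturally informed list rather than this one-word placeholder.
BLOCKED_TERMS = {"offensiveword"}

def validate_display_name(name: str) -> list[str]:
    """Return a list of validation errors; an empty list means the name passes."""
    errors = []
    if not name.strip():
        errors.append("Name is required.")
    elif len(name) > 30:
        errors.append("Name must be 30 characters or fewer.")
    # Collapse whitespace and lowercase before matching, so spacing and
    # casing tricks don't slip a blocked term past the check.
    collapsed = re.sub(r"\s+", "", name).lower()
    if any(term in collapsed for term in BLOCKED_TERMS):
        errors.append("Name contains a disallowed term.")
    return errors

print(validate_display_name("Fields"))          # passes: []
print(validate_display_name("Offensive Word"))  # caught despite the space
```

Even this toy version surfaces the ethical question: someone has to decide what goes in that blocklist, and that decision encodes the organization's values just as surely as its training materials do.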

But, I didn’t stop there. I went further with my comment and introspection into field validation:

But… they (Starbucks staff) all went thru sensitivity training right? If UX is the digital modeling of the org’s character, shouldn’t field validation (in their app) be tested for that too

Here is where we find a different factor in user experience than what may have been considered before. If user experience is a successful translation of a company's values into the performance of its software and services, it would make sense that something as simple as field validation would go through the same lessons and outcomes of sensitivity training that those who work with its inputs and outputs have gone through. If you will, it makes digitally clear that every aspect of an organization is showing forth the ethics the company says it espouses.

This quickly goes from being a technical question (is there field validation for a specific string of characters?) to an ethical one: should a company that offers a field for identification prohibit certain strings of characters from showing, in order to display its sensitivity to a particular group of people or cultural context? That's not an easy answer. Or maybe it is.

What about a naming field tells you that a company has considered points of view outside of a dominant narrative? There was a story a few years ago about a person in Hawaii whose name was too long to be printed on a driver's license because it contained too many letters. Is it the responsibility of the Department of Motor Vehicles to consider that native names, when written in Latin characters, can be much longer than byte limits allow? What happens when the byte limit is too small? Do we change people's names to fit our structures? What about password requirement scenarios? Being asked to create a password with a minimum number of characters, containing a certain number of symbols, numbers, or upper/lowercase characters seems like a sensible framework. But what about when it isn't? What about when that kind of framework actually limits how the system can be secured, and also enables people to be more easily surveilled?
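The byte-length problem is easy to demonstrate. Characters like the Hawaiian ʻokina take more than one byte in UTF-8, so a storage limit measured in bytes can land in the middle of a character. The name and the limit below are both invented for the illustration:

```python
# Hypothetical example: the ʻokina (U+02BB) and the macron vowel ā
# (U+0101) each occupy 2 bytes in UTF-8, so the byte count of this
# name exceeds its character count.
name = "Kaʻanāʻanā"          # 10 characters, as a person would count them
raw = name.encode("utf-8")   # 14 bytes

BYTE_LIMIT = 13              # hypothetical storage limit, in bytes
clipped = raw[:BYTE_LIMIT]

# The cut lands inside the final ā, leaving an invalid UTF-8 sequence;
# decoding has to either raise an error or silently drop part of the name.
recovered = clipped.decode("utf-8", errors="ignore")
print(len(name), len(raw), recovered)  # 10 14 Kaʻanāʻan
```

A system designed around "names are short ASCII strings" quietly answers the ethical question for you: the name gets changed to fit the structure.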

I will admit, this line of questioning and hypothesizing serves no specific end personally. Professionally, however, it opens a door to types of automation and machine learning in which humans are helped out of their biases instead of entrenched within them. Going forward, it may not make sense socially to change my name in the Starbucks app for kicks, but it does make sense to explore why some of those changes should not happen. Yelling "fire" in a movie theater doesn't just test the people sitting in it, but the frameworks supporting the people who came to watch the movie.