Race is quite a topic, but you can’t really talk about diversity or inclusion without talking about racism, can you?
Perhaps the best entry points to learn more about racism are White Fragility and How to Be an Antiracist, while the article How to See Race provides some critical insights into the way we think about, and indeed, see race.
TL;DR: we can’t really see it.
Race is incredibly tenacious and unforgiving, a source of grave inequality and injustice. Yet over time, racial categories evolve and shift.
To really grasp race, we must accept a double paradox. The first one is a truism of antiracist educators: we can see race, but it’s not real. The second is stranger: race has real consequences, but we can’t see it with the naked eye.
And I get excited whenever there’s a mention of “power”:
Race is a power relationship; racial categories are not about interesting cultural or physical differences, but about putting other people into groups in order to dominate, exploit and attack them. Fundamentally, race makes power visible by assigning it to physical bodies.
The most important proposition, though, is that race is neither the factual input nor the factual output, but an effect:
Race does not exist as a matter of biological fact, but only as a consequence of a process of racialisation.
And maybe sometimes (OK, almost all the time), the media get lazy by showing just what we can see, instead of exposing what we need to see but is invisible:
The most powerful racial category is often invisible: whiteness. The benefit of being in power is that whites can imagine that they are the norm and that only other people have race.
And the issue is exactly about what we see and what we don’t:
Genetic inheritance isn’t what matters. What we literally see is shaped by politics…
That we think we see race naturally, when in fact it’s socially constructed, is the third eye through which we see the world.
What I don’t quite agree with, though, is the conclusion, where the author says:
Getting rid of racism requires clarity about the nature of the enemy. The way to defeat white supremacy is to destroy it.
Sometimes we can’t and will never get the clarity about the nature of things. Sometimes we won’t be able to destroy things completely. Sometimes pursuing the purity of destruction is an obsession with an ideal. And sometimes, as some Chinese or Borg would probably tell you, the way to defeat something is to assimilate the people who hold on to that thing — after the assimilation, you may not be able to say for sure whether something you hate has been destroyed, but you might call it progress when you rarely see it ever again.
The Three Levels of Accessibility provides a neat framework for addressing accessibility in products and services.
Modeling spaces of Accessibility provides another useful framework by looking at factors like agency, autonomy, capability, and capacity.
Design as Participation, Designer as Participant
In a thought-provoking article, Design as Participation, the author explores the notion of design and role of designers:
…a question emerged about designers: This new generation of designers that work with complex adaptive systems. Why are they so much more humble than their predecessors who designed, you know, stuff?
The answer is another question, a hypothesis. The hypothesis is that most designers that are deliberately working with complex adaptive systems cannot help but be humbled by them.
The designers of complex adaptive systems are not strictly designing systems themselves. They are hinting those systems towards anticipated outcomes, from an array of existing interrelated systems. These are designers that do not understand themselves to be in the center of the system. Rather, they understand themselves to be participants, shaping the systems that interact with other forces, ideas, events and other designers.
At the centre of the exploration is the notion of “user.” The problem is:
When designers center around the user, where do the needs and desires of the other actors in the system go? The lens of the user obscures the view of the ecosystems it affects.
For users, this is what it means to be at the center: to be unaware of anything outside it. User-Centric Design means obscuring more than it surfaces.
The user made perfect sense in the context in which it was originally defined: Human-Computer Interaction.
But we are no longer just using computers. We are using computers to use the world.
And that’s where participation comes in:
Designing for participation is different than designing for use, in any case.
To some, the role of designer should be “to create a context for participation.”
When the methodologies of design and science infect one another […] design is not just a framework for participants, but something that is also, itself, participating.
And when designer also becomes a participant:
The designer is one of many influences and directives in the system with their own hopes and plans.
And to go even further:
…we are not only designing for participation, but that design is a fundamentally participatory act, engaging systems that extend further than the constraints of individual (or even human) activity and imagination.
This is design as an activity that doesn’t place the designer or the user in the center.
And finally, the beautiful conclusion:
We can build software to eat the world, or software to feed it. And if we are going to feed it, it will require a different approach to design, one which optimizes for a different type of growth, and one that draws upon—and rewards—the humility of the designers who participate within it.
AI and Conversations
The article “No. You Still Cannot Have A Real Conversation With a Chatbot” provides some interesting clues about why:
People have a great deal of knowledge about the world that is used to understand natural language by inferring the implied meaning of natural language utterances.
Which is called “world knowledge.” And unfortunately (for AI), understanding language requires world knowledge.
People take language understanding for granted. To understand natural language, people must make use of all their world knowledge and reason based on that world knowledge. Despite the fantastic advances in artificial intelligence, we still have no idea how to build this world knowledge and these reasoning capabilities into computers. We also have no idea how to teach computers to acquire this knowledge on their own.
In other news, Dispelling Some Common Myths About Conversational AI, well, dispels some common myths about conversational AI, including:
- Conversational AI is just talking to machines
- Pre-trained bots are less work
- Automating current workflows is optimal
- You can upload all your organization’s stored data and proceed to launch
- Your goal is to mimic humans
- This stuff is easy
Not sure the last one is a myth, though. That’s merely stupidity.
The Ideal Interface to AI is…
How to collaborate with AIs sharply highlights the most important perspective on how we think about applying AI:
So imagine this: you have a text editor, and your team is there too. Your colleagues are making suggestions, answering questions, filling in gaps, and being sounding boards.
But one of the team is an AI. And they appear not as a special interface element like a hovering window or a special sidebar or a squiggly underline, but in comments, chats, and suggested edits, alongside everyone else.
I think that would feel truly interactive and collaborative, and it opens the door to different styles of assistant: ones that provide creative prompts, ones that have the facts and figures at their fingertips, ones that are brilliant at wordsmithing the prose for different audiences.
The ideal interface to AIs is the team.
When it comes to AI, augmenting us will happen well before replacing us (if that ever happens).
The Rise and Fall of…
Important themes and lessons identified:
- AI, algorithms and automated decision-making are a significant new frontier in human rights, due process and access to justice.
- Simple solutions and complex problems.
- AI and algorithms often embed, and obscure, important legal and policy choices.
- There are data issues and choices at every stage of AI and algorithmic decision-making.
- AI and algorithmic systems make predictions; they do not set policy, make legal rules or decisions.
- Legal protections regarding disclosure, accountability, equality and due process for AI and algorithmic systems are often inadequate.
- The use of AI and algorithms in criminal proceedings raises important access-to-justice issues.
- The criticisms of AI and algorithms are legitimate, but there are also opportunities and emerging best practices.
- There must be broad participation in the design, development and deployment of these systems.
- Comprehensive law reform is needed.
- Incremental reforms, deliberately.
But of course, the most important one is highlighted above. And if you think digital transformation is in any way easier, think again.