
“Trust is a very human thing.”
Mind the Product recently released an article outlining how to build trust in AI-powered products. The authors propose a strategy akin to tip-toeing, or baby-stepping. The notion is straightforward – start with bite-sized features that users can comfortably test and digest. Over time, as these features amalgamate into a cohesive system of products, trust in AI will be established.
While I’m completely on board with the article’s premise and its nifty development tips, including micro-touchpoints, regulatory compliance, and data control, there’s one tip I want to spotlight: the call for quantitative feedback loops.
I’d argue for robust qualitative feedback and thick description: an enrichment that adds necessary layers to our ethical understanding of our products.
As the swift leaps of generative artificial intelligence have cast a shadow of uncertainty over the future of work and human creativity, user sentiments in relation to ethics are sorely needed. We’re seeing a rise in AI anxiety. Workers of all ages see these rapid advances and experience fear for the future – which is not what we want our products to inspire. On top of the possibility of human obsolescence, concerns include online data privacy protections, job loss protections, and student cheating.
It should come as no surprise that these anxieties are a driving factor in the entry barriers, adoption barriers, and scale barriers AI-powered products are facing in the market.
The kind of baby steps proposed by Mind the Product, at least for B2B systems, are ethically oriented around a framework of transparency, consistency, and impact understanding with, by, and for users. These components build what they call a “staircase of trust” by gradually introducing AI-powered features. Practically speaking, this means starting small and allowing users to learn before expanding to more sophisticated products. But the premise of progressive introduction of AI and its development hinges on quantitative feedback data.
No matter how well thought out an AI-powered rollout is for a given product (we’re doing this in my team right now too!), public hype and anxiety remain the backdrop. Even before ChatGPT became a technology known to your average household, the World Economic Forum found that 60% of adults in 28 countries thought artificial intelligence would profoundly change life within half a decade. They weren’t wrong, but the hype surrounding AI is akin to a wildfire spreading across our social media feeds and kitchen table discussions. Unfortunately, these perceptions all too often obscure the realities and limitations of these systems.
Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum, said: “In order to trust artificial intelligence, people must know and understand exactly what AI is, what it’s doing, and its impact. Leaders and companies must make transparent and trustworthy AI a priority as they implement this technology.”
I don’t believe a product development process that lacks qualitative research is ethical in this space.
What caught my eye was the suggestion to map people’s concerns – both users’ and stakeholders’ – to tangible risks. Perhaps it’s because I’m reading a lot of Wilfrid Sellars at the moment, but this seems to me like a practical method for navigating the trenches between the manifest image and the scientific image as they relate to artificial intelligence.
Wilfrid Sellars emphasized how language and conceptual frameworks shape our views of the world and how it works. He critically examined how technology, and the perception of it as progress, interacted with our belief systems about how the world works. A framework he’s famous for is the distinction between the manifest image and the scientific image: the former is how we believe or perceive things to be, and the latter is how they technically or truly function.
If we apply Sellars’ framework to artificial intelligence, we could suggest that the “manifest image” is the everyday understanding of AI held by the public and users of a product. The “scientific image,” then, encapsulates the intricate technical processes, algorithms, and LLMs that make AI possible. Due to its specialized nature, the scientific image concerning AI is often opaque to people without a background in computing.
The trench between these two ideas is wide.
Just because developers have a more scientific understanding of how AI works doesn’t excuse us from engagement with the manifest image created by AI’s introduction. There’s a broader philosophical and ethical discussion to be had on the role companies (and product teams) have in influencing the public’s images of AI – manifest or scientific. For developers, it might seem tedious or feel exhausting to explain why certain fears are unwarranted. Yet, the tension between the manifest and scientific images is where AI adoption faces its hurdles.
Rather than trying to merge two distinct languages, we need to facilitate a sense of familiarity between them. Users harbor preconceived notions and concerns that may yield practical ethical insights for developers if mapped to potential tangible risks. While some level of friction and discomfort from the public is reasonable with any innovation – especially those that appear to “replace” established processes (or jobs) – we should maintain investments in these research areas and critical reflection.
Qualitative UXR, especially thick description, can help us understand this – and identify new opportunities for AI along the way. Qualitative data enables us to have both an ethical development practice and a worthwhile product discovery process.
Centering first-hand perspectives in real-world scenarios can identify:
- Potential biases that exist in the data used to train AI models
- Discriminatory outcomes or unintended consequences
- User expectations of data privacy and transparency
- Necessary elements of an accessible and trustworthy UI
A Sellarsian framework can facilitate these qualitative critical reflections, bridging public assumptions and our products – products that are partially responsible for the public’s anxiety in the first place. However we engage in a reflective development practice, our teams must be enthusiastic about transparently communicating the capabilities and limitations of our AI products.
We’re creating new things. The hype may indeed be an exaggerated manifestation within the manifest image of unrealistic expectations, but it may also raise exactly the right ethical considerations.