
Building fair AI models through design patterns

12 July 2018

Recently, I was recognised at the BIBA Insurance Forum for highlighting algorithmic discrimination in risk models that inadvertently led to inequitable outcomes. I also shared my work on addressing this issue and creating fairer models for risk assessment.

Last year, I noticed something troubling: brokers in certain regions, particularly Birmingham and Bradford, were struggling to provide competitive quotes. A closer examination revealed that our algorithms were incorrectly tagging some clients as high-risk, often because of name-based biases embedded in the risk models. Clients whose names resembled entries on sanctions lists faced inflated premiums, a bias that disproportionately affected communities with South Asian backgrounds.
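To make the failure mode concrete, here's a minimal sketch of how a naive screening rule of this kind can misfire. It assumes fuzzy string matching against a sanctions list; the names, threshold, and helper functions are illustrative, not our production code.

```python
from difflib import SequenceMatcher

# Illustrative only: the list, threshold, and rule are hypothetical,
# not the production risk model.
SANCTIONS_LIST = ["Ali Hassan", "Mohammed Khan"]
MATCH_THRESHOLD = 0.8  # similarity above which a client gets flagged

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def naive_risk_flag(client_name: str) -> bool:
    # Flags anyone whose name merely *resembles* a listed name, so common
    # names in some communities get flagged far more often than others.
    return any(name_similarity(client_name, listed) >= MATCH_THRESHOLD
               for listed in SANCTIONS_LIST)

print(naive_risk_flag("Aliya Hassan"))  # True: flagged on name similarity alone
```

The flag has nothing to do with the client's actual risk; it is purely an artefact of string similarity.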

I collaborated closely with our data science team to address these misclassifications. By examining each stage of the insurance quoting process and the data points driving these models, we questioned every decision point, investigating how even minor design choices could lead to inequitable outcomes. Out of this process, three key design patterns emerged to embed transparency and flexibility directly into the system:

Graceful Friction

Our algorithms needed thoughtful checkpoints—what we termed “graceful friction.” By introducing moments where brokers were prompted to review flagged data, we created a natural pause, giving human judgment a chance to supplement machine classification. These checkpoints didn’t slow brokers down but instead fostered understanding and critical evaluation of the AI’s decisions.
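As a rough illustration, a checkpoint like this can be as simple as routing a flagged quote to a human before anything goes out. The quote structure, flag reason, and review callback below are hypothetical; the point is only that a flagged quote pauses for a broker's decision rather than flowing straight through.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Quote:
    client_name: str
    premium: float
    flagged: bool = False    # set upstream when the model's classification is uncertain
    flag_reason: str = ""

def process_quote(quote: Quote, broker_review: Callable[[Quote], Quote]) -> Quote:
    """Graceful friction: flagged quotes pause for human judgment
    instead of going straight out to the client."""
    if quote.flagged:
        # The broker sees why the quote was flagged and either confirms
        # or overrides the machine's classification.
        return broker_review(quote)
    return quote

# Hypothetical review step: a broker clears a flag that rests on a name match alone.
def review(quote: Quote) -> Quote:
    if quote.flag_reason == "name_match_only":
        quote.flagged = False
    return quote

reviewed = process_quote(
    Quote("A. Hassan", premium=1200.0, flagged=True, flag_reason="name_match_only"),
    review,
)
print(reviewed.flagged)  # False: human judgment supplemented the machine's classification
```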

Data Provenance

Understanding the origin and journey of data is essential for making informed adjustments. With data provenance, brokers could trace how each piece of information contributed to a client’s risk score. This transparency clarified the basis of classifications and empowered brokers to make fairer, client-centered adjustments when necessary.
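For illustration, assume a simple weighted-sum risk score (real pricing models are more involved). A provenance trail can then be as little as recording each feature's contribution alongside the final number; the feature names and weights below are made up.

```python
from typing import Dict, List, Tuple

def score_with_provenance(features: Dict[str, float],
                          weights: Dict[str, float]) -> Tuple[float, List[Tuple[str, float]]]:
    """Return the risk score together with each feature's contribution,
    so a broker can trace exactly what drove the classification."""
    contributions = [(name, weights.get(name, 0.0) * value)
                     for name, value in features.items()]
    score = sum(c for _, c in contributions)
    # Largest contributions first, so the dominant drivers surface at the top.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    return score, contributions

# Hypothetical feature values and weights for a single client.
score, trail = score_with_provenance(
    {"claims_history": 0.2, "postcode_band": 0.5, "sanctions_name_match": 0.9},
    {"claims_history": 1.0, "postcode_band": 0.8, "sanctions_name_match": 2.5},
)
for feature, contribution in trail:
    print(f"{feature}: {contribution:+.2f}")  # the name match dominates the score
```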

Design Interventions Through Warnings

I introduced warnings as subtle yet effective design interventions. These warnings flagged potential misclassifications, prompting brokers to review and, if necessary, adjust the AI’s risk assessments. This approach not only gave brokers a measure of control but also improved the model’s accountability, ensuring that no single factor disproportionately influenced a client’s premium.
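Building on the provenance sketch above, here's a minimal version of one such warning rule: it alerts the broker whenever a single factor accounts for more than a set share of the score. The 50% threshold and the wording are illustrative.

```python
from typing import Dict, List

def dominance_warnings(contributions: Dict[str, float],
                       threshold: float = 0.5) -> List[str]:
    """Warn the broker when any single factor accounts for more than
    `threshold` of the total risk score (the threshold is illustrative)."""
    total = sum(abs(v) for v in contributions.values()) or 1.0
    return [
        f"'{name}' drives {abs(value) / total:.0%} of this risk score - please review"
        for name, value in contributions.items()
        if abs(value) / total > threshold
    ]

# Contributions taken from the provenance sketch above.
for warning in dominance_warnings({"claims_history": 0.2,
                                   "postcode_band": 0.4,
                                   "sanctions_name_match": 2.25}):
    print(warning)  # 'sanctions_name_match' drives 79% of this risk score - please review
```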

These patterns weren’t mere tweaks; they restored agency to brokers, enabling them to address biases actively and foster fairer outcomes.

Algorithms play a central role in setting premiums and assessing risk, a task that’s critical, complex, and often opaque. Ensuring fairness in these systems requires a commitment to transparency and adaptability, so they can keep learning and growing. This was never about “fixing” bias once and moving on; it was about creating a dynamic process in which the AI evolves to better understand the communities it serves.

Within just four months, brokers using these tools managed to sell £1.2 million in premiums, showcasing both the ethical and business benefits of an inclusive design approach.

So here’s the question we should all be asking: In a world increasingly driven by algorithms, how do we keep sight of the individual stories behind each data point?