Solution with responsibility.
When AI experts begin solutioning, we often start with the question, “Can we solve this problem with AI?”
Because Insight is committed to upholding responsible AI practices, we challenge the question, “Can we?” with a more important question: “Should we?”
The use of AI in decision-making raises important ethical questions, such as how to ensure AI systems are fair and unbiased, and how to protect people's safety, privacy and security.
Thinking outside the box is sometimes an exercise in knowing when to use AI and when to lean on alternative methods for decision support.
For example, mortgage loan approvals are a common use case for AI, but historical lending data often carries discriminatory biases that are difficult to detect systematically in AI models. Instead of building a stand-alone AI model, we might engineer a “human-in-the-loop” step into the final decision support system, so that a loan officer can review the historical factors that influenced the AI-produced approval or denial before the decision is made. Even state-of-the-art AI models may lack the transparency or sophistication needed to protect us from unfavorable outcomes.
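To make the idea concrete, here is a minimal, purely illustrative sketch of such a human-in-the-loop gate. Everything in it is hypothetical — the `score_application` model, its debt-to-income rule and threshold, and the `officer_review` callback are invented for this example and are not Insight's implementation:

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    recommendation: str     # "approve" or "deny" suggested by the model
    top_factors: list[str]  # factors that most influenced the recommendation

def score_application(application: dict) -> ModelOutput:
    # Stand-in for a real trained model: a trivial debt-to-income rule.
    dti = application["monthly_debt"] / application["monthly_income"]
    recommendation = "approve" if dti < 0.4 else "deny"
    return ModelOutput(recommendation, [f"debt_to_income={dti:.2f}"])

def final_decision(application: dict, officer_review) -> str:
    """The model only recommends; a loan officer sees the influential
    factors and makes the final call (the human-in-the-loop step)."""
    output = score_application(application)
    return officer_review(output.recommendation, output.top_factors)

# Example: the officer reviews the factors and overrides a model denial.
decision = final_decision(
    {"monthly_debt": 2000, "monthly_income": 4500},
    officer_review=lambda rec, factors: "approve",
)
print(decision)  # approve
```

The design point is that the model's output is a recommendation plus its supporting factors, never the final decision — the human reviewer always sits between the model and the outcome.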