Thursday, August 03, 2023

Not Quite Ready 

After the introduction of AI-driven chatbots, the limits of the software are showing.  The problem is hallucination: a chatbot makes up facts and presents them as truth.  A communicator dare not use these systems without checking their responses; those who do anyway are setting themselves up for embarrassment.  There has already been an example of a lawyer who submitted court papers containing fraudulent citations generated by a chatbot.  What to do?  Be wary of using chatbots until developers figure out how to eliminate hallucination.  That is a blow to Microsoft, for example, which went all-in on a chatbot in its search engine.  Chatbots will be a great help when they stick to the facts rather than making them up.
