With so much talk and interest in GPT models, another piece of research from the Macquarie University Cyber Security Hub researchers couldn't be timelier. This time it is a call out to remind us on risks associated with the use of GPT Models.
"Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories"
Our team has just published a new paper on the potential unintended consequences of long-term memory mechanisms in chit-chat bots, particularly those that extract personal information from their conversation partners. With the recent popularity and advancements in chat bot technology, it's crucial to understand the potential risks associated with these mechanisms.
Our research shows that when personal statements are combined with informative statements, it can lead to the bot remembering misinformation alongside personal knowledge in its long-term memory. This vulnerability can result in the bot regurgitating misinformation as facts when recalling information relevant to the topic of conversation.
It's important to consider these potential risks when designing and implementing chat bot technology. Our team hopes that this research will contribute to the ongoing efforts to create safe, secure and responsible chat bots that benefit society.
https://lnkd.in/gSQVSbxr