Conservative activist Robby Starbuck has taken Google to court, alleging that the tech giant's artificial intelligence systems produced "outrageously false" information about him, including claims that he is a child rapist and a serial sexual abuser.
Starbuck's lawsuit claims that Google's AI systems, specifically its Bard large language model, generated defamatory statements in response to user queries, which were then delivered to millions of users. The plaintiff said he was shocked to learn in December 2023 that Bard had falsely connected him with white nationalist Richard Spencer and disseminated false sexual assault allegations against him.
Google has responded by stating that most of the claims were related to mistaken "hallucinations" from its Bard model, which it worked to address in 2023. However, Starbuck argues that this is no excuse for the harm caused by such misinformation, particularly when it can be used to target individuals with false and threatening accusations.
This lawsuit comes as concerns over AI-generated content continue to grow. Google's Veo 3 AI video generator has been criticized for allowing users to create deceptive videos of news events, highlighting the ease with which misinformation can be spread through these technologies.
Starbuck is seeking at least $15m in damages from Google and has previously taken another tech giant, Meta Platforms, to court over similar allegations. The conservative activist hopes that his lawsuit will help ensure that AI systems are developed and used in a way that prioritizes transparency and accuracy, rather than being used to harm individuals with false information.
In recent months, there have been concerns about the potential for AI-generated content to be used to target and harass individuals online. The assassination of conservative activist Charlie Kirk has highlighted these risks and underscored the need for greater accountability from tech companies when it comes to ensuring the accuracy and safety of their products.