If AI chatbots are sentient, then they’re squirrels too


In brief: No, AI chatbots are not sentient.

As soon as the story of a Google engineer who went public with what he claimed was a sentient language model went viral, several publications chimed in to say he was wrong.

The debate over whether the company’s LaMDA chatbot is sentient, or has a soul, isn’t a very good one, simply because it’s too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words and which ones are most likely to appear next to each other.
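For a rough intuition of what “learning which words appear next to each other” means, here is a toy sketch using raw bigram counts. Real models like LaMDA learn these statistics with neural networks over vastly more text, so treat this purely as an illustration:

```python
# Toy next-word prediction from bigram counts. Real LLMs learn these
# statistics with billions of neural-network parameters, not raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word seen most often after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- it follows "the" most often here
```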

It seems quite intelligent and is sometimes able to answer questions correctly. But the model knows nothing of what it is saying, and has no real understanding of language or anything else. Language models behave randomly. Ask LaMDA if it has feelings and it might say yes or no. Ask it if it’s a squirrel, and it might likewise say yes or no. Is it possible that AI chatbots are actually squirrels?
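That randomness is just sampling: the model assigns probabilities to possible next tokens and draws from them, so the same prompt can yield opposite answers. A minimal sketch, with an entirely made-up probability:

```python
# Why the same question gets different answers: the model samples from a
# probability distribution over tokens rather than always taking the top one.
# The 0.6 here is invented for illustration, not anything LaMDA reports.
import random

def sample_answer(p_yes: float) -> str:
    """Sample 'yes' or 'no' according to a hypothetical model probability."""
    return "yes" if random.random() < p_yes else "no"

# Ask the same question five times; the answers vary run to run.
for _ in range(5):
    print("Do you have feelings?", sample_answer(p_yes=0.6))
```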

FTC Sounds Alarm on Use of AI for Content Moderation

AI is changing the internet. Realistic photos are used as profile pictures on fake social media accounts, deepfake porn videos of women are circulating, and algorithmically generated images and text are being uploaded everywhere.

Experts have warned that these capabilities can increase the risk of fraud, bots, misinformation, harassment, and manipulation. Platforms are increasingly turning to AI algorithms to automatically detect and remove bad content.
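In practice, “AI for content moderation” usually means a classifier trained on labeled examples. Purely as an illustration, with a made-up four-post dataset and toy labels, a minimal scikit-learn sketch might look like this:

```python
# A minimal sketch of automated content moderation as a text classifier.
# The posts and labels are invented; production systems are vastly larger
# and, as the FTC notes, still make inaccurate and biased calls.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "buy cheap pills now",
    "lovely sunset photo",
    "send money to claim prize",
    "my cat learned a trick",
]
labels = [1, 0, 1, 0]  # 1 = spam/scam, 0 = benign (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["claim your prize now"]))  # likely flagged as spam
```

Even this toy version hints at the FTC’s worry: the model only knows the patterns in its training labels, so anything biased or missing there carries straight through to what gets flagged.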

Now, the FTC warns that these methods could make the problem worse. “Our report underscores that no one should view AI as the solution to the spread of harmful online content,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement.

Unfortunately, the technology can be “inaccurate, biased and discriminatory by design.” “Combating online harm requires a far-reaching societal effort, not an overly optimistic belief that new technologies, which can be both useful and dangerous, will rid us of these problems,” Levine said.

Spotify grabs a deepfake voice outfit

Audio streaming giant Spotify has acquired Sonantic, a London-based startup focused on building AI software capable of generating entirely synthetic voices.

Sonantic’s technology has been used for games and in Hollywood films, helping to give actor Val Kilmer a voice in Top Gun: Maverick. Kilmer played Iceman in the action film; his lines were spoken by a machine due to speech difficulties after battling throat cancer.

Now, the same technology appears to be making its way to Spotify. The obvious application would be using AI voices to narrate audiobooks. Spotify, after all, acquired Findaway, an audiobook platform, in November last year. It will be interesting to see whether listeners will be able to customize how their auto-narrators sound. Maybe there will be different voices for reading children’s books aloud versus horror stories.

“We’re really excited about the potential to bring Sonantic’s AI voice technology to the Spotify platform and create new experiences for our users,” Ziad Sultan, Spotify’s VP of Personalization, said in a statement. “This integration will allow us to engage users in a new and even more personalized way,” he hinted.

TSA is testing artificial intelligence software to automatically scan luggage

The US Transportation Security Administration will test whether computer vision software can automatically scan baggage for objects that look strange or are not allowed on flights.

The test will take place in a laboratory and is not yet ready for real airports. The software works with the existing 3D computed tomography (CT) imaging that TSA agents already use to peek inside people’s bags at security checkpoints. If officers see anything suspicious, they set the luggage aside and search it.

AI algorithms can automate part of this process; they can identify objects and flag instances where they detect particular items.
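As a sketch of what that flagging step could look like, here is a 2D example using an off-the-shelf torchvision detector. The watch list, the confidence threshold, and the random stand-in image are all hypothetical, and the real system works on 3D CT scans with models that haven’t been made public:

```python
# Sketch of the flagging step: run a detector and report detections whose
# labels fall in a prohibited set. This 2D torchvision example only
# illustrates the idea; the TSA/Pangiam system works on 3D CT scans.
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]  # COCO class names

PROHIBITED = {"knife", "scissors"}  # hypothetical watch list

image = torch.rand(3, 480, 640)  # random stand-in for a scanned bag image
with torch.no_grad():
    result = model([image])[0]

for label_idx, score in zip(result["labels"].tolist(), result["scores"].tolist()):
    name = labels[label_idx]
    if name in PROHIBITED and score > 0.5:
        print(f"Flag for manual search: {name} ({score:.0%} confidence)")
```

With a random tensor in place of a real scan, nothing will usually clear the threshold; the point is only the shape of the pipeline: detect, look up the label, flag for manual search.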

“As the TSA and other security agencies embrace CT, this application of AI represents a potentially transformative leap in aviation security, making air travel safer and more consistent, while enabling TSA’s highly trained officers to focus on the bags that pose the greatest risk,” said Alexis Long, product director at Pangiam, the technology company working with the administration.

“Our goal is to use AI and computer vision technologies to enhance security by providing the TSA and security officers with powerful tools to detect prohibited items that may pose a threat to aviation security. This is an important step toward setting a new security standard with global implications.” ®
