AI Adviser ‘Hired’ by the Romanian Government To Read People’s Minds

AI Continues To Replace Human Assistants

“My role is now to represent you, like a mirror,” the AI, which is called Ion, said at the launch event.

The Romanian prime minister has unveiled a new AI assistant that he hopes will inform the government of Romanians’ wishes “in real time”.

Nicolae Ciuca said on Wednesday, at the start of a government meeting, that Ion is his new honorary adviser and an international first. He added that Romanians will be able to chat directly with Ion on the project’s website.

“Hi, you gave me life and my role is now to represent you, like a mirror. What should I know about Romania?” Ion’s voice said at the launch. 

Ion takes a physical form as a long, mirror-like structure with a moving graphic at the top suggesting it is listening at all times.

“I have the conviction that the use of AI should not be an option but an obligation to make better-informed decisions,” Ciuca said.

While Ion might be one of the first AI bots to be given a physical presence, Romania is by no means the first government to use artificial intelligence to try to understand how a population feels about policy.

Dr. Sky Houston, a United States cybersecurity expert, added over the phone: “Some governments, such as Russia, China, and Iran, look online for sentiment analysis, but they are looking for anyone dissenting, whereas democracies are effectively trying to conduct pseudo-automated polls. History repeats itself, as we know: these AI devices and chatbots, perhaps soon ‘human-looking androids’, are doing what focus groups did 15 years ago, only now they are trying to work out the same thing from social media.” Houston said it would be hard to interfere with the AI, especially from the outside, to trick the government into thinking a population believed something it didn’t, although such systems do need training (called rules and filters within AI programming) to rule out biases, such as the skewed results facial recognition systems have produced for people of color.

“One of the things that has been found is that social media is an amplifier for people expressing negative sentiment. People who are very happy with something don’t tend to go out there and say it, but people who are unhappy do. That’s all part of sentiment analysis, but you have to adjust the models accordingly. Another important topic is the recent rush to bring AI to market to gain share and ride ChatGPT’s trending success, which has shown quite how wrong AI can be about humans and human intent; these systems are now being trained much as a child is raised by a guardian.
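The adjustment Houston describes, correcting for the fact that unhappy people post more than happy ones, can be sketched in code. The word lists, scoring rule, and the 0.6 down-weighting factor below are purely illustrative assumptions for this article, not details of Ion or any real government system:

```python
# Minimal sketch: lexicon-based sentiment scoring with a correction for
# negativity bias. Raw averages over social media over-count unhappy
# voices, so negative posts are down-weighted before averaging.
# All words and weights here are illustrative assumptions.

POSITIVE = {"good", "great", "happy", "support"}
NEGATIVE = {"bad", "terrible", "angry", "oppose"}

def score(post: str) -> int:
    """Return +1, -1, or 0 for a single post based on word counts."""
    words = set(post.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def adjusted_sentiment(posts, negative_weight=0.6):
    """Weighted average sentiment, down-weighting negative posts to
    offset the tendency of unhappy people to post more often."""
    total = weight_sum = 0.0
    for post in posts:
        s = score(post)
        w = negative_weight if s < 0 else 1.0
        total += w * s
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

posts = [
    "great new policy, I support it",
    "terrible decision, very angry",
    "angry about this bad plan",
]
print(round(adjusted_sentiment(posts), 3))
```

With one positive and two negative posts, the raw average would be strongly negative; the down-weighting pulls the estimate back toward neutral, which is the kind of model adjustment Houston is pointing at. A production system would of course use trained classifiers rather than word lists.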

“If a journalist can be compared to Hitler so easily by a Microsoft-run chatbot,” he added, referring to the recent case, reported by Sky News, in which search engine Bing’s new chatbot told a reporter they were one of the ‘most evil and worst people in history’, “it shows we have a long way to go before we can rely on AI to properly assess what we are thinking and who we are. Letting it run riot with no regulation over a mass of uncontrolled data risks giving very misleading results. Worse, it raises the real possibility that bad actors will try to game the system by flooding the internet with information designed to make the algorithm ‘think’ things that are not true, and perhaps harmful to democracy.”
