More than 60 years passed between the moment the notion of artificial intelligence was first proposed and studied and the technology's current capabilities. The field of Artificial Intelligence (AI) has advanced considerably over the last decade, but never more rapidly than in the last two years, when a number of factors came together in an almost ideal way and produced spectacular programs such as ChatGPT.
The European Commission requires online platforms to clearly identify content – images (photos and videos), sounds and texts – generated by artificial intelligence (AI). European Commission vice-president for values and transparency Vera Jourova made the call to the signatories of the EU’s code of best practice against disinformation, whom she met recently in Brussels. It is a new measure in the Commission’s strategy to prevent the spread of disinformation, especially online, and to ensure the protection of European values and democratic systems.
The EU executive wants to implement this measure immediately, although the biggest platforms, with over 45 million active users in the EU, will only be subject to the obligations of the new Digital Services Act (DSA) from the 25th of August. The regulation requires “the use of visible marking” to ensure that “generated or manipulated audio or video content is recognisable”.
The Code of Practice is not legally binding
The Code of Practice against Disinformation, signed in 2022, which is not legally binding, brings together around 40 organisations, including Facebook, Google, YouTube and TikTok. Interestingly, Twitter recently left the group.
“Signatories that integrate generative artificial intelligence into their services, such as Microsoft’s Bing Chat and Google’s Bard, should integrate the necessary safeguards so that these services cannot be used by malicious actors to generate disinformation,” said Vera Jourova.
On the other hand, signatories that have services likely to disseminate artificial intelligence-generated disinformation should put in place technology to recognise this content and clearly indicate it to users, the EU official added.
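To make the labelling obligation concrete, the sketch below attaches a machine-readable AI-disclosure record to a piece of content. The schema and field names are hypothetical illustrations, not the format mandated by the DSA or defined by provenance standards such as C2PA.

```python
import hashlib
import json

def label_ai_content(content_bytes: bytes, generator: str) -> dict:
    """Build a hypothetical disclosure record for AI-generated content.

    The record format here is invented for illustration; real provenance
    standards (e.g. C2PA) define their own manifest structures.
    """
    return {
        "sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties label to content
        "ai_generated": True,   # the flag a platform would surface to users
        "generator": generator, # the model or service that produced the content
    }

record = label_ai_content(b"example synthetic image bytes", "example-model-v1")
print(json.dumps(record, indent=2))
```

A platform could store such a record alongside the file and render the `ai_generated` flag as the “visible marking” the regulation requires.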
The AI Regulation prohibits “deliberate” manipulation
The European Union is currently negotiating dedicated artificial intelligence legislation that would impose transparency obligations on generative AI systems such as ChatGPT. After months of intense discussions, MEPs reached a provisional political agreement at the end of April on the world’s first Artificial Intelligence regulation. The Artificial Intelligence (AI) Act is a legislative proposal to regulate Artificial Intelligence according to its potential to cause harm, including its potential to influence social media users’ voting intentions. The text, which is likely to undergo changes, is expected to be put to a vote in the European Parliament. Its principles include human agency and oversight, technical robustness and security, privacy and data governance, transparency, social and environmental welfare, diversity, non-discrimination and fairness.
One of the most controversial topics in the negotiations concerned AI systems that have no specific purpose. The outcome – a tentative one – was to impose stricter obligations on a subcategory of general-purpose AI that includes models such as ChatGPT. It was also established that generative AI models should be designed and developed in accordance with EU law and fundamental rights, including freedom of expression.
Another politically sensitive topic was which types of AI application should be banned because they pose an unacceptable risk. This led to a proposal to ban AI tools for the general monitoring of interpersonal communication, but the proposal was abandoned. On the other hand, the ban on biometric identification software was extended: initially it applied only to real-time use, with ex-post use of such recognition software permitted only for serious crimes and with prior judicial approval.
The AI Regulation also prohibits “intentional” manipulation. The word “intentional” was the subject of debate, as opponents of this wording argued that intentionality could be difficult to prove, but it was ultimately retained. At the same time, agreement was reached that the use of AI software for emotion recognition is prohibited in the areas of law enforcement, border management, employment and education.
MEPs’ ban on predictive policing was extended from criminal to administrative offences, prompted by the Dutch child benefit scandal, in which thousands of families were wrongly prosecuted for fraud because of a flawed algorithm.
Last but not least, high-risk AI models were classified in an Annex: a model will be considered high-risk if it poses a significant risk to health, safety or fundamental rights. Likewise, AI used to manage critical infrastructure, such as energy grids or water management systems, will also be classified as high-risk if it involves a serious environmental risk.
MEPs also included additional safeguards for the process by which providers of high-risk AI models may process sensitive data, such as sexual orientation or religious beliefs, in order to detect negative bias. In particular, such data may be processed only when bias cannot be detected using synthetic, anonymised, pseudonymised or encrypted data. In addition, the assessment must take place in a controlled environment; sensitive data may not be transmitted to other parties and must be deleted after the bias assessment. Providers must also document why the data processing took place. Artificial Intelligence (AI) is considered a “future-defining technology”.
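The safeguards described above – process sensitive attributes only within a controlled step, document the justification, and delete the data afterwards – can be sketched as a workflow. The function name, record layout and metric below are assumptions for illustration, not the Act’s wording or any prescribed methodology.

```python
from collections import Counter

def bias_assessment(records, sensitive_key, outcome_key, reason):
    """Hypothetical controlled bias check: compute the positive-outcome
    rate per sensitive group, log why the processing took place, then
    discard the sensitive data, as the safeguards require."""
    totals, positives = Counter(), Counter()
    for r in records:
        group = r[sensitive_key]
        totals[group] += 1
        positives[group] += r[outcome_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    audit_log = {"reason": reason, "groups_checked": sorted(totals)}
    records.clear()  # sensitive data must be deleted after the assessment
    return rates, audit_log

data = [
    {"religion": "A", "approved": 1},
    {"religion": "A", "approved": 0},
    {"religion": "B", "approved": 1},
]
rates, log = bias_assessment(data, "religion", "approved",
                             "pre-deployment fairness check")
```

A large gap between the per-group rates would flag a potential bias; after the call, `data` is empty and only the aggregate rates and the audit log remain.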
What exactly is AI and how does it already affect our lives?

Definition of artificial intelligence
Artificial intelligence is seen as central to the digital transformation of society and has become a priority for the European Union, as reflected in its official documents. Future applications are expected to bring enormous changes to society, but AI is already present in our everyday lives. AI is the ability of a machine to mimic human functions such as reasoning, learning, planning and creativity. AI allows technical systems to perceive the environment in which they operate, process this perception and solve problems, acting to achieve a particular goal. The computer receives data (either already prepared or collected via its own sensors, such as a camera), processes it and then reacts.
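The perceive–process–react cycle just described can be sketched in a few lines. The thermostat scenario and its threshold are invented purely for illustration; real AI systems replace these hand-written rules with learned models.

```python
def perceive(sensor_reading):
    # Perception: turn raw sensor input into a structured observation.
    return {"temperature_c": sensor_reading}

def decide(observation, target_c=21.0):
    # Processing: compare the perceived state with the system's goal.
    return "heat_on" if observation["temperature_c"] < target_c else "heat_off"

def act(action):
    # Reaction: the system's effect on its environment.
    return f"actuator -> {action}"

for reading in (18.5, 23.0):
    print(act(decide(perceive(reading))))
```

Each pass through the loop is one receive–process–react step; an adaptive system would additionally adjust `decide` based on the outcomes of earlier actions.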
AI systems are able to adapt their behaviour to some extent, analysing the effects of previous actions and operating autonomously. Some AI technologies have been around for more than 50 years, but increased computing power, the availability of huge amounts of data and new algorithms have led to major advances in AI in recent years.
How is deepfake technology used for propaganda?
Today’s media is being hit hard by the problems created by Artificial Intelligence, which is “wreaking havoc in the media world”. A study by Reporters Without Borders (RSF) analysed the state of journalism in 180 countries, taking into account political, social and technological changes.
The findings are worrying. The remarkable development of artificial intelligence is creating even more problems for the media. The report says that the disinformation industry is disseminating manipulative content on a large scale, as an investigation by the Forbidden Stories consortium has shown, and that artificial intelligence does not take into account the requirements of quality journalism.
An artificial intelligence program (Midjourney version 5) that generates very high-definition images in response to verbal prompts has been feeding social media with increasingly plausible and hard-to-detect fake “photos”. Many of these posts go viral. Public interest journalism faces intense competition from misleading narratives and fake news promoted by certain media outlets, politicians and artificial intelligence software, especially in the context of the Covid-19 pandemic and, more recently, the war in Ukraine. Unfortunately, enough people are inclined to trust false information, which sometimes converges with Russian propaganda and fuels distrust in the media.