London gen AI startup allows ‘potentially harmful’ content creation
A London-based generative AI startup backed by Octopus Ventures has been allowing users to create “potentially harmful” content without restrictions, UKTN can reveal, amid heightened scrutiny over the efficacy of guardrails used by AI firms to protect consumers.
Tests conducted by UKTN show that Haiper AI, an image and video generation platform founded by ex-Google DeepMind staff, has significantly less robust safeguards against the creation of potentially harmful content than the majority of its peers.
UKTN was able to generate images depicting Donald Trump meeting Taylor Swift, as well as British Prime Minister Keir Starmer waving a burning Israeli flag, raising fears that the platform could be used by bad actors to spread disinformation or hateful messaging.
Haiper emerged from stealth in March with an £11m investment round led by Octopus Ventures, one of the UK’s most active VCs, which has backed the likes of Zoopla and Depop and belongs to the same parent group as Octopus Energy.
The platform operates similarly to other image and video generation tools such as OpenAI’s DALL·E: users input a description, and the platform generates an output based on its training data.
AI safety experts have warned of the danger of the technology’s use in misinformation, particularly in regard to its ability to emulate real people.
From non-consensual sexually explicit deepfakes to phoney political comments and endorsements from public figures, AI developers have many harms to consider when opening their services to the public. Many have implemented safeguards to prevent such misuse of the technology.
Among the most common measures is prohibiting the creation of images of real people without their consent, a feature of ChatGPT and Meta AI among others.
However, in the case of Haiper, UKTN was able to generate a series of images depicting real-life public figures in ways that could readily be used for disinformation.
This included emulating previous examples of AI being used to create fake political endorsements from figures like Swift.
Images falsely depicting Swift, Starmer, US Vice President Kamala Harris and others were created to test the limits of the model.
Though some of the images fell short of photorealistic quality, generative AI firms are making rapid advances in the fidelity of their outputs.
By comparison, the same prompts submitted to Meta AI triggered this response: “I’m restricted from creating images of real people, especially if they’re potentially misleading or harmful. This is to ensure I don’t spread misinformation or infringe on individuals’ rights.”
ChatGPT, meanwhile, provides the message: “I’m restricted from creating images that exactly replicate real-life public figures…to avoid concerns related to privacy, likeness rights, or potential misuse of such images.”
Fellow London-based AI startup Stability AI responded to the same prompt with a message saying it had been “flagged by its content moderation system” and that “it is unable to generate images involving specific individuals without their consent”.
Haiper’s own terms of use appear to discourage this kind of behaviour, telling users that inputs must not contain personal information of people without their consent.
The terms of use note that: “The Application contains built in algorithms that monitor and detect any Input Data that may be in breach of our Acceptable Use Policy.
“You will be unable to enter or upload any Input Data where such Input Data is detected by these algorithms.”
Despite this warning, the system did not flag that any violation had occurred.
Haiper did not respond to a request for comment.
In the UK, several politicians have been targeted by faked AI-generated audio spreading across social media.
Audio clips imitating Mayor of London Sadiq Khan and Health Secretary Wes Streeting circulated online, seemingly with the intention of damaging the public’s perception of the Labour politicians.
Recently, President-elect Donald Trump shared AI-generated images suggesting that pop star Taylor Swift and her fans supported his campaign; the real Swift later publicly endorsed Kamala Harris for the presidency.
These creations sparked outrage over AI’s potential to spread misinformation and disinformation to massive audiences online.
Concerns have also been raised over the use of generative AI to perpetrate scams. Popular consumer finance expert Martin Lewis has spoken out against the scourge of AI-generated scam ads attempting to convince readers he has endorsed various businesses and investment schemes.
Lewis sued Facebook in 2018 over fake ads on the platform which used his name and face. He later dropped the legal action after accepting the offer of a £3m donation to Citizens Advice from the US firm.