Safe and responsible AI development

Can you tell a deep-fake from a real image?
What about generated text from verified facts?

Immensely powerful large language and multi-modal models, such as GPT-4 and Midjourney's text-to-image system, herald a new era in humanity's progress towards Artificial General Intelligence (AGI). Systemic changes are reverberating through our society, with effects we cannot yet fully assess.

There is potential here for deep harm as much as for great benefit. In the last week alone we have seen incredibly convincing "photos", from the silly (the Pope wearing a puffer jacket) to the worrying (former UK Prime Minister Boris Johnson fighting arrest). Suitably prompted, ChatGPT will give convincing yet entirely invented responses on just about any topic.

Expressing deep concern, the Future of Life Institute (FLI) last week published an open letter, signed by many esteemed figures in the tech, literary, and journalistic spheres, asking all those involved in the race towards AGI to pause their efforts for six months.

On top of this, the Italian government has blocked ChatGPT in Italy. Reactions like these abound across social media and politics.

In the wake of this, we at Henesis want to reaffirm our dedication to the safe and responsible development of AI. In our R&D we put safety, security, non-discrimination, and explainability above all other values.

We hope that other research organisations, big and small, will heed these warning signs and the FLI's plea, and continue their research with the utmost caution and prudence.

For more information, read the FLI's open letter.