Tuning a Metaphor Cypher for Diversity Support

Copyright Henry J. Cobb, 2023


The problem with CAPTCHA

Metaphor Cypher from the Uncanny Valley

Automating the Metaphor Cypher

Tuning for Diversity Support


The problem with CAPTCHA

Image-based CAPTCHA systems filter out automated attacks by requiring the requestor to solve an image-based puzzle. Unfortunately, not all humans are equally able to perceive images, so an alternative method is always required, and providing multiple paths through a defense leaves it only as strong as its weakest link. Additionally, modern machine learning systems are getting ever better at matching human levels of low-level visual recognition.

Hence a CAPTCHA replacement is needed that is more inclusive of humans and better at excluding automated attacks. It must not only prove sufficient for current needs but also scale: becoming easier for humans to solve and ever more infeasible for automated attack as machine learning grows in capacity and capability.

Metaphor Cypher from the Uncanny Valley

Modern humanity went from bit players in evolution to displacing all other hominids very rapidly after adopting language. Their competitors could not pass this "Turing Test" and so were unable to reserve positions for themselves in this new society; they faded away, leaving only a few percent genetic contribution.

A language-based test therefore plays to humanity's evolutionary advantage. But a purely mechanical test would give machine learning the advantage. Hence the test must rely on human emotional intelligence, which is a requirement for any general intelligence; the unwillingness to automate it prevents Artificial General Intelligence (AGI) while providing scope for a puzzle presented in terms of a metaphor.

The Metaphor Cypher in its simplest form is to generate a unique metaphor, then ask the requestor to prove that they possess emotional intelligence by solving for what the metaphor is really about.

Automating the Metaphor Cypher

Given a Large Language Model (LLM) that is trained on vast amounts of human generated text:

  1. Pick a few random words out of your LLM's vocabulary.
  2. Put these into a random phrase and walk its improbability down to a global minimum, changing the words along the way for best fit.
  3. You now have an original phrase that is highly probable to human ears. This is the key.
  4. Generate a random vector and add it to the words in your key so they all shift domains in the same direction.
  5. Drive the transformed phrase down to a local minimum, shifting words to match.
  6. Present the resulting Metaphor Cypher for the human to solve for the key, while any AI is stumped by the complexity of your LLM.
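The domain-shift at the heart of steps 4 and 5 can be sketched in miniature. This is a toy illustration only: the two-dimensional hand-made vocabulary, the `nearest_word` snapping, and the fixed shift vector all stand in for a real LLM's embedding space and minimization procedure, which the text does not specify.

```python
# Toy stand-in for an LLM's embedding space: a tiny hand-made vocabulary
# where each word maps to a 2-D vector. Real embeddings would come from
# an actual language model; everything here is illustrative.
EMBEDDINGS = {
    "river":  (0.0, 0.0), "stream": (0.2, 0.1), "flow":   (0.4, 0.2),
    "money":  (5.0, 5.0), "cash":   (5.2, 5.1), "income": (5.4, 5.2),
}

def nearest_word(vec, exclude=()):
    """Return the vocabulary word whose vector is closest to vec."""
    best, best_d = None, float("inf")
    for word, (x, y) in EMBEDDINGS.items():
        if word in exclude:
            continue
        d = (x - vec[0]) ** 2 + (y - vec[1]) ** 2
        if d < best_d:
            best, best_d = word, d
    return best

def shift_phrase(words, vector):
    """Step 4 in miniature: add one shared vector to every word's
    embedding and snap each result to the nearest vocabulary word,
    shifting the whole phrase into another domain in one direction."""
    shifted = []
    for w in words:
        x, y = EMBEDDINGS[w]
        shifted.append(nearest_word((x + vector[0], y + vector[1]), exclude=words))
    return shifted

key = ["river", "stream"]               # steps 1-3: the "key" phrase
cypher = shift_phrase(key, (5.0, 5.0))  # step 4: shift into another domain
print(cypher)                           # ['money', 'cash'] — the metaphor
```

A human recovers the key by recognizing what the shifted phrase is "really about"; the attacker must instead invert the unknown shift vector.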

Now a quantum computer the size of your LLM could randomly try all the vectors near your cypher and solve for the global minimum of each, or an attacker could brute-force the entire dictionary; otherwise, this is secure against automated attacks.
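The scale of that dictionary brute force can be estimated with simple combinatorics. The vocabulary size and key length below are my own illustrative assumptions, not figures from the text:

```python
# Back-of-the-envelope cost of a dictionary brute-force attack on the
# cypher: the attacker must try every possible key phrase. Both numbers
# are illustrative assumptions.
vocab_size = 50_000   # plausible content-word vocabulary
key_length = 4        # words in the hidden key phrase

candidates = vocab_size ** key_length
print(f"{candidates:.2e} candidate phrases")  # 6.25e+18
```

Even at these modest parameters the search space is on the order of 10^18 phrases, each of which would also need its minimization run to be checked against the cypher.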

Tuning for Diversity Support

As this is a purely text-based test, it can be presented as text, voice, braille, and so on. Humans have gone to great lengths to communicate with each other, hence the applicability and effectiveness of this test. However, this wordplay relies on a common language and cultural background that the LLM was sampled from.

The fix is to train different LLMs on all the different cultures of humanity. So insert a "step zero" into the recipe above: have the requestor self-identify their native language and cultural background, then present them with the matching test so that they can prove their claim.

This will of course still exclude those humans who lack full control over their own mental faculties. This may be considered a feature or a bug, as appropriate.