What is Buffaly?

Buffaly is a system for deterministic processing of the meaning behind natural language.

It consists of a set of technologies: Prototypes, an Ontology Database, ProtoScript, a Semantic and Lexical Model, and a Deterministic Tagger.

Most importantly, Buffaly is:

  • Understandable. We can crack it open and see exactly why it interprets language the way it does.
  • Extensible. We can add new meaning and new understanding at any time. The system can even help to modify itself.
  • Customizable. We can define entities, actions, and meaning that are only meaningful to a particular domain or business.

What can Buffaly be used for? Buffaly gives us a predictable, programmatic way to operate on language. It lets us easily represent and operate on the "meaning behind words" in a transparent, white-box way.


Introducing Buffaly

                                        LLM   Buffaly
    Understands English                 Yes   Yes
    Generates Convincing Text           Yes   No
    Costs Millions to Train             Yes   No
    Can Answer Questions About Anything Yes   No
    Easily Understandable               No    Yes
    Can Be Modified and Extended        No    Yes
    Adapts to Each Business             No    Yes
    Easy to Program                     No    Yes
    Easy to Integrate                   No    Yes
    Predictable                         No    Yes

Buffalo buffalo buffalo

This is the sentence that gave rise to the name Buffaly. Those in the Natural Language Understanding (NLU) community will recognize it immediately.

Why? Every English speaker knows that a buffalo is an animal. Most everyone also knows Buffalo is a city in New York. But outside of the NLU community, most people don't know that "buffalo" can also be used as a verb meaning "to bully or to push around".

For example, when we say "Buffalo buffalo buffalo":

  • We can interpret that as buffalo (the animal) pushing around other buffalo (the animal).

And when we say "Buffalo buffalo buffalo buffalo":

  • We can interpret it as Buffalo from the city of Buffalo (New York) bully other buffalo.

Supposedly this can be taken out to something like 13 "buffalo"s -- but my mind gives up somewhere around 4.

The example was created to show the difficulty of building NLU systems. The rise of the Llamas was actually a direct result of problems like this and something called the "Bitter Lesson". (link)

But this problem is solvable, along with a huge majority of other NLU problems -- without the need to enter into a Matrix-style Faustian bargain with AI overlords.

Let's go back to the "buffalo" example. Feeding each of these examples into Buffaly, we get a result:

  • buffalo buffalo buffalo

        BuffaloAction#2989
            NakedInfinitive.Field.Infinitive = ToBuffalo
            Action.Field.Object = BuffaloAnimal
            Action.Field.Subject = BuffaloAnimal

  • buffalo buffalo buffalo buffalo

        BuffaloAction#2989
            NakedInfinitive.Field.Infinitive = ToBuffalo
            Action.Field.Object = BuffaloAnimal
                CityQualifiable.Field.CityQualifier = BuffaloCity
            Action.Field.Subject = BuffaloAnimal


Llamas vs Buffalos

With the rise of ChatGPT and other Large Language Models ("the Llamas"), the discussion around traditional NLU has become somewhat muted. With enough data and enough processing, we may never have to think about difficult problems like "buffalo buffalo buffalo" again.


Really?

Let's take a look at how ChatGPT 4.0 handles this question. But first, let's make sure it's not just memorizing the pattern, by replacing the "buffalo verb" with an actual verb: "bully".

So that's a fail. It just mapped my sentence onto the closely memorized "8 buffalo" pattern it already had. It seems ChatGPT has memorized the parlor trick but isn't really pulling my sentence apart for meaning. But the important thing is:

    "We don't know what the llama is thinking."


When Buffaly understands a sentence we get to see the meaning:

  • buffalo buffalo buffalo buffalo

At least we can start to get a feeling for what is going on inside Buffaly. If it makes a mistake, we stand a chance of 1) catching the problem and 2) correcting it.

Compare that to ChatGPT's understanding of the same sentence:

  • That's 1,536 floating point numbers representing the "understanding" of that sentence. It's actually a huge improvement over the 12,288 numbers in the older version.
  • We call this data structure an "embedding vector".
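For a concrete sense of what those numbers are: an embedding vector is just a list of floats, and two of them are typically compared by cosine similarity. A minimal sketch, using tiny 4-dimensional stand-ins for the real 1,536-dimensional vectors:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional stand-ins for 1,536-dimensional embeddings.
sentence_a = [0.12, -0.40, 0.33, 0.80]
sentence_b = [0.10, -0.38, 0.30, 0.85]

score = cosine_similarity(sentence_a, sentence_b)
# A score near 1.0 means "similar" -- but nothing in the vector
# tells us *why* the two sentences are similar.
```

The score tells us the sentences are close in the model's space, but it explains nothing -- which is exactly the opacity problem.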


AI Function Calling

The Llamas are great at responding to language and all its nuances. If you want an AI girlfriend, help planning your 11-year-old's birthday party, or a 1,000-word blog post -- the Llamas are wonderful.

But what if you need to take actions based on language? What if you need precise control over the actions? What if those actions can cost money, time, or customers?

You need control. Llamas don’t give that to you. There are three common approaches to allow the Llamas to take actions.

  1. Intents and Entities
  2. AI Function Calling
  3. JSON / Grammars

What are the current approaches to solving this problem?

Intents and Entities

One approach is to use something like Google's Dialogflow. Here is how to set up Dialogflow:

  • We provide the program with a set of example phrases and "Intents".
  • The AI model tries to match new text to the example phrases.
  • As new examples come in, it is up to us to tell the model how to classify each phrase.

With Dialogflow, and most chatbots, we give examples at various points in our workflow. At no point do we actually work with the meaning. We don't have complete control; we don't even get to see the meaning the system operates upon. We just hope that there's enough training data to make the system work.

Other limitations:

  • Intents and Entities are incredibly simple -- basically Workflow states
  • It's difficult to add domain specific understandings
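To make the limitation concrete, here is a toy sketch of the example-phrase workflow (a hypothetical word-overlap matcher, not Dialogflow's actual model -- real systems use trained classifiers, but the developer workflow is the same):

```python
# Toy intent matcher: supply example phrases per intent and hope
# new inputs land near one of them. The "meaning" is never
# represented anywhere we can inspect or extend.
INTENTS = {
    "OrderTestKit": ["I need test kits", "send me some test kits"],
    "CheckStatus": ["where is my order", "has my order shipped"],
}

def match_intent(text: str) -> str:
    """Pick the intent whose example phrases share the most words."""
    words = set(text.lower().split())
    def overlap(intent: str) -> int:
        return max(len(words & set(p.lower().split()))
                   for p in INTENTS[intent])
    return max(INTENTS, key=overlap)

intent = match_intent("I am looking for some test kits")
```

The matcher only ever sees surface words; nothing in it represents what "looking for test kits" actually means.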

AI Function Calling

Another approach is to use OpenAI's function calling:

In this model, we tell the Llama about the available functions (in English), then we hope that it is able to discern the correct one. This has several problems:

  1. We are limited by the Context window. We can only fit a limited number of functions within the Window + the User's Query + any other documentation necessary.
  2. We are still at the mercy of the Llama to interpret things correctly.
  3. The Llamas call the functions directly; there is no intermediate "white box" to inspect.
  4. We are limited by parameters, recursion complexity, etc.
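As an illustration of problem 1, each function is described to the model as a JSON schema, and every schema consumes context-window tokens. A hedged sketch in the general shape of OpenAI-style tool definitions (field names vary across API versions, and the function itself is hypothetical):

```python
import json

# Each available function is described to the model as a JSON schema.
# Every one of these descriptions consumes context-window tokens,
# which is why the number of functions we can expose is limited.
order_test_kits = {
    "name": "order_test_kits",
    "description": "Order COVID-19 test kits for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "kit_type": {
                "type": "string",
                "enum": ["antigen", "antibody", "pcr", "at_home"],
            },
            "quantity": {"type": "integer", "minimum": 1},
        },
        "required": ["kit_type", "quantity"],
    },
}

# The schema is serialized and sent along with every request.
payload = json.dumps(order_test_kits)
```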

Sometimes ChatGPT will even hallucinate functions and call them. Here is an example of that:

And again: we don't have any intermediary between what the language model understands and what we're processing.

JSON/Grammars

Various tools let us constrain responses to JSON or a grammar, including:

  • Llama.cpp
  • LMQL.ai
  • others

Why isn't this the solution?

All of these solutions require us to tell the Llama about the format to produce. We must specify the universe of possible responses in the limited Context window, plus the user's query, plus any other documentation. It is not an extensible solution.
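For illustration, a grammar-constrained setup forces us to enumerate the response universe up front. The grammar string below is in the rough shape of llama.cpp's GBNF format (treat the exact syntax as an assumption, for illustration only):

```python
# Sketch: with grammar-constrained decoding, every allowed response
# must be spelled out before the model runs. Grammar is in the rough
# shape of llama.cpp's GBNF format (syntax is an assumption).
GRAMMAR = r'''
root ::= "{ \"action\": \"order\", \"kit\": " kit " }"
kit  ::= "\"antigen\"" | "\"antibody\"" | "\"pcr\"" | "\"at_home\""
'''

# The grammar admits exactly four responses. Nothing outside this
# enumerated universe can be produced, and every new capability means
# editing the grammar and resending it in the context window.
ALLOWED_KITS = ["antigen", "antibody", "pcr", "at_home"]
```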

And, as we'll see later, Buffaly is far more than a data format. It's an entire ecosystem of tools designed around creating and operating upon meaning. Llamas are great at one particular task: interpreting natural language.

Let's stop asking llamas to pretend to be something they are not.

A Better Approach

Buffaly builds a whole new series of technologies that make it easy to work with Natural Language Understanding:

  • Prototypes - a data structure for representing knowledge graphs
  • Ontology Database - a database for storing knowledge graphs
  • ProtoScript - a scripting language for defining and manipulating knowledge graphs
  • Semantic and Lexical Model - a novel approach to understanding language
  • Deterministic Tagger - a non-random, understandable language parser

It gives you a way to programmatically work with meaning and tie that into actual computer code.

Computer code is very specific and language is not; there must be an interface between the two. No matter how advanced the Llamas get, in the end they are still going to use code to get stuff done -- even if that code is written by a computer program.

Therefore, how we go from natural language to something that computer code can operate upon is always going to be an interface concern. Buffaly aims to simplify this process.

Buffaly is fundamentally better at providing a programmatic way to operate upon meaning.

Most importantly, the system is:

Understandable. We can crack it open and see exactly why it interprets language the way it does.

Extensible. We can add new meaning and new understanding at any time. The system can even help to modify itself.

Customizable. We can define entities, actions, and meaning that are only meaningful to a particular domain or business.

Transparent

Buffaly is completely transparent in the way it understands language. When we feed it a piece of text, Buffaly provides its interpretations in a human-readable format.

During the COVID-19 Pandemic, Buffaly was used to help fulfill a huge number of requests for COVID-19 test kits. It would see messages like the following:

    "I am looking for some test kits."

Buffaly uses a state-of-the-art Lexical Model to turn raw text into structure. Internally that structure is represented as a Graph -- a collection of nodes and edges. But we can easily display the value of that graph in a clear and simple format:

Simple Structure

    LookFor
        .BoundAction = Looking
        .Length = Continuous
        .Tense = Present
        .Object = TestKit
            .Quantity = Some
            .Plurality = Plural
        .Subject = I

The output from Buffaly's Lexical Model is similar to Part of Speech tagged text -- but simpler and more intuitive.
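As a rough illustration of the "nodes and edges" point (hypothetical Python, not Buffaly's actual internals), the LookFor structure above can be held as an explicit labeled graph:

```python
# Hypothetical sketch: the LookFor interpretation as an explicit graph.
# Nodes are concepts; edges are labeled fields, mirroring the display
# format shown above.
nodes = {"LookFor", "Looking", "Continuous", "Present",
         "TestKit", "Some", "Plural", "I"}

edges = [
    ("LookFor", "BoundAction", "Looking"),
    ("LookFor", "Length", "Continuous"),
    ("LookFor", "Tense", "Present"),
    ("LookFor", "Object", "TestKit"),
    ("TestKit", "Quantity", "Some"),
    ("TestKit", "Plurality", "Plural"),
    ("LookFor", "Subject", "I"),
]

def fields(node: str) -> dict[str, str]:
    """All outgoing labeled edges of a node."""
    return {label: target for src, label, target in edges if src == node}

obj = fields("LookFor")["Object"]
```

Because the graph is explicit, any piece of the interpretation can be inspected or traversed directly.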

But Buffaly uses real understanding to interpret the world: after the Lexical Model, the text is fed to a Semantic Model.

Real Understanding

The Semantic Model provides real understanding.

    "I am looking for some test kits."

In English, we use the words "look for" to indicate we are searching for something -- not that we're actually using our eyes for some visual task. Buffaly understands that, semantically disambiguates it, and provides the correct interpretation:

    SearchSememe
        .SourceActor = I
        .TargetActor = TestKit
            .Quantity = Some
            .Plurality = Plural
        .YesNo = Yes
        .Tense = Present

The "SearchSememe" is a unit of meaning representing that piece of understanding. We can operate upon it. We can perform actions upon it.

No universal language model can possibly know the domain specific knowledge of each application.

In this case "looking for test kits" means "I want to buy test kits".

"I am looking for test kits" implies --> "I want to buy test kits"

Therefore, Buffaly calculates the following implication automatically:

    DesireActionSememe
        .Action =
            [0] = PurchaseSememe
                .SourceActor = I
                .TargetActor = TestKit
                .YesNo = Yes
        .SourceActor = I

Buffaly understands this implication and provides that interpretation to the calling application.

Buffaly allows us to operate on a compressed meaning space. We don't have to worry about every possible way of saying something, just the overall meaning. Now, instead of hundreds or thousands of possibilities, we can operate on a single Meaning.

Think about all of the possible statements that could imply the same thing:

  • I would like to get some test kits
  • I was looking to find some test kits
  • I need to buy some test kits.

Buffaly handles all the ambiguity and provides a simple, understandable object for processing:

    PurchaseSememe
        .TargetActor = TestKit
            .Quantity = Some
            .Plurality = Plural
        .YesNo = Yes

Easily Programmable

Buffaly is easily extensible and customizable. Unlike the Llamas, it is trivial to add new information to the model at any point. We can extend the model:

  1. Using ProtoScript
  2. Or Language

Using ProtoScript

Each domain has its own language, its own objects, and its own understanding. ProtoScript is a language built specifically for building Buffaly's language model.

Returning to the COVID-19 test kit example for a moment: there were generally four types of test kits available:

  1. Antigen - these tested for the active virus.
  2. Antibody - these tested for antibodies left over from fighting the virus
  3. PCR - these were sent to the lab for processing
  4. At Home - these were over the counter (antigen) test kits.
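As a hypothetical sketch of what teaching a system these four subtypes might involve (plain Python for illustration, not the actual ProtoScript):

```python
# Hypothetical sketch of extending an ontology with new prototypes.
# Illustrative only -- Buffaly's real extension code is ProtoScript.
ontology = {"TestKit": {"parent": "Product", "children": []}}

def add_prototype(name: str, parent: str, note: str) -> None:
    """Register a new concept beneath an existing one."""
    ontology[name] = {"parent": parent, "note": note, "children": []}
    ontology[parent]["children"].append(name)

add_prototype("AntigenTestKit", "TestKit", "tests for the active virus")
add_prototype("AntibodyTestKit", "TestKit", "tests for leftover antibodies")
add_prototype("PCRTestKit", "TestKit", "sent to a lab for processing")
add_prototype("AtHomeTestKit", "TestKit", "over-the-counter antigen kit")
```

The point is that new knowledge is added as data, at any time, with no retraining step.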

Retraining, or even fine-tuning, an LLM every time we want to add a new piece of information is unreasonable. We can try to stuff new knowledge into a "context window", but that costs more money (in tokens) and is limited by the context window's length.

Retraining, fine-tuning, and context windows are just hacks for the LLM's fundamental inability to learn new facts. Buffaly can be taught new things at any time.

Here is some real code used to extend Buffaly with knowledge of the test kit types.


Or Language

The last thing we want is a language model going off the rails when talking to a customer. Unlike Microsoft's Tay, Buffaly is not going to learn racist or sexist ideas and start parroting them to your customers.

But we can augment Buffaly, during development, via a language-to-ProtoScript generator.

We never want to assume the model understands anything, so we always translate from language to code first.

Some examples:

Stan is a person

Buffalo is a verb