
Sign up for our Beta and be among the first to try Honu

Copyright © Honu AI 2024. All rights reserved.


Imagine an enterprise that can truly think for itself. Where every decision, from the most tactical to the most strategic, is informed by a deep, real-time understanding of the business and its environment. Where silos of data and functionality give way to a seamless flow of intelligence and action. And where human ingenuity is amplified, not replaced, by artificial intelligence.

This is the promise of what we call the Self-Thinking Business - and while it may sound like a distant vision, the building blocks are already here.

In recent years, we've seen an explosion of interest and investment in enterprise AI. From supply chain optimization to personalized customer experiences, machine learning and natural language processing are transforming every function and process. The results have been nothing short of staggering - with leading adopters reporting significant improvements in efficiency, agility, and innovation.

Despite the progress, there's a growing realization that the current approach to enterprise AI is fundamentally limited. No matter how sophisticated the models or vast the data, the intelligence they deliver remains narrow and fragmented. Point solutions can optimize individual decisions or automate specific tasks, but they lack the contextual awareness and adaptability to truly transform the enterprise as a whole.

To put it another way, there's a "cognitive gap" between the piecemeal AI implementations of today and the vision of a truly intelligent, self-optimizing enterprise.

We started Honu in 2021 with the mission of delivering full business autonomy. Back then, well before the current AI feeding frenzy, we knew that deeper networks and more powerful chips were not the answer to unleashing AI’s potential. The answer would have to come from a reimagining of the intelligence stack to bridge the gap.

Closing this gap will require a new paradigm - one that goes beyond the siloed, model-centric approaches of the past to enable holistic, contextualized intelligence at scale. A new kind of AI infrastructure which forms a "cognitive layer" that can represent the complex web of entities, relationships, and dynamics that define the modern enterprise, and empower both humans and machines to reason and act on this knowledge in real-time. Today, we introduce the Decision Infrastructure™, a groundbreaking cognitive layer technology that elevates AI from limited tactical automation to strategic decision-making, paving the way for fully autonomous organizations.

In this paper, we shine a spotlight on the gaps in tech’s cognitive abilities and show a new approach, designed by Honu, to supercharge AI’s capabilities by enabling true understanding, reasoning and execution.


AI’s status quo

While many will offer the promise of digital workforces, via agents and frameworks, none has yet fulfilled the promise of a fully automated company. As we see it, there are four characteristics that define current AI approaches and limit their ability to deliver fully autonomous businesses capable of complex decision-making with no human supervision.

AI status quo problems

A fragmented, siloed approach to tooling and expertise has created ‘intelligent systems’ that mirror the same organizational patterns, treating intelligence in the enterprise reductively rather than as a system.

While the hope of many ecosystem players was that with ever-increasing computing power and deeper networks/architectures, one might get the machine to make sense and reason for the business, this has not happened.

Our conviction is that the current trajectory won’t be enough to get us there either. At Honu we firmly believe that, with the right approach to building the intelligence stack, we can achieve the first fully autonomous business in the next two years.


GenAI’s limitations

Generative AI has made waves since entering the zeitgeist in 2022, with developers feverishly debating areas such as ‘autonomous’ complex planning and execution (BabyAGI, AutoGPT, etc.). Although it signifies a big step forward in AI, it is still very much aligned with the current paradigm and trajectory.

While the rise of LLMs has brought mass awareness of AI and dramatic increases in productivity in specific areas, it has also given us a clearer idea of their limitations. The premise that LLMs are capable of building a world model through complex pattern-finding and abstraction over a huge corpus of data is questionable at best.

The eloquence of the output of this solution, by the very nature of how it is trained, disguises the lack of coherent reasoning. 

While some LLM frameworks show reasoning-like behaviors, we have yet to see a framework in the market capable of true cogitation. 

Furthermore, the expensive data and computation needed for LLMs has turned out to be quite lucrative for cloud providers and chip manufacturers, with little or no incentive to explore or invest in alternative, more energy-efficient, and sustainable computing architectures. 


The rise of ‘Autonomous’ Agents

Autonomous agents of all shapes and kinds have proliferated, leveraging multiple frameworks like LangChain, LlamaIndex, MetaGPT, and paradigms like RAG or ReAct. LLMs played a major role here, not just in the algorithm’s capabilities, but also in socializing AI, due to the interactive, conversational interface that they inherently offer. A whole new sector has been born, with all of its tooling, infrastructure, observability tools, and even dedicated hardware. There has been no moment of pause or reconsideration. FOMO from the VC and entrepreneurial communities overtook a sense of deeper inquiry into foundational limitations of such an approach.

In the midst of the AI gold rush, we’ve observed the following key points: 

  • AI agents excel at language processing and information retrieval tasks: Yet they show limited success in rational decision-making.

  • A plateauing of AI performance: Mostly driven by the limit on the structure that can be extracted from the corpus of data.

  • The one ‘master’ LLM theory has been mostly abandoned: Instead, most LLM frameworks have started to incorporate a more structured approach to AI agents, recognising that increased structure leads to more successful frameworks.

On this last point, despite some frameworks showcasing reasoning-like behaviors, they still completely miss the mark on what we would consider the characteristics of reasoning: an ‘understanding’ of data, context, business logic, risk, scenarios of possible futures, active experimentation, and impact measurement. The reality is that while most developers speak of ‘autonomous’ agents, the ones in the market, including all the AI assistants/employees, are more closely aligned with RPA (Robotic Process Automation) than with fully autonomous agents.


Defining the Decision-Making Pyramid

The biggest business-defining decisions are strategic by nature. For example, instead of asking ‘how do we get better website copy or images?’, the question should be ‘how should we allocate our resources across product development, marketing, and organic traffic?’ While the former might add a few percentage points to the bottom line, the latter is usually the difference between a business shutting down and 10xing its revenue.

At Honu, we consider the reasoning capabilities of certain systems in terms of the decision-making pyramid (see the graph below):

With current AI capabilities stuck at the tactical/task-oriented level, there’s a long way to go to reach the pinnacle of tech-driven decision-making: a fully autonomous organization. Automating many operational tasks does not lead to autonomy; you still need to glue all these pieces together and deliver holistic decision-making.


The Cognitive Gap

Today, all AI used in the context of business sits, at best, at the tactical level of the Decision-Making Pyramid, falling short in both the scope and the interpretation of the problem being solved. We’ve identified this gap in the shift from tactical to strategic decision-making as the ‘Cognitive Gap’.

Here are the 10 capabilities that we strongly believe are needed to close the gap:

  1. Breaking the silos: A comprehensive, shared, holistic model of the business.

  2. Embedding of the business logic: Weaving reasoning into the fabric of the technology used, instead of it being an afterthought.

  3. Contextualisation of the data within the business logic: Supplying sufficient information around the data to give it meaning within the context of the business model.

  4. Capabilities for reasoning: Scenario analysis, simulation, and planning.

  5. Risk assessment and interpretation: Understanding and highlighting risks of decisions within the context of the business.

  6. A model of the wider ecosystem: Making the ecosystem, and how it pertains to the business, intelligible to agents.

  7. Continuous learning and feedback mechanisms: Experimentation and impact assessment of actions and strategy.

  8. Domain expertise: An appreciation of the domain of validity of the expertise/information. 

  9. Proactive rather than reactive: Agents appearing dynamically at the right place and time based on business state and context without user input.

  10. Requiring less data: Needs to work with limited or no initial data. Takes an online learning approach to incrementally improve performance as more data becomes available.
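To make point 10 concrete, here is a minimal sketch of the online-learning stance in plain Python (the estimator class and the demand numbers are invented for illustration): it starts from an expert prior, is usable with zero data, and refines its estimate incrementally as observations arrive, rather than requiring a large training corpus up front.

```python
class OnlineEstimator:
    """Incrementally estimates a quantity (e.g. daily demand) from a prior.

    Useful before any data exists, and improves as observations arrive,
    in contrast to batch models that need a large corpus up front.
    """

    def __init__(self, prior: float, prior_weight: float = 1.0):
        self.estimate = prior       # usable with zero observed data
        self.weight = prior_weight  # how strongly we trust the prior

    def observe(self, value: float) -> float:
        # Standard incremental (running-average) update.
        self.weight += 1.0
        self.estimate += (value - self.estimate) / self.weight
        return self.estimate

est = OnlineEstimator(prior=100.0)   # expert guess: ~100 units/day
for demand in [90.0, 110.0, 95.0]:
    est.observe(demand)
# The estimate has shifted from the prior toward the observed data.
```

The same pattern generalizes to richer models: begin from encoded domain knowledge, then let live business data sharpen it over time.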

It is our conviction that the current trajectory of AI won’t naturally lead to an emergence of these capabilities, regardless of the data and computing resources thrown at the problem.  


Rethinking the Intelligence Stack: The Cognitive Layer

We believe that this leap is within reach - but only through a rethinking of the intelligence stack, and by breaking away from the old paradigm described earlier. We propose that what is needed is a new, supplementary layer that sits between the Systems of Record and Systems of Intelligence (be it algorithms, AI Agents, etc.), that we call the ‘Cognitive Layer’. 

One can interpret the cognitive layer as the digital nervous system of the enterprise. It is a connective tissue that builds a cognitive model of the business, along with its objectives, processes, practices - and provides full contextualisation for all its data. In a nutshell, it either directly implements or enables all of the 10 points alluded to in the previous section. 

It is worth noting that the Cognitive Layer we suggest is not an LLM/AI model, but a novel software layer that augments all AI running on top of it with superior capabilities. We are using AI here in its wider sense: it could be AI agents, AI algorithms, or even BI/decision support systems.
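As a rough sketch of where such a layer sits, the following Python fragment (all class and method names are hypothetical, not Honu's actual API) shows a cognitive layer mediating between systems of record, which push raw data into it, and systems of intelligence, which query one shared, contextualized representation:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A business entity (e.g. a supplier or a sales channel)."""
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class CognitiveLayer:
    """Holds a shared, contextualized model of the business."""
    entities: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    def ingest(self, record: dict) -> None:
        # Systems of record push raw data; the layer contextualizes it.
        entity = self.entities.setdefault(record["entity"], Entity(record["entity"]))
        entity.attributes.update(record["data"])

    def relate(self, source: str, kind: str, target: str) -> None:
        # Business logic lives in the shared layer, not in individual agents.
        self.relationships.append((source, kind, target))

    def context_for(self, entity_name: str) -> dict:
        # Any system of intelligence queries one shared representation.
        entity = self.entities[entity_name]
        links = [r for r in self.relationships if entity_name in (r[0], r[2])]
        return {"attributes": entity.attributes, "relationships": links}

layer = CognitiveLayer()
layer.ingest({"entity": "acme_supplier", "data": {"lead_time_days": 12}})
layer.relate("acme_supplier", "supplies", "eu_warehouse")
ctx = layer.context_for("acme_supplier")
```

The point of the sketch is the separation of concerns: data, relationships, and business context live in one place, and every agent or algorithm on top reads from the same model rather than maintaining its own fragment.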

Releasing AI’s superhuman capabilities with less data and computation

With deep architectures, such as those found in the transformers space, the representation of the problem and the process of determining the solution are tightly intertwined and largely indistinguishable from each other. With LLMs, we are boiling the ocean with terabytes of data and millions of watts of computing resources trying to build a quasi-generalized world model that is then fine-tuned for specific business decisions. 

(N.B. While smaller, fine-tuned, and specialized models are now appearing, the essence of the above still holds true.)

The cognitive layer is an alternative approach that brings superhuman capabilities with less data and fewer computation resources. At Honu, we envision decoupling the mechanism that holds the representation of the problem space from the systems aiming to solve, optimize, and articulate that shared representation, making it accessible, understandable, and adaptable for all systems of intelligence running on top of it.

By taking this approach, we refocus the attention and resources on an accurate representation of the problem space (as seen in the image above).


In the context of the cognitive layer, the business is no longer static. The layer is always ‘on’ and dynamic in its representation of the business, mirroring what is happening within the actual live operating business (be it an additional sales channel, a new supplier, etc.). As the organization grows and matures, its capabilities morph and change, so the processes and practices being applied must be adjusted - and the way decisions are made must change with the structure of the business.

We are developing a model that can simulate forwards and use information from possible futures to make strategic decisions about the business. 
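As a toy illustration of using information from possible futures (our actual simulation model is far richer; the demand curve, prices, and costs below are invented), one can roll a simple business model forward under each candidate decision and choose the one whose simulated futures look best on average:

```python
import random

def simulate_future(cash, price, demand_fn, months=6, seed=None):
    """Roll a toy business model forward and return final cash."""
    rng = random.Random(seed)
    for _ in range(months):
        demand = demand_fn(price, rng)
        cash += demand * price - 5_000  # revenue minus fixed monthly cost
    return cash

def expected_outcome(price, runs=200):
    # Toy demand curve: higher price, fewer units sold, plus noise.
    demand_fn = lambda p, rng: max(0, int(rng.gauss(1_000 - 8 * p, 50)))
    return sum(simulate_future(10_000, price, demand_fn, seed=i)
               for i in range(runs)) / runs

# Choose the price whose simulated futures look best on average.
best_price = max([40, 60, 80], key=expected_outcome)
```

Strategic decisions then become a search over simulated futures rather than a reaction to past data alone.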


Flipping from a pull to a push system

The presence of a cognitive layer provides agents with dynamic context and an understanding of the problem space, as well as their role within the larger business. This also means that the context that the agent gets about the business happens dynamically and constantly, without the need for text input from the user. It makes these systems proactive rather than reactive, and able to show up and contribute at the right place at the right time in the decision making cycle. This represents a huge shift from a pull to a push system.
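In software terms, the shift from pull to push is the shift from agents being prompted or polling for work to agents subscribing to business state. A minimal event-bus sketch in Python (the names are illustrative, not Honu's API):

```python
from collections import defaultdict

class BusinessEventBus:
    """Push model: agents subscribe once, then are invoked with context
    whenever the business state they care about changes — no user prompt,
    no polling."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, agent):
        self.subscribers[event_type].append(agent)

    def publish(self, event_type, context):
        # The layer pushes state changes to every interested agent.
        return [agent(context) for agent in self.subscribers[event_type]]

def restock_agent(ctx):
    # Shows up proactively when inventory drops, with context supplied.
    return f"reorder {ctx['sku']}: {ctx['on_hand']} units left"

bus = BusinessEventBus()
bus.subscribe("inventory.low", restock_agent)
actions = bus.publish("inventory.low", {"sku": "A-42", "on_hand": 3})
```

The agent never asked for work; the state change found the agent.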


Honu’s Decision Infrastructure™

At Honu we are trailblazing this new approach to AI’s role within business and will soon release the first-of-its-kind cognitive layer technology, the Decision Infrastructure™.

Currently in closed alpha, our platform is sector- and AI-agnostic. Delivered as a PaaS, our Decision Infrastructure™ technology will have an SDK that can be used by application and agent developers alike to leverage this new cognitive layer and build superior AI capabilities for businesses.


Honu’s Decision Infrastructure™ is an asynchronous, event-driven platform that works with standard Python tooling (JavaScript support to follow). The platform can be easily extended by developers with their own model definitions, service definitions, and data sources. Agents are not hosted in the Decision Infrastructure™ but run remotely on the providers’ infrastructure - for now.
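To give a flavour of what an asynchronous, event-driven platform on standard Python tooling can look like, here is a sketch built only on asyncio; the DecisionPlatform class and its decorator are invented for illustration and are not the actual SDK:

```python
import asyncio

# Illustrative sketch only — these names are NOT the real SDK.

class DecisionPlatform:
    """Toy async platform: developers register their own services,
    and events flow through them without blocking one another."""

    def __init__(self):
        self.handlers = {}

    def service(self, event_type):
        # Decorator for registering a developer-defined service.
        def register(fn):
            self.handlers[event_type] = fn
            return fn
        return register

    async def emit(self, event_type, payload):
        return await self.handlers[event_type](payload)

platform = DecisionPlatform()

@platform.service("margin.check")
async def margin_service(payload):
    # A developer-defined service: flags low-margin orders.
    margin = (payload["price"] - payload["cost"]) / payload["price"]
    return {"ok": margin >= 0.2, "margin": round(margin, 2)}

result = asyncio.run(platform.emit("margin.check", {"price": 100, "cost": 85}))
```

The asynchronous design matters because business events arrive continuously and concurrently; no single slow service should stall the rest of the decision flow.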

We are currently in closed alpha with a few enterprises spanning multiple sectors, delivering superhuman decision capabilities to their client bases or their internal operations. We aim to open up the platform for general use by the end of the year.

We have an exciting year ahead and can’t wait to share our advancements with you.


Find out more about Honu and its Decision Infrastructure™

For further information about the Decision Infrastructure™ and the SDK, please visit our capabilities page.

If you are a developer or a business with an exciting project and are looking to collaborate, we'd love to hear from you. Contact us!

If you are a developer, we invite you to sign up here for the Beta release waitlist.

Follow us on X to be updated with the latest news.


Achieving The Self-Thinking Business


May 27th, 2024


