Collaborative Design and Formal Verification of Factored Cognition Schemes
Head of Project: Brian Muhia
When you chain parallel and sequential calls to large language models (for example with LangChain), you implicitly create a causal graph that can be analyzed visually, given the right tracing tools (https://github.com/equiano-institute/ice). This report describes several agents using an explicit formalism based on causal influence diagrams, which we treat as a notation for the data flow, components, and steps involved when a user makes a request. We use the example diagrams to explain and fix risk scenarios, showing how much easier it is to debug agent architectures when you can reason visually about the data flow, and to ask questions about intent alignment for AGI in the context of such agents.
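To make the idea concrete, here is a minimal sketch (not taken from the report) of how a chained LLM workflow can be treated as a causal-influence-diagram-style graph. It assumes the networkx library; the node names and structure are hypothetical illustrations of a factored-cognition request, not the report's actual diagrams.

```python
import networkx as nx

# Directed graph whose edges follow the data flow of a single user request.
cid = nx.DiGraph()

# Node "kind" loosely follows causal influence diagram conventions:
# chance (observations), decision (model calls the agent controls),
# and utility (what the user ultimately cares about).
cid.add_node("user_request", kind="chance")
cid.add_node("decompose", kind="decision")      # LLM call: split task into sub-questions
cid.add_node("answer_sub_1", kind="decision")   # parallel LLM calls on sub-questions
cid.add_node("answer_sub_2", kind="decision")
cid.add_node("aggregate", kind="decision")      # LLM call: combine sub-answers
cid.add_node("user_satisfaction", kind="utility")

cid.add_edges_from([
    ("user_request", "decompose"),
    ("decompose", "answer_sub_1"),
    ("decompose", "answer_sub_2"),
    ("answer_sub_1", "aggregate"),
    ("answer_sub_2", "aggregate"),
    ("aggregate", "user_satisfaction"),
])

# Questions about the agent become graph queries, e.g. which upstream nodes
# can influence the final answer (useful when debugging where bad data enters).
print(sorted(nx.ancestors(cid, "user_satisfaction")))
```

Once the data flow is explicit in a graph like this, risk scenarios can be phrased as structural questions (which nodes influence the utility node, where untrusted inputs enter), which is the kind of visual reasoning the report relies on.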
Participants: Brian Muhia (Primary author, CTO of Fahamu Inc), Adrian Kibet (CEO of Fahamu Inc)