This is an experimental "Testbed" release, and the backstory is a bit unusual.
I didn't write this code alone. I orchestrated a swarm of AI Agents (using reasoning models like GPT-5.1/Gemini 3) to architect and build it. My job was mostly arguing with them: stopping them from building "just another vector DB" and pushing them to actually implement the physics-based O(N) routing logic I envisioned. My previous post was useless because it was a dummy project with fictitious data; I won't delete this one, and I'd welcome your feedback on it.
What we achieved (v4.0):
We built an engine that replaces the O(N^2) Attention Matrix with Event-Driven Routing + State Space Models (SSM).
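For readers who haven't met the pattern before, here is a minimal, self-contained sketch of the general idea in plain Python. This is my own toy code, not anything from the repo: an all-pairs attention pattern does O(N^2) work, while an SSM-style scan touches each token once and an "event" threshold decides whether a token is significant enough to update the shared state at all. The names `ssm_scan`, `decay`, and `event_threshold` are placeholders, not the repo's API.

```python
# Illustrative toy code only -- not the repo's implementation.
# Contrast: an O(N^2) all-pairs attention pattern vs. an O(N) SSM-style
# scan with a simple "event" gate that skips insignificant updates.
import math
import random

def attention_scores(tokens):
    """O(N^2): every token scores against every other token."""
    n = len(tokens)
    return [[math.exp(-abs(tokens[i] - tokens[j])) for j in range(n)]
            for i in range(n)]

def ssm_scan(tokens, decay=0.9, event_threshold=0.05):
    """O(N): one hidden state, updated once per token.
    Updates below `event_threshold` are skipped ("event-driven")."""
    state, outputs = 0.0, []
    for x in tokens:
        update = (1.0 - decay) * x
        if abs(update) >= event_threshold:   # fire only on significant "events"
            state = decay * state + update
        outputs.append(state)
    return outputs

random.seed(0)
seq = [random.uniform(-1.0, 1.0) for _ in range(8)]
scores = attention_scores(seq)
print(f"attention work: {len(scores) * len(scores[0])} pairwise scores")
print(f"scan work:      {len(seq)} state updates")
```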
Why I'm sharing this:
The model is currently UNTRAINED (random weights), so it won't write poetry yet. I am releasing the engine so the performance claims can be verified:
1. Efficiency: At 4096 tokens, it uses ~27,000x fewer FLOPs than a Transformer (a back-of-envelope way to check this is sketched after this list).
2. Speed: It runs in pure Python. It starts slower because of per-call overhead, but overtakes optimized Transformers at around 2k tokens.
3. Logic: v4.0 finally fixed the "Bag-of-Words" issue; it now understands sequence order (a minimal order-sensitivity check is sketched further down).
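On claim 1, here is a back-of-envelope template for checking it. The dimensions `d` and `c` below are placeholders I picked, not the repo's actual settings, and the ratio you get depends entirely on what is counted (projections, heads, state size); with these toy numbers the attention-score terms alone give about 2,048x at N = 4096, so pinning down the reported ~27,000x means looking at exactly what the benchmark counts.

```python
# Back-of-envelope FLOPs comparison (my assumptions, not the repo's benchmark).
# Attention-score FLOPs for one layer: roughly 4 * N^2 * d
#   (QK^T plus the attention-weighted sum over V).
# A linear-time scan touches each token once: roughly c * N * d,
#   with a small constant c for the state update.
N = 4096        # sequence length from the post
d = 768         # assumed model width (e.g. GPT-2 small); the repo may differ
c = 8           # assumed per-token constant for the scan

attention_flops = 4 * N**2 * d
scan_flops      = c * N * d
print(f"attention : {attention_flops:.3e} FLOPs")
print(f"scan      : {scan_flops:.3e} FLOPs")
print(f"ratio     : {attention_flops / scan_flops:,.0f}x")  # ~2,048x with these placeholder numbers
```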
I need the community to stress-test the physics. Clone the repo, run `python run_v3_demo.py`, and tell me if the math holds up.
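And on claim 3, this is the simplest possible version of the kind of check I mean (again my own toy code, not a script from the repo): a bag-of-words sum cannot tell two orderings of the same tokens apart, while a recurrent scan can.

```python
# Toy order-sensitivity check -- not from the repo.
def bag_of_words(tokens):
    return sum(tokens)                      # order-invariant by construction

def recurrent_scan(tokens, decay=0.5):
    state = 0.0
    for x in tokens:                        # state depends on the order of updates
        state = decay * state + (1.0 - decay) * x
    return state

a = [1.0, 2.0, 3.0, 4.0]
b = list(reversed(a))
print(bag_of_words(a) == bag_of_words(b))       # True  -> order information lost
print(recurrent_scan(a) == recurrent_scan(b))   # False -> order information preserved
```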
Repo: https://github.com/makimilan/pulse-field-corev
Cheers!