Library
Collected works on compression, memory systems, and semantic optimization by Hypernym.
Table of Contents
All entries: original works, annotations, and references curated for long-term retention.
Hypernym Joins Llama Startup Program
Library Launch + Entry Filters Added
This page now includes tagged entries, third-party research, and filterable categories for easier browsing. The manuscript-style interface reflects our commitment to treating knowledge as a living document.
LLM Fingerprinting
Identifying and verifying LLM outputs via persistent token-level markers. A submission to the Apart Research Sprint exploring novel approaches to AI output authentication and traceability.
From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning
A Meta–Stanford collaboration comparing semantic compression strategies between humans and LLMs. This work provides crucial insights into the fundamental differences in how biological and artificial systems process and compress information.
Hypernym Mercury: Token Optimization Through Semantic Field Constriction
A novel (patent-pending) method for semantic compression with controllable granularity and 90%+ token reduction. Benchmarked on Dracula, this work establishes the theoretical foundation for our compression approach.