API v1.2
Stable
Hypernym AI API Documentation

Compress language into high-fidelity summaries with semantic categorization, adaptive compression, and precision similarity scoring. Build memory that persists across time, agents, and tools.

What We Do

Hypernym AI's API offers a structured, efficient way to analyze and categorize text by assigning each paragraph to a precise semantic category, almost like sorting content into an intelligent hash table.

Semantic Categorization

Maps each paragraph into specific "buckets" based on central themes for clear, organized structure.

Adaptive Compression

Calculates optimal compression ratios to distill content while retaining critical meaning.

Precision Similarity

Measures paragraph alignment with semantic categories using proximity scores.

Semantic Filtering

Automatically excludes content that aligns with specified semantic categories.

Prerequisites

Some examples use jq for JSON processing. Install it as follows:

macOS

brew install jq

Ubuntu/Debian

sudo apt-get install jq

Getting Started

Request API Access

To get started with the Hypernym API, you'll need to request access and receive your API key.

Schedule API Access Call

Endpoint Details

POST https://fc-api-development.hypernym.ai/analyze_sync

Analyzes the provided essay text synchronously and returns semantic analysis results.

Request Structure

Headers

Parameter     Type    Required  Description
Content-Type  string  Required  application/json
X-API-Key     string  Required  Your API key

Body Parameters

Parameter   Type    Required  Description
essay_text  string  Required  The text to be analyzed and compressed
params      object  Optional  Optional parameters for compression
filters     object  Optional  Optional semantic filters to exclude content

The params object accepts the following fields:

Parameter                Type   Required  Description
min_compression_ratio    float  Optional  Minimum compression ratio (default: 1.0)
min_semantic_similarity  float  Optional  Minimum semantic similarity (default: 0.0)

Example filters object:
{
  "filters": {
    "purpose": {
      "exclude": [
        { "semantic_category": "political" },
        { "semantic_category": "investment advice" }
      ]
    }
  }
}

Response Structure

Parameter  Type    Required  Description
metadata   object  Optional  API version, timestamp, and token usage information
request    object  Optional  Echo of the original request
response   object  Optional  Analysis results including compressed text and segments

Each entry in response.segments has the following fields:

Parameter            Type     Required  Description
was_compressed       boolean  Optional  Whether this segment was compressed
semantic_category    string   Optional  The semantic category assigned to this segment
covariant_details    array    Optional  Key details extracted from the segment
original             object   Optional  Original text and embedding data
reconstructed        object   Optional  Reconstructed text if compressed
semantic_similarity  float    Optional  Similarity score (0-1)
compression_ratio    float    Optional  Compression ratio (0-1)
excluded_by_filter   boolean  Optional  Whether excluded by semantic filter
exclusion_reason     object   Optional  Details about filter exclusion
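
As a minimal sketch of consuming a response with this shape (field names taken from the tables above, nothing else assumed), the helper below walks response.segments, skips anything a semantic filter excluded, and collects the category, details, and scores for each remaining segment:

def summarize_segments(result: dict) -> list[dict]:
    """Collect category, details, and scores for segments not excluded by a filter."""
    summaries = []
    for segment in result["response"]["segments"]:
        # Segments removed by a semantic filter carry excluded_by_filter = true.
        if segment.get("excluded_by_filter"):
            continue
        summaries.append({
            "category": segment.get("semantic_category"),
            "details": segment.get("covariant_details", []),
            "similarity": segment.get("semantic_similarity"),
            "compressed": segment.get("was_compressed", False),
        })
    return summaries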

Error Handling

Status Code  Description
400          Bad Request - Invalid input or parameters
403          Forbidden - Invalid API key
413          Payload Too Large - Text exceeds limit
429          Too Many Requests - Rate limit exceeded
5xx          Server Error - Retry with exponential backoff
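
For 429 and 5xx responses, retry with exponential backoff as noted above. The sketch below is one possible client-side implementation; the starting delay and retry count are illustrative assumptions, not documented limits.

import time
import requests

def post_with_backoff(url: str, headers: dict, payload: dict, max_retries: int = 5):
    """Retry on 429/5xx with exponential backoff; other errors raise immediately."""
    delay = 1.0  # illustrative starting delay in seconds (assumption)
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(delay)
            delay *= 2  # double the wait before the next attempt
            continue
        response.raise_for_status()  # surfaces 400/403/413 as exceptions
        return response.json()
    raise RuntimeError(f"Request failed after {max_retries} attempts")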

Basic Request Example

curl -X POST https://fc-api-development.hypernym.ai/analyze_sync \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "essay_text": "Hypernym builds language tools for developers. Our API compresses text while preserving meaning."
  }'

Request with Filters

curl -X POST https://fc-api-development.hypernym.ai/analyze_sync \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "essay_text": "Hi, thank you for calling Radical Toys! I would be happy to help with your shipping or returns issue.",
    "params": {
      "min_compression_ratio": 0.5,
      "min_semantic_similarity": 0.8
    },
    "filters": {
      "purpose": {
        "exclude": [
          { "semantic_category": "pleasantries" }
        ]
      }
    }
  }'

Full Response Example

{
  "metadata": {
    "version": "1.2.0",
    "timestamp": "2025-01-24T10:30:00Z",
    "tokens": {
      "in": 1000,
      "out": 500,
      "total": 1500
    },
    "filters_applied": true,
    "excluded_segments_count": 1
  },
  "request": {
    "content": "Hi, thank you for calling Radical Toys!...",
    "params": {
      "min_compression_ratio": 0.5,
      "min_semantic_similarity": 0.8
    }
  },
  "response": {
    "meta": {
      "embedding": {
        "dimensions": 768,
        "model": "text-embedding-3-small"
      }
    },
    "texts": {
      "compressed": "Customer service interaction regarding shipping/returns assistance.",
      "suggested": "Customer service interaction regarding shipping/returns assistance."
    },
    "segments": [
      {
        "was_compressed": true,
        "semantic_category": "Customer Service",
        "covariant_details": ["shipping assistance", "returns support"],
        "original": {
          "text": "I would be happy to help with your shipping or returns issue.",
          "embedding": {
            "dimensions": 768,
            "values": "[... 768 values]"
          }
        },
        "semantic_similarity": 0.89,
        "compression_ratio": 0.65,
        "excluded_by_filter": false
      }
    ]
  }
}

Python Integration

import requests
import json

class HypernymClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://fc-api-development.hypernym.ai"
        self.headers = {
            "Content-Type": "application/json",
            "X-API-Key": api_key
        }
    
    def analyze_text(self, text: str, params: dict = None, filters: dict = None):
        """Analyze text using the Hypernym API"""
        payload = {"essay_text": text}
        
        if params:
            payload["params"] = params
        if filters:
            payload["filters"] = filters
            
        response = requests.post(
            f"{self.base_url}/analyze_sync",
            headers=self.headers,
            json=payload
        )
        
        response.raise_for_status()
        return response.json()

# Usage example
client = HypernymClient("your-api-key")

result = client.analyze_text(
    text="Your text to analyze here...",
    params={
        "min_compression_ratio": 0.5,
        "min_semantic_similarity": 0.8
    }
)

print(f"Compressed: {result['response']['texts']['compressed']}")
print(f"Token reduction: {len(result['request']['content'].split())} -> {len(result['response']['texts']['compressed'].split())}")

Authentication

🔐 API Key Security

  • Always include your API key in the X-API-Key header
  • Keep your API key secure and never expose it in client-side code
  • Use environment variables to store your API key (see the sketch below)
  • Rotate your API key regularly for security
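
A minimal sketch of loading the key from an environment variable; the variable name HYPERNYM_API_KEY is an assumption chosen for this example, not a documented convention:

import os

# Read the key from the environment instead of hardcoding it in source.
# HYPERNYM_API_KEY is a placeholder variable name for this sketch.
api_key = os.environ["HYPERNYM_API_KEY"]
client = HypernymClient(api_key)  # HypernymClient from the Python Integration section above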

Rate Limits

Standard Limits

  • 100 requests per minute
  • 1,000 requests per hour
  • 10,000 requests per day

Enterprise Limits

  • Custom rate limits
  • Dedicated infrastructure
  • Priority support

Best Practices

Recommended

  • Use HTTPS for all requests
  • Implement exponential backoff for retries
  • Cache results when appropriate
  • Monitor token usage in metadata
  • Use semantic filters to exclude unwanted content

Avoid

  • Logging full text content in production
  • Sending extremely large texts without chunking
  • Ignoring error responses
  • Hardcoding API keys in source code

Performance Considerations

Text Size

Optimal performance with texts under 10,000 tokens. Larger texts may require chunking.
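
One possible way to chunk longer documents before submission, as a sketch: the word-count budget below is a rough stand-in for the 10,000-token guideline, which is an approximation, not a documented rule.

def chunk_text(text: str, max_words: int = 7500) -> list[str]:
    """Split text on paragraph boundaries into chunks under a rough word budget."""
    chunks, current, count = [], [], 0
    for paragraph in text.split("\n\n"):
        words = len(paragraph.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(paragraph)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Each chunk can then be sent as its own essay_text request.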

Response Time

Typical response times: 1-3 seconds for standard requests, 3-8 seconds for complex analysis.

Caching

Results are deterministic for identical inputs. Cache aggressively for repeated content.
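
Because identical inputs return identical results, a simple content-keyed cache is enough. A minimal sketch, assuming the HypernymClient from the Python Integration section:

import hashlib
import json

_cache: dict[str, dict] = {}

def analyze_cached(client, text: str, params: dict = None) -> dict:
    """Return a cached result when the same text and params were analyzed before."""
    key = hashlib.sha256(
        json.dumps({"text": text, "params": params}, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = client.analyze_text(text, params=params)
    return _cache[key]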