What We Do
Hypernym AI's API offers a structured, efficient way to analyze and categorize text by assigning each paragraph to a precise semantic category, much like sorting content into an intelligent hash table.
Semantic Categorization
Maps each paragraph into specific "buckets" based on central themes for clear, organized structure.
Adaptive Compression
Calculates optimal compression ratios to distill content while retaining critical meaning.
Precision Similarity
Measures paragraph alignment with semantic categories using proximity scores.
Semantic Filtering
Automatically excludes content aligned with specified semantic categories.
Prerequisites
Some examples use `jq` for JSON processing. Install it with your system package manager (Homebrew on macOS, APT on Ubuntu/Debian).
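Typical install commands, assuming Homebrew on macOS and APT on Debian-based systems:

```shell
# macOS (Homebrew)
brew install jq

# Ubuntu/Debian (APT)
sudo apt-get update && sudo apt-get install -y jq
```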
Getting Started
Request API Access
To get started with the Hypernym API, you'll need to request access and receive your API key.
Endpoint Details
POST https://fc-api-development.hypernym.ai/analyze_sync
Analyzes the provided essay text synchronously and returns semantic analysis results.
Request Structure
Headers
| Parameter | Type | Required | Description |
|---|---|---|---|
| Content-Type | string | Required | `application/json` |
| X-API-Key | string | Required | Your API key |
Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| essay_text | string | Required | The text to be analyzed and compressed |
| params | object | Optional | Optional parameters for compression |
| filters | object | Optional | Optional semantic filters to exclude content |
`params` Object

| Parameter | Type | Required | Description |
|---|---|---|---|
| min_compression_ratio | float | Optional | Minimum compression ratio (default: 1.0) |
| min_semantic_similarity | float | Optional | Minimum semantic similarity (default: 0.0) |
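Taken together, a request body using these parameters might look like this (values are illustrative):

```json
{
  "essay_text": "Your text to analyze here...",
  "params": {
    "min_compression_ratio": 0.5,
    "min_semantic_similarity": 0.8
  }
}
```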
`filters` Object

Use `filters` to exclude segments whose semantic category matches a listed entry, for example:

```json
{
  "filters": {
    "purpose": {
      "exclude": [
        { "semantic_category": "political" },
        { "semantic_category": "investment advice" }
      ]
    }
  }
}
```
Response Structure
| Parameter | Type | Required | Description |
|---|---|---|---|
| metadata | object | Optional | API version, timestamp, and token usage information |
| request | object | Optional | Echo of the original request |
| response | object | Optional | Analysis results including compressed text and segments |
Segment Object

| Parameter | Type | Required | Description |
|---|---|---|---|
| was_compressed | boolean | Optional | Whether this segment was compressed |
| semantic_category | string | Optional | The semantic category assigned to this segment |
| covariant_details | array | Optional | Key details extracted from the segment |
| original | object | Optional | Original text and embedding data |
| reconstructed | object | Optional | Reconstructed text if compressed |
| semantic_similarity | float | Optional | Similarity score (0-1) |
| compression_ratio | float | Optional | Compression ratio (0-1) |
| excluded_by_filter | boolean | Optional | Whether excluded by semantic filter |
| exclusion_reason | object | Optional | Details about filter exclusion |
Error Handling
| Status Code | Description |
|---|---|
| 400 | Bad Request - Invalid input or parameters |
| 403 | Forbidden - Invalid API key |
| 413 | Payload Too Large - Text exceeds limit |
| 429 | Too Many Requests - Rate limit exceeded |
| 5xx | Server Error - Retry with exponential backoff |
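For 429 and 5xx responses, retry with exponential backoff. A generic retry wrapper might look like the sketch below; `RetryableError` is a stand-in for however your code detects a retryable status, not part of the API:

```python
import random
import time


class RetryableError(Exception):
    """Stand-in for a 429 or 5xx response (hypothetical helper)."""


def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on retryable errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RetryableError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Delay doubles each attempt; jitter avoids synchronized retries.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```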
Basic Request Example
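A minimal call to the endpoint from the shell; the `HYPERNYM_API_KEY` environment variable name is this example's convention, not mandated by the API:

```shell
# Request body: essay_text is required, params is optional.
BODY='{
  "essay_text": "Your text to analyze here...",
  "params": {"min_compression_ratio": 0.5, "min_semantic_similarity": 0.8}
}'

curl -s https://fc-api-development.hypernym.ai/analyze_sync \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d "$BODY" | jq -r '.response.texts.compressed'
```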
Request with Filters
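The same call with the `filters` object from the request-structure section, excluding political content and investment advice (again assuming the key lives in `HYPERNYM_API_KEY`):

```shell
# Segments matching an excluded semantic category are filtered out.
BODY='{
  "essay_text": "Your text to analyze here...",
  "filters": {
    "purpose": {
      "exclude": [
        { "semantic_category": "political" },
        { "semantic_category": "investment advice" }
      ]
    }
  }
}'

curl -s https://fc-api-development.hypernym.ai/analyze_sync \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d "$BODY" | jq '.metadata.excluded_segments_count'
```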
Full Response Example
```json
{
  "metadata": {
    "version": "1.2.0",
    "timestamp": "2025-01-24T10:30:00Z",
    "tokens": {
      "in": 1000,
      "out": 500,
      "total": 1500
    },
    "filters_applied": true,
    "excluded_segments_count": 1
  },
  "request": {
    "content": "Hi, thank you for calling Radical Toys!...",
    "params": {
      "min_compression_ratio": 0.5,
      "min_semantic_similarity": 0.8
    }
  },
  "response": {
    "meta": {
      "embedding": {
        "dimensions": 768,
        "model": "text-embedding-3-small"
      }
    },
    "texts": {
      "compressed": "Customer service interaction regarding shipping/returns assistance.",
      "suggested": "Customer service interaction regarding shipping/returns assistance."
    },
    "segments": [
      {
        "was_compressed": true,
        "semantic_category": "Customer Service",
        "covariant_details": ["shipping assistance", "returns support"],
        "original": {
          "text": "I would be happy to help with your shipping or returns issue.",
          "embedding": {
            "dimensions": 768,
            "values": "[... 768 values]"
          }
        },
        "semantic_similarity": 0.89,
        "compression_ratio": 0.65,
        "excluded_by_filter": false
      }
    ]
  }
}
```
Python Integration
```python
import requests
from typing import Optional


class HypernymClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://fc-api-development.hypernym.ai"
        self.headers = {
            "Content-Type": "application/json",
            "X-API-Key": api_key,
        }

    def analyze_text(self, text: str, params: Optional[dict] = None,
                     filters: Optional[dict] = None) -> dict:
        """Analyze text using the Hypernym API."""
        payload = {"essay_text": text}
        if params:
            payload["params"] = params
        if filters:
            payload["filters"] = filters

        response = requests.post(
            f"{self.base_url}/analyze_sync",
            headers=self.headers,
            json=payload,
            timeout=30,  # synchronous analysis can take several seconds
        )
        response.raise_for_status()
        return response.json()


# Usage example
client = HypernymClient("your-api-key")
result = client.analyze_text(
    text="Your text to analyze here...",
    params={
        "min_compression_ratio": 0.5,
        "min_semantic_similarity": 0.8,
    },
)

print(f"Compressed: {result['response']['texts']['compressed']}")
# Word counts as a rough proxy for token reduction:
print(f"Token reduction: {len(result['request']['content'].split())}"
      f" -> {len(result['response']['texts']['compressed'].split())}")
```
Authentication
🔐 API Key Security
- Always include your API key in the `X-API-Key` header
- Keep your API key secure and never expose it in client-side code
- Use environment variables to store your API key
- Rotate your API key regularly for security
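Loading the key from the environment keeps it out of source code; the `HYPERNYM_API_KEY` variable name below is just this example's convention:

```python
import os


def load_api_key(var: str = "HYPERNYM_API_KEY") -> str:
    """Read the API key from the environment instead of hardcoding it."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable")
    return key
```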
Rate Limits
Standard Limits
- 100 requests per minute
- 1,000 requests per hour
- 10,000 requests per day
Enterprise Limits
- Custom rate limits
- Dedicated infrastructure
- Priority support
Best Practices
Recommended
- Use HTTPS for all requests
- Implement exponential backoff for retries
- Cache results when appropriate
- Monitor token usage in metadata
- Use semantic filters to exclude unwanted content
Avoid
- Logging full text content in production
- Sending extremely large texts without chunking
- Ignoring error responses
- Hardcoding API keys in source code
Performance Considerations
Text Size
Optimal performance with texts under 10,000 tokens. Larger texts may require chunking.
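One simple chunking approach splits at paragraph boundaries under a size budget; the character limit here is a rough stand-in for the token limit, not an API parameter:

```python
def chunk_paragraphs(text: str, max_chars: int = 8000) -> list[str]:
    """Greedily pack paragraphs into chunks no larger than max_chars."""
    chunks, current, size = [], [], 0
    for para in text.split("\n\n"):
        # Start a new chunk when adding this paragraph would exceed the budget.
        if size + len(para) > max_chars and current:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para) + 2  # +2 for the paragraph separator
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk can then be sent as its own `essay_text` request.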
Response Time
Typical response times: 1-3 seconds for standard requests, 3-8 seconds for complex analysis.
Caching
Results are deterministic for identical inputs. Cache aggressively for repeated content.
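Because results are deterministic, a small content-addressed cache avoids repeat calls; a sketch, keyed on the text plus parameters (the client object is assumed to expose `analyze_text` as in the Python example above):

```python
import hashlib
import json

_cache: dict = {}


def cache_key(text, params=None):
    """Stable key from text + params; sort_keys makes the hash deterministic."""
    blob = json.dumps({"text": text, "params": params}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()


def analyze_cached(client, text, params=None):
    """Only hit the API the first time a (text, params) pair is seen."""
    key = cache_key(text, params)
    if key not in _cache:
        _cache[key] = client.analyze_text(text, params)
    return _cache[key]
```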