Our API offers a structured, efficient way to analyze and categorize text by assigning each paragraph to a precise semantic category, almost like sorting content into an intelligent hash table. Here's how it works:
Semantic Categorization: The API maps each paragraph into a specific "bucket" based on its central theme. This places each segment within a distinct semantic grouping, providing a clear, organized structure for dense content.
Adaptive Compression: By calculating optimal "compression ratios," the API distills each paragraph to its core meaning. This enables users to retain critical content while reducing text volume, perfect for content summarization or recommendation engines.
Precision Similarity Scoring: The API measures each paragraph's alignment with its semantic category, offering a "proximity score" that reveals core content density and any digressions. This turns the document into a structured matrix, great for indexing and clustering tasks.
Key Detail Extraction: Each paragraph is distilled into key covariant points, giving users a quick, theme-aligned summary. This allows downstream NLP models or content systems to tap directly into the text’s essential information without needing extensive processing.
In short, the Hypernym API provides a clear, compressed, and categorically sorted overview of complex text, making it ideal for applications where quick, accurate understanding is crucial.
Last edited January 13th, 2025
The example bash commands below require jq for parsing JSON responses.
All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.
URL: https://fc_api_backend.hypernym.ai/analyze_sync
Method: POST
Description: Analyzes the provided essay text synchronously and returns semantic analysis results.
Headers:
Content-Type: application/json
X-API-Key: your_api_key_here (replace with your actual API key)
Request body fields:
essay_text: The text of the essay to be analyzed.
params.min_compression_ratio: Minimum compression ratio to consider for suggested output. Lower values allow more compression (1.0 = no compression, 0.8 = 20% compression, 0.0 = 100% compression).
params.min_semantic_similarity: Minimum semantic similarity to consider for suggested output.
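For example, a minimal synchronous request with curl might look like the following sketch (the HYPERNYM_API_KEY environment variable is illustrative; the parameter values shown are the documented defaults, and piping through jq simply pretty-prints the JSON response):

curl -s -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d '{
        "essay_text": "The text of the essay to be analyzed.",
        "params": {
          "min_compression_ratio": 0.5,
          "min_semantic_similarity": 0.8
        }
      }' | jq .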
Response is a JSON containing:
  Object containing the following metadata about the request and response:
    API version string
    ISO timestamp of request processing
    Object containing token counts:
      Number of input tokens
      Number of output tokens
      Total tokens processed
  Object echoing back the original request:
    Original input text
    Object containing request parameters:
      Minimum compression ratio parameter
      Minimum semantic similarity parameter
  Object containing analysis results:
    Object containing metadata about the analysis:
      Object with embedding information:
        Embedding model version
        Embedding dimensions
    Object containing processed text versions:
      Most compressed version, regardless of parameters
      Version meeting the parameter thresholds
    Array of segment objects, each containing:
      Whether the segment was compressed
      Main theme identified
      Array of detail objects, each with:
        Extracted detail
        Detail index
      Original segment data:
        Original input text
        Object containing embedding data:
          Number of dimensions
          Embedding size
          Embedding values
      Reconstructed segment data (present if the segment was compressed):
        Reconstructed text
        Object containing embedding data:
          Number of dimensions
          Embedding size
          Embedding values
      Semantic similarity score: indicates the similarity of the paragraph to the identified semantic category and its covariant details.
        1.0 = perfectly similar
        0.0 = not similar whatsoever
      Compression ratio: indicates to what size the paragraph can be compressed while retaining its meaning, relative to the semantic similarity measure.
        0.0 = 100% compression
        0.5 = 50% compression
        0.8 = 20% compression
        1.0 = no compression
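As a quick sketch, the two processed text versions can be pulled out of a saved response with jq; the .response.texts.* paths follow the Response Handling notes later in this document and should be treated as assumptions if your payload differs:

# Assuming the API response has been saved to response.json
jq -r '.response.texts.compressed' response.json   # most compressed version, regardless of parameters
jq -r '.response.texts.suggested' response.json    # version meeting the requested thresholds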
Key Points for Developers
API Base URL: All API endpoints are prefixed with https://fc_api_backend.hypernym.ai.
SSL/TLS: Ensure your client supports HTTPS to securely communicate with the API.
Timeouts: Be mindful of network timeouts and implement retry logic as necessary.
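For example, curl's built-in timeout flags cover the basics; the values below are illustrative, and request.json is the request body shown under Request Format:

curl -s --connect-timeout 10 --max-time 120 \
  -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d @request.json -o response.json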
Request Headers
Include both the Content-Type: application/json and X-API-Key headers in every request
API key must be kept secure and not exposed in client-side code
Request Format
Send JSON object containing:
essay_text: Required string containing full essay/text to analyze
params: Optional object with analysis parameters
min_compression_ratio: Float between 0.0 and 1.0 (default 0.5)
min_semantic_similarity: Float between 0.0 and 1.0 (default 0.8)
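A request body matching this format, written to a file so the other examples can reuse it (the sample text and the heredoc approach are illustrative; the parameter values are the documented defaults):

cat > request.json <<'EOF'
{
  "essay_text": "Full text of the essay to analyze goes here.",
  "params": {
    "min_compression_ratio": 0.5,
    "min_semantic_similarity": 0.8
  }
}
EOF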
Response Handling
On success (200 OK):
Parse metadata for token usage and version info
Access compressed text via response.texts.compressed
Access suggested text via response.texts.suggested
Process individual segments as needed
Validate embeddings match expected dimensions
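A sketch of that flow in bash, capturing the HTTP status code alongside the body (the .metadata and .response.texts.suggested paths follow the field descriptions above and are assumptions about the exact payload layout):

# Write the body to response.json and capture the status code separately
http_code=$(curl -s -o response.json -w '%{http_code}' \
  -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d @request.json)

if [ "$http_code" -eq 200 ]; then
  jq '.metadata' response.json                     # version, timestamp, token usage
  jq -r '.response.texts.suggested' response.json  # text meeting the requested thresholds
else
  echo "Request failed with HTTP $http_code" >&2
fi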
Error Handling
400 Bad Request: Invalid input format or parameters
403 Forbidden: Invalid API key
413 Payload Too Large: Input text exceeds limits
429 Too Many Requests: Rate limit exceeded
5xx errors: Retry with exponential backoff
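A minimal retry sketch with exponential backoff (the attempt count and backoff base are illustrative):

attempt=0
max_attempts=5
until [ "$attempt" -ge "$max_attempts" ]; do
  http_code=$(curl -s -o response.json -w '%{http_code}' \
    -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
    -H "Content-Type: application/json" \
    -H "X-API-Key: $HYPERNYM_API_KEY" \
    -d @request.json)
  case "$http_code" in
    200) break ;;                                                   # success
    000|429|5??) sleep $((2 ** attempt)); attempt=$((attempt + 1)) ;;  # network error, rate limit, or server error: back off and retry
    *) echo "Non-retryable error: HTTP $http_code" >&2; exit 1 ;;
  esac
done
if [ "$http_code" -ne 200 ]; then
  echo "Giving up after $max_attempts attempts" >&2
fi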
Security Best Practices
Store API key securely in environment variables or secrets management
Never commit API keys to version control
Use HTTPS for all API communication
Implement request timeouts and a circuit breaker
Log response metadata for debugging but not full text content
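For example, load the key from the environment at runtime instead of embedding it in code (the HYPERNYM_API_KEY variable name is illustrative):

# Set once in your shell profile, CI secret store, or secrets manager integration
export HYPERNYM_API_KEY="your_api_key_here"

# Scripts then reference the variable and never contain the literal key
curl -s -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $HYPERNYM_API_KEY" \
  -d @request.json -o response.json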
Performance Considerations
Monitor token counts from metadata for usage tracking
Cache analysis results when appropriate
Prefer the suggested text output, which already reflects your min_compression_ratio and min_semantic_similarity parameters
Consider batch processing for multiple texts
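A sketch of batch processing with a simple on-disk cache keyed by each input's hash (the directory layout and parameter values are illustrative):

mkdir -p cache
for f in texts/*.txt; do
  key=$(sha256sum "$f" | cut -d' ' -f1)
  out="cache/$key.json"
  [ -f "$out" ] && continue   # result already cached: skip the API call
  # Build the JSON request body from the raw text file, then POST it
  jq -Rs '{essay_text: ., params: {min_compression_ratio: 0.5, min_semantic_similarity: 0.8}}' "$f" |
    curl -s -X POST "https://fc_api_backend.hypernym.ai/analyze_sync" \
      -H "Content-Type: application/json" \
      -H "X-API-Key: $HYPERNYM_API_KEY" \
      -d @- -o "$out"
done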
We happily welcome feedback on this documentation and API: hi@hypernym.ai
Usage: The API is licensed for analyzing text within your applications.
Restrictions:
Do not redistribute, resell, or publicly expose the API or its data.
No reverse engineering or disassembly of the API or associated software.
Attribution: Must include acknowledgment of Hypernym AI in your application's documentation.
Contact for Licensing:
Email: chris@hypernym.ai
Process: Request access, agree to terms, and receive your API key.