What We Do

Our API offers a structured, efficient way to analyze and categorize text by assigning each paragraph to a precise semantic category, almost like sorting content into an intelligent hash table. Here's how it works:

Semantic Categorization: The API maps each paragraph into a specific "bucket" based on its central theme. This process aligns each segment within a distinct semantic grouping, providing a clear, organized structure for dense content.

Adaptive Compression: By calculating optimal "compression ratios," the API distills each paragraph to its core meaning. This enables users to retain critical content while reducing text volume, perfect for content summarization or recommendation engines.

Precision Similarity Scoring: The API measures each paragraph's alignment with its semantic category, offering a "proximity score" that reveals core content density and any digressions. This turns the document into a structured matrix, great for indexing and clustering tasks.

Key Detail Extraction: Each paragraph is distilled into key covariant details, giving users a quick, theme-aligned summary. This allows downstream NLP models or content systems to tap directly into the text's essential information without extensive processing.

In short, the Hypernym API provides a clear, compressed, and categorically sorted overview of complex text, making it ideal for applications where quick, accurate understanding is crucial.

Last edited January 13th, 2025

Prerequisites

The example bash commands below use jq to format JSON output. Install it with one of the following:

macOS (Homebrew):
brew install jq

Debian/Ubuntu:
sudo apt-get install jq

Endpoint

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

URL: https://fc_api_backend.hypernym.ai/analyze_sync

Method: POST

Description: Analyzes the provided essay text synchronously and returns semantic analysis results.

Headers

Content-Type: application/json

X-API-Key: your_api_key_here (Replace with your actual API key)

Request body parameters
  • essay_text string  required

    The text of the essay to be analyzed

  • params object  optional

    Optional object with analysis parameters:

      • min_compression_ratio float, default: 1.0

        Minimum compression ratio to consider for suggested output.
        Lower values allow more compression (1.0 = no compression, 0.8 = 20% compression, 0.0 = 100% compression).

      • min_semantic_similarity float, default: 0.0

        Minimum semantic similarity to consider for suggested output.
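
For example, a request body that sets both optional parameters might look like this (the parameter values shown are illustrative):

{
  "essay_text": "Your essay text goes here.",
  "params": {
    "min_compression_ratio": 0.5,
    "min_semantic_similarity": 0.8
  }
}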

POST
/analyze_sync
curl -s -X POST https://fc-api.hypernym.ai/analyze_sync \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{"essay_text": "Your essay text goes here."}' | jq
POST
/analyze_sync

import os
import requests
import json
from dotenv import load_dotenv

load_dotenv()

API_URL = "http://fc-api.hypernym.ai/analyze_sync"
API_KEY = os.getenv("HYPERNYM_API_KEY")

if not API_KEY:
    raise ValueError("HYPERNYM_API_KEY not found in environment variables")

print(f"API Key: {API_KEY}")

headers = {
    "Content-Type": "application/json",
    "X-API-Key": API_KEY
}

essay_text = (
    "Computational Complexity Theory is a fundamental field within theoretical computer science that focuses on classifying "
    "computational problems according to their inherent difficulty and determining the resources needed to solve them. This theory "
    "provides a framework for understanding the limits of what can be achieved with algorithms and computation. The importance of this "
    "field stems from its implications for all computational processes, from simple algorithms that run on personal computers to complex "
    "calculations on supercomputers.\n\n"
    "The origins of Computational Complexity Theory date back to the 1960s, when researchers began to formally investigate the efficiency "
    "of algorithms. The primary goal was to categorize problems based on the amount of computational resources, such as time and memory, "
    "required to solve them. This categorization led to the creation of complexity classes, such as P (Polynomial time), NP (Nondeterministic "
    "Polynomial time), and PSPACE (Polynomial Space), among others. Each class represents a set of problems based on the resources needed "
    "for their solution under specific computational models.\n\n"
    "One of the central questions in Computational Complexity Theory is the P vs NP problem, which asks whether every problem whose solution "
    "can be verified quickly (in polynomial time) can also be solved quickly. This question remains one of the most profound unsolved problems "
    "in computer science and has significant implications for mathematics, cryptography, and the philosophy of science. A solution to this "
    "problem would fundamentally alter our understanding of problem-solving capabilities in the computational realm."
)

payload = {
    "essay_text": essay_text
}

def make_api_request():
    try:
        print("Sending request...")
        response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
        return response
    except requests.exceptions.RequestException as e:
        print(f"Error: An unexpected error occurred while making the API request.")
        print(f"Detailed error: {e}")
        return None

def main():
    response = make_api_request()

    if response:
        print(f"\nHTTP Status Code: {response.status_code}")
        print("\nRaw response:")
        print(response.text)

        print("\nAttempting to format as JSON:")
        try:
            formatted_json = json.dumps(response.json(), indent=2)
            print(formatted_json)
        except json.JSONDecodeError:
            print("Response is not valid JSON:")
            print(response.text)
    else:
        print("Failed to get a response from the API.")

if __name__ == "__main__":
    main()
    

Response

The response is a JSON object containing:

Metadata

Object containing the following metadata about the request and response:

  • version string

    API version string

  • timestamp string

    ISO timestamp of request processing

  • tokens object

    Object containing token counts:

      • in integer

        Number of input tokens

      • out integer

        Number of output tokens

      • total integer

        Total tokens processed

Request

Object echoing back the original request:

  • content string

    Original input text

  • params object

    Object containing request parameters:

      • min_compression_ratio float, default: 1.0

        Minimum compression ratio parameter

      • min_semantic_similarity float, default: 0.0

        Minimum semantic similarity parameter

Response

Object containing analysis results:

meta

Object containing metadata about the analysis

  • embedding object

    Object with embedding information:

      • version string

        Embedding model version

      • dimensions integer

        Embedding dimensions

texts

Object containing processed text versions

  • compressed string

    Most compressed version regardless of parameters

  • suggested string

    Version meeting parameter thresholds

segments

Array of segment objects, each containing:

  • was_compressed boolean

    Whether segment was compressed

  • semantic_category string

    Main theme identified

  • covariant_details array

    Array of detail objects, each containing:

      • text string

        Extracted detail

      • n integer

        Detail index

  • original object

    Original segment data:

      • text string

        Original input text

      • embedding object

        Object containing embedding data:

          • dimensions integer

            Number of dimensions

          • size_in_bytes integer

            Embedding size

          • values array

            Embedding values

  • reconstructed object

    Reconstructed segment data (if compressed):

      • text string

        Reconstructed text

      • embedding object

        Object containing embedding data:

          • dimensions integer

            Number of dimensions

          • size_in_bytes integer

            Embedding size

          • values array

            Embedding values

  • semantic_similarity float between 0.0 and 1.0

    Indicates the similarity of the paragraph to the identified semantic category and its covariant details.
    1.0 = perfectly similar
    0.0 = not similar whatsoever

  • compression_ratio float between 0.0 and 1.0

    Indicates the size to which the paragraph can be compressed while retaining its meaning, relative to the semantic similarity measure.
    0.0 = 100% compression
    0.5 = 50% compression
    0.8 = 20% compression
    1.0 = no compression
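
As an illustration, the per-segment fields above support simple filtering. A minimal sketch in Python, assuming data is the decoded JSON response from /analyze_sync (the 0.8 threshold is illustrative):

# Keep segments that compressed well while staying on-topic.
for segment in data["response"]["segments"]:
    if segment["was_compressed"] and segment["semantic_similarity"] >= 0.8:
        print(segment["semantic_category"])
        for detail in segment["covariant_details"]:
            print(f'  {detail["n"]}: {detail["text"]}')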

RESPONSE
{
  "metadata": {
    "version": "0.1.0",
    "timestamp": "2024-03-21T00:00:00Z",
    "tokens": {
      "in": 1000,
      "out": 500,
      "total": 1500
    }
  },
  "request": {
    "content": "Computational Complexity Theory is a fundamental field...",
    "params": {
      "min_compression_ratio": 0.5,
      "min_semantic_similarity": 0.8
    }
  },
  "response": {
    "meta": {
      "embedding": {
        "version": "0.1.0",
        "dimensions": 512
      }
    },
    "texts": {
      "compressed": "Theory::focus=computational;type=complexity;goal=classification\n\nMethodology::approach=algorithmic;purpose=efficiency",
      "suggested": "Computational Complexity Theory is a fundamental field..."
    },
    "segments": [
      {
        "was_compressed": true,
        "semantic_category": "Theory of computational problem difficulty",
        "covariant_details": [
          {
            "text": "Focuses on classifying computational problem difficulty",
            "n": 0
          }
        ],
        "original": {
          "text": "Computational Complexity Theory is a fundamental field...",
          "embedding": {
            "dimensions": 768,
            "size_in_bytes": 2048,
            "values": "[... 768 values]"
          }
        },
        "reconstructed": {
          "text": "Analysis of computational complexity focusing on...",
          "embedding": {
            "dimensions": 768,
            "size_in_bytes": 2048,
            "values": "[... 768 values]"
          }
        },
        "semantic_similarity": 0.81,
        "compression_ratio": 0.61
      }
    ]
  }
}



Additional Info

Key Points for Developers
API Base URL: All API endpoints are prefixed with https://fc_api_backend.hypernym.ai.
SSL/TLS: Ensure your client supports HTTPS to securely communicate with the API.
Timeouts: Be mindful of network timeouts and implement retry logic as necessary.

Request Headers
Ensure both Content-Type: application/json and X-API-Key are included in headers
API key must be kept secure and not exposed in client-side code

Request Format
Send JSON object containing:
essay_text: Required string containing full essay/text to analyze
params: Optional object with analysis parameters
min_compression_ratio: Float between 0.0-1.0 (default 0.5)
min_semantic_similarity: Float between 0.0-1.0 (default 0.8)

Response Handling
On success (200 OK):
Parse metadata for token usage and version info
Access compressed text via response.texts.compressed
Access suggested text via response.texts.suggested
Process individual segments as needed
Validate embeddings match expected dimensions
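
A minimal success-path sketch in Python, assuming resp is the requests.Response returned by the POST example above:

if resp.status_code == 200:
    data = resp.json()
    print("tokens used:", data["metadata"]["tokens"]["total"])
    compressed = data["response"]["texts"]["compressed"]
    suggested = data["response"]["texts"]["suggested"]
    # Validate each segment's embedding against its declared dimensions.
    for segment in data["response"]["segments"]:
        emb = segment["original"]["embedding"]
        if len(emb["values"]) != emb["dimensions"]:
            raise ValueError("embedding dimensions mismatch")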

Error Handling
400 Bad Request: Invalid input format or parameters
403 Forbidden: Invalid API key
413 Payload Too Large: Input text exceeds limits
429 Too Many Requests: Rate limit exceeded
5xx errors: Retry with exponential backoff
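
One possible retry sketch for the transient cases above (429 and 5xx), assuming the requests setup from the earlier Python example:

import time
import requests

def post_with_retries(url, headers, payload, max_attempts=4):
    # Retry transient failures with exponential backoff; client errors
    # such as 400 and 403 are returned immediately.
    resp = None
    for attempt in range(max_attempts):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code == 429 or resp.status_code >= 500:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
            continue
        return resp
    return resp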

Security Best Practices
Store API key securely in environment variables or secrets management
Never commit API keys to version control
Use HTTPS for all API communication
Implement request timeouts and circuit breaker
Log response metadata for debugging but not full text content
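
A small sketch of the logging guidance above (metadata only, never the full text), assuming data is a parsed response:

import logging

logger = logging.getLogger(__name__)

def log_response_metadata(data):
    # Log version, timestamp, and token counts for debugging; deliberately
    # omit the essay text and analysis output.
    meta = data["metadata"]
    logger.info(
        "hypernym response: version=%s timestamp=%s tokens=%s",
        meta["version"], meta["timestamp"], meta["tokens"]["total"],
    )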

Performance Considerations
Monitor token counts from metadata for usage tracking
Cache analysis results when appropriate
Use suggested text output based on quality parameters
Consider batch processing for multiple texts
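
One way to cache results for repeated inputs, as a hypothetical in-memory sketch (analyze_fn stands for whatever function calls the API; swap the dict for your own store):

import hashlib

_cache = {}

def analyze_cached(essay_text, analyze_fn):
    # Key the cache on a hash of the input so repeat analyses of the
    # same essay skip the API call entirely.
    key = hashlib.sha256(essay_text.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = analyze_fn(essay_text)
    return _cache[key]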

We happily welcome feedback on this documentation and API: hi@hypernym.ai

Sample Integration Code
import logging
from datetime import datetime
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field, ConfigDict


import backoff
import httpx


from ..settings import secure_settings


# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class HypernymClientError(Exception):
   """Base exception for client errors"""
   pass


class HypernymAPIError(HypernymClientError):
   """Raised when the API returns an error response"""
   def __init__(self, status_code: int, detail: str):
       self.status_code = status_code
       self.detail = detail
       super().__init__(f"API error {status_code}: {detail}")


class HypernymTimeoutError(HypernymClientError):
   """Raised when the API request times out"""
   pass


# Response Models
class EmbeddingMetadata(BaseModel):
   """Metadata about the embedding model used"""
   version: str
   dimensions: int


class TokenCounts(BaseModel):
   """Token usage statistics"""
   in_: int = Field(..., alias="in")
   out: int
   total: int


class Metadata(BaseModel):
   """Top-level metadata about the request/response"""
   version: str
   timestamp: datetime
   tokens: TokenCounts


class RequestParams(BaseModel):
   """Parameters for the analysis request"""
   min_compression_ratio: float = Field(ge=0.0, le=1.0)
   min_semantic_similarity: float = Field(ge=0.0, le=1.0)


class Request(BaseModel):
   """Echo of the original request"""
   content: str
   params: RequestParams


class ResponseMeta(BaseModel):
   """Metadata specific to the response"""
   embedding: EmbeddingMetadata


class ResponseTexts(BaseModel):
   """Processed text outputs"""
   compressed: str
   suggested: str


class EmbeddingData(BaseModel):
   """Embedding vector data"""
   dimensions: int
   size_in_bytes: int
   values: List[float]


class DetailInfo(BaseModel):
   """Individual covariant detail"""
   text: str
   n: int
   embedding: Optional[EmbeddingData] = None


class SegmentData(BaseModel):
   """Common data structure for original and reconstructed segments"""
   text: str
   embedding: EmbeddingData


class ResponseSegment(BaseModel):
   """Individual segment analysis results"""
   was_compressed: bool
   semantic_category: str
   covariant_details: List[DetailInfo]
   original: SegmentData
   reconstructed: Optional[SegmentData] = None
   semantic_similarity: float = Field(ge=0.0, le=1.0)
   compression_ratio: float = Field(ge=0.0, le=1.0)


class Response(BaseModel):
   """Complete response data"""
   meta: ResponseMeta
   texts: ResponseTexts
   segments: List[ResponseSegment]


class SemanticAnalysisResponse(BaseModel):
   """Top-level response model"""
   metadata: Metadata
   request: Request
   response: Response


   model_config = ConfigDict(
       json_schema_extra={
           "example": {
               "metadata": {
                   "version": "0.1.0",
                   "timestamp": "2024-03-21T00:00:00Z",
                   "tokens": {
                       "in": 1000,
                       "out": 500,
                       "total": 1500
                   }
               },
               "request": {
                   "content": "Computational Complexity Theory is a fundamental field...",
                   "params": {
                       "min_compression_ratio": 0.5,
                       "min_semantic_similarity": 0.8
                   }
               },
               "response": {
                   "meta": {
                       "embedding": {
                           "version": "0.1.0",
                           "dimensions": 512
                       }
                   },
                   "texts": {
                       "compressed": "Theory::focus=computational;type=complexity\n\nMethodology::approach=algorithmic",
                       "suggested": "Computational Complexity Theory is a fundamental field..."
                   },
                   "segments": [
                       {
                           "was_compressed": True,
                           "semantic_category": "Theory of computational problem difficulty",
                           "covariant_details": [
                               {
                                   "text": "Focuses on classifying computational problem difficulty",
                                   "n": 0
                               }
                           ],
                           "original": {
                               "text": "Computational Complexity Theory is a fundamental field...",
                               "embedding": {
                                   "dimensions": 768,
                                   "size_in_bytes": 2048,
                                   "values": "[... 768 values]"
                               }
                           },
                           "reconstructed": {
                               "text": "Analysis of computational complexity focusing on...",
                               "embedding": {
                                   "dimensions": 768,
                                   "size_in_bytes": 2048,
                                   "values": "[... 768 values]"
                               }
                           },
                           "semantic_similarity": 0.81,
                           "compression_ratio": 0.61
                       }
                   ]
               }
           }
       }
   )


class EssayTextPayloadV1(BaseModel):
   """Request payload with parameters"""
   essay_text: str = Field(..., title="Essay Text", description="The text to be analyzed")
   params: RequestParams = Field(
       default_factory=lambda: RequestParams(min_compression_ratio=0.5, min_semantic_similarity=0.8)
   )


class HypernymClient:
   """
   A client for interacting with the Hypernym API with built-in retries and error handling.
   """


   def __init__(
       self,
       base_url: str = "https://fc-api.hypernym.ai",
       timeout: float = 600.0,
       max_retries: int = 3,
   ):
       self.base_url = base_url.rstrip('/')
       self.timeout = timeout
       self.max_retries = max_retries
       self.api_key = secure_settings.hypernym_api_key


       # Default headers
       self.headers = {
           "Accept": "application/json",
           "Content-Type": "application/json",
           "X-API-Key": self.api_key,
       }


   def _get_client_defaults(self) -> Dict[str, Any]:
       """Get default httpx client settings"""
       return {
           "timeout": httpx.Timeout(timeout=self.timeout),
           "headers": self.headers,
           "follow_redirects": True,
       }


   @backoff.on_exception(
       backoff.expo,
       (httpx.NetworkError, httpx.TimeoutException),
       max_tries=3,
       giveup=lambda e: isinstance(e, httpx.HTTPStatusError) and e.response.status_code < 500,
   )
   async def get_hypernym_analysis(
       self,
       text: str,
       min_compression_ratio: float = 0.5,
       min_semantic_similarity: float = 0.8
   ) -> SemanticAnalysisResponse:
       """
       Get semantic analysis for the provided text.


       Args:
           text: The text to analyze
           min_compression_ratio: Minimum compression ratio (0.0-1.0)
           min_semantic_similarity: Minimum semantic similarity score (0.0-1.0)


       Returns:
           SemanticAnalysisResponse containing the analysis results


       Raises:
           HypernymAPIError: If the API returns an error response
           HypernymTimeoutError: If the request times out
           HypernymClientError: For other client-related errors
       """


       payload = EssayTextPayloadV1(
           essay_text=text,
           params=RequestParams(
               min_compression_ratio=min_compression_ratio,
               min_semantic_similarity=min_semantic_similarity
           )
       )


       try:
           async with httpx.AsyncClient(**self._get_client_defaults()) as client:
               logger.debug(f"Sending analysis request for text: {text[:100]}...")


               response = await client.post(
                   f"{self.base_url}/analyze_sync",
                   json=payload.model_dump(),
               )


               try:
                   response.raise_for_status()
               except httpx.HTTPStatusError as e:
                   # Handle API-specific error responses
                   error_detail = "Unknown error"
                   try:
                       error_detail = response.json().get("detail", error_detail)
                   except Exception:
                       pass
                   raise HypernymAPIError(e.response.status_code, error_detail) from e


               return SemanticAnalysisResponse.model_validate_json(response.content)


       except httpx.TimeoutException as e:
           raise HypernymTimeoutError(f"Request timed out after {self.timeout} seconds") from e
       except httpx.NetworkError as e:
           raise HypernymClientError(f"Network error occurred: {str(e)}") from e
       except Exception as e:
           raise HypernymClientError(f"Unexpected error: {str(e)}") from e
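
A minimal usage sketch for the client above, assuming the module is importable and secure_settings.hypernym_api_key is configured in your project:

import asyncio

async def run() -> None:
    client = HypernymClient()
    analysis = await client.get_hypernym_analysis(
        "Computational Complexity Theory is a fundamental field...",
        min_compression_ratio=0.5,
        min_semantic_similarity=0.8,
    )
    # Log usage metadata rather than full text content.
    print("tokens used:", analysis.metadata.tokens.total)
    print(analysis.response.texts.suggested)

asyncio.run(run())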

Licensing and Terms

Usage: The API is licensed for analyzing text within your applications.
Restrictions:
Do not redistribute, resell, or publicly expose the API or its data.
No reverse engineering or disassembly of the API or associated software.
Attribution: Must include acknowledgment of Hypernym AI in your application's documentation.

Contact for Licensing:
Email: chris@hypernym.ai
Process: Request access, agree to terms, and receive your API key.