Changelog April 2, 2026

The ByteTect Nexus Engineering Changelog

System Architect

ByteTect Labs

Drawn directly from the architectural patterns, schemas, and tools in our source code, here is the evolutionary history of the Nexus system.

v1.0.0 - The Nexus Release (Multi-Modal & Orchestration)

The culmination of our LangGraph architecture, bringing real-time visual UI rendering and autonomous creative QA into the loop.

  • Feature: The Orchestrator Node. Replaced the static router with an LLM-driven Orchestrator capable of strict JSON-based routing (system_reroute). It now dynamically delegates tasks across 9 specialized agents.
  • Feature: Zustand Observability Dashboard. Launched useObservabilityStore.ts, mapping WebSocket streaming events (NODE_START, TOKEN, STATUS) directly to a live UI, allowing users to watch the AI's "internal monologue" in real-time.
  • Feature: Visual Critic & Artist Nodes. Integrated Gemini 2.5 Flash and DALL-E 3. The artist_node generates imagery, and the visual_critic_node autonomously scores the image (1-10) against the original prompt, looping up to 5 times to fix hallucinations before presenting the result to the user.
  • Fix: React StrictMode WebSocket reconnect bugs mitigated using graceful teardowns in useAgentSocket.ts.
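The strict JSON routing contract behind system_reroute can be sketched as follows — a minimal, hypothetical parser that assumes the Orchestrator replies with a {"system_reroute": "<agent>"} object. The agent names and the fallback behavior here are illustrative, not the production routing table:

```python
import json

# Hypothetical node names; the real graph wires nine specialized agents.
VALID_ROUTES = {
    "solver", "critic", "analyst", "librarian", "financial_analyst",
    "artist", "visual_critic", "safety", "end",
}

def parse_route(llm_output: str, default: str = "solver") -> str:
    """Parse the Orchestrator's strict-JSON reply, falling back to a
    default route when the JSON is malformed or names an unknown node."""
    try:
        payload = json.loads(llm_output)
        route = payload.get("system_reroute", default)
    except json.JSONDecodeError:
        return default
    return route if route in VALID_ROUTES else default
```

Validating against a closed set of routes keeps a hallucinated agent name from silently derailing the graph.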

v0.9.0 - Deterministic Finance & Ledger Interfacing

Banning the LLM from doing math. Transitioned financial analysis from generative guessing to deterministic SQL execution.

  • Feature: The CFO Agent (financial_analyst_node). Deployed a ReAct agent equipped with the execute_sql_query tool. It queries the PostgreSQL Transaction table directly, ensuring 100% accuracy on margins, total income, and top expenses.
  • Feature: UI Chart Injection. The CFO agent now outputs structured JSON (chartData), which the frontend intercepts to render interactive Pie/Bar charts instead of raw markdown text.
  • Feature: Invoice Digestion Pipeline. Added digest_invoice_task powered by FinancialExtractionService. Users can upload PDFs to Firebase, and the system extracts Line Items, Dates, and Amounts, automatically fuzz-matching them to existing Business Projects.
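The chartData handoff between the CFO agent and the frontend can be sketched like this — a simplified aggregation over (category, amount) rows such as a SQL query over the Transaction table might return. The field names and chart shape are illustrative, not the exact frontend contract:

```python
from collections import defaultdict

def build_chart_data(rows):
    """Aggregate (category, amount) rows into the structured JSON the
    frontend intercepts to render a Pie chart instead of markdown."""
    totals = defaultdict(float)
    for category, amount in rows:
        totals[category] += amount
    return {
        "type": "pie",
        "chartData": [
            {"label": k, "value": round(v, 2)}
            for k, v in sorted(totals.items())
        ],
    }
```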

v0.8.0 - Enterprise Security & RBAC (Role-Based Access Control)

Upgrading the Vector Database to support strict Multi-Tenancy and internal compliance.

  • Feature: Dynamic Elasticsearch Indices. Refactored VectorRepository to replace a single global index with isolated per-tenant indices (corp_know_{company_id}).
  • Feature: Vector-Level RBAC. Added min_role and allowed_roles to Elasticsearch mappings. If an Admin tags a document as "Secret", the database itself filters it out of search results for standard member roles.
  • Feature: The Organization Librarian. Added the librarian_node which intelligently links user queries to specific Client dossiers using entity extraction before querying the vector store.
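A sketch of how a tenant- and role-aware search request could be composed — the role hierarchy, field names, and query shape here are illustrative assumptions, not the exact VectorRepository mapping:

```python
def build_search_query(company_id: str, user_role: str, query_vector):
    """Compose the per-tenant index name and a kNN query whose filter
    enforces role-based access at the vector level."""
    role_rank = {"member": 0, "manager": 1, "admin": 2}  # assumed hierarchy
    index = f"corp_know_{company_id}"
    query = {
        "knn": {
            "field": "embedding",
            "query_vector": query_vector,
            "k": 10,
            "num_candidates": 100,
            # Only return documents the caller's role is allowed to see.
            "filter": {
                "bool": {
                    "should": [
                        {"range": {"min_role": {"lte": role_rank[user_role]}}},
                        {"term": {"allowed_roles": user_role}},
                    ]
                }
            },
        }
    }
    return index, query
```

Because the filter lives inside the kNN clause, restricted documents never even enter the similarity ranking.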

v0.7.0 - Infinite Loop Circuit Breakers & State Management

Addressing the inherent instability of cyclic LangGraph workflows.

  • Feature: Stagnation Detection. Added _is_stagnating logic to router.py. If the Critic gives the Solver the exact same score three times in a row, the system breaks the loop and routes to a Human-in-the-Loop safety_node.
  • Feature: Postgres Checkpointing. Integrated LangGraph's AsyncPostgresSaver (checkpointer.py). Workflow state is now persisted to the database at every step, allowing users to pause a workflow, close their browser, and resume it days later (/chat/resume).
  • Fix: Context bloat. Added a summarize_messages utility. If a thread exceeds 10 messages, the system compresses the middle of the context using gpt-4o-mini to save tokens and maintain focus.

v0.6.0 - The Streaming Engine (PartialThoughtExtractor)

Eliminating "dead-air" loading screens for enterprise users.

  • Feature: Live Thought Extraction. Built PartialThoughtExtractor (streaming.py) to parse incomplete JSON chunks coming from the LLM via astream().
  • Feature: WebSocket Infrastructure. Created ws_manager.py to broadcast tokens to the frontend with minimal latency.
  • Feature: Dual-Channel Telemetry. Agents now stream their internal reasoning to a thought channel (visible in the Activity Feed) and their final outputs to the content channel (the main chat window).
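The core trick of streaming a field out of unfinished JSON can be sketched like this — a simplified extractor assuming the LLM emits an object with a "thought" string field, which is an assumption about the payload shape rather than the streaming.py implementation:

```python
import re

class PartialThoughtExtractor:
    """Incrementally pull the value of a 'thought' field out of an
    in-flight JSON string, so tokens can reach the UI before the LLM
    has closed the surrounding object."""

    # Match the opening of the field and any complete (possibly escaped)
    # characters seen so far, without requiring the closing quote.
    _pattern = re.compile(r'"thought"\s*:\s*"((?:[^"\\]|\\.)*)')

    def __init__(self):
        self._buffer = ""
        self._emitted = 0

    def feed(self, chunk: str) -> str:
        """Append a raw chunk; return only the newly visible thought text."""
        self._buffer += chunk
        match = self._pattern.search(self._buffer)
        if not match:
            return ""
        text = match.group(1)
        new = text[self._emitted:]
        self._emitted = len(text)
        return new
```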

v0.5.0 - The Multi-Agent Foundation (Solver vs. Critic)

Moving beyond standard ChatGPT wrappers into adversarial architecture.

  • Feature: Graph State Initialization. Defined the AgentState blackboard (messages, current_draft, critique_history, artifacts).
  • Feature: Adversarial QA. Introduced the critic_node prompted to be "objective, nitpicky, and constructive," forcing the solver_node to revise its work before presenting it to the user.
  • Feature: The Analyst Node. Added a Business Analyst persona specifically tasked with reviewing outputs for ROI, scalability, and market alignment, separating technical QA from business QA.
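The blackboard described above can be sketched as a TypedDict — field names come from the changelog, while the reducer annotations are an illustrative sketch of how LangGraph-style graphs merge concurrent node updates:

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    """The shared blackboard every graph node reads from and writes to."""
    messages: Annotated[list, operator.add]          # append-only chat history
    current_draft: str                               # the Solver's latest attempt
    critique_history: Annotated[list, operator.add]  # the Critic's past verdicts
    artifacts: dict                                  # generated images, charts, files
```

Annotating a field with a reducer such as operator.add tells the graph to append updates rather than overwrite them, which is what lets the Solver and Critic accumulate history instead of clobbering each other.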

The Bedrock Phase

These versions represent the "Bedrock Phase" of the codebase—where ByteTect prioritized security, data integrity, and vendor-agnostic infrastructure long before introducing the complex LangGraph multi-agent workflows.

v0.4.0 - The Polyglot LLM Abstraction & Telemetry Engine

Vendor lock-in is a massive risk for enterprise AI. This release decoupled the application layer from OpenAI, ensuring ByteTect could hot-swap models based on cost, speed, or downtime without rewriting core logic.

  • Feature: The LlmClient Interface (app/llm/factory.py). Built a unified wrapper that standardizes calls to OpenAI, Google Gemini, and Anthropic. An agent can now switch from gpt-4o to gemini-2.5-flash simply by changing the model_name string. The factory handles the heavy lifting of translating unified prompts into provider-specific SDK payloads.
  • Feature: Structured Output Standardization. Implemented ainvoke_structured(). Because Gemini and OpenAI handle JSON schema enforcement entirely differently, this abstraction normalizes the behavior, ensuring our extraction agents (like the Financial Extractor) always return strict Pydantic objects.
  • Feature: Granular Cost Tracking (app/llm/cost.py). Added an interceptor that catches token usage metadata from every API response. It automatically calculates the exact fractional USD cost (e.g., $15.00 per 1M output tokens for GPT-4o) and logs it to the thread metadata, allowing strict cost control over runaway agent loops.
  • Feature: LangSmith Tracing (tracing.py). Wired up trace_llm_call() to pipe raw inputs and outputs directly to LangSmith, ensuring every LLM hallucination could be forensically audited by the engineering team.
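The per-call cost arithmetic can be sketched like this. The $15.00-per-1M-output-tokens figure for GPT-4o comes from the changelog; the input price and the gpt-4o-mini rates are illustrative assumptions (real rates live in app/llm/cost.py and change as providers update pricing):

```python
# Illustrative per-1M-token prices in USD.
PRICES = {
    "gpt-4o": {"input": 5.00, "output": 15.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Fractional USD cost of one call, computed from the token usage
    metadata the interceptor reads off each API response."""
    p = PRICES[model]
    cost = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    return round(cost, 6)
```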

v0.3.0 - The RAG Foundation (Retrieval-Augmented Generation)

Before agents can reason, they need memory. This release built the ingestion pipeline that translates raw corporate data into vector embeddings.

  • Feature: Elasticsearch Vector Database (vector_store.py). Bootstrapped the initial connection to an external Elastic cluster. Configured the dense vector mappings specifically tailored for text-embedding-3-small (1536 dimensions) using cosine similarity.
  • Feature: Deterministic Chunking (app/core/utils/generic.py). Implemented the RecursiveCharacterTextSplitter. Raw documents are sliced into overlapping chunks (1,000 characters with a 200-character overlap) so the AI never loses context at a paragraph boundary.
  • Feature: Automated Metadata Tagging (tagging_service.py). Created a background pipeline that intercepts uploaded text, feeds it to a cheap model (gpt-4o-mini), and forces it to output a concise English summary, a translation, and three to four slugified taxonomy tags. This ensures messy, unstructured user uploads are perfectly organized before they hit the database.
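The overlap mechanics can be sketched with a simplified stand-in for the splitter configuration above (1,000-character chunks, 200-character overlap) — the real RecursiveCharacterTextSplitter additionally prefers paragraph and sentence boundaries over hard character cuts:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200):
    """Slice text into fixed-size windows where each chunk repeats the
    tail of the previous one, so no fact is lost at a chunk boundary."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```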

v0.2.0 - Zero-Trust Identity & State Synchronization

Enterprise data requires bulletproof authentication. This release bridged the gap between stateless API calls and persistent database identity.

  • Feature: Firebase-to-Postgres Auth Sync (app/auth.py). Built the get_current_user dependency. When a user authenticates via Firebase, the backend intercepts the JWT, verifies it via the Firebase Admin SDK, and seamlessly upserts a local PostgreSQL User record. This guarantees every API action is cryptographically tied to a verified identity.
  • Feature: Silent Token Refresh (src/lib/api.ts). Engineered a custom Axios interceptor on the frontend. If a user's session expires and the backend throws a 401 Unauthorized, the interceptor automatically pauses the request, negotiates a new JWT with Firebase in the background, updates the global Zustand store, and replays the failed request—resulting in zero friction for the end-user.
  • Feature: Centralized Exception Handling (exception_handlers.py). Mapped custom Python exceptions (EntityNotFoundError, PermissionDeniedError) to strict, standardized HTTP JSON responses, preventing stack traces from leaking to the frontend.
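The exception-to-response mapping can be sketched framework-free like this — the exception names come from the changelog, while the response shape and status codes are illustrative (the real handlers are registered on the FastAPI app in exception_handlers.py):

```python
# Domain exceptions mirroring those named in the changelog.
class EntityNotFoundError(Exception): ...
class PermissionDeniedError(Exception): ...

STATUS_MAP = {
    EntityNotFoundError: 404,
    PermissionDeniedError: 403,
}

def to_http_response(exc: Exception):
    """Map a domain exception to a (status, body) pair with a stable
    shape, so no raw stack trace ever reaches the frontend."""
    status = STATUS_MAP.get(type(exc), 500)
    detail = str(exc) if status != 500 else "Internal Server Error"
    return status, {"error": type(exc).__name__, "detail": detail}
```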

v0.1.0 - The Monolith Genesis

The day the first lines of infrastructure were laid down. The absolute core of the Bytetect Nexus system.

  • Feature: Relational Domain Modeling (app/models/domain.py). Designed the core SQLModel schema. Established strict hierarchical foreign-key relationships: Company → Client → BusinessProject → Transaction / Project. This strict modeling prevents data leakage across different clients.
  • Feature: Dual-Engine PostgreSQL (app/infrastructure/database.py). Configured SQLAlchemy/SQLModel to use asyncpg for high-concurrency WebSocket routes and psycopg for synchronous background tasks, ensuring the database connection pool doesn't bottleneck under load.
  • Feature: The API Gateway (main.py). Bootstrapped the FastAPI application, implemented secure CORS middleware reading from .env configurations, and structured the initial router hierarchy (/auth, /chat, /projects).
  • Feature: Sentry Observability. Integrated Sentry SDK at the root level, ensuring any catastrophic failure, memory leak, or unhandled exception inside the Python runtime instantly alerts the DevOps team with full stack telemetry.
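One small but load-bearing detail of the gateway is parsing the CORS allow-list from configuration. A minimal sketch, assuming a comma-separated value in .env (the variable name and parsing rules are illustrative):

```python
def load_cors_origins(raw: str) -> list[str]:
    """Parse a comma-separated CORS allow-list, trimming whitespace and
    dropping empty entries so a stray trailing comma can't be read as a
    wildcard-like empty origin."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```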

Stop Wrestling with Chat Wrappers.

We are onboarding early-adopter partners for the Nexus Multi-Agent System. Automate complex workflows deterministically.

Request an Architecture Briefing