Models
Models Page
The Models page provides a centralized inventory of all AI models discovered across your connected environments. It combines discovery with benchmark integration, giving each model context on security, safety, and business alignment.
At the top of the page, four cards provide an overview of the number of models categorized into the following statuses: Models Unreviewed, Models Approved, Models Unwanted, and Models In Review.
Below the cards, there is a donut chart that visualizes the total Models Usage within the SPLX Platform workspace. A single model can be used multiple times across different scanned environments, and this contributes to the total usage count. The chart highlights the top five most-used models with distinct colors, while all other models are grouped under the "Others" category. The chart is interactive, and hovering over any section displays the exact usage count for the corresponding model.
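For illustration, the top-five grouping behind the chart can be pictured as a small aggregation step. The sketch below is a minimal TypeScript example under stated assumptions; the ModelUsage shape and the summarizeUsage helper are hypothetical and not part of the SPLX Platform API.

```typescript
interface ModelUsage {
  model: string;  // model name, e.g. "Gemini 1.5 Pro" (illustrative field names)
  count: number;  // number of times the model appears across scanned environments
}

// Group usage into the top five models plus an "Others" bucket,
// mirroring how the donut chart summarizes workspace-wide usage.
function summarizeUsage(usages: ModelUsage[]): ModelUsage[] {
  const sorted = [...usages].sort((a, b) => b.count - a.count);
  const topFive = sorted.slice(0, 5);
  const othersCount = sorted.slice(5).reduce((sum, u) => sum + u.count, 0);
  return othersCount > 0
    ? [...topFive, { model: "Others", count: othersCount }]
    : topFive;
}
```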
To the right of the donut chart, there is a summarized table of discovered issues. The complete table can be accessed by clicking "See All," which redirects to the Issues page. The table includes the following columns: Model, Issue, and Severity.

Below the chart, the model inventory table lists each discovered model with details:
Name - model identifier (e.g., Gemini 1.5 Pro, Llama 3.1 405B).
Provider - the model vendor (e.g., Google, OpenAI, Meta).
Kind - whether the model is Proprietary or Open Source.
Benchmark Score - linked from the SPLX Benchmarks, showing the model's overall score (if available).
Environments Used In - icons that, on hover, show the environments in which the model was discovered.
Status - Unreviewed, In Review, Unwanted, or Approved. Any model with a status other than "Approved" automatically generates an issue, which is listed on the Issues page (see the sketch below).
You can search or filter models by name or provider to narrow down results.
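For a structural view of what each inventory row holds, the sketch below models a row and the approval rule described above. The ModelRecord and ModelStatus types, their field names, and the requiresIssue helper are assumptions for illustration only, not the platform's actual schema.

```typescript
// Hypothetical shape of an inventory row; field names are illustrative.
type ModelStatus = "Unreviewed" | "In Review" | "Unwanted" | "Approved";

interface ModelRecord {
  name: string;             // e.g. "Llama 3.1 405B"
  provider: string;         // e.g. "Meta"
  kind: "Proprietary" | "Open Source";
  benchmarkScore?: number;  // from SPLX Benchmarks, if available
  environments: string[];   // environments the model was discovered in
  status: ModelStatus;
}

// Any status other than "Approved" results in an issue being raised.
function requiresIssue(model: ModelRecord): boolean {
  return model.status !== "Approved";
}
```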
Model Details
Clicking on a model in the inventory opens the Model Details view, which provides in-depth information about that model.

Model Card
The Model Card on the left shows metadata for the model, including:
Name - the model identifier (e.g., Gemini 1.5 Pro).
License - the licensing terms under which the model is distributed.
Context Size - maximum context window supported by the model.
Reasoning - indicates whether the model supports reasoning capabilities.
Multimodal - indicates whether the model supports multimodal input/output.
Number of Parameters - reported parameter count (if available).
This card helps teams quickly understand the model’s technical characteristics and licensing profile.
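As a rough structural summary of the metadata above, the Model Card could be represented as follows. The ModelCard type and its field names are illustrative assumptions, not the platform's actual schema.

```typescript
// Illustrative metadata shape for the Model Card; names are assumptions.
interface ModelCard {
  name: string;            // e.g. "Gemini 1.5 Pro"
  license: string;         // licensing terms under which the model is distributed
  contextSize: number;     // maximum context window supported by the model
  reasoning: boolean;      // whether the model supports reasoning capabilities
  multimodal: boolean;     // whether the model supports multimodal input/output
  parameterCount?: number; // reported parameter count, if available
}
```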
Benchmark Scores
On the right, Benchmark Scores show how the model performs across SPLX's benchmarks. The "View Full Benchmark" link provides direct access to the SPLX Benchmark results for deeper analysis.
Environments Used In
Below, a table lists all environments where the model has been discovered, with details for each occurrence:
Environment Name - the connected environment (e.g., SPLX GitHub V2, SPLX Dev GitLab).
Environment Type - currently GitHub or GitLab.
Detection Time - when the model was identified in that environment.
Asset Location - the specific repository and file path where the model appears.
This provides full traceability, allowing teams to see not just that a model exists in the enterprise, but where it is located and how it is being used.
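To make the traceability idea concrete, the sketch below models one discovered occurrence and groups occurrences by environment. The EnvironmentOccurrence type and the groupByEnvironment helper are hypothetical and only illustrate the data the table presents.

```typescript
// Illustrative record for one occurrence of a model in an environment.
interface EnvironmentOccurrence {
  environmentName: string;              // e.g. "SPLX Dev GitLab"
  environmentType: "GitHub" | "GitLab"; // currently supported environment types
  detectionTime: string;                // when the model was identified (ISO timestamp)
  assetLocation: string;                // repository and file path where the model appears
}

// Group occurrences by environment to trace where a model is used.
function groupByEnvironment(
  occurrences: EnvironmentOccurrence[]
): Map<string, EnvironmentOccurrence[]> {
  const grouped = new Map<string, EnvironmentOccurrence[]>();
  for (const occ of occurrences) {
    const existing = grouped.get(occ.environmentName) ?? [];
    existing.push(occ);
    grouped.set(occ.environmentName, existing);
  }
  return grouped;
}
```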