MicroStrategy ONE
Auto Telemetry
Starting in MicroStrategy ONE (June 2024), the schema objects were updated to include feedback objects.
See the following schema objects that were updated in MicroStrategy ONE Update 12 for the semantic graph of the Platform Analytics project:
Besides the schema objects listed below, you can use existing Account and Metadata attributes for custom reporting.
If you are using a version of MicroStrategy ONE prior to the most recent release, the following values may vary.
| Object | Type | Description |
| --- | --- | --- |
| lu_ai_assistant_category | table | This table stores all dedicated features of Auto (for example, Auto Dashboard). |
| fact_ai_chatbot_interactions | table | This table captures Auto questions submitted by users at the transactional level. |
| agg_fact_ai_chatbot_interactions | table | This table contains Auto questions at the daily level, which offers better query performance. |
| Attributes/A13. Auto | folder | This folder contains all attributes related to Auto. |
| Auto Question | attribute | A call made to the Auto AI service to access its capabilities, which typically involves sending input data to the service and receiving a corresponding output or result. |
| Feature | attribute | A dedicated feature of Auto that is designed to augment the MicroStrategy workflow. |
| Metrics/M10. Auto | folder | This folder contains all metrics related to Auto. |
| Facts/Auto | folder | This folder contains all facts related to Auto. |
| Auto Question Timestamp | fact | A timestamp that indicates when an Auto Question was submitted to the Auto AI service. |
| Auto Question Processing Time (ms) | fact | The time elapsed between when an Auto Question was submitted to the Auto AI service and when a response was received. |
| Number of Auto Questions | fact | The number of Auto Questions submitted to the Auto AI service. |
| Auto Active Users | metric | The count of users who used Auto. |
| Auto Question Processing Time (sec) | metric | The time elapsed, in seconds, between when an Auto Question was submitted to the Auto AI service and when a response was received. |
| Auto Questions Sent | metric | The count of Auto Questions submitted to the Auto AI service. |
| Auto Question Timestamp (UTC) | metric | A timestamp that indicates when an Auto Question was submitted to the Auto AI service, in the UTC time zone. |
| Auto User Adoption Rate | metric | The percentage of all active users who used Auto. |
| Average Processing Time (sec) per Question | metric | The average time elapsed between when an Auto Question was submitted to the Auto AI service and when a response was received. |
| Average Questions per Day | metric | The average daily number of Auto Questions submitted by users. |
| Active Users | metric | The count of users who executed an object. |
| Auto Enabled Users | metric | The count of metadata users who have the "Auto Assistant and ML Visualizations" privilege enabled, per the most recent license compliance check. |
| Overall Active Auto Users | metric (hidden) | The count of all users who used Auto, regardless of account filter. |
| Days of Using Auto | metric (hidden) | The number of days the Auto AI service has been used. |
| Auto Adoption | dashboard | A dashboard that shows Auto usage telemetry and Question submission history. |
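As an illustration of how the metrics above relate to the underlying facts, the following sketch derives a few of them from transactional rows like those in fact_ai_chatbot_interactions. The column names and sample values are hypothetical, not the actual table schema:

```python
# Hypothetical sketch: deriving Auto metrics from transactional rows.
# Column names and sample data are illustrative, not the actual schema.

sample_interactions = [
    {"user_id": "u1", "processing_time_ms": 1200},
    {"user_id": "u1", "processing_time_ms": 800},
    {"user_id": "u2", "processing_time_ms": 2000},
]
all_active_users = {"u1", "u2", "u3", "u4"}  # users who executed any object

# Auto Questions Sent: one row per question submitted to the Auto AI service.
auto_questions_sent = len(sample_interactions)

# Auto Active Users: distinct users who submitted at least one question.
auto_active_users = len({row["user_id"] for row in sample_interactions})

# Auto User Adoption Rate: Auto users as a percentage of all active users.
adoption_rate = 100.0 * auto_active_users / len(all_active_users)

# Average Processing Time (sec) per Question: the ms fact converted to seconds.
avg_processing_sec = (
    sum(row["processing_time_ms"] for row in sample_interactions)
    / auto_questions_sent
    / 1000.0
)

print(auto_questions_sent, auto_active_users, adoption_rate, avg_processing_sec)
```

In a real deployment these values come from the Platform Analytics project itself; the sketch only shows the arithmetic behind each metric definition.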
See the following feedback objects that were added in MicroStrategy ONE (June 2024):
| Object | Type | Description |
| --- | --- | --- |
| fact_ai_questions_view | table | This view left joins fact_ai_chatbot_interactions, fact_ai_interpretation_details, and fact_ai_feedback_details to gather all relevant information about questions asked to Auto. |
| fact_ai_definitions_view | table | This view left joins fact_ai_chatbot_interactions with fact_ai_nugget_details to show which definitions were used to answer a question asked to Auto. |
| lu_ai_feedback_type | table | This lookup table maps feedback_type_ids to descriptions of the types of feedback (for example, Thumbs Down) given on Auto questions by users. |
| lu_ai_feedback_category | table | This lookup table maps feedback_category_ids to descriptions of the categories of feedback (for example, Incorrect Data or Irrelevant Answer) given on questions asked to Auto. |
| Feedback | attribute | This attribute indicates what feedback a user gave on a question asked to Auto. |
| Thumb Down Reason | attribute | This attribute indicates why a user gave a thumbs-down to a question asked to Auto. |
| Number of Auto Questions using Definitions | fact | The count of questions that used definitions from Knowledge Assets to calculate the answer. |
| Number of Auto Questions Interpreted | fact | The count of questions for which users requested an interpretation. |
| Number of Auto Questions with Feedback | fact | The count of questions that received feedback from users. |
| Feedback Details | fact | The text feedback given by users about a question asked to Auto. |
| Questions (Definition Referred) | metric | The count of questions that used definitions from Knowledge Assets to calculate the answer. |
| Questions (Interpreted) | metric | The count of questions for which users requested an interpretation. |
| Questions (Thumb Down) | metric | The count of questions that received feedback from users. |
| Definition Referred Rate | metric | The proportion of questions that used definitions. |
| Interpretation Request Rate | metric | The proportion of questions for which users requested an interpretation. |
| Thumb Down Rate | metric | The proportion of questions for which users submitted feedback. |
| Effectiveness Rate | metric | The ratio of negative feedback to positive or neutral feedback. |
| User Feedback | metric | The text feedback given by users about an answer to an Auto question. |
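To make the rate metrics above concrete, the following sketch computes them from question-level rows such as those gathered by fact_ai_questions_view. The field names and sample values are hypothetical, not the actual view schema:

```python
# Hypothetical sketch: computing the feedback-rate metrics from
# question-level rows. Field names and values are illustrative only.

questions = [
    {"used_definitions": True,  "interpreted": True,  "thumb_down": False},
    {"used_definitions": False, "interpreted": False, "thumb_down": True},
    {"used_definitions": True,  "interpreted": False, "thumb_down": False},
    {"used_definitions": False, "interpreted": True,  "thumb_down": True},
]

total = len(questions)

# Each rate is the proportion of questions with the corresponding flag set.
definition_referred_rate = sum(q["used_definitions"] for q in questions) / total
interpretation_request_rate = sum(q["interpreted"] for q in questions) / total
thumb_down_rate = sum(q["thumb_down"] for q in questions) / total

print(definition_referred_rate, interpretation_request_rate, thumb_down_rate)
```

Each rate simply divides the corresponding fact (questions using definitions, questions interpreted, questions with feedback) by the total number of questions asked.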