MicroStrategy ONE

Tests That Can Be Performed in Integrity Manager

A single-project integrity test confirms that reports/documents from a single project execute to completion without execution errors. Reports/documents in single-project integrity tests are checked only for execution; they are not compared to other reports or to a baseline.

Some example situations that might warrant single-project integrity testing include:

  • Performing warehouse maintenance, such as deleting presumably unused tables
  • Benchmarking the performance of the Intelligence Server with a performance test
  • Altering the project schema
  • Making changes to MicroStrategy users' ACLs or privileges
  • Modifying the project-wide VLDB settings
  • Modifying the data warehouse ODBC information (executing against a different warehouse)
  • Migrating the project's Intelligence Server to a different operating system
  • Upgrading to a new version of MicroStrategy

Comparative Integrity Tests

In addition to the single-project integrity test, Integrity Manager supports the following comparative integrity tests:

  • Project-versus-project integrity tests execute reports/documents from two different projects and compare them to show any differences.

    Some example situations that might warrant project-versus-project integrity testing include:

    • Moving a project from one environment to another, for example, out of development and into production
    • Modifying the data warehouse ODBC information (executing against a different warehouse)
    • Migrating the Intelligence Server to a different operating system
    • Upgrading to a new version of MicroStrategy
  • Baseline-versus-project integrity tests execute reports/documents from a project and compare them against a previously established baseline set of reports/documents.

    Some example situations that might warrant baseline-versus-project integrity testing include:

    • Only having one test server to work with. You can generate a baseline (by executing a single-project integrity test), apply your changes, and then execute a baseline-versus-project integrity test to see if any reports have changed.
    • Modifying the results for a single report (by replacing that report's files in the baseline) instead of re-executing the entire baseline.
  • Baseline-versus-baseline integrity tests compare reports/documents from two previously established baselines.

    Some example situations that might warrant baseline-versus-baseline integrity testing include:

    • Comparing baselines from existing tests that were not originally executed against each other
    • Comparing two single-project performance tests to get an accurate picture of the difference in their performance

In each of these comparative tests, Integrity Manager executes the specified reports/documents in both the baseline and the target. For tested reports, you can compare the report data, generated SQL code, graphs, and Excel and PDF exports. For tested documents, you can compare the Excel and PDF exports. Integrity Manager informs you which reports/documents are different between the two projects and highlights in red the differences between them.
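The cell-by-cell comparison at the heart of a comparative test can be pictured with a minimal sketch. The report grids and the `compare_grids` helper below are hypothetical illustrations, not MicroStrategy's actual implementation:

```python
# Illustrative sketch of a cell-by-cell report-data comparison, similar in
# spirit to how a comparative integrity test flags differing cells between
# a baseline and a target. The grids below are invented example data.

def compare_grids(baseline, target):
    """Return a list of (row, col, baseline_value, target_value) mismatches."""
    diffs = []
    for r, (base_row, tgt_row) in enumerate(zip(baseline, target)):
        for c, (b, t) in enumerate(zip(base_row, tgt_row)):
            if b != t:
                diffs.append((r, c, b, t))
    return diffs

baseline = [["Region", "Revenue"], ["East", 1200], ["West", 950]]
target   = [["Region", "Revenue"], ["East", 1200], ["West", 975]]

# Report each mismatch, analogous to the differences highlighted in red.
for row, col, b, t in compare_grids(baseline, target):
    print(f"Row {row}, column {col}: baseline={b!r}, target={t!r}")
```

In the example, only the West revenue cell differs, so only that cell is reported; identical cells are passed over silently.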

If you test any Intelligent Cube reports, make sure that the Intelligent Cube has been published before you perform the integrity test. Integrity Manager can test the SQL of Intelligent Cubes even if they have not been published, but cannot test Intelligent Cube reports based on an unpublished Intelligent Cube.

In a performance test, Integrity Manager records the time it takes to execute each report/document during the integrity test. You can execute the reports/documents in the integrity test multiple times to get a better idea of the time it takes to execute each report. Integrity Manager records the minimum, maximum, and average execution time for each report/document. If you are comparing two projects, Integrity Manager also records the difference in minimum, maximum, and average execution time for each report/document.
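The statistics described above amount to taking the minimum, maximum, and mean over the repeated executions of each report, then differencing those values between the two projects. A small sketch, using invented timing values rather than real Integrity Manager output:

```python
# Sketch of the performance statistics a comparative performance test reports:
# min/max/average execution time per report, plus the per-statistic deltas
# between the two projects. All timing values here are made up.

def timing_stats(times):
    """Summarize repeated execution times (in seconds) for one report."""
    return {"min": min(times), "max": max(times), "avg": sum(times) / len(times)}

baseline_runs = [2.1, 2.4, 2.0]  # seconds per execution, baseline project
target_runs   = [1.8, 2.2, 1.7]  # seconds per execution, target project

base  = timing_stats(baseline_runs)
tgt   = timing_stats(target_runs)
delta = {k: round(tgt[k] - base[k], 3) for k in base}  # negative = target faster

print("baseline:", base)
print("target:  ", tgt)
print("delta:   ", delta)
```

Running each report several times, as the paragraph suggests, narrows the gap between the minimum and maximum and makes the average a more reliable measure.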

The Integrity Manager Wizard walks you through the process of creating tests. You specify what kind of test to run, what reports/documents to test, and the execution and output settings. You can then execute the test immediately or save the settings for later reuse.

Related Topics

Testing Intelligence Server Performance

Executing a Test Against a Remote Intelligence Server

Saving a Test

Loading and Executing a Saved Test