Specify Test Properties in the Test Manager
The Test Manager has property settings that specify how test cases, test suites, and test files run. To open the Test Manager, use sltest.testmanager.view. For information about the Test Manager, see Test Manager.
Test Case, Test Suite, and Test File Sections Summary
When you open a test case, test suite, or test file in the Test Manager, the test settings are grouped into sections. Test cases, test suites, and test files have different sections and settings. Click a test case, test suite, or test file in the Test Browser pane to see its settings.
(Table: summarizes which test sections apply to test cases, test suites, and test files.)
If you do not want to see all of the available test sections, you can use the Test Manager preferences to hide sections:
In the Test Manager toolstrip, click Preferences.
Select the Test File, Test Suite, or Test Case tab.
Select the sections to show, or clear the sections to hide. To show only the sections in which you have already set or changed settings, clear all selections in the Preferences dialog box.
Click OK.
Sections that you already modified appear in the Test Manager, regardless of the preference setting.
To set these properties programmatically, see sltest.testmanager.getpref and sltest.testmanager.setpref.
Create Test Case from External File
To use an existing Excel® file that is in the supported Simulink® Test™ format to create a test case, select Create test case from external file. Then, enter the path to the file. The corresponding API properties are IsTestDataReferenced and TestDataPath, which are described in the setProperty method of sltest.testmanager.TestCase.
To learn how to format the Excel file, see Microsoft Excel Import, Export, and Logging Format and Create External Data Files to Use in Test Cases.
To use an existing Excel file that is not in the supported format, write an adapter function so you can use that file in the Test Manager. Then, register the adapter using the sltest.testmanager.registerTestAdapter function. If you have registered an adapter, when you select Create test case from external file, the Test Manager displays two fields, one for the path to the Excel file and one for the adapter function name. See sltest.testmanager.registerTestAdapter for information and an example.
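For example, this minimal sketch creates a test case programmatically and references an external data file through the setProperty properties named above. The test file, test case, and Excel file names are hypothetical.

tf = sltest.testmanager.TestFile('myExternalDataTests.mldatx');   % new test file
ts = getTestSuites(tf);                                           % default test suite
tc = createTestCase(ts(1),'simulation','External data test');     % new test case
% Reference the external Excel data (assumed to be in the supported format)
setProperty(tc,'IsTestDataReferenced',true, ...
    'TestDataPath',fullfile(pwd,'myTestData.xlsx'));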
Tags
Tag your test file, test suite, or test case with categorizations, such as safety, logged-data, or burn-in. Filter tests using these tags when executing tests or viewing results. See Filter Test Execution, Results, and Coverage.
For the corresponding API, see the Tags property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.
Description
Add descriptive text to your test case, test suite, or test file.
For the corresponding API, see the Description property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.
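For example, assuming a test case object tc already exists, a sketch like this sets both properties directly. The description and tag values are arbitrary, and the comma-separated tag format is an assumption.

tc.Description = 'Verifies the controller against recorded flight data.';
tc.Tags = 'safety, burn-in';   % assumed comma-separated tag format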
Requirements
If you have Requirements Toolbox™ installed, you can establish traceability by linking your test file, test suite, or test case to requirements. For more information, see Link Test Cases to Requirements (Requirements Toolbox).
To link a test case, test suite, or test file to a requirement:
Open the Requirements Editor. In the Simulink Toolstrip, on the Apps tab, under Model Verification, Validation, and Test, click Requirements Editor.
Highlight a requirement.
In the Test Manager, in the Requirements section, click the arrow next to the Add button and select Link to Selected Requirement.
The requirement link appears in the Requirements list.
For the corresponding API, see the Requirements property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.
RoadRunner
This section appears only for a RoadRunner automated driving test case.
Console Mode — Whether the console mode is on or off. If the checkbox is selected, console mode is on and the RoadRunner user interface does not display when the simulation runs. Use console mode to run a RoadRunner test in a non-graphical environment or if you do not need to view the simulation as it runs.
Force Pacing Off — Whether pacing is on or off. Clear this checkbox to turn pacing on, which lets you change the speed at which the RoadRunner scenario simulation runs. If the checkbox is selected, pacing is off and you cannot control the speed of the RoadRunner simulation.
RoadRunner Project Folder — Full path to the RoadRunner project folder.
RoadRunner Installation Folder — Full path to the folder where the RoadRunner application executable is installed.
RoadRunner Scenario — Full path to the preconfigured RoadRunner scenario file. This file is usually stored in the Scenarios folder in the RoadRunner project folder.
System Under Test
Specify the model you want to test in the System Under Test section. To use an open model in the currently active Simulink window, click the Use current model button.
Note
The model must be available on the path to run the test case. You can add the folder that contains the model to the path using the preload callback. See Callbacks.
Specifying a new model in the System Under Test section can cause the model information to be out of date. To update the model test harnesses, Signal Editor scenarios, and available configuration sets, click the Refresh button.
For a RoadRunner test case, in the System Under Test section, specify the model that you set as the behavior of an actor in the RoadRunner scenario being tested.
For the corresponding API, see the Model name-argument pair of setProperty.
Test Harness
If you have a test harness in your system under test, then you can select the test harness to use for the test case. If you have added or removed test harnesses in the model, click the Refresh button to view the updated test harness list.
For more information about using test harnesses, see Refine, Test, and Debug a Subsystem.
For the corresponding API, see the HarnessName name-argument pair of setProperty.
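For example, a sketch that sets the system under test and a harness on that model programmatically, assuming a test case object tc. The model and harness names are hypothetical.

setProperty(tc,'Model','myControllerModel');        % system under test
setProperty(tc,'HarnessName','controller_harness'); % harness that belongs to the system under test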
Simulation Settings and Release Overrides
To override the simulation mode of the model, select a mode from the list. If the model contains SIL/PIL blocks and you need to run in normal mode, enable Override model blocks in SIL/PIL mode to normal mode. To change this setting programmatically, see the OverrideSILPILMode name-argument pair of setProperty. You can also override the simulation mode at test execution. For more information, see Override the Simulation Mode During Test Execution.
You can simulate the model and run tests in more than one MATLAB® release that is installed on your system. Use Select releases for simulation to select available releases. You can use releases from R2011b forward.
To add one or more releases so they are available in the Test Manager, click Add releases in Select releases for simulation to open the Release pane in the Test Manager Preferences dialog box. Navigate to the location of the MATLAB installation you want to add, and click OK.
You can add releases to the list and delete them. You cannot delete the release in which you started the MATLAB session.
For more information, see Run Tests in Multiple Releases of MATLAB. For the corresponding API, see the Release name-argument pair of setProperty.
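For example, a sketch that forces normal-mode simulation of SIL/PIL blocks and selects a previously added release, assuming a test case object tc. The logical value for OverrideSILPILMode and the release name are assumptions; the release must match an entry you added in the Test Manager preferences.

setProperty(tc,'OverrideSILPILMode',true);  % assumed logical value; run SIL/PIL blocks in normal mode
setProperty(tc,'Release','R2023b');         % must match a release added in the preferences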
System Under Test Considerations
The System Under Test cannot be in fast restart or external mode.
To stop a test running in Rapid Accelerator mode, press Ctrl+C at the MATLAB command prompt.
When running parallel execution in rapid accelerator mode, streamed signals do not show up in the Test Manager.
The System Under Test cannot be a protected model.
Simulation 1 and Simulation 2
These sections appear in equivalence test cases. Use them to specify the details of the simulations that you want to compare. Enter the system under test, the test harness if applicable, and simulation setting overrides under Simulation 1. You can then click Copy settings from Simulation 1 under Simulation 2 to use those settings as a starting point for your second set of simulation settings.
For the test to pass, Simulation 1 and Simulation 2 must log the same signals.
Use these sections with the Equivalence Criteria section to define the premise of your test case. For an example of an equivalence test, see Test Two Simulations for Equivalence.
For the corresponding API, see the SimulationIndex name-argument pair of setProperty.
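For example, a sketch that configures the two simulations of an equivalence test case programmatically, assuming a test case object tc. The model names are hypothetical.

setProperty(tc,'Model','controller_variantA','SimulationIndex',1);  % Simulation 1
setProperty(tc,'Model','controller_variantB','SimulationIndex',2);  % Simulation 2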
Parameter Overrides
Specify parameter values in the test case to override the parameter values in the model workspace, data dictionary, base workspace, or in a model reference hierarchy. Parameters are grouped into sets. Turn parameter sets and individual parameter overrides on or off by using the check box next to the set or parameter. To copy an individual parameter or parameter set and paste it into another parameter set, select the parameter and right-click to use Copy and Paste from the context menu.
For an example that uses parameter overrides, see Override Model Parameters in a Test Case. For the corresponding APIs, see the sltest.testmanager.ParameterOverride class and the OverrideStartTime, OverrideStopTime, OverrideInitialState, OverrideModelOutputSettings, and ConfigSetOverrideSetting name-argument pairs of the setProperty method.
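For example, a minimal sketch that adds a parameter set and one override to a test case object tc. The parameter name and value are hypothetical.

ps = addParameterSet(tc,'Name','Nominal gains');  % new parameter set in the test case
po = addParameterOverride(ps,'Kp',2.5);           % override the variable Kp with the value 2.5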
Add Parameter Override
Click Add.
A dialog box opens with a list of parameters. If needed, click Refresh in the dialog box to update the parameter list.
Select the parameter you want to override.
To add the parameter to the parameter set, click OK.
Enter the override value in the Override Value column for the parameter.
Add Parameter Set
To add an empty parameter set, click Add > Add Parameter Set. The new parameter set appears in the Parameter Overrides table.
Add Parameter Overrides From a File
To add a set of parameter overrides from a MAT-file, including one generated by Simulink Design Verifier™, from a MATLAB script (.m file), or from an Excel file, click Add > Add File. To change an override value that is imported from a file, you must edit and save the file outside of MATLAB and add the overrides file again.
External files must be in a format that the Test Manager can read. See Create External Data Files to Use in Test Cases. The allowable formats are:
Excel files — For information on how to format Excel files, see also Microsoft Excel Import, Export, and Logging Format
MATLAB scripts (.m files) — In MATLAB scripts, assign variables and values using MATLAB syntax.
Restore Default Parameter Value
To delete an override and restore the default parameter value, right-click on the parameter and click Delete in the context menu.
Parameter Overrides Considerations
The Test Manager displays only top-level system parameters from the system under test.
Parameter overrides are evaluated only at test run time.
You can export parameters to MATLAB script files or MAT-files only.
Callbacks
Test-File Level Callbacks
Each test file has two callback scripts that execute at different times during a test:
Setup runs before the test file executes.
Cleanup runs after the test file executes.
For the corresponding test case APIs, see the PreloadCallback, PostloadCallback, CleanupCallback, and PreStartRealTimeApplicationCallback name-argument pairs of the TestCase setProperty method.
For the corresponding test file APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestFile setProperty method.
Test-Suite Level Callbacks
Each test suite has two callback scripts that execute at different times during a test:
Setup runs before the test suite executes.
Cleanup runs after the test suite executes.
If a test suite does not have any test cases, the test suite callbacks do not execute.
For the corresponding APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestSuite setProperty method.
Test-Case Level Callbacks
Each test case has three callback scripts that execute at different times during a test:
Pre-load runs before the model loads and before the model callbacks.
Post-load runs after the model loads and after the PostLoadFcn model callback.
Cleanup runs after simulations and model callbacks.
See Test Execution Order for information about the order in which callbacks occur and models load and simulate.
To run a single callback script, click the Run button above the corresponding script.
You can use predefined variables in the test case callbacks:
sltest_bdroot, available in Post-Load: The model simulated by the test case. The model can be a harness model.
sltest_sut, available in Post-Load: The system under test. For a harness, it is the component under test.
sltest_isharness, available in Post-Load: Returns true if sltest_bdroot is a harness model.
sltest_simout, available in Cleanup: Simulation output produced by simulation.
sltest_iterationName, available in Pre-Load, Post-Load, and Cleanup: Name of the currently executing test iteration.
sltest_testIteration, available in Pre-Load, Post-Load, and Cleanup: Current test iteration object.
sltest_testCase, available in Pre-Load, Post-Load, and Cleanup: Current test case object.
sltest_roadRunnerApp, available in Pre-Load, Post-Load, and Cleanup: RoadRunner application object. You can use this variable only for RoadRunner test cases or iterations.
sltest_roadRunnerSim, available in Pre-Load, Post-Load, and Cleanup: RoadRunner simulation object. You can use this variable only for RoadRunner test cases or iterations.
disp and fprintf do not work in callbacks. To verify that the callbacks are executed, use a MATLAB script that includes breakpoints in the callbacks.
The test case callback scripts are not stored with the model and do not override Simulink model callbacks. Consider the following when using callbacks:
Do not modify sltest_testIteration or sltest_testCase. Use these variables to query aspects of the current iteration or test case, respectively.
To stop execution of an infinite loop from a callback script, press Ctrl+C at the MATLAB command prompt.
sltest.testmanager functions are not supported.
For the corresponding APIs, see the PreloadCallback, PostloadCallback, CleanupCallback, and PreStartRealTimeApplicationCallback name-argument pairs of the TestCase setProperty method.
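For example, a sketch that uses the Pre-load callback to put a model folder on the path and the cleanup callback to remove it, assuming a test case object tc. The folder name is hypothetical.

setProperty(tc,'PreloadCallback','addpath(fullfile(pwd,''models''));');
setProperty(tc,'CleanupCallback','rmpath(fullfile(pwd,''models''));');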
Assessment Callback
You can enter a callback to define variables and conditions used only in logical and temporal assessments by using the Assessment Callback section. See Assessment Callback in the Logical and Temporal Assessments section for more information.
For the corresponding API, see setAssessmentsCallback.
Inputs
A test case can use input data from:
A Signal Editor block in the system under test. Select Signal Editor scenario and select the scenario. The system under test can have only one Signal Editor block at the top level.
An external data file. In the External Inputs table, click Add. Select a MAT-file or Microsoft® Excel file.
For more information on using external files as inputs, see Use External File Data in Test Cases. For information about the file format for using Microsoft Excel files in the Test Manager, see Create External Data Files to Use in Test Cases.
Scenarios in a Test Sequence block. First, click the refresh arrow next to the Test Sequence Block field, then select the Test Sequence block in the model that contains the scenarios. If you do not also select a scenario from Override with Scenario and do not use iterations, then the test runs the active scenario in the selected Test Sequence block. If you do not also select a scenario, but do use iterations, then the active scenario in the Test Sequence block is the default for all the iterations.
Use Override with Scenario to override the active scenario in the selected Test Sequence block. Click the refresh arrow next to the Override with Scenario field. Then, select the scenario to use instead of the active scenario or as the default for the iterations. In the Iterations section, you can change the scenario assigned to each iteration. For more information, see Use Test Sequence Scenarios in the Test Sequence Editor and Test Manager.
An input file template that you create and populate with data. See Create External Data Files to Use in Test Cases.
To include the input data in your test results set, select Include input data in test result.
If the time interval of your input data is shorter than the model simulation time, you can limit the simulation to the time specified by your input data by selecting Stop simulation at last time point.
For more information on test inputs, see the Test Authoring: Inputs page.
Edit Input Data Files in Test Manager
From the Test Manager, you can edit your input data files.
To edit a file, select the file and click Edit. You can then edit the data in the Signal Editor for MAT-files or Microsoft Excel for Excel files.
To learn about the format for Excel files, see Input, Baseline, and Other Test Case Data Formats in Excel.
For the corresponding API, see sltest.testmanager.TestInput.
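For example, a sketch that adds an external Excel file as a test input for a test case object tc. The file name is hypothetical.

in = addInput(tc,'loggedFlightData.xlsx');  % returns an sltest.testmanager.TestInput object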
Simulation Outputs
Use the Simulation Outputs section to add signal outputs to your test results. Signals logged in your model or test harness can appear in the results after you add them as simulation outputs. You can then plot them. Add individual signals to log and plot or add a signal set.
Logged Signals — In the Logged Signals subsection, click Add. Follow the user interface.
To copy a logged signal set to another test case in the same or a different test file, select the signal set in Logged Signals, right-click to display the context menu, and click Copy. Then, in the destination test case, select the signal set in Logged Signals, right-click the signal, and click Paste. You can copy and paste more than one logged signal set at a time.
For a test case, you can use the SDI View File setting to specify the path to a Simulation Data Inspector (SDI) view file. You can assign a different view file to each test case. The view file configures which signals to plot and their layout in the test case results. The Test Manager does not support some configurations in the SDI view file, such as axes plot layouts other than time plots and axes layouts other than N-by-M grids. However, the Test Manager applies a similar configuration, if possible. You cannot save an SDI view file from the Test Manager, although when you save the test and results in an MLDATX test file, the file saves the current layout for that test. Use Simulink.sdi.saveView to create and save an SDI view file. For more information, see Save and Share Simulation Data Inspector Data and Views.
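For example, after arranging signals in the Simulation Data Inspector, a sketch like this saves the layout to a view file that you can then select as the SDI View File for a test case. The file name is hypothetical.

Simulink.sdi.saveView('controllerSignalsView.mldatx');  % save the current SDI layout to a view file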
Other Outputs — Use the options in the Other Outputs subsection to add states, final states, model output values, data store variables, and signal logging values to your test results. To enable selecting one or more of these options, click Override model settings.
States — Include state values between blocks during simulation. You must have a Sequence Viewer block in your model to include state values.
Final states — Include final state values. You must have a Sequence Viewer block in your model to include final state values.
Output — Include model output values.
Data stores — Include logged data store variables in Data Store Memory blocks in the model. This option is selected by default.
Signal logging — Include logged signals specified in the model. This option is selected by default. If you selected Log Signal Outputs when you created the harness, all of the output signals for the component under test are logged and returned in test results, even though they are not listed in the Simulation Outputs section. To turn off logging for one of the signals, in the test harness, right-click a signal and select Stop Logging Selected Signals.
For more information, see Capture Simulation Data in a Test Case. For the corresponding API, see the OverrideModelOutputSettings name-argument pair of setProperty.
Output Triggers — Use the Output Triggers subsection to specify when to start and stop signal logging based on a condition or duration. Note that a test passes if it passes while logging is triggered, even if it would fail outside of the triggered duration or while the trigger condition is not true.
The Start Logging options are:
On simulation start — Start logging data when simulation starts.
When condition is true — Start logging when the specified condition expression is true. Click the edit symbol next to Condition to display an edit box, where you enter the condition.
After duration — Start logging after the specified number of seconds have passed since the start of simulation. Click on the value next to Duration(sec) to display an edit box where you enter the duration in seconds.
The Stop Logging options are:
When simulation stops — Stop logging data when simulation ends.
When condition is true — Stop logging when the specified condition expression is true. Click the edit symbol next to Condition to display an edit box, where you enter the condition. Variables in the condition appear in the Symbols editor, where you can map them to a model element or expression, or rename them.
After duration — Stop logging after the specified number of seconds have passed since logging started. Click on the value next to Duration(sec) to display an edit box where you enter the duration in seconds.
Shift time to zero — Shifts the logging start time to zero. For example, if logging starts at time 2, then selecting this option shifts all times back by 2 seconds.
Symbols — Click Add to map a signal from the model to a symbol name. You can use that symbol in a trigger condition. For information on using and mapping symbols, see Assess Temporal Logic by Using Temporal Assessments.
Configuration Settings Overrides
For the test case, you can specify configuration settings that differ from the settings in the model. Setting the configuration settings in the test case enables you to try different configurations for a test case without modifying the model. The configuration settings overrides options are:
Do not override model settings — Use the current model configuration settings.
Name — Name of the active configuration set. A model can have only one active configuration set. Refresh the list to see all available configuration sets and select the desired one to be active. If you leave the default [Model Settings] as the name, the simulation uses the default, active configuration set of the model.
Attach configuration set in a file — Path to the external file (File Location) that contains a configuration set variable. The variable you specify in Variable Name references the name of a configuration set in the file. For information on creating a configuration set, see Simulink.ConfigSet and Save a Configuration Set. For information on configuration set references, see Share a Configuration with Multiple Models.
For the corresponding API, see the ConfigSetOverrideSetting, ConfigSetName, ConfigSetVarName, and ConfigSetFileLocation name-argument pairs of setProperty.
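For example, a sketch that selects a named configuration set for a test case object tc. The configuration set name is hypothetical; check the setProperty documentation for the exact values that ConfigSetOverrideSetting and the related pairs accept.

setProperty(tc,'ConfigSetName','FastRestartConfig');  % hypothetical configuration set available to the model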
Baseline Criteria
The Baseline Criteria section appears in baseline test cases. When a baseline test case executes, the Test Manager captures signal data from signals in the model marked for logging and compares that data to the baseline data.
Include Baseline Data in Results and Reports
Click Include baseline data in test result to include baseline data in test result plots and test reports.
Capture Baseline Criteria
To capture logged signal data from the system under test to use as the baseline criteria, click Capture. Then follow the prompts in the Capture Baseline dialog box. Capturing the data compiles and simulates the system under test and stores the output from the logged signals to the baseline. For a baseline test example, see Compare Model Output to Baseline Data.
For the corresponding API, see the captureBaselineCriteria method.
You can save the signal data to a MAT-file or a Microsoft Excel file. To understand the format of the Excel file, see Input, Baseline, and Other Test Case Data Formats in Excel.
You can capture the baseline criteria using the current release for simulation or another release installed on your system. Add the releases you want to use in the Test Manager preferences. Then, select the releases you want available in your test case using the Select releases for simulation option in the test case. When you run the test, you can compare the baseline against the release you created the baseline in or against another release. For more information, see Run Tests in Multiple Releases of MATLAB.
When you select Excel as the output format, you can specify the sheet name to save the data to. If you use the same Excel file for input and output data, by default both sets of data appear in the same sheet.
If you are capturing the data to a file that already contains outputs, specify the sheet name to overwrite the output data only in that sheet of the file.
To save a baseline for each test case iteration in a separate sheet in the same file, select Capture Baselines for Iterations. This check box appears only if your test case already contains iterations. For more information on iterations, see Test Iterations.
Specify Tolerances
You can specify tolerances to determine the pass-fail criteria of the test case. You can specify absolute, relative, leading, and lagging tolerances for individual signals or the entire baseline criteria set.
After you capture the baseline, the baseline file and its signals appear in the table. In the table, you can set the tolerances for the signals. To see tolerances used in an example for baseline testing, see Compare Model Output to Baseline Data.
For the corresponding API, see the AbsTol, RelTol, LeadingTol, and LaggingTol properties of sltest.testmanager.BaselineCriteria.
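For example, a sketch that captures a baseline into a MAT-file for a test case object tc and then loosens the absolute tolerance on the captured criteria. The file name, tolerance value, and third argument (assumed to update the test case) are assumptions.

baseline = captureBaselineCriteria(tc,'controllerBaseline.mat',true);
baseline(1).AbsTol = 0.02;   % absolute tolerance applied to the captured baseline signals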
Add File as Baseline
By clicking Add, you can select an existing file as a baseline. You can add MAT-files and Microsoft Excel files as the baseline. Format Microsoft Excel files as described in Input, Baseline, and Other Test Case Data Formats in Excel.
For the corresponding API, see the addInput method.
Update Signal Data in Baseline
You can edit the signal data in your baseline, for example, if your model changed and you expect different values. To open the Signal Editor or the Microsoft Excel file for editing, select the baseline file from the list and click Edit. See Manually Update Signal Data in a Baseline.
You can also update your baseline when you examine test failures in the data inspector view. See Examine Test Failures and Modify Baselines.
Equivalence Criteria
This section appears in equivalence test cases. The equivalence criteria are a set of signal data to compare between Simulation 1 and Simulation 2. Specify tolerances to regulate the pass-fail criteria of the test. You can specify absolute, relative, leading, and lagging tolerances for the signals.
To specify tolerances, first click Capture to run the system under test in Simulation 1 and add signals marked for logging to the table. Specify the tolerances in the table.
After you capture the signals, you can select signals from the table to narrow your results. If you do not select signals under Equivalence Criteria, running the test case compares all the logged signals in Simulation 1 and Simulation 2.
For an example of an equivalence test case, see Test Two Simulations for Equivalence.
For the corresponding API, see the captureEquivalenceCriteria method.
Iterations
Use iterations to repeat a test with different parameter values, configuration sets, or input data.
You can run multiple simulations with the same inputs, outputs, and criteria by sweeping through different parameter values in a test case.
Models, external data files, and Test Sequence blocks can contain multiple test input scenarios. To simplify your test file architecture, you can run different input scenarios as iterations rather than as different test cases. You can apply different baseline data to each iteration, or capture new baseline data from an iteration set.
You can iterate over different configuration sets, for example to compare results between solvers or data types. You can also iterate over different scenarios in a Test Sequence block.
To create iterations from defined parameter sets, Signal Editor scenarios, Test Sequence scenarios, external data files, or configuration sets, use table iterations. To create a custom set of iterations from the available test case elements, write a MATLAB iteration script in the test case.
To run the iterations without recompiling the model for each iteration, enable Run test iterations in fast restart. When selected, this option reduces simulation time.
For more information about test iterations, including using predefined variables in scripted iterations, see Test Iterations. For more information about fast restart, see How Fast Restart Improves Iterative Simulations.
For the corresponding API, see sltest.testmanager.TestIteration.
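For example, a scripted-iteration sketch intended for the test case Iterations script that sweeps a workspace variable over several values. The variable name and values are hypothetical; sltest_testCase is the predefined variable described in the Callbacks section.

% Scripted iterations: sweep the base workspace variable g over three values
vals = [9.78 9.81 9.83];
for k = 1:numel(vals)
    iter = sltestiteration;   % create an sltest.testmanager.TestIteration object
    setVariable(iter,'Name','g','Source','base workspace','Value',vals(k));
    addIteration(sltest_testCase,iter);
end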
Logical and Temporal Assessments
Create temporal assessments using the form-based editor that prompts you for conditions, events, signal values, delays, and responses. When you collapse the individual elements, the editor displays a readable statement summarizing the assessment. See Assess Temporal Logic by Using Temporal Assessments and Logical and Temporal Assessment Syntax for more information.
To copy and paste an assessment or symbol, select the assessment or symbol and right-click to display the context menu. You can select a single assessment or symbol or select multiple assessments or symbols. Alternatively, to copy or paste selected assessments or symbols, use Ctrl+C or Ctrl+V. Pasting the assessment adds it to the end of the assessment list in the current test case. You can also paste to a different test case. The assessments and their symbol names change to the default names in the pasted assessment. You can also use the context menu to delete assessments. To delete symbols, use the Delete button. If you delete an assessment or symbol, you cannot paste it even if you copied it before deleting it.
Assessment Callback
You can define variables and use them in logical and temporal assessment conditions and expressions in the Assessment Callback section.
Define variables by writing a script in the Assessment Callback section. You can map these variables to symbols in the Symbols pane by right-clicking the symbol, selecting Map to expression, and entering the variable name in the Expression field. For information on how to map variables to symbols, see Map to expression under Resolve Assessment Parameter Symbols.
The Assessment Callback section has access to the predefined variables that contain test, simulation, and model data. You can define a variable as a function of this data. For more information, see Define Variables in the Assessment Callback Section. For the corresponding API methods, see setAssessmentsCallback and getAssessmentsCallback.
If your assessments use at least, at most, between, or until syntax, select Extend Results to produce the minimum possible untested results. In some cases, none or only some of the untested results can be tested, so the results still show some untested results. When you extend the test results, previously passing tests might fail. Leave Extend Results checked unless you need to avoid an incompatibility with earlier test results.
Symbol t (time)
The symbol t is automatically bound to simulation time and can be used in logical and temporal assessment conditions. This symbol does not need to be mapped to a variable and is not visible in the Symbols pane.
For example, to limit an assessment to a time between 5 and 7 seconds, create a Trigger-response assessment and, in the trigger condition, enter t > 5 & t < 7. To avoid unexpected behavior, do not define a new symbol t in the Symbols pane.
Symbol Data Type
If you map a symbol to a discrete data signal that is linearly interpolated, the interpolation is automatically changed to zero-order hold during the assessment evaluation.
Custom Criteria
This section includes an embedded MATLAB editor to define custom pass/fail criteria for your test. Select function customCriteria(test) to enable the criteria script in the editor. Expand Perform custom criteria analysis on test results to see a list of available predefined variables. Custom criteria operate outside of model run time; the script evaluates after model simulation.
Common uses of custom criteria include verifying signal characteristics or verifying test conditions. MATLAB Unit Test qualifications provide a framework for verification criteria. For example, this custom criteria script gets the last value of the signal PhiRef and verifies that it equals 0:
% Get the last value of PhiRef from the dataset Signals_Req1_3
lastValue = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data(end);
% Verify that the last value equals 0
test.verifyEqual(lastValue,0);
See Process Test Results with Custom Scripts. For a list of MATLAB Unit Test qualifications, see Table of Verifications, Assertions, and Other Qualifications.
You can also define plots in the Custom Criteria section. See Create, Store, and Open MATLAB Figures.
For the corresponding API, see sltest.testmanager.CustomCriteria.
Coverage Settings
Use this section to configure coverage collection for a test file. The settings propagate from the test file to the test suites and test cases in the test file. You can turn off coverage collection or one or more coverage metrics for a test suite or test case, unless your test is a MATLAB-based Simulink test.
For MATLAB-based Simulink tests, you can change the coverage settings only at the test file level.
If you change the coverage settings in the Test Manager, the changes are not saved to the MATLAB-based Simulink test script file. If you also set the coverage using the sltest.plugins.ModelCoveragePlugin in a MATLAB-based Simulink test script (.m) file or at the command line, the Test Manager uses the coverage settings from the test script instead of the Test Manager coverage settings.
Coverage is not supported for SIL or PIL blocks.
The coverage collection options are:
Record coverage for system under test — Collects coverage for the model or, when included, the component specified in the System Under Test section for each test case. If you are using a test harness, the system under test is the component for which the harness is created. The test harness is not the system under test.
For a block diagram, the system under test is the whole block diagram.
For a Model block, the system under test is the referenced model.
For a subsystem, the system under test is the subsystem.
Record coverage for referenced models — Collects coverage for models that are referenced from within the specified system under test. If the test harness references another model, the coverage results are included for that model, too.
Exclude inactive variants — Excludes from coverage results these variant blocks that are not active at any time while the test runs:
Variant blocks in Simulink with Variant activation time set to startup
Variant configurations in Stateflow® charts
When displaying the test results, if you select or clear this option in the Aggregated Coverage Results section, the coverage results update automatically. For information, see Model Coverage for Variant Blocks (Simulink Coverage).
Note
Coverage settings, including coverage filter files, in the Test Manager override all coverage settings in the model configuration. In the Test Manager, Do not override model settings in the Configuration Settings section and Override model settings in the Simulation Outputs section do not apply to coverage.
By default, the Test Manager includes external MATLAB functions and files in the coverage results. You can exclude external MATLAB functions and files by using set_param(model,'CovExternalEMLEnable','off','CovSFcnEnable','off') at the command line. Alternatively, you can exclude MATLAB functions and files by using the Include in analysis setting in the Coverage Analyzer app from within the Simulink model.
For more information about collecting coverage, see Collect Coverage in Tests. For the corresponding API, see sltest.testmanager.CoverageSettings.
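For example, a sketch that turns on coverage collection for a test file object tf. The metric string is an assumption; see the CoverageSettings documentation for the metric codes it accepts.

cov = getCoverageSettings(tf);   % coverage settings for the test file
cov.RecordCoverage = true;       % record coverage for the system under test
cov.MdlRefCoverage = true;       % also record coverage for referenced models
cov.MetricSettings = 'd';        % 'd' assumed here to request decision coverage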
For information on the Coverage Metrics options, see Types of Model Coverage (Simulink Coverage).
For information about MATLAB-based Simulink tests, see Using MATLAB-Based Simulink Tests in the Test Manager.
Test File Options
Close open Figures at the end of execution
When your tests generate figures, select this option to clear the working environment of figures after the test execution completes.
For the corresponding API, see the CloseFigures property of sltest.testmanager.Options.
Store MATLAB figures
Select this option to store figures generated during the test with the test file. You can enter MATLAB code that creates figures and plots as a callback or in the test case Custom Criteria section. See Create, Store, and Open MATLAB Figures.
For the corresponding API, see the SaveFigures property of sltest.testmanager.Options.
Generate report after execution
Select Generate report after execution to create a report after the test executes. Selecting this option displays report options that you can set. The settings are saved with the test file.
Note
To enable the options to specify the number of plots per page, select Plots for simulation output and baseline.
By default, the model name, simulation start and stop times, and trigger information are included in the report.
For the corresponding API, see the GenerateReport property of sltest.testmanager.Options.
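For example, a sketch that sets all three test file options programmatically, assuming a test file object tf whose options are retrieved with getOptions.

opts = getOptions(tf);        % sltest.testmanager.Options object for the test file
opts.CloseFigures = true;     % close figures when test execution completes
opts.SaveFigures = true;      % store MATLAB figures with the test file
opts.GenerateReport = true;   % generate a report after execution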
For detailed reporting information, see Export Test Results and Customize Test Results Reports.
Test File Content
For a MATLAB-based Simulink test, this section displays the contents of the .m file that defines the test. This section appears only if you opened or created a MATLAB-based Simulink test. See Using MATLAB-Based Simulink Tests in the Test Manager.
See Also
Test Manager | sltest.testmanager.getpref | sltest.testmanager.setpref