To make sure you wrote a component `MyComponent` the right way, you need to write proper tests under the corresponding test directory `test/specs/components/MyComponent`.
You can run tests during development with `yarn test:watch`, which re-runs tests live on each file change, or at the end of development by running `yarn test`.
All PRs must meet or exceed test coverage limits before they can be merged.
Every time tests run, the coverage information in `/coverage` is updated. Open `coverage/lcov-report/index.html` to inspect test coverage. This interactive report reveals areas lacking test coverage, so you can write tests for those areas and increase coverage.
There are many common things to test for. Because of this, we have `test/specs/commonTests`.
These tests are typically imported into individual component tests.
Every common test receives your component as its first argument.
```tsx
import { isConformant } from 'test/specs/commonTests'
import Divider from 'src/components/Divider/Divider'

describe('Divider', () => {
  isConformant(Divider)
})
```

This is the only required test. It ensures a consistent baseline for the framework and also helps you get your component off the ground. You should add this test to new components right away.
`isConformant` asserts that the component conforms to guidelines that are applicable to all components:
- Component is exported or private
- Component name and filename are correct
- Component info file exists at `docs/src/componentInfo/${constructorName}.info.json`
- Events are properly handled
- Extra props are correctly spread
- Base classNames are applied
- The display name matches the constructor name
Create your test file in the `test/specs` directory. The `specs` directory mirrors the `src` directory. The first test should always be `isConformant()`.
For every source file there needs to be a test file, and it should be named `<Component>-test.tsx`.
There should be one `describe` block for each prop of your component.
Example for Button component:
```tsx
import { isConformant } from 'test/specs/commonTests'
import Button from 'src/components/Button'

describe('Button', () => {
  isConformant(Button)

  describe('accessibility', () => {
    ...
  })

  describe('type', () => {
    ...
  })

  describe('circular', () => {
    ...
  })

  describe('onClick', () => {
    ...
  })
})
```

Run tests with:

```sh
yarn test
```

Run tests in watch mode with:

```sh
yarn test:watch
```

For some components, it is necessary to write screenshot tests to check that they render properly. For each component example added to the docsite, a screenshot test is automatically created. It checks that the component is rendered in a consistent way by looking for visual differences between the previous rendering and the current one. We use screener-io for screenshot testing.
This default test only checks the rendering of the component in its initial state. To test the rendering of more complex components, such as a Dropdown, Screener provides an API to execute actions on the DOM, in a way similar to end-to-end tests. These tests are executed by Screener as long as both the tests and their files respect the following conventions:
- The test file should be placed at the same location as the component example under test.
- The test file should be named exactly like the component example file. If `DropdownExample.shorthand.tsx` is to be tested, the screener test file should be named `DropdownExample.shorthand.steps.ts`.
- The tests should be written as a config object that can contain the following props:
  - `steps`: an array of callbacks that accept a `builder` (step builder) parameter, as all of them will be chained in `screener.config.js`. The `builder` parameter is actually the `Steps` object from Screener, instantiated in `screener.config.js`.
  - `themes`: an array of strings representing the themes applied to the component when taking the screenshot; by default, all screenshots are taken for the Teams theme.
```tsx
import { Dropdown } from '@fluentui/react'

const config: ScreenerTestsConfig = {
  themes: ['teams', 'teamsDark', 'teamsHighContrast'],
  steps: [
    builder =>
      builder
        .click(`.${Dropdown.slotClassNames.triggerButton}`)
        .snapshot('Opens dropdown list'),
    builder =>
      builder
        .click(`.${Dropdown.slotClassNames.triggerButton}`)
        .hover(`.${Dropdown.slotClassNames.itemsList} li:nth-child(2)`)
        .snapshot('Highlights an item'),
  ],
}

export default config
```

- By convention, each test is written as a different callback and added to the `steps` array.
- An actual assertion is performed by taking a `.snapshot(<Your test name here>)`: the assertion compares the screenshot with the one taken on the previous run and considered correct. A test can have as many snapshots as needed.
- Before the snapshot is taken, steps are added to reach the state of the assertion, using methods from the `screener-runner` API (`.click(<selector>)`, `.hover(<selector>)`, etc.).
- Tests perform cleanup by default. This means each test is independent of the state of the component from previous tests.
To run the tests locally, make sure a Screener API key is saved in the environment variables on your machine. For instance, on macOS/Linux you can use `export SCREENER_API_KEY=<Your screener key here>`.
When run locally, not all the tests may need to run, only the ones added or edited. This can be achieved by changing the regexp in `screener.config.js`, where the `states` property is created. Make sure not to commit that change, though! Results can be viewed in the Screener online dashboard, among all the other runs (for master, pull requests, etc.).
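As an illustrative sketch of the idea (the file names and patterns below are hypothetical, not the actual contents of `screener.config.js`), narrowing the local run amounts to tightening the pattern used when the list of states is built:

```typescript
// Hypothetical sketch: narrowing a screener run to one component's tests.
// File names and patterns are illustrative; check screener.config.js for the
// real pattern. Do not commit a narrowed pattern!
const exampleFiles: string[] = [
  'DropdownExample.shorthand.steps.ts',
  'ButtonExample.shorthand.steps.ts',
  'MenuExample.shorthand.steps.ts',
]

// Default-style pattern that matches every steps file:
const allSteps = /\.steps\.ts$/
// Temporarily narrowed pattern that matches only the Dropdown tests:
const onlyDropdown = /DropdownExample.*\.steps\.ts$/

const states = exampleFiles.filter(file => onlyDropdown.test(file))
console.log(states) // ['DropdownExample.shorthand.steps.ts']
```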
Run the visual tests with:

```sh
yarn test:visual
```

Behavior unit tests are generated from the specification written in each behavior file.
Each line under the `@specification` tag is taken and matched against the regular expressions defined in the `testDefinitions.ts` file.
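For context, here is a hedged illustration of what `@specification` lines look like in a behavior file. The implementation below is a simplified, hypothetical sketch (not an actual source file); the spec lines mirror the `buttonBehavior` lines quoted later in this document:

```typescript
// Illustrative sketch of a behavior with @specification lines. The
// implementation is simplified and hypothetical.
/**
 * @specification
 * Adds role='button' if element type is other than 'button'.
 * Adds attribute 'aria-disabled=true' based on the property 'disabled'.
 */
const exampleButtonBehavior = (props: { as?: string; disabled?: boolean }) => ({
  attributes: {
    root: {
      // Only non-native elements need an explicit role:
      role: props.as !== 'button' ? 'button' : undefined,
      'aria-disabled': props.disabled ? true : undefined,
    },
  },
})

const attrs = exampleButtonBehavior({ as: 'div', disabled: true }).attributes.root
console.log(attrs.role) // 'button'
console.log(attrs['aria-disabled']) // true
```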
- For a new behavior, make the following changes in the file `behavior-test.tsx`:
  - Import the new behavior into the test file.
  - Add the new behavior to the `testHelper` object, like: `testHelper.addBehavior('yourNewBehaviorName', yourNewBehaviorImportedObject)`
  - Add a regular expression into `testDefinitions.ts` which will match your line written under the `@specification` tag.
- For an existing behavior:
  - Add a regular expression into `testDefinitions.ts` which will match your line written under the `@specification` tag.
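As a hedged sketch of what such a regex-to-test pairing might look like (the `TestDefinition` shape and names here are illustrative, not the actual contents of `testDefinitions.ts`):

```typescript
// Hypothetical sketch of a regex-to-test pairing in the style of
// testDefinitions.ts. The interface and helper names are illustrative.
interface TestDefinition {
  regexp: RegExp
  testMethod: (parameters: { props: string[] }) => void
}

const definitions: TestDefinition[] = []

// Matches @specification lines such as:
//   "Adds attribute 'aria-disabled=true' based on the property 'disabled'."
definitions.push({
  regexp: /Adds attribute '([\w-]+)=([\w\d]+)' based on the property '([\w-]+)'\./,
  testMethod: ({ props }) => {
    const [attribute, value, property] = props
    // ...here the real test would render the behavior with `property` set
    // and assert that `attribute` equals `value`.
  },
})

const specLine = "Adds attribute 'aria-disabled=true' based on the property 'disabled'."
const match = definitions[0].regexp.exec(specLine)
console.log(match && match.slice(1)) // ['aria-disabled', 'true', 'disabled']
```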
Run tests in watch mode:

```sh
yarn jest --watch behavior-test
```
Go into the `docs/src/behaviorMenu.json` file and verify that you can find your line. If not:
- Run the command `gulp build:docs:component-menu-behaviors`, which will build the file again.
- Verify the formatting of the file (check that no tag is missing, etc.) and run the command again.
Rename all test files whose titles contain the `behavior-test` string (the goal of the renaming is that these tests will not run), for example:
`listBehaviorrrrrrrrrrr-test.tsx`, `listItemBehaviorrrrrr-test.tsx`
Run the tests again and you should see in the console:

```
PASS test/specs/behaviors/behavior-test.tsx
  buttonBehavior.ts
    √ Adds role='button' if element type is other than 'button'. (11ms)
    √ Adds attribute 'aria-disabled=true' based on the property 'disabled'. (8ms)
    √ Adds attribute 'aria-disabled=true' based on the property 'disabled'. This can be overriden by providing 'aria-disabled' property directly to the component.
  buttonGroupBehavior.ts
    √ Adds role 'presentation' to 'root' component's part (2ms)
    √ Wraps component in FocusZone allowing arrow key navigation through the children of the component.
  dialogBehavior.ts
    √ Adds attribute 'aria-disabled=true' to 'trigger' component's part based on the property 'disabled'.
    √ Adds attribute 'aria-modal=true' to 'popup' component's part.
    √ Adds attribute 'role=dialog' to 'popup' component's part.
    √ Traps focus inside component
  gridBehavior.ts
    √ Wraps component in FocusZone allowing circular arrow key navigation through the children of the component.
```
Add a description under the `@description` tag, like:

```tsx
/**
 * @description
 * Image is usually only a visual representation and therefore is hidden from screen readers.
 */
```
Add your spec file into the `skipSpecChecksForFiles` array in `testHelper.tsx`, and put the description in the behavior file under the `@description` tag.
Performance tests measure performance, set a baseline for it, and help guard against regressions.
- To add a perf test, simply add a file with `.perf.` in its name. (As of writing, `.perf.` files in `docs/src` and `packages/perf-test` are automatically consumed.)
- Formatting follows the Storybook CSF convention, with special support for `iterations` metadata, which tells the performance testing package how many iterations of your component to render:
```tsx
// If this file is named ButtonBasic.perf.tsx, it will be picked up as kind of
// 'ButtonBasic' with story names of 'Blank' and 'WithText'.
export default {
  iterations: 5000,
}

export const Blank = () => <Button />
export const WithText = () => <Button content="Click here" />
```

Finding the right number of iterations is a balancing act between having a fast test and getting enough information from the results. For more complex scenarios and components, 1 iteration may be enough, while simple components with simple stories may need as many as 5,000.
Run the perf tests with:

```sh
yarn perf:test
```
After running `perf:test`, results can be viewed in the `packages/perf-test/dist` folder, with the main entry file being `packages/perf-test/dist/perfCounts.html`.
There are more detailed commands as well (these must be run from the `packages/perf-test` directory):

| Command | Description |
|---|---|
| `yarn just perf-test:bundle` | Recreates the story bundle. Required if perf stories are added or modified. |
| `yarn just perf-test:run` | Runs flamegrill against the story bundle and generates results. |
| `yarn just perf-test` | Executes both `:bundle` and `:run`. |