
Regularly benchmarking and stress-testing the alerting framework and rule types #119845

@mikecote

Description

The alerting system must be regularly benchmarked and stress-tested before every production release, preferably mirroring known complex customer environments. Benchmarking and comparing key health metrics across releases ensures we do not introduce regressions.

There are various ongoing performance-testing and framework/tool-creation efforts that relate to Kibana. Some research has been done on the pros, cons, and applicability of each so we can invest where we see the best value proposition balanced with the quickest ROI. As research continues, it seems clear we will plan to extend one or more tools or frameworks into a given solution. So, while we may start with one tool as an incremental first step or starting point, we are developing this against a set of requirements, first and foremost.

Front-runner for the starting-point tool/library: the Kibana Alerting team / ResponseOps kbn-alert-load alert/rule testing tool.

  • This repo is known to be forked and under active use/development by several Security-side team members; we will research and sync on its current state and capabilities.
    (See below for options that were declined for now.)

Here are some of the WIP Requirements we are evaluating and building out:

  • enables the team to catch some types of performance regressions within 24 hours of merging
  • to be modular with respect to the distinct execution elements:
    - cluster creation or attachment to an existing cluster
      - spin up environments of a specific configuration size in a viable cloud service that facilitates a performance/stress test
      - allow connecting to a self-managed cluster, which facilitates faster assessment of a developer change locally
  • data load / continuing ingest (at varying scales; what data do we need here, and what tool should we use to generate it?)
  • test-setup options: execute a configurable number of set API calls, looped and parameterized (like creating rules)
  • option to allow Kibana / the cluster to run indefinitely or for a set amount of time in minutes (the latter is currently hardcoded)
  • continuous monitoring and capture of desired metrics:
    - start and end time of rule executions
    - metrics to evaluate their potential drift
    - overall Kibana memory usage / CPU usage stats
    - overall cluster and Kibana health stats (some will be in the event log, but cluster health will not be; this needs to be itemized)
    - overall health of rule executions (none fail unintentionally)
  • integrated with a CI system for nightly (if not more frequent) runs (prototype done in Jenkins, not Kibana Buildkite, FYI)
  • Slack channel output of results from the test-run assessment at the end of CI (selectable Slack channel)
  • entails an automated pass/fail assessment of performance (relative comparison or fixed data points? including health + error review)
    - automated assessment must be left optional, allowing other teams to incrementally adopt usage
    - option to enable API calls during the test (and a pass/fail metric on whether they remain over x threshold of perf)
    - review of the Kibana log for unexpected errors (a grep + pass/fail mark)
  • option to perform (or skip) any environment clean-up (clean-up is the default; this requirement relates to re-using environments)
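The automated pass/fail bullets above (duration threshold, failed-execution count, log grep) could be sketched roughly as below. This is a minimal illustration: the type shapes, function names, and the choice of a p95 duration metric are assumptions for the sketch, not kbn-alert-load's actual API.

```typescript
// Hypothetical post-run assessment for a benchmark run. Names and
// shapes are illustrative only; kbn-alert-load's real output differs.

interface RuleExecution {
  ruleId: string;
  startMs: number; // epoch millis when the execution started
  endMs: number;   // epoch millis when the execution finished
  failed: boolean; // true if the execution errored unintentionally
}

interface Assessment {
  pass: boolean;
  p95DurationMs: number;
  failedExecutions: number;
  unexpectedLogErrors: string[];
}

// 95th-percentile execution duration: the fixed-data-point style of
// assessment (relative comparison to a baseline run is the alternative
// the requirements mention).
function p95(durations: number[]): number {
  const sorted = [...durations].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95));
  return sorted[idx];
}

function assessRun(
  executions: RuleExecution[],
  kibanaLogLines: string[],
  thresholdMs: number,
): Assessment {
  const durations = executions.map((e) => e.endMs - e.startMs);
  const p95DurationMs = p95(durations);
  const failedExecutions = executions.filter((e) => e.failed).length;
  // "Grep" the Kibana log for unexpected errors, skipping lines on an
  // explicit allow-list of known-benign messages.
  const allowList = [/shutting down/i];
  const unexpectedLogErrors = kibanaLogLines.filter(
    (line) => /error/i.test(line) && !allowList.some((re) => re.test(line)),
  );
  return {
    pass:
      p95DurationMs <= thresholdMs &&
      failedExecutions === 0 &&
      unexpectedLogErrors.length === 0,
    p95DurationMs,
    failedExecutions,
    unexpectedLogErrors,
  };
}

// Example: two healthy executions, one allow-listed log line, 500 ms threshold.
const result = assessRun(
  [
    { ruleId: 'r1', startMs: 0, endMs: 120, failed: false },
    { ruleId: 'r2', startMs: 0, endMs: 300, failed: false },
  ],
  ['[info] rule executed', '[error] shutting down'],
  500,
);
console.log(result.pass); // true: under threshold, no failures, no log errors
```

Keeping the assessment a pure function over collected metrics also makes the "automated assessment must be left optional" requirement easy: a CI run can collect metrics and simply skip calling it.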

Stretch / next goals:

  • Confirm/enable the tool to allow testing across different rule type needs (some WIP by the Security team)
  • Confirm/enable the tool to allow testing across Cases needs
  • Confirm/enable the tool to allow testing across one or more third-party connector needs (bulk updates, etc.)
    • focus on the email connector next?
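One way the stretch goals above could be expressed is a parameterized scenario matrix spanning rule types, cases, and connectors. The shape below is purely illustrative, assuming a hypothetical `Scenario` config and `expand` helper rather than the tool's real configuration format:

```typescript
// Hypothetical scenario matrix; kbn-alert-load's actual suite/scenario
// definitions may look quite different.
interface Scenario {
  name: string;
  target: 'rule' | 'case' | 'connector';
  typeId?: string;     // e.g. a rule type or connector type id (illustrative)
  count: number;       // how many objects the runner should create
  intervalSec: number; // schedule / polling interval for the scenario
}

const scenarios: Scenario[] = [
  { name: 'index-threshold-1k', target: 'rule', typeId: '.index-threshold', count: 1000, intervalSec: 60 },
  { name: 'cases-bulk', target: 'case', count: 200, intervalSec: 30 },
  { name: 'email-connector', target: 'connector', typeId: '.email', count: 50, intervalSec: 60 },
];

// Expand each scenario into per-object task labels a runner could execute.
function expand(s: Scenario): string[] {
  return Array.from({ length: s.count }, (_, i) => `${s.name}/${s.target}-${i}`);
}

const tasks = scenarios.flatMap(expand);
console.log(tasks.length); // 1250 tasks across all three scenarios
```

Declaring scenarios as data like this would let each team (rules, Cases, connectors) add coverage incrementally without touching the runner.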

FYI: Frameworks/Tools that have been researched and ruled out for immediate purposes:

  1. The Kibana QA team created an API load-testing tool, kibana-load-testing. It was researched by Patrick M in 2020, and the Alerting/Rules team did not end up collaborating on it; it uses the Kibana HTTP API and so is not well suited to assessing the (background-process) Task Manager at the moment.

  2. The Kibana working group's upcoming tool (including folks like Spencer A / Tyler S / Daniel M / Liza K): they are discussing and working on a performance-testing tool and CI integration for Kibana needs.

  • Eric is bringing requirements/context and generally participating in the Kibana Performance Working Group (v2) to benefit both groups.
  • Their timeline for Kibana Task Manager-centric automation support is cited as TBD; the UI is where they are investing first (as of Feb 2022). This is partly because the kbn-alert-load tool exists and is sufficient for teams (based on its usage).

Metadata

Assignees

No one assigned

Labels

Feature:Alerting/RuleTypes (Issues related to specific Alerting Rules Types), Feature:Alerting/RulesFramework (Issues related to the Alerting Rules Framework), Meta, Team:ResponseOps (Platform ResponseOps team, formerly the Cases and Alerting teams), estimate:needs-research (Estimated as too large and requires research to break down into workable issues)
