Releases: ORNL/cyberwheel
v2.0
Cyberwheel v2.0
Emulator support
Cyberwheel now supports emulation with FIREWHEEL, an emulation experimentation tool developed by researchers at Sandia National Laboratories. Instructions for setting up the emulation environment can be found in `cyberwheel/emulation`. The emulation environment lets you evaluate your simulator-trained RL Red and RL Blue agents using FIREWHEEL's VM-management capabilities.
Multi-Agent Training
This version also adds multi-agent training within the simulated environment, meaning you can train an RL Red agent and an RL Blue agent simultaneously in the same network environment.
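As a rough illustration of the pattern (not Cyberwheel's actual API; `SharedNetworkEnv` and the random placeholder policies below are hypothetical), a multi-agent training loop steps both agents through one shared environment each timestep:

```python
import random

# Hypothetical stand-in for Cyberwheel's simulated network; not the real API.
class SharedNetworkEnv:
    """Toy zero-sum environment: red tries to compromise hosts, blue deploys decoys."""

    def __init__(self, num_hosts=10):
        self.num_hosts = num_hosts
        self.reset()

    def reset(self):
        self.compromised = set()
        self.decoys = set()
        self.t = 0
        return self._obs()

    def _obs(self):
        # One flag per host: 1 if compromised, else 0.
        return [1 if h in self.compromised else 0 for h in range(self.num_hosts)]

    def step(self, red_action, blue_action):
        # Blue acts first: deploying a decoy on a host protects it.
        self.decoys.add(blue_action)
        # Red's attack succeeds only against non-decoy, uncompromised hosts.
        red_reward = 0.0
        if red_action in self.decoys:
            red_reward = -1.0  # red burned a move on a decoy
        elif red_action not in self.compromised:
            self.compromised.add(red_action)
            red_reward = 1.0
        self.t += 1
        done = self.t >= 20 or len(self.compromised) == self.num_hosts
        # Zero-sum: blue's reward is the negation of red's.
        return self._obs(), red_reward, -red_reward, done


def random_policy(obs, num_actions):
    return random.randrange(num_actions)


env = SharedNetworkEnv()
obs = env.reset()
done = False
while not done:
    red_a = random_policy(obs, env.num_hosts)   # replace with the RL red policy
    blue_a = random_policy(obs, env.num_hosts)  # replace with the RL blue policy
    obs, r_red, r_blue, done = env.step(red_a, blue_a)
print("episode finished; compromised hosts:", len(env.compromised))
```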
Multi-network training support
Multi-network support allows you to train your agent(s) against multiple networks of various shapes and sizes, and to experiment with the agents' ability to learn more generalized strategies. For this feature, we've reworked the observation and action spaces to support three network size presets by default, selected via the `network_size_compatibility` argument (small, medium, or large). The new observation space is also more modular, letting you add custom attributes to the observation vector with less hard-coding.
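A common way to make one policy work across differently sized networks is to pad per-host observations to a fixed preset capacity. The sketch below shows that idea; the preset values and `pad_observation` function are illustrative assumptions, not Cyberwheel's implementation:

```python
import numpy as np

# Hypothetical preset table; the real capacities behind
# network_size_compatibility (small/medium/large) may differ.
SIZE_PRESETS = {"small": 50, "medium": 200, "large": 1000}

def pad_observation(per_host_features, preset="medium"):
    """Pad a variable-length per-host feature matrix to a fixed preset size
    so one policy network can be trained across differently sized networks."""
    max_hosts = SIZE_PRESETS[preset]
    obs = np.asarray(per_host_features, dtype=np.float32)
    n_hosts, n_feats = obs.shape
    if n_hosts > max_hosts:
        raise ValueError(f"{n_hosts} hosts exceeds preset '{preset}' cap of {max_hosts}")
    padded = np.zeros((max_hosts, n_feats), dtype=np.float32)
    padded[:n_hosts] = obs
    return padded.ravel()  # flat, fixed-length observation vector

# A 3-host network with 2 custom attributes per host still yields a
# fixed-length vector under the "medium" preset.
vec = pad_observation([[1, 0], [0, 0], [1, 1]], preset="medium")
print(vec.shape)  # (400,)
```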
Next Steps:
- Overhaul visualizations so they no longer require W&B or graphviz
- Develop unit tests
- Expand documentation and tutorials
Potential Issues/Bugs
- Cyberwheel has largely been tested on Linux and macOS, and much less on Windows
- pygraphviz remains a problematic package when installing dependencies
For any other bugs or issues you experience, please report them on our Issues page!
v1.1.0
Updating cyberwheel to version 1.1.0
Changes include:
- Overhauled all argument parsing into modular, hot-swappable YAML files. These define configurations for red/blue agents, host/decoy definitions, the network, the detector, services, and the overall environment (including training parameters).
  - Replaces the need to pass many command-line arguments, while still allowing command-line arguments to override values set in the YAML for quick testing (see the sketch after this list).
- Implemented an RL Red Agent. It has currently been trained and tested against an inactive blue agent, using the ART Killchain Phases to attack hosts.
- Implemented an ART Campaign. This heuristic agent runs through a more specific killchain of techniques, useful for specialized use cases or for testing with emulation.
- Allowed users to run cyberwheel with `python3 -m cyberwheel` for all use cases, making it easier to run a pipeline of training, evaluating, and visualizing.
- Various usability improvements.
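As a minimal sketch of the YAML-plus-override pattern described above (the config keys and CLI flags here are hypothetical, not Cyberwheel's actual schema):

```python
import argparse
import yaml  # pip install pyyaml

# Inline stand-in for one of the hot-swappable YAML files; the keys are
# hypothetical, for illustration only.
BASE_YAML = """
total_timesteps: 100000
seed: 1
blue_agent: decoy_blue
red_agent: art_agent
"""

config = yaml.safe_load(BASE_YAML)

# CLI flags mirror YAML keys; a default of None means "flag not given".
parser = argparse.ArgumentParser()
parser.add_argument("--total-timesteps", type=int, default=None)
parser.add_argument("--seed", type=int, default=None)
args = parser.parse_args()

# Any flag the user actually passed overrides the YAML value.
for key, value in vars(args).items():
    if value is not None:
        config[key] = value

print(config)  # e.g. passing `--seed 42` overrides only the seed
```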