Yuhala/intel-sgx-real-time-ecrts

About

  • This artifact contains the code and instructions required to reproduce the following results from our ECRTS 2025 paper, On Real-Time Guarantees in Intel SGX and TDX.
  • The artifact provides guidelines to run cyclictest for Intel SGX enclaves in the Gramine LibOS and in the WAMR environment (for WASM), as well as cyclictest in a native (non-TEE) environment, which serves as the baseline.
  • The repository provides code in three folders: native-cyclictest, gramine-cyclictest, and wamr-cyclictest, corresponding to each system to be tested; the remainder of this README explains how to set up and benchmark these systems.
  • For legal and administrative reasons, we cannot grant access to our Intel TDX server, so instructions for reproducing the Intel TDX results are not part of this README.

For ECRTS artifact reviewers

  • Because setting up your system to run the benchmarks can be complex, we have pre-configured a server which reviewers can access remotely to run the benchmarks. To use our pre-configured server, reviewers should send their SSH public keys to peterson.yuhala@unine.ch with the subject ECRTS artifact reviewer public key.
  • We will then configure the server to allow remote SSH access. Once the configuration is done, you will receive a confirmation email explaining how to access the server remotely.
  • We encourage reviewers to organize time slots among themselves to avoid running experiments concurrently on the same server.
  • Reviewers using our pre-configured server via SSH can skip to the section Native cyclictest (baseline) to begin running the benchmarks. Otherwise, please set up your system as described in the following section.
  • The estimated total run time for all the benchmarks is about 15 hours (server setup time not included).
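If you do not yet have an SSH key pair, one can be generated as follows (a minimal sketch; the key file name ecrts_reviewer_key is illustrative, and any key type the server accepts will do):

```shell
# Generate a dedicated ed25519 key pair non-interactively; remove any
# leftover files first so ssh-keygen does not prompt about overwriting.
rm -f ecrts_reviewer_key ecrts_reviewer_key.pub
ssh-keygen -t ed25519 -N "" -C "ecrts-artifact-reviewer" -f ecrts_reviewer_key

# This is the public key to send by email (never send the private key).
cat ecrts_reviewer_key.pub
```

The private key (ecrts_reviewer_key) stays on your machine and is passed to ssh with -i when connecting.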

Prerequisites and system setup

  • The instructions here are for a Linux-based system: we tested on both Ubuntu 22.04.5 LTS and Ubuntu 20.04.

  • Install an appropriate Linux real-time patch. We used RT Linux version 6.9.5-rt5 with the PREEMPT_RT patch.

  • Install the SGX software development kit (SDK), platform software (PSW), and SGX Linux kernel driver.

  • To assist in setting up these tools, see instructions in the corresponding readmes in the setup folder:

    1. Install the real-time patch 6.9.5-rt5
    2. Install the SGX tools
  • Server characteristics: the SGX evaluations were conducted on a server equipped with an 8-core Intel Xeon Gold 5515+ CPU clocked at 3.20 GHz, 22.5 MiB L3 cache, and 128 GiB of DRAM.

  • OS characteristics: the server runs Ubuntu 20.04.6 LTS and Linux RT version 6.9.5-rt5 with a fully preemptible kernel.

  • Software characteristics:

    • Intel SGX: the server has support for Intel SGX and was configured with 64 GiB of usable SGX EPC.
    • LibOS: we used Gramine SGX version 1.8.
    • Wasm: WebAssembly Micro Runtime (WAMR) from commit #0e4dffc4 + the scheduler management extension for running workloads in SGX enclaves.
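Before benchmarking, it can be worth sanity-checking that the RT kernel and SGX support are in place. The sketch below assumes a Linux system with the in-kernel SGX driver (device node names may differ with out-of-tree drivers):

```shell
# 1. The kernel version should carry an -rt suffix if the PREEMPT_RT
#    patch is active (e.g. 6.9.5-rt5).
if uname -r | grep -q 'rt'; then
  echo "RT kernel: yes ($(uname -r))"
else
  echo "RT kernel: no - install the real-time patch first"
fi

# 2. SGX support shows up as an 'sgx' flag in /proc/cpuinfo.
if grep -q -m1 -w sgx /proc/cpuinfo 2>/dev/null; then
  echo "SGX: supported"
else
  echo "SGX: not reported by this CPU"
fi

# 3. The in-kernel SGX driver exposes /dev/sgx_enclave when loaded.
[ -e /dev/sgx_enclave ] && echo "SGX driver: loaded" || echo "SGX driver: not loaded"
```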

Cyclictest setup

  • Install useful software for cyclictest evaluations.
sudo apt-get install build-essential libnuma-dev
sudo apt install stress-ng
sudo apt install gnuplot
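A quick way to confirm that the required tools are on the PATH before building (the tool names follow the apt packages above):

```shell
# Check each prerequisite and report missing ones instead of failing later.
for tool in gcc make stress-ng gnuplot; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before continuing"
  fi
done
```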

Native cyclictest (baseline)

  • Begin by cloning this repository
git clone https://github.com/Yuhala/intel-sgx-real-time-ecrts.git
cd intel-sgx-real-time-ecrts
  • Build cyclictest and hackbench
cd native-cyclictest/rt-tests
make all
  • To run the artifact evaluation for cyclictest in the native environment with all the stressors as described in our paper, launch the script run_cyclictest_native.sh in native-cyclictest/rt-tests folder.
./run_cyclictest_native.sh # Estimated run time ~ 4 hours
  • A successful run will produce 4 files in the results folder: native_idle.ct, native_hackbench.ct, native_stressng_irq.ct, native_stressng_vm.ct representing the cyclictest results for the idle and stressed runs.
  • To process the results and generate corresponding plots, launch the generate_histograms.sh script in the results folder.
cd results
./generate_histograms.sh
  • This will produce a plot in .png format for each benchmark. The 4 generated plots correspond to those shown in Figure 6 of our paper.
  • The generated plots show only the maximum scheduling latencies (the cyclictest default). To view the minimum and average latencies, see the end of the corresponding cyclictest .ct file. For example:
# Min Latencies: 00008 00008 00008 00008
# Avg Latencies: 00062 00062 00062 00062
# Max Latencies: 06617 06611 06632 06614
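As an illustration of this summary format, the following sketch folds the per-thread values of those trailer lines into a single number per row (example.ct is a throwaway file reproducing the excerpt above):

```shell
# Reproduce the summary lines from the excerpt above in a scratch file.
cat > example.ct <<'EOF'
# Min Latencies: 00008 00008 00008 00008
# Avg Latencies: 00062 00062 00062 00062
# Max Latencies: 06617 06611 06632 06614
EOF

# For each summary line, reduce the per-thread columns to one value:
# the largest for Max (the worst case), the smallest otherwise.
awk '/^# (Min|Avg|Max) Latencies:/ {
  best = $4 + 0
  for (i = 5; i <= NF; i++) {
    v = $i + 0
    if ($2 == "Max") { if (v > best) best = v }
    else             { if (v < best) best = v }
  }
  printf "%s latency (us): %d\n", $2, best
}' example.ct
```

For the excerpt above, this reports a worst-case maximum latency of 6632 us.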

Gramine LibOS cyclictests

  • Install Gramine LibOS by following the instructions on the Gramine LibOS website. Ideally, follow instructions for Ubuntu 22.04 LTS or 20.04 LTS. NB: If using the pre-configured server, this step has already been done.
  • Copy the built hackbench binary from native-cyclictest/rt-tests folder into the gramine-cyclictest folder.
cd gramine-cyclictest
cp ../native-cyclictest/rt-tests/hackbench .
  • Test cyclictest for Gramine by launching the build.sh script in the gramine-cyclictest folder: the test runs cyclictest for 60 s in Gramine LibOS and writes the results to the file gramine_idle.ct. A successful test produces results in this file.
  • To run the actual benchmark for cyclictest in Gramine-LibOS with all the stressors as described in our paper, launch the script run_cyclictest_gramine.sh in the gramine-cyclictest folder.
./run_cyclictest_gramine.sh # Estimated run time ~ 4 hours
  • We note that the Gramine manifest file cyclictest.manifest.template has been preconfigured with all the options to run cyclictest in Gramine LibOS for 60 minutes as described in the paper. The line which configures these arguments is:
loader.argv = ["-a", "4-7", "-t", "4", "-m", "-p", "90", "-i", "100", "-h", "10000", "-D", "60m", "-r", "-n"]
  • Similarly, a successful run will produce 4 files in the results folder: gramine_idle.ct, gramine_hackbench.ct, gramine_stressng_irq.ct, gramine_stressng_vm.ct representing the cyclictest results for the idle and stressed runs.
  • To process the results and generate corresponding plots, launch the generate_histograms.sh script in the same folder. This will produce a plot in .png format for each benchmark. The 4 generated plots correspond to those shown in Figure 8 of our paper.
  • As with the previous benchmark, the end of each cyclictest file .ct provides a recap of the minimum, average, and maximum latencies for the evaluated system.
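For readers unfamiliar with cyclictest, the manifest arguments above map onto the following command-line flags (meanings paraphrased from the rt-tests cyclictest help; consult cyclictest --help for your exact version):

```shell
# Shown only to explain the manifest's loader.argv; Gramine passes these
# arguments to cyclictest inside the enclave, so do not run this directly.
#
#   cyclictest -a 4-7 -t 4 -m -p 90 -i 100 -h 10000 -D 60m -r -n
#
#   -a 4-7    pin measurement threads to CPUs 4-7
#   -t 4      start 4 measurement threads
#   -m        lock memory with mlockall() to prevent paging
#   -p 90     run the threads at SCHED_FIFO priority 90
#   -i 100    base measurement interval of 100 us
#   -h 10000  collect a latency histogram with buckets up to 10000 us
#   -D 60m    run for a duration of 60 minutes
#   -r        use relative instead of absolute timers
#   -n        sleep via clock_nanosleep()
```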

WAMR cyclictests

  • Change directory to the root of the repository.
  • Install all the tools and compile WAMR runtime and cyclictest for Wasm.
./scripts/wamr-install.sh # Estimated run time ~ 45 minutes
  • To run WAMR-based cyclictest in an SGX-backed WAMR runtime with all the stressors as described in our paper, execute the corresponding script.
./scripts/run_cyclictest_wasm.sh # Estimated run time ~ 4 hours
  • A successful run will produce 4 files in the results folder of wamr-cyclictest: wasm_idle.ct, wasm_hackbench.ct, wasm_stressng_irq.ct, wasm_stressng_vm.ct representing the cyclictest results for the idle and stressed runs.
  • Change directory to the wamr-cyclictest results folder:
cd wamr-cyclictest/results
  • To process the results and generate corresponding plots, launch the generate_histograms.sh script in the same folder. This will produce a plot in .png format for each benchmark. The 4 generated plots correspond to those shown in Figure 7 of our paper.
./generate_histograms.sh
  • As with the previous benchmarks, the end of each cyclictest file .ct provides a recap of the minimum, average, and maximum latencies for the evaluated system.

Additional information

  • Please note that cyclictest is usually run for several hours to obtain useful results, so short runs (e.g., 15 minutes or less) are not very representative of the system being benchmarked.
  • Also, the scheduling latencies reported by these benchmarks are not expected to match those presented in our paper exactly, but we expect them to be close on a system with characteristics similar to ours.

Troubleshooting tips

  • Since each cyclictest run executes for 1 hour per benchmark, it may be useful to perform shorter test runs when debugging. This can be achieved by changing the TIME="60m" variable in the run scripts to a shorter duration, such as TIME="1m", which runs the tests for 1 minute instead.
  • To do a "hard kill" of all cyclictest executions, run pkill -9 cyclictest.
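For example, the duration can be changed in place with sed (the script below is a stand-in created just for this demonstration; apply the same substitution to the actual run scripts):

```shell
# Create a sample run script with the same TIME variable as the real ones.
cat > demo_run.sh <<'EOF'
TIME="60m"
echo "running cyclictest for $TIME"
EOF

# In-place substitution; the original is backed up as demo_run.sh.bak.
sed -i.bak 's/TIME="60m"/TIME="1m"/' demo_run.sh
grep TIME= demo_run.sh
```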
