Diary
This is a diary that records the events and thought processes that occurred during the project. We use it mostly to reorganize our thoughts and to keep a trace of how we handled certain issues, for when we bump into them again later.
1
Playing with the Tauri mock runtime
- Exploration of Tauri code
  - `tauri::app::App::run_iteration` exists to react to a single event
  - `tauri::app::on_event_loop_event` could be used to fuzz the Tauri app by calling it directly from `main`
  - `tauri::app::Builder::on_page_load` could be used to fuzz the Tauri app by calling it directly from `main`
  - `tauri-runtime-wry` is the default implementation of the runtime
  - `tauri-runtime` is the runtime interface
  - the `wry` runtime event loop receives different types of events:
    - `tao::Event`: received from TAO
    - `tao::EventLoopWindowTarget`?
    - `tao::ControlFlow`: Poll, Wait, WaitUntil, Exit
- Playing with `mini-app` and the mock runtime
  - new fuzz branch in Tauri
  - make the mockruntime public
  - rust-gdb can be used to break on symbols such as `tauri::app::App::run_iteration::hc9a795e571e144bc`
  - trying to hack the function `tauri::app::on_event_loop_event`
    - events are only for window stuff; to interact with commands, check the manager
2
- Try to trigger a command programmatically
  - `PendingWindow` has the fields `js_event_listeners` and `ipc_handler`
- Check the `wry` crate
  - the webview sends messages to the Rust host using `window.ipc.postMessage("message")`
- Try to capture IPC using Wireshark
  - listening on the loopback interface
  - did not work; Tauri most likely does not use internet sockets
- Try to capture IPC using `strace`
  - we see traces of `recvmsg` and `sendmsg` syscalls
  - using `ss -pO | grep mini/WebKit` we see open sockets for these processes
  - Unix sockets can be tracked using sockdump
    - `sockdump` can output to the pcap format, which is readable by Wireshark
3
- Trying to `sockdump` the mini-app sockets
  - checking socket files in `/proc/$PID/fd`
  - `lsof -p $PID` lists open files for a process
  - Tauri commands do not seem to pass through Unix sockets
    - `ss` shows that the open sockets have no data going through them
    - this is confirmed using `sockdump`
- Checking `tauri`, `wry` and `tao` code to see where IPC comes from
  - connect to local versions of wry and tauri
  - `tao::EventLoop::run_return`, when spawning the x11 thread, contains `let (device_tx, device_rx) = glib::MainContext::channel(glib::Priority::default());`
4
- The IPC manager adds IPC handlers to the webview
  - at build time of the webview these handlers generate methods that can be called via `window.webkit.messageHandlers.funcName.postMessage(args)`
  - examples can be seen in `wry/examples/*`
- From Lucas' suggestions:
  - `tauri::window::Window::on_message` can trigger commands
  - https://github.com/tauri-apps/tauri-invoke-http can be used to switch to HTTP over localhost instead of the default Tauri IPC
- Using `tauri::window::Window::on_message` we manage to run the app and trigger commands without a webview (see the sketch below)
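For reference, a minimal sketch of this webview-less invocation, assuming Tauri 1.x with the `test` feature and `serde_json` available. `do_something` and its `input` argument are placeholders, and since the exact `InvokePayload` field set varies across 1.x releases, the payload is deserialized from JSON instead of constructed field by field:

```rust
// Sketch only: trigger a command through `Window::on_message` without a webview.
// Assumes Tauri 1.x with the `test` feature; `do_something` is a placeholder.
use tauri::test::{mock_builder, mock_context, noop_assets};

#[tauri::command]
fn do_something(input: String) -> String {
    input.to_uppercase()
}

fn main() {
    let app = mock_builder()
        .invoke_handler(tauri::generate_handler![do_something])
        .build(mock_context(noop_assets()))
        .expect("failed to build mock app");
    let window = tauri::WindowBuilder::new(&app, "main", Default::default())
        .build()
        .expect("failed to build window");
    // Deserialize `InvokePayload` rather than constructing it, since its field
    // set changed across 1.x releases; command args are flattened into it.
    let payload: tauri::InvokePayload = serde_json::from_value(serde_json::json!({
        "cmd": "do_something",
        "callback": 0,
        "error": 1,
        "input": "hello",
    }))
    .unwrap();
    window.on_message(payload).unwrap();
}
```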
5
- import tauri-fork in the fuzz-proto dir
- reinstall necessary tools for new computers
- modify Dockerfile
  - remove `cargo chef`; don't know why, but it made `mini-app/src-tauri/src/main.rs` an empty `main() {}` function
  - change the architecture
6
- modify Dockerfile to have missing dependencies
- `tauri::test::assert_ipc_response` should be checked to also handle the response from the invoked command
Question to Lucas
- IPC lifecycle?
  - on init of the webview, Tauri registers its handlers in the webview
  - these Tauri handlers can be called via `postMessage` in webkitgtk
  - What kind of Linux IPC is actually used in webkitgtk?
    - IPC is actually handled by the webview
- Mockruntime
  - essentially, what is it? An emulation of Wry
  - if we want to fuzz the windowing system in the future, fork the mockruntime rather than forking wry
- HTTP
  - does it make the two processes communicate over HTTP on localhost?
  - when is it used? websockets, local devserver
  - could be useful for a man-in-the-middle fuzzer that is able to fuzz both the backend and the webview by sending them HTTP requests
- Architecture questions
  - why use Window and WindowHandle, App and AppHandle?
7
- `libdw` is not used in `cargo b --release` because there is no debug info in the release profile
- fix a byte conversion error where the `copy_from_slice` involved 2 arrays of different sizes
- `libafl::bolts::launcher::Launcher` is used to launch fuzzing on multiple cores for free
  - `run_client()` is the closure run by every core
- Fuzzer behaviour depending on the harness result:
  - when the harness crashes with a `panic`
    - the fuzzer state is restarted
    - the initial corpus is re-generated
  - when the harness does not crash but returns `ExitKind::Crash` or `ExitKind::Ok`
    - the fuzzer is not restarted, and the corpus may run out because it is not regenerated
- `libafl::state::StdState::generate_initial_inputs_forced` creates new inputs even if they are not "interesting"
  - useful when not using feedback
8
- x86_64 calling convention checked (see the illustration below)
  - for `&str`, the length is stored in rsi and the pointer in rdi
  - for `u32`, the value is stored directly in rdi
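A small illustration to disassemble and check this; my own sketch, using `extern "C"` only because Rust's native ABI is unstable and the SysV C convention is the one we observed under gdb:

```rust
// Compile with `cargo rustc --release -- --emit asm` and inspect the output.
#[allow(improper_ctypes_definitions)] // &str is not FFI-safe; illustration only
#[no_mangle]
pub extern "C" fn takes_str(s: &str) -> usize {
    // pointer arrives in rdi, length in rsi
    s.len()
}

#[no_mangle]
pub extern "C" fn takes_u32(v: u32) -> u32 {
    // value arrives in edi (the low 32 bits of rdi)
    v.wrapping_add(1)
}
```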
- the environment variable `LIBAFL_DEBUG_OUTPUT` helps with debugging
9
- `libdw` issue
  - in the Docker container it works in release but not in debug
  - locally it does not work in either release or debug; the issue is triggered in both cases
- `libafl_qemu::Emulator` does not crash itself when the emulated program crashes
  - no way to catch a crash in the emulator?
- Add `InProcess` fuzzing
  - we avoid the dependency issue
  - we don't deal with the qemu emulator anymore
  - steps:
    - split `mini-app` to have both a binary and a lib
    - use in-memory fuzzing to call functions from the lib
- separate mini-app into a lib and a binary
10
- Flow between the app and the mockruntime:
  - `app::run()` → `runtime::run()` → `app::on_event_loop_event` → `callback`
- Difference between:
  - `App::run_on_main_thread` / `RuntimeContext::run_on_main_thread`: run stuff on the window process
  - `window::on_message`: pass a message to the backend process
- We need a harness that does not exit at the end of the function
- In the mockruntime there is `app::Window::DetachedWindow::Dispatcher::close()`
  - it will send the message `Message::CloseWindow` with `run_on_main_thread`
  - the mockruntime intercepts it and sends `RuntimeEvent::ExitRequested` to the `app`
  - the `app` will process some stuff in `on_event_loop_event`
  - then the event `RuntimeEvent::ExitRequested` will be sent to the closure given to `app::run` at the beginning
- You can break out of the loop from `run` in the `MockRuntime`
  - by sending a message `Message::CloseWindow`
  - then sending another message which is not `ExitRequestedEventAction::Prevent`
11
- Move the code that sets up and calls Tauri commands to the fuzzer
  - now the application only needs an almost empty `lib.rs` file to be fuzzed
- Refactor and clean code
- Bibliography
  - tinyinst
12
- Bibliography
- Mdbook
- Plan for the future with Github issues
13
- Read the AFL++ docs for code instrumentation
- Redo the Dockerfile
  - change to a higher version of Debian to get llvm-14 — fail, llvm-14 is not new enough to compile Rust code
  - change to an Ubuntu 23.04 container
  - pin the Rust version to 1.70
  - pin the compiler version for AFL++ to llvm-16
- Compile with `afl-clang-lto`
  - the llvm version of rustc and the llvm you want to use need to match
    - check your rustc llvm with `rustc --version --verbose`
  - emitting llvm from `rustc` + vanilla compilation with `afl-clang-lto` fails and is not practical
  - trying with `.cargo/config.toml`:

    ```toml
    [target.x86_64-unknown-linux-gnu]
    linker = "afl-clang-lto"
    ```
- Checking if coverage worked by checking the asm
  - `afl-clang-lto` needs more instrumentation earlier in the pipeline
  - we need to check `cargo-afl`
14
- In `cargo-afl`
  - files are compiled with:

    ```rust
    let mut rustflags = format!(
        "-C debug-assertions \
         -C overflow_checks \
         -C passes={passes} \
         -C codegen-units=1 \
         -C llvm-args=-sanitizer-coverage-level=3 \
         -C llvm-args=-sanitizer-coverage-trace-pc-guard \
         -C llvm-args=-sanitizer-coverage-prune-blocks=0 \
         -C llvm-args=-sanitizer-coverage-trace-compares \
         -C opt-level=3 \
         -C target-cpu=native "
    );
    rustflags.push_str("-Clink-arg=-fuse-ld=gold ");
    ```
- Compile mini-app with the flags above
  - issue: all crates are instrumented

    ```sh
    export RUSTFLAGS="-C debug-assertions -C overflow_checks -C passes=sancov-module -C codegen-units=1 -C llvm-args=-sanitizer-coverage-level=3 -C llvm-args=-sanitizer-coverage-trace-pc-guard -C llvm-args=-sanitizer-coverage-prune-blocks=0 -C llvm-args=-sanitizer-coverage-trace-compares -C opt-level=3 -C target-cpu=native --cfg fuzzing -Clink-arg=-fuse-ld=gold -l afl-llvm-rt -L /home/adang/.local/share/afl.rs/rustc-1.70.0-90c5418/afl.rs-0.13.3/afl-llvm-rt"
    ```
  - we need to make `-fsanitize-coverage-allowlist=` work
15
- Check `LibAFL`
  - `libafl_targets`
  - `libafl_cc`
- Compile with `-C llvm-args=-sanitizer-coverage-trace-pc-guard`
  - it places calls to `__sanitizer_cov_trace_pc_guard` at every edge (by default)
  - `libafl_targets` implements `__sanitizer_cov_trace_pc_guard` (see the sketch at the end of this entry)
  - flags:

    ```sh
    export RUSTFLAGS="-C debug-assertions -C overflow_checks -C passes=sancov-module -C codegen-units=1 -C llvm-args=-sanitizer-coverage-level=3 -C llvm-args=-sanitizer-coverage-trace-pc-guard -C llvm-args=-sanitizer-coverage-prune-blocks=0 -C llvm-args=-sanitizer-coverage-trace-compares -C opt-level=3 -C target-cpu=native --cfg fuzzing -C llvm-artg=-D__sanitizer_cov_trace_pc_guard_init"
    ```
  - `sanitize-coverage-allowlist=coverage_allowlist.txt` is not supported with Rust
  - linking error: `ld` does not find symbols in `libafl_targets`
- Selective instrumentation
  - tried the allowlist, but it is not working
  - `cargo rustc` only affects your crate and not its dependencies
  - https://stackoverflow.com/questions/64242625/how-do-i-compile-rust-code-without-linking-i-e-produce-object-files
- From Discord:
  - "I had good experience with using cargo-fuzz and https://github.com/AFLplusplus/LibAFL/pull/981 together"
  - "So cargo-fuzz will instrument everything and that branch has a libfuzzer compatible runtime"
  - "In a default cargo-fuzz project, just depend on that LibAFL libfuzzer version instead of the one from crates.io."
  - "There is also the (somewhat unmaintained) cargo-libafl crate that could give some pointers"
- `rustc` llvm-args: `rustc -C llvm-args="--help-hidden" | nvim -`
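As a mental model of what `libafl_targets` provides, here is a hedged sketch of the two hooks that `-sanitizer-coverage-trace-pc-guard` expects, following the clang SanitizerCoverage docs; the real implementation feeds LibAFL's coverage map instead of a local array:

```rust
// Sketch of the SanitizerCoverage pc-guard callbacks.
static mut COVERAGE_MAP: [u8; 65536] = [0u8; 65536];

#[no_mangle]
pub unsafe extern "C" fn __sanitizer_cov_trace_pc_guard_init(start: *mut u32, stop: *mut u32) {
    // Called once per instrumented module: give every edge guard a unique id.
    let mut id: u32 = 0;
    let mut guard = start;
    while guard < stop {
        *guard = id;
        id = id.wrapping_add(1);
        guard = guard.add(1);
    }
}

#[no_mangle]
pub unsafe extern "C" fn __sanitizer_cov_trace_pc_guard(guard: *mut u32) {
    // Called at every instrumented edge: bump a hit counter for this edge.
    let idx = (*guard as usize) % COVERAGE_MAP.len();
    COVERAGE_MAP[idx] = COVERAGE_MAP[idx].wrapping_add(1);
}
```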
16
- `cargo-libafl` is a fork of `cargo-fuzz`
- How does it work with libfuzzer?
  - the `init` command creates a `fuzz` directory with:
    - `fuzz_targets`, with harnesses using the `fuzz_target!` macro
    - a `Cargo.toml` containing a dependency on `libfuzzer-sys`
      - `libfuzzer-sys` can refer to the original from `crates.io` or to the ported version from `libafl`
  - the `cargo fuzz run` command fuzzes the targets
    - working when using the deprecated original `libfuzzer-sys`
    - failing to link with the version from `libafl`
    - same error when using `cargo-libafl`
  - Steps:
    - compile the `fuzz_targets` with the command:

      ```sh
      RUSTFLAGS="-Cpasses=sancov-module -Cllvm-args=-sanitizer-coverage-level=4 -Cllvm-args=-sanitizer-coverage-inline-8bit-counters -Cllvm-args=-sanitizer-coverage-pc-table -Cllvm-args=-sanitizer-coverage-trace-compares --cfg fuzzing -Clink-dead-code -Cllvm-args=-sanitizer-coverage-stack-depth -Cdebug-assertions -C codegen-units=1" "cargo" "build" "--manifest-path" "/home/adang/boum/fuzzy/playground/rust-url/fuzz/Cargo.toml" "--target" "x86_64-unknown-linux-gnu" "--release" "--bin" "fuzz_target_1"
      ```
    - run the `fuzz_targets` with the command:

      ```sh
      RUSTFLAGS="-Cpasses=sancov-module -Cllvm-args=-sanitizer-coverage-level=4 -Cllvm-args=-sanitizer-coverage-inline-8bit-counters -Cllvm-args=-sanitizer-coverage-pc-table -Cllvm-args=-sanitizer-coverage-trace-compares --cfg fuzzing -Clink-dead-code -Cllvm-args=-sanitizer-coverage-stack-depth -Cdebug-assertions -C codegen-units=1" "cargo" "run" "--manifest-path" "/home/adang/boum/fuzzy/playground/rust-url/fuzz/Cargo.toml" "--target" "x86_64-unknown-linux-gnu" "--release" "--bin" "fuzz_target_1" "--" "-artifact_prefix=/home/adang/boum/fuzzy/playground/rust-url/fuzz/artifacts/fuzz_target_1/" "/home/adang/boum/fuzzy/playground/rust-url/fuzz/corpus/fuzz_target_1"
      ```
- The `fuzz_target!` macro definition is in `cargo-libafl/cargo-libafl-helper`
- To have a more complete fuzzer with memory sanitizer and more, check `cargo-libafl/cargo-libafl/cargo-libafl-runtime`
- Fork `cargo-fuzz` or `cargo-libafl` to use their framework to easily fuzz Tauri applications
17
- Use `cargo-fuzz` as the frontend for the fuzzing, then use `libafl` as the backend replacing `libfuzzer`
- Installing `rustup component add llvm-tools-preview` to see information about code coverage
  - `cargo fuzz run fuzz_target`
  - `cargo fuzz coverage fuzz_target`
  - show code coverage with `llvm-cov show`:

    ```sh
    llvm-cov show \
        -instr-profile=coverage/fuzz_target_1/coverage.profdata \
        -Xdemangler=rustfilt target/x86_64-unknown-linux-gnu/coverage/x86_64-unknown-linux-gnu/release/fuzz_target_1 \
        -use-color --ignore-filename-regex='/.cargo/registry' \
        -output-dir=coverage/fuzz_target_1/report -format=html \
        -show-line-counts-or-regions \
        -ignore-filename-regex='/rustc/.+'
    ```
    - docs at https://llvm.org/docs/CommandGuide/llvm-cov.html#llvm-cov-show
    - the binary with coverage information is generated at `target/arch_triple/coverage/arch_triple/release/fuzz_target`
    - the profile file is generated at `coverage/fuzz_target/coverage.profdata`
  - create a summary report with `llvm-cov report`:

    ```sh
    llvm-cov report \
        -instr-profile=coverage/fuzz_target_2/coverage.profdata \
        -use-color --ignore-filename-regex='/.cargo/registry' \
        -Xdemangler=rustfilt target/x86_64-unknown-linux-gnu/coverage/x86_64-unknown-linux-gnu/release/fuzz_target_2
    ```
- Swap the `libfuzzer` backend with the `libafl_libfuzzer` version
  - docs for the options are in `LibAFL/libafl_libfuzzer/src/lib.rs`
18
- Clone dash
- Clone sqlite
- Modify `dash` to make it crash
19
Frida
Frida is a binary analyser with 2 main features:
- Stalker: a code-tracing engine
  - follows threads and traces every instruction that is being called
  - uses a technique called dynamic recompilation
    - while a program is running, the current basic block is copied and stored in caches
    - these copies can be modified and executed on demand
    - the original instructions are unmodified
- Interceptor: function hooks (see the sketch at the end of this entry)
  - allows inspection and modification of the flow of function calls
  - different techniques are possible, but the most common are trampoline-based hooks
    - code is inserted at the beginning of a function A to execute another function B, so function B is "inserted" in the middle of function A
Strong points
- Portability: frida works/exists on almost all platforms
- Frida does binary analysis
  - it works directly on binaries and does not require special compilation
Libafl-frida
- `libafl-frida` uses Frida's ability to modify code to:
  - provide coverage
  - provide ASAN
  - provide CmpLog
- to create more behaviour we just need to implement the `FridaRuntime` trait and add it to the possible runtimes
  - for example, a runtime that crashes on system calls
- `libafl-frida` has been made to fuzz C libraries
  - there is no easy way to fuzz a Rust crate
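To make the Interceptor part concrete, a minimal sketch using the `frida_gum` Rust bindings; the API names follow the 0.x bindings that `libafl-frida` builds on and should be checked against the pinned version:

```rust
// Sketch: hook libc `open` with frida_gum's Interceptor.
use frida_gum::{
    interceptor::{Interceptor, InvocationContext, InvocationListener},
    Gum, Module,
};

struct OpenListener;

impl InvocationListener for OpenListener {
    fn on_enter(&mut self, _context: InvocationContext) {
        println!("open() entered");
    }
    fn on_leave(&mut self, _context: InvocationContext) {}
}

fn main() {
    let gum = Gum::obtain();
    let mut interceptor = Interceptor::obtain(&gum);
    // Resolve `open` in the current process, then attach the listener.
    let open = Module::find_export_by_name(None, "open").expect("open not found");
    let mut listener = OpenListener;
    interceptor.attach(open, &mut listener);
    // ... run the code under test; on_enter fires for every call to open().
}
```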
20
Syscall isolation runtime
Intercepting syscalls
- using the LD_PRELOAD trick
- intercept all libc calls and the `syscall` instruction
Preventing too many false positives
- set a flag every time you change running environment (disable the flag when running fuzzer code)
  - needs to be run single-threaded
- check the stack trace to see whether the call came from the Tauri app
  - can be costly
- use fork fuzzing to avoid syscalls from the fuzzer?
- eBPF could be a solution to filter false positives? There may be existing eBPF rules that we could reuse
- Using libafl minimizer
21
tauri-for-fuzzy
- `window.close()` has different behaviour in 1.5 and 2.0
Fuzzer on macOS
- `tauri::Window`s other than "main" can't trigger `on_message`
- issue when using `Cores("0")`, but it works fine with other cores
  - `cores.set_affinity()` is not supported on macOS
  - I have a hunch that `Cores("0")` represents in-memory fuzzing
Ideas for Frida
- For injecting a library dependency into PE, Mach-O or ELF binaries
- https://github.com/lief-project/LIEF
Interesting project
- ziggy
  - a fuzzer manager for Rust projects
22
- Update docs on syscalls
- Compile `mini-app` as a dylib
  - libafl prevents instrumenting its own crate to avoid weird recursion
- Clean the `mini-app` fuzzing code
- Make `mini-app` dynamic
  - to use the binary directly, linking with the dynamic `libmini_app.so`:

    ```sh
    LD_LIBRARY_PATH='/home/user/tauri-fuzzer/mini-app/src-tauri/fuzz/target/debug/deps:/home/user/.rustup/toolchains/1.70-x86_64-unknown-linux-gnu/lib:/home/user/tauri-fuzzer/mini-app/src-tauri/fuzz/target/debug:/home/user/.rustup/toolchains/1.70-x86_64-unknown-linux-gnu/lib'
    ```
- Create a Tauri command that does a system call without using the libc
23
- Create a separate crate `tauri_fuzz_tools` for helper functions
  - these functions connect Tauri to LibAFL
- Change the whole repo to a workspace
- Catch a call to libc; two options:
  - check any "call" instruction and its destination address
    - we don't need to instrument libc
    - we may miss hidden calls
  - instrument the libc and verify the instruction location
    - we need to instrument libc, and all libc instructions will be analysed
    - easier to implement
- Found how to get libc symbols through `frida_gum::Module::enumerate_exports` (see the sketch below)
- Strange "double crash bug"
  - does not appear when removing coverage from the runtimes
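A sketch of that symbol lookup; the `ExportDetails` field names are recalled from memory and should be checked against the `frida_gum` version in use:

```rust
// Sketch: list libc exports to locate symbols to hook.
use frida_gum::{Gum, Module};

fn main() {
    let _gum = Gum::obtain(); // initialize Gum before using Module APIs
    for export in Module::enumerate_exports("libc.so.6") {
        if export.name == "open" {
            println!("found {} at {:#x}", export.name, export.address);
        }
    }
}
```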
24
- Inspect minimization
  - misunderstanding of what minimization is
    - thought that minimization would reduce the number of solutions found, keeping only ones with different coverage
  - real use of minimization:
    - reduce the size of the "interesting" inputs while preserving the code coverage
    - removes the "noise" in inputs for easier analysis and mutations
  - docs and examples can be found at:
    - https://docs.rs/libafl/latest/libafl/corpus/minimizer/trait.CorpusMinimizer.html
    - an example fuzzer in `LibAFL/fuzzers/libfuzzer_libpng_cmin/src/lib.rs`
25
- On Windows
  - change visibility for the different modules
  - make sure that the given paths are portable
- Noticed that when opening a file, `fopen` is not called but `open` is
- Another issue: the interceptor does not distinguish between calls from the crates and calls from the code we are targeting
  - we need an interceptor that sets a flag on the Tauri command we are fuzzing (then it's single-threaded?)
26
- Trying to set up the interceptor only when the harness functions are entered
  - when entering the Tauri command we are fuzzing
  - when we are entering the harness: `setup_tauri_mock` + `on_msg`
- In our mental model it's one thread per executed harness
  - the `SyscallIsolationRuntime` is initiated for each thread
  - we should be able to have one flag per `SyscallIsolationRuntime` that is set when the harness function has been entered
- Bug, but maybe disable the other runtimes
27
- Finding a function symbol in the runtime with a pointer rather than a name
  - name mangling makes it harder
  - more precise
- The fuzzer intercepts the `open` syscall
  - this happens in the fuzzer's `panic_hook`, used to write state to disk
    - it's difficult to set the `SyscallIsolationRuntime` flag from the `panic_hook`
    - we dodge the issue by rewriting the `panic_hook`
  - this also happens with the stalker
28
- Trying to refactor `fuzzer.rs` to share the same code between `fuzz_one` and `Launcher`
  - really difficult due to the numerous traits used by LibAFL
  - the trick LibAFL uses is a closure, so no type needs to be spelled out for all the objects used
  - but when turning this into a function:
    - using an `impl` return type does not work, because Rust does not support nested `impl`
    - returning a generic type does not really work either, since the return type is fixed by the function body
    - using the exact type is super difficult too, due to the complexity of the types in LibAFL
  - I think I need a Rust expert for this
- Writing tests for our fuzz targets
  - issue: tests that crash are actually handled by the fuzzer, which then calls `libc::_exit(134)`
    - this is not handled by cargo tests
  - what I've tried:
    - `#[should_panic]`: this is not a panic, so it does not work
    - `panic::set_hook(panic!)`: this is rewritten by the fuzzer =(
    - using abort rather than panic does not work either
  - solved by wrapping the test in another process, re-invoking the binary with `Command::new(std::env::current_exe())` (see the sketch below)
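The pattern looks roughly like this; `run_fuzz` and the `FUZZ_IN_CHILD` variable are illustrative names, and `std::process::exit` stands in for the fuzzer's `libc::_exit(134)`:

```rust
fn run_fuzz() {
    // stand-in for the real harness: the fuzzer exits the process on a crash
    std::process::exit(134);
}

#[test]
fn fuzz_target_crashes() {
    if std::env::var("FUZZ_IN_CHILD").is_ok() {
        // child: run the crashing harness; this never returns normally
        run_fuzz();
        return;
    }
    // parent: re-run this very test binary, filtered to this test, in a child
    let status = std::process::Command::new(std::env::current_exe().unwrap())
        .args(["fuzz_target_crashes", "--exact", "--nocapture"])
        .env("FUZZ_IN_CHILD", "1")
        .status()
        .expect("failed to spawn child test process");
    assert_eq!(status.code(), Some(134));
}
```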
29
- Working on the fuzzing policy
  - we need a more generic and flexible way to specify a security policy; we need the security team's input
  - security policies should be provided as constants for performance
- Restructure the project
  - fuzzer and security policy code moved to the fuzzed application's `fuzz` directory
  - users can now directly see the fuzzer and the policy used rather than looking at an external crate
- Another race condition happened
  - be sure to drop the harness flag before calling any function that might panic
- For conditions we're currently using functions rather than closures
  - this is to avoid Rust issues with trait objects
  - this should be improved in the future
Call Tauri inbuilt commands such as fs_readFile
- Improve `create_invoke_payload`
  - allow an argument specifying a module
  - distinguish between an invocation of a custom command and an inbuilt one
- These commands require a shared state to be managed by the Tauri mock runtime
  - the error message triggered is `state() called before manage() for given type`
  - we can't use our helper function `mock_builder_minimal`
  - use `mock_builder` instead
- The `InvokeRequest` looks like:

```
InvokeRequest {
cmd: "plugin:fs|read_file",
callback: CallbackFn(
2482586317,
),
error: CallbackFn(
1629968881,
),
url: Url {
scheme: "http",
cannot_be_a_base: false,
username: "",
password: None,
host: Some(
Ipv4(
127.0.0.1,
),
),
port: Some(
1430,
),
path: "/",
query: None,
fragment: None,
},
body: Json(
Object {
"options": Object {
"dir": String("toto"),
},
"path": String("foo.txt"),
},
),
headers: {
"content-type": "application/json",
"origin": "http://127.0.0.1:1430",
"referer": "http://127.0.0.1:1430/",
"accept": "*/*",
"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
"tauri-callback": "2482586317",
"tauri-error": "1629968881",
"tauri-invoke-key": "[Ic/:jX^L^q#hDgJd7)U",
},
invoke_key: "[Ic/:jX^L^q#hDgJd7)U",
}
```
- Don't forget to configure the `allowlist` to allow the scope
30
- Move `mini-app/src-tauri/fuzz/` to `mini-app-fuzz`
  - seamless transition, just had to change the dependency in the workspace `Cargo.toml`
- Writing a presentation with Reveal.js
  - presentation added to the mdbook
- Bump to Rust version 1.76
  - update the VM from Rust 1.70 to 1.76
  - un-pin the `clap` package version in `libafl_bolts`: ~4.4 -> 4.0 (4.5)
    - we pinned it because it was not compatible with the Rust version I was using
- Make `LibAFL` a submodule
  - `LibAFL` is also a Rust workspace itself, so we had to `exclude = ["LibAFL"]` it from the root `Cargo.toml`
  - `git config submodule.recurse true` does not seem to be enough to recursively pull the last LibAFL commit
- Writing the user guide
31
- Restructure the repo with a classical monorepo architecture
  - `docs/` with the mdbook and slides
  - `examples/` with mini-app and its fuzz code
  - `crates/` with `LibAFL`, `policies`, `fuzzer`
- Create a TOML configuration file for the fuzzer
  - a simpler intermediary type to `libafl_bolts::FuzzerOptions`
- Why is our code coverage not working for the moment?
  - the `harness` and `libs_to_instrument` options were empty, meaning the stalker was not applied to any part of the executable
  - the `cmplog` module is not implemented for x86-64
  - even when adding the executable to `harness`, it is removed by `libafl_frida` to avoid the stalker analyzing its own code and getting recursion
    - this is annoying with Rust, where you usually use static libraries, so you get one big executable
    - a solution would be to make LibAFL a dynamic lib
      - with a quick try, without persevering, we get some link errors
      - this is not a mess I want to invest time in currently
    - another solution would be the ability to give exact memory ranges on which the Frida stalker should work
      - currently the precision is per `Module`
      - a module is more or less a library
      - for Rust it means the whole executable, with all its crates + basic C libraries
      - ideally we would have the stalker on the main binary and not on any of its crates
      - we could make a PR for that
- When running our binaries, the fuzz_solutions are written in the wrong directory
  - `cargo test` executes in the root directory of the crate containing the tests
  - `cargo run` takes the current directory where the command is executed as root directory
Porting to 2.0
- `InvokeRequest` new format
  - template for a plugin `InvokeRequest`:

```
InvokeRequest {
cmd: "plugin:fs|read_file",
callback: CallbackFn(
3255320200,
),
error: CallbackFn(
3097067861,
),
url: Url {
scheme: "http",
cannot_be_a_base: false,
username: "",
password: None,
host: Some(
Ipv4(
127.0.0.1,
),
),
port: Some(
1430,
),
path: "/",
query: None,
fragment: None,
},
body: Json(
Object {
"options": Object {},
"path": String("README.md"),
},
),
headers: {},
}
```
- Calling plugin commands with the `MockRuntime` (such as `fs:readFile`)
  - the scope can be modified programmatically using:

    ```rust
    let scope = app.fs_scope();
    scope.allow_file("/home/adang/boum/playground/rust/tauri2/src-tauri/assets/foo.txt");
    ```
  - `RuntimeAuthority` requires an ACL and a resolved ACL
    - the `RuntimeAuthority.acl`
      - isn't modifiable programmatically
      - defines which permissions are allowed to be used by the application capabilities
      - the ACL from the runtime authority is generated at build time in the `Context`
      - the code generation for the Tauri app context is located at `tauri-codegen::context::context_codegen`
    - the `Resolved` ACL
      - commands that are allowed/denied
      - scopes associated to these commands
      - it is initialized from the complete ACL and the capabilities declared by the application
- When building a Tauri v2 app, in `tauri-build`:
  - paths to the permission manifests of each plugin are stored in environment variables
    - 3 env variables per plugin are used:
      - `DEP_TAURI_PLUGIN_FS_PERMISSION_FILES_PATH`: where the permission declarations for this plugin are declared
      - `DEP_TAURI_PLUGIN_FS_GLOBAL_API_SCRIPT_PATH`: a JS script containing the API to call commands from the plugin
        - I think this is only used when the option `withGlobalTauri` is set
      - `DEP_TAURI_PLUGIN_FS_GLOBAL_SCOPE_SCHEMA_PATH`: the schema for the scopes of the plugin
  - the permission manifests are parsed
    - manifests contain all the permissions declared by plugins
  - the capabilities file is parsed
    - check that declared capabilities are compatible with the information given by the manifests
- `InvokeRequest` `url`
  - to have requests that are deemed `Local`, use `tauri://localhost`
- The fuzzer does not need to call `tauri_app_builder.run(...)` if:
  - we don't need an event loop
  - we don't need to set up the app
  - we don't need to interact with the app state
- The url of an `InvokeRequest` for local Tauri commands is:
  - "http://tauri.localhost" for Windows and Android
  - "tauri://localhost" for the rest
32
- Github actions
  - use act to run Github actions locally
  - to run tests as Github actions locally:
    - with a Linux container: `act -W ".github/workflows/build_and_test.yml" -j Build-and-test-Fuzzer -P ubuntu-latest=catthehacker/ubuntu:act-latest`
    - on a Windows host: `act -W ".github/workflows/build_and_test.yml" -j Build-and-test-Fuzzer -P windows-latest=self-hosted --pull=false`
    - always run the command twice; the first one usually fails for unknown reasons
- Bug with Rust 1.78
  - Rust 1.78 enables debug assertions in std by default
    - `slice::from_raw_parts` panics when given a pointer which is null/unaligned/bigger than `isize::MAX` (illustrated below)
  - bug in libafl_frida, which triggers this situation when `stalker_is_enabled` is set to true in `libafl_frida/src/helper.rs`
    - and no module is specified to be stalked
    - as a reminder, the stalker is enabled if we want to use code coverage
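A one-liner reproduces the new check; this deliberately violates the documented safety contract, purely to show the assertion firing:

```rust
fn main() {
    // UB on purpose: from Rust 1.78 the precompiled std carries debug
    // assertions, so this panics with "unsafe precondition(s) violated"
    // instead of silently producing a dangling slice.
    let s: &[u8] = unsafe { std::slice::from_raw_parts(std::ptr::null(), 0) };
    println!("{}", s.len());
}
```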
- Bug for coverage when the stalker is enabled
  - in `libafl_frida/src/helper.rs::FridaInstrumentationHelperBuilder::build`
  - the `instrument_module_predicate` returns true for the harness
  - but the `ModuleMap` returned by `gum_sys` is empty
  - this provokes a panic from Rust 1.78
  - the current fix is to disable coverage, but that is not good enough
33
- Generating tests for the CLI
  - issue killing the fuzzer process after launching it with the CLI
    - how do we get the pid of the fuzzer process, which is a different process from the binary run by `cargo run`?
    - Rust does not have a command with timeout
  - we do it by querying the system for processes with a certain exact name
    - this is not super robust
    - behaviour is also platform dependent
    - we limit this test to the Linux platform to avoid future complications
- New issue introduced with Tauri `2.0.0-beta.22`
  - `fs::read_file` returns `InvokeBody::Raw(Vec<u8>)`
  - to get a Rust type from this raw value, Tauri provides this function:

    ```rust
    pub fn deserialize<T: DeserializeOwned>(self) -> serde_json::Result<T> {
        match self {
            ...
            InvokeBody::Raw(v) => serde_json::from_slice(&v),
        }
    }
    ```
  - this is flawed, as `serde_json::from_slice(&v)` expects `v` to be "bytes of JSON text" (from the `serde_json` documentation)
  - what is given by `fs::read_file` are the raw bytes of the content of a file, and this triggers a serialization error
  - for the function `deserialize` to work we need an additional conversion of the raw bytes into bytes of JSON text
  - a proposal that does not completely fix the issue, but at least allows us to recuperate a `Vec<u8>` that can be used for further conversion:

    ```rust
    pub fn deserialize<T: DeserializeOwned>(self) -> serde_json::Result<T> {
        match self {
            ...
            InvokeBody::Raw(v) => {
                let json_string = serde_json::to_string(&v).unwrap();
                serde_json::from_slice(&json_string.into_bytes())
            }
        }
    }
    ```
  - either the function `deserialize` in Tauri is wrong, or what is returned from `fs::read_file` is wrong
Windows
Issues
Troubles building the fuzzer for Windows with LibAFL
- execution issue which does not appear when commenting out calls to the LibAFL fuzzer
- using the `msvc` toolchain
  - building works fine
  - we get `(exit code: 0xc0000139, STATUS_ENTRYPOINT_NOT_FOUND)` when running the binary
    - this happens when Windows fails to load a dll
    - DependencyWalker can help to investigate, but it is deprecated now
    - make sure that there is no discrepancy between the loader version and the compilation toolchain
- using the `windows-gnu` toolchain
  - I need to install `gcc` for linking
- what toolchain should I use?
  - depends on which dynamic library I need to link to
  - look into the libafl repo for hints
    - in the Github actions we see that they use the Windows default stable toolchain
    - that should be `msvc`
- Error found: the `TaskEntryDialog` entrypoint could not be found
  - when running the fuzzer from the Windows UI
  - or using cbc
  - Dependency Walker shows the missing modules
    - one of the main missing modules is `API-MS-WIN-CORE`
  - using `ProcessMonitor` with a filter on `tauri_cmd_1.exe`
    - run the executable and you get all the related events
- Big chance it is related to `tauri-build`, which does a lot on Windows
  - reintroduce a `build.rs` file with `tauri_build::build()`
  - find a way to have a generic and minimal `tauri.conf.json` for the fuzz directory
- Missing C lib or include
  - check for missing libraries from MSVC
  - edit environment variables
  - for clues on where to search, check https://www.reddit.com/r/vscode/comments/itdbld/include_path_not_set_when_trying_to_compile_a_c/
  - `fatal error C1083: Cannot open include file: 'stdint.h'`
    - missing some libraries from MSVC (clang++ tools)
  - `fatal error LNK1104: cannot open file 'msvcprt.lib'`
    - edit the `LIB` environment variable
  - `fatal error LNK1120: 7 unresolved externals`
    - make sure to only include libs for x64 (in our case)
    - set the env variable `LIB` to `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\lib\x64`
  - `fatal error C1034: stdint.h: no include path set`
    - set the env variable `INCLUDE` to `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.42.34433\include`
- Debugging Windows Github actions
  - nektos/act does not have a Windows VM
  - you can run `act -P windows-latest=-self-hosted` if you are on Windows
    - the issue is that you have to start from a fresh Windows environment if you want to reproduce Github action issues
  - debug with an ssh session to the Github runner
    - https://github.com/marketplace/actions/debugging-with-tmate
frida_gum does not find any symbol or export in the Rust binary
- check symbols manually with the equivalent of `nm`, which is `dumpbin.exe`
  - use the developer terminal to run `dumpbin.exe` easily
    - the developer terminal can be opened from Visual Studio in Tools > Command Line
  - Windows executables are stripped of any export symbols
- our previous approach used debug symbols to find the native pointer to the harness
  - debug symbols are not available on Windows (not in the file directly, but in a separate ".pdb" file)
- we changed it so that we use the raw address provided at the beginning to create the `NativePointer`
No display from crash result
- When running the fuzzer, the crash happens but nothing is displayed
- We changed the panic hook order such that the original panic hook is executed before the fuzzer panic hook
Error status of crashed program in fuzzer
- On Windows the error status chosen by LibAFL is 1 instead of 134
Find equivalent of libc functions
- Example: finding the CRT function that is used to open a file
  - debug a function that opens a file with Visual Studio and track the execution
    - the fs.rs file needs to be provided
      - it's in `C:\Users\alex-cn\.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\sys\windows\fs.rs`
    - find the function called: `c::CreateFileW`
    - in the `c` module, find that `CreateFileW` comes from the `kernel32` dll
  - check the Rust source code and find the OS-specific implementation
Tests show tauri_fuzz_tools-917323a62e294d07.exe write_foo (exit code: 0xc0000139, STATUS_ENTRYPOINT_NOT_FOUND)
- This is a similar error message to the previous issue, which was a missing `tauri_build::build()`
  - checked that the build script is executed to build the tests
  - the issue seems to come from the `tauri-fuzz-tools` crate
- From experiments with the `tauri_fuzz_tools` tests:
  - fails to run from the workspace directory with `cargo t`
    - the executable produced is bigger than the successful one
  - runs fine from the workspace directory with `cargo t -p tauri_fuzz_tools`
    - the executable produced is smaller than the failing one
  - runs fine when executing `cargo t` from the crate directory
  - runs fine when putting `tauri_fuzz_tools` as the sole default member of the workspace
  - fails when putting `tauri_fuzz_tools` as default member with any other member
- Adding a Windows Manifest file works to remove the error message
  - https://github.com/tauri-apps/tauri/pull/4383/files
  - does not explain why the compilation worked in certain cases but not in others =(
  - tried with the crate `embed-manifest`
    - the crate seems outdated; it contains build instructions that are not recognized
Fetching values from register does not give expected value
- the policy "block_file_by_names" does not work
- Windows do not use utf-8 encoding but utf-16 for strings
- use the
windowscrate to import correct windows type and do type conversion
- use the
Conflicting C runtime library during linking
```
= note: LINK : warning LNK4098: defaultlib "LIBCMT" conflicts with use of other libs; use /NODEFAULTLIB:library
LINK : error LNK1218: warning treated as error; no output file generated
```
- When this happens
  - I don't really know what made this bug appear
  - one suspicion is the upgrade to Rust 1.78
  - Amr had it first, and I only got it when I manually updated my `rustup`
- Cause of the event
  - conflicting C runtime libraries have been found
  - I see in the compilation logs that we already link against the "msvcrt.lib" C runtime
  - my guess is that some library is trying to link against "libcmt" on top
- Solution found
  - linker options added in the `.cargo/config.toml` file:

    ```toml
    [target.x86_64-pc-windows-msvc]
    rustflags = ["-C", "link-args=/NODEFAULTLIB:libcmt.lib"]
    ```
- To be honest, I don't really understand precisely what's happening and I don't want to dig further. I'm happy to have found a solution quickly, but I expect this to bite me back in the future.
NtCreateFile use flags different from the doc
- doc: https://learn.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntcreatefile
- from the doc, `NtCreateFile` is supposed to use flags such as:
  - FILE_GENERIC_READ: 0x00120089
  - FILE_GENERIC_WRITE: 0x00120116
  - FILE_READ_DATA: 0x00000001
- from the experimentations we get values such as:
  - open file in read only: 0x80100080
  - open file in write only: 0x40100080
- this matches other known Windows constants:
  - GENERIC_READ: 0x80000000
  - GENERIC_WRITE: 0x40000000
- we will use these flags, even though this differs from what is described in the doc
Conversion from windows string to Rust string
- Windows uses wide characters for unicode support, which are u16
- also, the Windows API uses strings that should be null-terminated
- https://stackoverflow.com/questions/73935490/how-can-i-convert-lpwstr-into-str (see the sketch below)
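A sketch of the conversion we ended up needing (assumes the pointer references a valid, null-terminated UTF-16 buffer):

```rust
// Convert a null-terminated Windows wide string (UTF-16, u16 units) into a
// Rust String. Caller must guarantee `ptr` is valid and null-terminated.
unsafe fn wide_cstr_to_string(ptr: *const u16) -> String {
    let mut len = 0usize;
    while *ptr.add(len) != 0 {
        len += 1;
    }
    let units = std::slice::from_raw_parts(ptr, len);
    // Lossy: unpaired surrogates become U+FFFD instead of failing.
    String::from_utf16_lossy(units)
}

fn main() {
    let wide: Vec<u16> = "C:\\foo.txt".encode_utf16().chain(std::iter::once(0)).collect();
    assert_eq!(unsafe { wide_cstr_to_string(wide.as_ptr()) }, "C:\\foo.txt");
}
```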
Docker on Windows
- Docker daemon can be started by launching Docker desktop
- `docker-credential-desktop` not installed or not available in PATH
  - in the file `C:\Users\user\.docker\config.json`, delete the `credsStore` field
Tools for debugging
- `ProcessMonitor` to see all the events related to a process
- `DependencyWalker` to investigate issues related to modules/dlls
Default policy
- We want a default policy that catches any call to an external binary that returns an error
  - our intuition: a call to an external binary that can result in a syntax error also has a chance to be vulnerable to an exploit
  - with the fuzzer there is a high chance that "vulnerable" calls to external processes will result in syntax errors
- We want to attach to Rust's `std::process::Command::spawn/output`
  - I don't see the symbols of these functions in the binary; I don't really get why
- Maybe the solution is to attach to the `execv` family of calls and monitor the return status of the call
  - this is lower level than Rust's `Command`; we can catch more external interactions from the app we monitor
  - I believe this is called by Rust's `Command`, but I need to check that
- All functions from the `exec` family call `execve`
  - seen in this implementation of libc: https://github.com/zerovm/glibc/blob/master/posix/execv.c
- Fuzzer crashes when monitoring `execv`
  - it does not crash when monitoring other functions
  - with fuzz_test:
    - with a rule that never blocks, it crashes in the fuzzer code
    - with a rule that always blocks, it crashes in the harness and is captured by the fuzzer
      - actually the harness has time to finish; the corruption appears after the harness: `*** stack smashing detected ***: terminated`
  - with fuzz_main:
    - with a rule that always blocks, it crashes when the tauri command is finished but the harness has not finished yet: `*** stack smashing detected ***: terminated`
    - with a rule that never blocks, same behaviour
  - I think that after the harness the fuzzer calls `execve` before the flag is removed
- Call order starting from when the harness is being called:
  - in `libafl::Executor::run_target`: `let ret = (self.harness_fn.borrow_mut())(input);`
    - `libafl::executors::inprocess::GenericInProcessExecutor`
    - `core::ops::function::FnMut::call_mut`
    - `ls_with_rust::harness`, with `ls_with_rust` the binary being executed
  - `_gum_function_context_begin_invocation`
    - `gum_tls_key_get_value` / `pthread_getspecific`
    - `gum_tls_key_set_value`
    - `get_interceptor_thread_context`
    - `gum_thread_get_system_error`
    - `gum_invocation_stack_push`
    - `gum_sign_code_pointer`
    - `gum_rust_invocation_listener_on_enter` / `frida_gum::interceptor::invocation_listener::call_on_enter` / `libafl_frida::syscall_isolation_rt::HarnessListener::on_enter`
    - `gum_thread_set_system_error` / `__errno_location@plt`
    - `gum_tls_key_set_value` / `pthread_setspecific`
  - harness code ...
  - pure asm code that pushes registers on the stack
    - looks like a context switch, with the context being saved on the stack
  - `_gum_function_context_end_invocation`
    - `gum_tls_key_set_value` / `pthread_setspecific@plt`
    - `gum_thread_get_system_error` / `__errno_location@plt`
    - `get_interceptor_thread_context` / `_frida_g_private_get` / `g_private_get_impl` / `pthread_getspecific@plt`
    - `gum_sign_code_pointer`
    - `gum_rust_invocation_listener_on_leave` / `frida_gum::interceptor::invocation_listener::call_on_leave` / `frida_gum::interceptor::invocation_listener::InvocationContext::from_raw` / `libafl_frida::syscall_isolation_rt::HarnessListener::on_leave`
    - `gum_thread_set_system_error` / `_errno_location@plt`
    - `_frida_g_array_set_size` / `gum_tls_key_set_value` / `pthread_setspecific`
  - pure asm code that pops stack values into registers
    - restores the context switch
  - in `__execvpe_common.isra`: here we crash
- Correct execution trace at the end of the harness:
  - pure asm code that pushes registers on the stack
    - looks like a context switch, with the context being saved on the stack
  - `_gum_function_context_end_invocation`
    - `gum_tls_key_set_value` / `pthread_setspecific@plt`
    - `gum_thread_get_system_error` / `__errno_location@plt`
    - `get_interceptor_thread_context` / `_frida_g_private_get` / `g_private_get_impl` / `pthread_getspecific@plt`
    - `gum_sign_code_pointer`
    - `gum_rust_invocation_listener_on_leave` / `frida_gum::interceptor::invocation_listener::call_on_leave` / `frida_gum::interceptor::invocation_listener::InvocationContext::from_raw` / `libafl_frida::syscall_isolation_rt::HarnessListener::on_leave`
    - `gum_thread_set_system_error` / `_errno_location@plt`
    - `_frida_g_array_set_size` / `gum_tls_key_set_value` / `pthread_setspecific`
  - pure asm code that pops stack values into registers
  - here we don't crash, contrary to above
- New approach where we detach the Frida listeners of monitored functions instead of deactivating them
  - contrary to what the docs say, calling `Gum::obtain` produces a deadlock (in the doc it's supposed to be a no-op)
  - without `Gum::obtain` we can't detach the monitored function listeners
- Weirdest thing ever: the crash does not appear anymore under gdb when putting a breakpoint on `execve`
- I'm temporarily giving up on monitoring `execv`
  - I still think it's the ...
- Trying with `__execve` instead of `execve`
  - maybe C weak links mess with Frida
  - not working either
- OK, I just noticed that my approach was wrong anyway
  - `execve` is usually called in the child process after being forked
  - the Frida Rust bindings do not support monitoring the child process anyway
  - I still don't know why there was a bug
Improving engine code
- Our rules now use the `Fn` closure trait rather than `fn` objects
- this allows us to make rules that are more flexible, with captured variables and arguments
- the main issue was to use `Box<dyn Fn>` while also implementing `Clone`
  - inspiration from Tauri code with all its callbacks
  - this thread helped us solve the issue: https://users.rust-lang.org/t/how-to-clone-a-boxed-closure/31035/7
  - replace `Box` by `Arc` (see the sketch below)
  - we could also create a manually cloneable `Box<dyn Fn>` like this example:
    - https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6ca48c4cff92370c907ecf4c548ee33c
Improve tests
- Refactor tests to avoid too much repetition
- All tests are gathered in a single crate to avoid too much disk usage
Default policy
- We have a new approach where we monitor the `std::process::Command` API
  - we detect any new process creation
    - we track `Command::status`, `Command::output`, `Command::spawn`
    - ideally we could track a single function: `std::sys::windows/unix::process::Command`
      - all the above functions call this private function
      - unfortunately this function is private and we can't attach to it with Frida
      - actually it seems we can! Just found this in the symbols:
        `"SymbolDetails _ZN3std3sys3pal4unix7process13process_inner71_$LT$impl$u20$std..sys..pal..unix..process..process_common..Command$GT$5spawn17hffc9080bc0517252E: [0x0000555740c67360, 0x0000555740c680c1]",`
  - we can also detect the error status of an external process
    - we track `Command::output`, `Command::status`, `Child::wait`, `Child::try_wait`, `Child::wait_with_output`
    - an issue is that we don't know which binary we returned from
  - a limit of our current approach is that we can only detect invocations of external binaries through the Rust API
    - we don't detect invocations of external binaries through libc `fork` + `execve`
    - but we could monitor `wait` and `waitpid` to track error status
  - this is not possible currently on Windows
    - Windows executables are stripped of their symbols and Frida is unable to find functions from the standard library
    - this is OK because we provide a more comprehensive policy that targets functions from `ntdll.dll` and `kernel32.dll`
- We monitor `wait` and `waitpid`
  - this is a superset of monitoring Rust's `std::process::Command`
  - we had to modify the policy engine to add a storage that can record function parameters at entry, to be reused when analysing the function at exit
    - this is necessary due to the common C pattern of storing the results of a function in a mutable pointer given as parameter
  - Question: does libc usually call `wait` or `waitpid` after spawning a child process?
    - it should, otherwise it would create zombie processes
  - Can we do better?
    - ideally we would track `fork` + `execve`, but it seems too complex with Frida
    - an external process can be called by other means than creating a child process
      - for example in SQL, an RPC is used to talk to the SQL server and no `fork` is ever used
      - we would also need to track networking then =(
    - we are assuming that a child process returns 0 as exit status when the execution went well. Is it always true?
- API for Windows
  - nice blogpost on process creation in Windows
    - https://fourcore.io/blogs/how-a-windows-process-is-created-part-1
  - browsing through Rust std, we see that to create a child process Rust std uses:
    - `CreateProcess` from `kernel32.dll`
    - `NtCreateProcess` from `ntdll.dll`
    - both of these functions can be used to create child processes
      - https://www.codeproject.com/articles/11985/hooking-the-native-api-and-controlling-process-cre
  - to wait for a child process, Rust std uses `WaitForSingleObject` from `kernel32.dll`
    - using the debugger to go deeper in the call stack, we reach `NtWaitForSingleObject`
    - `NtWaitForSingleObject` uses a syscall, so it's the limit between user mode and kernel mode
    - `kernel32.dll` supposedly is more or less a wrapper for `ntdll.dll`
  - seems like `WaitForSingleObject` does not help us determine whether a child process has failed or not
    - we should try monitoring `GetExitCodeProcess`
    - I don't know if this function is actually used in programs and if monitoring it is worth it
      - it's actually used in Rust's `Command::output/status`
      - it works
libc wait
- we want to also capture the error status of child processes that were invoked through the libc API
  - to my knowledge these child processes are invoked using `fork` then `execve`
  - one way to get the return status of these child processes is to capture calls to `wait` from the parent process
- the issue with `wait` is that the child exit status is returned by mutating a variable passed as argument, not through the return value (see the illustration below)
- to fix that we may need to store the pointer that was provided as argument, to be able to check it on exit
- we implemented that and it works great
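A minimal Unix illustration of why the pointer must be stashed (uses the `libc` crate):

```rust
// wait() returns the pid; the child's exit status comes back only through
// the out-parameter, so a hook must record that pointer on entry and
// dereference it on exit.
fn main() {
    unsafe {
        let pid = libc::fork();
        if pid == 0 {
            libc::_exit(7); // child: exit with a known status
        }
        let mut status: libc::c_int = 0;
        let waited = libc::wait(&mut status); // status written via the pointer
        assert_eq!(waited, pid);
        assert!(libc::WIFEXITED(status));
        assert_eq!(libc::WEXITSTATUS(status), 7);
    }
}
```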
Bug with plugin fs_readFile
- For an unknown reason, when accessing the filesystem with `tauri_plugin_fs` the interception does not occur
  - this does not happen when accessing the filesystem with other functions
- Possible reasons for that:
  - `tauri_plugin_fs::read_file` does not call `open`
    - this is unlikely, since `tauri_plugin_fs` uses this Rust code: `let file = std::fs::OpenOptions::from(open_options.options)`
  - Tauri plugins are executed in a context which is not tracked by Frida
    - in another process?
    - let's check the Tauri changelog
- We solve this in another PR
  - from our investigation it seems that the listener to the harness does not function
    - it works when giving it a pointer to the Tauri commands we want to fuzz
    - it does not seem to work when giving it the whole harness
    - the address of the harness we give to the fuzzer and the one found in the binary seem to differ; I don't know the cause
      - I believe it's because we improved the code to be polymorphic
      - due to monomorphisation there should be multiple implementations of our generic function
    - we changed the way we take the harness pointer, making it a function rather than a closure
  - another issue: it seems that `fs_read_file` is called outside of the monitoring time of the fuzzer runtime
    - hence calls to the filesystem are not caught
    - why does it happen?
      - I think it might be because `fs_read_file` is an async function
      - tested it by making the function sync, and it's definitely the culprit
Removing LibAFL fork from the project
- the project is more about having a runtime that detects anomalies during fuzzing than about creating a fuzzer in itself
- we can decouple the project from LibAFL further and remove our fork of LibAFL, to stay in sync with the upstream version
- for convenience we are still coupled with `libafl_frida` by implementing the ...
Handling async Tauri commands
- Big issue: async Tauri commands break our current monitoring approach
  - we monitor the functions while the harness is executing
  - the harness is responsible for calling the Tauri commands
  - if the Tauri command is async, the harness may terminate before the Tauri command is completed
  - when leaving the harness, our runtime stops monitoring the target functions, and they end up being executed outside of the monitoring time frame
  - it's important to stop monitoring functions outside of the harness, or else we would also block code from the fuzzer
- One way to solve it would be to stop calling Tauri commands through the webview and to call the Tauri commands directly like normal functions, where the Tauri app backend is treated as a simple crate (sketched below)
  - calls to async commands could be made blocking
  - we could gain speed
  - but that would need a big overhaul of the fuzzer
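A sketch of that direct-call idea; `read_file_cmd` is a stand-in command, and `tauri::async_runtime::block_on` is Tauri's helper for blocking on a future:

```rust
// Treat the backend as a plain crate: call the async command directly and
// block on it, skipping the webview entirely.
#[tauri::command]
async fn read_file_cmd(path: String) -> Result<Vec<u8>, String> {
    std::fs::read(&path).map_err(|e| e.to_string())
}

fn main() {
    // In a harness, `path` would come from the fuzzer's input bytes.
    let result = tauri::async_runtime::block_on(read_file_cmd("foo.txt".into()));
    println!("{:?}", result.map(|bytes| bytes.len()));
}
```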