
# Baby tokens fuzzer
The `tokenizer` is used to split inputs into tokens.

The `encoder_decoder` assigns every new token a fresh id and records the mapping. It can then convert a sequence of tokens into an `EncodedInput`, and vice versa.
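To make the mapping concrete, here is a small, self-contained sketch of the idea. It is not LibAFL's `TokenInputEncoderDecoder`; all names below are illustrative, and it splits on whitespace only:

```rust
use std::collections::HashMap;

/// Toy encoder/decoder: every distinct token gets a fresh u32 id, and both
/// directions of the mapping are stored so encoded inputs can be decoded back.
#[derive(Default)]
struct ToyEncoderDecoder {
    token_to_id: HashMap<String, u32>,
    id_to_token: Vec<String>,
}

impl ToyEncoderDecoder {
    /// Encode a whitespace-separated input into a sequence of token ids,
    /// assigning a new id whenever an unseen token appears.
    fn encode(&mut self, input: &str) -> Vec<u32> {
        let mut codes = Vec::new();
        for tok in input.split_whitespace() {
            let next_id = self.id_to_token.len() as u32;
            let id = *self.token_to_id.entry(tok.to_string()).or_insert(next_id);
            if id == next_id {
                // First time we see this token: record the reverse mapping.
                self.id_to_token.push(tok.to_string());
            }
            codes.push(id);
        }
        codes
    }

    /// Decode a sequence of token ids back into text (the "vice versa" path).
    fn decode(&self, codes: &[u32]) -> String {
        codes
            .iter()
            .filter_map(|&id| self.id_to_token.get(id as usize).map(String::as_str))
            .collect::<Vec<_>>()
            .join(" ")
    }
}

fn main() {
    let mut enc = ToyEncoderDecoder::default();
    let codes = enc.encode("if ( a == b ) { a = b ; }");
    println!("encoded: {:?}", codes); // [0, 1, 2, 3, 4, 5, 6, 2, 7, 4, 8, 9]
    println!("decoded: {}", enc.decode(&codes));
}
```

Keeping the reverse mapping (`id_to_token` above) is what makes the "vice versa" direction possible: mutated id sequences can be decoded back into bytes before they are handed to the target.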
The `encoded_mutations` provide the token-level mutations; their definition is:

```rust
pub fn encoded_mutations() -> tuple_list_type!(
    EncodedRandMutator,
    EncodedIncMutator,
    EncodedDecMutator,
    EncodedAddMutator,
    EncodedDeleteMutator,
    EncodedInsertCopyMutator,
    EncodedCopyMutator,
    EncodedCrossoverInsertMutator,
    EncodedCrossoverReplaceMutator,
)
```
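For intuition about what such mutators do, here is a minimal, dependency-free sketch of a token-level random mutation in the spirit of `EncodedRandMutator`. It is not LibAFL's implementation: it works on a plain `Vec<u32>` of token ids instead of an `EncodedInput`, restricts replacements to ids the encoder has already assigned so the result stays decodable, and uses a tiny xorshift PRNG purely for illustration:

```rust
/// Tiny xorshift PRNG, used only to keep this sketch dependency-free.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Token-level random mutation: pick one position in the encoded input and
/// overwrite its token id with another id the encoder has already assigned.
fn rand_token_mutation(codes: &mut [u32], num_known_ids: u32, rng_state: &mut u64) {
    if codes.is_empty() || num_known_ids == 0 {
        return;
    }
    let pos = (xorshift(rng_state) as usize) % codes.len();
    let new_id = (xorshift(rng_state) % u64::from(num_known_ids)) as u32;
    codes[pos] = new_id; // a whole token is swapped, never a partial byte
}

fn main() {
    // Encoded form of some input, e.g. as produced by the encoder sketch above.
    let mut codes = vec![0, 1, 2, 3, 4, 5, 6, 2, 7, 4, 8, 9];
    let mut rng_state = 0x1234_5678_9abc_def0_u64;
    rand_token_mutation(&mut codes, 10, &mut rng_state);
    println!("mutated codes: {:?}", codes);
}
```

The key point is that the mutation operates on whole tokens (ids) rather than raw bytes, so the mutated input still decodes to a valid sequence of tokens.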