
Baby tokens fuzzer

  1. A tokenizer is used to split inputs into tokens.
  2. The encoder_decoder assigns every new token a fresh id and records the mapping, so it can convert tokens into an EncodedInput and back again (a sketch of this round trip follows the list).
  3. encoded_mutations provides the token-level mutations; its definition is:

```rust
pub fn encoded_mutations() -> tuple_list_type!(
    EncodedRandMutator,
    EncodedIncMutator,
    EncodedDecMutator,
    EncodedAddMutator,
    EncodedDeleteMutator,
    EncodedInsertCopyMutator,
    EncodedCopyMutator,
    EncodedCrossoverInsertMutator,
    EncodedCrossoverReplaceMutator,
)
```
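
Putting the pieces together, here is a minimal sketch of the encode/decode round trip. It assumes the LibAFL encoded-input API used by this example (NaiveTokenizer, TokenInputEncoderDecoder and the InputEncoder/InputDecoder traits); exact module paths and signatures may differ between LibAFL versions.

```rust
// Sketch only: names follow the LibAFL version this example targets
// and may differ in other releases.
use libafl::inputs::encoded::{
    InputDecoder, InputEncoder, NaiveTokenizer, TokenInputEncoderDecoder,
};

fn encode_decode_round_trip(raw: &[u8]) {
    // 1. The tokenizer splits the raw bytes into tokens.
    let mut tokenizer = NaiveTokenizer::default();

    // 2. The encoder/decoder maps each new token to an id and remembers the
    //    mapping, producing an EncodedInput (a sequence of token ids).
    let mut encoder_decoder = TokenInputEncoderDecoder::new();
    let encoded = encoder_decoder
        .encode(raw, &mut tokenizer)
        .expect("tokenization failed");

    // 3. The encoded mutators (e.g. a scheduled mutator built from
    //    encoded_mutations()) operate on this id sequence; before running the
    //    target, the harness decodes it back to bytes.
    let mut bytes = Vec::new();
    encoder_decoder
        .decode(&encoded, &mut bytes)
        .expect("decoding failed");
}
```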