From 403aa1cfec966900250a0c2309994c41321babab Mon Sep 17 00:00:00 2001
From: Simon Klüttermann
Date: Sat, 29 Jan 2022 13:10:12 +0100
Subject: [PATCH] updated readme

---
 README | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/README b/README
index 150c06f..7007b40 100644
--- a/README
+++ b/README
@@ -5,5 +5,9 @@ To solve this, you can add models depending on correlations between them. But
 in n2ulayer.py and mu.py define this special kind of neural network. loss.py defines the correlation we want to minimize for use in tensorflow.
 onemodel.py generates a quick (and quite random) anomaly detection model for use on the data defined in data.py (just a 2d gaussian).
 20 models are generated and their predictions (sorted from most normal (green) to most anomal (red)) drawn in the numbered images in imgs
-If you use all 20 models and simply average them this results in imgs/recombine.png. Notice how the green points are much more centered.
-choosenext
+If you use all 20 models and simply average them, this results in imgs/recombine.png. Notice how the green points are much more centered. (This image is created by recombine.py.)
+choosenext.py creates and uses the tensorflow model to find a list of predictions that is least correlated to a given list of predictions.
+main.py uses this to combine a random model (before.png) with a combination of 4 models (suggestion.png) into updated.png. Notice how the area is covered much better in updated.png than in before.png.
+Your task would be to extend this method to combine arbitrarily many models (use remainder in main.py, find a better combination function than combine(a,b) in main.py, and introduce an exit condition) and test whether this method results in more stable/powerful ensembles.
+If you have any questions, please feel free to write an email to Simon.Kluettermann@cs.tu-dortmund.de
+
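
loss.py itself is not included in this patch, but the README describes it as "the correlation we want to minimize for use in tensorflow". A minimal sketch of such a loss, assuming it is the Pearson correlation between two prediction vectors (the name correlation_loss and the exact form are illustrative, not the actual contents of loss.py):

    import tensorflow as tf

    def correlation_loss(a, b):
        # Pearson correlation between two 1d prediction tensors.
        # Minimizing its magnitude pushes the new predictions (b) to be
        # decorrelated from the existing ensemble's predictions (a).
        a_c = a - tf.reduce_mean(a)
        b_c = b - tf.reduce_mean(b)
        cov = tf.reduce_mean(a_c * b_c)
        std = tf.math.reduce_std(a) * tf.math.reduce_std(b) + 1e-8
        return tf.abs(cov / std)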
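
The extension task (combining arbitrarily many models with an exit condition) could, for example, take the shape of a greedy loop. This is only a sketch under assumed names: build_ensemble, the max_corr threshold, and the plain averaging that stands in for a better combine(a, b) are placeholders, not code from main.py:

    import numpy as np

    def build_ensemble(predictions, max_corr=0.95):
        # predictions: list of 1d numpy arrays, one per model.
        # Greedily add the remaining prediction least correlated with the
        # current ensemble; stop once even the best candidate is too
        # correlated with it (the exit condition).
        ensemble = predictions[0]
        remainder = list(predictions[1:])
        while remainder:
            corrs = [abs(np.corrcoef(ensemble, p)[0, 1]) for p in remainder]
            best = int(np.argmin(corrs))
            if corrs[best] > max_corr:  # nothing left adds diversity
                break
            # Plain averaging stands in for a better combine(a, b).
            ensemble = 0.5 * (ensemble + remainder.pop(best))
        return ensemble

Note that 0.5*(a+b) weights each newly added model as heavily as the whole existing ensemble; a running mean that weights the k-th added model by 1/k would be one candidate for a better combination function.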