Hi everyone,
While checking the code of the ipynb notebooks with benchmark results for the three environments TinyToy, ToyCTF, and Chain, I found several bugs. I think my findings might be useful for the community that uses this nice implementation of cyberattack simulation.
MOVED TO SEPARATE ISSUE #115
- Issue 1: `learner.epsilon_greedy_search(...)` is often used for training agents with different algorithms, including DQL in `dql_run`. However, `dql_exploit_run`, which takes the network from `dql_run` as the policy agent and an `eval_episode_count` parameter for the number of episodes, gives the impression that these runs evaluate the trained DQN. The only distinguishable difference between the two runs is epsilon equal to 0, which switches training into exploitation mode but does not disable training: during a run with `learner.epsilon_greedy_search`, `optimizer.step()` is still executed on every step in `agent_dql.py`, via the call to `learner.on_step(...)`.
- Solution: I will include in the pull request the code I used for a proper evaluation (based on `learner.epsilon_greedy_search(...)`) and used to generate the figures below; a minimal sketch of such an evaluation loop is given after the figure captions.
- Screenshots: Figures 1 & 2 and Figures 3 & 4 show the results of evaluating the chain network using the corresponding new cell in notebook_benchmark-chain.ipynb. As you can see in Figure 1, training on the initial 50 episodes is not enough to own 100% of the network (the AttackerGoal), whereas the original `dql_exploit_run`, which internally uses `learner.on_step(...)` (Figure 2), leads to much better results, because the optimization process keeps learning from the agent's ongoing experience. We can avoid this inaccurate evaluation and still reach the goal 100% of the time (Figure 3) by training on 200 episodes with `learner.on_step()` commented out. This freezes the trained network and stops optimization during evaluation, yet still leads to ownership of the whole network thanks to the larger number of training episodes. In other words, with 200 episodes it is feasible to learn the optimal attack path inside the chain network configuration.
Lastly, in Figure 4 we can compare those runs: with correct evaluation over 20 episodes, the cumulative reward reaches 6000+ and 120+ for 200 and 50 training episodes respectively.
Figure 1: (after PR) no optimizer during evaluation, 20 trained episodes, 20 evaluation episodes
Figure 2: (before & after PR) dql_exploit_run with optimizer during evaluation, 20 trained episodes, 5 evaluation episodes
Figure 3: (after PR) no optimizer during evaluation, 200 trained episodes, 20 evaluation episodes
Figure 4: (after PR) comparison of evaluation for network trained on 200 and 20 episodes, chain network configuration
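For reference, here is a minimal sketch of the kind of evaluation loop meant above (not the exact code of the PR): it rolls out a frozen policy greedily (epsilon = 0) and deliberately never calls `learner.on_step(...)`, so `optimizer.step()` in `agent_dql.py` is never reached. The plain gym-style `reset`/`step` interface and the `select_action` callable are simplifying assumptions; the actual PR builds on `learner.epsilon_greedy_search(...)` and the baseline agent wrappers.

```python
from typing import Any, Callable, List
import numpy as np

def evaluate_policy(env: Any,
                    select_action: Callable[[Any], Any],
                    eval_episode_count: int,
                    iteration_count: int) -> np.ndarray:
    """Roll out a frozen policy and return the cumulative reward of each episode.

    `select_action(observation)` is assumed to wrap the trained DQN in pure
    exploitation mode (epsilon = 0). Crucially, the loop contains no
    learner.on_step(...) call, so no optimizer.step() can run during evaluation.
    """
    cumulative_rewards: List[float] = []
    for _ in range(eval_episode_count):
        observation = env.reset()
        total_reward = 0.0
        for _ in range(iteration_count):
            action = select_action(observation)
            observation, reward, done, _info = env.step(action)
            total_reward += reward
            if done:  # e.g. the AttackerGoal was reached
                break
        cumulative_rewards.append(total_reward)
    return np.array(cumulative_rewards)
```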
- Issue 2: During training, each episode ends only when the maximum number of iterations is reached, due to a mistake in the AttackerGoal class. The default value of the parameter `own_atleast_percent: float = 1.0` is combined with AND into the condition that raises the flag `done = True`; for TinyToy and ToyCTF (though not Chain) this leads to long training runs, a wrong RL signal for estimating the Q function, and low sample efficiency.
- Solution: To stay coherent with the originally defined environments, I updated the gym registry so that the previous environment versions keep their behavior while new environments use `done` in the standard way. Concretely, `own_atleast_percent: 1.0` is set explicitly in the initialization of the `"v0"` versions of the `toyctf` and `tinytoy` environments, and new envs 'CyberBattleTiny-v1' and 'CyberBattleToyCTF-v1' are created with defaults `own_atleast_percent=0` and `own_atleast=6`. This is reasonable because the CTF solution only requires owning 6 nodes, and with correct reward engineering training stops at the attack that owns those 6 nodes with the highest reward. A sketch of the new registrations is given after the figure captions below.
- Screenshots: Figure 5: Length of training episodes, with an obvious increase while the optimal path is being learned
Figure 6: 1500 max iterations during training of 20 episodes, before PR
Figure 7: training on both 20 and 200 episodes; either use more RL techniques or train for more episodes
- PR: I included some leftover cells in ToyCTF for comparison ("Before PR"); they can be safely deleted.
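To make the registry change concrete, below is a rough sketch of what the new `-v1` registrations could look like. It uses the standard gym registration mechanism and the `AttackerGoal` class from `cyberbattle._env.cyberbattle_env`; the entry-point module and class names are assumptions mirroring the existing `-v0` registrations and may need adjusting to the actual code base.

```python
# Sketch only: entry-point paths below are assumed from the existing -v0
# registrations in cyberbattle/__init__.py and are not verified here.
from gym.envs.registration import register
from cyberbattle._env.cyberbattle_env import AttackerGoal

register(
    id='CyberBattleToyCTF-v1',
    entry_point='cyberbattle._env.cyberbattle_toyctf:CyberBattleToyCtf',
    kwargs={
        # The ToyCTF solution only requires owning 6 nodes, so the episode ends
        # as soon as own_atleast=6 is satisfied; own_atleast_percent=0.0 removes
        # the old AND-condition that forced owning 100% of the network.
        'attacker_goal': AttackerGoal(own_atleast=6, own_atleast_percent=0.0),
    },
)

register(
    id='CyberBattleTiny-v1',
    entry_point='cyberbattle._env.cyberbattle_tiny:CyberBattleTiny',
    kwargs={
        'attacker_goal': AttackerGoal(own_atleast=6, own_atleast_percent=0.0),
    },
)
```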
MOVED TO SEPARATE ISSUE #115
- Issue 3: The ToyCTF benchmark is inaccurate: with a correct evaluation procedure, as used for the chain network configuration, the agent does not reach the goal of 6 owned nodes after 200 training episodes.