I've noticed a number of people using AI Dungeon to test GPT-3's abilities. While it's a great way to see how GPT-3 can power an interesting application, it's a poor test of GPT-3's abilities in general. The first generation of any custom prompt is actually GPT-2.
Are there any other differences you can tell us about? Prepending, separating, or wrapping input? Fine-tuning on some story-focused corpus? Context size limits? Something else?
We cut off the generation at certain points (trailing sentences, etc.), disable certain tokens to improve performance or make generation safer, fine-tune on text adventures, and only use the last ~1000 tokens of context.
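Two of the tricks mentioned above are easy to sketch in code: keeping only the last ~1000 tokens of context, and cutting the generation back at trailing incomplete sentences. Below is a minimal Python sketch of that idea; the function names are illustrative, and whitespace splitting stands in for the real BPE tokenizer GPT-3 uses, so the token count is only approximate.

```python
import re

# Approximate context window AI Dungeon reportedly keeps.
MAX_CONTEXT_TOKENS = 1000

def truncate_context(text: str, max_tokens: int = MAX_CONTEXT_TOKENS) -> str:
    """Keep only the last ~max_tokens tokens of the context.

    Whitespace splitting is a stand-in for real BPE tokenization.
    """
    tokens = text.split()
    return " ".join(tokens[-max_tokens:])

def trim_trailing_sentence(generated: str) -> str:
    """Cut the generation back to the last complete sentence,
    dropping any trailing fragment."""
    last = None
    # Find the last sentence-ending punctuation followed by whitespace or end.
    for m in re.finditer(r"[.!?](?=\s|$)", generated):
        last = m
    if last is None:
        return generated  # no complete sentence; leave as-is
    return generated[: last.end()]
```

For example, `trim_trailing_sentence("You enter the cave. It is dark and you")` drops the dangling fragment and returns `"You enter the cave."`. The third trick, disabling certain tokens, would be done at sampling time (e.g. by biasing or masking specific token logits) rather than by post-processing the text.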
Replying to @nickwalton00
The last ~1000 tokens of context "to be remembered" and regular together or only regular? I.e. does remembered stuff have its own space in the prompt?

12:27 PM · Aug 3, 2020
