I'm a newbie to this stuff, but I did manage to get Dalai up and running with the LLaMA and Alpaca 7B and 13B models.
The results I get seem to be random snippets of Reddit or Facebook posts. Very often the output even includes something like "Next Post >>" or a post timestamp.
Is this using the same model? Is there some kind of tuning I have to do?
You have to use the prompt format Stanford fine-tuned it on:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
or
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
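If it helps, here's a minimal Python sketch of how you might build those prompt strings before feeding them to the model (the `alpaca_prompt` helper and the example instruction are just placeholders I made up, not part of Dalai):

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Wrap an instruction (and optional input) in the Stanford Alpaca format."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: paste the resulting string into the Dalai prompt box as-is.
print(alpaca_prompt("Explain what a llama is in one sentence."))
```

Note this format only matters for the Alpaca weights; the base LLaMA models are plain text-completion models and will ramble no matter how you prompt them.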