Do we have any idea how large the model file actually is? I assumed it worked like the txt2img models, where precise data isn't hardcoded or stored literally anywhere for reference. A model that could accurately render all the WHOIS data in the world would by itself be almost as large as the WHOIS database of 400+ million domains, since addresses are often arbitrary. There's not much in WHOIS data that can be compressed into a neural network as a set of features and spat back out losslessly. So if it had just produced a realistic-looking IP address at random, sure... but that is an actual BBC IP address.
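
Napkin math on that claim (every number here is my own rough assumption, not a measurement): if each WHOIS record is on the order of a kilobyte of mostly arbitrary text, memorizing all of them losslessly would need storage in the same ballpark as the database itself, even with generous compression.

```python
# Back-of-envelope estimate of what lossless WHOIS memorization would cost.
# All three constants are assumptions for illustration, not real figures.

NUM_DOMAINS = 400_000_000     # assumed ~400M registered domains
BYTES_PER_RECORD = 1_000      # assumed ~1 KB of largely arbitrary data per record
COMPRESSION_RATIO = 4         # generous assumed 4:1 lossless compression

raw_bytes = NUM_DOMAINS * BYTES_PER_RECORD
compressed_bytes = raw_bytes // COMPRESSION_RATIO

print(f"raw:        {raw_bytes / 1e12:.1f} TB")   # → 0.4 TB
print(f"compressed: {compressed_bytes / 1e9:.0f} GB")  # → 100 GB
```

Even with those charitable assumptions you land at roughly 100 GB of irreducible data, which is on the order of the weights of a large model all by itself, before the model learns anything else.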