“Oh drat these computers. They’re so naughty and so complex. I could just pinch them.” (Marvin the Martian)
Garbage In, Garbage Out.
ChatGPT and AI are all the rage right now, and they do show promise, but remember that they are like humans in one respect: if they are exposed to faulty or biased information, they will produce faulty and even scary results. Remember Microsoft’s Tay chatbot, shut down within a day of its launch after users trained it to spew hateful, almost nihilistic and dangerous views?
A small flaw in the assumptions entered into a program can have massive negative effects. Think of people in 2000 assuming they could keep earning a minimum of 12% annual returns in the market for their retirement. Or insurance companies basing calculations on the 9% 30-year government bond rates of fifteen years earlier. Or oversimplified climate models that ignore volcanoes and sunspot activity in favor of trend analysis built on highly incomplete but widely accepted models, promulgated by those with a financial interest in the theoretical outputs that influence legislation and investment.
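A quick sketch makes the retirement example concrete. The 12% figure comes from the text above; the alternative rates, the $10,000 annual contribution, and the 30-year horizon are hypothetical numbers chosen only to show how sensitive the projection is to the assumed return.

```python
# Illustrative only: how much a long-term projection depends on one assumption.
# Contribution amount, horizon, and comparison rates are hypothetical.

def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Future value of a fixed annual contribution compounded yearly."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

for rate in (0.12, 0.07, 0.04):
    fv = future_value(10_000, rate, 30)
    print(f"{rate:.0%} assumed return -> ${fv:,.0f} after 30 years")
```

Dropping the assumption from 12% to 7% cuts the projected nest egg by well over half; the planner who "entered 12%" didn't make a small error, they made a life-altering one.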
Always scrutinize the data fed into the computer to make sure it has integrity, all the way back to how it was collected, and watch for bias or flaws in the experiments or the capture process. Many promising discoveries have been disproven because of fundamental flaws in the underlying data.
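The kind of checking urged above can be partly automated. Here is a minimal sketch of pre-model sanity checks; the field names, the plausible-range limits, and the sample records are all hypothetical examples, not a standard.

```python
# A minimal sketch of data-integrity checks run before anything reaches a model.
# Field names ("value", "source") and thresholds are hypothetical.

def check_readings(readings: list) -> list:
    """Return a list of integrity problems found in raw input records."""
    problems = []
    for i, r in enumerate(readings):
        if r.get("value") is None:
            problems.append(f"record {i}: missing value")
        elif not (-50.0 <= r["value"] <= 60.0):  # plausible range, e.g. air temp in C
            problems.append(f"record {i}: value {r['value']} out of plausible range")
        if r.get("source") is None:
            problems.append(f"record {i}: no provenance (source) recorded")
    return problems

data = [
    {"value": 21.5, "source": "station-a"},
    {"value": 999.0, "source": "station-b"},  # a sensor glitch slips in
    {"value": None, "source": None},
]
for problem in check_readings(data):
    print(problem)
```

No check like this can catch a biased experiment, but it does catch the garbage that would otherwise flow silently into the output.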
Check the assumptions used in the models. Many are unreasonable.
Anybody who says “trust the science” doesn’t understand how science works.
Even the best computers lack common sense, and they are entirely dependent on the filtered information they take in. At least we mere humans can ask questions about our information feed.