When the Tool Evaluates Its Own Critics
Here is a short essay on the problems with AI today, followed by a critical-thinking exercise: spotting its logical fallacies. I found the essay somewhere on the internet (I don't remember where) and then asked an AI to look for logical fallacies in the text.
The Original Comment#
“If you did not realize it, almost all information out there is an advertisement where people are trying to sell you something, even if it is the advertisement of a viewpoint or an opinion. It is very difficult to get unbiased data or information that allows each person to decide how to use or interpret that information. LLMs have the potential to change that and could simply provide all logical viewpoints about the topic. I tend to disagree with most information until I prove the concept to be logical myself. I actually prefer to go to the library and read textbooks and research articles written before around 1975. Many articles and books after that time seem to be people just rewriting the information in a way that is less true and less insightful, and they lack details about how the concepts were originally proven. As AI regulations increase, uncensored models will likely be harder to find or be banned altogether, so I recommend downloading them and even storing an extra copy in a Faraday cage for a rainy day. In the video description I will provide links to low-cost external drives and Faraday boxes. A non-aligned offline model like this should be used for privacy reasons, or when the latest more powerful LLM results are censored, or when it feels like there is bias in the results. These models really are a game changer for two reasons. First, users now have access to so much information that would otherwise be difficult to access; some countries and even tech companies limit and censor what can be accessed on the internet. Now, if you can access these models, you can access information that was out of reach before. It is important to realize that what information the model is trained on ultimately determines the output of the model. It could be a garbage-in, garbage-out situation, so just because you have a non-aligned LLM does not mean that it was trained on all scientific, political, and religious viewpoints.
This means there is very likely some inherent bias based on which sources Meta, OpenAI, Google, Anthropic, xAI, or DeepSeek used to train the LLM. However, with so much text being used, chances are the resulting viewpoint is useful to interact with. It is important to realize that an LLM in no way provides an absolute truth and frequently produces incorrect results. That being said, LLMs are often the best way to quickly find information and summarize it, and they are a really helpful way to turn thoughts into usable code. The second reason these models are a game changer is that running the model offline means big tech and governments cannot see what you are searching for or thinking about. Anything you can think you can say, and anything you say near a device that is connected to the internet, or type into a computer that is connected to the internet, can be accessed by tech companies and governments. It really seems more and more that it is impossible to have any privacy, even when it comes to your own thoughts. Offline LLMs are a fantastic way to bring privacy back while still allowing users to access important information. It is important to use offline models when working with proprietary, classified, or personal information.”
AI Analysis: Logical Fallacies Detected#
The text above contains several logical fallacies and rhetorical strategies that undermine its credibility and clarity. Here's an analysis of the key issues:
1. Overgeneralization#
Example: “Almost all information out there is an advertisement where people are trying to sell you something even if it is the advertisement of a viewpoint or an opinion.”
Issue: This is an overgeneralization. While it’s true that much information is influenced by biases or agendas, claiming that almost all information is an advertisement is an exaggeration. It ignores the existence of neutral, factual, or purely educational content.
Impact: This undermines the credibility of the argument by making it seem overly cynical and dismissive of legitimate sources.
2. Appeal to Tradition#
Example: “I actually prefer to go to the library and read textbooks and research articles written before around 1975. Many articles and books after that time seem to be people just rewriting the information in a way that is less true and less insightful.”
Issue: This is an appeal to tradition, suggesting that older sources are inherently better without providing evidence for this claim. It dismisses modern scholarship without considering advancements in research methods, peer review, or new discoveries.
Impact: This fallacy can lead to outdated or incomplete understanding of topics.
3. Slippery Slope#
Example: “As AI regulations increase, uncensored models will likely be harder to find or be banned altogether…”
Issue: This assumes a worst-case scenario without evidence that such a ban is inevitable. It also conflates regulation with outright prohibition, which are not the same thing.
Impact: This creates unnecessary fear and may lead to irrational decision-making, such as hoarding uncensored models.
4. False Dilemma#
Example: “A non-aligned offline model like this should be used for privacy reasons or when the latest more powerful LLM results are censored or when it feels like there is bias to the results.”
Issue: This presents offline models as the only solution to privacy and censorship issues, ignoring other potential solutions like improved transparency, better regulation, or alternative technologies.
Impact: This limits the discussion and may lead to oversimplified conclusions.
5. Appeal to Fear#
Example: “It really seems more and more that it is impossible to have any privacy even when it comes to your own thoughts.”
Issue: This statement exaggerates the threat to privacy, creating a sense of panic. While privacy concerns are valid, claiming that it’s impossible to have any privacy is hyperbolic.
Impact: This can manipulate the audience into accepting the author’s recommendations without critical evaluation.
6. Cherry-Picking#
Example: “However with so much text being used chances are the resulting viewpoint is useful to interact with…”
Issue: This focuses on the potential usefulness of LLMs while downplaying the significant risks of bias or incorrect information that the author previously acknowledged.
Impact: This selective emphasis can mislead the audience into overlooking the limitations of LLMs.
7. Appeal to Novelty#
Example: “These models really are a game changer…”
Issue: This assumes that LLMs are inherently beneficial simply because they are new or innovative, without fully addressing their limitations or potential downsides.
Impact: This can lead to uncritical acceptance of technology without considering its broader implications.
8. Circular Reasoning#
Example: “It is important to realize that what information the model is trained on ultimately determines the output of the model.”
Issue: This statement is tautological: it restates the same idea without adding new insight or addressing how to mitigate the problem of biased training data.
Impact: This weakens the argument by failing to provide meaningful solutions or analysis.
9. Straw Man#
Example: “Anything you can think you can say and anything you say near a device that is connected to the internet or type into a computer that is connected to the internet can be accessed by tech companies and governments.”
Issue: This exaggerates the capabilities of tech companies and governments to access private thoughts, creating a straw man argument. While surveillance is a concern, the claim that anything you think can be accessed is unrealistic.
Impact: This distracts from legitimate privacy concerns by making hyperbolic claims.
10. Self-Contradiction#
Example: The author initially criticizes modern information as biased and less insightful but later acknowledges that LLMs, which are trained on modern data, can still provide useful viewpoints.
Issue: This inconsistency undermines the author’s overall argument.
Impact: It creates confusion and reduces the persuasiveness of the argument.
Conclusion#
The text contains several logical fallacies: overgeneralization, appeal to tradition, slippery slope, false dilemma, appeal to fear, cherry-picking, appeal to novelty, circular reasoning, straw man, and self-contradiction. These weaken the argument by introducing exaggerated claims, inconsistencies, and manipulative rhetoric. To strengthen it, the author should provide evidence, avoid hyperbole, and address counterarguments more fairly.




