Security Researchers and Communities

In many communities, privacy and security are treated as synonyms. They are not. While sometimes complementary, the terms have distinct meanings, and each concept applies differently depending on context. Because of this, online discourse on the subject often turns chaotic. The following is an analysis of common arguments that become a source of conflict when people interact with security researchers.

Claims of Hypocrisy

> You call x software insecure, yet you use it? You're a hypocrite.

The quoted claim does not describe hypocrisy; that term is frequently misunderstood and misapplied. The claim ignores extremely common variables such as preference, threat model, and purpose. If anything, a security researcher using a piece of software is a reason for them to criticize it: improvement is the goal. Furthermore, a premise and a conclusion sharing a common element does not prove the conclusion true. In privacy-oriented communities, these accusations usually concern a person's choice of OS, such as using Linux regardless of whether it's secure.

Presentation Means Proof

> This person's information is wrong because it's presented poorly.

Omission or poor expression does not prove a conclusion false. This is comparable to the fallacy fallacy, in which a person presumes a claim is wrong because the arguments supporting it are poor. A claim that has not been shown to be true is not thereby false.

Disagreement is Bias

> The author is biased, don't believe them.

This is an unsubstantiated claim. Even when some reason is given beyond what is apparent to the person making the accusation, that reason must actually prove the conclusion. In discussions about security, someone recommending a proprietary product over a FOSS one does not make them biased; the accusation attempts to refute one claim with another, and the latter is a non sequitur. Actual bias would look like a preference for x software rendering someone irrational when judging criticism of it or praise of alternatives. Had security researchers spent their time defending software from disparagement instead of auditing it, finding vulnerabilities, and criticizing it, software security would have made little progress. Criticism is vital to the progression toward more secure software, regardless of whether source code is disclosed.

Hack Me Then

> The software I use is insecure? Hack me then.

A security researcher failing to hack you is not proof that their claims are false; the challenge does not constitute a refutation, only a weak implication. The claim should instead be appraised by its sources, premises, and logical structure.

What I Like is Flawed? You're Lying...

> This researcher says x is insecure, so statements about y are false.

This argument presupposes that the researcher's claims about x are false, using that presupposition as a hidden premise. The argument is neither valid nor sound. Even if the main premise were true, the conclusion does not follow from it; and even supposing it did, a sound argument also requires its premises to be true. Where no evidence is provided against the claims about x, the conclusion rests entirely on the presupposition.

That's Just Your Opinion

> That's just your opinion...

This is most often seen when the researcher's opposition has no refutations left. If a conclusion is directly supported by proper research or facts, it is not merely an opinion. Statements like the quote above are used to discredit people's claims and avoid conceding on some or all fronts.